rclone-1.53.3/.gitattributes

# Ignore generated files in GitHub language statistics and diffs
/MANUAL.* linguist-generated=true
/rclone.1 linguist-generated=true

# Don't fiddle with the line endings of test data
**/testdata/** -text
**/test/** -text

rclone-1.53.3/.github/FUNDING.yml

github: [ncw]
patreon: njcw
liberapay: ncw
custom: ["https://rclone.org/donate/"]

rclone-1.53.3/.github/ISSUE_TEMPLATE.md

#### Output of `rclone version`

#### Describe the issue

rclone-1.53.3/.github/ISSUE_TEMPLATE/Bug.md

---
name: Bug report
about: Report a problem with rclone
---

#### What is the problem you are having with rclone?

#### What is your rclone version (output from `rclone version`)

#### Which OS you are using and how many bits (eg Windows 7, 64 bit)

#### Which cloud storage system are you using? (eg Google Drive)

#### The command you were trying to run (eg `rclone copy /tmp remote:tmp`)

#### A log from the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)

rclone-1.53.3/.github/ISSUE_TEMPLATE/Feature.md

---
name: Feature request
about: Suggest a new feature or enhancement for rclone
---

#### What is your current rclone version (output from `rclone version`)?

#### What problem are you trying to solve?

#### How do you think rclone should be changed to solve that?

rclone-1.53.3/.github/ISSUE_TEMPLATE/config.yml

blank_issues_enabled: false
contact_links:
  - name: Rclone Forum Community Support
    url: https://forum.rclone.org/
    about: Please ask and answer questions here.

rclone-1.53.3/.github/PULL_REQUEST_TEMPLATE.md

#### What is the purpose of this change?

#### Was the change discussed in an issue or in the forum before?

#### Checklist

- [ ] I have read the [contribution guidelines](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#submitting-a-pull-request).
- [ ] I have added tests for all changes in this PR if appropriate.
- [ ] I have added documentation for the changes if appropriate.
- [ ] All commit messages are in [house style](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#commit-messages).
- [ ] I'm done, this Pull Request is ready for review :-)

rclone-1.53.3/.github/workflows/build.yml

---
# Github Actions build for rclone
# -*- compile-command: "yamllint -f parsable build.yml" -*-

name: build

# Trigger the workflow on push or pull request
on:
  push:
    branches:
      - '*'
    tags:
      - '*'
  pull_request:

jobs:
  build:
    timeout-minutes: 60
    strategy:
      fail-fast: false
      matrix:
        job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'go1.11', 'go1.12', 'go1.13', 'go1.14']

        include:
          - job_name: linux
            os: ubuntu-latest
            go: '1.15.x'
            gotags: cmount
            build_flags: '-include "^linux/"'
            check: true
            quicktest: true
            racequicktest: true
            deploy: true

          - job_name: mac
            os: macOS-latest
            go: '1.15.x'
            gotags: 'cmount'
            build_flags: '-include "^darwin/amd64" -cgo'
            quicktest: true
            racequicktest: true
            deploy: true

          - job_name: windows_amd64
            os: windows-latest
            go: '1.15.x'
            gotags: cmount
            build_flags: '-include "^windows/amd64" -cgo'
            quicktest: true
            racequicktest: true
            deploy: true

          - job_name: windows_386
            os: windows-latest
            go: '1.15.x'
            gotags: cmount
            goarch: '386'
            cgo: '1'
            build_flags: '-include "^windows/386" -cgo'
            quicktest: true
            deploy: true

          - job_name: other_os
            os: ubuntu-latest
            go: '1.15.x'
            build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"'
            compile_all: true
            deploy: true

          - job_name: go1.11
            os: ubuntu-latest
            go: '1.11.x'
            quicktest: true

          - job_name: go1.12
            os: ubuntu-latest
            go: '1.12.x'
            quicktest: true

          - job_name: go1.13
            os: ubuntu-latest
            go: '1.13.x'
            quicktest: true

          - job_name: go1.14
            os: ubuntu-latest
            go: '1.14.x'
            quicktest: true
            racequicktest: true

    name: ${{ matrix.job_name }}

    runs-on: ${{ matrix.os }}

    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Install Go
        uses: actions/setup-go@v2
        with:
          stable: 'false'
          go-version: ${{ matrix.go }}

      - name: Set environment variables
        shell: bash
        run: |
          echo 'GOTAGS=${{ matrix.gotags }}' >> $GITHUB_ENV
          echo 'BUILD_FLAGS=${{ matrix.build_flags }}' >> $GITHUB_ENV
          if [[ "${{ matrix.goarch }}" != "" ]]; then echo 'GOARCH=${{ matrix.goarch }}' >> $GITHUB_ENV ; fi
          if [[ "${{ matrix.cgo }}" != "" ]]; then echo 'CGO_ENABLED=${{ matrix.cgo }}' >> $GITHUB_ENV ; fi

      - name: Install Libraries on Linux
        shell: bash
        run: |
          sudo modprobe fuse
          sudo chmod 666 /dev/fuse
          sudo chown root:$USER /etc/fuse.conf
          sudo apt-get install fuse libfuse-dev rpm pkg-config
        if: matrix.os == 'ubuntu-latest'

      - name: Install Libraries on macOS
        shell: bash
        run: |
          brew untap local/homebrew-openssl # workaround for https://github.com/actions/virtual-environments/issues/1811
          brew untap local/homebrew-python2 # workaround for https://github.com/actions/virtual-environments/issues/1811
          brew update
          brew cask install osxfuse
        if: matrix.os == 'macOS-latest'

      - name: Install Libraries on Windows
        shell: powershell
        run: |
          $ProgressPreference = 'SilentlyContinue'
          choco install -y winfsp zip
          echo "CPATH=C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
          if ($env:GOARCH -eq "386") {
            choco install -y mingw --forcex86 --force
            echo "C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw32\\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
          }
          # Copy mingw32-make.exe to make.exe so the same command line
          # can be used on Windows as on macOS and Linux
          $path = (get-command mingw32-make.exe).Path
Copy-Item -Path $path -Destination (Join-Path (Split-Path -Path $path) 'make.exe') if: matrix.os == 'windows-latest' - name: Print Go version and environment shell: bash run: | printf "Using go at: $(which go)\n" printf "Go version: $(go version)\n" printf "\n\nGo environment:\n\n" go env printf "\n\nRclone environment:\n\n" make vars printf "\n\nSystem environment:\n\n" env - name: Go module cache uses: actions/cache@v2 with: path: ~/go/pkg/mod key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }} restore-keys: | ${{ runner.os }}-go- - name: Build rclone shell: bash run: | make - name: Run tests shell: bash run: | make quicktest if: matrix.quicktest - name: Race test shell: bash run: | make racequicktest if: matrix.racequicktest - name: Code quality test shell: bash run: | make build_dep make check if: matrix.check - name: Compile all architectures test shell: bash run: | make make compile_all if: matrix.compile_all - name: Deploy built binaries shell: bash run: | if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep_linux ; fi if [[ "${{ matrix.os }}" == "windows-latest" ]]; then make release_dep_windows ; fi make ci_beta env: RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }} # working-directory: '$(modulePath)' # Deploy binaries if enabled in config && not a PR && not a fork if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone' xgo: timeout-minutes: 60 name: "xgo cross compile" runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v1 with: # Checkout into a fixed path to avoid import path problems on go < 1.11 path: ./src/github.com/rclone/rclone - name: Set environment variables shell: bash run: | echo 'GOPATH=${{ runner.workspace }}' >> $GITHUB_ENV echo '${{ runner.workspace }}/bin' >> $GITHUB_PATH - name: Cross-compile rclone run: | docker pull billziss/xgo-cgofuse GO111MODULE=off go get -v github.com/karalabe/xgo # don't add to go.mod # xgo \ # -image=billziss/xgo-cgofuse \ # -targets=darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \ # -tags cmount \ # -dest build \ # . xgo \ -image=billziss/xgo-cgofuse \ -targets=android/*,ios/* \ -dest build \ . 
      - name: Build rclone
        shell: bash
        run: |
          make

      - name: Upload artifacts
        run: |
          make ci_upload
        env:
          RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
        # Upload artifacts if not a PR && not a fork
        if: github.head_ref == '' && github.repository == 'rclone/rclone'

rclone-1.53.3/.github/workflows/build_publish_docker_image.yml

name: Docker beta build

on:
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    name: Build image job
    steps:
      - name: Checkout master
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Build and publish image
        uses: ilteoood/docker_buildx@439099796bfc03dd9cedeb72a0c7cb92be5cc92c
        with:
          tag: beta
          imageName: rclone/rclone
          platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7
          publish: true
          dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
          dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}

rclone-1.53.3/.github/workflows/build_publish_release_docker_image.yml

name: Docker release build

on:
  release:
    types: [published]

jobs:
  build:
    runs-on: ubuntu-latest
    name: Build image job
    steps:
      - name: Checkout master
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Get actual patch version
        id: actual_patch_version
        run: echo ::set-output name=ACTUAL_PATCH_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g')
      - name: Get actual minor version
        id: actual_minor_version
        run: echo ::set-output name=ACTUAL_MINOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1,2)
      - name: Get actual major version
        id: actual_major_version
        run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
      - name: Build and publish image
        uses: ilteoood/docker_buildx@439099796bfc03dd9cedeb72a0c7cb92be5cc92c
        with:
          tag: latest,${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }},${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }},${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
          imageName: rclone/rclone
          platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7
          publish: true
          dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
          dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}

rclone-1.53.3/.gitignore

*~
_junk/
rclone
build
docs/public
rclone.iml
.idea
.history
*.test
*.log
*.iml
fuzz-build.zip

rclone-1.53.3/.golangci.yml

# golangci-lint configuration options

linters:
  enable:
    - deadcode
    - errcheck
    - goimports
    - golint
    - ineffassign
    - structcheck
    - varcheck
    - govet
    - unconvert
    #- prealloc
    #- maligned
  disable-all: true

issues:
  # Enable some lints excluded by default
  exclude-use-default: false

  # Maximum issues count per one linter. Set to 0 to disable. Default is 50.
  max-per-linter: 0

  # Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
  max-same-issues: 0

rclone-1.53.3/CONTRIBUTING.md

# Contributing to rclone #

This is a short guide on how to contribute things to rclone.

## Reporting a bug ##

If you've just got a question or aren't sure if you've found a bug
then please use the [rclone forum](https://forum.rclone.org/) instead
of filing an issue.

When filing an issue, please include the following information if
possible as well as a description of the problem.
Make sure you test with the [latest beta of rclone](https://beta.rclone.org/):

  * Rclone version (eg output from `rclone -V`)
  * Which OS you are using and how many bits (eg Windows 7, 64 bit)
  * The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
  * A log of the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)
    * if the log contains secrets then edit the file with a text editor first to obscure them

## Submitting a pull request ##

If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.

If it is a big feature then make an issue first so it can be discussed.

You'll need a Go environment set up with GOPATH set. See [the Go
getting started docs](https://golang.org/doc/install) for more info.

First in your web browser press the fork button on [rclone's GitHub
page](https://github.com/rclone/rclone).

Now in your terminal

    go get -u github.com/rclone/rclone
    cd $GOPATH/src/github.com/rclone/rclone
    git remote rename origin upstream
    git remote add origin git@github.com:YOURUSER/rclone.git

Make a branch to add your new feature

    git checkout -b my-new-feature

And get hacking.

When ready - run the unit tests for the code you changed

    go test -v

Note that you may need to make a test remote, eg `TestSwift` for some
of the unit tests.

Note the top level Makefile targets

  * make check
  * make test

Both of these will be run by the CI when you make a pull request but
you can do this yourself locally too. These require some extra go
packages which you can install with

  * make build_dep

Make sure you

  * Add [documentation](#writing-documentation) for a new feature.
  * Follow the [commit message guidelines](#commit-messages).
  * Add [unit tests](#testing) for a new feature
  * squash commits down to one per feature
  * rebase to master with `git rebase master`

When you are done with that

    git push origin my-new-feature

Go to the GitHub website and click [Create pull
request](https://help.github.com/articles/creating-a-pull-request/).

Your patch will get reviewed and you might get asked to fix some stuff.

If so, then make the changes in the same branch, squash the commits (make
multiple commits one commit) by running:
```
git log # See how many commits you want to squash
git reset --soft HEAD~2 # This squashes the 2 latest commits together.
git status # Check what will happen, if you made a mistake resetting, you can run git reset 'HEAD@{1}' to undo.
git commit # Add a new commit message.
git push --force # Push the squashed commit to your GitHub repo.
# For more, see Stack Overflow, Git docs, or generally Duck around the web. jtagcat also recommends wizardzines.com
```

## CI for your fork ##

rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions)
to build and test the project, which should be automatically available for
your fork too from the `Actions` tab in your repository.

## Testing ##

rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.

    go test -v ./...

rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
storage systems by mocking all their interfaces, rclone unit tests can
run against any of the backends.  This is done by making specially
named remotes in the default config file.

If you wanted to test changes in the `drive` backend, then you would
need to make a remote called `TestDrive`.
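
One convenient way to make such a remote is with the config command, for
example (a sketch - `rclone config create` makes the entry
non-interactively, but most backends, drive included, will still need
credentials adding before the tests pass):

    rclone config create TestDrive drive
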
You can then run the unit tests in the drive directory. These tests are skipped if `TestDrive:` isn't defined. cd backend/drive go test -v You can then run the integration tests which tests all of rclone's operations. Normally these get run against the local filing system, but they can be run against any of the remotes. cd fs/sync go test -v -remote TestDrive: go test -v -remote TestDrive: -fast-list cd fs/operations go test -v -remote TestDrive: If you want to use the integration test framework to run these tests all together with an HTML report and test retries then from the project root: go install github.com/rclone/rclone/fstest/test_all test_all -backend drive If you want to run all the integration tests against all the remotes, then change into the project root and run make test This command is run daily on the integration test server. You can find the results at https://pub.rclone.org/integration-tests/ ## Code Organisation ## Rclone code is organised into a small number of top level directories with modules beneath. * backend - the rclone backends for interfacing to cloud providers - * all - import this to load all the cloud providers * ...providers * bin - scripts for use while building or maintaining rclone * cmd - the rclone commands * all - import this to load all the commands * ...commands * docs - the documentation and website * content - adjust these docs only - everything else is autogenerated * command - these are auto generated - edit the corresponding .go file * fs - main rclone definitions - minimal amount of code * accounting - bandwidth limiting and statistics * asyncreader - an io.Reader which reads ahead * config - manage the config file and flags * driveletter - detect if a name is a drive letter * filter - implements include/exclude filtering * fserrors - rclone specific error handling * fshttp - http handling for rclone * fspath - path handling for rclone * hash - defines rclone's hash types and functions * list - list a remote * log - logging facilities * march - iterates directories in lock step * object - in memory Fs objects * operations - primitives for sync, eg Copy, Move * sync - sync directories * walk - walk a directory * fstest - provides integration test framework * fstests - integration tests for the backends * mockdir - mocks an fs.Directory * mockobject - mocks an fs.Object * test_all - Runs integration tests for everything * graphics - the images used in the website etc * lib - libraries used by the backend * atexit - register functions to run when rclone exits * dircache - directory ID to name caching * oauthutil - helpers for using oauth * pacer - retries with backoff and paces operations * readers - a selection of useful io.Readers * rest - a thin abstraction over net/http for REST * vfs - Virtual FileSystem layer for implementing rclone mount and similar ## Writing Documentation ## If you are adding a new feature then please update the documentation. If you add a new general flag (not for a backend), then document it in `docs/content/docs.md` - the flags there are supposed to be in alphabetical order. If you add a new backend option/flag, then it should be documented in the source file in the `Help:` field. The first line of this is used for the flag help, the remainder is shown to the user in `rclone config` and is added to the docs with `make backenddocs`. The only documentation you need to edit are the `docs/content/*.md` files. The MANUAL.*, rclone.1, web site etc are all auto generated from those during the release process. 
See the `make doc` and `make website` targets in the Makefile if you
are interested in how. You don't need to run these when adding a
feature.

Documentation for rclone sub commands is with their code, eg
`cmd/ls/ls.go`.

Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy.

## Making a release ##

There are separate instructions for making a release in the RELEASE.md
file.

## Commit messages ##

Please make the first line of your commit message a summary of the
change that a user (not a developer) of rclone would like to read, and
prefix it with the directory of the change followed by a colon.  The
changelog gets made by looking at just these first lines so make it
good!

If you have more to say about the commit, then enter a blank line and
carry on the description.  Remember to say why the change was needed -
the commit itself shows what was changed.

Writing more is better than less.  Comparing the behaviour before the
change to that after the change is very useful.  Imagine you are
writing to yourself in 12 months time when you've forgotten everything
about what you just did and you need to get up to speed quickly.

If the change fixes an issue then write `Fixes #1234` in the commit
message.  This can be on the subject line if it will fit.  If you
don't want to close the associated issue just put `#1234` and the
change will get linked into the issue.

Here is an example of a short commit message:

```
drive: add team drive support - fixes #885
```

And here is an example of a longer one:

```
mount: fix hang on errored upload

In certain circumstances if an upload failed then the mount could hang
indefinitely. This was fixed by closing the read pipe after the Put
completed.  This will cause the write side to return a pipe closed
error fixing the hang.

Fixes #1498
```

## Adding a dependency ##

rclone uses the [go modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
support in go1.11 and later to manage its dependencies.

rclone can be built with modules outside of the GOPATH.

To add a dependency `github.com/ncw/new_dependency` see the
instructions below.  These will fetch the dependency and add it to
`go.mod` and `go.sum`.

    GO111MODULE=on go get github.com/ncw/new_dependency

You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.

Please check in the changes generated by `go mod` including `go.mod`
and `go.sum` in the same commit as your other changes.

## Updating a dependency ##

If you need to update a dependency then run

    GO111MODULE=on go get -u github.com/pkg/errors

Check in a single commit as above.

## Updating all the dependencies ##

In order to update all the dependencies then run `make update`.  This
just uses the go modules to update all the modules to their latest
stable release. Check in the changes in a single commit as above.

This should be done early in the release cycle to pick up new versions
of packages in time for them to get some testing.

## Updating a backend ##

If you update a backend then please run the unit tests and the
integration tests for that backend.

Assuming the backend is called `remote`, create a config entry called
`TestRemote` for the tests to use.

Now `cd remote` and run `go test -v` to run the unit tests.

Then `cd fs` and run `go test -v -remote TestRemote:` to run the
integration tests.
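
Putting that together, the whole sequence from the project root looks
something like this (a sketch - it assumes the backend code lives in
`backend/remote` and that the config entry is called `TestRemote`):

    cd backend/remote && go test -v
    cd ../../fs/sync && go test -v -remote TestRemote:
    cd ../operations && go test -v -remote TestRemote:
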
The next section goes into more detail about the tests.

## Writing a new backend ##

Choose a name.  The docs here will use `remote` as an example.

Note that in rclone terminology a file system backend is called a
remote or an fs.

Research

  * Look at the interfaces defined in `fs/fs.go`
  * Study one or more of the existing remotes

Getting going

  * Create `backend/remote/remote.go` (copy this from a similar remote)
    * box is a good one to start from if you have a directory based remote
    * b2 is a good one to start from if you have a bucket based remote
  * Add your remote to the imports in `backend/all/all.go`
  * HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead.
  * Try to implement as many optional methods as possible as it makes the remote more usable.
  * Use lib/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed
    * `rclone purge -v TestRemote:rclone-info`
    * `rclone info --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
    * `go run cmd/info/internal/build_csv/main.go -o remote.csv remote.json`
    * open `remote.csv` in a spreadsheet and examine

Unit tests

  * Create a config entry called `TestRemote` for the unit tests to use
  * Create a `backend/remote/remote_test.go` - copy and adjust your example remote
  * Make sure all tests pass with `go test -v`

Integration tests

  * Add your backend to `fstest/test_all/config.yaml`
  * Once you've done that then you can use the integration test framework from the project root:
    * go install ./...
    * test_all -backends remote

Or if you want to run the integration tests manually:

  * Make sure integration tests pass with
    * `cd fs/operations`
    * `go test -v -remote TestRemote:`
    * `cd fs/sync`
    * `go test -v -remote TestRemote:`
  * If your remote defines `ListR` check with this also
    * `go test -v -remote TestRemote: -fast-list`

See the [testing](#testing) section for more information on
integration tests.

Add your fs to the docs - you'll need to pick an icon for it from
[fontawesome](http://fontawesome.io/icons/).  Keep lists of remotes in
alphabetical order of full name of remote (eg `drive` is ordered as
`Google Drive`) but with the local file system last.

  * `README.md` - main GitHub page
  * `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
    * make sure this has the `autogenerated options` comments in (see your reference backend docs)
    * update them with `make backenddocs` - revert any changes in other backends
  * `docs/content/overview.md` - overview docs
  * `docs/content/docs.md` - list of remotes in config section
  * `docs/content/_index.md` - front page of rclone.org
  * `docs/layouts/chrome/navbar.html` - add it to the website navigation
  * `bin/make_manual.py` - add the page to the `docs` constant

Once you've written the docs, run `make serve` and check they look OK
in the web browser and the links (internal and external) all work.

## Writing a plugin ##

New features (backends, commands) can also be added "out-of-tree", through Go plugins.
Changes will be kept in a dynamically loaded file instead of being compiled into the main binary.
This is useful if you can't merge your changes upstream or don't want to maintain a fork of rclone.

Usage

 - Naming
   - Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
   - `KIND` should be one of `backend`, `command` or `bundle`.
   - Example: A plugin with backend support for PiFS would be called
     `librcloneplugin_backend_pifs.so`.
 - Loading
   - Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
   - Supported on rclone v1.50 or greater.
   - All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
   - If this variable doesn't exist, plugin support is disabled.
   - Plugins must be compiled against the exact version of rclone to work.
     (The rclone used during building the plugin must be the same as the source of rclone)

Building

To turn your existing additions into a Go plugin, move them to an
external repository and change the top-level package name to `main`.

Check `rclone --version` and make sure that the plugin's rclone
dependency and host Go version match.

Then, run `go build -buildmode=plugin -o PLUGIN_NAME.so .` to build
the plugin.

[Go reference](https://godoc.org/github.com/rclone/rclone/lib/plugin)

[Minimal example](https://gist.github.com/terorie/21b517ee347828e899e1913efc1d684f)

rclone-1.53.3/COPYING

Copyright (C) 2012 by Nick Craig-Wood http://www.craig-wood.com/nick/

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

rclone-1.53.3/Dockerfile

FROM golang AS builder
COPY . /go/src/github.com/rclone/rclone/
WORKDIR /go/src/github.com/rclone/rclone/

RUN \
  CGO_ENABLED=0 \
  make

RUN ./rclone version

# Begin final image
FROM alpine:latest

RUN apk --no-cache add ca-certificates fuse tzdata && \
    echo "user_allow_other" >> /etc/fuse.conf

COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/

ENTRYPOINT [ "rclone" ]

WORKDIR /data

ENV XDG_CONFIG_HOME=/config

rclone-1.53.3/MAINTAINERS.md

# Maintainers guide for rclone #

Current active maintainers of rclone are:

| Name             | GitHub ID         | Specific Responsibilities    |
| :--------------- | :---------------- | :--------------------------- |
| Nick Craig-Wood  | @ncw              | overall project health       |
| Stefan Breunig   | @breunigs         |                              |
| Ishuah Kariuki   | @ishuah           |                              |
| Remus Bunduc     | @remusb           | cache backend                |
| Fabian Möller    | @B4dM4n           |                              |
| Alex Chen        | @Cnly             | onedrive backend             |
| Sandeep Ummadi   | @sandeepkru       | azureblob backend            |
| Sebastian Bünger | @buengese         | jottacloud & yandex backends |
| Ivan Andreev     | @ivandeex         | chunker & mailru backends    |
| Max Sum          | @Max-Sum          | union backend                |
| Fred             | @creativeprojects | seafile backend              |
| Caleb Case       | @calebcase        | tardigrade backend           |

**This is a work in progress Draft**

This is a guide for how to be an rclone maintainer.  This is mostly a
writeup of what I (@ncw) attempt to do.

## Triaging Tickets ##

When a ticket comes in it should be triaged.  This means it should be
classified by adding labels and placed into a milestone.  Quite a lot
of tickets need a bit of back and forth to determine whether it is a
valid ticket so tickets may remain without labels or milestone for a
while.

Rclone uses the labels like this:

  * `bug` - a definite verified bug
  * `can't reproduce` - a problem which we can't reproduce
  * `doc fix` - a bug in the documentation - if users need help understanding the docs add this label
  * `duplicate` - normally close these and ask the user to subscribe to the original
  * `enhancement: new remote` - a new rclone backend
  * `enhancement` - a new feature
  * `FUSE` - to do with `rclone mount` command
  * `good first issue` - mark these if you find a small self contained issue - these get shown to new visitors to the project
  * `help wanted` - mark these if you find a self contained issue - these get shown to new visitors to the project
  * `IMPORTANT` - note to maintainers not to forget to fix this for the release
  * `maintenance` - internal enhancement, code re-organisation etc
  * `Needs Go 1.XX` - waiting for that version of Go to be released
  * `question` - not a `bug` or `enhancement` - direct to the forum for next time
  * `Remote: XXX` - which rclone backend this affects
  * `thinking` - not decided on the course of action yet

If it turns out to be a bug or an enhancement it should be tagged as
such, with the appropriate other tags.  Don't forget the "good first
issue" tag to give new contributors something easy to do to get going.

When a ticket is tagged it should be added to a milestone, either the
next release, the one after, Soon or Help Wanted.  Bugs can be added
to the "Known Bugs" milestone if they aren't planned to be fixed or
need to wait for something (eg the next go release).
The milestones have these meanings:

  * v1.XX - stuff we would like to fit into this release
  * v1.XX+1 - stuff we are leaving until the next release
  * Soon - stuff we think is a good idea - waiting to be scheduled to a release
  * Help wanted - blue sky stuff that might get moved up, or someone could help with
  * Known bugs - bugs waiting on external factors or we aren't going to fix for the moment

Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile)
are good candidates for ones that have slipped between the gaps and
need following up.

## Closing Tickets ##

Close tickets as soon as you can - make sure they are tagged with a
release.  Post a link to a beta in the ticket with the fix in, asking
for feedback.

## Pull requests ##

Try to process pull requests promptly!

Merging pull requests on GitHub itself works quite well nowadays so
you can squash and rebase or rebase pull requests. rclone doesn't use
merge commits.  Use the squash and rebase option if you need to edit
the commit message.

After merging the commit, in your local master branch, do `git pull`
then run `bin/update-authors.py` to update the authors file then
`git push`.

Sometimes pull requests need to be left open for a while - this is
especially true of contributions of new backends which take a long
time to get right.

## Merges ##

If you are merging a branch locally then do
`git merge --ff-only branch-name` to avoid a merge commit.  You'll
need to rebase the branch if it doesn't merge cleanly.

## Release cycle ##

Rclone aims for a 6-8 week release cycle.  Sometimes release cycles
take longer if there is something big to merge that didn't stabilize
properly or for personal reasons.

High impact regressions should be fixed before the next release.

Near the start of the release cycle the dependencies should be updated
with `make update` to give time for bugs to surface.

Towards the end of the release cycle try not to merge anything too big
so let things settle down.

Follow the instructions in RELEASE.md for making the release.  Note
that the testing part is the most time consuming often needing several
rounds of test and fix depending on exactly how many new features
rclone has gained.

## Mailing list ##

There is now an invite only mailing list for rclone developers
`rclone-dev` on google groups.

## TODO ##

I should probably make a dev@rclone.org to register with cloud providers.

rclone-1.53.3/MANUAL.html

rclone(1) User Manual

Nick Craig-Wood

Nov 19, 2020

Rclone syncs your files to cloud storage

rclone logo

About rclone

Rclone is a command line program to manage files on cloud storage. It is a feature rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support, and --dry-run protection. It is used at the command line, in scripts or via its API.

Users call rclone "The Swiss army knife of cloud storage", and "Technology indistinguishable from magic".

Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth, intermittent connections, or subject to quota can be restarted from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk.

Virtual backends wrap local and cloud file systems to apply encryption, caching, chunking and joining.

Rclone mounts any local, cloud or virtual filesystem as a disk on Windows, macOS, linux and FreeBSD, and also serves these over SFTP, HTTP, WebDAV, FTP and DLNA.

Rclone is mature, open source software originally inspired by rsync and written in Go. The friendly support community are familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos include rclone. For the latest version downloading from rclone.org is recommended.

Rclone is widely used on Linux, Windows and Mac. Third party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API.

Rclone does the heavy lifting of communicating with cloud storage.

What can rclone do for you?

Rclone helps you:

Features

Supported providers

(There are many others, built on standard protocols such as WebDAV or S3, that work out of the box.)

Links

Install

Rclone is a Go program and comes as a single binary file.

Quickstart

See below for some expanded Linux / macOS instructions.

See the Usage section of the docs for how to use rclone, or run rclone -h.

Script installation

To install rclone on Linux/macOS/BSD systems, run:

curl https://rclone.org/install.sh | sudo bash

For beta installation, run:

curl https://rclone.org/install.sh | sudo bash -s beta

Note that this script checks the version of rclone installed first and won't re-download if not needed.

Linux installation from precompiled binary

Fetch and unpack

curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip
cd rclone-*-linux-amd64

Copy binary file

sudo cp rclone /usr/bin/
sudo chown root:root /usr/bin/rclone
sudo chmod 755 /usr/bin/rclone

Install manpage

sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb 

Run rclone config to setup. See rclone config docs for more details.

rclone config

macOS installation with brew

brew install rclone

macOS installation from precompiled binary, using curl

To avoid problems with macOS gatekeeper enforcing the binary to be signed and notarized it is enough to download with curl.

Download the latest version of rclone.

cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip

Unzip the download and cd to the extracted folder.

unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64

Move rclone to your $PATH. You will be prompted for your password.

sudo mkdir -p /usr/local/bin
sudo mv rclone /usr/local/bin/

(the mkdir command is safe to run, even if the directory already exists).

Remove the leftover files.

cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip

Run rclone config to setup. See rclone config docs for more details.

rclone config

macOS installation from precompiled binary, using a web browser

When downloading a binary with a web browser, the browser will set the macOS gatekeeper quarantine attribute. Starting from Catalina, when attempting to run rclone, a pop-up will appear saying:

“rclone” cannot be opened because the developer cannot be verified.
macOS cannot verify that this app is free from malware.

The simplest fix is to run

xattr -d com.apple.quarantine rclone

Install with docker

The rclone project maintains a docker image for rclone. These images are autobuilt by docker hub from the rclone source based on a minimal Alpine linux image.

The :latest tag will always point to the latest stable release. You can use the :beta tag to get the latest build from master. You can also use version tags, eg :1.49.1, :1.49 or :1.

$ docker pull rclone/rclone:latest
latest: Pulling from rclone/rclone
Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11
...
$ docker run --rm rclone/rclone:latest version
rclone v1.49.1
- os/arch: linux/amd64
- go version: go1.12.9

There are a few command line options to consider when starting an rclone Docker container from the rclone image.

Here are some commands tested on an Ubuntu 18.04.3 host:

# config on host at ~/.config/rclone/rclone.conf
# data on host at ~/data

# make sure the config is ok by listing the remotes
docker run --rm \
    --volume ~/.config/rclone:/config/rclone \
    --volume ~/data:/data:shared \
    --user $(id -u):$(id -g) \
    rclone/rclone \
    listremotes

# perform mount inside Docker container, expose result to host
mkdir -p ~/data/mount
docker run --rm \
    --volume ~/.config/rclone:/config/rclone \
    --volume ~/data:/data:shared \
    --user $(id -u):$(id -g) \
    --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro \
    --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined \
    rclone/rclone \
    mount dropbox:Photos /data/mount &
ls ~/data/mount
kill %1

Install from source

Make sure you have at least Go 1.11 installed. Download go if necessary. The latest release is recommended. Then

git clone https://github.com/rclone/rclone.git
cd rclone
go build
./rclone version

This will leave you a checked out version of rclone you can modify and send pull requests with. If you use make instead of go build then the rclone build will have the correct version information in it.

You can also build the latest stable rclone with:

go get github.com/rclone/rclone

or the latest version (equivalent to the beta) with

go get github.com/rclone/rclone@master

These will build the binary in $(go env GOPATH)/bin (~/go/bin/rclone by default) after downloading the source to the go module cache. Note - do not use the -u flag here. This causes go to try to update the dependencies that rclone uses and sometimes these don't work with the current version of rclone.

Installation with Ansible

This can be done with Stefan Weichinger's ansible role.

Instructions

  1. git clone https://github.com/stefangweichinger/ansible-rclone.git into your local roles-directory
  2. add the role to the hosts you want rclone installed to:
    - hosts: rclone-hosts
      roles:
          - rclone

Configure

First, you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config entry for how to find the config file and choose its location.)
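
For example (the path below is a placeholder), you can ask rclone where its config file lives, or point it at a non-default one:

rclone config file

rclone --config /path/to/rclone.conf listremotes
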

The easiest way to make the config is to run rclone with the config option:

rclone config

See the following for detailed instructions for

Usage

Rclone syncs a directory tree from one storage system to another.

Its syntax is like this

Syntax: [options] subcommand <parameters> <parameters...>

Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg "drive:myfolder" to look at "myfolder" in Google drive.

You can define as many storage paths as you like in the config file.

Please use the -i / --interactive flag while learning rclone to avoid accidental data loss.

Subcommands

rclone uses a system of subcommands. For example

rclone ls remote:path # lists a remote
rclone copy /local/path remote:path # copies /local/path to the remote
rclone sync -i /local/path remote:path # syncs /local/path to the remote

rclone config

Enter an interactive configuration session.

Synopsis

Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.

rclone config [flags]

Options

  -h, --help   help for config

See the global flags page for global options not listed here.

SEE ALSO

rclone copy

Copy files from source to dest, skipping already copied.

Synopsis

Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.

Note that it is always the contents of the directory that is synced, not the directory, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.

If dest:path doesn't exist, it is created and the source:path contents go there.

For example

rclone copy source:sourcepath dest:destpath

Let's say there are two files in sourcepath

sourcepath/one.txt
sourcepath/two.txt

This copies them to

destpath/one.txt
destpath/two.txt

Not to

destpath/sourcepath/one.txt
destpath/sourcepath/two.txt

If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.

See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly.

For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this:

rclone copy --max-age 24h --no-traverse /path/to/src remote:

Note: Use the -P/--progress flag to view real-time transfer statistics.

Note: Use the --dry-run or the --interactive/-i flag to test without copying anything.
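
For example, to preview a copy without transferring anything and then run it with live statistics (paths here are placeholders):

rclone copy --dry-run /path/to/src remote:dst

rclone copy -P /path/to/src remote:dst
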

rclone copy source:path dest:path [flags]

Options

      --create-empty-src-dirs   Create empty source dirs on destination after copy
  -h, --help                    help for copy

See the global flags page for global options not listed here.

SEE ALSO

rclone sync

Make source and dest identical, modifying destination only.

Synopsis

Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

rclone sync -i SOURCE remote:DESTINATION

Note that files in the destination won't be deleted if there were any errors at any point.

It is always the contents of the directory that is synced, not the directory, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command above if unsure.

If dest:path doesn't exist, it is created and the source:path contents go there.

Note: Use the -P/--progress flag to view real-time transfer statistics
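
For example, a cautious sync might first be previewed and then run interactively (paths here are placeholders):

rclone sync --dry-run /local/path remote:path

rclone sync -i /local/path remote:path
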

rclone sync source:path dest:path [flags]

Options

      --create-empty-src-dirs   Create empty source dirs on destination after sync
  -h, --help                    help for sync

See the global flags page for global options not listed here.

SEE ALSO

rclone move

Move files from source to dest.

Synopsis

Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server side directory move operation.

If no filters are in use and if possible this will server side move source:path into dest:path. After this source:path will no longer exist.

Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path then delete the original (if no errors on copy) in source:path.

If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.

See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

Note: Use the -P/--progress flag to view real-time transfer statistics.
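
For example, to move everything and tidy up the now-empty source directories afterwards (paths here are placeholders):

rclone move --delete-empty-src-dirs /path/to/src remote:dst
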

rclone move source:path dest:path [flags]

Options

      --create-empty-src-dirs   Create empty source dirs on destination after move
      --delete-empty-src-dirs   Delete empty source dirs after move
  -h, --help                    help for move

See the global flags page for global options not listed here.

SEE ALSO

rclone delete

Remove the contents of path.

Synopsis

Remove the files in path. Unlike purge it obeys include/exclude filters so can be used to selectively delete files.

rclone delete only deletes objects but leaves the directory structure alone. If you want to delete a directory and all of its contents use rclone purge

If you supply the --rmdirs flag, it will remove all empty directories along with it.

Eg delete all files bigger than 100MBytes

Check what would be deleted first (use either)

rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path

Then delete

rclone --min-size 100M delete remote:path

That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

rclone delete remote:path [flags]

Options

  -h, --help     help for delete
      --rmdirs   rmdirs removes empty directories but leaves root intact

See the global flags page for global options not listed here.

SEE ALSO

rclone purge

Remove the path and all of its contents.

Synopsis

Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use delete if you want to selectively delete files.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

rclone purge remote:path [flags]

Options

  -h, --help   help for purge

See the global flags page for global options not listed here.

SEE ALSO

rclone mkdir

Make the path if it doesn't already exist.

Synopsis

Make the path if it doesn't already exist.

rclone mkdir remote:path [flags]

Options

  -h, --help   help for mkdir

See the global flags page for global options not listed here.

SEE ALSO

rclone rmdir

Remove the path if empty.

Synopsis

Remove the path. Note that you can't remove a path with objects in it, use purge for that.

rclone rmdir remote:path [flags]

Options

  -h, --help   help for rmdir

See the global flags page for global options not listed here.

SEE ALSO

rclone check

Checks the files in the source and destination match.

Synopsis

Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination.

If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.

If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.

If you supply the --one-way flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected.

The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name (or stdout if it is -) supplied. What they write is described in the help below. For example --differ will write all paths which are present on both the source and destination but different.

The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.
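
For example, to do a quick size-only check and write a combined report to a file (paths here are placeholders):

rclone check --size-only source:path dest:path --combined report.txt
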

rclone check source:path dest:path [flags]

Options

      --combined string         Make a combined report of changes to this file
      --differ string           Report all non-matching files to this file
      --download                Check by downloading rather than with hash.
      --error string            Report all files with errors (hashing or reading) to this file
  -h, --help                    help for check
      --match string            Report all matching files to this file
      --missing-on-dst string   Report all files missing from the destination to this file
      --missing-on-src string   Report all files missing from the source to this file
      --one-way                 Check one way only, source files must exist on remote

See the global flags page for global options not listed here.

SEE ALSO

rclone ls

List the objects in the path with size and path.

Synopsis

Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default.

Eg

$ rclone ls swift:bucket
    60295 bevajer5jef
    90613 canole
    94467 diwogej7
    37600 fubuwic

Any of the filtering options can be applied to this command.

There are several related list commands

ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
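
For example, to list only the top level of a path, or only the files matching a filter:

rclone ls --max-depth 1 remote:path

rclone ls --include "*.txt" remote:path
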

rclone ls remote:path [flags]

Options

  -h, --help   help for ls

See the global flags page for global options not listed here.

SEE ALSO

rclone lsd

List all directories/containers/buckets in the path.

Synopsis

Lists the directories in the source path to standard output. Does not recurse by default. Use the -R flag to recurse.

This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of the directory, Eg

$ rclone lsd swift:
      494000 2018-04-26 08:43:20     10000 10000files
          65 2018-04-26 08:43:20         1 1File

Or

$ rclone lsd drive:test
          -1 2016-10-17 17:41:53        -1 1000files
          -1 2017-01-03 14:40:54        -1 2500files
          -1 2017-07-08 14:39:28        -1 4000files

If you just want the directory names use "rclone lsf --dirs-only".

Any of the filtering options can be applied to this command.

There are several related list commands

ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

rclone lsd remote:path [flags]

Options

  -h, --help        help for lsd
  -R, --recursive   Recurse into the listing.

See the global flags page for global options not listed here.

SEE ALSO

rclone lsl

List the objects in path with modification time, size and path.

Synopsis

Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default.

Eg

$ rclone lsl swift:bucket
    60295 2016-06-25 18:55:41.062626927 bevajer5jef
    90613 2016-06-25 18:55:43.302607074 canole
    94467 2016-06-25 18:55:43.046609333 diwogej7
    37600 2016-06-25 18:55:40.814629136 fubuwic

Any of the filtering options can be applied to this command.

There are several related list commands

ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

rclone lsl remote:path [flags]

Options

  -h, --help   help for lsl

See the global flags page for global options not listed here.

SEE ALSO

rclone md5sum

Produces an md5sum file for all the objects in the path.

Synopsis

Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.
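
For example (a sketch - it assumes a local copy that mirrors the remote paths), the output can be fed straight into the standard tool to verify a download:

rclone md5sum remote:path > SUMS

md5sum -c SUMS
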

rclone md5sum remote:path [flags]

Options

  -h, --help   help for md5sum

See the global flags page for global options not listed here.

SEE ALSO

rclone sha1sum

Produces an sha1sum file for all the objects in the path.

Synopsis

Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.

rclone sha1sum remote:path [flags]

Options

  -h, --help   help for sha1sum

See the global flags page for global options not listed here.

SEE ALSO

rclone size

Prints the total size and number of objects in remote:path.

Synopsis

Prints the total size and number of objects in remote:path.
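
For example, with the --json flag (numbers illustrative):

$ rclone size remote:path --json
{"count":42,"bytes":1234567}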

rclone size remote:path [flags]

Options

  -h, --help   help for size
      --json   format output as JSON

See the global flags page for global options not listed here.

SEE ALSO

rclone version

Show the version number.

Synopsis

Show the version number, the go version and the architecture.

Eg

$ rclone version
rclone v1.41
- os/arch: linux/amd64
- go version: go1.10

If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta.

$ rclone version --check
yours:  1.42.0.6
latest: 1.42          (released 2018-06-16)
beta:   1.42.0.5      (released 2018-06-17)

Or

$ rclone version --check
yours:  1.41
latest: 1.42          (released 2018-06-16)
  upgrade: https://downloads.rclone.org/v1.42
beta:   1.42.0.5      (released 2018-06-17)
  upgrade: https://beta.rclone.org/v1.42-005-g56e1e820

rclone version [flags]

Options

      --check   Check for new version.
  -h, --help    help for version

See the global flags page for global options not listed here.

SEE ALSO

rclone cleanup

Clean up the remote if possible.

Synopsis

Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.

rclone cleanup remote:path [flags]

Options

  -h, --help   help for cleanup

See the global flags page for global options not listed here.

SEE ALSO

rclone dedupe

Interactively find duplicate filenames and delete/rename them.

Synopsis

By default dedupe interactively finds files with duplicate names and offers to delete all but one or rename them to be different.

This is only useful with backends like Google Drive which can have duplicate file names. It can be run on wrapping backends (eg crypt) if they wrap a backend which supports duplicate file names.

In the first pass it will merge directories with the same name. It will do this iteratively until all the identically named directories have been merged.

In the second pass, for every group of duplicate file names, it will delete all but one of the identical files it finds without confirmation. This means that for most duplicated files the dedupe command will not be interactive.

dedupe considers files to be identical if they have the same hash. If the backend does not support hashes (eg crypt wrapping Google Drive) then they will never be found to be identical. If you use the --size-only flag then files will be considered identical if they have the same size (any hash will be ignored). This can be useful on crypt backends which do not support hashes.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

Here is an example run.

Before - with duplicates

$ rclone lsl drive:dupes
  6048320 2016-03-05 16:23:16.798000000 one.txt
  6048320 2016-03-05 16:23:11.775000000 one.txt
   564374 2016-03-05 16:23:06.731000000 one.txt
  6048320 2016-03-05 16:18:26.092000000 one.txt
  6048320 2016-03-05 16:22:46.185000000 two.txt
  1744073 2016-03-05 16:22:38.104000000 two.txt
   564374 2016-03-05 16:22:52.118000000 two.txt

Now the dedupe session

$ rclone dedupe drive:dupes
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 files with duplicate names
one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: 2 duplicates remain
  1:      6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
  2:       564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 files with duplicates names
two.txt: 3 duplicates remain
  1:       564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
  2:      6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
  3:      1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> r
two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt

The result being

$ rclone lsl drive:dupes
  6048320 2016-03-05 16:23:16.798000000 one.txt
   564374 2016-03-05 16:22:52.118000000 two-1.txt
  6048320 2016-03-05 16:22:46.185000000 two-2.txt
  1744073 2016-03-05 16:22:38.104000000 two-3.txt

Dedupe can be run non-interactively using the --dedupe-mode flag or by using an extra parameter with the same value.

For example to rename all the identically named photos in your Google Photos directory, do

rclone dedupe --dedupe-mode rename "drive:Google Photos"

Or

rclone dedupe rename "drive:Google Photos"

rclone dedupe [mode] remote:path [flags]

Options

      --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename. (default "interactive")
  -h, --help                 help for dedupe

See the global flags page for global options not listed here.

SEE ALSO

rclone about

Get quota information from the remote.

Synopsis

Get quota information from the remote, like bytes used/free/quota and bytes used in the trash. Not supported by all remotes.

This will print to stdout something like this:

Total:   17G
Used:    7.444G
Free:    1.315G
Trashed: 100.000M
Other:   8.241G

Where the fields are:

Total: total size available.
Used: total size used.
Free: total space available to this user.
Trashed: total space used by trash.
Other: total amount in other storage (eg Gmail, Google Photos).

Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted.

Use the --full flag to see the numbers written out in full, eg

Total:   18253611008
Used:    7993453766
Free:    1411001220
Trashed: 104857602
Other:   8849156022

Use the --json flag for a computer readable output, eg

{
    "total": 18253611008,
    "used": 7993453766,
    "trashed": 104857602,
    "other": 8849156022,
    "free": 1411001220
}

rclone about remote: [flags]

Options

      --full   Full numbers instead of SI units
  -h, --help   help for about
      --json   Format output as JSON

See the global flags page for global options not listed here.

SEE ALSO

rclone authorize

Remote authorization.

Synopsis

Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.

Use the --auth-no-open-browser flag to prevent rclone from opening the auth link in the default browser automatically.
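
For example, on the machine with the browser, run (backend name illustrative):

rclone authorize "drive"

Then paste the resulting token back into the headless rclone config session when prompted.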

rclone authorize [flags]

Options

      --auth-no-open-browser   Do not automatically open auth link in default browser
  -h, --help                   help for authorize

See the global flags page for global options not listed here.

SEE ALSO

rclone backend

Run a backend specific command.

Synopsis

This runs a backend specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions.

You can discover what commands a backend implements by using

rclone backend help remote:
rclone backend help <backendname>

You can also discover information about the backend using the following command (see operations/fsinfo in the remote control docs for more info).

rclone backend features remote:

Pass options to the backend command with -o. This should be key=value or key, eg:

rclone backend stats remote:path -o format=json -o long

Pass arguments to the backend by placing them on the end of the line

rclone backend cleanup remote:path file1 file2 file3

Note: to run these commands on a running backend, see backend/command in the rc docs.

rclone backend <command> remote:path [opts] <args> [flags]

Options

  -h, --help                 help for backend
      --json                 Always output in JSON format.
  -o, --option stringArray   Option in the form name=value or name.

See the global flags page for global options not listed here.

SEE ALSO

rclone cat

Concatenates any files and sends them to stdout.

Synopsis

rclone cat sends any files to standard output.

You can use it like this to output a single file

rclone cat remote:path/to/file

Or like this to output any file in dir or its subdirectories.

rclone cat remote:path/to/dir

Or like this to output any .txt files in dir or its subdirectories.

rclone --include "*.txt" cat remote:path/to/dir

Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
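
For example, to print the first 100 and the last 20 characters of a file (path illustrative):

rclone cat --head 100 remote:path/to/file
rclone cat --tail 20 remote:path/to/file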

rclone cat remote:path [flags]

Options

      --count int    Only print N characters. (default -1)
      --discard      Discard the output instead of printing.
      --head int     Only print the first N characters.
  -h, --help         help for cat
      --offset int   Start printing at offset N (or from end if -ve).
      --tail int     Only print the last N characters.

See the global flags page for global options not listed here.

SEE ALSO

rclone config create

Create a new remote with name, type and options.

Synopsis

Create a new remote of name with type and options. The options should be passed in pairs of key value.

For example to make a swift remote of name myremote using auto config you would do:

rclone config create myremote swift env_auth true

Note that if the config process would normally ask a question the default is taken. Each time that happens rclone will print a message saying how to affect the value taken.

If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.

NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command.
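
For example, a minimal sketch (remote name, host, user and password illustrative) which forces the password to be obscured:

rclone config create mysftp sftp host example.com user fred pass rawpassword --obscure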

So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this:

rclone config create mydrive drive config_is_local false

rclone config create `name` `type` [`key` `value`]* [flags]

Options

  -h, --help         help for create
      --no-obscure   Force any passwords not to be obscured.
      --obscure      Force any passwords to be obscured.

See the global flags page for global options not listed here.

SEE ALSO

rclone config delete

Delete an existing remote name.

Synopsis

Delete an existing remote name.

rclone config delete `name` [flags]

Options

  -h, --help   help for delete

See the global flags page for global options not listed here.

SEE ALSO

rclone config disconnect

Disconnects user from remote

Synopsis

This disconnects the remote: passed in to the cloud storage system.

This normally means revoking the oauth token.

To reconnect use "rclone config reconnect".

rclone config disconnect remote: [flags]

Options

  -h, --help   help for disconnect

See the global flags page for global options not listed here.

SEE ALSO

rclone config dump

Dump the config file as JSON.

Synopsis

Dump the config file as JSON.

rclone config dump [flags]

Options

  -h, --help   help for dump

See the global flags page for global options not listed here.

SEE ALSO

rclone config edit

Enter an interactive configuration session.

Synopsis

Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.

rclone config edit [flags]

Options

  -h, --help   help for edit

See the global flags page for global options not listed here.

SEE ALSO

rclone config file

Show path of configuration file in use.

Synopsis

Show path of configuration file in use.

rclone config file [flags]

Options

  -h, --help   help for file

See the global flags page for global options not listed here.

SEE ALSO

rclone config password

Update password in an existing remote.

Synopsis

Update an existing remote's password. The password should be passed in pairs of key value.

For example to set password of a remote of name myremote you would do:

rclone config password myremote fieldname mypassword

This command is obsolete now that "config update" and "config create" both support obscuring passwords directly.

rclone config password `name` [`key` `value`]+ [flags]

Options

  -h, --help   help for password

See the global flags page for global options not listed here.

SEE ALSO

rclone config providers

List in JSON format all the providers and options.

Synopsis

List in JSON format all the providers and options.

rclone config providers [flags]

Options

  -h, --help   help for providers

See the global flags page for global options not listed here.

SEE ALSO

rclone config reconnect

Re-authenticates user with remote.

Synopsis

This reconnects remote: passed in to the cloud storage system.

To disconnect the remote use "rclone config disconnect".

This normally means going through the interactive oauth flow again.

rclone config reconnect remote: [flags]

Options

  -h, --help   help for reconnect

See the global flags page for global options not listed here.

SEE ALSO

rclone config show

Print (decrypted) config file, or the config for a single remote.

Synopsis

Print (decrypted) config file, or the config for a single remote.

rclone config show [<remote>] [flags]

Options

  -h, --help   help for show

See the global flags page for global options not listed here.

SEE ALSO

rclone config update

Update options in an existing remote.

Synopsis

Update an existing remote's options. The options should be passed in pairs of key value.

For example to update the env_auth field of a remote of name myremote you would do:

rclone config update myremote swift env_auth true

If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.

NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command.

If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus:

rclone config update myremote swift env_auth true config_refresh_token false

rclone config update `name` [`key` `value`]+ [flags]

Options

  -h, --help         help for update
      --no-obscure   Force any passwords not to be obscured.
      --obscure      Force any passwords to be obscured.

See the global flags page for global options not listed here.

SEE ALSO

rclone config userinfo

Prints info about logged in user of remote.

Synopsis

This prints the details of the person logged in to the cloud storage system.

rclone config userinfo remote: [flags]

Options

  -h, --help   help for userinfo
      --json   Format output as JSON

See the global flags page for global options not listed here.

SEE ALSO

rclone copyto

Copy files from source to dest, skipping already copied.

Synopsis

If source:path is a file or directory then it copies it to a file or directory named dest:path.

This can be used to upload single files under a name other than their current one. If the source is a directory then it acts exactly like the copy command.

So

rclone copyto src dst

where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\to\local.

This will:

if src is file
    copy it to dst, overwriting an existing file if it exists
if src is directory
    copy it to dst, overwriting existing files if they exist
    see copy command for full details

This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.

Note: Use the -P/--progress flag to view real-time transfer statistics

rclone copyto source:path dest:path [flags]

Options

  -h, --help   help for copyto

See the global flags page for global options not listed here.

SEE ALSO

rclone copyurl

Copy url content to dest.

Synopsis

Download a URL's content and copy it to the destination without saving it in temporary storage.

Setting --auto-filename will cause the file name to be retrieved from the URL (after any redirections) and used in the destination path.

Setting --no-clobber will prevent overwriting a file on the destination if there is one with the same name.

Setting --stdout or making the output file name "-" will cause the output to be written to standard output.
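
For example, a sketch (URL and destination illustrative) which keeps the server's file name and refuses to overwrite an existing file:

rclone copyurl -a --no-clobber https://example.com/archive.zip remote:backups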

rclone copyurl https://example.com dest:path [flags]

Options

  -a, --auto-filename   Get the file name from the URL and use it for destination file path
  -h, --help            help for copyurl
      --no-clobber      Prevent overwriting file with same name
      --stdout          Write the output to stdout rather than a file

See the global flags page for global options not listed here.

SEE ALSO

rclone cryptcheck

Cryptcheck checks the integrity of a crypted remote.

Synopsis

rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.

For it to work the underlying remote of the cryptedremote must support some kind of checksum.

It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.

Use it like this

rclone cryptcheck /path/to/files encryptedremote:path

You can use it like this also, but that will involve downloading all the files in remote:path.

rclone cryptcheck remote:path encryptedremote:path

After it has run it will log the status of the encryptedremote:.

If you supply the --one-way flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected.

The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name (or stdout if it is -) supplied. What they write is described in the help below. For example --differ will write all paths which are present on both the source and destination but different.

The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.
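
For example, to write a combined report to stdout (paths illustrative):

rclone cryptcheck --combined - /path/to/files encryptedremote:path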

rclone cryptcheck remote:path cryptedremote:path [flags]

Options

      --combined string         Make a combined report of changes to this file
      --differ string           Report all non-matching files to this file
      --error string            Report all files with errors (hashing or reading) to this file
  -h, --help                    help for cryptcheck
      --match string            Report all matching files to this file
      --missing-on-dst string   Report all files missing from the destination to this file
      --missing-on-src string   Report all files missing from the source to this file
      --one-way                 Check one way only, source files must exist on remote

See the global flags page for global options not listed here.

SEE ALSO

rclone cryptdecode

Cryptdecode returns unencrypted file names.

Synopsis

rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.

If you supply the --reverse flag, it will return encrypted file names.

Use it like this

rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2

rclone cryptdecode --reverse encryptedremote: filename1 filename2

rclone cryptdecode encryptedremote: encryptedfilename [flags]

Options

  -h, --help      help for cryptdecode
      --reverse   Reverse cryptdecode, encrypts filenames

See the global flags page for global options not listed here.

SEE ALSO

rclone deletefile

Remove a single file from remote.

Synopsis

Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.

rclone deletefile remote:path [flags]

Options

  -h, --help   help for deletefile

See the global flags page for global options not listed here.

SEE ALSO

rclone genautocomplete

Output completion script for a given shell.

Synopsis

Generates a shell completion script for rclone. Run with --help to list the supported shells.

Options

  -h, --help   help for genautocomplete

See the global flags page for global options not listed here.

SEE ALSO

rclone genautocomplete bash

Output bash completion script for rclone.

Synopsis

Generates a bash shell autocompletion script for rclone.

This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg

sudo rclone genautocomplete bash

Logout and login again to use the autocompletion scripts, or source them directly

. /etc/bash_completion

If you supply a command line argument the script will be written there.

rclone genautocomplete bash [output_file] [flags]

Options

  -h, --help   help for bash

See the global flags page for global options not listed here.

SEE ALSO

rclone genautocomplete fish

Output fish completion script for rclone.

Synopsis

Generates a fish autocompletion script for rclone.

This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, eg

sudo rclone genautocomplete fish

Logout and login again to use the autocompletion scripts, or source them directly

. /etc/fish/completions/rclone.fish

If you supply a command line argument the script will be written there.

rclone genautocomplete fish [output_file] [flags]

Options

  -h, --help   help for fish

See the global flags page for global options not listed here.

SEE ALSO

rclone genautocomplete zsh

Output zsh completion script for rclone.

Synopsis

Generates a zsh autocompletion script for rclone.

This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg

sudo rclone genautocomplete zsh

Logout and login again to use the autocompletion scripts, or source them directly

autoload -U compinit && compinit

If you supply a command line argument the script will be written there.

rclone genautocomplete zsh [output_file] [flags]

Options

  -h, --help   help for zsh

See the global flags page for global options not listed here.

SEE ALSO

rclone gendocs

Output markdown docs for rclone to the directory supplied.

Synopsis

This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

rclone gendocs output_directory [flags]

Options

  -h, --help   help for gendocs

See the global flags page for global options not listed here.

SEE ALSO

rclone hashsum

Produces a hashsum file for all the objects in the path.

Synopsis

Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.

Run without a hash to see the list of supported hashes, eg

$ rclone hashsum
Supported hashes are:
  * MD5
  * SHA-1
  * DropboxHash
  * QuickXorHash

Then

$ rclone hashsum MD5 remote:path

rclone hashsum <hash> remote:path [flags]

Options

      --base64   Output base64 encoded hashsum
  -h, --help     help for hashsum

See the global flags page for global options not listed here.

SEE ALSO

rclone link

Generate public link to file/folder.

Synopsis

rclone link will create, retrieve or remove a public link to the given file or folder.

rclone link remote:path/to/file
rclone link remote:path/to/folder/
rclone link --unlink remote:path/to/folder/
rclone link --expire 1d remote:path/to/file

If you supply the --expire flag, it will set the expiration time otherwise it will use the default (100 years). Note not all backends support the --expire flag - if the backend doesn't support it then the link returned won't expire.

Use the --unlink flag to remove existing public links to the file or folder. Note not all backends support "--unlink" flag - those that don't will just ignore it.

If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default be created with the least constraints – e.g. no expiry, no password protection, accessible without account.

rclone link remote:path [flags]

Options

      --expire Duration   The amount of time that the link will be valid (default 100y)
  -h, --help              help for link
      --unlink            Remove existing public link to file/folder

See the global flags page for global options not listed here.

SEE ALSO

rclone listremotes

List all the remotes in the config file.

Synopsis

rclone listremotes lists all the available remotes from the config file.

When used with the --long flag it lists the types too.
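
For example (remote names and types illustrative):

$ rclone listremotes --long
gdrive:              drive
s3backup:            s3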

rclone listremotes [flags]

Options

  -h, --help   help for listremotes
      --long   Show the type as well as names.

See the global flags page for global options not listed here.

SEE ALSO

rclone lsf

List directories and objects in remote:path formatted for parsing.

Synopsis

List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.

Eg

$ rclone lsf swift:bucket
bevajer5jef
canole
diwogej7
ferejej3gux/
fubuwic

Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:

p - path
s - size
t - modification time
h - hash
i - ID of object
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
T - tier of storage if known, eg "Hot" or "Cool"

So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.

Eg

$ rclone lsf  --format "tsp" swift:bucket
2016-06-25 18:55:41;60295;bevajer5jef
2016-06-25 18:55:43;90613;canole
2016-06-25 18:55:43;94467;diwogej7
2018-04-26 08:50:45;0;ferejej3gux/
2016-06-25 18:55:40;37600;fubuwic

If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.

For example to emulate the md5sum command you can use

rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .

Eg

$ rclone lsf -R --hash MD5 --format hp --separator "  " --files-only swift:bucket 
7908e352297f0f530b84a756f188baa3  bevajer5jef
cd65ac234e6fea5925974a51cdd865cc  canole
03b5341b4f234b9d984d03ad076bae91  diwogej7
8fd37c3810dd660778137ac3a66cc06d  fubuwic
99713e14a4c4ff553acaf1930fad985b  gixacuh7ku

(Though "rclone md5sum ." is an easier way of typing this.)

By default the separator is ";", but this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.

Eg

$ rclone lsf  --separator "," --format "tshp" swift:bucket
2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
2018-04-26 08:52:53,0,,ferejej3gux/
2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic

You can output in standard CSV format. This will quote values with " if they contain a ,.

Eg

$ rclone lsf --csv --files-only --format ps remote:path
test.log,22355
test.sh,449
"this file contains a comma, in the file name.txt",6

Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag.

For example to find all the files modified within one day and copy those only (without traversing the whole directory structure):

rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
rclone copy --files-from-raw new_files /path/to/local remote:path

Any of the filtering options can be applied to this command.

There are several related list commands

ls, lsl, lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

The other list commands lsd, lsf, lsjson do not recurse by default - use "-R" to make them recurse.

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

rclone lsf remote:path [flags]

Options

      --absolute           Put a leading / in front of path names.
      --csv                Output in CSV format.
  -d, --dir-slash          Append a slash to directory names. (default true)
      --dirs-only          Only list directories.
      --files-only         Only list files.
  -F, --format string      Output format - see  help for details (default "p")
      --hash h             Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
  -h, --help               help for lsf
  -R, --recursive          Recurse into the listing.
  -s, --separator string   Separator for the items in the format. (default ";")

See the global flags page for global options not listed here.

SEE ALSO

rclone lsjson

List directories and objects in the path in JSON format.

Synopsis

List directories and objects in the path in JSON format.

The output is an array of Items, where each Item looks like this

{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsBucket" : false, "IsDir" : false, "MimeType" : "application/octet-stream", "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", }

If --hash is not specified the Hashes property won't be emitted. The types of hash can be specified with the --hash-type parameter (which may be repeated). If --hash-type is set then it implies --hash.

If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (eg s3, swift).

If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (eg s3, swift).

If --encrypted is not specified the Encrypted property won't be emitted.

If --dirs-only is not specified, files in addition to directories are returned.

If --files-only is not specified, directories in addition to the files will be returned.

The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.

If the directory is a bucket in a bucket based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true".

The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision with which the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDAV etc) no digits will be shown ("2017-05-31T16:15:57+01:00").

The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written on its own line.
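
For example, a sketch of the default output (values illustrative):

$ rclone lsjson remote:path
[
{"Path":"file.txt","Name":"file.txt","Size":6,"MimeType":"application/octet-stream","ModTime":"2017-05-31T16:15:57.034468261+01:00","IsDir":false},
{"Path":"subdir","Name":"subdir","Size":-1,"MimeType":"inode/directory","ModTime":"2017-05-31T16:15:57+01:00","IsDir":true}
]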

Any of the filtering options can be applied to this command.

There are several related list commands

ls, lsl, lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

The other list commands lsd, lsf, lsjson do not recurse by default - use "-R" to make them recurse.

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

rclone lsjson remote:path [flags]

Options

      --dirs-only               Show only directories in the listing.
  -M, --encrypted               Show the encrypted names.
      --files-only              Show only files in the listing.
      --hash                    Include hashes in the output (may take longer).
      --hash-type stringArray   Show only this hash type (may be repeated).
  -h, --help                    help for lsjson
      --no-mimetype             Don't read the mime type (can speed things up).
      --no-modtime              Don't read the modification time (can speed things up).
      --original                Show the ID of the underlying Object.
  -R, --recursive               Recurse into the listing.

See the global flags page for global options not listed here.

SEE ALSO

rclone mount

Mount the remote as file system on a mountpoint.

Synopsis

rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

First set up your remote using rclone config. Check it works with rclone ls etc.

You can either run mount in foreground mode or background (daemon) mode. Mount runs in foreground mode by default; use the --daemon flag to specify background mode. Background mode is only supported on Linux and OSX; you can only run mount in foreground mode on Windows.

On Linux/macOS/FreeBSD start the mount like this, where /path/to/local/mount is an empty existing directory.

rclone mount remote:path/to/files /path/to/local/mount

Or on Windows like this, where X: is an unused drive letter, or use a path to a non-existent directory.

rclone mount remote:path/to/files X:
rclone mount remote:path/to/files C:\path\to\nonexistent\directory

When running in background mode the user will have to stop the mount manually (as described below).

When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped.

The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.

Stopping the mount manually:

# Linux
fusermount -u /path/to/local/mount
# OS X
umount /path/to/local/mount

Note: As of rclone 1.52.2, rclone mount now requires Go version 1.13 or newer on some platforms depending on the underlying FUSE library in use.

Installing on Windows

To run rclone mount on Windows, you will need to download and install WinFsp.

WinFsp is an open source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with cgofuse. Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows.

Windows caveats

Note that drives created as Administrator are not visible by other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive.

The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using the WinFsp.Launcher infrastructure) which creates drives accessible for everyone on the system or alternatively using the nssm service manager.

Mount as a network drive

By default, rclone will mount the remote as a normal drive. However, you can also mount it as a Network Drive (or Network Share, as mentioned in some places).

Unlike other systems, Windows provides a different filesystem type for network drives. Windows and other programs treat the network drives and fixed/removable drives differently: In network drives, many I/O operations are optimized, as the high latency and low reliability (compared to a normal drive) of a network is expected.

Although many people prefer network shares to be mounted as normal system drives, this might cause some issues, such as programs not working as expected or freezes and errors while operating with the mounted remote in Windows Explorer. If you experience any of those, consider mounting rclone remotes as network shares, as Windows expects normal drives to be fast and reliable, while cloud storage is far from that. See also the Limitations section below for more info.

Add "--fuse-flag --VolumePrefix=" to your "mount" command, replacing "share" with any other name of your choice if you are mounting more than one remote. Otherwise, the mountpoints will conflict and your mounted filesystems will overlap.

Read more about drive mapping

Limitations

Without the use of "--vfs-cache-mode" this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the File Caching section for more info.

The bucket-based remotes (eg Swift, S3, Google Cloud Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.

Only supported on Linux, FreeBSD, OS X and Windows at the moment.

rclone mount vs rclone sync/copy

File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the file caching for solutions to make mount more reliable.

Attribute caching

You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.

The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel.

In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much memory, rclone not serving files to samba and excessive time listing directories.

The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above.

If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.

If files don't change on the remote outside of the control of rclone then there is no chance of corruption.

This is the same as setting the attr_timeout option in mount.fuse.
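
For example, a sketch which accepts more caching in exchange for fewer kernel callbacks (mountpoint illustrative):

rclone mount remote:path /path/to/local/mount --attr-timeout 10s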

Filters

Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

systemd

When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.

chunked reading

--vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests.

When --vfs-read-chunk-size-limit is also specified and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely.

With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
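
As a command line sketch (mountpoint illustrative), the second example above corresponds to:

rclone mount remote:path /path/to/local/mount --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M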

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
--poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded the next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
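
For example, a sketch enabling write caching with a bounded cache (mountpoint and size illustrative):

rclone mount remote:path /path/to/local/mount --vfs-cache-mode writes --vfs-cache-max-size 10G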

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible:

Files can't be opened for both read AND write
Files opened for write can't be seeked
Existing files opened for write must have whole file written
Files opened for read with O_TRUNC will be opened write only
Open modes O_APPEND, O_TRUNC are ignored
If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
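
A sketch following that recommendation (mountpoint and sizes illustrative):

rclone mount remote:path /path/to/local/mount --vfs-cache-mode full --buffer-size 32M --vfs-read-ahead 256M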

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

rclone mount remote:path /path/to/mountpoint [flags]

Options

      --allow-non-empty                        Allow mounting over a non-empty directory (not Windows).
      --allow-other                            Allow access to other users.
      --allow-root                             Allow access to root user.
      --async-read                             Use asynchronous reads. (default true)
      --attr-timeout duration                  Time for which file/directory attributes are cached. (default 1s)
      --daemon                                 Run mount as a daemon (background mode).
      --daemon-timeout duration                Time limit for rclone to respond to kernel (not supported by all OSes).
      --debug-fuse                             Debug the FUSE internals - needs -v.
      --default-permissions                    Makes kernel enforce access control based on the file mode.
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --fuse-flag stringArray                  Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for mount
      --max-read-ahead SizeSuffix              The number of bytes that can be prefetched for sequential reads. (default 128k)
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
  -o, --option stringArray                     Option for libfuse/WinFsp. Repeat if required.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem.
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
      --volname string                         Set the volume name (not supported by all OSes).
      --write-back-cache                       Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.

See the global flags page for global options not listed here.

SEE ALSO

rclone moveto

Move file or directory from source to dest.

Synopsis

If source:path is a file or directory then it moves it to a file or directory named dest:path.

This can be used to rename files or upload single files under a name other than their current one. If the source is a directory then it acts exactly like the move command.

So

rclone moveto src dst

where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\to\local.

This will:

if src is file
    move it to dst, overwriting an existing file if it exists
if src is directory
    move it to dst, overwriting existing files if they exist
    see move command for full details

This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

Note: Use the -P/--progress flag to view real-time transfer statistics.

rclone moveto source:path dest:path [flags]

Options

  -h, --help   help for moveto

See the global flags page for global options not listed here.

SEE ALSO

rclone ncdu

Explore a remote with a text based user interface.

Synopsis

This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

Here are the keys - press '?' to toggle the help on and off

 ↑,↓ or k,j to Move
 →,l to enter
 ←,h to return
 c toggle counts
 g toggle graph
 n,s,C sort by name,size,count
 d delete file/directory
 y copy current path to clipboard
 Y display current path
 ^L refresh screen
 ? to toggle help on and off
 q/ESC/c-C to quit

This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.

Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously.

rclone ncdu remote:path [flags]

Options

  -h, --help   help for ncdu

See the global flags page for global options not listed here.

SEE ALSO

rclone obscure

Obscure password for use in the rclone config file.

Synopsis

In the rclone config file, human readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eavesdropping" - namely someone seeing a password in the rclone config file by accident.

Many equally important things (like access tokens) are not obscured in the config file. However, it is very hard to shoulder surf a 64 character hex token.

This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. Example:

echo "secretpassword" | rclone obscure -

If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.

If you want to encrypt the config file then please use config file encryption - see rclone config for more info.

rclone obscure password [flags]

Options

  -h, --help   help for obscure

See the global flags page for global options not listed here.

SEE ALSO

rclone rc

Run a command against a running rclone.

Synopsis

This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect to. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".

A username and password can be passed in with --user and --pass.

Note that --rc-addr, --rc-user and --rc-pass will also be read and used for --url, --user and --pass respectively.
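
For example, to query the version of an rclone instance running on another machine (the address and credentials here are illustrative):

rclone rc --url http://192.168.1.10:5572 --user me --pass secret core/version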

Arguments should be passed in as parameter=value.

The result will be returned as a JSON object by default.

The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.

The -o/--opt option can be used to set a key "opt" with key, value options in the form "-o key=value" or "-o key". It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings.

-o key=value -o key2

Will place this in the "opt" value

{"key":"value", "key2","")

The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings.

-a value -a value2

Will place this in the "arg" value

["value", "value2"]

Use --loopback to connect to the rclone instance running "rclone rc". This is very useful for testing commands without having to run an rclone rc server, eg:

rclone rc --loopback operations/about fs=/

Use "rclone rc" to see a list of all possible commands.

rclone rc commands parameter [flags]

Options

  -a, --arg stringArray   Argument placed in the "arg" array.
  -h, --help              help for rc
      --json string       Input JSON - use instead of key=value args.
      --loopback          If set connect to this rclone instance not via HTTP.
      --no-output         If set don't output the JSON result.
  -o, --opt stringArray   Option in the form name=value or name placed in the "opt" array.
      --pass string       Password to use to connect to rclone remote control.
      --url string        URL to connect to rclone remote control. (default "http://localhost:5572/")
      --user string       Username to use to connect to rclone remote control.

See the global flags page for global options not listed here.

SEE ALSO

rclone rcat

Copies standard input to file on remote.

Synopsis

rclone rcat reads from standard input (stdin) and copies it to a single remote file.

echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file

If the remote file already exists, it will be overwritten.

rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote - please see its documentation. Generally speaking, setting this cutoff too high will decrease your performance.
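
For example, to stream a tarball while raising the cutoff so that medium-sized archives still go up in a single request (the paths and size here are illustrative):

tar czf - /path/to/dir | rclone rcat --streaming-upload-cutoff 200M remote:backup/dir.tar.gz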

Note that the upload cannot be retried either, because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you may be better off caching it locally and then using rclone move to transfer it to the destination.

rclone rcat remote:path [flags]

Options

  -h, --help   help for rcat

See the global flags page for global options not listed here.

SEE ALSO

rclone rcd

Run rclone listening to remote control commands only.

Synopsis

This runs rclone so that it only listens to remote control commands.

This is useful if you are controlling rclone via the rc API.

If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.

See the rc documentation for more info on the rc flags.
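
For example, a minimal invocation serving the remote control API with basic authentication might look like this (the credentials here are illustrative):

rclone rcd --rc-addr :5572 --rc-user me --rc-pass secret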

rclone rcd <path to files to serve>* [flags]

Options

  -h, --help   help for rcd

See the global flags page for global options not listed here.

SEE ALSO

rclone rmdirs

Remove empty directories under the path.

Synopsis

This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path itself if it has nothing in it.

If you supply the --leave-root flag, it will not remove the root directory.

This is useful for tidying up remotes that rclone has left a lot of empty directories in.
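
For example, to preview what would be removed before doing it for real (the remote path is illustrative):

rclone rmdirs --dry-run -v remote:path
rclone rmdirs remote:path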

rclone rmdirs remote:path [flags]

Options

  -h, --help         help for rmdirs
      --leave-root   Do not remove root directory if empty

See the global flags page for global options not listed here.

SEE ALSO

rclone serve

Serve a remote over a protocol.

Synopsis

rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg

rclone serve http remote:

Each subcommand has its own options which you can see in their help.

rclone serve <protocol> [opts] <remote> [flags]

Options

  -h, --help   help for serve

See the global flags page for global options not listed here.

SEE ALSO

rclone serve dlna

Serve remote:path over DLNA

Synopsis

rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.

Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.

Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs.

Use --name to choose the friendly server name, which is by default "rclone (hostname)".

Use --log-trace in conjunction with -vv to enable additional debug logging of all UPNP traffic.
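
For example, a server bound to a specific port with a custom friendly name might be started like this (the remote name is illustrative):

rclone serve dlna --addr :7879 --name "rclone media" remote:media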

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
--poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded the next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
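
As a sketch, a typical invocation enabling the file cache might look like this (the cache directory and size limit are illustrative):

rclone serve dlna --vfs-cache-mode writes --cache-dir /var/cache/rclone --vfs-cache-max-size 10G remote:media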

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible.

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible.

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different from what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

rclone serve dlna remote:path [flags]

Options

      --addr string                            ip:port or :port to bind the DLNA http server to. (default ":7879")
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for dlna
      --log-trace                              enable trace logging of SOAP traffic
      --name string                            name of DLNA server
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name is not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)

See the global flags page for global options not listed here.

SEE ALSO

rclone serve ftp

Serve remote:path over FTP.

Synopsis

rclone serve ftp implements a basic FTP server to serve the remote over the FTP protocol. This can be viewed with an FTP client, or you can make a remote of type ftp to read and write it.

Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

Authentication

By default this will serve files without needing a login.

You can set a single username and password with the --user and --pass flags.
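
For example, to serve on all interfaces with a single login (the credentials here are illustrative):

rclone serve ftp --addr :2121 --user me --pass secret remote:path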

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
--poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded the next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible.

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible.

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")
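
For example, to start with larger chunks and cap the doubling (the values here are illustrative):

rclone serve ftp --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G remote:path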

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different from what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

Auth Proxy

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which are then used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.

There is an example program bin/test_proxy.py in the rclone source code.

The program's job is to take a user and pass from the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

The generated config must have this extra parameter: _root - the root to use for the backend.

And it may have this parameter: _obscure - a comma separated list of the parameters to obscure.

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "pass": "mypassword"
}

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}

And, as an example, the program could return this on STDOUT:

{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied user in any way. For example, to proxy to many different sftp backends, you could make the user be user@example.com, then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.

rclone serve ftp remote:path [flags]

Options

      --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:2121")
      --auth-proxy string                      A program to use to create the backend from the auth.
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for ftp
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --pass string                            Password for authentication. (empty value allow every password)
      --passive-port string                    Passive port range to use. (default "30000-32000")
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --public-ip string                       Public IP address to advertise for passive connections.
      --read-only                              Mount read-only.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --user string                            User name for authentication. (default "anonymous")
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name is not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)

See the global flags page for global options not listed here.

SEE ALSO

rclone serve http

Serve the remote over HTTP.

Synopsis

rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser, or you can make a remote of type http to read from it.

You can use the filter flags (eg --include, --exclude) to control what is served.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
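
For example, to serve behind a reverse proxy under a path prefix (the address and prefix here are illustrative):

rclone serve http --addr 127.0.0.1:8080 --baseurl /rclone remote:path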

--template allows a user to specify a custom markup template for the http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:

Parameter     Description
.Name         The full path of a file/directory.
.Title        Directory listing of .Name.
.Sort         The current sort used. This is changeable via the ?sort= parameter.
              Sort Options: namedirfirst,name,size,time (default namedirfirst)
.Order        The current ordering used. This is changeable via the ?order= parameter.
              Order Options: asc,desc (default asc)
.Query        Currently unused.
.Breadcrumb   Allows for creating a relative navigation.
-- .Link      The relative-to-the-root link of the Text.
-- .Text      The Name of the directory.
.Entries      Information about a specific file/directory.
-- .URL       The 'url' of an entry.
-- .Leaf      Currently the same as 'URL' but intended to be 'just' the name.
-- .IsDir     Boolean for whether an entry is a directory or not.
-- .Size      Size in Bytes of the entry.
-- .ModTime   The UTC timestamp of an entry.

Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

SSL/TLS

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
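
For example, to serve over https (the certificate and key file names here are illustrative):

rclone serve http --addr :8443 --cert server.pem --key server.key remote:path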

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
--poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded the next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible.

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible.

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
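
For example, full cache mode with a larger on-disk read ahead might be configured like this (the size is illustrative):

rclone serve http --vfs-cache-mode full --vfs-read-ahead 256M remote:path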

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different from what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

rclone serve http remote:path [flags]

Options

      --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:8080")
      --baseurl string                         Prefix for URLs - leave blank for root.
      --cert string                            SSL PEM key (concatenation of certificate and CA certificate)
      --client-ca string                       Client certificate authority to verify clients with
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for http
      --htpasswd string                        htpasswd file - if not provided no authentication is done
      --key string                             SSL PEM Private key
      --max-header-bytes int                   Maximum size of request header (default 4096)
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --pass string                            Password for authentication.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --realm string                           realm for authentication (default "rclone")
      --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
      --server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
      --template string                        User Specified Template.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --user string                            User name for authentication.
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name is not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)

See the global flags page for global options not listed here.

SEE ALSO

rclone serve restic

Serve the remote for restic's REST API.

Synopsis

rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

Restic is a command line program for doing backups.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

Setting up rclone for use by restic

First set up a remote for your chosen cloud provider.

Once you have set up the remote, check it is working with, for example, "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.

Now start the rclone restic server

rclone serve restic -v remote:backup

You can replace "backup" in the above with whatever path on the remote you wish to use.

By default this will serve on "localhost:8080". You can change this with the "--addr" flag.

You might wish to start this server on boot.

Setting up restic to use rclone

Now you can follow the restic instructions on setting up restic.

Note that you will need restic 0.8.2 or later to interoperate with rclone.

For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.

For example:

$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
$ export RESTIC_PASSWORD=yourpassword
$ restic init
created restic backend 8b1a4b56ae at rest:http://localhost:8080/

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
$ restic backup /path/to/files/to/backup
scan [/path/to/files/to/backup]
scanned 189 directories, 312 files in 0:00
[0:00] 100.00%  38.128 MiB / 38.128 MiB  501 / 501 items  0 errors  ETA 0:00
duration: 0:00
snapshot 45c8fdd8 saved

Multiple repositories

Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these must end with /. Eg

$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
# backup user1 stuff
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
# backup user2 stuff

Private repositories

The "--private-repos" flag can be used to limit users to repositories starting with a path of /<username>/.

Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:

Parameter Description
.Name The full path of a file/directory.
.Title Directory listing of .Name
.Sort The current sort used. This is changeable via ?sort= parameter
Sort Options: namedirfirst,name,size,time (default namedirfirst)
.Order The current ordering used. This is changeable via ?order= parameter
Order Options: asc,desc (default asc)
.Query Currently unused.
.Breadcrumb Allows for creating a relative navigation
-- .Link The link of the Text, relative to the root.
-- .Text The Name of the directory.
.Entries Information about a specific file/directory.
-- .URL The 'url' of an entry.
-- .Leaf Currently same as 'URL' but intended to be 'just' the name.
-- .IsDir Boolean for if an entry is a directory or not.
-- .Size Size in Bytes of the entry.
-- .ModTime The UTC timestamp of an entry.

Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

SSL/TLS

By default this will serve over http. If you want, you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
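For example, to try this out with a self-signed certificate (an illustrative sketch - for anything public you should use a certificate from a real CA):

openssl req -x509 -newkey rsa:4096 -nodes -days 365 -subj "/CN=localhost" -keyout key.pem -out cert.pem
rclone serve restic --addr :8443 --cert cert.pem --key key.pem remote:backup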

rclone serve restic remote:path [flags]

Options

      --addr string                     IPaddress:Port or :Port to bind server to. (default "localhost:8080")
      --append-only                     disallow deletion of repository data
      --baseurl string                  Prefix for URLs - leave blank for root.
      --cert string                     SSL PEM key (concatenation of certificate and CA certificate)
      --client-ca string                Client certificate authority to verify clients with
  -h, --help                            help for restic
      --htpasswd string                 htpasswd file - if not provided no authentication is done
      --key string                      SSL PEM Private key
      --max-header-bytes int            Maximum size of request header (default 4096)
      --pass string                     Password for authentication.
      --private-repos                   users can only access their private repo
      --realm string                    realm for authentication (default "rclone")
      --server-read-timeout duration    Timeout for server reading data (default 1h0m0s)
      --server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
      --stdio                           run an HTTP2 server on stdin/stdout
      --template string                 User Specified Template.
      --user string                     User name for authentication.

See the global flags page for global options not listed here.

SEE ALSO

rclone serve sftp

Serve the remote over SFTP.

Synopsis

rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.

You can use the filter flags (eg --include, --exclude) to control what is served.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.

Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that it can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend.

If you don't supply a --key then rclone will generate one and cache it for later use.

By default the server binds to localhost:2022 - if you want it to be reachable externally then supply "--addr :2022" for example.

Note that the default of "--vfs-cache-mode off" is fine for the rclone sftp backend, but it may not be with other SFTP clients.
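For example, to serve a remote with a single username and password and connect to it with the standard sftp client (the credentials here are illustrative):

rclone serve sftp --user me --pass mypassword --addr :2022 remote:path
sftp -P 2022 me@localhost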

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
--poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
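For example, with --buffer-size 16M and 10 files open, rclone may use up to 160M of memory for buffering.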

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
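For example (the size and cache directory here are illustrative):

rclone serve sftp --vfs-cache-mode writes --vfs-cache-max-size 10G --cache-dir /var/cache/rclone remote:path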

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible:

- Files can't be opened for both read AND write
- Files opened for write can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files open for read with O_TRUNC will be opened write only
- Files open for write only will behave as if O_TRUNC was supplied
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible:

- Files opened for write only can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")
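For example, with --vfs-read-chunk-size 128M and --vfs-read-chunk-size-limit 1G, rclone would request successive chunks of 128M, 256M, 512M and then 1G for the rest of the file.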

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

Auth Proxy

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which are then used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together; if --auth-proxy is set the authorized keys option will be ignored.

There is an example program bin/test_proxy.py in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

This config generated must have this extra parameter

- _root - root to use for the backend

And it may have this parameter

- _obscure - comma separated strings for parameters to obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "pass": "mypassword"
}

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}

And as an example, return this on STDOUT:

{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied user in any way. For example, to proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.
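For illustration, a minimal proxy might look like the sketch below (this is not the shipped bin/test_proxy.py - the host is an assumption and no real credential checking is done):

#!/usr/bin/env python3
# Minimal auth proxy sketch: read the auth request JSON from STDIN and
# write a complete backend config as JSON to STDOUT.
import json
import sys

request = json.load(sys.stdin)

# A real proxy must validate the supplied credentials here before
# returning a config, and would normally derive the host from the user.
config = {
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",  # ask rclone to obscure the pass parameter
    "user": request["user"],
    "pass": request.get("pass", ""),
    "host": "sftp.example.com",  # assumption - replace with your host
}

json.dump(config, sys.stdout)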

rclone serve sftp remote:path [flags]

Options

      --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:2022")
      --auth-proxy string                      A program to use to create the backend from the auth.
      --authorized-keys string                 Authorized keys file (default "~/.ssh/authorized_keys")
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for sftp
      --key stringArray                        SSH private host key file (Can be multi-valued, leave blank to auto generate)
      --no-auth                                Allow connections with no authentication if set.
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --pass string                            Password for authentication.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --user string                            User name for authentication.
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)

See the global flags page for global options not listed here.

SEE ALSO

rclone serve webdav

Serve remote:path over webdav.

Synopsis

rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.

Webdav options

--etag-hash

This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.

If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1".

Use "rclone hashsum" to see the full list.

Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:

Parameter Description
.Name The full path of a file/directory.
.Title Directory listing of .Name
.Sort The current sort used. This is changeable via ?sort= parameter
Sort Options: namedirfirst,name,size,time (default namedirfirst)
.Order The current ordering used. This is changeable via ?order= parameter
Order Options: asc,desc (default asc)
.Query Currently unused.
.Breadcrumb Allows for creating a relative navigation
-- .Link The link of the Text, relative to the root.
-- .Text The Name of the directory.
.Entries Information about a specific file/directory.
-- .URL The 'url' of an entry.
-- .Leaf Currently same as 'URL' but intended to be 'just' the name.
-- .IsDir Boolean for if an entry is a directory or not.
-- .Size Size in Bytes of the entry.
-- .ModTime The UTC timestamp of an entry.
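For illustration, a minimal template using the parameters above might look like this (a sketch only - rclone's built-in template is more complete), passed with --template /path/to/template.html:

<!DOCTYPE html>
<html>
<head><title>{{ .Title }}</title></head>
<body>
<h1>{{ .Name }}</h1>
<ul>
{{ range .Entries }}<li><a href="{{ .URL }}">{{ .Leaf }}</a>{{ if not .IsDir }} ({{ .Size }} bytes){{ end }}</li>
{{ end }}</ul>
</body>
</html>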

Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

SSL/TLS

By default this will serve over http. If you want, you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
--poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible:

- Files can't be opened for both read AND write
- Files opened for write can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files open for read with O_TRUNC will be opened write only
- Files open for write only will behave as if O_TRUNC was supplied
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible:

- Files opened for write only can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

Auth Proxy

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which are then used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together; if --auth-proxy is set the authorized keys option will be ignored.

There is an example program bin/test_proxy.py in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

This config generated must have this extra parameter

- _root - root to use for the backend

And it may have this parameter

- _obscure - comma separated strings for parameters to obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "pass": "mypassword"
}

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}

And as an example, return this on STDOUT:

{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied user in any way. For example, to proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.

rclone serve webdav remote:path [flags]

Options

      --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:8080")
      --auth-proxy string                      A program to use to create the backend from the auth.
      --baseurl string                         Prefix for URLs - leave blank for root.
      --cert string                            SSL PEM key (concatenation of certificate and CA certificate)
      --client-ca string                       Client certificate authority to verify clients with
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --disable-dir-list                       Disable HTML directory list on GET request for a directory
      --etag-hash string                       Which hash to use for the ETag, or auto or blank for off
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for webdav
      --htpasswd string                        htpasswd file - if not provided no authentication is done
      --key string                             SSL PEM Private key
      --max-header-bytes int                   Maximum size of request header (default 4096)
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --pass string                            Password for authentication.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --realm string                           realm for authentication (default "rclone")
      --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
      --server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
      --template string                        User Specified Template.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --user string                            User name for authentication.
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)

See the global flags page for global options not listed here.

SEE ALSO

rclone settier

Changes storage class/tier of objects in remote.

Synopsis

rclone settier changes the storage tier or class at the remote, if supported. A few cloud storage services provide different storage classes for objects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive, Google Cloud Storage - Regional Storage, Nearline, Coldline etc.

Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob storage puts objects into a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.

You can use it to tier a single object

rclone settier Cool remote:path/file

Or use rclone filters to set the tier on only specific files

rclone --include "*.txt" settier Hot remote:path/dir

Or just provide a remote directory and all files in the directory will be tiered

rclone settier tier remote:path/dir
rclone settier tier remote:path [flags]

Options

  -h, --help   help for settier

See the global flags page for global options not listed here.

SEE ALSO

rclone touch

Create new file or change file modification time.

Synopsis

Set the modification time on object(s) as specified by remote:path to have the current time.

If remote:path does not exist then a zero sized object will be created unless the --no-create flag is provided.

If --timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of:

- 'YYYY-MM-DD' - eg 2006-01-02
- 'YYYY-MM-DDTHH:MM:SS' - eg 2006-01-02T15:04:05
- 'YYYY-MM-DDTHH:MM:SS.SSS' - eg 2006-01-02T15:04:05.999999999

Note that --timestamp is in UTC. If you want local time then add the --localtime flag.
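For example (the path is illustrative):

rclone touch --timestamp 2006-01-02T15:04:05 remote:path/file.txt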

rclone touch remote:path [flags]

Options

  -h, --help               help for touch
      --localtime          Use localtime for timestamp, not UTC.
  -C, --no-create          Do not create the file if it does not exist.
  -t, --timestamp string   Use specified time instead of the current time of day.

See the global flags page for global options not listed here.

SEE ALSO

rclone tree

List the contents of the remote in a tree like fashion.

Synopsis

rclone tree lists the contents of a remote in a similar way to the unix tree command.

For example

$ rclone tree remote:path
/
├── file1
├── file2
├── file3
└── subdir
    ├── file4
    └── file5

1 directories, 5 files

You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.

The tree command has many options for controlling the listing which are compatible with the unix tree command. Note that not all of them have short options as they conflict with rclone's short options.

rclone tree remote:path [flags]

Options

  -a, --all             All files are listed (list . files too).
  -C, --color           Turn colorization on always.
  -d, --dirs-only       List directories only.
      --dirsfirst       List directories before files (-U disables).
      --full-path       Print the full path prefix for each file.
  -h, --help            help for tree
      --human           Print the size in a more human readable way.
      --level int       Descend only level directories deep.
  -D, --modtime         Print the date of last modification.
      --noindent        Don't print indentation lines.
      --noreport        Turn off file/directory count at end of tree listing.
  -o, --output string   Output to file instead of stdout.
  -p, --protections     Print the protections for each file.
  -Q, --quote           Quote filenames with double quotes.
  -s, --size            Print the size in bytes of each file.
      --sort string     Select sort: name,version,size,mtime,ctime.
      --sort-ctime      Sort files by last status change time.
  -t, --sort-modtime    Sort files by last modification time.
  -r, --sort-reverse    Reverse the order of the sort.
  -U, --unsorted        Leave files unsorted.
      --version         Sort files alphanumerically by version.

See the global flags page for global options not listed here.

SEE ALSO

Copying single files

rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.

For example, suppose you have a remote with a file in called test.jpg, then you could copy just that file like this

rclone copy remote:test.jpg /tmp/download

The file test.jpg will be placed inside /tmp/download.

This is equivalent to specifying

rclone copy --files-from /tmp/files remote: /tmp/download

Where /tmp/files contains the single line

test.jpg

It is recommended to use copy when copying individual files, not sync. They have pretty much the same effect but copy will use a lot less memory.

Syntax of remote paths

The syntax of the paths passed to the rclone command is as follows.

/path/to/dir

This refers to the local file system.

On Windows \ may be used instead of / in local paths only; non-local paths must use /.

These paths needn't start with a leading / - if they don't then they will be relative to the current directory.

remote:path/to/dir

This refers to a directory path/to/dir on remote: as defined in the config file (configured with rclone config).

remote:/path/to/dir

On most backends this refers to the same directory as remote:path/to/dir and that format should be preferred. On a very small number of remotes (FTP, SFTP, Dropbox for business) this will refer to a different directory. On these, paths without a leading / will refer to your "home" directory and paths with a leading / will refer to the root.

:backend:path/to/dir

This is an advanced form for creating remotes on the fly. backend should be the name or prefix of a backend (the type in the config file) and all the configuration for the backend should be provided on the command line (or in environment variables).

Here are some examples:

rclone lsd --http-url https://pub.rclone.org :http:

To list all the directories in the root of https://pub.rclone.org/.

rclone lsf --http-url https://example.com :http:path/to/dir

To list files and directories in https://example.com/path/to/dir/

rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir

To copy files and directories in https://example.com/path/to/dir to /tmp/dir.

rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir

To copy files and directories from example.com in the relative directory path/to/dir to /tmp/dir using sftp.

Valid remote names

- Remote names may only contain 0-9, A-Z, a-z, _, - and space
- Remote names may not start with -

Quoting and the shell

When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.

Here are some gotchas which may help users unfamiliar with the shell rules.

Linux / OSX

If your names have spaces or shell metacharacters (eg *, ?, $, ', " etc) then you must quote them. Use single quotes ' by default.

rclone copy 'Important files?' remote:backup

If you want to send a ' you will need to use ", eg

rclone copy "O'Reilly Reviews" remote:backup

The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.

Windows

If your names have spaces in them you need to put them in ", eg

rclone copy "E:\folder name\folder name\folder name" remote:backup

If you are using the root directory on its own then don't quote it (see #464 for why), eg

rclone copy E:\ remote:backup

Copying files or directories with : in the names

rclone uses : to mark a remote name. This is, however, a valid filename component in non-Windows OSes. The remote name parser will only search for a : up to the first / so if you need to act on a file or directory like this then use the full path starting with a /, or use ./ as a current directory prefix.

So to sync a directory called sync:me to a remote called remote: use

rclone sync -i ./sync:me remote:path

or

rclone sync -i /full/path/to/sync:me remote:path

Server Side Copy

Most remotes (but not all - see the overview) support server side copy.

This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.

Eg

rclone copy s3:oldbucket s3:newbucket

Will copy the contents of oldbucket to newbucket without downloading and re-uploading.

Remotes which don't support server side copy will download and re-upload in this case.

Server side copies are used with sync and copy and will be identified in the log when using the -v flag. The move command may also use them if the remote doesn't support server side move directly. This is done by issuing a server side copy then a delete which is much quicker than a download and re-upload.

Server side copies will only be attempted if the remote names are the same.

This can be used when scripting to make aged backups efficiently, eg

rclone sync -i remote:current-backup remote:previous-backup
rclone sync -i /path/to/files remote:current-backup

Options

Rclone has a number of options to control its behaviour.

Options that take parameters can have the values passed in two ways, --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.

Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30, 2**40 and 2**50 respectively.

--backup-dir=DIR

When using sync, copy or move any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.

If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.

The remote in use must support server side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory.

For example

rclone sync -i /path/to/local remote:current --backup-dir remote:old

will sync /path/to/local to remote:current, but any files which would have been updated or deleted will be stored in remote:old.

If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today's date.
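For example, from a shell (a sketch assuming GNU date, which prints today's date with -I):

rclone sync -i /path/to/local remote:current --backup-dir remote:old/$(date -I)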

See --compare-dest and --copy-dest.

--bind string

Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.

--bwlimit=BANDWIDTH_SPEC

This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable.

Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0 which means to not limit bandwidth.

For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M

It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH... where: WEEKDAY is an optional element. It can be written as the whole word or using only the first 3 characters. HH:MM is an hour from 00:00 to 23:59.

An example of a typical timetable to avoid link saturation during daytime working hours could be:

--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"

In this example, the transfer bandwidth will be set to 512kBytes/sec at 8am every day. At noon, it will rise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.

An example of timetable with WEEKDAY could be:

--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"

This means that the transfer bandwidth will be set to 512kBytes/sec on Monday. It will rise to 10Mbytes/s before the end of Friday. At 10:00 on Saturday it will be set to 1Mbyte/s. From 20:00 on Sunday it will be unlimited.

Timeslots without a weekday are extended to the whole week. So this example:

--bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"

Is equal to this:

--bwlimit "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"

Bandwidth limits only apply to the data transfer. They don't apply to the bandwidth of the directory listings etc.

Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M parameter for rclone.

On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled by sending a SIGUSR2 signal to rclone. This allows you to remove the limit on a long running rclone transfer and restore it to the value specified with --bwlimit quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:

kill -SIGUSR2 $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to change the bwlimit dynamically:

rclone rc core/bwlimit rate=1M

--bwlimit-file=BANDWIDTH_SPEC

This option controls the per file bandwidth limit. For the options see the --bwlimit flag.

For example, use this to allow no transfers to be faster than 1MByte/s

--bwlimit-file 1M

This can be used in conjunction with --bwlimit.

Note that if a schedule is provided the file will use the schedule in effect at the start of the transfer.

--buffer-size=SIZE

Use this sized buffer to speed up file transfers. Each --transfer will use this much memory for buffering.

When using mount or cmount each open file descriptor will use this much memory for buffering. See the mount documentation for more details.

Set to 0 to disable the buffering for the minimum memory usage.

Note that the memory allocation of the buffers is influenced by the --use-mmap flag.
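For example, with --transfers 4 and --buffer-size 16M, up to 64M of memory may be used for buffering in total.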

--check-first

If this flag is set then in a sync, copy or move, rclone will do all the checks to see whether files need to be transferred before doing any of the transfers. Normally rclone would start running transfers as soon as possible.

This flag can be useful on IO limited systems where transfers interfere with checking.

Using this flag can use more memory as it effectively sets --max-backlog to infinite. This means that all the info on the objects to transfer is held in memory before the transfers start.

--checkers=N

The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.

The default is to run 8 checkers in parallel.

-c, --checksum

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.

This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.

This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.

Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.

When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.

--compare-dest=DIR

When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is NOT copied from source. This is useful to copy just files that have changed since the last backup.

You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.
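
As a sketch, assuming remote:backup holds the previous backup on the same remote as the destination, only files not already present in remote:backup would be copied:

rclone copy --compare-dest remote:backup /path/to/src remote:current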

See --copy-dest and --backup-dir.

--config=CONFIG_FILE

Specify the location of the rclone config file.

Normally the config file is in your home directory as a file called .config/rclone/rclone.conf (or .rclone.conf if created with an older version). If $XDG_CONFIG_HOME is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf.

If there is a file rclone.conf in the same directory as the rclone executable it will be preferred. This file must be created manually for Rclone to use it, it will never be created automatically.

If you run rclone config file you will see where the default location is for you.

Use this flag to override the config location, eg rclone --config=".myconfig" .config.

--contimeout=TIME

Set the connection timeout. This should be in Go time format, which looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.

The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default.

--copy-dest=DIR

When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is server side copied from DIR to the destination. This is useful for incremental backup.

The remote in use must support server side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.
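
A sketch of an incremental backup, assuming remote:full holds a previous full backup on the same remote. Unchanged files are server side copied from remote:full instead of being re-uploaded:

rclone copy --copy-dest remote:full /path/to/src remote:incremental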

See --compare-dest and --backup-dir.

--dedupe-mode MODE

Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.
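
For example, to automatically keep the newest copy of any duplicates on a remote (the remote name is a placeholder):

rclone dedupe --dedupe-mode newest remote:path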

--disable FEATURE,FEATURE,...

This disables a comma separated list of optional features. For example to disable server side move and server side copy use:

--disable move,copy

The features can be put in any case.

To see a list of which features can be disabled use:

--disable help

See the overview features and optional features to get an idea of which feature does what.

This flag can be useful for debugging and in exceptional circumstances (eg Google Drive limiting the total volume of Server Side Copies to 100GB/day).

-n, --dry-run

Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync command which deletes files in the destination.

--expect-continue-timeout=TIME

This specifies the amount of time to wait for a server's first response headers after fully writing the request headers if the request has an "Expect: 100-continue" header. Not all backends support using this.

Zero means no timeout and causes the body to be sent immediately, without waiting for the server to approve. This time does not include the time to send the request header.

The default is 1s. Set to 0 to disable.

--error-on-no-transfer

By default, rclone will exit with return code 0 if there were no errors.

This option allows rclone to return exit code 9 if no files were transferred between the source and destination. This allows using rclone in scripts, and triggering follow-on actions if data was copied, or skipping if not.

NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check and adjust your scripts accordingly!
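
A minimal shell sketch of acting on exit code 9 (the paths are placeholders):

rclone copy --error-on-no-transfer /path/to/src remote:dst
if [ $? -eq 9 ]; then
    echo "no files were transferred - skipping follow-on actions"
fi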

--header

Add an HTTP header for all transactions. The flag can be repeated to add multiple headers.

If you want to add headers only for uploads use --header-upload and if you want to add headers only for downloads use --header-download.

This flag is supported for all HTTP based backends even those not supported by --header-upload and --header-download so may be used as a workaround for those with care.

rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes"

--header-download

Add an HTTP header for all download transactions. The flag can be repeated to add multiple headers.

rclone sync -i s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"

See the GitHub issue here for currently supported backends.

--header-upload

Add an HTTP header for all upload transactions. The flag can be repeated to add multiple headers.

rclone sync -i ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"

See the GitHub issue here for currently supported backends.

--ignore-case-sync

Using this option will cause rclone to ignore the case of the files when synchronizing so files will not be copied/synced when the existing filenames are the same, even if the casing is different.

--ignore-checksum

Normally rclone will check that the checksums of transferred files match, and give an error "corrupted on transfer" if they don't.

You can use this option to skip that check. You should only use it if you have had the "corrupted on transfer" error message and you are sure you might want to transfer potentially corrupted data.

--ignore-existing

Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.

While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.

--ignore-size

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum is set then it only checks the checksum.

It will also cause rclone to skip verifying the sizes are the same after transfer.

This can be useful for transferring files to and from OneDrive which occasionally misreports the size of image files (see #399 for more info).

-I, --ignore-times

Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.

Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).

--immutable

Treat source and destination files as immutable and disallow modification.

With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified.

Note that only commands which transfer files (e.g. sync, copy, move) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g. sync, move). Use copy --immutable if it is desired to avoid deletion as well as modification.

This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.
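
For example, to top up an append-only backup archive while refusing to modify anything already uploaded (the paths are placeholders):

rclone copy --immutable /path/to/backups remote:archive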

-i / --interactive

This flag can be used to tell rclone that you wish a manual confirmation before destructive operations.

It is recommended that you use this flag while learning rclone especially with rclone sync.

For example

$ rclone delete -i /tmp/dir
rclone: delete "important-file.txt"?
y) Yes, this is OK (default)
n) No, skip this
s) Skip all delete operations with no more questions
!) Do all delete operations with no more questions
q) Exit rclone now.
y/n/s/!/q> n

The options mean: y carries out the operation (this is the default), n skips it, s skips all remaining operations of this type without asking again, ! carries out all remaining operations of this type without asking again, and q exits rclone immediately.

--leave-root

During rmdirs it will not remove the root directory, even if it's empty.

--log-file=FILE

Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

If FILE exists then rclone will append to it.

Note that if you are using the logrotate program to manage rclone's logs, then you should use the copytruncate option as rclone doesn't have a signal to rotate logs.
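
For example, to capture a verbose sync log for later troubleshooting (the paths are placeholders):

rclone sync -v --log-file=/var/log/rclone.log /path/to/src remote:dst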

--log-format LIST

Comma separated list of log format options. date, time, microseconds, longfile, shortfile, UTC. The default is "date,time".

--log-level LEVEL

This sets the log level for rclone. The default log level is NOTICE.

DEBUG is equivalent to -vv. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.

INFO is equivalent to -v. It outputs information about each transfer and prints stats once a minute by default.

NOTICE is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.

ERROR is equivalent to -q. It only outputs error messages.

--use-json-log

This switches the log format to JSON for rclone. The fields of the JSON log are level, msg, source and time.

--low-level-retries NUMBER

This controls the number of low level retries rclone does.

A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v flag.

This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries flag) quicker.

Disable low level retries with --low-level-retries 1.

--max-backlog=N

This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred.

This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use on the order of N kB of memory when the backlog is in use.

Setting this large allows rclone to calculate how many files are pending more accurately, give a more accurate estimated finish time and make --order-by work more accurately.

Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.

Setting this to a negative number will make the backlog as large as possible.

--max-delete=N

This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.

--max-depth=N

This modifies the recursion depth for all the commands except purge.

So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory. Using --max-depth 2 means you will see all the files in first two directory levels and so on.

For historical reasons the lsd command defaults to using a --max-depth of 1 - you can override this with the command line flag.

You can use this command to disable recursion (with --max-depth 1).

Note that if you use this with sync and --delete-excluded the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run if you are not sure what will happen.

--max-duration=TIME

Rclone will stop scheduling new transfers when it has run for the duration specified.

Defaults to off.

When the limit is reached any existing transfers will complete.

Rclone won't exit with an error if the transfer limit is reached.

--max-transfer=SIZE

Rclone will stop transferring when it has reached the size specified. Defaults to off.

When the limit is reached all transfers will stop immediately.

Rclone will exit with exit code 8 if the transfer limit is reached.

--cutoff-mode=hard|soft|cautious

This modifies the behavior of --max-transfer. Defaults to --cutoff-mode=hard.

Specifying --cutoff-mode=hard will stop transferring immediately when Rclone reaches the limit.

Specifying --cutoff-mode=soft will stop starting new transfers when Rclone reaches the limit.

Specifying --cutoff-mode=cautious will try to prevent Rclone from reaching the limit.
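
For example, a sketch of a nightly job which uploads at most about 100G and then stops starting new transfers, letting in-flight ones finish (the paths are placeholders):

rclone copy --max-transfer 100G --cutoff-mode soft /path/to/src remote:dst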

--modify-window=TIME

When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.

The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.

This command line flag allows you to override that computed default.
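
For example, when syncing to a filesystem which only stores times to 1 second precision you might use (the paths are placeholders):

rclone sync --modify-window 1s /path/to/src remote:dst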

--multi-thread-cutoff=SIZE

When downloading files to the local backend above this size, rclone will use multiple threads to download the file (default 250M).

Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on unix or NTSetInformationFile on Windows, both of which take no time) then each thread writes directly into the file at the correct place. This means that rclone won't create fragmented or sparse files and there won't be any assembly time at the end of the transfer.

The number of threads used to download is controlled by --multi-thread-streams.

Use -vv if you wish to see info about the threads.

This will work with the sync/copy/move commands and friends copyto/moveto. Multi thread downloads will be used with rclone mount and rclone serve if --vfs-cache-mode is set to writes or above.

NB that this only works for a local destination but will work with any source.

NB that multi thread copies are disabled for local to local copies as they are faster without unless --multi-thread-streams is set explicitly.

NB on Windows using multi-thread downloads will cause the resulting files to be sparse. Use --local-no-sparse to disable sparse files (which may cause long delays at the start of downloads) or disable multi-thread downloads with --multi-thread-streams 0

--multi-thread-streams=N

When using multi thread downloads (see above --multi-thread-cutoff) this sets the maximum number of streams to use. Set to 0 to disable multi thread downloads (Default 4).

Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams Rclone divides the size of the file by the --multi-thread-cutoff and rounds up, up to the maximum set with --multi-thread-streams.

So if --multi-thread-cutoff 250MB and --multi-thread-streams 4 are in effect (the defaults):

0MB..250MB files will be downloaded with 1 stream
250MB..500MB files will be downloaded with 2 streams
500MB..750MB files will be downloaded with 3 streams
750MB+ files will be downloaded with 4 streams

--no-check-dest

The --no-check-dest flag can be used with move or copy and it causes rclone not to check the destination at all when copying files.

This means that:

the destination is not listed, minimising the API calls
files are always transferred
this can cause duplicates on remotes which allow it (eg Google Drive)
--retries 1 is recommended otherwise you'll transfer everything again on a retry

This flag is useful to minimise the transactions if you know that none of the files are on the destination.

This is a specialized flag which should be ignored by most users!

--no-gzip-encoding

Don't set Accept-Encoding: gzip. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with Content-Encoding: gzip but you uploaded compressed files.

There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.

--no-traverse

The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands. --no-traverse is not compatible with sync and will be ignored if you supply it with sync.

If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.

However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use --no-traverse.

See rclone copy for an example of how to use it.

--no-unicode-normalization

Don't normalize unicode characters in filenames during the sync routine.

Sometimes, an operating system will store filenames containing unicode parts in their decomposed form (particularly macOS). Some cloud storage systems will then recompose the unicode, resulting in duplicate files if the data is ever copied back to a local filesystem.

Using this flag will disable that functionality, treating each unicode character as unique. For example, by default é and é will be normalized into the same character. With --no-unicode-normalization they will be treated as unique characters.

--no-update-modtime

When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.

This can be used if the remote is being synced with another tool also (eg the Google Drive client).

--order-by string

The --order-by flag controls the order in which files in the backlog are processed in rclone sync, rclone copy and rclone move.

The order by string is constructed like this. The first part describes what aspect is being measured:

size - order by the size of the files
mtime - order by the modification date of the files
name - order by the full path of the files

This can have a modifier appended with a comma:

ascending or asc - order so that the smallest (or oldest) is processed first
descending or desc - order so that the largest (or newest) is processed first
mixed - order so that the smallest is processed first for some threads and the largest for others

If the modifier is mixed then it can have an optional percentage (which defaults to 50), eg size,mixed,25 which means that 25% of the threads should be taking the smallest items and 75% the largest. The threads which take the smallest first will always take the smallest first, and likewise for the threads which take the largest first. The mixed mode can be useful to minimise the transfer time when you are transferring a mixture of large and small files - the large files are guaranteed upload threads and bandwidth and the small files will be processed continuously.

If no modifier is supplied then the order is ascending.

For example

--order-by size,desc - send the largest files first
--order-by modtime,ascending - send the oldest files first
--order-by name - send the files alphabetically by path

If the --order-by flag is not supplied or it is supplied with an empty string then the default ordering will be used which is as scanned. With --checkers 1 this is mostly alphabetical, however with the default --checkers 8 it is somewhat random.

Limitations

The --order-by flag does not do a separate pass over the data. This means that it may transfer some files out of the order specified if:

there are no files in the backlog or the source has not been fully scanned yet
there are more than --max-backlog files in the backlog

Rclone will do its best to transfer the best file it has so in practice this should not cause a problem. Think of --order-by as being more of a best efforts flag rather than a perfect ordering.

--password-command SpaceSepList

This flag supplies a program which should supply the config password when run. This is an alternative to rclone prompting for the password or setting the RCLONE_CONFIG_PASS variable.

The argument to this should be a command with a space separated list of arguments. If one of the arguments has a space in it then enclose it in ". If you want a literal " in an argument then enclose the argument in " and double the ". See CSV encoding for more info.

Eg

--password-command echo hello
--password-command echo "hello with space"
--password-command echo "hello with ""quotes"" and space"

See the Configuration Encryption for more info.

See a Windows PowerShell example on the Wiki.

-P, --progress

This flag makes rclone update the stats in a static block in the terminal providing a realtime overview of the transfer.

Any log messages will scroll above the static block. Log messages will push the static block down to the bottom of the terminal where it will stay.

Normally this is updated every 500ms but this period can be overridden with the --stats flag.

This can be used with the --stats-one-line flag for a simpler display.

Note: On Windows until this bug is fixed all non-ASCII characters will be replaced with . when --progress is in use.

-q, --quiet

This flag will limit rclone's output to error messages only.

--refresh-times

The --refresh-times flag can be used to update modification times of existing files when they are out of sync on backends which don't support hashes.

This is useful if you uploaded files with the incorrect timestamps and you now wish to correct them.

This flag is only useful for destinations which don't support hashes (eg crypt).

This can be used with any of the sync commands sync, copy or move.

To use this flag you will need to be doing a modification time sync (so not using --size-only or --checksum). The flag will have no effect when using --size-only or --checksum.

If this flag is used when rclone comes to upload a file it will check to see if there is an existing file on the destination. If this file matches the source with size (and checksum if available) but has a differing timestamp then instead of re-uploading it, rclone will update the timestamp on the destination file. If the checksum does not match rclone will upload the new file. If the checksum is absent (eg on a crypt backend) then rclone will update the timestamp.

Note that some remotes can't set the modification time without re-uploading the file so this flag is less useful on them.

Normally if you are doing a modification time sync rclone will update modification times without --refresh-times provided that the remote supports checksums and the checksums match on the file. However if the checksums are absent then rclone will upload the file rather than setting the timestamp as this is the safe behaviour.

--retries int

Retry the entire sync if it fails this many times (default 3).

Some remotes can be unreliable and a few retries help pick up the files which didn't get transferred because of errors.

Disable retries with --retries 1.

--retries-sleep=TIME

This sets the interval between each retry specified by --retries.

The default is 0. Use 0 to disable.
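
For example, to retry a flaky sync up to 5 times, pausing 10 seconds between attempts (the paths are placeholders):

rclone sync --retries 5 --retries-sleep 10s /path/to/src remote:dst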

--size-only

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.

This can be useful when transferring files from Dropbox which have been modified by the desktop sync client, which doesn't set checksums or modification times in the same way as rclone.

--stats=TIME

Commands which transfer data (sync, copy, copyto, move, moveto) will print data transfer stats at regular intervals to show their progress.

This sets the interval.

The default is 1m. Use 0 to disable.

If you set the stats interval then all commands can show stats. This can be useful when running other commands, check or mount for example.

Stats are logged at INFO level by default which means they won't show at default log level NOTICE. Use --stats-log-level NOTICE or -v to make them show. See the Logging section for more info on log levels.

Note that on macOS you can send a SIGINFO (which is normally ctrl-T in the terminal) to make the stats print immediately.

--stats-file-name-length integer

By default, the --stats output will truncate file names and paths longer than 40 characters. This is equivalent to providing --stats-file-name-length 40. Use --stats-file-name-length 0 to disable any truncation of file names printed by stats.

--stats-log-level string

Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or ERROR. The default is INFO. This means at the default level of logging which is NOTICE the stats won't show - if you want them to then use --stats-log-level NOTICE. See the Logging section for more info on log levels.

--stats-one-line

When this is specified, rclone condenses the stats into a single line showing the most important stats only.

--stats-one-line-date

When this is specified, rclone enables the single-line stats and prepends the display with a date string. The default is 2006/01/02 15:04:05 -

--stats-one-line-date-format

When this is specified, rclone enables the single-line stats and prepends the display with a user-supplied date string. The date string MUST be enclosed in quotes. Follow golang specs for date formatting syntax.
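
For example, using the Go reference time to get an ISO style prefix (the format string and paths are just an illustration):

rclone sync -P --stats-one-line-date-format "2006-01-02 15:04:05 " /path/to/src remote:dst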

--stats-unit=bits|bytes

By default, data transfer rates will be printed in bytes/second.

This option allows the data rate to be printed in bits/second.

Data transfer volume will still be reported in bytes.

The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.

The default is bytes.

--suffix=SUFFIX

When using sync, copy or move any files which would have been overwritten or deleted will have the suffix added to them. If there is a file with the same path (after the suffix has been added), then it will be overwritten.

The remote in use must support server side move or copy and you must use the same remote as the destination of the sync.

This can be used on its own, in which case the suffixed backup files are kept in the destination directory, or together with --backup-dir. See --backup-dir for more info.

For example

rclone copy -i /path/to/local/file remote:current --suffix .bak

will copy /path/to/local/file to remote:current, but any files which would have been updated or deleted will have .bak added.

If using rclone sync with --suffix and without --backup-dir then it is recommended to put a filter rule in excluding the suffix otherwise the sync will delete the backup files.

rclone sync -i /path/to/local/file remote:current --suffix .bak --exclude "*.bak"

--suffix-keep-extension

When using --suffix, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.

So let's say we had --suffix -2019-01-01, without the flag file.txt would be backed up to file.txt-2019-01-01 and with the flag it would be backed up to file-2019-01-01.txt. This can be helpful to make sure the suffixed files can still be opened.

--syslog

On capable OSes (not Windows or Plan9) send all log output to syslog.

This can be useful for running rclone in a script or rclone mount.

--syslog-facility string

If using --syslog this sets the syslog facility (eg KERN, USER). See man syslog for a list of possible facilities. The default facility is DAEMON.

--tpslimit float

Limit HTTP transactions per second to this. Default is 0 which is used to mean unlimited transactions per second.

For example to limit rclone to 10 HTTP transactions per second use --tpslimit 10, or to 1 transaction every 2 seconds use --tpslimit 0.5.

Use this when the number of transactions per second from rclone is causing a problem with the cloud storage provider (eg getting you banned or rate limited).

This can be very useful for rclone mount to control the behaviour of applications using it.

See also --tpslimit-burst.

--tpslimit-burst int

Max burst of transactions for --tpslimit (default 1).

Normally --tpslimit will do exactly the number of transactions per second specified. However if you supply --tpslimit-burst then rclone can save up some transactions from when it was idle giving a burst of up to the parameter supplied.

For example if you provide --tpslimit-burst 10 then if rclone has been idle for more than 10*--tpslimit then it can do 10 transactions very quickly before they are limited again.

This may be used to increase performance of --tpslimit without changing the long term average number of transactions per second.

--track-renames

By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.

If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during sync operations and perform renaming server-side.

Files will be matched by size and hash - if both match then a rename will be considered.

If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console.

Encrypted destinations are not currently supported by --track-renames if --track-renames-strategy includes hash.

Note that --track-renames is incompatible with --no-traverse and that it uses extra memory to keep track of all the rename candidates.

Note also that --track-renames is incompatible with --delete-before and will select --delete-after instead of --delete-during.

--track-renames-strategy (hash,modtime,leaf,size)

This option changes the matching criteria for --track-renames.

The matching is controlled by a comma separated selection of these tokens:

modtime - the modification time of the file - not supported on all backends
hash - the hash of the file contents - not supported on all backends
leaf - the name of the file not including its directory name
size - the size of the file (this is always enabled)

So using --track-renames-strategy modtime,leaf would match files based on modification time, the leaf of the file name and the size only.

Using --track-renames-strategy modtime or leaf can enable --track-renames support for encrypted destinations.

If nothing is specified, the default option is matching by hashes.

Note that the hash strategy is not supported with encrypted destinations.

--delete-(before,during,after)

This option allows you to specify when files on your destination are deleted when you sync folders.

Specifying the value --delete-before will delete all files present on the destination but not on the source, before starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.

Specifying --delete-during will delete files while checking and uploading files. This is the fastest option and uses the least memory.

Specifying --delete-after (the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors.

--fast-list

When doing anything which involves a directory listing (eg sync, copy, ls - in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.

However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).

If you use the --fast-list flag then rclone will use this method for listing directories. This will have the following consequences for the listing:

It will use fewer transactions (important if you pay for them)
It will use more memory. Rclone has to load the whole listing into memory.
It may be faster because it uses fewer transactions
It may be slower because it can't be parallelized

rclone should always give identical results with and without --fast-list.

If you pay for transactions and can fit your entire sync listing into memory then --fast-list is recommended. If you have a very big sync to do then don't use --fast-list otherwise you will run out of memory.

If you use --fast-list on a remote which doesn't support it, then rclone will just ignore it.
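
For example, to trade memory for fewer (paid) transactions when syncing two S3 buckets (the bucket names are placeholders):

rclone sync --fast-list s3:src-bucket s3:dst-bucket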

--timeout=TIME

This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.

The default is 5m. Set to 0 to disable.

--transfers=N

The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.

The default is to run 4 file transfers in parallel.

-u, --update

This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.

This can be useful when transferring to a remote which doesn't support mod times directly (or when using --use-server-modtime to avoid extra API calls) as it is more accurate than a --size-only check and faster than using --checksum.

If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different. If --checksum is set then rclone will update the destination if the checksums differ too.

If an existing destination file is older than the source file then it will be updated if the size or checksum differs from the source file.

On remotes which don't support mod time directly (or when using --use-server-modtime) the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

--use-mmap

If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by --buffer-size). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.

If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator which may use more memory as memory pages are returned less aggressively to the OS.

It is possible this does not work well on all platforms so it is disabled by default; in the future it may be enabled by default.

--use-server-modtime

Some object-store backends (e.g, Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.

Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync using --update, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.

Using this flag on a sync operation without also using --update would cause all files modified at any time other than the last upload time to be uploaded again, which is probably not what you want.

-v, -vv, --verbose

With -v rclone will tell you about each file that is transferred and a small number of significant events.

With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.

-V, --version

Prints the version number.

SSL/TLS options

The outgoing SSL/TLS connections rclone makes can be controlled with these options. For example this can be very useful with the HTTP or WebDAV backends. Rclone HTTP servers have their own set of configuration for SSL/TLS which you can find in their documentation.

--ca-cert string

This loads the PEM encoded certificate authority certificate and uses it to verify the certificates of the servers rclone connects to.

If you have generated certificates signed with a local CA then you will need this flag to connect to servers using those certificates.

--client-cert string

This loads the PEM encoded client side certificate.

This is used for mutual TLS authentication.

The --client-key flag is required too when using this.

--client-key string

This loads the PEM encoded client side private key used for mutual TLS authentication. Used in conjunction with --client-cert.

--no-check-certificate=true/false

--no-check-certificate controls whether a client verifies the server's certificate chain and host name. If --no-check-certificate is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.

This option defaults to false.

This should be used only for testing.

Configuration Encryption

Your configuration file contains information for logging in to your cloud services. This means that you should keep your .rclone.conf file in a secure location.

If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to supply the password every time you start rclone.

To add a password to your rclone configuration, execute rclone config.

>rclone config
Current remotes:

e) Edit existing remote
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/s/q>

Go into s, Set configuration password:

e/n/d/s/q> s
Your configuration is not encrypted.
If you add a password, you will protect your login information to cloud services.
a) Add Password
q) Quit to main menu
a/q> a
Enter NEW configuration password:
password:
Confirm NEW password:
password:
Password set
Your configuration is encrypted.
c) Change Password
u) Unencrypt configuration
q) Quit to main menu
c/u/q>

Your configuration is now encrypted, and every time you start rclone you will have to supply the password. See below for details. In the same menu, you can change the password or completely remove encryption from your configuration.

There is no way to recover the configuration if you lose your password.

rclone uses nacl secretbox which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.

While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, except perhaps if you use a very strong password.

If it is safe in your environment, you can set the RCLONE_CONFIG_PASS environment variable to contain your password, in which case it will be used for decrypting the configuration.

You can set this for a session from a script. For unix like systems save this to a file called set-rclone-password:

#!/bin/echo Source this file don't run it

read -s RCLONE_CONFIG_PASS
export RCLONE_CONFIG_PASS

Then source the file when you want to use it. From the shell you would do source set-rclone-password. It will then ask you for the password and set it in the environment variable.

An alternate means of supplying the password is to provide a script which will retrieve the password and print it on standard output. This script should have a fully specified path name and not rely on any environment variables. The script is supplied either via the --password-command="..." command line argument or via the RCLONE_PASSWORD_COMMAND environment variable.

One useful example of this is using the passwordstore application to retrieve the password:

export RCLONE_PASSWORD_COMMAND="pass rclone/config"

If the passwordstore password manager holds the password for the rclone configuration, using the script method means the password is primarily protected by the passwordstore system, and is never embedded in the clear in scripts, nor available for examination using standard commands. It is quite possible with long running rclone sessions for copies of passwords to be innocently captured in log files or terminal scroll buffers, etc. Using the script method of supplying the password enhances the security of the config password considerably.

If you are running rclone inside a script, unless you are using the --password-command method, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password, and --password-command has not been supplied.

Developer options

These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with the remote name, eg --drive-test-option - see the docs for the remote in question.

--cpuprofile=FILE

Write CPU profile to file. This can be analysed with go tool pprof.

--dump flag,flag,flag

The --dump flag takes a comma separated list of flags to dump info about.

Note that some headers including Accept-Encoding as shown may not be correct in the request, and the response may not show Content-Encoding if the Go standard library's automatic gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.

The available flags are described below.

--dump headers

Dump HTTP headers with Authorization: lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.

Use --dump auth if you do want the Authorization: headers.

--dump bodies

Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.

Note that the bodies are buffered in memory so don't use this for enormous files.

--dump requests

Like --dump bodies but dumps the request bodies and the response headers. Useful for debugging download problems.

--dump responses

Like --dump bodies but dumps the response bodies and the request headers. Useful for debugging upload problems.

--dump auth

Dump HTTP headers - will contain sensitive info such as Authorization: headers - use --dump headers to dump without Authorization: headers. Can be very verbose. Useful for debugging only.

--dump filters

Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

--dump goroutines

This dumps a list of the running go-routines at the end of the command to standard output.

--dump openfiles

This dumps a list of the open files at the end of the command. It uses the lsof command to do that so you'll need that installed to use it.

--memprofile=FILE

Write memory profile to file. This can be analysed with go tool pprof.

Filtering

For the filtering options, see the filtering section.

Remote control

For the remote control options and for instructions on how to remote control rclone, see the remote control section.

Logging

rclone has 4 levels of logging, ERROR, NOTICE, INFO and DEBUG.

By default, rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (eg rclone ls).

By default, rclone will produce Error and Notice level messages.

If you use the -q flag, rclone will only produce Error messages.

If you use the -v flag, rclone will produce Error, Notice and Info messages.

If you use the -vv flag, rclone will produce Error, Notice, Info and Debug messages.

You can also control the log levels with the --log-level flag.

If you use the --log-file=FILE option, rclone will redirect Error, Info and Debug messages along with standard error to FILE.

If you use the --syslog flag then rclone will log to syslog, and the --syslog-facility flag controls which facility it uses.

Rclone prefixes all log messages with their level in capitals, eg INFO which makes it easy to grep the log file for different kinds of information.

Exit Code

If any errors occur during the command execution, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.

During the startup phase, rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.

When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.

List of exit codes

0 - success
1 - Syntax or usage error
2 - Error not otherwise categorised
3 - Directory not found
4 - File not found
5 - Temporary error (one that more retries might fix) (Retry errors)
6 - Less serious errors (like 461 errors from dropbox) (NoRetry errors)
7 - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
8 - Transfer exceeded - limit set by --max-transfer reached
9 - Operation successful, but no files transferred

Environment Variables

Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

Options

Every option in rclone can have its default set by environment variable.

To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.

For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.

Or to always use the trash in drive --drive-use-trash, set RCLONE_DRIVE_USE_TRASH=true.

The same parser is used for the options and the environment variables so they take exactly the same form.

Config file

You can set defaults for values in the config file on an individual remote basis. If you want to use this feature, you will need to discover the name of the config items that you want. The easiest way is to run through rclone config by hand, then look in the config file to see what the values are (the config file can be found by looking at the help for --config in rclone help).

To find the name of the environment variable that you need to set, take RCLONE_CONFIG_ + name of remote + _ + name of config file option and make it all uppercase.

For example, to configure an S3 remote named mys3: without a config file (using unix ways of setting environment variables):

$ export RCLONE_CONFIG_MYS3_TYPE=s3
$ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
$ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
$ rclone lsd MYS3:
          -1 2016-09-21 12:54:21        -1 my-bucket
$ rclone listremotes | grep mys3
mys3:

Note that if you want to create a remote using environment variables you must create the ..._TYPE variable as above.

Precedence

The various different methods of backend configuration are read in this order and the first one with a value is used:

Flag values as supplied on the command line, eg --drive-use-trash
Remote specific environment vars, eg RCLONE_CONFIG_MYREMOTE_USE_TRASH (see above)
Backend specific environment vars, eg RCLONE_DRIVE_USE_TRASH
Config file, eg use_trash = false
Default values, eg true - the default for drive-use-trash

So if both --drive-use-trash is supplied on the command line and an environment variable RCLONE_DRIVE_USE_TRASH is set, the command line flag will take preference.

For non backend configuration the order is as follows:

Flag values as supplied on the command line, eg --stats 5s
Environment vars, eg RCLONE_STATS (see above)
Default values, eg 1m - the default for --stats

Other environment variables

RCLONE_CONFIG_PASS set to contain your config file password (see the Configuration Encryption section)
HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof); HTTPS_PROXY takes precedence over HTTP_PROXY for https requests

Configuring rclone on a remote / headless machine

Some of the configurations (those involving oauth2) require an Internet connected web browser.

If you are trying to set rclone up on a remote or headless box with no browser available on it (eg a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below.

Configuring using rclone authorize

On the headless box run rclone config but answer N to the Use auto config? question.

...
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n> n
For this to work, you will need rclone available on a machine that has
a web browser available.

For more help and alternate methods see: https://rclone.org/remote_setup/

Execute the following on the machine with the web browser (same rclone
version recommended):

    rclone authorize "amazon cloud drive"

Then paste the result below:
result>

Then on your main desktop machine

rclone authorize "amazon cloud drive"
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Paste the following into your remote machine --->
SECRET_TOKEN
<---End paste

Then back to the headless box, paste in the code

result> SECRET_TOKEN
--------------------
[acd12]
client_id = 
client_secret = 
token = SECRET_TOKEN
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>

Configuring by copying the config file

Rclone stores all of its config in a single configuration file. This can easily be copied to configure a remote rclone.

So first configure rclone on your desktop machine with

rclone config

to set up the config file.

Find the config file by running rclone config file, for example

$ rclone config file
Configuration file is stored at:
/home/user/.rclone.conf

Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and place it in the correct place (use rclone config file on the remote box to find out where).

Filtering, includes and excludes

Rclone has a sophisticated set of include and exclude rules. Some of these are based on patterns and some on other things like file size.

The filters are applied for the copy, sync, move, ls, lsl, md5sum, sha1sum, size, delete and check operations. Note that purge does not obey the filters.

Each path as it passes through rclone is matched against the include and exclude rules like --include, --exclude, --include-from, --exclude-from, --filter, or --filter-from. The simplest way to try them out is using the ls command, or --dry-run together with -v. --filter-from, --exclude-from, --include-from, --files-from, --files-from-raw understand - as a file name to mean read from standard input.
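
For example, a quick way to preview what a filter would match before running a real sync (the remote, paths and patterns are placeholders):

rclone ls remote:path --include "*.jpg"
rclone sync --dry-run -v /path/to/src remote:dst --exclude "*.bak"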

Patterns

The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.

If the pattern starts with a / then it only matches at the top level of the directory tree, relative to the root of the remote (not necessarily the root of the local drive). If it doesn't start with / then it is matched starting at the end of the path, but it will only match a complete path element:

file.jpg  - matches "file.jpg"
          - matches "directory/file.jpg"
          - doesn't match "afile.jpg"
          - doesn't match "directory/afile.jpg"
/file.jpg - matches "file.jpg" in the root directory of the remote
          - doesn't match "afile.jpg"
          - doesn't match "directory/file.jpg"

Important Note that you must use / in patterns and not \ even if running on Windows.

A * matches anything but not a /.

*.jpg  - matches "file.jpg"
       - matches "directory/file.jpg"
       - doesn't match "file.jpg/something"

Use ** to match anything, including slashes (/).

dir/** - matches "dir/file.jpg"
       - matches "dir/dir1/dir2/file.jpg"
       - doesn't match "directory/file.jpg"
       - doesn't match "adir/file.jpg"

A ? matches any character except a slash /.

l?ss  - matches "less"
      - matches "lass"
      - doesn't match "floss"

A [ and ] together make a character class, such as [a-z] or [aeiou] or [[:alpha:]]. See the go regexp docs for more info on these.

h[ae]llo - matches "hello"
         - matches "hallo"
         - doesn't match "hullo"

A { and } define a choice between elements. It should contain a comma separated list of patterns, any of which might match. These patterns can contain wildcards.

{one,two}_potato - matches "one_potato"
                 - matches "two_potato"
                 - doesn't match "three_potato"
                 - doesn't match "_potato"

Special characters can be escaped with a \ before them.

\*.jpg       - matches "*.jpg"
\\.jpg       - matches "\.jpg"
\[one\].jpg  - matches "[one].jpg"

Patterns are case sensitive unless the --ignore-case flag is used.

Without --ignore-case (default)

potato - matches "potato"
       - doesn't match "POTATO"

With --ignore-case

potato - matches "potato"
       - matches "POTATO"

Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir

Directories

Rclone keeps track of directories that could match any file patterns.

Eg if you add the include rule

/a/*.jpg

Rclone will synthesize the directory include rule

/a/

If you put any rules which end in / then it will only match directories.

Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don't have a concept of directory.

Differences between rsync and rclone patterns

Rclone implements bash style {a,b,c} glob matching which rsync doesn't.

Rclone always does a wildcard match so \ must always escape a \.

How the rules are used

Rclone maintains a combined list of include rules and exclude rules.

Each file is matched in order, starting from the top, against the rule in the list until it finds a match. The file is then included or excluded according to the rule type.

If the matcher fails to find a match after testing against all the entries in the list then the path is included.

For example given the following rules, + being include, - being exclude,

- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
- *

This would include

file1.jpg
file3.png
file2.avi

This would exclude

secret17.jpg
any file other than *.jpg, *.png and file2.avi

A similar process is done on directory entries before recursing into them. This only works on remotes which have a concept of directory (Eg local, google drive, onedrive, amazon drive) and not on bucket based remotes (eg s3, swift, google compute storage, b2).

Adding filtering rules

Filtering rules are added with the command line flags described below.

Repeating options

You can repeat the following options to add more than one rule of that type: --include, --include-from, --exclude, --exclude-from, --filter, --filter-from, --files-from and --files-from-raw.

Important You should not use --include* together with --exclude*. It may produce different results than you expect. In that case try using --filter* instead.

Note that all the options of the same type are processed together in the order above, regardless of what order they were placed on the command line.

So all --include options are processed first in the order they appeared on the command line, then all --include-from options etc.

To mix up the order includes and excludes, the --filter flag can be used.

--exclude - Exclude files matching pattern

Add a single exclude rule with --exclude.

This flag can be repeated. See above for the order the flags are processed in.

Eg --exclude *.bak to exclude all bak files from the sync.

--exclude-from - Read exclude patterns from file

Add exclude rules from a file.

This flag can be repeated. See above for the order the flags are processed in.

Prepare a file like this exclude-file.txt

# a sample exclude rule file
*.bak
file2.jpg

Then use as --exclude-from exclude-file.txt. This will sync all files except those ending in bak and file2.jpg.

This is useful if you have a lot of rules.

--include - Include files matching pattern

Add a single include rule with --include.

This flag can be repeated. See above for the order the flags are processed in.

Eg --include *.{png,jpg} to include all png and jpg files in the backup and no others.

This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.

--include-from - Read include patterns from file

Add include rules from a file.

This flag can be repeated. See above for the order the flags are processed in.

Prepare a file like this include-file.txt

# a sample include rule file
*.jpg
*.png
file2.avi

Then use as --include-from include-file.txt. This will sync all jpg, png files and file2.avi.

This is useful if you have a lot of rules.

This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.

--filter - Add a file-filtering rule

This can be used to add a single include or exclude rule. Include rules start with + and exclude rules start with -. A special rule called ! can be used to clear the existing rules.

This flag can be repeated. See above for the order the flags are processed in.

Eg --filter "- *.bak" to exclude all bak files from the sync.

--filter-from - Read filtering patterns from a file

Add include/exclude rules from a file.

This flag can be repeated. See above for the order the flags are processed in.

Prepare a file like this filter-file.txt

# a sample filter rule file
- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
- /dir/Trash/**
+ /dir/**
# exclude everything else
- *

Then use as --filter-from filter-file.txt. The rules are processed in the order that they are defined.

This example will include all jpg and png files, exclude any files matching secret*.jpg and include file2.avi. It will also include everything in the directory dir at the root of the sync, except dir/Trash which it will exclude. Everything else will be excluded from the sync.

--files-from - Read list of source-file names

This reads a list of file names from the file passed in and only these files are transferred. The filtering rules are ignored completely if you use this option.

--files-from expects a list of files as its input. Leading / trailing whitespace is stripped from the input lines and lines starting with # and ; are ignored.

Rclone will traverse the file system if you use --files-from, effectively using the files in --files-from as a set of filters. Rclone will not error if any of the files are missing.

If you use --no-traverse as well as --files-from then rclone will not traverse the destination file system, it will find each file individually using approximately 1 API call. This can be more efficient for small lists of files.
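
For example, a sketch combining the two flags (the paths are hypothetical):

rclone copy --files-from files.txt --no-traverse /home/me/pics remote:pics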

This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.

Paths within the --files-from file will be interpreted as starting with the root specified in the command. Leading / characters are ignored. See --files-from-raw if you need the input to be processed in a raw manner.

For example, suppose you had files-from.txt with this content:

# comment
file1.jpg
subdir/file2.jpg

You could then use it like this:

rclone copy --files-from files-from.txt /home/me/pics remote:pics

This will transfer these files only (if they exist)

/home/me/pics/file1.jpg        → remote:pics/file1.jpg
/home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg

To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:

/home/user1/important
/home/user1/dir/file
/home/user2/stuff

To copy these you'd find a common subdirectory - in this case /home - and put the remaining files in files-from.txt with or without leading /, eg

user1/important
user1/dir/file
user2/stuff

You could then copy these to a remote like this

rclone copy --files-from files-from.txt /home remote:backup

The 3 files will arrive in remote:backup with the paths as in files-from.txt, like this:

/home/user1/important → remote:backup/user1/important
/home/user1/dir/file  → remote:backup/user1/dir/file
/home/user2/stuff     → remote:backup/user2/stuff

You could of course choose / as the root too, in which case your files-from.txt might look like this:

/home/user1/important
/home/user1/dir/file
/home/user2/stuff

And you would transfer it like this

rclone copy --files-from files-from.txt / remote:backup

In this case there will be an extra home directory on the remote:

/home/user1/important → remote:backup/home/user1/important
/home/user1/dir/file  → remote:backup/home/user1/dir/file
/home/user2/stuff     → remote:backup/home/user2/stuff

--files-from-raw - Read list of source-file names without any processing

This option is the same as --files-from with the only difference being that the input is read in a raw manner. This means that lines with leading/trailing whitespace and lines starting with ; or # are read without any processing. rclone lsf has a compatible format that can be used to export file lists from remotes, which can then be used as an input to --files-from-raw.
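
For example, a sketch of exporting a file list with rclone lsf and feeding it back in unmodified (the paths are hypothetical):

rclone lsf --files-only -R src:dir > files.txt
rclone copy --files-from-raw files.txt src:dir dst:dir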

--min-size - Don't transfer any file smaller than this

This option controls the minimum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.

For example --min-size 50k means no files smaller than 50kByte will be transferred.

--max-size - Don't transfer any file larger than this

This option controls the maximum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.

For example --max-size 1G means no files larger than 1GByte will be transferred.

--max-age - Don't transfer any file older than this

This option controls the maximum age of files to transfer. Give in seconds or with a suffix of:

- ms - Milliseconds
- s - Seconds
- m - Minutes
- h - Hours
- d - Days
- w - Weeks
- M - Months
- y - Years

For example --max-age 2d means no files older than 2 days will be transferred.

This can also be an absolute time in one of these formats:

- RFC3339 - eg "2006-01-02T15:04:05Z07:00"
- ISO8601 Date and time, local timezone - "2006-01-02T15:04:05"
- ISO8601 Date and time, local timezone - "2006-01-02 15:04:05"
- ISO8601 Date - "2006-01-02" (YYYY-MM-DD)

--min-age - Don't transfer any file younger than this

This option controls the minimum age of files to transfer. Give in seconds or with a suffix (see --max-age for list of suffixes)

For example --min-age 2d means no files younger than 2 days will be transferred.

--delete-excluded - Delete files on dest excluded from sync

Important: this flag is dangerous - test with --dry-run and -v first.

When doing rclone sync this will delete any files which are excluded from the sync on the destination.

If for example you did a sync from A to B without the --min-size 50k flag

rclone sync -i A: B:

Then you repeated it like this with the --delete-excluded

rclone --min-size 50k --delete-excluded sync A: B:

This would delete all files on B which are less than 50 kBytes as these are now excluded from the sync.

Always test first with --dry-run and -v before using this flag.

--dump filters - dump the filters to the output

This dumps the defined filters to the output as regular expressions.

Useful for debugging.

--ignore-case - make searches case insensitive

Normally filter patterns are case sensitive. If this flag is supplied then filter patterns become case insensitive.

Normally a --include "file.txt" will not match a file called FILE.txt. However if you use the --ignore-case flag then --include "file.txt" will match a file called FILE.txt.
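
For example, a sketch (the file name is hypothetical):

rclone ls remote: --ignore-case --include "file.txt"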

Quoting shell metacharacters

The examples above may not work verbatim in your shell as they have shell metacharacters in them (eg *), and may require quoting.

Eg linux, OSX

--include \*.jpg
--include '*.jpg'
--include="*.jpg"

In Windows the expansion is done by the command not the shell so this should work fine

--include *.jpg

Exclude directory based on a file

It is possible to exclude a directory based on a file present in that directory. The filename should be specified using the --exclude-if-present flag. This flag has priority over the other filtering flags.

Imagine you have the following directory structure:

dir1/file1
dir1/dir2/file2
dir1/dir2/dir3/file3
dir1/dir2/dir3/.ignore

You can exclude dir3 from sync by running the following command:

rclone sync -i --exclude-if-present .ignore dir1 remote:backup

Currently only one filename is supported, i.e. --exclude-if-present should not be used multiple times.

GUI (Experimental)

Rclone can serve a web based GUI (graphical user interface). This is somewhat experimental at the moment so things may be subject to change.

Run this command in a terminal and rclone will download and then display the GUI in a web browser.

rclone rcd --rc-web-gui

This will produce logs like this and rclone needs to continue to run to serve the GUI:

2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip
2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path :  /home/USER/.cache/rclone/webgui/v0.0.6.zip]
2019/08/25 11:40:16 NOTICE: Unzipping
2019/08/25 11:40:16 NOTICE: Serving remote control on http://127.0.0.1:5572/

This assumes you are running rclone locally on your machine. It is possible to separate the rclone and the GUI - see below for details.

If you wish to check for updates then you can add --rc-web-gui-update to the command line.

If you find your GUI broken, you may force it to update by adding --rc-web-gui-force-update.

By default, rclone will open your browser. Add --rc-web-gui-no-open-browser to disable this feature.

Using the GUI

Once the GUI opens, you will be looking at the dashboard which has an overall overview.

On the left hand side you will see a series of view buttons you can click on:

(More docs and walkthrough video to come!)

How it works

When you run the rclone rcd --rc-web-gui this is what happens

- Rclone starts but only runs the remote control API ("rc").
- The API is bound to localhost with an auto generated username and password.
- If the API bundle is missing then rclone will download it.
- rclone will start serving the files from the API bundle over the same port as the API.
- rclone will open the browser with a login_token so it can log straight in.

Advanced use

The rclone rcd may use any of the flags documented on the rc page.

The flag --rc-web-gui is shorthand for

--rc-user gui
--rc-pass <random password>
--rc-serve

These flags can be overridden as desired.

See also the rclone rcd documentation.

Example: Running a public GUI

For example the GUI could be served on a public port over SSL using an htpasswd file using the following flags:

--rc-web-gui
--rc-addr :443
--rc-htpasswd /path/to/htpasswd
--rc-cert /path/to/ssl.crt
--rc-key /path/to/ssl.key

Example: Running a GUI behind a proxy

If you want to run the GUI behind a proxy at /rclone you could use these flags:

--rc-web-gui
--rc-baseurl rclone
--rc-htpasswd /path/to/htpasswd

Or instead of htpasswd if you just want a single user and password:

--rc-user me
--rc-pass mypassword

Project

The GUI is being developed in the: rclone/rclone-webui-react repository.

Bug reports and contributions are very welcome :-)

If you have questions then please ask them on the rclone forum.

Remote controlling rclone with its API

If rclone is run with the --rc flag then it starts an http server which can be used to remote control rclone using its API.

If you just want to run a remote control then see the rcd command.

Supported parameters

--rc

Flag to start the http server listening on remote requests

--rc-addr=IP

IPaddress:Port or :Port to bind server to. (default "localhost:5572")

--rc-cert=KEY

SSL PEM key (concatenation of certificate and CA certificate)

--rc-client-ca=PATH

Client certificate authority to verify clients with

--rc-htpasswd=PATH

htpasswd file - if not provided no authentication is done

--rc-key=PATH

SSL PEM Private key

--rc-max-header-bytes=VALUE

Maximum size of request header (default 4096)

--rc-user=VALUE

User name for authentication.

--rc-pass=VALUE

Password for authentication.

--rc-realm=VALUE

Realm for authentication (default "rclone")

--rc-server-read-timeout=DURATION

Timeout for server reading data (default 1h0m0s)

--rc-server-write-timeout=DURATION

Timeout for server writing data (default 1h0m0s)

--rc-serve

Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object
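
For example, a sketch of fetching an object with curl (the remote and path are hypothetical; -g stops curl interpreting the square brackets):

curl -g 'http://127.0.0.1:5572/[remote:bucket]/path/to/object'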

Default Off.

--rc-files /path/to/directory

Path to local files to serve on the HTTP server.

If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions.

If --rc-user or --rc-pass is set then the URL that is opened will have the authorization in the URL in the http://user:pass@localhost/ style.

Default Off.

--rc-enable-metrics

Enable OpenMetrics/Prometheus compatible endpoint at /metrics.

Default Off.

--rc-web-gui

Set this flag to serve the default web gui on the same port as rclone.

Default Off.

--rc-allow-origin

Set the allowed Access-Control-Allow-Origin for rc requests.

Can be used with --rc-web-gui if rclone is running on a different IP than the web-gui.

Default is the IP address on which rc is running.

--rc-web-fetch-url

Set the URL to fetch the rclone-web-gui files from.

Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest.

--rc-web-gui-update

Set this flag to check and update rclone-webui-react from the rc-web-fetch-url.

Default Off.

--rc-web-gui-force-update

Set this flag to force update rclone-webui-react from the rc-web-fetch-url.

Default Off.

--rc-web-gui-no-open-browser

Set this flag to disable opening browser automatically when using web-gui.

Default Off.

--rc-job-expire-duration=DURATION

Expire finished async jobs older than DURATION (default 60s).

--rc-job-expire-interval=DURATION

Interval duration to check for expired async jobs (default 10s).

--rc-no-auth

By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg operations/list is denied as it involves creating a remote, as is sync/copy.

If this is set then no authorisation will be required on the server to use these methods. The alternative is to use --rc-user and --rc-pass and use these credentials in the request.
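
For example, a sketch of supplying those credentials with curl (the user name and password here are hypothetical):

curl -u me:mypassword -X POST 'http://localhost:5572/rc/noop?potato=1'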

Default Off.

Accessing the remote control via the rclone rc command

Rclone itself implements the remote control protocol in its rclone rc command.

You can use it like this

$ rclone rc rc/noop param1=one param2=two
{
    "param1": "one",
    "param2": "two"
}

Run rclone rc on its own to see the help for the installed remote control commands.

JSON input

rclone rc also supports a --json flag which can be used to send more complicated input parameters.

$ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
{
    "p1": [
        1,
        "2",
        null,
        4
    ],
    "p2": {
        "a": 1,
        "b": 2
    }
}

If the parameter being passed is an object then it can be passed as a JSON string rather than using the --json flag which simplifies the command line.

rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'

Rather than

rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'

Special parameters

The rc interface supports some special parameters which apply to all commands. These start with _ to show they are different.

Running asynchronous jobs with _async = true

Each rc call is classified as a job and it is assigned its own id. By default jobs are executed immediately as they are created, ie synchronously.

If _async has a true value when supplied to an rc call then it will return immediately with a job id and the task will be run in the background. The job/status call can be used to get information of the background job. The job can be queried for up to 1 minute after it has finished.

It is recommended that potentially long running jobs, eg sync/sync, sync/copy, sync/move, operations/purge are run with the _async flag to avoid any potential problems with the HTTP request and response timing out.

Starting a job with the _async flag:

$ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop
{
    "jobid": 2
}

Query the status to see if the job has finished. For more information on the meaning of these return parameters see the job/status call.

$ rclone rc --json '{ "jobid":2 }' job/status
{
    "duration": 0.000124163,
    "endTime": "2018-10-27T11:38:07.911245881+01:00",
    "error": "",
    "finished": true,
    "id": 2,
    "output": {
        "_async": true,
        "p1": [
            1,
            "2",
            null,
            4
        ],
        "p2": {
            "a": 1,
            "b": 2
        }
    },
    "startTime": "2018-10-27T11:38:07.911121728+01:00",
    "success": true
}

job/list can be used to show the running or recently completed jobs

$ rclone rc job/list
{
    "jobids": [
        2
    ]
}

Assigning operations to groups with _group = value

Each rc call has its own stats group for tracking its metrics. By default grouping is done with a composite group name made from the prefix job/ and the id of the job, eg job/1.

If _group has a value then stats for that request will be grouped under that value. This allows the caller to group stats under their own name.

Stats for a specific group can be accessed by passing group to core/stats:

$ rclone rc --json '{ "group": "job/1" }' core/stats
{
    "speed": 12345
    ...
}

Supported commands

backend/command: Runs a backend command.

This takes the following parameters

Returns

For example

rclone rc backend/command command=noop fs=. -o echo=yes -o blue -a path1 -a path2

Returns

{
    "result": {
        "arg": [
            "path1",
            "path2"
        ],
        "name": "noop",
        "opt": {
            "blue": "",
            "echo": "yes"
        }
    }
}

Note that this is the direct equivalent of using this "backend" command:

rclone backend noop . -o echo=yes -o blue path1 path2

Note that arguments must be preceded by the "-a" flag

See the backend command for more information.

Authentication is required for this call.

cache/expire: Purge a remote from cache

Purge a remote from the cache backend. Supports either a directory or a file. Params:

- remote = path to remote (required)
- withData = true/false to delete cached data (chunks) as well (optional)

Eg

rclone rc cache/expire remote=path/to/sub/folder/
rclone rc cache/expire remote=/ withData=true

cache/fetch: Fetch file chunks

Ensure the specified file chunks are cached on disk.

The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]

start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is the 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file.

Some valid examples are:

":5,-5:" -> the first and last five chunks
"0,-2" -> the first and the second last chunk
"0:10" -> the first ten chunks

Any parameter with a key that starts with "file" can be used to specify files to fetch, eg

rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye

File names will automatically be encrypted when a crypt remote is used on top of the cache.

cache/stats: Get cache stats

Show statistics for the cache remote.

config/create: create the config for a remote.

This takes the following parameters

See the config create command for more information on the above.

Authentication is required for this call.

config/delete: Delete a remote in the config file.

Parameters:

See the config delete command for more information on the above.

Authentication is required for this call.

config/dump: Dumps the config file.

Returns a JSON object:

- key: value

Where keys are remote names and values are the config parameters.

See the config dump command for more information on the above.

Authentication is required for this call.

config/get: Get a remote in the config file.

Parameters:

See the config dump command for more information on the above.

Authentication is required for this call.

config/listremotes: Lists the remotes in the config file.

Returns:

- remotes - array of remote names

See the listremotes command for more information on the above.

Authentication is required for this call.

config/password: Set the password in the config for a remote.

This takes the following parameters

See the config password command for more information on the above.

Authentication is required for this call.

config/providers: Shows how providers are configured in the config file.

Returns a JSON object:

- providers - array of objects

See the config providers command for more information on the above.

Authentication is required for this call.

config/update: update the config for a remote.

This takes the following parameters

See the config update command for more information on the above.

Authentication is required for this call.

core/bwlimit: Set the bandwidth limit.

This sets the bandwidth limit to that passed in.

Eg

rclone rc core/bwlimit rate=off
{
    "bytesPerSecond": -1,
    "rate": "off"
}
rclone rc core/bwlimit rate=1M
{
    "bytesPerSecond": 1048576,
    "rate": "1M"
}

If the rate parameter is not supplied then the bandwidth is queried

rclone rc core/bwlimit
{
    "bytesPerSecond": 1048576,
    "rate": "1M"
}

The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.

In either case "rate" is returned as a human readable string, and "bytesPerSecond" is returned as a number.

core/command: Run a rclone terminal command over rc.

This takes the following parameters

Returns

For example

rclone rc core/command command=ls -a mydrive:/ -o max-depth=1
rclone rc core/command -a ls -a mydrive:/ -o max-depth=1

Returns

{
    "error": false,
    "result": "<Raw command line output>"
}

OR 
{
    "error": true,
    "result": "<Raw command line output>"
}

Authentication is required for this call.

core/gc: Runs a garbage collection.

This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems.
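
Eg

rclone rc core/gc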

core/group-list: Returns list of stats.

This returns list of stats groups currently in memory.

Returns the following values:

{
    "groups":  an array of group names:
        [
            "group1",
            "group2",
            ...
        ]
}

core/memstats: Returns the memory statistics

This returns the memory statistics of the running program. What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats

The most interesting values for most people are:

core/obscure: Obscures a string passed in.

Pass a clear string and rclone will obscure it for the config file:

- clear - string

Returns:

- obscured - string

core/pid: Return PID of current process

This returns the PID of the current process. Useful for stopping the rclone process.
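
Eg

rclone rc core/pid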

core/quit: Terminates the app.

Pass an exit code to be used for terminating the app (optional):

- exitCode - int

core/stats: Returns stats about current transfers.

This returns all available stats:

rclone rc core/stats

If group is not provided then summed up stats for all groups will be returned.

Parameters

Returns the following values:

{
    "speed": average speed in bytes/sec since start of the process,
    "bytes": total transferred bytes since the start of the process,
    "errors": number of errors,
    "fatalError": whether there has been at least one FatalError,
    "retryError": whether there has been at least one non-NoRetryError,
    "checks": number of checked files,
    "transfers": number of transferred files,
    "deletes" : number of deleted files,
    "renames" : number of renamed files,
    "transferTime" : total time spent on running jobs,
    "elapsedTime": time in seconds since the start of the process,
    "lastError": last occurred error,
    "transferring": an array of currently active file transfers:
        [
            {
                "bytes": total transferred bytes for this file,
                "eta": estimated time in seconds until file transfer completion
                "name": name of the file,
                "percentage": progress of the file transfer in percent,
                "speed": average speed over the whole transfer in bytes/sec,
                "speedAvg": current speed in bytes/sec as an exponentially weighted moving average,
                "size": size of the file in bytes
            }
        ],
    "checking": an array of names of currently active file checks
        []
}

Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined.

core/stats-delete: Delete stats group.

This deletes the entire stats group.

Parameters

core/stats-reset: Reset stats.

This clears counters, errors and finished transfers for all stats or specific stats group if group is provided.

Parameters

core/transferred: Returns stats about completed transfers.

This returns stats about completed transfers:

rclone rc core/transferred

If group is not provided then completed transfers for all groups will be returned.

Note only the last 100 completed transfers are returned.

Parameters

Returns the following values:

{
    "transferred":  an array of completed transfers (including failed ones):
        [
            {
                "name": name of the file,
                "size": size of the file in bytes,
                "bytes": total transferred bytes for this file,
                "checked": if the transfer is only checked (skipped, deleted),
                "timestamp": integer representing millisecond unix epoch,
                "error": string description of the error (empty if successful),
                "jobid": id of the job that this transfer belongs to
            }
        ]
}

core/version: Shows the current version of rclone and the go runtime.

This shows the current version of rclone and of the go runtime it was compiled with.

debug/set-block-profile-rate: Set runtime.SetBlockProfileRate for blocking profiling.

SetBlockProfileRate controls the fraction of goroutine blocking events that are reported in the blocking profile. The profiler aims to sample an average of one blocking event per rate nanoseconds spent blocked.

To include every blocking event in the profile, pass rate = 1. To turn off profiling entirely, pass rate <= 0.
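
For example, to include every blocking event (a sketch; the parameter name rate follows the description above):

rclone rc debug/set-block-profile-rate rate=1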

After calling this you can use this to see the blocking profile:

go tool pprof http://localhost:5572/debug/pprof/block

Parameters

debug/set-mutex-profile-fraction: Set runtime.SetMutexProfileFraction for mutex profiling.

SetMutexProfileFraction controls the fraction of mutex contention events that are reported in the mutex profile. On average 1/rate events are reported. The previous rate is returned.

To turn off profiling entirely, pass rate 0. To just read the current rate, pass rate < 0. (For n>1 the details of sampling may change.)
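
For example, to report every mutex contention event (a sketch using the same rate parameter described above):

rclone rc debug/set-mutex-profile-fraction rate=1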

Once this is set you can use this to profile the mutex contention:

go tool pprof http://localhost:5572/debug/pprof/mutex

Parameters

Results

job/list: Lists the IDs of the running jobs

Parameters - None

Results

job/status: Reads the status of the job ID

Parameters

Results

job/stop: Stop the running job

Parameters

mount/listmounts: Show current mount points

This shows currently mounted points, which can be used for performing an unmount

This takes no parameters and returns

Eg

rclone rc mount/listmounts

Authentication is required for this call.

mount/mount: Create a new mount point

rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2

This takes the following parameters

Eg

rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'

The vfsOpt are as described in options/get and can be seen in the "vfs" section when running rclone rc options/get, and the mountOpt can be seen in the "mount" section.

rclone rc options/get

Authentication is required for this call.

mount/types: Show all possible mount types

This shows all possible mount types and returns them as a list.

This takes no parameters and returns

The mount types are strings like "mount", "mount2", "cmount" and can be passed to mount/mount as the mountType parameter.

Eg

rclone rc mount/types

Authentication is required for this call.

mount/unmount: Unmount selected active mount

rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

This takes the following parameters

Eg

rclone rc mount/unmount mountPoint=/home/<user>/mountPoint

Authentication is required for this call.

mount/unmountall: Unmount all active mounts

This unmounts all currently mounted points.

This takes no parameters and returns an error if unmount does not succeed.

Eg

rclone rc mount/unmountall

Authentication is required for this call.

operations/about: Return the space used on the remote

This takes the following parameters

The result is as returned from rclone about --json

See the about command for more information on the above.

Authentication is required for this call.

operations/cleanup: Remove trashed files in the remote or path

This takes the following parameters

See the cleanup command for more information on the above.

Authentication is required for this call.

operations/copyfile: Copy a file from source remote to destination remote

This takes the following parameters

Authentication is required for this call.

operations/copyurl: Copy the URL to the object

This takes the following parameters

Authentication is required for this call.

operations/delete: Remove files in the path

This takes the following parameters

See the delete command for more information on the above.

Authentication is required for this call.

operations/deletefile: Remove the single file pointed to

This takes the following parameters

See the deletefile command for more information on the above.

Authentication is required for this call.

operations/fsinfo: Return information about the remote

This takes the following parameters

This returns info about the remote passed in:

{
    // optional features and whether they are available or not
    "Features": {
        "About": true,
        "BucketBased": false,
        "CanHaveEmptyDirectories": true,
        "CaseInsensitive": false,
        "ChangeNotify": false,
        "CleanUp": false,
        "Copy": false,
        "DirCacheFlush": false,
        "DirMove": true,
        "DuplicateFiles": false,
        "GetTier": false,
        "ListR": false,
        "MergeDirs": false,
        "Move": true,
        "OpenWriterAt": true,
        "PublicLink": false,
        "Purge": true,
        "PutStream": true,
        "PutUnchecked": false,
        "ReadMimeType": false,
        "ServerSideAcrossConfigs": false,
        "SetTier": false,
        "SetWrapper": false,
        "UnWrap": false,
        "WrapFs": false,
        "WriteMimeType": false
    },
    // Names of hashes available
    "Hashes": [
        "MD5",
        "SHA-1",
        "DropboxHash",
        "QuickXorHash"
    ],
    "Name": "local",    // Name as created
    "Precision": 1,     // Precision of timestamps in ns
    "Root": "/",        // Path as created
    "String": "Local file system at /" // how the remote will appear in logs
}

This command does not have a command line equivalent so use this instead:

rclone rc --loopback operations/fsinfo fs=remote:

operations/list: List the given remote and path in JSON format

This takes the following parameters

The result is

See the lsjson command for more information on the above and examples.

Authentication is required for this call.

operations/mkdir: Make a destination directory or container

This takes the following parameters

See the mkdir command for more information on the above.

Authentication is required for this call.

operations/movefile: Move a file from source remote to destination remote

This takes the following parameters

Authentication is required for this call.

operations/publiclink: Create or retrieve a public link to the given file or folder.

This takes the following parameters

Returns

See the link command for more information on the above.

Authentication is required for this call.

operations/purge: Remove a directory or container and all of its contents

This takes the following parameters

See the purge command for more information on the above.

Authentication is required for this call.

operations/rmdir: Remove an empty directory or container

This takes the following parameters

See the rmdir command for more information on the above.

Authentication is required for this call.

operations/rmdirs: Remove all the empty directories in the path

This takes the following parameters

See the rmdirs command for more information on the above.

Authentication is required for this call.

operations/size: Count the number of bytes and files in remote

This takes the following parameters

Returns

See the size command for more information on the above.

Authentication is required for this call.

operations/uploadfile: Upload file using multiform/form-data

This takes the following parameters

Authentication is required for this call.

options/blocks: List all the option blocks

Returns - options - a list of the options block names

options/get: Get all the options

Returns an object where keys are option block names and values are an object with the current option values in.

This shows the internal names of the options within rclone, which should map to the external options very easily with a few exceptions.

options/set: Set an option

Parameters

Repeated as often as required.

Only supply the options you wish to change. If an option is unknown it will be silently ignored. Not all options will have an effect when changed like this.

For example:

This sets DEBUG level logs (-vv)

rclone rc options/set --json '{"main": {"LogLevel": 8}}'

And this sets INFO level logs (-v)

rclone rc options/set --json '{"main": {"LogLevel": 7}}'

And this sets NOTICE level logs (normal without -v)

rclone rc options/set --json '{"main": {"LogLevel": 6}}'

pluginsctl/addPlugin: Add a plugin using url

Used for adding a plugin to the webgui.

This takes the following parameters

Eg

rclone rc pluginsctl/addPlugin

Authentication is required for this call.

pluginsctl/getPluginsForType: Get plugins with type criteria

This shows all possible plugins by a mime type

This takes the following parameters

and returns

Eg

rclone rc pluginsctl/getPluginsForType type=video/mp4

Authentication is required for this call.

pluginsctl/listPlugins: Get the list of currently loaded plugins

This allows you to get the currently enabled plugins and their details.

This takes no parameters and returns

Eg

rclone rc pluginsctl/listPlugins

Authentication is required for this call.

pluginsctl/listTestPlugins: Show currently loaded test plugins

Allows listing of test plugins with rclone.test set to true in the package.json of the plugin.

This takes no parameters and returns

Eg

rclone rc pluginsctl/listTestPlugins

Authentication is required for this call.

pluginsctl/removePlugin: Remove a loaded plugin

This allows you to remove a plugin using its name

This takes parameters

Eg

rclone rc pluginsctl/removePlugin name=rclone/video-plugin

Authentication is required for this call.

pluginsctl/removeTestPlugin: Remove a test plugin

This allows you to remove a plugin using its name

This takes the following parameters

Eg

rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react

Authentication is required for this call.

rc/error: This returns an error

This returns an error with the input as part of its error string. Useful for testing error handling.

rc/list: List all the registered remote control commands

This lists all the registered remote control commands as a JSON map in the commands response.

rc/noop: Echo the input to the output parameters

This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

rc/noopauth: Echo the input to the output parameters requiring auth

This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

Authentication is required for this call.

sync/copy: copy a directory from source remote to destination remote

This takes the following parameters

See the copy command for more information on the above.

Authentication is required for this call.

sync/move: move a directory from source remote to destination remote

This takes the following parameters

See the move command for more information on the above.

Authentication is required for this call.

sync/sync: sync a directory from source remote to destination remote

This takes the following parameters

See the sync command for more information on the above.

Authentication is required for this call.

vfs/forget: Forget files or directories in the directory cache.

This forgets the paths in the directory cache causing them to be re-read from the remote when needed.

If no paths are passed in then it will forget all the paths in the directory cache.

rclone rc vfs/forget

Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, eg

rclone rc vfs/forget file=hello file2=goodbye dir=home/junk

This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
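
For example, a sketch of targeting a specific VFS (the remote name is hypothetical):

rclone rc vfs/forget fs=drive: dir=home/junk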

vfs/list: List active VFSes.

This lists the active VFSes.

It returns a list under the key "vfses" where the values are the VFS names that could be passed to the other VFS commands in the "fs" parameter.

vfs/poll-interval: Get the status or update the value of the poll-interval option.

Without any parameter given this returns the current status of the poll-interval setting.

When the interval=duration parameter is set, the poll-interval value is updated and the polling function is notified. Setting interval=0 disables poll-interval.

rclone rc vfs/poll-interval interval=5m

The timeout=duration parameter can be used to specify a time to wait for the current poll function to apply the new value. If timeout is less than or equal to 0, which is the default, rclone waits indefinitely.

The new poll-interval value will only be active when the timeout is not reached.

If poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the used remote.

This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.

vfs/refresh: Refresh the directory cache.

This reads the directories for the specified paths and freshens the directory cache.

If no paths are passed in then it will refresh the root directory.

rclone rc vfs/refresh

Otherwise pass directories in as dir=path. Any parameter key starting with dir will refresh that directory, eg

rclone rc vfs/refresh dir=home/junk dir2=data/misc

If the parameter recursive=true is given the whole directory tree will get refreshed. This refresh will use --fast-list if enabled.
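
Eg

rclone rc vfs/refresh recursive=true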

This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.

Accessing the remote control via HTTP

Rclone implements a simple HTTP based protocol.

Each endpoint takes a JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.

All calls must be made using POST.

The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using curl.

The response will be a JSON blob in the body of the response. This is formatted to be reasonably human readable.

Error returns

If an error occurs then there will be an HTTP error status (eg 500) and the body of the response will contain a JSON encoded error object, eg

{
    "error": "Expecting string value for key \"remote\" (was float64)",
    "input": {
        "fs": "/tmp",
        "remote": 3
    },
    "status": 400
    "path": "operations/rmdir",
}

The keys in the error response are:

- error - error string
- input - the input parameters to the call
- status - the HTTP status code
- path - the path of the call

CORS

The server implements basic CORS support and allows all origins for that. The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back.

Using POST with URL parameters only

curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'

Response

{
    "potato": "1",
    "sausage": "2"
}

Here is what an error response looks like:

curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
{
    "error": "arbitrary error on input map[potato:1 sausage:2]",
    "input": {
        "potato": "1",
        "sausage": "2"
    }
}

Note that curl doesn't return errors to the shell unless you use the -f option

$ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
curl: (22) The requested URL returned error: 400 Bad Request
$ echo $?
22

Using POST with a form

curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop

Response

{
    "potato": "1",
    "sausage": "2"
}

Note that you can combine these with URL parameters too with the POST parameters taking precedence.

curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"

Response

{
    "potato": "1",
    "rutabaga": "3",
    "sausage": "4"
}

Using POST with a JSON blob

curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop

Response

{
    "potato": 2,
    "sausage": 1
}

This can be combined with URL parameters too if required. The JSON blob takes precedence.

curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'
{
    "potato": 2,
    "rutabaga": "3",
    "sausage": 1
}

Debugging rclone with pprof

If you use the --rc flag this will also enable the use of the go profiling tools on the same port.

To use these, first install go.

Debugging memory use

To profile rclone's memory use you can run:

go tool pprof -web http://localhost:5572/debug/pprof/heap

This should open a page in your browser showing what is using what memory.

You can also use the -text flag to produce a textual summary

$ go tool pprof -text http://localhost:5572/debug/pprof/heap
Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
      flat  flat%   sum%        cum   cum%
 1024.03kB 66.62% 66.62%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
     513kB 33.38%   100%      513kB 33.38%  net/http.newBufioWriterSize
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/all.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve/restic.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init.0
         0     0%   100%  1024.03kB 66.62%  main.init
         0     0%   100%      513kB 33.38%  net/http.(*conn).readRequest
         0     0%   100%      513kB 33.38%  net/http.(*conn).serve
         0     0%   100%  1024.03kB 66.62%  runtime.main

Debugging go routine leaks

Memory leaks are most often caused by go routine leaks keeping memory alive which should have been garbage collected.

See all active go routines using

curl http://localhost:5572/debug/pprof/goroutine?debug=1

Or go to http://localhost:5572/debug/pprof/goroutine?debug=1 in your browser.

Other profiles to look at

You can see a summary of profiles available at http://localhost:5572/debug/pprof/

Here is how to use some of them:

See the net/http/pprof docs for more info on how to use the profiling and for a general overview see the Go team's blog post on profiling go programs.

The profiling hook is zero overhead unless it is used.

Overview of cloud storage systems

Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.

Features

Here is an overview of the major features of each cloud storage system.

Name                          Hash           ModTime  Case Insensitive  Duplicate Files  MIME Type
----------------------------  -------------  -------  ----------------  ---------------  ---------
1Fichier                      Whirlpool      No       No                Yes              R
Amazon Drive                  MD5            No       Yes               No               R
Amazon S3                     MD5            Yes      No                No               R/W
Backblaze B2                  SHA1           Yes      No                No               R/W
Box                           SHA1           Yes      Yes               No               -
Citrix ShareFile              MD5            Yes      Yes               No               -
Dropbox                       DBHASH †       Yes      Yes               No               -
FTP                           -              No       No                No               -
Google Cloud Storage          MD5            Yes      No                No               R/W
Google Drive                  MD5            Yes      No                Yes              R/W
Google Photos                 -              No       No                Yes              R
HTTP                          -              No       No                No               R
Hubic                         MD5            Yes      No                No               R/W
Jottacloud                    MD5            Yes      Yes               No               R/W
Koofr                         MD5            No       Yes               No               -
Mail.ru Cloud                 Mailru ‡‡‡     Yes      Yes               No               -
Mega                          -              No       No                Yes              -
Memory                        MD5            Yes      No                No               -
Microsoft Azure Blob Storage  MD5            Yes      No                No               R/W
Microsoft OneDrive            SHA1 ‡‡        Yes      Yes               No               R
OpenDrive                     MD5            Yes      Yes               Partial *        -
OpenStack Swift               MD5            Yes      No                No               R/W
pCloud                        MD5, SHA1      Yes      No                No               W
premiumize.me                 -              No       Yes               No               R
put.io                        CRC-32         Yes      No                Yes              R
QingStor                      MD5            No       No                No               R/W
Seafile                       -              No       No                No               -
SFTP                          MD5, SHA1 ‡    Yes      Depends           No               -
SugarSync                     -              No       No                No               -
Tardigrade                    -              Yes      No                No               -
WebDAV                        MD5, SHA1 ††   Yes †††  Depends           No               -
Yandex Disk                   MD5            Yes      No                No               R/W
The local filesystem          All            Yes      Depends           No               -

Hash

The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.

To use checksums to verify data when transferring between cloud storage systems, both systems must support a common hash type.
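
For example, a sketch of a server-to-server sync that compares checksums rather than modification times (the remote names are hypothetical):

rclone sync --checksum remote1:bucket remote2:bucket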

† Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.

‡ SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH.

†† WebDAV supports hashes when used with Owncloud and Nextcloud only.

††† WebDAV supports modtimes when used with Owncloud and Nextcloud only.

‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own QuickXorHash.

‡‡‡ Mail.ru uses its own modified SHA1 hash

ModTime

The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.

All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.

Case Insensitive

If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, eg file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.

This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.

The local filesystem and SFTP may or may not be case sensitive depending on OS.

Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.

Duplicate files

If a cloud storage system allows duplicate files then it can have two objects with the same name.

This confuses rclone greatly when syncing - use the rclone dedupe command to rename or remove duplicates.
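
For example, a sketch of keeping only the newest of each set of duplicates (the path is hypothetical):

rclone dedupe --dedupe-mode newest remote:dupes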

* Opendrive does not support creation of duplicate files using their web client interface or other stock clients, but the underlying storage platform has been determined to allow duplicate files, and it is possible to create them with rclone. It may be that this is a mistake or an unsupported feature.

Restricted filenames

Some cloud storage systems might have restrictions on the characters that are usable in file or directory names. When rclone detects such a name during a file upload, it will transparently replace the restricted characters with similar looking Unicode characters.

This process is designed to avoid ambiguous file names as much as possible and to allow files to be moved between many cloud storage systems transparently.

The name shown by rclone to the user or during log output will only contain a minimal set of replaced characters to ensure correct formatting and not necessarily the actual name used on the cloud storage.

This transformation is reversed when downloading a file or parsing rclone arguments. For example, a file named my file?.txt, when uploaded to Onedrive, will be displayed as my file?.txt on the console, but stored as my file？.txt (the ? gets replaced by the similar looking ？ character). The reverse transformation allows a file named unusual/name.txt on Google Drive to be read by passing the name unusual／name.txt (the / needs to be replaced by the similar looking ／ character) on the command line.

Default restricted characters

The table below shows the characters that are replaced by default.

When a replacement character is found in a filename, this character will be escaped with the ‛ character to avoid ambiguous file names. (e.g. a file named ␀.txt would be shown as ‛␀.txt)

Each cloud storage backend can use a different set of characters, which will be specified in the documentation for each backend.

Character  Value  Replacement
---------  -----  -----------
NUL        0x00   ␀
SOH        0x01   ␁
STX        0x02   ␂
ETX        0x03   ␃
EOT        0x04   ␄
ENQ        0x05   ␅
ACK        0x06   ␆
BEL        0x07   ␇
BS         0x08   ␈
HT         0x09   ␉
LF         0x0A   ␊
VT         0x0B   ␋
FF         0x0C   ␌
CR         0x0D   ␍
SO         0x0E   ␎
SI         0x0F   ␏
DLE        0x10   ␐
DC1        0x11   ␑
DC2        0x12   ␒
DC3        0x13   ␓
DC4        0x14   ␔
NAK        0x15   ␕
SYN        0x16   ␖
ETB        0x17   ␗
CAN        0x18   ␘
EM         0x19   ␙
SUB        0x1A   ␚
ESC        0x1B   ␛
FS         0x1C   ␜
GS         0x1D   ␝
RS         0x1E   ␞
US         0x1F   ␟
/          0x2F   ／
DEL        0x7F   ␡

The default encoding will also encode these file names as they are problematic with many cloud storage systems.

File name  Replacement
---------  -----------
.          ．
..         ．．

Invalid UTF-8 bytes

Some backends only support a sequence of well formed UTF-8 bytes as file or directory names.

In this case all invalid UTF-8 bytes will be replaced with a quoted representation of the byte value to allow uploading a file to such a backend. For example, the invalid byte 0xFE will be encoded as ‛FE.

A common source of invalid UTF-8 bytes is local filesystems that store names in an encoding other than UTF-8 or UTF-16, such as latin1. See the local filenames section for details.

Encoding option

Most backends have an encoding option, specified as a flag --backend-encoding where backend is the name of the backend, or as a config parameter encoding (you'll need to select the Advanced config in rclone config to see it).

This will have a default value which encodes and decodes characters in such a way as to preserve the maximum number of characters (see above).

However this can be incorrect in some scenarios, for example if you have a Windows file system with characters such as ＊ and ？ that you want to remain as those characters on the remote rather than being translated to * and ?.

The --backend-encoding flags allow you to change that. You can disable the encoding completely with --backend-encoding None or set encoding = None in the config file.

Encoding takes a comma separated list of encodings. You can see the list of all available characters by passing an invalid value to this flag, eg --local-encoding "help" and rclone help flags encoding will show you the defaults for the backends.

Encoding       Characters
-------------  ----------
Asterisk       *
BackQuote      `
BackSlash      \
Colon          :
CrLf           CR 0x0D, LF 0x0A
Ctl            All control characters 0x00-0x1F
Del            DEL 0x7F
Dollar         $
Dot            .
DoubleQuote    "
Hash           #
InvalidUtf8    An invalid UTF-8 character (eg latin1)
LeftCrLfHtVt   CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the left of a string
LeftPeriod     . on the left of a string
LeftSpace      SPACE on the left of a string
LeftTilde      ~ on the left of a string
LtGt           <, >
None           No characters are encoded
Percent        %
Pipe           |
Question       ?
RightCrLfHtVt  CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string
RightPeriod    . on the right of a string
RightSpace     SPACE on the right of a string
SingleQuote    '
Slash          /

To take a specific example, the FTP backend's default encoding is

--ftp-encoding "Slash,Del,Ctl,RightSpace,Dot"

However, let's say the FTP server is running on Windows and can't have any of the invalid Windows characters in file names. You are backing up Linux servers to this FTP server which do have those characters in file names. So you would add the Windows set which are

Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot

to the existing ones, giving:

Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del

This can be specified using the --ftp-encoding flag or using an encoding parameter in the config file.
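
For example, a sketch of the config file form (the remote name myftp is hypothetical):

[myftp]
type = ftp
encoding = Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del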

Or let's say you have a Windows server but you want to preserve ＊ and ？, you would then have this as the encoding (the Windows encoding minus Asterisk and Question).

Slash,LtGt,DoubleQuote,Colon,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot

This can be specified using the --local-encoding flag or using an encoding parameter in the config file.

MIME Type

MIME types (also known as media types) classify types of documents using a simple text classification, eg text/html or application/pdf.

Some cloud storage systems support reading (R) the MIME type of objects and some support writing (W) the MIME type of objects.

The MIME type can be important if you are serving files directly to HTTP from the storage system.

If you are copying from a remote which supports reading (R) to a remote which supports writing (W) then rclone will preserve the MIME types. Otherwise they will be guessed from the extension, or the remote itself may assign the MIME type.

Optional Features

All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient.

Name                          Purge  Copy  Move  DirMove  CleanUp  ListR  StreamUpload  LinkSharing  About  EmptyDir
----------------------------  -----  ----  ----  -------  -------  -----  ------------  -----------  -----  --------
1Fichier                      No     No    No    No       No       No     No            No           No     Yes
Amazon Drive                  Yes    No    Yes   Yes      No #575  No     No            No #2178     No     Yes
Amazon S3                     No     Yes   No    No       Yes      Yes    Yes           No #2178     No     No
Backblaze B2                  No     Yes   No    No       Yes      Yes    Yes           Yes          No     No
Box                           Yes    Yes   Yes   Yes      Yes ‡‡   No     Yes           Yes          No     Yes
Citrix ShareFile              Yes    Yes   Yes   Yes      No       No     Yes           No           No     Yes
Dropbox                       Yes    Yes   Yes   Yes      No #575  No     Yes           Yes          Yes    Yes
FTP                           No     No    Yes   Yes      No       No     Yes           No #2178     No     Yes
Google Cloud Storage          Yes    Yes   No    No       No       Yes    Yes           No #2178     No     No
Google Drive                  Yes    Yes   Yes   Yes      Yes      Yes    Yes           Yes          Yes    Yes
Google Photos                 No     No    No    No       No       No     No            No           No     No
HTTP                          No     No    No    No       No       No     No            No #2178     No     Yes
Hubic                         Yes †  Yes   No    No       No       Yes    Yes           No #2178     Yes    No
Jottacloud                    Yes    Yes   Yes   Yes      Yes      Yes    No            Yes          Yes    Yes
Mail.ru Cloud                 Yes    Yes   Yes   Yes      Yes      No     No            Yes          Yes    Yes
Mega                          Yes    No    Yes   Yes      Yes      No     No            No #2178     Yes    Yes
Memory                        No     Yes   No    No       No       Yes    Yes           No           No     No
Microsoft Azure Blob Storage  Yes    Yes   No    No       No       Yes    Yes           No #2178     No     No
Microsoft OneDrive            Yes    Yes   Yes   Yes      Yes      No     No            Yes          Yes    Yes
OpenDrive                     Yes    Yes   Yes   Yes      No       No     No            No           No     Yes
OpenStack Swift               Yes †  Yes   No    No       No       Yes    Yes           No #2178     Yes    No
pCloud                        Yes    Yes   Yes   Yes      Yes      No     No            Yes          Yes    Yes
premiumize.me                 Yes    No    Yes   Yes      No       No     No            Yes          Yes    Yes
put.io                        Yes    No    Yes   Yes      Yes      No     Yes           No #2178     Yes    Yes
QingStor                      No     Yes   No    No       Yes      Yes    No            No #2178     No     No
Seafile                       Yes    Yes   Yes   Yes      Yes      Yes    Yes           Yes          Yes    Yes
SFTP                          No     No    Yes   Yes      No       No     Yes           No #2178     Yes    Yes
SugarSync                     Yes    Yes   Yes   Yes      No       No     Yes           Yes          No     Yes
Tardigrade                    Yes †  No    No    No       No       Yes    Yes           No           No     No
WebDAV                        Yes    Yes   Yes   Yes      No       No     Yes ‡         No #2178     Yes    Yes
Yandex Disk                   Yes    Yes   Yes   Yes      Yes      No     Yes           Yes          Yes    Yes
The local filesystem          Yes    No    Yes   Yes      No       No     Yes           No           Yes    Yes

Purge

This deletes a directory quicker than just deleting all the files in the directory.

† Note Swift, Hubic, and Tardigrade implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.

‡ StreamUpload is not supported with Nextcloud

Copy

Used when copying an object to and from the same remote. This is known as a server side copy so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy or rclone move if the remote doesn't support Move directly.

If the server doesn't support Copy directly then for copy operations the file is downloaded then re-uploaded.

Move

Used when moving/renaming an object on the same remote. This is known as a server side move of a file. This is used in rclone move if the server doesn't support DirMove.

If the server isn't capable of Move then rclone simulates it with Copy then delete. If the server doesn't support Copy then rclone will download the file and re-upload it.

DirMove

This is used to implement rclone move to move a directory if possible. If it isn't then it will use Move on each file (which falls back to Copy then download and upload - see Move section).

CleanUp

This is used for emptying the trash for a remote by rclone cleanup.

If the server can't do CleanUp then rclone cleanup will return an error.

‡‡ Note that while Box implements this it has to delete every file individually so it will be slower than emptying the trash via the WebUI

ListR

The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs for more details.
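
For example, a sketch of sizing a bucket with recursive listing enabled (the path is hypothetical):

rclone size --fast-list remote:bucket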

StreamUpload

Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat.
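
For example, a sketch of streaming stdin straight to a remote (the path is hypothetical):

echo "hello world" | rclone rcat remote:path/to/file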

LinkSharing

Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.
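
For example, a sketch (the path is hypothetical):

rclone link remote:path/to/file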

About

This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash.

This is also used to return the space used and available for rclone mount.

If the server can't do About then rclone about will return an error.
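
For example, to print the quota information for a remote that supports About (the remote name is a placeholder):

    rclone about remote: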

EmptyDir

The remote supports empty directories. See Limitations for details. Most Object/Bucket based remotes do not support this.

Global Flags

This describes the global flags available to every rclone command, split into two groups: non backend and backend flags.

Non Backend Flags

These flags are available for every command.
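
For illustration, global flags are simply appended to any command; for example (the remote name backup is a placeholder):

    rclone sync /home/source remote:backup --transfers 8 --checkers 16 --bwlimit 10M -P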

      --ask-password                         Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                         If enabled, do not request console confirmation.
      --backup-dir string                    Make backups into hierarchy based in DIR.
      --bind string                          Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --buffer-size SizeSuffix               In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --bwlimit-file BwTimetable             Bandwidth limit per file in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --ca-cert string                       CA certificate used to verify servers
      --cache-dir string                     Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --check-first                          Do all the checks before starting transfers.
      --checkers int                         Number of checkers to run in parallel. (default 8)
  -c, --checksum                             Skip based on checksum (if available) & size, not mod-time & size
      --client-cert string                   Client SSL certificate (PEM) for mutual TLS auth
      --client-key string                    Client SSL private key (PEM) for mutual TLS auth
      --compare-dest string                  Include additional server-side path during comparison.
      --config string                        Config file. (default "$HOME/.config/rclone/rclone.conf")
      --contimeout duration                  Connect timeout (default 1m0s)
      --copy-dest string                     Implies --compare-dest but also copies files from path into destination.
      --cpuprofile string                    Write cpu profile to file
      --cutoff-mode string                   Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
      --delete-after                         When synchronizing, delete files on destination after transferring (default)
      --delete-before                        When synchronizing, delete files on destination before transferring
      --delete-during                        When synchronizing, delete files during transfer
      --delete-excluded                      Delete files on dest excluded from sync
      --disable string                       Disable a comma separated list of features.  Use help to see a list.
  -n, --dry-run                              Do a trial run with no permanent changes
      --dump DumpFlags                       List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                          Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                         Dump HTTP headers - may contain sensitive info
      --error-on-no-transfer                 Sets exit code 9 if no files are transferred, useful in scripts
      --exclude stringArray                  Exclude files matching pattern
      --exclude-from stringArray             Read exclude patterns from file (use - to read from stdin)
      --exclude-if-present string            Exclude directories if filename is present
      --expect-continue-timeout duration     Timeout when using expect / 100-continue in HTTP (default 1s)
      --fast-list                            Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray               Read list of source-file names from file (use - to read from stdin)
      --files-from-raw stringArray           Read list of source-file names from file without any processing of lines (use - to read from stdin)
  -f, --filter stringArray                   Add a file-filtering rule
      --filter-from stringArray              Read filtering patterns from a file (use - to read from stdin)
      --header stringArray                   Set HTTP header for all transactions
      --header-download stringArray          Set HTTP header for download transactions
      --header-upload stringArray            Set HTTP header for upload transactions
      --ignore-case                          Ignore case in filters (case insensitive)
      --ignore-case-sync                     Ignore case when synchronizing
      --ignore-checksum                      Skip post copy check of checksums.
      --ignore-errors                        delete even if there are I/O errors
      --ignore-existing                      Skip all files that exist on destination
      --ignore-size                          Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                         Don't skip files that match size and time - transfer all files
      --immutable                            Do not modify files. Fail if existing files have been modified.
      --include stringArray                  Include files matching pattern
      --include-from stringArray             Read include patterns from file (use - to read from stdin)
  -i, --interactive                          Enable interactive mode
      --log-file string                      Log everything to this file
      --log-format string                    Comma separated list of log format options (default "date,time")
      --log-level string                     Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                Number of low level retries to do. (default 10)
      --max-age Duration                     Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                      Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                       When synchronizing, limit the number of deletes (default -1)
      --max-depth int                        If set limits the recursion depth to this. (default -1)
      --max-duration duration                Maximum duration rclone will transfer data for.
      --max-size SizeSuffix                  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-stats-groups int                 Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000)
      --max-transfer SizeSuffix              Maximum size of data to transfer. (default off)
      --memprofile string                    Write memory profile to file
      --min-age Duration                     Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration               Max time diff to be considered the same (default 1ns)
      --multi-thread-cutoff SizeSuffix       Use multi-thread downloads for files above this size. (default 250M)
      --multi-thread-streams int             Max number of streams to use for multi-thread downloads. (default 4)
      --no-check-certificate                 Do not verify the server SSL certificate. Insecure.
      --no-check-dest                        Don't check the destination, copy regardless.
      --no-gzip-encoding                     Don't set Accept-Encoding: gzip.
      --no-traverse                          Don't traverse destination file system on copy.
      --no-unicode-normalization             Don't normalize unicode characters in filenames.
      --no-update-modtime                    Don't update destination mod-time if files identical.
      --order-by string                      Instructions on how to order the transfers, eg 'size,descending'
      --password-command SpaceSepList        Command for supplying password for encrypted configuration.
  -P, --progress                             Show progress during transfer.
  -q, --quiet                                Print as little stuff as possible
      --rc                                   Enable the remote control server.
      --rc-addr string                       IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-allow-origin string               Set the allowed origin for CORS.
      --rc-baseurl string                    Prefix for URLs - leave blank for root.
      --rc-cert string                       SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string                  Client certificate authority to verify clients with
      --rc-enable-metrics                    Enable prometheus metrics on /metrics
      --rc-files string                      Path to local files to serve on the HTTP server.
      --rc-htpasswd string                   htpasswd file - if not provided no authentication is done
      --rc-job-expire-duration duration      expire finished async jobs older than this value (default 1m0s)
      --rc-job-expire-interval duration      interval to check for expired async jobs (default 10s)
      --rc-key string                        SSL PEM Private key
      --rc-max-header-bytes int              Maximum size of request header (default 4096)
      --rc-no-auth                           Don't require auth for certain methods.
      --rc-pass string                       Password for authentication.
      --rc-realm string                      realm for authentication (default "rclone")
      --rc-serve                             Enable the serving of remote objects.
      --rc-server-read-timeout duration      Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration     Timeout for server writing data (default 1h0m0s)
      --rc-template string                   User Specified Template.
      --rc-user string                       User name for authentication.
      --rc-web-fetch-url string              URL to fetch the releases for webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
      --rc-web-gui                           Launch WebGUI on localhost
      --rc-web-gui-force-update              Force update to latest version of web gui
      --rc-web-gui-no-open-browser           Don't open the browser automatically
      --rc-web-gui-update                    Check and update to latest version of web gui
      --refresh-times                        Refresh the modtime of remote files.
      --retries int                          Retry operations this many times if they fail (default 3)
      --retries-sleep duration               Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
      --size-only                            Skip based on size only, not mod-time or checksum
      --stats duration                       Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int           Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string               Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line                       Make the stats fit on one line.
      --stats-one-line-date                  Enables --stats-one-line and add current date/time prefix.
      --stats-one-line-date-format string    Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format
      --stats-unit string                    Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string                        Suffix to add to changed files.
      --suffix-keep-extension                Preserve the extension when using --suffix.
      --syslog                               Use Syslog for logging
      --syslog-facility string               Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration                     IO idle timeout (default 5m0s)
      --tpslimit float                       Limit HTTP transactions per second to this.
      --tpslimit-burst int                   Max burst of transactions for --tpslimit. (default 1)
      --track-renames                        When synchronizing, track file renames and do a server side move if possible
      --track-renames-strategy string        Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
      --transfers int                        Number of file transfers to run in parallel. (default 4)
  -u, --update                               Skip files that are newer on the destination.
      --use-cookies                          Enable session cookiejar.
      --use-json-log                         Use json log format.
      --use-mmap                             Use mmap allocator (see docs).
      --use-server-modtime                   Use server modified time instead of object metadata
      --user-agent string                    Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.53.3")
  -v, --verbose count                        Print lots more stuff (repeat for more)

Backend Flags

These flags are available for every command. They control the backends and may be set in the config file.
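
A backend flag can be given on the command line or, equivalently, through an environment variable of the form RCLONE_<BACKEND>_<OPTION>; for example (the remote name gdrive and the value are placeholders):

    rclone copy /data gdrive:data --drive-chunk-size 64M
    # or equivalently
    RCLONE_DRIVE_CHUNK_SIZE=64M rclone copy /data gdrive:data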

      --acd-auth-url string                                      Auth server URL.
      --acd-client-id string                                     OAuth Client Id
      --acd-client-secret string                                 OAuth Client Secret
      --acd-encoding MultiEncoder                                This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
      --acd-templink-threshold SizeSuffix                        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token string                                         OAuth Access Token as a JSON blob.
      --acd-token-url string                                     Token server url.
      --acd-upload-wait-per-gb Duration                          Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                                      Remote or path to alias.
      --azureblob-access-tier string                             Access tier of blob: hot, cool or archive.
      --azureblob-account string                                 Storage Account Name (leave blank to use SAS URL or Emulator)
      --azureblob-chunk-size SizeSuffix                          Upload chunk size (<= 100MB). (default 4M)
      --azureblob-disable-checksum                               Don't store MD5 checksum with object metadata.
      --azureblob-encoding MultiEncoder                          This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
      --azureblob-endpoint string                                Endpoint for the service
      --azureblob-key string                                     Storage Account Key (leave blank to use SAS URL or Emulator)
      --azureblob-list-chunk int                                 Size of blob list. (default 5000)
      --azureblob-memory-pool-flush-time Duration                How often internal memory buffer pools will be flushed. (default 1m0s)
      --azureblob-memory-pool-use-mmap                           Whether to use mmap buffers in internal memory pool.
      --azureblob-sas-url string                                 SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix                       Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --azureblob-use-emulator                                   Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
      --b2-account string                                        Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-copy-cutoff SizeSuffix                                Cutoff for switching to multipart copy (default 4G)
      --b2-disable-checksum                                      Disable checksums for large (> upload cutoff) files
      --b2-download-auth-duration Duration                       Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w)
      --b2-download-url string                                   Custom endpoint for downloads.
      --b2-encoding MultiEncoder                                 This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
      --b2-endpoint string                                       Endpoint for the service.
      --b2-hard-delete                                           Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                            Application Key
      --b2-memory-pool-flush-time Duration                       How often internal memory buffer pools will be flushed. (default 1m0s)
      --b2-memory-pool-use-mmap                                  Whether to use mmap buffers in internal memory pool.
      --b2-test-mode string                                      A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                              Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                              Include old versions in directory listings.
      --box-access-token string                                  Box App Primary Access Token
      --box-auth-url string                                      Auth server URL.
      --box-box-config-file string                               Box App config.json location
      --box-box-sub-type string                                   (default "user")
      --box-client-id string                                     OAuth Client Id
      --box-client-secret string                                 OAuth Client Secret
      --box-commit-retries int                                   Max number of times to try committing a multipart file. (default 100)
      --box-encoding MultiEncoder                                This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
      --box-root-folder-id string                                Fill in for rclone to use a non root folder as its starting point.
      --box-token string                                         OAuth Access Token as a JSON blob.
      --box-token-url string                                     Token server url.
      --box-upload-cutoff SizeSuffix                             Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --cache-chunk-clean-interval Duration                      How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                                    Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                                  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                              The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix                        The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                                     Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                                           Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                              How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-info-age Duration                                  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                               Skip all certificate verification when connecting to the Plex server
      --cache-plex-password string                               The password of the Plex user (obscured)
      --cache-plex-url string                                    The URL of the Plex server
      --cache-plex-username string                               The username of the Plex user
      --cache-read-retries int                                   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                                      Remote to cache.
      --cache-rps int                                            Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                             Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                             How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                                        How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                             Cache file data on writes through the FS
      --chunker-chunk-size SizeSuffix                            Files larger than chunk size will be split in chunks. (default 2G)
      --chunker-fail-hard                                        Choose how chunker should handle files with missing or invalid chunks.
      --chunker-hash-type string                                 Choose how chunker handles hash sums. All modes but "none" require metadata. (default "md5")
      --chunker-meta-format string                               Format of the metadata object or "none". By default "simplejson". (default "simplejson")
      --chunker-name-format string                               String format of chunk file names. (default "*.rclone_chunk.###")
      --chunker-remote string                                    Remote to chunk/unchunk.
      --chunker-start-from int                                   Minimum valid chunk number. Usually 0 or 1. (default 1)
  -L, --copy-links                                               Follow symlinks and copy the pointed to item.
      --crypt-directory-name-encryption                          Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string                         How to encrypt the filenames. (default "standard")
      --crypt-password string                                    Password or pass phrase for encryption. (obscured)
      --crypt-password2 string                                   Password or pass phrase for salt. Optional but recommended. (obscured)
      --crypt-remote string                                      Remote to encrypt/decrypt.
      --crypt-server-side-across-configs                         Allow server side operations (eg copy) to work across different crypt configs.
      --crypt-show-mapping                                       For all files listed show how the names encrypt.
      --drive-acknowledge-abuse                                  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change                           Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-auth-owner-only                                    Only consider files owned by the authenticated user.
      --drive-auth-url string                                    Auth server URL.
      --drive-chunk-size SizeSuffix                              Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                                   Google Application Client Id
      --drive-client-secret string                               OAuth Client Secret
      --drive-disable-http2                                      Disable drive using http2 (default true)
      --drive-encoding MultiEncoder                              This sets the encoding for the backend. (default InvalidUtf8)
      --drive-export-formats string                              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                                     Deprecated: see export_formats
      --drive-impersonate string                                 Impersonate this user when using a service account.
      --drive-import-formats string                              Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                              Keep new head revision of each file forever.
      --drive-list-chunk int                                     Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                                    Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration                           Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string                              ID of the root folder
      --drive-scope string                                       Scope that rclone should use when requesting access from drive.
      --drive-server-side-across-configs                         Allow server side operations (eg copy) to work across different drive configs.
      --drive-service-account-credentials string                 Service Account Credentials JSON blob
      --drive-service-account-file string                        Service Account Credentials JSON file path
      --drive-shared-with-me                                     Only show files that are shared with me.
      --drive-size-as-quota                                      Show sizes as storage quota usage, not actual size.
      --drive-skip-checksum-gphotos                              Skip MD5 checksum on Google photos and videos only.
      --drive-skip-gdocs                                         Skip google documents in all listings.
      --drive-skip-shortcuts                                     If set skip shortcut files
      --drive-starred-only                                       Only show files that are starred.
      --drive-stop-on-upload-limit                               Make upload limit errors be fatal
      --drive-team-drive string                                  ID of the Team Drive
      --drive-token string                                       OAuth Access Token as a JSON blob.
      --drive-token-url string                                   Token server url.
      --drive-trashed-only                                       Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix                           Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                                   Use file created date instead of modified date.
      --drive-use-shared-date                                    Use date file was shared instead of modified date.
      --drive-use-trash                                          Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix                    If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-auth-url string                                  Auth server URL.
      --dropbox-chunk-size SizeSuffix                            Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                                 OAuth Client Id
      --dropbox-client-secret string                             OAuth Client Secret
      --dropbox-encoding MultiEncoder                            This sets the encoding for the backend. (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
      --dropbox-impersonate string                               Impersonate this user when using a business account.
      --dropbox-token string                                     OAuth Access Token as a JSON blob.
      --dropbox-token-url string                                 Token server url.
      --fichier-api-key string                                   Your API Key, get it from https://1fichier.com/console/params.pl
      --fichier-encoding MultiEncoder                            This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
      --fichier-shared-folder string                             If you want to download a shared folder, add this parameter
      --ftp-concurrency int                                      Maximum number of FTP simultaneous connections, 0 for unlimited
      --ftp-disable-epsv                                         Disable using EPSV even if server advertises support
      --ftp-encoding MultiEncoder                                This sets the encoding for the backend. (default Slash,Del,Ctl,RightSpace,Dot)
      --ftp-explicit-tls                                         Use FTP over TLS (Explicit)
      --ftp-host string                                          FTP host to connect to
      --ftp-no-check-certificate                                 Do not verify the TLS certificate of the server
      --ftp-pass string                                          FTP password (obscured)
      --ftp-port string                                          FTP port, leave blank to use default (21)
      --ftp-tls                                                  Use FTPS over TLS (Implicit)
      --ftp-user string                                          FTP username, leave blank for current username, $USER
      --gcs-anonymous                                            Access public buckets and objects without credentials
      --gcs-auth-url string                                      Auth server URL.
      --gcs-bucket-acl string                                    Access Control List for new buckets.
      --gcs-bucket-policy-only                                   Access checks should use bucket-level IAM policies.
      --gcs-client-id string                                     OAuth Client Id
      --gcs-client-secret string                                 OAuth Client Secret
      --gcs-encoding MultiEncoder                                This sets the encoding for the backend. (default Slash,CrLf,InvalidUtf8,Dot)
      --gcs-location string                                      Location for the newly created buckets.
      --gcs-object-acl string                                    Access Control List for new objects.
      --gcs-project-number string                                Project number.
      --gcs-service-account-file string                          Service Account Credentials JSON file path
      --gcs-storage-class string                                 The storage class to use when storing objects in Google Cloud Storage.
      --gcs-token string                                         OAuth Access Token as a JSON blob.
      --gcs-token-url string                                     Token server url.
      --gphotos-auth-url string                                  Auth server URL.
      --gphotos-client-id string                                 OAuth Client Id
      --gphotos-client-secret string                             OAuth Client Secret
      --gphotos-read-only                                        Set to make the Google Photos backend read only.
      --gphotos-read-size                                        Set to read the size of media items.
      --gphotos-start-year int                                   Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
      --gphotos-token string                                     OAuth Access Token as a JSON blob.
      --gphotos-token-url string                                 Token server url.
      --http-headers CommaSepList                                Set HTTP headers for all transactions
      --http-no-head                                             Don't use HEAD requests to find file sizes in dir listing
      --http-no-slash                                            Set this if the site doesn't end directories with /
      --http-url string                                          URL of http host to connect to
      --hubic-auth-url string                                    Auth server URL.
      --hubic-chunk-size SizeSuffix                              Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                                   OAuth Client Id
      --hubic-client-secret string                               OAuth Client Secret
      --hubic-encoding MultiEncoder                              This sets the encoding for the backend. (default Slash,InvalidUtf8)
      --hubic-no-chunk                                           Don't chunk files during streaming upload.
      --hubic-token string                                       OAuth Access Token as a JSON blob.
      --hubic-token-url string                                   Token server url.
      --jottacloud-encoding MultiEncoder                         This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
      --jottacloud-hard-delete                                   Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix                   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-trashed-only                                  Only show files that are in the trash.
      --jottacloud-upload-resume-limit SizeSuffix                Files bigger than this can be resumed if the upload fails. (default 10M)
      --koofr-encoding MultiEncoder                              This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
      --koofr-endpoint string                                    The Koofr API endpoint to use (default "https://app.koofr.net")
      --koofr-mountid string                                     Mount ID of the mount to use. If omitted, the primary mount is used.
      --koofr-password string                                    Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
      --koofr-setmtime                                           Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true)
      --koofr-user string                                        Your Koofr user name
  -l, --links                                                    Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-case-insensitive                                   Force the filesystem to report itself as case insensitive
      --local-case-sensitive                                     Force the filesystem to report itself as case sensitive.
      --local-encoding MultiEncoder                              This sets the encoding for the backend. (default Slash,Dot)
      --local-no-check-updated                                   Don't check to see if the files change during upload
      --local-no-set-modtime                                     Disable setting modtime
      --local-no-sparse                                          Disable sparse files for multi-thread downloads
      --local-no-unicode-normalization                           Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                                       Disable UNC (long path names) conversion on Windows
      --mailru-check-hash                                        What should copy do if file checksum is mismatched or invalid (default true)
      --mailru-encoding MultiEncoder                             This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
      --mailru-pass string                                       Password (obscured)
      --mailru-speedup-enable                                    Skip full upload if there is another file with same data hash. (default true)
      --mailru-speedup-file-patterns string                      Comma separated list of file name patterns eligible for speedup (put by hash). (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
      --mailru-speedup-max-disk SizeSuffix                       This option allows you to disable speedup (put by hash) for large files (default 3G)
      --mailru-speedup-max-memory SizeSuffix                     Files larger than the size given below will always be hashed on disk. (default 32M)
      --mailru-user string                                       User name (usually email)
      --mega-debug                                               Output more debug from Mega.
      --mega-encoding MultiEncoder                               This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
      --mega-hard-delete                                         Delete files permanently rather than putting them into the trash.
      --mega-pass string                                         Password. (obscured)
      --mega-user string                                         User name
  -x, --one-file-system                                          Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-auth-url string                                 Auth server URL.
      --onedrive-chunk-size SizeSuffix                           Chunk size to upload files with - must be multiple of 320k (327,680 bytes). (default 10M)
      --onedrive-client-id string                                OAuth Client Id
      --onedrive-client-secret string                            OAuth Client Secret
      --onedrive-drive-id string                                 The ID of the drive to use
      --onedrive-drive-type string                               The type of the drive ( personal | business | documentLibrary )
      --onedrive-encoding MultiEncoder                           This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
      --onedrive-expose-onenote-files                            Set to make OneNote files show up in directory listings.
      --onedrive-no-versions                                     Remove all versions on modifying operations
      --onedrive-server-side-across-configs                      Allow server side operations (eg copy) to work across different onedrive configs.
      --onedrive-token string                                    OAuth Access Token as a JSON blob.
      --onedrive-token-url string                                Token server url.
      --opendrive-chunk-size SizeSuffix                          Files will be uploaded in chunks this size. (default 10M)
      --opendrive-encoding MultiEncoder                          This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
      --opendrive-password string                                Password. (obscured)
      --opendrive-username string                                Username
      --pcloud-auth-url string                                   Auth server URL.
      --pcloud-client-id string                                  OAuth Client Id
      --pcloud-client-secret string                              OAuth Client Secret
      --pcloud-encoding MultiEncoder                             This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
      --pcloud-hostname string                                   Hostname to connect to. (default "api.pcloud.com")
      --pcloud-root-folder-id string                             Fill in for rclone to use a non root folder as its starting point. (default "d0")
      --pcloud-token string                                      OAuth Access Token as a JSON blob.
      --pcloud-token-url string                                  Token server url.
      --premiumizeme-encoding MultiEncoder                       This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
      --putio-encoding MultiEncoder                              This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
      --qingstor-access-key-id string                            QingStor Access Key ID
      --qingstor-chunk-size SizeSuffix                           Chunk size to use for uploading. (default 4M)
      --qingstor-connection-retries int                          Number of connection retries. (default 3)
      --qingstor-encoding MultiEncoder                           This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8)
      --qingstor-endpoint string                                 Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                                        Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
      --qingstor-secret-access-key string                        QingStor Secret Access Key (password)
      --qingstor-upload-concurrency int                          Concurrency for multipart uploads. (default 1)
      --qingstor-upload-cutoff SizeSuffix                        Cutoff for switching to chunked upload (default 200M)
      --qingstor-zone string                                     Zone to connect to.
      --s3-access-key-id string                                  AWS Access Key ID.
      --s3-acl string                                            Canned ACL used when creating buckets and storing or copying objects.
      --s3-bucket-acl string                                     Canned ACL used when creating buckets.
      --s3-chunk-size SizeSuffix                                 Chunk size to use for uploading. (default 5M)
      --s3-copy-cutoff SizeSuffix                                Cutoff for switching to multipart copy (default 4.656G)
      --s3-disable-checksum                                      Don't store MD5 checksum with object metadata
      --s3-encoding MultiEncoder                                 This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
      --s3-endpoint string                                       Endpoint for S3 API.
      --s3-env-auth                                              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style                                      If true use path style access if false use virtual hosted style. (default true)
      --s3-leave-parts-on-error                                  If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
      --s3-list-chunk int                                        Size of listing chunk (response list for each ListObject S3 request). (default 1000)
      --s3-location-constraint string                            Location constraint - must be set to match the Region.
      --s3-max-upload-parts int                                  Maximum number of parts in a multipart upload. (default 10000)
      --s3-memory-pool-flush-time Duration                       How often internal memory buffer pools will be flushed. (default 1m0s)
      --s3-memory-pool-use-mmap                                  Whether to use mmap buffers in internal memory pool.
      --s3-no-check-bucket                                       If set don't attempt to check the bucket exists or create it
      --s3-profile string                                        Profile to use in the shared credentials file
      --s3-provider string                                       Choose your S3 provider.
      --s3-region string                                         Region to connect to.
      --s3-secret-access-key string                              AWS Secret Access Key (password)
      --s3-server-side-encryption string                         The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string                                  An AWS session token
      --s3-shared-credentials-file string                        Path to the shared credentials file
      --s3-sse-customer-algorithm string                         If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
      --s3-sse-customer-key string                               If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
      --s3-sse-customer-key-md5 string                           If using SSE-C you must provide the secret encryption key MD5 checksum.
      --s3-sse-kms-key-id string                                 If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string                                  The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int                                Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix                              Cutoff for switching to chunked upload (default 200M)
      --s3-use-accelerate-endpoint                               If true use the AWS S3 accelerated endpoint.
      --s3-v2-auth                                               If true use v2 authentication.
      --seafile-2fa                                              Two-factor authentication ('true' if the account has 2FA enabled)
      --seafile-create-library                                   Should rclone create a library if it doesn't exist
      --seafile-encoding MultiEncoder                            This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
      --seafile-library string                                   Name of the library. Leave blank to access all non-encrypted libraries.
      --seafile-library-key string                               Library password (for encrypted libraries only). Leave blank if you pass it through the command line. (obscured)
      --seafile-pass string                                      Password (obscured)
      --seafile-url string                                       URL of seafile host to connect to
      --seafile-user string                                      User name (usually email address)
      --sftp-ask-password                                        Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck                                   Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string                                         SSH host to connect to
      --sftp-key-file string                                     Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string                                The passphrase to decrypt the PEM-encoded private key file. (obscured)
      --sftp-key-pem string                                      Raw PEM-encoded private key. If specified, it will override the key_file parameter.
      --sftp-key-use-agent                                       When set forces the usage of the ssh-agent.
      --sftp-md5sum-command string                               The command used to read md5 hashes. Leave blank for autodetect.
      --sftp-pass string                                         SSH password, leave blank to use ssh-agent. (obscured)
      --sftp-path-override string                                Override path used by SSH connection.
      --sftp-port string                                         SSH port, leave blank to use default (22)
      --sftp-server-command string                               Specifies the path or command to run a sftp server on the remote host.
      --sftp-set-modtime                                         Set the modified time on the remote if set. (default true)
      --sftp-sha1sum-command string                              The command used to read sha1 hashes. Leave blank for autodetect.
      --sftp-skip-links                                          Set to skip any symlinks and any other non regular files.
      --sftp-subsystem string                                    Specifies the SSH2 subsystem on the remote host. (default "sftp")
      --sftp-use-insecure-cipher                                 Enable the use of insecure ciphers and key exchange methods.
      --sftp-user string                                         SSH username, leave blank for current username, $USER
      --sharefile-chunk-size SizeSuffix                          Upload chunk size. Must be a power of 2 >= 256k. (default 64M)
      --sharefile-encoding MultiEncoder                          This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
      --sharefile-endpoint string                                Endpoint for API calls.
      --sharefile-root-folder-id string                          ID of the root folder
      --sharefile-upload-cutoff SizeSuffix                       Cutoff for switching to multipart upload. (default 128M)
      --skip-links                                               Don't warn about skipped symlinks.
      --sugarsync-access-key-id string                           Sugarsync Access Key ID.
      --sugarsync-app-id string                                  Sugarsync App ID.
      --sugarsync-authorization string                           Sugarsync authorization
      --sugarsync-authorization-expiry string                    Sugarsync authorization expiry
      --sugarsync-deleted-id string                              Sugarsync deleted folder id
      --sugarsync-encoding MultiEncoder                          This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8,Dot)
      --sugarsync-hard-delete                                    Permanently delete files if true
      --sugarsync-private-access-key string                      Sugarsync Private Access Key
      --sugarsync-refresh-token string                           Sugarsync refresh token
      --sugarsync-root-id string                                 Sugarsync root id
      --sugarsync-user string                                    Sugarsync user
      --swift-application-credential-id string                   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string                 Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string               Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string                                        Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string                                  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int                                   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix                              Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string                                      User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-encoding MultiEncoder                              This sets the encoding for the backend. (default Slash,InvalidUtf8)
      --swift-endpoint-type string                               Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth                                           Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string                                         API key or password (OS_PASSWORD).
      --swift-no-chunk                                           Don't chunk files during streaming upload.
      --swift-region string                                      Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string                              The storage policy to use when creating a new container
      --swift-storage-url string                                 Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string                                      Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string                               Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string                                   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string                                        User name to log in (OS_USERNAME).
      --swift-user-id string                                     User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --tardigrade-access-grant string                           Access Grant.
      --tardigrade-api-key string                                API Key.
      --tardigrade-passphrase string                             Encryption Passphrase. To access existing objects enter passphrase used for uploading.
      --tardigrade-provider string                               Choose an authentication method. (default "existing")
      --tardigrade-satellite-address <nodeid>@<address>:<port>   Satellite Address. Custom satellite address should match the format: <nodeid>@<address>:<port>. (default "us-central-1.tardigrade.io")
      --union-action-policy string                               Policy to choose upstream on ACTION category. (default "epall")
      --union-cache-time int                                     Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. (default 120)
      --union-create-policy string                               Policy to choose upstream on CREATE category. (default "epmfs")
      --union-search-policy string                               Policy to choose upstream on SEARCH category. (default "ff")
      --union-upstreams string                                   List of space separated upstreams.
      --webdav-bearer-token string                               Bearer token instead of user/pass (eg a Macaroon)
      --webdav-bearer-token-command string                       Command to run to get a bearer token
      --webdav-pass string                                       Password. (obscured)
      --webdav-url string                                        URL of http host to connect to
      --webdav-user string                                       User name
      --webdav-vendor string                                     Name of the Webdav site/service/software you are using
      --yandex-auth-url string                                   Auth server URL.
      --yandex-client-id string                                  OAuth Client Id
      --yandex-client-secret string                              OAuth Client Secret
      --yandex-encoding MultiEncoder                             This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
      --yandex-token string                                      OAuth Access Token as a JSON blob.
      --yandex-token-url string                                  Token server url.

1Fichier

This is a backend for the 1fichier cloud storage service. Note that a Premium subscription is required to use the API.

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

The initial setup for 1Fichier involves getting the API key from the website which you need to do in your browser.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / 1Fichier
   \ "fichier"
[snip]
Storage> fichier
** See help for fichier backend at: https://rclone.org/fichier/ **

Your API Key, get it from https://1fichier.com/console/params.pl
Enter a string value. Press Enter for the default ("").
api_key> example_key

Edit advanced config? (y/n)
y) Yes
n) No
y/n> 
Remote config
--------------------
[remote]
type = fichier
api_key = example_key
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Once configured you can then use rclone like this,

List directories in top level of your 1Fichier account

rclone lsd remote:

List all the files in your 1Fichier account

rclone ls remote:

To copy a local directory to a 1Fichier directory called backup

rclone copy /home/source remote:backup

Modified time and hashes

1Fichier does not support modification times. It supports the Whirlpool hash algorithm.
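
Because modification times are unavailable, time-based syncing will not behave as expected; comparing by size or by checksum is more reliable. For example (illustrative commands, assuming a local source at /home/source as in the example above; the local backend can also compute Whirlpool hashes):

# Compare by Whirlpool checksum rather than modification time
rclone copy --checksum /home/source remote:backup

# Or compare by size only, which needs no hashing
rclone copy --size-only /home/source remote:backup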

Duplicated files

1Fichier can have two files with exactly the same name and path (unlike a normal file system).

Duplicated files cause problems with syncing and you will see messages in the log about duplicates.
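
The usual fix for duplicates is the rclone dedupe command, which can remove or rename them interactively or automatically. For example (an illustrative invocation; "newest" keeps only the most recently modified copy of each duplicated file):

# Keep the newest copy of each duplicated file
rclone dedupe --dedupe-mode newest remote:path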

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character  Value  Replacement
\          0x5C   ＼
<          0x3C   ＜
>          0x3E   ＞
"          0x22   ＂
$          0x24   ＄
`          0x60   ｀
'          0x27   ＇

File names can also not start or end with the following characters. These only get replaced if they are the first or last character in the name:

Character  Value  Replacement
SP         0x20   ␠

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard Options

Here are the standard options specific to fichier (1Fichier).

--fichier-api-key

Your API Key, get it from https://1fichier.com/console/params.pl

Advanced Options

Here are the advanced options specific to fichier (1Fichier).

--fichier-shared-folder

If you want to download a shared folder, add this parameter

--fichier-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Alias

The alias remote provides a new name for another remote.

Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.

During the initial setup with rclone config you will specify the target remote. The target remote can either be a local path or another remote.

Subfolders can be used in target remote. Assume an alias remote named backup with the target mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.

There will be no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop. The empty path is not allowed as a remote. To alias the current directory use . instead.

Here is an example of how to make an alias called remote for a local folder. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Alias for an existing remote
   \ "alias"
[snip]
Storage> alias
Remote or path to alias.
Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
remote> /mnt/storage/backup
Remote config
--------------------
[remote]
remote = /mnt/storage/backup
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote               alias

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

Once configured you can then use rclone like this,

List directories in top level in /mnt/storage/backup

rclone lsd remote:

List all the files in /mnt/storage/backup

rclone ls remote:

Copy another local directory to the alias directory called source

rclone copy /home/source remote:source

Standard Options

Here are the standard options specific to alias (Alias for an existing remote).

--alias-remote

Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".

Amazon Drive

Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.

Status

Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.

For the history on why rclone no longer has a set of Amazon Drive API keys see the forum.

If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!

Setup

The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.

The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.

Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own client_id and client_secret with Amazon Drive, or use a third party oauth proxy in which case you will need to enter client_id, client_secret, auth_url and token_url.

Note also that if you are not using Amazon's auth_url and token_url (ie you filled in something for those), then if setting up on a remote machine you can only use the copying-the-config method of configuration - rclone authorize will not work.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon Drive
   \ "amazon cloud drive"
[snip]
Storage> amazon cloud drive
Amazon Application Client Id - required.
client_id> your client ID goes here
Amazon Application Client Secret - required.
client_secret> your client secret goes here
Auth server URL - leave blank to use Amazon's.
auth_url> Optional auth URL
Token server url - leave blank to use Amazon's.
token_url> Optional token URL
Remote config
Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id = your client ID goes here
client_secret = your client secret goes here
auth_url = Optional auth URL
token_url = Optional token URL
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your Amazon Drive

rclone lsd remote:

List all the files in your Amazon Drive

rclone ls remote:

To copy a local directory to an Amazon Drive directory called backup

rclone copy /home/source remote:backup

Modified time and MD5SUMs

Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.

It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag.
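
For example (illustrative commands, assuming a local source at /home/source):

# Sync comparing MD5 checksums instead of the unreliable modification times
rclone sync --checksum /home/source remote:backup

# Verify that source and destination agree, again using MD5
rclone check /home/source remote:backup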

Restricted filename characters

Character  Value  Replacement
NUL        0x00   ␀
/          0x2F   ／

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Deleting files

Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.

Using with non .com Amazon accounts

Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.

Standard Options

Here are the standard options specific to amazon cloud drive (Amazon Drive).

--acd-client-id

OAuth Client Id Leave blank normally.

--acd-client-secret

OAuth Client Secret Leave blank normally.

Advanced Options

Here are the advanced options specific to amazon cloud drive (Amazon Drive).

--acd-token

OAuth Access Token as a JSON blob.

--acd-auth-url

Auth server URL. Leave blank to use the provider defaults.

--acd-token-url

Token server url. Leave blank to use the provider defaults.

--acd-checkpoint

Checkpoint for internal polling (debug).

--acd-upload-wait-per-gb

Additional time per GB to wait after a failed complete upload to see if it appears.

Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.

The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.

You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.

These values were determined empirically by observing lots of uploads of big files for a range of file sizes.

Upload with the "-v" flag to see more info about what rclone is doing in this situation.

--acd-templink-threshold

Files >= this size will be downloaded via their tempLink.

Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.

To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.

--acd-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.

Amazon Drive has an internal limit on the size of files that can be uploaded to the service. This limit is not officially published, but all files larger than it will fail.

At the time of writing (Jan 2016) the limit is in the area of 50GB per file. This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation as it would any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.
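
For example (an illustrative command; 50000M is simply a value just under the observed limit):

rclone sync --max-size 50000M /home/source remote:backup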

Amazon S3 Storage Providers

The S3 backend can be used with a number of different providers:

- AWS S3
- Alibaba Cloud (Aliyun) Object Storage System (OSS)
- Ceph
- DigitalOcean Spaces
- Dreamhost
- IBM COS S3
- Minio
- Scaleway
- StackPath
- Tencent Cloud Object Storage (COS)
- Wasabi

Paths are specified as remote:bucket (or remote: for the lsd command). You may put subdirectories in too, eg remote:bucket/path/to/dir.

Once you have made a remote (see the provider specific section above) you can use it like this:

See all buckets

rclone lsd remote:

Make a new bucket

rclone mkdir remote:bucket

List the contents of a bucket

rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

rclone sync -i /home/local/directory remote:bucket

AWS S3

Here is an example of making an s3 configuration. First run

rclone config

This will guide you through an interactive setup process.

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
   \ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Ceph Object Storage
   \ "Ceph"
 3 / Digital Ocean Spaces
   \ "DigitalOcean"
 4 / Dreamhost DreamObjects
   \ "Dreamhost"
 5 / IBM COS S3
   \ "IBMCOS"
 6 / Minio Object Storage
   \ "Minio"
 7 / Wasabi Object Storage
   \ "Wasabi"
 8 / Any other S3 compatible provider
   \ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ "us-east-2"
   / US West (Oregon) Region
 3 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 4 | Needs location constraint us-west-1.
   \ "us-west-1"
   / Canada (Central) Region
 5 | Needs location constraint ca-central-1.
   \ "ca-central-1"
   / EU (Ireland) Region
 6 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (London) Region
 7 | Needs location constraint eu-west-2.
   \ "eu-west-2"
   / EU (Frankfurt) Region
 8 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
 9 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / Asia Pacific (Hong Kong) Region
14 | Needs location constraint ap-east-1.
   \ "ap-east-1"
   / South America (Sao Paulo) Region
15 | Needs location constraint sa-east-1.
   \ "sa-east-1"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint> 
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
 2 / US East (Ohio) Region.
   \ "us-east-2"
 3 / US West (Oregon) Region.
   \ "us-west-2"
 4 / US West (Northern California) Region.
   \ "us-west-1"
 5 / Canada (Central) Region.
   \ "ca-central-1"
 6 / EU (Ireland) Region.
   \ "eu-west-1"
 7 / EU (London) Region.
   \ "eu-west-2"
 8 / EU Region.
   \ "EU"
 9 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
12 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
   \ "ap-south-1"
14 / Asia Pacific (Hong Kong)
   \ "ap-east-1"
15 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
 6 / Glacier storage class
   \ "GLACIER"
 7 / Glacier Deep Archive storage class
   \ "DEEP_ARCHIVE"
 8 / Intelligent-Tiering storage class
   \ "INTELLIGENT_TIERING"
storage_class> 1
Remote config
--------------------
[remote]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
endpoint = 
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> 

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

--update and --use-server-modtime

As noted below, the modified time is stored as metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
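
For example (an illustrative command, assuming a local source directory):

# Upload only files whose local modtime is newer than the object's upload
# time, avoiding the extra per-object metadata request
rclone sync --update --use-server-modtime /home/source remote:bucket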

Modified time

The modified time is stored as metadata on the object as X-Amz-Meta-Mtime, a floating point number of seconds since the epoch, accurate to 1 ns.

If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification time, if the object can be copied in a single part. In the case the object is larger than 5GB or is in Glacier or Glacier Deep Archive storage, the object will be uploaded rather than copied.

Cleanup

If you run rclone cleanup s3:bucket then it will remove all pending multipart uploads older than 24 hours. You can use the -i flag to see exactly what it will do. If you want more control over the expiry date then run rclone backend cleanup s3:bucket -o max-age=1h to expire all uploads older than one hour. You can use rclone backend list-multipart-uploads s3:bucket to see the pending multipart uploads.

Restricted filename characters

S3 allows any valid UTF-8 string as a key.

Invalid UTF-8 bytes will be replaced, as they can't be used in XML.

The following characters are replaced since these are problematic when dealing with the REST API:

Character  Value  Replacement
NUL        0x00   ␀
/          0x2F   ／

The encoding will also encode these file names as they don't seem to work with the SDK properly:

File name  Replacement
.          ．
..         ．．

Multipart uploads

rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB.

Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.

rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff. This can be a maximum of 5GB and a minimum of 0 (ie always upload multipart files).

The chunk sizes used in the multipart upload are specified by --s3-chunk-size and the number of chunks uploaded concurrently is specified by --s3-upload-concurrency.

Multipart uploads will use --transfers * --s3-upload-concurrency * --s3-chunk-size extra memory. Single part uploads do not use extra memory.

Single part transfers can be faster than multipart transfers or slower depending on your latency from S3 - the more latency, the more likely single part transfers will be faster.

Increasing --s3-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --s3-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
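
For example, a transfer tuned along those lines might look like this (an illustrative command; with the default --transfers 4 it would use roughly 4 * 8 * 16M = 512M of extra buffer memory):

rclone copy --s3-upload-concurrency 8 --s3-chunk-size 16M /home/source remote:bucket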

Buckets and Regions

With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.

Authentication

There are a number of ways to supply rclone with a set of AWS credentials, with and without using the environment.

The different authentication methods are tried in this order:

If none of these options actually end up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see below).

S3 Permissions

When using the sync subcommand of rclone the following minimum permissions are required to be available on the bucket being written to:

- ListBucket
- DeleteObject
- GetObject
- PutObject
- PutObjectAcl

When using the lsd subcommand, the ListAllMyBuckets permission is required.

Example policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
              "arn:aws:s3:::BUCKET_NAME/*",
              "arn:aws:s3:::BUCKET_NAME"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }   
    ]
}

Notes on above:

  1. This is a policy that can be used when creating a bucket. It assumes that USER_NAME has been created.
  2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.

For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync.

Key Management System (KMS)

If you are using server side encryption with KMS then you will find you can't transfer small objects. As a work-around you can use the --ignore-checksum flag.

A proper fix is being worked on in issue #1824.
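
As a sketch of the work-around (an illustrative command):

rclone copy --ignore-checksum /home/source remote:bucket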

Glacier and Glacier Deep Archive

You can upload objects using the glacier storage class or transition them to glacier using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below.

2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to restore the object(s) in question before using rclone.
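
One way to do the restore is with rclone's own backend restore command, described in the backend commands section below. For example (an illustrative invocation; the priority and lifetime values are only examples):

rclone backend restore s3:bucket/path/to/file -o priority=Standard -o lifetime=7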

Note that rclone only speaks the S3 API it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults.

Standard Options

Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)).

--s3-provider

Choose your S3 provider.

--s3-env-auth

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key are blank.

--s3-access-key-id

AWS Access Key ID. Leave blank for anonymous access or runtime credentials.

--s3-secret-access-key

AWS Secret Access Key (password). Leave blank for anonymous access or runtime credentials.

--s3-region

Region to connect to.

--s3-region

Region to connect to.

--s3-region

Region to connect to. Leave blank if you are using an S3 clone and you don't have a region.

--s3-endpoint

Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region.

--s3-endpoint

Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise.

--s3-endpoint

Endpoint for OSS API.

--s3-endpoint

Endpoint for Scaleway Object Storage.

--s3-endpoint

Endpoint for StackPath Object Storage.

--s3-endpoint

Endpoint for Tencent COS API.

--s3-endpoint

Endpoint for S3 API. Required when using an S3 clone.

--s3-location-constraint

Location constraint - must be set to match the Region. Used when creating buckets only.

--s3-location-constraint

Location constraint - must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter

--s3-location-constraint

Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only.

--s3-acl

Canned ACL used when creating buckets and storing or copying objects.

This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.

--s3-server-side-encryption

The server-side encryption algorithm used when storing this object in S3.

--s3-sse-kms-key-id

If using KMS ID you must provide the ARN of Key.

--s3-storage-class

The storage class to use when storing new objects in S3.

--s3-storage-class

The storage class to use when storing new objects in OSS.

--s3-storage-class

The storage class to use when storing new objects in Tencent COS.

--s3-storage-class

The storage class to use when storing new objects in S3.

Advanced Options

Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)).

--s3-bucket-acl

Canned ACL used when creating buckets.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied only when creating buckets. If it isn't set then "acl" is used instead.

--s3-sse-customer-algorithm

If using SSE-C, the server-side encryption algorithm used when storing this object in S3.

--s3-sse-customer-key

If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.

--s3-sse-customer-key-md5

If using SSE-C you must provide the secret encryption key MD5 checksum.

--s3-upload-cutoff

Cutoff for switching to chunked upload

Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.

--s3-chunk-size

Chunk size to use for uploading.

When uploading files larger than upload_cutoff or files with unknown size (eg from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size.

Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer.

If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.

Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit.

Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5MB and there can be at most 10,000 chunks, this means that by default the maximum size of file you can stream upload is 48GB. If you wish to stream upload larger files then you will need to increase chunk_size.
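
For example, to stream upload files of up to about 1.25TB you would need chunks of at least 1.25TB / 10,000 ≈ 128M (an illustrative pipeline; some_backup_program stands in for whatever produces the stream):

# Note: --s3-upload-concurrency chunks of this size are buffered per transfer
some_backup_program | rclone rcat --s3-chunk-size 128M remote:bucket/backup.tar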

--s3-max-upload-parts

Maximum number of parts in a multipart upload.

This option defines the maximum number of multipart chunks to use when doing a multipart upload.

This can be useful if a service does not support the AWS S3 specification of 10,000 chunks.

Rclone will automatically increase the chunk size when uploading a large file of a known size to stay below this number of chunks limit.

--s3-copy-cutoff

Cutoff for switching to multipart copy

Any files larger than this that need to be server side copied will be copied in chunks of this size.

The minimum is 0 and the maximum is 5GB.

--s3-disable-checksum

Don't store MD5 checksum with object metadata

Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.

--s3-shared-credentials-file

Path to the shared credentials file

If env_auth = true then rclone can use a shared credentials file.

If this variable is empty rclone will look for the "AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty it will default to the current user's home directory.

Linux/OSX: "$HOME/.aws/credentials"
Windows:   "%USERPROFILE%\.aws\credentials"

--s3-profile

Profile to use in the shared credentials file

If env_auth = true then rclone can use a shared credentials file. This variable controls which profile is used in that file.

If empty it will default to the environment variable "AWS_PROFILE" or "default" if that environment variable is also not set.
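
For example, a remote using a named profile from a non-default credentials file might be configured like this (an illustrative snippet; the profile name and file path are assumptions):

# Illustrative: profile name and credentials path are assumptions
[work]
type = s3
provider = AWS
env_auth = true
profile = work-profile
shared_credentials_file = /home/user/.aws/credentials-work
region = us-east-1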

--s3-session-token

An AWS session token

--s3-upload-concurrency

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded concurrently.

If you are uploading small numbers of large files over high speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.

--s3-force-path-style

If true use path style access; if false use virtual hosted style.

If this is true (the default) then rclone will use path style access; if false then rclone will use virtual hosted style access. See the AWS S3 docs for more info.

Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to false - rclone will do this automatically based on the provider setting.

--s3-v2-auth

If true use v2 authentication.

If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication.

Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.

--s3-use-accelerate-endpoint

If true use the AWS S3 accelerated endpoint.

See: AWS S3 Transfer acceleration

--s3-leave-parts-on-error

If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.

It should be set to true for resuming uploads across different sessions.

WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.

--s3-list-chunk

Size of listing chunk (response list for each ListObject S3 request).

This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification. Most services truncate the response list to 1000 objects even if more than that is requested. In AWS S3 this is a global maximum and cannot be changed, see AWS S3. In Ceph, this can be increased with the "rgw list buckets max chunk" option.

--s3-no-check-bucket

If set, don't attempt to check that the bucket exists or create it.

This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.
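
As a sketch, when you know the bucket already exists (an illustrative command):

rclone copy --s3-no-check-bucket /home/source remote:existing-bucket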

--s3-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

--s3-memory-pool-flush-time

How often internal memory buffer pools will be flushed. Uploads which require additional buffers (eg multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.

--s3-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool.

Backend commands

Here are the commands specific to the s3 backend.

Run them with

rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the "rclone backend" command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

restore

Restore objects from GLACIER to normal storage

rclone backend restore remote: [options] [<arguments>+]

This command can be used to restore one or more objects from GLACIER to normal storage.

Usage Examples:

rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]

This flag also obeys the filters. Test first with the -i/--interactive or --dry-run flags.

rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard

All the objects shown will be marked for restore, then

rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard

It returns a list of status dictionaries with Path and Status keys. The Status will be OK if it was successful or an error message if not.

[
    {
        "Status": "OK",
        "Path": "test.txt"
    },
    {
        "Status": "OK",
        "Path": "test/file4.txt"
    }
]

Options:

- "description": The optional description for the job.
- "lifetime": Lifetime of the active copy in days
- "priority": Priority of restore: Standard|Expedited|Bulk

list-multipart-uploads

List the unfinished multipart uploads

rclone backend list-multipart-uploads remote: [options] [<arguments>+]

This command lists the unfinished multipart uploads in JSON format.

rclone backend list-multipart-uploads s3:bucket/path/to/object

It returns a dictionary of buckets with values as lists of unfinished multipart uploads.

You can call it with no bucket in which case it lists all buckets, with a bucket or with a bucket and path.

{
  "rclone": [
    {
      "Initiated": "2020-06-26T14:20:36Z",
      "Initiator": {
        "DisplayName": "XXX",
        "ID": "arn:aws:iam::XXX:user/XXX"
      },
      "Key": "KEY",
      "Owner": {
        "DisplayName": null,
        "ID": "XXX"
      },
      "StorageClass": "STANDARD",
      "UploadId": "XXX"
    }
  ],
  "rclone-1000files": [],
  "rclone-dst": []
}

cleanup

Remove unfinished multipart uploads.

rclone backend cleanup remote: [options] [<arguments>+]

This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.

Note that you can use -i/--dry-run with this command to see what it would do.

rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object

Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

Options:

- "max-age": Max age of upload to delete

Anonymous access to public buckets

If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key. Your config should end up looking like this:

[anons3]
type = s3
provider = AWS
env_auth = false
access_key_id = 
secret_access_key = 
region = us-east-1
endpoint = 
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 

Then use it as normal with the name of the public bucket, eg

rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.

Ceph

Ceph is an open source unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface.

To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

[ceph]
type = s3
provider = Ceph
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =

If you are using an older version of CEPH, eg 10.2.x Jewel, then you may need to supply the parameter --s3-upload-cutoff 0 or put this in the config file as upload_cutoff 0 to work around a bug which causes uploading of small files to fail.

Note also that Ceph sometimes puts / in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the / escaped as \/. Make sure you only write / in the secret access key.

Eg the dump from Ceph looks something like this (irrelevant keys removed).

{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}

Because this is a JSON dump, it is encoding the / as \/, so if you use the secret key as xxxxxx/xxxx it will work fine.

Dreamhost

Dreamhost DreamObjects is an object storage system based on CEPH.

To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

[dreamobjects]
type = s3
provider = DreamHost
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-west-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =

DigitalOcean Spaces

Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.

To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "Applications & API" page of the DigitalOcean control panel. They will be needed when prompted by rclone config for your access_key_id and secret_access_key.

When prompted for a region or location_constraint, press enter to use the default value. The region must be included in the endpoint setting (e.g. nyc3.digitaloceanspaces.com). The default values can be used for other settings.

Going through the whole process of creating a new remote by running rclone config, each prompt should be answered as shown below:

Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region>
endpoint> nyc3.digitaloceanspaces.com
location_constraint>
acl>
storage_class>

The resulting configuration file should look like:

[spaces]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =

Once configured, you can create a new Space and begin copying files. For example:

rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space

IBM COS (S3)

Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit http://www.ibm.com/cloud/object-storage.

To configure access to IBM COS S3, follow the steps below:

  1. Run rclone config and select n for a new remote.
    2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
  2. Enter the name for the configuration
    name> <YOUR NAME>
  1. Select "s3" storage.
Choose a number from below, or type in your own value
    1 / Alias for an existing remote
    \ "alias"
    2 / Amazon Drive
    \ "amazon cloud drive"
    3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
    \ "s3"
    4 / Backblaze B2
    \ "b2"
[snip]
    23 / http Connection
    \ "http"
Storage> 3
  4. Select IBM COS as the S3 Storage Provider.
Choose the S3 provider.
Choose a number from below, or type in your own value
     1 / Choose this option to configure Storage to AWS S3
       \ "AWS"
     2 / Choose this option to configure Storage to Ceph Systems
     \ "Ceph"
     3 /  Choose this option to configure Storage to Dreamhost
     \ "Dreamhost"
   4 / Choose this option to the configure Storage to IBM COS S3
     \ "IBMCOS"
     5 / Choose this option to the configure Storage to Minio
     \ "Minio"
     Provider>4
  5. Enter the Access Key and Secret.
    AWS Access Key ID - leave blank for anonymous access or runtime credentials.
    access_key_id> <>
    AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
    secret_access_key> <>
  6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.
    Endpoint for IBM COS S3 API.
    Specify if using an IBM COS On Premise.
    Choose a number from below, or type in your own value
     1 / US Cross Region Endpoint
       \ "s3-api.us-geo.objectstorage.softlayer.net"
     2 / US Cross Region Dallas Endpoint
       \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
     3 / US Cross Region Washington DC Endpoint
       \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
     4 / US Cross Region San Jose Endpoint
       \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
     5 / US Cross Region Private Endpoint
       \ "s3-api.us-geo.objectstorage.service.networklayer.com"
     6 / US Cross Region Dallas Private Endpoint
       \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
     7 / US Cross Region Washington DC Private Endpoint
       \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
     8 / US Cross Region San Jose Private Endpoint
       \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
     9 / US Region East Endpoint
       \ "s3.us-east.objectstorage.softlayer.net"
    10 / US Region East Private Endpoint
       \ "s3.us-east.objectstorage.service.networklayer.com"
    11 / US Region South Endpoint
[snip]
    34 / Toronto Single Site Private Endpoint
       \ "s3.tor01.objectstorage.service.networklayer.com"
    endpoint>1
  7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, just hit enter.
     1 / US Cross Region Standard
       \ "us-standard"
     2 / US Cross Region Vault
       \ "us-vault"
     3 / US Cross Region Cold
       \ "us-cold"
     4 / US Cross Region Flex
       \ "us-flex"
     5 / US East Region Standard
       \ "us-east-standard"
     6 / US East Region Vault
       \ "us-east-vault"
     7 / US East Region Cold
       \ "us-east-cold"
     8 / US East Region Flex
       \ "us-east-flex"
     9 / US South Region Standard
       \ "us-south-standard"
    10 / US South Region Vault
       \ "us-south-vault"
[snip]
    32 / Toronto Flex
       \ "tor01-flex"
location_constraint>1
  8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
      1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
      \ "private"
      2  / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
      \ "public-read"
      3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
      \ "public-read-write"
      4  / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
      \ "authenticated-read"
acl> 1
  9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this
    [xxx]
    type = s3
    Provider = IBMCOS
    access_key_id = xxx
    secret_access_key = yyy
    endpoint = s3-api.us-geo.objectstorage.softlayer.net
    location_constraint = us-standard
    acl = private
  10. Execute rclone commands
    1)  Create a bucket.
        rclone mkdir IBM-COS-XREGION:newbucket
    2)  List available buckets.
        rclone lsd IBM-COS-XREGION:
        -1 2017-11-08 21:16:22        -1 test
        -1 2018-02-14 20:16:39        -1 newbucket
    3)  List contents of a bucket.
        rclone ls IBM-COS-XREGION:newbucket
        18685952 test.exe
    4)  Copy a file from local to remote.
        rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
    5)  Copy a file from remote to local.
        rclone copy IBM-COS-XREGION:newbucket/file.txt .
    6)  Delete a file on remote.
        rclone delete IBM-COS-XREGION:newbucket/file.txt

Minio

Minio is an object storage server built for cloud application developers and devops.

It is very easy to install and provides an S3 compatible server which can be used by rclone.

To use it, install Minio following the instructions here.

When it configures itself Minio will print something like this

Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region:    us-east-1
SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis

Browser Access:
   http://192.168.1.106:9000  http://172.23.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide

Drive Capacity: 26 GiB Free, 165 GiB Total

These details need to go into rclone config like this. Note that it is important to put the region in as stated above.

env_auth> 1
access_key_id> USWUXHGYZQYFYFFIT3RE
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>

Which makes the config file look like this

[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =

So once set up, for example to copy files into a bucket

rclone copy /path/to/files minio:bucket

Scaleway

Scaleway's Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. Files can be dropped from the Scaleway console or transferred through their API and CLI or using any S3-compatible tool.

Scaleway provides an S3 interface which can be configured for use with rclone like this:

[scaleway]
type = s3
provider = Scaleway
env_auth = false
endpoint = s3.nl-ams.scw.cloud
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
region = nl-ams
location_constraint =
acl = private
server_side_encryption =
storage_class =

Wasabi

Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost.

Wasabi provides an S3 interface which can be configured for use with rclone like this.

No remotes found - make a new one
n) New remote
s) Set configuration password
n/s> n
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
[snip]
region> us-east-1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.wasabisys.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
storage_class>
Remote config
--------------------
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

This will leave the config file looking like this.

[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =

Alibaba OSS

Here is an example of making an Alibaba Cloud (Aliyun) OSS configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> oss
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
   \ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ "Alibaba"
 3 / Ceph Object Storage
   \ "Ceph"
[snip]
provider> Alibaba
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> accesskeyid
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secretaccesskey
Endpoint for OSS API.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / East China 1 (Hangzhou)
   \ "oss-cn-hangzhou.aliyuncs.com"
 2 / East China 2 (Shanghai)
   \ "oss-cn-shanghai.aliyuncs.com"
 3 / North China 1 (Qingdao)
   \ "oss-cn-qingdao.aliyuncs.com"
[snip]
endpoint> 1
Canned ACL used when creating buckets and storing or copying objects.

Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
[snip]
acl> 1
The storage class to use when storing new objects in OSS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Archive storage mode.
   \ "GLACIER"
 4 / Infrequent access storage mode.
   \ "STANDARD_IA"
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[oss]
type = s3
provider = Alibaba
env_auth = false
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = oss-cn-hangzhou.aliyuncs.com
acl = private
storage_class = Standard
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Tencent COS

Tencent Cloud Object Storage (COS) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.

To configure access to Tencent COS, follow the steps below:

  1. Run rclone config and select n for a new remote.
rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
  2. Give the name of the configuration. For example, name it 'cos'.
name> cos
  3. Select s3 storage.
Choose a number from below, or type in your own value
1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
   \ "s3"
[snip]
Storage> s3
  4. Select TencentCOS provider.
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
   \ "AWS"
[snip]
11 / Tencent Cloud Object Storage (COS)
   \ "TencentCOS"
[snip]
provider> TencentCOS
  5. Enter your SecretId and SecretKey of Tencent Cloud.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx
  6. Select the endpoint for Tencent COS. These are the standard endpoints for the different regions.
 1 / Beijing Region.
   \ "cos.ap-beijing.myqcloud.com"
 2 / Nanjing Region.
   \ "cos.ap-nanjing.myqcloud.com"
 3 / Shanghai Region.
   \ "cos.ap-shanghai.myqcloud.com"
 4 / Guangzhou Region.
   \ "cos.ap-guangzhou.myqcloud.com"
[snip]
endpoint> 4
  7. Choose acl and storage class.
Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets Full_CONTROL. No one else has access rights (default).
   \ "default"
[snip]
acl> 1
The storage class to use when storing new objects in Tencent COS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Default
   \ ""
[snip]
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[cos]
type = s3
provider = TencentCOS
env_auth = false
access_key_id = xxx
secret_access_key = xxx
endpoint = cos.ap-guangzhou.myqcloud.com
acl = default
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
cos                  s3

Netease NOS

For Netease NOS, configure as per the configurator (rclone config), setting the provider to Netease. This will automatically set force_path_style = false, which is necessary for it to run properly.

Backblaze B2

B2 is Backblaze's cloud storage system.

Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

Here is an example of making a b2 configuration. First run

rclone config

This will guide you through an interactive setup process. To authenticate you will either need your Account ID (a short hex number) and Master Application Key (a long hex number) OR an Application Key, which is the recommended method. See below for further details on generating and using an Application Key.

No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Backblaze B2
   \ "b2"
[snip]
Storage> b2
Account ID or Application Key ID
account> 123456789abc
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
endpoint>
Remote config
--------------------
[remote]
account = 123456789abc
key = 0123456789abcdef0123456789abcdef0123456789
endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

This remote is called remote and can now be used like this

See all buckets

rclone lsd remote:

Create a new bucket

rclone mkdir remote:bucket

List the contents of a bucket

rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

rclone sync -i /home/local/directory remote:bucket

Application Keys

B2 supports multiple Application Keys for different access permissions to B2 Buckets.

You can use these with rclone too; you will need to use rclone version 1.43 or later.

Follow Backblaze's docs to create an Application Key with the required permission and add the applicationKeyId as the account and the Application Key itself as the key.

Note that you must put the applicationKeyId as the account – you can't use the master Account ID. If you try then B2 will return 401 errors.
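
For example, the relevant part of a config created with an Application Key might look like the following sketch; the account and key values here are made-up placeholders, not real credentials:

[remote]
type = b2
account = 0001234567890abc0000000001
key = K000abcdefghijklmnopqrstuvwxyz0123456789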

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
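
For example, to list a bucket recursively using fewer transactions (the remote and bucket names are placeholders):

rclone ls --fast-list remote:bucket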

Modified time

The modified time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.

Modified times are used in syncing and are fully supported. Note that if a modification time needs to be updated on an object then it will create a new version of the object.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character Value Replacement
\ 0x5C ＼

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Note that in 2020-05 Backblaze started allowing \ characters in file names. Rclone hasn't changed its encoding as this could cause syncs to re-transfer files. If you want rclone not to replace \ then see the --b2-encoding flag below and remove the BackSlash from the string. This can be set in the config.

SHA1 checksums

The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process.

Large files (bigger than the limit in --b2-upload-cutoff) which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.

For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See the overview for exactly which remotes support SHA1.

Sources which don't support SHA1, in particular crypt, will upload large files without SHA1 checksums. This may be fixed in the future (see #1767).

File sizes below --b2-upload-cutoff will always have an SHA1 regardless of the source.

Transfers

Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about --transfers 32 though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4 is definitely too low for Backblaze B2 though.

Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most --transfers of these in use at any moment, so this sets the upper limit on the memory used.
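
As a rough worked example: with --transfers 32 and the default 96 MB buffer, big-file uploads can use up to 32 × 96 MB ≈ 3 GB of RAM. A hedged example invocation (the source path is a placeholder):

rclone copy --transfers 32 /path/to/source remote:bucket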

Versions

When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the --b2-hard-delete flag which would permanently remove the file instead of hiding it.

Old versions of files, where available, are visible using the --b2-versions flag.

Note that --b2-versions does not work with crypt at the moment (see #1627). Using --backup-dir with rclone is the recommended way of working around this.

If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg rclone cleanup remote:bucket/path/to/stuff.

Note that cleanup will remove partially uploaded files from the bucket if they are more than a day old.

When you purge a bucket, the current and the old versions will be deleted then the bucket will be deleted.

However delete will cause the current versions of the files to become hidden old versions.

Here is a session showing the listing and retrieval of an old version followed by a cleanup of the old versions.

Show current version and all the versions with --b2-versions flag.

$ rclone -q ls b2:cleanup-test
        9 one.txt

$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt

Retrieve an old version

$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp

$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt

Clean up all the old versions and show that they've gone.

$ rclone -q cleanup b2:cleanup-test

$ rclone -q ls b2:cleanup-test
        9 one.txt

$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt

Data usage

It is useful to know how many requests are sent to the server in different scenarios.

All copy commands send the following 4 requests:

/b2api/v1/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names

The b2_list_file_names request will be sent once for every 1k files in the remote path, providing the checksum and modification time of the listed files. As of version 1.33 issue #818 causes extra requests to be sent when using B2 with Crypt. When a copy operation does not require any files to be uploaded, no more requests will be sent.

Uploading files that do not require chunking will send 2 requests per file upload:

/b2api/v1/b2_get_upload_url
/b2api/v1/b2_upload_file/

Uploading files requiring chunking will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk:

/b2api/v1/b2_start_large_file
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file

Versions

Versions can be viewed with the --b2-versions flag. When it is set rclone will show and act on older versions of files. For example

Listing without --b2-versions

$ rclone -q ls b2:cleanup-test
        9 one.txt

And with

$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt

Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.

Note that when using --b2-versions no file write operations are permitted, so you can't upload files or delete them.

Rclone supports generating file share links for private B2 buckets. They can either be for a file, for example:

./rclone link B2:bucket/path/to/file.txt
https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx

or if run on a directory you will get:

./rclone link B2:bucket/path
https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx

You can then use the authorization token (the part of the URL from the ?Authorization= on) on any file path under that directory. For example:

https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx

Standard Options

Here are the standard options specific to b2 (Backblaze B2).

--b2-account

Account ID or Application Key ID

--b2-key

Application Key

--b2-hard-delete

Permanently delete files on remote removal, otherwise hide files.

Advanced Options

Here are the advanced options specific to b2 (Backblaze B2).

--b2-endpoint

Endpoint for the service. Leave blank normally.

--b2-test-mode

A flag string for X-Bz-Test-Mode header for debugging.

This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors:

- "fail_some_uploads"
- "expire_some_account_authorization_tokens"
- "force_cap_exceeded"

These will be set in the "X-Bz-Test-Mode" header which is documented in the b2 integrations checklist.

--b2-versions

Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can't upload files or delete them.

--b2-upload-cutoff

Cutoff for switching to chunked upload.

Files above this size will be uploaded in chunks of "--b2-chunk-size".

This value should be set no larger than 4.657GiB (== 5GB).

--b2-copy-cutoff

Cutoff for switching to multipart copy

Any files larger than this that need to be server side copied will be copied in chunks of this size.

The minimum is 0 and the maximum is 4.6GB.

--b2-chunk-size

Upload chunk size. Must fit in memory.

When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of "--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum size.
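
As an illustrative (not prescriptive) sketch, larger chunks trade memory for fewer requests; peak buffer use is roughly --transfers × --b2-chunk-size, so the example below could buffer up to 4 × 200M = 800M:

rclone copy --transfers 4 --b2-chunk-size 200M /path/to/bigfiles remote:bucket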

--b2-disable-checksum

Disable checksums for large (> upload cutoff) files

Normally rclone will calculate the SHA1 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.

--b2-download-url

Custom endpoint for downloads.

This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze.

--b2-download-auth-duration

Time before the authorization token will expire in s or suffix ms|s|m|h|d.

The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week.

--b2-memory-pool-flush-time

How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.

--b2-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool.

--b2-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Box

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

The initial setup for Box involves getting a token from Box which you can do either in your browser, or with a config.json downloaded from Box to use JWT authentication. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Box
   \ "box"
[snip]
Storage> box
Box App Client Id - leave blank normally.
client_id> 
Box App Client Secret - leave blank normally.
client_secret>
Box App config.json location
Leave blank normally.
Enter a string value. Press Enter for the default ("").
box_config_file>
Box App Primary Access Token
Leave blank normally.
Enter a string value. Press Enter for the default ("").
access_token>

Enter a string value. Press Enter for the default ("user").
Choose a number from below, or type in your own value
 1 / Rclone should act on behalf of a user
   \ "user"
 2 / Rclone should act on behalf of a service account
   \ "enterprise"
box_sub_type>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id = 
client_secret = 
token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Box. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your Box

rclone lsd remote:

List all the files in your Box

rclone ls remote:

To copy a local directory to a Box directory called backup

rclone copy /home/source remote:backup

Using rclone with an Enterprise account with SSO

If you have an "Enterprise" account type with Box with single sign on (SSO), you need to create a password to use Box with rclone. This can be done at your Enterprise Box account by going to Settings, "Account" Tab, and then set the password in the "Authentication" field.

Once you have done this, you can set up your Enterprise Box account using the same procedure detailed above, using the password you have just set.

Invalid refresh token

According to the box docs:

Each refresh_token is valid for one use in 60 days.

This means that if you

- Don't use the box remote for 60 days
- Copy the config file with a box refresh token in and use it in two places
- Get an error on a token refresh

then rclone will return an error which includes the text Invalid refresh token.

To fix this you will need to use oauth2 again to update the refresh token. You can use the methods in the remote setup docs, bearing in mind that if you use the "copy the config file" method, you should not use that remote on the computer you did the authentication on.

Here is how to do it.

$ rclone config
Current remotes:

Name                 Type
====                 ====
remote               box

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> e
Choose a number from below, or type in an existing value
 1 > remote
remote> remote
--------------------
[remote]
type = box
token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
--------------------
Edit remote
Value "client_id" = ""
Edit? (y/n)>
y) Yes
n) No
y/n> n
Value "client_secret" = ""
Edit? (y/n)>
y) Yes
n) No
y/n> n
Remote config
Already have a token - refresh?
y) Yes
n) No
y/n> y
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = box
token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Modified time and hashes

Box allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

Box supports SHA1 type hashes, so you can use the --checksum flag.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character Value Replacement
\ 0x5C ＼

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

Character Value Replacement
SP 0x20 ␠

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Transfers

For files above 50MB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8MB so increasing --transfers will increase memory use.
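
As a rough worked example: the default --transfers 4 with 8MB chunks buffers about 4 × 8MB = 32MB, while --transfers 16 could buffer up to 128MB. A hedged invocation (the source path is a placeholder):

rclone copy --transfers 16 /path/to/source remote:backup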

Deleting files

Depending on the enterprise settings for your user, the item will either be actually deleted from Box or moved to the trash.

Emptying the trash is supported via the rclone cleanup command, however this deletes every trashed file and folder individually so it may take a very long time. Emptying the trash via the WebUI does not have this limitation so it is advised to empty the trash via the WebUI.
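
If you do want to empty the trash from the command line anyway, the command looks like this (expect it to take a long time on a large trash):

rclone cleanup remote: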

Root folder ID

You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your Box drive.

Normally you will leave this blank and rclone will determine the correct root to use itself.

However you can set this to restrict rclone to a specific folder hierarchy.

In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the Box web interface.

So if the folder you want rclone to use has a URL which looks like https://app.box.com/folder/11xxxxxxxxx8 in the browser, then you use 11xxxxxxxxx8 as the root_folder_id in the config.
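
As a sketch, the resulting config section would then contain something like the following, with 11xxxxxxxxx8 standing in for your real Folder ID:

[remote]
type = box
root_folder_id = 11xxxxxxxxx8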

Standard Options

Here are the standard options specific to box (Box).

--box-client-id

OAuth Client Id. Leave blank normally.

--box-client-secret

OAuth Client Secret. Leave blank normally.

--box-box-config-file

Box App config.json location. Leave blank normally.

Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

--box-access-token

Box App Primary Access Token. Leave blank normally.

--box-box-sub-type

Advanced Options

Here are the advanced options specific to box (Box).

--box-token

OAuth Access Token as a JSON blob.

--box-auth-url

Auth server URL. Leave blank to use the provider defaults.

--box-token-url

Token server URL. Leave blank to use the provider defaults.

--box-root-folder-id

Fill in for rclone to use a non root folder as its starting point.

--box-upload-cutoff

Cutoff for switching to multipart upload (>= 50MB).

--box-commit-retries

Max number of times to try committing a multipart file.

--box-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Box file names can't have the \ character in. rclone maps this to and from an identical looking unicode equivalent ＼ (U+FF3C FULLWIDTH REVERSE SOLIDUS).

Box only supports filenames up to 255 characters in length.

Cache (BETA)

The cache remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount.

Status

The cache backend code is working but it currently doesn't have a maintainer so there are outstanding bugs which aren't getting fixed.

The cache backend is due to be phased out in favour of the VFS caching layer eventually which is more tightly integrated into rclone.

Until this happens we recommend only using the cache backend if you find you can't work without it. There are many docs online describing the use of the cache backend to minimize API hits and by-and-large these are out of date and the cache backend isn't needed in those scenarios any more.

Setup

To get started you just need to have an existing remote which can be configured with cache.

Here is an example of how to make a remote called test-cache. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> test-cache
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Cache a remote
   \ "cache"
[snip]
Storage> cache
Remote to cache.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
remote> local:/test
Optional: The URL of the Plex server
plex_url> http://127.0.0.1:32400
Optional: The username of the Plex user
plex_username> dummyusername
Optional: The password of the Plex user
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
The size of a chunk. Lower value good for slow connections but can affect seamless reading.
Default: 5M
Choose a number from below, or type in your own value
 1 / 1MB
   \ "1m"
 2 / 5 MB
   \ "5M"
 3 / 10 MB
   \ "10M"
chunk_size> 2
How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
Accepted units are: "s", "m", "h".
Default: 5m
Choose a number from below, or type in your own value
 1 / 1 hour
   \ "1h"
 2 / 24 hours
   \ "24h"
 3 / 24 hours
   \ "48h"
info_age> 2
The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
Default: 10G
Choose a number from below, or type in your own value
 1 / 500 MB
   \ "500M"
 2 / 1 GB
   \ "1G"
 3 / 10 GB
   \ "10G"
chunk_total_size> 3
Remote config
--------------------
[test-cache]
remote = local:/test
plex_url = http://127.0.0.1:32400
plex_username = dummyusername
plex_password = *** ENCRYPTED ***
chunk_size = 5M
info_age = 48h
chunk_total_size = 10G

You can then use it like this,

List directories in top level of your drive

rclone lsd test-cache:

List all the files in your drive

rclone ls test-cache:

To start a cached mount

rclone mount --allow-other test-cache: /var/tmp/test-cache

Write Features

Offline uploading

In an effort to make writing through cache more reliable, the backend now supports this feature which can be activated by specifying a cache-tmp-upload-path.

A file goes through these states when using this feature:

  1. An upload is started (usually by copying a file on the cache remote)
  2. When the copy to the temporary location is complete the file is part of the cached remote and looks and behaves like any other file (reading included)
  3. After cache-tmp-wait-time passes and the file is next in line, rclone move is used to move the file to the cloud provider
  4. Reading the file still works during the upload but most modifications on it will be prohibited
  5. Once the move is complete the file is unlocked for modifications as it becomes like any other regular file
  6. If the file is being read through cache when it's actually deleted from the temporary path then cache will simply swap the source to the cloud provider without interrupting the reading (a small blip can happen though)

Files are uploaded in sequence and only one file is uploaded at a time. Uploads will be stored in a queue and be processed based on the order they were added. The queue and the temporary storage are persistent across restarts but can be cleared on startup with the --cache-db-purge flag.
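
A minimal sketch of enabling offline uploading on a mount, assuming the test-cache remote from above and illustrative values for the path and wait time:

rclone mount --cache-tmp-upload-path /tmp/cache-upload --cache-tmp-wait-time 15m --allow-other test-cache: /var/tmp/test-cache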

Write Support

Writes are supported through cache. One caveat is that a mounted cache remote does not add any retry or fallback mechanism to the upload operation. This will depend on the implementation of the wrapped remote. Consider using Offline uploading for reliable writes.

One special case is covered with cache-writes which will cache the file data at the same time as the upload when it is enabled making it available from the cache store immediately once the upload is finished.

Read Features

Multiple connections

To counter the high latency between a local PC where rclone is running and cloud providers, the cache remote can split a read into multiple requests to the cloud provider for smaller file chunks and combine them together locally, making them available almost immediately, before the reader needs them.

This is similar to buffering when media files are played online. Rclone will stay around the current marker but always try its best to stay ahead and prepare the data before it is needed.

Plex Integration

There is a direct integration with Plex which allows cache to detect during reading if the file is in playback or not. This helps cache to adapt how it queries the cloud provider depending on what the data is needed for.

Scans will use a minimum number of workers (1), while during a confirmed playback cache will deploy the configured number of workers.

This integration opens the doorway to additional performance improvements which will be explored in the near future.

Note: If Plex options are not configured, cache will function with its configured options without adapting any of its settings.

How to enable? Run rclone config and add all the Plex options (endpoint, username and password) in your remote and it will be automatically enabled.

Affected settings: - cache-workers: Configured value during confirmed playback or 1 all the other times

Certificate Validation

When the Plex server is configured to only accept secure connections, it is possible to use .plex.direct URLs to ensure certificate validation succeeds. These URLs are used by Plex internally to connect to the Plex server securely.

The format for these URLs is the following:

https://ip-with-dots-replaced.server-hash.plex.direct:32400/

The ip-with-dots-replaced part can be any IPv4 address, where the dots have been replaced with dashes, e.g. 127.0.0.1 becomes 127-0-0-1.

To get the server-hash part, the easiest way is to visit

https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token

This page will list all the available Plex servers for your account with at least one .plex.direct link for each. Copy one URL and replace the IP address with the desired address. This can be used as the plex_url value.

Known issues

Mount and --dir-cache-time

--dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the cache backend, it will manage its own entries based on the configured time.

To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct one, try to set --dir-cache-time to a lower time than --cache-info-age. Default values are already configured in this way.
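
For example, keeping the mount layer's directory cache shorter-lived than the cache backend's entries (the values here are illustrative, not recommendations):

rclone mount --dir-cache-time 30s --cache-info-age 5m test-cache: /var/tmp/test-cache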

Windows support - Experimental

There are a couple of issues with Windows mount functionality that still require some investigation. It should be considered experimental for now while fixes for this OS come in.

Most of the issues seem to be related to the difference between filesystems on Linux flavors and Windows as cache is heavily dependent on them.

Any reports or feedback on how cache behaves on this OS is greatly appreciated.

Risk of throttling

Future iterations of the cache backend will make use of the polling functionality of the cloud provider to synchronize and at the same time make writing through it more tolerant to failures.

There are a couple of enhancements being tracked to add these, but in the meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts.

Some recommendations:

- don't use a very small interval for entry information (--cache-info-age)
- while writes aren't yet optimised, you can still write through cache which gives you the advantage of adding the file in the cache at the same time if configured to do so.

Future enhancements:

cache and crypt

One common scenario is to keep your data encrypted in the cloud provider using the crypt remote. crypt uses a similar technique to wrap around an existing remote and handles this translation in a seamless way.

There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache

During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we're downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt
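
As a sketch of the recommended order, the config might look like the following; the remote names are hypothetical and crypt's password fields are omitted:

# cloud provider remote (any supported backend)
[cloud]
type = drive

# cache wraps the cloud remote
[cloud-cache]
type = cache
remote = cloud:path

# crypt wraps the cache
[cloud-crypt]
type = crypt
remote = cloud-cache: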

absolute remote paths

cache can not differentiate between relative and absolute paths for the wrapped remote. Any path given in the remote config setting and on the command line will be passed to the wrapped remote as is, but for storing the chunks on disk the path will be made relative by removing any leading / character.

This behavior is irrelevant for most backend types, but there are backends where a leading / changes the effective directory, e.g. in the sftp backend paths starting with a / are relative to the root of the SSH server and paths without are relative to the user home directory. As a result sftp:bin and sftp:/bin will share the same cache folder, even if they represent a different directory on the SSH server.

Cache and Remote Control (--rc)

Cache supports the new --rc mode in rclone and can be remote controlled through the following endpoints. By default, the listener is disabled if you do not add the --rc flag.

rc cache/expire

Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.

Params:

- remote = path to remote (required)
- withData = true/false to delete cached data (chunks) as well (optional, false by default)
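
For example, assuming rclone is already running with --rc, hedged invocations might be:

rclone rc cache/expire remote=path/to/dir
rclone rc cache/expire remote=path/to/file.txt withData=true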

Standard Options

Here are the standard options specific to cache (Cache a remote).

--cache-remote

Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).

--cache-plex-url

The URL of the Plex server

--cache-plex-username

The username of the Plex user

--cache-plex-password

The password of the Plex user

NB Input to this must be obscured - see rclone obscure.

--cache-chunk-size

The size of a chunk (partial file data).

Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.

--cache-info-age

How long to cache file structure information (directory listings, file size, times etc). If all write operations are done through the cache then you can safely make this value very large as the cache store will also be updated in real time.

--cache-chunk-total-size

The total size that the chunks can take up on the local disk.

If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value.

Advanced Options

Here are the advanced options specific to cache (Cache a remote).

--cache-plex-token

The plex token for authentication - auto set normally

--cache-plex-insecure

Skip all certificate verification when connecting to the Plex server

--cache-db-path

Directory to store file structure metadata DB. The remote name is used as the DB file name.

--cache-chunk-path

Directory to cache chunk files.

Path to where partial file data (chunks) are stored locally. The remote name is appended to the final path.

This config follows the "--cache-db-path". If you specify a custom location for "--cache-db-path" and don't specify one for "--cache-chunk-path" then "--cache-chunk-path" will use the same path as "--cache-db-path".

--cache-db-purge

Clear all the cached data for this remote on start.

--cache-chunk-clean-interval

How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. If you find that the cache goes over "cache-chunk-total-size" too often then try to lower this value to force it to perform cleanups more often.

--cache-read-retries

How many times to retry a read from a cache storage.

Since reading from a cache stream is independent from downloading file data, readers can get to a point where there's no more data in the cache. Most of the time this can indicate a connectivity issue if cache isn't able to provide file data anymore.

For really slow connections, increase this to a point where the stream is able to provide data, but expect a lot of stuttering.

--cache-workers

How many workers should run in parallel to download chunks.

Higher values will mean more parallel processing (better CPU needed) and more concurrent requests on the cloud provider. This impacts several aspects like the cloud provider API limits and the stress on the hardware that rclone runs on, but it also means that streams will be more fluid and data will be available to readers much faster.

Note: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use.

--cache-chunk-no-memory

Disable the in-memory cache for storing chunks during streaming.

By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible.

This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like "cache-chunk-size" and "cache-workers" this footprint can increase if there are parallel streams too (multiple files being read at the same time).

If the hardware permits it, use this feature to provide an overall better performance during streaming but it can also be disabled if RAM is not available on the local machine.

--cache-rps

Limits the number of requests per second to the source FS (-1 to disable)

This setting places a hard limit on the number of requests per second that cache will be doing to the cloud provider remote and try to respect that value by setting waits between reads.

If you find that you're getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that.

A good balance of all the other settings should make this setting useless but it is available to set for more special cases.

NOTE: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass.

--cache-writes

Cache file data on writes through the FS

If you need to read files immediately after you upload them through cache you can enable this flag to have their data stored in the cache store at the same time during upload.

--cache-tmp-upload-path

Directory to keep temporary files until they are uploaded.

This is the path that cache will use as temporary storage for new files that need to be uploaded to the cloud provider.

Specifying a value will enable this feature. Without it, it is completely disabled and files will be uploaded directly to the cloud provider.

--cache-tmp-wait-time

How long should files be stored in local cache before being uploaded

This is the duration that a file must wait in the temporary location cache-tmp-upload-path before it is selected for upload.

Note that only one file is uploaded at a time and it can take longer to start the upload if a queue formed for this purpose.

--cache-db-wait-time

How long to wait for the DB to be available - 0 is unlimited

Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error.

If you set it to 0 then it will wait forever.

Backend commands

Here are the commands specific to the cache backend.

Run them with

rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the "rclone backend" command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

stats

Print stats on the cache backend in JSON format.

rclone backend stats remote: [options] [<arguments>+]

Chunker (BETA)

The chunker overlay transparently splits large files into smaller chunks during upload to the wrapped remote and transparently assembles them back when the file is downloaded. This allows you to effectively overcome size limits imposed by storage providers.

To use it, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote.

First check your chosen remote is working - we'll call it remote:path here. Note that anything inside remote:path will be chunked and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket.

Now configure chunker using rclone config. We will call this one overlay to separate it from the remote itself.

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> overlay
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Transparently chunk/split large files
   \ "chunker"
[snip]
Storage> chunker
Remote to chunk/unchunk.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Enter a string value. Press Enter for the default ("").
remote> remote:path
Files larger than chunk size will be split in chunks.
Enter a size with suffix k,M,G,T. Press Enter for the default ("2G").
chunk_size> 100M
Choose how chunker handles hash sums. All modes but "none" require metadata.
Enter a string value. Press Enter for the default ("md5").
Choose a number from below, or type in your own value
 1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise
   \ "none"
 2 / MD5 for composite files
   \ "md5"
 3 / SHA1 for composite files
   \ "sha1"
 4 / MD5 for all files
   \ "md5all"
 5 / SHA1 for all files
   \ "sha1all"
 6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported
   \ "md5quick"
 7 / Similar to "md5quick" but prefers SHA1 over MD5
   \ "sha1quick"
hash_type> md5
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[overlay]
type = chunker
remote = remote:bucket
chunk_size = 100M
hash_type = md5
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Specifying the remote

In normal use, make sure the remote has a : in. If you specify the remote without a : then rclone will use a local directory of that name. So if you use a remote of /path/to/secret/files then rclone will chunk stuff in that directory. If you use a remote of name then rclone will put files in a directory called name in the current directory.

Chunking

When rclone starts a file upload, chunker checks the file size. If it doesn't exceed the configured chunk size, chunker will just pass the file to the wrapped remote. If a file is large, chunker will transparently cut data in pieces with temporary names and stream them one by one, on the fly. Each data chunk will contain the specified number of bytes, except for the last one which may have less data. If file size is unknown in advance (this is called a streaming upload), chunker will internally create a temporary copy, record its size and repeat the above process.

When upload completes, temporary chunk files are finally renamed. This scheme guarantees that operations can be run in parallel and look from outside as atomic. A similar method with hidden temporary chunks is used for other operations (copy/move/rename etc). If an operation fails, hidden chunks are normally destroyed, and the target composite file stays intact.

When a composite file download is requested, chunker transparently assembles it by concatenating data chunks in order. As the split is trivial one could even manually concatenate data chunks together to obtain the original content.

When the list rclone command scans a directory on wrapped remote, the potential chunk files are accounted for, grouped and assembled into composite directory entries. Any temporary chunks are hidden.

List and other commands can sometimes come across composite files with missing or invalid chunks, eg shadowed by a like-named directory or another file. This usually means that the wrapped file system has been directly tampered with or damaged. If chunker detects a missing chunk it will by default print a warning and skip the whole incomplete group of chunks, but proceed with the current command. You can set the --chunker-fail-hard flag to have commands abort with an error message in such cases.

Chunk names

The default chunk name format is *.rclone_chunk.###, hence by default chunk names are BIG_FILE_NAME.rclone_chunk.001, BIG_FILE_NAME.rclone_chunk.002 etc. You can configure another name format using the name_format configuration file option. The format uses asterisk * as a placeholder for the base file name and one or more consecutive hash characters # as a placeholder for the sequential chunk number. There must be one and only one asterisk. The number of consecutive hash characters defines the minimum length of a string representing a chunk number. If the decimal chunk number has fewer digits than the number of hashes, it is left-padded by zeros. If the decimal string is longer, it is left intact. By default numbering starts from 1 but there is another option that allows the user to start from 0, eg for compatibility with legacy software.

For example, if name format is big_*-##.part and original file name is data.txt and numbering starts from 0, then the first chunk will be named big_data.txt-00.part, the 99th chunk will be big_data.txt-98.part and the 302nd chunk will become big_data.txt-301.part.
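
In config terms that example corresponds to chunker settings like these (a sketch; the remote name is carried over from the example above):

[overlay]
type = chunker
remote = remote:path
chunk_size = 100M
name_format = big_*-##.part
start_from = 0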

Note that list assembles composite directory entries only when chunk names match the configured format and treats non-conforming file names as normal non-chunked files.

Metadata

Besides data chunks chunker will by default create a metadata object for a composite file. The object is named after the original file. Chunker allows the user to disable metadata completely (the none format). Note that metadata is normally not created for files smaller than the configured chunk size. This may change in future rclone releases.

Simple JSON metadata format

This is the default format. It supports hash sums and chunk validation for composite files. Meta objects carry the following fields:

- ver - version of the format, currently 1
- size - total size of the composite file
- nchunks - number of data chunks in the file
- md5 - MD5 hashsum of the composite file (if present)
- sha1 - SHA1 hashsum (if present)

There is no field for composite file name as it's simply equal to the name of meta object on the wrapped remote. Please refer to respective sections for details on hashsums and modified time handling.

No metadata

You can disable meta objects by setting the meta format option to none. In this mode chunker will scan the directory for all files that follow the configured chunk name format, group them by detecting chunks with the same base name and show group names as virtual composite files. This method is more prone to missing chunk errors (especially a missing last chunk) than the format with metadata enabled.

Hashsums

Chunker supports hashsums only when compatible metadata is present. Hence, if you choose a metadata format of none, chunker will report hashsum as UNSUPPORTED.

Please note that by default metadata is stored only for composite files. If a file is smaller than configured chunk size, chunker will transparently redirect hash requests to wrapped remote, so support depends on that. You will see the empty string as a hashsum of requested type for small files if the wrapped remote doesn't support it.

Many storage backends support MD5 and SHA1 hash types, so does chunker. With chunker you can choose one or another but not both. MD5 is set by default as the most supported type. Since chunker keeps hashes for composite files and falls back to the wrapped remote hash for non-chunked ones, we advise you to choose the same hash type as supported by wrapped remote so that your file listings look coherent.

If your storage backend does not support MD5 or SHA1 but you need consistent file hashing, configure chunker with md5all or sha1all. These two modes guarantee the given hash for all files. If the wrapped remote doesn't support it, chunker will then add metadata to all files, even small ones. However, this can double the amount of small files in storage and incur additional service charges. You can even use chunker to force md5/sha1 support in any other remote at the expense of sidecar meta objects by setting eg hash_type=sha1all to force hashsums and chunk_size=1P to effectively disable chunking.
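
A minimal sketch of such a force-hash remote (the remote name is illustrative); hash_type forces SHA1 metadata for every file while the huge chunk_size effectively disables chunking:

[sha1remote]
type = chunker
remote = remote:bucket
hash_type = sha1all
chunk_size = 1P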

Normally, when a file is copied to a chunker controlled remote, chunker will ask the file source for a compatible file hash and revert to on-the-fly calculation if none is found. This involves some CPU overhead but provides a guarantee that the given hashsum is available. Also, chunker will reject a server-side copy or move operation if source and destination hashsum types are different, resulting in extra network bandwidth, too. In some rare cases this may be undesired, so chunker provides two optional choices: sha1quick and md5quick. If the source does not support the primary hash type and the quick mode is enabled, chunker will try to fall back to the secondary type. This will save CPU and bandwidth but can result in empty hashsums at the destination. Beware of the consequences: the sync command will revert (sometimes silently) to time/size comparison if compatible hashsums between source and target are not found.

Modified time

Chunker stores modification times using the wrapped remote so support depends on that. For a small non-chunked file the chunker overlay simply manipulates the modification time of the wrapped remote file. For a composite file with metadata chunker will get and set the modification time of the metadata object on the wrapped remote. If a file is chunked but the metadata format is none then chunker will use the modification time of the first data chunk.

Migrations

The idiomatic way to migrate to a different chunk size, hash type or chunk naming scheme is to:

- Collect all your chunked files under a directory and have your chunker remote point to it.
- Create another directory (most probably on the same cloud storage) and configure a new remote with the desired metadata format, hash type, chunk naming etc.
- Now run rclone sync -i oldchunks: newchunks: and all your data will be transparently converted in transfer.
- After checking data integrity you may remove the configuration section of the old remote.

If rclone gets killed during a long operation on a big composite file, hidden temporary chunks may stay in the directory. They will not be shown by the list command but will eat up your account quota. Please note that the deletefile command deletes only active chunks of a file. As a workaround you can use the remote of the wrapped file system to see them. An easy way to get rid of hidden garbage is to copy the littered directory somewhere using the chunker remote and purge the original directory. The copy command will copy only active chunks while the purge will remove everything including garbage.

Caveats and Limitations

Chunker requires wrapped remote to support server side move (or copy + delete) operations, otherwise it will explicitly refuse to start. This is because it internally renames temporary chunk files to their final names when an operation completes successfully.

Chunker encodes the chunk number in the file name, so with the default name_format setting it adds 17 characters. Chunker also adds 7 characters of temporary suffix during operations. Many file systems limit the base file name without path to 255 characters. Using rclone's crypt remote as a base file system limits file names to 143 characters. Thus, the maximum name length is 231 for most files and 119 for chunker-over-crypt. A user in need can change the name format to eg *.rcc## and save 10 characters (provided at most 99 chunks per file).

Note that a move implemented using the copy-and-delete method may incur double charging with some cloud storage providers.

Chunker will not automatically rename existing chunks when you run rclone config on a live remote and change the chunk name format. Beware that as a result of this, some files which have been treated as chunks before the change can pop up in directory listings as normal files and vice versa. The same warning holds for the chunk size. If you desperately need to change critical chunking settings, you should run the data migration as described above.

If wrapped remote is case insensitive, the chunker overlay will inherit that property (so you can't have a file called "Hello.doc" and "hello.doc" in the same directory).

Standard Options

Here are the standard options specific to chunker (Transparently chunk/split large files).

--chunker-remote

Remote to chunk/unchunk. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).

--chunker-chunk-size

Files larger than chunk size will be split in chunks.

--chunker-hash-type

Choose how chunker handles hash sums. All modes but "none" require metadata.

Advanced Options

Here are the advanced options specific to chunker (Transparently chunk/split large files).

--chunker-name-format

String format of chunk file names. The two placeholders are: base file name (*) and chunk number (#...). There must be one and only one asterisk and one or more consecutive hash characters. If chunk number has less digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match given format.

--chunker-start-from

Minimum valid chunk number. Usually 0 or 1. By default chunk numbers start from 1.

--chunker-meta-format

Format of the metadata object or "none". By default "simplejson". Metadata is a small JSON file named after the composite file.

--chunker-fail-hard

Choose how chunker should handle files with missing or invalid chunks.

Citrix ShareFile

Citrix ShareFile is a secure file sharing and transfer service aimed at business.

The initial setup for Citrix ShareFile involves getting a token from Citrix ShareFile which you can do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
XX / Citrix Sharefile
   \ "sharefile"
Storage> sharefile
** See help for sharefile backend at: https://rclone.org/sharefile/ **

ID of the root folder

Leave blank to access "Personal Folders".  You can use one of the
standard values here or any folder ID (long hex number ID).
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Access the Personal Folders. (Default)
   \ ""
 2 / Access the Favorites folder.
   \ "favorites"
 3 / Access all the shared folders.
   \ "allshared"
 4 / Access all the individual connectors.
   \ "connectors"
 5 / Access the home, favorites, and shared folders as well as the connectors.
   \ "top"
root_folder_id> 
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = sharefile
endpoint = https://XXX.sharefile.com
token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2019-09-30T19:41:45.878561877+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Citrix ShareFile. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your ShareFile

rclone lsd remote:

List all the files in your ShareFile

rclone ls remote:

To copy a local directory to a ShareFile directory called backup

rclone copy /home/source remote:backup

Paths may be as deep as required, eg remote:directory/subdirectory.

Modified time and hashes

ShareFile allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

ShareFile supports MD5 type hashes, so you can use the --checksum flag.

Transfers

For files above 128MB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 64MB so increasing --transfers will increase memory use.
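
As a rough worked example using only the defaults above: with 64MB chunks, a copy run with --transfers 8 could buffer up to 8 × 64MB = 512MB of chunks at once (paths here are illustrative):

rclone copy --transfers 8 /home/source remote:backup

Reduce --transfers if this uses more memory than you have to spare.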

Limitations

Note that ShareFile is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

ShareFile only supports filenames up to 256 characters in length.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character Value Replacement
\ 0x5C ＼
* 0x2A ＊
< 0x3C ＜
> 0x3E ＞
? 0x3F ？
: 0x3A ：
| 0x7C ｜
" 0x22 ＂

File names can also not start or end with the following characters. These only get replaced if they are the first or last character in the name:

Character Value Replacement
SP 0x20 ␠
. 0x2E ．

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard Options

Here are the standard options specific to sharefile (Citrix Sharefile).

--sharefile-root-folder-id

ID of the root folder

Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID).

Advanced Options

Here are the advanced options specific to sharefile (Citrix Sharefile).

--sharefile-upload-cutoff

Cutoff for switching to multipart upload.

--sharefile-chunk-size

Upload chunk size. Must be a power of 2 >= 256k.

Making this larger will improve performance, but note that each chunk is buffered in memory (one per transfer).

Reducing this will reduce memory usage but decrease performance.

--sharefile-endpoint

Endpoint for API calls.

This is usually auto discovered as part of the oauth process, but can be set manually to something like: https://XXX.sharefile.com

--sharefile-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Crypt

Rclone crypt remotes encrypt and decrypt other remotes.

To use crypt, first set up the underlying remote. Follow the rclone config instructions for that remote.

crypt applied to a local pathname instead of a remote will encrypt and decrypt that directory, and can be used to encrypt USB removable drives.

Before configuring the crypt remote, check the underlying remote is working. In this example the underlying remote is called remote:path. Anything inside remote:path will be encrypted and anything outside will not. In the case of an S3 based underlying remote (eg Amazon S3, B2, Swift) it is generally advisable to define a crypt remote in the underlying remote s3:bucket. If s3: alone is specified alongside file name encryption, rclone will encrypt the bucket name.

Configure crypt using rclone config. In this example the crypt remote is called secret, to differentiate it from the underlying remote.

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n   
name> secret
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Encrypt/Decrypt a remote
   \ "crypt"
[snip]
Storage> crypt
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
remote> remote:path
How to encrypt the filenames.
Choose a number from below, or type in your own value
 1 / Don't encrypt the file names.  Adds a ".bin" extension only.
   \ "off"
 2 / Encrypt the filenames see the docs for the details.
   \ "standard"
 3 / Very simple filename obfuscation.
   \ "obfuscate"
filename_encryption> 2
Option to either encrypt directory names or leave them intact.
Choose a number from below, or type in your own value
 1 / Encrypt directory names.
   \ "true"
 2 / Don't encrypt directory names, leave them intact.
   \ "false"
filename_encryption> 1
Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> g
Password strength in bits.
64 is just about memorable
128 is secure
1024 is the maximum
Bits> 128
Your password is: JAsJvRcgR-_veXNfy_sGmQ
Use this password?
y) Yes
n) No
y/n> y
Remote config
--------------------
[secret]
remote = remote:path
filename_encryption = standard
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Important The crypt password stored in rclone.conf is lightly obscured. That only protects it from cursory inspection. It is not secure unless encryption of rclone.conf is specified.

A long passphrase is recommended, or rclone config can generate a random one.

The obscured password is created using AES-CTR with a static key. The salt is stored verbatim at the beginning of the obscured password. This static key is shared between all versions of rclone.

If you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible, but the obscured version will be different due to the different salt.
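
If you want to supply your own pre-obscured password, for example when scripting configuration, the rclone obscure command prints the obscured form of the password given on the command line:

rclone obscure "correct horse battery staple"

Remember that this obscuring is reversible by design, so treat the obscured string with the same care as the password itself.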

Rclone does not encrypt

- file length - this can be calculated within 16 bytes
- modification time - used for syncing

Specifying the remote

In normal use, ensure the remote has a : in it. If it is specified without one, rclone uses a local directory of that name. For example, if a remote /path/to/secret/files is specified, rclone encrypts content to that directory. If a remote name alone is specified, rclone targets a directory of that name in the current directory.

If remote remote:path/to/dir is specified, rclone stores encrypted files in path/to/dir on the remote. With file name encryption, files saved to secret:subdir/subfile are stored in the unencrypted path path/to/dir but the subdir/subfile element is encrypted.

Example

Create the following file structure using "standard" file name encryption.

plaintext/
├── file0.txt
├── file1.txt
└── subdir
    ├── file2.txt
    ├── file3.txt
    └── subsubdir
        └── file4.txt

Copy these to the remote, and list them

$ rclone -q copy plaintext secret:
$ rclone -q ls secret:
        7 file1.txt
        6 file0.txt
        8 subdir/file2.txt
       10 subdir/subsubdir/file4.txt
        9 subdir/file3.txt

The crypt remote looks like

$ rclone -q ls remote:path
       55 hagjclgavj2mbiqm6u6cnjjqcg
       54 v05749mltvv1tf4onltun46gls
       57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
       58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
       56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps

The directory structure is preserved

$ rclone -q ls secret:subdir
        8 file2.txt
        9 file3.txt
       10 subsubdir/file4.txt

Without file name encryption .bin extensions are added to underlying names. This prevents the cloud provider attempting to interpret file content.

$ rclone -q ls remote:path
       54 file0.txt.bin
       57 subdir/file3.txt.bin
       56 subdir/file2.txt.bin
       58 subdir/subsubdir/file4.txt.bin
       55 file1.txt.bin

File name encryption modes

Off

- doesn't hide file names or directory structure
- allows for longer file names (~246 characters)
- can use sub paths and copy single files

Standard

- file names encrypted
- file names can't be as long (~143 characters)
- can use sub paths and copy single files
- directory structure visible
- identical file names will have identical uploaded names
- can use shortcuts to shorten the directory recursion

Obfuscation

This is a simple "rotate" of the filename, with each file having a rot distance based on the filename. Rclone stores the distance at the beginning of the filename. A file called "hello" may become "53.jgnnq".

Obfuscation is not a strong encryption of filenames, but hinders automated scanning tools picking up on filename patterns. It is an intermediate between "off" and "standard" which allows for longer path segment names.

There is a possibility with some unicode based filenames that the obfuscation is weak and may map lower case characters to upper case equivalents.

Obfuscation cannot be relied upon for strong protection.

Cloud storage systems have limits on file name length and total path length which rclone is more likely to breach using "Standard" file name encryption. Where file names are less than 156 characters in length issues should not be encountered, irrespective of cloud storage provider.

An alternative, future rclone file name encryption mode may tolerate backend provider path length limits.

Directory name encryption

Crypt offers the option of encrypting dir names or leaving them intact. There are two options:

True

Encrypts the whole file path including directory names. Example: 1/12/123.txt is encrypted to p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0

False

Only encrypts file names, skips directory names. Example: 1/12/123.txt is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0

Modified time and hashes

Crypt stores modification times using the underlying remote so support depends on that.

Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.

Use the rclone cryptcheck command to check the integrity of a crypted remote instead of rclone check which can't check the checksums properly.
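
For example, to verify a local directory against its crypted copy on the remote (paths here are illustrative):

rclone cryptcheck /home/source secret: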

Standard Options

Here are the standard options specific to crypt (Encrypt/Decrypt a remote).

--crypt-remote

Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).

--crypt-filename-encryption

How to encrypt the filenames.

--crypt-directory-name-encryption

Option to either encrypt directory names or leave them intact.

NB If filename_encryption is "off" then this option will do nothing.

--crypt-password

Password or pass phrase for encryption.

NB Input to this must be obscured - see rclone obscure.

--crypt-password2

Password or pass phrase for salt. Optional but recommended. Should be different to the previous password.

NB Input to this must be obscured - see rclone obscure.

Advanced Options

Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).

--crypt-server-side-across-configs

Allow server side operations (eg copy) to work across different crypt configs.

Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it.

This can be used, for example, to change file name encryption type without re-uploading all the data. Just make two crypt backends pointing to two different directories with the single changed parameter and use rclone move to move the files between the crypt remotes.
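
A minimal sketch of that workflow, assuming illustrative remote and directory names; the password and password2 values (omitted here) must be identical in both crypt remotes:

[crypt-standard]
type = crypt
remote = remote:standard-dir
filename_encryption = standard

[crypt-obfuscate]
type = crypt
remote = remote:obfuscate-dir
filename_encryption = obfuscate

rclone move --crypt-server-side-across-configs crypt-standard: crypt-obfuscate: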

--crypt-show-mapping

For all files listed show how the names encrypt.

If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name.

This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes.

Backend commands

Here are the commands specific to the crypt backend.

Run them with

rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the "rclone backend" command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

encode

Encode the given filename(s)

rclone backend encode remote: [options] [<arguments>+]

This encodes the filenames given as arguments returning a list of strings of the encoded results.

Usage Example:

rclone backend encode crypt: file1 [file2...]
rclone rc backend/command command=encode fs=crypt: file1 [file2...]

decode

Decode the given filename(s)

rclone backend decode remote: [options] [<arguments>+]

This decodes the filenames given as arguments returning a list of strings of the decoded results. It will return an error if any of the inputs are invalid.

Usage Example:

rclone backend decode crypt: encryptedfile1 [encryptedfile2...]
rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...]

Backing up a crypted remote

If you wish to backup a crypted remote, it is recommended that you use rclone sync on the encrypted files, and make sure the passwords are the same in the new encrypted remote.

This will have the following advantages

- rclone sync will check the checksums while copying
- you can use rclone check between the encrypted remotes
- you don't decrypt and encrypt unnecessarily

For example, let's say you have your original remote at remote: with the encrypted version at eremote: with path remote:crypt. You would then set up the new remote remote2: and then the encrypted version eremote2: with path remote2:crypt using the same passwords as eremote:.

To sync the two remotes you would do

rclone sync -i remote:crypt remote2:crypt

And to check the integrity you would do

rclone check remote:crypt remote2:crypt

File formats

File encryption

Files are encrypted 1:1 source file to destination object. The file has a header and is divided into chunks.

Header

- 8 bytes magic string RCLONE\x00\x00
- 24 bytes Nonce (IV)

The initial nonce is generated from the operating system's crypto strong random number generator. The nonce is incremented for each chunk read making sure each nonce is unique for each block written. The chance of a nonce being re-used is minuscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of re-using a nonce.

Chunk

Each chunk will contain 64kB of data, except for the last one which may have less data. The data chunk is in standard NACL secretbox format. Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate messages.

Each chunk contains:

- 16 Bytes of Poly1305 authenticator
- 1 - 65536 bytes XSalsa20 encrypted data

64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big.

This uses a 32 byte (256 bit) key derived from the user password.

Examples

1 byte file will encrypt to

49 bytes total

1MB (1048576 bytes) file will encrypt to

1049120 bytes total (a 0.05% overhead). This is the overhead for big files.

Name encryption

File names are encrypted segment by segment - the path is broken up into / separated strings and these are encrypted individually.

File segments are padded using PKCS#7 to a multiple of 16 bytes before encryption.

They are then encrypted with EME using AES with a 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.

This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system.

This means that

- filenames with the same name will encrypt the same
- filenames which start the same won't have a common prefix

This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password.

After encryption they are written out using a modified version of standard base32 encoding as described in RFC4648. The standard encoding is modified in two ways:

- it becomes lower case (no-one likes upper case filenames!)
- we strip the padding character =

base32 is used rather than the more efficient base64 so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive).

Key derivation

Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.

scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.

Dropbox

Paths are specified as remote:path

Dropbox paths may be as deep as required, eg remote:directory/subdirectory.

The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Dropbox
   \ "dropbox"
[snip]
Storage> dropbox
Dropbox App Key - leave blank normally.
app_key>
Dropbox App Secret - leave blank normally.
app_secret>
Remote config
Please visit:
https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
--------------------
[remote]
app_key =
app_secret =
token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

You can then use it like this,

List directories in top level of your dropbox

rclone lsd remote:

List all the files in your dropbox

rclone ls remote:

To copy a local directory to a dropbox directory called backup

rclone copy /home/source remote:backup

Dropbox for business

Rclone supports Dropbox for business and Team Folders.

When using Dropbox for business remote: and remote:path/to/file will refer to your personal folder.

If you wish to see Team Folders you must use a leading / in the path, so rclone lsd remote:/ will refer to the root and show you all Team Folders and your User Folder.

You can then use team folders like this remote:/TeamFolder and remote:/TeamFolder/path/to/file.

A leading / for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so it should be avoided.

Modified time and Hashes

Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.

This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use --size-only or --checksum flag to stop it.

Dropbox supports its own hash type which is checked for all transfers.

Restricted filename characters

Character Value Replacement
NUL 0x00 ␀
/ 0x2F ／
DEL 0x7F ␡
\ 0x5C ＼

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

Character Value Replacement
SP 0x20 ␠

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard Options

Here are the standard options specific to dropbox (Dropbox).

--dropbox-client-id

OAuth Client Id. Leave blank normally.

--dropbox-client-secret

OAuth Client Secret. Leave blank normally.

Advanced Options

Here are the advanced options specific to dropbox (Dropbox).

--dropbox-token

OAuth Access Token as a JSON blob.

--dropbox-auth-url

Auth server URL. Leave blank to use the provider defaults.

--dropbox-token-url

Token server url. Leave blank to use the provider defaults.

--dropbox-chunk-size

Upload chunk size. (< 150M).

Any files larger than this will be uploaded in chunks of this size.

Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.
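
For example, to use larger chunks on a big transfer (paths illustrative; the value must stay under 150M):

rclone copy --dropbox-chunk-size 128M /home/source remote:backup

With --transfers 4 this would buffer roughly 4 × 128MB = 512MB of chunks.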

--dropbox-impersonate

Impersonate this user when using a business account.

--dropbox-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.

Some errors may occur if you try to sync copyright-protected files because Dropbox has its own copyright detector that prevents this sort of file being downloaded. This will return the error ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/.

If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.

Get your own Dropbox App ID

When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.

Here is how to create your own Dropbox App ID for rclone:

  1. Log into the Dropbox App console with your Dropbox Account (It need not be the same account as the Dropbox you want to access)

  2. Choose an API => Usually this should be Dropbox API

  3. Choose the type of access you want to use => Full Dropbox or App Folder

  4. Name your App. The app name is global, so you can't use rclone for example

  5. Click the button Create App

  6. Fill Redirect URIs as http://localhost:53682/

  7. Find the App key and App secret. Use these values in rclone config to add a new remote or edit an existing remote.
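
Having created your own App, a sketch of the resulting remote in rclone.conf (the key and secret here are placeholders):

[dropbox]
type = dropbox
client_id = YOUR_APP_KEY
client_secret = YOUR_APP_SECRET

Then run rclone config and complete the OAuth flow for this remote to fill in the token.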

FTP

FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.

Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.

Here is an example of making an FTP configuration. First run

rclone config

This will guide you through an interactive setup process. An FTP remote only needs a host together with a username and a password. With an anonymous FTP server, you will need to use anonymous as the username and your email address as the password.

No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / FTP Connection
   \ "ftp"
[snip]
Storage> ftp
** See help for ftp backend at: https://rclone.org/ftp/ **

FTP host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to ftp.example.com
   \ "ftp.example.com"
host> ftp.example.com
FTP username, leave blank for current username, ncw
Enter a string value. Press Enter for the default ("").
user> 
FTP port, leave blank to use default (21)
Enter a string value. Press Enter for the default ("").
port> 
FTP password
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Use FTP over TLS (Implicit)
Enter a boolean value (true or false). Press Enter for the default ("false").
tls> 
Use FTP over TLS (Explicit)
Enter a boolean value (true or false). Press Enter for the default ("false").
explicit_tls> 
Remote config
--------------------
[remote]
type = ftp
host = ftp.example.com
pass = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

This remote is called remote and can now be used like this

See all directories in the home directory

rclone lsd remote:

Make a new directory

rclone mkdir remote:path/to/directory

List the contents of a directory

rclone ls remote:path/to/directory

Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

rclone sync -i /home/local/directory remote:directory

Modified time

FTP does not support modified times. Any times you see on the server will be time of upload.

Checksums

FTP does not support any checksums.

Usage without a config file

An example of how to use the ftp remote without a config file:

rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=`rclone obscure dummy`
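
The same can be done with environment variables, using rclone's standard RCLONE_FTP_* naming for backend options:

export RCLONE_FTP_HOST=speedtest.tele2.net
export RCLONE_FTP_USER=anonymous
export RCLONE_FTP_PASS=$(rclone obscure dummy)
rclone lsf :ftp: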

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

Character Value Replacement
SP 0x20 ␠

Note that not all FTP servers can have all characters in file names, for example:

FTP Server Forbidden characters
proftpd *
pureftpd \ [ ]

Implicit TLS

FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the config for the remote. The default FTPS port is 990 so the port will likely have to be explicitly set in the config for the remote.
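
A sketch of a matching config for an implicit FTPS server (the host name is a placeholder):

[ftps-remote]
type = ftp
host = ftps.example.com
port = 990
tls = true

The username and password are set as for a plain FTP remote.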

Standard Options

Here are the standard options specific to ftp (FTP Connection).

--ftp-host

FTP host to connect to

--ftp-user

FTP username, leave blank for current username, $USER

--ftp-port

FTP port, leave blank to use default (21)

--ftp-pass

FTP password

NB Input to this must be obscured - see rclone obscure.

--ftp-tls

Use FTPS over TLS (Implicit). When using implicit FTP over TLS the client will connect using TLS right from the start, which in turn breaks compatibility with non-TLS-aware servers. This is usually served over port 990 rather than port 21. Cannot be used in combination with explicit FTP.

--ftp-explicit-tls

Use FTP over TLS (Explicit). When using explicit FTP over TLS the client explicitly requests security from the server in order to upgrade a plain text connection to an encrypted one. Cannot be used in combination with implicit FTP.

Advanced Options

Here are the advanced options specific to ftp (FTP Connection).

--ftp-concurrency

Maximum number of FTP simultaneous connections, 0 for unlimited

--ftp-no-check-certificate

Do not verify the TLS certificate of the server

--ftp-disable-epsv

Disable using EPSV even if server advertises support

--ftp-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

Note that FTP does have its own implementation of --dump headers, --dump bodies and --dump auth for debugging, which isn't the same as for the HTTP based backends - it has less fine grained control.

Note that --timeout isn't supported (but --contimeout is).

Note that --bind isn't supported.

FTP could support server side move but doesn't yet.

Note that the ftp backend does not support the ftp_proxy environment variable yet.

Google Cloud Storage

Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
[snip]
Storage> google cloud storage
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
   \ "authenticatedRead"
 2 / Object owner gets OWNER access, and project team owners get OWNER access.
   \ "bucketOwnerFullControl"
 3 / Object owner gets OWNER access, and project team owners get READER access.
   \ "bucketOwnerRead"
 4 / Object owner gets OWNER access [default if left blank].
   \ "private"
 5 / Object owner gets OWNER access, and project team members get access according to their roles.
   \ "projectPrivate"
 6 / Object owner gets OWNER access, and all Users get READER access.
   \ "publicRead"
object_acl> 4
Access Control List for new buckets.
Choose a number from below, or type in your own value
 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
   \ "authenticatedRead"
 2 / Project team owners get OWNER access [default if left blank].
   \ "private"
 3 / Project team members get access according to their roles.
   \ "projectPrivate"
 4 / Project team owners get OWNER access, and all Users get READER access.
   \ "publicRead"
 5 / Project team owners get OWNER access, and all Users get WRITER access.
   \ "publicReadWrite"
bucket_acl> 2
Location for the newly created buckets.
Choose a number from below, or type in your own value
 1 / Empty for default location (US).
   \ ""
 2 / Multi-regional location for Asia.
   \ "asia"
 3 / Multi-regional location for Europe.
   \ "eu"
 4 / Multi-regional location for United States.
   \ "us"
 5 / Taiwan.
   \ "asia-east1"
 6 / Tokyo.
   \ "asia-northeast1"
 7 / Singapore.
   \ "asia-southeast1"
 8 / Sydney.
   \ "australia-southeast1"
 9 / Belgium.
   \ "europe-west1"
10 / London.
   \ "europe-west2"
11 / Iowa.
   \ "us-central1"
12 / South Carolina.
   \ "us-east1"
13 / Northern Virginia.
   \ "us-east4"
14 / Oregon.
   \ "us-west1"
location> 12
The storage class to use when storing objects in Google Cloud Storage.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Multi-regional storage class
   \ "MULTI_REGIONAL"
 3 / Regional storage class
   \ "REGIONAL"
 4 / Nearline storage class
   \ "NEARLINE"
 5 / Coldline storage class
   \ "COLDLINE"
 6 / Durable reduced availability storage class
   \ "DURABLE_REDUCED_AVAILABILITY"
storage_class> 5
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = google cloud storage
client_id =
client_secret =
token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
project_number = 12345678
object_acl = private
bucket_acl = private
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

This remote is called remote and can now be used like this

See all the buckets in your project

rclone lsd remote:

Make a new bucket

rclone mkdir remote:bucket

List the contents of a bucket

rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

rclone sync -i /home/local/directory remote:bucket

Service Account support

You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.
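
For example, to use the downloaded credentials for a single command without editing the config (the path is a placeholder):

rclone --gcs-service-account-file /path/to/credentials.json lsd remote: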

Anonymous Access

For downloads of objects that permit public access you can configure rclone to use anonymous access by setting anonymous to true. With anonymous access you can't write or create files, only read or list those buckets and objects that have public read access.
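
A minimal remote for anonymous reads might look like this (the bucket name is illustrative):

[gcspublic]
type = google cloud storage
anonymous = true

rclone ls gcspublic:some-public-bucket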

Application Default Credentials

If no other source of credentials is provided, rclone will fall back to Application Default Credentials. This is useful both when you have already configured authentication for your developer account, and in production when running on a Google Compute host. Note that if running in docker, you may need to run additional commands on your google compute machine - see this page.

Note that in the case application default credentials are used, there is no need to explicitly configure a project number.

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Custom upload headers

You can set custom upload headers with the --header-upload flag. Google Cloud Storage supports the headers as described in the working with metadata documentation:

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Meta-

Eg --header-upload "Content-Type: text/potato"

Note that the last of these is for setting custom metadata in the form --header-upload "x-goog-meta-key: value"

Modified time

Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.

Restricted filename characters

Character Value Replacement
NUL 0x00 ␀
LF 0x0A ␊
CR 0x0D ␍
/ 0x2F ／

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard Options

Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

--gcs-client-id

OAuth Client Id. Leave blank normally.

--gcs-client-secret

OAuth Client Secret. Leave blank normally.

--gcs-project-number

Project number. Optional - needed only for list/create/delete buckets - see your developer console.

--gcs-service-account-file

Service Account Credentials JSON file path. Leave blank normally. Needed only if you want to use an SA instead of interactive login.

Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

--gcs-service-account-credentials

Service Account Credentials JSON blob. Leave blank normally. Needed only if you want to use an SA instead of interactive login.

--gcs-anonymous

Access public buckets and objects without credentials. Set to 'true' if you just want to download files and don't configure credentials.

--gcs-object-acl

Access Control List for new objects.

--gcs-bucket-acl

Access Control List for new buckets.

--gcs-bucket-policy-only

Access checks should use bucket-level IAM policies.

If you want to upload objects to a bucket with Bucket Policy Only set then you will need to set this.

When it is set, rclone:

- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set

Docs: https://cloud.google.com/storage/docs/bucket-policy-only

--gcs-location

Location for the newly created buckets.

--gcs-storage-class

The storage class to use when storing objects in Google Cloud Storage.

Advanced Options

Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

--gcs-token

OAuth Access Token as a JSON blob.

--gcs-auth-url

Auth server URL. Leave blank to use the provider defaults.

--gcs-token-url

Token server url. Leave blank to use the provider defaults.

--gcs-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Google Drive

Paths are specified as drive:path

Drive paths may be as deep as required, eg drive:directory/subdirectory.

The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Google Drive
   \ "drive"
[snip]
Storage> drive
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
Scope that rclone should use when requesting access from drive.
Choose a number from below, or type in your own value
 1 / Full access all files, excluding Application Data Folder.
   \ "drive"
 2 / Read-only access to file metadata and file contents.
   \ "drive.readonly"
   / Access to files created by rclone only.
 3 | These are visible in the drive website.
   | File authorization is revoked when the user deauthorizes the app.
   \ "drive.file"
   / Allows read and write access to the Application Data folder.
 4 | This is not visible in the drive website.
   \ "drive.appfolder"
   / Allows read-only access to file metadata but
 5 | does not allow any access to read or download file content.
   \ "drive.metadata.readonly"
scope> 1
ID of the root folder - leave blank normally.  Fill in to access "Computers" folders. (see docs).
root_folder_id> 
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Configure this as a team drive?
y) Yes
n) No
y/n> n
--------------------
[remote]
client_id = 
client_secret = 
scope = drive
root_folder_id = 
service_account_file =
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

You can then use it like this,

List directories in top level of your drive

rclone lsd remote:

List all the files in your drive

rclone ls remote:

To copy a local directory to a drive directory called backup

rclone copy /home/source remote:backup

Scopes

Rclone allows you to select which scope you would like for rclone to use. This changes what type of token is granted to rclone. The scopes are defined here.

The scopes are

drive

This is the default scope and allows full access to all files, except for the Application Data Folder (see below).

Choose this one if you aren't sure.

drive.readonly

This allows read only access to all files. Files may be listed and downloaded but not uploaded, renamed or deleted.

drive.file

With this scope rclone can read/view/modify only those files and folders it creates.

So if you uploaded files to drive via the web interface (or any other means) they will not be visible to rclone.

This can be useful if you are using rclone to backup data and you want to be sure confidential data on your drive is not visible to rclone.

Files created with this scope are visible in the web interface.

drive.appfolder

This gives rclone its own private area to store files. Rclone will not be able to see any other files on your drive and you won't be able to see rclone's files from the web interface either.

drive.metadata.readonly

This allows read only access to file names only. It does not allow rclone to download or upload data, or rename or delete files or directories.

Root folder ID

You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your drive.

Normally you will leave this blank and rclone will determine the correct root to use itself.

However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go).

In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.

So if the folder you want rclone to use has a URL which looks like https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the root_folder_id in the config.
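
Using the URL from the example above, the resulting remote config would contain (other fields omitted):

[remote]
type = drive
scope = drive
root_folder_id = 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh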

NB folders under the "Computers" tab seem to be read only (drive gives a 500 error) when using rclone.

There doesn't appear to be an API to discover the folder IDs of the "Computers" tab - please contact us if you know otherwise!

Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet.

Service Account support

You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt during rclone config and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.

Use case - Google Apps/G-suite account and individual Drive

Let's say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on an individual's Drive account, who IS a member of the domain. We'll call the domain example.com, and the user foo@example.com.

There are a few steps we need to go through to accomplish this:

1. Create a service account for example.com
2. Allow API access to example.com Google Drive
3. Configure rclone, assuming a new install
rclone config

n/s/q> n         # New
name>gdrive      # Gdrive is an example name
Storage>         # Select the number shown for Google Drive
client_id>       # Can be left blank
client_secret>   # Can be left blank
scope>           # Select your scope, 1 for example
root_folder_id>  # Can be left blank
service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
y/n>             # Auto config, y
4. Verify that it's working

Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using --drive-impersonate, do this instead:

- in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step 1
- use rclone without specifying the --drive-impersonate option, like this: rclone -v foo@example.com lsf gdrive:backup

Team drives

If you want to configure the remote to point to a Google Team Drive then answer y to the question Configure this as a team drive?.

This will fetch the list of Team Drives from google and allow you to configure which one you want to use. You can also type in a team drive ID if you prefer.

For example:

Configure this as a team drive?
y) Yes
n) No
y/n> y
Fetching team drive list...
Choose a number from below, or type in your own value
 1 / Rclone Test
   \ "xxxxxxxxxxxxxxxxxxxx"
 2 / Rclone Test 2
   \ "yyyyyyyyyyyyyyyyyyyy"
 3 / Rclone Test 3
   \ "zzzzzzzzzzzzzzzzzzzz"
Enter a Team Drive ID> 1
--------------------
[remote]
client_id =
client_secret =
token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
team_drive = xxxxxxxxxxxxxxxxxxxx
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

It does this by combining multiple list calls into a single API request.

This works by combining many '%s' in parents filters into one expression. To list the contents of directories a, b and c, the following requests will be sent by the regular List function:

trashed=false and 'a' in parents
trashed=false and 'b' in parents
trashed=false and 'c' in parents

These can now be combined into a single request:

trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)

The implementation of ListR will put up to 50 parents filters into one request. It will use the --checkers value to specify the number of requests to run in parallel.

In tests, these batch requests were up to 20x faster than the regular method. Running the following command against different sized folders gives:

rclone lsjson -vv -R --checkers=6 gdrive:folder

small folder (220 directories, 700 files):

large folder (10600 directories, 39000 files):

Modified time

Google drive stores modification times accurate to 1 ms.

Restricted filename characters

Only Invalid UTF-8 bytes will be replaced, as they can't be used in JSON strings.

In contrast to other backends, / can also be used in names and . or .. are valid names.

Revisions

Google drive stores revisions of files. When you upload a change to an existing file to google drive using rclone it will create a new revision of that file.

Revisions follow the standard google policy which at time of writing was

- They are deleted after 30 days or 100 revisions (whatever comes first).
- They do not count towards a user storage quota.

Deleting files

By default rclone will send all files to the trash when deleting files. If deleting them permanently is required then use the --drive-use-trash=false flag, or set the equivalent environment variable.

Shortcuts

In March 2020 Google introduced a new feature in Google Drive called drive shortcuts (API). These will (by September 2020) replace the ability for files or folders to be in multiple folders at once.

Shortcuts are files that link to other files on Google Drive somewhat like a symlink in unix, except they point to the underlying file data (eg the inode in unix terms) so they don't break if the source is renamed or moved about.

By default rclone treats these as follows.

For shortcuts pointing to files:

For shortcuts pointing to folders:

The rclone backend command can be used to create shortcuts.

Shortcuts can be completely ignored with the --drive-skip-shortcuts flag or the corresponding skip_shortcuts configuration setting.

Emptying trash

If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

Note that Google Drive takes some time (minutes to days) to empty the trash even though the command returns within a few seconds. No output is echoed, so there will be no confirmation even using -v or -vv.

Quota information

To view your current quota you can use the rclone about remote: command which will display your usage limit (quota), the usage in Google Drive, the size of all files in the Trash and the space used by other Google services such as Gmail. This command does not take any path arguments.

Import/Export of google documents

Google documents can be exported from and uploaded to Google Drive.

When rclone downloads a Google doc it chooses a format to download depending upon the --drive-export-formats setting. By default the export formats are docx,xlsx,pptx,svg which are a sensible default for an editable document.

When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.

If you prefer an archive copy then you might use --drive-export-formats pdf, or if you prefer openoffice/libreoffice formats you might use --drive-export-formats ods,odt,odp.

Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet on google docs, it will be exported as My Spreadsheet.xlsx or My Spreadsheet.pdf etc.
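
For instance, to keep archive copies as PDF while downloading (paths illustrative):

rclone copy --drive-export-formats pdf remote:Documents /path/to/archive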

When importing files into Google Drive, rclone will convert all files with an extension in --drive-import-formats to their associated document type. rclone will not convert any files by default, since the conversion is a lossy process.

The conversion must result in a file with the same extension when the --drive-export-formats rules are applied to the uploaded document.

Here are some examples for allowed and prohibited conversions.

export-formats  import-formats  Upload Ext  Document Ext  Allowed
odt             odt             odt         odt           Yes
odt             docx,odt        odt         odt           Yes
                docx            docx        docx          Yes
                odt             odt         docx          No
odt,docx        docx,odt        docx        odt           No
docx,odt        docx,odt        docx        docx          Yes
docx,odt        docx,odt        odt         docx          No

This limitation can be disabled by specifying --drive-allow-import-name-change. When using this flag, rclone can convert multiple file types resulting in the same document type at once, eg with --drive-import-formats docx,odt,txt, all files having these extensions would result in a document represented as a docx file. This brings the additional risk of overwriting a document, if multiple files have the same stem. Many rclone operations will not handle this name change in any way. They assume an equal name when copying files and might copy the file again or delete them when the name changes.

Here are the possible export extensions with their corresponding mime types. Most of these can also be used for importing, but there are more that are not listed here. Some of these additional ones might only be available when the operating system provides the correct MIME type entries.

This list can be changed by Google Drive at any time and might not represent the currently available conversions.

Extension Mime Type Description
csv text/csv Standard CSV format for Spreadsheets
docx application/vnd.openxmlformats-officedocument.wordprocessingml.document Microsoft Office Document
epub application/epub+zip E-book format
html text/html An HTML Document
jpg image/jpeg A JPEG Image File
json application/vnd.google-apps.script+json JSON Text Format
odp application/vnd.oasis.opendocument.presentation Openoffice Presentation
ods application/vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
ods application/x-vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
odt application/vnd.oasis.opendocument.text Openoffice Document
pdf application/pdf Adobe PDF Format
png image/png PNG Image Format
pptx application/vnd.openxmlformats-officedocument.presentationml.presentation Microsoft Office Powerpoint
rtf application/rtf Rich Text Format
svg image/svg+xml Scalable Vector Graphics Format
tsv text/tab-separated-values Standard TSV format for spreadsheets
txt text/plain Plain Text
xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Microsoft Office Spreadsheet
zip application/zip A ZIP file of HTML, Images CSS

Google documents can also be exported as link files. These files will open a browser window for the Google Docs website of that document when opened. The link file extension has to be specified as a --drive-export-formats parameter. They will match all available Google Documents.

Extension Description OS Support
desktop freedesktop.org specified desktop entry Linux
link.html An HTML Document with a redirect All
url INI style link file macOS, Windows
webloc macOS specific XML format macOS

Standard Options

Here are the standard options specific to drive (Google Drive).

--drive-client-id

Google Application Client Id. Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.

--drive-client-secret

OAuth Client Secret. Leave blank normally.

--drive-scope

Scope that rclone should use when requesting access from drive.

--drive-root-folder-id

ID of the root folder. Leave blank normally.

Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.

--drive-service-account-file

Service Account Credentials JSON file path. Leave blank normally. Needed only if you want to use an SA instead of interactive login.

Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

--drive-alternate-export

Deprecated: no longer needed

Advanced Options

Here are the advanced options specific to drive (Google Drive).

--drive-token

OAuth Access Token as a JSON blob.

--drive-auth-url

Auth server URL. Leave blank to use the provider defaults.

--drive-token-url

Token server url. Leave blank to use the provider defaults.

--drive-service-account-credentials

Service Account Credentials JSON blob. Leave blank normally. Needed only if you want to use an SA instead of interactive login.

--drive-team-drive

ID of the Team Drive

--drive-auth-owner-only

Only consider files owned by the authenticated user.

--drive-use-trash

Send files to the trash instead of deleting permanently. Defaults to true, namely sending files to the trash. Use --drive-use-trash=false to delete files permanently instead.

--drive-skip-gdocs

Skip google documents in all listings. If given, gdocs practically become invisible to rclone.

--drive-skip-checksum-gphotos

Skip MD5 checksum on Google photos and videos only.

Use this if you get checksum errors when transferring Google photos or videos.

Setting this flag will cause Google photos and videos to return a blank MD5 checksum.

Google photos are identified by being in the "photos" space.

Corrupted checksums are caused by Google modifying the image/video but not updating the checksum.

--drive-shared-with-me

Only show files that are shared with me.

Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you).

This works both with the "list" (lsd, lsl, etc) and the "copy" commands (copy, sync, etc), and with all other commands too.

--drive-trashed-only

Only show files that are in the trash. This will show trashed files in their original directory structure.

--drive-starred-only

Only show files that are starred.

--drive-formats

Deprecated: see export_formats

--drive-export-formats

Comma separated list of preferred formats for downloading Google docs.

--drive-import-formats

Comma separated list of preferred formats for uploading Google docs.

--drive-allow-import-name-change

Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and cause it to reupload every time.

--drive-use-created-date

Use file created date instead of modified date.

Useful when downloading data and you want the creation date used in place of the last modified date.

WARNING: This flag may have some unexpected consequences.

When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag.

This feature was implemented to retain photos' capture date as recorded by google photos. You will first need to check the "Create a Google Photos folder" option in your google drive settings. You can then copy or move the photos locally and use the date the image was taken (created) as the modification date.

--drive-use-shared-date

Use date file was shared instead of modified date.

Note that, as with "--drive-use-created-date", this flag may have unexpected consequences when uploading/downloading files.

If both this flag and "--drive-use-created-date" are set, the created date is used.

--drive-list-chunk

Size of listing chunk (100-1000). Set to 0 to disable.

--drive-impersonate

Impersonate this user when using a service account.
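
For example, assuming a service account with domain-wide delegation has been set up, a given user could be impersonated like this (the user and remote are placeholders):

rclone lsd --drive-impersonate user@example.com gdrive: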

--drive-upload-cutoff

Cutoff for switching to chunked upload

--drive-chunk-size

Upload chunk size. Must be a power of 2 >= 256k.

Making this larger will improve performance, but note that one chunk is buffered in memory per transfer.

Reducing this will reduce memory usage but decrease performance.
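
As an illustration of the memory trade-off (the numbers are only an example): with a 64M chunk size and 4 transfers, up to about 64M x 4 = 256M may be buffered at once:

rclone copy --drive-chunk-size 64M --transfers 4 /local/dir gdrive:backup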

--drive-acknowledge-abuse

Set to allow files which return cannotDownloadAbusiveFile to be downloaded.

If downloading a file returns the error "This file has been identified as malware or spam and cannot be downloaded" with the error code "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.

--drive-keep-revision-forever

Keep new head revision of each file forever.

--drive-size-as-quota

Show sizes as storage quota usage, not actual size.

Show the size of a file as the storage quota used. This is the current version plus any older versions that have been set to keep forever.

WARNING: This flag may have some unexpected consequences.

It is not recommended to set this flag in your config - the recommended usage is using the flag form --drive-size-as-quota when doing rclone ls/lsl/lsf/lsjson/etc only.

If you do use this flag for syncing (not recommended) then you will need to use --ignore-size also.
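
For example, to list files with their quota usage rather than their actual size (a sketch - gdrive: is a placeholder remote):

rclone lsl --drive-size-as-quota gdrive: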

--drive-v2-download-min-size

If objects are greater than this, use the drive v2 API to download.

--drive-pacer-min-sleep

Minimum time to sleep between API calls.

--drive-pacer-burst

Number of API calls to allow without sleeping.

--drive-server-side-across-configs

Allow server side operations (eg copy) to work across different drive configs.

This can be useful if you wish to do a server side copy between two different Google drives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations.

--drive-disable-http2

Disable drive using HTTP/2.

There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed.

See: https://github.com/rclone/rclone/issues/3631

--drive-stop-on-upload-limit

Make upload limit errors fatal.

At the time of writing it is only possible to upload 750GB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.

Note that this detection is relying on error message strings which Google don't document so it may break in the future.

See: https://github.com/rclone/rclone/issues/3857
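
For example, to make a long running sync stop cleanly when the daily upload limit is hit (a sketch - the paths are placeholders):

rclone sync --drive-stop-on-upload-limit /local/dir gdrive:backup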

--drive-skip-shortcuts

If set, skip shortcut files.

Normally rclone dereferences shortcut files making them appear as if they are the original file (see the shortcuts section). If this flag is set then rclone will ignore shortcut files completely.

--drive-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Backend commands

Here are the commands specific to the drive backend.

Run them with

rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the "rclone backend" command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

get

Get command for fetching the drive config parameters

rclone backend get remote: [options] [<arguments>+]

This is a get command which will be used to fetch the various drive config parameters.

Usage Examples:

rclone backend get drive: [-o service_account_file] [-o chunk_size]
rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]

Options:

set

Set command for updating the drive config parameters

rclone backend set remote: [options] [<arguments>+]

This is a set command which will be used to update the various drive config parameters.

Usage Examples:

rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]

Options:

shortcut

Create shortcuts from files or directories

rclone backend shortcut remote: [options] [<arguments>+]

This command creates shortcuts from files or directories.

Usage:

rclone backend shortcut drive: source_item destination_shortcut
rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut

In the first example this creates a shortcut from the "source_item" which can be a file or a directory to the "destination_shortcut". The "source_item" and the "destination_shortcut" should be relative paths from "drive:".

In the second example this creates a shortcut from the "source_item" relative to "drive:" to the "destination_shortcut" relative to "drive2:". This may fail with a permission error if the user authenticated with "drive2:" can't read files from "drive:".

Options:

drives

List the shared drives available to this account

rclone backend drives remote: [options] [<arguments>+]

This command lists the shared drives (teamdrives) available to this account.

Usage:

rclone backend drives drive:

This will return a JSON list of objects like this

[
    {
        "id": "0ABCDEF-01234567890",
        "kind": "drive#teamDrive",
        "name": "My Drive"
    },
    {
        "id": "0ABCDEFabcdefghijkl",
        "kind": "drive#teamDrive",
        "name": "Test Drive"
    }
]

untrash

Untrash files and directories

rclone backend untrash remote: [options] [<arguments>+]

This command untrashes all the files and directories in the directory passed in recursively.

Usage:

This takes an optional directory to untrash, which makes it easier to use via the API.

rclone backend untrash drive:directory
rclone backend -i untrash drive:directory subdir

Use the -i flag to see what would be restored before restoring it.

Result:

{
    "Untrashed": 17,
    "Errors": 0
}

Limitations

Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second. Individual files may be transferred much faster, at 100s of MBytes/s, but lots of small files can take a long time.

Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server side copies with --disable copy to download and upload the files if you prefer.
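
For example, to force rclone to download and re-upload rather than use server side copy (a sketch - the remote names are placeholders):

rclone copy --disable copy gdrive-a:source gdrive-b:dest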

Limitations of Google Docs

Google docs will appear as size -1 in rclone ls and as size 0 in anything which uses the VFS layer, eg rclone mount, rclone serve.

This is because rclone can't find out the size of the Google docs without downloading them.

Google docs will transfer correctly with rclone sync, rclone copy etc as rclone knows to ignore the size when doing the transfer.

However an unfortunate consequence of this is that you may not be able to download Google docs using rclone mount. If it doesn't work you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable. Whether it will work or not depends on the application accessing the mount and the OS you are running - experiment to find out if it does work for you!

Duplicated files

Sometimes, for no reason I've been able to track down, drive will duplicate a file that rclone uploads. Drive, unlike all the other remotes, can have duplicated files.

Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

Use rclone dedupe to fix duplicated files.

Note that this isn't just a problem with rclone, even Google Photos on Android duplicates files on drive sometimes.

Rclone appears to be re-copying files it shouldn't

The most likely cause of this is the duplicated file issue above - run rclone dedupe and check your logs for duplicate object or directory messages.

This can also be caused by a delay/caching on google drive's end when comparing directory listings, specifically with team drives used in combination with --fast-list. Files that were uploaded recently may not appear on the directory list sent to rclone when using --fast-list.

Waiting a moderate period of time between attempts (estimated to be approximately 1 hour) and/or not using --fast-list both seem to be effective in preventing the problem.

Making your own client_id

When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.

It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second so it is recommended to stay under that number, as exceeding it will cause rclone to be rate limited, making things slower.

Here is how to create your own Google Drive client ID for rclone:

  1. Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)

  2. Select a project or create a new project.

  3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API".

  4. Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials"

  5. If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK) then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen.

(PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far).

  6. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".

  7. Choose an application type of "Desktop app" if you are using a Google account, or "Other" if you are using a GSuite account, and click "Create". (the default name is fine)

  8. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.

Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response. In practice, you can go right ahead and use the client ID and client secret with rclone; the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id. As this only happens during the remote configuration, it's not such a big deal.

(Thanks to @balazer on github for these instructions.)

Sometimes, creation of an OAuth consent in Google API Console fails due to an error message “The request failed because changes to one of the field of the resource is not supported”. As a convenient workaround, the necessary Google Drive API key can be created on the Python Quickstart page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console.

Google Photos

The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.

NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.

Configuring Google Photos

The initial setup for Google Photos involves getting a token from Google Photos which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Google Photos
   \ "google photos"
[snip]
Storage> google photos
** See help for google photos backend at: https://rclone.org/googlephotos/ **

Google Application Client Id
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id> 
Google Application Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret> 
Set to make the Google Photos backend read only.

If you choose read only then rclone will only request read only access
to your photos, otherwise rclone will request full access.
Enter a boolean value (true or false). Press Enter for the default ("false").
read_only> 
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code

*** IMPORTANT: All media items uploaded to Google Photos with rclone
*** are stored in full resolution at original quality.  These uploads
*** will count towards storage in your Google Account.

--------------------
[remote]
type = google photos
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

This remote is called remote and can now be used like this

See all the albums in your photos

rclone lsd remote:album

Make a new album

rclone mkdir remote:album/newAlbum

List the contents of an album

rclone ls remote:album/newAlbum

Sync /home/local/images to the Google Photos, removing any excess files in the album.

rclone sync -i /home/local/images remote:album/newAlbum

Layout

As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it.

The directories under media show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month. (NB remote:media/by-day is rather slow at the moment so avoid for syncing.)

Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you've put them into albums.

/
- upload
    - file1.jpg
    - file2.jpg
    - ...
- media
    - all
        - file1.jpg
        - file2.jpg
        - ...
    - by-year
        - 2000
            - file1.jpg
            - ...
        - 2001
            - file2.jpg
            - ...
        - ...
    - by-month
        - 2000
            - 2000-01
                - file1.jpg
                - ...
            - 2000-02
                - file2.jpg
                - ...
        - ...
    - by-day
        - 2000
            - 2000-01-01
                - file1.jpg
                - ...
            - 2000-01-02
                - file2.jpg
                - ...
        - ...
- album
    - album name
    - album name/sub
- shared-album
    - album name
    - album name/sub
- feature
    - favorites
        - file1.jpg
        - file2.jpg

There are two writable parts of the tree, the upload directory and sub directories of the album directory.

The upload directory is for uploading files you don't want to put into albums. This will be empty to start with and will contain the files you've uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to dump into Google Photos as a one-off. For repeated syncing, uploading to album will work better.

Directories within the album directory are also writeable and you may create new directories (albums) under album. If you copy files with a directory hierarchy in there then rclone will create albums with the / character in them. For example if you do

rclone copy /path/to/images remote:album/images

and the images directory contains

images
    - file1.jpg
    dir
        file2.jpg
    dir2
        dir3
            file3.jpg

Then rclone will create the following albums with the following files in:

- images
    - file1.jpg
- images/dir
    - file2.jpg
- images/dir2/dir3
    - file3.jpg

This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.

The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.

Limitations

Only images and videos can be uploaded. If you attempt to upload non-image or non-video files, or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.

Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode.

Downloading Images

When images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.

The current Google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort.

Downloading Videos

When videos are downloaded they are downloaded as a heavily compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.

Duplicates

If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).

If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practice this shouldn't cause too many problems.

Modified time

The date shown for media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.

This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.

Size

The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.

It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size option or the read_size = true config parameter.

If you want to use the backend with rclone mount you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag.
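
For example, to mount the backend with sizes readable (a sketch - the mountpoint is a placeholder):

rclone mount --gphotos-read-size remote:media/by-month /mnt/photos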

Albums

Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.

Rclone can only remove files it uploaded from albums it created.

Deleting files

Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.

Rclone cannot delete files anywhere except under album.

Deleting albums

The Google Photos API does not support deleting albums - see bug #135714733.

Standard Options

Here are the standard options specific to google photos (Google Photos).

--gphotos-client-id

OAuth Client Id Leave blank normally.

--gphotos-client-secret

OAuth Client Secret Leave blank normally.

--gphotos-read-only

Set to make the Google Photos backend read only.

If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.

Advanced Options

Here are the advanced options specific to google photos (Google Photos).

--gphotos-token

OAuth Access Token as a JSON blob.

--gphotos-auth-url

Auth server URL. Leave blank to use the provider defaults.

--gphotos-token-url

Token server url. Leave blank to use the provider defaults.

--gphotos-read-size

Set to read the size of media items.

Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.

--gphotos-start-year

Year limits the photos to be downloaded to those which were uploaded after the given year.

HTTP

The HTTP remote is a read-only remote for reading files from a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)

Paths are specified as remote: or remote:path/to/dir.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / http Connection
   \ "http"
[snip]
Storage> http
URL of http host to connect to
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "https://example.com"
url> https://beta.rclone.org
Remote config
--------------------
[remote]
url = https://beta.rclone.org
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote               http

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

This remote is called remote and can now be used like this

See all the top level directories

rclone lsd remote:

List the contents of a directory

rclone ls remote:directory

Sync the remote directory to /home/local/directory, deleting any excess files.

rclone sync -i remote:directory /home/local/directory

Read only

This remote is read only - you can't upload files to an HTTP server.

Modified time

Most HTTP servers store time accurate to 1 second.

Checksum

No checksums are stored.

Usage without a config file

Since the http remote only has one config parameter it is easy to use without a config file:

rclone lsd --http-url https://beta.rclone.org :http:

Standard Options

Here are the standard options specific to http (http Connection).

--http-url

URL of http host to connect to

Advanced Options

Here are the advanced options specific to http (http Connection).

--http-headers

Set HTTP headers for all transactions

Use this to set additional HTTP headers for all transactions

The input format is a comma separated list of key,value pairs. Standard CSV encoding may be used.

For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.

You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'.
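
For example, to list a site which needs a session cookie (a sketch - the URL and cookie value are placeholders):

rclone lsd --http-url https://example.com --http-headers "Cookie,session=xxx" :http: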

--http-no-slash

Set this if the site doesn't end directories with /

Use this if your target website does not use / on the end of directories.

A / on the end of a path is how rclone normally tells the difference between files and directories. If this flag is set, then rclone will treat all files with Content-Type: text/html as directories and read URLs from them rather than downloading them.

Note that this may cause rclone to confuse genuine HTML files with directories.

--http-no-head

Don't use HEAD requests to find file sizes in dir listing

If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to:

- find its size
- check it really exists
- check to see if it is a directory

If you set this option, rclone will not do the HEAD request. This will mean:

- directory listings are much quicker
- rclone won't have the times or sizes of any files
- some files that don't exist may be in the listing

Hubic

Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

n) New remote
s) Set configuration password
n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Hubic
   \ "hubic"
[snip]
Storage> hubic
Hubic Client Id - leave blank normally.
client_id>
Hubic Client Secret - leave blank normally.
client_secret>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"XXXXXX"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Hubic. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List containers in the top level of your Hubic

rclone lsd remote:

List all the files in your Hubic

rclone ls remote:

To copy a local directory to a Hubic directory called backup

rclone copy /home/source remote:backup

If you want the directory to be visible in the official Hubic browser, you need to copy your files to the default directory

rclone copy /home/source remote:default/backup

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Modified time

The modified time is stored as metadata on the object as X-Object-Meta-Mtime, as a floating point number of seconds since the epoch, accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

Note that Hubic wraps the Swift backend, so most of the properties of the Swift backend are the same.

Standard Options

Here are the standard options specific to hubic (Hubic).

--hubic-client-id

OAuth Client Id Leave blank normally.

--hubic-client-secret

OAuth Client Secret Leave blank normally.

Advanced Options

Here are the advanced options specific to hubic (Hubic).

--hubic-token

OAuth Access Token as a JSON blob.

--hubic-auth-url

Auth server URL. Leave blank to use the provider defaults.

--hubic-token-url

Token server url. Leave blank to use the provider defaults.

--hubic-chunk-size

Above this size files will be chunked into a _segments container.

The default for this is 5GB which is its maximum value.

--hubic-no-chunk

Don't chunk files during streaming upload.

When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.

This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM.

Rclone will still chunk files bigger than chunk_size when doing normal copy operations.

--hubic-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.

The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

Jottacloud

Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway.

In addition to the official service at jottacloud.com, there are also several whitelabel versions which should work with this backend.

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

Setup

Default Setup

To configure Jottacloud you will need to generate a personal security token in the Jottacloud web interface. You will find the option to do this in your account security settings (for whitelabel versions you need to find this page in its web interface). Note that the web interface may refer to this token as a JottaCli token.

Legacy Setup

If you are using one of the whitelabel versions (Elgiganten, Com Hem Cloud) you may not have the option to generate a CLI token. In this case you'll have to use legacy authentication. To do this select yes when the setup asks for legacy authentication and enter your username and password. The rest of the setup is identical to the default setup.

Here is an example of how to make a remote called remote with the default setup. First run:

rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Jottacloud
   \ "jottacloud"
[snip]
Storage> jottacloud
** See help for jottacloud backend at: https://rclone.org/jottacloud/ **

Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
Use legacy authentication?
This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.
y) Yes
n) No (default)
y/n> n

Generate a personal login token here: https://www.jottacloud.com/web/secure
Login Token> <your token here>

Do you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?

y) Yes
n) No
y/n> y
Please select the device to use. Normally this will be Jotta
Choose a number from below, or type in an existing value
 1 > DESKTOP-3H31129
 2 > Jotta
Devices> 2
Please select the mountpoint to use. Normally this will be Archive
Choose a number from below, or type in an existing value
 1 > Archive
 2 > Links
 3 > Sync
 
Mountpoints> 1
--------------------
[jotta]
type = jottacloud
token = {........}
device = Jotta
mountpoint = Archive
configVersion = 1
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Once configured you can then use rclone like this,

List directories in top level of your Jottacloud

rclone lsd remote:

List all the files in your Jottacloud

rclone ls remote:

To copy a local directory to a Jottacloud directory called backup

rclone copy /home/source remote:backup

Devices and Mountpoints

The official Jottacloud client registers a device for each computer you install it on, and then creates a mountpoint for each folder you select for Backup. The web interface uses a special device called Jotta for the Archive and Sync mountpoints. In most cases you'll want to use the Jotta/Archive device/mountpoint, however if you want to access files uploaded by any of the official clients rclone provides the option to select other devices and mountpoints during config.

The built-in Jotta device may also contain several other mountpoints, such as: Latest, Links, Shared and Trash. These are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone will only support them to a very limited degree. Generally you should avoid these, unless you know what you are doing.

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to a long wait time before the first results are shown.

Modified time and hashes

Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

Jottacloud supports MD5 type hashes, so you can use the --checksum flag.

Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above).
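
For example, if your default temporary directory is small, you could point the cache at a bigger disk for a transfer from a hash-less source (a sketch - the paths and remote names are placeholders):

TMPDIR=/mnt/bigdisk rclone copy sftp:source jotta:backup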

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character Value Replacement
" 0x22 ＂
* 0x2A ＊
: 0x3A ：
< 0x3C ＜
> 0x3E ＞
? 0x3F ？
| 0x7C ｜

Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

Deleting files

By default rclone will send all files to the trash when deleting files. They will be permanently deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately by using the --jottacloud-hard-delete flag, or set the equivalent environment variable. Emptying the trash is supported by the cleanup command.
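
For example, to bypass the trash for a single delete and then empty the trash (a sketch - jotta: is a placeholder remote):

rclone delete --jottacloud-hard-delete jotta:tmp
rclone cleanup jotta: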

Versions

Jottacloud supports file versioning. When rclone uploads a changed file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.

Quota information

To view your current quota you can use the rclone about remote: command which will display your usage limit (unless it is unlimited) and the current usage.

Advanced Options

Here are the advanced options specific to jottacloud (Jottacloud).

--jottacloud-md5-memory-limit

Files bigger than this will be cached on disk to calculate the MD5 if required.

--jottacloud-trashed-only

Only show files that are in the trash. This will show trashed files in their original directory structure.

--jottacloud-hard-delete

Delete files permanently rather than putting them into the trash.

--jottacloud-upload-resume-limit

Files bigger than this can be resumed if the upload fails.

--jottacloud-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.

Jottacloud only supports filenames up to 255 characters in length.

Troubleshooting

Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.

Koofr

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr web application, giving the password a nice name like rclone and clicking on generate.

Here is an example of how to make a remote called koofr. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> koofr 
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Koofr
   \ "koofr"
[snip]
Storage> koofr
** See help for koofr backend at: https://rclone.org/koofr/ **

Your Koofr user name
Enter a string value. Press Enter for the default ("").
user> USER@NAME
Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[koofr]
type = koofr
baseurl = https://app.koofr.net
user = USER@NAME
password = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

You can choose to edit advanced config in order to enter your own service URL if you use an on-premise or white label Koofr instance, or choose an alternative mount instead of your primary storage.

Once configured you can then use rclone like this,

List directories in top level of your Koofr

rclone lsd koofr:

List all the files in your Koofr

rclone ls koofr:

To copy a local directory to a Koofr directory called backup

rclone copy /home/source koofr:backup

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character Value Replacement
\ 0x5C ＼

Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

Standard Options

Here are the standard options specific to koofr (Koofr).

--koofr-user

Your Koofr user name

--koofr-password

Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)

NB Input to this must be obscured - see rclone obscure.

Advanced Options

Here are the advanced options specific to koofr (Koofr).

--koofr-endpoint

The Koofr API endpoint to use

--koofr-mountid

Mount ID of the mount to use. If omitted, the primary mount is used.

--koofr-setmtime

Whether the backend supports setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.

--koofr-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Mail.ru Cloud

Mail.ru Cloud is a cloud storage service provided by the Russian internet company Mail.Ru Group. The official desktop client is Disk-O:, available only on Windows. (Please note that the official sites are in Russian.)

Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until 2FA support is eventually implemented.

Feature highlights

Configuration

Here is an example of making a mailru configuration. First create a Mail.ru Cloud account and choose a tariff, then run

rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Mail.ru Cloud
   \ "mailru"
[snip]
Storage> mailru
User name (usually email)
Enter a string value. Press Enter for the default ("").
user> username@mail.ru
Password
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Skip full upload if there is another file with same data hash.
This feature is called "speedup" or "put by hash". It is especially efficient
in case of generally available files like popular books, video or audio clips
[snip]
Enter a boolean value (true or false). Press Enter for the default ("true").
Choose a number from below, or type in your own value
 1 / Enable
   \ "true"
 2 / Disable
   \ "false"
speedup_enable> 1
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[remote]
type = mailru
user = username@mail.ru
pass = *** ENCRYPTED ***
speedup_enable = true
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Configuration of this backend does not require a local web browser. You can use the configured backend as shown below:

See top level directories

rclone lsd remote:

Make a new directory

rclone mkdir remote:directory

List the contents of a directory

rclone ls remote:directory

Sync /home/local/directory to the remote path, deleting any excess files in the path.

rclone sync -i /home/local/directory remote:directory

Modified time

Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970".

Hash checksums

Hash sums use a custom Mail.ru algorithm based on SHA1. If file size is less than or equal to the size of an SHA1 hash (20 bytes), its hash is simply its data right-padded with zero bytes. The hash sum of a larger file is computed as the SHA1 sum of the file data bytes concatenated with a decimal representation of the data length.
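
As a rough sketch of the large file case only, based purely on the description above and assuming GNU coreutils, the hash could be reproduced from the shell like this:

# append the decimal file length to the data, then SHA1 the result
f=bigfile.bin
{ cat "$f"; printf '%s' "$(stat -c%s "$f")"; } | sha1sum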

Emptying Trash

Removing a file or directory actually moves it to the trash, which is not visible to rclone but can be seen in a web browser. The trashed file still occupies part of total quota. If you wish to empty your trash and free some quota, you can use the rclone cleanup remote: command, which will permanently delete all your trashed files. This command does not take any path arguments.

Quota information

To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character Value Replacement
" 0x22 ＂
* 0x2A ＊
: 0x3A ：
< 0x3C ＜
> 0x3E ＞
? 0x3F ？
\ 0x5C ＼
| 0x7C ｜

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Limitations

File size limits depend on your account. A single file size is limited to 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.

Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Standard Options

Here are the standard options specific to mailru (Mail.ru Cloud).

--mailru-user

User name (usually email)

--mailru-pass

Password

NB Input to this must be obscured - see rclone obscure.

--mailru-speedup-enable

Skip full upload if there is another file with same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization.

Advanced Options

Here are the advanced options specific to mailru (Mail.ru Cloud).

--mailru-speedup-file-patterns

Comma separated list of file name patterns eligible for speedup (put by hash). Patterns are case insensitive and can contain '*' or '?' meta characters.
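
For example, to restrict speedup to a couple of illustrative patterns (these patterns are placeholders, not the defaults):

rclone copy --mailru-speedup-file-patterns "*.mp4,*.zip" /local/media remote:media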

--mailru-speedup-max-disk

This option allows you to disable speedup (put by hash) for large files (because preliminary hashing can exhaust your RAM or disk space).

--mailru-speedup-max-memory

Files larger than the size given below will always be hashed on disk.

--mailru-check-hash

What should copy do if file checksum is mismatched or invalid

--mailru-user-agent

HTTP user agent used internally by client. Defaults to "rclone/VERSION" or "--user-agent" provided on command line.

--mailru-quirks

Comma separated list of internal maintenance flags. This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist gzip insecure retry400

--mailru-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Mega

Mega is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption.

This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Mega
   \ "mega"
[snip]
Storage> mega
User name
user> you@example.com
Password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
Remote config
--------------------
[remote]
type = mega
user = you@example.com
pass = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

NOTE: The encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in rclone will fail.

Once configured you can then use rclone like this,

List directories in top level of your Mega

rclone lsd remote:

List all the files in your Mega

rclone ls remote:

To copy a local directory to a Mega directory called backup

rclone copy /home/source remote:backup

Modified time and hashes

Mega does not support modification times or hashes yet.

Restricted filename characters

Character Value Replacement
NUL 0x00
/ 0x2F

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Duplicated files

Mega can have two files with exactly the same name and path (unlike a normal file system).

Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

Use rclone dedupe to fix duplicated files.

Failure to log-in

Mega remotes seem to get blocked (reject logins) under "heavy use". We haven't worked out the exact blocking rules but it seems to be related to fast paced, successive rclone commands.

For example, executing the command rclone link remote:file 90 times in a row will cause the remote to become "blocked". This is not an abnormal situation, for example if you wish to get the public links of a directory with hundreds of files... After more or less a week, the remote will accept rclone logins normally again.

You can mitigate this issue by mounting the remote with rclone mount. This will log in when mounting and log out when unmounting only. You can also run rclone rcd and then use rclone rc to run the commands over the API to avoid logging in each time.

Rclone does not currently close mega sessions (you can see them in the web interface), however closing the sessions does not solve the issue.

If you space rclone commands by 3 seconds it will avoid blocking the remote. We haven't identified the exact blocking rules, so perhaps one could execute the command 80 times without waiting and avoid blocking by waiting 3 seconds, then continuing...

Note that this has been observed by trial and error and might not be set in stone.
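
For example, a minimal shell sketch which spaces out link commands to stay under the observed threshold (the file names are placeholders):

for f in file1.bin file2.bin file3.bin; do
    rclone link remote:"$f"
    sleep 3
done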

Other tools seem not to produce this blocking effect, as they use a different working approach (state-based, using sessionIDs instead of log-in) which isn't compatible with the current stateless rclone approach.

Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 minutes, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though.

Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum.

So, if rclone was working nicely and suddenly you are unable to log in and you are sure the user and the password are correct, it is likely that the remote has been blocked for a while.

Standard Options

Here are the standard options specific to mega (Mega).

--mega-user

User name

--mega-pass

Password.

NB Input to this must be obscured - see rclone obscure.

Advanced Options

Here are the advanced options specific to mega (Mega).

--mega-debug

Output more debug from Mega.

If this flag is set (along with -vv) it will print further debugging information from the mega backend.

--mega-hard-delete

Delete files permanently rather than putting them into the trash.

Normally the mega backend will put all deletions into the trash rather than permanently deleting them. If you specify this then rclone will permanently delete objects instead.

--mega-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

This backend uses the go-mega go library which is an open source go library implementing the Mega API. There doesn't appear to be any documentation for the mega protocol beyond the mega C++ SDK source code so there are likely quite a few errors still remaining in this library.

Mega allows duplicate files which may confuse rclone.

Memory

The memory backend is an in RAM backend. It does not persist its data - use the local backend for that.

The memory backend behaves like a bucket based remote (eg like s3). Because it has no parameters you can just use it with the :memory: remote name.

You can configure it as a remote like this with rclone config too if you want to:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Memory
   \ "memory"
[snip]
Storage> memory
** See help for memory backend at: https://rclone.org/memory/ **

Remote config

--------------------
[remote]
type = memory
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Because the memory backend isn't persistent it is most useful for testing or with an rclone server or rclone mount, eg

rclone mount :memory: /mnt/tmp
rclone serve webdav :memory:
rclone serve sftp :memory:

Modified time and hashes

The memory backend supports MD5 hashes and modification times accurate to 1 ns.

Restricted filename characters

The memory backend replaces the default restricted characters set.

Microsoft Azure Blob Storage

Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

Here is an example of making a Microsoft Azure Blob Storage configuration for a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Microsoft Azure Blob Storage
   \ "azureblob"
[snip]
Storage> azureblob
Storage Account Name
account> account_name
Storage Account Key
key> base64encodedkey==
Endpoint for the service - leave blank normally.
endpoint> 
Remote config
--------------------
[remote]
account = account_name
key = base64encodedkey==
endpoint = 
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

See all containers

rclone lsd remote:

Make a new container

rclone mkdir remote:container

List the contents of a container

rclone ls remote:container

Sync /home/local/directory to the remote container, deleting any excess files in the container.

rclone sync -i /home/local/directory remote:container

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Modified time

The modified time is stored as metadata on the object with the mtime key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character Value Replacement
/ 0x2F ／
\ 0x5C ＼

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

Character Value Replacement
. 0x2E ．

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Hashes

MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, eg the local disk.

Authenticating with Azure Blob Storage

Rclone has 3 ways of authenticating with Azure Blob Storage:

Account and Key

This is the most straightforward and least flexible way. Just fill in the account and key lines and leave the rest blank.

SAS URL

This can be an account level SAS URL or container level SAS URL.

To use it leave account, key blank and fill in sas_url.

An account level SAS URL or container level SAS URL can be obtained from the Azure portal or the Azure Storage Explorer. To get a container level SAS URL right click on a container in the Azure Blob explorer in the Azure portal.
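
For example, a config for a container level SAS URL might look like this (a sketch - the account, container and token are placeholders):

[azsas]
type = azureblob
sas_url = https://ACCOUNT.blob.core.windows.net/CONTAINER?sv=...

rclone ls azsas:CONTAINER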

If you use a container level SAS URL, rclone operations are permitted only on a particular container, eg

rclone ls azureblob:container

You can also list the single container from the root. This will only show the container specified by the SAS URL.

$ rclone lsd azureblob:
container/

Note that you can't see or access any other containers - this will fail

rclone ls azureblob:othercontainer

Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server.

Multipart uploads

Rclone supports multipart uploads with Azure Blob storage. Files bigger than 256MB will be uploaded using chunked upload by default.

The files will be uploaded in parallel in 4MB chunks (by default). Note that these chunks are buffered in memory and there may be up to --transfers of them being uploaded at once.

Files can't be split into more than 50,000 chunks so, by default, the largest file that can be uploaded with a 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates fewer than 50,000 chunks. By default this will mean a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M.

Note that rclone doesn't commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won't allow more than that amount of uncommitted blocks.
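For example (a sketch, with a placeholder file path), to raise the chunk size for a very large upload:

rclone copy --azureblob-chunk-size 100M /path/to/bigfile remote:container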

Standard Options

Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).

--azureblob-account

Storage Account Name (leave blank to use SAS URL or Emulator)

--azureblob-key

Storage Account Key (leave blank to use SAS URL or Emulator)

--azureblob-sas-url

SAS URL for container level access only (leave blank if using account/key or Emulator)

--azureblob-use-emulator

Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)

Advanced Options

Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).

--azureblob-endpoint

Endpoint for the service Leave blank normally.

--azureblob-upload-cutoff

Cutoff for switching to chunked upload (<= 256MB).

--azureblob-chunk-size

Upload chunk size (<= 100MB).

Note that this is stored in memory and there may be up to "--transfers" chunks stored at once in memory.

--azureblob-list-chunk

Size of blob list.

This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out (source). This can be used to limit the number of blob items to return, to avoid the time out.

--azureblob-access-tier

Access tier of blob: hot, cool or archive.

Archived blobs can be restored by setting the access tier to hot or cool. Leave blank if you intend to use the default access tier, which is set at the account level.

If there is no "access tier" specified, rclone doesn't apply any tier. rclone performs a "Set Tier" operation on blobs while uploading; if objects are not modified, specifying a new "access tier" will have no effect. If blobs are in "archive tier" at the remote, trying to perform data transfer operations from the remote will not be allowed. The user should first restore them by tiering the blobs to "Hot" or "Cool".

--azureblob-disable-checksum

Don't store MD5 checksum with object metadata.

Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.

--azureblob-memory-pool-flush-time

How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.

--azureblob-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool.

--azureblob-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.

Azure Storage Emulator Support

You can test rclone with the storage emulator locally. To do this, make sure the Azure storage emulator is installed locally, then set up a new remote with rclone config following the instructions in the introduction. Set the use_emulator config option to true; you do not need to provide a default account name or key if using the emulator.
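As a minimal sketch (the remote name is a placeholder), such a config section might look like:

[azurelocal]
type = azureblob
use_emulator = true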

Microsoft OneDrive

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Microsoft OneDrive
   \ "onedrive"
[snip]
Storage> onedrive
Microsoft App Client Id
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id>
Microsoft App Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Choose a number from below, or type in an existing value
 1 / OneDrive Personal or Business
   \ "onedrive"
 2 / Sharepoint site
   \ "sharepoint"
 3 / Type in driveID
   \ "driveid"
 4 / Type in SiteID
   \ "siteid"
 5 / Search a Sharepoint site
   \ "search"
Your choice> 1
Found 1 drives, please select the one you want to use:
0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
Chose drive to use:> 0
Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
Is that okay?
y) Yes
n) No
y/n> y
--------------------
[remote]
type = onedrive
token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
drive_type = business
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your OneDrive

rclone lsd remote:

List all the files in your OneDrive

rclone ls remote:

To copy a local directory to a OneDrive directory called backup

rclone copy /home/source remote:backup

Getting your own Client ID and Key

You can use your own Client ID if the default one (client_id left blank) doesn't work for you or you see lots of throttling. The default Client ID and Key are shared by all rclone users when performing requests.

If you are having problems with them (e.g. seeing a lot of throttling), you can get your own Client ID and Key by following the steps below:

  1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click New registration.
  2. Enter a name for your app, choose account type Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), select Web in Redirect URI, enter http://localhost:53682/ and click Register. Copy and keep the Application (client) ID under the app name for later use.
  3. Under manage select Certificates & secrets, click New client secret. Copy and keep that secret for later use.
  4. Under manage select API permissions, click Add a permission and select Microsoft Graph then select delegated permissions.
  5. Search and select the following permissions: Files.Read, Files.ReadWrite, Files.Read.All, Files.ReadWrite.All, offline_access, User.Read. Once selected click Add permissions at the bottom.

Now the application is complete. Run rclone config to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.

Modification time and hashes

OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash.

For all types of OneDrive you can use the --checksum flag.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character   Value   Replacement
---------   -----   -----------
"           0x22    ＂
*           0x2A    ＊
:           0x3A    ：
<           0x3C    ＜
>           0x3E    ＞
?           0x3F    ？
\           0x5C    ＼
|           0x7C    ｜
#           0x23    ＃
%           0x25    ％

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

Character   Value   Replacement
---------   -----   -----------
SP          0x20    ␠
.           0x2E    ．

File names can also not begin with the following characters. These only get replaced if they are the first character in the name:

Character   Value   Replacement
---------   -----   -----------
SP          0x20    ␠
~           0x7E    ～

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Deleting files

Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.

Standard Options

Here are the standard options specific to onedrive (Microsoft OneDrive).

--onedrive-client-id

OAuth Client Id Leave blank normally.

--onedrive-client-secret

OAuth Client Secret Leave blank normally.

Advanced Options

Here are the advanced options specific to onedrive (Microsoft OneDrive).

--onedrive-token

OAuth Access Token as a JSON blob.

--onedrive-auth-url

Auth server URL. Leave blank to use the provider defaults.

--onedrive-token-url

Token server url. Leave blank to use the provider defaults.

--onedrive-chunk-size

Chunk size to upload files with - must be multiple of 320k (327,680 bytes).

Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big." Note that the chunks will be buffered into memory.
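For example (a sketch, with placeholder paths), 10M is a valid value since 10M = 32 × 320k:

rclone copy --onedrive-chunk-size 10M /home/source remote:backup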

--onedrive-drive-id

The ID of the drive to use

--onedrive-drive-type

The type of the drive ( personal | business | documentLibrary )

--onedrive-expose-onenote-files

Set to make OneNote files show up in directory listings.

By default rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listing, set this option.

--onedrive-server-side-across-configs

Allow server side operations (eg copy) to work across different onedrive configs.

This can be useful if you wish to do a server side copy between two different Onedrives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations.

--onedrive-no-versions

Remove all versions on modifying operations

Onedrive for business creates versions when rclone uploads new files overwriting an existing one and when it sets the modification time.

These versions take up space out of the quota.

This flag checks for versions after file upload and setting modification time and removes all but the last version.

NB Onedrive personal can't currently delete versions so don't use this flag there.

--onedrive-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote: command to get a new token and refresh token.
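For example, for a remote called remote:

rclone config reconnect remote: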

Naming

Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ？ instead.

File sizes

The largest allowed file size is 100GB for both OneDrive Personal and OneDrive for Business (Updated 17 June 2020).

Path length

The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.

Number of files

OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like couldn’t list files: UnknownError:. See #2707 for more info.

An official document about the limitations for different types of OneDrive can be found here.

Versions

Every change in a file on OneDrive causes the service to create a new version of the file. This counts against a user's quota. For example changing the modification time of a file creates a second version, so the file apparently uses twice the space.

For example the copy command is affected by this as rclone copies the file and then afterwards sets the modification time to match the source file which uses another version.

You can use the rclone cleanup command (see below) to remove all old versions.

Or you can set the no_versions parameter to true and rclone will remove versions after operations which create new versions. This takes extra transactions so only enable it if you need it.
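For example (a sketch, with placeholder paths), to enable it for a single copy:

rclone copy --onedrive-no-versions /home/source remote:backup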

Note At the time of writing Onedrive Personal creates versions (but not for setting the modification time) but the API for removing them returns "API not found" so cleanup and no_versions should not be used on Onedrive Personal.

Disabling versioning

Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has brought an update to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting:

  1. Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case you haven't installed this already)
  2. Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
  3. Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this will prompt for your credentials)
  4. Set-SPOTenant -EnableMinimumVersionRequirement $False
  5. Disconnect-SPOService (to disconnect from the server)

Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.

User Weropol has found a method to disable versioning on OneDrive

  1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page.
  2. Click Site settings.
  3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists.
  4. Click Customize "Documents".
  5. Click General Settings > Versioning Settings.
  6. Under Document Version History select the option No versioning. Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe.
  7. Apply the changes by clicking OK.
  8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
  9. Restore the versioning settings after using rclone. (Optional)

Cleanup

OneDrive supports rclone cleanup which causes rclone to look through every file under the path supplied and delete all versions but the current version. Because this involves traversing all the files, then querying each file for versions, it can be quite slow. Rclone does --checkers tests in parallel. The command also supports -i which is a great way to see what it would do.

rclone cleanup -i remote:path/subdir # interactively remove all old versions for path/subdir
rclone cleanup remote:path/subdir    # unconditionally remove all old versions for path/subdir

NB Onedrive personal can't currently delete versions

Troubleshooting

Unexpected file size/hash differences on Sharepoint

It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and hash checks to fail. There are also other situations that will cause OneDrive to report inconsistent file sizes. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments:

--ignore-checksum --ignore-size
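A complete command might look like this (a sketch, with placeholder paths):

rclone sync -i --ignore-checksum --ignore-size /home/source remote:backup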

Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for OneDrive and find the affected files (which will be in the error messages/log for rclone). Simply click on each of these files, causing OneDrive to open them on the web. This will cause each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above.

Replacing/deleting existing files on Sharepoint gets "item not found"

It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.). As a workaround, you may use the --backup-dir <BACKUP_DIR> command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir on backend mysharepoint, you may use:

--backup-dir mysharepoint:rclone-backup-dir
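A complete command using that argument might look like this (a sketch, with placeholder paths):

rclone sync -i --backup-dir mysharepoint:rclone-backup-dir /home/source mysharepoint:backup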

access_denied (AADSTS65005)

Error: access_denied
Code: AADSTS65005
Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.

This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.

However, there are other ways to interact with your OneDrive account. Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint

invalid_grant (AADSTS50076)

Error: invalid_grant
Code: AADSTS50076
Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.

If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run rclone config, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: Already have a token - refresh?. For this question, answer y and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.

OpenDrive

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / OpenDrive
   \ "opendrive"
[snip]
Storage> opendrive
Username
username>
Password
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
--------------------
[remote]
username =
password = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

List directories in top level of your OpenDrive

rclone lsd remote:

List all the files in your OpenDrive

rclone ls remote:

To copy a local directory to an OpenDrive directory called backup

rclone copy /home/source remote:backup

Modified time and MD5SUMs

OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

Restricted filename characters

Character   Value   Replacement
---------   -----   -----------
NUL         0x00    ␀
/           0x2F    ／
"           0x22    ＂
*           0x2A    ＊
:           0x3A    ：
<           0x3C    ＜
>           0x3E    ＞
?           0x3F    ？
\           0x5C    ＼
|           0x7C    ｜

File names can also not begin or end with the following characters. These only get replaced if they are the first or last character in the name:

Character   Value   Replacement
---------   -----   -----------
SP          0x20    ␠
HT          0x09    ␉
LF          0x0A    ␊
VT          0x0B    ␋
CR          0x0D    ␍

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard Options

Here are the standard options specific to opendrive (OpenDrive).

--opendrive-username

Username

--opendrive-password

Password.

NB Input to this must be obscured - see rclone obscure.

Advanced Options

Here are the advanced options specific to opendrive (OpenDrive).

--opendrive-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

--opendrive-chunk-size

Files will be uploaded in chunks this size.

Note that these chunks are buffered in memory so increasing them will increase memory use.

Limitations

Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ？ instead.

QingStor

Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

Here is an example of making a QingStor configuration. First run

rclone config

This will guide you through an interactive setup process.

No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / QingStor Object Storage
   \ "qingstor"
[snip]
Storage> qingstor
Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter QingStor credentials in the next step
   \ "false"
 2 / Get QingStor credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> access_key
QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> secret_key
Enter an endpoint URL to connection QingStor API.
Leave blank will use the default value "https://qingstor.com:443"
endpoint>
Zone connect to. Default is "pek3a".
Choose a number from below, or type in your own value
   / The Beijing (China) Three Zone
 1 | Needs location constraint pek3a.
   \ "pek3a"
   / The Shanghai (China) First Zone
 2 | Needs location constraint sh1a.
   \ "sh1a"
zone> 1
Number of connection retry.
Leave blank will use the default value "3".
connection_retries>
Remote config
--------------------
[remote]
env_auth = false
access_key_id = access_key
secret_access_key = secret_key
endpoint =
zone = pek3a
connection_retries =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

This remote is called remote and can now be used like this

See all buckets

rclone lsd remote:

Make a new bucket

rclone mkdir remote:bucket

List the contents of a bucket

rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

rclone sync -i /home/local/directory remote:bucket

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Multipart uploads

rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.

Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket for just one bucket, or rclone cleanup remote: for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time.

Buckets and Zone

With QingStor you can list buckets (rclone lsd) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, incorrect zone, the bucket is not in 'XXX' zone.

Authentication

There are two ways to supply rclone with a set of QingStor credentials, in order of precedence:

- Directly in the rclone configuration file (env_auth = false): fill in access_key_id and secret_access_key.
- Runtime configuration (env_auth = true): credentials are read from standard environment variables or from IAM.

Restricted filename characters

The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard Options

Here are the standard options specific to qingstor (QingCloud Object Storage).

--qingstor-env-auth

Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.

--qingstor-access-key-id

QingStor Access Key ID Leave blank for anonymous access or runtime credentials.

--qingstor-secret-access-key

QingStor Secret Access Key (password) Leave blank for anonymous access or runtime credentials.

--qingstor-endpoint

Enter an endpoint URL to connect to the QingStor API. Leave blank to use the default value "https://qingstor.com:443".

--qingstor-zone

Zone to connect to. Default is "pek3a".

Advanced Options

Here are the advanced options specific to qingstor (QingCloud Object Storage).

--qingstor-connection-retries

Number of connection retries.

--qingstor-upload-cutoff

Cutoff for switching to chunked upload

Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.

--qingstor-chunk-size

Chunk size to use for uploading.

When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.

Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer.

If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.

--qingstor-upload-concurrency

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded concurrently.

NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though).

If you are uploading small numbers of large files over high speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.

--qingstor-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Swift

Swift refers to OpenStack Object Storage. Commercial implementations of that being:

- Rackspace Cloud Files
- Memset Memstore
- OVH Object Storage

Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

Here is an example of making a swift configuration. First run

rclone config

This will guide you through an interactive setup process.

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
[snip]
Storage> swift
Get swift credentials from environment variables in standard OpenStack form.
Choose a number from below, or type in your own value
 1 / Enter swift credentials in the next step
   \ "false"
 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
   \ "true"
env_auth> true
User name to log in (OS_USERNAME).
user> 
API key or password (OS_PASSWORD).
key> 
Authentication URL for server (OS_AUTH_URL).
Choose a number from below, or type in your own value
 1 / Rackspace US
   \ "https://auth.api.rackspacecloud.com/v1.0"
 2 / Rackspace UK
   \ "https://lon.auth.api.rackspacecloud.com/v1.0"
 3 / Rackspace v2
   \ "https://identity.api.rackspacecloud.com/v2.0"
 4 / Memset Memstore UK
   \ "https://auth.storage.memset.com/v1.0"
 5 / Memset Memstore UK v2
   \ "https://auth.storage.memset.com/v2.0"
 6 / OVH
   \ "https://auth.cloud.ovh.net/v3"
auth> 
User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
user_id> 
User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
domain> 
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
tenant> 
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
tenant_id> 
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
tenant_domain> 
Region name - optional (OS_REGION_NAME)
region> 
Storage URL - optional (OS_STORAGE_URL)
storage_url> 
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
auth_token> 
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
auth_version> 
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
Choose a number from below, or type in your own value
 1 / Public (default, choose this if not sure)
   \ "public"
 2 / Internal (use internal service net)
   \ "internal"
 3 / Admin
   \ "admin"
endpoint_type> 
Remote config
--------------------
[test]
env_auth = true
user = 
key = 
auth = 
user_id = 
domain = 
tenant = 
tenant_id = 
tenant_domain = 
region = 
storage_url = 
auth_token = 
auth_version = 
endpoint_type = 
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

This remote is called remote and can now be used like this

See all containers

rclone lsd remote:

Make a new container

rclone mkdir remote:container

List the contents of a container

rclone ls remote:container

Sync /home/local/directory to the remote container, deleting any excess files in the container.

rclone sync -i /home/local/directory remote:container

Configuration from an OpenStack credentials file

An OpenStack credentials file typically looks something like this (without the comments)

export OS_AUTH_URL=https://a.provider.net/v2.0
export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
export OS_TENANT_NAME="1234567890123456"
export OS_USERNAME="123abc567xy"
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_REGION_NAME="SBG1"
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi

The config file needs to look something like this where $OS_USERNAME represents the value of the OS_USERNAME variable - 123abc567xy in the example above.

[remote]
type = swift
user = $OS_USERNAME
key = $OS_PASSWORD
auth = $OS_AUTH_URL
tenant = $OS_TENANT_NAME

Note that you may (or may not) need to set region too - try without first.

Configuration from the environment

If you prefer you can configure rclone to use swift using a standard set of OpenStack environment variables.

When you run through the config, make sure you choose true for env_auth and leave everything else blank.

rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is a list of the variables in the docs for the swift library.

Using an alternate authentication method

If your OpenStack installation uses a non-standard authentication method that might not be yet supported by rclone or the underlying swift library, you can authenticate externally (e.g. calling manually the openstack commands to get a token). Then, you just need to pass the two configuration variables auth_token and storage_url. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation.
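As a sketch, such a config might look like this (the token and URL are placeholders you would obtain from your external authentication step):

[remote]
type = swift
env_auth = false
auth_token = XXXXXXXXXXXXXXXXXXXX
storage_url = https://storage.example.com/v1/AUTH_xxxxx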

Using rclone without a config file

You can use rclone with swift without a config file, if desired, like this:

source openstack-credentials-file
export RCLONE_CONFIG_MYREMOTE_TYPE=swift
export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

--update and --use-server-modtime

As noted below, the modified time is stored as metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
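For example (a sketch, with placeholder paths):

rclone copy --update --use-server-modtime /home/local/directory remote:container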

Standard Options

Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).

--swift-env-auth

Get swift credentials from environment variables in standard OpenStack form.

--swift-user

User name to log in (OS_USERNAME).

--swift-key

API key or password (OS_PASSWORD).

--swift-auth

Authentication URL for server (OS_AUTH_URL).

--swift-user-id

User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).

--swift-domain

User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)

--swift-tenant

Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)

--swift-tenant-id

Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)

--swift-tenant-domain

Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)

--swift-region

Region name - optional (OS_REGION_NAME)

--swift-storage-url

Storage URL - optional (OS_STORAGE_URL)

--swift-auth-token

Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)

--swift-application-credential-id

Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)

--swift-application-credential-name

Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)

--swift-application-credential-secret

Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)

--swift-auth-version

AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)

--swift-endpoint-type

Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)

--swift-storage-policy

The storage policy to use when creating a new container

This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.

Advanced Options

Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).

--swift-chunk-size

Above this size files will be chunked into a _segments container.

The default for this is 5GB which is its maximum value.

--swift-no-chunk

Don't chunk files during streaming upload.

When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.

This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM.

Rclone will still chunk files bigger than chunk_size when doing normal copy operations.

--swift-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Modified time

The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

Restricted filename characters

Character   Value   Replacement
---------   -----   -----------
NUL         0x00    ␀
/           0x2F    ／

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Limitations

The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

Troubleshooting

Rclone gives Failed to create file system for "remote:": Bad Request

Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.

So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.

This may also be caused by specifying the region when you shouldn't have (eg OVH).

Rclone gives Failed to create file system: Response didn't have storage url and auth token

This is most likely caused by forgetting to specify your tenant when setting up a swift remote.

pCloud

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Pcloud
   \ "pcloud"
[snip]
Storage> pcloud
Pcloud App Client Id - leave blank normally.
client_id> 
Pcloud App Client Secret - leave blank normally.
client_secret> 
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id = 
client_secret = 
token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your pCloud

rclone lsd remote:

List all the files in your pCloud

rclone ls remote:

To copy a local directory to a pCloud directory called backup

rclone copy /home/source remote:backup

Modified time and hashes

pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.

pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum flag.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character   Value   Replacement
---------   -----   -----------
\           0x5C    ＼

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Deleting files

Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup can be used to empty the trash.

Root folder ID

You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your pCloud drive.

Normally you will leave this blank and rclone will determine the correct root to use itself.

However you can set this to restrict rclone to a specific folder hierarchy.

In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the folder field of the URL when you open the relevant folder in the pCloud web interface.

So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the config.
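Continuing that example, the config section might look like this (a sketch, using the placeholder ID from the URL above):

[remote]
type = pcloud
root_folder_id = 5xxxxxxxx8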

Standard Options

Here are the standard options specific to pcloud (Pcloud).

--pcloud-client-id

OAuth Client Id Leave blank normally.

--pcloud-client-secret

OAuth Client Secret Leave blank normally.

Advanced Options

Here are the advanced options specific to pcloud (Pcloud).

--pcloud-token

OAuth Access Token as a JSON blob.

--pcloud-auth-url

Auth server URL. Leave blank to use the provider defaults.

--pcloud-token-url

Token server url. Leave blank to use the provider defaults.

--pcloud-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

--pcloud-root-folder-id

Fill in for rclone to use a non root folder as its starting point.

--pcloud-hostname

Hostname to connect to.

This is normally set when rclone initially does the oauth connection, however you will need to set it by hand if you are using remote config with rclone authorize.

premiumize.me

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / premiumize.me
   \ "premiumizeme"
[snip]
Storage> premiumizeme
** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **

Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = premiumizeme
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> 

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your premiumize.me

rclone lsd remote:

List all the files in your premiumize.me

rclone ls remote:

To copy a local directory to a premiumize.me directory called backup

rclone copy /home/source remote:backup

Modified time and hashes

premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character   Value   Replacement
---------   -----   -----------
\           0x5C    ＼
"           0x22    ＂

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard Options

Here are the standard options specific to premiumizeme (premiumize.me).

--premiumizeme-api-key

API Key.

This is not normally used - use oauth instead.

Advanced Options

Here are the advanced options specific to premiumizeme (premiumize.me).

--premiumizeme-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Limitations

Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

premiumize.me file names can't have the \ or " characters in. rclone maps these to and from identical looking unicode equivalents ＼ and ＂.

premiumize.me only supports filenames up to 255 characters in length.

put.io

Paths are specified as remote:path

put.io paths may be as deep as required, eg remote:directory/subdirectory.

The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> putio
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Put.io
   \ "putio"
[snip]
Storage> putio
** See help for putio backend at: https://rclone.org/putio/ **

Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[putio]
type = putio
token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
putio                putio

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

You can then use it like this,

List directories in top level of your put.io

rclone lsd remote:

List all the files in your put.io

rclone ls remote:

To copy a local directory to a put.io directory called backup

rclone copy /home/source remote:backup

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character   Value   Replacement
---------   -----   -----------
\           0x5C    ＼

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Advanced Options

Here are the advanced options specific to putio (Put.io).

--putio-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Seafile

This is a backend for the Seafile storage service:

- It works with both the free community edition and the professional edition.
- Seafile versions 6.x and 7.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.

Root mode vs Library mode

There are two distinct modes in which you can set up your remote:

- You point your remote to the root of the server, meaning you don't specify a library during the configuration. Paths are specified as remote:library. You may put subdirectories in too, eg remote:library/path/to/dir.
- You point your remote to a specific library during the configuration. Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)

Configuration in root mode

Here is an example of making a seafile configuration for a user with no two-factor authentication. First run

rclone config

This will guide you through an interactive setup process. To authenticate you will need the URL of your server, your email (or username) and your password.

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> seafile
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Seafile
   \ "seafile"
[snip]
Storage> seafile
** See help for seafile backend at: https://rclone.org/seafile/ **

URL of seafile host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to cloud.seafile.com
   \ "https://cloud.seafile.com/"
url> http://my.seafile.server/
User name (usually email address)
Enter a string value. Press Enter for the default ("").
user> me@example.com
Password
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g> y
Enter the password:
password:
Confirm the password:
password:
Two-factor authentication ('true' if the account has 2FA enabled)
Enter a boolean value (true or false). Press Enter for the default ("false").
2fa> false
Name of the library. Leave blank to access all non-encrypted libraries.
Enter a string value. Press Enter for the default ("").
library>
Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
Two-factor authentication is not enabled on this account.
--------------------
[seafile]
type = seafile
url = http://my.seafile.server/
user = me@example.com
pass = *** ENCRYPTED ***
2fa = false
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

This remote is called seafile. It's pointing to the root of your seafile server and can now be used like this:

See all libraries

rclone lsd seafile:

Create a new library

rclone mkdir seafile:library

List the contents of a library

rclone ls seafile:library

Sync /home/local/directory to the remote library, deleting any excess files in the library.

rclone sync -i /home/local/directory seafile:library

Configuration in library mode

Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will attempt to authenticate you:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> seafile
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Seafile
   \ "seafile"
[snip]
Storage> seafile
** See help for seafile backend at: https://rclone.org/seafile/ **

URL of seafile host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to cloud.seafile.com
   \ "https://cloud.seafile.com/"
url> http://my.seafile.server/
User name (usually email address)
Enter a string value. Press Enter for the default ("").
user> me@example.com
Password
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g> y
Enter the password:
password:
Confirm the password:
password:
Two-factor authentication ('true' if the account has 2FA enabled)
Enter a boolean value (true or false). Press Enter for the default ("false").
2fa> true
Name of the library. Leave blank to access all non-encrypted libraries.
Enter a string value. Press Enter for the default ("").
library> My Library
Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
Two-factor authentication: please enter your 2FA code
2fa code> 123456
Authenticating...
Success!
--------------------
[seafile]
type = seafile
url = http://my.seafile.server/
user = me@example.com
pass = 
2fa = true
library = My Library
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

You'll notice your password is blank in the configuration. It's because we only need the password to authenticate you once.

You specified My Library during the configuration. The root of the remote is pointing at the root of the library My Library:

See all files in the library:

rclone lsd seafile:

Create a new directory inside the library

rclone mkdir seafile:directory

List the contents of a directory

rclone ls seafile:directory

Sync /home/local/directory to the remote library, deleting any excess files in the library.

rclone sync -i /home/local/directory seafile:

--fast-list

Seafile version 7+ supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character   Value   Replacement
---------   -----   -----------
/           0x2F    ／
"           0x22    ＂
\           0x5C    ＼

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory:

rclone link seafile:seafile-tutorial.doc
http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/

or if run on a directory you will get:

rclone link seafile:dir
http://my.seafile.server/d/9ea2455f6f55478bbb0d/

Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will get the exact same link.

Compatibility

It has been actively tested using the seafile docker image of these versions:

- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition

Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.

Standard Options

Here are the standard options specific to seafile (seafile).

--seafile-url

URL of seafile host to connect to

--seafile-user

User name (usually email address)

--seafile-pass

Password

NB Input to this must be obscured - see rclone obscure.

--seafile-2fa

Two-factor authentication ('true' if the account has 2FA enabled)

--seafile-library

Name of the library. Leave blank to access all non-encrypted libraries.

--seafile-library-key

Library password (for encrypted libraries only). Leave blank if you pass it through the command line.

NB Input to this must be obscured - see rclone obscure.

--seafile-auth-token

Authentication token

Advanced Options

Here are the advanced options specific to seafile (seafile).

--seafile-create-library

Should rclone create a library if it doesn't exist

--seafile-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

SFTP

SFTP is the Secure (or SSH) File Transfer Protocol.

The SFTP backend can be used with a number of different providers:

- C14
- rsync.net

SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.

"Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net, on the other hand, requires users to OMIT the leading /.

Here is an example of making an SFTP configuration. First run

rclone config

This will guide you through an interactive setup process.

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / SSH/SFTP Connection
   \ "sftp"
[snip]
Storage> sftp
SSH host to connect to
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "example.com"
host> example.com
SSH username, leave blank for current username, ncw
user> sftpuser
SSH port, leave blank to use default (22)
port>
SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> n
Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
key_file>
Remote config
--------------------
[remote]
host = example.com
user = sftpuser
port =
pass =
key_file =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

This remote is called remote and can now be used like this:

See all directories in the home directory

rclone lsd remote:

Make a new directory

rclone mkdir remote:path/to/directory

List the contents of a directory

rclone ls remote:path/to/directory

Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

rclone sync -i /home/local/directory remote:directory

SSH Authentication

The SFTP remote supports three authentication methods:

- Password
- Key file
- ssh-agent

Key files should be PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files are supported.

The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line ('\n' or '\r\n') separating lines. i.e.

key_pem = -----BEGIN RSA PRIVATE KEY-----\n0gAMbMbaSsd\n-----END RSA PRIVATE KEY-----

This will generate it correctly for key_pem for use in the config:

awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
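
To produce the whole config line in one go, you can wrap the same trick in a command substitution (a sketch; paste the output into the sftp section of your config file):

echo "key_pem = $(awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa)"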

If you don't specify pass, key_file, or key_pem then rclone will attempt to contact an ssh-agent.

You can also specify key_use_agent to force the usage of an ssh-agent. In this case key_file or key_pem can also be specified to force the usage of a specific key in the ssh-agent.

Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.

If you set the --sftp-ask-password option, rclone will prompt for a password when one is needed and none has been configured.

ssh-agent on macOS

Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, eg

eval `ssh-agent -s` && ssh-add -A

And then at the end of the session

eval `ssh-agent -k`

These commands can be used in scripts of course.

Modified time

Modified times are stored on the server to 1 second precision.

Modified times are used in syncing and are fully supported.

Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your RClone backend configuration to disable this behaviour.
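
A config entry with this check disabled might look like this (a sketch; the host and remote name are illustrative):

[remote]
type = sftp
host = example.com
set_modtime = false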

Standard Options

Here are the standard options specific to sftp (SSH/SFTP Connection).

--sftp-host

SSH host to connect to

--sftp-user

SSH username, leave blank for current username, ncw

--sftp-port

SSH port, leave blank to use default (22)

--sftp-pass

SSH password, leave blank to use ssh-agent.

NB Input to this must be obscured - see rclone obscure.

--sftp-key-pem

Raw PEM-encoded private key. If specified, this will override the key_file parameter.

--sftp-key-file

Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.

Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

--sftp-key-file-pass

The passphrase to decrypt the PEM-encoded private key file.

Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can't be used.

NB Input to this must be obscured - see rclone obscure.

--sftp-key-use-agent

When set, this forces the use of the ssh-agent.

When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is requested from the ssh-agent. This allows you to avoid Too many authentication failures for *username* errors when the ssh-agent contains many keys.

--sftp-use-insecure-cipher

Enable the use of insecure ciphers and key exchange methods.

This enables the use of the following insecure ciphers and key exchange methods:

- aes128-cbc
- aes192-cbc
- aes256-cbc
- 3des-cbc
- diffie-hellman-group-exchange-sha256
- diffie-hellman-group-exchange-sha1

Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.

--sftp-disable-hashcheck

Disable the execution of SSH commands to determine if remote file hashing is available. Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
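
It can be set in the config file or supplied on the command line, eg (the remote name and paths are illustrative):

rclone copy --sftp-disable-hashcheck /home/local/directory remote:directory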

Advanced Options

Here are the advanced options specific to sftp (SSH/SFTP Connection).

--sftp-ask-password

Allow asking for SFTP password when needed.

If this is set and no password is supplied then rclone will:

- ask for a password
- not contact the ssh agent

--sftp-path-override

Override path used by SSH connection.

This allows checksum calculation when SFTP and SSH paths are different. This issue affects, among others, Synology NAS boxes.

Shared folders can be found in directories representing volumes

rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory

Home directory can be found in a shared folder called "home"

rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory

--sftp-set-modtime

Set the modified time on the remote if set.

--sftp-md5sum-command

The command used to read md5 hashes. Leave blank for autodetect.

--sftp-sha1sum-command

The command used to read sha1 hashes. Leave blank for autodetect.

--sftp-skip-links

Set to skip any symlinks and any other non regular files.

--sftp-subsystem

Specifies the SSH2 subsystem on the remote host.

--sftp-server-command

Specifies the path or command to run a sftp server on the remote host.

The subsystem option is ignored when server_command is defined.

Limitations

SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming.

SFTP also supports about if the same login has shell access and df is in the remote's PATH. about will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about will fail if it does not have shell access or if df is not in the remote's PATH.

Note that on some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea.

The only ssh agent supported under Windows is PuTTY's pageant.

The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found in this paper (http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).

SFTP isn't supported under plan9 until this issue is fixed.

Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth

Note that --timeout isn't supported (but --contimeout is).

C14

C14 is supported through the SFTP backend.

See C14's documentation

rsync.net

rsync.net is supported through the SFTP backend.

See rsync.net's documentation of rclone examples.

SugarSync

SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.

The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Sugarsync
   \ "sugarsync"
[snip]
Storage> sugarsync
** See help for sugarsync backend at: https://rclone.org/sugarsync/ **

Sugarsync App ID.
Leave blank to use rclone's.
Enter a string value. Press Enter for the default ("").
app_id> 
Sugarsync Access Key ID.
Leave blank to use rclone's.
Enter a string value. Press Enter for the default ("").
access_key_id> 
Sugarsync Private Access Key
Leave blank to use rclone's.
Enter a string value. Press Enter for the default ("").
private_access_key> 
Permanently delete files if true
otherwise put them in the deleted files.
Enter a boolean value (true or false). Press Enter for the default ("false").
hard_delete> 
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
Username (email address)> nick@craig-wood.com
Your Sugarsync password is only required during setup and will not be stored.
password:
--------------------
[remote]
type = sugarsync
refresh_token = https://api.sugarsync.com/app-authorization/XXXXXXXXXXXXXXXXXX
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Note that the config asks for your email and password but doesn't store them; it only uses them to get the initial token.

Once configured you can then use rclone like this,

List directories (sync folders) in top level of your SugarSync

rclone lsd remote:

List all the files in your SugarSync folder "Test"

rclone ls remote:Test

To copy a local directory to a SugarSync folder called backup

rclone copy /home/source remote:backup

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

NB you can't create files in the top level folder; you have to create a folder first, which rclone will create as a "Sync Folder" with SugarSync.

Modified time and hashes

SugarSync does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work as rclone can read the time files were uploaded.

Restricted filename characters

SugarSync replaces the default restricted characters set except for DEL.

Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

Deleting files

Deleted files will be moved to the "Deleted items" folder by default.

However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.
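
For example (the path is illustrative):

rclone delete --sugarsync-hard-delete remote:Test/old-files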

Standard Options

Here are the standard options specific to sugarsync (Sugarsync).

--sugarsync-app-id

Sugarsync App ID.

Leave blank to use rclone's.

--sugarsync-access-key-id

Sugarsync Access Key ID.

Leave blank to use rclone's.

--sugarsync-private-access-key

Sugarsync Private Access Key

Leave blank to use rclone's.

--sugarsync-hard-delete

Permanently delete files if true otherwise put them in the deleted files.

Advanced Options

Here are the advanced options specific to sugarsync (Sugarsync).

--sugarsync-refresh-token

Sugarsync refresh token

Leave blank normally, will be auto configured by rclone.

--sugarsync-authorization

Sugarsync authorization

Leave blank normally, will be auto configured by rclone.

--sugarsync-authorization-expiry

Sugarsync authorization expiry

Leave blank normally, will be auto configured by rclone.

--sugarsync-user

Sugarsync user

Leave blank normally, will be auto configured by rclone.

--sugarsync-root-id

Sugarsync root id

Leave blank normally, will be auto configured by rclone.

--sugarsync-deleted-id

Sugarsync deleted folder id

Leave blank normally, will be auto configured by rclone.

--sugarsync-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Tardigrade

Tardigrade is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner.

Setup

To make a new Tardigrade configuration you need one of the following:

* Access Grant that someone else shared with you.
* API Key of a Tardigrade project you are a member of.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

Setup with access grant

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Tardigrade Decentralized Cloud Storage
   \ "tardigrade"
[snip]
Storage> tardigrade
** See help for tardigrade backend at: https://rclone.org/tardigrade/ **

Choose an authentication method.
Enter a string value. Press Enter for the default ("existing").
Choose a number from below, or type in your own value
 1 / Use an existing access grant.
   \ "existing"
 2 / Create a new access grant from satellite address, API key, and passphrase.
   \ "new"
provider> existing
Access Grant.
Enter a string value. Press Enter for the default ("").
access_grant> your-access-grant-received-by-someone-else
Remote config
--------------------
[remote]
type = tardigrade
access_grant = your-access-grant-received-by-someone-else
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Setup with API key and passphrase

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Tardigrade Decentralized Cloud Storage
   \ "tardigrade"
[snip]
Storage> tardigrade
** See help for tardigrade backend at: https://rclone.org/tardigrade/ **

Choose an authentication method.
Enter a string value. Press Enter for the default ("existing").
Choose a number from below, or type in your own value
 1 / Use an existing access grant.
   \ "existing"
 2 / Create a new access grant from satellite address, API key, and passphrase.
   \ "new"
provider> new
Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
Enter a string value. Press Enter for the default ("us-central-1.tardigrade.io").
Choose a number from below, or type in your own value
 1 / US Central 1
   \ "us-central-1.tardigrade.io"
 2 / Europe West 1
   \ "europe-west-1.tardigrade.io"
 3 / Asia East 1
   \ "asia-east-1.tardigrade.io"
satellite_address> 1
API Key.
Enter a string value. Press Enter for the default ("").
api_key> your-api-key-for-your-tardigrade-project
Encryption Passphrase. To access existing objects enter passphrase used for uploading.
Enter a string value. Press Enter for the default ("").
passphrase> your-human-readable-encryption-passphrase
Remote config
--------------------
[remote]
type = tardigrade
satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777
api_key = your-api-key-for-your-tardigrade-project
passphrase = your-human-readable-encryption-passphrase
access_grant = the-access-grant-generated-from-the-api-key-and-passphrase
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Usage

Paths are specified as remote:bucket (or remote: for the lsf command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

Once configured you can then use rclone like this.

Create a new bucket

Use the mkdir command to create a new bucket, e.g. bucket.

rclone mkdir remote:bucket

List all buckets

Use the lsf command to list all buckets.

rclone lsf remote:

Note the colon (:) character at the end of the command line.

Delete a bucket

Use the rmdir command to delete an empty bucket.

rclone rmdir remote:bucket

Use the purge command to delete a non-empty bucket with all its content.

rclone purge remote:bucket

Upload objects

Use the copy command to upload an object.

rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/

The --progress flag is for displaying progress information. Remove it if you don't need this information.

Use a folder in the local path to upload all its objects.

rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/

Only modified files will be copied.

List objects

Use the ls command to recursively list all objects in a bucket.

rclone ls remote:bucket

Add the folder to the remote path to recursively list all objects in that folder.

rclone ls remote:bucket/path/to/dir/

Use the lsf command to list all objects in a bucket or a folder non-recursively.

rclone lsf remote:bucket/path/to/dir/

Download objects

Use the copy command to download an object.

rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/

The --progress flag is for displaying progress information. Remove it if you don't need this information.

Use a folder in the remote path to download all its objects.

rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/

Delete objects

Use the deletefile command to delete a single object.

rclone deletefile remote:bucket/path/to/dir/file.ext

Use the delete command to delete all objects in a folder.

rclone delete remote:bucket/path/to/dir/

Use the size command to print the total size of objects in a bucket or a folder.

rclone size remote:bucket/path/to/dir/

Sync two Locations

Use the sync command to sync the source to the destination, changing the destination only, deleting any excess files.

rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/

The --progress flag is for displaying progress information. Remove it if you don't need this information.

Since this can cause data loss, test first with the --dry-run flag to see exactly what would be copied and deleted.

The sync can also be done from Tardigrade to the local file system.

rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/

Or between two Tardigrade buckets.

rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/

Or even between another cloud storage and Tardigrade.

rclone sync -i --progress s3:bucket/path/to/dir/ tardigrade:bucket/path/to/dir/

Standard Options

Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage).

--tardigrade-provider

Choose an authentication method.

--tardigrade-access-grant

Access Grant.

--tardigrade-satellite-address

Satellite Address. Custom satellite address should match the format: <nodeid>@<address>:<port>.

--tardigrade-api-key

API Key.

--tardigrade-passphrase

Encryption Passphrase. To access existing objects enter passphrase used for uploading.

Union

The union remote provides a unification similar to UnionFS using other remotes.

Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.

During the initial setup with rclone config you will specify the upstream remotes as a space separated list. The upstream remotes can be either local paths or other remotes.

The attributes :ro and :nc can be attached to the end of the path to tag the remote as read only or no create, eg remote:directory/subdirectory:ro or remote:directory/subdirectory:nc.
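
A config entry using these attributes might look something like this (a sketch; the remote names are illustrative):

[backup]
type = union
upstreams = mydrive:private/backup mydrive2:/backup:ro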

Subfolders can be used in upstream remotes. Assume a union remote named backup with the remotes mydrive:private/backup and mydrive2:/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop and rclone mkdir mydrive2:/backup/desktop.

There will be no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop and rclone mkdir mydrive2:/backup/../desktop.

Behavior / Policies

The behavior of union backend is inspired by trapexit/mergerfs. All functions are grouped into 3 categories: action, create and search. These functions and categories can be assigned a policy which dictates what file or directory is chosen when performing that behavior. Any policy can be assigned to a function or category though some may not be very useful in practice. For instance: rand (random) may be useful for file creation (create) but could lead to very odd behavior if used for delete when there is more than one copy of the file.

Function / Category classifications

Category Description Functions
action Writing to an existing file move, rmdir, rmdirs, delete, purge and copy, sync (as destination when the file exists)
create Creating a non-existing file copy, sync (as destination when the file does not exist)
search Reading and listing a file ls, lsd, lsl, cat, md5sum, sha1sum and copy, sync (as source)
N/A size, about

Path Preservation

Policies, as described below, are of two basic types: path preserving and non-path preserving.

All policies which start with ep (epff, eplfs, eplus, epmfs, eprand) are path preserving. ep stands for existing path.

A path preserving policy will only consider upstreams where the relative path being accessed already exists.

When using non-path preserving policies paths will be created in target upstreams as necessary.

Quota Relevant Policies

Some policies rely on quota information. These policies should be used only if your upstreams support the respective quota fields.

Policy Required Field
lfs, eplfs Free
mfs, epmfs Free
lus, eplus Used
lno, eplno Objects

To check if your upstream supports the field, run rclone about remote: [flags] and see if the required field exists.
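
For example (a sketch; the remote name and values are illustrative, and the Objects line only appears on upstreams that report it):

rclone about remote:
Total:   17G
Used:    7.444G
Free:    1.315G
Objects: 50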

Filters

Policies basically search upstream remotes and create a list of files / paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting but filtering is mostly uniform as described below.

If all remotes are filtered an error will be returned.

Policy descriptions

The policy definitions are inspired by trapexit/mergerfs but are not exactly the same. Some policy definitions differ because remote file systems have much higher latency.

Policy Description
all Search category: same as epall. Action category: same as epall. Create category: act on all upstreams.
epall (existing path, all) Search category: Given this order configured, act on the first one found where the relative path exists. Action category: apply to all found. Create category: act on all upstreams where the relative path exists.
epff (existing path, first found) Act on the first one found, by the time upstreams reply, where the relative path exists.
eplfs (existing path, least free space) Of all the upstreams on which the relative path exists choose the one with the least free space.
eplus (existing path, least used space) Of all the upstreams on which the relative path exists choose the one with the least used space.
eplno (existing path, least number of objects) Of all the upstreams on which the relative path exists choose the one with the least number of objects.
epmfs (existing path, most free space) Of all the upstreams on which the relative path exists choose the one with the most free space.
eprand (existing path, random) Calls epall and then randomizes. Returns only one upstream.
ff (first found) Search category: same as epff. Action category: same as epff. Create category: Act on the first one found by the time upstreams reply.
lfs (least free space) Search category: same as eplfs. Action category: same as eplfs. Create category: Pick the upstream with the least available free space.
lus (least used space) Search category: same as eplus. Action category: same as eplus. Create category: Pick the upstream with the least used space.
lno (least number of objects) Search category: same as eplno. Action category: same as eplno. Create category: Pick the upstream with the least number of objects.
mfs (most free space) Search category: same as epmfs. Action category: same as epmfs. Create category: Pick the upstream with the most available free space.
newest Pick the file / directory with the largest mtime.
rand (random) Calls all and then randomizes. Returns only one upstream.

Setup

Here is an example of how to make a union called remote for local folders. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Union merges the contents of several remotes
   \ "union"
[snip]
Storage> union
List of space separated upstreams.
Can be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc.
Enter a string value. Press Enter for the default ("").
upstreams>
Policy to choose upstream on ACTION class.
Enter a string value. Press Enter for the default ("epall").
action_policy>
Policy to choose upstream on CREATE class.
Enter a string value. Press Enter for the default ("epmfs").
create_policy>
Policy to choose upstream on SEARCH class.
Enter a string value. Press Enter for the default ("ff").
search_policy>
Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.
Enter a signed integer. Press Enter for the default ("120").
cache_time>
Remote config
--------------------
[remote]
type = union
upstreams = C:\dir1 C:\dir2 C:\dir3
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote               union

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

Once configured you can then use rclone like this,

List directories in top level in C:\dir1, C:\dir2 and C:\dir3

rclone lsd remote:

List all the files in C:\dir1, C:\dir2 and C:\dir3

rclone ls remote:

Copy another local directory to the union directory called source, which will be placed into C:\dir3

rclone copy C:\source remote:source

Standard Options

Here are the standard options specific to union (Union merges the contents of several upstream fs).

--union-upstreams

List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.

--union-action-policy

Policy to choose upstream on ACTION category.

--union-create-policy

Policy to choose upstream on CREATE category.

--union-search-policy

Policy to choose upstream on SEARCH category.

--union-cache-time

Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.

WebDAV

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Webdav
   \ "webdav"
[snip]
Storage> webdav
URL of http host to connect to
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "https://example.com"
url> https://example.com/remote.php/webdav/
Name of the Webdav site/service/software you are using
Choose a number from below, or type in your own value
 1 / Nextcloud
   \ "nextcloud"
 2 / Owncloud
   \ "owncloud"
 3 / Sharepoint
   \ "sharepoint"
 4 / Other site/service or software
   \ "other"
vendor> 1
User name
user> user
Password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
Bearer token instead of user/pass (eg a Macaroon)
bearer_token>
Remote config
--------------------
[remote]
type = webdav
url = https://example.com/remote.php/webdav/
vendor = nextcloud
user = user
pass = *** ENCRYPTED ***
bearer_token =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Once configured you can then use rclone like this,

List directories in top level of your WebDAV

rclone lsd remote:

List all the files in your WebDAV

rclone ls remote:

To copy a local directory to a WebDAV directory called backup

rclone copy /home/source remote:backup

Modified time and hashes

Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.

Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.

Standard Options

Here are the standard options specific to webdav (Webdav).

--webdav-url

URL of http host to connect to

--webdav-vendor

Name of the Webdav site/service/software you are using

--webdav-user

User name

--webdav-pass

Password.

NB Input to this must be obscured - see rclone obscure.

--webdav-bearer-token

Bearer token instead of user/pass (eg a Macaroon)

Advanced Options

Here are the advanced options specific to webdav (Webdav).

--webdav-bearer-token-command

Command to run to get a bearer token

Provider notes

See below for notes on specific providers.

Owncloud

Click on the settings cog in the bottom right of the page and this will show the WebDAV URL that rclone needs in the config step. It will look something like https://example.com/remote.php/webdav/.

Owncloud supports modified times using the X-OC-Mtime header.

Nextcloud

This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (rcat) whereas Owncloud does. This may be fixed in the future.

Sharepoint

Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education Accounts. This feature is only needed for a few of these Accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner (github#1975).

This means that these accounts can't be added using the official API (other Accounts should work with the "onedrive" option). However, it is possible to access them using webdav.

To use a sharepoint remote with rclone, add it like this: First, you need to get your remote's URL:

- Open your OneDrive in a web browser or sign in
- Now take a look at your address bar; the URL should look like this: https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx

You'll only need this URL up to the email address. After that, you'll most likely want to add "/Documents". That subdirectory contains the actual data stored on your OneDrive.

Add the remote to rclone like this: Configure the url as https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents and use your normal account email and password for user and pass. If you have 2FA enabled, you have to generate an app password. Set the vendor to sharepoint.

Your config file should look like this:

[sharepoint]
type = webdav
url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
vendor = other
user = YourEmailAddress
pass = encryptedpassword

Required Flags for SharePoint

As SharePoint does some special things with uploaded documents, you won't be able to use the document's size or the document's hash to tell whether a file has been changed since the upload, or which file is newer.

For rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure rclone uses the "Last Modified" datetime property to compare your documents:

--ignore-size --ignore-checksum --update
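
Putting it together, a sync might then look like this (a sketch; the paths are illustrative):

rclone sync -i --ignore-size --ignore-checksum --update /home/local/documents sharepoint:documents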

dCache

dCache is a storage system that supports many protocols and authentication/authorisation schemes. For WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos, and various bearer tokens, including Macaroons and OpenID-Connect access tokens.

Configure as normal using the other type. Don't enter a username or password, instead enter your Macaroon as the bearer_token.

The config will end up looking something like this.

[dcache]
type = webdav
url = https://dcache...
vendor = other
user =
pass =
bearer_token = your-macaroon

There is a script that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file.

Macaroons may also be obtained from the dCacheView web-browser/JavaScript client that comes with dCache.

OpenID-Connect

dCache also supports authenticating with OpenID-Connect access tokens. OpenID-Connect is a protocol (based on OAuth 2.0) that allows services to identify users who have authenticated with some central service.

Support for OpenID-Connect in rclone is currently achieved using another software package called oidc-agent. This is a command-line tool that facilitates obtaining an access token. Once installed and configured, an access token is obtained by running the oidc-token command. The following example shows a (shortened) access token obtained from the XDC OIDC Provider.

paul@celebrimbor:~$ oidc-token XDC
eyJraWQ[...]QFXDt0
paul@celebrimbor:~$

Note Before the oidc-token command will work, the refresh token must be loaded into the oidc agent. This is done with the oidc-add command (e.g., oidc-add XDC). This is typically done once per login session. Full details on this and how to register oidc-agent with your OIDC Provider are provided in the oidc-agent documentation.

The rclone bearer_token_command configuration option is used to fetch the access token from oidc-agent.

Configure as a normal WebDAV endpoint, using the 'other' vendor, leaving the username and password empty. When prompted, choose to edit the advanced config and enter the command to get a bearer token (e.g., oidc-agent XDC).

The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the XDC OIDC Provider.

[dcache]
type = webdav
url = https://dcache.example.org/
vendor = other
bearer_token_command = oidc-token XDC

Yandex Disk

Yandex Disk is a cloud storage solution created by Yandex.

Here is an example of making a yandex configuration. First run

rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Yandex Disk
   \ "yandex"
[snip]
Storage> yandex
Yandex Client Id - leave blank normally.
client_id>
Yandex Client Secret - leave blank normally.
client_secret>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

See top level directories

rclone lsd remote:

Make a new directory

rclone mkdir remote:directory

List the contents of a directory

rclone ls remote:directory

Sync /home/local/directory to the remote path, deleting any excess files in the path.

rclone sync -i /home/local/directory remote:directory

Yandex paths may be as deep as required, eg remote:directory/subdirectory.

Modified time

Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.

MD5 checksums

MD5 checksums are natively supported by Yandex Disk.

Emptying Trash

If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

Quota information

To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.

Restricted filename characters

The default restricted characters set are replaced.

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Limitations

When uploading very large files (bigger than about 5GB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GB should be enough, so if you want to upload a 30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
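
For example, for a 30GB file (the path is illustrative):

rclone copy --timeout 60m /home/local/big-file.iso remote:backup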

Standard Options

Here are the standard options specific to yandex (Yandex Disk).

--yandex-client-id

OAuth Client Id. Leave blank normally.

--yandex-client-secret

OAuth Client Secret. Leave blank normally.

Advanced Options

Here are the advanced options specific to yandex (Yandex Disk).

--yandex-token

OAuth Access Token as a JSON blob.

--yandex-auth-url

Auth server URL. Leave blank to use the provider defaults.

--yandex-token-url

Token server url. Leave blank to use the provider defaults.

--yandex-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Local Filesystem

Local paths are specified as normal filesystem paths, eg /path/to/wherever, so

rclone sync -i /home/source /tmp/destination

Will sync /home/source to /tmp/destination

These can be configured into the config file for consistency's sake, but it is probably easier not to.

Modified time

Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.

Filenames

Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.

There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded file names. If you are using an old Linux filesystem with non UTF-8 file names (eg latin1) then you can use the convmv tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers.

If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with a quoted representation of the invalid bytes. The name gro\xdf will be transferred as gro‛DF. rclone will emit a debug message in this case (use -v to see), eg

Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"

Restricted characters

On non Windows platforms the following characters are replaced when handling file names.

Character Value Replacement
NUL 0x00 ␀
/ 0x2F ／

When running on Windows the following characters are replaced. This list is based on the Windows file naming conventions.

Character Value Replacement
NUL 0x00 ␀
SOH 0x01 ␁
STX 0x02 ␂
ETX 0x03 ␃
EOT 0x04 ␄
ENQ 0x05 ␅
ACK 0x06 ␆
BEL 0x07 ␇
BS 0x08 ␈
HT 0x09 ␉
LF 0x0A ␊
VT 0x0B ␋
FF 0x0C ␌
CR 0x0D ␍
SO 0x0E ␎
SI 0x0F ␏
DLE 0x10 ␐
DC1 0x11 ␑
DC2 0x12 ␒
DC3 0x13 ␓
DC4 0x14 ␔
NAK 0x15 ␕
SYN 0x16 ␖
ETB 0x17 ␗
CAN 0x18 ␘
EM 0x19 ␙
SUB 0x1A ␚
ESC 0x1B ␛
FS 0x1C ␜
GS 0x1D ␝
RS 0x1E ␞
US 0x1F ␟
/ 0x2F ／
" 0x22 ＂
* 0x2A ＊
: 0x3A ：
< 0x3C ＜
> 0x3E ＞
? 0x3F ？
\ 0x5C ＼
| 0x7C ｜

File names on Windows can also not end with the following characters. These only get replaced if they are the last character in the name:

Character Value Replacement
SP 0x20 ␠
. 0x2E ．

Invalid UTF-8 bytes will also be replaced, as they can't be converted to UTF-16.

Long paths on Windows

Rclone handles long paths automatically, by converting all paths to long UNC paths which allows paths up to 32,767 characters.

This is why you will see that your paths (for instance c:\files) are converted to the UNC path \\?\c:\files in the output, and \\server\share is converted to \\?\UNC\server\share.

However, in rare cases this may cause problems with buggy file system drivers like EncFS. To disable UNC conversion globally, add this to your .rclone.conf file:

[local]
nounc = true

If you want to selectively disable UNC, you can add it to a separate entry like this:

[nounc]
type = local
nounc = true

And use rclone like this:

rclone copy c:\src nounc:z:\dst

This will use UNC paths on c:\src but not on z:\dst. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.

Symlinks / Junction points

Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).

If you supply --copy-links or -L then rclone will follow the symlink and copy the pointed to file or directory. Note that this flag is incompatible with -links / -l.

This flag applies to all commands.

For example, supposing you have a directory structure like this

$ tree /tmp/a
/tmp/a
├── b -> ../b
├── expected -> ../expected
├── one
└── two
    └── three

Then you can see the difference with and without the flag like this

$ rclone ls /tmp/a
        6 one
        6 two/three

and

$ rclone -L ls /tmp/a
     4174 expected
        6 one
        6 two/three
        6 b/two
        6 b/one

Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).

If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a '.rclonelink' suffix in the remote storage.

The text file will contain the target of the symbolic link (see example).

This flag applies to all commands.

For example, supposing you have a directory structure like this

$ tree /tmp/a
/tmp/a
├── file1 -> ./file4
└── file2 -> /home/user/file3

Copying the entire directory with '-l'

$ rclone copyto -l /tmp/a/file1 remote:/tmp/a/

The remote files are created with a '.rclonelink' suffix

$ rclone ls remote:/tmp/a
       5 file1.rclonelink
      14 file2.rclonelink

The remote files will contain the target of the symbolic links

$ rclone cat remote:/tmp/a/file1.rclonelink
./file4

$ rclone cat remote:/tmp/a/file2.rclonelink
/home/user/file3

Copying them back with '-l'

$ rclone copyto -l remote:/tmp/a/ /tmp/b/

$ tree /tmp/b
/tmp/b
├── file1 -> ./file4
└── file2 -> /home/user/file3

However, if copied back without '-l'

$ rclone copyto remote:/tmp/a/ /tmp/b/

$ tree /tmp/b
/tmp/b
├── file1.rclonelink
└── file2.rclonelink

Note that this flag is incompatible with --copy-links / -L.

Restricting filesystems with --one-file-system

Normally rclone will recurse through filesystems as mounted.

However if you set --one-file-system or -x this tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.

For example if you have a directory hierarchy like this

root
├── disk1     - disk1 mounted on the root
│   └── file3 - stored on disk1
├── disk2     - disk2 mounted on the root
│   └── file4 - stored on disk2
├── file1     - stored on the root disk
└── file2     - stored on the root disk

Using rclone --one-file-system copy root remote: will only copy file1 and file2. Eg

$ rclone -q --one-file-system ls root
        0 file1
        0 file2
$ rclone -q ls root
        0 disk1/file3
        0 disk2/file4
        0 file1
        0 file2

NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.

NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will be ignored.

Standard Options

Here are the standard options specific to local (Local Disk).

--local-nounc

Disable UNC (long path names) conversion on Windows

Advanced Options

Here are the advanced options specific to local (Local Disk).

--copy-links / -L

Follow symlinks and copy the pointed to item.

--links / -l

Translate symlinks to/from regular files with a '.rclonelink' extension

--skip-links

Don't warn about skipped symlinks. This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.

--local-no-unicode-normalization

Don't apply unicode normalization to paths and filenames (Deprecated)

This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.

--local-no-check-updated

Don't check to see if the files change during upload

Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts "can't copy - source file is being updated" if the file changes during upload.

However on some file systems this modification time check may fail (eg Glusterfs #2206) so this check can be disabled with this flag.

If this flag is set, rclone will use its best efforts to transfer a file which is being updated. If the file is only having things appended to it (eg a log) then rclone will transfer the log file with the size it had the first time rclone saw it.

If the file is being modified throughout (not just appended to) then the transfer may fail with a hash check failure.

In detail, once the file has had stat() called on it for the first time we:

- Only transfer the size that stat gave
- Only checksum the size that stat gave
- Don't update the stat info for the file

--one-file-system / -x

Don't cross filesystem boundaries (unix/macOS only).

--local-case-sensitive

Force the filesystem to report itself as case sensitive.

Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.

--local-case-insensitive

Force the filesystem to report itself as case insensitive

Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.

--local-no-sparse

Disable sparse files for multi-thread downloads

On Windows platforms rclone will make sparse files when doing multi-thread downloads. This avoids long pauses on large files where the OS zeros the file. However sparse files may be undesirable as they cause disk fragmentation and can be slow to work with.

--local-no-set-modtime

Disable setting modtime

Normally rclone updates modification time of files after they are done uploading. This can cause permissions issues on Linux platforms when the user rclone is running as does not own the file uploaded, such as when copying to a CIFS mount owned by another user. If this option is enabled, rclone will no longer update the modtime after copying a file.

--local-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

Backend commands

Here are the commands specific to the local backend.

Run them with

rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the "rclone backend" command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

noop

A null operation for testing backend commands

rclone backend noop remote: [options] [<arguments>+]

This is a test command which has some options you can try to change the output.

Options:

- "echo": echo the input arguments
- "error": return an error based on option value

Changelog

v1.53.3 - 2020-11-19

See commits

v1.53.2 - 2020-10-26

See commits

v1.53.1 - 2020-09-13

See commits

v1.53.0 - 2020-09-02

See commits

v1.52.3 - 2020-08-07

See commits

v1.52.2 - 2020-06-24

See commits

v1.52.1 - 2020-06-10

See commits

v1.52.0 - 2020-05-27

Special thanks to Martin Michlmayr for proof reading and correcting all the docs and Edward Barker for helping re-write the front page.

See commits

v1.51.0 - 2020-02-01

v1.50.2 - 2019-11-19

v1.50.1 - 2019-11-02

v1.50.0 - 2019-10-26

v1.49.5 - 2019-10-05

v1.49.4 - 2019-09-29

v1.49.3 - 2019-09-15

v1.49.2 - 2019-09-08

v1.49.1 - 2019-08-28

v1.49.0 - 2019-08-26

v1.48.0 - 2019-06-15

v1.47.0 - 2019-04-13

v1.46 - 2019-02-09

v1.45 - 2018-11-24

v1.44 - 2018-10-15

v1.43.1 - 2018-09-07

Point release to fix hubic and azureblob backends.

v1.43 - 2018-09-01

v1.42 - 2018-06-16

v1.41 - 2018-04-28

v1.40 - 2018-03-19

v1.39 - 2017-12-23

v1.38 - 2017-09-30

v1.37 - 2017-07-22

v1.36 - 2017-03-18

v1.35 - 2017-01-02

v1.34 - 2016-11-06

v1.33 - 2016-08-24

v1.32 - 2016-07-13

v1.31 - 2016-07-13

v1.30 - 2016-06-18

v1.29 - 2016-04-18

v1.28 - 2016-03-01

v1.27 - 2016-01-31

v1.26 - 2016-01-02

v1.25 - 2015-11-14

v1.24 - 2015-11-07

v1.23 - 2015-10-03

v1.22 - 2015-09-28

v1.21 - 2015-09-22

v1.20 - 2015-09-15

v1.19 - 2015-08-28

v1.18 - 2015-08-17

v1.17 - 2015-06-14

v1.16 - 2015-06-09

v1.15 - 2015-06-06

v1.14 - 2015-05-21

v1.13 - 2015-05-10

v1.12 - 2015-03-15

v1.11 - 2015-03-04

v1.10 - 2015-02-12

v1.09 - 2015-02-07

v1.08 - 2015-02-04

v1.07 - 2014-12-23

v1.06 - 2014-12-12

v1.05 - 2014-08-09

v1.04 - 2014-07-21

v1.03 - 2014-07-20

v1.02 - 2014-07-19

v1.01 - 2014-07-04

v1.00 - 2014-07-03

v0.99 - 2014-06-26

v0.98 - 2014-05-30

v0.97 - 2014-05-05

v0.96 - 2014-04-24

v0.95 - 2014-03-28

v0.94 - 2014-03-27

v0.93 - 2014-03-16

v0.92 - 2014-03-15

v0.91 - 2014-03-15

v0.90 - 2013-06-27

v0.00 - 2012-11-18

Bugs and Limitations

Limitations

Directory timestamps aren't preserved

Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.

Rclone struggles with millions of files in a directory/bucket

Currently rclone loads each directory/bucket entirely into memory before using it. Since each rclone object takes 0.5k-1k of memory this can take a very long time and use a large amount of memory.

Millions of files in a directory tends to occur on bucket-based remotes (e.g. S3 buckets) since those remotes do not segregate subdirectories within the bucket.

Bucket based remotes and folders

Bucket based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them which means that empty directories on a bucket based remote will tend to disappear.

Some software creates empty keys ending in / as directory markers. Rclone doesn't do this as it potentially creates more objects and costs more. This ability may be added in the future (probably via a flag/option).

Bugs

Bugs are stored in rclone's GitHub project:

Frequently Asked Questions

Do all cloud storage systems support all rclone commands

Yes they do. All the rclone commands (eg sync, copy etc) will work on all the remote storage systems.

Can I copy the config from one machine to another

Sure! Rclone stores all of its config in a single file. If you want to find this file, run rclone config file which will tell you where it is.

See the remote setup docs for more info.

How do I configure rclone on a remote / headless box with no browser?

This has now been documented in its own remote setup page.

Can rclone sync directly from drive to s3

Rclone can sync between two remote cloud storage systems just fine.

Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth.

The syncs would be incremental (on a file by file basis).

Eg

rclone sync -i drive:Folder s3:bucket

Using rclone from multiple locations at the same time

You can use rclone from multiple places at the same time if you choose a different subdirectory for the output, eg

Server A> rclone sync -i /tmp/whatever remote:ServerA
Server B> rclone sync -i /tmp/whatever remote:ServerB

If you sync to the same directory then you should use rclone copy otherwise the two instances of rclone may delete each other's files, eg

Server A> rclone copy /tmp/whatever remote:Backup
Server B> rclone copy /tmp/whatever remote:Backup

The file names you upload from Server A and Server B should be different in this case, otherwise some file systems (eg Drive) may make duplicates.

Why doesn't rclone support partial transfers / binary diffs like rsync?

Rclone stores each file you transfer as a native object on the remote cloud storage system. This means that you can see the files you upload as expected using alternative access methods (eg using the Google Drive web interface). There is a 1:1 mapping between files on your hard disk and objects created in the cloud storage system.

None of the cloud storage systems I've come across yet support partially uploading an object. You can't take an existing object and change some bytes in the middle of it.

It would be possible to make a sync system which stored binary diffs instead of whole objects like rclone does, but that would break the 1:1 mapping of files on your hard disk to objects in the remote cloud storage system.

All the cloud storage systems support partial downloads of content, so it would be possible to make partial downloads work. However to make this work efficiently this would require storing a significant amount of metadata, which breaks the desired 1:1 mapping of files to objects.

Can rclone do bi-directional sync?

No, not at present. rclone only does uni-directional sync from A -> B. It may do so in the future though since it has all the primitives - it just requires writing the algorithm to do it.

Can I use rclone with an HTTP proxy?

Yes. rclone will follow the standard environment variables for proxies, similar to cURL and other programs.

In general the variables are called http_proxy (for services reached over http) and https_proxy (for services reached over https). Most public services will be using https, but you may wish to set both.

The content of the variable is protocol://server:port. The protocol value is the one used to talk to the proxy server itself, and is commonly either http or socks5.

Slightly annoyingly, there is no standard for the name; some applications may use http_proxy but another one HTTP_PROXY. The Go libraries used by rclone will try both variations, but you may wish to set all possibilities. So, on Linux, you may end up with code similar to

export http_proxy=http://proxyserver:12345
export https_proxy=$http_proxy
export HTTP_PROXY=$http_proxy
export HTTPS_PROXY=$http_proxy

The NO_PROXY variable allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance "foo.com" also matches "bar.foo.com".

e.g.

export no_proxy=localhost,127.0.0.0/8,my.host.name
export NO_PROXY=$no_proxy

Note that the ftp backend does not support ftp_proxy yet.

Rclone gives x509: failed to load system roots and no roots provided error

This means that rclone can't find the SSL root certificates. Likely you are running rclone on a NAS with a cut-down Linux OS, or possibly on Solaris.

Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.

"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
"/etc/pki/tls/certs/ca-bundle.crt",   // Fedora/RHEL
"/etc/ssl/ca-bundle.pem",             // OpenSUSE
"/etc/pki/tls/cacert.pem",            // OpenELEC

So doing something like this should fix the problem. It also sets the time, which is important for SSL to work properly.

mkdir -p /etc/ssl/certs/
curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
ntpclient -s -h pool.ntp.org

The two environment variables SSL_CERT_FILE and SSL_CERT_DIR, mentioned in the x509 package, offer an additional way to supply the SSL root certificates.
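
For example, assuming you downloaded the certificate bundle to /etc/ssl/certs/ca-certificates.crt as above, something like this should work:

export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt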

Note that you may need to add the --insecure option to the curl command line if it doesn't work without.

curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt

Rclone gives Failed to load config file: function not implemented error

Likely this means that you are running rclone on a Linux kernel version not supported by the Go runtime, ie earlier than version 2.6.23.

See the system requirements section in the go install docs for full details.
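
You can check which kernel version you are running with, eg

uname -r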

All my uploaded docx/xlsx/pptx files appear as archive/zip

This is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix this is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats.

tcp lookup some.domain.com no such host

This happens when rclone cannot resolve a domain. Please check that your DNS setup is generally working, e.g.

# both should print a long list of possible IP addresses
dig www.googleapis.com          # resolve using your default DNS
dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server

If you are using systemd-resolved (default on Arch Linux), ensure it is at version 233 or higher. Previous releases contain a bug which prevents some domains from being resolved properly.
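
To check which systemd version you have (systemd-resolved is versioned together with systemd), something like this should work:

systemctl --version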

Additionally, the GODEBUG=netdns= environment variable can be used to influence which resolver Go uses, which can also resolve certain issues with DNS resolution. See the name resolution section in the go docs.
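
For example, to force the pure Go resolver for a single command (remote: is a placeholder for one of your configured remotes):

GODEBUG=netdns=go rclone lsd remote:

Use GODEBUG=netdns=cgo instead to force the cgo resolver.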

The total size reported in the stats for a sync is wrong and keeps changing

It is likely you have more than 10,000 files that need to be synced. By default rclone only looks 10,000 files ahead in a sync so as not to use up too much memory, which means the reported total grows as more files are discovered. You can change this default with the --max-backlog flag.
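
For example, to let rclone queue up to 100,000 files ahead at the cost of extra memory (/path/to/src and remote:dst are placeholders):

rclone sync -i /path/to/src remote:dst --max-backlog 100000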

Rclone is using too much memory or appears to have a memory leak

Rclone is written in Go which uses a garbage collector. The default settings for the garbage collector mean that it runs when the heap size has doubled.

However it is possible to tune the garbage collector to use less memory by setting GOGC to a lower value, say export GOGC=20. This will make the garbage collector work harder, reducing memory size at the expense of CPU usage.
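
For example, to run a single sync with the garbage collector working harder (the paths are placeholders):

GOGC=20 rclone sync -i /path/to/src remote:dst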

The most common cause of rclone using lots of memory is a single directory with thousands or millions of files in it. Rclone has to load this entirely into memory as rclone objects. Each rclone object takes 0.5 kB - 1 kB of memory, so a directory of 1,000,000 files may need roughly 0.5 GB - 1 GB of RAM for the listing alone.

License

This is free software under the terms of the MIT license (check the COPYING file included with the source code).

Copyright (C) 2019 by Nick Craig-Wood https://www.craig-wood.com/nick/

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

Authors

Contributors

{{< rem email addresses removed from here need to be added to bin/.ignore-emails to make sure update-authors.py doesn't immediately put them back in again. >}}

Contact the rclone project

Forum

Forum for questions and general discussion:

https://forum.rclone.org

GitHub repository

The project's repository is located at:

https://github.com/rclone/rclone

There you can file bug reports or contribute with pull requests.

Twitter

You can also follow me on twitter for rclone announcements:

@njcw

Email

Or if all else fails, or you want to ask something private or confidential, email Nick Craig-Wood. Please don't email me requests for help - those are better directed to the forum. Thanks!

rclone-1.53.3/MANUAL.md000066400000000000000000040062211375552240400144000ustar00rootroot00000000000000% rclone(1) User Manual % Nick Craig-Wood % Nov 19, 2020 # Rclone syncs your files to cloud storage rclone logo - [About rclone](#about) - [What can rclone do for you?](#what) - [What features does rclone have?](#features) - [What providers does rclone support?](#providers) - [Download](https://rclone.org/downloads/) - [Install](https://rclone.org/install/) - [Donate.](https://rclone.org/donate/) ## About rclone {#about} Rclone is a command line program to manage files on cloud storage. It is a feature rich alternative to cloud vendors' web storage interfaces. [Over 40 cloud storage products](#providers) support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols. Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support, and `--dry-run` protection. It is used at the command line, in scripts or via its [API](/rc). Users call rclone *"The Swiss army knife of cloud storage"*, and *"Technology indistinguishable from magic"*. Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth; intermittent connections, or subject to quota can be restarted, from the last good file transferred. You can [check](https://rclone.org/commands/rclone_check/) the integrity of your files. Where possible, rclone employs server side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk. Virtual backends wrap local and cloud file systems to apply [encryption](https://rclone.org/crypt/), [caching](https://rclone.org/cache/), [chunking](https://rclone.org/chunker/) and [joining](https://rclone.org/union/). Rclone [mounts](https://rclone.org/commands/rclone_mount/) any local, cloud or virtual filesystem as a disk on Windows, macOS, linux and FreeBSD, and also serves these over [SFTP](https://rclone.org/commands/rclone_serve_sftp/), [HTTP](https://rclone.org/commands/rclone_serve_http/), [WebDAV](https://rclone.org/commands/rclone_serve_webdav/), [FTP](https://rclone.org/commands/rclone_serve_ftp/) and [DLNA](https://rclone.org/commands/rclone_serve_dlna/). Rclone is mature, open source software originally inspired by rsync and written in [Go](https://golang.org). The friendly support community are familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos. include rclone. For the latest version [downloading from rclone.org](https://rclone.org/downloads/) is recommended. Rclone is widely used on Linux, Windows and Mac. Third party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API. Rclone does the heavy lifting of communicating with cloud storage. ## What can rclone do for you? 
{#what} Rclone helps you: - Backup (and encrypt) files to cloud storage - Restore (and decrypt) files from cloud storage - Mirror cloud data to other cloud services or locally - Migrate data to cloud, or between cloud storage vendors - Mount multiple, encrypted, cached or diverse cloud storage as a disk - Analyse and account for data held on cloud storage using [lsf](https://rclone.org/commands/rclone_lsf/), [ljson](https://rclone.org/commands/rclone_lsjson/), [size](https://rclone.org/commands/rclone_size/), [ncdu](https://rclone.org/commands/rclone_ncdu/) - [Union](https://rclone.org/union/) file systems together to present multiple local and/or cloud file systems as one ## Features {#features} - Transfers - MD5, SHA1 hashes are checked at all times for file integrity - Timestamps are preserved on files - Operations can be restarted at any time - Can be to and from network, eg two different cloud providers - Can use multi-threaded downloads to local disk - [Copy](https://rclone.org/commands/rclone_copy/) new or changed files to cloud storage - [Sync](https://rclone.org/commands/rclone_sync/) (one way) to make a directory identical - [Move](https://rclone.org/commands/rclone_move/) files to cloud storage deleting the local after verification - [Check](https://rclone.org/commands/rclone_check/) hashes and for missing/extra files - [Mount](https://rclone.org/commands/rclone_mount/) your cloud storage as a network disk - [Serve](https://rclone.org/commands/rclone_serve/) local or remote files over [HTTP](https://rclone.org/commands/rclone_serve_http/)/[WebDav](https://rclone.org/commands/rclone_serve_webdav/)/[FTP](https://rclone.org/commands/rclone_serve_ftp/)/[SFTP](https://rclone.org/commands/rclone_serve_sftp/)/[dlna](https://rclone.org/commands/rclone_serve_dlna/) - Experimental [Web based GUI](https://rclone.org/gui/) ## Supported providers {#providers} (There are many others, built on standard protocols such as WebDAV or S3, that work out of the box.) - 1Fichier - Alibaba Cloud (Aliyun) Object Storage System (OSS) - Amazon Drive - Amazon S3 - Backblaze B2 - Box - Ceph - Citrix ShareFile - C14 - DigitalOcean Spaces - Dreamhost - Dropbox - FTP - Google Cloud Storage - Google Drive - Google Photos - HTTP - Hubic - Jottacloud - IBM COS S3 - Koofr - Mail.ru Cloud - Memset Memstore - Mega - Memory - Microsoft Azure Blob Storage - Microsoft OneDrive - Minio - Nextcloud - OVH - OpenDrive - OpenStack Swift - Oracle Cloud Storage - ownCloud - pCloud - premiumize.me - put.io - QingStor - Rackspace Cloud Files - rsync.net - Scaleway - Seafile - SFTP - StackPath - SugarSync - Tardigrade - Tencent Cloud Object Storage (COS) - Wasabi - WebDAV - Yandex Disk - The local filesystem Links * [Home page](https://rclone.org/) * [GitHub project page for source and bug tracker](https://github.com/rclone/rclone) * [Rclone Forum](https://forum.rclone.org) * [Downloads](https://rclone.org/downloads/) # Install # Rclone is a Go program and comes as a single binary file. ## Quickstart ## * [Download](https://rclone.org/downloads/) the relevant binary. * Extract the `rclone` or `rclone.exe` binary from the archive * Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details. See below for some expanded Linux / macOS instructions. See the [Usage section](https://rclone.org/docs/#usage) of the docs for how to use rclone, or run `rclone -h`. 
## Script installation ## To install rclone on Linux/macOS/BSD systems, run: curl https://rclone.org/install.sh | sudo bash For beta installation, run: curl https://rclone.org/install.sh | sudo bash -s beta Note that this script checks the version of rclone installed first and won't re-download if not needed. ## Linux installation from precompiled binary ## Fetch and unpack curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip unzip rclone-current-linux-amd64.zip cd rclone-*-linux-amd64 Copy binary file sudo cp rclone /usr/bin/ sudo chown root:root /usr/bin/rclone sudo chmod 755 /usr/bin/rclone Install manpage sudo mkdir -p /usr/local/share/man/man1 sudo cp rclone.1 /usr/local/share/man/man1/ sudo mandb Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details. rclone config ## macOS installation with brew ## brew install rclone ## macOS installation from precompiled binary, using curl ## To avoid problems with macOS gatekeeper enforcing the binary to be signed and notarized it is enough to download with `curl`. Download the latest version of rclone. cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip Unzip the download and cd to the extracted folder. unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64 Move rclone to your $PATH. You will be prompted for your password. sudo mkdir -p /usr/local/bin sudo mv rclone /usr/local/bin/ (the `mkdir` command is safe to run, even if the directory already exists). Remove the leftover files. cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details. rclone config ## macOS installation from precompiled binary, using a web browser ## When downloading a binary with a web browser, the browser will set the macOS gatekeeper quarantine attribute. Starting from Catalina, when attempting to run `rclone`, a pop-up will appear saying: “rclone” cannot be opened because the developer cannot be verified. macOS cannot verify that this app is free from malware. The simplest fix is to run xattr -d com.apple.quarantine rclone ## Install with docker ## The rclone maintains a [docker image for rclone](https://hub.docker.com/r/rclone/rclone). These images are autobuilt by docker hub from the rclone source based on a minimal Alpine linux image. The `:latest` tag will always point to the latest stable release. You can use the `:beta` tag to get the latest build from master. You can also use version tags, eg `:1.49.1`, `:1.49` or `:1`. ``` $ docker pull rclone/rclone:latest latest: Pulling from rclone/rclone Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11 ... $ docker run --rm rclone/rclone:latest version rclone v1.49.1 - os/arch: linux/amd64 - go version: go1.12.9 ``` There are a few command line options to consider when starting an rclone Docker container from the rclone image. - You need to mount the host rclone config dir at `/config/rclone` into the Docker container. Due to the fact that rclone updates tokens inside its config file, and that the update process involves a file rename, you need to mount the whole host rclone config dir, not just the single host rclone config file. - You need to mount a host data dir at `/data` into the Docker container. - By default, the rclone binary inside a Docker container runs with UID=0 (root). As a result, all files created in a run will have UID=0. 
If your config and data files reside on the host with a non-root UID:GID, you need to pass these on the container start command line. - If you want to access the RC interface (either via the API or the Web UI), it is required to set the `--rc-addr` to `:5572` in order to connect to it from outside the container. An explanation about why this is necessary is present [here](https://web.archive.org/web/20200808071950/https://pythonspeed.com/articles/docker-connection-refused/). * NOTE: Users running this container with the docker network set to `host` should probably set it to listen to localhost only, with `127.0.0.1:5572` as the value for `--rc-addr` - It is possible to use `rclone mount` inside a userspace Docker container, and expose the resulting fuse mount to the host. The exact `docker run` options to do that might vary slightly between hosts. See, e.g. the discussion in this [thread](https://github.com/moby/moby/issues/9448). You also need to mount the host `/etc/passwd` and `/etc/group` for fuse to work inside the container. Here are some commands tested on an Ubuntu 18.04.3 host: ``` # config on host at ~/.config/rclone/rclone.conf # data on host at ~/data # make sure the config is ok by listing the remotes docker run --rm \ --volume ~/.config/rclone:/config/rclone \ --volume ~/data:/data:shared \ --user $(id -u):$(id -g) \ rclone/rclone \ listremotes # perform mount inside Docker container, expose result to host mkdir -p ~/data/mount docker run --rm \ --volume ~/.config/rclone:/config/rclone \ --volume ~/data:/data:shared \ --user $(id -u):$(id -g) \ --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro \ --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined \ rclone/rclone \ mount dropbox:Photos /data/mount & ls ~/data/mount kill %1 ``` ## Install from source ## Make sure you have at least [Go](https://golang.org/) 1.11 installed. [Download go](https://golang.org/dl/) if necessary. The latest release is recommended. Then git clone https://github.com/rclone/rclone.git cd rclone go build ./rclone version This will leave you a checked out version of rclone you can modify and send pull requests with. If you use `make` instead of `go build` then the rclone build will have the correct version information in it. You can also build the latest stable rclone with: go get github.com/rclone/rclone or the latest version (equivalent to the beta) with go get github.com/rclone/rclone@master These will build the binary in `$(go env GOPATH)/bin` (`~/go/bin/rclone` by default) after downloading the source to the go module cache. Note - do **not** use the `-u` flag here. This causes go to try to update the depencencies that rclone uses and sometimes these don't work with the current version of rclone. ## Installation with Ansible ## This can be done with [Stefan Weichinger's ansible role](https://github.com/stefangweichinger/ansible-rclone). Instructions 1. `git clone https://github.com/stefangweichinger/ansible-rclone.git` into your local roles-directory 2. add the role to the hosts you want rclone installed to: ``` - hosts: rclone-hosts roles: - rclone ``` Configure --------- First, you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the `--config` entry for how to find the config file and choose its location.) 
The easiest way to make the config is to run rclone with the config option: rclone config See the following for detailed instructions for * [1Fichier](https://rclone.org/fichier/) * [Alias](https://rclone.org/alias/) * [Amazon Drive](https://rclone.org/amazonclouddrive/) * [Amazon S3](https://rclone.org/s3/) * [Backblaze B2](https://rclone.org/b2/) * [Box](https://rclone.org/box/) * [Cache](https://rclone.org/cache/) * [Chunker](https://rclone.org/chunker/) - transparently splits large files for other remotes * [Citrix ShareFile](https://rclone.org/sharefile/) * [Crypt](https://rclone.org/crypt/) - to encrypt other remotes * [DigitalOcean Spaces](https://rclone.org/s3/#digitalocean-spaces) * [Dropbox](https://rclone.org/dropbox/) * [FTP](https://rclone.org/ftp/) * [Google Cloud Storage](https://rclone.org/googlecloudstorage/) * [Google Drive](https://rclone.org/drive/) * [Google Photos](https://rclone.org/googlephotos/) * [HTTP](https://rclone.org/http/) * [Hubic](https://rclone.org/hubic/) * [Jottacloud / GetSky.no](https://rclone.org/jottacloud/) * [Koofr](https://rclone.org/koofr/) * [Mail.ru Cloud](https://rclone.org/mailru/) * [Mega](https://rclone.org/mega/) * [Memory](https://rclone.org/memory/) * [Microsoft Azure Blob Storage](https://rclone.org/azureblob/) * [Microsoft OneDrive](https://rclone.org/onedrive/) * [OpenStack Swift / Rackspace Cloudfiles / Memset Memstore](https://rclone.org/swift/) * [OpenDrive](https://rclone.org/opendrive/) * [Pcloud](https://rclone.org/pcloud/) * [premiumize.me](https://rclone.org/premiumizeme/) * [put.io](https://rclone.org/putio/) * [QingStor](https://rclone.org/qingstor/) * [Seafile](https://rclone.org/seafile/) * [SFTP](https://rclone.org/sftp/) * [SugarSync](https://rclone.org/sugarsync/) * [Tardigrade](https://rclone.org/tardigrade/) * [Union](https://rclone.org/union/) * [WebDAV](https://rclone.org/webdav/) * [Yandex Disk](https://rclone.org/yandex/) * [The local filesystem](https://rclone.org/local/) Usage ----- Rclone syncs a directory tree from one storage system to another. Its syntax is like this Syntax: [options] subcommand Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg "drive:myfolder" to look at "myfolder" in Google drive. You can define as many storage paths as you like in the config file. Please use the [`-i` / `--interactive`](#interactive) flag while learning rclone to avoid accidental data loss. Subcommands ----------- rclone uses a system of subcommands. For example rclone ls remote:path # lists a remote rclone copy /local/path remote:path # copies /local/path to the remote rclone sync -i /local/path remote:path # syncs /local/path to the remote # rclone config Enter an interactive configuration session. ## Synopsis Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration. ``` rclone config [flags] ``` ## Options ``` -h, --help help for config ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone config create](https://rclone.org/commands/rclone_config_create/) - Create a new remote with name, type and options. * [rclone config delete](https://rclone.org/commands/rclone_config_delete/) - Delete an existing remote `name`. 
* [rclone config disconnect](https://rclone.org/commands/rclone_config_disconnect/) - Disconnects user from remote * [rclone config dump](https://rclone.org/commands/rclone_config_dump/) - Dump the config file as JSON. * [rclone config edit](https://rclone.org/commands/rclone_config_edit/) - Enter an interactive configuration session. * [rclone config file](https://rclone.org/commands/rclone_config_file/) - Show path of configuration file in use. * [rclone config password](https://rclone.org/commands/rclone_config_password/) - Update password in an existing remote. * [rclone config providers](https://rclone.org/commands/rclone_config_providers/) - List in JSON format all the providers and options. * [rclone config reconnect](https://rclone.org/commands/rclone_config_reconnect/) - Re-authenticates user with remote. * [rclone config show](https://rclone.org/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote. * [rclone config update](https://rclone.org/commands/rclone_config_update/) - Update options in an existing remote. * [rclone config userinfo](https://rclone.org/commands/rclone_config_userinfo/) - Prints info about logged in user of remote. # rclone copy Copy files from source to dest, skipping already copied. ## Synopsis Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination. Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. If dest:path doesn't exist, it is created and the source:path contents go there. For example rclone copy source:sourcepath dest:destpath Let's say there are two files in sourcepath sourcepath/one.txt sourcepath/two.txt This copies them to destpath/one.txt destpath/two.txt Not to destpath/sourcepath/one.txt destpath/sourcepath/two.txt If you are familiar with `rsync`, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination. See the [--no-traverse](https://rclone.org/docs/#no-traverse) option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly. For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this: rclone copy --max-age 24h --no-traverse /path/to/src remote: **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. **Note**: Use the `--dry-run` or the `--interactive`/`-i` flag to test without copying anything. ``` rclone copy source:path dest:path [flags] ``` ## Options ``` --create-empty-src-dirs Create empty source dirs on destination after copy -h, --help help for copy ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone sync Make source and dest identical, modifying destination only. ## Synopsis Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. 
Destination is updated to match source, including deleting files if necessary. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. rclone sync -i SOURCE remote:DESTINATION Note that files in the destination won't be deleted if there were any errors at any point. It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the `copy` command above if unsure. If dest:path doesn't exist, it is created and the source:path contents go there. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics ``` rclone sync source:path dest:path [flags] ``` ## Options ``` --create-empty-src-dirs Create empty source dirs on destination after sync -h, --help help for sync ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone move Move files from source to dest. ## Synopsis Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server side directory move operation. If no filters are in use and if possible this will server side move `source:path` into `dest:path`. After this `source:path` will no longer exist. Otherwise for each file in `source:path` selected by the filters (if any) this will move it into `dest:path`. If possible a server side move will be used, otherwise it will copy it (server side if possible) into `dest:path` then delete the original (if no errors on copy) in `source:path`. If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag. See the [--no-traverse](https://rclone.org/docs/#no-traverse) option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. ``` rclone move source:path dest:path [flags] ``` ## Options ``` --create-empty-src-dirs Create empty source dirs on destination after move --delete-empty-src-dirs Delete empty source dirs after move -h, --help help for move ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone delete Remove the contents of path. ## Synopsis Remove the files in path. Unlike `purge` it obeys include/exclude filters so can be used to selectively delete files. `rclone delete` only deletes objects but leaves the directory structure alone. If you want to delete a directory and all of its contents use `rclone purge` If you supply the --rmdirs flag, it will remove all empty directories along with it. Eg delete all files bigger than 100MBytes Check what would be deleted first (use either) rclone --min-size 100M lsl remote:path rclone --dry-run --min-size 100M delete remote:path Then delete rclone --min-size 100M delete remote:path That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes. 
**Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. ``` rclone delete remote:path [flags] ``` ## Options ``` -h, --help help for delete --rmdirs rmdirs removes empty directories but leaves root intact ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone purge Remove the path and all of its contents. ## Synopsis Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use `delete` if you want to selectively delete files. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. ``` rclone purge remote:path [flags] ``` ## Options ``` -h, --help help for purge ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone mkdir Make the path if it doesn't already exist. ## Synopsis Make the path if it doesn't already exist. ``` rclone mkdir remote:path [flags] ``` ## Options ``` -h, --help help for mkdir ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone rmdir Remove the path if empty. ## Synopsis Remove the path. Note that you can't remove a path with objects in it, use purge for that. ``` rclone rmdir remote:path [flags] ``` ## Options ``` -h, --help help for rmdir ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone check Checks the files in the source and destination match. ## Synopsis Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination. If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check. If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data. If you supply the `--one-way` flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected. The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, one per line, to the file name (or stdout if it is `-`) supplied. What they write is described in the help below. For example `--differ` will write all paths which are present on both the source and destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files. 
- `= path` means path was found in source and destination and was identical - `- path` means path was missing on the source, so only in the destination - `+ path` means path was missing on the destination, so only in the source - `* path` means path was present in source and destination but different. - `! path` means there was an error reading or hashing the source or dest. ``` rclone check source:path dest:path [flags] ``` ## Options ``` --combined string Make a combined report of changes to this file --differ string Report all non-matching files to this file --download Check by downloading rather than with hash. --error string Report all files with errors (hashing or reading) to this file -h, --help help for check --match string Report all matching files to this file --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file --one-way Check one way only, source files must exist on remote ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone ls List the objects in the path with size and path. ## Synopsis Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default. Eg $ rclone ls swift:bucket 60295 bevajer5jef 90613 canole 94467 diwogej7 37600 fubuwic Any of the filtering options can be applied to this command. There are several related list commands * `ls` to list size and path of objects only * `lsl` to list modification time, size and path of objects only * `lsd` to list directories only * `lsf` to list objects and directories in easy to parse format * `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human readable. `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes). ``` rclone ls remote:path [flags] ``` ## Options ``` -h, --help help for ls ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone lsd List all directories/containers/buckets in the path. ## Synopsis Lists the directories in the source path to standard output. Does not recurse by default. Use the -R flag to recurse. This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of the directory, Eg $ rclone lsd swift: 494000 2018-04-26 08:43:20 10000 10000files 65 2018-04-26 08:43:20 1 1File Or $ rclone lsd drive:test -1 2016-10-17 17:41:53 -1 1000files -1 2017-01-03 14:40:54 -1 2500files -1 2017-07-08 14:39:28 -1 4000files If you just want the directory names use "rclone lsf --dirs-only". Any of the filtering options can be applied to this command. 
There are several related list commands * `ls` to list size and path of objects only * `lsl` to list modification time, size and path of objects only * `lsd` to list directories only * `lsf` to list objects and directories in easy to parse format * `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human readable. `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes). ``` rclone lsd remote:path [flags] ``` ## Options ``` -h, --help help for lsd -R, --recursive Recurse into the listing. ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone lsl List the objects in path with modification time, size and path. ## Synopsis Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default. Eg $ rclone lsl swift:bucket 60295 2016-06-25 18:55:41.062626927 bevajer5jef 90613 2016-06-25 18:55:43.302607074 canole 94467 2016-06-25 18:55:43.046609333 diwogej7 37600 2016-06-25 18:55:40.814629136 fubuwic Any of the filtering options can be applied to this command. There are several related list commands * `ls` to list size and path of objects only * `lsl` to list modification time, size and path of objects only * `lsd` to list directories only * `lsf` to list objects and directories in easy to parse format * `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human readable. `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes). ``` rclone lsl remote:path [flags] ``` ## Options ``` -h, --help help for lsl ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone md5sum Produces an md5sum file for all the objects in the path. ## Synopsis Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces. ``` rclone md5sum remote:path [flags] ``` ## Options ``` -h, --help help for md5sum ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone sha1sum Produces an sha1sum file for all the objects in the path. ## Synopsis Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces. 
``` rclone sha1sum remote:path [flags] ``` ## Options ``` -h, --help help for sha1sum ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone size Prints the total size and number of objects in remote:path. ## Synopsis Prints the total size and number of objects in remote:path. ``` rclone size remote:path [flags] ``` ## Options ``` -h, --help help for size --json format output as JSON ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone version Show the version number. ## Synopsis Show the version number, the go version and the architecture. Eg $ rclone version rclone v1.41 - os/arch: linux/amd64 - go version: go1.10 If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta. $ rclone version --check yours: 1.42.0.6 latest: 1.42 (released 2018-06-16) beta: 1.42.0.5 (released 2018-06-17) Or $ rclone version --check yours: 1.41 latest: 1.42 (released 2018-06-16) upgrade: https://downloads.rclone.org/v1.42 beta: 1.42.0.5 (released 2018-06-17) upgrade: https://beta.rclone.org/v1.42-005-g56e1e820 ``` rclone version [flags] ``` ## Options ``` --check Check for new version. -h, --help help for version ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone cleanup Clean up the remote if possible. ## Synopsis Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes. ``` rclone cleanup remote:path [flags] ``` ## Options ``` -h, --help help for cleanup ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone dedupe Interactively find duplicate filenames and delete/rename them. ## Synopsis By default `dedupe` interactively finds files with duplicate names and offers to delete all but one or rename them to be different. This is only useful with backends like Google Drive which can have duplicate file names. It can be run on wrapping backends (eg crypt) if they wrap a backend which supports duplicate file names. In the first pass it will merge directories with the same name. It will do this iteratively until all the identically named directories have been merged. In the second pass, for every group of duplicate file names, it will delete all but one identical files it finds without confirmation. This means that for most duplicated files the `dedupe` command will not be interactive. `dedupe` considers files to be identical if they have the same hash. If the backend does not support hashes (eg crypt wrapping Google Drive) then they will never be found to be identical. If you use the `--size-only` flag then files will be considered identical if they have the same size (any hash will be ignored). This can be useful on crypt backends which do not support hashes. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. Here is an example run. 
Before - with duplicates $ rclone lsl drive:dupes 6048320 2016-03-05 16:23:16.798000000 one.txt 6048320 2016-03-05 16:23:11.775000000 one.txt 564374 2016-03-05 16:23:06.731000000 one.txt 6048320 2016-03-05 16:18:26.092000000 one.txt 6048320 2016-03-05 16:22:46.185000000 two.txt 1744073 2016-03-05 16:22:38.104000000 two.txt 564374 2016-03-05 16:22:52.118000000 two.txt Now the `dedupe` session $ rclone dedupe drive:dupes 2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode. one.txt: Found 4 files with duplicate names one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36") one.txt: 2 duplicates remain 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 s) Skip and do nothing k) Keep just one (choose which in next step) r) Rename all to be different (by changing file.jpg to file-1.jpg) s/k/r> k Enter the number of the file to keep> 1 one.txt: Deleted 1 extra copies two.txt: Found 3 files with duplicates names two.txt: 3 duplicates remain 1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802 s) Skip and do nothing k) Keep just one (choose which in next step) r) Rename all to be different (by changing file.jpg to file-1.jpg) s/k/r> r two-1.txt: renamed from: two.txt two-2.txt: renamed from: two.txt two-3.txt: renamed from: two.txt The result being $ rclone lsl drive:dupes 6048320 2016-03-05 16:23:16.798000000 one.txt 564374 2016-03-05 16:22:52.118000000 two-1.txt 6048320 2016-03-05 16:22:46.185000000 two-2.txt 1744073 2016-03-05 16:22:38.104000000 two-3.txt Dedupe can be run non interactively using the `--dedupe-mode` flag or by using an extra parameter with the same value * `--dedupe-mode interactive` - interactive as above. * `--dedupe-mode skip` - removes identical files then skips anything left. * `--dedupe-mode first` - removes identical files then keeps the first one. * `--dedupe-mode newest` - removes identical files then keeps the newest one. * `--dedupe-mode oldest` - removes identical files then keeps the oldest one. * `--dedupe-mode largest` - removes identical files then keeps the largest one. * `--dedupe-mode smallest` - removes identical files then keeps the smallest one. * `--dedupe-mode rename` - removes identical files then renames the rest to be different. For example to rename all the identically named photos in your Google Photos directory, do rclone dedupe --dedupe-mode rename "drive:Google Photos" Or rclone dedupe rename "drive:Google Photos" ``` rclone dedupe [mode] remote:path [flags] ``` ## Options ``` --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename. (default "interactive") -h, --help help for dedupe ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone about Get quota information from the remote. ## Synopsis Get quota information from the remote, like bytes used/free/quota and bytes used in the trash. Not supported by all remotes. 
This will print to stdout something like this: Total: 17G Used: 7.444G Free: 1.315G Trashed: 100.000M Other: 8.241G Where the fields are: * Total: total size available. * Used: total size used * Free: total amount this user could upload. * Trashed: total amount in the trash * Other: total amount in other storage (eg Gmail, Google Photos) * Objects: total number of objects in the storage Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted. Use the --full flag to see the numbers written out in full, eg Total: 18253611008 Used: 7993453766 Free: 1411001220 Trashed: 104857602 Other: 8849156022 Use the --json flag for a computer readable output, eg { "total": 18253611008, "used": 7993453766, "trashed": 104857602, "other": 8849156022, "free": 1411001220 } ``` rclone about remote: [flags] ``` ## Options ``` --full Full numbers instead of SI units -h, --help help for about --json Format output as JSON ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone authorize Remote authorization. ## Synopsis Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config. Use the --auth-no-open-browser to prevent rclone to open auth link in default browser automatically. ``` rclone authorize [flags] ``` ## Options ``` --auth-no-open-browser Do not automatically open auth link in default browser -h, --help help for authorize ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone backend Run a backend specific command. ## Synopsis This runs a backend specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions. You can discover what commands a backend implements by using rclone backend help remote: rclone backend help You can also discover information about the backend using (see [operations/fsinfo](https://rclone.org/rc/#operations/fsinfo) in the remote control docs for more info). rclone backend features remote: Pass options to the backend command with -o. This should be key=value or key, eg: rclone backend stats remote:path stats -o format=json -o long Pass arguments to the backend by placing them on the end of the line rclone backend cleanup remote:path file1 file2 file3 Note to run these commands on a running backend then see [backend/command](https://rclone.org/rc/#backend/command) in the rc docs. ``` rclone backend remote:path [opts] [flags] ``` ## Options ``` -h, --help help for backend --json Always output in JSON format. -o, --option stringArray Option in the form name=value or name. ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone cat Concatenates any files and sends them to stdout. ## Synopsis rclone cat sends any files to standard output. You can use it like this to output a single file rclone cat remote:path/to/file Or like this to output any file in dir or its subdirectories. 
rclone cat remote:path/to/dir Or like this to output any .txt files in dir or its subdirectories. rclone --include "*.txt" cat remote:path/to/dir Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1. ``` rclone cat remote:path [flags] ``` ## Options ``` --count int Only print N characters. (default -1) --discard Discard the output instead of printing. --head int Only print the first N characters. -h, --help help for cat --offset int Start printing at offset N (or from end if -ve). --tail int Only print the last N characters. ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone config create Create a new remote with name, type and options. ## Synopsis Create a new remote of `name` with `type` and options. The options should be passed in pairs of `key` `value`. For example to make a swift remote of name myremote using auto config you would do: rclone config create myremote swift env_auth true Note that if the config process would normally ask a question the default is taken. Each time that happens rclone will print a message saying how to affect the value taken. If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file. **NB** If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set osbscured passwords using the "rclone config password" command. So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this: rclone config create mydrive drive config_is_local false ``` rclone config create `name` `type` [`key` `value`]* [flags] ``` ## Options ``` -h, --help help for create --no-obscure Force any passwords not to be obscured. --obscure Force any passwords to be obscured. ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config delete Delete an existing remote `name`. ## Synopsis Delete an existing remote `name`. ``` rclone config delete `name` [flags] ``` ## Options ``` -h, --help help for delete ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config disconnect Disconnects user from remote ## Synopsis This disconnects the remote: passed in to the cloud storage system. This normally means revoking the oauth token. To reconnect use "rclone config reconnect". ``` rclone config disconnect remote: [flags] ``` ## Options ``` -h, --help help for disconnect ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config dump Dump the config file as JSON. ## Synopsis Dump the config file as JSON. ``` rclone config dump [flags] ``` ## Options ``` -h, --help help for dump ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config edit Enter an interactive configuration session. ## Synopsis Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration. ``` rclone config edit [flags] ``` ## Options ``` -h, --help help for edit ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config file Show path of configuration file in use. ## Synopsis Show path of configuration file in use. ``` rclone config file [flags] ``` ## Options ``` -h, --help help for file ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config password Update password in an existing remote. ## Synopsis Update an existing remote's password. The password should be passed in pairs of `key` `value`. For example to set password of a remote of name myremote you would do: rclone config password myremote fieldname mypassword This command is obsolete now that "config update" and "config create" both support obscuring passwords directly. ``` rclone config password `name` [`key` `value`]+ [flags] ``` ## Options ``` -h, --help help for password ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config providers List in JSON format all the providers and options. ## Synopsis List in JSON format all the providers and options. ``` rclone config providers [flags] ``` ## Options ``` -h, --help help for providers ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config reconnect Re-authenticates user with remote. ## Synopsis This reconnects remote: passed in to the cloud storage system. To disconnect the remote use "rclone config disconnect". This normally means going through the interactive oauth flow again. ``` rclone config reconnect remote: [flags] ``` ## Options ``` -h, --help help for reconnect ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config show Print (decrypted) config file, or the config for a single remote. ## Synopsis Print (decrypted) config file, or the config for a single remote. ``` rclone config show [] [flags] ``` ## Options ``` -h, --help help for show ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config update Update options in an existing remote. ## Synopsis Update an existing remote's options. The options should be passed in in pairs of `key` `value`. For example to update the env_auth field of a remote of name myremote you would do: rclone config update myremote swift env_auth true If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file. **NB** If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set osbscured passwords using the "rclone config password" command. If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus: rclone config update myremote swift env_auth true config_refresh_token false ``` rclone config update `name` [`key` `value`]+ [flags] ``` ## Options ``` -h, --help help for update --no-obscure Force any passwords not to be obscured. --obscure Force any passwords to be obscured. ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone config userinfo Prints info about logged in user of remote. ## Synopsis This prints the details of the person logged in to the cloud storage system. ``` rclone config userinfo remote: [flags] ``` ## Options ``` -h, --help help for userinfo --json Format output as JSON ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. # rclone copyto Copy files from source to dest, skipping already copied. ## Synopsis If source:path is a file or directory then it copies it to a file or directory named dest:path. This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command. So rclone copyto src dst where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows. This will: if src is file copy it to dst, overwriting an existing file if it exists if src is directory copy it to dst, overwriting existing files if they exist see copy command for full details This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics ``` rclone copyto source:path dest:path [flags] ``` ## Options ``` -h, --help help for copyto ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone copyurl Copy url content to dest. ## Synopsis Download a URL's content and copy it to the destination without saving it in temporary storage. 
Setting --auto-filename will cause the file name to be retrieved from the URL (after any redirections) and used in the destination path. Setting --no-clobber will prevent overwriting a file on the destination if there is one with the same name. Setting --stdout or making the output file name "-" will cause the output to be written to standard output. ``` rclone copyurl https://example.com dest:path [flags] ``` ## Options ``` -a, --auto-filename Get the file name from the URL and use it for destination file path -h, --help help for copyurl --no-clobber Prevent overwriting file with same name --stdout Write the output to stdout rather than a file ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone cryptcheck Cryptcheck checks the integrity of a crypted remote. ## Synopsis rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote. For it to work, the underlying remote of the cryptedremote must support some kind of checksum. It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted. Use it like this rclone cryptcheck /path/to/files encryptedremote:path You can use it like this also, but that will involve downloading all the files in remote:path. rclone cryptcheck remote:path encryptedremote:path After it has run it will log the status of the encryptedremote:. If you supply the `--one-way` flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected. The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, one per line, to the file name (or stdout if it is `-`) supplied. What they write is described in the help below. For example `--differ` will write all paths which are present on both the source and destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files - see the example after this list. - `= path` means path was found in source and destination and was identical - `- path` means path was missing on the source, so only in the destination - `+ path` means path was missing on the destination, so only in the source - `* path` means path was present in source and destination but different. - `! path` means there was an error reading or hashing the source or dest.
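For example, to run a check and stream the combined report to stdout (the paths here are illustrative): ``` rclone cryptcheck --combined - /path/to/files encryptedremote:path ```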
``` rclone cryptcheck remote:path cryptedremote:path [flags] ``` ## Options ``` --combined string Make a combined report of changes to this file --differ string Report all non-matching files to this file --error string Report all files with errors (hashing or reading) to this file -h, --help help for cryptcheck --match string Report all matching files to this file --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file --one-way Check one way only, source files must exist on remote ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone cryptdecode Cryptdecode returns unencrypted file names. ## Synopsis rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items. If you supply the --reverse flag, it will return encrypted file names. Use it like this rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 rclone cryptdecode --reverse encryptedremote: filename1 filename2 ``` rclone cryptdecode encryptedremote: encryptedfilename [flags] ``` ## Options ``` -h, --help help for cryptdecode --reverse Reverse cryptdecode, encrypts filenames ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone deletefile Remove a single file from remote. ## Synopsis Remove a single file from remote. Unlike `delete` it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed. ``` rclone deletefile remote:path [flags] ``` ## Options ``` -h, --help help for deletefile ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone genautocomplete Output completion script for a given shell. ## Synopsis Generates a shell completion script for rclone. Run with --help to list the supported shells. ## Options ``` -h, --help help for genautocomplete ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone genautocomplete bash](https://rclone.org/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone. * [rclone genautocomplete fish](https://rclone.org/commands/rclone_genautocomplete_fish/) - Output fish completion script for rclone. * [rclone genautocomplete zsh](https://rclone.org/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone. # rclone genautocomplete bash Output bash completion script for rclone. ## Synopsis Generates a bash shell autocompletion script for rclone. This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete bash Log out and log in again to use the autocompletion scripts, or source them directly . /etc/bash_completion If you supply a command line argument the script will be written there.
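For example, to generate the script somewhere you can source it without root (the path here is illustrative): ``` rclone genautocomplete bash ~/.rclone-completion.bash ```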
``` rclone genautocomplete bash [output_file] [flags] ``` ## Options ``` -h, --help help for bash ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone genautocomplete](https://rclone.org/commands/rclone_genautocomplete/) - Output completion script for a given shell. # rclone genautocomplete fish Output fish completion script for rclone. ## Synopsis Generates a fish autocompletion script for rclone. This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete fish Log out and log in again to use the autocompletion scripts, or source them directly . /etc/fish/completions/rclone.fish If you supply a command line argument the script will be written there. ``` rclone genautocomplete fish [output_file] [flags] ``` ## Options ``` -h, --help help for fish ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone genautocomplete](https://rclone.org/commands/rclone_genautocomplete/) - Output completion script for a given shell. # rclone genautocomplete zsh Output zsh completion script for rclone. ## Synopsis Generates a zsh autocompletion script for rclone. This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete zsh Log out and log in again to use the autocompletion scripts, or source them directly autoload -U compinit && compinit If you supply a command line argument the script will be written there. ``` rclone genautocomplete zsh [output_file] [flags] ``` ## Options ``` -h, --help help for zsh ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone genautocomplete](https://rclone.org/commands/rclone_genautocomplete/) - Output completion script for a given shell. # rclone gendocs Output markdown docs for rclone to the directory supplied. ## Synopsis This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website. ``` rclone gendocs output_directory [flags] ``` ## Options ``` -h, --help help for gendocs ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone hashsum Produces a hashsum file for all the objects in the path. ## Synopsis Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool. Run without a hash to see the list of supported hashes, eg $ rclone hashsum Supported hashes are: * MD5 * SHA-1 * DropboxHash * QuickXorHash Then $ rclone hashsum MD5 remote:path ``` rclone hashsum remote:path [flags] ``` ## Options ``` --base64 Output base64 encoded hashsum -h, --help help for hashsum ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone link Generate public link to file/folder. ## Synopsis rclone link will create, retrieve or remove a public link to the given file or folder.
rclone link remote:path/to/file rclone link remote:path/to/folder/ rclone link --unlink remote:path/to/folder/ rclone link --expire 1d remote:path/to/file If you supply the --expire flag, it will set the expiration time, otherwise it will use the default (100 years). **Note** that not all backends support the --expire flag - if the backend doesn't support it then the link returned won't expire. Use the --unlink flag to remove existing public links to the file or folder. **Note** that not all backends support the "--unlink" flag - those that don't will just ignore it. If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default be created with the least constraints – e.g. no expiry, no password protection, accessible without account. ``` rclone link remote:path [flags] ``` ## Options ``` --expire Duration The amount of time that the link will be valid (default 100y) -h, --help help for link --unlink Remove existing public link to file/folder ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone listremotes List all the remotes in the config file. ## Synopsis rclone listremotes lists all the available remotes from the config file. When used with the --long flag it lists the types too. ``` rclone listremotes [flags] ``` ## Options ``` -h, --help help for listremotes --long Show the type as well as names. ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone lsf List directories and objects in remote:path formatted for parsing. ## Synopsis List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. Eg $ rclone lsf swift:bucket bevajer5jef canole diwogej7 ferejej3gux/ fubuwic Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: p - path s - size t - modification time h - hash i - ID of object o - Original ID of underlying object m - MimeType of object if known e - encrypted name T - tier of storage if known, eg "Hot" or "Cool" So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last. Eg $ rclone lsf --format "tsp" swift:bucket 2016-06-25 18:55:41;60295;bevajer5jef 2016-06-25 18:55:43;90613;canole 2016-06-25 18:55:43;94467;diwogej7 2018-04-26 08:50:45;0;ferejej3gux/ 2016-06-25 18:55:40;37600;fubuwic If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type. For example, to emulate the md5sum command you can use rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
Eg $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket 7908e352297f0f530b84a756f188baa3 bevajer5jef cd65ac234e6fea5925974a51cdd865cc canole 03b5341b4f234b9d984d03ad076bae91 diwogej7 8fd37c3810dd660778137ac3a66cc06d fubuwic 99713e14a4c4ff553acaf1930fad985b gixacuh7ku (Though "rclone md5sum ." is an easier way of typing this.) By default the separator is ";", but this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy. Eg $ rclone lsf --separator "," --format "tshp" swift:bucket 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef 2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole 2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7 2018-04-26 08:52:53,0,,ferejej3gux/ 2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic You can output in standard CSV format. This will escape things in " if they contain , Eg $ rclone lsf --csv --files-only --format ps remote:path test.log,22355 test.sh,449 "this file contains a comma, in the file name.txt",6 Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag. For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure): rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files rclone copy --files-from-raw new_files /path/to/local remote:path Any of the filtering options can be applied to this command. There are several related list commands * `ls` to list size and path of objects only * `lsl` to list modification time, size and path of objects only * `lsd` to list directories only * `lsf` to list objects and directories in easy-to-parse format * `lsjson` to list objects and directories in JSON format `ls`, `lsl`, `lsd` are designed to be human readable. `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. The other list commands `lsd`, `lsf`, `lsjson` do not recurse by default - use "-R" to make them recurse. Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes). ``` rclone lsf remote:path [flags] ``` ## Options ``` --absolute Put a leading / in front of path names. --csv Output in CSV format. -d, --dir-slash Append a slash to directory names. (default true) --dirs-only Only list directories. --files-only Only list files. -F, --format string Output format - see help for details (default "p") --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5") -h, --help help for lsf -R, --recursive Recurse into the listing. -s, --separator string Separator for the items in the format. (default ";") ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone lsjson List directories and objects in the path in JSON format. ## Synopsis List directories and objects in the path in JSON format.
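For example (the remote path here is illustrative): ``` rclone lsjson remote:path ```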
The output is an array of Items, where each Item looks like this { "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsBucket" : false, "IsDir" : false, "MimeType" : "application/octet-stream", "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", } If --hash is not specified the Hashes property won't be emitted. The types of hash can be specified with the --hash-type parameter (which may be repeated). If --hash-type is set then it implies --hash. If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (eg s3, swift). If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (eg s3, swift). If --encrypted is not specified the Encrypted property won't be emitted. If --dirs-only is not specified files in addition to directories are returned. If --files-only is not specified directories in addition to the files will be returned. The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name. If the directory is a bucket in a bucket based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true". The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision to which the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown ("2017-05-31T16:15:57+01:00"). The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one per line. Any of the filtering options can be applied to this command. There are several related list commands * `ls` to list size and path of objects only * `lsl` to list modification time, size and path of objects only * `lsd` to list directories only * `lsf` to list objects and directories in easy-to-parse format * `lsjson` to list objects and directories in JSON format `ls`, `lsl`, `lsd` are designed to be human readable. `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. The other list commands `lsd`, `lsf`, `lsjson` do not recurse by default - use "-R" to make them recurse. Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes). ``` rclone lsjson remote:path [flags] ``` ## Options ``` --dirs-only Show only directories in the listing. -M, --encrypted Show the encrypted names. --files-only Show only files in the listing. --hash Include hashes in the output (may take longer).
--hash-type stringArray Show only this hash type (may be repeated). -h, --help help for lsjson --no-mimetype Don't read the mime type (can speed things up). --no-modtime Don't read the modification time (can speed things up). --original Show the ID of the underlying Object. -R, --recursive Recurse into the listing. ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone mount Mount the remote as a file system on a mountpoint. ## Synopsis rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. First set up your remote using `rclone config`. Check it works with `rclone ls` etc. You can either run mount in foreground mode or background (daemon) mode. Mount runs in foreground mode by default; use the --daemon flag to specify background mode. Background mode is only supported on Linux and OSX; on Windows you can only run mount in foreground mode. On Linux/macOS/FreeBSD start the mount like this where `/path/to/local/mount` is an **empty** **existing** directory. rclone mount remote:path/to/files /path/to/local/mount Or on Windows like this where `X:` is an unused drive letter or use a path to a **non-existent** directory. rclone mount remote:path/to/files X: rclone mount remote:path/to/files C:\path\to\nonexistent\directory When running in background mode the user will have to stop the mount manually (as described below). When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped. The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually. Stopping the mount manually: # Linux fusermount -u /path/to/local/mount # OS X umount /path/to/local/mount **Note**: As of `rclone` 1.52.2, `rclone mount` now requires Go version 1.13 or newer on some platforms depending on the underlying FUSE library in use. ## Installing on Windows To run rclone mount on Windows, you will need to download and install [WinFsp](http://www.secfs.net/winfsp/). [WinFsp](https://github.com/billziss-gh/winfsp) is an open source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with [cgofuse](https://github.com/billziss-gh/cgofuse). Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows. ### Windows caveats Note that drives created as Administrator are not visible to other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive. The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using [the WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)) which creates drives accessible for everyone on the system or alternatively using [the nssm service manager](https://nssm.cc/usage). ### Mount as a network drive By default, rclone will mount the remote as a normal drive.
However, you can also mount it as a **Network Drive** (or **Network Share**, as mentioned in some places). Unlike other systems, Windows provides a different filesystem type for network drives. Windows and other programs treat the network drives and fixed/removable drives differently: In network drives, many I/O operations are optimized, as the high latency and low reliability (compared to a normal drive) of a network are expected. Although many people prefer network shares to be mounted as normal system drives, this might cause some issues, such as programs not working as expected or freezes and errors while operating with the mounted remote in Windows Explorer. If you experience any of those, consider mounting rclone remotes as network shares, as Windows expects normal drives to be fast and reliable, while cloud storage is far from that. See also the [Limitations](#limitations) section below for more info. Add "--fuse-flag --VolumePrefix=\server\share" to your "mount" command, **replacing "share" with any other name of your choice if you are mounting more than one remote**. Otherwise, the mountpoints will conflict and your mounted filesystems will overlap. [Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping) ## Limitations Without the use of "--vfs-cache-mode" this can only write files sequentially, and it can only seek when reading. This means that many applications won't work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the [File Caching](#file-caching) section for more info. The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache. Only supported on Linux, FreeBSD, OS X and Windows at the moment. ## rclone mount vs rclone sync/copy File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the [file caching](#file-caching) section for solutions to make mount more reliable. ## Attribute caching You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries. The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel. In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as [rclone using too much memory](https://github.com/rclone/rclone/issues/2157), [rclone not serving files to samba](https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112) and [excessive time listing directories](https://github.com/rclone/rclone/issues/2095#issuecomment-371141147). The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above.
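As an illustration, a mount that accepts longer attribute caching in exchange for fewer kernel callbacks might look like this (the mountpoint and value are illustrative): ``` rclone mount remote:path /path/to/local/mount --attr-timeout 10s ```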
If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often, making it more efficient; however there is more chance of the corruption issue above. If files don't change on the remote outside of the control of rclone then there is no chance of corruption. This is the same as setting the attr_timeout option in mount.fuse. ## Filters Note that all the rclone filters can be used to select a subset of the files to be visible in the mount. ## systemd When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode. ## chunked reading --vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests. When --vfs-read-chunk-size-limit is also specified and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely. With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. ## VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. ## VFS Directory Cache Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --poll-interval duration Time to wait between polling for changes. However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval. You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: kill -SIGHUP $(pidof rclone) If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: rclone rc vfs/forget Or individual files or directories: rclone rc vfs/forget file=path/to/file dir=path/to/dir ## VFS File Buffering The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance.
Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared. This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`. ## VFS File Caching These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable. The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space. Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. ### --vfs-cache-mode off In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible * Files can't be opened for both read AND write * Files opened for write can't be seeked * Existing files opened for write must have O_TRUNC set * Files open for read with O_TRUNC will be opened write only * Files open for write only will behave as if O_TRUNC was supplied * Open modes O_APPEND, O_TRUNC are ignored * If an upload fails it can't be retried ### --vfs-cache-mode minimal This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. These operations are not possible * Files opened for write only can't be seeked * Existing files opened for write must have O_TRUNC set * Files opened for write only will ignore O_APPEND, O_TRUNC * If an upload fails it can't be retried ### --vfs-cache-mode writes In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations.
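A typical invocation of this mode (the paths here are illustrative): ``` rclone mount remote:path /path/to/local/mount --vfs-cache-mode writes ```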
If an upload fails it will be retried at exponentially increasing intervals up to 1 minute. ### --vfs-cache-mode full In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well. In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them. This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes. When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk. When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required. **IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. ## VFS Performance These flags may be used to enable/disable features of the VFS for performance or other reasons. In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Mount read-only. When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered. Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit. --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off") Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s) ## VFS Case Sensitivity Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases.
If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". ``` rclone mount remote:path /path/to/mountpoint [flags] ``` ## Options ``` --allow-non-empty Allow mounting over a non-empty directory (not Windows). --allow-other Allow access to other users. --allow-root Allow access to root user. --async-read Use asynchronous reads. (default true) --attr-timeout duration Time for which file/directory attributes are cached. (default 1s) --daemon Run mount as a daemon (background mode). --daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes). --debug-fuse Debug the FUSE internals - needs -v. --default-permissions Makes kernel enforce access control based on the file mode. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-perms FileMode Directory permissions (default 0777) --file-perms FileMode File permissions (default 0666) --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required. --gid uint32 Override the gid field set by the filesystem. (default 1000) -h, --help help for mount --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. (default 128k) --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. -o, --option stringArray Option for libfuse/WinFsp. Repeat if required. --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) --read-only Mount read-only. --uid uint32 Override the uid field set by the filesystem. (default 1000) --umask int Override the permission bits set by the filesystem. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match. --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full. --vfs-read-chunk-size SizeSuffix Read the source objects in chunks.
(default 128M) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s) --volname string Set the volume name (not supported by all OSes). --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone moveto Move file or directory from source to dest. ## Synopsis If source:path is a file or directory then it moves it to a file or directory named dest:path. This can be used to rename files or upload single files under a name other than their existing one. If the source is a directory then it acts exactly like the move command. So rclone moveto src dst where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows. This will: if src is file move it to dst, overwriting an existing file if it exists if src is directory move it to dst, overwriting existing files if they exist see move command for full details This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. ``` rclone moveto source:path dest:path [flags] ``` ## Options ``` -h, --help help for moveto ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone ncdu Explore a remote with a text based user interface. ## Synopsis This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?". To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along. Here are the keys - press '?' to toggle the help on and off ↑,↓ or k,j to Move →,l to enter ←,h to return c toggle counts g toggle graph n,s,C sort by name,size,count d delete file/directory y copy current path to clipboard Y display current path ^L refresh screen ? to toggle help on and off q/ESC/c-C to quit This is an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for rclone remotes. It is missing lots of features at the moment but is useful as it stands. Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously. ``` rclone ncdu remote:path [flags] ``` ## Options ``` -h, --help help for ncdu ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
# rclone obscure Obscure password for use in the rclone config file. ## Synopsis In the rclone config file, human readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is **not** a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident. Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token. This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. Example: echo "secretpassword" | rclone obscure - If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself. If you want to encrypt the config file then please use config file encryption - see [rclone config](https://rclone.org/commands/rclone_config/) for more info. ``` rclone obscure password [flags] ``` ## Options ``` -h, --help help for obscure ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone rc Run a command against a running rclone. ## Synopsis This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port" A username and password can be passed in with --user and --pass. Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass. Arguments should be passed in as parameter=value. The result will be returned as a JSON object by default. The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values. The -o/--opt option can be used to set a key "opt" with key, value options in the form "-o key=value" or "-o key". It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings. -o key=value -o key2 Will place this in the "opt" value {"key":"value", "key2":""} The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings. -a value -a value2 Will place this in the "arg" value ["value", "value2"] Use --loopback to connect to the rclone instance running "rclone rc". This is very useful for testing commands without having to run an rclone rc server, eg: rclone rc --loopback operations/about fs=/ Use "rclone rc" to see a list of all possible commands. ``` rclone rc commands parameter [flags] ``` ## Options ``` -a, --arg stringArray Argument placed in the "arg" array. -h, --help help for rc --json string Input JSON - use instead of key=value args. --loopback If set connect to this rclone instance not via HTTP. --no-output If set don't output the JSON result. -o, --opt stringArray Option in the form name=value or name placed in the "opt" array. --pass string Password to use to connect to rclone remote control. --url string URL to connect to rclone remote control. (default "http://localhost:5572/") --user string Username to use to connect to rclone remote control.
``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone rcat Copies standard input to file on remote. ## Synopsis rclone rcat reads from standard input (stdin) and copies it to a single remote file. echo "hello world" | rclone rcat remote:path/to/file ffmpeg - | rclone rcat remote:path/to/file If the remote file already exists, it will be overwritten. rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through `--streaming-upload-cutoff`. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote - please check its documentation. Generally speaking, setting this cutoff too high will decrease your performance. Note also that the upload cannot be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching it locally and then using `rclone move` to send it to the destination. ``` rclone rcat remote:path [flags] ``` ## Options ``` -h, --help help for rcat ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone rcd Run rclone listening to remote control commands only. ## Synopsis This runs rclone so that it only listens to remote control commands. This is useful if you are controlling rclone via the rc API. If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run. See the [rc documentation](https://rclone.org/rc/) for more info on the rc flags. ``` rclone rcd * [flags] ``` ## Options ``` -h, --help help for rcd ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone rmdirs Remove empty directories under the path. ## Synopsis This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it. If you supply the --leave-root flag, it will not remove the root directory. This is useful for tidying up remotes that rclone has left a lot of empty directories in. ``` rclone rmdirs remote:path [flags] ``` ## Options ``` -h, --help help for rmdirs --leave-root Do not remove root directory if empty ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. # rclone serve Serve a remote over a protocol. ## Synopsis rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg rclone serve http remote: Each subcommand has its own options which you can see in their help. ``` rclone serve [opts] [flags] ``` ## Options ``` -h, --help help for serve ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here.
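For example, to serve a remote read-only over HTTP on a non-default port (the flags shown belong to the http subcommand and are illustrative - see its help for the full list): ``` rclone serve http remote: --addr :8080 --read-only ```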
## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone serve dlna](https://rclone.org/commands/rclone_serve_dlna/) - Serve remote:path over DLNA * [rclone serve ftp](https://rclone.org/commands/rclone_serve_ftp/) - Serve remote:path over FTP. * [rclone serve http](https://rclone.org/commands/rclone_serve_http/) - Serve the remote over HTTP. * [rclone serve restic](https://rclone.org/commands/rclone_serve_restic/) - Serve the remote for restic's REST API. * [rclone serve sftp](https://rclone.org/commands/rclone_serve_sftp/) - Serve the remote over SFTP. * [rclone serve webdav](https://rclone.org/commands/rclone_serve_webdav/) - Serve remote:path over webdav. # rclone serve dlna Serve remote:path over DLNA ## Synopsis rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs. Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly. ## Server options Use `--addr` to specify which IP address and port the server should listen on, eg `--addr 1.2.3.4:8000` or `--addr :8080` to listen to all IPs. Use `--name` to choose the friendly server name, which is by default "rclone (hostname)". Use `--log-trace` in conjunction with `-vv` to enable additional debug logging of all UPNP traffic. ## VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. ## VFS Directory Cache Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --poll-interval duration Time to wait between polling for changes. However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval. You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. 
Assuming only one rclone instance is running, you can reset the cache like this: kill -SIGHUP $(pidof rclone) If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: rclone rc vfs/forget Or individual files or directories: rclone rc vfs/forget file=path/to/file dir=path/to/dir ## VFS File Buffering The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance. Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared. This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`. ## VFS File Caching These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable. The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space. Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. ### --vfs-cache-mode off In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible * Files can't be opened for both read AND write * Files opened for write can't be seeked * Existing files opened for write must have O_TRUNC set * Files open for read with O_TRUNC will be opened write only * Files open for write only will behave as if O_TRUNC was supplied * Open modes O_APPEND, O_TRUNC are ignored * If an upload fails it can't be retried ### --vfs-cache-mode minimal This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

* Files can't be opened for both read AND write
* Files opened for write can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files open for read with O_TRUNC will be opened write only
* Files open for write only will behave as if O_TRUNC was supplied
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried

### --vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

These operations are not possible

* Files opened for write only can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried

### --vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

### --vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum     Don't compare checksums on up/download.
    --no-modtime      Don't read/write the modification time (can speed things up).
    --no-seek         Don't allow seeking in files.
    --read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

## VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

```
rclone serve dlna remote:path [flags]
```

## Options

```
      --addr string                            ip:port or :port to bind the DLNA http server to. (default ":7879")
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for dlna
      --log-trace                              enable trace logging of SOAP traffic
      --name string                            name of DLNA server
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```

See the [global flags page](https://rclone.org/flags/) for global options not listed here.

## SEE ALSO

* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

# rclone serve ftp

Serve remote:path over FTP.

## Synopsis

rclone serve ftp implements a basic ftp server to serve the remote over the FTP protocol. This can be viewed with an ftp client or you can make a remote of type ftp to read and write it.

## Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

### Authentication

By default this will serve files without needing a login.

You can set a single username and password with the --user and --pass flags.

## VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

## VFS Directory Cache

Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

## VFS File Buffering

The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`.
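As a rough illustration of that formula (the flag value here is arbitrary), a server limited to 8M of read-ahead per file would use at most about 256M of buffer memory with 32 files open at once:

    rclone serve ftp --buffer-size 8M remote:path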
## VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
    --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable.

The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

* Files can't be opened for both read AND write
* Files opened for write can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files open for read with O_TRUNC will be opened write only
* Files open for write only will behave as if O_TRUNC was supplied
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried

### --vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

These operations are not possible

* Files opened for write only can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried

### --vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

### --vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum     Don't compare checksums on up/download.
    --no-modtime      Don't read/write the modification time (can speed things up).
    --no-seek         Don't allow seeking in files.
    --read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

## VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

## Auth Proxy

If you supply the parameter `--auth-proxy /path/to/program` then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

**PLEASE NOTE:** `--auth-proxy` and `--authorized-keys` cannot be used together, if `--auth-proxy` is set the authorized keys option will be ignored.

There is an example program [bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) in the rclone source code.

The program's job is to take a `user` and `pass` on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

The generated config must have this extra parameter

- `_root` - root to use for the backend

And it may have this parameter

- `_obscure` - comma separated strings for parameters to obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

```
{
	"user": "me",
	"pass": "mypassword"
}
```

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

```
{
	"user": "me",
	"public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}
```

And as an example return this on STDOUT

```
{
	"type": "sftp",
	"_root": "",
	"_obscure": "pass",
	"user": "me",
	"pass": "mypassword",
	"host": "sftp.example.com"
}
```

This would mean that an SFTP backend would be created on the fly for the `user` and `pass`/`public_key` returned in the output to the host given. Note that since `_obscure` is set to `pass`, rclone will obscure the `pass` parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied `user` in any way, for example to proxy to many different sftp backends, you could make the `user` be `user@example.com` and then set the `host` to `example.com` in the output and the user to `user`. For security you'd probably want to restrict the `host` to a limited list.

Note that an internal cache is keyed on `user` so only use that for configuration, don't use `pass` or `public_key`. This also means that if a user's password or public-key is changed, the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.
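For illustration, here is a minimal proxy sketch in shell (not the bundled test_proxy.py). It assumes jq is installed, that the request arrives as a single line of JSON, and that every user maps to the same hypothetical sftp host - a real proxy would validate the credentials before answering:

    #!/usr/bin/env bash
    # Read one JSON request from rclone on STDIN.
    read -r request
    user=$(printf '%s' "$request" | jq -r .user)
    pass=$(printf '%s' "$request" | jq -r .pass)
    # Emit a complete backend config on STDOUT; rclone obscures "pass"
    # before creating the backend because it is listed in _obscure.
    jq -cn --arg user "$user" --arg pass "$pass" \
      '{type: "sftp", _root: "", _obscure: "pass",
        user: $user, pass: $pass, host: "sftp.example.com"}'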
```
rclone serve ftp remote:path [flags]
```

## Options

```
      --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:2121")
      --auth-proxy string                      A program to use to create the backend from the auth.
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for ftp
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --pass string                            Password for authentication. (empty value allow every password)
      --passive-port string                    Passive port range to use. (default "30000-32000")
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --public-ip string                       Public IP address to advertise for passive connections.
      --read-only                              Mount read-only.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --user string                            User name for authentication. (default "anonymous")
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```

See the [global flags page](https://rclone.org/flags/) for global options not listed here.

## SEE ALSO

* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

# rclone serve http

Serve the remote over HTTP.

## Synopsis

rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http to read from it.

You can use the filter flags (eg --include, --exclude) to control what is served.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

## Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:

| Parameter   | Description |
| :---------- | :---------- |
| .Name       | The full path of a file/directory. |
| .Title      | Directory listing of .Name |
| .Sort       | The current sort used. This is changeable via ?sort= parameter |
|             | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
| .Order      | The current ordering used. This is changeable via ?order= parameter |
|             | Order Options: asc,desc (default asc) |
| .Query      | Currently unused. |
| .Breadcrumb | Allows for creating a relative navigation |
|-- .Link     | The relative to the root link of the Text. |
|-- .Text     | The Name of the directory. |
| .Entries    | Information about a specific file/directory. |
|-- .URL      | The 'url' of an entry. |
|-- .Leaf     | Currently same as 'URL' but intended to be 'just' the name. |
|-- .IsDir    | Boolean for if an entry is a directory or not. |
|-- .Size     | Size in Bytes of the entry. |
|-- .ModTime  | The UTC timestamp of an entry. |

### Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

    touch htpasswd
    htpasswd -B htpasswd user
    htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

### SSL/TLS

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
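Putting the pieces above together (the certificate and key file names are placeholders - any PEM files of your own will do), an authenticated https server using the htpasswd file created as shown above could be started like this:

    rclone serve http --addr :8443 --cert server.crt --key server.key --htpasswd htpasswd remote:path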
## VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

## VFS Directory Cache

Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

## VFS File Buffering

The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`.

## VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
    --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable.

The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

* Files can't be opened for both read AND write
* Files opened for write can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files open for read with O_TRUNC will be opened write only
* Files open for write only will behave as if O_TRUNC was supplied
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried

### --vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

These operations are not possible

* Files opened for write only can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried

### --vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

### --vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum     Don't compare checksums on up/download.
    --no-modtime      Don't read/write the modification time (can speed things up).
    --no-seek         Don't allow seeking in files.
    --read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")
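To make the doubling concrete (values chosen only for illustration), the settings below make rclone request 64M, then 128M, 256M, 512M and finally 1G chunks, after which the chunk size stays at the 1G limit:

    rclone serve http --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G remote:path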
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

## VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

```
rclone serve http remote:path [flags]
```

## Options

```
      --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:8080")
      --baseurl string                         Prefix for URLs - leave blank for root.
      --cert string                            SSL PEM key (concatenation of certificate and CA certificate)
      --client-ca string                       Client certificate authority to verify clients with
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for http
      --htpasswd string                        htpasswd file - if not provided no authentication is done
      --key string                             SSL PEM Private key
      --max-header-bytes int                   Maximum size of request header (default 4096)
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --pass string                            Password for authentication.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --realm string                           realm for authentication (default "rclone")
      --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
      --server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
      --template string                        User Specified Template.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --user string                            User name for authentication.
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```

See the [global flags page](https://rclone.org/flags/) for global options not listed here.

## SEE ALSO

* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

# rclone serve restic

Serve the remote for restic's REST API.

## Synopsis

rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

[Restic](https://restic.net/) is a command line program for doing backups.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

## Setting up rclone for use by restic

First [set up a remote for your chosen cloud provider](https://rclone.org/docs/#configure).

Once you have set up the remote, check it is working with, for example, "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.

Now start the rclone restic server

    rclone serve restic -v remote:backup

Where you can replace "backup" in the above by whatever path in the remote you wish to use.

By default this will serve on "localhost:8080"; you can change this with the "--addr" flag.

You might wish to start this server on boot.

## Setting up restic to use rclone

Now you can [follow the restic instructions](http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) on setting up restic.

Note that you will need restic 0.8.2 or later to interoperate with rclone.

For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.
For example:

    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
    $ export RESTIC_PASSWORD=yourpassword
    $ restic init
    created restic backend 8b1a4b56ae at rest:http://localhost:8080/
    Please note that knowledge of your password is required to access
    the repository. Losing your password means that your data is
    irrecoverably lost.
    $ restic backup /path/to/files/to/backup
    scan [/path/to/files/to/backup]
    scanned 189 directories, 312 files in 0:00
    [0:00] 100.00%  38.128 MiB / 38.128 MiB  501 / 501 items  0 errors  ETA 0:00
    duration: 0:00
    snapshot 45c8fdd8 saved

### Multiple repositories

Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these **must** end with /. Eg

    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
    # backup user1 stuff
    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
    # backup user2 stuff

### Private repositories

The "--private-repos" flag can be used to limit users to repositories starting with a path of `/<username>/`.
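For instance (user name, password and file names here are illustrative), combining --private-repos with an htpasswd file means the user "alice" can only reach repositories under /alice/:

    rclone serve restic --addr :8080 --private-repos --htpasswd htpasswd remote:backup
    export RESTIC_REPOSITORY=rest:http://alice:secret@localhost:8080/alice/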
## Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:

| Parameter   | Description |
| :---------- | :---------- |
| .Name       | The full path of a file/directory. |
| .Title      | Directory listing of .Name |
| .Sort       | The current sort used. This is changeable via ?sort= parameter |
|             | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
| .Order      | The current ordering used. This is changeable via ?order= parameter |
|             | Order Options: asc,desc (default asc) |
| .Query      | Currently unused. |
| .Breadcrumb | Allows for creating a relative navigation |
|-- .Link     | The relative to the root link of the Text. |
|-- .Text     | The Name of the directory. |
| .Entries    | Information about a specific file/directory. |
|-- .URL      | The 'url' of an entry. |
|-- .Leaf     | Currently same as 'URL' but intended to be 'just' the name. |
|-- .IsDir    | Boolean for if an entry is a directory or not. |
|-- .Size     | Size in Bytes of the entry. |
|-- .ModTime  | The UTC timestamp of an entry. |

### Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

    touch htpasswd
    htpasswd -B htpasswd user
    htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

### SSL/TLS

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

```
rclone serve restic remote:path [flags]
```

## Options

```
      --addr string                     IPaddress:Port or :Port to bind server to. (default "localhost:8080")
      --append-only                     disallow deletion of repository data
      --baseurl string                  Prefix for URLs - leave blank for root.
      --cert string                     SSL PEM key (concatenation of certificate and CA certificate)
      --client-ca string                Client certificate authority to verify clients with
  -h, --help                            help for restic
      --htpasswd string                 htpasswd file - if not provided no authentication is done
      --key string                      SSL PEM Private key
      --max-header-bytes int            Maximum size of request header (default 4096)
      --pass string                     Password for authentication.
      --private-repos                   users can only access their private repo
      --realm string                    realm for authentication (default "rclone")
      --server-read-timeout duration    Timeout for server reading data (default 1h0m0s)
      --server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
      --stdio                           run an HTTP2 server on stdin/stdout
      --template string                 User Specified Template.
      --user string                     User name for authentication.
```

See the [global flags page](https://rclone.org/flags/) for global options not listed here.

## SEE ALSO

* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

# rclone serve sftp

Serve the remote over SFTP.

## Synopsis

rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.

You can use the filter flags (eg --include, --exclude) to control what is served.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.

Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that it can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend.

If you don't supply a --key then rclone will generate one and cache it for later use.

By default the server binds to localhost:2022 - if you want it to be reachable externally then supply "--addr :2022" for example.

Note that the default of "--vfs-cache-mode off" is fine for the rclone sftp backend, but it may not be with other SFTP clients.
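For example (the user name and password are placeholders), serving on the default localhost:2022 and connecting with a stock OpenSSH sftp client from another shell:

    rclone serve sftp --user alice --pass secret remote:path
    sftp -P 2022 alice@localhost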
## VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

## VFS Directory Cache

Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

## VFS File Buffering

The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`.

## VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
    --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable.

The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

* Files can't be opened for both read AND write
* Files opened for write can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files open for read with O_TRUNC will be opened write only
* Files open for write only will behave as if O_TRUNC was supplied
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried

### --vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

These operations are not possible

* Files opened for write only can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried

### --vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

### --vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum     Don't compare checksums on up/download.
    --no-modtime      Don't read/write the modification time (can speed things up).
    --no-seek         Don't allow seeking in files.
    --read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

## VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

## Auth Proxy

If you supply the parameter `--auth-proxy /path/to/program` then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

**PLEASE NOTE:** `--auth-proxy` and `--authorized-keys` cannot be used together, if `--auth-proxy` is set the authorized keys option will be ignored.

There is an example program [bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) in the rclone source code.
The program's job is to take a `user` and `pass` on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

The config generated must have this extra parameter

- `_root` - root to use for the backend

And it may have this parameter

- `_obscure` - comma separated strings for parameters to obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

```
{
	"user": "me",
	"pass": "mypassword"
}
```

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

```
{
	"user": "me",
	"public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}
```

And as an example return this on STDOUT

```
{
	"type": "sftp",
	"_root": "",
	"_obscure": "pass",
	"user": "me",
	"pass": "mypassword",
	"host": "sftp.example.com"
}
```

This would mean that an SFTP backend would be created on the fly for the `user` and `pass`/`public_key` returned in the output to the host given. Note that since `_obscure` is set to `pass`, rclone will obscure the `pass` parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied `user` in any way, for example to proxy to many different sftp backends, you could make the `user` be `user@example.com` and then set the `host` to `example.com` in the output and the user to `user`. For security you'd probably want to restrict the `host` to a limited list.

Note that an internal cache is keyed on `user` so only use that for configuration, don't use `pass` or `public_key`. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.

```
rclone serve sftp remote:path [flags]
```

## Options

```
      --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:2022")
      --auth-proxy string                      A program to use to create the backend from the auth.
      --authorized-keys string                 Authorized keys file (default "~/.ssh/authorized_keys")
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for sftp
      --key stringArray                        SSH private host key file (Can be multi-valued, leave blank to auto generate)
      --no-auth                                Allow connections with no authentication if set.
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --pass string                            Password for authentication.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --user string                            User name for authentication.
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```

See the [global flags page](https://rclone.org/flags/) for global options not listed here.

## SEE ALSO

* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

# rclone serve webdav

Serve remote:path over webdav.

## Synopsis

rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.

## Webdav options

### --etag-hash

This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.

If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1". Use "rclone hashsum" to see the full list.

## Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:

| Parameter   | Description |
| :---------- | :---------- |
| .Name       | The full path of a file/directory. |
| .Title      | Directory listing of .Name |
| .Sort       | The current sort used. This is changeable via ?sort= parameter |
|             | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
| .Order      | The current ordering used. This is changeable via ?order= parameter |
|             | Order Options: asc,desc (default asc) |
| .Query      | Currently unused. |
| .Breadcrumb | Allows for creating a relative navigation |
|-- .Link     | The relative to the root link of the Text. |
|-- .Text     | The Name of the directory. |
| .Entries    | Information about a specific file/directory. |
|-- .URL      | The 'url' of an entry. |
|-- .Leaf     | Currently same as 'URL' but intended to be 'just' the name. |
|-- .IsDir    | Boolean for if an entry is a directory or not. |
|-- .Size     | Size in Bytes of the entry. |
|-- .ModTime  | The UTC timestamp of an entry. |

### Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

    touch htpasswd
    htpasswd -B htpasswd user
    htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

### SSL/TLS

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

## VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

## VFS Directory Cache

Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

## VFS File Buffering

The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.
This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`.

## VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string                     Directory rclone will use for caching.
    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration           Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration              Time to writeback files after last use when using cache. (default 5s)

If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable.

The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible:

* Files can't be opened for both read AND write
* Files opened for write can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files open for read with O_TRUNC will be opened write only
* Files open for write only will behave as if O_TRUNC was supplied
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried

### --vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, while using minimal disk space.

These operations are not possible:

* Files opened for write only can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried

### --vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.
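As an illustration, enabling the write cache when serving a remote might look like this (a sketch; `remote:path` is a placeholder for your own remote):

```
rclone serve webdav remote:path --vfs-cache-mode writes
```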
### --vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum     Don't compare checksums on up/download.
    --no-modtime      Don't read/write the modification time (can speed things up).
    --no-seek         Don't allow seeking in files.
    --read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

## VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is.
If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". ## Auth Proxy If you supply the parameter `--auth-proxy /path/to/program` then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocl with input on STDIN and output on STDOUT. **PLEASE NOTE:** `--auth-proxy` and `--authorized-keys` cannot be used together, if `--auth-proxy` is set the authorized keys option will be ignored. There is an example program [bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) in the rclone source code. The program's job is to take a `user` and `pass` on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. This config generated must have this extra parameter - `_root` - root to use for the backend And it may have this parameter - `_obscure` - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: ``` { "user": "me", "pass": "mypassword" } ``` If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: ``` { "user": "me", "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ``` And as an example return this on STDOUT ``` { "type": "sftp", "_root": "", "_obscure": "pass", "user": "me", "pass": "mypassword", "host": "sftp.example.com" } ``` This would mean that an SFTP backend would be created on the fly for the `user` and `pass`/`public_key` returned in the output to the host given. Note that since `_obscure` is set to `pass`, rclone will obscure the `pass` parameter before creating the backend (which is required for sftp backends). The program can manipulate the supplied `user` in any way, for example to make proxy to many different sftp backends, you could make the `user` be `user@example.com` and then set the `host` to `example.com` in the output and the user to `user`. For security you'd probably want to restrict the `host` to a limited list. Note that an internal cache is keyed on `user` so only use that for configuration, don't use `pass` or `public_key`. 
This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.

```
rclone serve webdav remote:path [flags]
```

## Options

```
      --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:8080")
      --auth-proxy string                      A program to use to create the backend from the auth.
      --baseurl string                         Prefix for URLs - leave blank for root.
      --cert string                            SSL PEM key (concatenation of certificate and CA certificate)
      --client-ca string                       Client certificate authority to verify clients with
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --disable-dir-list                       Disable HTML directory list on GET request for a directory
      --etag-hash string                       Which hash to use for the ETag, or auto or blank for off
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for webdav
      --htpasswd string                        htpasswd file - if not provided no authentication is done
      --key string                             SSL PEM Private key
      --max-header-bytes int                   Maximum size of request header (default 4096)
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --pass string                            Password for authentication.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --realm string                           realm for authentication (default "rclone")
      --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
      --server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
      --template string                        User Specified Template.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --user string                            User name for authentication.
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```

See the [global flags page](https://rclone.org/flags/) for global options not listed here.

## SEE ALSO

* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

# rclone settier

Changes storage class/tier of objects in remote.
## Synopsis

rclone settier changes storage tier or class at remote if supported. A few cloud storage services provide different storage classes on objects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive, Google Cloud Storage - Regional Storage, Nearline, Coldline etc.

Note that certain tier changes make objects unavailable for immediate access. For example tiering to archive in Azure Blob storage puts objects in a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, S3 to Glacier makes objects inaccessible.

You can use it to tier a single object

    rclone settier Cool remote:path/file

Or use rclone filters to set tier on only specific files

    rclone --include "*.txt" settier Hot remote:path/dir

Or just provide a remote directory and all files in the directory will be tiered

    rclone settier tier remote:path/dir

```
rclone settier tier remote:path [flags]
```

## Options

```
  -h, --help   help for settier
```

See the [global flags page](https://rclone.org/flags/) for global options not listed here.

## SEE ALSO

* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

# rclone touch

Create new file or change file modification time.

## Synopsis

Set the modification time on object(s) as specified by remote:path to have the current time.

If remote:path does not exist then a zero sized object will be created unless the --no-create flag is provided.

If --timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of:

- 'YYMMDD' - eg. 17.10.30
- 'YYYY-MM-DDTHH:MM:SS' - eg. 2006-01-02T15:04:05
- 'YYYY-MM-DDTHH:MM:SS.SSS' - eg. 2006-01-02T15:04:05.123456789

Note that --timestamp is in UTC. If you want local time then add the --localtime flag.

```
rclone touch remote:path [flags]
```

## Options

```
  -h, --help               help for touch
      --localtime          Use localtime for timestamp, not UTC.
  -C, --no-create          Do not create the file if it does not exist.
  -t, --timestamp string   Use specified time instead of the current time of day.
```

See the [global flags page](https://rclone.org/flags/) for global options not listed here.

## SEE ALSO

* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

# rclone tree

List the contents of the remote in a tree like fashion.

## Synopsis

rclone tree lists the contents of a remote in a similar way to the unix tree command.

For example

    $ rclone tree remote:path
    /
    ├── file1
    ├── file2
    ├── file3
    └── subdir
        ├── file4
        └── file5

    1 directories, 5 files

You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.

The tree command has many options for controlling the listing which are compatible with the unix tree command. Note that not all of them have short options as they conflict with rclone's short options.

```
rclone tree remote:path [flags]
```

## Options

```
  -a, --all             All files are listed (list . files too).
  -C, --color           Turn colorization on always.
  -d, --dirs-only       List directories only.
      --dirsfirst       List directories before files (-U disables).
      --full-path       Print the full path prefix for each file.
  -h, --help            help for tree
      --human           Print the size in a more human readable way.
      --level int       Descend only level directories deep.
  -D, --modtime         Print the date of last modification.
      --noindent        Don't print indentation lines.
      --noreport        Turn off file/directory count at end of tree listing.
  -o, --output string   Output to file instead of stdout.
  -p, --protections     Print the protections for each file.
  -Q, --quote           Quote filenames with double quotes.
  -s, --size            Print the size in bytes of each file.
      --sort string     Select sort: name,version,size,mtime,ctime.
      --sort-ctime      Sort files by last status change time.
  -t, --sort-modtime    Sort files by last modification time.
  -r, --sort-reverse    Reverse the order of the sort.
  -U, --unsorted        Leave files unsorted.
      --version         Sort files alphanumerically by version.
```

See the [global flags page](https://rclone.org/flags/) for global options not listed here.

## SEE ALSO

* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Copying single files
--------------------

rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error `Failed to create file system for "remote:file": is a file not a directory` if it isn't.

For example, suppose you have a remote with a file in called `test.jpg`, then you could copy just that file like this

    rclone copy remote:test.jpg /tmp/download

The file `test.jpg` will be placed inside `/tmp/download`.

This is equivalent to specifying

    rclone copy --files-from /tmp/files remote: /tmp/download

Where `/tmp/files` contains the single line

    test.jpg

It is recommended to use `copy` when copying individual files, not `sync`. They have pretty much the same effect but `copy` will use a lot less memory.

Syntax of remote paths
----------------------

The syntax of the paths passed to the rclone command are as follows.

### /path/to/dir

This refers to the local file system.

On Windows `\` may be used instead of `/` in local paths **only**; non-local paths must use `/`.

These paths needn't start with a leading `/` - if they don't then they will be relative to the current directory.

### remote:path/to/dir

This refers to a directory `path/to/dir` on `remote:` as defined in the config file (configured with `rclone config`).

### remote:/path/to/dir

On most backends this refers to the same directory as `remote:path/to/dir` and that format should be preferred. On a very small number of remotes (FTP, SFTP, Dropbox for business) this will refer to a different directory. On these, paths without a leading `/` will refer to your "home" directory and paths with a leading `/` will refer to the root.

### :backend:path/to/dir

This is an advanced form for creating remotes on the fly. `backend` should be the name or prefix of a backend (the `type` in the config file) and all the configuration for the backend should be provided on the command line (or in environment variables).

Here are some examples:

    rclone lsd --http-url https://pub.rclone.org :http:

To list all the directories in the root of `https://pub.rclone.org/`.

    rclone lsf --http-url https://example.com :http:path/to/dir

To list files and directories in `https://example.com/path/to/dir/`

    rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir

To copy files and directories in `https://example.com/path/to/dir` to `/tmp/dir`.

    rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir

To copy files and directories from `example.com` in the relative directory `path/to/dir` to `/tmp/dir` using sftp.

### Valid remote names

- Remote names may only contain 0-9, A-Z, a-z, _, - and space.
- Remote names may not start with -.
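A remote whose name contains a space is valid, but will usually need quoting on the command line (see the next section); a sketch, assuming a remote named `my backup` has been configured:

    rclone lsd "my backup:"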
Quoting and the shell
---------------------

When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.

Here are some gotchas which may help users unfamiliar with the shell rules

### Linux / OSX ###

If your names have spaces or shell metacharacters (eg `*`, `?`, `$`, `'`, `"` etc) then you must quote them. Use single quotes `'` by default.

    rclone copy 'Important files?' remote:backup

If you want to send a `'` you will need to use `"`, eg

    rclone copy "O'Reilly Reviews" remote:backup

The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.

### Windows ###

If your names have spaces in them you need to put them in `"`, eg

    rclone copy "E:\folder name\folder name\folder name" remote:backup

If you are using the root directory on its own then don't quote it (see [#464](https://github.com/rclone/rclone/issues/464) for why), eg

    rclone copy E:\ remote:backup

Copying files or directories with `:` in the names
--------------------------------------------------

rclone uses `:` to mark a remote name. This is, however, a valid filename component in non-Windows OSes. The remote name parser will only search for a `:` up to the first `/` so if you need to act on a file or directory like this then use the full path starting with a `/`, or use `./` as a current directory prefix.

So to sync a directory called `sync:me` to a remote called `remote:` use

    rclone sync -i ./sync:me remote:path

or

    rclone sync -i /full/path/to/sync:me remote:path

Server Side Copy
----------------

Most remotes (but not all - see [the overview](https://rclone.org/overview/#optional-features)) support server side copy.

This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.

Eg

    rclone copy s3:oldbucket s3:newbucket

Will copy the contents of `oldbucket` to `newbucket` without downloading and re-uploading.

Remotes which don't support server side copy **will** download and re-upload in this case.

Server side copies are used with `sync` and `copy` and will be identified in the log when using the `-v` flag. The `move` command may also use them if the remote doesn't support server side move directly. This is done by issuing a server side copy then a delete which is much quicker than a download and re-upload.

Server side copies will only be attempted if the remote names are the same.

This can be used when scripting to make aged backups efficiently, eg

    rclone sync -i remote:current-backup remote:previous-backup
    rclone sync -i /path/to/files remote:current-backup

Options
-------

Rclone has a number of options to control its behaviour.

Options that take parameters can have the values passed in two ways, `--option=value` or `--option value`. However boolean (true/false) options behave slightly differently to the other options in that `--boolean` sets the option to `true` and the absence of the flag sets it to `false`. It is also possible to specify `--boolean=false` or `--boolean=true`. Note that `--boolean false` is not valid - this is parsed as `--boolean` and the `false` is parsed as an extra command line argument for rclone.

Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m".
Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Options which use SIZE use kByte by default. However, a suffix of `b` for bytes, `k` for kBytes, `M` for MBytes, `G` for GBytes, `T` for TBytes and `P` for PBytes may be used. These are the binary units, eg 1, 2\*\*10, 2\*\*20, 2\*\*30 respectively.

### --backup-dir=DIR ###

When using `sync`, `copy` or `move` any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.

If `--suffix` is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.

The remote in use must support server side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory.

For example

    rclone sync -i /path/to/local remote:current --backup-dir remote:old

will sync `/path/to/local` to `remote:current`, but any files which would have been updated or deleted will be stored in `remote:old`.

If running rclone from a script you might want to use today's date as the directory name passed to `--backup-dir` to store the old files, or you might want to pass `--suffix` with today's date.

See `--compare-dest` and `--copy-dest`.

### --bind string ###

Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.

### --bwlimit=BANDWIDTH_SPEC ###

This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable.

Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is `0` which means to not limit bandwidth.

For example, to limit bandwidth usage to 10 MBytes/s use `--bwlimit 10M`

It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as `WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH...` where: `WEEKDAY` is an optional element. It can be written as the whole word or using only the first 3 characters. `HH:MM` is an hour from 00:00 to 23:59.

An example of a typical timetable to avoid link saturation during daytime working hours could be:

`--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"`

In this example, the transfer bandwidth will be set every day to 512kBytes/sec at 8am. At noon, it will rise to 10MBytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.

An example of timetable with `WEEKDAY` could be:

`--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"`

It means that the transfer bandwidth will be set to 512kBytes/sec on Monday. It will rise to 10MBytes/s before the end of Friday. At 10:00 on Saturday it will be set to 1MByte/s. From 20:00 on Sunday it will be unlimited.

Timeslots without a weekday are extended to the whole week. So this example:

`--bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"`

is equal to this:

`--bwlimit "Mon-00:00,512 Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"`

Bandwidth limits only apply to the data transfer.
They don't apply to the bandwidth of the directory listings etc.

Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a `--bwlimit 0.625M` parameter for rclone.

On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled by sending a `SIGUSR2` signal to rclone. This allows you to remove the limitations of a long running rclone transfer and to restore it back to the value specified with `--bwlimit` quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:

    kill -SIGUSR2 $(pidof rclone)

If you configure rclone with a [remote control](/rc) then you can change the bwlimit dynamically:

    rclone rc core/bwlimit rate=1M

### --bwlimit-file=BANDWIDTH_SPEC ###

This option controls the per file bandwidth limit. For the options see the `--bwlimit` flag.

For example use this to allow no transfers to be faster than 1MByte/s

    --bwlimit-file 1M

This can be used in conjunction with `--bwlimit`.

Note that if a schedule is provided the file will use the schedule in effect at the start of the transfer.

### --buffer-size=SIZE ###

Use this sized buffer to speed up file transfers. Each `--transfer` will use this much memory for buffering.

When using `mount` or `cmount` each open file descriptor will use this much memory for buffering. See the [mount](https://rclone.org/commands/rclone_mount/#file-buffering) documentation for more details.

Set to `0` to disable the buffering for the minimum memory usage.

Note that the memory allocation of the buffers is influenced by the [--use-mmap](#use-mmap) flag.

### --check-first ###

If this flag is set then in a `sync`, `copy` or `move`, rclone will do all the checks to see whether files need to be transferred before doing any of the transfers. Normally rclone would start running transfers as soon as possible.

This flag can be useful on IO limited systems where transfers interfere with checking.

Using this flag can use more memory as it effectively sets `--max-backlog` to infinite. This means that all the info on the objects to transfer is held in memory before the transfers start.

### --checkers=N ###

The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.

The default is to run 8 checkers in parallel.

### -c, --checksum ###

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.

This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.

This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the [overview section](https://rclone.org/overview/).

Eg `rclone --checksum sync s3:/bucket swift:/bucket` would run much quicker than without the `--checksum` flag.

When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.

### --compare-dest=DIR ###

When using `sync`, `copy` or `move` DIR is checked in addition to the destination for files.
If a file identical to the source is found that file is NOT copied from source. This is useful to copy just files that have changed since the last backup.

You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.

See `--copy-dest` and `--backup-dir`.

### --config=CONFIG_FILE ###

Specify the location of the rclone config file.

Normally the config file is in your home directory as a file called `.config/rclone/rclone.conf` (or `.rclone.conf` if created with an older version). If `$XDG_CONFIG_HOME` is set it will be at `$XDG_CONFIG_HOME/rclone/rclone.conf`.

If there is a file `rclone.conf` in the same directory as the rclone executable it will be preferred. This file must be created manually for Rclone to use it; it will never be created automatically.

If you run `rclone config file` you will see where the default location is for you.

Use this flag to override the config location, eg `rclone --config=".myconfig" .config`.

### --contimeout=TIME ###

Set the connection timeout. This should be in go time format which looks like `5s` for 5 seconds, `10m` for 10 minutes, or `3h30m`.

The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is `1m` by default.

### --copy-dest=DIR ###

When using `sync`, `copy` or `move` DIR is checked in addition to the destination for files. If a file identical to the source is found that file is server side copied from DIR to the destination. This is useful for incremental backup.

The remote in use must support server side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.

See `--compare-dest` and `--backup-dir`.

### --dedupe-mode MODE ###

Mode to run dedupe command in. One of `interactive`, `skip`, `first`, `newest`, `oldest`, `rename`. The default is `interactive`. See the dedupe command for more information as to what these options mean.

### --disable FEATURE,FEATURE,... ###

This disables a comma separated list of optional features. For example to disable server side move and server side copy use:

    --disable move,copy

The features can be put in any case.

To see a list of which features can be disabled use:

    --disable help

See the overview [features](https://rclone.org/overview/#features) and [optional features](https://rclone.org/overview/#optional-features) to get an idea of which feature does what.

This flag can be useful for debugging and in exceptional circumstances (eg Google Drive limiting the total volume of Server Side Copies to 100GB/day).

### -n, --dry-run ###

Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the `sync` command which deletes files in the destination.

### --expect-continue-timeout=TIME ###

This specifies the amount of time to wait for a server's first response headers after fully writing the request headers if the request has an "Expect: 100-continue" header. Not all backends support using this.

Zero means no timeout and causes the body to be sent immediately, without waiting for the server to approve. This time does not include the time to send the request header.

The default is `1s`. Set to `0` to disable.

### --error-on-no-transfer ###

By default, rclone will exit with return code 0 if there were no errors.

This option allows rclone to return exit code 9 if no files were transferred between the source and destination.
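A minimal sketch of acting on that exit code from a shell script (`remote:backup` and the source path are placeholders):

```
rclone copy --error-on-no-transfer /path/to/source remote:backup
if [ $? -eq 9 ]; then
    echo "no files were transferred"
fi
```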
This allows using rclone in scripts, and triggering follow-on actions if data was copied, or skipping if not.

NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check and adjust your scripts accordingly!

### --header ###

Add an HTTP header for all transactions. The flag can be repeated to add multiple headers.

If you want to add headers only for uploads use `--header-upload` and if you want to add headers only for downloads use `--header-download`.

This flag is supported for all HTTP based backends even those not supported by `--header-upload` and `--header-download` so may be used as a workaround for those with care.

```
rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes"
```

### --header-download ###

Add an HTTP header for all download transactions. The flag can be repeated to add multiple headers.

```
rclone sync -i s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"
```

See the GitHub issue [here](https://github.com/rclone/rclone/issues/59) for currently supported backends.

### --header-upload ###

Add an HTTP header for all upload transactions. The flag can be repeated to add multiple headers.

```
rclone sync -i ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"
```

See the GitHub issue [here](https://github.com/rclone/rclone/issues/59) for currently supported backends.

### --ignore-case-sync ###

Using this option will cause rclone to ignore the case of the files when synchronizing so files will not be copied/synced when the existing filenames are the same, even if the casing is different.

### --ignore-checksum ###

Normally rclone will check that the checksums of transferred files match, and give an error "corrupted on transfer" if they don't.

You can use this option to skip that check. You should only use it if you have had the "corrupted on transfer" error message and you are sure you might want to transfer potentially corrupted data.

### --ignore-existing ###

Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.

While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.

### --ignore-size ###

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If `--checksum` is set then it only checks the checksum.

It will also cause rclone to skip verifying the sizes are the same after transfer.

This can be useful for transferring files to and from OneDrive which occasionally misreports the size of image files (see [#399](https://github.com/rclone/rclone/issues/399) for more info).

### -I, --ignore-times ###

Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.

Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using `--checksum`).

### --immutable ###

Treat source and destination files as immutable and disallow modification.

With this option set, files will be created and deleted as requested, but existing files will never be updated.
If an existing file does not match between the source and destination, rclone will give the error `Source and destination exist but do not match: immutable file modified`.

Note that only commands which transfer files (e.g. `sync`, `copy`, `move`) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. `delete`, `purge`) or implicitly (e.g. `sync`, `move`). Use `copy --immutable` if it is desired to avoid deletion as well as modification.

This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.

### -i / --interactive {#interactive}

This flag can be used to tell rclone that you wish a manual confirmation before destructive operations.

It is **recommended** that you use this flag while learning rclone especially with `rclone sync`.

For example

```
$ rclone delete -i /tmp/dir
rclone: delete "important-file.txt"?
y) Yes, this is OK (default)
n) No, skip this
s) Skip all delete operations with no more questions
!) Do all delete operations with no more questions
q) Exit rclone now.
y/n/s/!/q> n
```

The options mean

- `y`: **Yes**, this operation should go ahead. You can also press Return for this to happen. You'll be asked every time unless you choose `s` or `!`.
- `n`: **No**, do not do this operation. You'll be asked every time unless you choose `s` or `!`.
- `s`: **Skip** all the following operations of this type with no more questions. This takes effect until rclone exits. If there are any different kind of operations you'll be prompted for them.
- `!`: **Do all** the following operations with no more questions. Useful if you've decided that you don't mind rclone doing that kind of operation. This takes effect until rclone exits. If there are any different kind of operations you'll be prompted for them.
- `q`: **Quit** rclone now, just in case!

### --leave-root ###

During rmdirs it will not remove the root directory, even if it's empty.

### --log-file=FILE ###

Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the `-v` flag. See the [Logging section](#logging) for more info.

If FILE exists then rclone will append to it.

Note that if you are using the `logrotate` program to manage rclone's logs, then you should use the `copytruncate` option as rclone doesn't have a signal to rotate logs.

### --log-format LIST ###

Comma separated list of log format options. `date`, `time`, `microseconds`, `longfile`, `shortfile`, `UTC`. The default is "`date`,`time`".

### --log-level LEVEL ###

This sets the log level for rclone. The default log level is `NOTICE`.

`DEBUG` is equivalent to `-vv`. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.

`INFO` is equivalent to `-v`. It outputs information about each transfer and prints stats once a minute by default.

`NOTICE` is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.

`ERROR` is equivalent to `-q`. It only outputs error messages.

### --use-json-log ###

This switches the log format to JSON for rclone. The fields of json log are level, msg, source, time.

### --low-level-retries NUMBER ###

This controls the number of low level retries rclone does.

A low level retry is used to retry a failing operation - typically one HTTP request.
This might be uploading a chunk of a big file for example. You will see low level retries in the log with the `-v` flag.

This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the `--retries` flag) quicker.

Disable low level retries with `--low-level-retries 1`.

### --max-backlog=N ###

This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred.

This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use in the order of N kB of memory when the backlog is in use.

Setting this large allows rclone to calculate how many files are pending more accurately, give a more accurate estimated finish time and make `--order-by` work more accurately.

Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.

Setting this to a negative number will make the backlog as large as possible.

### --max-delete=N ###

This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.

### --max-depth=N ###

This modifies the recursion depth for all the commands except purge.

So if you do `rclone --max-depth 1 ls remote:path` you will see only the files in the top level directory. Using `--max-depth 2` means you will see all the files in first two directory levels and so on.

For historical reasons the `lsd` command defaults to using a `--max-depth` of 1 - you can override this with the command line flag.

You can use this command to disable recursion (with `--max-depth 1`).

Note that if you use this with `sync` and `--delete-excluded` the files not recursed through are considered excluded and will be deleted on the destination. Test first with `--dry-run` if you are not sure what will happen.

### --max-duration=TIME ###

Rclone will stop scheduling new transfers when it has run for the duration specified.

Defaults to off.

When the limit is reached any existing transfers will complete.

Rclone won't exit with an error if the transfer limit is reached.

### --max-transfer=SIZE ###

Rclone will stop transferring when it has reached the size specified. Defaults to off.

When the limit is reached all transfers will stop immediately.

Rclone will exit with exit code 8 if the transfer limit is reached.

### --cutoff-mode=hard|soft|cautious ###

This modifies the behavior of `--max-transfer`. Defaults to `--cutoff-mode=hard`.

Specifying `--cutoff-mode=hard` will stop transferring immediately when Rclone reaches the limit.

Specifying `--cutoff-mode=soft` will stop starting new transfers when Rclone reaches the limit.

Specifying `--cutoff-mode=cautious` will try to prevent Rclone from reaching the limit.

### --modify-window=TIME ###

When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.

The default is `1ns` unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be `1s` by default.

This command line flag allows you to override that computed default.

### --multi-thread-cutoff=SIZE ###

When downloading files to the local backend above this size, rclone will use multiple threads to download the file (default 250M).
Rclone preallocates the file (using `fallocate(FALLOC_FL_KEEP_SIZE)` on unix or `NTSetInformationFile` on Windows both of which take no time) then each thread writes directly into the file at the correct place. This means that rclone won't create fragmented or sparse files and there won't be any assembly time at the end of the transfer.

The number of threads used to download is controlled by `--multi-thread-streams`.

Use `-vv` if you wish to see info about the threads.

This will work with the `sync`/`copy`/`move` commands and friends `copyto`/`moveto`. Multi thread downloads will be used with `rclone mount` and `rclone serve` if `--vfs-cache-mode` is set to `writes` or above.

**NB** that this **only** works for a local destination but will work with any source.

**NB** that multi thread copies are disabled for local to local copies as they are faster without unless `--multi-thread-streams` is set explicitly.

**NB** on Windows using multi-thread downloads will cause the resulting files to be [sparse](https://en.wikipedia.org/wiki/Sparse_file). Use `--local-no-sparse` to disable sparse files (which may cause long delays at the start of downloads) or disable multi-thread downloads with `--multi-thread-streams 0`.

### --multi-thread-streams=N ###

When using multi thread downloads (see above `--multi-thread-cutoff`) this sets the maximum number of streams to use. Set to `0` to disable multi thread downloads (Default 4).

Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams Rclone divides the size of the file by the `--multi-thread-cutoff` and rounds up, up to the maximum set with `--multi-thread-streams`.

So if `--multi-thread-cutoff 250MB` and `--multi-thread-streams 4` are in effect (the defaults):

- 0MB..250MB files will be downloaded with 1 stream
- 250MB..500MB files will be downloaded with 2 streams
- 500MB..750MB files will be downloaded with 3 streams
- 750MB+ files will be downloaded with 4 streams

### --no-check-dest ###

The `--no-check-dest` can be used with `move` or `copy` and it causes rclone not to check the destination at all when copying files. This means that:

- the destination is not listed minimising the API calls
- files are always transferred
- this can cause duplicates on remotes which allow it (eg Google Drive)
- `--retries 1` is recommended otherwise you'll transfer everything again on a retry

This flag is useful to minimise the transactions if you know that none of the files are on the destination.

This is a specialized flag which should be ignored by most users!

### --no-gzip-encoding ###

Don't set `Accept-Encoding: gzip`. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with `Content-Encoding: gzip` but you uploaded compressed files.

There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.

### --no-traverse ###

The `--no-traverse` flag controls whether the destination file system is traversed when using the `copy` or `move` commands. `--no-traverse` is not compatible with `sync` and will be ignored if you supply it with `sync`.

If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then `--no-traverse` will stop rclone listing the destination and save time.
However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use `--no-traverse`.

See [rclone copy](https://rclone.org/commands/rclone_copy/) for an example of how to use it.

### --no-unicode-normalization ###

Don't normalize unicode characters in filenames during the sync routine.

Sometimes, an operating system will store filenames containing unicode parts in their decomposed form (particularly macOS). Some cloud storage systems will then recompose the unicode, resulting in duplicate files if the data is ever copied back to a local filesystem.

Using this flag will disable that functionality, treating each unicode character as unique. For example, by default é and é will be normalized into the same character. With `--no-unicode-normalization` they will be treated as unique characters.

### --no-update-modtime ###

When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.

This can be used if the remote is being synced with another tool also (eg the Google Drive client).

### --order-by string ###

The `--order-by` flag controls the order in which files in the backlog are processed in `rclone sync`, `rclone copy` and `rclone move`.

The order by string is constructed like this. The first part describes what aspect is being measured:

- `size` - order by the size of the files
- `name` - order by the full path of the files
- `modtime` - order by the modification date of the files

This can have a modifier appended with a comma:

- `ascending` or `asc` - order so that the smallest (or oldest) is processed first
- `descending` or `desc` - order so that the largest (or newest) is processed first
- `mixed` - order so that the smallest is processed first for some threads and the largest for others

If the modifier is `mixed` then it can have an optional percentage (which defaults to `50`), eg `size,mixed,25` which means that 25% of the threads should be taking the smallest items and 75% the largest. The threads which take the smallest first will always take the smallest first, and likewise the threads which take the largest first.

The `mixed` mode can be useful to minimise the transfer time when you are transferring a mixture of large and small files - the large files are guaranteed upload threads and bandwidth and the small files will be processed continuously.

If no modifier is supplied then the order is `ascending`.

For example

- `--order-by size,desc` - send the largest files first
- `--order-by modtime,ascending` - send the oldest files first
- `--order-by name` - send the files alphabetically by path first

If the `--order-by` flag is not supplied or it is supplied with an empty string then the default ordering will be used which is as scanned. With `--checkers 1` this is mostly alphabetical, however with the default `--checkers 8` it is somewhat random.

#### Limitations

The `--order-by` flag does not do a separate pass over the data. This means that it may transfer some files out of the order specified if

- there are no files in the backlog or the source has not been fully scanned yet
- there are more than [--max-backlog](#max-backlog-n) files in the backlog

Rclone will do its best to transfer the best file it has so in practice this should not cause a problem. Think of `--order-by` as being more of a best efforts flag rather than a perfect ordering.
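As an illustration (the paths and values here are placeholders, not recommendations), a transfer of mixed large and small files might combine `--order-by` with extra transfer threads so that a quarter of the eight threads work through the smallest files while the rest take the largest first:

    rclone copy --order-by size,mixed,25 --transfers 8 /path/to/source remote:backup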
### --password-command SpaceSepList ###

This flag supplies a program which should supply the config password when run. This is an alternative to rclone prompting for the password or setting the `RCLONE_CONFIG_PASS` variable.

The argument to this should be a command with a space separated list of arguments. If one of the arguments has a space in it then enclose it in `"`, if you want a literal `"` in an argument then enclose the argument in `"` and double the `"`. See [CSV encoding](https://godoc.org/encoding/csv) for more info.

Eg

    --password-command echo hello
    --password-command echo "hello with space"
    --password-command echo "hello with ""quotes"" and space"

See the [Configuration Encryption](#configuration-encryption) section for more info.

See a [Windows PowerShell example on the Wiki](https://github.com/rclone/rclone/wiki/Windows-Powershell-use-rclone-password-command-for-Config-file-password).

### -P, --progress ###

This flag makes rclone update the stats in a static block in the terminal providing a realtime overview of the transfer.

Any log messages will scroll above the static block. Log messages will push the static block down to the bottom of the terminal where it will stay.

Normally this is updated every 500ms but this period can be overridden with the `--stats` flag.

This can be used with the `--stats-one-line` flag for a simpler display.

Note: On Windows until [this bug](https://github.com/Azure/go-ansiterm/issues/26) is fixed all non-ASCII characters will be replaced with `.` when `--progress` is in use.

### -q, --quiet ###

This flag will limit rclone's output to error messages only.

### --refresh-times ###

The `--refresh-times` flag can be used to update modification times of existing files when they are out of sync on backends which don't support hashes.

This is useful if you uploaded files with the incorrect timestamps and you now wish to correct them.

This flag is **only** useful for destinations which don't support hashes (eg `crypt`).

This can be used with any of the sync commands `sync`, `copy` or `move`.

To use this flag you will need to be doing a modification time sync (so not using `--size-only` or `--checksum`). The flag will have no effect when using `--size-only` or `--checksum`.

If this flag is used when rclone comes to upload a file it will check to see if there is an existing file on the destination. If this file matches the source with size (and checksum if available) but has a differing timestamp then instead of re-uploading it, rclone will update the timestamp on the destination file. If the checksum does not match rclone will upload the new file. If the checksum is absent (eg on a `crypt` backend) then rclone will update the timestamp.

Note that some remotes can't set the modification time without re-uploading the file so this flag is less useful on them.

Normally if you are doing a modification time sync rclone will update modification times without `--refresh-times` provided that the remote supports checksums **and** the checksums match on the file. However if the checksums are absent then rclone will upload the file rather than setting the timestamp as this is the safe behaviour.

### --retries int ###

Retry the entire sync if it fails this many times (default 3).

Some remotes can be unreliable and a few retries help pick up the files which didn't get transferred because of errors.

Disable retries with `--retries 1`.

### --retries-sleep=TIME ###

This sets the interval between each retry specified by `--retries`.

The default is `0`. Use `0` to disable.
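For example, on an unreliable connection you might allow more attempts with a pause between them (the values and paths here are illustrative only):

    rclone sync --retries 5 --retries-sleep 30s /path/to/source remote:backup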
### --size-only ###

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.

This can be useful when transferring files from Dropbox which have been modified by the desktop sync client which doesn't set checksums or modification times in the same way as rclone.

### --stats=TIME ###

Commands which transfer data (`sync`, `copy`, `copyto`, `move`, `moveto`) will print data transfer stats at regular intervals to show their progress.

This sets the interval.

The default is `1m`. Use `0` to disable.

If you set the stats interval then all commands can show stats. This can be useful when running other commands, `check` or `mount` for example.

Stats are logged at `INFO` level by default which means they won't show at default log level `NOTICE`. Use `--stats-log-level NOTICE` or `-v` to make them show. See the [Logging section](#logging) for more info on log levels.

Note that on macOS you can send a SIGINFO (which is normally ctrl-T in the terminal) to make the stats print immediately.

### --stats-file-name-length integer ###

By default, the `--stats` output will truncate file names and paths longer than 40 characters. This is equivalent to providing `--stats-file-name-length 40`. Use `--stats-file-name-length 0` to disable any truncation of file names printed by stats.

### --stats-log-level string ###

Log level to show `--stats` output at. This can be `DEBUG`, `INFO`, `NOTICE`, or `ERROR`. The default is `INFO`. This means at the default level of logging which is `NOTICE` the stats won't show - if you want them to then use `--stats-log-level NOTICE`. See the [Logging section](#logging) for more info on log levels.

### --stats-one-line ###

When this is specified, rclone condenses the stats into a single line showing the most important stats only.

### --stats-one-line-date ###

When this is specified, rclone enables the single-line stats and prepends the display with a date string. The default is `2006/01/02 15:04:05 - `

### --stats-one-line-date-format ###

When this is specified, rclone enables the single-line stats and prepends the display with a user-supplied date string. The date string MUST be enclosed in quotes. Follow [golang specs](https://golang.org/pkg/time/#Time.Format) for date formatting syntax.

### --stats-unit=bits|bytes ###

By default, data transfer rates will be printed in bytes/second.

This option allows the data rate to be printed in bits/second.

Data transfer volume will still be reported in bytes.

The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.

The default is `bytes`.

### --suffix=SUFFIX ###

When using `sync`, `copy` or `move` any files which would have been overwritten or deleted will have the suffix added to them. If there is a file with the same path (after the suffix has been added), then it will be overwritten.

The remote in use must support server side move or copy and you must use the same remote as the destination of the sync.

This is for use with files to add the suffix in the current directory or with `--backup-dir`. See `--backup-dir` for more info.

For example

    rclone copy -i /path/to/local/file remote:current --suffix .bak

will copy `/path/to/local` to `remote:current`, but any files which would have been updated or deleted will have `.bak` added.
If using `rclone sync` with `--suffix` and without `--backup-dir` then it is recommended to put a filter rule in excluding the suffix otherwise the `sync` will delete the backup files.

    rclone sync -i /path/to/local/file remote:current --suffix .bak --exclude "*.bak"

### --suffix-keep-extension ###

When using `--suffix`, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.

So let's say we had `--suffix -2019-01-01`, without the flag `file.txt` would be backed up to `file.txt-2019-01-01` and with the flag it would be backed up to `file-2019-01-01.txt`. This can be helpful to make sure the suffixed files can still be opened.

### --syslog ###

On capable OSes (not Windows or Plan9) send all log output to syslog.

This can be useful for running rclone in a script or `rclone mount`.

### --syslog-facility string ###

If using `--syslog` this sets the syslog facility (eg `KERN`, `USER`). See `man syslog` for a list of possible facilities. The default facility is `DAEMON`.

### --tpslimit float ###

Limit HTTP transactions per second to this. Default is 0 which is used to mean unlimited transactions per second.

For example to limit rclone to 10 HTTP transactions per second use `--tpslimit 10`, or to 1 transaction every 2 seconds use `--tpslimit 0.5`.

Use this when the number of transactions per second from rclone is causing a problem with the cloud storage provider (eg getting you banned or rate limited).

This can be very useful for `rclone mount` to control the behaviour of applications using it.

See also `--tpslimit-burst`.

### --tpslimit-burst int ###

Max burst of transactions for `--tpslimit` (default `1`).

Normally `--tpslimit` will do exactly the number of transactions per second specified. However if you supply `--tpslimit-burst` then rclone can save up some transactions from when it was idle giving a burst of up to the parameter supplied.

For example if you provide `--tpslimit-burst 10` then if rclone has been idle for more than 10*`--tpslimit` then it can do 10 transactions very quickly before they are limited again.

This may be used to increase performance of `--tpslimit` without changing the long term average number of transactions per second.

### --track-renames ###

By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.

If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during `sync` operations and perform renaming server-side.

Files will be matched by size and hash - if both match then a rename will be considered.

If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console.

Encrypted destinations are not currently supported by `--track-renames` if `--track-renames-strategy` includes `hash`.

Note that `--track-renames` is incompatible with `--no-traverse` and that it uses extra memory to keep track of all the rename candidates.

Note also that `--track-renames` is incompatible with `--delete-before` and will select `--delete-after` instead of `--delete-during`.

### --track-renames-strategy (hash,modtime,leaf,size) ###

This option changes the matching criteria for `--track-renames`.
The matching is controlled by a comma separated selection of these tokens:

- `modtime` - the modification time of the file - not supported on all backends
- `hash` - the hash of the file contents - not supported on all backends
- `leaf` - the name of the file not including its directory name
- `size` - the size of the file (this is always enabled)

So using `--track-renames-strategy modtime,leaf` would match files based on modification time, the leaf of the file name and the size only.

Using `--track-renames-strategy modtime` or `leaf` can enable `--track-renames` support for encrypted destinations.

If nothing is specified, the default option is matching by `hash`es.

Note that the `hash` strategy is not supported with encrypted destinations.

### --delete-(before,during,after) ###

This option allows you to specify when files on your destination are deleted when you sync folders.

Specifying the value `--delete-before` will delete all files present on the destination, but not on the source *before* starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.

Specifying `--delete-during` will delete files while checking and uploading files. This is the fastest option and uses the least memory.

Specifying `--delete-after` (the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message `not deleting files as there were IO errors`.

### --fast-list ###

When doing anything which involves a directory listing (eg `sync`, `copy`, `ls` - in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.

However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).

If you use the `--fast-list` flag then rclone will use this method for listing directories. This will have the following consequences for the listing:

* It **will** use fewer transactions (important if you pay for them)
* It **will** use more memory. Rclone has to load the whole listing into memory.
* It *may* be faster because it uses fewer transactions
* It *may* be slower because it can't be parallelized

rclone should always give identical results with and without `--fast-list`.

If you pay for transactions and can fit your entire sync listing into memory then `--fast-list` is recommended. If you have a very big sync to do then don't use `--fast-list` otherwise you will run out of memory.

If you use `--fast-list` on a remote which doesn't support it, then rclone will just ignore it.

### --timeout=TIME ###

This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.

The default is `5m`. Set to `0` to disable.

### --transfers=N ###

The number of file transfers to run in parallel.
It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.

The default is to run 4 file transfers in parallel.

### -u, --update ###

This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.

This can be useful when transferring to a remote which doesn't support mod times directly (or when using `--use-server-modtime` to avoid extra API calls) as it is more accurate than a `--size-only` check and faster than using `--checksum`.

If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different. If `--checksum` is set then rclone will update the destination if the checksums differ too.

If an existing destination file is older than the source file then it will be updated if the size or checksum differs from the source file.

On remotes which don't support mod time directly (or when using `--use-server-modtime`) the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

### --use-mmap ###

If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by `--buffer-size`). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.

If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator which may use more memory as memory pages are returned less aggressively to the OS.

It is possible this does not work well on all platforms so it is disabled by default; in the future it may be enabled by default.

### --use-server-modtime ###

Some object-store backends (e.g. Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.

Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync using `--update`, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.

Using this flag on a sync operation without also using `--update` would cause all files modified at any time other than the last upload time to be uploaded again, which is probably not what you want.

### -v, -vv, --verbose ###

With `-v` rclone will tell you about each file that is transferred and a small number of significant events.

With `-vv` rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.

### -V, --version ###

Prints the version number.

SSL/TLS options
---------------

The outgoing SSL/TLS connections rclone makes can be controlled with these options. For example this can be very useful with the HTTP or WebDAV backends. Rclone HTTP servers have their own set of configuration for SSL/TLS which you can find in their documentation.
### --ca-cert string

This loads the PEM encoded certificate authority certificate and uses it to verify the certificates of the servers rclone connects to.

If you have generated certificates signed with a local CA then you will need this flag to connect to servers using those certificates.

### --client-cert string

This loads the PEM encoded client side certificate.

This is used for [mutual TLS authentication](https://en.wikipedia.org/wiki/Mutual_authentication).

The `--client-key` flag is required too when using this.

### --client-key string

This loads the PEM encoded client side private key used for mutual TLS authentication. Used in conjunction with `--client-cert`.

### --no-check-certificate=true/false ###

`--no-check-certificate` controls whether a client verifies the server's certificate chain and host name. If `--no-check-certificate` is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.

This option defaults to `false`.

**This should be used only for testing.**

Configuration Encryption
------------------------

Your configuration file contains information for logging in to your cloud services. This means that you should keep your `.rclone.conf` file in a secure location.

If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to supply the password every time you start rclone.

To add a password to your rclone configuration, execute `rclone config`.

```
>rclone config
Current remotes:

e) Edit existing remote
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/s/q>
```

Go into `s`, Set configuration password:

```
e/n/d/s/q> s
Your configuration is not encrypted.
If you add a password, you will protect your login information to cloud services.
a) Add Password
q) Quit to main menu
a/q> a
Enter NEW configuration password:
password:
Confirm NEW password:
password:
Password set
Your configuration is encrypted.
c) Change Password
u) Unencrypt configuration
q) Quit to main menu
c/u/q>
```

Your configuration is now encrypted, and every time you start rclone you will have to supply the password. See below for details. In the same menu, you can change the password or completely remove encryption from your configuration.

There is no way to recover the configuration if you lose your password.

rclone uses [nacl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.

While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, unless perhaps you use a very strong password.

If it is safe in your environment, you can set the `RCLONE_CONFIG_PASS` environment variable to contain your password, in which case it will be used for decrypting the configuration.

You can set this for a session from a script. For unix like systems save this to a file called `set-rclone-password`:

```
#!/bin/echo Source this file don't run it

read -s RCLONE_CONFIG_PASS
export RCLONE_CONFIG_PASS
```

Then source the file when you want to use it. From the shell you would do `source set-rclone-password`. It will then ask you for the password and set it in the environment variable.
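A session using that file might then look something like this (the remote name `myremote:` is just an example):

```
$ source set-rclone-password
(type the password - it is not echoed)
$ rclone lsd myremote:
```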
An alternate means of supplying the password is to provide a script which will retrieve the password and print it on standard output. This script should have a fully specified path name and not rely on any environment variables. The script is supplied either via `--password-command="..."` command line argument or via the `RCLONE_PASSWORD_COMMAND` environment variable.

One useful example of this is using the `passwordstore` application to retrieve the password:

```
export RCLONE_PASSWORD_COMMAND="pass rclone/config"
```

If the `passwordstore` password manager holds the password for the rclone configuration, using the script method means the password is primarily protected by the `passwordstore` system, and is never embedded in the clear in scripts, nor available for examination using the standard commands available.

It is quite possible with long running rclone sessions for copies of passwords to be innocently captured in log files or terminal scroll buffers, etc. Using the script method of supplying the password enhances the security of the config password considerably.

If you are running rclone inside a script, unless you are using the `--password-command` method, you might want to disable password prompts. To do that, pass the parameter `--ask-password=false` to rclone. This will make rclone fail instead of asking for a password if `RCLONE_CONFIG_PASS` doesn't contain a valid password, and `--password-command` has not been supplied.

Developer options
-----------------

These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with the remote name eg `--drive-test-option` - see the docs for the remote in question.

### --cpuprofile=FILE ###

Write CPU profile to file. This can be analysed with `go tool pprof`.

#### --dump flag,flag,flag ####

The `--dump` flag takes a comma separated list of flags to dump info about.

Note that some headers including `Accept-Encoding` as shown may not be correct in the request and the response may not show `Content-Encoding` if the Go standard library's auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.

The available flags are:

#### --dump headers ####

Dump HTTP headers with `Authorization:` lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.

Use `--dump auth` if you do want the `Authorization:` headers.

#### --dump bodies ####

Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.

Note that the bodies are buffered in memory so don't use this for enormous files.

#### --dump requests ####

Like `--dump bodies` but dumps the request bodies and the response headers. Useful for debugging download problems.

#### --dump responses ####

Like `--dump bodies` but dumps the response bodies and the request headers. Useful for debugging upload problems.

#### --dump auth ####

Dump HTTP headers - will contain sensitive info such as `Authorization:` headers - use `--dump headers` to dump without `Authorization:` headers. Can be very verbose. Useful for debugging only.

#### --dump filters ####

Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

#### --dump goroutines ####

This dumps a list of the running go-routines at the end of the command to standard output.

#### --dump openfiles ####

This dumps a list of the open files at the end of the command.
It uses the `lsof` command to do that so you'll need that installed to use it.

### --memprofile=FILE ###

Write memory profile to file. This can be analysed with `go tool pprof`.

Filtering
---------

For the filtering options

* `--delete-excluded`
* `--filter`
* `--filter-from`
* `--exclude`
* `--exclude-from`
* `--include`
* `--include-from`
* `--files-from`
* `--files-from-raw`
* `--min-size`
* `--max-size`
* `--min-age`
* `--max-age`
* `--dump filters`

See the [filtering section](https://rclone.org/filtering/).

Remote control
--------------

For the remote control options and for instructions on how to remote control rclone

* `--rc`
* and anything starting with `--rc-`

See [the remote control section](https://rclone.org/rc/).

Logging
-------

rclone has 4 levels of logging, `ERROR`, `NOTICE`, `INFO` and `DEBUG`.

By default, rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (eg `rclone ls`).

By default, rclone will produce `Error` and `Notice` level messages.

If you use the `-q` flag, rclone will only produce `Error` messages.

If you use the `-v` flag, rclone will produce `Error`, `Notice` and `Info` messages.

If you use the `-vv` flag, rclone will produce `Error`, `Notice`, `Info` and `Debug` messages.

You can also control the log levels with the `--log-level` flag.

If you use the `--log-file=FILE` option, rclone will redirect `Error`, `Info` and `Debug` messages along with standard error to FILE.

If you use the `--syslog` flag then rclone will log to syslog and the `--syslog-facility` flag controls which facility it uses.

Rclone prefixes all log messages with their level in capitals, eg INFO which makes it easy to grep the log file for different kinds of information.

Exit Code
---------

If any errors occur during the command execution, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.

During the startup phase, rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.

When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with `-q`) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.

### List of exit codes ###

* `0` - success
* `1` - Syntax or usage error
* `2` - Error not otherwise categorised
* `3` - Directory not found
* `4` - File not found
* `5` - Temporary error (one that more retries might fix) (Retry errors)
* `6` - Less serious errors (like 461 errors from dropbox) (NoRetry errors)
* `7` - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
* `8` - Transfer exceeded - limit set by `--max-transfer` reached
* `9` - Operation successful, but no files transferred

Environment Variables
---------------------

Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

### Options ###

Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first, take the long option name, strip the leading `--`, change `-` to `_`, make upper case and prepend `RCLONE_`.

For example, to always set `--stats 5s`, set the environment variable `RCLONE_STATS=5s`. If you set stats on the command line this will override the environment variable setting.

Or to always use the trash in drive `--drive-use-trash`, set `RCLONE_DRIVE_USE_TRASH=true`.

The same parser is used for the options and the environment variables so they take exactly the same form.

### Config file ###

You can set defaults for values in the config file on an individual remote basis. If you want to use this feature, you will need to discover the name of the config items that you want. The easiest way is to run through `rclone config` by hand, then look in the config file to see what the values are (the config file can be found by looking at the help for `--config` in `rclone help`).

To find the name of the environment variable that you need to set, take `RCLONE_CONFIG_` + name of remote + `_` + name of config file option and make it all uppercase.

For example, to configure an S3 remote named `mys3:` without a config file (using unix ways of setting environment variables):

```
$ export RCLONE_CONFIG_MYS3_TYPE=s3
$ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
$ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
$ rclone lsd MYS3:
          -1 2016-09-21 12:54:21        -1 my-bucket
$ rclone listremotes | grep mys3
mys3:
```

Note that if you want to create a remote using environment variables you must create the `..._TYPE` variable as above.

### Precedence

The various different methods of backend configuration are read in this order and the first one with a value is used.

- Flag values as supplied on the command line, eg `--drive-use-trash`.
- Remote specific environment vars, eg `RCLONE_CONFIG_MYREMOTE_USE_TRASH` (see above).
- Backend specific environment vars, eg `RCLONE_DRIVE_USE_TRASH`.
- Config file, eg `use_trash = false`.
- Default values, eg `true` - these can't be changed.

So if both `--drive-use-trash` is supplied on the command line and an environment variable `RCLONE_DRIVE_USE_TRASH` is set, the command line flag will take preference.

For non backend configuration the order is as follows:

- Flag values as supplied on the command line, eg `--stats 5s`.
- Environment vars, eg `RCLONE_STATS=5s`.
- Default values, eg `1m` - these can't be changed.

### Other environment variables ###

- `RCLONE_CONFIG_PASS` set to contain your config file password (see [Configuration Encryption](#configuration-encryption) section)
- `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` (or the lowercase versions thereof).
    - `HTTPS_PROXY` takes precedence over `HTTP_PROXY` for https requests.
    - The environment values may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
- `RCLONE_CONFIG_DIR` - rclone **sets** this variable for use in config files and sub processes to point to the directory holding the config file.

# Configuring rclone on a remote / headless machine #

Some of the configurations (those involving oauth2) require an Internet connected web browser.

If you are trying to set rclone up on a remote or headless box with no browser available on it (eg a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below.

## Configuring using rclone authorize ##

On the headless box run `rclone config` but answer `N` to the `Use auto config?` question.

```
...
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n> n
For this to work, you will need rclone available on a machine that has
a web browser available.

For more help and alternate methods see: https://rclone.org/remote_setup/

Execute the following on the machine with the web browser (same rclone
version recommended):

	rclone authorize "amazon cloud drive"

Then paste the result below:
result>
```

Then on your main desktop machine

```
rclone authorize "amazon cloud drive"
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Paste the following into your remote machine --->
SECRET_TOKEN
<---End paste
```

Then back to the headless box, paste in the code

```
result> SECRET_TOKEN
--------------------
[acd12]
client_id =
client_secret =
token = SECRET_TOKEN
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
```

## Configuring by copying the config file ##

Rclone stores all of its config in a single configuration file. This can easily be copied to configure a remote rclone.

So first configure rclone on your desktop machine with `rclone config` to set up the config file.

Find the config file by running `rclone config file`, for example

```
$ rclone config file
Configuration file is stored at:
/home/user/.rclone.conf
```

Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and place it in the correct place (use `rclone config file` on the remote box to find out where).

# Filtering, includes and excludes #

Rclone has a sophisticated set of include and exclude rules. Some of these are based on patterns and some on other things like file size.

The filters are applied for the `copy`, `sync`, `move`, `ls`, `lsl`, `md5sum`, `sha1sum`, `size`, `delete` and `check` operations. Note that `purge` does not obey the filters.

Each path as it passes through rclone is matched against the include and exclude rules like `--include`, `--exclude`, `--include-from`, `--exclude-from`, `--filter`, or `--filter-from`. The simplest way to try them out is using the `ls` command, or `--dry-run` together with `-v`.

`--filter-from`, `--exclude-from`, `--include-from`, `--files-from`, `--files-from-raw` understand `-` as a file name to mean read from standard input.

## Patterns ##

The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.

If the pattern starts with a `/` then it only matches at the top level of the directory tree, **relative to the root of the remote** (not necessarily the root of the local drive). If it doesn't start with `/` then it is matched starting at the **end of the path**, but it will only match a complete path element:

    file.jpg  - matches "file.jpg"
              - matches "directory/file.jpg"
              - doesn't match "afile.jpg"
              - doesn't match "directory/afile.jpg"
    /file.jpg - matches "file.jpg" in the root directory of the remote
              - doesn't match "afile.jpg"
              - doesn't match "directory/file.jpg"

**Important** Note that you must use `/` in patterns and not `\` even if running on Windows.

A `*` matches anything but not a `/`.

    *.jpg  - matches "file.jpg"
           - matches "directory/file.jpg"
           - doesn't match "file.jpg/something"

Use `**` to match anything, including slashes (`/`).
    dir/** - matches "dir/file.jpg"
           - matches "dir/dir1/dir2/file.jpg"
           - doesn't match "directory/file.jpg"
           - doesn't match "adir/file.jpg"

A `?` matches any character except a slash `/`.

    l?ss - matches "less"
         - matches "lass"
         - doesn't match "floss"

A `[` and `]` together make a character class, such as `[a-z]` or `[aeiou]` or `[[:alpha:]]`. See the [go regexp docs](https://golang.org/pkg/regexp/syntax/) for more info on these.

    h[ae]llo - matches "hello"
             - matches "hallo"
             - doesn't match "hullo"

A `{` and `}` define a choice between elements. It should contain a comma separated list of patterns, any of which might match. These patterns can contain wildcards.

    {one,two}_potato - matches "one_potato"
                     - matches "two_potato"
                     - doesn't match "three_potato"
                     - doesn't match "_potato"

Special characters can be escaped with a `\` before them.

    \*.jpg      - matches "*.jpg"
    \\.jpg      - matches "\.jpg"
    \[one\].jpg - matches "[one].jpg"

Patterns are case sensitive unless the `--ignore-case` flag is used.

Without `--ignore-case` (default)

    potato - matches "potato"
           - doesn't match "POTATO"

With `--ignore-case`

    potato - matches "potato"
           - matches "POTATO"

Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so `rclone copy "remote:dir*.jpg" /path/to/dir` won't work - what is required is `rclone --include "*.jpg" copy remote:dir /path/to/dir`

### Directories ###

Rclone keeps track of directories that could match any file patterns.

Eg if you add the include rule

    /a/*.jpg

Rclone will synthesize the directory include rule

    /a/

If you put any rules which end in `/` then it will only match directories.

Directory matches are **only** used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google cloud storage, b2) which don't have a concept of directory.

### Differences between rsync and rclone patterns ###

Rclone implements bash style `{a,b,c}` glob matching which rsync doesn't.

Rclone always does a wildcard match so `\` must always escape a `\`.

## How the rules are used ##

Rclone maintains a combined list of include rules and exclude rules.

Each file is matched in order, starting from the top, against the rule in the list until it finds a match. The file is then included or excluded according to the rule type.

If the matcher fails to find a match after testing against all the entries in the list then the path is included.

For example given the following rules, `+` being include, `-` being exclude,

    - secret*.jpg
    + *.jpg
    + *.png
    + file2.avi
    - *

This would include

  * `file1.jpg`
  * `file3.png`
  * `file2.avi`

This would exclude

  * `secret17.jpg`
  * non `*.jpg` and `*.png`

A similar process is done on directory entries before recursing into them. This only works on remotes which have a concept of directory (eg local, google drive, onedrive, amazon drive) and not on bucket based remotes (eg s3, swift, google cloud storage, b2).

## Adding filtering rules ##

Filtering rules are added with the following command line flags.

### Repeating options ###

You can repeat the following options to add more than one rule of that type.

  * `--include`
  * `--include-from`
  * `--exclude`
  * `--exclude-from`
  * `--filter`
  * `--filter-from`
  * `--files-from-raw`

**Important** You should not use `--include*` together with `--exclude*`. It may produce different results than you expected. In that case try to use: `--filter*`.
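For instance, the example rule list above could be given directly on the command line with repeated `--filter` flags (the source and destination here are placeholders):

    rclone copy --filter "- secret*.jpg" --filter "+ *.jpg" --filter "+ *.png" --filter "+ file2.avi" --filter "- *" /path/to/source remote:dest

The rules are tested in the order given, so the `- secret*.jpg` exclude must appear before the `+ *.jpg` include.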
Note that all the options of the same type are processed together in the order above, regardless of what order they were placed on the command line. So all `--include` options are processed first in the order they appeared on the command line, then all `--include-from` options etc.

To mix up the order of includes and excludes, the `--filter` flag can be used.

### `--exclude` - Exclude files matching pattern ###

Add a single exclude rule with `--exclude`. This flag can be repeated. See above for the order the flags are processed in.

Eg `--exclude *.bak` to exclude all bak files from the sync.

### `--exclude-from` - Read exclude patterns from file ###

Add exclude rules from a file. This flag can be repeated. See above for the order the flags are processed in.

Prepare a file like this `exclude-file.txt`

    # a sample exclude rule file
    *.bak
    file2.jpg

Then use as `--exclude-from exclude-file.txt`. This will sync all files except those ending in `.bak` and `file2.jpg`.

This is useful if you have a lot of rules.

### `--include` - Include files matching pattern ###

Add a single include rule with `--include`. This flag can be repeated. See above for the order the flags are processed in.

Eg `--include *.{png,jpg}` to include all `png` and `jpg` files in the backup and no others.

This adds an implicit `--exclude *` at the very end of the filter list. This means you can mix `--include` and `--include-from` with the other filters (eg `--exclude`) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use `--filter-from`.

### `--include-from` - Read include patterns from file ###

Add include rules from a file. This flag can be repeated. See above for the order the flags are processed in.

Prepare a file like this `include-file.txt`

    # a sample include rule file
    *.jpg
    *.png
    file2.avi

Then use as `--include-from include-file.txt`. This will sync all `jpg`, `png` files and `file2.avi`.

This is useful if you have a lot of rules.

This adds an implicit `--exclude *` at the very end of the filter list. This means you can mix `--include` and `--include-from` with the other filters (eg `--exclude`) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use `--filter-from`.

### `--filter` - Add a file-filtering rule ###

This can be used to add a single include or exclude rule. Include rules start with `+ ` and exclude rules start with `- `. A special rule called `!` can be used to clear the existing rules.

This flag can be repeated. See above for the order the flags are processed in.

Eg `--filter "- *.bak"` to exclude all bak files from the sync.

### `--filter-from` - Read filtering patterns from a file ###

Add include/exclude rules from a file. This flag can be repeated. See above for the order the flags are processed in.

Prepare a file like this `filter-file.txt`

    # a sample filter rule file
    - secret*.jpg
    + *.jpg
    + *.png
    + file2.avi
    - /dir/Trash/**
    + /dir/**
    # exclude everything else
    - *

Then use as `--filter-from filter-file.txt`. The rules are processed in the order that they are defined.

This example will include all `jpg` and `png` files, exclude any files matching `secret*.jpg` and include `file2.avi`. It will also include everything in the directory `dir` at the root of the sync, except `dir/Trash` which it will exclude. Everything else will be excluded from the sync.
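Before trusting a rule file like this with a `sync`, it can be worth previewing which files it matches; one way (using the example file above and a placeholder source) is:

    rclone lsl --filter-from filter-file.txt /path/to/source

Adding `--dump filters` to the command will also print the compiled filter rules as regular expressions.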
### `--files-from` - Read list of source-file names ###

This reads a list of file names from the file passed in and **only** these files are transferred. The **filtering rules are ignored** completely if you use this option.

`--files-from` expects a list of files as its input. Leading / trailing whitespace is stripped from the input lines and lines starting with `#` or `;` are ignored.

Rclone will traverse the file system if you use `--files-from`, effectively using the files in `--files-from` as a set of filters. Rclone will not error if any of the files are missing.

If you use `--no-traverse` as well as `--files-from` then rclone will not traverse the destination file system, it will find each file individually using approximately 1 API call. This can be more efficient for small lists of files.

This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.

Paths within the `--files-from` file will be interpreted as starting with the root specified in the command. Leading `/` characters are ignored. See [--files-from-raw](#files-from-raw-read-list-of-source-file-names-without-any-processing) if you need the input to be processed in a raw manner.

For example, suppose you had `files-from.txt` with this content:

    # comment
    file1.jpg
    subdir/file2.jpg

You could then use it like this:

    rclone copy --files-from files-from.txt /home/me/pics remote:pics

This will transfer these files only (if they exist)

    /home/me/pics/file1.jpg        → remote:pics/file1.jpg
    /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg

To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:

    /home/user1/important
    /home/user1/dir/file
    /home/user2/stuff

To copy these you'd find a common subdirectory - in this case `/home` and put the remaining files in `files-from.txt` with or without leading `/`, eg

    user1/important
    user1/dir/file
    user2/stuff

You could then copy these to a remote like this

    rclone copy --files-from files-from.txt /home remote:backup

The 3 files will arrive in `remote:backup` with the paths as in the `files-from.txt` like this:

    /home/user1/important → remote:backup/user1/important
    /home/user1/dir/file  → remote:backup/user1/dir/file
    /home/user2/stuff     → remote:backup/user2/stuff

You could of course choose `/` as the root too in which case your `files-from.txt` might look like this.

    /home/user1/important
    /home/user1/dir/file
    /home/user2/stuff

And you would transfer it like this

    rclone copy --files-from files-from.txt / remote:backup

In this case there will be an extra `home` directory on the remote:

    /home/user1/important → remote:backup/home/user1/important
    /home/user1/dir/file  → remote:backup/home/user1/dir/file
    /home/user2/stuff     → remote:backup/home/user2/stuff

### `--files-from-raw` - Read list of source-file names without any processing ###

This option is the same as `--files-from` with the only difference being that the input is read in a raw manner. This means that lines with leading/trailing whitespace and lines starting with `;` or `#` are read without any processing. [rclone lsf](https://rclone.org/commands/rclone_lsf/) has a compatible format that can be used to export file lists from remotes, which can then be used as an input to `--files-from-raw`.

### `--min-size` - Don't transfer any file smaller than this ###

This option controls the minimum size file which will be transferred. This defaults to `kBytes` but a suffix of `k`, `M`, or `G` can be used.
For example `--min-size 50k` means no files smaller than 50kByte will be transferred.

### `--max-size` - Don't transfer any file larger than this ###

This option controls the maximum size file which will be transferred. This defaults to `kBytes` but a suffix of `k`, `M`, or `G` can be used.

For example `--max-size 1G` means no files larger than 1GByte will be transferred.

### `--max-age` - Don't transfer any file older than this ###

This option controls the maximum age of files to transfer. Give in seconds or with a suffix of:

  * `ms` - Milliseconds
  * `s` - Seconds
  * `m` - Minutes
  * `h` - Hours
  * `d` - Days
  * `w` - Weeks
  * `M` - Months
  * `y` - Years

For example `--max-age 2d` means no files older than 2 days will be transferred.

This can also be an absolute time in one of these formats

- RFC3339 - eg "2006-01-02T15:04:05Z07:00"
- ISO8601 Date and time, local timezone - "2006-01-02T15:04:05"
- ISO8601 Date and time, local timezone - "2006-01-02 15:04:05"
- ISO8601 Date - "2006-01-02" (YYYY-MM-DD)

### `--min-age` - Don't transfer any file younger than this ###

This option controls the minimum age of files to transfer. Give in seconds or with a suffix (see `--max-age` for list of suffixes).

For example `--min-age 2d` means no files younger than 2 days will be transferred.

### `--delete-excluded` - Delete files on dest excluded from sync ###

**Important** this flag is dangerous - use with `--dry-run` and `-v` first.

When doing `rclone sync` this will delete any files which are excluded from the sync on the destination.

If for example you did a sync from `A` to `B` without the `--min-size 50k` flag

    rclone sync -i A: B:

Then you repeated it like this with the `--delete-excluded`

    rclone --min-size 50k --delete-excluded sync A: B:

This would delete all files on `B` which are less than 50 kBytes as these are now excluded from the sync.

Always test first with `--dry-run` and `-v` before using this flag.

### `--dump filters` - dump the filters to the output ###

This dumps the defined filters to the output as regular expressions.

Useful for debugging.

### `--ignore-case` - make searches case insensitive ###

Normally filter patterns are case sensitive. If this flag is supplied then filter patterns become case insensitive.

Normally a `--include "file.txt"` will not match a file called `FILE.txt`. However if you use the `--ignore-case` flag then `--include "file.txt"` will match a file called `FILE.txt`.

## Quoting shell metacharacters ##

The examples above may not work verbatim in your shell as they have shell metacharacters in them (eg `*`), and may require quoting.

Eg linux, OSX

  * `--include \*.jpg`
  * `--include '*.jpg'`
  * `--include='*.jpg'`

In Windows the expansion is done by the command not the shell so this should work fine

  * `--include *.jpg`

## Exclude directory based on a file ##

It is possible to exclude a directory based on a file, which is present in this directory. The filename should be specified using the `--exclude-if-present` flag. This flag has a priority over the other filtering flags.

Imagine you have the following directory structure:

    dir1/file1
    dir1/dir2/file2
    dir1/dir2/dir3/file3
    dir1/dir2/dir3/.ignore

You can exclude `dir3` from sync by running the following command:

    rclone sync -i --exclude-if-present .ignore dir1 remote:backup

Currently only one filename is supported, i.e. `--exclude-if-present` should not be used multiple times.

# GUI (Experimental)

Rclone can serve a web based GUI (graphical user interface).
This is somewhat experimental at the moment so things may be subject to change.

Run this command in a terminal and rclone will download and then display the GUI in a web browser.

```
rclone rcd --rc-web-gui
```

This will produce logs like this and rclone needs to continue to run to serve the GUI:

```
2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip
2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path :  /home/USER/.cache/rclone/webgui/v0.0.6.zip]
2019/08/25 11:40:16 NOTICE: Unzipping
2019/08/25 11:40:16 NOTICE: Serving remote control on http://127.0.0.1:5572/
```

This assumes you are running rclone locally on your machine. It is possible to separate the rclone and the GUI - see below for details.

If you wish to check for updates then you can add `--rc-web-gui-update` to the command line.

If you find your GUI broken, you may force it to update by adding `--rc-web-gui-force-update`.

By default, rclone will open your browser. Add `--rc-web-gui-no-open-browser` to disable this feature.

## Using the GUI

Once the GUI opens, you will be looking at the dashboard which gives an overall overview.

On the left hand side you will see a series of view buttons you can click on:

- Dashboard - main overview
- Configs - examine and create new configurations
- Explorer - view, download and upload files to the cloud storage systems
- Backend - view or alter the backend config
- Log out

(More docs and walkthrough video to come!)

## How it works

When you run the `rclone rcd --rc-web-gui` this is what happens

- Rclone starts but only runs the remote control API ("rc").
- The API is bound to localhost with an auto generated username and password.
- If the API bundle is missing then rclone will download it.
- rclone will start serving the files from the API bundle over the same port as the API
- rclone will open the browser with a `login_token` so it can log straight in.

## Advanced use

The `rclone rcd` may use any of the [flags documented on the rc page](https://rclone.org/rc/#supported-parameters).

The flag `--rc-web-gui` is shorthand for

- Download the web GUI if necessary
- Check we are using some authentication
- `--rc-user gui`
- `--rc-pass <random password>`
- `--rc-serve`

These flags can be overridden as desired.

See also the [rclone rcd documentation](https://rclone.org/commands/rclone_rcd/).

### Example: Running a public GUI

For example the GUI could be served on a public port over SSL using an htpasswd file using the following flags:

- `--rc-web-gui`
- `--rc-addr :443`
- `--rc-htpasswd /path/to/htpasswd`
- `--rc-cert /path/to/ssl.crt`
- `--rc-key /path/to/ssl.key`

### Example: Running a GUI behind a proxy

If you want to run the GUI behind a proxy at `/rclone` you could use these flags:

- `--rc-web-gui`
- `--rc-baseurl rclone`
- `--rc-htpasswd /path/to/htpasswd`

Or instead of htpasswd if you just want a single user and password:

- `--rc-user me`
- `--rc-pass mypassword`

## Project

The GUI is being developed in the: [rclone/rclone-webui-react repository](https://github.com/rclone/rclone-webui-react).

Bug reports and contributions are very welcome :-)

If you have questions then please ask them on the [rclone forum](https://forum.rclone.org/).

# Remote controlling rclone with its API

If rclone is run with the `--rc` flag then it starts an http server which can be used to remote control rclone using its API.
If you just want to run a remote control then see the [rcd command](https://rclone.org/commands/rclone_rcd/).

## Supported parameters

### --rc

Flag to start the http server to listen for remote requests.

### --rc-addr=IP

IPaddress:Port or :Port to bind server to. (default "localhost:5572")

### --rc-cert=KEY

SSL PEM key (concatenation of certificate and CA certificate)

### --rc-client-ca=PATH

Client certificate authority to verify clients with

### --rc-htpasswd=PATH

htpasswd file - if not provided no authentication is done

### --rc-key=PATH

SSL PEM Private key

### --rc-max-header-bytes=VALUE

Maximum size of request header (default 4096)

### --rc-user=VALUE

User name for authentication.

### --rc-pass=VALUE

Password for authentication.

### --rc-realm=VALUE

Realm for authentication (default "rclone")

### --rc-server-read-timeout=DURATION

Timeout for server reading data (default 1h0m0s)

### --rc-server-write-timeout=DURATION

Timeout for server writing data (default 1h0m0s)

### --rc-serve

Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object

Default Off.

### --rc-files /path/to/directory

Path to local files to serve on the HTTP server.

If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions.

If `--rc-user` or `--rc-pass` is set then the URL that is opened will have the authorization in the URL in the `http://user:pass@localhost/` style.

Default Off.

### --rc-enable-metrics

Enable OpenMetrics/Prometheus compatible endpoint at `/metrics`.

Default Off.

### --rc-web-gui

Set this flag to serve the default web gui on the same port as rclone.

Default Off.

### --rc-allow-origin

Set the allowed Access-Control-Allow-Origin for rc requests.

Can be used with --rc-web-gui if the rclone is running on a different IP than the web-gui.

Default is IP address on which rc is running.

### --rc-web-fetch-url

Set the URL to fetch the rclone-web-gui files from.

Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest.

### --rc-web-gui-update

Set this flag to check and update rclone-webui-react from the rc-web-fetch-url.

Default Off.

### --rc-web-gui-force-update

Set this flag to force update rclone-webui-react from the rc-web-fetch-url.

Default Off.

### --rc-web-gui-no-open-browser

Set this flag to disable opening browser automatically when using web-gui.

Default Off.

### --rc-job-expire-duration=DURATION

Expire finished async jobs older than DURATION (default 60s).

### --rc-job-expire-interval=DURATION

Interval duration to check for expired async jobs (default 10s).

### --rc-no-auth

By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg `operations/list` is denied as it involves creating a remote as is `sync/copy`.

If this is set then no authorisation will be required on the server to use these methods. The alternative is to use `--rc-user` and `--rc-pass` and use these credentials in the request.

Default Off.

## Accessing the remote control via the rclone rc command

Rclone itself implements the remote control protocol in its `rclone rc` command.
You can use it like this

```
$ rclone rc rc/noop param1=one param2=two
{
	"param1": "one",
	"param2": "two"
}
```

Run `rclone rc` on its own to see the help for the installed remote
control commands.

## JSON input

`rclone rc` also supports a `--json` flag which can be used to send
more complicated input parameters.

```
$ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
{
	"p1": [
		1,
		"2",
		null,
		4
	],
	"p2": {
		"a": 1,
		"b": 2
	}
}
```

If the parameter being passed is an object then it can be passed as a
JSON string rather than using the `--json` flag which simplifies the
command line.

```
rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'
```

Rather than

```
rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'
```

## Special parameters

The rc interface supports some special parameters which apply to
**all** commands.  These start with `_` to show they are different.

### Running asynchronous jobs with _async = true

Each rc call is classified as a job and it is assigned its own id.  By
default jobs are executed immediately as they are created, i.e.
synchronously.

If `_async` has a true value when supplied to an rc call then it will
return immediately with a job id and the task will be run in the
background.  The `job/status` call can be used to get information about
the background job.  The job can be queried for up to 1 minute after
it has finished.

It is recommended that potentially long running jobs, eg `sync/sync`,
`sync/copy`, `sync/move`, `operations/purge` are run with the `_async`
flag to avoid any potential problems with the HTTP request and
response timing out.

Starting a job with the `_async` flag:

```
$ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop
{
	"jobid": 2
}
```

Query the status to see if the job has finished.  For more information
on the meaning of these return parameters see the `job/status` call.

```
$ rclone rc --json '{ "jobid":2 }' job/status
{
	"duration": 0.000124163,
	"endTime": "2018-10-27T11:38:07.911245881+01:00",
	"error": "",
	"finished": true,
	"id": 2,
	"output": {
		"_async": true,
		"p1": [
			1,
			"2",
			null,
			4
		],
		"p2": {
			"a": 1,
			"b": 2
		}
	},
	"startTime": "2018-10-27T11:38:07.911121728+01:00",
	"success": true
}
```

`job/list` can be used to show the running or recently completed jobs

```
$ rclone rc job/list
{
	"jobids": [
		2
	]
}
```

### Assigning operations to groups with _group = value

Each rc call has its own stats group for tracking its metrics.  By
default grouping is done by the composite group name from prefix
`job/` and id of the job like so `job/1`.

If `_group` has a value then stats for that request will be grouped
under that value.  This allows the caller to group stats under their
own name.

Stats for specific group can be accessed by passing `group` to
`core/stats`:

```
$ rclone rc --json '{ "group": "job/1" }' core/stats
{
	"speed": 12345
	...
}
```

## Supported commands

### backend/command: Runs a backend command. {#backend-command}

This takes the following parameters

- command - a string with the command name
- fs - a remote name string eg "drive:"
- arg - a list of arguments for the backend command
- opt - a map of string to string of options

Returns

- result - result from the backend command

For example

    rclone rc backend/command command=noop fs=. -o echo=yes -o blue -a path1 -a path2
Returns

```
{
	"result": {
		"arg": [
			"path1",
			"path2"
		],
		"name": "noop",
		"opt": {
			"blue": "",
			"echo": "yes"
		}
	}
}
```

Note that this is the direct equivalent of using this "backend"
command:

    rclone backend noop . -o echo=yes -o blue path1 path2

Note that arguments must be preceded by the "-a" flag.

See the [backend](https://rclone.org/commands/rclone_backend/) command for more information.

**Authentication is required for this call.**

### cache/expire: Purge a remote from cache {#cache-expire}

Purge a remote from the cache backend. Supports either a directory or a file.
Params:

- remote = path to remote (required)
- withData = true/false to delete cached data (chunks) as well (optional)

Eg

    rclone rc cache/expire remote=path/to/sub/folder/
    rclone rc cache/expire remote=/ withData=true

### cache/fetch: Fetch file chunks {#cache-fetch}

Ensure the specified file chunks are cached on disk.

The chunks= parameter specifies the file chunks to check.
It takes a comma separated list of array slice indices.
The slice indices are similar to Python slices: start[:end]

start is the 0 based chunk number from the beginning of the file
to fetch inclusive. end is 0 based chunk number from the beginning
of the file to fetch exclusive.
Both values can be negative, in which case they count from the back
of the file. The value "-5:" represents the last 5 chunks of a file.

Some valid examples are:
":5,-5:" -> the first and last five chunks
"0,-2" -> the first and the second last chunk
"0:10" -> the first ten chunks

Any parameter with a key that starts with "file" can be used to
specify files to fetch, eg

    rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye

File names will automatically be encrypted when a crypt remote
is used on top of the cache.

### cache/stats: Get cache stats {#cache-stats}

Show statistics for the cache remote.

### config/create: create the config for a remote. {#config-create}

This takes the following parameters

- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
- type - type of the new remote
- obscure - optional bool - forces obscuring of passwords
- noObscure - optional bool - forces passwords not to be obscured

See the [config create command](https://rclone.org/commands/rclone_config_create/) command for more information on the above.

**Authentication is required for this call.**

### config/delete: Delete a remote in the config file. {#config-delete}

Parameters:

- name - name of remote to delete

See the [config delete command](https://rclone.org/commands/rclone_config_delete/) command for more information on the above.

**Authentication is required for this call.**

### config/dump: Dumps the config file. {#config-dump}

Returns a JSON object:
- key: value

Where keys are remote names and values are the config parameters.

See the [config dump command](https://rclone.org/commands/rclone_config_dump/) command for more information on the above.

**Authentication is required for this call.**

### config/get: Get a remote in the config file. {#config-get}

Parameters:

- name - name of remote to get

See the [config dump command](https://rclone.org/commands/rclone_config_dump/) command for more information on the above.

**Authentication is required for this call.**

### config/listremotes: Lists the remotes in the config file. {#config-listremotes}

Returns

- remotes - array of remote names

See the [listremotes command](https://rclone.org/commands/rclone_listremotes/) command for more information on the above.
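As a sketch, assuming the server was started with `--rc-user myuser --rc-pass mypass` (and with hypothetical remote names in the output), the remotes can be listed like this:

```
$ rclone rc --user myuser --pass mypass config/listremotes
{
	"remotes": [
		"gdrive",
		"s3"
	]
}
```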
**Authentication is required for this call.**

### config/password: password the config for a remote. {#config-password}

This takes the following parameters

- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs

See the [config password command](https://rclone.org/commands/rclone_config_password/) command for more information on the above.

**Authentication is required for this call.**

### config/providers: Shows how providers are configured in the config file. {#config-providers}

Returns a JSON object:
- providers - array of objects

See the [config providers command](https://rclone.org/commands/rclone_config_providers/) command for more information on the above.

**Authentication is required for this call.**

### config/update: update the config for a remote. {#config-update}

This takes the following parameters

- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
- obscure - optional bool - forces obscuring of passwords
- noObscure - optional bool - forces passwords not to be obscured

See the [config update command](https://rclone.org/commands/rclone_config_update/) command for more information on the above.

**Authentication is required for this call.**

### core/bwlimit: Set the bandwidth limit. {#core-bwlimit}

This sets the bandwidth limit to that passed in.

Eg

    rclone rc core/bwlimit rate=off
    {
        "bytesPerSecond": -1,
        "rate": "off"
    }
    rclone rc core/bwlimit rate=1M
    {
        "bytesPerSecond": 1048576,
        "rate": "1M"
    }

If the rate parameter is not supplied then the bandwidth is queried

    rclone rc core/bwlimit
    {
        "bytesPerSecond": 1048576,
        "rate": "1M"
    }

The format of the parameter is exactly the same as passed to --bwlimit
except only one bandwidth may be specified.

In either case "rate" is returned as a human readable string, and
"bytesPerSecond" is returned as a number.

### core/command: Run a rclone terminal command over rc. {#core-command}

This takes the following parameters

- command - a string with the command name
- arg - a list of arguments for the backend command
- opt - a map of string to string of options

Returns

- result - result from the backend command
- error - set if rclone exits with an error code
- returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT", "STREAM_ONLY_STDERR")

For example

    rclone rc core/command command=ls -a mydrive:/ -o max-depth=1
    rclone rc core/command -a ls -a mydrive:/ -o max-depth=1

Returns

```
{
	"error": false,
	"result": ""
}

OR
{
	"error": true,
	"result": ""
}
```

**Authentication is required for this call.**

### core/gc: Runs a garbage collection. {#core-gc}

This tells the go runtime to do a garbage collection run.  It isn't
necessary to call this normally, but it can be useful for debugging
memory problems.

### core/group-list: Returns list of stats. {#core-group-list}

This returns list of stats groups currently in memory.

Returns the following values:
```
{
	"groups":  an array of group names:
		[
			"group1",
			"group2",
			...
		]
}
```

### core/memstats: Returns the memory statistics {#core-memstats}

This returns the memory statistics of the running program.  What the
values mean is explained in the go docs: https://golang.org/pkg/runtime/#MemStats

The most interesting values for most people are:

* HeapAlloc: This is the amount of memory rclone is actually using
* HeapSys: This is the amount of memory rclone has obtained from the OS
* Sys: this is the total amount of memory requested from the OS
  * It is virtual memory so may include unused memory

### core/obscure: Obscures a string passed in.
{#core-obscure}

Pass a clear string and rclone will obscure it for the config file:
- clear - string

Returns
- obscured - string

### core/pid: Return PID of current process {#core-pid}

This returns the PID of the current process.

Useful for stopping the rclone process.

### core/quit: Terminates the app. {#core-quit}

(Optional) Pass an exit code to be used for terminating the app:
- exitCode - int

### core/stats: Returns stats about current transfers. {#core-stats}

This returns all available stats:

	rclone rc core/stats

If group is not provided then summed up stats for all groups will be
returned.

Parameters

- group - name of the stats group (string)

Returns the following values:

```
{
	"speed": average speed in bytes/sec since start of the process,
	"bytes": total transferred bytes since the start of the process,
	"errors": number of errors,
	"fatalError": whether there has been at least one FatalError,
	"retryError": whether there has been at least one non-NoRetryError,
	"checks": number of checked files,
	"transfers": number of transferred files,
	"deletes" : number of deleted files,
	"renames" : number of renamed files,
	"transferTime" : total time spent on running jobs,
	"elapsedTime": time in seconds since the start of the process,
	"lastError": last occurred error,
	"transferring": an array of currently active file transfers:
		[
			{
				"bytes": total transferred bytes for this file,
				"eta": estimated time in seconds until file transfer completion,
				"name": name of the file,
				"percentage": progress of the file transfer in percent,
				"speed": average speed over the whole transfer in bytes/sec,
				"speedAvg": current speed in bytes/sec as an exponentially weighted moving average,
				"size": size of the file in bytes
			}
		],
	"checking": an array of names of currently active file checks
		[]
}
```
Values for "transferring", "checking" and "lastError" are only assigned if data is available.
The value for "eta" is null if an eta cannot be determined.

### core/stats-delete: Delete stats group. {#core-stats-delete}

This deletes the entire stats group.

Parameters

- group - name of the stats group (string)

### core/stats-reset: Reset stats. {#core-stats-reset}

This clears counters, errors and finished transfers for all stats or
specific stats group if group is provided.

Parameters

- group - name of the stats group (string)

### core/transferred: Returns stats about completed transfers. {#core-transferred}

This returns stats about completed transfers:

	rclone rc core/transferred

If group is not provided then completed transfers for all groups will be
returned.

Note only the last 100 completed transfers are returned.

Parameters

- group - name of the stats group (string)

Returns the following values:
```
{
	"transferred":  an array of completed transfers (including failed ones):
		[
			{
				"name": name of the file,
				"size": size of the file in bytes,
				"bytes": total transferred bytes for this file,
				"checked": if the transfer is only checked (skipped, deleted),
				"timestamp": integer representing millisecond unix epoch,
				"error": string description of the error (empty if successful),
				"jobid": id of the job that this transfer belongs to
			}
		]
}
```
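For example, a small sketch that polls the stats of one (hypothetical) job group once per second:

```
while true; do
    rclone rc --json '{ "group": "job/1" }' core/stats
    sleep 1
done
```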
### core/version: Shows the current version of rclone and the go runtime. {#core-version}

This shows the current version of rclone and the go runtime

- version - rclone version, eg "v1.53.0"
- decomposed - version number as [major, minor, patch]
- isGit - boolean - true if this was compiled from the git version
- isBeta - boolean - true if this is a beta version
- os - OS in use as according to Go
- arch - cpu architecture in use according to Go
- goVersion - version of Go runtime in use

### debug/set-block-profile-rate: Set runtime.SetBlockProfileRate for blocking profiling. {#debug-set-block-profile-rate}

SetBlockProfileRate controls the fraction of goroutine blocking events
that are reported in the blocking profile. The profiler aims to sample
an average of one blocking event per rate nanoseconds spent blocked.

To include every blocking event in the profile, pass rate = 1. To turn
off profiling entirely, pass rate <= 0.

After calling this you can use this to see the blocking profile:

    go tool pprof http://localhost:5572/debug/pprof/block

Parameters

- rate - int

### debug/set-mutex-profile-fraction: Set runtime.SetMutexProfileFraction for mutex profiling. {#debug-set-mutex-profile-fraction}

SetMutexProfileFraction controls the fraction of mutex contention
events that are reported in the mutex profile. On average 1/rate
events are reported. The previous rate is returned.

To turn off profiling entirely, pass rate 0. To just read the current
rate, pass rate < 0. (For n>1 the details of sampling may change.)

Once this is set you can use this to profile the mutex contention:

    go tool pprof http://localhost:5572/debug/pprof/mutex

Parameters

- rate - int

Results

- previousRate - int

### job/list: Lists the IDs of the running jobs {#job-list}

Parameters - None

Results

- jobids - array of integer job ids

### job/status: Reads the status of the job ID {#job-status}

Parameters

- jobid - id of the job (integer)

Results

- duration - time in seconds that the job ran for
- endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00")
- error - error from the job or empty string for no error
- finished - boolean whether the job has finished or not
- id - as passed in above
- startTime - time the job started (eg "2018-10-26T18:50:20.528336039+01:00")
- success - boolean - true for success false otherwise
- output - output of the job as would have been returned if called synchronously
- progress - output of the progress related to the underlying job

### job/stop: Stop the running job {#job-stop}

Parameters

- jobid - id of the job (integer)

### mount/listmounts: Show current mount points {#mount-listmounts}

This shows currently mounted points, which can be used for performing an unmount

This takes no parameters and returns

- mountPoints: list of current mount points

Eg

    rclone rc mount/listmounts

**Authentication is required for this call.**

### mount/mount: Create a new mount point {#mount-mount}

rclone allows Linux, FreeBSD, macOS and Windows to mount any of
Rclone's cloud storage systems as a file system with FUSE.

If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2

This takes the following parameters

- fs - a remote path to be mounted (required)
- mountPoint: valid path on the local machine (required)
- mountType: One of the values (mount, cmount, mount2) specifies the mount implementation to use
- mountOpt: a JSON object with Mount options in.
- vfsOpt: a JSON object with VFS options in.
Eg

    rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
    rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
    rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'

The vfsOpt are as described in options/get and can be seen in the
"vfs" section when running, and the mountOpt can be seen in the
"mount" section:

    rclone rc options/get

**Authentication is required for this call.**

### mount/types: Show all possible mount types {#mount-types}

This shows all possible mount types and returns them as a list.

This takes no parameters and returns

- mountTypes: list of mount types

The mount types are strings like "mount", "mount2", "cmount" and can
be passed to mount/mount as the mountType parameter.

Eg

    rclone rc mount/types

**Authentication is required for this call.**

### mount/unmount: Unmount selected active mount {#mount-unmount}

rclone allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.

This takes the following parameters

- mountPoint: valid path on the local machine where the mount was created (required)

Eg

    rclone rc mount/unmount mountPoint=/home/<user>/mountPoint

**Authentication is required for this call.**

### mount/unmountall: Unmount all active mounts {#mount-unmountall}

This unmounts all currently mounted points.

This takes no parameters and returns an error if an unmount does not succeed.

Eg

    rclone rc mount/unmountall

**Authentication is required for this call.**

### operations/about: Return the space used on the remote {#operations-about}

This takes the following parameters

- fs - a remote name string eg "drive:"

The result is as returned from rclone about --json

See the [about command](https://rclone.org/commands/rclone_about/) command for more information on the above.

**Authentication is required for this call.**

### operations/cleanup: Remove trashed files in the remote or path {#operations-cleanup}

This takes the following parameters

- fs - a remote name string eg "drive:"

See the [cleanup command](https://rclone.org/commands/rclone_cleanup/) command for more information on the above.

**Authentication is required for this call.**

### operations/copyfile: Copy a file from source remote to destination remote {#operations-copyfile}

This takes the following parameters

- srcFs - a remote name string eg "drive:" for the source
- srcRemote - a path within that remote eg "file.txt" for the source
- dstFs - a remote name string eg "drive2:" for the destination
- dstRemote - a path within that remote eg "file2.txt" for the destination

**Authentication is required for this call.**

### operations/copyurl: Copy the URL to the object {#operations-copyurl}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- url - string, URL to read from
- autoFilename - boolean, set to true to retrieve destination file name from url

See the [copyurl command](https://rclone.org/commands/rclone_copyurl/) command for more information on the above.

**Authentication is required for this call.**

### operations/delete: Remove files in the path {#operations-delete}

This takes the following parameters

- fs - a remote name string eg "drive:"

See the [delete command](https://rclone.org/commands/rclone_delete/) command for more information on the above.
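Because deleting a large path can take a long time, it may be worth combining this with the `_async` parameter described above. A sketch with a hypothetical remote:

    rclone rc operations/delete fs=mydrive:old-backups _async=true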
**Authentication is required for this call.**

### operations/deletefile: Remove the single file pointed to {#operations-deletefile}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the [deletefile command](https://rclone.org/commands/rclone_deletefile/) command for more information on the above.

**Authentication is required for this call.**

### operations/fsinfo: Return information about the remote {#operations-fsinfo}

This takes the following parameters

- fs - a remote name string eg "drive:"

This returns info about the remote passed in:

```
{
	// optional features and whether they are available or not
	"Features": {
		"About": true,
		"BucketBased": false,
		"CanHaveEmptyDirectories": true,
		"CaseInsensitive": false,
		"ChangeNotify": false,
		"CleanUp": false,
		"Copy": false,
		"DirCacheFlush": false,
		"DirMove": true,
		"DuplicateFiles": false,
		"GetTier": false,
		"ListR": false,
		"MergeDirs": false,
		"Move": true,
		"OpenWriterAt": true,
		"PublicLink": false,
		"Purge": true,
		"PutStream": true,
		"PutUnchecked": false,
		"ReadMimeType": false,
		"ServerSideAcrossConfigs": false,
		"SetTier": false,
		"SetWrapper": false,
		"UnWrap": false,
		"WrapFs": false,
		"WriteMimeType": false
	},
	// Names of hashes available
	"Hashes": [
		"MD5",
		"SHA-1",
		"DropboxHash",
		"QuickXorHash"
	],
	"Name": "local",	// Name as created
	"Precision": 1,		// Precision of timestamps in ns
	"Root": "/",		// Path as created
	"String": "Local file system at /" // how the remote will appear in logs
}
```

This command does not have a command line equivalent so use this instead:

    rclone rc --loopback operations/fsinfo fs=remote:

### operations/list: List the given remote and path in JSON format {#operations-list}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- opt - a dictionary of options to control the listing (optional)
    - recurse - If set recurse directories
    - noModTime - If set return modification time
    - showEncrypted -  If set show decrypted names
    - showOrigIDs - If set show the IDs for each item if known
    - showHash - If set return a dictionary of hashes

The result is

- list
    - This is an array of objects as described in the lsjson command

See the [lsjson command](https://rclone.org/commands/rclone_lsjson/) for more information on the above and examples.

**Authentication is required for this call.**

### operations/mkdir: Make a destination directory or container {#operations-mkdir}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the [mkdir command](https://rclone.org/commands/rclone_mkdir/) command for more information on the above.

**Authentication is required for this call.**

### operations/movefile: Move a file from source remote to destination remote {#operations-movefile}

This takes the following parameters

- srcFs - a remote name string eg "drive:" for the source
- srcRemote - a path within that remote eg "file.txt" for the source
- dstFs - a remote name string eg "drive2:" for the destination
- dstRemote - a path within that remote eg "file2.txt" for the destination

**Authentication is required for this call.**
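As a sketch, this performs a server side rename of `file.txt` to `file2.txt` on a hypothetical remote:

    rclone rc operations/movefile srcFs=mydrive: srcRemote=file.txt dstFs=mydrive: dstRemote=file2.txt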
### operations/publiclink: Create or retrieve a public link to the given file or folder. {#operations-publiclink}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- unlink - boolean - if set removes the link rather than adding it (optional)
- expire - string - the expiry time of the link eg "1d" (optional)

Returns

- url - URL of the resource

See the [link command](https://rclone.org/commands/rclone_link/) command for more information on the above.

**Authentication is required for this call.**

### operations/purge: Remove a directory or container and all of its contents {#operations-purge}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the [purge command](https://rclone.org/commands/rclone_purge/) command for more information on the above.

**Authentication is required for this call.**

### operations/rmdir: Remove an empty directory or container {#operations-rmdir}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the [rmdir command](https://rclone.org/commands/rclone_rmdir/) command for more information on the above.

**Authentication is required for this call.**

### operations/rmdirs: Remove all the empty directories in the path {#operations-rmdirs}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- leaveRoot - boolean, set to true not to delete the root

See the [rmdirs command](https://rclone.org/commands/rclone_rmdirs/) command for more information on the above.

**Authentication is required for this call.**

### operations/size: Count the number of bytes and files in remote {#operations-size}

This takes the following parameters

- fs - a remote name string eg "drive:path/to/dir"

Returns

- count - number of files
- bytes - number of bytes in those files

See the [size command](https://rclone.org/commands/rclone_size/) command for more information on the above.

**Authentication is required for this call.**

### operations/uploadfile: Upload file using multipart/form-data {#operations-uploadfile}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- each part in body represents a file to be uploaded

See the [uploadfile command](https://rclone.org/commands/rclone_uploadfile/) command for more information on the above.

**Authentication is required for this call.**

### options/blocks: List all the option blocks {#options-blocks}

Returns

- options - a list of the options block names

### options/get: Get all the options {#options-get}

Returns an object where keys are option block names and values are an
object with the current option values in.

This shows the internal names of the option within rclone which should
map to the external options very easily with a few exceptions.

### options/set: Set an option {#options-set}

Parameters

- option block name containing an object with
  - key: value

Repeated as often as required.

Only supply the options you wish to change.  If an option is unknown
it will be silently ignored.  Not all options will have an effect when
changed like this.
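Before changing anything it can be useful to list the option blocks and dump their current values:

    rclone rc options/blocks
    rclone rc options/get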
For example:

This sets DEBUG level logs (-vv)

    rclone rc options/set --json '{"main": {"LogLevel": 8}}'

And this sets INFO level logs (-v)

    rclone rc options/set --json '{"main": {"LogLevel": 7}}'

And this sets NOTICE level logs (normal without -v)

    rclone rc options/set --json '{"main": {"LogLevel": 6}}'

### pluginsctl/addPlugin: Add a plugin using url {#pluginsctl-addPlugin}

Used for adding a plugin to the webgui.

This takes the following parameters

- url: http url of the github repo where the plugin is hosted (http://github.com/rclone/rclone-webui-react)

Eg

    rclone rc pluginsctl/addPlugin

**Authentication is required for this call.**

### pluginsctl/getPluginsForType: Get plugins with type criteria {#pluginsctl-getPluginsForType}

This shows all possible plugins by a mime type.

This takes the following parameters

- type: supported mime type by a loaded plugin eg (video/mp4, audio/mp3)
- pluginType: filter plugins based on their type eg (DASHBOARD, FILE_HANDLER, TERMINAL)

and returns

- loadedPlugins: list of current production plugins
- testPlugins: list of temporarily loaded development plugins, usually running on a different server.

Eg

    rclone rc pluginsctl/getPluginsForType type=video/mp4

**Authentication is required for this call.**

### pluginsctl/listPlugins: Get the list of currently loaded plugins {#pluginsctl-listPlugins}

This allows you to get the currently enabled plugins and their details.

This takes no parameters and returns

- loadedPlugins: list of current production plugins
- testPlugins: list of temporarily loaded development plugins, usually running on a different server.

Eg

    rclone rc pluginsctl/listPlugins

**Authentication is required for this call.**

### pluginsctl/listTestPlugins: Show currently loaded test plugins {#pluginsctl-listTestPlugins}

Allows listing of test plugins with `rclone.test` set to true in the
package.json of the plugin.

This takes no parameters and returns

- loadedTestPlugins: list of currently available test plugins

Eg

    rclone rc pluginsctl/listTestPlugins

**Authentication is required for this call.**

### pluginsctl/removePlugin: Remove a loaded plugin {#pluginsctl-removePlugin}

This allows you to remove a plugin using its name.

This takes parameters

- name: name of the plugin in the format `author`/`plugin_name`

Eg

    rclone rc pluginsctl/removePlugin name=rclone/video-plugin

**Authentication is required for this call.**

### pluginsctl/removeTestPlugin: Remove a test plugin {#pluginsctl-removeTestPlugin}

This allows you to remove a plugin using its name.

This takes the following parameters

- name: name of the plugin in the format `author`/`plugin_name`

Eg

    rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react

**Authentication is required for this call.**

### rc/error: This returns an error {#rc-error}

This returns an error with the input as part of its error string.
Useful for testing error handling.

### rc/list: List all the registered remote control commands {#rc-list}

This lists all the registered remote control commands as a JSON map in
the commands response.

### rc/noop: Echo the input to the output parameters {#rc-noop}

This echoes the input parameters to the output parameters for testing
purposes.  It can be used to check that rclone is still alive and to
check that parameter passing is working properly.
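A minimal liveness check can be built on this; a sketch (the parameter name is arbitrary):

    rclone rc rc/noop ping=pong && echo "rc is alive"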
### rc/noopauth: Echo the input to the output parameters requiring auth {#rc-noopauth}

This echoes the input parameters to the output parameters for testing
purposes.  It can be used to check that rclone is still alive and to
check that parameter passing is working properly.

**Authentication is required for this call.**

### sync/copy: copy a directory from source remote to destination remote {#sync-copy}

This takes the following parameters

- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination

See the [copy command](https://rclone.org/commands/rclone_copy/) command for more information on the above.

**Authentication is required for this call.**

### sync/move: move a directory from source remote to destination remote {#sync-move}

This takes the following parameters

- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination
- deleteEmptySrcDirs - delete empty src directories if set

See the [move command](https://rclone.org/commands/rclone_move/) command for more information on the above.

**Authentication is required for this call.**

### sync/sync: sync a directory from source remote to destination remote {#sync-sync}

This takes the following parameters

- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination

See the [sync command](https://rclone.org/commands/rclone_sync/) command for more information on the above.

**Authentication is required for this call.**

### vfs/forget: Forget files or directories in the directory cache. {#vfs-forget}

This forgets the paths in the directory cache causing them to be
re-read from the remote when needed.

If no paths are passed in then it will forget all the paths in the
directory cache.

    rclone rc vfs/forget

Otherwise pass files or dirs in as file=path or dir=path.  Any
parameter key starting with file will forget that file and any
starting with dir will forget that dir, eg

    rclone rc vfs/forget file=hello file2=goodbye dir=home/junk

This command takes an "fs" parameter. If this parameter is not
supplied and if there is only one VFS in use then that VFS will be
used. If there is more than one VFS in use then the "fs" parameter
must be supplied.

### vfs/list: List active VFSes. {#vfs-list}

This lists the active VFSes.

It returns a list under the key "vfses" where the values are the VFS
names that could be passed to the other VFS commands in the "fs"
parameter.

### vfs/poll-interval: Get the status or update the value of the poll-interval option. {#vfs-poll-interval}

Without any parameter given this returns the current status of the
poll-interval setting.

When the interval=duration parameter is set, the poll-interval value
is updated and the polling function is notified.
Setting interval=0 disables poll-interval.

    rclone rc vfs/poll-interval interval=5m

The timeout=duration parameter can be used to specify a time to wait
for the current poll function to apply the new value.
If timeout is less than or equal to 0, which is the default, it waits
indefinitely.

The new poll-interval value will only be active when the timeout is
not reached.

If poll-interval is updated or disabled temporarily, some changes
might not get picked up by the polling function, depending on the
used remote.

This command takes an "fs" parameter. If this parameter is not
supplied and if there is only one VFS in use then that VFS will be
used. If there is more than one VFS in use then the "fs" parameter
must be supplied.
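For example, a sketch that lists the active VFSes and then targets one (hypothetical) VFS explicitly:

    rclone rc vfs/list
    rclone rc vfs/poll-interval fs=mydrive: interval=1m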
### vfs/refresh: Refresh the directory cache. {#vfs-refresh}

This reads the directories for the specified paths and freshens the
directory cache.

If no paths are passed in then it will refresh the root directory.

    rclone rc vfs/refresh

Otherwise pass directories in as dir=path. Any parameter key
starting with dir will refresh that directory, eg

    rclone rc vfs/refresh dir=home/junk dir2=data/misc

If the parameter recursive=true is given the whole directory tree
will get refreshed. This refresh will use --fast-list if enabled.

This command takes an "fs" parameter. If this parameter is not
supplied and if there is only one VFS in use then that VFS will be
used. If there is more than one VFS in use then the "fs" parameter
must be supplied.

## Accessing the remote control via HTTP

Rclone implements a simple HTTP based protocol.

Each endpoint takes a JSON object and returns a JSON object or an
error.  The JSON objects are essentially a map of string names to
values.

All calls must be made using POST.

The input objects can be supplied using URL parameters, POST
parameters or by supplying "Content-Type: application/json" and a JSON
blob in the body.  There are examples of these below using `curl`.

The response will be a JSON blob in the body of the response.  This is
formatted to be reasonably human readable.

### Error returns

If an error occurs then there will be an HTTP error status (eg 500)
and the body of the response will contain a JSON encoded error object,
eg

```
{
	"error": "Expecting string value for key \"remote\" (was float64)",
	"input": {
		"fs": "/tmp",
		"remote": 3
	},
	"path": "operations/rmdir",
	"status": 400
}
```

The keys in the error response are
- error - error string
- input - the input parameters to the call
- status - the HTTP status code
- path - the path of the call

### CORS

The server implements basic CORS support and allows all origins for that.
The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back.

### Using POST with URL parameters only

```
curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'
```

Response

```
{
	"potato": "1",
	"sausage": "2"
}
```

Here is what an error response looks like:

```
curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
```

```
{
	"error": "arbitrary error on input map[potato:1 sausage:2]",
	"input": {
		"potato": "1",
		"sausage": "2"
	}
}
```

Note that curl doesn't return errors to the shell unless you use the `-f` option

```
$ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
curl: (22) The requested URL returned error: 400 Bad Request
$ echo $?
22
```

### Using POST with a form

```
curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop
```

Response

```
{
	"potato": "1",
	"sausage": "2"
}
```

Note that you can combine these with URL parameters too with the POST
parameters taking precedence.

```
curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"
```

Response

```
{
	"potato": "1",
	"rutabaga": "3",
	"sausage": "4"
}
```

### Using POST with a JSON blob

```
curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop
```

response

```
{
	"potato": 2,
	"sausage": 1
}
```

This can be combined with URL parameters too if required.  The JSON
blob takes precedence.

```
curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'
```

```
{
	"potato": 2,
	"rutabaga": "3",
	"sausage": 1
}
```

## Debugging rclone with pprof ##

If you use the `--rc` flag this will also enable the use of the go
profiling tools on the same port.
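For example, assuming the default listen address, an index of the available profiles is served at `/debug/pprof/`:

    curl http://localhost:5572/debug/pprof/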
To use these, first [install go](https://golang.org/doc/install).

### Debugging memory use

To profile rclone's memory use you can run:

    go tool pprof -web http://localhost:5572/debug/pprof/heap

This should open a page in your browser showing what is using what
memory.

You can also use the `-text` flag to produce a textual summary

```
$ go tool pprof -text http://localhost:5572/debug/pprof/heap
Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
      flat  flat%   sum%        cum   cum%
 1024.03kB 66.62% 66.62%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
     513kB 33.38%   100%      513kB 33.38%  net/http.newBufioWriterSize
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/all.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve/restic.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init.0
         0     0%   100%  1024.03kB 66.62%  main.init
         0     0%   100%      513kB 33.38%  net/http.(*conn).readRequest
         0     0%   100%      513kB 33.38%  net/http.(*conn).serve
         0     0%   100%  1024.03kB 66.62%  runtime.main
```

### Debugging go routine leaks

Memory leaks are most often caused by go routine leaks keeping memory
alive which should have been garbage collected.

See all active go routines using

    curl http://localhost:5572/debug/pprof/goroutine?debug=1

Or go to http://localhost:5572/debug/pprof/goroutine?debug=1 in your browser.

### Other profiles to look at

You can see a summary of profiles available at http://localhost:5572/debug/pprof/

Here is how to use some of them:

- Memory: `go tool pprof http://localhost:5572/debug/pprof/heap`
- Go routines: `curl http://localhost:5572/debug/pprof/goroutine?debug=1`
- 30-second CPU profile: `go tool pprof http://localhost:5572/debug/pprof/profile`
- 5-second execution trace: `wget http://localhost:5572/debug/pprof/trace?seconds=5`
- Goroutine blocking profile
    - Enable first with: `rclone rc debug/set-block-profile-rate rate=1` ([docs](#debug-set-block-profile-rate))
    - `go tool pprof http://localhost:5572/debug/pprof/block`
- Contended mutexes:
    - Enable first with: `rclone rc debug/set-mutex-profile-fraction rate=1` ([docs](#debug-set-mutex-profile-fraction))
    - `go tool pprof http://localhost:5572/debug/pprof/mutex`

See the [net/http/pprof docs](https://golang.org/pkg/net/http/pprof/)
for more info on how to use the profiling and for a general overview
see [the Go team's blog post on profiling go programs](https://blog.golang.org/profiling-go-programs).

The profiling hook is [zero overhead unless it is used](https://stackoverflow.com/q/26545159/164234).

# Overview of cloud storage systems #

Each cloud storage system is slightly different.  Rclone attempts to
provide a unified interface to them, but some underlying differences
show through.

## Features ##

Here is an overview of the major features of each cloud storage system.
| Name                         | Hash        | ModTime | Case Insensitive | Duplicate Files | MIME Type |
| ---------------------------- |:-----------:|:-------:|:----------------:|:---------------:|:---------:|
| 1Fichier                     | Whirlpool   | No      | No               | Yes             | R         |
| Amazon Drive                 | MD5         | No      | Yes              | No              | R         |
| Amazon S3                    | MD5         | Yes     | No               | No              | R/W       |
| Backblaze B2                 | SHA1        | Yes     | No               | No              | R/W       |
| Box                          | SHA1        | Yes     | Yes              | No              | -         |
| Citrix ShareFile             | MD5         | Yes     | Yes              | No              | -         |
| Dropbox                      | DBHASH †    | Yes     | Yes              | No              | -         |
| FTP                          | -           | No      | No               | No              | -         |
| Google Cloud Storage         | MD5         | Yes     | No               | No              | R/W       |
| Google Drive                 | MD5         | Yes     | No               | Yes             | R/W       |
| Google Photos                | -           | No      | No               | Yes             | R         |
| HTTP                         | -           | No      | No               | No              | R         |
| Hubic                        | MD5         | Yes     | No               | No              | R/W       |
| Jottacloud                   | MD5         | Yes     | Yes              | No              | R/W       |
| Koofr                        | MD5         | No      | Yes              | No              | -         |
| Mail.ru Cloud                | Mailru ‡‡‡  | Yes     | Yes              | No              | -         |
| Mega                         | -           | No      | No               | Yes             | -         |
| Memory                       | MD5         | Yes     | No               | No              | -         |
| Microsoft Azure Blob Storage | MD5         | Yes     | No               | No              | R/W       |
| Microsoft OneDrive           | SHA1 ‡‡     | Yes     | Yes              | No              | R         |
| OpenDrive                    | MD5         | Yes     | Yes              | Partial \*      | -         |
| OpenStack Swift              | MD5         | Yes     | No               | No              | R/W       |
| pCloud                       | MD5, SHA1   | Yes     | No               | No              | W         |
| premiumize.me                | -           | No      | Yes              | No              | R         |
| put.io                       | CRC-32      | Yes     | No               | Yes             | R         |
| QingStor                     | MD5         | No      | No               | No              | R/W       |
| Seafile                      | -           | No      | No               | No              | -         |
| SFTP                         | MD5, SHA1 ‡ | Yes     | Depends          | No              | -         |
| SugarSync                    | -           | No      | No               | No              | -         |
| Tardigrade                   | -           | Yes     | No               | No              | -         |
| WebDAV                       | MD5, SHA1 ††| Yes ††† | Depends          | No              | -         |
| Yandex Disk                  | MD5         | Yes     | No               | No              | R/W       |
| The local filesystem         | All         | Yes     | Depends          | No              | -         |

### Hash ###

The cloud storage system supports various hash types of the objects.
The hashes are used when transferring data as an integrity check and
can be specifically used with the `--checksum` flag in syncs and in
the `check` command.

To verify checksums when transferring between cloud storage systems
they must support a common hash type.

† Note that Dropbox supports [its own custom
hash](https://www.dropbox.com/developers/reference/content-hash).
This is an SHA256 sum of all the 4MB block SHA256s.

‡ SFTP supports checksums if the same login has shell access and
`md5sum` or `sha1sum` as well as `echo` are in the remote's PATH.

†† WebDAV supports hashes when used with Owncloud and Nextcloud only.

††† WebDAV supports modtimes when used with Owncloud and Nextcloud only.

‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive
for business and SharePoint server support Microsoft's own
[QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).

‡‡‡ Mail.ru uses its own modified SHA1 hash
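For example (a sketch with hypothetical remote names that both support MD5), a sync can be told to compare checksums rather than sizes and modification times, and `rclone check` will verify hashes where a common hash type exists:

    rclone sync --checksum mys3:bucket mygcs:bucket
    rclone check mys3:bucket mygcs:bucket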
### ModTime ###

The cloud storage system supports setting modification times on
objects.  If it does then this enables using the modification times as
part of the sync.  If not then only the size will be checked by
default, though the MD5SUM can be checked with the `--checksum` flag.

All cloud storage systems support some kind of date on the object and
these will be set when transferring from the cloud storage system.

### Case Insensitive ###

If a cloud storage system is case sensitive then it is possible to
have two files which differ only in case, eg `file.txt` and
`FILE.txt`.  If a cloud storage system is case insensitive then that
isn't possible.

This can cause problems when syncing between a case insensitive
system and a case sensitive system.  The symptom of this is that no
matter how many times you run the sync it never completes fully.

The local filesystem and SFTP may or may not be case sensitive
depending on OS.

  * Windows - usually case insensitive, though case is preserved
  * OSX - usually case insensitive, though it is possible to format case sensitive
  * Linux - usually case sensitive, but there are case insensitive file systems (eg FAT formatted USB keys)

Most of the time this doesn't cause any problems as people tend to
avoid files whose name differs only by case even on case sensitive
systems.

### Duplicate files ###

If a cloud storage system allows duplicate files then it can have two
objects with the same name.

This confuses rclone greatly when syncing - use the `rclone dedupe`
command to rename or remove duplicates.

\* Opendrive does not support creation of duplicate files using
their web client interface or other stock clients, but the underlying
storage platform has been determined to allow duplicate files, and it
is possible to create them with `rclone`.  It may be that this is a
mistake or an unsupported feature.

### Restricted filenames ###

Some cloud storage systems might have restrictions on the characters
that are usable in file or directory names.
When `rclone` detects such a name during a file upload, it will
transparently replace the restricted characters with similar looking
Unicode characters.

This process is designed to avoid ambiguous file names as much as
possible and allow moving files between many cloud storage systems
transparently.

The name shown by `rclone` to the user or during log output will only
contain a minimal set of [replaced characters](#restricted-characters)
to ensure correct formatting and not necessarily the actual name used
on the cloud storage.

This transformation is reversed when downloading a file or parsing
`rclone` arguments. For example, when uploading a file named `my
file?.txt` to Onedrive, it will be displayed as `my file?.txt` on the
console, but stored as `my file？.txt` (the `?` gets replaced by the
similar looking `？` character) to Onedrive. The reverse transformation
allows reading a file `unusual/name.txt` from Google Drive, by passing
the name `unusual／name.txt` (the `/` needs to be replaced by the
similar looking `／` character) on the command line.

#### Default restricted characters {#restricted-characters}

The table below shows the characters that are replaced by default.

When a replacement character is found in a filename, this character
will be escaped with the `‛` character to avoid ambiguous file names.
(e.g. a file named `␀.txt` would be shown as `‛␀.txt`)

Each cloud storage backend can use a different set of characters,
which will be specified in the documentation for each backend.

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| SOH       | 0x01  | ␁           |
| STX       | 0x02  | ␂           |
| ETX       | 0x03  | ␃           |
| EOT       | 0x04  | ␄           |
| ENQ       | 0x05  | ␅           |
| ACK       | 0x06  | ␆           |
| BEL       | 0x07  | ␇           |
| BS        | 0x08  | ␈           |
| HT        | 0x09  | ␉           |
| LF        | 0x0A  | ␊           |
| VT        | 0x0B  | ␋           |
| FF        | 0x0C  | ␌           |
| CR        | 0x0D  | ␍           |
| SO        | 0x0E  | ␎           |
| SI        | 0x0F  | ␏           |
| DLE       | 0x10  | ␐           |
| DC1       | 0x11  | ␑           |
| DC2       | 0x12  | ␒           |
| DC3       | 0x13  | ␓           |
| DC4       | 0x14  | ␔           |
| NAK       | 0x15  | ␕           |
| SYN       | 0x16  | ␖           |
| ETB       | 0x17  | ␗           |
| CAN       | 0x18  | ␘           |
| EM        | 0x19  | ␙           |
| SUB       | 0x1A  | ␚           |
| ESC       | 0x1B  | ␛           |
| FS        | 0x1C  | ␜           |
| GS        | 0x1D  | ␝           |
| RS        | 0x1E  | ␞           |
| US        | 0x1F  | ␟           |
| /         | 0x2F  | ／          |
| DEL       | 0x7F  | ␡           |

The default encoding will also encode these file names as they are
problematic with many cloud storage systems.

| File name | Replacement |
| --------- |:-----------:|
| .         | ．          |
| ..        | ．．        |
#### Invalid UTF-8 bytes {#invalid-utf8}

Some backends only support a sequence of well formed UTF-8 bytes
as file or directory names.

In this case all invalid UTF-8 bytes will be replaced with a quoted
representation of the byte value to allow uploading a file to such a
backend. For example, the invalid byte `0xFE` will be encoded as `‛FE`.

A common source of invalid UTF-8 bytes are local filesystems, that store
names in a different encoding than UTF-8 or UTF-16, like latin1. See the
[local filenames](https://rclone.org/local/#filenames) section for details.

#### Encoding option {#encoding}

Most backends have an encoding option, specified as a flag
`--backend-encoding` where `backend` is the name of the backend, or as
a config parameter `encoding` (you'll need to select the Advanced
config in `rclone config` to see it).

This will have a default value which encodes and decodes characters in
such a way as to preserve the maximum number of characters (see
above).

However this can be incorrect in some scenarios, for example if you
have a Windows file system with characters such as `＊` and `？` that
you want to remain as those characters on the remote rather than being
translated to `*` and `?`.

The `--backend-encoding` flags allow you to change that.  You can
disable the encoding completely with `--backend-encoding None` or set
`encoding = None` in the config file.

Encoding takes a comma separated list of encodings.  You can see the
list of all available characters by passing an invalid value to this
flag, eg `--local-encoding "help"` and `rclone help flags encoding`
will show you the defaults for the backends.

| Encoding  | Characters |
| --------- | ---------- |
| Asterisk | `*` |
| BackQuote | `` ` `` |
| BackSlash | `\` |
| Colon | `:` |
| CrLf | CR 0x0D, LF 0x0A |
| Ctl | All control characters 0x00-0x1F |
| Del | DEL 0x7F |
| Dollar | `$` |
| Dot | `.` |
| DoubleQuote | `"` |
| Hash | `#` |
| InvalidUtf8 | An invalid UTF-8 character (eg latin1) |
| LeftCrLfHtVt | CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the left of a string |
| LeftPeriod | `.` on the left of a string |
| LeftSpace | SPACE on the left of a string |
| LeftTilde | `~` on the left of a string |
| LtGt | `<`, `>` |
| None | No characters are encoded |
| Percent | `%` |
| Pipe | \| |
| Question | `?` |
| RightCrLfHtVt | CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string |
| RightPeriod | `.` on the right of a string |
| RightSpace | SPACE on the right of a string |
| SingleQuote | `'` |
| Slash | `/` |

To take a specific example, the FTP backend's default encoding is

    --ftp-encoding "Slash,Del,Ctl,RightSpace,Dot"

However, let's say the FTP server is running on Windows and can't have
any of the invalid Windows characters in file names. You are backing
up Linux servers to this FTP server which do have those characters in
file names. So you would add the Windows set which are

    Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot

to the existing ones, giving:

    Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del

This can be specified using the `--ftp-encoding` flag or using an `encoding`
parameter in the config file.

Or let's say you have a Windows server but you want to preserve `＊`
and `？`, you would then have this as the encoding (the Windows
encoding minus `Asterisk` and `Question`).
Slash,LtGt,DoubleQuote,Colon,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot This can be specified using the `--local-encoding` flag or using an `encoding` parameter in the config file. ### MIME Type ### MIME types (also known as media types) classify types of documents using a simple text classification, eg `text/html` or `application/pdf`. Some cloud storage systems support reading (`R`) the MIME type of objects and some support writing (`W`) the MIME type of objects. The MIME type can be important if you are serving files directly to HTTP from the storage system. If you are copying from a remote which supports reading (`R`) to a remote which supports writing (`W`) then rclone will preserve the MIME types. Otherwise they will be guessed from the extension, or the remote itself may assign the MIME type. ## Optional Features ## All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient. | Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | EmptyDir | | ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------:|:-----:| :------: | | 1Fichier | No | No | No | No | No | No | No | No | No | Yes | | Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | Yes | | Amazon S3 | No | Yes | No | No | Yes | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | | Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | | Box | Yes | Yes | Yes | Yes | Yes ‡‡ | No | Yes | Yes | No | Yes | | Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | Yes | No | No | Yes | | Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | Yes | Yes | Yes | Yes | | FTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | Yes | | Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | | Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | | Google Photos | No | No | No | No | No | No | No | No | No | No | | HTTP | No | No | No | No | No | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | Yes | | Hubic | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | No | | Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | | Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | | Mega | Yes | No | Yes | Yes | Yes | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes | | Memory | No | Yes | No | No | No | Yes | Yes | No | No | No | | Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | | Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | | OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | | OpenStack Swift | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | No | | pCloud | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | | premiumize.me | Yes | No | Yes | Yes | No | No | No | Yes | Yes | Yes | | put.io | Yes | No | Yes | Yes | Yes | No | Yes | No 
[#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes |
| QingStor                     | No    | Yes  | No   | No      | Yes     | Yes   | No           | No [#2178](https://github.com/rclone/rclone/issues/2178) | No  | No  |
| Seafile                      | Yes   | Yes  | Yes  | Yes     | Yes     | Yes   | Yes          | Yes   | Yes | Yes |
| SFTP                         | No    | No   | Yes  | Yes     | No      | No    | Yes          | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes |
| SugarSync                    | Yes   | Yes  | Yes  | Yes     | No      | No    | Yes          | Yes   | No  | Yes |
| Tardigrade                   | Yes † | No   | No   | No      | No      | Yes   | Yes          | No    | No  | No  |
| WebDAV                       | Yes   | Yes  | Yes  | Yes     | No      | No    | Yes ‡        | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes |
| Yandex Disk                  | Yes   | Yes  | Yes  | Yes     | Yes     | No    | Yes          | Yes   | Yes | Yes |
| The local filesystem         | Yes   | No   | Yes  | Yes     | No      | No    | Yes          | No    | Yes | Yes |

### Purge ###

This deletes a directory quicker than just deleting all the files in
the directory.

† Note Swift, Hubic, and Tardigrade implement this in order to delete
directory markers but they don't actually have a quicker way of deleting
files other than deleting them individually.

‡ StreamUpload is not supported with Nextcloud

### Copy ###

Used when copying an object to and from the same remote.  This is known
as a server side copy so you can copy a file without downloading it
and uploading it again.  It is used if you use `rclone copy` or
`rclone move` if the remote doesn't support `Move` directly.

If the server doesn't support `Copy` directly then for copy operations
the file is downloaded then re-uploaded.

### Move ###

Used when moving/renaming an object on the same remote.  This is known
as a server side move of a file.  This is used in `rclone move` if the
server doesn't support `DirMove`.

If the server isn't capable of `Move` then rclone simulates it with
`Copy` then delete.  If the server doesn't support `Copy` then rclone
will download the file and re-upload it.

### DirMove ###

This is used to implement `rclone move` to move a directory if
possible.  If it isn't then it will use `Move` on each file (which
falls back to `Copy` then download and upload - see `Move` section).

### CleanUp ###

This is used for emptying the trash for a remote by `rclone cleanup`.

If the server can't do `CleanUp` then `rclone cleanup` will return an
error.

‡‡ Note that while Box implements this it has to delete every file
individually so it will be slower than emptying the trash via the WebUI

### ListR ###

The remote supports a recursive list to list all the contents beneath
a directory quickly.  This enables the `--fast-list` flag to work.
See the [rclone docs](https://rclone.org/docs/#fast-list) for more details.

### StreamUpload ###

Some remotes allow files to be uploaded without knowing the file size
in advance. This allows certain operations to work without spooling the
file to local disk first, e.g. `rclone rcat`.

### LinkSharing ###

Sets the necessary permissions on a file or folder and prints a link
that allows others to access them, even if they don't have an account
on the particular cloud provider.

### About ###

This is used to fetch quota information from the remote, like bytes
used/free/quota and bytes used in the trash.

This is also used to return the space used and available for `rclone mount`.

If the server can't do `About` then `rclone about` will return an
error.

### EmptyDir ###

The remote supports empty directories. See [Limitations](https://rclone.org/bugs/#limitations)
 for details. Most Object/Bucket based remotes do not support this.
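Whether a given remote implements these optional features can be checked with the `operations/fsinfo` remote control call shown earlier; a sketch with a hypothetical remote name:

    rclone rc --loopback operations/fsinfo fs=mydrive: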
# Global Flags This describes the global flags available to every rclone command split into two groups, non backend and backend flags. ## Non Backend Flags These flags are available for every command. ``` --ask-password Allow prompt for password for encrypted configuration. (default true) --auto-confirm If enabled, do not request console confirmation. --backup-dir string Make backups into hierarchy based in DIR. --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --bwlimit-file BwTimetable Bandwidth limit per file in kBytes/s, or use suffix b|k|M|G or a full timetable. --ca-cert string CA certificate used to verify servers --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --check-first Do all the checks before starting transfers. --checkers int Number of checkers to run in parallel. (default 8) -c, --checksum Skip based on checksum (if available) & size, not mod-time & size --client-cert string Client SSL certificate (PEM) for mutual TLS auth --client-key string Client SSL private key (PEM) for mutual TLS auth --compare-dest string Include additional server-side path during comparison. --config string Config file. (default "$HOME/.config/rclone/rclone.conf") --contimeout duration Connect timeout (default 1m0s) --copy-dest string Implies --compare-dest but also copies files from path into destination. --cpuprofile string Write cpu profile to file --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer --delete-excluded Delete files on dest excluded from sync --disable string Disable a comma separated list of features. Use help to see a list. -n, --dry-run Do a trial run with no permanent changes --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP headers - may contain sensitive info --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file (use - to read from stdin) --exclude-if-present string Exclude directories if filename is present --expect-continue-timeout duration Timeout when using expect / 100-continue in HTTP (default 1s) --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file (use - to read from stdin)
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
--ignore-case Ignore case in filters (case insensitive)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file (use - to read from stdin)
-i, --interactive Enable interactive mode
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-duration duration Maximum duration rclone will transfer data for.
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250M)
--multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-check-dest Don't check the destination, copy regardless.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-unicode-normalization Don't normalize unicode characters in filenames.
--no-update-modtime Don't update destination mod-time if files identical.
--order-by string Instructions on how to order the transfers, eg 'size,descending'
--password-command SpaceSepList Command for supplying password for encrypted configuration.
-P, --progress Show progress during transfer.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to.
(default "localhost:5572") --rc-allow-origin string Set the allowed origin for CORS. --rc-baseurl string Prefix for URLs - leave blank for root. --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --rc-client-ca string Client certificate authority to verify clients with --rc-enable-metrics Enable prometheus metrics on /metrics --rc-files string Path to local files to serve on the HTTP server. --rc-htpasswd string htpasswd file - if not provided no authentication is done --rc-job-expire-duration duration expire finished async jobs older than this value (default 1m0s) --rc-job-expire-interval duration interval to check for expired async jobs (default 10s) --rc-key string SSL PEM Private key --rc-max-header-bytes int Maximum size of request header (default 4096) --rc-no-auth Don't require auth for certain methods. --rc-pass string Password for authentication. --rc-realm string realm for authentication (default "rclone") --rc-serve Enable the serving of remote objects. --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-template string User Specified Template. --rc-user string User name for authentication. --rc-web-fetch-url string URL to fetch the releases for webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") --rc-web-gui Launch WebGUI on localhost --rc-web-gui-force-update Force update to latest version of web gui --rc-web-gui-no-open-browser Don't open the browser automatically --rc-web-gui-update Check and update to latest version of web gui --refresh-times Refresh the modtime of remote files. --retries int Retry operations this many times if they fail (default 3) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --size-only Skip based on size only, not mod-time or checksum --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-one-line Make the stats fit on one line. --stats-one-line-date Enables --stats-one-line and add current date/time prefix. --stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix to add to changed files. --suffix-keep-extension Preserve the extension when using --suffix. --syslog Use Syslog for logging --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --timeout duration IO idle timeout (default 5m0s) --tpslimit float Limit HTTP transactions per second to this. --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --track-renames When synchronizing, track file renames and do a server side move if possible --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash") --transfers int Number of file transfers to run in parallel. 
(default 4) -u, --update Skip files that are newer on the destination. --use-cookies Enable session cookiejar. --use-json-log Use json log format. --use-mmap Use mmap allocator (see docs). --use-server-modtime Use server modified time instead of object metadata --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.53.3") -v, --verbose count Print lots more stuff (repeat for more) ``` ## Backend Flags These flags are available for every command. They control the backends and may be set in the config file. ``` --acd-auth-url string Auth server URL. --acd-client-id string OAuth Client Id --acd-client-secret string OAuth Client Secret --acd-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot) --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) --acd-token string OAuth Access Token as a JSON blob. --acd-token-url string Token server url. --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --alias-remote string Remote or path to alias. --azureblob-access-tier string Access tier of blob: hot, cool or archive. --azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator) --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) --azureblob-disable-checksum Don't store MD5 checksum with object metadata. --azureblob-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8) --azureblob-endpoint string Endpoint for the service --azureblob-key string Storage Account Key (leave blank to use SAS URL or Emulator) --azureblob-list-chunk int Size of blob list. (default 5000) --azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s) --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. --azureblob-sas-url string SAS URL for container level access only --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint) --b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4G) --b2-disable-checksum Disable checksums for large (> upload cutoff) files --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w) --b2-download-url string Custom endpoint for downloads. --b2-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --b2-endpoint string Endpoint for the service. --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-key string Application Key --b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s) --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) --b2-versions Include old versions in directory listings. 
--box-access-token string Box App Primary Access Token --box-auth-url string Auth server URL. --box-box-config-file string Box App config.json location --box-box-sub-type string (default "user") --box-client-id string OAuth Client Id --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point. --box-token string OAuth Access Token as a JSON blob. --box-token-url string Token server url. --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-db-purge Clear all the cached data for this remote on start. --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server --cache-plex-password string The password of the Plex user (obscured) --cache-plex-url string The URL of the Plex server --cache-plex-username string The username of the Plex user --cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-remote string Remote to cache. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-writes Cache file data on writes through the FS --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2G) --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks. --chunker-hash-type string Choose how chunker handles hash sums. All modes but "none" require metadata. (default "md5") --chunker-meta-format string Format of the metadata object or "none". By default "simplejson". (default "simplejson") --chunker-name-format string String format of chunk file names. (default "*.rclone_chunk.###") --chunker-remote string Remote to chunk/unchunk. --chunker-start-from int Minimum valid chunk number. Usually 0 or 1. (default 1) -L, --copy-links Follow symlinks and copy the pointed to item. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --crypt-filename-encryption string How to encrypt the filenames. 
(default "standard") --crypt-password string Password or pass phrase for encryption. (obscured) --crypt-password2 string Password or pass phrase for salt. Optional but recommended. (obscured) --crypt-remote string Remote to encrypt/decrypt. --crypt-server-side-across-configs Allow server side operations (eg copy) to work across different crypt configs. --crypt-show-mapping For all files listed show how the names encrypt. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-auth-url string Auth server URL. --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-client-id string Google Application Client Id --drive-client-secret string OAuth Client Secret --drive-disable-http2 Disable drive using http2 (default true) --drive-encoding MultiEncoder This sets the encoding for the backend. (default InvalidUtf8) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-formats string Deprecated: see export_formats --drive-impersonate string Impersonate this user when using a service account. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-keep-revision-forever Keep new head revision of each file forever. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) --drive-root-folder-id string ID of the root folder --drive-scope string Scope that rclone should use when requesting access from drive. --drive-server-side-across-configs Allow server side operations (eg copy) to work across different drive configs. --drive-service-account-credentials string Service Account Credentials JSON blob --drive-service-account-file string Service Account Credentials JSON file path --drive-shared-with-me Only show files that are shared with me. --drive-size-as-quota Show sizes as storage quota usage, not actual size. --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only. --drive-skip-gdocs Skip google documents in all listings. --drive-skip-shortcuts If set skip shortcut files --drive-starred-only Only show files that are starred. --drive-stop-on-upload-limit Make upload limit errors be fatal --drive-team-drive string ID of the Team Drive --drive-token string OAuth Access Token as a JSON blob. --drive-token-url string Token server url. --drive-trashed-only Only show files that are in the trash. --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-use-created-date Use file created date instead of modified date., --drive-use-shared-date Use date file was shared instead of modified date. --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --dropbox-auth-url string Auth server URL. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). 
(default 48M) --dropbox-client-id string OAuth Client Id --dropbox-client-secret string OAuth Client Secret --dropbox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot) --dropbox-impersonate string Impersonate this user when using a business account. --dropbox-token string OAuth Access Token as a JSON blob. --dropbox-token-url string Token server url. --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl --fichier-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot) --fichier-shared-folder string If you want to download a shared folder, add this parameter --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,RightSpace,Dot) --ftp-explicit-tls Use FTP over TLS (Explicit) --ftp-host string FTP host to connect to --ftp-no-check-certificate Do not verify the TLS certificate of the server --ftp-pass string FTP password (obscured) --ftp-port string FTP port, leave blank to use default (21) --ftp-tls Use FTPS over TLS (Implicit) --ftp-user string FTP username, leave blank for current username, $USER --gcs-anonymous Access public buckets and objects without credentials --gcs-auth-url string Auth server URL. --gcs-bucket-acl string Access Control List for new buckets. --gcs-bucket-policy-only Access checks should use bucket-level IAM policies. --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret --gcs-encoding MultiEncoder This sets the encoding for the backend. (default Slash,CrLf,InvalidUtf8,Dot) --gcs-location string Location for the newly created buckets. --gcs-object-acl string Access Control List for new objects. --gcs-project-number string Project number. --gcs-service-account-file string Service Account Credentials JSON file path --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-token string OAuth Access Token as a JSON blob. --gcs-token-url string Token server url. --gphotos-auth-url string Auth server URL. --gphotos-client-id string OAuth Client Id --gphotos-client-secret string OAuth Client Secret --gphotos-read-only Set to make the Google Photos backend read only. --gphotos-read-size Set to read the size of media items. --gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000) --gphotos-token string OAuth Access Token as a JSON blob. --gphotos-token-url string Token server url. --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests to find file sizes in dir listing --http-no-slash Set this if the site doesn't end directories with / --http-url string URL of http host to connect to --hubic-auth-url string Auth server URL. --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --hubic-client-id string OAuth Client Id --hubic-client-secret string OAuth Client Secret --hubic-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8) --hubic-no-chunk Don't chunk files during streaming upload. --hubic-token string OAuth Access Token as a JSON blob. 
--hubic-token-url string Token server url.
--jottacloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-trashed-only Only show files that are in the trash.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--koofr-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive.
--local-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash. (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash). (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
--mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3G)
--mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32M)
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega.
--mega-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password. (obscured)
--mega-user string User name
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-auth-url string Auth server URL.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
(default 10M)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-server-side-across-configs Allow server side operations (eg copy) to work across different onedrive configs.
--onedrive-token string OAuth Access Token as a JSON blob.
--onedrive-token-url string Token server url.
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10M)
--opendrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password. (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL.
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to. (default "api.pcloud.com")
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point. (default "d0")
--pcloud-token string OAuth Access Token as a JSON blob.
--pcloud-token-url string Token server url.
--premiumizeme-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-bucket-acl string Canned ACL used when creating buckets.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656G)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request). (default 1000)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-max-upload-parts int Maximum number of parts in a multipart upload. (default 10000)
--s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s)
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
--s3-no-check-bucket If set don't attempt to check the bucket exists or create it
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
--s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
--s3-sse-customer-key-md5 string If using SSE-C you must provide the secret encryption key MD5 checksum.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint.
--s3-v2-auth If true use v2 authentication.
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library. Leave blank to access all non-encrypted libraries.
--seafile-library-key string Library password (for encrypted libraries only). Leave blank if you pass it through the command line. (obscured)
--seafile-pass string Password (obscured)
--seafile-url string URL of seafile host to connect to
--seafile-user string User name (usually email address)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. (obscured)
--sftp-key-pem string Raw PEM-encoded private key. If specified, will override the key_file parameter.
--sftp-key-use-agent When set forces the usage of the ssh-agent.
--sftp-md5sum-command string The command used to read md5 hashes. Leave blank for autodetect.
--sftp-pass string SSH password, leave blank to use ssh-agent. (obscured)
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-server-command string Specifies the path or command to run an sftp server on the remote host.
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-sha1sum-command string The command used to read sha1 hashes. Leave blank for autodetect.
--sftp-skip-links Set to skip any symlinks and any other non regular files.
--sftp-subsystem string Specifies the SSH2 subsystem on the remote host. (default "sftp")
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods.
--sftp-user string SSH username, leave blank for current username, ncw
--sharefile-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 64M)
--sharefile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls.
--sharefile-root-folder-id string ID of the root folder
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128M)
--skip-links Don't warn about skipped symlinks.
--sugarsync-access-key-id string Sugarsync Access Key ID.
--sugarsync-app-id string Sugarsync App ID.
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
--sugarsync-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
--sugarsync-root-id string Sugarsync root id
--sugarsync-user string Sugarsync user
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--tardigrade-access-grant string Access Grant.
--tardigrade-api-key string API Key.
--tardigrade-passphrase string Encryption Passphrase. To access existing objects enter passphrase used for uploading.
--tardigrade-provider string Choose an authentication method. (default "existing")
--tardigrade-satellite-address <nodeid>@<address>:<port> Satellite Address. Custom satellite address should match the format: <nodeid>@<address>:<port>. (default "us-central-1.tardigrade.io")
--union-action-policy string Policy to choose upstream on ACTION category. (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. (default 120)
--union-create-policy string Policy to choose upstream on CREATE category. (default "epmfs")
--union-search-policy string Policy to choose upstream on SEARCH category. (default "ff")
--union-upstreams string List of space separated upstreams.
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-pass string Password. (obscured)
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-auth-url string Auth server URL.
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-token string OAuth Access Token as a JSON blob.
--yandex-token-url string Token server url.
```

1Fichier
-----------------------------------------

This is a backend for the [1fichier](https://1fichier.com) cloud storage service. Note that a Premium subscription is required to use the API.

Paths are specified as `remote:path`

Paths may be as deep as required, eg `remote:directory/subdirectory`.

The initial setup for 1Fichier involves getting the API key from the website which you need to do in your browser.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / 1Fichier
   \ "fichier"
[snip]
Storage> fichier
** See help for fichier backend at: https://rclone.org/fichier/ **

Your API Key, get it from https://1fichier.com/console/params.pl
Enter a string value. Press Enter for the default ("").
api_key> example_key
Edit advanced config? (y/n)
y) Yes
n) No
y/n>
Remote config
--------------------
[remote]
type = fichier
api_key = example_key
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

Once configured you can then use `rclone` like this,

List directories in top level of your 1Fichier account

    rclone lsd remote:

List all the files in your 1Fichier account

    rclone ls remote:

To copy a local directory to a 1Fichier directory called backup

    rclone copy /home/source remote:backup

### Modified time and hashes ###

1Fichier does not support modification times. It supports the Whirlpool hash algorithm.

### Duplicated files ###

1Fichier can have two files with exactly the same name and path (unlike a normal file system).

Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
#### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \         | 0x5C  | ＼          |
| <         | 0x3C  | ＜          |
| >         | 0x3E  | ＞          |
| "         | 0x22  | ＂          |
| $         | 0x24  | ＄          |
| `         | 0x60  | ｀          |
| '         | 0x27  | ＇          |

File names can also not start or end with the following characters. These only get replaced if they are the first or last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

### Standard Options

Here are the standard options specific to fichier (1Fichier).

#### --fichier-api-key

Your API Key, get it from https://1fichier.com/console/params.pl

- Config: api_key
- Env Var: RCLONE_FICHIER_API_KEY
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to fichier (1Fichier).

#### --fichier-shared-folder

If you want to download a shared folder, add this parameter

- Config: shared_folder
- Env Var: RCLONE_FICHIER_SHARED_FOLDER
- Type: string
- Default: ""

#### --fichier-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_FICHIER_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot

Alias
-----------------------------------------

The `alias` remote provides a new name for another remote.

Paths may be as deep as required or a local path, eg `remote:directory/subdirectory` or `/directory/subdirectory`.

During the initial setup with `rclone config` you will specify the target remote. The target remote can either be a local path or another remote.

Subfolders can be used in target remote. Assume an alias remote named `backup` with the target `mydrive:private/backup`. Invoking `rclone mkdir backup:desktop` is exactly the same as invoking `rclone mkdir mydrive:private/backup/desktop`.

There will be no special handling of paths containing `..` segments. Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking `rclone mkdir mydrive:private/backup/../desktop`.

The empty path is not allowed as a remote. To alias the current directory use `.` instead.

Here is an example of how to make an alias called `remote` for a local folder. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Alias for an existing remote
   \ "alias"
[snip]
Storage> alias
Remote or path to alias.
Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
remote> /mnt/storage/backup
Remote config
--------------------
[remote]
remote = /mnt/storage/backup
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote               alias

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
```

Once configured you can then use `rclone` like this,

List directories in top level in `/mnt/storage/backup`

    rclone lsd remote:

List all the files in `/mnt/storage/backup`

    rclone ls remote:

Copy another local directory to the alias directory called source

    rclone copy /home/source remote:source

### Standard Options

Here are the standard options specific to alias (Alias for an existing remote).

#### --alias-remote

Remote or path to alias.
Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".

- Config: remote
- Env Var: RCLONE_ALIAS_REMOTE
- Type: string
- Default: ""

Amazon Drive
-----------------------------------------

Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.

## Status

**Important:** rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the [Amazon Drive developer program](https://developer.amazon.com/amazon-drive) is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.

For the history on why rclone no longer has a set of Amazon Drive API keys see [the forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314).

If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!

## Setup

The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. `rclone config` walks you through it.

The configuration process for Amazon Drive may involve using an [oauth proxy](https://github.com/ncw/oauthproxy). This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.

Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own `client_id` and `client_secret` with Amazon Drive, or use a third party oauth proxy, in which case you will need to enter `client_id`, `client_secret`, `auth_url` and `token_url`.

Note also if you are not using Amazon's `auth_url` and `token_url` (ie you filled in something for those), then if setting up on a remote machine you can only use the [copying the config method of configuration](https://rclone.org/remote_setup/#configuring-by-copying-the-config-file) - `rclone authorize` will not work.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon Drive
   \ "amazon cloud drive"
[snip]
Storage> amazon cloud drive
Amazon Application Client Id - required.
client_id> your client ID goes here
Amazon Application Client Secret - required.
client_secret> your client secret goes here
Auth server URL - leave blank to use Amazon's.
auth_url> Optional auth URL
Token server url - leave blank to use Amazon's.
token_url> Optional token URL
Remote config
Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id = your client ID goes here
client_secret = your client secret goes here
auth_url = Optional auth URL
token_url = Optional token URL
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use `rclone` like this,

List directories in top level of your Amazon Drive

    rclone lsd remote:

List all the files in your Amazon Drive

    rclone ls remote:

To copy a local directory to an Amazon Drive directory called backup

    rclone copy /home/source remote:backup

### Modified time and MD5SUMs ###

Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.

It does store MD5SUMs so for a more accurate sync, you can use the `--checksum` flag.

#### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

### Deleting files ###

Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.

### Using with non `.com` Amazon accounts ###

Let's say you usually use `amazon.co.uk`. When you authenticate with rclone it will take you to an `amazon.com` page to log in. Your `amazon.co.uk` email and password should work here just fine.

### Standard Options

Here are the standard options specific to amazon cloud drive (Amazon Drive).

#### --acd-client-id

OAuth Client Id
Leave blank normally.

- Config: client_id
- Env Var: RCLONE_ACD_CLIENT_ID
- Type: string
- Default: ""

#### --acd-client-secret

OAuth Client Secret
Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_ACD_CLIENT_SECRET
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to amazon cloud drive (Amazon Drive).

#### --acd-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_ACD_TOKEN
- Type: string
- Default: ""

#### --acd-auth-url

Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
- Env Var: RCLONE_ACD_AUTH_URL
- Type: string
- Default: ""

#### --acd-token-url

Token server url.
Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_ACD_TOKEN_URL
- Type: string
- Default: ""

#### --acd-checkpoint

Checkpoint for internal polling (debug).

- Config: checkpoint
- Env Var: RCLONE_ACD_CHECKPOINT
- Type: string
- Default: ""

#### --acd-upload-wait-per-gb

Additional time per GB to wait after a failed complete upload to see if it appears.

Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.

The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.

You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.

These values were determined empirically by observing lots of uploads of big files for a range of file sizes.

Upload with the "-v" flag to see more info about what rclone is doing in this situation.

- Config: upload_wait_per_gb
- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
- Type: Duration
- Default: 3m0s

#### --acd-templink-threshold

Files >= this size will be downloaded via their tempLink.

Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.

To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.

- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
- Default: 9G

#### --acd-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot

### Limitations ###

Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see `--retries` flag) which should hopefully work around this problem.

Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.

At the time of writing (Jan 2016) it is in the area of 50GB per file. This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as it would any other failure. To avoid this problem, use the `--max-size 50000M` option to limit the maximum size of uploaded files. Note that `--max-size` does not split files into segments, it only ignores files over this size.
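As an illustration of the workaround above (a sketch, assuming a local source directory `/home/source` and a configured Amazon Drive remote named `remote:`):

    rclone sync --max-size 50000M /home/source remote:backup

Anything over the limit is simply excluded from the transfer rather than split, so oversized files are skipped instead of repeatedly failing.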
Amazon S3 Storage Providers -------------------------------------------------------- The S3 backend can be used with a number of different providers: - AWS S3 - Alibaba Cloud (Aliyun) Object Storage System (OSS) - Ceph - DigitalOcean Spaces - Dreamhost - IBM COS S3 - Minio - Scaleway - StackPath - Tencent Cloud Object Storage (COS) - Wasabi Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. Once you have made a remote (see the provider specific section above) you can use it like this: See all buckets rclone lsd remote: Make a new bucket rclone mkdir remote:bucket List the contents of a bucket rclone ls remote:bucket Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. rclone sync -i /home/local/directory remote:bucket ## AWS S3 {#amazon-s3} Here is an example of making an s3 configuration. First run rclone config This will guide you through an interactive setup process. ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) \ "s3" [snip] Storage> s3 Choose your S3 provider. Choose a number from below, or type in your own value 1 / Amazon Web Services (AWS) S3 \ "AWS" 2 / Ceph Object Storage \ "Ceph" 3 / Digital Ocean Spaces \ "DigitalOcean" 4 / Dreamhost DreamObjects \ "Dreamhost" 5 / IBM COS S3 \ "IBMCOS" 6 / Minio Object Storage \ "Minio" 7 / Wasabi Object Storage \ "Wasabi" 8 / Any other S3 compatible provider \ "Other" provider> 1 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> XXX AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> YYY Region to connect to. Choose a number from below, or type in your own value / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia or Pacific Northwest. | Leave location constraint empty. \ "us-east-1" / US East (Ohio) Region 2 | Needs location constraint us-east-2. \ "us-east-2" / US West (Oregon) Region 3 | Needs location constraint us-west-2. \ "us-west-2" / US West (Northern California) Region 4 | Needs location constraint us-west-1. \ "us-west-1" / Canada (Central) Region 5 | Needs location constraint ca-central-1. \ "ca-central-1" / EU (Ireland) Region 6 | Needs location constraint EU or eu-west-1. \ "eu-west-1" / EU (London) Region 7 | Needs location constraint eu-west-2. \ "eu-west-2" / EU (Frankfurt) Region 8 | Needs location constraint eu-central-1. \ "eu-central-1" / Asia Pacific (Singapore) Region 9 | Needs location constraint ap-southeast-1. \ "ap-southeast-1" / Asia Pacific (Sydney) Region 10 | Needs location constraint ap-southeast-2. \ "ap-southeast-2" / Asia Pacific (Tokyo) Region 11 | Needs location constraint ap-northeast-1. \ "ap-northeast-1" / Asia Pacific (Seoul) 12 | Needs location constraint ap-northeast-2. \ "ap-northeast-2" / Asia Pacific (Mumbai) 13 | Needs location constraint ap-south-1. 
\ "ap-south-1" / Asia Pacific (Hong Kong) Region 14 | Needs location constraint ap-east-1. \ "ap-east-1" / South America (Sao Paulo) Region 15 | Needs location constraint sa-east-1. \ "sa-east-1" region> 1 Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. endpoint> Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" 2 / US East (Ohio) Region. \ "us-east-2" 3 / US West (Oregon) Region. \ "us-west-2" 4 / US West (Northern California) Region. \ "us-west-1" 5 / Canada (Central) Region. \ "ca-central-1" 6 / EU (Ireland) Region. \ "eu-west-1" 7 / EU (London) Region. \ "eu-west-2" 8 / EU Region. \ "EU" 9 / Asia Pacific (Singapore) Region. \ "ap-southeast-1" 10 / Asia Pacific (Sydney) Region. \ "ap-southeast-2" 11 / Asia Pacific (Tokyo) Region. \ "ap-northeast-1" 12 / Asia Pacific (Seoul) \ "ap-northeast-2" 13 / Asia Pacific (Mumbai) \ "ap-south-1" 14 / Asia Pacific (Hong Kong) \ "ap-east-1" 15 / South America (Sao Paulo) Region. \ "sa-east-1" location_constraint> 1 Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. \ "public-read" / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. 3 | Granting this on a bucket is generally not recommended. \ "public-read-write" 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. \ "authenticated-read" / Object owner gets FULL_CONTROL. Bucket owner gets READ access. 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ "bucket-owner-read" / Both the object owner and the bucket owner get FULL_CONTROL over the object. 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ "bucket-owner-full-control" acl> 1 The server-side encryption algorithm used when storing this object in S3. Choose a number from below, or type in your own value 1 / None \ "" 2 / AES256 \ "AES256" server_side_encryption> 1 The storage class to use when storing objects in S3. Choose a number from below, or type in your own value 1 / Default \ "" 2 / Standard storage class \ "STANDARD" 3 / Reduced redundancy storage class \ "REDUCED_REDUNDANCY" 4 / Standard Infrequent Access storage class \ "STANDARD_IA" 5 / One Zone Infrequent Access storage class \ "ONEZONE_IA" 6 / Glacier storage class \ "GLACIER" 7 / Glacier Deep Archive storage class \ "DEEP_ARCHIVE" 8 / Intelligent-Tiering storage class \ "INTELLIGENT_TIERING" storage_class> 1 Remote config -------------------- [remote] type = s3 provider = AWS env_auth = false access_key_id = XXX secret_access_key = YYY region = us-east-1 endpoint = location_constraint = acl = private server_side_encryption = storage_class = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> ``` ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](https://rclone.org/docs/#fast-list) for more details. ### --update and --use-server-modtime ### As noted below, the modified time is stored on metadata on the object. 
It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata. For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using `--update` along with `--use-server-modtime`, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded. ### Modified time ### The modified time is stored as metadata on the object as `X-Amz-Meta-Mtime` as floating point since the epoch accurate to 1 ns. If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification time if the object can be copied in a single part. In the case the object is larger than 5GB or is in Glacier or Glacier Deep Archive storage the object will be uploaded rather than copied. ### Cleanup ### If you run `rclone cleanup s3:bucket` then it will remove all pending multipart uploads older than 24 hours. You can use the `-i` flag to see exactly what it will do. If you want more control over the expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h` to expire all uploads older than one hour. You can use `rclone backend list-multipart-uploads s3:bucket` to see the pending multipart uploads. #### Restricted filename characters S3 allows any valid UTF-8 string as a key. Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in XML. The following characters are replaced since these are problematic when dealing with the REST API: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | NUL | 0x00 | ␀ | | / | 0x2F | / | The encoding will also encode these file names as they don't seem to work with the SDK properly: | File name | Replacement | | --------- |:-----------:| | . | . | | .. | .. | ### Multipart uploads ### rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded *both* with multipart upload *and* through crypt remotes do not have MD5 sums. rclone switches from single part uploads to multipart uploads at the point specified by `--s3-upload-cutoff`. This can be a maximum of 5GB and a minimum of 0 (ie always upload multipart files). The chunk sizes used in the multipart upload are specified by `--s3-chunk-size` and the number of chunks uploaded concurrently is specified by `--s3-upload-concurrency`. Multipart uploads will use `--transfers` * `--s3-upload-concurrency` * `--s3-chunk-size` extra memory. Single part uploads do not use extra memory. Single part transfers can be faster or slower than multipart transfers depending on your latency from S3 - the more latency, the more likely single part transfers will be faster. Increasing `--s3-upload-concurrency` will increase throughput (8 would be a sensible value) and increasing `--s3-chunk-size` also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory. ### Buckets and Regions ### With Amazon S3 you can list buckets (`rclone lsd`) using any region, but you can only access the content of a bucket from the region it was created in.
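If needed, the configured region can also be overridden per command with the `--s3-region` flag documented below; for example, to list a bucket created in `eu-west-1` from a remote otherwise configured for `us-east-1` (the bucket name here is illustrative):

    rclone ls --s3-region eu-west-1 remote:my-eu-bucket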
If you attempt to access a bucket from the wrong region, you will get an error, `incorrect region, the bucket is not in 'XXX' region`. ### Authentication ### There are a number of ways to supply `rclone` with a set of AWS credentials, with and without using the environment. The different authentication methods are tried in this order: - Directly in the rclone configuration file (`env_auth = false` in the config file): - `access_key_id` and `secret_access_key` are required. - `session_token` can be optionally set when using AWS STS. - Runtime configuration (`env_auth = true` in the config file): - Export the following environment variables before running `rclone`: - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` - Session Token: `AWS_SESSION_TOKEN` (optional) - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html): - Profile files are standard files used by AWS CLI tools - By default it will use the credentials file in your home directory (eg `~/.aws/credentials` on unix based systems) and the "default" profile; to change this, set these environment variables: - `AWS_SHARED_CREDENTIALS_FILE` to control which file. - `AWS_PROFILE` to control which profile to use. - Or, run `rclone` in an ECS task with an IAM role (AWS only). - Or, run `rclone` on an EC2 instance with an IAM role (AWS only). - Or, run `rclone` in an EKS pod with an IAM role that is associated with a service account (AWS only). If none of these options actually ends up providing `rclone` with AWS credentials then S3 interaction will be non-authenticated (see below). ### S3 Permissions ### When using the `sync` subcommand of `rclone` the following minimum permissions are required to be available on the bucket being written to: * `ListBucket` * `DeleteObject` * `GetObject` * `PutObject` * `PutObjectACL` When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required. Example policy: ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::USER_SID:user/USER_NAME" }, "Action": [ "s3:ListBucket", "s3:DeleteObject", "s3:GetObject", "s3:PutObject", "s3:PutObjectAcl" ], "Resource": [ "arn:aws:s3:::BUCKET_NAME/*", "arn:aws:s3:::BUCKET_NAME" ] }, { "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "arn:aws:s3:::*" } ] } ``` Notes on above: 1. This is a policy that can be used when creating a bucket. It assumes that `USER_NAME` has been created. 2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects. For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b) that will generate one or more buckets that will work with `rclone sync`. ### Key Management System (KMS) ### If you are using server side encryption with KMS then you will find you can't transfer small objects. As a work-around you can use the `--ignore-checksum` flag. A proper fix is being worked on in [issue #1824](https://github.com/rclone/rclone/issues/1824). ### Glacier and Glacier Deep Archive ### You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html). The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below.
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html) the object(s) in question before using rclone. Note that rclone only speaks the S3 API; it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults. ### Standard Options Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)). #### --s3-provider Choose your S3 provider. - Config: provider - Env Var: RCLONE_S3_PROVIDER - Type: string - Default: "" - Examples: - "AWS" - Amazon Web Services (AWS) S3 - "Alibaba" - Alibaba Cloud Object Storage System (OSS) formerly Aliyun - "Ceph" - Ceph Object Storage - "DigitalOcean" - Digital Ocean Spaces - "Dreamhost" - Dreamhost DreamObjects - "IBMCOS" - IBM COS S3 - "Minio" - Minio Object Storage - "Netease" - Netease Object Storage (NOS) - "Scaleway" - Scaleway Object Storage - "StackPath" - StackPath Object Storage - "TencentCOS" - Tencent Cloud Object Storage (COS) - "Wasabi" - Wasabi Object Storage - "Other" - Any other S3 compatible provider #### --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. - Config: env_auth - Env Var: RCLONE_S3_ENV_AUTH - Type: bool - Default: false - Examples: - "false" - Enter AWS credentials in the next step - "true" - Get AWS credentials from the environment (env vars or IAM) #### --s3-access-key-id AWS Access Key ID. Leave blank for anonymous access or runtime credentials. - Config: access_key_id - Env Var: RCLONE_S3_ACCESS_KEY_ID - Type: string - Default: "" #### --s3-secret-access-key AWS Secret Access Key (password) Leave blank for anonymous access or runtime credentials. - Config: secret_access_key - Env Var: RCLONE_S3_SECRET_ACCESS_KEY - Type: string - Default: "" #### --s3-region Region to connect to. - Config: region - Env Var: RCLONE_S3_REGION - Type: string - Default: "" - Examples: - "us-east-1" - The default endpoint - a good choice if you are unsure. - US Region, Northern Virginia or Pacific Northwest. - Leave location constraint empty. - "us-east-2" - US East (Ohio) Region - Needs location constraint us-east-2. - "us-west-1" - US West (Northern California) Region - Needs location constraint us-west-1. - "us-west-2" - US West (Oregon) Region - Needs location constraint us-west-2. - "ca-central-1" - Canada (Central) Region - Needs location constraint ca-central-1. - "eu-west-1" - EU (Ireland) Region - Needs location constraint EU or eu-west-1. - "eu-west-2" - EU (London) Region - Needs location constraint eu-west-2. - "eu-west-3" - EU (Paris) Region - Needs location constraint eu-west-3. - "eu-north-1" - EU (Stockholm) Region - Needs location constraint eu-north-1. - "eu-south-1" - EU (Milan) Region - Needs location constraint eu-south-1. - "eu-central-1" - EU (Frankfurt) Region - Needs location constraint eu-central-1. - "ap-southeast-1" - Asia Pacific (Singapore) Region - Needs location constraint ap-southeast-1. - "ap-southeast-2" - Asia Pacific (Sydney) Region - Needs location constraint ap-southeast-2. - "ap-northeast-1" - Asia Pacific (Tokyo) Region - Needs location constraint ap-northeast-1. - "ap-northeast-2" - Asia Pacific (Seoul) - Needs location constraint ap-northeast-2.
- "ap-northeast-3" - Asia Pacific (Osaka-Local) - Needs location constraint ap-northeast-3. - "ap-south-1" - Asia Pacific (Mumbai) - Needs location constraint ap-south-1. - "ap-east-1" - Asia Pacific (Hong Kong) Region - Needs location constraint ap-east-1. - "sa-east-1" - South America (Sao Paulo) Region - Needs location constraint sa-east-1. - "me-south-1" - Middle East (Bahrain) Region - Needs location constraint me-south-1. - "af-south-1" - Africa (Cape Town) Region - Needs location constraint af-south-1. - "cn-north-1" - China (Beijing) Region - Needs location constraint cn-north-1. - "cn-northwest-1" - China (Ningxia) Region - Needs location constraint cn-northwest-1. - "us-gov-east-1" - AWS GovCloud (US-East) Region - Needs location constraint us-gov-east-1. - "us-gov-west-1" - AWS GovCloud (US) Region - Needs location constraint us-gov-west-1. #### --s3-region Region to connect to. - Config: region - Env Var: RCLONE_S3_REGION - Type: string - Default: "" - Examples: - "nl-ams" - Amsterdam, The Netherlands - "fr-par" - Paris, France #### --s3-region Region to connect to. Leave blank if you are using an S3 clone and you don't have a region. - Config: region - Env Var: RCLONE_S3_REGION - Type: string - Default: "" - Examples: - "" - Use this if unsure. Will use v4 signatures and an empty region. - "other-v2-signature" - Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. #### --s3-endpoint Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" #### --s3-endpoint Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise. - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" - Examples: - "s3.us.cloud-object-storage.appdomain.cloud" - US Cross Region Endpoint - "s3.dal.us.cloud-object-storage.appdomain.cloud" - US Cross Region Dallas Endpoint - "s3.wdc.us.cloud-object-storage.appdomain.cloud" - US Cross Region Washington DC Endpoint - "s3.sjc.us.cloud-object-storage.appdomain.cloud" - US Cross Region San Jose Endpoint - "s3.private.us.cloud-object-storage.appdomain.cloud" - US Cross Region Private Endpoint - "s3.private.dal.us.cloud-object-storage.appdomain.cloud" - US Cross Region Dallas Private Endpoint - "s3.private.wdc.us.cloud-object-storage.appdomain.cloud" - US Cross Region Washington DC Private Endpoint - "s3.private.sjc.us.cloud-object-storage.appdomain.cloud" - US Cross Region San Jose Private Endpoint - "s3.us-east.cloud-object-storage.appdomain.cloud" - US Region East Endpoint - "s3.private.us-east.cloud-object-storage.appdomain.cloud" - US Region East Private Endpoint - "s3.us-south.cloud-object-storage.appdomain.cloud" - US Region South Endpoint - "s3.private.us-south.cloud-object-storage.appdomain.cloud" - US Region South Private Endpoint - "s3.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Endpoint - "s3.fra.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Frankfurt Endpoint - "s3.mil.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Milan Endpoint - "s3.ams.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Amsterdam Endpoint - "s3.private.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Private Endpoint - "s3.private.fra.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Frankfurt Private Endpoint - "s3.private.mil.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Milan Private Endpoint - "s3.private.ams.eu.cloud-object-storage.appdomain.cloud" - 
EU Cross Region Amsterdam Private Endpoint - "s3.eu-gb.cloud-object-storage.appdomain.cloud" - Great Britain Endpoint - "s3.private.eu-gb.cloud-object-storage.appdomain.cloud" - Great Britain Private Endpoint - "s3.eu-de.cloud-object-storage.appdomain.cloud" - EU Region DE Endpoint - "s3.private.eu-de.cloud-object-storage.appdomain.cloud" - EU Region DE Private Endpoint - "s3.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Endpoint - "s3.tok.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Tokyo Endpoint - "s3.hkg.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional HongKong Endpoint - "s3.seo.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Seoul Endpoint - "s3.private.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Private Endpoint - "s3.private.tok.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Tokyo Private Endpoint - "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional HongKong Private Endpoint - "s3.private.seo.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Seoul Private Endpoint - "s3.jp-tok.cloud-object-storage.appdomain.cloud" - APAC Region Japan Endpoint - "s3.private.jp-tok.cloud-object-storage.appdomain.cloud" - APAC Region Japan Private Endpoint - "s3.au-syd.cloud-object-storage.appdomain.cloud" - APAC Region Australia Endpoint - "s3.private.au-syd.cloud-object-storage.appdomain.cloud" - APAC Region Australia Private Endpoint - "s3.ams03.cloud-object-storage.appdomain.cloud" - Amsterdam Single Site Endpoint - "s3.private.ams03.cloud-object-storage.appdomain.cloud" - Amsterdam Single Site Private Endpoint - "s3.che01.cloud-object-storage.appdomain.cloud" - Chennai Single Site Endpoint - "s3.private.che01.cloud-object-storage.appdomain.cloud" - Chennai Single Site Private Endpoint - "s3.mel01.cloud-object-storage.appdomain.cloud" - Melbourne Single Site Endpoint - "s3.private.mel01.cloud-object-storage.appdomain.cloud" - Melbourne Single Site Private Endpoint - "s3.osl01.cloud-object-storage.appdomain.cloud" - Oslo Single Site Endpoint - "s3.private.osl01.cloud-object-storage.appdomain.cloud" - Oslo Single Site Private Endpoint - "s3.tor01.cloud-object-storage.appdomain.cloud" - Toronto Single Site Endpoint - "s3.private.tor01.cloud-object-storage.appdomain.cloud" - Toronto Single Site Private Endpoint - "s3.seo01.cloud-object-storage.appdomain.cloud" - Seoul Single Site Endpoint - "s3.private.seo01.cloud-object-storage.appdomain.cloud" - Seoul Single Site Private Endpoint - "s3.mon01.cloud-object-storage.appdomain.cloud" - Montreal Single Site Endpoint - "s3.private.mon01.cloud-object-storage.appdomain.cloud" - Montreal Single Site Private Endpoint - "s3.mex01.cloud-object-storage.appdomain.cloud" - Mexico Single Site Endpoint - "s3.private.mex01.cloud-object-storage.appdomain.cloud" - Mexico Single Site Private Endpoint - "s3.sjc04.cloud-object-storage.appdomain.cloud" - San Jose Single Site Endpoint - "s3.private.sjc04.cloud-object-storage.appdomain.cloud" - San Jose Single Site Private Endpoint - "s3.mil01.cloud-object-storage.appdomain.cloud" - Milan Single Site Endpoint - "s3.private.mil01.cloud-object-storage.appdomain.cloud" - Milan Single Site Private Endpoint - "s3.hkg02.cloud-object-storage.appdomain.cloud" - Hong Kong Single Site Endpoint - "s3.private.hkg02.cloud-object-storage.appdomain.cloud" - Hong Kong Single Site Private Endpoint - "s3.par01.cloud-object-storage.appdomain.cloud" - Paris Single Site Endpoint - 
"s3.private.par01.cloud-object-storage.appdomain.cloud" - Paris Single Site Private Endpoint - "s3.sng01.cloud-object-storage.appdomain.cloud" - Singapore Single Site Endpoint - "s3.private.sng01.cloud-object-storage.appdomain.cloud" - Singapore Single Site Private Endpoint #### --s3-endpoint Endpoint for OSS API. - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" - Examples: - "oss-cn-hangzhou.aliyuncs.com" - East China 1 (Hangzhou) - "oss-cn-shanghai.aliyuncs.com" - East China 2 (Shanghai) - "oss-cn-qingdao.aliyuncs.com" - North China 1 (Qingdao) - "oss-cn-beijing.aliyuncs.com" - North China 2 (Beijing) - "oss-cn-zhangjiakou.aliyuncs.com" - North China 3 (Zhangjiakou) - "oss-cn-huhehaote.aliyuncs.com" - North China 5 (Huhehaote) - "oss-cn-shenzhen.aliyuncs.com" - South China 1 (Shenzhen) - "oss-cn-hongkong.aliyuncs.com" - Hong Kong (Hong Kong) - "oss-us-west-1.aliyuncs.com" - US West 1 (Silicon Valley) - "oss-us-east-1.aliyuncs.com" - US East 1 (Virginia) - "oss-ap-southeast-1.aliyuncs.com" - Southeast Asia Southeast 1 (Singapore) - "oss-ap-southeast-2.aliyuncs.com" - Asia Pacific Southeast 2 (Sydney) - "oss-ap-southeast-3.aliyuncs.com" - Southeast Asia Southeast 3 (Kuala Lumpur) - "oss-ap-southeast-5.aliyuncs.com" - Asia Pacific Southeast 5 (Jakarta) - "oss-ap-northeast-1.aliyuncs.com" - Asia Pacific Northeast 1 (Japan) - "oss-ap-south-1.aliyuncs.com" - Asia Pacific South 1 (Mumbai) - "oss-eu-central-1.aliyuncs.com" - Central Europe 1 (Frankfurt) - "oss-eu-west-1.aliyuncs.com" - West Europe (London) - "oss-me-east-1.aliyuncs.com" - Middle East 1 (Dubai) #### --s3-endpoint Endpoint for Scaleway Object Storage. - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" - Examples: - "s3.nl-ams.scw.cloud" - Amsterdam Endpoint - "s3.fr-par.scw.cloud" - Paris Endpoint #### --s3-endpoint Endpoint for StackPath Object Storage. - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" - Examples: - "s3.us-east-2.stackpathstorage.com" - US East Endpoint - "s3.us-west-1.stackpathstorage.com" - US West Endpoint - "s3.eu-central-1.stackpathstorage.com" - EU Endpoint #### --s3-endpoint Endpoint for Tencent COS API. - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" - Examples: - "cos.ap-beijing.myqcloud.com" - Beijing Region. - "cos.ap-nanjing.myqcloud.com" - Nanjing Region. - "cos.ap-shanghai.myqcloud.com" - Shanghai Region. - "cos.ap-guangzhou.myqcloud.com" - Guangzhou Region. - "cos.ap-nanjing.myqcloud.com" - Nanjing Region. - "cos.ap-chengdu.myqcloud.com" - Chengdu Region. - "cos.ap-chongqing.myqcloud.com" - Chongqing Region. - "cos.ap-hongkong.myqcloud.com" - Hong Kong (China) Region. - "cos.ap-singapore.myqcloud.com" - Singapore Region. - "cos.ap-mumbai.myqcloud.com" - Mumbai Region. - "cos.ap-seoul.myqcloud.com" - Seoul Region. - "cos.ap-bangkok.myqcloud.com" - Bangkok Region. - "cos.ap-tokyo.myqcloud.com" - Tokyo Region. - "cos.na-siliconvalley.myqcloud.com" - Silicon Valley Region. - "cos.na-ashburn.myqcloud.com" - Virginia Region. - "cos.na-toronto.myqcloud.com" - Toronto Region. - "cos.eu-frankfurt.myqcloud.com" - Frankfurt Region. - "cos.eu-moscow.myqcloud.com" - Moscow Region. - "cos.accelerate.myqcloud.com" - Use Tencent COS Accelerate Endpoint. #### --s3-endpoint Endpoint for S3 API. Required when using an S3 clone. 
- Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" - Examples: - "objects-us-east-1.dream.io" - Dream Objects endpoint - "nyc3.digitaloceanspaces.com" - Digital Ocean Spaces New York 3 - "ams3.digitaloceanspaces.com" - Digital Ocean Spaces Amsterdam 3 - "sgp1.digitaloceanspaces.com" - Digital Ocean Spaces Singapore 1 - "s3.wasabisys.com" - Wasabi US East endpoint - "s3.us-west-1.wasabisys.com" - Wasabi US West endpoint - "s3.eu-central-1.wasabisys.com" - Wasabi EU Central endpoint #### --s3-location-constraint Location constraint - must be set to match the Region. Used when creating buckets only. - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT - Type: string - Default: "" - Examples: - "" - Empty for US Region, Northern Virginia or Pacific Northwest. - "us-east-2" - US East (Ohio) Region. - "us-west-1" - US West (Northern California) Region. - "us-west-2" - US West (Oregon) Region. - "ca-central-1" - Canada (Central) Region. - "eu-west-1" - EU (Ireland) Region. - "eu-west-2" - EU (London) Region. - "eu-west-3" - EU (Paris) Region. - "eu-north-1" - EU (Stockholm) Region. - "eu-south-1" - EU (Milan) Region. - "EU" - EU Region. - "ap-southeast-1" - Asia Pacific (Singapore) Region. - "ap-southeast-2" - Asia Pacific (Sydney) Region. - "ap-northeast-1" - Asia Pacific (Tokyo) Region. - "ap-northeast-2" - Asia Pacific (Seoul) Region. - "ap-northeast-3" - Asia Pacific (Osaka-Local) Region. - "ap-south-1" - Asia Pacific (Mumbai) Region. - "ap-east-1" - Asia Pacific (Hong Kong) Region. - "sa-east-1" - South America (Sao Paulo) Region. - "me-south-1" - Middle East (Bahrain) Region. - "af-south-1" - Africa (Cape Town) Region. - "cn-north-1" - China (Beijing) Region - "cn-northwest-1" - China (Ningxia) Region. - "us-gov-east-1" - AWS GovCloud (US-East) Region. - "us-gov-west-1" - AWS GovCloud (US) Region. #### --s3-location-constraint Location constraint - must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT - Type: string - Default: "" - Examples: - "us-standard" - US Cross Region Standard - "us-vault" - US Cross Region Vault - "us-cold" - US Cross Region Cold - "us-flex" - US Cross Region Flex - "us-east-standard" - US East Region Standard - "us-east-vault" - US East Region Vault - "us-east-cold" - US East Region Cold - "us-east-flex" - US East Region Flex - "us-south-standard" - US South Region Standard - "us-south-vault" - US South Region Vault - "us-south-cold" - US South Region Cold - "us-south-flex" - US South Region Flex - "eu-standard" - EU Cross Region Standard - "eu-vault" - EU Cross Region Vault - "eu-cold" - EU Cross Region Cold - "eu-flex" - EU Cross Region Flex - "eu-gb-standard" - Great Britain Standard - "eu-gb-vault" - Great Britain Vault - "eu-gb-cold" - Great Britain Cold - "eu-gb-flex" - Great Britain Flex - "ap-standard" - APAC Standard - "ap-vault" - APAC Vault - "ap-cold" - APAC Cold - "ap-flex" - APAC Flex - "mel01-standard" - Melbourne Standard - "mel01-vault" - Melbourne Vault - "mel01-cold" - Melbourne Cold - "mel01-flex" - Melbourne Flex - "tor01-standard" - Toronto Standard - "tor01-vault" - Toronto Vault - "tor01-cold" - Toronto Cold - "tor01-flex" - Toronto Flex #### --s3-location-constraint Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only. 
- Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT - Type: string - Default: "" #### --s3-acl Canned ACL used when creating buckets and storing or copying objects. This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one. - Config: acl - Env Var: RCLONE_S3_ACL - Type: string - Default: "" - Examples: - "default" - Owner gets FULL_CONTROL. No one else has access rights (default). - "private" - Owner gets FULL_CONTROL. No one else has access rights (default). - "public-read" - Owner gets FULL_CONTROL. The AllUsers group gets READ access. - "public-read-write" - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. - Granting this on a bucket is generally not recommended. - "authenticated-read" - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. - "bucket-owner-read" - Object owner gets FULL_CONTROL. Bucket owner gets READ access. - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - "bucket-owner-full-control" - Both the object owner and the bucket owner get FULL_CONTROL over the object. - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - "private" - Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS - "public-read" - Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS - "public-read-write" - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS - "authenticated-read" - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS #### --s3-server-side-encryption The server-side encryption algorithm used when storing this object in S3. - Config: server_side_encryption - Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION - Type: string - Default: "" - Examples: - "" - None - "AES256" - AES256 - "aws:kms" - aws:kms #### --s3-sse-kms-key-id If using KMS ID you must provide the ARN of the Key. - Config: sse_kms_key_id - Env Var: RCLONE_S3_SSE_KMS_KEY_ID - Type: string - Default: "" - Examples: - "" - None - "arn:aws:kms:us-east-1:*" - arn:aws:kms:* #### --s3-storage-class The storage class to use when storing new objects in S3. - Config: storage_class - Env Var: RCLONE_S3_STORAGE_CLASS - Type: string - Default: "" - Examples: - "" - Default - "STANDARD" - Standard storage class - "REDUCED_REDUNDANCY" - Reduced redundancy storage class - "STANDARD_IA" - Standard Infrequent Access storage class - "ONEZONE_IA" - One Zone Infrequent Access storage class - "GLACIER" - Glacier storage class - "DEEP_ARCHIVE" - Glacier Deep Archive storage class - "INTELLIGENT_TIERING" - Intelligent-Tiering storage class #### --s3-storage-class The storage class to use when storing new objects in OSS. - Config: storage_class - Env Var: RCLONE_S3_STORAGE_CLASS - Type: string - Default: "" - Examples: - "" - Default - "STANDARD" - Standard storage class - "GLACIER" - Archive storage mode. - "STANDARD_IA" - Infrequent access storage mode.
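As with all of the options in this section, the storage class can be set in the config file, as an environment variable, or as a command line flag; for example, to select infrequent access storage for a single copy (the paths and remote name are illustrative):

    rclone copy --s3-storage-class STANDARD_IA /path/to/files remote:bucket

or equivalently:

    RCLONE_S3_STORAGE_CLASS=STANDARD_IA rclone copy /path/to/files remote:bucket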
#### --s3-storage-class The storage class to use when storing new objects in Tencent COS. - Config: storage_class - Env Var: RCLONE_S3_STORAGE_CLASS - Type: string - Default: "" - Examples: - "" - Default - "STANDARD" - Standard storage class - "ARCHIVE" - Archive storage mode. - "STANDARD_IA" - Infrequent access storage mode. #### --s3-storage-class The storage class to use when storing new objects in S3. - Config: storage_class - Env Var: RCLONE_S3_STORAGE_CLASS - Type: string - Default: "" - Examples: - "" - Default - "STANDARD" - The Standard class for any upload; suitable for on-demand content like streaming or CDN. - "GLACIER" - Archived storage; prices are lower, but it needs to be restored first to be accessed. ### Advanced Options Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)). #### --s3-bucket-acl Canned ACL used when creating buckets. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Note that this ACL is applied only when creating buckets. If it isn't set then "acl" is used instead. - Config: bucket_acl - Env Var: RCLONE_S3_BUCKET_ACL - Type: string - Default: "" - Examples: - "private" - Owner gets FULL_CONTROL. No one else has access rights (default). - "public-read" - Owner gets FULL_CONTROL. The AllUsers group gets READ access. - "public-read-write" - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. - Granting this on a bucket is generally not recommended. - "authenticated-read" - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. #### --s3-sse-customer-algorithm If using SSE-C, the server-side encryption algorithm used when storing this object in S3. - Config: sse_customer_algorithm - Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM - Type: string - Default: "" - Examples: - "" - None - "AES256" - AES256 #### --s3-sse-customer-key If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data. - Config: sse_customer_key - Env Var: RCLONE_S3_SSE_CUSTOMER_KEY - Type: string - Default: "" - Examples: - "" - None #### --s3-sse-customer-key-md5 If using SSE-C you must provide the secret encryption key MD5 checksum. - Config: sse_customer_key_md5 - Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5 - Type: string - Default: "" - Examples: - "" - None #### --s3-upload-cutoff Cutoff for switching to chunked upload Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB. - Config: upload_cutoff - Env Var: RCLONE_S3_UPLOAD_CUTOFF - Type: SizeSuffix - Default: 200M #### --s3-chunk-size Chunk size to use for uploading. When uploading files larger than upload_cutoff or files with unknown size (eg from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size. Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer. If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers. Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit. Files of unknown size are uploaded with the configured chunk_size.
Since the default chunk size is 5MB and there can be at most 10,000 chunks, this means that by default the maximum size of file you can stream upload is 48GB. If you wish to stream upload larger files then you will need to increase chunk_size. - Config: chunk_size - Env Var: RCLONE_S3_CHUNK_SIZE - Type: SizeSuffix - Default: 5M #### --s3-max-upload-parts Maximum number of parts in a multipart upload. This option defines the maximum number of multipart chunks to use when doing a multipart upload. This can be useful if a service does not support the AWS S3 specification of 10,000 chunks. Rclone will automatically increase the chunk size when uploading a large file of a known size to stay below this number of chunks limit. - Config: max_upload_parts - Env Var: RCLONE_S3_MAX_UPLOAD_PARTS - Type: int - Default: 10000 #### --s3-copy-cutoff Cutoff for switching to multipart copy Any files larger than this that need to be server side copied will be copied in chunks of this size. The minimum is 0 and the maximum is 5GB. - Config: copy_cutoff - Env Var: RCLONE_S3_COPY_CUTOFF - Type: SizeSuffix - Default: 4.656G #### --s3-disable-checksum Don't store MD5 checksum with object metadata Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading. - Config: disable_checksum - Env Var: RCLONE_S3_DISABLE_CHECKSUM - Type: bool - Default: false #### --s3-shared-credentials-file Path to the shared credentials file If env_auth = true then rclone can use a shared credentials file. If this variable is empty rclone will look for the "AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty it will default to the current user's home directory. Linux/OSX: "$HOME/.aws/credentials" Windows: "%USERPROFILE%\.aws\credentials" - Config: shared_credentials_file - Env Var: RCLONE_S3_SHARED_CREDENTIALS_FILE - Type: string - Default: "" #### --s3-profile Profile to use in the shared credentials file If env_auth = true then rclone can use a shared credentials file. This variable controls which profile is used in that file. If empty it will default to the environment variable "AWS_PROFILE" or "default" if that environment variable is also not set. - Config: profile - Env Var: RCLONE_S3_PROFILE - Type: string - Default: "" #### --s3-session-token An AWS session token - Config: session_token - Env Var: RCLONE_S3_SESSION_TOKEN - Type: string - Default: "" #### --s3-upload-concurrency Concurrency for multipart uploads. This is the number of chunks of the same file that are uploaded concurrently. If you are uploading small numbers of large files over high speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers. - Config: upload_concurrency - Env Var: RCLONE_S3_UPLOAD_CONCURRENCY - Type: int - Default: 4 #### --s3-force-path-style If true use path style access, if false use virtual hosted style. If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See [the AWS S3 docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro) for more info. Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to false - rclone will do this automatically based on the provider setting.
- Config: force_path_style - Env Var: RCLONE_S3_FORCE_PATH_STYLE - Type: bool - Default: true #### --s3-v2-auth If true use v2 authentication. If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication. Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. - Config: v2_auth - Env Var: RCLONE_S3_V2_AUTH - Type: bool - Default: false #### --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint. See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html) - Config: use_accelerate_endpoint - Env Var: RCLONE_S3_USE_ACCELERATE_ENDPOINT - Type: bool - Default: false #### --s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery. It should be set to true for resuming uploads across different sessions. WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up. - Config: leave_parts_on_error - Env Var: RCLONE_S3_LEAVE_PARTS_ON_ERROR - Type: bool - Default: false #### --s3-list-chunk Size of listing chunk (response list for each ListObject S3 request). This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification. Most services truncate the response list to 1000 objects even if more than that is requested. In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html). In Ceph, this can be increased with the "rgw list buckets max chunk" option. - Config: list_chunk - Env Var: RCLONE_S3_LIST_CHUNK - Type: int - Default: 1000 #### --s3-no-check-bucket If set don't attempt to check the bucket exists or create it This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already. - Config: no_check_bucket - Env Var: RCLONE_S3_NO_CHECK_BUCKET - Type: bool - Default: false #### --s3-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_S3_ENCODING - Type: MultiEncoder - Default: Slash,InvalidUtf8,Dot #### --s3-memory-pool-flush-time How often internal memory buffer pools will be flushed. Uploads which require additional buffers (eg multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool. - Config: memory_pool_flush_time - Env Var: RCLONE_S3_MEMORY_POOL_FLUSH_TIME - Type: Duration - Default: 1m0s #### --s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. - Config: memory_pool_use_mmap - Env Var: RCLONE_S3_MEMORY_POOL_USE_MMAP - Type: bool - Default: false ### Backend commands Here are the commands specific to the s3 backend. Run them with rclone backend COMMAND remote: The help below will explain what arguments each command takes. See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more info on how to pass options and arguments. These can be run on a running backend using the rc command [backend/command](https://rclone.org/rc/#backend/command). #### restore Restore objects from GLACIER to normal storage rclone backend restore remote: [options] [+] This command can be used to restore one or more objects from GLACIER to normal storage.
Usage Examples: rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS] rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS] rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS] This command also obeys the filters. Test first with -i/--interactive or --dry-run flags rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard All the objects shown will be marked for restore, then rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message if not. [ { "Status": "OK", "Path": "test.txt" }, { "Status": "OK", "Path": "test/file4.txt" } ] Options: - "description": The optional description for the job. - "lifetime": Lifetime of the active copy in days - "priority": Priority of restore: Standard|Expedited|Bulk #### list-multipart-uploads List the unfinished multipart uploads rclone backend list-multipart-uploads remote: [options] [+] This command lists the unfinished multipart uploads in JSON format. rclone backend list-multipart s3:bucket/path/to/object It returns a dictionary of buckets with values as lists of unfinished multipart uploads. You can call it with no bucket in which case it lists all buckets, with a bucket, or with a bucket and path. { "rclone": [ { "Initiated": "2020-06-26T14:20:36Z", "Initiator": { "DisplayName": "XXX", "ID": "arn:aws:iam::XXX:user/XXX" }, "Key": "KEY", "Owner": { "DisplayName": null, "ID": "XXX" }, "StorageClass": "STANDARD", "UploadId": "XXX" } ], "rclone-1000files": [], "rclone-dst": [] } #### cleanup Remove unfinished multipart uploads. rclone backend cleanup remote: [options] [+] This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours. Note that you can use -i/--dry-run with this command to see what it would do. rclone backend cleanup s3:bucket/path/to/object rclone backend cleanup -o max-age=7w s3:bucket/path/to/object Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. Options: - "max-age": Max age of upload to delete ### Anonymous access to public buckets ### If you want to use rclone to access a public bucket, configure with a blank `access_key_id` and `secret_access_key`. Your config should end up looking like this: ``` [anons3] type = s3 provider = AWS env_auth = false access_key_id = secret_access_key = region = us-east-1 endpoint = location_constraint = acl = private server_side_encryption = storage_class = ``` Then use it as normal with the name of the public bucket, eg rclone lsd anons3:1000genomes You will be able to list and copy data but not upload it. ### Ceph ### [Ceph](https://ceph.com/) is an open source unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface. To use rclone with Ceph, configure as above but leave the region blank and set the endpoint.
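In the interactive setup the answers that matter are the provider, the blank region and the endpoint; something like this (the endpoint is illustrative, use your own Ceph gateway):

```
Storage> s3
provider> Ceph
region>
endpoint> https://ceph.endpoint.example.com
```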
You should end up with something like this in your config: ``` [ceph] type = s3 provider = Ceph env_auth = false access_key_id = XXX secret_access_key = YYY region = endpoint = https://ceph.endpoint.example.com location_constraint = acl = server_side_encryption = storage_class = ``` If you are using an older version of CEPH, eg 10.2.x Jewel, then you may need to supply the parameter `--s3-upload-cutoff 0` or put this in the config file as `upload_cutoff 0` to work around a bug which causes uploading of small files to fail. Note also that Ceph sometimes puts `/` in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the `/` escaped as `\/`. Make sure you only write `/` in the secret access key. Eg the dump from Ceph looks something like this (irrelevant keys removed). ``` { "user_id": "xxx", "display_name": "xxxx", "keys": [ { "user": "xxx", "access_key": "xxxxxx", "secret_key": "xxxxxx\/xxxx" } ], } ``` Because this is a json dump, it is encoding the `/` as `\/`, so if you use the secret key as `xxxxxx/xxxx` it will work fine. ### Dreamhost ### Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is an object storage system based on CEPH. To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config: ``` [dreamobjects] type = s3 provider = DreamHost env_auth = false access_key_id = your_access_key secret_access_key = your_secret_key region = endpoint = objects-us-west-1.dream.io location_constraint = acl = private server_side_encryption = storage_class = ``` ### DigitalOcean Spaces ### [Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean. To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`. When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings. Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below: ``` Storage> s3 env_auth> 1 access_key_id> YOUR_ACCESS_KEY secret_access_key> YOUR_SECRET_KEY region> endpoint> nyc3.digitaloceanspaces.com location_constraint> acl> storage_class> ``` The resulting configuration file should look like: ``` [spaces] type = s3 provider = DigitalOcean env_auth = false access_key_id = YOUR_ACCESS_KEY secret_access_key = YOUR_SECRET_KEY region = endpoint = nyc3.digitaloceanspaces.com location_constraint = acl = server_side_encryption = storage_class = ``` Once configured, you can create a new Space and begin copying files. For example: ``` rclone mkdir spaces:my-new-space rclone copy /path/to/files spaces:my-new-space ``` ### IBM COS (S3) ### Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. 
This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: http://www.ibm.com/cloud/object-storage To configure access to IBM COS S3, follow the steps below: 1. Run rclone config and select n for a new remote. ``` 2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n ``` 2. Enter the name for the configuration. ``` name> ``` 3. Select "s3" storage. ``` Choose a number from below, or type in your own value 1 / Alias for an existing remote \ "alias" 2 / Amazon Drive \ "amazon cloud drive" 3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, Minio, IBM COS) \ "s3" 4 / Backblaze B2 \ "b2" [snip] 23 / http Connection \ "http" Storage> 3 ``` 4. Select IBM COS as the S3 Storage Provider. ``` Choose the S3 provider. Choose a number from below, or type in your own value 1 / Choose this option to configure Storage to AWS S3 \ "AWS" 2 / Choose this option to configure Storage to Ceph Systems \ "Ceph" 3 / Choose this option to configure Storage to Dreamhost \ "Dreamhost" 4 / Choose this option to configure Storage to IBM COS S3 \ "IBMCOS" 5 / Choose this option to configure Storage to Minio \ "Minio" Provider>4 ``` 5. Enter the Access Key and Secret. ``` AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> <> AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> <> ``` 6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the option below. For On Premise IBM COS, enter an endpoint address. ``` Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise. Choose a number from below, or type in your own value 1 / US Cross Region Endpoint \ "s3-api.us-geo.objectstorage.softlayer.net" 2 / US Cross Region Dallas Endpoint \ "s3-api.dal.us-geo.objectstorage.softlayer.net" 3 / US Cross Region Washington DC Endpoint \ "s3-api.wdc-us-geo.objectstorage.softlayer.net" 4 / US Cross Region San Jose Endpoint \ "s3-api.sjc-us-geo.objectstorage.softlayer.net" 5 / US Cross Region Private Endpoint \ "s3-api.us-geo.objectstorage.service.networklayer.com" 6 / US Cross Region Dallas Private Endpoint \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com" 7 / US Cross Region Washington DC Private Endpoint \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com" 8 / US Cross Region San Jose Private Endpoint \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com" 9 / US Region East Endpoint \ "s3.us-east.objectstorage.softlayer.net" 10 / US Region East Private Endpoint \ "s3.us-east.objectstorage.service.networklayer.com" 11 / US Region South Endpoint [snip] 34 / Toronto Single Site Private Endpoint \ "s3.tor01.objectstorage.service.networklayer.com" endpoint>1 ``` 7. Specify an IBM COS Location Constraint. The location constraint must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list, hit enter. ``` 1 / US Cross Region Standard \ "us-standard" 2 / US Cross Region Vault \ "us-vault" 3 / US Cross Region Cold \ "us-cold" 4 / US Cross Region Flex \ "us-flex" 5 / US East Region Standard \ "us-east-standard" 6 / US East Region Vault \ "us-east-vault" 7 / US East Region Cold \ "us-east-cold" 8 / US East Region Flex \ "us-east-flex" 9 / US South Region Standard \ "us-south-standard" 10 / US South Region Vault \ "us-south-vault" [snip] 32 / Toronto Flex \ "tor01-flex" location_constraint>1 ``` 8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs. ``` Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS \ "private" 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS \ "public-read" 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS \ "public-read-write" 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS \ "authenticated-read" acl> 1 ``` 9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this ``` [xxx] type = s3 Provider = IBMCOS access_key_id = xxx secret_access_key = yyy endpoint = s3-api.us-geo.objectstorage.softlayer.net location_constraint = us-standard acl = private ``` 10. Execute rclone commands. ``` 1) Create a bucket. rclone mkdir IBM-COS-XREGION:newbucket 2) List available buckets. rclone lsd IBM-COS-XREGION: -1 2017-11-08 21:16:22 -1 test -1 2018-02-14 20:16:39 -1 newbucket 3) List contents of a bucket. rclone ls IBM-COS-XREGION:newbucket 18685952 test.exe 4) Copy a file from local to remote. rclone copy /Users/file.txt IBM-COS-XREGION:newbucket 5) Copy a file from remote to local. rclone copy IBM-COS-XREGION:newbucket/file.txt . 6) Delete a file on remote. rclone delete IBM-COS-XREGION:newbucket/file.txt ``` ### Minio ### [Minio](https://minio.io/) is an object storage server built for cloud application developers and devops. It is very easy to install and provides an S3 compatible server which can be used by rclone. To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide).
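If you just want a local server to test against, it can (at the time of writing) be started on a spare directory like this (the path is illustrative):

    minio server /tmp/minio-data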
When it configures itself Minio will print something like this ``` Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000 AccessKey: USWUXHGYZQYFYFFIT3RE SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 Region: us-east-1 SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis Browser Access: http://192.168.1.106:9000 http://172.23.0.1:9000 Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 Object API (Amazon S3 compatible): Go: https://docs.minio.io/docs/golang-client-quickstart-guide Java: https://docs.minio.io/docs/java-client-quickstart-guide Python: https://docs.minio.io/docs/python-client-quickstart-guide JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide .NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide Drive Capacity: 26 GiB Free, 165 GiB Total ``` These details need to go into `rclone config` like this. Note that it is important to put the region in as stated above. ``` env_auth> 1 access_key_id> USWUXHGYZQYFYFFIT3RE secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 region> us-east-1 endpoint> http://192.168.1.106:9000 location_constraint> server_side_encryption> ``` Which makes the config file look like this ``` [minio] type = s3 provider = Minio env_auth = false access_key_id = USWUXHGYZQYFYFFIT3RE secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 region = us-east-1 endpoint = http://192.168.1.106:9000 location_constraint = server_side_encryption = ``` So once set up, for example to copy files into a bucket ``` rclone copy /path/to/files minio:bucket ``` ### Scaleway {#scaleway} [Scaleway](https://www.scaleway.com/object-storage/) Object Storage allows you to store anything from backups, logs and web assets to documents and photos. Files can be dropped from the Scaleway console or transferred through its API and CLI or using any S3-compatible tool. Scaleway provides an S3 interface which can be configured for use with rclone like this: ``` [scaleway] type = s3 provider = Scaleway env_auth = false endpoint = s3.nl-ams.scw.cloud access_key_id = SCWXXXXXXXXXXXXXX secret_access_key = 1111111-2222-3333-44444-55555555555555 region = nl-ams location_constraint = acl = private server_side_encryption = storage_class = ``` ### Wasabi ### [Wasabi](https://wasabi.com) is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost. Wasabi provides an S3 interface which can be configured for use with rclone like this. ``` No remotes found - make a new one n) New remote s) Set configuration password n/s> n name> wasabi Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Amazon S3 (also Dreamhost, Ceph, Minio) \ "s3" [snip] Storage> s3 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> YOURSECRETACCESSKEY Region to connect to. Choose a number from below, or type in your own value / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia or Pacific Northwest. | Leave location constraint empty. \ "us-east-1" [snip] region> us-east-1 Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. Specify if using an S3 clone such as Ceph. endpoint> s3.wasabisys.com Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" [snip] location_constraint> Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" [snip] acl> The server-side encryption algorithm used when storing this object in S3. Choose a number from below, or type in your own value 1 / None \ "" 2 / AES256 \ "AES256" server_side_encryption> The storage class to use when storing objects in S3. Choose a number from below, or type in your own value 1 / Default \ "" 2 / Standard storage class \ "STANDARD" 3 / Reduced redundancy storage class \ "REDUCED_REDUNDANCY" 4 / Standard Infrequent Access storage class \ "STANDARD_IA" storage_class> Remote config -------------------- [wasabi] env_auth = false access_key_id = YOURACCESSKEY secret_access_key = YOURSECRETACCESSKEY region = us-east-1 endpoint = s3.wasabisys.com location_constraint = acl = server_side_encryption = storage_class = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` This will leave the config file looking like this. ``` [wasabi] type = s3 provider = Wasabi env_auth = false access_key_id = YOURACCESSKEY secret_access_key = YOURSECRETACCESSKEY region = us-east-1 endpoint = s3.wasabisys.com location_constraint = acl = server_side_encryption = storage_class = ``` ### Alibaba OSS {#alibaba-oss} Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/) configuration. First run: rclone config This will guide you through an interactive setup process. ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> oss Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) \ "s3" [snip] Storage> s3 Choose your S3 provider. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Amazon Web Services (AWS) S3 \ "AWS" 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun \ "Alibaba" 3 / Ceph Object Storage \ "Ceph" [snip] provider> Alibaba Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID. Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). access_key_id> accesskeyid AWS Secret Access Key (password) Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). secret_access_key> secretaccesskey Endpoint for OSS API. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / East China 1 (Hangzhou) \ "oss-cn-hangzhou.aliyuncs.com" 2 / East China 2 (Shanghai) \ "oss-cn-shanghai.aliyuncs.com" 3 / North China 1 (Qingdao) \ "oss-cn-qingdao.aliyuncs.com" [snip] endpoint> 1 Canned ACL used when creating buckets and storing or copying objects. Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. \ "public-read" 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. \ "public-read-write" [snip] acl> 1 The storage class to use when storing new objects in OSS. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Default \ "" 2 / Standard storage class \ "STANDARD" 3 / Archive storage mode. \ "GLACIER" 4 / Infrequent access storage mode. \ "STANDARD_IA" storage_class> 1 Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [oss] type = s3 provider = Alibaba env_auth = false access_key_id = accesskeyid secret_access_key = secretaccesskey endpoint = oss-cn-hangzhou.aliyuncs.com acl = private storage_class = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` ### Tencent COS {#tencent-cos} [Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable and highly scalable, with low latency and low cost. To configure access to Tencent COS, follow the steps below: 1. Run `rclone config` and select `n` for a new remote. ``` rclone config No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n ``` 2. Give the name of the configuration. For example, name it 'cos'. ``` name> cos ``` 3. Select `s3` storage. ``` Choose a number from below, or type in your own value 1 / 1Fichier \ "fichier" 2 / Alias for an existing remote \ "alias" 3 / Amazon Drive \ "amazon cloud drive" 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc) \ "s3" [snip] Storage> s3 ``` 4. Select `TencentCOS` provider. ``` Choose a number from below, or type in your own value 1 / Amazon Web Services (AWS) S3 \ "AWS" [snip] 11 / Tencent Cloud Object Storage (COS) \ "TencentCOS" [snip] provider> TencentCOS ``` 5. Enter your SecretId and SecretKey of Tencent Cloud. ``` Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Enter a boolean value (true or false).
Press Enter for the default ("false"). Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID. Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). access_key_id> AKIDxxxxxxxxxx AWS Secret Access Key (password) Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). secret_access_key> xxxxxxxxxxx ``` 6. Select an endpoint for Tencent COS. These are the standard endpoints for the different regions. ``` 1 / Beijing Region. \ "cos.ap-beijing.myqcloud.com" 2 / Nanjing Region. \ "cos.ap-nanjing.myqcloud.com" 3 / Shanghai Region. \ "cos.ap-shanghai.myqcloud.com" 4 / Guangzhou Region. \ "cos.ap-guangzhou.myqcloud.com" [snip] endpoint> 4 ``` 7. Choose the ACL and storage class. ``` Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "default" [snip] acl> 1 The storage class to use when storing new objects in Tencent COS. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Default \ "" [snip] storage_class> 1 Edit advanced config? (y/n) y) Yes n) No (default) y/n> n Remote config -------------------- [cos] type = s3 provider = TencentCOS env_auth = false access_key_id = xxx secret_access_key = xxx endpoint = cos.ap-guangzhou.myqcloud.com acl = default -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y Current remotes: Name Type ==== ==== cos s3 ``` ### Netease NOS ### For Netease NOS, configure as normal using `rclone config`, setting the provider to `Netease`. This will automatically set `force_path_style = false`, which is necessary for it to run properly. Backblaze B2 ---------------------------------------- B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/). Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command). You may put subdirectories in too, eg `remote:bucket/path/to/dir`. Here is an example of making a b2 configuration. First run rclone config This will guide you through an interactive setup process. To authenticate you will either need your Account ID (a short hex number) and Master Application Key (a long hex number) OR an Application Key, which is the recommended method. See below for further details on generating and using an Application Key. ``` No remotes found - make a new one n) New remote q) Quit config n/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Backblaze B2 \ "b2" [snip] Storage> b2 Account ID or Application Key ID account> 123456789abc Application Key key> 0123456789abcdef0123456789abcdef0123456789 Endpoint for the service - leave blank normally.
endpoint> Remote config -------------------- [remote] account = 123456789abc key = 0123456789abcdef0123456789abcdef0123456789 endpoint = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` This remote is called `remote` and can now be used like this See all buckets rclone lsd remote: Create a new bucket rclone mkdir remote:bucket List the contents of a bucket rclone ls remote:bucket Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. rclone sync -i /home/local/directory remote:bucket ### Application Keys ### B2 supports multiple [Application Keys for different access permission to B2 Buckets](https://www.backblaze.com/b2/docs/application_keys.html). You can use these with rclone too; you will need to use rclone version 1.43 or later. Follow Backblaze's docs to create an Application Key with the required permission and add the `applicationKeyId` as the `account` and the `Application Key` itself as the `key`. Note that you must put the _applicationKeyId_ as the `account` – you can't use the master Account ID. If you try then B2 will return 401 errors. ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](https://rclone.org/docs/#fast-list) for more details. ### Modified time ### The modified time is stored as metadata on the object as `X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time. Modified times are used in syncing and are fully supported. Note that if a modification time needs to be updated on an object then it will create a new version of the object. #### Restricted filename characters In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | \ | 0x5C | ＼ | Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. Note that in 2020-05 Backblaze started allowing \ characters in file names. Rclone hasn't changed its encoding as this could cause syncs to re-transfer files. If you want rclone not to replace \ then see the `--b2-encoding` flag below and remove the `BackSlash` from the string. This can be set in the config. ### SHA1 checksums ### The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process. Large files (bigger than the limit in `--b2-upload-cutoff`) which are uploaded in chunks will store their SHA1 on the object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze. For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See [the overview](https://rclone.org/overview/#features) for exactly which remotes support SHA1. Sources which don't support SHA1, in particular `crypt`, will upload large files without SHA1 checksums. This may be fixed in the future (see [#1767](https://github.com/rclone/rclone/issues/1767)). Files below `--b2-upload-cutoff` will always have an SHA1 regardless of the source. ### Transfers ### Backblaze recommends that you do lots of transfers simultaneously for maximum speed.
In tests from my SSD-equipped laptop the optimum setting is about `--transfers 32` though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of `--transfers 4` is definitely too low for Backblaze B2 though. Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most `--transfers` of these in use at any moment, so this sets the upper limit on the memory used. ### Versions ### When rclone uploads a new version of a file it creates a [new version of it](https://www.backblaze.com/b2/docs/file_versions.html). Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the `--b2-hard-delete` flag which would permanently remove the file instead of hiding it. Old versions of files, where available, are visible using the `--b2-versions` flag. **NB** `--b2-versions` does not work with crypt at the moment [#1627](https://github.com/rclone/rclone/issues/1627). Using [--backup-dir](https://rclone.org/docs/#backup-dir-dir) with rclone is the recommended way of working around this. If you wish to remove all the old versions then you can use the `rclone cleanup remote:bucket` command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg `rclone cleanup remote:bucket/path/to/stuff`. Note that `cleanup` will remove partially uploaded files from the bucket if they are more than a day old. When you `purge` a bucket, the current and the old versions will be deleted, then the bucket will be deleted. However `delete` will cause the current versions of the files to become hidden old versions. Here is a session showing the listing and retrieval of an old version followed by a `cleanup` of the old versions. Show current version and all the versions with `--b2-versions` flag. ``` $ rclone -q ls b2:cleanup-test 9 one.txt $ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt 8 one-v2016-07-04-141032-000.txt 16 one-v2016-07-04-141003-000.txt 15 one-v2016-07-02-155621-000.txt ``` Retrieve an old version ``` $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp $ ls -l /tmp/one-v2016-07-04-141003-000.txt -rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt ``` Clean up all the old versions and show that they've gone. ``` $ rclone -q cleanup b2:cleanup-test $ rclone -q ls b2:cleanup-test 9 one.txt $ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt ``` ### Data usage ### It is useful to know how many requests are sent to the server in different scenarios. All copy commands send the following 4 requests: ``` /b2api/v1/b2_authorize_account /b2api/v1/b2_create_bucket /b2api/v1/b2_list_buckets /b2api/v1/b2_list_file_names ``` The `b2_list_file_names` request will be sent once for every 1k files in the remote path, providing the checksum and modification time of the listed files. As of version 1.33 issue [#818](https://github.com/rclone/rclone/issues/818) causes extra requests to be sent when using B2 with Crypt. When a copy operation does not require any files to be uploaded, no more requests will be sent.
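For example, a copy of a path containing 2,500 files which are all up to date would send roughly the 4 requests above, with the `b2_list_file_names` request repeated 3 times in total (once per 1k files), and no upload requests at all.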
Uploading files that do not require chunking will send 2 requests per file upload: ``` /b2api/v1/b2_get_upload_url /b2api/v1/b2_upload_file/ ``` Uploading files requiring chunking will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk: ``` /b2api/v1/b2_start_large_file /b2api/v1/b2_get_upload_part_url /b2api/v1/b2_upload_part/ /b2api/v1/b2_finish_large_file ``` #### Versions #### Versions can be viewed with the `--b2-versions` flag. When it is set rclone will show and act on older versions of files. For example, listing without `--b2-versions` ``` $ rclone -q ls b2:cleanup-test 9 one.txt ``` And with ``` $ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt 8 one-v2016-07-04-141032-000.txt 16 one-v2016-07-04-141003-000.txt 15 one-v2016-07-02-155621-000.txt ``` Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them. Note that when using `--b2-versions` no file write operations are permitted, so you can't upload files or delete them. ### B2 and rclone link ### Rclone supports generating file share links for private B2 buckets. They can either be for a file, for example: ``` ./rclone link B2:bucket/path/to/file.txt https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx ``` or if run on a directory you will get: ``` ./rclone link B2:bucket/path https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx ``` You can then use the authorization token (the part of the URL from `?Authorization=` onwards) on any file path under that directory. For example: ``` https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx ``` ### Standard Options Here are the standard options specific to b2 (Backblaze B2). #### --b2-account Account ID or Application Key ID - Config: account - Env Var: RCLONE_B2_ACCOUNT - Type: string - Default: "" #### --b2-key Application Key - Config: key - Env Var: RCLONE_B2_KEY - Type: string - Default: "" #### --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - Config: hard_delete - Env Var: RCLONE_B2_HARD_DELETE - Type: bool - Default: false ### Advanced Options Here are the advanced options specific to b2 (Backblaze B2). #### --b2-endpoint Endpoint for the service. Leave blank normally. - Config: endpoint - Env Var: RCLONE_B2_ENDPOINT - Type: string - Default: "" #### --b2-test-mode A flag string for X-Bz-Test-Mode header for debugging. This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors: * "fail_some_uploads" * "expire_some_account_authorization_tokens" * "force_cap_exceeded" These will be set in the "X-Bz-Test-Mode" header which is documented in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html). - Config: test_mode - Env Var: RCLONE_B2_TEST_MODE - Type: string - Default: "" #### --b2-versions Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can't upload files or delete them. - Config: versions - Env Var: RCLONE_B2_VERSIONS - Type: bool - Default: false #### --b2-upload-cutoff Cutoff for switching to chunked upload. Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657GiB (== 5GB). - Config: upload_cutoff - Env Var: RCLONE_B2_UPLOAD_CUTOFF - Type: SizeSuffix - Default: 200M #### --b2-copy-cutoff Cutoff for switching to multipart copy Any files larger than this that need to be server side copied will be copied in chunks of this size. The minimum is 0 and the maximum is 4.6GB. - Config: copy_cutoff - Env Var: RCLONE_B2_COPY_CUTOFF - Type: SizeSuffix - Default: 4G #### --b2-chunk-size Upload chunk size. Must fit in memory. When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there can be a maximum of "--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum size. - Config: chunk_size - Env Var: RCLONE_B2_CHUNK_SIZE - Type: SizeSuffix - Default: 96M #### --b2-disable-checksum Disable checksums for large (> upload cutoff) files Normally rclone will calculate the SHA1 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading. - Config: disable_checksum - Env Var: RCLONE_B2_DISABLE_CHECKSUM - Type: bool - Default: false #### --b2-download-url Custom endpoint for downloads. This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze. - Config: download_url - Env Var: RCLONE_B2_DOWNLOAD_URL - Type: string - Default: "" #### --b2-download-auth-duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week. - Config: download_auth_duration - Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION - Type: Duration - Default: 1w #### --b2-memory-pool-flush-time How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool. - Config: memory_pool_flush_time - Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME - Type: Duration - Default: 1m0s #### --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. - Config: memory_pool_use_mmap - Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP - Type: bool - Default: false #### --b2-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_B2_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot Box ----------------------------------------- Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. The initial setup for Box involves getting a token from Box which you can do either in your browser, or with a config.json downloaded from Box to use JWT authentication. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure.
Choose a number from below, or type in your own value [snip] XX / Box \ "box" [snip] Storage> box Box App Client Id - leave blank normally. client_id> Box App Client Secret - leave blank normally. client_secret> Box App config.json location Leave blank normally. Enter a string value. Press Enter for the default (""). box_config_file> Box App Primary Access Token Leave blank normally. Enter a string value. Press Enter for the default (""). access_token> Enter a string value. Press Enter for the default ("user"). Choose a number from below, or type in your own value 1 / Rclone should act on behalf of a user \ "user" 2 / Rclone should act on behalf of a service account \ "enterprise" box_sub_type> Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] client_id = client_secret = token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Box. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall. Once configured you can then use `rclone` like this, List directories in top level of your Box rclone lsd remote: List all the files in your Box rclone ls remote: To copy a local directory to a Box directory called backup rclone copy /home/source remote:backup ### Using rclone with an Enterprise account with SSO ### If you have an "Enterprise" account type with Box with single sign on (SSO), you need to create a password to use Box with rclone. This can be done at your Enterprise Box account by going to Settings, the "Account" tab, and then setting the password in the "Authentication" field. Once you have done this, you can set up your Enterprise Box account using the same procedure detailed above, entering the password you have just set. ### Invalid refresh token ### According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens): > Each refresh_token is valid for one use in 60 days. This means that if you * Don't use the box remote for 60 days * Copy the config file with a box refresh token in and use it in two places * Get an error on a token refresh then rclone will return an error which includes the text `Invalid refresh token`. To fix this you will need to use oauth2 again to update the refresh token. You can use the methods in [the remote setup docs](https://rclone.org/remote_setup/), bearing in mind that if you use the copy-the-config-file method, you should not use that remote on the computer you did the authentication on. Here is how to do it.
``` $ rclone config Current remotes: Name Type ==== ==== remote box e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> e Choose a number from below, or type in an existing value 1 > remote remote> remote -------------------- [remote] type = box token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"} -------------------- Edit remote Value "client_id" = "" Edit? (y/n)> y) Yes n) No y/n> n Value "client_secret" = "" Edit? (y/n)> y) Yes n) No y/n> n Remote config Already have a token - refresh? y) Yes n) No y/n> y Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] type = box token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` ### Modified time and hashes ### Box allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. Box supports SHA1 type hashes, so you can use the `--checksum` flag. #### Restricted filename characters In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | \ | 0x5C | ＼ | File names can also not end with the following characters. These only get replaced if they are the last character in the name: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | SP | 0x20 | ␠ | Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. ### Transfers ### For files above 50MB rclone will use a chunked transfer. Rclone will upload up to `--transfers` chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8MB so increasing `--transfers` will increase memory use. ### Deleting files ### Depending on the enterprise settings for your user, the item will either be actually deleted from Box or moved to the trash. Emptying the trash is supported by rclone via the `cleanup` command, however this deletes every trashed file and folder individually, so it may take a very long time. Emptying the trash via the WebUI does not have this limitation, so it is advised to empty the trash via the WebUI. ### Root folder ID ### You can set the `root_folder_id` for rclone. This is the directory (identified by its `Folder ID`) that rclone considers to be the root of your Box drive. Normally you will leave this blank and rclone will determine the correct root to use itself. However you can set this to restrict rclone to a specific folder hierarchy. In order to do this you will have to find the `Folder ID` of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the Box web interface. So if the folder you want rclone to use has a URL which looks like `https://app.box.com/folder/11xxxxxxxxx8` in the browser, then you use `11xxxxxxxxx8` as the `root_folder_id` in the config.
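For example, with the illustrative folder ID above, the relevant part of the config file might then look like this (token abbreviated):

```
[remote]
type = box
token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
root_folder_id = 11xxxxxxxxx8
```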
### Standard Options Here are the standard options specific to box (Box). #### --box-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_BOX_CLIENT_ID - Type: string - Default: "" #### --box-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_BOX_CLIENT_SECRET - Type: string - Default: "" #### --box-box-config-file Box App config.json location Leave blank normally. Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`. - Config: box_config_file - Env Var: RCLONE_BOX_BOX_CONFIG_FILE - Type: string - Default: "" #### --box-access-token Box App Primary Access Token Leave blank normally. - Config: access_token - Env Var: RCLONE_BOX_ACCESS_TOKEN - Type: string - Default: "" #### --box-box-sub-type - Config: box_sub_type - Env Var: RCLONE_BOX_BOX_SUB_TYPE - Type: string - Default: "user" - Examples: - "user" - Rclone should act on behalf of a user - "enterprise" - Rclone should act on behalf of a service account ### Advanced Options Here are the advanced options specific to box (Box). #### --box-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_BOX_TOKEN - Type: string - Default: "" #### --box-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_BOX_AUTH_URL - Type: string - Default: "" #### --box-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_BOX_TOKEN_URL - Type: string - Default: "" #### --box-root-folder-id Fill in for rclone to use a non root folder as its starting point. - Config: root_folder_id - Env Var: RCLONE_BOX_ROOT_FOLDER_ID - Type: string - Default: "0" #### --box-upload-cutoff Cutoff for switching to multipart upload (>= 50MB). - Config: upload_cutoff - Env Var: RCLONE_BOX_UPLOAD_CUTOFF - Type: SizeSuffix - Default: 50M #### --box-commit-retries Max number of times to try committing a multipart file. - Config: commit_retries - Env Var: RCLONE_BOX_COMMIT_RETRIES - Type: int - Default: 100 #### --box-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_BOX_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot ### Limitations ### Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". Box file names can't have the `\` character in. rclone maps this to and from an identical-looking unicode equivalent `＼` (U+FF3C Fullwidth Reverse Solidus). Box only supports filenames up to 255 characters in length. Cache (BETA) ----------------------------------------- The `cache` remote wraps another existing remote and stores file structure and its data for long running tasks like `rclone mount`. ## Status The cache backend code is working but it currently doesn't have a maintainer so there are [outstanding bugs](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug+label%3A%22Remote%3A+Cache%22) which aren't getting fixed. The cache backend is due to be phased out eventually in favour of the VFS caching layer, which is more tightly integrated into rclone. Until this happens we recommend only using the cache backend if you find you can't work without it.
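In particular, if you were planning to use `cache` just to make an `rclone mount` smoother, it is worth trying the VFS caching layer mentioned above first, eg something like:

```
rclone mount --vfs-cache-mode full remote: /path/to/mountpoint
```

See the `rclone mount` documentation for the full details of the VFS cache modes.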
There are many docs online describing the use of the cache backend to minimize API hits and by-and-large these are out of date and the cache backend isn't needed in those scenarios any more. ## Setup To get started you just need to have an existing remote which can be configured with `cache`. Here is an example of how to make a remote called `test-cache`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config n/r/c/s/q> n name> test-cache Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Cache a remote \ "cache" [snip] Storage> cache Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). remote> local:/test Optional: The URL of the Plex server plex_url> http://127.0.0.1:32400 Optional: The username of the Plex user plex_username> dummyusername Optional: The password of the Plex user y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> y Enter the password: password: Confirm the password: password: The size of a chunk. Lower value good for slow connections but can affect seamless reading. Default: 5M Choose a number from below, or type in your own value 1 / 1MB \ "1m" 2 / 5 MB \ "5M" 3 / 10 MB \ "10M" chunk_size> 2 How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache. Accepted units are: "s", "m", "h". Default: 5m Choose a number from below, or type in your own value 1 / 1 hour \ "1h" 2 / 24 hours \ "24h" 3 / 48 hours \ "48h" info_age> 2 The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. Default: 10G Choose a number from below, or type in your own value 1 / 500 MB \ "500M" 2 / 1 GB \ "1G" 3 / 10 GB \ "10G" chunk_total_size> 3 Remote config -------------------- [test-cache] remote = local:/test plex_url = http://127.0.0.1:32400 plex_username = dummyusername plex_password = *** ENCRYPTED *** chunk_size = 5M info_age = 48h chunk_total_size = 10G ``` You can then use it like this, List directories in top level of your drive rclone lsd test-cache: List all the files in your drive rclone ls test-cache: To start a cached mount rclone mount --allow-other test-cache: /var/tmp/test-cache ### Write Features ### ### Offline uploading ### In an effort to make writing through cache more reliable, the backend now supports this feature which can be activated by specifying a `cache-tmp-upload-path`. A file goes through these states when using this feature: 1. An upload is started (usually by copying a file on the cache remote) 2. When the copy to the temporary location is complete the file is part of the cached remote and looks and behaves like any other file (reading included) 3. After `cache-tmp-wait-time` passes and the file is next in line, `rclone move` is used to move the file to the cloud provider 4. Reading the file still works during the upload but most modifications on it will be prohibited 5. Once the move is complete the file is unlocked for modifications as it becomes like any other regular file 6.
If the file is being read through `cache` when it's actually deleted from the temporary path then `cache` will simply swap the source to the cloud provider without interrupting the reading (a small blip can happen though) Files are uploaded in sequence and only one file is uploaded at a time. Uploads will be stored in a queue and be processed based on the order they were added. The queue and the temporary storage are persistent across restarts but can be cleared on startup with the `--cache-db-purge` flag. ### Write Support ### Writes are supported through `cache`. One caveat is that a mounted cache remote does not add any retry or fallback mechanism to the upload operation. This will depend on the implementation of the wrapped remote. Consider using `Offline uploading` for reliable writes. One special case is covered with `cache-writes` which, when enabled, will cache the file data at the same time as the upload, making it available from the cache store immediately once the upload is finished. ### Read Features ### #### Multiple connections #### To counter the high latency between a local PC where rclone is running and cloud providers, the cache remote can issue multiple requests to the cloud provider for smaller file chunks and combine them together locally, making them available almost immediately before the reader needs them. This is similar to buffering when media files are played online. Rclone will stay around the current read position but always try its best to stay ahead and prepare the data in advance. #### Plex Integration #### There is a direct integration with Plex which allows cache to detect during reading if the file is in playback or not. This helps cache to adapt how it queries the cloud provider depending on what the data is needed for. Scans will use a minimum number of workers (1), while during confirmed playback cache will deploy the configured number of workers. This integration opens the doorway to additional performance improvements which will be explored in the near future. **Note:** If Plex options are not configured, `cache` will function with its configured options without adapting any of its settings. To enable the integration, run `rclone config` and add all the Plex options (endpoint, username and password) to your remote; it will then be automatically enabled. Affected settings: - `cache-workers`: _Configured value_ during confirmed playback or _1_ all the other times ##### Certificate Validation ##### When the Plex server is configured to only accept secure connections, it is possible to use `.plex.direct` URLs to ensure certificate validation succeeds. These URLs are used by Plex internally to connect to the Plex server securely. The format for these URLs is the following: https://ip-with-dots-replaced.server-hash.plex.direct:32400/ The `ip-with-dots-replaced` part can be any IPv4 address, where the dots have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`. To get the `server-hash` part, the easiest way is to visit https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token This page will list all the available Plex servers for your account with at least one `.plex.direct` link for each. Copy one URL and replace the IP address with the desired address. This can be used as the `plex_url` value. ### Known issues ### #### Mount and --dir-cache-time #### --dir-cache-time controls the first layer of directory caching which works at the mount layer.
Being an independent caching mechanism from the `cache` backend, it will manage its own entries based on the configured time. To avoid getting into a scenario where the dir cache has obsolete data while cache has the correct one, try to set `--dir-cache-time` to a lower time than `--cache-info-age`. Default values are already configured in this way. #### Windows support - Experimental #### There are a couple of issues with Windows `mount` functionality that still require some investigation. It should be considered experimental for now, while fixes for this OS come in. Most of the issues seem to be related to the difference between filesystems on Linux flavors and Windows, as cache is heavily dependent on them. Any reports or feedback on how cache behaves on this OS is greatly appreciated. - https://github.com/rclone/rclone/issues/1935 - https://github.com/rclone/rclone/issues/1907 - https://github.com/rclone/rclone/issues/1834 #### Risk of throttling #### Future iterations of the cache backend will make use of the polling functionality of the cloud provider to synchronize and at the same time make writing through it more tolerant to failures. There are a couple of enhancements being tracked to add these, but in the meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts. Some recommendations: - don't use a very small interval for entry information (`--cache-info-age`) - while writes aren't yet optimised, you can still write through `cache` which gives you the advantage of adding the file in the cache at the same time if configured to do so. Future enhancements: - https://github.com/rclone/rclone/issues/1937 - https://github.com/rclone/rclone/issues/1936 #### cache and crypt #### One common scenario is to keep your data encrypted in the cloud provider using the `crypt` remote. `crypt` uses a similar technique to wrap around an existing remote and handles this translation in a seamless way. There is an issue with wrapping the remotes in this order: **cloud remote** -> **crypt** -> **cache** During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we're downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: **cloud remote** -> **cache** -> **crypt** #### absolute remote paths #### `cache` cannot differentiate between relative and absolute paths for the wrapped remote. Any path given in the `remote` config setting and on the command line will be passed to the wrapped remote as is, but for storing the chunks on disk the path will be made relative by removing any leading `/` character. This behavior is irrelevant for most backend types, but there are backends where a leading `/` changes the effective directory, e.g. in the `sftp` backend paths starting with a `/` are relative to the root of the SSH server and paths without are relative to the user home directory. As a result `sftp:bin` and `sftp:/bin` will share the same cache folder, even if they represent a different directory on the SSH server. ### Cache and Remote Control (--rc) ### Cache supports the new `--rc` mode in rclone and can be remote controlled through the following endpoints: By default, the listener is disabled if you do not add the flag. ### rc cache/expire Purge a remote from the cache backend. Supports either a directory or a file.
It supports both encrypted and unencrypted file names if cache is wrapped by crypt. Params: - **remote** = path to remote **(required)** - **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_ ### Standard Options Here are the standard options specific to cache (Cache a remote). #### --cache-remote Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). - Config: remote - Env Var: RCLONE_CACHE_REMOTE - Type: string - Default: "" #### --cache-plex-url The URL of the Plex server - Config: plex_url - Env Var: RCLONE_CACHE_PLEX_URL - Type: string - Default: "" #### --cache-plex-username The username of the Plex user - Config: plex_username - Env Var: RCLONE_CACHE_PLEX_USERNAME - Type: string - Default: "" #### --cache-plex-password The password of the Plex user **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - Config: plex_password - Env Var: RCLONE_CACHE_PLEX_PASSWORD - Type: string - Default: "" #### --cache-chunk-size The size of a chunk (partial file data). Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur. - Config: chunk_size - Env Var: RCLONE_CACHE_CHUNK_SIZE - Type: SizeSuffix - Default: 5M - Examples: - "1m" - 1MB - "5M" - 5 MB - "10M" - 10 MB #### --cache-info-age How long to cache file structure information (directory listings, file size, times etc). If all write operations are done through the cache then you can safely make this value very large as the cache store will also be updated in real time. - Config: info_age - Env Var: RCLONE_CACHE_INFO_AGE - Type: Duration - Default: 6h0m0s - Examples: - "1h" - 1 hour - "24h" - 24 hours - "48h" - 48 hours #### --cache-chunk-total-size The total size that the chunks can take up on the local disk. If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value. - Config: chunk_total_size - Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE - Type: SizeSuffix - Default: 10G - Examples: - "500M" - 500 MB - "1G" - 1 GB - "10G" - 10 GB ### Advanced Options Here are the advanced options specific to cache (Cache a remote). #### --cache-plex-token The plex token for authentication - auto set normally - Config: plex_token - Env Var: RCLONE_CACHE_PLEX_TOKEN - Type: string - Default: "" #### --cache-plex-insecure Skip all certificate verification when connecting to the Plex server - Config: plex_insecure - Env Var: RCLONE_CACHE_PLEX_INSECURE - Type: string - Default: "" #### --cache-db-path Directory to store file structure metadata DB. The remote name is used as the DB file name. - Config: db_path - Env Var: RCLONE_CACHE_DB_PATH - Type: string - Default: "$HOME/.cache/rclone/cache-backend" #### --cache-chunk-path Directory to cache chunk files. Path to where partial file data (chunks) are stored locally. The remote name is appended to the final path. This config follows the "--cache-db-path". If you specify a custom location for "--cache-db-path" and don't specify one for "--cache-chunk-path" then "--cache-chunk-path" will use the same path as "--cache-db-path". - Config: chunk_path - Env Var: RCLONE_CACHE_CHUNK_PATH - Type: string - Default: "$HOME/.cache/rclone/cache-backend" #### --cache-db-purge Clear all the cached data for this remote on start. 
- Config: db_purge - Env Var: RCLONE_CACHE_DB_PURGE - Type: bool - Default: false #### --cache-chunk-clean-interval How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. If you find that the cache goes over "cache-chunk-total-size" too often then try to lower this value to force it to perform cleanups more often. - Config: chunk_clean_interval - Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL - Type: Duration - Default: 1m0s #### --cache-read-retries How many times to retry a read from a cache storage. Since reading from a cache stream is independent from downloading file data, readers can get to a point where there's no more data in the cache. Most of the time this can indicate a connectivity issue if cache isn't able to provide file data anymore. For really slow connections, increase this to a point where the stream is able to provide data, but expect a very stuttery experience. - Config: read_retries - Env Var: RCLONE_CACHE_READ_RETRIES - Type: int - Default: 10 #### --cache-workers How many workers should run in parallel to download chunks. Higher values will mean more parallel processing (a better CPU is needed) and more concurrent requests on the cloud provider. This impacts several aspects, like cloud provider API limits and more stress on the hardware that rclone runs on, but it also means that streams will be more fluid and data will be available to readers much faster. **Note**: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use. - Config: workers - Env Var: RCLONE_CACHE_WORKERS - Type: int - Default: 4 #### --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible. This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like "cache-chunk-size" and "cache-workers", this footprint can increase if there are parallel streams too (multiple files being read at the same time). If the hardware permits it, leave the in-memory cache enabled for overall better performance during streaming, but it can be disabled with this flag if RAM is not available on the local machine. - Config: chunk_no_memory - Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY - Type: bool - Default: false #### --cache-rps Limits the number of requests per second to the source FS (-1 to disable) This setting places a hard limit on the number of requests per second that cache will make to the cloud provider remote and tries to respect that value by setting waits between reads. If you find that you're getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that. A good balance of all the other settings should make this setting unnecessary, but it is available for more special cases. **NOTE**: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass.
- Config: rps - Env Var: RCLONE_CACHE_RPS - Type: int - Default: -1 #### --cache-writes Cache file data on writes through the FS If you need to read files immediately after you upload them through cache you can enable this flag to have their data stored in the cache store at the same time during upload. - Config: writes - Env Var: RCLONE_CACHE_WRITES - Type: bool - Default: false #### --cache-tmp-upload-path Directory to keep temporary files until they are uploaded. This is the path that cache will use as temporary storage for new files that need to be uploaded to the cloud provider. Specifying a value will enable this feature. Without it, it is completely disabled and files will be uploaded directly to the cloud provider - Config: tmp_upload_path - Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH - Type: string - Default: "" #### --cache-tmp-wait-time How long should files be stored in local cache before being uploaded This is the duration that a file must wait in the temporary location _cache-tmp-upload-path_ before it is selected for upload. Note that only one file is uploaded at a time and it can take longer to start the upload if a queue has formed for this purpose. - Config: tmp_wait_time - Env Var: RCLONE_CACHE_TMP_WAIT_TIME - Type: Duration - Default: 15s #### --cache-db-wait-time How long to wait for the DB to be available - 0 is unlimited Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error. If you set it to 0 then it will wait forever. - Config: db_wait_time - Env Var: RCLONE_CACHE_DB_WAIT_TIME - Type: Duration - Default: 1s ### Backend commands Here are the commands specific to the cache backend. Run them with rclone backend COMMAND remote: The help below will explain what arguments each command takes. See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more info on how to pass options and arguments. These can be run on a running backend using the rc command [backend/command](https://rclone.org/rc/#backend/command). #### stats Print stats on the cache backend in JSON format. rclone backend stats remote: [options] [+] Chunker (BETA) ---------------------------------------- The `chunker` overlay transparently splits large files into smaller chunks during upload to the wrapped remote and transparently assembles them back when the file is downloaded. This allows you to effectively overcome size limits imposed by storage providers. To use it, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote. First check your chosen remote is working - we'll call it `remote:path` here. Note that anything inside `remote:path` will be chunked and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote, eg `s3:bucket`. Now configure `chunker` using `rclone config`. We will call this one `overlay` to separate it from the `remote` itself. ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> overlay Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Transparently chunk/split large files \ "chunker" [snip] Storage> chunker Remote to chunk/unchunk. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). Enter a string value.
Press Enter for the default (""). remote> remote:path Files larger than chunk size will be split in chunks. Enter a size with suffix k,M,G,T. Press Enter for the default ("2G"). chunk_size> 100M Choose how chunker handles hash sums. All modes but "none" require metadata. Enter a string value. Press Enter for the default ("md5"). Choose a number from below, or type in your own value 1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise \ "none" 2 / MD5 for composite files \ "md5" 3 / SHA1 for composite files \ "sha1" 4 / MD5 for all files \ "md5all" 5 / SHA1 for all files \ "sha1all" 6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported \ "md5quick" 7 / Similar to "md5quick" but prefers SHA1 over MD5 \ "sha1quick" hash_type> md5 Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [overlay] type = chunker remote = remote:path chunk_size = 100M hash_type = md5 -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` ### Specifying the remote In normal use, make sure the remote has a `:` in. If you specify the remote without a `:` then rclone will use a local directory of that name. So if you use a remote of `/path/to/secret/files` then rclone will chunk stuff in that directory. If you use a remote of `name` then rclone will put files in a directory called `name` in the current directory. ### Chunking When rclone starts a file upload, chunker checks the file size. If it doesn't exceed the configured chunk size, chunker will just pass the file to the wrapped remote. If a file is large, chunker will transparently cut data in pieces with temporary names and stream them one by one, on the fly. Each data chunk will contain the specified number of bytes, except for the last one which may have less data. If the file size is unknown in advance (this is called a streaming upload), chunker will internally create a temporary copy, record its size and repeat the above process. When upload completes, temporary chunk files are finally renamed. This scheme guarantees that operations can be run in parallel and look atomic from the outside. A similar method with hidden temporary chunks is used for other operations (copy/move/rename etc). If an operation fails, hidden chunks are normally destroyed, and the target composite file stays intact. When a composite file download is requested, chunker transparently assembles it by concatenating data chunks in order. As the split is trivial one could even manually concatenate data chunks together to obtain the original content. When the `list` rclone command scans a directory on the wrapped remote, the potential chunk files are accounted for, grouped and assembled into composite directory entries. Any temporary chunks are hidden. List and other commands can sometimes come across composite files with missing or invalid chunks, eg. shadowed by a like-named directory or another file. This usually means that the wrapped file system has been directly tampered with or damaged. If chunker detects a missing chunk it will by default print a warning, skip the whole incomplete group of chunks, but proceed with the current command. You can set the `--chunker-fail-hard` flag to have commands abort with an error message in such cases. #### Chunk names The default chunk name format is `*.rclone_chunk.###`, hence by default chunk names are `BIG_FILE_NAME.rclone_chunk.001`, `BIG_FILE_NAME.rclone_chunk.002` etc.
You can configure another name format using the `name_format`
configuration file option. The format uses asterisk `*` as a
placeholder for the base file name and one or more consecutive hash
characters `#` as a placeholder for the sequential chunk number. There
must be one and only one asterisk. The number of consecutive hash
characters defines the minimum length of a string representing a chunk
number. If the decimal chunk number has fewer digits than the number
of hashes, it is left-padded by zeros. If the decimal string is
longer, it is left intact. By default numbering starts from 1 but
there is another option that allows the user to start from 0, eg for
compatibility with legacy software.

For example, if the name format is `big_*-##.part` and the original
file name is `data.txt` and numbering starts from 0, then the first
chunk will be named `big_data.txt-00.part`, the 99th chunk will be
`big_data.txt-98.part` and the 302nd chunk will become
`big_data.txt-301.part`.

Note that `list` assembles composite directory entries only when chunk
names match the configured format and treats non-conforming file names
as normal non-chunked files.

### Metadata

Besides data chunks chunker will by default create a metadata object
for a composite file. The object is named after the original file.
Chunker allows the user to disable metadata completely (the `none`
format). Note that metadata is normally not created for files smaller
than the configured chunk size. This may change in future rclone
releases.

#### Simple JSON metadata format

This is the default format. It supports hash sums and chunk validation
for composite files. Meta objects carry the following fields:

- `ver`     - version of format, currently `1`
- `size`    - total size of composite file
- `nchunks` - number of data chunks in file
- `md5`     - MD5 hashsum of composite file (if present)
- `sha1`    - SHA1 hashsum (if present)

There is no field for the composite file name as it's simply equal to
the name of the meta object on the wrapped remote. Please refer to the
respective sections for details on hashsums and modified time
handling.

#### No metadata

You can disable meta objects by setting the meta format option to
`none`. In this mode chunker will scan the directory for all files
that follow the configured chunk name format, group them by detecting
chunks with the same base name and show group names as virtual
composite files. This method is more prone to missing chunk errors
(especially a missing last chunk) than the format with metadata
enabled.

### Hashsums

Chunker supports hashsums only when compatible metadata is present.
Hence, if you choose the metadata format of `none`, chunker will
report hashsums as `UNSUPPORTED`.

Please note that by default metadata is stored only for composite
files. If a file is smaller than the configured chunk size, chunker
will transparently redirect hash requests to the wrapped remote, so
support depends on that. You will see the empty string as a hashsum of
the requested type for small files if the wrapped remote doesn't
support it.

Many storage backends support MD5 and SHA1 hash types, as does
chunker. With chunker you can choose one or another but not both. MD5
is set by default as the most supported type. Since chunker keeps
hashes for composite files and falls back to the wrapped remote hash
for non-chunked ones, we advise you to choose the same hash type as
supported by the wrapped remote so that your file listings look
coherent.

If your storage backend does not support MD5 or SHA1 but you need
consistent file hashing, configure chunker with `md5all` or `sha1all`.
These two modes guarantee the given hash for all files. If the wrapped
remote doesn't support it, chunker will then add metadata to all
files, even small ones. However, this can double the number of small
files in storage and incur additional service charges. You can even
use chunker to force md5/sha1 support in any other remote at the
expense of sidecar meta objects by setting eg `hash_type=sha1all` to
force hashsums and `chunk_size=1P` to effectively disable chunking.

Normally, when a file is copied to a chunker controlled remote,
chunker will ask the file source for a compatible file hash and revert
to on-the-fly calculation if none is found. This involves some CPU
overhead but provides a guarantee that the given hashsum is available.
Also, chunker will reject a server-side copy or move operation if the
source and destination hashsum types are different, resulting in extra
network bandwidth use, too. In some rare cases this may be undesired,
so chunker provides two optional choices: `sha1quick` and `md5quick`.
If the source does not support the primary hash type and the quick
mode is enabled, chunker will try to fall back to the secondary type.
This will save CPU and bandwidth but can result in empty hashsums at
the destination. Beware of the consequences: the `sync` command will
revert (sometimes silently) to time/size comparison if compatible
hashsums between source and target are not found.

### Modified time

Chunker stores modification times using the wrapped remote so support
depends on that. For a small non-chunked file the chunker overlay
simply manipulates the modification time of the wrapped remote file.
For a composite file with metadata chunker will get and set the
modification time of the metadata object on the wrapped remote. If a
file is chunked but the metadata format is `none` then chunker will
use the modification time of the first data chunk.

### Migrations

The idiomatic way to migrate to a different chunk size, hash type or
chunk naming scheme is to:

- Collect all your chunked files under a directory and have your
  chunker remote point to it.
- Create another directory (most probably on the same cloud storage)
  and configure a new remote with desired metadata format, hash type,
  chunk naming etc.
- Now run `rclone sync -i oldchunks: newchunks:` and all your data
  will be transparently converted in transfer. This may take some
  time, yet chunker will try server-side copy if possible.
- After checking data integrity you may remove the configuration
  section of the old remote.

If rclone gets killed during a long operation on a big composite file,
hidden temporary chunks may stay in the directory. They will not be
shown by the `list` command but will eat up your account quota. Please
note that the `deletefile` command deletes only active chunks of a
file. As a workaround, you can use the remote of the wrapped file
system to see them. An easy way to get rid of hidden garbage is to
copy the littered directory somewhere using the chunker remote and
purge the original directory. The `copy` command will copy only active
chunks while the `purge` will remove everything including garbage.

### Caveats and Limitations

Chunker requires the wrapped remote to support server side `move` (or
`copy` + `delete`) operations, otherwise it will explicitly refuse to
start. This is because it internally renames temporary chunk files to
their final names when an operation completes successfully.

Chunker encodes the chunk number in the file name, so with the default
`name_format` setting it adds 17 characters. Also chunker adds 7
characters of temporary suffix during operations.
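Putting those two figures together with the base name limits discussed
next gives the arithmetic behind the maxima quoted below (a sketch,
assuming the default name format):

```
255 (typical filesystem limit) - 17 (chunk suffix) - 7 (temporary suffix) = 231
143 (rclone crypt limit)       - 17 (chunk suffix) - 7 (temporary suffix) = 119
```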
Many file systems limit the base file name (without path) to 255
characters. Using rclone's crypt remote as a base file system limits
the file name to 143 characters. Thus, the maximum name length is 231
for most files and 119 for chunker-over-crypt. A user in need can
change the name format to eg `*.rcc##` and save 10 characters
(provided at most 99 chunks per file).

Note that a move implemented using the copy-and-delete method may
incur double charging with some cloud storage providers.

Chunker will not automatically rename existing chunks when you run
`rclone config` on a live remote and change the chunk name format.
Beware that as a result of this some files which have been treated as
chunks before the change can pop up in directory listings as normal
files and vice versa. The same warning holds for the chunk size. If
you desperately need to change critical chunking settings, you should
run the data migration as described above.

If the wrapped remote is case insensitive, the chunker overlay will
inherit that property (so you can't have a file called "Hello.doc" and
"hello.doc" in the same directory).

### Standard Options

Here are the standard options specific to chunker (Transparently chunk/split large files).

#### --chunker-remote

Remote to chunk/unchunk.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).

- Config: remote
- Env Var: RCLONE_CHUNKER_REMOTE
- Type: string
- Default: ""

#### --chunker-chunk-size

Files larger than chunk size will be split in chunks.

- Config: chunk_size
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
- Type: SizeSuffix
- Default: 2G

#### --chunker-hash-type

Choose how chunker handles hash sums. All modes but "none" require metadata.

- Config: hash_type
- Env Var: RCLONE_CHUNKER_HASH_TYPE
- Type: string
- Default: "md5"
- Examples:
    - "none"
        - Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise
    - "md5"
        - MD5 for composite files
    - "sha1"
        - SHA1 for composite files
    - "md5all"
        - MD5 for all files
    - "sha1all"
        - SHA1 for all files
    - "md5quick"
        - Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported
    - "sha1quick"
        - Similar to "md5quick" but prefers SHA1 over MD5

### Advanced Options

Here are the advanced options specific to chunker (Transparently chunk/split large files).

#### --chunker-name-format

String format of chunk file names.
The two placeholders are: base file name (*) and chunk number (#...).
There must be one and only one asterisk and one or more consecutive hash characters.
If the chunk number has fewer digits than the number of hashes, it is left-padded by zeros.
If there are more digits in the number, they are left as is.
Possible chunk files are ignored if their name does not match the given format.

- Config: name_format
- Env Var: RCLONE_CHUNKER_NAME_FORMAT
- Type: string
- Default: "*.rclone_chunk.###"

#### --chunker-start-from

Minimum valid chunk number. Usually 0 or 1.
By default chunk numbers start from 1.

- Config: start_from
- Env Var: RCLONE_CHUNKER_START_FROM
- Type: int
- Default: 1

#### --chunker-meta-format

Format of the metadata object or "none". By default "simplejson".
Metadata is a small JSON file named after the composite file.

- Config: meta_format
- Env Var: RCLONE_CHUNKER_META_FORMAT
- Type: string
- Default: "simplejson"
- Examples:
    - "none"
        - Do not use metadata files at all. Requires hash type "none".
    - "simplejson"
        - Simple JSON supports hash sums and chunk validation.
        - It has the following fields: ver, size, nchunks, md5, sha1.

#### --chunker-fail-hard

Choose how chunker should handle files with missing or invalid chunks.

- Config: fail_hard
- Env Var: RCLONE_CHUNKER_FAIL_HARD
- Type: bool
- Default: false
- Examples:
    - "true"
        - Report errors and abort current command.
    - "false"
        - Warn user, skip incomplete file and proceed.

## Citrix ShareFile

[Citrix ShareFile](https://sharefile.com) is a secure file sharing and
transfer service aimed at business.

The initial setup for Citrix ShareFile involves getting a token from
Citrix ShareFile which you need to do in your browser. `rclone config`
walks you through it.

Here is an example of how to make a remote called `remote`. First run:

     rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
XX / Citrix Sharefile
   \ "sharefile"
Storage> sharefile
** See help for sharefile backend at: https://rclone.org/sharefile/ **

ID of the root folder

Leave blank to access "Personal Folders".  You can use one of the
standard values here or any folder ID (long hex number ID).
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Access the Personal Folders. (Default)
   \ ""
 2 / Access the Favorites folder.
   \ "favorites"
 3 / Access all the shared folders.
   \ "allshared"
 4 / Access all the individual connectors.
   \ "connectors"
 5 / Access the home, favorites, and shared folders as well as the connectors.
   \ "top"
root_folder_id>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = sharefile
endpoint = https://XXX.sharefile.com
token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2019-09-30T19:41:45.878561877+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from Citrix ShareFile. This only runs from the
moment it opens your browser to the moment you get back the
verification code. This is on `http://127.0.0.1:53682/` and it may
require you to unblock it temporarily if you are running a host
firewall.

Once configured you can then use `rclone` like this,

List directories in the top level of your ShareFile

    rclone lsd remote:

List all the files in your ShareFile

    rclone ls remote:

To copy a local directory to a ShareFile directory called backup

    rclone copy /home/source remote:backup

Paths may be as deep as required, eg `remote:directory/subdirectory`.

### Modified time and hashes ###

ShareFile allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.

ShareFile supports MD5 type hashes, so you can use the `--checksum`
flag.

### Transfers ###

For files above 128MB rclone will use a chunked transfer.
Rclone will upload up to `--transfers` chunks at the same time (shared
among all the multipart uploads). Chunks are buffered in memory and
are normally 64MB so increasing `--transfers` will increase memory
use.

### Limitations ###

Note that ShareFile is case insensitive so you can't have a file
called "Hello.doc" and one called "hello.doc".

ShareFile only supports filenames up to 256 characters in length.

#### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \\        | 0x5C  | ＼           |
| *         | 0x2A  | ＊           |
| <         | 0x3C  | ＜           |
| >         | 0x3E  | ＞           |
| ?         | 0x3F  | ？           |
| :         | 0x3A  | ：           |
| \|        | 0x7C  | ｜           |
| "         | 0x22  | ＂           |

File names can also not start or end with the following characters.
These only get replaced if they are the first or last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |
| .         | 0x2E  | ．           |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can't be used in JSON strings.

### Standard Options

Here are the standard options specific to sharefile (Citrix Sharefile).

#### --sharefile-root-folder-id

ID of the root folder

Leave blank to access "Personal Folders".  You can use one of the
standard values here or any folder ID (long hex number ID).

- Config: root_folder_id
- Env Var: RCLONE_SHAREFILE_ROOT_FOLDER_ID
- Type: string
- Default: ""
- Examples:
    - ""
        - Access the Personal Folders. (Default)
    - "favorites"
        - Access the Favorites folder.
    - "allshared"
        - Access all the shared folders.
    - "connectors"
        - Access all the individual connectors.
    - "top"
        - Access the home, favorites, and shared folders as well as the connectors.

### Advanced Options

Here are the advanced options specific to sharefile (Citrix Sharefile).

#### --sharefile-upload-cutoff

Cutoff for switching to multipart upload.

- Config: upload_cutoff
- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 128M

#### --sharefile-chunk-size

Upload chunk size. Must be a power of 2 >= 256k.

Making this larger will improve performance, but note that each chunk
is buffered in memory (one per transfer).

Reducing this will reduce memory usage but decrease performance.

- Config: chunk_size
- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 64M

#### --sharefile-endpoint

Endpoint for API calls.

This is usually auto discovered as part of the oauth process, but can
be set manually to something like: https://XXX.sharefile.com

- Config: endpoint
- Env Var: RCLONE_SHAREFILE_ENDPOINT
- Type: string
- Default: ""

#### --sharefile-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_SHAREFILE_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot

Crypt
----------------------------------------

Rclone `crypt` remotes encrypt and decrypt other remotes.

To use `crypt`, first set up the underlying remote. Follow the
`rclone config` instructions for that remote.

`crypt` applied to a local pathname instead of a remote will encrypt
and decrypt that directory, and can be used to encrypt USB removable
drives.

Before configuring the crypt remote, check the underlying remote is
working.
In this example the underlying remote is called `remote:path`.
Anything inside `remote:path` will be encrypted and anything outside
will not. In the case of a bucket based underlying remote (eg Amazon
S3, B2, Swift) it is generally advisable to define a crypt remote in
the underlying remote `s3:bucket`. If `s3:` alone is specified
alongside file name encryption, rclone will encrypt the bucket name.

Configure `crypt` using `rclone config`. In this example the `crypt`
remote is called `secret`, to differentiate it from the underlying
`remote`.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> secret
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Encrypt/Decrypt a remote
   \ "crypt"
[snip]
Storage> crypt
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
remote> remote:path
How to encrypt the filenames.
Choose a number from below, or type in your own value
 1 / Don't encrypt the file names.  Adds a ".bin" extension only.
   \ "off"
 2 / Encrypt the filenames see the docs for the details.
   \ "standard"
 3 / Very simple filename obfuscation.
   \ "obfuscate"
filename_encryption> 2
Option to either encrypt directory names or leave them intact.
Choose a number from below, or type in your own value
 1 / Encrypt directory names.
   \ "true"
 2 / Don't encrypt directory names, leave them intact.
   \ "false"
directory_name_encryption> 1
Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> g
Password strength in bits.
64 is just about memorable
128 is secure
1024 is the maximum
Bits> 128
Your password is: JAsJvRcgR-_veXNfy_sGmQ
Use this password?
y) Yes
n) No
y/n> y
Remote config
--------------------
[secret]
remote = remote:path
filename_encryption = standard
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

**Important** The crypt password stored in `rclone.conf` is lightly
obscured. That only protects it from cursory inspection. It is not
secure unless encryption of `rclone.conf` is specified.

A long passphrase is recommended, or `rclone config` can generate a
random one.

The obscured password is created using AES-CTR with a static key. The
salt is stored verbatim at the beginning of the obscured password.
This static key is shared between all versions of rclone.

If you reconfigure rclone with the same passwords/passphrases
elsewhere it will be compatible, but the obscured version will be
different due to the different salt.

Rclone does not encrypt

* file length - this can be calculated within 16 bytes
* modification time - used for syncing

## Specifying the remote ##

In normal use, ensure the remote has a `:` in. If specified without,
rclone uses a local directory of that name. For example if a remote
`/path/to/secret/files` is specified, rclone encrypts content to that
directory. If a remote `name` is specified, rclone targets a directory
`name` in the current directory.

If remote `remote:path/to/dir` is specified, rclone stores encrypted
files in `path/to/dir` on the remote.
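As a brief usage sketch (assuming the `secret` remote configured
above, which wraps `remote:path`):

```
$ rclone copy /home/source secret:backup
$ rclone ls secret:backup
```

The first command stores the files encrypted under `remote:path/backup`;
the second lists them back with decrypted names and sizes.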
With file name encryption, files saved to `secret:subdir/subfile` are
stored in the unencrypted path `path/to/dir` but the `subdir/subfile`
element is encrypted.

## Example ##

Create the following file structure using "standard" file name
encryption.

```
plaintext/
├── file0.txt
├── file1.txt
└── subdir
    ├── file2.txt
    ├── file3.txt
    └── subsubdir
        └── file4.txt
```

Copy these to the remote, and list them

```
$ rclone -q copy plaintext secret:
$ rclone -q ls secret:
        7 file1.txt
        6 file0.txt
        8 subdir/file2.txt
       10 subdir/subsubdir/file4.txt
        9 subdir/file3.txt
```

The crypt remote looks like

```
$ rclone -q ls remote:path
       55 hagjclgavj2mbiqm6u6cnjjqcg
       54 v05749mltvv1tf4onltun46gls
       57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
       58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
       56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
```

The directory structure is preserved

```
$ rclone -q ls secret:subdir
        8 file2.txt
        9 file3.txt
       10 subsubdir/file4.txt
```

Without file name encryption `.bin` extensions are added to underlying
names. This prevents the cloud provider attempting to interpret file
content.

```
$ rclone -q ls remote:path
       54 file0.txt.bin
       57 subdir/file3.txt.bin
       56 subdir/file2.txt.bin
       58 subdir/subsubdir/file4.txt.bin
       55 file1.txt.bin
```

### File name encryption modes ###

Off

* doesn't hide file names or directory structure
* allows for longer file names (~246 characters)
* can use sub paths and copy single files

Standard

* file names encrypted
* file names can't be as long (~143 characters)
* can use sub paths and copy single files
* directory structure visible
* identical file names will have identical uploaded names
* can use shortcuts to shorten the directory recursion

Obfuscation

This is a simple "rotate" of the filename, with each file having a rot
distance based on the filename. Rclone stores the distance at the
beginning of the filename. A file called "hello" may become "53.jgnnq".

Obfuscation is not a strong encryption of filenames, but it hinders
automated scanning tools picking up on filename patterns. It is an
intermediate between "off" and "standard" which allows for longer path
segment names.

There is a possibility with some unicode based filenames that the
obfuscation is weak and may map lower case characters to upper case
equivalents.

Obfuscation cannot be relied upon for strong protection.

* file names very lightly obfuscated
* file names can be longer than standard encryption
* can use sub paths and copy single files
* directory structure visible
* identical file names will have identical uploaded names

Cloud storage systems have limits on file name length and total path
length which rclone is more likely to breach using "Standard" file
name encryption. Where file names are less than 156 characters in
length issues should not be encountered, irrespective of cloud storage
provider.

An alternative, future rclone file name encryption mode may tolerate
backend provider path length limits.

### Directory name encryption ###

Crypt offers the option of encrypting dir names or leaving them
intact.
There are two options: True Encrypts the whole file path including directory names Example: `1/12/123.txt` is encrypted to `p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0` False Only encrypts file names, skips directory names Example: `1/12/123.txt` is encrypted to `1/12/qgm4avr35m5loi1th53ato71v0` ### Modified time and hashes ### Crypt stores modification times using the underlying remote so support depends on that. Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator. Use the `rclone cryptcheck` command to check the integrity of a crypted remote instead of `rclone check` which can't check the checksums properly. ### Standard Options Here are the standard options specific to crypt (Encrypt/Decrypt a remote). #### --crypt-remote Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). - Config: remote - Env Var: RCLONE_CRYPT_REMOTE - Type: string - Default: "" #### --crypt-filename-encryption How to encrypt the filenames. - Config: filename_encryption - Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION - Type: string - Default: "standard" - Examples: - "standard" - Encrypt the filenames see the docs for the details. - "obfuscate" - Very simple filename obfuscation. - "off" - Don't encrypt the file names. Adds a ".bin" extension only. #### --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. NB If filename_encryption is "off" then this option will do nothing. - Config: directory_name_encryption - Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION - Type: bool - Default: true - Examples: - "true" - Encrypt directory names. - "false" - Don't encrypt directory names, leave them intact. #### --crypt-password Password or pass phrase for encryption. **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - Config: password - Env Var: RCLONE_CRYPT_PASSWORD - Type: string - Default: "" #### --crypt-password2 Password or pass phrase for salt. Optional but recommended. Should be different to the previous password. **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - Config: password2 - Env Var: RCLONE_CRYPT_PASSWORD2 - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to crypt (Encrypt/Decrypt a remote). #### --crypt-server-side-across-configs Allow server side operations (eg copy) to work across different crypt configs. Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it. This can be used, for example, to change file name encryption type without re-uploading all the data. Just make two crypt backends pointing to two different directories with the single changed parameter and use rclone move to move the files between the crypt remotes. - Config: server_side_across_configs - Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS - Type: bool - Default: false #### --crypt-show-mapping For all files listed show how the names encrypt. If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name. This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes. 
- Config: show_mapping
- Env Var: RCLONE_CRYPT_SHOW_MAPPING
- Type: bool
- Default: false

### Backend commands

Here are the commands specific to the crypt backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more
info on how to pass options and arguments.

These can be run on a running backend using the rc command
[backend/command](https://rclone.org/rc/#backend/command).

#### encode

Encode the given filename(s)

    rclone backend encode remote: [options] [<arguments>+]

This encodes the filenames given as arguments returning a list of
strings of the encoded results.

Usage Example:

    rclone backend encode crypt: file1 [file2...]
    rclone rc backend/command command=encode fs=crypt: file1 [file2...]

#### decode

Decode the given filename(s)

    rclone backend decode remote: [options] [<arguments>+]

This decodes the filenames given as arguments returning a list of
strings of the decoded results. It will return an error if any of the
inputs are invalid.

Usage Example:

    rclone backend decode crypt: encryptedfile1 [encryptedfile2...]
    rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...]

## Backing up a crypted remote ##

If you wish to backup a crypted remote, it is recommended that you use
`rclone sync` on the encrypted files, and make sure the passwords are
the same in the new encrypted remote.

This will have the following advantages

* `rclone sync` will check the checksums while copying
* you can use `rclone check` between the encrypted remotes
* you don't decrypt and encrypt unnecessarily

For example, let's say you have your original remote at `remote:` with
the encrypted version at `eremote:` with path `remote:crypt`. You
would then set up the new remote `remote2:` and then the encrypted
version `eremote2:` with path `remote2:crypt` using the same passwords
as `eremote:`.

To sync the two remotes you would do

    rclone sync -i remote:crypt remote2:crypt

And to check the integrity you would do

    rclone check remote:crypt remote2:crypt

## File formats ##

### File encryption ###

Files are encrypted 1:1 source file to destination object. The file
has a header and is divided into chunks.

#### Header ####

* 8 bytes magic string `RCLONE\x00\x00`
* 24 bytes Nonce (IV)

The initial nonce is generated from the operating system's crypto
strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
The chance of a nonce being re-used is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
approximately 2×10⁻³² of re-using a nonce.

#### Chunk ####

Each chunk will contain 64kB of data, except for the last one which
may have less data. The data chunk is in standard NACL secretbox
format. Secretbox uses XSalsa20 and Poly1305 to encrypt and
authenticate messages.

Each chunk contains:

* 16 Bytes of Poly1305 authenticator
* 1 - 65536 bytes XSalsa20 encrypted data

64k chunk size was chosen as the best performing chunk size (the
authenticator takes too much time below this and the performance drops
off due to cache effects above this). Note that these chunks are
buffered in memory so they can't be too big.

This uses a 32 byte (256 bit) key derived from the user password.
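One way to arrive at the nonce re-use probability quoted in the Header
section above is the usual birthday estimate, assuming the exabyte is
written as 64kB chunks each consuming one of the 2¹⁹² possible nonce
values (a sketch, not part of the original derivation):

```
10¹⁸ bytes ÷ 65536 bytes/chunk ≈ 1.5×10¹³ nonces
collision probability ≈ n²/2 ÷ 2¹⁹² = (1.5×10¹³)² / 2¹⁹³ ≈ 2×10⁻³²
```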
#### Examples #### 1 byte file will encrypt to * 32 bytes header * 17 bytes data chunk 49 bytes total 1MB (1048576 bytes) file will encrypt to * 32 bytes header * 16 chunks of 65568 bytes 1049120 bytes total (a 0.05% overhead). This is the overhead for big files. ### Name encryption ### File names are encrypted segment by segment - the path is broken up into `/` separated strings and these are encrypted individually. File segments are padded using PKCS#7 to a multiple of 16 bytes before encryption. They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway. This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system. This means that * filenames with the same name will encrypt the same * filenames which start the same won't have a common prefix This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password. After encryption they are written out using a modified version of standard `base32` encoding as described in RFC4648. The standard encoding is modified in two ways: * it becomes lower case (no-one likes upper case filenames!) * we strip the padding character `=` `base32` is used rather than the more efficient `base64` so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive). ### Key derivation ### Rclone uses `scrypt` with parameters `N=16384, r=8, p=1` with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one. `scrypt` makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt. Dropbox --------------------------------- Paths are specified as `remote:path` Dropbox paths may be as deep as required, eg `remote:directory/subdirectory`. The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` n) New remote d) Delete remote q) Quit config e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Dropbox \ "dropbox" [snip] Storage> dropbox Dropbox App Key - leave blank normally. app_key> Dropbox App Secret - leave blank normally. app_secret> Remote config Please visit: https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX -------------------- [remote] app_key = app_secret = token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` You can then use it like this, List directories in top level of your dropbox rclone lsd remote: List all the files in your dropbox rclone ls remote: To copy a local directory to a dropbox directory called backup rclone copy /home/source remote:backup ### Dropbox for business ### Rclone supports Dropbox for business and Team Folders. When using Dropbox for business `remote:` and `remote:path/to/file` will refer to your personal folder. 
If you wish to see Team Folders you must use a leading `/` in the
path, so `rclone lsd remote:/` will refer to the root and show you all
Team Folders and your User Folder.

You can then use team folders like this `remote:/TeamFolder` and
`remote:/TeamFolder/path/to/file`.

A leading `/` for a Dropbox personal account will do nothing, but it
will take an extra HTTP transaction so it should be avoided.

### Modified time and Hashes ###

Dropbox supports modified times, but the only way to set a
modification time is to re-upload the file.

This means that if you uploaded your data with an older version of
rclone which didn't support the v2 API and modified times, rclone will
decide to upload all your old data to fix the modification times. If
you don't want this to happen use `--size-only` or `--checksum` flag
to stop it.

Dropbox supports [its own hash
type](https://www.dropbox.com/developers/reference/content-hash) which
is checked for all transfers.

#### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／           |
| DEL       | 0x7F  | ␡           |
| \         | 0x5C  | ＼           |

File names can also not end with the following characters.
These only get replaced if they are the last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can't be used in JSON strings.

### Standard Options

Here are the standard options specific to dropbox (Dropbox).

#### --dropbox-client-id

OAuth Client Id
Leave blank normally.

- Config: client_id
- Env Var: RCLONE_DROPBOX_CLIENT_ID
- Type: string
- Default: ""

#### --dropbox-client-secret

OAuth Client Secret
Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_DROPBOX_CLIENT_SECRET
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to dropbox (Dropbox).

#### --dropbox-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_DROPBOX_TOKEN
- Type: string
- Default: ""

#### --dropbox-auth-url

Auth server URL.
Leave blank to use the provider defaults.

- Config: auth_url
- Env Var: RCLONE_DROPBOX_AUTH_URL
- Type: string
- Default: ""

#### --dropbox-token-url

Token server url.
Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_DROPBOX_TOKEN_URL
- Type: string
- Default: ""

#### --dropbox-chunk-size

Upload chunk size. (< 150M).

Any files larger than this will be uploaded in chunks of this size.

Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries. Setting this larger will increase the speed
slightly (at most 10% for 128MB in tests) at the cost of using more
memory. It can be set smaller if you are tight on memory.

- Config: chunk_size
- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
- Type: SizeSuffix
- Default: 48M

#### --dropbox-impersonate

Impersonate this user when using a business account.

- Config: impersonate
- Env Var: RCLONE_DROPBOX_IMPERSONATE
- Type: string
- Default: ""

#### --dropbox-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_DROPBOX_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot

### Limitations ###

Note that Dropbox is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
There are some file names such as `thumbs.db` which Dropbox can't
store. There is a full list of them in the ["Ignored Files" section
of this document](https://www.dropbox.com/en/help/145). Rclone will
issue an error message `File name disallowed - not uploading` if it
attempts to upload one of those file names, but the sync won't fail.

Some errors may occur if you try to sync copyright-protected files
because Dropbox has its own [copyright
detector](https://techcrunch.com/2014/03/30/how-dropbox-knows-when-youre-sharing-copyrighted-stuff-without-actually-looking-at-your-stuff/)
that prevents this sort of file being downloaded. This will return the
error `ERROR : /path/to/your/file: Failed to copy: failed to open
source object: path/restricted_content/.`

If you have more than 10,000 files in a directory then `rclone purge
dropbox:dir` will return the error `Failed to purge: There are too
many files involved in this operation`. As a work-around do an
`rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`.

### Get your own Dropbox App ID ###

When you use rclone with Dropbox in its default configuration you are
using rclone's App ID. This is shared between all the rclone users.

Here is how to create your own Dropbox App ID for rclone:

1. Log into the [Dropbox App console](https://www.dropbox.com/developers/apps/create)
   with your Dropbox Account (it need not be the same account as the
   Dropbox you want to access)

2. Choose an API => Usually this should be `Dropbox API`

3. Choose the type of access you want to use => `Full Dropbox` or `App Folder`

4. Name your App. The app name is global, so you can't use `rclone` for example

5. Click the button `Create App`

6. Fill `Redirect URIs` as `http://localhost:53682/`

7. Find the `App key` and `App secret`. Use these values in rclone
   config to add a new remote or edit an existing remote.

FTP
------------------------------

FTP is the File Transfer Protocol. FTP support is provided using the
[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)
package.

Paths are specified as `remote:path`. If the path does not begin with
a `/` it is relative to the home directory of the user. An empty path
`remote:` refers to the user's home directory.

Here is an example of making an FTP configuration. First run

    rclone config

This will guide you through an interactive setup process. An FTP
remote only needs a host together with a username and a password. With
an anonymous FTP server, you will need to use `anonymous` as the
username and your email address as the password.

```
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / FTP Connection
   \ "ftp"
[snip]
Storage> ftp
** See help for ftp backend at: https://rclone.org/ftp/ **

FTP host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to ftp.example.com
   \ "ftp.example.com"
host> ftp.example.com
FTP username, leave blank for current username, ncw
Enter a string value. Press Enter for the default ("").
user>
FTP port, leave blank to use default (21)
Enter a string value. Press Enter for the default ("").
port>
FTP password
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Use FTP over TLS (Implicit)
Enter a boolean value (true or false). Press Enter for the default ("false").
tls>
Use FTP over TLS (Explicit)
Enter a boolean value (true or false). Press Enter for the default ("false").
explicit_tls>
Remote config
--------------------
[remote]
type = ftp
host = ftp.example.com
pass = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This remote is called `remote` and can now be used like this

See all directories in the home directory

    rclone lsd remote:

Make a new directory

    rclone mkdir remote:path/to/directory

List the contents of a directory

    rclone ls remote:path/to/directory

Sync `/home/local/directory` to the remote directory, deleting any
excess files in the directory.

    rclone sync -i /home/local/directory remote:directory

### Modified time ###

FTP does not support modified times. Any times you see on the server
will be time of upload.

### Checksums ###

FTP does not support any checksums.

### Usage without a config file ###

An example of how to use the ftp remote without a config file:

    rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=`rclone obscure dummy`

#### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters),
file names can also not end with the following characters. These only
get replaced if they are the last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |

Note that not all FTP servers can have all characters in file names,
for example:

| FTP Server| Forbidden characters |
| --------- |:--------------------:|
| proftpd   | `*`                  |
| pureftpd  | `\ [ ]`              |

### Implicit TLS ###

FTP supports implicit FTP over TLS servers (FTPS). This has to be
enabled in the config for the remote. The default FTPS port is `990`
so the port will likely have to be explicitly set in the config for
the remote.

### Standard Options

Here are the standard options specific to ftp (FTP Connection).

#### --ftp-host

FTP host to connect to

- Config: host
- Env Var: RCLONE_FTP_HOST
- Type: string
- Default: ""
- Examples:
    - "ftp.example.com"
        - Connect to ftp.example.com

#### --ftp-user

FTP username, leave blank for current username, $USER

- Config: user
- Env Var: RCLONE_FTP_USER
- Type: string
- Default: ""

#### --ftp-port

FTP port, leave blank to use default (21)

- Config: port
- Env Var: RCLONE_FTP_PORT
- Type: string
- Default: ""

#### --ftp-pass

FTP password

**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

- Config: pass
- Env Var: RCLONE_FTP_PASS
- Type: string
- Default: ""

#### --ftp-tls

Use FTP over TLS (Implicit)

When using implicit FTP over TLS the client will connect using TLS
right from the start, which in turn breaks the compatibility with
non-TLS-aware servers. This is usually served over port 990 rather
than port 21. Cannot be used in combination with explicit FTP.

- Config: tls
- Env Var: RCLONE_FTP_TLS
- Type: bool
- Default: false

#### --ftp-explicit-tls

Use FTP over TLS (Explicit)

When using explicit FTP over TLS the client explicitly requests
security from the server in order to upgrade a plain text connection
to an encrypted one. Cannot be used in combination with implicit FTP.
- Config: explicit_tls
- Env Var: RCLONE_FTP_EXPLICIT_TLS
- Type: bool
- Default: false

### Advanced Options

Here are the advanced options specific to ftp (FTP Connection).

#### --ftp-concurrency

Maximum number of FTP simultaneous connections, 0 for unlimited

- Config: concurrency
- Env Var: RCLONE_FTP_CONCURRENCY
- Type: int
- Default: 0

#### --ftp-no-check-certificate

Do not verify the TLS certificate of the server

- Config: no_check_certificate
- Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE
- Type: bool
- Default: false

#### --ftp-disable-epsv

Disable using EPSV even if server advertises support

- Config: disable_epsv
- Env Var: RCLONE_FTP_DISABLE_EPSV
- Type: bool
- Default: false

#### --ftp-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_FTP_ENCODING
- Type: MultiEncoder
- Default: Slash,Del,Ctl,RightSpace,Dot

### Limitations ###

Note that FTP does have its own implementation of `--dump headers`,
`--dump bodies` and `--dump auth` for debugging which isn't the same
as the HTTP based backends - it has less fine grained control.

Note that `--timeout` isn't supported (but `--contimeout` is).

Note that `--bind` isn't supported.

FTP could support server side move but doesn't yet.

Note that the ftp backend does not support the `ftp_proxy` environment
variable yet.

Google Cloud Storage
-------------------------------------------------

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg
`remote:bucket/path/to/dir`.

The initial setup for google cloud storage involves getting a token
from Google Cloud Storage which you need to do in your browser.
`rclone config` walks you through it.

Here is an example of how to make a remote called `remote`. First run:

     rclone config

This will guide you through an interactive setup process:

```
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
[snip]
Storage> google cloud storage
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
   \ "authenticatedRead"
 2 / Object owner gets OWNER access, and project team owners get OWNER access.
   \ "bucketOwnerFullControl"
 3 / Object owner gets OWNER access, and project team owners get READER access.
   \ "bucketOwnerRead"
 4 / Object owner gets OWNER access [default if left blank].
   \ "private"
 5 / Object owner gets OWNER access, and project team members get access according to their roles.
   \ "projectPrivate"
 6 / Object owner gets OWNER access, and all Users get READER access.
   \ "publicRead"
object_acl> 4
Access Control List for new buckets.
Choose a number from below, or type in your own value
 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
   \ "authenticatedRead"
 2 / Project team owners get OWNER access [default if left blank].
   \ "private"
 3 / Project team members get access according to their roles.
   \ "projectPrivate"
 4 / Project team owners get OWNER access, and all Users get READER access.
   \ "publicRead"
 5 / Project team owners get OWNER access, and all Users get WRITER access.
   \ "publicReadWrite"
bucket_acl> 2
Location for the newly created buckets.
Choose a number from below, or type in your own value
 1 / Empty for default location (US).
   \ ""
 2 / Multi-regional location for Asia.
   \ "asia"
 3 / Multi-regional location for Europe.
   \ "eu"
 4 / Multi-regional location for United States.
   \ "us"
 5 / Taiwan.
   \ "asia-east1"
 6 / Tokyo.
   \ "asia-northeast1"
 7 / Singapore.
   \ "asia-southeast1"
 8 / Sydney.
   \ "australia-southeast1"
 9 / Belgium.
   \ "europe-west1"
10 / London.
   \ "europe-west2"
11 / Iowa.
   \ "us-central1"
12 / South Carolina.
   \ "us-east1"
13 / Northern Virginia.
   \ "us-east4"
14 / Oregon.
   \ "us-west1"
location> 12
The storage class to use when storing objects in Google Cloud Storage.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Multi-regional storage class
   \ "MULTI_REGIONAL"
 3 / Regional storage class
   \ "REGIONAL"
 4 / Nearline storage class
   \ "NEARLINE"
 5 / Coldline storage class
   \ "COLDLINE"
 6 / Durable reduced availability storage class
   \ "DURABLE_REDUCED_AVAILABILITY"
storage_class> 5
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = google cloud storage
client_id =
client_secret =
token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
project_number = 12345678
object_acl = private
bucket_acl = private
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if you use auto config mode. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on `http://127.0.0.1:53682/` and it may
require you to unblock it temporarily if you are running a host
firewall, or use manual mode.

This remote is called `remote` and can now be used like this

See all the buckets in your project

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync -i /home/local/directory remote:bucket

### Service Account support ###

You can set up rclone with Google Cloud Storage in an unattended mode,
i.e. not tied to a specific end-user Google account. This is useful
when you want to synchronise files onto machines that don't have
actively logged-in users, for example build machines.

To get credentials for Google Cloud Platform
[IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts),
please head to the
[Service Account](https://console.cloud.google.com/permissions/serviceaccounts)
section of the Google Developer Console.
Service Accounts behave just like normal `User` permissions in
[Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control),
so you can limit their access (e.g. make them read only). After
creating an account, a JSON file containing the Service Account's
credentials will be downloaded onto your machines. These credentials
are what rclone will use for authentication.

To use a Service Account instead of OAuth2 token flow, enter the path
to your Service Account credentials at the `service_account_file`
prompt and rclone won't use the browser based authentication flow. If
you'd rather stuff the contents of the credentials file into the
rclone config file, you can set `service_account_credentials` with the
actual contents of the file instead, or set the equivalent environment
variable.

### Anonymous Access ###

For downloads of objects that permit public access you can configure
rclone to use anonymous access by setting `anonymous` to `true`. With
unauthenticated access you can't write or create files but only read
or list those buckets and objects that have public read access.

### Application Default Credentials ###

If no other source of credentials is provided, rclone will fall back
to [Application Default Credentials](https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials).
This is useful both when you already have configured authentication
for your developer account, and in production when running on a Google
Compute host.

Note that if running in docker, you may need to run additional
commands on your google compute machine -
[see this page](https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper).

Note that when application default credentials are used, there is no
need to explicitly configure a project number.

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.

### Custom upload headers ###

You can set custom upload headers with the `--header-upload`
flag. Google Cloud Storage supports the headers as described in the
[working with metadata documentation](https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata)

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Meta-

Eg `--header-upload "Content-Type: text/potato"`

Note that the last of these is for setting custom metadata in the form
`--header-upload "x-goog-meta-key: value"`

### Modified time ###

Google Cloud Storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.

#### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| LF        | 0x0A  | ␊           |
| CR        | 0x0D  | ␍           |
| /         | 0x2F  | ／           |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can't be used in JSON strings.

### Standard Options

Here are the standard options specific to google cloud storage (Google
Cloud Storage (this is not Google Drive)).

#### --gcs-client-id

OAuth Client Id
Leave blank normally.

- Config: client_id
- Env Var: RCLONE_GCS_CLIENT_ID
- Type: string
- Default: ""

#### --gcs-client-secret

OAuth Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_GCS_CLIENT_SECRET
- Type: string
- Default: ""

#### --gcs-project-number

Project number.
Optional - needed only for list/create/delete buckets - see your developer console.

- Config: project_number
- Env Var: RCLONE_GCS_PROJECT_NUMBER
- Type: string
- Default: ""

#### --gcs-service-account-file

Service Account Credentials JSON file path
Leave blank normally.
Needed only if you want to use SA instead of interactive login.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

- Config: service_account_file
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
- Type: string
- Default: ""

#### --gcs-service-account-credentials

Service Account Credentials JSON blob
Leave blank normally.
Needed only if you want to use SA instead of interactive login.

- Config: service_account_credentials
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
- Type: string
- Default: ""

#### --gcs-anonymous

Access public buckets and objects without credentials
Set to 'true' if you just want to download files and don't configure credentials.

- Config: anonymous
- Env Var: RCLONE_GCS_ANONYMOUS
- Type: bool
- Default: false

#### --gcs-object-acl

Access Control List for new objects.

- Config: object_acl
- Env Var: RCLONE_GCS_OBJECT_ACL
- Type: string
- Default: ""
- Examples:
    - "authenticatedRead"
        - Object owner gets OWNER access, and all Authenticated Users get READER access.
    - "bucketOwnerFullControl"
        - Object owner gets OWNER access, and project team owners get OWNER access.
    - "bucketOwnerRead"
        - Object owner gets OWNER access, and project team owners get READER access.
    - "private"
        - Object owner gets OWNER access [default if left blank].
    - "projectPrivate"
        - Object owner gets OWNER access, and project team members get access according to their roles.
    - "publicRead"
        - Object owner gets OWNER access, and all Users get READER access.

#### --gcs-bucket-acl

Access Control List for new buckets.

- Config: bucket_acl
- Env Var: RCLONE_GCS_BUCKET_ACL
- Type: string
- Default: ""
- Examples:
    - "authenticatedRead"
        - Project team owners get OWNER access, and all Authenticated Users get READER access.
    - "private"
        - Project team owners get OWNER access [default if left blank].
    - "projectPrivate"
        - Project team members get access according to their roles.
    - "publicRead"
        - Project team owners get OWNER access, and all Users get READER access.
    - "publicReadWrite"
        - Project team owners get OWNER access, and all Users get WRITER access.

#### --gcs-bucket-policy-only

Access checks should use bucket-level IAM policies.

If you want to upload objects to a bucket with Bucket Policy Only set
then you will need to set this.

When it is set, rclone:

- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set

Docs: https://cloud.google.com/storage/docs/bucket-policy-only

- Config: bucket_policy_only
- Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY
- Type: bool
- Default: false

#### --gcs-location

Location for the newly created buckets.

- Config: location
- Env Var: RCLONE_GCS_LOCATION
- Type: string
- Default: ""
- Examples:
    - ""
        - Empty for default location (US).
    - "asia"
        - Multi-regional location for Asia.
    - "eu"
        - Multi-regional location for Europe.
    - "us"
        - Multi-regional location for United States.
    - "asia-east1"
        - Taiwan.
    - "asia-east2"
        - Hong Kong.
    - "asia-northeast1"
        - Tokyo.
    - "asia-south1"
        - Mumbai.
    - "asia-southeast1"
        - Singapore.
    - "australia-southeast1"
        - Sydney.
    - "europe-north1"
        - Finland.
- "europe-west1" - Belgium. - "europe-west2" - London. - "europe-west3" - Frankfurt. - "europe-west4" - Netherlands. - "us-central1" - Iowa. - "us-east1" - South Carolina. - "us-east4" - Northern Virginia. - "us-west1" - Oregon. - "us-west2" - California. #### --gcs-storage-class The storage class to use when storing objects in Google Cloud Storage. - Config: storage_class - Env Var: RCLONE_GCS_STORAGE_CLASS - Type: string - Default: "" - Examples: - "" - Default - "MULTI_REGIONAL" - Multi-regional storage class - "REGIONAL" - Regional storage class - "NEARLINE" - Nearline storage class - "COLDLINE" - Coldline storage class - "ARCHIVE" - Archive storage class - "DURABLE_REDUCED_AVAILABILITY" - Durable reduced availability storage class ### Advanced Options Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). #### --gcs-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_GCS_TOKEN - Type: string - Default: "" #### --gcs-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_GCS_AUTH_URL - Type: string - Default: "" #### --gcs-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_GCS_TOKEN_URL - Type: string - Default: "" #### --gcs-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_GCS_ENCODING - Type: MultiEncoder - Default: Slash,CrLf,InvalidUtf8,Dot Google Drive ----------------------------------------- Paths are specified as `drive:path` Drive paths may be as deep as required, eg `drive:directory/subdirectory`. The initial setup for drive involves getting a token from Google drive which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Google Drive \ "drive" [snip] Storage> drive Google Application Client Id - leave blank normally. client_id> Google Application Client Secret - leave blank normally. client_secret> Scope that rclone should use when requesting access from drive. Choose a number from below, or type in your own value 1 / Full access all files, excluding Application Data Folder. \ "drive" 2 / Read-only access to file metadata and file contents. \ "drive.readonly" / Access to files created by rclone only. 3 | These are visible in the drive website. | File authorization is revoked when the user deauthorizes the app. \ "drive.file" / Allows read and write access to the Application Data folder. 4 | This is not visible in the drive website. \ "drive.appfolder" / Allows read-only access to file metadata but 5 | does not allow any access to read or download file content. \ "drive.metadata.readonly" scope> 1 ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs). root_folder_id> Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. service_account_file> Remote config Use auto config? 
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Configure this as a team drive?
y) Yes
n) No
y/n> n
--------------------
[remote]
client_id =
client_secret =
scope = drive
root_folder_id =
service_account_file =
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if you use auto config mode. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on `http://127.0.0.1:53682/` and it
may require you to unblock it temporarily if you are running a host
firewall, or use manual mode.

You can then use it like this,

List directories in top level of your drive

    rclone lsd remote:

List all the files in your drive

    rclone ls remote:

To copy a local directory to a drive directory called backup

    rclone copy /home/source remote:backup

### Scopes ###

Rclone allows you to select which scope you would like for rclone to
use. This changes what type of token is granted to rclone. [The
scopes are defined here](https://developers.google.com/drive/v3/web/about-auth).

The scopes are:

#### drive ####

This is the default scope and allows full access to all files, except
for the Application Data Folder (see below).

Choose this one if you aren't sure.

#### drive.readonly ####

This allows read only access to all files. Files may be listed and
downloaded but not uploaded, renamed or deleted.

#### drive.file ####

With this scope rclone can read/view/modify only those files and
folders it creates.

So if you uploaded files to drive via the web interface (or any other
means) they will not be visible to rclone.

This can be useful if you are using rclone to back up data and you want
to be sure confidential data on your drive is not visible to rclone.

Files created with this scope are visible in the web interface.

#### drive.appfolder ####

This gives rclone its own private area to store files. Rclone will
not be able to see any other files on your drive and you won't be able
to see rclone's files from the web interface either.

#### drive.metadata.readonly ####

This allows read only access to file names only. It does not allow
rclone to download or upload data, or rename or delete files or
directories.

### Root folder ID ###

You can set the `root_folder_id` for rclone. This is the directory
(identified by its `Folder ID`) that rclone considers to be the root
of your drive.

Normally you will leave this blank and rclone will determine the
correct root to use itself.

However you can set this to restrict rclone to a specific folder
hierarchy or to access data within the "Computers" tab on the drive
web interface (where files from Google's Backup and Sync desktop
program go).

In order to do this you will have to find the `Folder ID` of the
directory you wish rclone to display. This will be the last segment
of the URL when you open the relevant folder in the drive web
interface.
So if the folder you want rclone to use has a URL which looks like
`https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh`
in the browser, then you use `1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh` as
the `root_folder_id` in the config.

**NB** folders under the "Computers" tab seem to be read only (drive
gives a 500 error) when using rclone.

There doesn't appear to be an API to discover the folder IDs of the
"Computers" tab - please contact us if you know otherwise!

Note also that rclone can't access any data under the "Backups" tab on
the google drive web interface yet.

### Service Account support ###

You can set up rclone with Google Drive in an unattended mode,
i.e. not tied to a specific end-user Google account. This is useful
when you want to synchronise files onto machines that don't have
actively logged-in users, for example build machines.

To use a Service Account instead of OAuth2 token flow, enter the path
to your Service Account credentials at the `service_account_file`
prompt during `rclone config` and rclone won't use the browser based
authentication flow. If you'd rather stuff the contents of the
credentials file into the rclone config file, you can set
`service_account_credentials` with the actual contents of the file
instead, or set the equivalent environment variable.

#### Use case - Google Apps/G-suite account and individual Drive ####

Let's say that you are the administrator of a Google Apps (old) or
G-suite account. The goal is to store data on an individual's Drive
account, who IS a member of the domain. We'll call the domain
**example.com**, and the user **foo@example.com**.

There are a few steps we need to go through to accomplish this:

##### 1. Create a service account for example.com #####

- To create a service account and obtain its credentials, go to the
  [Google Developer Console](https://console.developers.google.com).
- You must have a project - create one if you don't.
- Then go to "IAM & admin" -> "Service Accounts".
- Use the "Create Credentials" button. Fill in "Service account name"
  with something that identifies your client. "Role" can be empty.
- Tick "Furnish a new private key" - select "Key type JSON".
- Tick "Enable G Suite Domain-wide Delegation". This option makes
  "impersonation" possible, as documented here:
  [Delegating domain-wide authority to the service account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority)
- These credentials are what rclone will use for authentication.
  If you ever need to remove access, press the "Delete service
  account key" button.

##### 2. Allowing API access to example.com Google Drive #####

- Go to example.com's admin console
- Go into "Security" (or use the search bar)
- Select "Show more" and then "Advanced settings"
- Select "Manage API client access" in the "Authentication" section
- In the "Client Name" field enter the service account's "Client ID" -
  this can be found in the Developer Console under "IAM & Admin" ->
  "Service Accounts", then "View Client ID" for the newly created
  service account. It is a ~21 character numerical string.
- In the next field, "One or More API Scopes", enter
  `https://www.googleapis.com/auth/drive`
  to grant access to Google Drive specifically.
##### 3. Configure rclone, assuming a new install #####

```
rclone config

n/s/q> n         # New
name>gdrive      # Gdrive is an example name
Storage>         # Select the number shown for Google Drive
client_id>       # Can be left blank
client_secret>   # Can be left blank
scope>           # Select your scope, 1 for example
root_folder_id>  # Can be left blank
service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
y/n>             # Auto config, y
```

##### 4. Verify that it's working #####

- `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup`
- The arguments do:
    - `-v` - verbose logging
    - `--drive-impersonate foo@example.com` - this is what does the
      magic, pretending to be user foo.
    - `lsf` - list files in a parsing friendly way
    - `gdrive:backup` - use the remote called gdrive, work in the
      folder named backup.

Note: in case you configured a specific root folder on gdrive and
rclone is unable to access the contents of that folder when using
`--drive-impersonate`, do this instead:

- in the gdrive web interface, share your root folder with the
  user/email of the new Service Account you created/selected at step #1
- use rclone without specifying the `--drive-impersonate` option, like
  this: `rclone -v lsf gdrive:backup`

### Team drives ###

If you want to configure the remote to point to a Google Team Drive
then answer `y` to the question `Configure this as a team drive?`.

This will fetch the list of Team Drives from google and allow you to
configure which one you want to use. You can also type in a team
drive ID if you prefer.

For example:

```
Configure this as a team drive?
y) Yes
n) No
y/n> y
Fetching team drive list...
Choose a number from below, or type in your own value
 1 / Rclone Test
   \ "xxxxxxxxxxxxxxxxxxxx"
 2 / Rclone Test 2
   \ "yyyyyyyyyyyyyyyyyyyy"
 3 / Rclone Test 3
   \ "zzzzzzzzzzzzzzzzzzzz"
Enter a Team Drive ID> 1
--------------------
[remote]
client_id =
client_secret =
token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
team_drive = xxxxxxxxxxxxxxxxxxxx
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.

It does this by combining multiple `list` calls into a single API request.

This works by combining many `'%s' in parents` filters into one expression.
To list the contents of directories a, b and c, the following requests will be
sent by the regular `List` function:
```
trashed=false and 'a' in parents
trashed=false and 'b' in parents
trashed=false and 'c' in parents
```
These can now be combined into a single request:
```
trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)
```

The implementation of `ListR` will put up to 50 `parents` filters into one request.
It will use the `--checkers` value to specify the number of requests to run in parallel.

In tests, these batch requests were up to 20x faster than the regular method.
Running the following command against different sized folders gives:
```
rclone lsjson -vv -R --checkers=6 gdrive:folder
```

small folder (220 directories, 700 files):

- without `--fast-list`: 38s
- with `--fast-list`: 10s

large folder (10600 directories, 39000 files):

- without `--fast-list`: 22:05 min
- with `--fast-list`: 58s

### Modified time ###

Google drive stores modification times accurate to 1 ms.

#### Restricted filename characters

Only Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can't be used in JSON strings.

In contrast to other backends, `/` can also be used in names and `.`
or `..` are valid names.

### Revisions ###

Google drive stores revisions of files. When you upload a change to
an existing file to google drive using rclone it will create a new
revision of that file.

Revisions follow the standard google policy which at time of writing
was

* They are deleted after 30 days or 100 revisions (whatever comes first).
* They do not count towards a user storage quota.

### Deleting files ###

By default rclone will send all files to the trash when deleting
files. If deleting them permanently is required then use the
`--drive-use-trash=false` flag, or set the equivalent environment
variable.

### Shortcuts ###

In March 2020 Google introduced a new feature in Google Drive called
[drive shortcuts](https://support.google.com/drive/answer/9700156)
([API](https://developers.google.com/drive/api/v3/shortcuts)). These
will (by September 2020) [replace the ability for files or folders to
be in multiple folders at once](https://cloud.google.com/blog/products/g-suite/simplifying-google-drives-folder-structure-and-sharing-models).

Shortcuts are files that link to other files on Google Drive somewhat
like a symlink in unix, except they point to the underlying file data
(eg the inode in unix terms) so they don't break if the source is
renamed or moved about.

By default rclone treats these as follows.

For shortcuts pointing to files:

- When listing a file shortcut appears as the destination file.
- When downloading the contents of the destination file is downloaded.
- When updating a shortcut file with a non shortcut file, the shortcut is removed then a new file is uploaded in place of the shortcut.
- When server side moving (renaming) the shortcut is renamed, not the destination file.
- When server side copying the shortcut is copied, not the contents of the shortcut.
- When deleting the shortcut is deleted not the linked file.
- When setting the modification time, the modification time of the linked file will be set.

For shortcuts pointing to folders:

- When listing the shortcut appears as a folder and that folder will contain the contents of the linked folder (including any sub folders)
- When downloading the contents of the linked folder and sub contents are downloaded
- When uploading to a shortcut folder the file will be placed in the linked folder
- When server side moving (renaming) the shortcut is renamed, not the destination folder
- When server side copying the contents of the linked folder is copied, not the shortcut.
- When deleting with `rclone rmdir` or `rclone purge` the shortcut is deleted not the linked folder.
- **NB** When deleting with `rclone remove` or `rclone mount` the contents of the linked folder will be deleted.

The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be used to create shortcuts, as shown below.
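For example, to create a shortcut to an existing file or directory you could run something like this (the paths here are placeholders, relative to the root of `drive:` - see the backend commands section below for the full details):

    rclone backend shortcut drive: path/to/source path/to/shortcut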
Shortcuts can be completely ignored with the `--drive-skip-shortcuts` flag
or the corresponding `skip_shortcuts` configuration setting.

### Emptying trash ###

If you wish to empty your trash you can use the `rclone cleanup remote:`
command which will permanently delete all your trashed files. This command
does not take any path arguments.

Note that Google Drive takes some time (minutes to days) to empty the
trash even though the command returns within a few seconds. No output
is echoed, so there will be no confirmation even using -v or -vv.

### Quota information ###

To view your current quota you can use the `rclone about remote:`
command which will display your usage limit (quota), the usage in Google
Drive, the size of all files in the Trash and the space used by other
Google services such as Gmail. This command does not take any path
arguments.

#### Import/Export of google documents ####

Google documents can be exported from and uploaded to Google Drive.

When rclone downloads a Google doc it chooses a format to download
depending upon the `--drive-export-formats` setting.
By default the export formats are `docx,xlsx,pptx,svg` which are a
sensible default for an editable document.

When choosing a format, rclone runs down the list provided in order
and chooses the first file format the doc can be exported as from the
list. If the file can't be exported to a format on the formats list,
then rclone will choose a format from the default list.

If you prefer an archive copy then you might use `--drive-export-formats
pdf`, or if you prefer openoffice/libreoffice formats you might use
`--drive-export-formats ods,odt,odp`.

Note that rclone adds the extension to the google doc, so if it is
called `My Spreadsheet` on google docs, it will be exported as `My
Spreadsheet.xlsx` or `My Spreadsheet.pdf` etc.

When importing files into Google Drive, rclone will convert all files
with an extension in `--drive-import-formats` to their associated
document type.
rclone will not convert any files by default, since the conversion
is a lossy process.

The conversion must result in a file with the same extension when
the `--drive-export-formats` rules are applied to the uploaded document.

Here are some examples for allowed and prohibited conversions.

| export-formats | import-formats | Upload Ext | Document Ext | Allowed |
| -------------- | -------------- | ---------- | ------------ | ------- |
| odt | odt | odt | odt | Yes |
| odt | docx,odt | odt | odt | Yes |
|  | docx | docx | docx | Yes |
|  | odt | odt | docx | No |
| odt,docx | docx,odt | docx | odt | No |
| docx,odt | docx,odt | docx | docx | Yes |
| docx,odt | docx,odt | odt | docx | No |

This limitation can be disabled by specifying `--drive-allow-import-name-change`.
When using this flag, rclone can convert multiple file types resulting
in the same document type at once, eg with `--drive-import-formats
docx,odt,txt`, all files having these extensions would result in a
document represented as a docx file. This brings the additional risk
of overwriting a document, if multiple files have the same stem. Many
rclone operations will not handle this name change in any way. They
assume an equal name when copying files and might copy the file again
or delete them when the name changes.
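As a short worked example, to keep a local archive copy of your Google docs as PDFs you could combine the export setting with a normal copy (the remote name and paths here are illustrative):

    rclone copy --drive-export-formats pdf remote:docs /path/to/local/docs

Each Google doc in `remote:docs` would then arrive locally with a `.pdf` extension added to its name.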
Here are the possible export extensions with their corresponding mime
types. Most of these can also be used for importing, but there are
more that are not listed here. Some of these additional ones might
only be available when the operating system provides the correct MIME
type entries.

This list can be changed by Google Drive at any time and might not
represent the currently available conversions.

| Extension | Mime Type | Description |
| --------- |-----------| ------------|
| csv  | text/csv | Standard CSV format for Spreadsheets |
| docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document |
| epub | application/epub+zip | E-book format |
| html | text/html | An HTML Document |
| jpg  | image/jpeg | A JPEG Image File |
| json | application/vnd.google-apps.script+json | JSON Text Format |
| odp  | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
| ods  | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| ods  | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| odt  | application/vnd.oasis.opendocument.text | Openoffice Document |
| pdf  | application/pdf | Adobe PDF Format |
| png  | image/png | PNG Image Format|
| pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint |
| rtf  | application/rtf | Rich Text Format |
| svg  | image/svg+xml | Scalable Vector Graphics Format |
| tsv  | text/tab-separated-values | Standard TSV format for spreadsheets |
| txt  | text/plain | Plain Text |
| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
| zip  | application/zip | A ZIP file of HTML, Images and CSS |

Google documents can also be exported as link files. These files will
open a browser window for the Google Docs website of that document
when opened. The link file extension has to be specified as a
`--drive-export-formats` parameter. They will match all available
Google Documents.

| Extension | Description | OS Support |
| --------- | ----------- | ---------- |
| desktop | freedesktop.org specified desktop entry | Linux |
| link.html | An HTML Document with a redirect | All |
| url | INI style link file | macOS, Windows |
| webloc | macOS specific XML format | macOS |

### Standard Options

Here are the standard options specific to drive (Google Drive).

#### --drive-client-id

Google Application Client Id
Setting your own is recommended.
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
If you leave this blank, it will use an internal key which is low performance.

- Config: client_id
- Env Var: RCLONE_DRIVE_CLIENT_ID
- Type: string
- Default: ""

#### --drive-client-secret

OAuth Client Secret
Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_DRIVE_CLIENT_SECRET
- Type: string
- Default: ""

#### --drive-scope

Scope that rclone should use when requesting access from drive.

- Config: scope
- Env Var: RCLONE_DRIVE_SCOPE
- Type: string
- Default: ""
- Examples:
    - "drive"
        - Full access all files, excluding Application Data Folder.
    - "drive.readonly"
        - Read-only access to file metadata and file contents.
    - "drive.file"
        - Access to files created by rclone only.
        - These are visible in the drive website.
        - File authorization is revoked when the user deauthorizes the app.
    - "drive.appfolder"
        - Allows read and write access to the Application Data folder.
        - This is not visible in the drive website.
    - "drive.metadata.readonly"
        - Allows read-only access to file metadata but
        - does not allow any access to read or download file content.

#### --drive-root-folder-id

ID of the root folder
Leave blank normally.

Fill in to access "Computers" folders (see docs), or for rclone to use
a non root folder as its starting point.
- Config: root_folder_id
- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
- Type: string
- Default: ""

#### --drive-service-account-file

Service Account Credentials JSON file path
Leave blank normally.
Needed only if you want to use SA instead of interactive login.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

- Config: service_account_file
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
- Type: string
- Default: ""

#### --drive-alternate-export

Deprecated: no longer needed

- Config: alternate_export
- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
- Type: bool
- Default: false

### Advanced Options

Here are the advanced options specific to drive (Google Drive).

#### --drive-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_DRIVE_TOKEN
- Type: string
- Default: ""

#### --drive-auth-url

Auth server URL.
Leave blank to use the provider defaults.

- Config: auth_url
- Env Var: RCLONE_DRIVE_AUTH_URL
- Type: string
- Default: ""

#### --drive-token-url

Token server url.
Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_DRIVE_TOKEN_URL
- Type: string
- Default: ""

#### --drive-service-account-credentials

Service Account Credentials JSON blob
Leave blank normally.
Needed only if you want to use SA instead of interactive login.

- Config: service_account_credentials
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
- Type: string
- Default: ""

#### --drive-team-drive

ID of the Team Drive

- Config: team_drive
- Env Var: RCLONE_DRIVE_TEAM_DRIVE
- Type: string
- Default: ""

#### --drive-auth-owner-only

Only consider files owned by the authenticated user.

- Config: auth_owner_only
- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
- Type: bool
- Default: false

#### --drive-use-trash

Send files to the trash instead of deleting permanently.
Defaults to true, namely sending files to the trash.
Use `--drive-use-trash=false` to delete files permanently instead.

- Config: use_trash
- Env Var: RCLONE_DRIVE_USE_TRASH
- Type: bool
- Default: true

#### --drive-skip-gdocs

Skip google documents in all listings.
If given, gdocs practically become invisible to rclone.

- Config: skip_gdocs
- Env Var: RCLONE_DRIVE_SKIP_GDOCS
- Type: bool
- Default: false

#### --drive-skip-checksum-gphotos

Skip MD5 checksum on Google photos and videos only.

Use this if you get checksum errors when transferring Google photos or
videos.

Setting this flag will cause Google photos and videos to return a
blank MD5 checksum.

Google photos are identified by being in the "photos" space.

Corrupted checksums are caused by Google modifying the image/video but
not updating the checksum.

- Config: skip_checksum_gphotos
- Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS
- Type: bool
- Default: false

#### --drive-shared-with-me

Only show files that are shared with me.

Instructs rclone to operate on your "Shared with me" folder (where
Google Drive lets you access the files and folders others have shared
with you).

This works both with the "list" (lsd, lsl, etc) and the "copy"
commands (copy, sync, etc), and with all other commands too.

- Config: shared_with_me
- Env Var: RCLONE_DRIVE_SHARED_WITH_ME
- Type: bool
- Default: false

#### --drive-trashed-only

Only show files that are in the trash.
This will show trashed files in their original directory structure.

- Config: trashed_only
- Env Var: RCLONE_DRIVE_TRASHED_ONLY
- Type: bool
- Default: false

#### --drive-starred-only

Only show files that are starred.
- Config: starred_only
- Env Var: RCLONE_DRIVE_STARRED_ONLY
- Type: bool
- Default: false

#### --drive-formats

Deprecated: see export_formats

- Config: formats
- Env Var: RCLONE_DRIVE_FORMATS
- Type: string
- Default: ""

#### --drive-export-formats

Comma separated list of preferred formats for downloading Google docs.

- Config: export_formats
- Env Var: RCLONE_DRIVE_EXPORT_FORMATS
- Type: string
- Default: "docx,xlsx,pptx,svg"

#### --drive-import-formats

Comma separated list of preferred formats for uploading Google docs.

- Config: import_formats
- Env Var: RCLONE_DRIVE_IMPORT_FORMATS
- Type: string
- Default: ""

#### --drive-allow-import-name-change

Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.

- Config: allow_import_name_change
- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
- Type: bool
- Default: false

#### --drive-use-created-date

Use file created date instead of modified date.

Useful when downloading data and you want the creation date used in
place of the last modified date.

**WARNING**: This flag may have some unexpected consequences.

When uploading to your drive all files will be overwritten unless they
haven't been modified since their creation. And the inverse will occur
while downloading. This side effect can be avoided by using the
"--checksum" flag.

This feature was implemented to retain photos capture date as recorded
by google photos. You will first need to check the "Create a Google
Photos folder" option in your google drive settings. You can then copy
or move the photos locally and use the date the image was taken
(created) set as the modification date.

- Config: use_created_date
- Env Var: RCLONE_DRIVE_USE_CREATED_DATE
- Type: bool
- Default: false

#### --drive-use-shared-date

Use date file was shared instead of modified date.

Note that, as with "--drive-use-created-date", this flag may have
unexpected consequences when uploading/downloading files.

If both this flag and "--drive-use-created-date" are set, the created
date is used.

- Config: use_shared_date
- Env Var: RCLONE_DRIVE_USE_SHARED_DATE
- Type: bool
- Default: false

#### --drive-list-chunk

Size of listing chunk 100-1000. 0 to disable.

- Config: list_chunk
- Env Var: RCLONE_DRIVE_LIST_CHUNK
- Type: int
- Default: 1000

#### --drive-impersonate

Impersonate this user when using a service account.

- Config: impersonate
- Env Var: RCLONE_DRIVE_IMPERSONATE
- Type: string
- Default: ""

#### --drive-upload-cutoff

Cutoff for switching to chunked upload

- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 8M

#### --drive-chunk-size

Upload chunk size. Must be a power of 2 >= 256k.

Making this larger will improve performance, but note that each chunk
is buffered in memory, one per transfer.

Reducing this will reduce memory usage but decrease performance.

- Config: chunk_size
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 8M

#### --drive-acknowledge-abuse

Set to allow files which return cannotDownloadAbusiveFile to be downloaded.

If downloading a file returns the error "This file has been identified
as malware or spam and cannot be downloaded" with the error code
"cannotDownloadAbusiveFile" then supply this flag to rclone to
indicate you acknowledge the risks of downloading the file and rclone
will download it anyway.
- Config: acknowledge_abuse
- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
- Type: bool
- Default: false

#### --drive-keep-revision-forever

Keep new head revision of each file forever.

- Config: keep_revision_forever
- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
- Type: bool
- Default: false

#### --drive-size-as-quota

Show sizes as storage quota usage, not actual size.

Show the size of a file as the storage quota used. This is the
current version plus any older versions that have been set to keep
forever.

**WARNING**: This flag may have some unexpected consequences.

It is not recommended to set this flag in your config - the
recommended usage is using the flag form --drive-size-as-quota when
doing rclone ls/lsl/lsf/lsjson/etc only.

If you do use this flag for syncing (not recommended) then you will
need to use --ignore-size also.

- Config: size_as_quota
- Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA
- Type: bool
- Default: false

#### --drive-v2-download-min-size

If Objects are greater than this size, use the drive v2 API to download.

- Config: v2_download_min_size
- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
- Type: SizeSuffix
- Default: off

#### --drive-pacer-min-sleep

Minimum time to sleep between API calls.

- Config: pacer_min_sleep
- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP
- Type: Duration
- Default: 100ms

#### --drive-pacer-burst

Number of API calls to allow without sleeping.

- Config: pacer_burst
- Env Var: RCLONE_DRIVE_PACER_BURST
- Type: int
- Default: 100

#### --drive-server-side-across-configs

Allow server side operations (eg copy) to work across different drive configs.

This can be useful if you wish to do a server side copy between two
different Google drives. Note that this isn't enabled by default
because it isn't easy to tell if it will work between any two
configurations.

- Config: server_side_across_configs
- Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS
- Type: bool
- Default: false

#### --drive-disable-http2

Disable drive using http2

There is currently an unsolved issue with the google drive backend and
HTTP/2. HTTP/2 is therefore disabled by default for the drive backend
but can be re-enabled here. When the issue is solved this flag will
be removed.

See: https://github.com/rclone/rclone/issues/3631

- Config: disable_http2
- Env Var: RCLONE_DRIVE_DISABLE_HTTP2
- Type: bool
- Default: true

#### --drive-stop-on-upload-limit

Make upload limit errors be fatal

At the time of writing it is only possible to upload 750GB of data to
Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop
the in-progress sync.

Note that this detection is relying on error message strings which
Google don't document so it may break in the future.

See: https://github.com/rclone/rclone/issues/3857

- Config: stop_on_upload_limit
- Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT
- Type: bool
- Default: false

#### --drive-skip-shortcuts

If set skip shortcut files

Normally rclone dereferences shortcut files making them appear as if
they are the original file (see [the shortcuts section](#shortcuts)).
If this flag is set then rclone will ignore shortcut files completely.

- Config: skip_shortcuts
- Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS
- Type: bool
- Default: false

#### --drive-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
- Type: MultiEncoder
- Default: InvalidUtf8

### Backend commands

Here are the commands specific to the drive backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more
info on how to pass options and arguments.

These can be run on a running backend using the rc command
[backend/command](https://rclone.org/rc/#backend/command).

#### get

Get command for fetching the drive config parameters

    rclone backend get remote: [options] [<arguments>+]

This is a get command which will be used to fetch the various drive config parameters

Usage Examples:

    rclone backend get drive: [-o service_account_file] [-o chunk_size]
    rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]

Options:

- "chunk_size": show the current upload chunk size
- "service_account_file": show the current service account file

#### set

Set command for updating the drive config parameters

    rclone backend set remote: [options] [<arguments>+]

This is a set command which will be used to update the various drive config parameters

Usage Examples:

    rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
    rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]

Options:

- "chunk_size": update the current upload chunk size
- "service_account_file": update the current service account file

#### shortcut

Create shortcuts from files or directories

    rclone backend shortcut remote: [options] [<arguments>+]

This command creates shortcuts from files or directories.

Usage:

    rclone backend shortcut drive: source_item destination_shortcut
    rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut

In the first example this creates a shortcut from the "source_item"
which can be a file or a directory to the "destination_shortcut". The
"source_item" and the "destination_shortcut" should be relative paths
from "drive:"

In the second example this creates a shortcut from the "source_item"
relative to "drive:" to the "destination_shortcut" relative to
"drive2:". This may fail with a permission error if the user
authenticated with "drive2:" can't read files from "drive:".

Options:

- "target": optional target remote for the shortcut destination

#### drives

List the shared drives available to this account

    rclone backend drives remote: [options] [<arguments>+]

This command lists the shared drives (teamdrives) available to this
account.

Usage:

    rclone backend drives drive:

This will return a JSON list of objects like this

    [
        {
            "id": "0ABCDEF-01234567890",
            "kind": "drive#teamDrive",
            "name": "My Drive"
        },
        {
            "id": "0ABCDEFabcdefghijkl",
            "kind": "drive#teamDrive",
            "name": "Test Drive"
        }
    ]

#### untrash

Untrash files and directories

    rclone backend untrash remote: [options] [<arguments>+]

This command untrashes all the files and directories in the directory
passed in recursively.

Usage:

This takes an optional directory to untrash, which makes this easier
to use via the API.

    rclone backend untrash drive:directory
    rclone backend -i untrash drive:directory subdir

Use the -i flag to see what would be restored before restoring it.

Result:

    {
        "Untrashed": 17,
        "Errors": 0
    }

### Limitations ###

Drive has quite a lot of rate limiting. This causes rclone to be
limited to transferring about 2 files per second only. Individual
files may be transferred much faster at 100s of MBytes/s but lots of
small files can take a long time.

Server side copies are also subject to a separate rate limit. If you
see User rate limit exceeded errors, wait at least 24 hours and retry.
You can disable server side copies with `--disable copy` to download
and upload the files if you prefer.
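A minimal sketch of that last point, forcing a download/upload copy instead of a server side copy (the remote names `gdrive:` and `gdrive2:` and the paths are placeholders):

    rclone copy --disable copy gdrive:source-dir gdrive2:dest-dir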
#### Limitations of Google Docs ####

Google docs will appear as size -1 in `rclone ls` and as size 0 in
anything which uses the VFS layer, eg `rclone mount`, `rclone serve`.

This is because rclone can't find out the size of the Google docs
without downloading them.

Google docs will transfer correctly with `rclone sync`, `rclone copy`
etc as rclone knows to ignore the size when doing the transfer.

However an unfortunate consequence of this is that you may not be able
to download Google docs using `rclone mount`. If it doesn't work you
will get a 0 sized file. If you try again the doc may gain its
correct size and be downloadable. Whether it will work or not depends
on the application accessing the mount and the OS you are running -
experiment to find out if it does work for you!

### Duplicated files ###

Sometimes, for no reason I've been able to track down, drive will
duplicate a file that rclone uploads. Drive unlike all the other
remotes can have duplicated files.

Duplicated files cause problems with the syncing and you will see
messages in the log about duplicates.

Use `rclone dedupe` to fix duplicated files.

Note that this isn't just a problem with rclone, even Google Photos on
Android duplicates files on drive sometimes.

### Rclone appears to be re-copying files it shouldn't ###

The most likely cause of this is the duplicated file issue above - run
`rclone dedupe` and check your logs for duplicate object or directory
messages.

This can also be caused by a delay/caching on google drive's end when
comparing directory listings. Specifically with team drives used in
combination with --fast-list. Files that were uploaded recently may
not appear on the directory list sent to rclone when using --fast-list.

Waiting a moderate period of time between attempts (estimated to be
approximately 1 hour) and/or not using --fast-list both seem to be
effective in preventing the problem.

### Making your own client_id ###

When you use rclone with Google drive in its default configuration you
are using rclone's client_id. This is shared between all the rclone
users. There is a global rate limit, set by Google, on the number of
queries per second that each client_id can do. rclone already has a
high quota and I will continue to make sure it is high enough by
contacting Google.

It is strongly recommended to use your own client ID as the default
rclone ID is heavily used. If you have multiple services running, it
is recommended to use an API key for each service. The default Google
quota is 10 transactions per second so it is recommended to stay under
that number as if you use more than that, it will cause rclone to rate
limit and make things slower.

Here is how to create your own Google Drive client ID for rclone:

1. Log into the [Google API Console](https://console.developers.google.com/)
with your Google account. It doesn't matter what Google account you
use. (It need not be the same account as the Google Drive you want to
access)

2. Select a project or create a new project.

3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the
"Google Drive API".
Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials" 5. If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK) then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen. (PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far). 6. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID". 7. Choose an application type of "Desktop app" if you using a Google account or "Other" if you using a GSuite account and click "Create". (the default name is fine) 8. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote. Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal). (Thanks to @balazer on github for these instructions.) Sometimes, creation of an OAuth consent in Google API Console fails due to an error message “The request failed because changes to one of the field of the resource is not supported”. As a convenient workaround, the necessary Google Drive API key can be created on the [Python Quickstart](https://developers.google.com/drive/api/v3/quickstart/python) page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console. Google Photos ------------------------------------------------- The rclone backend for [Google Photos](https://www.google.com/photos/about/) is a specialized backend for transferring photos and videos to and from Google Photos. **NB** The Google Photos API which rclone uses has quite a few limitations, so please read the [limitations section](#limitations) carefully to make sure it is suitable for your use. ## Configuring Google Photos The initial setup for google cloud storage involves getting a token from Google Photos which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Google Photos \ "google photos" [snip] Storage> google photos ** See help for google photos backend at: https://rclone.org/googlephotos/ ** Google Application Client Id Leave blank normally. Enter a string value. Press Enter for the default (""). client_id> Google Application Client Secret Leave blank normally. Enter a string value. Press Enter for the default (""). 
client_secret>
Set to make the Google Photos backend read only.

If you choose read only then rclone will only request read only access
to your photos, otherwise rclone will request full access.
Enter a boolean value (true or false). Press Enter for the default ("false").
read_only>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code

*** IMPORTANT: All media items uploaded to Google Photos with rclone
*** are stored in full resolution at original quality.  These uploads
*** will count towards storage in your Google Account.

--------------------
[remote]
type = google photos
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if you use auto config mode. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on `http://127.0.0.1:53682/` and this
may require you to unblock it temporarily if you are running a host
firewall, or use manual mode.

This remote is called `remote` and can now be used like this

See all the albums in your photos

    rclone lsd remote:album

Make a new album

    rclone mkdir remote:album/newAlbum

List the contents of an album

    rclone ls remote:album/newAlbum

Sync `/home/local/images` to the Google Photos, removing any excess
files in the album.

    rclone sync -i /home/local/images remote:album/newAlbum

## Layout

As Google Photos is not a general purpose cloud storage system the
backend is laid out to help you navigate it.

The directories under `media` show different ways of categorizing the
media. Each file will appear multiple times. So if you want to make
a backup of your google photos you might choose to backup
`remote:media/by-month`. (**NB** `remote:media/by-day` is rather slow
at the moment so avoid for syncing.)

Note that all your photos and videos will appear somewhere under
`media`, but they may not appear under `album` unless you've put them
into albums.

```
/
- upload
    - file1.jpg
    - file2.jpg
    - ...
- media
    - all
        - file1.jpg
        - file2.jpg
        - ...
    - by-year
        - 2000
            - file1.jpg
            - ...
        - 2001
            - file2.jpg
            - ...
        - ...
    - by-month
        - 2000
            - 2000-01
                - file1.jpg
                - ...
            - 2000-02
                - file2.jpg
                - ...
        - ...
    - by-day
        - 2000
            - 2000-01-01
                - file1.jpg
                - ...
            - 2000-01-02
                - file2.jpg
                - ...
        - ...
- album
    - album name
    - album name/sub
- shared-album
    - album name
    - album name/sub
- feature
    - favorites
        - file1.jpg
        - file2.jpg
```

There are two writable parts of the tree, the `upload` directory and
sub directories of the `album` directory.

The `upload` directory is for uploading files you don't want to put
into albums. This will be empty to start with and will contain the
files you've uploaded for one rclone session only, becoming empty
again when you restart rclone. The use case for this would be if you
have a load of files you just want to do a one-off dump into Google
Photos. For repeated syncing, uploading to `album` will work better.

Directories within the `album` directory are also writeable and you
may create new directories (albums) under `album`.
If you copy files with a directory hierarchy in there then rclone will
create albums with the `/` character in them.

For example if you do

    rclone copy /path/to/images remote:album/images

and the images directory contains

```
images
    - file1.jpg
    dir
        file2.jpg
    dir2
        dir3
            file3.jpg
```

Then rclone will create the following albums with the following files in

- images
    - file1.jpg
- images/dir
    - file2.jpg
- images/dir2/dir3
    - file3.jpg

This means that you can use the `album` path pretty much like a normal
filesystem and it is a good target for repeated syncing.

The `shared-album` directory shows albums shared with you or by you.
This is similar to the Sharing tab in the Google Photos web interface.

## Limitations

Only images and videos can be uploaded. If you attempt to upload non
videos or images or formats that Google Photos doesn't understand,
rclone will upload the file, then Google Photos will give an error
when it is turned into a media item.

Note that all media items uploaded to Google Photos through the API
are stored in full resolution at "original quality" and **will** count
towards your storage quota in your Google Account. The API does
**not** offer a way to upload in "high quality" mode.

### Downloading Images

When images are downloaded this strips EXIF location (according to the
docs and my tests). This is a limitation of the Google Photos API and
is covered by [bug #112096115](https://issuetracker.google.com/issues/112096115).

**The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort**

### Downloading Videos

When videos are downloaded they are downloaded in a heavily compressed
version compared to downloading the video via the Google Photos web
interface. This is covered by
[bug #113672044](https://issuetracker.google.com/issues/113672044).

### Duplicates

If a file name is duplicated in a directory then rclone will add the
file ID into its name. So two files called `file.jpg` would then
appear as `file {123456}.jpg` and `file {ABCDEF}.jpg` (the actual IDs
are a lot longer alas!).

If you upload the same image (with the same binary data) twice then
Google Photos will deduplicate it. However it will retain the
filename from the first upload which may confuse rclone. For example
if you uploaded an image to `upload` then uploaded the same image to
`album/my_album` the filename of the image in `album/my_album` will be
what it was uploaded with initially, not what you uploaded it with to
`album`. In practice this shouldn't cause too many problems.

### Modified time

The date shown of media in Google Photos is the creation date as
determined by the EXIF information, or the upload date if that is not
known.

This is not changeable by rclone and is not the modification date of
the media on local disk. This means that rclone cannot use the dates
from Google Photos for syncing purposes.

### Size

The Google Photos API does not return the size of media. This means
that when syncing to Google Photos, rclone can only do a file
existence check.

It is possible to read the size of the media, but this needs an extra
HTTP HEAD request per media item so is **very slow** and uses up a lot
of transactions. This can be enabled with the `--gphotos-read-size`
option or the `read_size = true` config parameter.
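As a minimal illustration (the album name is a placeholder), listing an album with sizes included might look like this - expect it to be slow, since every media item needs an extra HTTP request:

    rclone ls --gphotos-read-size remote:album/my_album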
If you want to use the backend with `rclone mount` you may need to
enable this flag (depending on your OS and application using the
photos) otherwise you may not be able to read media off the mount.
You'll need to experiment to see if it works for you without the flag.

### Albums

Rclone can only upload files to albums it created. This is a
[limitation of the Google Photos API](https://developers.google.com/photos/library/guides/manage-albums).

Rclone can only remove files it uploaded from albums it created.

### Deleting files

Rclone can remove files from albums it created, but note that the
Google Photos API does not allow media to be deleted permanently so
this media will still remain. See [bug #109759781](https://issuetracker.google.com/issues/109759781).

Rclone cannot delete files anywhere except under `album`.

### Deleting albums

The Google Photos API does not support deleting albums - see [bug #135714733](https://issuetracker.google.com/issues/135714733).

### Standard Options

Here are the standard options specific to google photos (Google Photos).

#### --gphotos-client-id

OAuth Client Id
Leave blank normally.

- Config: client_id
- Env Var: RCLONE_GPHOTOS_CLIENT_ID
- Type: string
- Default: ""

#### --gphotos-client-secret

OAuth Client Secret
Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_GPHOTOS_CLIENT_SECRET
- Type: string
- Default: ""

#### --gphotos-read-only

Set to make the Google Photos backend read only.

If you choose read only then rclone will only request read only access
to your photos, otherwise rclone will request full access.

- Config: read_only
- Env Var: RCLONE_GPHOTOS_READ_ONLY
- Type: bool
- Default: false

### Advanced Options

Here are the advanced options specific to google photos (Google Photos).

#### --gphotos-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_GPHOTOS_TOKEN
- Type: string
- Default: ""

#### --gphotos-auth-url

Auth server URL.
Leave blank to use the provider defaults.

- Config: auth_url
- Env Var: RCLONE_GPHOTOS_AUTH_URL
- Type: string
- Default: ""

#### --gphotos-token-url

Token server url.
Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_GPHOTOS_TOKEN_URL
- Type: string
- Default: ""

#### --gphotos-read-size

Set to read the size of media items.

Normally rclone does not read the size of media items since this takes
another transaction. This isn't necessary for syncing. However
rclone mount needs to know the size of files in advance of reading
them, so setting this flag when using rclone mount is recommended if
you want to read the media.

- Config: read_size
- Env Var: RCLONE_GPHOTOS_READ_SIZE
- Type: bool
- Default: false

#### --gphotos-start-year

Year limits the photos to be downloaded to those which are uploaded after the given year

- Config: start_year
- Env Var: RCLONE_GPHOTOS_START_YEAR
- Type: int
- Default: 2000

HTTP
-------------------------------------------------

The HTTP remote is a read only remote for reading files off a
webserver. The webserver should provide file listings which rclone
will read and turn into a remote. This has been tested with common
webservers such as Apache/Nginx/Caddy and will likely work with file
listings from most web servers. (If it doesn't then please file an
issue, or send a pull request!)

Paths are specified as `remote:` or `remote:path/to/dir`.

Here is an example of how to make a remote called `remote`.
First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / http Connection
   \ "http"
[snip]
Storage> http
URL of http host to connect to
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "https://example.com"
url> https://beta.rclone.org
Remote config
--------------------
[remote]
url = https://beta.rclone.org
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote               http

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
```

This remote is called `remote` and can now be used like this

See all the top level directories

    rclone lsd remote:

List the contents of a directory

    rclone ls remote:directory

Sync the remote `directory` to `/home/local/directory`, deleting any excess files.

    rclone sync -i remote:directory /home/local/directory

### Read only ###

This remote is read only - you can't upload files to an HTTP server.

### Modified time ###

Most HTTP servers store time accurate to 1 second.

### Checksum ###

No checksums are stored.

### Usage without a config file ###

Since the http remote only has one config parameter it is easy to use
without a config file:

    rclone lsd --http-url https://beta.rclone.org :http:

### Standard Options

Here are the standard options specific to http (http Connection).

#### --http-url

URL of http host to connect to

- Config: url
- Env Var: RCLONE_HTTP_URL
- Type: string
- Default: ""
- Examples:
    - "https://example.com"
        - Connect to example.com
    - "https://user:pass@example.com"
        - Connect to example.com using a username and password

### Advanced Options

Here are the advanced options specific to http (http Connection).

#### --http-headers

Set HTTP headers for all transactions

Use this to set additional HTTP headers for all transactions

The input format is a comma separated list of key,value pairs. Standard
[CSV encoding](https://godoc.org/encoding/csv) may be used.

For example to set a Cookie use 'Cookie,name=value', or
'"Cookie","name=value"'.

You can set multiple headers, eg
'"Cookie","name=value","Authorization","xxx"'.

- Config: headers
- Env Var: RCLONE_HTTP_HEADERS
- Type: CommaSepList
- Default:

#### --http-no-slash

Set this if the site doesn't end directories with /

Use this if your target website does not use / on the end of
directories.

A / on the end of a path is how rclone normally tells the difference
between files and directories. If this flag is set, then rclone will
treat all files with Content-Type: text/html as directories and read
URLs from them rather than downloading them.

Note that this may cause rclone to confuse genuine HTML files with
directories.

- Config: no_slash
- Env Var: RCLONE_HTTP_NO_SLASH
- Type: bool
- Default: false

#### --http-no-head

Don't use HEAD requests to find file sizes in dir listing

If your site is being very slow to load then you can try this option.
Normally rclone does a HEAD request for each potential file in a
directory listing to:

- find its size
- check it really exists
- check to see if it is a directory

If you set this option, rclone will not do the HEAD request.
This will mean - directory listings are much quicker - rclone won't have the times or sizes of any files - some files that don't exist may be in the listing - Config: no_head - Env Var: RCLONE_HTTP_NO_HEAD - Type: bool - Default: false Hubic ----------------------------------------- Paths are specified as `remote:container` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` n) New remote s) Set configuration password n/s> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Hubic \ "hubic" [snip] Storage> hubic Hubic Client Id - leave blank normally. client_id> Hubic Client Secret - leave blank normally. client_secret> Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] client_id = client_secret = token = {"access_token":"XXXXXX"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Hubic. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall. Once configured you can then use `rclone` like this, List containers in the top level of your Hubic rclone lsd remote: List all the files in your Hubic rclone ls remote: To copy a local directory to a Hubic directory called backup rclone copy /home/source remote:backup If you want the directory to be visible in the official *Hubic browser*, you need to copy your files to the `default` directory rclone copy /home/source remote:default/backup ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](https://rclone.org/docs/#fast-list) for more details. ### Modified time ### The modified time is stored as metadata on the object as `X-Object-Meta-Mtime` as floating point since the epoch accurate to 1 ns. This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object. Note that Hubic wraps the Swift backend, so most of the properties are the same. ### Standard Options Here are the standard options specific to hubic (Hubic). #### --hubic-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_HUBIC_CLIENT_ID - Type: string - Default: "" #### --hubic-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_HUBIC_CLIENT_SECRET - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to hubic (Hubic). #### --hubic-token OAuth Access Token as a JSON blob.
- Config: token - Env Var: RCLONE_HUBIC_TOKEN - Type: string - Default: "" #### --hubic-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_HUBIC_AUTH_URL - Type: string - Default: "" #### --hubic-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_HUBIC_TOKEN_URL - Type: string - Default: "" #### --hubic-chunk-size Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value. - Config: chunk_size - Env Var: RCLONE_HUBIC_CHUNK_SIZE - Type: SizeSuffix - Default: 5G #### --hubic-no-chunk Don't chunk files during streaming upload. When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM. Rclone will still chunk files bigger than chunk_size when doing normal copy operations. - Config: no_chunk - Env Var: RCLONE_HUBIC_NO_CHUNK - Type: bool - Default: false #### --hubic-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_HUBIC_ENCODING - Type: MultiEncoder - Default: Slash,InvalidUtf8 ### Limitations ### This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API. The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these. Jottacloud ----------------------------------------- Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. In addition to the official service at [jottacloud.com](https://www.jottacloud.com/), there are also several whitelabel versions which should work with this backend. Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. ## Setup ### Default Setup To configure Jottacloud you will need to generate a personal security token in the Jottacloud web interface. You will find the option to do this in your [account security settings](https://www.jottacloud.com/web/secure) (for the whitelabel versions you need to find this page in their web interface). Note that the web interface may refer to this token as a JottaCli token. ### Legacy Setup If you are using one of the whitelabel versions (Elgiganten, Com Hem Cloud) you may not have the option to generate a CLI token. In this case you'll have to use legacy authentication. To do this, select yes when the setup asks for legacy authentication and enter your username and password. The rest of the setup is identical to the default setup. Here is an example of how to make a remote called `remote` with the default setup. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Jottacloud \ "jottacloud" [snip] Storage> jottacloud ** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** Edit advanced config?
(y/n) y) Yes n) No y/n> n Remote config Use legacy authentification?. This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. y) Yes n) No (default) y/n> n Generate a personal login token here: https://www.jottacloud.com/web/secure Login Token> Do you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client? y) Yes n) No y/n> y Please select the device to use. Normally this will be Jotta Choose a number from below, or type in an existing value 1 > DESKTOP-3H31129 2 > Jotta Devices> 2 Please select the mountpoint to user. Normally this will be Archive Choose a number from below, or type in an existing value 1 > Archive 2 > Links 3 > Sync Mountpoints> 1 -------------------- [jotta] type = jottacloud token = {........} device = Jotta mountpoint = Archive configVersion = 1 -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` Once configured you can then use `rclone` like this, List directories in top level of your Jottacloud rclone lsd remote: List all the files in your Jottacloud rclone ls remote: To copy a local directory to a Jottacloud directory called backup rclone copy /home/source remote:backup ### Devices and Mountpoints The official Jottacloud client registers a device for each computer you install it on, and then creates a mountpoint for each folder you select for Backup. The web interface uses a special device called Jotta for the Archive and Sync mountpoints. In most cases you'll want to use the Jotta/Archive device/mountpoint, however if you want to access files uploaded by any of the official clients rclone provides the option to select other devices and mountpoints during config. The built-in Jotta device may also contain several other mountpoints, such as: Latest, Links, Shared and Trash. These are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone will support them only to a very limited degree. Generally you should avoid these, unless you know what you are doing. ### --fast-list This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](https://rclone.org/docs/#fast-list) for more details. Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to a long wait time before the first results are shown. ### Modified time and hashes Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. Jottacloud supports MD5 type hashes, so you can use the `--checksum` flag. Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the `TMPDIR` environment variable points to) before it is uploaded. Small files will be cached in memory - see the [--jottacloud-md5-memory-limit](#jottacloud-md5-memory-limit) flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above).
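For example, to have rclone compare MD5 hashes (together with sizes) instead of modification times when deciding whether files need syncing, you could run something like this (a minimal sketch - the paths are illustrative only):

    rclone sync --checksum /home/source remote:backup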
#### Restricted filename characters In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | " | 0x22 | " | | * | 0x2A | * | | : | 0x3A | : | | < | 0x3C | < | | > | 0x3E | > | | ? | 0x3F | ? | | \| | 0x7C | | | Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in XML strings. ### Deleting files By default rclone will send all files to the trash when deleting files. They will be permanently deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) flag, or set the equivalent environment variable. Emptying the trash is supported by the [cleanup](https://rclone.org/commands/rclone_cleanup/) command. ### Versions Jottacloud supports file versioning. When rclone uploads a changed file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website. ### Quota information To view your current quota you can use the `rclone about remote:` command which will display your usage limit (unless it is unlimited) and the current usage. ### Advanced Options Here are the advanced options specific to jottacloud (Jottacloud). #### --jottacloud-md5-memory-limit Files bigger than this will be cached on disk to calculate the MD5 if required. - Config: md5_memory_limit - Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT - Type: SizeSuffix - Default: 10M #### --jottacloud-trashed-only Only show files that are in the trash. This will show trashed files in their original directory structure. - Config: trashed_only - Env Var: RCLONE_JOTTACLOUD_TRASHED_ONLY - Type: bool - Default: false #### --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - Config: hard_delete - Env Var: RCLONE_JOTTACLOUD_HARD_DELETE - Type: bool - Default: false #### --jottacloud-upload-resume-limit Files bigger than this can be resumed if the upload fails. - Config: upload_resume_limit - Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT - Type: SizeSuffix - Default: 10M #### --jottacloud-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_JOTTACLOUD_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot ### Limitations Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ? instead. Jottacloud only supports filenames up to 255 characters in length. ### Troubleshooting Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.
You can do that by opening the Koofr [web application](https://app.koofr.net/app/admin/preferences/password), giving the password a nice name like `rclone` and clicking on generate. Here is an example of how to make a remote called `koofr`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> koofr Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Koofr \ "koofr" [snip] Storage> koofr ** See help for koofr backend at: https://rclone.org/koofr/ ** Your Koofr user name Enter a string value. Press Enter for the default (""). user> USER@NAME Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [koofr] type = koofr baseurl = https://app.koofr.net user = USER@NAME password = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` You can choose to edit advanced config in order to enter your own service URL if you use an on-premise or white label Koofr instance, or choose an alternative mount instead of your primary storage. Once configured you can then use `rclone` like this, List directories in top level of your Koofr rclone lsd koofr: List all the files in your Koofr rclone ls koofr: To copy a local directory to a Koofr directory called backup rclone copy /home/source koofr:backup #### Restricted filename characters In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | \ | 0x5C | \ | Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in XML strings. ### Standard Options Here are the standard options specific to koofr (Koofr). #### --koofr-user Your Koofr user name - Config: user - Env Var: RCLONE_KOOFR_USER - Type: string - Default: "" #### --koofr-password Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - Config: password - Env Var: RCLONE_KOOFR_PASSWORD - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to koofr (Koofr). #### --koofr-endpoint The Koofr API endpoint to use - Config: endpoint - Env Var: RCLONE_KOOFR_ENDPOINT - Type: string - Default: "https://app.koofr.net" #### --koofr-mountid Mount ID of the mount to use. If omitted, the primary mount is used. - Config: mountid - Env Var: RCLONE_KOOFR_MOUNTID - Type: string - Default: "" #### --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. - Config: setmtime - Env Var: RCLONE_KOOFR_SETMTIME - Type: bool - Default: true #### --koofr-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
- Config: encoding - Env Var: RCLONE_KOOFR_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot ### Limitations ### Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". Mail.ru Cloud ---------------------------------------- [Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage service provided by the Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/), available only on Windows. (Please note that official sites are in Russian) Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until support for it is eventually implemented. ### Feature highlights ### - Paths may be as deep as required, eg `remote:directory/subdirectory` - Files have a `last modified time` property, directories don't - Deleted files are by default moved to the trash - Files and directories can be shared via public links - Partial uploads or streaming are not supported, file size must be known before upload - Maximum file size is limited to 2G for a free account, unlimited for paid accounts - Storage keeps hash for all files and performs transparent deduplication, the hash algorithm is a modified SHA1 - If a particular file is already present in storage, one can quickly submit file hash instead of long file upload (this optimization is supported by rclone) ### Configuration ### Here is an example of making a mailru configuration. First create a Mail.ru Cloud account and choose a tariff, then run rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Mail.ru Cloud \ "mailru" [snip] Storage> mailru User name (usually email) Enter a string value. Press Enter for the default (""). user> username@mail.ru Password y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Skip full upload if there is another file with same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips [snip] Enter a boolean value (true or false). Press Enter for the default ("true"). Choose a number from below, or type in your own value 1 / Enable \ "true" 2 / Disable \ "false" speedup_enable> 1 Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [remote] type = mailru user = username@mail.ru pass = *** ENCRYPTED *** speedup_enable = true -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` Configuration of this backend does not require a local web browser. You can use the configured backend as shown below: See top level directories rclone lsd remote: Make a new directory rclone mkdir remote:directory List the contents of a directory rclone ls remote:directory Sync `/home/local/directory` to the remote path, deleting any excess files in the path. rclone sync -i /home/local/directory remote:directory ### Modified time ### Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970".
### Hash checksums ### Hash sums use a custom Mail.ru algorithm based on SHA1. If file size is less than or equal to the SHA1 digest size (20 bytes), its hash is simply its data right-padded with zero bytes. The hash sum of a larger file is computed as the SHA1 sum of the file data bytes concatenated with a decimal representation of the data length. ### Emptying Trash ### Removing a file or directory actually moves it to the trash, which is not visible to rclone but can be seen in a web browser. The trashed file still occupies part of the total quota. If you wish to empty your trash and free some quota, you can use the `rclone cleanup remote:` command, which will permanently delete all your trashed files. This command does not take any path arguments. ### Quota information ### To view your current quota you can use the `rclone about remote:` command which will display your usage limit (quota) and the current usage. #### Restricted filename characters In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | " | 0x22 | " | | * | 0x2A | * | | : | 0x3A | : | | < | 0x3C | < | | > | 0x3E | > | | ? | 0x3F | ? | | \ | 0x5C | \ | | \| | 0x7C | | | Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. ### Limitations ### File size limits depend on your account. A single file size is limited to 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits. Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". ### Standard Options Here are the standard options specific to mailru (Mail.ru Cloud). #### --mailru-user User name (usually email) - Config: user - Env Var: RCLONE_MAILRU_USER - Type: string - Default: "" #### --mailru-pass Password **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - Config: pass - Env Var: RCLONE_MAILRU_PASS - Type: string - Default: "" #### --mailru-speedup-enable Skip full upload if there is another file with same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization. - Config: speedup_enable - Env Var: RCLONE_MAILRU_SPEEDUP_ENABLE - Type: bool - Default: true - Examples: - "true" - Enable - "false" - Disable ### Advanced Options Here are the advanced options specific to mailru (Mail.ru Cloud). #### --mailru-speedup-file-patterns Comma separated list of file name patterns eligible for speedup (put by hash). Patterns are case insensitive and can contain '*' or '?' meta characters. - Config: speedup_file_patterns - Env Var: RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS - Type: string - Default: "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf" - Examples: - "" - Empty list completely disables speedup (put by hash). - "*" - All files will be attempted for speedup.
- "*.mkv,*.avi,*.mp4,*.mp3" - Only common audio/video files will be tried for put by hash. - "*.zip,*.gz,*.rar,*.pdf" - Only common archives or PDF books will be tried for speedup. #### --mailru-speedup-max-disk This option allows you to disable speedup (put by hash) for large files (because preliminary hashing can exhaust you RAM or disk space) - Config: speedup_max_disk - Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK - Type: SizeSuffix - Default: 3G - Examples: - "0" - Completely disable speedup (put by hash). - "1G" - Files larger than 1Gb will be uploaded directly. - "3G" - Choose this option if you have less than 3Gb free on local disk. #### --mailru-speedup-max-memory Files larger than the size given below will always be hashed on disk. - Config: speedup_max_memory - Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY - Type: SizeSuffix - Default: 32M - Examples: - "0" - Preliminary hashing will always be done in a temporary disk location. - "32M" - Do not dedicate more than 32Mb RAM for preliminary hashing. - "256M" - You have at most 256Mb RAM free for hash calculations. #### --mailru-check-hash What should copy do if file checksum is mismatched or invalid - Config: check_hash - Env Var: RCLONE_MAILRU_CHECK_HASH - Type: bool - Default: true - Examples: - "true" - Fail with error. - "false" - Ignore and continue. #### --mailru-user-agent HTTP user agent used internally by client. Defaults to "rclone/VERSION" or "--user-agent" provided on command line. - Config: user_agent - Env Var: RCLONE_MAILRU_USER_AGENT - Type: string - Default: "" #### --mailru-quirks Comma separated list of internal maintenance flags. This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist gzip insecure retry400 - Config: quirks - Env Var: RCLONE_MAILRU_QUIRKS - Type: string - Default: "" #### --mailru-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_MAILRU_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot Mega ----------------------------------------- [Mega](https://mega.nz/) is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption. This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption. Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Mega \ "mega" [snip] Storage> mega User name user> you@example.com Password. 
y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> y Enter the password: password: Confirm the password: password: Remote config -------------------- [remote] type = mega user = you@example.com pass = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` **NOTE:** The encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in `rclone` will fail. Once configured you can then use `rclone` like this, List directories in top level of your Mega rclone lsd remote: List all the files in your Mega rclone ls remote: To copy a local directory to a Mega directory called backup rclone copy /home/source remote:backup ### Modified time and hashes ### Mega does not support modification times or hashes yet. #### Restricted filename characters | Character | Value | Replacement | | --------- |:-----:|:-----------:| | NUL | 0x00 | ␀ | | / | 0x2F | / | Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. ### Duplicated files ### Mega can have two files with exactly the same name and path (unlike a normal file system). Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. Use `rclone dedupe` to fix duplicated files. ### Failure to log-in ### Mega remotes seem to get blocked (reject logins) under "heavy use". We haven't worked out the exact blocking rules but it seems to be related to fast paced, successive rclone commands. For example, executing this command 90 times in a row `rclone link remote:file` will cause the remote to become "blocked". This is not an abnormal situation, for example if you wish to get the public links of a directory with hundreds of files... After more or less a week, the remote will accept rclone logins normally again. You can mitigate this issue by mounting the remote with `rclone mount`. This will log in when mounting and log out when unmounting only. You can also run `rclone rcd` and then use `rclone rc` to run the commands over the API to avoid logging in each time. Rclone does not currently close mega sessions (you can see them in the web interface), however closing the sessions does not solve the issue. If you space rclone commands by 3 seconds it will avoid blocking the remote. We haven't identified the exact blocking rules, so perhaps one could execute the command 80 times without waiting and avoid blocking by waiting 3 seconds, then continuing... Note that this has been observed by trial and error and might not be set in stone. Other tools seem not to produce this blocking effect, as they use a different working approach (state-based, using sessionIDs instead of log-in) which isn't compatible with the current stateless rclone approach. Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 min, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though. Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum. So, if rclone was working nicely and suddenly you are unable to log in and you are sure the user and the password are correct, it is likely you have got the remote blocked for a while.
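As a minimal sketch of the `rclone rcd`/`rclone rc` approach mentioned above (assuming the default rc address and the `--rc-no-auth` flag; the path is illustrative only):

```
# start the rc server once - it logs in to Mega a single time
rclone rcd --rc-no-auth &
# run as many commands as you like against the running session
rclone rc operations/list fs=remote: remote=some/dir
```

This keeps one logged-in session alive inside the `rclone rcd` process, so the individual `rclone rc` calls do not each trigger a fresh Mega login.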
### Standard Options Here are the standard options specific to mega (Mega). #### --mega-user User name - Config: user - Env Var: RCLONE_MEGA_USER - Type: string - Default: "" #### --mega-pass Password. **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - Config: pass - Env Var: RCLONE_MEGA_PASS - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to mega (Mega). #### --mega-debug Output more debug from Mega. If this flag is set (along with -vv) it will print further debugging information from the mega backend. - Config: debug - Env Var: RCLONE_MEGA_DEBUG - Type: bool - Default: false #### --mega-hard-delete Delete files permanently rather than putting them into the trash. Normally the mega backend will put all deletions into the trash rather than permanently deleting them. If you specify this then rclone will permanently delete objects instead. - Config: hard_delete - Env Var: RCLONE_MEGA_HARD_DELETE - Type: bool - Default: false #### --mega-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_MEGA_ENCODING - Type: MultiEncoder - Default: Slash,InvalidUtf8,Dot ### Limitations ### This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) which is an open source Go library implementing the Mega API. There doesn't appear to be any documentation for the mega protocol beyond the [mega C++ SDK](https://github.com/meganz/sdk) source code so there are likely quite a few errors still remaining in this library. Mega allows duplicate files which may confuse rclone. Memory ----------------------------------------- The memory backend is an in-RAM backend. It does not persist its data - use the local backend for that. The memory backend behaves like a bucket based remote (eg like s3). Because it has no parameters you can just use it with the `:memory:` remote name. You can configure it as a remote like this with `rclone config` too if you want to: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Memory \ "memory" [snip] Storage> memory ** See help for memory backend at: https://rclone.org/memory/ ** Remote config -------------------- [remote] type = memory -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y ``` Because the memory backend isn't persistent it is most useful for testing or with an rclone server or rclone mount, eg rclone mount :memory: /mnt/tmp rclone serve webdav :memory: rclone serve sftp :memory: ### Modified time and hashes ### The memory backend supports MD5 hashes and modification times accurate to 1 ns. #### Restricted filename characters The memory backend replaces the [default restricted characters set](https://rclone.org/overview/#restricted-characters). Microsoft Azure Blob Storage ----------------------------------------- Paths are specified as `remote:container` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called `remote`.
First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Microsoft Azure Blob Storage \ "azureblob" [snip] Storage> azureblob Storage Account Name account> account_name Storage Account Key key> base64encodedkey== Endpoint for the service - leave blank normally. endpoint> Remote config -------------------- [remote] account = account_name key = base64encodedkey== endpoint = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` See all containers rclone lsd remote: Make a new container rclone mkdir remote:container List the contents of a container rclone ls remote:container Sync `/home/local/directory` to the remote container, deleting any excess files in the container. rclone sync -i /home/local/directory remote:container ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](https://rclone.org/docs/#fast-list) for more details. ### Modified time ### The modified time is stored as metadata on the object with the `mtime` key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it. ### Restricted filename characters In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | / | 0x2F | / | | \ | 0x5C | \ | File names can also not end with the following characters. These only get replaced if they are the last character in the name: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | . | 0x2E | . | Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. ### Hashes ### MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, eg the local disk. ### Authenticating with Azure Blob Storage Rclone has 3 ways of authenticating with Azure Blob Storage: #### Account and Key This is the most straightforward and least flexible way. Just fill in the `account` and `key` lines and leave the rest blank. #### SAS URL This can be an account level SAS URL or container level SAS URL. To use it leave `account` and `key` blank and fill in `sas_url`. An account level SAS URL or container level SAS URL can be obtained from the Azure portal or the Azure Storage Explorer. To get a container level SAS URL right click on a container in the Azure Blob explorer in the Azure portal. If you use a container level SAS URL, rclone operations are permitted only on a particular container, eg rclone ls azureblob:container You can also list the single container from the root. This will only show the container specified by the SAS URL. $ rclone lsd azureblob: container/ Note that you can't see or access any other containers - this will fail rclone ls azureblob:othercontainer Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server.
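For example, to list such a container without a config file at all, passing the SAS URL on the command line to an on-the-fly `:azureblob:` remote (a sketch - the URL shown is a placeholder for a real SAS URL from the Azure portal):

    rclone lsd --azureblob-sas-url "https://ACCOUNT.blob.core.windows.net/container?..." :azureblob:container

The `:azureblob:` syntax creates the remote on the fly, so nothing is written to the config file - convenient for the untrusted environments described above.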
### Multipart uploads ### Rclone supports multipart uploads with Azure Blob storage. Files bigger than 256MB will be uploaded using chunked upload by default. The files will be uploaded in parallel in 4MB chunks (by default). Note that these chunks are buffered in memory and there may be up to `--transfers` of them being uploaded at once. Files can't be split into more than 50,000 chunks, so by default the largest file that can be uploaded with 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates fewer than 50,000 chunks. By default this will mean a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using `--azureblob-chunk-size 100M`. Note that rclone doesn't commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won't allow more than that amount of uncommitted blocks. ### Standard Options Here are the standard options specific to azureblob (Microsoft Azure Blob Storage). #### --azureblob-account Storage Account Name (leave blank to use SAS URL or Emulator) - Config: account - Env Var: RCLONE_AZUREBLOB_ACCOUNT - Type: string - Default: "" #### --azureblob-key Storage Account Key (leave blank to use SAS URL or Emulator) - Config: key - Env Var: RCLONE_AZUREBLOB_KEY - Type: string - Default: "" #### --azureblob-sas-url SAS URL for container level access only (leave blank if using account/key or Emulator) - Config: sas_url - Env Var: RCLONE_AZUREBLOB_SAS_URL - Type: string - Default: "" #### --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint) - Config: use_emulator - Env Var: RCLONE_AZUREBLOB_USE_EMULATOR - Type: bool - Default: false ### Advanced Options Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage). #### --azureblob-endpoint Endpoint for the service Leave blank normally. - Config: endpoint - Env Var: RCLONE_AZUREBLOB_ENDPOINT - Type: string - Default: "" #### --azureblob-upload-cutoff Cutoff for switching to chunked upload (<= 256MB). - Config: upload_cutoff - Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF - Type: SizeSuffix - Default: 256M #### --azureblob-chunk-size Upload chunk size (<= 100MB). Note that this is stored in memory and there may be up to "--transfers" chunks stored at once in memory. - Config: chunk_size - Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE - Type: SizeSuffix - Default: 4M #### --azureblob-list-chunk Size of blob list. This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out ( [source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval) ). This can be used to limit the number of blob items to return, to avoid the time out. - Config: list_chunk - Env Var: RCLONE_AZUREBLOB_LIST_CHUNK - Type: int - Default: 5000 #### --azureblob-access-tier Access tier of blob: hot, cool or archive. Archived blobs can be restored by setting access tier to hot or cool. Leave blank if you intend to use default access tier, which is set at account level. If there is no "access tier" specified, rclone doesn't apply any tier. Rclone performs a "Set Tier" operation on blobs while uploading; if objects are not modified, specifying a new "access tier" will have no effect.
If blobs are in "archive tier" at remote, trying to perform data transfer operations from remote will not be allowed. User should first restore by tiering blob to "Hot" or "Cool". - Config: access_tier - Env Var: RCLONE_AZUREBLOB_ACCESS_TIER - Type: string - Default: "" #### --azureblob-disable-checksum Don't store MD5 checksum with object metadata. Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading. - Config: disable_checksum - Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM - Type: bool - Default: false #### --azureblob-memory-pool-flush-time How often internal memory buffer pools will be flushed. Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations. This option controls how often unused buffers will be removed from the pool. - Config: memory_pool_flush_time - Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME - Type: Duration - Default: 1m0s #### --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. - Config: memory_pool_use_mmap - Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP - Type: bool - Default: false #### --azureblob-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_AZUREBLOB_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8 ### Limitations ### MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy. ### Azure Storage Emulator Support ### You can test rclone with storage emulator locally, to do this make sure azure storage emulator installed locally and set up a new remote with `rclone config` follow instructions described in introduction, set `use_emulator` config as `true`, you do not need to provide default account name or key if using emulator. Microsoft OneDrive ----------------------------------------- Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Microsoft OneDrive \ "onedrive" [snip] Storage> onedrive Microsoft App Client Id Leave blank normally. Enter a string value. Press Enter for the default (""). client_id> Microsoft App Client Secret Leave blank normally. Enter a string value. Press Enter for the default (""). client_secret> Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... 
Got code Choose a number from below, or type in an existing value 1 / OneDrive Personal or Business \ "onedrive" 2 / Sharepoint site \ "sharepoint" 3 / Type in driveID \ "driveid" 4 / Type in SiteID \ "siteid" 5 / Search a Sharepoint site \ "search" Your choice> 1 Found 1 drives, please select the one you want to use: 0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk Chose drive to use:> 0 Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents Is that okay? y) Yes n) No y/n> y -------------------- [remote] type = onedrive token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"} drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk drive_type = business -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall. Once configured you can then use `rclone` like this, List directories in top level of your OneDrive rclone lsd remote: List all the files in your OneDrive rclone ls remote: To copy a local directory to a OneDrive directory called backup rclone copy /home/source remote:backup ### Getting your own Client ID and Key ### You can use your own Client ID if the default (`client_id` left blank) one doesn't work for you or you see lots of throttling. The default Client ID and Key are shared by all rclone users when performing requests. If you are having problems with them (e.g. seeing a lot of throttling), you can get your own Client ID and Key by following the steps below: 1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click `New registration`. 2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, enter `http://localhost:53682/` and click Register. Copy and keep the `Application (client) ID` under the app name for later use. 3. Under `manage` select `Certificates & secrets`, click `New client secret`. Copy and keep that secret for later use. 4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`. 5. Search and select the following permissions: `Files.Read`, `Files.ReadWrite`, `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read`. Once selected click `Add permissions` at the bottom. Now the application is complete. Run `rclone config` to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps. ### Modification time and hashes ### OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. OneDrive personal supports SHA1 type hashes.
OneDrive for business and Sharepoint Server support [QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash). For all types of OneDrive you can use the `--checksum` flag. ### Restricted filename characters ### In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | " | 0x22 | " | | * | 0x2A | * | | : | 0x3A | : | | < | 0x3C | < | | > | 0x3E | > | | ? | 0x3F | ? | | \ | 0x5C | \ | | \| | 0x7C | | | | # | 0x23 | # | | % | 0x25 | % | File names can also not end with the following characters. These only get replaced if they are the last character in the name: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | SP | 0x20 | ␠ | | . | 0x2E | . | File names can also not begin with the following characters. These only get replaced if they are the first character in the name: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | SP | 0x20 | ␠ | | ~ | 0x7E | ~ | Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. ### Deleting files ### Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website. ### Standard Options Here are the standard options specific to onedrive (Microsoft OneDrive). #### --onedrive-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_ONEDRIVE_CLIENT_ID - Type: string - Default: "" #### --onedrive-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to onedrive (Microsoft OneDrive). #### --onedrive-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_ONEDRIVE_TOKEN - Type: string - Default: "" #### --onedrive-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_ONEDRIVE_AUTH_URL - Type: string - Default: "" #### --onedrive-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_ONEDRIVE_TOKEN_URL - Type: string - Default: "" #### --onedrive-chunk-size Chunk size to upload files with - must be a multiple of 320k (327,680 bytes). Above this size files will be chunked - must be a multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter \"Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big.\" Note that the chunks will be buffered into memory. - Config: chunk_size - Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE - Type: SizeSuffix - Default: 10M #### --onedrive-drive-id The ID of the drive to use - Config: drive_id - Env Var: RCLONE_ONEDRIVE_DRIVE_ID - Type: string - Default: "" #### --onedrive-drive-type The type of the drive ( personal | business | documentLibrary ) - Config: drive_type - Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE - Type: string - Default: "" #### --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. By default rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them.
But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listing, set this option. - Config: expose_onenote_files - Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES - Type: bool - Default: false #### --onedrive-server-side-across-configs Allow server side operations (eg copy) to work across different onedrive configs. This can be useful if you wish to do a server side copy between two different Onedrives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations. - Config: server_side_across_configs - Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS - Type: bool - Default: false #### --onedrive-no-versions Remove all versions on modifying operations Onedrive for business creates versions when rclone uploads a new file overwriting an existing one and when it sets the modification time. These versions take up space out of the quota. This flag checks for versions after file upload and setting modification time and removes all but the last version. **NB** Onedrive personal can't currently delete versions so don't use this flag there. - Config: no_versions - Env Var: RCLONE_ONEDRIVE_NO_VERSIONS - Type: bool - Default: false #### --onedrive-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_ONEDRIVE_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot ### Limitations If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the `rclone config reconnect remote:` command to get a new token and refresh token. #### Naming #### Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a `?` in it, it will be mapped to `?` instead. #### File sizes #### The largest allowed file size is 100GB for both OneDrive Personal and OneDrive for Business [(Updated 17 June 2020)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize). #### Path length #### The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones. #### Number of files #### OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like `couldn’t list files: UnknownError:`. See [#2707](https://github.com/rclone/rclone/issues/2707) for more info. An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
### Versions Every change in a file in OneDrive causes the service to create a new version of the file. This counts against a user's quota. For example changing the modification time of a file creates a second version, so the file apparently uses twice the space. The `copy` command, for example, is affected by this as rclone copies the file and then afterwards sets the modification time to match the source file, which uses another version. You can use the `rclone cleanup` command (see below) to remove all old versions. Or you can set the `no_versions` parameter to `true` and rclone will remove versions after operations which create new versions. This takes extra transactions so only enable it if you need it. **Note** At the time of writing Onedrive Personal creates versions (but not for setting the modification time) but the API for removing them returns "API not found" so cleanup and `no_versions` should not be used on Onedrive Personal. ### Disabling versioning Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has brought an [update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting: 1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already) 2. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking` 3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials) 4. `Set-SPOTenant -EnableMinimumVersionRequirement $False` 5. `Disconnect-SPOService` (to disconnect from the server) *Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.* User [Weropol](https://github.com/Weropol) has found a method to disable versioning on OneDrive 1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page. 2. Click Site settings. 3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists. 4. Click Customize "Documents". 5. Click General Settings > Versioning Settings. 6. Under Document Version History select the option No versioning. Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe. 7. Apply the changes by clicking OK. 8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag) 9. Restore the versioning settings after using rclone. (Optional) ### Cleanup OneDrive supports `rclone cleanup` which causes rclone to look through every file under the path supplied and delete all versions but the current version. Because this involves traversing all the files, then querying each file for versions, it can be quite slow. Rclone does `--checkers` tests in parallel. The command also supports `-i` which is a great way to see what it would do.
    rclone cleanup -i remote:path/subdir # interactively remove all old versions for path/subdir
    rclone cleanup remote:path/subdir    # unconditionally remove all old versions for path/subdir

**NB** OneDrive Personal can't currently delete versions

### Troubleshooting ###

#### Unexpected file size/hash differences on Sharepoint ####

It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631) issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and hash checks to fail. There are also other situations that will cause OneDrive to report inconsistent file sizes. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments:

```
--ignore-checksum --ignore-size
```

Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for [OneDrive](https://onedrive.live.com) and find the affected files (which will be in the error messages/log for rclone). Simply click on each of these files, causing OneDrive to open them on the web. This will cause each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above.

#### Replacing/deleting existing files on Sharepoint gets "item not found" ####

It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.). As a workaround, you may use the `--backup-dir` command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory `rclone-backup-dir` on backend `mysharepoint`, you may use:

```
--backup-dir mysharepoint:rclone-backup-dir
```

#### access\_denied (AADSTS65005) ####

```
Error: access_denied
Code: AADSTS65005
Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
```

This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.

However, there are other ways to interact with your OneDrive account. Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint

#### invalid\_grant (AADSTS50076) ####

```
Error: invalid_grant
Code: AADSTS50076
Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.
```

If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`.
For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.

OpenDrive
------------------------------------

Paths are specified as `remote:path`

Paths may be as deep as required, eg `remote:directory/subdirectory`.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / OpenDrive
   \ "opendrive"
[snip]
Storage> opendrive
Username
username>
Password
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
--------------------
[remote]
username =
password = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

List directories in top level of your OpenDrive

    rclone lsd remote:

List all the files in your OpenDrive

    rclone ls remote:

To copy a local directory to an OpenDrive directory called backup

    rclone copy /home/source remote:backup

### Modified time and MD5SUMs ###

OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

#### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |
| "         | 0x22  | ＂          |
| *         | 0x2A  | ＊          |
| :         | 0x3A  | ：          |
| <         | 0x3C  | ＜          |
| >         | 0x3E  | ＞          |
| ?         | 0x3F  | ？          |
| \         | 0x5C  | ＼          |
| \|        | 0x7C  | ｜          |

File names can also not begin or end with the following characters. These only get replaced if they are the first or last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |
| HT        | 0x09  | ␉           |
| LF        | 0x0A  | ␊           |
| VT        | 0x0B  | ␋           |
| CR        | 0x0D  | ␍           |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

### Standard Options

Here are the standard options specific to opendrive (OpenDrive).

#### --opendrive-username

Username

- Config: username
- Env Var: RCLONE_OPENDRIVE_USERNAME
- Type: string
- Default: ""

#### --opendrive-password

Password.

**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

- Config: password
- Env Var: RCLONE_OPENDRIVE_PASSWORD
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to opendrive (OpenDrive).

#### --opendrive-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_OPENDRIVE_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot

#### --opendrive-chunk-size

Files will be uploaded in chunks this size.

Note that these chunks are buffered in memory so increasing them will increase memory use.

- Config: chunk_size
- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 10M

### Limitations ###

Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OpenDrive file names.
These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a `?` in it, it will be mapped to `？` instead.

QingStor
---------------------------------------

Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.

Here is an example of making a QingStor configuration. First run

    rclone config

This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / QingStor Object Storage
   \ "qingstor"
[snip]
Storage> qingstor
Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter QingStor credentials in the next step
   \ "false"
 2 / Get QingStor credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> access_key
QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> secret_key
Enter an endpoint URL to connection QingStor API.
Leave blank will use the default value "https://qingstor.com:443"
endpoint>
Zone connect to. Default is "pek3a".
Choose a number from below, or type in your own value
   / The Beijing (China) Three Zone
 1 | Needs location constraint pek3a.
   \ "pek3a"
   / The Shanghai (China) First Zone
 2 | Needs location constraint sh1a.
   \ "sh1a"
zone> 1
Number of connection retry.
Leave blank will use the default value "3".
connection_retries>
Remote config
--------------------
[remote]
env_auth = false
access_key_id = access_key
secret_access_key = secret_key
endpoint =
zone = pek3a
connection_retries =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This remote is called `remote` and can now be used like this

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket.

    rclone sync -i /home/local/directory remote:bucket

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](https://rclone.org/docs/#fast-list) for more details.

### Multipart uploads ###

rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.

Note that incomplete multipart uploads older than 24 hours can be removed with `rclone cleanup remote:bucket` for just one bucket, or `rclone cleanup remote:` for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time.

### Buckets and Zone ###

With QingStor you can list buckets (`rclone lsd`) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, `incorrect zone, the bucket is not in 'XXX' zone`.
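For example, you could create a bucket in a different zone by overriding the configured zone with the `--qingstor-zone` option described below; this is a hedged sketch and the bucket name is a placeholder:

```
# Create a bucket in the Shanghai zone regardless of the zone in the config.
# Remember you can then only access its contents with the same zone setting.
rclone mkdir --qingstor-zone sh1a remote:shanghai-bucket
```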
### Authentication ###

There are two ways to supply `rclone` with a set of QingStor credentials. In order of precedence:

 - Directly in the rclone configuration file (as configured by `rclone config`)
   - set `access_key_id` and `secret_access_key`
 - Runtime configuration:
   - set `env_auth` to `true` in the config file
   - Exporting the following environment variables before running `rclone`
     - Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY`
     - Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY`

### Restricted filename characters

The control characters 0x00-0x1F and / are replaced as in the [default restricted characters set](https://rclone.org/overview/#restricted-characters). Note that 0x7F is not replaced.

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

### Standard Options

Here are the standard options specific to qingstor (QingCloud Object Storage).

#### --qingstor-env-auth

Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.

- Config: env_auth
- Env Var: RCLONE_QINGSTOR_ENV_AUTH
- Type: bool
- Default: false
- Examples:
    - "false"
        - Enter QingStor credentials in the next step
    - "true"
        - Get QingStor credentials from the environment (env vars or IAM)

#### --qingstor-access-key-id

QingStor Access Key ID

Leave blank for anonymous access or runtime credentials.

- Config: access_key_id
- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
- Type: string
- Default: ""

#### --qingstor-secret-access-key

QingStor Secret Access Key (password)

Leave blank for anonymous access or runtime credentials.

- Config: secret_access_key
- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
- Type: string
- Default: ""

#### --qingstor-endpoint

Enter an endpoint URL to connect to the QingStor API.

Leave blank to use the default value "https://qingstor.com:443"

- Config: endpoint
- Env Var: RCLONE_QINGSTOR_ENDPOINT
- Type: string
- Default: ""

#### --qingstor-zone

Zone to connect to.

Default is "pek3a".

- Config: zone
- Env Var: RCLONE_QINGSTOR_ZONE
- Type: string
- Default: ""
- Examples:
    - "pek3a"
        - The Beijing (China) Three Zone
        - Needs location constraint pek3a.
    - "sh1a"
        - The Shanghai (China) First Zone
        - Needs location constraint sh1a.
    - "gd2a"
        - The Guangdong (China) Second Zone
        - Needs location constraint gd2a.

### Advanced Options

Here are the advanced options specific to qingstor (QingCloud Object Storage).

#### --qingstor-connection-retries

Number of connection retries.

- Config: connection_retries
- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
- Type: int
- Default: 3

#### --qingstor-upload-cutoff

Cutoff for switching to chunked upload

Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.

- Config: upload_cutoff
- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200M

#### --qingstor-chunk-size

Chunk size to use for uploading.

When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.

Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer.

If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.

- Config: chunk_size
- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4M

#### --qingstor-upload-concurrency

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded concurrently.
NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though).

If you are uploading small numbers of large files over high speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.

- Config: upload_concurrency
- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
- Type: int
- Default: 1

#### --qingstor-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_QINGSTOR_ENCODING
- Type: MultiEncoder
- Default: Slash,Ctl,InvalidUtf8

Swift
----------------------------------------

Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/). Commercial implementations of that include:

  * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/)
  * [Memset Memstore](https://www.memset.com/cloud/storage/)
  * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/)
  * [Oracle Cloud Storage](https://cloud.oracle.com/storage-opc)
  * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)

Paths are specified as `remote:container` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:container/path/to/dir`.

Here is an example of making a swift configuration. First run

    rclone config

This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
[snip]
Storage> swift
Get swift credentials from environment variables in standard OpenStack form.
Choose a number from below, or type in your own value
 1 / Enter swift credentials in the next step
   \ "false"
 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
   \ "true"
env_auth> true
User name to log in (OS_USERNAME).
user>
API key or password (OS_PASSWORD).
key>
Authentication URL for server (OS_AUTH_URL).
Choose a number from below, or type in your own value
 1 / Rackspace US
   \ "https://auth.api.rackspacecloud.com/v1.0"
 2 / Rackspace UK
   \ "https://lon.auth.api.rackspacecloud.com/v1.0"
 3 / Rackspace v2
   \ "https://identity.api.rackspacecloud.com/v2.0"
 4 / Memset Memstore UK
   \ "https://auth.storage.memset.com/v1.0"
 5 / Memset Memstore UK v2
   \ "https://auth.storage.memset.com/v2.0"
 6 / OVH
   \ "https://auth.cloud.ovh.net/v3"
auth>
User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
user_id>
User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
domain>
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
tenant>
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
tenant_id>
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
tenant_domain>
Region name - optional (OS_REGION_NAME)
region>
Storage URL - optional (OS_STORAGE_URL)
storage_url>
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
auth_token>
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
auth_version>
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
Choose a number from below, or type in your own value
 1 / Public (default, choose this if not sure)
   \ "public"
 2 / Internal (use internal service net)
   \ "internal"
 3 / Admin
   \ "admin"
endpoint_type>
Remote config
--------------------
[test]
env_auth = true
user =
key =
auth =
user_id =
domain =
tenant =
tenant_id =
tenant_domain =
region =
storage_url =
auth_token =
auth_version =
endpoint_type =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This remote is called `remote` and can now be used like this

See all containers

    rclone lsd remote:

Make a new container

    rclone mkdir remote:container

List the contents of a container

    rclone ls remote:container

Sync `/home/local/directory` to the remote container, deleting any excess files in the container.

    rclone sync -i /home/local/directory remote:container

### Configuration from an OpenStack credentials file ###

An OpenStack credentials file typically looks something like this (without the comments)

```
export OS_AUTH_URL=https://a.provider.net/v2.0
export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
export OS_TENANT_NAME="1234567890123456"
export OS_USERNAME="123abc567xy"
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_REGION_NAME="SBG1"
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
```

The config file needs to look something like this where `$OS_USERNAME` represents the value of the `OS_USERNAME` variable - `123abc567xy` in the example above.

```
[remote]
type = swift
user = $OS_USERNAME
key = $OS_PASSWORD
auth = $OS_AUTH_URL
tenant = $OS_TENANT_NAME
```

Note that you may (or may not) need to set `region` too - try without first.

### Configuration from the environment ###

If you prefer you can configure rclone to use swift using a standard set of OpenStack environment variables.

When you run through the config, make sure you choose `true` for `env_auth` and leave everything else blank.

rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is [a list of the variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment) in the docs for the swift library.

### Using an alternate authentication method ###

If your OpenStack installation uses a non-standard authentication method that might not be yet supported by rclone or the underlying swift library, you can authenticate externally (e.g. by manually calling the `openstack` commands to get a token). Then, you just need to pass the two configuration variables ``auth_token`` and ``storage_url``. If they are both provided, the other variables are ignored.
rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation.

#### Using rclone without a config file ####

You can use rclone with swift without a config file, if desired, like this:

```
source openstack-credentials-file
export RCLONE_CONFIG_MYREMOTE_TYPE=swift
export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:
```

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](https://rclone.org/docs/#fast-list) for more details.

### --update and --use-server-modtime ###

As noted below, the modified time is stored as metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using `--update` along with `--use-server-modtime`, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.

### Standard Options

Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).

#### --swift-env-auth

Get swift credentials from environment variables in standard OpenStack form.

- Config: env_auth
- Env Var: RCLONE_SWIFT_ENV_AUTH
- Type: bool
- Default: false
- Examples:
    - "false"
        - Enter swift credentials in the next step
    - "true"
        - Get swift credentials from environment vars. Leave other fields blank if using this.

#### --swift-user

User name to log in (OS_USERNAME).

- Config: user
- Env Var: RCLONE_SWIFT_USER
- Type: string
- Default: ""

#### --swift-key

API key or password (OS_PASSWORD).

- Config: key
- Env Var: RCLONE_SWIFT_KEY
- Type: string
- Default: ""

#### --swift-auth

Authentication URL for server (OS_AUTH_URL).

- Config: auth
- Env Var: RCLONE_SWIFT_AUTH
- Type: string
- Default: ""
- Examples:
    - "https://auth.api.rackspacecloud.com/v1.0"
        - Rackspace US
    - "https://lon.auth.api.rackspacecloud.com/v1.0"
        - Rackspace UK
    - "https://identity.api.rackspacecloud.com/v2.0"
        - Rackspace v2
    - "https://auth.storage.memset.com/v1.0"
        - Memset Memstore UK
    - "https://auth.storage.memset.com/v2.0"
        - Memset Memstore UK v2
    - "https://auth.cloud.ovh.net/v3"
        - OVH

#### --swift-user-id

User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- Config: user_id
- Env Var: RCLONE_SWIFT_USER_ID
- Type: string
- Default: ""

#### --swift-domain

User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)

- Config: domain
- Env Var: RCLONE_SWIFT_DOMAIN
- Type: string
- Default: ""

#### --swift-tenant

Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)

- Config: tenant
- Env Var: RCLONE_SWIFT_TENANT
- Type: string
- Default: ""

#### --swift-tenant-id

Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)

- Config: tenant_id
- Env Var: RCLONE_SWIFT_TENANT_ID
- Type: string
- Default: ""

#### --swift-tenant-domain

Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)

- Config: tenant_domain
- Env Var: RCLONE_SWIFT_TENANT_DOMAIN
- Type: string
- Default: ""

#### --swift-region

Region name - optional (OS_REGION_NAME)

- Config: region
- Env Var: RCLONE_SWIFT_REGION
- Type: string
- Default: ""

#### --swift-storage-url

Storage URL - optional (OS_STORAGE_URL)

- Config: storage_url
- Env Var: RCLONE_SWIFT_STORAGE_URL
- Type: string
- Default: ""

#### --swift-auth-token

Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)

- Config: auth_token
- Env Var: RCLONE_SWIFT_AUTH_TOKEN
- Type: string
- Default: ""

#### --swift-application-credential-id

Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)

- Config: application_credential_id
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
- Type: string
- Default: ""

#### --swift-application-credential-name

Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)

- Config: application_credential_name
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
- Type: string
- Default: ""

#### --swift-application-credential-secret

Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)

- Config: application_credential_secret
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
- Type: string
- Default: ""

#### --swift-auth-version

AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)

- Config: auth_version
- Env Var: RCLONE_SWIFT_AUTH_VERSION
- Type: int
- Default: 0

#### --swift-endpoint-type

Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)

- Config: endpoint_type
- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
- Type: string
- Default: "public"
- Examples:
    - "public"
        - Public (default, choose this if not sure)
    - "internal"
        - Internal (use internal service net)
    - "admin"
        - Admin

#### --swift-storage-policy

The storage policy to use when creating a new container

This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.

- Config: storage_policy
- Env Var: RCLONE_SWIFT_STORAGE_POLICY
- Type: string
- Default: ""
- Examples:
    - ""
        - Default
    - "pcs"
        - OVH Public Cloud Storage
    - "pca"
        - OVH Public Cloud Archive

### Advanced Options

Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).

#### --swift-chunk-size

Above this size files will be chunked into a _segments container.

The default for this is 5GB which is its maximum value.

- Config: chunk_size
- Env Var: RCLONE_SWIFT_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5G

#### --swift-no-chunk

Don't chunk files during streaming upload.
When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.

This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM.

Rclone will still chunk files bigger than chunk_size when doing normal copy operations.

- Config: no_chunk
- Env Var: RCLONE_SWIFT_NO_CHUNK
- Type: bool
- Default: false

#### --swift-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_SWIFT_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8

### Modified time ###

The modified time is stored as metadata on the object as `X-Object-Meta-Mtime` as floating point since the epoch accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

### Limitations ###

The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

### Troubleshooting ###

#### Rclone gives Failed to create file system for "remote:": Bad Request ####

Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.

So this most likely means your username / password is wrong. You can investigate further with the `--dump-bodies` flag.

This may also be caused by specifying the region when you shouldn't have (eg OVH).

#### Rclone gives Failed to create file system: Response didn't have storage url and auth token ####

This is most likely caused by forgetting to specify your tenant when setting up a swift remote.

pCloud
-----------------------------------------

Paths are specified as `remote:path`

Paths may be as deep as required, eg `remote:directory/subdirectory`.

The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. `rclone config` walks you through it.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Pcloud
   \ "pcloud"
[snip]
Storage> pcloud
Pcloud App Client Id - leave blank normally.
client_id>
Pcloud App Client Secret - leave blank normally.
client_secret>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use `rclone` like this,

List directories in top level of your pCloud

    rclone lsd remote:

List all the files in your pCloud

    rclone ls remote:

To copy a local directory to a pCloud directory called backup

    rclone copy /home/source remote:backup

### Modified time and hashes ###

pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a modification time pCloud requires the object to be re-uploaded.

pCloud supports MD5 and SHA1 type hashes, so you can use the `--checksum` flag.

#### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \         | 0x5C  | ＼          |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

### Deleting files ###

Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. `rclone cleanup` can be used to empty the trash.

### Root folder ID ###

You can set the `root_folder_id` for rclone. This is the directory (identified by its `Folder ID`) that rclone considers to be the root of your pCloud drive.

Normally you will leave this blank and rclone will determine the correct root to use itself.

However you can set this to restrict rclone to a specific folder hierarchy.

In order to do this you will have to find the `Folder ID` of the directory you wish rclone to display. This will be the `folder` field of the URL when you open the relevant folder in the pCloud web interface.

So if the folder you want rclone to use has a URL which looks like `https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid` in the browser, then you use `5xxxxxxxx8` as the `root_folder_id` in the config.

### Standard Options

Here are the standard options specific to pcloud (Pcloud).

#### --pcloud-client-id

OAuth Client Id

Leave blank normally.

- Config: client_id
- Env Var: RCLONE_PCLOUD_CLIENT_ID
- Type: string
- Default: ""

#### --pcloud-client-secret

OAuth Client Secret

Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_PCLOUD_CLIENT_SECRET
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to pcloud (Pcloud).

#### --pcloud-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_PCLOUD_TOKEN
- Type: string
- Default: ""

#### --pcloud-auth-url

Auth server URL.

Leave blank to use the provider defaults.

- Config: auth_url
- Env Var: RCLONE_PCLOUD_AUTH_URL
- Type: string
- Default: ""

#### --pcloud-token-url

Token server url.

Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_PCLOUD_TOKEN_URL
- Type: string
- Default: ""

#### --pcloud-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_PCLOUD_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

#### --pcloud-root-folder-id

Fill in for rclone to use a non root folder as its starting point.

- Config: root_folder_id
- Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID
- Type: string
- Default: "d0"

#### --pcloud-hostname

Hostname to connect to.

This is normally set when rclone initially does the oauth connection, however you will need to set it by hand if you are using remote config with rclone authorize.

- Config: hostname
- Env Var: RCLONE_PCLOUD_HOSTNAME
- Type: string
- Default: "api.pcloud.com"
- Examples:
    - "api.pcloud.com"
        - Original/US region
    - "eapi.pcloud.com"
        - EU region

premiumize.me
-----------------------------------------

Paths are specified as `remote:path`

Paths may be as deep as required, eg `remote:directory/subdirectory`.

The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you need to do in your browser. `rclone config` walks you through it.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / premiumize.me
   \ "premiumizeme"
[snip]
Storage> premiumizeme
** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **

Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = premiumizeme
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
```

See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use `rclone` like this,

List directories in top level of your premiumize.me

    rclone lsd remote:

List all the files in your premiumize.me

    rclone ls remote:

To copy a local directory to a premiumize.me directory called backup

    rclone copy /home/source remote:backup

### Modified time and hashes ###

premiumize.me does not support modification times or hashes, therefore syncing will default to `--size-only` checking. Note that using `--update` will work.

#### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \         | 0x5C  | ＼          |
| "         | 0x22  | ＂          |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.
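Tying the "Modified time and hashes" note above to a command line: a hedged sketch of a sync where the default size-only checking is supplemented by `--update` (the paths are placeholders):

```
# premiumize.me stores neither modtimes nor hashes, so syncs compare sizes
# only by default; adding --update also skips files whose local modtime is
# older than the time on the remote. (Paths are examples only.)
rclone sync --update /home/source remote:backup
```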
### Standard Options

Here are the standard options specific to premiumizeme (premiumize.me).

#### --premiumizeme-api-key

API Key.

This is not normally used - use oauth instead.

- Config: api_key
- Env Var: RCLONE_PREMIUMIZEME_API_KEY
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to premiumizeme (premiumize.me).

#### --premiumizeme-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_PREMIUMIZEME_ENCODING
- Type: MultiEncoder
- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot

### Limitations ###

Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

premiumize.me file names can't have the `\` or `"` characters in. rclone maps these to and from identical looking unicode equivalents `＼` and `＂`.

premiumize.me only supports filenames up to 255 characters in length.

put.io
---------------------------------

Paths are specified as `remote:path`

put.io paths may be as deep as required, eg `remote:directory/subdirectory`.

The initial setup for put.io involves getting a token from put.io which you need to do in your browser. `rclone config` walks you through it.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> putio
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Put.io
   \ "putio"
[snip]
Storage> putio
** See help for putio backend at: https://rclone.org/putio/ **

Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[putio]
type = putio
token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
putio                putio

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
```

Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
You can then use it like this,

List directories in top level of your put.io

    rclone lsd remote:

List all the files in your put.io

    rclone ls remote:

To copy a local directory to a put.io directory called backup

    rclone copy /home/source remote:backup

#### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \         | 0x5C  | ＼          |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

### Advanced Options

Here are the advanced options specific to putio (Put.io).

#### --putio-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_PUTIO_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

Seafile
----------------------------------------

This is a backend for the [Seafile](https://www.seafile.com/) storage service:

- It works with both the free community edition and the professional edition.
- Seafile versions 6.x and 7.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users

### Root mode vs Library mode ###

There are two distinct modes in which you can set up your remote:

- you point your remote to the **root of the server**, meaning you don't specify a library during the configuration:
Paths are specified as `remote:library`. You may put subdirectories in too, eg `remote:library/path/to/dir`.
- you point your remote to a specific library during the configuration:
Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_)

### Configuration in root mode ###

Here is an example of making a seafile configuration for a user with **no** two-factor authentication. First run

    rclone config

This will guide you through an interactive setup process. To authenticate you will need the URL of your server, your email (or username) and your password.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> seafile
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Seafile
   \ "seafile"
[snip]
Storage> seafile
** See help for seafile backend at: https://rclone.org/seafile/ **

URL of seafile host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to cloud.seafile.com
   \ "https://cloud.seafile.com/"
url> http://my.seafile.server/
User name (usually email address)
Enter a string value. Press Enter for the default ("").
user> me@example.com
Password
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g> y
Enter the password:
password:
Confirm the password:
password:
Two-factor authentication ('true' if the account has 2FA enabled)
Enter a boolean value (true or false). Press Enter for the default ("false").
2fa> false
Name of the library. Leave blank to access all non-encrypted libraries.
Enter a string value. Press Enter for the default ("").
library>
Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
Two-factor authentication is not enabled on this account.
--------------------
[seafile]
type = seafile
url = http://my.seafile.server/
user = me@example.com
pass = *** ENCRYPTED ***
2fa = false
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This remote is called `seafile`. It's pointing to the root of your seafile server and can now be used like this:

See all libraries

    rclone lsd seafile:

Create a new library

    rclone mkdir seafile:library

List the contents of a library

    rclone ls seafile:library

Sync `/home/local/directory` to the remote library, deleting any excess files in the library.

    rclone sync -i /home/local/directory seafile:library

### Configuration in library mode ###

Here's an example of a configuration in library mode with a user that has the two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will attempt to authenticate you:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> seafile
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Seafile
   \ "seafile"
[snip]
Storage> seafile
** See help for seafile backend at: https://rclone.org/seafile/ **

URL of seafile host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to cloud.seafile.com
   \ "https://cloud.seafile.com/"
url> http://my.seafile.server/
User name (usually email address)
Enter a string value. Press Enter for the default ("").
user> me@example.com
Password
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g> y
Enter the password:
password:
Confirm the password:
password:
Two-factor authentication ('true' if the account has 2FA enabled)
Enter a boolean value (true or false). Press Enter for the default ("false").
2fa> true
Name of the library. Leave blank to access all non-encrypted libraries.
Enter a string value. Press Enter for the default ("").
library> My Library
Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
Two-factor authentication: please enter your 2FA code
2fa code> 123456
Authenticating...
Success!
--------------------
[seafile]
type = seafile
url = http://my.seafile.server/
user = me@example.com
pass =
2fa = true
library = My Library
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

You'll notice your password is blank in the configuration. That's because the password is only needed once, to authenticate you during setup.

You specified `My Library` during the configuration. The root of the remote is pointing at the root of the library `My Library`:

See all files in the library:

    rclone lsd seafile:

Create a new directory inside the library

    rclone mkdir seafile:directory

List the contents of a directory

    rclone ls seafile:directory

Sync `/home/local/directory` to the remote library, deleting any excess files in the library.
    rclone sync -i /home/local/directory seafile:

### --fast-list ###

Seafile version 7+ supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](https://rclone.org/docs/#fast-list) for more details. Please note this is not supported on seafile server version 6.x.

#### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| /         | 0x2F  | ／          |
| "         | 0x22  | ＂          |
| \         | 0x5C  | ＼          |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

### Seafile and rclone link ###

Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory:

```
rclone link seafile:seafile-tutorial.doc
http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
```

or if run on a directory you will get:

```
rclone link seafile:dir
http://my.seafile.server/d/9ea2455f6f55478bbb0d/
```

Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will get the exact same link.

### Compatibility ###

It has been actively tested using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions:

- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition

Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.

### Standard Options

Here are the standard options specific to seafile (seafile).

#### --seafile-url

URL of seafile host to connect to

- Config: url
- Env Var: RCLONE_SEAFILE_URL
- Type: string
- Default: ""
- Examples:
    - "https://cloud.seafile.com/"
        - Connect to cloud.seafile.com

#### --seafile-user

User name (usually email address)

- Config: user
- Env Var: RCLONE_SEAFILE_USER
- Type: string
- Default: ""

#### --seafile-pass

Password

**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

- Config: pass
- Env Var: RCLONE_SEAFILE_PASS
- Type: string
- Default: ""

#### --seafile-2fa

Two-factor authentication ('true' if the account has 2FA enabled)

- Config: 2fa
- Env Var: RCLONE_SEAFILE_2FA
- Type: bool
- Default: false

#### --seafile-library

Name of the library. Leave blank to access all non-encrypted libraries.

- Config: library
- Env Var: RCLONE_SEAFILE_LIBRARY
- Type: string
- Default: ""

#### --seafile-library-key

Library password (for encrypted libraries only). Leave blank if you pass it through the command line.

**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

- Config: library_key
- Env Var: RCLONE_SEAFILE_LIBRARY_KEY
- Type: string
- Default: ""

#### --seafile-auth-token

Authentication token

- Config: auth_token
- Env Var: RCLONE_SEAFILE_AUTH_TOKEN
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to seafile (seafile).

#### --seafile-create-library

Should rclone create a library if it doesn't exist

- Config: create_library
- Env Var: RCLONE_SEAFILE_CREATE_LIBRARY
- Type: bool
- Default: false

#### --seafile-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_SEAFILE_ENCODING
- Type: MultiEncoder
- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8

SFTP
----------------------------------------

SFTP is the [Secure (or SSH) File Transfer Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).

The SFTP backend can be used with a number of different providers:

- C14
- rsync.net

SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

Paths are specified as `remote:path`. If the path does not begin with a `/` it is relative to the home directory of the user. An empty path `remote:` refers to the user's home directory.

Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net, on the other hand, requires users to OMIT the leading /.

Here is an example of making an SFTP configuration. First run

    rclone config

This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / SSH/SFTP Connection
   \ "sftp"
[snip]
Storage> sftp
SSH host to connect to
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "example.com"
host> example.com
SSH username, leave blank for current username, ncw
user> sftpuser
SSH port, leave blank to use default (22)
port>
SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> n
Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
key_file>
Remote config
--------------------
[remote]
host = example.com
user = sftpuser
port =
pass =
key_file =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This remote is called `remote` and can now be used like this:

See all directories in the home directory

    rclone lsd remote:

Make a new directory

    rclone mkdir remote:path/to/directory

List the contents of a directory

    rclone ls remote:path/to/directory

Sync `/home/local/directory` to the remote directory, deleting any excess files in the directory.

    rclone sync -i /home/local/directory remote:directory

### SSH Authentication ###

The SFTP remote supports three authentication methods:

  * Password
  * Key file
  * ssh-agent

Key files should be PEM-encoded private key files. For instance `/home/$USER/.ssh/id_rsa`. Only unencrypted OpenSSH or PEM encrypted files are supported.

The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line ('\n' or '\r\n') separating lines. i.e.

    key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----

This will generate it correctly for key_pem for use in the config:

    awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa

If you don't specify `pass`, `key_file`, or `key_pem` then rclone will attempt to contact an ssh-agent.

You can also specify `key_use_agent` to force the usage of an ssh-agent. In this case `key_file` or `key_pem` can also be specified to force the usage of a specific key in the ssh-agent.

Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.

If you set the `--sftp-ask-password` option, rclone will prompt for a password when one is needed and no password has been configured.
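As a concrete illustration of the key file method above, here is a hedged sketch of a config file entry; the host, user and key path are placeholders:

```
# Example SFTP config entry using an unencrypted PEM-encoded key file.
# host, user and the key path are placeholders - adjust for your server.
[remote]
type = sftp
host = example.com
user = sftpuser
port = 22
key_file = ~/.ssh/id_rsa
```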
### ssh-agent on macOS ###

Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, eg

    eval `ssh-agent -s` && ssh-add -A

And then at the end of the session

    eval `ssh-agent -k`

These commands can be used in scripts of course.

### Modified time ###

Modified times are stored on the server to 1 second precision.

Modified times are used in syncing and are fully supported.

Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option `set_modtime = false` in your rclone backend configuration to disable this behaviour.

### Standard Options

Here are the standard options specific to sftp (SSH/SFTP Connection).

#### --sftp-host

SSH host to connect to

- Config: host
- Env Var: RCLONE_SFTP_HOST
- Type: string
- Default: ""
- Examples:
    - "example.com"
        - Connect to example.com

#### --sftp-user

SSH username, leave blank for current username, ncw

- Config: user
- Env Var: RCLONE_SFTP_USER
- Type: string
- Default: ""

#### --sftp-port

SSH port, leave blank to use default (22)

- Config: port
- Env Var: RCLONE_SFTP_PORT
- Type: string
- Default: ""

#### --sftp-pass

SSH password, leave blank to use ssh-agent.

**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

- Config: pass
- Env Var: RCLONE_SFTP_PASS
- Type: string
- Default: ""

#### --sftp-key-pem

Raw PEM-encoded private key. If specified, it will override the key_file parameter.

- Config: key_pem
- Env Var: RCLONE_SFTP_KEY_PEM
- Type: string
- Default: ""

#### --sftp-key-file

Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

- Config: key_file
- Env Var: RCLONE_SFTP_KEY_FILE
- Type: string
- Default: ""

#### --sftp-key-file-pass

The passphrase to decrypt the PEM-encoded private key file.

Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can't be used.

**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

- Config: key_file_pass
- Env Var: RCLONE_SFTP_KEY_FILE_PASS
- Type: string
- Default: ""

#### --sftp-key-use-agent

When set, this forces the usage of the ssh-agent.

When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is requested from the ssh-agent. This allows rclone to avoid `Too many authentication failures for *username*` errors when the ssh-agent contains many keys.

- Config: key_use_agent
- Env Var: RCLONE_SFTP_KEY_USE_AGENT
- Type: bool
- Default: false

#### --sftp-use-insecure-cipher

Enable the use of insecure ciphers and key exchange methods.

This enables the use of the following insecure ciphers and key exchange methods:

- aes128-cbc
- aes192-cbc
- aes256-cbc
- 3des-cbc
- diffie-hellman-group-exchange-sha256
- diffie-hellman-group-exchange-sha1

Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.

- Config: use_insecure_cipher
- Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
- Type: bool
- Default: false
- Examples:
    - "false"
        - Use default Cipher list.
    - "true"
        - Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange.
#### --sftp-disable-hashcheck

Disable the execution of SSH commands to determine if remote file hashing is available. Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.

- Config: disable_hashcheck
- Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
- Type: bool
- Default: false

### Advanced Options

Here are the advanced options specific to sftp (SSH/SFTP Connection).

#### --sftp-ask-password

Allow asking for SFTP password when needed.

If this is set and no password is supplied then rclone will:
- ask for a password
- not contact the ssh agent

- Config: ask_password
- Env Var: RCLONE_SFTP_ASK_PASSWORD
- Type: bool
- Default: false

#### --sftp-path-override

Override path used by SSH connection.

This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes.

Shared folders can be found in directories representing volumes

    rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory

Home directory can be found in a shared folder called "home"

    rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory

- Config: path_override
- Env Var: RCLONE_SFTP_PATH_OVERRIDE
- Type: string
- Default: ""

#### --sftp-set-modtime

Set the modified time on the remote if set.

- Config: set_modtime
- Env Var: RCLONE_SFTP_SET_MODTIME
- Type: bool
- Default: true

#### --sftp-md5sum-command

The command used to read md5 hashes. Leave blank for autodetect.

- Config: md5sum_command
- Env Var: RCLONE_SFTP_MD5SUM_COMMAND
- Type: string
- Default: ""

#### --sftp-sha1sum-command

The command used to read sha1 hashes. Leave blank for autodetect.

- Config: sha1sum_command
- Env Var: RCLONE_SFTP_SHA1SUM_COMMAND
- Type: string
- Default: ""

#### --sftp-skip-links

Set to skip any symlinks and any other non-regular files.

- Config: skip_links
- Env Var: RCLONE_SFTP_SKIP_LINKS
- Type: bool
- Default: false

#### --sftp-subsystem

Specifies the SSH2 subsystem on the remote host.

- Config: subsystem
- Env Var: RCLONE_SFTP_SUBSYSTEM
- Type: string
- Default: "sftp"

#### --sftp-server-command

Specifies the path or command to run an sftp server on the remote host. The subsystem option is ignored when server_command is defined.

- Config: server_command
- Env Var: RCLONE_SFTP_SERVER_COMMAND
- Type: string
- Default: ""

### Limitations ###

SFTP supports checksums if the same login has shell access and `md5sum` or `sha1sum` as well as `echo` are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option `disable_hashcheck` to `true` to disable checksumming.

SFTP also supports `about` if the same login has shell access and `df` is in the remote's PATH. `about` will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. `about` will fail if it does not have shell access or if `df` is not in the remote's PATH.

Note that with some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using `disable_hashcheck` is a good idea.

The only ssh agent supported under Windows is Putty's pageant.
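If you are unsure whether remote checksumming works on a given server, one quick check is to request a hash directly (the remote name and path are illustrative); if the server lacks shell access or the hashing binaries, you can expect empty hashes or an error:

    rclone md5sum remote:path/to/dir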
The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the `use_insecure_cipher` setting in the configuration file to `true`. Further details on the insecurity of this cipher can be found [in this paper](http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).

SFTP isn't supported under plan9 until [this issue](https://github.com/pkg/sftp/issues/156) is fixed.

Note that since SFTP isn't HTTP based the following flags don't work with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`

Note that `--timeout` isn't supported (but `--contimeout` is).

## C14 {#c14}

C14 is supported through the SFTP backend.

See [C14's documentation](https://www.online.net/en/storage/c14-cold-storage)

## rsync.net {#rsync-net}

rsync.net is supported through the SFTP backend.

See [rsync.net's documentation of rclone examples](https://www.rsync.net/products/rclone.html).

SugarSync
-----------------------------------------

[SugarSync](https://sugarsync.com) is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.

The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. `rclone config` walks you through it.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Sugarsync
   \ "sugarsync"
[snip]
Storage> sugarsync
** See help for sugarsync backend at: https://rclone.org/sugarsync/ **

Sugarsync App ID.
Leave blank to use rclone's.
Enter a string value. Press Enter for the default ("").
app_id>
Sugarsync Access Key ID.
Leave blank to use rclone's.
Enter a string value. Press Enter for the default ("").
access_key_id>
Sugarsync Private Access Key
Leave blank to use rclone's.
Enter a string value. Press Enter for the default ("").
private_access_key>
Permanently delete files if true
otherwise put them in the deleted files.
Enter a boolean value (true or false). Press Enter for the default ("false").
hard_delete>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
Username (email address)> nick@craig-wood.com
Your Sugarsync password is only required during setup and will not be stored.
password:

--------------------
[remote]
type = sugarsync
refresh_token = https://api.sugarsync.com/app-authorization/XXXXXXXXXXXXXXXXXX
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

Note that the config asks for your email and password but doesn't store them, it only uses them to get the initial token.

Once configured you can then use `rclone` like this,

List directories (sync folders) in top level of your SugarSync

    rclone lsd remote:

List all the files in your SugarSync folder "Test"

    rclone ls remote:Test

To copy a local directory to a SugarSync folder called backup

    rclone copy /home/source remote:backup

Paths are specified as `remote:path`

Paths may be as deep as required, eg `remote:directory/subdirectory`.

**NB** you can't create files in the top level folder - you have to create a folder first, which rclone will create as a "Sync Folder" with SugarSync.
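For example, to create a new Sync Folder and copy a local directory into it (the folder name is illustrative):

    rclone mkdir remote:Backup
    rclone copy /home/source remote:Backup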
### Modified time and hashes ### SugarSync does not support modification times or hashes, therefore syncing will default to `--size-only` checking. Note that using `--update` will work as rclone can read the time files were uploaded. #### Restricted filename characters SugarSync replaces the [default restricted characters set](https://rclone.org/overview/#restricted-characters) except for DEL. Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in XML strings. ### Deleting files ### Deleted files will be moved to the "Deleted items" folder by default. However you can supply the flag `--sugarsync-hard-delete` or set the config parameter `hard_delete = true` if you would like files to be deleted straight away. ### Standard Options Here are the standard options specific to sugarsync (Sugarsync). #### --sugarsync-app-id Sugarsync App ID. Leave blank to use rclone's. - Config: app_id - Env Var: RCLONE_SUGARSYNC_APP_ID - Type: string - Default: "" #### --sugarsync-access-key-id Sugarsync Access Key ID. Leave blank to use rclone's. - Config: access_key_id - Env Var: RCLONE_SUGARSYNC_ACCESS_KEY_ID - Type: string - Default: "" #### --sugarsync-private-access-key Sugarsync Private Access Key Leave blank to use rclone's. - Config: private_access_key - Env Var: RCLONE_SUGARSYNC_PRIVATE_ACCESS_KEY - Type: string - Default: "" #### --sugarsync-hard-delete Permanently delete files if true otherwise put them in the deleted files. - Config: hard_delete - Env Var: RCLONE_SUGARSYNC_HARD_DELETE - Type: bool - Default: false ### Advanced Options Here are the advanced options specific to sugarsync (Sugarsync). #### --sugarsync-refresh-token Sugarsync refresh token Leave blank normally, will be auto configured by rclone. - Config: refresh_token - Env Var: RCLONE_SUGARSYNC_REFRESH_TOKEN - Type: string - Default: "" #### --sugarsync-authorization Sugarsync authorization Leave blank normally, will be auto configured by rclone. - Config: authorization - Env Var: RCLONE_SUGARSYNC_AUTHORIZATION - Type: string - Default: "" #### --sugarsync-authorization-expiry Sugarsync authorization expiry Leave blank normally, will be auto configured by rclone. - Config: authorization_expiry - Env Var: RCLONE_SUGARSYNC_AUTHORIZATION_EXPIRY - Type: string - Default: "" #### --sugarsync-user Sugarsync user Leave blank normally, will be auto configured by rclone. - Config: user - Env Var: RCLONE_SUGARSYNC_USER - Type: string - Default: "" #### --sugarsync-root-id Sugarsync root id Leave blank normally, will be auto configured by rclone. - Config: root_id - Env Var: RCLONE_SUGARSYNC_ROOT_ID - Type: string - Default: "" #### --sugarsync-deleted-id Sugarsync deleted folder id Leave blank normally, will be auto configured by rclone. - Config: deleted_id - Env Var: RCLONE_SUGARSYNC_DELETED_ID - Type: string - Default: "" #### --sugarsync-encoding This sets the encoding for the backend. See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_SUGARSYNC_ENCODING - Type: MultiEncoder - Default: Slash,Ctl,InvalidUtf8,Dot Tardigrade ----------------------------------------- [Tardigrade](https://tardigrade.io) is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner. ## Setup To make a new Tardigrade configuration you need one of the following: * Access Grant that someone else shared with you. 
* [API Key](https://documentation.tardigrade.io/getting-started/uploading-your-first-object/create-an-api-key) of a Tardigrade project you are a member of.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

### Setup with access grant

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Tardigrade Decentralized Cloud Storage
   \ "tardigrade"
[snip]
Storage> tardigrade
** See help for tardigrade backend at: https://rclone.org/tardigrade/ **

Choose an authentication method.
Enter a string value. Press Enter for the default ("existing").
Choose a number from below, or type in your own value
 1 / Use an existing access grant.
   \ "existing"
 2 / Create a new access grant from satellite address, API key, and passphrase.
   \ "new"
provider> existing
Access Grant.
Enter a string value. Press Enter for the default ("").
access_grant> your-access-grant-received-by-someone-else
Remote config
--------------------
[remote]
type = tardigrade
access_grant = your-access-grant-received-by-someone-else
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

### Setup with API key and passphrase

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Tardigrade Decentralized Cloud Storage
   \ "tardigrade"
[snip]
Storage> tardigrade
** See help for tardigrade backend at: https://rclone.org/tardigrade/ **

Choose an authentication method.
Enter a string value. Press Enter for the default ("existing").
Choose a number from below, or type in your own value
 1 / Use an existing access grant.
   \ "existing"
 2 / Create a new access grant from satellite address, API key, and passphrase.
   \ "new"
provider> new
Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
Enter a string value. Press Enter for the default ("us-central-1.tardigrade.io").
Choose a number from below, or type in your own value
 1 / US Central 1
   \ "us-central-1.tardigrade.io"
 2 / Europe West 1
   \ "europe-west-1.tardigrade.io"
 3 / Asia East 1
   \ "asia-east-1.tardigrade.io"
satellite_address> 1
API Key.
Enter a string value. Press Enter for the default ("").
api_key> your-api-key-for-your-tardigrade-project
Encryption Passphrase. To access existing objects enter passphrase used for uploading.
Enter a string value. Press Enter for the default ("").
passphrase> your-human-readable-encryption-passphrase
Remote config
--------------------
[remote]
type = tardigrade
satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777
api_key = your-api-key-for-your-tardigrade-project
passphrase = your-human-readable-encryption-passphrase
access_grant = the-access-grant-generated-from-the-api-key-and-passphrase
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

## Usage

Paths are specified as `remote:bucket` (or `remote:` for the `lsf` command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.

Once configured you can then use `rclone` like this.

### Create a new bucket

Use the `mkdir` command to create a new bucket, e.g. `bucket`.

    rclone mkdir remote:bucket

### List all buckets

Use the `lsf` command to list all buckets.

    rclone lsf remote:

Note the colon (`:`) character at the end of the command line.

### Delete a bucket

Use the `rmdir` command to delete an empty bucket.

    rclone rmdir remote:bucket

Use the `purge` command to delete a non-empty bucket with all its content.

    rclone purge remote:bucket

### Upload objects

Use the `copy` command to upload an object.

    rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/

The `--progress` flag is for displaying progress information. Remove it if you don't need this information.

Use a folder in the local path to upload all its objects.

    rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/

Only modified files will be copied.

### List objects

Use the `ls` command to list recursively all objects in a bucket.

    rclone ls remote:bucket

Add the folder to the remote path to list recursively all objects in this folder.

    rclone ls remote:bucket/path/to/dir/

Use the `lsf` command to list non-recursively all objects in a bucket or a folder.

    rclone lsf remote:bucket/path/to/dir/

### Download objects

Use the `copy` command to download an object.

    rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/

The `--progress` flag is for displaying progress information. Remove it if you don't need this information.

Use a folder in the remote path to download all its objects.

    rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/

### Delete objects

Use the `deletefile` command to delete a single object.

    rclone deletefile remote:bucket/path/to/dir/file.ext

Use the `delete` command to delete all objects in a folder.

    rclone delete remote:bucket/path/to/dir/

### Print the total size of objects

Use the `size` command to print the total size of objects in a bucket or a folder.

    rclone size remote:bucket/path/to/dir/

### Sync two Locations

Use the `sync` command to sync the source to the destination, changing the destination only, deleting any excess files.

    rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/

The `--progress` flag is for displaying progress information.
Remove it if you don't need this information.

Since this can cause data loss, test first with the `--dry-run` flag to see exactly what would be copied and deleted.

The sync can also be done from Tardigrade to the local file system.

    rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/

Or between two Tardigrade buckets.

    rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/

Or even between another cloud storage and Tardigrade.

    rclone sync -i --progress s3:bucket/path/to/dir/ tardigrade:bucket/path/to/dir/

### Standard Options

Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage).

#### --tardigrade-provider

Choose an authentication method.

- Config: provider
- Env Var: RCLONE_TARDIGRADE_PROVIDER
- Type: string
- Default: "existing"
- Examples:
    - "existing"
        - Use an existing access grant.
    - "new"
        - Create a new access grant from satellite address, API key, and passphrase.

#### --tardigrade-access-grant

Access Grant.

- Config: access_grant
- Env Var: RCLONE_TARDIGRADE_ACCESS_GRANT
- Type: string
- Default: ""

#### --tardigrade-satellite-address

Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
- Config: satellite_address
- Env Var: RCLONE_TARDIGRADE_SATELLITE_ADDRESS
- Type: string
- Default: "us-central-1.tardigrade.io"
- Examples:
    - "us-central-1.tardigrade.io"
        - US Central 1
    - "europe-west-1.tardigrade.io"
        - Europe West 1
    - "asia-east-1.tardigrade.io"
        - Asia East 1

#### --tardigrade-api-key

API Key.

- Config: api_key
- Env Var: RCLONE_TARDIGRADE_API_KEY
- Type: string
- Default: ""

#### --tardigrade-passphrase

Encryption Passphrase. To access existing objects enter passphrase used for uploading.

- Config: passphrase
- Env Var: RCLONE_TARDIGRADE_PASSPHRASE
- Type: string
- Default: ""

Union
-----------------------------------------

The `union` remote provides a unification similar to UnionFS using other remotes.

Paths may be as deep as required or a local path, eg `remote:directory/subdirectory` or `/directory/subdirectory`.

During the initial setup with `rclone config` you will specify the upstream remotes as a space separated list. The upstream remotes can either be local paths or other remotes.

Attributes `:ro` and `:nc` can be attached to the end of a path to tag the remote as **read only** or **no create**, eg `remote:directory/subdirectory:ro` or `remote:directory/subdirectory:nc`.

Subfolders can be used in upstream remotes. Assume a union remote named `backup` with the remotes `mydrive:private/backup mydrive2:/backup`. Invoking `rclone mkdir backup:desktop` is exactly the same as invoking `rclone mkdir mydrive2:/backup/desktop`.

There will be no special handling of paths containing `..` segments. Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking `rclone mkdir mydrive2:/backup/../desktop`.

### Behavior / Policies

The behavior of the union backend is inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs). All functions are grouped into 3 categories: **action**, **create** and **search**. These functions and categories can be assigned a policy which dictates what file or directory is chosen when performing that behavior. Any policy can be assigned to a function or category though some may not be very useful in practice. For instance: **rand** (random) may be useful for file creation (create) but could lead to very odd behavior if used for `delete` if there were more than one copy of the file.

#### Function / Category classifications

| Category | Description              | Functions                                                                          |
|----------|--------------------------|------------------------------------------------------------------------------------|
| action   | Writing Existing file    | move, rmdir, rmdirs, delete, purge and copy, sync (as destination when file exist) |
| create   | Create non-existing file | copy, sync (as destination when file not exist)                                    |
| search   | Reading and listing file | ls, lsd, lsl, cat, md5sum, sha1sum and copy, sync (as source)                      |
| N/A      |                          | size, about                                                                        |

#### Path Preservation

Policies, as described below, are of two basic types. `path preserving` and `non-path preserving`.

All policies which start with `ep` (**epff**, **eplfs**, **eplus**, **epmfs**, **eprand**) are `path preserving`. `ep` stands for `existing path`.

A path preserving policy will only consider upstreams where the relative path being accessed already exists.

When using non-path preserving policies paths will be created in target upstreams as necessary.

#### Quota Relevant Policies

Some policies rely on quota information. These policies should be used only if your upstreams support the respective quota fields.
| Policy     | Required Field |
|------------|----------------|
| lfs, eplfs | Free           |
| mfs, epmfs | Free           |
| lus, eplus | Used           |
| lno, eplno | Objects        |

To check if your upstream supports the field, run `rclone about remote: [flags]` and see if the required field exists.

#### Filters

Policies basically search upstream remotes and create a list of files / paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting but filtering is mostly uniform as described below.

* No **search** policies filter.
* All **action** policies will filter out remotes which are tagged as **read-only**.
* All **create** policies will filter out remotes which are tagged **read-only** or **no-create**.

If all remotes are filtered an error will be returned.

#### Policy descriptions

The policy definitions are inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs) but not exactly the same. Some policy definitions may differ due to the much larger latency of remote file systems.

| Policy           | Description                                                |
|------------------|------------------------------------------------------------|
| all | Search category: same as **epall**. Action category: same as **epall**. Create category: act on all upstreams. |
| epall (existing path, all) | Search category: Given this order configured, act on the first one found where the relative path exists. Action category: apply to all found. Create category: act on all upstreams where the relative path exists. |
| epff (existing path, first found) | Act on the first one found, by the time upstreams reply, where the relative path exists. |
| eplfs (existing path, least free space) | Of all the upstreams on which the relative path exists choose the one with the least free space. |
| eplus (existing path, least used space) | Of all the upstreams on which the relative path exists choose the one with the least used space. |
| eplno (existing path, least number of objects) | Of all the upstreams on which the relative path exists choose the one with the least number of objects. |
| epmfs (existing path, most free space) | Of all the upstreams on which the relative path exists choose the one with the most free space. |
| eprand (existing path, random) | Calls **epall** and then randomizes. Returns only one upstream. |
| ff (first found) | Search category: same as **epff**. Action category: same as **epff**. Create category: Act on the first one found by the time upstreams reply. |
| lfs (least free space) | Search category: same as **eplfs**. Action category: same as **eplfs**. Create category: Pick the upstream with the least available free space. |
| lus (least used space) | Search category: same as **eplus**. Action category: same as **eplus**. Create category: Pick the upstream with the least used space. |
| lno (least number of objects) | Search category: same as **eplno**. Action category: same as **eplno**. Create category: Pick the upstream with the least number of objects. |
| mfs (most free space) | Search category: same as **epmfs**. Action category: same as **epmfs**. Create category: Pick the upstream with the most available free space. |
| newest | Pick the file / directory with the largest mtime. |
| rand (random) | Calls **all** and then randomizes. Returns only one upstream. |

### Setup

Here is an example of how to make a union called `remote` for local folders.
First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Union merges the contents of several remotes \ "union" [snip] Storage> union List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc. Enter a string value. Press Enter for the default (""). upstreams> Policy to choose upstream on ACTION class. Enter a string value. Press Enter for the default ("epall"). action_policy> Policy to choose upstream on CREATE class. Enter a string value. Press Enter for the default ("epmfs"). create_policy> Policy to choose upstream on SEARCH class. Enter a string value. Press Enter for the default ("ff"). search_policy> Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. Enter a signed integer. Press Enter for the default ("120"). cache_time> Remote config -------------------- [remote] type = union upstreams = C:\dir1 C:\dir2 C:\dir3 -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Current remotes: Name Type ==== ==== remote union e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> q ``` Once configured you can then use `rclone` like this, List directories in top level in `C:\dir1`, `C:\dir2` and `C:\dir3` rclone lsd remote: List all the files in `C:\dir1`, `C:\dir2` and `C:\dir3` rclone ls remote: Copy another local directory to the union directory called source, which will be placed into `C:\dir3` rclone copy C:\source remote:source ### Standard Options Here are the standard options specific to union (Union merges the contents of several upstream fs). #### --union-upstreams List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc. - Config: upstreams - Env Var: RCLONE_UNION_UPSTREAMS - Type: string - Default: "" #### --union-action-policy Policy to choose upstream on ACTION category. - Config: action_policy - Env Var: RCLONE_UNION_ACTION_POLICY - Type: string - Default: "epall" #### --union-create-policy Policy to choose upstream on CREATE category. - Config: create_policy - Env Var: RCLONE_UNION_CREATE_POLICY - Type: string - Default: "epmfs" #### --union-search-policy Policy to choose upstream on SEARCH category. - Config: search_policy - Env Var: RCLONE_UNION_SEARCH_POLICY - Type: string - Default: "ff" #### --union-cache-time Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. - Config: cache_time - Env Var: RCLONE_UNION_CACHE_TIME - Type: int - Default: 120 WebDAV ----------------------------------------- Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features. Here is an example of how to make a remote called `remote`. 
First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Webdav
   \ "webdav"
[snip]
Storage> webdav
URL of http host to connect to
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "https://example.com"
url> https://example.com/remote.php/webdav/
Name of the Webdav site/service/software you are using
Choose a number from below, or type in your own value
 1 / Nextcloud
   \ "nextcloud"
 2 / Owncloud
   \ "owncloud"
 3 / Sharepoint
   \ "sharepoint"
 4 / Other site/service or software
   \ "other"
vendor> 1
User name
user> user
Password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
Bearer token instead of user/pass (eg a Macaroon)
bearer_token>
Remote config
--------------------
[remote]
type = webdav
url = https://example.com/remote.php/webdav/
vendor = nextcloud
user = user
pass = *** ENCRYPTED ***
bearer_token =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

Once configured you can then use `rclone` like this,

List directories in top level of your WebDAV

    rclone lsd remote:

List all the files in your WebDAV

    rclone ls remote:

To copy a local directory to a WebDAV directory called backup

    rclone copy /home/source remote:backup

### Modified time and hashes ###

Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.

Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.

### Standard Options

Here are the standard options specific to webdav (Webdav).

#### --webdav-url

URL of http host to connect to

- Config: url
- Env Var: RCLONE_WEBDAV_URL
- Type: string
- Default: ""
- Examples:
    - "https://example.com"
        - Connect to example.com

#### --webdav-vendor

Name of the Webdav site/service/software you are using

- Config: vendor
- Env Var: RCLONE_WEBDAV_VENDOR
- Type: string
- Default: ""
- Examples:
    - "nextcloud"
        - Nextcloud
    - "owncloud"
        - Owncloud
    - "sharepoint"
        - Sharepoint
    - "other"
        - Other site/service or software

#### --webdav-user

User name

- Config: user
- Env Var: RCLONE_WEBDAV_USER
- Type: string
- Default: ""

#### --webdav-pass

Password.

**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

- Config: pass
- Env Var: RCLONE_WEBDAV_PASS
- Type: string
- Default: ""

#### --webdav-bearer-token

Bearer token instead of user/pass (eg a Macaroon)

- Config: bearer_token
- Env Var: RCLONE_WEBDAV_BEARER_TOKEN
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to webdav (Webdav).

#### --webdav-bearer-token-command

Command to run to get a bearer token

- Config: bearer_token_command
- Env Var: RCLONE_WEBDAV_BEARER_TOKEN_COMMAND
- Type: string
- Default: ""

## Provider notes ##

See below for notes on specific providers.

### Owncloud ###

Click on the settings cog in the bottom right of the page and this will show the WebDAV URL that rclone needs in the config step.
It will look something like `https://example.com/remote.php/webdav/`.

Owncloud supports modified times using the `X-OC-Mtime` header.

### Nextcloud ###

This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (`rcat`) whereas Owncloud does. This [may be fixed](https://github.com/nextcloud/nextcloud-snap/issues/365) in the future.

### Sharepoint ###

Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education accounts. This feature is only needed for a few of these accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner [github#1975](https://github.com/rclone/rclone/issues/1975)

This means that these accounts can't be added using the official API (other accounts should work with the "onedrive" option). However, it is possible to access them using webdav.

To use a sharepoint remote with rclone, add it like this: First, you need to get your remote's URL:

- Go [here](https://onedrive.live.com/about/en-us/signin/) to open your OneDrive or to sign in
- Now take a look at your address bar, the URL should look like this: `https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx`

You'll only need this URL up to the email address. After that, you'll most likely want to add "/Documents". That subdirectory contains the actual data stored on your OneDrive.

Add the remote to rclone like this: Configure the `url` as `https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents` and use your normal account email and password for `user` and `pass`. If you have 2FA enabled, you have to generate an app password. Set the `vendor` to `sharepoint`.

Your config file should look like this:

```
[sharepoint]
type = webdav
url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
vendor = sharepoint
user = YourEmailAddress
pass = encryptedpassword
```

#### Required Flags for SharePoint ####

As SharePoint does some special things with uploaded documents, you won't be able to use the document's size or hash to tell whether a file has been changed since the upload, or which file is newer.

For rclone commands that copy files to/from SharePoint (such as copy and sync), especially Office files (.docx, .xlsx, etc.), you should append these flags to ensure rclone uses the "Last Modified" datetime property to compare your documents:

```
--ignore-size --ignore-checksum --update
```

### dCache ###

dCache is a storage system that supports many protocols and authentication/authorisation schemes. For WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos, and various bearer tokens, including [Macaroons](https://www.dcache.org/manuals/workshop-2017-05-29-Umea/000-Final/anupam_macaroons_v02.pdf) and [OpenID-Connect](https://en.wikipedia.org/wiki/OpenID_Connect) access tokens.

Configure as normal using the `other` type. Don't enter a username or password, instead enter your Macaroon as the `bearer_token`.

The config will end up looking something like this.

```
[dcache]
type = webdav
url = https://dcache...
vendor = other
user =
pass =
bearer_token = your-macaroon
```

There is a [script](https://github.com/sara-nl/GridScripts/blob/master/get-macaroon) that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file.

Macaroons may also be obtained from the dCacheView web-browser/JavaScript client that comes with dCache.
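With the Macaroon stored as the `bearer_token`, the remote behaves like any other; for instance, to list the top level (remote name as in the example config above):

    rclone lsd dcache: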
### OpenID-Connect ###

dCache also supports authenticating with OpenID-Connect access tokens. OpenID-Connect is a protocol (based on OAuth 2.0) that allows services to identify users who have authenticated with some central service.

Support for OpenID-Connect in rclone is currently achieved using another software package called [oidc-agent](https://github.com/indigo-dc/oidc-agent). This is a command-line tool that facilitates obtaining an access token. Once installed and configured, an access token is obtained by running the `oidc-token` command. The following example shows a (shortened) access token obtained from the *XDC* OIDC Provider.

```
paul@celebrimbor:~$ oidc-token XDC
eyJraWQ[...]QFXDt0
paul@celebrimbor:~$
```

**Note** Before the `oidc-token` command will work, the refresh token must be loaded into the oidc agent. This is done with the `oidc-add` command (e.g., `oidc-add XDC`). This is typically done once per login session. Full details on this and how to register oidc-agent with your OIDC Provider are provided in the [oidc-agent documentation](https://indigo-dc.gitbooks.io/oidc-agent/).

The rclone `bearer_token_command` configuration option is used to fetch the access token from oidc-agent.

Configure as a normal WebDAV endpoint, using the 'other' vendor, leaving the username and password empty. When prompted, choose to edit the advanced config and enter the command to get a bearer token (e.g., `oidc-token XDC`).

The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the *XDC* OIDC Provider.

```
[dcache]
type = webdav
url = https://dcache.example.org/
vendor = other
bearer_token_command = oidc-token XDC
```

Yandex Disk
----------------------------------------

[Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](https://yandex.com).

Here is an example of making a yandex configuration. First run

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Yandex Disk
   \ "yandex"
[snip]
Storage> yandex
Yandex Client Id - leave blank normally.
client_id>
Yandex Client Secret - leave blank normally.
client_secret>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this,

See top level directories

    rclone lsd remote:

Make a new directory

    rclone mkdir remote:directory

List the contents of a directory

    rclone ls remote:directory

Sync `/home/local/directory` to the remote path, deleting any excess files in the path.

    rclone sync -i /home/local/directory remote:directory

Yandex paths may be as deep as required, eg `remote:directory/subdirectory`.

### Modified time ###

Modified times are supported and are stored with 1 ns accuracy in custom metadata called `rclone_modified`, in RFC3339 format with nanoseconds.

### MD5 checksums ###

MD5 checksums are natively supported by Yandex Disk.

### Emptying Trash ###

If you wish to empty your trash you can use the `rclone cleanup remote:` command which will permanently delete all your trashed files. This command does not take any path arguments.

### Quota information ###

To view your current quota you can use the `rclone about remote:` command which will display your usage limit (quota) and the current usage.

#### Restricted filename characters

The [default restricted characters set](https://rclone.org/overview/#restricted-characters) are replaced.

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

### Limitations ###

When uploading very large files (bigger than about 5GB) you will need to increase the `--timeout` parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see `net/http: timeout awaiting response headers` errors in the logs if this is happening. Setting the timeout to twice the maximum file size in GB should be enough, so if you want to upload a 30GB file set a timeout of `2 * 30 = 60m`, that is `--timeout 60m`.

### Standard Options

Here are the standard options specific to yandex (Yandex Disk).

#### --yandex-client-id

OAuth Client Id

Leave blank normally.

- Config: client_id
- Env Var: RCLONE_YANDEX_CLIENT_ID
- Type: string
- Default: ""

#### --yandex-client-secret

OAuth Client Secret

Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_YANDEX_CLIENT_SECRET
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to yandex (Yandex Disk).

#### --yandex-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_YANDEX_TOKEN
- Type: string
- Default: ""

#### --yandex-auth-url

Auth server URL. Leave blank to use the provider defaults.

- Config: auth_url
- Env Var: RCLONE_YANDEX_AUTH_URL
- Type: string
- Default: ""

#### --yandex-token-url

Token server url. Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_YANDEX_TOKEN_URL
- Type: string
- Default: ""

#### --yandex-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_YANDEX_ENCODING
- Type: MultiEncoder
- Default: Slash,Del,Ctl,InvalidUtf8,Dot

Local Filesystem
-------------------------------------------

Local paths are specified as normal filesystem paths, eg `/path/to/wherever`, so

    rclone sync -i /home/source /tmp/destination

Will sync `/home/source` to `/tmp/destination`

These can be configured into the config file for consistency's sake, but it is probably easier not to.
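Should you want a named local remote anyway, the entry is as minimal as it gets (remote name illustrative):

```
[mydisk]
type = local
```

After which `rclone ls mydisk:/path/to/dir` behaves the same as `rclone ls /path/to/dir`.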
### Modified time ###

Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.

### Filenames ###

Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X. There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded file names.

If you are using an old Linux filesystem with non UTF-8 file names (eg latin1) then you can use the `convmv` tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers.

If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with a quoted representation of the invalid bytes. The name `gro\xdf` will be transferred as `gro‛DF`. `rclone` will emit a debug message in this case (use `-v` to see), eg

```
Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
```

#### Restricted characters

On non-Windows platforms the following characters are replaced when handling file names.

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |

When running on Windows the following characters are replaced. This list is based on the [Windows file naming conventions](https://docs.microsoft.com/de-de/windows/desktop/FileIO/naming-a-file#naming-conventions).

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| SOH       | 0x01  | ␁           |
| STX       | 0x02  | ␂           |
| ETX       | 0x03  | ␃           |
| EOT       | 0x04  | ␄           |
| ENQ       | 0x05  | ␅           |
| ACK       | 0x06  | ␆           |
| BEL       | 0x07  | ␇           |
| BS        | 0x08  | ␈           |
| HT        | 0x09  | ␉           |
| LF        | 0x0A  | ␊           |
| VT        | 0x0B  | ␋           |
| FF        | 0x0C  | ␌           |
| CR        | 0x0D  | ␍           |
| SO        | 0x0E  | ␎           |
| SI        | 0x0F  | ␏           |
| DLE       | 0x10  | ␐           |
| DC1       | 0x11  | ␑           |
| DC2       | 0x12  | ␒           |
| DC3       | 0x13  | ␓           |
| DC4       | 0x14  | ␔           |
| NAK       | 0x15  | ␕           |
| SYN       | 0x16  | ␖           |
| ETB       | 0x17  | ␗           |
| CAN       | 0x18  | ␘           |
| EM        | 0x19  | ␙           |
| SUB       | 0x1A  | ␚           |
| ESC       | 0x1B  | ␛           |
| FS        | 0x1C  | ␜           |
| GS        | 0x1D  | ␝           |
| RS        | 0x1E  | ␞           |
| US        | 0x1F  | ␟           |
| /         | 0x2F  | ／          |
| "         | 0x22  | ＂          |
| *         | 0x2A  | ＊          |
| :         | 0x3A  | ：          |
| <         | 0x3C  | ＜          |
| >         | 0x3E  | ＞          |
| ?         | 0x3F  | ？          |
| \         | 0x5C  | ＼          |
| \|        | 0x7C  | ｜          |

File names on Windows can also not end with the following characters. These only get replaced if they are the last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |
| .         | 0x2E  | ．          |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be converted to UTF-16.

### Long paths on Windows ###

Rclone handles long paths automatically, by converting all paths to long [UNC paths](https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx#maxpath) which allows paths up to 32,767 characters.

This is why you will see that your paths, for instance `c:\files`, are converted to the UNC path `\\?\c:\files` in the output, and `\\server\share` is converted to `\\?\UNC\server\share`.

However, in rare cases this may cause problems with buggy file system drivers like [EncFS](https://github.com/rclone/rclone/issues/261). To disable UNC conversion globally, add this to your `.rclone.conf` file:

```
[local]
nounc = true
```

If you want to selectively disable UNC, you can add it to a separate entry like this:

```
[nounc]
type = local
nounc = true
```

And use rclone like this:

`rclone copy c:\src nounc:z:\dst`

This will use UNC paths on `c:\src` but not on `z:\dst`.
Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.

### Symlinks / Junction points

Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).

If you supply `--copy-links` or `-L` then rclone will follow the symlink and copy the pointed to file or directory. Note that this flag is incompatible with `--links` / `-l`.

This flag applies to all commands.

For example, supposing you have a directory structure like this

```
$ tree /tmp/a
/tmp/a
├── b -> ../b
├── expected -> ../expected
├── one
└── two
    └── three
```

Then you can see the difference with and without the flag like this

```
$ rclone ls /tmp/a
        6 one
        6 two/three
```

and

```
$ rclone -L ls /tmp/a
     4174 expected
        6 one
        6 two/three
        6 b/two
        6 b/one
```

#### --links, -l

Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).

If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a '.rclonelink' suffix in the remote storage.

The text file will contain the target of the symbolic link (see example).

This flag applies to all commands.

For example, supposing you have a directory structure like this

```
$ tree /tmp/a
/tmp/a
├── file1 -> ./file4
└── file2 -> /home/user/file3
```

Copying the entire directory with '-l'

```
$ rclone copyto -l /tmp/a/ remote:/tmp/a/
```

The remote files are created with a '.rclonelink' suffix

```
$ rclone ls remote:/tmp/a
       5 file1.rclonelink
      14 file2.rclonelink
```

The remote files will contain the target of the symbolic links

```
$ rclone cat remote:/tmp/a/file1.rclonelink
./file4

$ rclone cat remote:/tmp/a/file2.rclonelink
/home/user/file3
```

Copying them back with '-l'

```
$ rclone copyto -l remote:/tmp/a/ /tmp/b/

$ tree /tmp/b
/tmp/b
├── file1 -> ./file4
└── file2 -> /home/user/file3
```

However, if copied back without '-l'

```
$ rclone copyto remote:/tmp/a/ /tmp/b/

$ tree /tmp/b
/tmp/b
├── file1.rclonelink
└── file2.rclonelink
```

Note that this flag is incompatible with `--copy-links` / `-L`.

### Restricting filesystems with --one-file-system

Normally rclone will recurse through filesystems as mounted.

However if you set `--one-file-system` or `-x` this tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.

For example if you have a directory hierarchy like this

```
root
├── disk1     - disk1 mounted on the root
│   └── file3 - stored on disk1
├── disk2     - disk2 mounted on the root
│   └── file4 - stored on disk2
├── file1     - stored on the root disk
└── file2     - stored on the root disk
```

Using `rclone --one-file-system copy root remote:` will only copy `file1` and `file2`. Eg

```
$ rclone -q --one-file-system ls root
        0 file1
        0 file2
```

```
$ rclone -q ls root
        0 disk1/file3
        0 disk2/file4
        0 file1
        0 file2
```

**NB** Rclone (like most unix tools such as `du`, `rsync` and `tar`) treats a bind mount to the same device as being on the same filesystem.

**NB** This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will be ignored.

### Standard Options

Here are the standard options specific to local (Local Disk).

#### --local-nounc

Disable UNC (long path names) conversion on Windows

- Config: nounc
- Env Var: RCLONE_LOCAL_NOUNC
- Type: string
- Default: ""
- Examples:
    - "true"
        - Disables long file names

### Advanced Options

Here are the advanced options specific to local (Local Disk).
#### --copy-links / -L Follow symlinks and copy the pointed to item. - Config: copy_links - Env Var: RCLONE_LOCAL_COPY_LINKS - Type: bool - Default: false #### --links / -l Translate symlinks to/from regular files with a '.rclonelink' extension - Config: links - Env Var: RCLONE_LOCAL_LINKS - Type: bool - Default: false #### --skip-links Don't warn about skipped symlinks. This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped. - Config: skip_links - Env Var: RCLONE_LOCAL_SKIP_LINKS - Type: bool - Default: false #### --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead. - Config: no_unicode_normalization - Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION - Type: bool - Default: false #### --local-no-check-updated Don't check to see if the files change during upload Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts "can't copy - source file is being updated" if the file changes during upload. However on some file systems this modification time check may fail (eg [Glusterfs #2206](https://github.com/rclone/rclone/issues/2206)) so this check can be disabled with this flag. If this flag is set, rclone will use its best efforts to transfer a file which is being updated. If the file is only having things appended to it (eg a log) then rclone will transfer the log file with the size it had the first time rclone saw it. If the file is being modified throughout (not just appended to) then the transfer may fail with a hash check failure. In detail, once the file has had stat() called on it for the first time we: - Only transfer the size that stat gave - Only checksum the size that stat gave - Don't update the stat info for the file - Config: no_check_updated - Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED - Type: bool - Default: false #### --one-file-system / -x Don't cross filesystem boundaries (unix/macOS only). - Config: one_file_system - Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM - Type: bool - Default: false #### --local-case-sensitive Force the filesystem to report itself as case sensitive. Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice. - Config: case_sensitive - Env Var: RCLONE_LOCAL_CASE_SENSITIVE - Type: bool - Default: false #### --local-case-insensitive Force the filesystem to report itself as case insensitive Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice. - Config: case_insensitive - Env Var: RCLONE_LOCAL_CASE_INSENSITIVE - Type: bool - Default: false #### --local-no-sparse Disable sparse files for multi-thread downloads On Windows platforms rclone will make sparse files when doing multi-thread downloads. This avoids long pauses on large files where the OS zeros the file. However sparse files may be undesirable as they cause disk fragmentation and can be slow to work with. - Config: no_sparse - Env Var: RCLONE_LOCAL_NO_SPARSE - Type: bool - Default: false #### --local-no-set-modtime Disable setting modtime Normally rclone updates modification time of files after they are done uploading. 
This can cause permissions issues on Linux platforms when the user rclone is running as does not own the file uploaded, such as when copying to a CIFS mount owned by another user. If this option is enabled, rclone will no longer update the modtime after copying a file.

- Config: no_set_modtime
- Env Var: RCLONE_LOCAL_NO_SET_MODTIME
- Type: bool
- Default: false

#### --local-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_LOCAL_ENCODING
- Type: MultiEncoder
- Default: Slash,Dot

### Backend commands

Here are the commands specific to the local backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more info on how to pass options and arguments.

These can be run on a running backend using the rc command [backend/command](https://rclone.org/rc/#backend/command).

#### noop

A null operation for testing backend commands

    rclone backend noop remote: [options] [<key>=<value>]+

This is a test command which has some options you can try to change the output.

Options:

- "echo": echo the input arguments
- "error": return an error based on option value

# Changelog

## v1.53.3 - 2020-11-19

[See commits](https://github.com/rclone/rclone/compare/v1.53.2...v1.53.3)

* Bug Fixes
    * random: Fix incorrect use of math/rand instead of crypto/rand CVE-2020-28924 (Nick Craig-Wood)
        * Passwords you have generated with `rclone config` may be insecure
        * See [issue #4783](https://github.com/rclone/rclone/issues/4783) for more details and a checking tool
    * random: Seed math/rand in one place with crypto strong seed (Nick Craig-Wood)
* VFS
    * Fix vfs/refresh calls with fs= parameter (Nick Craig-Wood)
* Sharefile
    * Fix backend due to API swapping integers for strings (Nick Craig-Wood)

## v1.53.2 - 2020-10-26

[See commits](https://github.com/rclone/rclone/compare/v1.53.1...v1.53.2)

* Bug Fixes
    * accounting
        * Fix incorrect speed and transferTime in core/stats (Nick Craig-Wood)
        * Stabilize display order of transfers on Windows (Nick Craig-Wood)
    * operations
        * Fix use of --suffix without --backup-dir (Nick Craig-Wood)
        * Fix spurious "--checksum is in use but the source and destination have no hashes in common" (Nick Craig-Wood)
    * build
        * Work around GitHub actions brew problem (Nick Craig-Wood)
        * Stop using set-env and set-path in the GitHub actions (Nick Craig-Wood)
* Mount
    * mount2: Fix the swapped UID / GID values (Russell Cattelan)
* VFS
    * Detect and recover from a file being removed externally from the cache (Nick Craig-Wood)
    * Fix a deadlock vulnerability in downloaders.Close (Leo Luan)
    * Fix a race condition in retryFailedResets (Leo Luan)
    * Fix missed concurrency control between some item operations and reset (Leo Luan)
    * Add exponential backoff during ENOSPC retries (Leo Luan)
    * Add a missed update of used cache space (Leo Luan)
    * Fix --no-modtime to not attempt to set modtimes (as documented) (Nick Craig-Wood)
* Local
    * Fix sizes and syncing with --links option on Windows (Nick Craig-Wood)
* Chunker
    * Disable ListR to fix missing files on GDrive (workaround) (Ivan Andreev)
    * Fix upload over crypt (Ivan Andreev)
* Fichier
    * Increase maximum file size from 100GB to 300GB (gyutw)
* Jottacloud
    * Remove clientSecret from config when upgrading to token based authentication (buengese)
    * Avoid double url escaping of device/mountpoint (albertony)
    * Remove DirMove workaround as it's not required anymore - also (buengese)
* Mailru
    * Fix uploads after recent changes on server (Ivan Andreev)
    * Fix range requests after June changes on server (Ivan Andreev)
    * Fix invalid timestamp on corrupted files (fixes) (Ivan Andreev)
* Onedrive
    * Fix disk usage for sharepoint (Nick Craig-Wood)
* S3
    * Add missing regions for AWS (Anagh Kumar Baranwal)
* Seafile
    * Fix accessing libraries > 2GB on 32 bit systems (Muffin King)
* SFTP
    * Always convert the checksum to lower case (buengese)
* Union
    * Create root directories if none exist (Nick Craig-Wood)

## v1.53.1 - 2020-09-13

[See commits](https://github.com/rclone/rclone/compare/v1.53.0...v1.53.1)

* Bug Fixes
    * accounting: Remove new line from end of --stats-one-line display (Nick Craig-Wood)
    * check
        * Add back missing --download flag (Nick Craig-Wood)
        * Fix docs (Nick Craig-Wood)
    * docs
        * Note --log-file does append (Nick Craig-Wood)
        * Add full stops for consistency in rclone --help (edwardxml)
        * Add Tencent COS to s3 provider list (wjielai)
        * Updated mount command to reflect that it requires Go 1.13 or newer (Evan Harris)
        * jottacloud: Mention that uploads from local disk will not need to cache files to disk for md5 calculation (albertony)
        * Fix formatting of rc docs page (Nick Craig-Wood)
    * build
        * Include vendor tar ball in release and fix startdev (Nick Craig-Wood)
        * Fix "Illegal instruction" error for ARMv6 builds (Nick Craig-Wood)
        * Fix architecture name in ARMv7 build (Nick Craig-Wood)
* VFS
    * Fix spurious error "vfs cache: failed to _ensure cache EOF" (Nick Craig-Wood)
    * Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood)
* Local
    * Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood)
* Drive
    * Re-adds special oauth help text (Tim Gallant)
* Opendrive
    * Do not retry 400 errors (Evan Harris)

## v1.53.0 - 2020-09-02

[See commits](https://github.com/rclone/rclone/compare/v1.52.0...v1.53.0)

* New Features
    * The [VFS layer](https://rclone.org/commands/rclone_mount/#vfs-virtual-file-system) was heavily reworked for this release - see below for more details
    * Interactive mode [-i/--interactive](https://rclone.org/docs/#interactive) for destructive operations (fishbullet)
    * Add [--bwlimit-file](https://rclone.org/docs/#bwlimit-file-bandwidth-spec) flag to limit speeds of individual file transfers (Nick Craig-Wood)
    * Transfers are sorted by start time in the stats and progress output (Max Sum)
    * Make sure backends expand `~` and environment vars in file names they use (Nick Craig-Wood)
    * Add [--refresh-times](https://rclone.org/docs/#refresh-times) flag to set modtimes on hashless backends (Nick Craig-Wood)
    * build
        * Remove vendor directory in favour of Go modules (Nick Craig-Wood)
        * Build with go1.15.x by default (Nick Craig-Wood)
        * Drop macOS 386 build as it is no longer supported by go1.15 (Nick Craig-Wood)
        * Add ARMv7 to the supported builds (Nick Craig-Wood)
        * Enable `rclone cmount` on macOS (Nick Craig-Wood)
        * Make rclone build with gccgo (Nick Craig-Wood)
        * Make rclone build with wasm (Nick Craig-Wood)
        * Change beta numbering to be semver compatible (Nick Craig-Wood)
        * Add file properties and icon to Windows executable (albertony)
        * Add experimental interface for integrating rclone into browsers (Nick Craig-Wood)
        * lib: Add file name compression (Klaus Post)
    * rc
        * Allow installation and use of plugins and test plugins with rclone-webui (Chaitanya Bankanhal)
        * Add reverse proxy pluginsHandler for serving plugins (Chaitanya Bankanhal)
        * Add `mount/listmounts` option for listing current mounts (Chaitanya Bankanhal)
## v1.53.0 - 2020-09-02

[See commits](https://github.com/rclone/rclone/compare/v1.52.0...v1.53.0)

* New Features
    * The [VFS layer](https://rclone.org/commands/rclone_mount/#vfs-virtual-file-system) was heavily reworked for this release - see below for more details
    * Interactive mode [-i/--interactive](https://rclone.org/docs/#interactive) for destructive operations (fishbullet)
    * Add [--bwlimit-file](https://rclone.org/docs/#bwlimit-file-bandwidth-spec) flag to limit speeds of individual file transfers (Nick Craig-Wood)
    * Transfers are sorted by start time in the stats and progress output (Max Sum)
    * Make sure backends expand `~` and environment vars in file names they use (Nick Craig-Wood)
    * Add [--refresh-times](https://rclone.org/docs/#refresh-times) flag to set modtimes on hashless backends (Nick Craig-Wood)
    * build
        * Remove vendor directory in favour of Go modules (Nick Craig-Wood)
        * Build with go1.15.x by default (Nick Craig-Wood)
        * Drop macOS 386 build as it is no longer supported by go1.15 (Nick Craig-Wood)
        * Add ARMv7 to the supported builds (Nick Craig-Wood)
        * Enable `rclone cmount` on macOS (Nick Craig-Wood)
        * Make rclone build with gccgo (Nick Craig-Wood)
        * Make rclone build with wasm (Nick Craig-Wood)
        * Change beta numbering to be semver compatible (Nick Craig-Wood)
        * Add file properties and icon to Windows executable (albertony)
        * Add experimental interface for integrating rclone into browsers (Nick Craig-Wood)
    * lib: Add file name compression (Klaus Post)
    * rc
        * Allow installation and use of plugins and test plugins with rclone-webui (Chaitanya Bankanhal)
        * Add reverse proxy pluginsHandler for serving plugins (Chaitanya Bankanhal)
        * Add `mount/listmounts` option for listing current mounts (Chaitanya Bankanhal)
        * Add `operations/uploadfile` to upload a file through rc using encoding multipart/form-data (Chaitanya Bankanhal)
        * Add `core/command` to execute rclone terminal commands. (Chaitanya Bankanhal)
    * `rclone check`
        * Add reporting of filenames for same/missing/changed (Nick Craig-Wood)
        * Make check command obey `--dry-run`/`-i`/`--interactive` (Nick Craig-Wood)
        * Make check do `--checkers` files concurrently (Nick Craig-Wood)
        * Retry downloads if they fail when using the `--download` flag (Nick Craig-Wood)
        * Make it show stats by default (Nick Craig-Wood)
    * `rclone obscure`: Allow obscure command to accept password on STDIN (David Ibarra)
    * `rclone config`
        * Set RCLONE_CONFIG_DIR for use in config files and subprocesses (Nick Craig-Wood)
        * Reject remote names starting with a dash. (jtagcat)
    * `rclone cryptcheck`: Add reporting of filenames for same/missing/changed (Nick Craig-Wood)
    * `rclone dedupe`: Make it obey the `--size-only` flag for duplicate detection (Nick Craig-Wood)
    * `rclone link`: Add `--expire` and `--unlink` flags (Roman Kredentser)
    * `rclone mkdir`: Warn when using mkdir on remotes which can't have empty directories (Nick Craig-Wood)
    * `rclone rc`: Allow JSON parameters to simplify command line usage (Nick Craig-Wood)
    * `rclone serve ftp`
        * Don't compile on < go1.13 after dependency update (Nick Craig-Wood)
        * Add error message if auth proxy fails (Nick Craig-Wood)
        * Use refactored goftp.io/server library for binary shrink (Nick Craig-Wood)
    * `rclone serve restic`: Expose interfaces so that rclone can be used as a library from within restic (Jack)
    * `rclone sync`: Add `--track-renames-strategy leaf` (Nick Craig-Wood)
    * `rclone touch`: Add ability to set nanosecond resolution times (Nick Craig-Wood)
    * `rclone tree`: Remove `-i` shorthand for `--noindent` as it conflicts with `-i`/`--interactive` (Nick Craig-Wood)
* Bug Fixes
    * accounting
        * Fix documentation for `speed`/`speedAvg` (Nick Craig-Wood)
        * Fix elapsed time not showing actual time since beginning (Chaitanya Bankanhal)
        * Fix deadlock in stats printing (Nick Craig-Wood)
    * build
        * Fix file handle leak in GitHub release tool (Garrett Squire)
    * `rclone check`: Fix successful retries with `--download` counting errors (Nick Craig-Wood)
    * `rclone dedupe`: Fix logging to be easier to understand (Nick Craig-Wood)
* Mount
    * Warn macOS users that mount implementation is changing (Nick Craig-Wood)
        * to test the new implementation use `rclone cmount` instead of `rclone mount`
        * this is because the library rclone uses has dropped macOS support
    * rc interface
        * Add call for unmount all (Chaitanya Bankanhal)
        * Make `mount/mount` remote control take vfsOpt option (Nick Craig-Wood)
        * Add mountOpt to `mount/mount` (Nick Craig-Wood)
        * Add VFS and Mount options to `mount/listmounts` (Nick Craig-Wood)
    * Catch panics in cgofuse initialization and turn into error messages (Nick Craig-Wood)
    * Always supply stat information in Readdir (Nick Craig-Wood)
    * Add support for reading unknown length files using direct IO (Windows) (Nick Craig-Wood)
    * Fix: on Windows don't add `-o uid/gid=-1` if the user supplies `-o uid/gid` (Nick Craig-Wood)
    * Fix macOS losing directory contents in cmount (Nick Craig-Wood)
    * Fix volume name broken in recent refactor (Nick Craig-Wood)
* VFS
    * Implement partial reads for `--vfs-cache-mode full` (Nick Craig-Wood)
    * Add `--vfs-writeback` option to delay writes back to cloud storage (Nick Craig-Wood)
    * Add `--vfs-read-ahead` parameter for use with `--vfs-cache-mode full` (Nick Craig-Wood)
    * Restart pending uploads on restart of the cache (Nick Craig-Wood)
    * Support synchronous cache space recovery upon ENOSPC (Leo Luan)
    * Allow ReadAt and WriteAt to run concurrently with themselves (Nick Craig-Wood)
    * Change modtime of file before upload to current (Rob Calistri)
    * Recommend `--vfs-cache-mode writes` on backends which can't stream (Nick Craig-Wood)
    * Add an optional `fs` parameter to vfs rc methods (Nick Craig-Wood)
    * Fix errors when using > 260 char files in the cache in Windows (Nick Craig-Wood)
    * Fix renaming of items while they are being uploaded (Nick Craig-Wood)
    * Fix very high load caused by slow directory listings (Nick Craig-Wood)
    * Fix renamed files not being uploaded with `--vfs-cache-mode minimal` (Nick Craig-Wood)
    * Fix directory locking caused by slow directory listings (Nick Craig-Wood)
    * Fix saving from chrome without `--vfs-cache-mode writes` (Nick Craig-Wood)
* Local
    * Add `--local-no-updated` to provide a consistent view of changing objects (Nick Craig-Wood)
    * Add `--local-no-set-modtime` option to prevent modtime changes (tyhuber1)
    * Fix race conditions updating and reading Object metadata (Nick Craig-Wood)
* Cache
    * Make any created backends be cached to fix rc problems (Nick Craig-Wood)
    * Fix dedupe on caches wrapping drives (Nick Craig-Wood)
* Crypt
    * Add `--crypt-server-side-across-configs` flag (Nick Craig-Wood)
    * Make any created backends be cached to fix rc problems (Nick Craig-Wood)
* Alias
    * Make any created backends be cached to fix rc problems (Nick Craig-Wood)
* Azure Blob
    * Don't compile on < go1.13 after dependency update (Nick Craig-Wood)
* B2
    * Implement server side copy for files > 5GB (Nick Craig-Wood)
    * Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood)
    * Note that b2's encoding now allows \ but rclone's hasn't changed (Nick Craig-Wood)
    * Fix transfers when using download_url (Nick Craig-Wood)
* Box
    * Implement rclone cleanup (buengese)
    * Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood)
    * Allow authentication with access token (David)
* Chunker
    * Make any created backends be cached to fix rc problems (Nick Craig-Wood)
* Drive
    * Add `rclone backend drives` to list shared drives (teamdrives) (Nick Craig-Wood)
    * Implement `rclone backend untrash` (Nick Craig-Wood)
    * Work around drive bug which didn't set modtime of copied docs (Nick Craig-Wood)
    * Added `--drive-starred-only` to only show starred files (Jay McEntire)
    * Deprecate `--drive-alternate-export` as it is no longer needed (themylogin)
    * Fix duplication of Google docs on server side copy (Nick Craig-Wood)
    * Fix "panic: send on closed channel" when recycling dir entries (Nick Craig-Wood)
* Dropbox
    * Add copyright detector info in limitations section in the docs (Alex Guerrero)
    * Fix `rclone link` by removing expires parameter (Nick Craig-Wood)
* Fichier
    * Detect Flood detected: IP Locked error and sleep for 30s (Nick Craig-Wood)
* FTP
    * Add explicit TLS support (Heiko Bornholdt)
    * Add support for `--dump bodies` and `--dump auth` for debugging (Nick Craig-Wood)
    * Fix interoperation with pure-ftpd (Nick Craig-Wood)
* Google Cloud Storage
    * Add support for anonymous access (Kai Lüke)
* Jottacloud
    * Bring back legacy authentication for use with whitelabel versions (buengese)
    * Switch to new api root - also implement a very ugly workaround for the DirMove failures (buengese)
* Onedrive
    * Rework cancel of multipart uploads on rclone exit (Nick Craig-Wood)
    * Implement rclone cleanup (Nick Craig-Wood)
    * Add `--onedrive-no-versions` flag to remove old versions (Nick Craig-Wood)
* Pcloud
    * Implement `rclone link` for public link creation (buengese)
* Qingstor
    * Cancel in progress multipart uploads on rclone exit (Nick Craig-Wood)
* S3
    * Preserve metadata when doing multipart copy (Nick Craig-Wood)
    * Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood)
    * Add `rclone link` for public link sharing (Roman Kredentser)
    * Add `rclone backend restore` command to restore objects from GLACIER (Nick Craig-Wood)
    * Add `rclone cleanup` and `rclone backend cleanup` to clean unfinished multipart uploads (Nick Craig-Wood)
    * Add `rclone backend list-multipart-uploads` to list unfinished multipart uploads (Nick Craig-Wood)
    * Add `--s3-max-upload-parts` support (Kamil Trzciński)
    * Add `--s3-no-check-bucket` for minimising rclone transactions and perms (Nick Craig-Wood)
    * Add `--s3-profile` and `--s3-shared-credentials-file` options (Nick Craig-Wood)
    * Use regional s3 us-east-1 endpoint (David)
    * Add Scaleway provider (Vincent Feltz)
    * Update IBM COS endpoints (Egor Margineanu)
    * Reduce the default `--s3-copy-cutoff` to < 5GB for Backblaze S3 compatibility (Nick Craig-Wood)
    * Fix detection of bucket existing (Nick Craig-Wood)
* SFTP
    * Use the absolute path instead of the relative path for listing for improved compatibility (Nick Craig-Wood)
    * Add `--sftp-subsystem` and `--sftp-server-command` options (aus)
* Swift
    * Fix dangling large objects breaking the listing (Nick Craig-Wood)
    * Fix purge not deleting directory markers (Nick Craig-Wood)
    * Fix update multipart object removing all of its own parts (Nick Craig-Wood)
    * Fix missing hash from object returned from upload (Nick Craig-Wood)
* Tardigrade
    * Upgrade to uplink v1.2.0 (Kaloyan Raev)
* Union
    * Fix writing with the all policy (Nick Craig-Wood)
* WebDAV
    * Fix directory creation with 4shared (Nick Craig-Wood)
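The new `rclone backend restore` command for S3 above can be sketched as follows; the bucket, path and option values are illustrative assumptions:

```
# Ask the provider to restore GLACIER objects under a prefix for 7 days
rclone backend restore s3:bucket/path/to/dir -o priority=Standard -o lifetime=7
```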
## v1.52.3 - 2020-08-07

[See commits](https://github.com/rclone/rclone/compare/v1.52.2...v1.52.3)

* Bug Fixes
    * docs
        * Disable smart typography (eg en-dash) in MANUAL.* and man page (Nick Craig-Wood)
        * Update install.md to reflect minimum Go version (Evan Harris)
        * Update install from source instructions (Nick Craig-Wood)
        * make_manual: Support SOURCE_DATE_EPOCH (Morten Linderud)
    * log: Fix --use-json-log going to stderr not --log-file on Windows (Nick Craig-Wood)
    * serve dlna: Fix file list on Samsung Series 6+ TVs (Matteo Pietro Dazzi)
    * sync: Fix deadlock with --track-renames-strategy modtime (Nick Craig-Wood)
* Cache
    * Fix moveto/copyto remote:file remote:file2 (Nick Craig-Wood)
* Drive
    * Stop using root_folder_id as a cache (Nick Craig-Wood)
    * Make dangling shortcuts appear in listings (Nick Craig-Wood)
    * Drop "Disabling ListR" messages down to debug (Nick Craig-Wood)
    * Workaround and policy for Google Drive API (Dmitry Ustalov)
* FTP
    * Add note to docs about home vs root directory selection (Nick Craig-Wood)
* Onedrive
    * Fix reverting to Copy when Move would have worked (Nick Craig-Wood)
    * Avoid comma rendered in URL in onedrive.md (Kevin)
* Pcloud
    * Fix oauth on European region "eapi.pcloud.com" (Nick Craig-Wood)
* S3
    * Fix bucket Region auto detection when Region unset in config (Nick Craig-Wood)
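A usage sketch for the `--track-renames-strategy modtime` fixes above (paths are illustrative):

```
# Detect server-side renames by size and modification time instead of hash
rclone sync /src remote:dst --track-renames --track-renames-strategy modtime
```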
## v1.52.2 - 2020-06-24

[See commits](https://github.com/rclone/rclone/compare/v1.52.1...v1.52.2)

* Bug Fixes
    * build
        * Fix docker release build action (Nick Craig-Wood)
        * Fix custom timezone in Docker image (NoLooseEnds)
    * check: Fix misleading message which printed errors instead of differences (Nick Craig-Wood)
    * errors: Add WSAECONNREFUSED and more to the list of retriable Windows errors (Nick Craig-Wood)
    * rcd: Fix incorrect prometheus metrics (Gary Kim)
    * serve restic: Fix flags so they use environment variables (Nick Craig-Wood)
    * serve webdav: Fix flags so they use environment variables (Nick Craig-Wood)
    * sync: Fix --track-renames-strategy modtime (Nick Craig-Wood)
* Drive
    * Fix not being able to delete a directory with a trashed shortcut (Nick Craig-Wood)
    * Fix creating a directory inside a shortcut (Nick Craig-Wood)
    * Fix --drive-impersonate with cached root_folder_id (Nick Craig-Wood)
* SFTP
    * Fix SSH key PEM loading (Zac Rubin)
* Swift
    * Speed up deletes by not retrying segment container deletes (Nick Craig-Wood)
* Tardigrade
    * Upgrade to uplink v1.1.1 (Caleb Case)
* WebDAV
    * Fix free/used display for rclone about/df for certain backends (Nick Craig-Wood)

## v1.52.1 - 2020-06-10

[See commits](https://github.com/rclone/rclone/compare/v1.52.0...v1.52.1)

* Bug Fixes
    * lib/file: Fix SetSparse on Windows 7 which fixes downloads of files > 250MB (Nick Craig-Wood)
    * build
        * Update go.mod to go1.14 to enable -mod=vendor build (Nick Craig-Wood)
        * Remove quicktest from Dockerfile (Nick Craig-Wood)
        * Build Docker images with GitHub actions (Matteo Pietro Dazzi)
        * Update Docker build workflows (Nick Craig-Wood)
        * Set user_allow_other in /etc/fuse.conf in the Docker image (Nick Craig-Wood)
        * Fix xgo build after go1.14 go.mod update (Nick Craig-Wood)
    * docs
        * Add link to source and modified time to footer of every page (Nick Craig-Wood)
        * Remove manually set dates and use git dates instead (Nick Craig-Wood)
        * Minor tense, punctuation, brevity and positivity changes for the home page (edwardxml)
        * Remove leading slash in page reference in footer when present (Nick Craig-Wood)
        * Note commands which need obscured input in the docs (Nick Craig-Wood)
        * obscure: Write more help as we are referencing it elsewhere (Nick Craig-Wood)
* VFS
    * Fix OS vs Unix path confusion - fixes ChangeNotify on Windows (Nick Craig-Wood)
* Drive
    * Fix missing items when listing using --fast-list / ListR (Nick Craig-Wood)
* Putio
    * Fix panic on Object.Open (Cenk Alti)
* S3
    * Fix upload of single files into buckets without create permission (Nick Craig-Wood)
    * Fix --header-upload (Nick Craig-Wood)
* Tardigrade
    * Fix listing bug by upgrading to v1.0.7
    * Set UserAgent to rclone (Caleb Case)
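The `-mod=vendor` build mentioned above applies when compiling from a source tree that ships its dependencies; a minimal sketch, assuming a release source tarball with the vendor directory has already been unpacked (the directory name is illustrative):

```
# Build rclone offline using the vendored dependencies
cd rclone-v1.52.1 && go build -mod=vendor
```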
## v1.52.0 - 2020-05-27

Special thanks to Martin Michlmayr for proof reading and correcting all the docs and Edward Barker for helping re-write the front page.

[See commits](https://github.com/rclone/rclone/compare/v1.51.0...v1.52.0)

* New backends
    * [Tardigrade](https://rclone.org/tardigrade/) backend for use with storj.io (Caleb Case)
    * [Union](https://rclone.org/union/) re-write to have multiple writable remotes (Max Sum)
    * [Seafile](/seafile) for Seafile server (Fred @creativeprojects)
* New commands
    * backend: command for backend specific commands (see backends) (Nick Craig-Wood)
    * cachestats: Deprecate in favour of `rclone backend stats cache:` (Nick Craig-Wood)
    * dbhashsum: Deprecate in favour of `rclone hashsum DropboxHash` (Nick Craig-Wood)
* New Features
    * Add `--header-download` and `--header-upload` flags for setting HTTP headers when uploading/downloading (Tim Gallant)
    * Add `--header` flag to add HTTP headers to every HTTP transaction (Nick Craig-Wood)
    * Add `--check-first` to do all checking before starting transfers (Nick Craig-Wood)
    * Add `--track-renames-strategy` for configurable matching criteria for `--track-renames` (Bernd Schoolmann)
    * Add `--cutoff-mode` hard,soft,cautious (Shing Kit Chan & Franklyn Tackitt)
    * Filter flags (eg `--files-from -`) can read from stdin (fishbullet)
    * Add `--error-on-no-transfer` option (Jon Fautley)
    * Implement `--order-by xxx,mixed` for copying some small and some big files (Nick Craig-Wood)
    * Allow `--max-backlog` to be negative meaning as large as possible (Nick Craig-Wood)
    * Added `--no-unicode-normalization` flag to allow Unicode filenames to remain unique (Ben Zenker)
    * Allow `--min-age`/`--max-age` to take a date as well as a duration (Nick Craig-Wood)
    * Add rename statistics for file and directory renames (Nick Craig-Wood)
    * Add statistics output to JSON log (reddi)
    * Make stats be printed on non-zero exit code (Nick Craig-Wood)
    * When running `--password-command` allow use of stdin (Sébastien Gross)
    * Stop empty strings being a valid remote path (Nick Craig-Wood)
    * accounting: support WriterTo for less memory copying (Nick Craig-Wood)
    * build
        * Update to use go1.14 for the build (Nick Craig-Wood)
        * Add `-trimpath` to release build for reproducible builds (Nick Craig-Wood)
        * Remove GOOS and GOARCH from Dockerfile (Brandon Philips)
    * config
        * Fsync the config file after writing to save more reliably (Nick Craig-Wood)
        * Add `--obscure` and `--no-obscure` flags to `config create`/`update` (Nick Craig-Wood)
        * Make `config show` take `remote:` as well as `remote` (Nick Craig-Wood)
    * copyurl: Add `--no-clobber` flag (Denis)
    * delete: Added `--rmdirs` flag to delete directories as well (Kush)
    * filter: Added `--files-from-raw` flag (Ankur Gupta)
    * genautocomplete: Add support for fish shell (Matan Rosenberg)
    * log: Add support for syslog LOCAL facilities (Patryk Jakuszew)
    * lsjson: Add `--hash-type` parameter and use it in lsf to speed up hashing (Nick Craig-Wood)
    * rc
        * Add `-o`/`--opt` and `-a`/`--arg` for more structured input (Nick Craig-Wood)
        * Implement `backend/command` for running backend specific commands remotely (Nick Craig-Wood)
        * Add `mount/mount` command for starting `rclone mount` via the API (Chaitanya)
    * rcd: Add Prometheus metrics support (Gary Kim)
    * serve http
        * Added a `--template` flag for user defined markup (calistri)
        * Add Last-Modified headers to files and directories (Nick Craig-Wood)
    * serve sftp: Add support for multiple host keys by repeating `--key` flag (Maxime Suret)
    * touch: Add `--localtime` flag to make `--timestamp` localtime not UTC (Nick Craig-Wood)
* Bug Fixes
    * accounting
        * Restore "Max number of stats groups reached" log line (Michał Matczuk)
        * Correct exitcode on Transfer Limit Exceeded flag (Anuar Serdaliyev)
        * Reset bytes read during copy retry (Ankur Gupta)
        * Fix race clearing stats (Nick Craig-Wood)
    * copy: Only create empty directories when they don't exist on the remote (Ishuah Kariuki)
    * dedupe: Stop dedupe deleting files with identical IDs (Nick Craig-Wood)
    * oauth
        * Use custom http client so that `--no-check-certificate` is honored by oauth token fetch (Mark Spieth)
        * Replace deprecated oauth2.NoContext (Lars Lehtonen)
    * operations
        * Fix setting the timestamp on Windows for multithread copy (Nick Craig-Wood)
        * Make rcat obey `--ignore-checksum` (Nick Craig-Wood)
        * Make `--max-transfer` more accurate (Nick Craig-Wood)
    * rc
        * Fix dropped error (Lars Lehtonen)
        * Fix misplaced http server config (Xiaoxing Ye)
        * Disable duplicate log (ElonH)
    * serve dlna
        * Cds: don't specify childCount at all when unknown (Dan Walters)
        * Cds: use modification time as date in dlna metadata (Dan Walters)
    * serve restic: Fix tests after restic project removed vendoring (Nick Craig-Wood)
    * sync
        * Fix incorrect "nothing to transfer" message using `--delete-before` (Nick Craig-Wood)
        * Only create empty directories when they don't exist on the remote (Ishuah Kariuki)
* Mount
    * Add `--async-read` flag to disable asynchronous reads (Nick Craig-Wood)
    * Ignore `--allow-root` flag with a warning as it has been removed upstream (Nick Craig-Wood)
    * Warn if `--allow-non-empty` used on Windows and clarify docs (Nick Craig-Wood)
    * Constrain to go1.13 or above otherwise bazil.org/fuse fails to compile (Nick Craig-Wood)
    * Fix fail because of too long volume name (evileye)
    * Report 1PB free for unknown disk sizes (Nick Craig-Wood)
    * Map more rclone errors into file systems errors (Nick Craig-Wood)
    * Fix disappearing cwd problem (Nick Craig-Wood)
    * Use ReaddirPlus on Windows to improve directory listing performance (Nick Craig-Wood)
    * Send a hint as to whether the filesystem is case insensitive or not (Nick Craig-Wood)
    * Add rc command `mount/types` (Nick Craig-Wood)
    * Change maximum leaf name length to 1024 bytes (Nick Craig-Wood)
* VFS
    * Add `--vfs-read-wait` and `--vfs-write-wait` flags to control time waiting for a sequential read/write (Nick Craig-Wood)
    * Change default `--vfs-read-wait` to 20ms (it was 5ms and not configurable) (Nick Craig-Wood)
    * Make `df` output more consistent on a rclone mount (Yves G)
    * Report 1PB free for unknown disk sizes (Nick Craig-Wood)
    * Fix race condition caused by unlocked reading of Dir.path (Nick Craig-Wood)
    * Make File lock and Dir lock not overlap to avoid deadlock (Nick Craig-Wood)
    * Implement lock ordering between File and Dir to eliminate deadlocks (Nick Craig-Wood)
    * Factor the vfs cache into its own package (Nick Craig-Wood)
    * Pin the Fs in use in the Fs cache (Nick Craig-Wood)
    * Add SetSys() methods to Node to allow caching stuff on a node (Nick Craig-Wood)
    * Ignore file not found errors from Hash in Read.Release (Nick Craig-Wood)
    * Fix hang in read wait code (Nick Craig-Wood)
* Local
    * Speed up multi thread downloads by using sparse files on Windows (Nick Craig-Wood)
    * Implement `--local-no-sparse` flag for disabling sparse files (Nick Craig-Wood)
    * Implement `rclone backend noop` for testing purposes (Nick Craig-Wood)
    * Fix "file not found" errors on post transfer Hash calculation (Nick Craig-Wood)
* Cache
    * Implement `rclone backend stats` command (Nick Craig-Wood)
    * Fix Server Side Copy with Temp Upload (Brandon McNama)
    * Remove Unused Functions (Lars Lehtonen)
    * Disable race tests until bbolt is fixed (Nick Craig-Wood)
    * Move methods used for testing into test file (greatroar)
    * Add Pin and Unpin and canonicalised lookup (Nick Craig-Wood)
    * Use proper import path go.etcd.io/bbolt (Robert-André Mauchin)
* Crypt
    * Calculate hashes for uploads from local disk (Nick Craig-Wood)
        * This allows crypted Jottacloud uploads without using local disk
        * This means crypted s3/b2 uploads will now have hashes
    * Added `rclone backend decode`/`encode` commands to replicate functionality of `cryptdecode` (Anagh Kumar Baranwal)
    * Get rid of the unused Cipher interface as it obfuscated the code (Nick Craig-Wood)
* Azure Blob
    * Implement streaming of unknown sized files so `rcat` is now supported (Nick Craig-Wood)
    * Implement memory pooling to control memory use (Nick Craig-Wood)
    * Add `--azureblob-disable-checksum` flag (Nick Craig-Wood)
    * Retry `InvalidBlobOrBlock` error as it may indicate block concurrency problems (Nick Craig-Wood)
    * Remove unused `Object.parseTimeString()` (Lars Lehtonen)
    * Fix permission error on SAS URL limited to container (Nick Craig-Wood)
* B2
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
    * Ignore directory markers at the root also (Nick Craig-Wood)
    * Force the case of the SHA1 to lowercase (Nick Craig-Wood)
    * Remove unused `largeUpload.clearUploadURL()` (Lars Lehtonen)
* Box
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
    * Implement About to read size used (Nick Craig-Wood)
    * Add token renew function for jwt auth (David Bramwell)
    * Added support for interchangeable root folder for Box backend (Sunil Patra)
    * Remove unnecessary iat from jws claims (David)
* Drive
    * Follow shortcuts by default, skip with `--drive-skip-shortcuts` (Nick Craig-Wood)
    * Implement `rclone backend shortcut` command for creating shortcuts (Nick Craig-Wood)
    * Added `rclone backend` command to change `service_account_file` and `chunk_size` (Anagh Kumar Baranwal)
    * Fix missing files when using `--fast-list` and `--drive-shared-with-me` (Nick Craig-Wood)
    * Fix duplicate items when using `--drive-shared-with-me` (Nick Craig-Wood)
    * Extend `--drive-stop-on-upload-limit` to respond to `teamDriveFileLimitExceeded` (harry)
    * Don't delete files with multiple parents to avoid data loss (Nick Craig-Wood)
    * Server side copy docs use default description if empty (Nick Craig-Wood)
* Dropbox
    * Make error insufficient space to be fatal (harry)
    * Add info about required redirect url (Elan Ruusamäe)
* Fichier
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
    * Implement custom pacer to deal with the new rate limiting (buengese)
* FTP
    * Fix lockup when using concurrency limit on failed connections (Nick Craig-Wood)
    * Fix lockup on failed upload when using concurrency limit (Nick Craig-Wood)
    * Fix lockup on Close failures when using concurrency limit (Nick Craig-Wood)
    * Work around pureftp sending spurious 150 messages (Nick Craig-Wood)
* Google Cloud Storage
    * Add support for `--header-upload` and `--header-download` (Nick Craig-Wood)
    * Add `ARCHIVE` storage class to help (Adam Stroud)
    * Ignore directory markers at the root (Nick Craig-Wood)
* Googlephotos
    * Make the start year configurable (Daven)
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
    * Create feature/favorites directory (Brandon Philips)
    * Fix "concurrent map write" error (Nick Craig-Wood)
    * Don't put an image in error message (Nick Craig-Wood)
* HTTP
    * Improved directory listing with new template from Caddy project (calisro)
* Jottacloud
    * Implement `--jottacloud-trashed-only` (buengese)
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
    * Use `RawURLEncoding` when decoding base64 encoded login token (buengese)
    * Implement cleanup (buengese)
    * Update docs regarding cleanup, removed remains from old auth, and added warning about special mountpoints (albertony)
* Mailru
    * Describe 2FA requirements (valery1707)
* Onedrive
    * Implement `--onedrive-server-side-across-configs` (Nick Craig-Wood)
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
    * Fix occasional 416 errors on multipart uploads (Nick Craig-Wood)
    * Added maximum chunk size limit warning in the docs (Harry)
    * Fix missing drive on config (Nick Craig-Wood)
    * Make error `quotaLimitReached` to be fatal (harry)
* Opendrive
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
* Pcloud
    * Added support for interchangeable root folder for pCloud backend (Sunil Patra)
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
    * Fix initial config "Auth state doesn't match" message (Nick Craig-Wood)
* Premiumizeme
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
    * Prune unused functions (Lars Lehtonen)
* Putio
    * Add support for `--header-upload` and `--header-download` (Nick Craig-Wood)
    * Make downloading files use the rclone http Client (Nick Craig-Wood)
    * Fix parsing of remotes with leading and trailing / (Nick Craig-Wood)
* Qingstor
    * Make `rclone cleanup` remove pending multipart uploads older than 24h (Nick Craig-Wood)
    * Try harder to cancel failed multipart uploads (Nick Craig-Wood)
    * Prune `multiUploader.list()` (Lars Lehtonen)
    * Lint fix (Lars Lehtonen)
* S3
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
    * Use memory pool for buffer allocations (Maciej Zimnoch)
    * Add SSE-C support for AWS, Ceph, and MinIO (Jack Anderson)
    * Fail fast multipart upload (Michał Matczuk)
    * Report errors on bucket creation (mkdir) correctly (Nick Craig-Wood)
    * Specify that Minio supports URL encoding in listings (Nick Craig-Wood)
    * Added 500 as retryErrorCode (Michał Matczuk)
    * Use `--low-level-retries` as the number of SDK retries (Aleksandar Janković)
    * Fix multipart abort context (Aleksandar Jankovic)
    * Replace deprecated `session.New()` with `session.NewSession()` (Lars Lehtonen)
    * Use the provided size parameter when allocating a new memory pool (Joachim Brandon LeBlanc)
    * Use rclone's low level retries instead of AWS SDK to fix listing retries (Nick Craig-Wood)
    * Ignore directory markers at the root also (Nick Craig-Wood)
    * Use single memory pool (Michał Matczuk)
    * Do not resize buf on put to memBuf (Michał Matczuk)
    * Improve docs for `--s3-disable-checksum` (Nick Craig-Wood)
    * Don't leak memory or tokens in edge cases for multipart upload (Nick Craig-Wood)
* Seafile
    * Implement 2FA (Fred)
* SFTP
    * Added `--sftp-pem-key` to support inline key files (calisro)
    * Fix post transfer copies failing with 0 size when using `set_modtime=false` (Nick Craig-Wood)
* Sharefile
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
* Sugarsync
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
* Swift
    * Add support for `--header-upload` and `--header-download` (Nick Craig-Wood)
    * Fix cosmetic issue in error message (Martin Michlmayr)
* Union
    * Implement multiple writable remotes (Max Sum)
    * Fix server-side copy (Max Sum)
    * Implement ListR (Max Sum)
    * Enable ListR when upstreams contain local (Max Sum)
* WebDAV
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
    * Fix `X-OC-Mtime` header for Transip compatibility (Nick Craig-Wood)
    * Report full and consistent usage with `about` (Yves G)
* Yandex
    * Add support for `--header-upload` and `--header-download` (Tim Gallant)
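A usage sketch for the new header flags above (the header values and paths are illustrative):

```
# Set a header on uploaded objects and add a header to every HTTP transaction
rclone copy /src s3:bucket/dir \
    --header-upload "Cache-Control: max-age=3600" \
    --header "X-Example: demo"
```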
## v1.51.0 - 2020-02-01

* New backends
    * [Memory](https://rclone.org/memory/) (Nick Craig-Wood)
    * [Sugarsync](https://rclone.org/sugarsync/) (Nick Craig-Wood)
* New Features
    * Adjust all backends to have `--backend-encoding` parameter (Nick Craig-Wood)
        * this enables the encoding for special characters to be adjusted or disabled
    * Add `--max-duration` flag to control the maximum duration of a transfer session (boosh)
    * Add `--expect-continue-timeout` flag, default 1s (Nick Craig-Wood)
    * Add `--no-check-dest` flag for copying without testing the destination (Nick Craig-Wood)
    * Implement `--order-by` flag to order transfers (Nick Craig-Wood)
    * accounting
        * Don't show entries in both transferring and checking (Nick Craig-Wood)
        * Add option to delete stats (Aleksandar Jankovic)
    * build
        * Compress the test builds with gzip (Nick Craig-Wood)
        * Implement a framework for starting test servers during tests (Nick Craig-Wood)
    * cmd: Always print elapsed time to tenth place seconds in progress (Gary Kim)
    * config
        * Add `--password-command` to allow dynamic config password (Damon Permezel)
        * Give config questions default values (Nick Craig-Wood)
        * Check a remote exists when creating a new one (Nick Craig-Wood)
    * copyurl: Add `--stdout` flag to write to stdout (Nick Craig-Wood)
    * dedupe: Implement keep smallest too (Nick Craig-Wood)
    * hashsum: Add `--base64` flag (landall)
    * lsf: Speed up on s3/swift/etc by not reading mimetype by default (Nick Craig-Wood)
    * lsjson: Add `--no-mimetype` flag (Nick Craig-Wood)
    * rc: Add methods to turn on blocking and mutex profiling (Nick Craig-Wood)
    * rcd
        * Adding group parameter to stats (Chaitanya)
        * Move webgui apart; option to disable browser (Xiaoxing Ye)
    * serve sftp: Add support for public key with auth proxy (Paul Tinsley)
    * stats: Show deletes in stats and hide zero stats (anuar45)
* Bug Fixes
    * accounting
        * Fix error counter counting multiple times (Ankur Gupta)
        * Fix error count shown as checks (Cnly)
        * Clear finished transfer in stats-reset (Maciej Zimnoch)
        * Added StatsInfo locking in statsGroups sum function (Michał Matczuk)
    * asyncreader: Fix EOF error (buengese)
    * check: Fix `--one-way` recursing more directories than it needs to (Nick Craig-Wood)
    * chunkedreader: Disable hash calculation for first segment (Nick Craig-Wood)
    * config
        * Do not open browser on headless on drive/gcs/google photos (Xiaoxing Ye)
        * SetValueAndSave ignore error if config section does not exist yet (buengese)
    * cmd: Fix completion with an encrypted config (Danil Semelenov)
    * dbhashsum: Stop it returning UNSUPPORTED on dropbox (Nick Craig-Wood)
    * dedupe: Add missing modes to help string (Nick Craig-Wood)
    * operations
        * Fix dedupe continuing on errors like insufficientFilePermisson (SezalAgrawal)
        * Clear accounting before low level retry (Maciej Zimnoch)
        * Write debug message when hashes could not be checked (Ole Schütt)
        * Move interface assertion to tests to remove pflag dependency (Nick Craig-Wood)
        * Make NewOverrideObjectInfo public and factor uses (Nick Craig-Wood)
    * proxy: Replace use of bcrypt with sha256 (Nick Craig-Wood)
    * vendor
        * Update bazil.org/fuse to fix FreeBSD 12.1 (Nick Craig-Wood)
        * Update github.com/t3rm1n4l/go-mega to fix mega "illegal base64 data at input byte 22" (Nick Craig-Wood)
        * Update termbox-go to fix ncdu command on FreeBSD (Kuang-che Wu)
        * Update t3rm1n4l/go-mega - fixes mega: couldn't login: crypto/aes: invalid key size 0 (Nick Craig-Wood)
* Mount
    * Enable async reads for a 20% speedup (Nick Craig-Wood)
    * Replace use of WriteAt with Write for cache mode >= writes and O_APPEND (Brett Dutro)
    * Make sure we call unmount when exiting (Nick Craig-Wood)
    * Don't build on go1.10 as bazil/fuse no longer supports it (Nick Craig-Wood)
    * When setting dates discard out of range dates (Nick Craig-Wood)
* VFS
    * Add a newly created file straight into the directory (Nick Craig-Wood)
    * Only calculate one hash for reads for a speedup (Nick Craig-Wood)
    * Make ReadAt for non cached files work better with non-sequential reads (Nick Craig-Wood)
    * Fix edge cases when reading ModTime from file (Nick Craig-Wood)
    * Make sure existing files opened for write show correct size (Nick Craig-Wood)
    * Don't cache the path in RW file objects to fix renaming (Nick Craig-Wood)
    * Fix rename of open files when using the VFS cache (Nick Craig-Wood)
    * When renaming files in the cache, rename the cache item in memory too (Nick Craig-Wood)
    * Fix open file renaming on drive when using `--vfs-cache-mode writes` (Nick Craig-Wood)
    * Fix incorrect modtime for mv into mount with `--vfs-cache-mode writes` (Nick Craig-Wood)
    * On rename, rename in cache too if the file exists (Anagh Kumar Baranwal)
* Local
    * Make source file being updated errors be NoLowLevelRetry errors (Nick Craig-Wood)
    * Fix update of hidden files on Windows (Nick Craig-Wood)
* Cache
    * Follow move of upstream library github.com/coreos/bbolt github.com/etcd-io/bbolt (Nick Craig-Wood)
    * Fix `fatal error: concurrent map writes` (Nick Craig-Wood)
* Crypt
    * Reorder the filename encryption options (Thomas Eales)
    * Correctly handle trailing dot (buengese)
* Chunker
    * Reduce length of temporary suffix (Ivan Andreev)
* Drive
    * Add `--drive-stop-on-upload-limit` flag to stop syncs when upload limit reached (Nick Craig-Wood)
    * Add `--drive-use-shared-date` to use date file was shared instead of modified date (Garry McNulty)
    * Make sure invalid auth for teamdrives always reports an error (Nick Craig-Wood)
    * Fix `--fast-list` when using appDataFolder (Nick Craig-Wood)
    * Use multipart resumable uploads for streaming and uploads in mount (Nick Craig-Wood)
    * Log an ERROR if an incomplete search is returned (Nick Craig-Wood)
    * Hide dangerous config from the configurator (Nick Craig-Wood)
* Dropbox
    * Treat `insufficient_space` errors as non retriable errors (Nick Craig-Wood)
* Jottacloud
    * Use new auth method used by official client (buengese)
    * Add URL to generate Login Token to config wizard (Nick Craig-Wood)
    * Add support for whitelabel versions (buengese)
* Koofr
    * Use rclone HTTP client. (jaKa)
* Onedrive
    * Add Sites.Read.All permission (Benjamin Richter)
    * Add support for "Retry-After" header (Motonori IWAMURO)
* Opendrive
    * Implement `--opendrive-chunk-size` (Nick Craig-Wood)
* S3
    * Re-implement multipart upload to fix memory issues (Nick Craig-Wood)
    * Add `--s3-copy-cutoff` for size to switch to multipart copy (Nick Craig-Wood)
    * Add new region Asia Pacific (Hong Kong) (Outvi V)
    * Reduce memory usage streaming files by reducing max stream upload size (Nick Craig-Wood)
    * Add `--s3-list-chunk` option for bucket listing (Thomas Kriechbaumer)
    * Force path style bucket access to off for AWS deprecation (Nick Craig-Wood)
    * Use AWS web identity role provider if available (Tennix)
    * Add StackPath Object Storage Support (Dave Koston)
    * Fix ExpiryWindow value (Aleksandar Jankovic)
    * Fix DisableChecksum condition (Aleksandar Janković)
    * Fix URL decoding of NextMarker (Nick Craig-Wood)
* SFTP
    * Add `--sftp-skip-links` to skip symlinks and non regular files (Nick Craig-Wood)
    * Retry Creation of Connection (Sebastian Brandt)
    * Fix "failed to parse private key file: ssh: not an encrypted key" error (Nick Craig-Wood)
    * Open files for update write only to fix AWS SFTP interop (Nick Craig-Wood)
* Swift
    * Preserve the segments of a dynamic large object when deleting objects in a container which has versioning enabled (Nguyễn Hữu Luân)
    * Fix parsing of X-Object-Manifest (Nick Craig-Wood)
    * Update OVH API endpoint (unbelauscht)
* WebDAV
    * Make nextcloud only upload SHA1 checksums (Nick Craig-Wood)
    * Fix case of "Bearer" in Authorization: header to agree with RFC (Nick Craig-Wood)
    * Add Referer header to fix problems with WAFs (Nick Craig-Wood)
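A usage sketch for the new `--order-by` flag above (the paths are illustrative):

```
# Transfer the largest files first
rclone copy /src remote:dst --order-by size,descending
```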
## v1.50.2 - 2019-11-19

* Bug Fixes
    * accounting: Fix memory leak on retries operations (Nick Craig-Wood)
* Drive
    * Fix listing of the root directory with drive.files scope (Nick Craig-Wood)
    * Fix --drive-root-folder-id with team/shared drives (Nick Craig-Wood)

## v1.50.1 - 2019-11-02

* Bug Fixes
    * hash: Fix accidentally changed hash names for `DropboxHash` and `CRC-32` (Nick Craig-Wood)
    * fshttp: Fix error reporting on tpslimit token bucket errors (Nick Craig-Wood)
    * fshttp: Don't print token bucket errors on context cancelled (Nick Craig-Wood)
* Local
    * Fix listings of . on Windows (Nick Craig-Wood)
* Onedrive
    * Fix DirMove/Move after Onedrive change (Xiaoxing Ye)

## v1.50.0 - 2019-10-26

* New backends
    * [Citrix Sharefile](https://rclone.org/sharefile/) (Nick Craig-Wood)
    * [Chunker](https://rclone.org/chunker/) - an overlay backend to split files into smaller parts (Ivan Andreev)
    * [Mail.ru Cloud](https://rclone.org/mailru/) (Ivan Andreev)
* New Features
    * encodings (Fabian Möller & Nick Craig-Wood)
        * All backends now use file name encoding to ensure any file name can be written to any backend.
        * See the [restricted file name docs](https://rclone.org/overview/#restricted-filenames) for more info and the [local backend docs](/local/#filenames).
        * Some file names may look different in rclone if you are using any control characters in names or [unicode FULLWIDTH symbols](https://en.wikipedia.org/wiki/Halfwidth_and_Fullwidth_Forms_(Unicode_block)).
    * build
        * Update to use go1.13 for the build (Nick Craig-Wood)
        * Drop support for go1.9 (Nick Craig-Wood)
        * Build rclone with GitHub actions (Nick Craig-Wood)
        * Convert python scripts to python3 (Nick Craig-Wood)
        * Swap Azure/go-ansiterm for mattn/go-colorable (Nick Craig-Wood)
        * Dockerfile fixes (Matei David)
        * Add [plugin support](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#writing-a-plugin) for backends and commands (Richard Patel)
    * config
        * Use alternating Red/Green in config to make more obvious (Nick Craig-Wood)
    * contrib
        * Add sample DLNA server Docker Compose manifest. (pataquets)
        * Add sample WebDAV server Docker Compose manifest. (pataquets)
    * copyurl
        * Add `--auto-filename` flag for using file name from URL in destination path (Denis)
    * serve dlna:
        * Many compatibility improvements (Dan Walters)
        * Support for external srt subtitles (Dan Walters)
    * rc
        * Added command core/quit (Saksham Khanna)
* Bug Fixes
    * sync
        * Make `--update`/`-u` not transfer files that haven't changed (Nick Craig-Wood)
        * Free objects after they come out of the transfer pipe to save memory (Nick Craig-Wood)
        * Fix `--files-from` without `--no-traverse` doing a recursive scan (Nick Craig-Wood)
    * operations
        * Fix accounting for server side copies (Nick Craig-Wood)
        * Display 'All duplicates removed' only if dedupe successful (Sezal Agrawal)
        * Display 'Deleted X extra copies' only if dedupe successful (Sezal Agrawal)
    * accounting
        * Only allow up to 100 completed transfers in the accounting list to save memory (Nick Craig-Wood)
        * Cull the old time ranges when possible to save memory (Nick Craig-Wood)
        * Fix panic due to server-side copy fallback (Ivan Andreev)
        * Fix memory leak noticeable for transfers of large numbers of objects (Nick Craig-Wood)
        * Fix total duration calculation (Nick Craig-Wood)
    * cmd
        * Fix environment variables not setting command line flags (Nick Craig-Wood)
        * Make autocomplete compatible with bash's posix mode for macOS (Danil Semelenov)
        * Make `--progress` work in git bash on Windows (Nick Craig-Wood)
        * Fix 'compopt: command not found' on autocomplete on macOS (Danil Semelenov)
    * config
        * Fix setting of non top level flags from environment variables (Nick Craig-Wood)
        * Check config names more carefully and report errors (Nick Craig-Wood)
        * Remove error: can't use `--size-only` and `--ignore-size` together (Nick Craig-Wood)
    * filter: Prevent mixing options when `--files-from` is in use (Michele Caci)
    * serve sftp: Fix crash on unsupported operations (eg Readlink) (Nick Craig-Wood)
* Mount
    * Allow files of unknown size to be read properly (Nick Craig-Wood)
    * Skip tests on <= 2 CPUs to avoid lockup (Nick Craig-Wood)
    * Fix panic on File.Open (Nick Craig-Wood)
    * Fix "mount_fusefs: -o timeout=: option not supported" on FreeBSD (Nick Craig-Wood)
    * Don't pass huge filenames (>4k) to FUSE as it can't cope (Nick Craig-Wood)
* VFS
    * Add flag `--vfs-case-insensitive` for windows/macOS mounts (Ivan Andreev)
    * Make objects of unknown size readable through the VFS (Nick Craig-Wood)
    * Move writeback of dirty data out of close() method into its own method (FlushWrites) and remove close() call from Flush() (Brett Dutro)
    * Stop empty dirs disappearing when renamed on bucket based remotes (Nick Craig-Wood)
    * Stop change notify polling clearing so much of the directory cache (Nick Craig-Wood)
* Azure Blob
    * Disable logging to the Windows event log (Nick Craig-Wood)
* B2
    * Remove `unverified:` prefix on sha1 to improve interop (eg with CyberDuck) (Nick Craig-Wood)
* Box
    * Add options to get access token via JWT auth (David)
* Drive
    * Disable HTTP/2 by default to work around INTERNAL_ERROR problems (Nick Craig-Wood)
    * Make sure that drive root ID is always canonical (Nick Craig-Wood)
    * Fix `--drive-shared-with-me` from the root with ls and `--fast-list` (Nick Craig-Wood)
    * Fix ChangeNotify polling for shared drives (Nick Craig-Wood)
    * Fix change notify polling when using appDataFolder (Nick Craig-Wood)
* Dropbox
    * Make disallowed filenames errors not retry (Nick Craig-Wood)
    * Fix nil pointer exception on restricted files (Nick Craig-Wood)
* Fichier
    * Fix accessing files > 2GB on 32 bit systems (Nick Craig-Wood)
* FTP
    * Allow disabling EPSV mode (Jon Fautley)
* HTTP
    * HEAD directory entries in parallel to speedup (Nick Craig-Wood)
    * Add `--http-no-head` to stop rclone doing HEAD in listings (Nick Craig-Wood)
* Putio
    * Add ability to resume uploads (Cenk Alti)
* S3
    * Fix signature v2_auth headers (Anthony Rusdi)
    * Fix encoding for control characters (Nick Craig-Wood)
    * Only ask for URL encoded directory listings if we need them on Ceph (Nick Craig-Wood)
    * Add option for multipart failure behaviour (Aleksandar Jankovic)
    * Support for multipart copy (庄天翼)
    * Fix nil pointer reference if no metadata returned for object (Nick Craig-Wood)
* SFTP
    * Fix `--sftp-ask-password` trying to contact the ssh agent (Nick Craig-Wood)
    * Fix hashes of files with backslashes (Nick Craig-Wood)
    * Include more ciphers with `--sftp-use-insecure-cipher` (Carlos Ferreyra)
* WebDAV
    * Parse and return Sharepoint error response (Henning Surmeier)
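A usage sketch for the new copyurl `--auto-filename` flag above (the URL and remote are illustrative):

```
# Name the destination file after the last part of the URL
rclone copyurl --auto-filename https://example.com/files/report.pdf remote:downloads
```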
## v1.49.5 - 2019-10-05

* Bug Fixes
    * Revert back to go1.12.x for the v1.49.x builds as go1.13.x was causing issues (Nick Craig-Wood)
    * Fix rpm packages by using master builds of nfpm (Nick Craig-Wood)
    * Fix macOS build after brew changes (Nick Craig-Wood)

## v1.49.4 - 2019-09-29

* Bug Fixes
    * cmd/rcd: Address ZipSlip vulnerability (Richard Patel)
    * accounting: Fix file handle leak on errors (Nick Craig-Wood)
    * oauthutil: Fix security problem when running with two users on the same machine (Nick Craig-Wood)
* FTP
    * Fix listing of an empty root returning: error dir not found (Nick Craig-Wood)
* S3
    * Fix SetModTime on GLACIER/ARCHIVE objects and implement set/get tier (Nick Craig-Wood)

## v1.49.3 - 2019-09-15

* Bug Fixes
    * accounting
        * Fix total duration calculation (Aleksandar Jankovic)
        * Fix "file already closed" on transfer retries (Nick Craig-Wood)
## v1.49.2 - 2019-09-08

* New Features
    * build: Add Docker workflow support (Alfonso Montero)
* Bug Fixes
    * accounting: Fix locking in Transfer to avoid deadlock with `--progress` (Nick Craig-Wood)
    * docs: Fix template argument for mktemp in install.sh (Cnly)
    * operations: Fix `-u`/`--update` with google photos / files of unknown size (Nick Craig-Wood)
    * rc: Fix docs for config/create /update /password (Nick Craig-Wood)
* Google Cloud Storage
    * Fix need for elevated permissions on SetModTime (Nick Craig-Wood)

## v1.49.1 - 2019-08-28

* Bug Fixes
    * config: Fix generated passwords being stored as empty password (Nick Craig-Wood)
    * rcd: Added missing parameter for web-gui info logs. (Chaitanya)
* Googlephotos
    * Fix crash on error response (Nick Craig-Wood)
* Onedrive
    * Fix crash on error response (Nick Craig-Wood)
## v1.49.0 - 2019-08-26

* New backends
    * [1fichier](https://rclone.org/fichier/) (Laura Hausmann)
    * [Google Photos](https://rclone.org/googlephotos/) (Nick Craig-Wood)
    * [Putio](https://rclone.org/putio/) (Cenk Alti)
    * [premiumize.me](https://rclone.org/premiumizeme/) (Nick Craig-Wood)
* New Features
    * Experimental [web GUI](https://rclone.org/gui/) (Chaitanya Bankanhal)
    * Implement `--compare-dest` & `--copy-dest` (yparitcher)
    * Implement `--suffix` without `--backup-dir` for backup to current dir (yparitcher)
    * `config reconnect` to re-login (re-run the oauth login) for the backend. (Nick Craig-Wood)
    * `config userinfo` to discover which user you are logged in as. (Nick Craig-Wood)
    * `config disconnect` to disconnect you (log out) from the backend. (Nick Craig-Wood)
    * Add `--use-json-log` for JSON logging (justinalin)
    * Add context propagation to rclone (Aleksandar Jankovic)
    * Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic)
    * Add Higher units for ETA (AbelThar)
    * Update rclone logos to new design (Andreas Chlupka)
    * hash: Add CRC-32 support (Cenk Alti)
    * help showbackend: Fixed advanced option category when there are no standard options (buengese)
    * ncdu: Display/Copy to Clipboard Current Path (Gary Kim)
    * operations:
        * Run hashing operations in parallel (Nick Craig-Wood)
        * Don't calculate checksums when using `--ignore-checksum` (Nick Craig-Wood)
        * Check transfer hashes when using `--size-only` mode (Nick Craig-Wood)
        * Disable multi thread copy for local to local copies (Nick Craig-Wood)
        * Debug successful hashes as well as failures (Nick Craig-Wood)
    * rc
        * Add ability to stop async jobs (Aleksandar Jankovic)
        * Return current settings if core/bwlimit called without parameters (Nick Craig-Wood)
        * Rclone-WebUI integration with rclone (Chaitanya Bankanhal)
        * Added command line parameter to control the cross origin resource sharing (CORS) in the rcd (Security Improvement) (Chaitanya Bankanhal)
        * Add anchor tags to the docs so links are consistent (Nick Craig-Wood)
        * Remove _async key from input parameters after parsing so later operations won't get confused (buengese)
        * Add call to clear stats (Aleksandar Jankovic)
    * rcd
        * Auto-login for web-gui (Chaitanya Bankanhal)
        * Implement `--baseurl` for rcd and web-gui (Chaitanya Bankanhal)
    * serve dlna
        * Only select interfaces which can multicast for SSDP (Nick Craig-Wood)
        * Add more builtin mime types to cover standard audio/video (Nick Craig-Wood)
        * Fix missing mime types on Android causing missing videos (Nick Craig-Wood)
    * serve ftp
        * Refactor to bring into line with other serve commands (Nick Craig-Wood)
        * Implement `--auth-proxy` (Nick Craig-Wood)
    * serve http: Implement `--baseurl` (Nick Craig-Wood)
    * serve restic: Implement `--baseurl` (Nick Craig-Wood)
    * serve sftp
        * Implement auth proxy (Nick Craig-Wood)
        * Fix detection of whether server is authorized (Nick Craig-Wood)
    * serve webdav
        * Implement `--baseurl` (Nick Craig-Wood)
        * Support `--auth-proxy` (Nick Craig-Wood)
* Bug Fixes
    * Make "bad record MAC" a retriable error (Nick Craig-Wood)
    * copyurl: Fix copying files that return HTTP errors (Nick Craig-Wood)
    * march: Fix checking sub-directories when using `--no-traverse` (buengese)
    * rc
        * Fix unmarshalable http.AuthFn in options and put in test for marshalability (Nick Craig-Wood)
        * Move job expire flags to rc to fix initialization problem (Nick Craig-Wood)
        * Fix `--loopback` with rc/list and others (Nick Craig-Wood)
    * rcat: Fix slowdown on systems with multiple hashes (Nick Craig-Wood)
    * rcd: Fix permissions problems on cache directory with web gui download (Nick Craig-Wood)
* Mount
    * Default `--daemon-timeout` to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)
    * Update docs to show mounting from root OK for bucket based (Nick Craig-Wood)
    * Remove nonseekable flag from write files (Nick Craig-Wood)
* VFS
    * Make write without cache more efficient (Nick Craig-Wood)
    * Fix `--vfs-cache-mode minimal` and `writes` ignoring cached files (Nick Craig-Wood)
* Local
    * Add `--local-case-sensitive` and `--local-case-insensitive` (Nick Craig-Wood)
    * Avoid polluting page cache when uploading local files to remote backends (Michał Matczuk)
    * Don't calculate any hashes by default (Nick Craig-Wood)
    * Fadvise run syscall on a dedicated go routine (Michał Matczuk)
* Azure Blob
    * Azure Storage Emulator support (Sandeep)
    * Updated config help details to remove connection string references (Sandeep)
    * Make all operations work from the root (Nick Craig-Wood)
* B2
    * Implement link sharing (yparitcher)
    * Enable server side copy to copy between buckets (Nick Craig-Wood)
    * Make all operations work from the root (Nick Craig-Wood)
* Drive
    * Fix server side copy of big files (Nick Craig-Wood)
    * Update API for teamdrive use (Nick Craig-Wood)
    * Add error for purge with `--drive-trashed-only` (ginvine)
* Fichier
    * Make FolderID int and adjust related code (buengese)
* Google Cloud Storage
    * Reduce oauth scope requested as suggested by Google (Nick Craig-Wood)
    * Make all operations work from the root (Nick Craig-Wood)
* HTTP
    * Add `--http-headers` flag for setting arbitrary headers (Nick Craig-Wood)
* Jottacloud
    * Use new api for retrieving internal username (buengese)
    * Refactor configuration and minor cleanup (buengese)
* Koofr
    * Support setting modification times on Koofr backend. (jaKa)
* Opendrive
    * Refactor to use existing lib/rest facilities for uploads (Nick Craig-Wood)
* Qingstor
    * Upgrade to v3 SDK and fix listing loop (Nick Craig-Wood)
    * Make all operations work from the root (Nick Craig-Wood)
* S3
    * Add INTELLIGENT_TIERING storage class (Matti Niemenmaa)
    * Make all operations work from the root (Nick Craig-Wood)
* SFTP
    * Add missing interface check and fix About (Nick Craig-Wood)
    * Completely ignore all modtime checks if SetModTime=false (Jon Fautley)
    * Support md5/sha1 with rsync.net (Nick Craig-Wood)
    * Save the md5/sha1 command in use to the config file for efficiency (Nick Craig-Wood)
    * Opt-in support for diffie-hellman-group-exchange-sha256 diffie-hellman-group-exchange-sha1 (Yi FU)
* Swift
    * Use FixRangeOption to fix 0 length files via the VFS (Nick Craig-Wood)
    * Fix upload when using no_chunk to return the correct size (Nick Craig-Wood)
    * Make all operations work from the root (Nick Craig-Wood)
    * Fix segments leak during failed large file uploads. (nguyenhuuluan434)
* WebDAV
    * Add `--webdav-bearer-token-command` (Nick Craig-Wood)
    * Refresh token when it expires with `--webdav-bearer-token-command` (Nick Craig-Wood)
    * Add docs for using bearer_token_command with oidc-agent (Paul Millar)
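A usage sketch for `--suffix` without `--backup-dir` above (the suffix and paths are illustrative):

```
# Keep overwritten files alongside the originals with a suffix
rclone sync /src remote:dst --suffix .bak
```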
## v1.48.0 - 2019-06-15

* New commands
    * serve sftp: Serve an rclone remote over SFTP (Nick Craig-Wood)
* New Features
    * Multi threaded downloads to local storage (Nick Craig-Wood)
        * controlled with `--multi-thread-cutoff` and `--multi-thread-streams`
    * Use rclone.conf from rclone executable directory to enable portable use (albertony)
    * Allow sync of a file and a directory with the same name (forgems)
        * this is common on bucket based remotes, eg s3, gcs
    * Add `--ignore-case-sync` for forced case insensitivity (garry415)
    * Implement `--stats-one-line-date` and `--stats-one-line-date-format` (Peter Berbec)
    * Log an ERROR for all commands which exit with non-zero status (Nick Craig-Wood)
    * Use go-homedir to read the home directory more reliably (Nick Craig-Wood)
    * Enable creating encrypted config through external script invocation (Wojciech Smigielski)
    * build: Drop support for go1.8 (Nick Craig-Wood)
    * config: Make config create/update encrypt passwords where necessary (Nick Craig-Wood)
    * copyurl: Honor `--no-check-certificate` (Stefan Breunig)
    * install: Linux skip man pages if no mandb (didil)
    * lsf: Support showing the Tier of the object (Nick Craig-Wood)
    * lsjson
        * Added EncryptedPath to output (calisro)
        * Support showing the Tier of the object (Nick Craig-Wood)
        * Add IsBucket field for bucket based remote listing of the root (Nick Craig-Wood)
    * rc
        * Add `--loopback` flag to run commands directly without a server (Nick Craig-Wood)
        * Add operations/fsinfo: Return information about the remote (Nick Craig-Wood)
        * Skip auth for OPTIONS request (Nick Craig-Wood)
        * cmd/providers: Add DefaultStr, ValueStr and Type fields (Nick Craig-Wood)
        * jobs: Make job expiry timeouts configurable (Aleksandar Jankovic)
    * serve dlna reworked and improved (Dan Walters)
    * serve ftp: add `--ftp-public-ip` flag to specify public IP (calistri)
    * serve restic: Add support for `--private-repos` in `serve restic` (Florian Apolloner)
    * serve webdav: Combine serve webdav and serve http (Gary Kim)
    * size: Ignore negative sizes when calculating total (Garry McNulty)
* Bug Fixes
    * Make move and copy individual files obey `--backup-dir` (Nick Craig-Wood)
    * If `--ignore-checksum` is in effect, don't calculate checksum (Nick Craig-Wood)
    * moveto: Fix case-insensitive same remote move (Gary Kim)
    * rc: Fix serving bucket based objects with `--rc-serve` (Nick Craig-Wood)
    * serve webdav: Fix serveDir not being updated with changes from webdav (Gary Kim)
* Mount
    * Fix poll interval documentation (Animosity022)
* VFS
    * Make WriteAt for non cached files work with non-sequential writes (Nick Craig-Wood)
* Local
    * Only calculate the required hashes for big speedup (Nick Craig-Wood)
    * Log errors when listing instead of returning an error (Nick Craig-Wood)
    * Fix preallocate warning on Linux with ZFS (Nick Craig-Wood)
* Crypt
    * Make rclone dedupe work through crypt (Nick Craig-Wood)
    * Fix wrapping of ChangeNotify to decrypt directories properly (Nick Craig-Wood)
    * Support PublicLink (rclone link) of underlying backend (Nick Craig-Wood)
    * Implement Optional methods SetTier, GetTier (Nick Craig-Wood)
* B2
    * Implement server side copy (Nick Craig-Wood)
    * Implement SetModTime (Nick Craig-Wood)
* Drive
    * Fix move and copy from TeamDrive to GDrive (Fionera)
    * Add notes that cleanup works in the background on drive (Nick Craig-Wood)
    * Add `--drive-server-side-across-configs` to default back to old server side copy semantics by default (Nick Craig-Wood)
    * Add `--drive-size-as-quota` to show storage quota usage for file size (Garry McNulty)
* FTP
    * Add FTP List timeout (Jeff Quinn)
    * Add FTP over TLS support (Gary Kim)
    * Add `--ftp-no-check-certificate` option for FTPS (Gary Kim)
* Google Cloud Storage
    * Fix upload errors when uploading pre 1970 files (Nick Craig-Wood)
* Jottacloud
    * Add support for selecting device and mountpoint. (buengese)
* Mega
    * Add cleanup support (Gary Kim)
* Onedrive
    * More accurately check if root is found (Cnly)
* S3
    * Support S3 Accelerated endpoints with `--s3-use-accelerate-endpoint` (Nick Craig-Wood)
    * Add config info for Wasabi's EU Central endpoint (Robert Marko)
    * Make SetModTime work for GLACIER while syncing (Philip Harvey)
* SFTP
    * Add About support (Gary Kim)
    * Fix about parsing of `df` results so it can cope with -ve results (Nick Craig-Wood)
    * Send custom client version and debug server version (Nick Craig-Wood)
* WebDAV
    * Retry on 423 Locked errors (Nick Craig-Wood)
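A usage sketch for the multi threaded download flags above (the values and paths are illustrative):

```
# Use 8 parallel streams for files bigger than 64M when downloading locally
rclone copy remote:big /local/dir --multi-thread-cutoff 64M --multi-thread-streams 8
```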
* Make server side copy account bytes and obey `--max-transfer` (Nick Craig-Wood) * Add `--create-empty-src-dirs` flag and default to not creating empty dirs (ishuah) * Add client side TLS/SSL flags `--ca-cert`/`--client-cert`/`--client-key` (Nick Craig-Wood) * Implement `--suffix-keep-extension` for use with `--suffix` (Nick Craig-Wood) * build: * Switch to semvar compliant version tags to be go modules compliant (Nick Craig-Wood) * Update to use go1.12.x for the build (Nick Craig-Wood) * serve dlna: Add connection manager service description to improve compatibility (Dan Walters) * lsf: Add 'e' format to show encrypted names and 'o' for original IDs (Nick Craig-Wood) * lsjson: Added `--files-only` and `--dirs-only` flags (calistri) * rc: Implement operations/publiclink the equivalent of `rclone link` (Nick Craig-Wood) * Bug Fixes * accounting: Fix total ETA when `--stats-unit bits` is in effect (Nick Craig-Wood) * Bash TAB completion * Use private custom func to fix clash between rclone and kubectl (Nick Craig-Wood) * Fix for remotes with underscores in their names (Six) * Fix completion of remotes (Florian Gamböck) * Fix autocompletion of remote paths with spaces (Danil Semelenov) * serve dlna: Fix root XML service descriptor (Dan Walters) * ncdu: Fix display corruption with Chinese characters (Nick Craig-Wood) * Add SIGTERM to signals which run the exit handlers on unix (Nick Craig-Wood) * rc: Reload filter when the options are set via the rc (Nick Craig-Wood) * VFS / Mount * Fix FreeBSD: Ignore Truncate if called with no readers and already the correct size (Nick Craig-Wood) * Read directory and check for a file before mkdir (Nick Craig-Wood) * Shorten the locking window for vfs/refresh (Nick Craig-Wood) * Azure Blob * Enable MD5 checksums when uploading files bigger than the "Cutoff" (Dr.Rx) * Fix SAS URL support (Nick Craig-Wood) * B2 * Allow manual configuration of backblaze downloadUrl (Vince) * Ignore already_hidden error on remove (Nick Craig-Wood) * Ignore malformed `src_last_modified_millis` (Nick Craig-Wood) * Drive * Add `--skip-checksum-gphotos` to ignore incorrect checksums on Google Photos (Nick Craig-Wood) * Allow server side move/copy between different remotes. 
    * Add docs on team drives and `--fast-list` eventual consistency (Nestar47)
    * Fix imports of text files (Nick Craig-Wood)
    * Fix range requests on 0 length files (Nick Craig-Wood)
    * Fix creation of duplicates with server side copy (Nick Craig-Wood)
* Dropbox
    * Retry blank errors to fix long listings (Nick Craig-Wood)
* FTP
    * Add `--ftp-concurrency` to limit maximum number of connections (Nick Craig-Wood)
* Google Cloud Storage
    * Fall back to default application credentials (marcintustin)
    * Allow bucket policy only buckets (Nick Craig-Wood)
* HTTP
    * Add `--http-no-slash` for websites with directories with no slashes (Nick Craig-Wood)
    * Remove duplicates from listings (Nick Craig-Wood)
    * Fix socket leak on 404 errors (Nick Craig-Wood)
* Jottacloud
    * Fix token refresh (Sebastian Bünger)
    * Add device registration (Oliver Heyme)
* Onedrive
    * Implement graceful cancel of multipart uploads if rclone is interrupted (Cnly)
    * Always add trailing colon to path when addressing items (Cnly)
    * Return errors instead of panic for invalid uploads (Fabian Möller)
* S3
    * Add support for "Glacier Deep Archive" storage class (Manu)
    * Update Dreamhost endpoint (Nick Craig-Wood)
    * Note incompatibility with CEPH Jewel (Nick Craig-Wood)
* SFTP
    * Allow custom ssh client config (Alexandru Bumbacea)
* Swift
    * Obey Retry-After to enable OVH restore from cold storage (Nick Craig-Wood)
    * Work around token expiry on CEPH (Nick Craig-Wood)
* WebDAV
    * Allow IsCollection property to be integer or boolean (Nick Craig-Wood)
    * Fix race when creating directories (Nick Craig-Wood)
    * Fix About/df when reading the available/total returns 0 (Nick Craig-Wood)

## v1.46 - 2019-02-09

* New backends
    * Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick Craig-Wood)
* New commands
    * serve dlna: serves a remote via DLNA for the local network (nicolov)
* New Features
    * copy, move: Restore deprecated `--no-traverse` flag (Nick Craig-Wood)
        * This is useful for when transferring a small number of files into a large destination
    * genautocomplete: Add remote path completion for bash completion (Christopher Peterson & Danil Semelenov)
    * Buffer memory handling reworked to return memory to the OS better (Nick Craig-Wood)
        * Buffer recycling library to replace sync.Pool
        * Optionally use memory mapped memory for better memory shrinking
        * Enable with `--use-mmap` if having memory problems - not default yet (see the example below)
    * Parallelise reading of files specified by `--files-from` (Nick Craig-Wood)
    * check: Add stats showing total files matched. (Dario Guzik)
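A sketch of opting in to the memory-mapped buffers mentioned above; the paths are placeholders and the flag is only worth trying if rclone's memory use is a problem:

```
# Use memory mapped buffers, which can be returned to the OS more readily
rclone sync --use-mmap /src remote:dst
```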
    * Allow rename/delete open files under Windows (Nick Craig-Wood)
    * lsjson: Use exactly the correct number of decimal places in the seconds (Nick Craig-Wood)
    * Add cookie support with cmdline switch `--use-cookies` for all HTTP based remotes (qip)
    * Warn if `--checksum` is set but there are no hashes available (Nick Craig-Wood)
    * Rework rate limiting (pacer) to be more accurate and allow bursting (Nick Craig-Wood)
    * Improve error reporting for too many/few arguments in commands (Nick Craig-Wood)
    * listremotes: Remove `-l` short flag as it conflicts with the new global flag (weetmuts)
    * Make http serving with auth generate INFO messages on auth fail (Nick Craig-Wood)
* Bug Fixes
    * Fix layout of stats (Nick Craig-Wood)
    * Fix `--progress` crash under Windows Jenkins (Nick Craig-Wood)
    * Fix transfer of google/onedrive docs by calling Rcat in Copy when size is -1 (Cnly)
    * copyurl: Fix checking of `--dry-run` (Denis Skovpen)
* Mount
    * Check that mountpoint and local directory to mount don't overlap (Nick Craig-Wood)
    * Fix mount size under 32 bit Windows (Nick Craig-Wood)
* VFS
    * Implement renaming of directories for backends without DirMove (Nick Craig-Wood)
        * now all backends except b2 support renaming directories
    * Implement `--vfs-cache-max-size` to limit the total size of the cache (Nick Craig-Wood)
    * Add `--dir-perms` and `--file-perms` flags to set default permissions (Nick Craig-Wood)
    * Fix deadlock on concurrent operations on a directory (Nick Craig-Wood)
    * Fix deadlock between RWFileHandle.close and File.Remove (Nick Craig-Wood)
    * Fix renaming/deleting open files with cache mode "writes" under Windows (Nick Craig-Wood)
    * Fix panic on rename with `--dry-run` set (Nick Craig-Wood)
    * Fix vfs/refresh with recurse=true needing the `--fast-list` flag
* Local
    * Add support for `-l`/`--links` (symbolic link translation) (yair@unicorn)
        * this works by showing links as `link.rclonelink` - see local backend docs for more info
        * this errors if used with `-L`/`--copy-links`
    * Fix renaming/deleting open files on Windows (Nick Craig-Wood)
* Crypt
    * Check for maximum length before decrypting filename to fix panic (Garry McNulty)
* Azure Blob
    * Allow building azureblob backend on *BSD (themylogin)
    * Use the rclone HTTP client to support `--dump headers`, `--tpslimit` etc (Nick Craig-Wood)
    * Use the s3 pacer for 0 delay in non error conditions (Nick Craig-Wood)
    * Ignore directory markers (Nick Craig-Wood)
    * Stop Mkdir attempting to create existing containers (Nick Craig-Wood)
* B2
    * cleanup: will remove unfinished large files >24hrs old (Garry McNulty)
    * For a bucket limited application key check the bucket name (Nick Craig-Wood)
        * before this, rclone would use the authorised bucket regardless of what you put on the command line
    * Added `--b2-disable-checksum` flag (Wojciech Smigielski)
        * this enables large files to be uploaded without a SHA-1 hash for speed reasons
* Drive
    * Set default pacer to 100ms for 10 tps (Nick Craig-Wood)
        * This fits the Google defaults much better and reduces the 403 errors massively
        * Add `--drive-pacer-min-sleep` and `--drive-pacer-burst` to control the pacer
    * Improve ChangeNotify support for items with multiple parents (Fabian Möller)
    * Fix ListR for items with multiple parents - this fixes oddities with `vfs/refresh` (Fabian Möller)
    * Fix using `--drive-impersonate` and appfolders (Nick Craig-Wood)
    * Fix google docs in rclone mount for some (not all) applications (Nick Craig-Wood)
* Dropbox
    * Retry-After support for Dropbox backend (Mathieu Carbou)
* FTP
    * Wait for 60 seconds for a connection to Close then declare it dead (Nick Craig-Wood)
        * helps with indefinite hangs on some FTP servers
* Google Cloud Storage
    * Update google cloud storage endpoints (weetmuts)
* HTTP
    * Add an example with username and password which is supported but wasn't documented (Nick Craig-Wood)
    * Fix backend with `--files-from` and non-existent files (Nick Craig-Wood)
* Hubic
    * Make error message more informative if authentication fails (Nick Craig-Wood)
* Jottacloud
    * Resume and deduplication support (Oliver Heyme)
    * Use token auth for all API requests. Don't store password anymore (Sebastian Bünger)
    * Add support for 2-factor authentication (Sebastian Bünger)
* Mega
    * Implement v2 account login which fixes logins for newer Mega accounts (Nick Craig-Wood)
    * Return error if an unknown length file is attempted to be uploaded (Nick Craig-Wood)
    * Add new error codes for better error reporting (Nick Craig-Wood)
* Onedrive
    * Fix broken support for "shared with me" folders (Alex Chen)
    * Fix root ID not normalised (Cnly)
    * Return err instead of panic on unknown-sized uploads (Cnly)
* Qingstor
    * Fix go routine leak on multipart upload errors (Nick Craig-Wood)
    * Add upload chunk size/concurrency/cutoff control (Nick Craig-Wood)
    * Default `--qingstor-upload-concurrency` to 1 to work around bug (Nick Craig-Wood)
* S3
    * Implement `--s3-upload-cutoff` for single part uploads below this (Nick Craig-Wood)
    * Change `--s3-upload-concurrency` default to 4 to increase performance (Nick Craig-Wood) (see the example below)
    * Add `--s3-bucket-acl` to control bucket ACL (Nick Craig-Wood)
    * Auto detect region for buckets on operation failure (Nick Craig-Wood)
    * Add GLACIER storage class (William Cocker)
    * Add Scaleway to s3 documentation (Rémy Léone)
    * Add AWS endpoint eu-north-1 (weetmuts)
* SFTP
    * Add support for PEM encrypted private keys (Fabian Möller)
    * Add option to force the usage of an ssh-agent (Fabian Möller)
    * Perform environment variable expansion on key-file (Fabian Möller)
    * Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood)
    * Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood)
    * Fix error on dangling symlinks (Nick Craig-Wood)
* Swift
    * Add `--swift-no-chunk` to disable segmented uploads in rcat/mount (Nick Craig-Wood)
    * Introduce application credential auth support (kayrus)
    * Fix memory usage by slimming Object (Nick Craig-Wood)
    * Fix extra requests on upload (Nick Craig-Wood)
    * Fix reauth on big files (Nick Craig-Wood)
* Union
    * Fix poll-interval not working (Nick Craig-Wood)
* WebDAV
    * Support About which means rclone mount will show the correct disk size (Nick Craig-Wood)
    * Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick Craig-Wood)
    * Fail soft on time parsing errors (Nick Craig-Wood)
    * Fix infinite loop on failed directory creation (Nick Craig-Wood)
    * Fix identification of directories for Bitrix Site Manager (Nick Craig-Wood)
    * Fix upload of 0 length files on some servers (Nick Craig-Wood)
    * Fix if MKCOL fails with 423 Locked assume the directory exists (Nick Craig-Wood)
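A hedged example of the new S3 multipart tuning flags from the v1.46 section above; the values are illustrative and `s3:bucket/path` is a placeholder remote:

```
# Files below the cutoff are uploaded in a single part; larger files
# use multipart uploads with 4 concurrent parts
rclone copy bigfile s3:bucket/path --s3-upload-cutoff 200M --s3-upload-concurrency 4
```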
## v1.45 - 2018-11-24

* New backends
    * The Yandex backend was re-written - see below for details (Sebastian Bünger)
* New commands
    * rcd: New command just to serve the remote control API (Nick Craig-Wood)
* New Features
    * The remote control API (rc) was greatly expanded to allow full control over rclone (Nick Craig-Wood)
        * sensitive operations require authorization or the `--rc-no-auth` flag
        * config/* operations to configure rclone
        * options/* for reading/setting command line flags
        * operations/* for all low level operations, eg copy file, list directory
        * sync/* for sync, copy and move
        * `--rc-files` flag to serve files on the rc http server
            * this is for building web native GUIs for rclone
        * Optionally serving objects on the rc http server
        * Ensure rclone fails to start up if the `--rc` port is in use already
        * See [the rc docs](https://rclone.org/rc/) for more info
    * sync/copy/move
        * Make `--files-from` only read the objects specified and don't scan directories (Nick Craig-Wood)
            * This is a huge speed improvement for destinations with lots of files
    * filter: Add `--ignore-case` flag (Nick Craig-Wood)
    * ncdu: Add remove function ('d' key) (Henning Surmeier)
    * rc command
        * Add `--json` flag for structured JSON input (Nick Craig-Wood) (see the example below)
        * Add `--user` and `--pass` flags and interpret `--rc-user`, `--rc-pass`, `--rc-addr` (Nick Craig-Wood)
    * build
        * Require go1.8 or later for compilation (Nick Craig-Wood)
        * Enable softfloat on MIPS arch (Scott Edlund)
        * Integration test framework revamped with a better report and better retries (Nick Craig-Wood)
* Bug Fixes
    * cmd: Make `--progress` update the stats correctly at the end (Nick Craig-Wood)
    * config: Create config directory on save if it is missing (Nick Craig-Wood)
    * dedupe: Check for existing filename before renaming a dupe file (ssaqua)
    * move: Don't create directories with `--dry-run` (Nick Craig-Wood)
    * operations: Fix Purge and Rmdirs when dir is not the root (Nick Craig-Wood)
    * serve http/webdav/restic: Ensure rclone exits if the port is in use (Nick Craig-Wood)
* Mount
    * Make `--volname` work for Windows and macOS (Nick Craig-Wood)
* Azure Blob
    * Avoid context deadline exceeded error by setting a large TryTimeout value (brused27)
    * Fix erroneous Rmdir error "directory not empty" (Nick Craig-Wood)
    * Wait for up to 60s to create a just deleted container (Nick Craig-Wood)
* Dropbox
    * Add dropbox impersonate support (Jake Coggiano)
* Jottacloud
    * Fix bug in `--fast-list` handling of empty folders (albertony)
* Opendrive
    * Fix transfer of files with `+` and `&` in (Nick Craig-Wood)
    * Fix retries of upload chunks (Nick Craig-Wood)
* S3
    * Set ACL for server side copies to that provided by the user (Nick Craig-Wood)
    * Fix role_arn, credential_source, ... (Erik Swanson)
    * Add config info for Wasabi's US-West endpoint (Henry Ptasinski)
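The `--json` flag mentioned above accepts a JSON blob in place of key=value parameters. A sketch, assuming an rclone instance already running with `--rc` enabled:

```
# Change the bandwidth limit of a running rclone via the remote control API
rclone rc --json '{"rate": "1M"}' core/bwlimit
```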
* SFTP
    * Ensure file hash checking is really disabled (Jon Fautley)
* Swift
    * Add pacer for retries to make swift more reliable (Nick Craig-Wood)
* WebDAV
    * Add Content-Type to PUT requests (Nick Craig-Wood)
    * Fix config parsing so `--webdav-user` and `--webdav-pass` flags work (Nick Craig-Wood)
    * Add RFC3339 date format (Ralf Hemberger)
* Yandex
    * The yandex backend was re-written (Sebastian Bünger)
        * This implements low level retries (Sebastian Bünger)
        * Copy, Move, DirMove, PublicLink and About optional interfaces (Sebastian Bünger)
        * Improved general error handling (Sebastian Bünger)
        * Removed ListR for now due to inconsistent behaviour (Sebastian Bünger)

## v1.44 - 2018-10-15

* New commands
    * serve ftp: Add ftp server (Antoine GIRARD)
    * settier: perform storage tier changes on supported remotes (sandeepkru)
* New Features
    * Reworked command line help
        * Make default help less verbose (Nick Craig-Wood)
        * Split flags up into global and backend flags (Nick Craig-Wood)
        * Implement specialised help for flags and backends (Nick Craig-Wood)
        * Show URL of backend help page when starting config (Nick Craig-Wood)
    * stats: Long names now split in center (Joanna Marek)
    * Add `--log-format` flag for more control over log output (dcpu) (see the example below)
    * rc: Add support for OPTIONS and basic CORS (frenos)
    * stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)
* Bug Fixes
    * Fix -P not ending with a new line (Nick Craig-Wood)
    * config: don't create default config dir when user supplies `--config` (albertony)
    * Don't print non-ASCII characters with `--progress` on windows (Nick Craig-Wood)
    * Correct logs for excluded items (ssaqua)
* Mount
    * Remove EXPERIMENTAL tags (Nick Craig-Wood)
* VFS
    * Fix race condition detected by serve ftp tests (Nick Craig-Wood)
    * Add vfs/poll-interval rc command (Fabian Möller)
    * Enable rename for nearly all remotes using server side Move or Copy (Nick Craig-Wood)
    * Reduce directory cache cleared by poll-interval (Fabian Möller)
    * Remove EXPERIMENTAL tags (Nick Craig-Wood)
* Local
    * Skip bad symlinks in dir listing with -L enabled (Cédric Connes)
    * Preallocate files on Windows to reduce fragmentation (Nick Craig-Wood)
    * Preallocate files on linux with fallocate(2) (Nick Craig-Wood)
* Cache
    * Add cache/fetch rc function (Fabian Möller)
    * Fix worker scale down (Fabian Möller)
    * Improve performance by not sending info requests for cached chunks (dcpu)
    * Fix error return value of cache/fetch rc method (Fabian Möller)
    * Documentation fix for cache-chunk-total-size (Anagh Kumar Baranwal)
    * Preserve leading / in wrapped remote path (Fabian Möller)
    * Add plex_insecure option to skip certificate validation (Fabian Möller)
    * Remove entries that no longer exist in the source (dcpu)
* Crypt
    * Preserve leading / in wrapped remote path (Fabian Möller)
* Alias
    * Fix handling of Windows network paths (Nick Craig-Wood)
* Azure Blob
    * Add `--azureblob-list-chunk` parameter (Santiago Rodríguez)
    * Implemented settier command support on azureblob remote. (sandeepkru)
    * Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood)
* Box
    * Implement link sharing. (Sebastian Bünger)
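A possible use of the `--log-format` flag noted above; the field names follow the flag's documented comma-separated list (default "date,time"), and the exact set should be checked against `rclone help flags`:

```
# Add microsecond resolution timestamps to every log line
rclone sync /src remote:dst --log-format "date,time,microseconds"
```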
* Drive
    * Add `--drive-import-formats` - google docs can now be imported (Fabian Möller)
    * Rewrite mime type and extension handling (Fabian Möller)
    * Add document links (Fabian Möller)
    * Add support for multipart document extensions (Fabian Möller)
    * Add support for apps-script to json export (Fabian Möller)
    * Fix escaped chars in documents during list (Fabian Möller)
    * Add `--drive-v2-download-min-size` a workaround for slow downloads (Fabian Möller)
    * Improve directory notifications in ChangeNotify (Fabian Möller)
    * When listing team drives in config, continue on failure (Nick Craig-Wood)
* FTP
    * Add a small pause after failed upload before deleting file (Nick Craig-Wood)
* Google Cloud Storage
    * Fix service_account_file being ignored (Fabian Möller)
* Jottacloud
    * Minor improvement in quota info (omit if unlimited) (albertony)
    * Add `--fast-list` support (albertony)
    * Add permanent delete support: `--jottacloud-hard-delete` (albertony)
    * Add link sharing support (albertony)
    * Fix handling of reserved characters. (Sebastian Bünger)
    * Fix socket leak on Object.Remove (Nick Craig-Wood)
* Onedrive
    * Rework to support Microsoft Graph (Cnly)
        * **NB** this will require re-authenticating the remote
    * Removed upload cutoff and always do session uploads (Oliver Heyme)
    * Use single-part upload for empty files (Cnly)
    * Fix new fields not saved when editing old config (Alex Chen)
    * Fix sometimes special chars in filenames not replaced (Alex Chen)
    * Ignore OneNote files by default (Alex Chen)
    * Add link sharing support (jackyzy823)
* S3
    * Use custom pacer, to retry operations when reasonable (Craig Miskell)
    * Use configured server-side-encryption and storage class options when calling CopyObject() (Paul Kohout)
    * Make `--s3-v2-auth` flag (Nick Craig-Wood)
    * Fix v2 auth on files with spaces (Nick Craig-Wood)
* Union
    * Implement union backend which reads from multiple backends (Felix Brucker)
    * Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood)
    * Fix ChangeNotify to support multiple remotes (Fabian Möller)
    * Fix `--backup-dir` on union backend (Nick Craig-Wood)
* WebDAV
    * Add another time format (Nick Craig-Wood)
    * Add a small pause after failed upload before deleting file (Nick Craig-Wood)
    * Add workaround for missing mtime (buergi)
    * Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
* Yandex
    * Remove redundant nil checks (teresy)

## v1.43.1 - 2018-09-07

Point release to fix hubic and azureblob backends.

* Bug Fixes
    * ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
    * cmd: Fix crash with `--progress` and `--stats 0` (Nick Craig-Wood)
    * docs: Tidy website display (Anagh Kumar Baranwal)
* Azure Blob
    * Fix multi-part uploads. (sandeepkru)
* Hubic
    * Fix uploads (Nick Craig-Wood)
    * Retry auth fetching if it fails to make hubic more reliable (Nick Craig-Wood)

## v1.43 - 2018-09-01

* New backends
    * Jottacloud (Sebastian Bünger)
* New commands
    * copyurl: copies a URL to a remote (Denis)
* New Features
    * Reworked config for backends (Nick Craig-Wood)
        * All backend config can now be supplied by command line, env var or config file
        * Advanced section in the config wizard for the optional items
        * A large step towards rclone backends being usable in other go software
        * Allow on the fly remotes with :backend: syntax
    * Stats revamp
        * Add `--progress`/`-P` flag to show interactive progress (Nick Craig-Wood)
        * Show the total progress of the sync in the stats (Nick Craig-Wood)
        * Add `--stats-one-line` flag for single line stats (Nick Craig-Wood)
    * Added weekday schedule into `--bwlimit` (Mateusz) (see the example below)
    * lsjson: Add option to show the original object IDs (Fabian Möller)
    * serve webdav: Make Content-Type without reading the file and add `--etag-hash` (Nick Craig-Wood)
    * build
        * Build macOS with native compiler (Nick Craig-Wood)
        * Update to use go1.11 for the build (Nick Craig-Wood)
    * rc
        * Added core/stats to return the stats (reddi1)
    * `version --check`: Prints the current release and beta versions (Nick Craig-Wood)
* Bug Fixes
    * accounting
        * Fix time to completion estimates (Nick Craig-Wood)
        * Fix moving average speed for file stats (Nick Craig-Wood)
    * config: Fix error reading password from piped input (Nick Craig-Wood)
    * move: Fix `--delete-empty-src-dirs` flag to delete all empty dirs on move (ishuah)
* Mount
    * Implement `--daemon-timeout` flag for OSXFUSE (Nick Craig-Wood)
    * Fix mount `--daemon` not working with encrypted config (Alex Chen)
    * Clip the number of blocks to 2^32-1 on macOS - fixes borg backup (Nick Craig-Wood)
* VFS
    * Enable vfs-read-chunk-size by default (Fabian Möller)
    * Add the vfs/refresh rc command (Fabian Möller)
    * Add non recursive mode to vfs/refresh rc command (Fabian Möller)
    * Try to seek buffer on read only files (Fabian Möller)
* Local
    * Fix crash when deprecated `--local-no-unicode-normalization` is supplied (Nick Craig-Wood)
    * Fix mkdir error when trying to copy files to the root of a drive on windows (Nick Craig-Wood)
* Cache
    * Fix nil pointer deref when using lsjson on cached directory (Nick Craig-Wood)
    * Fix nil pointer deref for occasional crash on playback (Nick Craig-Wood)
* Crypt
    * Fix accounting when checking hashes on upload (Nick Craig-Wood)
* Amazon Cloud Drive
    * Make very clear in the docs that rclone has no ACD keys (Nick Craig-Wood)
* Azure Blob
    * Add connection string and SAS URL auth (Nick Craig-Wood)
    * List the container to see if it exists (Nick Craig-Wood)
    * Port new Azure Blob Storage SDK (sandeepkru)
    * Added blob tier, tier between Hot, Cool and Archive. (sandeepkru)
    * Remove leading / from paths (Nick Craig-Wood)
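The weekday `--bwlimit` schedule above uses the flag's timetable syntax; the days, times and rates here are illustrative only:

```
# 512 KiB/s from Monday 08:00, 10 MiB/s from Friday 23:00, unlimited from Sunday 20:00
rclone sync /src remote:dst --bwlimit "Mon-08:00,512k Fri-23:00,10M Sun-20:00,off"
```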
* B2
    * Support Application Keys (Nick Craig-Wood)
    * Remove leading / from paths (Nick Craig-Wood)
* Box
    * Fix upload of > 2GB files on 32 bit platforms (Nick Craig-Wood)
    * Make `--box-commit-retries` flag defaulting to 100 to fix large uploads (Nick Craig-Wood)
* Drive
    * Add `--drive-keep-revision-forever` flag (lewapm)
    * Handle gdocs when filtering file names in list (Fabian Möller)
    * Support using `--fast-list` for large speedups (Fabian Möller)
* FTP
    * Fix Put mkParentDir failed: 521 for BunnyCDN (Nick Craig-Wood)
* Google Cloud Storage
    * Fix index out of range error with `--fast-list` (Nick Craig-Wood)
* Jottacloud
    * Fix MD5 error check (Oliver Heyme)
    * Handle empty time values (Martin Polden)
    * Calculate missing MD5s (Oliver Heyme)
    * Docs, fixes and tests for MD5 calculation (Nick Craig-Wood)
    * Add optional MimeTyper interface. (Sebastian Bünger)
    * Implement optional About interface (for `df` support). (Sebastian Bünger)
* Mega
    * Wait for events instead of arbitrary sleeping (Nick Craig-Wood)
    * Add `--mega-hard-delete` flag (Nick Craig-Wood)
    * Fix failed logins with upper case chars in email (Nick Craig-Wood)
* Onedrive
    * Shared folder support (Yoni Jah)
    * Implement DirMove (Cnly)
    * Fix rmdir sometimes deleting directories with contents (Nick Craig-Wood)
* Pcloud
    * Delete half uploaded files on upload error (Nick Craig-Wood)
* Qingstor
    * Remove leading / from paths (Nick Craig-Wood)
* S3
    * Fix index out of range error with `--fast-list` (Nick Craig-Wood)
    * Add `--s3-force-path-style` (Nick Craig-Wood)
    * Add support for KMS Key ID (bsteiss)
    * Remove leading / from paths (Nick Craig-Wood)
* Swift
    * Add `storage_policy` (Ruben Vandamme)
    * Make it so just `storage_url` or `auth_token` can be overridden (Nick Craig-Wood)
    * Fix server side copy bug for unusual file names (Nick Craig-Wood)
    * Remove leading / from paths (Nick Craig-Wood)
* WebDAV
    * Ensure we call MKCOL with a URL with a trailing / for QNAP interop (Nick Craig-Wood)
    * If root ends with / then don't check if it is a file (Nick Craig-Wood)
    * Don't accept redirects when reading metadata (Nick Craig-Wood)
    * Add bearer token (Macaroon) support for dCache (Nick Craig-Wood)
    * Document dCache and Macaroons (Onno Zweers)
    * Sharepoint recursion with different depth (Henning)
    * Attempt to remove failed uploads (Nick Craig-Wood)
* Yandex
    * Fix listing/deleting files in the root (Nick Craig-Wood)

## v1.42 - 2018-06-16

* New backends
    * OpenDrive (Oliver Heyme, Jakub Karlicek, ncw)
* New commands
    * deletefile command (Filip Bartodziej)
* New Features
    * copy, move: Copy single files directly, don't use `--files-from` work-around
        * this makes them much more efficient
    * Implement `--max-transfer` flag to quit transferring at a limit
        * make exit code 8 for `--max-transfer` exceeded
    * copy: copy empty source directories to destination (Ishuah Kariuki)
    * check: Add `--one-way` flag (Kasper Byrdal Nielsen)
    * Add siginfo handler for macOS for ctrl-T stats (kubatasiemski)
    * rc
        * add core/gc to run a garbage collection on demand
        * enable go profiling by default on the `--rc` port
        * return error from remote on failure
    * lsf
        * Add `--absolute` flag to add a leading / onto path names
        * Add `--csv` flag for compliant CSV output
        * Add 'm' format specifier to show the MimeType
        * Implement 'i' format for showing object ID (see the example below)
    * lsjson
        * Add MimeType to the output
        * Add ID field to output to show Object ID
    * Add `--retries-sleep` flag (Benjamin Joseph Dag)
    * Oauth tidy up web page and error handling (Henning Surmeier)
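A combined sketch of the lsf additions above; `remote:path` is a placeholder and the format string simply strings the documented specifiers together:

```
# CSV listing with path, size, mime type and object ID columns
rclone lsf --csv --format "psmi" remote:path
```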
* Bug Fixes
    * Password prompt output with `--log-file` fixed for unix (Filip Bartodziej)
    * Calculate ModifyWindow each time on the fly to fix various problems (Stefan Breunig)
* Mount
    * Only print "File.rename error" if there actually is an error (Stefan Breunig)
    * Delay rename if file has open writers instead of failing outright (Stefan Breunig)
    * Ensure atexit gets run on interrupt
    * macOS enhancements
        * Make `--noappledouble` `--noapplexattr`
        * Add `--volname` flag and remove special chars from it
        * Make Get/List/Set/Remove xattr return ENOSYS for efficiency
        * Make `--daemon` work for macOS without CGO
* VFS
    * Add `--vfs-read-chunk-size` and `--vfs-read-chunk-size-limit` (Fabian Möller)
    * Fix ChangeNotify for new or changed folders (Fabian Möller)
* Local
    * Fix symlink/junction point directory handling under Windows
        * **NB** you will need to add `-L` to your command line to copy files with reparse points
* Cache
    * Add non cached dirs on notifications (Remus Bunduc)
    * Allow root to be expired from rc (Remus Bunduc)
    * Clean remaining empty folders from temp upload path (Remus Bunduc)
    * Cache lists using batch writes (Remus Bunduc)
    * Use secure websockets for HTTPS Plex addresses (John Clayton)
    * Reconnect plex websocket on failures (Remus Bunduc)
    * Fix panic when running without plex configs (Remus Bunduc)
    * Fix root folder caching (Remus Bunduc)
* Crypt
    * Check the crypted hash of files when uploading for extra data security
* Dropbox
    * Make Dropbox for business folders accessible using an initial `/` in the path
* Google Cloud Storage
    * Low level retry all operations if necessary
* Google Drive
    * Add `--drive-acknowledge-abuse` to download flagged files
    * Add `--drive-alternate-export` to fix large doc export
    * Don't attempt to choose Team Drives when using rclone config create
    * Fix change list polling with team drives
    * Fix ChangeNotify for folders (Fabian Möller)
    * Fix about (and df on a mount) for team drives
* Onedrive
    * Error handler for onedrive for business requests (Henning Surmeier)
* S3
    * Adjust upload concurrency with `--s3-upload-concurrency` (themylogin)
    * Fix `--s3-chunk-size` which was always using the minimum
* SFTP
    * Add `--ssh-path-override` flag (Piotr Oleszczyk)
    * Fix slow downloads for long latency connections
* Webdav
    * Add workarounds for biz.mail.ru
    * Ignore Reason-Phrase in status line to fix 4shared (Rodrigo)
    * Better error message generation

## v1.41 - 2018-04-28

* New backends
    * Mega support added
    * Webdav now supports SharePoint cookie authentication (hensur)
* New commands
    * link: create public link to files and folders (Stefan Breunig)
    * about: gets quota info from a remote (a-roussos, ncw)
    * hashsum: a generic tool for any hash to produce md5sum like output (see the example below)
* New Features
    * lsd: Add -R flag and fix and update docs for all ls commands
    * ncdu: added a "refresh" key - CTRL-L (Keith Goldfarb)
    * serve restic: Add append-only mode (Steve Kriss)
    * serve restic: Disallow overwriting files in append-only mode (Alexander Neumann)
    * serve restic: Print actual listener address (Matt Holt)
    * size: Add --json flag (Matthew Holt)
    * sync: implement --ignore-errors (Mateusz Pabian)
    * dedupe: Add dedupe largest functionality (Richard Yang)
    * fs: Extend SizeSuffix to include TB and PB for rclone about
    * fs: add --dump goroutines and --dump openfiles for debugging
    * rc: implement core/memstats to print internal memory usage info
    * rc: new call rc/pid (Michael P. Dubner)
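A quick illustration of the new `hashsum` command above; `remote:path` is a placeholder and the hash name must be one the remote supports:

```
# md5sum-style output, but using SHA-1
rclone hashsum SHA1 remote:path
```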
* Compile
    * Drop support for go1.6
* Release
    * Fix `make tarball` (Chih-Hsuan Yen)
* Bug Fixes
    * filter: fix --min-age and --max-age together check
    * fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport
    * lsd,lsf: make sure all times we output are in local time
    * rc: fix setting bwlimit to unlimited
    * rc: take note of the --rc-addr flag too as per the docs
* Mount
    * Use About to return the correct disk total/used/free (eg in `df`)
    * Set `--attr-timeout default` to `1s` - fixes:
        * rclone using too much memory
        * rclone not serving files to samba
        * excessive time listing directories
    * Fix `df -i` (upstream fix)
* VFS
    * Filter files `.` and `..` from directory listing
    * Only make the VFS cache if --vfs-cache-mode > Off
* Local
    * Add --local-no-check-updated to disable updated file checks
    * Retry remove on Windows sharing violation error
* Cache
    * Flush the memory cache after close
    * Purge file data on notification
    * Always forget parent dir for notifications
    * Integrate with Plex websocket
    * Add rc cache/stats (seuffert)
    * Add info log on notification
* Box
    * Fix failure reading large directories - parse file/directory size as float
* Dropbox
    * Fix crypt+obfuscate on dropbox
    * Fix repeatedly uploading the same files
* FTP
    * Work around strange response from box FTP server
    * More workarounds for FTP servers to fix mkParentDir error
    * Fix no error on listing non-existent directory
* Google Cloud Storage
    * Add service_account_credentials (Matt Holt)
    * Detect bucket presence by listing it - minimises permissions needed
    * Ignore zero length directory markers
* Google Drive
    * Add service_account_credentials (Matt Holt)
    * Fix directory move leaving a hardlinked directory behind
    * Return proper google errors when Opening files
    * When initialized with a filepath, optional features used incorrect root path (Stefan Breunig)
* HTTP
    * Fix sync for servers which don't return Content-Length in HEAD
* Onedrive
    * Add QuickXorHash support for OneDrive for business
    * Fix socket leak in multipart session upload
* S3
    * Look in S3 named profile files for credentials
    * Add `--s3-disable-checksum` to disable checksum uploading (Chris Redekop)
    * Hierarchical configuration support (Giri Badanahatti)
    * Add in config for all the supported S3 providers
    * Add One Zone Infrequent Access storage class (Craig Rachel)
    * Add --use-server-modtime support (Peter Baumgartner)
    * Add --s3-chunk-size option to control multipart uploads
    * Ignore zero length directory markers
* SFTP
    * Update docs to match code, fix typos and clarify disable_hashcheck prompt (Michael G. Noll)
    * Update docs with Synology quirks
    * Fail soft with a debug on hash failure
* Swift
    * Add --use-server-modtime support (Peter Baumgartner)
* Webdav
    * Support SharePoint cookie authentication (hensur)
    * Strip leading and trailing / off root

## v1.40 - 2018-03-19

* New backends
    * Alias backend to create aliases for existing remote names (Fabian Möller)
* New commands
    * `lsf`: list for parsing purposes (Jakub Tasiemski)
        * by default this is a simple non recursive list of files and directories
        * it can be configured to add more info in an easy to parse way
    * `serve restic`: for serving a remote as a Restic REST endpoint (see the example below)
        * This enables restic to use any backends that rclone can access
        * Thanks Alexander Neumann for help, patches and review
    * `rc`: enable the remote control of a running rclone
        * The running rclone must be started with --rc and related flags.
        * Currently there is support for bwlimit, and flushing for mount and cache.
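A hedged sketch of pairing `serve restic` above with restic itself; `remote:backup` is a placeholder and the URL assumes rclone's default listen address:

```
# Expose a remote as a restic REST endpoint, then initialise a repository on it
rclone serve restic remote:backup &
restic -r rest:http://localhost:8080/ init
```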
* New Features
    * `--max-delete` flag to add a delete threshold (Bjørn Erik Pedersen)
    * All backends now support RangeOption for ranged Open
        * `cat`: Use RangeOption for limited fetches to make more efficient
        * `cryptcheck`: make reading of nonce more efficient with RangeOption
    * serve http/webdav/restic
        * support SSL/TLS
        * add `--user` `--pass` and `--htpasswd` for authentication
    * `copy`/`move`: detect file size change during copy/move and abort transfer (ishuah)
    * `cryptdecode`: added option to return encrypted file names. (ishuah)
    * `lsjson`: add `--encrypted` to show encrypted name (Jakub Tasiemski)
    * Add `--stats-file-name-length` to specify the printed file name length for stats (Will Gunn)
* Compile
    * Code base was shuffled and factored
        * backends moved into a backend directory
        * large packages split up
        * See the CONTRIBUTING.md doc for info as to what lives where now
    * Update to using go1.10 as the default go version
    * Implement daily [full integration tests](https://pub.rclone.org/integration-tests/)
* Release
    * Include a source tarball and sign it and the binaries
    * Sign the git tags as part of the release process
    * Add .deb and .rpm packages as part of the build
    * Make a beta release for all branches on the main repo (but not pull requests)
* Bug Fixes
    * config: fixes errors on non existing config by loading config file only on first access
    * config: retry saving the config after failure (Mateusz)
    * sync: when using `--backup-dir` don't delete files if we can't set their modtime
        * this fixes odd behaviour with Dropbox and `--backup-dir`
    * fshttp: fix idle timeouts for HTTP connections
    * `serve http`: fix serving files with : in - fixes
    * Fix `--exclude-if-present` to ignore directories which it doesn't have permission for (Iakov Davydov)
    * Make accounting work properly with crypt and b2
    * remove `--no-traverse` flag because it is obsolete
* Mount
    * Add `--attr-timeout` flag to control attribute caching in kernel
        * this now defaults to 0 which is correct but less efficient
        * see [the mount docs](https://rclone.org/commands/rclone_mount/#attribute-caching) for more info
    * Add `--daemon` flag to allow mount to run in the background (ishuah)
    * Fix: Return ENOSYS rather than EIO on attempted link
        * This fixes FileZilla accessing an rclone mount served over sftp.
    * Fix setting modtime twice
    * Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
    * Many bugs fixed in the VFS layer - see below
* VFS
    * Many fixes for `--vfs-cache-mode` writes and above
        * Update cached copy if we know it has changed (fixes stale data)
        * Clean path names before using them in the cache
        * Disable cache cleaner if `--vfs-cache-poll-interval=0`
        * Fill and clean the cache immediately on startup
    * Fix Windows opening every file when it stats the file
    * Fix applying modtime for an open Write Handle
    * Fix creation of files when truncating
    * Write 0 bytes when flushing unwritten handles to avoid race conditions in FUSE
    * Downgrade "poll-interval is not supported" message to Info
    * Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
* Local
    * Downgrade "invalid cross-device link: trying copy" to debug
    * Make DirMove return fs.ErrorCantDirMove to allow fallback to Copy for cross device
    * Fix race conditions updating the hashes
* Cache
    * Add support for polling - cache will update when remote changes on supported backends
    * Reduce log level for Plex api
    * Fix dir cache issue
    * Implement `--cache-db-wait-time` flag
    * Improve efficiency with RangeOption and RangeSeek
    * Fix dirmove with temp fs enabled
    * Notify vfs when using temp fs
    * Offline uploading
    * Remote control support for path flushing
* Amazon cloud drive
    * Rclone no longer has any working keys - disable integration tests
    * Implement DirChangeNotify to notify cache/vfs/mount of changes
* Azureblob
    * Don't check for bucket/container presence if listing was OK
        * this makes rclone do one less request per invocation
    * Improve accounting for chunked uploads
* Backblaze B2
    * Don't check for bucket/container presence if listing was OK
        * this makes rclone do one less request per invocation
* Box
    * Improve accounting for chunked uploads
* Dropbox
    * Fix custom oauth client parameters
* Google Cloud Storage
    * Don't check for bucket/container presence if listing was OK
        * this makes rclone do one less request per invocation
* Google Drive
    * Migrate to api v3 (Fabian Möller)
    * Add scope configuration and root folder selection
    * Add `--drive-impersonate` for service accounts
        * thanks to everyone who tested, explored and contributed docs
    * Add `--drive-use-created-date` to use created date as modified date (nbuchanan)
    * Request the export formats only when required
        * This makes rclone quicker when there are no google docs
    * Fix finding paths with latin1 chars (a workaround for a drive bug)
    * Fix copying of a single Google doc file
    * Fix `--drive-auth-owner-only` to look in all directories
* HTTP
    * Fix handling of directories with & in
* Onedrive
    * Removed upload cutoff and always do session uploads
        * this stops the creation of multiple versions on business onedrive
    * Overwrite object size value with real size when reading file. (Victor)
        * this fixes oddities when onedrive misreports the size of images
* Pcloud
    * Remove unused chunked upload flag and code
* Qingstor
    * Don't check for bucket/container presence if listing was OK
        * this makes rclone do one less request per invocation
* S3
    * Support hashes for multipart files (Chris Redekop)
    * Initial support for IBM COS (S3) (Giri Badanahatti)
    * Update docs to discourage use of v2 auth with CEPH and others
    * Don't check for bucket/container presence if listing was OK
        * this makes rclone do one less request per invocation
    * Fix server side copy and set modtime on files with + in
* SFTP
    * Add option to disable remote hash check command execution (Jon Fautley)
    * Add `--sftp-ask-password` flag to prompt for password when needed (Leo R. Lundgren)
    * Add `set_modtime` configuration option
    * Fix following of symlinks
    * Fix reading config file outside of Fs setup
    * Fix reading $USER in username fallback not $HOME
    * Fix running under crontab - Use correct OS way of reading username
* Swift
    * Fix refresh of authentication token
        * in v1.39 a bug was introduced which ignored new tokens - this fixes it
    * Fix extra HEAD transaction when uploading a new file
    * Don't check for bucket/container presence if listing was OK
        * this makes rclone do one less request per invocation
* Webdav
    * Add new time formats to support mydrive.ch and others

## v1.39 - 2017-12-23

* New backends
    * WebDAV
        * tested with nextcloud, owncloud, put.io and others!
    * Pcloud
    * cache - wraps a cache around other backends (Remus Bunduc)
        * useful in combination with mount
        * NB this feature is in beta so use with care
* New commands
    * serve command with subcommands:
        * serve webdav: this implements a webdav server for any rclone remote.
        * serve http: command to serve a remote over HTTP
    * config: add sub commands for full config file management
        * create/delete/dump/edit/file/password/providers/show/update
    * touch: to create or update the timestamp of a file (Jakub Tasiemski)
* New Features
    * curl install for rclone (Filip Bartodziej)
    * --stats now shows percentage, size, rate and ETA in condensed form (Ishuah Kariuki)
    * --exclude-if-present to exclude a directory if a file is present (Iakov Davydov)
    * rmdirs: add --leave-root flag (lewapm)
    * move: add --delete-empty-src-dirs flag to remove dirs after move (Ishuah Kariuki)
    * Add --dump flag, introduce --dump requests, responses and remove --dump-auth, --dump-filters
        * Obscure X-Auth-Token: from headers when dumping too
    * Document and implement exit codes for different failure modes (Ishuah Kariuki)
* Compile
* Bug Fixes
    * Retry lots more different types of errors to make multipart transfers more reliable
    * Save the config before asking for a token, fixes disappearing oauth config
    * Warn the user if --include and --exclude are used together (Ernest Borowski)
    * Fix duplicate files (eg on Google drive) causing spurious copies
    * Allow trailing and leading whitespace for passwords (Jason Rose)
    * ncdu: fix crashes on empty directories
    * rcat: fix goroutine leak
    * moveto/copyto: Fix to allow copying to the same name
* Mount
    * `--vfs-cache-mode` to make writes into mounts more reliable (see the example below)
        * this requires caching files on the disk (see --cache-dir)
        * As this is a new feature, use with care
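A sketch of the `--vfs-cache-mode` option above; the mountpoint, cache directory and remote name are placeholders:

```
# Buffer writes through an on-disk cache to make them more reliable
rclone mount remote: /mnt/remote --vfs-cache-mode writes --cache-dir /var/cache/rclone
```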
    * Use sdnotify to signal systemd the mount is ready (Fabian Möller)
    * Check if directory is not empty before mounting (Ernest Borowski)
* Local
    * Add error message for cross file system moves
    * Fix equality check for times
* Dropbox
    * Rework multipart upload
        * buffer the chunks when uploading large files so they can be retried
        * change default chunk size to 48MB now we are buffering them in memory
        * retry every error after the first chunk is done successfully
    * Fix error when renaming directories
* Swift
    * Fix crash on bad authentication
* Google Drive
    * Add service account support (Tim Cooijmans)
* S3
    * Make it work properly with Digital Ocean Spaces (Andrew Starr-Bochicchio)
    * Fix crash if a bad listing is received
    * Add support for ECS task IAM roles (David Minor)
* Backblaze B2
    * Fix multipart upload retries
    * Fix --hard-delete to make it work 100% of the time
* Swift
    * Allow authentication with storage URL and auth key (Giovanni Pizzi)
    * Add new fields for swift configuration to support IBM Bluemix Swift (Pierre Carlson)
    * Add OS_TENANT_ID and OS_USER_ID to config
    * Allow configs with user id instead of user name
    * Check if swift segments container exists before creating (John Leach)
    * Fix memory leak in swift transfers (upstream fix)
* SFTP
    * Add option to enable the use of aes128-cbc cipher (Jon Fautley)
* Amazon cloud drive
    * Fix download of large files failing with "Only one auth mechanism allowed"
* crypt
    * Option to encrypt directory names or leave them intact
    * Implement DirChangeNotify (Fabian Möller)
* onedrive
    * Add option to choose resourceURL during setup of OneDrive Business account if more than one is available for user

## v1.38 - 2017-09-30

* New backends
    * Azure Blob Storage (thanks Andrei Dragomir)
    * Box
    * Onedrive for Business (thanks Oliver Heyme)
    * QingStor from QingCloud (thanks wuyu)
* New commands
    * `rcat` - read from standard input and stream upload
    * `tree` - shows a nicely formatted recursive listing
    * `cryptdecode` - decode crypted file names (thanks ishuah)
    * `config show` - print the config file
    * `config file` - print the config file location
* New Features
    * Empty directories are deleted on `sync`
    * `dedupe` - implement merging of duplicate directories
    * `check` and `cryptcheck` made more consistent and use less memory
    * `cleanup` for remaining remotes (thanks ishuah)
    * `--immutable` for ensuring that files don't change (thanks Jacob McNamee)
    * `--user-agent` option (thanks Alex McGrath Kraak)
    * `--disable` flag to disable optional features
    * `--bind` flag for choosing the local addr on outgoing connections
    * Support for zsh auto-completion (thanks bpicode)
    * Stop normalizing file names but do a normalized compare in `sync`
* Compile
    * Update to using go1.9 as the default go version
    * Remove snapd build due to maintenance problems
* Bug Fixes
    * Improve retriable error detection which makes multipart uploads better
    * Make `check` obey `--ignore-size`
    * Fix bwlimit toggle in conjunction with schedules (thanks cbruegg)
    * `config` ensures newly written config is on the same mount
* Local
    * Revert to copy when moving file across file system boundaries
    * `--skip-links` to suppress symlink warnings (thanks Zhiming Wang)
* Mount
    * Re-use `rcat` internals to support uploads from all remotes
* Dropbox
    * Fix "entry doesn't belong in directory" error
    * Stop using deprecated API methods
* Swift
    * Fix server side copy to empty container with `--fast-list`
* Google Drive
    * Change the default for `--drive-use-trash` to `true`
* S3
    * Set session token when using STS (thanks Girish Ramakrishnan)
    * Glacier docs and error messages (thanks Jan Varho)
    * Read 1000 (not 1024) items in dir listings to fix Wasabi
* Backblaze B2
    * Fix SHA1 mismatch when downloading files with no SHA1
    * Calculate missing hashes on the fly instead of spooling
    * `--b2-hard-delete` to permanently delete (not hide) files (thanks John Papandriopoulos)
* Hubic
    * Fix creating containers - no longer have to use the `default` container
* Swift
    * Optionally configure from a standard set of OpenStack environment vars
    * Add `endpoint_type` config
* Google Cloud Storage
    * Fix bucket creation to work with limited permission users
* SFTP
    * Implement connection pooling for multiple ssh connections
    * Limit new connections per second
    * Add support for MD5 and SHA1 hashes where available (thanks Christian Brüggemann)
* HTTP
    * Fix URL encoding issues
    * Fix directories with `:` in
    * Fix panic with URL encoded content

## v1.37 - 2017-07-22

* New backends
    * FTP - thanks to Antonio Messina
    * HTTP - thanks to Vasiliy Tolstov
* New commands
    * rclone ncdu - for exploring a remote with a text based user interface.
    * rclone lsjson - for listing with a machine readable output
    * rclone dbhashsum - to show Dropbox style hashes of files (local or Dropbox)
* New Features
    * Implement --fast-list flag
        * This allows remotes to list recursively if they can
        * This uses fewer transactions (important if you pay for them)
        * This may or may not be quicker
        * This will use more memory as it has to hold the listing in memory
        * --old-sync-method deprecated - the remaining uses are covered by --fast-list
        * This involved a major re-write of all the listing code
    * Add --tpslimit and --tpslimit-burst to limit transactions per second
        * this is useful in conjunction with `rclone mount` to limit external apps (see the example below)
    * Add --stats-log-level so can see --stats without -v
    * Print password prompts to stderr - Hraban Luyat
    * Warn about duplicate files when syncing
    * Oauth improvements
        * allow auth_url and token_url to be set in the config file
        * Print redirection URI if using own credentials.
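An illustration of the `--tpslimit` pairing with `rclone mount` mentioned above; the mountpoint and remote name are placeholders:

```
# Cap backend transactions at 10 per second with a burst of 20
rclone mount remote: /mnt/remote --tpslimit 10 --tpslimit-burst 20
```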
    * Don't Mkdir at the start of sync to save transactions
* Compile
    * Update build to go1.8.3
    * Require go1.6 for building rclone
    * Compile 386 builds with "GO386=387" for maximum compatibility
* Bug Fixes
    * Fix menu selection when no remotes
    * Config saving reworked to not kill the file if disk gets full
    * Don't delete remote if name does not change while renaming
    * moveto, copyto: report transfers and checks as per move and copy
* Local
    * Add --local-no-unicode-normalization flag - Bob Potter
* Mount
    * Now supported on Windows using cgofuse and WinFsp - thanks to Bill Zissimopoulos for much help
    * Compare checksums on upload/download via FUSE
    * Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM - Jérôme Vizcaino
    * On read only open of file, make open pending until first read
    * Make --read-only reject modify operations
    * Implement ModTime via FUSE for remotes that support it
    * Allow modTime to be changed even before all writers are closed
    * Fix panic on renames
    * Fix hang on errored upload
* Crypt
    * Report the name:root as specified by the user
    * Add an "obfuscate" option for filename encryption - Stephen Harris
* Amazon Drive
    * Fix initialization order for token renewer
    * Remove revoked credentials, allow oauth proxy config and update docs
* B2
    * Reduce minimum chunk size to 5MB
* Drive
    * Add team drive support
    * Reduce bandwidth by adding fields for partial responses - Martin Kristensen
    * Implement --drive-shared-with-me flag to view shared with me files - Danny Tsai
    * Add --drive-trashed-only to read only the files in the trash
    * Remove obsolete --drive-full-list
    * Add missing seek to start on retries of chunked uploads
    * Fix stats accounting for upload
    * Convert / in names to a unicode equivalent (／)
    * Poll for Google Drive changes when mounted
* OneDrive
    * Fix the uploading of files with spaces
    * Fix initialization order for token renewer
    * Display speeds accurately when uploading - Yoni Jah
    * Swap to using http://localhost:53682/ as redirect URL - Michael Ledin
    * Retry on token expired error, reset upload body on retry - Yoni Jah
* Google Cloud Storage
    * Add ability to specify location and storage class via config and command line - thanks gdm85
    * Create container if necessary on server side copy
    * Increase directory listing chunk to 1000 to increase performance
    * Obtain a refresh token for GCS - Steven Lu
* Yandex
    * Fix the name reported in log messages (was empty)
    * Correct error return for listing empty directory
* Dropbox
    * Rewritten to use the v2 API
        * Now supports ModTime
            * Can only set by uploading the file again
            * If you uploaded with an old rclone, rclone may upload everything again
            * Use `--size-only` or `--checksum` to avoid this
        * Now supports the Dropbox content hashing scheme
        * Now supports low level retries
* S3
    * Work around eventual consistency in bucket creation
    * Create container if necessary on server side copy
    * Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar Ahmed
* Swift, Hubic
    * Fix zero length directory markers showing in the subdirectory listing
        * this caused lots of duplicate transfers
    * Fix paged directory listings
        * this caused duplicate directory errors
    * Create container if necessary on server side copy
    * Increase directory listing chunk to 1000 to increase performance
    * Make sensible error if the user forgets the container
* SFTP
    * Add support for using ssh key files
    * Fix under Windows
    * Fix ssh agent on Windows
    * Adapt to latest version of library - Igor Kharin

## v1.36 - 2017-03-18

* New Features
    * SFTP remote (Jack Schmidt)
    * Re-implement sync routine to work a directory at a time reducing memory usage
    * Logging revamped to be more inline with rsync - now much quieter
        * -v only shows transfers
        * -vv is for full debug
        * --syslog to log to syslog on capable platforms
    * Implement --backup-dir and --suffix
    * Implement --track-renames (initial implementation by Bjørn Erik Pedersen)
    * Add time-based bandwidth limits (Lukas Loesche)
    * rclone cryptcheck: checks integrity of crypt remotes
    * Allow all config file variables and options to be set from environment variables
    * Add --buffer-size parameter to control buffer size for copy
    * Make --delete-after the default
    * Add --ignore-checksum flag (fixed by Hisham Zarka)
    * rclone check: Add --download flag to check all the data, not just hashes
    * rclone cat: add --head, --tail, --offset, --count and --discard
    * rclone config: when choosing from a list, allow the value to be entered too
    * rclone config: allow rename and copy of remotes
    * rclone obscure: for generating encrypted passwords for rclone's config (T.C. Ferguson)
    * Comply with XDG Base Directory specification (Dario Giovannetti)
        * this moves the default location of the config file in a backwards compatible way
* Release changes
    * Ubuntu snap support (Dedsec1)
    * Compile with go 1.8
    * MIPS/Linux big and little endian support
* Bug Fixes
    * Fix copyto copying things to the wrong place if the destination dir didn't exist
    * Fix parsing of remotes in moveto and copyto
    * Fix --delete-before deleting files on copy
    * Fix --files-from with an empty file copying everything
    * Fix sync: don't update mod times if --dry-run set
    * Fix MimeType propagation
    * Fix filters to add ** rules to directory rules
* Local
    * Implement -L, --copy-links flag to allow rclone to follow symlinks
    * Open files in write only mode so rclone can write to an rclone mount
    * Fix unnormalised unicode causing problems reading directories
    * Fix interaction between -x flag and --max-depth
* Mount
    * Implement proper directory handling (mkdir, rmdir, renaming)
    * Make include and exclude filters apply to mount
    * Implement read and write async buffers - control with --buffer-size
    * Fix fsync on for directories
    * Fix retry on network failure when reading off crypt
* Crypt
    * Add --crypt-show-mapping to show encrypted file mapping
    * Fix crypt writer getting stuck in a loop
        * **IMPORTANT** this bug had the potential to cause data corruption when
            * reading data from a network based remote and
            * writing to a crypt on Google Drive
        * Use the cryptcheck command to validate your data if you are concerned
        * If syncing two crypt remotes, sync the unencrypted remote
* Amazon Drive
    * Fix panics on Move (rename)
    * Fix panic on token expiry
* B2
    * Fix inconsistent listings and rclone check
    * Fix uploading empty files with go1.8
    * Constrain memory usage when doing multipart uploads
    * Fix upload url not being refreshed properly
* Drive
    * Fix Rmdir on directories with trashed files
    * Fix "Ignoring unknown object" when downloading
    * Add --drive-list-chunk
    * Add --drive-skip-gdocs (Károly Oláh)
* OneDrive
    * Implement Move
    * Fix Copy
        * Fix overwrite detection in Copy
        * Fix waitForJob to parse errors correctly
    * Use token renewer to stop auth errors on long uploads
    * Fix uploading empty files with go1.8
* Google Cloud Storage
    * Fix depth 1 directory listings
* Yandex
    * Fix single level directory listing
* Dropbox
    * Normalise the case for single level directory listings
    * Fix depth 1 listing
* S3
    * Added ca-central-1 region (Jon Yergatian)

## v1.35 - 2017-01-02

* New Features
    * moveto and copyto commands for choosing a destination name on copy/move (see the example below)
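A quick sketch of `copyto` above; the file names and remote are placeholders:

```
# Copy a single file, giving it a different name at the destination
rclone copyto /src/local.txt remote:dir/renamed.txt
```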
    * rmdirs command to recursively delete empty directories
    * Allow repeated --include/--exclude/--filter options
    * Only show transfer stats on commands which transfer stuff
        * show stats on any command using the `--stats` flag
    * Allow overlapping directories in move when server side dir move is supported
    * Add --stats-unit option - thanks Scott McGillivray
* Bug Fixes
    * Fix the config file being overwritten when two rclone instances are running
    * Make rclone lsd obey the filters properly
    * Fix compilation on mips
    * Fix not transferring files that don't differ in size
    * Fix panic on nil retry/fatal error
* Mount
    * Retry reads on error - should help with reliability a lot
    * Report the modification times for directories from the remote
    * Add bandwidth accounting and limiting (fixes --bwlimit)
    * If --stats provided will show stats and which files are transferring
    * Support R/W files if truncate is set.
    * Implement statfs interface so df works
    * Note that write is now supported on Amazon Drive
    * Report number of blocks in a file - thanks Stefan Breunig
* Crypt
    * Prevent the user pointing crypt at itself
    * Fix failed to authenticate decrypted block errors
        * these will now return the underlying unexpected EOF instead
* Amazon Drive
    * Add support for server side move and directory move - thanks Stefan Breunig
    * Fix nil pointer deref on size attribute
* B2
    * Use new prefix and delimiter parameters in directory listings
        * This makes --max-depth 1 dir listings as used in mount much faster
    * Reauth the account while doing uploads too - should help with token expiry
* Drive
    * Make DirMove more efficient and complain about moving the root
    * Create destination directory on Move()

## v1.34 - 2016-11-06

* New Features
    * Stop single file and `--files-from` operations iterating through the source bucket.
    * Stop removing failed upload to cloud storage remotes
    * Make ContentType be preserved for cloud to cloud copies
    * Add support to toggle bandwidth limits via SIGUSR2 - thanks Marco Paganini
    * `rclone check` shows count of hashes that couldn't be checked
    * `rclone listremotes` command
    * Support linux/arm64 build - thanks Fredrik Fornwall
    * Remove `Authorization:` lines from `--dump-headers` output
* Bug Fixes
    * Ignore files with control characters in the names
    * Fix `rclone move` command
        * Delete src files which already existed in dst
        * Fix deletion of src file when dst file older
    * Fix `rclone check` on crypted file systems
    * Make failed uploads not count as "Transferred"
    * Make sure high level retries show with `-q`
    * Use a vendor directory with godep for repeatable builds
* `rclone mount` - FUSE
    * Implement FUSE mount options
        * `--no-modtime`, `--debug-fuse`, `--read-only`, `--allow-non-empty`, `--allow-root`, `--allow-other`
        * `--default-permissions`, `--write-back-cache`, `--max-read-ahead`, `--umask`, `--uid`, `--gid`
    * Add `--dir-cache-time` to control caching of directory entries
    * Implement seek for files opened for read (useful for video players)
        * with `-no-seek` flag to disable
    * Fix crash on 32 bit ARM (alignment of 64 bit counter)
    * ...and many more internal fixes and improvements!
* Crypt
    * Don't show encrypted password in configurator to stop confusion
* Amazon Drive
    * New wait for upload option `--acd-upload-wait-per-gb`
        * upload timeouts scale by file size and can be disabled
    * Add 502 Bad Gateway to list of errors we retry
    * Fix overwriting a file with a zero length file
    * Fix ACD file size warning limit - thanks Felix Bünemann
* Local
    * Unix: implement `-x`/`--one-file-system` to stay on a single file system
        * thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
    * Windows: ignore the symlink bit on files
    * Windows: Ignore directory based junction points
* B2
    * Make sure each upload has at least one upload slot - fixes strange upload stats
    * Fix uploads when using crypt
    * Fix download of large files (sha1 mismatch)
    * Return error when we try to create a bucket which someone else owns
    * Update B2 docs with Data usage, and Crypt section - thanks Tomasz Mazur
* S3
    * Command line and config file support for
        * Setting/overriding ACL - thanks Radek Senfeld
        * Setting storage class - thanks Asko Tamm
* Drive
    * Make exponential backoff work exactly as per Google specification
    * add `.epub`, `.odp` and `.tsv` as export formats.
* Swift
    * Don't read metadata for directory marker objects

## v1.33 - 2016-08-24

* New Features
    * Implement encryption
        * data encrypted in NACL secretbox format
        * with optional file name encryption
    * New commands
        * rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)
            * works on Linux, FreeBSD and OS X (need testers for the last 2!)
        * rclone cat - outputs remote file or files to the terminal
        * rclone genautocomplete - command to make a bash completion script for rclone
    * Editing a remote using `rclone config` now goes through the wizard
    * Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors
    * Use cobra for sub commands and docs generation
* drive
    * Document how to make your own client_id
* s3
    * User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
* b2
    * Fix stats accounting for upload - no more jumping to 100% done
    * On cleanup delete hide marker if it is the current file
    * New B2 API endpoint (thanks Per Cederberg)
    * Set maximum backoff to 5 Minutes
* onedrive
    * Fix URL escaping in file names - eg uploading files with `+` in them.
* amazon cloud drive
    * Fix token expiry during large uploads
    * Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
* local
    * Fix filenames with invalid UTF-8 not being uploaded
    * Fix problem with some UTF-8 characters on OS X

## v1.32 - 2016-07-13

* Backblaze B2
    * Fix upload of large files not in root

## v1.31 - 2016-07-13

* New Features
    * Reduce memory on sync by about 50%
    * Implement --no-traverse flag to stop copy traversing the destination remote.
        * This can be used to reduce memory usage down to the smallest possible.
        * Useful to copy a small number of files into a large destination folder.
    * Implement cleanup command for emptying trash / removing old versions of files
        * Currently B2 only
    * Single file handling improved
        * Now copied with --files-from
        * Automatically sets --no-traverse when copying a single file
    * Info on using installing with ansible - thanks Stefan Weichinger
    * Implement --no-update-modtime flag to stop rclone fixing the remote modified times.
* Bug Fixes
    * Fix move command - stop it running for overlapping Fses - this was causing data loss.
* Local
    * Fix incomplete hashes - this was causing problems for B2.
* Amazon Drive
    * Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.
* Swift * Add support for non-default project domain - thanks Antonio Messina. * S3 * Add instructions on how to use rclone with minio. * Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions. * Skip setting the modified time for objects > 5GB as it isn't possible. * Backblaze B2 * Add --b2-versions flag so old versions can be listed and retrieved. * Treat 403 errors (eg cap exceeded) as fatal. * Implement cleanup command for deleting old file versions. * Make error handling compliant with B2 integrations notes. * Fix handling of token expiry. * Implement --b2-test-mode to set `X-Bz-Test-Mode` header. * Set cutoff for chunked upload to 200MB as per B2 guidelines. * Make upload multi-threaded. * Dropbox * Don't retry 461 errors. ## v1.30 - 2016-06-18 * New Features * Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables * Directory include filtering for efficiency * --max-depth parameter * Better error reporting * More to come * Retry more errors * Add --ignore-size flag - for uploading images to onedrive * Log -v output to stdout by default * Display the transfer stats in more human readable form * Make 0 size files specifiable with `--max-size 0b` * Add `b` suffix so we can specify bytes in --bwlimit, --min-size etc * Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz * Bug Fixes * Fix retry doing one too many retries * Local * Fix problems with OS X and UTF-8 characters * Amazon Drive * Check a file exists before uploading to help with 408 Conflict errors * Reauth on 401 errors - this has been causing a lot of problems * Work around spurious 403 errors * Restart directory listings on error * Google Drive * Check a file exists before uploading to help with duplicates * Fix retry of multipart uploads * Backblaze B2 * Implement large file uploading * S3 * Add AES256 server-side encryption - thanks Justin R. Wilson * Google Cloud Storage * Make sure we don't use conflicting content types on upload * Add service account support - thanks Michal Witkowski * Swift * Add auth version parameter * Add domain option for openstack (v3 auth) - thanks Fabian Ruff ## v1.29 - 2016-04-18 * New Features * Implement `-I, --ignore-times` for unconditional upload * Improve `dedupe` command * Now removes identical copies without asking * Now obeys `--dry-run` * Implement `--dedupe-mode` for non interactive running * `--dedupe-mode interactive` - interactive is the default. * `--dedupe-mode skip` - removes identical files then skips anything left. * `--dedupe-mode first` - removes identical files then keeps the first one. * `--dedupe-mode newest` - removes identical files then keeps the newest one. * `--dedupe-mode oldest` - removes identical files then keeps the oldest one. * `--dedupe-mode rename` - removes identical files then renames the rest to be different. * Bug fixes * Make rclone check obey the `--size-only` flag. * Use "application/octet-stream" if discovered mime type is invalid. * Fix missing "quit" option when there are no remotes. * Google Drive * Increase default chunk size to 8 MB - increases upload speed of big files * Speed up directory listings and make more reliable * Add missing retries for Move and DirMove - increases reliability * Preserve mime type on file update * Backblaze B2 * Enable mod time syncing * This means that B2 will now check modification times * It will upload new files to update the modification times * (there isn't an API to just set the mod time.)
* If you want the old behaviour use `--size-only`. * Update API to new version * Fix parsing of mod time when not in metadata * Swift/Hubic * Don't return an MD5SUM for static large objects * S3 * Fix uploading files bigger than 50GB ## v1.28 - 2016-03-01 * New Features * Configuration file encryption - thanks Klaus Post * Improve `rclone config` adding more help and making it easier to understand * Implement `-u`/`--update` so creation times can be used on all remotes * Implement `--low-level-retries` flag * Optionally disable gzip compression on downloads with `--no-gzip-encoding` * Bug fixes * Don't make directories if `--dry-run` set * Fix and document the `move` command * Fix redirecting stderr on unix-like OSes when using `--log-file` * Fix `delete` command to wait until all finished - fixes missing deletes. * Backblaze B2 * Use one upload URL per go routine fixes `more than one upload using auth token` * Add pacing, retries and reauthentication - fixes token expiry problems * Upload without using a temporary file from local (and remotes which support SHA1) * Fix reading metadata for all files when it shouldn't have been * Drive * Fix listing drive documents at root * Disable copy and move for Google docs * Swift * Fix uploading of chunked files with non ASCII characters * Allow setting of `storage_url` in the config - thanks Xavier Lucas * S3 * Allow IAM role and credentials from environment variables - thanks Brian Stengaard * Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon * Amazon Drive * Retry on more things to make directory listings more reliable ## v1.27 - 2016-01-31 * New Features * Easier headless configuration with `rclone authorize` * Add support for multiple hash types - we now check SHA1 as well as MD5 hashes. * `delete` command which does obey the filters (unlike `purge`) * `dedupe` command to deduplicate a remote. Useful with Google Drive. * Add `--ignore-existing` flag to skip all files that exist on destination. * Add `--delete-before`, `--delete-during`, `--delete-after` flags. * Add `--memprofile` flag to debug memory use. * Warn the user about files with same name but different case * Make `--include` rules add their implicit exclude * at the end of the filter list * Deprecate compiling with go1.3 * Amazon Drive * Fix download of files > 10 GB * Fix directory traversal ("Next token is expired") for large directory listings * Remove 409 conflict from error codes we will retry - stops very long pauses * Backblaze B2 * SHA1 hashes now checked by rclone core * Drive * Add `--drive-auth-owner-only` to only consider files owned by the user - thanks Björn Harrtell * Export Google documents * Dropbox * Make file exclusion error controllable with -q * Swift * Fix upload from unprivileged user. * S3 * Fix updating of mod times of files with `+` in. * Local * Add local file system option to disable UNC on Windows. 
## v1.26 - 2016-01-02 * New Features * Yandex storage backend - thank you Dmitry Burdeev ("dibu") * Implement Backblaze B2 storage backend * Add --min-age and --max-age flags - thank you Adriano Aurélio Meirelles * Make ls/lsl/md5sum/size/check obey includes and excludes * Fixes * Fix crash in http logging * Upload releases to github too * Swift * Fix sync for chunked files * OneDrive * Re-enable server side copy * Don't mask HTTP error codes with JSON decode error * S3 * Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier) ## v1.25 - 2015-11-14 * New features * Implement Hubic storage system * Fixes * Fix deletion of some excluded files without --delete-excluded * This could have deleted files unexpectedly on sync * Always check first with `--dry-run`! * Swift * Stop SetModTime losing metadata (eg X-Object-Manifest) * This could have caused data loss for files > 5GB in size * Use ContentType from Object to avoid lookups in listings * OneDrive * disable server side copy as it seems to be broken at Microsoft ## v1.24 - 2015-11-07 * New features * Add support for Microsoft OneDrive * Add `--no-check-certificate` option to disable server certificate verification * Add async readahead buffer for faster transfer of big files * Fixes * Allow spaces in remotes and check remote names for validity at creation time * Allow '&' and disallow ':' in Windows filenames. * Swift * Ignore directory marker objects where appropriate - allows working with Hubic * Don't delete the container if fs wasn't at root * S3 * Don't delete the bucket if fs wasn't at root * Google Cloud Storage * Don't delete the bucket if fs wasn't at root ## v1.23 - 2015-10-03 * New features * Implement `rclone size` for measuring remotes * Fixes * Fix headless config for drive and gcs * Tell the user they should try again if the webserver method failed * Improve output of `--dump-headers` * S3 * Allow anonymous access to public buckets * Swift * Stop chunked operations logging "Failed to read info: Object Not Found" * Use Content-Length on uploads for extra reliability ## v1.22 - 2015-09-28 * Implement rsync like include and exclude flags * swift * Support files > 5GB - thanks Sergey Tolmachev ## v1.21 - 2015-09-22 * New features * Display individual transfer progress * Make lsl output times in localtime * Fixes * Fix allowing user to override credentials again in Drive, GCS and ACD * Amazon Drive * Implement compliant pacing scheme * Google Drive * Make directory reads concurrent for increased speed. 
## v1.20 - 2015-09-15 * New features * Amazon Drive support * Oauth support redone - fix many bugs and improve usability * Use "golang.org/x/oauth2" as oauth library of choice * Improve oauth usability for smoother initial signup * drive, googlecloudstorage: optionally use auto config for the oauth token * Implement --dump-headers and --dump-bodies debug flags * Show multiple matched commands if abbreviation too short * Implement server side move where possible * local * Always use UNC paths internally on Windows - fixes a lot of bugs * dropbox * force use of our custom transport which makes timeouts work * Thanks to Klaus Post for lots of help with this release ## v1.19 - 2015-08-28 * New features * Server side copies for s3/swift/drive/dropbox/gcs * Move command - uses server side copies if it can * Implement --retries flag - tries 3 times by default * Build for plan9/amd64 and solaris/amd64 too * Fixes * Make a current version download with a fixed URL for scripting * Ignore rmdir in limited fs rather than throwing error * dropbox * Increase chunk size to improve upload speeds massively * Issue an error message when trying to upload bad file name ## v1.18 - 2015-08-17 * drive * Add `--drive-use-trash` flag so rclone trashes instead of deletes * Add "Forbidden to download" message for files with no downloadURL * dropbox * Remove datastore * This was deprecated and it caused a lot of problems * Modification times and MD5SUMs no longer stored * Fix uploading files > 2GB * s3 * use official AWS SDK from github.com/aws/aws-sdk-go * **NB** will most likely require you to delete and recreate remote * enable multipart upload which enables files > 5GB * tested with Ceph / RadosGW / S3 emulation * many thanks to Sam Liston and Brian Haymore at the [Utah Center for High Performance Computing](https://www.chpc.utah.edu/) for a Ceph test account * misc * Show errors when reading the config file * Do not print stats in quiet mode - thanks Leonid Shalupov * Add FAQ * Fix created directories not obeying umask * Linux installation instructions - thanks Shimon Doodkin ## v1.17 - 2015-06-14 * dropbox: fix case insensitivity issues - thanks Leonid Shalupov ## v1.16 - 2015-06-09 * Fix uploading big files which was causing timeouts or panics * Don't check md5sum after download with --size-only ## v1.15 - 2015-06-06 * Add --checksum flag to only discard transfers by MD5SUM - thanks Alex Couper * Implement --size-only flag to sync on size not checksum & modtime * Expand docs and remove duplicated information * Document rclone's limitations with directories * dropbox: update docs about case insensitivity ## v1.14 - 2015-05-21 * local: fix encoding of non utf-8 file names - fixes a duplicate file problem * drive: docs about rate limiting * google cloud storage: Fix compile after API change in "google.golang.org/api/storage/v1" ## v1.13 - 2015-05-10 * Revise documentation (especially sync) * Implement --timeout and --conntimeout * s3: ignore etags from multipart uploads which aren't md5sums ## v1.12 - 2015-03-15 * drive: Use chunked upload for files above a certain size * drive: add --drive-chunk-size and --drive-upload-cutoff parameters * drive: switch to insert from update when a failed copy deletes the upload * core: Log duplicate files if they are detected ## v1.11 - 2015-03-04 * swift: add region parameter * drive: fix crash on failed to update remote mtime * In remote paths, change native directory separators to / * Add synchronization to ls/lsl/lsd output to stop corruptions * Ensure all stats/log messages 
go to stderr * Add --log-file flag to log everything (including panics) to file * Make it possible to disable stats printing with --stats=0 * Implement --bwlimit to limit data transfer bandwidth ## v1.10 - 2015-02-12 * s3: list an unlimited number of items * Fix getting stuck in the configurator ## v1.09 - 2015-02-07 * windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:) * local: Fix directory separators on Windows * drive: fix rate limit exceeded errors ## v1.08 - 2015-02-04 * drive: fix subdirectory listing to not list entire drive * drive: Fix SetModTime * dropbox: adapt code to recent library changes ## v1.07 - 2014-12-23 * google cloud storage: fix memory leak ## v1.06 - 2014-12-12 * Fix "Couldn't find home directory" on OSX * swift: Add tenant parameter * Use new location of Google API packages ## v1.05 - 2014-08-09 * Improved tests and consequently lots of minor fixes * core: Fix race detected by go race detector * core: Fixes after running errcheck * drive: reset root directory on Rmdir and Purge * fs: Document that Purger returns error on empty directory, test and fix * google cloud storage: fix ListDir on subdirectory * google cloud storage: re-read metadata in SetModTime * s3: make reading metadata more reliable to work around eventual consistency problems * s3: strip trailing / from ListDir() * swift: return directories without / in ListDir ## v1.04 - 2014-07-21 * google cloud storage: Fix crash on Update ## v1.03 - 2014-07-20 * swift, s3, dropbox: fix updated files being marked as corrupted * Make compile with go 1.1 again ## v1.02 - 2014-07-19 * Implement Dropbox remote * Implement Google Cloud Storage remote * Verify Md5sums and Sizes after copies * Remove times from "ls" command - lists sizes only * Add "lsl" - lists times and sizes * Add "md5sum" command ## v1.01 - 2014-07-04 * drive: fix transfer of big files using up lots of memory ## v1.00 - 2014-07-03 * drive: fix whole second dates ## v0.99 - 2014-06-26 * Fix --dry-run not working * Make compatible with go 1.1 ## v0.98 - 2014-05-30 * s3: Treat missing Content-Length as 0 for some ceph installations * rclonetest: add file with a space in ## v0.97 - 2014-05-05 * Implement copying of single files * s3 & swift: support paths inside containers/buckets ## v0.96 - 2014-04-24 * drive: Fix multiple files of same name being created * drive: Use o.Update and fs.Put to optimise transfers * Add version number, -V and --version ## v0.95 - 2014-03-28 * rclone.org: website, docs and graphics * drive: fix path parsing ## v0.94 - 2014-03-27 * Change remote format one last time * GNU style flags ## v0.93 - 2014-03-16 * drive: store token in config file * cross compile other versions * set strict permissions on config file ## v0.92 - 2014-03-15 * Config fixes and --config option ## v0.91 - 2014-03-15 * Make config file ## v0.90 - 2013-06-27 * Project named rclone ## v0.00 - 2012-11-18 * Project started # Bugs and Limitations ## Limitations ### Directory timestamps aren't preserved Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing. ### Rclone struggles with millions of files in a directory/bucket Currently rclone loads each directory/bucket entirely into memory before using it. Since each rclone object takes 0.5k-1k of memory this can take a very long time and use a large amount of memory. Millions of files in a directory tend to occur on bucket-based remotes (e.g. S3 buckets) since those remotes do not segregate subdirectories within the bucket.
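To get a feel for the scale, here is a rough back-of-the-envelope estimate based on the 0.5k-1k per object figure above (the object count is hypothetical, chosen only to illustrate):

```
10,000,000 objects x 0.5k per object ≈ 5 GB of RAM (lower bound)
10,000,000 objects x 1k per object ≈ 10 GB of RAM (upper bound)
```

That memory is needed just to hold the listing, before any transfers start.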
### Bucket based remotes and folders Bucket based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them which means that empty directories on a bucket based remote will tend to disappear. Some software creates empty keys ending in `/` as directory markers. Rclone doesn't do this as it potentially creates more objects and costs more. This ability may be added in the future (probably via a flag/option). ## Bugs Bugs are stored in rclone's GitHub project: * [Reported bugs](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug) * [Known issues](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+milestone%3A%22Known+Problem%22) Frequently Asked Questions -------------------------- ### Do all cloud storage systems support all rclone commands ### Yes they do. All the rclone commands (eg `sync`, `copy` etc) will work on all the remote storage systems. ### Can I copy the config from one machine to another ### Sure! Rclone stores all of its config in a single file. If you want to find this file, run `rclone config file` which will tell you where it is. See the [remote setup docs](https://rclone.org/remote_setup/) for more info. ### How do I configure rclone on a remote / headless box with no browser? ### This has now been documented in its own [remote setup page](https://rclone.org/remote_setup/). ### Can rclone sync directly from drive to s3 ### Rclone can sync between two remote cloud storage systems just fine. Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth. The syncs would be incremental (on a file by file basis). Eg rclone sync -i drive:Folder s3:bucket ### Using rclone from multiple locations at the same time ### You can use rclone from multiple places at the same time if you choose a different subdirectory for the output, eg ``` Server A> rclone sync -i /tmp/whatever remote:ServerA Server B> rclone sync -i /tmp/whatever remote:ServerB ``` If you sync to the same directory then you should use rclone copy otherwise the two instances of rclone may delete each other's files, eg ``` Server A> rclone copy /tmp/whatever remote:Backup Server B> rclone copy /tmp/whatever remote:Backup ``` The file names you upload from Server A and Server B should be different in this case, otherwise some file systems (eg Drive) may make duplicates. ### Why doesn't rclone support partial transfers / binary diffs like rsync? ### Rclone stores each file you transfer as a native object on the remote cloud storage system. This means that you can see the files you upload as expected using alternative access methods (eg using the Google Drive web interface). There is a 1:1 mapping between files on your hard disk and objects created in the cloud storage system. No cloud storage system I've come across yet supports partially uploading an object. You can't take an existing object and change some bytes in the middle of it. It would be possible to make a sync system which stored binary diffs instead of whole objects like rclone does, but that would break the 1:1 mapping of files on your hard disk to objects in the remote cloud storage system. All the cloud storage systems support partial downloads of content, so it would be possible to make partial downloads work. However, to make this work efficiently, this would require storing a significant amount of metadata, which breaks the desired 1:1 mapping of files to objects.
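As an aside, you can see ranged downloads working today by asking rclone to read just part of a single object with `rclone cat` (a quick illustration only - `remote:path/to/file` is a placeholder, and this is a one-off ranged read, not a partial sync):

```
# read 1 MB starting 10 MB into the file
rclone cat --offset 10485760 --count 1048576 remote:path/to/file > part.bin
```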
### Can rclone do bi-directional sync? ### No, not at present. rclone only does uni-directional sync from A -> B. It may do so in the future though since it has all the primitives - it just requires writing the algorithm to do it. ### Can I use rclone with an HTTP proxy? ### Yes. rclone will follow the standard environment variables for proxies, similar to cURL and other programs. In general the variables are called `http_proxy` (for services reached over `http`) and `https_proxy` (for services reached over `https`). Most public services will be using `https`, but you may wish to set both. The content of the variable is `protocol://server:port`. The protocol value is the one used to talk to the proxy server itself, and is commonly either `http` or `socks5`. Slightly annoyingly, there is no _standard_ for the name; some applications use `http_proxy` while others use `HTTP_PROXY`. The `Go` libraries used by `rclone` will try both variations, but you may wish to set all possibilities. So, on Linux, you may end up with code similar to export http_proxy=http://proxyserver:12345 export https_proxy=$http_proxy export HTTP_PROXY=$http_proxy export HTTPS_PROXY=$http_proxy The `NO_PROXY` variable allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance "foo.com" also matches "bar.foo.com". e.g. export no_proxy=localhost,127.0.0.0/8,my.host.name export NO_PROXY=$no_proxy Note that the ftp backend does not support `ftp_proxy` yet. ### Rclone gives x509: failed to load system roots and no roots provided error ### This means that `rclone` can't find the SSL root certificates. Likely you are running `rclone` on a NAS with a cut-down Linux OS, or possibly on Solaris. Rclone (via the Go runtime) tries to load the root certificates from these places on Linux. "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc. "/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL "/etc/ssl/ca-bundle.pem", // OpenSUSE "/etc/pki/tls/cacert.pem", // OpenELEC So doing something like this should fix the problem. It also sets the time which is important for SSL to work properly. ``` mkdir -p /etc/ssl/certs/ curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt ntpclient -s -h pool.ntp.org ``` The two environment variables `SSL_CERT_FILE` and `SSL_CERT_DIR`, mentioned in the [x509 package](https://godoc.org/crypto/x509), provide an additional way to provide the SSL root certificates. Note that you may need to add the `--insecure` option to the `curl` command line if it doesn't work without. ``` curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt ```
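For example, to point rclone (and other Go programs) at the bundle downloaded above instead of the default system locations, something like this should work (a sketch - adjust the path to wherever your bundle actually lives):

```
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
rclone lsd remote:
```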
### Rclone gives Failed to load config file: function not implemented error ### Likely this means that you are running rclone on a Linux version not supported by the go runtime, ie a kernel earlier than version 2.6.23. See the [system requirements section in the go install docs](https://golang.org/doc/install) for full details. ### All my uploaded docx/xlsx/pptx files appear as archive/zip ### This is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix this is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats. ### tcp lookup some.domain.com no such host ### This happens when rclone cannot resolve a domain. Please check that your DNS setup is generally working, e.g. ``` # both should print a long list of possible IP addresses dig www.googleapis.com # resolve using your default DNS dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server ``` If you are using `systemd-resolved` (default on Arch Linux), ensure it is at version 233 or higher. Earlier versions contain a bug which causes some domains not to be resolved properly. Additionally, the `GODEBUG=netdns=` environment variable can be used to influence which resolver Go uses, which can also resolve certain issues with DNS resolution. See the [name resolution section in the go docs](https://golang.org/pkg/net/#hdr-Name_Resolution). ### The total size reported in the stats for a sync is wrong and keeps changing It is likely you have more than 10,000 files that need to be synced. By default rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the [--max-backlog](https://rclone.org/docs/#max-backlog-n) flag. ### Rclone is using too much memory or appears to have a memory leak Rclone is written in Go which uses a garbage collector. The default settings for the garbage collector mean that it runs when the heap size has doubled. However it is possible to tune the garbage collector to use less memory by [setting GOGC](https://dave.cheney.net/tag/gogc) to a lower value, say `export GOGC=20`. This will make the garbage collector work harder, reducing memory size at the expense of CPU usage. The most common cause of rclone using lots of memory is a single directory with thousands or millions of files in it. Rclone has to load this entirely into memory as rclone objects. Each rclone object takes 0.5k-1k of memory. License ------- This is free software under the terms of the MIT license (check the COPYING file included with the source code). ``` Copyright (C) 2019 by Nick Craig-Wood https://www.craig-wood.com/nick/ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
``` Authors ------- * Nick Craig-Wood Contributors ------------ {{< rem `email addresses removed from here need to be addeed to bin/.ignore-emails to make sure update-authors.py doesn't immediately put them back in again.` >}} * Alex Couper * Leonid Shalupov * Shimon Doodkin * Colin Nicholson * Klaus Post * Sergey Tolmachev * Adriano Aurélio Meirelles * C. Bess * Dmitry Burdeev * Joseph Spurrier * Björn Harrtell * Xavier Lucas * Werner Beroux * Brian Stengaard * Jakub Gedeon * Jim Tittsler * Michal Witkowski * Fabian Ruff * Leigh Klotz * Romain Lapray * Justin R. Wilson * Antonio Messina * Stefan G. Weichinger * Per Cederberg * Radek Šenfeld * Fredrik Fornwall * Asko Tamm * xor-zz * Tomasz Mazur * Marco Paganini * Felix Bünemann * Durval Menezes * Luiz Carlos Rumbelsperger Viana * Stefan Breunig * Alishan Ladhani * 0xJAKE <0xJAKE@users.noreply.github.com> * Thibault Molleman * Scott McGillivray * Bjørn Erik Pedersen * Lukas Loesche * emyarod * T.C. Ferguson * Brandur * Dario Giovannetti * Károly Oláh * Jon Yergatian * Jack Schmidt * Dedsec1 * Hisham Zarka * Jérôme Vizcaino * Mike Tesch * Marvin Watson * Danny Tsai * Yoni Jah * Stephen Harris * Ihor Dvoretskyi * Jon Craton * Hraban Luyat * Michael Ledin * Martin Kristensen * Too Much IO * Anisse Astier * Zahiar Ahmed * Igor Kharin * Bill Zissimopoulos * Bob Potter * Steven Lu * Sjur Fredriksen * Ruwbin * Fabian Möller * Edward Q. Bridges * Vasiliy Tolstov * Harshavardhana * sainaen * gdm85 * Yaroslav Halchenko * John Papandriopoulos * Zhiming Wang * Andy Pilate * Oliver Heyme * wuyu * Andrei Dragomir * Christian Brüggemann * Alex McGrath Kraak * bpicode * Daniel Jagszent * Josiah White * Ishuah Kariuki * Jan Varho * Girish Ramakrishnan * LingMan * Jacob McNamee * jersou * thierry * Simon Leinen * Dan Dascalescu * Jason Rose * Andrew Starr-Bochicchio * John Leach * Corban Raun * Pierre Carlson * Ernest Borowski * Remus Bunduc * Iakov Davydov * Jakub Tasiemski * David Minor * Tim Cooijmans * Laurence * Giovanni Pizzi * Filip Bartodziej * Jon Fautley * lewapm <32110057+lewapm@users.noreply.github.com> * Yassine Imounachen * Chris Redekop * Jon Fautley * Will Gunn * Lucas Bremgartner * Jody Frankowski * Andreas Roussos * nbuchanan * Durval Menezes * Victor * Mateusz * Daniel Loader * David0rk * Alexander Neumann * Giri Badanahatti * Leo R. Lundgren * wolfv * Dave Pedu * Stefan Lindblom * seuffert * gbadanahatti <37121690+gbadanahatti@users.noreply.github.com> * Keith Goldfarb * Steve Kriss * Chih-Hsuan Yen * Alexander Neumann * Matt Holt * Eri Bastos * Michael P. Dubner * Antoine GIRARD * Mateusz Piotrowski * Animosity022 * Peter Baumgartner * Craig Rachel * Michael G. Noll * hensur * Oliver Heyme * Richard Yang * Piotr Oleszczyk * Rodrigo * NoLooseEnds * Jakub Karlicek * John Clayton * Kasper Byrdal Nielsen * Benjamin Joseph Dag * themylogin * Onno Zweers * Jasper Lievisse Adriaanse * sandeepkru * HerrH * Andrew <4030760+sparkyman215@users.noreply.github.com> * dan smith * Oleg Kovalov * Ruben Vandamme * Cnly * Andres Alvarez <1671935+kir4h@users.noreply.github.com> * reddi1 * Matt Tucker * Sebastian Bünger * Martin Polden * Alex Chen * Denis * bsteiss <35940619+bsteiss@users.noreply.github.com> * Cédric Connes * Dr. 
Tobias Quathamer * dcpu <42736967+dcpu@users.noreply.github.com> * Sheldon Rupp * albertony <12441419+albertony@users.noreply.github.com> * cron410 * Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com> * Felix Brucker * Santiago Rodríguez * Craig Miskell * Antoine GIRARD * Joanna Marek * frenos * ssaqua * xnaas * Frantisek Fuka * Paul Kohout * dcpu <43330287+dcpu@users.noreply.github.com> * jackyzy823 * David Haguenauer * teresy * buergi * Florian Gamboeck * Ralf Hemberger <10364191+rhemberger@users.noreply.github.com> * Scott Edlund * Erik Swanson * Jake Coggiano * brused27 * Peter Kaminski * Henry Ptasinski * Alexander * Garry McNulty * Mathieu Carbou * Mark Otway * William Cocker <37018962+WilliamCocker@users.noreply.github.com> * François Leurent <131.js@cloudyks.org> * Arkadius Stefanski * Jay * andrea rota * nicolov * Dario Guzik * qip * yair@unicorn * Matt Robinson * kayrus * Rémy Léone * Wojciech Smigielski * weetmuts * Jonathan * James Carpenter * Vince * Nestar47 <47841759+Nestar47@users.noreply.github.com> * Six * Alexandru Bumbacea * calisro * Dr.Rx * marcintustin * jaKa Močnik * Fionera * Dan Walters * Danil Semelenov * xopez <28950736+xopez@users.noreply.github.com> * Ben Boeckel * Manu * Kyle E. Mitchell * Gary Kim * Jon * Jeff Quinn * Peter Berbec * didil <1284255+didil@users.noreply.github.com> * id01 * Robert Marko * Philip Harvey <32467456+pharveybattelle@users.noreply.github.com> * JorisE * garry415 * forgems * Florian Apolloner * Aleksandar Janković * Maran * nguyenhuuluan434 * Laura Hausmann * yparitcher * AbelThar * Matti Niemenmaa * Russell Davis * Yi FU * Paul Millar * justinalin * EliEron * justina777 * Chaitanya Bankanhal * Michał Matczuk * Macavirus * Abhinav Sharma * ginvine <34869051+ginvine@users.noreply.github.com> * Patrick Wang * Cenk Alti * Andreas Chlupka * Alfonso Montero * Ivan Andreev * David Baumgold * Lars Lehtonen * Matei David * David * Anthony Rusdi <33247310+antrusd@users.noreply.github.com> * Richard Patel * 庄天翼 * SwitchJS * Raphael * Sezal Agrawal * Tyler * Brett Dutro * Vighnesh SK * Arijit Biswas * Michele Caci * AlexandrBoltris * Bryce Larson * Carlos Ferreyra * Saksham Khanna * dausruddin <5763466+dausruddin@users.noreply.github.com> * zero-24 * Xiaoxing Ye * Barry Muldrey * Sebastian Brandt * Marco Molteni * Ankur Gupta <7876747+ankur0493@users.noreply.github.com> * Maciej Zimnoch * anuar45 * Fernando * David Cole * Wei He * Outvi V <19144373+outloudvi@users.noreply.github.com> * Thomas Kriechbaumer * Tennix * Ole Schütt * Kuang-che Wu * Thomas Eales * Paul Tinsley * Felix Hungenberg * Benjamin Richter * landall * thestigma * jtagcat <38327267+jtagcat@users.noreply.github.com> * Damon Permezel * boosh * unbelauscht <58393353+unbelauscht@users.noreply.github.com> * Motonori IWAMURO * Benjapol Worakan * Dave Koston * Durval Menezes * Tim Gallant * Frederick Zhang * valery1707 * Yves G * Shing Kit Chan * Franklyn Tackitt * Robert-André Mauchin * evileye <48332831+ibiruai@users.noreply.github.com> * Joachim Brandon LeBlanc * Patryk Jakuszew * fishbullet * greatroar <@> * Bernd Schoolmann * Elan Ruusamäe * Max Sum * Mark Spieth * harry * Samantha McVey * Jack Anderson * Michael G * Brandon Philips * Daven * Martin Stone * David Bramwell <13053834+dbramwell@users.noreply.github.com> * Sunil Patra * Adam Stroud * Kush * Matan Rosenberg * gitch1 <63495046+gitch1@users.noreply.github.com> * ElonH * Fred * Sébastien Gross * Maxime Suret <11944422+msuret@users.noreply.github.com> * Caleb Case * Ben Zenker * Martin Michlmayr * Brandon 
McNama * Daniel Slyman * Alex Guerrero * Matteo Pietro Dazzi * edwardxml <56691903+edwardxml@users.noreply.github.com> * Roman Kredentser * Kamil Trzciński * Zac Rubin * Vincent Feltz * Heiko Bornholdt * Matteo Pietro Dazzi * jtagcat * Petri Salminen * Tim Burke * Kai Lüke * Garrett Squire * Evan Harris * Kevin * Morten Linderud * Dmitry Ustalov * Jack <196648+jdeng@users.noreply.github.com> * kcris * tyhuber1 <68970760+tyhuber1@users.noreply.github.com> * David Ibarra * Tim Gallant * Kaloyan Raev * Jay McEntire * Leo Luan * aus <549081+aus@users.noreply.github.com> * Aaron Gokaslan * Egor Margineanu * Lucas Kanashiro * WarpedPixel * Sam Edwards # Contact the rclone project # ## Forum ## Forum for questions and general discussion: * https://forum.rclone.org ## GitHub repository ## The project's repository is located at: * https://github.com/rclone/rclone There you can file bug reports or contribute with pull requests. ## Twitter ## You can also follow me on twitter for rclone announcements: * [@njcw](https://twitter.com/njcw) ## Email ## Or if all else fails or you want to ask something private or confidential, email [Nick Craig-Wood](mailto:nick@craig-wood.com). Please don't email me requests for help - those are better directed to the forum. Thanks!

rclone(1) User Manual Nick Craig-Wood Nov 19, 2020 RCLONE SYNCS YOUR FILES TO CLOUD STORAGE - About rclone - What can rclone do for you? - What features does rclone have? - What providers does rclone support? - Download - Install - Donate. About rclone Rclone is a command line program to manage files on cloud storage. It is a feature rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols. Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support, and --dry-run protection. It is used at the command line, in scripts or via its API. Users call rclone _"The Swiss army knife of cloud storage"_, and _"Technology indistinguishable from magic"_. Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth, intermittent connections, or subject to quota can be restarted from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk. Virtual backends wrap local and cloud file systems to apply encryption, caching, chunking and joining. Rclone mounts any local, cloud or virtual filesystem as a disk on Windows, macOS, linux and FreeBSD, and also serves these over SFTP, HTTP, WebDAV, FTP and DLNA. Rclone is mature, open source software originally inspired by rsync and written in Go. The friendly support community are familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos include rclone. For the latest version downloading from rclone.org is recommended. Rclone is widely used on Linux, Windows and Mac. Third party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API. Rclone does the heavy lifting of communicating with cloud storage.
What can rclone do for you? Rclone helps you: - Backup (and encrypt) files to cloud storage - Restore (and decrypt) files from cloud storage - Mirror cloud data to other cloud services or locally - Migrate data to cloud, or between cloud storage vendors - Mount multiple, encrypted, cached or diverse cloud storage as a disk - Analyse and account for data held on cloud storage using lsf, ljson, size, ncdu - Union file systems together to present multiple local and/or cloud file systems as one Features - Transfers - MD5, SHA1 hashes are checked at all times for file integrity - Timestamps are preserved on files - Operations can be restarted at any time - Can be to and from network, eg two different cloud providers - Can use multi-threaded downloads to local disk - Copy new or changed files to cloud storage - Sync (one way) to make a directory identical - Move files to cloud storage deleting the local after verification - Check hashes and for missing/extra files - Mount your cloud storage as a network disk - Serve local or remote files over HTTP/WebDav/FTP/SFTP/dlna - Experimental Web based GUI Supported providers (There are many others, built on standard protocols such as WebDAV or S3, that work out of the box.) - 1Fichier - Alibaba Cloud (Aliyun) Object Storage System (OSS) - Amazon Drive - Amazon S3 - Backblaze B2 - Box - Ceph - Citrix ShareFile - C14 - DigitalOcean Spaces - Dreamhost - Dropbox - FTP - Google Cloud Storage - Google Drive - Google Photos - HTTP - Hubic - Jottacloud - IBM COS S3 - Koofr - Mail.ru Cloud - Memset Memstore - Mega - Memory - Microsoft Azure Blob Storage - Microsoft OneDrive - Minio - Nextcloud - OVH - OpenDrive - OpenStack Swift - Oracle Cloud Storage - ownCloud - pCloud - premiumize.me - put.io - QingStor - Rackspace Cloud Files - rsync.net - Scaleway - Seafile - SFTP - StackPath - SugarSync - Tardigrade - Tencent Cloud Object Storage (COS) - Wasabi - WebDAV - Yandex Disk - The local filesystem Links - Home page - GitHub project page for source and bug tracker - Rclone Forum - Downloads INSTALL Rclone is a Go program and comes as a single binary file. Quickstart - Download the relevant binary. - Extract the rclone or rclone.exe binary from the archive - Run rclone config to setup. See rclone config docs for more details. See below for some expanded Linux / macOS instructions. See the Usage section of the docs for how to use rclone, or run rclone -h. Script installation To install rclone on Linux/macOS/BSD systems, run: curl https://rclone.org/install.sh | sudo bash For beta installation, run: curl https://rclone.org/install.sh | sudo bash -s beta Note that this script checks the version of rclone installed first and won't re-download if not needed. Linux installation from precompiled binary Fetch and unpack curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip unzip rclone-current-linux-amd64.zip cd rclone-*-linux-amd64 Copy binary file sudo cp rclone /usr/bin/ sudo chown root:root /usr/bin/rclone sudo chmod 755 /usr/bin/rclone Install manpage sudo mkdir -p /usr/local/share/man/man1 sudo cp rclone.1 /usr/local/share/man/man1/ sudo mandb Run rclone config to setup. See rclone config docs for more details. rclone config macOS installation with brew brew install rclone macOS installation from precompiled binary, using curl To avoid problems with macOS gatekeeper enforcing the binary to be signed and notarized it is enough to download with curl. Download the latest version of rclone. 
cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip Unzip the download and cd to the extracted folder. unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64 Move rclone to your $PATH. You will be prompted for your password. sudo mkdir -p /usr/local/bin sudo mv rclone /usr/local/bin/ (the mkdir command is safe to run, even if the directory already exists). Remove the leftover files. cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip Run rclone config to setup. See rclone config docs for more details. rclone config macOS installation from precompiled binary, using a web browser When downloading a binary with a web browser, the browser will set the macOS gatekeeper quarantine attribute. Starting from Catalina, when attempting to run rclone, a pop-up will appear saying: “rclone” cannot be opened because the developer cannot be verified. macOS cannot verify that this app is free from malware. The simplest fix is to run xattr -d com.apple.quarantine rclone Install with docker The rclone project maintains a docker image for rclone. These images are autobuilt by docker hub from the rclone source based on a minimal Alpine linux image. The :latest tag will always point to the latest stable release. You can use the :beta tag to get the latest build from master. You can also use version tags, eg :1.49.1, :1.49 or :1. $ docker pull rclone/rclone:latest latest: Pulling from rclone/rclone Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11 ... $ docker run --rm rclone/rclone:latest version rclone v1.49.1 - os/arch: linux/amd64 - go version: go1.12.9 There are a few command line options to consider when starting an rclone Docker container from the rclone image. - You need to mount the host rclone config dir at /config/rclone into the Docker container. Because rclone updates tokens inside its config file, and the update process involves a file rename, you need to mount the whole host rclone config dir, not just the single host rclone config file. - You need to mount a host data dir at /data into the Docker container. - By default, the rclone binary inside a Docker container runs with UID=0 (root). As a result, all files created in a run will have UID=0. If your config and data files reside on the host with a non-root UID:GID, you need to pass these on the container start command line. - If you want to access the RC interface (either via the API or the Web UI), you need to set --rc-addr to :5572 in order to connect to it from outside the container. An explanation of why this is necessary is present here. - NOTE: Users running this container with the docker network set to host should probably set it to listen to localhost only, with 127.0.0.1:5572 as the value for --rc-addr - It is possible to use rclone mount inside a userspace Docker container, and expose the resulting fuse mount to the host. The exact docker run options to do that might vary slightly between hosts. See, e.g. the discussion in this thread. You also need to mount the host /etc/passwd and /etc/group for fuse to work inside the container.
Here are some commands tested on an Ubuntu 18.04.3 host: # config on host at ~/.config/rclone/rclone.conf # data on host at ~/data # make sure the config is ok by listing the remotes docker run --rm \ --volume ~/.config/rclone:/config/rclone \ --volume ~/data:/data:shared \ --user $(id -u):$(id -g) \ rclone/rclone \ listremotes # perform mount inside Docker container, expose result to host mkdir -p ~/data/mount docker run --rm \ --volume ~/.config/rclone:/config/rclone \ --volume ~/data:/data:shared \ --user $(id -u):$(id -g) \ --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro \ --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined \ rclone/rclone \ mount dropbox:Photos /data/mount & ls ~/data/mount kill %1 Install from source Make sure you have at least Go 1.11 installed. Download go if necessary. The latest release is recommended. Then git clone https://github.com/rclone/rclone.git cd rclone go build ./rclone version This will leave you a checked out version of rclone you can modify and send pull requests with. If you use make instead of go build then the rclone build will have the correct version information in it. You can also build the latest stable rclone with: go get github.com/rclone/rclone or the latest version (equivalent to the beta) with go get github.com/rclone/rclone@master These will build the binary in $(go env GOPATH)/bin (~/go/bin/rclone by default) after downloading the source to the go module cache. Note - do NOT use the -u flag here. This causes go to try to update the depencencies that rclone uses and sometimes these don't work with the current version of rclone. Installation with Ansible This can be done with Stefan Weichinger's ansible role. Instructions 1. git clone https://github.com/stefangweichinger/ansible-rclone.git into your local roles-directory 2. add the role to the hosts you want rclone installed to: - hosts: rclone-hosts roles: - rclone Configure First, you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config entry for how to find the config file and choose its location.) The easiest way to make the config is to run rclone with the config option: rclone config See the following for detailed instructions for - 1Fichier - Alias - Amazon Drive - Amazon S3 - Backblaze B2 - Box - Cache - Chunker - transparently splits large files for other remotes - Citrix ShareFile - Crypt - to encrypt other remotes - DigitalOcean Spaces - Dropbox - FTP - Google Cloud Storage - Google Drive - Google Photos - HTTP - Hubic - Jottacloud / GetSky.no - Koofr - Mail.ru Cloud - Mega - Memory - Microsoft Azure Blob Storage - Microsoft OneDrive - OpenStack Swift / Rackspace Cloudfiles / Memset Memstore - OpenDrive - Pcloud - premiumize.me - put.io - QingStor - Seafile - SFTP - SugarSync - Tardigrade - Union - WebDAV - Yandex Disk - The local filesystem Usage Rclone syncs a directory tree from one storage system to another. Its syntax is like this Syntax: [options] subcommand Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg "drive:myfolder" to look at "myfolder" in Google drive. You can define as many storage paths as you like in the config file. Please use the -i / --interactive flag while learning rclone to avoid accidental data loss. Subcommands rclone uses a system of subcommands. 
For example rclone ls remote:path # lists a remote rclone copy /local/path remote:path # copies /local/path to the remote rclone sync -i /local/path remote:path # syncs /local/path to the remote RCLONE CONFIG Enter an interactive configuration session. Synopsis Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration. rclone config [flags] Options -h, --help help for config See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. - rclone config create - Create a new remote with name, type and options. - rclone config delete - Delete an existing remote name. - rclone config disconnect - Disconnects user from remote - rclone config dump - Dump the config file as JSON. - rclone config edit - Enter an interactive configuration session. - rclone config file - Show path of configuration file in use. - rclone config password - Update password in an existing remote. - rclone config providers - List in JSON format all the providers and options. - rclone config reconnect - Re-authenticates user with remote. - rclone config show - Print (decrypted) config file, or the config for a single remote. - rclone config update - Update options in an existing remote. - rclone config userinfo - Prints info about logged in user of remote. RCLONE COPY Copy files from source to dest, skipping already copied. Synopsis Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination. Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. If dest:path doesn't exist, it is created and the source:path contents go there. For example rclone copy source:sourcepath dest:destpath Let's say there are two files in sourcepath sourcepath/one.txt sourcepath/two.txt This copies them to destpath/one.txt destpath/two.txt Not to destpath/sourcepath/one.txt destpath/sourcepath/two.txt If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination. See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly. For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this: rclone copy --max-age 24h --no-traverse /path/to/src remote: NOTE: Use the -P/--progress flag to view real-time transfer statistics. NOTE: Use the --dry-run or the --interactive/-i flag to test without copying anything. rclone copy source:path dest:path [flags] Options --create-empty-src-dirs Create empty source dirs on destination after copy -h, --help help for copy See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE SYNC Make source and dest identical, modifying destination only. Synopsis Sync the source to the destination, changing the destination only. 
Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary. IMPORTANT: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag. rclone sync -i SOURCE remote:DESTINATION Note that files in the destination won't be deleted if there were any errors at any point. It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command above if unsure. If dest:path doesn't exist, it is created and the source:path contents go there. NOTE: Use the -P/--progress flag to view real-time transfer statistics rclone sync source:path dest:path [flags] Options --create-empty-src-dirs Create empty source dirs on destination after sync -h, --help help for sync See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE MOVE Move files from source to dest. Synopsis Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server side directory move operation. If no filters are in use and if possible this will server side move source:path into dest:path. After this source:path will no longer exist. Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path then delete the original (if no errors on copy) in source:path. If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag. See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly. IMPORTANT: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag. NOTE: Use the -P/--progress flag to view real-time transfer statistics. rclone move source:path dest:path [flags] Options --create-empty-src-dirs Create empty source dirs on destination after move --delete-empty-src-dirs Delete empty source dirs after move -h, --help help for move See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE DELETE Remove the contents of path. Synopsis Remove the files in path. Unlike purge it obeys include/exclude filters so can be used to selectively delete files. rclone delete only deletes objects but leaves the directory structure alone. If you want to delete a directory and all of its contents use rclone purge If you supply the --rmdirs flag, it will remove all empty directories along with it. Eg delete all files bigger than 100MBytes Check what would be deleted first (use either) rclone --min-size 100M lsl remote:path rclone --dry-run --min-size 100M delete remote:path Then delete rclone --min-size 100M delete remote:path That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes. IMPORTANT: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag. 
rclone delete remote:path [flags] Options -h, --help help for delete --rmdirs rmdirs removes empty directories but leaves root intact See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE PURGE Remove the path and all of its contents. Synopsis Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use delete if you want to selectively delete files. IMPORTANT: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag. rclone purge remote:path [flags] Options -h, --help help for purge See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE MKDIR Make the path if it doesn't already exist. Synopsis Make the path if it doesn't already exist. rclone mkdir remote:path [flags] Options -h, --help help for mkdir See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE RMDIR Remove the path if empty. Synopsis Remove the path. Note that you can't remove a path with objects in it, use purge for that. rclone rmdir remote:path [flags] Options -h, --help help for rmdir See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE CHECK Checks the files in the source and destination match. Synopsis Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination. If you supply the --size-only flag, it will only compare the sizes, not the hashes as well. Use this for a quick check. If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data. If you supply the --one-way flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected. The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name (or stdout if it is -) supplied. What they write is described in the help below. For example --differ will write all paths which are present on both the source and destination but different. The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files. - = path means path was found in source and destination and was identical - - path means path was missing on the source, so only in the destination - + path means path was missing on the destination, so only in the source - * path means path was present in source and destination but different. - ! path means there was an error reading or hashing the source or dest.
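To make that legend concrete, a `--combined` report might look something like this (the file names are made up for illustration):

```
= file-that-matches.txt
- only-in-destination.txt
+ only-in-source.txt
* present-in-both-but-different.txt
! error-reading-or-hashing.txt
```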
--error string Report all files with errors (hashing or reading) to this file -h, --help help for check --match string Report all matching files to this file --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file --one-way Check one way only, source files must exist on remote See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE LS List the objects in the path with size and path. Synopsis Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default. Eg $ rclone ls swift:bucket 60295 bevajer5jef 90613 canole 94467 diwogej7 37600 fubuwic Any of the filtering options can be applied to this command. There are several related list commands - ls to list size and path of objects only - lsl to list modification time, size and path of objects only - lsd to list directories only - lsf to list objects and directories in easy to parse format - lsjson to list objects and directories in JSON format ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable. Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion. The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes). rclone ls remote:path [flags] Options -h, --help help for ls See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE LSD List all directories/containers/buckets in the path. Synopsis Lists the directories in the source path to standard output. Does not recurse by default. Use the -R flag to recurse. This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of the directory, Eg $ rclone lsd swift: 494000 2018-04-26 08:43:20 10000 10000files 65 2018-04-26 08:43:20 1 1File Or $ rclone lsd drive:test -1 2016-10-17 17:41:53 -1 1000files -1 2017-01-03 14:40:54 -1 2500files -1 2017-07-08 14:39:28 -1 4000files If you just want the directory names use "rclone lsf --dirs-only". Any of the filtering options can be applied to this command. There are several related list commands - ls to list size and path of objects only - lsl to list modification time, size and path of objects only - lsd to list directories only - lsf to list objects and directories in easy to parse format - lsjson to list objects and directories in JSON format ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable. Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion. The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes). rclone lsd remote:path [flags] Options -h, --help help for lsd -R, --recursive Recurse into the listing. See the global flags page for global options not listed here. 
SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE LSL List the objects in path with modification time, size and path. Synopsis Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default. Eg $ rclone lsl swift:bucket 60295 2016-06-25 18:55:41.062626927 bevajer5jef 90613 2016-06-25 18:55:43.302607074 canole 94467 2016-06-25 18:55:43.046609333 diwogej7 37600 2016-06-25 18:55:40.814629136 fubuwic Any of the filtering options can be applied to this command. There are several related list commands - ls to list size and path of objects only - lsl to list modification time, size and path of objects only - lsd to list directories only - lsf to list objects and directories in easy to parse format - lsjson to list objects and directories in JSON format ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable. Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion. The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes). rclone lsl remote:path [flags] Options -h, --help help for lsl See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE MD5SUM Produces an md5sum file for all the objects in the path. Synopsis Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces. rclone md5sum remote:path [flags] Options -h, --help help for md5sum See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE SHA1SUM Produces an sha1sum file for all the objects in the path. Synopsis Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces. rclone sha1sum remote:path [flags] Options -h, --help help for sha1sum See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE SIZE Prints the total size and number of objects in remote:path. Synopsis Prints the total size and number of objects in remote:path. rclone size remote:path [flags] Options -h, --help help for size --json format output as JSON See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE VERSION Show the version number. Synopsis Show the version number, the go version and the architecture. Eg $ rclone version rclone v1.41 - os/arch: linux/amd64 - go version: go1.10 If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta. $ rclone version --check yours: 1.42.0.6 latest: 1.42 (released 2018-06-16) beta: 1.42.0.5 (released 2018-06-17) Or $ rclone version --check yours: 1.41 latest: 1.42 (released 2018-06-16) upgrade: https://downloads.rclone.org/v1.42 beta: 1.42.0.5 (released 2018-06-17) upgrade: https://beta.rclone.org/v1.42-005-g56e1e820 rclone version [flags] Options --check Check for new version. -h, --help help for version See the global flags page for global options not listed here. 
SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE CLEANUP Clean up the remote if possible. Synopsis Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes. rclone cleanup remote:path [flags] Options -h, --help help for cleanup See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE DEDUPE Interactively find duplicate filenames and delete/rename them. Synopsis By default dedupe interactively finds files with duplicate names and offers to delete all but one or rename them to be different. This is only useful with backends like Google Drive which can have duplicate file names. It can be run on wrapping backends (eg crypt) if they wrap a backend which supports duplicate file names. In the first pass it will merge directories with the same name. It will do this iteratively until all the identically named directories have been merged. In the second pass, for every group of duplicate file names, it will delete all but one of the identical files it finds, without confirmation. This means that for most duplicated files the dedupe command will not be interactive. dedupe considers files to be identical if they have the same hash. If the backend does not support hashes (eg crypt wrapping Google Drive) then they will never be found to be identical. If you use the --size-only flag then files will be considered identical if they have the same size (any hash will be ignored). This can be useful on crypt backends which do not support hashes. IMPORTANT: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag. Here is an example run. Before - with duplicates $ rclone lsl drive:dupes 6048320 2016-03-05 16:23:16.798000000 one.txt 6048320 2016-03-05 16:23:11.775000000 one.txt 564374 2016-03-05 16:23:06.731000000 one.txt 6048320 2016-03-05 16:18:26.092000000 one.txt 6048320 2016-03-05 16:22:46.185000000 two.txt 1744073 2016-03-05 16:22:38.104000000 two.txt 564374 2016-03-05 16:22:52.118000000 two.txt Now the dedupe session $ rclone dedupe drive:dupes 2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 files with duplicate names one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36") one.txt: 2 duplicates remain 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 s) Skip and do nothing k) Keep just one (choose which in next step) r) Rename all to be different (by changing file.jpg to file-1.jpg) s/k/r> k Enter the number of the file to keep> 1 one.txt: Deleted 1 extra copies two.txt: Found 3 files with duplicate names two.txt: 3 duplicates remain 1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802 s) Skip and do nothing k) Keep just one (choose which in next step) r) Rename all to be different (by changing file.jpg to file-1.jpg) s/k/r> r two-1.txt: renamed from: two.txt two-2.txt: renamed from: two.txt two-3.txt: renamed from: two.txt The result being $ rclone lsl drive:dupes 6048320 2016-03-05 16:23:16.798000000 one.txt 564374 2016-03-05 16:22:52.118000000 two-1.txt 6048320 2016-03-05 16:22:46.185000000 two-2.txt 1744073 2016-03-05 16:22:38.104000000 two-3.txt Dedupe can be run non-interactively using the --dedupe-mode flag or by using an extra parameter with the same value - --dedupe-mode interactive - interactive as above. - --dedupe-mode skip - removes identical files then skips anything left. - --dedupe-mode first - removes identical files then keeps the first one. - --dedupe-mode newest - removes identical files then keeps the newest one. - --dedupe-mode oldest - removes identical files then keeps the oldest one. - --dedupe-mode largest - removes identical files then keeps the largest one. - --dedupe-mode smallest - removes identical files then keeps the smallest one. - --dedupe-mode rename - removes identical files then renames the rest to be different. For example to rename all the identically named photos in your Google Photos directory, do rclone dedupe --dedupe-mode rename "drive:Google Photos" Or rclone dedupe rename "drive:Google Photos" rclone dedupe [mode] remote:path [flags] Options --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename. (default "interactive") -h, --help help for dedupe See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE ABOUT Get quota information from the remote. Synopsis Get quota information from the remote, like bytes used/free/quota and bytes used in the trash. Not supported by all remotes. This will print to stdout something like this: Total: 17G Used: 7.444G Free: 1.315G Trashed: 100.000M Other: 8.241G Where the fields are: - Total: total size available. - Used: total size used. - Free: total amount this user could upload. - Trashed: total amount in the trash. - Other: total amount in other storage (eg Gmail, Google Photos). - Objects: total number of objects in the storage. Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted.
Use the --full flag to see the numbers written out in full, eg Total: 18253611008 Used: 7993453766 Free: 1411001220 Trashed: 104857602 Other: 8849156022 Use the --json flag for computer-readable output, eg { "total": 18253611008, "used": 7993453766, "trashed": 104857602, "other": 8849156022, "free": 1411001220 } rclone about remote: [flags] Options --full Full numbers instead of SI units -h, --help help for about --json Format output as JSON See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE AUTHORIZE Remote authorization. Synopsis Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config. Use the --auth-no-open-browser flag to prevent rclone from opening the auth link in the default browser automatically. rclone authorize [flags] Options --auth-no-open-browser Do not automatically open auth link in default browser -h, --help help for authorize See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE BACKEND Run a backend specific command. Synopsis This runs a backend specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions. You can discover what commands a backend implements by using rclone backend help remote: rclone backend help You can also discover information about the backend (see operations/fsinfo in the remote control docs for more info) using rclone backend features remote: Pass options to the backend command with -o. This should be key=value or key, eg: rclone backend stats remote:path stats -o format=json -o long Pass arguments to the backend by placing them on the end of the line rclone backend cleanup remote:path file1 file2 file3 Note: to run these commands on a running backend, see backend/command in the rc docs. rclone backend remote:path [opts] [flags] Options -h, --help help for backend --json Always output in JSON format. -o, --option stringArray Option in the form name=value or name. See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE CAT Concatenates any files and sends them to stdout. Synopsis rclone cat sends any files to standard output. You can use it like this to output a single file rclone cat remote:path/to/file Or like this to output any file in dir or its subdirectories. rclone cat remote:path/to/dir Or like this to output any .txt files in dir or its subdirectories. rclone --include "*.txt" cat remote:path/to/dir Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1. rclone cat remote:path [flags] Options --count int Only print N characters. (default -1) --discard Discard the output instead of printing. --head int Only print the first N characters. -h, --help help for cat --offset int Start printing at offset N (or from end if -ve). --tail int Only print the last N characters. See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE CONFIG CREATE Create a new remote with name, type and options. Synopsis Create a new remote of name with type and options.
The options should be passed in pairs of key value. For example to make a swift remote of name myremote using auto config you would do: rclone config create myremote swift env_auth true Note that if the config process would normally ask a question the default is taken. Each time that happens rclone will print a message saying how to affect the value taken. If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file. NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command. So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this: rclone config create mydrive drive config_is_local false rclone config create `name` `type` [`key` `value`]* [flags] Options -h, --help help for create --no-obscure Force any passwords not to be obscured. --obscure Force any passwords to be obscured. See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG DELETE Delete an existing remote name. Synopsis Delete an existing remote name. rclone config delete `name` [flags] Options -h, --help help for delete See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG DISCONNECT Disconnects user from remote. Synopsis This disconnects the remote: passed in to the cloud storage system. This normally means revoking the oauth token. To reconnect use "rclone config reconnect". rclone config disconnect remote: [flags] Options -h, --help help for disconnect See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG DUMP Dump the config file as JSON. Synopsis Dump the config file as JSON. rclone config dump [flags] Options -h, --help help for dump See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG EDIT Enter an interactive configuration session. Synopsis Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration. rclone config edit [flags] Options -h, --help help for edit See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG FILE Show path of configuration file in use. Synopsis Show path of configuration file in use. rclone config file [flags] Options -h, --help help for file See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG PASSWORD Update password in an existing remote. Synopsis Update an existing remote's password. The password should be passed in pairs of key value.
For example to set the password of a remote of name myremote you would do: rclone config password myremote fieldname mypassword This command is obsolete now that "config update" and "config create" both support obscuring passwords directly. rclone config password `name` [`key` `value`]+ [flags] Options -h, --help help for password See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG PROVIDERS List in JSON format all the providers and options. Synopsis List in JSON format all the providers and options. rclone config providers [flags] Options -h, --help help for providers See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG RECONNECT Re-authenticates user with remote. Synopsis This reconnects remote: passed in to the cloud storage system. To disconnect the remote use "rclone config disconnect". This normally means going through the interactive oauth flow again. rclone config reconnect remote: [flags] Options -h, --help help for reconnect See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG SHOW Print (decrypted) config file, or the config for a single remote. Synopsis Print (decrypted) config file, or the config for a single remote. rclone config show [<remote>] [flags] Options -h, --help help for show See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG UPDATE Update options in an existing remote. Synopsis Update an existing remote's options. The options should be passed in pairs of key value. For example to update the env_auth field of a remote of name myremote you would do: rclone config update myremote swift env_auth true If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file. NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command. If the remote uses OAuth the token will be updated; if you don't require this, add an extra parameter thus: rclone config update myremote swift env_auth true config_refresh_token false rclone config update `name` [`key` `value`]+ [flags] Options -h, --help help for update --no-obscure Force any passwords not to be obscured. --obscure Force any passwords to be obscured. See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE CONFIG USERINFO Prints info about logged in user of remote. Synopsis This prints the details of the person logged in to the cloud storage system. rclone config userinfo remote: [flags] Options -h, --help help for userinfo --json Format output as JSON See the global flags page for global options not listed here. SEE ALSO - rclone config - Enter an interactive configuration session. RCLONE COPYTO Copy files from source to dest, skipping already copied.
Synopsis If source:path is a file or directory then it copies it to a file or directory named dest:path. This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command. So rclone copyto src dst where src and dst are rclone paths, either remote:path or /path/to/local or C:. This will: if src is file copy it to dst, overwriting an existing file if it exists if src is directory copy it to dst, overwriting existing files if they exist see copy command for full details This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination. NOTE: Use the -P/--progress flag to view real-time transfer statistics. rclone copyto source:path dest:path [flags] Options -h, --help help for copyto See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE COPYURL Copy URL content to dest. Synopsis Download a URL's content and copy it to the destination without saving it in temporary storage. Setting --auto-filename will cause the file name to be retrieved from the URL (after any redirections) and used in the destination path. Setting --no-clobber will prevent overwriting a file on the destination if there is one with the same name. Setting --stdout or making the output file name "-" will cause the output to be written to standard output. rclone copyurl https://example.com dest:path [flags] Options -a, --auto-filename Get the file name from the URL and use it for destination file path -h, --help help for copyurl --no-clobber Prevent overwriting file with same name --stdout Write the output to stdout rather than a file See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE CRYPTCHECK Cryptcheck checks the integrity of a crypted remote. Synopsis rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote. For it to work the underlying remote of the cryptedremote must support some kind of checksum. It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted. Use it like this rclone cryptcheck /path/to/files encryptedremote:path You can use it like this also, but that will involve downloading all the files in remote:path. rclone cryptcheck remote:path encryptedremote:path After it has run it will log the status of the encryptedremote:. If you supply the --one-way flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected. The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name (or stdout if it is -) supplied. What they write is described in the help below. For example --differ will write all paths which are present on both the source and destination but different. The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.
- = path means path was found in source and destination and was identical - - path means path was missing on the source, so only in the destination - + path means path was missing on the destination, so only in the source - * path means path was present in source and destination but different. - ! path means there was an error reading or hashing the source or dest. rclone cryptcheck remote:path cryptedremote:path [flags] Options --combined string Make a combined report of changes to this file --differ string Report all non-matching files to this file --error string Report all files with errors (hashing or reading) to this file -h, --help help for cryptcheck --match string Report all matching files to this file --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file --one-way Check one way only, source files must exist on remote See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE CRYPTDECODE Cryptdecode returns unencrypted file names. Synopsis rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items. If you supply the --reverse flag, it will return encrypted file names. use it like this rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 rclone cryptdecode --reverse encryptedremote: filename1 filename2 rclone cryptdecode encryptedremote: encryptedfilename [flags] Options -h, --help help for cryptdecode --reverse Reverse cryptdecode, encrypts filenames See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE DELETEFILE Remove a single file from remote. Synopsis Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed. rclone deletefile remote:path [flags] Options -h, --help help for deletefile See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE GENAUTOCOMPLETE Output completion script for a given shell. Synopsis Generates a shell completion script for rclone. Run with --help to list the supported shells. Options -h, --help help for genautocomplete See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. - rclone genautocomplete bash - Output bash completion script for rclone. - rclone genautocomplete fish - Output fish completion script for rclone. - rclone genautocomplete zsh - Output zsh completion script for rclone. RCLONE GENAUTOCOMPLETE BASH Output bash completion script for rclone. Synopsis Generates a bash shell autocompletion script for rclone. This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete bash Logout and login again to use the autocompletion scripts, or source them directly . /etc/bash_completion If you supply a command line argument the script will be written there. rclone genautocomplete bash [output_file] [flags] Options -h, --help help for bash See the global flags page for global options not listed here. SEE ALSO - rclone genautocomplete - Output completion script for a given shell. 
RCLONE GENAUTOCOMPLETE FISH Output fish completion script for rclone. Synopsis Generates a fish autocompletion script for rclone. This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete fish Logout and login again to use the autocompletion scripts, or source them directly . /etc/fish/completions/rclone.fish If you supply a command line argument the script will be written there. rclone genautocomplete fish [output_file] [flags] Options -h, --help help for fish See the global flags page for global options not listed here. SEE ALSO - rclone genautocomplete - Output completion script for a given shell. RCLONE GENAUTOCOMPLETE ZSH Output zsh completion script for rclone. Synopsis Generates a zsh autocompletion script for rclone. This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete zsh Logout and login again to use the autocompletion scripts, or source them directly autoload -U compinit && compinit If you supply a command line argument the script will be written there. rclone genautocomplete zsh [output_file] [flags] Options -h, --help help for zsh See the global flags page for global options not listed here. SEE ALSO - rclone genautocomplete - Output completion script for a given shell. RCLONE GENDOCS Output markdown docs for rclone to the directory supplied. Synopsis This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website. rclone gendocs output_directory [flags] Options -h, --help help for gendocs See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE HASHSUM Produces a hashsum file for all the objects in the path. Synopsis Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool. Run without a hash to see the list of supported hashes, eg $ rclone hashsum Supported hashes are: * MD5 * SHA-1 * DropboxHash * QuickXorHash Then $ rclone hashsum MD5 remote:path rclone hashsum remote:path [flags] Options --base64 Output base64 encoded hashsum -h, --help help for hashsum See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE LINK Generate public link to file/folder. Synopsis rclone link will create, retrieve or remove a public link to the given file or folder. rclone link remote:path/to/file rclone link remote:path/to/folder/ rclone link --unlink remote:path/to/folder/ rclone link --expire 1d remote:path/to/file If you supply the --expire flag, it will set the expiration time otherwise it will use the default (100 years). NOTE not all backends support the --expire flag - if the backend doesn't support it then the link returned won't expire. Use the --unlink flag to remove existing public links to the file or folder. NOTE not all backends support "--unlink" flag - those that don't will just ignore it. If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default be created with the least constraints – e.g. no expiry, no password protection, accessible without account. 
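Since the link appears on the last line of the output, it is straightforward to capture in a shell script; this is a minimal sketch (the remote path is illustrative):

link=$(rclone link remote:path/to/file | tail -n 1)

echo "$link"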
rclone link remote:path [flags] Options --expire Duration The amount of time that the link will be valid (default 100y) -h, --help help for link --unlink Remove existing public link to file/folder See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE LISTREMOTES List all the remotes in the config file. Synopsis rclone listremotes lists all the available remotes from the config file. When used with the -l flag it lists the types too. rclone listremotes [flags] Options -h, --help help for listremotes --long Show the type as well as names. See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE LSF List directories and objects in remote:path formatted for parsing. Synopsis List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. Eg $ rclone lsf swift:bucket bevajer5jef canole diwogej7 ferejej3gux/ fubuwic Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: p - path s - size t - modification time h - hash i - ID of object o - Original ID of underlying object m - MimeType of object if known e - encrypted name T - tier of storage if known, eg "Hot" or "Cool" So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last. Eg $ rclone lsf --format "tsp" swift:bucket 2016-06-25 18:55:41;60295;bevajer5jef 2016-06-25 18:55:43;90613;canole 2016-06-25 18:55:43;94467;diwogej7 2018-04-26 08:50:45;0;ferejej3gux/ 2016-06-25 18:55:40;37600;fubuwic If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type. For example to emulate the md5sum command you can use rclone lsf -R --hash MD5 --format hp --separator " " --files-only . Eg $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket 7908e352297f0f530b84a756f188baa3 bevajer5jef cd65ac234e6fea5925974a51cdd865cc canole 03b5341b4f234b9d984d03ad076bae91 diwogej7 8fd37c3810dd660778137ac3a66cc06d fubuwic 99713e14a4c4ff553acaf1930fad985b gixacuh7ku (Though "rclone md5sum ." is an easier way of typing this.) By default the separator is ";", but this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy. Eg $ rclone lsf --separator "," --format "tshp" swift:bucket 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef 2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole 2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7 2018-04-26 08:52:53,0,,ferejej3gux/ 2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic You can output in CSV standard format.
This will escape things in " if they contain , Eg $ rclone lsf --csv --files-only --format ps remote:path test.log,22355 test.sh,449 "this file contains a comma, in the file name.txt",6 Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag. For example to find all the files modified within one day and copy those only (without traversing the whole directory structure): rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files rclone copy --files-from-raw new_files /path/to/local remote:path Any of the filtering options can be applied to this command. There are several related list commands - ls to list size and path of objects only - lsl to list modification time, size and path of objects only - lsd to list directories only - lsf to list objects and directories in easy to parse format - lsjson to list objects and directories in JSON format ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable. Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion. The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes). rclone lsf remote:path [flags] Options --absolute Put a leading / in front of path names. --csv Output in CSV format. -d, --dir-slash Append a slash to directory names. (default true) --dirs-only Only list directories. --files-only Only list files. -F, --format string Output format - see help for details (default "p") --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5") -h, --help help for lsf -R, --recursive Recurse into the listing. -s, --separator string Separator for the items in the format. (default ";") See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE LSJSON List directories and objects in the path in JSON format. Synopsis List directories and objects in the path in JSON format. The output is an array of Items, where each Item looks like this { "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsBucket" : false, "IsDir" : false, "MimeType" : "application/octet-stream", "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", } If --hash is not specified the Hashes property won't be emitted. The types of hash can be specified with the --hash-type parameter (which may be repeated). If --hash-type is set then it implies --hash. If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (eg s3, swift). If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (eg s3, swift). If --encrypted is not specified the Encrypted won't be emitted. 
If --dirs-only is not specified files in addition to directories are returned. If --files-only is not specified directories in addition to the files will be returned. The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name. If the directory is a bucket in a bucket based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true". The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision to which the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown ("2017-05-31T16:15:57+01:00"). The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one per line. Any of the filtering options can be applied to this command. There are several related list commands - ls to list size and path of objects only - lsl to list modification time, size and path of objects only - lsd to list directories only - lsf to list objects and directories in easy to parse format - lsjson to list objects and directories in JSON format ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable. Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion. The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes). rclone lsjson remote:path [flags] Options --dirs-only Show only directories in the listing. -M, --encrypted Show the encrypted names. --files-only Show only files in the listing. --hash Include hashes in the output (may take longer). --hash-type stringArray Show only this hash type (may be repeated). -h, --help help for lsjson --no-mimetype Don't read the mime type (can speed things up). --no-modtime Don't read the modification time (can speed things up). --original Show the ID of the underlying Object. -R, --recursive Recurse into the listing. See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. RCLONE MOUNT Mount the remote as file system on a mountpoint. Synopsis rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. First set up your remote using rclone config. Check it works with rclone ls etc. You can either run mount in foreground mode or background (daemon) mode. Mount runs in foreground mode by default, use the --daemon flag to specify background mode. Background mode is only supported on Linux and OSX, you can only run mount in foreground mode on Windows. On Linux/macOS/FreeBSD Start the mount like this where /path/to/local/mount is an EMPTY EXISTING directory. rclone mount remote:path/to/files /path/to/local/mount Or on Windows like this where X: is an unused drive letter or use a path to a NON-EXISTENT directory.
rclone mount remote:path/to/files X: rclone mount remote:path/to/files C:\path\to\nonexistent\directory When running in background mode the user will have to stop the mount manually (specified below). When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped. The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually. Stopping the mount manually: # Linux fusermount -u /path/to/local/mount # OS X umount /path/to/local/mount NOTE: As of rclone 1.52.2, rclone mount now requires Go version 1.13 or newer on some platforms depending on the underlying FUSE library in use. Installing on Windows To run rclone mount on Windows, you will need to download and install WinFsp. WinFsp is an open source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with cgofuse. Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows. Windows caveats Note that drives created as Administrator are not visible to other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive. The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using the WinFsp.Launcher infrastructure) which creates drives accessible for everyone on the system or alternatively using the nssm service manager. Mount as a network drive By default, rclone will mount the remote as a normal drive. However, you can also mount it as a NETWORK DRIVE (or NETWORK SHARE, as mentioned in some places). Unlike other systems, Windows provides a different filesystem type for network drives. Windows and other programs treat the network drives and fixed/removable drives differently: In network drives, many I/O operations are optimized, as the high latency and low reliability (compared to a normal drive) of a network is expected. Although many people prefer network shares to be mounted as normal system drives, this might cause some issues, such as programs not working as expected or freezes and errors while operating with the mounted remote in Windows Explorer. If you experience any of those, consider mounting rclone remotes as network shares, as Windows expects normal drives to be fast and reliable, while cloud storage is far from that. See also the Limitations section below for more info. Add "--fuse-flag --VolumePrefix=\server\share" to your "mount" command, REPLACING "SHARE" WITH ANY OTHER NAME OF YOUR CHOICE IF YOU ARE MOUNTING MORE THAN ONE REMOTE. Otherwise, the mountpoints will conflict and your mounted filesystems will overlap. Read more about drive mapping. Limitations Without the use of "--vfs-cache-mode" this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the File Caching section for more info.
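For example, a mount intended for applications that need normal read/write semantics might be started like this (the remote and mountpoint are illustrative placeholders, as in the examples above):

rclone mount remote:path/to/files /path/to/local/mount --vfs-cache-mode writes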
The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache. Only supported on Linux, FreeBSD, OS X and Windows at the moment. rclone mount vs rclone sync/copy File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the file caching section for solutions to make mount more reliable. Attribute caching You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries. The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel. In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much memory, rclone not serving files to samba and excessive time listing directories. The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above. If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above. If files don't change on the remote outside of the control of rclone then there is no chance of corruption. This is the same as setting the attr_timeout option in mount.fuse. Filters Note that all the rclone filters can be used to select a subset of the files to be visible in the mount. systemd When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode. chunked reading --vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests. When --vfs-read-chunk-size-limit is also specified and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely. With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. VFS Directory Cache Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --poll-interval duration Time to wait between polling for changes. However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval. You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: kill -SIGHUP $(pidof rclone) If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache: rclone rc vfs/forget Or individual files or directories: rclone rc vfs/forget file=path/to/file dir=path/to/dir VFS File Buffering The --buffer-size flag determines the amount of memory that will be used to buffer data in advance. Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared. This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files. VFS File Caching These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable. The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
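As an illustrative sketch (the size and age limits are arbitrary example values, not recommendations), a mount using the most compatible cache mode with a bounded cache might look like this:

rclone mount remote:path/to/files /path/to/local/mount --vfs-cache-mode full --vfs-cache-max-size 10G --vfs-cache-max-age 24h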
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. --vfs-cache-mode off In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible - Files can't be opened for both read AND write - Files opened for write can't be seeked - Existing files opened for write must have O_TRUNC set - Files open for read with O_TRUNC will be opened write only - Files open for write only will behave as if O_TRUNC was supplied - Open modes O_APPEND, O_TRUNC are ignored - If an upload fails it can't be retried --vfs-cache-mode minimal This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space. These operations are not possible - Files opened for write only can't be seeked - Existing files opened for write must have O_TRUNC set - Files opened for write only will ignore O_APPEND, O_TRUNC - If an upload fails it can't be retried --vfs-cache-mode writes In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried at exponentially increasing intervals up to 1 minute. --vfs-cache-mode full In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well. In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them. This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes. When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk. When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required. IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. VFS Performance These flags may be used to enable/disable features of the VFS for performance or other reasons. In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Mount read-only.
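For example, a read-only mount of an S3 or Swift remote might combine these flags as follows (a sketch only - whether they help depends on the workload):

rclone mount remote:path/to/files /path/to/local/mount --read-only --no-modtime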
When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix   Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration    Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from the case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

    rclone mount remote:path /path/to/mountpoint [flags]

Options

    --allow-non-empty             Allow mounting over a non-empty directory (not Windows).
    --allow-other                 Allow access to other users.
    --allow-root                  Allow access to root user.
    --async-read                  Use asynchronous reads. (default true)
    --attr-timeout duration       Time for which file/directory attributes are cached. (default 1s)
    --daemon                      Run mount as a daemon (background mode).
    --daemon-timeout duration     Time limit for rclone to respond to kernel (not supported by all OSes).
    --debug-fuse                  Debug the FUSE internals - needs -v.
    --default-permissions         Makes kernel enforce access control based on the file mode.
    --dir-cache-time duration     Time to cache directory entries for. (default 5m0s)
    --dir-perms FileMode          Directory permissions (default 0777)
    --file-perms FileMode         File permissions (default 0666)
    --fuse-flag stringArray       Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
    --gid uint32                  Override the gid field set by the filesystem. (default 1000)
    -h, --help                    help for mount
    --max-read-ahead SizeSuffix   The number of bytes that can be prefetched for sequential reads. (default 128k)
    --no-checksum                 Don't compare checksums on up/download.
    --no-modtime                  Don't read/write the modification time (can speed things up).
    --no-seek                     Don't allow seeking in files.
    -o, --option stringArray      Option for libfuse/WinFsp. Repeat if required.
    --poll-interval duration      Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
    --read-only                   Mount read-only.
    --uid uint32                  Override the uid field set by the filesystem. (default 1000)
    --umask int                   Override the permission bits set by the filesystem.
    --vfs-cache-max-age duration   Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix   Max total size of objects in the cache. (default off)
    --vfs-cache-mode CacheMode    Cache mode off|minimal|writes|full (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-case-insensitive        If a file name is not found, find a case insensitive match.
    --vfs-read-ahead SizeSuffix   Extra read ahead over --buffer-size when using cache-mode full.
    --vfs-read-chunk-size SizeSuffix   Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
    --vfs-read-wait duration      Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-back duration     Time to write back files after last use when using cache. (default 5s)
    --vfs-write-wait duration     Time to wait for in-sequence write before giving error. (default 1s)
    --volname string              Set the volume name (not supported by all OSes).
    --write-back-cache            Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

RCLONE MOVETO

Move file or directory from source to dest.

Synopsis

If source:path is a file or directory then it moves it to a file or directory named dest:path.

This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.

So

    rclone moveto src dst

where src and dst are rclone paths, either remote:path or /path/to/local or C:. This will:

    if src is file
        move it to dst, overwriting an existing file if it exists
    if src is directory
        move it to dst, overwriting existing files if they exist

    see move command for full details

This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.

IMPORTANT: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

NOTE: Use the -P/--progress flag to view real-time transfer statistics.
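For example, to rename a single file on a remote (the remote and file names are placeholders):

    rclone moveto remote:path/old-name.txt remote:path/new-name.txt

Since an existing destination will be overwritten, the --dry-run flag mentioned above is a sensible first step.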
    rclone moveto source:path dest:path [flags]

Options

    -h, --help   help for moveto

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

RCLONE NCDU

Explore a remote with a text based user interface.

Synopsis

This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

Here are the keys - press '?' to toggle the help on and off

    ↑,↓ or k,j to Move
    →,l to enter
    ←,h to return
    c toggle counts
    g toggle graph
    n,s,C sort by name,size,count
    d delete file/directory
    y copy current path to clipboard
    Y display current path
    ^L refresh screen
    ? to toggle help on and off
    q/ESC/c-C to quit

This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.

Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously.

    rclone ncdu remote:path [flags]

Options

    -h, --help   help for ncdu

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

RCLONE OBSCURE

Obscure password for use in the rclone config file.

Synopsis

In the rclone config file, human readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is NOT a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eavesdropping" - namely someone seeing a password in the rclone config file by accident.

Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.

This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. Example:

    echo "secretpassword" | rclone obscure -

If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.

If you want to encrypt the config file then please use config file encryption - see rclone config for more info.

    rclone obscure password [flags]

Options

    -h, --help   help for obscure

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

RCLONE RC

Run a command against a running rclone.

Synopsis

This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".

A username and password can be passed in with --user and --pass.

Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.

Arguments should be passed in as parameter=value.

The result will be returned as a JSON object by default.

The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.

The -o/--opt option can be used to set a key "opt" with key, value options in the form "-o key=value" or "-o key". It can be repeated as many times as required.
This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings.

    -o key=value -o key2

Will place this in the "opt" value

    {"key":"value", "key2":""}

The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings.

    -a value -a value2

Will place this in the "arg" value

    ["value", "value2"]

Use --loopback to connect to the rclone instance running "rclone rc". This is very useful for testing commands without having to run an rclone rc server, eg:

    rclone rc --loopback operations/about fs=/

Use "rclone rc" to see a list of all possible commands.

    rclone rc commands parameter [flags]

Options

    -a, --arg stringArray   Argument placed in the "arg" array.
    -h, --help              help for rc
    --json string           Input JSON - use instead of key=value args.
    --loopback              If set connect to this rclone instance not via HTTP.
    --no-output             If set don't output the JSON result.
    -o, --opt stringArray   Option in the form name=value or name placed in the "opt" array.
    --pass string           Password to use to connect to rclone remote control.
    --url string            URL to connect to rclone remote control. (default "http://localhost:5572/")
    --user string           Username to use to connect to rclone remote control.

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

RCLONE RCAT

Copies standard input to file on remote.

Synopsis

rclone rcat reads from standard input (stdin) and copies it to a single remote file.

    echo "hello world" | rclone rcat remote:path/to/file
    ffmpeg - | rclone rcat remote:path/to/file

If the remote file already exists, it will be overwritten.

rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance.

Note that the upload also cannot be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching it locally and then using rclone move to transfer it to the destination.

    rclone rcat remote:path [flags]

Options

    -h, --help   help for rcat

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

RCLONE RCD

Run rclone listening to remote control commands only.

Synopsis

This runs rclone so that it only listens to remote control commands.

This is useful if you are controlling rclone via the rc API.

If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.

See the rc documentation for more info on the rc flags.

    rclone rcd * [flags]

Options

    -h, --help   help for rcd

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

RCLONE RMDIRS

Remove empty directories under the path.
Synopsis

This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it.

If you supply the --leave-root flag, it will not remove the root directory.

This is useful for tidying up remotes that rclone has left a lot of empty directories in.

    rclone rmdirs remote:path [flags]

Options

    -h, --help     help for rmdirs
    --leave-root   Do not remove root directory if empty

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

RCLONE SERVE

Serve a remote over a protocol.

Synopsis

rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg

    rclone serve http remote:

Each subcommand has its own options which you can see in their help.

    rclone serve [opts] [flags]

Options

    -h, --help   help for serve

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.
- rclone serve dlna - Serve remote:path over DLNA
- rclone serve ftp - Serve remote:path over FTP.
- rclone serve http - Serve the remote over HTTP.
- rclone serve restic - Serve the remote for restic's REST API.
- rclone serve sftp - Serve the remote over SFTP.
- rclone serve webdav - Serve remote:path over webdav.

RCLONE SERVE DLNA

Serve remote:path over DLNA

Synopsis

rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.

Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.

Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs.

Use --name to choose the friendly server name, which is by default "rclone (hostname)".

Use --log-trace in conjunction with -vv to enable additional debug logging of all UPNP traffic.

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes.
If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
    --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration            Time to write back files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible:

- Files can't be opened for both read AND write
- Files opened for write can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files open for read with O_TRUNC will be opened write only
- Files open for write only will behave as if O_TRUNC was supplied
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, while using minimal disk space.

These operations are not possible:

- Files opened for write only can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum   Don't compare checksums on up/download.
    --no-modtime    Don't read/write the modification time (can speed things up).
    --no-seek       Don't allow seeking in files.
    --read-only     Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix   Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration    Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from the case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

    rclone serve dlna remote:path [flags]

Options

    --addr string   ip:port or :port to bind the DLNA http server to. (default ":7879")
    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --dir-perms FileMode   Directory permissions (default 0777)
    --file-perms FileMode   File permissions (default 0666)
    --gid uint32    Override the gid field set by the filesystem. (default 1000)
    -h, --help      help for dlna
    --log-trace     enable trace logging of SOAP traffic
    --name string   name of DLNA server
    --no-checksum   Don't compare checksums on up/download.
    --no-modtime    Don't read/write the modification time (can speed things up).
    --no-seek       Don't allow seeking in files.
    --poll-interval duration   Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
    --read-only     Mount read-only.
    --uid uint32    Override the uid field set by the filesystem. (default 1000)
    --umask int     Override the permission bits set by the filesystem. (default 2)
    --vfs-cache-max-age duration   Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix   Max total size of objects in the cache. (default off)
    --vfs-cache-mode CacheMode   Cache mode off|minimal|writes|full (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-case-insensitive   If a file name is not found, find a case insensitive match.
    --vfs-read-ahead SizeSuffix   Extra read ahead over --buffer-size when using cache-mode full.
    --vfs-read-chunk-size SizeSuffix   Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-back duration   Time to write back files after last use when using cache. (default 5s)
    --vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

See the global flags page for global options not listed here.

SEE ALSO

- rclone serve - Serve a remote over a protocol.

RCLONE SERVE FTP

Serve remote:path over FTP.

Synopsis

rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with an ftp client or you can make a remote of type ftp to read and write it.

Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

Authentication

By default this will serve files without needing a login.

You can set a single username and password with the --user and --pass flags.

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are.
Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
    --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration            Time to write back files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible:

- Files can't be opened for both read AND write
- Files opened for write can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files open for read with O_TRUNC will be opened write only
- Files open for write only will behave as if O_TRUNC was supplied
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, while using minimal disk space.
These operations are not possible:

- Files opened for write only can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum   Don't compare checksums on up/download.
    --no-modtime    Don't read/write the modification time (can speed things up).
    --no-seek       Don't allow seeking in files.
    --read-only     Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix   Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration    Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from the case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

Auth Proxy

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.

There is an example program bin/test_proxy.py in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

This config generated must have this extra parameter

- _root - root to use for the backend

And it may have this parameter

- _obscure - comma separated strings for parameters to obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    {
        "user": "me",
        "pass": "mypassword"
    }

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    {
        "user": "me",
        "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    }

And as an example return this on STDOUT

    {
        "type": "sftp",
        "_root": "",
        "_obscure": "pass",
        "user": "me",
        "pass": "mypassword",
        "host": "sftp.example.com"
    }

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given.
Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied user in any way, for example to proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.

    rclone serve ftp remote:path [flags]

Options

    --addr string   IPaddress:Port or :Port to bind server to. (default "localhost:2121")
    --auth-proxy string   A program to use to create the backend from the auth.
    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --dir-perms FileMode   Directory permissions (default 0777)
    --file-perms FileMode   File permissions (default 0666)
    --gid uint32    Override the gid field set by the filesystem. (default 1000)
    -h, --help      help for ftp
    --no-checksum   Don't compare checksums on up/download.
    --no-modtime    Don't read/write the modification time (can speed things up).
    --no-seek       Don't allow seeking in files.
    --pass string   Password for authentication. (empty value allow every password)
    --passive-port string   Passive port range to use. (default "30000-32000")
    --poll-interval duration   Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
    --public-ip string   Public IP address to advertise for passive connections.
    --read-only     Mount read-only.
    --uid uint32    Override the uid field set by the filesystem. (default 1000)
    --umask int     Override the permission bits set by the filesystem. (default 2)
    --user string   User name for authentication. (default "anonymous")
    --vfs-cache-max-age duration   Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix   Max total size of objects in the cache. (default off)
    --vfs-cache-mode CacheMode   Cache mode off|minimal|writes|full (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-case-insensitive   If a file name is not found, find a case insensitive match.
    --vfs-read-ahead SizeSuffix   Extra read ahead over --buffer-size when using cache-mode full.
    --vfs-read-chunk-size SizeSuffix   Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-back duration   Time to write back files after last use when using cache. (default 5s)
    --vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

See the global flags page for global options not listed here.

SEE ALSO

- rclone serve - Serve a remote over a protocol.

RCLONE SERVE HTTP

Serve the remote over HTTP.

Synopsis

rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
You can use the filter flags (eg --include, --exclude) to control what is served.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:

    Parameter     Description
    -----------   -----------
    .Name         The full path of a file/directory.
    .Title        Directory listing of .Name
    .Sort         The current sort used. This is changeable via ?sort= parameter. Sort Options: namedirfirst,name,size,time (default namedirfirst)
    .Order        The current ordering used. This is changeable via ?order= parameter. Order Options: asc,desc (default asc)
    .Query        Currently unused.
    .Breadcrumb   Allows for creating a relative navigation
    -- .Link      The relative to the root link of the Text.
    -- .Text      The Name of the directory.
    .Entries      Information about a specific file/directory.
    -- .URL       The 'url' of an entry.
    -- .Leaf      Currently same as 'URL' but intended to be 'just' the name.
    -- .IsDir     Boolean for if an entry is a directory or not.
    -- .Size      Size in Bytes of the entry.
    -- .ModTime   The UTC timestamp of an entry.

Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

    touch htpasswd
    htpasswd -B htpasswd user
    htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

SSL/TLS

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
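For example, a server listening on all interfaces over HTTPS with a single user could be started like this (the certificate, key, username and password are placeholders for values you supply):

    rclone serve http remote:path --addr :8443 \
        --cert server.crt --key server.key \
        --user me --pass mypassword

All of the flags used here are documented above and in the options below.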
VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
    --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration            Time to write back files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible:

- Files can't be opened for both read AND write
- Files opened for write can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files open for read with O_TRUNC will be opened write only
- Files open for write only will behave as if O_TRUNC was supplied
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, while using minimal disk space.

These operations are not possible:

- Files opened for write only can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum   Don't compare checksums on up/download.
    --no-modtime    Don't read/write the modification time (can speed things up).
    --no-seek       Don't allow seeking in files.
    --read-only     Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix   Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration    Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from the case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

    rclone serve http remote:path [flags]

Options

    --addr string   IPaddress:Port or :Port to bind server to. (default "localhost:8080")
    --baseurl string   Prefix for URLs - leave blank for root.
    --cert string   SSL PEM key (concatenation of certificate and CA certificate)
    --client-ca string   Client certificate authority to verify clients with
    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --dir-perms FileMode Directory permissions (default 0777)
    --file-perms FileMode File permissions (default 0666)
    --gid uint32 Override the gid field set by the filesystem. (default 1000)
    -h, --help help for http
    --htpasswd string htpasswd file - if not provided no authentication is done
    --key string SSL PEM Private key
    --max-header-bytes int Maximum size of request header (default 4096)
    --no-checksum Don't compare checksums on up/download.
    --no-modtime Don't read/write the modification time (can speed things up).
    --no-seek Don't allow seeking in files.
    --pass string Password for authentication.
    --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
    --read-only Mount read-only.
    --realm string realm for authentication (default "rclone")
    --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
    --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
    --template string User Specified Template.
    --uid uint32 Override the uid field set by the filesystem. (default 1000)
    --umask int Override the permission bits set by the filesystem. (default 2)
    --user string User name for authentication.
    --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
    --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
    --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-case-insensitive If a file name not found, find a case insensitive match.
    --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
    --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
    --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
    --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)

See the global flags page for global options not listed here.

SEE ALSO

- rclone serve - Serve a remote over a protocol.

RCLONE SERVE RESTIC

Serve the remote for restic's REST API.

Synopsis

rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

Restic is a command line program for doing backups.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

Setting up rclone for use by restic

First set up a remote for your chosen cloud provider.

Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.

Now start the rclone restic server

    rclone serve restic -v remote:backup

Where you can replace "backup" in the above with whatever path in the remote you wish to use.

By default this will serve on "localhost:8080"; you can change this with use of the "--addr" flag.

You might wish to start this server on boot.
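For example, on a Linux system with systemd, one way to do this is with a small service unit along these lines. This is a minimal sketch, not part of rclone itself - the unit name, binary path and remote path are illustrative and will need adapting to your system:

    # /etc/systemd/system/rclone-restic.service (illustrative name)
    [Unit]
    Description=rclone restic REST server
    After=network-online.target

    [Service]
    # Assumes "remote:" is configured for the user this service runs as
    ExecStart=/usr/bin/rclone serve restic -v remote:backup
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

It could then be enabled and started with systemctl enable --now rclone-restic.service.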
Setting up restic to use rclone

Now you can follow the restic instructions on setting up restic.

Note that you will need restic 0.8.2 or later to interoperate with rclone.

For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.

For example:

    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
    $ export RESTIC_PASSWORD=yourpassword
    $ restic init
    created restic backend 8b1a4b56ae at rest:http://localhost:8080/
    Please note that knowledge of your password is required to access the repository.
    Losing your password means that your data is irrecoverably lost.
    $ restic backup /path/to/files/to/backup
    scan [/path/to/files/to/backup]
    scanned 189 directories, 312 files in 0:00
    [0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
    duration: 0:00
    snapshot 45c8fdd8 saved

Multiple repositories

Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these MUST end with /. Eg

    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
    # backup user1 stuff
    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
    # backup user2 stuff

Private repositories

The "--private-repos" flag can be used to limit users to repositories starting with a path of /<username>/.

Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:

    Parameter     Description
    -----------   -----------------------------------------------------------
    .Name         The full path of a file/directory.
    .Title        Directory listing of .Name
    .Sort         The current sort used. This is changeable via ?sort= parameter.
                  Sort Options: namedirfirst,name,size,time (default namedirfirst)
    .Order        The current ordering used. This is changeable via ?order= parameter.
                  Order Options: asc,desc (default asc)
    .Query        Currently unused.
    .Breadcrumb   Allows for creating a relative navigation
    -- .Link      The relative to the root link of the Text.
    -- .Text      The Name of the directory.
    .Entries      Information about a specific file/directory.
    -- .URL       The 'url' of an entry.
    -- .Leaf      Currently same as 'URL' but intended to be 'just' the name.
    -- .IsDir     Boolean for if an entry is a directory or not.
    -- .Size      Size in Bytes of the entry.
    -- .ModTime   The UTC timestamp of an entry.
Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

    touch htpasswd
    htpasswd -B htpasswd user
    htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

SSL/TLS

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    rclone serve restic remote:path [flags]

Options

    --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
    --append-only disallow deletion of repository data
    --baseurl string Prefix for URLs - leave blank for root.
    --cert string SSL PEM key (concatenation of certificate and CA certificate)
    --client-ca string Client certificate authority to verify clients with
    -h, --help help for restic
    --htpasswd string htpasswd file - if not provided no authentication is done
    --key string SSL PEM Private key
    --max-header-bytes int Maximum size of request header (default 4096)
    --pass string Password for authentication.
    --private-repos users can only access their private repo
    --realm string realm for authentication (default "rclone")
    --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
    --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
    --stdio run an HTTP2 server on stdin/stdout
    --template string User Specified Template.
    --user string User name for authentication.

See the global flags page for global options not listed here.

SEE ALSO

- rclone serve - Serve a remote over a protocol.

RCLONE SERVE SFTP

Serve the remote over SFTP.

Synopsis

rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.

You can use the filter flags (eg --include, --exclude) to control what is served.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.

Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that it can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend.

If you don't supply a --key then rclone will generate one and cache it for later use.

By default the server binds to localhost:2022 - if you want it to be reachable externally then supply "--addr :2022" for example.
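For example, to serve a remote with a username and password and then connect to it with a standard OpenSSH sftp client, you could do something like this (the user name and password here are illustrative):

    rclone serve sftp --user me --pass mypassword remote:path

Then, from another terminal:

    sftp -P 2022 me@localhost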
Note that the default of "--vfs-cache-mode off" is fine for the rclone sftp backend, but it may not be with other SFTP clients.

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
    --poll-interval duration Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string Directory rclone will use for caching.
    --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache.
The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

- Files can't be opened for both read AND write
- Files opened for write can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files open for read with O_TRUNC will be opened write only
- Files open for write only will behave as if O_TRUNC was supplied
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

These operations are not possible

- Files opened for write only can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.
In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum Don't compare checksums on up/download.
    --no-modtime Don't read/write the modification time (can speed things up).
    --no-seek Don't allow seeking in files.
    --read-only Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

Auth Proxy

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests.
This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together; if --auth-proxy is set the authorized keys option will be ignored.

There is an example program bin/test_proxy.py in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

This config generated must have this extra parameter

- _root - root to use for the backend

And it may have this parameter

- _obscure - comma separated strings for parameters to obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    {
        "user": "me",
        "pass": "mypassword"
    }

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    {
        "user": "me",
        "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    }

And as an example return this on STDOUT

    {
        "type": "sftp",
        "_root": "",
        "_obscure": "pass",
        "user": "me",
        "pass": "mypassword",
        "host": "sftp.example.com"
    }

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied user in any way, for example to proxy many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.

    rclone serve sftp remote:path [flags]

Options

    --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2022")
    --auth-proxy string A program to use to create the backend from the auth.
    --authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys")
    --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
    --dir-perms FileMode Directory permissions (default 0777)
    --file-perms FileMode File permissions (default 0666)
    --gid uint32 Override the gid field set by the filesystem. (default 1000)
    -h, --help help for sftp
    --key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
    --no-auth Allow connections with no authentication if set.
    --no-checksum Don't compare checksums on up/download.
    --no-modtime Don't read/write the modification time (can speed things up).
    --no-seek Don't allow seeking in files.
    --pass string Password for authentication.
    --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
    --read-only Mount read-only.
    --uid uint32 Override the uid field set by the filesystem. (default 1000)
    --umask int Override the permission bits set by the filesystem. (default 2)
    --user string User name for authentication.
    --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
    --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
    --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-case-insensitive If a file name not found, find a case insensitive match.
    --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
    --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
    --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
    --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)

See the global flags page for global options not listed here.

SEE ALSO

- rclone serve - Serve a remote over a protocol.

RCLONE SERVE WEBDAV

Serve remote:path over webdav.

Synopsis

rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.

Webdav options

--etag-hash

This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.

If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1". Use "rclone hashsum" to see the full list.

Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:
    Parameter     Description
    -----------   -----------------------------------------------------------
    .Name         The full path of a file/directory.
    .Title        Directory listing of .Name
    .Sort         The current sort used. This is changeable via ?sort= parameter.
                  Sort Options: namedirfirst,name,size,time (default namedirfirst)
    .Order        The current ordering used. This is changeable via ?order= parameter.
                  Order Options: asc,desc (default asc)
    .Query        Currently unused.
    .Breadcrumb   Allows for creating a relative navigation
    -- .Link      The relative to the root link of the Text.
    -- .Text      The Name of the directory.
    .Entries      Information about a specific file/directory.
    -- .URL       The 'url' of an entry.
    -- .Leaf      Currently same as 'URL' but intended to be 'just' the name.
    -- .IsDir     Boolean for if an entry is a directory or not.
    -- .Size      Size in Bytes of the entry.
    -- .ModTime   The UTC timestamp of an entry.

Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

    touch htpasswd
    htpasswd -B htpasswd user
    htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

SSL/TLS

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
    --poll-interval duration Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are.
Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string Directory rclone will use for caching.
    --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

- Files can't be opened for both read AND write
- Files opened for write can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files open for read with O_TRUNC will be opened write only
- Files open for write only will behave as if O_TRUNC was supplied
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.
These operations are not possible

- Files opened for write only can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum Don't compare checksums on up/download.
    --no-modtime Don't read/write the modification time (can speed things up).
    --no-seek Don't allow seeking in files.
    --read-only Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

Auth Proxy

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests.

This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together; if --auth-proxy is set the authorized keys option will be ignored.

There is an example program bin/test_proxy.py in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

This config generated must have this extra parameter

- _root - root to use for the backend

And it may have this parameter

- _obscure - comma separated strings for parameters to obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    {
        "user": "me",
        "pass": "mypassword"
    }

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    {
        "user": "me",
        "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    }

And as an example return this on STDOUT

    {
        "type": "sftp",
        "_root": "",
        "_obscure": "pass",
        "user": "me",
        "pass": "mypassword",
        "host": "sftp.example.com"
    }

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given.
Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied user in any way, for example to proxy many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.

    rclone serve webdav remote:path [flags]

Options

    --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
    --auth-proxy string A program to use to create the backend from the auth.
    --baseurl string Prefix for URLs - leave blank for root.
    --cert string SSL PEM key (concatenation of certificate and CA certificate)
    --client-ca string Client certificate authority to verify clients with
    --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
    --dir-perms FileMode Directory permissions (default 0777)
    --disable-dir-list Disable HTML directory list on GET request for a directory
    --etag-hash string Which hash to use for the ETag, or auto or blank for off
    --file-perms FileMode File permissions (default 0666)
    --gid uint32 Override the gid field set by the filesystem. (default 1000)
    -h, --help help for webdav
    --htpasswd string htpasswd file - if not provided no authentication is done
    --key string SSL PEM Private key
    --max-header-bytes int Maximum size of request header (default 4096)
    --no-checksum Don't compare checksums on up/download.
    --no-modtime Don't read/write the modification time (can speed things up).
    --no-seek Don't allow seeking in files.
    --pass string Password for authentication.
    --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
    --read-only Mount read-only.
    --realm string realm for authentication (default "rclone")
    --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
    --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
    --template string User Specified Template.
    --uid uint32 Override the uid field set by the filesystem. (default 1000)
    --umask int Override the permission bits set by the filesystem. (default 2)
    --user string User name for authentication.
    --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
    --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
    --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-case-insensitive If a file name not found, find a case insensitive match.
    --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
    --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
    --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
    --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)

See the global flags page for global options not listed here.

SEE ALSO

- rclone serve - Serve a remote over a protocol.

RCLONE SETTIER

Changes storage class/tier of objects in remote.

Synopsis

rclone settier changes storage tier or class at remote if supported. A few cloud storage services provide different storage classes on objects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive, Google Cloud Storage - Regional Storage, Nearline, Coldline etc.

Note that certain tier changes make objects unavailable to access immediately. For example tiering to archive in Azure Blob storage puts objects in a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, S3 to Glacier makes objects inaccessible.

You can use it to tier a single object

    rclone settier Cool remote:path/file

Or use rclone filters to set the tier on only specific files

    rclone --include "*.txt" settier Hot remote:path/dir

Or just provide a remote directory and all files in the directory will be tiered

    rclone settier tier remote:path/dir

    rclone settier tier remote:path [flags]

Options

    -h, --help help for settier

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

RCLONE TOUCH

Create new file or change file modification time.

Synopsis

Set the modification time on object(s) as specified by remote:path to have the current time.

If remote:path does not exist then a zero sized object will be created unless the --no-create flag is provided.

If --timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of:

- 'YYMMDD' - eg. 17.10.30
- 'YYYY-MM-DDTHH:MM:SS' - eg. 2006-01-02T15:04:05
- 'YYYY-MM-DDTHH:MM:SS.SSS' - eg. 2006-01-02T15:04:05.123456789

Note that --timestamp is in UTC; if you want local time then add the --localtime flag.

    rclone touch remote:path [flags]

Options

    -h, --help help for touch
    --localtime Use localtime for timestamp, not UTC.
    -C, --no-create Do not create the file if it does not exist.
    -t, --timestamp string Use specified time instead of the current time of day.

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

RCLONE TREE

List the contents of the remote in a tree like fashion.

Synopsis

rclone tree lists the contents of a remote in a similar way to the unix tree command.

For example

    $ rclone tree remote:path
    /
    ├── file1
    ├── file2
    ├── file3
    └── subdir
        ├── file4
        └── file5

    1 directories, 5 files

You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.

The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options.

    rclone tree remote:path [flags]

Options

    -a, --all All files are listed (list . files too).
    -C, --color Turn colorization on always.
    -d, --dirs-only List directories only.
    --dirsfirst List directories before files (-U disables).
    --full-path Print the full path prefix for each file.
    -h, --help help for tree
    --human Print the size in a more human readable way.
    --level int Descend only level directories deep.
    -D, --modtime Print the date of last modification.
    --noindent Don't print indentation lines.
    --noreport Turn off file/directory count at end of tree listing.
    -o, --output string Output to file instead of stdout.
    -p, --protections Print the protections for each file.
    -Q, --quote Quote filenames with double quotes.
    -s, --size Print the size in bytes of each file.
    --sort string Select sort: name,version,size,mtime,ctime.
    --sort-ctime Sort files by last status change time.
    -t, --sort-modtime Sort files by last modification time.
    -r, --sort-reverse Reverse the order of the sort.
    -U, --unsorted Leave files unsorted.
    --version Sort files alphanumerically by version.

See the global flags page for global options not listed here.

SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

Copying single files

rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.

For example, suppose you have a remote with a file in called test.jpg, then you could copy just that file like this

    rclone copy remote:test.jpg /tmp/download

The file test.jpg will be placed inside /tmp/download.

This is equivalent to specifying

    rclone copy --files-from /tmp/files remote: /tmp/download

Where /tmp/files contains the single line

    test.jpg

It is recommended to use copy when copying individual files, not sync. They have pretty much the same effect but copy will use a lot less memory.

Syntax of remote paths

The syntax of the paths passed to the rclone command are as follows.

/path/to/dir

This refers to the local file system.

On Windows only \ may be used instead of / in local paths ONLY, non local paths must use /.

These paths needn't start with a leading / - if they don't then they will be relative to the current directory.

remote:path/to/dir

This refers to a directory path/to/dir on remote: as defined in the config file (configured with rclone config).

remote:/path/to/dir

On most backends this refers to the same directory as remote:path/to/dir and that format should be preferred. On a very small number of remotes (FTP, SFTP, Dropbox for business) this will refer to a different directory. On these, paths without a leading / will refer to your "home" directory and paths with a leading / will refer to the root.

:backend:path/to/dir

This is an advanced form for creating remotes on the fly. backend should be the name or prefix of a backend (the type in the config file) and all the configuration for the backend should be provided on the command line (or in environment variables).

Here are some examples:

    rclone lsd --http-url https://pub.rclone.org :http:

To list all the directories in the root of https://pub.rclone.org/.

    rclone lsf --http-url https://example.com :http:path/to/dir

To list files and directories in https://example.com/path/to/dir

    rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir

To copy files and directories in https://example.com/path/to/dir to /tmp/dir.

    rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir

To copy files and directories from example.com in the relative directory path/to/dir to /tmp/dir using sftp.

Valid remote names

- Remote names may only contain 0-9, A-Z, a-z, _, - and space.
- Remote names may not start with -.

Quoting and the shell

When you are typing commands to your computer you are using something called the command line shell.
This interprets various characters in an OS specific way.

Here are some gotchas which may help users unfamiliar with the shell rules

Linux / OSX

If your names have spaces or shell metacharacters (eg *, ?, $, ', " etc) then you must quote them. Use single quotes ' by default.

    rclone copy 'Important files?' remote:backup

If you want to send a ' you will need to use ", eg

    rclone copy "O'Reilly Reviews" remote:backup

The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.

Windows

If your names have spaces in them you need to put them in ", eg

    rclone copy "E:\folder name\folder name\folder name" remote:backup

If you are using the root directory on its own then don't quote it (see #464 for why), eg

    rclone copy E:\ remote:backup

Copying files or directories with : in the names

rclone uses : to mark a remote name. This is, however, a valid filename component in non-Windows OSes. The remote name parser will only search for a : up to the first / so if you need to act on a file or directory like this then use the full path starting with a /, or use ./ as a current directory prefix.

So to sync a directory called sync:me to a remote called remote: use

    rclone sync -i ./sync:me remote:path

or

    rclone sync -i /full/path/to/sync:me remote:path

Server Side Copy

Most remotes (but not all - see the overview) support server side copy.

This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place. Eg

    rclone copy s3:oldbucket s3:newbucket

Will copy the contents of oldbucket to newbucket without downloading and re-uploading.

Remotes which don't support server side copy WILL download and re-upload in this case.

Server side copies are used with sync and copy and will be identified in the log when using the -v flag. The move command may also use them if the remote doesn't support server side move directly. This is done by issuing a server side copy then a delete which is much quicker than a download and re-upload.

Server side copies will only be attempted if the remote names are the same.

This can be used when scripting to make aged backups efficiently, eg

    rclone sync -i remote:current-backup remote:previous-backup
    rclone sync -i /path/to/files remote:current-backup

Options

Rclone has a number of options to control its behaviour.

Options that take parameters can have the values passed in two ways, --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.

Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30, 2**40 and 2**50 respectively.
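For example, TIME and SIZE values can be combined on the command line like this (the flags are real but the values and paths are purely illustrative):

    rclone copy --min-size 50k --max-size 1G --contimeout 4m30s remote:src /tmp/dst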
--backup-dir=DIR

When using sync, copy or move any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.

If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.

The remote in use must support server side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory.

For example

    rclone sync -i /path/to/local remote:current --backup-dir remote:old

will sync /path/to/local to remote:current, but any files which would have been updated or deleted will be stored in remote:old.

If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today's date.

See --compare-dest and --copy-dest.

--bind string

Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.

--bwlimit=BANDWIDTH_SPEC

This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable.

Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0 which means to not limit bandwidth.

For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M

It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH... where: WEEKDAY is an optional element. It can be written as a whole word or using only the first 3 characters. HH:MM is an hour from 00:00 to 23:59.

An example of a typical timetable to avoid link saturation during daytime working hours could be:

    --bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"

In this example, the transfer bandwidth will be set every day to 512kBytes/sec at 8am. At noon, it will rise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.

An example of a timetable with WEEKDAY could be:

    --bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"

This means that the transfer bandwidth will be set to 512kBytes/sec on Monday. It will rise to 10Mbytes/s before the end of Friday. At 10:00 on Saturday it will be set to 1Mbyte/s. From 20:00 on Sunday it will be unlimited.

Timeslots without a weekday are extended to the whole week. So this example:

    --bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"

Is equal to this:

    --bwlimit "Mon-00:00,512 Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"

Bandwidth limits only apply to the data transfer. They don't apply to the bandwidth of the directory listings etc.

Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M parameter for rclone.
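Putting that together in a full command (the paths here are illustrative):

    rclone sync --bwlimit 0.625M /path/to/source remote:dest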
On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled by sending a SIGUSR2 signal to rclone. This allows you to remove the limit from a long running rclone transfer and to restore it back to the value specified with --bwlimit quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:

    kill -SIGUSR2 $(pidof rclone)

If you configure rclone with a remote control then you can change the bwlimit dynamically:

    rclone rc core/bwlimit rate=1M

--bwlimit-file=BANDWIDTH_SPEC

This option controls the per file bandwidth limit. For the options see the --bwlimit flag.

For example, use this to allow no transfer to be faster than 1MByte/s

    --bwlimit-file 1M

This can be used in conjunction with --bwlimit.

Note that if a schedule is provided the file will use the schedule in effect at the start of the transfer.

--buffer-size=SIZE

Use this sized buffer to speed up file transfers. Each --transfer will use this much memory for buffering.

When using mount or cmount each open file descriptor will use this much memory for buffering. See the mount documentation for more details.

Set to 0 to disable the buffering for the minimum memory usage.

Note that the memory allocation of the buffers is influenced by the --use-mmap flag.

--check-first

If this flag is set then in a sync, copy or move, rclone will do all the checks to see whether files need to be transferred before doing any of the transfers. Normally rclone would start running transfers as soon as possible.

This flag can be useful on IO limited systems where transfers interfere with checking.

Using this flag can use more memory as it effectively sets --max-backlog to infinite. This means that all the info on the objects to transfer is held in memory before the transfers start.

--checkers=N

The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.

The default is to run 8 checkers in parallel.

-c, --checksum

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.

This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.

This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.

Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.

When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.

--compare-dest=DIR

When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is NOT copied from source. This is useful to copy just files that have changed since the last backup.

You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.

See --copy-dest and --backup-dir.

--config=CONFIG_FILE

Specify the location of the rclone config file.

Normally the config file is in your home directory as a file called .config/rclone/rclone.conf (or .rclone.conf if created with an older version).
If $XDG_CONFIG_HOME is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf.

If there is a file rclone.conf in the same directory as the rclone executable it will be preferred. This file must be created manually for Rclone to use it; it will never be created automatically.

If you run rclone config file you will see where the default location is for you.

Use this flag to override the config location, eg rclone --config=".myconfig" config.

--contimeout=TIME

Set the connection timeout. This should be in go time format which looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.

The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default.

--copy-dest=DIR

When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is server side copied from DIR to the destination. This is useful for incremental backup.

The remote in use must support server side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.

See --compare-dest and --backup-dir.

--dedupe-mode MODE

Mode to run the dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.

--disable FEATURE,FEATURE,...

This disables a comma separated list of optional features. For example to disable server side move and server side copy use:

    --disable move,copy

The features can be put in any case.

To see a list of which features can be disabled use:

    --disable help

See the overview features and optional features to get an idea of which feature does what.

This flag can be useful for debugging and in exceptional circumstances (eg Google Drive limiting the total volume of Server Side Copies to 100GB/day).

-n, --dry-run

Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync command which deletes files in the destination.

--expect-continue-timeout=TIME

This specifies the amount of time to wait for a server's first response headers after fully writing the request headers if the request has an "Expect: 100-continue" header. Not all backends support using this.

Zero means no timeout and causes the body to be sent immediately, without waiting for the server to approve. This time does not include the time to send the request header.

The default is 1s. Set to 0 to disable.

--error-on-no-transfer

By default, rclone will exit with return code 0 if there were no errors.

This option allows rclone to return exit code 9 if no files were transferred between the source and destination. This allows using rclone in scripts, and triggering follow-on actions if data was copied, or skipping if not.

NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check and adjust your scripts accordingly!

--header

Add an HTTP header for all transactions. The flag can be repeated to add multiple headers.

If you want to add headers only for uploads use --header-upload and if you want to add headers only for downloads use --header-download.

This flag is supported for all HTTP based backends even those not supported by --header-upload and --header-download so may be used as a workaround for those with care.
    rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes"

--header-download

Add an HTTP header for all download transactions. The flag can be repeated to add multiple headers.

    rclone sync -i s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"

See the GitHub issue here for currently supported backends.

--header-upload

Add an HTTP header for all upload transactions. The flag can be repeated to add multiple headers.

    rclone sync -i ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"

See the GitHub issue here for currently supported backends.

--ignore-case-sync

Using this option will cause rclone to ignore the case of the files when synchronizing so files will not be copied/synced when the existing filenames are the same, even if the casing is different.

--ignore-checksum

Normally rclone will check that the checksums of transferred files match, and give an error "corrupted on transfer" if they don't.

You can use this option to skip that check. You should only use it if you have had the "corrupted on transfer" error message and you are sure you might want to transfer potentially corrupted data.

--ignore-existing

Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.

While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.

--ignore-size

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum is set then it only checks the checksum.

It will also cause rclone to skip verifying the sizes are the same after transfer.

This can be useful for transferring files to and from OneDrive which occasionally misreports the size of image files (see #399 for more info).

-I, --ignore-times

Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.

Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).

--immutable

Treat source and destination files as immutable and disallow modification.

With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified.

Note that only commands which transfer files (e.g. sync, copy, move) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g. sync, move). Use copy --immutable if it is desired to avoid deletion as well as modification.

This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.

-i / --interactive

This flag can be used to tell rclone that you wish to be asked for manual confirmation before destructive operations.

It is RECOMMENDED that you use this flag while learning rclone especially with rclone sync.

For example

    $ rclone delete -i /tmp/dir
    rclone: delete "important-file.txt"?
    y) Yes, this is OK (default)
    n) No, skip this
    s) Skip all delete operations with no more questions
    !) Do all delete operations with no more questions
    q) Exit rclone now.
    y/n/s/!/q> n

The options mean

- y: YES, this operation should go ahead. You can also press Return for this to happen. You'll be asked every time unless you choose s or !.
- n: NO, do not do this operation. You'll be asked every time unless you choose s or !.
- s: SKIP all the following operations of this type with no more questions. This takes effect until rclone exits. If there are any different kinds of operations you'll be prompted for them.
- !: DO ALL the following operations with no more questions. Useful if you've decided that you don't mind rclone doing that kind of operation. This takes effect until rclone exits. If there are any different kinds of operations you'll be prompted for them.
- q: QUIT rclone now, just in case!

--leave-root

During rmdirs it will not remove the root directory, even if it's empty.

--log-file=FILE

Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

If FILE exists then rclone will append to it.

Note that if you are using the logrotate program to manage rclone's logs, then you should use the copytruncate option as rclone doesn't have a signal to rotate logs.

--log-format LIST

Comma separated list of log format options. date, time, microseconds, longfile, shortfile, UTC. The default is "date,time".

--log-level LEVEL

This sets the log level for rclone. The default log level is NOTICE.

DEBUG is equivalent to -vv. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.

INFO is equivalent to -v. It outputs information about each transfer and prints stats once a minute by default.

NOTICE is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.

ERROR is equivalent to -q. It only outputs error messages.

--use-json-log

This switches the log format to JSON for rclone. The fields of the JSON log are level, msg, source and time.

--low-level-retries NUMBER

This controls the number of low level retries rclone does.

A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v flag.

This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries flag) quicker.

Disable low level retries with --low-level-retries 1.

--max-backlog=N

This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred.

This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use on the order of N kB of memory when the backlog is in use.

Setting this large allows rclone to calculate how many files are pending more accurately, give a more accurate estimated finish time and make --order-by work more accurately.

Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.

Setting this to a negative number will make the backlog as large as possible.

--max-delete=N

This tells rclone not to delete more than N files.
If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.

--max-depth=N

This modifies the recursion depth for all the commands except purge.

So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory. Using --max-depth 2 means you will see all the files in the first two directory levels and so on.

For historical reasons the lsd command defaults to using a --max-depth of 1 - you can override this with the command line flag.

You can use this command to disable recursion (with --max-depth 1).

Note that if you use this with sync and --delete-excluded the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run if you are not sure what will happen.

--max-duration=TIME

Rclone will stop scheduling new transfers when it has run for the duration specified. Defaults to off.

When the limit is reached any existing transfers will complete.

Rclone won't exit with an error if the transfer limit is reached.

--max-transfer=SIZE

Rclone will stop transferring when it has reached the size specified. Defaults to off.

When the limit is reached all transfers will stop immediately.

Rclone will exit with exit code 8 if the transfer limit is reached.

--cutoff-mode=hard|soft|cautious

This modifies the behavior of --max-transfer. Defaults to --cutoff-mode=hard.

Specifying --cutoff-mode=hard will stop transferring immediately when Rclone reaches the limit.

Specifying --cutoff-mode=soft will stop starting new transfers when Rclone reaches the limit.

Specifying --cutoff-mode=cautious will try to prevent Rclone from reaching the limit.

--modify-window=TIME

When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.

The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.

This command line flag allows you to override that computed default.

--multi-thread-cutoff=SIZE

When downloading files to the local backend above this size, rclone will use multiple threads to download the file (default 250M).

Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on unix or NTSetInformationFile on Windows, both of which take no time) then each thread writes directly into the file at the correct place. This means that rclone won't create fragmented or sparse files and there won't be any assembly time at the end of the transfer.

The number of threads used to download is controlled by --multi-thread-streams.

Use -vv if you wish to see info about the threads.

This will work with the sync/copy/move commands and friends copyto/moveto. Multi thread downloads will be used with rclone mount and rclone serve if --vfs-cache-mode is set to writes or above.

NB that this ONLY works for a local destination but will work with any source.

NB that multi thread copies are disabled for local to local copies as they are faster without unless --multi-thread-streams is set explicitly.

NB on Windows using multi-thread downloads will cause the resulting files to be sparse.
Use --local-no-sparse to disable sparse files (which may cause long delays at the start of downloads) or disable multi-thread downloads with --multi-thread-streams 0.

--multi-thread-streams=N

When using multi thread downloads (see above --multi-thread-cutoff) this sets the maximum number of streams to use. Set to 0 to disable multi thread downloads (Default 4).

Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams Rclone divides the size of the file by the --multi-thread-cutoff and rounds up, up to the maximum set with --multi-thread-streams.

So if --multi-thread-cutoff 250MB and --multi-thread-streams 4 are in effect (the defaults):

- 0MB..250MB files will be downloaded with 1 stream
- 250MB..500MB files will be downloaded with 2 streams
- 500MB..750MB files will be downloaded with 3 streams
- 750MB+ files will be downloaded with 4 streams

--no-check-dest

The --no-check-dest flag can be used with move or copy and it causes rclone not to check the destination at all when copying files. This means that:

- the destination is not listed, minimising the API calls
- files are always transferred
- this can cause duplicates on remotes which allow it (eg Google Drive)
- --retries 1 is recommended otherwise you'll transfer everything again on a retry

This flag is useful to minimise the transactions if you know that none of the files are on the destination.

This is a specialized flag which should be ignored by most users!

--no-gzip-encoding

Don't set Accept-Encoding: gzip. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with Content-Encoding: gzip but you uploaded compressed files.

There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.

--no-traverse

The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands. --no-traverse is not compatible with sync and will be ignored if you supply it with sync.

If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.

However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use --no-traverse.

See rclone copy for an example of how to use it.

--no-unicode-normalization

Don't normalize unicode characters in filenames during the sync routine.

Sometimes, an operating system will store filenames containing unicode parts in their decomposed form (particularly macOS). Some cloud storage systems will then recompose the unicode, resulting in duplicate files if the data is ever copied back to a local filesystem.

Using this flag will disable that functionality, treating each unicode character as unique. For example, by default é and é will be normalized into the same character. With --no-unicode-normalization they will be treated as unique characters.

--no-update-modtime

When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.

This can be used if the remote is being synced with another tool also (eg the Google Drive client).
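As a sketch of the situation just described (the remote name gdrive: and the paths are hypothetical): if the Google Drive client also manages the destination, this copies files while leaving the existing remote modification times alone:

    # gdrive: and the paths are placeholders
    rclone copy --no-update-modtime /path/to/photos gdrive:photos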
--order-by string

The --order-by flag controls the order in which files in the backlog are processed in rclone sync, rclone copy and rclone move.

The order by string is constructed like this. The first part describes what aspect is being measured:

- size - order by the size of the files
- name - order by the full path of the files
- modtime - order by the modification date of the files

This can have a modifier appended with a comma:

- ascending or asc - order so that the smallest (or oldest) is processed first
- descending or desc - order so that the largest (or newest) is processed first
- mixed - order so that the smallest is processed first for some threads and the largest for others

If the modifier is mixed then it can have an optional percentage (which defaults to 50), eg size,mixed,25 which means that 25% of the threads should be taking the smallest items and 75% the largest. The threads which take the smallest first will always take the smallest first, and likewise for the threads which take the largest first.

The mixed mode can be useful to minimise the transfer time when you are transferring a mixture of large and small files - the large files are guaranteed upload threads and bandwidth and the small files will be processed continuously.

If no modifier is supplied then the order is ascending.

For example

- --order-by size,desc - send the largest files first
- --order-by modtime,ascending - send the oldest files first
- --order-by name - send the files alphabetically by path first

If the --order-by flag is not supplied or it is supplied with an empty string then the default ordering will be used which is as scanned. With --checkers 1 this is mostly alphabetical, however with the default --checkers 8 it is somewhat random.

Limitations

The --order-by flag does not do a separate pass over the data. This means that it may transfer some files out of the order specified if

- there are no files in the backlog or the source has not been fully scanned yet
- there are more than --max-backlog files in the backlog

Rclone will do its best to transfer the best file it has so in practice this should not cause a problem. Think of --order-by as being more of a best efforts flag rather than a perfect ordering.

--password-command SpaceSepList

This flag supplies a program which will provide the config password when run. This is an alternative to rclone prompting for the password or setting the RCLONE_CONFIG_PASS variable.

The argument to this should be a command with a space separated list of arguments. If one of the arguments has a space in it then enclose it in ", if you want a literal " in an argument then enclose the argument in " and double the ". See CSV encoding for more info.

Eg

    --password-command echo hello
    --password-command echo "hello with space"
    --password-command echo "hello with ""quotes"" and space"

See the Configuration Encryption section for more info.

See a Windows PowerShell example on the Wiki.

-P, --progress

This flag makes rclone update the stats in a static block in the terminal providing a realtime overview of the transfer.

Any log messages will scroll above the static block. Log messages will push the static block down to the bottom of the terminal where it will stay.

Normally this is updated every 500ms but this period can be overridden with the --stats flag.

This can be used with the --stats-one-line flag for a simpler display.

Note: On Windows until this bug is fixed all non-ASCII characters will be replaced with . when --progress is in use.
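As a sketch combining the display flags above (the paths and remote name are hypothetical), this shows a single-line progress display refreshed every 2 seconds:

    # the paths and remote name are placeholders
    rclone copy -P --stats 2s --stats-one-line /path/to/src remote:dst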
-q, --quiet

This flag will limit rclone's output to error messages only.

--refresh-times

The --refresh-times flag can be used to update modification times of existing files when they are out of sync on backends which don't support hashes.

This is useful if you uploaded files with the incorrect timestamps and you now wish to correct them.

This flag is ONLY useful for destinations which don't support hashes (eg crypt).

This can be used with any of the sync commands sync, copy or move.

To use this flag you will need to be doing a modification time sync (so not using --size-only or --checksum). The flag will have no effect when using --size-only or --checksum.

If this flag is used when rclone comes to upload a file it will check to see if there is an existing file on the destination. If this file matches the source with size (and checksum if available) but has a differing timestamp then instead of re-uploading it, rclone will update the timestamp on the destination file. If the checksum does not match rclone will upload the new file. If the checksum is absent (eg on a crypt backend) then rclone will update the timestamp.

Note that some remotes can't set the modification time without re-uploading the file so this flag is less useful on them.

Normally if you are doing a modification time sync rclone will update modification times without --refresh-times provided that the remote supports checksums AND the checksums match on the file. However if the checksums are absent then rclone will upload the file rather than setting the timestamp as this is the safe behaviour.

--retries int

Retry the entire sync if it fails this many times (default 3).

Some remotes can be unreliable and a few retries help pick up the files which didn't get transferred because of errors.

Disable retries with --retries 1.

--retries-sleep=TIME

This sets the interval between each retry specified by --retries.

The default is 0. Use 0 to disable.

--size-only

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.

This can be useful when transferring files from Dropbox which have been modified by the desktop sync client which doesn't set checksums or modification times in the same way as rclone.

--stats=TIME

Commands which transfer data (sync, copy, copyto, move, moveto) will print data transfer stats at regular intervals to show their progress.

This sets the interval.

The default is 1m. Use 0 to disable.

If you set the stats interval then all commands can show stats. This can be useful when running other commands, check or mount for example.

Stats are logged at INFO level by default which means they won't show at default log level NOTICE. Use --stats-log-level NOTICE or -v to make them show. See the Logging section for more info on log levels.

Note that on macOS you can send a SIGINFO (which is normally ctrl-T in the terminal) to make the stats print immediately.

--stats-file-name-length integer

By default, the --stats output will truncate file names and paths longer than 40 characters. This is equivalent to providing --stats-file-name-length 40. Use --stats-file-name-length 0 to disable any truncation of file names printed by stats.

--stats-log-level string

Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or ERROR. The default is INFO. This means at the default level of logging which is NOTICE the stats won't show - if you want them to then use --stats-log-level NOTICE.
See the Logging section for more info on log levels.

--stats-one-line

When this is specified, rclone condenses the stats into a single line showing the most important stats only.

--stats-one-line-date

When this is specified, rclone enables the single-line stats and prepends the display with a date string. The default is "2006/01/02 15:04:05 - ".

--stats-one-line-date-format

When this is specified, rclone enables the single-line stats and prepends the display with a user-supplied date string. The date string MUST be enclosed in quotes. Follow golang specs for date formatting syntax.

--stats-unit=bits|bytes

By default, data transfer rates will be printed in bytes/second.

This option allows the data rate to be printed in bits/second.

Data transfer volume will still be reported in bytes.

The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.

The default is bytes.

--suffix=SUFFIX

When using sync, copy or move any files which would have been overwritten or deleted will have the suffix added to them. If there is a file with the same path (after the suffix has been added), then it will be overwritten.

The remote in use must support server side move or copy and you must use the same remote as the destination of the sync.

This is for use with files to add the suffix in the current directory or with --backup-dir. See --backup-dir for more info.

For example

    rclone copy -i /path/to/local/file remote:current --suffix .bak

will copy /path/to/local/file to remote:current, but any files which would have been updated or deleted will have .bak added.

If using rclone sync with --suffix and without --backup-dir then it is recommended to put a filter rule in excluding the suffix otherwise the sync will delete the backup files.

    rclone sync -i /path/to/local/file remote:current --suffix .bak --exclude "*.bak"

--suffix-keep-extension

When using --suffix, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.

So let's say we had --suffix -2019-01-01, without the flag file.txt would be backed up to file.txt-2019-01-01 and with the flag it would be backed up to file-2019-01-01.txt. This can be helpful to make sure the suffixed files can still be opened.

--syslog

On capable OSes (not Windows or Plan9) send all log output to syslog.

This can be useful for running rclone in a script or rclone mount.

--syslog-facility string

If using --syslog this sets the syslog facility (eg KERN, USER). See man syslog for a list of possible facilities. The default facility is DAEMON.

--tpslimit float

Limit HTTP transactions per second to this. Default is 0 which is used to mean unlimited transactions per second.

For example to limit rclone to 10 HTTP transactions per second use --tpslimit 10, or to 1 transaction every 2 seconds use --tpslimit 0.5.

Use this when the number of transactions per second from rclone is causing a problem with the cloud storage provider (eg getting you banned or rate limited).

This can be very useful for rclone mount to control the behaviour of applications using it.

See also --tpslimit-burst.

--tpslimit-burst int

Max burst of transactions for --tpslimit (default 1).

Normally --tpslimit will do exactly the number of transactions per second specified. However if you supply --tpslimit-burst then rclone can save up some transactions from when it was idle giving a burst of up to the parameter supplied.
For example if you provide --tpslimit-burst 10 then if rclone has been idle for more than 10*--tpslimit then it can do 10 transactions very quickly before they are limited again.

This may be used to increase performance of --tpslimit without changing the long term average number of transactions per second.

--track-renames

By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.

If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during sync operations and perform renaming server-side.

Files will be matched by size and hash - if both match then a rename will be considered.

If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console.

Encrypted destinations are not currently supported by --track-renames if --track-renames-strategy includes hash.

Note that --track-renames is incompatible with --no-traverse and that it uses extra memory to keep track of all the rename candidates.

Note also that --track-renames is incompatible with --delete-before and will select --delete-after instead of --delete-during.

--track-renames-strategy (hash,modtime,leaf,size)

This option changes the matching criteria for --track-renames.

The matching is controlled by a comma separated selection of these tokens:

- modtime - the modification time of the file - not supported on all backends
- hash - the hash of the file contents - not supported on all backends
- leaf - the name of the file not including its directory name
- size - the size of the file (this is always enabled)

So using --track-renames-strategy modtime,leaf would match files based on modification time, the leaf of the file name and the size only.

Using --track-renames-strategy modtime or leaf can enable --track-renames support for encrypted destinations.

If nothing is specified, the default option is matching by hashes.

Note that the hash strategy is not supported with encrypted destinations.

--delete-(before,during,after)

This option allows you to specify when files on your destination are deleted when you sync folders.

Specifying the value --delete-before will delete all files present on the destination, but not on the source, _before_ starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.

Specifying --delete-during will delete files while checking and uploading files. This is the fastest option and uses the least memory.

Specifying --delete-after (the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors.

--fast-list

When doing anything which involves a directory listing (eg sync, copy, ls - in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories.
This can be parallelised and works very quickly using the least amount of memory.

However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).

If you use the --fast-list flag then rclone will use this method for listing directories. This will have the following consequences for the listing:

- It WILL use fewer transactions (important if you pay for them)
- It WILL use more memory. Rclone has to load the whole listing into memory.
- It _may_ be faster because it uses fewer transactions
- It _may_ be slower because it can't be parallelized

rclone should always give identical results with and without --fast-list.

If you pay for transactions and can fit your entire sync listing into memory then --fast-list is recommended. If you have a very big sync to do then don't use --fast-list otherwise you will run out of memory.

If you use --fast-list on a remote which doesn't support it, then rclone will just ignore it.

--timeout=TIME

This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.

The default is 5m. Set to 0 to disable.

--transfers=N

The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.

The default is to run 4 file transfers in parallel.

-u, --update

This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.

This can be useful when transferring to a remote which doesn't support mod times directly (or when using --use-server-modtime to avoid extra API calls) as it is more accurate than a --size-only check and faster than using --checksum.

If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different. If --checksum is set then rclone will update the destination if the checksums differ too.

If an existing destination file is older than the source file then it will be updated if the size or checksum differs from the source file.

On remotes which don't support mod time directly (or when using --use-server-modtime) the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

--use-mmap

If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by --buffer-size). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.

If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator which may use more memory as memory pages are returned less aggressively to the OS.

It is possible this does not work well on all platforms so it is disabled by default; in the future it may be enabled by default.

--use-server-modtime

Some object-store backends (e.g. Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object.
By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.

Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync using --update, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.

Using this flag on a sync operation without also using --update would cause all files modified at any time other than the last upload time to be uploaded again, which is probably not what you want.

-v, -vv, --verbose

With -v rclone will tell you about each file that is transferred and a small number of significant events.

With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.

-V, --version

Prints the version number.

SSL/TLS options

The outgoing SSL/TLS connections rclone makes can be controlled with these options. For example this can be very useful with the HTTP or WebDAV backends. Rclone HTTP servers have their own set of configuration options for SSL/TLS which you can find in their documentation.

--ca-cert string

This loads the PEM encoded certificate authority certificate and uses it to verify the certificates of the servers rclone connects to.

If you have generated certificates signed with a local CA then you will need this flag to connect to servers using those certificates.

--client-cert string

This loads the PEM encoded client side certificate.

This is used for mutual TLS authentication.

The --client-key flag is required too when using this.

--client-key string

This loads the PEM encoded client side private key used for mutual TLS authentication. Used in conjunction with --client-cert.

--no-check-certificate=true/false

--no-check-certificate controls whether a client verifies the server's certificate chain and host name. If --no-check-certificate is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.

This option defaults to false.

THIS SHOULD BE USED ONLY FOR TESTING.

Configuration Encryption

Your configuration file contains information for logging in to your cloud services. This means that you should keep your .rclone.conf file in a secure location.

If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to supply the password every time you start rclone.

To add a password to your rclone configuration, execute rclone config.

    >rclone config
    Current remotes:

    e) Edit existing remote
    n) New remote
    d) Delete remote
    s) Set configuration password
    q) Quit config
    e/n/d/s/q>

Go into s, Set configuration password:

    e/n/d/s/q> s
    Your configuration is not encrypted.
    If you add a password, you will protect your login information to cloud services.
    a) Add Password
    q) Quit to main menu
    a/q> a
    Enter NEW configuration password:
    password:
    Confirm NEW password:
    password:
    Password set
    Your configuration is encrypted.
    c) Change Password
    u) Unencrypt configuration
    q) Quit to main menu
    c/u/q>

Your configuration is now encrypted, and every time you start rclone you will have to supply the password. See below for details. In the same menu, you can change the password or completely remove encryption from your configuration.

There is no way to recover the configuration if you lose your password.
rclone uses nacl secretbox which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.

While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, except perhaps if you use a very strong password.

If it is safe in your environment, you can set the RCLONE_CONFIG_PASS environment variable to contain your password, in which case it will be used for decrypting the configuration.

You can set this for a session from a script. For unix like systems save this to a file called set-rclone-password:

    #!/bin/echo Source this file don't run it

    read -s RCLONE_CONFIG_PASS
    export RCLONE_CONFIG_PASS

Then source the file when you want to use it. From the shell you would do source set-rclone-password. It will then ask you for the password and set it in the environment variable.

An alternate means of supplying the password is to provide a script which will retrieve the password and print it on standard output. This script should have a fully specified path name and not rely on any environment variables. The script is supplied either via the --password-command="..." command line argument or via the RCLONE_PASSWORD_COMMAND environment variable.

One useful example of this is using the passwordstore application to retrieve the password:

    export RCLONE_PASSWORD_COMMAND="pass rclone/config"

If the passwordstore password manager holds the password for the rclone configuration, using the script method means the password is primarily protected by the passwordstore system, and is never embedded in the clear in scripts, nor available for examination using standard commands.

It is quite possible with long running rclone sessions for copies of passwords to be innocently captured in log files or terminal scroll buffers, etc. Using the script method of supplying the password enhances the security of the config password considerably.

If you are running rclone inside a script, unless you are using the --password-command method, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password, and --password-command has not been supplied.

Developer options

These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with the remote name, eg --drive-test-option - see the docs for the remote in question.

--cpuprofile=FILE

Write CPU profile to file. This can be analysed with go tool pprof.

--dump flag,flag,flag

The --dump flag takes a comma separated list of flags to dump info about.

Note that some headers including Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the Go standard library's automatic gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.

The available flags are:

--dump headers

Dump HTTP headers with Authorization: lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.

Use --dump auth if you do want the Authorization: headers.

--dump bodies

Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.
Note that the bodies are buffered in memory so don't use this for enormous files.

--dump requests

Like --dump bodies but dumps the request bodies and the response headers. Useful for debugging download problems.

--dump responses

Like --dump bodies but dumps the response bodies and the request headers. Useful for debugging upload problems.

--dump auth

Dump HTTP headers - will contain sensitive info such as Authorization: headers - use --dump headers to dump without Authorization: headers. Can be very verbose. Useful for debugging only.

--dump filters

Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

--dump goroutines

This dumps a list of the running go-routines at the end of the command to standard output.

--dump openfiles

This dumps a list of the open files at the end of the command. It uses the lsof command to do that so you'll need that installed to use it.

--memprofile=FILE

Write memory profile to file. This can be analysed with go tool pprof.

Filtering

For the filtering options

- --delete-excluded
- --filter
- --filter-from
- --exclude
- --exclude-from
- --include
- --include-from
- --files-from
- --files-from-raw
- --min-size
- --max-size
- --min-age
- --max-age
- --dump filters

See the filtering section.

Remote control

For the remote control options and for instructions on how to remote control rclone

- --rc
- and anything starting with --rc-

See the remote control section.

Logging

rclone has 4 levels of logging, ERROR, NOTICE, INFO and DEBUG.

By default, rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (eg rclone ls).

By default, rclone will produce Error and Notice level messages.

If you use the -q flag, rclone will only produce Error messages.

If you use the -v flag, rclone will produce Error, Notice and Info messages.

If you use the -vv flag, rclone will produce Error, Notice, Info and Debug messages.

You can also control the log levels with the --log-level flag.

If you use the --log-file=FILE option, rclone will redirect Error, Info and Debug messages along with standard error to FILE.

If you use the --syslog flag then rclone will log to syslog and the --syslog-facility flag controls which facility it uses.

Rclone prefixes all log messages with their level in capitals, eg INFO which makes it easy to grep the log file for different kinds of information.

Exit Code

If any errors occur during the command execution, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.

During the startup phase, rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.

When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.
List of exit codes

- 0 - success
- 1 - Syntax or usage error
- 2 - Error not otherwise categorised
- 3 - Directory not found
- 4 - File not found
- 5 - Temporary error (one that more retries might fix) (Retry errors)
- 6 - Less serious errors (like 461 errors from dropbox) (NoRetry errors)
- 7 - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
- 8 - Transfer exceeded - limit set by --max-transfer reached
- 9 - Operation successful, but no files transferred

Environment Variables

Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

Options

Every option in rclone can have its default set by environment variable.

To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.

For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.

Or to always use the trash in drive --drive-use-trash, set RCLONE_DRIVE_USE_TRASH=true.

The same parser is used for the options and the environment variables so they take exactly the same form.

Config file

You can set defaults for values in the config file on an individual remote basis. If you want to use this feature, you will need to discover the name of the config items that you want. The easiest way is to run through rclone config by hand, then look in the config file to see what the values are (the config file can be found by looking at the help for --config in rclone help).

To find the name of the environment variable you need to set, take RCLONE_CONFIG_ + name of remote + _ + name of config file option and make it all uppercase.

For example, to configure an S3 remote named mys3: without a config file (using unix ways of setting environment variables):

    $ export RCLONE_CONFIG_MYS3_TYPE=s3
    $ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
    $ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
    $ rclone lsd MYS3:
              -1 2016-09-21 12:54:21        -1 my-bucket
    $ rclone listremotes | grep mys3
    mys3:

Note that if you want to create a remote using environment variables you must create the ..._TYPE variable as above.

Precedence

The various different methods of backend configuration are read in this order and the first one with a value is used.

- Flag values as supplied on the command line, eg --drive-use-trash.
- Remote specific environment vars, eg RCLONE_CONFIG_MYREMOTE_USE_TRASH (see above).
- Backend specific environment vars, eg RCLONE_DRIVE_USE_TRASH.
- Config file, eg use_trash = false.
- Default values, eg true - these can't be changed.

So if both --drive-use-trash is supplied on the command line and an environment variable RCLONE_DRIVE_USE_TRASH is set, the command line flag will take preference.

For non backend configuration the order is as follows:

- Flag values as supplied on the command line, eg --stats 5s.
- Environment vars, eg RCLONE_STATS=5s.
- Default values, eg 1m - these can't be changed.

Other environment variables

- RCLONE_CONFIG_PASS set to contain your config file password (see Configuration Encryption section)
- HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof).
  - HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
  - The environment values may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
- RCLONE_CONFIG_DIR - rclone SETS this variable for use in config files and sub processes to point to the directory holding the config file.

CONFIGURING RCLONE ON A REMOTE / HEADLESS MACHINE

Some of the configurations (those involving oauth2) require an Internet connected web browser.

If you are trying to set rclone up on a remote or headless box with no browser available on it (eg a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below.

Configuring using rclone authorize

On the headless box run rclone config but answer N to the Use auto config? question.

    ...
    Remote config
    Use auto config?
     * Say Y if not sure
     * Say N if you are working on a remote or headless machine
    y) Yes (default)
    n) No
    y/n> n
    For this to work, you will need rclone available on a machine that has
    a web browser available.
    For more help and alternate methods see: https://rclone.org/remote_setup/
    Execute the following on the machine with the web browser (same rclone
    version recommended):
        rclone authorize "amazon cloud drive"
    Then paste the result below:
    result>

Then on your main desktop machine

    rclone authorize "amazon cloud drive"
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    Log in and authorize rclone for access
    Waiting for code...
    Got code
    Paste the following into your remote machine --->
    SECRET_TOKEN
    <---End paste

Then back to the headless box, paste in the code

    result> SECRET_TOKEN
    --------------------
    [acd12]
    client_id =
    client_secret =
    token = SECRET_TOKEN
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d>

Configuring by copying the config file

Rclone stores all of its config in a single configuration file. This can easily be copied to configure a remote rclone.

So first configure rclone on your desktop machine with rclone config to set up the config file.

Find the config file by running rclone config file, for example

    $ rclone config file
    Configuration file is stored at:
    /home/user/.rclone.conf

Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and place it in the correct place (use rclone config file on the remote box to find out where).

FILTERING, INCLUDES AND EXCLUDES

Rclone has a sophisticated set of include and exclude rules. Some of these are based on patterns and some on other things like file size.

The filters are applied for the copy, sync, move, ls, lsl, md5sum, sha1sum, size, delete and check operations. Note that purge does not obey the filters.

Each path as it passes through rclone is matched against the include and exclude rules like --include, --exclude, --include-from, --exclude-from, --filter, or --filter-from. The simplest way to try them out is using the ls command, or --dry-run together with -v.

--filter-from, --exclude-from, --include-from, --files-from, --files-from-raw understand - as a file name to mean read from standard input.

Patterns

The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.

If the pattern starts with a / then it only matches at the top level of the directory tree, RELATIVE TO THE ROOT OF THE REMOTE (not necessarily the root of the local drive).
If it doesn't start with / then it is matched starting at the END OF THE PATH, but it will only match a complete path element:

    file.jpg  - matches "file.jpg"
              - matches "directory/file.jpg"
              - doesn't match "afile.jpg"
              - doesn't match "directory/afile.jpg"
    /file.jpg - matches "file.jpg" in the root directory of the remote
              - doesn't match "afile.jpg"
              - doesn't match "directory/file.jpg"

IMPORTANT Note that you must use / in patterns and not \ even if running on Windows.

A * matches anything but not a /.

    *.jpg  - matches "file.jpg"
           - matches "directory/file.jpg"
           - doesn't match "file.jpg/something"

Use ** to match anything, including slashes (/).

    dir/** - matches "dir/file.jpg"
           - matches "dir/dir1/dir2/file.jpg"
           - doesn't match "directory/file.jpg"
           - doesn't match "adir/file.jpg"

A ? matches any character except a slash /.

    l?ss  - matches "less"
          - matches "lass"
          - doesn't match "floss"

A [ and ] together make a character class, such as [a-z] or [aeiou] or [[:alpha:]]. See the go regexp docs for more info on these.

    h[ae]llo - matches "hello"
             - matches "hallo"
             - doesn't match "hullo"

A { and } define a choice between elements. It should contain a comma separated list of patterns, any of which might match. These patterns can contain wildcards.

    {one,two}_potato - matches "one_potato"
                     - matches "two_potato"
                     - doesn't match "three_potato"
                     - doesn't match "_potato"

Special characters can be escaped with a \ before them.

    \*.jpg       - matches "*.jpg"
    \\.jpg       - matches "\.jpg"
    \[one\].jpg  - matches "[one].jpg"

Patterns are case sensitive unless the --ignore-case flag is used.

Without --ignore-case (default)

    potato - matches "potato"
           - doesn't match "POTATO"

With --ignore-case

    potato - matches "potato"
           - matches "POTATO"

Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir

Directories

Rclone keeps track of directories that could match any file patterns.

Eg if you add the include rule

    /a/*.jpg

Rclone will synthesize the directory include rule

    /a/

If you put any rules which end in / then it will only match directories.

Directory matches are ONLY used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don't have a concept of directory.

Differences between rsync and rclone patterns

Rclone implements bash style {a,b,c} glob matching which rsync doesn't.

Rclone always does a wildcard match so \ must always escape a \.

How the rules are used

Rclone maintains a combined list of include rules and exclude rules.

Each file is matched in order, starting from the top, against the rule in the list until it finds a match. The file is then included or excluded according to the rule type.

If the matcher fails to find a match after testing against all the entries in the list then the path is included.

For example given the following rules, + being include, - being exclude,

    - secret*.jpg
    + *.jpg
    + *.png
    + file2.avi
    - *

This would include

- file1.jpg
- file3.png
- file2.avi

This would exclude

- secret17.jpg
- non *.jpg and *.png

A similar process is done on directory entries before recursing into them.
Note that this directory filtering only works on remotes which have a concept of directory (eg local, google drive, onedrive, amazon drive) and not on bucket based remotes (eg s3, swift, google compute storage, b2).

Adding filtering rules

Filtering rules are added with the following command line flags.

Repeating options

You can repeat the following options to add more than one rule of that type.

- --include
- --include-from
- --exclude
- --exclude-from
- --filter
- --filter-from
- --files-from
- --files-from-raw

IMPORTANT You should not use --include* together with --exclude*. It may produce different results than you expected. In that case try to use: --filter*.

Note that all the options of the same type are processed together in the order above, regardless of what order they were placed on the command line. So all --include options are processed first in the order they appeared on the command line, then all --include-from options etc.

To mix up the order of includes and excludes, the --filter flag can be used.

--exclude - Exclude files matching pattern

Add a single exclude rule with --exclude. This flag can be repeated. See above for the order the flags are processed in.

Eg --exclude *.bak to exclude all bak files from the sync.

--exclude-from - Read exclude patterns from file

Add exclude rules from a file. This flag can be repeated. See above for the order the flags are processed in.

Prepare a file like this exclude-file.txt

    # a sample exclude rule file
    *.bak
    file2.jpg

Then use as --exclude-from exclude-file.txt. This will sync all files except those ending in bak and file2.jpg. This is useful if you have a lot of rules.

--include - Include files matching pattern

Add a single include rule with --include. This flag can be repeated. See above for the order the flags are processed in.

Eg --include *.{png,jpg} to include all png and jpg files in the backup and no others.

This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.

--include-from - Read include patterns from file

Add include rules from a file. This flag can be repeated. See above for the order the flags are processed in.

Prepare a file like this include-file.txt

    # a sample include rule file
    *.jpg
    *.png
    file2.avi

Then use as --include-from include-file.txt. This will sync all jpg, png files and file2.avi. This is useful if you have a lot of rules.

This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.

--filter - Add a file-filtering rule

This can be used to add a single include or exclude rule. Include rules start with + and exclude rules start with -. A special rule called ! can be used to clear the existing rules. This flag can be repeated. See above for the order the flags are processed in.

Eg --filter "- *.bak" to exclude all bak files from the sync.

--filter-from - Read filtering patterns from a file

Add include/exclude rules from a file. This flag can be repeated. See above for the order the flags are processed in.
Prepare a file like this filter-file.txt

    # a sample filter rule file
    - secret*.jpg
    + *.jpg
    + *.png
    + file2.avi
    - /dir/Trash/**
    + /dir/**
    # exclude everything else
    - *

Then use as --filter-from filter-file.txt. The rules are processed in the order that they are defined.

This example will include all jpg and png files, exclude any files matching secret*.jpg and include file2.avi. It will also include everything in the directory dir at the root of the sync, except dir/Trash which it will exclude. Everything else will be excluded from the sync.

--files-from - Read list of source-file names

This reads a list of file names from the file passed in and ONLY these files are transferred. The FILTERING RULES ARE IGNORED completely if you use this option.

--files-from expects a list of files as its input. Leading / trailing whitespace is stripped from the input lines and lines starting with # and ; are ignored.

Rclone will traverse the file system if you use --files-from, effectively using the files in --files-from as a set of filters. Rclone will not error if any of the files are missing.

If you use --no-traverse as well as --files-from then rclone will not traverse the destination file system, it will find each file individually using approximately 1 API call. This can be more efficient for small lists of files.

This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.

Paths within the --files-from file will be interpreted as starting with the root specified in the command. Leading / characters are ignored. See --files-from-raw if you need the input to be processed in a raw manner.

For example, suppose you had files-from.txt with this content:

    # comment
    file1.jpg
    subdir/file2.jpg

You could then use it like this:

    rclone copy --files-from files-from.txt /home/me/pics remote:pics

This will transfer these files only (if they exist)

    /home/me/pics/file1.jpg        → remote:pics/file1.jpg
    /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg

To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:

    /home/user1/important
    /home/user1/dir/file
    /home/user2/stuff

To copy these you'd find a common subdirectory - in this case /home and put the remaining files in files-from.txt with or without leading /, eg

    user1/important
    user1/dir/file
    user2/stuff

You could then copy these to a remote like this

    rclone copy --files-from files-from.txt /home remote:backup

The 3 files will arrive in remote:backup with the paths as in the files-from.txt like this:

    /home/user1/important → remote:backup/user1/important
    /home/user1/dir/file  → remote:backup/user1/dir/file
    /home/user2/stuff     → remote:backup/user2/stuff

You could of course choose / as the root too in which case your files-from.txt might look like this.

    /home/user1/important
    /home/user1/dir/file
    /home/user2/stuff

And you would transfer it like this

    rclone copy --files-from files-from.txt / remote:backup

In this case there will be an extra home directory on the remote:

    /home/user1/important → remote:backup/home/user1/important
    /home/user1/dir/file  → remote:backup/home/user1/dir/file
    /home/user2/stuff     → remote:backup/home/user2/stuff

--files-from-raw - Read list of source-file names without any processing

This option is the same as --files-from with the only difference being that the input is read in a raw manner. This means that lines with leading / trailing whitespace, and lines starting with ; or #, are read without any processing.
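For example, a file list exported with rclone lsf can be fed straight back in (a minimal sketch - remote:dir and the destination path are placeholders):

    rclone lsf --files-only -R remote:dir > file-list.txt
    rclone copy --files-from-raw file-list.txt remote:dir /tmp/dir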
As the example above shows, rclone lsf has a compatible format that can be used to export file lists from remotes, which can then be used as an input to --files-from-raw.

--min-size - Don't transfer any file smaller than this

This option controls the minimum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.

For example --min-size 50k means no files smaller than 50kByte will be transferred.

--max-size - Don't transfer any file larger than this

This option controls the maximum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.

For example --max-size 1G means no files larger than 1GByte will be transferred.

--max-age - Don't transfer any file older than this

This option controls the maximum age of files to transfer. Give in seconds or with a suffix of:

- ms - Milliseconds
- s - Seconds
- m - Minutes
- h - Hours
- d - Days
- w - Weeks
- M - Months
- y - Years

For example --max-age 2d means no files older than 2 days will be transferred.

This can also be an absolute time in one of these formats

- RFC3339 - eg "2006-01-02T15:04:05Z07:00"
- ISO8601 Date and time, local timezone - "2006-01-02T15:04:05"
- ISO8601 Date and time, local timezone - "2006-01-02 15:04:05"
- ISO8601 Date - "2006-01-02" (YYYY-MM-DD)

--min-age - Don't transfer any file younger than this

This option controls the minimum age of files to transfer. Give in seconds or with a suffix (see --max-age for the list of suffixes).

For example --min-age 2d means no files younger than 2 days will be transferred.

--delete-excluded - Delete files on dest excluded from sync

IMPORTANT this flag is dangerous - use with --dry-run and -v first.

When doing rclone sync this will delete any files which are excluded from the sync on the destination.

If for example you did a sync from A to B without the --min-size 50k flag

    rclone sync -i A: B:

Then you repeated it like this with the --delete-excluded

    rclone --min-size 50k --delete-excluded sync A: B:

This would delete all files on B which are less than 50 kBytes as these are now excluded from the sync.

Always test first with --dry-run and -v before using this flag.

--dump filters - dump the filters to the output

This dumps the defined filters to the output as regular expressions. Useful for debugging.

--ignore-case - make searches case insensitive

Normally filter patterns are case sensitive. If this flag is supplied then filter patterns become case insensitive.

Normally a --include "file.txt" will not match a file called FILE.txt. However if you use the --ignore-case flag then --include "file.txt" will match a file called FILE.txt.

Quoting shell metacharacters

The examples above may not work verbatim in your shell as they have shell metacharacters in them (eg *), and may require quoting.

Eg linux, OSX

- --include \*.jpg
- --include '*.jpg'
- --include='*.jpg'

In Windows the expansion is done by the command not the shell so this should work fine

- --include *.jpg

Exclude directory based on a file

It is possible to exclude a directory based on a file present in that directory. The file name should be specified using the --exclude-if-present flag. This flag has priority over the other filtering flags.

Imagine you have the following directory structure:

    dir1/file1
    dir1/dir2/file2
    dir1/dir2/dir3/file3
    dir1/dir2/dir3/.ignore

You can exclude dir3 from sync by running the following command:

    rclone sync -i --exclude-if-present .ignore dir1 remote:backup

Currently only one filename is supported, i.e.
--exclude-if-present should not be used multiple times.

GUI (EXPERIMENTAL)

Rclone can serve a web based GUI (graphical user interface). This is somewhat experimental at the moment so things may be subject to change.

Run this command in a terminal and rclone will download and then display the GUI in a web browser.

    rclone rcd --rc-web-gui

This will produce logs like this and rclone needs to continue to run to serve the GUI:

    2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip
    2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path : /home/USER/.cache/rclone/webgui/v0.0.6.zip]
    2019/08/25 11:40:16 NOTICE: Unzipping
    2019/08/25 11:40:16 NOTICE: Serving remote control on http://127.0.0.1:5572/

This assumes you are running rclone locally on your machine. It is possible to separate the rclone and the GUI - see below for details.

If you wish to check for updates then you can add --rc-web-gui-update to the command line.

If you find your GUI broken, you may force it to update by adding --rc-web-gui-force-update.

By default, rclone will open your browser. Add --rc-web-gui-no-open-browser to disable this feature.

Using the GUI

Once the GUI opens, you will be looking at the dashboard which gives an overall overview. On the left hand side you will see a series of view buttons you can click on:

- Dashboard - main overview
- Configs - examine and create new configurations
- Explorer - view, download and upload files to the cloud storage systems
- Backend - view or alter the backend config
- Log out

(More docs and walkthrough video to come!)

How it works

When you run rclone rcd --rc-web-gui this is what happens

- Rclone starts but only runs the remote control API ("rc").
- The API is bound to localhost with an auto generated username and password.
- If the API bundle is missing then rclone will download it.
- rclone will start serving the files from the API bundle over the same port as the API
- rclone will open the browser with a login_token so it can log straight in.

Advanced use

The rclone rcd may use any of the flags documented on the rc page. The flag --rc-web-gui is shorthand for

- Download the web GUI if necessary
- Check we are using some authentication
- --rc-user gui
- --rc-pass <random password>
- --rc-serve

These flags can be overridden as desired. See also the rclone rcd documentation.

Example: Running a public GUI

For example the GUI could be served on a public port over SSL using an htpasswd file using the following flags:

- --rc-web-gui
- --rc-addr :443
- --rc-htpasswd /path/to/htpasswd
- --rc-cert /path/to/ssl.crt
- --rc-key /path/to/ssl.key

Example: Running a GUI behind a proxy

If you want to run the GUI behind a proxy at /rclone you could use these flags:

- --rc-web-gui
- --rc-baseurl rclone
- --rc-htpasswd /path/to/htpasswd

Or instead of htpasswd if you just want a single user and password:

- --rc-user me
- --rc-pass mypassword

Project

The GUI is being developed in the rclone/rclone-webui-react repository.

Bug reports and contributions are very welcome :-)

If you have questions then please ask them on the rclone forum.

REMOTE CONTROLLING RCLONE WITH ITS API

If rclone is run with the --rc flag then it starts an http server which can be used to remote control rclone using its API.

If you just want to run a remote control then see the rcd command.
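For example, you could start the server and make a first call to it from another terminal like this (a minimal sketch - the user name and password are just examples):

    rclone rcd --rc-user me --rc-pass mypassword
    rclone rc --user me --pass mypassword rc/noop param1=one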
Supported parameters

--rc

Flag to start the http server to listen for remote requests.

--rc-addr=IP

IPaddress:Port or :Port to bind server to. (default "localhost:5572")

--rc-cert=KEY

SSL PEM key (concatenation of certificate and CA certificate)

--rc-client-ca=PATH

Client certificate authority to verify clients with

--rc-htpasswd=PATH

htpasswd file - if not provided no authentication is done

--rc-key=PATH

SSL PEM Private key

--rc-max-header-bytes=VALUE

Maximum size of request header (default 4096)

--rc-user=VALUE

User name for authentication.

--rc-pass=VALUE

Password for authentication.

--rc-realm=VALUE

Realm for authentication (default "rclone")

--rc-server-read-timeout=DURATION

Timeout for server reading data (default 1h0m0s)

--rc-server-write-timeout=DURATION

Timeout for server writing data (default 1h0m0s)

--rc-serve

Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object

Default Off.

--rc-files /path/to/directory

Path to local files to serve on the HTTP server.

If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions.

If --rc-user or --rc-pass is set then the URL that is opened will have the authorization in the URL in the http://user:pass@localhost/ style.

Default Off.

--rc-enable-metrics

Enable OpenMetrics/Prometheus compatible endpoint at /metrics.

Default Off.

--rc-web-gui

Set this flag to serve the default web gui on the same port as rclone.

Default Off.

--rc-allow-origin

Set the allowed Access-Control-Allow-Origin for rc requests.

Can be used with --rc-web-gui if rclone is running on a different IP than the web-gui.

Default is the IP address on which rc is running.

--rc-web-fetch-url

Set the URL to fetch the rclone-web-gui files from.

Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest.

--rc-web-gui-update

Set this flag to check and update rclone-webui-react from the rc-web-fetch-url.

Default Off.

--rc-web-gui-force-update

Set this flag to force update rclone-webui-react from the rc-web-fetch-url.

Default Off.

--rc-web-gui-no-open-browser

Set this flag to disable opening the browser automatically when using web-gui.

Default Off.

--rc-job-expire-duration=DURATION

Expire finished async jobs older than DURATION (default 60s).

--rc-job-expire-interval=DURATION

Interval duration to check for expired async jobs (default 10s).

--rc-no-auth

By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg operations/list is denied as it involves creating a remote, as is sync/copy.

If this is set then no authorisation will be required on the server to use these methods. The alternative is to use --rc-user and --rc-pass and use these credentials in the request.

Default Off.

Accessing the remote control via the rclone rc command

Rclone itself implements the remote control protocol in its rclone rc command.

You can use it like this

    $ rclone rc rc/noop param1=one param2=two
    {
        "param1": "one",
        "param2": "two"
    }

Run rclone rc on its own to see the help for the installed remote control commands.
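You can also ask a running server for its registered commands with rc/list, and point rclone rc at a non-default address with --url (the address below is just an example):

    rclone rc rc/list
    rclone rc --url http://127.0.0.1:5572/ core/stats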
JSON input rclone rc also supports a --json flag which can be used to send more complicated input parameters. $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop { "p1": [ 1, "2", null, 4 ], "p2": { "a": 1, "b": 2 } } If the parameter being passed is an object then it can be passed as a JSON string rather than using the --json flag which simplifies the command line. rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}' Rather than rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}' Special parameters The rc interface supports some special parameters which apply to ALL commands. These start with _ to show they are different. Running asynchronous jobs with _async = true Each rc call is classified as a job and it is assigned its own id. By default jobs are executed immediately as they are created or synchronously. If _async has a true value when supplied to an rc call then it will return immediately with a job id and the task will be run in the background. The job/status call can be used to get information of the background job. The job can be queried for up to 1 minute after it has finished. It is recommended that potentially long running jobs, eg sync/sync, sync/copy, sync/move, operations/purge are run with the _async flag to avoid any potential problems with the HTTP request and response timing out. Starting a job with the _async flag: $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop { "jobid": 2 } Query the status to see if the job has finished. For more information on the meaning of these return parameters see the job/status call. $ rclone rc --json '{ "jobid":2 }' job/status { "duration": 0.000124163, "endTime": "2018-10-27T11:38:07.911245881+01:00", "error": "", "finished": true, "id": 2, "output": { "_async": true, "p1": [ 1, "2", null, 4 ], "p2": { "a": 1, "b": 2 } }, "startTime": "2018-10-27T11:38:07.911121728+01:00", "success": true } job/list can be used to show the running or recently completed jobs $ rclone rc job/list { "jobids": [ 2 ] } Assigning operations to groups with _group = value Each rc call has its own stats group for tracking its metrics. By default grouping is done by the composite group name from prefix job/ and id of the job like so job/1. If _group has a value then stats for that request will be grouped under that value. This allows caller to group stats under their own name. Stats for specific group can be accessed by passing group to core/stats: $ rclone rc --json '{ "group": "job/1" }' core/stats { "speed": 12345 ... } Supported commands backend/command: Runs a backend command. This takes the following parameters - command - a string with the command name - fs - a remote name string eg "drive:" - arg - a list of arguments for the backend command - opt - a map of string to string of options Returns - result - result from the backend command For example rclone rc backend/command command=noop fs=. -o echo=yes -o blue -a path1 -a path2 Returns { "result": { "arg": [ "path1", "path2" ], "name": "noop", "opt": { "blue": "", "echo": "yes" } } } Note that this is the direct equivalent of using this "backend" command: rclone backend noop . -o echo=yes -o blue path1 path2 Note that arguments must be preceded by the "-a" flag See the backend command for more information. AUTHENTICATION IS REQUIRED FOR THIS CALL. cache/expire: Purge a remote from cache Purge a remote from the cache backend. Supports either a directory or a file. 
Params:

- remote = path to remote (required)
- withData = true/false to delete cached data (chunks) as well (optional)

Eg

    rclone rc cache/expire remote=path/to/sub/folder/
    rclone rc cache/expire remote=/ withData=true

cache/fetch: Fetch file chunks

Ensure the specified file chunks are cached on disk.

The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]

start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is the 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file.

Some valid examples are:

    ":5,-5:" -> the first and last five chunks
    "0,-2"   -> the first and the second last chunk
    "0:10"   -> the first ten chunks

Any parameter with a key that starts with "file" can be used to specify files to fetch, eg

    rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye

File names will automatically be encrypted when a crypt remote is used on top of the cache.

cache/stats: Get cache stats

Show statistics for the cache remote.

config/create: create the config for a remote.

This takes the following parameters

- name - name of remote
- parameters - a map of { "key": "value" } pairs
- type - type of the new remote
- obscure - optional bool - forces obscuring of passwords
- noObscure - optional bool - forces passwords not to be obscured

See the config create command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

config/delete: Delete a remote in the config file.

Parameters:

- name - name of remote to delete

See the config delete command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

config/dump: Dumps the config file.

Returns a JSON object:

- key: value

Where keys are remote names and values are the config parameters.

See the config dump command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

config/get: Get a remote in the config file.

Parameters:

- name - name of remote to get

See the config dump command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

config/listremotes: Lists the remotes in the config file.

Returns

- remotes - array of remote names

See the listremotes command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

config/password: password the config for a remote.

This takes the following parameters

- name - name of remote
- parameters - a map of { "key": "value" } pairs

See the config password command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

config/providers: Shows how providers are configured in the config file.

Returns a JSON object:

- providers - array of objects

See the config providers command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

config/update: update the config for a remote.

This takes the following parameters

- name - name of remote
- parameters - a map of { "key": "value" } pairs
- obscure - optional bool - forces obscuring of passwords
- noObscure - optional bool - forces passwords not to be obscured

See the config update command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

core/bwlimit: Set the bandwidth limit.

This sets the bandwidth limit to that passed in.
Eg

    rclone rc core/bwlimit rate=off
    {
        "bytesPerSecond": -1,
        "rate": "off"
    }
    rclone rc core/bwlimit rate=1M
    {
        "bytesPerSecond": 1048576,
        "rate": "1M"
    }

If the rate parameter is not supplied then the bandwidth is queried

    rclone rc core/bwlimit
    {
        "bytesPerSecond": 1048576,
        "rate": "1M"
    }

The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.

In either case "rate" is returned as a human readable string, and "bytesPerSecond" is returned as a number.

core/command: Run a rclone terminal command over rc.

This takes the following parameters

- command - a string with the command name
- arg - a list of arguments for the command
- opt - a map of string to string of options

Returns

- result - result from the command
- error - set if rclone exits with an error code
- returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT", "STREAM_ONLY_STDERR")

For example

    rclone rc core/command command=ls -a mydrive:/ -o max-depth=1
    rclone rc core/command -a ls -a mydrive:/ -o max-depth=1

Returns

    {
        "error": false,
        "result": ""
    }

    OR

    {
        "error": true,
        "result": ""
    }

AUTHENTICATION IS REQUIRED FOR THIS CALL.

core/gc: Runs a garbage collection.

This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems.

core/group-list: Returns list of stats.

This returns list of stats groups currently in memory.

Returns the following values:

    {
        "groups":  an array of group names:
        [
            "group1",
            "group2",
            ...
        ]
    }

core/memstats: Returns the memory statistics

This returns the memory statistics of the running program. What the values mean is explained in the go docs: https://golang.org/pkg/runtime/#MemStats

The most interesting values for most people are:

- HeapAlloc: This is the amount of memory rclone is actually using
- HeapSys: This is the amount of memory rclone has obtained from the OS
- Sys: this is the total amount of memory requested from the OS
  - It is virtual memory so may include unused memory

core/obscure: Obscures a string passed in.

Pass a clear string and rclone will obscure it for the config file:

- clear - string

Returns

- obscured - string

core/pid: Return PID of current process

This returns PID of current process. Useful for stopping rclone process.

core/quit: Terminates the app.

(optional) Pass an exit code to be used for terminating the app:

- exitCode - int

core/stats: Returns stats about current transfers.

This returns all available stats:

    rclone rc core/stats

If group is not provided then summed up stats for all groups will be returned.
Parameters

- group - name of the stats group (string)

Returns the following values:

    {
        "speed": average speed in bytes/sec since start of the process,
        "bytes": total transferred bytes since the start of the process,
        "errors": number of errors,
        "fatalError": whether there has been at least one FatalError,
        "retryError": whether there has been at least one non-NoRetryError,
        "checks": number of checked files,
        "transfers": number of transferred files,
        "deletes": number of deleted files,
        "renames": number of renamed files,
        "transferTime": total time spent on running jobs,
        "elapsedTime": time in seconds since the start of the process,
        "lastError": last occurred error,
        "transferring": an array of currently active file transfers:
            [
                {
                    "bytes": total transferred bytes for this file,
                    "eta": estimated time in seconds until file transfer completion,
                    "name": name of the file,
                    "percentage": progress of the file transfer in percent,
                    "speed": average speed over the whole transfer in bytes/sec,
                    "speedAvg": current speed in bytes/sec as an exponentially weighted moving average,
                    "size": size of the file in bytes
                }
            ],
        "checking": an array of names of currently active file checks
            []
    }

Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined.

core/stats-delete: Delete stats group.

This deletes the entire stats group.

Parameters

- group - name of the stats group (string)

core/stats-reset: Reset stats.

This clears counters, errors and finished transfers for all stats or specific stats group if group is provided.

Parameters

- group - name of the stats group (string)

core/transferred: Returns stats about completed transfers.

This returns stats about completed transfers:

    rclone rc core/transferred

If group is not provided then completed transfers for all groups will be returned.

Note only the last 100 completed transfers are returned.

Parameters

- group - name of the stats group (string)

Returns the following values:

    {
        "transferred":  an array of completed transfers (including failed ones):
            [
                {
                    "name": name of the file,
                    "size": size of the file in bytes,
                    "bytes": total transferred bytes for this file,
                    "checked": if the transfer is only checked (skipped, deleted),
                    "timestamp": integer representing millisecond unix epoch,
                    "error": string description of the error (empty if successful),
                    "jobid": id of the job that this transfer belongs to
                }
            ]
    }

core/version: Shows the current version of rclone and the go runtime.

This shows the current version of rclone and the go runtime.

- version - rclone version, eg "v1.53.0"
- decomposed - version number as [major, minor, patch]
- isGit - boolean - true if this was compiled from the git version
- isBeta - boolean - true if this is a beta version
- os - OS in use according to Go
- arch - cpu architecture in use according to Go
- goVersion - version of Go runtime in use

debug/set-block-profile-rate: Set runtime.SetBlockProfileRate for blocking profiling.

SetBlockProfileRate controls the fraction of goroutine blocking events that are reported in the blocking profile. The profiler aims to sample an average of one blocking event per rate nanoseconds spent blocked.

To include every blocking event in the profile, pass rate = 1. To turn off profiling entirely, pass rate <= 0.
After calling this you can use this to see the blocking profile:

    go tool pprof http://localhost:5572/debug/pprof/block

Parameters

- rate - int

debug/set-mutex-profile-fraction: Set runtime.SetMutexProfileFraction for mutex profiling.

SetMutexProfileFraction controls the fraction of mutex contention events that are reported in the mutex profile. On average 1/rate events are reported. The previous rate is returned.

To turn off profiling entirely, pass rate 0. To just read the current rate, pass rate < 0. (For n>1 the details of sampling may change.)

Once this is set you can use this to profile the mutex contention:

    go tool pprof http://localhost:5572/debug/pprof/mutex

Parameters

- rate - int

Results

- previousRate - int

job/list: Lists the IDs of the running jobs

Parameters - None

Results

- jobids - array of integer job ids

job/status: Reads the status of the job ID

Parameters

- jobid - id of the job (integer)

Results

- duration - time in seconds that the job ran for
- endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00")
- error - error from the job or empty string for no error
- finished - boolean whether the job has finished or not
- id - as passed in above
- startTime - time the job started (eg "2018-10-26T18:50:20.528336039+01:00")
- success - boolean - true for success false otherwise
- output - output of the job as would have been returned if called synchronously
- progress - output of the progress related to the underlying job

job/stop: Stop the running job

Parameters

- jobid - id of the job (integer)

mount/listmounts: Show current mount points

This shows currently mounted points, which can be used for performing an unmount

This takes no parameters and returns

- mountPoints: list of current mount points

Eg

    rclone rc mount/listmounts

AUTHENTICATION IS REQUIRED FOR THIS CALL.

mount/mount: Create a new mount point

rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2

This takes the following parameters

- fs - a remote path to be mounted (required)
- mountPoint: valid path on the local machine (required)
- mountType: One of the values (mount, cmount, mount2) specifies the mount implementation to use
- mountOpt: a JSON object with Mount options in.
- vfsOpt: a JSON object with VFS options in.

Eg

    rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
    rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
    rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'

The vfsOpt are as described in options/get and can be seen in the "vfs" section, and the mountOpt can be seen in the "mount" section, when running:

    rclone rc options/get

AUTHENTICATION IS REQUIRED FOR THIS CALL.

mount/types: Show all possible mount types

This shows all possible mount types and returns them as a list.

This takes no parameters and returns

- mountTypes: list of mount types

The mount types are strings like "mount", "mount2", "cmount" and can be passed to mount/mount as the mountType parameter.

Eg

    rclone rc mount/types

AUTHENTICATION IS REQUIRED FOR THIS CALL.

mount/unmount: Unmount selected active mount

rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
This takes the following parameters

- mountPoint: valid path on the local machine where the mount was created (required)

Eg

    rclone rc mount/unmount mountPoint=/home/<user>/mountPoint

AUTHENTICATION IS REQUIRED FOR THIS CALL.

mount/unmountall: Unmount all active mounts

This unmounts all currently mounted points.

This takes no parameters and returns an error if the unmount does not succeed.

Eg

    rclone rc mount/unmountall

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/about: Return the space used on the remote

This takes the following parameters

- fs - a remote name string eg "drive:"

The result is as returned from rclone about --json

See the about command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/cleanup: Remove trashed files in the remote or path

This takes the following parameters

- fs - a remote name string eg "drive:"

See the cleanup command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/copyfile: Copy a file from source remote to destination remote

This takes the following parameters

- srcFs - a remote name string eg "drive:" for the source
- srcRemote - a path within that remote eg "file.txt" for the source
- dstFs - a remote name string eg "drive2:" for the destination
- dstRemote - a path within that remote eg "file2.txt" for the destination

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/copyurl: Copy the URL to the object

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- url - string, URL to read from
- autoFilename - boolean, set to true to retrieve destination file name from url

See the copyurl command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/delete: Remove files in the path

This takes the following parameters

- fs - a remote name string eg "drive:"

See the delete command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/deletefile: Remove the single file pointed to

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the deletefile command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.
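For example, a copyfile call using the parameters above might look like this (the remote names drive: and drive2: are placeholders for your own remotes):

    rclone rc operations/copyfile srcFs=drive: srcRemote=file.txt dstFs=drive2: dstRemote=file2.txt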
operations/fsinfo: Return information about the remote

This takes the following parameters

- fs - a remote name string eg "drive:"

This returns info about the remote passed in:

    {
        // optional features and whether they are available or not
        "Features": {
            "About": true,
            "BucketBased": false,
            "CanHaveEmptyDirectories": true,
            "CaseInsensitive": false,
            "ChangeNotify": false,
            "CleanUp": false,
            "Copy": false,
            "DirCacheFlush": false,
            "DirMove": true,
            "DuplicateFiles": false,
            "GetTier": false,
            "ListR": false,
            "MergeDirs": false,
            "Move": true,
            "OpenWriterAt": true,
            "PublicLink": false,
            "Purge": true,
            "PutStream": true,
            "PutUnchecked": false,
            "ReadMimeType": false,
            "ServerSideAcrossConfigs": false,
            "SetTier": false,
            "SetWrapper": false,
            "UnWrap": false,
            "WrapFs": false,
            "WriteMimeType": false
        },
        // Names of hashes available
        "Hashes": [
            "MD5",
            "SHA-1",
            "DropboxHash",
            "QuickXorHash"
        ],
        "Name": "local",    // Name as created
        "Precision": 1,     // Precision of timestamps in ns
        "Root": "/",        // Path as created
        "String": "Local file system at /" // how the remote will appear in logs
    }

This command does not have a command line equivalent so use this instead:

    rclone rc --loopback operations/fsinfo fs=remote:

operations/list: List the given remote and path in JSON format

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- opt - a dictionary of options to control the listing (optional)
  - recurse - If set recurse directories
  - noModTime - If set return modification time
  - showEncrypted - If set show decrypted names
  - showOrigIDs - If set show the IDs for each item if known
  - showHash - If set return a dictionary of hashes

The result is

- list - This is an array of objects as described in the lsjson command

See the lsjson command for more information on the above and examples.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/mkdir: Make a destination directory or container

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the mkdir command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/movefile: Move a file from source remote to destination remote

This takes the following parameters

- srcFs - a remote name string eg "drive:" for the source
- srcRemote - a path within that remote eg "file.txt" for the source
- dstFs - a remote name string eg "drive2:" for the destination
- dstRemote - a path within that remote eg "file2.txt" for the destination

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/publiclink: Create or retrieve a public link to the given file or folder.

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- unlink - boolean - if set removes the link rather than adding it (optional)
- expire - string - the expiry time of the link eg "1d" (optional)

Returns

- url - URL of the resource

See the link command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/purge: Remove a directory or container and all of its contents

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the purge command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.
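For example, a recursive listing with hashes via operations/list could look like this (the fs and path values are placeholders):

    rclone rc operations/list fs=remote: remote=dir opt='{"recurse": true, "showHash": true}'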
operations/rmdir: Remove an empty directory or container

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the rmdir command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/rmdirs: Remove all the empty directories in the path

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- leaveRoot - boolean, set to true not to delete the root

See the rmdirs command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/size: Count the number of bytes and files in remote

This takes the following parameters

- fs - a remote name string eg "drive:path/to/dir"

Returns

- count - number of files
- bytes - number of bytes in those files

See the size command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

operations/uploadfile: Upload file using multipart/form-data

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- each part in body represents a file to be uploaded

See the uploadfile command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

options/blocks: List all the option blocks

Returns

- options - a list of the options block names

options/get: Get all the options

Returns an object where keys are option block names and values are an object with the current option values in.

This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions.

options/set: Set an option

Parameters

- option block name containing an object with
  - key: value

Repeated as often as required.

Only supply the options you wish to change. If an option is unknown it will be silently ignored. Not all options will have an effect when changed like this.

For example:

This sets DEBUG level logs (-vv)

    rclone rc options/set --json '{"main": {"LogLevel": 8}}'

And this sets INFO level logs (-v)

    rclone rc options/set --json '{"main": {"LogLevel": 7}}'

And this sets NOTICE level logs (normal without -v)

    rclone rc options/set --json '{"main": {"LogLevel": 6}}'

pluginsctl/addPlugin: Add a plugin using url

Used for adding a plugin to the webgui.

This takes the following parameters

- url: http url of the github repo where the plugin is hosted (http://github.com/rclone/rclone-webui-react)

Eg

    rclone rc pluginsctl/addPlugin

AUTHENTICATION IS REQUIRED FOR THIS CALL.

pluginsctl/getPluginsForType: Get plugins with type criteria

This shows all possible plugins by a mime type.

This takes the following parameters

- type: supported mime type by a loaded plugin eg (video/mp4, audio/mp3)
- pluginType: filter plugins based on their type eg (DASHBOARD, FILE_HANDLER, TERMINAL)

and returns

- loadedPlugins: list of current production plugins
- testPlugins: list of temporarily loaded development plugins, usually running on a different server.

Eg

    rclone rc pluginsctl/getPluginsForType type=video/mp4

AUTHENTICATION IS REQUIRED FOR THIS CALL.

pluginsctl/listPlugins: Get the list of currently loaded plugins

This allows you to get the currently enabled plugins and their details.

This takes no parameters and returns

- loadedPlugins: list of current production plugins
- testPlugins: list of temporarily loaded development plugins, usually running on a different server.
Eg

    rclone rc pluginsctl/listPlugins

AUTHENTICATION IS REQUIRED FOR THIS CALL.

pluginsctl/listTestPlugins: Show currently loaded test plugins

This allows listing of test plugins, ie those with rclone.test set to true in the plugin's package.json.

This takes no parameters and returns

- loadedTestPlugins: list of currently available test plugins

Eg

    rclone rc pluginsctl/listTestPlugins

AUTHENTICATION IS REQUIRED FOR THIS CALL.

pluginsctl/removePlugin: Remove a loaded plugin

This allows you to remove a plugin using its name.

This takes parameters

- name: name of the plugin in the format author/plugin_name

Eg

    rclone rc pluginsctl/removePlugin name=rclone/video-plugin

AUTHENTICATION IS REQUIRED FOR THIS CALL.

pluginsctl/removeTestPlugin: Remove a test plugin

This allows you to remove a plugin using its name.

This takes the following parameters

- name: name of the plugin in the format author/plugin_name

Eg

    rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react

AUTHENTICATION IS REQUIRED FOR THIS CALL.

rc/error: This returns an error

This returns an error with the input as part of its error string. Useful for testing error handling.

rc/list: List all the registered remote control commands

This lists all the registered remote control commands as a JSON map in the commands response.

rc/noop: Echo the input to the output parameters

This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

rc/noopauth: Echo the input to the output parameters requiring auth

This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

sync/copy: copy a directory from source remote to destination remote

This takes the following parameters

- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination

See the copy command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

sync/move: move a directory from source remote to destination remote

This takes the following parameters

- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination
- deleteEmptySrcDirs - delete empty src directories if set

See the move command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

sync/sync: sync a directory from source remote to destination remote

This takes the following parameters

- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination

See the sync command for more information on the above.

AUTHENTICATION IS REQUIRED FOR THIS CALL.

vfs/forget: Forget files or directories in the directory cache.

This forgets the paths in the directory cache causing them to be re-read from the remote when needed.

If no paths are passed in then it will forget all the paths in the directory cache.

    rclone rc vfs/forget

Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, eg

    rclone rc vfs/forget file=hello file2=goodbye dir=home/junk

This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used.
If there is more than one VFS in use then the "fs" parameter must be supplied. vfs/list: List active VFSes. This lists the active VFSes. It returns a list under the key "vfses" where the values are the VFS names that could be passed to the other VFS commands in the "fs" parameter. vfs/poll-interval: Get the status or update the value of the poll-interval option. Without any parameter given this returns the current status of the poll-interval setting. When the interval=duration parameter is set, the poll-interval value is updated and the polling function is notified. Setting interval=0 disables poll-interval. rclone rc vfs/poll-interval interval=5m The timeout=duration parameter can be used to specify a time to wait for the current poll function to apply the new value. If timeout is less or equal 0, which is the default, wait indefinitely. The new poll-interval value will only be active when the timeout is not reached. If poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the used remote. This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied. vfs/refresh: Refresh the directory cache. This reads the directories for the specified paths and freshens the directory cache. If no paths are passed in then it will refresh the root directory. rclone rc vfs/refresh Otherwise pass directories in as dir=path. Any parameter key starting with dir will refresh that directory, eg rclone rc vfs/refresh dir=home/junk dir2=data/misc If the parameter recursive=true is given the whole directory tree will get refreshed. This refresh will use --fast-list if enabled. This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied. Accessing the remote control via HTTP Rclone implements a simple HTTP based protocol. Each endpoint takes an JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values. All calls must made using POST. The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using curl. The response will be a JSON blob in the body of the response. This is formatted to be reasonably human readable. Error returns If an error occurs then there will be an HTTP error status (eg 500) and the body of the response will contain a JSON encoded error object, eg { "error": "Expecting string value for key \"remote\" (was float64)", "input": { "fs": "/tmp", "remote": 3 }, "status": 400 "path": "operations/rmdir", } The keys in the error response are - error - error string - input - the input parameters to the call - status - the HTTP status code - path - the path of the call CORS The sever implements basic CORS support and allows all origins for that. The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back. 
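You can see this with a simulated preflight request (the origin and header names below are arbitrary examples):

    curl -i -X OPTIONS -H "Origin: http://example.com" -H "Access-Control-Request-Headers: Content-Type" 'http://localhost:5572/rc/noop'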
Using POST with URL parameters only

    curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'

Response

    {
        "potato": "1",
        "sausage": "2"
    }

Here is what an error response looks like:

    curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'

    {
        "error": "arbitrary error on input map[potato:1 sausage:2]",
        "input": {
            "potato": "1",
            "sausage": "2"
        }
    }

Note that curl doesn't return errors to the shell unless you use the -f option

    $ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
    curl: (22) The requested URL returned error: 400 Bad Request
    $ echo $?
    22

Using POST with a form

    curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop

Response

    {
        "potato": "1",
        "sausage": "2"
    }

Note that you can combine these with URL parameters too with the POST parameters taking precedence.

    curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"

Response

    {
        "potato": "1",
        "rutabaga": "3",
        "sausage": "4"
    }

Using POST with a JSON blob

    curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop

Response

    {
        "potato": 2,
        "sausage": 1
    }

This can be combined with URL parameters too if required. The JSON blob takes precedence.

    curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'

    {
        "potato": 2,
        "rutabaga": "3",
        "sausage": 1
    }

Debugging rclone with pprof

If you use the --rc flag this will also enable the use of the go profiling tools on the same port.

To use these, first install go.

Debugging memory use

To profile rclone's memory use you can run:

    go tool pprof -web http://localhost:5572/debug/pprof/heap

This should open a page in your browser showing what is using what memory.

You can also use the -text flag to produce a textual summary

    $ go tool pprof -text http://localhost:5572/debug/pprof/heap
    Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
          flat  flat%   sum%        cum   cum%
     1024.03kB 66.62% 66.62%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
         513kB 33.38%   100%      513kB 33.38%  net/http.newBufioWriterSize
             0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/all.init
             0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve.init
             0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve/restic.init
             0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2.init
             0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init
             0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init.0
             0     0%   100%  1024.03kB 66.62%  main.init
             0     0%   100%      513kB 33.38%  net/http.(*conn).readRequest
             0     0%   100%      513kB 33.38%  net/http.(*conn).serve
             0     0%   100%  1024.03kB 66.62%  runtime.main

Debugging go routine leaks

Memory leaks are most often caused by go routine leaks keeping memory alive which should have been garbage collected.

See all active go routines using

    curl http://localhost:5572/debug/pprof/goroutine?debug=1

Or go to http://localhost:5572/debug/pprof/goroutine?debug=1 in your browser.
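A simple way to spot a leak is to snapshot the goroutine dump before and after the operation you suspect and compare the two (a sketch - the file names are arbitrary):

    curl -s 'http://localhost:5572/debug/pprof/goroutine?debug=1' > goroutines-before.txt
    # run the suspect operation, then:
    curl -s 'http://localhost:5572/debug/pprof/goroutine?debug=1' > goroutines-after.txt
    diff goroutines-before.txt goroutines-after.txt

Goroutines that appear only in the second dump are candidates for the leak.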
Other profiles to look at

You can see a summary of profiles available at http://localhost:5572/debug/pprof/

Here is how to use some of them:

- Memory: go tool pprof http://localhost:5572/debug/pprof/heap
- Go routines: curl http://localhost:5572/debug/pprof/goroutine?debug=1
- 30-second CPU profile: go tool pprof http://localhost:5572/debug/pprof/profile
- 5-second execution trace: wget http://localhost:5572/debug/pprof/trace?seconds=5
- Goroutine blocking profile
  - Enable first with: rclone rc debug/set-block-profile-rate rate=1 (docs)
  - go tool pprof http://localhost:5572/debug/pprof/block
- Contended mutexes:
  - Enable first with: rclone rc debug/set-mutex-profile-fraction rate=1 (docs)
  - go tool pprof http://localhost:5572/debug/pprof/mutex

See the net/http/pprof docs for more info on how to use the profiling and for a general overview see the Go team's blog post on profiling go programs.

The profiling hook is zero overhead unless it is used.

OVERVIEW OF CLOUD STORAGE SYSTEMS

Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.

Features

Here is an overview of the major features of each cloud storage system.

Name                           Hash           ModTime   Case Insensitive   Duplicate Files   MIME Type
------------------------------ -------------- --------- ------------------ ----------------- -----------
1Fichier                       Whirlpool      No        No                 Yes               R
Amazon Drive                   MD5            No        Yes                No                R
Amazon S3                      MD5            Yes       No                 No                R/W
Backblaze B2                   SHA1           Yes       No                 No                R/W
Box                            SHA1           Yes       Yes                No                -
Citrix ShareFile               MD5            Yes       Yes                No                -
Dropbox                        DBHASH †       Yes       Yes                No                -
FTP                            -              No        No                 No                -
Google Cloud Storage           MD5            Yes       No                 No                R/W
Google Drive                   MD5            Yes       No                 Yes               R/W
Google Photos                  -              No        No                 Yes               R
HTTP                           -              No        No                 No                R
Hubic                          MD5            Yes       No                 No                R/W
Jottacloud                     MD5            Yes       Yes                No                R/W
Koofr                          MD5            No        Yes                No                -
Mail.ru Cloud                  Mailru ‡‡‡     Yes       Yes                No                -
Mega                           -              No        No                 Yes               -
Memory                         MD5            Yes       No                 No                -
Microsoft Azure Blob Storage   MD5            Yes       No                 No                R/W
Microsoft OneDrive             SHA1 ‡‡        Yes       Yes                No                R
OpenDrive                      MD5            Yes       Yes                Partial *         -
OpenStack Swift                MD5            Yes       No                 No                R/W
pCloud                         MD5, SHA1      Yes       No                 No                W
premiumize.me                  -              No        Yes                No                R
put.io                         CRC-32         Yes       No                 Yes               R
QingStor                       MD5            No        No                 No                R/W
Seafile                        -              No        No                 No                -
SFTP                           MD5, SHA1 ‡    Yes       Depends            No                -
SugarSync                      -              No        No                 No                -
Tardigrade                     -              Yes       No                 No                -
WebDAV                         MD5, SHA1 ††   Yes †††   Depends            No                -
Yandex Disk                    MD5            Yes       No                 No                R/W
The local filesystem           All            Yes       Depends            No                -

Hash

The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.

To verify checksums when transferring between cloud storage systems they must support a common hash type.

† Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.

‡ SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH.

†† WebDAV supports hashes when used with Owncloud and Nextcloud only.

††† WebDAV supports modtimes when used with Owncloud and Nextcloud only.

‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own QuickXorHash.

‡‡‡ Mail.ru uses its own modified SHA1 hash

ModTime

The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.
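For example, when the destination cannot store modification times you can make the sync compare checksums instead of times (the paths are placeholders):

    rclone sync --checksum /path/to/source remote:dest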
All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.

Case Insensitive

If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, eg file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.

This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.

The local filesystem and SFTP may or may not be case sensitive depending on OS.

- Windows - usually case insensitive, though case is preserved
- OSX - usually case insensitive, though it is possible to format case sensitive
- Linux - usually case sensitive, but there are case insensitive file systems (eg FAT formatted USB keys)

Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.

Duplicate files

If a cloud storage system allows duplicate files then it can have two objects with the same name.

This confuses rclone greatly when syncing - use the rclone dedupe command to rename or remove duplicates.

* Opendrive does not support creation of duplicate files using their web client interface or other stock clients, but the underlying storage platform has been determined to allow duplicate files, and it is possible to create them with rclone. It may be that this is a mistake or an unsupported feature.

Restricted filenames

Some cloud storage systems might have restrictions on the characters that are usable in file or directory names. When rclone detects such a name during a file upload, it will transparently replace the restricted characters with similar looking Unicode characters.

This process is designed to avoid ambiguous file names as much as possible and allows files to be moved between many cloud storage systems transparently.

The name shown by rclone to the user or during log output will only contain a minimal set of replaced characters to ensure correct formatting and not necessarily the actual name used on the cloud storage.

This transformation is reversed when downloading a file or parsing rclone arguments. For example, when uploading a file named my file?.txt to Onedrive, it will be displayed as my file?.txt on the console, but stored as my file？.txt (the ? gets replaced by the similar looking ？ character) on Onedrive. The reverse transformation allows reading a file unusual/name.txt from Google Drive, by passing the name unusual／name.txt (the / needs to be replaced by the similar looking ／ character) on the command line.

Default restricted characters

The table below shows the characters that are replaced by default.

When a replacement character is found in a filename, this character will be escaped with the ‛ character to avoid ambiguous file names. (e.g. a file named ␀.txt would be shown as ‛␀.txt)

Each cloud storage backend can use a different set of characters, which will be specified in the documentation for each backend.
Character   Value   Replacement
----------- ------- -------------
NUL         0x00    ␀
SOH         0x01    ␁
STX         0x02    ␂
ETX         0x03    ␃
EOT         0x04    ␄
ENQ         0x05    ␅
ACK         0x06    ␆
BEL         0x07    ␇
BS          0x08    ␈
HT          0x09    ␉
LF          0x0A    ␊
VT          0x0B    ␋
FF          0x0C    ␌
CR          0x0D    ␍
SO          0x0E    ␎
SI          0x0F    ␏
DLE         0x10    ␐
DC1         0x11    ␑
DC2         0x12    ␒
DC3         0x13    ␓
DC4         0x14    ␔
NAK         0x15    ␕
SYN         0x16    ␖
ETB         0x17    ␗
CAN         0x18    ␘
EM          0x19    ␙
SUB         0x1A    ␚
ESC         0x1B    ␛
FS          0x1C    ␜
GS          0x1D    ␝
RS          0x1E    ␞
US          0x1F    ␟
/           0x2F    ／
DEL         0x7F    ␡

The default encoding will also encode these file names as they are problematic with many cloud storage systems.

File name   Replacement
----------- -------------
.           ．
..          ．．

Invalid UTF-8 bytes

Some backends only support a sequence of well formed UTF-8 bytes as file or directory names.

In this case all invalid UTF-8 bytes will be replaced with a quoted representation of the byte value to allow uploading a file to such a backend. For example, the invalid byte 0xFE will be encoded as ‛FE.

A common source of invalid UTF-8 bytes is local filesystems that store names in an encoding other than UTF-8 or UTF-16, such as latin1. See the local filenames section for details.

Encoding option

Most backends have an encoding option, specified as a flag --backend-encoding where backend is the name of the backend, or as a config parameter encoding (you'll need to select the Advanced config in rclone config to see it).

This will have a default value which encodes and decodes characters in such a way as to preserve the maximum number of characters (see above).

However this can be incorrect in some scenarios, for example if you have a Windows file system with characters such as * and ? that you want to remain as those characters on the remote rather than being translated to ＊ and ？.

The --backend-encoding flags allow you to change that. You can disable the encoding completely with --backend-encoding None or set encoding = None in the config file.

Encoding takes a comma separated list of encodings. You can see the list of all available characters by passing an invalid value to this flag, eg --local-encoding "help", and rclone help flags encoding will show you the defaults for the backends.

Encoding        Characters
--------------- -------------------------------------------------------------
Asterisk        *
BackQuote       `
BackSlash       \
Colon           :
CrLf            CR 0x0D, LF 0x0A
Ctl             All control characters 0x00-0x1F
Del             DEL 0x7F
Dollar          $
Dot             .
DoubleQuote     "
Hash            #
InvalidUtf8     An invalid UTF-8 character (eg latin1)
LeftCrLfHtVt    CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the left of a string
LeftPeriod      . on the left of a string
LeftSpace       SPACE on the left of a string
LeftTilde       ~ on the left of a string
LtGt            <, >
None            No characters are encoded
Percent         %
Pipe            |
Question        ?
RightCrLfHtVt   CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string
RightPeriod     . on the right of a string
RightSpace      SPACE on the right of a string
SingleQuote     '
Slash           /

To take a specific example, the FTP backend's default encoding is

    --ftp-encoding "Slash,Del,Ctl,RightSpace,Dot"

However, let's say the FTP server is running on Windows and can't have any of the invalid Windows characters in file names. You are backing up Linux servers to this FTP server which do have those characters in file names.
So you would add the Windows set, which is

    Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot

to the existing ones, giving:

    Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del,RightSpace

This can be specified using the --ftp-encoding flag or using an encoding parameter in the config file.

Or let's say you have a Windows server but you want to preserve * and ?; you would then have this as the encoding (the Windows encoding minus Asterisk and Question):

    Slash,LtGt,DoubleQuote,Colon,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot

This can be specified using the --local-encoding flag or using an encoding parameter in the config file.

MIME Type

MIME types (also known as media types) classify types of documents using a simple text classification, eg text/html or application/pdf.

Some cloud storage systems support reading (R) the MIME type of objects and some support writing (W) the MIME type of objects.

The MIME type can be important if you are serving files directly to HTTP from the storage system.

If you are copying from a remote which supports reading (R) to a remote which supports writing (W) then rclone will preserve the MIME types. Otherwise they will be guessed from the extension, or the remote itself may assign the MIME type.

Optional Features

All the remotes support a basic set of features, but there are some optional features supported by some remotes which make some operations more efficient.

Name                           Purge   Copy   Move   DirMove   CleanUp   ListR   StreamUpload   LinkSharing   About   EmptyDir
------------------------------ ------- ------ ------ --------- --------- ------- -------------- ------------- ------- ----------
1Fichier                       No      No     No     No        No        No      No             No            No      Yes
Amazon Drive                   Yes     No     Yes    Yes       No #575   No      No             No #2178      No      Yes
Amazon S3                      No      Yes    No     No        Yes       Yes     Yes            No #2178      No      No
Backblaze B2                   No      Yes    No     No        Yes       Yes     Yes            Yes           No      No
Box                            Yes     Yes    Yes    Yes       Yes ‡‡    No      Yes            Yes           No      Yes
Citrix ShareFile               Yes     Yes    Yes    Yes       No        No      Yes            No            No      Yes
Dropbox                        Yes     Yes    Yes    Yes       No #575   No      Yes            Yes           Yes     Yes
FTP                            No      No     Yes    Yes       No        No      Yes            No #2178      No      Yes
Google Cloud Storage           Yes     Yes    No     No        No        Yes     Yes            No #2178      No      No
Google Drive                   Yes     Yes    Yes    Yes       Yes       Yes     Yes            Yes           Yes     Yes
Google Photos                  No      No     No     No        No        No      No             No            No      No
HTTP                           No      No     No     No        No        No      No             No #2178      No      Yes
Hubic                          Yes †   Yes    No     No        No        Yes     Yes            No #2178      Yes     No
Jottacloud                     Yes     Yes    Yes    Yes       Yes       Yes     No             Yes           Yes     Yes
Mail.ru Cloud                  Yes     Yes    Yes    Yes       Yes       No      No             Yes           Yes     Yes
Mega                           Yes     No     Yes    Yes       Yes       No      No             No #2178      Yes     Yes
Memory                         No      Yes    No     No        No        Yes     Yes            No            No      No
Microsoft Azure Blob Storage   Yes     Yes    No     No        No        Yes     Yes            No #2178      No      No
Microsoft OneDrive             Yes     Yes    Yes    Yes       Yes       No      No             Yes           Yes     Yes
OpenDrive                      Yes     Yes    Yes    Yes       No        No      No             No            No      Yes
OpenStack Swift                Yes †   Yes    No     No        No        Yes     Yes            No #2178      Yes     No
pCloud                         Yes     Yes    Yes    Yes       Yes       No      No             Yes           Yes     Yes
premiumize.me                  Yes     No     Yes    Yes       No        No      No             Yes           Yes     Yes
put.io                         Yes     No     Yes    Yes       Yes       No      Yes            No #2178      Yes     Yes
QingStor                       No      Yes    No     No        Yes       Yes     No             No #2178      No      No
Seafile                        Yes     Yes    Yes    Yes       Yes       Yes     Yes            Yes           Yes     Yes
SFTP                           No      No     Yes    Yes       No        No      Yes            No #2178      Yes     Yes
SugarSync                      Yes     Yes    Yes    Yes       No        No      Yes            Yes           No      Yes
Tardigrade                     Yes †   No     No     No        No        Yes     Yes            No            No      No
WebDAV                         Yes     Yes    Yes    Yes       No        No      Yes ‡          No #2178      Yes     Yes
Yandex Disk                    Yes     Yes    Yes    Yes       Yes       No      Yes            Yes           Yes     Yes
The local filesystem           Yes     No     Yes    Yes       No        No      Yes            No            Yes     Yes

Purge

This deletes a directory quicker than just deleting all the files in the directory.
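For example, to delete a directory called backup and all of its contents in a single operation (remote: is a placeholder remote name; on remotes without Purge support rclone falls back to deleting the files individually):

    rclone purge remote:backup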
† Note Swift, Hubic, and Tardigrade implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.

‡ StreamUpload is not supported with Nextcloud

Copy

Used when copying an object to and from the same remote. This is known as a server side copy so you can copy a file without downloading it and uploading it again. It is used by rclone copy, and by rclone move if the remote doesn't support Move directly.

If the server doesn't support Copy directly then for copy operations the file is downloaded then re-uploaded.

Move

Used when moving/renaming an object on the same remote. This is known as a server side move of a file. This is used in rclone move if the server doesn't support DirMove.

If the server isn't capable of Move then rclone simulates it with Copy then delete. If the server doesn't support Copy then rclone will download the file and re-upload it.

DirMove

This is used to implement rclone move to move a directory if possible. If it isn't then it will use Move on each file (which falls back to Copy then download and upload - see Move section).

CleanUp

This is used for emptying the trash for a remote by rclone cleanup.

If the server can't do CleanUp then rclone cleanup will return an error.

‡‡ Note that while Box implements this it has to delete every file individually so it will be slower than emptying the trash via the WebUI.

ListR

The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs for more details.

StreamUpload

Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat.

LinkSharing

Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.

About

This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash.

This is also used to return the space used, available for rclone mount.

If the server can't do About then rclone about will return an error.

EmptyDir

The remote supports empty directories. See Limitations for details. Most Object/Bucket based remotes do not support this.

GLOBAL FLAGS

This describes the global flags available to every rclone command split into two groups, non backend and backend flags.

Non Backend Flags

These flags are available for every command.

      --ask-password                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                 If enabled, do not request console confirmation.
      --backup-dir string            Make backups into hierarchy based in DIR.
      --bind string                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --buffer-size SizeSuffix       In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --bwlimit-file BwTimetable     Bandwidth limit per file in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --ca-cert string               CA certificate used to verify servers
      --cache-dir string             Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --check-first                  Do all the checks before starting transfers.
      --checkers int                 Number of checkers to run in parallel.
(default 8) -c, --checksum Skip based on checksum (if available) & size, not mod-time & size --client-cert string Client SSL certificate (PEM) for mutual TLS auth --client-key string Client SSL private key (PEM) for mutual TLS auth --compare-dest string Include additional server-side path during comparison. --config string Config file. (default "$HOME/.config/rclone/rclone.conf") --contimeout duration Connect timeout (default 1m0s) --copy-dest string Implies --compare-dest but also copies files from path into destination. --cpuprofile string Write cpu profile to file --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer --delete-excluded Delete files on dest excluded from sync --disable string Disable a comma separated list of features. Use help to see a list. -n, --dry-run Do a trial run with no permanent changes --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP headers - may contain sensitive info --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file (use - to read from stdin) --exclude-if-present string Exclude directories if filename is present --expect-continue-timeout duration Timeout when using expect / 100-continue in HTTP (default 1s) --fast-list Use recursive list if available. Uses more memory but fewer transactions. --files-from stringArray Read list of source-file names from file (use - to read from stdin) --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file (use - to read from stdin) --header stringArray Set HTTP header for all transactions --header-download stringArray Set HTTP header for download transactions --header-upload stringArray Set HTTP header for upload transactions --ignore-case Ignore case in filters (case insensitive) --ignore-case-sync Ignore case when synchronizing --ignore-checksum Skip post copy check of checksums. --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file (use - to read from stdin) -i, --interactive Enable interactive mode --log-file string Log everything to this file --log-format string Comma separated list of log format options (default "date,time") --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-backlog int Maximum number of objects in sync or check backlog. 
(default 10000) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) --max-duration duration Maximum duration rclone will transfer data for. --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) --max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer. (default off) --memprofile string Write memory profile to file --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250M) --multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-check-dest Don't check the destination, copy regardless. --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-unicode-normalization Don't normalize unicode characters in filenames. --no-update-modtime Don't update destination mod-time if files identical. --order-by string Instructions on how to order the transfers, eg 'size,descending' --password-command SpaceSepList Command for supplying password for encrypted configuration. -P, --progress Show progress during transfer. -q, --quiet Print as little stuff as possible --rc Enable the remote control server. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --rc-allow-origin string Set the allowed origin for CORS. --rc-baseurl string Prefix for URLs - leave blank for root. --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --rc-client-ca string Client certificate authority to verify clients with --rc-enable-metrics Enable prometheus metrics on /metrics --rc-files string Path to local files to serve on the HTTP server. --rc-htpasswd string htpasswd file - if not provided no authentication is done --rc-job-expire-duration duration expire finished async jobs older than this value (default 1m0s) --rc-job-expire-interval duration interval to check for expired async jobs (default 10s) --rc-key string SSL PEM Private key --rc-max-header-bytes int Maximum size of request header (default 4096) --rc-no-auth Don't require auth for certain methods. --rc-pass string Password for authentication. --rc-realm string realm for authentication (default "rclone") --rc-serve Enable the serving of remote objects. --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-template string User Specified Template. --rc-user string User name for authentication. --rc-web-fetch-url string URL to fetch the releases for webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") --rc-web-gui Launch WebGUI on localhost --rc-web-gui-force-update Force update to latest version of web gui --rc-web-gui-no-open-browser Don't open the browser automatically --rc-web-gui-update Check and update to latest version of web gui --refresh-times Refresh the modtime of remote files. 
--retries int Retry operations this many times if they fail (default 3) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --size-only Skip based on size only, not mod-time or checksum --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-one-line Make the stats fit on one line. --stats-one-line-date Enables --stats-one-line and add current date/time prefix. --stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix to add to changed files. --suffix-keep-extension Preserve the extension when using --suffix. --syslog Use Syslog for logging --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --timeout duration IO idle timeout (default 5m0s) --tpslimit float Limit HTTP transactions per second to this. --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --track-renames When synchronizing, track file renames and do a server side move if possible --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash") --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. --use-cookies Enable session cookiejar. --use-json-log Use json log format. --use-mmap Use mmap allocator (see docs). --use-server-modtime Use server modified time instead of object metadata --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.53.3") -v, --verbose count Print lots more stuff (repeat for more) Backend Flags These flags are available for every command. They control the backends and may be set in the config file. --acd-auth-url string Auth server URL. --acd-client-id string OAuth Client Id --acd-client-secret string OAuth Client Secret --acd-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot) --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) --acd-token string OAuth Access Token as a JSON blob. --acd-token-url string Token server url. --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --alias-remote string Remote or path to alias. --azureblob-access-tier string Access tier of blob: hot, cool or archive. --azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator) --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) --azureblob-disable-checksum Don't store MD5 checksum with object metadata. --azureblob-encoding MultiEncoder This sets the encoding for the backend. 
(default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8) --azureblob-endpoint string Endpoint for the service --azureblob-key string Storage Account Key (leave blank to use SAS URL or Emulator) --azureblob-list-chunk int Size of blob list. (default 5000) --azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s) --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. --azureblob-sas-url string SAS URL for container level access only --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint) --b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4G) --b2-disable-checksum Disable checksums for large (> upload cutoff) files --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w) --b2-download-url string Custom endpoint for downloads. --b2-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --b2-endpoint string Endpoint for the service. --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-key string Application Key --b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s) --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) --b2-versions Include old versions in directory listings. --box-access-token string Box App Primary Access Token --box-auth-url string Auth server URL. --box-box-config-file string Box App config.json location --box-box-sub-type string (default "user") --box-client-id string OAuth Client Id --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point. --box-token string OAuth Access Token as a JSON blob. --box-token-url string Token server url. --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-db-purge Clear all the cached data for this remote on start. 
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server --cache-plex-password string The password of the Plex user (obscured) --cache-plex-url string The URL of the Plex server --cache-plex-username string The username of the Plex user --cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-remote string Remote to cache. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-writes Cache file data on writes through the FS --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2G) --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks. --chunker-hash-type string Choose how chunker handles hash sums. All modes but "none" require metadata. (default "md5") --chunker-meta-format string Format of the metadata object or "none". By default "simplejson". (default "simplejson") --chunker-name-format string String format of chunk file names. (default "*.rclone_chunk.###") --chunker-remote string Remote to chunk/unchunk. --chunker-start-from int Minimum valid chunk number. Usually 0 or 1. (default 1) -L, --copy-links Follow symlinks and copy the pointed to item. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-password string Password or pass phrase for encryption. (obscured) --crypt-password2 string Password or pass phrase for salt. Optional but recommended. (obscured) --crypt-remote string Remote to encrypt/decrypt. --crypt-server-side-across-configs Allow server side operations (eg copy) to work across different crypt configs. --crypt-show-mapping For all files listed show how the names encrypt. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-auth-url string Auth server URL. --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-client-id string Google Application Client Id --drive-client-secret string OAuth Client Secret --drive-disable-http2 Disable drive using http2 (default true) --drive-encoding MultiEncoder This sets the encoding for the backend. (default InvalidUtf8) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-formats string Deprecated: see export_formats --drive-impersonate string Impersonate this user when using a service account. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. 
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) --drive-root-folder-id string ID of the root folder --drive-scope string Scope that rclone should use when requesting access from drive. --drive-server-side-across-configs Allow server side operations (eg copy) to work across different drive configs. --drive-service-account-credentials string Service Account Credentials JSON blob --drive-service-account-file string Service Account Credentials JSON file path --drive-shared-with-me Only show files that are shared with me. --drive-size-as-quota Show sizes as storage quota usage, not actual size. --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only. --drive-skip-gdocs Skip google documents in all listings. --drive-skip-shortcuts If set skip shortcut files --drive-starred-only Only show files that are starred. --drive-stop-on-upload-limit Make upload limit errors be fatal --drive-team-drive string ID of the Team Drive --drive-token string OAuth Access Token as a JSON blob. --drive-token-url string Token server url. --drive-trashed-only Only show files that are in the trash. --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-use-created-date Use file created date instead of modified date., --drive-use-shared-date Use date file was shared instead of modified date. --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --dropbox-auth-url string Auth server URL. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --dropbox-client-id string OAuth Client Id --dropbox-client-secret string OAuth Client Secret --dropbox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot) --dropbox-impersonate string Impersonate this user when using a business account. --dropbox-token string OAuth Access Token as a JSON blob. --dropbox-token-url string Token server url. --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl --fichier-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot) --fichier-shared-folder string If you want to download a shared folder, add this parameter --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,RightSpace,Dot) --ftp-explicit-tls Use FTP over TLS (Explicit) --ftp-host string FTP host to connect to --ftp-no-check-certificate Do not verify the TLS certificate of the server --ftp-pass string FTP password (obscured) --ftp-port string FTP port, leave blank to use default (21) --ftp-tls Use FTPS over TLS (Implicit) --ftp-user string FTP username, leave blank for current username, $USER --gcs-anonymous Access public buckets and objects without credentials --gcs-auth-url string Auth server URL. --gcs-bucket-acl string Access Control List for new buckets. 
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies. --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret --gcs-encoding MultiEncoder This sets the encoding for the backend. (default Slash,CrLf,InvalidUtf8,Dot) --gcs-location string Location for the newly created buckets. --gcs-object-acl string Access Control List for new objects. --gcs-project-number string Project number. --gcs-service-account-file string Service Account Credentials JSON file path --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-token string OAuth Access Token as a JSON blob. --gcs-token-url string Token server url. --gphotos-auth-url string Auth server URL. --gphotos-client-id string OAuth Client Id --gphotos-client-secret string OAuth Client Secret --gphotos-read-only Set to make the Google Photos backend read only. --gphotos-read-size Set to read the size of media items. --gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000) --gphotos-token string OAuth Access Token as a JSON blob. --gphotos-token-url string Token server url. --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests to find file sizes in dir listing --http-no-slash Set this if the site doesn't end directories with / --http-url string URL of http host to connect to --hubic-auth-url string Auth server URL. --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --hubic-client-id string OAuth Client Id --hubic-client-secret string OAuth Client Secret --hubic-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8) --hubic-no-chunk Don't chunk files during streaming upload. --hubic-token string OAuth Access Token as a JSON blob. --hubic-token-url string Token server url. --jottacloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --jottacloud-trashed-only Only show files that are in the trash. --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M) --koofr-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net") --koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used. --koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured) --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true) --koofr-user string Your Koofr user name -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension --local-case-insensitive Force the filesystem to report itself as case insensitive --local-case-sensitive Force the filesystem to report itself as case sensitive. --local-encoding MultiEncoder This sets the encoding for the backend. 
(default Slash,Dot) --local-no-check-updated Don't check to see if the files change during upload --local-no-set-modtime Disable setting modtime --local-no-sparse Disable sparse files for multi-thread downloads --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --local-nounc string Disable UNC (long path names) conversion on Windows --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) --mailru-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot) --mailru-pass string Password (obscured) --mailru-speedup-enable Skip full upload if there is another file with same data hash. (default true) --mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash). (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf") --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3G) --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32M) --mailru-user string User name (usually email) --mega-debug Output more debug from Mega. --mega-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot) --mega-hard-delete Delete files permanently rather than putting them into the trash. --mega-pass string Password. (obscured) --mega-user string User name -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --onedrive-auth-url string Auth server URL. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes). (default 10M) --onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) --onedrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-no-versions Remove all versions on modifying operations --onedrive-server-side-across-configs Allow server side operations (eg copy) to work across different onedrive configs. --onedrive-token string OAuth Access Token as a JSON blob. --onedrive-token-url string Token server url. --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10M) --opendrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --opendrive-password string Password. (obscured) --opendrive-username string Username --pcloud-auth-url string Auth server URL. --pcloud-client-id string OAuth Client Id --pcloud-client-secret string OAuth Client Secret --pcloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to. (default "api.pcloud.com") --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point. (default "d0") --pcloud-token string OAuth Access Token as a JSON blob. 
--pcloud-token-url string Token server url. --premiumizeme-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --qingstor-access-key-id string QingStor Access Key ID --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) --qingstor-connection-retries int Number of connection retries. (default 3) --qingstor-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8) --qingstor-endpoint string Enter an endpoint URL to connection QingStor API. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --qingstor-secret-access-key string QingStor Secret Access Key (password) --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1) --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) --qingstor-zone string Zone to connect to. --s3-access-key-id string AWS Access Key ID. --s3-acl string Canned ACL used when creating buckets and storing or copying objects. --s3-bucket-acl string Canned ACL used when creating buckets. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656G) --s3-disable-checksum Don't store MD5 checksum with object metadata --s3-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot) --s3-endpoint string Endpoint for S3 API. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery. --s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request). (default 1000) --s3-location-constraint string Location constraint - must be set to match the Region. --s3-max-upload-parts int Maximum number of parts in a multipart upload. (default 10000) --s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s) --s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. --s3-no-check-bucket If set don't attempt to check the bucket exists or create it --s3-profile string Profile to use in the shared credentials file --s3-provider string Choose your S3 provider. --s3-region string Region to connect to. --s3-secret-access-key string AWS Secret Access Key (password) --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-session-token string An AWS session token --s3-shared-credentials-file string Path to the shared credentials file --s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3. --s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data. --s3-sse-customer-key-md5 string If using SSE-C you must provide the secret encryption key MD5 checksum. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-storage-class string The storage class to use when storing new objects in S3. 
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4) --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint. --s3-v2-auth If true use v2 authentication. --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn't exist --seafile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8) --seafile-library string Name of the library. Leave blank to access all non-encrypted libraries. --seafile-library-key string Library password (for encrypted libraries only). Leave blank if you pass it through the command line. (obscured) --seafile-pass string Password (obscured) --seafile-url string URL of seafile host to connect to --seafile-user string User name (usually email address) --sftp-ask-password Allow asking for SFTP password when needed. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --sftp-host string SSH host to connect to --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. (obscured) --sftp-key-pem string Raw PEM-encoded private key, If specified, will override key_file parameter. --sftp-key-use-agent When set forces the usage of the ssh-agent. --sftp-md5sum-command string The command used to read md5 hashes. Leave blank for autodetect. --sftp-pass string SSH password, leave blank to use ssh-agent. (obscured) --sftp-path-override string Override path used by SSH connection. --sftp-port string SSH port, leave blank to use default (22) --sftp-server-command string Specifies the path or command to run a sftp server on the remote host. --sftp-set-modtime Set the modified time on the remote if set. (default true) --sftp-sha1sum-command string The command used to read sha1 hashes. Leave blank for autodetect. --sftp-skip-links Set to skip any symlinks and any other non regular files. --sftp-subsystem string Specifies the SSH2 subsystem on the remote host. (default "sftp") --sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods. --sftp-user string SSH username, leave blank for current username, ncw --sharefile-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 64M) --sharefile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls. --sharefile-root-folder-id string ID of the root folder --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128M) --skip-links Don't warn about skipped symlinks. --sugarsync-access-key-id string Sugarsync Access Key ID. --sugarsync-app-id string Sugarsync App ID. --sugarsync-authorization string Sugarsync authorization --sugarsync-authorization-expiry string Sugarsync authorization expiry --sugarsync-deleted-id string Sugarsync deleted folder id --sugarsync-encoding MultiEncoder This sets the encoding for the backend. 
(default Slash,Ctl,InvalidUtf8,Dot)
      --sugarsync-hard-delete                        Permanently delete files if true
      --sugarsync-private-access-key string          Sugarsync Private Access Key
      --sugarsync-refresh-token string               Sugarsync refresh token
      --sugarsync-root-id string                     Sugarsync root id
      --sugarsync-user string                        Sugarsync user
      --swift-application-credential-id string       Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string     Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string                            Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string                      Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int                       AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string                          User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-encoding MultiEncoder                  This sets the encoding for the backend. (default Slash,InvalidUtf8)
      --swift-endpoint-type string                   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth                               Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string                             API key or password (OS_PASSWORD).
      --swift-no-chunk                               Don't chunk files during streaming upload.
      --swift-region string                          Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string                  The storage policy to use when creating a new container
      --swift-storage-url string                     Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string                          Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string                   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string                       Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string                            User name to log in (OS_USERNAME).
      --swift-user-id string                         User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --tardigrade-access-grant string               Access Grant.
      --tardigrade-api-key string                    API Key.
      --tardigrade-passphrase string                 Encryption Passphrase. To access existing objects enter passphrase used for uploading.
      --tardigrade-provider string                   Choose an authentication method. (default "existing")
      --tardigrade-satellite-address string          Satellite Address. Custom satellite address should match the format: <nodeid>@<address>:<port>. (default "us-central-1.tardigrade.io")
      --union-action-policy string                   Policy to choose upstream on ACTION category. (default "epall")
      --union-cache-time int                         Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. (default 120)
      --union-create-policy string                   Policy to choose upstream on CREATE category. (default "epmfs")
      --union-search-policy string                   Policy to choose upstream on SEARCH category. (default "ff")
      --union-upstreams string                       List of space separated upstreams.
      --webdav-bearer-token string                   Bearer token instead of user/pass (eg a Macaroon)
      --webdav-bearer-token-command string           Command to run to get a bearer token
      --webdav-pass string                           Password. (obscured)
      --webdav-url string                            URL of http host to connect to
      --webdav-user string                           User name
      --webdav-vendor string                         Name of the Webdav site/service/software you are using
      --yandex-auth-url string                       Auth server URL.
      --yandex-client-id string                      OAuth Client Id
      --yandex-client-secret string                  OAuth Client Secret
      --yandex-encoding MultiEncoder                 This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
      --yandex-token string                          OAuth Access Token as a JSON blob.
      --yandex-token-url string                      Token server url.

1Fichier

This is a backend for the 1fichier cloud storage service. Note that a Premium subscription is required to use the API.

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

The initial setup for 1Fichier involves getting the API key from the website which you need to do in your browser.

Here is an example of how to make a remote called remote. First run:

     rclone config

This will guide you through an interactive setup process:

    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    [snip]
    XX / 1Fichier
       \ "fichier"
    [snip]
    Storage> fichier
    ** See help for fichier backend at: https://rclone.org/fichier/ **

    Your API Key, get it from https://1fichier.com/console/params.pl
    Enter a string value. Press Enter for the default ("").
    api_key> example_key
    Edit advanced config? (y/n)
    y) Yes
    n) No
    y/n> 
    Remote config
    --------------------
    [remote]
    type = fichier
    api_key = example_key
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

Once configured you can then use rclone like this,

List directories in top level of your 1Fichier account

    rclone lsd remote:

List all the files in your 1Fichier account

    rclone ls remote:

To copy a local directory to a 1Fichier directory called backup

    rclone copy /home/source remote:backup

Modified time and hashes

1Fichier does not support modification times. It supports the Whirlpool hash algorithm.

Duplicated files

1Fichier can have two files with exactly the same name and path (unlike a normal file system).

Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

Character   Value   Replacement
----------- ------- -------------
\           0x5C    ＼
<           0x3C    ＜
>           0x3E    ＞
"           0x22    ＂
$           0x24    ＄
`           0x60    ｀
'           0x27    ＇

File names can also not start or end with the following characters.
These only get replaced if they are the first or last character in the name:

Character   Value   Replacement
----------- ------- -------------
SP          0x20    ␠

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard Options

Here are the standard options specific to fichier (1Fichier).

--fichier-api-key

Your API Key, get it from https://1fichier.com/console/params.pl

-   Config: api_key
-   Env Var: RCLONE_FICHIER_API_KEY
-   Type: string
-   Default: ""

Advanced Options

Here are the advanced options specific to fichier (1Fichier).

--fichier-shared-folder

If you want to download a shared folder, add this parameter

-   Config: shared_folder
-   Env Var: RCLONE_FICHIER_SHARED_FOLDER
-   Type: string
-   Default: ""

--fichier-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

-   Config: encoding
-   Env Var: RCLONE_FICHIER_ENCODING
-   Type: MultiEncoder
-   Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot

Alias

The alias remote provides a new name for another remote.

Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.

During the initial setup with rclone config you will specify the target remote. The target remote can either be a local path or another remote.

Subfolders can be used in target remote. Assume an alias remote named backup with the target mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.

There will be no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop. The empty path is not allowed as a remote. To alias the current directory use . instead.

Here is an example of how to make an alias called remote for a local folder. First run:

     rclone config

This will guide you through an interactive setup process:

    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Alias for an existing remote
       \ "alias"
    [snip]
    Storage> alias
    Remote or path to alias.
    Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
    remote> /mnt/storage/backup
    Remote config
    --------------------
    [remote]
    remote = /mnt/storage/backup
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
    Current remotes:

    Name                 Type
    ====                 ====
    remote               alias

    e) Edit existing remote
    n) New remote
    d) Delete remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    e/n/d/r/c/s/q> q

Once configured you can then use rclone like this,

List directories in top level in /mnt/storage/backup

    rclone lsd remote:

List all the files in /mnt/storage/backup

    rclone ls remote:

Copy another local directory to the alias directory called source

    rclone copy /home/source remote:source

Standard Options

Here are the standard options specific to alias (Alias for an existing remote).

--alias-remote

Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".

-   Config: remote
-   Env Var: RCLONE_ALIAS_REMOTE
-   Type: string
-   Default: ""

Amazon Drive

Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.
Status

IMPORTANT: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.

For the history on why rclone no longer has a set of Amazon Drive API keys see the forum.

If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!

Setup

The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.

The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.

Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own client_id and client_secret with Amazon Drive, or use a third party oauth proxy, in which case you will need to enter client_id, client_secret, auth_url and token_url.

Note also that if you are not using Amazon's auth_url and token_url (ie you filled in something for those), then when setting up on a remote machine you can only use the copying the config method of configuration - rclone authorize will not work.

Here is an example of how to make a remote called remote. First run:

     rclone config

This will guide you through an interactive setup process:

    No remotes found - make a new one
    n) New remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    n/r/c/s/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Amazon Drive
       \ "amazon cloud drive"
    [snip]
    Storage> amazon cloud drive
    Amazon Application Client Id - required.
    client_id> your client ID goes here
    Amazon Application Client Secret - required.
    client_secret> your client secret goes here
    Auth server URL - leave blank to use Amazon's.
    auth_url> Optional auth URL
    Token server url - leave blank to use Amazon's.
    token_url> Optional token URL
    Remote config
    Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
    Use auto config?
     * Say Y if not sure
     * Say N if you are working on a remote or headless machine
    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    Log in and authorize rclone for access
    Waiting for code...
    Got code
    --------------------
    [remote]
    client_id = your client ID goes here
    client_secret = your client secret goes here
    auth_url = Optional auth URL
    token_url = Optional token URL
    token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
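As a sketch of the headless flow described in the remote setup docs (this assumes a second machine that does have a browser, and your own client_id and client_secret): run rclone config on the headless machine and answer N to "Use auto config?", then on the browser machine run

    rclone authorize "amazon cloud drive"

and paste the token it prints back into the config prompt on the headless machine.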
Once configured you can then use rclone like this, List directories in top level of your Amazon Drive rclone lsd remote: List all the files in your Amazon Drive rclone ls remote: To copy a local directory to an Amazon Drive directory called backup rclone copy /home/source remote:backup Modified time and MD5SUMs Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing. It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag. Restricted filename characters Character Value Replacement ----------- ------- ------------- NUL 0x00 ␀ / 0x2F / Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Deleting files Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days. Using with non .com Amazon accounts Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine. Standard Options Here are the standard options specific to amazon cloud drive (Amazon Drive). --acd-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_ACD_CLIENT_ID - Type: string - Default: "" --acd-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_ACD_CLIENT_SECRET - Type: string - Default: "" Advanced Options Here are the advanced options specific to amazon cloud drive (Amazon Drive). --acd-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_ACD_TOKEN - Type: string - Default: "" --acd-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_ACD_AUTH_URL - Type: string - Default: "" --acd-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_ACD_TOKEN_URL - Type: string - Default: "" --acd-checkpoint Checkpoint for internal polling (debug). - Config: checkpoint - Env Var: RCLONE_ACD_CHECKPOINT - Type: string - Default: "" --acd-upload-wait-per-gb Additional time per GB to wait after a failed complete upload to see if it appears. Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear. The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears. You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually. These values were determined empirically by observing lots of uploads of big files for a range of file sizes. Upload with the "-v" flag to see more info about what rclone is doing in this situation. - Config: upload_wait_per_gb - Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB - Type: Duration - Default: 3m0s --acd-templink-threshold Files >= this size will be downloaded via their tempLink. Files this size or more will be downloaded via their "tempLink". 
This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.

To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.

- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
- Default: 9G

--acd-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot

Limitations

Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.

Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail. At the time of writing (Jan 2016) it is in the area of 50GB per file. This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as with any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.

Amazon S3 Storage Providers

The S3 backend can be used with a number of different providers:

- AWS S3
- Alibaba Cloud (Aliyun) Object Storage System (OSS)
- Ceph
- DigitalOcean Spaces
- Dreamhost
- IBM COS S3
- Minio
- Scaleway
- StackPath
- Tencent Cloud Object Storage (COS)
- Wasabi

Paths are specified as remote:bucket (or remote: for the lsd command). You may put subdirectories in too, eg remote:bucket/path/to/dir.

Once you have made a remote (see the provider specific section above) you can use it like this:

See all buckets

rclone lsd remote:

Make a new bucket

rclone mkdir remote:bucket

List the contents of a bucket

rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

rclone sync -i /home/local/directory remote:bucket

AWS S3

Here is an example of making an s3 configuration. First run

rclone config

This will guide you through an interactive setup process.

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
   \ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Ceph Object Storage
   \ "Ceph"
 3 / Digital Ocean Spaces
   \ "DigitalOcean"
 4 / Dreamhost DreamObjects
   \ "Dreamhost"
 5 / IBM COS S3
   \ "IBMCOS"
 6 / Minio Object Storage
   \ "Minio"
 7 / Wasabi Object Storage
   \ "Wasabi"
 8 / Any other S3 compatible provider
   \ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> XXX AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> YYY Region to connect to. Choose a number from below, or type in your own value / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia or Pacific Northwest. | Leave location constraint empty. \ "us-east-1" / US East (Ohio) Region 2 | Needs location constraint us-east-2. \ "us-east-2" / US West (Oregon) Region 3 | Needs location constraint us-west-2. \ "us-west-2" / US West (Northern California) Region 4 | Needs location constraint us-west-1. \ "us-west-1" / Canada (Central) Region 5 | Needs location constraint ca-central-1. \ "ca-central-1" / EU (Ireland) Region 6 | Needs location constraint EU or eu-west-1. \ "eu-west-1" / EU (London) Region 7 | Needs location constraint eu-west-2. \ "eu-west-2" / EU (Frankfurt) Region 8 | Needs location constraint eu-central-1. \ "eu-central-1" / Asia Pacific (Singapore) Region 9 | Needs location constraint ap-southeast-1. \ "ap-southeast-1" / Asia Pacific (Sydney) Region 10 | Needs location constraint ap-southeast-2. \ "ap-southeast-2" / Asia Pacific (Tokyo) Region 11 | Needs location constraint ap-northeast-1. \ "ap-northeast-1" / Asia Pacific (Seoul) 12 | Needs location constraint ap-northeast-2. \ "ap-northeast-2" / Asia Pacific (Mumbai) 13 | Needs location constraint ap-south-1. \ "ap-south-1" / Asia Pacific (Hong Kong) Region 14 | Needs location constraint ap-east-1. \ "ap-east-1" / South America (Sao Paulo) Region 15 | Needs location constraint sa-east-1. \ "sa-east-1" region> 1 Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. endpoint> Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" 2 / US East (Ohio) Region. \ "us-east-2" 3 / US West (Oregon) Region. \ "us-west-2" 4 / US West (Northern California) Region. \ "us-west-1" 5 / Canada (Central) Region. \ "ca-central-1" 6 / EU (Ireland) Region. \ "eu-west-1" 7 / EU (London) Region. \ "eu-west-2" 8 / EU Region. \ "EU" 9 / Asia Pacific (Singapore) Region. \ "ap-southeast-1" 10 / Asia Pacific (Sydney) Region. \ "ap-southeast-2" 11 / Asia Pacific (Tokyo) Region. \ "ap-northeast-1" 12 / Asia Pacific (Seoul) \ "ap-northeast-2" 13 / Asia Pacific (Mumbai) \ "ap-south-1" 14 / Asia Pacific (Hong Kong) \ "ap-east-1" 15 / South America (Sao Paulo) Region. \ "sa-east-1" location_constraint> 1 Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. \ "public-read" / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. 3 | Granting this on a bucket is generally not recommended. \ "public-read-write" 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. \ "authenticated-read" / Object owner gets FULL_CONTROL. Bucket owner gets READ access. 
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
 6 / Glacier storage class
   \ "GLACIER"
 7 / Glacier Deep Archive storage class
   \ "DEEP_ARCHIVE"
 8 / Intelligent-Tiering storage class
   \ "INTELLIGENT_TIERING"
storage_class> 1
Remote config
--------------------
[remote]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

--update and --use-server-modtime

As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded (a worked example follows the Restricted filename characters table below).

Modified time

The modified time is stored as metadata on the object as X-Amz-Meta-Mtime, a floating point number of seconds since the epoch, accurate to 1 ns.

If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification time if the object can be copied in a single part. If the object is larger than 5GB or is in Glacier or Glacier Deep Archive storage, the object will be uploaded rather than copied.

Cleanup

If you run rclone cleanup s3:bucket then it will remove all pending multipart uploads older than 24 hours. You can use the -i flag to see exactly what it will do. If you want more control over the expiry date then run rclone backend cleanup s3:bucket -o max-age=1h to expire all uploads older than one hour. You can use rclone backend list-multipart-uploads s3:bucket to see the pending multipart uploads.

Restricted filename characters

S3 allows any valid UTF-8 string as a key.

Invalid UTF-8 bytes will be replaced, as they can't be used in XML.

The following characters are replaced since these are problematic when dealing with the REST API:

  Character   Value   Replacement
  ----------- ------- -------------
  NUL         0x00    ␀
  /           0x2F    ／

The encoding will also encode these file names as they don't seem to work with the SDK properly:

  File name   Replacement
  ----------- -------------
  .           ．
  ..          ．．
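To illustrate the --update / --use-server-modtime combination described above - a minimal sketch, assuming a remote named s3: and a bucket called bucket that you can write to (both hypothetical names) - the following uploads only those files whose local modification time is newer than the time of the last upload, avoiding the extra per-object metadata requests:

rclone sync --update --use-server-modtime /path/to/local s3:bucket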
Multipart uploads

rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB.

Note that files uploaded _both_ with multipart upload _and_ through crypt remotes do not have MD5 sums.

rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff. This can be a maximum of 5GB and a minimum of 0 (ie always upload multipart files).

The chunk sizes used in the multipart upload are specified by --s3-chunk-size and the number of chunks uploaded concurrently is specified by --s3-upload-concurrency.

Multipart uploads will use --transfers * --s3-upload-concurrency * --s3-chunk-size extra memory. Single part uploads do not use extra memory.

Single part transfers can be faster than multipart transfers or slower depending on your latency from S3 - the more latency, the more likely single part transfers will be faster.

Increasing --s3-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --s3-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.

Buckets and Regions

With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.

Authentication

There are a number of ways to supply rclone with a set of AWS credentials, with and without using the environment.

The different authentication methods are tried in this order:

- Directly in the rclone configuration file (env_auth = false in the config file):
  - access_key_id and secret_access_key are required.
  - session_token can be optionally set when using AWS STS.
- Runtime configuration (env_auth = true in the config file):
  - Export the following environment variables before running rclone:
    - Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
    - Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
    - Session Token: AWS_SESSION_TOKEN (optional)
  - Or, use a named profile:
    - Profile files are standard files used by AWS CLI tools
    - By default it will use the profile file in your home directory (eg ~/.aws/credentials on unix based systems) and the "default" profile; to change this, set these environment variables:
      - AWS_SHARED_CREDENTIALS_FILE to control which file.
      - AWS_PROFILE to control which profile to use.
  - Or, run rclone in an ECS task with an IAM role (AWS only).
  - Or, run rclone on an EC2 instance with an IAM role (AWS only).
  - Or, run rclone in an EKS pod with an IAM role that is associated with a service account (AWS only).

If none of these options actually ends up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see below).

S3 Permissions

When using the sync subcommand of rclone the following minimum permissions are required to be available on the bucket being written to:

- ListBucket
- DeleteObject
- GetObject
- PutObject
- PutObjectACL

When using the lsd subcommand, the ListAllMyBuckets permission is required.
Example policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*",
                "arn:aws:s3:::BUCKET_NAME"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

Notes on above:

1. This is a policy that can be used when creating a bucket. It assumes that USER_NAME has been created.
2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.

For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync.

Key Management System (KMS)

If you are using server side encryption with KMS then you will find you can't transfer small objects. As a work-around you can use the --ignore-checksum flag.

A proper fix is being worked on in issue #1824.

Glacier and Glacier Deep Archive

You can upload objects using the glacier storage class or transition them to glacier using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below.

2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to restore the object(s) in question before using rclone.

Note that rclone only speaks the S3 API; it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults.

Standard Options

Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)).

--s3-provider

Choose your S3 provider.

- Config: provider
- Env Var: RCLONE_S3_PROVIDER
- Type: string
- Default: ""
- Examples:
- "AWS" - Amazon Web Services (AWS) S3
- "Alibaba" - Alibaba Cloud Object Storage System (OSS) formerly Aliyun
- "Ceph" - Ceph Object Storage
- "DigitalOcean" - Digital Ocean Spaces
- "Dreamhost" - Dreamhost DreamObjects
- "IBMCOS" - IBM COS S3
- "Minio" - Minio Object Storage
- "Netease" - Netease Object Storage (NOS)
- "Scaleway" - Scaleway Object Storage
- "StackPath" - StackPath Object Storage
- "TencentCOS" - Tencent Cloud Object Storage (COS)
- "Wasabi" - Wasabi Object Storage
- "Other" - Any other S3 compatible provider

--s3-env-auth

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.

- Config: env_auth
- Env Var: RCLONE_S3_ENV_AUTH
- Type: bool
- Default: false
- Examples:
- "false" - Enter AWS credentials in the next step
- "true" - Get AWS credentials from the environment (env vars or IAM)

--s3-access-key-id

AWS Access Key ID. Leave blank for anonymous access or runtime credentials.

- Config: access_key_id
- Env Var: RCLONE_S3_ACCESS_KEY_ID
- Type: string
- Default: ""

--s3-secret-access-key

AWS Secret Access Key (password) Leave blank for anonymous access or runtime credentials.

- Config: secret_access_key
- Env Var: RCLONE_S3_SECRET_ACCESS_KEY
- Type: string
- Default: ""

--s3-region

Region to connect to.

- Config: region
- Env Var: RCLONE_S3_REGION
- Type: string
- Default: ""
- Examples:
- "us-east-1" - The default endpoint - a good choice if you are unsure. - US Region, Northern Virginia or Pacific Northwest. - Leave location constraint empty.
- "us-east-2" - US East (Ohio) Region - Needs location constraint us-east-2. - "us-west-1" - US West (Northern California) Region - Needs location constraint us-west-1. - "us-west-2" - US West (Oregon) Region - Needs location constraint us-west-2. - "ca-central-1" - Canada (Central) Region - Needs location constraint ca-central-1. - "eu-west-1" - EU (Ireland) Region - Needs location constraint EU or eu-west-1. - "eu-west-2" - EU (London) Region - Needs location constraint eu-west-2. - "eu-west-3" - EU (Paris) Region - Needs location constraint eu-west-3. - "eu-north-1" - EU (Stockholm) Region - Needs location constraint eu-north-1. - "eu-south-1" - EU (Milan) Region - Needs location constraint eu-south-1. - "eu-central-1" - EU (Frankfurt) Region - Needs location constraint eu-central-1. - "ap-southeast-1" - Asia Pacific (Singapore) Region - Needs location constraint ap-southeast-1. - "ap-southeast-2" - Asia Pacific (Sydney) Region - Needs location constraint ap-southeast-2. - "ap-northeast-1" - Asia Pacific (Tokyo) Region - Needs location constraint ap-northeast-1. - "ap-northeast-2" - Asia Pacific (Seoul) - Needs location constraint ap-northeast-2. - "ap-northeast-3" - Asia Pacific (Osaka-Local) - Needs location constraint ap-northeast-3. - "ap-south-1" - Asia Pacific (Mumbai) - Needs location constraint ap-south-1. - "ap-east-1" - Asia Pacific (Hong Kong) Region - Needs location constraint ap-east-1. - "sa-east-1" - South America (Sao Paulo) Region - Needs location constraint sa-east-1. - "me-south-1" - Middle East (Bahrain) Region - Needs location constraint me-south-1. - "af-south-1" - Africa (Cape Town) Region - Needs location constraint af-south-1. - "cn-north-1" - China (Beijing) Region - Needs location constraint cn-north-1. - "cn-northwest-1" - China (Ningxia) Region - Needs location constraint cn-northwest-1. - "us-gov-east-1" - AWS GovCloud (US-East) Region - Needs location constraint us-gov-east-1. - "us-gov-west-1" - AWS GovCloud (US) Region - Needs location constraint us-gov-west-1. --s3-region Region to connect to. - Config: region - Env Var: RCLONE_S3_REGION - Type: string - Default: "" - Examples: - "nl-ams" - Amsterdam, The Netherlands - "fr-par" - Paris, France --s3-region Region to connect to. Leave blank if you are using an S3 clone and you don't have a region. - Config: region - Env Var: RCLONE_S3_REGION - Type: string - Default: "" - Examples: - "" - Use this if unsure. Will use v4 signatures and an empty region. - "other-v2-signature" - Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. --s3-endpoint Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" --s3-endpoint Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise. 
- Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" - Examples: - "s3.us.cloud-object-storage.appdomain.cloud" - US Cross Region Endpoint - "s3.dal.us.cloud-object-storage.appdomain.cloud" - US Cross Region Dallas Endpoint - "s3.wdc.us.cloud-object-storage.appdomain.cloud" - US Cross Region Washington DC Endpoint - "s3.sjc.us.cloud-object-storage.appdomain.cloud" - US Cross Region San Jose Endpoint - "s3.private.us.cloud-object-storage.appdomain.cloud" - US Cross Region Private Endpoint - "s3.private.dal.us.cloud-object-storage.appdomain.cloud" - US Cross Region Dallas Private Endpoint - "s3.private.wdc.us.cloud-object-storage.appdomain.cloud" - US Cross Region Washington DC Private Endpoint - "s3.private.sjc.us.cloud-object-storage.appdomain.cloud" - US Cross Region San Jose Private Endpoint - "s3.us-east.cloud-object-storage.appdomain.cloud" - US Region East Endpoint - "s3.private.us-east.cloud-object-storage.appdomain.cloud" - US Region East Private Endpoint - "s3.us-south.cloud-object-storage.appdomain.cloud" - US Region South Endpoint - "s3.private.us-south.cloud-object-storage.appdomain.cloud" - US Region South Private Endpoint - "s3.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Endpoint - "s3.fra.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Frankfurt Endpoint - "s3.mil.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Milan Endpoint - "s3.ams.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Amsterdam Endpoint - "s3.private.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Private Endpoint - "s3.private.fra.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Frankfurt Private Endpoint - "s3.private.mil.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Milan Private Endpoint - "s3.private.ams.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Amsterdam Private Endpoint - "s3.eu-gb.cloud-object-storage.appdomain.cloud" - Great Britain Endpoint - "s3.private.eu-gb.cloud-object-storage.appdomain.cloud" - Great Britain Private Endpoint - "s3.eu-de.cloud-object-storage.appdomain.cloud" - EU Region DE Endpoint - "s3.private.eu-de.cloud-object-storage.appdomain.cloud" - EU Region DE Private Endpoint - "s3.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Endpoint - "s3.tok.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Tokyo Endpoint - "s3.hkg.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional HongKong Endpoint - "s3.seo.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Seoul Endpoint - "s3.private.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Private Endpoint - "s3.private.tok.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Tokyo Private Endpoint - "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional HongKong Private Endpoint - "s3.private.seo.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Seoul Private Endpoint - "s3.jp-tok.cloud-object-storage.appdomain.cloud" - APAC Region Japan Endpoint - "s3.private.jp-tok.cloud-object-storage.appdomain.cloud" - APAC Region Japan Private Endpoint - "s3.au-syd.cloud-object-storage.appdomain.cloud" - APAC Region Australia Endpoint - "s3.private.au-syd.cloud-object-storage.appdomain.cloud" - APAC Region Australia Private Endpoint - "s3.ams03.cloud-object-storage.appdomain.cloud" - Amsterdam Single Site Endpoint - "s3.private.ams03.cloud-object-storage.appdomain.cloud" - Amsterdam Single Site Private Endpoint - 
"s3.che01.cloud-object-storage.appdomain.cloud" - Chennai Single Site Endpoint - "s3.private.che01.cloud-object-storage.appdomain.cloud" - Chennai Single Site Private Endpoint - "s3.mel01.cloud-object-storage.appdomain.cloud" - Melbourne Single Site Endpoint - "s3.private.mel01.cloud-object-storage.appdomain.cloud" - Melbourne Single Site Private Endpoint - "s3.osl01.cloud-object-storage.appdomain.cloud" - Oslo Single Site Endpoint - "s3.private.osl01.cloud-object-storage.appdomain.cloud" - Oslo Single Site Private Endpoint - "s3.tor01.cloud-object-storage.appdomain.cloud" - Toronto Single Site Endpoint - "s3.private.tor01.cloud-object-storage.appdomain.cloud" - Toronto Single Site Private Endpoint - "s3.seo01.cloud-object-storage.appdomain.cloud" - Seoul Single Site Endpoint - "s3.private.seo01.cloud-object-storage.appdomain.cloud" - Seoul Single Site Private Endpoint - "s3.mon01.cloud-object-storage.appdomain.cloud" - Montreal Single Site Endpoint - "s3.private.mon01.cloud-object-storage.appdomain.cloud" - Montreal Single Site Private Endpoint - "s3.mex01.cloud-object-storage.appdomain.cloud" - Mexico Single Site Endpoint - "s3.private.mex01.cloud-object-storage.appdomain.cloud" - Mexico Single Site Private Endpoint - "s3.sjc04.cloud-object-storage.appdomain.cloud" - San Jose Single Site Endpoint - "s3.private.sjc04.cloud-object-storage.appdomain.cloud" - San Jose Single Site Private Endpoint - "s3.mil01.cloud-object-storage.appdomain.cloud" - Milan Single Site Endpoint - "s3.private.mil01.cloud-object-storage.appdomain.cloud" - Milan Single Site Private Endpoint - "s3.hkg02.cloud-object-storage.appdomain.cloud" - Hong Kong Single Site Endpoint - "s3.private.hkg02.cloud-object-storage.appdomain.cloud" - Hong Kong Single Site Private Endpoint - "s3.par01.cloud-object-storage.appdomain.cloud" - Paris Single Site Endpoint - "s3.private.par01.cloud-object-storage.appdomain.cloud" - Paris Single Site Private Endpoint - "s3.sng01.cloud-object-storage.appdomain.cloud" - Singapore Single Site Endpoint - "s3.private.sng01.cloud-object-storage.appdomain.cloud" - Singapore Single Site Private Endpoint --s3-endpoint Endpoint for OSS API. - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" - Examples: - "oss-cn-hangzhou.aliyuncs.com" - East China 1 (Hangzhou) - "oss-cn-shanghai.aliyuncs.com" - East China 2 (Shanghai) - "oss-cn-qingdao.aliyuncs.com" - North China 1 (Qingdao) - "oss-cn-beijing.aliyuncs.com" - North China 2 (Beijing) - "oss-cn-zhangjiakou.aliyuncs.com" - North China 3 (Zhangjiakou) - "oss-cn-huhehaote.aliyuncs.com" - North China 5 (Huhehaote) - "oss-cn-shenzhen.aliyuncs.com" - South China 1 (Shenzhen) - "oss-cn-hongkong.aliyuncs.com" - Hong Kong (Hong Kong) - "oss-us-west-1.aliyuncs.com" - US West 1 (Silicon Valley) - "oss-us-east-1.aliyuncs.com" - US East 1 (Virginia) - "oss-ap-southeast-1.aliyuncs.com" - Southeast Asia Southeast 1 (Singapore) - "oss-ap-southeast-2.aliyuncs.com" - Asia Pacific Southeast 2 (Sydney) - "oss-ap-southeast-3.aliyuncs.com" - Southeast Asia Southeast 3 (Kuala Lumpur) - "oss-ap-southeast-5.aliyuncs.com" - Asia Pacific Southeast 5 (Jakarta) - "oss-ap-northeast-1.aliyuncs.com" - Asia Pacific Northeast 1 (Japan) - "oss-ap-south-1.aliyuncs.com" - Asia Pacific South 1 (Mumbai) - "oss-eu-central-1.aliyuncs.com" - Central Europe 1 (Frankfurt) - "oss-eu-west-1.aliyuncs.com" - West Europe (London) - "oss-me-east-1.aliyuncs.com" - Middle East 1 (Dubai) --s3-endpoint Endpoint for Scaleway Object Storage. 
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
- "s3.nl-ams.scw.cloud" - Amsterdam Endpoint
- "s3.fr-par.scw.cloud" - Paris Endpoint

--s3-endpoint

Endpoint for StackPath Object Storage.

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
- "s3.us-east-2.stackpathstorage.com" - US East Endpoint
- "s3.us-west-1.stackpathstorage.com" - US West Endpoint
- "s3.eu-central-1.stackpathstorage.com" - EU Endpoint

--s3-endpoint

Endpoint for Tencent COS API.

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
- "cos.ap-beijing.myqcloud.com" - Beijing Region.
- "cos.ap-nanjing.myqcloud.com" - Nanjing Region.
- "cos.ap-shanghai.myqcloud.com" - Shanghai Region.
- "cos.ap-guangzhou.myqcloud.com" - Guangzhou Region.
- "cos.ap-chengdu.myqcloud.com" - Chengdu Region.
- "cos.ap-chongqing.myqcloud.com" - Chongqing Region.
- "cos.ap-hongkong.myqcloud.com" - Hong Kong (China) Region.
- "cos.ap-singapore.myqcloud.com" - Singapore Region.
- "cos.ap-mumbai.myqcloud.com" - Mumbai Region.
- "cos.ap-seoul.myqcloud.com" - Seoul Region.
- "cos.ap-bangkok.myqcloud.com" - Bangkok Region.
- "cos.ap-tokyo.myqcloud.com" - Tokyo Region.
- "cos.na-siliconvalley.myqcloud.com" - Silicon Valley Region.
- "cos.na-ashburn.myqcloud.com" - Virginia Region.
- "cos.na-toronto.myqcloud.com" - Toronto Region.
- "cos.eu-frankfurt.myqcloud.com" - Frankfurt Region.
- "cos.eu-moscow.myqcloud.com" - Moscow Region.
- "cos.accelerate.myqcloud.com" - Use Tencent COS Accelerate Endpoint.

--s3-endpoint

Endpoint for S3 API. Required when using an S3 clone.

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
- "objects-us-east-1.dream.io" - Dream Objects endpoint
- "nyc3.digitaloceanspaces.com" - Digital Ocean Spaces New York 3
- "ams3.digitaloceanspaces.com" - Digital Ocean Spaces Amsterdam 3
- "sgp1.digitaloceanspaces.com" - Digital Ocean Spaces Singapore 1
- "s3.wasabisys.com" - Wasabi US East endpoint
- "s3.us-west-1.wasabisys.com" - Wasabi US West endpoint
- "s3.eu-central-1.wasabisys.com" - Wasabi EU Central endpoint

--s3-location-constraint

Location constraint - must be set to match the Region. Used when creating buckets only.

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Type: string
- Default: ""
- Examples:
- "" - Empty for US Region, Northern Virginia or Pacific Northwest.
- "us-east-2" - US East (Ohio) Region.
- "us-west-1" - US West (Northern California) Region.
- "us-west-2" - US West (Oregon) Region.
- "ca-central-1" - Canada (Central) Region.
- "eu-west-1" - EU (Ireland) Region.
- "eu-west-2" - EU (London) Region.
- "eu-west-3" - EU (Paris) Region.
- "eu-north-1" - EU (Stockholm) Region.
- "eu-south-1" - EU (Milan) Region.
- "EU" - EU Region.
- "ap-southeast-1" - Asia Pacific (Singapore) Region.
- "ap-southeast-2" - Asia Pacific (Sydney) Region.
- "ap-northeast-1" - Asia Pacific (Tokyo) Region.
- "ap-northeast-2" - Asia Pacific (Seoul) Region.
- "ap-northeast-3" - Asia Pacific (Osaka-Local) Region.
- "ap-south-1" - Asia Pacific (Mumbai) Region.
- "ap-east-1" - Asia Pacific (Hong Kong) Region.
- "sa-east-1" - South America (Sao Paulo) Region.
- "me-south-1" - Middle East (Bahrain) Region.
- "af-south-1" - Africa (Cape Town) Region.
- "cn-north-1" - China (Beijing) Region
- "cn-northwest-1" - China (Ningxia) Region.
- "us-gov-east-1" - AWS GovCloud (US-East) Region.
- "us-gov-west-1" - AWS GovCloud (US) Region. --s3-location-constraint Location constraint - must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT - Type: string - Default: "" - Examples: - "us-standard" - US Cross Region Standard - "us-vault" - US Cross Region Vault - "us-cold" - US Cross Region Cold - "us-flex" - US Cross Region Flex - "us-east-standard" - US East Region Standard - "us-east-vault" - US East Region Vault - "us-east-cold" - US East Region Cold - "us-east-flex" - US East Region Flex - "us-south-standard" - US South Region Standard - "us-south-vault" - US South Region Vault - "us-south-cold" - US South Region Cold - "us-south-flex" - US South Region Flex - "eu-standard" - EU Cross Region Standard - "eu-vault" - EU Cross Region Vault - "eu-cold" - EU Cross Region Cold - "eu-flex" - EU Cross Region Flex - "eu-gb-standard" - Great Britain Standard - "eu-gb-vault" - Great Britain Vault - "eu-gb-cold" - Great Britain Cold - "eu-gb-flex" - Great Britain Flex - "ap-standard" - APAC Standard - "ap-vault" - APAC Vault - "ap-cold" - APAC Cold - "ap-flex" - APAC Flex - "mel01-standard" - Melbourne Standard - "mel01-vault" - Melbourne Vault - "mel01-cold" - Melbourne Cold - "mel01-flex" - Melbourne Flex - "tor01-standard" - Toronto Standard - "tor01-vault" - Toronto Vault - "tor01-cold" - Toronto Cold - "tor01-flex" - Toronto Flex --s3-location-constraint Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only. - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT - Type: string - Default: "" --s3-acl Canned ACL used when creating buckets and storing or copying objects. This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one. - Config: acl - Env Var: RCLONE_S3_ACL - Type: string - Default: "" - Examples: - "default" - Owner gets Full_CONTROL. No one else has access rights (default). - "private" - Owner gets FULL_CONTROL. No one else has access rights (default). - "public-read" - Owner gets FULL_CONTROL. The AllUsers group gets READ access. - "public-read-write" - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. - Granting this on a bucket is generally not recommended. - "authenticated-read" - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. - "bucket-owner-read" - Object owner gets FULL_CONTROL. Bucket owner gets READ access. - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - "bucket-owner-full-control" - Both the object owner and the bucket owner get FULL_CONTROL over the object. - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - "private" - Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS - "public-read" - Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS - "public-read-write" - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. 
This acl is available on IBM Cloud (Infra), On-Premise IBM COS
- "authenticated-read" - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS

--s3-server-side-encryption

The server-side encryption algorithm used when storing this object in S3.

- Config: server_side_encryption
- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
- Type: string
- Default: ""
- Examples:
- "" - None
- "AES256" - AES256
- "aws:kms" - aws:kms

--s3-sse-kms-key-id

If using KMS ID you must provide the ARN of Key.

- Config: sse_kms_key_id
- Env Var: RCLONE_S3_SSE_KMS_KEY_ID
- Type: string
- Default: ""
- Examples:
- "" - None
- "arn:aws:kms:us-east-1:*" - arn:aws:kms:*

--s3-storage-class

The storage class to use when storing new objects in S3.

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
- "" - Default
- "STANDARD" - Standard storage class
- "REDUCED_REDUNDANCY" - Reduced redundancy storage class
- "STANDARD_IA" - Standard Infrequent Access storage class
- "ONEZONE_IA" - One Zone Infrequent Access storage class
- "GLACIER" - Glacier storage class
- "DEEP_ARCHIVE" - Glacier Deep Archive storage class
- "INTELLIGENT_TIERING" - Intelligent-Tiering storage class

--s3-storage-class

The storage class to use when storing new objects in OSS.

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
- "" - Default
- "STANDARD" - Standard storage class
- "GLACIER" - Archive storage mode.
- "STANDARD_IA" - Infrequent access storage mode.

--s3-storage-class

The storage class to use when storing new objects in Tencent COS.

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
- "" - Default
- "STANDARD" - Standard storage class
- "ARCHIVE" - Archive storage mode.
- "STANDARD_IA" - Infrequent access storage mode.

--s3-storage-class

The storage class to use when storing new objects in S3.

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
- "" - Default
- "STANDARD" - The Standard class for any upload; suitable for on-demand content like streaming or CDN.
- "GLACIER" - Archived storage; prices are lower, but it needs to be restored first to be accessed.

Advanced Options

Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)).

--s3-bucket-acl

Canned ACL used when creating buckets.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied only when creating buckets. If it isn't set then "acl" is used instead.

- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
- Type: string
- Default: ""
- Examples:
- "private" - Owner gets FULL_CONTROL. No one else has access rights (default).
- "public-read" - Owner gets FULL_CONTROL. The AllUsers group gets READ access.
- "public-read-write" - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. Granting this on a bucket is generally not recommended.
- "authenticated-read" - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.

--s3-sse-customer-algorithm

If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
- Config: sse_customer_algorithm - Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM - Type: string - Default: "" - Examples: - "" - None - "AES256" - AES256 --s3-sse-customer-key If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data. - Config: sse_customer_key - Env Var: RCLONE_S3_SSE_CUSTOMER_KEY - Type: string - Default: "" - Examples: - "" - None --s3-sse-customer-key-md5 If using SSE-C you must provide the secret encryption key MD5 checksum. - Config: sse_customer_key_md5 - Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5 - Type: string - Default: "" - Examples: - "" - None --s3-upload-cutoff Cutoff for switching to chunked upload Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB. - Config: upload_cutoff - Env Var: RCLONE_S3_UPLOAD_CUTOFF - Type: SizeSuffix - Default: 200M --s3-chunk-size Chunk size to use for uploading. When uploading files larger than upload_cutoff or files with unknown size (eg from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size. Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer. If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers. Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit. Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5MB and there can be at most 10,000 chunks, this means that by default the maximum size of file you can stream upload is 48GB. If you wish to stream upload larger files then you will need to increase chunk_size. - Config: chunk_size - Env Var: RCLONE_S3_CHUNK_SIZE - Type: SizeSuffix - Default: 5M --s3-max-upload-parts Maximum number of parts in a multipart upload. This option defines the maximum number of multipart chunks to use when doing a multipart upload. This can be useful if a service does not support the AWS S3 specification of 10,000 chunks. Rclone will automatically increase the chunk size when uploading a large file of a known size to stay below this number of chunks limit. - Config: max_upload_parts - Env Var: RCLONE_S3_MAX_UPLOAD_PARTS - Type: int - Default: 10000 --s3-copy-cutoff Cutoff for switching to multipart copy Any files larger than this that need to be server side copied will be copied in chunks of this size. The minimum is 0 and the maximum is 5GB. - Config: copy_cutoff - Env Var: RCLONE_S3_COPY_CUTOFF - Type: SizeSuffix - Default: 4.656G --s3-disable-checksum Don't store MD5 checksum with object metadata Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading. - Config: disable_checksum - Env Var: RCLONE_S3_DISABLE_CHECKSUM - Type: bool - Default: false --s3-shared-credentials-file Path to the shared credentials file If env_auth = true then rclone can use a shared credentials file. If this variable is empty rclone will look for the "AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty it will default to the current user's home directory. 
Linux/OSX: "$HOME/.aws/credentials"
Windows: "%USERPROFILE%\.aws\credentials"

- Config: shared_credentials_file
- Env Var: RCLONE_S3_SHARED_CREDENTIALS_FILE
- Type: string
- Default: ""

--s3-profile

Profile to use in the shared credentials file

If env_auth = true then rclone can use a shared credentials file. This variable controls which profile is used in that file.

If empty it will default to the environment variable "AWS_PROFILE" or "default" if that environment variable is also not set.

- Config: profile
- Env Var: RCLONE_S3_PROFILE
- Type: string
- Default: ""

--s3-session-token

An AWS session token

- Config: session_token
- Env Var: RCLONE_S3_SESSION_TOKEN
- Type: string
- Default: ""

--s3-upload-concurrency

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded concurrently.

If you are uploading small numbers of large files over high speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.

- Config: upload_concurrency
- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
- Type: int
- Default: 4

--s3-force-path-style

If true use path style access if false use virtual hosted style.

If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See the AWS S3 docs for more info.

Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to false - rclone will do this automatically based on the provider setting.

- Config: force_path_style
- Env Var: RCLONE_S3_FORCE_PATH_STYLE
- Type: bool
- Default: true

--s3-v2-auth

If true use v2 authentication.

If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication.

Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.

- Config: v2_auth
- Env Var: RCLONE_S3_V2_AUTH
- Type: bool
- Default: false

--s3-use-accelerate-endpoint

If true use the AWS S3 accelerated endpoint.

See: AWS S3 Transfer acceleration

- Config: use_accelerate_endpoint
- Env Var: RCLONE_S3_USE_ACCELERATE_ENDPOINT
- Type: bool
- Default: false

--s3-leave-parts-on-error

If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.

It should be set to true for resuming uploads across different sessions.

WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.

- Config: leave_parts_on_error
- Env Var: RCLONE_S3_LEAVE_PARTS_ON_ERROR
- Type: bool
- Default: false

--s3-list-chunk

Size of listing chunk (response list for each ListObject S3 request).

This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification. Most services truncate the response list to 1000 objects even if requested more than that. In AWS S3 this is a global maximum and cannot be changed, see AWS S3. In Ceph, this can be increased with the "rgw list buckets max chunk" option.

- Config: list_chunk
- Env Var: RCLONE_S3_LIST_CHUNK
- Type: int
- Default: 1000

--s3-no-check-bucket

If set don't attempt to check the bucket exists or create it

This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.

- Config: no_check_bucket
- Env Var: RCLONE_S3_NO_CHECK_BUCKET
- Type: bool
- Default: false

--s3-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.
- Config: encoding
- Env Var: RCLONE_S3_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot

--s3-memory-pool-flush-time

How often internal memory buffer pools will be flushed. Uploads which require additional buffers (eg multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.

- Config: memory_pool_flush_time
- Env Var: RCLONE_S3_MEMORY_POOL_FLUSH_TIME
- Type: Duration
- Default: 1m0s

--s3-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool.

- Config: memory_pool_use_mmap
- Env Var: RCLONE_S3_MEMORY_POOL_USE_MMAP
- Type: bool
- Default: false

Backend commands

Here are the commands specific to the s3 backend.

Run them with

rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the "rclone backend" command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

restore

Restore objects from GLACIER to normal storage

rclone backend restore remote: [options] [<arguments>+]

This command can be used to restore one or more objects from GLACIER to normal storage.

Usage Examples:

rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]

This flag also obeys the filters. Test first with -i/--interactive or --dry-run flags

rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard

All the objects shown will be marked for restore, then

rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard

It returns a list of status dictionaries with Status and Path keys. The Status will be OK if it was successful or an error message if not.

[
    {
        "Status": "OK",
        "Path": "test.txt"
    },
    {
        "Status": "OK",
        "Path": "test/file4.txt"
    }
]

Options:

- "description": The optional description for the job.
- "lifetime": Lifetime of the active copy in days
- "priority": Priority of restore: Standard|Expedited|Bulk

list-multipart-uploads

List the unfinished multipart uploads

rclone backend list-multipart-uploads remote: [options] [<arguments>+]

This command lists the unfinished multipart uploads in JSON format.

rclone backend list-multipart s3:bucket/path/to/object

It returns a dictionary of buckets with values as lists of unfinished multipart uploads.

You can call it with no bucket in which case it lists all buckets, with a bucket or with a bucket and path.

{
  "rclone": [
    {
      "Initiated": "2020-06-26T14:20:36Z",
      "Initiator": {
        "DisplayName": "XXX",
        "ID": "arn:aws:iam::XXX:user/XXX"
      },
      "Key": "KEY",
      "Owner": {
        "DisplayName": null,
        "ID": "XXX"
      },
      "StorageClass": "STANDARD",
      "UploadId": "XXX"
    }
  ],
  "rclone-1000files": [],
  "rclone-dst": []
}

cleanup

Remove unfinished multipart uploads.

rclone backend cleanup remote: [options] [<arguments>+]

This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.

Note that you can use -i/--dry-run with this command to see what it would do.

rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object

Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

Options:

- "max-age": Max age of upload to delete

Anonymous access to public buckets

If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key.
Your config should end up looking like this: [anons3] type = s3 provider = AWS env_auth = false access_key_id = secret_access_key = region = us-east-1 endpoint = location_constraint = acl = private server_side_encryption = storage_class = Then use it as normal with the name of the public bucket, eg rclone lsd anons3:1000genomes You will be able to list and copy data but not upload it. Ceph Ceph is an open source unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface. To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config: [ceph] type = s3 provider = Ceph env_auth = false access_key_id = XXX secret_access_key = YYY region = endpoint = https://ceph.endpoint.example.com location_constraint = acl = server_side_encryption = storage_class = If you are using an older version of CEPH, eg 10.2.x Jewel, then you may need to supply the parameter --s3-upload-cutoff 0 or put this in the config file as upload_cutoff 0 to work around a bug which causes uploading of small files to fail. Note also that Ceph sometimes puts / in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the / escaped as \/. Make sure you only write / in the secret access key. Eg the dump from Ceph looks something like this (irrelevant keys removed). { "user_id": "xxx", "display_name": "xxxx", "keys": [ { "user": "xxx", "access_key": "xxxxxx", "secret_key": "xxxxxx\/xxxx" } ], } Because this is a json dump, it is encoding the / as \/, so if you use the secret key as xxxxxx/xxxx it will work fine. Dreamhost Dreamhost DreamObjects is an object storage system based on CEPH. To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config: [dreamobjects] type = s3 provider = DreamHost env_auth = false access_key_id = your_access_key secret_access_key = your_secret_key region = endpoint = objects-us-west-1.dream.io location_constraint = acl = private server_side_encryption = storage_class = DigitalOcean Spaces Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean. To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "Applications & API" page of the DigitalOcean control panel. They will be needed when prompted by rclone config for your access_key_id and secret_access_key. When prompted for a region or location_constraint, press enter to use the default value. The region must be included in the endpoint setting (e.g. nyc3.digitaloceanspaces.com). The default values can be used for other settings. Going through the whole process of creating a new remote by running rclone config, each prompt should be answered as shown below: Storage> s3 env_auth> 1 access_key_id> YOUR_ACCESS_KEY secret_access_key> YOUR_SECRET_KEY region> endpoint> nyc3.digitaloceanspaces.com location_constraint> acl> storage_class> The resulting configuration file should look like: [spaces] type = s3 provider = DigitalOcean env_auth = false access_key_id = YOUR_ACCESS_KEY secret_access_key = YOUR_SECRET_KEY region = endpoint = nyc3.digitaloceanspaces.com location_constraint = acl = server_side_encryption = storage_class = Once configured, you can create a new Space and begin copying files. 
For example:

rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space

IBM COS (S3)

Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM's Cloud Object Storage System (formerly Cleversafe). For more information visit http://www.ibm.com/cloud/object-storage

To configure access to IBM COS S3, follow the steps below:

1. Run rclone config and select n for a new remote.

2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

2. Enter the name for the configuration

name>

3. Select "s3" storage.

Choose a number from below, or type in your own value
 1 / Alias for an existing remote
   \ "alias"
 2 / Amazon Drive
   \ "amazon cloud drive"
 3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
   \ "s3"
 4 / Backblaze B2
   \ "b2"
[snip]
23 / http Connection
   \ "http"
Storage> 3

4. Select IBM COS as the S3 Storage Provider.

Choose the S3 provider.
Choose a number from below, or type in your own value
 1 / Choose this option to configure Storage to AWS S3
   \ "AWS"
 2 / Choose this option to configure Storage to Ceph Systems
   \ "Ceph"
 3 / Choose this option to configure Storage to Dreamhost
   \ "Dreamhost"
 4 / Choose this option to the configure Storage to IBM COS S3
   \ "IBMCOS"
 5 / Choose this option to the configure Storage to Minio
   \ "Minio"
Provider>4

5. Enter the Access Key and Secret.

AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> <>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> <>

6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the option below. For On Premise IBM COS, enter an endpoint address.

Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
Choose a number from below, or type in your own value
 1 / US Cross Region Endpoint
   \ "s3-api.us-geo.objectstorage.softlayer.net"
 2 / US Cross Region Dallas Endpoint
   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
 3 / US Cross Region Washington DC Endpoint
   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
 4 / US Cross Region San Jose Endpoint
   \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
 5 / US Cross Region Private Endpoint
   \ "s3-api.us-geo.objectstorage.service.networklayer.com"
 6 / US Cross Region Dallas Private Endpoint
   \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
 7 / US Cross Region Washington DC Private Endpoint
   \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
 8 / US Cross Region San Jose Private Endpoint
   \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
 9 / US Region East Endpoint
   \ "s3.us-east.objectstorage.softlayer.net"
10 / US Region East Private Endpoint
   \ "s3.us-east.objectstorage.service.networklayer.com"
11 / US Region South Endpoint
[snip]
34 / Toronto Single Site Private Endpoint
   \ "s3.tor01.objectstorage.service.networklayer.com"
endpoint>1

7. Specify an IBM COS Location Constraint. The location constraint must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list, hit enter.

     1 / US Cross Region Standard \ "us-standard"
     2 / US Cross Region Vault \ "us-vault"
     3 / US Cross Region Cold \ "us-cold"
     4 / US Cross Region Flex \ "us-flex"
     5 / US East Region Standard \ "us-east-standard"
     6 / US East Region Vault \ "us-east-vault"
     7 / US East Region Cold \ "us-east-cold"
     8 / US East Region Flex \ "us-east-flex"
     9 / US South Region Standard \ "us-south-standard"
    10 / US South Region Vault \ "us-south-vault"
    [snip]
    32 / Toronto Flex \ "tor01-flex"
    location_constraint>1

8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.

    Canned ACL used when creating buckets and/or storing objects in S3.
    For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
    Choose a number from below, or type in your own value
    1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS \ "private"
    2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS \ "public-read"
    3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS \ "public-read-write"
    4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS \ "authenticated-read"
    acl> 1

9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this:

    [xxx]
    type = s3
    Provider = IBMCOS
    access_key_id = xxx
    secret_access_key = yyy
    endpoint = s3-api.us-geo.objectstorage.softlayer.net
    location_constraint = us-standard
    acl = private

10. Execute rclone commands

    1) Create a bucket.
       rclone mkdir IBM-COS-XREGION:newbucket
    2) List available buckets.
       rclone lsd IBM-COS-XREGION:
       -1 2017-11-08 21:16:22 -1 test
       -1 2018-02-14 20:16:39 -1 newbucket
    3) List contents of a bucket.
       rclone ls IBM-COS-XREGION:newbucket
       18685952 test.exe
    4) Copy a file from local to remote.
       rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
    5) Copy a file from remote to local.
       rclone copy IBM-COS-XREGION:newbucket/file.txt .
    6) Delete a file on remote.
       rclone delete IBM-COS-XREGION:newbucket/file.txt

Minio

Minio is an object storage server built for cloud application developers and devops. It is very easy to install and provides an S3 compatible server which can be used by rclone. To use it, install Minio following the instructions here.
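For example, assuming the minio binary is installed and you want to serve a local directory (the path below is purely illustrative), the server can be started with:

    minio server /path/to/data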
When it configures itself Minio will print something like this Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000 AccessKey: USWUXHGYZQYFYFFIT3RE SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 Region: us-east-1 SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis Browser Access: http://192.168.1.106:9000 http://172.23.0.1:9000 Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 Object API (Amazon S3 compatible): Go: https://docs.minio.io/docs/golang-client-quickstart-guide Java: https://docs.minio.io/docs/java-client-quickstart-guide Python: https://docs.minio.io/docs/python-client-quickstart-guide JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide .NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide Drive Capacity: 26 GiB Free, 165 GiB Total These details need to go into rclone config like this. Note that it is important to put the region in as stated above. env_auth> 1 access_key_id> USWUXHGYZQYFYFFIT3RE secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 region> us-east-1 endpoint> http://192.168.1.106:9000 location_constraint> server_side_encryption> Which makes the config file look like this [minio] type = s3 provider = Minio env_auth = false access_key_id = USWUXHGYZQYFYFFIT3RE secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 region = us-east-1 endpoint = http://192.168.1.106:9000 location_constraint = server_side_encryption = So once set up, for example to copy files into a bucket rclone copy /path/to/files minio:bucket Scaleway Scaleway The Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool. Scaleway provides an S3 interface which can be configured for use with rclone like this: [scaleway] type = s3 provider = Scaleway env_auth = false endpoint = s3.nl-ams.scw.cloud access_key_id = SCWXXXXXXXXXXXXXX secret_access_key = 1111111-2222-3333-44444-55555555555555 region = nl-ams location_constraint = acl = private server_side_encryption = storage_class = Wasabi Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost. Wasabi provides an S3 interface which can be configured for use with rclone like this. No remotes found - make a new one n) New remote s) Set configuration password n/s> n name> wasabi Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Amazon S3 (also Dreamhost, Ceph, Minio) \ "s3" [snip] Storage> s3 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> YOURACCESSKEY AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> YOURSECRETACCESSKEY Region to connect to. 
Choose a number from below, or type in your own value / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia or Pacific Northwest. | Leave location constraint empty. \ "us-east-1" [snip] region> us-east-1 Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. Specify if using an S3 clone such as Ceph. endpoint> s3.wasabisys.com Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" [snip] location_constraint> Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" [snip] acl> The server-side encryption algorithm used when storing this object in S3. Choose a number from below, or type in your own value 1 / None \ "" 2 / AES256 \ "AES256" server_side_encryption> The storage class to use when storing objects in S3. Choose a number from below, or type in your own value 1 / Default \ "" 2 / Standard storage class \ "STANDARD" 3 / Reduced redundancy storage class \ "REDUCED_REDUNDANCY" 4 / Standard Infrequent Access storage class \ "STANDARD_IA" storage_class> Remote config -------------------- [wasabi] env_auth = false access_key_id = YOURACCESSKEY secret_access_key = YOURSECRETACCESSKEY region = us-east-1 endpoint = s3.wasabisys.com location_constraint = acl = server_side_encryption = storage_class = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y This will leave the config file looking like this. [wasabi] type = s3 provider = Wasabi env_auth = false access_key_id = YOURACCESSKEY secret_access_key = YOURSECRETACCESSKEY region = endpoint = s3.wasabisys.com location_constraint = acl = server_side_encryption = storage_class = Alibaba OSS Here is an example of making an Alibaba Cloud (Aliyun) OSS configuration. First run: rclone config This will guide you through an interactive setup process. No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> oss Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) \ "s3" [snip] Storage> s3 Choose your S3 provider. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Amazon Web Services (AWS) S3 \ "AWS" 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun \ "Alibaba" 3 / Ceph Object Storage \ "Ceph" [snip] provider> Alibaba Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Enter a boolean value (true or false). Press Enter for the default ("false"). Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID. Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). 
    access_key_id> accesskeyid
    AWS Secret Access Key (password)
    Leave blank for anonymous access or runtime credentials.
    Enter a string value. Press Enter for the default ("").
    secret_access_key> secretaccesskey
    Endpoint for OSS API.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / East China 1 (Hangzhou) \ "oss-cn-hangzhou.aliyuncs.com"
    2 / East China 2 (Shanghai) \ "oss-cn-shanghai.aliyuncs.com"
    3 / North China 1 (Qingdao) \ "oss-cn-qingdao.aliyuncs.com"
    [snip]
    endpoint> 1
    Canned ACL used when creating buckets and storing or copying objects.
    Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private"
    2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. \ "public-read"
    3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
    [snip]
    acl> 1
    The storage class to use when storing new objects in OSS.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / Default \ ""
    2 / Standard storage class \ "STANDARD"
    3 / Archive storage mode. \ "GLACIER"
    4 / Infrequent access storage mode. \ "STANDARD_IA"
    storage_class> 1
    Edit advanced config? (y/n)
    y) Yes
    n) No
    y/n> n
    Remote config
    --------------------
    [oss]
    type = s3
    provider = Alibaba
    env_auth = false
    access_key_id = accesskeyid
    secret_access_key = secretaccesskey
    endpoint = oss-cn-hangzhou.aliyuncs.com
    acl = private
    storage_class = Standard
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

Tencent COS

Tencent Cloud Object Storage (COS) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.

To configure access to Tencent COS, follow the steps below:

1. Run rclone config and select n for a new remote.

    rclone config
    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n

2. Give the name of the configuration. For example, name it 'cos'.

    name> cos

3. Select s3 storage.

    Choose a number from below, or type in your own value
    1 / 1Fichier \ "fichier"
    2 / Alias for an existing remote \ "alias"
    3 / Amazon Drive \ "amazon cloud drive"
    4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc) \ "s3"
    [snip]
    Storage> s3

4. Select TencentCOS provider.

    Choose a number from below, or type in your own value
    1 / Amazon Web Services (AWS) S3 \ "AWS"
    [snip]
    11 / Tencent Cloud Object Storage (COS) \ "TencentCOS"
    [snip]
    provider> TencentCOS

5. Enter your SecretId and SecretKey of Tencent Cloud.

    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    Only applies if access_key_id and secret_access_key is blank.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    Choose a number from below, or type in your own value
    1 / Enter AWS credentials in the next step \ "false"
    2 / Get AWS credentials from the environment (env vars or IAM) \ "true"
    env_auth> 1
    AWS Access Key ID.
    Leave blank for anonymous access or runtime credentials.
    Enter a string value. Press Enter for the default ("").
    access_key_id> AKIDxxxxxxxxxx
    AWS Secret Access Key (password)
    Leave blank for anonymous access or runtime credentials.
    Enter a string value. Press Enter for the default ("").
    secret_access_key> xxxxxxxxxxx

6. Select an endpoint for Tencent COS. This is the standard endpoint for each region.

    1 / Beijing Region. \ "cos.ap-beijing.myqcloud.com"
    2 / Nanjing Region. \ "cos.ap-nanjing.myqcloud.com"
    3 / Shanghai Region. \ "cos.ap-shanghai.myqcloud.com"
    4 / Guangzhou Region. \ "cos.ap-guangzhou.myqcloud.com"
    [snip]
    endpoint> 4

7. Choose acl and storage class.

    Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "default"
    [snip]
    acl> 1
    The storage class to use when storing new objects in Tencent COS.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / Default \ ""
    [snip]
    storage_class> 1
    Edit advanced config? (y/n)
    y) Yes
    n) No (default)
    y/n> n
    Remote config
    --------------------
    [cos]
    type = s3
    provider = TencentCOS
    env_auth = false
    access_key_id = xxx
    secret_access_key = xxx
    endpoint = cos.ap-guangzhou.myqcloud.com
    acl = default
    --------------------
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
    Current remotes:

    Name  Type
    ====  ====
    cos   s3

Netease NOS

For Netease NOS, configure as per the configurator (rclone config), setting the provider to Netease. This will automatically set force_path_style = false which is necessary for it to run properly.

Backblaze B2

B2 is Backblaze's cloud storage system.

Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

Here is an example of making a b2 configuration. First run

    rclone config

This will guide you through an interactive setup process. To authenticate you will either need your Account ID (a short hex number) and Master Application Key (a long hex number) OR an Application Key, which is the recommended method. See below for further details on generating and using an Application Key.

    No remotes found - make a new one
    n) New remote
    q) Quit config
    n/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Backblaze B2 \ "b2"
    [snip]
    Storage> b2
    Account ID or Application Key ID
    account> 123456789abc
    Application Key
    key> 0123456789abcdef0123456789abcdef0123456789
    Endpoint for the service - leave blank normally.
    endpoint>
    Remote config
    --------------------
    [remote]
    account = 123456789abc
    key = 0123456789abcdef0123456789abcdef0123456789
    endpoint =
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

This remote is called remote and can now be used like this

See all buckets

    rclone lsd remote:

Create a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    rclone sync -i /home/local/directory remote:bucket

Application Keys

B2 supports multiple Application Keys for different access permissions to B2 Buckets. You can use these with rclone too; you will need to use rclone version 1.43 or later.
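For reference, a B2 remote set up with an application key rather than the master key might end up looking like this in the config file (the values below are placeholders, not real credentials):

    [remote]
    type = b2
    account = <your applicationKeyId>
    key = <your applicationKey>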
Follow Backblaze's docs to create an Application Key with the required permission and add the applicationKeyId as the account and the Application Key itself as the key.

Note that you must put the applicationKeyId as the account - you can't use the master Account ID. If you try then B2 will return 401 errors.

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Modified time

The modified time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.

Modified times are used in syncing and are fully supported. Note that if a modification time needs to be updated on an object then it will create a new version of the object.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

    Character   Value   Replacement
    ---------   -----   -----------
    \           0x5C    ＼

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Note that in 2020-05 Backblaze started allowing \ characters in file names. Rclone hasn't changed its encoding as this could cause syncs to re-transfer files. If you want rclone not to replace \ then see the --b2-encoding flag below and remove the BackSlash from the string. This can be set in the config.

SHA1 checksums

The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process.

Large files (bigger than the limit in --b2-upload-cutoff) which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.

For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See the overview for exactly which remotes support SHA1.

Sources which don't support SHA1, in particular crypt, will upload large files without SHA1 checksums. This may be fixed in the future (see #1767).

File sizes below --b2-upload-cutoff will always have an SHA1 regardless of the source.

Transfers

Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD-equipped laptop the optimum setting is about --transfers 32 though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4 is definitely too low for Backblaze B2 though.

Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most --transfers of these in use at any moment, so this sets the upper limit on the memory used.

Versions

When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the --b2-hard-delete flag which would permanently remove the file instead of hiding it.

Old versions of files, where available, are visible using the --b2-versions flag.

NB Note that --b2-versions does not work with crypt at the moment #1627. Using --backup-dir with rclone is the recommended way of working around this.
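As a sketch of that workaround (the bucket and directory names here are illustrative), a sync which keeps overwritten or deleted files in a separate directory instead of relying on B2's hidden versions might look like:

    rclone sync /home/local/directory remote:bucket/current --backup-dir remote:bucket/archive

Note that --backup-dir must point at the same remote as the destination.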
If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg rclone cleanup remote:bucket/path/to/stuff.

Note that cleanup will remove partially uploaded files from the bucket if they are more than a day old.

When you purge a bucket, the current and the old versions will be deleted, then the bucket will be deleted.

However, delete will cause the current versions of the files to become hidden old versions.

Here is a session showing the listing and retrieval of an old version followed by a cleanup of the old versions.

Show current version and all the versions with --b2-versions flag.

    $ rclone -q ls b2:cleanup-test
            9 one.txt

    $ rclone -q --b2-versions ls b2:cleanup-test
            9 one.txt
            8 one-v2016-07-04-141032-000.txt
           16 one-v2016-07-04-141003-000.txt
           15 one-v2016-07-02-155621-000.txt

Retrieve an old version

    $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp

    $ ls -l /tmp/one-v2016-07-04-141003-000.txt
    -rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt

Clean up all the old versions and show that they've gone.

    $ rclone -q cleanup b2:cleanup-test

    $ rclone -q ls b2:cleanup-test
            9 one.txt

    $ rclone -q --b2-versions ls b2:cleanup-test
            9 one.txt

Data usage

It is useful to know how many requests are sent to the server in different scenarios.

All copy commands send the following 4 requests:

    /b2api/v1/b2_authorize_account
    /b2api/v1/b2_create_bucket
    /b2api/v1/b2_list_buckets
    /b2api/v1/b2_list_file_names

The b2_list_file_names request will be sent once for every 1k files in the remote path, providing the checksum and modification time of the listed files. As of version 1.33 issue #818 causes extra requests to be sent when using B2 with Crypt. When a copy operation does not require any files to be uploaded, no more requests will be sent.

Uploading files that do not require chunking will send 2 requests per file upload:

    /b2api/v1/b2_get_upload_url
    /b2api/v1/b2_upload_file/

Uploading files requiring chunking will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk:

    /b2api/v1/b2_start_large_file
    /b2api/v1/b2_get_upload_part_url
    /b2api/v1/b2_upload_part/
    /b2api/v1/b2_finish_large_file

Versions

Versions can be viewed with the --b2-versions flag. When it is set rclone will show and act on older versions of files. For example

Listing without --b2-versions

    $ rclone -q ls b2:cleanup-test
            9 one.txt

And with

    $ rclone -q --b2-versions ls b2:cleanup-test
            9 one.txt
            8 one-v2016-07-04-141032-000.txt
           16 one-v2016-07-04-141003-000.txt
           15 one-v2016-07-02-155621-000.txt

Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.

Note that when using --b2-versions no file write operations are permitted, so you can't upload files or delete them.

B2 and rclone link

Rclone supports generating file share links for private B2 buckets.
They can either be for a file, for example:

    ./rclone link B2:bucket/path/to/file.txt
    https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx

or if run on a directory you will get:

    ./rclone link B2:bucket/path
    https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx

You can then use the authorization token (the part of the URL from ?Authorization= onwards) on any file path under that directory. For example:

    https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
    https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
    https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx

Standard Options

Here are the standard options specific to b2 (Backblaze B2).

--b2-account

Account ID or Application Key ID

- Config: account
- Env Var: RCLONE_B2_ACCOUNT
- Type: string
- Default: ""

--b2-key

Application Key

- Config: key
- Env Var: RCLONE_B2_KEY
- Type: string
- Default: ""

--b2-hard-delete

Permanently delete files on remote removal, otherwise hide files.

- Config: hard_delete
- Env Var: RCLONE_B2_HARD_DELETE
- Type: bool
- Default: false

Advanced Options

Here are the advanced options specific to b2 (Backblaze B2).

--b2-endpoint

Endpoint for the service. Leave blank normally.

- Config: endpoint
- Env Var: RCLONE_B2_ENDPOINT
- Type: string
- Default: ""

--b2-test-mode

A flag string for X-Bz-Test-Mode header for debugging.

This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors:

- "fail_some_uploads"
- "expire_some_account_authorization_tokens"
- "force_cap_exceeded"

These will be set in the "X-Bz-Test-Mode" header which is documented in the b2 integrations checklist.

- Config: test_mode
- Env Var: RCLONE_B2_TEST_MODE
- Type: string
- Default: ""

--b2-versions

Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can't upload files or delete them.

- Config: versions
- Env Var: RCLONE_B2_VERSIONS
- Type: bool
- Default: false

--b2-upload-cutoff

Cutoff for switching to chunked upload.

Files above this size will be uploaded in chunks of "--b2-chunk-size".

This value should be set no larger than 4.657GiB (== 5GB).

- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200M

--b2-copy-cutoff

Cutoff for switching to multipart copy.

Any files larger than this that need to be server side copied will be copied in chunks of this size.

The minimum is 0 and the maximum is 4.6GB.

- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
- Default: 4G

--b2-chunk-size

Upload chunk size. Must fit in memory.

When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of "--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum size.

- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
- Default: 96M

--b2-disable-checksum

Disable checksums for large (> upload cutoff) files.

Normally rclone will calculate the SHA1 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.

- Config: disable_checksum
- Env Var: RCLONE_B2_DISABLE_CHECKSUM
- Type: bool
- Default: false

--b2-download-url

Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze.

- Config: download_url
- Env Var: RCLONE_B2_DOWNLOAD_URL
- Type: string
- Default: ""

--b2-download-auth-duration

Time before the authorization token will expire in s or suffix ms|s|m|h|d.

The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week.

- Config: download_auth_duration
- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
- Type: Duration
- Default: 1w

--b2-memory-pool-flush-time

How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.

- Config: memory_pool_flush_time
- Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME
- Type: Duration
- Default: 1m0s

--b2-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool.

- Config: memory_pool_use_mmap
- Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP
- Type: bool
- Default: false

--b2-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

- Config: encoding
- Env Var: RCLONE_B2_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

Box

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

The initial setup for Box involves getting a token from Box which you can do either in your browser, or with a config.json downloaded from Box to use JWT authentication. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:

    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Box \ "box"
    [snip]
    Storage> box
    Box App Client Id - leave blank normally.
    client_id>
    Box App Client Secret - leave blank normally.
    client_secret>
    Box App config.json location
    Leave blank normally.
    Enter a string value. Press Enter for the default ("").
    box_config_file>
    Box App Primary Access Token
    Leave blank normally.
    Enter a string value. Press Enter for the default ("").
    access_token>
    Enter a string value. Press Enter for the default ("user").
    Choose a number from below, or type in your own value
    1 / Rclone should act on behalf of a user \ "user"
    2 / Rclone should act on behalf of a service account \ "enterprise"
    box_sub_type>
    Remote config
    Use auto config?
     * Say Y if not sure
     * Say N if you are working on a remote or headless machine
    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    Log in and authorize rclone for access
    Waiting for code...
    Got code
    --------------------
    [remote]
    client_id =
    client_secret =
    token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Box.
This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your Box

    rclone lsd remote:

List all the files in your Box

    rclone ls remote:

To copy a local directory to a Box directory called backup

    rclone copy /home/source remote:backup

Using rclone with an Enterprise account with SSO

If you have an "Enterprise" account type with Box with single sign on (SSO), you need to create a password to use Box with rclone. This can be done at your Enterprise Box account by going to Settings, "Account" Tab, and then setting the password in the "Authentication" field.

Once you have done this, you can set up your Enterprise Box account using the same procedure detailed above, using the password you have just set.

Invalid refresh token

According to the box docs:

    Each refresh_token is valid for one use in 60 days.

This means that if you

- Don't use the box remote for 60 days
- Copy the config file with a box refresh token in and use it in two places
- Get an error on a token refresh

then rclone will return an error which includes the text Invalid refresh token.

To fix this you will need to use oauth2 again to update the refresh token. You can use the methods in the remote setup docs, bearing in mind that if you use the "copy the config file" method, you should not use that remote on the computer you did the authentication on.

Here is how to do it.

    $ rclone config
    Current remotes:

    Name   Type
    ====   ====
    remote box

    e) Edit existing remote
    n) New remote
    d) Delete remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    e/n/d/r/c/s/q> e
    Choose a number from below, or type in an existing value
    1 > remote
    remote> remote
    --------------------
    [remote]
    type = box
    token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
    --------------------
    Edit remote
    Value "client_id" = ""
    Edit? (y/n)>
    y) Yes
    n) No
    y/n> n
    Value "client_secret" = ""
    Edit? (y/n)>
    y) Yes
    n) No
    y/n> n
    Remote config
    Already have a token - refresh?
    y) Yes
    n) No
    y/n> y
    Use auto config?
     * Say Y if not sure
     * Say N if you are working on a remote or headless machine
    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    Log in and authorize rclone for access
    Waiting for code...
    Got code
    --------------------
    [remote]
    type = box
    token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

Modified time and hashes

Box allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

Box supports SHA1 type hashes, so you can use the --checksum flag (see the example after the tables below).

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

    Character   Value   Replacement
    ---------   -----   -----------
    \           0x5C    ＼

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

    Character   Value   Replacement
    ---------   -----   -----------
    SP          0x20    ␠

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
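As mentioned above, Box supports SHA1 hashes, so a sync which compares checksums rather than modification times (the paths here are illustrative) could be run as:

    rclone sync --checksum /home/source remote:backup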
Transfers

For files above 50MB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8MB so increasing --transfers will increase memory use.

Deleting files

Depending on the enterprise settings for your user, the item will either be actually deleted from Box or moved to the trash.

Emptying the trash is supported via the rclone cleanup command, however this deletes every trashed file and folder individually so it may take a very long time. Emptying the trash via the WebUI does not have this limitation so it is advised to empty the trash via the WebUI.

Root folder ID

You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your Box drive.

Normally you will leave this blank and rclone will determine the correct root to use itself.

However you can set this to restrict rclone to a specific folder hierarchy.

In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the Box web interface.

So if the folder you want rclone to use has a URL which looks like https://app.box.com/folder/11xxxxxxxxx8 in the browser, then you use 11xxxxxxxxx8 as the root_folder_id in the config.

Standard Options

Here are the standard options specific to box (Box).

--box-client-id

OAuth Client Id Leave blank normally.

- Config: client_id
- Env Var: RCLONE_BOX_CLIENT_ID
- Type: string
- Default: ""

--box-client-secret

OAuth Client Secret Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_BOX_CLIENT_SECRET
- Type: string
- Default: ""

--box-box-config-file

Box App config.json location Leave blank normally.

Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

- Config: box_config_file
- Env Var: RCLONE_BOX_BOX_CONFIG_FILE
- Type: string
- Default: ""

--box-access-token

Box App Primary Access Token Leave blank normally.

- Config: access_token
- Env Var: RCLONE_BOX_ACCESS_TOKEN
- Type: string
- Default: ""

--box-box-sub-type

- Config: box_sub_type
- Env Var: RCLONE_BOX_BOX_SUB_TYPE
- Type: string
- Default: "user"
- Examples:
    - "user" - Rclone should act on behalf of a user
    - "enterprise" - Rclone should act on behalf of a service account

Advanced Options

Here are the advanced options specific to box (Box).

--box-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_BOX_TOKEN
- Type: string
- Default: ""

--box-auth-url

Auth server URL. Leave blank to use the provider defaults.

- Config: auth_url
- Env Var: RCLONE_BOX_AUTH_URL
- Type: string
- Default: ""

--box-token-url

Token server url. Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_BOX_TOKEN_URL
- Type: string
- Default: ""

--box-root-folder-id

Fill in for rclone to use a non root folder as its starting point.

- Config: root_folder_id
- Env Var: RCLONE_BOX_ROOT_FOLDER_ID
- Type: string
- Default: "0"

--box-upload-cutoff

Cutoff for switching to multipart upload (>= 50MB).

- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 50M

--box-commit-retries

Max number of times to try committing a multipart file.

- Config: commit_retries
- Env Var: RCLONE_BOX_COMMIT_RETRIES
- Type: int
- Default: 100

--box-encoding

This sets the encoding for the backend.
See: the encoding section in the overview for more info.

- Config: encoding
- Env Var: RCLONE_BOX_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot

Limitations

Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Box file names can't have the \ character in them. rclone maps this to and from an identical looking unicode equivalent ＼ (U+FF3C Fullwidth Reverse Solidus).

Box only supports filenames up to 255 characters in length.

Cache (BETA)

The cache remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount.

Status

The cache backend code is working but it currently doesn't have a maintainer so there are outstanding bugs which aren't getting fixed.

The cache backend is due to be phased out in favour of the VFS caching layer eventually which is more tightly integrated into rclone.

Until this happens we recommend only using the cache backend if you find you can't work without it. There are many docs online describing the use of the cache backend to minimize API hits and by-and-large these are out of date and the cache backend isn't needed in those scenarios any more.

Setup

To get started you just need to have an existing remote which can be configured with cache.

Here is an example of how to make a remote called test-cache. First run:

    rclone config

This will guide you through an interactive setup process:

    No remotes found - make a new one
    n) New remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    n/r/c/s/q> n
    name> test-cache
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Cache a remote \ "cache"
    [snip]
    Storage> cache
    Remote to cache.
    Normally should contain a ':' and a path, eg "myremote:path/to/dir",
    "myremote:bucket" or maybe "myremote:" (not recommended).
    remote> local:/test
    Optional: The URL of the Plex server
    plex_url> http://127.0.0.1:32400
    Optional: The username of the Plex user
    plex_username> dummyusername
    Optional: The password of the Plex user
    y) Yes type in my own password
    g) Generate random password
    n) No leave this optional password blank
    y/g/n> y
    Enter the password:
    password:
    Confirm the password:
    password:
    The size of a chunk. Lower value good for slow connections but can affect seamless reading.
    Default: 5M
    Choose a number from below, or type in your own value
    1 / 1MB \ "1m"
    2 / 5 MB \ "5M"
    3 / 10 MB \ "10M"
    chunk_size> 2
    How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
    Accepted units are: "s", "m", "h".
    Default: 5m
    Choose a number from below, or type in your own value
    1 / 1 hour \ "1h"
    2 / 24 hours \ "24h"
    3 / 48 hours \ "48h"
    info_age> 2
    The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
    Default: 10G
    Choose a number from below, or type in your own value
    1 / 500 MB \ "500M"
    2 / 1 GB \ "1G"
    3 / 10 GB \ "10G"
    chunk_total_size> 3
    Remote config
    --------------------
    [test-cache]
    remote = local:/test
    plex_url = http://127.0.0.1:32400
    plex_username = dummyusername
    plex_password = *** ENCRYPTED ***
    chunk_size = 5M
    info_age = 48h
    chunk_total_size = 10G

You can then use it like this,

List directories in top level of your drive

    rclone lsd test-cache:

List all the files in your drive

    rclone ls test-cache:

To start a cached mount

    rclone mount --allow-other test-cache: /var/tmp/test-cache

Write Features

Offline uploading

In an effort to make writing through cache more reliable, the backend now supports this feature which can be activated by specifying a cache-tmp-upload-path.

A file goes through these states when using this feature:

1. An upload is started (usually by copying a file on the cache remote)
2. When the copy to the temporary location is complete the file is part of the cached remote and looks and behaves like any other file (reading included)
3. After cache-tmp-wait-time passes and the file is next in line, rclone move is used to move the file to the cloud provider
4. Reading the file still works during the upload but most modifications on it will be prohibited
5. Once the move is complete the file is unlocked for modifications as it becomes like any other regular file
6. If the file is being read through cache when it's actually deleted from the temporary path then cache will simply swap the source to the cloud provider without interrupting the reading (a small blip can happen though)

Files are uploaded in sequence and only one file is uploaded at a time. Uploads will be stored in a queue and be processed based on the order they were added. The queue and the temporary storage are persistent across restarts but can be cleared on startup with the --cache-db-purge flag.

Write Support

Writes are supported through cache. One caveat is that a mounted cache remote does not add any retry or fallback mechanism to the upload operation. This will depend on the implementation of the wrapped remote. Consider using Offline uploading for reliable writes.

One special case is covered with cache-writes which, when enabled, will cache the file data at the same time as the upload, making it available from the cache store immediately once the upload is finished.

Read Features

Multiple connections

To counter the high latency between a local PC where rclone is running and cloud providers, the cache remote can split multiple requests to the cloud provider for smaller file chunks and combine them together locally so that they are available almost immediately, before the reader needs them.

This is similar to buffering when media files are played online. Rclone will stay around the current marker but always try its best to stay ahead and prepare the data before it is needed.

Plex Integration

There is a direct integration with Plex which allows cache to detect during reading if the file is in playback or not. This helps cache to adapt how it queries the cloud provider depending on what it is needed for. Scans will use a minimum number of workers (1) while during a confirmed playback cache will deploy the configured number of workers. This integration opens the doorway to additional performance improvements which will be explored in the near future.

NOTE: If Plex options are not configured, cache will function with its configured options without adapting any of its settings.

How to enable?
Run rclone config and add all the Plex options (endpoint, username and password) in your remote and it will be automatically enabled.

Affected settings:

- cache-workers: Configured value during confirmed playback or 1 all the other times

Certificate Validation

When the Plex server is configured to only accept secure connections, it is possible to use .plex.direct URLs to ensure certificate validation succeeds. These URLs are used by Plex internally to connect to the Plex server securely.

The format for these URLs is the following:

    https://ip-with-dots-replaced.server-hash.plex.direct:32400/

The ip-with-dots-replaced part can be any IPv4 address, where the dots have been replaced with dashes, e.g. 127.0.0.1 becomes 127-0-0-1.

To get the server-hash part, the easiest way is to visit

    https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token

This page will list all the available Plex servers for your account with at least one .plex.direct link for each. Copy one URL and replace the IP address with the desired address. This can be used as the plex_url value.

Known issues

Mount and --dir-cache-time

--dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the cache backend, it will manage its own entries based on the configured time.

To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct one, try to set --dir-cache-time to a lower time than --cache-info-age. Default values are already configured in this way.

Windows support - Experimental

There are a couple of issues with Windows mount functionality that still require some investigation. It should be considered experimental for now as fixes come in for this OS.

Most of the issues seem to be related to the differences between filesystems on Linux flavors and Windows, as cache is heavily dependent on them. Any reports or feedback on how cache behaves on this OS is greatly appreciated.

- https://github.com/rclone/rclone/issues/1935
- https://github.com/rclone/rclone/issues/1907
- https://github.com/rclone/rclone/issues/1834

Risk of throttling

Future iterations of the cache backend will make use of the pooling functionality of the cloud provider to synchronize and at the same time make writing through it more tolerant to failures. There are a couple of enhancements being tracked to add these, but in the meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts.

Some recommendations:

- don't use a very small interval for entry information (--cache-info-age)
- while writes aren't yet optimised, you can still write through cache which gives you the advantage of adding the file in the cache at the same time if configured to do so.

Future enhancements:

- https://github.com/rclone/rclone/issues/1937
- https://github.com/rclone/rclone/issues/1936

cache and crypt

One common scenario is to keep your data encrypted in the cloud provider using the crypt remote. crypt uses a similar technique to wrap around an existing remote and handles this translation in a seamless way.

There is an issue with wrapping the remotes in this order: CLOUD REMOTE -> CRYPT -> CACHE

During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we're downloading the full file instead of small chunks.
Organizing the remotes in this order yields better results: CLOUD REMOTE -> CACHE -> CRYPT absolute remote paths cache can not differentiate between relative and absolute paths for the wrapped remote. Any path given in the remote config setting and on the command line will be passed to the wrapped remote as is, but for storing the chunks on disk the path will be made relative by removing any leading / character. This behavior is irrelevant for most backend types, but there are backends where a leading / changes the effective directory, e.g. in the sftp backend paths starting with a / are relative to the root of the SSH server and paths without are relative to the user home directory. As a result sftp:bin and sftp:/bin will share the same cache folder, even if they represent a different directory on the SSH server. Cache and Remote Control (--rc) Cache supports the new --rc mode in rclone and can be remote controlled through the following end points: By default, the listener is disabled if you do not add the flag. rc cache/expire Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt. Params: - REMOTE = path to remote (REQUIRED) - WITHDATA = true/false to delete cached data (chunks) as well _(optional, false by default)_ Standard Options Here are the standard options specific to cache (Cache a remote). --cache-remote Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). - Config: remote - Env Var: RCLONE_CACHE_REMOTE - Type: string - Default: "" --cache-plex-url The URL of the Plex server - Config: plex_url - Env Var: RCLONE_CACHE_PLEX_URL - Type: string - Default: "" --cache-plex-username The username of the Plex user - Config: plex_username - Env Var: RCLONE_CACHE_PLEX_USERNAME - Type: string - Default: "" --cache-plex-password The password of the Plex user NB Input to this must be obscured - see rclone obscure. - Config: plex_password - Env Var: RCLONE_CACHE_PLEX_PASSWORD - Type: string - Default: "" --cache-chunk-size The size of a chunk (partial file data). Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur. - Config: chunk_size - Env Var: RCLONE_CACHE_CHUNK_SIZE - Type: SizeSuffix - Default: 5M - Examples: - "1m" - 1MB - "5M" - 5 MB - "10M" - 10 MB --cache-info-age How long to cache file structure information (directory listings, file size, times etc). If all write operations are done through the cache then you can safely make this value very large as the cache store will also be updated in real time. - Config: info_age - Env Var: RCLONE_CACHE_INFO_AGE - Type: Duration - Default: 6h0m0s - Examples: - "1h" - 1 hour - "24h" - 24 hours - "48h" - 48 hours --cache-chunk-total-size The total size that the chunks can take up on the local disk. If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value. - Config: chunk_total_size - Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE - Type: SizeSuffix - Default: 10G - Examples: - "500M" - 500 MB - "1G" - 1 GB - "10G" - 10 GB Advanced Options Here are the advanced options specific to cache (Cache a remote). 
--cache-plex-token

The plex token for authentication - auto set normally

- Config: plex_token
- Env Var: RCLONE_CACHE_PLEX_TOKEN
- Type: string
- Default: ""

--cache-plex-insecure

Skip all certificate verification when connecting to the Plex server

- Config: plex_insecure
- Env Var: RCLONE_CACHE_PLEX_INSECURE
- Type: string
- Default: ""

--cache-db-path

Directory to store file structure metadata DB. The remote name is used as the DB file name.

- Config: db_path
- Env Var: RCLONE_CACHE_DB_PATH
- Type: string
- Default: "$HOME/.cache/rclone/cache-backend"

--cache-chunk-path

Directory to cache chunk files.

Path to where partial file data (chunks) are stored locally. The remote name is appended to the final path.

This config follows the "--cache-db-path". If you specify a custom location for "--cache-db-path" and don't specify one for "--cache-chunk-path" then "--cache-chunk-path" will use the same path as "--cache-db-path".

- Config: chunk_path
- Env Var: RCLONE_CACHE_CHUNK_PATH
- Type: string
- Default: "$HOME/.cache/rclone/cache-backend"

--cache-db-purge

Clear all the cached data for this remote on start.

- Config: db_purge
- Env Var: RCLONE_CACHE_DB_PURGE
- Type: bool
- Default: false

--cache-chunk-clean-interval

How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. If you find that the cache goes over "cache-chunk-total-size" too often then try to lower this value to force it to perform cleanups more often.

- Config: chunk_clean_interval
- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
- Type: Duration
- Default: 1m0s

--cache-read-retries

How many times to retry a read from cache storage.

Since reading from a cache stream is independent from downloading file data, readers can get to a point where there's no more data in the cache. Most of the time this can indicate a connectivity issue if cache isn't able to provide file data anymore.

For really slow connections, increase this to a point where the stream is able to provide data but your experience will be very stuttery.

- Config: read_retries
- Env Var: RCLONE_CACHE_READ_RETRIES
- Type: int
- Default: 10

--cache-workers

How many workers should run in parallel to download chunks.

Higher values will mean more parallel processing (more CPU needed) and more concurrent requests on the cloud provider. This impacts several aspects like the cloud provider API limits and more stress on the hardware that rclone runs on, but it also means that streams will be more fluid and data will be available much faster to readers.

NOTE: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use.

- Config: workers
- Env Var: RCLONE_CACHE_WORKERS
- Type: int
- Default: 4

--cache-chunk-no-memory

Disable the in-memory cache for storing chunks during streaming.

By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible.

This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like "cache-chunk-size" and "cache-workers" this footprint can increase if there are parallel streams too (multiple files being read at the same time).

If the hardware permits it, use this feature to provide an overall better performance during streaming, but it can also be disabled if RAM is not available on the local machine.
- Config: chunk_no_memory
- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
- Type: bool
- Default: false

--cache-rps

Limits the number of requests per second to the source FS (-1 to disable)

This setting places a hard limit on the number of requests per second that cache will be doing to the cloud provider remote and will try to respect that value by setting waits between reads.

If you find that you're getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that.

A good balance of all the other settings should make this setting unnecessary but it is available to set for more special cases.

NOTE: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass.

- Config: rps
- Env Var: RCLONE_CACHE_RPS
- Type: int
- Default: -1

--cache-writes

Cache file data on writes through the FS

If you need to read files immediately after you upload them through cache you can enable this flag to have their data stored in the cache store at the same time during upload.

- Config: writes
- Env Var: RCLONE_CACHE_WRITES
- Type: bool
- Default: false

--cache-tmp-upload-path

Directory to keep temporary files until they are uploaded.

This is the path that cache will use as temporary storage for new files that need to be uploaded to the cloud provider.

Specifying a value will enable this feature. Without it, it is completely disabled and files will be uploaded directly to the cloud provider.

- Config: tmp_upload_path
- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
- Type: string
- Default: ""

--cache-tmp-wait-time

How long should files be stored in local cache before being uploaded

This is the duration that a file must wait in the temporary location cache-tmp-upload-path before it is selected for upload.

Note that only one file is uploaded at a time and it can take longer to start the upload if a queue has formed for this purpose.

- Config: tmp_wait_time
- Env Var: RCLONE_CACHE_TMP_WAIT_TIME
- Type: Duration
- Default: 15s

--cache-db-wait-time

How long to wait for the DB to be available - 0 is unlimited

Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error.

If you set it to 0 then it will wait forever.

- Config: db_wait_time
- Env Var: RCLONE_CACHE_DB_WAIT_TIME
- Type: Duration
- Default: 1s

Backend commands

Here are the commands specific to the cache backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the "rclone backend" command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

stats

Print stats on the cache backend in JSON format.

    rclone backend stats remote: [options] [<arguments>+]

Chunker (BETA)

The chunker overlay transparently splits large files into smaller chunks during upload to the wrapped remote and transparently assembles them back when the file is downloaded. This makes it possible to effectively overcome size limits imposed by storage providers.

To use it, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote.

First check your chosen remote is working - we'll call it remote:path here. Note that anything inside remote:path will be chunked and anything outside won't.
This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote, eg s3:bucket. Now configure chunker using rclone config. We will call this one overlay to separate it from the remote itself. No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> overlay Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Transparently chunk/split large files \ "chunker" [snip] Storage> chunker Remote to chunk/unchunk. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). Enter a string value. Press Enter for the default (""). remote> remote:path Files larger than chunk size will be split in chunks. Enter a size with suffix k,M,G,T. Press Enter for the default ("2G"). chunk_size> 100M Choose how chunker handles hash sums. All modes but "none" require metadata. Enter a string value. Press Enter for the default ("md5"). Choose a number from below, or type in your own value 1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise \ "none" 2 / MD5 for composite files \ "md5" 3 / SHA1 for composite files \ "sha1" 4 / MD5 for all files \ "md5all" 5 / SHA1 for all files \ "sha1all" 6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported \ "md5quick" 7 / Similar to "md5quick" but prefers SHA1 over MD5 \ "sha1quick" hash_type> md5 Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [overlay] type = chunker remote = remote:path chunk_size = 100M hash_type = md5 -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Specifying the remote In normal use, make sure the remote has a : in. If you specify the remote without a : then rclone will use a local directory of that name. So if you use a remote of /path/to/secret/files then rclone will chunk stuff in that directory. If you use a remote of name then rclone will put files in a directory called name in the current directory. Chunking When rclone starts a file upload, chunker checks the file size. If it doesn't exceed the configured chunk size, chunker will just pass the file to the wrapped remote. If a file is large, chunker will transparently cut data in pieces with temporary names and stream them one by one, on the fly. Each data chunk will contain the specified number of bytes, except for the last one which may have less data. If the file size is unknown in advance (this is called a streaming upload), chunker will internally create a temporary copy, record its size and repeat the above process. When upload completes, temporary chunk files are finally renamed. This scheme guarantees that operations can be run in parallel and look atomic from the outside. A similar method with hidden temporary chunks is used for other operations (copy/move/rename etc). If an operation fails, hidden chunks are normally destroyed, and the target composite file stays intact. When a composite file download is requested, chunker transparently assembles it by concatenating data chunks in order. As the split is trivial one could even manually concatenate data chunks together to obtain the original content. When an rclone list command scans a directory on the wrapped remote, the potential chunk files are accounted for, grouped and assembled into composite directory entries. Any temporary chunks are hidden.
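As an illustrative sketch (the file name, the data sizes and the size of the metadata object are all made up), uploading a 250M file through the overlay configured above, with chunk_size 100M and the default simplejson metadata, might leave something like this on the wrapped remote - the small like-named object is the metadata, the rest are the data chunks:

$ rclone ls overlay:
262144000 bigfile.dat

$ rclone ls remote:path
       81 bigfile.dat
104857600 bigfile.dat.rclone_chunk.001
104857600 bigfile.dat.rclone_chunk.002
 52428800 bigfile.dat.rclone_chunk.003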
List and other commands can sometimes come across composite files with missing or invalid chunks, eg shadowed by a like-named directory or another file. This usually means that the wrapped file system has been directly tampered with or damaged. If chunker detects a missing chunk it will by default print a warning, skip the whole incomplete group of chunks, but proceed with the current command. You can set the --chunker-fail-hard flag to have commands abort with an error message in such cases. Chunk names The default chunk name format is *.rclone_chunk.###, hence by default chunk names are BIG_FILE_NAME.rclone_chunk.001, BIG_FILE_NAME.rclone_chunk.002 etc. You can configure another name format using the name_format configuration file option. The format uses asterisk * as a placeholder for the base file name and one or more consecutive hash characters # as a placeholder for sequential chunk number. There must be one and only one asterisk. The number of consecutive hash characters defines the minimum length of a string representing a chunk number. If the decimal chunk number has fewer digits than the number of hashes, it is left-padded by zeros. If the decimal string is longer, it is left intact. By default numbering starts from 1 but there is another option that allows the user to start from 0, eg for compatibility with legacy software. For example, if name format is big_*-##.part and original file name is data.txt and numbering starts from 0, then the first chunk will be named big_data.txt-00.part, the 99th chunk will be big_data.txt-98.part and the 302nd chunk will become big_data.txt-301.part. Note that list assembles composite directory entries only when chunk names match the configured format and treats non-conforming file names as normal non-chunked files. Metadata Besides data chunks chunker will by default create a metadata object for a composite file. The object is named after the original file. Chunker allows the user to disable metadata completely (the none format). Note that metadata is normally not created for files smaller than the configured chunk size. This may change in future rclone releases. Simple JSON metadata format This is the default format. It supports hash sums and chunk validation for composite files. Meta objects carry the following fields: - ver - version of format, currently 1 - size - total size of composite file - nchunks - number of data chunks in file - md5 - MD5 hashsum of composite file (if present) - sha1 - SHA1 hashsum (if present) There is no field for composite file name as it's simply equal to the name of the meta object on the wrapped remote. Please refer to respective sections for details on hashsums and modified time handling. No metadata You can disable meta objects by setting the meta format option to none. In this mode chunker will scan the directory for all files that follow the configured chunk name format, group them by detecting chunks with the same base name and show group names as virtual composite files. This method is more prone to missing chunk errors (especially a missing last chunk) than the format with metadata enabled. Hashsums Chunker supports hashsums only when compatible metadata is present. Hence, if you choose a metadata format of none, chunker will report hashsum as UNSUPPORTED. Please note that by default metadata is stored only for composite files. If a file is smaller than the configured chunk size, chunker will transparently redirect hash requests to the wrapped remote, so support depends on that.
You will see the empty string as a hashsum of requested type for small files if the wrapped remote doesn't support it. Many storage backends support MD5 and SHA1 hash types, and so does chunker. With chunker you can choose one or another but not both. MD5 is set by default as the most supported type. Since chunker keeps hashes for composite files and falls back to the wrapped remote hash for non-chunked ones, we advise you to choose the same hash type as supported by the wrapped remote so that your file listings look coherent. If your storage backend does not support MD5 or SHA1 but you need consistent file hashing, configure chunker with md5all or sha1all. These two modes guarantee the given hash type for all files. If the wrapped remote doesn't support it, chunker will then add metadata to all files, even small ones. However, this can double the number of small files in storage and incur additional service charges. You can even use chunker to force md5/sha1 support in any other remote at the expense of sidecar meta objects by setting eg hash_type=sha1all to force hashsums and chunk_size=1P to effectively disable chunking. Normally, when a file is copied to a chunker controlled remote, chunker will ask the file source for a compatible file hash and revert to on-the-fly calculation if none is found. This involves some CPU overhead but provides a guarantee that the given hashsum is available. Also, chunker will reject a server-side copy or move operation if the source and destination hashsum types are different, resulting in a fallback to normal copy and extra network bandwidth use. In some rare cases this may be undesired, so chunker provides two optional choices: sha1quick and md5quick. If the source does not support the primary hash type and the quick mode is enabled, chunker will try to fall back to the secondary type. This will save CPU and bandwidth but can result in empty hashsums at destination. Beware of consequences: the sync command will revert (sometimes silently) to time/size comparison if compatible hashsums between source and target are not found. Modified time Chunker stores modification times using the wrapped remote so support depends on that. For a small non-chunked file the chunker overlay simply manipulates the modification time of the wrapped remote file. For a composite file with metadata chunker will get and set the modification time of the metadata object on the wrapped remote. If a file is chunked but the metadata format is none then chunker will use the modification time of the first data chunk. Migrations The idiomatic way to migrate to a different chunk size, hash type or chunk naming scheme is to: - Collect all your chunked files under a directory and have your chunker remote point to it. - Create another directory (most probably on the same cloud storage) and configure a new remote with desired metadata format, hash type, chunk naming etc. - Now run rclone sync -i oldchunks: newchunks: and all your data will be transparently converted in transfer. This may take some time, yet chunker will try server-side copy if possible. - After checking data integrity you may remove the configuration section of the old remote. If rclone gets killed during a long operation on a big composite file, hidden temporary chunks may stay in the directory. They will not be shown by the list command but will eat up your account quota. Please note that the deletefile command deletes only active chunks of a file. As a workaround, you can use the remote of the wrapped file system to see them.
An easy way to get rid of hidden garbage is to copy the littered directory somewhere using the chunker remote and purge the original directory. The copy command will copy only active chunks while the purge will remove everything including garbage. Caveats and Limitations Chunker requires the wrapped remote to support server side move (or copy + delete) operations, otherwise it will explicitly refuse to start. This is because it internally renames temporary chunk files to their final names when an operation completes successfully. Chunker encodes the chunk number in the file name, so with the default name_format setting it adds 17 characters. Also chunker adds 7 characters of temporary suffix during operations. Many file systems limit the base file name (without path) to 255 characters. Using rclone's crypt remote as a base file system limits the file name to 143 characters. Thus, the maximum name length is 231 for most files and 119 for chunker-over-crypt. A user in need can change the name format to eg *.rcc## and save 11 characters (provided at most 99 chunks per file). Note that a move implemented using the copy-and-delete method may incur double charging with some cloud storage providers. Chunker will not automatically rename existing chunks when you run rclone config on a live remote and change the chunk name format. Beware that as a result some files which were treated as chunks before the change can pop up in directory listings as normal files, and vice versa. The same warning holds for the chunk size. If you desperately need to change critical chunking settings, you should run the data migration described above. If the wrapped remote is case insensitive, the chunker overlay will inherit that property (so you can't have a file called "Hello.doc" and "hello.doc" in the same directory). Standard Options Here are the standard options specific to chunker (Transparently chunk/split large files). --chunker-remote Remote to chunk/unchunk. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). - Config: remote - Env Var: RCLONE_CHUNKER_REMOTE - Type: string - Default: "" --chunker-chunk-size Files larger than chunk size will be split in chunks. - Config: chunk_size - Env Var: RCLONE_CHUNKER_CHUNK_SIZE - Type: SizeSuffix - Default: 2G --chunker-hash-type Choose how chunker handles hash sums. All modes but "none" require metadata. - Config: hash_type - Env Var: RCLONE_CHUNKER_HASH_TYPE - Type: string - Default: "md5" - Examples: - "none" - Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise - "md5" - MD5 for composite files - "sha1" - SHA1 for composite files - "md5all" - MD5 for all files - "sha1all" - SHA1 for all files - "md5quick" - Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported - "sha1quick" - Similar to "md5quick" but prefers SHA1 over MD5 Advanced Options Here are the advanced options specific to chunker (Transparently chunk/split large files). --chunker-name-format String format of chunk file names. The two placeholders are: base file name (*) and chunk number (#...). There must be one and only one asterisk and one or more consecutive hash characters. If the chunk number has fewer digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match the given format.
- Config: name_format - Env Var: RCLONE_CHUNKER_NAME_FORMAT - Type: string - Default: "*.rclone_chunk.###" --chunker-start-from Minimum valid chunk number. Usually 0 or 1. By default chunk numbers start from 1. - Config: start_from - Env Var: RCLONE_CHUNKER_START_FROM - Type: int - Default: 1 --chunker-meta-format Format of the metadata object or "none". By default "simplejson". Metadata is a small JSON file named after the composite file. - Config: meta_format - Env Var: RCLONE_CHUNKER_META_FORMAT - Type: string - Default: "simplejson" - Examples: - "none" - Do not use metadata files at all. Requires hash type "none". - "simplejson" - Simple JSON supports hash sums and chunk validation. - It has the following fields: ver, size, nchunks, md5, sha1. --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks. - Config: fail_hard - Env Var: RCLONE_CHUNKER_FAIL_HARD - Type: bool - Default: false - Examples: - "true" - Report errors and abort current command. - "false" - Warn user, skip incomplete file and proceed. Citrix ShareFile Citrix ShareFile is a secure file sharing and transfer service aimed at businesses. The initial setup for Citrix ShareFile involves getting a token from Citrix ShareFile which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value XX / Citrix Sharefile \ "sharefile" Storage> sharefile ** See help for sharefile backend at: https://rclone.org/sharefile/ ** ID of the root folder Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID). Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Access the Personal Folders. (Default) \ "" 2 / Access the Favorites folder. \ "favorites" 3 / Access all the shared folders. \ "allshared" 4 / Access all the individual connectors. \ "connectors" 5 / Access the home, favorites, and shared folders as well as the connectors. \ "top" root_folder_id> Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] type = sharefile endpoint = https://XXX.sharefile.com token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2019-09-30T19:41:45.878561877+01:00"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y See the remote setup docs for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Citrix ShareFile. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone like this, List directories in top level of your ShareFile rclone lsd remote: List all the files in your ShareFile rclone ls remote: To copy a local directory to a ShareFile directory called backup rclone copy /home/source remote:backup Paths may be as deep as required, eg remote:directory/subdirectory. Modified time and hashes ShareFile allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. ShareFile supports MD5 type hashes, so you can use the --checksum flag. Transfers For files above 128MB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 64MB so increasing --transfers will increase memory use. Limitations Note that ShareFile is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". ShareFile only supports filenames up to 256 characters in length. Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: Character Value Replacement ----------- ------- ------------- \ 0x5C ＼ * 0x2A ＊ < 0x3C ＜ > 0x3E ＞ ? 0x3F ？ : 0x3A ： | 0x7C ｜ " 0x22 ＂ File names can also not start or end with the following characters. These only get replaced if they are the first or last character in the name: Character Value Replacement ----------- ------- ------------- SP 0x20 ␠ . 0x2E ． Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Standard Options Here are the standard options specific to sharefile (Citrix Sharefile). --sharefile-root-folder-id ID of the root folder Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID). - Config: root_folder_id - Env Var: RCLONE_SHAREFILE_ROOT_FOLDER_ID - Type: string - Default: "" - Examples: - "" - Access the Personal Folders. (Default) - "favorites" - Access the Favorites folder. - "allshared" - Access all the shared folders. - "connectors" - Access all the individual connectors. - "top" - Access the home, favorites, and shared folders as well as the connectors. Advanced Options Here are the advanced options specific to sharefile (Citrix Sharefile). --sharefile-upload-cutoff Cutoff for switching to multipart upload. - Config: upload_cutoff - Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF - Type: SizeSuffix - Default: 128M --sharefile-chunk-size Upload chunk size. Must be a power of 2 >= 256k. Making this larger will improve performance, but note that each chunk is buffered in memory (one per transfer). Reducing this will reduce memory usage but decrease performance. - Config: chunk_size - Env Var: RCLONE_SHAREFILE_CHUNK_SIZE - Type: SizeSuffix - Default: 64M --sharefile-endpoint Endpoint for API calls. This is usually auto discovered as part of the oauth process, but can be set manually to something like: https://XXX.sharefile.com - Config: endpoint - Env Var: RCLONE_SHAREFILE_ENDPOINT - Type: string - Default: "" --sharefile-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_SHAREFILE_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot Crypt Rclone crypt remotes encrypt and decrypt other remotes.
To use crypt, first set up the underlying remote. Follow the rclone config instructions for that remote. crypt applied to a local pathname instead of a remote will encrypt and decrypt that directory, and can be used to encrypt USB removable drives. Before configuring the crypt remote, check the underlying remote is working. In this example the underlying remote is called remote:path. Anything inside remote:path will be encrypted and anything outside will not. In the case of a bucket based underlying remote (eg Amazon S3, B2, Swift) it is generally advisable to define the crypt remote in a bucket of the underlying remote, eg s3:bucket. If s3: alone is specified alongside file name encryption, rclone will encrypt the bucket name. Configure crypt using rclone config. In this example the crypt remote is called secret, to differentiate it from the underlying remote. No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> secret Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Encrypt/Decrypt a remote \ "crypt" [snip] Storage> crypt Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). remote> remote:path How to encrypt the filenames. Choose a number from below, or type in your own value 1 / Don't encrypt the file names. Adds a ".bin" extension only. \ "off" 2 / Encrypt the filenames see the docs for the details. \ "standard" 3 / Very simple filename obfuscation. \ "obfuscate" filename_encryption> 2 Option to either encrypt directory names or leave them intact. Choose a number from below, or type in your own value 1 / Encrypt directory names. \ "true" 2 / Don't encrypt directory names, leave them intact. \ "false" directory_name_encryption> 1 Password or pass phrase for encryption. y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Password or pass phrase for salt. Optional but recommended. Should be different to the previous password. y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> g Password strength in bits. 64 is just about memorable 128 is secure 1024 is the maximum Bits> 128 Your password is: JAsJvRcgR-_veXNfy_sGmQ Use this password? y) Yes n) No y/n> y Remote config -------------------- [secret] remote = remote:path filename_encryption = standard password = *** ENCRYPTED *** password2 = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y IMPORTANT The crypt password stored in rclone.conf is lightly obscured. That only protects it from cursory inspection. It is not secure unless encryption of rclone.conf is specified. A long passphrase is recommended, or rclone config can generate a random one. The obscured password is created using AES-CTR with a static key. The salt is stored verbatim at the beginning of the obscured password. This static key is shared between all versions of rclone. If you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible, but the obscured version will be different due to the different salt. Rclone does not encrypt - file length - this can be calculated within 16 bytes - modification time - used for syncing Specifying the remote In normal use, ensure the remote has a : in. If specified without, rclone uses a local directory of that name.
For example if a remote /path/to/secret/files is specified, rclone encrypts content to that directory. If a remote of name is specified, rclone targets a directory called name in the current directory. If remote remote:path/to/dir is specified, rclone stores encrypted files in path/to/dir on the remote. With file name encryption, files saved to secret:subdir/subfile are stored in the unencrypted path path/to/dir but the subdir/subfile element is encrypted. Example Create the following file structure using "standard" file name encryption. plaintext/ ├── file0.txt ├── file1.txt └── subdir ├── file2.txt ├── file3.txt └── subsubdir └── file4.txt Copy these to the remote, and list them $ rclone -q copy plaintext secret: $ rclone -q ls secret: 7 file1.txt 6 file0.txt 8 subdir/file2.txt 10 subdir/subsubdir/file4.txt 9 subdir/file3.txt The crypt remote looks like $ rclone -q ls remote:path 55 hagjclgavj2mbiqm6u6cnjjqcg 54 v05749mltvv1tf4onltun46gls 57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo 58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc 56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps The directory structure is preserved $ rclone -q ls secret:subdir 8 file2.txt 9 file3.txt 10 subsubdir/file4.txt Without file name encryption .bin extensions are added to underlying names. This prevents the cloud provider attempting to interpret file content. $ rclone -q ls remote:path 54 file0.txt.bin 57 subdir/file3.txt.bin 56 subdir/file2.txt.bin 58 subdir/subsubdir/file4.txt.bin 55 file1.txt.bin File name encryption modes Off - doesn't hide file names or directory structure - allows for longer file names (~246 characters) - can use sub paths and copy single files Standard - file names encrypted - file names can't be as long (~143 characters) - can use sub paths and copy single files - directory structure visible - identical file names will have identical uploaded names - can use shortcuts to shorten the directory recursion Obfuscation This is a simple "rotate" of the filename, with each file having a rot distance based on the filename. Rclone stores the distance at the beginning of the filename. A file called "hello" may become "53.jgnnq". Obfuscation is not a strong encryption of filenames, but it hinders automated scanning tools picking up on filename patterns. It is an intermediate between "off" and "standard" which allows for longer path segment names. There is a possibility with some unicode based filenames that the obfuscation is weak and may map lower case characters to upper case equivalents. Obfuscation cannot be relied upon for strong protection. - file names very lightly obfuscated - file names can be longer than standard encryption - can use sub paths and copy single files - directory structure visible - identical file names will have identical uploaded names Cloud storage systems have limits on file name length and total path length which rclone is more likely to breach using "Standard" file name encryption. Where file names are less than 156 characters in length issues should not be encountered, irrespective of cloud storage provider. An alternative, future rclone file name encryption mode may tolerate backend provider path length limits. Directory name encryption Crypt offers the option of encrypting dir names or leaving them intact.
There are two options: True Encrypts the whole file path including directory names Example: 1/12/123.txt is encrypted to p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0 False Only encrypts file names, skips directory names Example: 1/12/123.txt is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0 Modified time and hashes Crypt stores modification times using the underlying remote so support depends on that. Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator. Use the rclone cryptcheck command to check the integrity of a crypted remote instead of rclone check which can't check the checksums properly. Standard Options Here are the standard options specific to crypt (Encrypt/Decrypt a remote). --crypt-remote Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). - Config: remote - Env Var: RCLONE_CRYPT_REMOTE - Type: string - Default: "" --crypt-filename-encryption How to encrypt the filenames. - Config: filename_encryption - Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION - Type: string - Default: "standard" - Examples: - "standard" - Encrypt the filenames see the docs for the details. - "obfuscate" - Very simple filename obfuscation. - "off" - Don't encrypt the file names. Adds a ".bin" extension only. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. NB If filename_encryption is "off" then this option will do nothing. - Config: directory_name_encryption - Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION - Type: bool - Default: true - Examples: - "true" - Encrypt directory names. - "false" - Don't encrypt directory names, leave them intact. --crypt-password Password or pass phrase for encryption. NB Input to this must be obscured - see rclone obscure. - Config: password - Env Var: RCLONE_CRYPT_PASSWORD - Type: string - Default: "" --crypt-password2 Password or pass phrase for salt. Optional but recommended. Should be different to the previous password. NB Input to this must be obscured - see rclone obscure. - Config: password2 - Env Var: RCLONE_CRYPT_PASSWORD2 - Type: string - Default: "" Advanced Options Here are the advanced options specific to crypt (Encrypt/Decrypt a remote). --crypt-server-side-across-configs Allow server side operations (eg copy) to work across different crypt configs. Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it. This can be used, for example, to change file name encryption type without re-uploading all the data. Just make two crypt backends pointing to two different directories with the single changed parameter and use rclone move to move the files between the crypt remotes. - Config: server_side_across_configs - Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS - Type: bool - Default: false --crypt-show-mapping For all files listed show how the names encrypt. If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name. This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes. - Config: show_mapping - Env Var: RCLONE_CRYPT_SHOW_MAPPING - Type: bool - Default: false Backend commands Here are the commands specific to the crypt backend. 
Run them with rclone backend COMMAND remote: The help below will explain what arguments each command takes. See the "rclone backend" command for more info on how to pass options and arguments. These can be run on a running backend using the rc command backend/command. encode Encode the given filename(s) rclone backend encode remote: [options] [+] This encodes the filenames given as arguments returning a list of strings of the encoded results. Usage Example: rclone backend encode crypt: file1 [file2...] rclone rc backend/command command=encode fs=crypt: file1 [file2...] decode Decode the given filename(s) rclone backend decode remote: [options] [+] This decodes the filenames given as arguments returning a list of strings of the decoded results. It will return an error if any of the inputs are invalid. Usage Example: rclone backend decode crypt: encryptedfile1 [encryptedfile2...] rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] Backing up a crypted remote If you wish to back up a crypted remote, it is recommended that you use rclone sync on the encrypted files, and make sure the passwords are the same in the new encrypted remote. This will have the following advantages - rclone sync will check the checksums while copying - you can use rclone check between the encrypted remotes - you don't decrypt and encrypt unnecessarily For example, let's say you have your original remote at remote: with the encrypted version at eremote: with path remote:crypt. You would then set up the new remote remote2: and then the encrypted version eremote2: with path remote2:crypt using the same passwords as eremote:. To sync the two remotes you would do rclone sync -i remote:crypt remote2:crypt And to check the integrity you would do rclone check remote:crypt remote2:crypt File formats File encryption Files are encrypted 1:1 source file to destination object. The file has a header and is divided into chunks. Header - 8 bytes magic string RCLONE\x00\x00 - 24 bytes Nonce (IV) The initial nonce is generated from the operating system's cryptographically strong random number generator. The nonce is incremented for each chunk read, making sure each nonce is unique for each block written. The chance of a nonce being re-used is minuscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of re-using a nonce. Chunk Each chunk will contain 64kB of data, except for the last one which may have less data. The data chunk is in standard NACL secretbox format. Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate messages. Each chunk contains: - 16 Bytes of Poly1305 authenticator - 1 - 65536 bytes XSalsa20 encrypted data The 64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big. This uses a 32 byte (256 bit) key derived from the user password. Examples 1 byte file will encrypt to - 32 bytes header - 17 bytes data chunk 49 bytes total 1MB (1048576 bytes) file will encrypt to - 32 bytes header - 16 chunks of 65552 bytes (65536 bytes of data plus the 16 byte authenticator each) 1048864 bytes total (a 0.03% overhead). This is the overhead for big files. Name encryption File names are encrypted segment by segment - the path is broken up into / separated strings and these are encrypted individually. File segments are padded using PKCS#7 to a multiple of 16 bytes before encryption.
They are then encrypted with EME using AES with a 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway. This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system. This means that - filenames with the same name will encrypt the same - filenames which start the same won't have a common prefix This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password. After encryption they are written out using a modified version of standard base32 encoding as described in RFC4648. The standard encoding is modified in two ways: - it becomes lower case (no-one likes upper case filenames!) - we strip the padding character = base32 is used rather than the more efficient base64 so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive). Key derivation Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one. scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt. Dropbox Paths are specified as remote:path Dropbox paths may be as deep as required, eg remote:directory/subdirectory. The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: n) New remote d) Delete remote q) Quit config e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Dropbox \ "dropbox" [snip] Storage> dropbox Dropbox App Key - leave blank normally. app_key> Dropbox App Secret - leave blank normally. app_secret> Remote config Please visit: https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX -------------------- [remote] app_key = app_secret = token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y You can then use it like this, List directories in top level of your dropbox rclone lsd remote: List all the files in your dropbox rclone ls remote: To copy a local directory to a dropbox directory called backup rclone copy /home/source remote:backup Dropbox for business Rclone supports Dropbox for business and Team Folders. When using Dropbox for business remote: and remote:path/to/file will refer to your personal folder. If you wish to see Team Folders you must use a leading / in the path, so rclone lsd remote:/ will refer to the root and show you all Team Folders and your User Folder. You can then use team folders like this remote:/TeamFolder and remote:/TeamFolder/path/to/file. A leading / for a Dropbox personal account has no effect, but it costs an extra HTTP transaction, so it should be avoided. Modified time and Hashes Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.
This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use the --size-only or --checksum flag to stop it. Dropbox supports its own hash type which is checked for all transfers. Restricted filename characters Character Value Replacement ----------- ------- ------------- NUL 0x00 ␀ / 0x2F ／ DEL 0x7F ␡ \ 0x5C ＼ File names can also not end with the following characters. These only get replaced if they are the last character in the name: Character Value Replacement ----------- ------- ------------- SP 0x20 ␠ Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Standard Options Here are the standard options specific to dropbox (Dropbox). --dropbox-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_DROPBOX_CLIENT_ID - Type: string - Default: "" --dropbox-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_DROPBOX_CLIENT_SECRET - Type: string - Default: "" Advanced Options Here are the advanced options specific to dropbox (Dropbox). --dropbox-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_DROPBOX_TOKEN - Type: string - Default: "" --dropbox-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_DROPBOX_AUTH_URL - Type: string - Default: "" --dropbox-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_DROPBOX_TOKEN_URL - Type: string - Default: "" --dropbox-chunk-size Upload chunk size. (< 150M). Any files larger than this will be uploaded in chunks of this size. Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory. - Config: chunk_size - Env Var: RCLONE_DROPBOX_CHUNK_SIZE - Type: SizeSuffix - Default: 48M --dropbox-impersonate Impersonate this user when using a business account. - Config: impersonate - Env Var: RCLONE_DROPBOX_IMPERSONATE - Type: string - Default: "" --dropbox-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_DROPBOX_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot Limitations Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail. Some errors may occur if you try to sync copyright-protected files because Dropbox has its own copyright detector that prevents this sort of file being downloaded. This will return the error ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/. If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.
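Spelled out, the work-around for a hypothetical over-sized directory dropbox:dir is the following two commands - delete removes the files inside the directory, then rmdir removes the now empty directory itself:

rclone delete dropbox:dir
rclone rmdir dropbox:dir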
Get your own Dropbox App ID When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users. Here is how to create your own Dropbox App ID for rclone: 1. Log into the Dropbox App console with your Dropbox Account (It need not be the same account as the Dropbox you want to access) 2. Choose an API => Usually this should be Dropbox API 3. Choose the type of access you want to use => Full Dropbox or App Folder 4. Name your App. The app name is global, so you can't use rclone for example 5. Click the button Create App 6. Fill in Redirect URIs with http://localhost:53682/ 7. Find the App key and App secret Use these values in rclone config to add a new remote or edit an existing remote. FTP FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package. Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. Here is an example of making an FTP configuration. First run rclone config This will guide you through an interactive setup process. An FTP remote only needs a host together with a username and a password. With an anonymous FTP server, you will need to use anonymous as the username and your email address as the password. No remotes found - make a new one n) New remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config n/r/c/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / FTP Connection \ "ftp" [snip] Storage> ftp ** See help for ftp backend at: https://rclone.org/ftp/ ** FTP host to connect to Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Connect to ftp.example.com \ "ftp.example.com" host> ftp.example.com FTP username, leave blank for current username, ncw Enter a string value. Press Enter for the default (""). user> FTP port, leave blank to use default (21) Enter a string value. Press Enter for the default (""). port> FTP password y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Use FTP over TLS (Implicit) Enter a boolean value (true or false). Press Enter for the default ("false"). tls> Use FTP over TLS (Explicit) Enter a boolean value (true or false). Press Enter for the default ("false"). explicit_tls> Remote config -------------------- [remote] type = ftp host = ftp.example.com pass = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y This remote is called remote and can now be used like this See all directories in the home directory rclone lsd remote: Make a new directory rclone mkdir remote:path/to/directory List the contents of a directory rclone ls remote:path/to/directory Sync /home/local/directory to the remote directory, deleting any excess files in the directory. rclone sync -i /home/local/directory remote:directory Modified time FTP does not support modified times. Any times you see on the server will be the time of upload. Checksums FTP does not support any checksums.
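Because neither modification times nor checksums can be used for comparison here, one option (a suggestion using rclone's global --size-only flag, not an ftp specific feature) is to make syncs compare by size only:

rclone sync -i --size-only /home/local/directory remote:directory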
Usage without a config file An example of how to use the ftp remote without a config file: rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=`rclone obscure dummy` Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: File names can also not end with the following characters. These only get replaced if they are the last character in the name: Character Value Replacement ----------- ------- ------------- SP 0x20 ␠ Note that not all FTP servers can have all characters in file names, for example: FTP Server Forbidden characters ------------ ---------------------- proftpd * pureftpd \ [ ] Implicit TLS FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the config for the remote. The default FTPS port is 990 so the port will likely have to be explicitly set in the config for the remote. Standard Options Here are the standard options specific to ftp (FTP Connection). --ftp-host FTP host to connect to - Config: host - Env Var: RCLONE_FTP_HOST - Type: string - Default: "" - Examples: - "ftp.example.com" - Connect to ftp.example.com --ftp-user FTP username, leave blank for current username, $USER - Config: user - Env Var: RCLONE_FTP_USER - Type: string - Default: "" --ftp-port FTP port, leave blank to use default (21) - Config: port - Env Var: RCLONE_FTP_PORT - Type: string - Default: "" --ftp-pass FTP password NB Input to this must be obscured - see rclone obscure. - Config: pass - Env Var: RCLONE_FTP_PASS - Type: string - Default: "" --ftp-tls Use FTP over TLS (Implicit) When using implicit FTP over TLS the client will connect using TLS right from the start, which in turn breaks the compatibility with non-TLS-aware servers. This is usually served over port 990 rather than port 21. Cannot be used in combination with explicit FTP. - Config: tls - Env Var: RCLONE_FTP_TLS - Type: bool - Default: false --ftp-explicit-tls Use FTP over TLS (Explicit) When using explicit FTP over TLS the client explicitly requests security from the server in order to upgrade a plain text connection to an encrypted one. Cannot be used in combination with implicit FTP. - Config: explicit_tls - Env Var: RCLONE_FTP_EXPLICIT_TLS - Type: bool - Default: false Advanced Options Here are the advanced options specific to ftp (FTP Connection). --ftp-concurrency Maximum number of FTP simultaneous connections, 0 for unlimited - Config: concurrency - Env Var: RCLONE_FTP_CONCURRENCY - Type: int - Default: 0 --ftp-no-check-certificate Do not verify the TLS certificate of the server - Config: no_check_certificate - Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE - Type: bool - Default: false --ftp-disable-epsv Disable using EPSV even if server advertises support - Config: disable_epsv - Env Var: RCLONE_FTP_DISABLE_EPSV - Type: bool - Default: false --ftp-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_FTP_ENCODING - Type: MultiEncoder - Default: Slash,Del,Ctl,RightSpace,Dot Limitations Note that FTP does have its own implementation of --dump headers, --dump bodies and --dump auth for debugging, which isn't the same as for the HTTP based backends - it has less fine grained control. Note that --timeout isn't supported (but --contimeout is). Note that --bind isn't supported. FTP could support server side move but doesn't yet. Note that the ftp backend does not support the ftp_proxy environment variable yet.
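Tying the TLS options above together, a config-less connection to a hypothetical implicit FTPS server (note the explicitly set port 990) might look like:

rclone lsf :ftp: --ftp-host=ftps.example.com --ftp-port=990 --ftp-tls --ftp-user=myuser --ftp-pass=`rclone obscure mypassword`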
Google Cloud Storage Paths are specified as remote:bucket (or remote: for the lsd command). You may put subdirectories in too, eg remote:bucket/path/to/dir. The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: n) New remote d) Delete remote q) Quit config e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" [snip] Storage> google cloud storage Google Application Client Id - leave blank normally. client_id> Google Application Client Secret - leave blank normally. client_secret> Project number optional - needed only for list/create/delete buckets - see your developer console. project_number> 12345678 Service Account Credentials JSON file path - needed only if you want to use SA instead of interactive login. service_account_file> Access Control List for new objects. Choose a number from below, or type in your own value 1 / Object owner gets OWNER access, and all Authenticated Users get READER access. \ "authenticatedRead" 2 / Object owner gets OWNER access, and project team owners get OWNER access. \ "bucketOwnerFullControl" 3 / Object owner gets OWNER access, and project team owners get READER access. \ "bucketOwnerRead" 4 / Object owner gets OWNER access [default if left blank]. \ "private" 5 / Object owner gets OWNER access, and project team members get access according to their roles. \ "projectPrivate" 6 / Object owner gets OWNER access, and all Users get READER access. \ "publicRead" object_acl> 4 Access Control List for new buckets. Choose a number from below, or type in your own value 1 / Project team owners get OWNER access, and all Authenticated Users get READER access. \ "authenticatedRead" 2 / Project team owners get OWNER access [default if left blank]. \ "private" 3 / Project team members get access according to their roles. \ "projectPrivate" 4 / Project team owners get OWNER access, and all Users get READER access. \ "publicRead" 5 / Project team owners get OWNER access, and all Users get WRITER access. \ "publicReadWrite" bucket_acl> 2 Location for the newly created buckets. Choose a number from below, or type in your own value 1 / Empty for default location (US). \ "" 2 / Multi-regional location for Asia. \ "asia" 3 / Multi-regional location for Europe. \ "eu" 4 / Multi-regional location for United States. \ "us" 5 / Taiwan. \ "asia-east1" 6 / Tokyo. \ "asia-northeast1" 7 / Singapore. \ "asia-southeast1" 8 / Sydney. \ "australia-southeast1" 9 / Belgium. \ "europe-west1" 10 / London. \ "europe-west2" 11 / Iowa. \ "us-central1" 12 / South Carolina. \ "us-east1" 13 / Northern Virginia. \ "us-east4" 14 / Oregon. \ "us-west1" location> 12 The storage class to use when storing objects in Google Cloud Storage. Choose a number from below, or type in your own value 1 / Default \ "" 2 / Multi-regional storage class \ "MULTI_REGIONAL" 3 / Regional storage class \ "REGIONAL" 4 / Nearline storage class \ "NEARLINE" 5 / Coldline storage class \ "COLDLINE" 6 / Durable reduced availability storage class \ "DURABLE_REDUCED_AVAILABILITY" storage_class> 5 Remote config Use auto config?
* Say Y if not sure * Say N if you are working on a remote or headless machine or Y didn't work y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] type = google cloud storage client_id = client_secret = token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null} project_number = 12345678 object_acl = private bucket_acl = private -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode. This remote is called remote and can now be used like this See all the buckets in your project rclone lsd remote: Make a new bucket rclone mkdir remote:bucket List the contents of a bucket rclone ls remote:bucket Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket. rclone sync -i /home/local/directory remote:bucket Service Account support You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines. To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machine. These credentials are what rclone will use for authentication. To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable. Anonymous Access For downloads of objects that permit public access you can configure rclone to use anonymous access by setting anonymous to true. With unauthenticated access you can't write or create files but only read or list those buckets and objects that have public read access. Application Default Credentials If no other source of credentials is provided, rclone will fall back to Application Default Credentials. This is useful both when you already have configured authentication for your developer account, or in production when running on a google compute host. Note that if running in docker, you may need to run additional commands on your google compute machine - see this page. Note that when application default credentials are used, there is no need to explicitly configure a project number.
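As a sketch (the credentials path is hypothetical), the service account file can also be supplied per invocation with the flag or environment variable documented below, instead of being stored in the config file:

rclone lsd remote: --gcs-service-account-file /path/to/credentials.json

RCLONE_GCS_SERVICE_ACCOUNT_FILE=/path/to/credentials.json rclone lsd remote: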
--fast-list This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Custom upload headers You can set custom upload headers with the --header-upload flag. Google Cloud Storage supports the headers as described in the working with metadata documentation - Cache-Control - Content-Disposition - Content-Encoding - Content-Language - Content-Type - X-Goog-Meta- Eg --header-upload "Content-Type: text/potato" Note that the last of these is for setting custom metadata in the form --header-upload "x-goog-meta-key: value" Modified time Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns. Restricted filename characters Character Value Replacement ----------- ------- ------------- NUL 0x00 ␀ LF 0x0A ␊ CR 0x0D ␍ / 0x2F ／ Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Standard Options Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). --gcs-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_GCS_CLIENT_ID - Type: string - Default: "" --gcs-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_GCS_CLIENT_SECRET - Type: string - Default: "" --gcs-project-number Project number. Optional - needed only for list/create/delete buckets - see your developer console. - Config: project_number - Env Var: RCLONE_GCS_PROJECT_NUMBER - Type: string - Default: "" --gcs-service-account-file Service Account Credentials JSON file path Leave blank normally. Needed only if you want to use SA instead of interactive login. Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}. - Config: service_account_file - Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE - Type: string - Default: "" --gcs-service-account-credentials Service Account Credentials JSON blob Leave blank normally. Needed only if you want to use SA instead of interactive login. - Config: service_account_credentials - Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS - Type: string - Default: "" --gcs-anonymous Access public buckets and objects without credentials Set to 'true' if you just want to download files and don't configure credentials. - Config: anonymous - Env Var: RCLONE_GCS_ANONYMOUS - Type: bool - Default: false --gcs-object-acl Access Control List for new objects. - Config: object_acl - Env Var: RCLONE_GCS_OBJECT_ACL - Type: string - Default: "" - Examples: - "authenticatedRead" - Object owner gets OWNER access, and all Authenticated Users get READER access. - "bucketOwnerFullControl" - Object owner gets OWNER access, and project team owners get OWNER access. - "bucketOwnerRead" - Object owner gets OWNER access, and project team owners get READER access. - "private" - Object owner gets OWNER access [default if left blank]. - "projectPrivate" - Object owner gets OWNER access, and project team members get access according to their roles. - "publicRead" - Object owner gets OWNER access, and all Users get READER access. --gcs-bucket-acl Access Control List for new buckets. - Config: bucket_acl - Env Var: RCLONE_GCS_BUCKET_ACL - Type: string - Default: "" - Examples: - "authenticatedRead" - Project team owners get OWNER access, and all Authenticated Users get READER access. - "private" - Project team owners get OWNER access [default if left blank].
- "projectPrivate" - Project team members get access according to their roles. - "publicRead" - Project team owners get OWNER access, and all Users get READER access. - "publicReadWrite" - Project team owners get OWNER access, and all Users get WRITER access. --gcs-bucket-policy-only Access checks should use bucket-level IAM policies. If you want to upload objects to a bucket with Bucket Policy Only set then you will need to set this. When it is set, rclone: - ignores ACLs set on buckets - ignores ACLs set on objects - creates buckets with Bucket Policy Only set Docs: https://cloud.google.com/storage/docs/bucket-policy-only - Config: bucket_policy_only - Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY - Type: bool - Default: false --gcs-location Location for the newly created buckets. - Config: location - Env Var: RCLONE_GCS_LOCATION - Type: string - Default: "" - Examples: - "" - Empty for default location (US). - "asia" - Multi-regional location for Asia. - "eu" - Multi-regional location for Europe. - "us" - Multi-regional location for United States. - "asia-east1" - Taiwan. - "asia-east2" - Hong Kong. - "asia-northeast1" - Tokyo. - "asia-south1" - Mumbai. - "asia-southeast1" - Singapore. - "australia-southeast1" - Sydney. - "europe-north1" - Finland. - "europe-west1" - Belgium. - "europe-west2" - London. - "europe-west3" - Frankfurt. - "europe-west4" - Netherlands. - "us-central1" - Iowa. - "us-east1" - South Carolina. - "us-east4" - Northern Virginia. - "us-west1" - Oregon. - "us-west2" - California. --gcs-storage-class The storage class to use when storing objects in Google Cloud Storage. - Config: storage_class - Env Var: RCLONE_GCS_STORAGE_CLASS - Type: string - Default: "" - Examples: - "" - Default - "MULTI_REGIONAL" - Multi-regional storage class - "REGIONAL" - Regional storage class - "NEARLINE" - Nearline storage class - "COLDLINE" - Coldline storage class - "ARCHIVE" - Archive storage class - "DURABLE_REDUCED_AVAILABILITY" - Durable reduced availability storage class Advanced Options Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). --gcs-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_GCS_TOKEN - Type: string - Default: "" --gcs-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_GCS_AUTH_URL - Type: string - Default: "" --gcs-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_GCS_TOKEN_URL - Type: string - Default: "" --gcs-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_GCS_ENCODING - Type: MultiEncoder - Default: Slash,CrLf,InvalidUtf8,Dot Google Drive Paths are specified as drive:path Drive paths may be as deep as required, eg drive:directory/subdirectory. The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Google Drive \ "drive" [snip] Storage> drive Google Application Client Id - leave blank normally. 
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
Scope that rclone should use when requesting access from drive.
Choose a number from below, or type in your own value
 1 / Full access all files, excluding Application Data Folder.
   \ "drive"
 2 / Read-only access to file metadata and file contents.
   \ "drive.readonly"
   / Access to files created by rclone only.
 3 | These are visible in the drive website.
   | File authorization is revoked when the user deauthorizes the app.
   \ "drive.file"
   / Allows read and write access to the Application Data folder.
 4 | This is not visible in the drive website.
   \ "drive.appfolder"
   / Allows read-only access to file metadata but
 5 | does not allow any access to read or download file content.
   \ "drive.metadata.readonly"
scope> 1
ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs).
root_folder_id>
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Configure this as a team drive?
y) Yes
n) No
y/n> n
--------------------
[remote]
client_id =
client_secret =
scope = drive
root_folder_id =
service_account_file =
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

You can then use it like this,

List directories in top level of your drive

    rclone lsd remote:

List all the files in your drive

    rclone ls remote:

To copy a local directory to a drive directory called backup

    rclone copy /home/source remote:backup

Scopes

Rclone allows you to select which scope you would like for rclone to use. This changes what type of token is granted to rclone. The scopes are defined here.

The scopes are:

drive

This is the default scope and allows full access to all files, except for the Application Data Folder (see below).

Choose this one if you aren't sure.

drive.readonly

This allows read only access to all files. Files may be listed and downloaded but not uploaded, renamed or deleted.

drive.file

With this scope rclone can read/view/modify only those files and folders it creates. So if you uploaded files to drive via the web interface (or any other means) they will not be visible to rclone.

This can be useful if you are using rclone to backup data and you want to be sure confidential data on your drive is not visible to rclone.

Files created with this scope are visible in the web interface.

drive.appfolder

This gives rclone its own private area to store files. Rclone will not be able to see any other files on your drive and you won't be able to see rclone's files from the web interface either.

drive.metadata.readonly

This allows read only access to file names only. It does not allow rclone to download or upload data, or rename or delete files or directories.
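If you already know which scope you want, the remote can also be created non-interactively. A minimal sketch using rclone config create (the remote name gdrive-ro is just an example; rclone will still take you through the OAuth authorization step):

    rclone config create gdrive-ro drive scope drive.readonly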
Root folder ID

You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your drive.

Normally you will leave this blank and rclone will determine the correct root to use itself.

However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go).

In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.

So if the folder you want rclone to use has a URL which looks like https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the root_folder_id in the config.

NB folders under the "Computers" tab seem to be read only (drive gives a 500 error) when using rclone.

There doesn't appear to be an API to discover the folder IDs of the "Computers" tab - please contact us if you know otherwise!

Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet.

Service Account support

You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt during rclone config and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.

Use case - Google Apps/G-suite account and individual Drive

Let's say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on the Drive account of an individual who IS a member of the domain. We'll call the domain EXAMPLE.COM, and the user FOO@EXAMPLE.COM.

There are a few steps we need to go through to accomplish this:

1. Create a service account for example.com

- To create a service account and obtain its credentials, go to the Google Developer Console.
- You must have a project - create one if you don't.
- Then go to "IAM & admin" -> "Service Accounts".
- Use the "Create Credentials" button. Fill in "Service account name" with something that identifies your client. "Role" can be empty.
- Tick "Furnish a new private key" - select "Key type JSON".
- Tick "Enable G Suite Domain-wide Delegation". This option makes "impersonation" possible, as documented here: Delegating domain-wide authority to the service account
- These credentials are what rclone will use for authentication. If you ever need to remove access, press the "Delete service account key" button.
2. Allowing API access to example.com Google Drive

- Go to example.com's admin console
- Go into "Security" (or use the search bar)
- Select "Show more" and then "Advanced settings"
- Select "Manage API client access" in the "Authentication" section
- In the "Client Name" field enter the service account's "Client ID" - this can be found in the Developer Console under "IAM & Admin" -> "Service Accounts", then "View Client ID" for the newly created service account. It is a ~21 character numerical string.
- In the next field, "One or More API Scopes", enter https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.

3. Configure rclone, assuming a new install

    rclone config

    n/s/q> n         # New
    name>gdrive      # Gdrive is an example name
    Storage>         # Select the number shown for Google Drive
    client_id>       # Can be left blank
    client_secret>   # Can be left blank
    scope>           # Select your scope, 1 for example
    root_folder_id>  # Can be left blank
    service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
    y/n>             # Auto config, y

4. Verify that it's working

- rclone -v --drive-impersonate foo@example.com lsf gdrive:backup
- The arguments do:
    - -v - verbose logging
    - --drive-impersonate foo@example.com - this is what does the magic, pretending to be user foo.
    - lsf - list files in a parsing friendly way
    - gdrive:backup - use the remote called gdrive, work in the folder named backup.

Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using --drive-impersonate, do this instead:

- in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1
- use rclone without specifying the --drive-impersonate option, like this: rclone -v lsf gdrive:backup

Team drives

If you want to configure the remote to point to a Google Team Drive then answer y to the question Configure this as a team drive?.

This will fetch the list of Team Drives from google and allow you to configure which one you want to use. You can also type in a team drive ID if you prefer.

For example:

Configure this as a team drive?
y) Yes
n) No
y/n> y
Fetching team drive list...
Choose a number from below, or type in your own value
 1 / Rclone Test
   \ "xxxxxxxxxxxxxxxxxxxx"
 2 / Rclone Test 2
   \ "yyyyyyyyyyyyyyyyyyyy"
 3 / Rclone Test 3
   \ "zzzzzzzzzzzzzzzzzzzz"
Enter a Team Drive ID> 1
--------------------
[remote]
client_id =
client_secret =
token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
team_drive = xxxxxxxxxxxxxxxxxxxx
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

It does this by combining multiple list calls into a single API request. This works by combining many '<folder id>' in parents filters into one expression. To list the contents of directories a, b and c, the following requests will be sent by the regular List function:

    trashed=false and 'a' in parents
    trashed=false and 'b' in parents
    trashed=false and 'c' in parents

These can now be combined into a single request:

    trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)

The implementation of ListR will put up to 50 parents filters into one request. It will use the --checkers value to specify the number of requests to run in parallel.

In tests, these batch requests were up to 20x faster than the regular method. Running the following command against different sized folders gives:

    rclone lsjson -vv -R --checkers=6 gdrive:folder

small folder (220 directories, 700 files):
- without --fast-list: 38s
- with --fast-list: 10s

large folder (10600 directories, 39000 files):
- without --fast-list: 22:05 min
- with --fast-list: 58s
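As a rough sketch (the remote and folder names are examples), you can time the two listing strategies on your own data, and raise --checkers if your quota allows:

    time rclone lsjson -R gdrive:folder > /dev/null
    time rclone lsjson -R --fast-list --checkers 8 gdrive:folder > /dev/null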
Modified time

Google drive stores modification times accurate to 1 ms.

Restricted filename characters

Only Invalid UTF-8 bytes will be replaced, as they can't be used in JSON strings.

In contrast to other backends, / can also be used in names and . or .. are valid names.

Revisions

Google drive stores revisions of files. When you upload a change to an existing file to google drive using rclone it will create a new revision of that file.

Revisions follow the standard google policy which at the time of writing was:

- They are deleted after 30 days or 100 revisions (whichever comes first).
- They do not count towards a user storage quota.

Deleting files

By default rclone will send all files to the trash when deleting files. If deleting them permanently is required then use the --drive-use-trash=false flag, or set the equivalent environment variable.

Shortcuts

In March 2020 Google introduced a new feature in Google Drive called drive shortcuts (API). These will (by September 2020) replace the ability for files or folders to be in multiple folders at once.

Shortcuts are files that link to other files on Google Drive somewhat like a symlink in unix, except they point to the underlying file data (eg the inode in unix terms) so they don't break if the source is renamed or moved about.

By default rclone treats these as follows.

For shortcuts pointing to files:

- When listing a file shortcut appears as the destination file.
- When downloading the contents of the destination file is downloaded.
- When updating a shortcut file with a non-shortcut file, the shortcut is removed then a new file is uploaded in place of the shortcut.
- When server side moving (renaming) the shortcut is renamed, not the destination file.
- When server side copying the shortcut is copied, not the contents of the shortcut.
- When deleting the shortcut is deleted not the linked file.
- When setting the modification time, the modification time of the linked file will be set.

For shortcuts pointing to folders:

- When listing the shortcut appears as a folder and that folder will contain the contents of the linked folder (including any sub folders)
- When downloading the contents of the linked folder and sub contents are downloaded
- When uploading to a shortcut folder the file will be placed in the linked folder
- When server side moving (renaming) the shortcut is renamed, not the destination folder
- When server side copying the contents of the linked folder is copied, not the shortcut.
- When deleting with rclone rmdir or rclone purge the shortcut is deleted not the linked folder.
- NB When deleting with rclone delete or rclone mount the contents of the linked folder will be deleted.

The rclone backend command can be used to create shortcuts.

Shortcuts can be completely ignored with the --drive-skip-shortcuts flag or the corresponding skip_shortcuts configuration setting.

Emptying trash

If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

Note that Google Drive takes some time (minutes to days) to empty the trash even though the command returns within a few seconds. No output is echoed, so there will be no confirmation even using -v or -vv.
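For example, to delete files permanently rather than trashing them, and then purge anything already in the trash (a sketch; the gdrive: remote and the path are example names):

    rclone delete --drive-use-trash=false gdrive:old-reports
    rclone cleanup gdrive: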
Quota information

To view your current quota you can use the rclone about remote: command which will display your usage limit (quota), the usage in Google Drive, the size of all files in the Trash and the space used by other Google services such as Gmail. This command does not take any path arguments.

Import/Export of google documents

Google documents can be exported from and uploaded to Google Drive.

When rclone downloads a Google doc it chooses a format to download depending upon the --drive-export-formats setting. By default the export formats are docx,xlsx,pptx,svg which are a sensible default for an editable document.

When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.

If you prefer an archive copy then you might use --drive-export-formats pdf, or if you prefer openoffice/libreoffice formats you might use --drive-export-formats ods,odt,odp.

Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet on google docs, it will be exported as My Spreadsheet.xlsx or My Spreadsheet.pdf etc.

When importing files into Google Drive, rclone will convert all files with an extension in --drive-import-formats to their associated document type. rclone will not convert any files by default, since the conversion is a lossy process.

The conversion must result in a file with the same extension when the --drive-export-formats rules are applied to the uploaded document.

Here are some examples for allowed and prohibited conversions.

  export-formats   import-formats   Upload Ext   Document Ext   Allowed
  ---------------- ---------------- ------------ -------------- ---------
  odt              odt              odt          odt            Yes
  odt              docx,odt         odt          odt            Yes
                   docx             docx         docx           Yes
                   odt              odt          docx           No
  odt,docx         docx,odt         docx         odt            No
  docx,odt         docx,odt         docx         docx           Yes
  docx,odt         docx,odt         odt          docx           No

This limitation can be disabled by specifying --drive-allow-import-name-change. When using this flag, rclone can convert multiple files types resulting in the same document type at once, eg with --drive-import-formats docx,odt,txt, all files having these extensions would result in a document represented as a docx file. This brings the additional risk of overwriting a document, if multiple files have the same stem. Many rclone operations will not handle this name change in any way. They assume an equal name when copying files and might copy the file again or delete them when the name changes.
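For instance, to keep an archive copy of your documents as PDFs, or to have local .docx files converted back into Google Docs on upload, you might run something like this (the paths and the gdrive: remote name are examples):

    rclone copy --drive-export-formats pdf gdrive:Documents /backup/docs
    rclone copy --drive-import-formats docx /local/docs gdrive:Documents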
Here are the possible export extensions with their corresponding mime types. Most of these can also be used for importing, but there are more that are not listed here. Some of these additional ones might only be available when the operating system provides the correct MIME type entries.

This list can be changed by Google Drive at any time and might not represent the currently available conversions.

  Extension   Mime Type                                                                    Description
  ----------- ---------------------------------------------------------------------------- -----------------------------
  csv         text/csv                                                                     Standard CSV format for Spreadsheets
  docx        application/vnd.openxmlformats-officedocument.wordprocessingml.document      Microsoft Office Document
  epub        application/epub+zip                                                         E-book format
  html        text/html                                                                    An HTML Document
  jpg         image/jpeg                                                                   A JPEG Image File
  json        application/vnd.google-apps.script+json                                      JSON Text Format
  odp         application/vnd.oasis.opendocument.presentation                              Openoffice Presentation
  ods         application/vnd.oasis.opendocument.spreadsheet                               Openoffice Spreadsheet
  ods         application/x-vnd.oasis.opendocument.spreadsheet                             Openoffice Spreadsheet
  odt         application/vnd.oasis.opendocument.text                                      Openoffice Document
  pdf         application/pdf                                                              Adobe PDF Format
  png         image/png                                                                    PNG Image Format
  pptx        application/vnd.openxmlformats-officedocument.presentationml.presentation    Microsoft Office Powerpoint
  rtf         application/rtf                                                              Rich Text Format
  svg         image/svg+xml                                                                Scalable Vector Graphics Format
  tsv         text/tab-separated-values                                                    Standard TSV format for spreadsheets
  txt         text/plain                                                                   Plain Text
  xlsx        application/vnd.openxmlformats-officedocument.spreadsheetml.sheet            Microsoft Office Spreadsheet
  zip         application/zip                                                              A ZIP file of HTML, Images and CSS

Google documents can also be exported as link files. These files will open a browser window for the Google Docs website of that document when opened. The link file extension has to be specified as a --drive-export-formats parameter. They will match all available Google Documents.

  Extension   Description                               OS Support
  ----------- ----------------------------------------- ----------------
  desktop     freedesktop.org specified desktop entry   Linux
  link.html   An HTML Document with a redirect          All
  url         INI style link file                       macOS, Windows
  webloc      macOS specific XML format                 macOS

Standard Options

Here are the standard options specific to drive (Google Drive).

--drive-client-id

Google Application Client Id. Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.

- Config: client_id
- Env Var: RCLONE_DRIVE_CLIENT_ID
- Type: string
- Default: ""

--drive-client-secret

OAuth Client Secret. Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_DRIVE_CLIENT_SECRET
- Type: string
- Default: ""

--drive-scope

Scope that rclone should use when requesting access from drive.

- Config: scope
- Env Var: RCLONE_DRIVE_SCOPE
- Type: string
- Default: ""
- Examples:
    - "drive"
        - Full access all files, excluding Application Data Folder.
    - "drive.readonly"
        - Read-only access to file metadata and file contents.
    - "drive.file"
        - Access to files created by rclone only.
        - These are visible in the drive website.
        - File authorization is revoked when the user deauthorizes the app.
    - "drive.appfolder"
        - Allows read and write access to the Application Data folder.
        - This is not visible in the drive website.
    - "drive.metadata.readonly"
        - Allows read-only access to file metadata but
        - does not allow any access to read or download file content.

--drive-root-folder-id

ID of the root folder. Leave blank normally.
Fill in to access "Computers" folders (see docs), or for rclone to use a non-root folder as its starting point.

- Config: root_folder_id
- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
- Type: string
- Default: ""

--drive-service-account-file

Service Account Credentials JSON file path. Leave blank normally. Needed only if you want to use a service account instead of interactive login.

Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

- Config: service_account_file
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
- Type: string
- Default: ""

--drive-alternate-export

Deprecated: no longer needed

- Config: alternate_export
- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
- Type: bool
- Default: false

Advanced Options

Here are the advanced options specific to drive (Google Drive).

--drive-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_DRIVE_TOKEN
- Type: string
- Default: ""

--drive-auth-url

Auth server URL. Leave blank to use the provider defaults.

- Config: auth_url
- Env Var: RCLONE_DRIVE_AUTH_URL
- Type: string
- Default: ""

--drive-token-url

Token server url. Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_DRIVE_TOKEN_URL
- Type: string
- Default: ""

--drive-service-account-credentials

Service Account Credentials JSON blob. Leave blank normally. Needed only if you want to use a service account instead of interactive login.

- Config: service_account_credentials
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
- Type: string
- Default: ""

--drive-team-drive

ID of the Team Drive

- Config: team_drive
- Env Var: RCLONE_DRIVE_TEAM_DRIVE
- Type: string
- Default: ""

--drive-auth-owner-only

Only consider files owned by the authenticated user.

- Config: auth_owner_only
- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
- Type: bool
- Default: false

--drive-use-trash

Send files to the trash instead of deleting permanently. Defaults to true, namely sending files to the trash. Use --drive-use-trash=false to delete files permanently instead.

- Config: use_trash
- Env Var: RCLONE_DRIVE_USE_TRASH
- Type: bool
- Default: true

--drive-skip-gdocs

Skip google documents in all listings. If given, gdocs practically become invisible to rclone.

- Config: skip_gdocs
- Env Var: RCLONE_DRIVE_SKIP_GDOCS
- Type: bool
- Default: false

--drive-skip-checksum-gphotos

Skip MD5 checksum on Google photos and videos only.

Use this if you get checksum errors when transferring Google photos or videos.

Setting this flag will cause Google photos and videos to return a blank MD5 checksum.

Google photos are identified by being in the "photos" space.

Corrupted checksums are caused by Google modifying the image/video but not updating the checksum.

- Config: skip_checksum_gphotos
- Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS
- Type: bool
- Default: false

--drive-shared-with-me

Only show files that are shared with me.

Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you).

This works both with the "list" (lsd, lsl, etc) and the "copy" commands (copy, sync, etc), and with all other commands too.

- Config: shared_with_me
- Env Var: RCLONE_DRIVE_SHARED_WITH_ME
- Type: bool
- Default: false

--drive-trashed-only

Only show files that are in the trash. This will show trashed files in their original directory structure.

- Config: trashed_only
- Env Var: RCLONE_DRIVE_TRASHED_ONLY
- Type: bool
- Default: false

--drive-starred-only

Only show files that are starred.
- Config: starred_only
- Env Var: RCLONE_DRIVE_STARRED_ONLY
- Type: bool
- Default: false

--drive-formats

Deprecated: see export_formats

- Config: formats
- Env Var: RCLONE_DRIVE_FORMATS
- Type: string
- Default: ""

--drive-export-formats

Comma separated list of preferred formats for downloading Google docs.

- Config: export_formats
- Env Var: RCLONE_DRIVE_EXPORT_FORMATS
- Type: string
- Default: "docx,xlsx,pptx,svg"

--drive-import-formats

Comma separated list of preferred formats for uploading Google docs.

- Config: import_formats
- Env Var: RCLONE_DRIVE_IMPORT_FORMATS
- Type: string
- Default: ""

--drive-allow-import-name-change

Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.

- Config: allow_import_name_change
- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
- Type: bool
- Default: false

--drive-use-created-date

Use file created date instead of modified date. Useful when downloading data and you want the creation date used in place of the last modified date.

WARNING: This flag may have some unexpected consequences.

When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag.

This feature was implemented to retain photos' capture dates as recorded by google photos. You will first need to check the "Create a Google Photos folder" option in your google drive settings. You can then copy or move the photos locally and have the date the image was taken (created) set as the modification date.

- Config: use_created_date
- Env Var: RCLONE_DRIVE_USE_CREATED_DATE
- Type: bool
- Default: false

--drive-use-shared-date

Use date file was shared instead of modified date.

Note that, as with "--drive-use-created-date", this flag may have unexpected consequences when uploading/downloading files.

If both this flag and "--drive-use-created-date" are set, the created date is used.

- Config: use_shared_date
- Env Var: RCLONE_DRIVE_USE_SHARED_DATE
- Type: bool
- Default: false

--drive-list-chunk

Size of listing chunk 100-1000. 0 to disable.

- Config: list_chunk
- Env Var: RCLONE_DRIVE_LIST_CHUNK
- Type: int
- Default: 1000

--drive-impersonate

Impersonate this user when using a service account.

- Config: impersonate
- Env Var: RCLONE_DRIVE_IMPERSONATE
- Type: string
- Default: ""

--drive-upload-cutoff

Cutoff for switching to chunked upload

- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 8M

--drive-chunk-size

Upload chunk size. Must be a power of 2 >= 256k.

Making this larger will improve performance, but note that each chunk is buffered in memory, one per transfer. Reducing this will reduce memory usage but decrease performance. (See the tuning sketch at the end of these advanced options.)

- Config: chunk_size
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 8M

--drive-acknowledge-abuse

Set to allow files which return cannotDownloadAbusiveFile to be downloaded.

If downloading a file returns the error "This file has been identified as malware or spam and cannot be downloaded" with the error code "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.

- Config: acknowledge_abuse
- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
- Type: bool
- Default: false

--drive-keep-revision-forever

Keep new head revision of each file forever.
- Config: keep_revision_forever
- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
- Type: bool
- Default: false

--drive-size-as-quota

Show sizes as storage quota usage, not actual size.

Show the size of a file as the storage quota used. This is the current version plus any older versions that have been set to keep forever.

WARNING: This flag may have some unexpected consequences.

It is not recommended to set this flag in your config - the recommended usage is using the flag form --drive-size-as-quota when doing rclone ls/lsl/lsf/lsjson/etc only.

If you do use this flag for syncing (not recommended) then you will need to use --ignore-size also.

- Config: size_as_quota
- Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA
- Type: bool
- Default: false

--drive-v2-download-min-size

If Objects are greater than this, use the drive v2 API to download.

- Config: v2_download_min_size
- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
- Type: SizeSuffix
- Default: off

--drive-pacer-min-sleep

Minimum time to sleep between API calls.

- Config: pacer_min_sleep
- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP
- Type: Duration
- Default: 100ms

--drive-pacer-burst

Number of API calls to allow without sleeping.

- Config: pacer_burst
- Env Var: RCLONE_DRIVE_PACER_BURST
- Type: int
- Default: 100

--drive-server-side-across-configs

Allow server side operations (eg copy) to work across different drive configs.

This can be useful if you wish to do a server side copy between two different Google drives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations.

- Config: server_side_across_configs
- Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS
- Type: bool
- Default: false

--drive-disable-http2

Disable drive using http2

There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed.

See: https://github.com/rclone/rclone/issues/3631

- Config: disable_http2
- Env Var: RCLONE_DRIVE_DISABLE_HTTP2
- Type: bool
- Default: true

--drive-stop-on-upload-limit

Make upload limit errors be fatal

At the time of writing it is only possible to upload 750GB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.

Note that this detection is relying on error message strings which Google don't document so it may break in the future.

See: https://github.com/rclone/rclone/issues/3857

- Config: stop_on_upload_limit
- Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT
- Type: bool
- Default: false

--drive-skip-shortcuts

If set skip shortcut files

Normally rclone dereferences shortcut files making them appear as if they are the original file (see the shortcuts section). If this flag is set then rclone will ignore shortcut files completely.

- Config: skip_shortcuts
- Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS
- Type: bool
- Default: false

--drive-encoding

This sets the encoding for the backend.

See: the encoding section in the overview for more info.

- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
- Type: MultiEncoder
- Default: InvalidUtf8
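As a tuning sketch only (the values and names here are examples, not recommendations): with plenty of memory you might raise the chunk size for faster uploads of large files, and slow the pacer down if you are sharing a heavily used client_id:

    rclone copy --drive-chunk-size 64M --drive-upload-cutoff 64M \
        --drive-pacer-min-sleep 200ms /big/files gdrive:backup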
See the "rclone backend" command for more info on how to pass options and arguments. These can be run on a running backend using the rc command backend/command. get Get command for fetching the drive config parameters rclone backend get remote: [options] [+] This is a get command which will be used to fetch the various drive config parameters Usage Examples: rclone backend get drive: [-o service_account_file] [-o chunk_size] rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] Options: - "chunk_size": show the current upload chunk size - "service_account_file": show the current service account file set Set command for updating the drive config parameters rclone backend set remote: [options] [+] This is a set command which will be used to update the various drive config parameters Usage Examples: rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] Options: - "chunk_size": update the current upload chunk size - "service_account_file": update the current service account file shortcut Create shortcuts from files or directories rclone backend shortcut remote: [options] [+] This command creates shortcuts from files or directories. Usage: rclone backend shortcut drive: source_item destination_shortcut rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut In the first example this creates a shortcut from the "source_item" which can be a file or a directory to the "destination_shortcut". The "source_item" and the "destination_shortcut" should be relative paths from "drive:" In the second example this creates a shortcut from the "source_item" relative to "drive:" to the "destination_shortcut" relative to "drive2:". This may fail with a permission error if the user authenticated with "drive2:" can't read files from "drive:". Options: - "target": optional target remote for the shortcut destination drives List the shared drives available to this account rclone backend drives remote: [options] [+] This command lists the shared drives (teamdrives) available to this account. Usage: rclone backend drives drive: This will return a JSON list of objects like this [ { "id": "0ABCDEF-01234567890", "kind": "drive#teamDrive", "name": "My Drive" }, { "id": "0ABCDEFabcdefghijkl", "kind": "drive#teamDrive", "name": "Test Drive" } ] untrash Untrash files and directories rclone backend untrash remote: [options] [+] This command untrashes all the files and directories in the directory passed in recursively. Usage: This takes an optional directory to trash which make this easier to use via the API. rclone backend untrash drive:directory rclone backend -i untrash drive:directory subdir Use the -i flag to see what would be restored before restoring it. Result: { "Untrashed": 17, "Errors": 0 } Limitations Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time. Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server side copies with --disable copy to download and upload the files if you prefer. 
Limitations of Google Docs

Google docs will appear as size -1 in rclone ls and as size 0 in anything which uses the VFS layer, eg rclone mount, rclone serve.

This is because rclone can't find out the size of the Google docs without downloading them.

Google docs will transfer correctly with rclone sync, rclone copy etc as rclone knows to ignore the size when doing the transfer.

However an unfortunate consequence of this is that you may not be able to download Google docs using rclone mount. If it doesn't work you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable. Whether it will work or not depends on the application accessing the mount and the OS you are running - experiment to find out if it does work for you!

Duplicated files

Sometimes, for no reason I've been able to track down, drive will duplicate a file that rclone uploads. Drive unlike all the other remotes can have duplicated files.

Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

Use rclone dedupe to fix duplicated files.

Note that this isn't just a problem with rclone, even Google Photos on Android duplicates files on drive sometimes.

Rclone appears to be re-copying files it shouldn't

The most likely cause of this is the duplicated file issue above - run rclone dedupe and check your logs for duplicate object or directory messages.

This can also be caused by a delay/caching on google drive's end when comparing directory listings. Specifically with team drives used in combination with --fast-list. Files that were uploaded recently may not appear on the directory list sent to rclone when using --fast-list.

Waiting a moderate period of time between attempts (estimated to be approximately 1 hour) and/or not using --fast-list both seem to be effective in preventing the problem.

Making your own client_id

When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.

It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service.

The default Google quota is 10 transactions per second so it is recommended to stay under that number, since using more than that will cause rclone to rate limit and make things slower.

Here is how to create your own Google Drive client ID for rclone:

1. Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)

2. Select a project or create a new project.

3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API".

4. Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials"

5. If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK) then click on "Save" (all other data is optional).
Click again on "Credentials" on the left panel to go back to the "Credentials" screen. (PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far). 6. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID". 7. Choose an application type of "Desktop app" if you using a Google account or "Other" if you using a GSuite account and click "Create". (the default name is fine) 8. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote. Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal). (Thanks to @balazer on github for these instructions.) Sometimes, creation of an OAuth consent in Google API Console fails due to an error message “The request failed because changes to one of the field of the resource is not supported”. As a convenient workaround, the necessary Google Drive API key can be created on the Python Quickstart page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console. Google Photos The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos. NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use. Configuring Google Photos The initial setup for google cloud storage involves getting a token from Google Photos which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Google Photos \ "google photos" [snip] Storage> google photos ** See help for google photos backend at: https://rclone.org/googlephotos/ ** Google Application Client Id Leave blank normally. Enter a string value. Press Enter for the default (""). client_id> Google Application Client Secret Leave blank normally. Enter a string value. Press Enter for the default (""). client_secret> Set to make the Google Photos backend read only. If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access. Enter a boolean value (true or false). Press Enter for the default ("false"). read_only> Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... 
Got code

*** IMPORTANT: All media items uploaded to Google Photos with rclone
*** are stored in full resolution at original quality. These uploads
*** will count towards storage in your Google Account.

--------------------
[remote]
type = google photos
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

This remote is called remote and can now be used like this

See all the albums in your photos

    rclone lsd remote:album

Make a new album

    rclone mkdir remote:album/newAlbum

List the contents of an album

    rclone ls remote:album/newAlbum

Sync /home/local/images to Google Photos, removing any excess files in the album.

    rclone sync -i /home/local/images remote:album/newAlbum

Layout

As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it.

The directories under media show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to back up remote:media/by-month. (NB remote:media/by-day is rather slow at the moment so avoid for syncing.)

Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you've put them into albums.

    /
    - upload
        - file1.jpg
        - file2.jpg
        - ...
    - media
        - all
            - file1.jpg
            - file2.jpg
            - ...
        - by-year
            - 2000
                - file1.jpg
                - ...
            - 2001
                - file2.jpg
                - ...
            - ...
        - by-month
            - 2000
                - 2000-01
                    - file1.jpg
                    - ...
                - 2000-02
                    - file2.jpg
                    - ...
            - ...
        - by-day
            - 2000
                - 2000-01-01
                    - file1.jpg
                    - ...
                - 2000-01-02
                    - file2.jpg
                    - ...
            - ...
    - album
        - album name
        - album name/sub
    - shared-album
        - album name
        - album name/sub
    - feature
        - favorites
            - file1.jpg
            - file2.jpg

There are two writable parts of the tree, the upload directory and sub directories of the album directory.

The upload directory is for uploading files you don't want to put into albums. This will be empty to start with and will contain the files you've uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to dump into Google Photos once. For repeated syncing, uploading to album will work better.

Directories within the album directory are also writeable and you may create new directories (albums) under album. If you copy files with a directory hierarchy in there then rclone will create albums with the / character in them.

For example if you do

    rclone copy /path/to/images remote:album/images

and the images directory contains

    images
        - file1.jpg
        - dir
            - file2.jpg
        - dir2
            - dir3
                - file3.jpg

Then rclone will create the following albums with the following files in

- images
    - file1.jpg
- images/dir
    - file2.jpg
- images/dir2/dir3
    - file3.jpg

This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.

The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.
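For example, to keep a local folder of holiday photos mirrored into an album hierarchy, and then check which albums are shared (a sketch; the paths and album names are examples):

    rclone sync -i /home/user/holiday remote:album/holiday
    rclone lsd remote:shared-album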
Limitations

Only images and videos can be uploaded. If you attempt to upload non-video or non-image files, or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.

Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and WILL count towards your storage quota in your Google Account. The API does NOT offer a way to upload in "high quality" mode.

Downloading Images

When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.

THE CURRENT GOOGLE API DOES NOT ALLOW PHOTOS TO BE DOWNLOADED AT ORIGINAL RESOLUTION. THIS IS VERY IMPORTANT IF YOU ARE, FOR EXAMPLE, RELYING ON "GOOGLE PHOTOS" AS A BACKUP OF YOUR PHOTOS. YOU WILL NOT BE ABLE TO USE RCLONE TO REDOWNLOAD ORIGINAL IMAGES. YOU COULD USE 'GOOGLE TAKEOUT' TO RECOVER THE ORIGINAL PHOTOS AS A LAST RESORT

Downloading Videos

When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.

Duplicates

If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).

If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practice this shouldn't cause too many problems.

Modified time

The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.

This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.

Size

The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.

It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is VERY SLOW and uses up a lot of transactions. This can be enabled with the --gphotos-read-size option or the read_size = true config parameter.

If you want to use the backend with rclone mount you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag. (See the example at the end of this section.)

Albums

Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.

Rclone can remove files it uploaded from albums it created only.

Deleting files

Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.

Rclone cannot delete files anywhere except under album.

Deleting albums

The Google Photos API does not support deleting albums - see bug #135714733.
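As mentioned under Size above, reading real sizes costs an extra transaction per item, so it is slow. A sketch of what that looks like (the path is an example taken from the layout above):

    rclone ls --gphotos-read-size remote:media/by-month/2000/2000-01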
Standard Options

Here are the standard options specific to google photos (Google Photos).

--gphotos-client-id

OAuth Client Id. Leave blank normally.

- Config: client_id
- Env Var: RCLONE_GPHOTOS_CLIENT_ID
- Type: string
- Default: ""

--gphotos-client-secret

OAuth Client Secret. Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_GPHOTOS_CLIENT_SECRET
- Type: string
- Default: ""

--gphotos-read-only

Set to make the Google Photos backend read only.

If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.

- Config: read_only
- Env Var: RCLONE_GPHOTOS_READ_ONLY
- Type: bool
- Default: false

Advanced Options

Here are the advanced options specific to google photos (Google Photos).

--gphotos-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_GPHOTOS_TOKEN
- Type: string
- Default: ""

--gphotos-auth-url

Auth server URL. Leave blank to use the provider defaults.

- Config: auth_url
- Env Var: RCLONE_GPHOTOS_AUTH_URL
- Type: string
- Default: ""

--gphotos-token-url

Token server url. Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_GPHOTOS_TOKEN_URL
- Type: string
- Default: ""

--gphotos-read-size

Set to read the size of media items.

Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.

- Config: read_size
- Env Var: RCLONE_GPHOTOS_READ_SIZE
- Type: bool
- Default: false

--gphotos-start-year

Year limits the photos to be downloaded to those which are uploaded after the given year

- Config: start_year
- Env Var: RCLONE_GPHOTOS_START_YEAR
- Type: int
- Default: 2000

HTTP

The HTTP remote is a read only remote for reading files off a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)

Paths are specified as remote: or remote:path/to/dir.

Here is an example of how to make a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / http Connection
   \ "http"
[snip]
Storage> http
URL of http host to connect to
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "https://example.com"
url> https://beta.rclone.org
Remote config
--------------------
[remote]
url = https://beta.rclone.org
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote               http

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

This remote is called remote and can now be used like this

See all the top level directories

    rclone lsd remote:

List the contents of a directory

    rclone ls remote:directory

Sync the remote directory to /home/local/directory, deleting any excess files.

    rclone sync -i remote:directory /home/local/directory

Read only

This remote is read only - you can't upload files to an HTTP server.
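Since the remote is read only, a convenient way to browse such a server is a read-only mount (a sketch, assuming FUSE is available; the mount point is an example):

    rclone mount --read-only remote: /mnt/http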
Modified time

Most HTTP servers store time accurate to 1 second.

Checksum

No checksums are stored.

Usage without a config file

Since the http remote only has one config parameter it is easy to use without a config file:

    rclone lsd --http-url https://beta.rclone.org :http:

Standard Options

Here are the standard options specific to http (http Connection).

--http-url

URL of http host to connect to

- Config: url
- Env Var: RCLONE_HTTP_URL
- Type: string
- Default: ""
- Examples:
    - "https://example.com"
        - Connect to example.com
    - "https://user:pass@example.com"
        - Connect to example.com using a username and password

Advanced Options

Here are the advanced options specific to http (http Connection).

--http-headers

Set HTTP headers for all transactions.

Use this to set additional HTTP headers for all transactions.

The input format is a comma separated list of key,value pairs. Standard CSV encoding may be used.

For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.

You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'.

- Config: headers
- Env Var: RCLONE_HTTP_HEADERS
- Type: CommaSepList
- Default:

--http-no-slash

Set this if the site doesn't end directories with /

Use this if your target website does not use / on the end of directories.

A / on the end of a path is how rclone normally tells the difference between files and directories. If this flag is set, then rclone will treat all files with Content-Type: text/html as directories and read URLs from them rather than downloading them.

Note that this may cause rclone to confuse genuine HTML files with directories.

- Config: no_slash
- Env Var: RCLONE_HTTP_NO_SLASH
- Type: bool
- Default: false

--http-no-head

Don't use HEAD requests to find file sizes in dir listing

If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to:

- find its size
- check it really exists
- check to see if it is a directory

If you set this option, rclone will not do the HEAD request. This will mean

- directory listings are much quicker
- rclone won't have the times or sizes of any files
- some files that don't exist may be in the listing

- Config: no_head
- Env Var: RCLONE_HTTP_NO_HEAD
- Type: bool
- Default: false

Hubic

Paths are specified as remote:path

Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:

n) New remote
s) Set configuration password
n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Hubic
   \ "hubic"
[snip]
Storage> hubic
Hubic Client Id - leave blank normally.
client_id>
Hubic Client Secret - leave blank normally.
client_secret>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code -------------------- [remote] client_id = client_secret = token = {"access_token":"XXXXXX"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y See the remote setup docs for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Hubic. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall. Once configured you can then use rclone like this, List containers in the top level of your Hubic rclone lsd remote: List all the files in your Hubic rclone ls remote: To copy a local directory to a Hubic directory called backup rclone copy /home/source remote:backup If you want the directory to be visible in the official _Hubic browser_, you need to copy your files to the default directory rclone copy /home/source remote:default/backup --fast-list This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Modified time The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns. This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object. Note that Hubic wraps the Swift backend, so most of the properties of the Swift backend apply. Standard Options Here are the standard options specific to hubic (Hubic). --hubic-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_HUBIC_CLIENT_ID - Type: string - Default: "" --hubic-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_HUBIC_CLIENT_SECRET - Type: string - Default: "" Advanced Options Here are the advanced options specific to hubic (Hubic). --hubic-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_HUBIC_TOKEN - Type: string - Default: "" --hubic-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_HUBIC_AUTH_URL - Type: string - Default: "" --hubic-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_HUBIC_TOKEN_URL - Type: string - Default: "" --hubic-chunk-size Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value. - Config: chunk_size - Env Var: RCLONE_HUBIC_CHUNK_SIZE - Type: SizeSuffix - Default: 5G --hubic-no-chunk Don't chunk files during streaming upload. When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM. Rclone will still chunk files bigger than chunk_size when doing normal copy operations. - Config: no_chunk - Env Var: RCLONE_HUBIC_NO_CHUNK - Type: bool - Default: false --hubic-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info.
- Config: encoding - Env Var: RCLONE_HUBIC_ENCODING - Type: MultiEncoder - Default: Slash,InvalidUtf8 Limitations This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API. The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these. Jottacloud Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. In addition to the official service at jottacloud.com, there are also several whitelabel versions which should work with this backend. Paths are specified as remote:path Paths may be as deep as required, eg remote:directory/subdirectory. Setup Default Setup To configure Jottacloud you will need to generate a personal security token in the Jottacloud web interface. You will find the option to do this in your account security settings (for a whitelabel version you need to find this page in its web interface). Note that the web interface may refer to this token as a JottaCli token. Legacy Setup If you are using one of the whitelabel versions (Elgiganten, Com Hem Cloud) you may not have the option to generate a CLI token. In this case you'll have to use the legacy authentication. To do this, select yes when the setup asks for legacy authentication and enter your username and password. The rest of the setup is identical to the default setup. Here is an example of how to make a remote called remote with the default setup. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Jottacloud \ "jottacloud" [snip] Storage> jottacloud ** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config Use legacy authentification?. This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. y) Yes n) No (default) y/n> n Generate a personal login token here: https://www.jottacloud.com/web/secure Login Token> Do you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client? y) Yes n) No y/n> y Please select the device to use. Normally this will be Jotta Choose a number from below, or type in an existing value 1 > DESKTOP-3H31129 2 > Jotta Devices> 2 Please select the mountpoint to user. Normally this will be Archive Choose a number from below, or type in an existing value 1 > Archive 2 > Links 3 > Sync Mountpoints> 1 -------------------- [jotta] type = jottacloud token = {........} device = Jotta mountpoint = Archive configVersion = 1 -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Once configured you can then use rclone like this, List directories in top level of your Jottacloud rclone lsd remote: List all the files in your Jottacloud rclone ls remote: To copy a local directory to a Jottacloud directory called backup rclone copy /home/source remote:backup Devices and Mountpoints The official Jottacloud client registers a device for each computer you install it on, and then creates a mountpoint for each folder you select for Backup.
The web interface uses a special device called Jotta for the Archive and Sync mountpoints. In most cases you'll want to use the Jotta/Archive device/mountpoint, however if you want to access files uploaded by any of the official clients rclone provides the option to select other devices and mountpoints during config. The built-in Jotta device may also contain several other mountpoints, such as: Latest, Links, Shared and Trash. These are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone supports them only to a very limited degree. Generally you should avoid these, unless you know what you are doing. --fast-list This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to a long wait time before the first results are shown. Modified time and hashes Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. Jottacloud supports MD5 type hashes, so you can use the --checksum flag. Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above). Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: Character Value Replacement ----------- ------- ------------- " 0x22 " * 0x2A * : 0x3A : < 0x3C < > 0x3E > ? 0x3F ? | 0x7C | Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings. Deleting files By default rclone will send all files to the trash when deleting files. They will be permanently deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately by using the --jottacloud-hard-delete flag, or set the equivalent environment variable. Emptying the trash is supported by the cleanup command. Versions Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website. Quota information To view your current quota you can use the rclone about remote: command which will display your usage limit (unless it is unlimited) and the current usage. Advanced Options Here are the advanced options specific to jottacloud (Jottacloud). --jottacloud-md5-memory-limit Files bigger than this will be cached on disk to calculate the MD5 if required. - Config: md5_memory_limit - Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT - Type: SizeSuffix - Default: 10M --jottacloud-trashed-only Only show files that are in the trash. This will show trashed files in their original directory structure.
- Config: trashed_only - Env Var: RCLONE_JOTTACLOUD_TRASHED_ONLY - Type: bool - Default: false --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - Config: hard_delete - Env Var: RCLONE_JOTTACLOUD_HARD_DELETE - Type: bool - Default: false --jottacloud-upload-resume-limit Files bigger than this can be resumed if the upload fails. - Config: upload_resume_limit - Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT - Type: SizeSuffix - Default: 10M --jottacloud-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_JOTTACLOUD_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot Limitations Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead. Jottacloud only supports filenames up to 255 characters in length. Troubleshooting Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases. Koofr Paths are specified as remote:path Paths may be as deep as required, eg remote:directory/subdirectory. The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr web application, giving the password a nice name like rclone and clicking on generate. Here is an example of how to make a remote called koofr. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> koofr Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Koofr \ "koofr" [snip] Storage> koofr ** See help for koofr backend at: https://rclone.org/koofr/ ** Your Koofr user name Enter a string value. Press Enter for the default (""). user> USER@NAME Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [koofr] type = koofr baseurl = https://app.koofr.net user = USER@NAME password = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y You can choose to edit advanced config in order to enter your own service URL if you use an on-premise or white label Koofr instance, or choose an alternative mount instead of your primary storage.
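If you prefer to script the setup, the same remote can also be created non-interactively with rclone config create. This is a minimal sketch rather than official guidance: the user shown is an example, and the password value must be the obscured string produced by rclone obscure, not the plain application password.

    rclone obscure YOUR-APP-PASSWORD
    rclone config create koofr koofr user USER@NAME password OBSCURED-PASSWORD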
Once configured you can then use rclone like this, List directories in top level of your Koofr rclone lsd koofr: List all the files in your Koofr rclone ls koofr: To copy a local directory to a Koofr directory called backup rclone copy /home/source koofr:backup Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: Character Value Replacement ----------- ------- ------------- \ 0x5C \ Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings. Standard Options Here are the standard options specific to koofr (Koofr). --koofr-user Your Koofr user name - Config: user - Env Var: RCLONE_KOOFR_USER - Type: string - Default: "" --koofr-password Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) NB Input to this must be obscured - see rclone obscure. - Config: password - Env Var: RCLONE_KOOFR_PASSWORD - Type: string - Default: "" Advanced Options Here are the advanced options specific to koofr (Koofr). --koofr-endpoint The Koofr API endpoint to use - Config: endpoint - Env Var: RCLONE_KOOFR_ENDPOINT - Type: string - Default: "https://app.koofr.net" --koofr-mountid Mount ID of the mount to use. If omitted, the primary mount is used. - Config: mountid - Env Var: RCLONE_KOOFR_MOUNTID - Type: string - Default: "" --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. - Config: setmtime - Env Var: RCLONE_KOOFR_SETMTIME - Type: bool - Default: true --koofr-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_KOOFR_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot Limitations Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". Mail.ru Cloud Mail.ru Cloud is a cloud storage service provided by the Russian internet company Mail.Ru Group. The official desktop client is Disk-O:, available only on Windows. (Please note that official sites are in Russian) Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until 2FA support is eventually implemented. Feature highlights - Paths may be as deep as required, eg remote:directory/subdirectory - Files have a last modified time property, directories don't - Deleted files are by default moved to the trash - Files and directories can be shared via public links - Partial uploads or streaming are not supported, file size must be known before upload - Maximum file size is limited to 2G for a free account, unlimited for paid accounts - Storage keeps hash for all files and performs transparent deduplication, the hash algorithm is a modified SHA1 - If a particular file is already present in storage, one can quickly submit file hash instead of long file upload (this optimization is supported by rclone) Configuration Here is an example of making a mailru configuration. First create a Mail.ru Cloud account and choose a tariff, then run rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value [snip] XX / Mail.ru Cloud \ "mailru" [snip] Storage> mailru User name (usually email) Enter a string value. Press Enter for the default (""). user> username@mail.ru Password y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Skip full upload if there is another file with same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips [snip] Enter a boolean value (true or false). Press Enter for the default ("true"). Choose a number from below, or type in your own value 1 / Enable \ "true" 2 / Disable \ "false" speedup_enable> 1 Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [remote] type = mailru user = username@mail.ru pass = *** ENCRYPTED *** speedup_enable = true -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Configuration of this backend does not require a local web browser. You can use the configured backend as shown below: See top level directories rclone lsd remote: Make a new directory rclone mkdir remote:directory List the contents of a directory rclone ls remote:directory Sync /home/local/directory to the remote path, deleting any excess files in the path. rclone sync -i /home/local/directory remote:directory Modified time Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970". Hash checksums Hash sums use a custom Mail.ru algorithm based on SHA1. If file size is less than or equal to the SHA1 digest size (20 bytes), its hash is simply its data right-padded with zero bytes. The hash sum of a larger file is computed as a SHA1 sum of the file data bytes concatenated with a decimal representation of the data length. Emptying Trash Removing a file or directory actually moves it to the trash, which is not visible to rclone but can be seen in a web browser. The trashed file still occupies part of the total quota. If you wish to empty your trash and free some quota, you can use the rclone cleanup remote: command, which will permanently delete all your trashed files. This command does not take any path arguments. Quota information To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage. Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: Character Value Replacement ----------- ------- ------------- " 0x22 " * 0x2A * : 0x3A : < 0x3C < > 0x3E > ? 0x3F ? \ 0x5C \ | 0x7C | Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Limitations File size limits depend on your account. A single file size is limited to 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits. Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". Standard Options Here are the standard options specific to mailru (Mail.ru Cloud). --mailru-user User name (usually email) - Config: user - Env Var: RCLONE_MAILRU_USER - Type: string - Default: "" --mailru-pass Password NB Input to this must be obscured - see rclone obscure.
- Config: pass - Env Var: RCLONE_MAILRU_PASS - Type: string - Default: "" --mailru-speedup-enable Skip full upload if there is another file with same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization. - Config: speedup_enable - Env Var: RCLONE_MAILRU_SPEEDUP_ENABLE - Type: bool - Default: true - Examples: - "true" - Enable - "false" - Disable Advanced Options Here are the advanced options specific to mailru (Mail.ru Cloud). --mailru-speedup-file-patterns Comma separated list of file name patterns eligible for speedup (put by hash). Patterns are case insensitive and can contain '*' or '?' meta characters. - Config: speedup_file_patterns - Env Var: RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS - Type: string - Default: "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf" - Examples: - "" - Empty list completely disables speedup (put by hash). - "*" - All files will be attempted for speedup. - "*.mkv,*.avi,*.mp4,*.mp3" - Only common audio/video files will be tried for put by hash. - "*.zip,*.gz,*.rar,*.pdf" - Only common archives or PDF books will be tried for speedup. --mailru-speedup-max-disk This option allows you to disable speedup (put by hash) for large files (because preliminary hashing can exhaust your RAM or disk space) - Config: speedup_max_disk - Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK - Type: SizeSuffix - Default: 3G - Examples: - "0" - Completely disable speedup (put by hash). - "1G" - Files larger than 1Gb will be uploaded directly. - "3G" - Choose this option if you have less than 3Gb free on local disk. --mailru-speedup-max-memory Files larger than the size given below will always be hashed on disk. - Config: speedup_max_memory - Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY - Type: SizeSuffix - Default: 32M - Examples: - "0" - Preliminary hashing will always be done in a temporary disk location. - "32M" - Do not dedicate more than 32Mb RAM for preliminary hashing. - "256M" - You have at most 256Mb RAM free for hash calculations. --mailru-check-hash What should copy do if file checksum is mismatched or invalid - Config: check_hash - Env Var: RCLONE_MAILRU_CHECK_HASH - Type: bool - Default: true - Examples: - "true" - Fail with error. - "false" - Ignore and continue. --mailru-user-agent HTTP user agent used internally by client. Defaults to "rclone/VERSION" or "--user-agent" provided on command line. - Config: user_agent - Env Var: RCLONE_MAILRU_USER_AGENT - Type: string - Default: "" --mailru-quirks Comma separated list of internal maintenance flags. This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist gzip insecure retry400 - Config: quirks - Env Var: RCLONE_MAILRU_QUIRKS - Type: string - Default: "" --mailru-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info.
- Config: encoding - Env Var: RCLONE_MAILRU_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot Mega Mega is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption. This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption. Paths are specified as remote:path Paths may be as deep as required, eg remote:directory/subdirectory. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Mega \ "mega" [snip] Storage> mega User name user> you@example.com Password. y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> y Enter the password: password: Confirm the password: password: Remote config -------------------- [remote] type = mega user = you@example.com pass = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y NOTE: The encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in rclone will fail. Once configured you can then use rclone like this, List directories in top level of your Mega rclone lsd remote: List all the files in your Mega rclone ls remote: To copy a local directory to a Mega directory called backup rclone copy /home/source remote:backup Modified time and hashes Mega does not support modification times or hashes yet. Restricted filename characters Character Value Replacement ----------- ------- ------------- NUL 0x00 ␀ / 0x2F / Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Duplicated files Mega can have two files with exactly the same name and path (unlike a normal file system). Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. Use rclone dedupe to fix duplicated files. Failure to log in Mega remotes seem to get blocked (reject logins) under "heavy use". We haven't worked out the exact blocking rules but it seems to be related to fast paced, successive rclone commands. For example, executing this command 90 times in a row rclone link remote:file will cause the remote to become "blocked". This is not an abnormal situation, for example if you wish to get the public links of a directory with hundreds of files... After more or less a week, the remote will accept rclone logins normally again. You can mitigate this issue by mounting the remote with rclone mount. This will log in when mounting and log out when unmounting only. You can also run rclone rcd and then use rclone rc to run the commands over the API to avoid logging in each time. Rclone does not currently close mega sessions (you can see them in the web interface), however closing the sessions does not solve the issue. If you space rclone commands by 3 seconds it will avoid blocking the remote.
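For example, a simple shell loop along these lines (an illustrative sketch, not an official recommendation; the file names are placeholders) keeps the 3 second spacing when generating many links:

    for f in file1 file2 file3; do
        rclone link remote:"$f"
        sleep 3
    done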
We haven't identified the exact blocking rules, so perhaps one could execute the command 80 times without waiting and then avoid blocking by waiting 3 seconds between further commands... Note that this has been observed by trial and error and might not be set in stone. Other tools seem not to produce this blocking effect, as they use a different working approach (state-based, using sessionIDs instead of log-in) which isn't compatible with the current stateless rclone approach. Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 min, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though. Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum. So, if rclone was working nicely and suddenly you are unable to log in and you are sure the user and the password are correct, it is likely that the remote has been blocked for a while. Standard Options Here are the standard options specific to mega (Mega). --mega-user User name - Config: user - Env Var: RCLONE_MEGA_USER - Type: string - Default: "" --mega-pass Password. NB Input to this must be obscured - see rclone obscure. - Config: pass - Env Var: RCLONE_MEGA_PASS - Type: string - Default: "" Advanced Options Here are the advanced options specific to mega (Mega). --mega-debug Output more debug from Mega. If this flag is set (along with -vv) it will print further debugging information from the mega backend. - Config: debug - Env Var: RCLONE_MEGA_DEBUG - Type: bool - Default: false --mega-hard-delete Delete files permanently rather than putting them into the trash. Normally the mega backend will put all deletions into the trash rather than permanently deleting them. If you specify this then rclone will permanently delete objects instead. - Config: hard_delete - Env Var: RCLONE_MEGA_HARD_DELETE - Type: bool - Default: false --mega-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_MEGA_ENCODING - Type: MultiEncoder - Default: Slash,InvalidUtf8,Dot Limitations This backend uses the go-mega library, an opensource Go library implementing the Mega API. There doesn't appear to be any documentation for the mega protocol beyond the mega C++ SDK source code so there are likely quite a few errors still remaining in this library. Mega allows duplicate files which may confuse rclone. Memory The memory backend is an in-RAM backend. It does not persist its data - use the local backend for that. The memory backend behaves like a bucket based remote (eg like s3). Because it has no parameters you can just use it with the :memory: remote name. You can configure it as a remote like this with rclone config too if you want to: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value [snip] XX / Memory \ "memory" [snip] Storage> memory ** See help for memory backend at: https://rclone.org/memory/ ** Remote config -------------------- [remote] type = memory -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y Because the memory backend isn't persistent it is most useful for testing or with an rclone server or rclone mount, eg rclone mount :memory: /mnt/tmp rclone serve webdav :memory: rclone serve sftp :memory: Modified time and hashes The memory backend supports MD5 hashes and modification times accurate to 1 ns. Restricted filename characters The memory backend replaces the default restricted characters set. Microsoft Azure Blob Storage Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir. Here is an example of making a Microsoft Azure Blob Storage configuration for a remote called remote. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Microsoft Azure Blob Storage \ "azureblob" [snip] Storage> azureblob Storage Account Name account> account_name Storage Account Key key> base64encodedkey== Endpoint for the service - leave blank normally. endpoint> Remote config -------------------- [remote] account = account_name key = base64encodedkey== endpoint = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y See all containers rclone lsd remote: Make a new container rclone mkdir remote:container List the contents of a container rclone ls remote:container Sync /home/local/directory to the remote container, deleting any excess files in the container. rclone sync -i /home/local/directory remote:container --fast-list This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Modified time The modified time is stored as metadata on the object with the mtime key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it. Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: Character Value Replacement ----------- ------- ------------- / 0x2F / \ 0x5C \ File names can also not end with the following characters. These only get replaced if they are the last character in the name: Character Value Replacement ----------- ------- ------------- . 0x2E . Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Hashes MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, eg the local disk. Authenticating with Azure Blob Storage Rclone has 3 ways of authenticating with Azure Blob Storage: Account and Key This is the most straightforward and least flexible way. Just fill in the account and key lines and leave the rest blank. SAS URL This can be an account level SAS URL or container level SAS URL. To use it leave account and key blank and fill in sas_url.
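For reference, a SAS URL remote needs nothing in the config file beyond the type and the sas_url. A minimal sketch with placeholder values (the remote name azsas and the URL are illustrative):

    [azsas]
    type = azureblob
    sas_url = https://ACCOUNT.blob.core.windows.net/CONTAINER?sv=...&sig=...

It can then be used like any other remote, eg rclone ls azsas:CONTAINER.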
An account level SAS URL or container level SAS URL can be obtained from the Azure portal or the Azure Storage Explorer. To get a container level SAS URL right click on a container in the Azure Blob explorer in the Azure portal. If you use a container level SAS URL, rclone operations are permitted only on a particular container, eg rclone ls azureblob:container You can also list the single container from the root. This will only show the container specified by the SAS URL. $ rclone lsd azureblob: container/ Note that you can't see or access any other containers - this will fail rclone ls azureblob:othercontainer Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server. Multipart uploads Rclone supports multipart uploads with Azure Blob storage. Files bigger than 256MB will be uploaded using chunked upload by default. The files will be uploaded in parallel in 4MB chunks (by default). Note that these chunks are buffered in memory and there may be up to --transfers of them being uploaded at once. Files can't be split into more than 50,000 chunks, so by default the largest file that can be uploaded with 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates fewer than 50,000 chunks. By default this will mean a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M. Note that rclone doesn't commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won't allow more than that amount of uncommitted blocks. Standard Options Here are the standard options specific to azureblob (Microsoft Azure Blob Storage). --azureblob-account Storage Account Name (leave blank to use SAS URL or Emulator) - Config: account - Env Var: RCLONE_AZUREBLOB_ACCOUNT - Type: string - Default: "" --azureblob-key Storage Account Key (leave blank to use SAS URL or Emulator) - Config: key - Env Var: RCLONE_AZUREBLOB_KEY - Type: string - Default: "" --azureblob-sas-url SAS URL for container level access only (leave blank if using account/key or Emulator) - Config: sas_url - Env Var: RCLONE_AZUREBLOB_SAS_URL - Type: string - Default: "" --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint) - Config: use_emulator - Env Var: RCLONE_AZUREBLOB_USE_EMULATOR - Type: bool - Default: false Advanced Options Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage). --azureblob-endpoint Endpoint for the service Leave blank normally. - Config: endpoint - Env Var: RCLONE_AZUREBLOB_ENDPOINT - Type: string - Default: "" --azureblob-upload-cutoff Cutoff for switching to chunked upload (<= 256MB). - Config: upload_cutoff - Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF - Type: SizeSuffix - Default: 256M --azureblob-chunk-size Upload chunk size (<= 100MB). Note that this is stored in memory and there may be up to "--transfers" chunks stored at once in memory. - Config: chunk_size - Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE - Type: SizeSuffix - Default: 4M --azureblob-list-chunk Size of blob list. This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out (source).
This can be used to limit the number of blob items returned, to avoid the time out. - Config: list_chunk - Env Var: RCLONE_AZUREBLOB_LIST_CHUNK - Type: int - Default: 5000 --azureblob-access-tier Access tier of blob: hot, cool or archive. Archived blobs can be restored by setting access tier to hot or cool. Leave blank if you intend to use the default access tier, which is set at account level. If there is no "access tier" specified, rclone doesn't apply any tier. rclone performs a "Set Tier" operation on blobs while uploading; if objects are not modified, specifying a new "access tier" will have no effect. If blobs are in "archive tier" at the remote, trying to perform data transfer operations from the remote will not be allowed. The user should first restore them by tiering the blobs to "Hot" or "Cool". - Config: access_tier - Env Var: RCLONE_AZUREBLOB_ACCESS_TIER - Type: string - Default: "" --azureblob-disable-checksum Don't store MD5 checksum with object metadata. Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading. - Config: disable_checksum - Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM - Type: bool - Default: false --azureblob-memory-pool-flush-time How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool. - Config: memory_pool_flush_time - Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME - Type: Duration - Default: 1m0s --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. - Config: memory_pool_use_mmap - Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP - Type: bool - Default: false --azureblob-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_AZUREBLOB_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8 Limitations MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy. Azure Storage Emulator Support You can test rclone with the storage emulator locally. To do this make sure the Azure storage emulator is installed locally, then set up a new remote with rclone config following the instructions described in the introduction and set the use_emulator config option to true. You do not need to provide a default account name or key if using the emulator. Microsoft OneDrive Paths are specified as remote:path Paths may be as deep as required, eg remote:directory/subdirectory. The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Microsoft OneDrive \ "onedrive" [snip] Storage> onedrive Microsoft App Client Id Leave blank normally. Enter a string value. Press Enter for the default (""). client_id> Microsoft App Client Secret Leave blank normally.
Enter a string value. Press Enter for the default (""). client_secret> Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code Choose a number from below, or type in an existing value 1 / OneDrive Personal or Business \ "onedrive" 2 / Sharepoint site \ "sharepoint" 3 / Type in driveID \ "driveid" 4 / Type in SiteID \ "siteid" 5 / Search a Sharepoint site \ "search" Your choice> 1 Found 1 drives, please select the one you want to use: 0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk Chose drive to use:> 0 Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents Is that okay? y) Yes n) No y/n> y -------------------- [remote] type = onedrive token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"} drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk drive_type = business -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y See the remote setup docs for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall. Once configured you can then use rclone like this, List directories in top level of your OneDrive rclone lsd remote: List all the files in your OneDrive rclone ls remote: To copy a local directory to a OneDrive directory called backup rclone copy /home/source remote:backup Getting your own Client ID and Key You can use your own Client ID if the default (client_id left blank) one doesn't work for you or you see lots of throttling. The default Client ID and Key are shared by all rclone users when performing requests. If you are having problems with them (e.g. seeing a lot of throttling), you can get your own Client ID and Key by following the steps below: 1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click New registration. 2. Enter a name for your app, choose account type Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), select Web in Redirect URI, enter http://localhost:53682/ and click Register. Copy and keep the Application (client) ID under the app name for later use. 3. Under manage select Certificates & secrets, click New client secret. Copy and keep that secret for later use. 4. Under manage select API permissions, click Add a permission and select Microsoft Graph then select delegated permissions. 5. Search and select the following permissions: Files.Read, Files.ReadWrite, Files.Read.All, Files.ReadWrite.All, offline_access, User.Read. Once selected click Add permissions at the bottom. Now the application is complete. Run rclone config to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.
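When the wizard finishes, the resulting entry in your rclone config file carries your own credentials alongside the token, roughly like this (all values below are placeholders):

    [remote]
    type = onedrive
    client_id = YOUR-APP-ID
    client_secret = YOUR-APP-SECRET
    token = {"access_token":"..."}
    drive_id = ...
    drive_type = business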
Modification time and hashes OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash. For all types of OneDrive you can use the --checksum flag. Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: Character Value Replacement ----------- ------- ------------- " 0x22 " * 0x2A * : 0x3A : < 0x3C < > 0x3E > ? 0x3F ? \ 0x5C \ | 0x7C | # 0x23 # % 0x25 % File names can also not end with the following characters. These only get replaced if they are the last character in the name: Character Value Replacement ----------- ------- ------------- SP 0x20 ␠ . 0x2E . File names can also not begin with the following characters. These only get replaced if they are the first character in the name: Character Value Replacement ----------- ------- ------------- SP 0x20 ␠ ~ 0x7E ~ Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Deleting files Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website. Standard Options Here are the standard options specific to onedrive (Microsoft OneDrive). --onedrive-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_ONEDRIVE_CLIENT_ID - Type: string - Default: "" --onedrive-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET - Type: string - Default: "" Advanced Options Here are the advanced options specific to onedrive (Microsoft OneDrive). --onedrive-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_ONEDRIVE_TOKEN - Type: string - Default: "" --onedrive-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_ONEDRIVE_AUTH_URL - Type: string - Default: "" --onedrive-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_ONEDRIVE_TOKEN_URL - Type: string - Default: "" --onedrive-chunk-size Chunk size to upload files with - must be multiple of 320k (327,680 bytes). Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big." Note that the chunks will be buffered into memory. - Config: chunk_size - Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE - Type: SizeSuffix - Default: 10M --onedrive-drive-id The ID of the drive to use - Config: drive_id - Env Var: RCLONE_ONEDRIVE_DRIVE_ID - Type: string - Default: "" --onedrive-drive-type The type of the drive ( personal | business | documentLibrary ) - Config: drive_type - Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE - Type: string - Default: "" --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. By default rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listing, set this option. 
- Config: expose_onenote_files - Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES - Type: bool - Default: false --onedrive-server-side-across-configs Allow server side operations (eg copy) to work across different onedrive configs. This can be useful if you wish to do a server side copy between two different Onedrives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations. - Config: server_side_across_configs - Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS - Type: bool - Default: false --onedrive-no-versions Remove all versions on modifying operations Onedrive for business creates versions when rclone uploads new files overwriting an existing one and when it sets the modification time. These versions take up space out of the quota. This flag checks for versions after file upload and setting modification time and removes all but the last version. NB Onedrive personal can't currently delete versions so don't use this flag there. - Config: no_versions - Env Var: RCLONE_ONEDRIVE_NO_VERSIONS - Type: bool - Default: false --onedrive-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_ONEDRIVE_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot Limitations If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote: command to get a new token and refresh token. Naming Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead. File sizes The largest allowed file size is 100GB for both OneDrive Personal and OneDrive for Business (Updated 17 June 2020). Path length The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones. Number of files OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like couldn’t list files: UnknownError:. See #2707 for more info. An official document about the limitations for different types of OneDrive can be found here. Versions Every change in a file on OneDrive causes the service to create a new version of the file. This counts against a user's quota. For example changing the modification time of a file creates a second version, so the file apparently uses twice the space. The copy command, for example, is affected by this as rclone copies the file and then afterwards sets the modification time to match the source file, which uses another version. You can use the rclone cleanup command (see below) to remove all old versions. Or you can set the no_versions parameter to true and rclone will remove versions after operations which create new versions.
This takes extra transactions so only enable it if you need it. NOTE At the time of writing Onedrive Personal creates versions (but not for setting the modification time) but the API for removing them returns "API not found" so cleanup and no_versions should not be used on Onedrive Personal. Disabling versioning Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has brought an update to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting: 1. Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case you haven't installed this already) 2. Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking 3. Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this will prompt for your credentials) 4. Set-SPOTenant -EnableMinimumVersionRequirement $False 5. Disconnect-SPOService (to disconnect from the server) _Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met._ User Weropol has found a method to disable versioning on OneDrive 1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page. 2. Click Site settings. 3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists. 4. Click Customize "Documents". 5. Click General Settings > Versioning Settings. 6. Under Document Version History select the option No versioning. Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe. 7. Apply the changes by clicking OK. 8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag) 9. Restore the versioning settings after using rclone. (Optional) Cleanup OneDrive supports rclone cleanup which causes rclone to look through every file under the path supplied and delete all versions but the current version. Because this involves traversing all the files, then querying each file for versions, it can be quite slow. Rclone does --checkers tests in parallel. The command also supports -i which is a great way to see what it would do. rclone cleanup -i remote:path/subdir # interactively remove all old versions for path/subdir rclone cleanup remote:path/subdir # unconditionally remove all old versions for path/subdir NB Onedrive personal can't currently delete versions Troubleshooting Unexpected file size/hash differences on Sharepoint It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and hash checks to fail. There are also other situations that will cause OneDrive to report inconsistent file sizes. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments: --ignore-checksum --ignore-size Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for OneDrive and find the affected files (which will be in the error messages/log for rclone). Simply click on each of these files, causing OneDrive to open them on the web.
This will cause each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above. Replacing/deleting existing files on Sharepoint gets "item not found" It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.). As a workaround, you may use the --backup-dir command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir on backend mysharepoint, you may use: --backup-dir mysharepoint:rclone-backup-dir access_denied (AADSTS65005) Error: access_denied Code: AADSTS65005 Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned. This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins. However, there are other ways to interact with your OneDrive account. Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint invalid_grant (AADSTS50076) Error: invalid_grant Code: AADSTS50076 Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'. If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run rclone config, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: Already have a token - refresh?. For this question, answer y and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend. OpenDrive Paths are specified as remote:path Paths may be as deep as required, eg remote:directory/subdirectory. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: n) New remote d) Delete remote q) Quit config e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / OpenDrive \ "opendrive" [snip] Storage> opendrive Username username> Password y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: -------------------- [remote] username = password = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y List directories in top level of your OpenDrive rclone lsd remote: List all the files in your OpenDrive rclone ls remote: To copy a local directory to an OpenDrive directory called backup rclone copy /home/source remote:backup Modified time and MD5SUMs OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. 
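Given the modification time and MD5SUM support described above, a transfer can be verified without re-uploading using the standard rclone check command, which compares sizes and hashes where available (the paths here match the example above):

    rclone check /home/source remote:backup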
Restricted filename characters Character Value Replacement ----------- ------- ------------- NUL 0x00 ␀ / 0x2F ／ " 0x22 ＂ * 0x2A ＊ : 0x3A ： < 0x3C ＜ > 0x3E ＞ ? 0x3F ？ \ 0x5C ＼ | 0x7C ｜ File names can also not begin or end with the following characters. These only get replaced if they are the first or last character in the name: Character Value Replacement ----------- ------- ------------- SP 0x20 ␠ HT 0x09 ␉ LF 0x0A ␊ VT 0x0B ␋ CR 0x0D ␍ Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Standard Options Here are the standard options specific to opendrive (OpenDrive). --opendrive-username Username - Config: username - Env Var: RCLONE_OPENDRIVE_USERNAME - Type: string - Default: "" --opendrive-password Password. NB Input to this must be obscured - see rclone obscure. - Config: password - Env Var: RCLONE_OPENDRIVE_PASSWORD - Type: string - Default: "" Advanced Options Here are the advanced options specific to opendrive (OpenDrive). --opendrive-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_OPENDRIVE_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot --opendrive-chunk-size Files will be uploaded in chunks this size. Note that these chunks are buffered in memory so increasing them will increase memory use. - Config: chunk_size - Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE - Type: SizeSuffix - Default: 10M Limitations Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead. QingStor Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir. Here is an example of making a QingStor configuration. First run rclone config This will guide you through an interactive setup process. No remotes found - make a new one n) New remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / QingStor Object Storage \ "qingstor" [snip] Storage> qingstor Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter QingStor credentials in the next step \ "false" 2 / Get QingStor credentials from the environment (env vars or IAM) \ "true" env_auth> 1 QingStor Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> access_key QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> secret_key Enter an endpoint URL to connection QingStor API. Leave blank will use the default value "https://qingstor.com:443" endpoint> Zone connect to. Default is "pek3a". Choose a number from below, or type in your own value / The Beijing (China) Three Zone 1 | Needs location constraint pek3a. \ "pek3a" / The Shanghai (China) First Zone 2 | Needs location constraint sh1a. \ "sh1a" zone> 1 Number of connection retry.
Leave blank will use the default value "3". connection_retries> Remote config -------------------- [remote] env_auth = false access_key_id = access_key secret_access_key = secret_key endpoint = zone = pek3a connection_retries = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y This remote is called remote and can now be used like this See all buckets rclone lsd remote: Make a new bucket rclone mkdir remote:bucket List the contents of a bucket rclone ls remote:bucket Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket. rclone sync -i /home/local/directory remote:bucket --fast-list This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Multipart uploads rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM. Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket just for one bucket rclone cleanup remote: for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time. Buckets and Zone With QingStor you can list buckets (rclone lsd) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, incorrect zone, the bucket is not in 'XXX' zone. Authentication There are two ways to supply rclone with a set of QingStor credentials. In order of precedence: - Directly in the rclone configuration file (as configured by rclone config) - set access_key_id and secret_access_key - Runtime configuration: - set env_auth to true in the config file - Exporting the following environment variables before running rclone - Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY - Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY Restricted filename characters The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced. Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Standard Options Here are the standard options specific to qingstor (QingCloud Object Storage). --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - Config: env_auth - Env Var: RCLONE_QINGSTOR_ENV_AUTH - Type: bool - Default: false - Examples: - "false" - Enter QingStor credentials in the next step - "true" - Get QingStor credentials from the environment (env vars or IAM) --qingstor-access-key-id QingStor Access Key ID Leave blank for anonymous access or runtime credentials. - Config: access_key_id - Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID - Type: string - Default: "" --qingstor-secret-access-key QingStor Secret Access Key (password) Leave blank for anonymous access or runtime credentials. - Config: secret_access_key - Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY - Type: string - Default: "" --qingstor-endpoint Enter an endpoint URL to connection QingStor API. Leave blank will use the default value "https://qingstor.com:443" - Config: endpoint - Env Var: RCLONE_QINGSTOR_ENDPOINT - Type: string - Default: "" --qingstor-zone Zone to connect to. Default is "pek3a". 
- Config: zone - Env Var: RCLONE_QINGSTOR_ZONE - Type: string - Default: "" - Examples: - "pek3a" - The Beijing (China) Three Zone - Needs location constraint pek3a. - "sh1a" - The Shanghai (China) First Zone - Needs location constraint sh1a. - "gd2a" - The Guangdong (China) Second Zone - Needs location constraint gd2a. Advanced Options Here are the advanced options specific to qingstor (QingCloud Object Storage). --qingstor-connection-retries Number of connection retries. - Config: connection_retries - Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES - Type: int - Default: 3 --qingstor-upload-cutoff Cutoff for switching to chunked upload Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB. - Config: upload_cutoff - Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF - Type: SizeSuffix - Default: 200M --qingstor-chunk-size Chunk size to use for uploading. When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size. Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer. If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers. - Config: chunk_size - Env Var: RCLONE_QINGSTOR_CHUNK_SIZE - Type: SizeSuffix - Default: 4M --qingstor-upload-concurrency Concurrency for multipart uploads. This is the number of chunks of the same file that are uploaded concurrently. NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though). If you are uploading small numbers of large files over high speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers. - Config: upload_concurrency - Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY - Type: int - Default: 1 --qingstor-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_QINGSTOR_ENCODING - Type: MultiEncoder - Default: Slash,Ctl,InvalidUtf8 Swift Swift refers to OpenStack Object Storage. Commercial implementations of this include: - Rackspace Cloud Files - Memset Memstore - OVH Object Storage - Oracle Cloud Storage - IBM Bluemix Cloud ObjectStorage Swift Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir. Here is an example of making a swift configuration. First run rclone config This will guide you through an interactive setup process. No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" [snip] Storage> swift Get swift credentials from environment variables in standard OpenStack form. Choose a number from below, or type in your own value 1 / Enter swift credentials in the next step \ "false" 2 / Get swift credentials from environment vars. Leave other fields blank if using this. \ "true" env_auth> true User name to log in (OS_USERNAME). user> API key or password (OS_PASSWORD). key> Authentication URL for server (OS_AUTH_URL).
Choose a number from below, or type in your own value 1 / Rackspace US \ "https://auth.api.rackspacecloud.com/v1.0" 2 / Rackspace UK \ "https://lon.auth.api.rackspacecloud.com/v1.0" 3 / Rackspace v2 \ "https://identity.api.rackspacecloud.com/v2.0" 4 / Memset Memstore UK \ "https://auth.storage.memset.com/v1.0" 5 / Memset Memstore UK v2 \ "https://auth.storage.memset.com/v2.0" 6 / OVH \ "https://auth.cloud.ovh.net/v3" auth> User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). user_id> User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) domain> Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) tenant> Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) tenant_id> Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) tenant_domain> Region name - optional (OS_REGION_NAME) region> Storage URL - optional (OS_STORAGE_URL) storage_url> Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) auth_token> AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) auth_version> Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) Choose a number from below, or type in your own value 1 / Public (default, choose this if not sure) \ "public" 2 / Internal (use internal service net) \ "internal" 3 / Admin \ "admin" endpoint_type> Remote config -------------------- [test] env_auth = true user = key = auth = user_id = domain = tenant = tenant_id = tenant_domain = region = storage_url = auth_token = auth_version = endpoint_type = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y This remote is called remote and can now be used like this See all containers rclone lsd remote: Make a new container rclone mkdir remote:container List the contents of a container rclone ls remote:container Sync /home/local/directory to the remote container, deleting any excess files in the container. rclone sync -i /home/local/directory remote:container Configuration from an OpenStack credentials file An OpenStack credentials file typically looks something like this (without the comments) export OS_AUTH_URL=https://a.provider.net/v2.0 export OS_TENANT_ID=ffffffffffffffffffffffffffffffff export OS_TENANT_NAME="1234567890123456" export OS_USERNAME="123abc567xy" echo "Please enter your OpenStack Password: " read -sr OS_PASSWORD_INPUT export OS_PASSWORD=$OS_PASSWORD_INPUT export OS_REGION_NAME="SBG1" if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi The config file needs to look something like this where $OS_USERNAME represents the value of the OS_USERNAME variable - 123abc567xy in the example above. [remote] type = swift user = $OS_USERNAME key = $OS_PASSWORD auth = $OS_AUTH_URL tenant = $OS_TENANT_NAME Note that you may (or may not) need to set region too - try without first. Configuration from the environment If you prefer you can configure rclone to use swift using a standard set of OpenStack environment variables. When you run through the config, make sure you choose true for env_auth and leave everything else blank. rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is a list of the variables in the docs for the swift library.
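As a minimal sketch of this, assuming your credentials live in a file called openstack-rc (the name is arbitrary) and you have configured a remote called remote with env_auth = true and everything else blank:

source openstack-rc
rclone lsd remote:

rclone picks up the standard OS_* variables from the environment and uses them to fill in the blank config parameters.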
Using an alternate authentication method If your OpenStack installation uses a non-standard authentication method that might not be yet supported by rclone or the underlying swift library, you can authenticate externally (e.g. calling manually the openstack commands to get a token). Then, you just need to pass the two configuration variables auth_token and storage_url. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation. Using rclone without a config file You can use rclone with swift without a config file, if desired, like this: source openstack-credentials-file export RCLONE_CONFIG_MYREMOTE_TYPE=swift export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true rclone lsd myremote: --fast-list This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. --update and --use-server-modtime As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata. For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded. Standard Options Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - Config: env_auth - Env Var: RCLONE_SWIFT_ENV_AUTH - Type: bool - Default: false - Examples: - "false" - Enter swift credentials in the next step - "true" - Get swift credentials from environment vars. Leave other fields blank if using this. --swift-user User name to log in (OS_USERNAME). - Config: user - Env Var: RCLONE_SWIFT_USER - Type: string - Default: "" --swift-key API key or password (OS_PASSWORD). - Config: key - Env Var: RCLONE_SWIFT_KEY - Type: string - Default: "" --swift-auth Authentication URL for server (OS_AUTH_URL). - Config: auth - Env Var: RCLONE_SWIFT_AUTH - Type: string - Default: "" - Examples: - "https://auth.api.rackspacecloud.com/v1.0" - Rackspace US - "https://lon.auth.api.rackspacecloud.com/v1.0" - Rackspace UK - "https://identity.api.rackspacecloud.com/v2.0" - Rackspace v2 - "https://auth.storage.memset.com/v1.0" - Memset Memstore UK - "https://auth.storage.memset.com/v2.0" - Memset Memstore UK v2 - "https://auth.cloud.ovh.net/v3" - OVH --swift-user-id User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). 
- Config: user_id - Env Var: RCLONE_SWIFT_USER_ID - Type: string - Default: "" --swift-domain User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - Config: domain - Env Var: RCLONE_SWIFT_DOMAIN - Type: string - Default: "" --swift-tenant Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - Config: tenant - Env Var: RCLONE_SWIFT_TENANT - Type: string - Default: "" --swift-tenant-id Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - Config: tenant_id - Env Var: RCLONE_SWIFT_TENANT_ID - Type: string - Default: "" --swift-tenant-domain Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - Config: tenant_domain - Env Var: RCLONE_SWIFT_TENANT_DOMAIN - Type: string - Default: "" --swift-region Region name - optional (OS_REGION_NAME) - Config: region - Env Var: RCLONE_SWIFT_REGION - Type: string - Default: "" --swift-storage-url Storage URL - optional (OS_STORAGE_URL) - Config: storage_url - Env Var: RCLONE_SWIFT_STORAGE_URL - Type: string - Default: "" --swift-auth-token Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - Config: auth_token - Env Var: RCLONE_SWIFT_AUTH_TOKEN - Type: string - Default: "" --swift-application-credential-id Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) - Config: application_credential_id - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID - Type: string - Default: "" --swift-application-credential-name Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) - Config: application_credential_name - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME - Type: string - Default: "" --swift-application-credential-secret Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) - Config: application_credential_secret - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET - Type: string - Default: "" --swift-auth-version AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - Config: auth_version - Env Var: RCLONE_SWIFT_AUTH_VERSION - Type: int - Default: 0 --swift-endpoint-type Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) - Config: endpoint_type - Env Var: RCLONE_SWIFT_ENDPOINT_TYPE - Type: string - Default: "public" - Examples: - "public" - Public (default, choose this if not sure) - "internal" - Internal (use internal service net) - "admin" - Admin --swift-storage-policy The storage policy to use when creating a new container This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider. - Config: storage_policy - Env Var: RCLONE_SWIFT_STORAGE_POLICY - Type: string - Default: "" - Examples: - "" - Default - "pcs" - OVH Public Cloud Storage - "pca" - OVH Public Cloud Archive Advanced Options Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). --swift-chunk-size Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value. - Config: chunk_size - Env Var: RCLONE_SWIFT_CHUNK_SIZE - Type: SizeSuffix - Default: 5G --swift-no-chunk Don't chunk files during streaming upload. When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB.
However non chunked files are easier to deal with and have an MD5SUM. Rclone will still chunk files bigger than chunk_size when doing normal copy operations. - Config: no_chunk - Env Var: RCLONE_SWIFT_NO_CHUNK - Type: bool - Default: false --swift-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_SWIFT_ENCODING - Type: MultiEncoder - Default: Slash,InvalidUtf8 Modified time The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns. This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object. Restricted filename characters Character Value Replacement ----------- ------- ------------- NUL 0x00 ␀ / 0x2F ／ Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Limitations The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these. Troubleshooting Rclone gives Failed to create file system for "remote:": Bad Request Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift. So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag. This may also be caused by specifying the region when you shouldn't have (eg OVH). Rclone gives Failed to create file system: Response didn't have storage url and auth token This is most likely caused by forgetting to specify your tenant when setting up a swift remote. pCloud Paths are specified as remote:path Paths may be as deep as required, eg remote:directory/subdirectory. The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Pcloud \ "pcloud" [snip] Storage> pcloud Pcloud App Client Id - leave blank normally. client_id> Pcloud App Client Secret - leave blank normally. client_secret> Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] client_id = client_secret = token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y See the remote setup docs for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
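For a headless machine, the usual pattern (a sketch of the generic remote setup procedure linked above, not anything pCloud specific) is to answer N to the Use auto config? question, then on a machine which does have a browser run

rclone authorize "pcloud"

and paste the token it prints back into the waiting config session on the headless machine.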
Once configured you can then use rclone like this, List directories in top level of your pCloud rclone lsd remote: List all the files in your pCloud rclone ls remote: To copy a local directory to a pCloud directory called backup rclone copy /home/source remote:backup Modified time and hashes pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a modification time pCloud requires the object to be re-uploaded. pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum flag. Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: Character Value Replacement ----------- ------- ------------- \ 0x5C ＼ Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Deleting files Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup can be used to empty the trash. Root folder ID You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your pCloud drive. Normally you will leave this blank and rclone will determine the correct root to use itself. However you can set this to restrict rclone to a specific folder hierarchy. In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the folder field of the URL when you open the relevant folder in the pCloud web interface. So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the config. Standard Options Here are the standard options specific to pcloud (Pcloud). --pcloud-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_PCLOUD_CLIENT_ID - Type: string - Default: "" --pcloud-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_PCLOUD_CLIENT_SECRET - Type: string - Default: "" Advanced Options Here are the advanced options specific to pcloud (Pcloud). --pcloud-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_PCLOUD_TOKEN - Type: string - Default: "" --pcloud-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_PCLOUD_AUTH_URL - Type: string - Default: "" --pcloud-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_PCLOUD_TOKEN_URL - Type: string - Default: "" --pcloud-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_PCLOUD_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot --pcloud-root-folder-id Fill in for rclone to use a non root folder as its starting point. - Config: root_folder_id - Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID - Type: string - Default: "d0" --pcloud-hostname Hostname to connect to. This is normally set when rclone initially does the oauth connection, however you will need to set it by hand if you are using remote config with rclone authorize.
- Config: hostname - Env Var: RCLONE_PCLOUD_HOSTNAME - Type: string - Default: "api.pcloud.com" - Examples: - "api.pcloud.com" - Original/US region - "eapi.pcloud.com" - EU region premiumize.me Paths are specified as remote:path Paths may be as deep as required, eg remote:directory/subdirectory. The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / premiumize.me \ "premiumizeme" [snip] Storage> premiumizeme ** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ ** Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] type = premiumizeme token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> See the remote setup docs for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall. Once configured you can then use rclone like this, List directories in top level of your premiumize.me rclone lsd remote: List all the files in your premiumize.me rclone ls remote: To copy a local directory to a premiumize.me directory called backup rclone copy /home/source remote:backup Modified time and hashes premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work. Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: Character Value Replacement ----------- ------- ------------- \ 0x5C ＼ " 0x22 ＂ Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Standard Options Here are the standard options specific to premiumizeme (premiumize.me). --premiumizeme-api-key API Key. This is not normally used - use oauth instead. - Config: api_key - Env Var: RCLONE_PREMIUMIZEME_API_KEY - Type: string - Default: "" Advanced Options Here are the advanced options specific to premiumizeme (premiumize.me). --premiumizeme-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_PREMIUMIZEME_ENCODING - Type: MultiEncoder - Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot Limitations Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". premiumize.me file names can't have the \ or " characters in.
rclone maps these to and from identical looking unicode equivalents ＼ and ＂ premiumize.me only supports filenames up to 255 characters in length. put.io Paths are specified as remote:path put.io paths may be as deep as required, eg remote:directory/subdirectory. The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> putio Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Put.io \ "putio" [snip] Storage> putio ** See help for putio backend at: https://rclone.org/putio/ ** Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [putio] type = putio token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Current remotes: Name Type ==== ==== putio putio e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> q Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode. You can then use it like this, List directories in top level of your put.io rclone lsd remote: List all the files in your put.io rclone ls remote: To copy a local directory to a put.io directory called backup rclone copy /home/source remote:backup Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: Character Value Replacement ----------- ------- ------------- \ 0x5C ＼ Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Advanced Options Here are the advanced options specific to putio (Put.io). --putio-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_PUTIO_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot Seafile This is a backend for the Seafile storage service: - It works with both the free community edition and the professional edition. - Seafile versions 6.x and 7.x are both supported. - Encrypted libraries are also supported. - It supports 2FA enabled users Root mode vs Library mode There are two distinct modes in which you can set up your remote: - you point your remote to the ROOT OF THE SERVER, meaning you don't specify a library during the configuration: Paths are specified as remote:library. You may put subdirectories in too, eg remote:library/path/to/dir. - you point your remote to a specific library during the configuration: Paths are specified as remote:path/to/dir.
THIS IS THE RECOMMENDED MODE WHEN USING ENCRYPTED LIBRARIES. (_This mode is possibly slightly faster than the root mode_) Configuration in root mode Here is an example of making a seafile configuration for a user with NO two-factor authentication. First run rclone config This will guide you through an interactive setup process. To authenticate you will need the URL of your server, your email (or username) and your password. No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> seafile Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Seafile \ "seafile" [snip] Storage> seafile ** See help for seafile backend at: https://rclone.org/seafile/ ** URL of seafile host to connect to Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Connect to cloud.seafile.com \ "https://cloud.seafile.com/" url> http://my.seafile.server/ User name (usually email address) Enter a string value. Press Enter for the default (""). user> me@example.com Password y) Yes type in my own password g) Generate random password n) No leave this optional password blank (default) y/g> y Enter the password: password: Confirm the password: password: Two-factor authentication ('true' if the account has 2FA enabled) Enter a boolean value (true or false). Press Enter for the default ("false"). 2fa> false Name of the library. Leave blank to access all non-encrypted libraries. Enter a string value. Press Enter for the default (""). library> Library password (for encrypted libraries only). Leave blank if you pass it through the command line. y) Yes type in my own password g) Generate random password n) No leave this optional password blank (default) y/g/n> n Edit advanced config? (y/n) y) Yes n) No (default) y/n> n Remote config Two-factor authentication is not enabled on this account. -------------------- [seafile] type = seafile url = http://my.seafile.server/ user = me@example.com pass = *** ENCRYPTED *** 2fa = false -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y This remote is called seafile. It's pointing to the root of your seafile server and can now be used like this: See all libraries rclone lsd seafile: Create a new library rclone mkdir seafile:library List the contents of a library rclone ls seafile:library Sync /home/local/directory to the remote library, deleting any excess files in the library. rclone sync -i /home/local/directory seafile:library Configuration in library mode Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will then attempt to authenticate you: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> seafile Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Seafile \ "seafile" [snip] Storage> seafile ** See help for seafile backend at: https://rclone.org/seafile/ ** URL of seafile host to connect to Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Connect to cloud.seafile.com \ "https://cloud.seafile.com/" url> http://my.seafile.server/ User name (usually email address) Enter a string value.
Press Enter for the default (""). user> me@example.com Password y) Yes type in my own password g) Generate random password n) No leave this optional password blank (default) y/g> y Enter the password: password: Confirm the password: password: Two-factor authentication ('true' if the account has 2FA enabled) Enter a boolean value (true or false). Press Enter for the default ("false"). 2fa> true Name of the library. Leave blank to access all non-encrypted libraries. Enter a string value. Press Enter for the default (""). library> My Library Library password (for encrypted libraries only). Leave blank if you pass it through the command line. y) Yes type in my own password g) Generate random password n) No leave this optional password blank (default) y/g/n> n Edit advanced config? (y/n) y) Yes n) No (default) y/n> n Remote config Two-factor authentication: please enter your 2FA code 2fa code> 123456 Authenticating... Success! -------------------- [seafile] type = seafile url = http://my.seafile.server/ user = me@example.com pass = 2fa = true library = My Library -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y You'll notice your password is blank in the configuration. This is because rclone only needs the password to authenticate you once. You specified My Library during the configuration. The root of the remote is pointing at the root of the library My Library: See all files in the library: rclone lsd seafile: Create a new directory inside the library rclone mkdir seafile:directory List the contents of a directory rclone ls seafile:directory Sync /home/local/directory to the remote library, deleting any excess files in the library. rclone sync -i /home/local/directory seafile: --fast-list Seafile version 7+ supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x Restricted filename characters In addition to the default restricted characters set the following characters are also replaced: Character Value Replacement ----------- ------- ------------- / 0x2F ／ " 0x22 ＂ \ 0x5C ＼ Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Seafile and rclone link Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory: rclone link seafile:seafile-tutorial.doc http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/ or if run on a directory you will get: rclone link seafile:dir http://my.seafile.server/d/9ea2455f6f55478bbb0d/ Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will get the exact same link. Compatibility It has been actively tested using the seafile docker image of these versions: - 6.3.4 community edition - 7.0.5 community edition - 7.1.3 community edition Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly. Standard Options Here are the standard options specific to seafile (seafile). --seafile-url URL of seafile host to connect to - Config: url - Env Var: RCLONE_SEAFILE_URL - Type: string - Default: "" - Examples: - "https://cloud.seafile.com/" - Connect to cloud.seafile.com --seafile-user User name (usually email address) - Config: user - Env Var: RCLONE_SEAFILE_USER - Type: string - Default: "" --seafile-pass Password NB Input to this must be obscured - see rclone obscure.
- Config: pass - Env Var: RCLONE_SEAFILE_PASS - Type: string - Default: "" --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) - Config: 2fa - Env Var: RCLONE_SEAFILE_2FA - Type: bool - Default: false --seafile-library Name of the library. Leave blank to access all non-encrypted libraries. - Config: library - Env Var: RCLONE_SEAFILE_LIBRARY - Type: string - Default: "" --seafile-library-key Library password (for encrypted libraries only). Leave blank if you pass it through the command line. NB Input to this must be obscured - see rclone obscure. - Config: library_key - Env Var: RCLONE_SEAFILE_LIBRARY_KEY - Type: string - Default: "" --seafile-auth-token Authentication token - Config: auth_token - Env Var: RCLONE_SEAFILE_AUTH_TOKEN - Type: string - Default: "" Advanced Options Here are the advanced options specific to seafile (seafile). --seafile-create-library Should rclone create a library if it doesn't exist - Config: create_library - Env Var: RCLONE_SEAFILE_CREATE_LIBRARY - Type: bool - Default: false --seafile-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_SEAFILE_ENCODING - Type: MultiEncoder - Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8 SFTP SFTP is the Secure (or SSH) File Transfer Protocol. The SFTP backend can be used with a number of different providers: - C14 - rsync.net SFTP runs over SSH v2 and is installed as standard with most modern SSH installations. Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net, on the other hand, requires users to OMIT the leading /. Here is an example of making an SFTP configuration. First run rclone config This will guide you through an interactive setup process. No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / SSH/SFTP Connection \ "sftp" [snip] Storage> sftp SSH host to connect to Choose a number from below, or type in your own value 1 / Connect to example.com \ "example.com" host> example.com SSH username, leave blank for current username, ncw user> sftpuser SSH port, leave blank to use default (22) port> SSH password, leave blank to use ssh-agent. y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> n Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. key_file> Remote config -------------------- [remote] host = example.com user = sftpuser port = pass = key_file = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y This remote is called remote and can now be used like this: See all directories in the home directory rclone lsd remote: Make a new directory rclone mkdir remote:path/to/directory List the contents of a directory rclone ls remote:path/to/directory Sync /home/local/directory to the remote directory, deleting any excess files in the directory. rclone sync -i /home/local/directory remote:directory SSH Authentication The SFTP remote supports three authentication methods: - Password - Key file - ssh-agent Key files should be PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa.
Only unencrypted OpenSSH or PEM encrypted files are supported. The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line ('\n' or '\r\n') separating lines. i.e. key_pem = -----BEGIN RSA PRIVATE KEY-----0gAMbMbaSsd-----END RSA PRIVATE KEY----- This will generate it correctly for key_pem for use in the config: awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa If you don't specify pass, key_file, or key_pem then rclone will attempt to contact an ssh-agent. You can also specify key_use_agent to force the usage of an ssh-agent. In this case key_file or key_pem can also be specified to force the usage of a specific key in the ssh-agent. Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment. If you set the --sftp-ask-password option, rclone will prompt for a password when needed and no password has been configured. ssh-agent on macOS Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, eg eval `ssh-agent -s` && ssh-add -A And then at the end of the session eval `ssh-agent -k` These commands can be used in scripts of course. Modified time Modified times are stored on the server to 1 second precision. Modified times are used in syncing and are fully supported. Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your rclone backend configuration to disable this behaviour. Standard Options Here are the standard options specific to sftp (SSH/SFTP Connection). --sftp-host SSH host to connect to - Config: host - Env Var: RCLONE_SFTP_HOST - Type: string - Default: "" - Examples: - "example.com" - Connect to example.com --sftp-user SSH username, leave blank for current username, ncw - Config: user - Env Var: RCLONE_SFTP_USER - Type: string - Default: "" --sftp-port SSH port, leave blank to use default (22) - Config: port - Env Var: RCLONE_SFTP_PORT - Type: string - Default: "" --sftp-pass SSH password, leave blank to use ssh-agent. NB Input to this must be obscured - see rclone obscure. - Config: pass - Env Var: RCLONE_SFTP_PASS - Type: string - Default: "" --sftp-key-pem Raw PEM-encoded private key. If specified, it will override the key_file parameter. - Config: key_pem - Env Var: RCLONE_SFTP_KEY_PEM - Type: string - Default: "" --sftp-key-file Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}. - Config: key_file - Env Var: RCLONE_SFTP_KEY_FILE - Type: string - Default: "" --sftp-key-file-pass The passphrase to decrypt the PEM-encoded private key file. Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can't be used. NB Input to this must be obscured - see rclone obscure. - Config: key_file_pass - Env Var: RCLONE_SFTP_KEY_FILE_PASS - Type: string - Default: "" --sftp-key-use-agent When set forces the usage of the ssh-agent. When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is requested from the ssh-agent.
This allows rclone to avoid Too many authentication failures for *username* errors when the ssh-agent contains many keys. - Config: key_use_agent - Env Var: RCLONE_SFTP_KEY_USE_AGENT - Type: bool - Default: false --sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods. This enables the use of the following insecure ciphers and key exchange methods: - aes128-cbc - aes192-cbc - aes256-cbc - 3des-cbc - diffie-hellman-group-exchange-sha256 - diffie-hellman-group-exchange-sha1 Those algorithms are insecure and may allow plaintext data to be recovered by an attacker. - Config: use_insecure_cipher - Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER - Type: bool - Default: false - Examples: - "false" - Use default Cipher list. - "true" - Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. Leave blank or set to false to enable hashing (recommended), set to true to disable hashing. - Config: disable_hashcheck - Env Var: RCLONE_SFTP_DISABLE_HASHCHECK - Type: bool - Default: false Advanced Options Here are the advanced options specific to sftp (SSH/SFTP Connection). --sftp-ask-password Allow asking for SFTP password when needed. If this is set and no password is supplied then rclone will: - ask for a password - not contact the ssh agent - Config: ask_password - Env Var: RCLONE_SFTP_ASK_PASSWORD - Type: bool - Default: false --sftp-path-override Override path used by SSH connection. This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes. Shared folders can be found in directories representing volumes rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory Home directory can be found in a shared folder called "home" rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory - Config: path_override - Env Var: RCLONE_SFTP_PATH_OVERRIDE - Type: string - Default: "" --sftp-set-modtime Set the modified time on the remote if set. - Config: set_modtime - Env Var: RCLONE_SFTP_SET_MODTIME - Type: bool - Default: true --sftp-md5sum-command The command used to read md5 hashes. Leave blank for autodetect. - Config: md5sum_command - Env Var: RCLONE_SFTP_MD5SUM_COMMAND - Type: string - Default: "" --sftp-sha1sum-command The command used to read sha1 hashes. Leave blank for autodetect. - Config: sha1sum_command - Env Var: RCLONE_SFTP_SHA1SUM_COMMAND - Type: string - Default: "" --sftp-skip-links Set to skip any symlinks and any other non-regular files. - Config: skip_links - Env Var: RCLONE_SFTP_SKIP_LINKS - Type: bool - Default: false --sftp-subsystem Specifies the SSH2 subsystem on the remote host. - Config: subsystem - Env Var: RCLONE_SFTP_SUBSYSTEM - Type: string - Default: "sftp" --sftp-server-command Specifies the path or command to run a sftp server on the remote host. The subsystem option is ignored when server_command is defined. - Config: server_command - Env Var: RCLONE_SFTP_SERVER_COMMAND - Type: string - Default: "" Limitations SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default.
Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming. SFTP also supports about if the same login has shell access and df is in the remote's PATH. about will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about will fail if it does not have shell access or if df is not in the remote's PATH. Note that on some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea. The only ssh agent supported under Windows is Putty's pageant. The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found in this paper (http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf). SFTP isn't supported under plan9 until this issue is fixed. Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth Note that --timeout isn't supported (but --contimeout is). C14 C14 is supported through the SFTP backend. See C14's documentation rsync.net rsync.net is supported through the SFTP backend. See rsync.net's documentation of rclone examples. SugarSync SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing. The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Sugarsync \ "sugarsync" [snip] Storage> sugarsync ** See help for sugarsync backend at: https://rclone.org/sugarsync/ ** Sugarsync App ID. Leave blank to use rclone's. Enter a string value. Press Enter for the default (""). app_id> Sugarsync Access Key ID. Leave blank to use rclone's. Enter a string value. Press Enter for the default (""). access_key_id> Sugarsync Private Access Key Leave blank to use rclone's. Enter a string value. Press Enter for the default (""). private_access_key> Permanently delete files if true otherwise put them in the deleted files. Enter a boolean value (true or false). Press Enter for the default ("false"). hard_delete> Edit advanced config? (y/n) y) Yes n) No (default) y/n> n Remote config Username (email address)> nick@craig-wood.com Your Sugarsync password is only required during setup and will not be stored. password: -------------------- [remote] type = sugarsync refresh_token = https://api.sugarsync.com/app-authorization/XXXXXXXXXXXXXXXXXX -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y Note that the config asks for your email and password but doesn't store them, it only uses them to get the initial token.
Once configured you can then use rclone like this, List directories (sync folders) in top level of your SugarSync rclone lsd remote: List all the files in your SugarSync folder "Test" rclone ls remote:Test To copy a local directory to a SugarSync folder called backup rclone copy /home/source remote:backup Paths are specified as remote:path Paths may be as deep as required, eg remote:directory/subdirectory. NB you can't create files in the top level folder - you have to create a folder, which rclone will create as a "Sync Folder" with SugarSync. Modified time and hashes SugarSync does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work as rclone can read the time files were uploaded. Restricted filename characters SugarSync replaces the default restricted characters set except for DEL. Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings. Deleting files Deleted files will be moved to the "Deleted items" folder by default. However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away. Standard Options Here are the standard options specific to sugarsync (Sugarsync). --sugarsync-app-id Sugarsync App ID. Leave blank to use rclone's. - Config: app_id - Env Var: RCLONE_SUGARSYNC_APP_ID - Type: string - Default: "" --sugarsync-access-key-id Sugarsync Access Key ID. Leave blank to use rclone's. - Config: access_key_id - Env Var: RCLONE_SUGARSYNC_ACCESS_KEY_ID - Type: string - Default: "" --sugarsync-private-access-key Sugarsync Private Access Key Leave blank to use rclone's. - Config: private_access_key - Env Var: RCLONE_SUGARSYNC_PRIVATE_ACCESS_KEY - Type: string - Default: "" --sugarsync-hard-delete Permanently delete files if true otherwise put them in the deleted files. - Config: hard_delete - Env Var: RCLONE_SUGARSYNC_HARD_DELETE - Type: bool - Default: false Advanced Options Here are the advanced options specific to sugarsync (Sugarsync). --sugarsync-refresh-token Sugarsync refresh token Leave blank normally, will be auto configured by rclone. - Config: refresh_token - Env Var: RCLONE_SUGARSYNC_REFRESH_TOKEN - Type: string - Default: "" --sugarsync-authorization Sugarsync authorization Leave blank normally, will be auto configured by rclone. - Config: authorization - Env Var: RCLONE_SUGARSYNC_AUTHORIZATION - Type: string - Default: "" --sugarsync-authorization-expiry Sugarsync authorization expiry Leave blank normally, will be auto configured by rclone. - Config: authorization_expiry - Env Var: RCLONE_SUGARSYNC_AUTHORIZATION_EXPIRY - Type: string - Default: "" --sugarsync-user Sugarsync user Leave blank normally, will be auto configured by rclone. - Config: user - Env Var: RCLONE_SUGARSYNC_USER - Type: string - Default: "" --sugarsync-root-id Sugarsync root id Leave blank normally, will be auto configured by rclone. - Config: root_id - Env Var: RCLONE_SUGARSYNC_ROOT_ID - Type: string - Default: "" --sugarsync-deleted-id Sugarsync deleted folder id Leave blank normally, will be auto configured by rclone. - Config: deleted_id - Env Var: RCLONE_SUGARSYNC_DELETED_ID - Type: string - Default: "" --sugarsync-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info.
- Config: encoding - Env Var: RCLONE_SUGARSYNC_ENCODING - Type: MultiEncoder - Default: Slash,Ctl,InvalidUtf8,Dot Tardigrade Tardigrade is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner. Setup To make a new Tardigrade configuration you need one of the following: * Access Grant that someone else shared with you. * API Key of a Tardigrade project you are a member of. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: Setup with access grant No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Tardigrade Decentralized Cloud Storage \ "tardigrade" [snip] Storage> tardigrade ** See help for tardigrade backend at: https://rclone.org/tardigrade/ ** Choose an authentication method. Enter a string value. Press Enter for the default ("existing"). Choose a number from below, or type in your own value 1 / Use an existing access grant. \ "existing" 2 / Create a new access grant from satellite address, API key, and passphrase. \ "new" provider> existing Access Grant. Enter a string value. Press Enter for the default (""). access_grant> your-access-grant-received-by-someone-else Remote config -------------------- [remote] type = tardigrade access_grant = your-access-grant-received-by-someone-else -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y Setup with API key and passphrase No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Tardigrade Decentralized Cloud Storage \ "tardigrade" [snip] Storage> tardigrade ** See help for tardigrade backend at: https://rclone.org/tardigrade/ ** Choose an authentication method. Enter a string value. Press Enter for the default ("existing"). Choose a number from below, or type in your own value 1 / Use an existing access grant. \ "existing" 2 / Create a new access grant from satellite address, API key, and passphrase. \ "new" provider> new Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`. Enter a string value. Press Enter for the default ("us-central-1.tardigrade.io"). Choose a number from below, or type in your own value 1 / US Central 1 \ "us-central-1.tardigrade.io" 2 / Europe West 1 \ "europe-west-1.tardigrade.io" 3 / Asia East 1 \ "asia-east-1.tardigrade.io" satellite_address> 1 API Key. Enter a string value. Press Enter for the default (""). api_key> your-api-key-for-your-tardigrade-project Encryption Passphrase. To access existing objects enter passphrase used for uploading. Enter a string value. Press Enter for the default (""). passphrase> your-human-readable-encryption-passphrase Remote config -------------------- [remote] type = tardigrade satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777 api_key = your-api-key-for-your-tardigrade-project passphrase = your-human-readable-encryption-passphrase access_grant = the-access-grant-generated-from-the-api-key-and-passphrase -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y Usage Paths are specified as remote:bucket (or remote: for the lsf command.) You may put subdirectories in too, eg remote:bucket/path/to/dir. Once configured you can then use rclone like this. Create a new bucket Use the mkdir command to create a new bucket, e.g. bucket. rclone mkdir remote:bucket List all buckets Use the lsf command to list all buckets. rclone lsf remote: Note the colon (:) character at the end of the command line. Delete a bucket Use the rmdir command to delete an empty bucket. rclone rmdir remote:bucket Use the purge command to delete a non-empty bucket with all its content. rclone purge remote:bucket Upload objects Use the copy command to upload an object. rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/ The --progress flag is for displaying progress information. Remove it if you don't need this information. Use a folder in the local path to upload all its objects. rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/ Only modified files will be copied. List objects Use the ls command to list recursively all objects in a bucket. rclone ls remote:bucket Add the folder to the remote path to list recursively all objects in this folder. rclone ls remote:bucket/path/to/dir/ Use the lsf command to list non-recursively all objects in a bucket or a folder. rclone lsf remote:bucket/path/to/dir/ Download objects Use the copy command to download an object. rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/ The --progress flag is for displaying progress information. Remove it if you don't need this information. Use a folder in the remote path to download all its objects. rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/ Delete objects Use the deletefile command to delete a single object. rclone deletefile remote:bucket/path/to/dir/file.ext Use the delete command to delete all objects in a folder. rclone delete remote:bucket/path/to/dir/ Print the total size of objects Use the size command to print the total size of objects in a bucket or a folder. rclone size remote:bucket/path/to/dir/ Sync two Locations Use the sync command to sync the source to the destination, changing the destination only, deleting any excess files. rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/ The --progress flag is for displaying progress information. Remove it if you don't need this information.
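To be clear about what sync does: it makes the destination match the source exactly, so a file which exists only in the destination will be deleted from it. As an illustrative sketch (using the same hypothetical paths as above), after rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/ any object under remote:bucket/path/to/dir/ which has no counterpart under /home/local/directory/ will be removed.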
Since this can cause data loss, test first with the --dry-run flag to see exactly what would be copied and deleted. The sync can also be done from Tardigrade to the local file system. rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/ Or between two Tardigrade buckets. rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/ Or even between another cloud storage and Tardigrade. rclone sync -i --progress s3:bucket/path/to/dir/ tardigrade:bucket/path/to/dir/ Standard Options Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage). --tardigrade-provider Choose an authentication method. - Config: provider - Env Var: RCLONE_TARDIGRADE_PROVIDER - Type: string - Default: "existing" - Examples: - "existing" - Use an existing access grant. - "new" - Create a new access grant from satellite address, API key, and passphrase. --tardigrade-access-grant Access Grant. - Config: access_grant - Env Var: RCLONE_TARDIGRADE_ACCESS_GRANT - Type: string - Default: "" --tardigrade-satellite-address Satellite Address. Custom satellite address should match the format: <nodeid>@<address>:<port>. - Config: satellite_address - Env Var: RCLONE_TARDIGRADE_SATELLITE_ADDRESS - Type: string - Default: "us-central-1.tardigrade.io" - Examples: - "us-central-1.tardigrade.io" - US Central 1 - "europe-west-1.tardigrade.io" - Europe West 1 - "asia-east-1.tardigrade.io" - Asia East 1 --tardigrade-api-key API Key. - Config: api_key - Env Var: RCLONE_TARDIGRADE_API_KEY - Type: string - Default: "" --tardigrade-passphrase Encryption Passphrase. To access existing objects enter passphrase used for uploading. - Config: passphrase - Env Var: RCLONE_TARDIGRADE_PASSPHRASE - Type: string - Default: "" Union The union remote provides a unification similar to UnionFS using other remotes. Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory. During the initial setup with rclone config you will specify the upstream remotes as a space separated list. The upstream remotes can either be local paths or other remotes. Attributes :ro and :nc can be attached to the end of the path to tag the remote as READ ONLY or NO CREATE, eg remote:directory/subdirectory:ro or remote:directory/subdirectory:nc. Subfolders can be used in upstream remotes. Assume a union remote named backup with the remotes mydrive:private/backup mydrive2:/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop and rclone mkdir mydrive2:/backup/desktop. There will be no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop and rclone mkdir mydrive2:/backup/../desktop. Behavior / Policies The behavior of union backend is inspired by trapexit/mergerfs. All functions are grouped into 3 categories: ACTION, CREATE and SEARCH. These functions and categories can be assigned a policy which dictates what file or directory is chosen when performing that behavior. Any policy can be assigned to a function or category though some may not be very useful in practice. For instance: RAND (random) may be useful for file creation (create) but could lead to very odd behavior if used for delete if there were more than one copy of the file. Function / Category classifications:

- action: Writing Existing file. Functions: move, rmdir, rmdirs, delete, purge and copy, sync (as destination when file exist).
- create: Create non-existing file. Functions: copy, sync (as destination when file not exist).
- search: Reading and listing file. Functions: ls, lsd, lsl, cat, md5sum, sha1sum and copy, sync (as source).
- N/A: size, about.

Path Preservation Policies, as described below, are of two basic types: path preserving and non-path preserving. All policies which start with ep (EPFF, EPLFS, EPLUS, EPMFS, EPRAND) are path preserving. ep stands for existing path. A path preserving policy will only consider upstreams where the relative path being accessed already exists. When using non-path preserving policies paths will be created in target upstreams as necessary. Quota Relevant Policies Some policies rely on quota information. These policies should be used only if your upstreams support the respective quota fields.

- lfs, eplfs: require the Free field
- mfs, epmfs: require the Free field
- lus, eplus: require the Used field
- lno, eplno: require the Objects field

To check if your upstream supports the field, run rclone about remote: [flags] and see if the required field exists.
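For example, the output of rclone about might look like this (an illustrative sketch - the exact fields and values depend on the backend):

$ rclone about remote:
Total:   17G
Used:    7.924G
Free:    1.992G
Objects: 115674

Here the Free, Used and Objects fields are all present, so any of the quota relevant policies above could be used with this upstream; if a field is missing from the output, avoid the policies that require it.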
Filters Policies basically search upstream remotes and create a list of files / paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting but filtering is mostly uniform as described below. - No SEARCH policies filter. - All ACTION policies will filter out remotes which are tagged as READ-ONLY. - All CREATE policies will filter out remotes which are tagged READ-ONLY or NO-CREATE. If all remotes are filtered an error will be returned. Policy descriptions The policy definitions are inspired by trapexit/mergerfs but are not exactly the same. Some policy definitions could be different due to the much larger latency of remote file systems.

- all: Search category: same as EPALL. Action category: same as EPALL. Create category: act on all upstreams.
- epall (existing path, all): Search category: Given this order configured, act on the first one found where the relative path exists. Action category: apply to all found. Create category: act on all upstreams where the relative path exists.
- epff (existing path, first found): Act on the first one found, by the time upstreams reply, where the relative path exists.
- eplfs (existing path, least free space): Of all the upstreams on which the relative path exists choose the one with the least free space.
- eplus (existing path, least used space): Of all the upstreams on which the relative path exists choose the one with the least used space.
- eplno (existing path, least number of objects): Of all the upstreams on which the relative path exists choose the one with the least number of objects.
- epmfs (existing path, most free space): Of all the upstreams on which the relative path exists choose the one with the most free space.
- eprand (existing path, random): Calls EPALL and then randomizes. Returns only one upstream.
- ff (first found): Search category: same as EPFF. Action category: same as EPFF. Create category: Act on the first one found by the time upstreams reply.
- lfs (least free space): Search category: same as EPLFS. Action category: same as EPLFS. Create category: Pick the upstream with the least available free space.
- lus (least used space): Search category: same as EPLUS. Action category: same as EPLUS. Create category: Pick the upstream with the least used space.
- lno (least number of objects): Search category: same as EPLNO. Action category: same as EPLNO. Create category: Pick the upstream with the least number of objects.
- mfs (most free space): Search category: same as EPMFS. Action category: same as EPMFS. Create category: Pick the upstream with the most available free space.
- newest: Pick the file / directory with the largest mtime.
- rand (random): Calls ALL and then randomizes. Returns only one upstream.

Setup Here is an example of how to make a union called remote for local folders. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Union merges the contents of several remotes \ "union" [snip] Storage> union List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc.
Enter a string value. Press Enter for the default (""). upstreams> Policy to choose upstream on ACTION class. Enter a string value. Press Enter for the default ("epall"). action_policy> Policy to choose upstream on CREATE class. Enter a string value. Press Enter for the default ("epmfs"). create_policy> Policy to choose upstream on SEARCH class. Enter a string value. Press Enter for the default ("ff"). search_policy> Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. Enter a signed integer. Press Enter for the default ("120"). cache_time> Remote config -------------------- [remote] type = union upstreams = C:\dir1 C:\dir2 C:\dir3 -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Current remotes: Name Type ==== ==== remote union e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> q Once configured you can then use rclone like this, List directories in top level in C:\dir1, C:\dir2 and C:\dir3 rclone lsd remote: List all the files in C:\dir1, C:\dir2 and C:\dir3 rclone ls remote: Copy another local directory to the union directory called source, which will be placed into C:\dir3 rclone copy C:\source remote:source Standard Options Here are the standard options specific to union (Union merges the contents of several upstream fs). --union-upstreams List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc. - Config: upstreams - Env Var: RCLONE_UNION_UPSTREAMS - Type: string - Default: "" --union-action-policy Policy to choose upstream on ACTION category. - Config: action_policy - Env Var: RCLONE_UNION_ACTION_POLICY - Type: string - Default: "epall" --union-create-policy Policy to choose upstream on CREATE category. - Config: create_policy - Env Var: RCLONE_UNION_CREATE_POLICY - Type: string - Default: "epmfs" --union-search-policy Policy to choose upstream on SEARCH category. - Config: search_policy - Env Var: RCLONE_UNION_SEARCH_POLICY - Type: string - Default: "ff" --union-cache-time Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. - Config: cache_time - Env Var: RCLONE_UNION_CACHE_TIME - Type: int - Default: 120 WebDAV Paths are specified as remote:path Paths may be as deep as required, eg remote:directory/subdirectory. To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features. Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Webdav \ "webdav" [snip] Storage> webdav URL of http host to connect to Choose a number from below, or type in your own value 1 / Connect to example.com \ "https://example.com" url> https://example.com/remote.php/webdav/ Name of the Webdav site/service/software you are using Choose a number from below, or type in your own value 1 / Nextcloud \ "nextcloud" 2 / Owncloud \ "owncloud" 3 / Sharepoint \ "sharepoint" 4 / Other site/service or software \ "other" vendor> 1 User name user> user Password. 
y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> y Enter the password: password: Confirm the password: password: Bearer token instead of user/pass (eg a Macaroon) bearer_token> Remote config -------------------- [remote] type = webdav url = https://example.com/remote.php/webdav/ vendor = nextcloud user = user pass = *** ENCRYPTED *** bearer_token = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Once configured you can then use rclone like this, List directories in top level of your WebDAV rclone lsd remote: List all the files in your WebDAV rclone ls remote: To copy a local directory to a WebDAV directory called backup rclone copy /home/source remote:backup Modified time and hashes Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times. Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them. Standard Options Here are the standard options specific to webdav (Webdav). --webdav-url URL of http host to connect to - Config: url - Env Var: RCLONE_WEBDAV_URL - Type: string - Default: "" - Examples: - "https://example.com" - Connect to example.com --webdav-vendor Name of the Webdav site/service/software you are using - Config: vendor - Env Var: RCLONE_WEBDAV_VENDOR - Type: string - Default: "" - Examples: - "nextcloud" - Nextcloud - "owncloud" - Owncloud - "sharepoint" - Sharepoint - "other" - Other site/service or software --webdav-user User name - Config: user - Env Var: RCLONE_WEBDAV_USER - Type: string - Default: "" --webdav-pass Password. NB Input to this must be obscured - see rclone obscure. - Config: pass - Env Var: RCLONE_WEBDAV_PASS - Type: string - Default: "" --webdav-bearer-token Bearer token instead of user/pass (eg a Macaroon) - Config: bearer_token - Env Var: RCLONE_WEBDAV_BEARER_TOKEN - Type: string - Default: "" Advanced Options Here are the advanced options specific to webdav (Webdav). --webdav-bearer-token-command Command to run to get a bearer token - Config: bearer_token_command - Env Var: RCLONE_WEBDAV_BEARER_TOKEN_COMMAND - Type: string - Default: "" Provider notes See below for notes on specific providers. Owncloud Click on the settings cog in the bottom right of the page and this will show the WebDAV URL that rclone needs in the config step. It will look something like https://example.com/remote.php/webdav/. Owncloud supports modified times using the X-OC-Mtime header. Nextcloud This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (rcat) whereas Owncloud does. This may be fixed in the future. Sharepoint Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education Accounts. This feature is only needed for a few of these Accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner (github #1975). This means that these accounts can't be added using the official API (other Accounts should work with the "onedrive" option). However, it is possible to access them using webdav.
To use a sharepoint remote with rclone, add it like this: First, you need to get your remote's URL: - Go here to open your OneDrive or to sign in - Now take a look at your address bar, the URL should look like this: https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx You'll only need this URL up to the email address. After that, you'll most likely want to add "/Documents". That subdirectory contains the actual data stored on your OneDrive. Add the remote to rclone like this: Configure the url as https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents and use your normal account email and password for user and pass. If you have 2FA enabled, you have to generate an app password. Set the vendor to sharepoint. Your config file should look like this: [sharepoint] type = webdav url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents vendor = sharepoint user = YourEmailAddress pass = encryptedpassword Required Flags for SharePoint As SharePoint does some special things with uploaded documents, you won't be able to use the document's size or the document's hash to compare if a file has been changed since the upload / which file is newer. For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure Rclone uses the "Last Modified" datetime property to compare your documents: --ignore-size --ignore-checksum --update dCache dCache is a storage system that supports many protocols and authentication/authorisation schemes. For WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos, and various bearer tokens, including Macaroons and OpenID-Connect access tokens. Configure as normal using the other type. Don't enter a username or password, instead enter your Macaroon as the bearer_token. The config will end up looking something like this. [dcache] type = webdav url = https://dcache... vendor = other user = pass = bearer_token = your-macaroon There is a script that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file. Macaroons may also be obtained from the dCacheView web-browser/JavaScript client that comes with dCache. OpenID-Connect dCache also supports authenticating with OpenID-Connect access tokens. OpenID-Connect is a protocol (based on OAuth 2.0) that allows services to identify users who have authenticated with some central service. Support for OpenID-Connect in rclone is currently achieved using another software package called oidc-agent. This is a command-line tool that facilitates obtaining an access token. Once installed and configured, an access token is obtained by running the oidc-token command. The following example shows a (shortened) access token obtained from the XDC OIDC Provider. paul@celebrimbor:~$ oidc-token XDC eyJraWQ[...]QFXDt0 paul@celebrimbor:~$ NOTE Before the oidc-token command will work, the refresh token must be loaded into the oidc agent. This is done with the oidc-add command (e.g., oidc-add XDC). This is typically done once per login session. Full details on this and how to register oidc-agent with your OIDC Provider are provided in the oidc-agent documentation. The rclone bearer_token_command configuration option is used to fetch the access token from oidc-agent. Configure as a normal WebDAV endpoint, using the 'other' vendor, leaving the username and password empty.
When prompted, choose to edit the advanced config and enter the command to get a bearer token (e.g., oidc-token XDC). The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the XDC OIDC Provider. [dcache] type = webdav url = https://dcache.example.org/ vendor = other bearer_token_command = oidc-token XDC Yandex Disk Yandex Disk is a cloud storage solution created by Yandex. Here is an example of making a yandex configuration. First run rclone config This will guide you through an interactive setup process: No remotes found - make a new one n) New remote s) Set configuration password n/s> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Yandex Disk \ "yandex" [snip] Storage> yandex Yandex Client Id - leave blank normally. client_id> Yandex Client Secret - leave blank normally. client_secret> Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] client_id = client_secret = token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y See the remote setup docs for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall. Once configured you can then use rclone like this, See top level directories rclone lsd remote: Make a new directory rclone mkdir remote:directory List the contents of a directory rclone ls remote:directory Sync /home/local/directory to the remote path, deleting any excess files in the path. rclone sync -i /home/local/directory remote:directory Yandex paths may be as deep as required, eg remote:directory/subdirectory. Modified time Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format. MD5 checksums MD5 checksums are natively supported by Yandex Disk. Emptying Trash If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments. Quota information To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage. Restricted filename characters The default restricted characters set is replaced. Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Limitations When uploading very large files (bigger than about 5GB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening.
Setting the timeout to twice the maximum file size in GB should be enough, so if you want to upload a 30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m. Standard Options Here are the standard options specific to yandex (Yandex Disk). --yandex-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_YANDEX_CLIENT_ID - Type: string - Default: "" --yandex-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_YANDEX_CLIENT_SECRET - Type: string - Default: "" Advanced Options Here are the advanced options specific to yandex (Yandex Disk). --yandex-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_YANDEX_TOKEN - Type: string - Default: "" --yandex-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_YANDEX_AUTH_URL - Type: string - Default: "" --yandex-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_YANDEX_TOKEN_URL - Type: string - Default: "" --yandex-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_YANDEX_ENCODING - Type: MultiEncoder - Default: Slash,Del,Ctl,InvalidUtf8,Dot Local Filesystem Local paths are specified as normal filesystem paths, eg /path/to/wherever, so rclone sync -i /home/source /tmp/destination Will sync /home/source to /tmp/destination These can be configured into the config file for consistency's sake, but it is probably easier not to. Modified time Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 second on OS X. Filenames Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X. There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded file names. If you are using an old Linux filesystem with non UTF-8 file names (eg latin1) then you can use the convmv tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers. If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with a quoted representation of the invalid bytes. The name gro\xdf will be transferred as gro‛DF. rclone will emit a debug message in this case (use -v to see), eg Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf" Restricted characters On non Windows platforms the following characters are replaced when handling file names.

Character   Value   Replacement
NUL         0x00    ␀
/           0x2F    ／

When running on Windows the following characters are replaced. This list is based on the Windows file naming conventions.

Character   Value   Replacement
NUL         0x00    ␀
SOH         0x01    ␁
STX         0x02    ␂
ETX         0x03    ␃
EOT         0x04    ␄
ENQ         0x05    ␅
ACK         0x06    ␆
BEL         0x07    ␇
BS          0x08    ␈
HT          0x09    ␉
LF          0x0A    ␊
VT          0x0B    ␋
FF          0x0C    ␌
CR          0x0D    ␍
SO          0x0E    ␎
SI          0x0F    ␏
DLE         0x10    ␐
DC1         0x11    ␑
DC2         0x12    ␒
DC3         0x13    ␓
DC4         0x14    ␔
NAK         0x15    ␕
SYN         0x16    ␖
ETB         0x17    ␗
CAN         0x18    ␘
EM          0x19    ␙
SUB         0x1A    ␚
ESC         0x1B    ␛
FS          0x1C    ␜
GS          0x1D    ␝
RS          0x1E    ␞
US          0x1F    ␟
/           0x2F    ／
"           0x22    ＂
*           0x2A    ＊
:           0x3A    ：
<           0x3C    ＜
>           0x3E    ＞
?           0x3F    ？
\           0x5C    ＼
|           0x7C    ｜

File names on Windows can also not end with the following characters. These only get replaced if they are the last character in the name:

Character   Value   Replacement
SP          0x20    ␠
.           0x2E    ．
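As a sketch of what this replacement looks like in practice (the file name here is hypothetical), copying a file called time 12:30.txt from a remote to a Windows disk, eg with rclone copy "remote:time 12:30.txt" C:\dest, will create it locally as time 12：30.txt, using the fullwidth colon from the table above. The mapping is reversed when the file is copied off the disk again, so names round-trip cleanly between backends.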
Invalid UTF-8 bytes will also be replaced, as they can't be converted to UTF-16. Long paths on Windows Rclone handles long paths automatically, by converting all paths to long UNC paths which allows paths up to 32,767 characters. This is why you will see that your paths, for instance c:\files is converted to the UNC path \\?\c:\files in the output, and \\server\share is converted to \\?\UNC\server\share. However, in rare cases this may cause problems with buggy file system drivers like EncFS. To disable UNC conversion globally, add this to your .rclone.conf file: [local] nounc = true If you want to selectively disable UNC, you can add it to a separate entry like this: [nounc] type = local nounc = true And use rclone like this: rclone copy c:\src nounc:z:\dst This will use UNC paths on c:\src but not on z:\dst. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to. Symlinks / Junction points Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows). If you supply --copy-links or -L then rclone will follow the symlink and copy the pointed to file or directory. Note that this flag is incompatible with --links / -l. This flag applies to all commands. For example, supposing you have a directory structure like this

$ tree /tmp/a
/tmp/a
├── b -> ../b
├── expected -> ../expected
├── one
└── two
    └── three

Then you can see the difference with and without the flag like this

$ rclone ls /tmp/a
        6 one
        6 two/three

and

$ rclone -L ls /tmp/a
     4174 expected
        6 one
        6 two/three
        6 b/two
        6 b/one

--links, -l Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows). If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a '.rclonelink' suffix in the remote storage. The text file will contain the target of the symbolic link (see example). This flag applies to all commands. For example, supposing you have a directory structure like this

$ tree /tmp/a
/tmp/a
├── file1 -> ./file4
└── file2 -> /home/user/file3

Copying the entire directory with '-l'

$ rclone copyto -l /tmp/a/ remote:/tmp/a/

The remote files are created with a '.rclonelink' suffix

$ rclone ls remote:/tmp/a
       5 file1.rclonelink
      14 file2.rclonelink

The remote files will contain the target of the symbolic links

$ rclone cat remote:/tmp/a/file1.rclonelink
./file4

$ rclone cat remote:/tmp/a/file2.rclonelink
/home/user/file3

Copying them back with '-l'

$ rclone copyto -l remote:/tmp/a/ /tmp/b/

$ tree /tmp/b
/tmp/b
├── file1 -> ./file4
└── file2 -> /home/user/file3

However, if copied back without '-l'

$ rclone copyto remote:/tmp/a/ /tmp/b/

$ tree /tmp/b
/tmp/b
├── file1.rclonelink
└── file2.rclonelink

Note that this flag is incompatible with --copy-links / -L. Restricting filesystems with --one-file-system Normally rclone will recurse through filesystems as mounted. However if you set --one-file-system or -x this tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems. For example if you have a directory hierarchy like this

root
├── disk1 - disk1 mounted on the root
│   └── file3 - stored on disk1
├── disk2 - disk2 mounted on the root
│   └── file4 - stored on disk2
├── file1 - stored on the root disk
└── file2 - stored on the root disk

Using rclone --one-file-system copy root remote: will only copy file1 and file2.
Eg $ rclone -q --one-file-system ls root 0 file1 0 file2 $ rclone -q ls root 0 disk1/file3 0 disk2/file4 0 file1 0 file2 NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem. NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will be ignored. Standard Options Here are the standard options specific to local (Local Disk). --local-nounc Disable UNC (long path names) conversion on Windows - Config: nounc - Env Var: RCLONE_LOCAL_NOUNC - Type: string - Default: "" - Examples: - "true" - Disables long file names Advanced Options Here are the advanced options specific to local (Local Disk). --copy-links / -L Follow symlinks and copy the pointed to item. - Config: copy_links - Env Var: RCLONE_LOCAL_COPY_LINKS - Type: bool - Default: false --links / -l Translate symlinks to/from regular files with a '.rclonelink' extension - Config: links - Env Var: RCLONE_LOCAL_LINKS - Type: bool - Default: false --skip-links Don't warn about skipped symlinks. This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped. - Config: skip_links - Env Var: RCLONE_LOCAL_SKIP_LINKS - Type: bool - Default: false --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead. - Config: no_unicode_normalization - Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION - Type: bool - Default: false --local-no-check-updated Don't check to see if the files change during upload Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts "can't copy - source file is being updated" if the file changes during upload. However on some file systems this modification time check may fail (eg Glusterfs #2206) so this check can be disabled with this flag. If this flag is set, rclone will use its best efforts to transfer a file which is being updated. If the file is only having things appended to it (eg a log) then rclone will transfer the log file with the size it had the first time rclone saw it. If the file is being modified throughout (not just appended to) then the transfer may fail with a hash check failure. In detail, once the file has had stat() called on it for the first time we: - Only transfer the size that stat gave - Only checksum the size that stat gave - Don't update the stat info for the file - Config: no_check_updated - Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED - Type: bool - Default: false --one-file-system / -x Don't cross filesystem boundaries (unix/macOS only). - Config: one_file_system - Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM - Type: bool - Default: false --local-case-sensitive Force the filesystem to report itself as case sensitive. Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice. - Config: case_sensitive - Env Var: RCLONE_LOCAL_CASE_SENSITIVE - Type: bool - Default: false --local-case-insensitive Force the filesystem to report itself as case insensitive Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice. 
- Config: case_insensitive - Env Var: RCLONE_LOCAL_CASE_INSENSITIVE - Type: bool - Default: false --local-no-sparse Disable sparse files for multi-thread downloads On Windows platforms rclone will make sparse files when doing multi-thread downloads. This avoids long pauses on large files where the OS zeros the file. However sparse files may be undesirable as they cause disk fragmentation and can be slow to work with. - Config: no_sparse - Env Var: RCLONE_LOCAL_NO_SPARSE - Type: bool - Default: false --local-no-set-modtime Disable setting modtime Normally rclone updates modification time of files after they are done uploading. This can cause permissions issues on Linux platforms when the user rclone is running as does not own the file uploaded, such as when copying to a CIFS mount owned by another user. If this option is enabled, rclone will no longer update the modtime after copying a file. - Config: no_set_modtime - Env Var: RCLONE_LOCAL_NO_SET_MODTIME - Type: bool - Default: false --local-encoding This sets the encoding for the backend. See: the encoding section in the overview for more info. - Config: encoding - Env Var: RCLONE_LOCAL_ENCODING - Type: MultiEncoder - Default: Slash,Dot Backend commands Here are the commands specific to the local backend. Run them with rclone backend COMMAND remote: The help below will explain what arguments each command takes. See the "rclone backend" command for more info on how to pass options and arguments. These can be run on a running backend using the rc command backend/command. noop A null operation for testing backend commands rclone backend noop remote: [options] [<arguments>+] This is a test command which has some options you can try to change the output. Options: - "echo": echo the input arguments - "error": return an error based on option value CHANGELOG v1.53.3 - 2020-11-19 See commits - Bug Fixes - random: Fix incorrect use of math/rand instead of crypto/rand CVE-2020-28924 (Nick Craig-Wood) - Passwords you have generated with rclone config may be insecure - See issue #4783 for more details and a checking tool - random: Seed math/rand in one place with crypto strong seed (Nick Craig-Wood) - VFS - Fix vfs/refresh calls with fs= parameter (Nick Craig-Wood) - Sharefile - Fix backend due to API swapping integers for strings (Nick Craig-Wood) v1.53.2 - 2020-10-26 See commits - Bug Fixes - accounting - Fix incorrect speed and transferTime in core/stats (Nick Craig-Wood) - Stabilize display order of transfers on Windows (Nick Craig-Wood) - operations - Fix use of --suffix without --backup-dir (Nick Craig-Wood) - Fix spurious "--checksum is in use but the source and destination have no hashes in common" (Nick Craig-Wood) - build - Work around GitHub actions brew problem (Nick Craig-Wood) - Stop using set-env and set-path in the GitHub actions (Nick Craig-Wood) - Mount - mount2: Fix the swapped UID / GID values (Russell Cattelan) - VFS - Detect and recover from a file being removed externally from the cache (Nick Craig-Wood) - Fix a deadlock vulnerability in downloaders.Close (Leo Luan) - Fix a race condition in retryFailedResets (Leo Luan) - Fix missed concurrency control between some item operations and reset (Leo Luan) - Add exponential backoff during ENOSPC retries (Leo Luan) - Add a missed update of used cache space (Leo Luan) - Fix --no-modtime to not attempt to set modtimes (as documented) (Nick Craig-Wood) - Local - Fix sizes and syncing with --links option on Windows (Nick Craig-Wood) - Chunker - Disable ListR to fix missing files on GDrive (workaround)
(Ivan Andreev) - Fix upload over crypt (Ivan Andreev) - Fichier - Increase maximum file size from 100GB to 300GB (gyutw) - Jottacloud - Remove clientSecret from config when upgrading to token based authentication (buengese) - Avoid double url escaping of device/mountpoint (albertony) - Remove DirMove workaround as it's not required anymore - also (buengese) - Mailru - Fix uploads after recent changes on server (Ivan Andreev) - Fix range requests after june changes on server (Ivan Andreev) - Fix invalid timestamp on corrupted files (fixes) (Ivan Andreev) - Onedrive - Fix disk usage for sharepoint (Nick Craig-Wood) - S3 - Add missing regions for AWS (Anagh Kumar Baranwal) - Seafile - Fix accessing libraries > 2GB on 32 bit systems (Muffin King) - SFTP - Always convert the checksum to lower case (buengese) - Union - Create root directories if none exist (Nick Craig-Wood) v1.53.1 - 2020-09-13 See commits - Bug Fixes - accounting: Remove new line from end of --stats-one-line display (Nick Craig-Wood) - check - Add back missing --download flag (Nick Craig-Wood) - Fix docs (Nick Craig-Wood) - docs - Note --log-file does append (Nick Craig-Wood) - Add full stops for consistency in rclone --help (edwardxml) - Add Tencent COS to s3 provider list (wjielai) - Updated mount command to reflect that it requires Go 1.13 or newer (Evan Harris) - jottacloud: Mention that uploads from local disk will not need to cache files to disk for md5 calculation (albertony) - Fix formatting of rc docs page (Nick Craig-Wood) - build - Include vendor tar ball in release and fix startdev (Nick Craig-Wood) - Fix "Illegal instruction" error for ARMv6 builds (Nick Craig-Wood) - Fix architecture name in ARMv7 build (Nick Craig-Wood) - VFS - Fix spurious error "vfs cache: failed to _ensure cache EOF" (Nick Craig-Wood) - Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood) - Local - Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood) - Drive - Re-adds special oauth help text (Tim Gallant) - Opendrive - Do not retry 400 errors (Evan Harris) v1.53.0 - 2020-09-02 See commits - New Features - The VFS layer was heavily reworked for this release - see below for more details - Interactive mode -i/--interactive for destructive operations (fishbullet) - Add --bwlimit-file flag to limit speeds of individual file transfers (Nick Craig-Wood) - Transfers are sorted by start time in the stats and progress output (Max Sum) - Make sure backends expand ~ and environment vars in file names they use (Nick Craig-Wood) - Add --refresh-times flag to set modtimes on hashless backends (Nick Craig-Wood) - build - Remove vendor directory in favour of Go modules (Nick Craig-Wood) - Build with go1.15.x by default (Nick Craig-Wood) - Drop macOS 386 build as it is no longer supported by go1.15 (Nick Craig-Wood) - Add ARMv7 to the supported builds (Nick Craig-Wood) - Enable rclone cmount on macOS (Nick Craig-Wood) - Make rclone build with gccgo (Nick Craig-Wood) - Make rclone build with wasm (Nick Craig-Wood) - Change beta numbering to be semver compatible (Nick Craig-Wood) - Add file properties and icon to Windows executable (albertony) - Add experimental interface for integrating rclone into browsers (Nick Craig-Wood) - lib: Add file name compression (Klaus Post) - rc - Allow installation and use of plugins and test plugins with rclone-webui (Chaitanya Bankanhal) - Add reverse proxy pluginsHandler for serving plugins (Chaitanya Bankanhal) - Add mount/listmounts option for listing current mounts (Chaitanya Bankanhal) - 
Add operations/uploadfile to upload a file through rc using encoding multipart/form-data (Chaitanya Bankanhal) - Add core/command to execute rclone terminal commands. (Chaitanya Bankanhal) - rclone check - Add reporting of filenames for same/missing/changed (Nick Craig-Wood) - Make check command obey --dry-run/-i/--interactive (Nick Craig-Wood) - Make check do --checkers files concurrently (Nick Craig-Wood) - Retry downloads if they fail when using the --download flag (Nick Craig-Wood) - Make it show stats by default (Nick Craig-Wood) - rclone obscure: Allow obscure command to accept password on STDIN (David Ibarra) - rclone config - Set RCLONE_CONFIG_DIR for use in config files and subprocesses (Nick Craig-Wood) - Reject remote names starting with a dash. (jtagcat) - rclone cryptcheck: Add reporting of filenames for same/missing/changed (Nick Craig-Wood) - rclone dedupe: Make it obey the --size-only flag for duplicate detection (Nick Craig-Wood) - rclone link: Add --expire and --unlink flags (Roman Kredentser) - rclone mkdir: Warn when using mkdir on remotes which can't have empty directories (Nick Craig-Wood) - rclone rc: Allow JSON parameters to simplify command line usage (Nick Craig-Wood) - rclone serve ftp - Don't compile on < go1.13 after dependency update (Nick Craig-Wood) - Add error message if auth proxy fails (Nick Craig-Wood) - Use refactored goftp.io/server library for binary shrink (Nick Craig-Wood) - rclone serve restic: Expose interfaces so that rclone can be used as a library from within restic (Jack) - rclone sync: Add --track-renames-strategy leaf (Nick Craig-Wood) - rclone touch: Add ability to set nanosecond resolution times (Nick Craig-Wood) - rclone tree: Remove -i shorthand for --noindent as it conflicts with -i/--interactive (Nick Craig-Wood) - Bug Fixes - accounting - Fix documentation for speed/speedAvg (Nick Craig-Wood) - Fix elapsed time not showing actual time since beginning (Chaitanya Bankanhal) - Fix deadlock in stats printing (Nick Craig-Wood) - build - Fix file handle leak in GitHub release tool (Garrett Squire) - rclone check: Fix successful retries with --download counting errors (Nick Craig-Wood) - rclone dedupe: Fix logging to be easier to understand (Nick Craig-Wood) - Mount - Warn macOS users that mount implementation is changing (Nick Craig-Wood) - to test the new implementation use rclone cmount instead of rclone mount - this is because the library rclone uses has dropped macOS support - rc interface - Add call for unmount all (Chaitanya Bankanhal) - Make mount/mount remote control take vfsOpt option (Nick Craig-Wood) - Add mountOpt to mount/mount (Nick Craig-Wood) - Add VFS and Mount options to mount/listmounts (Nick Craig-Wood) - Catch panics in cgofuse initialization and turn into error messages (Nick Craig-Wood) - Always supply stat information in Readdir (Nick Craig-Wood) - Add support for reading unknown length files using direct IO (Windows) (Nick Craig-Wood) - Fix On Windows don't add -o uid/gid=-1 if user supplies -o uid/gid.
(Nick Craig-Wood) - Fix macOS losing directory contents in cmount (Nick Craig-Wood) - Fix volume name broken in recent refactor (Nick Craig-Wood) - VFS - Implement partial reads for --vfs-cache-mode full (Nick Craig-Wood) - Add --vfs-writeback option to delay writes back to cloud storage (Nick Craig-Wood) - Add --vfs-read-ahead parameter for use with --vfs-cache-mode full (Nick Craig-Wood) - Restart pending uploads on restart of the cache (Nick Craig-Wood) - Support synchronous cache space recovery upon ENOSPC (Leo Luan) - Allow ReadAt and WriteAt to run concurrently with themselves (Nick Craig-Wood) - Change modtime of file before upload to current (Rob Calistri) - Recommend --vfs-cache-modes writes on backends which can't stream (Nick Craig-Wood) - Add an optional fs parameter to vfs rc methods (Nick Craig-Wood) - Fix errors when using > 260 char files in the cache in Windows (Nick Craig-Wood) - Fix renaming of items while they are being uploaded (Nick Craig-Wood) - Fix very high load caused by slow directory listings (Nick Craig-Wood) - Fix renamed files not being uploaded with --vfs-cache-mode minimal (Nick Craig-Wood) - Fix directory locking caused by slow directory listings (Nick Craig-Wood) - Fix saving from chrome without --vfs-cache-mode writes (Nick Craig-Wood) - Local - Add --local-no-updated to provide a consistent view of changing objects (Nick Craig-Wood) - Add --local-no-set-modtime option to prevent modtime changes (tyhuber1) - Fix race conditions updating and reading Object metadata (Nick Craig-Wood) - Cache - Make any created backends be cached to fix rc problems (Nick Craig-Wood) - Fix dedupe on caches wrapping drives (Nick Craig-Wood) - Crypt - Add --crypt-server-side-across-configs flag (Nick Craig-Wood) - Make any created backends be cached to fix rc problems (Nick Craig-Wood) - Alias - Make any created backends be cached to fix rc problems (Nick Craig-Wood) - Azure Blob - Don't compile on < go1.13 after dependency update (Nick Craig-Wood) - B2 - Implement server side copy for files > 5GB (Nick Craig-Wood) - Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood) - Note that b2's encoding now allows  but rclone's hasn't changed (Nick Craig-Wood) - Fix transfers when using download_url (Nick Craig-Wood) - Box - Implement rclone cleanup (buengese) - Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood) - Allow authentication with access token (David) - Chunker - Make any created backends be cached to fix rc problems (Nick Craig-Wood) - Drive - Add rclone backend drives to list shared drives (teamdrives) (Nick Craig-Wood) - Implement rclone backend untrash (Nick Craig-Wood) - Work around drive bug which didn't set modtime of copied docs (Nick Craig-Wood) - Added --drive-starred-only to only show starred files (Jay McEntire) - Deprecate --drive-alternate-export as it is no longer needed (themylogin) - Fix duplication of Google docs on server side copy (Nick Craig-Wood) - Fix "panic: send on closed channel" when recycling dir entries (Nick Craig-Wood) - Dropbox - Add copyright detector info in limitations section in the docs (Alex Guerrero) - Fix rclone link by removing expires parameter (Nick Craig-Wood) - Fichier - Detect Flood detected: IP Locked error and sleep for 30s (Nick Craig-Wood) - FTP - Add explicit TLS support (Heiko Bornholdt) - Add support for --dump bodies and --dump auth for debugging (Nick Craig-Wood) - Fix interoperation with pure-ftpd (Nick Craig-Wood) - Google Cloud Storage - Add support for anonymous 
access (Kai Lüke) - Jottacloud - Bring back legacy authentification for use with whitelabel versions (buengese) - Switch to new api root - also implement a very ugly workaround for the DirMove failures (buengese) - Onedrive - Rework cancel of multipart uploads on rclone exit (Nick Craig-Wood) - Implement rclone cleanup (Nick Craig-Wood) - Add --onedrive-no-versions flag to remove old versions (Nick Craig-Wood) - Pcloud - Implement rclone link for public link creation (buengese) - Qingstor - Cancel in progress multipart uploads on rclone exit (Nick Craig-Wood) - S3 - Preserve metadata when doing multipart copy (Nick Craig-Wood) - Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood) - Add rclone link for public link sharing (Roman Kredentser) - Add rclone backend restore command to restore objects from GLACIER (Nick Craig-Wood) - Add rclone cleanup and rclone backend cleanup to clean unfinished multipart uploads (Nick Craig-Wood) - Add rclone backend list-multipart-uploads to list unfinished multipart uploads (Nick Craig-Wood) - Add --s3-max-upload-parts support (Kamil Trzciński) - Add --s3-no-check-bucket for minimising rclone transactions and perms (Nick Craig-Wood) - Add --s3-profile and --s3-shared-credentials-file options (Nick Craig-Wood) - Use regional s3 us-east-1 endpoint (David) - Add Scaleway provider (Vincent Feltz) - Update IBM COS endpoints (Egor Margineanu) - Reduce the default --s3-copy-cutoff to < 5GB for Backblaze S3 compatibility (Nick Craig-Wood) - Fix detection of bucket existing (Nick Craig-Wood) - SFTP - Use the absolute path instead of the relative path for listing for improved compatibility (Nick Craig-Wood) - Add --sftp-subsystem and --sftp-server-command options (aus) - Swift - Fix dangling large objects breaking the listing (Nick Craig-Wood) - Fix purge not deleting directory markers (Nick Craig-Wood) - Fix update multipart object removing all of its own parts (Nick Craig-Wood) - Fix missing hash from object returned from upload (Nick Craig-Wood) - Tardigrade - Upgrade to uplink v1.2.0 (Kaloyan Raev) - Union - Fix writing with the all policy (Nick Craig-Wood) - WebDAV - Fix directory creation with 4shared (Nick Craig-Wood) v1.52.3 - 2020-08-07 See commits - Bug Fixes - docs - Disable smart typography (eg en-dash) in MANUAL.* and man page (Nick Craig-Wood) - Update install.md to reflect minimum Go version (Evan Harris) - Update install from source instructions (Nick Craig-Wood) - make_manual: Support SOURCE_DATE_EPOCH (Morten Linderud) - log: Fix --use-json-log going to stderr not --log-file on Windows (Nick Craig-Wood) - serve dlna: Fix file list on Samsung Series 6+ TVs (Matteo Pietro Dazzi) - sync: Fix deadlock with --track-renames-strategy modtime (Nick Craig-Wood) - Cache - Fix moveto/copyto remote:file remote:file2 (Nick Craig-Wood) - Drive - Stop using root_folder_id as a cache (Nick Craig-Wood) - Make dangling shortcuts appear in listings (Nick Craig-Wood) - Drop "Disabling ListR" messages down to debug (Nick Craig-Wood) - Workaround and policy for Google Drive API (Dmitry Ustalov) - FTP - Add note to docs about home vs root directory selection (Nick Craig-Wood) - Onedrive - Fix reverting to Copy when Move would have worked (Nick Craig-Wood) - Avoid comma rendered in URL in onedrive.md (Kevin) - Pcloud - Fix oauth on European region "eapi.pcloud.com" (Nick Craig-Wood) - S3 - Fix bucket Region auto detection when Region unset in config (Nick Craig-Wood) v1.52.2 - 2020-06-24 See commits - Bug Fixes - build - Fix docker release 
build action (Nick Craig-Wood) - Fix custom timezone in Docker image (NoLooseEnds) - check: Fix misleading message which printed errors instead of differences (Nick Craig-Wood) - errors: Add WSAECONNREFUSED and more to the list of retriable Windows errors (Nick Craig-Wood) - rcd: Fix incorrect prometheus metrics (Gary Kim) - serve restic: Fix flags so they use environment variables (Nick Craig-Wood) - serve webdav: Fix flags so they use environment variables (Nick Craig-Wood) - sync: Fix --track-renames-strategy modtime (Nick Craig-Wood) - Drive - Fix not being able to delete a directory with a trashed shortcut (Nick Craig-Wood) - Fix creating a directory inside a shortcut (Nick Craig-Wood) - Fix --drive-impersonate with cached root_folder_id (Nick Craig-Wood) - SFTP - Fix SSH key PEM loading (Zac Rubin) - Swift - Speed up deletes by not retrying segment container deletes (Nick Craig-Wood) - Tardigrade - Upgrade to uplink v1.1.1 (Caleb Case) - WebDAV - Fix free/used display for rclone about/df for certain backends (Nick Craig-Wood) v1.52.1 - 2020-06-10 See commits - Bug Fixes - lib/file: Fix SetSparse on Windows 7 which fixes downloads of files > 250MB (Nick Craig-Wood) - build - Update go.mod to go1.14 to enable -mod=vendor build (Nick Craig-Wood) - Remove quicktest from Dockerfile (Nick Craig-Wood) - Build Docker images with GitHub actions (Matteo Pietro Dazzi) - Update Docker build workflows (Nick Craig-Wood) - Set user_allow_other in /etc/fuse.conf in the Docker image (Nick Craig-Wood) - Fix xgo build after go1.14 go.mod update (Nick Craig-Wood) - docs - Add link to source and modified time to footer of every page (Nick Craig-Wood) - Remove manually set dates and use git dates instead (Nick Craig-Wood) - Minor tense, punctuation, brevity and positivity changes for the home page (edwardxml) - Remove leading slash in page reference in footer when present (Nick Craig-Wood) - Note commands which need obscured input in the docs (Nick Craig-Wood) - obscure: Write more help as we are referencing it elsewhere (Nick Craig-Wood) - VFS - Fix OS vs Unix path confusion - fixes ChangeNotify on Windows (Nick Craig-Wood) - Drive - Fix missing items when listing using --fast-list / ListR (Nick Craig-Wood) - Putio - Fix panic on Object.Open (Cenk Alti) - S3 - Fix upload of single files into buckets without create permission (Nick Craig-Wood) - Fix --header-upload (Nick Craig-Wood) - Tardigrade - Fix listing bug by upgrading to v1.0.7 - Set UserAgent to rclone (Caleb Case) v1.52.0 - 2020-05-27 Special thanks to Martin Michlmayr for proof reading and correcting all the docs and Edward Barker for helping re-write the front page. 
See commits - New backends - Tardigrade backend for use with storj.io (Caleb Case) - Union re-write to have multiple writable remotes (Max Sum) - Seafile for Seafile server (Fred @creativeprojects) - New commands - backend: command for backend specific commands (see backends) (Nick Craig-Wood) - cachestats: Deprecate in favour of rclone backend stats cache: (Nick Craig-Wood) - dbhashsum: Deprecate in favour of rclone hashsum DropboxHash (Nick Craig-Wood) - New Features - Add --header-download and --header-upload flags for setting HTTP headers when uploading/downloading (Tim Gallant) - Add --header flag to add HTTP headers to every HTTP transaction (Nick Craig-Wood) - Add --check-first to do all checking before starting transfers (Nick Craig-Wood) - Add --track-renames-strategy for configurable matching criteria for --track-renames (Bernd Schoolmann) - Add --cutoff-mode hard,soft,cautious (Shing Kit Chan & Franklyn Tackitt) - Filter flags (eg --files-from -) can read from stdin (fishbullet) - Add --error-on-no-transfer option (Jon Fautley) - Implement --order-by xxx,mixed for copying some small and some big files (Nick Craig-Wood) - Allow --max-backlog to be negative meaning as large as possible (Nick Craig-Wood) - Added --no-unicode-normalization flag to allow Unicode filenames to remain unique (Ben Zenker) - Allow --min-age/--max-age to take a date as well as a duration (Nick Craig-Wood) - Add rename statistics for file and directory renames (Nick Craig-Wood) - Add statistics output to JSON log (reddi) - Make stats be printed on non-zero exit code (Nick Craig-Wood) - When running --password-command allow use of stdin (Sébastien Gross) - Stop empty strings being a valid remote path (Nick Craig-Wood) - accounting: support WriterTo for less memory copying (Nick Craig-Wood) - build - Update to use go1.14 for the build (Nick Craig-Wood) - Add -trimpath to release build for reproducible builds (Nick Craig-Wood) - Remove GOOS and GOARCH from Dockerfile (Brandon Philips) - config - Fsync the config file after writing to save more reliably (Nick Craig-Wood) - Add --obscure and --no-obscure flags to config create/update (Nick Craig-Wood) - Make config show take remote: as well as remote (Nick Craig-Wood) - copyurl: Add --no-clobber flag (Denis) - delete: Added --rmdirs flag to delete directories as well (Kush) - filter: Added --files-from-raw flag (Ankur Gupta) - genautocomplete: Add support for fish shell (Matan Rosenberg) - log: Add support for syslog LOCAL facilities (Patryk Jakuszew) - lsjson: Add --hash-type parameter and use it in lsf to speed up hashing (Nick Craig-Wood) - rc - Add -o/--opt and -a/--arg for more structured input (Nick Craig-Wood) - Implement backend/command for running backend specific commands remotely (Nick Craig-Wood) - Add mount/mount command for starting rclone mount via the API (Chaitanya) - rcd: Add Prometheus metrics support (Gary Kim) - serve http - Added a --template flag for user defined markup (calistri) - Add Last-Modified headers to files and directories (Nick Craig-Wood) - serve sftp: Add support for multiple host keys by repeating --key flag (Maxime Suret) - touch: Add --localtime flag to make --timestamp localtime not UTC (Nick Craig-Wood)
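A sketch of how some of these new v1.52.0 flags combine on the command line (remote names and sizes are placeholders, not from the release notes):

    # stop cleanly once 10 GiB has been transferred, finishing transfers in flight
    rclone copy src: dst: --max-transfer 10G --cutoff-mode soft
    # match renames by modification time and leaf name instead of hash
    rclone sync src: dst: --track-renames --track-renames-strategy modtime,leaf
    # interleave small and large files during a big copy
    rclone copy src: dst: --order-by size,mixed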
- Bug Fixes - accounting - Restore "Max number of stats groups reached" log line (Michał Matczuk) - Correct exitcode on Transfer Limit Exceeded flag. (Anuar Serdaliyev) - Reset bytes read during copy retry (Ankur Gupta) - Fix race clearing stats (Nick Craig-Wood) - copy: Only create empty directories when they don't exist on the remote (Ishuah Kariuki) - dedupe: Stop dedupe deleting files with identical IDs (Nick Craig-Wood) - oauth - Use custom http client so that --no-check-certificate is honored by oauth token fetch (Mark Spieth) - Replace deprecated oauth2.NoContext (Lars Lehtonen) - operations - Fix setting the timestamp on Windows for multithread copy (Nick Craig-Wood) - Make rcat obey --ignore-checksum (Nick Craig-Wood) - Make --max-transfer more accurate (Nick Craig-Wood) - rc - Fix dropped error (Lars Lehtonen) - Fix misplaced http server config (Xiaoxing Ye) - Disable duplicate log (ElonH) - serve dlna - Cds: don't specify childCount at all when unknown (Dan Walters) - Cds: use modification time as date in dlna metadata (Dan Walters) - serve restic: Fix tests after restic project removed vendoring (Nick Craig-Wood) - sync - Fix incorrect "nothing to transfer" message using --delete-before (Nick Craig-Wood) - Only create empty directories when they don't exist on the remote (Ishuah Kariuki) - Mount - Add --async-read flag to disable asynchronous reads (Nick Craig-Wood) - Ignore --allow-root flag with a warning as it has been removed upstream (Nick Craig-Wood) - Warn if --allow-non-empty used on Windows and clarify docs (Nick Craig-Wood) - Constrain to go1.13 or above otherwise bazil.org/fuse fails to compile (Nick Craig-Wood) - Fix fail because of too long volume name (evileye) - Report 1PB free for unknown disk sizes (Nick Craig-Wood) - Map more rclone errors into file system errors (Nick Craig-Wood) - Fix disappearing cwd problem (Nick Craig-Wood) - Use ReaddirPlus on Windows to improve directory listing performance (Nick Craig-Wood) - Send a hint as to whether the filesystem is case insensitive or not (Nick Craig-Wood) - Add rc command mount/types (Nick Craig-Wood) - Change maximum leaf name length to 1024 bytes (Nick Craig-Wood) - VFS - Add --vfs-read-wait and --vfs-write-wait flags to control time waiting for a sequential read/write (Nick Craig-Wood) - Change default --vfs-read-wait to 20ms (it was 5ms and not configurable) (Nick Craig-Wood) - Make df output more consistent on a rclone mount. 
(Yves G) - Report 1PB free for unknown disk sizes (Nick Craig-Wood) - Fix race condition caused by unlocked reading of Dir.path (Nick Craig-Wood) - Make File lock and Dir lock not overlap to avoid deadlock (Nick Craig-Wood) - Implement lock ordering between File and Dir to eliminate deadlocks (Nick Craig-Wood) - Factor the vfs cache into its own package (Nick Craig-Wood) - Pin the Fs in use in the Fs cache (Nick Craig-Wood) - Add SetSys() methods to Node to allow caching stuff on a node (Nick Craig-Wood) - Ignore file not found errors from Hash in Read.Release (Nick Craig-Wood) - Fix hang in read wait code (Nick Craig-Wood) - Local - Speed up multi thread downloads by using sparse files on Windows (Nick Craig-Wood) - Implement --local-no-sparse flag for disabling sparse files (Nick Craig-Wood) - Implement rclone backend noop for testing purposes (Nick Craig-Wood) - Fix "file not found" errors on post transfer Hash calculation (Nick Craig-Wood) - Cache - Implement rclone backend stats command (Nick Craig-Wood) - Fix Server Side Copy with Temp Upload (Brandon McNama) - Remove Unused Functions (Lars Lehtonen) - Disable race tests until bbolt is fixed (Nick Craig-Wood) - Move methods used for testing into test file (greatroar) - Add Pin and Unpin and canonicalised lookup (Nick Craig-Wood) - Use proper import path go.etcd.io/bbolt (Robert-André Mauchin) - Crypt - Calculate hashes for uploads from local disk (Nick Craig-Wood) - This allows crypted Jottacloud uploads without using local disk - This means crypted s3/b2 uploads will now have hashes - Added rclone backend decode/encode commands to replicate functionality of cryptdecode (Anagh Kumar Baranwal) - Get rid of the unused Cipher interface as it obfuscated the code (Nick Craig-Wood) - Azure Blob - Implement streaming of unknown sized files so rcat is now supported (Nick Craig-Wood) - Implement memory pooling to control memory use (Nick Craig-Wood) - Add --azureblob-disable-checksum flag (Nick Craig-Wood) - Retry InvalidBlobOrBlock error as it may indicate block concurrency problems (Nick Craig-Wood) - Remove unused Object.parseTimeString() (Lars Lehtonen) - Fix permission error on SAS URL limited to container (Nick Craig-Wood) - B2 - Add support for --header-upload and --header-download (Tim Gallant) - Ignore directory markers at the root also (Nick Craig-Wood) - Force the case of the SHA1 to lowercase (Nick Craig-Wood) - Remove unused largeUpload.clearUploadURL() (Lars Lehtonen) - Box - Add support for --header-upload and --header-download (Tim Gallant) - Implement About to read size used (Nick Craig-Wood) - Add token renew function for jwt auth (David Bramwell) - Added support for interchangeable root folder for Box backend (Sunil Patra) - Remove unnecessary iat from jws claims (David) - Drive - Follow shortcuts by default, skip with --drive-skip-shortcuts (Nick Craig-Wood) - Implement rclone backend shortcut command for creating shortcuts (Nick Craig-Wood) - Added rclone backend command to change service_account_file and chunk_size (Anagh Kumar Baranwal) - Fix missing files when using --fast-list and --drive-shared-with-me (Nick Craig-Wood) - Fix duplicate items when using --drive-shared-with-me (Nick Craig-Wood) - Extend --drive-stop-on-upload-limit to respond to teamDriveFileLimitExceeded. 
(harry) - Don't delete files with multiple parents to avoid data loss (Nick Craig-Wood) - Server side copy docs use default description if empty (Nick Craig-Wood) - Dropbox - Make insufficient space errors fatal (harry) - Add info about required redirect url (Elan Ruusamäe) - Fichier - Add support for --header-upload and --header-download (Tim Gallant) - Implement custom pacer to deal with the new rate limiting (buengese) - FTP - Fix lockup when using concurrency limit on failed connections (Nick Craig-Wood) - Fix lockup on failed upload when using concurrency limit (Nick Craig-Wood) - Fix lockup on Close failures when using concurrency limit (Nick Craig-Wood) - Work around pureftp sending spurious 150 messages (Nick Craig-Wood) - Google Cloud Storage - Add support for --header-upload and --header-download (Nick Craig-Wood) - Add ARCHIVE storage class to help (Adam Stroud) - Ignore directory markers at the root (Nick Craig-Wood) - Googlephotos - Make the start year configurable (Daven) - Add support for --header-upload and --header-download (Tim Gallant) - Create feature/favorites directory (Brandon Philips) - Fix "concurrent map write" error (Nick Craig-Wood) - Don't put an image in error message (Nick Craig-Wood) - HTTP - Improved directory listing with new template from Caddy project (calisro) - Jottacloud - Implement --jottacloud-trashed-only (buengese) - Add support for --header-upload and --header-download (Tim Gallant) - Use RawURLEncoding when decoding base64 encoded login token (buengese) - Implement cleanup (buengese) - Update docs regarding cleanup, removed remains from old auth, and added warning about special mountpoints. (albertony) - Mailru - Describe 2FA requirements (valery1707) - Onedrive - Implement --onedrive-server-side-across-configs (Nick Craig-Wood) - Add support for --header-upload and --header-download (Tim Gallant) - Fix occasional 416 errors on multipart uploads (Nick Craig-Wood) - Added maximum chunk size limit warning in the docs (Harry) - Fix missing drive on config (Nick Craig-Wood) - Make quotaLimitReached errors fatal (harry) - Opendrive - Add support for --header-upload and --header-download (Tim Gallant) - Pcloud - Added support for interchangeable root folder for pCloud backend (Sunil Patra) - Add support for --header-upload and --header-download (Tim Gallant) - Fix initial config "Auth state doesn't match" message (Nick Craig-Wood) - Premiumizeme - Add support for --header-upload and --header-download (Tim Gallant) - Prune unused functions (Lars Lehtonen) - Putio - Add support for --header-upload and --header-download (Nick Craig-Wood) - Make downloading files use the rclone http Client (Nick Craig-Wood) - Fix parsing of remotes with leading and trailing / (Nick Craig-Wood) - Qingstor - Make rclone cleanup remove pending multipart uploads older than 24h (Nick Craig-Wood) - Try harder to cancel failed multipart uploads (Nick Craig-Wood) - Prune multiUploader.list() (Lars Lehtonen) - Lint fix (Lars Lehtonen) - S3 - Add support for --header-upload and --header-download (Tim Gallant) - Use memory pool for buffer allocations (Maciej Zimnoch) - Add SSE-C support for AWS, Ceph, and MinIO (Jack Anderson) - Fail fast multipart upload (Michał Matczuk) - Report errors on bucket creation (mkdir) correctly (Nick Craig-Wood) - Specify that Minio supports URL encoding in listings (Nick Craig-Wood) - Added 500 as retryErrorCode (Michał Matczuk) - Use --low-level-retries as the number of SDK retries (Aleksandar Janković) - Fix multipart abort context 
(Aleksandar Jankovic) - Replace deprecated session.New() with session.NewSession() (Lars Lehtonen) - Use the provided size parameter when allocating a new memory pool (Joachim Brandon LeBlanc) - Use rclone's low level retries instead of AWS SDK to fix listing retries (Nick Craig-Wood) - Ignore directory markers at the root also (Nick Craig-Wood) - Use single memory pool (Michał Matczuk) - Do not resize buf on put to memBuf (Michał Matczuk) - Improve docs for --s3-disable-checksum (Nick Craig-Wood) - Don't leak memory or tokens in edge cases for multipart upload (Nick Craig-Wood) - Seafile - Implement 2FA (Fred) - SFTP - Added --sftp-pem-key to support inline key files (calisro) - Fix post transfer copies failing with 0 size when using set_modtime=false (Nick Craig-Wood) - Sharefile - Add support for --header-upload and --header-download (Tim Gallant) - Sugarsync - Add support for --header-upload and --header-download (Tim Gallant) - Swift - Add support for --header-upload and --header-download (Nick Craig-Wood) - Fix cosmetic issue in error message (Martin Michlmayr) - Union - Implement multiple writable remotes (Max Sum) - Fix server-side copy (Max Sum) - Implement ListR (Max Sum) - Enable ListR when upstreams contain local (Max Sum) - WebDAV - Add support for --header-upload and --header-download (Tim Gallant) - Fix X-OC-Mtime header for Transip compatibility (Nick Craig-Wood) - Report full and consistent usage with about (Yves G) - Yandex - Add support for --header-upload and --header-download (Tim Gallant) v1.51.0 - 2020-02-01 - New backends - Memory (Nick Craig-Wood) - Sugarsync (Nick Craig-Wood) - New Features - Adjust all backends to have --backend-encoding parameter (Nick Craig-Wood) - this enables the encoding for special characters to be adjusted or disabled - Add --max-duration flag to control the maximum duration of a transfer session (boosh) - Add --expect-continue-timeout flag, default 1s (Nick Craig-Wood) - Add --no-check-dest flag for copying without testing the destination (Nick Craig-Wood) - Implement --order-by flag to order transfers (Nick Craig-Wood) - accounting - Don't show entries in both transferring and checking (Nick Craig-Wood) - Add option to delete stats (Aleksandar Jankovic) - build - Compress the test builds with gzip (Nick Craig-Wood) - Implement a framework for starting test servers during tests (Nick Craig-Wood) - cmd: Always print elapsed time to tenth place seconds in progress (Gary Kim) - config - Add --password-command to allow dynamic config password (Damon Permezel) - Give config questions default values (Nick Craig-Wood) - Check a remote exists when creating a new one (Nick Craig-Wood) - copyurl: Add --stdout flag to write to stdout (Nick Craig-Wood) - dedupe: Implement keep smallest too (Nick Craig-Wood) - hashsum: Add --base64 flag (landall) - lsf: Speed up on s3/swift/etc by not reading mimetype by default (Nick Craig-Wood) - lsjson: Add --no-mimetype flag (Nick Craig-Wood) - rc: Add methods to turn on blocking and mutex profiling (Nick Craig-Wood) - rcd - Adding group parameter to stats (Chaitanya) - Move webgui apart; option to disable browser (Xiaoxing Ye) - serve sftp: Add support for public key with auth proxy (Paul Tinsley) - stats: Show deletes in stats and hide zero stats (anuar45)
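An illustrative sketch of the new v1.51.0 session control flags listed above (the password manager command and remote names are placeholders):

    # fetch the config password from a command instead of prompting
    rclone --password-command "pass rclone/config" lsd remote:
    # stop a transfer session cleanly after one hour
    rclone sync src: dst: --max-duration 1h
    # transfer the largest files first
    rclone copy src: dst: --order-by size,descending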
- Bug Fixes - accounting - Fix error counter counting multiple times (Ankur Gupta) - Fix error count shown as checks (Cnly) - Clear finished transfer in stats-reset (Maciej Zimnoch) - Added StatsInfo locking in statsGroups sum function (Michał Matczuk) - asyncreader: Fix EOF error (buengese) - check: Fix --one-way recursing more directories than it needs to (Nick Craig-Wood) - chunkedreader: Disable hash calculation for first segment (Nick Craig-Wood) - config - Do not open browser on headless on drive/gcs/google photos (Xiaoxing Ye) - SetValueAndSave ignore error if config section does not exist yet (buengese) - cmd: Fix completion with an encrypted config (Danil Semelenov) - dbhashsum: Stop it returning UNSUPPORTED on dropbox (Nick Craig-Wood) - dedupe: Add missing modes to help string (Nick Craig-Wood) - operations - Fix dedupe continuing on errors like insufficientFilePermisson (SezalAgrawal) - Clear accounting before low level retry (Maciej Zimnoch) - Write debug message when hashes could not be checked (Ole Schütt) - Move interface assertion to tests to remove pflag dependency (Nick Craig-Wood) - Make NewOverrideObjectInfo public and factor uses (Nick Craig-Wood) - proxy: Replace use of bcrypt with sha256 (Nick Craig-Wood) - vendor - Update bazil.org/fuse to fix FreeBSD 12.1 (Nick Craig-Wood) - Update github.com/t3rm1n4l/go-mega to fix mega "illegal base64 data at input byte 22" (Nick Craig-Wood) - Update termbox-go to fix ncdu command on FreeBSD (Kuang-che Wu) - Update t3rm1n4l/go-mega - fixes mega: couldn't login: crypto/aes: invalid key size 0 (Nick Craig-Wood) - Mount - Enable async reads for a 20% speedup (Nick Craig-Wood) - Replace use of WriteAt with Write for cache mode >= writes and O_APPEND (Brett Dutro) - Make sure we call unmount when exiting (Nick Craig-Wood) - Don't build on go1.10 as bazil/fuse no longer supports it (Nick Craig-Wood) - When setting dates discard out of range dates (Nick Craig-Wood) - VFS - Add a newly created file straight into the directory (Nick Craig-Wood) - Only calculate one hash for reads for a speedup (Nick Craig-Wood) - Make ReadAt for non cached files work better with non-sequential reads (Nick Craig-Wood) - Fix edge cases when reading ModTime from file (Nick Craig-Wood) - Make sure existing files opened for write show correct size (Nick Craig-Wood) - Don't cache the path in RW file objects to fix renaming (Nick Craig-Wood) - Fix rename of open files when using the VFS cache (Nick Craig-Wood) - When renaming files in the cache, rename the cache item in memory too (Nick Craig-Wood) - Fix open file renaming on drive when using --vfs-cache-mode writes (Nick Craig-Wood) - Fix incorrect modtime for mv into mount with --vfs-cache-mode writes (Nick Craig-Wood) - On rename, rename in cache too if the file exists (Anagh Kumar Baranwal)
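A hedged example of the VFS cache mode that several of these fixes relate to (the mount point is a placeholder):

    # cache writes locally so open-for-write and rename behave like a normal filesystem
    rclone mount remote: /mnt/remote --vfs-cache-mode writes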
- Local - Make source file being updated errors be NoLowLevelRetry errors (Nick Craig-Wood) - Fix update of hidden files on Windows (Nick Craig-Wood) - Cache - Follow move of upstream library from github.com/coreos/bbolt to github.com/etcd-io/bbolt (Nick Craig-Wood) - Fix fatal error: concurrent map writes (Nick Craig-Wood) - Crypt - Reorder the filename encryption options (Thomas Eales) - Correctly handle trailing dot (buengese) - Chunker - Reduce length of temporary suffix (Ivan Andreev) - Drive - Add --drive-stop-on-upload-limit flag to stop syncs when upload limit reached (Nick Craig-Wood) - Add --drive-use-shared-date to use date file was shared instead of modified date (Garry McNulty) - Make sure invalid auth for teamdrives always reports an error (Nick Craig-Wood) - Fix --fast-list when using appDataFolder (Nick Craig-Wood) - Use multipart resumable uploads for streaming and uploads in mount (Nick Craig-Wood) - Log an ERROR if an incomplete search is returned (Nick Craig-Wood) - Hide dangerous config from the configurator (Nick Craig-Wood) - Dropbox - Treat insufficient_space errors as non retriable errors (Nick Craig-Wood) - Jottacloud - Use new auth method used by official client (buengese) - Add URL to generate Login Token to config wizard (Nick Craig-Wood) - Add support for whitelabel versions (buengese) - Koofr - Use rclone HTTP client. (jaKa) - Onedrive - Add Sites.Read.All permission (Benjamin Richter) - Add support for the "Retry-After" header (Motonori IWAMURO) - Opendrive - Implement --opendrive-chunk-size (Nick Craig-Wood) - S3 - Re-implement multipart upload to fix memory issues (Nick Craig-Wood) - Add --s3-copy-cutoff for size to switch to multipart copy (Nick Craig-Wood) - Add new region Asia Pacific (Hong Kong) (Outvi V) - Reduce memory usage streaming files by reducing max stream upload size (Nick Craig-Wood) - Add --s3-list-chunk option for bucket listing (Thomas Kriechbaumer) - Force path style bucket access to off for AWS deprecation (Nick Craig-Wood) - Use AWS web identity role provider if available (Tennix) - Add StackPath Object Storage Support (Dave Koston) - Fix ExpiryWindow value (Aleksandar Jankovic) - Fix DisableChecksum condition (Aleksandar Janković) - Fix URL decoding of NextMarker (Nick Craig-Wood) - SFTP - Add --sftp-skip-links to skip symlinks and non regular files (Nick Craig-Wood) - Retry Creation of Connection (Sebastian Brandt) - Fix "failed to parse private key file: ssh: not an encrypted key" error (Nick Craig-Wood) - Open files for update write only to fix AWS SFTP interop (Nick Craig-Wood) - Swift - Reserve segments of dynamic large objects when deleting objects in a container with versioning enabled (Nguyễn Hữu Luân) - Fix parsing of X-Object-Manifest (Nick Craig-Wood) - Update OVH API endpoint (unbelauscht) - WebDAV - Make nextcloud only upload SHA1 checksums (Nick Craig-Wood) - Fix case of "Bearer" in Authorization: header to agree with RFC (Nick Craig-Wood) - Add Referer header to fix problems with WAFs (Nick Craig-Wood) v1.50.2 - 2019-11-19 - Bug Fixes - accounting: Fix memory leak on retried operations (Nick Craig-Wood) - Drive - Fix listing of the root directory with drive.files scope (Nick Craig-Wood) - Fix --drive-root-folder-id with team/shared drives (Nick Craig-Wood) v1.50.1 - 2019-11-02 - Bug Fixes - hash: Fix accidentally changed hash names for DropboxHash and CRC-32 (Nick Craig-Wood) - fshttp: Fix error reporting on tpslimit token bucket errors (Nick Craig-Wood) - fshttp: Don't print token bucket errors on context cancelled (Nick Craig-Wood) - Local - Fix listings of . on Windows (Nick Craig-Wood) - Onedrive - Fix DirMove/Move after Onedrive change (Xiaoxing Ye) v1.50.0 - 2019-10-26 - New backends - Citrix Sharefile (Nick Craig-Wood) - Chunker - an overlay backend to split files into smaller parts (Ivan Andreev) - Mail.ru Cloud (Ivan Andreev) - New Features - encodings (Fabian Möller & Nick Craig-Wood) - All backends now use file name encoding to ensure any file name can be written to any backend. - See the restricted file name docs for more info and the local backend docs. - Some file names may look different in rclone if you are using any control characters in names or unicode FULLWIDTH symbols. 
- build - Update to use go1.13 for the build (Nick Craig-Wood) - Drop support for go1.9 (Nick Craig-Wood) - Build rclone with GitHub actions (Nick Craig-Wood) - Convert python scripts to python3 (Nick Craig-Wood) - Swap Azure/go-ansiterm for mattn/go-colorable (Nick Craig-Wood) - Dockerfile fixes (Matei David) - Add plugin support for backends and commands (Richard Patel) - config - Use alternating Red/Green in config to make more obvious (Nick Craig-Wood) - contrib - Add sample DLNA server Docker Compose manifest. (pataquets) - Add sample WebDAV server Docker Compose manifest. (pataquets) - copyurl - Add --auto-filename flag for using file name from URL in destination path (Denis) - serve dlna: - Many compatibility improvements (Dan Walters) - Support for external srt subtitles (Dan Walters) - rc - Added command core/quit (Saksham Khanna) - Bug Fixes - sync - Make --update/-u not transfer files that haven't changed (Nick Craig-Wood) - Free objects after they come out of the transfer pipe to save memory (Nick Craig-Wood) - Fix --files-from without --no-traverse doing a recursive scan (Nick Craig-Wood) - operations - Fix accounting for server side copies (Nick Craig-Wood) - Display 'All duplicates removed' only if dedupe successful (Sezal Agrawal) - Display 'Deleted X extra copies' only if dedupe successful (Sezal Agrawal) - accounting - Only allow up to 100 completed transfers in the accounting list to save memory (Nick Craig-Wood) - Cull the old time ranges when possible to save memory (Nick Craig-Wood) - Fix panic due to server-side copy fallback (Ivan Andreev) - Fix memory leak noticeable for transfers of large numbers of objects (Nick Craig-Wood) - Fix total duration calculation (Nick Craig-Wood) - cmd - Fix environment variables not setting command line flags (Nick Craig-Wood) - Make autocomplete compatible with bash's posix mode for macOS (Danil Semelenov) - Make --progress work in git bash on Windows (Nick Craig-Wood) - Fix 'compopt: command not found' on autocomplete on macOS (Danil Semelenov) - config - Fix setting of non top level flags from environment variables (Nick Craig-Wood) - Check config names more carefully and report errors (Nick Craig-Wood) - Remove error: can't use --size-only and --ignore-size together. 
(Nick Craig-Wood) - filter: Prevent mixing options when --files-from is in use (Michele Caci) - serve sftp: Fix crash on unsupported operations (eg Readlink) (Nick Craig-Wood) - Mount - Allow files of unknown size to be read properly (Nick Craig-Wood) - Skip tests on <= 2 CPUs to avoid lockup (Nick Craig-Wood) - Fix panic on File.Open (Nick Craig-Wood) - Fix "mount_fusefs: -o timeout=: option not supported" on FreeBSD (Nick Craig-Wood) - Don't pass huge filenames (>4k) to FUSE as it can't cope (Nick Craig-Wood) - VFS - Add flag --vfs-case-insensitive for windows/macOS mounts (Ivan Andreev) - Make objects of unknown size readable through the VFS (Nick Craig-Wood) - Move writeback of dirty data out of close() method into its own method (FlushWrites) and remove close() call from Flush() (Brett Dutro) - Stop empty dirs disappearing when renamed on bucket based remotes (Nick Craig-Wood) - Stop change notify polling clearing so much of the directory cache (Nick Craig-Wood) - Azure Blob - Disable logging to the Windows event log (Nick Craig-Wood) - B2 - Remove unverified: prefix on sha1 to improve interop (eg with CyberDuck) (Nick Craig-Wood) - Box - Add options to get access token via JWT auth (David) - Drive - Disable HTTP/2 by default to work around INTERNAL_ERROR problems (Nick Craig-Wood) - Make sure that drive root ID is always canonical (Nick Craig-Wood) - Fix --drive-shared-with-me from the root with ls and --fast-list (Nick Craig-Wood) - Fix ChangeNotify polling for shared drives (Nick Craig-Wood) - Fix change notify polling when using appDataFolder (Nick Craig-Wood) - Dropbox - Make disallowed filenames errors not retry (Nick Craig-Wood) - Fix nil pointer exception on restricted files (Nick Craig-Wood) - Fichier - Fix accessing files > 2GB on 32 bit systems (Nick Craig-Wood) - FTP - Allow disabling EPSV mode (Jon Fautley) - HTTP - HEAD directory entries in parallel to speed up listings (Nick Craig-Wood) - Add --http-no-head to stop rclone doing HEAD in listings (Nick Craig-Wood) - Putio - Add ability to resume uploads (Cenk Alti) - S3 - Fix signature v2_auth headers (Anthony Rusdi) - Fix encoding for control characters (Nick Craig-Wood) - Only ask for URL encoded directory listings if we need them on Ceph (Nick Craig-Wood) - Add option for multipart failure behaviour (Aleksandar Jankovic) - Support for multipart copy (庄天翼) - Fix nil pointer reference if no metadata returned for object (Nick Craig-Wood) - SFTP - Fix --sftp-ask-password trying to contact the ssh agent (Nick Craig-Wood) - Fix hashes of files with backslashes (Nick Craig-Wood) - Include more ciphers with --sftp-use-insecure-cipher (Carlos Ferreyra) - WebDAV - Parse and return Sharepoint error response (Henning Surmeier) v1.49.5 - 2019-10-05 - Bug Fixes - Revert back to go1.12.x for the v1.49.x builds as go1.13.x was causing issues (Nick Craig-Wood) - Fix rpm packages by using master builds of nfpm (Nick Craig-Wood) - Fix macOS build after brew changes (Nick Craig-Wood) v1.49.4 - 2019-09-29 - Bug Fixes - cmd/rcd: Address ZipSlip vulnerability (Richard Patel) - accounting: Fix file handle leak on errors (Nick Craig-Wood) - oauthutil: Fix security problem when running with two users on the same machine (Nick Craig-Wood) - FTP - Fix listing of an empty root returning: error dir not found (Nick Craig-Wood) - S3 - Fix SetModTime on GLACIER/ARCHIVE objects and implement set/get tier (Nick Craig-Wood) v1.49.3 - 2019-09-15 - Bug Fixes - accounting - Fix total duration calculation (Aleksandar Jankovic) - Fix "file already closed" on transfer 
retries (Nick Craig-Wood) v1.49.2 - 2019-09-08 - New Features - build: Add Docker workflow support (Alfonso Montero) - Bug Fixes - accounting: Fix locking in Transfer to avoid deadlock with --progress (Nick Craig-Wood) - docs: Fix template argument for mktemp in install.sh (Cnly) - operations: Fix -u/--update with google photos / files of unknown size (Nick Craig-Wood) - rc: Fix docs for config/create /update /password (Nick Craig-Wood) - Google Cloud Storage - Fix need for elevated permissions on SetModTime (Nick Craig-Wood) v1.49.1 - 2019-08-28 - Bug Fixes - config: Fix generated passwords being stored as empty password (Nick Craig-Wood) - rcd: Added missing parameter for web-gui info logs. (Chaitanya) - Googlephotos - Fix crash on error response (Nick Craig-Wood) - Onedrive - Fix crash on error response (Nick Craig-Wood) v1.49.0 - 2019-08-26 - New backends - 1fichier (Laura Hausmann) - Google Photos (Nick Craig-Wood) - Putio (Cenk Alti) - premiumize.me (Nick Craig-Wood) - New Features - Experimental web GUI (Chaitanya Bankanhal) - Implement --compare-dest & --copy-dest (yparitcher) - Implement --suffix without --backup-dir for backup to current dir (yparitcher) - config reconnect to re-login (re-run the oauth login) for the backend. (Nick Craig-Wood) - config userinfo to discover which user you are logged in as. (Nick Craig-Wood) - config disconnect to disconnect you (log out) from the backend. (Nick Craig-Wood) - Add --use-json-log for JSON logging (justinalin) - Add context propagation to rclone (Aleksandar Jankovic) - Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic) - Add Higher units for ETA (AbelThar) - Update rclone logos to new design (Andreas Chlupka) - hash: Add CRC-32 support (Cenk Alti) - help showbackend: Fixed advanced option category when there are no standard options (buengese) - ncdu: Display/Copy to Clipboard Current Path (Gary Kim) - operations: - Run hashing operations in parallel (Nick Craig-Wood) - Don't calculate checksums when using --ignore-checksum (Nick Craig-Wood) - Check transfer hashes when using --size-only mode (Nick Craig-Wood) - Disable multi thread copy for local to local copies (Nick Craig-Wood) - Debug successful hashes as well as failures (Nick Craig-Wood) - rc - Add ability to stop async jobs (Aleksandar Jankovic) - Return current settings if core/bwlimit called without parameters (Nick Craig-Wood) - Rclone-WebUI integration with rclone (Chaitanya Bankanhal) - Added command line parameter to control the cross origin resource sharing (CORS) in the rcd. 
(Security Improvement) (Chaitanya Bankanhal) - Add anchor tags to the docs so links are consistent (Nick Craig-Wood) - Remove _async key from input parameters after parsing so later operations won't get confused (buengese) - Add call to clear stats (Aleksandar Jankovic) - rcd - Auto-login for web-gui (Chaitanya Bankanhal) - Implement --baseurl for rcd and web-gui (Chaitanya Bankanhal) - serve dlna - Only select interfaces which can multicast for SSDP (Nick Craig-Wood) - Add more builtin mime types to cover standard audio/video (Nick Craig-Wood) - Fix missing mime types on Android causing missing videos (Nick Craig-Wood) - serve ftp - Refactor to bring into line with other serve commands (Nick Craig-Wood) - Implement --auth-proxy (Nick Craig-Wood) - serve http: Implement --baseurl (Nick Craig-Wood) - serve restic: Implement --baseurl (Nick Craig-Wood) - serve sftp - Implement auth proxy (Nick Craig-Wood) - Fix detection of whether server is authorized (Nick Craig-Wood) - serve webdav - Implement --baseurl (Nick Craig-Wood) - Support --auth-proxy (Nick Craig-Wood)
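The new OAuth session commands from this release look like this in use (remote: is a placeholder for a configured OAuth remote):

    # re-run the OAuth login for an existing remote
    rclone config reconnect remote:
    # show which user the remote is logged in as
    rclone config userinfo remote:
    # log out from the backend
    rclone config disconnect remote: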
- Bug Fixes - Make "bad record MAC" a retriable error (Nick Craig-Wood) - copyurl: Fix copying files that return HTTP errors (Nick Craig-Wood) - march: Fix checking sub-directories when using --no-traverse (buengese) - rc - Fix unmarshalable http.AuthFn in options and put in test for marshalability (Nick Craig-Wood) - Move job expire flags to rc to fix initialization problem (Nick Craig-Wood) - Fix --loopback with rc/list and others (Nick Craig-Wood) - rcat: Fix slowdown on systems with multiple hashes (Nick Craig-Wood) - rcd: Fix permissions problems on cache directory with web gui download (Nick Craig-Wood) - Mount - Default --daemon-timeout to 15 minutes on macOS and FreeBSD (Nick Craig-Wood) - Update docs to show mounting from root OK for bucket based (Nick Craig-Wood) - Remove nonseekable flag from write files (Nick Craig-Wood) - VFS - Make write without cache more efficient (Nick Craig-Wood) - Fix --vfs-cache-mode minimal and writes ignoring cached files (Nick Craig-Wood) - Local - Add --local-case-sensitive and --local-case-insensitive (Nick Craig-Wood) - Avoid polluting page cache when uploading local files to remote backends (Michał Matczuk) - Don't calculate any hashes by default (Nick Craig-Wood) - Fadvise run syscall on a dedicated goroutine (Michał Matczuk) - Azure Blob - Azure Storage Emulator support (Sandeep) - Updated config help details to remove connection string references (Sandeep) - Make all operations work from the root (Nick Craig-Wood) - B2 - Implement link sharing (yparitcher) - Enable server side copy to copy between buckets (Nick Craig-Wood) - Make all operations work from the root (Nick Craig-Wood) - Drive - Fix server side copy of big files (Nick Craig-Wood) - Update API for teamdrive use (Nick Craig-Wood) - Add error for purge with --drive-trashed-only (ginvine) - Fichier - Make FolderID int and adjust related code (buengese) - Google Cloud Storage - Reduce oauth scope requested as suggested by Google (Nick Craig-Wood) - Make all operations work from the root (Nick Craig-Wood) - HTTP - Add --http-headers flag for setting arbitrary headers (Nick Craig-Wood) - Jottacloud - Use new api for retrieving internal username (buengese) - Refactor configuration and minor cleanup (buengese) - Koofr - Support setting modification times on Koofr backend. (jaKa) - Opendrive - Refactor to use existing lib/rest facilities for uploads (Nick Craig-Wood) - Qingstor - Upgrade to v3 SDK and fix listing loop (Nick Craig-Wood) - Make all operations work from the root (Nick Craig-Wood) - S3 - Add INTELLIGENT_TIERING storage class (Matti Niemenmaa) - Make all operations work from the root (Nick Craig-Wood) - SFTP - Add missing interface check and fix About (Nick Craig-Wood) - Completely ignore all modtime checks if SetModTime=false (Jon Fautley) - Support md5/sha1 with rsync.net (Nick Craig-Wood) - Save the md5/sha1 command in use to the config file for efficiency (Nick Craig-Wood) - Opt-in support for diffie-hellman-group-exchange-sha256 and diffie-hellman-group-exchange-sha1 (Yi FU) - Swift - Use FixRangeOption to fix 0 length files via the VFS (Nick Craig-Wood) - Fix upload when using no_chunk to return the correct size (Nick Craig-Wood) - Make all operations work from the root (Nick Craig-Wood) - Fix segments leak during failed large file uploads. (nguyenhuuluan434) - WebDAV - Add --webdav-bearer-token-command (Nick Craig-Wood) - Refresh token when it expires with --webdav-bearer-token-command (Nick Craig-Wood) - Add docs for using bearer_token_command with oidc-agent (Paul Millar) v1.48.0 - 2019-06-15 - New commands - serve sftp: Serve an rclone remote over SFTP (Nick Craig-Wood) - New Features - Multi threaded downloads to local storage (Nick Craig-Wood) - controlled with --multi-thread-cutoff and --multi-thread-streams - Use rclone.conf from rclone executable directory to enable portable use (albertony) - Allow sync of a file and a directory with the same name (forgems) - this is common on bucket based remotes, eg s3, gcs - Add --ignore-case-sync for forced case insensitivity (garry415) - Implement --stats-one-line-date and --stats-one-line-date-format (Peter Berbec) - Log an ERROR for all commands which exit with non-zero status (Nick Craig-Wood) - Use go-homedir to read the home directory more reliably (Nick Craig-Wood) - Enable creating encrypted config through external script invocation (Wojciech Smigielski) - build: Drop support for go1.8 (Nick Craig-Wood) - config: Make config create/update encrypt passwords where necessary (Nick Craig-Wood) - copyurl: Honor --no-check-certificate (Stefan Breunig) - install: Linux skip man pages if no mandb (didil) - lsf: Support showing the Tier of the object (Nick Craig-Wood) - lsjson - Added EncryptedPath to output (calisro) - Support showing the Tier of the object (Nick Craig-Wood) - Add IsBucket field for bucket based remote listing of the root (Nick Craig-Wood) - rc - Add --loopback flag to run commands directly without a server (Nick Craig-Wood) - Add operations/fsinfo: Return information about the remote (Nick Craig-Wood) - Skip auth for OPTIONS request (Nick Craig-Wood) - cmd/providers: Add DefaultStr, ValueStr and Type fields (Nick Craig-Wood) - jobs: Make job expiry timeouts configurable (Aleksandar Jankovic) - serve dlna reworked and improved (Dan Walters) - serve ftp: add --ftp-public-ip flag to specify public IP (calistri) - serve restic: Add support for --private-repos in serve restic (Florian Apolloner) - serve webdav: Combine serve webdav and serve http (Gary Kim) - size: Ignore negative sizes when calculating total (Garry McNulty) - Bug Fixes - Make move and copy individual files obey --backup-dir (Nick Craig-Wood) - If --ignore-checksum is in effect, don't calculate checksum (Nick Craig-Wood) - moveto: Fix case-insensitive same remote move (Gary Kim) - rc: Fix serving bucket based objects 
with --rc-serve (Nick Craig-Wood) - serve webdav: Fix serveDir not being updated with changes from webdav (Gary Kim) - Mount - Fix poll interval documentation (Animosity022) - VFS - Make WriteAt for non cached files work with non-sequential writes (Nick Craig-Wood) - Local - Only calculate the required hashes for big speedup (Nick Craig-Wood) - Log errors when listing instead of returning an error (Nick Craig-Wood) - Fix preallocate warning on Linux with ZFS (Nick Craig-Wood) - Crypt - Make rclone dedupe work through crypt (Nick Craig-Wood) - Fix wrapping of ChangeNotify to decrypt directories properly (Nick Craig-Wood) - Support PublicLink (rclone link) of underlying backend (Nick Craig-Wood) - Implement Optional methods SetTier, GetTier (Nick Craig-Wood) - B2 - Implement server side copy (Nick Craig-Wood) - Implement SetModTime (Nick Craig-Wood) - Drive - Fix move and copy from TeamDrive to GDrive (Fionera) - Add notes that cleanup works in the background on drive (Nick Craig-Wood) - Add --drive-server-side-across-configs to default back to old server side copy semantics by default (Nick Craig-Wood) - Add --drive-size-as-quota to show storage quota usage for file size (Garry McNulty) - FTP - Add FTP List timeout (Jeff Quinn) - Add FTP over TLS support (Gary Kim) - Add --ftp-no-check-certificate option for FTPS (Gary Kim) - Google Cloud Storage - Fix upload errors when uploading pre 1970 files (Nick Craig-Wood) - Jottacloud - Add support for selecting device and mountpoint. (buengese) - Mega - Add cleanup support (Gary Kim) - Onedrive - More accurately check if root is found (Cnly) - S3 - Support S3 Accelerated endpoints with --s3-use-accelerate-endpoint (Nick Craig-Wood) - Add config info for Wasabi's EU Central endpoint (Robert Marko) - Make SetModTime work for GLACIER while syncing (Philip Harvey) - SFTP - Add About support (Gary Kim) - Fix about parsing of df results so it can cope with -ve results (Nick Craig-Wood) - Send custom client version and debug server version (Nick Craig-Wood) - WebDAV - Retry on 423 Locked errors (Nick Craig-Wood) v1.47.0 - 2019-04-13 - New backends - Backend for Koofr cloud storage service. (jaKa) - New Features - Resume downloads if the reader fails in copy (Nick Craig-Wood) - this means rclone will restart transfers if the source has an error - this is most useful for downloads or cloud to cloud copies - Use --fast-list for listing operations where it won't use more memory (Nick Craig-Wood) - this should speed up the following operations on remotes which support ListR - dedupe, serve restic lsf, ls, lsl, lsjson, lsd, md5sum, sha1sum, hashsum, size, delete, cat, settier - use --disable ListR to get old behaviour if required - Make --files-from traverse the destination unless --no-traverse is set (Nick Craig-Wood) - this fixes --files-from with Google drive and excessive API use in general. 
- Make server side copy account bytes and obey --max-transfer (Nick Craig-Wood) - Add --create-empty-src-dirs flag and default to not creating empty dirs (ishuah) - Add client side TLS/SSL flags --ca-cert/--client-cert/--client-key (Nick Craig-Wood) - Implement --suffix-keep-extension for use with --suffix (Nick Craig-Wood) - build: - Switch to semver compliant version tags to be go modules compliant (Nick Craig-Wood) - Update to use go1.12.x for the build (Nick Craig-Wood) - serve dlna: Add connection manager service description to improve compatibility (Dan Walters) - lsf: Add 'e' format to show encrypted names and 'o' for original IDs (Nick Craig-Wood) - lsjson: Added --files-only and --dirs-only flags (calistri) - rc: Implement operations/publiclink the equivalent of rclone link (Nick Craig-Wood)
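A hedged sketch of the lsf format letters and the rc equivalent of rclone link mentioned above (remote names and paths are placeholders; the rc call assumes a running rclone rc server, and the parameter names are taken from the rc docs rather than guaranteed here):

    # list path ('p'), encrypted name ('e') and original ID ('o') columns
    rclone lsf --format "peo" remote:dir
    # create a public link via the remote control API
    rclone rc operations/publiclink fs=remote: remote=path/to/file.txt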
- Bug Fixes - accounting: Fix total ETA when --stats-unit bits is in effect (Nick Craig-Wood) - Bash TAB completion - Use private custom func to fix clash between rclone and kubectl (Nick Craig-Wood) - Fix for remotes with underscores in their names (Six) - Fix completion of remotes (Florian Gamböck) - Fix autocompletion of remote paths with spaces (Danil Semelenov) - serve dlna: Fix root XML service descriptor (Dan Walters) - ncdu: Fix display corruption with Chinese characters (Nick Craig-Wood) - Add SIGTERM to signals which run the exit handlers on unix (Nick Craig-Wood) - rc: Reload filter when the options are set via the rc (Nick Craig-Wood) - VFS / Mount - Fix FreeBSD: Ignore Truncate if called with no readers and already the correct size (Nick Craig-Wood) - Read directory and check for a file before mkdir (Nick Craig-Wood) - Shorten the locking window for vfs/refresh (Nick Craig-Wood) - Azure Blob - Enable MD5 checksums when uploading files bigger than the "Cutoff" (Dr.Rx) - Fix SAS URL support (Nick Craig-Wood) - B2 - Allow manual configuration of backblaze downloadUrl (Vince) - Ignore already_hidden error on remove (Nick Craig-Wood) - Ignore malformed src_last_modified_millis (Nick Craig-Wood) - Drive - Add --skip-checksum-gphotos to ignore incorrect checksums on Google Photos (Nick Craig-Wood) - Allow server side move/copy between different remotes. (Fionera) - Add docs on team drives and --fast-list eventual consistency (Nestar47) - Fix imports of text files (Nick Craig-Wood) - Fix range requests on 0 length files (Nick Craig-Wood) - Fix creation of duplicates with server side copy (Nick Craig-Wood) - Dropbox - Retry blank errors to fix long listings (Nick Craig-Wood) - FTP - Add --ftp-concurrency to limit maximum number of connections (Nick Craig-Wood) - Google Cloud Storage - Fall back to default application credentials (marcintustin) - Allow bucket policy only buckets (Nick Craig-Wood) - HTTP - Add --http-no-slash for websites with directories with no slashes (Nick Craig-Wood) - Remove duplicates from listings (Nick Craig-Wood) - Fix socket leak on 404 errors (Nick Craig-Wood) - Jottacloud - Fix token refresh (Sebastian Bünger) - Add device registration (Oliver Heyme) - Onedrive - Implement graceful cancel of multipart uploads if rclone is interrupted (Cnly) - Always add trailing colon to path when addressing items (Cnly) - Return errors instead of panic for invalid uploads (Fabian Möller) - S3 - Add support for "Glacier Deep Archive" storage class (Manu) - Update Dreamhost endpoint (Nick Craig-Wood) - Note incompatibility with CEPH Jewel (Nick Craig-Wood) - SFTP - Allow custom ssh client config (Alexandru Bumbacea) - Swift - Obey Retry-After to enable OVH restore from cold storage (Nick Craig-Wood) - Work around token expiry on CEPH (Nick Craig-Wood) - WebDAV - Allow IsCollection property to be integer or boolean (Nick Craig-Wood) - Fix race when creating directories (Nick Craig-Wood) - Fix About/df when reading the available/total returns 0 (Nick Craig-Wood) v1.46 - 2019-02-09 - New backends - Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick Craig-Wood) - New commands - serve dlna: serves a remote via DLNA for the local network (nicolov) - New Features - copy, move: Restore deprecated --no-traverse flag (Nick Craig-Wood) - This is useful for when transferring a small number of files into a large destination - genautocomplete: Add remote path completion for bash completion (Christopher Peterson & Danil Semelenov) - Buffer memory handling reworked to return memory to the OS better (Nick Craig-Wood) - Buffer recycling library to replace sync.Pool - Optionally use memory mapped memory for better memory shrinking - Enable with --use-mmap if having memory problems - not default yet - Parallelise reading of files specified by --files-from (Nick Craig-Wood) - check: Add stats showing total files matched. 
(Dario Guzik) - Allow rename/delete open files under Windows (Nick Craig-Wood) - lsjson: Use exactly the correct number of decimal places in the seconds (Nick Craig-Wood) - Add cookie support with cmdline switch --use-cookies for all HTTP based remotes (qip) - Warn if --checksum is set but there are no hashes available (Nick Craig-Wood) - Rework rate limiting (pacer) to be more accurate and allow bursting (Nick Craig-Wood) - Improve error reporting for too many/few arguments in commands (Nick Craig-Wood) - listremotes: Remove -l short flag as it conflicts with the new global flag (weetmuts) - Make http serving with auth generate INFO messages on auth fail (Nick Craig-Wood) - Bug Fixes - Fix layout of stats (Nick Craig-Wood) - Fix --progress crash under Windows Jenkins (Nick Craig-Wood) - Fix transfer of google/onedrive docs by calling Rcat in Copy when size is -1 (Cnly) - copyurl: Fix checking of --dry-run (Denis Skovpen) - Mount - Check that mountpoint and local directory to mount don't overlap (Nick Craig-Wood) - Fix mount size under 32 bit Windows (Nick Craig-Wood) - VFS - Implement renaming of directories for backends without DirMove (Nick Craig-Wood) - now all backends except b2 support renaming directories - Implement --vfs-cache-max-size to limit the total size of the cache (Nick Craig-Wood) - Add --dir-perms and --file-perms flags to set default permissions (Nick Craig-Wood) - Fix deadlock on concurrent operations on a directory (Nick Craig-Wood) - Fix deadlock between RWFileHandle.close and File.Remove (Nick Craig-Wood) - Fix renaming/deleting open files with cache mode "writes" under Windows (Nick Craig-Wood) - Fix panic on rename with --dry-run set (Nick Craig-Wood) - Fix vfs/refresh with recurse=true needing the --fast-list flag - Local - Add support for -l/--links (symbolic link translation) (yair@unicorn) - this works by showing links as link.rclonelink - see local backend docs for more info - this errors if used with -L/--copy-links - Fix renaming/deleting open files on Windows (Nick Craig-Wood) - Crypt - Check for maximum length before decrypting filename to fix panic (Garry McNulty) - Azure Blob - Allow building azureblob backend on *BSD (themylogin) - Use the rclone HTTP client to support --dump headers, --tpslimit etc (Nick Craig-Wood) - Use the s3 pacer for 0 delay in non error conditions (Nick Craig-Wood) - Ignore directory markers (Nick Craig-Wood) - Stop Mkdir attempting to create existing containers (Nick Craig-Wood) - B2 - cleanup: will remove unfinished large files >24hrs old (Garry McNulty) - For a bucket limited application key check the bucket name (Nick Craig-Wood) - before this, rclone would use the authorised bucket regardless of what you put on the command line - Added --b2-disable-checksum flag (Wojciech Smigielski) - this enables large files to be uploaded without a SHA-1 hash for speed reasons - Drive - Set default pacer to 100ms for 10 tps (Nick Craig-Wood) - This fits the Google defaults much better and reduces the 403 errors massively - Add --drive-pacer-min-sleep and --drive-pacer-burst to control the pacer - Improve ChangeNotify support for items with multiple parents (Fabian Möller) - Fix ListR for items with multiple parents - this fixes oddities with vfs/refresh (Fabian Möller) - Fix using --drive-impersonate and appfolders (Nick Craig-Wood) - Fix google docs in rclone mount for some (not all) applications (Nick Craig-Wood) - Dropbox - Retry-After support for Dropbox backend (Mathieu Carbou) - FTP - Wait for 60 seconds for a connection to Close 
then declare it dead (Nick Craig-Wood) - helps with indefinite hangs on some FTP servers - Google Cloud Storage - Update google cloud storage endpoints (weetmuts) - HTTP - Add an example with username and password which is supported but wasn't documented (Nick Craig-Wood) - Fix backend with --files-from and non-existent files (Nick Craig-Wood) - Hubic - Make error message more informative if authentication fails (Nick Craig-Wood) - Jottacloud - Resume and deduplication support (Oliver Heyme) - Use token auth for all API requests; don't store password anymore (Sebastian Bünger) - Add support for 2-factor authentication (Sebastian Bünger) - Mega - Implement v2 account login which fixes logins for newer Mega accounts (Nick Craig-Wood) - Return error if an unknown length file is attempted to be uploaded (Nick Craig-Wood) - Add new error codes for better error reporting (Nick Craig-Wood) - Onedrive - Fix broken support for "shared with me" folders (Alex Chen) - Fix root ID not normalised (Cnly) - Return err instead of panic on unknown-sized uploads (Cnly) - Qingstor - Fix goroutine leak on multipart upload errors (Nick Craig-Wood) - Add upload chunk size/concurrency/cutoff control (Nick Craig-Wood) - Default --qingstor-upload-concurrency to 1 to work around bug (Nick Craig-Wood) - S3 - Implement --s3-upload-cutoff for single part uploads below this (Nick Craig-Wood) - Change --s3-upload-concurrency default to 4 to increase performance (Nick Craig-Wood) - Add --s3-bucket-acl to control bucket ACL (Nick Craig-Wood) - Auto detect region for buckets on operation failure (Nick Craig-Wood) - Add GLACIER storage class (William Cocker) - Add Scaleway to s3 documentation (Rémy Léone) - Add AWS endpoint eu-north-1 (weetmuts) - SFTP - Add support for PEM encrypted private keys (Fabian Möller) - Add option to force the usage of an ssh-agent (Fabian Möller) - Perform environment variable expansion on key-file (Fabian Möller) - Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood) - Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood) - Fix error on dangling symlinks (Nick Craig-Wood) - Swift - Add --swift-no-chunk to disable segmented uploads in rcat/mount (Nick Craig-Wood) - Introduce application credential auth support (kayrus) - Fix memory usage by slimming Object (Nick Craig-Wood) - Fix extra requests on upload (Nick Craig-Wood) - Fix reauth on big files (Nick Craig-Wood) - Union - Fix poll-interval not working (Nick Craig-Wood) - WebDAV - Support About which means rclone mount will show the correct disk size (Nick Craig-Wood) - Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick Craig-Wood) - Fail soft on time parsing errors (Nick Craig-Wood) - Fix infinite loop on failed directory creation (Nick Craig-Wood) - Fix identification of directories for Bitrix Site Manager (Nick Craig-Wood) - Fix upload of 0 length files on some servers (Nick Craig-Wood) - Fix if MKCOL fails with 423 Locked assume the directory exists (Nick Craig-Wood) v1.45 - 2018-11-24 - New backends - The Yandex backend was re-written - see below for details (Sebastian Bünger) - New commands - rcd: New command just to serve the remote control API (Nick Craig-Wood) - New Features - The remote control API (rc) was greatly expanded to allow full control over rclone (Nick Craig-Wood) - sensitive operations require authorization or the --rc-no-auth flag - config/* operations to configure rclone - options/* for reading/setting command line flags - operations/* for all low level operations, eg copy 
file, list directory - sync/* for sync, copy and move - --rc-files flag to serve files on the rc http server - this is for building web native GUIs for rclone - Optionally serving objects on the rc http server - Ensure rclone fails to start up if the --rc port is in use already - See the rc docs for more info - sync/copy/move - Make --files-from only read the objects specified and don't scan directories (Nick Craig-Wood) - This is a huge speed improvement for destinations with lots of files - filter: Add --ignore-case flag (Nick Craig-Wood) - ncdu: Add remove function ('d' key) (Henning Surmeier) - rc command - Add --json flag for structured JSON input (Nick Craig-Wood) - Add --user and --pass flags and interpret --rc-user, --rc-pass, --rc-addr (Nick Craig-Wood) - build - Require go1.8 or later for compilation (Nick Craig-Wood) - Enable softfloat on MIPS arch (Scott Edlund) - Integration test framework revamped with a better report and better retries (Nick Craig-Wood) - Bug Fixes - cmd: Make --progress update the stats correctly at the end (Nick Craig-Wood) - config: Create config directory on save if it is missing (Nick Craig-Wood) - dedupe: Check for existing filename before renaming a dupe file (ssaqua) - move: Don't create directories with --dry-run (Nick Craig-Wood) - operations: Fix Purge and Rmdirs when dir is not the root (Nick Craig-Wood) - serve http/webdav/restic: Ensure rclone exits if the port is in use (Nick Craig-Wood) - Mount - Make --volname work for Windows and macOS (Nick Craig-Wood) - Azure Blob - Avoid context deadline exceeded error by setting a large TryTimeout value (brused27) - Fix erroneous Rmdir error "directory not empty" (Nick Craig-Wood) - Wait for up to 60s to create a just deleted container (Nick Craig-Wood) - Dropbox - Add dropbox impersonate support (Jake Coggiano) - Jottacloud - Fix bug in --fast-list handing of empty folders (albertony) - Opendrive - Fix transfer of files with + and & in (Nick Craig-Wood) - Fix retries of upload chunks (Nick Craig-Wood) - S3 - Set ACL for server side copies to that provided by the user (Nick Craig-Wood) - Fix role_arn, credential_source, ... 
(Erik Swanson) - Add config info for Wasabi's US-West endpoint (Henry Ptasinski) - SFTP - Ensure file hash checking is really disabled (Jon Fautley) - Swift - Add pacer for retries to make swift more reliable (Nick Craig-Wood) - WebDAV - Add Content-Type to PUT requests (Nick Craig-Wood) - Fix config parsing so --webdav-user and --webdav-pass flags work (Nick Craig-Wood) - Add RFC3339 date format (Ralf Hemberger) - Yandex - The yandex backend was re-written (Sebastian Bünger) - This implements low level retries (Sebastian Bünger) - Copy, Move, DirMove, PublicLink and About optional interfaces (Sebastian Bünger) - Improved general error handling (Sebastian Bünger) - Removed ListR for now due to inconsistent behaviour (Sebastian Bünger) v1.44 - 2018-10-15 - New commands - serve ftp: Add ftp server (Antoine GIRARD) - settier: perform storage tier changes on supported remotes (sandeepkru) - New Features - Reworked command line help - Make default help less verbose (Nick Craig-Wood) - Split flags up into global and backend flags (Nick Craig-Wood) - Implement specialised help for flags and backends (Nick Craig-Wood) - Show URL of backend help page when starting config (Nick Craig-Wood) - stats: Long names now split in center (Joanna Marek) - Add --log-format flag for more control over log output (dcpu) - rc: Add support for OPTIONS and basic CORS (frenos) - stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes) - Bug Fixes - Fix -P not ending with a new line (Nick Craig-Wood) - config: don't create default config dir when user supplies --config (albertony) - Don't print non-ASCII characters with --progress on windows (Nick Craig-Wood) - Correct logs for excluded items (ssaqua) - Mount - Remove EXPERIMENTAL tags (Nick Craig-Wood) - VFS - Fix race condition detected by serve ftp tests (Nick Craig-Wood) - Add vfs/poll-interval rc command (Fabian Möller) - Enable rename for nearly all remotes using server side Move or Copy (Nick Craig-Wood) - Reduce directory cache cleared by poll-interval (Fabian Möller) - Remove EXPERIMENTAL tags (Nick Craig-Wood) - Local - Skip bad symlinks in dir listing with -L enabled (Cédric Connes) - Preallocate files on Windows to reduce fragmentation (Nick Craig-Wood) - Preallocate files on linux with fallocate(2) (Nick Craig-Wood) - Cache - Add cache/fetch rc function (Fabian Möller) - Fix worker scale down (Fabian Möller) - Improve performance by not sending info requests for cached chunks (dcpu) - Fix error return value of cache/fetch rc method (Fabian Möller) - Documentation fix for cache-chunk-total-size (Anagh Kumar Baranwal) - Preserve leading / in wrapped remote path (Fabian Möller) - Add plex_insecure option to skip certificate validation (Fabian Möller) - Remove entries that no longer exist in the source (dcpu) - Crypt - Preserve leading / in wrapped remote path (Fabian Möller) - Alias - Fix handling of Windows network paths (Nick Craig-Wood) - Azure Blob - Add --azureblob-list-chunk parameter (Santiago Rodríguez) - Implemented settier command support on azureblob remote. (sandeepkru) - Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood) - Box - Implement link sharing. 
- Drive
  - Add --drive-import-formats - google docs can now be imported (Fabian Möller)
    - Rewrite mime type and extension handling (Fabian Möller)
    - Add document links (Fabian Möller)
    - Add support for multipart document extensions (Fabian Möller)
    - Add support for apps-script to json export (Fabian Möller)
    - Fix escaped chars in documents during list (Fabian Möller)
  - Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller)
  - Improve directory notifications in ChangeNotify (Fabian Möller)
  - When listing team drives in config, continue on failure (Nick Craig-Wood)
- FTP
  - Add a small pause after failed upload before deleting file (Nick Craig-Wood)
- Google Cloud Storage
  - Fix service_account_file being ignored (Fabian Möller)
- Jottacloud
  - Minor improvement in quota info (omit if unlimited) (albertony)
  - Add --fast-list support (albertony)
  - Add permanent delete support: --jottacloud-hard-delete (albertony)
  - Add link sharing support (albertony)
  - Fix handling of reserved characters. (Sebastian Bünger)
  - Fix socket leak on Object.Remove (Nick Craig-Wood)
- Onedrive
  - Rework to support Microsoft Graph (Cnly)
    - NB this will require re-authenticating the remote
  - Removed upload cutoff and always do session uploads (Oliver Heyme)
  - Use single-part upload for empty files (Cnly)
  - Fix new fields not saved when editing old config (Alex Chen)
  - Fix sometimes special chars in filenames not replaced (Alex Chen)
  - Ignore OneNote files by default (Alex Chen)
  - Add link sharing support (jackyzy823)
- S3
  - Use custom pacer, to retry operations when reasonable (Craig Miskell)
  - Use configured server-side-encryption and storage class options when calling CopyObject() (Paul Kohout)
  - Make --s3-v2-auth flag (Nick Craig-Wood)
  - Fix v2 auth on files with spaces (Nick Craig-Wood)
- Union
  - Implement union backend which reads from multiple backends (Felix Brucker)
  - Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood)
  - Fix ChangeNotify to support multiple remotes (Fabian Möller)
  - Fix --backup-dir on union backend (Nick Craig-Wood)
- WebDAV
  - Add another time format (Nick Craig-Wood)
  - Add a small pause after failed upload before deleting file (Nick Craig-Wood)
  - Add workaround for missing mtime (buergi)
  - Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
- Yandex
  - Remove redundant nil checks (teresy)

v1.43.1 - 2018-09-07

Point release to fix hubic and azureblob backends.

- Bug Fixes
  - ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
  - cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)
  - docs: Tidy website display (Anagh Kumar Baranwal)
- Azure Blob:
  - Fix multi-part uploads. (sandeepkru)
- Hubic
  - Fix uploads (Nick Craig-Wood)
  - Retry auth fetching if it fails to make hubic more reliable (Nick Craig-Wood)

v1.43 - 2018-09-01

- New backends
  - Jottacloud (Sebastian Bünger)
- New commands
  - copyurl: copies a URL to a remote (Denis)
- New Features
  - Reworked config for backends (Nick Craig-Wood)
    - All backend config can now be supplied by command line, env var or config file
    - Advanced section in the config wizard for the optional items
    - A large step towards rclone backends being usable in other go software
    - Allow on the fly remotes with :backend: syntax
  - Stats revamp
    - Add --progress/-P flag to show interactive progress (Nick Craig-Wood)
    - Show the total progress of the sync in the stats (Nick Craig-Wood)
    - Add --stats-one-line flag for single line stats (Nick Craig-Wood)
  - Added weekday schedule into --bwlimit (Mateusz)
  - lsjson: Add option to show the original object IDs (Fabian Möller)
  - serve webdav: Make Content-Type without reading the file and add --etag-hash (Nick Craig-Wood)
  - build
    - Build macOS with native compiler (Nick Craig-Wood)
    - Update to use go1.11 for the build (Nick Craig-Wood)
  - rc
    - Added core/stats to return the stats (reddi1)
  - version --check: Prints the current release and beta versions (Nick Craig-Wood)
- Bug Fixes
  - accounting
    - Fix time to completion estimates (Nick Craig-Wood)
    - Fix moving average speed for file stats (Nick Craig-Wood)
  - config: Fix error reading password from piped input (Nick Craig-Wood)
  - move: Fix --delete-empty-src-dirs flag to delete all empty dirs on move (ishuah)
- Mount
  - Implement --daemon-timeout flag for OSXFUSE (Nick Craig-Wood)
  - Fix mount --daemon not working with encrypted config (Alex Chen)
  - Clip the number of blocks to 2^32-1 on macOS - fixes borg backup (Nick Craig-Wood)
- VFS
  - Enable vfs-read-chunk-size by default (Fabian Möller)
  - Add the vfs/refresh rc command (Fabian Möller)
  - Add non recursive mode to vfs/refresh rc command (Fabian Möller)
  - Try to seek buffer on read only files (Fabian Möller)
- Local
  - Fix crash when deprecated --local-no-unicode-normalization is supplied (Nick Craig-Wood)
  - Fix mkdir error when trying to copy files to the root of a drive on windows (Nick Craig-Wood)
- Cache
  - Fix nil pointer deref when using lsjson on cached directory (Nick Craig-Wood)
  - Fix nil pointer deref for occasional crash on playback (Nick Craig-Wood)
- Crypt
  - Fix accounting when checking hashes on upload (Nick Craig-Wood)
- Amazon Cloud Drive
  - Make very clear in the docs that rclone has no ACD keys (Nick Craig-Wood)
- Azure Blob
  - Add connection string and SAS URL auth (Nick Craig-Wood)
  - List the container to see if it exists (Nick Craig-Wood)
  - Port new Azure Blob Storage SDK (sandeepkru)
  - Added blob tier, tier between Hot, Cool and Archive. (sandeepkru)
  - Remove leading / from paths (Nick Craig-Wood)
- B2
  - Support Application Keys (Nick Craig-Wood)
  - Remove leading / from paths (Nick Craig-Wood)
- Box
  - Fix upload of > 2GB files on 32 bit platforms (Nick Craig-Wood)
  - Make --box-commit-retries flag defaulting to 100 to fix large uploads (Nick Craig-Wood)
- Drive
  - Add --drive-keep-revision-forever flag (lewapm)
  - Handle gdocs when filtering file names in list (Fabian Möller)
  - Support using --fast-list for large speedups (Fabian Möller)
- FTP
  - Fix Put mkParentDir failed: 521 for BunnyCDN (Nick Craig-Wood)
- Google Cloud Storage
  - Fix index out of range error with --fast-list (Nick Craig-Wood)
- Jottacloud
  - Fix MD5 error check (Oliver Heyme)
  - Handle empty time values (Martin Polden)
  - Calculate missing MD5s (Oliver Heyme)
  - Docs, fixes and tests for MD5 calculation (Nick Craig-Wood)
  - Add optional MimeTyper interface. (Sebastian Bünger)
  - Implement optional About interface (for df support). (Sebastian Bünger)
- Mega
  - Wait for events instead of arbitrary sleeping (Nick Craig-Wood)
  - Add --mega-hard-delete flag (Nick Craig-Wood)
  - Fix failed logins with upper case chars in email (Nick Craig-Wood)
- Onedrive
  - Shared folder support (Yoni Jah)
  - Implement DirMove (Cnly)
  - Fix rmdir sometimes deleting directories with contents (Nick Craig-Wood)
- Pcloud
  - Delete half uploaded files on upload error (Nick Craig-Wood)
- Qingstor
  - Remove leading / from paths (Nick Craig-Wood)
- S3
  - Fix index out of range error with --fast-list (Nick Craig-Wood)
  - Add --s3-force-path-style (Nick Craig-Wood)
  - Add support for KMS Key ID (bsteiss)
  - Remove leading / from paths (Nick Craig-Wood)
- Swift
  - Add storage_policy (Ruben Vandamme)
  - Make it so just storage_url or auth_token can be overridden (Nick Craig-Wood)
  - Fix server side copy bug for unusual file names (Nick Craig-Wood)
  - Remove leading / from paths (Nick Craig-Wood)
- WebDAV
  - Ensure we call MKCOL with a URL with a trailing / for QNAP interop (Nick Craig-Wood)
  - If root ends with / then don't check if it is a file (Nick Craig-Wood)
  - Don't accept redirects when reading metadata (Nick Craig-Wood)
  - Add bearer token (Macaroon) support for dCache (Nick Craig-Wood)
  - Document dCache and Macaroons (Onno Zweers)
  - Sharepoint recursion with different depth (Henning)
  - Attempt to remove failed uploads (Nick Craig-Wood)
- Yandex
  - Fix listing/deleting files in the root (Nick Craig-Wood)

v1.42 - 2018-06-16

- New backends
  - OpenDrive (Oliver Heyme, Jakub Karlicek, ncw)
- New commands
  - deletefile command (Filip Bartodziej)
- New Features
  - copy, move: Copy single files directly, don't use --files-from work-around
    - this makes them much more efficient
  - Implement --max-transfer flag to quit transferring at a limit
    - make exit code 8 for --max-transfer exceeded
  - copy: copy empty source directories to destination (Ishuah Kariuki)
  - check: Add --one-way flag (Kasper Byrdal Nielsen)
  - Add siginfo handler for macOS for ctrl-T stats (kubatasiemski)
  - rc
    - add core/gc to run a garbage collection on demand
    - enable go profiling by default on the --rc port
    - return error from remote on failure
  - lsf
    - Add --absolute flag to add a leading / onto path names
    - Add --csv flag for compliant CSV output
    - Add 'm' format specifier to show the MimeType
    - Implement 'i' format for showing object ID
  - lsjson
    - Add MimeType to the output
    - Add ID field to output to show Object ID
  - Add --retries-sleep flag (Benjamin Joseph Dag)
  - Oauth tidy up web page and error handling (Henning Surmeier)
- Bug Fixes
  - Password prompt output with --log-file fixed for unix (Filip Bartodziej)
  - Calculate ModifyWindow each time on the fly to fix various problems (Stefan Breunig)
- Mount
  - Only print "File.rename error" if there actually is an error (Stefan Breunig)
  - Delay rename if file has open writers instead of failing outright (Stefan Breunig)
  - Ensure atexit gets run on interrupt
  - macOS enhancements
    - Make --noappledouble --noapplexattr
    - Add --volname flag and remove special chars from it
    - Make Get/List/Set/Remove xattr return ENOSYS for efficiency
    - Make --daemon work for macOS without CGO
- VFS
  - Add --vfs-read-chunk-size and --vfs-read-chunk-size-limit (Fabian Möller)
  - Fix ChangeNotify for new or changed folders (Fabian Möller)
- Local
  - Fix symlink/junction point directory handling under Windows
    - NB you will need to add -L to your command line to copy files with reparse points
- Cache
  - Add non cached dirs on notifications (Remus Bunduc)
  - Allow root to be expired from rc (Remus Bunduc)
  - Clean remaining empty folders from temp upload path (Remus Bunduc)
  - Cache lists using batch writes (Remus Bunduc)
  - Use secure websockets for HTTPS Plex addresses (John Clayton)
  - Reconnect plex websocket on failures (Remus Bunduc)
  - Fix panic when running without plex configs (Remus Bunduc)
  - Fix root folder caching (Remus Bunduc)
- Crypt
  - Check the crypted hash of files when uploading for extra data security
- Dropbox
  - Make Dropbox for business folders accessible using an initial / in the path
- Google Cloud Storage
  - Low level retry all operations if necessary
- Google Drive
  - Add --drive-acknowledge-abuse to download flagged files
  - Add --drive-alternate-export to fix large doc export
  - Don't attempt to choose Team Drives when using rclone config create
  - Fix change list polling with team drives
  - Fix ChangeNotify for folders (Fabian Möller)
  - Fix about (and df on a mount) for team drives
- Onedrive
  - Errorhandler for onedrive for business requests (Henning Surmeier)
- S3
  - Adjust upload concurrency with --s3-upload-concurrency (themylogin)
  - Fix --s3-chunk-size which was always using the minimum
- SFTP
  - Add --ssh-path-override flag (Piotr Oleszczyk)
  - Fix slow downloads for long latency connections
- Webdav
  - Add workarounds for biz.mail.ru
  - Ignore Reason-Phrase in status line to fix 4shared (Rodrigo)
  - Better error message generation

v1.41 - 2018-04-28

- New backends
  - Mega support added
  - Webdav now supports SharePoint cookie authentication (hensur)
- New commands
  - link: create public link to files and folders (Stefan Breunig)
  - about: gets quota info from a remote (a-roussos, ncw)
  - hashsum: a generic tool for any hash to produce md5sum like output
- New Features
  - lsd: Add -R flag and fix and update docs for all ls commands
  - ncdu: added a "refresh" key - CTRL-L (Keith Goldfarb)
  - serve restic: Add append-only mode (Steve Kriss)
  - serve restic: Disallow overwriting files in append-only mode (Alexander Neumann)
  - serve restic: Print actual listener address (Matt Holt)
  - size: Add --json flag (Matthew Holt)
  - sync: implement --ignore-errors (Mateusz Pabian)
  - dedupe: Add dedupe largest functionality (Richard Yang)
  - fs: Extend SizeSuffix to include TB and PB for rclone about
  - fs: add --dump goroutines and --dump openfiles for debugging
  - rc: implement core/memstats to print internal memory usage info
  - rc: new call rc/pid (Michael P. Dubner)
- Compile
  - Drop support for go1.6
- Release
  - Fix make tarball (Chih-Hsuan Yen)
- Bug Fixes
  - filter: fix --min-age and --max-age together check
  - fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport
  - lsd,lsf: make sure all times we output are in local time
  - rc: fix setting bwlimit to unlimited
  - rc: take note of the --rc-addr flag too as per the docs
- Mount
  - Use About to return the correct disk total/used/free (eg in df)
  - Set --attr-timeout default to 1s - fixes:
    - rclone using too much memory
    - rclone not serving files to samba
    - excessive time listing directories
  - Fix df -i (upstream fix)
- VFS
  - Filter files . and .. from directory listing
  - Only make the VFS cache if --vfs-cache-mode > Off
- Local
  - Add --local-no-check-updated to disable updated file checks
  - Retry remove on Windows sharing violation error
- Cache
  - Flush the memory cache after close
  - Purge file data on notification
  - Always forget parent dir for notifications
  - Integrate with Plex websocket
  - Add rc cache/stats (seuffert)
  - Add info log on notification
- Box
  - Fix failure reading large directories - parse file/directory size as float
- Dropbox
  - Fix crypt+obfuscate on dropbox
  - Fix repeatedly uploading the same files
- FTP
  - Work around strange response from box FTP server
  - More workarounds for FTP servers to fix mkParentDir error
  - Fix no error on listing non-existent directory
- Google Cloud Storage
  - Add service_account_credentials (Matt Holt)
  - Detect bucket presence by listing it - minimises permissions needed
  - Ignore zero length directory markers
- Google Drive
  - Add service_account_credentials (Matt Holt)
  - Fix directory move leaving a hardlinked directory behind
  - Return proper google errors when Opening files
  - When initialized with a filepath, optional features used incorrect root path (Stefan Breunig)
- HTTP
  - Fix sync for servers which don't return Content-Length in HEAD
- Onedrive
  - Add QuickXorHash support for OneDrive for business
  - Fix socket leak in multipart session upload
- S3
  - Look in S3 named profile files for credentials
  - Add --s3-disable-checksum to disable checksum uploading (Chris Redekop)
  - Hierarchical configuration support (Giri Badanahatti)
  - Add in config for all the supported S3 providers
  - Add One Zone Infrequent Access storage class (Craig Rachel)
  - Add --use-server-modtime support (Peter Baumgartner)
  - Add --s3-chunk-size option to control multipart uploads
  - Ignore zero length directory markers
- SFTP
  - Update docs to match code, fix typos and clarify disable_hashcheck prompt (Michael G. Noll)
  - Update docs with Synology quirks
  - Fail soft with a debug on hash failure
- Swift
  - Add --use-server-modtime support (Peter Baumgartner)
- Webdav
  - Support SharePoint cookie authentication (hensur)
  - Strip leading and trailing / off root

v1.40 - 2018-03-19

- New backends
  - Alias backend to create aliases for existing remote names (Fabian Möller)
- New commands
  - lsf: list for parsing purposes (Jakub Tasiemski)
    - by default this is a simple non recursive list of files and directories
    - it can be configured to add more info in an easy to parse way
  - serve restic: for serving a remote as a Restic REST endpoint
    - This enables restic to use any backends that rclone can access
    - Thanks Alexander Neumann for help, patches and review
  - rc: enable the remote control of a running rclone
    - The running rclone must be started with --rc and related flags.
    - Currently there is support for bwlimit, and flushing for mount and cache.
- New Features
  - --max-delete flag to add a delete threshold (Bjørn Erik Pedersen)
  - All backends now support RangeOption for ranged Open
    - cat: Use RangeOption for limited fetches to make more efficient
    - cryptcheck: make reading of nonce more efficient with RangeOption
  - serve http/webdav/restic
    - support SSL/TLS
    - add --user --pass and --htpasswd for authentication
  - copy/move: detect file size change during copy/move and abort transfer (ishuah)
  - cryptdecode: added option to return encrypted file names. (ishuah)
  - lsjson: add --encrypted to show encrypted name (Jakub Tasiemski)
  - Add --stats-file-name-length to specify the printed file name length for stats (Will Gunn)
- Compile
  - Code base was shuffled and factored
    - backends moved into a backend directory
    - large packages split up
    - See the CONTRIBUTING.md doc for info as to what lives where now
  - Update to using go1.10 as the default go version
  - Implement daily full integration tests
- Release
  - Include a source tarball and sign it and the binaries
  - Sign the git tags as part of the release process
  - Add .deb and .rpm packages as part of the build
  - Make a beta release for all branches on the main repo (but not pull requests)
- Bug Fixes
  - config: fixes errors on non existing config by loading config file only on first access
  - config: retry saving the config after failure (Mateusz)
  - sync: when using --backup-dir don't delete files if we can't set their modtime
    - this fixes odd behaviour with Dropbox and --backup-dir
  - fshttp: fix idle timeouts for HTTP connections
  - serve http: fix serving files with : in - fixes
  - Fix --exclude-if-present to ignore directories which it doesn't have permission for (Iakov Davydov)
  - Make accounting work properly with crypt and b2
  - remove --no-traverse flag because it is obsolete
- Mount
  - Add --attr-timeout flag to control attribute caching in kernel
    - this now defaults to 0 which is correct but less efficient
    - see the mount docs for more info
  - Add --daemon flag to allow mount to run in the background (ishuah)
  - Fix: Return ENOSYS rather than EIO on attempted link
    - This fixes FileZilla accessing an rclone mount served over sftp.
  - Fix setting modtime twice
  - Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
  - Many bugs fixed in the VFS layer - see below
- VFS
  - Many fixes for --vfs-cache-mode writes and above
    - Update cached copy if we know it has changed (fixes stale data)
    - Clean path names before using them in the cache
    - Disable cache cleaner if --vfs-cache-poll-interval=0
    - Fill and clean the cache immediately on startup
  - Fix Windows opening every file when it stats the file
  - Fix applying modtime for an open Write Handle
  - Fix creation of files when truncating
  - Write 0 bytes when flushing unwritten handles to avoid race conditions in FUSE
  - Downgrade "poll-interval is not supported" message to Info
  - Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
- Local
  - Downgrade "invalid cross-device link: trying copy" to debug
  - Make DirMove return fs.ErrorCantDirMove to allow fallback to Copy for cross device
  - Fix race conditions updating the hashes
- Cache
  - Add support for polling - cache will update when remote changes on supported backends
  - Reduce log level for Plex api
  - Fix dir cache issue
  - Implement --cache-db-wait-time flag
  - Improve efficiency with RangeOption and RangeSeek
  - Fix dirmove with temp fs enabled
  - Notify vfs when using temp fs
  - Offline uploading
  - Remote control support for path flushing
- Amazon cloud drive
  - Rclone no longer has any working keys - disable integration tests
  - Implement DirChangeNotify to notify cache/vfs/mount of changes
- Azureblob
  - Don't check for bucket/container presence if listing was OK
    - this makes rclone do one less request per invocation
  - Improve accounting for chunked uploads
- Backblaze B2
  - Don't check for bucket/container presence if listing was OK
    - this makes rclone do one less request per invocation
- Box
  - Improve accounting for chunked uploads
- Dropbox
  - Fix custom oauth client parameters
- Google Cloud Storage
  - Don't check for bucket/container presence if listing was OK
    - this makes rclone do one less request per invocation
- Google Drive
  - Migrate to api v3 (Fabian Möller)
  - Add scope configuration and root folder selection
  - Add --drive-impersonate for service accounts
    - thanks to everyone who tested, explored and contributed docs
  - Add --drive-use-created-date to use created date as modified date (nbuchanan)
  - Request the export formats only when required
    - This makes rclone quicker when there are no google docs
  - Fix finding paths with latin1 chars (a workaround for a drive bug)
  - Fix copying of a single Google doc file
  - Fix --drive-auth-owner-only to look in all directories
- HTTP
  - Fix handling of directories with & in
- Onedrive
  - Removed upload cutoff and always do session uploads
    - this stops the creation of multiple versions on business onedrive
  - Overwrite object size value with real size when reading file. (Victor)
    - this fixes oddities when onedrive misreports the size of images
- Pcloud
  - Remove unused chunked upload flag and code
- Qingstor
  - Don't check for bucket/container presence if listing was OK
    - this makes rclone do one less request per invocation
- S3
  - Support hashes for multipart files (Chris Redekop)
  - Initial support for IBM COS (S3) (Giri Badanahatti)
  - Update docs to discourage use of v2 auth with CEPH and others
  - Don't check for bucket/container presence if listing was OK
    - this makes rclone do one less request per invocation
  - Fix server side copy and set modtime on files with + in
- SFTP
  - Add option to disable remote hash check command execution (Jon Fautley)
  - Add --sftp-ask-password flag to prompt for password when needed (Leo R. Lundgren)
  - Add set_modtime configuration option
  - Fix following of symlinks
  - Fix reading config file outside of Fs setup
  - Fix reading $USER in username fallback not $HOME
  - Fix running under crontab - Use correct OS way of reading username
- Swift
  - Fix refresh of authentication token
    - in v1.39 a bug was introduced which ignored new tokens - this fixes it
  - Fix extra HEAD transaction when uploading a new file
  - Don't check for bucket/container presence if listing was OK
    - this makes rclone do one less request per invocation
- Webdav
  - Add new time formats to support mydrive.ch and others

v1.39 - 2017-12-23

- New backends
  - WebDAV
    - tested with nextcloud, owncloud, put.io and others!
  - Pcloud
  - cache - wraps a cache around other backends (Remus Bunduc)
    - useful in combination with mount
    - NB this feature is in beta so use with care
- New commands
  - serve command with subcommands:
    - serve webdav: this implements a webdav server for any rclone remote.
    - serve http: command to serve a remote over HTTP
  - config: add sub commands for full config file management
    - create/delete/dump/edit/file/password/providers/show/update
  - touch: to create or update the timestamp of a file (Jakub Tasiemski)
- New Features
  - curl install for rclone (Filip Bartodziej)
  - --stats now shows percentage, size, rate and ETA in condensed form (Ishuah Kariuki)
  - --exclude-if-present to exclude a directory if a file is present (Iakov Davydov)
  - rmdirs: add --leave-root flag (lewapm)
  - move: add --delete-empty-src-dirs flag to remove dirs after move (Ishuah Kariuki)
  - Add --dump flag, introduce --dump requests, responses and remove --dump-auth, --dump-filters
    - Obscure X-Auth-Token: from headers when dumping too
  - Document and implement exit codes for different failure modes (Ishuah Kariuki)
- Compile
- Bug Fixes
  - Retry lots more different types of errors to make multipart transfers more reliable
  - Save the config before asking for a token, fixes disappearing oauth config
  - Warn the user if --include and --exclude are used together (Ernest Borowski)
  - Fix duplicate files (eg on Google drive) causing spurious copies
  - Allow trailing and leading whitespace for passwords (Jason Rose)
  - ncdu: fix crashes on empty directories
  - rcat: fix goroutine leak
  - moveto/copyto: Fix to allow copying to the same name
- Mount
  - --vfs-cache mode to make writes into mounts more reliable.
    - this requires caching files on the disk (see --cache-dir)
    - As this is a new feature, use with care
  - Use sdnotify to signal systemd the mount is ready (Fabian Möller)
  - Check if directory is not empty before mounting (Ernest Borowski)
- Local
  - Add error message for cross file system moves
  - Fix equality check for times
- Dropbox
  - Rework multipart upload
    - buffer the chunks when uploading large files so they can be retried
    - change default chunk size to 48MB now we are buffering them in memory
    - retry every error after the first chunk is done successfully
  - Fix error when renaming directories
- Swift
  - Fix crash on bad authentication
- Google Drive
  - Add service account support (Tim Cooijmans)
- S3
  - Make it work properly with Digital Ocean Spaces (Andrew Starr-Bochicchio)
  - Fix crash if a bad listing is received
  - Add support for ECS task IAM roles (David Minor)
- Backblaze B2
  - Fix multipart upload retries
  - Fix --hard-delete to make it work 100% of the time
- Swift
  - Allow authentication with storage URL and auth key (Giovanni Pizzi)
  - Add new fields for swift configuration to support IBM Bluemix Swift (Pierre Carlson)
  - Add OS_TENANT_ID and OS_USER_ID to config
  - Allow configs with user id instead of user name
  - Check if swift segments container exists before creating (John Leach)
  - Fix memory leak in swift transfers (upstream fix)
- SFTP
  - Add option to enable the use of aes128-cbc cipher (Jon Fautley)
- Amazon cloud drive
  - Fix download of large files failing with "Only one auth mechanism allowed"
- crypt
  - Option to encrypt directory names or leave them intact
  - Implement DirChangeNotify (Fabian Möller)
- onedrive
  - Add option to choose resourceURL during setup of OneDrive Business account if more than one is available for user

v1.38 - 2017-09-30

- New backends
  - Azure Blob Storage (thanks Andrei Dragomir)
  - Box
  - Onedrive for Business (thanks Oliver Heyme)
  - QingStor from QingCloud (thanks wuyu)
- New commands
  - rcat - read from standard input and stream upload
  - tree - shows a nicely formatted recursive listing
  - cryptdecode - decode crypted file names (thanks ishuah)
  - config show - print the config file
  - config file - print the config file location
- New Features
  - Empty directories are deleted on sync
  - dedupe - implement merging of duplicate directories
  - check and cryptcheck made more consistent and use less memory
  - cleanup for remaining remotes (thanks ishuah)
  - --immutable for ensuring that files don't change (thanks Jacob McNamee)
  - --user-agent option (thanks Alex McGrath Kraak)
  - --disable flag to disable optional features
  - --bind flag for choosing the local addr on outgoing connections
  - Support for zsh auto-completion (thanks bpicode)
  - Stop normalizing file names but do a normalized compare in sync
- Compile
  - Update to using go1.9 as the default go version
  - Remove snapd build due to maintenance problems
- Bug Fixes
  - Improve retriable error detection which makes multipart uploads better
  - Make check obey --ignore-size
  - Fix bwlimit toggle in conjunction with schedules (thanks cbruegg)
  - config ensures newly written config is on the same mount
- Local
  - Revert to copy when moving file across file system boundaries
  - --skip-links to suppress symlink warnings (thanks Zhiming Wang)
- Mount
  - Re-use rcat internals to support uploads from all remotes
- Dropbox
  - Fix "entry doesn't belong in directory" error
  - Stop using deprecated API methods
- Swift
  - Fix server side copy to empty container with --fast-list
- Google Drive
  - Change the default for --drive-use-trash to true
- S3
  - Set session token when using STS (thanks Girish Ramakrishnan)
  - Glacier docs and error messages (thanks Jan Varho)
  - Read 1000 (not 1024) items in dir listings to fix Wasabi
- Backblaze B2
  - Fix SHA1 mismatch when downloading files with no SHA1
  - Calculate missing hashes on the fly instead of spooling
  - --b2-hard-delete to permanently delete (not hide) files (thanks John Papandriopoulos)
- Hubic
  - Fix creating containers - no longer have to use the default container
- Swift
  - Optionally configure from a standard set of OpenStack environment vars
  - Add endpoint_type config
- Google Cloud Storage
  - Fix bucket creation to work with limited permission users
- SFTP
  - Implement connection pooling for multiple ssh connections
  - Limit new connections per second
  - Add support for MD5 and SHA1 hashes where available (thanks Christian Brüggemann)
- HTTP
  - Fix URL encoding issues
  - Fix directories with : in
  - Fix panic with URL encoded content

v1.37 - 2017-07-22

- New backends
  - FTP - thanks to Antonio Messina
  - HTTP - thanks to Vasiliy Tolstov
- New commands
  - rclone ncdu - for exploring a remote with a text based user interface.
  - rclone lsjson - for listing with a machine readable output
  - rclone dbhashsum - to show Dropbox style hashes of files (local or Dropbox)
- New Features
  - Implement --fast-list flag
    - This allows remotes to list recursively if they can
    - This uses less transactions (important if you pay for them)
    - This may or may not be quicker
    - This will use more memory as it has to hold the listing in memory
    - --old-sync-method deprecated - the remaining uses are covered by --fast-list
    - This involved a major re-write of all the listing code
  - Add --tpslimit and --tpslimit-burst to limit transactions per second
    - this is useful in conjunction with rclone mount to limit external apps
  - Add --stats-log-level so can see --stats without -v
  - Print password prompts to stderr - Hraban Luyat
  - Warn about duplicate files when syncing
  - Oauth improvements
    - allow auth_url and token_url to be set in the config file
    - Print redirection URI if using own credentials.
  - Don't Mkdir at the start of sync to save transactions
- Compile
  - Update build to go1.8.3
  - Require go1.6 for building rclone
  - Compile 386 builds with "GO386=387" for maximum compatibility
- Bug Fixes
  - Fix menu selection when no remotes
  - Config saving reworked to not kill the file if disk gets full
  - Don't delete remote if name does not change while renaming
  - moveto, copyto: report transfers and checks as per move and copy
- Local
  - Add --local-no-unicode-normalization flag - Bob Potter
- Mount
  - Now supported on Windows using cgofuse and WinFsp - thanks to Bill Zissimopoulos for much help
  - Compare checksums on upload/download via FUSE
  - Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM - Jérôme Vizcaino
  - On read only open of file, make open pending until first read
  - Make --read-only reject modify operations
  - Implement ModTime via FUSE for remotes that support it
  - Allow modTime to be changed even before all writers are closed
  - Fix panic on renames
  - Fix hang on errored upload
- Crypt
  - Report the name:root as specified by the user
  - Add an "obfuscate" option for filename encryption - Stephen Harris
- Amazon Drive
  - Fix initialization order for token renewer
  - Remove revoked credentials, allow oauth proxy config and update docs
- B2
  - Reduce minimum chunk size to 5MB
- Drive
  - Add team drive support
  - Reduce bandwidth by adding fields for partial responses - Martin Kristensen
  - Implement --drive-shared-with-me flag to view shared with me files - Danny Tsai
  - Add --drive-trashed-only to read only the files in the trash
  - Remove obsolete --drive-full-list
  - Add missing seek to start on retries of chunked uploads
  - Fix stats accounting for upload
  - Convert / in names to a unicode equivalent (／)
  - Poll for Google Drive changes when mounted
- OneDrive
  - Fix the uploading of files with spaces
  - Fix initialization order for token renewer
  - Display speeds accurately when uploading - Yoni Jah
  - Swap to using http://localhost:53682/ as redirect URL - Michael Ledin
  - Retry on token expired error, reset upload body on retry - Yoni Jah
- Google Cloud Storage
  - Add ability to specify location and storage class via config and command line - thanks gdm85
  - Create container if necessary on server side copy
  - Increase directory listing chunk to 1000 to increase performance
  - Obtain a refresh token for GCS - Steven Lu
- Yandex
  - Fix the name reported in log messages (was empty)
  - Correct error return for listing empty directory
- Dropbox
  - Rewritten to use the v2 API
    - Now supports ModTime
      - Can only set by uploading the file again
      - If you uploaded with an old rclone, rclone may upload everything again
      - Use --size-only or --checksum to avoid this
    - Now supports the Dropbox content hashing scheme
    - Now supports low level retries
- S3
  - Work around eventual consistency in bucket creation
  - Create container if necessary on server side copy
  - Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar Ahmed
- Swift, Hubic
  - Fix zero length directory markers showing in the subdirectory listing
    - this caused lots of duplicate transfers
  - Fix paged directory listings
    - this caused duplicate directory errors
  - Create container if necessary on server side copy
  - Increase directory listing chunk to 1000 to increase performance
  - Make sensible error if the user forgets the container
- SFTP
  - Add support for using ssh key files
  - Fix under Windows
  - Fix ssh agent on Windows
  - Adapt to latest version of library - Igor Kharin

v1.36 - 2017-03-18

- New Features
  - SFTP remote (Jack Schmidt)
  - Re-implement sync routine to work a directory at a time reducing memory usage
  - Logging revamped to be more inline with rsync - now much quieter
    - -v only shows transfers
    - -vv is for full debug
    - --syslog to log to syslog on capable platforms
  - Implement --backup-dir and --suffix
  - Implement --track-renames (initial implementation by Bjørn Erik Pedersen)
  - Add time-based bandwidth limits (Lukas Loesche)
  - rclone cryptcheck: checks integrity of crypt remotes
  - Allow all config file variables and options to be set from environment variables
  - Add --buffer-size parameter to control buffer size for copy
  - Make --delete-after the default
  - Add --ignore-checksum flag (fixed by Hisham Zarka)
  - rclone check: Add --download flag to check all the data, not just hashes
  - rclone cat: add --head, --tail, --offset, --count and --discard
  - rclone config: when choosing from a list, allow the value to be entered too
  - rclone config: allow rename and copy of remotes
  - rclone obscure: for generating encrypted passwords for rclone's config (T.C. Ferguson)
  - Comply with XDG Base Directory specification (Dario Giovannetti)
    - this moves the default location of the config file in a backwards compatible way
- Release changes
  - Ubuntu snap support (Dedsec1)
  - Compile with go 1.8
  - MIPS/Linux big and little endian support
- Bug Fixes
  - Fix copyto copying things to the wrong place if the destination dir didn't exist
  - Fix parsing of remotes in moveto and copyto
  - Fix --delete-before deleting files on copy
  - Fix --files-from with an empty file copying everything
  - Fix sync: don't update mod times if --dry-run set
  - Fix MimeType propagation
  - Fix filters to add ** rules to directory rules
- Local
  - Implement -L, --copy-links flag to allow rclone to follow symlinks
  - Open files in write only mode so rclone can write to an rclone mount
  - Fix unnormalised unicode causing problems reading directories
  - Fix interaction between -x flag and --max-depth
- Mount
  - Implement proper directory handling (mkdir, rmdir, renaming)
  - Make include and exclude filters apply to mount
  - Implement read and write async buffers - control with --buffer-size
  - Fix fsync on for directories
  - Fix retry on network failure when reading off crypt
- Crypt
  - Add --crypt-show-mapping to show encrypted file mapping
  - Fix crypt writer getting stuck in a loop
    - IMPORTANT this bug had the potential to cause data corruption when
      - reading data from a network based remote and
      - writing to a crypt on Google Drive
    - Use the cryptcheck command to validate your data if you are concerned
    - If syncing two crypt remotes, sync the unencrypted remote
- Amazon Drive
  - Fix panics on Move (rename)
  - Fix panic on token expiry
- B2
  - Fix inconsistent listings and rclone check
  - Fix uploading empty files with go1.8
  - Constrain memory usage when doing multipart uploads
  - Fix upload url not being refreshed properly
- Drive
  - Fix Rmdir on directories with trashed files
  - Fix "Ignoring unknown object" when downloading
  - Add --drive-list-chunk
  - Add --drive-skip-gdocs (Károly Oláh)
- OneDrive
  - Implement Move
  - Fix Copy
    - Fix overwrite detection in Copy
    - Fix waitForJob to parse errors correctly
  - Use token renewer to stop auth errors on long uploads
  - Fix uploading empty files with go1.8
- Google Cloud Storage
  - Fix depth 1 directory listings
- Yandex
  - Fix single level directory listing
- Dropbox
  - Normalise the case for single level directory listings
  - Fix depth 1 listing
- S3
  - Added ca-central-1 region (Jon Yergatian)

v1.35 - 2017-01-02

- New Features
  - moveto and copyto commands for choosing a destination name on copy/move
  - rmdirs command to recursively delete empty directories
  - Allow repeated --include/--exclude/--filter options
  - Only show transfer stats on commands which transfer stuff
    - show stats on any command using the --stats flag
  - Allow overlapping directories in move when server side dir move is supported
  - Add --stats-unit option - thanks Scott McGillivray
- Bug Fixes
  - Fix the config file being overwritten when two rclone instances are running
  - Make rclone lsd obey the filters properly
  - Fix compilation on mips
  - Fix not transferring files that don't differ in size
  - Fix panic on nil retry/fatal error
- Mount
  - Retry reads on error - should help with reliability a lot
  - Report the modification times for directories from the remote
  - Add bandwidth accounting and limiting (fixes --bwlimit)
  - If --stats provided will show stats and which files are transferring
  - Support R/W files if truncate is set.
  - Implement statfs interface so df works
  - Note that write is now supported on Amazon Drive
  - Report number of blocks in a file - thanks Stefan Breunig
- Crypt
  - Prevent the user pointing crypt at itself
  - Fix failed to authenticate decrypted block errors
    - these will now return the underlying unexpected EOF instead
- Amazon Drive
  - Add support for server side move and directory move - thanks Stefan Breunig
  - Fix nil pointer deref on size attribute
- B2
  - Use new prefix and delimiter parameters in directory listings
    - This makes --max-depth 1 dir listings as used in mount much faster
  - Reauth the account while doing uploads too - should help with token expiry
- Drive
  - Make DirMove more efficient and complain about moving the root
  - Create destination directory on Move()

v1.34 - 2016-11-06

- New Features
  - Stop single file and --files-from operations iterating through the source bucket.
  - Stop removing failed upload to cloud storage remotes
  - Make ContentType be preserved for cloud to cloud copies
  - Add support to toggle bandwidth limits via SIGUSR2 - thanks Marco Paganini
  - rclone check shows count of hashes that couldn't be checked
  - rclone listremotes command
  - Support linux/arm64 build - thanks Fredrik Fornwall
  - Remove Authorization: lines from --dump-headers output
- Bug Fixes
  - Ignore files with control characters in the names
  - Fix rclone move command
    - Delete src files which already existed in dst
    - Fix deletion of src file when dst file older
  - Fix rclone check on crypted file systems
  - Make failed uploads not count as "Transferred"
  - Make sure high level retries show with -q
  - Use a vendor directory with godep for repeatable builds
- rclone mount - FUSE
  - Implement FUSE mount options
    - --no-modtime, --debug-fuse, --read-only, --allow-non-empty, --allow-root, --allow-other
    - --default-permissions, --write-back-cache, --max-read-ahead, --umask, --uid, --gid
  - Add --dir-cache-time to control caching of directory entries
  - Implement seek for files opened for read (useful for video players)
    - with -no-seek flag to disable
  - Fix crash on 32 bit ARM (alignment of 64 bit counter)
  - ...and many more internal fixes and improvements!
- Crypt
  - Don't show encrypted password in configurator to stop confusion
- Amazon Drive
  - New wait for upload option --acd-upload-wait-per-gb
    - upload timeouts scale by file size and can be disabled
  - Add 502 Bad Gateway to list of errors we retry
  - Fix overwriting a file with a zero length file
  - Fix ACD file size warning limit - thanks Felix Bünemann
- Local
  - Unix: implement -x/--one-file-system to stay on a single file system
    - thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
  - Windows: ignore the symlink bit on files
  - Windows: Ignore directory based junction points
- B2
  - Make sure each upload has at least one upload slot - fixes strange upload stats
  - Fix uploads when using crypt
  - Fix download of large files (sha1 mismatch)
  - Return error when we try to create a bucket which someone else owns
  - Update B2 docs with Data usage, and Crypt section - thanks Tomasz Mazur
- S3
  - Command line and config file support for
    - Setting/overriding ACL - thanks Radek Senfeld
    - Setting storage class - thanks Asko Tamm
- Drive
  - Make exponential backoff work exactly as per Google specification
  - add .epub, .odp and .tsv as export formats.
- Swift
  - Don't read metadata for directory marker objects

v1.33 - 2016-08-24

- New Features
  - Implement encryption
    - data encrypted in NACL secretbox format
    - with optional file name encryption
  - New commands
    - rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)
      - works on Linux, FreeBSD and OS X (need testers for the last 2!)
    - rclone cat - outputs remote file or files to the terminal
    - rclone genautocomplete - command to make a bash completion script for rclone
  - Editing a remote using rclone config now goes through the wizard
  - Compile with go 1.7
    - this fixes rclone on macOS Sierra and on 386 processors
  - Use cobra for sub commands and docs generation
- drive
  - Document how to make your own client_id
- s3
  - User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
- b2
  - Fix stats accounting for upload - no more jumping to 100% done
  - On cleanup delete hide marker if it is the current file
  - New B2 API endpoint (thanks Per Cederberg)
  - Set maximum backoff to 5 Minutes
- onedrive
  - Fix URL escaping in file names - eg uploading files with + in them.
- amazon cloud drive
  - Fix token expiry during large uploads
  - Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
- local
  - Fix filenames with invalid UTF-8 not being uploaded
  - Fix problem with some UTF-8 characters on OS X

v1.32 - 2016-07-13

- Backblaze B2
  - Fix upload of large files not in root

v1.31 - 2016-07-13

- New Features
  - Reduce memory on sync by about 50%
  - Implement --no-traverse flag to stop copy traversing the destination remote.
    - This can be used to reduce memory usage down to the smallest possible.
    - Useful to copy a small number of files into a large destination folder.
  - Implement cleanup command for emptying trash / removing old versions of files
    - Currently B2 only
  - Single file handling improved
    - Now copied with --files-from
    - Automatically sets --no-traverse when copying a single file
  - Info on using installing with ansible - thanks Stefan Weichinger
  - Implement --no-update-modtime flag to stop rclone fixing the remote modified times.
- Bug Fixes
  - Fix move command - stop it running for overlapping Fses - this was causing data loss.
- Local
  - Fix incomplete hashes - this was causing problems for B2.
- Amazon Drive
  - Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.
- Swift
  - Add support for non-default project domain - thanks Antonio Messina.
- S3
  - Add instructions on how to use rclone with minio.
  - Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
  - Skip setting the modified time for objects > 5GB as it isn't possible.
- Backblaze B2
  - Add --b2-versions flag so old versions can be listed and retrieved.
  - Treat 403 errors (eg cap exceeded) as fatal.
  - Implement cleanup command for deleting old file versions.
  - Make error handling compliant with B2 integrations notes.
  - Fix handling of token expiry.
  - Implement --b2-test-mode to set X-Bz-Test-Mode header.
  - Set cutoff for chunked upload to 200MB as per B2 guidelines.
  - Make upload multi-threaded.
- Dropbox
  - Don't retry 461 errors.

v1.30 - 2016-06-18

- New Features
  - Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
    - Directory include filtering for efficiency
    - --max-depth parameter
    - Better error reporting
    - More to come
  - Retry more errors
  - Add --ignore-size flag - for uploading images to onedrive
  - Log -v output to stdout by default
  - Display the transfer stats in more human readable form
  - Make 0 size files specifiable with --max-size 0b
  - Add b suffix so we can specify bytes in --bwlimit, --min-size etc
  - Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz
- Bug Fixes
  - Fix retry doing one too many retries
- Local
  - Fix problems with OS X and UTF-8 characters
- Amazon Drive
  - Check a file exists before uploading to help with 408 Conflict errors
  - Reauth on 401 errors - this has been causing a lot of problems
  - Work around spurious 403 errors
  - Restart directory listings on error
- Google Drive
  - Check a file exists before uploading to help with duplicates
  - Fix retry of multipart uploads
- Backblaze B2
  - Implement large file uploading
- S3
  - Add AES256 server-side encryption for - thanks Justin R. Wilson
- Google Cloud Storage
  - Make sure we don't use conflicting content types on upload
  - Add service account support - thanks Michal Witkowski
- Swift
  - Add auth version parameter
  - Add domain option for openstack (v3 auth) - thanks Fabian Ruff

v1.29 - 2016-04-18

- New Features
  - Implement -I, --ignore-times for unconditional upload
  - Improve dedupe command
    - Now removes identical copies without asking
    - Now obeys --dry-run
    - Implement --dedupe-mode for non interactive running
      - --dedupe-mode interactive - interactive the default.
      - --dedupe-mode skip - removes identical files then skips anything left.
      - --dedupe-mode first - removes identical files then keeps the first one.
      - --dedupe-mode newest - removes identical files then keeps the newest one.
      - --dedupe-mode oldest - removes identical files then keeps the oldest one.
      - --dedupe-mode rename - removes identical files then renames the rest to be different.
- Bug fixes
  - Make rclone check obey the --size-only flag.
  - Use "application/octet-stream" if discovered mime type is invalid.
  - Fix missing "quit" option when there are no remotes.
- Google Drive
  - Increase default chunk size to 8 MB - increases upload speed of big files
  - Speed up directory listings and make more reliable
  - Add missing retries for Move and DirMove - increases reliability
  - Preserve mime type on file update
- Backblaze B2
  - Enable mod time syncing
    - This means that B2 will now check modification times
    - It will upload new files to update the modification times
    - (there isn't an API to just set the mod time.)
    - If you want the old behaviour use --size-only.
  - Update API to new version
  - Fix parsing of mod time when not in metadata
- Swift/Hubic
  - Don't return an MD5SUM for static large objects
- S3
  - Fix uploading files bigger than 50GB

v1.28 - 2016-03-01

- New Features
  - Configuration file encryption - thanks Klaus Post
  - Improve rclone config adding more help and making it easier to understand
  - Implement -u/--update so creation times can be used on all remotes
  - Implement --low-level-retries flag
  - Optionally disable gzip compression on downloads with --no-gzip-encoding
- Bug fixes
  - Don't make directories if --dry-run set
  - Fix and document the move command
  - Fix redirecting stderr on unix-like OSes when using --log-file
  - Fix delete command to wait until all finished - fixes missing deletes.
- Backblaze B2
  - Use one upload URL per go routine fixes more than one upload using auth token
  - Add pacing, retries and reauthentication - fixes token expiry problems
  - Upload without using a temporary file from local (and remotes which support SHA1)
  - Fix reading metadata for all files when it shouldn't have been
- Drive
  - Fix listing drive documents at root
  - Disable copy and move for Google docs
- Swift
  - Fix uploading of chunked files with non ASCII characters
  - Allow setting of storage_url in the config - thanks Xavier Lucas
- S3
  - Allow IAM role and credentials from environment variables - thanks Brian Stengaard
  - Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon
- Amazon Drive
  - Retry on more things to make directory listings more reliable

v1.27 - 2016-01-31

- New Features
  - Easier headless configuration with rclone authorize
  - Add support for multiple hash types - we now check SHA1 as well as MD5 hashes.
  - delete command which does obey the filters (unlike purge)
  - dedupe command to deduplicate a remote. Useful with Google Drive.
  - Add --ignore-existing flag to skip all files that exist on destination.
  - Add --delete-before, --delete-during, --delete-after flags.
  - Add --memprofile flag to debug memory use.
  - Warn the user about files with same name but different case
  - Make --include rules add their implicit exclude * at the end of the filter list
  - Deprecate compiling with go1.3
- Amazon Drive
  - Fix download of files > 10 GB
  - Fix directory traversal ("Next token is expired") for large directory listings
  - Remove 409 conflict from error codes we will retry - stops very long pauses
- Backblaze B2
  - SHA1 hashes now checked by rclone core
- Drive
  - Add --drive-auth-owner-only to only consider files owned by the user - thanks Björn Harrtell
  - Export Google documents
- Dropbox
  - Make file exclusion error controllable with -q
- Swift
  - Fix upload from unprivileged user.
- S3
  - Fix updating of mod times of files with + in.
- Local
  - Add local file system option to disable UNC on Windows.
v1.26 - 2016-01-02

- New Features
  - Yandex storage backend - thank you Dmitry Burdeev ("dibu")
  - Implement Backblaze B2 storage backend
  - Add --min-age and --max-age flags - thank you Adriano Aurélio Meirelles
  - Make ls/lsl/md5sum/size/check obey includes and excludes
- Fixes
  - Fix crash in http logging
  - Upload releases to github too
- Swift
  - Fix sync for chunked files
- OneDrive
  - Re-enable server side copy
  - Don't mask HTTP error codes with JSON decode error
- S3
  - Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier)

v1.25 - 2015-11-14

- New features
  - Implement Hubic storage system
- Fixes
  - Fix deletion of some excluded files without --delete-excluded
    - This could have deleted files unexpectedly on sync
    - Always check first with --dry-run!
- Swift
  - Stop SetModTime losing metadata (eg X-Object-Manifest)
    - This could have caused data loss for files > 5GB in size
  - Use ContentType from Object to avoid lookups in listings
- OneDrive
  - disable server side copy as it seems to be broken at Microsoft

v1.24 - 2015-11-07

- New features
  - Add support for Microsoft OneDrive
  - Add --no-check-certificate option to disable server certificate verification
  - Add async readahead buffer for faster transfer of big files
- Fixes
  - Allow spaces in remotes and check remote names for validity at creation time
  - Allow '&' and disallow ':' in Windows filenames.
- Swift
  - Ignore directory marker objects where appropriate - allows working with Hubic
  - Don't delete the container if fs wasn't at root
- S3
  - Don't delete the bucket if fs wasn't at root
- Google Cloud Storage
  - Don't delete the bucket if fs wasn't at root

v1.23 - 2015-10-03

- New features
  - Implement rclone size for measuring remotes
- Fixes
  - Fix headless config for drive and gcs
  - Tell the user they should try again if the webserver method failed
  - Improve output of --dump-headers
- S3
  - Allow anonymous access to public buckets
- Swift
  - Stop chunked operations logging "Failed to read info: Object Not Found"
  - Use Content-Length on uploads for extra reliability

v1.22 - 2015-09-28

- Implement rsync like include and exclude flags
- swift
  - Support files > 5GB - thanks Sergey Tolmachev

v1.21 - 2015-09-22

- New features
  - Display individual transfer progress
  - Make lsl output times in localtime
- Fixes
  - Fix allowing user to override credentials again in Drive, GCS and ACD
- Amazon Drive
  - Implement compliant pacing scheme
- Google Drive
  - Make directory reads concurrent for increased speed.
v1.20 - 2015-09-15

- New features
  - Amazon Drive support
  - Oauth support redone - fix many bugs and improve usability
    - Use "golang.org/x/oauth2" as oauth library of choice
    - Improve oauth usability for smoother initial signup
    - drive, googlecloudstorage: optionally use auto config for the oauth token
  - Implement --dump-headers and --dump-bodies debug flags
  - Show multiple matched commands if abbreviation too short
  - Implement server side move where possible
- local
  - Always use UNC paths internally on Windows - fixes a lot of bugs
- dropbox
  - force use of our custom transport which makes timeouts work
- Thanks to Klaus Post for lots of help with this release

v1.19 - 2015-08-28

- New features
  - Server side copies for s3/swift/drive/dropbox/gcs
  - Move command - uses server side copies if it can
  - Implement --retries flag - tries 3 times by default
  - Build for plan9/amd64 and solaris/amd64 too
- Fixes
  - Make a current version download with a fixed URL for scripting
  - Ignore rmdir in limited fs rather than throwing error
- dropbox
  - Increase chunk size to improve upload speeds massively
  - Issue an error message when trying to upload bad file name

v1.18 - 2015-08-17

- drive
  - Add --drive-use-trash flag so rclone trashes instead of deletes
  - Add "Forbidden to download" message for files with no downloadURL
- dropbox
  - Remove datastore
    - This was deprecated and it caused a lot of problems
    - Modification times and MD5SUMs no longer stored
  - Fix uploading files > 2GB
- s3
  - use official AWS SDK from github.com/aws/aws-sdk-go
  - NB will most likely require you to delete and recreate remote
  - enable multipart upload which enables files > 5GB
  - tested with Ceph / RadosGW / S3 emulation
  - many thanks to Sam Liston and Brian Haymore at the Utah Center for High Performance Computing for a Ceph test account
- misc
  - Show errors when reading the config file
  - Do not print stats in quiet mode - thanks Leonid Shalupov
  - Add FAQ
  - Fix created directories not obeying umask
  - Linux installation instructions - thanks Shimon Doodkin

v1.17 - 2015-06-14

- dropbox: fix case insensitivity issues - thanks Leonid Shalupov

v1.16 - 2015-06-09

- Fix uploading big files which was causing timeouts or panics
- Don't check md5sum after download with --size-only

v1.15 - 2015-06-06

- Add --checksum flag to only discard transfers by MD5SUM - thanks Alex Couper
- Implement --size-only flag to sync on size not checksum & modtime
- Expand docs and remove duplicated information
- Document rclone's limitations with directories
- dropbox: update docs about case insensitivity

v1.14 - 2015-05-21

- local: fix encoding of non utf-8 file names - fixes a duplicate file problem
- drive: docs about rate limiting
- google cloud storage: Fix compile after API change in "google.golang.org/api/storage/v1"

v1.13 - 2015-05-10

- Revise documentation (especially sync)
- Implement --timeout and --conntimeout
- s3: ignore etags from multipart uploads which aren't md5sums

v1.12 - 2015-03-15

- drive: Use chunked upload for files above a certain size
- drive: add --drive-chunk-size and --drive-upload-cutoff parameters
- drive: switch to insert from update when a failed copy deletes the upload
- core: Log duplicate files if they are detected

v1.11 - 2015-03-04

- swift: add region parameter
- drive: fix crash on failed to update remote mtime
- In remote paths, change native directory separators to /
- Add synchronization to ls/lsl/lsd output to stop corruptions
- Ensure all stats/log messages to go stderr
- Add --log-file flag to log everything (including panics) to file
- Make it possible to disable stats printing with --stats=0
- Implement --bwlimit to limit data transfer bandwidth

v1.10 - 2015-02-12

- s3: list an unlimited number of items
- Fix getting stuck in the configurator

v1.09 - 2015-02-07

- windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:)
- local: Fix directory separators on Windows
- drive: fix rate limit exceeded errors

v1.08 - 2015-02-04

- drive: fix subdirectory listing to not list entire drive
- drive: Fix SetModTime
- dropbox: adapt code to recent library changes

v1.07 - 2014-12-23

- google cloud storage: fix memory leak

v1.06 - 2014-12-12

- Fix "Couldn't find home directory" on OSX
- swift: Add tenant parameter
- Use new location of Google API packages

v1.05 - 2014-08-09

- Improved tests and consequently lots of minor fixes
- core: Fix race detected by go race detector
- core: Fixes after running errcheck
- drive: reset root directory on Rmdir and Purge
- fs: Document that Purger returns error on empty directory, test and fix
- google cloud storage: fix ListDir on subdirectory
- google cloud storage: re-read metadata in SetModTime
- s3: make reading metadata more reliable to work around eventual consistency problems
- s3: strip trailing / from ListDir()
- swift: return directories without / in ListDir

v1.04 - 2014-07-21

- google cloud storage: Fix crash on Update

v1.03 - 2014-07-20

- swift, s3, dropbox: fix updated files being marked as corrupted
- Make compile with go 1.1 again

v1.02 - 2014-07-19

- Implement Dropbox remote
- Implement Google Cloud Storage remote
- Verify Md5sums and Sizes after copies
- Remove times from "ls" command - lists sizes only
- Add "lsl" - lists times and sizes
- Add "md5sum" command

v1.01 - 2014-07-04

- drive: fix transfer of big files using up lots of memory

v1.00 - 2014-07-03

- drive: fix whole second dates

v0.99 - 2014-06-26

- Fix --dry-run not working
- Make compatible with go 1.1

v0.98 - 2014-05-30

- s3: Treat missing Content-Length as 0 for some ceph installations
- rclonetest: add file with a space in

v0.97 - 2014-05-05

- Implement copying of single files
- s3 & swift: support paths inside containers/buckets

v0.96 - 2014-04-24

- drive: Fix multiple files of same name being created
- drive: Use o.Update and fs.Put to optimise transfers
- Add version number, -V and --version

v0.95 - 2014-03-28

- rclone.org: website, docs and graphics
- drive: fix path parsing

v0.94 - 2014-03-27

- Change remote format one last time
- GNU style flags

v0.93 - 2014-03-16

- drive: store token in config file
- cross compile other versions
- set strict permissions on config file

v0.92 - 2014-03-15

- Config fixes and --config option

v0.91 - 2014-03-15

- Make config file

v0.90 - 2013-06-27

- Project named rclone

v0.00 - 2012-11-18

- Project started

BUGS AND LIMITATIONS

Limitations

Directory timestamps aren't preserved

Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.

Rclone struggles with millions of files in a directory/bucket

Currently rclone loads each directory/bucket entirely into memory before using it. Since each rclone object takes 0.5k-1k of memory this can take a very long time and use a large amount of memory. For example, a bucket containing 10 million objects would need roughly 5-10 GB of RAM just to hold the listing.

Millions of files in a directory tend to occur on bucket-based remotes (e.g. S3 buckets) since those remotes do not segregate subdirectories within the bucket.

Bucket based remotes and folders

Bucket based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them, which means that empty directories on a bucket based remote will tend to disappear. Some software creates empty keys ending in / as directory markers. Rclone doesn't do this as it potentially creates more objects and costs more. This ability may be added in the future (probably via a flag/option).
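As an illustration of such markers (the bucket and file names here are made up), a tool that simulates folders might store keys like this; the zero length key ending in / is the directory marker, and rclone only creates the real objects:

    photos/2020/img001.jpg   <- an ordinary object holding file data
    photos/2020/             <- zero length key acting as a directory marker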
Rclone therefore cannot create directories in them, which means that empty directories on a bucket based remote will tend to disappear. Some software creates empty keys ending in / as directory markers. Rclone doesn't do this as it potentially creates more objects and costs more. This ability may be added in the future (probably via a flag/option). Bugs Bugs are stored in rclone's GitHub project: - Reported bugs - Known issues Frequently Asked Questions Do all cloud storage systems support all rclone commands Yes they do. All the rclone commands (eg sync, copy etc) will work on all the remote storage systems. Can I copy the config from one machine to another Sure! Rclone stores all of its config in a single file. If you want to find this file, run rclone config file which will tell you where it is. See the remote setup docs for more info. How do I configure rclone on a remote / headless box with no browser? This has now been documented in its own remote setup page. Can rclone sync directly from drive to s3 Rclone can sync between two remote cloud storage systems just fine. Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth. The syncs would be incremental (on a file by file basis). Eg rclone sync -i drive:Folder s3:bucket Using rclone from multiple locations at the same time You can use rclone from multiple places at the same time if you choose a different subdirectory for the output, eg Server A> rclone sync -i /tmp/whatever remote:ServerA Server B> rclone sync -i /tmp/whatever remote:ServerB If you sync to the same directory then you should use rclone copy otherwise the two instances of rclone may delete each other's files, eg Server A> rclone copy /tmp/whatever remote:Backup Server B> rclone copy /tmp/whatever remote:Backup The file names you upload from Server A and Server B should be different in this case, otherwise some file systems (eg Drive) may make duplicates. Why doesn't rclone support partial transfers / binary diffs like rsync? Rclone stores each file you transfer as a native object on the remote cloud storage system. This means that you can see the files you upload as expected using alternative access methods (eg using the Google Drive web interface). There is a 1:1 mapping between files on your hard disk and objects created in the cloud storage system. None of the cloud storage systems I've come across support partially uploading an object. You can't take an existing object and change some bytes in the middle of it. It would be possible to make a sync system which stored binary diffs instead of whole objects like rclone does, but that would break the 1:1 mapping of files on your hard disk to objects in the remote cloud storage system. All the cloud storage systems support partial downloads of content, so it would be possible to make partial downloads work. However, making this work efficiently would require storing a significant amount of metadata, which breaks the desired 1:1 mapping of files to objects. Can rclone do bi-directional sync? No, not at present. rclone only does uni-directional sync from A -> B. It may do so in the future though, since it has all the primitives - it just requires writing the algorithm to do it. Can I use rclone with an HTTP proxy? Yes. rclone will follow the standard environment variables for proxies, similar to cURL and other programs.
In general the variables are called http_proxy (for services reached over http) and https_proxy (for services reached over https). Most public services will be using https, but you may wish to set both. The content of the variable is protocol://server:port. The protocol value is the one used to talk to the proxy server itself, and is commonly either http or socks5. Slightly annoyingly, there is no _standard_ for the name; some applications may use http_proxy while others use HTTP_PROXY. The Go libraries used by rclone will try both variations, but you may wish to set all possibilities. So, on Linux, you may end up with code similar to export http_proxy=http://proxyserver:12345 export https_proxy=$http_proxy export HTTP_PROXY=$http_proxy export HTTPS_PROXY=$http_proxy The NO_PROXY variable allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts of domains. For instance "foo.com" also matches "bar.foo.com". e.g. export no_proxy=localhost,127.0.0.0/8,my.host.name export NO_PROXY=$no_proxy Note that the ftp backend does not support ftp_proxy yet. Rclone gives x509: failed to load system roots and no roots provided error This means that rclone can't find the SSL root certificates. Likely you are running rclone on a NAS with a cut-down Linux OS, or possibly on Solaris. Rclone (via the Go runtime) tries to load the root certificates from these places on Linux. "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc. "/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL "/etc/ssl/ca-bundle.pem", // OpenSUSE "/etc/pki/tls/cacert.pem", // OpenELEC So doing something like this should fix the problem. It also sets the time which is important for SSL to work properly. mkdir -p /etc/ssl/certs/ curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt ntpclient -s -h pool.ntp.org The two environment variables SSL_CERT_FILE and SSL_CERT_DIR, mentioned in the x509 package, provide an additional way to provide the SSL root certificates. Note that you may need to add the --insecure option to the curl command line if it doesn't work without it. curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt Rclone gives Failed to load config file: function not implemented error Likely this means that you are running rclone on a Linux kernel version not supported by the Go runtime, ie earlier than 2.6.23. See the system requirements section in the go install docs for full details. All my uploaded docx/xlsx/pptx files appear as archive/zip This is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix this is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats. tcp lookup some.domain.com no such host This happens when rclone cannot resolve a domain. Please check that your DNS setup is generally working, e.g. # both should print a long list of possible IP addresses dig www.googleapis.com # resolve using your default DNS dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server If you are using systemd-resolved (default on Arch Linux), ensure it is at version 233 or higher. Previous releases contain a bug which prevents some domains from being resolved properly. Additionally, the GODEBUG=netdns= environment variable can be used to influence which resolver Go uses.
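For example, to force the pure Go DNS resolver for a single run (a minimal sketch - the remote name remote: is just a placeholder):

    # force the pure Go resolver; use netdns=cgo to force the system (cgo) resolver instead
    GODEBUG=netdns=go rclone lsd remote:

Joining a debug level with a plus sign, eg GODEBUG=netdns=go+1, additionally prints information about which resolver was chosen.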
This can also help resolve certain issues with DNS resolution. See the name resolution section in the go docs. The total size reported in the stats for a sync is wrong and keeps changing It is likely you have more than 10,000 files that need to be synced. By default rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the --max-backlog flag. Rclone is using too much memory or appears to have a memory leak Rclone is written in Go which uses a garbage collector. The default settings for the garbage collector mean that it runs when the heap size has doubled. However it is possible to tune the garbage collector to use less memory by setting GOGC to a lower value, say export GOGC=20. This will make the garbage collector work harder, reducing memory size at the expense of CPU usage. The most common cause of rclone using lots of memory is a single directory with thousands or millions of files in it. Rclone has to load this entirely into memory as rclone objects. Each rclone object takes 0.5k-1k of memory. License This is free software under the terms of the MIT license (check the COPYING file included with the source code). Copyright (C) 2019 by Nick Craig-Wood https://www.craig-wood.com/nick/ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Authors - Nick Craig-Wood nick@craig-wood.com Contributors {{< rem email addresses removed from here need to be added to bin/.ignore-emails to make sure update-authors.py doesn't immediately put them back in again. >}} - Alex Couper amcouper@gmail.com - Leonid Shalupov leonid@shalupov.com shalupov@diverse.org.ru - Shimon Doodkin helpmepro1@gmail.com - Colin Nicholson colin@colinn.com - Klaus Post klauspost@gmail.com - Sergey Tolmachev tolsi.ru@gmail.com - Adriano Aurélio Meirelles adriano@atinge.com - C. Bess cbess@users.noreply.github.com - Dmitry Burdeev dibu28@gmail.com - Joseph Spurrier github@josephspurrier.com - Björn Harrtell bjorn@wololo.org - Xavier Lucas xavier.lucas@corp.ovh.com - Werner Beroux werner@beroux.com - Brian Stengaard brian@stengaard.eu - Jakub Gedeon jgedeon@sofi.com - Jim Tittsler jwt@onjapan.net - Michal Witkowski michal@improbable.io - Fabian Ruff fabian.ruff@sap.com - Leigh Klotz klotz@quixey.com - Romain Lapray lapray.romain@gmail.com - Justin R. Wilson jrw972@gmail.com - Antonio Messina antonio.s.messina@gmail.com - Stefan G.
Weichinger office@oops.co.at - Per Cederberg cederberg@gmail.com - Radek Šenfeld rush@logic.cz - Fredrik Fornwall fredrik@fornwall.net - Asko Tamm asko@deekit.net - xor-zz xor@gstocco.com - Tomasz Mazur tmazur90@gmail.com - Marco Paganini paganini@paganini.net - Felix Bünemann buenemann@louis.info - Durval Menezes jmrclone@durval.com - Luiz Carlos Rumbelsperger Viana maxd13_luiz_carlos@hotmail.com - Stefan Breunig stefan-github@yrden.de - Alishan Ladhani ali-l@users.noreply.github.com - 0xJAKE 0xJAKE@users.noreply.github.com - Thibault Molleman thibaultmol@users.noreply.github.com - Scott McGillivray scott.mcgillivray@gmail.com - Bjørn Erik Pedersen bjorn.erik.pedersen@gmail.com - Lukas Loesche lukas@mesosphere.io - emyarod allllaboutyou@gmail.com - T.C. Ferguson tcf909@gmail.com - Brandur brandur@mutelight.org - Dario Giovannetti dev@dariogiovannetti.net - Károly Oláh okaresz@aol.com - Jon Yergatian jon@macfanatic.ca - Jack Schmidt github@mowsey.org - Dedsec1 Dedsec1@users.noreply.github.com - Hisham Zarka hzarka@gmail.com - Jérôme Vizcaino jerome.vizcaino@gmail.com - Mike Tesch mjt6129@rit.edu - Marvin Watson marvwatson@users.noreply.github.com - Danny Tsai danny8376@gmail.com - Yoni Jah yonjah+git@gmail.com yonjah+github@gmail.com - Stephen Harris github@spuddy.org sweharris@users.noreply.github.com - Ihor Dvoretskyi ihor.dvoretskyi@gmail.com - Jon Craton jncraton@gmail.com - Hraban Luyat hraban@0brg.net - Michael Ledin mledin89@gmail.com - Martin Kristensen me@azgul.com - Too Much IO toomuchio@users.noreply.github.com - Anisse Astier anisse@astier.eu - Zahiar Ahmed zahiar@live.com - Igor Kharin igorkharin@gmail.com - Bill Zissimopoulos billziss@navimatics.com - Bob Potter bobby.potter@gmail.com - Steven Lu tacticalazn@gmail.com - Sjur Fredriksen sjurtf@ifi.uio.no - Ruwbin hubus12345@gmail.com - Fabian Möller fabianm88@gmail.com f.moeller@nynex.de - Edward Q. 
Bridges github@eqbridges.com - Vasiliy Tolstov v.tolstov@selfip.ru - Harshavardhana harsha@minio.io - sainaen sainaen@gmail.com - gdm85 gdm85@users.noreply.github.com - Yaroslav Halchenko debian@onerussian.com - John Papandriopoulos jpap@users.noreply.github.com - Zhiming Wang zmwangx@gmail.com - Andy Pilate cubox@cubox.me - Oliver Heyme olihey@googlemail.com olihey@users.noreply.github.com de8olihe@lego.com - wuyu wuyu@yunify.com - Andrei Dragomir adragomi@adobe.com - Christian Brüggemann mail@cbruegg.com - Alex McGrath Kraak amkdude@gmail.com - bpicode bjoern.pirnay@googlemail.com - Daniel Jagszent daniel@jagszent.de - Josiah White thegenius2009@gmail.com - Ishuah Kariuki kariuki@ishuah.com ishuah91@gmail.com - Jan Varho jan@varho.org - Girish Ramakrishnan girish@cloudron.io - LingMan LingMan@users.noreply.github.com - Jacob McNamee jacobmcnamee@gmail.com - jersou jertux@gmail.com - thierry thierry@substantiel.fr - Simon Leinen simon.leinen@gmail.com ubuntu@s3-test.novalocal - Dan Dascalescu ddascalescu+github@gmail.com - Jason Rose jason@jro.io - Andrew Starr-Bochicchio a.starr.b@gmail.com - John Leach john@johnleach.co.uk - Corban Raun craun@instructure.com - Pierre Carlson mpcarl@us.ibm.com - Ernest Borowski er.borowski@gmail.com - Remus Bunduc remus.bunduc@gmail.com - Iakov Davydov iakov.davydov@unil.ch dav05.gith@myths.ru - Jakub Tasiemski tasiemski@gmail.com - David Minor dminor@saymedia.com - Tim Cooijmans cooijmans.tim@gmail.com - Laurence liuxy6@gmail.com - Giovanni Pizzi gio.piz@gmail.com - Filip Bartodziej filipbartodziej@gmail.com - Jon Fautley jon@dead.li - lewapm 32110057+lewapm@users.noreply.github.com - Yassine Imounachen yassine256@gmail.com - Chris Redekop chris-redekop@users.noreply.github.com chris.redekop@gmail.com - Jon Fautley jon@adenoid.appstal.co.uk - Will Gunn WillGunn@users.noreply.github.com - Lucas Bremgartner lucas@bremis.ch - Jody Frankowski jody.frankowski@gmail.com - Andreas Roussos arouss1980@gmail.com - nbuchanan nbuchanan@utah.gov - Durval Menezes rclone@durval.com - Victor vb-github@viblo.se - Mateusz pabian.mateusz@gmail.com - Daniel Loader spicypixel@gmail.com - David0rk davidork@gmail.com - Alexander Neumann alexander@bumpern.de - Giri Badanahatti gbadanahatti@us.ibm.com@Giris-MacBook-Pro.local - Leo R. Lundgren leo@finalresort.org - wolfv wolfv6@users.noreply.github.com - Dave Pedu dave@davepedu.com - Stefan Lindblom lindblom@spotify.com - seuffert oliver@seuffert.biz - gbadanahatti 37121690+gbadanahatti@users.noreply.github.com - Keith Goldfarb barkofdelight@gmail.com - Steve Kriss steve@heptio.com - Chih-Hsuan Yen yan12125@gmail.com - Alexander Neumann fd0@users.noreply.github.com - Matt Holt mholt@users.noreply.github.com - Eri Bastos bastos.eri@gmail.com - Michael P. Dubner pywebmail@list.ru - Antoine GIRARD sapk@users.noreply.github.com - Mateusz Piotrowski mpp302@gmail.com - Animosity022 animosity22@users.noreply.github.com earl.texter@gmail.com - Peter Baumgartner pete@lincolnloop.com - Craig Rachel craig@craigrachel.com - Michael G. 
Noll miguno@users.noreply.github.com - hensur me@hensur.de - Oliver Heyme de8olihe@lego.com - Richard Yang richard@yenforyang.com - Piotr Oleszczyk piotr.oleszczyk@gmail.com - Rodrigo rodarima@gmail.com - NoLooseEnds NoLooseEnds@users.noreply.github.com - Jakub Karlicek jakub@karlicek.me - John Clayton john@codemonkeylabs.com - Kasper Byrdal Nielsen byrdal76@gmail.com - Benjamin Joseph Dag bjdag1234@users.noreply.github.com - themylogin themylogin@gmail.com - Onno Zweers onno.zweers@surfsara.nl - Jasper Lievisse Adriaanse jasper@humppa.nl - sandeepkru sandeep.ummadi@gmail.com sandeepkru@users.noreply.github.com - HerrH atomtigerzoo@users.noreply.github.com - Andrew 4030760+sparkyman215@users.noreply.github.com - dan smith XX1011@gmail.com - Oleg Kovalov iamolegkovalov@gmail.com - Ruben Vandamme github-com-00ff86@vandamme.email - Cnly minecnly@gmail.com - Andres Alvarez 1671935+kir4h@users.noreply.github.com - reddi1 xreddi@gmail.com - Matt Tucker matthewtckr@gmail.com - Sebastian Bünger buengese@gmail.com - Martin Polden mpolden@mpolden.no - Alex Chen Cnly@users.noreply.github.com - Denis deniskovpen@gmail.com - bsteiss 35940619+bsteiss@users.noreply.github.com - Cédric Connes cedric.connes@gmail.com - Dr. Tobias Quathamer toddy15@users.noreply.github.com - dcpu 42736967+dcpu@users.noreply.github.com - Sheldon Rupp me@shel.io - albertony 12441419+albertony@users.noreply.github.com - cron410 cron410@gmail.com - Anagh Kumar Baranwal 6824881+darthShadow@users.noreply.github.com - Felix Brucker felix@felixbrucker.com - Santiago Rodríguez scollazo@users.noreply.github.com - Craig Miskell craig.miskell@fluxfederation.com - Antoine GIRARD sapk@sapk.fr - Joanna Marek joanna.marek@u2i.com - frenos frenos@users.noreply.github.com - ssaqua ssaqua@users.noreply.github.com - xnaas me@xnaas.info - Frantisek Fuka fuka@fuxoft.cz - Paul Kohout pauljkohout@yahoo.com - dcpu 43330287+dcpu@users.noreply.github.com - jackyzy823 jackyzy823@gmail.com - David Haguenauer ml@kurokatta.org - teresy hi.teresy@gmail.com - buergi patbuergi@gmx.de - Florian Gamboeck mail@floga.de - Ralf Hemberger 10364191+rhemberger@users.noreply.github.com - Scott Edlund sedlund@users.noreply.github.com - Erik Swanson erik@retailnext.net - Jake Coggiano jake@stripe.com - brused27 brused27@noemailaddress - Peter Kaminski kaminski@istori.com - Henry Ptasinski henry@logout.com - Alexander kharkovalexander@gmail.com - Garry McNulty garrmcnu@gmail.com - Mathieu Carbou mathieu.carbou@gmail.com - Mark Otway mark@otway.com - William Cocker 37018962+WilliamCocker@users.noreply.github.com - François Leurent 131.js@cloudyks.org - Arkadius Stefanski arkste@gmail.com - Jay dev@jaygoel.com - andrea rota a@xelera.eu - nicolov nicolov@users.noreply.github.com - Dario Guzik dario@guzik.com.ar - qip qip@users.noreply.github.com - yair@unicorn yair@unicorn - Matt Robinson brimstone@the.narro.ws - kayrus kay.diam@gmail.com - Rémy Léone remy.leone@gmail.com - Wojciech Smigielski wojciech.hieronim.smigielski@gmail.com - weetmuts oehrstroem@gmail.com - Jonathan vanillajonathan@users.noreply.github.com - James Carpenter orbsmiv@users.noreply.github.com - Vince vince0villamora@gmail.com - Nestar47 47841759+Nestar47@users.noreply.github.com - Six brbsix@gmail.com - Alexandru Bumbacea alexandru.bumbacea@booking.com - calisro robert.calistri@gmail.com - Dr.Rx david.rey@nventive.com - marcintustin marcintustin@users.noreply.github.com - jaKa Močnik jaka@koofr.net - Fionera fionera@fionera.de - Dan Walters dan@walters.io - Danil Semelenov 
sgtpep@users.noreply.github.com - xopez 28950736+xopez@users.noreply.github.com - Ben Boeckel mathstuf@gmail.com - Manu manu@snapdragon.cc - Kyle E. Mitchell kyle@kemitchell.com - Gary Kim gary@garykim.dev - Jon jonathn@github.com - Jeff Quinn jeffrey.quinn@bluevoyant.com - Peter Berbec peter@berbec.com - didil 1284255+didil@users.noreply.github.com - id01 gaviniboom@gmail.com - Robert Marko robimarko@gmail.com - Philip Harvey 32467456+pharveybattelle@users.noreply.github.com - JorisE JorisE@users.noreply.github.com - garry415 garry.415@gmail.com - forgems forgems@gmail.com - Florian Apolloner florian@apolloner.eu - Aleksandar Janković office@ajankovic.com ajankovic@users.noreply.github.com - Maran maran@protonmail.com - nguyenhuuluan434 nguyenhuuluan434@gmail.com - Laura Hausmann zotan@zotan.pw laura@hausmann.dev - yparitcher y@paritcher.com - AbelThar abela.tharen@gmail.com - Matti Niemenmaa matti.niemenmaa+git@iki.fi - Russell Davis russelldavis@users.noreply.github.com - Yi FU yi.fu@tink.se - Paul Millar paul.millar@desy.de - justinalin justinalin@qnap.com - EliEron subanimehd@gmail.com - justina777 chiahuei.lin@gmail.com - Chaitanya Bankanhal bchaitanya15@gmail.com - Michał Matczuk michal@scylladb.com - Macavirus macavirus@zoho.com - Abhinav Sharma abhi18av@users.noreply.github.com - ginvine 34869051+ginvine@users.noreply.github.com - Patrick Wang mail6543210@yahoo.com.tw - Cenk Alti cenkalti@gmail.com - Andreas Chlupka andy@chlupka.com - Alfonso Montero amontero@tinet.org - Ivan Andreev ivandeex@gmail.com - David Baumgold david@davidbaumgold.com - Lars Lehtonen lars.lehtonen@gmail.com - Matei David matei.david@gmail.com - David david.bramwell@endemolshine.com - Anthony Rusdi 33247310+antrusd@users.noreply.github.com - Richard Patel me@terorie.dev - 庄天翼 zty0826@gmail.com - SwitchJS dev@switchjs.com - Raphael PowershellNinja@users.noreply.github.com - Sezal Agrawal sezalagrawal@gmail.com - Tyler TylerNakamura@users.noreply.github.com - Brett Dutro brett.dutro@gmail.com - Vighnesh SK booterror99@gmail.com - Arijit Biswas dibbyo456@gmail.com - Michele Caci michele.caci@gmail.com - AlexandrBoltris ua2fgb@gmail.com - Bryce Larson blarson@saltstack.com - Carlos Ferreyra crypticmind@gmail.com - Saksham Khanna sakshamkhanna@outlook.com - dausruddin 5763466+dausruddin@users.noreply.github.com - zero-24 zero-24@users.noreply.github.com - Xiaoxing Ye ye@xiaoxing.us - Barry Muldrey barry@muldrey.net - Sebastian Brandt sebastian.brandt@friday.de - Marco Molteni marco.molteni@mailbox.org - Ankur Gupta ankur0493@gmail.com 7876747+ankur0493@users.noreply.github.com - Maciej Zimnoch maciej@scylladb.com - anuar45 serdaliyev.anuar@gmail.com - Fernando ferferga@users.noreply.github.com - David Cole david.cole@sohonet.com - Wei He git@weispot.com - Outvi V 19144373+outloudvi@users.noreply.github.com - Thomas Kriechbaumer thomas@kriechbaumer.name - Tennix tennix@users.noreply.github.com - Ole Schütt ole@schuett.name - Kuang-che Wu kcwu@csie.org - Thomas Eales wingsuit@users.noreply.github.com - Paul Tinsley paul.tinsley@vitalsource.com - Felix Hungenberg git@shiftgeist.com - Benjamin Richter github@dev.telepath.de - landall cst_zf@qq.com - thestigma thestigma@gmail.com - jtagcat 38327267+jtagcat@users.noreply.github.com - Damon Permezel permezel@me.com - boosh boosh@users.noreply.github.com - unbelauscht 58393353+unbelauscht@users.noreply.github.com - Motonori IWAMURO vmi@nifty.com - Benjapol Worakan benwrk@live.com - Dave Koston dave.koston@stackpath.com - Durval Menezes 
DurvalMenezes@users.noreply.github.com - Tim Gallant me@timgallant.us - Frederick Zhang frederick888@tsundere.moe - valery1707 valery1707@gmail.com - Yves G theYinYeti@yalis.fr - Shing Kit Chan chanshingkit@gmail.com - Franklyn Tackitt franklyn@tackitt.net - Robert-André Mauchin zebob.m@gmail.com - evileye 48332831+ibiruai@users.noreply.github.com - Joachim Brandon LeBlanc brandon@leblanc.codes - Patryk Jakuszew patryk.jakuszew@gmail.com - fishbullet shindu666@gmail.com - greatroar <@> - Bernd Schoolmann mail@quexten.com - Elan Ruusamäe glen@pld-linux.org - Max Sum max@lolyculture.com - Mark Spieth mspieth@users.noreply.github.com - harry me@harry.plus - Samantha McVey samantham@posteo.net - Jack Anderson jack.anderson@metaswitch.com - Michael G draget@speciesm.net - Brandon Philips brandon@ifup.org - Daven dooven@users.noreply.github.com - Martin Stone martin@d7415.co.uk - David Bramwell 13053834+dbramwell@users.noreply.github.com - Sunil Patra snl_su@live.com - Adam Stroud adam.stroud@gmail.com - Kush kushsharma@users.noreply.github.com - Matan Rosenberg matan129@gmail.com - gitch1 63495046+gitch1@users.noreply.github.com - ElonH elonhhuang@gmail.com - Fred fred@creativeprojects.tech - Sébastien Gross renard@users.noreply.github.com - Maxime Suret 11944422+msuret@users.noreply.github.com - Caleb Case caleb@storj.io - Ben Zenker imbenzenker@gmail.com - Martin Michlmayr tbm@cyrius.com - Brandon McNama bmcnama@pagerduty.com - Daniel Slyman github@skylayer.eu - Alex Guerrero guerrero@users.noreply.github.com - Matteo Pietro Dazzi matteopietro.dazzi@gft.com - edwardxml 56691903+edwardxml@users.noreply.github.com - Roman Kredentser shareed2k@gmail.com - Kamil Trzciński ayufan@ayufan.eu - Zac Rubin z-0@users.noreply.github.com - Vincent Feltz psycho@feltzv.fr - Heiko Bornholdt bornholdt@informatik.uni-hamburg.de - Matteo Pietro Dazzi matteopietro.dazzi@gmail.com - jtagcat gitlab@c7.ee - Petri Salminen petri@salminen.dev - Tim Burke tim.burke@gmail.com - Kai Lüke kai@kinvolk.io - Garrett Squire github@garrettsquire.com - Evan Harris eharris@puremagic.com - Kevin keyam@microsoft.com - Morten Linderud morten@linderud.pw - Dmitry Ustalov dmitry.ustalov@gmail.com - Jack 196648+jdeng@users.noreply.github.com - kcris cristian.tarsoaga@gmail.com - tyhuber1 68970760+tyhuber1@users.noreply.github.com - David Ibarra david.ibarra@realty.com - Tim Gallant tim@lilt.com - Kaloyan Raev kaloyan@storj.io - Jay McEntire jay.mcentire@gmail.com - Leo Luan leoluan@us.ibm.com - aus 549081+aus@users.noreply.github.com - Aaron Gokaslan agokaslan@fb.com - Egor Margineanu egmar@users.noreply.github.com - Lucas Kanashiro lucas.kanashiro@canonical.com - WarpedPixel WarpedPixel@users.noreply.github.com - Sam Edwards sam@samedwards.ca CONTACT THE RCLONE PROJECT Forum Forum for questions and general discussion: - https://forum.rclone.org GitHub repository The project's repository is located at: - https://github.com/rclone/rclone There you can file bug reports or contribute with pull requests. Twitter You can also follow me on Twitter for rclone announcements: - [@njcw](https://twitter.com/njcw) Email If all else fails, or if you want to ask something private or confidential, email Nick Craig-Wood. Please don't email me requests for help - those are better directed to the forum. Thanks!
rclone-1.53.3/Makefile000066400000000000000000000227001375552240400145350ustar00rootroot00000000000000SHELL = bash # Branch we are working on BRANCH := $(or $(BUILD_SOURCEBRANCHNAME),$(lastword $(subst /, ,$(GITHUB_REF))),$(shell git rev-parse --abbrev-ref HEAD)) # Tag of the current commit, if any. If this is not "" then we are building a release RELEASE_TAG := $(shell git tag -l --points-at HEAD) # Version of last release (may not be on this branch) VERSION := $(shell cat VERSION) # Last tag on this branch LAST_TAG := $(shell git describe --tags --abbrev=0) # Next version NEXT_VERSION := $(shell echo $(VERSION) | awk -F. -v OFS=. '{print $$1,$$2+1,0}') NEXT_PATCH_VERSION := $(shell echo $(VERSION) | awk -F. -v OFS=. '{print $$1,$$2,$$3+1}') # If we are working on a release, override branch to master ifdef RELEASE_TAG BRANCH := master LAST_TAG := $(shell git describe --abbrev=0 --tags $(VERSION)^) endif TAG_BRANCH := .$(BRANCH) BRANCH_PATH := branch/$(BRANCH)/ # If building HEAD or master then unset TAG_BRANCH and BRANCH_PATH ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),) TAG_BRANCH := BRANCH_PATH := endif # Make version suffix -beta.NNNN.CCCCCCCC (N=Commit number, C=Commit) VERSION_SUFFIX := -beta.$(shell git rev-list --count HEAD).$(shell git show --no-patch --no-notes --pretty='%h' HEAD) # TAG is current version + commit number + commit + branch TAG := $(VERSION)$(VERSION_SUFFIX)$(TAG_BRANCH) ifdef RELEASE_TAG TAG := $(RELEASE_TAG) endif GO_VERSION := $(shell go version) ifdef BETA_SUBDIR BETA_SUBDIR := /$(BETA_SUBDIR) endif BETA_PATH := $(BRANCH_PATH)$(TAG)$(BETA_SUBDIR) BETA_URL := https://beta.rclone.org/$(BETA_PATH)/ BETA_UPLOAD_ROOT := memstore:beta-rclone-org BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH) # Pass in GOTAGS=xyz on the make command line to set build tags ifdef GOTAGS BUILDTAGS=-tags "$(GOTAGS)" LINTTAGS=--build-tags "$(GOTAGS)" endif .PHONY: rclone test_all vars version rclone: go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) mkdir -p `go env GOPATH`/bin/ cp -av rclone`go env GOEXE` `go env GOPATH`/bin/rclone`go env GOEXE`.new mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE` test_all: go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/rclone/rclone/fstest/test_all vars: @echo SHELL="'$(SHELL)'" @echo BRANCH="'$(BRANCH)'" @echo TAG="'$(TAG)'" @echo VERSION="'$(VERSION)'" @echo GO_VERSION="'$(GO_VERSION)'" @echo BETA_URL="'$(BETA_URL)'" btest: @echo "[$(TAG)]($(BETA_URL)) on branch [$(BRANCH)](https://github.com/rclone/rclone/tree/$(BRANCH)) (uploaded in 15-30 mins)" | xclip -r -sel clip @echo "Copied markdown of beta release to clip board" version: @echo '$(TAG)' # Full suite of integration tests test: rclone test_all -test_all 2>&1 | tee test_all.log @echo "Written logs in test_all.log" # Quick test quicktest: RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) ./... racequicktest: RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race ./... # Do source code quality checks check: rclone @echo "-- START CODE QUALITY REPORT -------------------------------" @golangci-lint run $(LINTTAGS) ./... 
@echo "-- END CODE QUALITY REPORT ---------------------------------" # Get the build dependencies build_dep: go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz' # Get the release dependencies we only install on linux release_dep_linux: go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz' go run bin/get-github-release.go -extract github-release aktau/github-release 'linux-amd64-github-release.tar.bz2' # Get the release dependencies we only install on Windows release_dep_windows: GO111MODULE=off GOOS="" GOARCH="" go get github.com/josephspurrier/goversioninfo/cmd/goversioninfo # Update dependencies showupdates: @echo "*** Direct dependencies that could be updated ***" @GO111MODULE=on go list -u -f '{{if (and (not (or .Main .Indirect)) .Update)}}{{.Path}}: {{.Version}} -> {{.Update.Version}}{{end}}' -m all 2> /dev/null # Update direct and indirect dependencies and test dependencies update: GO111MODULE=on go get -u -t ./... -#GO111MODULE=on go get -d $(go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all) GO111MODULE=on go mod tidy # Tidy the module dependencies tidy: GO111MODULE=on go mod tidy doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs rclone.1: MANUAL.md pandoc -s --from markdown-smart --to man MANUAL.md -o rclone.1 MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs backenddocs ./bin/make_manual.py MANUAL.html: MANUAL.md pandoc -s --from markdown-smart --to html MANUAL.md -o MANUAL.html MANUAL.txt: MANUAL.md pandoc -s --from markdown-smart --to plain MANUAL.md -o MANUAL.txt commanddocs: rclone XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/ backenddocs: rclone bin/make_backend_docs.py XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" ./bin/make_backend_docs.py rcdocs: rclone bin/make_rc_docs.sh install: rclone install -d ${DESTDIR}/usr/bin install -t ${DESTDIR}/usr/bin ${GOPATH}/bin/rclone clean: go clean ./... find . 
-name \*~ | xargs -r rm -f rm -rf build docs/public rm -f rclone fs/operations/operations.test fs/sync/sync.test fs/test_all.log test.log website: rm -rf docs/public cd docs && hugo @if grep -R "raw HTML omitted" docs/public ; then echo "ERROR: found unescaped HTML - fix the markdown source" ; fi upload_website: website rclone -v sync docs/public memstore:www-rclone-org upload_test_website: website rclone -P sync docs/public test-rclone-org: validate_website: website find docs/public -type f -name "*.html" | xargs tidy --mute-id yes -errors --gnu-emacs yes --drop-empty-elements no --warn-proprietary-attributes no --mute MISMATCHED_ATTRIBUTE_WARN tarball: git archive -9 --format=tar.gz --prefix=rclone-$(TAG)/ -o build/rclone-$(TAG).tar.gz $(TAG) vendorball: go mod vendor tar -zcf build/rclone-$(TAG)-vendor.tar.gz vendor rm -rf vendor sign_upload: cd build && md5sum rclone-v* | gpg --clearsign > MD5SUMS cd build && sha1sum rclone-v* | gpg --clearsign > SHA1SUMS cd build && sha256sum rclone-v* | gpg --clearsign > SHA256SUMS check_sign: cd build && gpg --verify MD5SUMS && gpg --decrypt MD5SUMS | md5sum -c cd build && gpg --verify SHA1SUMS && gpg --decrypt SHA1SUMS | sha1sum -c cd build && gpg --verify SHA256SUMS && gpg --decrypt SHA256SUMS | sha256sum -c upload: rclone -P copy build/ memstore:downloads-rclone-org/$(TAG) rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "memstore:downloads-rclone-org/$(TAG)/$$i" "memstore:downloads-rclone-org/$$j"' upload_github: ./bin/upload-github $(TAG) cross: doc go run bin/cross-compile.go -release current $(BUILDTAGS) $(TAG) beta: go run bin/cross-compile.go $(BUILDTAGS) $(TAG) rclone -v copy build/ memstore:pub-rclone-org/$(TAG) @echo Beta release ready at https://pub.rclone.org/$(TAG)/ log_since_last_release: git log $(LAST_TAG).. compile_all: go run bin/cross-compile.go -compile-only $(BUILDTAGS) $(TAG) ci_upload: sudo chown -R $$USER build find build -type l -delete gzip -r9v build ./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds ifndef BRANCH_PATH ./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest endif @echo Beta release ready at $(BETA_URL)/testbuilds ci_beta: git log $(LAST_TAG).. 
> /tmp/git-log.txt go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(TAG) rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD) ifndef BRANCH_PATH rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR) endif @echo Beta release ready at $(BETA_URL) # Fetch the binary builds from GitHub actions fetch_binaries: rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/ serve: website cd docs && hugo server -v -w --disableFastRender tag: retag doc bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new mv docs/content/changelog.md.new docs/content/changelog.md @echo "Edit the new changelog in docs/content/changelog.md" @echo "Then commit all the changes" @echo git commit -m \"Version $(VERSION)\" -a -v @echo "And finally run make retag before make cross etc" retag: @echo "Version is $(VERSION)" git tag -f -s -m "Version $(VERSION)" $(VERSION) startdev: @echo "Version is $(VERSION)" @echo "Next version is $(NEXT_VERSION)" echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEXT_VERSION)-DEV\"\n" | gofmt > fs/version.go echo -n "$(NEXT_VERSION)" > docs/layouts/partials/version.html echo "$(NEXT_VERSION)" > VERSION git commit -m "Start $(NEXT_VERSION)-DEV development" fs/version.go VERSION docs/layouts/partials/version.html startstable: @echo "Version is $(VERSION)" @echo "Next stable version is $(NEXT_PATCH_VERSION)" echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEXT_PATCH_VERSION)-DEV\"\n" | gofmt > fs/version.go echo -n "$(NEXT_PATCH_VERSION)" > docs/layouts/partials/version.html echo "$(NEXT_PATCH_VERSION)" > VERSION git commit -m "Start $(NEXT_PATCH_VERSION)-DEV development" fs/version.go VERSION docs/layouts/partials/version.html winzip: zip -9 rclone-$(TAG).zip rclone.exe rclone-1.53.3/README.md000066400000000000000000000136321375552240400143600ustar00rootroot00000000000000[rclone logo](https://rclone.org/) [Website](https://rclone.org) | [Documentation](https://rclone.org/docs/) | [Download](https://rclone.org/downloads/) | [Contributing](CONTRIBUTING.md) | [Changelog](https://rclone.org/changelog/) | [Installation](https://rclone.org/install/) | [Forum](https://forum.rclone.org/) [![Build Status](https://github.com/rclone/rclone/workflows/build/badge.svg)](https://github.com/rclone/rclone/actions?query=workflow%3Abuild) [![Go Report Card](https://goreportcard.com/badge/github.com/rclone/rclone)](https://goreportcard.com/report/github.com/rclone/rclone) [![GoDoc](https://godoc.org/github.com/rclone/rclone?status.svg)](https://godoc.org/github.com/rclone/rclone) [![Docker Pulls](https://img.shields.io/docker/pulls/rclone/rclone)](https://hub.docker.com/r/rclone/rclone) # Rclone Rclone *("rsync for cloud storage")* is a command line program to sync files and directories to and from different cloud storage providers. 
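For example, after you have configured a remote (the name `remote:` and the paths below are placeholders, shown only as a sketch):

```
rclone config                            # interactive setup of a new remote
rclone copy /local/dir remote:backup     # copy new/changed files to the remote
rclone sync -i /local/dir remote:backup  # make the remote identical to local
```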
## Storage providers * 1Fichier [:page_facing_up:](https://rclone.org/fichier/) * Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss) * Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status)) * Amazon S3 [:page_facing_up:](https://rclone.org/s3/) * Backblaze B2 [:page_facing_up:](https://rclone.org/b2/) * Box [:page_facing_up:](https://rclone.org/box/) * Ceph [:page_facing_up:](https://rclone.org/s3/#ceph) * Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/) * DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces) * Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost) * Dropbox [:page_facing_up:](https://rclone.org/dropbox/) * FTP [:page_facing_up:](https://rclone.org/ftp/) * GetSky [:page_facing_up:](https://rclone.org/jottacloud/) * Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/) * Google Drive [:page_facing_up:](https://rclone.org/drive/) * Google Photos [:page_facing_up:](https://rclone.org/googlephotos/) * HTTP [:page_facing_up:](https://rclone.org/http/) * Hubic [:page_facing_up:](https://rclone.org/hubic/) * Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/) * IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3) * Koofr [:page_facing_up:](https://rclone.org/koofr/) * Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/) * Memset Memstore [:page_facing_up:](https://rclone.org/swift/) * Mega [:page_facing_up:](https://rclone.org/mega/) * Memory [:page_facing_up:](https://rclone.org/memory/) * Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/) * Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/) * Minio [:page_facing_up:](https://rclone.org/s3/#minio) * Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud) * OVH [:page_facing_up:](https://rclone.org/swift/) * OpenDrive [:page_facing_up:](https://rclone.org/opendrive/) * OpenStack Swift [:page_facing_up:](https://rclone.org/swift/) * Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/) * ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud) * pCloud [:page_facing_up:](https://rclone.org/pcloud/) * premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/) * put.io [:page_facing_up:](https://rclone.org/putio/) * QingStor [:page_facing_up:](https://rclone.org/qingstor/) * Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/) * Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway) * Seafile [:page_facing_up:](https://rclone.org/seafile/) * SFTP [:page_facing_up:](https://rclone.org/sftp/) * StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath) * SugarSync [:page_facing_up:](https://rclone.org/sugarsync/) * Tardigrade [:page_facing_up:](https://rclone.org/tardigrade/) * Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos) * Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi) * WebDAV [:page_facing_up:](https://rclone.org/webdav/) * Yandex Disk [:page_facing_up:](https://rclone.org/yandex/) * The local filesystem [:page_facing_up:](https://rclone.org/local/) Please see [the full list of all storage providers and their features](https://rclone.org/overview/) ## Features * MD5/SHA-1 hashes checked at all times for file integrity * Timestamps preserved on files * Partial syncs supported on a whole file basis * 
[Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files * [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical * [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality * Can sync to and from network, e.g. two different cloud accounts * Optional large file chunking ([Chunker](https://rclone.org/chunker/)) * Optional encryption ([Crypt](https://rclone.org/crypt/)) * Optional cache ([Cache](https://rclone.org/cache/)) * Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/)) * Multi-threaded downloads to local disk * Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDav/FTP/SFTP/dlna ## Installation & documentation Please see the [rclone website](https://rclone.org/) for: * [Installation](https://rclone.org/install/) * [Documentation & configuration](https://rclone.org/docs/) * [Changelog](https://rclone.org/changelog/) * [FAQ](https://rclone.org/faq/) * [Storage providers](https://rclone.org/overview/) * [Forum](https://forum.rclone.org/) * ...and more ## Downloads * https://rclone.org/downloads/ License ------- This is free software under the terms of the MIT license (check the [COPYING file](/COPYING) included in this package). rclone-1.53.3/RELEASE.md000066400000000000000000000050501375552240400144760ustar00rootroot00000000000000# Release This file describes how to make the various kinds of releases. ## Extra required software for making a release * [github-release](https://github.com/aktau/github-release) for uploading packages * pandoc for making the html and man pages ## Making a release * git checkout master # see below for stable branch * git pull * git status - make sure everything is checked in * Check GitHub actions build for master is Green * make test # see integration test server or run locally * make tag * edit docs/content/changelog.md # make sure to remove duplicate logs from point releases * make tidy * make doc * git status - to check for new man pages - git add them * git commit -a -v -m "Version v1.XX.0" * make retag * git push --tags origin master * # Wait for the GitHub builds to complete then... * make fetch_binaries * make tarball * make vendorball * make sign_upload * make check_sign * make upload * make upload_website * make upload_github * make startdev # make startstable for stable branch * # announce with forum post, twitter post, patreon post Early in the next release cycle update the dependencies * Review any pinned packages in go.mod and remove if possible * make update * git status * git add new files * git commit -a -v ## Making a point release If rclone needs a point release due to some horrendous bug: Set vars * BASE_TAG=v1.XX # eg v1.52 * NEW_TAG=${BASE_TAG}.Y # eg v1.52.1 * echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1 First make the release branch. If this is a second point release then this will be done already. * git branch ${BASE_TAG} ${BASE_TAG}-stable * git co ${BASE_TAG}-stable * make startstable Now * git co ${BASE_TAG}-stable * git cherry-pick any fixes * Do the steps as above * make startstable * NB this overwrites the current beta so we need to do this - FIXME is this true any more? * git co master * # cherry pick the changes to the changelog * git checkout ${BASE_TAG}-stable docs/content/changelog.md * git commit -a -v -m "Changelog updates from Version ${NEW_TAG}" * git push ## Making a manual build of docker The rclone docker image should autobuild via GitHub actions.
If it doesn't or needs to be updated then rebuild like this. ``` docker pull golang docker build --rm --ulimit memlock=67108864 -t rclone/rclone:1.52.0 -t rclone/rclone:1.52 -t rclone/rclone:1 -t rclone/rclone:latest . docker push rclone/rclone:1.52.0 docker push rclone/rclone:1.52 docker push rclone/rclone:1 docker push rclone/rclone:latest ``` rclone-1.53.3/VERSION000066400000000000000000000000101375552240400141330ustar00rootroot00000000000000v1.53.3 rclone-1.53.3/backend/000077500000000000000000000000001375552240400144635ustar00rootroot00000000000000rclone-1.53.3/backend/alias/000077500000000000000000000000001375552240400155545ustar00rootroot00000000000000rclone-1.53.3/backend/alias/alias.go000066400000000000000000000025631375552240400172020ustar00rootroot00000000000000package alias import ( "errors" "strings" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fspath" ) // Register with Fs func init() { fsi := &fs.RegInfo{ Name: "alias", Description: "Alias for an existing remote", NewFs: NewFs, Options: []fs.Option{{ Name: "remote", Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".", Required: true, }}, } fs.Register(fsi) } // Options defines the configuration for this backend type Options struct { Remote string `config:"remote"` } // NewFs constructs an Fs from the path. // // The returned Fs is the actual Fs, referenced by remote in the config func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } if opt.Remote == "" { return nil, errors.New("alias can't point to an empty remote - check the value of the remote setting") } if strings.HasPrefix(opt.Remote, name+":") { return nil, errors.New("can't point alias remote at itself - check the value of the remote setting") } return cache.Get(fspath.JoinRootPath(opt.Remote, root)) } rclone-1.53.3/backend/alias/alias_internal_test.go000066400000000000000000000047661375552240400221440ustar00rootroot00000000000000package alias import ( "context" "fmt" "path" "path/filepath" "sort" "testing" _ "github.com/rclone/rclone/backend/local" // pull in test backend "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/stretchr/testify/require" ) var ( remoteName = "TestAlias" ) func prepare(t *testing.T, root string) { config.LoadConfig() // Configure the remote config.FileSet(remoteName, "type", "alias") config.FileSet(remoteName, "remote", root) } func TestNewFS(t *testing.T) { type testEntry struct { remote string size int64 isDir bool } for testi, test := range []struct { remoteRoot string fsRoot string fsList string wantOK bool entries []testEntry }{ {"", "", "", true, []testEntry{ {"four", -1, true}, {"one%.txt", 6, false}, {"three", -1, true}, {"two.html", 7, false}, }}, {"", "four", "", true, []testEntry{ {"five", -1, true}, {"under four.txt", 9, false}, }}, {"", "", "four", true, []testEntry{ {"four/five", -1, true}, {"four/under four.txt", 9, false}, }}, {"four", "..", "", true, []testEntry{ {"four", -1, true}, {"one%.txt", 6, false}, {"three", -1, true}, {"two.html", 7, false}, }}, {"four", "../three", "", true, []testEntry{ {"underthree.txt", 9, false}, }}, } { what := fmt.Sprintf("test %d remoteRoot=%q, fsRoot=%q, fsList=%q", testi, test.remoteRoot, test.fsRoot, test.fsList) 
remoteRoot, err := filepath.Abs(filepath.FromSlash(path.Join("test/files", test.remoteRoot))) require.NoError(t, err, what) prepare(t, remoteRoot) f, err := fs.NewFs(fmt.Sprintf("%s:%s", remoteName, test.fsRoot)) require.NoError(t, err, what) gotEntries, err := f.List(context.Background(), test.fsList) require.NoError(t, err, what) sort.Sort(gotEntries) require.Equal(t, len(test.entries), len(gotEntries), what) for i, gotEntry := range gotEntries { what := fmt.Sprintf("%s, entry=%d", what, i) wantEntry := test.entries[i] require.Equal(t, wantEntry.remote, gotEntry.Remote(), what) require.Equal(t, wantEntry.size, gotEntry.Size(), what) _, isDir := gotEntry.(fs.Directory) require.Equal(t, wantEntry.isDir, isDir, what) } } } func TestNewFSNoRemote(t *testing.T) { prepare(t, "") f, err := fs.NewFs(fmt.Sprintf("%s:", remoteName)) require.Error(t, err) require.Nil(t, f) } func TestNewFSInvalidRemote(t *testing.T) { prepare(t, "not_existing_test_remote:") f, err := fs.NewFs(fmt.Sprintf("%s:", remoteName)) require.Error(t, err) require.Nil(t, f) } rclone-1.53.3/backend/alias/test/000077500000000000000000000000001375552240400165335ustar00rootroot00000000000000rclone-1.53.3/backend/alias/test/files/000077500000000000000000000000001375552240400176355ustar00rootroot00000000000000rclone-1.53.3/backend/alias/test/files/four/000077500000000000000000000000001375552240400206105ustar00rootroot00000000000000rclone-1.53.3/backend/alias/test/files/four/five/000077500000000000000000000000001375552240400215415ustar00rootroot00000000000000rclone-1.53.3/backend/alias/test/files/four/five/underfive.txt000066400000000000000000000000061375552240400242650ustar00rootroot00000000000000apple rclone-1.53.3/backend/alias/test/files/four/under four.txt000066400000000000000000000000111375552240400234120ustar00rootroot00000000000000beetroot rclone-1.53.3/backend/alias/test/files/one%.txt000066400000000000000000000000061375552240400212200ustar00rootroot00000000000000hello rclone-1.53.3/backend/alias/test/files/three/000077500000000000000000000000001375552240400207445ustar00rootroot00000000000000rclone-1.53.3/backend/alias/test/files/three/underthree.txt000066400000000000000000000000111375552240400236420ustar00rootroot00000000000000rutabaga rclone-1.53.3/backend/alias/test/files/two.html000066400000000000000000000000071375552240400213310ustar00rootroot00000000000000potato rclone-1.53.3/backend/all/000077500000000000000000000000001375552240400152335ustar00rootroot00000000000000rclone-1.53.3/backend/all/all.go000066400000000000000000000033761375552240400163430ustar00rootroot00000000000000package all import ( // Active file systems _ "github.com/rclone/rclone/backend/alias" _ "github.com/rclone/rclone/backend/amazonclouddrive" _ "github.com/rclone/rclone/backend/azureblob" _ "github.com/rclone/rclone/backend/b2" _ "github.com/rclone/rclone/backend/box" _ "github.com/rclone/rclone/backend/cache" _ "github.com/rclone/rclone/backend/chunker" _ "github.com/rclone/rclone/backend/crypt" _ "github.com/rclone/rclone/backend/drive" _ "github.com/rclone/rclone/backend/dropbox" _ "github.com/rclone/rclone/backend/fichier" _ "github.com/rclone/rclone/backend/ftp" _ "github.com/rclone/rclone/backend/googlecloudstorage" _ "github.com/rclone/rclone/backend/googlephotos" _ "github.com/rclone/rclone/backend/http" _ "github.com/rclone/rclone/backend/hubic" _ "github.com/rclone/rclone/backend/jottacloud" _ "github.com/rclone/rclone/backend/koofr" _ "github.com/rclone/rclone/backend/local" _ "github.com/rclone/rclone/backend/mailru" _ 
"github.com/rclone/rclone/backend/mega" _ "github.com/rclone/rclone/backend/memory" _ "github.com/rclone/rclone/backend/onedrive" _ "github.com/rclone/rclone/backend/opendrive" _ "github.com/rclone/rclone/backend/pcloud" _ "github.com/rclone/rclone/backend/premiumizeme" _ "github.com/rclone/rclone/backend/putio" _ "github.com/rclone/rclone/backend/qingstor" _ "github.com/rclone/rclone/backend/s3" _ "github.com/rclone/rclone/backend/seafile" _ "github.com/rclone/rclone/backend/sftp" _ "github.com/rclone/rclone/backend/sharefile" _ "github.com/rclone/rclone/backend/sugarsync" _ "github.com/rclone/rclone/backend/swift" _ "github.com/rclone/rclone/backend/tardigrade" _ "github.com/rclone/rclone/backend/union" _ "github.com/rclone/rclone/backend/webdav" _ "github.com/rclone/rclone/backend/yandex" ) rclone-1.53.3/backend/amazonclouddrive/000077500000000000000000000000001375552240400200315ustar00rootroot00000000000000rclone-1.53.3/backend/amazonclouddrive/amazonclouddrive.go000066400000000000000000001203741375552240400237350ustar00rootroot00000000000000// Package amazonclouddrive provides an interface to the Amazon Cloud // Drive object storage system. package amazonclouddrive /* FIXME make searching for directory in id and file in id more efficient - use the name: search parameter - remember the escaping rules - use Folder GetNode and GetFile FIXME make the default for no files and no dirs be (FILE & FOLDER) so we ignore assets completely! */ import ( "context" "encoding/json" "fmt" "io" "log" "net/http" "path" "strings" "time" acd "github.com/ncw/go-acd" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "golang.org/x/oauth2" ) const ( folderKind = "FOLDER" fileKind = "FILE" statusAvailable = "AVAILABLE" timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z minSleep = 20 * time.Millisecond warnFileSize = 50000 << 20 // Display warning for files larger than this size defaultTempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink ) // Globals var ( // Description of how to auth for this app acdConfig = &oauth2.Config{ Scopes: []string{"clouddrive:read_all", "clouddrive:write"}, Endpoint: oauth2.Endpoint{ AuthURL: "https://www.amazon.com/ap/oa", TokenURL: "https://api.amazon.com/auth/o2/token", }, ClientID: "", ClientSecret: "", RedirectURL: oauthutil.RedirectURL, } ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "amazon cloud drive", Prefix: "acd", Description: "Amazon Drive", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { err := oauthutil.Config("amazon cloud drive", name, m, acdConfig, nil) if err != nil { log.Fatalf("Failed to configure token: %v", err) } }, Options: append(oauthutil.SharedOptions, []fs.Option{{ Name: "checkpoint", Help: "Checkpoint for internal polling (debug).", Hide: fs.OptionHideBoth, Advanced: true, }, { Name: "upload_wait_per_gb", Help: `Additional time per GB to wait after a failed complete upload to see if it appears. Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. 
This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear. The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears. You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually. These values were determined empirically by observing lots of uploads of big files for a range of file sizes. Upload with the "-v" flag to see more info about what rclone is doing in this situation.`, Default: fs.Duration(180 * time.Second), Advanced: true, }, { Name: "templink_threshold", Help: `Files >= this size will be downloaded via their tempLink. Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed. To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.`, Default: defaultTempLinkThreshold, Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Encode invalid UTF-8 bytes as json doesn't handle them properly. Default: (encoder.Base | encoder.EncodeInvalidUtf8), }}...), }) } // Options defines the configuration for this backend type Options struct { Checkpoint string `config:"checkpoint"` UploadWaitPerGB fs.Duration `config:"upload_wait_per_gb"` TempLinkThreshold fs.SizeSuffix `config:"templink_threshold"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote acd server type Fs struct { name string // name of this remote features *fs.Features // optional features opt Options // options for this Fs c *acd.Client // the connection to the acd server noAuthClient *http.Client // unauthenticated http client root string // the path we are working on dirCache *dircache.DirCache // Map of directory path to directory id pacer *fs.Pacer // pacer for API calls trueRootID string // ID of true root directory tokenRenewer *oauthutil.Renew // renew the token on expiry } // Object describes an acd object // // Will definitely have info but maybe not meta type Object struct { fs *Fs // what this object is part of remote string // The remote path info *acd.Node // Info from the acd object if known } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("amazon drive root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses an acd 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 400, // Bad request (seen in "Next token is expired") 401, // Unauthorized (seen in "Token has expired") 408, // Request Timeout 429, // Rate exceeded. 
500, // Get occasional 500 Internal Server Error 502, // Bad Gateway when doing big listings 503, // Service Unavailable 504, // Gateway Time-out } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) { if resp != nil { if resp.StatusCode == 401 { f.tokenRenewer.Invalidate() fs.Debugf(f, "401 error received - invalidating token") return true, err } // Work around receiving this error sporadically on authentication // // HTTP code 403: "403 Forbidden", response body: {"message":"Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. Authorization=Bearer"} if resp.StatusCode == 403 && strings.Contains(err.Error(), "Authorization header requires") { fs.Debugf(f, "403 \"Authorization header requires...\" error received - retry") return true, err } } return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // If query parameters contain X-Amz-Algorithm remove Authorization header // // This happens when ACD redirects to S3 for the download. The oauth // transport puts an Authorization header in which we need to remove // otherwise we get this message from AWS // // Only one auth mechanism allowed; only the X-Amz-Algorithm query // parameter, Signature query string parameter or the Authorization // header should be specified func filterRequest(req *http.Request) { if req.URL.Query().Get("X-Amz-Algorithm") != "" { fs.Debugf(nil, "Removing Authorization: header after redirect to S3") req.Header.Del("Authorization") } } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } root = parsePath(root) baseClient := fshttp.NewClient(fs.Config) if do, ok := baseClient.Transport.(interface { SetRequestFilter(f func(req *http.Request)) }); ok { do.SetRequestFilter(filterRequest) } else { fs.Debugf(name+":", "Couldn't add request filter - large file downloads will fail") } oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, acdConfig, baseClient) if err != nil { return nil, errors.Wrap(err, "failed to configure Amazon Drive") } c := acd.NewClient(oAuthClient) f := &Fs{ name: name, root: root, opt: *opt, c: c, pacer: fs.NewPacer(pacer.NewAmazonCloudDrive(pacer.MinSleep(minSleep))), noAuthClient: fshttp.NewClient(fs.Config), } f.features = (&fs.Features{ CaseInsensitive: true, ReadMimeType: true, CanHaveEmptyDirectories: true, }).Fill(f) // Renew the token in the background f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { _, err := f.getRootInfo() return err }) // Update endpoints var resp *http.Response err = f.pacer.Call(func() (bool, error) { _, resp, err = f.c.Account.GetEndpoints() return f.shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "failed to get endpoints") } // Get rootID rootInfo, err := f.getRootInfo() if err != nil || rootInfo.Id == nil { return nil, errors.Wrap(err, "failed to get root") } f.trueRootID = *rootInfo.Id f.dirCache = dircache.New(root, f.trueRootID, f) // Find the current root err = f.dirCache.FindRoot(ctx, false) if err != nil { // Assume it is 
a file newRoot, remote := dircache.SplitPath(root) tempF := *f tempF.dirCache = dircache.New(newRoot, f.trueRootID, &tempF) tempF.root = newRoot // Make new Fs which is the parent err = tempF.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f return f, nil } _, err := tempF.newObjectWithInfo(ctx, remote, nil) if err != nil { if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f return f, nil } return nil, err } // XXX: update the old f here instead of returning tempF, since // `features` were already filled with functions having *f as a receiver. // See https://github.com/rclone/rclone/issues/2182 f.dirCache = tempF.dirCache f.root = tempF.root // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // getRootInfo gets the root folder info func (f *Fs) getRootInfo() (rootInfo *acd.Folder, err error) { var resp *http.Response err = f.pacer.Call(func() (bool, error) { rootInfo, resp, err = f.c.Nodes.GetRoot() return f.shouldRetry(resp, err) }) return rootInfo, err } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *acd.Node) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } if info != nil { // Set info but not meta o.info = info } else { err := o.readMetaData(ctx) // reads info and meta, returning an error if err != nil { return nil, err } } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { //fs.Debugf(f, "FindLeaf(%q, %q)", pathID, leaf) folder := acd.FolderFromId(pathID, f.c.Nodes) var resp *http.Response var subFolder *acd.Folder err = f.pacer.Call(func() (bool, error) { subFolder, resp, err = folder.GetFolder(f.opt.Enc.FromStandardName(leaf)) return f.shouldRetry(resp, err) }) if err != nil { if err == acd.ErrorNodeNotFound { //fs.Debugf(f, "...Not found") return "", false, nil } //fs.Debugf(f, "...Error %v", err) return "", false, err } if subFolder.Status != nil && *subFolder.Status != statusAvailable { fs.Debugf(f, "Ignoring folder %q in state %q", leaf, *subFolder.Status) time.Sleep(1 * time.Second) // FIXME wait for problem to go away! 
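// Returning "", false, nil below makes dircache treat the still-pending
// folder as absent rather than failing the whole operation; a higher
// level retry will find it once its status becomes AVAILABLE.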
return "", false, nil } //fs.Debugf(f, "...Found(%q, %v)", *subFolder.Id, leaf) return *subFolder.Id, true, nil } // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) { //fmt.Printf("CreateDir(%q, %q)\n", pathID, leaf) folder := acd.FolderFromId(pathID, f.c.Nodes) var resp *http.Response var info *acd.Folder err = f.pacer.Call(func() (bool, error) { info, resp, err = folder.CreateFolder(f.opt.Enc.FromStandardName(leaf)) return f.shouldRetry(resp, err) }) if err != nil { //fmt.Printf("...Error %v\n", err) return "", err } //fmt.Printf("...Id %q\n", *info.Id) return *info.Id, nil } // list the objects into the function supplied // // If directories is set it only sends directories // User function to process a File item from listAll // // Should return true to finish processing type listAllFn func(*acd.Node) bool // Lists the directory required calling the user function on each item found // // If the user fn ever returns true then it early exits with found = true func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) { query := "parents:" + dirID if directoriesOnly { query += " AND kind:" + folderKind } else if filesOnly { query += " AND kind:" + fileKind } else { // FIXME none of these work //query += " AND kind:(" + fileKind + " OR " + folderKind + ")" //query += " AND (kind:" + fileKind + " OR kind:" + folderKind + ")" } opts := acd.NodeListOptions{ Filters: query, } var nodes []*acd.Node var out []*acd.Node //var resp *http.Response for { var resp *http.Response err = f.pacer.CallNoRetry(func() (bool, error) { nodes, resp, err = f.c.Nodes.GetNodes(&opts) return f.shouldRetry(resp, err) }) if err != nil { return false, err } if nodes == nil { break } for _, node := range nodes { if node.Name != nil && node.Id != nil && node.Kind != nil && node.Status != nil { // Ignore nodes if not AVAILABLE if *node.Status != statusAvailable { continue } // Ignore bogus nodes Amazon Drive sometimes reports hasValidParent := false for _, parent := range node.Parents { if parent == dirID { hasValidParent = true break } } if !hasValidParent { continue } *node.Name = f.opt.Enc.ToStandardName(*node.Name) // Store the nodes up in case we have to retry the listing out = append(out, node) } } } // Send the nodes now for _, node := range out { if fn(node) { found = true break } } return } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. 
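// A sketch of typical usage (illustrative, not part of this file):
//
//	entries, err := f.List(ctx, "some/dir")
//	if err != nil {
//		return err
//	}
//	for _, entry := range entries {
//		fs.Debugf(f, "found %q", entry.Remote())
//	}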
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } maxTries := fs.Config.LowLevelRetries var iErr error for tries := 1; tries <= maxTries; tries++ { entries = nil _, err = f.listAll(directoryID, "", false, false, func(node *acd.Node) bool { remote := path.Join(dir, *node.Name) switch *node.Kind { case folderKind: // cache the directory ID for later lookups f.dirCache.Put(remote, *node.Id) when, _ := time.Parse(timeFormat, *node.ModifiedDate) // FIXME d := fs.NewDir(remote, when).SetID(*node.Id) entries = append(entries, d) case fileKind: o, err := f.newObjectWithInfo(ctx, remote, node) if err != nil { iErr = err return true } entries = append(entries, o) default: // ignore ASSET etc } return false }) if iErr != nil { return nil, iErr } if fserrors.IsRetryError(err) { fs.Debugf(f, "Directory listing error for %q: %v - low level retry %d/%d", dir, err, tries, maxTries) continue } if err != nil { return nil, err } break } return entries, nil } // checkUpload checks to see if an error occurred after the file was // completely uploaded. // // If it was then it waits for a while to see if the file really // exists and is the right size and returns an updated info. // // If the file wasn't found or was the wrong size then it returns the // original error. // // This is a workaround for Amazon sometimes returning // // * 408 REQUEST_TIMEOUT // * 504 GATEWAY_TIMEOUT // * 500 Internal server error // // At the end of large uploads. The speculation is that the timeout // is waiting for the sha1 hashing to complete and the file may well // be properly uploaded. func (f *Fs) checkUpload(ctx context.Context, resp *http.Response, in io.Reader, src fs.ObjectInfo, inInfo *acd.File, inErr error, uploadTime time.Duration) (fixedError bool, info *acd.File, err error) { // Return if no error - all is well if inErr == nil { return false, inInfo, inErr } // If not one of the errors we can fix return // if resp == nil || resp.StatusCode != 408 && resp.StatusCode != 500 && resp.StatusCode != 504 { // return false, inInfo, inErr // } // The HTTP status httpStatus := "HTTP status UNKNOWN" if resp != nil { httpStatus = resp.Status } // check to see if we read to the end buf := make([]byte, 1) n, err := in.Read(buf) if !(n == 0 && err == io.EOF) { fs.Debugf(src, "Upload error detected but didn't finish upload: %v (%q)", inErr, httpStatus) return false, inInfo, inErr } // Don't wait for uploads - assume they will appear later if f.opt.UploadWaitPerGB <= 0 { fs.Debugf(src, "Upload error detected but waiting disabled: %v (%q)", inErr, httpStatus) return false, inInfo, inErr } // Time we should wait for the upload uploadWaitPerByte := float64(f.opt.UploadWaitPerGB) / 1024 / 1024 / 1024 timeToWait := time.Duration(uploadWaitPerByte * float64(src.Size())) const sleepTime = 5 * time.Second // sleep between tries retries := int((timeToWait + sleepTime - 1) / sleepTime) // number of retries, rounded up fs.Debugf(src, "Error detected after finished upload - waiting to see if object was uploaded correctly: %v (%q)", inErr, httpStatus) remote := src.Remote() for i := 1; i <= retries; i++ { o, err := f.NewObject(ctx, remote) if err == fs.ErrorObjectNotFound { fs.Debugf(src, "Object not found - waiting (%d/%d)", i, retries) } else if err != nil { fs.Debugf(src, "Object returned error - waiting (%d/%d): %v", i, retries, err) } else { if src.Size() == o.Size() { fs.Debugf(src, "Object found with correct size 
%d after waiting (%d/%d) - %v - returning with no error", src.Size(), i, retries, sleepTime*time.Duration(i-1)) info = &acd.File{ Node: o.(*Object).info, } return true, info, nil } fs.Debugf(src, "Object found but wrong size %d vs %d - waiting (%d/%d)", src.Size(), o.Size(), i, retries) } time.Sleep(sleepTime) } fs.Debugf(src, "Giving up waiting for object - returning original error: %v (%q)", inErr, httpStatus) return false, inInfo, inErr } // Put the object into the container // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { remote := src.Remote() size := src.Size() // Temporary Object under construction o := &Object{ fs: f, remote: remote, } // Check if object already exists err := o.readMetaData(ctx) switch err { case nil: return o, o.Update(ctx, in, src, options...) case fs.ErrorObjectNotFound: // Not found so create it default: return nil, err } // If not create it leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true) if err != nil { return nil, err } if size > warnFileSize { fs.Logf(f, "Warning: file %q may fail because it is too big. Use --max-size=%dM to skip large files.", remote, warnFileSize>>20) } folder := acd.FolderFromId(directoryID, o.fs.c.Nodes) var info *acd.File var resp *http.Response err = f.pacer.CallNoRetry(func() (bool, error) { start := time.Now() f.tokenRenewer.Start() info, resp, err = folder.Put(in, f.opt.Enc.FromStandardName(leaf)) f.tokenRenewer.Stop() var ok bool ok, info, err = f.checkUpload(ctx, resp, in, src, info, err, time.Since(start)) if ok { return false, nil } return f.shouldRetry(resp, err) }) if err != nil { return nil, err } o.info = info.Node return o, nil } // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { _, err := f.dirCache.FindDir(ctx, dir, true) return err } // Move src to this remote using server side move operations. 
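// On this backend a move is a rename via renameNode plus a parent swap
// via replaceParent (see moveNode below). A sketch of the caller-side
// contract, with illustrative names:
//
//	newObj, err := f.Move(ctx, srcObj, "new/name.txt")
//	if err == fs.ErrorCantMove {
//		// caller falls back to copy + delete
//	}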
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { // go test -v -run '^Test(Setup|Init|FsMkdir|FsPutFile1|FsPutFile2|FsUpdateFile1|FsMove)$' srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } // create the destination directory if necessary srcLeaf, srcDirectoryID, err := srcObj.fs.dirCache.FindPath(ctx, srcObj.remote, false) if err != nil { return nil, err } dstLeaf, dstDirectoryID, err := f.dirCache.FindPath(ctx, remote, true) if err != nil { return nil, err } err = f.moveNode(srcObj.remote, dstLeaf, dstDirectoryID, srcObj.info, srcLeaf, srcDirectoryID, false) if err != nil { return nil, err } // Wait for directory caching so we can no longer see the old // object and see the new object time.Sleep(200 * time.Millisecond) // enough time 90% of the time var ( dstObj fs.Object srcErr, dstErr error ) for i := 1; i <= fs.Config.LowLevelRetries; i++ { _, srcErr = srcObj.fs.NewObject(ctx, srcObj.remote) // try reading the object if srcErr != nil && srcErr != fs.ErrorObjectNotFound { // exit if error on source return nil, srcErr } dstObj, dstErr = f.NewObject(ctx, remote) if dstErr != nil && dstErr != fs.ErrorObjectNotFound { // exit if error on dst return nil, dstErr } if srcErr == fs.ErrorObjectNotFound && dstErr == nil { // finished if src not found and dst found break } fs.Debugf(src, "Wait for directory listing to update after move %d/%d", i, fs.Config.LowLevelRetries) time.Sleep(1 * time.Second) } return dstObj, dstErr } // DirCacheFlush resets the directory cache - used in testing as an // optional interface func (f *Fs) DirCacheFlush() { f.dirCache.ResetRoot() } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. 
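// A sketch of the contract from the caller's side (illustrative):
//
//	err := f.DirMove(ctx, srcFs, "old/dir", "new/dir")
//	switch err {
//	case fs.ErrorDirExists:
//		// destination already present
//	case fs.ErrorCantDirMove:
//		// not the same remote type - caller falls back to copy
//	}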
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(src, "DirMove error: not same remote type") return fs.ErrorCantDirMove } srcPath := path.Join(srcFs.root, srcRemote) dstPath := path.Join(f.root, dstRemote) // Refuse to move to or from the root if srcPath == "" || dstPath == "" { fs.Debugf(src, "DirMove error: Can't move root") return errors.New("can't move root directory") } // Find ID of dst parent, creating subdirs if necessary dstLeaf, dstDirectoryID, err := f.dirCache.FindPath(ctx, dstRemote, true) if err != nil { return err } // Check destination does not exist _, err = f.dirCache.FindDir(ctx, dstRemote, false) if err == fs.ErrorDirNotFound { // OK } else if err != nil { return err } else { return fs.ErrorDirExists } // Find ID of src parent _, srcDirectoryID, err := srcFs.dirCache.FindPath(ctx, srcRemote, false) if err != nil { return err } srcLeaf, _ := dircache.SplitPath(srcPath) // Find ID of src srcID, err := srcFs.dirCache.FindDir(ctx, srcRemote, false) if err != nil { return err } // FIXME make a proper node.UpdateMetadata command srcInfo := acd.NodeFromId(srcID, f.c.Nodes) var jsonStr string err = srcFs.pacer.Call(func() (bool, error) { jsonStr, err = srcInfo.GetMetadata() return srcFs.shouldRetry(nil, err) }) if err != nil { fs.Debugf(src, "DirMove error: error reading src metadata: %v", err) return err } err = json.Unmarshal([]byte(jsonStr), &srcInfo) if err != nil { fs.Debugf(src, "DirMove error: error unpacking src metadata: %v", err) return err } err = f.moveNode(srcPath, dstLeaf, dstDirectoryID, srcInfo, srcLeaf, srcDirectoryID, true) if err != nil { return err } srcFs.dirCache.FlushDir(srcRemote) return nil } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in it func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { root := path.Join(f.root, dir) if root == "" { return errors.New("can't purge root directory") } dc := f.dirCache rootID, err := dc.FindDir(ctx, dir, false) if err != nil { return err } if check { // check directory is empty empty := true _, err = f.listAll(rootID, "", false, false, func(node *acd.Node) bool { switch *node.Kind { case folderKind: empty = false return true case fileKind: empty = false return true default: fs.Debugf(f, "Found ASSET %s", *node.Id) } return false }) if err != nil { return err } if !empty { return errors.New("directory not empty") } } node := acd.NodeFromId(rootID, f.c.Nodes) var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = node.Trash() return f.shouldRetry(resp, err) }) if err != nil { return err } f.dirCache.FlushDir(dir) return nil } // Rmdir deletes the root folder // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision returns the precision of this Fs func (f *Fs) Precision() time.Duration { return fs.ModTimeNotSupported } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } // Copy src to this remote using server side copy operations. 
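// (The implementation below is commented out, so fs.Copier is not
// registered in the interface checks at the end of this file and the
// operations layer falls back to a streamed copy, roughly:
//
//	in, err := srcObj.Open(ctx)       // download from the source
//	if err == nil {
//		_, err = f.Put(ctx, in, srcObj) // re-upload to the destination
//	}
//
// so server side copy is effectively unsupported on Amazon Drive.)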
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy //func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { // srcObj, ok := src.(*Object) // if !ok { // fs.Debugf(src, "Can't copy - not same remote type") // return nil, fs.ErrorCantCopy // } // srcFs := srcObj.fs // _, err := f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil) // if err != nil { // return nil, err // } // return f.NewObject(ctx, remote), nil //} // Purge deletes all the files and the container // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the Md5sum of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } if o.info.ContentProperties != nil && o.info.ContentProperties.Md5 != nil { return *o.info.ContentProperties.Md5, nil } return "", nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { if o.info.ContentProperties != nil && o.info.ContentProperties.Size != nil { return int64(*o.info.ContentProperties.Size) } return 0 // Object is likely PENDING } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info // // If it can't be found it returns the error fs.ErrorObjectNotFound. 
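// A sketch of the lookup performed below: FindPath resolves the parent
// directory ID, then GetFile fetches the node by (encoded) leaf name:
//
//	leaf, dirID, _ := o.fs.dirCache.FindPath(ctx, o.remote, false)
//	folder := acd.FolderFromId(dirID, o.fs.c.Nodes)
//	info, _, err := folder.GetFile(o.fs.opt.Enc.FromStandardName(leaf))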
func (o *Object) readMetaData(ctx context.Context) (err error) { if o.info != nil { return nil } leaf, directoryID, err := o.fs.dirCache.FindPath(ctx, o.remote, false) if err != nil { if err == fs.ErrorDirNotFound { return fs.ErrorObjectNotFound } return err } folder := acd.FolderFromId(directoryID, o.fs.c.Nodes) var resp *http.Response var info *acd.File err = o.fs.pacer.Call(func() (bool, error) { info, resp, err = folder.GetFile(o.fs.opt.Enc.FromStandardName(leaf)) return o.fs.shouldRetry(resp, err) }) if err != nil { if err == acd.ErrorNodeNotFound { return fs.ErrorObjectNotFound } return err } o.info = info.Node return nil } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx) if err != nil { fs.Debugf(o, "Failed to read metadata: %v", err) return time.Now() } modTime, err := time.Parse(timeFormat, *o.info.ModifiedDate) if err != nil { fs.Debugf(o, "Failed to read mtime from object: %v", err) return time.Now() } return modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { // FIXME not implemented return fs.ErrorCantSetModTime } // Storable returns a boolean showing whether this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { bigObject := o.Size() >= int64(o.fs.opt.TempLinkThreshold) if bigObject { fs.Debugf(o, "Downloading large object via tempLink") } file := acd.File{Node: o.info} var resp *http.Response headers := fs.OpenOptionHeaders(options) err = o.fs.pacer.Call(func() (bool, error) { if !bigObject { in, resp, err = file.OpenHeaders(headers) } else { in, resp, err = file.OpenTempURLHeaders(o.fs.noAuthClient, headers) } return o.fs.shouldRetry(resp, err) }) return in, err } // Update the object with the contents of the io.Reader, modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { file := acd.File{Node: o.info} var info *acd.File var resp *http.Response var err error err = o.fs.pacer.CallNoRetry(func() (bool, error) { start := time.Now() o.fs.tokenRenewer.Start() info, resp, err = file.Overwrite(in) o.fs.tokenRenewer.Stop() var ok bool ok, info, err = o.fs.checkUpload(ctx, resp, in, src, info, err, time.Since(start)) if ok { return false, nil } return o.fs.shouldRetry(resp, err) }) if err != nil { return err } o.info = info.Node return nil } // Remove a node func (f *Fs) removeNode(info *acd.Node) error { var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = info.Trash() return f.shouldRetry(resp, err) }) return err } // Remove an object func (o *Object) Remove(ctx context.Context) error { return o.fs.removeNode(o.info) } // Restore a node func (f *Fs) restoreNode(info *acd.Node) (newInfo *acd.Node, err error) { var resp *http.Response err = f.pacer.Call(func() (bool, error) { newInfo, resp, err = info.Restore() return f.shouldRetry(resp, err) }) return newInfo, err } // Changes the name of the given node func (f *Fs) renameNode(info *acd.Node, newName string) (newInfo *acd.Node, err error) { var resp *http.Response err = f.pacer.Call(func() (bool, error) { newInfo, 
resp, err = info.Rename(f.opt.Enc.FromStandardName(newName)) return f.shouldRetry(resp, err) }) return newInfo, err } // Replaces one parent with another, effectively moving the file. Leaves other // parents untouched. ReplaceParent cannot be used when the file is trashed. func (f *Fs) replaceParent(info *acd.Node, oldParentID string, newParentID string) error { return f.pacer.Call(func() (bool, error) { resp, err := info.ReplaceParent(oldParentID, newParentID) return f.shouldRetry(resp, err) }) } // Adds one additional parent to object. func (f *Fs) addParent(info *acd.Node, newParentID string) error { return f.pacer.Call(func() (bool, error) { resp, err := info.AddParent(newParentID) return f.shouldRetry(resp, err) }) } // Remove given parent from object, leaving the other possible // parents untouched. Object can end up having no parents. func (f *Fs) removeParent(info *acd.Node, parentID string) error { return f.pacer.Call(func() (bool, error) { resp, err := info.RemoveParent(parentID) return f.shouldRetry(resp, err) }) } // moveNode moves the node given from the srcLeaf,srcDirectoryID to // the dstLeaf,dstDirectoryID func (f *Fs) moveNode(name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, srcLeaf, srcDirectoryID string, useDirErrorMsgs bool) (err error) { // fs.Debugf(name, "moveNode dst(%q,%s) <- src(%q,%s)", dstLeaf, dstDirectoryID, srcLeaf, srcDirectoryID) cantMove := fs.ErrorCantMove if useDirErrorMsgs { cantMove = fs.ErrorCantDirMove } if len(srcInfo.Parents) > 1 && srcLeaf != dstLeaf { fs.Debugf(name, "Move error: object is attached to multiple parents and should be renamed. This would change the name of the node in all parents.") return cantMove } if srcLeaf != dstLeaf { // fs.Debugf(name, "renaming") _, err = f.renameNode(srcInfo, dstLeaf) if err != nil { fs.Debugf(name, "Move: quick path rename failed: %v", err) goto OnConflict } } if srcDirectoryID != dstDirectoryID { // fs.Debugf(name, "trying parent replace: %s -> %s", oldParentID, newParentID) err = f.replaceParent(srcInfo, srcDirectoryID, dstDirectoryID) if err != nil { fs.Debugf(name, "Move: quick path parent replace failed: %v", err) return err } } return nil OnConflict: fs.Debugf(name, "Could not directly rename file, presumably because there was a file with the same name already. Instead, the file will now be trashed where such operations do not cause errors. It will be restored to the correct parent after. 
If any of the subsequent calls fails, the rename/move will be in an invalid state.") // fs.Debugf(name, "Trashing file") err = f.removeNode(srcInfo) if err != nil { fs.Debugf(name, "Move: remove node failed: %v", err) return err } // fs.Debugf(name, "Renaming file") _, err = f.renameNode(srcInfo, dstLeaf) if err != nil { fs.Debugf(name, "Move: rename node failed: %v", err) return err } // note: replacing parent is forbidden by API, modifying them individually is // okay though // fs.Debugf(name, "Adding target parent") err = f.addParent(srcInfo, dstDirectoryID) if err != nil { fs.Debugf(name, "Move: addParent failed: %v", err) return err } // fs.Debugf(name, "removing original parent") err = f.removeParent(srcInfo, srcDirectoryID) if err != nil { fs.Debugf(name, "Move: removeParent failed: %v", err) return err } // fs.Debugf(name, "Restoring") _, err = f.restoreNode(srcInfo) if err != nil { fs.Debugf(name, "Move: restoreNode node failed: %v", err) return err } return nil } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { if o.info.ContentProperties != nil && o.info.ContentProperties.ContentType != nil { return *o.info.ContentProperties.ContentType } return "" } // ChangeNotify calls the passed function with a path that has had changes. // If the implementation uses polling, it should adhere to the given interval. // // Automatically restarts itself in case of unexpected behaviour of the remote. // // Close the returned channel to stop being notified. func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) { checkpoint := f.opt.Checkpoint go func() { var ticker *time.Ticker var tickerC <-chan time.Time for { select { case pollInterval, ok := <-pollIntervalChan: if !ok { if ticker != nil { ticker.Stop() } return } if pollInterval == 0 { if ticker != nil { ticker.Stop() ticker, tickerC = nil, nil } } else { ticker = time.NewTicker(pollInterval) tickerC = ticker.C } case <-tickerC: checkpoint = f.changeNotifyRunner(notifyFunc, checkpoint) if err := config.SetValueAndSave(f.name, "checkpoint", checkpoint); err != nil { fs.Debugf(f, "Unable to save checkpoint: %v", err) } } } }() } func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), checkpoint string) string { var err error var resp *http.Response var reachedEnd bool var csCount int var nodeCount int fs.Debugf(f, "Checking for changes on remote (Checkpoint %q)", checkpoint) err = f.pacer.CallNoRetry(func() (bool, error) { resp, err = f.c.Changes.GetChangesFunc(&acd.ChangesOptions{ Checkpoint: checkpoint, IncludePurged: true, }, func(changeSet *acd.ChangeSet, err error) error { if err != nil { return err } type entryType struct { path string entryType fs.EntryType } var pathsToClear []entryType csCount++ nodeCount += len(changeSet.Nodes) if changeSet.End { reachedEnd = true } if changeSet.Checkpoint != "" { checkpoint = changeSet.Checkpoint } for _, node := range changeSet.Nodes { if path, ok := f.dirCache.GetInv(*node.Id); ok { if node.IsFile() { pathsToClear = append(pathsToClear, entryType{path: path, entryType: fs.EntryObject}) } else { pathsToClear = append(pathsToClear, entryType{path: path, entryType: fs.EntryDirectory}) } continue } if node.IsFile() { // translate the parent dir of this object if len(node.Parents) > 0 { if path, ok := f.dirCache.GetInv(node.Parents[0]); ok { // and append the drive file name to compute the full file name name := f.opt.Enc.ToStandardName(*node.Name) if len(path) > 0 
{ path = path + "/" + name } else { path = name } // this will now clear the actual file too pathsToClear = append(pathsToClear, entryType{path: path, entryType: fs.EntryObject}) } } else { // a true root object that is changed pathsToClear = append(pathsToClear, entryType{path: *node.Name, entryType: fs.EntryObject}) } } } visitedPaths := make(map[string]bool) for _, entry := range pathsToClear { if _, ok := visitedPaths[entry.path]; ok { continue } visitedPaths[entry.path] = true notifyFunc(entry.path, entry.entryType) } return nil }) return false, err }) fs.Debugf(f, "Got %d ChangeSets with %d Nodes", csCount, nodeCount) if err != nil && err != io.ErrUnexpectedEOF { fs.Debugf(f, "Failed to get Changes: %v", err) return checkpoint } if reachedEnd { reachedEnd = false fs.Debugf(f, "All changes were processed. Waiting for more.") } else if checkpoint == "" { fs.Debugf(f, "Did not get any checkpoint, something went wrong! %+v", resp) } return checkpoint } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { if o.info.Id == nil { return "" } return *o.info.Id } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) // _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.ChangeNotifier = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.MimeTyper = &Object{} _ fs.IDer = &Object{} ) rclone-1.53.3/backend/amazonclouddrive/amazonclouddrive_test.go000066400000000000000000000007261375552240400247720ustar00rootroot00000000000000// Test AmazonCloudDrive filesystem interface // +build acd package amazonclouddrive_test import ( "testing" "github.com/rclone/rclone/backend/amazonclouddrive" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil)) fstests.RemoteName = "TestAmazonCloudDrive:" fstests.Run(t) } rclone-1.53.3/backend/azureblob/000077500000000000000000000000001375552240400164505ustar00rootroot00000000000000rclone-1.53.3/backend/azureblob/azureblob.go000066400000000000000000001432571375552240400210000ustar00rootroot00000000000000// Package azureblob provides an interface to the Microsoft Azure blob object storage system // +build !plan9,!solaris,!js,go1.13 package azureblob import ( "bytes" "context" "crypto/md5" "encoding/base64" "encoding/hex" "fmt" "io" "net/http" "net/url" "path" "strings" "sync" "time" "github.com/Azure/azure-pipeline-go/pipeline" "github.com/Azure/azure-storage-blob-go/azblob" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/bucket" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/pool" "github.com/rclone/rclone/lib/readers" "golang.org/x/sync/errgroup" ) const ( minSleep = 10 * time.Millisecond maxSleep = 10 * time.Second decayConstant = 1 // bigger for slower decay, exponential maxListChunkSize = 5000 // number of items to read at once modTimeKey = "mtime" timeFormatIn = time.RFC3339 timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00" maxTotalParts = 50000 // in 
multipart upload storageDefaultBaseURL = "blob.core.windows.net" // maxUncommittedSize = 9 << 30 // can't upload bigger than this defaultChunkSize = 4 * fs.MebiByte maxChunkSize = 100 * fs.MebiByte defaultUploadCutoff = 256 * fs.MebiByte maxUploadCutoff = 256 * fs.MebiByte defaultAccessTier = azblob.AccessTierNone maxTryTimeout = time.Hour * 24 * 365 //max time of an azure web request response window (whether or not data is flowing) // Default storage account, key and blob endpoint for emulator support, // though it is a base64 key checked in here, it is publicly available secret. emulatorAccount = "devstoreaccount1" emulatorAccountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==" emulatorBlobEndpoint = "http://127.0.0.1:10000/devstoreaccount1" memoryPoolFlushTime = fs.Duration(time.Minute) // flush the cached buffers after this long memoryPoolUseMmap = false ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "azureblob", Description: "Microsoft Azure Blob Storage", NewFs: NewFs, Options: []fs.Option{{ Name: "account", Help: "Storage Account Name (leave blank to use SAS URL or Emulator)", }, { Name: "key", Help: "Storage Account Key (leave blank to use SAS URL or Emulator)", }, { Name: "sas_url", Help: "SAS URL for container level access only\n(leave blank if using account/key or Emulator)", }, { Name: "use_emulator", Help: "Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)", Default: false, }, { Name: "endpoint", Help: "Endpoint for the service\nLeave blank normally.", Advanced: true, }, { Name: "upload_cutoff", Help: "Cutoff for switching to chunked upload (<= 256MB).", Default: defaultUploadCutoff, Advanced: true, }, { Name: "chunk_size", Help: `Upload chunk size (<= 100MB). Note that this is stored in memory and there may be up to "--transfers" chunks stored at once in memory.`, Default: defaultChunkSize, Advanced: true, }, { Name: "list_chunk", Help: `Size of blob list. This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out ( [source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval) ). This can be used to limit the number of blobs items to return, to avoid the time out.`, Default: maxListChunkSize, Advanced: true, }, { Name: "access_tier", Help: `Access tier of blob: hot, cool or archive. Archived blobs can be restored by setting access tier to hot or cool. Leave blank if you intend to use default access tier, which is set at account level If there is no "access tier" specified, rclone doesn't apply any tier. rclone performs "Set Tier" operation on blobs while uploading, if objects are not modified, specifying "access tier" to new one will have no effect. If blobs are in "archive tier" at remote, trying to perform data transfer operations from remote will not be allowed. User should first restore by tiering blob to "Hot" or "Cool".`, Advanced: true, }, { Name: "disable_checksum", Help: `Don't store MD5 checksum with object metadata. Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. 
This is great for data integrity checking but can cause long delays for large files to start uploading.`, Default: false, Advanced: true, }, { Name: "memory_pool_flush_time", Default: memoryPoolFlushTime, Advanced: true, Help: `How often internal memory buffer pools will be flushed. Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations. This option controls how often unused buffers will be removed from the pool.`, }, { Name: "memory_pool_use_mmap", Default: memoryPoolUseMmap, Advanced: true, Help: `Whether to use mmap buffers in internal memory pool.`, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, Default: (encoder.EncodeInvalidUtf8 | encoder.EncodeSlash | encoder.EncodeCtl | encoder.EncodeDel | encoder.EncodeBackSlash | encoder.EncodeRightPeriod), }}, }) } // Options defines the configuration for this backend type Options struct { Account string `config:"account"` Key string `config:"key"` Endpoint string `config:"endpoint"` SASURL string `config:"sas_url"` UploadCutoff fs.SizeSuffix `config:"upload_cutoff"` ChunkSize fs.SizeSuffix `config:"chunk_size"` ListChunkSize uint `config:"list_chunk"` AccessTier string `config:"access_tier"` UseEmulator bool `config:"use_emulator"` DisableCheckSum bool `config:"disable_checksum"` MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"` MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote azure server type Fs struct { name string // name of this remote root string // the path we are working on if any opt Options // parsed config options features *fs.Features // optional features client *http.Client // http client we are using svcURL *azblob.ServiceURL // reference to serviceURL cntURLcacheMu sync.Mutex // mutex to protect cntURLcache cntURLcache map[string]*azblob.ContainerURL // reference to containerURL per container rootContainer string // container part of root (if any) rootDirectory string // directory part of root (if any) isLimited bool // if limited to one container cache *bucket.Cache // cache for container creation status pacer *fs.Pacer // To pace and retry the API calls uploadToken *pacer.TokenDispenser // control concurrency pool *pool.Pool // memory pool } // Object describes an azure object type Object struct { fs *Fs // what this object is part of remote string // The remote path modTime time.Time // The modified time of the object if known md5 string // MD5 hash if known size int64 // Size of the object mimeType string // Content-Type of the object accessTier azblob.AccessTierType // Blob Access Tier meta map[string]string // blob metadata } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { if f.rootContainer == "" { return "Azure root" } if f.rootDirectory == "" { return fmt.Sprintf("Azure container %s", f.rootContainer) } return fmt.Sprintf("Azure container %s path %s", f.rootContainer, f.rootDirectory) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a remote 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // split returns container and containerPath from the rootRelativePath // 
relative to f.root func (f *Fs) split(rootRelativePath string) (containerName, containerPath string) { containerName, containerPath = bucket.Split(path.Join(f.root, rootRelativePath)) return f.opt.Enc.FromStandardName(containerName), f.opt.Enc.FromStandardPath(containerPath) } // split returns container and containerPath from the object func (o *Object) split() (container, containerPath string) { return o.fs.split(o.remote) } // validateAccessTier checks if azureblob supports user supplied tier func validateAccessTier(tier string) bool { switch tier { case string(azblob.AccessTierHot), string(azblob.AccessTierCool), string(azblob.AccessTierArchive): // valid cases return true default: return false } } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 401, // Unauthorized (eg "Token has expired") 408, // Request Timeout 429, // Rate exceeded. 500, // Get occasional 500 Internal Server Error 503, // Service Unavailable 504, // Gateway Time-out } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func (f *Fs) shouldRetry(err error) (bool, error) { // FIXME interpret special errors - more to do here if storageErr, ok := err.(azblob.StorageError); ok { switch storageErr.ServiceCode() { case "InvalidBlobOrBlock": // These errors happen sometimes in multipart uploads // because of block concurrency issues return true, err } statusCode := storageErr.Response().StatusCode for _, e := range retryErrorCodes { if statusCode == e { return true, err } } } return fserrors.ShouldRetry(err), err } func checkUploadChunkSize(cs fs.SizeSuffix) error { const minChunkSize = fs.Byte if cs < minChunkSize { return errors.Errorf("%s is less than %s", cs, minChunkSize) } if cs > maxChunkSize { return errors.Errorf("%s is greater than %s", cs, maxChunkSize) } return nil } func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadChunkSize(cs) if err == nil { old, f.opt.ChunkSize = f.opt.ChunkSize, cs } return } func checkUploadCutoff(cs fs.SizeSuffix) error { if cs > maxUploadCutoff { return errors.Errorf("%v must be less than or equal to %v", cs, maxUploadCutoff) } return nil } func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadCutoff(cs) if err == nil { old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs } return } // httpClientFactory creates a Factory object that sends HTTP requests // to an rclone's http.Client. // // copied from azblob.newDefaultHTTPClientFactory func httpClientFactory(client *http.Client) pipeline.Factory { return pipeline.FactoryFunc(func(next pipeline.Policy, po *pipeline.PolicyOptions) pipeline.PolicyFunc { return func(ctx context.Context, request pipeline.Request) (pipeline.Response, error) { r, err := client.Do(request.WithContext(ctx)) if err != nil { err = pipeline.NewError(err, "HTTP request failed") } return pipeline.NewHTTPResponse(r), err } }) } // newPipeline creates a Pipeline using the specified credentials and options. 
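// A sketch of how NewFs below wires a pipeline into a service URL
// (illustrative account name; real values come from the Options struct):
//
//	pipe := f.newPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{})
//	u, _ := url.Parse("https://myaccount.blob.core.windows.net")
//	svc := azblob.NewServiceURL(*u, pipe)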
// // this code was copied from azblob.NewPipeline func (f *Fs) newPipeline(c azblob.Credential, o azblob.PipelineOptions) pipeline.Pipeline { // Don't log stuff to syslog/Windows Event log pipeline.SetForceLogEnabled(false) // Closest to API goes first; closest to the wire goes last factories := []pipeline.Factory{ azblob.NewTelemetryPolicyFactory(o.Telemetry), azblob.NewUniqueRequestIDPolicyFactory(), azblob.NewRetryPolicyFactory(o.Retry), c, pipeline.MethodFactoryMarker(), // indicates at what stage in the pipeline the method factory is invoked azblob.NewRequestLogPolicyFactory(o.RequestLog), } return pipeline.NewPipeline(factories, pipeline.Options{HTTPSender: httpClientFactory(f.client), Log: o.Log}) } // setRoot changes the root of the Fs func (f *Fs) setRoot(root string) { f.root = parsePath(root) f.rootContainer, f.rootDirectory = bucket.Split(f.root) } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } err = checkUploadCutoff(opt.UploadCutoff) if err != nil { return nil, errors.Wrap(err, "azure: upload cutoff") } err = checkUploadChunkSize(opt.ChunkSize) if err != nil { return nil, errors.Wrap(err, "azure: chunk size") } if opt.ListChunkSize > maxListChunkSize { return nil, errors.Errorf("azure: blob list size can't be greater than %v - was %v", maxListChunkSize, opt.ListChunkSize) } if opt.Endpoint == "" { opt.Endpoint = storageDefaultBaseURL } if opt.AccessTier == "" { opt.AccessTier = string(defaultAccessTier) } else if !validateAccessTier(opt.AccessTier) { return nil, errors.Errorf("Azure Blob: Supported access tiers are %s, %s and %s", string(azblob.AccessTierHot), string(azblob.AccessTierCool), string(azblob.AccessTierArchive)) } f := &Fs{ name: name, opt: *opt, pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers), client: fshttp.NewClient(fs.Config), cache: bucket.NewCache(), cntURLcache: make(map[string]*azblob.ContainerURL, 1), pool: pool.New( time.Duration(opt.MemoryPoolFlushTime), int(opt.ChunkSize), fs.Config.Transfers, opt.MemoryPoolUseMmap, ), } f.setRoot(root) f.features = (&fs.Features{ ReadMimeType: true, WriteMimeType: true, BucketBased: true, BucketBasedRootOK: true, SetTier: true, GetTier: true, }).Fill(f) var ( u *url.URL serviceURL azblob.ServiceURL ) switch { case opt.UseEmulator: credential, err := azblob.NewSharedKeyCredential(emulatorAccount, emulatorAccountKey) if err != nil { return nil, errors.Wrapf(err, "Failed to parse credentials") } u, err = url.Parse(emulatorBlobEndpoint) if err != nil { return nil, errors.Wrap(err, "failed to make azure storage url from account and endpoint") } pipeline := f.newPipeline(credential, azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}}) serviceURL = azblob.NewServiceURL(*u, pipeline) case opt.Account != "" && opt.Key != "": credential, err := azblob.NewSharedKeyCredential(opt.Account, opt.Key) if err != nil { return nil, errors.Wrapf(err, "Failed to parse credentials") } u, err = url.Parse(fmt.Sprintf("https://%s.%s", opt.Account, opt.Endpoint)) if err != nil { return nil, errors.Wrap(err, "failed to make azure storage url from account and endpoint") } pipeline := f.newPipeline(credential, azblob.PipelineOptions{Retry: 
azblob.RetryOptions{TryTimeout: maxTryTimeout}}) serviceURL = azblob.NewServiceURL(*u, pipeline) case opt.SASURL != "": u, err = url.Parse(opt.SASURL) if err != nil { return nil, errors.Wrapf(err, "failed to parse SAS URL") } // use anonymous credentials in case of sas url pipeline := f.newPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}}) // Check if we have container level SAS or account level sas parts := azblob.NewBlobURLParts(*u) if parts.ContainerName != "" { if f.rootContainer != "" && parts.ContainerName != f.rootContainer { return nil, errors.New("Container name in SAS URL and container provided in command do not match") } containerURL := azblob.NewContainerURL(*u, pipeline) f.cntURLcache[parts.ContainerName] = &containerURL f.isLimited = true } else { serviceURL = azblob.NewServiceURL(*u, pipeline) } default: return nil, errors.New("Need account+key or connectionString or sasURL") } f.svcURL = &serviceURL if f.rootContainer != "" && f.rootDirectory != "" { // Check to see if the (container,directory) is actually an existing file oldRoot := f.root newRoot, leaf := path.Split(oldRoot) f.setRoot(newRoot) _, err := f.NewObject(ctx, leaf) if err != nil { if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile { // File doesn't exist or is a directory so return old f f.setRoot(oldRoot) return f, nil } return nil, err } // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // return the container URL for the container passed in func (f *Fs) cntURL(container string) (containerURL *azblob.ContainerURL) { f.cntURLcacheMu.Lock() defer f.cntURLcacheMu.Unlock() var ok bool if containerURL, ok = f.cntURLcache[container]; !ok { cntURL := f.svcURL.NewContainerURL(container) containerURL = &cntURL f.cntURLcache[container] = containerURL } return containerURL } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(remote string, info *azblob.BlobItem) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } if info != nil { err := o.decodeMetaDataFromBlob(info) if err != nil { return nil, err } } else { err := o.readMetaData() // reads info and headers, returning an error if err != nil { return nil, err } } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(remote, nil) } // getBlobReference creates an empty blob reference with no metadata func (f *Fs) getBlobReference(container, containerPath string) azblob.BlobURL { return f.cntURL(container).NewBlobURL(containerPath) } // updateMetadataWithModTime adds the modTime passed in to o.meta. func (o *Object) updateMetadataWithModTime(modTime time.Time) { // Make sure o.meta is not nil if o.meta == nil { o.meta = make(map[string]string, 1) } // Set modTimeKey in it o.meta[modTimeKey] = modTime.Format(timeFormatOut) } // Returns whether file is a directory marker or not func isDirectoryMarker(size int64, metadata azblob.Metadata, remote string) bool { // Directory markers are 0 length if size == 0 { // Note that metadata with hdi_isfolder = true seems to be a // defacto standard for marking blobs as directories. 
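// Given the checks below, for example:
//
//	isDirectoryMarker(0, azblob.Metadata{"hdi_isfolder": "true"}, "photos") // true
//	isDirectoryMarker(0, nil, "photos/")                                    // true
//	isDirectoryMarker(12, nil, "photos/readme.txt")                         // false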
endsWithSlash := strings.HasSuffix(remote, "/") if endsWithSlash || remote == "" || metadata["hdi_isfolder"] == "true" { return true } } return false } // listFn is called from list to handle an object type listFn func(remote string, object *azblob.BlobItem, isDirectory bool) error // list lists the objects into the function supplied from // the container and root supplied // // dir is the starting directory, "" for root // // The remote has prefix removed from it and if addContainer is set then // it adds the container to the start. func (f *Fs) list(ctx context.Context, container, directory, prefix string, addContainer bool, recurse bool, maxResults uint, fn listFn) error { if f.cache.IsDeleted(container) { return fs.ErrorDirNotFound } if prefix != "" { prefix += "/" } if directory != "" { directory += "/" } delimiter := "" if !recurse { delimiter = "/" } options := azblob.ListBlobsSegmentOptions{ Details: azblob.BlobListingDetails{ Copy: false, Metadata: true, Snapshots: false, UncommittedBlobs: false, Deleted: false, }, Prefix: directory, MaxResults: int32(maxResults), } for marker := (azblob.Marker{}); marker.NotDone(); { var response *azblob.ListBlobsHierarchySegmentResponse err := f.pacer.Call(func() (bool, error) { var err error response, err = f.cntURL(container).ListBlobsHierarchySegment(ctx, marker, delimiter, options) return f.shouldRetry(err) }) if err != nil { // Check http error code along with service code, current SDK doesn't populate service code correctly sometimes if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) { return fs.ErrorDirNotFound } return err } // Advance marker to next marker = response.NextMarker for i := range response.Segment.BlobItems { file := &response.Segment.BlobItems[i] // Finish if file name no longer has prefix // if prefix != "" && !strings.HasPrefix(file.Name, prefix) { // return nil // } remote := f.opt.Enc.ToStandardPath(file.Name) if !strings.HasPrefix(remote, prefix) { fs.Debugf(f, "Odd name received %q", remote) continue } remote = remote[len(prefix):] if isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote) { continue // skip directory marker } if addContainer { remote = path.Join(container, remote) } // Send object err = fn(remote, file, false) if err != nil { return err } } // Send the subdirectories for _, remote := range response.Segment.BlobPrefixes { remote := strings.TrimRight(remote.Name, "/") remote = f.opt.Enc.ToStandardPath(remote) if !strings.HasPrefix(remote, prefix) { fs.Debugf(f, "Odd directory name received %q", remote) continue } remote = remote[len(prefix):] if addContainer { remote = path.Join(container, remote) } // Send object err = fn(remote, nil, true) if err != nil { return err } } } return nil } // Convert a list item into a DirEntry func (f *Fs) itemToDirEntry(remote string, object *azblob.BlobItem, isDirectory bool) (fs.DirEntry, error) { if isDirectory { d := fs.NewDir(remote, time.Time{}) return d, nil } o, err := f.newObjectWithInfo(remote, object) if err != nil { return nil, err } return o, nil } // listDir lists a single directory func (f *Fs) listDir(ctx context.Context, container, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) { err = f.list(ctx, container, directory, prefix, addContainer, false, f.opt.ListChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error { entry, err := f.itemToDirEntry(remote, 
object, isDirectory) if err != nil { return err } if entry != nil { entries = append(entries, entry) } return nil }) if err != nil { return nil, err } // container must be present if listing succeeded f.cache.MarkOK(container) return entries, nil } // listContainers returns all the containers as directory entries func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err error) { if f.isLimited { f.cntURLcacheMu.Lock() for container := range f.cntURLcache { d := fs.NewDir(container, time.Time{}) entries = append(entries, d) } f.cntURLcacheMu.Unlock() return entries, nil } err = f.listContainersToFn(func(container *azblob.ContainerItem) error { d := fs.NewDir(f.opt.Enc.ToStandardName(container.Name), container.Properties.LastModified) f.cache.MarkOK(container.Name) entries = append(entries, d) return nil }) if err != nil { return nil, err } return entries, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { container, directory := f.split(dir) if container == "" { if directory != "" { return nil, fs.ErrorListBucketRequired } return f.listContainers(ctx) } return f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "") } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively than doing a directory traversal. 
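// A sketch of typical usage (the callback type is fs.ListRCallback):
//
//	err := f.ListR(ctx, "", func(entries fs.DirEntries) error {
//		for _, entry := range entries {
//			fs.Debugf(f, "found %q", entry.Remote())
//		}
//		return nil
//	})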
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { container, directory := f.split(dir) list := walk.NewListRHelper(callback) listR := func(container, directory, prefix string, addContainer bool) error { return f.list(ctx, container, directory, prefix, addContainer, true, f.opt.ListChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error { entry, err := f.itemToDirEntry(remote, object, isDirectory) if err != nil { return err } return list.Add(entry) }) } if container == "" { entries, err := f.listContainers(ctx) if err != nil { return err } for _, entry := range entries { err = list.Add(entry) if err != nil { return err } container := entry.Remote() err = listR(container, "", f.rootDirectory, true) if err != nil { return err } // container must be present if listing succeeded f.cache.MarkOK(container) } } else { err = listR(container, directory, f.rootDirectory, f.rootContainer == "") if err != nil { return err } // container must be present if listing succeeded f.cache.MarkOK(container) } return list.Flush() } // listContainerFn is called from listContainersToFn to handle a container type listContainerFn func(*azblob.ContainerItem) error // listContainersToFn lists the containers to the function supplied func (f *Fs) listContainersToFn(fn listContainerFn) error { params := azblob.ListContainersSegmentOptions{ MaxResults: int32(f.opt.ListChunkSize), } ctx := context.Background() for marker := (azblob.Marker{}); marker.NotDone(); { var response *azblob.ListContainersSegmentResponse err := f.pacer.Call(func() (bool, error) { var err error response, err = f.svcURL.ListContainersSegment(ctx, marker, params) return f.shouldRetry(err) }) if err != nil { return err } for i := range response.ContainerItems { err = fn(&response.ContainerItems[i]) if err != nil { return err } } marker = response.NextMarker } return nil } // Put the object into the container // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { // Temporary Object under construction fs := &Object{ fs: f, remote: src.Remote(), } return fs, fs.Update(ctx, in, src, options...) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) 
} // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { container, _ := f.split(dir) return f.makeContainer(ctx, container) } // makeContainer creates the container if it doesn't exist func (f *Fs) makeContainer(ctx context.Context, container string) error { return f.cache.Create(container, func() error { // If this is a SAS URL limited to a container then assume it is already created if f.isLimited { return nil } // now try to create the container return f.pacer.Call(func() (bool, error) { _, err := f.cntURL(container).Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone) if err != nil { if storageErr, ok := err.(azblob.StorageError); ok { switch storageErr.ServiceCode() { case azblob.ServiceCodeContainerAlreadyExists: return false, nil case azblob.ServiceCodeContainerBeingDeleted: // From https://docs.microsoft.com/en-us/rest/api/storageservices/delete-container // When a container is deleted, a container with the same name cannot be created // for at least 30 seconds; the container may not be available for more than 30 // seconds if the service is still processing the request. time.Sleep(6 * time.Second) // default 10 retries will be 60 seconds f.cache.MarkDeleted(container) return true, err } } } return f.shouldRetry(err) }) }, nil) } // isEmpty checks to see if a given (container, directory) is empty and returns an error if not func (f *Fs) isEmpty(ctx context.Context, container, directory string) (err error) { empty := true err = f.list(ctx, container, directory, f.rootDirectory, f.rootContainer == "", true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error { empty = false return nil }) if err != nil { return err } if !empty { return fs.ErrorDirectoryNotEmpty } return nil } // deleteContainer deletes the container. It can delete a full // container so use isEmpty if you don't want that. func (f *Fs) deleteContainer(ctx context.Context, container string) error { return f.cache.Remove(container, func() error { options := azblob.ContainerAccessConditions{} return f.pacer.Call(func() (bool, error) { _, err := f.cntURL(container).GetProperties(ctx, azblob.LeaseAccessConditions{}) if err == nil { _, err = f.cntURL(container).Delete(ctx, options) } if err != nil { // Check http error code along with service code, current SDK doesn't populate service code correctly sometimes if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) { return false, fs.ErrorDirNotFound } return f.shouldRetry(err) } return f.shouldRetry(err) }) }) } // Rmdir deletes the container if the fs is at the root // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { container, directory := f.split(dir) if container == "" || directory != "" { return nil } err := f.isEmpty(ctx, container, directory) if err != nil { return err } return f.deleteContainer(ctx, container) } // Precision of the remote func (f *Fs) Precision() time.Duration { return time.Nanosecond } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } // Purge deletes all the files and directories including the old versions. 
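//
// Note that Purge only acts at the root of a container - for any other
// path it returns fs.ErrorCantPurge (see below), which makes rclone fall
// back to deleting files individually.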
func (f *Fs) Purge(ctx context.Context, dir string) error { container, directory := f.split(dir) if container == "" || directory != "" { // Delegate to caller if not root of a container return fs.ErrorCantPurge } return f.deleteContainer(ctx, container) } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { dstContainer, dstPath := f.split(remote) err := f.makeContainer(ctx, dstContainer) if err != nil { return nil, err } srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } dstBlobURL := f.getBlobReference(dstContainer, dstPath) srcBlobURL := srcObj.getBlobReference() source, err := url.Parse(srcBlobURL.String()) if err != nil { return nil, err } options := azblob.BlobAccessConditions{} var startCopy *azblob.BlobStartCopyFromURLResponse err = f.pacer.Call(func() (bool, error) { startCopy, err = dstBlobURL.StartCopyFromURL(ctx, *source, nil, azblob.ModifiedAccessConditions{}, options) return f.shouldRetry(err) }) if err != nil { return nil, err } copyStatus := startCopy.CopyStatus() for copyStatus == azblob.CopyStatusPending { time.Sleep(1 * time.Second) getMetadata, err := dstBlobURL.GetProperties(ctx, options) if err != nil { return nil, err } copyStatus = getMetadata.CopyStatus() } return f.NewObject(ctx, remote) } func (f *Fs) getMemoryPool(size int64) *pool.Pool { if size == int64(f.opt.ChunkSize) { return f.pool } return pool.New( time.Duration(f.opt.MemoryPoolFlushTime), int(size), fs.Config.Transfers, f.opt.MemoryPoolUseMmap, ) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the MD5 of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } // Convert base64 encoded md5 into lower case hex if o.md5 == "" { return "", nil } data, err := base64.StdEncoding.DecodeString(o.md5) if err != nil { return "", errors.Wrapf(err, "Failed to decode Content-MD5: %q", o.md5) } return hex.EncodeToString(data), nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return o.size } func (o *Object) setMetadata(metadata azblob.Metadata) { if len(metadata) > 0 { o.meta = metadata if modTime, ok := metadata[modTimeKey]; ok { when, err := time.Parse(timeFormatIn, modTime) if err != nil { fs.Debugf(o, "Couldn't parse %v = %q: %v", modTimeKey, modTime, err) } o.modTime = when } } else { o.meta = nil } } // decodeMetaDataFromPropertiesResponse sets the metadata from the data passed in // // Sets // o.id // o.modTime // o.size // o.md5 // o.meta func (o *Object) decodeMetaDataFromPropertiesResponse(info *azblob.BlobGetPropertiesResponse) (err error) { metadata := info.NewMetadata() size := info.ContentLength() if isDirectoryMarker(size, metadata, o.remote) { return fs.ErrorNotAFile } // NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain // this as 
base64 encoded string. o.md5 = base64.StdEncoding.EncodeToString(info.ContentMD5()) o.mimeType = info.ContentType() o.size = size o.modTime = info.LastModified() o.accessTier = azblob.AccessTierType(info.AccessTier()) o.setMetadata(metadata) return nil } func (o *Object) decodeMetaDataFromBlob(info *azblob.BlobItem) (err error) { metadata := info.Metadata size := *info.Properties.ContentLength if isDirectoryMarker(size, metadata, o.remote) { return fs.ErrorNotAFile } // NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain // this as base64 encoded string. o.md5 = base64.StdEncoding.EncodeToString(info.Properties.ContentMD5) o.mimeType = *info.Properties.ContentType o.size = size o.modTime = info.Properties.LastModified o.accessTier = info.Properties.AccessTier o.setMetadata(metadata) return nil } // getBlobReference creates an empty blob reference with no metadata func (o *Object) getBlobReference() azblob.BlobURL { container, directory := o.split() return o.fs.getBlobReference(container, directory) } // clearMetaData clears enough metadata so readMetaData will re-read it func (o *Object) clearMetaData() { o.modTime = time.Time{} } // readMetaData gets the metadata if it hasn't already been fetched // // Sets // o.id // o.modTime // o.size // o.md5 func (o *Object) readMetaData() (err error) { if !o.modTime.IsZero() { return nil } blob := o.getBlobReference() // Read the blob properties (this includes the metadata) options := azblob.BlobAccessConditions{} ctx := context.Background() var blobProperties *azblob.BlobGetPropertiesResponse err = o.fs.pacer.Call(func() (bool, error) { blobProperties, err = blob.GetProperties(ctx, options) return o.fs.shouldRetry(err) }) if err != nil { // On directories - GetProperties does not work and current SDK does not populate service code correctly hence check regular http response as well if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeBlobNotFound || storageErr.Response().StatusCode == http.StatusNotFound) { return fs.ErrorObjectNotFound } return err } return o.decodeMetaDataFromPropertiesResponse(blobProperties) } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) (result time.Time) { // The error is logged in readMetaData _ = o.readMetaData() return o.modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { // Make sure o.meta is not nil if o.meta == nil { o.meta = make(map[string]string, 1) } // Set modTimeKey in it o.meta[modTimeKey] = modTime.Format(timeFormatOut) blob := o.getBlobReference() err := o.fs.pacer.Call(func() (bool, error) { _, err := blob.SetMetadata(ctx, o.meta, azblob.BlobAccessConditions{}) return o.fs.shouldRetry(err) }) if err != nil { return err } o.modTime = modTime return nil } // Storable returns if this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { // Offset and Count for range download var offset int64 var count int64 if o.AccessTier() == azblob.AccessTierArchive { return nil, errors.Errorf("Blob in archive tier, you need to set tier to hot or cool first") } fs.FixRangeOption(options, o.size) for _, option := range options { switch x := 
option.(type) { case *fs.RangeOption: offset, count = x.Decode(o.size) if count < 0 { count = o.size - offset } case *fs.SeekOption: offset = x.Offset default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } blob := o.getBlobReference() ac := azblob.BlobAccessConditions{} var downloadResponse *azblob.DownloadResponse err = o.fs.pacer.Call(func() (bool, error) { downloadResponse, err = blob.Download(ctx, offset, count, ac, false) return o.fs.shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "failed to open for download") } in = downloadResponse.Body(azblob.RetryReaderOptions{}) return in, nil } // dontEncode is the characters that do not need percent-encoding // // The characters that do not need percent-encoding are a subset of // the printable ASCII characters: upper-case letters, lower-case // letters, digits, ".", "_", "-", "/", "~", "!", "$", "'", "(", ")", // "*", ";", "=", ":", and "@". All other byte values in a UTF-8 string must // be replaced with "%" and the two-digit hex value of the byte. const dontEncode = (`abcdefghijklmnopqrstuvwxyz` + `ABCDEFGHIJKLMNOPQRSTUVWXYZ` + `0123456789` + `._-/~!$'()*;=:@`) // noNeedToEncode is a bitmap of characters which don't need % encoding var noNeedToEncode [256]bool func init() { for _, c := range dontEncode { noNeedToEncode[c] = true } } // readSeeker joins an io.Reader and an io.Seeker type readSeeker struct { io.Reader io.Seeker } // increment the slice passed in as LSB binary func increment(xs []byte) { for i, digit := range xs { newDigit := digit + 1 xs[i] = newDigit if newDigit >= digit { // exit if no carry break } } } var warnStreamUpload sync.Once // uploadMultipart uploads a file using multipart upload // // Write a larger blob, using CreateBlockBlob, PutBlock, and PutBlockList. func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64, blob *azblob.BlobURL, httpHeaders *azblob.BlobHTTPHeaders) (err error) { // Calculate correct chunkSize chunkSize := int64(o.fs.opt.ChunkSize) totalParts := -1 // Note that the max size of file is 4.75 TB (100 MB X 50,000 // blocks) and this is smaller than the max uncommitted block // size (9.52 TB) so we do not need to part commit block lists // or garbage collect uncommitted blocks. // // See: https://docs.microsoft.com/en-gb/rest/api/storageservices/put-block // size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize // buffers here (default 4MB). With a maximum number of parts (50,000) this will be a file of // 195GB which seems like a not too unreasonable limit. if size == -1 { warnStreamUpload.Do(func() { fs.Logf(o, "Streaming uploads using chunk size %v will have maximum file size of %v", o.fs.opt.ChunkSize, fs.SizeSuffix(chunkSize*maxTotalParts)) }) } else { // Adjust partSize until the number of parts is small enough. 
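// Worked example (illustrative figures, not from the original source):
// with maxTotalParts = 50,000 and a 1 TiB file, size/maxTotalParts is
// about 20.97 MiB, which the shift-and-add below rounds up to a 21 MiB
// chunk, giving 49,933 parts - just under the limit.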
if size/chunkSize >= maxTotalParts { // Calculate partition size rounded up to the nearest MB chunkSize = (((size / maxTotalParts) >> 20) + 1) << 20 } if chunkSize > int64(maxChunkSize) { return errors.Errorf("can't upload as it is too big %v - takes more than %d chunks of %v", fs.SizeSuffix(size), totalParts, fs.SizeSuffix(chunkSize/2)) } totalParts = int(size / chunkSize) if size%chunkSize != 0 { totalParts++ } } fs.Debugf(o, "Multipart upload session started for %d parts of size %v", totalParts, fs.SizeSuffix(chunkSize)) // unwrap the accounting from the input, we use wrap to put it // back on after the buffering in, wrap := accounting.UnWrap(in) // Upload the chunks var ( g, gCtx = errgroup.WithContext(ctx) remaining = size // remaining size in file for logging only, -1 if size < 0 position = int64(0) // position in file memPool = o.fs.getMemoryPool(chunkSize) // pool to get memory from finished = false // set when we have read EOF blocks []string // list of blocks for finalize blockBlobURL = blob.ToBlockBlobURL() // Get BlockBlobURL, we will use default pipeline here ac = azblob.LeaseAccessConditions{} // Use default lease access conditions binaryBlockID = make([]byte, 8) // block counter as LSB first 8 bytes ) for part := 0; !finished; part++ { // Get a block of memory from the pool and a token which limits concurrency o.fs.uploadToken.Get() buf := memPool.Get() free := func() { memPool.Put(buf) // return the buf o.fs.uploadToken.Put() // return the token } // Fail fast, in case an errgroup managed function returns an error // gCtx is cancelled. There is no point in uploading all the other parts. if gCtx.Err() != nil { free() break } // Read the chunk n, err := readers.ReadFill(in, buf) // this can never return 0, nil if err == io.EOF { if n == 0 { // end if no data free() break } finished = true } else if err != nil { free() return errors.Wrap(err, "multipart upload failed to read source") } buf = buf[:n] // increment the blockID and save the blocks for finalize increment(binaryBlockID) blockID := base64.StdEncoding.EncodeToString(binaryBlockID) blocks = append(blocks, blockID) // Transfer the chunk fs.Debugf(o, "Uploading part %d/%d offset %v/%v part size %v", part+1, totalParts, fs.SizeSuffix(position), fs.SizeSuffix(size), fs.SizeSuffix(chunkSize)) g.Go(func() (err error) { defer free() // Upload the block, with MD5 for check md5sum := md5.Sum(buf) transactionalMD5 := md5sum[:] err = o.fs.pacer.Call(func() (bool, error) { bufferReader := bytes.NewReader(buf) wrappedReader := wrap(bufferReader) rs := readSeeker{wrappedReader, bufferReader} _, err = blockBlobURL.StageBlock(ctx, blockID, &rs, ac, transactionalMD5) return o.fs.shouldRetry(err) }) if err != nil { return errors.Wrap(err, "multipart upload failed to upload part") } return nil }) // ready for next block if size >= 0 { remaining -= chunkSize } position += chunkSize } err = g.Wait() if err != nil { return err } // Finalise the upload session err = o.fs.pacer.Call(func() (bool, error) { _, err := blockBlobURL.CommitBlockList(ctx, blocks, *httpHeaders, o.meta, azblob.BlobAccessConditions{}) return o.fs.shouldRetry(err) }) if err != nil { return errors.Wrap(err, "multipart upload failed to finalize") } return nil } // Update the object with the contents of the io.Reader, modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { container, _ := o.split() err = o.fs.makeContainer(ctx, 
container) if err != nil { return err } size := src.Size() // Update Mod time o.updateMetadataWithModTime(src.ModTime(ctx)) if err != nil { return err } blob := o.getBlobReference() httpHeaders := azblob.BlobHTTPHeaders{} httpHeaders.ContentType = fs.MimeType(ctx, o) // Compute the Content-MD5 of the file, for multipart uploads it // will be set in PutBlockList API call using the 'x-ms-blob-content-md5' header // Note: If multipart, an MD5 checksum will also be computed for each uploaded block // in order to validate its integrity during transport if !o.fs.opt.DisableCheckSum { if sourceMD5, _ := src.Hash(ctx, hash.MD5); sourceMD5 != "" { sourceMD5bytes, err := hex.DecodeString(sourceMD5) if err == nil { httpHeaders.ContentMD5 = sourceMD5bytes } else { fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err) } } } putBlobOptions := azblob.UploadStreamToBlockBlobOptions{ BufferSize: int(o.fs.opt.ChunkSize), MaxBuffers: 4, Metadata: o.meta, BlobHTTPHeaders: httpHeaders, } // FIXME Until https://github.com/Azure/azure-storage-blob-go/pull/75 // is merged the SDK can't upload a single blob of exactly the chunk // size, so upload with a multipart upload to work around. // See: https://github.com/rclone/rclone/issues/2653 multipartUpload := size < 0 || size >= int64(o.fs.opt.UploadCutoff) if size == int64(o.fs.opt.ChunkSize) { multipartUpload = true fs.Debugf(o, "Setting multipart upload for file of chunk size (%d) to work around SDK bug", size) } // Don't retry, return a retry error instead err = o.fs.pacer.CallNoRetry(func() (bool, error) { if multipartUpload { // If a large file, upload in chunks err = o.uploadMultipart(ctx, in, size, &blob, &httpHeaders) } else { // Write a small blob in one transaction blockBlobURL := blob.ToBlockBlobURL() _, err = azblob.UploadStreamToBlockBlob(ctx, in, blockBlobURL, putBlobOptions) } return o.fs.shouldRetry(err) }) if err != nil { return err } // Refresh metadata on object o.clearMetaData() err = o.readMetaData() if err != nil { return err } // If tier is not changed or not specified, do not attempt to invoke `SetBlobTier` operation if o.fs.opt.AccessTier == string(defaultAccessTier) || o.fs.opt.AccessTier == string(o.AccessTier()) { return nil } // Now, set blob tier based on configured access tier return o.SetTier(o.fs.opt.AccessTier) } // Remove an object func (o *Object) Remove(ctx context.Context) error { blob := o.getBlobReference() snapShotOptions := azblob.DeleteSnapshotsOptionNone ac := azblob.BlobAccessConditions{} return o.fs.pacer.Call(func() (bool, error) { _, err := blob.Delete(ctx, snapShotOptions, ac) return o.fs.shouldRetry(err) }) } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.mimeType } // AccessTier of an object, default is of type none func (o *Object) AccessTier() azblob.AccessTierType { return o.accessTier } // SetTier performs changing object tier func (o *Object) SetTier(tier string) error { if !validateAccessTier(tier) { return errors.Errorf("Tier %s not supported by Azure Blob Storage", tier) } // Check if current tier already matches with desired tier if o.GetTier() == tier { return nil } desiredAccessTier := azblob.AccessTierType(tier) blob := o.getBlobReference() ctx := context.Background() err := o.fs.pacer.Call(func() (bool, error) { _, err := blob.SetTier(ctx, desiredAccessTier, azblob.LeaseAccessConditions{}) return o.fs.shouldRetry(err) }) if err != nil { return errors.Wrap(err, "Failed to set Blob Tier") } // Set access tier on local object also, 
this typically // gets updated on get blob properties o.accessTier = desiredAccessTier fs.Debugf(o, "Successfully changed object tier to %s", tier) return nil } // GetTier returns object tier in azure as string func (o *Object) GetTier() string { return string(o.accessTier) } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.Copier = &Fs{} _ fs.PutStreamer = &Fs{} _ fs.Purger = &Fs{} _ fs.ListRer = &Fs{} _ fs.Object = &Object{} _ fs.MimeTyper = &Object{} _ fs.GetTierer = &Object{} _ fs.SetTierer = &Object{} ) rclone-1.53.3/backend/azureblob/azureblob_internal_test.go000066400000000000000000000014451375552240400237230ustar00rootroot00000000000000// +build !plan9,!solaris,!js,go1.13 package azureblob import ( "testing" "github.com/stretchr/testify/assert" ) func (f *Fs) InternalTest(t *testing.T) { // Check first feature flags are set on this // remote enabled := f.Features().SetTier assert.True(t, enabled) enabled = f.Features().GetTier assert.True(t, enabled) } func TestIncrement(t *testing.T) { for _, test := range []struct { in []byte want []byte }{ {[]byte{0, 0, 0, 0}, []byte{1, 0, 0, 0}}, {[]byte{0xFE, 0, 0, 0}, []byte{0xFF, 0, 0, 0}}, {[]byte{0xFF, 0, 0, 0}, []byte{0, 1, 0, 0}}, {[]byte{0, 1, 0, 0}, []byte{1, 1, 0, 0}}, {[]byte{0xFF, 0xFF, 0xFF, 0xFE}, []byte{0, 0, 0, 0xFF}}, {[]byte{0xFF, 0xFF, 0xFF, 0xFF}, []byte{0, 0, 0, 0}}, } { increment(test.in) assert.Equal(t, test.want, test.in) } } rclone-1.53.3/backend/azureblob/azureblob_test.go000066400000000000000000000014741375552240400220310ustar00rootroot00000000000000// Test AzureBlob filesystem interface // +build !plan9,!solaris,!js,go1.13 package azureblob import ( "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestAzureBlob:", NilObject: (*Object)(nil), TiersToTest: []string{"Hot", "Cool"}, ChunkedUpload: fstests.ChunkedUploadConfig{ MaxChunkSize: maxChunkSize, }, }) } func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadChunkSize(cs) } func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadCutoff(cs) } var ( _ fstests.SetUploadChunkSizer = (*Fs)(nil) _ fstests.SetUploadCutoffer = (*Fs)(nil) ) rclone-1.53.3/backend/azureblob/azureblob_unsupported.go000066400000000000000000000002501375552240400234310ustar00rootroot00000000000000// Build for azureblob for unsupported platforms to stop go complaining // about "no buildable Go source files " // +build plan9 solaris js !go1.13 package azureblob rclone-1.53.3/backend/b2/000077500000000000000000000000001375552240400147665ustar00rootroot00000000000000rclone-1.53.3/backend/b2/api/000077500000000000000000000000001375552240400155375ustar00rootroot00000000000000rclone-1.53.3/backend/b2/api/types.go000066400000000000000000000502301375552240400172320ustar00rootroot00000000000000package api import ( "fmt" "path" "strconv" "strings" "time" "github.com/rclone/rclone/fs/fserrors" ) // Error describes a B2 error response type Error struct { Status int `json:"status"` // The numeric HTTP status code. Always matches the status in the HTTP response. Code string `json:"code"` // A single-identifier code that identifies the error. Message string `json:"message"` // A human-readable message, in English, saying what went wrong. 
} // Error satisfies the error interface func (e *Error) Error() string { return fmt.Sprintf("%s (%d %s)", e.Message, e.Status, e.Code) } // Fatal satisfies the fserrors.Fataler interface // // It indicates which errors should be treated as fatal func (e *Error) Fatal() bool { return e.Status == 403 // 403 errors shouldn't be retried } var _ fserrors.Fataler = (*Error)(nil) // Bucket describes a B2 bucket type Bucket struct { ID string `json:"bucketId"` AccountID string `json:"accountId"` Name string `json:"bucketName"` Type string `json:"bucketType"` } // Timestamp is a UTC time when this file was uploaded. It is a base // 10 number of milliseconds since midnight, January 1, 1970 UTC. This // fits in a 64 bit integer such as the type "long" in the programming // language Java. It is intended to be compatible with Java's time // long. For example, it can be passed directly into the java call // Date.setTime(long time). type Timestamp time.Time // MarshalJSON turns a Timestamp into JSON (in UTC) func (t *Timestamp) MarshalJSON() (out []byte, err error) { timestamp := (*time.Time)(t).UTC().UnixNano() return []byte(strconv.FormatInt(timestamp/1e6, 10)), nil } // UnmarshalJSON turns JSON into a Timestamp func (t *Timestamp) UnmarshalJSON(data []byte) error { timestamp, err := strconv.ParseInt(string(data), 10, 64) if err != nil { return err } *t = Timestamp(time.Unix(timestamp/1e3, (timestamp%1e3)*1e6).UTC()) return nil } const versionFormat = "-v2006-01-02-150405.000" // AddVersion adds the timestamp as a version string into the filename passed in. func (t Timestamp) AddVersion(remote string) string { ext := path.Ext(remote) base := remote[:len(remote)-len(ext)] s := time.Time(t).Format(versionFormat) // Replace the '.' with a '-' s = strings.Replace(s, ".", "-", -1) return base + s + ext } // RemoveVersion removes the timestamp from a filename as a version string. // // It returns the new file name and a timestamp, or the old filename // and a zero timestamp. func RemoveVersion(remote string) (t Timestamp, newRemote string) { newRemote = remote ext := path.Ext(remote) base := remote[:len(remote)-len(ext)] if len(base) < len(versionFormat) { return } versionStart := len(base) - len(versionFormat) // Check it ends in -xxx if base[len(base)-4] != '-' { return } // Replace with .xxx for parsing base = base[:len(base)-4] + "." + base[len(base)-3:] newT, err := time.Parse(versionFormat, base[versionStart:]) if err != nil { return } return Timestamp(newT), base[:versionStart] + ext } // IsZero returns true if the timestamp is uninitialized func (t Timestamp) IsZero() bool { return time.Time(t).IsZero() } // Equal compares two timestamps // // If either is zero then it returns false func (t Timestamp) Equal(s Timestamp) bool { if time.Time(t).IsZero() { return false } if time.Time(s).IsZero() { return false } return time.Time(t).Equal(time.Time(s)) } // File is info about a file type File struct { ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version. Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name. Action string `json:"action"` // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both. 
Size int64 `json:"size"` // The number of bytes in the file. UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded. SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file. ContentType string `json:"contentType"` // The MIME type of the file. Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file. } // AuthorizeAccountResponse is as returned from the b2_authorize_account call type AuthorizeAccountResponse struct { AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file. AccountID string `json:"accountId"` // The identifier for the account. Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it. BucketID string `json:"bucketId"` // When present, access is restricted to one bucket. BucketName string `json:"bucketName"` // When present, name of bucket - may be empty Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has. NamePrefix interface{} `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix } `json:"allowed"` APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files. AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header. DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files. MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead. RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance. } // ListBucketsRequest is parameters for b2_list_buckets call type ListBucketsRequest struct { AccountID string `json:"accountId"` // The identifier for the account. BucketID string `json:"bucketId,omitempty"` // When specified, the result will be a list containing just this bucket. BucketName string `json:"bucketName,omitempty"` // When specified, the result will be a list containing just this bucket. BucketTypes []string `json:"bucketTypes,omitempty"` // If present, B2 will use it as a filter for bucket types returned in the list buckets response. } // ListBucketsResponse is as returned from the b2_list_buckets call type ListBucketsResponse struct { Buckets []Bucket `json:"buckets"` } // ListFileNamesRequest is as passed to b2_list_file_names or b2_list_file_versions type ListFileNamesRequest struct { BucketID string `json:"bucketId"` // required - The bucket to look for file names in. StartFileName string `json:"startFileName,omitempty"` // optional - The first file name to return. If there is a file with this name, it will be returned in the list. If not, the first file name after this one will be returned. MaxFileCount int `json:"maxFileCount,omitempty"` // optional - The maximum number of files to return from this call. The default value is 100, and the maximum allowed is 1000. StartFileID string `json:"startFileId,omitempty"` // optional - What to pass in to startFileId for the next search to continue where this one left off. 
Prefix string `json:"prefix,omitempty"` // optional - Files returned will be limited to those with the given prefix. Defaults to the empty string, which matches all files. Delimiter string `json:"delimiter,omitempty"` // Files returned will be limited to those within the top folder, or any one subfolder. Defaults to NULL. Folder names will also be returned. The delimiter character will be used to "break" file names into folders. } // ListFileNamesResponse is as received from b2_list_file_names or b2_list_file_versions type ListFileNamesResponse struct { Files []File `json:"files"` // An array of objects, each one describing one file. NextFileName *string `json:"nextFileName"` // What to pass in to startFileName for the next search to continue where this one left off, or null if there are no more files. NextFileID *string `json:"nextFileId"` // What to pass in to startFileId for the next search to continue where this one left off, or null if there are no more files. } // GetUploadURLRequest is passed to b2_get_upload_url type GetUploadURLRequest struct { BucketID string `json:"bucketId"` // The ID of the bucket that you want to upload to. } // GetUploadURLResponse is received from b2_get_upload_url type GetUploadURLResponse struct { BucketID string `json:"bucketId"` // The unique ID of the bucket. UploadURL string `json:"uploadUrl"` // The URL that can be used to upload files to this bucket, see b2_upload_file. AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_file. } // GetDownloadAuthorizationRequest is passed to b2_get_download_authorization type GetDownloadAuthorizationRequest struct { BucketID string `json:"bucketId"` // The ID of the bucket containing the files you want to download. FileNamePrefix string `json:"fileNamePrefix"` // The file name prefix of files the download authorization token will allow access to. ValidDurationInSeconds int64 `json:"validDurationInSeconds"` // The number of seconds before the authorization token will expire. The minimum value is 1 second. The maximum value is 604800 which is one week in seconds. B2ContentDisposition string `json:"b2ContentDisposition,omitempty"` // optional - If this is present, download requests using the returned authorization must include the same value for b2ContentDisposition. } // GetDownloadAuthorizationResponse is received from b2_get_download_authorization type GetDownloadAuthorizationResponse struct { BucketID string `json:"bucketId"` // The unique ID of the bucket. FileNamePrefix string `json:"fileNamePrefix"` // The file name prefix of files the download authorization token will allow access to. AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when downloading files, see b2_download_file_by_name. } // FileInfo is received from b2_upload_file, b2_get_file_info and b2_finish_large_file type FileInfo struct { ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version. Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name. Action string `json:"action"` // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". 
The result of b2_list_file_versions may have both. AccountID string `json:"accountId"` // Your account ID. BucketID string `json:"bucketId"` // The bucket that the file is in. Size int64 `json:"contentLength"` // The number of bytes stored in the file. UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded. SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file. ContentType string `json:"contentType"` // The MIME type of the file. Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file. } // CreateBucketRequest is used to create a bucket type CreateBucketRequest struct { AccountID string `json:"accountId"` Name string `json:"bucketName"` Type string `json:"bucketType"` } // DeleteBucketRequest is used to delete a bucket type DeleteBucketRequest struct { ID string `json:"bucketId"` AccountID string `json:"accountId"` } // DeleteFileRequest is used to delete a file version type DeleteFileRequest struct { ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions. Name string `json:"fileName"` // The name of this file. } // HideFileRequest is used to hide a file type HideFileRequest struct { BucketID string `json:"bucketId"` // The bucket containing the file to hide. Name string `json:"fileName"` // The name of the file to hide. } // GetFileInfoRequest is used to return a FileInfo struct with b2_get_file_info type GetFileInfoRequest struct { ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions. } // StartLargeFileRequest (b2_start_large_file) prepares for uploading the parts of a large file. // // If the original source of the file being uploaded has a last // modified time concept, Backblaze recommends using // src_last_modified_millis as the name, and a string holding the base // 10 number of milliseconds since midnight, January 1, 1970 // UTC. This fits in a 64 bit integer such as the type "long" in the // programming language Java. It is intended to be compatible with // Java's time long. For example, it can be passed directly into the // Java call Date.setTime(long time). // // If the caller knows the SHA1 of the entire large file being // uploaded, Backblaze recommends using large_file_sha1 as the name, // and a 40 byte hex string representing the SHA1. // // Example: { "src_last_modified_millis" : "1452802803026", "large_file_sha1" : "a3195dc1e7b46a2ff5da4b3c179175b75671e80d", "color": "blue" } type StartLargeFileRequest struct { BucketID string `json:"bucketId"` // The ID of the bucket that the file will go in. Name string `json:"fileName"` // The name of the file. See Files for requirements on file names. ContentType string `json:"contentType"` // The MIME type of the content of the file, which will be returned in the Content-Type header when downloading the file. Use the Content-Type b2/x-auto to automatically set the stored Content-Type post upload. In the case where a file extension is absent or the lookup fails, the Content-Type is set to application/octet-stream. Info map[string]string `json:"fileInfo"` // A JSON object holding the name/value pairs for the custom file info. 
} // StartLargeFileResponse is the response to StartLargeFileRequest type StartLargeFileResponse struct { ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version. Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name. AccountID string `json:"accountId"` // The identifier for the account. BucketID string `json:"bucketId"` // The unique ID of the bucket. ContentType string `json:"contentType"` // The MIME type of the file. Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file. UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded. } // GetUploadPartURLRequest is passed to b2_get_upload_part_url type GetUploadPartURLRequest struct { ID string `json:"fileId"` // The unique identifier of the file being uploaded. } // GetUploadPartURLResponse is received from b2_get_upload_part_url type GetUploadPartURLResponse struct { ID string `json:"fileId"` // The unique identifier of the file being uploaded. UploadURL string `json:"uploadUrl"` // The URL that can be used to upload files to this bucket, see b2_upload_part. AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_part. } // UploadPartResponse is the response to b2_upload_part type UploadPartResponse struct { ID string `json:"fileId"` // The unique identifier of the file being uploaded. PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1) Size int64 `json:"contentLength"` // The number of bytes stored in the file. SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file. } // FinishLargeFileRequest is passed to b2_finish_large_file // // The response is a FileInfo object (with extra AccountID and BucketID fields which we ignore). // // Large files do not have a SHA1 checksum. The value will always be "none". type FinishLargeFileRequest struct { ID string `json:"fileId"` // The unique identifier of the file being uploaded. SHA1s []string `json:"partSha1Array"` // A JSON array of hex SHA1 checksums of the parts of the large file. This is a double-check that the right parts were uploaded in the right order, and that none were missed. Note that the part numbers start at 1, and the SHA1 of the part 1 is the first string in the array, at index 0. } // CancelLargeFileRequest is passed to b2_cancel_large_file // // The response is a CancelLargeFileResponse type CancelLargeFileRequest struct { ID string `json:"fileId"` // The unique identifier of the file being uploaded. } // CancelLargeFileResponse is the response to CancelLargeFileRequest type CancelLargeFileResponse struct { ID string `json:"fileId"` // The unique identifier of the file being uploaded. Name string `json:"fileName"` // The name of this file. AccountID string `json:"accountId"` // The identifier for the account. BucketID string `json:"bucketId"` // The unique ID of the bucket. } // CopyFileRequest is as passed to b2_copy_file type CopyFileRequest struct { SourceID string `json:"sourceFileId"` // The ID of the source file being copied. Name string `json:"fileName"` // The name of the new file being created. Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied. 
MetadataDirective string `json:"metadataDirective,omitempty"` // The strategy for how to populate metadata for the new file: COPY or REPLACE ContentType string `json:"contentType,omitempty"` // The MIME type of the content of the file (REPLACE only) Info map[string]string `json:"fileInfo,omitempty"` // This field stores the metadata that will be stored with the file. (REPLACE only) DestBucketID string `json:"destinationBucketId,omitempty"` // The destination ID of the bucket if set, if not the source bucket will be used } // CopyPartRequest is the request for b2_copy_part - the response is UploadPartResponse type CopyPartRequest struct { SourceID string `json:"sourceFileId"` // The ID of the source file being copied. LargeFileID string `json:"largeFileId"` // The ID of the large file the part will belong to, as returned by b2_start_large_file. PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1) Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied. } rclone-1.53.3/backend/b2/api/types_test.go000066400000000000000000000045721375552240400203010ustar00rootroot00000000000000package api_test import ( "testing" "time" "github.com/rclone/rclone/backend/b2/api" "github.com/rclone/rclone/fstest" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) var ( emptyT api.Timestamp t0 = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z")) t0r = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123000000Z")) t1 = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z")) ) func TestTimestampMarshalJSON(t *testing.T) { resB, err := t0.MarshalJSON() res := string(resB) require.NoError(t, err) assert.Equal(t, "3661123", res) resB, err = t1.MarshalJSON() res = string(resB) require.NoError(t, err) assert.Equal(t, "981173106123", res) } func TestTimestampUnmarshalJSON(t *testing.T) { var tActual api.Timestamp err := tActual.UnmarshalJSON([]byte("981173106123")) require.NoError(t, err) assert.Equal(t, (time.Time)(t1), (time.Time)(tActual)) } func TestTimestampAddVersion(t *testing.T) { for _, test := range []struct { t api.Timestamp in string expected string }{ {t0, "potato.txt", "potato-v1970-01-01-010101-123.txt"}, {t1, "potato", "potato-v2001-02-03-040506-123"}, {t1, "", "-v2001-02-03-040506-123"}, } { actual := test.t.AddVersion(test.in) assert.Equal(t, test.expected, actual, test.in) } } func TestTimestampRemoveVersion(t *testing.T) { for _, test := range []struct { in string expectedT api.Timestamp expectedRemote string }{ {"potato.txt", emptyT, "potato.txt"}, {"potato-v1970-01-01-010101-123.txt", t0r, "potato.txt"}, {"potato-v2001-02-03-040506-123", t1, "potato"}, {"-v2001-02-03-040506-123", t1, ""}, {"potato-v2A01-02-03-040506-123", emptyT, "potato-v2A01-02-03-040506-123"}, {"potato-v2001-02-03-040506=123", emptyT, "potato-v2001-02-03-040506=123"}, } { actualT, actualRemote := api.RemoveVersion(test.in) assert.Equal(t, test.expectedT, actualT, test.in) assert.Equal(t, test.expectedRemote, actualRemote, test.in) } } func TestTimestampIsZero(t *testing.T) { assert.True(t, emptyT.IsZero()) assert.False(t, t0.IsZero()) assert.False(t, t1.IsZero()) } func TestTimestampEqual(t *testing.T) { assert.False(t, emptyT.Equal(emptyT)) assert.False(t, t0.Equal(emptyT)) assert.False(t, emptyT.Equal(t0)) assert.False(t, t0.Equal(t1)) assert.False(t, t1.Equal(t0)) assert.True(t, t0.Equal(t0)) assert.True(t, t1.Equal(t1)) } 
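// TestTimestampVersionRoundTrip is an editorial sketch (not part of the
// original suite): it checks that AddVersion followed by RemoveVersion
// returns the original name and timestamp for a millisecond-precision
// time such as t1.
func TestTimestampVersionRoundTrip(t *testing.T) {
	versioned := t1.AddVersion("potato.txt") // "potato-v2001-02-03-040506-123.txt"
	gotT, gotRemote := api.RemoveVersion(versioned)
	assert.Equal(t, t1, gotT)
	assert.Equal(t, "potato.txt", gotRemote)
}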
rclone-1.53.3/backend/b2/b2.go000066400000000000000000001615711375552240400156330ustar00rootroot00000000000000// Package b2 provides an interface to the Backblaze B2 object storage system package b2 // FIXME should we remove sha1 checks from here as rclone now supports // checking SHA1s? import ( "bufio" "bytes" "context" "crypto/sha1" "fmt" gohash "hash" "io" "net/http" "path" "strconv" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/b2/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/bucket" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/pool" "github.com/rclone/rclone/lib/rest" ) const ( defaultEndpoint = "https://api.backblazeb2.com" headerPrefix = "x-bz-info-" // lower case as that is what the server returns timeKey = "src_last_modified_millis" timeHeader = headerPrefix + timeKey sha1Key = "large_file_sha1" sha1Header = "X-Bz-Content-Sha1" sha1InfoHeader = headerPrefix + sha1Key testModeHeader = "X-Bz-Test-Mode" retryAfterHeader = "Retry-After" minSleep = 10 * time.Millisecond maxSleep = 5 * time.Minute decayConstant = 1 // bigger for slower decay, exponential maxParts = 10000 maxVersions = 100 // maximum number of versions we search in --b2-versions mode minChunkSize = 5 * fs.MebiByte defaultChunkSize = 96 * fs.MebiByte defaultUploadCutoff = 200 * fs.MebiByte largeFileCopyCutoff = 4 * fs.GibiByte // 5E9 is the max memoryPoolFlushTime = fs.Duration(time.Minute) // flush the cached buffers after this long memoryPoolUseMmap = false ) // Globals var ( errNotWithVersions = errors.New("can't modify or delete files in --b2-versions mode") ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "b2", Description: "Backblaze B2", NewFs: NewFs, Options: []fs.Option{{ Name: "account", Help: "Account ID or Application Key ID", Required: true, }, { Name: "key", Help: "Application Key", Required: true, }, { Name: "endpoint", Help: "Endpoint for the service.\nLeave blank normally.", Advanced: true, }, { Name: "test_mode", Help: `A flag string for X-Bz-Test-Mode header for debugging. This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors: * "fail_some_uploads" * "expire_some_account_authorization_tokens" * "force_cap_exceeded" These will be set in the "X-Bz-Test-Mode" header which is documented in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).`, Default: "", Hide: fs.OptionHideConfigurator, Advanced: true, }, { Name: "versions", Help: "Include old versions in directory listings.\nNote that when using this no file write operations are permitted,\nso you can't upload files or delete them.", Default: false, Advanced: true, }, { Name: "hard_delete", Help: "Permanently delete files on remote removal, otherwise hide files.", Default: false, }, { Name: "upload_cutoff", Help: `Cutoff for switching to chunked upload. Files above this size will be uploaded in chunks of "--b2-chunk-size". 
This value should be set no larger than 4.657GiB (== 5GB).`, Default: defaultUploadCutoff, Advanced: true, }, { Name: "copy_cutoff", Help: `Cutoff for switching to multipart copy Any files larger than this that need to be server side copied will be copied in chunks of this size. The minimum is 0 and the maximum is 4.6GB.`, Default: largeFileCopyCutoff, Advanced: true, }, { Name: "chunk_size", Help: `Upload chunk size. Must fit in memory. When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there may be a maximum of "--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum size.`, Default: defaultChunkSize, Advanced: true, }, { Name: "disable_checksum", Help: `Disable checksums for large (> upload cutoff) files Normally rclone will calculate the SHA1 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.`, Default: false, Advanced: true, }, { Name: "download_url", Help: `Custom endpoint for downloads. This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze.`, Advanced: true, }, { Name: "download_auth_duration", Help: `Time before the authorization token will expire in s or suffix ms|s|m|h|d. The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week.`, Default: fs.Duration(7 * 24 * time.Hour), Advanced: true, }, { Name: "memory_pool_flush_time", Default: memoryPoolFlushTime, Advanced: true, Help: `How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.`, }, { Name: "memory_pool_use_mmap", Default: memoryPoolUseMmap, Advanced: true, Help: `Whether to use mmap buffers in internal memory pool.`, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // See: https://www.backblaze.com/b2/docs/files.html // Encode invalid UTF-8 bytes as json doesn't handle them properly. 
// FIXME: allow /, but not leading, trailing or double Default: (encoder.Display | encoder.EncodeBackSlash | encoder.EncodeInvalidUtf8), }}, }) } // Options defines the configuration for this backend type Options struct { Account string `config:"account"` Key string `config:"key"` Endpoint string `config:"endpoint"` TestMode string `config:"test_mode"` Versions bool `config:"versions"` HardDelete bool `config:"hard_delete"` UploadCutoff fs.SizeSuffix `config:"upload_cutoff"` CopyCutoff fs.SizeSuffix `config:"copy_cutoff"` ChunkSize fs.SizeSuffix `config:"chunk_size"` DisableCheckSum bool `config:"disable_checksum"` DownloadURL string `config:"download_url"` DownloadAuthorizationDuration fs.Duration `config:"download_auth_duration"` MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"` MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote b2 server type Fs struct { name string // name of this remote root string // the path we are working on if any opt Options // parsed config options features *fs.Features // optional features srv *rest.Client // the connection to the b2 server rootBucket string // bucket part of root (if any) rootDirectory string // directory part of root (if any) cache *bucket.Cache // cache for bucket creation status bucketIDMutex sync.Mutex // mutex to protect _bucketID _bucketID map[string]string // the ID of the bucket we are working on bucketTypeMutex sync.Mutex // mutex to protect _bucketType _bucketType map[string]string // the Type of the bucket we are working on info api.AuthorizeAccountResponse // result of authorize call uploadMu sync.Mutex // lock for upload variable uploads map[string][]*api.GetUploadURLResponse // Upload URLs by bucketID authMu sync.Mutex // lock for authorizing the account pacer *fs.Pacer // To pace and retry the API calls uploadToken *pacer.TokenDispenser // control concurrency pool *pool.Pool // memory pool } // Object describes a b2 object type Object struct { fs *Fs // what this object is part of remote string // The remote path id string // b2 id of the file modTime time.Time // The modified time of the object if known sha1 string // SHA-1 hash if known size int64 // Size of the object mimeType string // Content-Type of the object } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { if f.rootBucket == "" { return fmt.Sprintf("B2 root") } if f.rootDirectory == "" { return fmt.Sprintf("B2 bucket %s", f.rootBucket) } return fmt.Sprintf("B2 bucket %s path %s", f.rootBucket, f.rootDirectory) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a remote 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // split returns bucket and bucketPath from the rootRelativePath // relative to f.root func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) { return bucket.Split(path.Join(f.root, rootRelativePath)) } // split returns bucket and bucketPath from the object func (o *Object) split() (bucket, bucketPath string) { return o.fs.split(o.remote) } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 401, // Unauthorized (eg "Token 
has expired") 408, // Request Timeout 429, // Rate exceeded. 500, // Get occasional 500 Internal Server Error 503, // Service Unavailable 504, // Gateway Time-out } // shouldRetryNoAuth returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func (f *Fs) shouldRetryNoReauth(resp *http.Response, err error) (bool, error) { // For 429 or 503 errors look at the Retry-After: header and // set the retry appropriately, starting with a minimum of 1 // second if it isn't set. if resp != nil && (resp.StatusCode == 429 || resp.StatusCode == 503) { var retryAfter = 1 retryAfterString := resp.Header.Get(retryAfterHeader) if retryAfterString != "" { var err error retryAfter, err = strconv.Atoi(retryAfterString) if err != nil { fs.Errorf(f, "Malformed %s header %q: %v", retryAfterHeader, retryAfterString, err) } } return true, pacer.RetryAfterError(err, time.Duration(retryAfter)*time.Second) } return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) { if resp != nil && resp.StatusCode == 401 { fs.Debugf(f, "Unauthorized: %v", err) // Reauth authErr := f.authorizeAccount(ctx) if authErr != nil { err = authErr } return true, err } return f.shouldRetryNoReauth(resp, err) } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { // Decode error response errResponse := new(api.Error) err := rest.DecodeJSON(resp, &errResponse) if err != nil { fs.Debugf(nil, "Couldn't decode error response: %v", err) } if errResponse.Code == "" { errResponse.Code = "unknown" } if errResponse.Status == 0 { errResponse.Status = resp.StatusCode } if errResponse.Message == "" { errResponse.Message = "Unknown " + resp.Status } return errResponse } func checkUploadChunkSize(cs fs.SizeSuffix) error { if cs < minChunkSize { return errors.Errorf("%s is less than %s", cs, minChunkSize) } return nil } func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadChunkSize(cs) if err == nil { old, f.opt.ChunkSize = f.opt.ChunkSize, cs } return } func checkUploadCutoff(opt *Options, cs fs.SizeSuffix) error { if cs < opt.ChunkSize { return errors.Errorf("%v is less than chunk size %v", cs, opt.ChunkSize) } return nil } func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadCutoff(&f.opt, cs) if err == nil { old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs } return } // setRoot changes the root of the Fs func (f *Fs) setRoot(root string) { f.root = parsePath(root) f.rootBucket, f.rootDirectory = bucket.Split(f.root) } // NewFs constructs an Fs from the path, bucket:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } err = checkUploadCutoff(opt, opt.UploadCutoff) if err != nil { return nil, errors.Wrap(err, "b2: upload cutoff") } err = checkUploadChunkSize(opt.ChunkSize) if err != nil { return nil, errors.Wrap(err, "b2: chunk size") } if opt.Account == "" { return nil, errors.New("account not found") } if opt.Key == "" { return nil, errors.New("key not found") } if opt.Endpoint == "" { opt.Endpoint = defaultEndpoint } f := &Fs{ name: name, 
opt: *opt, srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler), cache: bucket.NewCache(), _bucketID: make(map[string]string, 1), _bucketType: make(map[string]string, 1), uploads: make(map[string][]*api.GetUploadURLResponse), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers), pool: pool.New( time.Duration(opt.MemoryPoolFlushTime), int(opt.ChunkSize), fs.Config.Transfers, opt.MemoryPoolUseMmap, ), } f.setRoot(root) f.features = (&fs.Features{ ReadMimeType: true, WriteMimeType: true, BucketBased: true, BucketBasedRootOK: true, }).Fill(f) // Set the test flag if required if opt.TestMode != "" { testMode := strings.TrimSpace(opt.TestMode) f.srv.SetHeader(testModeHeader, testMode) fs.Debugf(f, "Setting test header \"%s: %s\"", testModeHeader, testMode) } err = f.authorizeAccount(ctx) if err != nil { return nil, errors.Wrap(err, "failed to authorize account") } // If this is a key limited to a single bucket, it must exist already if f.rootBucket != "" && f.info.Allowed.BucketID != "" { allowedBucket := f.opt.Enc.ToStandardName(f.info.Allowed.BucketName) if allowedBucket == "" { return nil, errors.New("bucket that application key is restricted to no longer exists") } if allowedBucket != f.rootBucket { return nil, errors.Errorf("you must use bucket %q with this application key", allowedBucket) } f.cache.MarkOK(f.rootBucket) f.setBucketID(f.rootBucket, f.info.Allowed.BucketID) } if f.rootBucket != "" && f.rootDirectory != "" { // Check to see if the (bucket,directory) is actually an existing file oldRoot := f.root newRoot, leaf := path.Split(oldRoot) f.setRoot(newRoot) _, err := f.NewObject(ctx, leaf) if err != nil { if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f f.setRoot(oldRoot) return f, nil } return nil, err } // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // authorizeAccount gets the API endpoint and auth token. Can be used // for reauthentication too. 
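//
// A minimal sketch of how the retry wrappers above use this (names as in
// this file): a 401 from any API call makes shouldRetry reauthorize and
// then lets the pacer retry the original request:
//
//	err := f.pacer.Call(func() (bool, error) {
//		resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
//		return f.shouldRetry(ctx, resp, err) // reauthorizes on 401
//	})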
func (f *Fs) authorizeAccount(ctx context.Context) error { f.authMu.Lock() defer f.authMu.Unlock() opts := rest.Opts{ Method: "GET", Path: "/b2api/v1/b2_authorize_account", RootURL: f.opt.Endpoint, UserName: f.opt.Account, Password: f.opt.Key, ExtraHeaders: map[string]string{"Authorization": ""}, // unset the Authorization for this request } err := f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, nil, &f.info) return f.shouldRetryNoReauth(resp, err) }) if err != nil { return errors.Wrap(err, "failed to authenticate") } f.srv.SetRoot(f.info.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken) return nil } // hasPermission returns if the current AuthorizationToken has the selected permission func (f *Fs) hasPermission(permission string) bool { for _, capability := range f.info.Allowed.Capabilities { if capability == permission { return true } } return false } // getUploadURL returns the upload info with the UploadURL and the AuthorizationToken // // This should be returned with returnUploadURL when finished func (f *Fs) getUploadURL(ctx context.Context, bucket string) (upload *api.GetUploadURLResponse, err error) { f.uploadMu.Lock() defer f.uploadMu.Unlock() bucketID, err := f.getBucketID(ctx, bucket) if err != nil { return nil, err } // look for a stored upload URL for the correct bucketID uploads := f.uploads[bucketID] if len(uploads) > 0 { upload, uploads = uploads[0], uploads[1:] f.uploads[bucketID] = uploads return upload, nil } // get a new upload URL since not found opts := rest.Opts{ Method: "POST", Path: "/b2_get_upload_url", } var request = api.GetUploadURLRequest{ BucketID: bucketID, } err = f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, &request, &upload) return f.shouldRetry(ctx, resp, err) }) if err != nil { return nil, errors.Wrap(err, "failed to get upload URL") } return upload, nil } // returnUploadURL returns the UploadURL to the cache func (f *Fs) returnUploadURL(upload *api.GetUploadURLResponse) { if upload == nil { return } f.uploadMu.Lock() f.uploads[upload.BucketID] = append(f.uploads[upload.BucketID], upload) f.uploadMu.Unlock() } // clearUploadURL clears the current UploadURL and the AuthorizationToken func (f *Fs) clearUploadURL(bucketID string) { f.uploadMu.Lock() delete(f.uploads, bucketID) f.uploadMu.Unlock() } // getBuf gets a buffer of f.opt.ChunkSize and an upload token // // If noBuf is set then it just gets an upload token func (f *Fs) getBuf(noBuf bool) (buf []byte) { f.uploadToken.Get() if !noBuf { buf = f.pool.Get() } return buf } // putBuf returns a buffer to the memory pool and an upload token // // If noBuf is set then it just returns the upload token func (f *Fs) putBuf(buf []byte, noBuf bool) { if !noBuf { f.pool.Put(buf) } f.uploadToken.Put() } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.File) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } if info != nil { err := o.decodeMetaData(info) if err != nil { return nil, err } } else { err := o.readMetaData(ctx) // reads info and headers, returning an error if err != nil { return nil, err } } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. 
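//
// For example (the remote name here is hypothetical):
//
//	o, err := f.NewObject(ctx, "dir/file.txt")
//	if err == fs.ErrorObjectNotFound {
//		// no current version of the file exists
//	}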
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // listFn is called from list to handle an object type listFn func(remote string, object *api.File, isDirectory bool) error // errEndList is a sentinel used to end the list iteration now. // listFn should return it to end the iteration with no errors. var errEndList = errors.New("end list") // list lists the objects into the function supplied from // the bucket and root supplied // // (bucket, directory) is the starting directory // // If prefix is set then it is removed from all file names // // If addBucket is set then it adds the bucket to the start of the // remotes generated // // If recurse is set the function will recursively list // // If limit is > 0 then it limits to that many files (must be less // than 1000) // // If hidden is set then it will list the hidden (deleted) files too. // // if findFile is set it will look for files called (bucket, directory) func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, limit int, hidden bool, findFile bool, fn listFn) error { if !findFile { if prefix != "" { prefix += "/" } if directory != "" { directory += "/" } } delimiter := "" if !recurse { delimiter = "/" } bucketID, err := f.getBucketID(ctx, bucket) if err != nil { return err } chunkSize := 1000 if limit > 0 { chunkSize = limit } var request = api.ListFileNamesRequest{ BucketID: bucketID, MaxFileCount: chunkSize, Prefix: f.opt.Enc.FromStandardPath(directory), Delimiter: delimiter, } if directory != "" { request.StartFileName = f.opt.Enc.FromStandardPath(directory) } opts := rest.Opts{ Method: "POST", Path: "/b2_list_file_names", } if hidden { opts.Path = "/b2_list_file_versions" } for { var response api.ListFileNamesResponse err := f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, &request, &response) return f.shouldRetry(ctx, resp, err) }) if err != nil { return err } for i := range response.Files { file := &response.Files[i] file.Name = f.opt.Enc.ToStandardPath(file.Name) // Finish if file name no longer has prefix if prefix != "" && !strings.HasPrefix(file.Name, prefix) { return nil } if !strings.HasPrefix(file.Name, prefix) { fs.Debugf(f, "Odd name received %q", file.Name) continue } remote := file.Name[len(prefix):] // Check for directory isDirectory := remote == "" || strings.HasSuffix(remote, "/") if isDirectory { remote = remote[:len(remote)-1] } if addBucket { remote = path.Join(bucket, remote) } // Send object err = fn(remote, file, isDirectory) if err != nil { if err == errEndList { return nil } return err } } // end if no NextFileName if response.NextFileName == nil { break } request.StartFileName = *response.NextFileName if response.NextFileID != nil { request.StartFileID = *response.NextFileID } } return nil } // Convert a list item into a DirEntry func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.File, isDirectory bool, last *string) (fs.DirEntry, error) { if isDirectory { d := fs.NewDir(remote, time.Time{}) return d, nil } if remote == *last { remote = object.UploadTimestamp.AddVersion(remote) } else { *last = remote } // hide objects represent deleted files which we don't list if object.Action == "hide" { return nil, nil } o, err := f.newObjectWithInfo(ctx, remote, object) if err != nil { return nil, err } return o, nil } // listDir lists a single directory func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket 
bool) (entries fs.DirEntries, err error) { last := "" err = f.list(ctx, bucket, directory, prefix, f.rootBucket == "", false, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error { entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory, &last) if err != nil { return err } if entry != nil { entries = append(entries, entry) } return nil }) if err != nil { return nil, err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) return entries, nil } // listBuckets returns all the buckets as directory entries func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) { err = f.listBucketsToFn(ctx, func(bucket *api.Bucket) error { d := fs.NewDir(bucket.Name, time.Time{}) entries = append(entries, d) return nil }) if err != nil { return nil, err } return entries, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { bucket, directory := f.split(dir) if bucket == "" { if directory != "" { return nil, fs.ErrorListBucketRequired } return f.listBuckets(ctx) } return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "") } // ListR lists the objects and directories of the Fs starting // from dir recursively. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively than doing a directory traversal.
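//
// For example, counting every entry in the remote with a single recursive
// listing (a sketch using the callback type fs.ListRCallback):
//
//	count := 0
//	err := f.ListR(ctx, "", func(entries fs.DirEntries) error {
//		count += len(entries)
//		return nil
//	})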
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { bucket, directory := f.split(dir) list := walk.NewListRHelper(callback) listR := func(bucket, directory, prefix string, addBucket bool) error { last := "" return f.list(ctx, bucket, directory, prefix, addBucket, true, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error { entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory, &last) if err != nil { return err } return list.Add(entry) }) } if bucket == "" { entries, err := f.listBuckets(ctx) if err != nil { return err } for _, entry := range entries { err = list.Add(entry) if err != nil { return err } bucket := entry.Remote() err = listR(bucket, "", f.rootDirectory, true) if err != nil { return err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) } } else { err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "") if err != nil { return err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) } return list.Flush() } // listBucketFn is called from listBucketsToFn to handle a bucket type listBucketFn func(*api.Bucket) error // listBucketsToFn lists the buckets to the function supplied func (f *Fs) listBucketsToFn(ctx context.Context, fn listBucketFn) error { var account = api.ListBucketsRequest{ AccountID: f.info.AccountID, BucketID: f.info.Allowed.BucketID, } var response api.ListBucketsResponse opts := rest.Opts{ Method: "POST", Path: "/b2_list_buckets", } err := f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, &account, &response) return f.shouldRetry(ctx, resp, err) }) if err != nil { return err } f.bucketIDMutex.Lock() f.bucketTypeMutex.Lock() f._bucketID = make(map[string]string, 1) f._bucketType = make(map[string]string, 1) for i := range response.Buckets { bucket := &response.Buckets[i] bucket.Name = f.opt.Enc.ToStandardName(bucket.Name) f.cache.MarkOK(bucket.Name) f._bucketID[bucket.Name] = bucket.ID f._bucketType[bucket.Name] = bucket.Type } f.bucketTypeMutex.Unlock() f.bucketIDMutex.Unlock() for i := range response.Buckets { bucket := &response.Buckets[i] err = fn(bucket) if err != nil { return err } } return nil } // getbucketType finds the bucketType for the current bucket name // can be one of allPublic. 
allPrivate, or snapshot func (f *Fs) getbucketType(ctx context.Context, bucket string) (bucketType string, err error) { f.bucketTypeMutex.Lock() bucketType = f._bucketType[bucket] f.bucketTypeMutex.Unlock() if bucketType != "" { return bucketType, nil } err = f.listBucketsToFn(ctx, func(bucket *api.Bucket) error { // listBucketsToFn reads bucket Types return nil }) f.bucketTypeMutex.Lock() bucketType = f._bucketType[bucket] f.bucketTypeMutex.Unlock() if bucketType == "" { err = fs.ErrorDirNotFound } return bucketType, err } // setBucketType sets the Type for the current bucket name func (f *Fs) setBucketType(bucket string, Type string) { f.bucketTypeMutex.Lock() f._bucketType[bucket] = Type f.bucketTypeMutex.Unlock() } // clearBucketType clears the Type for the current bucket name func (f *Fs) clearBucketType(bucket string) { f.bucketTypeMutex.Lock() delete(f._bucketType, bucket) f.bucketTypeMutex.Unlock() } // getBucketID finds the ID for the current bucket name func (f *Fs) getBucketID(ctx context.Context, bucket string) (bucketID string, err error) { f.bucketIDMutex.Lock() bucketID = f._bucketID[bucket] f.bucketIDMutex.Unlock() if bucketID != "" { return bucketID, nil } err = f.listBucketsToFn(ctx, func(bucket *api.Bucket) error { // listBucketsToFn sets IDs return nil }) f.bucketIDMutex.Lock() bucketID = f._bucketID[bucket] f.bucketIDMutex.Unlock() if bucketID == "" { err = fs.ErrorDirNotFound } return bucketID, err } // setBucketID sets the ID for the current bucket name func (f *Fs) setBucketID(bucket, ID string) { f.bucketIDMutex.Lock() f._bucketID[bucket] = ID f.bucketIDMutex.Unlock() } // clearBucketID clears the ID for the current bucket name func (f *Fs) clearBucketID(bucket string) { f.bucketIDMutex.Lock() delete(f._bucketID, bucket) f.bucketIDMutex.Unlock() } // Put the object into the bucket // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { // Temporary Object under construction fs := &Object{ fs: f, remote: src.Remote(), } return fs, fs.Update(ctx, in, src, options...) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) } // Mkdir creates the bucket if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { bucket, _ := f.split(dir) return f.makeBucket(ctx, bucket) } // makeBucket creates the bucket if it doesn't exist func (f *Fs) makeBucket(ctx context.Context, bucket string) error { return f.cache.Create(bucket, func() error { opts := rest.Opts{ Method: "POST", Path: "/b2_create_bucket", } var request = api.CreateBucketRequest{ AccountID: f.info.AccountID, Name: f.opt.Enc.FromStandardName(bucket), Type: "allPrivate", } var response api.Bucket err := f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, &request, &response) return f.shouldRetry(ctx, resp, err) }) if err != nil { if apiErr, ok := err.(*api.Error); ok { if apiErr.Code == "duplicate_bucket_name" { // Check this is our bucket - buckets are globally unique and this // might be someone elses. 
_, getBucketErr := f.getBucketID(ctx, bucket) if getBucketErr == nil { // found so it is our bucket return nil } if getBucketErr != fs.ErrorDirNotFound { fs.Debugf(f, "Error checking bucket exists: %v", getBucketErr) } } } return errors.Wrap(err, "failed to create bucket") } f.setBucketID(bucket, response.ID) f.setBucketType(bucket, response.Type) return nil }, nil) } // Rmdir deletes the bucket if the fs is at the root // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { bucket, directory := f.split(dir) if bucket == "" || directory != "" { return nil } return f.cache.Remove(bucket, func() error { opts := rest.Opts{ Method: "POST", Path: "/b2_delete_bucket", } bucketID, err := f.getBucketID(ctx, bucket) if err != nil { return err } var request = api.DeleteBucketRequest{ ID: bucketID, AccountID: f.info.AccountID, } var response api.Bucket err = f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, &request, &response) return f.shouldRetry(ctx, resp, err) }) if err != nil { return errors.Wrap(err, "failed to delete bucket") } f.clearBucketID(bucket) f.clearBucketType(bucket) f.clearUploadURL(bucketID) return nil }) } // Precision of the remote func (f *Fs) Precision() time.Duration { return time.Millisecond } // hide hides a file on the remote func (f *Fs) hide(ctx context.Context, bucket, bucketPath string) error { bucketID, err := f.getBucketID(ctx, bucket) if err != nil { return err } opts := rest.Opts{ Method: "POST", Path: "/b2_hide_file", } var request = api.HideFileRequest{ BucketID: bucketID, Name: f.opt.Enc.FromStandardPath(bucketPath), } var response api.File err = f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, &request, &response) return f.shouldRetry(ctx, resp, err) }) if err != nil { if apiErr, ok := err.(*api.Error); ok { if apiErr.Code == "already_hidden" { // sometimes eventual consistency causes this, so // ignore this error since it is harmless return nil } } return errors.Wrapf(err, "failed to hide %q", bucketPath) } return nil } // deleteByID deletes a file version given Name and ID func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error { opts := rest.Opts{ Method: "POST", Path: "/b2_delete_file_version", } var request = api.DeleteFileRequest{ ID: ID, Name: f.opt.Enc.FromStandardPath(Name), } var response api.File err := f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, &request, &response) return f.shouldRetry(ctx, resp, err) }) if err != nil { return errors.Wrapf(err, "failed to delete %q", Name) } return nil } // purge deletes all the files and directories // // if oldOnly is true then it deletes only non current files. // // Implemented here so we can make sure we delete old versions. 
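//
// In terms of the wrappers below, Purge calls purge(ctx, dir, false) and
// CleanUp calls purge(ctx, "", true). Sketch:
//
//	_ = f.Purge(ctx, "dir") // removes current and old versions under dir
//	_ = f.CleanUp(ctx)      // removes only old versions and stale unfinished uploads
//
// where "stale" means an unfinished large upload started more than 24
// hours ago (see isUnfinishedUploadStale).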
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error { bucket, directory := f.split(dir) if bucket == "" { return errors.New("can't purge from root") } var errReturn error var checkErrMutex sync.Mutex var checkErr = func(err error) { if err == nil { return } checkErrMutex.Lock() defer checkErrMutex.Unlock() if errReturn == nil { errReturn = err } } var isUnfinishedUploadStale = func(timestamp api.Timestamp) bool { if time.Since(time.Time(timestamp)).Hours() > 24 { return true } return false } // Delete Config.Transfers in parallel toBeDeleted := make(chan *api.File, fs.Config.Transfers) var wg sync.WaitGroup wg.Add(fs.Config.Transfers) for i := 0; i < fs.Config.Transfers; i++ { go func() { defer wg.Done() for object := range toBeDeleted { oi, err := f.newObjectWithInfo(ctx, object.Name, object) if err != nil { fs.Errorf(object.Name, "Can't create object %v", err) continue } tr := accounting.Stats(ctx).NewCheckingTransfer(oi) err = f.deleteByID(ctx, object.ID, object.Name) checkErr(err) tr.Done(err) } }() } last := "" checkErr(f.list(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", true, 0, true, false, func(remote string, object *api.File, isDirectory bool) error { if !isDirectory { oi, err := f.newObjectWithInfo(ctx, object.Name, object) if err != nil { fs.Errorf(object, "Can't create object %+v", err) } tr := accounting.Stats(ctx).NewCheckingTransfer(oi) if oldOnly && last != remote { // Check current version of the file if object.Action == "hide" { fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID) toBeDeleted <- object } else if object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) { fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local()) toBeDeleted <- object } else { fs.Debugf(remote, "Not deleting current version (id %q) %q", object.ID, object.Action) } } else { fs.Debugf(remote, "Deleting (id %q)", object.ID) toBeDeleted <- object } last = remote tr.Done(nil) } return nil })) close(toBeDeleted) wg.Wait() if !oldOnly { checkErr(f.Rmdir(ctx, dir)) } return errReturn } // Purge deletes all the files and directories including the old versions. func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purge(ctx, dir, false) } // CleanUp deletes all the hidden files. 
func (f *Fs) CleanUp(ctx context.Context) error { return f.purge(ctx, "", true) } // copy does a server side copy from dstObj <- srcObj // // If newInfo is nil then the metadata will be copied otherwise it // will be replaced with newInfo func (f *Fs) copy(ctx context.Context, dstObj *Object, srcObj *Object, newInfo *api.File) (err error) { if srcObj.size >= int64(f.opt.CopyCutoff) { if newInfo == nil { newInfo, err = srcObj.getMetaData(ctx) if err != nil { return err } } up, err := f.newLargeUpload(ctx, dstObj, nil, srcObj, f.opt.CopyCutoff, true, newInfo) if err != nil { return err } return up.Upload(ctx) } dstBucket, dstPath := dstObj.split() err = f.makeBucket(ctx, dstBucket) if err != nil { return err } destBucketID, err := f.getBucketID(ctx, dstBucket) if err != nil { return err } opts := rest.Opts{ Method: "POST", Path: "/b2_copy_file", } var request = api.CopyFileRequest{ SourceID: srcObj.id, Name: f.opt.Enc.FromStandardPath(dstPath), DestBucketID: destBucketID, } if newInfo == nil { request.MetadataDirective = "COPY" } else { request.MetadataDirective = "REPLACE" request.ContentType = newInfo.ContentType request.Info = newInfo.Info } var response api.FileInfo err = f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, &request, &response) return f.shouldRetry(ctx, resp, err) }) if err != nil { return err } return dstObj.decodeMetaDataFileInfo(&response) } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } // Temporary Object under construction dstObj := &Object{ fs: f, remote: remote, } err := f.copy(ctx, dstObj, srcObj, nil) if err != nil { return nil, err } return dstObj, nil } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.SHA1) } // getDownloadAuthorization returns authorization token for downloading // without account. 
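//
// PublicLink below is the public entry point. For example (the remote name
// is hypothetical, and the application key must have the shareFiles
// capability):
//
//	link, err := f.PublicLink(ctx, "dir/file.txt", expire, false)
//
// For allPrivate and snapshot buckets the token validity comes from
// --b2-download-auth-duration, which must be between 1 second and 1 week.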
func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string) (authorization string, err error) { validDurationInSeconds := time.Duration(f.opt.DownloadAuthorizationDuration).Nanoseconds() / 1e9 if validDurationInSeconds <= 0 || validDurationInSeconds > 604800 { return "", errors.New("--b2-download-auth-duration must be between 1 sec and 1 week") } if !f.hasPermission("shareFiles") { return "", errors.New("sharing a file link requires the shareFiles permission") } bucketID, err := f.getBucketID(ctx, bucket) if err != nil { return "", err } opts := rest.Opts{ Method: "POST", Path: "/b2_get_download_authorization", } var request = api.GetDownloadAuthorizationRequest{ BucketID: bucketID, FileNamePrefix: f.opt.Enc.FromStandardPath(path.Join(f.root, remote)), ValidDurationInSeconds: validDurationInSeconds, } var response api.GetDownloadAuthorizationResponse err = f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, &request, &response) return f.shouldRetry(ctx, resp, err) }) if err != nil { return "", errors.Wrap(err, "failed to get download authorization") } return response.AuthorizationToken, nil } // PublicLink returns a link for downloading without account func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) { bucket, bucketPath := f.split(remote) var RootURL string if f.opt.DownloadURL == "" { RootURL = f.info.DownloadURL } else { RootURL = f.opt.DownloadURL } _, err = f.NewObject(ctx, remote) if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile { err2 := f.list(ctx, bucket, bucketPath, f.rootDirectory, f.rootBucket == "", false, 1, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error { err = nil return nil }) if err2 != nil { return "", err2 } } if err != nil { return "", err } absPath := "/" + bucketPath link = RootURL + "/file/" + urlEncode(bucket) + absPath bucketType, err := f.getbucketType(ctx, bucket) if err != nil { return "", err } if bucketType == "allPrivate" || bucketType == "snapshot" { AuthorizationToken, err := f.getDownloadAuthorization(ctx, bucket, remote) if err != nil { return "", err } link += "?Authorization=" + AuthorizationToken } return link, nil } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the Sha-1 of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.SHA1 { return "", hash.ErrUnsupported } if o.sha1 == "" { // Error is logged in readMetaData err := o.readMetaData(ctx) if err != nil { return "", err } } return o.sha1, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return o.size } // Clean the SHA1 // // Make sure it is lower case // // Remove unverified prefix - see https://www.backblaze.com/b2/docs/uploading.html // Some tools (eg Cyberduck) use this func cleanSHA1(sha1 string) (out string) { out = strings.ToLower(sha1) const unverified = "unverified:" if strings.HasPrefix(out, unverified) { out = out[len(unverified):] } return out } // decodeMetaDataRaw sets the metadata from the data passed in // // Sets // o.id // o.modTime // o.size // o.sha1 func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size 
int64, UploadTimestamp api.Timestamp, Info map[string]string, mimeType string) (err error) { o.id = ID o.sha1 = SHA1 o.mimeType = mimeType // Read SHA1 from metadata if it exists and isn't set if o.sha1 == "" || o.sha1 == "none" { o.sha1 = Info[sha1Key] } o.sha1 = cleanSHA1(o.sha1) o.size = Size // Use the UploadTimestamp if can't get file info o.modTime = time.Time(UploadTimestamp) return o.parseTimeString(Info[timeKey]) } // decodeMetaData sets the metadata in the object from an api.File // // Sets // o.id // o.modTime // o.size // o.sha1 func (o *Object) decodeMetaData(info *api.File) (err error) { return o.decodeMetaDataRaw(info.ID, info.SHA1, info.Size, info.UploadTimestamp, info.Info, info.ContentType) } // decodeMetaDataFileInfo sets the metadata in the object from an api.FileInfo // // Sets // o.id // o.modTime // o.size // o.sha1 func (o *Object) decodeMetaDataFileInfo(info *api.FileInfo) (err error) { return o.decodeMetaDataRaw(info.ID, info.SHA1, info.Size, info.UploadTimestamp, info.Info, info.ContentType) } // getMetaData gets the metadata from the object unconditionally func (o *Object) getMetaData(ctx context.Context) (info *api.File, err error) { bucket, bucketPath := o.split() maxSearched := 1 var timestamp api.Timestamp if o.fs.opt.Versions { timestamp, bucketPath = api.RemoveVersion(bucketPath) maxSearched = maxVersions } err = o.fs.list(ctx, bucket, bucketPath, "", false, true, maxSearched, o.fs.opt.Versions, true, func(remote string, object *api.File, isDirectory bool) error { if isDirectory { return nil } if remote == bucketPath { if !timestamp.IsZero() && !timestamp.Equal(object.UploadTimestamp) { return nil } info = object } return errEndList // read only 1 item }) if err != nil { if err == fs.ErrorDirNotFound { return nil, fs.ErrorObjectNotFound } return nil, err } if info == nil { return nil, fs.ErrorObjectNotFound } return info, nil } // readMetaData gets the metadata if it hasn't already been fetched // // Sets // o.id // o.modTime // o.size // o.sha1 func (o *Object) readMetaData(ctx context.Context) (err error) { if o.id != "" { return nil } info, err := o.getMetaData(ctx) if err != nil { return err } return o.decodeMetaData(info) } // timeString returns modTime as the number of milliseconds // elapsed since January 1, 1970 UTC as a decimal string. func timeString(modTime time.Time) string { return strconv.FormatInt(modTime.UnixNano()/1e6, 10) } // parseTimeString converts a decimal string number of milliseconds // elapsed since January 1, 1970 UTC into a time.Time and stores it in // the modTime variable. func (o *Object) parseTimeString(timeString string) (err error) { if timeString == "" { return nil } unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64) if err != nil { fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err) return nil } o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC() return nil } // ModTime returns the modification time of the object // // It attempts to read the objects mtime and if that isn't present the // LastModified returned in the http headers // // SHA-1 will also be updated once the request has completed. 
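//
// Mod times are stored as millisecond Unix timestamps (see timeString and
// parseTimeString above), so the round trip keeps millisecond precision
// only. For example, using values from b2_internal_test.go below:
//
//	timeString(fstest.Time("2001-02-03T04:05:10.123123123Z")) // "981173110123"
//	// and parseTimeString("981173110123") restores
//	// 2001-02-03T04:05:10.123000000Z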
func (o *Object) ModTime(ctx context.Context) (result time.Time) { // The error is logged in readMetaData _ = o.readMetaData(ctx) return o.modTime } // SetModTime sets the modification time of the Object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { info, err := o.getMetaData(ctx) if err != nil { return err } info.Info[timeKey] = timeString(modTime) // Copy to the same name, overwriting the metadata only return o.fs.copy(ctx, o, o, info) } // Storable returns if this object is storable func (o *Object) Storable() bool { return true } // openFile represents an Object open for reading type openFile struct { o *Object // Object we are reading for resp *http.Response // response of the GET body io.Reader // reading from here hash gohash.Hash // currently accumulating SHA1 bytes int64 // number of bytes read on this connection eof bool // whether we have read end of file } // newOpenFile wraps an io.ReadCloser and checks the sha1sum func newOpenFile(o *Object, resp *http.Response) *openFile { file := &openFile{ o: o, resp: resp, hash: sha1.New(), } file.body = io.TeeReader(resp.Body, file.hash) return file } // Read bytes from the object - see io.Reader func (file *openFile) Read(p []byte) (n int, err error) { n, err = file.body.Read(p) file.bytes += int64(n) if err == io.EOF { file.eof = true } return } // Close the object and checks the length and SHA1 if all the object // was read func (file *openFile) Close() (err error) { // Close the body at the end defer fs.CheckClose(file.resp.Body, &err) // If not end of file then can't check SHA1 if !file.eof { return nil } // Check to see we read the correct number of bytes if file.o.Size() != file.bytes { return errors.Errorf("object corrupted on transfer - length mismatch (want %d got %d)", file.o.Size(), file.bytes) } // Check the SHA1 receivedSHA1 := file.o.sha1 calculatedSHA1 := fmt.Sprintf("%x", file.hash.Sum(nil)) if receivedSHA1 != "" && receivedSHA1 != calculatedSHA1 { return errors.Errorf("object corrupted on transfer - SHA1 mismatch (want %q got %q)", receivedSHA1, calculatedSHA1) } return nil } // Check it satisfies the interfaces var _ io.ReadCloser = &openFile{} // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { fs.FixRangeOption(options, o.size) opts := rest.Opts{ Method: "GET", Options: options, } // Use downloadUrl from backblaze if downloadUrl is not set // otherwise use the custom downloadUrl if o.fs.opt.DownloadURL == "" { opts.RootURL = o.fs.info.DownloadURL } else { opts.RootURL = o.fs.opt.DownloadURL } // Download by id if set and not using DownloadURL otherwise by name if o.id != "" && o.fs.opt.DownloadURL == "" { opts.Path += "/b2api/v1/b2_download_file_by_id?fileId=" + urlEncode(o.id) } else { bucket, bucketPath := o.split() opts.Path += "/file/" + urlEncode(o.fs.opt.Enc.FromStandardName(bucket)) + "/" + urlEncode(o.fs.opt.Enc.FromStandardPath(bucketPath)) } var resp *http.Response err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return o.fs.shouldRetry(ctx, resp, err) }) if err != nil { return nil, errors.Wrap(err, "failed to open for download") } // Parse the time out of the headers if possible err = o.parseTimeString(resp.Header.Get(timeHeader)) if err != nil { _ = resp.Body.Close() return nil, err } // Read sha1 from header if it isn't set if o.sha1 == "" { o.sha1 = resp.Header.Get(sha1Header) fs.Debugf(o, "Reading sha1 from header - %q", o.sha1) // if sha1 header is "none" (in big 
files), then need // to read it from the metadata if o.sha1 == "none" { o.sha1 = resp.Header.Get(sha1InfoHeader) fs.Debugf(o, "Reading sha1 from info - %q", o.sha1) } o.sha1 = cleanSHA1(o.sha1) } // Don't check length or hash on partial content if resp.StatusCode == http.StatusPartialContent { return resp.Body, nil } return newOpenFile(o, resp), nil } // dontEncode is the characters that do not need percent-encoding // // The characters that do not need percent-encoding are a subset of // the printable ASCII characters: upper-case letters, lower-case // letters, digits, ".", "_", "-", "/", "~", "!", "$", "'", "(", ")", // "*", ";", "=", ":", and "@". All other byte values in a UTF-8 must // be replaced with "%" and the two-digit hex value of the byte. const dontEncode = (`abcdefghijklmnopqrstuvwxyz` + `ABCDEFGHIJKLMNOPQRSTUVWXYZ` + `0123456789` + `._-/~!$'()*;=:@`) // noNeedToEncode is a bitmap of characters which don't need % encoding var noNeedToEncode [256]bool func init() { for _, c := range dontEncode { noNeedToEncode[c] = true } } // urlEncode encodes in with % encoding func urlEncode(in string) string { var out bytes.Buffer for i := 0; i < len(in); i++ { c := in[i] if noNeedToEncode[c] { _ = out.WriteByte(c) } else { _, _ = out.WriteString(fmt.Sprintf("%%%2X", c)) } } return out.String() } // Update the object with the contents of the io.Reader, modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { if o.fs.opt.Versions { return errNotWithVersions } size := src.Size() bucket, bucketPath := o.split() err = o.fs.makeBucket(ctx, bucket) if err != nil { return err } if size == -1 { // Check if the file is large enough for a chunked upload (needs to be at least two chunks) buf := o.fs.getBuf(false) n, err := io.ReadFull(in, buf) if err == nil { bufReader := bufio.NewReader(in) in = bufReader _, err = bufReader.Peek(1) } if err == nil { fs.Debugf(o, "File is big enough for chunked streaming") up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil) if err != nil { o.fs.putBuf(buf, false) return err } // NB Stream returns the buffer and token return up.Stream(ctx, buf) } else if err == io.EOF || err == io.ErrUnexpectedEOF { fs.Debugf(o, "File has %d bytes, which makes only one chunk. Using direct upload.", n) defer o.fs.putBuf(buf, false) size = int64(n) in = bytes.NewReader(buf[:n]) } else { o.fs.putBuf(buf, false) return err } } else if size > int64(o.fs.opt.UploadCutoff) { up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil) if err != nil { return err } return up.Upload(ctx) } modTime := src.ModTime(ctx) calculatedSha1, _ := src.Hash(ctx, hash.SHA1) if calculatedSha1 == "" { calculatedSha1 = "hex_digits_at_end" har := newHashAppendingReader(in, sha1.New()) size += int64(har.AdditionalLength()) in = har } // Get upload URL upload, err := o.fs.getUploadURL(ctx, bucket) if err != nil { return err } defer func() { // return it like this because we might nil it out o.fs.returnUploadURL(upload) }() // Headers for upload file // // Authorization // required // An upload authorization token, from b2_get_upload_url. // // X-Bz-File-Name // required // // The name of the file, in percent-encoded UTF-8. See Files for requirements on file names. See String Encoding. 
// // Content-Type // required // // The MIME type of the content of the file, which will be returned in // the Content-Type header when downloading the file. Use the // Content-Type b2/x-auto to automatically set the stored Content-Type // post upload. In the case where a file extension is absent or the // lookup fails, the Content-Type is set to application/octet-stream. The // Content-Type mappings can be pursued here. // // X-Bz-Content-Sha1 // required // // The SHA1 checksum of the content of the file. B2 will check this when // the file is uploaded, to make sure that the file arrived correctly. It // will be returned in the X-Bz-Content-Sha1 header when the file is // downloaded. // // X-Bz-Info-src_last_modified_millis // optional // // If the original source of the file being uploaded has a last modified // time concept, Backblaze recommends using this spelling of one of your // ten X-Bz-Info-* headers (see below). Using a standard spelling allows // different B2 clients and the B2 web user interface to interoperate // correctly. The value should be a base 10 number which represents a UTC // time when the original source file was last modified. It is a base 10 // number of milliseconds since midnight, January 1, 1970 UTC. This fits // in a 64 bit integer such as the type "long" in the programming // language Java. It is intended to be compatible with Java's time // long. For example, it can be passed directly into the Java call // Date.setTime(long time). // // X-Bz-Info-* // optional // // Up to 10 of these headers may be present. The * part of the header // name is replace with the name of a custom field in the file // information stored with the file, and the value is an arbitrary UTF-8 // string, percent-encoded. The same info headers sent with the upload // will be returned with the download. 
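// For example (illustrative values), uploading "dir/hello world.txt" sends
// X-Bz-File-Name: dir/hello%20world.txt (percent-encoded by urlEncode
// above, which leaves "/" unescaped) and X-Bz-Info-src_last_modified_millis
// set to timeString(modTime).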
opts := rest.Opts{ Method: "POST", RootURL: upload.UploadURL, Body: in, Options: options, ExtraHeaders: map[string]string{ "Authorization": upload.AuthorizationToken, "X-Bz-File-Name": urlEncode(o.fs.opt.Enc.FromStandardPath(bucketPath)), "Content-Type": fs.MimeType(ctx, src), sha1Header: calculatedSha1, timeHeader: timeString(modTime), }, ContentLength: &size, } var response api.FileInfo // Don't retry, return a retry error instead err = o.fs.pacer.CallNoRetry(func() (bool, error) { resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &response) retry, err := o.fs.shouldRetry(ctx, resp, err) // On retryable error clear UploadURL if retry { fs.Debugf(o, "Clearing upload URL because of error: %v", err) upload = nil } return retry, err }) if err != nil { return err } return o.decodeMetaDataFileInfo(&response) } // Remove an object func (o *Object) Remove(ctx context.Context) error { bucket, bucketPath := o.split() if o.fs.opt.Versions { return errNotWithVersions } if o.fs.opt.HardDelete { return o.fs.deleteByID(ctx, o.id, bucketPath) } return o.fs.hide(ctx, bucket, bucketPath) } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.mimeType } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.id } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.Purger = &Fs{} _ fs.Copier = &Fs{} _ fs.PutStreamer = &Fs{} _ fs.CleanUpper = &Fs{} _ fs.ListRer = &Fs{} _ fs.PublicLinker = &Fs{} _ fs.Object = &Object{} _ fs.MimeTyper = &Object{} _ fs.IDer = &Object{} ) rclone-1.53.3/backend/b2/b2_internal_test.go000066400000000000000000000174771375552240400205730ustar00rootroot00000000000000package b2 import ( "testing" "time" "github.com/rclone/rclone/fstest" ) // Test b2 string encoding // https://www.backblaze.com/b2/docs/string_encoding.html var encodeTest = []struct { fullyEncoded string minimallyEncoded string plainText string }{ {fullyEncoded: "%20", minimallyEncoded: "+", plainText: " "}, {fullyEncoded: "%21", minimallyEncoded: "!", plainText: "!"}, {fullyEncoded: "%22", minimallyEncoded: "%22", plainText: "\""}, {fullyEncoded: "%23", minimallyEncoded: "%23", plainText: "#"}, {fullyEncoded: "%24", minimallyEncoded: "$", plainText: "$"}, {fullyEncoded: "%25", minimallyEncoded: "%25", plainText: "%"}, {fullyEncoded: "%26", minimallyEncoded: "%26", plainText: "&"}, {fullyEncoded: "%27", minimallyEncoded: "'", plainText: "'"}, {fullyEncoded: "%28", minimallyEncoded: "(", plainText: "("}, {fullyEncoded: "%29", minimallyEncoded: ")", plainText: ")"}, {fullyEncoded: "%2A", minimallyEncoded: "*", plainText: "*"}, {fullyEncoded: "%2B", minimallyEncoded: "%2B", plainText: "+"}, {fullyEncoded: "%2C", minimallyEncoded: "%2C", plainText: ","}, {fullyEncoded: "%2D", minimallyEncoded: "-", plainText: "-"}, {fullyEncoded: "%2E", minimallyEncoded: ".", plainText: "."}, {fullyEncoded: "%2F", minimallyEncoded: "/", plainText: "/"}, {fullyEncoded: "%30", minimallyEncoded: "0", plainText: "0"}, {fullyEncoded: "%31", minimallyEncoded: "1", plainText: "1"}, {fullyEncoded: "%32", minimallyEncoded: "2", plainText: "2"}, {fullyEncoded: "%33", minimallyEncoded: "3", plainText: "3"}, {fullyEncoded: "%34", minimallyEncoded: "4", plainText: "4"}, {fullyEncoded: "%35", minimallyEncoded: "5", plainText: "5"}, {fullyEncoded: "%36", minimallyEncoded: "6", plainText: "6"}, {fullyEncoded: "%37", minimallyEncoded: "7", plainText: "7"}, {fullyEncoded: "%38", minimallyEncoded: "8", plainText: "8"}, {fullyEncoded: "%39", 
minimallyEncoded: "9", plainText: "9"}, {fullyEncoded: "%3A", minimallyEncoded: ":", plainText: ":"}, {fullyEncoded: "%3B", minimallyEncoded: ";", plainText: ";"}, {fullyEncoded: "%3C", minimallyEncoded: "%3C", plainText: "<"}, {fullyEncoded: "%3D", minimallyEncoded: "=", plainText: "="}, {fullyEncoded: "%3E", minimallyEncoded: "%3E", plainText: ">"}, {fullyEncoded: "%3F", minimallyEncoded: "%3F", plainText: "?"}, {fullyEncoded: "%40", minimallyEncoded: "@", plainText: "@"}, {fullyEncoded: "%41", minimallyEncoded: "A", plainText: "A"}, {fullyEncoded: "%42", minimallyEncoded: "B", plainText: "B"}, {fullyEncoded: "%43", minimallyEncoded: "C", plainText: "C"}, {fullyEncoded: "%44", minimallyEncoded: "D", plainText: "D"}, {fullyEncoded: "%45", minimallyEncoded: "E", plainText: "E"}, {fullyEncoded: "%46", minimallyEncoded: "F", plainText: "F"}, {fullyEncoded: "%47", minimallyEncoded: "G", plainText: "G"}, {fullyEncoded: "%48", minimallyEncoded: "H", plainText: "H"}, {fullyEncoded: "%49", minimallyEncoded: "I", plainText: "I"}, {fullyEncoded: "%4A", minimallyEncoded: "J", plainText: "J"}, {fullyEncoded: "%4B", minimallyEncoded: "K", plainText: "K"}, {fullyEncoded: "%4C", minimallyEncoded: "L", plainText: "L"}, {fullyEncoded: "%4D", minimallyEncoded: "M", plainText: "M"}, {fullyEncoded: "%4E", minimallyEncoded: "N", plainText: "N"}, {fullyEncoded: "%4F", minimallyEncoded: "O", plainText: "O"}, {fullyEncoded: "%50", minimallyEncoded: "P", plainText: "P"}, {fullyEncoded: "%51", minimallyEncoded: "Q", plainText: "Q"}, {fullyEncoded: "%52", minimallyEncoded: "R", plainText: "R"}, {fullyEncoded: "%53", minimallyEncoded: "S", plainText: "S"}, {fullyEncoded: "%54", minimallyEncoded: "T", plainText: "T"}, {fullyEncoded: "%55", minimallyEncoded: "U", plainText: "U"}, {fullyEncoded: "%56", minimallyEncoded: "V", plainText: "V"}, {fullyEncoded: "%57", minimallyEncoded: "W", plainText: "W"}, {fullyEncoded: "%58", minimallyEncoded: "X", plainText: "X"}, {fullyEncoded: "%59", minimallyEncoded: "Y", plainText: "Y"}, {fullyEncoded: "%5A", minimallyEncoded: "Z", plainText: "Z"}, {fullyEncoded: "%5B", minimallyEncoded: "%5B", plainText: "["}, {fullyEncoded: "%5C", minimallyEncoded: "%5C", plainText: "\\"}, {fullyEncoded: "%5D", minimallyEncoded: "%5D", plainText: "]"}, {fullyEncoded: "%5E", minimallyEncoded: "%5E", plainText: "^"}, {fullyEncoded: "%5F", minimallyEncoded: "_", plainText: "_"}, {fullyEncoded: "%60", minimallyEncoded: "%60", plainText: "`"}, {fullyEncoded: "%61", minimallyEncoded: "a", plainText: "a"}, {fullyEncoded: "%62", minimallyEncoded: "b", plainText: "b"}, {fullyEncoded: "%63", minimallyEncoded: "c", plainText: "c"}, {fullyEncoded: "%64", minimallyEncoded: "d", plainText: "d"}, {fullyEncoded: "%65", minimallyEncoded: "e", plainText: "e"}, {fullyEncoded: "%66", minimallyEncoded: "f", plainText: "f"}, {fullyEncoded: "%67", minimallyEncoded: "g", plainText: "g"}, {fullyEncoded: "%68", minimallyEncoded: "h", plainText: "h"}, {fullyEncoded: "%69", minimallyEncoded: "i", plainText: "i"}, {fullyEncoded: "%6A", minimallyEncoded: "j", plainText: "j"}, {fullyEncoded: "%6B", minimallyEncoded: "k", plainText: "k"}, {fullyEncoded: "%6C", minimallyEncoded: "l", plainText: "l"}, {fullyEncoded: "%6D", minimallyEncoded: "m", plainText: "m"}, {fullyEncoded: "%6E", minimallyEncoded: "n", plainText: "n"}, {fullyEncoded: "%6F", minimallyEncoded: "o", plainText: "o"}, {fullyEncoded: "%70", minimallyEncoded: "p", plainText: "p"}, {fullyEncoded: "%71", minimallyEncoded: "q", plainText: "q"}, {fullyEncoded: "%72", 
minimallyEncoded: "r", plainText: "r"}, {fullyEncoded: "%73", minimallyEncoded: "s", plainText: "s"}, {fullyEncoded: "%74", minimallyEncoded: "t", plainText: "t"}, {fullyEncoded: "%75", minimallyEncoded: "u", plainText: "u"}, {fullyEncoded: "%76", minimallyEncoded: "v", plainText: "v"}, {fullyEncoded: "%77", minimallyEncoded: "w", plainText: "w"}, {fullyEncoded: "%78", minimallyEncoded: "x", plainText: "x"}, {fullyEncoded: "%79", minimallyEncoded: "y", plainText: "y"}, {fullyEncoded: "%7A", minimallyEncoded: "z", plainText: "z"}, {fullyEncoded: "%7B", minimallyEncoded: "%7B", plainText: "{"}, {fullyEncoded: "%7C", minimallyEncoded: "%7C", plainText: "|"}, {fullyEncoded: "%7D", minimallyEncoded: "%7D", plainText: "}"}, {fullyEncoded: "%7E", minimallyEncoded: "~", plainText: "~"}, {fullyEncoded: "%7F", minimallyEncoded: "%7F", plainText: "\u007f"}, {fullyEncoded: "%E8%87%AA%E7%94%B1", minimallyEncoded: "%E8%87%AA%E7%94%B1", plainText: "自由"}, {fullyEncoded: "%F0%90%90%80", minimallyEncoded: "%F0%90%90%80", plainText: "𐐀"}, } func TestUrlEncode(t *testing.T) { for _, test := range encodeTest { got := urlEncode(test.plainText) if got != test.minimallyEncoded && got != test.fullyEncoded { t.Errorf("urlEncode(%q) got %q wanted %q or %q", test.plainText, got, test.minimallyEncoded, test.fullyEncoded) } } } func TestTimeString(t *testing.T) { for _, test := range []struct { in time.Time want string }{ {fstest.Time("1970-01-01T00:00:00.000000000Z"), "0"}, {fstest.Time("2001-02-03T04:05:10.123123123Z"), "981173110123"}, {fstest.Time("2001-02-03T05:05:10.123123123+01:00"), "981173110123"}, } { got := timeString(test.in) if test.want != got { t.Errorf("%v: want %v got %v", test.in, test.want, got) } } } func TestParseTimeString(t *testing.T) { for _, test := range []struct { in string want time.Time wantError string }{ {"0", fstest.Time("1970-01-01T00:00:00.000000000Z"), ""}, {"981173110123", fstest.Time("2001-02-03T04:05:10.123000000Z"), ""}, {"", time.Time{}, ""}, {"potato", time.Time{}, `strconv.ParseInt: parsing "potato": invalid syntax`}, } { o := Object{} err := o.parseTimeString(test.in) got := o.modTime var gotError string if err != nil { gotError = err.Error() } if test.want != got { t.Errorf("%v: want %v got %v", test.in, test.want, got) } if test.wantError != gotError { t.Errorf("%v: want error %v got error %v", test.in, test.wantError, gotError) } } } rclone-1.53.3/backend/b2/b2_test.go000066400000000000000000000013711375552240400166610ustar00rootroot00000000000000// Test B2 filesystem interface package b2 import ( "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestB2:", NilObject: (*Object)(nil), ChunkedUpload: fstests.ChunkedUploadConfig{ MinChunkSize: minChunkSize, NeedMultipleChunks: true, }, }) } func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadChunkSize(cs) } func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadCutoff(cs) } var ( _ fstests.SetUploadChunkSizer = (*Fs)(nil) _ fstests.SetUploadCutoffer = (*Fs)(nil) ) rclone-1.53.3/backend/b2/upload.go000066400000000000000000000340541375552240400166070ustar00rootroot00000000000000// Upload large files for b2 // // Docs - https://www.backblaze.com/b2/docs/large_files.html package b2 import ( "bytes" "context" "crypto/sha1" "encoding/hex" "fmt" gohash "hash" "io" "strings" "sync"
"github.com/pkg/errors" "github.com/rclone/rclone/backend/b2/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/atexit" "github.com/rclone/rclone/lib/rest" "golang.org/x/sync/errgroup" ) type hashAppendingReader struct { h gohash.Hash in io.Reader hexSum string hexReader io.Reader } // Read returns bytes all bytes from the original reader, then the hex sum // of what was read so far, then EOF. func (har *hashAppendingReader) Read(b []byte) (int, error) { if har.hexReader == nil { n, err := har.in.Read(b) if err == io.EOF { har.in = nil // allow GC err = nil // allow reading hexSum before EOF har.hexSum = hex.EncodeToString(har.h.Sum(nil)) har.hexReader = strings.NewReader(har.hexSum) } return n, err } return har.hexReader.Read(b) } // AdditionalLength returns how many bytes the appended hex sum will take up. func (har *hashAppendingReader) AdditionalLength() int { return hex.EncodedLen(har.h.Size()) } // HexSum returns the hash sum as hex. It's only available after the original // reader has EOF'd. It's an empty string before that. func (har *hashAppendingReader) HexSum() string { return har.hexSum } // newHashAppendingReader takes a Reader and a Hash and will append the hex sum // after the original reader reaches EOF. The increased size depends on the // given hash, which may be queried through AdditionalLength() func newHashAppendingReader(in io.Reader, h gohash.Hash) *hashAppendingReader { withHash := io.TeeReader(in, h) return &hashAppendingReader{h: h, in: withHash} } // largeUpload is used to control the upload of large files which need chunking type largeUpload struct { f *Fs // parent Fs o *Object // object being uploaded doCopy bool // doing copy rather than upload what string // text name of operation for logs in io.Reader // read the data from here wrap accounting.WrapFn // account parts being transferred id string // ID of the file being uploaded size int64 // total size parts int64 // calculated number of parts, if known sha1s []string // slice of SHA1s for each part uploadMu sync.Mutex // lock for upload variable uploads []*api.GetUploadPartURLResponse // result of get upload URL calls chunkSize int64 // chunk size to use src *Object // if copying, object we are reading from } // newLargeUpload starts an upload of object o from in with metadata in src // // If newInfo is set then metadata from that will be used instead of reading it from src func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, chunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File) (up *largeUpload, err error) { remote := o.remote size := src.Size() parts := int64(0) sha1SliceSize := int64(maxParts) if size == -1 { fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize) } else { parts = size / int64(chunkSize) if size%int64(chunkSize) != 0 { parts++ } if parts > maxParts { return nil, errors.Errorf("%q too big (%d bytes) makes too many parts %d > %d - increase --b2-chunk-size", remote, size, parts, maxParts) } sha1SliceSize = parts } opts := rest.Opts{ Method: "POST", Path: "/b2_start_large_file", } bucket, bucketPath := o.split() bucketID, err := f.getBucketID(ctx, bucket) if err != nil { return nil, err } var request = api.StartLargeFileRequest{ BucketID: bucketID, Name: f.opt.Enc.FromStandardPath(bucketPath), } if newInfo == nil { modTime := src.ModTime(ctx) 
request.ContentType = fs.MimeType(ctx, src) request.Info = map[string]string{ timeKey: timeString(modTime), } // Set the SHA1 if known if !o.fs.opt.DisableCheckSum || doCopy { if calculatedSha1, err := src.Hash(ctx, hash.SHA1); err == nil && calculatedSha1 != "" { request.Info[sha1Key] = calculatedSha1 } } } else { request.ContentType = newInfo.ContentType request.Info = newInfo.Info } var response api.StartLargeFileResponse err = f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, &request, &response) return f.shouldRetry(ctx, resp, err) }) if err != nil { return nil, err } up = &largeUpload{ f: f, o: o, doCopy: doCopy, what: "upload", id: response.ID, size: size, parts: parts, sha1s: make([]string, sha1SliceSize), chunkSize: int64(chunkSize), } // unwrap the accounting from the input, we use wrap to put it // back on after the buffering if doCopy { up.what = "copy" up.src = src.(*Object) } else { up.in, up.wrap = accounting.UnWrap(in) } return up, nil } // getUploadURL returns the upload info with the UploadURL and the AuthorizationToken // // This should be returned with returnUploadURL when finished func (up *largeUpload) getUploadURL(ctx context.Context) (upload *api.GetUploadPartURLResponse, err error) { up.uploadMu.Lock() defer up.uploadMu.Unlock() if len(up.uploads) == 0 { opts := rest.Opts{ Method: "POST", Path: "/b2_get_upload_part_url", } var request = api.GetUploadPartURLRequest{ ID: up.id, } err := up.f.pacer.Call(func() (bool, error) { resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &upload) return up.f.shouldRetry(ctx, resp, err) }) if err != nil { return nil, errors.Wrap(err, "failed to get upload URL") } } else { upload, up.uploads = up.uploads[0], up.uploads[1:] } return upload, nil } // returnUploadURL returns the UploadURL to the cache func (up *largeUpload) returnUploadURL(upload *api.GetUploadPartURLResponse) { if upload == nil { return } up.uploadMu.Lock() up.uploads = append(up.uploads, upload) up.uploadMu.Unlock() } // Transfer a chunk func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byte) error { err := up.f.pacer.Call(func() (bool, error) { fs.Debugf(up.o, "Sending chunk %d length %d", part, len(body)) // Get upload URL upload, err := up.getUploadURL(ctx) if err != nil { return false, err } in := newHashAppendingReader(bytes.NewReader(body), sha1.New()) size := int64(len(body)) + int64(in.AdditionalLength()) // Authorization // // An upload authorization token, from b2_get_upload_part_url. // // X-Bz-Part-Number // // A number from 1 to 10000. The parts uploaded for one file // must have contiguous numbers, starting with 1. // // Content-Length // // The number of bytes in the file being uploaded. Note that // this header is required; you cannot leave it out and just // use chunked encoding. The minimum size of every part but // the last one is 100MB. // // X-Bz-Content-Sha1 // // The SHA1 checksum of the this part of the file. B2 will // check this when the part is uploaded, to make sure that the // data arrived correctly. The same SHA1 checksum must be // passed to b2_finish_large_file. 
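// For example (values illustrative), part 3 goes out with
// X-Bz-Part-Number: 3 and X-Bz-Content-Sha1: hex_digits_at_end; the real
// SHA1 is appended to the body by hashAppendingReader and recorded in
// up.sha1s[2] for b2_finish_large_file.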
opts := rest.Opts{ Method: "POST", RootURL: upload.UploadURL, Body: up.wrap(in), ExtraHeaders: map[string]string{ "Authorization": upload.AuthorizationToken, "X-Bz-Part-Number": fmt.Sprintf("%d", part), sha1Header: "hex_digits_at_end", }, ContentLength: &size, } var response api.UploadPartResponse resp, err := up.f.srv.CallJSON(ctx, &opts, nil, &response) retry, err := up.f.shouldRetry(ctx, resp, err) if err != nil { fs.Debugf(up.o, "Error sending chunk %d (retry=%v): %v: %#v", part, retry, err, err) } // On retryable error clear PartUploadURL if retry { fs.Debugf(up.o, "Clearing part upload URL because of error: %v", err) upload = nil } up.returnUploadURL(upload) up.sha1s[part-1] = in.HexSum() return retry, err }) if err != nil { fs.Debugf(up.o, "Error sending chunk %d: %v", part, err) } else { fs.Debugf(up.o, "Done sending chunk %d", part) } return err } // Copy a chunk func (up *largeUpload) copyChunk(ctx context.Context, part int64, partSize int64) error { err := up.f.pacer.Call(func() (bool, error) { fs.Debugf(up.o, "Copying chunk %d length %d", part, partSize) opts := rest.Opts{ Method: "POST", Path: "/b2_copy_part", } offset := (part - 1) * up.chunkSize // where we are in the source file var request = api.CopyPartRequest{ SourceID: up.src.id, LargeFileID: up.id, PartNumber: part, Range: fmt.Sprintf("bytes=%d-%d", offset, offset+partSize-1), } var response api.UploadPartResponse resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response) retry, err := up.f.shouldRetry(ctx, resp, err) if err != nil { fs.Debugf(up.o, "Error copying chunk %d (retry=%v): %v: %#v", part, retry, err, err) } up.sha1s[part-1] = response.SHA1 return retry, err }) if err != nil { fs.Debugf(up.o, "Error copying chunk %d: %v", part, err) } else { fs.Debugf(up.o, "Done copying chunk %d", part) } return err } // finish closes off the large upload func (up *largeUpload) finish(ctx context.Context) error { fs.Debugf(up.o, "Finishing large file %s with %d parts", up.what, up.parts) opts := rest.Opts{ Method: "POST", Path: "/b2_finish_large_file", } var request = api.FinishLargeFileRequest{ ID: up.id, SHA1s: up.sha1s, } var response api.FileInfo err := up.f.pacer.Call(func() (bool, error) { resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response) return up.f.shouldRetry(ctx, resp, err) }) if err != nil { return err } return up.o.decodeMetaDataFileInfo(&response) } // cancel aborts the large upload func (up *largeUpload) cancel(ctx context.Context) error { fs.Debugf(up.o, "Cancelling large file %s", up.what) opts := rest.Opts{ Method: "POST", Path: "/b2_cancel_large_file", } var request = api.CancelLargeFileRequest{ ID: up.id, } var response api.CancelLargeFileResponse err := up.f.pacer.Call(func() (bool, error) { resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response) return up.f.shouldRetry(ctx, resp, err) }) if err != nil { fs.Errorf(up.o, "Failed to cancel large file %s: %v", up.what, err) } return err } // Stream uploads the chunks from the input, starting with a required initial // chunk. Assumes the file size is unknown and will upload until the input // reaches EOF. 
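// // The first chunk is the initialUploadBlock the caller has already read (used to decide between the small and large upload paths); subsequent chunks are read from up.in, with a short read (io.ErrUnexpectedEOF) marking the final part and a bare io.EOF meaning the previous part was the last one.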
// // Note that initialUploadBlock must be returned to f.putBuf() func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock []byte) (err error) { defer atexit.OnError(&err, func() { _ = up.cancel(ctx) })() fs.Debugf(up.o, "Starting streaming of large file (id %q)", up.id) var ( g, gCtx = errgroup.WithContext(ctx) hasMoreParts = true ) up.size = int64(len(initialUploadBlock)) g.Go(func() error { for part := int64(1); hasMoreParts; part++ { // Get a block of memory from the pool and token which limits concurrency. var buf []byte if part == 1 { buf = initialUploadBlock } else { buf = up.f.getBuf(false) } // Fail fast, in case an errgroup managed function returns an error // gCtx is cancelled. There is no point in uploading all the other parts. if gCtx.Err() != nil { up.f.putBuf(buf, false) return nil } // Read the chunk var n int if part == 1 { n = len(buf) } else { n, err = io.ReadFull(up.in, buf) if err == io.ErrUnexpectedEOF { fs.Debugf(up.o, "Read less than a full chunk, making this the last one.") buf = buf[:n] hasMoreParts = false } else if err == io.EOF { fs.Debugf(up.o, "Could not read any more bytes, previous chunk was the last.") up.f.putBuf(buf, false) return nil } else if err != nil { // other kinds of errors indicate failure up.f.putBuf(buf, false) return err } } // Keep stats up to date up.parts = part up.size += int64(n) if part > maxParts { up.f.putBuf(buf, false) return errors.Errorf("%q too big (%d bytes so far) makes too many parts %d > %d - increase --b2-chunk-size", up.o, up.size, up.parts, maxParts) } part := part // for the closure g.Go(func() (err error) { defer up.f.putBuf(buf, false) return up.transferChunk(gCtx, part, buf) }) } return nil }) err = g.Wait() if err != nil { return err } up.sha1s = up.sha1s[:up.parts] return up.finish(ctx) } // Upload uploads the chunks from the input func (up *largeUpload) Upload(ctx context.Context) (err error) { defer atexit.OnError(&err, func() { _ = up.cancel(ctx) })() fs.Debugf(up.o, "Starting %s of large file in %d chunks (id %q)", up.what, up.parts, up.id) var ( g, gCtx = errgroup.WithContext(ctx) remaining = up.size ) g.Go(func() error { for part := int64(1); part <= up.parts; part++ { // Get a block of memory from the pool and token which limits concurrency. buf := up.f.getBuf(up.doCopy) // Fail fast, in case an errgroup managed function returns an error // gCtx is cancelled. There is no point in uploading all the other parts. 
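// (Reads from up.in happen sequentially in this dispatcher goroutine so the parts stay in order - only the per-part transfer or copy below runs concurrently via g.Go.)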
if gCtx.Err() != nil { up.f.putBuf(buf, up.doCopy) return nil } reqSize := remaining if reqSize >= up.chunkSize { reqSize = up.chunkSize } if !up.doCopy { // Read the chunk buf = buf[:reqSize] _, err = io.ReadFull(up.in, buf) if err != nil { up.f.putBuf(buf, up.doCopy) return err } } part := part // for the closure g.Go(func() (err error) { defer up.f.putBuf(buf, up.doCopy) if !up.doCopy { err = up.transferChunk(gCtx, part, buf) } else { err = up.copyChunk(gCtx, part, reqSize) } return err }) remaining -= reqSize } return nil }) err = g.Wait() if err != nil { return err } return up.finish(ctx) } rclone-1.53.3/backend/box/000077500000000000000000000000001375552240400152535ustar00rootroot00000000000000rclone-1.53.3/backend/box/api/000077500000000000000000000000001375552240400160245ustar00rootroot00000000000000rclone-1.53.3/backend/box/api/types.go000066400000000000000000000163331375552240400175250ustar00rootroot00000000000000// Package api has type definitions for box // // Converted from the API docs with help from https://mholt.github.io/json-to-go/ package api import ( "encoding/json" "fmt" "time" ) const ( // 2017-05-03T07:26:10-07:00 timeFormat = `"` + time.RFC3339 + `"` ) // Time represents represents date and time information for the // box API, by using RFC3339 type Time time.Time // MarshalJSON turns a Time into JSON (in UTC) func (t *Time) MarshalJSON() (out []byte, err error) { timeString := (*time.Time)(t).Format(timeFormat) return []byte(timeString), nil } // UnmarshalJSON turns JSON into a Time func (t *Time) UnmarshalJSON(data []byte) error { newT, err := time.Parse(timeFormat, string(data)) if err != nil { return err } *t = Time(newT) return nil } // Error is returned from box when things go wrong type Error struct { Type string `json:"type"` Status int `json:"status"` Code string `json:"code"` ContextInfo json.RawMessage HelpURL string `json:"help_url"` Message string `json:"message"` RequestID string `json:"request_id"` } // Error returns a string for the error and satisfies the error interface func (e *Error) Error() string { out := fmt.Sprintf("Error %q (%d)", e.Code, e.Status) if e.Message != "" { out += ": " + e.Message } if e.ContextInfo != nil { out += fmt.Sprintf(" (%+v)", e.ContextInfo) } return out } // Check Error satisfies the error interface var _ error = (*Error)(nil) // ItemFields are the fields needed for FileInfo var ItemFields = "type,id,sequence_id,etag,sha1,name,size,created_at,modified_at,content_created_at,content_modified_at,item_status,shared_link" // Types of things in Item const ( ItemTypeFolder = "folder" ItemTypeFile = "file" ItemStatusActive = "active" ItemStatusTrashed = "trashed" ItemStatusDeleted = "deleted" ) // Item describes a folder or a file as returned by Get Folder Items and others type Item struct { Type string `json:"type"` ID string `json:"id"` SequenceID string `json:"sequence_id"` Etag string `json:"etag"` SHA1 string `json:"sha1"` Name string `json:"name"` Size float64 `json:"size"` // box returns this in xEyy format for very large numbers - see #2261 CreatedAt Time `json:"created_at"` ModifiedAt Time `json:"modified_at"` ContentCreatedAt Time `json:"content_created_at"` ContentModifiedAt Time `json:"content_modified_at"` ItemStatus string `json:"item_status"` // active, trashed if the file has been moved to the trash, and deleted if the file has been permanently deleted SharedLink struct { URL string `json:"url,omitempty"` Access string `json:"access,omitempty"` } `json:"shared_link"` } // ModTime returns the modification time of 
the item func (i *Item) ModTime() (t time.Time) { t = time.Time(i.ContentModifiedAt) if t.IsZero() { t = time.Time(i.ModifiedAt) } return t } // FolderItems is returned from the GetFolderItems call type FolderItems struct { TotalCount int `json:"total_count"` Entries []Item `json:"entries"` Offset int `json:"offset"` Limit int `json:"limit"` Order []struct { By string `json:"by"` Direction string `json:"direction"` } `json:"order"` } // Parent defined the ID of the parent directory type Parent struct { ID string `json:"id"` } // CreateFolder is the request for Create Folder type CreateFolder struct { Name string `json:"name"` Parent Parent `json:"parent"` } // UploadFile is the request for Upload File type UploadFile struct { Name string `json:"name"` Parent Parent `json:"parent"` ContentCreatedAt Time `json:"content_created_at"` ContentModifiedAt Time `json:"content_modified_at"` } // UpdateFileModTime is used in Update File Info type UpdateFileModTime struct { ContentModifiedAt Time `json:"content_modified_at"` } // UpdateFileMove is the request for Upload File to change name and parent type UpdateFileMove struct { Name string `json:"name"` Parent Parent `json:"parent"` } // CopyFile is the request for Copy File type CopyFile struct { Name string `json:"name"` Parent Parent `json:"parent"` } // CreateSharedLink is the request for Public Link type CreateSharedLink struct { SharedLink struct { URL string `json:"url,omitempty"` Access string `json:"access,omitempty"` } `json:"shared_link"` } // UploadSessionRequest is uses in Create Upload Session type UploadSessionRequest struct { FolderID string `json:"folder_id,omitempty"` // don't pass for update FileSize int64 `json:"file_size"` FileName string `json:"file_name,omitempty"` // optional for update } // UploadSessionResponse is returned from Create Upload Session type UploadSessionResponse struct { TotalParts int `json:"total_parts"` PartSize int64 `json:"part_size"` SessionEndpoints struct { ListParts string `json:"list_parts"` Commit string `json:"commit"` UploadPart string `json:"upload_part"` Status string `json:"status"` Abort string `json:"abort"` } `json:"session_endpoints"` SessionExpiresAt Time `json:"session_expires_at"` ID string `json:"id"` Type string `json:"type"` NumPartsProcessed int `json:"num_parts_processed"` } // Part defines the return from upload part call which are passed to commit upload also type Part struct { PartID string `json:"part_id"` Offset int64 `json:"offset"` Size int64 `json:"size"` Sha1 string `json:"sha1"` } // UploadPartResponse is returned from the upload part call type UploadPartResponse struct { Part Part `json:"part"` } // CommitUpload is used in the Commit Upload call type CommitUpload struct { Parts []Part `json:"parts"` Attributes struct { ContentCreatedAt Time `json:"content_created_at"` ContentModifiedAt Time `json:"content_modified_at"` } `json:"attributes"` } // ConfigJSON defines the shape of a box config.json type ConfigJSON struct { BoxAppSettings AppSettings `json:"boxAppSettings"` EnterpriseID string `json:"enterpriseID"` } // AppSettings defines the shape of the boxAppSettings within box config.json type AppSettings struct { ClientID string `json:"clientID"` ClientSecret string `json:"clientSecret"` AppAuth AppAuth `json:"appAuth"` } // AppAuth defines the shape of the appAuth within boxAppSettings in config.json type AppAuth struct { PublicKeyID string `json:"publicKeyID"` PrivateKey string `json:"privateKey"` Passphrase string `json:"passphrase"` } // User is returned from 
/users/me type User struct { Type string `json:"type"` ID string `json:"id"` Name string `json:"name"` Login string `json:"login"` CreatedAt time.Time `json:"created_at"` ModifiedAt time.Time `json:"modified_at"` Language string `json:"language"` Timezone string `json:"timezone"` SpaceAmount int64 `json:"space_amount"` SpaceUsed int64 `json:"space_used"` MaxUploadSize int64 `json:"max_upload_size"` Status string `json:"status"` JobTitle string `json:"job_title"` Phone string `json:"phone"` Address string `json:"address"` AvatarURL string `json:"avatar_url"` } rclone-1.53.3/backend/box/box.go000066400000000000000000001115731375552240400164020ustar00rootroot00000000000000// Package box provides an interface to the Box // object storage system. package box // FIXME Box only supports file names of 255 characters or less. Names // that will not be supported are those that contain non-printable // ascii, / or \, names with trailing spaces, and the special names // “.” and “..”. // FIXME box can copy a directory import ( "context" "crypto/rsa" "encoding/json" "encoding/pem" "fmt" "io" "io/ioutil" "log" "net/http" "net/url" "path" "strconv" "strings" "time" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/env" "github.com/rclone/rclone/lib/jwtutil" "github.com/youmark/pkcs8" "github.com/pkg/errors" "github.com/rclone/rclone/backend/box/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/rest" "golang.org/x/oauth2" "golang.org/x/oauth2/jws" ) const ( rcloneClientID = "d0374ba6pgmaguie02ge15sv1mllndho" rcloneEncryptedClientSecret = "sYbJYm99WB8jzeaLPU0OPDMJKIkZvD2qOn3SyEMfiJr03RdtDt3xcZEIudRhbIDL" minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential rootURL = "https://api.box.com/2.0" uploadURL = "https://upload.box.com/api/2.0" listChunks = 1000 // chunk size to read directory listings minUploadCutoff = 50000000 // upload cutoff can be no lower than this defaultUploadCutoff = 50 * 1024 * 1024 tokenURL = "https://api.box.com/oauth2/token" ) // Globals var ( // Description of how to auth for this app oauthConfig = &oauth2.Config{ Scopes: nil, Endpoint: oauth2.Endpoint{ AuthURL: "https://app.box.com/api/oauth2/authorize", TokenURL: "https://app.box.com/api/oauth2/token", }, ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), RedirectURL: oauthutil.RedirectURL, } ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "box", Description: "Box", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { jsonFile, ok := m.Get("box_config_file") boxSubType, boxSubTypeOk := m.Get("box_sub_type") boxAccessToken, boxAccessTokenOk := m.Get("access_token") var err error // If using box config.json, use JWT auth if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" { err = refreshJWTToken(jsonFile, boxSubType, name, m) if err != nil { log.Fatalf("Failed to configure token with jwt authentication: %v", err) } // Else, if not using an access token, use oauth2 } else if boxAccessToken == "" || !boxAccessTokenOk { err = oauthutil.Config("box", name, m, 
oauthConfig, nil) if err != nil { log.Fatalf("Failed to configure token with oauth authentication: %v", err) } } }, Options: append(oauthutil.SharedOptions, []fs.Option{{ Name: "root_folder_id", Help: "Fill in for rclone to use a non root folder as its starting point.", Default: "0", Advanced: true, }, { Name: "box_config_file", Help: "Box App config.json location\nLeave blank normally." + env.ShellExpandHelp, }, { Name: "access_token", Help: "Box App Primary Access Token\nLeave blank normally.", }, { Name: "box_sub_type", Default: "user", Examples: []fs.OptionExample{{ Value: "user", Help: "Rclone should act on behalf of a user", }, { Value: "enterprise", Help: "Rclone should act on behalf of a service account", }}, }, { Name: "upload_cutoff", Help: "Cutoff for switching to multipart upload (>= 50MB).", Default: fs.SizeSuffix(defaultUploadCutoff), Advanced: true, }, { Name: "commit_retries", Help: "Max number of times to try committing a multipart file.", Default: 100, Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // From https://developer.box.com/docs/error-codes#section-400-bad-request : // > Box only supports file or folder names that are 255 characters or less. // > File names containing non-printable ascii, "/" or "\", names with leading // > or trailing spaces, and the special names “.” and “..” are also unsupported. // // Testing revealed names with leading spaces work fine. // Also encode invalid UTF-8 bytes as json doesn't handle them properly. Default: (encoder.Display | encoder.EncodeBackSlash | encoder.EncodeRightSpace | encoder.EncodeInvalidUtf8), }}...), }) } func refreshJWTToken(jsonFile string, boxSubType string, name string, m configmap.Mapper) error { jsonFile = env.ShellExpand(jsonFile) boxConfig, err := getBoxConfig(jsonFile) if err != nil { log.Fatalf("Failed to configure token: %v", err) } privateKey, err := getDecryptedPrivateKey(boxConfig) if err != nil { log.Fatalf("Failed to configure token: %v", err) } claims, err := getClaims(boxConfig, boxSubType) if err != nil { log.Fatalf("Failed to configure token: %v", err) } signingHeaders := getSigningHeaders(boxConfig) queryParams := getQueryParams(boxConfig) client := fshttp.NewClient(fs.Config) err = jwtutil.Config("box", name, claims, signingHeaders, queryParams, privateKey, m, client) return err } func getBoxConfig(configFile string) (boxConfig *api.ConfigJSON, err error) { file, err := ioutil.ReadFile(configFile) if err != nil { return nil, errors.Wrap(err, "box: failed to read Box config") } err = json.Unmarshal(file, &boxConfig) if err != nil { return nil, errors.Wrap(err, "box: failed to parse Box config") } return boxConfig, nil } func getClaims(boxConfig *api.ConfigJSON, boxSubType string) (claims *jws.ClaimSet, err error) { val, err := jwtutil.RandomHex(20) if err != nil { return nil, errors.Wrap(err, "box: failed to generate random string for jti") } claims = &jws.ClaimSet{ Iss: boxConfig.BoxAppSettings.ClientID, Sub: boxConfig.EnterpriseID, Aud: tokenURL, Exp: time.Now().Add(time.Second * 45).Unix(), PrivateClaims: map[string]interface{}{ "box_sub_type": boxSubType, "aud": tokenURL, "jti": val, }, } return claims, nil } func getSigningHeaders(boxConfig *api.ConfigJSON) *jws.Header { signingHeaders := &jws.Header{ Algorithm: "RS256", Typ: "JWT", KeyID: boxConfig.BoxAppSettings.AppAuth.PublicKeyID, } return signingHeaders } func getQueryParams(boxConfig *api.ConfigJSON) map[string]string { queryParams := map[string]string{ "client_id": 
boxConfig.BoxAppSettings.ClientID, "client_secret": boxConfig.BoxAppSettings.ClientSecret, } return queryParams } func getDecryptedPrivateKey(boxConfig *api.ConfigJSON) (key *rsa.PrivateKey, err error) { block, rest := pem.Decode([]byte(boxConfig.BoxAppSettings.AppAuth.PrivateKey)) if len(rest) > 0 { return nil, errors.New("box: extra data included in private key") } rsaKey, err := pkcs8.ParsePKCS8PrivateKey(block.Bytes, []byte(boxConfig.BoxAppSettings.AppAuth.Passphrase)) if err != nil { return nil, errors.Wrap(err, "box: failed to decrypt private key") } return rsaKey.(*rsa.PrivateKey), nil } // Options defines the configuration for this backend type Options struct { UploadCutoff fs.SizeSuffix `config:"upload_cutoff"` CommitRetries int `config:"commit_retries"` Enc encoder.MultiEncoder `config:"encoding"` RootFolderID string `config:"root_folder_id"` AccessToken string `config:"access_token"` } // Fs represents a remote box type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed options features *fs.Features // optional features srv *rest.Client // the connection to the box server dirCache *dircache.DirCache // Map of directory path to directory id pacer *fs.Pacer // pacer for API calls tokenRenewer *oauthutil.Renew // renew the token on expiry uploadToken *pacer.TokenDispenser // control concurrency } // Object describes a box object // // Will definitely have info but maybe not meta type Object struct { fs *Fs // what this object is part of remote string // The remote path hasMetaData bool // whether info below has been set size int64 // size of the object modTime time.Time // modification time of the object id string // ID of the object publicLink string // Public Link for the object sha1 string // SHA-1 of the object content } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("box root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a box 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests. 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried.
It returns the err as a convenience func shouldRetry(resp *http.Response, err error) (bool, error) { authRetry := false if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 { authRetry = true fs.Debugf(nil, "Should retry: %v", err) } return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // readMetaDataForPath reads the metadata from the path func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, err error) { // defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err) leaf, directoryID, err := f.dirCache.FindPath(ctx, path, false) if err != nil { if err == fs.ErrorDirNotFound { return nil, fs.ErrorObjectNotFound } return nil, err } found, err := f.listAll(ctx, directoryID, false, true, func(item *api.Item) bool { if item.Name == leaf { info = item return true } return false }) if err != nil { return nil, err } if !found { return nil, fs.ErrorObjectNotFound } return info, nil } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { // Decode error response errResponse := new(api.Error) err := rest.DecodeJSON(resp, &errResponse) if err != nil { fs.Debugf(nil, "Couldn't decode error response: %v", err) } if errResponse.Code == "" { errResponse.Code = resp.Status } if errResponse.Status == 0 { errResponse.Status = resp.StatusCode } return errResponse } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } if opt.UploadCutoff < minUploadCutoff { return nil, errors.Errorf("box: upload cutoff (%v) must be greater than equal to %v", opt.UploadCutoff, fs.SizeSuffix(minUploadCutoff)) } root = parsePath(root) client := fshttp.NewClient(fs.Config) var ts *oauthutil.TokenSource // If not using an accessToken, create an oauth client and tokensource if opt.AccessToken == "" { client, ts, err = oauthutil.NewClient(name, m, oauthConfig) if err != nil { return nil, errors.Wrap(err, "failed to configure Box") } } f := &Fs{ name: name, root: root, opt: *opt, srv: rest.NewClient(client).SetRoot(rootURL), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers), } f.features = (&fs.Features{ CaseInsensitive: true, CanHaveEmptyDirectories: true, }).Fill(f) f.srv.SetErrorHandler(errorHandler) // If using an accessToken, set the Authorization header if f.opt.AccessToken != "" { f.srv.SetHeader("Authorization", "Bearer "+f.opt.AccessToken) } jsonFile, ok := m.Get("box_config_file") boxSubType, boxSubTypeOk := m.Get("box_sub_type") if ts != nil { // If using box config.json and JWT, renewing should just refresh the token and // should do so whether there are uploads pending or not. 
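// The JWT renewer is started straight away, whereas the plain OAuth renewer created in the else branch below is only started around transfers (see Object.Update, which calls tokenRenewer.Start/Stop).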
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" { f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { err := refreshJWTToken(jsonFile, boxSubType, name, m) return err }) f.tokenRenewer.Start() } else { // Renew the token in the background f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { _, err := f.readMetaDataForPath(ctx, "") return err }) } } // Get rootFolderID rootID := f.opt.RootFolderID f.dirCache = dircache.New(root, rootID, f) // Find the current root err = f.dirCache.FindRoot(ctx, false) if err != nil { // Assume it is a file newRoot, remote := dircache.SplitPath(root) tempF := *f tempF.dirCache = dircache.New(newRoot, rootID, &tempF) tempF.root = newRoot // Make new Fs which is the parent err = tempF.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f return f, nil } _, err := tempF.newObjectWithInfo(ctx, remote, nil) if err != nil { if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f return f, nil } return nil, err } f.features.Fill(&tempF) // XXX: update the old f here instead of returning tempF, since // `features` were already filled with functions having *f as a receiver. // See https://github.com/rclone/rclone/issues/2182 f.dirCache = tempF.dirCache f.root = tempF.root // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // rootSlash returns root with a slash on if it is empty, otherwise empty string func (f *Fs) rootSlash() string { if f.root == "" { return f.root } return f.root + "/" } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Item) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { // Set info err = o.setMetaData(info) } else { err = o.readMetaData(ctx) // reads info and meta, returning an error } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. 
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { // Find the leaf in pathID found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool { if item.Name == leaf { pathIDOut = item.ID return true } return false }) return pathIDOut, found, err } // fieldsValue creates a url.Values with fields set to those in api.Item func fieldsValue() url.Values { values := url.Values{} values.Set("fields", api.ItemFields) return values } // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) { // fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf) var resp *http.Response var info *api.Item opts := rest.Opts{ Method: "POST", Path: "/folders", Parameters: fieldsValue(), } mkdir := api.CreateFolder{ Name: f.opt.Enc.FromStandardName(leaf), Parent: api.Parent{ ID: pathID, }, } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info) return shouldRetry(resp, err) }) if err != nil { //fmt.Printf("...Error %v\n", err) return "", err } // fmt.Printf("...Id %q\n", *info.Id) return info.ID, nil } // list the objects into the function supplied // // If directories is set it only sends directories // User function to process a File item from listAll // // Should return true to finish processing type listAllFn func(*api.Item) bool // Lists the directory required calling the user function on each item found // // If the user fn ever returns true then it early exits with found = true func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) { opts := rest.Opts{ Method: "GET", Path: "/folders/" + dirID + "/items", Parameters: fieldsValue(), } opts.Parameters.Set("limit", strconv.Itoa(listChunks)) offset := 0 OUTER: for { opts.Parameters.Set("offset", strconv.Itoa(offset)) var result api.FolderItems var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return found, errors.Wrap(err, "couldn't list files") } for i := range result.Entries { item := &result.Entries[i] if item.Type == api.ItemTypeFolder { if filesOnly { continue } } else if item.Type == api.ItemTypeFile { if directoriesOnly { continue } } else { fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type) continue } if item.ItemStatus != api.ItemStatusActive { continue } item.Name = f.opt.Enc.ToStandardName(item.Name) if fn(item) { found = true break OUTER } } offset += result.Limit if offset >= result.TotalCount { break } } return } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. 
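// // Listing is delegated to listAll, which pages through /folders/{id}/items in batches of listChunks entries using the limit/offset parameters.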
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } var iErr error _, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool { remote := path.Join(dir, info.Name) if info.Type == api.ItemTypeFolder { // cache the directory ID for later lookups f.dirCache.Put(remote, info.ID) d := fs.NewDir(remote, info.ModTime()).SetID(info.ID) // FIXME more info from dir? entries = append(entries, d) } else if info.Type == api.ItemTypeFile { o, err := f.newObjectWithInfo(ctx, remote, info) if err != nil { iErr = err return true } entries = append(entries, o) } return false }) if err != nil { return nil, err } if iErr != nil { return nil, iErr } return entries, nil } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Returns the object, leaf, directoryID and error // // Used to create new objects func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) { // Create the directory for the object if it doesn't exist leaf, directoryID, err = f.dirCache.FindPath(ctx, remote, true) if err != nil { return } // Temporary Object under construction o = &Object{ fs: f, remote: remote, } return o, leaf, directoryID, nil } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil) switch err { case nil: return existingObj, existingObj.Update(ctx, in, src, options...) case fs.ErrorObjectNotFound: // Not found so create it return f.PutUnchecked(ctx, in, src) default: return nil, err } } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) } // PutUnchecked the object into the container // // This will produce an error if the object already exists // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { remote := src.Remote() size := src.Size() modTime := src.ModTime(ctx) o, _, _, err := f.createObject(ctx, remote, modTime, size) if err != nil { return nil, err } return o, o.Update(ctx, in, src, options...) 
} // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { _, err := f.dirCache.FindDir(ctx, dir, true) return err } // deleteObject removes an object by ID func (f *Fs) deleteObject(ctx context.Context, id string) error { opts := rest.Opts{ Method: "DELETE", Path: "/files/" + id, NoResponse: true, } return f.pacer.Call(func() (bool, error) { resp, err := f.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { root := path.Join(f.root, dir) if root == "" { return errors.New("can't purge root directory") } dc := f.dirCache rootID, err := dc.FindDir(ctx, dir, false) if err != nil { return err } opts := rest.Opts{ Method: "DELETE", Path: "/folders/" + rootID, Parameters: url.Values{}, NoResponse: true, } opts.Parameters.Set("recursive", strconv.FormatBool(!check)) var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "rmdir failed") } f.dirCache.FlushDir(dir) if err != nil { return err } return nil } // Rmdir deletes the root folder // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision return the precision of this Fs func (f *Fs) Precision() time.Duration { return time.Second } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } err := srcObj.readMetaData(ctx) if err != nil { return nil, err } srcPath := srcObj.fs.rootSlash() + srcObj.remote dstPath := f.rootSlash() + remote if strings.ToLower(srcPath) == strings.ToLower(dstPath) { return nil, errors.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath) } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // Copy the object opts := rest.Opts{ Method: "POST", Path: "/files/" + srcObj.id + "/copy", Parameters: fieldsValue(), } copyFile := api.CopyFile{ Name: f.opt.Enc.FromStandardName(leaf), Parent: api.Parent{ ID: directoryID, }, } var resp *http.Response var info *api.Item err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, ©File, &info) return shouldRetry(resp, err) }) if err != nil { return nil, err } err = dstObj.setMetaData(info) if err != nil { return nil, err } return dstObj, nil } // Purge deletes all the files and the container // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // move a file or folder func (f *Fs) move(ctx context.Context, endpoint, id, leaf, directoryID string) (info *api.Item, err error) { // Move the object opts := rest.Opts{ Method: "PUT", Path: endpoint + id, Parameters: 
fieldsValue(), } move := api.UpdateFileMove{ Name: f.opt.Enc.FromStandardName(leaf), Parent: api.Parent{ ID: directoryID, }, } var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &move, &info) return shouldRetry(resp, err) }) if err != nil { return nil, err } return info, nil } // About gets quota information func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) { opts := rest.Opts{ Method: "GET", Path: "/users/me", } var user api.User var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &user) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "failed to read user info") } // FIXME max upload size would be useful to use in Update usage = &fs.Usage{ Used: fs.NewUsageValue(user.SpaceUsed), // bytes in use Total: fs.NewUsageValue(user.SpaceAmount), // bytes total Free: fs.NewUsageValue(user.SpaceAmount - user.SpaceUsed), // bytes free } return usage, nil } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // Do the move info, err := f.move(ctx, "/files/", srcObj.id, leaf, directoryID) if err != nil { return nil, err } err = dstObj.setMetaData(info) if err != nil { return nil, err } return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcID, _, _, dstDirectoryID, dstLeaf, err := f.dirCache.DirMove(ctx, srcFs.dirCache, srcFs.root, srcRemote, f.root, dstRemote) if err != nil { return err } // Do the move _, err = f.move(ctx, "/folders/", srcID, dstLeaf, dstDirectoryID) if err != nil { return err } srcFs.dirCache.FlushDir(srcRemote) return nil } // PublicLink adds a "readable by anyone with link" permission on the given file or folder. 
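// // If remote resolves to a directory then the folder itself is shared, otherwise the file is shared - reusing an existing shared link if the object already has one.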
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) { id, err := f.dirCache.FindDir(ctx, remote, false) var opts rest.Opts if err == nil { fs.Debugf(f, "attempting to share directory '%s'", remote) opts = rest.Opts{ Method: "PUT", Path: "/folders/" + id, Parameters: fieldsValue(), } } else { fs.Debugf(f, "attempting to share single file '%s'", remote) o, err := f.NewObject(ctx, remote) if err != nil { return "", err } if o.(*Object).publicLink != "" { return o.(*Object).publicLink, nil } opts = rest.Opts{ Method: "PUT", Path: "/files/" + o.(*Object).id, Parameters: fieldsValue(), } } shareLink := api.CreateSharedLink{} var info api.Item var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &shareLink, &info) return shouldRetry(resp, err) }) return info.SharedLink.URL, err } // deletePermanently permenently deletes a trashed file func (f *Fs) deletePermanently(ctx context.Context, itemType, id string) error { opts := rest.Opts{ Method: "DELETE", NoResponse: true, } if itemType == api.ItemTypeFile { opts.Path = "/files/" + id + "/trash" } else { opts.Path = "/folders/" + id + "/trash" } return f.pacer.Call(func() (bool, error) { resp, err := f.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) } // CleanUp empties the trash func (f *Fs) CleanUp(ctx context.Context) (err error) { opts := rest.Opts{ Method: "GET", Path: "/folders/trash/items", Parameters: url.Values{ "fields": []string{"type", "id"}, }, } opts.Parameters.Set("limit", strconv.Itoa(listChunks)) offset := 0 for { opts.Parameters.Set("offset", strconv.Itoa(offset)) var result api.FolderItems var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "couldn't list trash") } for i := range result.Entries { item := &result.Entries[i] if item.Type == api.ItemTypeFolder || item.Type == api.ItemTypeFile { err := f.deletePermanently(ctx, item.Type, item.ID) if err != nil { return errors.Wrap(err, "failed to delete file") } } else { fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type) continue } } offset += result.Limit if offset >= result.TotalCount { break } } return } // DirCacheFlush resets the directory cache - used in testing as an // optional interface func (f *Fs) DirCacheFlush() { f.dirCache.ResetRoot() } // Hashes returns the supported hash sets. 
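// Box only exposes SHA-1 checksums, so the set contains just hash.SHA1.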
func (f *Fs) Hashes() hash.Set { return hash.Set(hash.SHA1) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the SHA-1 of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.SHA1 { return "", hash.ErrUnsupported } return o.sha1, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { err := o.readMetaData(context.TODO()) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return 0 } return o.size } // setMetaData sets the metadata from info func (o *Object) setMetaData(info *api.Item) (err error) { if info.Type != api.ItemTypeFile { return errors.Wrapf(fs.ErrorNotAFile, "%q is %q", o.remote, info.Type) } o.hasMetaData = true o.size = int64(info.Size) o.sha1 = info.SHA1 o.modTime = info.ModTime() o.id = info.ID o.publicLink = info.SharedLink.URL return nil } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData(ctx context.Context) (err error) { if o.hasMetaData { return nil } info, err := o.fs.readMetaDataForPath(ctx, o.remote) if err != nil { if apiErr, ok := err.(*api.Error); ok { if apiErr.Code == "not_found" || apiErr.Code == "trashed" { return fs.ErrorObjectNotFound } } return err } return o.setMetaData(info) } // ModTime returns the modification time of the object // // // It attempts to read the objects mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // setModTime sets the modification time of the local fs object func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item, error) { opts := rest.Opts{ Method: "PUT", Path: "/files/" + o.id, Parameters: fieldsValue(), } update := api.UpdateFileModTime{ ContentModifiedAt: api.Time(modTime), } var info *api.Item err := o.fs.pacer.Call(func() (bool, error) { resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info) return shouldRetry(resp, err) }) return info, err } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { info, err := o.setModTime(ctx, modTime) if err != nil { return err } return o.setMetaData(info) } // Storable returns a boolean showing whether this object storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { if o.id == "" { return nil, errors.New("can't download - no id") } fs.FixRangeOption(options, o.size) var resp *http.Response opts := rest.Opts{ Method: "GET", Path: "/files/" + o.id + "/content", Options: options, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return nil, err } return resp.Body, err } // upload does a single non-multipart upload // // This is recommended for less than 50 MB of content func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID string, modTime 
time.Time, options ...fs.OpenOption) (err error) { upload := api.UploadFile{ Name: o.fs.opt.Enc.FromStandardName(leaf), ContentModifiedAt: api.Time(modTime), ContentCreatedAt: api.Time(modTime), Parent: api.Parent{ ID: directoryID, }, } var resp *http.Response var result api.FolderItems opts := rest.Opts{ Method: "POST", Body: in, MultipartMetadataName: "attributes", MultipartContentName: "contents", MultipartFileName: upload.Name, RootURL: uploadURL, Options: options, } // If object has an ID then it is existing so create a new version if o.id != "" { opts.Path = "/files/" + o.id + "/content" } else { opts.Path = "/files/content" } err = o.fs.pacer.CallNoRetry(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, &upload, &result) return shouldRetry(resp, err) }) if err != nil { return err } if result.TotalCount != 1 || len(result.Entries) != 1 { return errors.Errorf("failed to upload %v - not sure why", o) } return o.setMetaData(&result.Entries[0]) } // Update the object with the contents of the io.Reader, modTime and size // // If existing is set then it updates the object rather than creating a new one // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { if o.fs.tokenRenewer != nil { o.fs.tokenRenewer.Start() defer o.fs.tokenRenewer.Stop() } size := src.Size() modTime := src.ModTime(ctx) remote := o.Remote() // Create the directory for the object if it doesn't exist leaf, directoryID, err := o.fs.dirCache.FindPath(ctx, remote, true) if err != nil { return err } // Upload with simple or multipart if size <= int64(o.fs.opt.UploadCutoff) { err = o.upload(ctx, in, leaf, directoryID, modTime, options...) } else { err = o.uploadMultipart(ctx, in, leaf, directoryID, size, modTime, options...) 
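// (With the default upload_cutoff of 50 MiB anything at or under the cutoff takes the single multipart-form POST in upload above, while larger files go through an upload session and are sent in the server-chosen PartSize chunks - see uploadMultipart in upload.go.)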
} return err } // Remove an object func (o *Object) Remove(ctx context.Context) error { return o.fs.deleteObject(ctx, o.id) } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.id } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.PutStreamer = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.IDer = (*Object)(nil) ) rclone-1.53.3/backend/box/box_test.go000066400000000000000000000005401375552240400174300ustar00rootroot00000000000000// Test Box filesystem interface package box_test import ( "testing" "github.com/rclone/rclone/backend/box" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestBox:", NilObject: (*box.Object)(nil), }) } rclone-1.53.3/backend/box/upload.go000066400000000000000000000175631375552240400171020ustar00rootroot00000000000000// multipart upload for box package box import ( "bytes" "context" "crypto/sha1" "encoding/base64" "encoding/json" "fmt" "io" "net/http" "strconv" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/box/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/lib/atexit" "github.com/rclone/rclone/lib/rest" ) // createUploadSession creates an upload session for the object func (o *Object) createUploadSession(ctx context.Context, leaf, directoryID string, size int64) (response *api.UploadSessionResponse, err error) { opts := rest.Opts{ Method: "POST", Path: "/files/upload_sessions", RootURL: uploadURL, } request := api.UploadSessionRequest{ FileSize: size, } // If object has an ID then it is existing so create a new version if o.id != "" { opts.Path = "/files/" + o.id + "/upload_sessions" } else { opts.Path = "/files/upload_sessions" request.FolderID = directoryID request.FileName = o.fs.opt.Enc.FromStandardName(leaf) } var resp *http.Response err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, &response) return shouldRetry(resp, err) }) return } // sha1Digest produces a digest using sha1 as per RFC3230 func sha1Digest(digest []byte) string { return "sha=" + base64.StdEncoding.EncodeToString(digest) } // uploadPart uploads a part in an upload session func (o *Object) uploadPart(ctx context.Context, SessionID string, offset, totalSize int64, chunk []byte, wrap accounting.WrapFn, options ...fs.OpenOption) (response *api.UploadPartResponse, err error) { chunkSize := int64(len(chunk)) sha1sum := sha1.Sum(chunk) opts := rest.Opts{ Method: "PUT", Path: "/files/upload_sessions/" + SessionID, RootURL: uploadURL, ContentType: "application/octet-stream", ContentLength: &chunkSize, ContentRange: fmt.Sprintf("bytes %d-%d/%d", offset, offset+chunkSize-1, totalSize), Options: options, ExtraHeaders: map[string]string{ "Digest": sha1Digest(sha1sum[:]), }, } var resp *http.Response err = o.fs.pacer.Call(func() (bool, error) { opts.Body = wrap(bytes.NewReader(chunk)) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &response) return shouldRetry(resp, err) }) if err != nil { return nil, err } return response, nil } // commitUpload finishes an upload session func (o *Object) commitUpload(ctx context.Context, SessionID string, parts []api.Part, modTime
time.Time, sha1sum []byte) (result *api.FolderItems, err error) { opts := rest.Opts{ Method: "POST", Path: "/files/upload_sessions/" + SessionID + "/commit", RootURL: uploadURL, ExtraHeaders: map[string]string{ "Digest": sha1Digest(sha1sum), }, } request := api.CommitUpload{ Parts: parts, } request.Attributes.ContentModifiedAt = api.Time(modTime) request.Attributes.ContentCreatedAt = api.Time(modTime) var body []byte var resp *http.Response // For discussion of this value see: // https://github.com/rclone/rclone/issues/2054 maxTries := o.fs.opt.CommitRetries const defaultDelay = 10 var tries int outer: for tries = 0; tries < maxTries; tries++ { err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil) if err != nil { return shouldRetry(resp, err) } body, err = rest.ReadBody(resp) return shouldRetry(resp, err) }) delay := defaultDelay var why string if err != nil { // Sometimes we get 400 Error with // parts_mismatch immediately after uploading // the last part. Ignore this error and wait. if boxErr, ok := err.(*api.Error); ok && boxErr.Code == "parts_mismatch" { why = err.Error() } else { return nil, err } } else { switch resp.StatusCode { case http.StatusOK, http.StatusCreated: break outer case http.StatusAccepted: why = "not ready yet" delayString := resp.Header.Get("Retry-After") if delayString != "" { delay, err = strconv.Atoi(delayString) if err != nil { fs.Debugf(o, "Couldn't decode Retry-After header %q: %v", delayString, err) delay = defaultDelay } } default: return nil, errors.Errorf("unknown HTTP status return %q (%d)", resp.Status, resp.StatusCode) } } fs.Debugf(o, "commit multipart upload failed %d/%d - trying again in %d seconds (%s)", tries+1, maxTries, delay, why) time.Sleep(time.Duration(delay) * time.Second) } if tries >= maxTries { return nil, errors.New("too many tries to commit multipart upload - increase --low-level-retries") } err = json.Unmarshal(body, &result) if err != nil { return nil, errors.Wrapf(err, "couldn't decode commit response: %q", body) } return result, nil } // abortUpload cancels an upload session func (o *Object) abortUpload(ctx context.Context, SessionID string) (err error) { opts := rest.Opts{ Method: "DELETE", Path: "/files/upload_sessions/" + SessionID, RootURL: uploadURL, NoResponse: true, } var resp *http.Response err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) return err } // uploadMultipart uploads a file using multipart upload func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, leaf, directoryID string, size int64, modTime time.Time, options ...fs.OpenOption) (err error) { // Create upload session session, err := o.createUploadSession(ctx, leaf, directoryID, size) if err != nil { return errors.Wrap(err, "multipart upload create session failed") } chunkSize := session.PartSize fs.Debugf(o, "Multipart upload session started for %d parts of size %v", session.TotalParts, fs.SizeSuffix(chunkSize)) // Cancel the session if something went wrong defer atexit.OnError(&err, func() { fs.Debugf(o, "Cancelling multipart upload: %v", err) cancelErr := o.abortUpload(ctx, session.ID) if cancelErr != nil { fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr) } })() // unwrap the accounting from the input, we use wrap to put it // back on after the buffering in, wrap := accounting.UnWrap(in) // Upload the chunks remaining := size position := int64(0) parts := make([]api.Part, session.TotalParts) hash := sha1.New() errs := 
make(chan error, 1) var wg sync.WaitGroup outer: for part := 0; part < session.TotalParts; part++ { // Check any errors select { case err = <-errs: break outer default: } reqSize := remaining if reqSize >= chunkSize { reqSize = chunkSize } // Make a block of memory buf := make([]byte, reqSize) // Read the chunk _, err = io.ReadFull(in, buf) if err != nil { err = errors.Wrap(err, "multipart upload failed to read source") break outer } // Make the global hash (must be done sequentially) _, _ = hash.Write(buf) // Transfer the chunk wg.Add(1) o.fs.uploadToken.Get() go func(part int, position int64) { defer wg.Done() defer o.fs.uploadToken.Put() fs.Debugf(o, "Uploading part %d/%d offset %v/%v part size %v", part+1, session.TotalParts, fs.SizeSuffix(position), fs.SizeSuffix(size), fs.SizeSuffix(chunkSize)) partResponse, err := o.uploadPart(ctx, session.ID, position, size, buf, wrap, options...) if err != nil { err = errors.Wrap(err, "multipart upload failed to upload part") select { case errs <- err: default: } return } parts[part] = partResponse.Part }(part, position) // ready for next block remaining -= chunkSize position += chunkSize } wg.Wait() if err == nil { select { case err = <-errs: default: } } if err != nil { return err } // Finalise the upload session result, err := o.commitUpload(ctx, session.ID, parts, modTime, hash.Sum(nil)) if err != nil { return errors.Wrap(err, "multipart upload failed to finalize") } if result.TotalCount != 1 || len(result.Entries) != 1 { return errors.Errorf("multipart upload failed %v - not sure why", o) } return o.setMetaData(&result.Entries[0]) } rclone-1.53.3/backend/cache/000077500000000000000000000000001375552240400155265ustar00rootroot00000000000000rclone-1.53.3/backend/cache/cache.go000066400000000000000000001633021375552240400171250ustar00rootroot00000000000000// +build !plan9,!js package cache import ( "context" "fmt" "io" "math" "os" "os/signal" "path" "path/filepath" "sort" "strconv" "strings" "sync" "syscall" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/crypt" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fspath" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/atexit" "golang.org/x/time/rate" ) const ( // DefCacheChunkSize is the default value for chunk size DefCacheChunkSize = fs.SizeSuffix(5 * 1024 * 1024) // DefCacheTotalChunkSize is the default value for the maximum size of stored chunks DefCacheTotalChunkSize = fs.SizeSuffix(10 * 1024 * 1024 * 1024) // DefCacheChunkCleanInterval is the interval at which chunks are cleaned DefCacheChunkCleanInterval = fs.Duration(time.Minute) // DefCacheInfoAge is the default value for object info age DefCacheInfoAge = fs.Duration(6 * time.Hour) // DefCacheReadRetries is the default value for read retries DefCacheReadRetries = 10 // DefCacheTotalWorkers is how many workers run in parallel to download chunks DefCacheTotalWorkers = 4 // DefCacheChunkNoMemory will enable or disable in-memory storage for chunks DefCacheChunkNoMemory = false // DefCacheRps limits the number of requests per second to the source FS DefCacheRps = -1 // DefCacheWrites will cache file data on writes through the cache DefCacheWrites = false // DefCacheTmpWaitTime says how long should files be stored in local 
cache before being uploaded DefCacheTmpWaitTime = fs.Duration(15 * time.Second) // DefCacheDbWaitTime defines how long the cache backend should wait for the DB to be available DefCacheDbWaitTime = fs.Duration(1 * time.Second) ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "cache", Description: "Cache a remote", NewFs: NewFs, CommandHelp: commandHelp, Options: []fs.Option{{ Name: "remote", Help: "Remote to cache.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).", Required: true, }, { Name: "plex_url", Help: "The URL of the Plex server", }, { Name: "plex_username", Help: "The username of the Plex user", }, { Name: "plex_password", Help: "The password of the Plex user", IsPassword: true, }, { Name: "plex_token", Help: "The plex token for authentication - auto set normally", Hide: fs.OptionHideBoth, Advanced: true, }, { Name: "plex_insecure", Help: "Skip all certificate verification when connecting to the Plex server", Advanced: true, }, { Name: "chunk_size", Help: `The size of a chunk (partial file data). Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.`, Default: DefCacheChunkSize, Examples: []fs.OptionExample{{ Value: "1m", Help: "1MB", }, { Value: "5M", Help: "5 MB", }, { Value: "10M", Help: "10 MB", }}, }, { Name: "info_age", Help: `How long to cache file structure information (directory listings, file size, times etc). If all write operations are done through the cache then you can safely make this value very large as the cache store will also be updated in real time.`, Default: DefCacheInfoAge, Examples: []fs.OptionExample{{ Value: "1h", Help: "1 hour", }, { Value: "24h", Help: "24 hours", }, { Value: "48h", Help: "48 hours", }}, }, { Name: "chunk_total_size", Help: `The total size that the chunks can take up on the local disk. If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value.`, Default: DefCacheTotalChunkSize, Examples: []fs.OptionExample{{ Value: "500M", Help: "500 MB", }, { Value: "1G", Help: "1 GB", }, { Value: "10G", Help: "10 GB", }}, }, { Name: "db_path", Default: filepath.Join(config.CacheDir, "cache-backend"), Help: "Directory to store file structure metadata DB.\nThe remote name is used as the DB file name.", Advanced: true, }, { Name: "chunk_path", Default: filepath.Join(config.CacheDir, "cache-backend"), Help: `Directory to cache chunk files. Path to where partial file data (chunks) are stored locally. The remote name is appended to the final path. This config follows the "--cache-db-path". If you specify a custom location for "--cache-db-path" and don't specify one for "--cache-chunk-path" then "--cache-chunk-path" will use the same path as "--cache-db-path".`, Advanced: true, }, { Name: "db_purge", Default: false, Help: "Clear all the cached data for this remote on start.", Hide: fs.OptionHideConfigurator, Advanced: true, }, { Name: "chunk_clean_interval", Default: DefCacheChunkCleanInterval, Help: `How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. 
If you find that the cache goes over "cache-chunk-total-size" too often then try to lower this value to force it to perform cleanups more often.`, Advanced: true, }, { Name: "read_retries", Default: DefCacheReadRetries, Help: `How many times to retry a read from a cache storage. Since reading from a cache stream is independent from downloading file data, readers can get to a point where there's no more data in the cache. Most of the time this indicates a connectivity issue where the cache isn't able to provide file data anymore. For really slow connections, increase this to a point where the stream is able to provide data, but playback will be very stuttery.`, Advanced: true, }, { Name: "workers", Default: DefCacheTotalWorkers, Help: `How many workers should run in parallel to download chunks. Higher values will mean more parallel processing (better CPU needed) and more concurrent requests on the cloud provider. This impacts several aspects, like the cloud provider API limits and the stress on the hardware that rclone runs on, but it also means that streams will be more fluid and data will be available to readers much faster. **Note**: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use.`, Advanced: true, }, { Name: "chunk_no_memory", Default: DefCacheChunkNoMemory, Help: `Disable the in-memory cache for storing chunks during streaming. By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible. This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like "cache-chunk-size" and "cache-workers" this footprint can increase if there are parallel streams too (multiple files being read at the same time). If the hardware permits it, keeping the in-memory cache enabled gives better overall streaming performance, but it can be disabled if RAM is not available on the local machine.`, Advanced: true, }, { Name: "rps", Default: int(DefCacheRps), Help: `Limits the number of requests per second to the source FS (-1 to disable) This setting places a hard limit on the number of requests per second that cache will make to the cloud provider remote, and it tries to respect that value by inserting waits between reads. If you find that you're getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that. A good balance of all the other settings should make this setting unnecessary, but it is available for more special cases. **NOTE**: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass.`, Advanced: true, }, { Name: "writes", Default: DefCacheWrites, Help: `Cache file data on writes through the FS If you need to read files immediately after you upload them through cache you can enable this flag to have their data stored in the cache store at the same time during upload.`, Advanced: true, }, { Name: "tmp_upload_path", Default: "", Help: `Directory to keep temporary files until they are uploaded. This is the path that cache will use as temporary storage for new files that need to be uploaded to the cloud provider. Specifying a value will enable this feature.
Without it, the feature is completely disabled and files will be uploaded directly to the cloud provider.`, Advanced: true, }, { Name: "tmp_wait_time", Default: DefCacheTmpWaitTime, Help: `How long should files be stored in local cache before being uploaded This is the duration that a file must wait in the temporary location _cache-tmp-upload-path_ before it is selected for upload. Note that only one file is uploaded at a time and it can take longer to start the upload if a queue has formed for this purpose.`, Advanced: true, }, { Name: "db_wait_time", Default: DefCacheDbWaitTime, Help: `How long to wait for the DB to be available - 0 is unlimited Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error. If you set it to 0 then it will wait forever.`, Advanced: true, }}, }) } // Options defines the configuration for this backend type Options struct { Remote string `config:"remote"` PlexURL string `config:"plex_url"` PlexUsername string `config:"plex_username"` PlexPassword string `config:"plex_password"` PlexToken string `config:"plex_token"` PlexInsecure bool `config:"plex_insecure"` ChunkSize fs.SizeSuffix `config:"chunk_size"` InfoAge fs.Duration `config:"info_age"` ChunkTotalSize fs.SizeSuffix `config:"chunk_total_size"` DbPath string `config:"db_path"` ChunkPath string `config:"chunk_path"` DbPurge bool `config:"db_purge"` ChunkCleanInterval fs.Duration `config:"chunk_clean_interval"` ReadRetries int `config:"read_retries"` TotalWorkers int `config:"workers"` ChunkNoMemory bool `config:"chunk_no_memory"` Rps int `config:"rps"` StoreWrites bool `config:"writes"` TempWritePath string `config:"tmp_upload_path"` TempWaitTime fs.Duration `config:"tmp_wait_time"` DbWaitTime fs.Duration `config:"db_wait_time"` } // Fs represents a wrapped fs.Fs type Fs struct { fs.Fs wrapper fs.Fs name string root string opt Options // parsed options features *fs.Features // optional features cache *Persistent tempFs fs.Fs lastChunkCleanup time.Time cleanupMu sync.Mutex rateLimiter *rate.Limiter plexConnector *plexConnector backgroundRunner *backgroundWriter cleanupChan chan bool parentsForgetFn []func(string, fs.EntryType) notifiedRemotes map[string]bool notifiedMu sync.Mutex parentsForgetMu sync.Mutex } // parseRootPath returns a cleaned root path and a nil error or "" and an error when the path is invalid func parseRootPath(path string) (string, error) { return strings.Trim(path, "/"), nil } // NewFs constructs an Fs from the path, container:path func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } if opt.ChunkTotalSize < opt.ChunkSize*fs.SizeSuffix(opt.TotalWorkers) { return nil, errors.Errorf("don't set cache-chunk-total-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)", opt.ChunkTotalSize, opt.ChunkSize, opt.TotalWorkers) } if strings.HasPrefix(opt.Remote, name+":") { return nil, errors.New("can't point cache remote at itself - check the value of the remote setting") } rpath, err := parseRootPath(rootPath) if err != nil { return nil, errors.Wrapf(err, "failed to clean root path %q", rootPath) } remotePath := fspath.JoinRootPath(opt.Remote, rootPath) wrappedFs, wrapErr := cache.Get(remotePath) if wrapErr != nil && wrapErr != fs.ErrorIsFile { return nil, errors.Wrapf(wrapErr, "failed to make remote %q to wrap", remotePath) } var fsErr error fs.Debugf(name, "wrapped %v:%v at root %v",
wrappedFs.Name(), wrappedFs.Root(), rpath) if wrapErr == fs.ErrorIsFile { fsErr = fs.ErrorIsFile rpath = cleanPath(path.Dir(rpath)) } // configure cache backend if opt.DbPurge { fs.Debugf(name, "Purging the DB") } f := &Fs{ Fs: wrappedFs, name: name, root: rpath, opt: *opt, lastChunkCleanup: time.Now().Truncate(time.Hour * 24 * 30), cleanupChan: make(chan bool, 1), notifiedRemotes: make(map[string]bool), } cache.PinUntilFinalized(f.Fs, f) f.rateLimiter = rate.NewLimiter(rate.Limit(float64(opt.Rps)), opt.TotalWorkers) f.plexConnector = &plexConnector{} if opt.PlexURL != "" { if opt.PlexToken != "" { f.plexConnector, err = newPlexConnectorWithToken(f, opt.PlexURL, opt.PlexToken, opt.PlexInsecure) if err != nil { return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL) } } else { if opt.PlexPassword != "" && opt.PlexUsername != "" { decPass, err := obscure.Reveal(opt.PlexPassword) if err != nil { decPass = opt.PlexPassword } f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) { m.Set("plex_token", token) }) if err != nil { return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL) } } } } dbPath := f.opt.DbPath chunkPath := f.opt.ChunkPath // if dbPath is non-default but chunkPath is default, overwrite the latter so it follows dbPath if dbPath != filepath.Join(config.CacheDir, "cache-backend") && chunkPath == filepath.Join(config.CacheDir, "cache-backend") { chunkPath = dbPath } if filepath.Ext(dbPath) != "" { dbPath = filepath.Dir(dbPath) } if filepath.Ext(chunkPath) != "" { chunkPath = filepath.Dir(chunkPath) } err = os.MkdirAll(dbPath, os.ModePerm) if err != nil { return nil, errors.Wrapf(err, "failed to create cache directory %v", dbPath) } err = os.MkdirAll(chunkPath, os.ModePerm) if err != nil { return nil, errors.Wrapf(err, "failed to create cache directory %v", chunkPath) } dbPath = filepath.Join(dbPath, name+".db") chunkPath = filepath.Join(chunkPath, name) fs.Infof(name, "Cache DB path: %v", dbPath) fs.Infof(name, "Cache chunk path: %v", chunkPath) f.cache, err = GetPersistent(dbPath, chunkPath, &Features{ PurgeDb: opt.DbPurge, DbWaitTime: time.Duration(opt.DbWaitTime), }) if err != nil { return nil, errors.Wrap(err, "failed to start cache db") } // Trap SIGHUP to clear the dir cache; the atexit handler registered below closes the DB handle gracefully on exit c := make(chan os.Signal, 1) signal.Notify(c, syscall.SIGHUP) atexit.Register(func() { if opt.PlexURL != "" { f.plexConnector.closeWebsocket() } f.StopBackgroundRunners() }) go func() { for { s := <-c if s == syscall.SIGHUP { fs.Infof(f, "Clearing cache from signal") f.DirCacheFlush() } } }() fs.Infof(name, "Chunk Memory: %v", !f.opt.ChunkNoMemory) fs.Infof(name, "Chunk Size: %v", f.opt.ChunkSize) fs.Infof(name, "Chunk Total Size: %v", f.opt.ChunkTotalSize) fs.Infof(name, "Chunk Clean Interval: %v", f.opt.ChunkCleanInterval) fs.Infof(name, "Workers: %v", f.opt.TotalWorkers) fs.Infof(name, "File Age: %v", f.opt.InfoAge) if f.opt.StoreWrites { fs.Infof(name, "Cache Writes: enabled") } if f.opt.TempWritePath != "" { err = os.MkdirAll(f.opt.TempWritePath, os.ModePerm) if err != nil { return nil, errors.Wrapf(err, "failed to create cache directory %v", f.opt.TempWritePath) } f.opt.TempWritePath = filepath.ToSlash(f.opt.TempWritePath) f.tempFs, err = cache.Get(f.opt.TempWritePath) if err != nil { return nil, errors.Wrap(err, "failed to create temp fs") } fs.Infof(name, "Upload Temp Rest Time: %v", f.opt.TempWaitTime) fs.Infof(name,
"Upload Temp FS: %v", f.opt.TempWritePath) f.backgroundRunner, _ = initBackgroundUploader(f) go f.backgroundRunner.run() } go func() { for { time.Sleep(time.Duration(f.opt.ChunkCleanInterval)) select { case <-f.cleanupChan: fs.Infof(f, "stopping cleanup") return default: fs.Debugf(f, "starting cleanup") f.CleanUpCache(false) } } }() if doChangeNotify := wrappedFs.Features().ChangeNotify; doChangeNotify != nil { pollInterval := make(chan time.Duration, 1) pollInterval <- time.Duration(f.opt.ChunkCleanInterval) doChangeNotify(context.Background(), f.receiveChangeNotify, pollInterval) } f.features = (&fs.Features{ CanHaveEmptyDirectories: true, DuplicateFiles: false, // storage doesn't permit this }).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs) // override only those features that use a temp fs and it doesn't support them //f.features.ChangeNotify = f.ChangeNotify if f.opt.TempWritePath != "" { if f.tempFs.Features().Move == nil { f.features.Move = nil } if f.tempFs.Features().Move == nil { f.features.Move = nil } if f.tempFs.Features().DirMove == nil { f.features.DirMove = nil } if f.tempFs.Features().MergeDirs == nil { f.features.MergeDirs = nil } } // even if the wrapped fs doesn't support it, we still want it f.features.DirCacheFlush = f.DirCacheFlush rc.Add(rc.Call{ Path: "cache/expire", Fn: f.httpExpireRemote, Title: "Purge a remote from cache", Help: ` Purge a remote from the cache backend. Supports either a directory or a file. Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional) Eg rclone rc cache/expire remote=path/to/sub/folder/ rclone rc cache/expire remote=/ withData=true `, }) rc.Add(rc.Call{ Path: "cache/stats", Fn: f.httpStats, Title: "Get cache stats", Help: ` Show statistics for the cache remote. `, }) rc.Add(rc.Call{ Path: "cache/fetch", Fn: f.rcFetch, Title: "Fetch file chunks", Help: ` Ensure the specified file chunks are cached on disk. The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end] start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file. Some valid examples are: ":5,-5:" -> the first and last five chunks "0,-2" -> the first and the second last chunk "0:10" -> the first ten chunks Any parameter with a key that starts with "file" can be used to specify files to fetch, eg rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye File names will automatically be encrypted when the a crypt remote is used on top of the cache. 
`, }) return f, fsErr } func (f *Fs) httpStats(ctx context.Context, in rc.Params) (out rc.Params, err error) { out = make(rc.Params) m, err := f.Stats() if err != nil { return out, errors.Errorf("error while getting cache stats") } out["status"] = "ok" out["stats"] = m return out, nil } func (f *Fs) unwrapRemote(remote string) string { remote = cleanPath(remote) if remote != "" { // if it's wrapped by crypt we need to check what format we got if cryptFs, yes := f.isWrappedByCrypt(); yes { _, err := cryptFs.DecryptFileName(remote) // if it failed to decrypt then it is a decrypted format and we need to encrypt it if err != nil { return cryptFs.EncryptFileName(remote) } // else it's an encrypted format and we can use it as it is } } return remote } func (f *Fs) httpExpireRemote(ctx context.Context, in rc.Params) (out rc.Params, err error) { out = make(rc.Params) remoteInt, ok := in["remote"] if !ok { return out, errors.Errorf("remote is needed") } remote := remoteInt.(string) withData := false _, ok = in["withData"] if ok { withData = true } remote = f.unwrapRemote(remote) if !f.cache.HasEntry(path.Join(f.Root(), remote)) { return out, errors.Errorf("%s doesn't exist in cache", remote) } co := NewObject(f, remote) err = f.cache.GetObject(co) if err != nil { // it could be a dir cd := NewDirectory(f, remote) err := f.cache.ExpireDir(cd) if err != nil { return out, errors.WithMessage(err, "error expiring directory") } // notify vfs too f.notifyChangeUpstream(cd.Remote(), fs.EntryDirectory) out["status"] = "ok" out["message"] = fmt.Sprintf("cached directory cleared: %v", remote) return out, nil } // expire the entry err = f.cache.ExpireObject(co, withData) if err != nil { return out, errors.WithMessage(err, "error expiring file") } // notify vfs too f.notifyChangeUpstream(co.Remote(), fs.EntryObject) out["status"] = "ok" out["message"] = fmt.Sprintf("cached file cleared: %v", remote) return out, nil } func (f *Fs) rcFetch(ctx context.Context, in rc.Params) (rc.Params, error) { type chunkRange struct { start, end int64 } parseChunks := func(ranges string) (crs []chunkRange, err error) { for _, part := range strings.Split(ranges, ",") { var start, end int64 = 0, math.MaxInt64 switch ints := strings.Split(part, ":"); len(ints) { case 1: start, err = strconv.ParseInt(ints[0], 10, 64) if err != nil { return nil, errors.Errorf("invalid range: %q", part) } end = start + 1 case 2: if ints[0] != "" { start, err = strconv.ParseInt(ints[0], 10, 64) if err != nil { return nil, errors.Errorf("invalid range: %q", part) } } if ints[1] != "" { end, err = strconv.ParseInt(ints[1], 10, 64) if err != nil { return nil, errors.Errorf("invalid range: %q", part) } } default: return nil, errors.Errorf("invalid range: %q", part) } crs = append(crs, chunkRange{start: start, end: end}) } return } walkChunkRange := func(cr chunkRange, size int64, cb func(chunk int64)) { if size <= 0 { return } chunks := (size-1)/f.ChunkSize() + 1 start, end := cr.start, cr.end if start < 0 { start += chunks } if end <= 0 { end += chunks } if end <= start { return } switch { case start < 0: start = 0 case start >= chunks: return } switch { case end <= start: end = start + 1 case end >= chunks: end = chunks } for i := start; i < end; i++ { cb(i) } } walkChunkRanges := func(crs []chunkRange, size int64, cb func(chunk int64)) { for _, cr := range crs { walkChunkRange(cr, size, cb) } } v, ok := in["chunks"] if !ok { return nil, errors.New("missing chunks parameter") } s, ok := v.(string) if !ok { return nil, errors.New("invalid chunks 
parameter") } delete(in, "chunks") crs, err := parseChunks(s) if err != nil { return nil, errors.Wrap(err, "invalid chunks parameter") } var files [][2]string for k, v := range in { if !strings.HasPrefix(k, "file") { return nil, errors.Errorf("invalid parameter %s=%s", k, v) } switch v := v.(type) { case string: files = append(files, [2]string{v, f.unwrapRemote(v)}) default: return nil, errors.Errorf("invalid parameter %s=%s", k, v) } } type fileStatus struct { Error string FetchedChunks int } fetchedChunks := make(map[string]fileStatus, len(files)) for _, pair := range files { file, remote := pair[0], pair[1] var status fileStatus o, err := f.NewObject(ctx, remote) if err != nil { fetchedChunks[file] = fileStatus{Error: err.Error()} continue } co := o.(*Object) err = co.refreshFromSource(ctx, true) if err != nil { fetchedChunks[file] = fileStatus{Error: err.Error()} continue } handle := NewObjectHandle(ctx, co, f) handle.UseMemory = false handle.scaleWorkers(1) walkChunkRanges(crs, co.Size(), func(chunk int64) { _, err := handle.getChunk(chunk * f.ChunkSize()) if err != nil { if status.Error == "" { status.Error = err.Error() } } else { status.FetchedChunks++ } }) fetchedChunks[file] = status } return rc.Params{"status": fetchedChunks}, nil } // receiveChangeNotify is a wrapper to notifications sent from the wrapped FS about changed files func (f *Fs) receiveChangeNotify(forgetPath string, entryType fs.EntryType) { if crypt, yes := f.isWrappedByCrypt(); yes { decryptedPath, err := crypt.DecryptFileName(forgetPath) if err == nil { fs.Infof(decryptedPath, "received cache expiry notification") } else { fs.Infof(forgetPath, "received cache expiry notification") } } else { fs.Infof(forgetPath, "received cache expiry notification") } // notify upstreams too (vfs) f.notifyChangeUpstream(forgetPath, entryType) var cd *Directory if entryType == fs.EntryObject { co := NewObject(f, forgetPath) err := f.cache.GetObject(co) if err != nil { fs.Debugf(f, "got change notification for non cached entry %v", co) } err = f.cache.ExpireObject(co, true) if err != nil { fs.Debugf(forgetPath, "notify: error expiring '%v': %v", co, err) } cd = NewDirectory(f, cleanPath(path.Dir(co.Remote()))) } else { cd = NewDirectory(f, forgetPath) } // we expire the dir err := f.cache.ExpireDir(cd) if err != nil { fs.Debugf(forgetPath, "notify: error expiring '%v': %v", cd, err) } else { fs.Debugf(forgetPath, "notify: expired '%v'", cd) } f.notifiedMu.Lock() defer f.notifiedMu.Unlock() f.notifiedRemotes[forgetPath] = true f.notifiedRemotes[cd.Remote()] = true } // notifyChangeUpstreamIfNeeded will check if the wrapped remote doesn't notify on changes // or if we use a temp fs func (f *Fs) notifyChangeUpstreamIfNeeded(remote string, entryType fs.EntryType) { if f.Fs.Features().ChangeNotify == nil || f.opt.TempWritePath != "" { f.notifyChangeUpstream(remote, entryType) } } // notifyChangeUpstream will loop through all the upstreams and notify // of the provided remote (should be only a dir) func (f *Fs) notifyChangeUpstream(remote string, entryType fs.EntryType) { f.parentsForgetMu.Lock() defer f.parentsForgetMu.Unlock() if len(f.parentsForgetFn) > 0 { for _, fn := range f.parentsForgetFn { fn(remote, entryType) } } } // ChangeNotify can subscribe multiple callers // this is coupled with the wrapped fs ChangeNotify (if it supports it) // and also notifies other caches (i.e VFS) to clear out whenever something changes func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollInterval <-chan 
time.Duration) { f.parentsForgetMu.Lock() defer f.parentsForgetMu.Unlock() fs.Debugf(f, "subscribing to ChangeNotify") f.parentsForgetFn = append(f.parentsForgetFn, notifyFunc) go func() { for range pollInterval { } }() } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // String returns a description of the FS func (f *Fs) String() string { return fmt.Sprintf("Cache remote %s:%s", f.name, f.root) } // ChunkSize returns the configured chunk size func (f *Fs) ChunkSize() int64 { return int64(f.opt.ChunkSize) } // InfoAge returns the configured file age func (f *Fs) InfoAge() time.Duration { return time.Duration(f.opt.InfoAge) } // TempUploadWaitTime returns the configured temp file upload wait time func (f *Fs) TempUploadWaitTime() time.Duration { return time.Duration(f.opt.TempWaitTime) } // NewObject finds the Object at remote. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { var err error fs.Debugf(f, "new object '%s'", remote) co := NewObject(f, remote) // search for entry in cache and validate it err = f.cache.GetObject(co) if err != nil { fs.Debugf(remote, "find: error: %v", err) } else if time.Now().After(co.CacheTs.Add(time.Duration(f.opt.InfoAge))) { fs.Debugf(co, "find: cold object: %+v", co) } else { fs.Debugf(co, "find: warm object: %v, expiring on: %v", co, co.CacheTs.Add(time.Duration(f.opt.InfoAge))) return co, nil } // search for entry in source or temp fs var obj fs.Object if f.opt.TempWritePath != "" { obj, err = f.tempFs.NewObject(ctx, remote) // not found in temp fs if err != nil { fs.Debugf(remote, "find: not found in local cache fs") obj, err = f.Fs.NewObject(ctx, remote) } else { fs.Debugf(obj, "find: found in local cache fs") } } else { obj, err = f.Fs.NewObject(ctx, remote) } // not found in either fs if err != nil { fs.Debugf(obj, "find failed: not found in either local or remote fs") return nil, err } // cache the new entry co = ObjectFromOriginal(ctx, f, obj).persist() fs.Debugf(co, "find: cached object") return co, nil } // List the objects and directories in dir into entries func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { fs.Debugf(f, "list '%s'", dir) cd := ShallowDirectory(f, dir) // search for cached dir entries and validate them entries, err = f.cache.GetDirEntries(cd) if err != nil { fs.Debugf(dir, "list: error: %v", err) } else if time.Now().After(cd.CacheTs.Add(time.Duration(f.opt.InfoAge))) { fs.Debugf(dir, "list: cold listing: %v", cd.CacheTs) } else if len(entries) == 0 { // TODO: read empty dirs from source? 
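// Note: an empty cached listing is deliberately not treated as warm here - the cache cannot tell an empty directory apart from one it has never populated, so we fall through and re-list from the source below.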
fs.Debugf(dir, "list: empty listing") } else { fs.Debugf(dir, "list: warm %v from cache for: %v, expiring on: %v", len(entries), cd.abs(), cd.CacheTs.Add(time.Duration(f.opt.InfoAge))) fs.Debugf(dir, "list: cached entries: %v", entries) return entries, nil } // we first search any temporary files stored locally var cachedEntries fs.DirEntries if f.opt.TempWritePath != "" { queuedEntries, err := f.cache.searchPendingUploadFromDir(cd.abs()) if err != nil { fs.Errorf(dir, "list: error getting pending uploads: %v", err) } else { fs.Debugf(dir, "list: read %v from temp fs", len(queuedEntries)) fs.Debugf(dir, "list: temp fs entries: %v", queuedEntries) for _, queuedRemote := range queuedEntries { queuedEntry, err := f.tempFs.NewObject(ctx, f.cleanRootFromPath(queuedRemote)) if err != nil { fs.Debugf(dir, "list: temp file not found in local fs: %v", err) continue } co := ObjectFromOriginal(ctx, f, queuedEntry).persist() fs.Debugf(co, "list: cached temp object") cachedEntries = append(cachedEntries, co) } } } // search from the source sourceEntries, err := f.Fs.List(ctx, dir) if err != nil { return nil, err } fs.Debugf(dir, "list: read %v from source", len(sourceEntries)) fs.Debugf(dir, "list: source entries: %v", sourceEntries) sort.Sort(sourceEntries) for _, entry := range entries { entryRemote := entry.Remote() i := sort.Search(len(sourceEntries), func(i int) bool { return sourceEntries[i].Remote() >= entryRemote }) if i < len(sourceEntries) && sourceEntries[i].Remote() == entryRemote { continue } fp := path.Join(f.Root(), entryRemote) switch entry.(type) { case fs.Object: _ = f.cache.RemoveObject(fp) case fs.Directory: _ = f.cache.RemoveDir(fp) } fs.Debugf(dir, "list: remove entry: %v", entryRemote) } entries = nil // and then iterate over the ones from source (temp Objects will override source ones) var batchDirectories []*Directory sort.Sort(cachedEntries) tmpCnt := len(cachedEntries) for _, entry := range sourceEntries { switch o := entry.(type) { case fs.Object: // skip over temporary objects (might be uploading) oRemote := o.Remote() i := sort.Search(tmpCnt, func(i int) bool { return cachedEntries[i].Remote() >= oRemote }) if i < tmpCnt && cachedEntries[i].Remote() == oRemote { continue } co := ObjectFromOriginal(ctx, f, o).persist() cachedEntries = append(cachedEntries, co) fs.Debugf(dir, "list: cached object: %v", co) case fs.Directory: cdd := DirectoryFromOriginal(ctx, f, o) // check if the dir isn't expired and add it in cache if it isn't if cdd2, err := f.cache.GetDir(cdd.abs()); err != nil || time.Now().Before(cdd2.CacheTs.Add(time.Duration(f.opt.InfoAge))) { batchDirectories = append(batchDirectories, cdd) } cachedEntries = append(cachedEntries, cdd) default: fs.Debugf(entry, "list: Unknown object type %T", entry) } } err = f.cache.AddBatchDir(batchDirectories) if err != nil { fs.Errorf(dir, "list: error caching directories from listing %v", dir) } else { fs.Debugf(dir, "list: cached directories: %v", len(batchDirectories)) } // cache dir meta t := time.Now() cd.CacheTs = &t err = f.cache.AddDir(cd) if err != nil { fs.Errorf(cd, "list: save error: '%v'", err) } else { fs.Debugf(dir, "list: cached dir: '%v', cache ts: %v", cd.abs(), cd.CacheTs) } return cachedEntries, nil } func (f *Fs) recurse(ctx context.Context, dir string, list *walk.ListRHelper) error { entries, err := f.List(ctx, dir) if err != nil { return err } for i := 0; i < len(entries); i++ { innerDir, ok := entries[i].(fs.Directory) if ok { err := f.recurse(ctx, innerDir.Remote(), list) if err != nil { return err } } 
err := list.Add(entries[i]) if err != nil { return err } } return nil } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { fs.Debugf(f, "list recursively from '%s'", dir) // we check if the source FS supports ListR // if it does, we'll use that to get all the entries, cache them and return do := f.Fs.Features().ListR if do != nil { return do(ctx, dir, func(entries fs.DirEntries) error { // we got called back with a set of entries so let's cache them and call the original callback for _, entry := range entries { switch o := entry.(type) { case fs.Object: _ = f.cache.AddObject(ObjectFromOriginal(ctx, f, o)) case fs.Directory: _ = f.cache.AddDir(DirectoryFromOriginal(ctx, f, o)) default: return errors.Errorf("Unknown object type %T", entry) } } // call the original callback return callback(entries) }) } // if we're here, we're gonna do a standard recursive traversal and cache everything list := walk.NewListRHelper(callback) err = f.recurse(ctx, dir, list) if err != nil { return err } return list.Flush() } // Mkdir makes the directory (container, bucket) func (f *Fs) Mkdir(ctx context.Context, dir string) error { fs.Debugf(f, "mkdir '%s'", dir) err := f.Fs.Mkdir(ctx, dir) if err != nil { return err } fs.Debugf(dir, "mkdir: created dir in source fs") cd := NewDirectory(f, cleanPath(dir)) err = f.cache.AddDir(cd) if err != nil { fs.Errorf(dir, "mkdir: add error: %v", err) } else { fs.Debugf(cd, "mkdir: added to cache") } // expire parent of new dir parentCd := NewDirectory(f, cleanPath(path.Dir(dir))) err = f.cache.ExpireDir(parentCd) if err != nil { fs.Errorf(parentCd, "mkdir: cache expire error: %v", err) } else { fs.Infof(parentCd, "mkdir: cache expired") } // advertise to ChangeNotify if wrapped doesn't do that f.notifyChangeUpstreamIfNeeded(parentCd.Remote(), fs.EntryDirectory) return nil } // Rmdir removes the directory (container, bucket) if empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { fs.Debugf(f, "rmdir '%s'", dir) if f.opt.TempWritePath != "" { // pause background uploads f.backgroundRunner.pause() defer f.backgroundRunner.play() // we check if the source exists on the remote and make the same move on it too if it does // otherwise, we skip this step _, err := f.UnWrap().List(ctx, dir) if err == nil { err := f.Fs.Rmdir(ctx, dir) if err != nil { return err } fs.Debugf(dir, "rmdir: removed dir in source fs") } var queuedEntries []*Object err = walk.ListR(ctx, f.tempFs, dir, true, -1, walk.ListObjects, func(entries fs.DirEntries) error { for _, o := range entries { if oo, ok := o.(fs.Object); ok { co := ObjectFromOriginal(ctx, f, oo) queuedEntries = append(queuedEntries, co) } } return nil }) if err != nil { fs.Errorf(dir, "rmdir: error getting pending uploads: %v", err) } else { fs.Debugf(dir, "rmdir: read %v from temp fs", len(queuedEntries)) fs.Debugf(dir, "rmdir: temp fs entries: %v", queuedEntries) if len(queuedEntries) > 0 { fs.Errorf(dir, "rmdir: temporary dir not empty: %v", queuedEntries) return fs.ErrorDirectoryNotEmpty } } } else { err := f.Fs.Rmdir(ctx, dir) if err != nil { return err } fs.Debugf(dir, "rmdir: removed dir in source fs") } // remove dir data d := NewDirectory(f, dir) err := f.cache.RemoveDir(d.abs()) if err != nil { fs.Errorf(dir, "rmdir: remove error: %v", err) } else { fs.Debugf(d, "rmdir: removed from cache") } // expire parent parentCd := NewDirectory(f, cleanPath(path.Dir(dir))) err = 
f.cache.ExpireDir(parentCd) if err != nil { fs.Errorf(dir, "rmdir: cache expire error: %v", err) } else { fs.Infof(parentCd, "rmdir: cache expired") } // advertise to ChangeNotify if wrapped doesn't do that f.notifyChangeUpstreamIfNeeded(parentCd.Remote(), fs.EntryDirectory) return nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { fs.Debugf(f, "move dir '%s'/'%s' -> '%s'/'%s'", src.Root(), srcRemote, f.Root(), dstRemote) do := f.Fs.Features().DirMove if do == nil { return fs.ErrorCantDirMove } srcFs, ok := src.(*Fs) if !ok { fs.Errorf(srcFs, "can't move directory - not same remote type") return fs.ErrorCantDirMove } if srcFs.Fs.Name() != f.Fs.Name() { fs.Errorf(srcFs, "can't move directory - not wrapping same remotes") return fs.ErrorCantDirMove } if f.opt.TempWritePath != "" { // pause background uploads f.backgroundRunner.pause() defer f.backgroundRunner.play() _, errInWrap := srcFs.UnWrap().List(ctx, srcRemote) _, errInTemp := f.tempFs.List(ctx, srcRemote) // not found in either fs if errInWrap != nil && errInTemp != nil { return fs.ErrorDirNotFound } // we check if the source exists on the remote and make the same move on it too if it does // otherwise, we skip this step if errInWrap == nil { err := do(ctx, srcFs.UnWrap(), srcRemote, dstRemote) if err != nil { return err } fs.Debugf(srcRemote, "movedir: dir moved in the source fs") } // we need to check if the directory exists in the temp fs // and skip the move if it doesn't if errInTemp != nil { goto cleanup } var queuedEntries []*Object err := walk.ListR(ctx, f.tempFs, srcRemote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error { for _, o := range entries { if oo, ok := o.(fs.Object); ok { co := ObjectFromOriginal(ctx, f, oo) queuedEntries = append(queuedEntries, co) if co.tempFileStartedUpload() { fs.Errorf(co, "can't move - upload has already started. 
need to finish that") return fs.ErrorCantDirMove } } } return nil }) if err != nil { return err } fs.Debugf(srcRemote, "dirmove: read %v from temp fs", len(queuedEntries)) fs.Debugf(srcRemote, "dirmove: temp fs entries: %v", queuedEntries) do := f.tempFs.Features().DirMove if do == nil { fs.Errorf(srcRemote, "dirmove: can't move dir in temp fs") return fs.ErrorCantDirMove } err = do(ctx, f.tempFs, srcRemote, dstRemote) if err != nil { return err } err = f.cache.ReconcileTempUploads(ctx, f) if err != nil { return err } } else { err := do(ctx, srcFs.UnWrap(), srcRemote, dstRemote) if err != nil { return err } fs.Debugf(srcRemote, "movedir: dir moved in the source fs") } cleanup: // delete src dir from cache along with all chunks srcDir := NewDirectory(srcFs, srcRemote) err := f.cache.RemoveDir(srcDir.abs()) if err != nil { fs.Errorf(srcDir, "dirmove: remove error: %v", err) } else { fs.Debugf(srcDir, "dirmove: removed cached dir") } // expire src parent srcParent := NewDirectory(f, cleanPath(path.Dir(srcRemote))) err = f.cache.ExpireDir(srcParent) if err != nil { fs.Errorf(srcParent, "dirmove: cache expire error: %v", err) } else { fs.Debugf(srcParent, "dirmove: cache expired") } // advertise to ChangeNotify if wrapped doesn't do that f.notifyChangeUpstreamIfNeeded(srcParent.Remote(), fs.EntryDirectory) // expire parent dir at the destination path dstParent := NewDirectory(f, cleanPath(path.Dir(dstRemote))) err = f.cache.ExpireDir(dstParent) if err != nil { fs.Errorf(dstParent, "dirmove: cache expire error: %v", err) } else { fs.Debugf(dstParent, "dirmove: cache expired") } // advertise to ChangeNotify if wrapped doesn't do that f.notifyChangeUpstreamIfNeeded(dstParent.Remote(), fs.EntryDirectory) // TODO: precache dst dir and save the chunks return nil } // cacheReader will split the stream of a reader to be cached at the same time it is read by the original source func (f *Fs) cacheReader(u io.Reader, src fs.ObjectInfo, originalRead func(inn io.Reader)) { // create the pipe and tee reader pr, pw := io.Pipe() tr := io.TeeReader(u, pw) // create channel to synchronize done := make(chan bool) defer close(done) go func() { // notify the cache reader that we're complete after the source FS finishes defer func() { _ = pw.Close() }() // process original reading originalRead(tr) // signal complete done <- true }() go func() { var offset int64 for { chunk := make([]byte, f.opt.ChunkSize) readSize, err := io.ReadFull(pr, chunk) // we ignore 3 failures which are ok: // 1. EOF - original reading finished and we got a full buffer too // 2. ErrUnexpectedEOF - original reading finished and partial buffer // 3. ErrClosedPipe - source remote reader was closed (usually means it reached the end) and we need to stop too // if we have a different error: we're going to error out the original reading too and stop this if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF && err != io.ErrClosedPipe { fs.Errorf(src, "error saving new data in cache. 
offset: %v, err: %v", offset, err) _ = pr.CloseWithError(err) break } // if we have some bytes we cache them if readSize > 0 { chunk = chunk[:readSize] err2 := f.cache.AddChunk(cleanPath(path.Join(f.root, src.Remote())), chunk, offset) if err2 != nil { fs.Errorf(src, "error saving new data in cache '%v'", err2) _ = pr.CloseWithError(err2) break } offset += int64(readSize) } // stuff should be closed but let's be sure if err == io.EOF || err == io.ErrUnexpectedEOF || err == io.ErrClosedPipe { _ = pr.Close() break } } // signal complete done <- true }() // wait until both are done for c := 0; c < 2; c++ { <-done } } type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) // put in to the remote path func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) { var err error var obj fs.Object // queue for upload and store in temp fs if configured if f.opt.TempWritePath != "" { // we need to clear the caches before a put through temp fs parentCd := NewDirectory(f, cleanPath(path.Dir(src.Remote()))) _ = f.cache.ExpireDir(parentCd) f.notifyChangeUpstreamIfNeeded(parentCd.Remote(), fs.EntryDirectory) obj, err = f.tempFs.Put(ctx, in, src, options...) if err != nil { fs.Errorf(obj, "put: failed to upload in temp fs: %v", err) return nil, err } fs.Infof(obj, "put: uploaded in temp fs") err = f.cache.addPendingUpload(path.Join(f.Root(), src.Remote()), false) if err != nil { fs.Errorf(obj, "put: failed to queue for upload: %v", err) return nil, err } fs.Infof(obj, "put: queued for upload") // if cache writes is enabled write it first through cache } else if f.opt.StoreWrites { f.cacheReader(in, src, func(inn io.Reader) { obj, err = put(ctx, inn, src, options...) }) if err == nil { fs.Debugf(obj, "put: uploaded to remote fs and saved in cache") } // last option: save it directly in remote fs } else { obj, err = put(ctx, in, src, options...) 
if err == nil { fs.Debugf(obj, "put: uploaded to remote fs") } } // validate and stop if errors are found if err != nil { fs.Errorf(src, "put: error uploading: %v", err) return nil, err } // cache the new file cachedObj := ObjectFromOriginal(ctx, f, obj) // deleting cached chunks and info to be replaced with new ones _ = f.cache.RemoveObject(cachedObj.abs()) cachedObj.persist() fs.Debugf(cachedObj, "put: added to cache") // expire parent parentCd := NewDirectory(f, cleanPath(path.Dir(cachedObj.Remote()))) err = f.cache.ExpireDir(parentCd) if err != nil { fs.Errorf(cachedObj, "put: cache expire error: %v", err) } else { fs.Infof(parentCd, "put: cache expired") } // advertise to ChangeNotify f.notifyChangeUpstreamIfNeeded(parentCd.Remote(), fs.EntryDirectory) return cachedObj, nil } // Put in to the remote path with the modTime given of the given size func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { fs.Debugf(f, "put data at '%s'", src.Remote()) return f.put(ctx, in, src, options, f.Fs.Put) } // PutUnchecked uploads the object func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { do := f.Fs.Features().PutUnchecked if do == nil { return nil, errors.New("can't PutUnchecked") } fs.Debugf(f, "put data unchecked in '%s'", src.Remote()) return f.put(ctx, in, src, options, do) } // PutStream uploads the object func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { do := f.Fs.Features().PutStream if do == nil { return nil, errors.New("can't PutStream") } fs.Debugf(f, "put data streaming in '%s'", src.Remote()) return f.put(ctx, in, src, options, do) } // Copy src to this remote using server side copy operations. 
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { fs.Debugf(f, "copy obj '%s' -> '%s'", src, remote) do := f.Fs.Features().Copy if do == nil { fs.Errorf(src, "source remote (%v) doesn't support Copy", src.Fs()) return nil, fs.ErrorCantCopy } if f.opt.TempWritePath != "" && src.Fs() == f.tempFs { return nil, fs.ErrorCantCopy } // the source must be a cached object or we abort srcObj, ok := src.(*Object) if !ok { fs.Errorf(srcObj, "can't copy - not same remote type") return nil, fs.ErrorCantCopy } // both the source cache fs and this cache fs need to wrap the same remote if srcObj.CacheFs.Fs.Name() != f.Fs.Name() { fs.Errorf(srcObj, "can't copy - not wrapping same remotes") return nil, fs.ErrorCantCopy } // refresh from source or abort if err := srcObj.refreshFromSource(ctx, false); err != nil { fs.Errorf(f, "can't copy %v - %v", src, err) return nil, fs.ErrorCantCopy } if srcObj.isTempFile() { // we check if the feature is still active if f.opt.TempWritePath == "" { fs.Errorf(srcObj, "can't copy - this is a local cached file but this feature is turned off this run") return nil, fs.ErrorCantCopy } do = srcObj.ParentFs.Features().Copy if do == nil { fs.Errorf(src, "parent remote (%v) doesn't support Copy", srcObj.ParentFs) return nil, fs.ErrorCantCopy } } obj, err := do(ctx, srcObj.Object, remote) if err != nil { fs.Errorf(srcObj, "error moving in cache: %v", err) return nil, err } fs.Debugf(obj, "copy: file copied") // persist new co := ObjectFromOriginal(ctx, f, obj).persist() fs.Debugf(co, "copy: added to cache") // expire the destination path parentCd := NewDirectory(f, cleanPath(path.Dir(co.Remote()))) err = f.cache.ExpireDir(parentCd) if err != nil { fs.Errorf(parentCd, "copy: cache expire error: %v", err) } else { fs.Infof(parentCd, "copy: cache expired") } // advertise to ChangeNotify if wrapped doesn't do that f.notifyChangeUpstreamIfNeeded(parentCd.Remote(), fs.EntryDirectory) // expire src parent srcParent := NewDirectory(f, cleanPath(path.Dir(src.Remote()))) err = f.cache.ExpireDir(srcParent) if err != nil { fs.Errorf(srcParent, "copy: cache expire error: %v", err) } else { fs.Infof(srcParent, "copy: cache expired") } // advertise to ChangeNotify if wrapped doesn't do that f.notifyChangeUpstreamIfNeeded(srcParent.Remote(), fs.EntryDirectory) return co, nil } // Move src to this remote using server side move operations. 
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { fs.Debugf(f, "moving obj '%s' -> %s", src, remote) // if source fs doesn't support move abort do := f.Fs.Features().Move if do == nil { fs.Errorf(src, "source remote (%v) doesn't support Move", src.Fs()) return nil, fs.ErrorCantMove } // the source must be a cached object or we abort srcObj, ok := src.(*Object) if !ok { fs.Errorf(srcObj, "can't move - not same remote type") return nil, fs.ErrorCantMove } // both the source cache fs and this cache fs need to wrap the same remote if srcObj.CacheFs.Fs.Name() != f.Fs.Name() { fs.Errorf(srcObj, "can't move - not wrapping same remote types") return nil, fs.ErrorCantMove } // refresh from source or abort if err := srcObj.refreshFromSource(ctx, false); err != nil { fs.Errorf(f, "can't move %v - %v", src, err) return nil, fs.ErrorCantMove } // if this is a temp object then we perform the changes locally if srcObj.isTempFile() { // we check if the feature is still active if f.opt.TempWritePath == "" { fs.Errorf(srcObj, "can't move - this is a local cached file but this feature is turned off this run") return nil, fs.ErrorCantMove } // pause background uploads f.backgroundRunner.pause() defer f.backgroundRunner.play() // started uploads can't be moved until they complete if srcObj.tempFileStartedUpload() { fs.Errorf(srcObj, "can't move - upload has already started. need to finish that") return nil, fs.ErrorCantMove } do = f.tempFs.Features().Move // we must also update the pending queue err := f.cache.updatePendingUpload(srcObj.abs(), func(item *tempUploadInfo) error { item.DestPath = path.Join(f.Root(), remote) item.AddedOn = time.Now() return nil }) if err != nil { fs.Errorf(srcObj, "failed to rename queued file for upload: %v", err) return nil, fs.ErrorCantMove } fs.Debugf(srcObj, "move: queued file moved to %v", remote) } obj, err := do(ctx, srcObj.Object, remote) if err != nil { fs.Errorf(srcObj, "error moving: %v", err) return nil, err } fs.Debugf(obj, "move: file moved") // remove old err = f.cache.RemoveObject(srcObj.abs()) if err != nil { fs.Errorf(srcObj, "move: remove error: %v", err) } else { fs.Debugf(srcObj, "move: removed from cache") } // expire old parent parentCd := NewDirectory(f, cleanPath(path.Dir(srcObj.Remote()))) err = f.cache.ExpireDir(parentCd) if err != nil { fs.Errorf(parentCd, "move: parent cache expire error: %v", err) } else { fs.Infof(parentCd, "move: cache expired") } // advertise to ChangeNotify if wrapped doesn't do that f.notifyChangeUpstreamIfNeeded(parentCd.Remote(), fs.EntryDirectory) // persist new cachedObj := ObjectFromOriginal(ctx, f, obj).persist() fs.Debugf(cachedObj, "move: added to cache") // expire new parent parentCd = NewDirectory(f, cleanPath(path.Dir(cachedObj.Remote()))) err = f.cache.ExpireDir(parentCd) if err != nil { fs.Errorf(parentCd, "move: expire error: %v", err) } else { fs.Infof(parentCd, "move: cache expired") } // advertise to ChangeNotify if wrapped doesn't do that f.notifyChangeUpstreamIfNeeded(parentCd.Remote(), fs.EntryDirectory) return cachedObj, nil } // Hashes returns the supported hash sets. 
func (f *Fs) Hashes() hash.Set { return f.Fs.Hashes() } // Purge all files in the directory func (f *Fs) Purge(ctx context.Context, dir string) error { if dir == "" { // FIXME this isn't quite right as it should purge the dir prefix fs.Infof(f, "purging cache") f.cache.Purge() } do := f.Fs.Features().Purge if do == nil { return fs.ErrorCantPurge } err := do(ctx, dir) if err != nil { return err } return nil } // CleanUp the trash in the Fs func (f *Fs) CleanUp(ctx context.Context) error { f.CleanUpCache(false) do := f.Fs.Features().CleanUp if do == nil { return nil } return do(ctx) } // About gets quota information from the Fs func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { do := f.Fs.Features().About if do == nil { return nil, errors.New("About not supported") } return do(ctx) } // Stats returns stats about the cache storage func (f *Fs) Stats() (map[string]map[string]interface{}, error) { return f.cache.Stats() } // openRateLimited will execute a closure under a rate limiter watch func (f *Fs) openRateLimited(fn func() (io.ReadCloser, error)) (io.ReadCloser, error) { var err error ctx, cancel := context.WithTimeout(context.Background(), time.Second*10) defer cancel() start := time.Now() if err = f.rateLimiter.Wait(ctx); err != nil { return nil, err } elapsed := time.Since(start) if elapsed > time.Second*2 { fs.Debugf(f, "rate limited: %s", elapsed) } return fn() } // CleanUpCache will clean up only the cache data that is expired func (f *Fs) CleanUpCache(ignoreLastTs bool) { f.cleanupMu.Lock() defer f.cleanupMu.Unlock() if ignoreLastTs || time.Now().After(f.lastChunkCleanup.Add(time.Duration(f.opt.ChunkCleanInterval))) { f.cache.CleanChunksBySize(int64(f.opt.ChunkTotalSize)) f.lastChunkCleanup = time.Now() } } // StopBackgroundRunners will signal all the runners to stop their work // can be triggered by a terminate signal or from testing between runs func (f *Fs) StopBackgroundRunners() { f.cleanupChan <- false if f.opt.TempWritePath != "" && f.backgroundRunner != nil && f.backgroundRunner.isRunning() { f.backgroundRunner.close() } f.cache.Close() fs.Debugf(f, "Services stopped") } // UnWrap returns the Fs that this Fs is wrapping func (f *Fs) UnWrap() fs.Fs { return f.Fs } // WrapFs returns the Fs that is wrapping this Fs func (f *Fs) WrapFs() fs.Fs { return f.wrapper } // SetWrapper sets the Fs that is wrapping this Fs func (f *Fs) SetWrapper(wrapper fs.Fs) { f.wrapper = wrapper } // isWrappedByCrypt checks if this is wrapped by a crypt remote func (f *Fs) isWrappedByCrypt() (*crypt.Fs, bool) { if f.wrapper == nil { return nil, false } c, ok := f.wrapper.(*crypt.Fs) return c, ok } // cleanRootFromPath trims the root of the current fs from a path func (f *Fs) cleanRootFromPath(p string) string { if f.Root() != "" { p = p[len(f.Root()):] // trim out root if len(p) > 0 { // remove first separator p = p[1:] } } return p } func (f *Fs) isRootInPath(p string) bool { if f.Root() == "" { return true } return strings.HasPrefix(p, f.Root()+"/") } // MergeDirs merges the contents of all the directories passed // in into the first one and rmdirs the other directories.
func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error { do := f.Fs.Features().MergeDirs if do == nil { return errors.New("MergeDirs not supported") } for _, dir := range dirs { _ = f.cache.RemoveDir(dir.Remote()) } return do(ctx, dirs) } // DirCacheFlush flushes the dir cache func (f *Fs) DirCacheFlush() { _ = f.cache.RemoveDir("") } // GetBackgroundUploadChannel returns a channel that can be listened to for remote activities that happen // in the background func (f *Fs) GetBackgroundUploadChannel() chan BackgroundUploadState { if f.opt.TempWritePath != "" { return f.backgroundRunner.notifyCh } return nil } func (f *Fs) isNotifiedRemote(remote string) bool { f.notifiedMu.Lock() defer f.notifiedMu.Unlock() n, ok := f.notifiedRemotes[remote] if !ok || !n { return false } delete(f.notifiedRemotes, remote) return n } func cleanPath(p string) string { p = path.Clean(p) if p == "." || p == "/" { p = "" } return p } // UserInfo returns info about the connected user func (f *Fs) UserInfo(ctx context.Context) (map[string]string, error) { do := f.Fs.Features().UserInfo if do == nil { return nil, fs.ErrorNotImplemented } return do(ctx) } // Disconnect the current user func (f *Fs) Disconnect(ctx context.Context) error { do := f.Fs.Features().Disconnect if do == nil { return fs.ErrorNotImplemented } return do(ctx) } var commandHelp = []fs.CommandHelp{ { Name: "stats", Short: "Print stats on the cache backend in JSON format.", }, } // Command the backend to run a named command // // The command run is name // args may be used to read arguments from // opts may be used to read optional arguments from // // The result should be capable of being JSON encoded // If it is a string or a []string it will be shown to the user // otherwise it will be JSON encoded and shown to the user like that func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (interface{}, error) { switch name { case "stats": return f.Stats() default: return nil, fs.ErrorCommandNotFound } } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.PutUncheckeder = (*Fs)(nil) _ fs.PutStreamer = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.UnWrapper = (*Fs)(nil) _ fs.Wrapper = (*Fs)(nil) _ fs.ListRer = (*Fs)(nil) _ fs.ChangeNotifier = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.UserInfoer = (*Fs)(nil) _ fs.Disconnecter = (*Fs)(nil) _ fs.Commander = (*Fs)(nil) _ fs.MergeDirser = (*Fs)(nil) ) rclone-1.53.3/backend/cache/cache_internal_test.go000066400000000000000000001244611375552240400220630ustar00rootroot00000000000000// +build !plan9,!js // +build !race package cache_test import ( "bytes" "context" "encoding/base64" goflag "flag" "fmt" "io" "io/ioutil" "log" "math/rand" "os" "path" "path/filepath" "runtime/debug" "strings" "testing" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/cache" "github.com/rclone/rclone/backend/crypt" _ "github.com/rclone/rclone/backend/drive" "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/object" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/testy" "github.com/rclone/rclone/lib/random" "github.com/rclone/rclone/vfs/vfsflags" "github.com/stretchr/testify/require" ) const ( // these 2 passwords are test random cryptPassword1 = "3XcvMMdsV3d-HGAReTMdNH-5FcX5q32_lUeA" // oGJdUbQc7s8 
cryptPassword2 = "NlgTBEIe-qibA7v-FoMfuX6Cw8KlLai_aMvV" // mv4mZW572HM cryptedTextBase64 = "UkNMT05FAAC320i2xIee0BiNyknSPBn+Qcw3q9FhIFp3tvq6qlqvbsno3PnxmEFeJG3jDBnR/wku2gHWeQ==" // one content cryptedText2Base64 = "UkNMT05FAAATcQkVsgjBh8KafCKcr0wdTa1fMmV0U8hsCLGFoqcvxKVmvv7wx3Hf5EXxFcki2FFV4sdpmSrb9Q==" // updated content cryptedText3Base64 = "UkNMT05FAAB/f7YtYKbPfmk9+OX/ffN3qG3OEdWT+z74kxCX9V/YZwJ4X2DN3HOnUC3gKQ4Gcoud5UtNvQ==" // test content letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" ) var ( remoteName string uploadDir string runInstance *run errNotSupported = errors.New("not supported") decryptedToEncryptedRemotes = map[string]string{ "one": "lm4u7jjt3c85bf56vjqgeenuno", "second": "qvt1ochrkcfbptp5mu9ugb2l14", "test": "jn4tegjtpqro30t3o11thb4b5s", "test2": "qakvqnh8ttei89e0gc76crpql4", "data.bin": "0q2847tfko6mhj3dag3r809qbc", "ticw/data.bin": "5mv97b0ule6pht33srae5pice8/0q2847tfko6mhj3dag3r809qbc", "tiuufo/test/one": "vi6u1olqhirqv14cd8qlej1mgo/jn4tegjtpqro30t3o11thb4b5s/lm4u7jjt3c85bf56vjqgeenuno", "tiuufo/test/second": "vi6u1olqhirqv14cd8qlej1mgo/jn4tegjtpqro30t3o11thb4b5s/qvt1ochrkcfbptp5mu9ugb2l14", "tiutfo/test/one": "legd371aa8ol36tjfklt347qnc/jn4tegjtpqro30t3o11thb4b5s/lm4u7jjt3c85bf56vjqgeenuno", "tiutfo/second/one": "legd371aa8ol36tjfklt347qnc/qvt1ochrkcfbptp5mu9ugb2l14/lm4u7jjt3c85bf56vjqgeenuno", "second/one": "qvt1ochrkcfbptp5mu9ugb2l14/lm4u7jjt3c85bf56vjqgeenuno", "test/one": "jn4tegjtpqro30t3o11thb4b5s/lm4u7jjt3c85bf56vjqgeenuno", "test/second": "jn4tegjtpqro30t3o11thb4b5s/qvt1ochrkcfbptp5mu9ugb2l14", "one/test": "lm4u7jjt3c85bf56vjqgeenuno/jn4tegjtpqro30t3o11thb4b5s", "one/test/data.bin": "lm4u7jjt3c85bf56vjqgeenuno/jn4tegjtpqro30t3o11thb4b5s/0q2847tfko6mhj3dag3r809qbc", "second/test/data.bin": "qvt1ochrkcfbptp5mu9ugb2l14/jn4tegjtpqro30t3o11thb4b5s/0q2847tfko6mhj3dag3r809qbc", "test/third": "jn4tegjtpqro30t3o11thb4b5s/2nd7fjiop5h3ihfj1vl953aa5g", "test/0.bin": "jn4tegjtpqro30t3o11thb4b5s/e6frddt058b6kvbpmlstlndmtk", "test/1.bin": "jn4tegjtpqro30t3o11thb4b5s/kck472nt1k7qbmob0mt1p1crgc", "test/2.bin": "jn4tegjtpqro30t3o11thb4b5s/744oe9ven2rmak4u27if51qk24", "test/3.bin": "jn4tegjtpqro30t3o11thb4b5s/2bjd8kef0u5lmsu6qhqll34bcs", "test/4.bin": "jn4tegjtpqro30t3o11thb4b5s/cvjs73iv0a82v0c7r67avllh7s", "test/5.bin": "jn4tegjtpqro30t3o11thb4b5s/0plkdo790b6bnmt33qsdqmhv9c", "test/6.bin": "jn4tegjtpqro30t3o11thb4b5s/s5r633srnjtbh83893jovjt5d0", "test/7.bin": "jn4tegjtpqro30t3o11thb4b5s/6rq45tr9bjsammku622flmqsu4", "test/8.bin": "jn4tegjtpqro30t3o11thb4b5s/37bc6tcl3e31qb8cadvjb749vk", "test/9.bin": "jn4tegjtpqro30t3o11thb4b5s/t4pr35hnls32789o8fk0chk1ec", } ) func init() { goflag.StringVar(&remoteName, "remote-internal", "TestInternalCache", "Remote to test with, defaults to local filesystem") goflag.StringVar(&uploadDir, "upload-dir-internal", "", "") } // TestMain drives the tests func TestMain(m *testing.M) { goflag.Parse() var rc int log.Printf("Running with the following params: \n remote: %v", remoteName) runInstance = newRun() rc = m.Run() os.Exit(rc) } func TestInternalListRootAndInnerRemotes(t *testing.T) { id := fmt.Sprintf("tilrair%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) // Instantiate inner fs innerFolder := "inner" runInstance.mkdir(t, rootFs, innerFolder) rootFs2, boltDb2 := runInstance.newCacheFs(t, remoteName, id+"/"+innerFolder, true, true, nil, nil) defer runInstance.cleanupFs(t, rootFs2, boltDb2) runInstance.writeObjectString(t, 
rootFs2, "one", "content") listRoot, err := runInstance.list(t, rootFs, "") require.NoError(t, err) listRootInner, err := runInstance.list(t, rootFs, innerFolder) require.NoError(t, err) listInner, err := rootFs2.List(context.Background(), "") require.NoError(t, err) require.Len(t, listRoot, 1) require.Len(t, listRootInner, 1) require.Len(t, listInner, 1) } /* TODO: is this testing something? func TestInternalVfsCache(t *testing.T) { vfsflags.Opt.DirCacheTime = time.Second * 30 testSize := int64(524288000) vfsflags.Opt.CacheMode = vfs.CacheModeWrites id := "tiuufo" rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"}) defer runInstance.cleanupFs(t, rootFs, boltDb) err := rootFs.Mkdir(context.Background(), "test") require.NoError(t, err) runInstance.writeObjectString(t, rootFs, "test/second", "content") _, err = rootFs.List(context.Background(), "test") require.NoError(t, err) testReader := runInstance.randomReader(t, testSize) writeCh := make(chan interface{}) //write2Ch := make(chan interface{}) readCh := make(chan interface{}) cacheCh := make(chan interface{}) // write the main file go func() { defer func() { writeCh <- true }() log.Printf("========== started writing file 'test/one'") runInstance.writeRemoteReader(t, rootFs, "test/one", testReader) log.Printf("========== done writing file 'test/one'") }() // routine to check which cache has what, autostarts go func() { for { select { case <-cacheCh: log.Printf("========== finished checking caches") return default: } li2 := [2]string{path.Join("test", "one"), path.Join("test", "second")} for _, r := range li2 { var err error ci, err := ioutil.ReadDir(path.Join(runInstance.chunkPath, runInstance.encryptRemoteIfNeeded(t, path.Join(id, r)))) if err != nil || len(ci) == 0 { log.Printf("========== '%v' not in cache", r) } else { log.Printf("========== '%v' IN CACHE", r) } _, err = os.Stat(path.Join(runInstance.vfsCachePath, id, r)) if err != nil { log.Printf("========== '%v' not in vfs", r) } else { log.Printf("========== '%v' IN VFS", r) } } time.Sleep(time.Second * 10) } }() // routine to list, autostarts go func() { for { select { case <-readCh: log.Printf("========== finished checking listings and readings") return default: } li, err := runInstance.list(t, rootFs, "test") if err != nil { log.Printf("========== error listing 'test' folder: %v", err) } else { log.Printf("========== list 'test' folder count: %v", len(li)) } time.Sleep(time.Second * 10) } }() // wait for main file to be written <-writeCh log.Printf("========== waiting for VFS to expire") time.Sleep(time.Second * 120) // try a final read li2 := [2]string{"test/one", "test/second"} for _, r := range li2 { _, err := runInstance.readDataFromRemote(t, rootFs, r, int64(0), int64(2), false) if err != nil { log.Printf("========== error reading '%v': %v", r, err) } else { log.Printf("========== read '%v'", r) } } // close the cache and list checkers cacheCh <- true readCh <- true } */ func TestInternalObjWrapFsFound(t *testing.T) { id := fmt.Sprintf("tiowff%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) wrappedFs := cfs.UnWrap() var testData []byte if runInstance.rootIsCrypt { testData, err = base64.StdEncoding.DecodeString(cryptedTextBase64) require.NoError(t, err) } else { testData = []byte("test content") } 
runInstance.writeObjectBytes(t, wrappedFs, runInstance.encryptRemoteIfNeeded(t, "test"), testData) listRoot, err := runInstance.list(t, rootFs, "") require.NoError(t, err) require.Len(t, listRoot, 1) cachedData, err := runInstance.readDataFromRemote(t, rootFs, "test", 0, int64(len([]byte("test content"))), false) require.NoError(t, err) require.Equal(t, "test content", string(cachedData)) err = runInstance.rm(t, rootFs, "test") require.NoError(t, err) listRoot, err = runInstance.list(t, rootFs, "") require.NoError(t, err) require.Len(t, listRoot, 0) } func TestInternalObjNotFound(t *testing.T) { id := fmt.Sprintf("tionf%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) obj, err := rootFs.NewObject(context.Background(), "404") require.Error(t, err) require.Nil(t, obj) } func TestInternalCachedWrittenContentMatches(t *testing.T) { testy.SkipUnreliable(t) id := fmt.Sprintf("ticwcm%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) chunkSize := cfs.ChunkSize() // create some rand test data testData := randStringBytes(int(chunkSize*4 + chunkSize/2)) // write the object runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData) // check sample of data from in-file sampleStart := chunkSize / 2 sampleEnd := chunkSize testSample := testData[sampleStart:sampleEnd] checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", sampleStart, sampleEnd, false) require.NoError(t, err) require.Equal(t, int64(len(checkSample)), sampleEnd-sampleStart) require.Equal(t, checkSample, testSample) } func TestInternalDoubleWrittenContentMatches(t *testing.T) { id := fmt.Sprintf("tidwcm%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) // write the object runInstance.writeRemoteString(t, rootFs, "one", "one content") err := runInstance.updateData(t, rootFs, "one", "one content", " updated") require.NoError(t, err) err = runInstance.updateData(t, rootFs, "one", "one content updated", " double") require.NoError(t, err) // check sample of data from in-file data, err := runInstance.readDataFromRemote(t, rootFs, "one", int64(0), int64(len("one content updated double")), true) require.NoError(t, err) require.Equal(t, "one content updated double", string(data)) } func TestInternalCachedUpdatedContentMatches(t *testing.T) { testy.SkipUnreliable(t) id := fmt.Sprintf("ticucm%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) var err error // create some rand test data var testData1 []byte var testData2 []byte if runInstance.rootIsCrypt { testData1, err = base64.StdEncoding.DecodeString(cryptedTextBase64) require.NoError(t, err) testData2, err = base64.StdEncoding.DecodeString(cryptedText2Base64) require.NoError(t, err) } else { testData1 = []byte(random.String(100)) testData2 = []byte(random.String(200)) } // write the object o := runInstance.updateObjectRemote(t, rootFs, "data.bin", testData1, testData2) require.Equal(t, o.Size(), int64(len(testData2))) // check data from in-file checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, int64(len(testData2)), false) require.NoError(t, err) require.Equal(t, checkSample, 
testData2) } func TestInternalWrappedWrittenContentMatches(t *testing.T) { id := fmt.Sprintf("tiwwcm%v", time.Now().Unix()) vfsflags.Opt.DirCacheTime = time.Second rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) if runInstance.rootIsCrypt { t.Skip("test skipped with crypt remote") } cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) chunkSize := cfs.ChunkSize() // create some rand test data testSize := chunkSize*4 + chunkSize/2 testData := randStringBytes(int(testSize)) // write the object o := runInstance.writeObjectBytes(t, cfs.UnWrap(), "data.bin", testData) require.Equal(t, o.Size(), testSize) time.Sleep(time.Second * 3) checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, testSize, false) require.NoError(t, err) require.Equal(t, int64(len(checkSample)), o.Size()) for i := 0; i < len(checkSample); i++ { require.Equal(t, testData[i], checkSample[i]) } } func TestInternalLargeWrittenContentMatches(t *testing.T) { id := fmt.Sprintf("tilwcm%v", time.Now().Unix()) vfsflags.Opt.DirCacheTime = time.Second rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) if runInstance.rootIsCrypt { t.Skip("test skipped with crypt remote") } cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) chunkSize := cfs.ChunkSize() // create some rand test data testSize := chunkSize*10 + chunkSize/2 testData := randStringBytes(int(testSize)) // write the object runInstance.writeObjectBytes(t, cfs.UnWrap(), "data.bin", testData) time.Sleep(time.Second * 3) readData, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, testSize, false) require.NoError(t, err) for i := 0; i < len(readData); i++ { require.Equalf(t, testData[i], readData[i], "at byte %v", i) } } func TestInternalWrappedFsChangeNotSeen(t *testing.T) { id := fmt.Sprintf("tiwfcns%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) chunkSize := cfs.ChunkSize() // create some rand test data testData := randStringBytes(int(chunkSize*4 + chunkSize/2)) runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData) // update in the wrapped fs originalSize, err := runInstance.size(t, rootFs, "data.bin") require.NoError(t, err) log.Printf("original size: %v", originalSize) o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin")) require.NoError(t, err) expectedSize := int64(len([]byte("test content"))) var data2 []byte if runInstance.rootIsCrypt { data2, err = base64.StdEncoding.DecodeString(cryptedText3Base64) require.NoError(t, err) expectedSize = expectedSize + 1 // FIXME newline gets in, likely test data issue } else { data2 = []byte("test content") } objInfo := object.NewStaticObjectInfo(runInstance.encryptRemoteIfNeeded(t, "data.bin"), time.Now(), int64(len(data2)), true, nil, cfs.UnWrap()) err = o.Update(context.Background(), bytes.NewReader(data2), objInfo) require.NoError(t, err) require.Equal(t, int64(len(data2)), o.Size()) log.Printf("updated size: %v", len(data2)) // get a new instance from the cache if runInstance.wrappedIsExternal { err = runInstance.retryBlock(func() error { coSize, err := runInstance.size(t, rootFs, "data.bin") if err != nil { return err } if coSize != expectedSize { return errors.Errorf("%v <> %v", 
coSize, expectedSize) } return nil }, 12, time.Second*10) require.NoError(t, err) } else { coSize, err := runInstance.size(t, rootFs, "data.bin") require.NoError(t, err) require.NotEqual(t, coSize, expectedSize) } } func TestInternalMoveWithNotify(t *testing.T) { id := fmt.Sprintf("timwn%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) if !runInstance.wrappedIsExternal { t.Skipf("Not external") } cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) srcName := runInstance.encryptRemoteIfNeeded(t, "test") + "/" + runInstance.encryptRemoteIfNeeded(t, "one") + "/" + runInstance.encryptRemoteIfNeeded(t, "data.bin") dstName := runInstance.encryptRemoteIfNeeded(t, "test") + "/" + runInstance.encryptRemoteIfNeeded(t, "second") + "/" + runInstance.encryptRemoteIfNeeded(t, "data.bin") // create some rand test data var testData []byte if runInstance.rootIsCrypt { testData, err = base64.StdEncoding.DecodeString(cryptedTextBase64) require.NoError(t, err) } else { testData = []byte("test content") } _ = cfs.UnWrap().Mkdir(context.Background(), runInstance.encryptRemoteIfNeeded(t, "test")) _ = cfs.UnWrap().Mkdir(context.Background(), runInstance.encryptRemoteIfNeeded(t, "test/one")) _ = cfs.UnWrap().Mkdir(context.Background(), runInstance.encryptRemoteIfNeeded(t, "test/second")) srcObj := runInstance.writeObjectBytes(t, cfs.UnWrap(), srcName, testData) // list in mount _, err = runInstance.list(t, rootFs, "test") require.NoError(t, err) _, err = runInstance.list(t, rootFs, "test/one") require.NoError(t, err) // move file _, err = cfs.UnWrap().Features().Move(context.Background(), srcObj, dstName) require.NoError(t, err) err = runInstance.retryBlock(func() error { li, err := runInstance.list(t, rootFs, "test") if err != nil { log.Printf("err: %v", err) return err } if len(li) != 2 { log.Printf("not expected listing /test: %v", li) return errors.Errorf("not expected listing /test: %v", li) } li, err = runInstance.list(t, rootFs, "test/one") if err != nil { log.Printf("err: %v", err) return err } if len(li) != 0 { log.Printf("not expected listing /test/one: %v", li) return errors.Errorf("not expected listing /test/one: %v", li) } li, err = runInstance.list(t, rootFs, "test/second") if err != nil { log.Printf("err: %v", err) return err } if len(li) != 1 { log.Printf("not expected listing /test/second: %v", li) return errors.Errorf("not expected listing /test/second: %v", li) } if fi, ok := li[0].(os.FileInfo); ok { if fi.Name() != "data.bin" { log.Printf("not expected name: %v", fi.Name()) return errors.Errorf("not expected name: %v", fi.Name()) } } else if di, ok := li[0].(fs.DirEntry); ok { if di.Remote() != "test/second/data.bin" { log.Printf("not expected remote: %v", di.Remote()) return errors.Errorf("not expected remote: %v", di.Remote()) } } else { log.Printf("unexpected listing: %v", li) return errors.Errorf("unexpected listing: %v", li) } log.Printf("complete listing: %v", li) return nil }, 12, time.Second*10) require.NoError(t, err) } func TestInternalNotifyCreatesEmptyParts(t *testing.T) { id := fmt.Sprintf("tincep%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) if !runInstance.wrappedIsExternal { t.Skipf("Not external") } cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) srcName := runInstance.encryptRemoteIfNeeded(t, "test") + "/" + 
runInstance.encryptRemoteIfNeeded(t, "one") + "/" + runInstance.encryptRemoteIfNeeded(t, "test") dstName := runInstance.encryptRemoteIfNeeded(t, "test") + "/" + runInstance.encryptRemoteIfNeeded(t, "one") + "/" + runInstance.encryptRemoteIfNeeded(t, "test2") // create some rand test data var testData []byte if runInstance.rootIsCrypt { testData, err = base64.StdEncoding.DecodeString(cryptedTextBase64) require.NoError(t, err) } else { testData = []byte("test content") } err = rootFs.Mkdir(context.Background(), "test") require.NoError(t, err) err = rootFs.Mkdir(context.Background(), "test/one") require.NoError(t, err) srcObj := runInstance.writeObjectBytes(t, cfs.UnWrap(), srcName, testData) // list in mount _, err = runInstance.list(t, rootFs, "test") require.NoError(t, err) _, err = runInstance.list(t, rootFs, "test/one") require.NoError(t, err) found := boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"))) require.True(t, found) boltDb.Purge() found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"))) require.False(t, found) // move file _, err = cfs.UnWrap().Features().Move(context.Background(), srcObj, dstName) require.NoError(t, err) err = runInstance.retryBlock(func() error { found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"))) if !found { log.Printf("not found /test") return errors.Errorf("not found /test") } found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"))) if !found { log.Printf("not found /test/one") return errors.Errorf("not found /test/one") } found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"), runInstance.encryptRemoteIfNeeded(t, "test2"))) if !found { log.Printf("not found /test/one/test2") return errors.Errorf("not found /test/one/test2") } li, err := runInstance.list(t, rootFs, "test/one") if err != nil { log.Printf("err: %v", err) return err } if len(li) != 1 { log.Printf("not expected listing /test/one: %v", li) return errors.Errorf("not expected listing /test/one: %v", li) } if fi, ok := li[0].(os.FileInfo); ok { if fi.Name() != "test2" { log.Printf("not expected name: %v", fi.Name()) return errors.Errorf("not expected name: %v", fi.Name()) } } else if di, ok := li[0].(fs.DirEntry); ok { if di.Remote() != "test/one/test2" { log.Printf("not expected remote: %v", di.Remote()) return errors.Errorf("not expected remote: %v", di.Remote()) } } else { log.Printf("unexpected listing: %v", li) return errors.Errorf("unexpected listing: %v", li) } log.Printf("complete listing /test/one/test2") return nil }, 12, time.Second*10) require.NoError(t, err) } func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) { id := fmt.Sprintf("ticsadcf%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) chunkSize := cfs.ChunkSize() // create some rand test data testData := randStringBytes(int(chunkSize*4 + chunkSize/2)) runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData) // update in the wrapped fs o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin")) require.NoError(t, err) wrappedTime := time.Now().Add(-1 * time.Hour) err = o.SetModTime(context.Background(), wrappedTime) require.NoError(t, err) 
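// The cache is expected to keep serving the stale ModTime until the directory cache is flushed below; only after DirCacheFlush should the wrapped remote's new time show through.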
// get a new instance from the cache co, err := rootFs.NewObject(context.Background(), "data.bin") require.NoError(t, err) require.NotEqual(t, o.ModTime(context.Background()).String(), co.ModTime(context.Background()).String()) cfs.DirCacheFlush() // flush the cache // get a new instance from the cache co, err = rootFs.NewObject(context.Background(), "data.bin") require.NoError(t, err) require.Equal(t, wrappedTime.Unix(), co.ModTime(context.Background()).Unix()) } func TestInternalCacheWrites(t *testing.T) { id := "ticw" rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"writes": "true"}) defer runInstance.cleanupFs(t, rootFs, boltDb) cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) chunkSize := cfs.ChunkSize() // create some rand test data earliestTime := time.Now() testData := randStringBytes(int(chunkSize*4 + chunkSize/2)) runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData) expectedTs := time.Now() ts, err := boltDb.GetChunkTs(runInstance.encryptRemoteIfNeeded(t, path.Join(rootFs.Root(), "data.bin")), 0) require.NoError(t, err) require.WithinDuration(t, expectedTs, ts, expectedTs.Sub(earliestTime)) } func TestInternalMaxChunkSizeRespected(t *testing.T) { id := fmt.Sprintf("timcsr%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"workers": "1"}) defer runInstance.cleanupFs(t, rootFs, boltDb) cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) chunkSize := cfs.ChunkSize() totalChunks := 20 // create some rand test data testData := randStringBytes(int(int64(totalChunks-1)*chunkSize + chunkSize/2)) runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData) o, err := cfs.NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin")) require.NoError(t, err) co, ok := o.(*cache.Object) require.True(t, ok) for i := 0; i < 4; i++ { // read first 4 _ = runInstance.readDataFromObj(t, co, chunkSize*int64(i), chunkSize*int64(i+1), false) } cfs.CleanUpCache(true) // the last 2 **must** be in the cache require.True(t, boltDb.HasChunk(co, chunkSize*2)) require.True(t, boltDb.HasChunk(co, chunkSize*3)) for i := 4; i < 6; i++ { // read next 2 _ = runInstance.readDataFromObj(t, co, chunkSize*int64(i), chunkSize*int64(i+1), false) } cfs.CleanUpCache(true) // the last 2 **must** be in the cache require.True(t, boltDb.HasChunk(co, chunkSize*4)) require.True(t, boltDb.HasChunk(co, chunkSize*5)) } func TestInternalExpiredEntriesRemoved(t *testing.T) { id := fmt.Sprintf("tieer%v", time.Now().Unix()) vfsflags.Opt.DirCacheTime = time.Second * 4 // needs to be lower than the info_age defined below rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, map[string]string{"info_age": "5s"}, nil) defer runInstance.cleanupFs(t, rootFs, boltDb) cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) // create some rand test data runInstance.writeRemoteString(t, rootFs, "one", "one content") runInstance.mkdir(t, rootFs, "test") runInstance.writeRemoteString(t, rootFs, "test/second", "second content") l, err := runInstance.list(t, rootFs, "test") require.NoError(t, err) require.Len(t, l, 1) err = cfs.UnWrap().Mkdir(context.Background(), runInstance.encryptRemoteIfNeeded(t, "test/third")) require.NoError(t, err) l, err = runInstance.list(t, rootFs, "test") require.NoError(t, err) require.Len(t, l, 1) err = runInstance.retryBlock(func() error { l, err = runInstance.list(t, rootFs, "test") if err != nil { return err } if len(l) != 2 { 
return errors.New("list is not 2") } return nil }, 10, time.Second) require.NoError(t, err) } func TestInternalBug2117(t *testing.T) { vfsflags.Opt.DirCacheTime = time.Second * 10 id := fmt.Sprintf("tib2117%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"}) defer runInstance.cleanupFs(t, rootFs, boltDb) if runInstance.rootIsCrypt { t.Skipf("skipping crypt") } cfs, err := runInstance.getCacheFs(rootFs) require.NoError(t, err) err = cfs.UnWrap().Mkdir(context.Background(), "test") require.NoError(t, err) for i := 1; i <= 4; i++ { err = cfs.UnWrap().Mkdir(context.Background(), fmt.Sprintf("test/dir%d", i)) require.NoError(t, err) for j := 1; j <= 4; j++ { err = cfs.UnWrap().Mkdir(context.Background(), fmt.Sprintf("test/dir%d/dir%d", i, j)) require.NoError(t, err) runInstance.writeObjectString(t, cfs.UnWrap(), fmt.Sprintf("test/dir%d/dir%d/test.txt", i, j), "test") } } di, err := runInstance.list(t, rootFs, "test/dir1/dir2") require.NoError(t, err) log.Printf("len: %v", len(di)) require.Len(t, di, 1) time.Sleep(time.Second * 30) di, err = runInstance.list(t, rootFs, "test/dir1/dir2") require.NoError(t, err) log.Printf("len: %v", len(di)) require.Len(t, di, 1) di, err = runInstance.list(t, rootFs, "test/dir1") require.NoError(t, err) log.Printf("len: %v", len(di)) require.Len(t, di, 4) di, err = runInstance.list(t, rootFs, "test") require.NoError(t, err) log.Printf("len: %v", len(di)) require.Len(t, di, 4) } // run holds the remotes for a test run type run struct { okDiff time.Duration runDefaultCfgMap configmap.Simple tmpUploadDir string rootIsCrypt bool wrappedIsExternal bool tempFiles []*os.File dbPath string chunkPath string vfsCachePath string } func newRun() *run { var err error r := &run{ okDiff: time.Second * 9, // really big diff here but the build machines seem to be slow. 
need a different way for this } // Read in all the defaults for all the options fsInfo, err := fs.Find("cache") if err != nil { panic(fmt.Sprintf("Couldn't find cache remote: %v", err)) } r.runDefaultCfgMap = configmap.Simple{} for _, option := range fsInfo.Options { r.runDefaultCfgMap.Set(option.Name, fmt.Sprint(option.Default)) } if uploadDir == "" { r.tmpUploadDir, err = ioutil.TempDir("", "rclonecache-tmp") if err != nil { log.Fatalf("Failed to create temp dir: %v", err) } } else { r.tmpUploadDir = uploadDir } log.Printf("Temp Upload Dir: %v", r.tmpUploadDir) return r } func (r *run) encryptRemoteIfNeeded(t *testing.T, remote string) string { if !runInstance.rootIsCrypt || len(decryptedToEncryptedRemotes) == 0 { return remote } enc, ok := decryptedToEncryptedRemotes[remote] if !ok { t.Fatalf("Failed to find decrypted -> encrypted mapping for '%v'", remote) return remote } return enc } func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, cfg map[string]string, flags map[string]string) (fs.Fs, *cache.Persistent) { fstest.Initialise() remoteExists := false for _, s := range config.FileSections() { if s == remote { remoteExists = true } } if !remoteExists && needRemote { t.Skipf("Need remote (%v) to exist", remote) return nil, nil } // Config to pass to NewFs m := configmap.Simple{} for k, v := range r.runDefaultCfgMap { m.Set(k, v) } for k, v := range flags { m.Set(k, v) } // if the remote doesn't exist, create a new one with a local one for it // identify which is the cache remote (it can be wrapped by a crypt too) rootIsCrypt := false cacheRemote := remote if !remoteExists { localRemote := remote + "-local" config.FileSet(localRemote, "type", "local") config.FileSet(localRemote, "nounc", "true") m.Set("type", "cache") m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote)) } else { remoteType := config.FileGet(remote, "type", "") if remoteType == "" { t.Skipf("skipped due to invalid remote type for %v", remote) return nil, nil } if remoteType != "cache" { if remoteType == "crypt" { rootIsCrypt = true m.Set("password", cryptPassword1) m.Set("password2", cryptPassword2) } remoteRemote := config.FileGet(remote, "remote", "") if remoteRemote == "" { t.Skipf("skipped due to invalid remote wrapper for %v", remote) return nil, nil } remoteRemoteParts := strings.Split(remoteRemote, ":") remoteWrapping := remoteRemoteParts[0] remoteType := config.FileGet(remoteWrapping, "type", "") if remoteType != "cache" { t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType) return nil, nil } cacheRemote = remoteWrapping } } runInstance.rootIsCrypt = rootIsCrypt runInstance.dbPath = filepath.Join(config.CacheDir, "cache-backend", cacheRemote+".db") runInstance.chunkPath = filepath.Join(config.CacheDir, "cache-backend", cacheRemote) runInstance.vfsCachePath = filepath.Join(config.CacheDir, "vfs", remote) boltDb, err := cache.GetPersistent(runInstance.dbPath, runInstance.chunkPath, &cache.Features{PurgeDb: true}) require.NoError(t, err) fs.Config.LowLevelRetries = 1 // Instantiate root if purge { boltDb.PurgeTempUploads() _ = os.RemoveAll(path.Join(runInstance.tmpUploadDir, id)) } f, err := cache.NewFs(remote, id, m) require.NoError(t, err) cfs, err := r.getCacheFs(f) require.NoError(t, err) _, isCache := cfs.Features().UnWrap().(*cache.Fs) _, isCrypt := cfs.Features().UnWrap().(*crypt.Fs) _, isLocal := cfs.Features().UnWrap().(*local.Fs) if isCache || isCrypt || isLocal { r.wrappedIsExternal = false } else { r.wrappedIsExternal = 
true } if purge { _ = f.Features().Purge(context.Background(), "") require.NoError(t, err) } err = f.Mkdir(context.Background(), "") require.NoError(t, err) return f, boltDb } func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) { err := f.Features().Purge(context.Background(), "") require.NoError(t, err) cfs, err := r.getCacheFs(f) require.NoError(t, err) cfs.StopBackgroundRunners() err = os.RemoveAll(r.tmpUploadDir) require.NoError(t, err) for _, f := range r.tempFiles { _ = f.Close() _ = os.Remove(f.Name()) } r.tempFiles = nil debug.FreeOSMemory() } func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser { chunk := int64(1024) cnt := size / chunk left := size % chunk f, err := ioutil.TempFile("", "rclonecache-tempfile") require.NoError(t, err) for i := 0; i < int(cnt); i++ { data := randStringBytes(int(chunk)) _, _ = f.Write(data) } data := randStringBytes(int(left)) _, _ = f.Write(data) _, _ = f.Seek(int64(0), io.SeekStart) r.tempFiles = append(r.tempFiles, f) return f } func (r *run) writeRemoteString(t *testing.T, f fs.Fs, remote, content string) { r.writeRemoteBytes(t, f, remote, []byte(content)) } func (r *run) writeObjectString(t *testing.T, f fs.Fs, remote, content string) fs.Object { return r.writeObjectBytes(t, f, remote, []byte(content)) } func (r *run) writeRemoteBytes(t *testing.T, f fs.Fs, remote string, data []byte) { r.writeObjectBytes(t, f, remote, data) } func (r *run) writeRemoteReader(t *testing.T, f fs.Fs, remote string, in io.ReadCloser) { r.writeObjectReader(t, f, remote, in) } func (r *run) writeObjectBytes(t *testing.T, f fs.Fs, remote string, data []byte) fs.Object { in := bytes.NewReader(data) _ = r.writeObjectReader(t, f, remote, in) o, err := f.NewObject(context.Background(), remote) require.NoError(t, err) require.Equal(t, int64(len(data)), o.Size()) return o } func (r *run) writeObjectReader(t *testing.T, f fs.Fs, remote string, in io.Reader) fs.Object { modTime := time.Now() objInfo := object.NewStaticObjectInfo(remote, modTime, -1, true, nil, f) obj, err := f.Put(context.Background(), in, objInfo) require.NoError(t, err) return obj } func (r *run) updateObjectRemote(t *testing.T, f fs.Fs, remote string, data1 []byte, data2 []byte) fs.Object { var err error var obj fs.Object in1 := bytes.NewReader(data1) in2 := bytes.NewReader(data2) objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f) objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f) obj, err = f.Put(context.Background(), in1, objInfo1) require.NoError(t, err) obj, err = f.NewObject(context.Background(), remote) require.NoError(t, err) err = obj.Update(context.Background(), in2, objInfo2) require.NoError(t, err) return obj } func (r *run) readDataFromRemote(t *testing.T, f fs.Fs, remote string, offset, end int64, noLengthCheck bool) ([]byte, error) { size := end - offset checkSample := make([]byte, size) co, err := f.NewObject(context.Background(), remote) if err != nil { return checkSample, err } checkSample = r.readDataFromObj(t, co, offset, end, noLengthCheck) if !noLengthCheck && size != int64(len(checkSample)) { return checkSample, errors.Errorf("read size doesn't match expected: %v <> %v", len(checkSample), size) } return checkSample, nil } func (r *run) readDataFromObj(t *testing.T, o fs.Object, offset, end int64, noLengthCheck bool) []byte { size := end - offset checkSample := make([]byte, size) reader, err := o.Open(context.Background(), &fs.SeekOption{Offset: offset}) require.NoError(t, 
err) totalRead, err := io.ReadFull(reader, checkSample) if (err == io.EOF || err == io.ErrUnexpectedEOF) && noLengthCheck { err = nil checkSample = checkSample[:totalRead] } require.NoError(t, err, "with string -%v-", string(checkSample)) _ = reader.Close() return checkSample } func (r *run) mkdir(t *testing.T, f fs.Fs, remote string) { err := f.Mkdir(context.Background(), remote) require.NoError(t, err) } func (r *run) rm(t *testing.T, f fs.Fs, remote string) error { var err error var obj fs.Object obj, err = f.NewObject(context.Background(), remote) if err != nil { err = f.Rmdir(context.Background(), remote) } else { err = obj.Remove(context.Background()) } return err } func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error) { var err error var l []interface{} var list fs.DirEntries list, err = f.List(context.Background(), remote) for _, ll := range list { l = append(l, ll) } return l, err } func (r *run) copyFile(t *testing.T, f fs.Fs, src, dst string) error { in, err := os.Open(src) if err != nil { return err } defer func() { _ = in.Close() }() out, err := os.Create(dst) if err != nil { return err } defer func() { _ = out.Close() }() _, err = io.Copy(out, in) return err } func (r *run) dirMove(t *testing.T, rootFs fs.Fs, src, dst string) error { var err error if rootFs.Features().DirMove != nil { err = rootFs.Features().DirMove(context.Background(), rootFs, src, dst) if err != nil { return err } } else { t.Logf("DirMove not supported by %v", rootFs) return errNotSupported } return err } func (r *run) move(t *testing.T, rootFs fs.Fs, src, dst string) error { var err error if rootFs.Features().Move != nil { obj1, err := rootFs.NewObject(context.Background(), src) if err != nil { return err } _, err = rootFs.Features().Move(context.Background(), obj1, dst) if err != nil { return err } } else { t.Logf("Move not supported by %v", rootFs) return errNotSupported } return err } func (r *run) copy(t *testing.T, rootFs fs.Fs, src, dst string) error { var err error if rootFs.Features().Copy != nil { obj, err := rootFs.NewObject(context.Background(), src) if err != nil { return err } _, err = rootFs.Features().Copy(context.Background(), obj, dst) if err != nil { return err } } else { t.Logf("Copy not supported by %v", rootFs) return errNotSupported } return err } func (r *run) modTime(t *testing.T, rootFs fs.Fs, src string) (time.Time, error) { var err error obj1, err := rootFs.NewObject(context.Background(), src) if err != nil { return time.Time{}, err } return obj1.ModTime(context.Background()), nil } func (r *run) size(t *testing.T, rootFs fs.Fs, src string) (int64, error) { var err error obj1, err := rootFs.NewObject(context.Background(), src) if err != nil { return int64(0), err } return obj1.Size(), nil } func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) error { var err error var obj1 fs.Object obj1, err = rootFs.NewObject(context.Background(), src) if err != nil { return err } data1 := []byte(data + append) reader := bytes.NewReader(data1) objInfo1 := object.NewStaticObjectInfo(src, time.Now(), int64(len(data1)), true, nil, rootFs) err = obj1.Update(context.Background(), reader, objInfo1) return err } func (r *run) cleanSize(t *testing.T, size int64) int64 { if r.rootIsCrypt { denominator := int64(65536 + 16) size = size - 32 quotient := size / denominator remainder := size % denominator return (quotient*65536 + remainder - 16) } return size } func (r *run) listenForBackgroundUpload(t *testing.T, f fs.Fs, remote string) chan error { cfs, 
err := r.getCacheFs(f) require.NoError(t, err) buCh := cfs.GetBackgroundUploadChannel() require.NotNil(t, buCh) maxDuration := time.Minute * 3 if r.wrappedIsExternal { maxDuration = time.Minute * 10 } waitCh := make(chan error) go func() { var err error var state cache.BackgroundUploadState for i := 0; i < 2; i++ { select { case state = <-buCh: // continue case <-time.After(maxDuration): waitCh <- errors.Errorf("Timed out waiting for background upload: %v", remote) return } checkRemote := state.Remote if r.rootIsCrypt { cryptFs := f.(*crypt.Fs) checkRemote, err = cryptFs.DecryptFileName(checkRemote) if err != nil { waitCh <- err return } } if checkRemote == remote && cache.BackgroundUploadStarted != state.Status { waitCh <- state.Error return } } waitCh <- errors.Errorf("Too many attempts to wait for the background upload: %v", remote) }() return waitCh } func (r *run) completeBackgroundUpload(t *testing.T, remote string, waitCh chan error) { var err error maxDuration := time.Minute * 3 if r.wrappedIsExternal { maxDuration = time.Minute * 10 } select { case err = <-waitCh: // continue case <-time.After(maxDuration): t.Fatalf("Timed out waiting to complete the background upload %v", remote) return } require.NoError(t, err) } func (r *run) completeAllBackgroundUploads(t *testing.T, f fs.Fs, lastRemote string) { var state cache.BackgroundUploadState var err error maxDuration := time.Minute * 5 if r.wrappedIsExternal { maxDuration = time.Minute * 15 } cfs, err := r.getCacheFs(f) require.NoError(t, err) buCh := cfs.GetBackgroundUploadChannel() require.NotNil(t, buCh) for { select { case state = <-buCh: checkRemote := state.Remote if r.rootIsCrypt { cryptFs := f.(*crypt.Fs) checkRemote, err = cryptFs.DecryptFileName(checkRemote) require.NoError(t, err) } if checkRemote == lastRemote && cache.BackgroundUploadCompleted == state.Status { require.NoError(t, state.Error) return } case <-time.After(maxDuration): t.Fatalf("Timed out waiting to complete the background upload %v", lastRemote) return } } } func (r *run) retryBlock(block func() error, maxRetries int, rate time.Duration) error { var err error for i := 0; i < maxRetries; i++ { err = block() if err == nil { return nil } time.Sleep(rate) } return err } func (r *run) getCacheFs(f fs.Fs) (*cache.Fs, error) { cfs, ok := f.(*cache.Fs) if ok { return cfs, nil } if f.Features().UnWrap != nil { cfs, ok := f.Features().UnWrap().(*cache.Fs) if ok { return cfs, nil } } return nil, errors.New("didn't find a cache fs") } func randStringBytes(n int) []byte { b := make([]byte, n) for i := range b { b[i] = letterBytes[rand.Intn(len(letterBytes))] } return b } var ( _ fs.Fs = (*cache.Fs)(nil) _ fs.Fs = (*local.Fs)(nil) ) rclone-1.53.3/backend/cache/cache_test.go000066400000000000000000000013051375552240400201570ustar00rootroot00000000000000// Test Cache filesystem interface // +build !plan9,!js // +build !race package cache_test import ( "testing" "github.com/rclone/rclone/backend/cache" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestCache:", NilObject: (*cache.Object)(nil), UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt"}, UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier"}, SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache }) } 
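// A minimal usage sketch (not part of the upstream suite): the internal tests
// in cache_internal_test.go accept a custom remote and upload dir through the
// flags they register there, e.g.
//
//	go test ./backend/cache -run TestInternal \
//	    -remote-internal TestCache -upload-dir-internal /tmp/rclone-uploads
//
// where "TestCache" must already exist as a section in the rclone config; the
// remote name and the /tmp path above are illustrative assumptions only.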
rclone-1.53.3/backend/cache/cache_unsupported.go000066400000000000000000000002201375552240400215620ustar00rootroot00000000000000// Build for cache for unsupported platforms to stop go complaining // about "no buildable Go source files " // +build plan9 js package cache rclone-1.53.3/backend/cache/cache_upload_test.go000066400000000000000000000435171375552240400215350ustar00rootroot00000000000000// +build !plan9,!js // +build !race package cache_test import ( "context" "fmt" "math/rand" "os" "path" "strconv" "testing" "time" "github.com/rclone/rclone/backend/cache" _ "github.com/rclone/rclone/backend/drive" "github.com/rclone/rclone/fs" "github.com/stretchr/testify/require" ) func TestInternalUploadTempDirCreated(t *testing.T) { id := fmt.Sprintf("tiutdc%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id)}) defer runInstance.cleanupFs(t, rootFs, boltDb) _, err := os.Stat(path.Join(runInstance.tmpUploadDir, id)) require.NoError(t, err) } func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltDb *cache.Persistent) { // create some rand test data testSize := int64(524288000) testReader := runInstance.randomReader(t, testSize) bu := runInstance.listenForBackgroundUpload(t, rootFs, "one") runInstance.writeRemoteReader(t, rootFs, "one", testReader) // validate that it exists in temp fs ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one"))) require.NoError(t, err) if runInstance.rootIsCrypt { require.Equal(t, int64(524416032), ti.Size()) } else { require.Equal(t, testSize, ti.Size()) } de1, err := runInstance.list(t, rootFs, "") require.NoError(t, err) require.Len(t, de1, 1) runInstance.completeBackgroundUpload(t, "one", bu) // check if it was removed from temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one"))) require.True(t, os.IsNotExist(err)) // check if it can be read data2, err := runInstance.readDataFromRemote(t, rootFs, "one", 0, int64(1024), false) require.NoError(t, err) require.Len(t, data2, 1024) } func TestInternalUploadQueueOneFileNoRest(t *testing.T) { id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "0s"}) defer runInstance.cleanupFs(t, rootFs, boltDb) testInternalUploadQueueOneFile(t, id, rootFs, boltDb) } func TestInternalUploadQueueOneFileWithRest(t *testing.T) { id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1m"}) defer runInstance.cleanupFs(t, rootFs, boltDb) testInternalUploadQueueOneFile(t, id, rootFs, boltDb) } func TestInternalUploadMoveExistingFile(t *testing.T) { id := fmt.Sprintf("tiumef%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "3s"}) defer runInstance.cleanupFs(t, rootFs, boltDb) err := rootFs.Mkdir(context.Background(), "one") require.NoError(t, err) err = rootFs.Mkdir(context.Background(), "one/test") require.NoError(t, err) err = rootFs.Mkdir(context.Background(), "second") require.NoError(t, err) // create some rand 
test data testSize := int64(10485760) testReader := runInstance.randomReader(t, testSize) runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader) runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin") de1, err := runInstance.list(t, rootFs, "one/test") require.NoError(t, err) require.Len(t, de1, 1) time.Sleep(time.Second * 5) //_ = os.Remove(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test"))) //require.NoError(t, err) err = runInstance.dirMove(t, rootFs, "one/test", "second/test") require.NoError(t, err) // check if it can be read de1, err = runInstance.list(t, rootFs, "second/test") require.NoError(t, err) require.Len(t, de1, 1) } func TestInternalUploadTempPathCleaned(t *testing.T) { id := fmt.Sprintf("tiutpc%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"}) defer runInstance.cleanupFs(t, rootFs, boltDb) err := rootFs.Mkdir(context.Background(), "one") require.NoError(t, err) err = rootFs.Mkdir(context.Background(), "one/test") require.NoError(t, err) err = rootFs.Mkdir(context.Background(), "second") require.NoError(t, err) // create some rand test data testSize := int64(1048576) testReader := runInstance.randomReader(t, testSize) testReader2 := runInstance.randomReader(t, testSize) runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader) runInstance.writeObjectReader(t, rootFs, "second/data.bin", testReader2) runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin") _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test"))) require.True(t, os.IsNotExist(err)) _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one"))) require.True(t, os.IsNotExist(err)) _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second"))) require.False(t, os.IsNotExist(err)) runInstance.completeAllBackgroundUploads(t, rootFs, "second/data.bin") _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/data.bin"))) require.True(t, os.IsNotExist(err)) de1, err := runInstance.list(t, rootFs, "one/test") require.NoError(t, err) require.Len(t, de1, 1) // check if it can be read de1, err = runInstance.list(t, rootFs, "second") require.NoError(t, err) require.Len(t, de1, 1) } func TestInternalUploadQueueMoreFiles(t *testing.T) { id := fmt.Sprintf("tiuqmf%v", time.Now().Unix()) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1s"}) defer runInstance.cleanupFs(t, rootFs, boltDb) err := rootFs.Mkdir(context.Background(), "test") require.NoError(t, err) minSize := 5242880 maxSize := 10485760 totalFiles := 10 rand.Seed(time.Now().Unix()) lastFile := "" for i := 0; i < totalFiles; i++ { size := int64(rand.Intn(maxSize-minSize) + minSize) testReader := runInstance.randomReader(t, size) remote := "test/" + strconv.Itoa(i) + ".bin" runInstance.writeRemoteReader(t, rootFs, remote, testReader) // validate that it exists in temp fs ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, remote))) require.NoError(t, err) require.Equal(t, size, runInstance.cleanSize(t, ti.Size())) if runInstance.wrappedIsExternal && i < 
totalFiles-1 { time.Sleep(time.Second * 3) } lastFile = remote } // check if cache lists all files, likely temp upload didn't finish yet de1, err := runInstance.list(t, rootFs, "test") require.NoError(t, err) require.Len(t, de1, totalFiles) // wait for background uploader to do its thing runInstance.completeAllBackgroundUploads(t, rootFs, lastFile) // retry until we have no more temp files and fail if they don't go down to 0 _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test"))) require.True(t, os.IsNotExist(err)) // check if cache lists all files de1, err = runInstance.list(t, rootFs, "test") require.NoError(t, err) require.Len(t, de1, totalFiles) } func TestInternalUploadTempFileOperations(t *testing.T) { id := "tiutfo" rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"}) defer runInstance.cleanupFs(t, rootFs, boltDb) boltDb.PurgeTempUploads() // create some rand test data runInstance.mkdir(t, rootFs, "test") runInstance.writeRemoteString(t, rootFs, "test/one", "one content") // check if it can be read data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false) require.NoError(t, err) require.Equal(t, []byte("one content"), data1) // validate that it exists in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.NoError(t, err) // test DirMove - allowed err = runInstance.dirMove(t, rootFs, "test", "second") if err != errNotSupported { require.NoError(t, err) _, err = rootFs.NewObject(context.Background(), "test/one") require.Error(t, err) _, err = rootFs.NewObject(context.Background(), "second/one") require.NoError(t, err) // validate that it exists in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.Error(t, err) _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one"))) require.NoError(t, err) _, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one"))) require.Error(t, err) var started bool started, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "second/one"))) require.NoError(t, err) require.False(t, started) runInstance.mkdir(t, rootFs, "test") runInstance.writeRemoteString(t, rootFs, "test/one", "one content") } // test Rmdir - allowed err = runInstance.rm(t, rootFs, "test") require.Error(t, err) require.Contains(t, err.Error(), "directory not empty") _, err = rootFs.NewObject(context.Background(), "test/one") require.NoError(t, err) // validate that it exists in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.NoError(t, err) started, err := boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one"))) require.False(t, started) require.NoError(t, err) // test Move/Rename -- allowed err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second")) if err != errNotSupported { require.NoError(t, err) // try to read from it _, err = rootFs.NewObject(context.Background(), "test/one") require.Error(t, err) _, err = rootFs.NewObject(context.Background(), "test/second") require.NoError(t, err) data2, err := runInstance.readDataFromRemote(t, rootFs, "test/second", 0, 
int64(len([]byte("one content"))), false) require.NoError(t, err) require.Equal(t, []byte("one content"), data2) // validate that it exists in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.Error(t, err) _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second"))) require.NoError(t, err) runInstance.writeRemoteString(t, rootFs, "test/one", "one content") } // test Copy -- allowed err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third")) if err != errNotSupported { require.NoError(t, err) _, err = rootFs.NewObject(context.Background(), "test/one") require.NoError(t, err) _, err = rootFs.NewObject(context.Background(), "test/third") require.NoError(t, err) data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false) require.NoError(t, err) require.Equal(t, []byte("one content"), data2) // validate that it exists in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.NoError(t, err) _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third"))) require.NoError(t, err) } // test Remove -- allowed err = runInstance.rm(t, rootFs, "test/one") require.NoError(t, err) _, err = rootFs.NewObject(context.Background(), "test/one") require.Error(t, err) // validate that it doesn't exist in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.Error(t, err) runInstance.writeRemoteString(t, rootFs, "test/one", "one content") // test Update -- allowed firstModTime, err := runInstance.modTime(t, rootFs, "test/one") require.NoError(t, err) err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated") require.NoError(t, err) obj2, err := rootFs.NewObject(context.Background(), "test/one") require.NoError(t, err) data2 := runInstance.readDataFromObj(t, obj2, 0, int64(len("one content updated")), false) require.Equal(t, "one content updated", string(data2)) tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.NoError(t, err) if runInstance.rootIsCrypt { require.Equal(t, int64(67), tmpInfo.Size()) } else { require.Equal(t, int64(len(data2)), tmpInfo.Size()) } // test SetModTime -- allowed secondModTime, err := runInstance.modTime(t, rootFs, "test/one") require.NoError(t, err) require.NotEqual(t, secondModTime, firstModTime) require.NotEqual(t, time.Time{}, firstModTime) require.NotEqual(t, time.Time{}, secondModTime) } func TestInternalUploadUploadingFileOperations(t *testing.T) { id := "tiuufo" rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"}) defer runInstance.cleanupFs(t, rootFs, boltDb) boltDb.PurgeTempUploads() // create some rand test data runInstance.mkdir(t, rootFs, "test") runInstance.writeRemoteString(t, rootFs, "test/one", "one content") // check if it can be read data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false) require.NoError(t, err) require.Equal(t, []byte("one content"), data1) // validate that it exists in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) 
require.NoError(t, err) err = boltDb.SetPendingUploadToStarted(runInstance.encryptRemoteIfNeeded(t, path.Join(rootFs.Root(), "test/one"))) require.NoError(t, err) // test DirMove err = runInstance.dirMove(t, rootFs, "test", "second") if err != errNotSupported { require.Error(t, err) _, err = rootFs.NewObject(context.Background(), "test/one") require.NoError(t, err) // validate that it exists in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.NoError(t, err) _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one"))) require.Error(t, err) } // test Rmdir err = runInstance.rm(t, rootFs, "test") require.Error(t, err) _, err = rootFs.NewObject(context.Background(), "test/one") require.NoError(t, err) // validate that it doesn't exist in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.NoError(t, err) // test Move/Rename err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second")) if err != errNotSupported { require.Error(t, err) // try to read from it _, err = rootFs.NewObject(context.Background(), "test/one") require.NoError(t, err) _, err = rootFs.NewObject(context.Background(), "test/second") require.Error(t, err) // validate that it exists in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.NoError(t, err) _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second"))) require.Error(t, err) } // test Copy -- allowed err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third")) if err != errNotSupported { require.NoError(t, err) _, err = rootFs.NewObject(context.Background(), "test/one") require.NoError(t, err) _, err = rootFs.NewObject(context.Background(), "test/third") require.NoError(t, err) data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false) require.NoError(t, err) require.Equal(t, []byte("one content"), data2) // validate that it exists in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.NoError(t, err) _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third"))) require.NoError(t, err) } // test Remove err = runInstance.rm(t, rootFs, "test/one") require.Error(t, err) _, err = rootFs.NewObject(context.Background(), "test/one") require.NoError(t, err) // validate that it doesn't exist in temp fs _, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) require.NoError(t, err) runInstance.writeRemoteString(t, rootFs, "test/one", "one content") // test Update - this seems to work. Why? 
FIXME //firstModTime, err := runInstance.modTime(t, rootFs, "test/one") //require.NoError(t, err) //err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated", func() { // data2 := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len("one content updated")), true) // require.Equal(t, "one content", string(data2)) // // tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one"))) // require.NoError(t, err) // if runInstance.rootIsCrypt { // require.Equal(t, int64(67), tmpInfo.Size()) // } else { // require.Equal(t, int64(len(data2)), tmpInfo.Size()) // } //}) //require.Error(t, err) // test SetModTime -- seems to work cause of previous //secondModTime, err := runInstance.modTime(t, rootFs, "test/one") //require.NoError(t, err) //require.Equal(t, secondModTime, firstModTime) //require.NotEqual(t, time.Time{}, firstModTime) //require.NotEqual(t, time.Time{}, secondModTime) } rclone-1.53.3/backend/cache/directory.go000066400000000000000000000060471375552240400200700ustar00rootroot00000000000000// +build !plan9,!js package cache import ( "context" "path" "time" "github.com/rclone/rclone/fs" ) // Directory is a generic dir that stores basic information about it type Directory struct { Directory fs.Directory `json:"-"` // can be nil CacheFs *Fs `json:"-"` // cache fs Name string `json:"name"` // name of the directory Dir string `json:"dir"` // abs path of the directory CacheModTime int64 `json:"modTime"` // modification or creation time - IsZero for unknown CacheSize int64 `json:"size"` // size of directory and contents or -1 if unknown CacheItems int64 `json:"items"` // number of objects or -1 for unknown CacheType string `json:"cacheType"` // object type CacheTs *time.Time `json:",omitempty"` } // NewDirectory builds an empty dir which will be used to unmarshal data in it func NewDirectory(f *Fs, remote string) *Directory { cd := ShallowDirectory(f, remote) t := time.Now() cd.CacheTs = &t return cd } // ShallowDirectory builds an empty dir which will be used to unmarshal data in it func ShallowDirectory(f *Fs, remote string) *Directory { var cd *Directory fullRemote := cleanPath(path.Join(f.Root(), remote)) // build a new one dir := cleanPath(path.Dir(fullRemote)) name := cleanPath(path.Base(fullRemote)) cd = &Directory{ CacheFs: f, Name: name, Dir: dir, CacheModTime: time.Now().UnixNano(), CacheSize: 0, CacheItems: 0, CacheType: "Directory", } return cd } // DirectoryFromOriginal builds one from a generic fs.Directory func DirectoryFromOriginal(ctx context.Context, f *Fs, d fs.Directory) *Directory { var cd *Directory fullRemote := path.Join(f.Root(), d.Remote()) dir := cleanPath(path.Dir(fullRemote)) name := cleanPath(path.Base(fullRemote)) t := time.Now() cd = &Directory{ Directory: d, CacheFs: f, Name: name, Dir: dir, CacheModTime: d.ModTime(ctx).UnixNano(), CacheSize: d.Size(), CacheItems: d.Items(), CacheType: "Directory", CacheTs: &t, } return cd } // Fs returns its FS info func (d *Directory) Fs() fs.Info { return d.CacheFs } // String returns a human friendly name for this object func (d *Directory) String() string { if d == nil { return "" } return d.Remote() } // Remote returns the remote path func (d *Directory) Remote() string { return d.CacheFs.cleanRootFromPath(d.abs()) } // abs returns the absolute path to the dir func (d *Directory) abs() string { return cleanPath(path.Join(d.Dir, d.Name)) } // ModTime returns the cached ModTime func (d *Directory) ModTime(ctx context.Context) time.Time { 
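// CacheModTime is stored as nanoseconds since the Unix epoch (see DirectoryFromOriginal above), so the time.Time is rebuilt from it here.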
return time.Unix(0, d.CacheModTime) } // Size returns the cached Size func (d *Directory) Size() int64 { return d.CacheSize } // Items returns the cached Items func (d *Directory) Items() int64 { return d.CacheItems } // ID returns the ID of the cached directory if known func (d *Directory) ID() string { if d.Directory == nil { return "" } return d.Directory.ID() } var ( _ fs.Directory = (*Directory)(nil) ) rclone-1.53.3/backend/cache/handle.go000066400000000000000000000376361375552240400173250ustar00rootroot00000000000000// +build !plan9,!js package cache import ( "context" "fmt" "io" "path" "runtime" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/operations" ) var uploaderMap = make(map[string]*backgroundWriter) var uploaderMapMx sync.Mutex // initBackgroundUploader returns a single instance func initBackgroundUploader(fs *Fs) (*backgroundWriter, error) { // write lock to create one uploaderMapMx.Lock() defer uploaderMapMx.Unlock() if b, ok := uploaderMap[fs.String()]; ok { // if it was already started we close it so that it can be started again if b.running { b.close() } else { return b, nil } } bb := newBackgroundWriter(fs) uploaderMap[fs.String()] = bb return uploaderMap[fs.String()], nil } // Handle manages the read/write/seek operations on an open handle type Handle struct { ctx context.Context cachedObject *Object cfs *Fs memory *Memory preloadQueue chan int64 preloadOffset int64 offset int64 seenOffsets map[int64]bool mu sync.Mutex workersWg sync.WaitGroup confirmReading chan bool workers int maxWorkerID int UseMemory bool closed bool reading bool } // NewObjectHandle returns a new Handle for an existing Object func NewObjectHandle(ctx context.Context, o *Object, cfs *Fs) *Handle { r := &Handle{ ctx: ctx, cachedObject: o, cfs: cfs, offset: 0, preloadOffset: -1, // -1 to trigger the first preload UseMemory: !cfs.opt.ChunkNoMemory, reading: false, } r.seenOffsets = make(map[int64]bool) r.memory = NewMemory(-1) // create a larger buffer to queue up requests r.preloadQueue = make(chan int64, r.cfs.opt.TotalWorkers*10) r.confirmReading = make(chan bool) r.startReadWorkers() return r } // cacheFs is a convenience method to get the parent cache FS of the object's manager func (r *Handle) cacheFs() *Fs { return r.cfs } // storage is a convenience method to get the persistent storage of the object's manager func (r *Handle) storage() *Persistent { return r.cacheFs().cache } // String representation of this reader func (r *Handle) String() string { return r.cachedObject.abs() } // startReadWorkers will start the worker pool func (r *Handle) startReadWorkers() { if r.workers > 0 { return } totalWorkers := r.cacheFs().opt.TotalWorkers if r.cacheFs().plexConnector.isConfigured() { if !r.cacheFs().plexConnector.isConnected() { err := r.cacheFs().plexConnector.authenticate() if err != nil { fs.Errorf(r, "failed to authenticate to Plex: %v", err) } } if r.cacheFs().plexConnector.isConnected() { totalWorkers = 1 } } r.scaleWorkers(totalWorkers) } // scaleWorkers scales the worker pool up or down to the desired count func (r *Handle) scaleWorkers(desired int) { current := r.workers if current == desired { return } if current > desired { // scale in gracefully for r.workers > desired { r.preloadQueue <- -1 r.workers-- } } else { // scale out for r.workers < desired { w := &worker{ r: r, id: r.maxWorkerID, } r.workersWg.Add(1) r.workers++ r.maxWorkerID++ go w.run() } } // ignore first scale out from 0 if current != 0 {
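// current != 0 means an existing pool is being resized rather than
// started for the first time, so the change is worth logging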
fs.Debugf(r, "scale workers to %v", desired) } } func (r *Handle) confirmExternalReading() { // if we have a max value of workers // then we skip this step if r.workers > 1 || !r.cacheFs().plexConnector.isConfigured() { return } if !r.cacheFs().plexConnector.isPlaying(r.cachedObject) { return } fs.Infof(r, "confirmed reading by external reader") r.scaleWorkers(r.cacheFs().opt.TotalWorkers) } // queueOffset will send an offset to the workers if it's different from the last one func (r *Handle) queueOffset(offset int64) { if offset != r.preloadOffset { // clean past in-memory chunks if r.UseMemory { go r.memory.CleanChunksByNeed(offset) } r.confirmExternalReading() r.preloadOffset = offset // clear the past seen chunks // they will remain in our persistent storage but will be removed from transient // so they need to be picked up by a worker for k := range r.seenOffsets { if k < offset { r.seenOffsets[k] = false } } for i := 0; i < r.workers; i++ { o := r.preloadOffset + int64(r.cacheFs().opt.ChunkSize)*int64(i) if o < 0 || o >= r.cachedObject.Size() { continue } if v, ok := r.seenOffsets[o]; ok && v { continue } r.seenOffsets[o] = true r.preloadQueue <- o } } } // getChunk is called by the FS to retrieve a specific chunk of known start and size from where it can find it // it can be from transient or persistent cache // it will also build the chunk from the cache's specific chunk boundaries and build the final desired chunk in a buffer func (r *Handle) getChunk(chunkStart int64) ([]byte, error) { var data []byte var err error // we calculate the modulus of the requested offset with the size of a chunk offset := chunkStart % int64(r.cacheFs().opt.ChunkSize) // we align the start offset of the first chunk to a likely chunk in the storage chunkStart = chunkStart - offset r.queueOffset(chunkStart) found := false if r.UseMemory { data, err = r.memory.GetChunk(r.cachedObject, chunkStart) if err == nil { found = true } } if !found { // we're gonna give the workers a chance to pickup the chunk // and retry a couple of times for i := 0; i < r.cacheFs().opt.ReadRetries*8; i++ { data, err = r.storage().GetChunk(r.cachedObject, chunkStart) if err == nil { found = true break } fs.Debugf(r, "%v: chunk retry storage: %v", chunkStart, i) time.Sleep(time.Millisecond * 500) } } // not found in ram or // the worker didn't managed to download the chunk in time so we abort and close the stream if err != nil || len(data) == 0 || !found { if r.workers == 0 { fs.Errorf(r, "out of workers") return nil, io.ErrUnexpectedEOF } return nil, errors.Errorf("chunk not found %v", chunkStart) } // first chunk will be aligned with the start if offset > 0 { if offset > int64(len(data)) { fs.Errorf(r, "unexpected conditions during reading. 
current position: %v, current chunk position: %v, current chunk size: %v, offset: %v, chunk size: %v, file size: %v", r.offset, chunkStart, len(data), offset, r.cacheFs().opt.ChunkSize, r.cachedObject.Size()) return nil, io.ErrUnexpectedEOF } data = data[int(offset):] } return data, nil } // Read a chunk from storage or len(p) func (r *Handle) Read(p []byte) (n int, err error) { r.mu.Lock() defer r.mu.Unlock() var buf []byte // first reading if !r.reading { r.reading = true } // reached EOF if r.offset >= r.cachedObject.Size() { return 0, io.EOF } currentOffset := r.offset buf, err = r.getChunk(currentOffset) if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF { fs.Errorf(r, "(%v/%v) error (%v) response", currentOffset, r.cachedObject.Size(), err) } if len(buf) == 0 && err != io.ErrUnexpectedEOF { return 0, io.EOF } readSize := copy(p, buf) newOffset := currentOffset + int64(readSize) r.offset = newOffset return readSize, err } // Close will tell the workers to stop func (r *Handle) Close() error { r.mu.Lock() defer r.mu.Unlock() if r.closed { return errors.New("file already closed") } close(r.preloadQueue) r.closed = true // wait for workers to complete their jobs before returning r.workersWg.Wait() r.memory.db.Flush() fs.Debugf(r, "cache reader closed %v", r.offset) return nil } // Seek will move the current offset based on whence and instruct the workers to move there too func (r *Handle) Seek(offset int64, whence int) (int64, error) { r.mu.Lock() defer r.mu.Unlock() var err error switch whence { case io.SeekStart: fs.Debugf(r, "moving offset set from %v to %v", r.offset, offset) r.offset = offset case io.SeekCurrent: fs.Debugf(r, "moving offset cur from %v to %v", r.offset, r.offset+offset) r.offset += offset case io.SeekEnd: fs.Debugf(r, "moving offset end (%v) from %v to %v", r.cachedObject.Size(), r.offset, r.cachedObject.Size()+offset) r.offset = r.cachedObject.Size() + offset default: err = errors.Errorf("cache: unimplemented seek whence %v", whence) } chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize)) if chunkStart >= int64(r.cacheFs().opt.ChunkSize) { chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize) } r.queueOffset(chunkStart) return r.offset, err } type worker struct { r *Handle rc io.ReadCloser id int } // String is a representation of this worker func (w *worker) String() string { return fmt.Sprintf("worker-%v <%v>", w.id, w.r.cachedObject.Name) } // reader will return a reader depending on the capabilities of the source reader: // - if it supports seeking it will seek to the desired offset and return the same reader // - if it doesn't support seeking it will close a possible existing one and open at the desired offset // - if there's no reader associated with this worker, it will create one func (w *worker) reader(offset, end int64, closeOpen bool) (io.ReadCloser, error) { var err error r := w.rc if w.rc == nil { r, err = w.r.cacheFs().openRateLimited(func() (io.ReadCloser, error) { return w.r.cachedObject.Object.Open(w.r.ctx, &fs.RangeOption{Start: offset, End: end - 1}) }) if err != nil { return nil, err } return r, nil } if !closeOpen { if do, ok := r.(fs.RangeSeeker); ok { _, err = do.RangeSeek(w.r.ctx, offset, io.SeekStart, end-offset) return r, err } else if do, ok := r.(io.Seeker); ok { _, err = do.Seek(offset, io.SeekStart) return r, err } } _ = w.rc.Close() return w.r.cacheFs().openRateLimited(func() (io.ReadCloser, error) { r, err = w.r.cachedObject.Object.Open(w.r.ctx, &fs.RangeOption{Start: offset, End: end - 1}) if err != 
nil { return nil, err } return r, nil }) } // run is the main loop for the worker which receives offsets to preload func (w *worker) run() { var err error var data []byte defer func() { if w.rc != nil { _ = w.rc.Close() } w.r.workersWg.Done() }() for { chunkStart, open := <-w.r.preloadQueue if chunkStart < 0 || !open { break } // skip if it exists if w.r.UseMemory { if w.r.memory.HasChunk(w.r.cachedObject, chunkStart) { continue } // add it in ram if it's in the persistent storage data, err = w.r.storage().GetChunk(w.r.cachedObject, chunkStart) if err == nil { err = w.r.memory.AddChunk(w.r.cachedObject.abs(), data, chunkStart) if err != nil { fs.Errorf(w, "failed caching chunk in ram %v: %v", chunkStart, err) } else { continue } } } else { if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) { continue } } chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize) // TODO: Remove this comment if it proves to be reliable for #1896 //if chunkEnd > w.r.cachedObject.Size() { // chunkEnd = w.r.cachedObject.Size() //} w.download(chunkStart, chunkEnd, 0) } } func (w *worker) download(chunkStart, chunkEnd int64, retry int) { var err error var data []byte // stop retries if retry >= w.r.cacheFs().opt.ReadRetries { return } // back-off between retries if retry > 0 { time.Sleep(time.Second * time.Duration(retry)) } closeOpen := false if retry > 0 { closeOpen = true } w.rc, err = w.reader(chunkStart, chunkEnd, closeOpen) // we seem to be getting only errors so we abort if err != nil { fs.Errorf(w, "object open failed %v: %v", chunkStart, err) err = w.r.cachedObject.refreshFromSource(w.r.ctx, true) if err != nil { fs.Errorf(w, "%v", err) } w.download(chunkStart, chunkEnd, retry+1) return } data = make([]byte, chunkEnd-chunkStart) var sourceRead int sourceRead, err = io.ReadFull(w.rc, data) if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF { fs.Errorf(w, "failed to read chunk %v: %v", chunkStart, err) err = w.r.cachedObject.refreshFromSource(w.r.ctx, true) if err != nil { fs.Errorf(w, "%v", err) } w.download(chunkStart, chunkEnd, retry+1) return } data = data[:sourceRead] // reslice to remove extra garbage if err == io.ErrUnexpectedEOF { fs.Debugf(w, "partial downloaded chunk %v", fs.SizeSuffix(chunkStart)) } else { fs.Debugf(w, "downloaded chunk %v", chunkStart) } if w.r.UseMemory { err = w.r.memory.AddChunk(w.r.cachedObject.abs(), data, chunkStart) if err != nil { fs.Errorf(w, "failed caching chunk in ram %v: %v", chunkStart, err) } } err = w.r.storage().AddChunk(w.r.cachedObject.abs(), data, chunkStart) if err != nil { fs.Errorf(w, "failed caching chunk in storage %v: %v", chunkStart, err) } } const ( // BackgroundUploadStarted is a state for a temp file that has started upload BackgroundUploadStarted = iota // BackgroundUploadCompleted is a state for a temp file that has completed upload BackgroundUploadCompleted // BackgroundUploadError is a state for a temp file that has an error upload BackgroundUploadError ) // BackgroundUploadState is an entity that maps to an existing file which is stored on the temp fs type BackgroundUploadState struct { Remote string Status int Error error } type backgroundWriter struct { fs *Fs stateCh chan int running bool notifyCh chan BackgroundUploadState mu sync.Mutex } func newBackgroundWriter(f *Fs) *backgroundWriter { b := &backgroundWriter{ fs: f, stateCh: make(chan int), notifyCh: make(chan BackgroundUploadState), } return b } func (b *backgroundWriter) close() { b.stateCh <- 2 b.mu.Lock() defer b.mu.Unlock() b.running = false } func (b 
*backgroundWriter) pause() { b.stateCh <- 1 } func (b *backgroundWriter) play() { b.stateCh <- 0 } func (b *backgroundWriter) isRunning() bool { b.mu.Lock() defer b.mu.Unlock() return b.running } func (b *backgroundWriter) notify(remote string, status int, err error) { state := BackgroundUploadState{ Remote: remote, Status: status, Error: err, } select { case b.notifyCh <- state: fs.Debugf(remote, "notified background upload state: %v", state.Status) default: } } func (b *backgroundWriter) run() { state := 0 for { b.mu.Lock() b.running = true b.mu.Unlock() select { case s := <-b.stateCh: state = s default: // } switch state { case 1: runtime.Gosched() time.Sleep(time.Millisecond * 500) continue case 2: return } absPath, err := b.fs.cache.getPendingUpload(b.fs.Root(), time.Duration(b.fs.opt.TempWaitTime)) if err != nil || absPath == "" || !b.fs.isRootInPath(absPath) { time.Sleep(time.Second) continue } remote := b.fs.cleanRootFromPath(absPath) b.notify(remote, BackgroundUploadStarted, nil) fs.Infof(remote, "background upload: started upload") err = operations.MoveFile(context.TODO(), b.fs.UnWrap(), b.fs.tempFs, remote, remote) if err != nil { b.notify(remote, BackgroundUploadError, err) _ = b.fs.cache.rollbackPendingUpload(absPath) fs.Errorf(remote, "background upload: %v", err) continue } // clean empty dirs up to root thisDir := cleanPath(path.Dir(remote)) for thisDir != "" { thisList, err := b.fs.tempFs.List(context.TODO(), thisDir) if err != nil { break } if len(thisList) > 0 { break } err = b.fs.tempFs.Rmdir(context.TODO(), thisDir) fs.Debugf(thisDir, "cleaned from temp path") if err != nil { break } thisDir = cleanPath(path.Dir(thisDir)) } fs.Infof(remote, "background upload: uploaded entry") err = b.fs.cache.removePendingUpload(absPath) if err != nil && !strings.Contains(err.Error(), "pending upload not found") { fs.Errorf(remote, "background upload: %v", err) } parentCd := NewDirectory(b.fs, cleanPath(path.Dir(remote))) err = b.fs.cache.ExpireDir(parentCd) if err != nil { fs.Errorf(parentCd, "background upload: cache expire error: %v", err) } b.fs.notifyChangeUpstream(remote, fs.EntryObject) fs.Infof(remote, "finished background upload") b.notify(remote, BackgroundUploadCompleted, nil) } } // Check the interfaces are satisfied var ( _ io.ReadCloser = (*Handle)(nil) _ io.Seeker = (*Handle)(nil) ) rclone-1.53.3/backend/cache/object.go000066400000000000000000000233741375552240400173340ustar00rootroot00000000000000// +build !plan9,!js package cache import ( "context" "io" "path" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/readers" ) const ( objectInCache = "Object" objectPendingUpload = "TempObject" ) // Object is a generic file like object that stores basic information about it type Object struct { fs.Object `json:"-"` ParentFs fs.Fs `json:"-"` // parent fs CacheFs *Fs `json:"-"` // cache fs Name string `json:"name"` // name of the directory Dir string `json:"dir"` // abs path of the object CacheModTime int64 `json:"modTime"` // modification or creation time - IsZero for unknown CacheSize int64 `json:"size"` // size of directory and contents or -1 if unknown CacheStorable bool `json:"storable"` // says whether this object can be stored CacheType string `json:"cacheType"` CacheTs time.Time `json:"cacheTs"` cacheHashesMu sync.Mutex CacheHashes map[hash.Type]string // all supported hashes cached refreshMutex sync.Mutex } // NewObject builds one from a generic fs.Object func NewObject(f *Fs, remote 
string) *Object { fullRemote := path.Join(f.Root(), remote) dir, name := path.Split(fullRemote) cacheType := objectInCache parentFs := f.UnWrap() if f.opt.TempWritePath != "" { _, err := f.cache.SearchPendingUpload(fullRemote) if err == nil { // queued for upload cacheType = objectPendingUpload parentFs = f.tempFs fs.Debugf(fullRemote, "pending upload found") } } co := &Object{ ParentFs: parentFs, CacheFs: f, Name: cleanPath(name), Dir: cleanPath(dir), CacheModTime: time.Now().UnixNano(), CacheSize: 0, CacheStorable: false, CacheType: cacheType, CacheTs: time.Now(), } return co } // ObjectFromOriginal builds one from a generic fs.Object func ObjectFromOriginal(ctx context.Context, f *Fs, o fs.Object) *Object { var co *Object fullRemote := cleanPath(path.Join(f.Root(), o.Remote())) dir, name := path.Split(fullRemote) cacheType := objectInCache parentFs := f.UnWrap() if f.opt.TempWritePath != "" { _, err := f.cache.SearchPendingUpload(fullRemote) if err == nil { // queued for upload cacheType = objectPendingUpload parentFs = f.tempFs fs.Debugf(fullRemote, "pending upload found") } } co = &Object{ ParentFs: parentFs, CacheFs: f, Name: cleanPath(name), Dir: cleanPath(dir), CacheType: cacheType, CacheTs: time.Now(), } co.updateData(ctx, o) return co } func (o *Object) updateData(ctx context.Context, source fs.Object) { o.Object = source o.CacheModTime = source.ModTime(ctx).UnixNano() o.CacheSize = source.Size() o.CacheStorable = source.Storable() o.CacheTs = time.Now() o.cacheHashesMu.Lock() o.CacheHashes = make(map[hash.Type]string) o.cacheHashesMu.Unlock() } // Fs returns its FS info func (o *Object) Fs() fs.Info { return o.CacheFs } // String returns a human friendly name for this object func (o *Object) String() string { if o == nil { return "" } return o.Remote() } // Remote returns the remote path func (o *Object) Remote() string { p := path.Join(o.Dir, o.Name) return o.CacheFs.cleanRootFromPath(p) } // abs returns the absolute path to the object func (o *Object) abs() string { return path.Join(o.Dir, o.Name) } // ModTime returns the cached ModTime func (o *Object) ModTime(ctx context.Context) time.Time { _ = o.refresh(ctx) return time.Unix(0, o.CacheModTime) } // Size returns the cached Size func (o *Object) Size() int64 { _ = o.refresh(context.TODO()) return o.CacheSize } // Storable returns the cached Storable func (o *Object) Storable() bool { _ = o.refresh(context.TODO()) return o.CacheStorable } // refresh will check if the object info is expired and request the info from source if it is // all these conditions must be true to ignore a refresh // 1. cache ts didn't expire yet // 2. 
is not pending a notification from the wrapped fs func (o *Object) refresh(ctx context.Context) error { isNotified := o.CacheFs.isNotifiedRemote(o.Remote()) isExpired := time.Now().After(o.CacheTs.Add(time.Duration(o.CacheFs.opt.InfoAge))) if !isExpired && !isNotified { return nil } return o.refreshFromSource(ctx, true) } // refreshFromSource requests the original FS for the object in case it comes from a cached entry func (o *Object) refreshFromSource(ctx context.Context, force bool) error { o.refreshMutex.Lock() defer o.refreshMutex.Unlock() var err error var liveObject fs.Object if o.Object != nil && !force { return nil } if o.isTempFile() { liveObject, err = o.ParentFs.NewObject(ctx, o.Remote()) err = errors.Wrapf(err, "in parent fs %v", o.ParentFs) } else { liveObject, err = o.CacheFs.Fs.NewObject(ctx, o.Remote()) err = errors.Wrapf(err, "in cache fs %v", o.CacheFs.Fs) } if err != nil { fs.Errorf(o, "error refreshing object in : %v", err) return err } o.updateData(ctx, liveObject) o.persist() return nil } // SetModTime sets the ModTime of this object func (o *Object) SetModTime(ctx context.Context, t time.Time) error { if err := o.refreshFromSource(ctx, false); err != nil { return err } err := o.Object.SetModTime(ctx, t) if err != nil { return err } o.CacheModTime = t.UnixNano() o.persist() fs.Debugf(o, "updated ModTime: %v", t) return nil } // Open is used to request a specific part of the file using fs.RangeOption func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { var err error if o.Object == nil { err = o.refreshFromSource(ctx, true) } else { err = o.refresh(ctx) } if err != nil { return nil, err } cacheReader := NewObjectHandle(ctx, o, o.CacheFs) var offset, limit int64 = 0, -1 for _, option := range options { switch x := option.(type) { case *fs.SeekOption: offset = x.Offset case *fs.RangeOption: offset, limit = x.Decode(o.Size()) } _, err = cacheReader.Seek(offset, io.SeekStart) if err != nil { return nil, err } } return readers.NewLimitedReadCloser(cacheReader, limit), nil } // Update will change the object data func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { if err := o.refreshFromSource(ctx, false); err != nil { return err } // pause background uploads if active if o.CacheFs.opt.TempWritePath != "" { o.CacheFs.backgroundRunner.pause() defer o.CacheFs.backgroundRunner.play() // don't allow started uploads if o.isTempFile() && o.tempFileStartedUpload() { return errors.Errorf("%v is currently uploading, can't update", o) } } fs.Debugf(o, "updating object contents with size %v", src.Size()) // FIXME use reliable upload err := o.Object.Update(ctx, in, src, options...) 
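// the wrapped object's Update streams the new data straight to the
// source; on success the stale cached chunks and metadata for this
// object are dropped below so readers cannot be served old content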
if err != nil { fs.Errorf(o, "error updating source: %v", err) return err } // deleting cached chunks and info to be replaced with new ones _ = o.CacheFs.cache.RemoveObject(o.abs()) // advertise to ChangeNotify if wrapped doesn't do that o.CacheFs.notifyChangeUpstreamIfNeeded(o.Remote(), fs.EntryObject) o.CacheModTime = src.ModTime(ctx).UnixNano() o.CacheSize = src.Size() o.cacheHashesMu.Lock() o.CacheHashes = make(map[hash.Type]string) o.cacheHashesMu.Unlock() o.CacheTs = time.Now() o.persist() return nil } // Remove deletes the object from both the cache and the source func (o *Object) Remove(ctx context.Context) error { if err := o.refreshFromSource(ctx, false); err != nil { return err } // pause background uploads if active if o.CacheFs.opt.TempWritePath != "" { o.CacheFs.backgroundRunner.pause() defer o.CacheFs.backgroundRunner.play() // don't allow started uploads if o.isTempFile() && o.tempFileStartedUpload() { return errors.Errorf("%v is currently uploading, can't delete", o) } } err := o.Object.Remove(ctx) if err != nil { return err } fs.Debugf(o, "removing object") _ = o.CacheFs.cache.RemoveObject(o.abs()) _ = o.CacheFs.cache.removePendingUpload(o.abs()) parentCd := NewDirectory(o.CacheFs, cleanPath(path.Dir(o.Remote()))) _ = o.CacheFs.cache.ExpireDir(parentCd) // advertise to ChangeNotify if wrapped doesn't do that o.CacheFs.notifyChangeUpstreamIfNeeded(parentCd.Remote(), fs.EntryDirectory) return nil } // Hash requests a hash of the object and stores in the cache // since it might or might not be called, this is lazy loaded func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) { _ = o.refresh(ctx) o.cacheHashesMu.Lock() if o.CacheHashes == nil { o.CacheHashes = make(map[hash.Type]string) } cachedHash, found := o.CacheHashes[ht] o.cacheHashesMu.Unlock() if found { return cachedHash, nil } if err := o.refreshFromSource(ctx, false); err != nil { return "", err } liveHash, err := o.Object.Hash(ctx, ht) if err != nil { return "", err } o.cacheHashesMu.Lock() o.CacheHashes[ht] = liveHash o.cacheHashesMu.Unlock() o.persist() fs.Debugf(o, "object hash cached: %v", liveHash) return liveHash, nil } // persist adds this object to the persistent cache func (o *Object) persist() *Object { err := o.CacheFs.cache.AddObject(o) if err != nil { fs.Errorf(o, "failed to cache object: %v", err) } return o } func (o *Object) isTempFile() bool { _, err := o.CacheFs.cache.SearchPendingUpload(o.abs()) if err != nil { o.CacheType = objectInCache return false } o.CacheType = objectPendingUpload return true } func (o *Object) tempFileStartedUpload() bool { started, err := o.CacheFs.cache.SearchPendingUpload(o.abs()) if err != nil { return false } return started } // UnWrap returns the Object that this Object is wrapping or // nil if it isn't wrapping anything func (o *Object) UnWrap() fs.Object { return o.Object } var ( _ fs.Object = (*Object)(nil) _ fs.ObjectUnWrapper = (*Object)(nil) ) rclone-1.53.3/backend/cache/plex.go000066400000000000000000000164061375552240400170340ustar00rootroot00000000000000// +build !plan9,!js package cache import ( "bytes" "crypto/tls" "encoding/json" "fmt" "io/ioutil" "net/http" "net/url" "strings" "sync" "time" cache "github.com/patrickmn/go-cache" "github.com/rclone/rclone/fs" "golang.org/x/net/websocket" ) const ( // defPlexLoginURL is the default URL for Plex login defPlexLoginURL = "https://plex.tv/users/sign_in.json" defPlexNotificationURL = "%s/:/websockets/notifications?X-Plex-Token=%s" ) // PlaySessionStateNotification is part of the API response 
of Plex type PlaySessionStateNotification struct { SessionKey string `json:"sessionKey"` GUID string `json:"guid"` Key string `json:"key"` ViewOffset int64 `json:"viewOffset"` State string `json:"state"` TranscodeSession string `json:"transcodeSession"` } // NotificationContainer is part of the API response of Plex type NotificationContainer struct { Type string `json:"type"` Size int `json:"size"` PlaySessionState []PlaySessionStateNotification `json:"PlaySessionStateNotification"` } // PlexNotification is part of the API response of Plex type PlexNotification struct { Container NotificationContainer `json:"NotificationContainer"` } // plexConnector manages the cache integration with Plex type plexConnector struct { url *url.URL username string password string token string insecure bool f *Fs mu sync.Mutex running bool runningMu sync.Mutex stateCache *cache.Cache saveToken func(string) } // newPlexConnector connects to a Plex server and generates a token func newPlexConnector(f *Fs, plexURL, username, password string, insecure bool, saveToken func(string)) (*plexConnector, error) { u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/")) if err != nil { return nil, err } pc := &plexConnector{ f: f, url: u, username: username, password: password, token: "", insecure: insecure, stateCache: cache.New(time.Hour, time.Minute), saveToken: saveToken, } return pc, nil } // newPlexConnectorWithToken connects to a Plex server using an existing token func newPlexConnectorWithToken(f *Fs, plexURL, token string, insecure bool) (*plexConnector, error) { u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/")) if err != nil { return nil, err } pc := &plexConnector{ f: f, url: u, token: token, insecure: insecure, stateCache: cache.New(time.Hour, time.Minute), } pc.listenWebsocket() return pc, nil } func (p *plexConnector) closeWebsocket() { p.runningMu.Lock() defer p.runningMu.Unlock() fs.Infof("plex", "stopped Plex watcher") p.running = false } func (p *plexConnector) websocketDial() (*websocket.Conn, error) { u := strings.TrimRight(strings.Replace(strings.Replace( p.url.String(), "http://", "ws://", 1), "https://", "wss://", 1), "/") url := fmt.Sprintf(defPlexNotificationURL, u, p.token) config, err := websocket.NewConfig(url, "http://localhost") if err != nil { return nil, err } if p.insecure { config.TlsConfig = &tls.Config{InsecureSkipVerify: true} } return websocket.DialConfig(config) } func (p *plexConnector) listenWebsocket() { p.runningMu.Lock() defer p.runningMu.Unlock() conn, err := p.websocketDial() if err != nil { fs.Errorf("plex", "%v", err) return } p.running = true go func() { for { if !p.isConnected() { break } notif := &PlexNotification{} err := websocket.JSON.Receive(conn, notif) if err != nil { fs.Debugf("plex", "%v", err) p.closeWebsocket() break } // we're only interested in play events if notif.Container.Type == "playing" { // we loop through each of them for _, v := range notif.Container.PlaySessionState { // event type of playing if v.State == "playing" { // if it's not cached get the details and cache them if _, found := p.stateCache.Get(v.Key); !found { req, err := http.NewRequest("GET", fmt.Sprintf("%s%s", p.url.String(), v.Key), nil) if err != nil { continue } p.fillDefaultHeaders(req) resp, err := http.DefaultClient.Do(req) if err != nil { continue } var data []byte data, err = ioutil.ReadAll(resp.Body) if err != nil { continue } p.stateCache.Set(v.Key, data, cache.DefaultExpiration) } } else if v.State == "stopped" { p.stateCache.Delete(v.Key) } } } } }() } //
fillDefaultHeaders will add common headers to requests func (p *plexConnector) fillDefaultHeaders(req *http.Request) { req.Header.Add("X-Plex-Client-Identifier", fmt.Sprintf("rclone (%v)", p.f.String())) req.Header.Add("X-Plex-Product", fmt.Sprintf("rclone (%v)", p.f.Name())) req.Header.Add("X-Plex-Version", fs.Version) req.Header.Add("Accept", "application/json") if p.token != "" { req.Header.Add("X-Plex-Token", p.token) } } // authenticate will generate a token based on a username/password func (p *plexConnector) authenticate() error { p.mu.Lock() defer p.mu.Unlock() form := url.Values{} form.Set("user[login]", p.username) form.Add("user[password]", p.password) req, err := http.NewRequest("POST", defPlexLoginURL, strings.NewReader(form.Encode())) if err != nil { return err } p.fillDefaultHeaders(req) resp, err := http.DefaultClient.Do(req) if err != nil { return err } var data map[string]interface{} err = json.NewDecoder(resp.Body).Decode(&data) if err != nil { return fmt.Errorf("failed to obtain token: %v", err) } tokenGen, ok := get(data, "user", "authToken") if !ok { return fmt.Errorf("failed to obtain token: %v", data) } token, ok := tokenGen.(string) if !ok { return fmt.Errorf("failed to obtain token: %v", data) } p.token = token if p.token != "" { if p.saveToken != nil { p.saveToken(p.token) } fs.Infof(p.f.Name(), "Connected to Plex server: %v", p.url.String()) } p.listenWebsocket() return nil } // isConnected checks if this rclone is authenticated to Plex func (p *plexConnector) isConnected() bool { p.runningMu.Lock() defer p.runningMu.Unlock() return p.running } // isConfigured checks if this rclone is configured to use a Plex server func (p *plexConnector) isConfigured() bool { return p.url != nil } func (p *plexConnector) isPlaying(co *Object) bool { var err error if !p.isConnected() { p.listenWebsocket() } remote := co.Remote() if cr, yes := p.f.isWrappedByCrypt(); yes { remote, err = cr.DecryptFileName(co.Remote()) if err != nil { fs.Debugf("plex", "can not decrypt wrapped file: %v", err) return false } } isPlaying := false for _, v := range p.stateCache.Items() { if bytes.Contains(v.Object.([]byte), []byte(remote)) { isPlaying = true break } } return isPlaying } // adapted from: https://stackoverflow.com/a/28878037 (credit) func get(m interface{}, path ...interface{}) (interface{}, bool) { for _, p := range path { switch idx := p.(type) { case string: if mm, ok := m.(map[string]interface{}); ok { if val, found := mm[idx]; found { m = val continue } } return nil, false case int: if mm, ok := m.([]interface{}); ok { if len(mm) > idx { m = mm[idx] continue } } return nil, false } } return m, true } rclone-1.53.3/backend/cache/storage_memory.go000066400000000000000000000051211375552240400211100ustar00rootroot00000000000000// +build !plan9,!js package cache import ( "strconv" "strings" "time" cache "github.com/patrickmn/go-cache" "github.com/pkg/errors" "github.com/rclone/rclone/fs" ) // Memory is a wrapper of transient storage for a go-cache store type Memory struct { db *cache.Cache } // NewMemory builds this cache storage // defaultExpiration will set the expiry time of chunks in this storage func NewMemory(defaultExpiration time.Duration) *Memory { mem := &Memory{} err := mem.Connect(defaultExpiration) if err != nil { fs.Errorf("cache", "can't open ram connection: %v", err) } return mem } // Connect will create a connection for the storage func (m *Memory) Connect(defaultExpiration time.Duration) error { m.db = cache.New(defaultExpiration, -1) return nil } // HasChunk 
confirms the existence of a single chunk of an object func (m *Memory) HasChunk(cachedObject *Object, offset int64) bool { key := cachedObject.abs() + "-" + strconv.FormatInt(offset, 10) _, found := m.db.Get(key) return found } // GetChunk will retrieve a single chunk which belongs to a cached object or an error if it doesn't find it func (m *Memory) GetChunk(cachedObject *Object, offset int64) ([]byte, error) { key := cachedObject.abs() + "-" + strconv.FormatInt(offset, 10) var data []byte if x, found := m.db.Get(key); found { data = x.([]byte) return data, nil } return nil, errors.Errorf("couldn't get cached object data at offset %v", offset) } // AddChunk adds a new chunk of a cached object func (m *Memory) AddChunk(fp string, data []byte, offset int64) error { return m.AddChunkAhead(fp, data, offset, time.Second) } // AddChunkAhead adds a new chunk of a cached object func (m *Memory) AddChunkAhead(fp string, data []byte, offset int64, t time.Duration) error { key := fp + "-" + strconv.FormatInt(offset, 10) m.db.Set(key, data, cache.DefaultExpiration) return nil } // CleanChunksByAge will cleanup on a cron basis func (m *Memory) CleanChunksByAge(chunkAge time.Duration) { m.db.DeleteExpired() } // CleanChunksByNeed will cleanup chunks after the FS passes a specific chunk func (m *Memory) CleanChunksByNeed(offset int64) { var items map[string]cache.Item items = m.db.Items() for key := range items { sepIdx := strings.LastIndex(key, "-") keyOffset, err := strconv.ParseInt(key[sepIdx+1:], 10, 64) if err != nil { fs.Errorf("cache", "couldn't parse offset entry %v", key) continue } if keyOffset < offset { m.db.Delete(key) } } } // CleanChunksBySize will cleanup chunks after the total size passes a certain point func (m *Memory) CleanChunksBySize(maxSize int64) { // NOOP } rclone-1.53.3/backend/cache/storage_persistent.go000066400000000000000000000671621375552240400220150ustar00rootroot00000000000000// +build !plan9,!js package cache import ( "bytes" "context" "encoding/binary" "encoding/json" "fmt" "io/ioutil" "os" "path" "strconv" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/walk" bolt "go.etcd.io/bbolt" ) // Constants const ( RootBucket = "root" RootTsBucket = "rootTs" DataTsBucket = "dataTs" tempBucket = "pending" ) // Features flags for this storage type type Features struct { PurgeDb bool // purge the db before starting DbWaitTime time.Duration // time to wait for DB to be available } var boltMap = make(map[string]*Persistent) var boltMapMx sync.Mutex // GetPersistent returns a single instance for the specific store func GetPersistent(dbPath, chunkPath string, f *Features) (*Persistent, error) { // write lock to create one boltMapMx.Lock() defer boltMapMx.Unlock() if b, ok := boltMap[dbPath]; ok { if !b.open { err := b.connect() if err != nil { return nil, err } } return b, nil } bb, err := newPersistent(dbPath, chunkPath, f) if err != nil { return nil, err } boltMap[dbPath] = bb return boltMap[dbPath], nil } type chunkInfo struct { Path string Offset int64 Size int64 } type tempUploadInfo struct { DestPath string AddedOn time.Time Started bool } // String representation of a tempUploadInfo func (t *tempUploadInfo) String() string { return fmt.Sprintf("%v - %v (%v)", t.DestPath, t.Started, t.AddedOn) } // Persistent is a wrapper of persistent storage for a bolt.DB file type Persistent struct { dbPath string dataPath string open bool db *bolt.DB cleanupMux sync.Mutex tempQueueMux sync.Mutex features *Features } // newPersistent 
builds a new wrapper and connects to the bolt.DB file func newPersistent(dbPath, chunkPath string, f *Features) (*Persistent, error) { b := &Persistent{ dbPath: dbPath, dataPath: chunkPath, features: f, } err := b.connect() if err != nil { fs.Errorf(dbPath, "Error opening storage cache. Is there another rclone running on the same remote? %v", err) return nil, err } return b, nil } // String will return a human friendly string for this DB (currently the dbPath) func (b *Persistent) String() string { return " " + b.dbPath } // connect opens a connection to the configured bolt.DB file // if the PurgeDb feature flag is set, all buckets are dropped after opening so the cache starts empty func (b *Persistent) connect() error { var err error err = os.MkdirAll(b.dataPath, os.ModePerm) if err != nil { return errors.Wrapf(err, "failed to create a data directory %q", b.dataPath) } b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: b.features.DbWaitTime}) if err != nil { return errors.Wrapf(err, "failed to open a cache connection to %q", b.dbPath) } if b.features.PurgeDb { b.Purge() } _ = b.db.Update(func(tx *bolt.Tx) error { _, _ = tx.CreateBucketIfNotExists([]byte(RootBucket)) _, _ = tx.CreateBucketIfNotExists([]byte(RootTsBucket)) _, _ = tx.CreateBucketIfNotExists([]byte(DataTsBucket)) _, _ = tx.CreateBucketIfNotExists([]byte(tempBucket)) return nil }) b.open = true return nil } // getBucket prepares and cleans a specific path of the form: /var/tmp and will iterate through each path component // to get to the nested bucket of the final part (in this example: tmp) func (b *Persistent) getBucket(dir string, createIfMissing bool, tx *bolt.Tx) *bolt.Bucket { dir = cleanPath(dir) entries := strings.FieldsFunc(dir, func(c rune) bool { // cover Windows where rclone still uses '/' as path separator // this should be safe as '/' is not a valid Windows character return (os.PathSeparator == c || c == rune('/')) }) bucket := tx.Bucket([]byte(RootBucket)) for _, entry := range entries { if createIfMissing { bucket, _ = bucket.CreateBucketIfNotExists([]byte(entry)) } else { bucket = bucket.Bucket([]byte(entry)) } if bucket == nil { return nil } } return bucket } // GetDir will retrieve data of a cached directory func (b *Persistent) GetDir(remote string) (*Directory, error) { cd := &Directory{} err := b.db.View(func(tx *bolt.Tx) error { bucket := b.getBucket(remote, false, tx) if bucket == nil { return errors.Errorf("couldn't open bucket (%v)", remote) } data := bucket.Get([]byte(".")) if data != nil { return json.Unmarshal(data, cd) } return errors.Errorf("%v not found", remote) }) return cd, err } // AddDir will update a CachedDirectory's metadata and all its entries func (b *Persistent) AddDir(cachedDir *Directory) error { return b.AddBatchDir([]*Directory{cachedDir}) } // AddBatchDir will update a list of CachedDirectories' metadata and all their entries func (b *Persistent) AddBatchDir(cachedDirs []*Directory) error { if len(cachedDirs) == 0 { return nil } return b.db.Update(func(tx *bolt.Tx) error { var bucket *bolt.Bucket if cachedDirs[0].Dir == "" { bucket = tx.Bucket([]byte(RootBucket)) } else { bucket = b.getBucket(cachedDirs[0].Dir, true, tx) } if bucket == nil { return errors.Errorf("couldn't open bucket (%v)", cachedDirs[0].Dir) } for _, cachedDir := range cachedDirs { var b *bolt.Bucket var err error if cachedDir.Name == "" { b = bucket } else { b, err = bucket.CreateBucketIfNotExists([]byte(cachedDir.Name)) } if err != nil { return err } encoded, err := json.Marshal(cachedDir) if err != nil { return errors.Errorf("couldn't
marshal object (%v): %v", cachedDir, err) } err = b.Put([]byte("."), encoded) if err != nil { return err } } return nil }) } // GetDirEntries will return a CachedDirectory, its list of dir entries and/or an error if it encountered issues func (b *Persistent) GetDirEntries(cachedDir *Directory) (fs.DirEntries, error) { var dirEntries fs.DirEntries err := b.db.View(func(tx *bolt.Tx) error { bucket := b.getBucket(cachedDir.abs(), false, tx) if bucket == nil { return errors.Errorf("couldn't open bucket (%v)", cachedDir.abs()) } val := bucket.Get([]byte(".")) if val != nil { err := json.Unmarshal(val, cachedDir) if err != nil { return errors.Errorf("error during unmarshalling obj: %v", err) } } else { return errors.Errorf("missing cached dir: %v", cachedDir) } c := bucket.Cursor() for k, v := c.First(); k != nil; k, v = c.Next() { // ignore metadata key: . if bytes.Equal(k, []byte(".")) { continue } entryPath := path.Join(cachedDir.Remote(), string(k)) if v == nil { // directory // we try to find a cached meta for the dir currentBucket := c.Bucket().Bucket(k) if currentBucket == nil { return errors.Errorf("couldn't open bucket (%v)", string(k)) } metaKey := currentBucket.Get([]byte(".")) d := NewDirectory(cachedDir.CacheFs, entryPath) if metaKey != nil { //if we don't find it, we create an empty dir err := json.Unmarshal(metaKey, d) if err != nil { // if even this fails, we fallback to an empty dir fs.Debugf(string(k), "error during unmarshalling obj: %v", err) } } dirEntries = append(dirEntries, d) } else { // object o := NewObject(cachedDir.CacheFs, entryPath) err := json.Unmarshal(v, o) if err != nil { fs.Debugf(string(k), "error during unmarshalling obj: %v", err) continue } dirEntries = append(dirEntries, o) } } return nil }) return dirEntries, err } // RemoveDir will delete a CachedDirectory, all its objects and all the chunks stored for it func (b *Persistent) RemoveDir(fp string) error { var err error parentDir, dirName := path.Split(fp) if fp == "" { err = b.db.Update(func(tx *bolt.Tx) error { err := tx.DeleteBucket([]byte(RootBucket)) if err != nil { fs.Debugf(fp, "couldn't delete from cache: %v", err) return err } _, _ = tx.CreateBucketIfNotExists([]byte(RootBucket)) return nil }) } else { err = b.db.Update(func(tx *bolt.Tx) error { bucket := b.getBucket(cleanPath(parentDir), false, tx) if bucket == nil { return errors.Errorf("couldn't open bucket (%v)", fp) } // delete the cached dir err := bucket.DeleteBucket([]byte(cleanPath(dirName))) if err != nil { fs.Debugf(fp, "couldn't delete from cache: %v", err) } return nil }) } // delete chunks on disk // safe to ignore as the files might not have been open if err == nil { _ = os.RemoveAll(path.Join(b.dataPath, fp)) _ = os.MkdirAll(b.dataPath, os.ModePerm) } return err } // ExpireDir will flush a CachedDirectory and all its objects from the objects // chunks will remain as they are func (b *Persistent) ExpireDir(cd *Directory) error { t := time.Now().Add(time.Duration(-cd.CacheFs.opt.InfoAge)) cd.CacheTs = &t // expire all parents return b.db.Update(func(tx *bolt.Tx) error { // expire all the parents currentDir := cd.abs() for { // until we get to the root bucket := b.getBucket(currentDir, false, tx) if bucket != nil { val := bucket.Get([]byte(".")) if val != nil { cd2 := &Directory{CacheFs: cd.CacheFs} err := json.Unmarshal(val, cd2) if err == nil { fs.Debugf(cd, "cache: expired %v", currentDir) cd2.CacheTs = &t enc2, _ := json.Marshal(cd2) _ = bucket.Put([]byte("."), enc2) } } } if currentDir == "" { break } currentDir = 
cleanPath(path.Dir(currentDir)) } return nil }) } // GetObject will return a CachedObject from its parent directory or an error if it doesn't find it func (b *Persistent) GetObject(cachedObject *Object) (err error) { return b.db.View(func(tx *bolt.Tx) error { bucket := b.getBucket(cachedObject.Dir, false, tx) if bucket == nil { return errors.Errorf("couldn't open parent bucket for %v", cachedObject.Dir) } val := bucket.Get([]byte(cachedObject.Name)) if val != nil { return json.Unmarshal(val, cachedObject) } return errors.Errorf("couldn't find object (%v)", cachedObject.Name) }) } // AddObject will create a cached object in its parent directory func (b *Persistent) AddObject(cachedObject *Object) error { return b.db.Update(func(tx *bolt.Tx) error { bucket := b.getBucket(cachedObject.Dir, true, tx) if bucket == nil { return errors.Errorf("couldn't open parent bucket for %v", cachedObject) } // cache Object Info encoded, err := json.Marshal(cachedObject) if err != nil { return errors.Errorf("couldn't marshal object (%v) info: %v", cachedObject, err) } err = bucket.Put([]byte(cachedObject.Name), encoded) if err != nil { return errors.Errorf("couldn't cache object (%v) info: %v", cachedObject, err) } return nil }) } // RemoveObject will delete a single cached object and all the chunks which belong to it func (b *Persistent) RemoveObject(fp string) error { parentDir, objName := path.Split(fp) return b.db.Update(func(tx *bolt.Tx) error { bucket := b.getBucket(cleanPath(parentDir), false, tx) if bucket == nil { return errors.Errorf("couldn't open parent bucket for %v", cleanPath(parentDir)) } err := bucket.Delete([]byte(cleanPath(objName))) if err != nil { fs.Debugf(fp, "couldn't delete obj from storage: %v", err) } // delete chunks on disk // safe to ignore as the file might not have been open _ = os.RemoveAll(path.Join(b.dataPath, fp)) return nil }) } // ExpireObject will flush an Object and all its data if desired func (b *Persistent) ExpireObject(co *Object, withData bool) error { co.CacheTs = time.Now().Add(time.Duration(-co.CacheFs.opt.InfoAge)) err := b.AddObject(co) if withData { _ = os.RemoveAll(path.Join(b.dataPath, co.abs())) } return err } // HasEntry confirms the existence of a single entry (dir or object) func (b *Persistent) HasEntry(remote string) bool { dir, name := path.Split(remote) dir = cleanPath(dir) name = cleanPath(name) err := b.db.View(func(tx *bolt.Tx) error { bucket := b.getBucket(dir, false, tx) if bucket == nil { return errors.Errorf("couldn't open parent bucket for %v", remote) } if f := bucket.Bucket([]byte(name)); f != nil { return nil } if f := bucket.Get([]byte(name)); f != nil { return nil } return errors.Errorf("couldn't find object (%v)", remote) }) if err == nil { return true } return false } // HasChunk confirms the existence of a single chunk of an object func (b *Persistent) HasChunk(cachedObject *Object, offset int64) bool { fp := path.Join(b.dataPath, cachedObject.abs(), strconv.FormatInt(offset, 10)) if _, err := os.Stat(fp); !os.IsNotExist(err) { return true } return false } // GetChunk will retrieve a single chunk which belongs to a cached object or an error if it doesn't find it func (b *Persistent) GetChunk(cachedObject *Object, offset int64) ([]byte, error) { var data []byte fp := path.Join(b.dataPath, cachedObject.abs(), strconv.FormatInt(offset, 10)) data, err := ioutil.ReadFile(fp) if err != nil { return nil, err } return data, err } // AddChunk adds a new chunk of a cached object func (b *Persistent) AddChunk(fp string, data []byte, offset 
int64) error { _ = os.MkdirAll(path.Join(b.dataPath, fp), os.ModePerm) filePath := path.Join(b.dataPath, fp, strconv.FormatInt(offset, 10)) err := ioutil.WriteFile(filePath, data, os.ModePerm) if err != nil { return err } return b.db.Update(func(tx *bolt.Tx) error { tsBucket := tx.Bucket([]byte(DataTsBucket)) ts := time.Now() found := false // delete (older) timestamps for the same object c := tsBucket.Cursor() for k, v := c.First(); k != nil; k, v = c.Next() { var ci chunkInfo err = json.Unmarshal(v, &ci) if err != nil { continue } if ci.Path == fp && ci.Offset == offset { if tsInCache := time.Unix(0, btoi(k)); tsInCache.After(ts) && !found { found = true continue } err := c.Delete() if err != nil { fs.Debugf(fp, "failed to clean chunk: %v", err) } } } // don't overwrite if a newer one is already there if found { return nil } enc, err := json.Marshal(chunkInfo{Path: fp, Offset: offset, Size: int64(len(data))}) if err != nil { fs.Debugf(fp, "failed to timestamp chunk: %v", err) } err = tsBucket.Put(itob(ts.UnixNano()), enc) if err != nil { fs.Debugf(fp, "failed to timestamp chunk: %v", err) } return nil }) } // CleanChunksByAge will cleanup on a cron basis func (b *Persistent) CleanChunksByAge(chunkAge time.Duration) { // NOOP } // CleanChunksByNeed is a noop for this implementation func (b *Persistent) CleanChunksByNeed(offset int64) { // noop: we want to clean a Bolt DB by time only } // CleanChunksBySize will cleanup chunks after the total size passes a certain point func (b *Persistent) CleanChunksBySize(maxSize int64) { b.cleanupMux.Lock() defer b.cleanupMux.Unlock() var cntChunks int var roughlyCleaned fs.SizeSuffix err := b.db.Update(func(tx *bolt.Tx) error { dataTsBucket := tx.Bucket([]byte(DataTsBucket)) if dataTsBucket == nil { return errors.Errorf("Couldn't open (%v) bucket", DataTsBucket) } // iterate through ts c := dataTsBucket.Cursor() totalSize := int64(0) for k, v := c.First(); k != nil; k, v = c.Next() { var ci chunkInfo err := json.Unmarshal(v, &ci) if err != nil { continue } totalSize += ci.Size } if totalSize > maxSize { needToClean := totalSize - maxSize roughlyCleaned = fs.SizeSuffix(needToClean) for k, v := c.First(); k != nil; k, v = c.Next() { var ci chunkInfo err := json.Unmarshal(v, &ci) if err != nil { continue } // delete this ts entry err = c.Delete() if err != nil { fs.Errorf(ci.Path, "failed deleting chunk ts during cleanup (%v): %v", ci.Offset, err) continue } err = os.Remove(path.Join(b.dataPath, ci.Path, strconv.FormatInt(ci.Offset, 10))) if err == nil { cntChunks++ needToClean -= ci.Size if needToClean <= 0 { break } } } } if cntChunks > 0 { fs.Infof("cache-cleanup", "chunks %v, est. 
size: %v", cntChunks, roughlyCleaned.String()) } return nil }) if err != nil { if err == bolt.ErrDatabaseNotOpen { // we're likely a late janitor and we need to end quietly as there's no guarantee of what exists anymore return } fs.Errorf("cache", "cleanup failed: %v", err) } } // Stats returns a go map with the stats key values func (b *Persistent) Stats() (map[string]map[string]interface{}, error) { r := make(map[string]map[string]interface{}) r["data"] = make(map[string]interface{}) r["data"]["oldest-ts"] = time.Now() r["data"]["oldest-file"] = "" r["data"]["newest-ts"] = time.Now() r["data"]["newest-file"] = "" r["data"]["total-chunks"] = 0 r["data"]["total-size"] = int64(0) r["files"] = make(map[string]interface{}) r["files"]["oldest-ts"] = time.Now() r["files"]["oldest-name"] = "" r["files"]["newest-ts"] = time.Now() r["files"]["newest-name"] = "" r["files"]["total-files"] = 0 _ = b.db.View(func(tx *bolt.Tx) error { dataTsBucket := tx.Bucket([]byte(DataTsBucket)) rootTsBucket := tx.Bucket([]byte(RootTsBucket)) var totalDirs int var totalFiles int _ = b.iterateBuckets(tx.Bucket([]byte(RootBucket)), func(name string) { totalDirs++ }, func(key string, val []byte) { totalFiles++ }) r["files"]["total-dir"] = totalDirs r["files"]["total-files"] = totalFiles c := dataTsBucket.Cursor() totalChunks := 0 totalSize := int64(0) for k, v := c.First(); k != nil; k, v = c.Next() { var ci chunkInfo err := json.Unmarshal(v, &ci) if err != nil { continue } totalChunks++ totalSize += ci.Size } r["data"]["total-chunks"] = totalChunks r["data"]["total-size"] = totalSize if k, v := c.First(); k != nil { var ci chunkInfo _ = json.Unmarshal(v, &ci) r["data"]["oldest-ts"] = time.Unix(0, btoi(k)) r["data"]["oldest-file"] = ci.Path } if k, v := c.Last(); k != nil { var ci chunkInfo _ = json.Unmarshal(v, &ci) r["data"]["newest-ts"] = time.Unix(0, btoi(k)) r["data"]["newest-file"] = ci.Path } c = rootTsBucket.Cursor() if k, v := c.First(); k != nil { // split to get (abs path - offset) r["files"]["oldest-ts"] = time.Unix(0, btoi(k)) r["files"]["oldest-name"] = string(v) } if k, v := c.Last(); k != nil { r["files"]["newest-ts"] = time.Unix(0, btoi(k)) r["files"]["newest-name"] = string(v) } return nil }) return r, nil } // Purge will flush the entire cache func (b *Persistent) Purge() { b.cleanupMux.Lock() defer b.cleanupMux.Unlock() _ = b.db.Update(func(tx *bolt.Tx) error { _ = tx.DeleteBucket([]byte(RootBucket)) _ = tx.DeleteBucket([]byte(RootTsBucket)) _ = tx.DeleteBucket([]byte(DataTsBucket)) _, _ = tx.CreateBucketIfNotExists([]byte(RootBucket)) _, _ = tx.CreateBucketIfNotExists([]byte(RootTsBucket)) _, _ = tx.CreateBucketIfNotExists([]byte(DataTsBucket)) return nil }) err := os.RemoveAll(b.dataPath) if err != nil { fs.Errorf(b, "issue removing data folder: %v", err) } err = os.MkdirAll(b.dataPath, os.ModePerm) if err != nil { fs.Errorf(b, "issue removing data folder: %v", err) } } // GetChunkTs retrieves the current timestamp of this chunk func (b *Persistent) GetChunkTs(path string, offset int64) (time.Time, error) { var t time.Time err := b.db.View(func(tx *bolt.Tx) error { tsBucket := tx.Bucket([]byte(DataTsBucket)) c := tsBucket.Cursor() for k, v := c.First(); k != nil; k, v = c.Next() { var ci chunkInfo err := json.Unmarshal(v, &ci) if err != nil { continue } if ci.Path == path && ci.Offset == offset { t = time.Unix(0, btoi(k)) return nil } } return errors.Errorf("not found %v-%v", path, offset) }) return t, err } func (b *Persistent) iterateBuckets(buk *bolt.Bucket, bucketFn func(name string), kvFn 
func(key string, val []byte)) error { err := b.db.View(func(tx *bolt.Tx) error { var c *bolt.Cursor if buk == nil { c = tx.Cursor() } else { c = buk.Cursor() } for k, v := c.First(); k != nil; k, v = c.Next() { if v == nil { var buk2 *bolt.Bucket if buk == nil { buk2 = tx.Bucket(k) } else { buk2 = buk.Bucket(k) } bucketFn(string(k)) _ = b.iterateBuckets(buk2, bucketFn, kvFn) } else { kvFn(string(k), v) } } return nil }) return err } // addPendingUpload adds a new file to the pending queue of uploads func (b *Persistent) addPendingUpload(destPath string, started bool) error { return b.db.Update(func(tx *bolt.Tx) error { bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) if err != nil { return errors.Errorf("couldn't bucket for %v", tempBucket) } tempObj := &tempUploadInfo{ DestPath: destPath, AddedOn: time.Now(), Started: started, } // cache Object Info encoded, err := json.Marshal(tempObj) if err != nil { return errors.Errorf("couldn't marshal object (%v) info: %v", destPath, err) } err = bucket.Put([]byte(destPath), encoded) if err != nil { return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err) } return nil }) } // getPendingUpload returns the next file from the pending queue of uploads func (b *Persistent) getPendingUpload(inRoot string, waitTime time.Duration) (destPath string, err error) { b.tempQueueMux.Lock() defer b.tempQueueMux.Unlock() err = b.db.Update(func(tx *bolt.Tx) error { bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) if err != nil { return errors.Errorf("couldn't bucket for %v", tempBucket) } c := bucket.Cursor() for k, v := c.Seek([]byte(inRoot)); k != nil && bytes.HasPrefix(k, []byte(inRoot)); k, v = c.Next() { //for k, v := c.First(); k != nil; k, v = c.Next() { var tempObj = &tempUploadInfo{} err = json.Unmarshal(v, tempObj) if err != nil { fs.Errorf(b, "failed to read pending upload: %v", err) continue } // skip over started uploads if tempObj.Started || time.Now().Before(tempObj.AddedOn.Add(waitTime)) { continue } tempObj.Started = true v2, err := json.Marshal(tempObj) if err != nil { fs.Errorf(b, "failed to update pending upload: %v", err) continue } err = bucket.Put(k, v2) if err != nil { fs.Errorf(b, "failed to update pending upload: %v", err) continue } destPath = tempObj.DestPath return nil } return errors.Errorf("no pending upload found") }) return destPath, err } // SearchPendingUpload returns the file info from the pending queue of uploads func (b *Persistent) SearchPendingUpload(remote string) (started bool, err error) { err = b.db.View(func(tx *bolt.Tx) error { bucket := tx.Bucket([]byte(tempBucket)) if bucket == nil { return errors.Errorf("couldn't bucket for %v", tempBucket) } var tempObj = &tempUploadInfo{} v := bucket.Get([]byte(remote)) err = json.Unmarshal(v, tempObj) if err != nil { return errors.Errorf("pending upload (%v) not found %v", remote, err) } started = tempObj.Started return nil }) return started, err } // searchPendingUploadFromDir files currently pending upload from a single dir func (b *Persistent) searchPendingUploadFromDir(dir string) (remotes []string, err error) { err = b.db.View(func(tx *bolt.Tx) error { bucket := tx.Bucket([]byte(tempBucket)) if bucket == nil { return errors.Errorf("couldn't bucket for %v", tempBucket) } c := bucket.Cursor() for k, v := c.First(); k != nil; k, v = c.Next() { var tempObj = &tempUploadInfo{} err = json.Unmarshal(v, tempObj) if err != nil { fs.Errorf(b, "failed to read pending upload: %v", err) continue } parentDir := cleanPath(path.Dir(tempObj.DestPath)) 
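// only direct children of dir are collected; queued files in deeper
// subdirectories have a different parent path and are skipped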
if dir == parentDir { remotes = append(remotes, tempObj.DestPath) } } return nil }) return remotes, err } func (b *Persistent) rollbackPendingUpload(remote string) error { b.tempQueueMux.Lock() defer b.tempQueueMux.Unlock() return b.db.Update(func(tx *bolt.Tx) error { bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) if err != nil { return errors.Errorf("couldn't create bucket for %v: %v", tempBucket, err) } var tempObj = &tempUploadInfo{} v := bucket.Get([]byte(remote)) err = json.Unmarshal(v, tempObj) if err != nil { return errors.Errorf("pending upload (%v) not found: %v", remote, err) } tempObj.Started = false v2, err := json.Marshal(tempObj) if err != nil { return errors.Errorf("pending upload not updated: %v", err) } err = bucket.Put([]byte(tempObj.DestPath), v2) if err != nil { return errors.Errorf("pending upload not updated: %v", err) } return nil }) } func (b *Persistent) removePendingUpload(remote string) error { b.tempQueueMux.Lock() defer b.tempQueueMux.Unlock() return b.db.Update(func(tx *bolt.Tx) error { bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) if err != nil { return errors.Errorf("couldn't create bucket for %v: %v", tempBucket, err) } return bucket.Delete([]byte(remote)) }) } // updatePendingUpload updates an existing item in the queue, checking within the same // transaction that the upload has not already started. If it has started, the update is refused func (b *Persistent) updatePendingUpload(remote string, fn func(item *tempUploadInfo) error) error { b.tempQueueMux.Lock() defer b.tempQueueMux.Unlock() return b.db.Update(func(tx *bolt.Tx) error { bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) if err != nil { return errors.Errorf("couldn't create bucket for %v: %v", tempBucket, err) } var tempObj = &tempUploadInfo{} v := bucket.Get([]byte(remote)) err = json.Unmarshal(v, tempObj) if err != nil { return errors.Errorf("pending upload (%v) not found: %v", remote, err) } if tempObj.Started { return errors.Errorf("pending upload already started: %v", remote) } err = fn(tempObj) if err != nil { return err } if remote != tempObj.DestPath { err := bucket.Delete([]byte(remote)) if err != nil { return err } // if fn cleared the destination path then drop the entry entirely if tempObj.DestPath == "" { return nil } } v2, err := json.Marshal(tempObj) if err != nil { return errors.Errorf("pending upload not updated: %v", err) } err = bucket.Put([]byte(tempObj.DestPath), v2) if err != nil { return errors.Errorf("pending upload not updated: %v", err) } return nil }) } // ReconcileTempUploads will recursively look for all the files in the temp directory and add them to the queue func (b *Persistent) ReconcileTempUploads(ctx context.Context, cacheFs *Fs) error { return b.db.Update(func(tx *bolt.Tx) error { _ = tx.DeleteBucket([]byte(tempBucket)) bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) if err != nil { return err } var queuedEntries []fs.Object err = walk.ListR(ctx, cacheFs.tempFs, "", true, -1, walk.ListObjects, func(entries fs.DirEntries) error { for _, o := range entries { if oo, ok := o.(fs.Object); ok { queuedEntries = append(queuedEntries, oo) } } return nil }) if err != nil { return err } fs.Debugf(cacheFs, "reconciling temporary uploads") for _, queuedEntry := range queuedEntries { destPath := path.Join(cacheFs.Root(), queuedEntry.Remote()) tempObj := &tempUploadInfo{ DestPath: destPath, AddedOn: time.Now(), Started: false, } // cache Object Info encoded, err := json.Marshal(tempObj) if err != nil { return errors.Errorf("couldn't marshal object (%v) info: %v", queuedEntry, err) }
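// store the reconciled entry in the temp bucket keyed by its destination path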
err = bucket.Put([]byte(destPath), encoded) if err != nil { return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err) } fs.Debugf(cacheFs, "reconciled temporary upload: %v", destPath) } return nil }) } // Close should be called when the program ends gracefully func (b *Persistent) Close() { b.cleanupMux.Lock() defer b.cleanupMux.Unlock() err := b.db.Close() if err != nil { fs.Errorf(b, "closing handle: %v", err) } b.open = false } // itob returns an 8-byte big endian representation of v. func itob(v int64) []byte { b := make([]byte, 8) binary.BigEndian.PutUint64(b, uint64(v)) return b } func btoi(d []byte) int64 { return int64(binary.BigEndian.Uint64(d)) } rclone-1.53.3/backend/cache/utils_test.go000066400000000000000000000012151375552240400202530ustar00rootroot00000000000000package cache import bolt "go.etcd.io/bbolt" // PurgeTempUploads will remove all the pending uploads from the queue func (b *Persistent) PurgeTempUploads() { b.tempQueueMux.Lock() defer b.tempQueueMux.Unlock() _ = b.db.Update(func(tx *bolt.Tx) error { _ = tx.DeleteBucket([]byte(tempBucket)) _, _ = tx.CreateBucketIfNotExists([]byte(tempBucket)) return nil }) } // SetPendingUploadToStarted is a way to mark an entry as started (even if it's not already) func (b *Persistent) SetPendingUploadToStarted(remote string) error { return b.updatePendingUpload(remote, func(item *tempUploadInfo) error { item.Started = true return nil }) } rclone-1.53.3/backend/chunker/000077500000000000000000000000001375552240400161225ustar00rootroot00000000000000rclone-1.53.3/backend/chunker/chunker.go000066400000000000000000002100511375552240400201070ustar00rootroot00000000000000// Package chunker provides wrappers for Fs and Object which split large files in chunks package chunker import ( "bytes" "context" "crypto/md5" "crypto/sha1" "encoding/hex" "encoding/json" "fmt" gohash "hash" "io" "io/ioutil" "math/rand" "path" "regexp" "sort" "strconv" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fspath" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" ) // // Chunker's composite files have one or more chunks // and optional metadata object. If it's present, // meta object is named after the original file. // // The only supported metadata format is simplejson atm. // It supports only per-file meta objects that are rudimentary, // used mostly for consistency checks (lazily for performance reasons). // Other formats can be developed that use an external meta store // free of these limitations, but this needs some support from // rclone core (eg. metadata store interfaces). // // The following types of chunks are supported: // data and control, active and temporary. // Chunk type is identified by matching chunk file name // based on the chunk name format configured by user. // // Both data and control chunks can be either temporary (aka hidden) // or active (non-temporary aka normal aka permanent). // An operation creates temporary chunks while it runs. // By completion it removes temporary and leaves active chunks. // // Temporary chunks have a special hardcoded suffix in addition // to the configured name pattern. 
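// For illustration (file name and transaction id assumed, not from the source):
// with the default name format "*.rclone_chunk.###" the first data chunk of
// "video.avi" written under a transaction could be named
// "video.avi.rclone_chunk.001_3f9a2k" while the operation runs, and becomes
// "video.avi.rclone_chunk.001" once the operation completes.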
// Temporary suffix includes so called transaction identifier // (abbreviated as `xactID` below), a generic non-negative base-36 "number" // used by parallel operations to share a composite object. // Chunker also accepts the longer decimal temporary suffix (obsolete), // which is transparently converted to the new format. In its maximum // length of 13 decimals it makes a 7-digit base-36 number. // // Chunker can tell data chunks from control chunks by the characters // located in the "hash placeholder" position of configured format. // Data chunks have decimal digits there. // Control chunks have in that position a short lowercase alphanumeric // string (starting with a letter) prepended by underscore. // // Metadata format v1 does not define any control chunk types, // they are currently ignored aka reserved. // In future they can be used to implement resumable uploads etc. // const ( ctrlTypeRegStr = `[a-z][a-z0-9]{2,6}` tempSuffixFormat = `_%04s` tempSuffixRegStr = `_([0-9a-z]{4,9})` tempSuffixRegOld = `\.\.tmp_([0-9]{10,13})` ) var ( // regular expressions to validate control type and temporary suffix ctrlTypeRegexp = regexp.MustCompile(`^` + ctrlTypeRegStr + `$`) tempSuffixRegexp = regexp.MustCompile(`^` + tempSuffixRegStr + `$`) ) // Normally metadata is a small piece of JSON (about 100-300 bytes). // The size of valid metadata must never exceed this limit. // Current maximum provides a reasonable room for future extensions. // // Please refrain from increasing it, this can cause old rclone versions // to fail, or worse, treat meta object as a normal file (see NewObject). // If more room is needed please bump metadata version forcing previous // releases to ask for upgrade, and offload extra info to a control chunk. // // And still chunker's primary function is to chunk large files // rather than serve as a generic metadata container. const maxMetadataSize = 255 // Current/highest supported metadata format. const metadataVersion = 1 // optimizeFirstChunk enables the following optimization in the Put: // If a single chunk is expected, put the first chunk using the // base target name instead of a temporary name, thus avoiding // extra rename operation. // Warning: this optimization is not transaction safe. const optimizeFirstChunk = false // revealHidden is a stub until chunker lands the `reveal hidden` option. const revealHidden = false // Prevent memory overflow due to specially crafted chunk name const maxSafeChunkNumber = 10000000 // Number of attempts to find unique transaction identifier const maxTransactionProbes = 100 // standard chunker errors var ( ErrChunkOverflow = errors.New("chunk number overflow") ) // variants of baseMove's parameter delMode const ( delNever = 0 // don't delete, just move delAlways = 1 // delete destination before moving delFailed = 2 // move, then delete and try again if failed ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "chunker", Description: "Transparently chunk/split large files", NewFs: NewFs, Options: []fs.Option{{ Name: "remote", Required: true, Help: `Remote to chunk/unchunk. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).`, }, { Name: "chunk_size", Advanced: false, Default: fs.SizeSuffix(2147483648), // 2GB Help: `Files larger than chunk size will be split in chunks.`, }, { Name: "name_format", Advanced: true, Default: `*.rclone_chunk.###`, Help: `String format of chunk file names. 
The two placeholders are: base file name (*) and chunk number (#...). There must be one and only one asterisk and one or more consecutive hash characters. If chunk number has less digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match given format.`, }, { Name: "start_from", Advanced: true, Default: 1, Help: `Minimum valid chunk number. Usually 0 or 1. By default chunk numbers start from 1.`, }, { Name: "meta_format", Advanced: true, Default: "simplejson", Help: `Format of the metadata object or "none". By default "simplejson". Metadata is a small JSON file named after the composite file.`, Examples: []fs.OptionExample{{ Value: "none", Help: `Do not use metadata files at all. Requires hash type "none".`, }, { Value: "simplejson", Help: `Simple JSON supports hash sums and chunk validation. It has the following fields: ver, size, nchunks, md5, sha1.`, }}, }, { Name: "hash_type", Advanced: false, Default: "md5", Help: `Choose how chunker handles hash sums. All modes but "none" require metadata.`, Examples: []fs.OptionExample{{ Value: "none", Help: `Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise`, }, { Value: "md5", Help: `MD5 for composite files`, }, { Value: "sha1", Help: `SHA1 for composite files`, }, { Value: "md5all", Help: `MD5 for all files`, }, { Value: "sha1all", Help: `SHA1 for all files`, }, { Value: "md5quick", Help: `Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported`, }, { Value: "sha1quick", Help: `Similar to "md5quick" but prefers SHA1 over MD5`, }}, }, { Name: "fail_hard", Advanced: true, Default: false, Help: `Choose how chunker should handle files with missing or invalid chunks.`, Examples: []fs.OptionExample{ { Value: "true", Help: "Report errors and abort current command.", }, { Value: "false", Help: "Warn user, skip incomplete file and proceed.", }, }, }}, }) } // NewFs constructs an Fs from the path, container:path func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } if opt.StartFrom < 0 { return nil, errors.New("start_from must be non-negative") } remote := opt.Remote if strings.HasPrefix(remote, name+":") { return nil, errors.New("can't point remote at itself - check the value of the remote setting") } baseName, basePath, err := fspath.Parse(remote) if err != nil { return nil, errors.Wrapf(err, "failed to parse remote %q to wrap", remote) } if baseName != "" { baseName += ":" } // Look for a file first remotePath := fspath.JoinRootPath(basePath, rpath) baseFs, err := cache.Get(baseName + remotePath) if err != fs.ErrorIsFile && err != nil { return nil, errors.Wrapf(err, "failed to make remote %q to wrap", baseName+remotePath) } if !operations.CanServerSideMove(baseFs) { return nil, errors.New("can't use chunker on a backend which doesn't support server side move or copy") } f := &Fs{ base: baseFs, name: name, root: rpath, opt: *opt, } cache.PinUntilFinalized(f.base, f) f.dirSort = true // processEntries requires that meta Objects prerun data chunks atm. if err := f.configure(opt.NameFormat, opt.MetaFormat, opt.HashType); err != nil { return nil, err } // Handle the tricky case detected by FsMkdir/FsPutFiles/FsIsFile // when `rpath` points to a composite multi-chunk file without metadata, // i.e. 
`rpath` does not exist in the wrapped remote, but chunker // detects a composite file because it finds the first chunk! // (yet can't satisfy fstest.CheckListing, will ignore) if err == nil && !f.useMeta && strings.Contains(rpath, "/") { firstChunkPath := f.makeChunkName(remotePath, 0, "", "") _, testErr := cache.Get(baseName + firstChunkPath) if testErr == fs.ErrorIsFile { err = testErr } } // Note 1: the features here are ones we could support, and they are // ANDed with the ones from wrappedFs. // Note 2: features.Fill() points features.PutStream to our PutStream, // but features.Mask() will nullify it if wrappedFs does not have it. f.features = (&fs.Features{ CaseInsensitive: true, DuplicateFiles: true, ReadMimeType: true, WriteMimeType: true, BucketBased: true, CanHaveEmptyDirectories: true, ServerSideAcrossConfigs: true, }).Fill(f).Mask(baseFs).WrapsFs(f, baseFs) f.features.Disable("ListR") // Recursive listing may cause chunker skip files return f, err } // Options defines the configuration for this backend type Options struct { Remote string `config:"remote"` ChunkSize fs.SizeSuffix `config:"chunk_size"` NameFormat string `config:"name_format"` StartFrom int `config:"start_from"` MetaFormat string `config:"meta_format"` HashType string `config:"hash_type"` FailHard bool `config:"fail_hard"` } // Fs represents a wrapped fs.Fs type Fs struct { name string root string base fs.Fs // remote wrapped by chunker overlay wrapper fs.Fs // wrapper is used by SetWrapper useMeta bool // false if metadata format is 'none' useMD5 bool // mutually exclusive with useSHA1 useSHA1 bool // mutually exclusive with useMD5 hashFallback bool // allows fallback from MD5 to SHA1 and vice versa hashAll bool // hash all files, mutually exclusive with hashFallback dataNameFmt string // name format of data chunks ctrlNameFmt string // name format of control chunks nameRegexp *regexp.Regexp // regular expression to match chunk names xactIDRand *rand.Rand // generator of random transaction identifiers xactIDMutex sync.Mutex // mutex for the source of randomness opt Options // copy of Options features *fs.Features // optional features dirSort bool // reserved for future, ignored } // configure sets up chunker for given name format, meta format and hash type. // It also seeds the source of random transaction identifiers. // configure must be called only from NewFs or by unit tests. func (f *Fs) configure(nameFormat, metaFormat, hashType string) error { if err := f.setChunkNameFormat(nameFormat); err != nil { return errors.Wrapf(err, "invalid name format '%s'", nameFormat) } if err := f.setMetaFormat(metaFormat); err != nil { return err } if err := f.setHashType(hashType); err != nil { return err } randomSeed := time.Now().UnixNano() f.xactIDRand = rand.New(rand.NewSource(randomSeed)) return nil } func (f *Fs) setMetaFormat(metaFormat string) error { switch metaFormat { case "none": f.useMeta = false case "simplejson": f.useMeta = true default: return fmt.Errorf("unsupported meta format '%s'", metaFormat) } return nil } // setHashType // must be called *after* setMetaFormat. // // In the "All" mode chunker will force metadata on all files // if the wrapped remote can't provide given hashsum. 
func (f *Fs) setHashType(hashType string) error { f.useMD5 = false f.useSHA1 = false f.hashFallback = false f.hashAll = false requireMetaHash := true switch hashType { case "none": requireMetaHash = false case "md5": f.useMD5 = true case "sha1": f.useSHA1 = true case "md5quick": f.useMD5 = true f.hashFallback = true case "sha1quick": f.useSHA1 = true f.hashFallback = true case "md5all": f.useMD5 = true f.hashAll = !f.base.Hashes().Contains(hash.MD5) case "sha1all": f.useSHA1 = true f.hashAll = !f.base.Hashes().Contains(hash.SHA1) default: return fmt.Errorf("unsupported hash type '%s'", hashType) } if requireMetaHash && !f.useMeta { return fmt.Errorf("hash type '%s' requires compatible meta format", hashType) } return nil } // setChunkNameFormat converts pattern based chunk name format // into Printf format and Regular expressions for data and // control chunks. func (f *Fs) setChunkNameFormat(pattern string) error { // validate pattern if strings.Count(pattern, "*") != 1 { return errors.New("pattern must have exactly one asterisk (*)") } numDigits := strings.Count(pattern, "#") if numDigits < 1 { return errors.New("pattern must have a hash character (#)") } if strings.Index(pattern, "*") > strings.Index(pattern, "#") { return errors.New("asterisk (*) in pattern must come before hashes (#)") } if ok, _ := regexp.MatchString("^[^#]*[#]+[^#]*$", pattern); !ok { return errors.New("hashes (#) in pattern must be consecutive") } if dir, _ := path.Split(pattern); dir != "" { return errors.New("directory separator prohibited") } if pattern[0] != '*' { return errors.New("pattern must start with asterisk") // to be lifted later } // craft a unified regular expression for all types of chunks reHashes := regexp.MustCompile("[#]+") reDigits := "[0-9]+" if numDigits > 1 { reDigits = fmt.Sprintf("[0-9]{%d,}", numDigits) } reDataOrCtrl := fmt.Sprintf("(?:(%s)|_(%s))", reDigits, ctrlTypeRegStr) // this must be non-greedy or else it could eat up temporary suffix const mainNameRegStr = "(.+?)" strRegex := regexp.QuoteMeta(pattern) strRegex = reHashes.ReplaceAllLiteralString(strRegex, reDataOrCtrl) strRegex = strings.Replace(strRegex, "\\*", mainNameRegStr, -1) strRegex = fmt.Sprintf("^%s(?:%s|%s)?$", strRegex, tempSuffixRegStr, tempSuffixRegOld) f.nameRegexp = regexp.MustCompile(strRegex) // craft printf formats for active data/control chunks fmtDigits := "%d" if numDigits > 1 { fmtDigits = fmt.Sprintf("%%0%dd", numDigits) } strFmt := strings.Replace(pattern, "%", "%%", -1) strFmt = strings.Replace(strFmt, "*", "%s", 1) f.dataNameFmt = reHashes.ReplaceAllLiteralString(strFmt, fmtDigits) f.ctrlNameFmt = reHashes.ReplaceAllLiteralString(strFmt, "_%s") return nil } // makeChunkName produces chunk name (or path) for a given file. // // filePath can be name, relative or absolute path of main file. // // chunkNo must be a zero based index of data chunk. // Negative chunkNo eg. -1 indicates a control chunk. // ctrlType is type of control chunk (must be valid). // ctrlType must be "" for data chunks. // // xactID is a transaction identifier. Empty xactID denotes active chunk, // otherwise temporary chunk name is produced. 
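// As illustrations (paths and ids assumed) with the default pattern
// "*.rclone_chunk.###" and start_from 1:
//   makeChunkName("dir/file", 0, "", "")       -> "dir/file.rclone_chunk.001"
//   makeChunkName("dir/file", 2, "", "g97x1b") -> "dir/file.rclone_chunk.003_g97x1b"
//   makeChunkName("dir/file", -1, "meta", "")  -> "dir/file.rclone_chunk._meta"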
// func (f *Fs) makeChunkName(filePath string, chunkNo int, ctrlType, xactID string) string { dir, parentName := path.Split(filePath) var name, tempSuffix string switch { case chunkNo >= 0 && ctrlType == "": name = fmt.Sprintf(f.dataNameFmt, parentName, chunkNo+f.opt.StartFrom) case chunkNo < 0 && ctrlTypeRegexp.MatchString(ctrlType): name = fmt.Sprintf(f.ctrlNameFmt, parentName, ctrlType) default: panic("makeChunkName: invalid argument") // must not produce something we can't consume } if xactID != "" { tempSuffix = fmt.Sprintf(tempSuffixFormat, xactID) if !tempSuffixRegexp.MatchString(tempSuffix) { panic("makeChunkName: invalid argument") } } return dir + name + tempSuffix } // parseChunkName checks whether given file path belongs to // a chunk and extracts chunk name parts. // // filePath can be name, relative or absolute path of a file. // // Returned parentPath is path of the composite file owning the chunk. // It's a non-empty string if valid chunk name is detected // or "" if it's not a chunk. // Other returned values depend on detected chunk type: // data or control, active or temporary: // // data chunk - the returned chunkNo is non-negative and ctrlType is "" // control chunk - the chunkNo is -1 and ctrlType is a non-empty string // active chunk - the returned xactID is "" // temporary chunk - the xactID is a non-empty string func (f *Fs) parseChunkName(filePath string) (parentPath string, chunkNo int, ctrlType, xactID string) { dir, name := path.Split(filePath) match := f.nameRegexp.FindStringSubmatch(name) if match == nil || match[1] == "" { return "", -1, "", "" } var err error chunkNo = -1 if match[2] != "" { if chunkNo, err = strconv.Atoi(match[2]); err != nil { chunkNo = -1 } if chunkNo -= f.opt.StartFrom; chunkNo < 0 { fs.Infof(f, "invalid data chunk number in file %q", name) return "", -1, "", "" } } if match[4] != "" { xactID = match[4] } if match[5] != "" { // old-style temporary suffix number, err := strconv.ParseInt(match[5], 10, 64) if err != nil || number < 0 { fs.Infof(f, "invalid old-style transaction number in file %q", name) return "", -1, "", "" } // convert old-style transaction number to base-36 transaction ID xactID = fmt.Sprintf(tempSuffixFormat, strconv.FormatInt(number, 36)) xactID = xactID[1:] // strip leading underscore } parentPath = dir + match[1] ctrlType = match[3] return } // forbidChunk prints error message or raises error if file is chunk. // First argument sets log prefix, use `false` to suppress message. func (f *Fs) forbidChunk(o interface{}, filePath string) error { if parentPath, _, _, _ := f.parseChunkName(filePath); parentPath != "" { if f.opt.FailHard { return fmt.Errorf("chunk overlap with %q", parentPath) } if boolVal, isBool := o.(bool); !isBool || boolVal { fs.Errorf(o, "chunk overlap with %q", parentPath) } } return nil } // newXactID produces a sufficiently random transaction identifier. // // The temporary suffix mask allows identifiers consisting of 4-9 // base-36 digits (ie. digits 0-9 or lowercase letters a-z). // The identifiers must be unique between transactions running on // the single file in parallel. // // Currently the function produces 6-character identifiers. // Together with underscore this makes a 7-character temporary suffix. // // The first 4 characters isolate groups of transactions by time intervals. // The maximum length of interval is base-36 "zzzz" ie. 1,679,615 seconds. 
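// ("zzzz" in base-36 is 36^4 - 1 = 1,679,615; counted in seconds this is roughly 19.4 days.)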
// Instead, the function takes the largest prime not exceeding this number, // 1,679,609 (see https://primes.utm.edu), as the interval length to better safeguard // against repeating pseudo-random sequences in cases where rclone is // invoked from a periodic scheduler like unix cron. // Thus, the interval is slightly more than 19 days 10 hours 33 minutes. // // The remaining 2 base-36 digits (in the range from 0 to 1295 inclusive) // are taken from the local random source. // This provides about 0.1% collision probability for two parallel // operations started at the same second and working on the same file. // // A non-empty filePath argument enables probing for an existing temporary chunk // to further eliminate collisions. func (f *Fs) newXactID(ctx context.Context, filePath string) (xactID string, err error) { const closestPrimeZzzzSeconds = 1679609 const maxTwoBase36Digits = 1295 unixSec := time.Now().Unix() if unixSec < 0 { unixSec = -unixSec // unlikely but the number must be positive } circleSec := unixSec % closestPrimeZzzzSeconds first4chars := strconv.FormatInt(circleSec, 36) for tries := 0; tries < maxTransactionProbes; tries++ { f.xactIDMutex.Lock() randomness := f.xactIDRand.Int63n(maxTwoBase36Digits + 1) f.xactIDMutex.Unlock() last2chars := strconv.FormatInt(randomness, 36) xactID = fmt.Sprintf("%04s%02s", first4chars, last2chars) if filePath == "" { return } probeChunk := f.makeChunkName(filePath, 0, "", xactID) _, probeErr := f.base.NewObject(ctx, probeChunk) if probeErr != nil { return } } return "", fmt.Errorf("can't set up transaction for %s", filePath) } // List the objects and directories in dir into entries. The entries can be returned in any order but should be // for a complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't found. // // Commands normally clean up all temporary chunks in case of a failure. // However, if rclone dies unexpectedly, it can leave behind a bunch of // hidden temporary chunks. List and its underlying chunkEntries() // silently skip all temporary chunks in the directory. It's okay if // they belong to an unfinished command running in parallel. // // However, there is no way to discover dead temporary chunks atm. // As a workaround users can use `purge` to forcibly remove the whole // directory together with dead chunks. // In future a flag like `--chunker-list-hidden` may be added to // rclone to tell List to reveal hidden chunks. // func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { entries, err = f.base.List(ctx, dir) if err != nil { return nil, err } return f.processEntries(ctx, entries, dir) } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { do := f.base.Features().ListR return do(ctx, dir, func(entries fs.DirEntries) error { newEntries, err := f.processEntries(ctx, entries, dir) if err != nil { return err } return callback(newEntries) }) } // processEntries assembles chunk entries into composite entries func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirPath string) (newEntries fs.DirEntries, err error) { var sortedEntries fs.DirEntries if f.dirSort { // sort entries so that meta objects go before their chunks sortedEntries = make(fs.DirEntries, len(origEntries)) copy(sortedEntries, origEntries) sort.Sort(sortedEntries) } else { sortedEntries = origEntries } byRemote := make(map[string]*Object) badEntry := make(map[string]bool) isSubdir := make(map[string]bool) var tempEntries fs.DirEntries for _, dirOrObject := range sortedEntries { switch entry := dirOrObject.(type) { case fs.Object: remote := entry.Remote() if mainRemote, chunkNo, ctrlType, xactID := f.parseChunkName(remote); mainRemote != "" { if xactID != "" { if revealHidden { fs.Infof(f, "ignore temporary chunk %q", remote) } break } if ctrlType != "" { if revealHidden { fs.Infof(f, "ignore control chunk %q", remote) } break } mainObject := byRemote[mainRemote] if mainObject == nil && f.useMeta { fs.Debugf(f, "skip chunk %q without meta object", remote) break } if mainObject == nil { // useMeta is false - create chunked object without metadata mainObject = f.newObject(mainRemote, nil, nil) byRemote[mainRemote] = mainObject if !badEntry[mainRemote] { tempEntries = append(tempEntries, mainObject) } } if err := mainObject.addChunk(entry, chunkNo); err != nil { if f.opt.FailHard { return nil, err } badEntry[mainRemote] = true } break } object := f.newObject("", entry, nil) byRemote[remote] = object tempEntries = append(tempEntries, object) case fs.Directory: isSubdir[entry.Remote()] = true wrapDir := fs.NewDirCopy(ctx, entry) wrapDir.SetRemote(entry.Remote()) tempEntries = append(tempEntries, wrapDir) default: if f.opt.FailHard { return nil, fmt.Errorf("Unknown object type %T", entry) } fs.Debugf(f, "unknown object type %T", entry) } } for _, entry := range tempEntries { if object, ok := entry.(*Object); ok { remote := object.Remote() if isSubdir[remote] { if f.opt.FailHard { return nil, fmt.Errorf("%q is both meta object and directory", remote) } badEntry[remote] = true // fall thru } if badEntry[remote] { fs.Debugf(f, "invalid directory entry %q", remote) continue } if err := object.validate(); err != nil { if f.opt.FailHard { return nil, err } fs.Debugf(f, "invalid chunks in object %q", remote) continue } } newEntries = append(newEntries, entry) } if f.dirSort { sort.Sort(newEntries) } return newEntries, nil } // NewObject finds the Object at remote. // // Please note that every NewObject invocation will scan the whole directory. // Using here something like fs.DirCache might improve performance // (yet making the logic more complex). // // Note that chunker prefers analyzing file names rather than reading // the content of meta object assuming that directory scans are fast // but opening even a small file can be slow on some backends. 
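// As a sketch of the scan described above (names assumed): for remote "file" whose
// directory listing contains a small "file" object plus "file.rclone_chunk.001" and
// "file.rclone_chunk.002", chunker assembles a composite object with two data chunks
// and defers reading the meta object until a caller actually needs it.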
// func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { if err := f.forbidChunk(false, remote); err != nil { return nil, errors.Wrap(err, "can't access") } var ( o *Object baseObj fs.Object err error ) if f.useMeta { baseObj, err = f.base.NewObject(ctx, remote) if err != nil { return nil, err } remote = baseObj.Remote() // Chunker's meta object cannot be large and maxMetadataSize acts // as a hard limit. Anything larger than that is treated as a // non-chunked file without even checking its contents, so it's // paramount to prevent metadata from exceeding the maximum size. o = f.newObject("", baseObj, nil) if o.size > maxMetadataSize { return o, nil } } else { // Metadata is disabled, hence this is either a multi-chunk // composite file without meta object or a non-chunked file. // Create an empty wrapper here, scan directory to determine // which case it is and postpone reading if it's the latter one. o = f.newObject(remote, nil, nil) } // If the object is small, it's probably a meta object. // However, composite file must have data chunks besides it. // Scan directory for possible data chunks now and decide later on. dir := path.Dir(strings.TrimRight(remote, "/")) if dir == "." { dir = "" } entries, err := f.base.List(ctx, dir) switch err { case nil: // OK, fall thru case fs.ErrorDirNotFound: entries = nil default: return nil, errors.Wrap(err, "can't detect composite file") } for _, dirOrObject := range entries { entry, ok := dirOrObject.(fs.Object) if !ok { continue } entryRemote := entry.Remote() if !strings.Contains(entryRemote, remote) { continue // bypass regexp to save cpu } mainRemote, chunkNo, ctrlType, xactID := f.parseChunkName(entryRemote) if mainRemote == "" || mainRemote != remote || ctrlType != "" || xactID != "" { continue // skip non-conforming, temporary and control chunks } //fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo) if err := o.addChunk(entry, chunkNo); err != nil { return nil, err } } if o.main == nil && (o.chunks == nil || len(o.chunks) == 0) { // Scanning hasn't found data chunks with conforming names. if f.useMeta { // Metadata is required but absent and there are no chunks. return nil, fs.ErrorObjectNotFound } // Data chunks are not found and metadata is disabled. // Thus, we are in the "latter case" from above. // Let's try the postponed reading of a non-chunked file and add it // as a single chunk to the empty composite wrapper created above // with nil metadata. baseObj, err = f.base.NewObject(ctx, remote) if err == nil { err = o.addChunk(baseObj, 0) } if err != nil { return nil, err } } // This is either a composite object with metadata or a non-chunked // file without metadata. Validate it and update the total data size. // As an optimization, skip metadata reading here - we will call // readMetadata lazily when needed (reading can be expensive). 
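// (readMetadata is invoked lazily later by the methods that need verified
// metadata: Hash, SetModTime, Open, Update and Remove.)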
if err := o.validate(); err != nil { return nil, err } return o, nil } func (o *Object) readMetadata(ctx context.Context) error { if o.isFull { return nil } if !o.isComposite() || !o.f.useMeta { o.isFull = true return nil } // validate metadata metaObject := o.main reader, err := metaObject.Open(ctx) if err != nil { return err } metadata, err := ioutil.ReadAll(reader) _ = reader.Close() // ensure file handle is freed on windows if err != nil { return err } switch o.f.opt.MetaFormat { case "simplejson": metaInfo, err := unmarshalSimpleJSON(ctx, metaObject, metadata, true) if err != nil { return errors.Wrap(err, "invalid metadata") } if o.size != metaInfo.Size() || len(o.chunks) != metaInfo.nChunks { return errors.New("metadata doesn't match file size") } o.md5 = metaInfo.md5 o.sha1 = metaInfo.sha1 } o.isFull = true return nil } // put implements Put, PutStream, PutUnchecked, Update func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options []fs.OpenOption, basePut putFn) (obj fs.Object, err error) { c := f.newChunkingReader(src) wrapIn := c.wrapStream(ctx, in, src) var metaObject fs.Object defer func() { if err != nil { c.rollback(ctx, metaObject) } }() baseRemote := remote xactID, errXact := f.newXactID(ctx, baseRemote) if errXact != nil { return nil, errXact } // Transfer chunks data for c.chunkNo = 0; !c.done; c.chunkNo++ { if c.chunkNo > maxSafeChunkNumber { return nil, ErrChunkOverflow } tempRemote := f.makeChunkName(baseRemote, c.chunkNo, "", xactID) size := c.sizeLeft if size > c.chunkSize { size = c.chunkSize } savedReadCount := c.readCount // If a single chunk is expected, avoid the extra rename operation chunkRemote := tempRemote if c.expectSingle && c.chunkNo == 0 && optimizeFirstChunk { chunkRemote = baseRemote } info := f.wrapInfo(src, chunkRemote, size) // Refill chunkLimit and let basePut repeatedly call chunkingReader.Read() c.chunkLimit = c.chunkSize // TODO: handle range/limit options chunk, errChunk := basePut(ctx, wrapIn, info, options...) if errChunk != nil { return nil, errChunk } if size > 0 && c.readCount == savedReadCount && c.expectSingle { // basePut returned success but didn't call chunkingReader's Read. // This is possible if wrapped remote has performed the put by hash // because chunker bridges Hash from source for non-chunked files. // Hence, force Read here to update accounting and hashsums. if err := c.dummyRead(wrapIn, size); err != nil { return nil, err } } if c.sizeLeft == 0 && !c.done { // The file has been apparently put by hash, force completion. c.done = true } // Expected a single chunk but more to come, so name it as usual. if !c.done && chunkRemote != tempRemote { fs.Infof(chunk, "Expected single chunk, got more") chunkMoved, errMove := f.baseMove(ctx, chunk, tempRemote, delFailed) if errMove != nil { silentlyRemove(ctx, chunk) return nil, errMove } chunk = chunkMoved } // Wrapped remote may or may not have seen EOF from chunking reader, // eg. the box multi-uploader reads exactly the chunk size specified // and skips the "EOF" read. Hence, switch to next limit here. 
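// A partial leftover in chunkLimit at this point, with a known total size and
// more data expected, means the destination consumed fewer bytes than it was
// offered; the chunk is removed below instead of silently dropping data.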
if !(c.chunkLimit == 0 || c.chunkLimit == c.chunkSize || c.sizeTotal == -1 || c.done) { silentlyRemove(ctx, chunk) return nil, fmt.Errorf("Destination ignored %d data bytes", c.chunkLimit) } c.chunkLimit = c.chunkSize c.chunks = append(c.chunks, chunk) } // Validate uploaded size if c.sizeTotal != -1 && c.readCount != c.sizeTotal { return nil, fmt.Errorf("Incorrect upload size %d != %d", c.readCount, c.sizeTotal) } // Check for input that looks like valid metadata needMeta := len(c.chunks) > 1 if c.readCount <= maxMetadataSize && len(c.chunks) == 1 { _, err := unmarshalSimpleJSON(ctx, c.chunks[0], c.smallHead, false) needMeta = err == nil } // Finalize small object as non-chunked. // This can be bypassed, and single chunk with metadata will be // created if forced by consistent hashing or due to unsafe input. if !needMeta && !f.hashAll && f.useMeta { // If previous object was chunked, remove its chunks f.removeOldChunks(ctx, baseRemote) // Rename single data chunk in place chunk := c.chunks[0] if chunk.Remote() != baseRemote { chunkMoved, errMove := f.baseMove(ctx, chunk, baseRemote, delAlways) if errMove != nil { silentlyRemove(ctx, chunk) return nil, errMove } chunk = chunkMoved } return f.newObject("", chunk, nil), nil } // Validate total size of data chunks var sizeTotal int64 for _, chunk := range c.chunks { sizeTotal += chunk.Size() } if sizeTotal != c.readCount { return nil, fmt.Errorf("Incorrect chunks size %d != %d", sizeTotal, c.readCount) } // If previous object was chunked, remove its chunks f.removeOldChunks(ctx, baseRemote) // Rename data chunks from temporary to final names for chunkNo, chunk := range c.chunks { chunkRemote := f.makeChunkName(baseRemote, chunkNo, "", "") chunkMoved, errMove := f.baseMove(ctx, chunk, chunkRemote, delFailed) if errMove != nil { return nil, errMove } c.chunks[chunkNo] = chunkMoved } if !f.useMeta { // Remove stale metadata, if any oldMeta, errOldMeta := f.base.NewObject(ctx, baseRemote) if errOldMeta == nil { silentlyRemove(ctx, oldMeta) } o := f.newObject(baseRemote, nil, c.chunks) o.size = sizeTotal return o, nil } // Update meta object var metadata []byte switch f.opt.MetaFormat { case "simplejson": c.updateHashes() metadata, err = marshalSimpleJSON(ctx, sizeTotal, len(c.chunks), c.md5, c.sha1) } if err == nil { metaInfo := f.wrapInfo(src, baseRemote, int64(len(metadata))) metaObject, err = basePut(ctx, bytes.NewReader(metadata), metaInfo) } if err != nil { return nil, err } o := f.newObject("", metaObject, c.chunks) o.size = sizeTotal return o, nil } type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) type chunkingReader struct { baseReader io.Reader sizeTotal int64 sizeLeft int64 readCount int64 chunkSize int64 chunkLimit int64 chunkNo int err error done bool chunks []fs.Object expectSingle bool smallHead []byte fs *Fs hasher gohash.Hash md5 string sha1 string } func (f *Fs) newChunkingReader(src fs.ObjectInfo) *chunkingReader { c := &chunkingReader{ fs: f, chunkSize: int64(f.opt.ChunkSize), sizeTotal: src.Size(), } c.chunkLimit = c.chunkSize c.sizeLeft = c.sizeTotal c.expectSingle = c.sizeTotal >= 0 && c.sizeTotal <= c.chunkSize return c } func (c *chunkingReader) wrapStream(ctx context.Context, in io.Reader, src fs.ObjectInfo) io.Reader { baseIn, wrapBack := accounting.UnWrap(in) switch { case c.fs.useMD5: if c.md5, _ = src.Hash(ctx, hash.MD5); c.md5 == "" { if c.fs.hashFallback { c.sha1, _ = src.Hash(ctx, hash.SHA1) } else { c.hasher = md5.New() } } case c.fs.useSHA1: if 
c.sha1, _ = src.Hash(ctx, hash.SHA1); c.sha1 == "" { if c.fs.hashFallback { c.md5, _ = src.Hash(ctx, hash.MD5) } else { c.hasher = sha1.New() } } } if c.hasher != nil { baseIn = io.TeeReader(baseIn, c.hasher) } c.baseReader = baseIn return wrapBack(c) } func (c *chunkingReader) updateHashes() { if c.hasher == nil { return } switch { case c.fs.useMD5: c.md5 = hex.EncodeToString(c.hasher.Sum(nil)) case c.fs.useSHA1: c.sha1 = hex.EncodeToString(c.hasher.Sum(nil)) } } // Note: Read is not called if wrapped remote performs put by hash. func (c *chunkingReader) Read(buf []byte) (bytesRead int, err error) { if c.chunkLimit <= 0 { // Chunk complete - switch to next one. // Note #1: // We might not get here because some remotes (eg. box multi-uploader) // read the specified size exactly and skip the concluding EOF Read. // Then a check in the put loop will kick in. // Note #2: // The crypt backend after receiving EOF here will call Read again // and we must insist on returning EOF, so we postpone refilling // chunkLimit to the main loop. return 0, io.EOF } if int64(len(buf)) > c.chunkLimit { buf = buf[0:c.chunkLimit] } bytesRead, err = c.baseReader.Read(buf) if err != nil && err != io.EOF { c.err = err c.done = true return } c.accountBytes(int64(bytesRead)) if c.chunkNo == 0 && c.expectSingle && bytesRead > 0 && c.readCount <= maxMetadataSize { c.smallHead = append(c.smallHead, buf[:bytesRead]...) } if bytesRead == 0 && c.sizeLeft == 0 { err = io.EOF // Force EOF when no data left. } if err == io.EOF { c.done = true } return } func (c *chunkingReader) accountBytes(bytesRead int64) { c.readCount += bytesRead c.chunkLimit -= bytesRead if c.sizeLeft != -1 { c.sizeLeft -= bytesRead } } // dummyRead updates accounting, hashsums etc by simulating reads func (c *chunkingReader) dummyRead(in io.Reader, size int64) error { if c.hasher == nil && c.readCount+size > maxMetadataSize { c.accountBytes(size) return nil } const bufLen = 1048576 // 1MB buf := make([]byte, bufLen) for size > 0 { n := size if n > bufLen { n = bufLen } if _, err := io.ReadFull(in, buf[0:n]); err != nil { return err } size -= n } return nil } // rollback removes uploaded temporary chunks func (c *chunkingReader) rollback(ctx context.Context, metaObject fs.Object) { if metaObject != nil { c.chunks = append(c.chunks, metaObject) } for _, chunk := range c.chunks { if err := chunk.Remove(ctx); err != nil { fs.Errorf(chunk, "Failed to remove temporary chunk: %v", err) } } } func (f *Fs) removeOldChunks(ctx context.Context, remote string) { oldFsObject, err := f.NewObject(ctx, remote) if err == nil { oldObject := oldFsObject.(*Object) for _, chunk := range oldObject.chunks { if err := chunk.Remove(ctx); err != nil { fs.Errorf(chunk, "Failed to remove old chunk: %v", err) } } } } // Put into the remote path with the given modTime and size. 
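// For example (sizes and names assumed, defaults in effect): putting a 5 GB
// "big.bin" with the default 2 GB chunk_size yields "big.bin.rclone_chunk.001"
// and "big.bin.rclone_chunk.002" of 2 GB each plus a 1 GB
// "big.bin.rclone_chunk.003", and with meta_format "simplejson" a small meta
// object stored under the name "big.bin" itself.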
// // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { if err := f.forbidChunk(src, src.Remote()); err != nil { return nil, errors.Wrap(err, "refusing to put") } return f.put(ctx, in, src, src.Remote(), options, f.base.Put) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { if err := f.forbidChunk(src, src.Remote()); err != nil { return nil, errors.Wrap(err, "refusing to upload") } return f.put(ctx, in, src, src.Remote(), options, f.base.Features().PutStream) } // Update in to the object with the modTime given of the given size func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { if err := o.f.forbidChunk(o, o.Remote()); err != nil { return errors.Wrap(err, "update refused") } if err := o.readMetadata(ctx); err != nil { // refuse to update a file of unsupported format return errors.Wrap(err, "refusing to update") } basePut := o.f.base.Put if src.Size() < 0 { basePut = o.f.base.Features().PutStream if basePut == nil { return errors.New("wrapped file system does not support streaming uploads") } } oNew, err := o.f.put(ctx, in, src, o.Remote(), options, basePut) if err == nil { *o = *oNew.(*Object) } return err } // PutUnchecked uploads the object // // This will create a duplicate if we upload a new file without // checking to see if there is one already - use Put() for that. func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { do := f.base.Features().PutUnchecked if do == nil { return nil, errors.New("can't PutUnchecked") } // TODO: handle range/limit options and really chunk stream here! o, err := do(ctx, in, f.wrapInfo(src, "", -1)) if err != nil { return nil, err } return f.newObject("", o, nil), nil } // Hashes returns the supported hash sets. // Chunker advertises a hash type if and only if it can be calculated // for files of any size, non-chunked or composite. func (f *Fs) Hashes() hash.Set { // composites AND no fallback AND (chunker OR wrapped Fs will hash all non-chunked's) if f.useMD5 && !f.hashFallback && (f.hashAll || f.base.Hashes().Contains(hash.MD5)) { return hash.NewHashSet(hash.MD5) } if f.useSHA1 && !f.hashFallback && (f.hashAll || f.base.Hashes().Contains(hash.SHA1)) { return hash.NewHashSet(hash.SHA1) } return hash.NewHashSet() // can't provide strong guarantees } // Mkdir makes the directory (container, bucket) // // Shouldn't return an error if it already exists func (f *Fs) Mkdir(ctx context.Context, dir string) error { if err := f.forbidChunk(dir, dir); err != nil { return errors.Wrap(err, "can't mkdir") } return f.base.Mkdir(ctx, dir) } // Rmdir removes the directory (container, bucket) if empty // // Return an error if it doesn't exist or isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.base.Rmdir(ctx, dir) } // Purge all files in the directory // // Implement this if you have a way of deleting all the files // quicker than just running Remove() on the result of List() // // Return an error if it doesn't exist. // // This command will chain to `purge` from wrapped remote. 
// As a result it removes not only composite chunker files with their // active chunks but also all hidden temporary chunks in the directory. // func (f *Fs) Purge(ctx context.Context, dir string) error { do := f.base.Features().Purge if do == nil { return fs.ErrorCantPurge } return do(ctx, dir) } // Remove an object (chunks and metadata, if any) // // Remove deletes only active chunks of the composite object. // It does not try to look for temporary chunks because they could belong // to another command modifying this composite file in parallel. // // Commands normally cleanup all temporary chunks in case of a failure. // However, if rclone dies unexpectedly, it can leave hidden temporary // chunks, which cannot be discovered using the `list` command. // Remove does not try to search for such chunks or to delete them. // Sometimes this can lead to strange results eg. when `list` shows that // directory is empty but `rmdir` refuses to remove it because on the // level of wrapped remote it's actually *not* empty. // As a workaround users can use `purge` to forcibly remove it. // // In future, a flag `--chunker-delete-hidden` may be added which tells // Remove to search directory for hidden chunks and remove them too // (at the risk of breaking parallel commands). // // Remove is the only operation allowed on the composite files with // invalid or future metadata format. // We don't let user copy/move/update unsupported composite files. // Let's at least let her get rid of them, just complain loudly. // // This can litter directory with orphan chunks of unsupported types, // but as long as we remove meta object, even future releases will // treat the composite file as removed and refuse to act upon it. // // Disclaimer: corruption can still happen if unsupported file is removed // and then recreated with the same name. // Unsupported control chunks will get re-picked by a more recent // rclone version with unexpected results. This can be helped by // the `delete hidden` flag above or at least the user has been warned. // func (o *Object) Remove(ctx context.Context) (err error) { if err := o.f.forbidChunk(o, o.Remote()); err != nil { // operations.Move can still call Remove if chunker's Move refuses // to corrupt file in hard mode. Hence, refuse to Remove, too. return errors.Wrap(err, "refuse to corrupt") } if err := o.readMetadata(ctx); err != nil { // Proceed but warn user that unexpected things can happen. fs.Errorf(o, "Removing a file with unsupported metadata: %v", err) } // Remove non-chunked file or meta object of a composite file. if o.main != nil { err = o.main.Remove(ctx) } // Remove only active data chunks, ignore any temporary chunks that // might probably be created in parallel by other transactions. for _, chunk := range o.chunks { chunkErr := chunk.Remove(ctx) if err == nil { err = chunkErr } } // There are no known control chunks to remove atm. 
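// err carries the first failure seen while removing the meta object and the
// data chunks; later removals still proceed so cleanup is as complete as possible.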
return err } // copyOrMove implements copy or move func (f *Fs) copyOrMove(ctx context.Context, o *Object, remote string, do copyMoveFn, md5, sha1, opName string) (fs.Object, error) { if err := f.forbidChunk(o, remote); err != nil { return nil, errors.Wrapf(err, "can't %s", opName) } if !o.isComposite() { fs.Debugf(o, "%s non-chunked object...", opName) oResult, err := do(ctx, o.mainChunk(), remote) // chain operation to a single wrapped chunk if err != nil { return nil, err } return f.newObject("", oResult, nil), nil } if err := o.readMetadata(ctx); err != nil { // Refuse to copy/move composite files with invalid or future // metadata format which might involve unsupported chunk types. return nil, errors.Wrapf(err, "can't %s this file", opName) } fs.Debugf(o, "%s %d data chunks...", opName, len(o.chunks)) mainRemote := o.remote var newChunks []fs.Object var err error // Copy/move active data chunks. // Ignore possible temporary chunks being created by parallel operations. for _, chunk := range o.chunks { chunkRemote := chunk.Remote() if !strings.HasPrefix(chunkRemote, mainRemote) { err = fmt.Errorf("invalid chunk name %q", chunkRemote) break } chunkSuffix := chunkRemote[len(mainRemote):] chunkResult, err := do(ctx, chunk, remote+chunkSuffix) if err != nil { break } newChunks = append(newChunks, chunkResult) } // Copy or move old metadata. // There are no known control chunks to move/copy atm. var metaObject fs.Object if err == nil && o.main != nil { metaObject, err = do(ctx, o.main, remote) } if err != nil { for _, chunk := range newChunks { silentlyRemove(ctx, chunk) } return nil, err } // Create wrapping object, calculate and validate total size newObj := f.newObject(remote, metaObject, newChunks) err = newObj.validate() if err != nil { silentlyRemove(ctx, newObj) return nil, err } // Update metadata var metadata []byte switch f.opt.MetaFormat { case "simplejson": metadata, err = marshalSimpleJSON(ctx, newObj.size, len(newChunks), md5, sha1) if err == nil { metaInfo := f.wrapInfo(metaObject, "", int64(len(metadata))) err = newObj.main.Update(ctx, bytes.NewReader(metadata), metaInfo) } case "none": if newObj.main != nil { err = newObj.main.Remove(ctx) } } // Return the composite object if err != nil { silentlyRemove(ctx, newObj) return nil, err } return newObj, nil } type copyMoveFn func(context.Context, fs.Object, string) (fs.Object, error) func (f *Fs) okForServerSide(ctx context.Context, src fs.Object, opName string) (obj *Object, md5, sha1 string, ok bool) { var diff string obj, ok = src.(*Object) switch { case !ok: diff = "remote types" case !operations.SameConfig(f.base, obj.f.base): diff = "wrapped remotes" case f.opt.ChunkSize != obj.f.opt.ChunkSize: diff = "chunk sizes" case f.opt.NameFormat != obj.f.opt.NameFormat: diff = "chunk name formats" case f.opt.MetaFormat != obj.f.opt.MetaFormat: diff = "meta formats" } if diff != "" { fs.Debugf(src, "Can't %s - different %s", opName, diff) ok = false return } requireMetaHash := obj.isComposite() && f.opt.MetaFormat == "simplejson" if !requireMetaHash && !f.hashAll { ok = true // hash is not required for metadata return } switch { case f.useMD5: md5, _ = obj.Hash(ctx, hash.MD5) ok = md5 != "" if !ok && f.hashFallback { sha1, _ = obj.Hash(ctx, hash.SHA1) ok = sha1 != "" } case f.useSHA1: sha1, _ = obj.Hash(ctx, hash.SHA1) ok = sha1 != "" if !ok && f.hashFallback { md5, _ = obj.Hash(ctx, hash.MD5) ok = md5 != "" } default: ok = false } if !ok { fs.Debugf(src, "Can't %s - required hash not found", opName) } return } // Copy src to this 
remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { baseCopy := f.base.Features().Copy if baseCopy == nil { return nil, fs.ErrorCantCopy } obj, md5, sha1, ok := f.okForServerSide(ctx, src, "copy") if !ok { return nil, fs.ErrorCantCopy } return f.copyOrMove(ctx, obj, remote, baseCopy, md5, sha1, "copy") } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { baseMove := func(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { return f.baseMove(ctx, src, remote, delNever) } obj, md5, sha1, ok := f.okForServerSide(ctx, src, "move") if !ok { return nil, fs.ErrorCantMove } return f.copyOrMove(ctx, obj, remote, baseMove, md5, sha1, "move") } // baseMove chains to the wrapped Move or simulates it by Copy+Delete func (f *Fs) baseMove(ctx context.Context, src fs.Object, remote string, delMode int) (fs.Object, error) { var ( dest fs.Object err error ) switch delMode { case delAlways: dest, err = f.base.NewObject(ctx, remote) case delFailed: dest, err = operations.Move(ctx, f.base, nil, remote, src) if err == nil { return dest, err } dest, err = f.base.NewObject(ctx, remote) case delNever: // fall thru, the default } if err != nil { dest = nil } return operations.Move(ctx, f.base, dest, remote, src) } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { do := f.base.Features().DirMove if do == nil { return fs.ErrorCantDirMove } srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } return do(ctx, srcFs.base, srcRemote, dstRemote) } // CleanUp the trash in the Fs // // Implement this if you have a way of emptying the trash or // otherwise cleaning up old versions of files. func (f *Fs) CleanUp(ctx context.Context) error { do := f.base.Features().CleanUp if do == nil { return errors.New("can't CleanUp") } return do(ctx) } // About gets quota information from the Fs func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { do := f.base.Features().About if do == nil { return nil, errors.New("About not supported") } return do(ctx) } // UnWrap returns the Fs that this Fs is wrapping func (f *Fs) UnWrap() fs.Fs { return f.base } // WrapFs returns the Fs that is wrapping this Fs func (f *Fs) WrapFs() fs.Fs { return f.wrapper } // SetWrapper sets the Fs that is wrapping this Fs func (f *Fs) SetWrapper(wrapper fs.Fs) { f.wrapper = wrapper } // ChangeNotify calls the passed function with a path // that has had changes. If the implementation // uses polling, it should adhere to the given interval. // // Replace data chunk names by the name of composite file. // Ignore temporary and control chunks. 
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) { do := f.base.Features().ChangeNotify if do == nil { return } wrappedNotifyFunc := func(path string, entryType fs.EntryType) { //fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType) if entryType == fs.EntryObject { mainPath, _, _, xactID := f.parseChunkName(path) if mainPath != "" && xactID == "" { path = mainPath } } notifyFunc(path, entryType) } do(ctx, wrappedNotifyFunc, pollIntervalChan) } // Object represents a composite file wrapping one or more data chunks type Object struct { remote string main fs.Object // meta object if file is composite, or wrapped non-chunked file, nil if meta format is 'none' chunks []fs.Object // active data chunks if file is composite, or wrapped file as a single chunk if meta format is 'none' size int64 // cached total size of chunks in a composite file or -1 for non-chunked files isFull bool // true if metadata has been read md5 string sha1 string f *Fs } func (o *Object) addChunk(chunk fs.Object, chunkNo int) error { if chunkNo < 0 { return fmt.Errorf("invalid chunk number %d", chunkNo+o.f.opt.StartFrom) } if chunkNo == len(o.chunks) { o.chunks = append(o.chunks, chunk) return nil } if chunkNo > maxSafeChunkNumber { return ErrChunkOverflow } if chunkNo > len(o.chunks) { newChunks := make([]fs.Object, (chunkNo + 1), (chunkNo+1)*2) copy(newChunks, o.chunks) o.chunks = newChunks } o.chunks[chunkNo] = chunk return nil } // validate verifies the object internals and updates total size func (o *Object) validate() error { if !o.isComposite() { _ = o.mainChunk() // verify that single wrapped chunk exists return nil } metaObject := o.main // this file is composite - o.main refers to meta object (or nil if meta format is 'none') if metaObject != nil && metaObject.Size() > maxMetadataSize { // metadata of a chunked file must be a tiny piece of json o.size = -1 return fmt.Errorf("%q metadata is too large", o.remote) } var totalSize int64 for _, chunk := range o.chunks { if chunk == nil { o.size = -1 return fmt.Errorf("%q has missing chunks", o) } totalSize += chunk.Size() } o.size = totalSize // cache up the total data size return nil } func (f *Fs) newObject(remote string, main fs.Object, chunks []fs.Object) *Object { var size int64 = -1 if main != nil { size = main.Size() if remote == "" { remote = main.Remote() } } return &Object{ remote: remote, main: main, size: size, f: f, chunks: chunks, } } // mainChunk returns: // - a wrapped object for non-chunked files // - meta object for chunked files with metadata // - first chunk for chunked files without metadata // Never returns nil. 
func (o *Object) mainChunk() fs.Object { if o.main != nil { return o.main // meta object or non-chunked wrapped file } if o.chunks != nil { return o.chunks[0] // first chunk of a chunked composite file } panic("invalid chunked object") // very unlikely } func (o *Object) isComposite() bool { return o.chunks != nil } // Fs returns read only access to the Fs that this object is part of func (o *Object) Fs() fs.Info { return o.f } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Size returns the size of the file func (o *Object) Size() int64 { if o.isComposite() { return o.size // total size of data chunks in a composite file } return o.mainChunk().Size() // size of wrapped non-chunked file } // Storable returns whether object is storable func (o *Object) Storable() bool { return o.mainChunk().Storable() } // ModTime returns the modification time of the file func (o *Object) ModTime(ctx context.Context) time.Time { return o.mainChunk().ModTime(ctx) } // SetModTime sets the modification time of the file func (o *Object) SetModTime(ctx context.Context, mtime time.Time) error { if err := o.readMetadata(ctx); err != nil { return err // refuse to act on unsupported format } return o.mainChunk().SetModTime(ctx, mtime) } // Hash returns the selected checksum of the file. // If no checksum is available it returns "". // // Hash won't fail with an `unsupported` error but returns an empty // hash string if a particular hashsum type is not supported. // // Hash takes the hashsum from metadata if available, or requests it // from the wrapped remote for non-chunked files. // Metadata (if meta format is not 'none') is by default kept // only for composite files. In the "All" hashing mode chunker // will force metadata on all files if a particular hashsum type // is not supported by the wrapped remote. // // Note that Hash prefers the wrapped hashsum for a non-chunked // file, then tries to read it from metadata. In theory this // handles the unusual case when a small file has been tampered with // at the level of the wrapped remote while chunker is unaware of it. // func (o *Object) Hash(ctx context.Context, hashType hash.Type) (string, error) { if !o.isComposite() { // First, chain to the wrapped non-chunked file if possible. if value, err := o.mainChunk().Hash(ctx, hashType); err == nil && value != "" { return value, nil } } if err := o.readMetadata(ctx); err != nil { return "", err // valid metadata is required to get hash, abort } // Try hash from metadata if the file is composite or if wrapped remote fails. switch hashType { case hash.MD5: if o.md5 == "" { return "", nil } return o.md5, nil case hash.SHA1: if o.sha1 == "" { return "", nil } return o.sha1, nil default: return "", hash.ErrUnsupported } } // UnWrap returns the wrapped Object func (o *Object) UnWrap() fs.Object { return o.mainChunk() } // Open opens the file for read. Call Close() on the returned io.ReadCloser func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) { if !o.isComposite() { return o.mainChunk().Open(ctx, options...)
// chain to wrapped non-chunked file } if err := o.readMetadata(ctx); err != nil { // refuse to open unsupported format return nil, errors.Wrap(err, "can't open") } var openOptions []fs.OpenOption var offset, limit int64 = 0, -1 for _, option := range options { switch opt := option.(type) { case *fs.SeekOption: offset = opt.Offset case *fs.RangeOption: offset, limit = opt.Decode(o.size) default: // pass Options on to the wrapped open, if appropriate openOptions = append(openOptions, option) } } if offset < 0 { return nil, errors.New("invalid offset") } if limit < 0 { limit = o.size - offset } return o.newLinearReader(ctx, offset, limit, openOptions) } // linearReader opens and reads file chunks sequentially, without read-ahead type linearReader struct { ctx context.Context chunks []fs.Object options []fs.OpenOption limit int64 count int64 pos int reader io.ReadCloser err error } func (o *Object) newLinearReader(ctx context.Context, offset, limit int64, options []fs.OpenOption) (io.ReadCloser, error) { r := &linearReader{ ctx: ctx, chunks: o.chunks, options: options, limit: limit, } // skip to chunk for given offset err := io.EOF for offset >= 0 && err != nil { offset, err = r.nextChunk(offset) } if err == nil || err == io.EOF { r.err = err return r, nil } return nil, err } func (r *linearReader) nextChunk(offset int64) (int64, error) { if r.err != nil { return -1, r.err } if r.pos >= len(r.chunks) || r.limit <= 0 || offset < 0 { return -1, io.EOF } chunk := r.chunks[r.pos] count := chunk.Size() r.pos++ if offset >= count { return offset - count, io.EOF } count -= offset if r.limit < count { count = r.limit } options := append(r.options, &fs.RangeOption{Start: offset, End: offset + count - 1}) if err := r.Close(); err != nil { return -1, err } reader, err := chunk.Open(r.ctx, options...) 
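// An illustrative walk-through of the range arithmetic above (a sketch;
// the 65536-byte chunk size is assumed for the example, not taken from
// any configuration): a read at composite offset 100000 with limit 1000
// first skips chunk 0 in the caller's loop (the offset becomes
// 100000-65536 = 34464), then opens chunk 1 with
// RangeOption{Start: 34464, End: 35463}.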
if err != nil { return -1, err } r.reader = reader r.count = count return offset, nil } func (r *linearReader) Read(p []byte) (n int, err error) { if r.err != nil { return 0, r.err } if r.limit <= 0 { r.err = io.EOF return 0, io.EOF } for r.count <= 0 { // current chunk has been read completely or its size is zero off, err := r.nextChunk(0) if off < 0 { r.err = err return 0, err } } n, err = r.reader.Read(p) if err == nil || err == io.EOF { r.count -= int64(n) r.limit -= int64(n) if r.limit > 0 { err = nil // more data to read } } r.err = err return } func (r *linearReader) Close() (err error) { if r.reader != nil { err = r.reader.Close() r.reader = nil } return } // ObjectInfo describes a wrapped fs.ObjectInfo for being the source type ObjectInfo struct { src fs.ObjectInfo fs *Fs nChunks int // number of data chunks size int64 // overrides source size by the total size of data chunks remote string // overrides remote name md5 string // overrides MD5 checksum sha1 string // overrides SHA1 checksum } func (f *Fs) wrapInfo(src fs.ObjectInfo, newRemote string, totalSize int64) *ObjectInfo { return &ObjectInfo{ src: src, fs: f, size: totalSize, remote: newRemote, } } // Fs returns read only access to the Fs that this object is part of func (oi *ObjectInfo) Fs() fs.Info { if oi.fs == nil { panic("stub ObjectInfo") } return oi.fs } // String returns string representation func (oi *ObjectInfo) String() string { return oi.src.String() } // Storable returns whether object is storable func (oi *ObjectInfo) Storable() bool { return oi.src.Storable() } // Remote returns the remote path func (oi *ObjectInfo) Remote() string { if oi.remote != "" { return oi.remote } return oi.src.Remote() } // Size returns the size of the file func (oi *ObjectInfo) Size() int64 { if oi.size != -1 { return oi.size } return oi.src.Size() } // ModTime returns the modification time func (oi *ObjectInfo) ModTime(ctx context.Context) time.Time { return oi.src.ModTime(ctx) } // Hash returns the selected checksum of the wrapped file // It returns "" if no checksum is available or if this // info doesn't wrap the complete file. 
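//
// For example (a sketch): when chunker uploads a single data chunk of a
// larger composite file it wraps the source with a per-chunk size
// override, so Size() != src.Size() here and the chunk deliberately
// reports no hashsum instead of impersonating the whole file's checksum.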
func (oi *ObjectInfo) Hash(ctx context.Context, hashType hash.Type) (string, error) { var errUnsupported error switch hashType { case hash.MD5: if oi.md5 != "" { return oi.md5, nil } case hash.SHA1: if oi.sha1 != "" { return oi.sha1, nil } default: errUnsupported = hash.ErrUnsupported } if oi.Size() != oi.src.Size() { // fail if this info wraps only a part of the file return "", errUnsupported } // chain to full source if possible value, err := oi.src.Hash(ctx, hashType) if err == hash.ErrUnsupported { return "", errUnsupported } return value, err } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { if doer, ok := o.mainChunk().(fs.IDer); ok { return doer.ID() } return "" } // Meta format `simplejson` type metaSimpleJSON struct { // required core fields Version *int `json:"ver"` Size *int64 `json:"size"` // total size of data chunks ChunkNum *int `json:"nchunks"` // number of data chunks // optional extra fields MD5 string `json:"md5,omitempty"` SHA1 string `json:"sha1,omitempty"` } // marshalSimpleJSON // // Current implementation creates metadata in three cases: // - for files larger than chunk size // - if file contents can be mistaken as meta object // - if consistent hashing is On but wrapped remote can't provide given hash // func marshalSimpleJSON(ctx context.Context, size int64, nChunks int, md5, sha1 string) ([]byte, error) { version := metadataVersion metadata := metaSimpleJSON{ // required core fields Version: &version, Size: &size, ChunkNum: &nChunks, // optional extra fields MD5: md5, SHA1: sha1, } data, err := json.Marshal(&metadata) if err == nil && data != nil && len(data) >= maxMetadataSize { // be a nitpicker, never produce something you can't consume return nil, errors.New("metadata can't be this big, please report to rclone developers") } return data, err } // unmarshalSimpleJSON // // Only metadata format version 1 is supported atm. // Future releases will transparently migrate older metadata objects. // New format will have a higher version number and cannot be correctly // handled by current implementation. // The version check below will then explicitly ask user to upgrade rclone. // func unmarshalSimpleJSON(ctx context.Context, metaObject fs.Object, data []byte, strictChecks bool) (info *ObjectInfo, err error) { // Be strict about JSON format // to reduce possibility that a random small file resembles metadata. if data != nil && len(data) > maxMetadataSize { return nil, errors.New("too big") } if data == nil || len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' { return nil, errors.New("invalid json") } var metadata metaSimpleJSON err = json.Unmarshal(data, &metadata) if err != nil { return nil, err } // Basic fields are strictly required // to reduce possibility that a random small file resembles metadata. if metadata.Version == nil || metadata.Size == nil || metadata.ChunkNum == nil { return nil, errors.New("missing required field") } // Perform strict checks, avoid corruption of future metadata formats. 
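// For reference, a well-formed version-1 metadata object produced by
// marshalSimpleJSON looks roughly like this (an illustrative sketch; the
// size, chunk count and hash values here are made up):
//
//	{"ver":1,"size":12345,"nchunks":2,"md5":"9e107d9d372bb6826bd81d3542a419d6"}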
if *metadata.Version < 1 { return nil, errors.New("wrong version") } if *metadata.Size < 0 { return nil, errors.New("negative file size") } if *metadata.ChunkNum < 0 { return nil, errors.New("negative number of chunks") } if *metadata.ChunkNum > maxSafeChunkNumber { return nil, ErrChunkOverflow } if metadata.MD5 != "" { _, err = hex.DecodeString(metadata.MD5) if len(metadata.MD5) != 32 || err != nil { return nil, errors.New("wrong md5 hash") } } if metadata.SHA1 != "" { _, err = hex.DecodeString(metadata.SHA1) if len(metadata.SHA1) != 40 || err != nil { return nil, errors.New("wrong sha1 hash") } } // ChunkNum is allowed to be 0 in future versions if *metadata.ChunkNum < 1 && *metadata.Version <= metadataVersion { return nil, errors.New("wrong number of chunks") } // Non-strict mode also accepts future metadata versions if *metadata.Version > metadataVersion && strictChecks { return nil, fmt.Errorf("version %d is not supported, please upgrade rclone", *metadata.Version) } var nilFs *Fs // nil object triggers appropriate type method info = nilFs.wrapInfo(metaObject, "", *metadata.Size) info.nChunks = *metadata.ChunkNum info.md5 = metadata.MD5 info.sha1 = metadata.SHA1 return info, nil } func silentlyRemove(ctx context.Context, o fs.Object) { _ = o.Remove(ctx) // ignore error } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // String returns a description of the FS func (f *Fs) String() string { return fmt.Sprintf("Chunked '%s:%s'", f.name, f.root) } // Precision returns the precision of this Fs func (f *Fs) Precision() time.Duration { return f.base.Precision() } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.PutUncheckeder = (*Fs)(nil) _ fs.PutStreamer = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.UnWrapper = (*Fs)(nil) _ fs.ListRer = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Wrapper = (*Fs)(nil) _ fs.ChangeNotifier = (*Fs)(nil) _ fs.ObjectInfo = (*ObjectInfo)(nil) _ fs.Object = (*Object)(nil) _ fs.ObjectUnWrapper = (*Object)(nil) _ fs.IDer = (*Object)(nil) ) rclone-1.53.3/backend/chunker/chunker_internal_test.go000066400000000000000000000674701375552240400230570ustar00rootroot00000000000000package chunker import ( "bytes" "context" "flag" "fmt" "io/ioutil" "path" "regexp" "strings" "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/fstests" "github.com/rclone/rclone/lib/random" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // Command line flags var ( UploadKilobytes = flag.Int("upload-kilobytes", 0, "Upload size in Kilobytes, set this to test large uploads") ) // test that chunking does not break large uploads func testPutLarge(t *testing.T, f *Fs, kilobytes int) { t.Run(fmt.Sprintf("PutLarge%dk", kilobytes), func(t *testing.T) { fstests.TestPutLarge(context.Background(), t, f, &fstest.Item{ ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"), Path: fmt.Sprintf("chunker-upload-%dk", kilobytes), Size: int64(kilobytes) * int64(fs.KibiByte), }) }) } // test chunk name parser func testChunkNameFormat(t *testing.T, f *Fs) { saveOpt := f.opt defer func() { // restore
original settings (f is pointer, f.opt is struct) f.opt = saveOpt _ = f.setChunkNameFormat(f.opt.NameFormat) }() assertFormat := func(pattern, wantDataFormat, wantCtrlFormat, wantNameRegexp string) { err := f.setChunkNameFormat(pattern) assert.NoError(t, err) assert.Equal(t, wantDataFormat, f.dataNameFmt) assert.Equal(t, wantCtrlFormat, f.ctrlNameFmt) assert.Equal(t, wantNameRegexp, f.nameRegexp.String()) } assertFormatValid := func(pattern string) { err := f.setChunkNameFormat(pattern) assert.NoError(t, err) } assertFormatInvalid := func(pattern string) { err := f.setChunkNameFormat(pattern) assert.Error(t, err) } assertMakeName := func(wantChunkName, mainName string, chunkNo int, ctrlType, xactID string) { gotChunkName := "" assert.NotPanics(t, func() { gotChunkName = f.makeChunkName(mainName, chunkNo, ctrlType, xactID) }, "makeChunkName(%q,%d,%q,%q) must not panic", mainName, chunkNo, ctrlType, xactID) if gotChunkName != "" { assert.Equal(t, wantChunkName, gotChunkName) } } assertMakeNamePanics := func(mainName string, chunkNo int, ctrlType, xactID string) { assert.Panics(t, func() { _ = f.makeChunkName(mainName, chunkNo, ctrlType, xactID) }, "makeChunkName(%q,%d,%q,%q) should panic", mainName, chunkNo, ctrlType, xactID) } assertParseName := func(fileName, wantMainName string, wantChunkNo int, wantCtrlType, wantXactID string) { gotMainName, gotChunkNo, gotCtrlType, gotXactID := f.parseChunkName(fileName) assert.Equal(t, wantMainName, gotMainName) assert.Equal(t, wantChunkNo, gotChunkNo) assert.Equal(t, wantCtrlType, gotCtrlType) assert.Equal(t, wantXactID, gotXactID) } const newFormatSupported = false // support for patterns not starting with base name (*) // valid formats assertFormat(`*.rclone_chunk.###`, `%s.rclone_chunk.%03d`, `%s.rclone_chunk._%s`, `^(.+?)\.rclone_chunk\.(?:([0-9]{3,})|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`) assertFormat(`*.rclone_chunk.#`, `%s.rclone_chunk.%d`, `%s.rclone_chunk._%s`, `^(.+?)\.rclone_chunk\.(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`) assertFormat(`*_chunk_#####`, `%s_chunk_%05d`, `%s_chunk__%s`, `^(.+?)_chunk_(?:([0-9]{5,})|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`) assertFormat(`*-chunk-#`, `%s-chunk-%d`, `%s-chunk-_%s`, `^(.+?)-chunk-(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`) assertFormat(`*-chunk-#-%^$()[]{}.+-!?:\`, `%s-chunk-%d-%%^$()[]{}.+-!?:\`, `%s-chunk-_%s-%%^$()[]{}.+-!?:\`, `^(.+?)-chunk-(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))-%\^\$\(\)\[\]\{\}\.\+-!\?:\\(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`) if newFormatSupported { assertFormat(`_*-chunk-##,`, `_%s-chunk-%02d,`, `_%s-chunk-_%s,`, `^_(.+?)-chunk-(?:([0-9]{2,})|_([a-z][a-z0-9]{2,6})),(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`) } // invalid formats assertFormatInvalid(`chunk-#`) assertFormatInvalid(`*-chunk`) assertFormatInvalid(`*-*-chunk-#`) assertFormatInvalid(`*-chunk-#-#`) assertFormatInvalid(`#-chunk-*`) assertFormatInvalid(`*/#`) assertFormatValid(`*#`) assertFormatInvalid(`**#`) assertFormatInvalid(`#*`) assertFormatInvalid(``) assertFormatInvalid(`-`) // quick tests if newFormatSupported { assertFormat(`part_*_#`, `part_%s_%d`, `part_%s__%s`, `^part_(.+?)_(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))(?:_([0-9][0-9a-z]{3,8})\.\.tmp_([0-9]{10,13}))?$`) f.opt.StartFrom = 1 assertMakeName(`part_fish_1`, "fish", 0, "", "") assertParseName(`part_fish_43`, "fish", 42, "", "") assertMakeName(`part_fish__locks`, "fish", -2, "locks", "") 
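// Note (an observation about the helpers, not a new assertion):
// makeChunkName accepts any negative chunkNo for control chunks, while
// parseChunkName always reports -1 for them, as the make/parse pairs
// here and below demonstrate.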
assertParseName(`part_fish__locks`, "fish", -1, "locks", "") assertMakeName(`part_fish__x2y`, "fish", -2, "x2y", "") assertParseName(`part_fish__x2y`, "fish", -1, "x2y", "") assertMakeName(`part_fish_3_0004`, "fish", 2, "", "4") assertParseName(`part_fish_4_0005`, "fish", 3, "", "0005") assertMakeName(`part_fish__blkinfo_jj5fvo3wr`, "fish", -3, "blkinfo", "jj5fvo3wr") assertParseName(`part_fish__blkinfo_zz9fvo3wr`, "fish", -1, "blkinfo", "zz9fvo3wr") // old-style temporary suffix (parse only) assertParseName(`part_fish_4..tmp_0000000011`, "fish", 3, "", "000b") assertParseName(`part_fish__blkinfo_jj5fvo3wr`, "fish", -1, "blkinfo", "jj5fvo3wr") } // prepare format for long tests assertFormat(`*.chunk.###`, `%s.chunk.%03d`, `%s.chunk._%s`, `^(.+?)\.chunk\.(?:([0-9]{3,})|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`) f.opt.StartFrom = 2 // valid data chunks assertMakeName(`fish.chunk.003`, "fish", 1, "", "") assertParseName(`fish.chunk.003`, "fish", 1, "", "") assertMakeName(`fish.chunk.021`, "fish", 19, "", "") assertParseName(`fish.chunk.021`, "fish", 19, "", "") // valid temporary data chunks assertMakeName(`fish.chunk.011_4321`, "fish", 9, "", "4321") assertParseName(`fish.chunk.011_4321`, "fish", 9, "", "4321") assertMakeName(`fish.chunk.011_00bc`, "fish", 9, "", "00bc") assertParseName(`fish.chunk.011_00bc`, "fish", 9, "", "00bc") assertMakeName(`fish.chunk.1916_5jjfvo3wr`, "fish", 1914, "", "5jjfvo3wr") assertParseName(`fish.chunk.1916_5jjfvo3wr`, "fish", 1914, "", "5jjfvo3wr") assertMakeName(`fish.chunk.1917_zz9fvo3wr`, "fish", 1915, "", "zz9fvo3wr") assertParseName(`fish.chunk.1917_zz9fvo3wr`, "fish", 1915, "", "zz9fvo3wr") // valid temporary data chunks (old temporary suffix, only parse) assertParseName(`fish.chunk.004..tmp_0000000047`, "fish", 2, "", "001b") assertParseName(`fish.chunk.323..tmp_9994567890123`, "fish", 321, "", "3jjfvo3wr") // parsing invalid data chunk names assertParseName(`fish.chunk.3`, "", -1, "", "") assertParseName(`fish.chunk.001`, "", -1, "", "") assertParseName(`fish.chunk.21`, "", -1, "", "") assertParseName(`fish.chunk.-21`, "", -1, "", "") assertParseName(`fish.chunk.004abcd`, "", -1, "", "") // missing underscore delimiter assertParseName(`fish.chunk.004__1234`, "", -1, "", "") // extra underscore delimiter assertParseName(`fish.chunk.004_123`, "", -1, "", "") // too short temporary suffix assertParseName(`fish.chunk.004_1234567890`, "", -1, "", "") // too long temporary suffix assertParseName(`fish.chunk.004_-1234`, "", -1, "", "") // temporary suffix must be positive assertParseName(`fish.chunk.004_123E`, "", -1, "", "") // uppercase not allowed assertParseName(`fish.chunk.004_12.3`, "", -1, "", "") // punctuation not allowed // parsing invalid data chunk names (old temporary suffix) assertParseName(`fish.chunk.004.tmp_0000000021`, "", -1, "", "") assertParseName(`fish.chunk.003..tmp_123456789`, "", -1, "", "") assertParseName(`fish.chunk.003..tmp_012345678901234567890123456789`, "", -1, "", "") assertParseName(`fish.chunk.323..tmp_12345678901234`, "", -1, "", "") assertParseName(`fish.chunk.003..tmp_-1`, "", -1, "", "") // valid control chunks assertMakeName(`fish.chunk._info`, "fish", -1, "info", "") assertMakeName(`fish.chunk._locks`, "fish", -2, "locks", "") assertMakeName(`fish.chunk._blkinfo`, "fish", -3, "blkinfo", "") assertMakeName(`fish.chunk._x2y`, "fish", -4, "x2y", "") assertParseName(`fish.chunk._info`, "fish", -1, "info", "") assertParseName(`fish.chunk._locks`, "fish", -1, "locks", "") 
assertParseName(`fish.chunk._blkinfo`, "fish", -1, "blkinfo", "") assertParseName(`fish.chunk._x2y`, "fish", -1, "x2y", "") // valid temporary control chunks assertMakeName(`fish.chunk._info_0001`, "fish", -1, "info", "1") assertMakeName(`fish.chunk._locks_4321`, "fish", -2, "locks", "4321") assertMakeName(`fish.chunk._uploads_abcd`, "fish", -3, "uploads", "abcd") assertMakeName(`fish.chunk._blkinfo_xyzabcdef`, "fish", -4, "blkinfo", "xyzabcdef") assertMakeName(`fish.chunk._x2y_1aaa`, "fish", -5, "x2y", "1aaa") assertParseName(`fish.chunk._info_0001`, "fish", -1, "info", "0001") assertParseName(`fish.chunk._locks_4321`, "fish", -1, "locks", "4321") assertParseName(`fish.chunk._uploads_9abc`, "fish", -1, "uploads", "9abc") assertParseName(`fish.chunk._blkinfo_xyzabcdef`, "fish", -1, "blkinfo", "xyzabcdef") assertParseName(`fish.chunk._x2y_1aaa`, "fish", -1, "x2y", "1aaa") // valid temporary control chunks (old temporary suffix, parse only) assertParseName(`fish.chunk._info..tmp_0000000047`, "fish", -1, "info", "001b") assertParseName(`fish.chunk._locks..tmp_0000054321`, "fish", -1, "locks", "15wx") assertParseName(`fish.chunk._uploads..tmp_0000000000`, "fish", -1, "uploads", "0000") assertParseName(`fish.chunk._blkinfo..tmp_9994567890123`, "fish", -1, "blkinfo", "3jjfvo3wr") assertParseName(`fish.chunk._x2y..tmp_0000000000`, "fish", -1, "x2y", "0000") // parsing invalid control chunk names assertParseName(`fish.chunk.metadata`, "", -1, "", "") // must be prepended by underscore assertParseName(`fish.chunk.info`, "", -1, "", "") assertParseName(`fish.chunk.locks`, "", -1, "", "") assertParseName(`fish.chunk.uploads`, "", -1, "", "") assertParseName(`fish.chunk._os`, "", -1, "", "") // too short assertParseName(`fish.chunk._metadata`, "", -1, "", "") // too long assertParseName(`fish.chunk._blockinfo`, "", -1, "", "") // way too long assertParseName(`fish.chunk._4me`, "", -1, "", "") // cannot start with digit assertParseName(`fish.chunk._567`, "", -1, "", "") // cannot be all digits assertParseName(`fish.chunk._me_ta`, "", -1, "", "") // punctuation not allowed assertParseName(`fish.chunk._in-fo`, "", -1, "", "") assertParseName(`fish.chunk._.bin`, "", -1, "", "") assertParseName(`fish.chunk._.2xy`, "", -1, "", "") // parsing invalid temporary control chunks assertParseName(`fish.chunk._blkinfo1234`, "", -1, "", "") // missing underscore delimiter assertParseName(`fish.chunk._info__1234`, "", -1, "", "") // extra underscore delimiter assertParseName(`fish.chunk._info_123`, "", -1, "", "") // too short temporary suffix assertParseName(`fish.chunk._info_1234567890`, "", -1, "", "") // too long temporary suffix assertParseName(`fish.chunk._info_-1234`, "", -1, "", "") // temporary suffix must be positive assertParseName(`fish.chunk._info_123E`, "", -1, "", "") // uppercase not allowed assertParseName(`fish.chunk._info_12.3`, "", -1, "", "") // punctuation not allowed assertParseName(`fish.chunk._locks..tmp_123456789`, "", -1, "", "") assertParseName(`fish.chunk._meta..tmp_-1`, "", -1, "", "") assertParseName(`fish.chunk._blockinfo..tmp_012345678901234567890123456789`, "", -1, "", "") // short control chunk names: 3 letters ok, 1-2 letters not allowed assertMakeName(`fish.chunk._ext`, "fish", -1, "ext", "") assertParseName(`fish.chunk._int`, "fish", -1, "int", "") assertMakeNamePanics("fish", -1, "in", "") assertMakeNamePanics("fish", -1, "up", "4") assertMakeNamePanics("fish", -1, "x", "") assertMakeNamePanics("fish", -1, "c", "1z") assertMakeName(`fish.chunk._ext_0000`, "fish", -1, "ext", "0") 
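// Note (an observation from the cases above, not a new assertion):
// transaction IDs are zero-padded on the left to at least 4 characters
// ("0" renders as "0000"), and old-style decimal `..tmp_N` suffixes are
// re-encoded as base-36 transaction IDs when parsed, e.g.
// ..tmp_0000000047 parses to xactID "001b" (47 = 1*36 + 11).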
assertMakeName(`fish.chunk._ext_0026`, "fish", -1, "ext", "26") assertMakeName(`fish.chunk._int_0abc`, "fish", -1, "int", "abc") assertMakeName(`fish.chunk._int_9xyz`, "fish", -1, "int", "9xyz") assertMakeName(`fish.chunk._out_jj5fvo3wr`, "fish", -1, "out", "jj5fvo3wr") assertMakeName(`fish.chunk._out_jj5fvo3wr`, "fish", -1, "out", "jj5fvo3wr") assertParseName(`fish.chunk._ext_0000`, "fish", -1, "ext", "0000") assertParseName(`fish.chunk._ext_0026`, "fish", -1, "ext", "0026") assertParseName(`fish.chunk._int_0abc`, "fish", -1, "int", "0abc") assertParseName(`fish.chunk._int_9xyz`, "fish", -1, "int", "9xyz") assertParseName(`fish.chunk._out_jj5fvo3wr`, "fish", -1, "out", "jj5fvo3wr") assertParseName(`fish.chunk._out_jj5fvo3wr`, "fish", -1, "out", "jj5fvo3wr") // base file name can sometimes look like a valid chunk name assertParseName(`fish.chunk.003.chunk.004`, "fish.chunk.003", 2, "", "") assertParseName(`fish.chunk.003.chunk._info`, "fish.chunk.003", -1, "info", "") assertParseName(`fish.chunk.003.chunk._Meta`, "", -1, "", "") assertParseName(`fish.chunk._info.chunk.004`, "fish.chunk._info", 2, "", "") assertParseName(`fish.chunk._info.chunk._info`, "fish.chunk._info", -1, "info", "") assertParseName(`fish.chunk._info.chunk._info.chunk._Meta`, "", -1, "", "") // base file name looking like a valid chunk name (old temporary suffix) assertParseName(`fish.chunk.003.chunk.005..tmp_0000000022`, "fish.chunk.003", 3, "", "000m") assertParseName(`fish.chunk.003.chunk._x..tmp_0000054321`, "", -1, "", "") assertParseName(`fish.chunk._info.chunk.005..tmp_0000000023`, "fish.chunk._info", 3, "", "000n") assertParseName(`fish.chunk._info.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", "") assertParseName(`fish.chunk.003.chunk._blkinfo..tmp_9994567890123`, "fish.chunk.003", -1, "blkinfo", "3jjfvo3wr") assertParseName(`fish.chunk._info.chunk._blkinfo..tmp_9994567890123`, "fish.chunk._info", -1, "blkinfo", "3jjfvo3wr") assertParseName(`fish.chunk.004..tmp_0000000021.chunk.004`, "fish.chunk.004..tmp_0000000021", 2, "", "") assertParseName(`fish.chunk.004..tmp_0000000021.chunk.005..tmp_0000000025`, "fish.chunk.004..tmp_0000000021", 3, "", "000p") assertParseName(`fish.chunk.004..tmp_0000000021.chunk._info`, "fish.chunk.004..tmp_0000000021", -1, "info", "") assertParseName(`fish.chunk.004..tmp_0000000021.chunk._blkinfo..tmp_9994567890123`, "fish.chunk.004..tmp_0000000021", -1, "blkinfo", "3jjfvo3wr") assertParseName(`fish.chunk.004..tmp_0000000021.chunk._Meta`, "", -1, "", "") assertParseName(`fish.chunk.004..tmp_0000000021.chunk._x..tmp_0000054321`, "", -1, "", "") assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk.004`, "fish.chunk._blkinfo..tmp_9994567890123", 2, "", "") assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk.005..tmp_0000000026`, "fish.chunk._blkinfo..tmp_9994567890123", 3, "", "000q") assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._info`, "fish.chunk._blkinfo..tmp_9994567890123", -1, "info", "") assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._blkinfo..tmp_9994567890123`, "fish.chunk._blkinfo..tmp_9994567890123", -1, "blkinfo", "3jjfvo3wr") assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._info.chunk._Meta`, "", -1, "", "") assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", "") assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk.004`, "fish.chunk._blkinfo..tmp_1234567890123456789", 2, "", "") 
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk.005..tmp_0000000022`, "fish.chunk._blkinfo..tmp_1234567890123456789", 3, "", "000m") assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._info`, "fish.chunk._blkinfo..tmp_1234567890123456789", -1, "info", "") assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._blkinfo..tmp_9994567890123`, "fish.chunk._blkinfo..tmp_1234567890123456789", -1, "blkinfo", "3jjfvo3wr") assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._info.chunk._Meta`, "", -1, "", "") assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", "") // attempts to make invalid chunk names assertMakeNamePanics("fish", -1, "", "") // neither data nor control assertMakeNamePanics("fish", 0, "info", "") // both data and control assertMakeNamePanics("fish", -1, "metadata", "") // control type too long assertMakeNamePanics("fish", -1, "blockinfo", "") // control type way too long assertMakeNamePanics("fish", -1, "2xy", "") // first digit not allowed assertMakeNamePanics("fish", -1, "123", "") // all digits not allowed assertMakeNamePanics("fish", -1, "Meta", "") // only lower case letters allowed assertMakeNamePanics("fish", -1, "in-fo", "") // punctuation not allowed assertMakeNamePanics("fish", -1, "_info", "") assertMakeNamePanics("fish", -1, "info_", "") assertMakeNamePanics("fish", -2, ".bind", "") assertMakeNamePanics("fish", -2, "bind.", "") assertMakeNamePanics("fish", -1, "", "1") // neither data nor control assertMakeNamePanics("fish", 0, "info", "23") // both data and control assertMakeNamePanics("fish", -1, "metadata", "45") // control type too long assertMakeNamePanics("fish", -1, "blockinfo", "7") // control type way too long assertMakeNamePanics("fish", -1, "2xy", "abc") // first digit not allowed assertMakeNamePanics("fish", -1, "123", "def") // all digits not allowed assertMakeNamePanics("fish", -1, "Meta", "mnk") // only lower case letters allowed assertMakeNamePanics("fish", -1, "in-fo", "xyz") // punctuation not allowed assertMakeNamePanics("fish", -1, "_info", "5678") assertMakeNamePanics("fish", -1, "info_", "999") assertMakeNamePanics("fish", -2, ".bind", "0") assertMakeNamePanics("fish", -2, "bind.", "0") assertMakeNamePanics("fish", 0, "", "1234567890") // temporary suffix too long assertMakeNamePanics("fish", 0, "", "123F4") // uppercase not allowed assertMakeNamePanics("fish", 0, "", "123.") // punctuation not allowed assertMakeNamePanics("fish", 0, "", "_123") } func testSmallFileInternals(t *testing.T, f *Fs) { const dir = "small" ctx := context.Background() saveOpt := f.opt defer func() { f.opt.FailHard = false _ = operations.Purge(ctx, f.base, dir) f.opt = saveOpt }() f.opt.FailHard = false modTime := fstest.Time("2001-02-03T04:05:06.499999999Z") checkSmallFileInternals := func(obj fs.Object) { assert.NotNil(t, obj) o, ok := obj.(*Object) assert.True(t, ok) assert.NotNil(t, o) if o == nil { return } switch { case !f.useMeta: // If meta format is "none", non-chunked file (even empty) // internally is a single chunk without meta object. 
assert.Nil(t, o.main) assert.True(t, o.isComposite()) // sorry, sometimes a name is misleading assert.Equal(t, 1, len(o.chunks)) case f.hashAll: // Consistent hashing forces meta object on small files too assert.NotNil(t, o.main) assert.True(t, o.isComposite()) assert.Equal(t, 1, len(o.chunks)) default: // normally non-chunked file is kept in the Object's main field assert.NotNil(t, o.main) assert.False(t, o.isComposite()) assert.Equal(t, 0, len(o.chunks)) } } checkContents := func(obj fs.Object, contents string) { assert.NotNil(t, obj) assert.Equal(t, int64(len(contents)), obj.Size()) r, err := obj.Open(ctx) assert.NoError(t, err) assert.NotNil(t, r) if r == nil { return } data, err := ioutil.ReadAll(r) assert.NoError(t, err) assert.Equal(t, contents, string(data)) _ = r.Close() } checkHashsum := func(obj fs.Object) { var ht hash.Type switch { case !f.hashAll: return case f.useMD5: ht = hash.MD5 case f.useSHA1: ht = hash.SHA1 default: return } // even empty files must have hashsum in consistent mode sum, err := obj.Hash(ctx, ht) assert.NoError(t, err) assert.NotEqual(t, sum, "") } checkSmallFile := func(name, contents string) { filename := path.Join(dir, name) item := fstest.Item{Path: filename, ModTime: modTime} _, put := fstests.PutTestContents(ctx, t, f, &item, contents, false) assert.NotNil(t, put) checkSmallFileInternals(put) checkContents(put, contents) checkHashsum(put) // objects returned by Put and NewObject must have similar structure obj, err := f.NewObject(ctx, filename) assert.NoError(t, err) assert.NotNil(t, obj) checkSmallFileInternals(obj) checkContents(obj, contents) checkHashsum(obj) _ = obj.Remove(ctx) _ = put.Remove(ctx) // for good } checkSmallFile("emptyfile", "") checkSmallFile("smallfile", "Ok") } func testPreventCorruption(t *testing.T, f *Fs) { if f.opt.ChunkSize > 50 { t.Skip("this test requires small chunks") } const dir = "corrupted" ctx := context.Background() saveOpt := f.opt defer func() { f.opt.FailHard = false _ = operations.Purge(ctx, f.base, dir) f.opt = saveOpt }() f.opt.FailHard = true contents := random.String(250) modTime := fstest.Time("2001-02-03T04:05:06.499999999Z") const overlapMessage = "chunk overlap" assertOverlapError := func(err error) { assert.Error(t, err) if err != nil { assert.Contains(t, err.Error(), overlapMessage) } } newFile := func(name string) fs.Object { item := fstest.Item{Path: path.Join(dir, name), ModTime: modTime} _, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true) require.NotNil(t, obj) return obj } billyObj := newFile("billy") billyChunkName := func(chunkNo int) string { return f.makeChunkName(billyObj.Remote(), chunkNo, "", "") } err := f.Mkdir(ctx, billyChunkName(1)) assertOverlapError(err) _, err = f.Move(ctx, newFile("silly1"), billyChunkName(2)) assert.Error(t, err) assert.True(t, err == fs.ErrorCantMove || (err != nil && strings.Contains(err.Error(), overlapMessage))) _, err = f.Copy(ctx, newFile("silly2"), billyChunkName(3)) assert.Error(t, err) assert.True(t, err == fs.ErrorCantCopy || (err != nil && strings.Contains(err.Error(), overlapMessage))) // accessing chunks in strict mode is prohibited f.opt.FailHard = true billyChunk4Name := billyChunkName(4) billyChunk4, err := f.NewObject(ctx, billyChunk4Name) assertOverlapError(err) f.opt.FailHard = false billyChunk4, err = f.NewObject(ctx, billyChunk4Name) assert.NoError(t, err) require.NotNil(t, billyChunk4) f.opt.FailHard = true _, err = f.Put(ctx, bytes.NewBufferString(contents), billyChunk4) assertOverlapError(err) // you can freely read chunks 
(if you have an object) r, err := billyChunk4.Open(ctx) assert.NoError(t, err) var chunkContents []byte assert.NotPanics(t, func() { chunkContents, err = ioutil.ReadAll(r) _ = r.Close() }) assert.NoError(t, err) assert.NotEqual(t, contents, string(chunkContents)) // but you can't change them err = billyChunk4.Update(ctx, bytes.NewBufferString(contents), newFile("silly3")) assertOverlapError(err) // Remove isn't special, you can't corrupt files even if you have an object err = billyChunk4.Remove(ctx) assertOverlapError(err) // recreate billy in case it was anyhow corrupted willyObj := newFile("willy") willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", "") f.opt.FailHard = false willyChunk, err := f.NewObject(ctx, willyChunkName) f.opt.FailHard = true assert.NoError(t, err) require.NotNil(t, willyChunk) _, err = operations.Copy(ctx, f, willyChunk, willyChunkName, newFile("silly4")) assertOverlapError(err) // operations.Move will return error when chunker's Move refused // to corrupt target file, but reverts to copy/delete method // still trying to delete target chunk. Chunker must come to rescue. _, err = operations.Move(ctx, f, willyChunk, willyChunkName, newFile("silly5")) assertOverlapError(err) r, err = willyChunk.Open(ctx) assert.NoError(t, err) assert.NotPanics(t, func() { _, err = ioutil.ReadAll(r) _ = r.Close() }) assert.NoError(t, err) } func testChunkNumberOverflow(t *testing.T, f *Fs) { if f.opt.ChunkSize > 50 { t.Skip("this test requires small chunks") } const dir = "wreaked" const wreakNumber = 10200300 ctx := context.Background() saveOpt := f.opt defer func() { f.opt.FailHard = false _ = operations.Purge(ctx, f.base, dir) f.opt = saveOpt }() modTime := fstest.Time("2001-02-03T04:05:06.499999999Z") contents := random.String(100) newFile := func(f fs.Fs, name string) (fs.Object, string) { filename := path.Join(dir, name) item := fstest.Item{Path: filename, ModTime: modTime} _, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true) require.NotNil(t, obj) return obj, filename } f.opt.FailHard = false file, fileName := newFile(f, "wreaker") wreak, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", "")) f.opt.FailHard = false fstest.CheckListingWithRoot(t, f, dir, nil, nil, f.Precision()) _, err := f.NewObject(ctx, fileName) assert.Error(t, err) f.opt.FailHard = true _, err = f.List(ctx, dir) assert.Error(t, err) _, err = f.NewObject(ctx, fileName) assert.Error(t, err) f.opt.FailHard = false _ = wreak.Remove(ctx) _ = file.Remove(ctx) } func testMetadataInput(t *testing.T, f *Fs) { const minChunkForTest = 50 if f.opt.ChunkSize < minChunkForTest { t.Skip("this test requires chunks that fit metadata") } const dir = "usermeta" ctx := context.Background() saveOpt := f.opt defer func() { f.opt.FailHard = false _ = operations.Purge(ctx, f.base, dir) f.opt = saveOpt }() f.opt.FailHard = false modTime := fstest.Time("2001-02-03T04:05:06.499999999Z") putFile := func(f fs.Fs, name, contents, message string, check bool) fs.Object { item := fstest.Item{Path: name, ModTime: modTime} _, obj := fstests.PutTestContents(ctx, t, f, &item, contents, check) assert.NotNil(t, obj, message) return obj } runSubtest := func(contents, name string) { description := fmt.Sprintf("file with %s metadata", name) filename := path.Join(dir, name) require.True(t, len(contents) > 2 && len(contents) < minChunkForTest, description+" test data is correct") part := putFile(f.base, f.makeChunkName(filename, 0, "", ""), "oops", "", true) _ = putFile(f, filename, contents, "upload 
"+description, false) obj, err := f.NewObject(ctx, filename) assert.NoError(t, err, "access "+description) assert.NotNil(t, obj) assert.Equal(t, int64(len(contents)), obj.Size(), "size "+description) o, ok := obj.(*Object) assert.NotNil(t, ok) if o != nil { assert.True(t, o.isComposite() && len(o.chunks) == 1, description+" is forced composite") o = nil } defer func() { _ = obj.Remove(ctx) _ = part.Remove(ctx) }() r, err := obj.Open(ctx) assert.NoError(t, err, "open "+description) assert.NotNil(t, r, "open stream of "+description) if err == nil && r != nil { data, err := ioutil.ReadAll(r) assert.NoError(t, err, "read all of "+description) assert.Equal(t, contents, string(data), description+" contents is ok") _ = r.Close() } } metaData, err := marshalSimpleJSON(ctx, 3, 1, "", "") require.NoError(t, err) todaysMeta := string(metaData) runSubtest(todaysMeta, "today") pastMeta := regexp.MustCompile(`"ver":[0-9]+`).ReplaceAllLiteralString(todaysMeta, `"ver":1`) pastMeta = regexp.MustCompile(`"size":[0-9]+`).ReplaceAllLiteralString(pastMeta, `"size":0`) runSubtest(pastMeta, "past") futureMeta := regexp.MustCompile(`"ver":[0-9]+`).ReplaceAllLiteralString(todaysMeta, `"ver":999`) futureMeta = regexp.MustCompile(`"nchunks":[0-9]+`).ReplaceAllLiteralString(futureMeta, `"nchunks":0,"x":"y"`) runSubtest(futureMeta, "future") } // InternalTest dispatches all internal tests func (f *Fs) InternalTest(t *testing.T) { t.Run("PutLarge", func(t *testing.T) { if *UploadKilobytes <= 0 { t.Skip("-upload-kilobytes is not set") } testPutLarge(t, f, *UploadKilobytes) }) t.Run("ChunkNameFormat", func(t *testing.T) { testChunkNameFormat(t, f) }) t.Run("SmallFileInternals", func(t *testing.T) { testSmallFileInternals(t, f) }) t.Run("PreventCorruption", func(t *testing.T) { testPreventCorruption(t, f) }) t.Run("ChunkNumberOverflow", func(t *testing.T) { testChunkNumberOverflow(t, f) }) t.Run("MetadataInput", func(t *testing.T) { testMetadataInput(t, f) }) } var _ fstests.InternalTester = (*Fs)(nil) rclone-1.53.3/backend/chunker/chunker_test.go000066400000000000000000000033651375552240400211560ustar00rootroot00000000000000// Test the Chunker filesystem interface package chunker_test import ( "flag" "os" "path/filepath" "testing" _ "github.com/rclone/rclone/backend/all" // for integration tests "github.com/rclone/rclone/backend/chunker" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/fstests" ) // Command line flags var ( // Invalid characters are not supported by some remotes, eg. Mailru. // We enable testing with invalid characters when -remote is not set, so // chunker overlays a local directory, but invalid characters are disabled // by default when -remote is set, eg. when test_all runs backend tests. // You can still test with invalid characters using the below flag. UseBadChars = flag.Bool("bad-chars", false, "Set to test bad characters in file names when -remote is set") ) // TestIntegration runs integration tests against a concrete remote // set by the -remote flag. If the flag is not set, it creates a // dynamic chunker overlay wrapping a local temporary directory. 
func TestIntegration(t *testing.T) { opt := fstests.Opt{ RemoteName: *fstest.RemoteName, NilObject: (*chunker.Object)(nil), SkipBadWindowsCharacters: !*UseBadChars, UnimplementableObjectMethods: []string{ "MimeType", "GetTier", "SetTier", }, UnimplementableFsMethods: []string{ "PublicLink", "OpenWriterAt", "MergeDirs", "DirCacheFlush", "UserInfo", "Disconnect", }, } if *fstest.RemoteName == "" { name := "TestChunker" opt.RemoteName = name + ":" tempDir := filepath.Join(os.TempDir(), "rclone-chunker-test-standard") opt.ExtraConfig = []fstests.ExtraConfigItem{ {Name: name, Key: "type", Value: "chunker"}, {Name: name, Key: "remote", Value: tempDir}, } } fstests.Run(t, &opt) } rclone-1.53.3/backend/crypt/000077500000000000000000000000001375552240400156245ustar00rootroot00000000000000rclone-1.53.3/backend/crypt/cipher.go000066400000000000000000000675341375552240400174440ustar00rootroot00000000000000package crypt import ( "bytes" "context" "crypto/aes" gocipher "crypto/cipher" "crypto/rand" "encoding/base32" "fmt" "io" "strconv" "strings" "sync" "unicode/utf8" "github.com/pkg/errors" "github.com/rclone/rclone/backend/crypt/pkcs7" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rfjakob/eme" "golang.org/x/crypto/nacl/secretbox" "golang.org/x/crypto/scrypt" ) // Constants const ( nameCipherBlockSize = aes.BlockSize fileMagic = "RCLONE\x00\x00" fileMagicSize = len(fileMagic) fileNonceSize = 24 fileHeaderSize = fileMagicSize + fileNonceSize blockHeaderSize = secretbox.Overhead blockDataSize = 64 * 1024 blockSize = blockHeaderSize + blockDataSize encryptedSuffix = ".bin" // when file name encryption is off we add this suffix to make sure the cloud provider doesn't process the file ) // Errors returned by cipher var ( ErrorBadDecryptUTF8 = errors.New("bad decryption - utf-8 invalid") ErrorBadDecryptControlChar = errors.New("bad decryption - contains control chars") ErrorNotAMultipleOfBlocksize = errors.New("not a multiple of blocksize") ErrorTooShortAfterDecode = errors.New("too short after base32 decode") ErrorTooLongAfterDecode = errors.New("too long after base32 decode") ErrorEncryptedFileTooShort = errors.New("file is too short to be encrypted") ErrorEncryptedFileBadHeader = errors.New("file has truncated block header") ErrorEncryptedBadMagic = errors.New("not an encrypted file - bad magic string") ErrorEncryptedBadBlock = errors.New("failed to authenticate decrypted block - bad password?") ErrorBadBase32Encoding = errors.New("bad base32 filename encoding") ErrorFileClosed = errors.New("file already closed") ErrorNotAnEncryptedFile = errors.New("not an encrypted file - no \"" + encryptedSuffix + "\" suffix") ErrorBadSeek = errors.New("Seek beyond end of file") defaultSalt = []byte{0xA8, 0x0D, 0xF4, 0x3A, 0x8F, 0xBD, 0x03, 0x08, 0xA7, 0xCA, 0xB8, 0x3E, 0x58, 0x1F, 0x86, 0xB1} obfuscQuoteRune = '!' 
) // Global variables var ( fileMagicBytes = []byte(fileMagic) ) // ReadSeekCloser is the interface of the read handles type ReadSeekCloser interface { io.Reader io.Seeker io.Closer fs.RangeSeeker } // OpenRangeSeek opens the file handle at the offset with the limit given type OpenRangeSeek func(ctx context.Context, offset, limit int64) (io.ReadCloser, error) // NameEncryptionMode is the type of file name encryption in use type NameEncryptionMode int // NameEncryptionMode levels const ( NameEncryptionOff NameEncryptionMode = iota NameEncryptionStandard NameEncryptionObfuscated ) // NewNameEncryptionMode turns a string into a NameEncryptionMode func NewNameEncryptionMode(s string) (mode NameEncryptionMode, err error) { s = strings.ToLower(s) switch s { case "off": mode = NameEncryptionOff case "standard": mode = NameEncryptionStandard case "obfuscate": mode = NameEncryptionObfuscated default: err = errors.Errorf("Unknown file name encryption mode %q", s) } return mode, err } // String turns mode into a human readable string func (mode NameEncryptionMode) String() (out string) { switch mode { case NameEncryptionOff: out = "off" case NameEncryptionStandard: out = "standard" case NameEncryptionObfuscated: out = "obfuscate" default: out = fmt.Sprintf("Unknown mode #%d", mode) } return out } // Cipher defines an encoding and decoding cipher for the crypt backend type Cipher struct { dataKey [32]byte // Key for secretbox nameKey [32]byte // 16,24 or 32 bytes nameTweak [nameCipherBlockSize]byte // used to tweak the name crypto blocks block gocipher.Block mode NameEncryptionMode buffers sync.Pool // encrypt/decrypt buffers cryptoRand io.Reader // read crypto random numbers from here dirNameEncrypt bool } // newCipher initialises the cipher. If salt is "" then it uses a built-in salt value func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bool) (*Cipher, error) { c := &Cipher{ mode: mode, cryptoRand: rand.Reader, dirNameEncrypt: dirNameEncrypt, } c.buffers.New = func() interface{} { return make([]byte, blockSize) } err := c.Key(password, salt) if err != nil { return nil, err } return c, nil } // Key creates all the internal keys from the password passed in using // scrypt. // // If salt is "" we use a fixed salt just to make attackers lives // slightly harder than using no salt. // // Note that an empty password makes all 0x00 keys which is used in the // tests. func (c *Cipher) Key(password, salt string) (err error) { const keySize = len(c.dataKey) + len(c.nameKey) + len(c.nameTweak) var saltBytes = defaultSalt if salt != "" { saltBytes = []byte(salt) } var key []byte if password == "" { key = make([]byte, keySize) } else { key, err = scrypt.Key([]byte(password), saltBytes, 16384, 8, 1, keySize) if err != nil { return err } } copy(c.dataKey[:], key) copy(c.nameKey[:], key[len(c.dataKey):]) copy(c.nameTweak[:], key[len(c.dataKey)+len(c.nameKey):]) // Key the name cipher c.block, err = aes.NewCipher(c.nameKey[:]) return err } // getBlock gets a block from the pool of size blockSize func (c *Cipher) getBlock() []byte { return c.buffers.Get().([]byte) } // putBlock returns a block to the pool of size blockSize func (c *Cipher) putBlock(buf []byte) { if len(buf) != blockSize { panic("bad blocksize returned to pool") } c.buffers.Put(buf) } // encodeFileName encodes a filename using a modified version of // standard base32 as described in RFC4648 // // The standard encoding is modified in two ways // * it becomes lower case (no-one likes upper case filenames!)
// * we strip the padding character `=` func encodeFileName(in []byte) string { encoded := base32.HexEncoding.EncodeToString(in) encoded = strings.TrimRight(encoded, "=") return strings.ToLower(encoded) } // decodeFileName decodes a filename as encoded by encodeFileName func decodeFileName(in string) ([]byte, error) { if strings.HasSuffix(in, "=") { return nil, ErrorBadBase32Encoding } // First figure out how many padding characters to add roundUpToMultipleOf8 := (len(in) + 7) &^ 7 equals := roundUpToMultipleOf8 - len(in) in = strings.ToUpper(in) + "========"[:equals] return base32.HexEncoding.DecodeString(in) } // encryptSegment encrypts a path segment // // This uses EME with AES // // EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the // 2003 paper "A Parallelizable Enciphering Mode" by Halevi and // Rogaway. // // This makes for deterministic encryption which is what we want - the // same filename must encrypt to the same thing. // // This means that // * filenames with the same name will encrypt the same // * filenames which start the same won't have a common prefix func (c *Cipher) encryptSegment(plaintext string) string { if plaintext == "" { return "" } paddedPlaintext := pkcs7.Pad(nameCipherBlockSize, []byte(plaintext)) ciphertext := eme.Transform(c.block, c.nameTweak[:], paddedPlaintext, eme.DirectionEncrypt) return encodeFileName(ciphertext) } // decryptSegment decrypts a path segment func (c *Cipher) decryptSegment(ciphertext string) (string, error) { if ciphertext == "" { return "", nil } rawCiphertext, err := decodeFileName(ciphertext) if err != nil { return "", err } if len(rawCiphertext)%nameCipherBlockSize != 0 { return "", ErrorNotAMultipleOfBlocksize } if len(rawCiphertext) == 0 { // not possible if decodeFilename() working correctly return "", ErrorTooShortAfterDecode } if len(rawCiphertext) > 2048 { return "", ErrorTooLongAfterDecode } paddedPlaintext := eme.Transform(c.block, c.nameTweak[:], rawCiphertext, eme.DirectionDecrypt) plaintext, err := pkcs7.Unpad(nameCipherBlockSize, paddedPlaintext) if err != nil { return "", err } return string(plaintext), err } // Simple obfuscation routines func (c *Cipher) obfuscateSegment(plaintext string) string { if plaintext == "" { return "" } // If the string isn't valid UTF8 then don't rotate; just // prepend a !. if !utf8.ValidString(plaintext) { return "!." + plaintext } // Calculate a simple rotation based on the filename and // the nameKey var dir int for _, runeValue := range plaintext { dir += int(runeValue) } dir = dir % 256 // We'll use this number to store in the result filename... var result bytes.Buffer _, _ = result.WriteString(strconv.Itoa(dir) + ".") // but we'll augment it with the nameKey for real calculation for i := 0; i < len(c.nameKey); i++ { dir += int(c.nameKey[i]) } // Now for each character, depending on the range it is in // we will actually rotate a different amount for _, runeValue := range plaintext { switch { case runeValue == obfuscQuoteRune: // Quote the Quote character _, _ = result.WriteRune(obfuscQuoteRune) _, _ = result.WriteRune(obfuscQuoteRune) case runeValue >= '0' && runeValue <= '9': // Number thisdir := (dir % 9) + 1 newRune := '0' + (int(runeValue)-'0'+thisdir)%10 _, _ = result.WriteRune(rune(newRune)) case (runeValue >= 'A' && runeValue <= 'Z') || (runeValue >= 'a' && runeValue <= 'z'): // ASCII letter. 
Try to avoid trivial A->a mappings thisdir := dir%25 + 1 // Calculate the offset of this character in A-Za-z pos := int(runeValue - 'A') if pos >= 26 { pos -= 6 // It's lower case } // Rotate the character to the new location pos = (pos + thisdir) % 52 if pos >= 26 { pos += 6 // and handle lower case offset again } _, _ = result.WriteRune(rune('A' + pos)) case runeValue >= 0xA0 && runeValue <= 0xFF: // Latin 1 supplement thisdir := (dir % 95) + 1 newRune := 0xA0 + (int(runeValue)-0xA0+thisdir)%96 _, _ = result.WriteRune(rune(newRune)) case runeValue >= 0x100: // Some random Unicode range; we have no good rules here thisdir := (dir % 127) + 1 base := int(runeValue - runeValue%256) newRune := rune(base + (int(runeValue)-base+thisdir)%256) // If the new character isn't a valid UTF8 char // then don't rotate it. Quote it instead if !utf8.ValidRune(newRune) { _, _ = result.WriteRune(obfuscQuoteRune) _, _ = result.WriteRune(runeValue) } else { _, _ = result.WriteRune(newRune) } default: // Leave character untouched _, _ = result.WriteRune(runeValue) } } return result.String() } func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) { if ciphertext == "" { return "", nil } pos := strings.Index(ciphertext, ".") if pos == -1 { return "", ErrorNotAnEncryptedFile } // No . num := ciphertext[:pos] if num == "!" { // No rotation; probably original was not valid unicode return ciphertext[pos+1:], nil } dir, err := strconv.Atoi(num) if err != nil { return "", ErrorNotAnEncryptedFile // Not a number } // add the nameKey to get the real rotate distance for i := 0; i < len(c.nameKey); i++ { dir += int(c.nameKey[i]) } var result bytes.Buffer inQuote := false for _, runeValue := range ciphertext[pos+1:] { switch { case inQuote: _, _ = result.WriteRune(runeValue) inQuote = false case runeValue == obfuscQuoteRune: inQuote = true case runeValue >= '0' && runeValue <= '9': // Number thisdir := (dir % 9) + 1 newRune := '0' + int(runeValue) - '0' - thisdir if newRune < '0' { newRune += 10 } _, _ = result.WriteRune(rune(newRune)) case (runeValue >= 'A' && runeValue <= 'Z') || (runeValue >= 'a' && runeValue <= 'z'): thisdir := dir%25 + 1 pos := int(runeValue - 'A') if pos >= 26 { pos -= 6 } pos = pos - thisdir if pos < 0 { pos += 52 } if pos >= 26 { pos += 6 } _, _ = result.WriteRune(rune('A' + pos)) case runeValue >= 0xA0 && runeValue <= 0xFF: thisdir := (dir % 95) + 1 newRune := 0xA0 + int(runeValue) - 0xA0 - thisdir if newRune < 0xA0 { newRune += 96 } _, _ = result.WriteRune(rune(newRune)) case runeValue >= 0x100: thisdir := (dir % 127) + 1 base := int(runeValue - runeValue%256) newRune := rune(base + (int(runeValue) - base - thisdir)) if int(newRune) < base { newRune += 256 } _, _ = result.WriteRune(newRune) default: _, _ = result.WriteRune(runeValue) } } return result.String(), nil } // encryptFileName encrypts a file path func (c *Cipher) encryptFileName(in string) string { segments := strings.Split(in, "/") for i := range segments { // Skip directory name encryption if the user chose to // leave them intact if !c.dirNameEncrypt && i != (len(segments)-1) { continue } if c.mode == NameEncryptionStandard { segments[i] = c.encryptSegment(segments[i]) } else { segments[i] = c.obfuscateSegment(segments[i]) } } return strings.Join(segments, "/") } // EncryptFileName encrypts a file path func (c *Cipher) EncryptFileName(in string) string { if c.mode == NameEncryptionOff { return in + encryptedSuffix } return c.encryptFileName(in) } // EncryptDirName encrypts a directory path func (c *Cipher) 
EncryptDirName(in string) string { if c.mode == NameEncryptionOff || !c.dirNameEncrypt { return in } return c.encryptFileName(in) } // decryptFileName decrypts a file path func (c *Cipher) decryptFileName(in string) (string, error) { segments := strings.Split(in, "/") for i := range segments { var err error // Skip directory name decryption if the user chose to // leave them intact if !c.dirNameEncrypt && i != (len(segments)-1) { continue } if c.mode == NameEncryptionStandard { segments[i], err = c.decryptSegment(segments[i]) } else { segments[i], err = c.deobfuscateSegment(segments[i]) } if err != nil { return "", err } } return strings.Join(segments, "/"), nil } // DecryptFileName decrypts a file path func (c *Cipher) DecryptFileName(in string) (string, error) { if c.mode == NameEncryptionOff { remainingLength := len(in) - len(encryptedSuffix) if remainingLength > 0 && strings.HasSuffix(in, encryptedSuffix) { return in[:remainingLength], nil } return "", ErrorNotAnEncryptedFile } return c.decryptFileName(in) } // DecryptDirName decrypts a directory path func (c *Cipher) DecryptDirName(in string) (string, error) { if c.mode == NameEncryptionOff || !c.dirNameEncrypt { return in, nil } return c.decryptFileName(in) } // NameEncryptionMode returns the encryption mode in use for names func (c *Cipher) NameEncryptionMode() NameEncryptionMode { return c.mode } // nonce is a NaCl secretbox nonce type nonce [fileNonceSize]byte // pointer returns the nonce as a *[24]byte for secretbox func (n *nonce) pointer() *[fileNonceSize]byte { return (*[fileNonceSize]byte)(n) } // fromReader fills the nonce from an io.Reader - normally the OS's // crypto random number generator func (n *nonce) fromReader(in io.Reader) error { read, err := io.ReadFull(in, (*n)[:]) if read != fileNonceSize { return errors.Wrap(err, "short read of nonce") } return nil } // fromBuf fills the nonce from the buffer passed in func (n *nonce) fromBuf(buf []byte) { read := copy((*n)[:], buf) if read != fileNonceSize { panic("buffer too short to read nonce") } } // carry 1 up the nonce from position i func (n *nonce) carry(i int) { for ; i < len(*n); i++ { digit := (*n)[i] newDigit := digit + 1 (*n)[i] = newDigit if newDigit >= digit { // exit if no carry break } } } // increment to add 1 to the nonce func (n *nonce) increment() { n.carry(0) } // add a uint64 to the nonce func (n *nonce) add(x uint64) { carry := uint16(0) for i := 0; i < 8; i++ { digit := (*n)[i] xDigit := byte(x) x >>= 8 carry += uint16(digit) + uint16(xDigit) (*n)[i] = byte(carry) carry >>= 8 } if carry != 0 { n.carry(8) } } // encrypter encrypts an io.Reader on the fly type encrypter struct { mu sync.Mutex in io.Reader c *Cipher nonce nonce buf []byte readBuf []byte bufIndex int bufSize int err error } // newEncrypter creates a new file handle encrypting on the fly func (c *Cipher) newEncrypter(in io.Reader, nonce *nonce) (*encrypter, error) { fh := &encrypter{ in: in, c: c, buf: c.getBlock(), readBuf: c.getBlock(), bufSize: fileHeaderSize, } // Initialise nonce if nonce != nil { fh.nonce = *nonce } else { err := fh.nonce.fromReader(c.cryptoRand) if err != nil { return nil, err } } // Copy magic into buffer copy(fh.buf, fileMagicBytes) // Copy nonce into buffer copy(fh.buf[fileMagicSize:], fh.nonce[:]) return fh, nil } // Read as per io.Reader func (fh *encrypter) Read(p []byte) (n int, err error) { fh.mu.Lock() defer fh.mu.Unlock() if fh.err != nil { return 0, fh.err } if fh.bufIndex >= fh.bufSize { // Read data // FIXME should overlap the reads with a go-routine
and 2 buffers? readBuf := fh.readBuf[:blockDataSize] n, err = io.ReadFull(fh.in, readBuf) if n == 0 { // err can't be nil since: // n == len(buf) if and only if err == nil. return fh.finish(err) } // possibly err != nil here, but we will process the // data and the next call to ReadFull will return 0, err // Write nonce to start of block copy(fh.buf, fh.nonce[:]) // Encrypt the block using the nonce block := fh.buf secretbox.Seal(block[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey) fh.bufIndex = 0 fh.bufSize = blockHeaderSize + n fh.nonce.increment() } n = copy(p, fh.buf[fh.bufIndex:fh.bufSize]) fh.bufIndex += n return n, nil } // finish sets the final error and tidies up func (fh *encrypter) finish(err error) (int, error) { if fh.err != nil { return 0, fh.err } fh.err = err fh.c.putBlock(fh.buf) fh.buf = nil fh.c.putBlock(fh.readBuf) fh.readBuf = nil return 0, err } // encryptData encrypts the data stream func (c *Cipher) encryptData(in io.Reader) (io.Reader, *encrypter, error) { in, wrap := accounting.UnWrap(in) // unwrap the accounting off the Reader out, err := c.newEncrypter(in, nil) if err != nil { return nil, nil, err } return wrap(out), out, nil // and wrap the accounting back on } // EncryptData encrypts the data stream func (c *Cipher) EncryptData(in io.Reader) (io.Reader, error) { out, _, err := c.encryptData(in) return out, err } // decrypter decrypts an io.ReadCloser on the fly type decrypter struct { mu sync.Mutex rc io.ReadCloser nonce nonce initialNonce nonce c *Cipher buf []byte readBuf []byte bufIndex int bufSize int err error limit int64 // limit of bytes to read, -1 for unlimited open OpenRangeSeek } // newDecrypter creates a new file handle decrypting on the fly func (c *Cipher) newDecrypter(rc io.ReadCloser) (*decrypter, error) { fh := &decrypter{ rc: rc, c: c, buf: c.getBlock(), readBuf: c.getBlock(), limit: -1, } // Read file header (magic + nonce) readBuf := fh.readBuf[:fileHeaderSize] _, err := io.ReadFull(fh.rc, readBuf) if err == io.EOF || err == io.ErrUnexpectedEOF { // This read from 0..fileHeaderSize-1 bytes return nil, fh.finishAndClose(ErrorEncryptedFileTooShort) } else if err != nil { return nil, fh.finishAndClose(err) } // check the magic if !bytes.Equal(readBuf[:fileMagicSize], fileMagicBytes) { return nil, fh.finishAndClose(ErrorEncryptedBadMagic) } // retrieve the nonce fh.nonce.fromBuf(readBuf[fileMagicSize:]) fh.initialNonce = fh.nonce return fh, nil } // newDecrypterSeek creates a new file handle decrypting on the fly func (c *Cipher) newDecrypterSeek(ctx context.Context, open OpenRangeSeek, offset, limit int64) (fh *decrypter, err error) { var rc io.ReadCloser doRangeSeek := false setLimit := false // Open initially with no seek if offset == 0 && limit < 0 { // If no offset or limit then open whole file rc, err = open(ctx, 0, -1) } else if offset == 0 { // If no offset open the header + limit worth of the file _, underlyingLimit, _, _ := calculateUnderlying(offset, limit) rc, err = open(ctx, 0, int64(fileHeaderSize)+underlyingLimit) setLimit = true } else { // Otherwise just read the header to start with rc, err = open(ctx, 0, int64(fileHeaderSize)) doRangeSeek = true } if err != nil { return nil, err } // Open the stream which fills in the nonce fh, err = c.newDecrypter(rc) if err != nil { return nil, err } fh.open = open // will be called by fh.RangeSeek if doRangeSeek { _, err = fh.RangeSeek(ctx, offset, io.SeekStart, limit) if err != nil { _ = fh.Close() return nil, err } } if setLimit { fh.limit = limit } return fh, nil } // read data
into internal buffer - call with fh.mu held func (fh *decrypter) fillBuffer() (err error) { // FIXME should overlap the reads with a go-routine and 2 buffers? readBuf := fh.readBuf n, err := io.ReadFull(fh.rc, readBuf) if n == 0 { // err can't be nil since: // n == len(buf) if and only if err == nil. return err } // possibly err != nil here, but we will process the data and // the next call to ReadFull will return 0, err // Check header + 1 byte exists if n <= blockHeaderSize { if err != nil { return err // return pending error as it is likely more accurate } return ErrorEncryptedFileBadHeader } // Decrypt the block using the nonce block := fh.buf _, ok := secretbox.Open(block[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey) if !ok { if err != nil { return err // return pending error as it is likely more accurate } return ErrorEncryptedBadBlock } fh.bufIndex = 0 fh.bufSize = n - blockHeaderSize fh.nonce.increment() return nil } // Read as per io.Reader func (fh *decrypter) Read(p []byte) (n int, err error) { fh.mu.Lock() defer fh.mu.Unlock() if fh.err != nil { return 0, fh.err } if fh.bufIndex >= fh.bufSize { err = fh.fillBuffer() if err != nil { return 0, fh.finish(err) } } toCopy := fh.bufSize - fh.bufIndex if fh.limit >= 0 && fh.limit < int64(toCopy) { toCopy = int(fh.limit) } n = copy(p, fh.buf[fh.bufIndex:fh.bufIndex+toCopy]) fh.bufIndex += n if fh.limit >= 0 { fh.limit -= int64(n) if fh.limit == 0 { return n, fh.finish(io.EOF) } } return n, nil } // calculateUnderlying converts an (offset, limit) in a crypted file // into an (underlyingOffset, underlyingLimit) for the underlying // file. // // It also returns number of bytes to discard after reading the first // block and number of blocks this is from the start so the nonce can // be incremented. func calculateUnderlying(offset, limit int64) (underlyingOffset, underlyingLimit, discard, blocks int64) { // blocks we need to seek, plus bytes we need to discard blocks, discard = offset/blockDataSize, offset%blockDataSize // Offset in underlying stream we need to seek underlyingOffset = int64(fileHeaderSize) + blocks*(blockHeaderSize+blockDataSize) // work out how many blocks we need to read underlyingLimit = int64(-1) if limit >= 0 { // bytes to read beyond the first block bytesToRead := limit - (blockDataSize - discard) // Read the first block blocksToRead := int64(1) if bytesToRead > 0 { // Blocks that need to be read plus left over blocks extraBlocksToRead, endBytes := bytesToRead/blockDataSize, bytesToRead%blockDataSize if endBytes != 0 { // If left over bytes must read another block extraBlocksToRead++ } blocksToRead += extraBlocksToRead } // Must read a whole number of blocks underlyingLimit = blocksToRead * (blockHeaderSize + blockDataSize) } return } // RangeSeek behaves like a call to Seek(offset int64, whence // int) with the output wrapped in an io.LimitedReader // limiting the total length to limit. // // RangeSeek with a limit of < 0 is equivalent to a regular Seek. 
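//
// A worked example of the arithmetic (an illustrative note using the
// sizes the tests in this package assume: fileHeaderSize = 32,
// blockHeaderSize = 16, blockDataSize = 65536): seeking to plaintext
// offset 65537 with limit 100 gives, via calculateUnderlying,
//
//	blocks  = 65537 / 65536 = 1     // whole blocks to skip over
//	discard = 65537 % 65536 = 1     // bytes to drop from the first block read
//	underlyingOffset = 32 + 1*(16+65536) = 65584
//	underlyingLimit  = 1*(16+65536)      = 65552
//
// so the decrypter reopens (or seeks) the wrapped stream at byte 65584,
// advances the nonce by one block, decrypts one whole 65552 byte block
// and discards its first decrypted byte before honouring the limit.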
func (fh *decrypter) RangeSeek(ctx context.Context, offset int64, whence int, limit int64) (int64, error) { fh.mu.Lock() defer fh.mu.Unlock() if fh.open == nil { return 0, fh.finish(errors.New("can't seek - not initialised with newDecrypterSeek")) } if whence != io.SeekStart { return 0, fh.finish(errors.New("can only seek from the start")) } // Reset error or return it if not EOF if fh.err == io.EOF { fh.unFinish() } else if fh.err != nil { return 0, fh.err } underlyingOffset, underlyingLimit, discard, blocks := calculateUnderlying(offset, limit) // Move the nonce on the correct number of blocks from the start fh.nonce = fh.initialNonce fh.nonce.add(uint64(blocks)) // Can we seek underlying stream directly? if do, ok := fh.rc.(fs.RangeSeeker); ok { // Seek underlying stream directly _, err := do.RangeSeek(ctx, underlyingOffset, 0, underlyingLimit) if err != nil { return 0, fh.finish(err) } } else { // if not reopen with seek _ = fh.rc.Close() // close underlying file fh.rc = nil // Re-open the underlying object with the offset given rc, err := fh.open(ctx, underlyingOffset, underlyingLimit) if err != nil { return 0, fh.finish(errors.Wrap(err, "couldn't reopen file with offset and limit")) } // Set the file handle fh.rc = rc } // Fill the buffer err := fh.fillBuffer() if err != nil { return 0, fh.finish(err) } // Discard bytes from the buffer if int(discard) > fh.bufSize { return 0, fh.finish(ErrorBadSeek) } fh.bufIndex = int(discard) // Set the limit fh.limit = limit return offset, nil } // Seek implements the io.Seeker interface func (fh *decrypter) Seek(offset int64, whence int) (int64, error) { return fh.RangeSeek(context.TODO(), offset, whence, -1) } // finish sets the final error and tidies up func (fh *decrypter) finish(err error) error { if fh.err != nil { return fh.err } fh.err = err fh.c.putBlock(fh.buf) fh.buf = nil fh.c.putBlock(fh.readBuf) fh.readBuf = nil return err } // unFinish undoes the effects of finish func (fh *decrypter) unFinish() { // Clear error fh.err = nil // reinstate the buffers fh.buf = fh.c.getBlock() fh.readBuf = fh.c.getBlock() // Empty the buffer fh.bufIndex = 0 fh.bufSize = 0 } // Close func (fh *decrypter) Close() error { fh.mu.Lock() defer fh.mu.Unlock() // Check already closed if fh.err == ErrorFileClosed { return fh.err } // Closed before reading EOF so not finish()ed yet if fh.err == nil { _ = fh.finish(io.EOF) } // Show file now closed fh.err = ErrorFileClosed if fh.rc == nil { return nil } return fh.rc.Close() } // finishAndClose does finish then Close() // // Used when we are returning a nil fh from new func (fh *decrypter) finishAndClose(err error) error { _ = fh.finish(err) _ = fh.Close() return err } // DecryptData decrypts the data stream func (c *Cipher) DecryptData(rc io.ReadCloser) (io.ReadCloser, error) { out, err := c.newDecrypter(rc) if err != nil { return nil, err } return out, nil } // DecryptDataSeek decrypts the data stream from offset // // The open function must return a ReadCloser opened to the offset supplied // // You must use this form of DecryptData if you might want to Seek the file handle func (c *Cipher) DecryptDataSeek(ctx context.Context, open OpenRangeSeek, offset, limit int64) (ReadSeekCloser, error) { out, err := c.newDecrypterSeek(ctx, open, offset, limit) if err != nil { return nil, err } return out, nil } // EncryptedSize calculates the size of the data when encrypted func (c *Cipher) EncryptedSize(size int64) int64 { blocks, residue := size/blockDataSize, size%blockDataSize encryptedSize := int64(fileHeaderSize) + 
blocks*(blockHeaderSize+blockDataSize) if residue != 0 { encryptedSize += blockHeaderSize + residue } return encryptedSize } // DecryptedSize calculates the size of the data when decrypted func (c *Cipher) DecryptedSize(size int64) (int64, error) { size -= int64(fileHeaderSize) if size < 0 { return 0, ErrorEncryptedFileTooShort } blocks, residue := size/blockSize, size%blockSize decryptedSize := blocks * blockDataSize if residue != 0 { residue -= blockHeaderSize if residue <= 0 { return 0, ErrorEncryptedFileBadHeader } } decryptedSize += residue return decryptedSize, nil } // check interfaces var ( _ io.ReadCloser = (*decrypter)(nil) _ io.Seeker = (*decrypter)(nil) _ fs.RangeSeeker = (*decrypter)(nil) _ io.Reader = (*encrypter)(nil) ) rclone-1.53.3/backend/crypt/cipher_test.go000066400000000000000000001322701375552240400204710ustar00rootroot00000000000000package crypt import ( "bytes" "context" "encoding/base32" "fmt" "io" "io/ioutil" "strings" "testing" "github.com/pkg/errors" "github.com/rclone/rclone/backend/crypt/pkcs7" "github.com/rclone/rclone/lib/readers" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestNewNameEncryptionMode(t *testing.T) { for _, test := range []struct { in string expected NameEncryptionMode expectedErr string }{ {"off", NameEncryptionOff, ""}, {"standard", NameEncryptionStandard, ""}, {"obfuscate", NameEncryptionObfuscated, ""}, {"potato", NameEncryptionOff, "Unknown file name encryption mode \"potato\""}, } { actual, actualErr := NewNameEncryptionMode(test.in) assert.Equal(t, actual, test.expected) if test.expectedErr == "" { assert.NoError(t, actualErr) } else { assert.Error(t, actualErr, test.expectedErr) } } } func TestNewNameEncryptionModeString(t *testing.T) { assert.Equal(t, NameEncryptionOff.String(), "off") assert.Equal(t, NameEncryptionStandard.String(), "standard") assert.Equal(t, NameEncryptionObfuscated.String(), "obfuscate") assert.Equal(t, NameEncryptionMode(3).String(), "Unknown mode #3") } func TestEncodeFileName(t *testing.T) { for _, test := range []struct { in string expected string }{ {"", ""}, {"1", "64"}, {"12", "64p0"}, {"123", "64p36"}, {"1234", "64p36d0"}, {"12345", "64p36d1l"}, {"123456", "64p36d1l6o"}, {"1234567", "64p36d1l6org"}, {"12345678", "64p36d1l6orjg"}, {"123456789", "64p36d1l6orjge8"}, {"1234567890", "64p36d1l6orjge9g"}, {"12345678901", "64p36d1l6orjge9g64"}, {"123456789012", "64p36d1l6orjge9g64p0"}, {"1234567890123", "64p36d1l6orjge9g64p36"}, {"12345678901234", "64p36d1l6orjge9g64p36d0"}, {"123456789012345", "64p36d1l6orjge9g64p36d1l"}, {"1234567890123456", "64p36d1l6orjge9g64p36d1l6o"}, } { actual := encodeFileName([]byte(test.in)) assert.Equal(t, actual, test.expected, fmt.Sprintf("in=%q", test.in)) recovered, err := decodeFileName(test.expected) assert.NoError(t, err) assert.Equal(t, string(recovered), test.in, fmt.Sprintf("reverse=%q", test.expected)) in := strings.ToUpper(test.expected) recovered, err = decodeFileName(in) assert.NoError(t, err) assert.Equal(t, string(recovered), test.in, fmt.Sprintf("reverse=%q", in)) } } func TestDecodeFileName(t *testing.T) { // We've tested decoding the valid ones above, now concentrate on the invalid ones for _, test := range []struct { in string expectedErr error }{ {"64=", ErrorBadBase32Encoding}, {"!", base32.CorruptInputError(0)}, {"hello=hello", base32.CorruptInputError(5)}, } { actual, actualErr := decodeFileName(test.in) assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, 
actualErr)) } } func TestEncryptSegment(t *testing.T) { c, _ := newCipher(NameEncryptionStandard, "", "", true) for _, test := range []struct { in string expected string }{ {"", ""}, {"1", "p0e52nreeaj0a5ea7s64m4j72s"}, {"12", "l42g6771hnv3an9cgc8cr2n1ng"}, {"123", "qgm4avr35m5loi1th53ato71v0"}, {"1234", "8ivr2e9plj3c3esisjpdisikos"}, {"12345", "rh9vu63q3o29eqmj4bg6gg7s44"}, {"123456", "bn717l3alepn75b2fb2ejmi4b4"}, {"1234567", "n6bo9jmb1qe3b1ogtj5qkf19k8"}, {"12345678", "u9t24j7uaq94dh5q53m3s4t9ok"}, {"123456789", "37hn305g6j12d1g0kkrl7ekbs4"}, {"1234567890", "ot8d91eplaglb62k2b1trm2qv0"}, {"12345678901", "h168vvrgb53qnrtvvmb378qrcs"}, {"123456789012", "s3hsdf9e29ithrqbjqu01t8q2s"}, {"1234567890123", "cf3jimlv1q2oc553mv7s3mh3eo"}, {"12345678901234", "moq0uqdlqrblrc5pa5u5c7hq9g"}, {"123456789012345", "eeam3li4rnommi3a762h5n7meg"}, {"1234567890123456", "mijbj0frqf6ms7frcr6bd9h0env53jv96pjaaoirk7forcgpt70g"}, } { actual := c.encryptSegment(test.in) assert.Equal(t, test.expected, actual, fmt.Sprintf("Testing %q", test.in)) recovered, err := c.decryptSegment(test.expected) assert.NoError(t, err, fmt.Sprintf("Testing reverse %q", test.expected)) assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %q", test.expected)) in := strings.ToUpper(test.expected) recovered, err = c.decryptSegment(in) assert.NoError(t, err, fmt.Sprintf("Testing reverse %q", in)) assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %q", in)) } } func TestDecryptSegment(t *testing.T) { // We've tested the forwards above, now concentrate on the errors longName := make([]byte, 3328) for i := range longName { longName[i] = 'a' } c, _ := newCipher(NameEncryptionStandard, "", "", true) for _, test := range []struct { in string expectedErr error }{ {"64=", ErrorBadBase32Encoding}, {"!", base32.CorruptInputError(0)}, {string(longName), ErrorTooLongAfterDecode}, {encodeFileName([]byte("a")), ErrorNotAMultipleOfBlocksize}, {encodeFileName([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize}, {encodeFileName([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong}, } { actual, actualErr := c.decryptSegment(test.in) assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr)) } } func TestEncryptFileName(t *testing.T) { // First standard mode c, _ := newCipher(NameEncryptionStandard, "", "", true) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1")) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12")) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123")) // Standard mode with directory name encryption off c, _ = newCipher(NameEncryptionStandard, "", "", false) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1")) assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12")) assert.Equal(t, "1/12/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123")) // Now off mode c, _ = newCipher(NameEncryptionOff, "", "", true) assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123")) // Obfuscation mode c, _ = newCipher(NameEncryptionObfuscated, "", "", true) assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello")) assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1")) assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0")) // Obfuscation mode with directory name encryption off c, _ = newCipher(NameEncryptionObfuscated, "", "", false) 
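	// A note on the obfuscated form (a reading aid inferred from
	// obfuscateSegment and the expected values here, not upstream
	// commentary): each segment becomes "<dir>.<rotated>" where <dir> is
	// the sum of the segment's rune values modulo 256; the name key bytes
	// (all zero here, since the password is empty) are added to <dir> to
	// get the real rotation. For "!hello" the sum is 33+104+101+108+108+111
	// = 565, printed as 565%256 = 53; letters rotate by 53%25+1 = 4 within
	// A-Za-z so "hello" -> "lipps", and '!' is the quote rune, escaping
	// itself as "!!" - hence "53.!!lipps" below.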
assert.Equal(t, "1/12/123/53.!!lipps", c.EncryptFileName("1/12/123/!hello")) assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1")) assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0")) } func TestDecryptFileName(t *testing.T) { for _, test := range []struct { mode NameEncryptionMode dirNameEncrypt bool in string expected string expectedErr error }{ {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s", "1", nil}, {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12", nil}, {NameEncryptionStandard, true, "p0e52nreeAJ0A5EA7S64M4J72S/L42G6771HNv3an9cgc8cr2n1ng", "1/12", nil}, {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil}, {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize}, {NameEncryptionStandard, false, "1/12/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil}, {NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil}, {NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile}, {NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile}, {NameEncryptionObfuscated, true, "!.hello", "hello", nil}, {NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile}, {NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil}, {NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil}, {NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil}, } { c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt) actual, actualErr := c.DecryptFileName(test.in) what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode) assert.Equal(t, test.expected, actual, what) assert.Equal(t, test.expectedErr, actualErr, what) } } func TestEncDecMatches(t *testing.T) { for _, test := range []struct { mode NameEncryptionMode in string }{ {NameEncryptionStandard, "1/2/3/4"}, {NameEncryptionOff, "1/2/3/4"}, {NameEncryptionObfuscated, "1/2/3/4/!hello\u03a0"}, {NameEncryptionObfuscated, "Avatar The Last Airbender"}, } { c, _ := newCipher(test.mode, "", "", true) out, err := c.DecryptFileName(c.EncryptFileName(test.in)) what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode) assert.Equal(t, out, test.in, what) assert.Equal(t, err, nil, what) } } func TestEncryptDirName(t *testing.T) { // First standard mode c, _ := newCipher(NameEncryptionStandard, "", "", true) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptDirName("1")) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptDirName("1/12")) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptDirName("1/12/123")) // Standard mode with dir name encryption off c, _ = newCipher(NameEncryptionStandard, "", "", false) assert.Equal(t, "1/12", c.EncryptDirName("1/12")) assert.Equal(t, "1/12/123", c.EncryptDirName("1/12/123")) // Now off mode c, _ = newCipher(NameEncryptionOff, "", "", true) assert.Equal(t, "1/12/123", c.EncryptDirName("1/12/123")) } func TestDecryptDirName(t *testing.T) { for _, test := range []struct { mode NameEncryptionMode dirNameEncrypt bool in string expected string expectedErr error }{ {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s", "1", nil}, {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12", nil}, {NameEncryptionStandard, true, "p0e52nreeAJ0A5EA7S64M4J72S/L42G6771HNv3an9cgc8cr2n1ng", "1/12", nil}, 
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil}, {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize}, {NameEncryptionStandard, false, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", nil}, {NameEncryptionStandard, false, "1/12/123", "1/12/123", nil}, {NameEncryptionOff, true, "1/12/123.bin", "1/12/123.bin", nil}, {NameEncryptionOff, true, "1/12/123", "1/12/123", nil}, {NameEncryptionOff, true, ".bin", ".bin", nil}, } { c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt) actual, actualErr := c.DecryptDirName(test.in) what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode) assert.Equal(t, test.expected, actual, what) assert.Equal(t, test.expectedErr, actualErr, what) } } func TestEncryptedSize(t *testing.T) { c, _ := newCipher(NameEncryptionStandard, "", "", true) for _, test := range []struct { in int64 expected int64 }{ {0, 32}, {1, 32 + 16 + 1}, {65536, 32 + 16 + 65536}, {65537, 32 + 16 + 65536 + 16 + 1}, {1 << 20, 32 + 16*(16+65536)}, {(1 << 20) + 65535, 32 + 16*(16+65536) + 16 + 65535}, {1 << 30, 32 + 16384*(16+65536)}, {(1 << 40) + 1, 32 + 16777216*(16+65536) + 16 + 1}, } { actual := c.EncryptedSize(test.in) assert.Equal(t, test.expected, actual, fmt.Sprintf("Testing %d", test.in)) recovered, err := c.DecryptedSize(test.expected) assert.NoError(t, err, fmt.Sprintf("Testing reverse %d", test.expected)) assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %d", test.expected)) } } func TestDecryptedSize(t *testing.T) { // Test the errors since we tested the reverse above c, _ := newCipher(NameEncryptionStandard, "", "", true) for _, test := range []struct { in int64 expectedErr error }{ {0, ErrorEncryptedFileTooShort}, {0, ErrorEncryptedFileTooShort}, {1, ErrorEncryptedFileTooShort}, {7, ErrorEncryptedFileTooShort}, {32 + 1, ErrorEncryptedFileBadHeader}, {32 + 16, ErrorEncryptedFileBadHeader}, {32 + 16 + 65536 + 1, ErrorEncryptedFileBadHeader}, {32 + 16 + 65536 + 16, ErrorEncryptedFileBadHeader}, } { _, actualErr := c.DecryptedSize(test.in) assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("Testing %d", test.in)) } } func TestNoncePointer(t *testing.T) { var x nonce assert.Equal(t, (*[24]byte)(&x), x.pointer()) } func TestNonceFromReader(t *testing.T) { var x nonce buf := bytes.NewBufferString("123456789abcdefghijklmno") err := x.fromReader(buf) assert.NoError(t, err) assert.Equal(t, nonce{'1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o'}, x) buf = bytes.NewBufferString("123456789abcdefghijklmn") err = x.fromReader(buf) assert.Error(t, err, "short read of nonce") } func TestNonceFromBuf(t *testing.T) { var x nonce buf := []byte("123456789abcdefghijklmnoXXXXXXXX") x.fromBuf(buf) assert.Equal(t, nonce{'1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o'}, x) buf = []byte("0123456789abcdefghijklmn") x.fromBuf(buf) assert.Equal(t, nonce{'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n'}, x) buf = []byte("0123456789abcdefghijklm") assert.Panics(t, func() { x.fromBuf(buf) }) } func TestNonceIncrement(t *testing.T) { for _, test := range []struct { in nonce out nonce }{ { nonce{0x00}, nonce{0x01}, }, { nonce{0xFF}, 
nonce{0x00, 0x01}, }, { nonce{0xFF, 0xFF}, nonce{0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, }, } { x := test.in x.increment() assert.Equal(t, test.out, x) } } func TestNonceAdd(t *testing.T) { for _, test := range []struct { add uint64 in nonce out nonce }{ { 0x01, nonce{0x00}, nonce{0x01}, }, { 0xFF, nonce{0xFF}, nonce{0xFE, 0x01}, }, { 0xFFFF, nonce{0xFF, 0xFF}, nonce{0xFE, 0xFF, 0x01}, }, { 0xFFFFFF, nonce{0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0x01}, }, { 0xFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFe, 0xFF, 0xFF, 0xFF, 0x01}, }, { 0xFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0x01}, }, { 0xFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x01}, }, { 0xFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 
0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01}, }, { 0xFFFFFFFFFFFFFFFF, nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, nonce{0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, }, } { x := test.in x.add(test.add) assert.Equal(t, test.out, x) } } // randomSource can read or write a random sequence type randomSource struct { counter int64 size int64 } func newRandomSource(size int64) *randomSource { return &randomSource{ size: size, } } func (r *randomSource) next() byte { r.counter++ return byte(r.counter % 257) } func (r *randomSource) Read(p []byte) (n int, err error) { for i := range p { if r.counter >= r.size { err = io.EOF break } p[i] = r.next() n++ } return n, err } func (r *randomSource) Write(p []byte) (n int, err error) { for i := range p { if p[i] != r.next() { return 0, errors.Errorf("Error in stream at %d", r.counter) } } return len(p), nil } func (r *randomSource) Close() error { return nil } // Check interfaces var ( _ io.ReadCloser = (*randomSource)(nil) _ io.WriteCloser = (*randomSource)(nil) ) // Test test infrastructure first! 
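// randomSource above emits a deterministic counter-derived byte sequence,
// so a source and a sink built with the same size can verify each other
// without buffering the stream; any corruption surfaces as
// "Error in stream at <offset>". A usage sketch (same calls as the test
// that follows):
//
//	src := newRandomSource(1024) // reads yield 1024 deterministic bytes
//	dst := newRandomSource(1024) // writes must replay the same bytes
//	n, err := io.Copy(dst, src)  // n == 1024, err == nil on success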
func TestRandomSource(t *testing.T) { source := newRandomSource(1e8) sink := newRandomSource(1e8) n, err := io.Copy(sink, source) assert.NoError(t, err) assert.Equal(t, int64(1e8), n) source = newRandomSource(1e8) buf := make([]byte, 16) _, _ = source.Read(buf) sink = newRandomSource(1e8) _, err = io.Copy(sink, source) assert.Error(t, err, "Error in stream") } type zeroes struct{} func (z *zeroes) Read(p []byte) (n int, err error) { for i := range p { p[i] = 0 n++ } return n, nil } // Test encrypt decrypt with different buffer sizes func testEncryptDecrypt(t *testing.T, bufSize int, copySize int64) { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) c.cryptoRand = &zeroes{} // zero out the nonce buf := make([]byte, bufSize) source := newRandomSource(copySize) encrypted, err := c.newEncrypter(source, nil) assert.NoError(t, err) decrypted, err := c.newDecrypter(ioutil.NopCloser(encrypted)) assert.NoError(t, err) sink := newRandomSource(copySize) n, err := io.CopyBuffer(sink, decrypted, buf) assert.NoError(t, err) assert.Equal(t, copySize, n) blocks := copySize / blockSize if (copySize % blockSize) != 0 { blocks++ } var expectedNonce = nonce{byte(blocks), byte(blocks >> 8), byte(blocks >> 16), byte(blocks >> 32)} assert.Equal(t, expectedNonce, encrypted.nonce) assert.Equal(t, expectedNonce, decrypted.nonce) } func TestEncryptDecrypt1(t *testing.T) { testEncryptDecrypt(t, 1, 1e7) } func TestEncryptDecrypt32(t *testing.T) { testEncryptDecrypt(t, 32, 1e8) } func TestEncryptDecrypt4096(t *testing.T) { testEncryptDecrypt(t, 4096, 1e8) } func TestEncryptDecrypt65536(t *testing.T) { testEncryptDecrypt(t, 65536, 1e8) } func TestEncryptDecrypt65537(t *testing.T) { testEncryptDecrypt(t, 65537, 1e8) } var ( file0 = []byte{ 0x52, 0x43, 0x4c, 0x4f, 0x4e, 0x45, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, } file1 = []byte{ 0x52, 0x43, 0x4c, 0x4f, 0x4e, 0x45, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x09, 0x5b, 0x44, 0x6c, 0xd6, 0x23, 0x7b, 0xbc, 0xb0, 0x8d, 0x09, 0xfb, 0x52, 0x4c, 0xe5, 0x65, 0xAA, } file16 = []byte{ 0x52, 0x43, 0x4c, 0x4f, 0x4e, 0x45, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0xb9, 0xc4, 0x55, 0x2a, 0x27, 0x10, 0x06, 0x29, 0x18, 0x96, 0x0a, 0x3e, 0x60, 0x8c, 0x29, 0xb9, 0xaa, 0x8a, 0x5e, 0x1e, 0x16, 0x5b, 0x6d, 0x07, 0x5d, 0xe4, 0xe9, 0xbb, 0x36, 0x7f, 0xd6, 0xd4, } ) func TestEncryptData(t *testing.T) { for _, test := range []struct { in []byte expected []byte }{ {[]byte{}, file0}, {[]byte{1}, file1}, {[]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, file16}, } { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator // Check encode works buf := bytes.NewBuffer(test.in) encrypted, err := c.EncryptData(buf) assert.NoError(t, err) out, err := ioutil.ReadAll(encrypted) assert.NoError(t, err) assert.Equal(t, test.expected, out) // Check we can decode the data properly too... 
buf = bytes.NewBuffer(out) decrypted, err := c.DecryptData(ioutil.NopCloser(buf)) assert.NoError(t, err) out, err = ioutil.ReadAll(decrypted) assert.NoError(t, err) assert.Equal(t, test.in, out) } } func TestNewEncrypter(t *testing.T) { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator z := &zeroes{} fh, err := c.newEncrypter(z, nil) assert.NoError(t, err) assert.Equal(t, nonce{0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18}, fh.nonce) assert.Equal(t, []byte{'R', 'C', 'L', 'O', 'N', 'E', 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18}, fh.buf[:32]) // Test error path c.cryptoRand = bytes.NewBufferString("123456789abcdefghijklmn") fh, err = c.newEncrypter(z, nil) assert.Nil(t, fh) assert.Error(t, err, "short read of nonce") } // Test the stream returning 0, io.ErrUnexpectedEOF - this used to // cause a fatal loop func TestNewEncrypterErrUnexpectedEOF(t *testing.T) { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) in := &readers.ErrorReader{Err: io.ErrUnexpectedEOF} fh, err := c.newEncrypter(in, nil) assert.NoError(t, err) n, err := io.CopyN(ioutil.Discard, fh, 1e6) assert.Equal(t, io.ErrUnexpectedEOF, err) assert.Equal(t, int64(32), n) } type closeDetector struct { io.Reader closed int } func newCloseDetector(in io.Reader) *closeDetector { return &closeDetector{ Reader: in, } } func (c *closeDetector) Close() error { c.closed++ return nil } func TestNewDecrypter(t *testing.T) { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator cd := newCloseDetector(bytes.NewBuffer(file0)) fh, err := c.newDecrypter(cd) assert.NoError(t, err) // check nonce is in place assert.Equal(t, file0[8:32], fh.nonce[:]) assert.Equal(t, 0, cd.closed) // Test error paths for i := range file0 { cd := newCloseDetector(bytes.NewBuffer(file0[:i])) fh, err = c.newDecrypter(cd) assert.Nil(t, fh) assert.Error(t, err, ErrorEncryptedFileTooShort.Error()) assert.Equal(t, 1, cd.closed) } er := &readers.ErrorReader{Err: errors.New("potato")} cd = newCloseDetector(er) fh, err = c.newDecrypter(cd) assert.Nil(t, fh) assert.Error(t, err, "potato") assert.Equal(t, 1, cd.closed) // bad magic file0copy := make([]byte, len(file0)) copy(file0copy, file0) for i := range fileMagic { file0copy[i] ^= 0x1 cd := newCloseDetector(bytes.NewBuffer(file0copy)) fh, err := c.newDecrypter(cd) assert.Nil(t, fh) assert.Error(t, err, ErrorEncryptedBadMagic.Error()) file0copy[i] ^= 0x1 assert.Equal(t, 1, cd.closed) } } // Test the stream returning 0, io.ErrUnexpectedEOF func TestNewDecrypterErrUnexpectedEOF(t *testing.T) { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) in2 := &readers.ErrorReader{Err: io.ErrUnexpectedEOF} in1 := bytes.NewBuffer(file16) in := ioutil.NopCloser(io.MultiReader(in1, in2)) fh, err := c.newDecrypter(in) assert.NoError(t, err) n, err := io.CopyN(ioutil.Discard, fh, 1e6) assert.Equal(t, io.ErrUnexpectedEOF, err) assert.Equal(t, int64(16), n) } func TestNewDecrypterSeekLimit(t *testing.T) { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) c.cryptoRand = &zeroes{} // nodge the crypto rand generator // Make random data const dataSize = 
150000 plaintext, err := ioutil.ReadAll(newRandomSource(dataSize)) assert.NoError(t, err) // Encrypt the data buf := bytes.NewBuffer(plaintext) encrypted, err := c.EncryptData(buf) assert.NoError(t, err) ciphertext, err := ioutil.ReadAll(encrypted) assert.NoError(t, err) trials := []int{0, 1, 2, 3, 4, 5, 7, 8, 9, 15, 16, 17, 31, 32, 33, 63, 64, 65, 127, 128, 129, 255, 256, 257, 511, 512, 513, 1023, 1024, 1025, 2047, 2048, 2049, 4095, 4096, 4097, 8191, 8192, 8193, 16383, 16384, 16385, 32767, 32768, 32769, 65535, 65536, 65537, 131071, 131072, 131073, dataSize - 1, dataSize} limits := []int{-1, 0, 1, 65535, 65536, 65537, 131071, 131072, 131073} // Open stream with a seek of underlyingOffset var reader io.ReadCloser open := func(ctx context.Context, underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) { end := len(ciphertext) if underlyingLimit >= 0 { end = int(underlyingOffset + underlyingLimit) if end > len(ciphertext) { end = len(ciphertext) } } reader = ioutil.NopCloser(bytes.NewBuffer(ciphertext[int(underlyingOffset):end])) return reader, nil } inBlock := make([]byte, dataSize) // Check the seek worked by reading a block and checking it // against what it should be check := func(rc io.Reader, offset, limit int) { n, err := io.ReadFull(rc, inBlock) if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF { require.NoError(t, err) } seekedDecrypted := inBlock[:n] what := fmt.Sprintf("offset = %d, limit = %d", offset, limit) if limit >= 0 { assert.Equal(t, limit, n, what) } require.Equal(t, plaintext[offset:offset+n], seekedDecrypted, what) // We should have completely emptied the reader at this point n, err = reader.Read(inBlock) assert.Equal(t, io.EOF, err) assert.Equal(t, 0, n) } // Now try decoding it with an open/seek for _, offset := range trials { for _, limit := range limits { if offset+limit > len(plaintext) { continue } rc, err := c.DecryptDataSeek(context.Background(), open, int64(offset), int64(limit)) assert.NoError(t, err) check(rc, offset, limit) } } // Try decoding it with a single open and lots of seeks fh, err := c.DecryptDataSeek(context.Background(), open, 0, -1) assert.NoError(t, err) for _, offset := range trials { for _, limit := range limits { if offset+limit > len(plaintext) { continue } _, err := fh.RangeSeek(context.Background(), int64(offset), io.SeekStart, int64(limit)) assert.NoError(t, err) check(fh, offset, limit) } } // Do some checks on the open callback for _, test := range []struct { offset, limit int64 wantOffset, wantLimit int64 }{ // unlimited {0, -1, int64(fileHeaderSize), -1}, {1, -1, int64(fileHeaderSize), -1}, {blockDataSize - 1, -1, int64(fileHeaderSize), -1}, {blockDataSize, -1, int64(fileHeaderSize) + blockSize, -1}, {blockDataSize + 1, -1, int64(fileHeaderSize) + blockSize, -1}, // limit=1 {0, 1, int64(fileHeaderSize), blockSize}, {1, 1, int64(fileHeaderSize), blockSize}, {blockDataSize - 1, 1, int64(fileHeaderSize), blockSize}, {blockDataSize, 1, int64(fileHeaderSize) + blockSize, blockSize}, {blockDataSize + 1, 1, int64(fileHeaderSize) + blockSize, blockSize}, // limit=100 {0, 100, int64(fileHeaderSize), blockSize}, {1, 100, int64(fileHeaderSize), blockSize}, {blockDataSize - 1, 100, int64(fileHeaderSize), 2 * blockSize}, {blockDataSize, 100, int64(fileHeaderSize) + blockSize, blockSize}, {blockDataSize + 1, 100, int64(fileHeaderSize) + blockSize, blockSize}, // limit=blockDataSize-1 {0, blockDataSize - 1, int64(fileHeaderSize), blockSize}, {1, blockDataSize - 1, int64(fileHeaderSize), blockSize}, {blockDataSize - 1, 
blockDataSize - 1, int64(fileHeaderSize), 2 * blockSize}, {blockDataSize, blockDataSize - 1, int64(fileHeaderSize) + blockSize, blockSize}, {blockDataSize + 1, blockDataSize - 1, int64(fileHeaderSize) + blockSize, blockSize}, // limit=blockDataSize {0, blockDataSize, int64(fileHeaderSize), blockSize}, {1, blockDataSize, int64(fileHeaderSize), 2 * blockSize}, {blockDataSize - 1, blockDataSize, int64(fileHeaderSize), 2 * blockSize}, {blockDataSize, blockDataSize, int64(fileHeaderSize) + blockSize, blockSize}, {blockDataSize + 1, blockDataSize, int64(fileHeaderSize) + blockSize, 2 * blockSize}, // limit=blockDataSize+1 {0, blockDataSize + 1, int64(fileHeaderSize), 2 * blockSize}, {1, blockDataSize + 1, int64(fileHeaderSize), 2 * blockSize}, {blockDataSize - 1, blockDataSize + 1, int64(fileHeaderSize), 2 * blockSize}, {blockDataSize, blockDataSize + 1, int64(fileHeaderSize) + blockSize, 2 * blockSize}, {blockDataSize + 1, blockDataSize + 1, int64(fileHeaderSize) + blockSize, 2 * blockSize}, } { what := fmt.Sprintf("offset = %d, limit = %d", test.offset, test.limit) callCount := 0 testOpen := func(ctx context.Context, underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) { switch callCount { case 0: assert.Equal(t, int64(0), underlyingOffset, what) assert.Equal(t, int64(-1), underlyingLimit, what) case 1: assert.Equal(t, test.wantOffset, underlyingOffset, what) assert.Equal(t, test.wantLimit, underlyingLimit, what) default: t.Errorf("Too many calls %d for %s", callCount+1, what) } callCount++ return open(ctx, underlyingOffset, underlyingLimit) } fh, err := c.DecryptDataSeek(context.Background(), testOpen, 0, -1) assert.NoError(t, err) gotOffset, err := fh.RangeSeek(context.Background(), test.offset, io.SeekStart, test.limit) assert.NoError(t, err) assert.Equal(t, gotOffset, test.offset) } } func TestDecrypterCalculateUnderlying(t *testing.T) { for _, test := range []struct { offset, limit int64 wantOffset, wantLimit int64 wantDiscard, wantBlocks int64 }{ // unlimited {0, -1, int64(fileHeaderSize), -1, 0, 0}, {1, -1, int64(fileHeaderSize), -1, 1, 0}, {blockDataSize - 1, -1, int64(fileHeaderSize), -1, blockDataSize - 1, 0}, {blockDataSize, -1, int64(fileHeaderSize) + blockSize, -1, 0, 1}, {blockDataSize + 1, -1, int64(fileHeaderSize) + blockSize, -1, 1, 1}, // limit=1 {0, 1, int64(fileHeaderSize), blockSize, 0, 0}, {1, 1, int64(fileHeaderSize), blockSize, 1, 0}, {blockDataSize - 1, 1, int64(fileHeaderSize), blockSize, blockDataSize - 1, 0}, {blockDataSize, 1, int64(fileHeaderSize) + blockSize, blockSize, 0, 1}, {blockDataSize + 1, 1, int64(fileHeaderSize) + blockSize, blockSize, 1, 1}, // limit=100 {0, 100, int64(fileHeaderSize), blockSize, 0, 0}, {1, 100, int64(fileHeaderSize), blockSize, 1, 0}, {blockDataSize - 1, 100, int64(fileHeaderSize), 2 * blockSize, blockDataSize - 1, 0}, {blockDataSize, 100, int64(fileHeaderSize) + blockSize, blockSize, 0, 1}, {blockDataSize + 1, 100, int64(fileHeaderSize) + blockSize, blockSize, 1, 1}, // limit=blockDataSize-1 {0, blockDataSize - 1, int64(fileHeaderSize), blockSize, 0, 0}, {1, blockDataSize - 1, int64(fileHeaderSize), blockSize, 1, 0}, {blockDataSize - 1, blockDataSize - 1, int64(fileHeaderSize), 2 * blockSize, blockDataSize - 1, 0}, {blockDataSize, blockDataSize - 1, int64(fileHeaderSize) + blockSize, blockSize, 0, 1}, {blockDataSize + 1, blockDataSize - 1, int64(fileHeaderSize) + blockSize, blockSize, 1, 1}, // limit=blockDataSize {0, blockDataSize, int64(fileHeaderSize), blockSize, 0, 0}, {1, blockDataSize, int64(fileHeaderSize), 2 * 
blockSize, 1, 0}, {blockDataSize - 1, blockDataSize, int64(fileHeaderSize), 2 * blockSize, blockDataSize - 1, 0}, {blockDataSize, blockDataSize, int64(fileHeaderSize) + blockSize, blockSize, 0, 1}, {blockDataSize + 1, blockDataSize, int64(fileHeaderSize) + blockSize, 2 * blockSize, 1, 1}, // limit=blockDataSize+1 {0, blockDataSize + 1, int64(fileHeaderSize), 2 * blockSize, 0, 0}, {1, blockDataSize + 1, int64(fileHeaderSize), 2 * blockSize, 1, 0}, {blockDataSize - 1, blockDataSize + 1, int64(fileHeaderSize), 2 * blockSize, blockDataSize - 1, 0}, {blockDataSize, blockDataSize + 1, int64(fileHeaderSize) + blockSize, 2 * blockSize, 0, 1}, {blockDataSize + 1, blockDataSize + 1, int64(fileHeaderSize) + blockSize, 2 * blockSize, 1, 1}, } { what := fmt.Sprintf("offset = %d, limit = %d", test.offset, test.limit) underlyingOffset, underlyingLimit, discard, blocks := calculateUnderlying(test.offset, test.limit) assert.Equal(t, test.wantOffset, underlyingOffset, what) assert.Equal(t, test.wantLimit, underlyingLimit, what) assert.Equal(t, test.wantDiscard, discard, what) assert.Equal(t, test.wantBlocks, blocks, what) } } func TestDecrypterRead(t *testing.T) { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) // Test truncating the file at each possible point for i := 0; i < len(file16)-1; i++ { what := fmt.Sprintf("truncating to %d/%d", i, len(file16)) cd := newCloseDetector(bytes.NewBuffer(file16[:i])) fh, err := c.newDecrypter(cd) if i < fileHeaderSize { assert.EqualError(t, err, ErrorEncryptedFileTooShort.Error(), what) continue } if err != nil { assert.NoError(t, err, what) continue } _, err = ioutil.ReadAll(fh) var expectedErr error switch { case i == fileHeaderSize: // This would normally produce an error *except* on the first block expectedErr = nil default: expectedErr = io.ErrUnexpectedEOF } if expectedErr != nil { assert.EqualError(t, err, expectedErr.Error(), what) } else { assert.NoError(t, err, what) } assert.Equal(t, 0, cd.closed, what) } // Test producing an error on the file on Read the underlying file in1 := bytes.NewBuffer(file1) in2 := &readers.ErrorReader{Err: errors.New("potato")} in := io.MultiReader(in1, in2) cd := newCloseDetector(in) fh, err := c.newDecrypter(cd) assert.NoError(t, err) _, err = ioutil.ReadAll(fh) assert.Error(t, err, "potato") assert.Equal(t, 0, cd.closed) // Test corrupting the input // shouldn't be able to corrupt any byte without some sort of error file16copy := make([]byte, len(file16)) copy(file16copy, file16) for i := range file16copy { file16copy[i] ^= 0xFF fh, err := c.newDecrypter(ioutil.NopCloser(bytes.NewBuffer(file16copy))) if i < fileMagicSize { assert.Error(t, err, ErrorEncryptedBadMagic.Error()) assert.Nil(t, fh) } else { assert.NoError(t, err) _, err = ioutil.ReadAll(fh) assert.Error(t, err, ErrorEncryptedFileBadHeader.Error()) } file16copy[i] ^= 0xFF } } func TestDecrypterClose(t *testing.T) { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) cd := newCloseDetector(bytes.NewBuffer(file16)) fh, err := c.newDecrypter(cd) assert.NoError(t, err) assert.Equal(t, 0, cd.closed) // close before reading assert.Equal(t, nil, fh.err) err = fh.Close() assert.NoError(t, err) assert.Equal(t, ErrorFileClosed, fh.err) assert.Equal(t, 1, cd.closed) // double close err = fh.Close() assert.Error(t, err, ErrorFileClosed.Error()) assert.Equal(t, 1, cd.closed) // try again reading the file this time cd = newCloseDetector(bytes.NewBuffer(file1)) fh, err = c.newDecrypter(cd) assert.NoError(t, err) 
assert.Equal(t, 0, cd.closed) // close after reading out, err := ioutil.ReadAll(fh) assert.NoError(t, err) assert.Equal(t, []byte{1}, out) assert.Equal(t, io.EOF, fh.err) err = fh.Close() assert.NoError(t, err) assert.Equal(t, ErrorFileClosed, fh.err) assert.Equal(t, 1, cd.closed) } func TestPutGetBlock(t *testing.T) { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) block := c.getBlock() c.putBlock(block) c.putBlock(block) assert.Panics(t, func() { c.putBlock(block[:len(block)-1]) }) } func TestKey(t *testing.T) { c, err := newCipher(NameEncryptionStandard, "", "", true) assert.NoError(t, err) // Check zero keys OK assert.Equal(t, [32]byte{}, c.dataKey) assert.Equal(t, [32]byte{}, c.nameKey) assert.Equal(t, [16]byte{}, c.nameTweak) require.NoError(t, c.Key("potato", "")) assert.Equal(t, [32]byte{0x74, 0x55, 0xC7, 0x1A, 0xB1, 0x7C, 0x86, 0x5B, 0x84, 0x71, 0xF4, 0x7B, 0x79, 0xAC, 0xB0, 0x7E, 0xB3, 0x1D, 0x56, 0x78, 0xB8, 0x0C, 0x7E, 0x2E, 0xAF, 0x4F, 0xC8, 0x06, 0x6A, 0x9E, 0xE4, 0x68}, c.dataKey) assert.Equal(t, [32]byte{0x76, 0x5D, 0xA2, 0x7A, 0xB1, 0x5D, 0x77, 0xF9, 0x57, 0x96, 0x71, 0x1F, 0x7B, 0x93, 0xAD, 0x63, 0xBB, 0xB4, 0x84, 0x07, 0x2E, 0x71, 0x80, 0xA8, 0xD1, 0x7A, 0x9B, 0xBE, 0xC1, 0x42, 0x70, 0xD0}, c.nameKey) assert.Equal(t, [16]byte{0xC1, 0x8D, 0x59, 0x32, 0xF5, 0x5B, 0x28, 0x28, 0xC5, 0xE1, 0xE8, 0x72, 0x15, 0x52, 0x03, 0x10}, c.nameTweak) require.NoError(t, c.Key("Potato", "")) assert.Equal(t, [32]byte{0xAE, 0xEA, 0x6A, 0xD3, 0x47, 0xDF, 0x75, 0xB9, 0x63, 0xCE, 0x12, 0xF5, 0x76, 0x23, 0xE9, 0x46, 0xD4, 0x2E, 0xD8, 0xBF, 0x3E, 0x92, 0x8B, 0x39, 0x24, 0x37, 0x94, 0x13, 0x3E, 0x5E, 0xF7, 0x5E}, c.dataKey) assert.Equal(t, [32]byte{0x54, 0xF7, 0x02, 0x6E, 0x8A, 0xFC, 0x56, 0x0A, 0x86, 0x63, 0x6A, 0xAB, 0x2C, 0x9C, 0x51, 0x62, 0xE5, 0x1A, 0x12, 0x23, 0x51, 0x83, 0x6E, 0xAF, 0x50, 0x42, 0x0F, 0x98, 0x1C, 0x86, 0x0A, 0x19}, c.nameKey) assert.Equal(t, [16]byte{0xF8, 0xC1, 0xB6, 0x27, 0x2D, 0x52, 0x9B, 0x4A, 0x8F, 0xDA, 0xEB, 0x42, 0x4A, 0x28, 0xDD, 0xF3}, c.nameTweak) require.NoError(t, c.Key("potato", "sausage")) assert.Equal(t, [32]uint8{0x8e, 0x9b, 0x6b, 0x99, 0xf8, 0x69, 0x4, 0x67, 0xa0, 0x71, 0xf9, 0xcb, 0x92, 0xd0, 0xaa, 0x78, 0x7f, 0x8f, 0xf1, 0x78, 0xbe, 0xc9, 0x6f, 0x99, 0x9f, 0xd5, 0x20, 0x6e, 0x64, 0x4a, 0x1b, 0x50}, c.dataKey) assert.Equal(t, [32]uint8{0x3e, 0xa9, 0x5e, 0xf6, 0x81, 0x78, 0x2d, 0xc9, 0xd9, 0x95, 0x5d, 0x22, 0x5b, 0xfd, 0x44, 0x2c, 0x6f, 0x5d, 0x68, 0x97, 0xb0, 0x29, 0x1, 0x5c, 0x6f, 0x46, 0x2e, 0x2a, 0x9d, 0xae, 0x2c, 0xe3}, c.nameKey) assert.Equal(t, [16]uint8{0xf1, 0x7f, 0xd7, 0x14, 0x1d, 0x65, 0x27, 0x4f, 0x36, 0x3f, 0xc2, 0xa0, 0x4d, 0xd2, 0x14, 0x8a}, c.nameTweak) require.NoError(t, c.Key("potato", "Sausage")) assert.Equal(t, [32]uint8{0xda, 0x81, 0x8c, 0x67, 0xef, 0x11, 0xf, 0xc8, 0xd5, 0xc8, 0x62, 0x4b, 0x7f, 0xe2, 0x9e, 0x35, 0x35, 0xb0, 0x8d, 0x79, 0x84, 0x89, 0xac, 0xcb, 0xa0, 0xff, 0x2, 0x72, 0x3, 0x1a, 0x5e, 0x64}, c.dataKey) assert.Equal(t, [32]uint8{0x2, 0x81, 0x7e, 0x7b, 0xea, 0x99, 0x81, 0x5a, 0xd0, 0x2d, 0xb9, 0x64, 0x48, 0xb0, 0x28, 0x27, 0x7c, 0x20, 0xb4, 0xd4, 0xa4, 0x68, 0xad, 0x4e, 0x5c, 0x29, 0xf, 0x79, 0xef, 0xee, 0xdb, 0x3b}, c.nameKey) assert.Equal(t, [16]uint8{0x9a, 0xb5, 0xb, 0x3d, 0xcb, 0x60, 0x59, 0x55, 0xa5, 0x4d, 0xe6, 0xb6, 0x47, 0x3, 0x23, 0xe2}, c.nameTweak) require.NoError(t, c.Key("", "")) assert.Equal(t, [32]byte{}, c.dataKey) assert.Equal(t, [32]byte{}, c.nameKey) assert.Equal(t, [16]byte{}, c.nameTweak) } 
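// TestRoundTripSketch is an illustrative addition (a usage sketch, not an
// upstream test): derive real keys with Key, then check that a file name
// and a data stream both survive an encrypt/decrypt round trip. All
// identifiers are from this package; only the literal inputs are invented.
func TestRoundTripSketch(t *testing.T) {
	c, err := newCipher(NameEncryptionStandard, "", "", true)
	require.NoError(t, err)
	// Key derives dataKey, nameKey and nameTweak from the passwords (scrypt)
	require.NoError(t, c.Key("password", "salt"))
	// File names: standard mode is deterministic, so names always round trip
	encName := c.EncryptFileName("dir/file.txt")
	plainName, err := c.DecryptFileName(encName)
	require.NoError(t, err)
	assert.Equal(t, "dir/file.txt", plainName)
	// Data: EncryptData writes the magic + nonce header then sealed blocks;
	// DecryptData checks the header and opens each block in turn
	enc, err := c.EncryptData(bytes.NewBufferString("hello"))
	require.NoError(t, err)
	dec, err := c.DecryptData(ioutil.NopCloser(enc))
	require.NoError(t, err)
	plain, err := ioutil.ReadAll(dec)
	require.NoError(t, err)
	assert.Equal(t, []byte("hello"), plain)
}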
rclone-1.53.3/backend/crypt/crypt.go000066400000000000000000000734751375552240400173340ustar00rootroot00000000000000// Package crypt provides wrappers for Fs and Object which implement encryption package crypt import ( "context" "fmt" "io" "path" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fspath" "github.com/rclone/rclone/fs/hash" ) // Globals // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "crypt", Description: "Encrypt/Decrypt a remote", NewFs: NewFs, CommandHelp: commandHelp, Options: []fs.Option{{ Name: "remote", Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).", Required: true, }, { Name: "filename_encryption", Help: "How to encrypt the filenames.", Default: "standard", Examples: []fs.OptionExample{ { Value: "standard", Help: "Encrypt the filenames see the docs for the details.", }, { Value: "obfuscate", Help: "Very simple filename obfuscation.", }, { Value: "off", Help: "Don't encrypt the file names. Adds a \".bin\" extension only.", }, }, }, { Name: "directory_name_encryption", Help: `Option to either encrypt directory names or leave them intact. NB If filename_encryption is "off" then this option will do nothing.`, Default: true, Examples: []fs.OptionExample{ { Value: "true", Help: "Encrypt directory names.", }, { Value: "false", Help: "Don't encrypt directory names, leave them intact.", }, }, }, { Name: "password", Help: "Password or pass phrase for encryption.", IsPassword: true, Required: true, }, { Name: "password2", Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.", IsPassword: true, }, { Name: "server_side_across_configs", Default: false, Help: `Allow server side operations (eg copy) to work across different crypt configs. Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it. This can be used, for example, to change file name encryption type without re-uploading all the data. Just make two crypt backends pointing to two different directories with the single changed parameter and use rclone move to move the files between the crypt remotes.`, Advanced: true, }, { Name: "show_mapping", Help: `For all files listed show how the names encrypt. If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name. 
This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes.`, Default: false, Hide: fs.OptionHideConfigurator, Advanced: true, }}, }) } // newCipherForConfig constructs a Cipher for the given config name func newCipherForConfig(opt *Options) (*Cipher, error) { mode, err := NewNameEncryptionMode(opt.FilenameEncryption) if err != nil { return nil, err } if opt.Password == "" { return nil, errors.New("password not set in config file") } password, err := obscure.Reveal(opt.Password) if err != nil { return nil, errors.Wrap(err, "failed to decrypt password") } var salt string if opt.Password2 != "" { salt, err = obscure.Reveal(opt.Password2) if err != nil { return nil, errors.Wrap(err, "failed to decrypt password2") } } cipher, err := newCipher(mode, password, salt, opt.DirectoryNameEncryption) if err != nil { return nil, errors.Wrap(err, "failed to make cipher") } return cipher, nil } // NewCipher constructs a Cipher for the given config func NewCipher(m configmap.Mapper) (*Cipher, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } return newCipherForConfig(opt) } // NewFs constructs an Fs from the path, container:path func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } cipher, err := newCipherForConfig(opt) if err != nil { return nil, err } remote := opt.Remote if strings.HasPrefix(remote, name+":") { return nil, errors.New("can't point crypt remote at itself - check the value of the remote setting") } // Make sure to remove trailing . referring to the current dir if path.Base(rpath) == "."
{ rpath = strings.TrimSuffix(rpath, ".") } // Look for a file first var wrappedFs fs.Fs if rpath == "" { wrappedFs, err = cache.Get(remote) } else { remotePath := fspath.JoinRootPath(remote, cipher.EncryptFileName(rpath)) wrappedFs, err = cache.Get(remotePath) // if that didn't produce a file, look for a directory if err != fs.ErrorIsFile { remotePath = fspath.JoinRootPath(remote, cipher.EncryptDirName(rpath)) wrappedFs, err = cache.Get(remotePath) } } if err != fs.ErrorIsFile && err != nil { return nil, errors.Wrapf(err, "failed to make remote %q to wrap", remote) } f := &Fs{ Fs: wrappedFs, name: name, root: rpath, opt: *opt, cipher: cipher, } cache.PinUntilFinalized(f.Fs, f) // the features here are ones we could support, and they are // ANDed with the ones from wrappedFs f.features = (&fs.Features{ CaseInsensitive: cipher.NameEncryptionMode() == NameEncryptionOff, DuplicateFiles: true, ReadMimeType: false, // MimeTypes not supported with crypt WriteMimeType: false, BucketBased: true, CanHaveEmptyDirectories: true, SetTier: true, GetTier: true, ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs, }).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs) return f, err } // Options defines the configuration for this backend type Options struct { Remote string `config:"remote"` FilenameEncryption string `config:"filename_encryption"` DirectoryNameEncryption bool `config:"directory_name_encryption"` Password string `config:"password"` Password2 string `config:"password2"` ServerSideAcrossConfigs bool `config:"server_side_across_configs"` ShowMapping bool `config:"show_mapping"` } // Fs represents a wrapped fs.Fs type Fs struct { fs.Fs wrapper fs.Fs name string root string opt Options features *fs.Features // optional features cipher *Cipher } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // String returns a description of the FS func (f *Fs) String() string { return fmt.Sprintf("Encrypted drive '%s:%s'", f.name, f.root) } // Encrypt an object file name to entries. func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) { remote := obj.Remote() decryptedRemote, err := f.cipher.DecryptFileName(remote) if err != nil { fs.Debugf(remote, "Skipping undecryptable file name: %v", err) return } if f.opt.ShowMapping { fs.Logf(decryptedRemote, "Encrypts to %q", remote) } *entries = append(*entries, f.newObject(obj)) } // Encrypt a directory file name to entries. func (f *Fs) addDir(ctx context.Context, entries *fs.DirEntries, dir fs.Directory) { remote := dir.Remote() decryptedRemote, err := f.cipher.DecryptDirName(remote) if err != nil { fs.Debugf(remote, "Skipping undecryptable dir name: %v", err) return } if f.opt.ShowMapping { fs.Logf(decryptedRemote, "Encrypts to %q", remote) } *entries = append(*entries, f.newDir(ctx, dir)) } // Encrypt some directory entries. This alters entries returning it as newEntries. func (f *Fs) encryptEntries(ctx context.Context, entries fs.DirEntries) (newEntries fs.DirEntries, err error) { newEntries = entries[:0] // in place filter for _, entry := range entries { switch x := entry.(type) { case fs.Object: f.add(&newEntries, x) case fs.Directory: f.addDir(ctx, &newEntries, x) default: return nil, errors.Errorf("Unknown object type %T", entry) } } return newEntries, nil } // List the objects and directories in dir into entries. 
The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { entries, err = f.Fs.List(ctx, f.cipher.EncryptDirName(dir)) if err != nil { return nil, err } return f.encryptEntries(ctx, entries) } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively than doing a directory traversal. func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { return f.Fs.Features().ListR(ctx, f.cipher.EncryptDirName(dir), func(entries fs.DirEntries) error { newEntries, err := f.encryptEntries(ctx, entries) if err != nil { return err } return callback(newEntries) }) } // NewObject finds the Object at remote. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { o, err := f.Fs.NewObject(ctx, f.cipher.EncryptFileName(remote)) if err != nil { return nil, err } return f.newObject(o), nil } type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) // put implements Put or PutStream func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) { // Encrypt the data into wrappedIn wrappedIn, encrypter, err := f.cipher.encryptData(in) if err != nil { return nil, err } // Find a hash the destination supports to compute a hash of // the encrypted data ht := f.Fs.Hashes().GetOne() var hasher *hash.MultiHasher if ht != hash.None { hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht)) if err != nil { return nil, err } // unwrap the accounting var wrap accounting.WrapFn wrappedIn, wrap = accounting.UnWrap(wrappedIn) // add the hasher wrappedIn = io.TeeReader(wrappedIn, hasher) // wrap the accounting back on wrappedIn = wrap(wrappedIn) } // Transfer the data o, err := put(ctx, wrappedIn, f.newObjectInfo(src, encrypter.nonce), options...)
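// The unwrap/TeeReader/re-wrap sequence above hashes the encrypted
// bytes as they stream to the remote without disturbing the transfer
// accounting. The core pattern in isolation (a sketch with hypothetical
// names; md5 stands in for whatever hash ht selects):
//
//	hasher := md5.New()
//	tee := io.TeeReader(encryptedIn, hasher) // hash everything the upload reads
//	_, _ = io.Copy(remoteDst, tee)           // stand-in for put()
//	sum := fmt.Sprintf("%x", hasher.Sum(nil)) // hex digest to compare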
if err != nil { return nil, err } // Check the hashes of the encrypted data if we were comparing them if ht != hash.None && hasher != nil { srcHash := hasher.Sums()[ht] var dstHash string dstHash, err = o.Hash(ctx, ht) if err != nil { return nil, errors.Wrap(err, "failed to read destination hash") } if srcHash != "" && dstHash != "" && srcHash != dstHash { // remove object err = o.Remove(ctx) if err != nil { fs.Errorf(o, "Failed to remove corrupted object: %v", err) } return nil, errors.Errorf("corrupted on transfer: %v crypted hash differ %q vs %q", ht, srcHash, dstHash) } } return f.newObject(o), nil } // Put in to the remote path with the modTime given of the given size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.put(ctx, in, src, options, f.Fs.Put) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.put(ctx, in, src, options, f.Fs.Features().PutStream) } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.None) } // Mkdir makes the directory (container, bucket) // // Shouldn't return an error if it already exists func (f *Fs) Mkdir(ctx context.Context, dir string) error { return f.Fs.Mkdir(ctx, f.cipher.EncryptDirName(dir)) } // Rmdir removes the directory (container, bucket) if empty // // Return an error if it doesn't exist or isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.Fs.Rmdir(ctx, f.cipher.EncryptDirName(dir)) } // Purge all files in the directory specified // // Implement this if you have a way of deleting all the files // quicker than just running Remove() on the result of List() // // Return an error if it doesn't exist func (f *Fs) Purge(ctx context.Context, dir string) error { do := f.Fs.Features().Purge if do == nil { return fs.ErrorCantPurge } return do(ctx, f.cipher.EncryptDirName(dir)) } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { do := f.Fs.Features().Copy if do == nil { return nil, fs.ErrorCantCopy } o, ok := src.(*Object) if !ok { return nil, fs.ErrorCantCopy } oResult, err := do(ctx, o.Object, f.cipher.EncryptFileName(remote)) if err != nil { return nil, err } return f.newObject(oResult), nil } // Move src to this remote using server side move operations. 
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { do := f.Fs.Features().Move if do == nil { return nil, fs.ErrorCantMove } o, ok := src.(*Object) if !ok { return nil, fs.ErrorCantMove } oResult, err := do(ctx, o.Object, f.cipher.EncryptFileName(remote)) if err != nil { return nil, err } return f.newObject(oResult), nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { do := f.Fs.Features().DirMove if do == nil { return fs.ErrorCantDirMove } srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } return do(ctx, srcFs.Fs, f.cipher.EncryptDirName(srcRemote), f.cipher.EncryptDirName(dstRemote)) } // PutUnchecked uploads the object // // This will create a duplicate if we upload a new file without // checking to see if there is one already - use Put() for that. func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { do := f.Fs.Features().PutUnchecked if do == nil { return nil, errors.New("can't PutUnchecked") } wrappedIn, encrypter, err := f.cipher.encryptData(in) if err != nil { return nil, err } o, err := do(ctx, wrappedIn, f.newObjectInfo(src, encrypter.nonce)) if err != nil { return nil, err } return f.newObject(o), nil } // CleanUp the trash in the Fs // // Implement this if you have a way of emptying the trash or // otherwise cleaning up old versions of files. func (f *Fs) CleanUp(ctx context.Context) error { do := f.Fs.Features().CleanUp if do == nil { return errors.New("can't CleanUp") } return do(ctx) } // About gets quota information from the Fs func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { do := f.Fs.Features().About if do == nil { return nil, errors.New("About not supported") } return do(ctx) } // UnWrap returns the Fs that this Fs is wrapping func (f *Fs) UnWrap() fs.Fs { return f.Fs } // WrapFs returns the Fs that is wrapping this Fs func (f *Fs) WrapFs() fs.Fs { return f.wrapper } // SetWrapper sets the Fs that is wrapping this Fs func (f *Fs) SetWrapper(wrapper fs.Fs) { f.wrapper = wrapper } // EncryptFileName returns an encrypted file name func (f *Fs) EncryptFileName(fileName string) string { return f.cipher.EncryptFileName(fileName) } // DecryptFileName returns a decrypted file name func (f *Fs) DecryptFileName(encryptedFileName string) (string, error) { return f.cipher.DecryptFileName(encryptedFileName) } // computeHashWithNonce takes the nonce and encrypts the contents of // src with it, and calculates the hash given by HashType on the fly // // Note that we break lots of encapsulation in this function. 
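//
// Why this works: the ciphertext is a deterministic function of (key,
// nonce, plaintext) and the nonce is stored in the file header, so
// re-encrypting the source plaintext under the remote object's nonce
// reproduces the remote's bytes exactly. A sketch of the flow, using
// the same helpers as the function below:
//
//	in, _ := src.Open(ctx)                      // plaintext source
//	out, _ := f.cipher.newEncrypter(in, &nonce) // reuse the remote's nonce
//	m, _ := hash.NewMultiHasherTypes(hash.NewHashSet(hashType))
//	_, _ = io.Copy(m, out)                      // hash the re-encrypted stream
//	sum := m.Sums()[hashType]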
func (f *Fs) computeHashWithNonce(ctx context.Context, nonce nonce, src fs.Object, hashType hash.Type) (hashStr string, err error) { // Open the src for input in, err := src.Open(ctx) if err != nil { return "", errors.Wrap(err, "failed to open src") } defer fs.CheckClose(in, &err) // Now encrypt the src with the nonce out, err := f.cipher.newEncrypter(in, &nonce) if err != nil { return "", errors.Wrap(err, "failed to make encrypter") } // pipe into hash m, err := hash.NewMultiHasherTypes(hash.NewHashSet(hashType)) if err != nil { return "", errors.Wrap(err, "failed to make hasher") } _, err = io.Copy(m, out) if err != nil { return "", errors.Wrap(err, "failed to hash data") } return m.Sums()[hashType], nil } // ComputeHash takes the nonce from o, and encrypts the contents of // src with it, and calculates the hash given by HashType on the fly // // Note that we break lots of encapsulation in this function. func (f *Fs) ComputeHash(ctx context.Context, o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) { // Read the nonce - opening the file is sufficient to read the nonce in // use a limited read so we only read the header in, err := o.Object.Open(ctx, &fs.RangeOption{Start: 0, End: int64(fileHeaderSize) - 1}) if err != nil { return "", errors.Wrap(err, "failed to open object to read nonce") } d, err := f.cipher.newDecrypter(in) if err != nil { _ = in.Close() return "", errors.Wrap(err, "failed to open object to read nonce") } nonce := d.nonce // fs.Debugf(o, "Read nonce % 2x", nonce) // Check nonce isn't all zeros isZero := true for i := range nonce { if nonce[i] != 0 { isZero = false } } if isZero { fs.Errorf(o, "empty nonce read") } // Close d (and hence in) once we have read the nonce err = d.Close() if err != nil { return "", errors.Wrap(err, "failed to close nonce read") } return f.computeHashWithNonce(ctx, nonce, src, hashType) } // MergeDirs merges the contents of all the directories passed // in into the first one and rmdirs the other directories. func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error { do := f.Fs.Features().MergeDirs if do == nil { return errors.New("MergeDirs not supported") } out := make([]fs.Directory, len(dirs)) for i, dir := range dirs { out[i] = fs.NewDirCopy(ctx, dir).SetRemote(f.cipher.EncryptDirName(dir.Remote())) } return do(ctx, out) } // DirCacheFlush resets the directory cache - used in testing // as an optional interface func (f *Fs) DirCacheFlush() { do := f.Fs.Features().DirCacheFlush if do != nil { do() } } // PublicLink generates a public link to the remote path (usually readable by anyone) func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) { do := f.Fs.Features().PublicLink if do == nil { return "", errors.New("PublicLink not supported") } o, err := f.NewObject(ctx, remote) if err != nil { // assume it is a directory return do(ctx, f.cipher.EncryptDirName(remote), expire, unlink) } return do(ctx, o.(*Object).Object.Remote(), expire, unlink) } // ChangeNotify calls the passed function with a path // that has had changes. If the implementation // uses polling, it should adhere to the given interval. 
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) { do := f.Fs.Features().ChangeNotify if do == nil { return } wrappedNotifyFunc := func(path string, entryType fs.EntryType) { // fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType) var ( err error decrypted string ) switch entryType { case fs.EntryDirectory: decrypted, err = f.cipher.DecryptDirName(path) case fs.EntryObject: decrypted, err = f.cipher.DecryptFileName(path) default: fs.Errorf(path, "crypt ChangeNotify: ignoring unknown EntryType %d", entryType) return } if err != nil { fs.Logf(f, "ChangeNotify was unable to decrypt %q: %s", path, err) return } notifyFunc(decrypted, entryType) } do(ctx, wrappedNotifyFunc, pollIntervalChan) } var commandHelp = []fs.CommandHelp{ { Name: "encode", Short: "Encode the given filename(s)", Long: `This encodes the filenames given as arguments returning a list of strings of the encoded results. Usage Example: rclone backend encode crypt: file1 [file2...] rclone rc backend/command command=encode fs=crypt: file1 [file2...] `, }, { Name: "decode", Short: "Decode the given filename(s)", Long: `This decodes the filenames given as arguments returning a list of strings of the decoded results. It will return an error if any of the inputs are invalid. Usage Example: rclone backend decode crypt: encryptedfile1 [encryptedfile2...] rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] `, }, } // Command the backend to run a named command // // The command run is name // args may be used to read arguments from // opts may be used to read optional arguments from // // The result should be capable of being JSON encoded // If it is a string or a []string it will be shown to the user // otherwise it will be JSON encoded and shown to the user like that func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) { switch name { case "decode": out := make([]string, 0, len(arg)) for _, encryptedFileName := range arg { fileName, err := f.DecryptFileName(encryptedFileName) if err != nil { return out, errors.Wrap(err, fmt.Sprintf("Failed to decrypt : %s", encryptedFileName)) } out = append(out, fileName) } return out, nil case "encode": out := make([]string, 0, len(arg)) for _, fileName := range arg { encryptedFileName := f.EncryptFileName(fileName) out = append(out, encryptedFileName) } return out, nil default: return nil, fs.ErrorCommandNotFound } } // Object describes a wrapped Object for being read from the Fs // // This decrypts the remote name and decrypts the data type Object struct { fs.Object f *Fs } func (f *Fs) newObject(o fs.Object) *Object { return &Object{ Object: o, f: f, } } // Fs returns read only access to the Fs that this object is part of func (o *Object) Fs() fs.Info { return o.f } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.Remote() } // Remote returns the remote path func (o *Object) Remote() string { remote := o.Object.Remote() decryptedName, err := o.f.cipher.DecryptFileName(remote) if err != nil { fs.Debugf(remote, "Undecryptable file name: %v", err) return remote } return decryptedName } // Size returns the size of the file func (o *Object) Size() int64 { size, err := o.f.cipher.DecryptedSize(o.Object.Size()) if err != nil { fs.Debugf(o, "Bad size for decrypt: %v", err) } return size } // Hash returns the selected checksum of the file // If no checksum is available it
returns "" func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) { return "", hash.ErrUnsupported } // UnWrap returns the wrapped Object func (o *Object) UnWrap() fs.Object { return o.Object } // Open opens the file for read. Call Close() on the returned io.ReadCloser func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) { var openOptions []fs.OpenOption var offset, limit int64 = 0, -1 for _, option := range options { switch x := option.(type) { case *fs.SeekOption: offset = x.Offset case *fs.RangeOption: offset, limit = x.Decode(o.Size()) default: // pass on Options to underlying open if appropriate openOptions = append(openOptions, option) } } rc, err = o.f.cipher.DecryptDataSeek(ctx, func(ctx context.Context, underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) { if underlyingOffset == 0 && underlyingLimit < 0 { // Open with no seek return o.Object.Open(ctx, openOptions...) } // Open stream with a range of underlyingOffset, underlyingLimit end := int64(-1) if underlyingLimit >= 0 { end = underlyingOffset + underlyingLimit - 1 if end >= o.Object.Size() { end = -1 } } newOpenOptions := append(openOptions, &fs.RangeOption{Start: underlyingOffset, End: end}) return o.Object.Open(ctx, newOpenOptions...) }, offset, limit) if err != nil { return nil, err } return rc, nil } // Update in to the object with the modTime given of the given size func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { update := func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return o.Object, o.Object.Update(ctx, in, src, options...) } _, err := o.f.put(ctx, in, src, options, update) return err } // newDir returns a dir with the Name decrypted func (f *Fs) newDir(ctx context.Context, dir fs.Directory) fs.Directory { newDir := fs.NewDirCopy(ctx, dir) remote := dir.Remote() decryptedRemote, err := f.cipher.DecryptDirName(remote) if err != nil { fs.Debugf(remote, "Undecryptable dir name: %v", err) } else { newDir.SetRemote(decryptedRemote) } return newDir } // UserInfo returns info about the connected user func (f *Fs) UserInfo(ctx context.Context) (map[string]string, error) { do := f.Fs.Features().UserInfo if do == nil { return nil, fs.ErrorNotImplemented } return do(ctx) } // Disconnect the current user func (f *Fs) Disconnect(ctx context.Context) error { do := f.Fs.Features().Disconnect if do == nil { return fs.ErrorNotImplemented } return do(ctx) } // ObjectInfo describes a wrapped fs.ObjectInfo for being the source // // This encrypts the remote name and adjusts the size type ObjectInfo struct { fs.ObjectInfo f *Fs nonce nonce } func (f *Fs) newObjectInfo(src fs.ObjectInfo, nonce nonce) *ObjectInfo { return &ObjectInfo{ ObjectInfo: src, f: f, nonce: nonce, } } // Fs returns read only access to the Fs that this object is part of func (o *ObjectInfo) Fs() fs.Info { return o.f } // Remote returns the remote path func (o *ObjectInfo) Remote() string { return o.f.cipher.EncryptFileName(o.ObjectInfo.Remote()) } // Size returns the size of the file func (o *ObjectInfo) Size() int64 { size := o.ObjectInfo.Size() if size < 0 { return size } return o.f.cipher.EncryptedSize(size) } // Hash returns the selected checksum of the file // If no checksum is available it returns "" func (o *ObjectInfo) Hash(ctx context.Context, hash hash.Type) (string, error) { var srcObj fs.Object var ok bool // Get the underlying object if there is one if srcObj, 
ok = o.ObjectInfo.(fs.Object); ok { // Prefer direct interface assertion } else if do, ok := o.ObjectInfo.(fs.ObjectUnWrapper); ok { // Otherwise likely is an operations.OverrideRemote srcObj = do.UnWrap() } else { return "", nil } // if this is wrapping a local object then we work out the hash if srcObj.Fs().Features().IsLocal { // Read the data and encrypt it to calculate the hash fs.Debugf(o, "Computing %v hash of encrypted source", hash) return o.f.computeHashWithNonce(ctx, o.nonce, srcObj, hash) } return "", nil } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { do, ok := o.Object.(fs.IDer) if !ok { return "" } return do.ID() } // SetTier performs changing storage tier of the Object if // multiple storage classes supported func (o *Object) SetTier(tier string) error { do, ok := o.Object.(fs.SetTierer) if !ok { return errors.New("crypt: underlying remote does not support SetTier") } return do.SetTier(tier) } // GetTier returns storage tier or class of the Object func (o *Object) GetTier() string { do, ok := o.Object.(fs.GetTierer) if !ok { return "" } return do.GetTier() } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.Commander = (*Fs)(nil) _ fs.PutUncheckeder = (*Fs)(nil) _ fs.PutStreamer = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.UnWrapper = (*Fs)(nil) _ fs.ListRer = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Wrapper = (*Fs)(nil) _ fs.MergeDirser = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.ChangeNotifier = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.UserInfoer = (*Fs)(nil) _ fs.Disconnecter = (*Fs)(nil) _ fs.ObjectInfo = (*ObjectInfo)(nil) _ fs.Object = (*Object)(nil) _ fs.ObjectUnWrapper = (*Object)(nil) _ fs.IDer = (*Object)(nil) _ fs.SetTierer = (*Object)(nil) _ fs.GetTierer = (*Object)(nil) ) rclone-1.53.3/backend/crypt/crypt_internal_test.go000066400000000000000000000074651375552240400222630ustar00rootroot00000000000000package crypt import ( "bytes" "context" "crypto/md5" "fmt" "io" "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/object" "github.com/rclone/rclone/lib/random" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) type testWrapper struct { fs.ObjectInfo } // UnWrap returns the Object that this Object is wrapping or nil if it // isn't wrapping anything func (o testWrapper) UnWrap() fs.Object { if o, ok := o.ObjectInfo.(fs.Object); ok { return o } return nil } // Create a temporary local fs to upload things from func makeTempLocalFs(t *testing.T) (localFs fs.Fs, cleanup func()) { localFs, err := fs.TemporaryLocalFs() require.NoError(t, err) cleanup = func() { require.NoError(t, localFs.Rmdir(context.Background(), "")) } return localFs, cleanup } // Upload a file to a remote func uploadFile(t *testing.T, f fs.Fs, remote, contents string) (obj fs.Object, cleanup func()) { inBuf := bytes.NewBufferString(contents) t1 := time.Date(2012, time.December, 17, 18, 32, 31, 0, time.UTC) upSrc := object.NewStaticObjectInfo(remote, t1, int64(len(contents)), true, nil, nil) obj, err := f.Put(context.Background(), inBuf, upSrc) require.NoError(t, err) cleanup = func() { require.NoError(t, obj.Remove(context.Background())) } return obj, cleanup } // Test the ObjectInfo func testObjectInfo(t *testing.T, f *Fs, wrap bool) { var ( contents = random.String(100) path = "hash_test_object" ctx = context.Background() ) if wrap { 
path = "_wrap" } localFs, cleanupLocalFs := makeTempLocalFs(t) defer cleanupLocalFs() obj, cleanupObj := uploadFile(t, localFs, path, contents) defer cleanupObj() // encrypt the data inBuf := bytes.NewBufferString(contents) var outBuf bytes.Buffer enc, err := f.cipher.newEncrypter(inBuf, nil) require.NoError(t, err) nonce := enc.nonce // read the nonce at the start _, err = io.Copy(&outBuf, enc) require.NoError(t, err) var oi fs.ObjectInfo = obj if wrap { // wrap the object in an fs.ObjectUnwrapper if required oi = testWrapper{oi} } // wrap the object in a crypt for upload using the nonce we // saved from the encryptor src := f.newObjectInfo(oi, nonce) // Test ObjectInfo methods assert.Equal(t, int64(outBuf.Len()), src.Size()) assert.Equal(t, f, src.Fs()) assert.NotEqual(t, path, src.Remote()) // Test ObjectInfo.Hash wantHash := md5.Sum(outBuf.Bytes()) gotHash, err := src.Hash(ctx, hash.MD5) require.NoError(t, err) assert.Equal(t, fmt.Sprintf("%x", wantHash), gotHash) } func testComputeHash(t *testing.T, f *Fs) { var ( contents = random.String(100) path = "compute_hash_test" ctx = context.Background() hashType = f.Fs.Hashes().GetOne() ) if hashType == hash.None { t.Skipf("%v: does not support hashes", f.Fs) } localFs, cleanupLocalFs := makeTempLocalFs(t) defer cleanupLocalFs() // Upload a file to localFs as a test object localObj, cleanupLocalObj := uploadFile(t, localFs, path, contents) defer cleanupLocalObj() // Upload the same data to the remote Fs also remoteObj, cleanupRemoteObj := uploadFile(t, f, path, contents) defer cleanupRemoteObj() // Calculate the expected Hash of the remote object computedHash, err := f.ComputeHash(ctx, remoteObj.(*Object), localObj, hashType) require.NoError(t, err) // Test computed hash matches remote object hash remoteObjHash, err := remoteObj.(*Object).Object.Hash(ctx, hashType) require.NoError(t, err) assert.Equal(t, remoteObjHash, computedHash) } // InternalTest is called by fstests.Run to extra tests func (f *Fs) InternalTest(t *testing.T) { t.Run("ObjectInfo", func(t *testing.T) { testObjectInfo(t, f, false) }) t.Run("ObjectInfoWrap", func(t *testing.T) { testObjectInfo(t, f, true) }) t.Run("ComputeHash", func(t *testing.T) { testComputeHash(t, f) }) } rclone-1.53.3/backend/crypt/crypt_test.go000066400000000000000000000060651375552240400203620ustar00rootroot00000000000000// Test Crypt filesystem interface package crypt_test import ( "os" "path/filepath" "testing" "github.com/rclone/rclone/backend/crypt" _ "github.com/rclone/rclone/backend/drive" // for integration tests _ "github.com/rclone/rclone/backend/local" _ "github.com/rclone/rclone/backend/swift" // for integration tests "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { if *fstest.RemoteName == "" { t.Skip("Skipping as -remote not set") } fstests.Run(t, &fstests.Opt{ RemoteName: *fstest.RemoteName, NilObject: (*crypt.Object)(nil), UnimplementableFsMethods: []string{"OpenWriterAt"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } // TestStandard runs integration tests against the remote func TestStandard(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("Skipping as -remote set") } tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard") name := "TestCrypt" fstests.Run(t, &fstests.Opt{ RemoteName: name + ":", NilObject: (*crypt.Object)(nil), ExtraConfig: []fstests.ExtraConfigItem{ {Name: name, Key: 
"type", Value: "crypt"}, {Name: name, Key: "remote", Value: tempdir}, {Name: name, Key: "password", Value: obscure.MustObscure("potato")}, {Name: name, Key: "filename_encryption", Value: "standard"}, }, UnimplementableFsMethods: []string{"OpenWriterAt"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } // TestOff runs integration tests against the remote func TestOff(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("Skipping as -remote set") } tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-off") name := "TestCrypt2" fstests.Run(t, &fstests.Opt{ RemoteName: name + ":", NilObject: (*crypt.Object)(nil), ExtraConfig: []fstests.ExtraConfigItem{ {Name: name, Key: "type", Value: "crypt"}, {Name: name, Key: "remote", Value: tempdir}, {Name: name, Key: "password", Value: obscure.MustObscure("potato2")}, {Name: name, Key: "filename_encryption", Value: "off"}, }, UnimplementableFsMethods: []string{"OpenWriterAt"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } // TestObfuscate runs integration tests against the remote func TestObfuscate(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("Skipping as -remote set") } tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate") name := "TestCrypt3" fstests.Run(t, &fstests.Opt{ RemoteName: name + ":", NilObject: (*crypt.Object)(nil), ExtraConfig: []fstests.ExtraConfigItem{ {Name: name, Key: "type", Value: "crypt"}, {Name: name, Key: "remote", Value: tempdir}, {Name: name, Key: "password", Value: obscure.MustObscure("potato2")}, {Name: name, Key: "filename_encryption", Value: "obfuscate"}, }, SkipBadWindowsCharacters: true, UnimplementableFsMethods: []string{"OpenWriterAt"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } rclone-1.53.3/backend/crypt/pkcs7/000077500000000000000000000000001375552240400166535ustar00rootroot00000000000000rclone-1.53.3/backend/crypt/pkcs7/pkcs7.go000066400000000000000000000032641375552240400202360ustar00rootroot00000000000000// Package pkcs7 implements PKCS#7 padding // // This is a standard way of encoding variable length buffers into // buffers which are a multiple of an underlying crypto block size. package pkcs7 import "github.com/pkg/errors" // Errors Unpad can return var ( ErrorPaddingNotFound = errors.New("Bad PKCS#7 padding - not padded") ErrorPaddingNotAMultiple = errors.New("Bad PKCS#7 padding - not a multiple of blocksize") ErrorPaddingTooLong = errors.New("Bad PKCS#7 padding - too long") ErrorPaddingTooShort = errors.New("Bad PKCS#7 padding - too short") ErrorPaddingNotAllTheSame = errors.New("Bad PKCS#7 padding - not all the same") ) // Pad buf using PKCS#7 to a multiple of n. // // Appends the padding to buf - make a copy of it first if you don't // want it modified. func Pad(n int, buf []byte) []byte { if n <= 1 || n >= 256 { panic("bad multiple") } length := len(buf) padding := n - (length % n) for i := 0; i < padding; i++ { buf = append(buf, byte(padding)) } if (len(buf) % n) != 0 { panic("padding failed") } return buf } // Unpad buf using PKCS#7 from a multiple of n returning a slice of // buf or an error if malformed. 
func Unpad(n int, buf []byte) ([]byte, error) { if n <= 1 || n >= 256 { panic("bad multiple") } length := len(buf) if length == 0 { return nil, ErrorPaddingNotFound } if (length % n) != 0 { return nil, ErrorPaddingNotAMultiple } padding := int(buf[length-1]) if padding > n { return nil, ErrorPaddingTooLong } if padding == 0 { return nil, ErrorPaddingTooShort } for i := 0; i < padding; i++ { if buf[length-1-i] != byte(padding) { return nil, ErrorPaddingNotAllTheSame } } return buf[:length-padding], nil } rclone-1.53.3/backend/crypt/pkcs7/pkcs7_test.go000066400000000000000000000052421375552240400212730ustar00rootroot00000000000000package pkcs7 import ( "fmt" "testing" "github.com/stretchr/testify/assert" ) func TestPad(t *testing.T) { for _, test := range []struct { n int in string expected string }{ {8, "", "\x08\x08\x08\x08\x08\x08\x08\x08"}, {8, "1", "1\x07\x07\x07\x07\x07\x07\x07"}, {8, "12", "12\x06\x06\x06\x06\x06\x06"}, {8, "123", "123\x05\x05\x05\x05\x05"}, {8, "1234", "1234\x04\x04\x04\x04"}, {8, "12345", "12345\x03\x03\x03"}, {8, "123456", "123456\x02\x02"}, {8, "1234567", "1234567\x01"}, {8, "abcdefgh", "abcdefgh\x08\x08\x08\x08\x08\x08\x08\x08"}, {8, "abcdefgh1", "abcdefgh1\x07\x07\x07\x07\x07\x07\x07"}, {8, "abcdefgh12", "abcdefgh12\x06\x06\x06\x06\x06\x06"}, {8, "abcdefgh123", "abcdefgh123\x05\x05\x05\x05\x05"}, {8, "abcdefgh1234", "abcdefgh1234\x04\x04\x04\x04"}, {8, "abcdefgh12345", "abcdefgh12345\x03\x03\x03"}, {8, "abcdefgh123456", "abcdefgh123456\x02\x02"}, {8, "abcdefgh1234567", "abcdefgh1234567\x01"}, {8, "abcdefgh12345678", "abcdefgh12345678\x08\x08\x08\x08\x08\x08\x08\x08"}, {16, "", "\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10"}, {16, "a", "a\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f"}, } { actual := Pad(test.n, []byte(test.in)) assert.Equal(t, test.expected, string(actual), fmt.Sprintf("Pad %d %q", test.n, test.in)) recovered, err := Unpad(test.n, actual) assert.NoError(t, err) assert.Equal(t, []byte(test.in), recovered, fmt.Sprintf("Unpad %d %q", test.n, test.in)) } assert.Panics(t, func() { Pad(1, []byte("")) }, "bad multiple") assert.Panics(t, func() { Pad(256, []byte("")) }, "bad multiple") } func TestUnpad(t *testing.T) { // We've tested the OK decoding in TestPad, now test the error cases for _, test := range []struct { n int in string err error }{ {8, "", ErrorPaddingNotFound}, {8, "1", ErrorPaddingNotAMultiple}, {8, "12", ErrorPaddingNotAMultiple}, {8, "123", ErrorPaddingNotAMultiple}, {8, "1234", ErrorPaddingNotAMultiple}, {8, "12345", ErrorPaddingNotAMultiple}, {8, "123456", ErrorPaddingNotAMultiple}, {8, "1234567", ErrorPaddingNotAMultiple}, {8, "1234567\xFF", ErrorPaddingTooLong}, {8, "1234567\x09", ErrorPaddingTooLong}, {8, "1234567\x00", ErrorPaddingTooShort}, {8, "123456\x01\x02", ErrorPaddingNotAllTheSame}, {8, "\x07\x08\x08\x08\x08\x08\x08\x08", ErrorPaddingNotAllTheSame}, } { result, actualErr := Unpad(test.n, []byte(test.in)) assert.Equal(t, test.err, actualErr, fmt.Sprintf("Unpad %d %q", test.n, test.in)) assert.Equal(t, result, []byte(nil)) } assert.Panics(t, func() { _, _ = Unpad(1, []byte("")) }, "bad multiple") assert.Panics(t, func() { _, _ = Unpad(256, []byte("")) }, "bad multiple") } rclone-1.53.3/backend/drive/000077500000000000000000000000001375552240400155745ustar00rootroot00000000000000rclone-1.53.3/backend/drive/drive.go000077500000000000000000003327771375552240400172620ustar00rootroot00000000000000// Package drive interfaces with the Google Drive object storage system package drive // FIXME need 
to deal with some corner cases // * multiple files with the same name // * files can be in multiple directories // * can have directory loops // * files with / in name import ( "bytes" "context" "crypto/tls" "fmt" "io" "io/ioutil" "log" "mime" "net/http" "path" "sort" "strconv" "strings" "sync" "sync/atomic" "text/template" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/env" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" "golang.org/x/oauth2" "golang.org/x/oauth2/google" drive_v2 "google.golang.org/api/drive/v2" drive "google.golang.org/api/drive/v3" "google.golang.org/api/googleapi" ) // Constants const ( rcloneClientID = "202264815644.apps.googleusercontent.com" rcloneEncryptedClientSecret = "eX8GpZTVx3vxMWVkuuBdDWmAUE6rGhTwVrvG9GhllYccSdj2-mvHVg" driveFolderType = "application/vnd.google-apps.folder" shortcutMimeType = "application/vnd.google-apps.shortcut" shortcutMimeTypeDangling = "application/vnd.google-apps.shortcut.dangling" // synthetic mime type for internal use timeFormatIn = time.RFC3339 timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00" defaultMinSleep = fs.Duration(100 * time.Millisecond) defaultBurst = 100 defaultExportExtensions = "docx,xlsx,pptx,svg" scopePrefix = "https://www.googleapis.com/auth/" defaultScope = "drive" // chunkSize is the size of the chunks created during a resumable upload and should be a power of two. // 1<<18 is the minimum size supported by the Google uploader, and there is no maximum. 
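// For example 256k (the minimum), 512k, 1M, 2M, 4M and 8M (the default
// below) are all acceptable chunk sizes under this rule, while 768k is
// not: it is a multiple of 256k but not a power of two.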
minChunkSize = 256 * fs.KibiByte defaultChunkSize = 8 * fs.MebiByte partialFields = "id,name,size,md5Checksum,trashed,explicitlyTrashed,modifiedTime,createdTime,mimeType,parents,webViewLink,shortcutDetails,exportLinks" listRGrouping = 50 // number of IDs to search at once when using ListR listRInputBuffer = 1000 // size of input buffer when using ListR ) // Globals var ( // Description of how to auth for this app driveConfig = &oauth2.Config{ Scopes: []string{scopePrefix + "drive"}, Endpoint: google.Endpoint, ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), RedirectURL: oauthutil.TitleBarRedirectURL, } _mimeTypeToExtensionDuplicates = map[string]string{ "application/x-vnd.oasis.opendocument.presentation": ".odp", "application/x-vnd.oasis.opendocument.spreadsheet": ".ods", "application/x-vnd.oasis.opendocument.text": ".odt", "image/jpg": ".jpg", "image/x-bmp": ".bmp", "image/x-png": ".png", "text/rtf": ".rtf", } _mimeTypeToExtension = map[string]string{ "application/epub+zip": ".epub", "application/json": ".json", "application/msword": ".doc", "application/pdf": ".pdf", "application/rtf": ".rtf", "application/vnd.ms-excel": ".xls", "application/vnd.oasis.opendocument.presentation": ".odp", "application/vnd.oasis.opendocument.spreadsheet": ".ods", "application/vnd.oasis.opendocument.text": ".odt", "application/vnd.openxmlformats-officedocument.presentationml.presentation": ".pptx", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": ".xlsx", "application/vnd.openxmlformats-officedocument.wordprocessingml.document": ".docx", "application/x-msmetafile": ".wmf", "application/zip": ".zip", "image/bmp": ".bmp", "image/jpeg": ".jpg", "image/pjpeg": ".pjpeg", "image/png": ".png", "image/svg+xml": ".svg", "text/csv": ".csv", "text/html": ".html", "text/plain": ".txt", "text/tab-separated-values": ".tsv", } _mimeTypeToExtensionLinks = map[string]string{ "application/x-link-desktop": ".desktop", "application/x-link-html": ".link.html", "application/x-link-url": ".url", "application/x-link-webloc": ".webloc", } _mimeTypeCustomTransform = map[string]string{ "application/vnd.google-apps.script+json": "application/json", } fetchFormatsOnce sync.Once // make sure we fetch the export/import formats only once _exportFormats map[string][]string // allowed export MIME type conversions _importFormats map[string][]string // allowed import MIME type conversions templatesOnce sync.Once // parse link templates only once _linkTemplates map[string]*template.Template // available link types ) // Parse the scopes option returning a slice of scopes func driveScopes(scopesString string) (scopes []string) { if scopesString == "" { scopesString = defaultScope } for _, scope := range strings.Split(scopesString, ",") { scope = strings.TrimSpace(scope) scopes = append(scopes, scopePrefix+scope) } return scopes } // Returns true if one of the scopes was "drive.appfolder" func driveScopesContainsAppFolder(scopes []string) bool { for _, scope := range scopes { if scope == scopePrefix+"drive.appfolder" { return true } } return false } func driveOAuthOptions() []fs.Option { opts := []fs.Option{} for _, opt := range oauthutil.SharedOptions { if opt.Name == config.ConfigClientID { opt.Help = "Google Application Client Id\nSetting your own is recommended.\nSee https://rclone.org/drive/#making-your-own-client-id for how to create your own.\nIf you leave this blank, it will use an internal key which is low performance." 
} opts = append(opts, opt) } return opts } // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "drive", Description: "Google Drive", NewFs: NewFs, CommandHelp: commandHelp, Config: func(name string, m configmap.Mapper) { ctx := context.TODO() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { fs.Errorf(nil, "Couldn't parse config into struct: %v", err) return } // Fill in the scopes driveConfig.Scopes = driveScopes(opt.Scope) // Set the root_folder_id if using drive.appfolder if driveScopesContainsAppFolder(driveConfig.Scopes) { m.Set("root_folder_id", "appDataFolder") } if opt.ServiceAccountFile == "" { err = oauthutil.Config("drive", name, m, driveConfig, nil) if err != nil { log.Fatalf("Failed to configure token: %v", err) } } err = configTeamDrive(ctx, opt, m, name) if err != nil { log.Fatalf("Failed to configure team drive: %v", err) } }, Options: append(driveOAuthOptions(), []fs.Option{{ Name: "scope", Help: "Scope that rclone should use when requesting access from drive.", Examples: []fs.OptionExample{{ Value: "drive", Help: "Full access all files, excluding Application Data Folder.", }, { Value: "drive.readonly", Help: "Read-only access to file metadata and file contents.", }, { Value: "drive.file", Help: "Access to files created by rclone only.\nThese are visible in the drive website.\nFile authorization is revoked when the user deauthorizes the app.", }, { Value: "drive.appfolder", Help: "Allows read and write access to the Application Data folder.\nThis is not visible in the drive website.", }, { Value: "drive.metadata.readonly", Help: "Allows read-only access to file metadata but\ndoes not allow any access to read or download file content.", }}, }, { Name: "root_folder_id", Help: `ID of the root folder Leave blank normally. Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point. `, }, { Name: "service_account_file", Help: "Service Account Credentials JSON file path \nLeave blank normally.\nNeeded only if you want to use SA instead of interactive login." + env.ShellExpandHelp, }, { Name: "service_account_credentials", Help: "Service Account Credentials JSON blob\nLeave blank normally.\nNeeded only if you want to use SA instead of interactive login.", Hide: fs.OptionHideConfigurator, Advanced: true, }, { Name: "team_drive", Help: "ID of the Team Drive", Hide: fs.OptionHideConfigurator, Advanced: true, }, { Name: "auth_owner_only", Default: false, Help: "Only consider files owned by the authenticated user.", Advanced: true, }, { Name: "use_trash", Default: true, Help: "Send files to the trash instead of deleting permanently.\nDefaults to true, namely sending files to the trash.\nUse `--drive-use-trash=false` to delete files permanently instead.", Advanced: true, }, { Name: "skip_gdocs", Default: false, Help: "Skip google documents in all listings.\nIf given, gdocs practically become invisible to rclone.", Advanced: true, }, { Name: "skip_checksum_gphotos", Default: false, Help: `Skip MD5 checksum on Google photos and videos only. Use this if you get checksum errors when transferring Google photos or videos. Setting this flag will cause Google photos and videos to return a blank MD5 checksum. Google photos are identified by being in the "photos" space. Corrupted checksums are caused by Google modifying the image/video but not updating the checksum.`, Advanced: true, }, { Name: "shared_with_me", Default: false, Help: `Only show files that are shared with me.
Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you). This works both with the "list" (lsd, lsl, etc) and the "copy" commands (copy, sync, etc), and with all other commands too.`, Advanced: true, }, { Name: "trashed_only", Default: false, Help: "Only show files that are in the trash.\nThis will show trashed files in their original directory structure.", Advanced: true, }, { Name: "starred_only", Default: false, Help: "Only show files that are starred.", Advanced: true, }, { Name: "formats", Default: "", Help: "Deprecated: see export_formats", Advanced: true, Hide: fs.OptionHideConfigurator, }, { Name: "export_formats", Default: defaultExportExtensions, Help: "Comma separated list of preferred formats for downloading Google docs.", Advanced: true, }, { Name: "import_formats", Default: "", Help: "Comma separated list of preferred formats for uploading Google docs.", Advanced: true, }, { Name: "allow_import_name_change", Default: false, Help: "Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.", Advanced: true, }, { Name: "use_created_date", Default: false, Help: `Use file created date instead of modified date. Useful when downloading data and you want the creation date used in place of the last modified date. **WARNING**: This flag may have some unexpected consequences. When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag. This feature was implemented to retain photos capture date as recorded by google photos. You will first need to check the "Create a Google Photos folder" option in your google drive settings. You can then copy or move the photos locally and use the date the image was taken (created) set as the modification date.`, Advanced: true, Hide: fs.OptionHideConfigurator, }, { Name: "use_shared_date", Default: false, Help: `Use date file was shared instead of modified date. Note that, as with "--drive-use-created-date", this flag may have unexpected consequences when uploading/downloading files. If both this flag and "--drive-use-created-date" are set, the created date is used.`, Advanced: true, Hide: fs.OptionHideConfigurator, }, { Name: "list_chunk", Default: 1000, Help: "Size of listing chunk 100-1000. 0 to disable.", Advanced: true, }, { Name: "impersonate", Default: "", Help: `Impersonate this user when using a service account.`, Advanced: true, }, { Name: "alternate_export", Default: false, Help: "Deprecated: no longer needed", Hide: fs.OptionHideBoth, }, { Name: "upload_cutoff", Default: defaultChunkSize, Help: "Cutoff for switching to chunked upload", Advanced: true, }, { Name: "chunk_size", Default: defaultChunkSize, Help: `Upload chunk size. Must be a power of 2 >= 256k. Making this larger will improve performance, but note that each chunk is buffered in memory, one per transfer. Reducing this will reduce memory usage but decrease performance.`, Advanced: true, }, { Name: "acknowledge_abuse", Default: false, Help: `Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
If downloading a file returns the error "This file has been identified as malware or spam and cannot be downloaded" with the error code "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.`, Advanced: true, }, { Name: "keep_revision_forever", Default: false, Help: "Keep new head revision of each file forever.", Advanced: true, }, { Name: "size_as_quota", Default: false, Help: `Show sizes as storage quota usage, not actual size. Show the size of a file as the storage quota used. This is the current version plus any older versions that have been set to keep forever. **WARNING**: This flag may have some unexpected consequences. It is not recommended to set this flag in your config - the recommended usage is using the flag form --drive-size-as-quota when doing rclone ls/lsl/lsf/lsjson/etc only. If you do use this flag for syncing (not recommended) then you will need to use --ignore-size also.`, Advanced: true, Hide: fs.OptionHideConfigurator, }, { Name: "v2_download_min_size", Default: fs.SizeSuffix(-1), Help: "If Objects are greater, use drive v2 API to download.", Advanced: true, }, { Name: "pacer_min_sleep", Default: defaultMinSleep, Help: "Minimum time to sleep between API calls.", Advanced: true, }, { Name: "pacer_burst", Default: defaultBurst, Help: "Number of API calls to allow without sleeping.", Advanced: true, }, { Name: "server_side_across_configs", Default: false, Help: `Allow server side operations (eg copy) to work across different drive configs. This can be useful if you wish to do a server side copy between two different Google drives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations.`, Advanced: true, }, { Name: "disable_http2", Default: true, Help: `Disable drive using http2 There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed. See: https://github.com/rclone/rclone/issues/3631 `, Advanced: true, }, { Name: "stop_on_upload_limit", Default: false, Help: `Make upload limit errors be fatal At the time of writing it is only possible to upload 750GB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync. Note that this detection is relying on error message strings which Google don't document so it may break in the future. See: https://github.com/rclone/rclone/issues/3857 `, Advanced: true, }, { Name: "skip_shortcuts", Help: `If set skip shortcut files Normally rclone dereferences shortcut files making them appear as if they are the original file (see [the shortcuts section](#shortcuts)). If this flag is set then rclone will ignore shortcut files completely. `, Advanced: true, Default: false, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Encode invalid UTF-8 bytes as json doesn't handle them properly. // Don't encode / as it's a valid name character in drive.
Default: encoder.EncodeInvalidUtf8, }}...), }) // register duplicate MIME types first // this allows them to be used with mime.ExtensionsByType() but // mime.TypeByExtension() will return the later registered MIME type for _, m := range []map[string]string{ _mimeTypeToExtensionDuplicates, _mimeTypeToExtension, _mimeTypeToExtensionLinks, } { for mimeType, extension := range m { if err := mime.AddExtensionType(extension, mimeType); err != nil { log.Fatalf("Failed to register MIME type %q: %v", mimeType, err) } } } } // Options defines the configuration for this backend type Options struct { Scope string `config:"scope"` RootFolderID string `config:"root_folder_id"` ServiceAccountFile string `config:"service_account_file"` ServiceAccountCredentials string `config:"service_account_credentials"` TeamDriveID string `config:"team_drive"` AuthOwnerOnly bool `config:"auth_owner_only"` UseTrash bool `config:"use_trash"` SkipGdocs bool `config:"skip_gdocs"` SkipChecksumGphotos bool `config:"skip_checksum_gphotos"` SharedWithMe bool `config:"shared_with_me"` TrashedOnly bool `config:"trashed_only"` StarredOnly bool `config:"starred_only"` Extensions string `config:"formats"` ExportExtensions string `config:"export_formats"` ImportExtensions string `config:"import_formats"` AllowImportNameChange bool `config:"allow_import_name_change"` UseCreatedDate bool `config:"use_created_date"` UseSharedDate bool `config:"use_shared_date"` ListChunk int64 `config:"list_chunk"` Impersonate string `config:"impersonate"` UploadCutoff fs.SizeSuffix `config:"upload_cutoff"` ChunkSize fs.SizeSuffix `config:"chunk_size"` AcknowledgeAbuse bool `config:"acknowledge_abuse"` KeepRevisionForever bool `config:"keep_revision_forever"` SizeAsQuota bool `config:"size_as_quota"` V2DownloadMinSize fs.SizeSuffix `config:"v2_download_min_size"` PacerMinSleep fs.Duration `config:"pacer_min_sleep"` PacerBurst int `config:"pacer_burst"` ServerSideAcrossConfigs bool `config:"server_side_across_configs"` DisableHTTP2 bool `config:"disable_http2"` StopOnUploadLimit bool `config:"stop_on_upload_limit"` SkipShortcuts bool `config:"skip_shortcuts"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote drive server type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed options features *fs.Features // optional features svc *drive.Service // the connection to the drive server v2Svc *drive_v2.Service // used to create download links for the v2 api client *http.Client // authorized client rootFolderID string // the id of the root folder dirCache *dircache.DirCache // Map of directory path to directory id pacer *fs.Pacer // To pace the API calls exportExtensions []string // preferred extensions to download docs importMimeTypes []string // MIME types to convert to docs isTeamDrive bool // true if this is a team drive fileFields googleapi.Field // fields to fetch file info with m configmap.Mapper grouping int32 // number of IDs to search at once in ListR - read with atomic listRmu *sync.Mutex // protects listRempties listRempties map[string]struct{} // IDs of supposedly empty directories which triggered grouping disable } type baseObject struct { fs *Fs // what this object is part of remote string // The remote path id string // Drive Id of this object modifiedDate string // RFC3339 time it was last modified mimeType string // The object MIME type bytes int64 // size of the object parents int // number of parents } type documentObject struct { baseObject url string // 
Download URL of this object documentMimeType string // the original document MIME type extLen int // The length of the added export extension } type linkObject struct { baseObject content []byte // The file content generated by a link template extLen int // The length of the added export extension } // Object describes a drive object type Object struct { baseObject url string // Download URL of this object md5sum string // md5sum of the object v2Download bool // generate v2 download link on demand } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("Google drive root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // shouldRetry determines whether a given err merits being retried func (f *Fs) shouldRetry(err error) (bool, error) { if err == nil { return false, nil } if fserrors.ShouldRetry(err) { return true, err } switch gerr := err.(type) { case *googleapi.Error: if gerr.Code >= 500 && gerr.Code < 600 { // All 5xx errors should be retried return true, err } if len(gerr.Errors) > 0 { reason := gerr.Errors[0].Reason if reason == "rateLimitExceeded" || reason == "userRateLimitExceeded" { if f.opt.StopOnUploadLimit && gerr.Errors[0].Message == "User rate limit exceeded." { fs.Errorf(f, "Received upload limit error: %v", err) return false, fserrors.FatalError(err) } return true, err } else if f.opt.StopOnUploadLimit && reason == "teamDriveFileLimitExceeded" { fs.Errorf(f, "Received team drive file limit error: %v", err) return false, fserrors.FatalError(err) } } } return false, err } // parseDrivePath parses a drive path, trimming leading and trailing slashes func parseDrivePath(path string) (root string, err error) { root = strings.Trim(path, "/") return } // User function to process a File item from list // // Should return true to finish processing type listFn func(*drive.File) bool func containsString(slice []string, s string) bool { for _, e := range slice { if e == s { return true } } return false } // getFile returns drive.File for the ID passed and fields passed in func (f *Fs) getFile(ID string, fields googleapi.Field) (info *drive.File, err error) { err = f.pacer.CallNoRetry(func() (bool, error) { info, err = f.svc.Files.Get(ID). Fields(fields). SupportsAllDrives(true).
Do() return f.shouldRetry(err) }) return info, err } // getRootID returns the canonical ID for the "root" ID func (f *Fs) getRootID() (string, error) { info, err := f.getFile("root", "id") if err != nil { return "", errors.Wrap(err, "couldn't find root directory ID") } return info.Id, nil } // Lists the required directory, calling the user function on each item found // // If the user fn ever returns true then it early exits with found = true // // Search params: https://developers.google.com/drive/search-parameters func (f *Fs) list(ctx context.Context, dirIDs []string, title string, directoriesOnly, filesOnly, includeAll bool, fn listFn) (found bool, err error) { var query []string if !includeAll { q := "trashed=" + strconv.FormatBool(f.opt.TrashedOnly) if f.opt.TrashedOnly { q = fmt.Sprintf("(mimeType='%s' or %s)", driveFolderType, q) } query = append(query, q) } // Search with sharedWithMe will always return things listed in "Shared With Me" (without any parents) // We must not filter with parent when we try to list "ROOT" with drive-shared-with-me // If we need to list files inside those shared folders, we must search without sharedWithMe parentsQuery := bytes.NewBufferString("(") for _, dirID := range dirIDs { if dirID == "" { continue } if parentsQuery.Len() > 1 { _, _ = parentsQuery.WriteString(" or ") } if (f.opt.SharedWithMe || f.opt.StarredOnly) && dirID == f.rootFolderID { if f.opt.SharedWithMe { _, _ = parentsQuery.WriteString("sharedWithMe=true") } if f.opt.StarredOnly { if f.opt.SharedWithMe { _, _ = parentsQuery.WriteString(" and ") } _, _ = parentsQuery.WriteString("starred=true") } } else { _, _ = fmt.Fprintf(parentsQuery, "'%s' in parents", dirID) } } if parentsQuery.Len() > 1 { _ = parentsQuery.WriteByte(')') query = append(query, parentsQuery.String()) } var stems []string if title != "" { searchTitle := f.opt.Enc.FromStandardName(title) // Escaping the backslash isn't documented but seems to work searchTitle = strings.Replace(searchTitle, `\`, `\\`, -1) searchTitle = strings.Replace(searchTitle, `'`, `\'`, -1) var titleQuery bytes.Buffer _, _ = fmt.Fprintf(&titleQuery, "(name='%s'", searchTitle) if !directoriesOnly && !f.opt.SkipGdocs { // If the search title has an extension that is in the export extensions, add a search // for the filename without the extension. // Assume that export extensions don't contain escape sequences.
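// (Illustrative aside, not from the Drive docs: with exportExtensions
// containing ".docx", a search for "report.docx" also adds
// name='report' to the query built below, so a Google Doc that rclone
// presents as "report.docx" is still found.)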
for _, ext := range f.exportExtensions { if strings.HasSuffix(searchTitle, ext) { stems = append(stems, title[:len(title)-len(ext)]) _, _ = fmt.Fprintf(&titleQuery, " or name='%s'", searchTitle[:len(searchTitle)-len(ext)]) } } } _ = titleQuery.WriteByte(')') query = append(query, titleQuery.String()) } if directoriesOnly { query = append(query, fmt.Sprintf("(mimeType='%s' or mimeType='%s')", driveFolderType, shortcutMimeType)) } if filesOnly { query = append(query, fmt.Sprintf("mimeType!='%s'", driveFolderType)) } list := f.svc.Files.List() if len(query) > 0 { list.Q(strings.Join(query, " and ")) // fmt.Printf("list Query = %q\n", query) } if f.opt.ListChunk > 0 { list.PageSize(f.opt.ListChunk) } list.SupportsAllDrives(true) list.IncludeItemsFromAllDrives(true) if f.isTeamDrive { list.DriveId(f.opt.TeamDriveID) list.Corpora("drive") } // If using appDataFolder then need to add Spaces if f.rootFolderID == "appDataFolder" { list.Spaces("appDataFolder") } fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.fileFields) OUTER: for { var files *drive.FileList err = f.pacer.Call(func() (bool, error) { files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do() return f.shouldRetry(err) }) if err != nil { return false, errors.Wrap(err, "couldn't list directory") } if files.IncompleteSearch { fs.Errorf(f, "search result INCOMPLETE") } for _, item := range files.Files { item.Name = f.opt.Enc.ToStandardName(item.Name) if isShortcut(item) { // ignore shortcuts if directed if f.opt.SkipShortcuts { continue } // skip file shortcuts if directory only if directoriesOnly && item.ShortcutDetails.TargetMimeType != driveFolderType { continue } // skip directory shortcuts if file only if filesOnly && item.ShortcutDetails.TargetMimeType == driveFolderType { continue } item, err = f.resolveShortcut(item) if err != nil { return false, errors.Wrap(err, "list") } } // Check the case of items is correct since // the `=` operator is case insensitive. 
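// (For example, a query for name='readme.txt' can also match a remote
// file called "README.TXT"; the comparison below drops such items.)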
if title != "" && title != item.Name { found := false for _, stem := range stems { if stem == item.Name { found = true break } } if !found { continue } _, exportName, _, _ := f.findExportFormat(item) if exportName == "" || exportName != title { continue } } if fn(item) { found = true break OUTER } } if files.NextPageToken == "" { break } list.PageToken(files.NextPageToken) } return } // Returns true of x is a power of 2 or zero func isPowerOfTwo(x int64) bool { switch { case x == 0: return true case x < 0: return false default: return (x & (x - 1)) == 0 } } // add a charset parameter to all text/* MIME types func fixMimeType(mimeTypeIn string) string { if mimeTypeIn == "" { return "" } mediaType, param, err := mime.ParseMediaType(mimeTypeIn) if err != nil { return mimeTypeIn } mimeTypeOut := mimeTypeIn if strings.HasPrefix(mediaType, "text/") && param["charset"] == "" { param["charset"] = "utf-8" mimeTypeOut = mime.FormatMediaType(mediaType, param) } if mimeTypeOut == "" { panic(errors.Errorf("unable to fix MIME type %q", mimeTypeIn)) } return mimeTypeOut } func fixMimeTypeMap(in map[string][]string) (out map[string][]string) { out = make(map[string][]string, len(in)) for k, v := range in { for i, mt := range v { v[i] = fixMimeType(mt) } out[fixMimeType(k)] = v } return out } func isInternalMimeType(mimeType string) bool { return strings.HasPrefix(mimeType, "application/vnd.google-apps.") } func isLinkMimeType(mimeType string) bool { return strings.HasPrefix(mimeType, "application/x-link-") } // parseExtensions parses a list of comma separated extensions // into a list of unique extensions with leading "." and a list of associated MIME types func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, err error) { for _, extensionText := range extensionsIn { for _, extension := range strings.Split(extensionText, ",") { extension = strings.ToLower(strings.TrimSpace(extension)) if extension == "" { continue } if len(extension) > 0 && extension[0] != '.' { extension = "." 
+ extension } mt := mime.TypeByExtension(extension) if mt == "" { return extensions, mimeTypes, errors.Errorf("couldn't find MIME type for extension %q", extension) } if !containsString(extensions, extension) { extensions = append(extensions, extension) mimeTypes = append(mimeTypes, mt) } } } return } // Figure out if the user wants to use a team drive func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name string) error { // Stop if we are running non-interactive config if fs.Config.AutoConfirm { return nil } if opt.TeamDriveID == "" { fmt.Printf("Configure this as a team drive?\n") } else { fmt.Printf("Change current team drive ID %q?\n", opt.TeamDriveID) } if !config.Confirm(false) { return nil } f, err := newFs(name, "", m) if err != nil { return errors.Wrap(err, "failed to make Fs to list teamdrives") } fmt.Printf("Fetching team drive list...\n") teamDrives, err := f.listTeamDrives(ctx) if err != nil { return err } if len(teamDrives) == 0 { fmt.Printf("No team drives found in your account\n") return nil } var driveIDs, driveNames []string for _, teamDrive := range teamDrives { driveIDs = append(driveIDs, teamDrive.Id) driveNames = append(driveNames, teamDrive.Name) } driveID := config.Choose("Enter a Team Drive ID", driveIDs, driveNames, true) m.Set("team_drive", driveID) m.Set("root_folder_id", "") opt.TeamDriveID = driveID opt.RootFolderID = "" return nil } // getClient makes an http client according to the options func getClient(opt *Options) *http.Client { t := fshttp.NewTransportCustom(fs.Config, func(t *http.Transport) { if opt.DisableHTTP2 { t.TLSNextProto = map[string]func(string, *tls.Conn) http.RoundTripper{} } }) return &http.Client{ Transport: t, } } // getServiceAccountClient makes an authorized http client from the given service account credentials func getServiceAccountClient(opt *Options, credentialsData []byte) (*http.Client, error) { scopes := driveScopes(opt.Scope) conf, err := google.JWTConfigFromJSON(credentialsData, scopes...)
if err != nil { return nil, errors.Wrap(err, "error processing credentials") } if opt.Impersonate != "" { conf.Subject = opt.Impersonate } ctxWithSpecialClient := oauthutil.Context(getClient(opt)) return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil } func createOAuthClient(opt *Options, name string, m configmap.Mapper) (*http.Client, error) { var oAuthClient *http.Client var err error // try loading service account credentials from env variable, then from a file if len(opt.ServiceAccountCredentials) == 0 && opt.ServiceAccountFile != "" { loadedCreds, err := ioutil.ReadFile(env.ShellExpand(opt.ServiceAccountFile)) if err != nil { return nil, errors.Wrap(err, "error opening service account credentials file") } opt.ServiceAccountCredentials = string(loadedCreds) } if opt.ServiceAccountCredentials != "" { oAuthClient, err = getServiceAccountClient(opt, []byte(opt.ServiceAccountCredentials)) if err != nil { return nil, errors.Wrap(err, "failed to create oauth client from service account") } } else { oAuthClient, _, err = oauthutil.NewClientWithBaseClient(name, m, driveConfig, getClient(opt)) if err != nil { return nil, errors.Wrap(err, "failed to create oauth client") } } return oAuthClient, nil } func checkUploadChunkSize(cs fs.SizeSuffix) error { if !isPowerOfTwo(int64(cs)) { return errors.Errorf("%v isn't a power of two", cs) } if cs < minChunkSize { return errors.Errorf("%s is less than %s", cs, minChunkSize) } return nil } func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadChunkSize(cs) if err == nil { old, f.opt.ChunkSize = f.opt.ChunkSize, cs } return } func checkUploadCutoff(cs fs.SizeSuffix) error { return nil } func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadCutoff(cs) if err == nil { old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs } return } // newFs partially constructs Fs from the path // // It constructs a valid Fs but doesn't attempt to figure out whether // it is a file or a directory. func newFs(name, path string, m configmap.Mapper) (*Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } err = checkUploadCutoff(opt.UploadCutoff) if err != nil { return nil, errors.Wrap(err, "drive: upload cutoff") } err = checkUploadChunkSize(opt.ChunkSize) if err != nil { return nil, errors.Wrap(err, "drive: chunk size") } oAuthClient, err := createOAuthClient(opt, name, m) if err != nil { return nil, errors.Wrap(err, "drive: failed when making oauth client") } root, err := parseDrivePath(path) if err != nil { return nil, err } f := &Fs{ name: name, root: root, opt: *opt, pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(opt.PacerMinSleep), pacer.Burst(opt.PacerBurst))), m: m, grouping: listRGrouping, listRmu: new(sync.Mutex), listRempties: make(map[string]struct{}), } f.isTeamDrive = opt.TeamDriveID != "" f.fileFields = f.getFileFields() f.features = (&fs.Features{ DuplicateFiles: true, ReadMimeType: true, WriteMimeType: true, CanHaveEmptyDirectories: true, ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs, }).Fill(f) // Create a new authorized Drive client. 
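// (A minimal sketch of an alternative, assuming a newer
// google.golang.org/api release and a context ctx in scope:
//
//	svc, err := drive.NewService(ctx, option.WithHTTPClient(oAuthClient))
//
// This code keeps the older drive.New constructor below.)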
f.client = oAuthClient f.svc, err = drive.New(f.client) if err != nil { return nil, errors.Wrap(err, "couldn't create Drive client") } if f.opt.V2DownloadMinSize >= 0 { f.v2Svc, err = drive_v2.New(f.client) if err != nil { return nil, errors.Wrap(err, "couldn't create Drive v2 client") } } return f, nil } // NewFs constructs an Fs from the path, container:path func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() f, err := newFs(name, path, m) if err != nil { return nil, err } // Set the root folder ID if f.opt.RootFolderID != "" { // use root_folder ID if set f.rootFolderID = f.opt.RootFolderID } else if f.isTeamDrive { // otherwise use team_drive if set f.rootFolderID = f.opt.TeamDriveID } else { // otherwise look up the actual root ID rootID, err := f.getRootID() if err != nil { if gerr, ok := errors.Cause(err).(*googleapi.Error); ok && gerr.Code == 404 { // 404 means that this scope does not have permission to get the // root so just use "root" rootID = "root" } else { return nil, err } } f.rootFolderID = rootID fs.Debugf(f, "root_folder_id = %q - save this in the config to speed up startup", rootID) } f.dirCache = dircache.New(f.root, f.rootFolderID, f) // Parse extensions if f.opt.Extensions != "" { if f.opt.ExportExtensions != defaultExportExtensions { return nil, errors.New("only one of 'formats' and 'export_formats' can be specified") } f.opt.Extensions, f.opt.ExportExtensions = "", f.opt.Extensions } f.exportExtensions, _, err = parseExtensions(f.opt.ExportExtensions, defaultExportExtensions) if err != nil { return nil, err } _, f.importMimeTypes, err = parseExtensions(f.opt.ImportExtensions) if err != nil { return nil, err } // Find the current root err = f.dirCache.FindRoot(ctx, false) if err != nil { // Assume it is a file newRoot, remote := dircache.SplitPath(f.root) tempF := *f tempF.dirCache = dircache.New(newRoot, f.rootFolderID, &tempF) tempF.root = newRoot // Make new Fs which is the parent err = tempF.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f return f, nil } _, err := tempF.NewObject(ctx, remote) if err != nil { // unable to list folder so return old f return f, nil } // XXX: update the old f here instead of returning tempF, since // `features` were already filled with functions having *f as a receiver. 
// See https://github.com/rclone/rclone/issues/2182 f.dirCache = tempF.dirCache f.root = tempF.root return f, fs.ErrorIsFile } // fmt.Printf("Root id %s", f.dirCache.RootID()) return f, nil } func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject { modifiedDate := info.ModifiedTime if f.opt.UseCreatedDate { modifiedDate = info.CreatedTime } else if f.opt.UseSharedDate && info.SharedWithMeTime != "" { modifiedDate = info.SharedWithMeTime } size := info.Size if f.opt.SizeAsQuota { size = info.QuotaBytesUsed } return baseObject{ fs: f, remote: remote, id: info.Id, modifiedDate: modifiedDate, mimeType: info.MimeType, bytes: size, parents: len(info.Parents), } } // getFileFields gets the fields for a normal file Get or List func (f *Fs) getFileFields() (fields googleapi.Field) { fields = partialFields if f.opt.AuthOwnerOnly { fields += ",owners" } if f.opt.UseSharedDate { fields += ",sharedWithMeTime" } if f.opt.SkipChecksumGphotos { fields += ",spaces" } if f.opt.SizeAsQuota { fields += ",quotaBytesUsed" } return fields } // newRegularObject creates an fs.Object for a normal drive.File func (f *Fs) newRegularObject(remote string, info *drive.File) fs.Object { // wipe checksum if SkipChecksumGphotos and file is type Photo or Video if f.opt.SkipChecksumGphotos { for _, space := range info.Spaces { if space == "photos" { info.Md5Checksum = "" break } } } return &Object{ baseObject: f.newBaseObject(remote, info), url: fmt.Sprintf("%sfiles/%s?alt=media", f.svc.BasePath, actualID(info.Id)), md5sum: strings.ToLower(info.Md5Checksum), v2Download: f.opt.V2DownloadMinSize != -1 && info.Size >= int64(f.opt.V2DownloadMinSize), } } // newDocumentObject creates an fs.Object for a google docs drive.File func (f *Fs) newDocumentObject(remote string, info *drive.File, extension, exportMimeType string) (fs.Object, error) { mediaType, _, err := mime.ParseMediaType(exportMimeType) if err != nil { return nil, err } url := info.ExportLinks[mediaType] baseObject := f.newBaseObject(remote+extension, info) baseObject.bytes = -1 baseObject.mimeType = exportMimeType return &documentObject{ baseObject: baseObject, url: url, documentMimeType: info.MimeType, extLen: len(extension), }, nil } // newLinkObject creates an fs.Object that represents a link a google docs drive.File func (f *Fs) newLinkObject(remote string, info *drive.File, extension, exportMimeType string) (fs.Object, error) { t := linkTemplate(exportMimeType) if t == nil { return nil, errors.Errorf("unsupported link type %s", exportMimeType) } var buf bytes.Buffer err := t.Execute(&buf, struct { URL, Title string }{ info.WebViewLink, info.Name, }) if err != nil { return nil, errors.Wrap(err, "executing template failed") } baseObject := f.newBaseObject(remote+extension, info) baseObject.bytes = int64(buf.Len()) baseObject.mimeType = exportMimeType return &linkObject{ baseObject: baseObject, content: buf.Bytes(), extLen: len(extension), }, nil } // newObjectWithInfo creates an fs.Object for any drive.File // // When the drive.File cannot be represented as an fs.Object it will return (nil, nil). 
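// For example (illustrative): a JPEG carrying an md5Checksum becomes a
// regular *Object, a file with MimeType
// "application/vnd.google-apps.document" becomes a *documentObject
// exported under one of the configured extensions, and a folder is
// rejected with fs.ErrorNotAFile.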
func (f *Fs) newObjectWithInfo(remote string, info *drive.File) (fs.Object, error) { // If item has MD5 sum or a length it is a file stored on drive if info.Md5Checksum != "" || info.Size > 0 { return f.newRegularObject(remote, info), nil } extension, exportName, exportMimeType, isDocument := f.findExportFormat(info) return f.newObjectWithExportInfo(remote, info, extension, exportName, exportMimeType, isDocument) } // newObjectWithExportInfo creates an fs.Object for any drive.File and the result of findExportFormat // // When the drive.File cannot be represented as an fs.Object it will return (nil, nil). func (f *Fs) newObjectWithExportInfo( remote string, info *drive.File, extension, exportName, exportMimeType string, isDocument bool) (o fs.Object, err error) { // Note that resolveShortcut will have been called already if // we are being called from a listing. However the drive.Item // will have been resolved so this will do nothing. info, err = f.resolveShortcut(info) if err != nil { return nil, errors.Wrap(err, "new object") } switch { case info.MimeType == driveFolderType: return nil, fs.ErrorNotAFile case info.MimeType == shortcutMimeType: // We can only get here if f.opt.SkipShortcuts is set // and not from a listing. This is unlikely. fs.Debugf(remote, "Ignoring shortcut as skip shortcuts is set") return nil, fs.ErrorObjectNotFound case info.MimeType == shortcutMimeTypeDangling: // Pretend a dangling shortcut is a regular object // It will error if used, but appear in listings so it can be deleted return f.newRegularObject(remote, info), nil case info.Md5Checksum != "" || info.Size > 0: // If item has MD5 sum or a length it is a file stored on drive return f.newRegularObject(remote, info), nil case f.opt.SkipGdocs: fs.Debugf(remote, "Skipping google document type %q", info.MimeType) return nil, fs.ErrorObjectNotFound default: // If item MimeType is in the ExportFormats then it is a google doc if !isDocument { fs.Debugf(remote, "Ignoring unknown document type %q", info.MimeType) return nil, fs.ErrorObjectNotFound } if extension == "" { fs.Debugf(remote, "No export formats found for %q", info.MimeType) return nil, fs.ErrorObjectNotFound } if isLinkMimeType(exportMimeType) { return f.newLinkObject(remote, info, extension, exportMimeType) } return f.newDocumentObject(remote, info, extension, exportMimeType) } } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. 
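//
// Illustrative use (the path is hypothetical):
//
//	o, err := f.NewObject(ctx, "docs/report.docx")
//	if err == fs.ErrorObjectNotFound {
//		// no file of that name (or no matching export format)
//	}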
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { info, extension, exportName, exportMimeType, isDocument, err := f.getRemoteInfoWithExport(ctx, remote) if err != nil { return nil, err } remote = remote[:len(remote)-len(extension)] obj, err := f.newObjectWithExportInfo(remote, info, extension, exportName, exportMimeType, isDocument) switch { case err != nil: return nil, err case obj == nil: return nil, fs.ErrorObjectNotFound default: return obj, nil } } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { // Find the leaf in pathID pathID = actualID(pathID) found, err = f.list(ctx, []string{pathID}, leaf, true, false, false, func(item *drive.File) bool { if !f.opt.SkipGdocs { _, exportName, _, isDocument := f.findExportFormat(item) if exportName == leaf { pathIDOut = item.Id return true } if isDocument { return false } } if item.Name == leaf { pathIDOut = item.Id return true } return false }) return pathIDOut, found, err } // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) { leaf = f.opt.Enc.FromStandardName(leaf) // fmt.Println("Making", path) // Define the metadata for the directory we are going to create. pathID = actualID(pathID) createInfo := &drive.File{ Name: leaf, Description: leaf, MimeType: driveFolderType, Parents: []string{pathID}, } var info *drive.File err = f.pacer.Call(func() (bool, error) { info, err = f.svc.Files.Create(createInfo). Fields("id"). SupportsAllDrives(true). Do() return f.shouldRetry(err) }) if err != nil { return "", err } return info.Id, nil } // isAuthOwned checks if any of the item owners is the authenticated owner func isAuthOwned(item *drive.File) bool { for _, owner := range item.Owners { if owner.Me { return true } } return false } // linkTemplate returns the Template for a MIME type or nil if the // MIME type does not represent a link func linkTemplate(mt string) *template.Template { templatesOnce.Do(func() { _linkTemplates = map[string]*template.Template{ "application/x-link-desktop": template.Must( template.New("application/x-link-desktop").Parse(desktopTemplate)), "application/x-link-html": template.Must( template.New("application/x-link-html").Parse(htmlTemplate)), "application/x-link-url": template.Must( template.New("application/x-link-url").Parse(urlTemplate)), "application/x-link-webloc": template.Must( template.New("application/x-link-webloc").Parse(weblocTemplate)), } }) return _linkTemplates[mt] } func (f *Fs) fetchFormats() { fetchFormatsOnce.Do(func() { var about *drive.About var err error err = f.pacer.Call(func() (bool, error) { about, err = f.svc.About.Get(). Fields("exportFormats,importFormats"). Do() return f.shouldRetry(err) }) if err != nil { fs.Errorf(f, "Failed to get Drive exportFormats and importFormats: %v", err) _exportFormats = map[string][]string{} _importFormats = map[string][]string{} return } _exportFormats = fixMimeTypeMap(about.ExportFormats) _importFormats = fixMimeTypeMap(about.ImportFormats) }) } // exportFormats returns the export formats from drive, fetching them // if necessary. // // if the fetch fails then it will not export any drive formats func (f *Fs) exportFormats() map[string][]string { f.fetchFormats() return _exportFormats } // importFormats returns the import formats from drive, fetching them // if necessary. 
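// (Illustrative shape of the result: the map is keyed by source MIME
// type, eg "text/csv" -> ["application/vnd.google-apps.spreadsheet"].)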
// // if the fetch fails then it will not import any drive formats func (f *Fs) importFormats() map[string][]string { f.fetchFormats() return _importFormats } // findExportFormatByMimeType works out the optimum export settings // for the given MIME type. // // Look through the exportExtensions and find the first format that can be // converted. If none found then return ("", "", false) func (f *Fs) findExportFormatByMimeType(itemMimeType string) ( extension, mimeType string, isDocument bool) { exportMimeTypes, isDocument := f.exportFormats()[itemMimeType] if isDocument { for _, _extension := range f.exportExtensions { _mimeType := mime.TypeByExtension(_extension) if isLinkMimeType(_mimeType) { return _extension, _mimeType, true } for _, emt := range exportMimeTypes { if emt == _mimeType { return _extension, emt, true } if _mimeType == _mimeTypeCustomTransform[emt] { return _extension, emt, true } } } } // else return empty return "", "", isDocument } // findExportFormat works out the optimum export settings // for the given drive.File. // // Look through the exportExtensions and find the first format that can be // converted. If none found then return ("", "", "", false) func (f *Fs) findExportFormat(item *drive.File) (extension, filename, mimeType string, isDocument bool) { extension, mimeType, isDocument = f.findExportFormatByMimeType(item.MimeType) if extension != "" { filename = item.Name + extension } return } // findImportFormat finds the matching upload MIME type for a file // If the given MIME type is in importMimeTypes, the matching upload // MIME type is returned // // When no match is found "" is returned. func (f *Fs) findImportFormat(mimeType string) string { mimeType = fixMimeType(mimeType) ifs := f.importFormats() for _, mt := range f.importMimeTypes { if mt == mimeType { importMimeTypes := ifs[mimeType] if l := len(importMimeTypes); l > 0 { if l > 1 { fs.Infof(f, "found %d import formats for %q: %q", l, mimeType, importMimeTypes) } return importMimeTypes[0] } } } return "" } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found.
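//
// Illustrative use (the directory name is hypothetical):
//
//	entries, err := f.List(ctx, "photos/2020")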
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } directoryID = actualID(directoryID) var iErr error _, err = f.list(ctx, []string{directoryID}, "", false, false, false, func(item *drive.File) bool { entry, err := f.itemToDirEntry(path.Join(dir, item.Name), item) if err != nil { iErr = err return true } if entry != nil { entries = append(entries, entry) } return false }) if err != nil { return nil, err } if iErr != nil { return nil, iErr } // If listing the root of a teamdrive and got no entries, // double check we have access if f.isTeamDrive && len(entries) == 0 && f.root == "" && dir == "" { err = f.teamDriveOK(ctx) if err != nil { return nil, err } } return entries, nil } // listREntry is a task to be executed by a listRRunner type listREntry struct { id, path string } // listRSlices is a helper struct to sort two slices at once type listRSlices struct { dirs []string paths []string } func (s listRSlices) Sort() { sort.Sort(s) } func (s listRSlices) Len() int { return len(s.dirs) } func (s listRSlices) Swap(i, j int) { s.dirs[i], s.dirs[j] = s.dirs[j], s.dirs[i] s.paths[i], s.paths[j] = s.paths[j], s.paths[i] } func (s listRSlices) Less(i, j int) bool { return s.dirs[i] < s.dirs[j] } // listRRunner will read dirIDs from the in channel, perform the file listing and call cb with each DirEntry. // // In each cycle it will read up to grouping entries from the in channel without blocking. // If an error occurs it will be sent to the out channel and the runner will return. Once the in channel is closed, // nil is sent to the out channel and the function returns. func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in chan listREntry, out chan<- error, cb func(fs.DirEntry) error, sendJob func(listREntry)) { var dirs []string var paths []string var grouping int32 for dir := range in { dirs = append(dirs[:0], dir.id) paths = append(paths[:0], dir.path) grouping = atomic.LoadInt32(&f.grouping) waitloop: for i := int32(1); i < grouping; i++ { select { case d, ok := <-in: if !ok { break waitloop } dirs = append(dirs, d.id) paths = append(paths, d.path) default: } } listRSlices{dirs, paths}.Sort() var iErr error foundItems := false _, err := f.list(ctx, dirs, "", false, false, false, func(item *drive.File) bool { // shared with me items have no parents when at the root if f.opt.SharedWithMe && len(item.Parents) == 0 && len(paths) == 1 && paths[0] == "" { item.Parents = dirs } for _, parent := range item.Parents { var i int foundItems = true earlyExit := false // If only one item in paths then no need to search for the ID // assuming google drive is doing its job properly. // // Note that we are at the root when len(paths) == 1 && paths[0] == "" if len(paths) == 1 { // don't check parents at root because // - shared with me items have no parents at the root // - if using a root alias, eg "root" or "appDataFolder" the ID won't match i = 0 // items at root can have more than one parent so we need to put // the item in just once.
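// (eg a file shared directly with you surfaces at the root with
// paths == [""], so it is emitted once at i == 0 and the remaining
// parents are skipped via earlyExit below.)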
earlyExit = true } else { // only handle parents that are in the requested dirs list if not at root i = sort.SearchStrings(dirs, parent) if i == len(dirs) || dirs[i] != parent { continue } } remote := path.Join(paths[i], item.Name) entry, err := f.itemToDirEntry(remote, item) if err != nil { iErr = err return true } err = cb(entry) if err != nil { iErr = err return true } // If we didn't check parents then insert only once if earlyExit { break } } return false }) // Found no items in more than one directory. Retry these as // individual directories. This is to work around a bug in google // drive where (A in parents) or (B in parents) returns nothing // sometimes. See #3114, #4289 and // https://issuetracker.google.com/issues/149522397 if len(dirs) > 1 && !foundItems { if atomic.SwapInt32(&f.grouping, 1) != 1 { fs.Debugf(f, "Disabling ListR to work around bug in drive as multi listing (%d) returned no entries", len(dirs)) } f.listRmu.Lock() for i := range dirs { // Requeue the jobs job := listREntry{id: dirs[i], path: paths[i]} sendJob(job) // Make a note of these dirs - if they all turn // out to be empty then we can re-enable grouping f.listRempties[dirs[i]] = struct{}{} } f.listRmu.Unlock() fs.Debugf(f, "Recycled %d entries", len(dirs)) } // If using a grouping of 1 and dir was empty then check to see if it // is part of the group that caused grouping to be disabled. if grouping == 1 && len(dirs) == 1 && !foundItems { f.listRmu.Lock() if _, found := f.listRempties[dirs[0]]; found { // Remove the ID delete(f.listRempties, dirs[0]) // If no empties left => all the directories that // triggered the grouping being set to 1 were actually // empty so must have made a mistake if len(f.listRempties) == 0 { if atomic.SwapInt32(&f.grouping, listRGrouping) != listRGrouping { fs.Debugf(f, "Re-enabling ListR as previous detection was in error") } } } f.listRmu.Unlock() } for range dirs { wg.Done() } if iErr != nil { out <- iErr return } if err != nil { out <- err return } } out <- nil } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively than doing a directory traversal. func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return err } directoryID = actualID(directoryID) mu := sync.Mutex{} // protects in and overflow wg := sync.WaitGroup{} in := make(chan listREntry, listRInputBuffer) out := make(chan error, fs.Config.Checkers) list := walk.NewListRHelper(callback) overflow := []listREntry{} listed := 0 // Send a job to the input channel if not closed. If the job // won't fit then queue it in the overflow slice. // // This will not block if the channel is full.
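// (For example, once the in channel already holds listRInputBuffer
// queued jobs, the select below falls through to its default case and
// the job is parked in the overflow slice rather than blocking the
// worker.)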
sendJob := func(job listREntry) { mu.Lock() defer mu.Unlock() if in == nil { return } wg.Add(1) select { case in <- job: default: overflow = append(overflow, job) wg.Add(-1) } } // Send the entry to the caller, queueing any directories as new jobs cb := func(entry fs.DirEntry) error { if d, isDir := entry.(*fs.Dir); isDir { job := listREntry{actualID(d.ID()), d.Remote()} sendJob(job) } mu.Lock() defer mu.Unlock() listed++ return list.Add(entry) } wg.Add(1) in <- listREntry{directoryID, dir} for i := 0; i < fs.Config.Checkers; i++ { go f.listRRunner(ctx, &wg, in, out, cb, sendJob) } go func() { // wait until all the directories are processed wg.Wait() // if the input channel overflowed add the collected entries to the channel now for len(overflow) > 0 { mu.Lock() l := len(overflow) // only fill half of the channel to prevent entries being put into overflow again if l > listRInputBuffer/2 { l = listRInputBuffer / 2 } wg.Add(l) for _, d := range overflow[:l] { in <- d } overflow = overflow[l:] mu.Unlock() // wait again for the completion of all directories wg.Wait() } mu.Lock() if in != nil { // notify all workers to exit close(in) in = nil } mu.Unlock() }() // wait for all the workers to finish for i := 0; i < fs.Config.Checkers; i++ { e := <-out mu.Lock() // if one worker returns an error early, close the input so all other workers exit if e != nil && in != nil { err = e close(in) in = nil } mu.Unlock() } close(out) if err != nil { return err } err = list.Flush() if err != nil { return err } // If listing the root of a teamdrive and got no entries, // double check we have access if f.isTeamDrive && listed == 0 && f.root == "" && dir == "" { err = f.teamDriveOK(ctx) if err != nil { return err } } return nil } const shortcutSeparator = '\t' // joinID adds an actual drive ID to the shortcut ID it came from // // directoryIDs in the dircache are these composite directory IDs so // we must always unpack them before use. func joinID(actual, shortcut string) string { return actual + string(shortcutSeparator) + shortcut } // splitID separates an actual ID and a shortcut ID from a composite // ID. If there was no shortcut ID then it will return "" for it. func splitID(compositeID string) (actualID, shortcutID string) { i := strings.IndexRune(compositeID, shortcutSeparator) if i < 0 { return compositeID, "" } return compositeID[:i], compositeID[i+1:] } // isShortcutID returns true if compositeID refers to a shortcut func isShortcutID(compositeID string) bool { return strings.IndexRune(compositeID, shortcutSeparator) >= 0 } // actualID returns an actual ID from a composite ID func actualID(compositeID string) (actualID string) { actualID, _ = splitID(compositeID) return actualID } // shortcutID returns a shortcut ID from a composite ID if available, // or the actual ID if not. func shortcutID(compositeID string) (shortcutID string) { actualID, shortcutID := splitID(compositeID) if shortcutID != "" { return shortcutID } return actualID } // isShortcut returns true if the item is a shortcut func isShortcut(item *drive.File) bool { return item.MimeType == shortcutMimeType && item.ShortcutDetails != nil } // Dereference shortcut if required. It returns the newItem (which may // be just item). // // If we return a new item then the ID will be adjusted to be a // composite of the actual ID and the shortcut ID. This is to make // sure that we have decided at every use site what we are doing with // the ID. // // Note that we assume shortcuts can't point to shortcuts.
Google // drive web interface doesn't offer the option to create a shortcut // to a shortcut. The documentation is silent on the issue. func (f *Fs) resolveShortcut(item *drive.File) (newItem *drive.File, err error) { if f.opt.SkipShortcuts || item.MimeType != shortcutMimeType { return item, nil } if item.ShortcutDetails == nil { fs.Errorf(nil, "Expecting shortcutDetails in %v", item) return item, nil } newItem, err = f.getFile(item.ShortcutDetails.TargetId, f.fileFields) if err != nil { if gerr, ok := errors.Cause(err).(*googleapi.Error); ok && gerr.Code == 404 { // 404 means dangling shortcut, so just return the shortcut with the mime type mangled fs.Logf(nil, "Dangling shortcut %q detected", item.Name) item.MimeType = shortcutMimeTypeDangling return item, nil } return nil, errors.Wrap(err, "failed to resolve shortcut") } // make sure we use the Name, Parents and Trashed from the original item newItem.Name = item.Name newItem.Parents = item.Parents newItem.Trashed = item.Trashed // the new ID is a composite ID newItem.Id = joinID(newItem.Id, item.Id) return newItem, nil } // itemToDirEntry converts a drive.File to an fs.DirEntry. // When the drive.File cannot be represented as an fs.DirEntry // (nil, nil) is returned. func (f *Fs) itemToDirEntry(remote string, item *drive.File) (entry fs.DirEntry, err error) { switch { case item.MimeType == driveFolderType: // cache the directory ID for later lookups f.dirCache.Put(remote, item.Id) when, _ := time.Parse(timeFormatIn, item.ModifiedTime) d := fs.NewDir(remote, when).SetID(item.Id) return d, nil case f.opt.AuthOwnerOnly && !isAuthOwned(item): // ignore object default: entry, err = f.newObjectWithInfo(remote, item) if err == fs.ErrorObjectNotFound { return nil, nil } return entry, err } return nil, nil } // Creates a drive.File info from the parameters passed in. // // Used to create new objects func (f *Fs) createFileInfo(ctx context.Context, remote string, modTime time.Time) (*drive.File, error) { leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true) if err != nil { return nil, err } directoryID = actualID(directoryID) leaf = f.opt.Enc.FromStandardName(leaf) // Define the metadata for the file we are going to create. createInfo := &drive.File{ Name: leaf, Description: leaf, Parents: []string{directoryID}, ModifiedTime: modTime.Format(timeFormatOut), } return createInfo, nil } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { exisitingObj, err := f.NewObject(ctx, src.Remote()) switch err { case nil: return exisitingObj, exisitingObj.Update(ctx, in, src, options...) case fs.ErrorObjectNotFound: // Not found so create it return f.PutUnchecked(ctx, in, src, options...) default: return nil, err } } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) } // PutUnchecked uploads the object // // This will create a duplicate if we upload a new file without // checking to see if there is one already - use Put() for that. 
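//
// For example (illustrative): calling PutUnchecked twice with the same
// "a.txt" leaves two files named "a.txt" in the directory, which can
// later be tidied up with `rclone dedupe`.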
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { remote := src.Remote() size := src.Size() modTime := src.ModTime(ctx) srcMimeType := fs.MimeTypeFromName(remote) srcExt := path.Ext(remote) exportExt := "" importMimeType := "" if f.importMimeTypes != nil && !f.opt.SkipGdocs { importMimeType = f.findImportFormat(srcMimeType) if isInternalMimeType(importMimeType) { remote = remote[:len(remote)-len(srcExt)] exportExt, _, _ = f.findExportFormatByMimeType(importMimeType) if exportExt == "" { return nil, errors.Errorf("No export format found for %q", importMimeType) } if exportExt != srcExt && !f.opt.AllowImportNameChange { return nil, errors.Errorf("Can't convert %q to a document with a different export filetype (%q)", srcExt, exportExt) } } } createInfo, err := f.createFileInfo(ctx, remote, modTime) if err != nil { return nil, err } if importMimeType != "" { createInfo.MimeType = importMimeType } else { createInfo.MimeType = fs.MimeTypeFromName(remote) } var info *drive.File if size >= 0 && size < int64(f.opt.UploadCutoff) { // Make the API request to upload metadata and file data. // Don't retry, return a retry error instead err = f.pacer.CallNoRetry(func() (bool, error) { info, err = f.svc.Files.Create(createInfo). Media(in, googleapi.ContentType(srcMimeType)). Fields(partialFields). SupportsAllDrives(true). KeepRevisionForever(f.opt.KeepRevisionForever). Do() return f.shouldRetry(err) }) if err != nil { return nil, err } } else { // Upload the file in chunks info, err = f.Upload(ctx, in, size, srcMimeType, "", remote, createInfo) if err != nil { return nil, err } } return f.newObjectWithInfo(remote, info) } // MergeDirs merges the contents of all the directories passed // in into the first one and rmdirs the other directories. func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error { if len(dirs) < 2 { return nil } newDirs := dirs[:0] for _, dir := range dirs { if isShortcutID(dir.ID()) { fs.Infof(dir, "skipping shortcut directory") continue } newDirs = append(newDirs, dir) } dirs = newDirs if len(dirs) < 2 { return nil } dstDir := dirs[0] for _, srcDir := range dirs[1:] { // list the objects infos := []*drive.File{} _, err := f.list(ctx, []string{srcDir.ID()}, "", false, false, true, func(info *drive.File) bool { infos = append(infos, info) return false }) if err != nil { return errors.Wrapf(err, "MergeDirs list failed on %v", srcDir) } // move them into place for _, info := range infos { fs.Infof(srcDir, "merging %q", info.Name) // Move the file into the destination err = f.pacer.Call(func() (bool, error) { _, err = f.svc.Files.Update(info.Id, nil). RemoveParents(srcDir.ID()). AddParents(dstDir.ID()). Fields(""). SupportsAllDrives(true). 
Do() return f.shouldRetry(err) }) if err != nil { return errors.Wrapf(err, "MergeDirs move failed on %q in %v", info.Name, srcDir) } } // rmdir (into trash) the now empty source directory fs.Infof(srcDir, "removing empty directory") err = f.delete(ctx, srcDir.ID(), true) if err != nil { return errors.Wrapf(err, "MergeDirs move failed to rmdir %q", srcDir) } } return nil } // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { _, err := f.dirCache.FindDir(ctx, dir, true) return err } // delete a file or directory unconditionally by ID func (f *Fs) delete(ctx context.Context, id string, useTrash bool) error { return f.pacer.Call(func() (bool, error) { var err error if useTrash { info := drive.File{ Trashed: true, } _, err = f.svc.Files.Update(id, &info). Fields(""). SupportsAllDrives(true). Do() } else { err = f.svc.Files.Delete(id). Fields(""). SupportsAllDrives(true). Do() } return f.shouldRetry(err) }) } // purgeCheck removes the dir directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { root := path.Join(f.root, dir) dc := f.dirCache directoryID, err := dc.FindDir(ctx, dir, false) if err != nil { return err } directoryID, shortcutID := splitID(directoryID) // if directory is a shortcut remove it regardless if shortcutID != "" { return f.delete(ctx, shortcutID, f.opt.UseTrash) } var trashedFiles = false if check { found, err := f.list(ctx, []string{directoryID}, "", false, false, true, func(item *drive.File) bool { if !item.Trashed { fs.Debugf(dir, "Rmdir: contains file: %q", item.Name) return true } fs.Debugf(dir, "Rmdir: contains trashed file: %q", item.Name) trashedFiles = true return false }) if err != nil { return err } if found { return errors.Errorf("directory not empty") } } if root != "" { // trash the directory if it had trashed files // in or the user wants to trash, otherwise // delete it. err = f.delete(ctx, directoryID, trashedFiles || f.opt.UseTrash) if err != nil { return err } } else if check { return errors.New("can't purge root directory") } f.dirCache.FlushDir(dir) if err != nil { return err } return nil } // Rmdir deletes a directory // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision of the object storage system func (f *Fs) Precision() time.Duration { return time.Millisecond } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { var srcObj *baseObject ext := "" isDoc := false switch src := src.(type) { case *Object: srcObj = &src.baseObject case *documentObject: srcObj, ext = &src.baseObject, src.ext() isDoc = true case *linkObject: srcObj, ext = &src.baseObject, src.ext() default: fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } // Look to see if there is an existing object before we remove // the extension from the remote existingObject, _ := f.NewObject(ctx, remote) // Adjust the remote name to be without the extension if we // are about to create a doc. 
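// (Illustrative: copying a Google Doc that rclone presents as
// "report.docx" to "copy.docx" strips the extension and creates a new
// Google Doc named "copy".)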
if ext != "" { if !strings.HasSuffix(remote, ext) { fs.Debugf(src, "Can't copy - not same document type") return nil, fs.ErrorCantCopy } remote = remote[:len(remote)-len(ext)] } createInfo, err := f.createFileInfo(ctx, remote, src.ModTime(ctx)) if err != nil { return nil, err } if isDoc { // preserve the description on copy for docs info, err := f.getFile(actualID(srcObj.id), "description") if err != nil { return nil, errors.Wrap(err, "failed to read description for Google Doc") } createInfo.Description = info.Description } else { // don't overwrite the description on copy for files // this should work for docs but it doesn't - it is probably a bug in Google Drive createInfo.Description = "" } // get the ID of the thing to copy - this is the shortcut if available id := shortcutID(srcObj.id) var info *drive.File err = f.pacer.Call(func() (bool, error) { info, err = f.svc.Files.Copy(id, createInfo). Fields(partialFields). SupportsAllDrives(true). KeepRevisionForever(f.opt.KeepRevisionForever). Do() return f.shouldRetry(err) }) if err != nil { return nil, err } newObject, err := f.newObjectWithInfo(remote, info) if err != nil { return nil, err } // Google docs aren't preserving their mod time after copy, so set them explicitly // See: https://github.com/rclone/rclone/issues/4517 // // FIXME remove this when google fixes the problem! if isDoc { // A short sleep is needed here in order to make the // change effective, without it is is ignored. This is // probably some eventual consistency nastiness. sleepTime := 2 * time.Second fs.Debugf(f, "Sleeping for %v before setting the modtime to work around drive bug - see #4517", sleepTime) time.Sleep(sleepTime) err = newObject.SetModTime(ctx, src.ModTime(ctx)) if err != nil { return nil, err } } if existingObject != nil { err = existingObject.Remove(ctx) if err != nil { fs.Errorf(existingObject, "Failed to remove existing object after copy: %v", err) } } return newObject, nil } // Purge deletes all the files and the container // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { if f.opt.TrashedOnly { return errors.New("Can't purge with --drive-trashed-only. 
Use delete if you want to selectively delete files") } return f.purgeCheck(ctx, dir, false) } // CleanUp empties the trash func (f *Fs) CleanUp(ctx context.Context) error { err := f.pacer.Call(func() (bool, error) { err := f.svc.Files.EmptyTrash().Context(ctx).Do() return f.shouldRetry(err) }) if err != nil { return err } return nil } // teamDriveOK checks to see if we can access the team drive func (f *Fs) teamDriveOK(ctx context.Context) (err error) { if !f.isTeamDrive { return nil } var td *drive.Drive err = f.pacer.Call(func() (bool, error) { td, err = f.svc.Drives.Get(f.opt.TeamDriveID).Fields("name,id,capabilities,createdTime,restrictions").Context(ctx).Do() return f.shouldRetry(err) }) if err != nil { return errors.Wrap(err, "failed to get Team/Shared Drive info") } fs.Debugf(f, "read info from team drive %q", td.Name) return err } // About gets quota information func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { if f.isTeamDrive { err := f.teamDriveOK(ctx) if err != nil { return nil, err } // Teamdrives don't appear to have a usage API so just return empty return &fs.Usage{}, nil } var about *drive.About var err error err = f.pacer.Call(func() (bool, error) { about, err = f.svc.About.Get().Fields("storageQuota").Context(ctx).Do() return f.shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "failed to get Drive storageQuota") } q := about.StorageQuota usage := &fs.Usage{ Used: fs.NewUsageValue(q.UsageInDrive), // bytes in use Trashed: fs.NewUsageValue(q.UsageInDriveTrash), // bytes in trash Other: fs.NewUsageValue(q.Usage - q.UsageInDrive), // other usage eg gmail in drive } if q.Limit > 0 { usage.Total = fs.NewUsageValue(q.Limit) // quota of bytes that can be used usage.Free = fs.NewUsageValue(q.Limit - q.Usage) // bytes which can be uploaded before reaching the quota } return usage, nil } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { var srcObj *baseObject ext := "" switch src := src.(type) { case *Object: srcObj = &src.baseObject case *documentObject: srcObj, ext = &src.baseObject, src.ext() case *linkObject: srcObj, ext = &src.baseObject, src.ext() default: fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } if ext != "" { if !strings.HasSuffix(remote, ext) { fs.Debugf(src, "Can't move - not same document type") return nil, fs.ErrorCantMove } remote = remote[:len(remote)-len(ext)] } _, srcParentID, err := srcObj.fs.dirCache.FindPath(ctx, src.Remote(), false) if err != nil { return nil, err } srcParentID = actualID(srcParentID) // Temporary Object under construction dstInfo, err := f.createFileInfo(ctx, remote, src.ModTime(ctx)) if err != nil { return nil, err } dstParents := strings.Join(dstInfo.Parents, ",") dstInfo.Parents = nil // Do the move var info *drive.File err = f.pacer.Call(func() (bool, error) { info, err = f.svc.Files.Update(shortcutID(srcObj.id), dstInfo). RemoveParents(srcParentID). AddParents(dstParents). Fields(partialFields). SupportsAllDrives(true). Do() return f.shouldRetry(err) }) if err != nil { return nil, err } return f.newObjectWithInfo(remote, info) } // PublicLink adds a "readable by anyone with link" permission on the given file or folder. 
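//
// The returned link has the form (ID illustrative):
//
//	https://drive.google.com/open?id=1A2b3C4d5E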
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) { id, err := f.dirCache.FindDir(ctx, remote, false) if err == nil { fs.Debugf(f, "attempting to share directory '%s'", remote) id = shortcutID(id) } else { fs.Debugf(f, "attempting to share single file '%s'", remote) o, err := f.NewObject(ctx, remote) if err != nil { return "", err } id = shortcutID(o.(fs.IDer).ID()) } permission := &drive.Permission{ AllowFileDiscovery: false, Role: "reader", Type: "anyone", } err = f.pacer.Call(func() (bool, error) { // TODO: On TeamDrives this might fail if lacking permissions to change ACLs. // Need to either check `canShare` attribute on the object or see if a sufficient permission is already present. _, err = f.svc.Permissions.Create(id, permission). Fields(""). SupportsAllDrives(true). Do() return f.shouldRetry(err) }) if err != nil { return "", err } return fmt.Sprintf("https://drive.google.com/open?id=%s", id), nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcID, srcDirectoryID, srcLeaf, dstDirectoryID, dstLeaf, err := f.dirCache.DirMove(ctx, srcFs.dirCache, srcFs.root, srcRemote, f.root, dstRemote) if err != nil { return err } _ = srcLeaf dstDirectoryID = actualID(dstDirectoryID) srcDirectoryID = actualID(srcDirectoryID) // Do the move patch := drive.File{ Name: dstLeaf, } err = f.pacer.Call(func() (bool, error) { _, err = f.svc.Files.Update(shortcutID(srcID), &patch). RemoveParents(srcDirectoryID). AddParents(dstDirectoryID). Fields(""). SupportsAllDrives(true). Do() return f.shouldRetry(err) }) if err != nil { return err } srcFs.dirCache.FlushDir(srcRemote) return nil } // ChangeNotify calls the passed function with a path that has had changes. // If the implementation uses polling, it should adhere to the given interval. // // Automatically restarts itself in case of unexpected behavior of the remote. // // Close the returned channel to stop being notified. 
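//
// Illustrative use (interval and callback are hypothetical):
//
//	ch := make(chan time.Duration)
//	f.ChangeNotify(ctx, func(path string, et fs.EntryType) {
//		fs.Debugf(nil, "changed: %q (%v)", path, et)
//	}, ch)
//	ch <- 30 * time.Second // start polling every 30s
//	close(ch)              // stop notifications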
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) { go func() { // get the StartPageToken early so all changes from now on get processed startPageToken, err := f.changeNotifyStartPageToken() if err != nil { fs.Infof(f, "Failed to get StartPageToken: %s", err) } var ticker *time.Ticker var tickerC <-chan time.Time for { select { case pollInterval, ok := <-pollIntervalChan: if !ok { if ticker != nil { ticker.Stop() } return } if ticker != nil { ticker.Stop() ticker, tickerC = nil, nil } if pollInterval != 0 { ticker = time.NewTicker(pollInterval) tickerC = ticker.C } case <-tickerC: if startPageToken == "" { startPageToken, err = f.changeNotifyStartPageToken() if err != nil { fs.Infof(f, "Failed to get StartPageToken: %s", err) continue } } fs.Debugf(f, "Checking for changes on remote") startPageToken, err = f.changeNotifyRunner(ctx, notifyFunc, startPageToken) if err != nil { fs.Infof(f, "Change notify listener failure: %s", err) } } } }() } func (f *Fs) changeNotifyStartPageToken() (pageToken string, err error) { var startPageToken *drive.StartPageToken err = f.pacer.Call(func() (bool, error) { changes := f.svc.Changes.GetStartPageToken().SupportsAllDrives(true) if f.isTeamDrive { changes.DriveId(f.opt.TeamDriveID) } startPageToken, err = changes.Do() return f.shouldRetry(err) }) if err != nil { return } return startPageToken.StartPageToken, nil } func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.EntryType), startPageToken string) (newStartPageToken string, err error) { pageToken := startPageToken for { var changeList *drive.ChangeList err = f.pacer.Call(func() (bool, error) { changesCall := f.svc.Changes.List(pageToken). Fields("nextPageToken,newStartPageToken,changes(fileId,file(name,parents,mimeType))") if f.opt.ListChunk > 0 { changesCall.PageSize(f.opt.ListChunk) } changesCall.SupportsAllDrives(true) changesCall.IncludeItemsFromAllDrives(true) if f.isTeamDrive { changesCall.DriveId(f.opt.TeamDriveID) } // If using appDataFolder then need to add Spaces if f.rootFolderID == "appDataFolder" { changesCall.Spaces("appDataFolder") } changeList, err = changesCall.Context(ctx).Do() return f.shouldRetry(err) }) if err != nil { return } type entryType struct { path string entryType fs.EntryType } var pathsToClear []entryType for _, change := range changeList.Changes { // find the previous path if path, ok := f.dirCache.GetInv(change.FileId); ok { if change.File != nil && change.File.MimeType != driveFolderType { pathsToClear = append(pathsToClear, entryType{path: path, entryType: fs.EntryObject}) } else { pathsToClear = append(pathsToClear, entryType{path: path, entryType: fs.EntryDirectory}) } } // find the new path if change.File != nil { change.File.Name = f.opt.Enc.ToStandardName(change.File.Name) changeType := fs.EntryDirectory if change.File.MimeType != driveFolderType { changeType = fs.EntryObject } // translate the parent dir of this object if len(change.File.Parents) > 0 { for _, parent := range change.File.Parents { if parentPath, ok := f.dirCache.GetInv(parent); ok { // and append the drive file name to compute the full file name newPath := path.Join(parentPath, change.File.Name) // this will now clear the actual file too pathsToClear = append(pathsToClear, entryType{path: newPath, entryType: changeType}) } } } else { // a true root object that is changed pathsToClear = append(pathsToClear, entryType{path: change.File.Name, entryType: changeType}) } } } visitedPaths := 
make(map[string]struct{}) for _, entry := range pathsToClear { if _, ok := visitedPaths[entry.path]; ok { continue } visitedPaths[entry.path] = struct{}{} notifyFunc(entry.path, entry.entryType) } switch { case changeList.NewStartPageToken != "": return changeList.NewStartPageToken, nil case changeList.NextPageToken != "": pageToken = changeList.NextPageToken default: return } } } // DirCacheFlush resets the directory cache - used in testing as an // optional interface func (f *Fs) DirCacheFlush() { f.dirCache.ResetRoot() } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } func (f *Fs) changeChunkSize(chunkSizeString string) (err error) { chunkSizeInt, err := strconv.ParseInt(chunkSizeString, 10, 64) if err != nil { return errors.Wrap(err, "couldn't convert chunk size to int") } chunkSize := fs.SizeSuffix(chunkSizeInt) if chunkSize == f.opt.ChunkSize { return nil } err = checkUploadChunkSize(chunkSize) if err == nil { f.opt.ChunkSize = chunkSize } return err } func (f *Fs) changeServiceAccountFile(file string) (err error) { fs.Debugf(nil, "Changing Service Account File from %s to %s", f.opt.ServiceAccountFile, file) if file == f.opt.ServiceAccountFile { return nil } oldSvc := f.svc oldv2Svc := f.v2Svc oldOAuthClient := f.client oldFile := f.opt.ServiceAccountFile oldCredentials := f.opt.ServiceAccountCredentials defer func() { // Undo all the changes instead of doing selective undos if err != nil { f.svc = oldSvc f.v2Svc = oldv2Svc f.client = oldOAuthClient f.opt.ServiceAccountFile = oldFile f.opt.ServiceAccountCredentials = oldCredentials } }() f.opt.ServiceAccountFile = file f.opt.ServiceAccountCredentials = "" oAuthClient, err := createOAuthClient(&f.opt, f.name, f.m) if err != nil { return errors.Wrap(err, "drive: failed when making oauth client") } f.client = oAuthClient f.svc, err = drive.New(f.client) if err != nil { return errors.Wrap(err, "couldn't create Drive client") } if f.opt.V2DownloadMinSize >= 0 { f.v2Svc, err = drive_v2.New(f.client) if err != nil { return errors.Wrap(err, "couldn't create Drive v2 client") } } return nil } // Create a shortcut from (f, srcPath) to (dstFs, dstPath) // // Will not overwrite existing files func (f *Fs) makeShortcut(ctx context.Context, srcPath string, dstFs *Fs, dstPath string) (o fs.Object, err error) { srcFs := f srcPath = strings.Trim(srcPath, "/") dstPath = strings.Trim(dstPath, "/") if dstPath == "" { return nil, errors.New("shortcut destination can't be root directory") } // Find source var srcID string isDir := false if srcPath == "" { // source is root directory srcID, err = f.dirCache.RootID(ctx, false) if err != nil { return nil, err } isDir = true } else if srcObj, err := srcFs.NewObject(ctx, srcPath); err != nil { if err != fs.ErrorNotAFile { return nil, errors.Wrap(err, "can't find source") } // source was a directory srcID, err = srcFs.dirCache.FindDir(ctx, srcPath, false) if err != nil { return nil, errors.Wrap(err, "failed to find source dir") } isDir = true } else { // source was a file srcID = srcObj.(*Object).id } srcID = actualID(srcID) // link to underlying object not to shortcut // Find destination _, err = dstFs.NewObject(ctx, dstPath) if err != fs.ErrorObjectNotFound { if err == nil { err = errors.New("existing file") } else if err == fs.ErrorNotAFile { err = errors.New("existing directory") } return nil, errors.Wrap(err, "not overwriting shortcut target") } // Create destination shortcut createInfo, err := dstFs.createFileInfo(ctx, dstPath, time.Now()) if err !=
nil { return nil, errors.Wrap(err, "shortcut destination failed") } createInfo.MimeType = shortcutMimeType createInfo.ShortcutDetails = &drive.FileShortcutDetails{ TargetId: srcID, } var info *drive.File err = dstFs.pacer.CallNoRetry(func() (bool, error) { info, err = dstFs.svc.Files.Create(createInfo). Fields(partialFields). SupportsAllDrives(true). KeepRevisionForever(dstFs.opt.KeepRevisionForever). Do() return dstFs.shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "shortcut creation failed") } if isDir { return nil, nil } return dstFs.newObjectWithInfo(dstPath, info) } // List all team drives func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err error) { drives = []*drive.TeamDrive{} listTeamDrives := f.svc.Teamdrives.List().PageSize(100) var defaultFs Fs // default Fs with default Options for { var teamDrives *drive.TeamDriveList err = f.pacer.Call(func() (bool, error) { teamDrives, err = listTeamDrives.Context(ctx).Do() return defaultFs.shouldRetry(err) }) if err != nil { return drives, errors.Wrap(err, "listing team drives failed") } drives = append(drives, teamDrives.TeamDrives...) if teamDrives.NextPageToken == "" { break } listTeamDrives.PageToken(teamDrives.NextPageToken) } return drives, nil } type unTrashResult struct { Untrashed int Errors int } func (r unTrashResult) Error() string { return fmt.Sprintf("%d errors while untrashing - see log", r.Errors) } // Restore the trashed files from dir, directoryID recursing if needed func (f *Fs) unTrash(ctx context.Context, dir string, directoryID string, recurse bool) (r unTrashResult, err error) { directoryID = actualID(directoryID) fs.Debugf(dir, "finding trash to restore in directory %q", directoryID) _, err = f.list(ctx, []string{directoryID}, "", false, false, true, func(item *drive.File) bool { remote := path.Join(dir, item.Name) if item.ExplicitlyTrashed { fs.Infof(remote, "restoring %q", item.Id) if operations.SkipDestructive(ctx, remote, "restore") { return false } update := drive.File{ ForceSendFields: []string{"Trashed"}, // necessary to set false value Trashed: false, } err := f.pacer.Call(func() (bool, error) { _, err := f.svc.Files.Update(item.Id, &update). SupportsAllDrives(true). Fields("trashed"). 
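// added note: only the "trashed" attribute is requested back here,
// keeping the restore response as small as possible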
Do() return f.shouldRetry(err) }) if err != nil { err = errors.Wrap(err, "failed to restore") r.Errors++ fs.Errorf(remote, "%v", err) } else { r.Untrashed++ } } if recurse && item.MimeType == "application/vnd.google-apps.folder" { if !isShortcutID(item.Id) { rNew, _ := f.unTrash(ctx, remote, item.Id, recurse) r.Untrashed += rNew.Untrashed r.Errors += rNew.Errors } } return false }) if err != nil { err = errors.Wrap(err, "failed to list directory") r.Errors++ fs.Errorf(dir, "%v", err) } if r.Errors != 0 { return r, r } return r, nil } // Untrash dir func (f *Fs) unTrashDir(ctx context.Context, dir string, recurse bool) (r unTrashResult, err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { r.Errors++ return r, err } return f.unTrash(ctx, dir, directoryID, true) } var commandHelp = []fs.CommandHelp{{ Name: "get", Short: "Get command for fetching the drive config parameters", Long: `This is a get command which will be used to fetch the various drive config parameters Usage Examples: rclone backend get drive: [-o service_account_file] [-o chunk_size] rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] `, Opts: map[string]string{ "chunk_size": "show the current upload chunk size", "service_account_file": "show the current service account file", }, }, { Name: "set", Short: "Set command for updating the drive config parameters", Long: `This is a set command which will be used to update the various drive config parameters Usage Examples: rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] `, Opts: map[string]string{ "chunk_size": "update the current upload chunk size", "service_account_file": "update the current service account file", }, }, { Name: "shortcut", Short: "Create shortcuts from files or directories", Long: `This command creates shortcuts from files or directories. Usage: rclone backend shortcut drive: source_item destination_shortcut rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut In the first example this creates a shortcut from the "source_item" which can be a file or a directory to the "destination_shortcut". The "source_item" and the "destination_shortcut" should be relative paths from "drive:" In the second example this creates a shortcut from the "source_item" relative to "drive:" to the "destination_shortcut" relative to "drive2:". This may fail with a permission error if the user authenticated with "drive2:" can't read files from "drive:". `, Opts: map[string]string{ "target": "optional target remote for the shortcut destination", }, }, { Name: "drives", Short: "List the shared drives available to this account", Long: `This command lists the shared drives (teamdrives) available to this account. Usage: rclone backend drives drive: This will return a JSON list of objects like this [ { "id": "0ABCDEF-01234567890", "kind": "drive#teamDrive", "name": "My Drive" }, { "id": "0ABCDEFabcdefghijkl", "kind": "drive#teamDrive", "name": "Test Drive" } ] `, }, { Name: "untrash", Short: "Untrash files and directories", Long: `This command recursively untrashes all the files and directories in the directory passed in. Usage: This takes an optional directory to untrash, which makes it easier to use via the API.
rclone backend untrash drive:directory rclone backend -i untrash drive:directory subdir Use the -i flag to see what would be restored before restoring it. Result: { "Untrashed": 17, "Errors": 0 } `, }} // Command the backend to run a named command // // The command run is name // args may be used to read arguments from // opts may be used to read optional arguments from // // The result should be capable of being JSON encoded // If it is a string or a []string it will be shown to the user // otherwise it will be JSON encoded and shown to the user like that func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) { switch name { case "get": out := make(map[string]string) if _, ok := opt["service_account_file"]; ok { out["service_account_file"] = f.opt.ServiceAccountFile } if _, ok := opt["chunk_size"]; ok { out["chunk_size"] = fmt.Sprintf("%s", f.opt.ChunkSize) } return out, nil case "set": out := make(map[string]map[string]string) if serviceAccountFile, ok := opt["service_account_file"]; ok { serviceAccountMap := make(map[string]string) serviceAccountMap["previous"] = f.opt.ServiceAccountFile if err = f.changeServiceAccountFile(serviceAccountFile); err != nil { return out, err } f.m.Set("service_account_file", serviceAccountFile) serviceAccountMap["current"] = f.opt.ServiceAccountFile out["service_account_file"] = serviceAccountMap } if chunkSize, ok := opt["chunk_size"]; ok { chunkSizeMap := make(map[string]string) chunkSizeMap["previous"] = fmt.Sprintf("%s", f.opt.ChunkSize) if err = f.changeChunkSize(chunkSize); err != nil { return out, err } chunkSizeString := fmt.Sprintf("%s", f.opt.ChunkSize) f.m.Set("chunk_size", chunkSizeString) chunkSizeMap["current"] = chunkSizeString out["chunk_size"] = chunkSizeMap } return out, nil case "shortcut": if len(arg) != 2 { return nil, errors.New("need exactly 2 arguments") } dstFs := f target, ok := opt["target"] if ok { targetFs, err := cache.Get(target) if err != nil { return nil, errors.Wrap(err, "couldn't find target") } dstFs, ok = targetFs.(*Fs) if !ok { return nil, errors.New("target is not a drive backend") } } return f.makeShortcut(ctx, arg[0], dstFs, arg[1]) case "drives": return f.listTeamDrives(ctx) case "untrash": dir := "" if len(arg) > 0 { dir = arg[0] } return f.unTrashDir(ctx, dir, true) default: return nil, fs.ErrorCommandNotFound } } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *baseObject) Fs() fs.Info { return o.fs } // Return a string version func (o *baseObject) String() string { return o.remote } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *baseObject) Remote() string { return o.remote } // Hash returns the Md5sum of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } return o.md5sum, nil } func (o *baseObject) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } return "", nil } // Size returns the size of an object in bytes func (o *baseObject) Size() int64 { return o.bytes } // getRemoteInfo returns a drive.File for the remote func (f *Fs) getRemoteInfo(ctx context.Context, remote string) (info *drive.File, err error) { info, _, _, _, _, err = f.getRemoteInfoWithExport(ctx, remote) return } // getRemoteInfoWithExport returns a 
drive.File and the export settings for the remote func (f *Fs) getRemoteInfoWithExport(ctx context.Context, remote string) ( info *drive.File, extension, exportName, exportMimeType string, isDocument bool, err error) { leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false) if err != nil { if err == fs.ErrorDirNotFound { return nil, "", "", "", false, fs.ErrorObjectNotFound } return nil, "", "", "", false, err } directoryID = actualID(directoryID) found, err := f.list(ctx, []string{directoryID}, leaf, false, false, false, func(item *drive.File) bool { if !f.opt.SkipGdocs { extension, exportName, exportMimeType, isDocument = f.findExportFormat(item) if exportName == leaf { info = item return true } if isDocument { return false } } if item.Name == leaf { info = item return true } return false }) if err != nil { return nil, "", "", "", false, err } if !found { return nil, "", "", "", false, fs.ErrorObjectNotFound } return } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and, if that isn't present, falls back to the LastModified returned in the HTTP headers func (o *baseObject) ModTime(ctx context.Context) time.Time { modTime, err := time.Parse(timeFormatIn, o.modifiedDate) if err != nil { fs.Debugf(o, "Failed to read mtime from object: %v", err) return time.Now() } return modTime } // SetModTime sets the modification time of the drive fs object func (o *baseObject) SetModTime(ctx context.Context, modTime time.Time) error { // New metadata updateInfo := &drive.File{ ModifiedTime: modTime.Format(timeFormatOut), } // Set modified date var info *drive.File err := o.fs.pacer.Call(func() (bool, error) { var err error info, err = o.fs.svc.Files.Update(actualID(o.id), updateInfo). Fields(partialFields). SupportsAllDrives(true). Do() return o.fs.shouldRetry(err) }) if err != nil { return err } // Update info from read data o.modifiedDate = info.ModifiedTime return nil } // Storable returns a boolean as to whether this object is storable func (o *baseObject) Storable() bool { return true } // httpResponse gets an http.Response object for the object // using the url and method passed in func (o *baseObject) httpResponse(ctx context.Context, url, method string, options []fs.OpenOption) (req *http.Request, res *http.Response, err error) { if url == "" { return nil, nil, errors.New("forbidden to download - check sharing permission") } req, err = http.NewRequest(method, url, nil) if err != nil { return req, nil, err } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext fs.OpenOptionAddHTTPHeaders(req.Header, options) if o.bytes == 0 { // Don't supply range requests for 0 length objects as they always fail delete(req.Header, "Range") } err = o.fs.pacer.Call(func() (bool, error) { res, err = o.fs.client.Do(req) if err == nil { err = googleapi.CheckResponse(res) if err != nil { _ = res.Body.Close() // ignore error } } return o.fs.shouldRetry(err) }) if err != nil { return req, nil, err } return req, res, nil } // openDocumentFile represents a documentObject open for reading. // It updates the object size after a successful read.
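//
// Illustrative sketch (added; not part of the original source): this is
// how documentObject.Open wraps the export download stream, so that
// reading to EOF corrects the object's recorded size (docObj is a
// hypothetical *documentObject, in an io.ReadCloser):
//
//	in = &openDocumentFile{o: docObj, in: in}
//	// ... read in to EOF ...
//	_ = in.Close() // Close records the number of bytes actually exported
//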
type openDocumentFile struct { o *documentObject // Object we are reading for in io.ReadCloser // reading from here bytes int64 // number of bytes read on this connection eof bool // whether we have read end of file errored bool // whether we have encountered an error during reading } // Read bytes from the object - see io.Reader func (file *openDocumentFile) Read(p []byte) (n int, err error) { n, err = file.in.Read(p) file.bytes += int64(n) if err != nil && err != io.EOF { file.errored = true } if err == io.EOF { file.eof = true } return } // Close the object and update bytes read func (file *openDocumentFile) Close() (err error) { // If end of file, update bytes read if file.eof && !file.errored { fs.Debugf(file.o, "Updating size of doc after download to %v", file.bytes) file.o.bytes = file.bytes } return file.in.Close() } // Check it satisfies the interfaces var _ io.ReadCloser = (*openDocumentFile)(nil) // isGoogleError checks to see if err is a googleapi.Error containing an error whose Reason matches what func isGoogleError(err error, what string) bool { if gerr, ok := err.(*googleapi.Error); ok { for _, e := range gerr.Errors { if e.Reason == what { return true } } } return false } // open a URL for reading func (o *baseObject) open(ctx context.Context, url string, options ...fs.OpenOption) (in io.ReadCloser, err error) { _, res, err := o.httpResponse(ctx, url, "GET", options) if err != nil { if isGoogleError(err, "cannotDownloadAbusiveFile") { if o.fs.opt.AcknowledgeAbuse { // Retry acknowledging abuse if strings.ContainsRune(url, '?') { url += "&" } else { url += "?" } url += "acknowledgeAbuse=true" _, res, err = o.httpResponse(ctx, url, "GET", options) } else { err = errors.Wrap(err, "Use the --drive-acknowledge-abuse flag to download this file") } } if err != nil { return nil, errors.Wrap(err, "open file failed") } } return res.Body, nil } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { if o.mimeType == shortcutMimeTypeDangling { return nil, errors.New("can't read dangling shortcut") } if o.v2Download { var v2File *drive_v2.File err = o.fs.pacer.Call(func() (bool, error) { v2File, err = o.fs.v2Svc.Files.Get(actualID(o.id)). Fields("downloadUrl"). SupportsAllDrives(true). Do() return o.fs.shouldRetry(err) }) if err == nil { fs.Debugf(o, "Using v2 download: %v", v2File.DownloadUrl) o.url = v2File.DownloadUrl o.v2Download = false } } return o.baseObject.open(ctx, o.url, options...) } func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { // Update the size with what we are reading as it can change from // the HEAD in the listing to this GET. This stops rclone marking // the transfer as corrupted. var offset, end int64 = 0, -1 var newOptions = options[:0] for _, o := range options { // Note that Range requests don't work on Google docs: // https://developers.google.com/drive/v3/web/manage-downloads#partial_download // So do a subset of them manually switch x := o.(type) { case *fs.RangeOption: offset, end = x.Start, x.End case *fs.SeekOption: offset, end = x.Offset, -1 default: newOptions = append(newOptions, o) } } options = newOptions if offset != 0 { return nil, errors.New("partial downloads are not supported while exporting Google Documents") } in, err = o.baseObject.open(ctx, o.url, options...)
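// added note: on success the stream is wrapped in an openDocumentFile
// just below, so the true exported size gets recorded at Close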
if in != nil { in = &openDocumentFile{o: o, in: in} } if end >= 0 { in = readers.NewLimitedReadCloser(in, end-offset+1) } return } func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { var offset, limit int64 = 0, -1 var data = o.content for _, option := range options { switch x := option.(type) { case *fs.SeekOption: offset = x.Offset case *fs.RangeOption: offset, limit = x.Decode(int64(len(data))) default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } if l := int64(len(data)); offset > l { offset = l } data = data[offset:] if limit != -1 && limit < int64(len(data)) { data = data[:limit] } return ioutil.NopCloser(bytes.NewReader(data)), nil } func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadMimeType string, in io.Reader, src fs.ObjectInfo) (info *drive.File, err error) { // Make the API request to upload metadata and file data. size := src.Size() if size >= 0 && size < int64(o.fs.opt.UploadCutoff) { // Don't retry, return a retry error instead err = o.fs.pacer.CallNoRetry(func() (bool, error) { info, err = o.fs.svc.Files.Update(actualID(o.id), updateInfo). Media(in, googleapi.ContentType(uploadMimeType)). Fields(partialFields). SupportsAllDrives(true). KeepRevisionForever(o.fs.opt.KeepRevisionForever). Do() return o.fs.shouldRetry(err) }) return } // Upload the file in chunks return o.fs.Upload(ctx, in, size, uploadMimeType, o.id, o.remote, updateInfo) } // Update the already existing object // // Copy the reader into the object updating modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { // If o is a shortcut if isShortcutID(o.id) { // Delete it first err := o.fs.delete(ctx, shortcutID(o.id), o.fs.opt.UseTrash) if err != nil { return err } // Then put the file as a new file newObj, err := o.fs.PutUnchecked(ctx, in, src, options...) 
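// added note: PutUnchecked is deliberate here, the shortcut has just
// been deleted above, so a duplicate check would be wasted work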
if err != nil { return err } // Update the object if newO, ok := newObj.(*Object); ok { *o = *newO } else { fs.Debugf(newObj, "Failed to update object %T from new object %T", o, newObj) } return nil } srcMimeType := fs.MimeType(ctx, src) updateInfo := &drive.File{ MimeType: srcMimeType, ModifiedTime: src.ModTime(ctx).Format(timeFormatOut), } info, err := o.baseObject.update(ctx, updateInfo, srcMimeType, in, src) if err != nil { return err } newO, err := o.fs.newObjectWithInfo(src.Remote(), info) if err != nil { return err } switch newO := newO.(type) { case *Object: *o = *newO default: return errors.New("object type changed by update") } return nil } func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { srcMimeType := fs.MimeType(ctx, src) importMimeType := "" updateInfo := &drive.File{ MimeType: srcMimeType, ModifiedTime: src.ModTime(ctx).Format(timeFormatOut), } if o.fs.importMimeTypes == nil || o.fs.opt.SkipGdocs { return errors.Errorf("can't update google document type without --drive-import-formats") } importMimeType = o.fs.findImportFormat(updateInfo.MimeType) if importMimeType == "" { return errors.Errorf("no import format found for %q", srcMimeType) } if importMimeType != o.documentMimeType { return errors.Errorf("can't change google document type (o: %q, src: %q, import: %q)", o.documentMimeType, srcMimeType, importMimeType) } updateInfo.MimeType = importMimeType info, err := o.baseObject.update(ctx, updateInfo, srcMimeType, in, src) if err != nil { return err } remote := src.Remote() remote = remote[:len(remote)-o.extLen] newO, err := o.fs.newObjectWithInfo(remote, info) if err != nil { return err } switch newO := newO.(type) { case *documentObject: *o = *newO default: return errors.New("object type changed by update") } return nil } func (o *linkObject) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { return errors.New("cannot update link files") } // Remove an object func (o *baseObject) Remove(ctx context.Context) error { if o.parents > 1 { return errors.New("can't delete safely - has multiple parents") } return o.fs.delete(ctx, shortcutID(o.id), o.fs.opt.UseTrash) } // MimeType of an Object if known, "" otherwise func (o *baseObject) MimeType(ctx context.Context) string { return o.mimeType } // ID returns the ID of the Object if known, or "" if not func (o *baseObject) ID() string { return o.id } func (o *documentObject) ext() string { return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:] } func (o *linkObject) ext() string { return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:] } // templates for document link files const ( urlTemplate = `[InternetShortcut]{{"\r"}} URL={{ .URL }}{{"\r"}} ` weblocTemplate = `<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>URL</key> <string>{{ .URL }}</string> </dict> </plist> ` desktopTemplate = `[Desktop Entry] Encoding=UTF-8 Name={{ .Title }} URL={{ .URL }} Icon=text-html Type=Link ` htmlTemplate = `<html> <head> <title>{{ .Title }}</title> <meta http-equiv="refresh" content="0; url={{ .URL }}" /> </head> <body> Loading <em>{{ .Title }}</em> </body> </html> ` ) // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.PutStreamer = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.Commander = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.ChangeNotifier = (*Fs)(nil) _ fs.PutUncheckeder = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.ListRer = (*Fs)(nil) _ fs.MergeDirser = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.MimeTyper = (*Object)(nil) _ fs.IDer = (*Object)(nil) _ fs.Object =
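// added note: this var block is a set of compile-time assertions, the
// build fails if any listed type stops satisfying its interface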
(*documentObject)(nil) _ fs.MimeTyper = (*documentObject)(nil) _ fs.IDer = (*documentObject)(nil) _ fs.Object = (*linkObject)(nil) _ fs.MimeTyper = (*linkObject)(nil) _ fs.IDer = (*linkObject)(nil) ) rclone-1.53.3/backend/drive/drive_internal_test.go000066400000000000000000000324321375552240400221730ustar00rootroot00000000000000package drive import ( "bytes" "context" "encoding/json" "io" "io/ioutil" "mime" "path/filepath" "strings" "testing" "time" "github.com/pkg/errors" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/fstests" "github.com/rclone/rclone/lib/random" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "google.golang.org/api/drive/v3" ) func TestDriveScopes(t *testing.T) { for _, test := range []struct { in string want []string wantFlag bool }{ {"", []string{ "https://www.googleapis.com/auth/drive", }, false}, {" drive.file , drive.readonly", []string{ "https://www.googleapis.com/auth/drive.file", "https://www.googleapis.com/auth/drive.readonly", }, false}, {" drive.file , drive.appfolder", []string{ "https://www.googleapis.com/auth/drive.file", "https://www.googleapis.com/auth/drive.appfolder", }, true}, } { got := driveScopes(test.in) assert.Equal(t, test.want, got, test.in) gotFlag := driveScopesContainsAppFolder(got) assert.Equal(t, test.wantFlag, gotFlag, test.in) } } /* var additionalMimeTypes = map[string]string{ "application/vnd.ms-excel.sheet.macroenabled.12": ".xlsm", "application/vnd.ms-excel.template.macroenabled.12": ".xltm", "application/vnd.ms-powerpoint.presentation.macroenabled.12": ".pptm", "application/vnd.ms-powerpoint.slideshow.macroenabled.12": ".ppsm", "application/vnd.ms-powerpoint.template.macroenabled.12": ".potm", "application/vnd.ms-powerpoint": ".ppt", "application/vnd.ms-word.document.macroenabled.12": ".docm", "application/vnd.ms-word.template.macroenabled.12": ".dotm", "application/vnd.openxmlformats-officedocument.presentationml.template": ".potx", "application/vnd.openxmlformats-officedocument.spreadsheetml.template": ".xltx", "application/vnd.openxmlformats-officedocument.wordprocessingml.template": ".dotx", "application/vnd.sun.xml.writer": ".sxw", "text/richtext": ".rtf", } */ // Load the example export formats into exportFormats for testing func TestInternalLoadExampleFormats(t *testing.T) { fetchFormatsOnce.Do(func() {}) buf, err := ioutil.ReadFile(filepath.FromSlash("test/about.json")) var about struct { ExportFormats map[string][]string `json:"exportFormats,omitempty"` ImportFormats map[string][]string `json:"importFormats,omitempty"` } require.NoError(t, err) require.NoError(t, json.Unmarshal(buf, &about)) _exportFormats = fixMimeTypeMap(about.ExportFormats) _importFormats = fixMimeTypeMap(about.ImportFormats) } func TestInternalParseExtensions(t *testing.T) { for _, test := range []struct { in string want []string wantErr error }{ {"doc", []string{".doc"}, nil}, {" docx ,XLSX, pptx,svg", []string{".docx", ".xlsx", ".pptx", ".svg"}, nil}, {"docx,svg,Docx", []string{".docx", ".svg"}, nil}, {"docx,potato,docx", []string{".docx"}, errors.New(`couldn't find MIME type for extension ".potato"`)}, } { extensions, _, gotErr := parseExtensions(test.in) if test.wantErr == nil { assert.NoError(t, gotErr) } else { assert.EqualError(t, gotErr, test.wantErr.Error()) } assert.Equal(t, test.want, extensions) } // Test it is appending extensions, _, gotErr := 
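// added note: passing a second list makes parseExtensions append to the
// first, skipping any extensions that are already present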
parseExtensions("docx,svg", "docx,svg,xlsx") assert.NoError(t, gotErr) assert.Equal(t, []string{".docx", ".svg", ".xlsx"}, extensions) } func TestInternalFindExportFormat(t *testing.T) { item := &drive.File{ Name: "file", MimeType: "application/vnd.google-apps.document", } for _, test := range []struct { extensions []string wantExtension string wantMimeType string }{ {[]string{}, "", ""}, {[]string{".pdf"}, ".pdf", "application/pdf"}, {[]string{".pdf", ".rtf", ".xls"}, ".pdf", "application/pdf"}, {[]string{".xls", ".rtf", ".pdf"}, ".rtf", "application/rtf"}, {[]string{".xls", ".csv", ".svg"}, "", ""}, } { f := new(Fs) f.exportExtensions = test.extensions gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(item) assert.Equal(t, test.wantExtension, gotExtension) if test.wantExtension != "" { assert.Equal(t, item.Name+gotExtension, gotFilename) } else { assert.Equal(t, "", gotFilename) } assert.Equal(t, test.wantMimeType, gotMimeType) assert.Equal(t, true, gotIsDocument) } } func TestMimeTypesToExtension(t *testing.T) { for mimeType, extension := range _mimeTypeToExtension { extensions, err := mime.ExtensionsByType(mimeType) assert.NoError(t, err) assert.Contains(t, extensions, extension) } } func TestExtensionToMimeType(t *testing.T) { for mimeType, extension := range _mimeTypeToExtension { gotMimeType := mime.TypeByExtension(extension) mediatype, _, err := mime.ParseMediaType(gotMimeType) assert.NoError(t, err) assert.Equal(t, mimeType, mediatype) } } func TestExtensionsForExportFormats(t *testing.T) { if _exportFormats == nil { t.Error("exportFormats == nil") } for fromMT, toMTs := range _exportFormats { for _, toMT := range toMTs { if !isInternalMimeType(toMT) { extensions, err := mime.ExtensionsByType(toMT) assert.NoError(t, err, "invalid MIME type %q", toMT) assert.NotEmpty(t, extensions, "No extension found for %q (from: %q)", fromMT, toMT) } } } } func TestExtensionsForImportFormats(t *testing.T) { t.Skip() if _importFormats == nil { t.Error("_importFormats == nil") } for fromMT := range _importFormats { if !isInternalMimeType(fromMT) { extensions, err := mime.ExtensionsByType(fromMT) assert.NoError(t, err, "invalid MIME type %q", fromMT) assert.NotEmpty(t, extensions, "No extension found for %q", fromMT) } } } func (f *Fs) InternalTestDocumentImport(t *testing.T) { oldAllow := f.opt.AllowImportNameChange f.opt.AllowImportNameChange = true defer func() { f.opt.AllowImportNameChange = oldAllow }() testFilesPath, err := filepath.Abs(filepath.FromSlash("test/files")) require.NoError(t, err) testFilesFs, err := fs.NewFs(testFilesPath) require.NoError(t, err) _, f.importMimeTypes, err = parseExtensions("odt,ods,doc") require.NoError(t, err) err = operations.CopyFile(context.Background(), f, testFilesFs, "example2.doc", "example2.doc") require.NoError(t, err) } func (f *Fs) InternalTestDocumentUpdate(t *testing.T) { testFilesPath, err := filepath.Abs(filepath.FromSlash("test/files")) require.NoError(t, err) testFilesFs, err := fs.NewFs(testFilesPath) require.NoError(t, err) _, f.importMimeTypes, err = parseExtensions("odt,ods,doc") require.NoError(t, err) err = operations.CopyFile(context.Background(), f, testFilesFs, "example2.xlsx", "example1.ods") require.NoError(t, err) } func (f *Fs) InternalTestDocumentExport(t *testing.T) { var buf bytes.Buffer var err error f.exportExtensions, _, err = parseExtensions("txt") require.NoError(t, err) obj, err := f.NewObject(context.Background(), "example2.txt") require.NoError(t, err) rc, err := obj.Open(context.Background()) 
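// added note: with exportExtensions set to "txt" above, Open downloads
// the text/plain export of the Google Doc rather than the native file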
require.NoError(t, err) defer func() { require.NoError(t, rc.Close()) }() _, err = io.Copy(&buf, rc) require.NoError(t, err) text := buf.String() for _, excerpt := range []string{ "Lorem ipsum dolor sit amet, consectetur", "porta at ultrices in, consectetur at augue.", } { require.Contains(t, text, excerpt) } } func (f *Fs) InternalTestDocumentLink(t *testing.T) { var buf bytes.Buffer var err error f.exportExtensions, _, err = parseExtensions("link.html") require.NoError(t, err) obj, err := f.NewObject(context.Background(), "example2.link.html") require.NoError(t, err) rc, err := obj.Open(context.Background()) require.NoError(t, err) defer func() { require.NoError(t, rc.Close()) }() _, err = io.Copy(&buf, rc) require.NoError(t, err) text := buf.String() require.True(t, strings.HasPrefix(text, "<html>")) require.True(t, strings.HasSuffix(text, "</html>\n")) for _, excerpt := range []string{ `<meta http-equiv="refresh"`, `Loading <em`, } { require.Contains(t, text, excerpt) } } // TestIntegration/FsMkdir/FsPutFiles/Internal/Shortcuts func (f *Fs) InternalTestShortcuts(t *testing.T) { const ( // from fstest/fstests/fstests.go existingDir = "hello? sausage" existingFile = `hello? sausage/êé/Hello, 世界/ " ' @ < > & ? + ≠/z.txt` existingSubDir = "êé" ) ctx := context.Background() srcObj, err := f.NewObject(ctx, existingFile) require.NoError(t, err) srcHash, err := srcObj.Hash(ctx, hash.MD5) require.NoError(t, err) assert.NotEqual(t, "", srcHash) t.Run("Errors", func(t *testing.T) { _, err := f.makeShortcut(ctx, "", f, "") assert.Error(t, err) assert.Contains(t, err.Error(), "can't be root") _, err = f.makeShortcut(ctx, "notfound", f, "dst") assert.Error(t, err) assert.Contains(t, err.Error(), "can't find source") _, err = f.makeShortcut(ctx, existingFile, f, existingFile) assert.Error(t, err) assert.Contains(t, err.Error(), "not overwriting") assert.Contains(t, err.Error(), "existing file") _, err = f.makeShortcut(ctx, existingFile, f, existingDir) assert.Error(t, err) assert.Contains(t, err.Error(), "not overwriting") assert.Contains(t, err.Error(), "existing directory") }) t.Run("File", func(t *testing.T) { dstObj, err := f.makeShortcut(ctx, existingFile, f, "shortcut.txt") require.NoError(t, err) require.NotNil(t, dstObj) assert.Equal(t, "shortcut.txt", dstObj.Remote()) dstHash, err := dstObj.Hash(ctx, hash.MD5) require.NoError(t, err) assert.Equal(t, srcHash, dstHash) require.NoError(t, dstObj.Remove(ctx)) }) t.Run("Dir", func(t *testing.T) { dstObj, err := f.makeShortcut(ctx, existingDir, f, "shortcutdir") require.NoError(t, err) require.Nil(t, dstObj) entries, err := f.List(ctx, "shortcutdir") require.NoError(t, err) require.Equal(t, 1, len(entries)) require.Equal(t, "shortcutdir/"+existingSubDir, entries[0].Remote()) require.NoError(t, f.Rmdir(ctx, "shortcutdir")) }) t.Run("Command", func(t *testing.T) { _, err := f.Command(ctx, "shortcut", []string{"one"}, nil) require.Error(t, err) require.Contains(t, err.Error(), "need exactly 2 arguments") _, err = f.Command(ctx, "shortcut", []string{"one", "two"}, map[string]string{ "target": "doesnotexistremote:", }) require.Error(t, err) require.Contains(t, err.Error(), "couldn't find target") _, err = f.Command(ctx, "shortcut", []string{"one", "two"}, map[string]string{ "target": ".", }) require.Error(t, err) require.Contains(t, err.Error(), "target is not a drive backend") dstObjI, err := f.Command(ctx, "shortcut", []string{existingFile, "shortcut2.txt"}, map[string]string{ "target": fs.ConfigString(f), }) require.NoError(t, err) dstObj := dstObjI.(*Object) assert.Equal(t, "shortcut2.txt", dstObj.Remote()) dstHash, err := dstObj.Hash(ctx, hash.MD5) require.NoError(t, err) assert.Equal(t, srcHash, dstHash) require.NoError(t, dstObj.Remove(ctx)) dstObjI, err = f.Command(ctx, "shortcut", []string{existingFile, "shortcut3.txt"}, nil) require.NoError(t, err) dstObj =
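// added note: Command returns interface{}, hence the type assertions
// on its results in this subtest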
dstObjI.(*Object) assert.Equal(t, "shortcut3.txt", dstObj.Remote()) dstHash, err = dstObj.Hash(ctx, hash.MD5) require.NoError(t, err) assert.Equal(t, srcHash, dstHash) require.NoError(t, dstObj.Remove(ctx)) }) } // TestIntegration/FsMkdir/FsPutFiles/Internal/UnTrash func (f *Fs) InternalTestUnTrash(t *testing.T) { ctx := context.Background() // Make some objects, one in a subdir contents := random.String(100) file1 := fstest.NewItem("trashDir/toBeTrashed", contents, time.Now()) _, obj1 := fstests.PutTestContents(ctx, t, f, &file1, contents, false) file2 := fstest.NewItem("trashDir/subdir/toBeTrashed", contents, time.Now()) _, _ = fstests.PutTestContents(ctx, t, f, &file2, contents, false) // Check objects checkObjects := func() { fstest.CheckListingWithRoot(t, f, "trashDir", []fstest.Item{ file1, file2, }, []string{ "trashDir/subdir", }, f.Precision()) } checkObjects() // Make sure we are using the trash require.Equal(t, true, f.opt.UseTrash) // Remove the object and the dir require.NoError(t, obj1.Remove(ctx)) require.NoError(t, f.Purge(ctx, "trashDir/subdir")) // Check objects gone fstest.CheckListingWithRoot(t, f, "trashDir", []fstest.Item{}, []string{}, f.Precision()) // Restore the object and directory r, err := f.unTrashDir(ctx, "trashDir", true) require.NoError(t, err) assert.Equal(t, unTrashResult{Errors: 0, Untrashed: 2}, r) // Check objects restored checkObjects() // Remove the test dir require.NoError(t, f.Purge(ctx, "trashDir")) } func (f *Fs) InternalTest(t *testing.T) { // These tests all depend on each other so run them as nested tests t.Run("DocumentImport", func(t *testing.T) { f.InternalTestDocumentImport(t) t.Run("DocumentUpdate", func(t *testing.T) { f.InternalTestDocumentUpdate(t) t.Run("DocumentExport", func(t *testing.T) { f.InternalTestDocumentExport(t) t.Run("DocumentLink", func(t *testing.T) { f.InternalTestDocumentLink(t) }) }) }) }) t.Run("Shortcuts", f.InternalTestShortcuts) t.Run("UnTrash", f.InternalTestUnTrash) } var _ fstests.InternalTester = (*Fs)(nil) rclone-1.53.3/backend/drive/drive_test.go000066400000000000000000000014131375552240400202720ustar00rootroot00000000000000// Test Drive filesystem interface package drive import ( "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestDrive:", NilObject: (*Object)(nil), ChunkedUpload: fstests.ChunkedUploadConfig{ MinChunkSize: minChunkSize, CeilChunkSize: fstests.NextPowerOfTwo, }, }) } func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadChunkSize(cs) } func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadCutoff(cs) } var ( _ fstests.SetUploadChunkSizer = (*Fs)(nil) _ fstests.SetUploadCutoffer = (*Fs)(nil) ) rclone-1.53.3/backend/drive/test/000077500000000000000000000000001375552240400165535ustar00rootroot00000000000000rclone-1.53.3/backend/drive/test/about.json000066400000000000000000000121651375552240400205650ustar00rootroot00000000000000{ "importFormats": { "text/tab-separated-values": [ "application/vnd.google-apps.spreadsheet" ], "application/x-vnd.oasis.opendocument.presentation": [ "application/vnd.google-apps.presentation" ], "image/jpeg": [ "application/vnd.google-apps.document" ], "image/bmp": [ "application/vnd.google-apps.document" ], "image/gif": [ "application/vnd.google-apps.document" ], "application/vnd.ms-excel.sheet.macroenabled.12": [ 
"application/vnd.google-apps.spreadsheet" ], "application/vnd.openxmlformats-officedocument.wordprocessingml.template": [ "application/vnd.google-apps.document" ], "application/vnd.ms-powerpoint.presentation.macroenabled.12": [ "application/vnd.google-apps.presentation" ], "application/vnd.ms-word.template.macroenabled.12": [ "application/vnd.google-apps.document" ], "application/vnd.openxmlformats-officedocument.wordprocessingml.document": [ "application/vnd.google-apps.document" ], "image/pjpeg": [ "application/vnd.google-apps.document" ], "application/vnd.google-apps.script+text/plain": [ "application/vnd.google-apps.script" ], "application/vnd.ms-excel": [ "application/vnd.google-apps.spreadsheet" ], "application/vnd.sun.xml.writer": [ "application/vnd.google-apps.document" ], "application/vnd.ms-word.document.macroenabled.12": [ "application/vnd.google-apps.document" ], "application/vnd.ms-powerpoint.slideshow.macroenabled.12": [ "application/vnd.google-apps.presentation" ], "text/rtf": [ "application/vnd.google-apps.document" ], "text/plain": [ "application/vnd.google-apps.document" ], "application/vnd.oasis.opendocument.spreadsheet": [ "application/vnd.google-apps.spreadsheet" ], "application/x-vnd.oasis.opendocument.spreadsheet": [ "application/vnd.google-apps.spreadsheet" ], "image/png": [ "application/vnd.google-apps.document" ], "application/x-vnd.oasis.opendocument.text": [ "application/vnd.google-apps.document" ], "application/msword": [ "application/vnd.google-apps.document" ], "application/pdf": [ "application/vnd.google-apps.document" ], "application/json": [ "application/vnd.google-apps.script" ], "application/x-msmetafile": [ "application/vnd.google-apps.drawing" ], "application/vnd.openxmlformats-officedocument.spreadsheetml.template": [ "application/vnd.google-apps.spreadsheet" ], "application/vnd.ms-powerpoint": [ "application/vnd.google-apps.presentation" ], "application/vnd.ms-excel.template.macroenabled.12": [ "application/vnd.google-apps.spreadsheet" ], "image/x-bmp": [ "application/vnd.google-apps.document" ], "application/rtf": [ "application/vnd.google-apps.document" ], "application/vnd.openxmlformats-officedocument.presentationml.template": [ "application/vnd.google-apps.presentation" ], "image/x-png": [ "application/vnd.google-apps.document" ], "text/html": [ "application/vnd.google-apps.document" ], "application/vnd.oasis.opendocument.text": [ "application/vnd.google-apps.document" ], "application/vnd.openxmlformats-officedocument.presentationml.presentation": [ "application/vnd.google-apps.presentation" ], "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": [ "application/vnd.google-apps.spreadsheet" ], "application/vnd.google-apps.script+json": [ "application/vnd.google-apps.script" ], "application/vnd.openxmlformats-officedocument.presentationml.slideshow": [ "application/vnd.google-apps.presentation" ], "application/vnd.ms-powerpoint.template.macroenabled.12": [ "application/vnd.google-apps.presentation" ], "text/csv": [ "application/vnd.google-apps.spreadsheet" ], "application/vnd.oasis.opendocument.presentation": [ "application/vnd.google-apps.presentation" ], "image/jpg": [ "application/vnd.google-apps.document" ], "text/richtext": [ "application/vnd.google-apps.document" ] }, "exportFormats": { "application/vnd.google-apps.document": [ "application/rtf", "application/vnd.oasis.opendocument.text", "text/html", "application/pdf", "application/epub+zip", "application/zip", 
"application/vnd.openxmlformats-officedocument.wordprocessingml.document", "text/plain" ], "application/vnd.google-apps.spreadsheet": [ "application/x-vnd.oasis.opendocument.spreadsheet", "text/tab-separated-values", "application/pdf", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", "text/csv", "application/zip", "application/vnd.oasis.opendocument.spreadsheet" ], "application/vnd.google-apps.jam": [ "application/pdf" ], "application/vnd.google-apps.script": [ "application/vnd.google-apps.script+json" ], "application/vnd.google-apps.presentation": [ "application/vnd.oasis.opendocument.presentation", "application/pdf", "application/vnd.openxmlformats-officedocument.presentationml.presentation", "text/plain" ], "application/vnd.google-apps.form": [ "application/zip" ], "application/vnd.google-apps.drawing": [ "image/svg+xml", "image/png", "application/pdf", "image/jpeg" ] } } rclone-1.53.3/backend/drive/test/files/000077500000000000000000000000001375552240400176555ustar00rootroot00000000000000rclone-1.53.3/backend/drive/test/files/example1.ods000066400000000000000000000320201375552240400220750ustar00rootroot00000000000000PKqYLl9..mimetypeapplication/vnd.oasis.opendocument.spreadsheetPKqYL>ɢiThumbnails/thumbnail.pngPNG  IHDR'ePLTE.(&%,/3-270*)67:,@):N7JA:T@.VF9HFHHLUKQZSLHVPO\RLSQTSU[TY]ZTQ[VY]XU[Z\MWfN_pSV`SZdT]kZ]cX]iR_r[aiTbrTfyVi|Zfs\htYhyeVJd\Ws^Ja_`kbZxgXcccbeifidfhljealhelllemvep~npsks}rkermkzmc~mhwqlssssvyuxz{vr|xs{{{]n^p`obreugzkunxm{k~v}q{wz}n_q^odug{s}jymrz~r{t{z}ũƧѵĭԹʱԱ۵ڹѽԼ۸űĵĺɶ̹ɺƽѽ¸ȸκ6kIDATx}TSW^[aP tBglO `0@( $qն-BЊ(uZB!b?x4dʇcCgJp=;qq.ٛ_w^ܷdlbX_ W*NP?QK|!l#=^x?sCv"x֍o a*qu,q)8({s  N?PVV:;Zfwїh"&G{ma9Okwbg~INq"A,M6>k^ѩY&mُ#SY,i8+c`{7v+8v8}]$>DaTO.sK~)Մ%\Soƀ5bC\w`y ϟexm3<3sO9^Gu4qz@5Z< 3^KENxN *;DKOYP_֕~? NB1SeW'k!Zt=Ftiϒ |;kw /JG+yÀ9LaM^o*;Frww2޻dOmnU^ЧjM~|z,^CYu kPuby@*}8/%?6腈y&b|y1ZC\r5Xt!9qIAiRR)+x\j 'q/ :يf/swBڙK̟Oy.6`П(eJ1hy$*SՐJlEkG= #\EߣRQwOׯjv 趂'+u&Ͷ(|!7NY^;_?!QNg? Ymk; y0igLфf*wws_z}T3%G"X~CZk]9V;g}\F9X=QIAO,x8}WLe8ֲG&"\#o{?̺,YQ@x19H?!*p!KK;>il}hʽѕeR zHE fJbC#Ւ9\9{'{]fR;HF0?oqHd >~<]GuP$5e5b٧1IR<_ڼ^n:p}:kn`F)-[z'wd/d=)xPI^脊CŜ!]!957kMk2hOIHF}ǠQׁA21]G7I=G8zj;z֍^})zZl.?aޚC4rGjR\黏_"k{vcуۑ9">Xp`ݶhr7R1C8{~ 2ѕT1z9fS:VD||q9F߇aE´h-c! =_rX~EOeYbɎ&uMt'q&ԘT~Ke*9o4;zN^tK֫31.shra0f4$H5N,͕ [~=-tMƬ8}*ӆ Dk+yD,Pb9 ~-[ħrI;>fW[&z6P`[+DQi#s_=?krg|?Nkk`N8=uJh*i?9z~3? ]/[ &1 dW +C8Ώ2sDhn'Ez8R0KJJcS_zPoƞLNz' "@|S@/,ܼSAg1)%OI#N11uF /z &9[B7˲#*|ǣ:E{TΝ<>KLg6 쨶jU=< kܭ jdo`G1Z=k9)Bnrk=Npwp?zFd&t6xl 4ify3U=ӓuzyBW?H"K?qowLۃaNY{#^u pmߵ68IҏKsv:ܧ gE-jVjuXgIENDB`PKqYL settings.xmlYs8~;GRxB#\$M h"k=6q:\g oէow[6 $E>z']Ez_ZF]rI]=tCjKPJO-K;Z6I͉Vl}8NO򧡵V*;(NN⢓fS]Kj*}h $Y'mNo8dt7~?Mk8vmhiB=jVѺy.9V6v\Ye9ȋoa ZAOF] =ߜWo$hS-7)Y%vU<˹) F1z/49':"?s` z(8 Dq*}ŻrWٛzSrE0D;FqA(ByECz᫒g62YYZ9^R7џkF>QG{ݺ1 +55F D[#%By8 N+%VݕZ&K<h ͐#@bFޅJ72@7b$T  ?8wSv#N9ΈTgUR`Җ˘HMBҚ._a n,u!bR'otvZ[9Km'HPoo 1^rs#/5F-os3rV'?Hw^K>.&gCB5A𱤄߇U!)h}޵ֶ쿟lZKV4-]Ul1}p' }e0jQ|ЩP)JJ%m7F߫t\F58T8! ͔=9 "apvE9*=H"{:ъ ŗQҵUd%oPKPKqYL content.xml͝sTWd00|.)(t쨕,!q߿+֑l ї@=zt3se\,qO{f'A^|{ߜ|qfoA/c(?Yȧ2[L/‹M>-ifԴ]=֪U^G.kzg Ve-4=|`_3KqQp;/t꺫pldV{~S. 
[remainder of binary OpenDocument data omitted] rclone-1.53.3/backend/drive/test/files/example2.doc000066400000000000000000000360001375552240400220600ustar00rootroot00000000000000[binary MS Word header omitted] Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus auctor volutpat arcu. Proin facilisis efficitur lacus id viverra. Praesent tempor egestas lectus, a consequat dolor tincidunt vitae. Donec ut velit eget sapien tincidunt volutpat vitae et urna. Curabitur odio eros, varius a feugiat hendrerit, hendrerit eget sem. Pellentesque consectetur commodo magna quis sagittis. Nunc nec vehicula nisi. Pellentesque ut dui mollis, ullamcorper magna in, auctor arcu. Cras bibendum malesuada eleifend. Suspendisse ac magna eu felis ullamcorper efficitur. Vestibulum porta mauris id purus blandit, vitae porttitor tellus sagittis. Etiam nec sapien quam.
Curabitur euismod egestas odio, ac ultricies arcu fermentum ut. Donec vel nisi luctus, ullamcorper enim non, imperdiet velit. Nulla vestibulum commodo arcu eget rhoncus. Ut id commodo neque, quis mattis erat. Duis et tincidunt odio. In fermentum, erat tristique varius pretium, sapien odio elementum nulla, eu porta libero nunc at erat. Maecenas egestas porta erat eu eleifend. Aliquam suscipit hendrerit purus a aliquet. Aliquam libero mauris, sollicitudin eu semper quis, finibus imperdiet sapien. Pellentesque hendrerit quis justo et ullamcorper. Proin elementum arcu a hendrerit ultrices. Praesent sit amet iaculis dui. Nam posuere, quam eget molestie ornare, diam enim laoreet magna, at tincidunt velit felis vel erat. Vestibulum eu ante congue, maximus dui at, volutpat ligula. Suspendisse a mi eros. In diam leo, rhoncus in eleifend vitae, condimentum vitae quam. Sed ultricies facilisis magna ac tincidunt. In ultrices, mi sit amet placerat tempor, risus turpis consequat leo, non vestibulum lorem nunc quis sem. Aenean mattis, metus vel placerat mattis, eros velit elementum nisl, consequat congue eros tortor sit amet justo. Suspendisse potenti. Proin vehicula lorem bibendum neque pretium suscipit. In nec lorem tortor. Aenean scelerisque facilisis molestie. Donec pellentesque felis magna, a aliquam mi finibus ut. Aenean eros diam, viverra id dignissim et, pulvinar a eros. Mauris risus tortor, condimentum efficitur velit vitae, fringilla auctor nibh. Cras pretium orci sed auctor volutpat. Cras non diam mauris. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque ornare sodales nibh, vitae placerat nibh porta et. Donec dictum fermentum sem quis porta. Suspendisse suscipit sem ut congue dapibus. Nullam sit amet rhoncus turpis, ut feugiat enim. Maecenas hendrerit metus tortor, finibus tincidunt libero ullamcorper eu. Donec sed arcu ullamcorper, bibendum lectus nec, consectetur mi. Aenean eget aliquet nunc. Pellentesque quis condimentum nisl. Donec consequat magna odio. Aenean in massa massa. Phasellus egestas at nunc finibus volutpat. Phasellus urna ante, tincidunt consectetur ornare nec, rhoncus a justo. Vestibulum id quam id diam maximus congue. Sed luctus odio vitae felis interdum scelerisque. Quisque non interdum sapien, in viverra nisi. In neque justo, porta at ultrices in, consectetur at augue.
[binary MS Word trailer omitted] rclone-1.53.3/backend/drive/test/files/example3.odt000066400000000000000000000527541375552240400221140 0ustar00rootroot00000000000000
G$1l,Z.\6Ad@>0E gL逓L9g6AA|tA Qcm;(xzdGDoDxMja}iDf5oѦHSi*)ɝ?C&qժޭGV!U ٷacuٴ[k#7F7" l%9R&f93E:q7ܴA܆[1N ʺp37"8n&&< bsPk8>w0+9Ǟ?bN7 ؾnŜafaĄ }Q.WVE=D1쯄&cyL_ e8Ān -Yd B/}{𱉉B1B?Xù2"3 0b4@l.<#@vaB FcX0Jl^XW<_ .&!p!ap<-9JBkd*dٓ6f(P̍ˮvRet{|7u05&f@ŴjAl#",ֿ` ^3AgQڳ`t'Fk0({0^&|D)h3ϏY,m M`Yvl)H6OV7<-ѪQg¬XUgp}ڊg{=fVbkf6[\EM̺-{3b:LvQG=fư{w[6˶yԏaf̸pplM}nwEW=CaiVomhP>~3yuRf+oZ A^i=Kҁy/ m_Gxz7xA&޷E7F9C @2!̌3ƃ +noӳVސpwBߘ']g_` cf>e#SEa7"mn8pb,ʐg.qPm% HIsġ+>Svg=C`C9H;ANN"p4ʀ{(p.iR&w,,ZeE{;b3c} Ղ[{p,nay[Y[(Gk3D,rveG+u7Q^|VS3fXxǚ[,8>o:^ qf> Y㷷xFƃnŖ) [΁>7"v2 3iƌc/.;^p8pֱcώ o;w;'p9bsweu[kw8le}e-bN\WggF:a2Yjn rLS e\m}rwWmEGayBv$uiD>0m[רNItc(̹*Q(b'لɟ -~N%EI2C"#ɛЇڷpx脑UP[MvQز]BB%\:Ec V?8t>(*k1<|WIkG4l,x9M3@[/֘5DQJ۷ˇ,>7wm'Fqm&;b-ؽvV>;>r؟y_g 0D︑_`\A bl.0-KΣuĂ#+_n{;Yasb/ʎ1ۦ9<Q]榞C 㤰lôZp%7AturXX̓t Hg͕e>c役_b}23<{Ynkl~bCJǎ[CAY@CUU/qy %rE_ת>𼾢r!i4+ ?񯚛o_&RyȂ++78E):2HK_~6qJ,,r}xds%t`'=aϝt<!(` M#H(O~^׏r6|qb'/ڱ_>}«ZNB{RަȔY)t8m!v\ɏQJ^R(wڈՔcfcXvsV֊ C;i+Än~r+(Aow<֍-ð#} J%k 'Y.,}Y 7V&6]Nvx4H'J*;Xu+*LYIĠ ~lӵ Jyo$ >DfghE'C+jtY7-g؂#2>9toWڷ[08ʪG5Q ,I-8GbD+"Nɉ!hU~h0D 5`$j6M_@{e"qm_8 $S1(%%gi)`!F`ĵ}9۾hS(ҋ~#\^f#{m!K O<1$1KC?oQ`* F֔w4)c"%kj&tDC;ṉArIA=aJ?ܜTD|u^cY'4GCl#GE(A aHe}9B^n#?D7NXg>ḳCFd4ۆV1šSSJ!OuqP̡\blj48^D'}Y8p=آ; l;7aYNpPKr8 M PKnM styles.xmlZ[۶~?P&Kwk7>8$}.hP@Rwx(Y*ٴ(P'@q>gCү?tq\V=2JYFS뇟7zx)q%C!tW[ Xlee5\LV6%>ɹ揬~ s6I0gaI2DI)(Qt<}l6HK[W7jTFb5e9l%k&UM|65HUl8'I gw_u-,&1zBϻ/rX ۣ*夞=M3ZSUUECH=xz"r4% B|Pn\MzR}q͸l '(`gՆW!K:^J{eP0gAy}6i.穫8R6l`ImYS$`S9Q"DumOe}JWxz `"ҨRdB}d= ?qMsq;d`wQ Tym2[ۼ0ȧ'{T$X@ss' = JqG"D]-d)^n`F5 !j"SH}ĉ0y74 LV,$._bQ:KǏ׶Jٙ5NiQ]4pXsK}İcS%7W.X1Ii+yXm5g# Y-uU,T߶(PƎ!X+ OOA\<JCp(jBuOL ]=^ԤK(l(s^@GtZ)Hdz7GTxS#4=HCH"fh] 76c1Z}"UVbg36pV ZӶpe܅ɍ0J-z 5c\O OVmU)ú!GrpҀ%lG+=1C!P5jL'O}݊Npk刪4z-Lx0h+-pV/AlP:WxgOD7ns]JgHQjhCuSRvn3=nT6FJ@;o3 Rntq]%˰`͹fvksT::ǻt̺1պB)\<W jC d?b\,^Esg~1!-E ..̋Iu^ZT*V}hmcy̬r\8V{J]IN'+%2on""EGt%l%?死Z~V#>69T䝁Syn)˚2 mi4svj^8,|Kh2Ɛ LTI|B%ӿJR9#|{WHw0>}EOb8̔kKR~.0#J;NiS=t!g/JG:E.1L,n>3=O_ѣwգݣXO5Ҝ/ L-С`ZmAՔz@f[U]D"&atLb{hbp}nv Vn̡dfGIPztҩi$du#OFH!}Bo yu#OÍ>!7Bln H_T1UNj=I=Xb x@7ho~sfPJ!| qMH t#cLx3l'1vsieJh{xVW̠N0Nj{XmtTe'q-9L>k,8Z.V @ '7&v o#&n"͊Fn zr;K舗Ǜí8cm*<RmuKfX.&:C p(6/䱛{C4^a8\=AgrFI{5!Mdx1o.cf[`GS5^*2,Dюf/A^8{a>E?PKK<(,PKnMmeta.xmlˎ0} Dg 3`FꢫZ]ld̐}-e2Yts|x4u*mއ(a 57Bj8|P%.jcΣ}[M TG5kdGz-MYJއgZ 0 Vy ~%8:bVvlFv[s !Lҕ&7dEJ`qX;( 9OD,K QAar$Be ,G!8X"@]`T)NkK_"ލߺ^ˮLnEJjƖdiigg;`[k~I@ Wo9D;KvN`;2nDհhNc֭nHt,,k,Xji-*?{;iQq̴ᬜZ}-ICP[/PK=OPKnM settings.xmlZ[w8~_{JB\8 =6 L}IH6iJh_fˏω8x9ʫc.WoycKx,i5Cz]v~)FnKn)kOʳ0&m7;TE]?_BE|1NY褙n|eVzBAq $6el?qXX_N YtG7:Gͷ"v;!fUE/nW.p9b z &74L0A57pc(>cB,0a!1\Tt[Uj>8{:.ňEZh &hɄDmFEHPdu!.P.}T%k?h@Wt!R $YlTVv`P&=w2 UGėfPdmfv-;dwY8W͔/%=$g@J4}[Zhϊ Zt>LP΁+LB0fj[[J E ԸJS1֫wL=Q%L>Klisc Z&̹ &Kc0S}+a„'Y[{]+tle=Lʡ]CC_ 3Qvs-䔚Gf z\+M7|Y[xbډ5/ -z,#P^NO} -)IͪBX(``yеAhWدv̢4YC \Iya?ͨX 7z atVU2̌􀒪~P=L҉bϻG/K\i3 P#͍oi nI*'#E]> Tb%]y͝L+ցP:D 𴆡 Òd@)mȞ){*2LZGz &CrkEv=Oelcvmʂ7}ni](D`BÔ0ldsZ߿D|&-6z RڴN== rq({݉"aXǰi݀ iC(6U-CoA@s&G6cN3 P?đ#jGLx.ljl|JGV'k;0L(pCEֳ!f*^BZθ g@uwZ+zޘzj3(xM+NwPץeokM".o%S0}YBjrTq @\h鉼?>;:?xX}0ciSuEZŻ1oh0]|ʹ᳕ $Ɍ _@r -cmb ڄxK1b);@P*';٬R{^bٟvz6,hқ+E*\XJ)|f?B;PK_ 3NS*PKnMConfigurations2/toolbar/PKnMConfigurations2/progressbar/PKnMConfigurations2/floater/PKnMConfigurations2/menubar/PKnMConfigurations2/toolpanel/PKnMConfigurations2/accelerator/PKnMConfigurations2/statusbar/PKnMConfigurations2/popupmenu/PKnMConfigurations2/images/Bitmaps/PKnM manifest.rdf͓n0= 0 { req.Header.Set("X-Upload-Content-Length", fmt.Sprintf("%v", size)) } res, err = f.client.Do(req) if err == nil { defer googleapi.CloseBody(res) err = googleapi.CheckResponse(res) } return f.shouldRetry(err) }) if err != nil { return nil, 
err } loc := res.Header.Get("Location") rx := &resumableUpload{ f: f, remote: remote, URI: loc, Media: in, MediaType: contentType, ContentLength: size, } return rx.Upload(ctx) } // Make an http.Request for the range passed in func (rx *resumableUpload) makeRequest(ctx context.Context, start int64, body io.ReadSeeker, reqSize int64) *http.Request { req, _ := http.NewRequest("POST", rx.URI, body) req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext req.ContentLength = reqSize totalSize := "*" if rx.ContentLength >= 0 { totalSize = strconv.FormatInt(rx.ContentLength, 10) } if reqSize != 0 { req.Header.Set("Content-Range", fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, totalSize)) } else { req.Header.Set("Content-Range", fmt.Sprintf("bytes */%v", totalSize)) } req.Header.Set("Content-Type", rx.MediaType) return req } // Transfer a chunk - caller must call googleapi.CloseBody(res) if err == nil || res != nil func (rx *resumableUpload) transferChunk(ctx context.Context, start int64, chunk io.ReadSeeker, chunkSize int64) (int, error) { _, _ = chunk.Seek(0, io.SeekStart) req := rx.makeRequest(ctx, start, chunk, chunkSize) res, err := rx.f.client.Do(req) if err != nil { return 599, err } defer googleapi.CloseBody(res) if res.StatusCode == statusResumeIncomplete { return res.StatusCode, nil } err = googleapi.CheckResponse(res) if err != nil { return res.StatusCode, err } // When the entire file upload is complete, the server // responds with an HTTP 201 Created along with any metadata // associated with this resource. If this request had been // updating an existing entity rather than creating a new one, // the HTTP response code for a completed upload would have // been 200 OK. // // So parse the response out of the body. We aren't expecting // any other 2xx codes, so we parse it unconditionally on // StatusCode if err = json.NewDecoder(res.Body).Decode(&rx.ret); err != nil { return 598, err } return res.StatusCode, nil } // Upload uploads the chunks from the input // It retries each chunk using the pacer and --low-level-retries func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) { start := int64(0) var StatusCode int var err error buf := make([]byte, int(rx.f.opt.ChunkSize)) for finished := false; !finished; { var reqSize int64 var chunk io.ReadSeeker if rx.ContentLength >= 0 { // If size known use repeatable reader for smoother bwlimit if start >= rx.ContentLength { break } reqSize = rx.ContentLength - start if reqSize >= int64(rx.f.opt.ChunkSize) { reqSize = int64(rx.f.opt.ChunkSize) } chunk = readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize) } else { // If size unknown read into buffer var n int n, err = readers.ReadFill(rx.Media, buf) if err == io.EOF { // Send the last chunk with the correct ContentLength // otherwise Google doesn't know we've finished rx.ContentLength = start + int64(n) finished = true } else if err != nil { return nil, err } reqSize = int64(n) chunk = bytes.NewReader(buf[:reqSize]) } // Transfer the chunk err = rx.f.pacer.Call(func() (bool, error) { fs.Debugf(rx.remote, "Sending chunk %d length %d", start, reqSize) StatusCode, err = rx.transferChunk(ctx, start, chunk, reqSize) again, err := rx.f.shouldRetry(err) if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK { again = false err = nil } return again, err }) if err != nil { return nil, err } start += reqSize } // Resume or retry uploads that fail due to connection interruptions or // any 5xx errors, including: 
// // 500 Internal Server Error // 502 Bad Gateway // 503 Service Unavailable // 504 Gateway Timeout // // Use an exponential backoff strategy if any 5xx server error is // returned when resuming or retrying upload requests. These errors can // occur if a server is getting overloaded. Exponential backoff can help // alleviate these kinds of problems during periods of high volume of // requests or heavy network traffic. Other kinds of requests should not // be handled by exponential backoff but you can still retry a number of // them. When retrying these requests, limit the number of times you // retry them. For example your code could limit to ten retries or less // before reporting an error. // // Handle 404 Not Found errors when doing resumable uploads by starting // the entire upload over from the beginning. if rx.ret == nil { return nil, fserrors.RetryErrorf("Incomplete upload - retry, last error %d", StatusCode) } return rx.ret, nil } rclone-1.53.3/backend/dropbox/000077500000000000000000000000001375552240400161405ustar00rootroot00000000000000rclone-1.53.3/backend/dropbox/dbhash/000077500000000000000000000000001375552240400173715ustar00rootroot00000000000000rclone-1.53.3/backend/dropbox/dbhash/dbhash.go000066400000000000000000000061271375552240400211570ustar00rootroot00000000000000// Package dbhash implements the dropbox hash as described in // // https://www.dropbox.com/developers/reference/content-hash package dbhash import ( "crypto/sha256" "hash" ) const ( // BlockSize of the checksum in bytes. BlockSize = sha256.BlockSize // Size of the checksum in bytes. Size = sha256.BlockSize bytesPerBlock = 4 * 1024 * 1024 hashReturnedError = "hash function returned error" ) type digest struct { n int // bytes written into blockHash so far blockHash hash.Hash totalHash hash.Hash sumCalled bool writtenMore bool } // New returns a new hash.Hash computing the Dropbox checksum. func New() hash.Hash { d := &digest{} d.Reset() return d } // writeBlockHash writes the current block hash into the total hash func (d *digest) writeBlockHash() { blockHash := d.blockHash.Sum(nil) _, err := d.totalHash.Write(blockHash) if err != nil { panic(hashReturnedError) } // reset counters for blockhash d.n = 0 d.blockHash.Reset() } // Write writes len(p) bytes from p to the underlying data stream. It returns // the number of bytes written from p (0 <= n <= len(p)) and any error // encountered that caused the write to stop early. Write must return a non-nil // error if it returns n < len(p). Write must not modify the slice data, even // temporarily. // // Implementations must not retain p. func (d *digest) Write(p []byte) (n int, err error) { n = len(p) for len(p) > 0 { d.writtenMore = true toWrite := bytesPerBlock - d.n if toWrite > len(p) { toWrite = len(p) } _, err = d.blockHash.Write(p[:toWrite]) if err != nil { panic(hashReturnedError) } d.n += toWrite p = p[toWrite:] // Accumulate the total hash if d.n == bytesPerBlock { d.writeBlockHash() } } return n, nil } // Sum appends the current hash to b and returns the resulting slice. // It does not change the underlying hash state. // // TODO(ncw) Sum() can only be called once for this type of hash. // If you call Sum(), then Write() then Sum() it will result in // a panic. Calling Write() then Sum(), then Sum() is OK. 
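//
// A minimal illustrative sketch of that rule from a caller's point of
// view (illustrative only, not exercised by this package):
//
//	d := dbhash.New()
//	_, _ = d.Write([]byte("hello"))
//	first := d.Sum(nil) // OK: first Sum after a Write
//	_ = d.Sum(nil)      // OK: no Write since the previous Sum
//	_, _ = d.Write(first)
//	_ = d.Sum(nil)      // panics: a Write has happened since the last Sum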
func (d *digest) Sum(b []byte) []byte { if d.sumCalled && d.writtenMore { panic("digest.Sum() called more than once") } d.sumCalled = true d.writtenMore = false if d.n != 0 { d.writeBlockHash() } return d.totalHash.Sum(b) } // Reset resets the Hash to its initial state. func (d *digest) Reset() { d.n = 0 d.totalHash = sha256.New() d.blockHash = sha256.New() d.sumCalled = false d.writtenMore = false } // Size returns the number of bytes Sum will return. func (d *digest) Size() int { return d.totalHash.Size() } // BlockSize returns the hash's underlying block size. // The Write method must be able to accept any amount // of data, but it may operate more efficiently if all writes // are a multiple of the block size. func (d *digest) BlockSize() int { return d.totalHash.BlockSize() } // Sum returns the Dropbox checksum of the data. func Sum(data []byte) [Size]byte { var d digest d.Reset() _, _ = d.Write(data) var out [Size]byte d.Sum(out[:0]) return out } // must implement this interface var _ hash.Hash = (*digest)(nil) rclone-1.53.3/backend/dropbox/dbhash/dbhash_test.go000066400000000000000000000054471375552240400222220ustar00rootroot00000000000000package dbhash_test import ( "encoding/hex" "fmt" "testing" "github.com/rclone/rclone/backend/dropbox/dbhash" "github.com/stretchr/testify/assert" ) func testChunk(t *testing.T, chunk int) { data := make([]byte, chunk) for i := 0; i < chunk; i++ { data[i] = 'A' } for _, test := range []struct { n int want string }{ {0, "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}, {1, "1cd6ef71e6e0ff46ad2609d403dc3fee244417089aa4461245a4e4fe23a55e42"}, {2, "01e0655fb754d10418a73760f57515f4903b298e6d67dda6bf0987fa79c22c88"}, {4096, "8620913d33852befe09f16fff8fd75f77a83160d29f76f07e0276e9690903035"}, {4194303, "647c8627d70f7a7d13ce96b1e7710a771a55d41a62c3da490d92e56044d311fa"}, {4194304, "d4d63bac5b866c71620185392a8a6218ac1092454a2d16f820363b69852befa3"}, {4194305, "8f553da8d00d0bf509d8470e242888be33019c20c0544811f5b2b89e98360b92"}, {8388607, "83b30cf4fb5195b04a937727ae379cf3d06673bf8f77947f6a92858536e8369c"}, {8388608, "e08b3ba1f538804075c5f939accdeaa9efc7b5c01865c94a41e78ca6550a88e7"}, {8388609, "02c8a4aefc2bfc9036f89a7098001865885938ca580e5c9e5db672385edd303c"}, } { d := dbhash.New() var toWrite int for toWrite = test.n; toWrite >= chunk; toWrite -= chunk { n, err := d.Write(data) assert.Nil(t, err) assert.Equal(t, chunk, n) } n, err := d.Write(data[:toWrite]) assert.Nil(t, err) assert.Equal(t, toWrite, n) got := hex.EncodeToString(d.Sum(nil)) assert.Equal(t, test.want, got, fmt.Sprintf("when testing length %d", n)) } } func TestHashChunk16M(t *testing.T) { testChunk(t, 16*1024*1024) } func TestHashChunk8M(t *testing.T) { testChunk(t, 8*1024*1024) } func TestHashChunk4M(t *testing.T) { testChunk(t, 4*1024*1024) } func TestHashChunk2M(t *testing.T) { testChunk(t, 2*1024*1024) } func TestHashChunk1M(t *testing.T) { testChunk(t, 1*1024*1024) } func TestHashChunk64k(t *testing.T) { testChunk(t, 64*1024) } func TestHashChunk32k(t *testing.T) { testChunk(t, 32*1024) } func TestHashChunk2048(t *testing.T) { testChunk(t, 2048) } func TestHashChunk2047(t *testing.T) { testChunk(t, 2047) } func TestSumCalledTwice(t *testing.T) { d := dbhash.New() assert.NotPanics(t, func() { d.Sum(nil) }) d.Reset() assert.NotPanics(t, func() { d.Sum(nil) }) assert.NotPanics(t, func() { d.Sum(nil) }) _, _ = d.Write([]byte{1}) assert.Panics(t, func() { d.Sum(nil) }) } func TestSize(t *testing.T) { d := dbhash.New() assert.Equal(t, 32, d.Size()) } func TestBlockSize(t 
*testing.T) { d := dbhash.New() assert.Equal(t, 64, d.BlockSize()) } func TestSum(t *testing.T) { assert.Equal(t, [64]byte{ 0x1c, 0xd6, 0xef, 0x71, 0xe6, 0xe0, 0xff, 0x46, 0xad, 0x26, 0x09, 0xd4, 0x03, 0xdc, 0x3f, 0xee, 0x24, 0x44, 0x17, 0x08, 0x9a, 0xa4, 0x46, 0x12, 0x45, 0xa4, 0xe4, 0xfe, 0x23, 0xa5, 0x5e, 0x42, }, dbhash.Sum([]byte{'A'}), ) } rclone-1.53.3/backend/dropbox/dropbox.go000077500000000000000000001041751375552240400201570ustar00rootroot00000000000000// Package dropbox provides an interface to Dropbox object storage package dropbox // FIXME dropbox for business would be quite easy to add /* The Case folding of PathDisplay problem From the docs: path_display String. The cased path to be used for display purposes only. In rare instances the casing will not correctly match the user's filesystem, but this behavior will match the path provided in the Core API v1, and at least the last path component will have the correct casing. Changes to only the casing of paths won't be returned by list_folder/continue. This field will be null if the file or folder is not mounted. This field is optional. We solve this by not implementing the ListR interface. The dropbox remote will recurse directory by directory only using the last element of path_display and all will be well. */ import ( "context" "fmt" "io" "log" "path" "regexp" "strings" "time" "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox" "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/auth" "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/common" "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files" "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/sharing" "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team" "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users" "github.com/pkg/errors" "github.com/rclone/rclone/backend/dropbox/dbhash" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" "golang.org/x/oauth2" ) // Constants const ( rcloneClientID = "5jcck7diasz0rqy" rcloneEncryptedClientSecret = "fRS5vVLr2v6FbyXYnIgjwBuUAt0osq_QZTXAEcmZ7g" minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential // Upload chunk size - setting too small makes uploads slow. // Chunks are buffered into memory for retries. // // Speed vs chunk size uploading a 1 GB file on 2017-11-22 // // Chunk Size MB, Speed Mbyte/s, % of max // 1 1.364 11% // 2 2.443 19% // 4 4.288 33% // 8 6.79 52% // 16 8.916 69% // 24 10.195 79% // 32 10.427 81% // 40 10.96 85% // 48 11.828 91% // 56 11.763 91% // 64 12.047 93% // 96 12.302 95% // 128 12.945 100% // // Choose 48MB which is 91% of Maximum speed. rclone by // default does 4 transfers so this should use 4*48MB = 192MB // by default. 
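//
// Illustrative note (rough arithmetic, not a benchmark): buffer memory
// scales as transfers * chunk size, so for example
//
//	rclone copy --transfers 4 --dropbox-chunk-size 128M /src dropbox:dst
//
// should be expected to buffer about 4 * 128 MB = 512 MB at peak.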
defaultChunkSize = 48 * fs.MebiByte maxChunkSize = 150 * fs.MebiByte ) var ( // Description of how to auth for this app dropboxConfig = &oauth2.Config{ Scopes: []string{}, // Endpoint: oauth2.Endpoint{ // AuthURL: "https://www.dropbox.com/1/oauth2/authorize", // TokenURL: "https://api.dropboxapi.com/1/oauth2/token", // }, Endpoint: dropbox.OAuthEndpoint(""), ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), RedirectURL: oauthutil.RedirectLocalhostURL, } // A regexp matching path names for files Dropbox ignores // See https://www.dropbox.com/en/help/145 - Ignored files ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`) // DbHashType is the hash.Type for Dropbox DbHashType hash.Type ) // Register with Fs func init() { DbHashType = hash.RegisterHash("DropboxHash", 64, dbhash.New) fs.Register(&fs.RegInfo{ Name: "dropbox", Description: "Dropbox", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { opt := oauthutil.Options{ NoOffline: true, } err := oauthutil.Config("dropbox", name, m, dropboxConfig, &opt) if err != nil { log.Fatalf("Failed to configure token: %v", err) } }, Options: append(oauthutil.SharedOptions, []fs.Option{{ Name: "chunk_size", Help: fmt.Sprintf(`Upload chunk size. (< %v). Any files larger than this will be uploaded in chunks of this size. Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10%% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.`, maxChunkSize), Default: defaultChunkSize, Advanced: true, }, { Name: "impersonate", Help: "Impersonate this user when using a business account.", Default: "", Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // https://www.dropbox.com/help/syncing-uploads/files-not-syncing lists / and \ // as invalid characters. // Testing revealed names with trailing spaces and the DEL character don't work. // Also encode invalid UTF-8 bytes as json doesn't handle them properly. 
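// As an illustrative example (assuming the standard rclone encoder
// mappings), a backslash becomes the fullwidth lookalike ＼ and a
// trailing space becomes the visible ␠ rune, so the standard name
// `a\b ` is sent as `a＼b␠` and decodes back losslessly on listing.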
Default: (encoder.Base | encoder.EncodeBackSlash | encoder.EncodeDel | encoder.EncodeRightSpace | encoder.EncodeInvalidUtf8), }}...), }) } // Options defines the configuration for this backend type Options struct { ChunkSize fs.SizeSuffix `config:"chunk_size"` Impersonate string `config:"impersonate"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote dropbox server type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed options features *fs.Features // optional features srv files.Client // the connection to the dropbox server sharing sharing.Client // as above, but for generating sharing links users users.Client // as above, but for accessing user information team team.Client // for the Teams API slashRoot string // root with "/" prefix, lowercase slashRootSlash string // root with "/" prefix and postfix, lowercase pacer *fs.Pacer // To pace the API calls ns string // The namespace we are using or "" for none } // Object describes a dropbox object // // Dropbox Objects always have full metadata type Object struct { fs *Fs // what this object is part of remote string // The remote path bytes int64 // size of the object modTime time.Time // time it was last modified hash string // content_hash of the object } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("Dropbox root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // shouldRetry returns a boolean as to whether this err deserves to be // retried. It returns the err as a convenience func shouldRetry(err error) (bool, error) { if err == nil { return false, err } baseErrString := errors.Cause(err).Error() // First check for Insufficient Space if strings.Contains(baseErrString, "insufficient_space") { return false, fserrors.FatalError(err) } // Then handle any official Retry-After header from Dropbox's SDK switch e := err.(type) { case auth.RateLimitAPIError: if e.RateLimitError.RetryAfter > 0 { fs.Debugf(baseErrString, "Too many requests or write operations. 
Trying again in %d seconds.", e.RateLimitError.RetryAfter) err = pacer.RetryAfterError(err, time.Duration(e.RateLimitError.RetryAfter)*time.Second) } return true, err } // Keep old behavior for backward compatibility if strings.Contains(baseErrString, "too_many_write_operations") || strings.Contains(baseErrString, "too_many_requests") || baseErrString == "" { return true, err } return fserrors.ShouldRetry(err), err } func checkUploadChunkSize(cs fs.SizeSuffix) error { const minChunkSize = fs.Byte if cs < minChunkSize { return errors.Errorf("%s is less than %s", cs, minChunkSize) } if cs > maxChunkSize { return errors.Errorf("%s is greater than %s", cs, maxChunkSize) } return nil } func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadChunkSize(cs) if err == nil { old, f.opt.ChunkSize = f.opt.ChunkSize, cs } return } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } err = checkUploadChunkSize(opt.ChunkSize) if err != nil { return nil, errors.Wrap(err, "dropbox: chunk size") } // Convert the old token if it exists. The old token was just // a string, the new one is a JSON blob oldToken, ok := m.Get(config.ConfigToken) oldToken = strings.TrimSpace(oldToken) if ok && oldToken != "" && oldToken[0] != '{' { fs.Infof(name, "Converting token to new format") newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken) err := config.SetValueAndSave(name, config.ConfigToken, newToken) if err != nil { return nil, errors.Wrap(err, "NewFS convert token") } } oAuthClient, _, err := oauthutil.NewClient(name, m, dropboxConfig) if err != nil { return nil, errors.Wrap(err, "failed to configure dropbox") } f := &Fs{ name: name, opt: *opt, pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), } config := dropbox.Config{ LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo Client: oAuthClient, // maybe???
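// headerGenerator (defined below) injects a Dropbox-API-Path-Root
// header once f.ns has been resolved, so paths are evaluated relative
// to that namespace rather than the user's home namespace.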
HeaderGenerator: f.headerGenerator, } // NOTE: needs to be created pre-impersonation so we can look up the impersonated user f.team = team.New(config) if opt.Impersonate != "" { user := team.UserSelectorArg{ Email: opt.Impersonate, } user.Tag = "email" members := []*team.UserSelectorArg{&user} args := team.NewMembersGetInfoArgs(members) memberIds, err := f.team.MembersGetInfo(args) if err != nil { return nil, errors.Wrapf(err, "invalid dropbox team member: %q", opt.Impersonate) } config.AsMemberID = memberIds[0].MemberInfo.Profile.MemberProfile.TeamMemberId } f.srv = files.New(config) f.sharing = sharing.New(config) f.users = users.New(config) f.features = (&fs.Features{ CaseInsensitive: true, ReadMimeType: true, CanHaveEmptyDirectories: true, }).Fill(f) f.setRoot(root) // If root starts with / then use the actual root if strings.HasPrefix(root, "/") { var acc *users.FullAccount err = f.pacer.Call(func() (bool, error) { acc, err = f.users.GetCurrentAccount() return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "get current account failed") } switch x := acc.RootInfo.(type) { case *common.TeamRootInfo: f.ns = x.RootNamespaceId case *common.UserRootInfo: f.ns = x.RootNamespaceId default: return nil, errors.Errorf("unknown RootInfo type %v %T", acc.RootInfo, acc.RootInfo) } fs.Debugf(f, "Using root namespace %q", f.ns) } // See if the root is actually an object _, err = f.getFileMetadata(f.slashRoot) if err == nil { newRoot := path.Dir(f.root) if newRoot == "." { newRoot = "" } f.setRoot(newRoot) // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // headerGenerator for dropbox sdk func (f *Fs) headerGenerator(hostType string, style string, namespace string, route string) map[string]string { if f.ns == "" { return map[string]string{} } return map[string]string{ "Dropbox-API-Path-Root": `{".tag": "namespace_id", "namespace_id": "` + f.ns + `"}`, } } // Sets root in f func (f *Fs) setRoot(root string) { f.root = strings.Trim(root, "/") f.slashRoot = "/" + f.root f.slashRootSlash = f.slashRoot if f.root != "" { f.slashRootSlash += "/" } } // getMetadata gets the metadata for a file or directory func (f *Fs) getMetadata(objPath string) (entry files.IsMetadata, notFound bool, err error) { err = f.pacer.Call(func() (bool, error) { entry, err = f.srv.GetMetadata(&files.GetMetadataArg{ Path: f.opt.Enc.FromStandardPath(objPath), }) return shouldRetry(err) }) if err != nil { switch e := err.(type) { case files.GetMetadataAPIError: if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.LookupErrorNotFound { notFound = true err = nil } } } return } // getFileMetadata gets the metadata for a file func (f *Fs) getFileMetadata(filePath string) (fileInfo *files.FileMetadata, err error) { entry, notFound, err := f.getMetadata(filePath) if err != nil { return nil, err } if notFound { return nil, fs.ErrorObjectNotFound } fileInfo, ok := entry.(*files.FileMetadata) if !ok { return nil, fs.ErrorNotAFile } return fileInfo, nil } // getDirMetadata gets the metadata for a directory func (f *Fs) getDirMetadata(dirPath string) (dirInfo *files.FolderMetadata, err error) { entry, notFound, err := f.getMetadata(dirPath) if err != nil { return nil, err } if notFound { return nil, fs.ErrorDirNotFound } dirInfo, ok := entry.(*files.FolderMetadata) if !ok { return nil, fs.ErrorIsFile } return dirInfo, nil } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. 
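//
// When info is nil the metadata is fetched with an extra API round trip
// (via readEntryAndSetMetadata), so callers that already hold a
// files.FileMetadata, such as List, should pass it in.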
func (f *Fs) newObjectWithInfo(remote string, info *files.FileMetadata) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { err = o.setMetadataFromEntry(info) } else { err = o.readEntryAndSetMetadata() } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(remote, nil) } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { root := f.slashRoot if dir != "" { root += "/" + dir } started := false var res *files.ListFolderResult for { if !started { arg := files.ListFolderArg{ Path: f.opt.Enc.FromStandardPath(root), Recursive: false, } if root == "/" { arg.Path = "" // Specify root folder as empty string } err = f.pacer.Call(func() (bool, error) { res, err = f.srv.ListFolder(&arg) return shouldRetry(err) }) if err != nil { switch e := err.(type) { case files.ListFolderAPIError: if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.LookupErrorNotFound { err = fs.ErrorDirNotFound } } return nil, err } started = true } else { arg := files.ListFolderContinueArg{ Cursor: res.Cursor, } err = f.pacer.Call(func() (bool, error) { res, err = f.srv.ListFolderContinue(&arg) return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "list continue") } } for _, entry := range res.Entries { var fileInfo *files.FileMetadata var folderInfo *files.FolderMetadata var metadata *files.Metadata switch info := entry.(type) { case *files.FolderMetadata: folderInfo = info metadata = &info.Metadata case *files.FileMetadata: fileInfo = info metadata = &info.Metadata default: fs.Errorf(f, "Unknown type %T", entry) continue } // Only the last element is reliably cased in PathDisplay entryPath := metadata.PathDisplay leaf := f.opt.Enc.ToStandardName(path.Base(entryPath)) remote := path.Join(dir, leaf) if folderInfo != nil { d := fs.NewDir(remote, time.Now()) entries = append(entries, d) } else if fileInfo != nil { o, err := f.newObjectWithInfo(remote, fileInfo) if err != nil { return nil, err } entries = append(entries, o) } } if !res.HasMore { break } } return entries, nil } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { // Temporary Object under construction o := &Object{ fs: f, remote: src.Remote(), } return o, o.Update(ctx, in, src, options...) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) 
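	// Note (illustrative): unknown-sized streams reach uploadChunked via
	// Put/Update with size == -1, e.g.
	//
	//	cat big.log | rclone rcat dropbox:backup/big.log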
} // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { root := path.Join(f.slashRoot, dir) // can't create or run metadata on root if root == "/" { return nil } // check directory doesn't exist _, err := f.getDirMetadata(root) if err == nil { return nil // directory exists already } else if err != fs.ErrorDirNotFound { return err // some other error } // create it arg2 := files.CreateFolderArg{ Path: f.opt.Enc.FromStandardPath(root), } err = f.pacer.Call(func() (bool, error) { _, err = f.srv.CreateFolderV2(&arg2) return shouldRetry(err) }) return err } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error) { root := path.Join(f.slashRoot, dir) // can't remove root if root == "/" { return errors.New("can't remove root directory") } if check { // check directory exists _, err = f.getDirMetadata(root) if err != nil { return errors.Wrap(err, "Rmdir") } root = f.opt.Enc.FromStandardPath(root) // check directory empty arg := files.ListFolderArg{ Path: root, Recursive: false, } if root == "/" { arg.Path = "" // Specify root folder as empty string } var res *files.ListFolderResult err = f.pacer.Call(func() (bool, error) { res, err = f.srv.ListFolder(&arg) return shouldRetry(err) }) if err != nil { return errors.Wrap(err, "Rmdir") } if len(res.Entries) != 0 { return errors.New("directory not empty") } } // remove it err = f.pacer.Call(func() (bool, error) { _, err = f.srv.DeleteV2(&files.DeleteArg{Path: root}) return shouldRetry(err) }) return err } // Rmdir deletes the container // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision returns the precision func (f *Fs) Precision() time.Duration { return time.Second } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } // Temporary Object under construction dstObj := &Object{ fs: f, remote: remote, } // Copy arg := files.RelocationArg{ RelocationPath: files.RelocationPath{ FromPath: f.opt.Enc.FromStandardPath(srcObj.remotePath()), ToPath: f.opt.Enc.FromStandardPath(dstObj.remotePath()), }, } var err error var result *files.RelocationResult err = f.pacer.Call(func() (bool, error) { result, err = f.srv.CopyV2(&arg) return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "copy failed") } // Set the metadata fileInfo, ok := result.Metadata.(*files.FileMetadata) if !ok { return nil, fs.ErrorNotAFile } err = dstObj.setMetadataFromEntry(fileInfo) if err != nil { return nil, errors.Wrap(err, "copy failed") } return dstObj, nil } // Purge deletes all the files and the container // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) (err error) { return f.purgeCheck(ctx, dir, false) } // Move src to this remote using server side move operations. 
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } // Temporary Object under construction dstObj := &Object{ fs: f, remote: remote, } // Do the move arg := files.RelocationArg{ RelocationPath: files.RelocationPath{ FromPath: f.opt.Enc.FromStandardPath(srcObj.remotePath()), ToPath: f.opt.Enc.FromStandardPath(dstObj.remotePath()), }, } var err error var result *files.RelocationResult err = f.pacer.Call(func() (bool, error) { result, err = f.srv.MoveV2(&arg) return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "move failed") } // Set the metadata fileInfo, ok := result.Metadata.(*files.FileMetadata) if !ok { return nil, fs.ErrorNotAFile } err = dstObj.setMetadataFromEntry(fileInfo) if err != nil { return nil, errors.Wrap(err, "move failed") } return dstObj, nil } // PublicLink adds a "readable by anyone with link" permission on the given file or folder. func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) { absPath := f.opt.Enc.FromStandardPath(path.Join(f.slashRoot, remote)) fs.Debugf(f, "attempting to share '%s' (absolute path: %s)", remote, absPath) createArg := sharing.CreateSharedLinkWithSettingsArg{ Path: absPath, // FIXME this gives settings_error/not_authorized/.. errors // and the expires setting isn't in the documentation so remove // for now. // Settings: &sharing.SharedLinkSettings{ // Expires: time.Now().Add(time.Duration(expire)).UTC().Round(time.Second), // }, } var linkRes sharing.IsSharedLinkMetadata err = f.pacer.Call(func() (bool, error) { linkRes, err = f.sharing.CreateSharedLinkWithSettings(&createArg) return shouldRetry(err) }) if err != nil && strings.Contains(err.Error(), sharing.CreateSharedLinkWithSettingsErrorSharedLinkAlreadyExists) { fs.Debugf(absPath, "has a public link already, attempting to retrieve it") listArg := sharing.ListSharedLinksArg{ Path: absPath, DirectOnly: true, } var listRes *sharing.ListSharedLinksResult err = f.pacer.Call(func() (bool, error) { listRes, err = f.sharing.ListSharedLinks(&listArg) return shouldRetry(err) }) if err != nil { return } if len(listRes.Links) == 0 { err = errors.New("Dropbox says the sharing link already exists, but list came back empty") return } linkRes = listRes.Links[0] } if err == nil { switch res := linkRes.(type) { case *sharing.FileLinkMetadata: link = res.Url case *sharing.FolderLinkMetadata: link = res.Url default: err = fmt.Errorf("Don't know how to extract link, response has unknown format: %T", res) } } return } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. 
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcPath := path.Join(srcFs.slashRoot, srcRemote) dstPath := path.Join(f.slashRoot, dstRemote) // Check if destination exists _, err := f.getDirMetadata(dstPath) if err == nil { return fs.ErrorDirExists } else if err != fs.ErrorDirNotFound { return err } // Make sure the parent directory exists // ...apparently not necessary // Do the move arg := files.RelocationArg{ RelocationPath: files.RelocationPath{ FromPath: f.opt.Enc.FromStandardPath(srcPath), ToPath: f.opt.Enc.FromStandardPath(dstPath), }, } err = f.pacer.Call(func() (bool, error) { _, err = f.srv.MoveV2(&arg) return shouldRetry(err) }) if err != nil { return errors.Wrap(err, "MoveDir failed") } return nil } // About gets quota information func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) { var q *users.SpaceUsage err = f.pacer.Call(func() (bool, error) { q, err = f.users.GetSpaceUsage() return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "about failed") } var total uint64 if q.Allocation != nil { if q.Allocation.Individual != nil { total += q.Allocation.Individual.Allocated } if q.Allocation.Team != nil { total += q.Allocation.Team.Allocated } } usage = &fs.Usage{ Total: fs.NewUsageValue(int64(total)), // quota of bytes that can be used Used: fs.NewUsageValue(int64(q.Used)), // bytes in use Free: fs.NewUsageValue(int64(total - q.Used)), // bytes which can be uploaded before reaching the quota } return usage, nil } // Hashes returns the supported hash sets. 
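//
// For example (illustrative), the hash registered as "DropboxHash" in
// init can be checked from the command line with
//
//	rclone hashsum DropboxHash dropbox:path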
func (f *Fs) Hashes() hash.Set { return hash.Set(DbHashType) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the Dropbox special hash func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != DbHashType { return "", hash.ErrUnsupported } err := o.readMetaData() if err != nil { return "", errors.Wrap(err, "failed to read hash from metadata") } return o.hash, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return o.bytes } // setMetadataFromEntry sets the fs data from a files.FileMetadata // // This isn't a complete set of metadata and has an inaccurate date func (o *Object) setMetadataFromEntry(info *files.FileMetadata) error { o.bytes = int64(info.Size) o.modTime = info.ClientModified o.hash = info.ContentHash return nil } // Reads the entry for a file from Dropbox func (o *Object) readEntry() (*files.FileMetadata, error) { return o.fs.getFileMetadata(o.remotePath()) } // Read entry if not set and set metadata from it func (o *Object) readEntryAndSetMetadata() error { // Last resort set time from client if !o.modTime.IsZero() { return nil } entry, err := o.readEntry() if err != nil { return err } return o.setMetadataFromEntry(entry) } // Returns the remote path for the object func (o *Object) remotePath() string { return o.fs.slashRootSlash + o.remote } // readMetaData gets the info if it hasn't already been fetched func (o *Object) readMetaData() (err error) { if !o.modTime.IsZero() { return nil } // Last resort return o.readEntryAndSetMetadata() } // ModTime returns the modification time of the object // // It attempts to read the object's mtime from the metadata; if that // fails it logs the error and returns the current time. func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData() if err != nil { fs.Debugf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // SetModTime sets the modification time of the local fs object // // Commits the datastore func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { // Dropbox doesn't have a way of doing this so returning this // error will cause the file to be deleted first then // re-uploaded to set the time. return fs.ErrorCantSetModTimeWithoutDelete } // Storable returns whether this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { fs.FixRangeOption(options, o.bytes) headers := fs.OpenOptionHeaders(options) arg := files.DownloadArg{ Path: o.fs.opt.Enc.FromStandardPath(o.remotePath()), ExtraHeaders: headers, } err = o.fs.pacer.Call(func() (bool, error) { _, in, err = o.fs.srv.Download(&arg) return shouldRetry(err) }) switch e := err.(type) { case files.DownloadAPIError: // Don't attempt to retry copyright violation errors if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.LookupErrorRestrictedContent { return nil, fserrors.NoRetryError(err) } } return } // uploadChunked uploads the object in parts // // Will work optimally if size is >= uploadChunkSize. If the size is either // unknown (i.e.
-1) or smaller than uploadChunkSize, the method incurs an // avoidable request to the Dropbox API that does not carry payload. func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size int64) (entry *files.FileMetadata, err error) { chunkSize := int64(o.fs.opt.ChunkSize) chunks := 0 if size != -1 { chunks = int(size/chunkSize) + 1 } in := readers.NewCountingReader(in0) buf := make([]byte, int(chunkSize)) fmtChunk := func(cur int, last bool) { if chunks == 0 && last { fs.Debugf(o, "Streaming chunk %d/%d", cur, cur) } else if chunks == 0 { fs.Debugf(o, "Streaming chunk %d/unknown", cur) } else { fs.Debugf(o, "Uploading chunk %d/%d", cur, chunks) } } // write the first chunk fmtChunk(1, false) var res *files.UploadSessionStartResult chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, chunkSize) err = o.fs.pacer.Call(func() (bool, error) { // seek to the start in case this is a retry if _, err = chunk.Seek(0, io.SeekStart); err != nil { return false, nil } res, err = o.fs.srv.UploadSessionStart(&files.UploadSessionStartArg{}, chunk) return shouldRetry(err) }) if err != nil { return nil, err } cursor := files.UploadSessionCursor{ SessionId: res.SessionId, Offset: 0, } appendArg := files.UploadSessionAppendArg{ Cursor: &cursor, Close: false, } // write more whole chunks (if any) currentChunk := 2 for { if chunks > 0 && currentChunk >= chunks { // if the size is known, only upload full chunks. Remaining bytes are uploaded with // the UploadSessionFinish request. break } else if chunks == 0 && in.BytesRead()-cursor.Offset < uint64(chunkSize) { // if the size is unknown, upload as long as we can read full chunks from the reader. // The UploadSessionFinish request will not contain any payload. break } cursor.Offset = in.BytesRead() fmtChunk(currentChunk, false) chunk = readers.NewRepeatableLimitReaderBuffer(in, buf, chunkSize) err = o.fs.pacer.Call(func() (bool, error) { // seek to the start in case this is a retry if _, err = chunk.Seek(0, io.SeekStart); err != nil { return false, nil } err = o.fs.srv.UploadSessionAppendV2(&appendArg, chunk) // after the first chunk is uploaded, we retry everything return err != nil, err }) if err != nil { return nil, err } currentChunk++ } // write the remains cursor.Offset = in.BytesRead() args := &files.UploadSessionFinishArg{ Cursor: &cursor, Commit: commitInfo, } fmtChunk(currentChunk, true) chunk = readers.NewRepeatableReaderBuffer(in, buf) err = o.fs.pacer.Call(func() (bool, error) { // seek to the start in case this is a retry if _, err = chunk.Seek(0, io.SeekStart); err != nil { return false, nil } entry, err = o.fs.srv.UploadSessionFinish(args, chunk) // If error is insufficient space then don't retry if e, ok := err.(files.UploadSessionFinishAPIError); ok { if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace { err = fserrors.NoRetryError(err) return false, err } } // after the first chunk is uploaded, we retry everything return err != nil, err }) if err != nil { return nil, err } return entry, nil } // Update the already existing object // // Copy the reader into the object updating modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { remote := o.remotePath() if ignoredFiles.MatchString(remote) { return fserrors.NoRetryError(errors.Errorf("file name %q is disallowed - not uploading", path.Base(remote))) } commitInfo 
:= files.NewCommitInfo(o.fs.opt.Enc.FromStandardPath(o.remotePath())) commitInfo.Mode.Tag = "overwrite" // The Dropbox API only accepts timestamps in UTC with second precision. commitInfo.ClientModified = src.ModTime(ctx).UTC().Round(time.Second) size := src.Size() var err error var entry *files.FileMetadata if size > int64(o.fs.opt.ChunkSize) || size == -1 { entry, err = o.uploadChunked(in, commitInfo, size) } else { err = o.fs.pacer.CallNoRetry(func() (bool, error) { entry, err = o.fs.srv.Upload(commitInfo, in) return shouldRetry(err) }) } if err != nil { return errors.Wrap(err, "upload failed") } return o.setMetadataFromEntry(entry) } // Remove an object func (o *Object) Remove(ctx context.Context) (err error) { err = o.fs.pacer.Call(func() (bool, error) { _, err = o.fs.srv.DeleteV2(&files.DeleteArg{ Path: o.fs.opt.Enc.FromStandardPath(o.remotePath()), }) return shouldRetry(err) }) return err } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.PutStreamer = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Object = (*Object)(nil) ) rclone-1.53.3/backend/dropbox/dropbox_test.go000066400000000000000000000011131375552240400211770ustar00rootroot00000000000000// Test Dropbox filesystem interface package dropbox import ( "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestDropbox:", NilObject: (*Object)(nil), ChunkedUpload: fstests.ChunkedUploadConfig{ MaxChunkSize: maxChunkSize, }, }) } func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadChunkSize(cs) } var _ fstests.SetUploadChunkSizer = (*Fs)(nil) rclone-1.53.3/backend/fichier/000077500000000000000000000000001375552240400160745ustar00rootroot00000000000000rclone-1.53.3/backend/fichier/api.go000066400000000000000000000241211375552240400171740ustar00rootroot00000000000000package fichier import ( "context" "io" "net/http" "regexp" "strconv" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/lib/rest" ) // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests. 403, // Forbidden (may happen when request limit is exceeded) 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func shouldRetry(resp *http.Response, err error) (bool, error) { // Detect this error which the integration tests provoke // error HTTP error 403 (403 Forbidden) returned body: "{\"message\":\"Flood detected: IP Locked #374\",\"status\":\"KO\"}" // // https://1fichier.com/api.html // // file/ls.cgi is limited : // // Warning (can be changed in case of abuses) : // List all files of the account is limited to 1 request per hour. // List folders is limited to 5 000 results and 1 request per folder per 30s. 
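	// Note: the inline 30 second sleep below is presumably sized to match
	// the "1 request per folder per 30s" listing limit quoted above; after
	// sleeping, the error still flows into the generic retry checks below
	// so the request is attempted again.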
if err != nil && strings.Contains(err.Error(), "Flood detected") { fs.Debugf(nil, "Sleeping for 30 seconds due to: %v", err) time.Sleep(30 * time.Second) } return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } var isAlphaNumeric = regexp.MustCompile(`^[a-zA-Z0-9]+$`).MatchString func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenResponse, error) { request := DownloadRequest{ URL: url, Single: 1, } opts := rest.Opts{ Method: "POST", Path: "/download/get_token.cgi", } var token GetTokenResponse err := f.pacer.Call(func() (bool, error) { resp, err := f.rest.CallJSON(ctx, &opts, &request, &token) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't list files") } return &token, nil } func fileFromSharedFile(file *SharedFile) File { return File{ URL: file.Link, Filename: file.Filename, Size: file.Size, } } func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntries, err error) { opts := rest.Opts{ Method: "GET", RootURL: "https://1fichier.com/dir/", Path: id, Parameters: map[string][]string{"json": {"1"}}, } var sharedFiles SharedFolderResponse err = f.pacer.Call(func() (bool, error) { resp, err := f.rest.CallJSON(ctx, &opts, nil, &sharedFiles) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't list files") } entries = make([]fs.DirEntry, len(sharedFiles)) for i, sharedFile := range sharedFiles { entries[i] = f.newObjectFromFile(ctx, "", fileFromSharedFile(&sharedFile)) } return entries, nil } func (f *Fs) listFiles(ctx context.Context, directoryID int) (filesList *FilesList, err error) { // fs.Debugf(f, "Requesting files for dir `%s`", directoryID) request := ListFilesRequest{ FolderID: directoryID, } opts := rest.Opts{ Method: "POST", Path: "/file/ls.cgi", } filesList = &FilesList{} err = f.pacer.Call(func() (bool, error) { resp, err := f.rest.CallJSON(ctx, &opts, &request, filesList) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't list files") } for i := range filesList.Items { item := &filesList.Items[i] item.Filename = f.opt.Enc.ToStandardName(item.Filename) } return filesList, nil } func (f *Fs) listFolders(ctx context.Context, directoryID int) (foldersList *FoldersList, err error) { // fs.Debugf(f, "Requesting folders for id `%s`", directoryID) request := ListFolderRequest{ FolderID: directoryID, } opts := rest.Opts{ Method: "POST", Path: "/folder/ls.cgi", } foldersList = &FoldersList{} err = f.pacer.Call(func() (bool, error) { resp, err := f.rest.CallJSON(ctx, &opts, &request, foldersList) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't list folders") } foldersList.Name = f.opt.Enc.ToStandardName(foldersList.Name) for i := range foldersList.SubFolders { folder := &foldersList.SubFolders[i] folder.Name = f.opt.Enc.ToStandardName(folder.Name) } // fs.Debugf(f, "Got FoldersList for id `%s`", directoryID) return foldersList, err } func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } folderID, err := strconv.Atoi(directoryID) if err != nil { return nil, err } files, err := f.listFiles(ctx, folderID) if err != nil { return nil, err } folders, err := f.listFolders(ctx, folderID) if err != nil { return nil, err } entries = make([]fs.DirEntry, len(files.Items)+len(folders.SubFolders)) for i, item := range files.Items { entries[i] = 
f.newObjectFromFile(ctx, dir, item) } for i, folder := range folders.SubFolders { createDate, err := time.Parse("2006-01-02 15:04:05", folder.CreateDate) if err != nil { return nil, err } fullPath := getRemote(dir, folder.Name) folderID := strconv.Itoa(folder.ID) entries[len(files.Items)+i] = fs.NewDir(fullPath, createDate).SetID(folderID) // fs.Debugf(f, "Put Path `%s` for id `%d` into dircache", fullPath, folder.ID) f.dirCache.Put(fullPath, folderID) } return entries, nil } func (f *Fs) newObjectFromFile(ctx context.Context, dir string, item File) *Object { return &Object{ fs: f, remote: getRemote(dir, item.Filename), file: item, } } func getRemote(dir, fileName string) string { if dir == "" { return fileName } return dir + "/" + fileName } func (f *Fs) makeFolder(ctx context.Context, leaf string, folderID int) (response *MakeFolderResponse, err error) { name := f.opt.Enc.FromStandardName(leaf) // fs.Debugf(f, "Creating folder `%s` in id `%s`", name, directoryID) request := MakeFolderRequest{ FolderID: folderID, Name: name, } opts := rest.Opts{ Method: "POST", Path: "/folder/mkdir.cgi", } response = &MakeFolderResponse{} err = f.pacer.Call(func() (bool, error) { resp, err := f.rest.CallJSON(ctx, &opts, &request, response) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't create folder") } // fs.Debugf(f, "Created Folder `%s` in id `%s`", name, directoryID) return response, err } func (f *Fs) removeFolder(ctx context.Context, name string, folderID int) (response *GenericOKResponse, err error) { // fs.Debugf(f, "Removing folder with id `%s`", directoryID) request := &RemoveFolderRequest{ FolderID: folderID, } opts := rest.Opts{ Method: "POST", Path: "/folder/rm.cgi", } response = &GenericOKResponse{} var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.rest.CallJSON(ctx, &opts, request, response) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't remove folder") } if response.Status != "OK" { return nil, errors.New("Can't remove non-empty dir") } // fs.Debugf(f, "Removed Folder with id `%s`", directoryID) return response, nil } func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKResponse, err error) { request := &RemoveFileRequest{ Files: []RmFile{ {url}, }, } opts := rest.Opts{ Method: "POST", Path: "/file/rm.cgi", } response = &GenericOKResponse{} err = f.pacer.Call(func() (bool, error) { resp, err := f.rest.CallJSON(ctx, &opts, request, response) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't remove file") } // fs.Debugf(f, "Removed file with url `%s`", url) return response, nil } func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse, err error) { // fs.Debugf(f, "Requesting Upload node") opts := rest.Opts{ Method: "GET", ContentType: "application/json", // 1Fichier API is bad Path: "/upload/get_upload_server.cgi", } response = &GetUploadNodeResponse{} err = f.pacer.Call(func() (bool, error) { resp, err := f.rest.CallJSON(ctx, &opts, nil, response) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "didn't get an upload node") } // fs.Debugf(f, "Got Upload node") return response, err } func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName, folderID, uploadID, node string, options ...fs.OpenOption) (response *http.Response, err error) { // fs.Debugf(f, "Uploading File `%s`", fileName) fileName = f.opt.Enc.FromStandardName(fileName) if
len(uploadID) > 10 || !isAlphaNumeric(uploadID) { return nil, errors.New("Invalid UploadID") } opts := rest.Opts{ Method: "POST", Path: "/upload.cgi", Parameters: map[string][]string{ "id": {uploadID}, }, NoResponse: true, Body: in, ContentLength: &size, Options: options, MultipartContentName: "file[]", MultipartFileName: fileName, MultipartParams: map[string][]string{ "did": {folderID}, }, } if node != "" { opts.RootURL = "https://" + node } err = f.pacer.CallNoRetry(func() (bool, error) { resp, err := f.rest.CallJSON(ctx, &opts, nil, nil) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't upload file") } // fs.Debugf(f, "Uploaded File `%s`", fileName) return response, err } func (f *Fs) endUpload(ctx context.Context, uploadID string, nodeurl string) (response *EndFileUploadResponse, err error) { // fs.Debugf(f, "Ending File Upload `%s`", uploadID) if len(uploadID) > 10 || !isAlphaNumeric(uploadID) { return nil, errors.New("Invalid UploadID") } opts := rest.Opts{ Method: "GET", Path: "/end.pl", RootURL: "https://" + nodeurl, Parameters: map[string][]string{ "xid": {uploadID}, }, ExtraHeaders: map[string]string{ "JSON": "1", }, } response = &EndFileUploadResponse{} err = f.pacer.Call(func() (bool, error) { resp, err := f.rest.CallJSON(ctx, &opts, nil, response) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't finish file upload") } return response, err } rclone-1.53.3/backend/fichier/fichier.go000066400000000000000000000262111375552240400200360ustar00rootroot00000000000000package fichier import ( "context" "fmt" "io" "net/http" "strconv" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/rest" ) const ( rootID = "0" apiBaseURL = "https://api.1fichier.com/v1" minSleep = 400 * time.Millisecond // api is extremely rate limited now maxSleep = 5 * time.Second decayConstant = 2 // bigger for slower decay, exponential attackConstant = 0 // start with max sleep ) func init() { fs.Register(&fs.RegInfo{ Name: "fichier", Description: "1Fichier", Config: func(name string, config configmap.Mapper) { }, NewFs: NewFs, Options: []fs.Option{{ Help: "Your API Key, get it from https://1fichier.com/console/params.pl", Name: "api_key", }, { Help: "If you want to download a shared folder, add this parameter", Name: "shared_folder", Required: false, Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Characters that need escaping // // '\\': '\', // FULLWIDTH REVERSE SOLIDUS // '<': '<', // FULLWIDTH LESS-THAN SIGN // '>': '>', // FULLWIDTH GREATER-THAN SIGN // '"': '"', // FULLWIDTH QUOTATION MARK - not on the list but seems to be reserved // '\'': ''', // FULLWIDTH APOSTROPHE // '$': '$', // FULLWIDTH DOLLAR SIGN // '`': '`', // FULLWIDTH GRAVE ACCENT // // Leading space and trailing space Default: (encoder.Display | encoder.EncodeBackSlash | encoder.EncodeSingleQuote | encoder.EncodeBackQuote | encoder.EncodeDoubleQuote | encoder.EncodeLtGt | encoder.EncodeDollar | encoder.EncodeLeftSpace | encoder.EncodeRightSpace | encoder.EncodeInvalidUtf8), }}, }) } // Options defines the configuration for this backend type Options 
struct { APIKey string `config:"api_key"` SharedFolder string `config:"shared_folder"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote 1Fichier filesystem type Fs struct { root string name string features *fs.Features opt Options dirCache *dircache.DirCache baseClient *http.Client pacer *fs.Pacer rest *rest.Client } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { folderID, err := strconv.Atoi(pathID) if err != nil { return "", false, err } folders, err := f.listFolders(ctx, folderID) if err != nil { return "", false, err } for _, folder := range folders.SubFolders { if folder.Name == leaf { pathIDOut := strconv.Itoa(folder.ID) return pathIDOut, true, nil } } return "", false, nil } // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) { folderID, err := strconv.Atoi(pathID) if err != nil { return "", err } resp, err := f.makeFolder(ctx, leaf, folderID) if err != nil { return "", err } return strconv.Itoa(resp.FolderID), err } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String returns a description of the FS func (f *Fs) String() string { return fmt.Sprintf("1Fichier root '%s'", f.root) } // Precision of the ModTimes in this Fs func (f *Fs) Precision() time.Duration { return fs.ModTimeNotSupported } // Hashes returns the supported hash types of the filesystem func (f *Fs) Hashes() hash.Set { return hash.Set(hash.Whirlpool) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // NewFs makes a new Fs object from the path // // The path is of the form remote:path // // Remotes are looked up in the config file. If the remote isn't // found then NotFoundInConfigFile will be returned. // // On Windows avoid single character remote names as they can be mixed // up with drive letters.
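//
// Illustrative example: with a remote named "fichier" defined in the
// config file, both of these invocations resolve their paths through
// NewFs:
//
//	rclone lsf fichier:
//	rclone copy /tmp/data fichier:backup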
func NewFs(name string, root string, config configmap.Mapper) (fs.Fs, error) { opt := new(Options) err := configstruct.Set(config, opt) if err != nil { return nil, err } // If using a Shared Folder override root if opt.SharedFolder != "" { root = "" } //workaround for wonky parser root = strings.Trim(root, "/") f := &Fs{ name: name, root: root, opt: *opt, pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant), pacer.AttackConstant(attackConstant))), baseClient: &http.Client{}, } f.features = (&fs.Features{ DuplicateFiles: true, CanHaveEmptyDirectories: true, }).Fill(f) client := fshttp.NewClient(fs.Config) f.rest = rest.NewClient(client).SetRoot(apiBaseURL) f.rest.SetHeader("Authorization", "Bearer "+f.opt.APIKey) f.dirCache = dircache.New(root, rootID, f) ctx := context.Background() // Find the current root err = f.dirCache.FindRoot(ctx, false) if err != nil { // Assume it is a file newRoot, remote := dircache.SplitPath(root) tempF := *f tempF.dirCache = dircache.New(newRoot, rootID, &tempF) tempF.root = newRoot // Make new Fs which is the parent err = tempF.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f return f, nil } _, err := tempF.NewObject(ctx, remote) if err != nil { if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f return f, nil } return nil, err } f.features.Fill(&tempF) // XXX: update the old f here instead of returning tempF, since // `features` were already filled with functions having *f as a receiver. // See https://github.com/rclone/rclone/issues/2182 f.dirCache = tempF.dirCache f.root = tempF.root // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { if f.opt.SharedFolder != "" { return f.listSharedFiles(ctx, f.opt.SharedFolder) } dirContent, err := f.listDir(ctx, dir) if err != nil { return nil, err } return dirContent, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false) if err != nil { if err == fs.ErrorDirNotFound { return nil, fs.ErrorObjectNotFound } return nil, err } folderID, err := strconv.Atoi(directoryID) if err != nil { return nil, err } files, err := f.listFiles(ctx, folderID) if err != nil { return nil, err } for _, file := range files.Items { if file.Filename == leaf { path, ok := f.dirCache.GetInv(directoryID) if !ok { return nil, errors.New("Cannot find dir in dircache") } return f.newObjectFromFile(ctx, path, file), nil } } return nil, fs.ErrorObjectNotFound } // Put in to the remote path with the modTime given of the given size // // When called from outside an Fs by rclone, src.Size() will always be >= 0. // But for unknown-sized objects (indicated by src.Size() == -1), Put should either // return an error or upload it properly (rather than e.g. calling panic). 
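// exampleCheckSize is an illustrative sketch (hypothetical helper, not used
// by the backend) isolating the size contract described above: inside
// rclone a negative src.Size() means "unknown", and this backend also
// refuses empty files and anything over the 300 GB limit enforced by
// putUnchecked below.
func exampleCheckSize(size int64) error {
	switch {
	case size < 0:
		return errors.New("refusing to upload a file of unknown size")
	case size == 0:
		return fs.ErrorCantUploadEmptyFiles
	case size > int64(300e9):
		return errors.New("File too big, can't upload")
	}
	return nil
}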
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
	existingObj, err := f.NewObject(ctx, src.Remote())
	switch err {
	case nil:
		return existingObj, existingObj.Update(ctx, in, src, options...)
	case fs.ErrorObjectNotFound:
		// Not found so create it
		return f.PutUnchecked(ctx, in, src, options...)
	default:
		return nil, err
	}
}

// putUnchecked uploads the object with the given name and size
//
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) (fs.Object, error) {
	if size > int64(300e9) {
		return nil, errors.New("File too big, can't upload")
	} else if size == 0 {
		return nil, fs.ErrorCantUploadEmptyFiles
	}
	nodeResponse, err := f.getUploadNode(ctx)
	if err != nil {
		return nil, err
	}
	leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true)
	if err != nil {
		return nil, err
	}
	_, err = f.uploadFile(ctx, in, size, leaf, directoryID, nodeResponse.ID, nodeResponse.URL, options...)
	if err != nil {
		return nil, err
	}
	fileUploadResponse, err := f.endUpload(ctx, nodeResponse.ID, nodeResponse.URL)
	if err != nil {
		return nil, err
	}
	if len(fileUploadResponse.Links) != 1 {
		return nil, errors.New("unexpected number of files")
	}
	link := fileUploadResponse.Links[0]
	fileSize, err := strconv.ParseInt(link.Size, 10, 64)
	if err != nil {
		return nil, err
	}
	return &Object{
		fs:     f,
		remote: remote,
		file: File{
			ACL:         0,
			CDN:         0,
			Checksum:    link.Whirlpool,
			ContentType: "",
			Date:        time.Now().Format("2006-01-02 15:04:05"),
			Filename:    link.Filename,
			Pass:        0,
			Size:        fileSize,
			URL:         link.Download,
		},
	}, nil
}

// PutUnchecked uploads the object
//
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
	return f.putUnchecked(ctx, in, src.Remote(), src.Size(), options...)
} // Mkdir makes the directory (container, bucket) // // Shouldn't return an error if it already exists func (f *Fs) Mkdir(ctx context.Context, dir string) error { _, err := f.dirCache.FindDir(ctx, dir, true) return err } // Rmdir removes the directory (container, bucket) if empty // // Return an error if it doesn't exist or isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return err } folderID, err := strconv.Atoi(directoryID) if err != nil { return err } _, err = f.removeFolder(ctx, dir, folderID) if err != nil { return err } f.dirCache.FlushDir(dir) return nil } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.PutUncheckeder = (*Fs)(nil) _ dircache.DirCacher = (*Fs)(nil) ) rclone-1.53.3/backend/fichier/fichier_test.go000066400000000000000000000005441375552240400210760ustar00rootroot00000000000000// Test 1Fichier filesystem interface package fichier import ( "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fs.Config.LogLevel = fs.LogLevelDebug fstests.Run(t, &fstests.Opt{ RemoteName: "TestFichier:", }) } rclone-1.53.3/backend/fichier/object.go000066400000000000000000000073171375552240400177010ustar00rootroot00000000000000package fichier import ( "context" "io" "net/http" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/rest" ) // Object is a filesystem like object provided by an Fs type Object struct { fs *Fs remote string file File } // String returns a description of the Object func (o *Object) String() string { return o.file.Filename } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // ModTime returns the modification date of the file // It should return a best guess if one isn't available func (o *Object) ModTime(ctx context.Context) time.Time { modTime, err := time.Parse("2006-01-02 15:04:05", o.file.Date) if err != nil { return time.Now() } return modTime } // Size returns the size of the file func (o *Object) Size() int64 { return o.file.Size } // Fs returns read only access to the Fs that this object is part of func (o *Object) Fs() fs.Info { return o.fs } // Hash returns the selected checksum of the file // If no checksum is available it returns "" func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.Whirlpool { return "", hash.ErrUnsupported } return o.file.Checksum, nil } // Storable says whether this object can be stored func (o *Object) Storable() bool { return true } // SetModTime sets the metadata on the object to set the modification date func (o *Object) SetModTime(context.Context, time.Time) error { return fs.ErrorCantSetModTime //return errors.New("setting modtime is not supported for 1fichier remotes") } // Open opens the file for read. 
Call Close() on the returned io.ReadCloser func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { fs.FixRangeOption(options, o.file.Size) downloadToken, err := o.fs.getDownloadToken(ctx, o.file.URL) if err != nil { return nil, err } var resp *http.Response opts := rest.Opts{ Method: "GET", RootURL: downloadToken.URL, Options: options, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.rest.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return nil, err } return resp.Body, err } // Update in to the object with the modTime given of the given size // // When called from outside an Fs by rclone, src.Size() will always be >= 0. // But for unknown-sized objects (indicated by src.Size() == -1), Upload should either // return an error or update the object properly (rather than e.g. calling panic). func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { if src.Size() < 0 { return errors.New("refusing to update with unknown size") } // upload with new size but old name info, err := o.fs.putUnchecked(ctx, in, o.Remote(), src.Size(), options...) if err != nil { return err } // Delete duplicate after successful upload err = o.Remove(ctx) if err != nil { return errors.Wrap(err, "failed to remove old version") } // Replace guts of old object with new one *o = *info.(*Object) return nil } // Remove removes this object func (o *Object) Remove(ctx context.Context) error { // fs.Debugf(f, "Removing file `%s` with url `%s`", o.file.Filename, o.file.URL) _, err := o.fs.deleteFile(ctx, o.file.URL) if err != nil { return err } return nil } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.file.ContentType } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.file.URL } // Check the interfaces are satisfied var ( _ fs.Object = (*Object)(nil) _ fs.MimeTyper = (*Object)(nil) _ fs.IDer = (*Object)(nil) ) rclone-1.53.3/backend/fichier/structs.go000066400000000000000000000066171375552240400201440ustar00rootroot00000000000000package fichier // ListFolderRequest is the request structure of the corresponding request type ListFolderRequest struct { FolderID int `json:"folder_id"` } // ListFilesRequest is the request structure of the corresponding request type ListFilesRequest struct { FolderID int `json:"folder_id"` } // DownloadRequest is the request structure of the corresponding request type DownloadRequest struct { URL string `json:"url"` Single int `json:"single"` } // RemoveFolderRequest is the request structure of the corresponding request type RemoveFolderRequest struct { FolderID int `json:"folder_id"` } // RemoveFileRequest is the request structure of the corresponding request type RemoveFileRequest struct { Files []RmFile `json:"files"` } // RmFile is the request structure of the corresponding request type RmFile struct { URL string `json:"url"` } // GenericOKResponse is the response structure of the corresponding request type GenericOKResponse struct { Status string `json:"status"` Message string `json:"message"` } // MakeFolderRequest is the request structure of the corresponding request type MakeFolderRequest struct { Name string `json:"name"` FolderID int `json:"folder_id"` } // MakeFolderResponse is the response structure of the corresponding request type MakeFolderResponse struct { Name string `json:"name"` FolderID int `json:"folder_id"` } // GetUploadNodeResponse is the 
response structure of the corresponding request type GetUploadNodeResponse struct { ID string `json:"id"` URL string `json:"url"` } // GetTokenResponse is the response structure of the corresponding request type GetTokenResponse struct { URL string `json:"url"` Status string `json:"Status"` Message string `json:"Message"` } // SharedFolderResponse is the response structure of the corresponding request type SharedFolderResponse []SharedFile // SharedFile is the structure how 1Fichier returns a shared File type SharedFile struct { Filename string `json:"filename"` Link string `json:"link"` Size int64 `json:"size"` } // EndFileUploadResponse is the response structure of the corresponding request type EndFileUploadResponse struct { Incoming int `json:"incoming"` Links []struct { Download string `json:"download"` Filename string `json:"filename"` Remove string `json:"remove"` Size string `json:"size"` Whirlpool string `json:"whirlpool"` } `json:"links"` } // File is the structure how 1Fichier returns a File type File struct { ACL int `json:"acl"` CDN int `json:"cdn"` Checksum string `json:"checksum"` ContentType string `json:"content-type"` Date string `json:"date"` Filename string `json:"filename"` Pass int `json:"pass"` Size int64 `json:"size"` URL string `json:"url"` } // FilesList is the structure how 1Fichier returns a list of files type FilesList struct { Items []File `json:"items"` Status string `json:"Status"` } // Folder is the structure how 1Fichier returns a Folder type Folder struct { CreateDate string `json:"create_date"` ID int `json:"id"` Name string `json:"name"` Pass int `json:"pass"` } // FoldersList is the structure how 1Fichier returns a list of Folders type FoldersList struct { FolderID int `json:"folder_id"` Name string `json:"name"` Status string `json:"Status"` SubFolders []Folder `json:"sub_folders"` } rclone-1.53.3/backend/ftp/000077500000000000000000000000001375552240400152545ustar00rootroot00000000000000rclone-1.53.3/backend/ftp/ftp.go000066400000000000000000000634021375552240400164010ustar00rootroot00000000000000// Package ftp interfaces with FTP servers package ftp import ( "context" "crypto/tls" "io" "net/textproto" "os" "path" "runtime" "strings" "sync" "time" "github.com/jlaffaye/ftp" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "ftp", Description: "FTP Connection", NewFs: NewFs, Options: []fs.Option{{ Name: "host", Help: "FTP host to connect to", Required: true, Examples: []fs.OptionExample{{ Value: "ftp.example.com", Help: "Connect to ftp.example.com", }}, }, { Name: "user", Help: "FTP username, leave blank for current username, " + os.Getenv("USER"), }, { Name: "port", Help: "FTP port, leave blank to use default (21)", }, { Name: "pass", Help: "FTP password", IsPassword: true, Required: true, }, { Name: "tls", Help: `Use FTPS over TLS (Implicit) When using implicit FTP over TLS the client will connect using TLS right from the start, which in turn breaks the compatibility with non-TLS-aware servers. This is usually served over port 990 rather than port 21. 
Cannot be used in combination with explicit FTP.`, Default: false, }, { Name: "explicit_tls", Help: `Use FTP over TLS (Explicit) When using explicit FTP over TLS the client explicitly request security from the server in order to upgrade a plain text connection to an encrypted one. Cannot be used in combination with implicit FTP.`, Default: false, }, { Name: "concurrency", Help: "Maximum number of FTP simultaneous connections, 0 for unlimited", Default: 0, Advanced: true, }, { Name: "no_check_certificate", Help: "Do not verify the TLS certificate of the server", Default: false, Advanced: true, }, { Name: "disable_epsv", Help: "Disable using EPSV even if server advertises support", Default: false, Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // The FTP protocol can't handle trailing spaces (for instance // pureftpd turns them into _) // // proftpd can't handle '*' in file names // pureftpd can't handle '[', ']' or '*' Default: (encoder.Display | encoder.EncodeRightSpace), }}, }) } // Options defines the configuration for this backend type Options struct { Host string `config:"host"` User string `config:"user"` Pass string `config:"pass"` Port string `config:"port"` TLS bool `config:"tls"` ExplicitTLS bool `config:"explicit_tls"` Concurrency int `config:"concurrency"` SkipVerifyTLSCert bool `config:"no_check_certificate"` DisableEPSV bool `config:"disable_epsv"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote FTP server type Fs struct { name string // name of this remote root string // the path we are working on if any opt Options // parsed options features *fs.Features // optional features url string user string pass string dialAddr string poolMu sync.Mutex pool []*ftp.ServerConn tokens *pacer.TokenDispenser } // Object describes an FTP file type Object struct { fs *Fs remote string info *FileInfo } // FileInfo is the metadata known about an FTP file type FileInfo struct { Name string Size uint64 ModTime time.Time IsDir bool } // ------------------------------------------------------------ // Name of this fs func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String returns a description of the FS func (f *Fs) String() string { return f.url } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // Enable debugging output type debugLog struct { mu sync.Mutex auth bool } // Write writes len(p) bytes from p to the underlying data stream. It returns // the number of bytes written from p (0 <= n <= len(p)) and any error // encountered that caused the write to stop early. Write must return a non-nil // error if it returns n < len(p). Write must not modify the slice data, even // temporarily. // // Implementations must not retain p. // // This writes debug info to the log func (dl *debugLog) Write(p []byte) (n int, err error) { dl.mu.Lock() defer dl.mu.Unlock() _, file, _, ok := runtime.Caller(1) direction := "FTP Rx" if ok && strings.Contains(file, "multi") { direction = "FTP Tx" } lines := strings.Split(string(p), "\r\n") if lines[len(lines)-1] == "" { lines = lines[:len(lines)-1] } for _, line := range lines { if !dl.auth && strings.HasPrefix(line, "PASS") { fs.Debugf(direction, "PASS *****") continue } fs.Debugf(direction, "%q", line) } return len(p), nil } // Open a new connection to the FTP server. 
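// exampleTokenLimit is an illustrative sketch (hypothetical helper, not
// used by the backend). The connection pool below bounds concurrency with
// a pacer.TokenDispenser: when opt.Concurrency > 0, getFtpConnection takes
// a token before dialling or reusing a connection and putFtpConnection
// returns it. The same pattern in miniature:
func exampleTokenLimit(concurrency int, work func()) {
	tokens := pacer.NewTokenDispenser(concurrency)
	for i := 0; i < 10; i++ {
		go func() {
			tokens.Get() // blocks until one of the tokens is free
			defer tokens.Put()
			work() // at most `concurrency` of these run at once
		}()
	}
}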
func (f *Fs) ftpConnection() (*ftp.ServerConn, error) {
	fs.Debugf(f, "Connecting to FTP server")
	ftpConfig := []ftp.DialOption{ftp.DialWithTimeout(fs.Config.ConnectTimeout)}
	if f.opt.TLS && f.opt.ExplicitTLS {
		fs.Errorf(f, "Implicit TLS and explicit TLS are mutually incompatible. Please revise your config")
		return nil, errors.New("Implicit TLS and explicit TLS are mutually incompatible. Please revise your config")
	} else if f.opt.TLS {
		tlsConfig := &tls.Config{
			ServerName:         f.opt.Host,
			InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
		}
		ftpConfig = append(ftpConfig, ftp.DialWithTLS(tlsConfig))
	} else if f.opt.ExplicitTLS {
		tlsConfig := &tls.Config{
			ServerName:         f.opt.Host,
			InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
		}
		ftpConfig = append(ftpConfig, ftp.DialWithExplicitTLS(tlsConfig))
	}
	if f.opt.DisableEPSV {
		ftpConfig = append(ftpConfig, ftp.DialWithDisabledEPSV(true))
	}
	if fs.Config.Dump&(fs.DumpHeaders|fs.DumpBodies|fs.DumpRequests|fs.DumpResponses) != 0 {
		ftpConfig = append(ftpConfig, ftp.DialWithDebugOutput(&debugLog{auth: fs.Config.Dump&fs.DumpAuth != 0}))
	}
	c, err := ftp.Dial(f.dialAddr, ftpConfig...)
	if err != nil {
		fs.Errorf(f, "Error while Dialing %s: %s", f.dialAddr, err)
		return nil, errors.Wrap(err, "ftpConnection Dial")
	}
	err = c.Login(f.user, f.pass)
	if err != nil {
		_ = c.Quit()
		fs.Errorf(f, "Error while logging in to %s: %s", f.dialAddr, err)
		return nil, errors.Wrap(err, "ftpConnection Login")
	}
	return c, nil
}

// Get an FTP connection from the pool, or open a new one
func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
	if f.opt.Concurrency > 0 {
		f.tokens.Get()
	}
	f.poolMu.Lock()
	if len(f.pool) > 0 {
		c = f.pool[0]
		f.pool = f.pool[1:]
	}
	f.poolMu.Unlock()
	if c != nil {
		return c, nil
	}
	c, err = f.ftpConnection()
	if err != nil && f.opt.Concurrency > 0 {
		f.tokens.Put()
	}
	return c, err
}

// Return an FTP connection to the pool
//
// It sets the pointed-to connection to nil so it can't be reused
//
// if err is not nil then it checks the connection is alive using a
// NOOP request
func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
	if f.opt.Concurrency > 0 {
		defer f.tokens.Put()
	}
	if pc == nil {
		return
	}
	c := *pc
	if c == nil {
		return
	}
	*pc = nil
	if err != nil {
		// If not a regular FTP error code then check the connection
		_, isRegularError := errors.Cause(err).(*textproto.Error)
		if !isRegularError {
			nopErr := c.NoOp()
			if nopErr != nil {
				fs.Debugf(f, "Connection failed, closing: %v", nopErr)
				_ = c.Quit()
				return
			}
		}
	}
	f.poolMu.Lock()
	f.pool = append(f.pool, c)
	f.poolMu.Unlock()
}

// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
	ctx := context.Background()
	// defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
	// Parse config into Options struct
	opt := new(Options)
	err = configstruct.Set(m, opt)
	if err != nil {
		return nil, err
	}
	pass, err := obscure.Reveal(opt.Pass)
	if err != nil {
		return nil, errors.Wrap(err, "NewFS decrypt password")
	}
	user := opt.User
	if user == "" {
		user = os.Getenv("USER")
	}
	port := opt.Port
	if port == "" {
		port = "21"
	}
	dialAddr := opt.Host + ":" + port
	protocol := "ftp://"
	if opt.TLS {
		protocol = "ftps://"
	}
	u := protocol + path.Join(dialAddr+"/", root)
	f := &Fs{
		name:     name,
		root:     root,
		opt:      *opt,
		url:      u,
		user:     user,
		pass:     pass,
		dialAddr: dialAddr,
		tokens:   pacer.NewTokenDispenser(opt.Concurrency),
	}
	f.features = (&fs.Features{
		CanHaveEmptyDirectories: true,
	}).Fill(f)
	// Make a connection and pool it to return errors early
	c, err :=
f.getFtpConnection() if err != nil { return nil, errors.Wrap(err, "NewFs") } f.putFtpConnection(&c, nil) if root != "" { // Check to see if the root actually an existing file remote := path.Base(root) f.root = path.Dir(root) if f.root == "." { f.root = "" } _, err := f.NewObject(ctx, remote) if err != nil { if err == fs.ErrorObjectNotFound || errors.Cause(err) == fs.ErrorNotAFile { // File doesn't exist so return old f f.root = root return f, nil } return nil, err } // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, err } // translateErrorFile turns FTP errors into rclone errors if possible for a file func translateErrorFile(err error) error { switch errX := err.(type) { case *textproto.Error: switch errX.Code { case ftp.StatusFileUnavailable, ftp.StatusFileActionIgnored: err = fs.ErrorObjectNotFound } } return err } // translateErrorDir turns FTP errors into rclone errors if possible for a directory func translateErrorDir(err error) error { switch errX := err.(type) { case *textproto.Error: switch errX.Code { case ftp.StatusFileUnavailable, ftp.StatusFileActionIgnored: err = fs.ErrorDirNotFound } } return err } // entryToStandard converts an incoming ftp.Entry to Standard encoding func (f *Fs) entryToStandard(entry *ftp.Entry) { // Skip . and .. as we don't want these encoded if entry.Name == "." || entry.Name == ".." { return } entry.Name = f.opt.Enc.ToStandardName(entry.Name) entry.Target = f.opt.Enc.ToStandardPath(entry.Target) } // dirFromStandardPath returns dir in encoded form. func (f *Fs) dirFromStandardPath(dir string) string { // Skip . and .. as we don't want these encoded if dir == "." || dir == ".." { return dir } return f.opt.Enc.FromStandardPath(dir) } // findItem finds a directory entry for the name in its parent directory func (f *Fs) findItem(remote string) (entry *ftp.Entry, err error) { // defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err) fullPath := path.Join(f.root, remote) if fullPath == "" || fullPath == "." || fullPath == "/" { // if root, assume exists and synthesize an entry return &ftp.Entry{ Name: "", Type: ftp.EntryTypeFolder, Time: time.Now(), }, nil } dir := path.Dir(fullPath) base := path.Base(fullPath) c, err := f.getFtpConnection() if err != nil { return nil, errors.Wrap(err, "findItem") } files, err := c.List(f.dirFromStandardPath(dir)) f.putFtpConnection(&c, err) if err != nil { return nil, translateErrorFile(err) } for _, file := range files { f.entryToStandard(file) if file.Name == base { return file, nil } } return nil, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) { // defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err) entry, err := f.findItem(remote) if err != nil { return nil, err } if entry != nil && entry.Type != ftp.EntryTypeFolder { o := &Object{ fs: f, remote: remote, } info := &FileInfo{ Name: remote, Size: entry.Size, ModTime: entry.Time, } o.info = info return o, nil } return nil, fs.ErrorObjectNotFound } // dirExists checks the directory pointed to by remote exists or not func (f *Fs) dirExists(remote string) (exists bool, err error) { entry, err := f.findItem(remote) if err != nil { return false, errors.Wrap(err, "dirExists") } if entry != nil && entry.Type == ftp.EntryTypeFolder { return true, nil } return false, nil } // List the objects and directories in dir into entries. 
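// exampleStat is an illustrative sketch (hypothetical helper, not used by
// the backend) of how the translateError* helpers above are used at call
// sites: the server reports a missing path as a *textproto.Error (typically
// 550, ftp.StatusFileUnavailable), which is mapped to one of rclone's
// portable sentinel errors.
func exampleStat(f *Fs, dir string) error {
	c, err := f.getFtpConnection()
	if err != nil {
		return err
	}
	_, err = c.List(f.dirFromStandardPath(dir))
	f.putFtpConnection(&c, err)
	return translateErrorDir(err) // 550 becomes fs.ErrorDirNotFound
}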
The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { // defer log.Trace(dir, "dir=%q", dir)("entries=%v, err=%v", &entries, &err) c, err := f.getFtpConnection() if err != nil { return nil, errors.Wrap(err, "list") } var listErr error var files []*ftp.Entry resultchan := make(chan []*ftp.Entry, 1) errchan := make(chan error, 1) go func() { result, err := c.List(f.dirFromStandardPath(path.Join(f.root, dir))) f.putFtpConnection(&c, err) if err != nil { errchan <- err return } resultchan <- result }() // Wait for List for up to Timeout seconds timer := time.NewTimer(fs.Config.Timeout) select { case listErr = <-errchan: timer.Stop() return nil, translateErrorDir(listErr) case files = <-resultchan: timer.Stop() case <-timer.C: // if timer fired assume no error but connection dead fs.Errorf(f, "Timeout when waiting for List") return nil, errors.New("Timeout when waiting for List") } // Annoyingly FTP returns success for a directory which // doesn't exist, so check it really doesn't exist if no // entries found. if len(files) == 0 { exists, err := f.dirExists(dir) if err != nil { return nil, errors.Wrap(err, "list") } if !exists { return nil, fs.ErrorDirNotFound } } for i := range files { object := files[i] f.entryToStandard(object) newremote := path.Join(dir, object.Name) switch object.Type { case ftp.EntryTypeFolder: if object.Name == "." || object.Name == ".." { continue } d := fs.NewDir(newremote, object.Time) entries = append(entries, d) default: o := &Object{ fs: f, remote: newremote, } info := &FileInfo{ Name: newremote, Size: object.Size, ModTime: object.Time, } o.info = info entries = append(entries, o) } } return entries, nil } // Hashes are not supported func (f *Fs) Hashes() hash.Set { return 0 } // Precision shows Modified Time not supported func (f *Fs) Precision() time.Duration { return fs.ModTimeNotSupported } // Put in to the remote path with the modTime given of the given size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { // fs.Debugf(f, "Trying to put file %s", src.Remote()) err := f.mkParentDir(src.Remote()) if err != nil { return nil, errors.Wrap(err, "Put mkParentDir failed") } o := &Object{ fs: f, remote: src.Remote(), } err = o.Update(ctx, in, src, options...) return o, err } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) 
} // getInfo reads the FileInfo for a path func (f *Fs) getInfo(remote string) (fi *FileInfo, err error) { // defer fs.Trace(remote, "")("fi=%v, err=%v", &fi, &err) dir := path.Dir(remote) base := path.Base(remote) c, err := f.getFtpConnection() if err != nil { return nil, errors.Wrap(err, "getInfo") } files, err := c.List(f.dirFromStandardPath(dir)) f.putFtpConnection(&c, err) if err != nil { return nil, translateErrorFile(err) } for i := range files { file := files[i] f.entryToStandard(file) if file.Name == base { info := &FileInfo{ Name: remote, Size: file.Size, ModTime: file.Time, IsDir: file.Type == ftp.EntryTypeFolder, } return info, nil } } return nil, fs.ErrorObjectNotFound } // mkdir makes the directory and parents using unrooted paths func (f *Fs) mkdir(abspath string) error { abspath = path.Clean(abspath) if abspath == "." || abspath == "/" { return nil } fi, err := f.getInfo(abspath) if err == nil { if fi.IsDir { return nil } return fs.ErrorIsFile } else if err != fs.ErrorObjectNotFound { return errors.Wrapf(err, "mkdir %q failed", abspath) } parent := path.Dir(abspath) err = f.mkdir(parent) if err != nil { return err } c, connErr := f.getFtpConnection() if connErr != nil { return errors.Wrap(connErr, "mkdir") } err = c.MakeDir(f.dirFromStandardPath(abspath)) f.putFtpConnection(&c, err) switch errX := err.(type) { case *textproto.Error: switch errX.Code { case ftp.StatusFileUnavailable: // dir already exists: see issue #2181 err = nil case 521: // dir already exists: error number according to RFC 959: issue #2363 err = nil } } return err } // mkParentDir makes the parent of remote if necessary and any // directories above that func (f *Fs) mkParentDir(remote string) error { parent := path.Dir(remote) return f.mkdir(path.Join(f.root, parent)) } // Mkdir creates the directory if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) { // defer fs.Trace(dir, "")("err=%v", &err) root := path.Join(f.root, dir) return f.mkdir(root) } // Rmdir removes the directory (container, bucket) if empty // // Return an error if it doesn't exist or isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { c, err := f.getFtpConnection() if err != nil { return errors.Wrap(translateErrorFile(err), "Rmdir") } err = c.RemoveDir(f.dirFromStandardPath(path.Join(f.root, dir))) f.putFtpConnection(&c, err) return translateErrorDir(err) } // Move renames a remote file object func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } err := f.mkParentDir(remote) if err != nil { return nil, errors.Wrap(err, "Move mkParentDir failed") } c, err := f.getFtpConnection() if err != nil { return nil, errors.Wrap(err, "Move") } err = c.Rename( f.opt.Enc.FromStandardPath(path.Join(srcObj.fs.root, srcObj.remote)), f.opt.Enc.FromStandardPath(path.Join(f.root, remote)), ) f.putFtpConnection(&c, err) if err != nil { return nil, errors.Wrap(err, "Move Rename failed") } dstObj, err := f.NewObject(ctx, remote) if err != nil { return nil, errors.Wrap(err, "Move NewObject failed") } return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. 
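// exampleIgnoreAlreadyExists is an illustrative sketch (hypothetical
// helper, not used by the backend) isolating the normalisation that mkdir
// above applies: two reply codes are treated as "directory already exists"
// and therefore as success - 550 (ftp.StatusFileUnavailable, issue #2181)
// and 521 (RFC 959, issue #2363).
func exampleIgnoreAlreadyExists(err error) error {
	if errX, ok := err.(*textproto.Error); ok {
		switch errX.Code {
		case ftp.StatusFileUnavailable, 521:
			return nil // the directory is already there
		}
	}
	return err
}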
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcPath := path.Join(srcFs.root, srcRemote) dstPath := path.Join(f.root, dstRemote) // Check if destination exists fi, err := f.getInfo(dstPath) if err == nil { if fi.IsDir { return fs.ErrorDirExists } return fs.ErrorIsFile } else if err != fs.ErrorObjectNotFound { return errors.Wrapf(err, "DirMove getInfo failed") } // Make sure the parent directory exists err = f.mkdir(path.Dir(dstPath)) if err != nil { return errors.Wrap(err, "DirMove mkParentDir dst failed") } // Do the move c, err := f.getFtpConnection() if err != nil { return errors.Wrap(err, "DirMove") } err = c.Rename( f.dirFromStandardPath(srcPath), f.dirFromStandardPath(dstPath), ) f.putFtpConnection(&c, err) if err != nil { return errors.Wrapf(err, "DirMove Rename(%q,%q) failed", srcPath, dstPath) } return nil } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // String version of o func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the hash of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { return "", hash.ErrUnsupported } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return int64(o.info.Size) } // ModTime returns the modification time of the object func (o *Object) ModTime(ctx context.Context) time.Time { return o.info.ModTime } // SetModTime sets the modification time of the object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { return nil } // Storable returns a boolean as to whether this object is storable func (o *Object) Storable() bool { return true } // ftpReadCloser implements io.ReadCloser for FTP objects. 
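// exampleDecodeRange is an illustrative sketch (hypothetical helper, not
// used by the backend). Open below honours fs.SeekOption and fs.RangeOption
// by turning them into an FTP resume offset for RetrFrom plus a client-side
// read limit; on its own the option decoding looks like this:
func exampleDecodeRange(size int64, options ...fs.OpenOption) (offset, limit int64) {
	offset, limit = 0, -1 // a limit of -1 means "read to EOF"
	for _, option := range options {
		switch x := option.(type) {
		case *fs.SeekOption:
			offset = x.Offset
		case *fs.RangeOption:
			offset, limit = x.Decode(size)
		}
	}
	return offset, limit
}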
type ftpReadCloser struct { rc io.ReadCloser c *ftp.ServerConn f *Fs err error // errors found during read } // Read bytes into p func (f *ftpReadCloser) Read(p []byte) (n int, err error) { n, err = f.rc.Read(p) if err != nil && err != io.EOF { f.err = err // store any errors for Close to examine } return } // Close the FTP reader and return the connection to the pool func (f *ftpReadCloser) Close() error { var err error errchan := make(chan error, 1) go func() { errchan <- f.rc.Close() }() // Wait for Close for up to 60 seconds timer := time.NewTimer(60 * time.Second) select { case err = <-errchan: timer.Stop() case <-timer.C: // if timer fired assume no error but connection dead fs.Errorf(f.f, "Timeout when waiting for connection Close") f.f.putFtpConnection(nil, nil) return nil } // if errors while reading or closing, dump the connection if err != nil || f.err != nil { _ = f.c.Quit() f.f.putFtpConnection(nil, nil) } else { f.f.putFtpConnection(&f.c, nil) } // mask the error if it was caused by a premature close // NB StatusAboutToSend is to work around a bug in pureftpd // See: https://github.com/rclone/rclone/issues/3445#issuecomment-521654257 switch errX := err.(type) { case *textproto.Error: switch errX.Code { case ftp.StatusTransfertAborted, ftp.StatusFileUnavailable, ftp.StatusAboutToSend: err = nil } } return err } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) { // defer fs.Trace(o, "")("rc=%v, err=%v", &rc, &err) path := path.Join(o.fs.root, o.remote) var offset, limit int64 = 0, -1 for _, option := range options { switch x := option.(type) { case *fs.SeekOption: offset = x.Offset case *fs.RangeOption: offset, limit = x.Decode(o.Size()) default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } c, err := o.fs.getFtpConnection() if err != nil { return nil, errors.Wrap(err, "open") } fd, err := c.RetrFrom(o.fs.opt.Enc.FromStandardPath(path), uint64(offset)) if err != nil { o.fs.putFtpConnection(&c, err) return nil, errors.Wrap(err, "open") } rc = &ftpReadCloser{rc: readers.NewLimitedReadCloser(fd, limit), c: c, f: o.fs} return rc, nil } // Update the already existing object // // Copy the reader into the object updating modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { // defer fs.Trace(o, "src=%v", src)("err=%v", &err) path := path.Join(o.fs.root, o.remote) // remove the file if upload failed remove := func() { // Give the FTP server a chance to get its internal state in order after the error. // The error may have been local in which case we closed the connection. The server // may still be dealing with it for a moment. 
A sleep isn't ideal but I haven't been
		// able to think of a better method to find out if the server has finished - ncw
		time.Sleep(1 * time.Second)
		removeErr := o.Remove(ctx)
		if removeErr != nil {
			fs.Debugf(o, "Failed to remove: %v", removeErr)
		} else {
			fs.Debugf(o, "Removed after failed upload: %v", err)
		}
	}
	c, err := o.fs.getFtpConnection()
	if err != nil {
		return errors.Wrap(err, "Update")
	}
	err = c.Stor(o.fs.opt.Enc.FromStandardPath(path), in)
	if err != nil {
		_ = c.Quit() // toss this connection to avoid sync errors
		remove()
		o.fs.putFtpConnection(nil, err)
		return errors.Wrap(err, "update stor")
	}
	o.fs.putFtpConnection(&c, nil)
	o.info, err = o.fs.getInfo(path)
	if err != nil {
		return errors.Wrap(err, "update getinfo")
	}
	return nil
}

// Remove an object
func (o *Object) Remove(ctx context.Context) (err error) {
	// defer fs.Trace(o, "")("err=%v", &err)
	path := path.Join(o.fs.root, o.remote)
	// Check if it's a directory or a file
	info, err := o.fs.getInfo(path)
	if err != nil {
		return err
	}
	if info.IsDir {
		err = o.fs.Rmdir(ctx, o.remote)
	} else {
		// use a separate name for the connection error so the Delete
		// error below isn't shadowed and lost on return
		c, connErr := o.fs.getFtpConnection()
		if connErr != nil {
			return errors.Wrap(connErr, "Remove")
		}
		err = c.Delete(o.fs.opt.Enc.FromStandardPath(path))
		o.fs.putFtpConnection(&c, err)
	}
	return err
}

// Check the interfaces are satisfied
var (
	_ fs.Fs          = &Fs{}
	_ fs.Mover       = &Fs{}
	_ fs.DirMover    = &Fs{}
	_ fs.PutStreamer = &Fs{}
	_ fs.Object      = &Object{}
)

rclone-1.53.3/backend/ftp/ftp_test.go000066400000000000000000000020511375552240400174310ustar00rootroot00000000000000
// Test FTP filesystem interface
package ftp_test

import (
	"testing"

	"github.com/rclone/rclone/backend/ftp"
	"github.com/rclone/rclone/fstest"
	"github.com/rclone/rclone/fstest/fstests"
)

// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
	fstests.Run(t, &fstests.Opt{
		RemoteName: "TestFTPProftpd:",
		NilObject:  (*ftp.Object)(nil),
	})
}

func TestIntegration2(t *testing.T) {
	if *fstest.RemoteName != "" {
		t.Skip("skipping as -remote is set")
	}
	fstests.Run(t, &fstests.Opt{
		RemoteName: "TestFTPRclone:",
		NilObject:  (*ftp.Object)(nil),
	})
}

func TestIntegration3(t *testing.T) {
	if *fstest.RemoteName != "" {
		t.Skip("skipping as -remote is set")
	}
	fstests.Run(t, &fstests.Opt{
		RemoteName: "TestFTPPureftpd:",
		NilObject:  (*ftp.Object)(nil),
	})
}

// func TestIntegration4(t *testing.T) {
// 	if *fstest.RemoteName != "" {
// 		t.Skip("skipping as -remote is set")
// 	}
// 	fstests.Run(t, &fstests.Opt{
// 		RemoteName: "TestFTPVsftpd:",
// 		NilObject:  (*ftp.Object)(nil),
// 	})
// }

rclone-1.53.3/backend/googlecloudstorage/000077500000000000000000000000001375552240400203535ustar00rootroot00000000000000rclone-1.53.3/backend/googlecloudstorage/googlecloudstorage.go000066400000000000000000001006631375552240400246000ustar00rootroot00000000000000
// Package googlecloudstorage provides an interface to Google Cloud Storage
package googlecloudstorage

/*
Notes

Can't set Updated but can set Metadata on object creation

Patch needs full_control not just read_write

FIXME Patch/Delete/Get isn't working with files with spaces in - giving 404 error
- https://code.google.com/p/google-api-go-client/issues/detail?id=64
*/

import (
	"context"
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"net/http"
	"path"
	"strings"
	"time"

	"github.com/pkg/errors"
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config"
	"github.com/rclone/rclone/fs/config/configmap"
	"github.com/rclone/rclone/fs/config/configstruct"
	"github.com/rclone/rclone/fs/config/obscure"
	"github.com/rclone/rclone/fs/fserrors"
	"github.com/rclone/rclone/fs/fshttp"
	"github.com/rclone/rclone/fs/hash"
	"github.com/rclone/rclone/fs/walk"
	"github.com/rclone/rclone/lib/bucket"
	"github.com/rclone/rclone/lib/encoder"
	"github.com/rclone/rclone/lib/env"
	"github.com/rclone/rclone/lib/oauthutil"
	"github.com/rclone/rclone/lib/pacer"
	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
	"google.golang.org/api/googleapi"

	// NOTE: This API is deprecated
	storage "google.golang.org/api/storage/v1"
)

const (
	rcloneClientID              = "202264815644.apps.googleusercontent.com"
	rcloneEncryptedClientSecret = "Uj7C9jGfb9gmeaV70Lh058cNkWvepr-Es9sBm0zdgil7JaOWF1VySw"
	timeFormatIn                = time.RFC3339
	timeFormatOut               = "2006-01-02T15:04:05.000000000Z07:00"
	metaMtime                   = "mtime" // key to store mtime under in metadata
	listChunks                  = 1000    // chunk size to read directory listings
	minSleep                    = 10 * time.Millisecond
)

var (
	// Description of how to auth for this app
	storageConfig = &oauth2.Config{
		Scopes:       []string{storage.DevstorageReadWriteScope},
		Endpoint:     google.Endpoint,
		ClientID:     rcloneClientID,
		ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
		RedirectURL:  oauthutil.TitleBarRedirectURL,
	}
)

// Register with Fs
func init() {
	fs.Register(&fs.RegInfo{
		Name:        "google cloud storage",
		Prefix:      "gcs",
		Description: "Google Cloud Storage (this is not Google Drive)",
		NewFs:       NewFs,
		Config: func(name string, m configmap.Mapper) {
			saFile, _ := m.Get("service_account_file")
			saCreds, _ := m.Get("service_account_credentials")
			anonymous, _ := m.Get("anonymous")
			if saFile != "" || saCreds != "" || anonymous == "true" {
				return
			}
			err := oauthutil.Config("google cloud storage", name, m, storageConfig, nil)
			if err != nil {
				log.Fatalf("Failed to configure token: %v", err)
			}
		},
		Options: append(oauthutil.SharedOptions, []fs.Option{{
			Name: "project_number",
			Help: "Project number.\nOptional - needed only for list/create/delete buckets - see your developer console.",
		}, {
			Name: "service_account_file",
			Help: "Service Account Credentials JSON file path\nLeave blank normally.\nNeeded only if you want to use SA instead of interactive login." + env.ShellExpandHelp,
		}, {
			Name: "service_account_credentials",
			Help: "Service Account Credentials JSON blob\nLeave blank normally.\nNeeded only if you want to use SA instead of interactive login.",
			Hide: fs.OptionHideBoth,
		}, {
			Name:    "anonymous",
			Help:    "Access public buckets and objects without credentials\nSet to 'true' if you just want to download files and don't configure credentials.",
			Default: false,
		}, {
			Name: "object_acl",
			Help: "Access Control List for new objects.",
			Examples: []fs.OptionExample{{
				Value: "authenticatedRead",
				Help:  "Object owner gets OWNER access, and all Authenticated Users get READER access.",
			}, {
				Value: "bucketOwnerFullControl",
				Help:  "Object owner gets OWNER access, and project team owners get OWNER access.",
			}, {
				Value: "bucketOwnerRead",
				Help:  "Object owner gets OWNER access, and project team owners get READER access.",
			}, {
				Value: "private",
				Help:  "Object owner gets OWNER access [default if left blank].",
			}, {
				Value: "projectPrivate",
				Help:  "Object owner gets OWNER access, and project team members get access according to their roles.",
			}, {
				Value: "publicRead",
				Help:  "Object owner gets OWNER access, and all Users get READER access.",
			}},
		}, {
			Name: "bucket_acl",
			Help: "Access Control List for new buckets.",
			Examples: []fs.OptionExample{{
				Value: "authenticatedRead",
				Help:  "Project team owners get OWNER access, and all Authenticated Users get READER access.",
			}, {
				Value: "private",
				Help:  "Project team owners get OWNER access [default if left blank].",
			}, {
				Value: "projectPrivate",
				Help:  "Project team members get access according to their roles.",
			}, {
				Value: "publicRead",
				Help:  "Project team owners get OWNER access, and all Users get READER access.",
			}, {
				Value: "publicReadWrite",
				Help:  "Project team owners get OWNER access, and all Users get WRITER access.",
			}},
		}, {
			Name: "bucket_policy_only",
			Help: `Access checks should use bucket-level IAM policies.

If you want to upload objects to a bucket with Bucket Policy Only set
then you will need to set this.
When it is set, rclone: - ignores ACLs set on buckets - ignores ACLs set on objects - creates buckets with Bucket Policy Only set Docs: https://cloud.google.com/storage/docs/bucket-policy-only `, Default: false, }, { Name: "location", Help: "Location for the newly created buckets.", Examples: []fs.OptionExample{{ Value: "", Help: "Empty for default location (US).", }, { Value: "asia", Help: "Multi-regional location for Asia.", }, { Value: "eu", Help: "Multi-regional location for Europe.", }, { Value: "us", Help: "Multi-regional location for United States.", }, { Value: "asia-east1", Help: "Taiwan.", }, { Value: "asia-east2", Help: "Hong Kong.", }, { Value: "asia-northeast1", Help: "Tokyo.", }, { Value: "asia-south1", Help: "Mumbai.", }, { Value: "asia-southeast1", Help: "Singapore.", }, { Value: "australia-southeast1", Help: "Sydney.", }, { Value: "europe-north1", Help: "Finland.", }, { Value: "europe-west1", Help: "Belgium.", }, { Value: "europe-west2", Help: "London.", }, { Value: "europe-west3", Help: "Frankfurt.", }, { Value: "europe-west4", Help: "Netherlands.", }, { Value: "us-central1", Help: "Iowa.", }, { Value: "us-east1", Help: "South Carolina.", }, { Value: "us-east4", Help: "Northern Virginia.", }, { Value: "us-west1", Help: "Oregon.", }, { Value: "us-west2", Help: "California.", }}, }, { Name: "storage_class", Help: "The storage class to use when storing objects in Google Cloud Storage.", Examples: []fs.OptionExample{{ Value: "", Help: "Default", }, { Value: "MULTI_REGIONAL", Help: "Multi-regional storage class", }, { Value: "REGIONAL", Help: "Regional storage class", }, { Value: "NEARLINE", Help: "Nearline storage class", }, { Value: "COLDLINE", Help: "Coldline storage class", }, { Value: "ARCHIVE", Help: "Archive storage class", }, { Value: "DURABLE_REDUCED_AVAILABILITY", Help: "Durable reduced availability storage class", }}, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, Default: (encoder.Base | encoder.EncodeCrLf | encoder.EncodeInvalidUtf8), }}...), }) } // Options defines the configuration for this backend type Options struct { ProjectNumber string `config:"project_number"` ServiceAccountFile string `config:"service_account_file"` ServiceAccountCredentials string `config:"service_account_credentials"` Anonymous bool `config:"anonymous"` ObjectACL string `config:"object_acl"` BucketACL string `config:"bucket_acl"` BucketPolicyOnly bool `config:"bucket_policy_only"` Location string `config:"location"` StorageClass string `config:"storage_class"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote storage server type Fs struct { name string // name of this remote root string // the path we are working on if any opt Options // parsed options features *fs.Features // optional features svc *storage.Service // the connection to the storage server client *http.Client // authorized client rootBucket string // bucket part of root (if any) rootDirectory string // directory part of root (if any) cache *bucket.Cache // cache of bucket status pacer *fs.Pacer // To pace the API calls } // Object describes a storage object // // Will definitely have info but maybe not meta type Object struct { fs *Fs // what this object is part of remote string // The remote path url string // download path md5sum string // The MD5Sum of the object bytes int64 // Bytes in the object modTime time.Time // Modified time of the object mimeType string } // ------------------------------------------------------------ // Name of the remote (as passed 
into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { if f.rootBucket == "" { return fmt.Sprintf("GCS root") } if f.rootDirectory == "" { return fmt.Sprintf("GCS bucket %s", f.rootBucket) } return fmt.Sprintf("GCS bucket %s path %s", f.rootBucket, f.rootDirectory) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // shouldRetry determines whether a given err rates being retried func shouldRetry(err error) (again bool, errOut error) { again = false if err != nil { if fserrors.ShouldRetry(err) { again = true } else { switch gerr := err.(type) { case *googleapi.Error: if gerr.Code >= 500 && gerr.Code < 600 { // All 5xx errors should be retried again = true } else if len(gerr.Errors) > 0 { reason := gerr.Errors[0].Reason if reason == "rateLimitExceeded" || reason == "userRateLimitExceeded" { again = true } } } } } return again, err } // parsePath parses a remote 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // split returns bucket and bucketPath from the rootRelativePath // relative to f.root func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) { bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath)) return f.opt.Enc.FromStandardName(bucketName), f.opt.Enc.FromStandardPath(bucketPath) } // split returns bucket and bucketPath from the object func (o *Object) split() (bucket, bucketPath string) { return o.fs.split(o.remote) } func getServiceAccountClient(credentialsData []byte) (*http.Client, error) { conf, err := google.JWTConfigFromJSON(credentialsData, storageConfig.Scopes...) 
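	// Editor's note (an added explanation, not original rclone commentary):
	// JWTConfigFromJSON parses a service-account JSON key (client_email,
	// private_key, ...) into a config whose TokenSource mints OAuth2 tokens
	// for the requested scopes without any interactive login; the context
	// built below carries rclone's fshttp client so that the oauth2 package
	// performs those token fetches through it.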
if err != nil { return nil, errors.Wrap(err, "error processing credentials") } ctxWithSpecialClient := oauthutil.Context(fshttp.NewClient(fs.Config)) return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil } // setRoot changes the root of the Fs func (f *Fs) setRoot(root string) { f.root = parsePath(root) f.rootBucket, f.rootDirectory = bucket.Split(f.root) } // NewFs constructs an Fs from the path, bucket:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.TODO() var oAuthClient *http.Client // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } if opt.ObjectACL == "" { opt.ObjectACL = "private" } if opt.BucketACL == "" { opt.BucketACL = "private" } // try loading service account credentials from env variable, then from a file if opt.ServiceAccountCredentials == "" && opt.ServiceAccountFile != "" { loadedCreds, err := ioutil.ReadFile(env.ShellExpand(opt.ServiceAccountFile)) if err != nil { return nil, errors.Wrap(err, "error opening service account credentials file") } opt.ServiceAccountCredentials = string(loadedCreds) } if opt.Anonymous { oAuthClient = &http.Client{} } else if opt.ServiceAccountCredentials != "" { oAuthClient, err = getServiceAccountClient([]byte(opt.ServiceAccountCredentials)) if err != nil { return nil, errors.Wrap(err, "failed configuring Google Cloud Storage Service Account") } } else { oAuthClient, _, err = oauthutil.NewClient(name, m, storageConfig) if err != nil { ctx := context.Background() oAuthClient, err = google.DefaultClient(ctx, storage.DevstorageFullControlScope) if err != nil { return nil, errors.Wrap(err, "failed to configure Google Cloud Storage") } } } f := &Fs{ name: name, root: root, opt: *opt, pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))), cache: bucket.NewCache(), } f.setRoot(root) f.features = (&fs.Features{ ReadMimeType: true, WriteMimeType: true, BucketBased: true, BucketBasedRootOK: true, }).Fill(f) // Create a new authorized Drive client. f.client = oAuthClient f.svc, err = storage.New(f.client) if err != nil { return nil, errors.Wrap(err, "couldn't create Google Cloud Storage client") } if f.rootBucket != "" && f.rootDirectory != "" { // Check to see if the object exists encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory) err = f.pacer.Call(func() (bool, error) { _, err = f.svc.Objects.Get(f.rootBucket, encodedDirectory).Context(ctx).Do() return shouldRetry(err) }) if err == nil { newRoot := path.Dir(f.root) if newRoot == "." { newRoot = "" } f.setRoot(newRoot) // return an error with an fs which points to the parent return f, fs.ErrorIsFile } } return f, nil } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *storage.Object) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } if info != nil { o.setMetaData(info) } else { err := o.readMetaData(ctx) // reads info and meta, returning an error if err != nil { return nil, err } } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // listFn is called from list to handle an object. 
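// exampleListAll is an illustrative sketch (hypothetical helper, not used
// by the backend). Stripped of the prefix, delimiter and name-encoding
// handling that the list method below layers on top, the pagination loop
// against the storage v1 API reduces to this:
func exampleListAll(ctx context.Context, f *Fs, bucketName string, fn func(*storage.Object)) error {
	list := f.svc.Objects.List(bucketName).MaxResults(listChunks)
	for {
		objects, err := list.Context(ctx).Do()
		if err != nil {
			return err
		}
		for _, object := range objects.Items {
			fn(object)
		}
		if objects.NextPageToken == "" {
			return nil
		}
		list.PageToken(objects.NextPageToken)
	}
}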
type listFn func(remote string, object *storage.Object, isDirectory bool) error // list the objects into the function supplied // // dir is the starting directory, "" for root // // Set recurse to read sub directories // // The remote has prefix removed from it and if addBucket is set // then it adds the bucket to the start. func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, fn listFn) (err error) { if prefix != "" { prefix += "/" } if directory != "" { directory += "/" } list := f.svc.Objects.List(bucket).Prefix(directory).MaxResults(listChunks) if !recurse { list = list.Delimiter("/") } for { var objects *storage.Objects err = f.pacer.Call(func() (bool, error) { objects, err = list.Context(ctx).Do() return shouldRetry(err) }) if err != nil { if gErr, ok := err.(*googleapi.Error); ok { if gErr.Code == http.StatusNotFound { err = fs.ErrorDirNotFound } } return err } if !recurse { var object storage.Object for _, remote := range objects.Prefixes { if !strings.HasSuffix(remote, "/") { continue } remote = f.opt.Enc.ToStandardPath(remote) if !strings.HasPrefix(remote, prefix) { fs.Logf(f, "Odd name received %q", remote) continue } remote = remote[len(prefix) : len(remote)-1] if addBucket { remote = path.Join(bucket, remote) } err = fn(remote, &object, true) if err != nil { return err } } } for _, object := range objects.Items { remote := f.opt.Enc.ToStandardPath(object.Name) if !strings.HasPrefix(remote, prefix) { fs.Logf(f, "Odd name received %q", object.Name) continue } remote = remote[len(prefix):] isDirectory := remote == "" || strings.HasSuffix(remote, "/") if addBucket { remote = path.Join(bucket, remote) } // is this a directory marker? if isDirectory && object.Size == 0 { continue // skip directory marker } err = fn(remote, object, false) if err != nil { return err } } if objects.NextPageToken == "" { break } list.PageToken(objects.NextPageToken) } return nil } // Convert a list item into a DirEntry func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *storage.Object, isDirectory bool) (fs.DirEntry, error) { if isDirectory { d := fs.NewDir(remote, time.Time{}).SetSize(int64(object.Size)) return d, nil } o, err := f.newObjectWithInfo(ctx, remote, object) if err != nil { return nil, err } return o, nil } // listDir lists a single directory func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) { // List the objects err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *storage.Object, isDirectory bool) error { entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory) if err != nil { return err } if entry != nil { entries = append(entries, entry) } return nil }) if err != nil { return nil, err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) return entries, err } // listBuckets lists the buckets func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) { if f.opt.ProjectNumber == "" { return nil, errors.New("can't list buckets without project number") } listBuckets := f.svc.Buckets.List(f.opt.ProjectNumber).MaxResults(listChunks) for { var buckets *storage.Buckets err = f.pacer.Call(func() (bool, error) { buckets, err = listBuckets.Context(ctx).Do() return shouldRetry(err) }) if err != nil { return nil, err } for _, bucket := range buckets.Items { d := fs.NewDir(f.opt.Enc.ToStandardName(bucket.Name), time.Time{}) entries = append(entries, d) } if 
buckets.NextPageToken == "" { break } listBuckets.PageToken(buckets.NextPageToken) } return entries, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { bucket, directory := f.split(dir) if bucket == "" { if directory != "" { return nil, fs.ErrorListBucketRequired } return f.listBuckets(ctx) } return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "") } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively that doing a directory traversal. func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { bucket, directory := f.split(dir) list := walk.NewListRHelper(callback) listR := func(bucket, directory, prefix string, addBucket bool) error { return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *storage.Object, isDirectory bool) error { entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory) if err != nil { return err } return list.Add(entry) }) } if bucket == "" { entries, err := f.listBuckets(ctx) if err != nil { return err } for _, entry := range entries { err = list.Add(entry) if err != nil { return err } bucket := entry.Remote() err = listR(bucket, "", f.rootDirectory, true) if err != nil { return err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) } } else { err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "") if err != nil { return err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) } return list.Flush() } // Put the object into the bucket // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { // Temporary Object under construction o := &Object{ fs: f, remote: src.Remote(), } return o, o.Update(ctx, in, src, options...) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) } // Mkdir creates the bucket if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) { bucket, _ := f.split(dir) return f.makeBucket(ctx, bucket) } // makeBucket creates the bucket if it doesn't exist func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) { return f.cache.Create(bucket, func() error { // List something from the bucket to see if it exists. Doing it like this enables the use of a // service account that only has the "Storage Object Admin" role. See #2193 for details. 
err = f.pacer.Call(func() (bool, error) { _, err = f.svc.Objects.List(bucket).MaxResults(1).Context(ctx).Do() return shouldRetry(err) }) if err == nil { // Bucket already exists return nil } else if gErr, ok := err.(*googleapi.Error); ok { if gErr.Code != http.StatusNotFound { return errors.Wrap(err, "failed to get bucket") } } else { return errors.Wrap(err, "failed to get bucket") } if f.opt.ProjectNumber == "" { return errors.New("can't make bucket without project number") } bucket := storage.Bucket{ Name: bucket, Location: f.opt.Location, StorageClass: f.opt.StorageClass, } if f.opt.BucketPolicyOnly { bucket.IamConfiguration = &storage.BucketIamConfiguration{ BucketPolicyOnly: &storage.BucketIamConfigurationBucketPolicyOnly{ Enabled: true, }, } } return f.pacer.Call(func() (bool, error) { insertBucket := f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket) if !f.opt.BucketPolicyOnly { insertBucket.PredefinedAcl(f.opt.BucketACL) } _, err = insertBucket.Context(ctx).Do() return shouldRetry(err) }) }, nil) } // Rmdir deletes the bucket if the fs is at the root // // Returns an error if it isn't empty: Error 409: The bucket you tried // to delete was not empty. func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) { bucket, directory := f.split(dir) if bucket == "" || directory != "" { return nil } return f.cache.Remove(bucket, func() error { return f.pacer.Call(func() (bool, error) { err = f.svc.Buckets.Delete(bucket).Context(ctx).Do() return shouldRetry(err) }) }) } // Precision returns the precision func (f *Fs) Precision() time.Duration { return time.Nanosecond } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { dstBucket, dstPath := f.split(remote) err := f.makeBucket(ctx, dstBucket) if err != nil { return nil, err } srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } srcBucket, srcPath := srcObj.split() // Temporary Object under construction dstObj := &Object{ fs: f, remote: remote, } var newObject *storage.Object err = f.pacer.Call(func() (bool, error) { copyObject := f.svc.Objects.Copy(srcBucket, srcPath, dstBucket, dstPath, nil) if !f.opt.BucketPolicyOnly { copyObject.DestinationPredefinedAcl(f.opt.ObjectACL) } newObject, err = copyObject.Context(ctx).Do() return shouldRetry(err) }) if err != nil { return nil, err } // Set the metadata for the new object while we have it dstObj.setMetaData(newObject) return dstObj, nil } // Hashes returns the supported hash sets. 
func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the MD5 sum of an object as a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } return o.md5sum, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return o.bytes } // setMetaData sets the fs data from a storage.Object func (o *Object) setMetaData(info *storage.Object) { o.url = info.MediaLink o.bytes = int64(info.Size) o.mimeType = info.ContentType // Read md5sum md5sumData, err := base64.StdEncoding.DecodeString(info.Md5Hash) if err != nil { fs.Logf(o, "Bad MD5 decode: %v", err) } else { o.md5sum = hex.EncodeToString(md5sumData) } // read mtime out of metadata if available mtimeString, ok := info.Metadata[metaMtime] if ok { modTime, err := time.Parse(timeFormatIn, mtimeString) if err == nil { o.modTime = modTime return } fs.Debugf(o, "Failed to read mtime from metadata: %s", err) } // Fallback to the Updated time modTime, err := time.Parse(timeFormatIn, info.Updated) if err != nil { fs.Logf(o, "Bad time decode: %v", err) } else { o.modTime = modTime } } // readObjectInfo reads the definition for an object func (o *Object) readObjectInfo(ctx context.Context) (object *storage.Object, err error) { bucket, bucketPath := o.split() err = o.fs.pacer.Call(func() (bool, error) { object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Context(ctx).Do() return shouldRetry(err) }) if err != nil { if gErr, ok := err.(*googleapi.Error); ok { if gErr.Code == http.StatusNotFound { return nil, fs.ErrorObjectNotFound } } return nil, err } return object, nil } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData(ctx context.Context) (err error) { if !o.modTime.IsZero() { return nil } object, err := o.readObjectInfo(ctx) if err != nil { return err } o.setMetaData(object) return nil } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx) if err != nil { // fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // Returns metadata for an object func metadataFromModTime(modTime time.Time) map[string]string { metadata := make(map[string]string, 1) metadata[metaMtime] = modTime.Format(timeFormatOut) return metadata } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) { // read the complete existing object first object, err := o.readObjectInfo(ctx) if err != nil { return err } // Add the mtime to the existing metadata mtime := modTime.Format(timeFormatOut) if object.Metadata == nil { object.Metadata = make(map[string]string, 1) } object.Metadata[metaMtime] = mtime // Copy the object to itself to update the metadata // Using PATCH requires too many permissions bucket, bucketPath := o.split() var newObject *storage.Object err = o.fs.pacer.Call(func() (bool, error) { copyObject := 
o.fs.svc.Objects.Copy(bucket, bucketPath, bucket, bucketPath, object) if !o.fs.opt.BucketPolicyOnly { copyObject.DestinationPredefinedAcl(o.fs.opt.ObjectACL) } newObject, err = copyObject.Context(ctx).Do() return shouldRetry(err) }) if err != nil { return err } o.setMetaData(newObject) return nil } // Storable returns a boolean as to whether this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { req, err := http.NewRequest("GET", o.url, nil) if err != nil { return nil, err } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext fs.FixRangeOption(options, o.bytes) fs.OpenOptionAddHTTPHeaders(req.Header, options) var res *http.Response err = o.fs.pacer.Call(func() (bool, error) { res, err = o.fs.client.Do(req) if err == nil { err = googleapi.CheckResponse(res) if err != nil { _ = res.Body.Close() // ignore error } } return shouldRetry(err) }) if err != nil { return nil, err } _, isRanging := req.Header["Range"] if !(res.StatusCode == http.StatusOK || (isRanging && res.StatusCode == http.StatusPartialContent)) { _ = res.Body.Close() // ignore error return nil, errors.Errorf("bad response: %d: %s", res.StatusCode, res.Status) } return res.Body, nil } // Update the object with the contents of the io.Reader, modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { bucket, bucketPath := o.split() err := o.fs.makeBucket(ctx, bucket) if err != nil { return err } modTime := src.ModTime(ctx) object := storage.Object{ Bucket: bucket, Name: bucketPath, ContentType: fs.MimeType(ctx, src), Metadata: metadataFromModTime(modTime), } // Apply upload options for _, option := range options { key, value := option.Header() lowerKey := strings.ToLower(key) switch lowerKey { case "": // ignore case "cache-control": object.CacheControl = value case "content-disposition": object.ContentDisposition = value case "content-encoding": object.ContentEncoding = value case "content-language": object.ContentLanguage = value case "content-type": object.ContentType = value default: const googMetaPrefix = "x-goog-meta-" if strings.HasPrefix(lowerKey, googMetaPrefix) { metaKey := lowerKey[len(googMetaPrefix):] object.Metadata[metaKey] = value } else { fs.Errorf(o, "Don't know how to set key %q on upload", key) } } } var newObject *storage.Object err = o.fs.pacer.CallNoRetry(func() (bool, error) { insertObject := o.fs.svc.Objects.Insert(bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name) if !o.fs.opt.BucketPolicyOnly { insertObject.PredefinedAcl(o.fs.opt.ObjectACL) } newObject, err = insertObject.Context(ctx).Do() return shouldRetry(err) }) if err != nil { return err } // Set the metadata for the new object while we have it o.setMetaData(newObject) return nil } // Remove an object func (o *Object) Remove(ctx context.Context) (err error) { bucket, bucketPath := o.split() err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.svc.Objects.Delete(bucket, bucketPath).Context(ctx).Do() return shouldRetry(err) }) return err } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.mimeType } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.Copier = &Fs{} _ fs.PutStreamer = &Fs{} _ fs.ListRer = &Fs{} _ fs.Object = &Object{} _ fs.MimeTyper = &Object{} ) 
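// Editor's note: the program below is an illustrative sketch, not part of the
// rclone source. It isolates the mtime round trip used by setMetaData and
// SetModTime above: the modification time is serialised into the object's user
// metadata on upload and parsed back on read, falling back to the object's
// Updated time if the key is missing or malformed. The metadata key "mtime"
// and the RFC3339 formats are assumptions standing in for metaMtime,
// timeFormatIn and timeFormatOut, which are defined earlier in this file.
package main

import (
	"fmt"
	"time"
)

func main() {
	// On upload/SetModTime the backend stores the time in user metadata.
	modTime := time.Date(2013, 7, 26, 8, 57, 21, 0, time.UTC)
	metadata := map[string]string{"mtime": modTime.Format(time.RFC3339Nano)}

	// On read, setMetaData parses the value back out of the metadata.
	parsed, err := time.Parse(time.RFC3339Nano, metadata["mtime"])
	if err != nil {
		fmt.Println("bad time decode:", err)
		return
	}
	fmt.Println(parsed.Equal(modTime)) // prints: true
}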
rclone-1.53.3/backend/googlecloudstorage/googlecloudstorage_test.go000066400000000000000000000006541375552240400256360ustar00rootroot00000000000000// Test GoogleCloudStorage filesystem interface package googlecloudstorage_test import ( "testing" "github.com/rclone/rclone/backend/googlecloudstorage" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestGoogleCloudStorage:", NilObject: (*googlecloudstorage.Object)(nil), }) } rclone-1.53.3/backend/googlephotos/000077500000000000000000000000001375552240400171745ustar00rootroot00000000000000rclone-1.53.3/backend/googlephotos/albums.go000066400000000000000000000062441375552240400210140ustar00rootroot00000000000000// This file contains the albums abstraction package googlephotos import ( "path" "strings" "sync" "github.com/rclone/rclone/backend/googlephotos/api" ) // All the albums type albums struct { mu sync.Mutex dupes map[string][]*api.Album // duplicated names byID map[string]*api.Album //..indexed by ID byTitle map[string]*api.Album //..indexed by Title path map[string][]string // partial album names to directory } // Create a new empty set of albums func newAlbums() *albums { return &albums{ dupes: map[string][]*api.Album{}, byID: map[string]*api.Album{}, byTitle: map[string]*api.Album{}, path: map[string][]string{}, } } // add an album func (as *albums) add(album *api.Album) { // Munge the name of the album into a sensible path name album.Title = path.Clean(album.Title) if album.Title == "." || album.Title == "/" { album.Title = addID("", album.ID) } as.mu.Lock() as._add(album) as.mu.Unlock() } // _add an album - call with lock held func (as *albums) _add(album *api.Album) { // update dupes by title dupes := as.dupes[album.Title] dupes = append(dupes, album) as.dupes[album.Title] = dupes // Dedupe the album name if necessary if len(dupes) >= 2 { // If this is the first dupe, then need to adjust the first one if len(dupes) == 2 { firstAlbum := dupes[0] as._del(firstAlbum) as._add(firstAlbum) // undo add of firstAlbum to dupes as.dupes[album.Title] = dupes } album.Title = addID(album.Title, album.ID) } // Store the new album as.byID[album.ID] = album as.byTitle[album.Title] = album // Store the partial paths dir, leaf := album.Title, "" for dir != "" { i := strings.LastIndex(dir, "/") if i >= 0 { dir, leaf = dir[:i], dir[i+1:] } else { dir, leaf = "", dir } dirs := as.path[dir] found := false for _, dir := range dirs { if dir == leaf { found = true } } if !found { as.path[dir] = append(as.path[dir], leaf) } } } // del an album func (as *albums) del(album *api.Album) { as.mu.Lock() as._del(album) as.mu.Unlock() } // _del an album - call with lock held func (as *albums) _del(album *api.Album) { // We leave in dupes so it doesn't cause albums to get renamed // Remove from byID and byTitle delete(as.byID, album.ID) delete(as.byTitle, album.Title) // Remove from paths dir, leaf := album.Title, "" for dir != "" { // Can't delete if this dir exists anywhere in the path structure if _, found := as.path[dir]; found { break } i := strings.LastIndex(dir, "/") if i >= 0 { dir, leaf = dir[:i], dir[i+1:] } else { dir, leaf = "", dir } dirs := as.path[dir] for i, dir := range dirs { if dir == leaf { dirs = append(dirs[:i], dirs[i+1:]...) 
break } } if len(dirs) == 0 { delete(as.path, dir) } else { as.path[dir] = dirs } } } // get an album by title func (as *albums) get(title string) (album *api.Album, ok bool) { as.mu.Lock() defer as.mu.Unlock() album, ok = as.byTitle[title] return album, ok } // getDirs gets directories below an album path func (as *albums) getDirs(albumPath string) (dirs []string, ok bool) { as.mu.Lock() defer as.mu.Unlock() dirs, ok = as.path[albumPath] return dirs, ok } rclone-1.53.3/backend/googlephotos/albums_test.go000066400000000000000000000141431375552240400220500ustar00rootroot00000000000000package googlephotos import ( "testing" "github.com/rclone/rclone/backend/googlephotos/api" "github.com/stretchr/testify/assert" ) func TestNewAlbums(t *testing.T) { albums := newAlbums() assert.NotNil(t, albums.dupes) assert.NotNil(t, albums.byID) assert.NotNil(t, albums.byTitle) assert.NotNil(t, albums.path) } func TestAlbumsAdd(t *testing.T) { albums := newAlbums() assert.Equal(t, map[string][]*api.Album{}, albums.dupes) assert.Equal(t, map[string]*api.Album{}, albums.byID) assert.Equal(t, map[string]*api.Album{}, albums.byTitle) assert.Equal(t, map[string][]string{}, albums.path) a1 := &api.Album{ Title: "one", ID: "1", } albums.add(a1) assert.Equal(t, map[string][]*api.Album{ "one": {a1}, }, albums.dupes) assert.Equal(t, map[string]*api.Album{ "1": a1, }, albums.byID) assert.Equal(t, map[string]*api.Album{ "one": a1, }, albums.byTitle) assert.Equal(t, map[string][]string{ "": {"one"}, }, albums.path) a2 := &api.Album{ Title: "two", ID: "2", } albums.add(a2) assert.Equal(t, map[string][]*api.Album{ "one": {a1}, "two": {a2}, }, albums.dupes) assert.Equal(t, map[string]*api.Album{ "1": a1, "2": a2, }, albums.byID) assert.Equal(t, map[string]*api.Album{ "one": a1, "two": a2, }, albums.byTitle) assert.Equal(t, map[string][]string{ "": {"one", "two"}, }, albums.path) // Add a duplicate a2a := &api.Album{ Title: "two", ID: "2a", } albums.add(a2a) assert.Equal(t, map[string][]*api.Album{ "one": {a1}, "two": {a2, a2a}, }, albums.dupes) assert.Equal(t, map[string]*api.Album{ "1": a1, "2": a2, "2a": a2a, }, albums.byID) assert.Equal(t, map[string]*api.Album{ "one": a1, "two {2}": a2, "two {2a}": a2a, }, albums.byTitle) assert.Equal(t, map[string][]string{ "": {"one", "two {2}", "two {2a}"}, }, albums.path) // Add a sub directory a1sub := &api.Album{ Title: "one/sub", ID: "1sub", } albums.add(a1sub) assert.Equal(t, map[string][]*api.Album{ "one": {a1}, "two": {a2, a2a}, "one/sub": {a1sub}, }, albums.dupes) assert.Equal(t, map[string]*api.Album{ "1": a1, "2": a2, "2a": a2a, "1sub": a1sub, }, albums.byID) assert.Equal(t, map[string]*api.Album{ "one": a1, "one/sub": a1sub, "two {2}": a2, "two {2a}": a2a, }, albums.byTitle) assert.Equal(t, map[string][]string{ "": {"one", "two {2}", "two {2a}"}, "one": {"sub"}, }, albums.path) // Add a weird path a0 := &api.Album{ Title: "/../././..////.", ID: "0", } albums.add(a0) assert.Equal(t, map[string][]*api.Album{ "{0}": {a0}, "one": {a1}, "two": {a2, a2a}, "one/sub": {a1sub}, }, albums.dupes) assert.Equal(t, map[string]*api.Album{ "0": a0, "1": a1, "2": a2, "2a": a2a, "1sub": a1sub, }, albums.byID) assert.Equal(t, map[string]*api.Album{ "{0}": a0, "one": a1, "one/sub": a1sub, "two {2}": a2, "two {2a}": a2a, }, albums.byTitle) assert.Equal(t, map[string][]string{ "": {"one", "two {2}", "two {2a}", "{0}"}, "one": {"sub"}, }, albums.path) } func TestAlbumsDel(t *testing.T) { albums := newAlbums() a1 := &api.Album{ Title: "one", ID: "1", } albums.add(a1) a2 := &api.Album{ Title: 
"two", ID: "2", } albums.add(a2) // Add a duplicate a2a := &api.Album{ Title: "two", ID: "2a", } albums.add(a2a) // Add a sub directory a1sub := &api.Album{ Title: "one/sub", ID: "1sub", } albums.add(a1sub) assert.Equal(t, map[string][]*api.Album{ "one": {a1}, "two": {a2, a2a}, "one/sub": {a1sub}, }, albums.dupes) assert.Equal(t, map[string]*api.Album{ "1": a1, "2": a2, "2a": a2a, "1sub": a1sub, }, albums.byID) assert.Equal(t, map[string]*api.Album{ "one": a1, "one/sub": a1sub, "two {2}": a2, "two {2a}": a2a, }, albums.byTitle) assert.Equal(t, map[string][]string{ "": {"one", "two {2}", "two {2a}"}, "one": {"sub"}, }, albums.path) albums.del(a1) assert.Equal(t, map[string][]*api.Album{ "one": {a1}, "two": {a2, a2a}, "one/sub": {a1sub}, }, albums.dupes) assert.Equal(t, map[string]*api.Album{ "2": a2, "2a": a2a, "1sub": a1sub, }, albums.byID) assert.Equal(t, map[string]*api.Album{ "one/sub": a1sub, "two {2}": a2, "two {2a}": a2a, }, albums.byTitle) assert.Equal(t, map[string][]string{ "": {"one", "two {2}", "two {2a}"}, "one": {"sub"}, }, albums.path) albums.del(a2) assert.Equal(t, map[string][]*api.Album{ "one": {a1}, "two": {a2, a2a}, "one/sub": {a1sub}, }, albums.dupes) assert.Equal(t, map[string]*api.Album{ "2a": a2a, "1sub": a1sub, }, albums.byID) assert.Equal(t, map[string]*api.Album{ "one/sub": a1sub, "two {2a}": a2a, }, albums.byTitle) assert.Equal(t, map[string][]string{ "": {"one", "two {2a}"}, "one": {"sub"}, }, albums.path) albums.del(a2a) assert.Equal(t, map[string][]*api.Album{ "one": {a1}, "two": {a2, a2a}, "one/sub": {a1sub}, }, albums.dupes) assert.Equal(t, map[string]*api.Album{ "1sub": a1sub, }, albums.byID) assert.Equal(t, map[string]*api.Album{ "one/sub": a1sub, }, albums.byTitle) assert.Equal(t, map[string][]string{ "": {"one"}, "one": {"sub"}, }, albums.path) albums.del(a1sub) assert.Equal(t, map[string][]*api.Album{ "one": {a1}, "two": {a2, a2a}, "one/sub": {a1sub}, }, albums.dupes) assert.Equal(t, map[string]*api.Album{}, albums.byID) assert.Equal(t, map[string]*api.Album{}, albums.byTitle) assert.Equal(t, map[string][]string{}, albums.path) } func TestAlbumsGet(t *testing.T) { albums := newAlbums() a1 := &api.Album{ Title: "one", ID: "1", } albums.add(a1) album, ok := albums.get("one") assert.Equal(t, true, ok) assert.Equal(t, a1, album) album, ok = albums.get("notfound") assert.Equal(t, false, ok) assert.Nil(t, album) } func TestAlbumsGetDirs(t *testing.T) { albums := newAlbums() a1 := &api.Album{ Title: "one", ID: "1", } albums.add(a1) dirs, ok := albums.getDirs("") assert.Equal(t, true, ok) assert.Equal(t, []string{"one"}, dirs) dirs, ok = albums.getDirs("notfound") assert.Equal(t, false, ok) assert.Nil(t, dirs) } rclone-1.53.3/backend/googlephotos/api/000077500000000000000000000000001375552240400177455ustar00rootroot00000000000000rclone-1.53.3/backend/googlephotos/api/types.go000066400000000000000000000150541375552240400214450ustar00rootroot00000000000000package api import ( "fmt" "time" ) // ErrorDetails in the internals of the Error type type ErrorDetails struct { Code int `json:"code"` Message string `json:"message"` Status string `json:"status"` } // Error is returned on errors type Error struct { Details ErrorDetails `json:"error"` } // Error satisfies error interface func (e *Error) Error() string { return fmt.Sprintf("%s (%d %s)", e.Details.Message, e.Details.Code, e.Details.Status) } // Album of photos type Album struct { ID string `json:"id,omitempty"` Title string `json:"title"` ProductURL string `json:"productUrl,omitempty"` MediaItemsCount string 
`json:"mediaItemsCount,omitempty"` CoverPhotoBaseURL string `json:"coverPhotoBaseUrl,omitempty"` CoverPhotoMediaItemID string `json:"coverPhotoMediaItemId,omitempty"` IsWriteable bool `json:"isWriteable,omitempty"` } // ListAlbums is returned from albums.list and sharedAlbums.list type ListAlbums struct { Albums []Album `json:"albums"` SharedAlbums []Album `json:"sharedAlbums"` NextPageToken string `json:"nextPageToken"` } // CreateAlbum creates an Album type CreateAlbum struct { Album *Album `json:"album"` } // MediaItem is a photo or video type MediaItem struct { ID string `json:"id"` ProductURL string `json:"productUrl"` BaseURL string `json:"baseUrl"` MimeType string `json:"mimeType"` MediaMetadata struct { CreationTime time.Time `json:"creationTime"` Width string `json:"width"` Height string `json:"height"` Photo struct { } `json:"photo"` } `json:"mediaMetadata"` Filename string `json:"filename"` } // MediaItems is returned from mediaitems.list, mediaitems.search type MediaItems struct { MediaItems []MediaItem `json:"mediaItems"` NextPageToken string `json:"nextPageToken"` } //Content categories // NONE Default content category. This category is ignored when any other category is used in the filter. // LANDSCAPES Media items containing landscapes. // RECEIPTS Media items containing receipts. // CITYSCAPES Media items containing cityscapes. // LANDMARKS Media items containing landmarks. // SELFIES Media items that are selfies. // PEOPLE Media items containing people. // PETS Media items containing pets. // WEDDINGS Media items from weddings. // BIRTHDAYS Media items from birthdays. // DOCUMENTS Media items containing documents. // TRAVEL Media items taken during travel. // ANIMALS Media items containing animals. // FOOD Media items containing food. // SPORT Media items from sporting events. // NIGHT Media items taken at night. // PERFORMANCES Media items from performances. // WHITEBOARDS Media items containing whiteboards. // SCREENSHOTS Media items that are screenshots. // UTILITY Media items that are considered to be utility. These include, but aren't limited to documents, screenshots, whiteboards etc. // ARTS Media items containing art. // CRAFTS Media items containing crafts. // FASHION Media items related to fashion. // HOUSES Media items containing houses. // GARDENS Media items containing gardens. // FLOWERS Media items containing flowers. // HOLIDAYS Media items taken of holidays. // MediaTypes // ALL_MEDIA Treated as if no filters are applied. All media types are included. // VIDEO All media items that are considered videos. This also includes movies the user has created using the Google Photos app. // PHOTO All media items that are considered photos. This includes .bmp, .gif, .ico, .jpg (and other spellings), .tiff, .webp and special photo types such as iOS live photos, Android motion photos, panoramas, photospheres. // Features // NONE Treated as if no filters are applied. All features are included. // FAVORITES Media items that the user has marked as favorites in the Google Photos app. 
// Date is used as part of SearchFilter type Date struct { Year int `json:"year,omitempty"` Month int `json:"month,omitempty"` Day int `json:"day,omitempty"` } // DateFilter is used to add date ranges to media item queries type DateFilter struct { Dates []Date `json:"dates,omitempty"` Ranges []struct { StartDate Date `json:"startDate,omitempty"` EndDate Date `json:"endDate,omitempty"` } `json:"ranges,omitempty"` } // ContentFilter is used to add content categories to media item queries type ContentFilter struct { IncludedContentCategories []string `json:"includedContentCategories,omitempty"` ExcludedContentCategories []string `json:"excludedContentCategories,omitempty"` } // MediaTypeFilter is used to add media types to media item queries type MediaTypeFilter struct { MediaTypes []string `json:"mediaTypes,omitempty"` } // FeatureFilter is used to add features to media item queries type FeatureFilter struct { IncludedFeatures []string `json:"includedFeatures,omitempty"` } // Filters combines all the filter types for media item queries type Filters struct { DateFilter *DateFilter `json:"dateFilter,omitempty"` ContentFilter *ContentFilter `json:"contentFilter,omitempty"` MediaTypeFilter *MediaTypeFilter `json:"mediaTypeFilter,omitempty"` FeatureFilter *FeatureFilter `json:"featureFilter,omitempty"` IncludeArchivedMedia *bool `json:"includeArchivedMedia,omitempty"` ExcludeNonAppCreatedData *bool `json:"excludeNonAppCreatedData,omitempty"` } // SearchFilter is used with mediaItems.search type SearchFilter struct { AlbumID string `json:"albumId,omitempty"` PageSize int `json:"pageSize"` PageToken string `json:"pageToken,omitempty"` Filters *Filters `json:"filters,omitempty"` } // SimpleMediaItem is part of NewMediaItem type SimpleMediaItem struct { UploadToken string `json:"uploadToken"` } // NewMediaItem is a single media item for upload type NewMediaItem struct { Description string `json:"description"` SimpleMediaItem SimpleMediaItem `json:"simpleMediaItem"` } // BatchCreateRequest creates media items from upload tokens type BatchCreateRequest struct { AlbumID string `json:"albumId,omitempty"` NewMediaItems []NewMediaItem `json:"newMediaItems"` } // BatchCreateResponse is returned from BatchCreateRequest type BatchCreateResponse struct { NewMediaItemResults []struct { UploadToken string `json:"uploadToken"` Status struct { Message string `json:"message"` Code int `json:"code"` } `json:"status"` MediaItem MediaItem `json:"mediaItem"` } `json:"newMediaItemResults"` } // BatchRemoveItems is for removing items from an album type BatchRemoveItems struct { MediaItemIds []string `json:"mediaItemIds"` } rclone-1.53.3/backend/googlephotos/googlephotos.go000066400000000000000000000701131375552240400222360ustar00rootroot00000000000000// Package googlephotos provides an interface to Google Photos package googlephotos // FIXME Resumable uploads not implemented - rclone can't resume uploads in general import ( "context" "encoding/json" "fmt" "io" golog "log" "net/http" "net/url" "path" "regexp" "strconv" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/googlephotos/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/dirtree" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/lib/oauthutil" 
"github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/rest" "golang.org/x/oauth2" "golang.org/x/oauth2/google" ) var ( errCantUpload = errors.New("can't upload files here") errCantMkdir = errors.New("can't make directories here") errCantRmdir = errors.New("can't remove this directory") errAlbumDelete = errors.New("google photos API does not implement deleting albums") errRemove = errors.New("google photos API only implements removing files from albums") errOwnAlbums = errors.New("google photos API only allows uploading to albums rclone created") ) const ( rcloneClientID = "202264815644-rt1o1c9evjaotbpbab10m83i8cnjk077.apps.googleusercontent.com" rcloneEncryptedClientSecret = "kLJLretPefBgrDHosdml_nlF64HZ9mUcO85X5rdjYBPP8ChA-jr3Ow" rootURL = "https://photoslibrary.googleapis.com/v1" listChunks = 100 // chunk size to read directory listings albumChunks = 50 // chunk size to read album listings minSleep = 10 * time.Millisecond scopeReadOnly = "https://www.googleapis.com/auth/photoslibrary.readonly" scopeReadWrite = "https://www.googleapis.com/auth/photoslibrary" ) var ( // Description of how to auth for this app oauthConfig = &oauth2.Config{ Scopes: []string{ "openid", "profile", scopeReadWrite, }, Endpoint: google.Endpoint, ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), RedirectURL: oauthutil.TitleBarRedirectURL, } ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "google photos", Prefix: "gphotos", Description: "Google Photos", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { fs.Errorf(nil, "Couldn't parse config into struct: %v", err) return } // Fill in the scopes if opt.ReadOnly { oauthConfig.Scopes[0] = scopeReadOnly } else { oauthConfig.Scopes[0] = scopeReadWrite } // Do the oauth err = oauthutil.Config("google photos", name, m, oauthConfig, nil) if err != nil { golog.Fatalf("Failed to configure token: %v", err) } // Warn the user fmt.Print(` *** IMPORTANT: All media items uploaded to Google Photos with rclone *** are stored in full resolution at original quality. These uploads *** will count towards storage in your Google Account. `) }, Options: append(oauthutil.SharedOptions, []fs.Option{{ Name: "read_only", Default: false, Help: `Set to make the Google Photos backend read only. If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.`, }, { Name: "read_size", Default: false, Help: `Set to read the size of media items. Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. 
However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.`, Advanced: true, }, { Name: "start_year", Default: 2000, Help: `Year limits the photos to be downloaded to those which are uploaded after the given year`, Advanced: true, }}...), }) } // Options defines the configuration for this backend type Options struct { ReadOnly bool `config:"read_only"` ReadSize bool `config:"read_size"` StartYear int `config:"start_year"` } // Fs represents a remote storage server type Fs struct { name string // name of this remote root string // the path we are working on if any opt Options // parsed options features *fs.Features // optional features unAuth *rest.Client // unauthenticated http client srv *rest.Client // the connection to the Google Photos server ts *oauthutil.TokenSource // token source for oauth2 pacer *fs.Pacer // To pace the API calls startTime time.Time // time Fs was started - used for datestamps albumsMu sync.Mutex // protect albums (but not contents) albums map[bool]*albums // albums, shared or not uploadedMu sync.Mutex // to protect the below uploaded dirtree.DirTree // record of uploaded items createMu sync.Mutex // held when creating albums to prevent dupes } // Object describes a storage object // // Will definitely have info but maybe not meta type Object struct { fs *Fs // what this object is part of remote string // The remote path url string // download path id string // ID of this object bytes int64 // Bytes in the object modTime time.Time // Modified time of the object mimeType string } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("Google Photos path %q", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // dirTime returns the time to set a directory to func (f *Fs) dirTime() time.Time { return f.startTime } // startYear returns the start year func (f *Fs) startYear() int { return f.opt.StartYear } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests. 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. 
It returns the err as a convenience func shouldRetry(resp *http.Response, err error) (bool, error) { return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { body, err := rest.ReadBody(resp) if err != nil { body = nil } // Google sends 404 messages as images so be prepared for that if strings.HasPrefix(resp.Header.Get("Content-Type"), "image/") { body = []byte("Image not found or broken") } var e = api.Error{ Details: api.ErrorDetails{ Code: resp.StatusCode, Message: string(body), Status: resp.Status, }, } if body != nil { _ = json.Unmarshal(body, &e) } return &e } // NewFs constructs an Fs from the path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } baseClient := fshttp.NewClient(fs.Config) oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient) if err != nil { return nil, errors.Wrap(err, "failed to configure Google Photos") } root = strings.Trim(path.Clean(root), "/") if root == "." || root == "/" { root = "" } f := &Fs{ name: name, root: root, opt: *opt, unAuth: rest.NewClient(baseClient), srv: rest.NewClient(oAuthClient).SetRoot(rootURL), ts: ts, pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))), startTime: time.Now(), albums: map[bool]*albums{}, uploaded: dirtree.New(), } f.features = (&fs.Features{ ReadMimeType: true, }).Fill(f) f.srv.SetErrorHandler(errorHandler) _, _, pattern := patterns.match(f.root, "", true) if pattern != nil && pattern.isFile { oldRoot := f.root var leaf string f.root, leaf = path.Split(f.root) f.root = strings.TrimRight(f.root, "/") _, err := f.NewObject(context.TODO(), leaf) if err == nil { return f, fs.ErrorIsFile } f.root = oldRoot } return f, nil } // fetchEndpoint gets the openid endpoint named from the Google config func (f *Fs) fetchEndpoint(ctx context.Context, name string) (endpoint string, err error) { // Get openID config without auth opts := rest.Opts{ Method: "GET", RootURL: "https://accounts.google.com/.well-known/openid-configuration", } var openIDconfig map[string]interface{} err = f.pacer.Call(func() (bool, error) { resp, err := f.unAuth.CallJSON(ctx, &opts, nil, &openIDconfig) return shouldRetry(resp, err) }) if err != nil { return "", errors.Wrap(err, "couldn't read openID config") } // Find userinfo endpoint endpoint, ok := openIDconfig[name].(string) if !ok { return "", errors.Errorf("couldn't find %q from openID config", name) } return endpoint, nil } // UserInfo fetches info about the current user with oauth2 func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err error) { endpoint, err := f.fetchEndpoint(ctx, "userinfo_endpoint") if err != nil { return nil, err } // Fetch the user info with auth opts := rest.Opts{ Method: "GET", RootURL: endpoint, } err = f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, nil, &userInfo) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't read user info") } return userInfo, nil } // Disconnect kills the token and refresh token func (f *Fs) Disconnect(ctx context.Context) (err error) { endpoint, err := f.fetchEndpoint(ctx, "revocation_endpoint") if err != nil { return err } token, err := f.ts.Token() if err != nil { return err } // Revoke the token and the refresh token opts := rest.Opts{ 
Method: "POST", RootURL: endpoint, MultipartParams: url.Values{ "token": []string{token.AccessToken}, "token_type_hint": []string{"access_token"}, }, } var res interface{} err = f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, nil, &res) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "couldn't revoke token") } fs.Infof(f, "res = %+v", res) return nil } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.MediaItem) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } if info != nil { o.setMetaData(info) } else { err := o.readMetaData(ctx) // reads info and meta, returning an error if err != nil { return nil, err } } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { defer log.Trace(f, "remote=%q", remote)("") return f.newObjectWithInfo(ctx, remote, nil) } // addID adds the ID to name func addID(name string, ID string) string { idStr := "{" + ID + "}" if name == "" { return idStr } return name + " " + idStr } // addFileID adds the ID to the fileName passed in func addFileID(fileName string, ID string) string { ext := path.Ext(fileName) base := fileName[:len(fileName)-len(ext)] return addID(base, ID) + ext } var idRe = regexp.MustCompile(`\{([A-Za-z0-9_-]{55,})\}`) // findID finds an ID in string if one is there or "" func findID(name string) string { match := idRe.FindStringSubmatch(name) if match == nil { return "" } return match[1] } // list the albums into an internal cache // FIXME cache invalidation func (f *Fs) listAlbums(ctx context.Context, shared bool) (all *albums, err error) { f.albumsMu.Lock() defer f.albumsMu.Unlock() all, ok := f.albums[shared] if ok && all != nil { return all, nil } opts := rest.Opts{ Method: "GET", Path: "/albums", Parameters: url.Values{}, } if shared { opts.Path = "/sharedAlbums" } all = newAlbums() opts.Parameters.Set("pageSize", strconv.Itoa(albumChunks)) lastID := "" for { var result api.ListAlbums var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't list albums") } newAlbums := result.Albums if shared { newAlbums = result.SharedAlbums } if len(newAlbums) > 0 && newAlbums[0].ID == lastID { // skip first if ID duplicated from last page newAlbums = newAlbums[1:] } if len(newAlbums) > 0 { lastID = newAlbums[len(newAlbums)-1].ID } for i := range newAlbums { all.add(&newAlbums[i]) } if result.NextPageToken == "" { break } opts.Parameters.Set("pageToken", result.NextPageToken) } f.albums[shared] = all return all, nil } // listFn is called from list to handle an object. 
type listFn func(remote string, object *api.MediaItem, isDirectory bool) error // list the objects into the function supplied // // dir is the starting directory, "" for root // // Set recurse to read sub directories func (f *Fs) list(ctx context.Context, filter api.SearchFilter, fn listFn) (err error) { opts := rest.Opts{ Method: "POST", Path: "/mediaItems:search", } filter.PageSize = listChunks filter.PageToken = "" lastID := "" for { var result api.MediaItems var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &filter, &result) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "couldn't list files") } items := result.MediaItems if len(items) > 0 && items[0].ID == lastID { // skip first if ID duplicated from last page items = items[1:] } if len(items) > 0 { lastID = items[len(items)-1].ID } for i := range items { item := &result.MediaItems[i] remote := item.Filename remote = strings.Replace(remote, "/", "／", -1) // replace "/" with FULLWIDTH SOLIDUS so the name can't be mistaken for a path err = fn(remote, item, false) if err != nil { return err } } if result.NextPageToken == "" { break } filter.PageToken = result.NextPageToken } return nil } // Convert a list item into a DirEntry func (f *Fs) itemToDirEntry(ctx context.Context, remote string, item *api.MediaItem, isDirectory bool) (fs.DirEntry, error) { if isDirectory { d := fs.NewDir(remote, f.dirTime()) return d, nil } o := &Object{ fs: f, remote: remote, } o.setMetaData(item) return o, nil } // listDir lists a single directory func (f *Fs) listDir(ctx context.Context, prefix string, filter api.SearchFilter) (entries fs.DirEntries, err error) { // List the objects err = f.list(ctx, filter, func(remote string, item *api.MediaItem, isDirectory bool) error { entry, err := f.itemToDirEntry(ctx, prefix+remote, item, isDirectory) if err != nil { return err } if entry != nil { entries = append(entries, entry) } return nil }) if err != nil { return nil, err } // Dedupe the file names dupes := map[string]int{} for _, entry := range entries { o, ok := entry.(*Object) if ok { dupes[o.remote]++ } } for _, entry := range entries { o, ok := entry.(*Object) if ok { duplicated := dupes[o.remote] > 1 if duplicated || o.remote == "" { o.remote = addFileID(o.remote, o.id) } } } return entries, err } // listUploads lists a single directory from the uploads func (f *Fs) listUploads(ctx context.Context, dir string) (entries fs.DirEntries, err error) { f.uploadedMu.Lock() entries, ok := f.uploaded[dir] f.uploadedMu.Unlock() if !ok && dir != "" { return nil, fs.ErrorDirNotFound } return entries, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. 
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { defer log.Trace(f, "dir=%q", dir)("err=%v", &err) match, prefix, pattern := patterns.match(f.root, dir, false) if pattern == nil || pattern.isFile { return nil, fs.ErrorDirNotFound } if pattern.toEntries != nil { return pattern.toEntries(ctx, f, prefix, match) } return nil, fs.ErrorDirNotFound } // Put the object into the remote // // Copy the reader into the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { defer log.Trace(f, "src=%+v", src)("") // Temporary Object under construction o := &Object{ fs: f, remote: src.Remote(), } return o, o.Update(ctx, in, src, options...) } // createAlbum creates the album func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Album, err error) { opts := rest.Opts{ Method: "POST", Path: "/albums", Parameters: url.Values{}, } var request = api.CreateAlbum{ Album: &api.Album{ Title: albumTitle, }, } var result api.Album var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, request, &result) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "couldn't create album") } f.albums[false].add(&result) return &result, nil } // getOrCreateAlbum gets an existing album or creates a new one // // It does the creation with the lock held to avoid duplicates func (f *Fs) getOrCreateAlbum(ctx context.Context, albumTitle string) (album *api.Album, err error) { f.createMu.Lock() defer f.createMu.Unlock() albums, err := f.listAlbums(ctx, false) if err != nil { return nil, err } album, ok := albums.get(albumTitle) if ok { return album, nil } return f.createAlbum(ctx, albumTitle) } // Mkdir creates the album if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) { defer log.Trace(f, "dir=%q", dir)("err=%v", &err) match, prefix, pattern := patterns.match(f.root, dir, false) if pattern == nil { return fs.ErrorDirNotFound } if !pattern.canMkdir { return errCantMkdir } if pattern.isUpload { f.uploadedMu.Lock() d := fs.NewDir(strings.Trim(prefix, "/"), f.dirTime()) f.uploaded.AddEntry(d) f.uploadedMu.Unlock() return nil } albumTitle := match[1] _, err = f.getOrCreateAlbum(ctx, albumTitle) return err } // Rmdir deletes a directory // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) { defer log.Trace(f, "dir=%q", dir)("err=%v", &err) match, _, pattern := patterns.match(f.root, dir, false) if pattern == nil { return fs.ErrorDirNotFound } if !pattern.canMkdir { return errCantRmdir } if pattern.isUpload { f.uploadedMu.Lock() err = f.uploaded.Prune(map[string]bool{ dir: true, }) f.uploadedMu.Unlock() return err } albumTitle := match[1] allAlbums, err := f.listAlbums(ctx, false) if err != nil { return err } album, ok := allAlbums.get(albumTitle) if !ok { return fs.ErrorDirNotFound } _ = album return errAlbumDelete } // Precision returns the precision func (f *Fs) Precision() time.Duration { return fs.ModTimeNotSupported } // Hashes returns the supported hash sets. 
func (f *Fs) Hashes() hash.Set { return hash.Set(hash.None) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns an empty string and hash.ErrUnsupported - Google Photos does not expose file hashes func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { return "", hash.ErrUnsupported } // Size returns the size of an object in bytes func (o *Object) Size() int64 { defer log.Trace(o, "")("") if !o.fs.opt.ReadSize || o.bytes >= 0 { return o.bytes } ctx := context.TODO() err := o.readMetaData(ctx) if err != nil { fs.Debugf(o, "Size: Failed to read metadata: %v", err) return -1 } var resp *http.Response opts := rest.Opts{ Method: "HEAD", RootURL: o.downloadURL(), } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { fs.Debugf(o, "Reading size failed: %v", err) } else { lengthStr := resp.Header.Get("Content-Length") length, err := strconv.ParseInt(lengthStr, 10, 64) if err != nil { fs.Debugf(o, "Reading size failed to parse Content-Length %q: %v", lengthStr, err) } else { o.bytes = length } } return o.bytes } // setMetaData sets the fs data from an api.MediaItem func (o *Object) setMetaData(info *api.MediaItem) { o.url = info.BaseURL o.id = info.ID o.bytes = -1 // FIXME o.mimeType = info.MimeType o.modTime = info.MediaMetadata.CreationTime } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData(ctx context.Context) (err error) { if !o.modTime.IsZero() && o.url != "" { return nil } dir, fileName := path.Split(o.remote) dir = strings.Trim(dir, "/") _, _, pattern := patterns.match(o.fs.root, o.remote, true) if pattern == nil { return fs.ErrorObjectNotFound } if !pattern.isFile { return fs.ErrorNotAFile } // If have ID fetch it directly if id := findID(fileName); id != "" { opts := rest.Opts{ Method: "GET", Path: "/mediaItems/" + id, } var item api.MediaItem var resp *http.Response err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &item) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "couldn't get media item") } o.setMetaData(&item) return nil } // Otherwise list the directory the file is in entries, err := o.fs.List(ctx, dir) if err != nil { if err == fs.ErrorDirNotFound { return fs.ErrorObjectNotFound } return err } // and find the file in the directory for _, entry := range entries { if entry.Remote() == o.remote { if newO, ok := entry.(*Object); ok { *o = *newO return nil } } } return fs.ErrorObjectNotFound } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { defer log.Trace(o, "")("") err := o.readMetaData(ctx) if err != nil { fs.Debugf(o, "ModTime: Failed to read metadata: %v", err) return time.Now() } return o.modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) { return fs.ErrorCantSetModTime } // Storable returns a boolean as to whether this object is storable func (o *Object) Storable() bool { return true } // 
downloadURL returns the URL for a full bytes download for the object func (o *Object) downloadURL() string { url := o.url + "=d" if strings.HasPrefix(o.mimeType, "video/") { url += "v" } return url } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { defer log.Trace(o, "")("") err = o.readMetaData(ctx) if err != nil { fs.Debugf(o, "Open: Failed to read metadata: %v", err) return nil, err } var resp *http.Response opts := rest.Opts{ Method: "GET", RootURL: o.downloadURL(), Options: options, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return nil, err } return resp.Body, err } // Update the object with the contents of the io.Reader, modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { defer log.Trace(o, "src=%+v", src)("err=%v", &err) match, _, pattern := patterns.match(o.fs.root, o.remote, true) if pattern == nil || !pattern.isFile || !pattern.canUpload { return errCantUpload } var ( albumID string fileName string ) if pattern.isUpload { fileName = match[1] } else { var albumTitle string albumTitle, fileName = match[1], match[2] album, err := o.fs.getOrCreateAlbum(ctx, albumTitle) if err != nil { return err } if !album.IsWriteable { return errOwnAlbums } albumID = album.ID } // Upload the media item in exchange for an UploadToken opts := rest.Opts{ Method: "POST", Path: "/uploads", Options: options, ExtraHeaders: map[string]string{ "X-Goog-Upload-File-Name": fileName, "X-Goog-Upload-Protocol": "raw", }, Body: in, } var token []byte var resp *http.Response err = o.fs.pacer.CallNoRetry(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) if err != nil { return shouldRetry(resp, err) } token, err = rest.ReadBody(resp) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "couldn't upload file") } uploadToken := strings.TrimSpace(string(token)) if uploadToken == "" { return errors.New("empty upload token") } // Create the media item from an UploadToken, optionally adding to an album opts = rest.Opts{ Method: "POST", Path: "/mediaItems:batchCreate", } var request = api.BatchCreateRequest{ AlbumID: albumID, NewMediaItems: []api.NewMediaItem{ { SimpleMediaItem: api.SimpleMediaItem{ UploadToken: uploadToken, }, }, }, } var result api.BatchCreateResponse err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, request, &result) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "failed to create media item") } if len(result.NewMediaItemResults) != 1 { return errors.New("bad response to BatchCreate wrong number of items") } mediaItemResult := result.NewMediaItemResults[0] if mediaItemResult.Status.Code != 0 { return errors.Errorf("upload failed: %s (%d)", mediaItemResult.Status.Message, mediaItemResult.Status.Code) } o.setMetaData(&mediaItemResult.MediaItem) // Add upload to internal storage if pattern.isUpload { o.fs.uploadedMu.Lock() o.fs.uploaded.AddEntry(o) o.fs.uploadedMu.Unlock() } return nil } // Remove an object func (o *Object) Remove(ctx context.Context) (err error) { match, _, pattern := patterns.match(o.fs.root, o.remote, true) if pattern == nil || !pattern.isFile || !pattern.canUpload || pattern.isUpload { return errRemove } albumTitle, fileName := match[1], match[2] album, ok := 
o.fs.albums[false].get(albumTitle) if !ok { return errors.Errorf("couldn't find %q in album %q for delete", fileName, albumTitle) } opts := rest.Opts{ Method: "POST", Path: "/albums/" + album.ID + ":batchRemoveMediaItems", NoResponse: true, } var request = api.BatchRemoveItems{ MediaItemIds: []string{o.id}, } var resp *http.Response err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "couldn't delete item from album") } return nil } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.mimeType } // ID of an Object if known, "" otherwise func (o *Object) ID() string { return o.id } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.UserInfoer = &Fs{} _ fs.Disconnecter = &Fs{} _ fs.Object = &Object{} _ fs.MimeTyper = &Object{} _ fs.IDer = &Object{} ) rclone-1.53.3/backend/googlephotos/googlephotos_test.go000066400000000000000000000203301375552240400232710ustar00rootroot00000000000000package googlephotos import ( "context" "fmt" "io/ioutil" "net/http" "path" "testing" "time" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/lib/random" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) const ( // We have two different files here as Google Photos will uniq // them otherwise which confuses the tests as the filename is // unexpected. fileNameAlbum = "rclone-test-image1.jpg" fileNameUpload = "rclone-test-image2.jpg" ) func TestIntegration(t *testing.T) { ctx := context.Background() fstest.Initialise() // Create Fs if *fstest.RemoteName == "" { *fstest.RemoteName = "TestGooglePhotos:" } f, err := fs.NewFs(*fstest.RemoteName) if err == fs.ErrorNotFoundInConfigFile { t.Skip(fmt.Sprintf("Couldn't create google photos backend - skipping tests: %v", err)) } require.NoError(t, err) // Create local Fs pointing at testfiles localFs, err := fs.NewFs("testfiles") require.NoError(t, err) t.Run("CreateAlbum", func(t *testing.T) { albumName := "album/rclone-test-" + random.String(24) err = f.Mkdir(ctx, albumName) require.NoError(t, err) remote := albumName + "/" + fileNameAlbum t.Run("PutFile", func(t *testing.T) { srcObj, err := localFs.NewObject(ctx, fileNameAlbum) require.NoError(t, err) in, err := srcObj.Open(ctx) require.NoError(t, err) dstObj, err := f.Put(ctx, in, operations.NewOverrideRemote(srcObj, remote)) require.NoError(t, err) assert.Equal(t, remote, dstObj.Remote()) _ = in.Close() remoteWithID := addFileID(remote, dstObj.(*Object).id) t.Run("ObjectFs", func(t *testing.T) { assert.Equal(t, f, dstObj.Fs()) }) t.Run("ObjectString", func(t *testing.T) { assert.Equal(t, remote, dstObj.String()) assert.Equal(t, "", (*Object)(nil).String()) }) t.Run("ObjectHash", func(t *testing.T) { h, err := dstObj.Hash(ctx, hash.MD5) assert.Equal(t, "", h) assert.Equal(t, hash.ErrUnsupported, err) }) t.Run("ObjectSize", func(t *testing.T) { assert.Equal(t, int64(-1), dstObj.Size()) f.(*Fs).opt.ReadSize = true defer func() { f.(*Fs).opt.ReadSize = false }() size := dstObj.Size() assert.True(t, size > 1000, fmt.Sprintf("Size too small %d", size)) }) t.Run("ObjectSetModTime", func(t *testing.T) { err := dstObj.SetModTime(ctx, time.Now()) assert.Equal(t, fs.ErrorCantSetModTime, err) }) t.Run("ObjectStorable", func(t *testing.T) { assert.True(t, 
rclone-1.53.3/backend/googlephotos/googlephotos_test.go000066400000000000000000000203301375552240400232710ustar00rootroot00000000000000package googlephotos import ( "context" "fmt" "io/ioutil" "net/http" "path" "testing" "time" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/lib/random" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) const ( // We have two different files here as Google Photos will uniq // them otherwise which confuses the tests as the filename is // unexpected. fileNameAlbum = "rclone-test-image1.jpg" fileNameUpload = "rclone-test-image2.jpg" ) func TestIntegration(t *testing.T) { ctx := context.Background() fstest.Initialise() // Create Fs if *fstest.RemoteName == "" { *fstest.RemoteName = "TestGooglePhotos:" } f, err := fs.NewFs(*fstest.RemoteName) if err == fs.ErrorNotFoundInConfigFile { t.Skip(fmt.Sprintf("Couldn't create google photos backend - skipping tests: %v", err)) } require.NoError(t, err) // Create local Fs pointing at testfiles localFs, err := fs.NewFs("testfiles") require.NoError(t, err) t.Run("CreateAlbum", func(t *testing.T) { albumName := "album/rclone-test-" + random.String(24) err = f.Mkdir(ctx, albumName) require.NoError(t, err) remote := albumName + "/" + fileNameAlbum t.Run("PutFile", func(t *testing.T) { srcObj, err := localFs.NewObject(ctx, fileNameAlbum) require.NoError(t, err) in, err := srcObj.Open(ctx) require.NoError(t, err) dstObj, err := f.Put(ctx, in, operations.NewOverrideRemote(srcObj, remote)) require.NoError(t, err) assert.Equal(t, remote, dstObj.Remote()) _ = in.Close() remoteWithID := addFileID(remote, dstObj.(*Object).id) t.Run("ObjectFs", func(t *testing.T) { assert.Equal(t, f, dstObj.Fs()) }) t.Run("ObjectString", func(t *testing.T) { assert.Equal(t, remote, dstObj.String()) assert.Equal(t, "", (*Object)(nil).String()) }) t.Run("ObjectHash", func(t *testing.T) { h, err := dstObj.Hash(ctx, hash.MD5) assert.Equal(t, "", h) assert.Equal(t, hash.ErrUnsupported, err) }) t.Run("ObjectSize", func(t *testing.T) { assert.Equal(t, int64(-1), dstObj.Size()) f.(*Fs).opt.ReadSize = true defer func() { f.(*Fs).opt.ReadSize = false }() size := dstObj.Size() assert.True(t, size > 1000, fmt.Sprintf("Size too small %d", size)) }) t.Run("ObjectSetModTime", func(t *testing.T) { err := dstObj.SetModTime(ctx, time.Now()) assert.Equal(t, fs.ErrorCantSetModTime, err) }) t.Run("ObjectStorable", func(t *testing.T) { assert.True(t, dstObj.Storable()) }) t.Run("ObjectOpen", func(t *testing.T) { in, err := dstObj.Open(ctx) require.NoError(t, err) buf, err := ioutil.ReadAll(in) require.NoError(t, err) require.NoError(t, in.Close()) assert.True(t, len(buf) > 1000) contentType := http.DetectContentType(buf[:512]) assert.Equal(t, "image/jpeg", contentType) }) t.Run("CheckFileInAlbum", func(t *testing.T) { entries, err := f.List(ctx, albumName) require.NoError(t, err) assert.Equal(t, 1, len(entries)) assert.Equal(t, remote, entries[0].Remote()) assert.Equal(t, "2013-07-26 08:57:21 +0000 UTC", entries[0].ModTime(ctx).String()) }) // Check it is there in the date/month/year hierarchy // 2013-07-26 is the creation date of the folder checkPresent := func(t *testing.T, objPath string) { entries, err := f.List(ctx, objPath) require.NoError(t, err) found := false for _, entry := range entries { leaf := path.Base(entry.Remote()) if leaf == fileNameAlbum || leaf == remoteWithID { found = true } } assert.True(t, found, fmt.Sprintf("didn't find %q in %q", fileNameAlbum, objPath)) } t.Run("CheckInByYear", func(t *testing.T) { checkPresent(t, "media/by-year/2013") }) t.Run("CheckInByMonth", func(t *testing.T) { checkPresent(t, "media/by-month/2013/2013-07") }) t.Run("CheckInByDay", func(t *testing.T) { checkPresent(t, "media/by-day/2013/2013-07-26") }) t.Run("NewObject", func(t *testing.T) { o, err := f.NewObject(ctx, remote) require.NoError(t, err) require.Equal(t, remote, o.Remote()) }) t.Run("NewObjectWithID", func(t *testing.T) { o, err := f.NewObject(ctx, remoteWithID) require.NoError(t, err) require.Equal(t, remoteWithID, o.Remote()) }) t.Run("NewFsIsFile", func(t *testing.T) { fNew, err := fs.NewFs(*fstest.RemoteName + remote) assert.Equal(t, fs.ErrorIsFile, err) leaf := path.Base(remote) o, err := fNew.NewObject(ctx, leaf) require.NoError(t, err) require.Equal(t, leaf, o.Remote()) }) t.Run("RemoveFileFromAlbum", func(t *testing.T) { err = dstObj.Remove(ctx) require.NoError(t, err) time.Sleep(time.Second) // Check album empty entries, err := f.List(ctx, albumName) require.NoError(t, err) assert.Equal(t, 0, len(entries)) }) }) // remove the album err = f.Rmdir(ctx, albumName) require.Error(t, err) // FIXME doesn't work yet }) t.Run("UploadMkdir", func(t *testing.T) { assert.NoError(t, f.Mkdir(ctx, "upload/dir")) assert.NoError(t, f.Mkdir(ctx, "upload/dir/subdir")) t.Run("List", func(t *testing.T) { entries, err := f.List(ctx, "upload") require.NoError(t, err) assert.Equal(t, 1, len(entries)) assert.Equal(t, "upload/dir", entries[0].Remote()) entries, err = f.List(ctx, "upload/dir") require.NoError(t, err) assert.Equal(t, 1, len(entries)) assert.Equal(t, "upload/dir/subdir", entries[0].Remote()) }) t.Run("Rmdir", func(t *testing.T) { assert.NoError(t, f.Rmdir(ctx, "upload/dir/subdir")) assert.NoError(t, f.Rmdir(ctx, "upload/dir")) }) t.Run("ListEmpty", func(t *testing.T) { entries, err := f.List(ctx, "upload") require.NoError(t, err) assert.Equal(t, 0, len(entries)) _, err = f.List(ctx, "upload/dir") assert.Equal(t, fs.ErrorDirNotFound, err) }) }) t.Run("Upload", func(t *testing.T) { uploadDir := "upload/dir/subdir" remote := path.Join(uploadDir, fileNameUpload) srcObj, err := localFs.NewObject(ctx, fileNameUpload) require.NoError(t, err) in, err := srcObj.Open(ctx) require.NoError(t, err) dstObj, err := f.Put(ctx, in, operations.NewOverrideRemote(srcObj, remote)) require.NoError(t, err) assert.Equal(t, remote, dstObj.Remote()) _ = in.Close() remoteWithID := addFileID(remote, dstObj.(*Object).id) t.Run("List", func(t *testing.T) {
entries, err := f.List(ctx, uploadDir) require.NoError(t, err) require.Equal(t, 1, len(entries)) assert.Equal(t, remote, entries[0].Remote()) assert.Equal(t, "2013-07-26 08:57:21 +0000 UTC", entries[0].ModTime(ctx).String()) }) t.Run("NewObject", func(t *testing.T) { o, err := f.NewObject(ctx, remote) require.NoError(t, err) require.Equal(t, remote, o.Remote()) }) t.Run("NewObjectWithID", func(t *testing.T) { o, err := f.NewObject(ctx, remoteWithID) require.NoError(t, err) require.Equal(t, remoteWithID, o.Remote()) }) }) t.Run("Name", func(t *testing.T) { assert.Equal(t, (*fstest.RemoteName)[:len(*fstest.RemoteName)-1], f.Name()) }) t.Run("Root", func(t *testing.T) { assert.Equal(t, "", f.Root()) }) t.Run("String", func(t *testing.T) { assert.Equal(t, `Google Photos path ""`, f.String()) }) t.Run("Features", func(t *testing.T) { features := f.Features() assert.False(t, features.CaseInsensitive) assert.True(t, features.ReadMimeType) }) t.Run("Precision", func(t *testing.T) { assert.Equal(t, fs.ModTimeNotSupported, f.Precision()) }) t.Run("Hashes", func(t *testing.T) { assert.Equal(t, hash.Set(hash.None), f.Hashes()) }) } func TestAddID(t *testing.T) { assert.Equal(t, "potato {123}", addID("potato", "123")) assert.Equal(t, "{123}", addID("", "123")) } func TestFileAddID(t *testing.T) { assert.Equal(t, "potato {123}.txt", addFileID("potato.txt", "123")) assert.Equal(t, "potato {123}", addFileID("potato", "123")) assert.Equal(t, "{123}", addFileID("", "123")) } func TestFindID(t *testing.T) { assert.Equal(t, "", findID("potato")) ID := "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" assert.Equal(t, ID, findID("potato {"+ID+"}.txt")) ID = ID[1:] assert.Equal(t, "", findID("potato {"+ID+"}.txt")) } rclone-1.53.3/backend/googlephotos/pattern.go000066400000000000000000000245231375552240400212060ustar00rootroot00000000000000// Store the parsing of file patterns package googlephotos import ( "context" "fmt" "path" "regexp" "strconv" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/googlephotos/api" "github.com/rclone/rclone/fs" ) // lister describes the subset of the interfaces on Fs needed for the // file pattern parsing type lister interface { listDir(ctx context.Context, prefix string, filter api.SearchFilter) (entries fs.DirEntries, err error) listAlbums(ctx context.Context, shared bool) (all *albums, err error) listUploads(ctx context.Context, dir string) (entries fs.DirEntries, err error) dirTime() time.Time startYear() int } // dirPattern describes a single directory pattern type dirPattern struct { re string // match for the path match *regexp.Regexp // compiled match canUpload bool // true if can upload here canMkdir bool // true if can make a directory here isFile bool // true if this is a file isUpload bool // true if this is the upload directory // function to turn a match into DirEntries toEntries func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) } // dirPatterns is a slice of all the directory patterns type dirPatterns []dirPattern // patterns describes the layout of the google photos backend file system.
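Before the patterns table itself, a hedged illustration of the mechanism it relies on: try each anchored regexp in order and let the first match (of the right file/directory kind) decide how a path is handled, with the submatches carrying the parameters. The names rule and dispatch below are made up for this sketch; this is not rclone code, just the same dispatch idea in miniature.

package main

import (
	"fmt"
	"regexp"
)

type rule struct {
	re     *regexp.Regexp // anchored pattern for the whole path
	isFile bool           // does this rule describe a file or a directory?
	what   string         // what a match means
}

var rules = []rule{
	{regexp.MustCompile(`^$`), false, "root listing"},
	{regexp.MustCompile(`^media/by-year/(\d{4})$`), false, "list one year"},
	{regexp.MustCompile(`^media/by-year/(\d{4})/([^/]+)$`), true, "a single file"},
}

// dispatch returns the first rule whose regexp matches path and whose
// file/directory kind agrees with isFile, plus the submatches.
func dispatch(path string, isFile bool) (*rule, []string) {
	for i := range rules {
		if rules[i].isFile != isFile {
			continue
		}
		if m := rules[i].re.FindStringSubmatch(path); m != nil {
			return &rules[i], m
		}
	}
	return nil, nil
}

func main() {
	r, m := dispatch("media/by-year/2013/file.jpg", true)
	fmt.Println(r.what, m[1:]) // a single file [2013 file.jpg]
}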
// // NB no trailing / on paths var patterns = dirPatterns{ { re: `^$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) { return fs.DirEntries{ fs.NewDir(prefix+"media", f.dirTime()), fs.NewDir(prefix+"album", f.dirTime()), fs.NewDir(prefix+"shared-album", f.dirTime()), fs.NewDir(prefix+"upload", f.dirTime()), fs.NewDir(prefix+"feature", f.dirTime()), }, nil }, }, { re: `^upload(?:/(.*))?$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) { return f.listUploads(ctx, match[0]) }, canUpload: true, canMkdir: true, isUpload: true, }, { re: `^upload/(.*)$`, isFile: true, canUpload: true, isUpload: true, }, { re: `^media$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) { return fs.DirEntries{ fs.NewDir(prefix+"all", f.dirTime()), fs.NewDir(prefix+"by-year", f.dirTime()), fs.NewDir(prefix+"by-month", f.dirTime()), fs.NewDir(prefix+"by-day", f.dirTime()), }, nil }, }, { re: `^media/all$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) { return f.listDir(ctx, prefix, api.SearchFilter{}) }, }, { re: `^media/all/([^/]+)$`, isFile: true, }, { re: `^media/by-year$`, toEntries: years, }, { re: `^media/by-year/(\d{4})$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) { filter, err := yearMonthDayFilter(ctx, f, match) if err != nil { return nil, err } return f.listDir(ctx, prefix, filter) }, }, { re: `^media/by-year/(\d{4})/([^/]+)$`, isFile: true, }, { re: `^media/by-month$`, toEntries: years, }, { re: `^media/by-month/(\d{4})$`, toEntries: months, }, { re: `^media/by-month/\d{4}/(\d{4})-(\d{2})$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) { filter, err := yearMonthDayFilter(ctx, f, match) if err != nil { return nil, err } return f.listDir(ctx, prefix, filter) }, }, { re: `^media/by-month/\d{4}/(\d{4})-(\d{2})/([^/]+)$`, isFile: true, }, { re: `^media/by-day$`, toEntries: years, }, { re: `^media/by-day/(\d{4})$`, toEntries: days, }, { re: `^media/by-day/\d{4}/(\d{4})-(\d{2})-(\d{2})$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) { filter, err := yearMonthDayFilter(ctx, f, match) if err != nil { return nil, err } return f.listDir(ctx, prefix, filter) }, }, { re: `^media/by-day/\d{4}/(\d{4})-(\d{2})-(\d{2})/([^/]+)$`, isFile: true, }, { re: `^album$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) { return albumsToEntries(ctx, f, false, prefix, "") }, }, { re: `^album/(.+)$`, canMkdir: true, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) { return albumsToEntries(ctx, f, false, prefix, match[1]) }, }, { re: `^album/(.+?)/([^/]+)$`, canUpload: true, isFile: true, }, { re: `^shared-album$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) { return albumsToEntries(ctx, f, true, prefix, "") }, }, { re: `^shared-album/(.+)$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) { return albumsToEntries(ctx, f, true, prefix, match[1]) }, }, { re: `^shared-album/(.+?)/([^/]+)$`, isFile: true, }, { re: `^feature$`, toEntries: func(ctx context.Context, f lister, prefix string, match 
[]string) (entries fs.DirEntries, err error) { return fs.DirEntries{ fs.NewDir(prefix+"favorites", f.dirTime()), }, nil }, }, { re: `^feature/favorites$`, toEntries: func(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) { filter := featureFilter(ctx, f, match) return f.listDir(ctx, prefix, filter) }, }, { re: `^feature/favorites/([^/]+)$`, isFile: true, }, }.mustCompile() // mustCompile compiles the regexps in the dirPatterns func (ds dirPatterns) mustCompile() dirPatterns { for i := range ds { pattern := &ds[i] pattern.match = regexp.MustCompile(pattern.re) } return ds } // match finds the path passed in the matching structure and // returns the parameters and a pointer to the match, or nil. func (ds dirPatterns) match(root string, itemPath string, isFile bool) (match []string, prefix string, pattern *dirPattern) { itemPath = strings.Trim(itemPath, "/") absPath := path.Join(root, itemPath) prefix = strings.Trim(absPath[len(root):], "/") if prefix != "" { prefix += "/" } for i := range ds { pattern = &ds[i] if pattern.isFile != isFile { continue } match = pattern.match.FindStringSubmatch(absPath) if match != nil { return } } return nil, "", nil } // Return the years from startYear to today func years(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) { currentYear := f.dirTime().Year() for year := f.startYear(); year <= currentYear; year++ { entries = append(entries, fs.NewDir(prefix+fmt.Sprint(year), f.dirTime())) } return entries, nil } // Return the months in a given year func months(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) { year := match[1] for month := 1; month <= 12; month++ { entries = append(entries, fs.NewDir(fmt.Sprintf("%s%s-%02d", prefix, year, month), f.dirTime())) } return entries, nil } // Return the days in a given year func days(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) { year := match[1] current, err := time.Parse("2006", year) if err != nil { return nil, errors.Errorf("bad year %q", match[1]) } currentYear := current.Year() for current.Year() == currentYear { entries = append(entries, fs.NewDir(prefix+current.Format("2006-01-02"), f.dirTime())) current = current.AddDate(0, 0, 1) } return entries, nil } // This creates a search filter on year/month/day as provided func yearMonthDayFilter(ctx context.Context, f lister, match []string) (sf api.SearchFilter, err error) { year, err := strconv.Atoi(match[1]) if err != nil || year < 1000 || year > 3000 { return sf, errors.Errorf("bad year %q", match[1]) } sf = api.SearchFilter{ Filters: &api.Filters{ DateFilter: &api.DateFilter{ Dates: []api.Date{ { Year: year, }, }, }, }, } if len(match) >= 3 { month, err := strconv.Atoi(match[2]) if err != nil || month < 1 || month > 12 { return sf, errors.Errorf("bad month %q", match[2]) } sf.Filters.DateFilter.Dates[0].Month = month } if len(match) >= 4 { day, err := strconv.Atoi(match[3]) if err != nil || day < 1 || day > 31 { return sf, errors.Errorf("bad day %q", match[3]) } sf.Filters.DateFilter.Dates[0].Day = day } return sf, nil } // featureFilter creates a filter for the Feature enum // // The API only supports one feature, FAVORITES, so hardcode that feature // // https://developers.google.com/photos/library/reference/rest/v1/mediaItems/search#FeatureFilter func featureFilter(ctx context.Context, f lister, match []string) (sf api.SearchFilter) {
sf = api.SearchFilter{ Filters: &api.Filters{ FeatureFilter: &api.FeatureFilter{ IncludedFeatures: []string{ "FAVORITES", }, }, }, } return sf } // Turns an albumPath into entries // // These can either be synthetic directory entries if the album path // is a prefix of another album, or actual files, or a combination of // the two. func albumsToEntries(ctx context.Context, f lister, shared bool, prefix string, albumPath string) (entries fs.DirEntries, err error) { albums, err := f.listAlbums(ctx, shared) if err != nil { return nil, err } // Put in the directories dirs, foundAlbumPath := albums.getDirs(albumPath) if foundAlbumPath { for _, dir := range dirs { d := fs.NewDir(prefix+dir, f.dirTime()) dirPath := path.Join(albumPath, dir) // if this dir is an album add more special stuff album, ok := albums.get(dirPath) if ok { count, err := strconv.ParseInt(album.MediaItemsCount, 10, 64) if err != nil { fs.Debugf(f, "Error reading media count: %v", err) } d.SetID(album.ID).SetItems(count) } entries = append(entries, d) } } // if this is an album then return a filter to list it album, foundAlbum := albums.get(albumPath) if foundAlbum { filter := api.SearchFilter{AlbumID: album.ID} newEntries, err := f.listDir(ctx, prefix, filter) if err != nil { return nil, err } entries = append(entries, newEntries...) } if !foundAlbumPath && !foundAlbum && albumPath != "" { return nil, fs.ErrorDirNotFound } return entries, nil } rclone-1.53.3/backend/googlephotos/pattern_test.go000066400000000000000000000316111375552240400222410ustar00rootroot00000000000000package googlephotos import ( "context" "fmt" "testing" "time" "github.com/rclone/rclone/backend/googlephotos/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/dirtree" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // time for directories var startTime = fstest.Time("2019-06-24T15:53:05.999999999Z") // mock Fs for testing patterns type testLister struct { t *testing.T albums *albums names []string uploaded dirtree.DirTree } // newTestLister makes a mock for testing func newTestLister(t *testing.T) *testLister { return &testLister{ t: t, albums: newAlbums(), uploaded: dirtree.New(), } } // mock listDir for testing func (f *testLister) listDir(ctx context.Context, prefix string, filter api.SearchFilter) (entries fs.DirEntries, err error) { for _, name := range f.names { entries = append(entries, mockobject.New(prefix+name)) } return entries, nil } // mock listAlbums for testing func (f *testLister) listAlbums(ctx context.Context, shared bool) (all *albums, err error) { return f.albums, nil } // mock listUploads for testing func (f *testLister) listUploads(ctx context.Context, dir string) (entries fs.DirEntries, err error) { entries, _ = f.uploaded[dir] return entries, nil } // mock dirTime for testing func (f *testLister) dirTime() time.Time { return startTime } // mock startYear for testing func (f *testLister) startYear() int { return 2000 } func TestPatternMatch(t *testing.T) { for testNumber, test := range []struct { // input root string itemPath string isFile bool // expected output wantMatch []string wantPrefix string wantPattern *dirPattern }{ { root: "", itemPath: "", isFile: false, wantMatch: []string{""}, wantPrefix: "", wantPattern: &patterns[0], }, { root: "", itemPath: "", isFile: true, wantMatch: nil, wantPrefix: "", wantPattern: nil, }, { root: "upload", itemPath: "", isFile: false, wantMatch: []string{"upload", ""}, 
wantPrefix: "", wantPattern: &patterns[1], }, { root: "upload/dir", itemPath: "", isFile: false, wantMatch: []string{"upload/dir", "dir"}, wantPrefix: "", wantPattern: &patterns[1], }, { root: "upload/file.jpg", itemPath: "", isFile: true, wantMatch: []string{"upload/file.jpg", "file.jpg"}, wantPrefix: "", wantPattern: &patterns[2], }, { root: "media", itemPath: "", isFile: false, wantMatch: []string{"media"}, wantPrefix: "", wantPattern: &patterns[3], }, { root: "", itemPath: "media", isFile: false, wantMatch: []string{"media"}, wantPrefix: "media/", wantPattern: &patterns[3], }, { root: "media/all", itemPath: "", isFile: false, wantMatch: []string{"media/all"}, wantPrefix: "", wantPattern: &patterns[4], }, { root: "media", itemPath: "all", isFile: false, wantMatch: []string{"media/all"}, wantPrefix: "all/", wantPattern: &patterns[4], }, { root: "media/all", itemPath: "file.jpg", isFile: true, wantMatch: []string{"media/all/file.jpg", "file.jpg"}, wantPrefix: "file.jpg/", wantPattern: &patterns[5], }, { root: "", itemPath: "feature", isFile: false, wantMatch: []string{"feature"}, wantPrefix: "feature/", wantPattern: &patterns[23], }, { root: "feature/favorites", itemPath: "", isFile: false, wantMatch: []string{"feature/favorites"}, wantPrefix: "", wantPattern: &patterns[24], }, { root: "feature", itemPath: "favorites", isFile: false, wantMatch: []string{"feature/favorites"}, wantPrefix: "favorites/", wantPattern: &patterns[24], }, { root: "feature/favorites", itemPath: "file.jpg", isFile: true, wantMatch: []string{"feature/favorites/file.jpg", "file.jpg"}, wantPrefix: "file.jpg/", wantPattern: &patterns[25], }, } { t.Run(fmt.Sprintf("#%d,root=%q,itemPath=%q,isFile=%v", testNumber, test.root, test.itemPath, test.isFile), func(t *testing.T) { gotMatch, gotPrefix, gotPattern := patterns.match(test.root, test.itemPath, test.isFile) assert.Equal(t, test.wantMatch, gotMatch) assert.Equal(t, test.wantPrefix, gotPrefix) assert.Equal(t, test.wantPattern, gotPattern) }) } } func TestPatternMatchToEntries(t *testing.T) { ctx := context.Background() f := newTestLister(t) f.names = []string{"file.jpg"} f.albums.add(&api.Album{ ID: "1", Title: "sub/one", }) f.albums.add(&api.Album{ ID: "2", Title: "sub", }) f.uploaded.AddEntry(mockobject.New("upload/file1.jpg")) f.uploaded.AddEntry(mockobject.New("upload/dir/file2.jpg")) for testNumber, test := range []struct { // input root string itemPath string // expected output wantMatch []string wantPrefix string remotes []string }{ { root: "", itemPath: "", wantMatch: []string{""}, wantPrefix: "", remotes: []string{"media/", "album/", "shared-album/", "upload/"}, }, { root: "upload", itemPath: "", wantMatch: []string{"upload", ""}, wantPrefix: "", remotes: []string{"upload/file1.jpg", "upload/dir/"}, }, { root: "upload", itemPath: "dir", wantMatch: []string{"upload/dir", "dir"}, wantPrefix: "dir/", remotes: []string{"upload/dir/file2.jpg"}, }, { root: "media", itemPath: "", wantMatch: []string{"media"}, wantPrefix: "", remotes: []string{"all/", "by-year/", "by-month/", "by-day/"}, }, { root: "media/all", itemPath: "", wantMatch: []string{"media/all"}, wantPrefix: "", remotes: []string{"file.jpg"}, }, { root: "media", itemPath: "all", wantMatch: []string{"media/all"}, wantPrefix: "all/", remotes: []string{"all/file.jpg"}, }, { root: "media/by-year", itemPath: "", wantMatch: []string{"media/by-year"}, wantPrefix: "", remotes: []string{"2000/", "2001/", "2002/", "2003/"}, }, { root: "media/by-year/2000", itemPath: "", wantMatch: []string{"media/by-year/2000", 
"2000"}, wantPrefix: "", remotes: []string{"file.jpg"}, }, { root: "media/by-month", itemPath: "", wantMatch: []string{"media/by-month"}, wantPrefix: "", remotes: []string{"2000/", "2001/", "2002/", "2003/"}, }, { root: "media/by-month/2001", itemPath: "", wantMatch: []string{"media/by-month/2001", "2001"}, wantPrefix: "", remotes: []string{"2001-01/", "2001-02/", "2001-03/", "2001-04/"}, }, { root: "media/by-month/2001/2001-01", itemPath: "", wantMatch: []string{"media/by-month/2001/2001-01", "2001", "01"}, wantPrefix: "", remotes: []string{"file.jpg"}, }, { root: "media/by-day", itemPath: "", wantMatch: []string{"media/by-day"}, wantPrefix: "", remotes: []string{"2000/", "2001/", "2002/", "2003/"}, }, { root: "media/by-day/2001", itemPath: "", wantMatch: []string{"media/by-day/2001", "2001"}, wantPrefix: "", remotes: []string{"2001-01-01/", "2001-01-02/", "2001-01-03/", "2001-01-04/"}, }, { root: "media/by-day/2001/2001-01-02", itemPath: "", wantMatch: []string{"media/by-day/2001/2001-01-02", "2001", "01", "02"}, wantPrefix: "", remotes: []string{"file.jpg"}, }, { root: "album", itemPath: "", wantMatch: []string{"album"}, wantPrefix: "", remotes: []string{"sub/"}, }, { root: "album/sub", itemPath: "", wantMatch: []string{"album/sub", "sub"}, wantPrefix: "", remotes: []string{"one/", "file.jpg"}, }, { root: "album/sub/one", itemPath: "", wantMatch: []string{"album/sub/one", "sub/one"}, wantPrefix: "", remotes: []string{"file.jpg"}, }, { root: "shared-album", itemPath: "", wantMatch: []string{"shared-album"}, wantPrefix: "", remotes: []string{"sub/"}, }, { root: "shared-album/sub", itemPath: "", wantMatch: []string{"shared-album/sub", "sub"}, wantPrefix: "", remotes: []string{"one/", "file.jpg"}, }, { root: "shared-album/sub/one", itemPath: "", wantMatch: []string{"shared-album/sub/one", "sub/one"}, wantPrefix: "", remotes: []string{"file.jpg"}, }, } { t.Run(fmt.Sprintf("#%d,root=%q,itemPath=%q", testNumber, test.root, test.itemPath), func(t *testing.T) { match, prefix, pattern := patterns.match(test.root, test.itemPath, false) assert.Equal(t, test.wantMatch, match) assert.Equal(t, test.wantPrefix, prefix) assert.NotNil(t, pattern) assert.NotNil(t, pattern.toEntries) entries, err := pattern.toEntries(ctx, f, prefix, match) assert.NoError(t, err) var remotes = []string{} for _, entry := range entries { remote := entry.Remote() if _, isDir := entry.(fs.Directory); isDir { remote += "/" } remotes = append(remotes, remote) if len(remotes) >= 4 { break // only test first 4 entries } } assert.Equal(t, test.remotes, remotes) }) } } func TestPatternYears(t *testing.T) { f := newTestLister(t) entries, err := years(context.Background(), f, "potato/", nil) require.NoError(t, err) year := 2000 for _, entry := range entries { assert.Equal(t, "potato/"+fmt.Sprint(year), entry.Remote()) year++ } } func TestPatternMonths(t *testing.T) { f := newTestLister(t) entries, err := months(context.Background(), f, "potato/", []string{"", "2020"}) require.NoError(t, err) assert.Equal(t, 12, len(entries)) for i, entry := range entries { assert.Equal(t, fmt.Sprintf("potato/2020-%02d", i+1), entry.Remote()) } } func TestPatternDays(t *testing.T) { f := newTestLister(t) entries, err := days(context.Background(), f, "potato/", []string{"", "2020"}) require.NoError(t, err) assert.Equal(t, 366, len(entries)) assert.Equal(t, "potato/2020-01-01", entries[0].Remote()) assert.Equal(t, "potato/2020-12-31", entries[len(entries)-1].Remote()) } func TestPatternYearMonthDayFilter(t *testing.T) { ctx := context.Background() f := 
newTestLister(t) // Years sf, err := yearMonthDayFilter(ctx, f, []string{"", "2000"}) require.NoError(t, err) assert.Equal(t, api.SearchFilter{ Filters: &api.Filters{ DateFilter: &api.DateFilter{ Dates: []api.Date{ { Year: 2000, }, }, }, }, }, sf) _, err = yearMonthDayFilter(ctx, f, []string{"", "potato"}) require.Error(t, err) _, err = yearMonthDayFilter(ctx, f, []string{"", "999"}) require.Error(t, err) _, err = yearMonthDayFilter(ctx, f, []string{"", "4000"}) require.Error(t, err) // Months sf, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "01"}) require.NoError(t, err) assert.Equal(t, api.SearchFilter{ Filters: &api.Filters{ DateFilter: &api.DateFilter{ Dates: []api.Date{ { Month: 1, Year: 2000, }, }, }, }, }, sf) _, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "potato"}) require.Error(t, err) _, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "0"}) require.Error(t, err) _, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "13"}) require.Error(t, err) // Days sf, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "01", "02"}) require.NoError(t, err) assert.Equal(t, api.SearchFilter{ Filters: &api.Filters{ DateFilter: &api.DateFilter{ Dates: []api.Date{ { Day: 2, Month: 1, Year: 2000, }, }, }, }, }, sf) _, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "01", "potato"}) require.Error(t, err) _, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "01", "0"}) require.Error(t, err) _, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "01", "32"}) require.Error(t, err) } func TestPatternAlbumsToEntries(t *testing.T) { f := newTestLister(t) ctx := context.Background() _, err := albumsToEntries(ctx, f, false, "potato/", "sub") assert.Equal(t, fs.ErrorDirNotFound, err) f.albums.add(&api.Album{ ID: "1", Title: "sub/one", }) entries, err := albumsToEntries(ctx, f, false, "potato/", "sub") assert.NoError(t, err) assert.Equal(t, 1, len(entries)) assert.Equal(t, "potato/one", entries[0].Remote()) _, ok := entries[0].(fs.Directory) assert.Equal(t, true, ok) f.albums.add(&api.Album{ ID: "1", Title: "sub", }) f.names = []string{"file.jpg"} entries, err = albumsToEntries(ctx, f, false, "potato/", "sub") assert.NoError(t, err) assert.Equal(t, 2, len(entries)) assert.Equal(t, "potato/one", entries[0].Remote()) _, ok = entries[0].(fs.Directory) assert.Equal(t, true, ok) assert.Equal(t, "potato/file.jpg", entries[1].Remote()) _, ok = entries[1].(fs.Object) assert.Equal(t, true, ok) }
[binary test fixtures omitted: rclone-1.53.3/backend/googlephotos/testfiles/rclone-test-image1.jpg and rclone-test-image2.jpg, two Canon EOS 7D JPEGs (EXIF date 2013:07:26 08:57:21, photographer Nick Craig-Wood, re-exported with GIMP 2.10.8); two distinct images are shipped because Google Photos deduplicates identical uploads, which would confuse the tests]
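As a hedged illustration of what a yearMonthDayFilter result from pattern.go corresponds to on the wire: the JSON layout below follows the public mediaItems:search documentation, but is built here with encoding/json and plain maps rather than rclone's api package, so treat the exact shape as an assumption.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Filter equivalent to the path media/by-day/2013/2013-07-26
	filter := map[string]interface{}{
		"filters": map[string]interface{}{
			"dateFilter": map[string]interface{}{
				"dates": []map[string]int{
					{"year": 2013, "month": 7, "day": 26},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(filter, "", "  ")
	fmt.Println(string(out)) // body POSTed to /v1/mediaItems:search
}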
1"02QAa?xXge-"tm%HT#Y<ĭBiyڢ$Cxȝ?_e8u="!1AQqa?!m(sʰ7hgl[CoplJ w+.axpWwZ}1`UKЙ4K1%T?%R#$xy*-G˞dN 3ܹ44Ԯ e H!$u16-Xt"yoppîF%cpXZKs`ʦ27,і~ g%Z aJTHRdc?Q!aP du+FK5QֻQAo%y#YpjČs!1AQ ?_>٤2 [&Rn aXGY3{z-=y$y &\位޶cWK9qr35o?DJ夌}K5~`po9#ˤaYcGkjgk3C='SGmI$lRrm%wkoɎWꏤE6@Mi D &;jBr&]r髎gpmpB=n68}wẸ\}N'$!1AQaq?^ceD%XM0rM#?(jm-xߨ8W)Ir7$B>P \ÅHWŅo|2'J3d*XplTvhLe#۽ȗbJwVQjpΟ `4jW;ix>bSn^yv*F+CKᑵ.Ah>cjeR["ouQ>" +>bhR \ڰWkkk9U˯Raq VƐ 5+k3+udmiM nA_lPV^/t# A5Vֹp,ȹJ ט sL W؏%koErclone-1.53.3/backend/http/000077500000000000000000000000001375552240400154425ustar00rootroot00000000000000rclone-1.53.3/backend/http/http.go000066400000000000000000000415151375552240400167560ustar00rootroot00000000000000// Package http provides a filesystem interface using golang.org/net/http // // It treats HTML pages served from the endpoint as directory // listings, and includes any links found as files. package http import ( "context" "io" "mime" "net/http" "net/url" "path" "strconv" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/rest" "golang.org/x/net/html" ) var ( errorReadOnly = errors.New("http remotes are read only") timeUnset = time.Unix(0, 0) ) func init() { fsi := &fs.RegInfo{ Name: "http", Description: "http Connection", NewFs: NewFs, Options: []fs.Option{{ Name: "url", Help: "URL of http host to connect to", Required: true, Examples: []fs.OptionExample{{ Value: "https://example.com", Help: "Connect to example.com", }, { Value: "https://user:pass@example.com", Help: "Connect to example.com using a username and password", }}, }, { Name: "headers", Help: `Set HTTP headers for all transactions Use this to set additional HTTP headers for all transactions The input format is comma separated list of key,value pairs. Standard [CSV encoding](https://godoc.org/encoding/csv) may be used. For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'. `, Default: fs.CommaSepList{}, Advanced: true, }, { Name: "no_slash", Help: `Set this if the site doesn't end directories with / Use this if your target website does not use / on the end of directories. A / on the end of a path is how rclone normally tells the difference between files and directories. If this flag is set, then rclone will treat all files with Content-Type: text/html as directories and read URLs from them rather than downloading them. Note that this may cause rclone to confuse genuine HTML files with directories.`, Default: false, Advanced: true, }, { Name: "no_head", Help: `Don't use HEAD requests to find file sizes in dir listing If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to: - find its size - check it really exists - check to see if it is a directory If you set this option, rclone will not do the HEAD request. 
This will mean - directory listings are much quicker - rclone won't have the times or sizes of any files - some files that don't exist may be in the listing `, Default: false, Advanced: true, }}, } fs.Register(fsi) } // Options defines the configuration for this backend type Options struct { Endpoint string `config:"url"` NoSlash bool `config:"no_slash"` NoHead bool `config:"no_head"` Headers fs.CommaSepList `config:"headers"` } // Fs stores the interface to the remote HTTP files type Fs struct { name string root string features *fs.Features // optional features opt Options // options for this backend endpoint *url.URL endpointURL string // endpoint as a string httpClient *http.Client } // Object is a remote object that has been stat'd (so it exists, but is not necessarily open for reading) type Object struct { fs *Fs remote string size int64 modTime time.Time contentType string } // statusError returns an error if the res contained an error func statusError(res *http.Response, err error) error { if err != nil { return err } if res.StatusCode < 200 || res.StatusCode > 299 { _ = res.Body.Close() return errors.Errorf("HTTP Error %d: %s", res.StatusCode, res.Status) } return nil } // NewFs creates a new Fs object from the name and root. It connects to // the host specified in the config file. func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.TODO() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } if len(opt.Headers)%2 != 0 { return nil, errors.New("odd number of headers supplied") } if !strings.HasSuffix(opt.Endpoint, "/") { opt.Endpoint += "/" } // Parse the endpoint and stick the root onto it base, err := url.Parse(opt.Endpoint) if err != nil { return nil, err } u, err := rest.URLJoin(base, rest.URLPathEscape(root)) if err != nil { return nil, err } client := fshttp.NewClient(fs.Config) var isFile = false if !strings.HasSuffix(u.String(), "/") { // Make a client which doesn't follow redirects so the server // doesn't redirect http://host/dir to http://host/dir/ noRedir := *client noRedir.CheckRedirect = func(req *http.Request, via []*http.Request) error { return http.ErrUseLastResponse } // check to see if points to a file req, err := http.NewRequest("HEAD", u.String(), nil) if err == nil { req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext addHeaders(req, opt) res, err := noRedir.Do(req) err = statusError(res, err) if err == nil { isFile = true } } } newRoot := u.String() if isFile { // Point to the parent if this is a file newRoot, _ = path.Split(u.String()) } else { if !strings.HasSuffix(newRoot, "/") { newRoot += "/" } } u, err = url.Parse(newRoot) if err != nil { return nil, err } f := &Fs{ name: name, root: root, opt: *opt, httpClient: client, endpoint: u, endpointURL: u.String(), } f.features = (&fs.Features{ CanHaveEmptyDirectories: true, }).Fill(f) if isFile { return f, fs.ErrorIsFile } if !strings.HasSuffix(f.endpointURL, "/") { return nil, errors.New("internal error: url doesn't end with /") } return f, nil } // Name returns the configured name of the file system func (f *Fs) Name() string { return f.name } // Root returns the root for the filesystem func (f *Fs) Root() string { return f.root } // String returns the URL for the filesystem func (f *Fs) String() string { return f.endpointURL } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // Precision is the remote http file system's modtime 
precision, which we have no way of knowing. We estimate at 1s func (f *Fs) Precision() time.Duration { return time.Second } // NewObject creates a new remote http file object func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } err := o.stat(ctx) if err != nil { return nil, err } return o, nil } // Joins the remote onto the base URL func (f *Fs) url(remote string) string { return f.endpointURL + rest.URLPathEscape(remote) } // parse s into an int64, on failure return def func parseInt64(s string, def int64) int64 { n, e := strconv.ParseInt(s, 10, 64) if e != nil { return def } return n } // Errors returned by parseName var ( errURLJoinFailed = errors.New("URLJoin failed") errFoundQuestionMark = errors.New("found ? in URL") errHostMismatch = errors.New("host mismatch") errSchemeMismatch = errors.New("scheme mismatch") errNotUnderRoot = errors.New("not under root") errNameIsEmpty = errors.New("name is empty") errNameContainsSlash = errors.New("name contains /") ) // parseName turns a name as found in the page into a remote path or returns an error func parseName(base *url.URL, name string) (string, error) { // make URL absolute u, err := rest.URLJoin(base, name) if err != nil { return "", errURLJoinFailed } // check it doesn't have URL parameters uStr := u.String() if strings.Contains(uStr, "?") { return "", errFoundQuestionMark } // check that this is going back to the same host and scheme if base.Host != u.Host { return "", errHostMismatch } if base.Scheme != u.Scheme { return "", errSchemeMismatch } // check has path prefix if !strings.HasPrefix(u.Path, base.Path) { return "", errNotUnderRoot } // calculate the name relative to the base name = u.Path[len(base.Path):] // mustn't be empty if name == "" { return "", errNameIsEmpty } // mustn't contain a / - we are looking for a single level directory slash := strings.Index(name, "/") if slash >= 0 && slash != len(name)-1 { return "", errNameContainsSlash } return name, nil } // Parse turns HTML for a directory into names // base should be the base URL to resolve any relative names from func parse(base *url.URL, in io.Reader) (names []string, err error) { doc, err := html.Parse(in) if err != nil { return nil, err } var ( walk func(*html.Node) seen = make(map[string]struct{}) ) walk = func(n *html.Node) { if n.Type == html.ElementNode && n.Data == "a" { for _, a := range n.Attr { if a.Key == "href" { name, err := parseName(base, a.Val) if err == nil { if _, found := seen[name]; !found { names = append(names, name) seen[name] = struct{}{} } } break } } } for c := n.FirstChild; c != nil; c = c.NextSibling { walk(c) } } walk(doc)  return names, nil }
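The same idea parse implements above can be shown as a hedged standalone sketch: walk an HTML document with golang.org/x/net/html and collect href attributes from anchor tags. The listing URL is a placeholder, and the real backend additionally validates each name with parseName.

package main

import (
	"fmt"
	"net/http"

	"golang.org/x/net/html"
)

func main() {
	resp, err := http.Get("http://example.com/dir/") // placeholder listing URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	doc, err := html.Parse(resp.Body)
	if err != nil {
		panic(err)
	}
	// Recursively walk the node tree, printing every <a href=...> found.
	var walk func(*html.Node)
	walk = func(n *html.Node) {
		if n.Type == html.ElementNode && n.Data == "a" {
			for _, a := range n.Attr {
				if a.Key == "href" {
					fmt.Println(a.Val)
					break
				}
			}
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			walk(c)
		}
	}
	walk(doc)
}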
// Adds the configured headers to the request if any func addHeaders(req *http.Request, opt *Options) { for i := 0; i < len(opt.Headers); i += 2 { key := opt.Headers[i] value := opt.Headers[i+1] req.Header.Add(key, value) } } // Adds the configured headers to the request if any func (f *Fs) addHeaders(req *http.Request) { addHeaders(req, &f.opt) } // Read the directory passed in func (f *Fs) readDir(ctx context.Context, dir string) (names []string, err error) { URL := f.url(dir) u, err := url.Parse(URL) if err != nil { return nil, errors.Wrap(err, "failed to readDir") } if !strings.HasSuffix(URL, "/") { return nil, errors.Errorf("internal error: readDir URL %q didn't end in /", URL) } // Do the request req, err := http.NewRequest("GET", URL, nil) if err != nil { return nil, errors.Wrap(err, "readDir failed") } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext f.addHeaders(req) res, err := f.httpClient.Do(req) if err == nil { defer fs.CheckClose(res.Body, &err) if res.StatusCode == http.StatusNotFound { return nil, fs.ErrorDirNotFound } } err = statusError(res, err) if err != nil { return nil, errors.Wrap(err, "failed to readDir") } contentType := strings.SplitN(res.Header.Get("Content-Type"), ";", 2)[0] switch contentType { case "text/html": names, err = parse(u, res.Body) if err != nil { return nil, errors.Wrap(err, "readDir") } default: return nil, errors.Errorf("Can't parse content type %q", contentType) } return names, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { if !strings.HasSuffix(dir, "/") && dir != "" { dir += "/" } names, err := f.readDir(ctx, dir) if err != nil { return nil, errors.Wrapf(err, "error listing %q", dir) } var ( entriesMu sync.Mutex // to protect entries wg sync.WaitGroup in = make(chan string, fs.Config.Checkers) ) add := func(entry fs.DirEntry) { entriesMu.Lock() entries = append(entries, entry) entriesMu.Unlock() } for i := 0; i < fs.Config.Checkers; i++ { wg.Add(1) go func() { defer wg.Done() for remote := range in { file := &Object{ fs: f, remote: remote, } switch err := file.stat(ctx); err { case nil: add(file) case fs.ErrorNotAFile: // ...found a directory not a file add(fs.NewDir(remote, timeUnset)) default: fs.Debugf(remote, "skipping because of error: %v", err) } } }() } for _, name := range names { isDir := name[len(name)-1] == '/' name = strings.TrimRight(name, "/") remote := path.Join(dir, name) if isDir { add(fs.NewDir(remote, timeUnset)) } else { in <- remote } } close(in) wg.Wait() return entries, nil } // Put in to the remote path with the modTime given of the given size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return nil, errorReadOnly } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return nil, errorReadOnly } // Fs is the filesystem this remote http file object is located within func (o *Object) Fs() fs.Info { return o.fs } // String returns the URL to the remote HTTP file func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the name of the remote HTTP file, relative to the fs root func (o *Object) Remote() string { return o.remote } // Hash returns "" since HTTP doesn't support remote calculation of hashes func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) { return "", hash.ErrUnsupported } // Size returns the size in bytes of the remote http file func (o *Object) Size() int64 { return o.size } // ModTime returns the modification time of the remote http file func (o *Object) ModTime(ctx context.Context) time.Time { return o.modTime } // url returns the native url of the object func (o *Object) url() string { return o.fs.url(o.remote) } // stat
updates the info field in the Object func (o *Object) stat(ctx context.Context) error { if o.fs.opt.NoHead { o.size = -1 o.modTime = timeUnset o.contentType = fs.MimeType(ctx, o) return nil } url := o.url() req, err := http.NewRequest("HEAD", url, nil) if err != nil { return errors.Wrap(err, "stat failed") } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext o.fs.addHeaders(req) res, err := o.fs.httpClient.Do(req) if err == nil && res.StatusCode == http.StatusNotFound { return fs.ErrorObjectNotFound } err = statusError(res, err) if err != nil { return errors.Wrap(err, "failed to stat") } t, err := http.ParseTime(res.Header.Get("Last-Modified")) if err != nil { t = timeUnset } o.size = parseInt64(res.Header.Get("Content-Length"), -1) o.modTime = t o.contentType = res.Header.Get("Content-Type") // If NoSlash is set then check ContentType to see if it is a directory if o.fs.opt.NoSlash { mediaType, _, err := mime.ParseMediaType(o.contentType) if err != nil { return errors.Wrapf(err, "failed to parse Content-Type: %q", o.contentType) } if mediaType == "text/html" { return fs.ErrorNotAFile } } return nil } // SetModTime sets the modification and access time to the specified time // // it is not supported on read only http remotes func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { return errorReadOnly } // Storable returns whether the remote http file is a regular file (not a directory, symbolic link, block device, character device, named pipe, etc) func (o *Object) Storable() bool { return true } // Open a remote http file object for reading. Seek is supported func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { url := o.url() req, err := http.NewRequest("GET", url, nil) if err != nil { return nil, errors.Wrap(err, "Open failed") } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext // Add optional headers for k, v := range fs.OpenOptionHeaders(options) { req.Header.Add(k, v) } o.fs.addHeaders(req) // Do the request res, err := o.fs.httpClient.Do(req) err = statusError(res, err) if err != nil { return nil, errors.Wrap(err, "Open failed") } return res.Body, nil } // Hashes returns hash.HashNone to indicate remote hashing is unavailable func (f *Fs) Hashes() hash.Set { return hash.Set(hash.None) } // Mkdir makes the root directory of the Fs object func (f *Fs) Mkdir(ctx context.Context, dir string) error { return errorReadOnly } // Remove a remote http file object func (o *Object) Remove(ctx context.Context) error { return errorReadOnly } // Rmdir removes the root directory of the Fs object func (f *Fs) Rmdir(ctx context.Context, dir string) error { return errorReadOnly } // Update in to the object with the modTime given of the given size func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { return errorReadOnly } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.contentType } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.PutStreamer = &Fs{} _ fs.Object = &Object{} _ fs.MimeTyper = &Object{} )
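A hedged standalone sketch of the HEAD-based probing that stat performs above: ask the server for Content-Length and Last-Modified without downloading the body. The URL is a placeholder; the real backend also applies configured headers and pacing.

package main

import (
	"fmt"
	"net/http"
	"strconv"
)

func main() {
	resp, err := http.Head("http://example.com/dir/file.txt") // placeholder URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Size is unknown (-1) when the header is missing or unparseable,
	// mirroring parseInt64's default above.
	size, err := strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64)
	if err != nil {
		size = -1
	}
	modTime, err := http.ParseTime(resp.Header.Get("Last-Modified"))
	if err != nil {
		fmt.Println("no usable Last-Modified header")
	}
	fmt.Println(size, modTime, resp.Header.Get("Content-Type"))
}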
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/lib/rest" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) var ( remoteName = "TestHTTP" testPath = "test" filesPath = filepath.Join(testPath, "files") headers = []string{"X-Potato", "sausage", "X-Rhubarb", "cucumber"} ) // prepareServer the test server and return a function to tidy it up afterwards func prepareServer(t *testing.T) (configmap.Simple, func()) { // file server for test/files fileServer := http.FileServer(http.Dir(filesPath)) // test the headers are there then pass on to fileServer handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { what := fmt.Sprintf("%s %s: Header ", r.Method, r.URL.Path) assert.Equal(t, headers[1], r.Header.Get(headers[0]), what+headers[0]) assert.Equal(t, headers[3], r.Header.Get(headers[2]), what+headers[2]) fileServer.ServeHTTP(w, r) }) // Make the test server ts := httptest.NewServer(handler) // Configure the remote config.LoadConfig() // fs.Config.LogLevel = fs.LogLevelDebug // fs.Config.DumpHeaders = true // fs.Config.DumpBodies = true // config.FileSet(remoteName, "type", "http") // config.FileSet(remoteName, "url", ts.URL) m := configmap.Simple{ "type": "http", "url": ts.URL, "headers": strings.Join(headers, ","), } // return a function to tidy up return m, ts.Close } // prepare the test server and return a function to tidy it up afterwards func prepare(t *testing.T) (fs.Fs, func()) { m, tidy := prepareServer(t) // Instantiate it f, err := NewFs(remoteName, "", m) require.NoError(t, err) return f, tidy } func testListRoot(t *testing.T, f fs.Fs, noSlash bool) { entries, err := f.List(context.Background(), "") require.NoError(t, err) sort.Sort(entries) require.Equal(t, 4, len(entries)) e := entries[0] assert.Equal(t, "four", e.Remote()) assert.Equal(t, int64(-1), e.Size()) _, ok := e.(fs.Directory) assert.True(t, ok) e = entries[1] assert.Equal(t, "one%.txt", e.Remote()) assert.Equal(t, int64(6), e.Size()) _, ok = e.(*Object) assert.True(t, ok) e = entries[2] assert.Equal(t, "three", e.Remote()) assert.Equal(t, int64(-1), e.Size()) _, ok = e.(fs.Directory) assert.True(t, ok) e = entries[3] assert.Equal(t, "two.html", e.Remote()) if noSlash { assert.Equal(t, int64(-1), e.Size()) _, ok = e.(fs.Directory) assert.True(t, ok) } else { assert.Equal(t, int64(41), e.Size()) _, ok = e.(*Object) assert.True(t, ok) } } func TestListRoot(t *testing.T) { f, tidy := prepare(t) defer tidy() testListRoot(t, f, false) } func TestListRootNoSlash(t *testing.T) { f, tidy := prepare(t) f.(*Fs).opt.NoSlash = true defer tidy() testListRoot(t, f, true) } func TestListSubDir(t *testing.T) { f, tidy := prepare(t) defer tidy() entries, err := f.List(context.Background(), "three") require.NoError(t, err) sort.Sort(entries) assert.Equal(t, 1, len(entries)) e := entries[0] assert.Equal(t, "three/underthree.txt", e.Remote()) assert.Equal(t, int64(9), e.Size()) _, ok := e.(*Object) assert.True(t, ok) } func TestNewObject(t *testing.T) { f, tidy := prepare(t) defer tidy() o, err := f.NewObject(context.Background(), "four/under four.txt") require.NoError(t, err) assert.Equal(t, "four/under four.txt", o.Remote()) assert.Equal(t, int64(9), o.Size()) _, ok := o.(*Object) assert.True(t, ok) // Test the time is correct on the object tObj := o.ModTime(context.Background()) fi, err := os.Stat(filepath.Join(filesPath, "four", "under four.txt")) require.NoError(t, err) tFile := fi.ModTime() fstest.AssertTimeEqualWithPrecision(t, 
o.Remote(), tFile, tObj, time.Second) // check object not found o, err = f.NewObject(context.Background(), "not found.txt") assert.Nil(t, o) assert.Equal(t, fs.ErrorObjectNotFound, err) } func TestOpen(t *testing.T) { f, tidy := prepare(t) defer tidy() o, err := f.NewObject(context.Background(), "four/under four.txt") require.NoError(t, err) // Test normal read fd, err := o.Open(context.Background()) require.NoError(t, err) data, err := ioutil.ReadAll(fd) require.NoError(t, err) require.NoError(t, fd.Close()) assert.Equal(t, "beetroot\n", string(data)) // Test with range request fd, err = o.Open(context.Background(), &fs.RangeOption{Start: 1, End: 5}) require.NoError(t, err) data, err = ioutil.ReadAll(fd) require.NoError(t, err) require.NoError(t, fd.Close()) assert.Equal(t, "eetro", string(data)) } func TestMimeType(t *testing.T) { f, tidy := prepare(t) defer tidy() o, err := f.NewObject(context.Background(), "four/under four.txt") require.NoError(t, err) do, ok := o.(fs.MimeTyper) require.True(t, ok) assert.Equal(t, "text/plain; charset=utf-8", do.MimeType(context.Background())) } func TestIsAFileRoot(t *testing.T) { m, tidy := prepareServer(t) defer tidy() f, err := NewFs(remoteName, "one%.txt", m) assert.Equal(t, err, fs.ErrorIsFile) testListRoot(t, f, false) } func TestIsAFileSubDir(t *testing.T) { m, tidy := prepareServer(t) defer tidy() f, err := NewFs(remoteName, "three/underthree.txt", m) assert.Equal(t, err, fs.ErrorIsFile) entries, err := f.List(context.Background(), "") require.NoError(t, err) sort.Sort(entries) assert.Equal(t, 1, len(entries)) e := entries[0] assert.Equal(t, "underthree.txt", e.Remote()) assert.Equal(t, int64(9), e.Size()) _, ok := e.(*Object) assert.True(t, ok) } func TestParseName(t *testing.T) { for i, test := range []struct { base string val string wantErr error want string }{ {"http://example.com/", "potato", nil, "potato"}, {"http://example.com/dir/", "potato", nil, "potato"}, {"http://example.com/dir/", "potato?download=true", errFoundQuestionMark, ""}, {"http://example.com/dir/", "../dir/potato", nil, "potato"}, {"http://example.com/dir/", "..", errNotUnderRoot, ""}, {"http://example.com/dir/", "http://example.com/", errNotUnderRoot, ""}, {"http://example.com/dir/", "http://example.com/dir/", errNameIsEmpty, ""}, {"http://example.com/dir/", "http://example.com/dir/potato", nil, "potato"}, {"http://example.com/dir/", "https://example.com/dir/potato", errSchemeMismatch, ""}, {"http://example.com/dir/", "http://notexample.com/dir/potato", errHostMismatch, ""}, {"http://example.com/dir/", "/dir/", errNameIsEmpty, ""}, {"http://example.com/dir/", "/dir/potato", nil, "potato"}, {"http://example.com/dir/", "subdir/potato", errNameContainsSlash, ""}, {"http://example.com/dir/", "With percent %25.txt", nil, "With percent %.txt"}, {"http://example.com/dir/", "With colon :", errURLJoinFailed, ""}, {"http://example.com/dir/", rest.URLPathEscape("With colon :"), nil, "With colon :"}, {"http://example.com/Dungeons%20%26%20Dragons/", "/Dungeons%20&%20Dragons/D%26D%20Basic%20%28Holmes%2C%20B%2C%20X%2C%20BECMI%29/", nil, "D&D Basic (Holmes, B, X, BECMI)/"}, } { u, err := url.Parse(test.base) require.NoError(t, err) got, gotErr := parseName(u, test.val) what := fmt.Sprintf("test %d base=%q, val=%q", i, test.base, test.val) assert.Equal(t, test.wantErr, gotErr, what) assert.Equal(t, test.want, got, what) } } // Load HTML from the file given and parse it, checking it against the entries passed in func parseHTML(t *testing.T, name string, base string, want []string) { in, 
err := os.Open(filepath.Join(testPath, "index_files", name)) require.NoError(t, err) defer func() { require.NoError(t, in.Close()) }() if base == "" { base = "http://example.com/" } u, err := url.Parse(base) require.NoError(t, err) entries, err := parse(u, in) require.NoError(t, err) assert.Equal(t, want, entries) } func TestParseEmpty(t *testing.T) { parseHTML(t, "empty.html", "", []string(nil)) } func TestParseApache(t *testing.T) { parseHTML(t, "apache.html", "http://example.com/nick/pub/", []string{ "SWIG-embed.tar.gz", "avi2dvd.pl", "cambert.exe", "cambert.gz", "fedora_demo.gz", "gchq-challenge/", "mandelterm/", "pgp-key.txt", "pymath/", "rclone", "readdir.exe", "rush_hour_solver_cut_down.py", "snake-puzzle/", "stressdisk/", "timer-test", "words-to-regexp.pl", "Now 100% better.mp3", "Now better.mp3", }) } func TestParseMemstore(t *testing.T) { parseHTML(t, "memstore.html", "", []string{ "test/", "v1.35/", "v1.36-01-g503cd84/", "rclone-beta-latest-freebsd-386.zip", "rclone-beta-latest-freebsd-amd64.zip", "rclone-beta-latest-windows-amd64.zip", }) } func TestParseNginx(t *testing.T) { parseHTML(t, "nginx.html", "", []string{ "deltas/", "objects/", "refs/", "state/", "config", "summary", }) } func TestParseCaddy(t *testing.T) { parseHTML(t, "caddy.html", "", []string{ "mimetype.zip", "rclone-delete-empty-dirs.py", "rclone-show-empty-dirs.py", "stat-windows-386.zip", "v1.36-155-gcf29ee8b-team-driveβ/", "v1.36-156-gca76b3fb-team-driveβ/", "v1.36-156-ge1f0e0f5-team-driveβ/", "v1.36-22-g06ea13a-ssh-agentβ/", }) } rclone-1.53.3/backend/http/test/000077500000000000000000000000001375552240400164215ustar00rootroot00000000000000rclone-1.53.3/backend/http/test/files/000077500000000000000000000000001375552240400175235ustar00rootroot00000000000000rclone-1.53.3/backend/http/test/files/four/000077500000000000000000000000001375552240400204765ustar00rootroot00000000000000rclone-1.53.3/backend/http/test/files/four/under four.txt000066400000000000000000000000111375552240400233000ustar00rootroot00000000000000beetroot rclone-1.53.3/backend/http/test/files/one%.txt000066400000000000000000000000061375552240400211060ustar00rootroot00000000000000hello rclone-1.53.3/backend/http/test/files/three/000077500000000000000000000000001375552240400206325ustar00rootroot00000000000000rclone-1.53.3/backend/http/test/files/three/underthree.txt000066400000000000000000000000111375552240400235300ustar00rootroot00000000000000rutabaga rclone-1.53.3/backend/http/test/files/two.html000066400000000000000000000000511375552240400212160ustar00rootroot00000000000000file.txt rclone-1.53.3/backend/http/test/index_files/000077500000000000000000000000001375552240400207125ustar00rootroot00000000000000rclone-1.53.3/backend/http/test/index_files/apache.html000066400000000000000000000105471375552240400230300ustar00rootroot00000000000000 Index of /nick/pub

Index of /nick/pub

[ICO]NameLast modifiedSizeDescription

[DIR]Parent Directory  -  
[   ]SWIG-embed.tar.gz29-Nov-2005 16:27 2.3K 
[TXT]avi2dvd.pl14-Apr-2010 23:07 17K 
[   ]cambert.exe15-Dec-2006 18:07 54K 
[   ]cambert.gz14-Apr-2010 23:07 18K 
[   ]fedora_demo.gz08-Jun-2007 11:01 1.0M 
[DIR]gchq-challenge/24-Dec-2016 15:24 -  
[DIR]mandelterm/13-Jul-2013 22:22 -  
[TXT]pgp-key.txt14-Apr-2010 23:07 400  
[DIR]pymath/24-Dec-2016 15:24 -  
[   ]rclone09-May-2017 17:15 22M 
[   ]readdir.exe21-Oct-2016 14:47 1.6M 
[TXT]rush_hour_solver_cut_down.py23-Jul-2009 11:44 14K 
[DIR]snake-puzzle/25-Sep-2016 20:56 -  
[DIR]stressdisk/08-Nov-2016 14:25 -  
[   ]timer-test09-May-2017 17:05 1.5M 
[TXT]words-to-regexp.pl01-Mar-2005 20:43 6.0K 

[SND]Now 100% better.mp32017-08-01 11:41 0  
[SND]Now better.mp32017-08-01 11:41 0  
rclone-1.53.3/backend/http/test/index_files/caddy.html000066400000000000000000000264651375552240400227010ustar00rootroot00000000000000 /

/

4 directories 4 files
Name Size Modified
mimetype.zip 765 KiB
rclone-delete-empty-dirs.py 1.2 KiB
rclone-show-empty-dirs.py 868 B
stat-windows-386.zip 688 KiB
v1.36-155-gcf29ee8b-team-driveβ
v1.36-156-gca76b3fb-team-driveβ
v1.36-156-ge1f0e0f5-team-driveβ
v1.36-22-g06ea13a-ssh-agentβ
rclone-1.53.3/backend/http/test/index_files/empty.html000066400000000000000000000000001375552240400227240ustar00rootroot00000000000000rclone-1.53.3/backend/http/test/index_files/memstore.html000066400000000000000000000040721375552240400234360ustar00rootroot00000000000000 Index of /

Index of /

Name Type Size Last modified MD5
test/ application/directory 0 bytes - -
v1.35/ application/directory 0 bytes - -
v1.36-01-g503cd84/ application/directory 0 bytes - -
rclone-beta-latest-freebsd-386.zip application/zip 4.6 MB 2017-06-19 14:04:52 e747003c69c81e675f206a715264bfa8
rclone-beta-latest-freebsd-amd64.zip application/zip 5.0 MB 2017-06-19 14:04:53 ff30b5e9bf2863a2373069142e6f2b7f
rclone-beta-latest-windows-amd64.zip application/x-zip-compressed 4.9 MB 2017-06-19 13:56:02 851a5547a0495cbbd94cbc90a80ed6f5

Memset Ltd.

rclone-1.53.3/backend/http/test/index_files/nginx.html000066400000000000000000000015041375552240400227230ustar00rootroot00000000000000 Index of /atomic/fedora/

Index of /atomic/fedora/


../
deltas/                                            04-May-2017 21:37                   -
objects/                                           04-May-2017 20:44                   -
refs/                                              04-May-2017 20:42                   -
state/                                             04-May-2017 21:36                   -
config                                             04-May-2017 20:42                 118
summary                                            04-May-2017 21:36                 806

rclone-1.53.3/backend/hubic/000077500000000000000000000000001375552240400155555ustar00rootroot00000000000000rclone-1.53.3/backend/hubic/auth.go000066400000000000000000000023451375552240400170510ustar00rootroot00000000000000package hubic import ( "context" "net/http" "time" "github.com/ncw/swift" "github.com/rclone/rclone/fs" ) // auth is an authenticator for swift type auth struct { f *Fs } // newAuth creates a swift authenticator func newAuth(f *Fs) *auth { return &auth{ f: f, } } // Request constructs an http.Request for authentication // // returns nil for not needed func (a *auth) Request(*swift.Connection) (r *http.Request, err error) { const retries = 10 for try := 1; try <= retries; try++ { err = a.f.getCredentials(context.TODO()) if err == nil { break } time.Sleep(100 * time.Millisecond) fs.Debugf(a.f, "retrying auth request %d/%d: %v", try, retries, err) } return nil, err } // Response parses the result of an http request func (a *auth) Response(resp *http.Response) error { return nil } // The public storage URL - set Internal to true to read // internal/service net URL func (a *auth) StorageUrl(Internal bool) string { // nolint return a.f.credentials.Endpoint } // The access token func (a *auth) Token() string { return a.f.credentials.Token } // The CDN url if available func (a *auth) CdnUrl() string { // nolint return "" } // Check the interfaces are satisfied var _ swift.Authenticator = (*auth)(nil) rclone-1.53.3/backend/hubic/hubic.go000066400000000000000000000127541375552240400172060ustar00rootroot00000000000000// Package hubic provides an interface to the Hubic object storage // system. package hubic // This uses the normal swift mechanism to update the credentials and // ignores the expires field returned by the Hubic API. This may need // to be revisited after some actual experience.
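// Illustrative example (not part of rclone): the fixed-backoff retry loop that
// auth.Request in auth.go above uses when refreshing Hubic credentials, reduced
// to a self-contained program. fetchCredentials is a made-up stand-in for
// Fs.getCredentials; the retry count and the 100ms sleep mirror the values above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// fetchCredentials pretends the first two attempts fail with a
// temporary error, as a flaky credentials endpoint might.
func fetchCredentials(try int) error {
	if try < 3 {
		return errors.New("temporary failure")
	}
	return nil
}

func main() {
	const retries = 10
	var err error
	for try := 1; try <= retries; try++ {
		err = fetchCredentials(try)
		if err == nil {
			break
		}
		time.Sleep(100 * time.Millisecond)
		fmt.Printf("retrying auth request %d/%d: %v\n", try, retries, err)
	}
	fmt.Println("final error:", err) // nil once an attempt succeeds
}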
import ( "context" "encoding/json" "fmt" "io/ioutil" "log" "net/http" "strings" "time" swiftLib "github.com/ncw/swift" "github.com/pkg/errors" "github.com/rclone/rclone/backend/swift" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/lib/oauthutil" "golang.org/x/oauth2" ) const ( rcloneClientID = "api_hubic_svWP970PvSWbw5G3PzrAqZ6X2uHeZBPI" rcloneEncryptedClientSecret = "leZKCcqy9movLhDWLVXX8cSLp_FzoiAPeEJOIOMRw1A5RuC4iLEPDYPWVF46adC_MVonnLdVEOTHVstfBOZ_lY4WNp8CK_YWlpRZ9diT5YI" ) // Globals var ( // Description of how to auth for this app oauthConfig = &oauth2.Config{ Scopes: []string{ "credentials.r", // Read OpenStack credentials }, Endpoint: oauth2.Endpoint{ AuthURL: "https://api.hubic.com/oauth/auth/", TokenURL: "https://api.hubic.com/oauth/token/", }, ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), RedirectURL: oauthutil.RedirectLocalhostURL, } ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "hubic", Description: "Hubic", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { err := oauthutil.Config("hubic", name, m, oauthConfig, nil) if err != nil { log.Fatalf("Failed to configure token: %v", err) } }, Options: append(oauthutil.SharedOptions, swift.SharedOptions...), }) } // credentials is the JSON returned from the Hubic API to read the // OpenStack credentials type credentials struct { Token string `json:"token"` // OpenStack token Endpoint string `json:"endpoint"` // OpenStack endpoint Expires string `json:"expires"` // Expires date - eg "2015-11-09T14:24:56+01:00" } // Fs represents a remote hubic type Fs struct { fs.Fs // wrapped Fs features *fs.Features // optional features client *http.Client // client for oauth api credentials credentials // returned from the Hubic API expires time.Time // time credentials expire } // Object describes a swift object type Object struct { *swift.Object } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.Object.String() } // ------------------------------------------------------------ // String converts this Fs to a string func (f *Fs) String() string { if f.Fs == nil { return "Hubic" } return fmt.Sprintf("Hubic %s", f.Fs.String()) } // getCredentials reads the OpenStack Credentials using the Hubic API // // The credentials are read into the Fs func (f *Fs) getCredentials(ctx context.Context) (err error) { req, err := http.NewRequest("GET", "https://api.hubic.com/1.0/account/credentials", nil) if err != nil { return err } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext resp, err := f.client.Do(req) if err != nil { return err } defer fs.CheckClose(resp.Body, &err) if resp.StatusCode < 200 || resp.StatusCode > 299 { body, _ := ioutil.ReadAll(resp.Body) bodyStr := strings.TrimSpace(strings.Replace(string(body), "\n", " ", -1)) return errors.Errorf("failed to get credentials: %s: %s", resp.Status, bodyStr) } decoder := json.NewDecoder(resp.Body) var result credentials err = decoder.Decode(&result) if err != nil { return err } // fs.Debugf(f, "Got credentials %+v", result) if result.Token == "" || result.Endpoint == "" || result.Expires == "" { return errors.New("couldn't read token, endpoint and expires from credentials") } f.credentials = result expires, err := time.Parse(time.RFC3339, result.Expires) if err != nil { return err }
f.expires = expires fs.Debugf(f, "Got swift credentials (expiry %v in %v)", f.expires, f.expires.Sub(time.Now())) return nil } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { client, _, err := oauthutil.NewClient(name, m, oauthConfig) if err != nil { return nil, errors.Wrap(err, "failed to configure Hubic") } f := &Fs{ client: client, } // Make the swift Connection c := &swiftLib.Connection{ Auth: newAuth(f), ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport Timeout: 10 * fs.Config.Timeout, // Use the timeouts in the transport Transport: fshttp.NewTransport(fs.Config), } err = c.Authenticate() if err != nil { return nil, errors.Wrap(err, "error authenticating swift connection") } // Parse config into swift.Options struct opt := new(swift.Options) err = configstruct.Set(m, opt) if err != nil { return nil, err } // Make inner swift Fs from the connection swiftFs, err := swift.NewFsWithConnection(opt, name, root, c, true) if err != nil && err != fs.ErrorIsFile { return nil, err } f.Fs = swiftFs f.features = f.Fs.Features().Wrap(f) return f, err } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // UnWrap returns the Fs that this Fs is wrapping func (f *Fs) UnWrap() fs.Fs { return f.Fs } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.UnWrapper = (*Fs)(nil) ) rclone-1.53.3/backend/hubic/hubic_test.go000066400000000000000000000006661375552240400202450ustar00rootroot00000000000000// Test Hubic filesystem interface package hubic_test import ( "testing" "github.com/rclone/rclone/backend/hubic" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestHubic:", NilObject: (*hubic.Object)(nil), SkipFsCheckWrap: true, SkipObjectCheckWrap: true, }) } rclone-1.53.3/backend/jottacloud/000077500000000000000000000000001375552240400166335ustar00rootroot00000000000000rclone-1.53.3/backend/jottacloud/api/000077500000000000000000000000001375552240400174045ustar00rootroot00000000000000rclone-1.53.3/backend/jottacloud/api/types.go000066400000000000000000000335251375552240400211070ustar00rootroot00000000000000package api import ( "encoding/xml" "fmt" "time" "github.com/pkg/errors" ) const ( // default time format for almost all request and responses timeFormat = "2006-01-02-T15:04:05Z0700" // the API server seems to use a different format apiTimeFormat = "2006-01-02T15:04:05Z07:00" ) // Time represents time values in the Jottacloud API. It uses a custom RFC3339 like format. 
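// Illustrative example (not part of rclone): round-tripping a timestamp
// between the two layouts declared in the const block above. The sample
// value matches the "-T" separated form seen in the JFS XML examples
// further down this file.
package main

import (
	"fmt"
	"time"
)

func main() {
	const timeFormat = "2006-01-02-T15:04:05Z0700"    // JFS XML layout
	const apiTimeFormat = "2006-01-02T15:04:05Z07:00" // api.jottacloud.com layout

	t, err := time.Parse(timeFormat, "2018-07-24-T20:41:10Z")
	if err != nil {
		panic(err)
	}
	// Prints: 2018-07-24T20:41:10Z
	fmt.Println(t.Format(apiTimeFormat))
}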
type Time time.Time // UnmarshalXML turns XML into a Time func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error { var v string if err := d.DecodeElement(&v, &start); err != nil { return err } if v == "" { *t = Time(time.Time{}) return nil } newTime, err := time.Parse(timeFormat, v) if err == nil { *t = Time(newTime) } return err } // MarshalXML turns a Time into XML func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error { return e.EncodeElement(t.String(), start) } // Return Time string in Jottacloud format func (t Time) String() string { return time.Time(t).Format(timeFormat) } // APIString returns Time string in Jottacloud API format func (t Time) APIString() string { return time.Time(t).Format(apiTimeFormat) } // LoginToken is struct representing the login token generated in the WebUI type LoginToken struct { Username string `json:"username"` Realm string `json:"realm"` WellKnownLink string `json:"well_known_link"` AuthToken string `json:"auth_token"` } // WellKnown contains some configuration parameters for setting up endpoints type WellKnown struct { Issuer string `json:"issuer"` AuthorizationEndpoint string `json:"authorization_endpoint"` TokenEndpoint string `json:"token_endpoint"` TokenIntrospectionEndpoint string `json:"token_introspection_endpoint"` UserinfoEndpoint string `json:"userinfo_endpoint"` EndSessionEndpoint string `json:"end_session_endpoint"` JwksURI string `json:"jwks_uri"` CheckSessionIframe string `json:"check_session_iframe"` GrantTypesSupported []string `json:"grant_types_supported"` ResponseTypesSupported []string `json:"response_types_supported"` SubjectTypesSupported []string `json:"subject_types_supported"` IDTokenSigningAlgValuesSupported []string `json:"id_token_signing_alg_values_supported"` UserinfoSigningAlgValuesSupported []string `json:"userinfo_signing_alg_values_supported"` RequestObjectSigningAlgValuesSupported []string `json:"request_object_signing_alg_values_supported"` ResponseNodesSupported []string `json:"response_modes_supported"` RegistrationEndpoint string `json:"registration_endpoint"` TokenEndpointAuthMethodsSupported []string `json:"token_endpoint_auth_methods_supported"` TokenEndpointAuthSigningAlgValuesSupported []string `json:"token_endpoint_auth_signing_alg_values_supported"` ClaimsSupported []string `json:"claims_supported"` ClaimTypesSupported []string `json:"claim_types_supported"` ClaimsParameterSupported bool `json:"claims_parameter_supported"` ScopesSupported []string `json:"scopes_supported"` RequestParameterSupported bool `json:"request_parameter_supported"` RequestURIParameterSupported bool `json:"request_uri_parameter_supported"` CodeChallengeMethodsSupported []string `json:"code_challenge_methods_supported"` TLSClientCertificateBoundAccessTokens bool `json:"tls_client_certificate_bound_access_tokens"` IntrospectionEndpoint string `json:"introspection_endpoint"` } // TokenJSON is the struct representing the HTTP response from OAuth2 // providers returning a token in JSON form. 
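// Illustrative example (not part of rclone): decoding a token response of the
// shape below and deriving an absolute expiry, the way the backend's auth
// flows turn ExpiresIn into token.Expiry. The JSON literal is invented for
// the sketch; the field tags match the struct that follows.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type tokenJSON struct {
	AccessToken  string `json:"access_token"`
	ExpiresIn    int32  `json:"expires_in"`
	RefreshToken string `json:"refresh_token"`
	TokenType    string `json:"token_type"`
}

func main() {
	data := []byte(`{"access_token":"a","expires_in":3600,"refresh_token":"r","token_type":"bearer"}`)
	var tok tokenJSON
	if err := json.Unmarshal(data, &tok); err != nil {
		panic(err)
	}
	expiry := time.Now().Add(time.Duration(tok.ExpiresIn) * time.Second)
	fmt.Println(tok.TokenType, "token valid until", expiry.Format(time.RFC3339))
}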
type TokenJSON struct { AccessToken string `json:"access_token"` ExpiresIn int32 `json:"expires_in"` // at least PayPal returns string, while most return number RefreshExpiresIn int32 `json:"refresh_expires_in"` RefreshToken string `json:"refresh_token"` TokenType string `json:"token_type"` IDToken string `json:"id_token"` NotBeforePolicy int32 `json:"not-before-policy"` SessionState string `json:"session_state"` Scope string `json:"scope"` } // JSON structures returned by new API // AllocateFileRequest to prepare an upload to Jottacloud type AllocateFileRequest struct { Bytes int64 `json:"bytes"` Created string `json:"created"` Md5 string `json:"md5"` Modified string `json:"modified"` Path string `json:"path"` } // AllocateFileResponse for upload requests type AllocateFileResponse struct { Name string `json:"name"` Path string `json:"path"` State string `json:"state"` UploadID string `json:"upload_id"` UploadURL string `json:"upload_url"` Bytes int64 `json:"bytes"` ResumePos int64 `json:"resume_pos"` } // UploadResponse after an upload type UploadResponse struct { Name string `json:"name"` Path string `json:"path"` Kind string `json:"kind"` ContentID string `json:"content_id"` Bytes int64 `json:"bytes"` Md5 string `json:"md5"` Created int64 `json:"created"` Modified int64 `json:"modified"` Deleted interface{} `json:"deleted"` Mime string `json:"mime"` } // DeviceRegistrationResponse is the response to registering a device type DeviceRegistrationResponse struct { ClientID string `json:"client_id"` ClientSecret string `json:"client_secret"` } // CustomerInfo provides general information about the account. Required for finding the correct internal username. type CustomerInfo struct { Username string `json:"username"` Email string `json:"email"` Name string `json:"name"` CountryCode string `json:"country_code"` LanguageCode string `json:"language_code"` CustomerGroupCode string `json:"customer_group_code"` BrandCode string `json:"brand_code"` AccountType string `json:"account_type"` SubscriptionType string `json:"subscription_type"` Usage int64 `json:"usage"` Qouta int64 `json:"quota"` BusinessUsage int64 `json:"business_usage"` BusinessQouta int64 `json:"business_quota"` WriteLocked bool `json:"write_locked"` ReadLocked bool `json:"read_locked"` LockedCause interface{} `json:"locked_cause"` WebHash string `json:"web_hash"` AndroidHash string `json:"android_hash"` IOSHash string `json:"ios_hash"` } // TrashResponse is returned when emptying the Trash type TrashResponse struct { Folders int64 `json:"folders"` Files int64 `json:"files"` } // XML structures returned by the old API // Flag is a hacky type for checking if an attribute is present type Flag bool // UnmarshalXMLAttr sets Flag to true if the attribute is present func (f *Flag) UnmarshalXMLAttr(attr xml.Attr) error { *f = true return nil } // MarshalXMLAttr : Do not use func (f *Flag) MarshalXMLAttr(name xml.Name) (xml.Attr, error) { attr := xml.Attr{ Name: name, Value: "false", } return attr, errors.New("unimplemented") } /* GET http://www.jottacloud.com/JFS/ 12qh1wsht8cssxdtwl15rqh9 free false 5368709120 -1 -1 0 false false false true true Jotta Jotta JOTTA 5c458d01-9eaf-4f23-8d3c-2486fd9704d8 0 2018-07-15-T22:04:59Z */ // DriveInfo represents a Jottacloud account type DriveInfo struct { Username string `xml:"username"` AccountType string `xml:"account-type"` Locked bool `xml:"locked"` Capacity int64 `xml:"capacity"` MaxDevices int `xml:"max-devices"` MaxMobileDevices int `xml:"max-mobile-devices"` Usage int64 `xml:"usage"` 
ReadLocked bool `xml:"read-locked"` WriteLocked bool `xml:"write-locked"` QuotaWriteLocked bool `xml:"quota-write-locked"` EnableSync bool `xml:"enable-sync"` EnableFolderShare bool `xml:"enable-foldershare"` Devices []JottaDevice `xml:"devices>device"` } /* GET http://www.jottacloud.com/JFS// Jotta Jotta JOTTA 5c458d01-9eaf-4f23-8d3c-2486fd9704d8 0 2018-07-15-T22:04:59Z 12qh1wsht8cssxdtwl15rqh9 Archive 0 2018-07-15-T22:04:59Z Shared 0 Sync 0 */ // JottaDevice represents a Jottacloud Device type JottaDevice struct { Name string `xml:"name"` DisplayName string `xml:"display_name"` Type string `xml:"type"` Sid string `xml:"sid"` Size int64 `xml:"size"` User string `xml:"user"` MountPoints []JottaMountPoint `xml:"mountPoints>mountPoint"` } /* GET http://www.jottacloud.com/JFS/// Sync /12qh1wsht8cssxdtwl15rqh9/Jotta /12qh1wsht8cssxdtwl15rqh9/Jotta 0 Jotta 12qh1wsht8cssxdtwl15rqh9 */ // JottaMountPoint represents a Jottacloud mountpoint type JottaMountPoint struct { Name string `xml:"name"` Size int64 `xml:"size"` Device string `xml:"device"` Folders []JottaFolder `xml:"folders>folder"` Files []JottaFile `xml:"files>file"` } /* GET http://www.jottacloud.com/JFS//// /12qh1wsht8cssxdtwl15rqh9/Jotta/Sync /12qh1wsht8cssxdtwl15rqh9/Jotta/Sync c 1 COMPLETED 2018-07-05-T15:08:02Z 2018-07-05-T15:08:02Z application/octet-stream 30827730 1e8a7b728ab678048df00075c9507158 2018-07-24-T20:41:10Z */ // JottaFolder represents a Jottacloud folder type JottaFolder struct { XMLName xml.Name Name string `xml:"name,attr"` Deleted Flag `xml:"deleted,attr"` Path string `xml:"path"` CreatedAt Time `xml:"created"` ModifiedAt Time `xml:"modified"` Updated Time `xml:"updated"` Folders []JottaFolder `xml:"folders>folder"` Files []JottaFile `xml:"files>file"` } /* GET http://www.jottacloud.com/JFS////.../ 1 COMPLETED 2018-07-05-T15:08:02Z 2018-07-05-T15:08:02Z application/octet-stream 30827730 1e8a7b728ab678048df00075c9507158 2018-07-24-T20:41:10Z */ // JottaFile represents a Jottacloud file type JottaFile struct { XMLName xml.Name Name string `xml:"name,attr"` Deleted Flag `xml:"deleted,attr"` PublicSharePath string `xml:"publicSharePath"` State string `xml:"currentRevision>state"` CreatedAt Time `xml:"currentRevision>created"` ModifiedAt Time `xml:"currentRevision>modified"` Updated Time `xml:"currentRevision>updated"` Size int64 `xml:"currentRevision>size"` MimeType string `xml:"currentRevision>mime"` MD5 string `xml:"currentRevision>md5"` } // Error is a custom Error for wrapping Jottacloud error responses type Error struct { StatusCode int `xml:"code"` Message string `xml:"message"` Reason string `xml:"reason"` Cause string `xml:"cause"` } // Error returns a string for the error and satisfies the error interface func (e *Error) Error() string { out := fmt.Sprintf("error %d", e.StatusCode) if e.Message != "" { out += ": " + e.Message } if e.Reason != "" { out += fmt.Sprintf(" (%+v)", e.Reason) } return out } rclone-1.53.3/backend/jottacloud/api/types_test.go000066400000000000000000000012651375552240400221420ustar00rootroot00000000000000package api import ( "encoding/xml" "testing" "time" ) func TestMountpointEmptyModificationTime(t *testing.T) { mountpoint := ` Sync /foo/Jotta /foo/Jotta 0 Jotta foo ` var jf JottaFolder if err := xml.Unmarshal([]byte(mountpoint), &jf); err != nil { t.Fatal(err) } if !time.Time(jf.ModifiedAt).IsZero() { t.Errorf("got non-zero time, want zero") } } rclone-1.53.3/backend/jottacloud/jottacloud.go000066400000000000000000001324511375552240400213400ustar00rootroot00000000000000package
jottacloud import ( "bytes" "context" "crypto/md5" "encoding/base64" "encoding/hex" "encoding/json" "fmt" "io" "io/ioutil" "log" "math/rand" "net/http" "net/url" "os" "path" "strconv" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/jottacloud/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/rest" "golang.org/x/oauth2" ) // Globals const ( minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential defaultDevice = "Jotta" defaultMountpoint = "Archive" rootURL = "https://jfs.jottacloud.com/jfs/" apiURL = "https://api.jottacloud.com/" baseURL = "https://www.jottacloud.com/" defaultTokenURL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token" cachePrefix = "rclone-jcmd5-" configDevice = "device" configMountpoint = "mountpoint" configTokenURL = "tokenURL" configClientID = "client_id" configClientSecret = "client_secret" configVersion = 1 v1tokenURL = "https://api.jottacloud.com/auth/v1/token" v1registerURL = "https://api.jottacloud.com/auth/v1/register" v1ClientID = "nibfk8biu12ju7hpqomr8b1e40" v1EncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2" v1configVersion = 0 ) var ( // Description of how to auth for this app for a personal account oauthConfig = &oauth2.Config{ Endpoint: oauth2.Endpoint{ AuthURL: defaultTokenURL, TokenURL: defaultTokenURL, }, RedirectURL: oauthutil.RedirectLocalhostURL, } ) // Register with Fs func init() { // needs to be done early so we can use oauth during config fs.Register(&fs.RegInfo{ Name: "jottacloud", Description: "Jottacloud", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { ctx := context.TODO() refresh := false if version, ok := m.Get("configVersion"); ok { ver, err := strconv.Atoi(version) if err != nil { log.Fatalf("Failed to parse config version - corrupted config") } refresh = (ver != configVersion) && (ver != v1configVersion) } if refresh { fmt.Printf("Config outdated - refreshing\n") } else { tokenString, ok := m.Get("token") if ok && tokenString != "" { fmt.Printf("Already have a token - refresh?\n") if !config.Confirm(false) { return } } } fmt.Printf("Use legacy authentication?\nThis is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.\n") if config.Confirm(false) { v1config(ctx, name, m) } else { v2config(ctx, name, m) } }, Options: []fs.Option{{ Name: "md5_memory_limit", Help: "Files bigger than this will be cached on disk to calculate the MD5 if required.", Default: fs.SizeSuffix(10 * 1024 * 1024), Advanced: true, }, { Name: "trashed_only", Help: "Only show files that are in the trash.\nThis will show trashed files in their original directory structure.", Default: false, Advanced: true, }, { Name: "hard_delete", Help: "Delete files permanently rather than putting them into the trash.", Default: false, Advanced: true, }, { Name: "upload_resume_limit", Help: "Files bigger than this can be resumed if the upload fails.", Default: fs.SizeSuffix(10 * 1024 *
1024), Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Encode invalid UTF-8 bytes as xml doesn't handle them properly. // // Also: '*', '/', ':', '<', '>', '?', '\"', '\x00', '|' Default: (encoder.Display | encoder.EncodeWin | // :?"*<>| encoder.EncodeInvalidUtf8), }}, }) } // Options defines the configuration for this backend type Options struct { Device string `config:"device"` Mountpoint string `config:"mountpoint"` MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"` TrashedOnly bool `config:"trashed_only"` HardDelete bool `config:"hard_delete"` UploadThreshold fs.SizeSuffix `config:"upload_resume_limit"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote jottacloud type Fs struct { name string root string user string opt Options features *fs.Features endpointURL string srv *rest.Client apiSrv *rest.Client pacer *fs.Pacer tokenRenewer *oauthutil.Renew // renew the token on expiry } // Object describes a jottacloud object // // Will definitely have info but maybe not meta type Object struct { fs *Fs remote string hasMetaData bool size int64 modTime time.Time md5 string mimeType string } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("jottacloud root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a jottacloud 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests. 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func shouldRetry(resp *http.Response, err error) (bool, error) { return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // v1config configures a jottacloud backend using legacy authentication func v1config(ctx context.Context, name string, m configmap.Mapper) { srv := rest.NewClient(fshttp.NewClient(fs.Config)) fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has its own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key.
These keys can NOT be shared between machines.\n\n") if config.Confirm(false) { deviceRegistration, err := registerDevice(ctx, srv) if err != nil { log.Fatalf("Failed to register device: %v", err) } m.Set(configClientID, deviceRegistration.ClientID) m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret)) fs.Debugf(nil, "Got clientID '%s' and clientSecret '%s'", deviceRegistration.ClientID, deviceRegistration.ClientSecret) } clientID, ok := m.Get(configClientID) if !ok { clientID = v1ClientID } clientSecret, ok := m.Get(configClientSecret) if !ok { clientSecret = v1EncryptedClientSecret } oauthConfig.ClientID = clientID oauthConfig.ClientSecret = obscure.MustReveal(clientSecret) oauthConfig.Endpoint.AuthURL = v1tokenURL oauthConfig.Endpoint.TokenURL = v1tokenURL fmt.Printf("Username> ") username := config.ReadLine() password := config.GetPassword("Your Jottacloud password is only required during setup and will not be stored.") token, err := doAuthV1(ctx, srv, username, password) if err != nil { log.Fatalf("Failed to get oauth token: %s", err) } err = oauthutil.PutToken(name, m, &token, true) if err != nil { log.Fatalf("Error while saving token: %s", err) } fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n") if config.Confirm(false) { oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig) if err != nil { log.Fatalf("Failed to load oAuthClient: %s", err) } srv = rest.NewClient(oAuthClient).SetRoot(rootURL) apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL) device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv) if err != nil { log.Fatalf("Failed to setup mountpoint: %s", err) } m.Set(configDevice, device) m.Set(configMountpoint, mountpoint) } m.Set("configVersion", strconv.Itoa(v1configVersion)) } // registerDevice registers a new device for use with the jottacloud API func registerDevice(ctx context.Context, srv *rest.Client) (reg *api.DeviceRegistrationResponse, err error) { // random generator to generate random device names seededRand := rand.New(rand.NewSource(time.Now().UnixNano())) randomDeviceNamePartLength := 21 randomDeviceNamePart := make([]byte, randomDeviceNamePartLength) charset := "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789" for i := range randomDeviceNamePart { randomDeviceNamePart[i] = charset[seededRand.Intn(len(charset))] } randomDeviceName := "rclone-" + string(randomDeviceNamePart) fs.Debugf(nil, "Trying to register device '%s'", randomDeviceName) values := url.Values{} values.Set("device_id", randomDeviceName) opts := rest.Opts{ Method: "POST", RootURL: v1registerURL, ContentType: "application/x-www-form-urlencoded", ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"}, Parameters: values, } var deviceRegistration *api.DeviceRegistrationResponse _, err = srv.CallJSON(ctx, &opts, nil, &deviceRegistration) return deviceRegistration, err } // doAuthV1 runs the actual token request for V1 authentication func doAuthV1(ctx context.Context, srv *rest.Client, username, password string) (token oauth2.Token, err error) { // prepare our token request with username and password values := url.Values{} values.Set("grant_type", "PASSWORD") values.Set("password", password) values.Set("username", username) values.Set("client_id", oauthConfig.ClientID) values.Set("client_secret", oauthConfig.ClientSecret) opts := rest.Opts{ Method: "POST", RootURL:
oauthConfig.Endpoint.AuthURL, ContentType: "application/x-www-form-urlencoded", Parameters: values, } // do the first request var jsonToken api.TokenJSON resp, err := srv.CallJSON(ctx, &opts, nil, &jsonToken) if err != nil { // if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header if resp != nil { if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" { fmt.Printf("This account uses 2 factor authentication; you will receive a verification code via SMS.\n") fmt.Printf("Enter verification code> ") authCode := config.ReadLine() authCode = strings.Replace(authCode, "-", "", -1) // remove any "-" contained in the code so we have a 6 digit number opts.ExtraHeaders = make(map[string]string) opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode _, err = srv.CallJSON(ctx, &opts, nil, &jsonToken) } } } token.AccessToken = jsonToken.AccessToken token.RefreshToken = jsonToken.RefreshToken token.TokenType = jsonToken.TokenType token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second) return token, err } // v2config configures a jottacloud backend using the modern JottaCli token based authentication func v2config(ctx context.Context, name string, m configmap.Mapper) { srv := rest.NewClient(fshttp.NewClient(fs.Config)) fmt.Printf("Generate a personal login token here: https://www.jottacloud.com/web/secure\n") fmt.Printf("Login Token> ") loginToken := config.ReadLine() m.Set(configClientID, "jottacli") m.Set(configClientSecret, "") token, err := doAuthV2(ctx, srv, loginToken, m) if err != nil { log.Fatalf("Failed to get oauth token: %s", err) } err = oauthutil.PutToken(name, m, &token, true) if err != nil { log.Fatalf("Error while saving token: %s", err) } fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g.
for accessing files uploaded using the official Jottacloud client?\n\n") if config.Confirm(false) { oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig) if err != nil { log.Fatalf("Failed to load oAuthClient: %s", err) } srv = rest.NewClient(oAuthClient).SetRoot(rootURL) apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL) device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv) if err != nil { log.Fatalf("Failed to setup mountpoint: %s", err) } m.Set(configDevice, device) m.Set(configMountpoint, mountpoint) } m.Set("configVersion", strconv.Itoa(configVersion)) } // doAuthV2 runs the actual token request for V2 authentication func doAuthV2(ctx context.Context, srv *rest.Client, loginTokenBase64 string, m configmap.Mapper) (token oauth2.Token, err error) { loginTokenBytes, err := base64.RawURLEncoding.DecodeString(loginTokenBase64) if err != nil { return token, err } // decode login token var loginToken api.LoginToken decoder := json.NewDecoder(bytes.NewReader(loginTokenBytes)) err = decoder.Decode(&loginToken) if err != nil { return token, err } // retrieve endpoint urls opts := rest.Opts{ Method: "GET", RootURL: loginToken.WellKnownLink, } var wellKnown api.WellKnown _, err = srv.CallJSON(ctx, &opts, nil, &wellKnown) if err != nil { return token, err } // save the tokenurl oauthConfig.Endpoint.AuthURL = wellKnown.TokenEndpoint oauthConfig.Endpoint.TokenURL = wellKnown.TokenEndpoint m.Set(configTokenURL, wellKnown.TokenEndpoint) // prepare our token request with the username and auth token from the login token values := url.Values{} values.Set("client_id", "jottacli") values.Set("grant_type", "password") values.Set("password", loginToken.AuthToken) values.Set("scope", "offline_access+openid") values.Set("username", loginToken.Username) opts = rest.Opts{ Method: "POST", RootURL: oauthConfig.Endpoint.AuthURL, ContentType: "application/x-www-form-urlencoded", Body: strings.NewReader(values.Encode()), } // do the first request var jsonToken api.TokenJSON _, err = srv.CallJSON(ctx, &opts, nil, &jsonToken) if err != nil { return token, err } token.AccessToken = jsonToken.AccessToken token.RefreshToken = jsonToken.RefreshToken token.TokenType = jsonToken.TokenType token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second) return token, err } // setupMountpoint sets up a custom device and mountpoint if desired by the user func setupMountpoint(ctx context.Context, srv *rest.Client, apiSrv *rest.Client) (device, mountpoint string, err error) { cust, err := getCustomerInfo(ctx, apiSrv) if err != nil { return "", "", err } acc, err := getDriveInfo(ctx, srv, cust.Username) if err != nil { return "", "", err } var deviceNames []string for i := range acc.Devices { deviceNames = append(deviceNames, acc.Devices[i].Name) } fmt.Printf("Please select the device to use. Normally this will be Jotta\n") device = config.Choose("Devices", deviceNames, nil, false) dev, err := getDeviceInfo(ctx, srv, path.Join(cust.Username, device)) if err != nil { return "", "", err } if len(dev.MountPoints) == 0 { return "", "", errors.New("no mountpoints for selected device") } var mountpointNames []string for i := range dev.MountPoints { mountpointNames = append(mountpointNames, dev.MountPoints[i].Name) } fmt.Printf("Please select the mountpoint to use.
Normally this will be Archive\n") mountpoint = config.Choose("Mountpoints", mountpointNames, nil, false) return device, mountpoint, err } // getCustomerInfo queries general information about the account func getCustomerInfo(ctx context.Context, srv *rest.Client) (info *api.CustomerInfo, err error) { opts := rest.Opts{ Method: "GET", Path: "account/v1/customer", } _, err = srv.CallJSON(ctx, &opts, nil, &info) if err != nil { return nil, errors.Wrap(err, "couldn't get customer info") } return info, nil } // getDriveInfo queries general information about the account and the available devices and mountpoints. func getDriveInfo(ctx context.Context, srv *rest.Client, username string) (info *api.DriveInfo, err error) { opts := rest.Opts{ Method: "GET", Path: username, } _, err = srv.CallXML(ctx, &opts, nil, &info) if err != nil { return nil, errors.Wrap(err, "couldn't get drive info") } return info, nil } // getDeviceInfo queries Information about a jottacloud device func getDeviceInfo(ctx context.Context, srv *rest.Client, path string) (info *api.JottaDevice, err error) { opts := rest.Opts{ Method: "GET", Path: urlPathEscape(path), } _, err = srv.CallXML(ctx, &opts, nil, &info) if err != nil { return nil, errors.Wrap(err, "couldn't get device info") } return info, nil } // setEndpointURL generates the API endpoint URL func (f *Fs) setEndpointURL() { if f.opt.Device == "" { f.opt.Device = defaultDevice } if f.opt.Mountpoint == "" { f.opt.Mountpoint = defaultMountpoint } f.endpointURL = path.Join(f.user, f.opt.Device, f.opt.Mountpoint) } // readMetaDataForPath reads the metadata from the path func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.JottaFile, err error) { opts := rest.Opts{ Method: "GET", Path: f.filePath(path), } var result api.JottaFile var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if apiErr, ok := err.(*api.Error); ok { // does not exist if apiErr.StatusCode == http.StatusNotFound { return nil, fs.ErrorObjectNotFound } } if err != nil { return nil, errors.Wrap(err, "read metadata failed") } if result.XMLName.Local != "file" { return nil, fs.ErrorNotAFile } return &result, nil } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { // Decode error response errResponse := new(api.Error) err := rest.DecodeXML(resp, &errResponse) if err != nil { fs.Debugf(nil, "Couldn't decode error response: %v", err) } if errResponse.Message == "" { errResponse.Message = resp.Status } if errResponse.StatusCode == 0 { errResponse.StatusCode = resp.StatusCode } return errResponse } // Jottacloud wants '+' to be URL encoded even though the RFC states it's not reserved func urlPathEscape(in string) string { return strings.Replace(rest.URLPathEscape(in), "+", "%2B", -1) } // filePathRaw returns an unescaped file path (f.root, file) func (f *Fs) filePathRaw(file string) string { return path.Join(f.endpointURL, f.opt.Enc.FromStandardPath(path.Join(f.root, file))) } // filePath returns an escaped file path (f.root, file) func (f *Fs) filePath(file string) string { return urlPathEscape(f.filePathRaw(file)) } // Jottacloud requires the grant_type 'refresh_token' string // to be uppercase and throws a 400 Bad Request if we use the // lower case used by the oauth2 module // // This filter catches all refresh requests, reads the body, // changes the case and then sends it on func grantTypeFilter(req *http.Request) { if v1tokenURL 
== req.URL.String() { // read the entire body refreshBody, err := ioutil.ReadAll(req.Body) if err != nil { return } _ = req.Body.Close() // make the refresh token upper case refreshBody = []byte(strings.Replace(string(refreshBody), "grant_type=refresh_token", "grant_type=REFRESH_TOKEN", 1)) // set the new ReadCloser (with a dummy Close()) req.Body = ioutil.NopCloser(bytes.NewReader(refreshBody)) } } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.TODO() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } // Check config version var ver int version, ok := m.Get("configVersion") if ok { ver, err = strconv.Atoi(version) if err != nil { return nil, errors.New("Failed to parse config version") } ok = (ver == configVersion) || (ver == v1configVersion) } if !ok { return nil, errors.New("Outdated config - please reconfigure this backend") } baseClient := fshttp.NewClient(fs.Config) if ver == configVersion { oauthConfig.ClientID = "jottacli" // if custom endpoints are set use them else stick with defaults if tokenURL, ok := m.Get(configTokenURL); ok { oauthConfig.Endpoint.TokenURL = tokenURL // jottacloud is weird. we need to use the tokenURL as authURL oauthConfig.Endpoint.AuthURL = tokenURL } } else if ver == v1configVersion { clientID, ok := m.Get(configClientID) if !ok { clientID = v1ClientID } clientSecret, ok := m.Get(configClientSecret) if !ok { clientSecret = v1EncryptedClientSecret } oauthConfig.ClientID = clientID oauthConfig.ClientSecret = obscure.MustReveal(clientSecret) oauthConfig.Endpoint.TokenURL = v1tokenURL oauthConfig.Endpoint.AuthURL = v1tokenURL // add the request filter to fix token refresh if do, ok := baseClient.Transport.(interface { SetRequestFilter(f func(req *http.Request)) }); ok { do.SetRequestFilter(grantTypeFilter) } else { fs.Debugf(name+":", "Couldn't add request filter - uploads will fail") } } // Create OAuth Client oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient) if err != nil { return nil, errors.Wrap(err, "Failed to configure Jottacloud oauth client") } rootIsDir := strings.HasSuffix(root, "/") root = parsePath(root) f := &Fs{ name: name, root: root, opt: *opt, srv: rest.NewClient(oAuthClient).SetRoot(rootURL), apiSrv: rest.NewClient(oAuthClient).SetRoot(apiURL), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), } f.features = (&fs.Features{ CaseInsensitive: true, CanHaveEmptyDirectories: true, ReadMimeType: true, WriteMimeType: true, }).Fill(f) f.srv.SetErrorHandler(errorHandler) if opt.TrashedOnly { // we cannot support showing Trashed Files when using ListR right now f.features.ListR = nil } // Renew the token in the background f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { _, err := f.readMetaDataForPath(ctx, "") return err }) cust, err := getCustomerInfo(ctx, f.apiSrv) if err != nil { return nil, err } f.user = cust.Username f.setEndpointURL() if root != "" && !rootIsDir { // Check to see if the root actually an existing file remote := path.Base(root) f.root = path.Dir(root) if f.root == "." 
{ f.root = "" } _, err := f.NewObject(context.TODO(), remote) if err != nil { if errors.Cause(err) == fs.ErrorObjectNotFound || errors.Cause(err) == fs.ErrorNotAFile { // File doesn't exist so return old f f.root = root return f, nil } return nil, err } // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.JottaFile) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { // Set info err = o.setMetaData(info) } else { err = o.readMetaData(ctx, false) // reads info and meta, returning an error } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // CreateDir makes a directory func (f *Fs) CreateDir(ctx context.Context, path string) (jf *api.JottaFolder, err error) { // fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf) var resp *http.Response opts := rest.Opts{ Method: "POST", Path: f.filePath(path), Parameters: url.Values{}, } opts.Parameters.Set("mkDir", "true") err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &jf) return shouldRetry(resp, err) }) if err != nil { //fmt.Printf("...Error %v\n", err) return nil, err } // fmt.Printf("...Id %q\n", *info.Id) return jf, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { opts := rest.Opts{ Method: "GET", Path: f.filePath(dir), } var resp *http.Response var result api.JottaFolder err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { if apiErr, ok := err.(*api.Error); ok { // does not exist if apiErr.StatusCode == http.StatusNotFound { return nil, fs.ErrorDirNotFound } } return nil, errors.Wrap(err, "couldn't list files") } if bool(result.Deleted) && !f.opt.TrashedOnly { return nil, fs.ErrorDirNotFound } for i := range result.Folders { item := &result.Folders[i] if !f.opt.TrashedOnly && bool(item.Deleted) { continue } remote := path.Join(dir, f.opt.Enc.ToStandardName(item.Name)) d := fs.NewDir(remote, time.Time(item.ModifiedAt)) entries = append(entries, d) } for i := range result.Files { item := &result.Files[i] if f.opt.TrashedOnly { if !item.Deleted || item.State != "COMPLETED" { continue } } else { if item.Deleted || item.State != "COMPLETED" { continue } } remote := path.Join(dir, f.opt.Enc.ToStandardName(item.Name)) o, err := f.newObjectWithInfo(ctx, remote, item) if err != nil { continue } entries = append(entries, o) } return entries, nil } // listFileDirFn is called from listFileDir to handle an object. 
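// Illustrative example (not part of rclone): the callback style used by
// listFileDir below — a walker hands each entry to a caller-supplied
// function and aborts on the first error. entry and walk are invented
// names standing in for fs.DirEntry and the real traversal.
package main

import (
	"errors"
	"fmt"
)

type entry struct{ name string }

// walk visits every entry, stopping at the first callback error.
func walk(entries []entry, fn func(entry) error) error {
	for _, e := range entries {
		if err := fn(e); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	items := []entry{{"a.txt"}, {"b.txt"}, {"bad"}, {"c.txt"}}
	err := walk(items, func(e entry) error {
		if e.name == "bad" {
			return errors.New("refusing " + e.name)
		}
		fmt.Println("visited", e.name)
		return nil
	})
	fmt.Println("walk returned:", err) // c.txt is never visited
}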
type listFileDirFn func(fs.DirEntry) error // List the objects and directories into entries, from a // special kind of JottaFolder representing a FileDirLis func (f *Fs) listFileDir(ctx context.Context, remoteStartPath string, startFolder *api.JottaFolder, fn listFileDirFn) error { pathPrefix := "/" + f.filePathRaw("") // Non-escaped prefix of API paths to be cut off, to be left with the remote path including the remoteStartPath pathPrefixLength := len(pathPrefix) startPath := path.Join(pathPrefix, remoteStartPath) // Non-escaped API path up to and including remoteStartPath, to decide if it should be created as a new dir object startPathLength := len(startPath) for i := range startFolder.Folders { folder := &startFolder.Folders[i] if folder.Deleted { return nil } folderPath := f.opt.Enc.ToStandardPath(path.Join(folder.Path, folder.Name)) folderPathLength := len(folderPath) var remoteDir string if folderPathLength > pathPrefixLength { remoteDir = folderPath[pathPrefixLength+1:] if folderPathLength > startPathLength { d := fs.NewDir(remoteDir, time.Time(folder.ModifiedAt)) err := fn(d) if err != nil { return err } } } for i := range folder.Files { file := &folder.Files[i] if file.Deleted || file.State != "COMPLETED" { continue } remoteFile := path.Join(remoteDir, f.opt.Enc.ToStandardName(file.Name)) o, err := f.newObjectWithInfo(ctx, remoteFile, file) if err != nil { return err } err = fn(o) if err != nil { return err } } } return nil } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { opts := rest.Opts{ Method: "GET", Path: f.filePath(dir), Parameters: url.Values{}, } opts.Parameters.Set("mode", "list") var resp *http.Response var result api.JottaFolder // Could be JottaFileDirList, but JottaFolder is close enough err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { if apiErr, ok := err.(*api.Error); ok { // does not exist if apiErr.StatusCode == http.StatusNotFound { return fs.ErrorDirNotFound } } return errors.Wrap(err, "couldn't list files") } list := walk.NewListRHelper(callback) err = f.listFileDir(ctx, dir, &result, func(entry fs.DirEntry) error { return list.Add(entry) }) if err != nil { return err } return list.Flush() } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Used to create new objects func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object) { // Temporary Object under construction o = &Object{ fs: f, remote: remote, size: size, modTime: modTime, } return o } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { if f.opt.Device != "Jotta" { return nil, errors.New("upload not supported for devices other than Jotta") } o := f.createObject(src.Remote(), src.ModTime(ctx), src.Size()) return o, o.Update(ctx, in, src, options...) 
} // mkParentDir makes the parent of the native path dirPath if // necessary and any directories above that func (f *Fs) mkParentDir(ctx context.Context, dirPath string) error { // defer log.Trace(dirPath, "")("") // chop off trailing / if it exists if strings.HasSuffix(dirPath, "/") { dirPath = dirPath[:len(dirPath)-1] } parent := path.Dir(dirPath) if parent == "." { parent = "" } return f.Mkdir(ctx, parent) } // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { _, err := f.CreateDir(ctx, dir) return err } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error) { root := path.Join(f.root, dir) if root == "" { return errors.New("can't purge root directory") } // check that the directory exists entries, err := f.List(ctx, dir) if err != nil { return err } if check { if len(entries) != 0 { return fs.ErrorDirectoryNotEmpty } } opts := rest.Opts{ Method: "POST", Path: f.filePath(dir), Parameters: url.Values{}, NoResponse: true, } if f.opt.HardDelete { opts.Parameters.Set("rmDir", "true") } else { opts.Parameters.Set("dlDir", "true") } var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "couldn't purge directory") } return nil } // Rmdir deletes the root folder // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision return the precision of this Fs func (f *Fs) Precision() time.Duration { return time.Second } // Purge deletes all the files and the container func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // copyOrMoves copies or moves directories or files depending on the method parameter func (f *Fs) copyOrMove(ctx context.Context, method, src, dest string) (info *api.JottaFile, err error) { opts := rest.Opts{ Method: "POST", Path: src, Parameters: url.Values{}, } opts.Parameters.Set(method, "/"+path.Join(f.endpointURL, f.opt.Enc.FromStandardPath(path.Join(f.root, dest)))) var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &info) return shouldRetry(resp, err) }) if err != nil { return nil, err } return info, nil } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantMove } err := f.mkParentDir(ctx, remote) if err != nil { return nil, err } info, err := f.copyOrMove(ctx, "cp", srcObj.filePath(), remote) if err != nil { return nil, errors.Wrap(err, "couldn't copy file") } return f.newObjectWithInfo(ctx, remote, info) //return f.newObjectWithInfo(remote, &result) } // Move src to this remote using server side move operations. 
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } err := f.mkParentDir(ctx, remote) if err != nil { return nil, err } info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote) if err != nil { return nil, errors.Wrap(err, "couldn't move file") } return f.newObjectWithInfo(ctx, remote, info) //return f.newObjectWithInfo(remote, result) } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcPath := path.Join(srcFs.root, srcRemote) dstPath := path.Join(f.root, dstRemote) // Refuse to move to or from the root if srcPath == "" || dstPath == "" { fs.Debugf(src, "DirMove error: Can't move root") return errors.New("can't move root directory") } //fmt.Printf("Move src: %s (FullPath %s), dst: %s (FullPath: %s)\n", srcRemote, srcPath, dstRemote, dstPath) var err error _, err = f.List(ctx, dstRemote) if err == fs.ErrorDirNotFound { // OK } else if err != nil { return err } else { return fs.ErrorDirExists } _, err = f.copyOrMove(ctx, "mvDir", path.Join(f.endpointURL, f.opt.Enc.FromStandardPath(srcPath))+"/", dstRemote) if err != nil { return errors.Wrap(err, "couldn't move directory") } return nil } // PublicLink generates a public link to the remote path (usually readable by anyone) func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) { opts := rest.Opts{ Method: "GET", Path: f.filePath(remote), Parameters: url.Values{}, } if unlink { opts.Parameters.Set("mode", "disableShare") } else { opts.Parameters.Set("mode", "enableShare") } var resp *http.Response var result api.JottaFile err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if apiErr, ok := err.(*api.Error); ok { // does not exist if apiErr.StatusCode == http.StatusNotFound { return "", fs.ErrorObjectNotFound } } if err != nil { if unlink { return "", errors.Wrap(err, "couldn't remove public link") } return "", errors.Wrap(err, "couldn't create public link") } if unlink { if result.PublicSharePath != "" { return "", errors.Errorf("couldn't remove public link - %q", result.PublicSharePath) } return "", nil } if result.PublicSharePath == "" { return "", errors.New("couldn't create public link - no link path received") } link = path.Join(baseURL, result.PublicSharePath) return link, nil } // About gets quota information func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { info, err := getDriveInfo(ctx, f.srv, f.user) if err != nil { return nil, err } usage := &fs.Usage{ Used: fs.NewUsageValue(info.Usage), } if info.Capacity > 0 { usage.Total = fs.NewUsageValue(info.Capacity) usage.Free = fs.NewUsageValue(info.Capacity - info.Usage) } return usage, nil } // CleanUp empties the trash func (f *Fs) 
CleanUp(ctx context.Context) error { opts := rest.Opts{ Method: "POST", Path: "files/v1/purge_trash", } var info api.TrashResponse _, err := f.apiSrv.CallJSON(ctx, &opts, nil, &info) if err != nil { return errors.Wrap(err, "couldn't empty trash") } return nil } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } // --------------------------------------------- // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // filePath returns an escaped file path (f.root, remote) func (o *Object) filePath() string { return o.fs.filePath(o.remote) } // Hash returns the MD5 of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } return o.md5, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { ctx := context.TODO() err := o.readMetaData(ctx, false) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return 0 } return o.size } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.mimeType } // setMetaData sets the metadata from info func (o *Object) setMetaData(info *api.JottaFile) (err error) { o.hasMetaData = true o.size = info.Size o.md5 = info.MD5 o.mimeType = info.MimeType o.modTime = time.Time(info.ModifiedAt) return nil } // readMetaData reads and updates the metadata for an object func (o *Object) readMetaData(ctx context.Context, force bool) (err error) { if o.hasMetaData && !force { return nil } info, err := o.fs.readMetaDataForPath(ctx, o.remote) if err != nil { return err } if bool(info.Deleted) && !o.fs.opt.TrashedOnly { return fs.ErrorObjectNotFound } return o.setMetaData(info) } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present falls back to the // LastModified returned in the HTTP headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx, false) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { return fs.ErrorCantSetModTime } // Storable returns a boolean showing whether this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { fs.FixRangeOption(options, o.size) var resp *http.Response opts := rest.Opts{ Method: "GET", Path: o.filePath(), Parameters: url.Values{}, Options: options, } opts.Parameters.Set("mode", "bin") err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return nil, err } return resp.Body, err } // Read the MD5 of in, returning a reader which will read the same contents // // The cleanup function should be called when out is finished with, // regardless of whether this function returned an error or not.
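// A usage sketch added for clarity (not part of the original source); the
// reader "in" and the sizes are placeholders:
//
//	md5sum, out, cleanup, err := readMD5(in, size, threshold)
//	defer cleanup() // must always run, even when err != nil
//	// ... out now replays exactly the bytes that were hashed ...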
func readMD5(in io.Reader, size, threshold int64) (md5sum string, out io.Reader, cleanup func(), err error) { // we need an MD5 md5Hasher := md5.New() // use the teeReader to write to the local file AND calculate the MD5 while doing so teeReader := io.TeeReader(in, md5Hasher) // nothing to clean up by default cleanup = func() {} // don't cache small files on disk to reduce wear of the disk if size > threshold { var tempFile *os.File // create the cache file tempFile, err = ioutil.TempFile("", cachePrefix) if err != nil { return } _ = os.Remove(tempFile.Name()) // Delete the file - may not work on Windows // clean up the file after we are done downloading cleanup = func() { // the file should normally already be closed, but just to make sure _ = tempFile.Close() _ = os.Remove(tempFile.Name()) // delete the cache file after we are done - may be deleted already } // copy the ENTIRE file to disk and calculate the MD5 in the process if _, err = io.Copy(tempFile, teeReader); err != nil { return } // jump to the start of the local file so we can pass it along if _, err = tempFile.Seek(0, 0); err != nil { return } // replace the already read source with a reader of our cached file out = tempFile } else { // that's a small file, just read it into memory var inData []byte inData, err = ioutil.ReadAll(teeReader) if err != nil { return } // set the reader to our read memory block out = bytes.NewReader(inData) } return hex.EncodeToString(md5Hasher.Sum(nil)), out, cleanup, nil } // Update the object with the contents of the io.Reader, modTime and size // // If existing is set then it updates the object rather than creating a new one // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { size := src.Size() md5String, err := src.Hash(ctx, hash.MD5) if err != nil || md5String == "" { // unwrap the accounting from the input, we use wrap to put it // back on after the buffering var wrap accounting.WrapFn in, wrap = accounting.UnWrap(in) var cleanup func() md5String, in, cleanup, err = readMD5(in, size, int64(o.fs.opt.MD5MemoryThreshold)) defer cleanup() if err != nil { return errors.Wrap(err, "failed to calculate MD5") } // Wrap the accounting back onto the stream in = wrap(in) } // use the api to allocate the file first and get resume / deduplication info var resp *http.Response opts := rest.Opts{ Method: "POST", Path: "files/v1/allocate", Options: options, ExtraHeaders: make(map[string]string), } fileDate := api.Time(src.ModTime(ctx)).APIString() // the allocate request var request = api.AllocateFileRequest{ Bytes: size, Created: fileDate, Modified: fileDate, Md5: md5String, Path: path.Join(o.fs.opt.Mountpoint, o.fs.opt.Enc.FromStandardPath(path.Join(o.fs.root, o.remote))), } // send it var response api.AllocateFileResponse err = o.fs.pacer.CallNoRetry(func() (bool, error) { resp, err = o.fs.apiSrv.CallJSON(ctx, &opts, &request, &response) return shouldRetry(resp, err) }) if err != nil { return err } // If the file state is INCOMPLETE or CORRUPT, try to upload it now if response.State != "COMPLETED" { // how much do we still have to upload?
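// Added worked example (illustrative numbers only): if size is 100 and the
// allocate call returned ResumePos 40, then remainingBytes below is 60, the
// first 40 bytes of in are discarded with io.CopyN, and the upload request
// carries the header "Range: bytes=40-99".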
remainingBytes := size - response.ResumePos opts = rest.Opts{ Method: "POST", RootURL: response.UploadURL, ContentLength: &remainingBytes, ContentType: "application/octet-stream", Body: in, ExtraHeaders: make(map[string]string), } if response.ResumePos != 0 { opts.ExtraHeaders["Range"] = "bytes=" + strconv.FormatInt(response.ResumePos, 10) + "-" + strconv.FormatInt(size-1, 10) } // discard the bytes the server already has :) var result api.UploadResponse _, err = io.CopyN(ioutil.Discard, in, response.ResumePos) if err != nil { return err } // send the remaining bytes resp, err = o.fs.apiSrv.CallJSON(ctx, &opts, nil, &result) if err != nil { return err } // finally update the metadata o.hasMetaData = true o.size = result.Bytes o.md5 = result.Md5 o.modTime = time.Unix(result.Modified/1000, 0) } else { // If the file state is COMPLETED we don't need to upload it because the file was already found, but we still need to update our metadata return o.readMetaData(ctx, true) } return nil } // Remove an object func (o *Object) Remove(ctx context.Context) error { opts := rest.Opts{ Method: "POST", Path: o.filePath(), Parameters: url.Values{}, NoResponse: true, } if o.fs.opt.HardDelete { opts.Parameters.Set("rm", "true") } else { opts.Parameters.Set("dl", "true") } return o.fs.pacer.Call(func() (bool, error) { resp, err := o.fs.srv.CallXML(ctx, &opts, nil, nil) return shouldRetry(resp, err) }) } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.ListRer = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.MimeTyper = (*Object)(nil) ) rclone-1.53.3/backend/jottacloud/jottacloud_internal_test.go000066400000000000000000000022141375552240400242640ustar00rootroot00000000000000package jottacloud import ( "crypto/md5" "fmt" "io" "testing" "github.com/rclone/rclone/lib/readers" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestReadMD5(t *testing.T) { // Check readMD5 for different size and threshold for _, size := range []int64{0, 1024, 10 * 1024, 100 * 1024} { t.Run(fmt.Sprintf("%d", size), func(t *testing.T) { hasher := md5.New() n, err := io.Copy(hasher, readers.NewPatternReader(size)) require.NoError(t, err) assert.Equal(t, n, size) wantMD5 := fmt.Sprintf("%x", hasher.Sum(nil)) for _, threshold := range []int64{512, 1024, 10 * 1024, 20 * 1024} { t.Run(fmt.Sprintf("%d", threshold), func(t *testing.T) { in := readers.NewPatternReader(size) gotMD5, out, cleanup, err := readMD5(in, size, threshold) defer cleanup() require.NoError(t, err) assert.Equal(t, wantMD5, gotMD5) // check md5hash of out hasher := md5.New() n, err := io.Copy(hasher, out) require.NoError(t, err) assert.Equal(t, n, size) outMD5 := fmt.Sprintf("%x", hasher.Sum(nil)) assert.Equal(t, wantMD5, outMD5) }) } }) } } rclone-1.53.3/backend/jottacloud/jottacloud_test.go000066400000000000000000000005741375552240400223770ustar00rootroot00000000000000// Test Jottacloud filesystem interface package jottacloud_test import ( "testing" "github.com/rclone/rclone/backend/jottacloud" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestJottacloud:", NilObject: (*jottacloud.Object)(nil), }) }
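// Added note (hedged): this integration test is normally pointed at a remote
// configured as "TestJottacloud:", e.g. with something like
//
//	go test -v ./backend/jottacloud -remote TestJottacloud:
//
// where the -remote flag is assumed to come from the fstest framework.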
rclone-1.53.3/backend/koofr/000077500000000000000000000000001375552240400156035ustar00rootroot00000000000000rclone-1.53.3/backend/koofr/koofr.go000066400000000000000000000377671375552240400172760ustar00rootroot00000000000000package koofr import ( "context" "encoding/base64" "errors" "fmt" "io" "net/http" "path" "strings" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/encoder" httpclient "github.com/koofr/go-httpclient" koofrclient "github.com/koofr/go-koofrclient" ) // Register Fs with rclone func init() { fs.Register(&fs.RegInfo{ Name: "koofr", Description: "Koofr", NewFs: NewFs, Options: []fs.Option{{ Name: "endpoint", Help: "The Koofr API endpoint to use", Default: "https://app.koofr.net", Required: true, Advanced: true, }, { Name: "mountid", Help: "Mount ID of the mount to use. If omitted, the primary mount is used.", Required: false, Default: "", Advanced: true, }, { Name: "setmtime", Help: "Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.", Default: true, Required: true, Advanced: true, }, { Name: "user", Help: "Your Koofr user name", Required: true, }, { Name: "password", Help: "Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)", IsPassword: true, Required: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Encode invalid UTF-8 bytes as json doesn't handle them properly. Default: (encoder.Display | encoder.EncodeBackSlash | encoder.EncodeInvalidUtf8), }}, }) } // Options represent the configuration of the Koofr backend type Options struct { Endpoint string `config:"endpoint"` MountID string `config:"mountid"` User string `config:"user"` Password string `config:"password"` SetMTime bool `config:"setmtime"` Enc encoder.MultiEncoder `config:"encoding"` } // An Fs is a representation of a remote Koofr Fs type Fs struct { name string mountID string root string opt Options features *fs.Features client *koofrclient.KoofrClient } // An Object on the remote Koofr Fs type Object struct { fs *Fs remote string info koofrclient.FileInfo } func base(pth string) string { rv := path.Base(pth) if rv == "" || rv == "." { rv = "/" } return rv } func dir(pth string) string { rv := path.Dir(pth) if rv == "" || rv == "." 
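// Added note: base (above) and dir normalize the degenerate results "" and
// "." of path.Base and path.Dir to "/", so the backend always handles the
// root in one canonical form.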
{ rv = "/" } return rv } // String returns a string representation of the remote Object func (o *Object) String() string { return o.remote } // Remote returns the remote path of the Object, relative to Fs root func (o *Object) Remote() string { return o.remote } // ModTime returns the modification time of the Object func (o *Object) ModTime(ctx context.Context) time.Time { return time.Unix(o.info.Modified/1000, (o.info.Modified%1000)*1000*1000) } // Size return the size of the Object in bytes func (o *Object) Size() int64 { return o.info.Size } // Fs returns a reference to the Koofr Fs containing the Object func (o *Object) Fs() fs.Info { return o.fs } // Hash returns an MD5 hash of the Object func (o *Object) Hash(ctx context.Context, typ hash.Type) (string, error) { if typ == hash.MD5 { return o.info.Hash, nil } return "", nil } // fullPath returns full path of the remote Object (including Fs root) func (o *Object) fullPath() string { return o.fs.fullPath(o.remote) } // Storable returns true if the Object is storable func (o *Object) Storable() bool { return true } // SetModTime is not supported func (o *Object) SetModTime(ctx context.Context, mtime time.Time) error { return fs.ErrorCantSetModTimeWithoutDelete } // Open opens the Object for reading func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { var sOff, eOff int64 = 0, -1 fs.FixRangeOption(options, o.Size()) for _, option := range options { switch x := option.(type) { case *fs.SeekOption: sOff = x.Offset case *fs.RangeOption: sOff = x.Start eOff = x.End default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } if sOff == 0 && eOff < 0 { return o.fs.client.FilesGet(o.fs.mountID, o.fullPath()) } span := &koofrclient.FileSpan{ Start: sOff, End: eOff, } return o.fs.client.FilesGetRange(o.fs.mountID, o.fullPath(), span) } // Update updates the Object contents func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { mtime := src.ModTime(ctx).UnixNano() / 1000 / 1000 putopts := &koofrclient.PutOptions{ ForceOverwrite: true, NoRename: true, OverwriteIgnoreNonExisting: true, SetModified: &mtime, } fullPath := o.fullPath() dirPath := dir(fullPath) name := base(fullPath) err := o.fs.mkdir(dirPath) if err != nil { return err } info, err := o.fs.client.FilesPutWithOptions(o.fs.mountID, dirPath, name, in, putopts) if err != nil { return err } o.info = *info return nil } // Remove deletes the remote Object func (o *Object) Remove(ctx context.Context) error { return o.fs.client.FilesDelete(o.fs.mountID, o.fullPath()) } // Name returns the name of the Fs func (f *Fs) Name() string { return f.name } // Root returns the root path of the Fs func (f *Fs) Root() string { return f.root } // String returns a string representation of the Fs func (f *Fs) String() string { return "koofr:" + f.mountID + ":" + f.root } // Features returns the optional features supported by this Fs func (f *Fs) Features() *fs.Features { return f.features } // Precision denotes that setting modification times is not supported func (f *Fs) Precision() time.Duration { if !f.opt.SetMTime { return fs.ModTimeNotSupported } return time.Millisecond } // Hashes returns a set of hashes are Provided by the Fs func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } // fullPath constructs a full, absolute path from an Fs root relative path, func (f *Fs) fullPath(part string) string { return f.opt.Enc.FromStandardPath(path.Join("/", f.root, part)) } // 
NewFs constructs a new filesystem given a root path and configuration options func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) { opt := new(Options) err = configstruct.Set(m, opt) if err != nil { return nil, err } pass, err := obscure.Reveal(opt.Password) if err != nil { return nil, err } httpClient := httpclient.New() httpClient.Client = fshttp.NewClient(fs.Config) client := koofrclient.NewKoofrClientWithHTTPClient(opt.Endpoint, httpClient) basicAuth := fmt.Sprintf("Basic %s", base64.StdEncoding.EncodeToString([]byte(opt.User+":"+pass))) client.HTTPClient.Headers.Set("Authorization", basicAuth) mounts, err := client.Mounts() if err != nil { return nil, err } f := &Fs{ name: name, root: root, opt: *opt, client: client, } f.features = (&fs.Features{ CaseInsensitive: true, DuplicateFiles: false, BucketBased: false, CanHaveEmptyDirectories: true, }).Fill(f) for _, m := range mounts { if opt.MountID != "" { if m.Id == opt.MountID { f.mountID = m.Id break } } else if m.IsPrimary { f.mountID = m.Id break } } if f.mountID == "" { if opt.MountID == "" { return nil, errors.New("Failed to find primary mount") } return nil, errors.New("Failed to find mount " + opt.MountID) } rootFile, err := f.client.FilesInfo(f.mountID, f.opt.Enc.FromStandardPath("/"+f.root)) if err == nil && rootFile.Type != "dir" { f.root = dir(f.root) err = fs.ErrorIsFile } else { err = nil } return f, err } // List returns a list of items in a directory func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { files, err := f.client.FilesList(f.mountID, f.fullPath(dir)) if err != nil { return nil, translateErrorsDir(err) } entries = make([]fs.DirEntry, len(files)) for i, file := range files { remote := path.Join(dir, f.opt.Enc.ToStandardName(file.Name)) if file.Type == "dir" { entries[i] = fs.NewDir(remote, time.Unix(0, 0)) } else { entries[i] = &Object{ fs: f, info: file, remote: remote, } } } return entries, nil } // NewObject creates a new remote Object for a given remote path func (f *Fs) NewObject(ctx context.Context, remote string) (obj fs.Object, err error) { info, err := f.client.FilesInfo(f.mountID, f.fullPath(remote)) if err != nil { return nil, translateErrorsObject(err) } if info.Type == "dir" { return nil, fs.ErrorNotAFile } return &Object{ fs: f, info: info, remote: remote, }, nil } // Put updates a remote Object func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (obj fs.Object, err error) { mtime := src.ModTime(ctx).UnixNano() / 1000 / 1000 putopts := &koofrclient.PutOptions{ ForceOverwrite: true, NoRename: true, OverwriteIgnoreNonExisting: true, SetModified: &mtime, } fullPath := f.fullPath(src.Remote()) dirPath := dir(fullPath) name := base(fullPath) err = f.mkdir(dirPath) if err != nil { return nil, err } info, err := f.client.FilesPutWithOptions(f.mountID, dirPath, name, in, putopts) if err != nil { return nil, translateErrorsObject(err) } return &Object{ fs: f, info: *info, remote: src.Remote(), }, nil } // PutStream updates a remote Object with a stream of unknown size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) 
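// Added note: streams of unknown size need no special handling here, since
// the Koofr client consumes a plain io.Reader, so PutStream simply
// delegates to Put.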
} // isBadRequest is a predicate which holds true iff the error returned was // HTTP status 400 func isBadRequest(err error) bool { switch err := err.(type) { case httpclient.InvalidStatusError: if err.Got == http.StatusBadRequest { return true } } return false } // translateErrorsDir translates koofr errors to rclone errors (for a dir // operation) func translateErrorsDir(err error) error { switch err := err.(type) { case httpclient.InvalidStatusError: if err.Got == http.StatusNotFound { return fs.ErrorDirNotFound } } return err } // translatesErrorsObject translates Koofr errors to rclone errors (for an object operation) func translateErrorsObject(err error) error { switch err := err.(type) { case httpclient.InvalidStatusError: if err.Got == http.StatusNotFound { return fs.ErrorObjectNotFound } } return err } // mkdir creates a directory at the given remote path. Creates ancestors if // necessary func (f *Fs) mkdir(fullPath string) error { if fullPath == "/" { return nil } info, err := f.client.FilesInfo(f.mountID, fullPath) if err == nil && info.Type == "dir" { return nil } err = translateErrorsDir(err) if err != nil && err != fs.ErrorDirNotFound { return err } dirs := strings.Split(fullPath, "/") parent := "/" for _, part := range dirs { if part == "" { continue } info, err = f.client.FilesInfo(f.mountID, path.Join(parent, part)) if err != nil || info.Type != "dir" { err = translateErrorsDir(err) if err != nil && err != fs.ErrorDirNotFound { return err } err = f.client.FilesNewFolder(f.mountID, parent, part) if err != nil && !isBadRequest(err) { return err } } parent = path.Join(parent, part) } return nil } // Mkdir creates a directory at the given remote path. Creates ancestors if // necessary func (f *Fs) Mkdir(ctx context.Context, dir string) error { fullPath := f.fullPath(dir) return f.mkdir(fullPath) } // Rmdir removes an (empty) directory at the given remote path func (f *Fs) Rmdir(ctx context.Context, dir string) error { files, err := f.client.FilesList(f.mountID, f.fullPath(dir)) if err != nil { return translateErrorsDir(err) } if len(files) > 0 { return fs.ErrorDirectoryNotEmpty } err = f.client.FilesDelete(f.mountID, f.fullPath(dir)) if err != nil { return translateErrorsDir(err) } return nil } // Copy copies a remote Object to the given path func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { dstFullPath := f.fullPath(remote) dstDir := dir(dstFullPath) err := f.mkdir(dstDir) if err != nil { return nil, fs.ErrorCantCopy } mtime := src.ModTime(ctx).UnixNano() / 1000 / 1000 err = f.client.FilesCopy((src.(*Object)).fs.mountID, (src.(*Object)).fs.fullPath((src.(*Object)).remote), f.mountID, dstFullPath, koofrclient.CopyOptions{SetModified: &mtime}) if err != nil { return nil, fs.ErrorCantCopy } return f.NewObject(ctx, remote) } // Move moves a remote Object to the given path func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj := src.(*Object) dstFullPath := f.fullPath(remote) dstDir := dir(dstFullPath) err := f.mkdir(dstDir) if err != nil { return nil, fs.ErrorCantMove } err = f.client.FilesMove(srcObj.fs.mountID, srcObj.fs.fullPath(srcObj.remote), f.mountID, dstFullPath) if err != nil { return nil, fs.ErrorCantMove } return f.NewObject(ctx, remote) } // DirMove moves a remote directory to the given path func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs := src.(*Fs) srcFullPath := srcFs.fullPath(srcRemote) dstFullPath := f.fullPath(dstRemote) if 
srcFs.mountID == f.mountID && srcFullPath == dstFullPath { return fs.ErrorDirExists } dstDir := dir(dstFullPath) err := f.mkdir(dstDir) if err != nil { return fs.ErrorCantDirMove } err = f.client.FilesMove(srcFs.mountID, srcFullPath, f.mountID, dstFullPath) if err != nil { return fs.ErrorCantDirMove } return nil } // About reports space usage (with a MB precision) func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { mount, err := f.client.MountsDetails(f.mountID) if err != nil { return nil, err } return &fs.Usage{ Total: fs.NewUsageValue(mount.SpaceTotal * 1024 * 1024), Used: fs.NewUsageValue(mount.SpaceUsed * 1024 * 1024), Trashed: nil, Other: nil, Free: fs.NewUsageValue((mount.SpaceTotal - mount.SpaceUsed) * 1024 * 1024), Objects: nil, }, nil } // Purge purges the complete Fs func (f *Fs) Purge(ctx context.Context) error { err := translateErrorsDir(f.client.FilesDelete(f.mountID, f.fullPath(""))) return err } // linkCreate is a Koofr API request for creating a public link type linkCreate struct { Path string `json:"path"` } // link is a Koofr API response to creating a public link type link struct { ID string `json:"id"` Name string `json:"name"` Path string `json:"path"` Counter int64 `json:"counter"` URL string `json:"url"` ShortURL string `json:"shortUrl"` Hash string `json:"hash"` Host string `json:"host"` HasPassword bool `json:"hasPassword"` Password string `json:"password"` ValidFrom int64 `json:"validFrom"` ValidTo int64 `json:"validTo"` PasswordRequired bool `json:"passwordRequired"` } // createLink makes a Koofr API call to create a public link func createLink(c *koofrclient.KoofrClient, mountID string, path string) (*link, error) { linkCreate := linkCreate{ Path: path, } linkData := link{} request := httpclient.RequestData{ Method: "POST", Path: "/api/v2/mounts/" + mountID + "/links", ExpectedStatus: []int{http.StatusOK, http.StatusCreated}, ReqEncoding: httpclient.EncodingJSON, ReqValue: linkCreate, RespEncoding: httpclient.EncodingJSON, RespValue: &linkData, } _, err := c.Request(&request) if err != nil { return nil, err } return &linkData, nil } // PublicLink creates a public link to the remote path func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) { linkData, err := createLink(f.client, f.mountID, f.fullPath(remote)) if err != nil { return "", translateErrorsDir(err) } return linkData.ShortURL, nil } rclone-1.53.3/backend/koofr/koofr_test.go000066400000000000000000000003711375552240400203120ustar00rootroot00000000000000package koofr_test import ( "testing" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestKoofr:", }) } rclone-1.53.3/backend/local/000077500000000000000000000000001375552240400155555ustar00rootroot00000000000000rclone-1.53.3/backend/local/about_unix.go000066400000000000000000000015371375552240400202670ustar00rootroot00000000000000// +build darwin dragonfly freebsd linux package local import ( "context" "os" "syscall" "github.com/pkg/errors" "github.com/rclone/rclone/fs" ) // About gets quota information func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { var s syscall.Statfs_t err := syscall.Statfs(f.root, &s) if err != nil { if os.IsNotExist(err) { return nil, fs.ErrorDirNotFound } return nil, errors.Wrap(err, "failed to read disk usage") } bs := int64(s.Bsize) // nolint: unconvert usage := &fs.Usage{ Total: fs.NewUsageValue(bs * int64(s.Blocks)), 
// quota of bytes that can be used Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), // bytes in use Free: fs.NewUsageValue(bs * int64(s.Bavail)), // bytes which can be uploaded before reaching the quota } return usage, nil } // check interface var _ fs.Abouter = &Fs{} rclone-1.53.3/backend/local/about_windows.go000066400000000000000000000020611375552240400207670ustar00rootroot00000000000000// +build windows package local import ( "context" "syscall" "unsafe" "github.com/pkg/errors" "github.com/rclone/rclone/fs" ) var getFreeDiskSpace = syscall.NewLazyDLL("kernel32.dll").NewProc("GetDiskFreeSpaceExW") // About gets quota information func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { var available, total, free int64 _, _, e1 := getFreeDiskSpace.Call( uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(f.root))), uintptr(unsafe.Pointer(&available)), // lpFreeBytesAvailable - for this user uintptr(unsafe.Pointer(&total)), // lpTotalNumberOfBytes uintptr(unsafe.Pointer(&free)), // lpTotalNumberOfFreeBytes ) if e1 != syscall.Errno(0) { return nil, errors.Wrap(e1, "failed to read disk usage") } usage := &fs.Usage{ Total: fs.NewUsageValue(total), // quota of bytes that can be used Used: fs.NewUsageValue(total - free), // bytes in use Free: fs.NewUsageValue(available), // bytes which can be uploaded before reaching the quota } return usage, nil } // check interface var _ fs.Abouter = &Fs{} rclone-1.53.3/backend/local/encode_darwin.go000066400000000000000000000004241375552240400207050ustar00rootroot00000000000000//+build darwin package local import "github.com/rclone/rclone/lib/encoder" // This is the encoding used by the local backend for macOS // // macOS can't store invalid UTF-8, it converts them into %XX encoding const defaultEnc = (encoder.Base | encoder.EncodeInvalidUtf8) rclone-1.53.3/backend/local/encode_other.go000066400000000000000000000003051375552240400205400ustar00rootroot00000000000000//+build !windows,!darwin package local import "github.com/rclone/rclone/lib/encoder" // This is the encoding used by the local backend for non windows platforms const defaultEnc = encoder.Base rclone-1.53.3/backend/local/encode_windows.go000066400000000000000000000023021375552240400211100ustar00rootroot00000000000000//+build windows package local import "github.com/rclone/rclone/lib/encoder" // This is the encoding used by the local backend for windows platforms // // List of replaced characters: // < (less than) -> '<' // FULLWIDTH LESS-THAN SIGN // > (greater than) -> '>' // FULLWIDTH GREATER-THAN SIGN // : (colon) -> ':' // FULLWIDTH COLON // " (double quote) -> '"' // FULLWIDTH QUOTATION MARK // \ (backslash) -> '\' // FULLWIDTH REVERSE SOLIDUS // | (vertical line) -> '|' // FULLWIDTH VERTICAL LINE // ? (question mark) -> '?' // FULLWIDTH QUESTION MARK // * (asterisk) -> '*' // FULLWIDTH ASTERISK // // Additionally names can't end with a period (.) or space ( ). // List of replaced characters: // . (period) -> '.' // FULLWIDTH FULL STOP // (space) -> '␠' // SYMBOL FOR SPACE // // Also encode invalid UTF-8 bytes as Go can't convert them to UTF-16. 
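// Illustrative example (added): under this encoding a remote name such as
// `fi:le*.txt` would be stored on disk as `fi：le＊.txt` and decoded back
// to the original name when read.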
// // https://docs.microsoft.com/de-de/windows/desktop/FileIO/naming-a-file#naming-conventions const defaultEnc = (encoder.Base | encoder.EncodeWin | encoder.EncodeBackSlash | encoder.EncodeCtl | encoder.EncodeRightSpace | encoder.EncodeRightPeriod | encoder.EncodeInvalidUtf8) rclone-1.53.3/backend/local/fadvise_other.go000066400000000000000000000002321375552240400207230ustar00rootroot00000000000000//+build !linux package local import ( "io" "os" ) func newFadviseReadCloser(o *Object, f *os.File, offset, limit int64) io.ReadCloser { return f } rclone-1.53.3/backend/local/fadvise_unix.go000066400000000000000000000103731375552240400205740ustar00rootroot00000000000000//+build linux package local import ( "io" "os" "github.com/rclone/rclone/fs" "golang.org/x/sys/unix" ) // fadvise provides a means to automate freeing pages in the kernel page cache for // a given file descriptor as the file is sequentially processed (read or // written). // // When copying a file to a remote backend all the file content is read by // the kernel and put into the page cache to make future reads faster. // This causes memory pressure visible in both memory usage and CPU consumption // and can even cause OOM errors in applications consuming large amounts of memory. // // In the case of an upload to a remote backend, there is no benefit from caching. // // fadvise orchestrates calling POSIX_FADV_DONTNEED // // POSIX_FADV_DONTNEED attempts to free cached pages associated // with the specified region. This is useful, for example, while // streaming large files. A program may periodically request the // kernel to free cached data that has already been used, so that // more useful cached pages are not discarded instead. // // Requests to discard partial pages are ignored. It is // preferable to preserve needed data than discard unneeded data. // If the application requires that data be considered for // discarding, then offset and len must be page-aligned. // // The implementation may attempt to write back dirty pages in // the specified region, but this is not guaranteed. Any // unwritten dirty pages will not be freed. If the application // wishes to ensure that dirty pages will be released, it should // call fsync(2) or fdatasync(2) first. type fadvise struct { o *Object fd int lastPos int64 curPos int64 windowSize int64 freePagesCh chan offsetLength doneCh chan struct{} } type offsetLength struct { offset int64 length int64 } const ( defaultAllowPages = 32 defaultWorkerQueueSize = 64 ) func newFadvise(o *Object, fd int, offset int64) *fadvise { f := &fadvise{ o: o, fd: fd, lastPos: offset, curPos: offset, windowSize: int64(os.Getpagesize()) * defaultAllowPages, freePagesCh: make(chan offsetLength, defaultWorkerQueueSize), doneCh: make(chan struct{}), } go f.worker() return f } // sequential configures the readahead strategy in the Linux kernel. // // Under Linux, POSIX_FADV_NORMAL sets the readahead window to the // default size for the backing device; POSIX_FADV_SEQUENTIAL doubles // this size, and POSIX_FADV_RANDOM disables file readahead entirely.
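// Added illustration (assuming 4 KiB pages): defaultAllowPages = 32 gives a
// 128 KiB window, so the worker goroutine started in newFadvise issues
// FADV_DONTNEED roughly once for every 128 KiB read.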
func (f *fadvise) sequential(limit int64) bool { l := int64(0) if limit > 0 { l = limit } if err := unix.Fadvise(f.fd, f.curPos, l, unix.FADV_SEQUENTIAL); err != nil { fs.Debugf(f.o, "fadvise sequential failed on file descriptor %d: %s", f.fd, err) return false } return true } func (f *fadvise) next(n int) { f.curPos += int64(n) f.freePagesIfNeeded() } func (f *fadvise) freePagesIfNeeded() { if f.curPos >= f.lastPos+f.windowSize { f.freePages() } } func (f *fadvise) freePages() { f.freePagesCh <- offsetLength{f.lastPos, f.curPos - f.lastPos} f.lastPos = f.curPos } func (f *fadvise) worker() { for p := range f.freePagesCh { if err := unix.Fadvise(f.fd, p.offset, p.length, unix.FADV_DONTNEED); err != nil { fs.Debugf(f.o, "fadvise dontneed failed on file descriptor %d: %s", f.fd, err) } } close(f.doneCh) } func (f *fadvise) wait() { close(f.freePagesCh) <-f.doneCh } type fadviseReadCloser struct { *fadvise inner io.ReadCloser } // newFadviseReadCloser wraps os.File so that reading from that file would // remove already consumed pages from kernel page cache. // In addition to that it instructs kernel to double the readahead window to // make sequential reads faster. // See also fadvise. func newFadviseReadCloser(o *Object, f *os.File, offset, limit int64) io.ReadCloser { r := fadviseReadCloser{ fadvise: newFadvise(o, int(f.Fd()), offset), inner: f, } // If syscall failed it's likely that the subsequent syscalls to that // file descriptor would also fail. In that case return the provided os.File // pointer. if !r.sequential(limit) { r.wait() return f } return r } func (f fadviseReadCloser) Read(p []byte) (n int, err error) { n, err = f.inner.Read(p) f.next(n) return } func (f fadviseReadCloser) Close() error { f.freePages() f.wait() return f.inner.Close() } rclone-1.53.3/backend/local/lchtimes.go000066400000000000000000000007271375552240400177220ustar00rootroot00000000000000// +build windows plan9 js package local import ( "time" ) const haveLChtimes = false // lChtimes changes the access and modification times of the named // link, similar to the Unix utime() or utimes() functions. // // The underlying filesystem may truncate or round the values to a // less precise time unit. // If there is an error, it will be of type *PathError. func lChtimes(name string, atime time.Time, mtime time.Time) error { // Does nothing return nil } rclone-1.53.3/backend/local/lchtimes_unix.go000066400000000000000000000014131375552240400207560ustar00rootroot00000000000000// +build !windows,!plan9,!js package local import ( "os" "time" "golang.org/x/sys/unix" ) const haveLChtimes = true // lChtimes changes the access and modification times of the named // link, similar to the Unix utime() or utimes() functions. // // The underlying filesystem may truncate or round the values to a // less precise time unit. // If there is an error, it will be of type *PathError. 
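// Added note: this is the symlink-aware analogue of os.Chtimes; passing
// unix.AT_SYMLINK_NOFOLLOW below makes the timestamps apply to the link
// itself rather than to its target.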
func lChtimes(name string, atime time.Time, mtime time.Time) error { var utimes [2]unix.Timespec utimes[0] = unix.NsecToTimespec(atime.UnixNano()) utimes[1] = unix.NsecToTimespec(mtime.UnixNano()) if e := unix.UtimesNanoAt(unix.AT_FDCWD, name, utimes[0:], unix.AT_SYMLINK_NOFOLLOW); e != nil { return &os.PathError{Op: "lchtimes", Path: name, Err: e} } return nil } rclone-1.53.3/backend/local/local.go000066400000000000000000001067301375552240400172050ustar00rootroot00000000000000// Package local provides a filesystem interface package local import ( "bytes" "context" "fmt" "io" "io/ioutil" "os" "path" "path/filepath" "runtime" "strings" "sync" "time" "unicode/utf8" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/file" "github.com/rclone/rclone/lib/readers" ) // Constants const devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset const linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link const useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly // Register with Fs func init() { fsi := &fs.RegInfo{ Name: "local", Description: "Local Disk", NewFs: NewFs, CommandHelp: commandHelp, Options: []fs.Option{{ Name: "nounc", Help: "Disable UNC (long path names) conversion on Windows", Examples: []fs.OptionExample{{ Value: "true", Help: "Disables long file names", }}, }, { Name: "copy_links", Help: "Follow symlinks and copy the pointed to item.", Default: false, NoPrefix: true, ShortOpt: "L", Advanced: true, }, { Name: "links", Help: "Translate symlinks to/from regular files with a '" + linkSuffix + "' extension", Default: false, NoPrefix: true, ShortOpt: "l", Advanced: true, }, { Name: "skip_links", Help: `Don't warn about skipped symlinks. This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.`, Default: false, NoPrefix: true, Advanced: true, }, { Name: "no_unicode_normalization", Help: `Don't apply unicode normalization to paths and filenames (Deprecated) This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.`, Default: false, Advanced: true, }, { Name: "no_check_updated", Help: `Don't check to see if the files change during upload Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts "can't copy - source file is being updated" if the file changes during upload. However on some file systems this modification time check may fail (eg [Glusterfs #2206](https://github.com/rclone/rclone/issues/2206)) so this check can be disabled with this flag. If this flag is set, rclone will use its best efforts to transfer a file which is being updated. If the file is only having things appended to it (eg a log) then rclone will transfer the log file with the size it had the first time rclone saw it. If the file is being modified throughout (not just appended to) then the transfer may fail with a hash check failure. 
In detail, once the file has had stat() called on it for the first time we: - Only transfer the size that stat gave - Only checksum the size that stat gave - Don't update the stat info for the file `, Default: false, Advanced: true, }, { Name: "one_file_system", Help: "Don't cross filesystem boundaries (unix/macOS only).", Default: false, NoPrefix: true, ShortOpt: "x", Advanced: true, }, { Name: "case_sensitive", Help: `Force the filesystem to report itself as case sensitive. Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.`, Default: false, Advanced: true, }, { Name: "case_insensitive", Help: `Force the filesystem to report itself as case insensitive Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.`, Default: false, Advanced: true, }, { Name: "no_sparse", Help: `Disable sparse files for multi-thread downloads On Windows platforms rclone will make sparse files when doing multi-thread downloads. This avoids long pauses on large files where the OS zeros the file. However sparse files may be undesirable as they cause disk fragmentation and can be slow to work with.`, Default: false, Advanced: true, }, { Name: "no_set_modtime", Help: `Disable setting modtime Normally rclone updates modification time of files after they are done uploading. This can cause permissions issues on Linux platforms when the user rclone is running as does not own the file uploaded, such as when copying to a CIFS mount owned by another user. If this option is enabled, rclone will no longer update the modtime after copying a file.`, Default: false, Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, Default: defaultEnc, }}, } fs.Register(fsi) } // Options defines the configuration for this backend type Options struct { FollowSymlinks bool `config:"copy_links"` TranslateSymlinks bool `config:"links"` SkipSymlinks bool `config:"skip_links"` NoUTFNorm bool `config:"no_unicode_normalization"` NoCheckUpdated bool `config:"no_check_updated"` NoUNC bool `config:"nounc"` OneFileSystem bool `config:"one_file_system"` CaseSensitive bool `config:"case_sensitive"` CaseInsensitive bool `config:"case_insensitive"` NoSparse bool `config:"no_sparse"` NoSetModTime bool `config:"no_set_modtime"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a local filesystem rooted at root type Fs struct { name string // the name of the remote root string // The root directory (OS path) opt Options // parsed config options features *fs.Features // optional features dev uint64 // device number of root node precisionOk sync.Once // Whether we need to read the precision precision time.Duration // precision of local filesystem warnedMu sync.Mutex // used for locking access to 'warned'. 
warned map[string]struct{} // whether we have warned about this string // do os.Lstat or os.Stat lstat func(name string) (os.FileInfo, error) objectMetaMu sync.RWMutex // global lock for Object metadata } // Object represents a local filesystem object type Object struct { fs *Fs // The Fs this object is part of remote string // The remote path (encoded path) path string // The local path (OS path) // When using these items the fs.objectMetaMu must be held size int64 // file metadata - always present mode os.FileMode modTime time.Time hashes map[hash.Type]string // Hashes // these are read only and don't need the mutex held translatedLink bool // Is this object a translated link } // ------------------------------------------------------------ var errLinksAndCopyLinks = errors.New("can't use -l/--links with -L/--copy-links") // NewFs constructs an Fs from the path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } if opt.TranslateSymlinks && opt.FollowSymlinks { return nil, errLinksAndCopyLinks } if opt.NoUTFNorm { fs.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed") } f := &Fs{ name: name, opt: *opt, warned: make(map[string]struct{}), dev: devUnset, lstat: os.Lstat, } f.root = cleanRootPath(root, f.opt.NoUNC, f.opt.Enc) f.features = (&fs.Features{ CaseInsensitive: f.caseInsensitive(), CanHaveEmptyDirectories: true, IsLocal: true, SlowHash: true, }).Fill(f) if opt.FollowSymlinks { f.lstat = os.Stat } // Check to see if this points to a file fi, err := f.lstat(f.root) if err == nil { f.dev = readDevice(fi, f.opt.OneFileSystem) } if err == nil && f.isRegular(fi.Mode()) { // It is a file, so use the parent as the root f.root = filepath.Dir(f.root) // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // Determine whether a file is a 'regular' file, // Symlinks are regular files, only if the TranslateSymlink // option is in-effect func (f *Fs) isRegular(mode os.FileMode) bool { if !f.opt.TranslateSymlinks { return mode.IsRegular() } // fi.Mode().IsRegular() tests that all mode bits are zero // Since symlinks are accepted, test that all other bits are zero, // except the symlink bit return mode&os.ModeType&^os.ModeSymlink == 0 } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.opt.Enc.ToStandardPath(filepath.ToSlash(f.root)) } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("Local file system at %s", f.Root()) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // caseInsensitive returns whether the remote is case insensitive or not func (f *Fs) caseInsensitive() bool { if f.opt.CaseSensitive { return false } if f.opt.CaseInsensitive { return true } // FIXME not entirely accurate since you can have case // sensitive Fses on darwin and case insensitive Fses on linux. // Should probably check but that would involve creating a // file in the remote to be most accurate which probably isn't // desirable. 
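// For example (added note): with neither override set, this reports true on
// Windows and macOS and false everywhere else, regardless of how the
// underlying volume is actually formatted.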
return runtime.GOOS == "windows" || runtime.GOOS == "darwin" } // translateLink checks whether the remote is a translated link // and returns a new path, removing the suffix as needed, // It also returns whether this is a translated link at all // // for regular files, localPath is returned unchanged func translateLink(remote, localPath string) (newLocalPath string, isTranslatedLink bool) { isTranslatedLink = strings.HasSuffix(remote, linkSuffix) newLocalPath = strings.TrimSuffix(localPath, linkSuffix) return newLocalPath, isTranslatedLink } // newObject makes a half completed Object func (f *Fs) newObject(remote string) *Object { translatedLink := false localPath := f.localPath(remote) if f.opt.TranslateSymlinks { // Possibly receive a new name for localPath localPath, translatedLink = translateLink(remote, localPath) } return &Object{ fs: f, remote: remote, path: localPath, translatedLink: translatedLink, } } // Return an Object from a path // // May return nil if an error occurred func (f *Fs) newObjectWithInfo(remote string, info os.FileInfo) (fs.Object, error) { o := f.newObject(remote) if info != nil { o.setMetadata(info) } else { err := o.lstat() if err != nil { if os.IsNotExist(err) { return nil, fs.ErrorObjectNotFound } if os.IsPermission(err) { return nil, fs.ErrorPermissionDenied } return nil, err } // Handle the odd case, that a symlink was specified by name without the link suffix if o.fs.opt.TranslateSymlinks && o.mode&os.ModeSymlink != 0 && !o.translatedLink { return nil, fs.ErrorObjectNotFound } } if o.mode.IsDir() { return nil, errors.Wrapf(fs.ErrorNotAFile, "%q", remote) } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(remote, nil) } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { fsDirPath := f.localPath(dir) _, err = os.Stat(fsDirPath) if err != nil { return nil, fs.ErrorDirNotFound } fd, err := os.Open(fsDirPath) if err != nil { isPerm := os.IsPermission(err) err = errors.Wrapf(err, "failed to open directory %q", dir) fs.Errorf(dir, "%v", err) if isPerm { _ = accounting.Stats(ctx).Error(fserrors.NoRetryError(err)) err = nil // ignore error but fail sync } return nil, err } defer func() { cerr := fd.Close() if cerr != nil && err == nil { err = errors.Wrapf(cerr, "failed to close directory %q:", dir) } }() for { var fis []os.FileInfo if useReadDir { // Windows and Plan9 read the directory entries with the stat information in which // shouldn't fail because of unreadable entries. fis, err = fd.Readdir(1024) if err == io.EOF && len(fis) == 0 { break } } else { // For other OSes we read the names only (which shouldn't fail) then stat the // individual ourselves so we can log errors but not fail the directory read. 
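// Added note: reading names in batches of 1024 (here and in the Readdir
// branch above) keeps memory bounded when listing very large directories.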
var names []string names, err = fd.Readdirnames(1024) if err == io.EOF && len(names) == 0 { break } if err == nil { for _, name := range names { namepath := filepath.Join(fsDirPath, name) fi, fierr := os.Lstat(namepath) if fierr != nil { err = errors.Wrapf(err, "failed to read directory %q", namepath) fs.Errorf(dir, "%v", fierr) _ = accounting.Stats(ctx).Error(fserrors.NoRetryError(fierr)) // fail the sync continue } fis = append(fis, fi) } } } if err != nil { return nil, errors.Wrap(err, "failed to read directory entry") } for _, fi := range fis { name := fi.Name() mode := fi.Mode() newRemote := f.cleanRemote(dir, name) // Follow symlinks if required if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 { localPath := filepath.Join(fsDirPath, name) fi, err = os.Stat(localPath) if os.IsNotExist(err) { // Skip bad symlinks err = fserrors.NoRetryError(errors.Wrap(err, "symlink")) fs.Errorf(newRemote, "Listing error: %v", err) err = accounting.Stats(ctx).Error(err) continue } if err != nil { return nil, err } mode = fi.Mode() } if fi.IsDir() { // Ignore directories which are symlinks. These are junction points under windows which // are kind of a souped up symlink. Unix doesn't have directories which are symlinks. if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) { d := fs.NewDir(newRemote, fi.ModTime()) entries = append(entries, d) } } else { // Check whether this link should be translated if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 { newRemote += linkSuffix } fso, err := f.newObjectWithInfo(newRemote, fi) if err != nil { return nil, err } if fso.Storable() { entries = append(entries, fso) } } } } return entries, nil } func (f *Fs) cleanRemote(dir, filename string) (remote string) { remote = path.Join(dir, f.opt.Enc.ToStandardName(filename)) if !utf8.ValidString(filename) { f.warnedMu.Lock() if _, ok := f.warned[remote]; !ok { fs.Logf(f, "Replacing invalid UTF-8 characters in %q", remote) f.warned[remote] = struct{}{} } f.warnedMu.Unlock() } return } func (f *Fs) localPath(name string) string { return filepath.Join(f.root, filepath.FromSlash(f.opt.Enc.FromStandardPath(name))) } // Put the Object to the local filesystem func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { // Temporary Object under construction - info filled in by Update() o := f.newObject(src.Remote()) err := o.Update(ctx, in, src, options...) if err != nil { return nil, err } return o, nil } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) 
} // Mkdir creates the directory if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { // FIXME: https://github.com/syncthing/syncthing/blob/master/lib/osutil/mkdirall_windows.go localPath := f.localPath(dir) err := os.MkdirAll(localPath, 0777) if err != nil { return err } if dir == "" { fi, err := f.lstat(localPath) if err != nil { return err } f.dev = readDevice(fi, f.opt.OneFileSystem) } return nil } // Rmdir removes the directory // // If it isn't empty it will return an error func (f *Fs) Rmdir(ctx context.Context, dir string) error { return os.Remove(f.localPath(dir)) } // Precision of the file system func (f *Fs) Precision() (precision time.Duration) { if f.opt.NoSetModTime { return fs.ModTimeNotSupported } f.precisionOk.Do(func() { f.precision = f.readPrecision() }) return f.precision } // Read the precision func (f *Fs) readPrecision() (precision time.Duration) { // Default precision of 1s precision = time.Second // Create temporary file and test it fd, err := ioutil.TempFile("", "rclone") if err != nil { // If failed return 1s // fmt.Println("Failed to create temp file", err) return time.Second } path := fd.Name() // fmt.Println("Created temp file", path) err = fd.Close() if err != nil { return time.Second } // Delete it on return defer func() { // fmt.Println("Remove temp file") _ = os.Remove(path) // ignore error }() // Find the minimum duration we can detect for duration := time.Duration(1); duration < time.Second; duration *= 10 { // Current time with delta t := time.Unix(time.Now().Unix(), int64(duration)) err := os.Chtimes(path, t, t) if err != nil { // fmt.Println("Failed to Chtimes", err) break } // Read the actual time back fi, err := os.Stat(path) if err != nil { // fmt.Println("Failed to Stat", err) break } // If it matches - have found the precision // fmt.Println("compare", fi.ModTime(ctx), t) if fi.ModTime().Equal(t) { // fmt.Println("Precision detected as", duration) return duration } } return } // Purge deletes all the files in the directory // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { dir = f.localPath(dir) fi, err := f.lstat(dir) if err != nil { // already purged if os.IsNotExist(err) { return fs.ErrorDirNotFound } return err } if !fi.Mode().IsDir() { return errors.Errorf("can't purge non directory: %q", dir) } return os.RemoveAll(dir) } // Move src to this remote using server side move operations. 
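// Added note (hedged): callers in rclone's sync layer are expected to fall
// back to a copy followed by a delete when fs.ErrorCantMove is returned.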
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } // Temporary Object under construction dstObj := f.newObject(remote) dstObj.fs.objectMetaMu.RLock() dstObjMode := dstObj.mode dstObj.fs.objectMetaMu.RUnlock() // Check it is a file if it exists err := dstObj.lstat() if os.IsNotExist(err) { // OK } else if err != nil { return nil, err } else if !dstObj.fs.isRegular(dstObjMode) { // It isn't a file return nil, errors.New("can't move file onto non-file") } // Create destination err = dstObj.mkdirAll() if err != nil { return nil, err } // Do the move err = os.Rename(srcObj.path, dstObj.path) if os.IsNotExist(err) { // race condition, source was deleted in the meantime return nil, err } else if os.IsPermission(err) { // not enough rights to write to dst return nil, err } else if err != nil { // not quite clear, but probably trying to move a file across file system // boundaries. Copying might still work. fs.Debugf(src, "Can't move: %v: trying copy", err) return nil, fs.ErrorCantMove } // Update the info err = dstObj.lstat() if err != nil { return nil, err } return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcPath := srcFs.localPath(srcRemote) dstPath := f.localPath(dstRemote) // Check if destination exists _, err := os.Lstat(dstPath) if !os.IsNotExist(err) { return fs.ErrorDirExists } // Create parent of destination dstParentPath := filepath.Dir(dstPath) err = os.MkdirAll(dstParentPath, 0777) if err != nil { return err } // Do the move err = os.Rename(srcPath, dstPath) if os.IsNotExist(err) { // race condition, source was deleted in the meantime return err } else if os.IsPermission(err) { // not enough rights to write to dst return err } else if err != nil { // not quite clear, but probably trying to move directory across file system // boundaries. Copying might still work. fs.Debugf(src, "Can't move dir: %v: trying copy", err) return fs.ErrorCantDirMove } return nil } // Hashes returns the supported hash sets. 
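// Added note: for the local backend this is hash.Supported(), i.e. every
// hash type rclone knows about, since file contents can always be read
// directly from disk.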
func (f *Fs) Hashes() hash.Set { return hash.Supported() } var commandHelp = []fs.CommandHelp{ { Name: "noop", Short: "A null operation for testing backend commands", Long: `This is a test command which has some options you can try to change the output.`, Opts: map[string]string{ "echo": "echo the input arguments", "error": "return an error based on option value", }, }, } // Command the backend to run a named command // // The command run is name // args may be used to read arguments from // opts may be used to read optional arguments from // // The result should be capable of being JSON encoded // If it is a string or a []string it will be shown to the user // otherwise it will be JSON encoded and shown to the user like that func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (interface{}, error) { switch name { case "noop": if txt, ok := opt["error"]; ok { if txt == "" { txt = "unspecified error" } return nil, errors.New(txt) } if _, ok := opt["echo"]; ok { out := map[string]interface{}{} out["name"] = name out["arg"] = arg out["opt"] = opt return out, nil } return nil, nil default: return nil, fs.ErrorCommandNotFound } } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the requested hash of a file as a lowercase hex string func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) { // Check that the underlying file hasn't changed o.fs.objectMetaMu.RLock() oldtime := o.modTime oldsize := o.size o.fs.objectMetaMu.RUnlock() err := o.lstat() var changed bool if err != nil { if os.IsNotExist(errors.Cause(err)) { // If file not found then we assume any accumulated // hashes are OK - this will error on Open changed = true } else { return "", errors.Wrap(err, "hash: failed to stat") } } else { o.fs.objectMetaMu.RLock() changed = !o.modTime.Equal(oldtime) || oldsize != o.size o.fs.objectMetaMu.RUnlock() } o.fs.objectMetaMu.RLock() hashValue, hashFound := o.hashes[r] o.fs.objectMetaMu.RUnlock() if changed || !hashFound { var in io.ReadCloser if !o.translatedLink { var fd *os.File fd, err = file.Open(o.path) if fd != nil { in = newFadviseReadCloser(o, fd, 0, 0) } } else { in, err = o.openTranslatedLink(0, -1) } // If not checking for updates, only read size given if o.fs.opt.NoCheckUpdated { in = readers.NewLimitedReadCloser(in, o.size) } if err != nil { return "", errors.Wrap(err, "hash: failed to open") } var hashes map[hash.Type]string hashes, err = hash.StreamTypes(in, hash.NewHashSet(r)) closeErr := in.Close() if err != nil { return "", errors.Wrap(err, "hash: failed to read") } if closeErr != nil { return "", errors.Wrap(closeErr, "hash: failed to close") } hashValue = hashes[r] o.fs.objectMetaMu.Lock() if o.hashes == nil { o.hashes = hashes } else { o.hashes[r] = hashValue } o.fs.objectMetaMu.Unlock() } return hashValue, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { o.fs.objectMetaMu.RLock() defer o.fs.objectMetaMu.RUnlock() return o.size } // ModTime returns the modification time of the object func (o *Object) ModTime(ctx context.Context) time.Time { o.fs.objectMetaMu.RLock() defer o.fs.objectMetaMu.RUnlock() return o.modTime } // SetModTime sets the modification time of the local fs object func (o 
*Object) SetModTime(ctx context.Context, modTime time.Time) error { if o.fs.opt.NoSetModTime { return nil } var err error if o.translatedLink { err = lChtimes(o.path, modTime, modTime) } else { err = os.Chtimes(o.path, modTime, modTime) } if err != nil { return err } // Re-read metadata return o.lstat() } // Storable returns a boolean showing if this object is storable func (o *Object) Storable() bool { o.fs.objectMetaMu.RLock() mode := o.mode o.fs.objectMetaMu.RUnlock() if mode&os.ModeSymlink != 0 && !o.fs.opt.TranslateSymlinks { if !o.fs.opt.SkipSymlinks { fs.Logf(o, "Can't follow symlink without -L/--copy-links") } return false } else if mode&(os.ModeNamedPipe|os.ModeSocket|os.ModeDevice) != 0 { fs.Logf(o, "Can't transfer non file/directory") return false } else if mode&os.ModeDir != 0 { // fs.Debugf(o, "Skipping directory") return false } return true } // localOpenFile wraps an io.ReadCloser and updates the hashes of the // object that is read type localOpenFile struct { o *Object // object that is open in io.ReadCloser // handle we are wrapping hash *hash.MultiHasher // currently accumulating hashes fd *os.File // file object reference } // Read bytes from the object - see io.Reader func (file *localOpenFile) Read(p []byte) (n int, err error) { if !file.o.fs.opt.NoCheckUpdated { // Check if file has the same size and modTime fi, err := file.fd.Stat() if err != nil { return 0, errors.Wrap(err, "can't read status of source file while transferring") } file.o.fs.objectMetaMu.RLock() oldtime := file.o.modTime oldsize := file.o.size file.o.fs.objectMetaMu.RUnlock() if oldsize != fi.Size() { return 0, fserrors.NoLowLevelRetryError(errors.Errorf("can't copy - source file is being updated (size changed from %d to %d)", oldsize, fi.Size())) } if !oldtime.Equal(fi.ModTime()) { return 0, fserrors.NoLowLevelRetryError(errors.Errorf("can't copy - source file is being updated (mod time changed from %v to %v)", oldtime, fi.ModTime())) } } n, err = file.in.Read(p) if n > 0 { // Hash routines never return an error _, _ = file.hash.Write(p[:n]) } return } // Close the object and update the hashes func (file *localOpenFile) Close() (err error) { err = file.in.Close() if err == nil { if file.hash.Size() == file.o.Size() { file.o.fs.objectMetaMu.Lock() file.o.hashes = file.hash.Sums() file.o.fs.objectMetaMu.Unlock() } } return err } // Returns a ReadCloser() object that contains the contents of a symbolic link func (o *Object) openTranslatedLink(offset, limit int64) (lrc io.ReadCloser, err error) { // Read the link and return its destination as the contents of the object linkdst, err := os.Readlink(o.path) if err != nil { return nil, err } return readers.NewLimitedReadCloser(ioutil.NopCloser(strings.NewReader(linkdst[offset:])), limit), nil } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { var offset, limit int64 = 0, -1 var hasher *hash.MultiHasher for _, option := range options { switch x := option.(type) { case *fs.SeekOption: offset = x.Offset case *fs.RangeOption: offset, limit = x.Decode(o.Size()) case *fs.HashesOption: if x.Hashes.Count() > 0 { hasher, err = hash.NewMultiHasherTypes(x.Hashes) if err != nil { return nil, err } } default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } // If not checking updated then limit to current size. This means if // file is being extended, readers will read o.Size() bytes rather // than the new size, making for a consistent upload.
if limit < 0 && o.fs.opt.NoCheckUpdated { limit = o.size } // Handle a translated link if o.translatedLink { return o.openTranslatedLink(offset, limit) } fd, err := file.Open(o.path) if err != nil { return } wrappedFd := readers.NewLimitedReadCloser(newFadviseReadCloser(o, fd, offset, limit), limit) if offset != 0 { // seek the object _, err = fd.Seek(offset, io.SeekStart) // don't attempt to make checksums return wrappedFd, err } if hasher == nil { // no need to wrap since we don't need checksums return wrappedFd, nil } // Update the hashes as we go along in = &localOpenFile{ o: o, in: wrappedFd, hash: hasher, fd: fd, } return in, nil } // mkdirAll makes all the directories needed to store the object func (o *Object) mkdirAll() error { dir := filepath.Dir(o.path) return os.MkdirAll(dir, 0777) } type nopWriterCloser struct { *bytes.Buffer } func (nwc nopWriterCloser) Close() error { // noop return nil } // Update the object from in with modTime and size func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { var out io.WriteCloser var hasher *hash.MultiHasher for _, option := range options { switch x := option.(type) { case *fs.HashesOption: if x.Hashes.Count() > 0 { hasher, err = hash.NewMultiHasherTypes(x.Hashes) if err != nil { return err } } } } err = o.mkdirAll() if err != nil { return err } var symlinkData bytes.Buffer // If the object is a regular file, create it. // If it is a translated link, just read in the contents, and // then create a symlink if !o.translatedLink { f, err := file.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666) if err != nil { if runtime.GOOS == "windows" && os.IsPermission(err) { // If permission denied on Windows might be trying to update a // hidden file, in which case try opening without CREATE // See: https://stackoverflow.com/questions/13215716/ioerror-errno-13-permission-denied-when-trying-to-open-hidden-file-in-w-mod f, err = file.OpenFile(o.path, os.O_WRONLY|os.O_TRUNC, 0666) if err != nil { return err } } else { return err } } // Pre-allocate the file for performance reasons err = file.PreAllocate(src.Size(), f) if err != nil { fs.Debugf(o, "Failed to pre-allocate: %v", err) } out = f } else { out = nopWriterCloser{&symlinkData} } // Calculate the hash of the object we are reading as we go along if hasher != nil { in = io.TeeReader(in, hasher) } _, err = io.Copy(out, in) closeErr := out.Close() if err == nil { err = closeErr } if o.translatedLink { if err == nil { // Remove any current symlink or file, if one exists if _, err := os.Lstat(o.path); err == nil { if removeErr := os.Remove(o.path); removeErr != nil { fs.Errorf(o, "Failed to remove previous file: %v", removeErr) return removeErr } } // Use the contents for the copied object to create a symlink err = os.Symlink(symlinkData.String(), o.path) } // only continue if symlink creation succeeded if err != nil { return err } } if err != nil { fs.Logf(o, "Removing partially written file on error: %v", err) if removeErr := os.Remove(o.path); removeErr != nil { fs.Errorf(o, "Failed to remove partially written file: %v", removeErr) } return err } // All successful so update the hashes if hasher != nil { o.fs.objectMetaMu.Lock() o.hashes = hasher.Sums() o.fs.objectMetaMu.Unlock() } // Set the mtime err = o.SetModTime(ctx, src.ModTime(ctx)) if err != nil { return err } // ReRead info now that we have finished return o.lstat() } var sparseWarning sync.Once // OpenWriterAt opens with a handle for random access writes // // Pass in the 
remote desired and the size if known. // // It truncates any existing object func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) { // Temporary Object under construction o := f.newObject(remote) err := o.mkdirAll() if err != nil { return nil, err } if o.translatedLink { return nil, errors.New("can't open a symlink for random writing") } out, err := file.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666) if err != nil { return nil, err } // Pre-allocate the file for performance reasons err = file.PreAllocate(size, out) if err != nil { fs.Debugf(o, "Failed to pre-allocate: %v", err) } if !f.opt.NoSparse && file.SetSparseImplemented { sparseWarning.Do(func() { fs.Infof(nil, "Writing sparse files: use --local-no-sparse or --multi-thread-streams 0 to disable") }) // Set the file to be a sparse file (important on Windows) err = file.SetSparse(out) if err != nil { fs.Errorf(o, "Failed to set sparse: %v", err) } } return out, nil } // setMetadata sets the file info from the os.FileInfo passed in func (o *Object) setMetadata(info os.FileInfo) { // if not checking updated then don't update the stat if o.fs.opt.NoCheckUpdated && !o.modTime.IsZero() { return } o.fs.objectMetaMu.Lock() o.size = info.Size() o.modTime = info.ModTime() o.mode = info.Mode() o.fs.objectMetaMu.Unlock() // On Windows links read as 0 size so set the correct size here if runtime.GOOS == "windows" && o.translatedLink { linkdst, err := os.Readlink(o.path) if err != nil { fs.Errorf(o, "Failed to read link size: %v", err) } else { o.size = int64(len(linkdst)) } } } // Stat an Object into info func (o *Object) lstat() error { info, err := o.fs.lstat(o.path) if err == nil { o.setMetadata(info) } return err } // Remove an object func (o *Object) Remove(ctx context.Context) error { return remove(o.path) } func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string { if runtime.GOOS == "windows" { if !filepath.IsAbs(s) && !strings.HasPrefix(s, "\\") { s2, err := filepath.Abs(s) if err == nil { s = s2 } } s = filepath.ToSlash(s) vol := filepath.VolumeName(s) s = vol + enc.FromStandardPath(s[len(vol):]) s = filepath.FromSlash(s) if !noUNC { // Convert to UNC s = file.UNCPath(s) } return s } if !filepath.IsAbs(s) { s2, err := filepath.Abs(s) if err == nil { s = s2 } } s = enc.FromStandardPath(s) return s } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.Purger = &Fs{} _ fs.PutStreamer = &Fs{} _ fs.Mover = &Fs{} _ fs.DirMover = &Fs{} _ fs.Commander = &Fs{} _ fs.OpenWriterAter = &Fs{} _ fs.Object = &Object{} ) rclone-1.53.3/backend/local/local_internal_test.go000066400000000000000000000110511375552240400221270ustar00rootroot00000000000000package local import ( "context" "io/ioutil" "os" "path" "path/filepath" "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/lib/file" "github.com/rclone/rclone/lib/readers" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // TestMain drives the tests func TestMain(m *testing.M) { fstest.TestMain(m) } // Test copy with source file that's updating func TestUpdatingCheck(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() filePath := "sub dir/local test" r.WriteFile(filePath, "content", time.Now()) fd, err := file.Open(path.Join(r.LocalName, filePath)) if err != nil { t.Fatalf("failed opening file %q: %v", filePath, err) } defer func() { require.NoError(t, 
fd.Close()) }() fi, err := fd.Stat() require.NoError(t, err) o := &Object{size: fi.Size(), modTime: fi.ModTime(), fs: &Fs{}} wrappedFd := readers.NewLimitedReadCloser(fd, -1) hash, err := hash.NewMultiHasherTypes(hash.Supported()) require.NoError(t, err) in := localOpenFile{ o: o, in: wrappedFd, hash: hash, fd: fd, } buf := make([]byte, 1) _, err = in.Read(buf) require.NoError(t, err) r.WriteFile(filePath, "content updated", time.Now()) _, err = in.Read(buf) require.Errorf(t, err, "can't copy - source file is being updated") // turn the checking off and try again in.o.fs.opt.NoCheckUpdated = true r.WriteFile(filePath, "content updated", time.Now()) _, err = in.Read(buf) require.NoError(t, err) } func TestSymlink(t *testing.T) { ctx := context.Background() r := fstest.NewRun(t) defer r.Finalise() f := r.Flocal.(*Fs) dir := f.root // Write a file modTime1 := fstest.Time("2001-02-03T04:05:10.123123123Z") file1 := r.WriteFile("file.txt", "hello", modTime1) // Write a symlink modTime2 := fstest.Time("2002-02-03T04:05:10.123123123Z") symlinkPath := filepath.Join(dir, "symlink.txt") require.NoError(t, os.Symlink("file.txt", symlinkPath)) require.NoError(t, lChtimes(symlinkPath, modTime2, modTime2)) // Object viewed as symlink file2 := fstest.NewItem("symlink.txt"+linkSuffix, "file.txt", modTime2) // Object viewed as destination file2d := fstest.NewItem("symlink.txt", "hello", modTime1) // Check with no symlink flags fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote) // Set fs into "-L" mode f.opt.FollowSymlinks = true f.opt.TranslateSymlinks = false f.lstat = os.Stat fstest.CheckItems(t, r.Flocal, file1, file2d) fstest.CheckItems(t, r.Fremote) // Set fs into "-l" mode f.opt.FollowSymlinks = false f.opt.TranslateSymlinks = true f.lstat = os.Lstat fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1, file2}, nil, fs.ModTimeNotSupported) if haveLChtimes { fstest.CheckItems(t, r.Flocal, file1, file2) } // Create a symlink modTime3 := fstest.Time("2002-03-03T04:05:10.123123123Z") file3 := r.WriteObjectTo(ctx, r.Flocal, "symlink2.txt"+linkSuffix, "file.txt", modTime3, false) fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1, file2, file3}, nil, fs.ModTimeNotSupported) if haveLChtimes { fstest.CheckItems(t, r.Flocal, file1, file2, file3) } // Check it got the correct contents symlinkPath = filepath.Join(dir, "symlink2.txt") fi, err := os.Lstat(symlinkPath) require.NoError(t, err) assert.False(t, fi.Mode().IsRegular()) linkText, err := os.Readlink(symlinkPath) require.NoError(t, err) assert.Equal(t, "file.txt", linkText) // Check that NewObject gets the correct object o, err := r.Flocal.NewObject(ctx, "symlink2.txt"+linkSuffix) require.NoError(t, err) assert.Equal(t, "symlink2.txt"+linkSuffix, o.Remote()) assert.Equal(t, int64(8), o.Size()) // Check that NewObject doesn't see the non suffixed version _, err = r.Flocal.NewObject(ctx, "symlink2.txt") require.Equal(t, fs.ErrorObjectNotFound, err) // Check reading the object in, err := o.Open(ctx) require.NoError(t, err) contents, err := ioutil.ReadAll(in) require.NoError(t, err) require.Equal(t, "file.txt", string(contents)) require.NoError(t, in.Close()) // Check reading the object with range in, err = o.Open(ctx, &fs.RangeOption{Start: 2, End: 5}) require.NoError(t, err) contents, err = ioutil.ReadAll(in) require.NoError(t, err) require.Equal(t, "file.txt"[2:5+1], string(contents)) require.NoError(t, in.Close()) } func TestSymlinkError(t *testing.T) { m := configmap.Simple{ "links": "true", "copy_links": 
"true", } _, err := NewFs("local", "/", m) assert.Equal(t, errLinksAndCopyLinks, err) } rclone-1.53.3/backend/local/local_test.go000066400000000000000000000005401375552240400202340ustar00rootroot00000000000000// Test Local filesystem interface package local_test import ( "testing" "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "", NilObject: (*local.Object)(nil), }) } rclone-1.53.3/backend/local/read_device_other.go000066400000000000000000000004621375552240400215410ustar00rootroot00000000000000// Device reading functions // +build !darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris package local import "os" // readDevice turns a valid os.FileInfo into a device number, // returning devUnset if it fails. func readDevice(fi os.FileInfo, oneFileSystem bool) uint64 { return devUnset } rclone-1.53.3/backend/local/read_device_unix.go000066400000000000000000000011121375552240400213740ustar00rootroot00000000000000// Device reading functions // +build darwin dragonfly freebsd linux netbsd openbsd solaris package local import ( "os" "syscall" "github.com/rclone/rclone/fs" ) // readDevice turns a valid os.FileInfo into a device number, // returning devUnset if it fails. func readDevice(fi os.FileInfo, oneFileSystem bool) uint64 { if !oneFileSystem { return devUnset } statT, ok := fi.Sys().(*syscall.Stat_t) if !ok { fs.Debugf(fi.Name(), "Type assertion fi.Sys().(*syscall.Stat_t) failed from: %#v", fi.Sys()) return devUnset } return uint64(statT.Dev) // nolint: unconvert } rclone-1.53.3/backend/local/remove_other.go000066400000000000000000000002331375552240400206000ustar00rootroot00000000000000//+build !windows package local import "os" // Removes name, retrying on a sharing violation func remove(name string) error { return os.Remove(name) } rclone-1.53.3/backend/local/remove_test.go000066400000000000000000000016241375552240400204430ustar00rootroot00000000000000package local import ( "io/ioutil" "os" "sync" "testing" "time" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // Check we can remove an open file func TestRemove(t *testing.T) { fd, err := ioutil.TempFile("", "rclone-remove-test") require.NoError(t, err) name := fd.Name() defer func() { _ = os.Remove(name) }() exists := func() bool { _, err := os.Stat(name) if err == nil { return true } else if os.IsNotExist(err) { return false } require.NoError(t, err) return false } assert.True(t, exists()) // close the file in the background var wg sync.WaitGroup wg.Add(1) go func() { defer wg.Done() time.Sleep(250 * time.Millisecond) require.NoError(t, fd.Close()) }() // delete the open file err = remove(name) require.NoError(t, err) // check it no longer exists assert.False(t, exists()) // wait for background close wg.Wait() } rclone-1.53.3/backend/local/remove_windows.go000066400000000000000000000012441375552240400211540ustar00rootroot00000000000000//+build windows package local import ( "os" "syscall" "time" "github.com/rclone/rclone/fs" ) const ( ERROR_SHARING_VIOLATION syscall.Errno = 32 ) // Removes name, retrying on a sharing violation func remove(name string) (err error) { const maxTries = 10 var sleepTime = 1 * time.Millisecond for i := 0; i < maxTries; i++ { err = os.Remove(name) if err == nil { break } pathErr, ok := err.(*os.PathError) if !ok { break } if pathErr.Err != ERROR_SHARING_VIOLATION { break } fs.Logf(name, "Remove detected 
sharing violation - retry %d/%d sleeping %v", i+1, maxTries, sleepTime) time.Sleep(sleepTime) sleepTime <<= 1 } return err } rclone-1.53.3/backend/local/tests_test.go000066400000000000000000000014601375552240400203060ustar00rootroot00000000000000package local import ( "runtime" "testing" ) // Test Windows character replacements var testsWindows = [][2]string{ {`c:\temp`, `c:\temp`}, {`\\?\UNC\theserver\dir\file.txt`, `\\?\UNC\theserver\dir\file.txt`}, {`//?/UNC/theserver/dir\file.txt`, `\\?\UNC\theserver\dir\file.txt`}, {`c:/temp`, `c:\temp`}, {`C:/temp/file.txt`, `C:\temp\file.txt`}, {`c:\!\"#¤%&/()=;:*^?+-`, `c:\!\"#¤%&\()=;:*^?+-`}, {`c:\<>"|?*:&\<>"|?*:&\<>"|?*:&`, `c:\<>"|?*:&\<>"|?*:&\<>"|?*:&`}, } func TestCleanWindows(t *testing.T) { if runtime.GOOS != "windows" { t.Skipf("windows only") } for _, test := range testsWindows { got := cleanRootPath(test[0], true, defaultEnc) expect := test[1] if got != expect { t.Fatalf("got %q, expected %q", got, expect) } } } rclone-1.53.3/backend/mailru/000077500000000000000000000000001375552240400157545ustar00rootroot00000000000000rclone-1.53.3/backend/mailru/api/000077500000000000000000000000001375552240400165255ustar00rootroot00000000000000rclone-1.53.3/backend/mailru/api/bin.go000066400000000000000000000047701375552240400176340ustar00rootroot00000000000000package api // BIN protocol constants const ( BinContentType = "application/x-www-form-urlencoded" TreeIDLength = 12 DunnoNodeIDLength = 16 ) // Operations in binary protocol const ( OperationAddFile = 103 // 0x67 OperationRename = 105 // 0x69 OperationCreateFolder = 106 // 0x6A OperationFolderList = 117 // 0x75 OperationSharedFoldersList = 121 // 0x79 // TODO investigate opcodes below Operation154MaybeItemInfo = 154 // 0x9A Operation102MaybeAbout = 102 // 0x66 Operation104MaybeDelete = 104 // 0x68 ) // CreateDir protocol constants const ( MkdirResultOK = 0 MkdirResultSourceNotExists = 1 MkdirResultAlreadyExists = 4 MkdirResultExistsDifferentCase = 9 MkdirResultInvalidName = 10 MkdirResultFailed254 = 254 ) // Move result codes const ( MoveResultOK = 0 MoveResultSourceNotExists = 1 MoveResultFailed002 = 2 MoveResultAlreadyExists = 4 MoveResultFailed005 = 5 MoveResultFailed254 = 254 ) // AddFile result codes const ( AddResultOK = 0 AddResultError01 = 1 AddResultDunno04 = 4 AddResultWrongPath = 5 AddResultNoFreeSpace = 7 AddResultDunno09 = 9 AddResultInvalidName = 10 AddResultNotModified = 12 AddResultFailedA = 253 AddResultFailedB = 254 ) // List request options const ( ListOptTotalSpace = 1 ListOptDelete = 2 ListOptFingerprint = 4 ListOptUnknown8 = 8 ListOptUnknown16 = 16 ListOptFolderSize = 32 ListOptUsedSpace = 64 ListOptUnknown128 = 128 ListOptUnknown256 = 256 ) // ListOptDefaults ... 
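// Illustrative sketch (not part of the rclone source): remove() in
// remove_windows.go above retries a sharing violation with a doubling
// sleep (1ms, 2ms, 4ms, ...). The same bounded exponential backoff in
// isolation, with a generic retryable predicate; all names here are
// hypothetical.
package main

import (
	"errors"
	"fmt"
	"time"
)

// withBackoff calls op up to maxTries times, sleeping between attempts
// and doubling the sleep each time, as long as retryable reports that
// the error is transient.
func withBackoff(maxTries int, op func() error, retryable func(error) bool) error {
	sleep := 1 * time.Millisecond
	var err error
	for i := 0; i < maxTries; i++ {
		if err = op(); err == nil || !retryable(err) {
			return err
		}
		time.Sleep(sleep)
		sleep <<= 1 // 1ms, 2ms, 4ms, ... like remove_windows.go
	}
	return err
}

func main() {
	transient := errors.New("sharing violation")
	tries := 0
	err := withBackoff(10, func() error {
		tries++
		if tries < 3 {
			return transient // fail the first two attempts
		}
		return nil
	}, func(err error) bool { return err == transient })
	fmt.Println(tries, err) // 3 <nil>
}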
const ListOptDefaults = ListOptUnknown128 | ListOptUnknown256 | ListOptFolderSize | ListOptTotalSpace | ListOptUsedSpace // List parse flags const ( ListParseDone = 0 ListParseReadItem = 1 ListParsePin = 2 ListParsePinUpper = 3 ListParseUnknown15 = 15 ) // List operation results const ( ListResultOK = 0 ListResultNotExists = 1 ListResultDunno02 = 2 ListResultDunno03 = 3 ListResultAlreadyExists04 = 4 ListResultDunno05 = 5 ListResultDunno06 = 6 ListResultDunno07 = 7 ListResultDunno08 = 8 ListResultAlreadyExists09 = 9 ListResultDunno10 = 10 ListResultDunno11 = 11 ListResultDunno12 = 12 ListResultFailedB = 253 ListResultFailedA = 254 ) // Directory item types const ( ListItemMountPoint = 0 ListItemFile = 1 ListItemFolder = 2 ListItemSharedFolder = 3 ) rclone-1.53.3/backend/mailru/api/helpers.go000066400000000000000000000116051375552240400205210ustar00rootroot00000000000000package api // BIN protocol helpers import ( "bufio" "bytes" "encoding/binary" "io" "log" "time" "github.com/pkg/errors" "github.com/rclone/rclone/lib/readers" ) // protocol errors var ( ErrorPrematureEOF = errors.New("Premature EOF") ErrorInvalidLength = errors.New("Invalid length") ErrorZeroTerminate = errors.New("String must end with zero") ) // BinWriter is a binary protocol writer type BinWriter struct { b *bytes.Buffer // growing byte buffer a []byte // temporary buffer for next varint } // NewBinWriter creates a binary protocol helper func NewBinWriter() *BinWriter { return &BinWriter{ b: new(bytes.Buffer), a: make([]byte, binary.MaxVarintLen64), } } // Bytes returns binary data func (w *BinWriter) Bytes() []byte { return w.b.Bytes() } // Reader returns io.Reader with binary data func (w *BinWriter) Reader() io.Reader { return bytes.NewReader(w.b.Bytes()) } // WritePu16 writes a short as unsigned varint func (w *BinWriter) WritePu16(val int) { if val < 0 || val > 65535 { log.Fatalf("Invalid UInt16 %v", val) } w.WritePu64(int64(val)) } // WritePu32 writes a signed long as unsigned varint func (w *BinWriter) WritePu32(val int64) { if val < 0 || val > 4294967295 { log.Fatalf("Invalid UInt32 %v", val) } w.WritePu64(val) } // WritePu64 writes an unsigned (actually, signed) long as unsigned varint func (w *BinWriter) WritePu64(val int64) { if val < 0 { log.Fatalf("Invalid UInt64 %v", val) } w.b.Write(w.a[:binary.PutUvarint(w.a, uint64(val))]) } // WriteString writes a zero-terminated string func (w *BinWriter) WriteString(str string) { buf := []byte(str) w.WritePu64(int64(len(buf) + 1)) w.b.Write(buf) w.b.WriteByte(0) } // Write writes a byte buffer func (w *BinWriter) Write(buf []byte) { w.b.Write(buf) } // WriteWithLength writes a byte buffer prepended with its length as varint func (w *BinWriter) WriteWithLength(buf []byte) { w.WritePu64(int64(len(buf))) w.b.Write(buf) } // BinReader is a binary protocol reader helper type BinReader struct { b *bufio.Reader count *readers.CountingReader err error // keeps the first error encountered } // NewBinReader creates a binary protocol reader helper func NewBinReader(reader io.Reader) *BinReader { r := &BinReader{} r.count = readers.NewCountingReader(reader) r.b = bufio.NewReader(r.count) return r } // Count returns number of bytes read func (r *BinReader) Count() uint64 { return r.count.BytesRead() } // Error returns first encountered error or nil func (r *BinReader) Error() error { return r.err } // check() keeps the first error encountered in a stream func (r *BinReader) check(err error) bool { if err == nil { return true } if r.err == nil { // keep the first error r.err = err } 
if err != io.EOF { log.Fatalf("Error parsing response: %v", err) } return false } // ReadByteAsInt reads a single byte as uint32, returns -1 for EOF or errors func (r *BinReader) ReadByteAsInt() int { if octet, err := r.b.ReadByte(); r.check(err) { return int(octet) } return -1 } // ReadByteAsShort reads a single byte as uint16, returns -1 for EOF or errors func (r *BinReader) ReadByteAsShort() int16 { if octet, err := r.b.ReadByte(); r.check(err) { return int16(octet) } return -1 } // ReadIntSpl reads two bytes as little-endian uint16, returns -1 for EOF or errors func (r *BinReader) ReadIntSpl() int { var val uint16 if r.check(binary.Read(r.b, binary.LittleEndian, &val)) { return int(val) } return -1 } // ReadULong returns uint64 equivalent of -1 for EOF or errors func (r *BinReader) ReadULong() uint64 { if val, err := binary.ReadUvarint(r.b); r.check(err) { return val } return 0xffffffffffffffff } // ReadPu32 returns -1 for EOF or errors func (r *BinReader) ReadPu32() int64 { if val, err := binary.ReadUvarint(r.b); r.check(err) { return int64(val) } return -1 } // ReadNBytes reads given number of bytes, returns invalid data for EOF or errors func (r *BinReader) ReadNBytes(len int) []byte { buf := make([]byte, len) n, err := r.b.Read(buf) if r.check(err) { return buf } if n != len { r.check(ErrorPrematureEOF) } return buf } // ReadBytesByLength reads buffer length and its bytes func (r *BinReader) ReadBytesByLength() []byte { len := r.ReadPu32() if len < 0 { r.check(ErrorInvalidLength) return []byte{} } return r.ReadNBytes(int(len)) } // ReadString reads a zero-terminated string with length func (r *BinReader) ReadString() string { len := int(r.ReadPu32()) if len < 1 { r.check(ErrorInvalidLength) return "" } buf := make([]byte, len-1) n, err := r.b.Read(buf) if !r.check(err) { return "" } if n != len-1 { r.check(ErrorPrematureEOF) return "" } zeroByte, err := r.b.ReadByte() if !r.check(err) { return "" } if zeroByte != 0 { r.check(ErrorZeroTerminate) return "" } return string(buf) } // ReadDate reads a Unix encoded time func (r *BinReader) ReadDate() time.Time { return time.Unix(r.ReadPu32(), 0) } rclone-1.53.3/backend/mailru/api/m1.go000066400000000000000000000117231375552240400173750ustar00rootroot00000000000000package api import ( "fmt" ) // M1 protocol constants and structures const ( APIServerURL = "https://cloud.mail.ru" PublicLinkURL = "https://cloud.mail.ru/public/" DispatchServerURL = "https://dispatcher.cloud.mail.ru" OAuthURL = "https://o2.mail.ru/token" OAuthClientID = "cloud-win" ) // ServerErrorResponse represents erroneous API response. 
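// Illustrative sketch (not part of the rclone source): BinWriter/BinReader
// above frame integers as unsigned varints (encoding/binary uvarint) and
// strings as a uvarint length (len+1) followed by the bytes and a
// terminating zero. A standalone round trip of that framing:
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

func main() {
	payload := []byte("hello")

	// Write the way BinWriter.WriteString does: length+1, bytes, zero byte.
	var buf bytes.Buffer
	tmp := make([]byte, binary.MaxVarintLen64)
	buf.Write(tmp[:binary.PutUvarint(tmp, uint64(len(payload)+1))])
	buf.Write(payload)
	buf.WriteByte(0)

	// Read it back the way BinReader.ReadString does.
	r := bufio.NewReader(&buf)
	n, err := binary.ReadUvarint(r)
	if err != nil || n < 1 {
		panic("invalid length")
	}
	str := make([]byte, n-1)
	if _, err := io.ReadFull(r, str); err != nil {
		panic(err)
	}
	zero, _ := r.ReadByte()
	fmt.Printf("%q zero-terminated=%v\n", str, zero == 0)
}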
type ServerErrorResponse struct { Message string `json:"body"` Time int64 `json:"time"` Status int `json:"status"` } func (e *ServerErrorResponse) Error() string { return fmt.Sprintf("server error %d (%s)", e.Status, e.Message) } // FileErrorResponse represents erroneous API response for a file type FileErrorResponse struct { Body struct { Home struct { Value string `json:"value"` Error string `json:"error"` } `json:"home"` } `json:"body"` Status int `json:"status"` Account string `json:"email,omitempty"` Time int64 `json:"time,omitempty"` Message string // non-json, calculated field } func (e *FileErrorResponse) Error() string { return fmt.Sprintf("file error %d (%s)", e.Status, e.Body.Home.Error) } // UserInfoResponse contains account metadata type UserInfoResponse struct { Body struct { AccountType string `json:"account_type"` AccountVerified bool `json:"account_verified"` Cloud struct { Beta struct { Allowed bool `json:"allowed"` Asked bool `json:"asked"` } `json:"beta"` Billing struct { ActiveCostID string `json:"active_cost_id"` ActiveRateID string `json:"active_rate_id"` AutoProlong bool `json:"auto_prolong"` Basequota int64 `json:"basequota"` Enabled bool `json:"enabled"` Expires int `json:"expires"` Prolong bool `json:"prolong"` Promocodes struct { } `json:"promocodes"` Subscription []interface{} `json:"subscription"` Version string `json:"version"` } `json:"billing"` Bonuses struct { CameraUpload bool `json:"camera_upload"` Complete bool `json:"complete"` Desktop bool `json:"desktop"` Feedback bool `json:"feedback"` Links bool `json:"links"` Mobile bool `json:"mobile"` Registration bool `json:"registration"` } `json:"bonuses"` Enable struct { Sharing bool `json:"sharing"` } `json:"enable"` FileSizeLimit int64 `json:"file_size_limit"` Space struct { BytesTotal int64 `json:"bytes_total"` BytesUsed int `json:"bytes_used"` Overquota bool `json:"overquota"` } `json:"space"` } `json:"cloud"` Cloudflags struct { Exists bool `json:"exists"` } `json:"cloudflags"` Domain string `json:"domain"` Login string `json:"login"` Newbie bool `json:"newbie"` UI struct { ExpandLoader bool `json:"expand_loader"` Kind string `json:"kind"` Sidebar bool `json:"sidebar"` Sort struct { Order string `json:"order"` Type string `json:"type"` } `json:"sort"` Thumbs bool `json:"thumbs"` } `json:"ui"` } `json:"body"` Email string `json:"email"` Status int `json:"status"` Time int64 `json:"time"` } // ListItem ... type ListItem struct { Count struct { Folders int `json:"folders"` Files int `json:"files"` } `json:"count,omitempty"` Kind string `json:"kind"` Type string `json:"type"` Name string `json:"name"` Home string `json:"home"` Size int64 `json:"size"` Mtime uint64 `json:"mtime,omitempty"` Hash string `json:"hash,omitempty"` VirusScan string `json:"virus_scan,omitempty"` Tree string `json:"tree,omitempty"` Grev int `json:"grev,omitempty"` Rev int `json:"rev,omitempty"` } // ItemInfoResponse ... type ItemInfoResponse struct { Email string `json:"email"` Body ListItem `json:"body"` Time int64 `json:"time"` Status int `json:"status"` } // FolderInfoResponse ... 
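// Illustrative sketch (not part of the rclone source): the response types
// above map the API's JSON envelope (status / time / body) onto nested
// anonymous structs. A minimal decode of the same shape with a made-up
// payload; itemInfo is a hypothetical stand-in for api.ItemInfoResponse.
package main

import (
	"encoding/json"
	"fmt"
)

type itemInfo struct {
	Body struct {
		Kind string `json:"kind"`
		Home string `json:"home"`
		Size int64  `json:"size"`
	} `json:"body"`
	Time   int64 `json:"time"`
	Status int   `json:"status"`
}

func main() {
	raw := `{"status":200,"time":1600000000,"body":{"kind":"file","home":"/doc.txt","size":42}}`
	var info itemInfo
	if err := json.Unmarshal([]byte(raw), &info); err != nil {
		panic(err)
	}
	fmt.Println(info.Status, info.Body.Kind, info.Body.Home, info.Body.Size)
}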
type FolderInfoResponse struct { Body struct { Count struct { Folders int `json:"folders"` Files int `json:"files"` } `json:"count"` Tree string `json:"tree"` Name string `json:"name"` Grev int `json:"grev"` Size int64 `json:"size"` Sort struct { Order string `json:"order"` Type string `json:"type"` } `json:"sort"` Kind string `json:"kind"` Rev int `json:"rev"` Type string `json:"type"` Home string `json:"home"` List []ListItem `json:"list"` } `json:"body,omitempty"` Time int64 `json:"time"` Status int `json:"status"` Email string `json:"email"` } // CleanupResponse ... type CleanupResponse struct { Email string `json:"email"` Time int64 `json:"time"` StatusStr string `json:"status"` } // GenericResponse ... type GenericResponse struct { Email string `json:"email"` Time int64 `json:"time"` Status int `json:"status"` // ignore other fields } // GenericBodyResponse ... type GenericBodyResponse struct { Email string `json:"email"` Body string `json:"body"` Time int64 `json:"time"` Status int `json:"status"` } rclone-1.53.3/backend/mailru/mailru.go000066400000000000000000001772441375552240400176130ustar00rootroot00000000000000package mailru import ( "bytes" "context" "fmt" gohash "hash" "io" "path" "path/filepath" "sort" "strconv" "strings" "sync" "time" "encoding/hex" "encoding/json" "io/ioutil" "net/http" "net/url" "github.com/rclone/rclone/backend/mailru/api" "github.com/rclone/rclone/backend/mailru/mrhash" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/object" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/rest" "github.com/pkg/errors" "golang.org/x/oauth2" ) // Global constants const ( minSleepPacer = 10 * time.Millisecond maxSleepPacer = 2 * time.Second decayConstPacer = 2 // bigger for slower decay, exponential metaExpirySec = 20 * 60 // meta server expiration time serverExpirySec = 3 * 60 // download server expiration time shardExpirySec = 30 * 60 // upload server expiration time maxServerLocks = 4 // maximum number of locks per single download server maxInt32 = 2147483647 // used as limit in directory list request speedupMinSize = 512 // speedup is not optimal if data is smaller than average packet ) // Global errors var ( ErrorDirAlreadyExists = errors.New("directory already exists") ErrorDirSourceNotExists = errors.New("directory source does not exist") ErrorInvalidName = errors.New("invalid characters in object name") // MrHashType is the hash.Type for Mailru MrHashType hash.Type ) // Description of how to authorize var oauthConfig = &oauth2.Config{ ClientID: api.OAuthClientID, ClientSecret: "", Endpoint: oauth2.Endpoint{ AuthURL: api.OAuthURL, TokenURL: api.OAuthURL, AuthStyle: oauth2.AuthStyleInParams, }, } // Register with Fs func init() { MrHashType = hash.RegisterHash("MailruHash", 40, mrhash.New) fs.Register(&fs.RegInfo{ Name: "mailru", Description: "Mail.ru Cloud", NewFs: NewFs, Options: []fs.Option{{ Name: "user", Help: "User name (usually email)", Required: true, }, { Name: "pass", Help: "Password", Required: true, IsPassword: true, }, { Name: "speedup_enable", Default: true, Advanced: false, Help: 
`Skip full upload if there is another file with the same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization.`, Examples: []fs.OptionExample{{ Value: "true", Help: "Enable", }, { Value: "false", Help: "Disable", }}, }, { Name: "speedup_file_patterns", Default: "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf", Advanced: true, Help: `Comma separated list of file name patterns eligible for speedup (put by hash). Patterns are case insensitive and can contain '*' or '?' meta characters.`, Examples: []fs.OptionExample{{ Value: "", Help: "Empty list completely disables speedup (put by hash).", }, { Value: "*", Help: "All files will be attempted for speedup.", }, { Value: "*.mkv,*.avi,*.mp4,*.mp3", Help: "Only common audio/video files will be tried for put by hash.", }, { Value: "*.zip,*.gz,*.rar,*.pdf", Help: "Only common archives or PDF books will be tried for speedup.", }}, }, { Name: "speedup_max_disk", Default: fs.SizeSuffix(3 * 1024 * 1024 * 1024), Advanced: true, Help: `This option allows you to disable speedup (put by hash) for large files (because preliminary hashing can exhaust your RAM or disk space)`, Examples: []fs.OptionExample{{ Value: "0", Help: "Completely disable speedup (put by hash).", }, { Value: "1G", Help: "Files larger than 1Gb will be uploaded directly.", }, { Value: "3G", Help: "Choose this option if you have less than 3Gb free on local disk.", }}, }, { Name: "speedup_max_memory", Default: fs.SizeSuffix(32 * 1024 * 1024), Advanced: true, Help: `Files larger than the size given below will always be hashed on disk.`, Examples: []fs.OptionExample{{ Value: "0", Help: "Preliminary hashing will always be done in a temporary disk location.", }, { Value: "32M", Help: "Do not dedicate more than 32Mb RAM for preliminary hashing.", }, { Value: "256M", Help: "You have at most 256Mb RAM free for hash calculations.", }}, }, { Name: "check_hash", Default: true, Advanced: true, Help: "What should copy do if file checksum is mismatched or invalid", Examples: []fs.OptionExample{{ Value: "true", Help: "Fail with error.", }, { Value: "false", Help: "Ignore and continue.", }}, }, { Name: "user_agent", Default: "", Advanced: true, Hide: fs.OptionHideBoth, Help: `HTTP user agent used internally by client. Defaults to "rclone/VERSION" or "--user-agent" provided on command line.`, }, { Name: "quirks", Default: "", Advanced: true, Hide: fs.OptionHideBoth, Help: `Comma separated list of internal maintenance flags. This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist gzip insecure retry400`, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Display | encoder.EncodeWin | // :?"*<>| encoder.EncodeBackSlash | encoder.EncodeInvalidUtf8), }}, }) } // Options defines the configuration for this backend type Options struct { Username string `config:"user"` Password string `config:"pass"` UserAgent string `config:"user_agent"` CheckHash bool `config:"check_hash"` SpeedupEnable bool `config:"speedup_enable"` SpeedupPatterns string `config:"speedup_file_patterns"` SpeedupMaxDisk fs.SizeSuffix `config:"speedup_max_disk"` SpeedupMaxMem fs.SizeSuffix `config:"speedup_max_memory"` Quirks string `config:"quirks"` Enc encoder.MultiEncoder `config:"encoding"` } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests. 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this response and err // deserve to be retried. It returns the err as a convenience. // Retries password authorization (once) in a special case of access denied. func shouldRetry(res *http.Response, err error, f *Fs, opts *rest.Opts) (bool, error) { if res != nil && res.StatusCode == 403 && f.opt.Password != "" && !f.passFailed { reAuthErr := f.reAuthorize(opts, err) return reAuthErr == nil, err // return an original error } if res != nil && res.StatusCode == 400 && f.quirks.retry400 { return true, err } return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(res, retryErrorCodes), err } // errorHandler parses a non 2xx error response into an error func errorHandler(res *http.Response) (err error) { data, err := rest.ReadBody(res) if err != nil { return err } fileError := &api.FileErrorResponse{} err = json.NewDecoder(bytes.NewReader(data)).Decode(fileError) if err == nil { fileError.Message = fileError.Body.Home.Error return fileError } serverError := &api.ServerErrorResponse{} err = json.NewDecoder(bytes.NewReader(data)).Decode(serverError) if err == nil { return serverError } serverError.Message = string(data) if serverError.Message == "" || strings.HasPrefix(serverError.Message, "{") { // Replace empty or JSON response with a human readable text. 
serverError.Message = res.Status } serverError.Status = res.StatusCode return serverError } // Fs represents a remote mail.ru type Fs struct { name string root string // root path opt Options // parsed options speedupGlobs []string // list of file name patterns eligible for speedup speedupAny bool // true if all file names are eligible for speedup features *fs.Features // optional features srv *rest.Client // REST API client cli *http.Client // underlying HTTP client (for authorize) m configmap.Mapper // config reader (for authorize) source oauth2.TokenSource // OAuth token refresher pacer *fs.Pacer // pacer for API calls metaMu sync.Mutex // lock for meta server switcher metaURL string // URL of meta server metaExpiry time.Time // time to refresh meta server shardMu sync.Mutex // lock for upload shard switcher shardURL string // URL of upload shard shardExpiry time.Time // time to refresh upload shard fileServers serverPool // file server dispatcher authMu sync.Mutex // mutex for authorize() passFailed bool // true if authorize() failed after 403 quirks quirks // internal maintenance flags } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // fs.Debugf(nil, ">>> NewFs %q %q", name, root) ctx := context.Background() // Note: NewFs does not pass context! // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } if opt.Password != "" { opt.Password = obscure.MustReveal(opt.Password) } // Trailing slash signals us to optimize out one file check rootIsDir := strings.HasSuffix(root, "/") // However the f.root string should not have leading or trailing slashes root = strings.Trim(root, "/") f := &Fs{ name: name, root: root, opt: *opt, m: m, } if err := f.parseSpeedupPatterns(opt.SpeedupPatterns); err != nil { return nil, err } f.quirks.parseQuirks(opt.Quirks) f.pacer = fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleepPacer), pacer.MaxSleep(maxSleepPacer), pacer.DecayConstant(decayConstPacer))) f.features = (&fs.Features{ CaseInsensitive: true, CanHaveEmptyDirectories: true, // Can copy/move across mailru configs (almost, thus true here), but // only when they share a common account (this is checked in Copy/Move). ServerSideAcrossConfigs: true, }).Fill(f) // Override a few config settings and create a client clientConfig := *fs.Config if opt.UserAgent != "" { clientConfig.UserAgent = opt.UserAgent } clientConfig.NoGzip = !f.quirks.gzip // Do not send "Accept-Encoding: gzip", matching the official client f.cli = fshttp.NewClient(&clientConfig) f.srv = rest.NewClient(f.cli) f.srv.SetRoot(api.APIServerURL) f.srv.SetHeader("Accept", "*/*") // Send "Accept: */*" with every request like the official client f.srv.SetErrorHandler(errorHandler) if f.quirks.insecure { transport := f.cli.Transport.(*fshttp.Transport).Transport transport.TLSClientConfig.InsecureSkipVerify = true transport.ProxyConnectHeader = http.Header{"User-Agent": {clientConfig.UserAgent}} } if err = f.authorize(ctx, false); err != nil { return nil, err } f.fileServers = serverPool{ pool: make(pendingServerMap), fs: f, path: "/d", expirySec: serverExpirySec, } if !rootIsDir { _, dirSize, err := f.readItemMetaData(ctx, f.root) rootIsDir = (dirSize >= 0) // Ignore non-existing item and other errors if err == nil && !rootIsDir { root = path.Dir(f.root) if root == "."
{ root = "" } f.root = root // Return fs that points to the parent and signal rclone to do filtering return f, fs.ErrorIsFile } } return f, nil } // Internal maintenance flags (to be removed when the backend matures). // Primarily intended to facilitate remote support and troubleshooting. type quirks struct { gzip bool insecure bool binlist bool atomicmkdir bool retry400 bool } func (q *quirks) parseQuirks(option string) { for _, flag := range strings.Split(option, ",") { switch strings.ToLower(strings.TrimSpace(flag)) { case "gzip": // This backend mimics the official client which never sends the // "Accept-Encoding: gzip" header. However, enabling compression // might be good for performance. // Use this quirk to investigate the performance impact. // Remove this quirk if performance does not improve. q.gzip = true case "insecure": // The mailru disk-o protocol is not documented. To compare HTTP // stream against the official client one can use Telerik Fiddler, // which introduces a self-signed certificate. This quirk forces // the Go http layer to accept it. // Remove this quirk when the backend reaches maturity. q.insecure = true case "binlist": // The official client sometimes uses a so called "bin" protocol, // implemented in the listBin file system method below. This method // is generally faster than non-recursive listM1 but results in // sporadic deserialization failures if total size of tree data // approaches 8Kb (?). The recursive method is normally disabled. // This quirk can be used to enable it for further investigation. // Remove this quirk when the "bin" protocol support is complete. q.binlist = true case "atomicmkdir": // At the moment rclone requires Mkdir to return success if the // directory already exists. However, such programs as borgbackup // or restic use mkdir as a locking primitive and depend on its // atomicity. This quirk is a workaround. It can be removed // when the above issue is investigated. q.atomicmkdir = true case "retry400": // This quirk will help in troubleshooting a very rare "Error 400" // issue. It can be removed if the problem does not show up // for a year or so. See the below issue: // https://github.com/ivandeex/rclone/issues/14 q.retry400 = true default: // Just ignore all unknown flags } } } // Note: authorize() is not safe for concurrent access as it updates token source func (f *Fs) authorize(ctx context.Context, force bool) (err error) { var t *oauth2.Token if !force { t, err = oauthutil.GetToken(f.name, f.m) } if err != nil || !tokenIsValid(t) { fs.Infof(f, "Valid token not found, authorizing.") ctx := oauthutil.Context(f.cli) t, err = oauthConfig.PasswordCredentialsToken(ctx, f.opt.Username, f.opt.Password) } if err == nil && !tokenIsValid(t) { err = errors.New("Invalid token") } if err != nil { return errors.Wrap(err, "Failed to authorize") } if err = oauthutil.PutToken(f.name, f.m, t, false); err != nil { return err } // Mailru API server expects access token not in the request header but // in the URL query string, so we must use a bare token source rather than // client provided by oauthutil. 
// // WARNING: direct use of the returned token source triggers a bug in the // `(*token != *ts.token)` comparison in oauthutil.TokenSource.Token() // crashing with panic `comparing uncomparable type map[string]interface{}` // As a workaround, mimic oauth2.NewClient() wrapping token source in // oauth2.ReuseTokenSource _, ts, err := oauthutil.NewClientWithBaseClient(f.name, f.m, oauthConfig, f.cli) if err == nil { f.source = oauth2.ReuseTokenSource(nil, ts) } return err } func tokenIsValid(t *oauth2.Token) bool { return t.Valid() && t.RefreshToken != "" && t.Type() == "Bearer" } // reAuthorize is called after getting 403 (access denied) from the server. // It handles the case when user has changed password since a previous // rclone invocation and obtains a new access token, if needed. func (f *Fs) reAuthorize(opts *rest.Opts, origErr error) error { // lock and recheck the flag to ensure authorize() is attempted only once f.authMu.Lock() defer f.authMu.Unlock() if f.passFailed { return origErr } ctx := context.Background() // Note: reAuthorize is called by ShouldRetry, no context! fs.Debugf(f, "re-authorize with new password") if err := f.authorize(ctx, true); err != nil { f.passFailed = true return err } // obtain new token, if needed tokenParameter := "" if opts != nil && opts.Parameters.Get("token") != "" { tokenParameter = "token" } if opts != nil && opts.Parameters.Get("access_token") != "" { tokenParameter = "access_token" } if tokenParameter != "" { token, err := f.accessToken() if err != nil { f.passFailed = true return err } opts.Parameters.Set(tokenParameter, token) } return nil } // accessToken() returns OAuth token and possibly refreshes it func (f *Fs) accessToken() (string, error) { token, err := f.source.Token() if err != nil { return "", errors.Wrap(err, "cannot refresh access token") } return token.AccessToken, nil } // absPath converts root-relative remote to absolute home path func (f *Fs) absPath(remote string) string { return path.Join("/", f.root, remote) } // relPath converts absolute home path to root-relative remote // Note that f.root can not have leading and trailing slashes func (f *Fs) relPath(absPath string) (string, error) { target := strings.Trim(absPath, "/") if f.root == "" { return target, nil } if target == f.root { return "", nil } if strings.HasPrefix(target+"/", f.root+"/") { return target[len(f.root)+1:], nil } return "", fmt.Errorf("path %q should be under %q", absPath, f.root) } // metaServer ... func (f *Fs) metaServer(ctx context.Context) (string, error) { f.metaMu.Lock() defer f.metaMu.Unlock() if f.metaURL != "" && time.Now().Before(f.metaExpiry) { return f.metaURL, nil } opts := rest.Opts{ RootURL: api.DispatchServerURL, Method: "GET", Path: "/m", } var ( res *http.Response url string err error ) err = f.pacer.Call(func() (bool, error) { res, err = f.srv.Call(ctx, &opts) if err == nil { url, err = readBodyWord(res) } return fserrors.ShouldRetry(err), err }) if err != nil { closeBody(res) return "", err } f.metaURL = url f.metaExpiry = time.Now().Add(metaExpirySec * time.Second) fs.Debugf(f, "new meta server: %s", f.metaURL) return f.metaURL, nil } // readBodyWord reads the single line response to completion // and extracts the first word from the first line. 
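// Illustrative sketch (not part of the rclone source): metaServer above
// caches the dispatcher's answer under a mutex and re-fetches it once the
// expiry passes. The same expiring-cache pattern in isolation; cachedValue
// and fetch are hypothetical names.
package main

import (
	"fmt"
	"sync"
	"time"
)

type cachedValue struct {
	mu     sync.Mutex
	value  string
	expiry time.Time
}

// get returns the cached value if it is still fresh, otherwise calls
// fetch and remembers the result for ttl.
func (c *cachedValue) get(ttl time.Duration, fetch func() (string, error)) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.value != "" && time.Now().Before(c.expiry) {
		return c.value, nil
	}
	v, err := fetch()
	if err != nil {
		return "", err
	}
	c.value = v
	c.expiry = time.Now().Add(ttl)
	return v, nil
}

func main() {
	calls := 0
	c := &cachedValue{}
	for i := 0; i < 3; i++ {
		if _, err := c.get(time.Minute, func() (string, error) {
			calls++
			return "https://meta.example", nil
		}); err != nil {
			panic(err)
		}
	}
	fmt.Println("fetches:", calls) // 1 - later lookups hit the cache
}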
func readBodyWord(res *http.Response) (word string, err error) { var body []byte body, err = rest.ReadBody(res) if err == nil { line := strings.Trim(string(body), " \r\n") word = strings.Split(line, " ")[0] } if word == "" { return "", errors.New("Empty reply from dispatcher") } return word, nil } // readItemMetaData returns a file/directory info at given full path // If it can't be found it fails with fs.ErrorObjectNotFound // For the return value `dirSize` please see Fs.itemToEntry() func (f *Fs) readItemMetaData(ctx context.Context, path string) (entry fs.DirEntry, dirSize int, err error) { token, err := f.accessToken() if err != nil { return nil, -1, err } opts := rest.Opts{ Method: "GET", Path: "/api/m1/file", Parameters: url.Values{ "access_token": {token}, "home": {f.opt.Enc.FromStandardPath(path)}, "offset": {"0"}, "limit": {strconv.Itoa(maxInt32)}, }, } var info api.ItemInfoResponse err = f.pacer.Call(func() (bool, error) { res, err := f.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(res, err, f, &opts) }) if err != nil { if apiErr, ok := err.(*api.FileErrorResponse); ok { switch apiErr.Status { case 404: err = fs.ErrorObjectNotFound case 400: fs.Debugf(f, "object %q status %d (%s)", path, apiErr.Status, apiErr.Message) err = fs.ErrorObjectNotFound } } return } entry, dirSize, err = f.itemToDirEntry(ctx, &info.Body) return } // itemToEntry converts API item to rclone directory entry // The dirSize return value is: // <0 - for a file or in case of error // =0 - for an empty directory // >0 - for a non-empty directory func (f *Fs) itemToDirEntry(ctx context.Context, item *api.ListItem) (entry fs.DirEntry, dirSize int, err error) { remote, err := f.relPath(f.opt.Enc.ToStandardPath(item.Home)) if err != nil { return nil, -1, err } mTime := int64(item.Mtime) if mTime < 0 { fs.Debugf(f, "Fixing invalid timestamp %d on mailru file %q", mTime, remote) mTime = 0 } switch item.Kind { case "folder": dir := fs.NewDir(remote, time.Unix(mTime, 0)).SetSize(item.Size) dirSize := item.Count.Files + item.Count.Folders return dir, dirSize, nil case "file": binHash, err := mrhash.DecodeString(item.Hash) if err != nil { return nil, -1, err } file := &Object{ fs: f, remote: remote, hasMetaData: true, size: item.Size, mrHash: binHash, modTime: time.Unix(mTime, 0), } return file, -1, nil default: return nil, -1, fmt.Errorf("Unknown resource type %q", item.Kind) } } // List the objects and directories in dir into entries. // The entries can be returned in any order but should be for a complete directory. // dir should be "" to list the root, and should not have trailing slashes. // This should return ErrDirNotFound if the directory isn't found. 
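// Illustrative sketch (not part of the rclone source): itemToDirEntry above
// clamps negative mtimes to zero and branches on the item kind
// ("folder"/"file"), with dirSize<0 meaning a file, 0 an empty directory and
// >0 a non-empty one. The same mapping in isolation; listItem and describe
// are hypothetical stand-ins for the rclone types.
package main

import (
	"fmt"
	"time"
)

type listItem struct {
	Kind    string
	Name    string
	Size    int64
	Mtime   int64 // seconds since epoch; may arrive corrupt/negative
	Folders int
	Files   int
}

// describe renders an item and returns the dirSize convention value.
func describe(it listItem) (string, int, error) {
	mtime := it.Mtime
	if mtime < 0 {
		mtime = 0 // treat an invalid timestamp as the epoch
	}
	when := time.Unix(mtime, 0).UTC().Format(time.RFC3339)
	switch it.Kind {
	case "folder":
		return fmt.Sprintf("dir %s %s", it.Name, when), it.Folders + it.Files, nil
	case "file":
		return fmt.Sprintf("file %s %d bytes %s", it.Name, it.Size, when), -1, nil
	default:
		return "", -1, fmt.Errorf("unknown resource type %q", it.Kind)
	}
}

func main() {
	entry, dirSize, _ := describe(listItem{Kind: "file", Name: "a.txt", Size: 7, Mtime: -5})
	fmt.Println(entry, dirSize)
}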
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { // fs.Debugf(f, ">>> List: %q", dir) if f.quirks.binlist { entries, err = f.listBin(ctx, f.absPath(dir), 1) } else { entries, err = f.listM1(ctx, f.absPath(dir), 0, maxInt32) } if err == nil && fs.Config.LogLevel >= fs.LogLevelDebug { names := []string{} for _, entry := range entries { names = append(names, entry.Remote()) } sort.Strings(names) // fs.Debugf(f, "List(%q): %v", dir, names) } return } // list using protocol "m1" func (f *Fs) listM1(ctx context.Context, dirPath string, offset int, limit int) (entries fs.DirEntries, err error) { token, err := f.accessToken() if err != nil { return nil, err } params := url.Values{} params.Set("access_token", token) params.Set("offset", strconv.Itoa(offset)) params.Set("limit", strconv.Itoa(limit)) data := url.Values{} data.Set("home", f.opt.Enc.FromStandardPath(dirPath)) opts := rest.Opts{ Method: "POST", Path: "/api/m1/folder", Parameters: params, Body: strings.NewReader(data.Encode()), ContentType: api.BinContentType, } var ( info api.FolderInfoResponse res *http.Response ) err = f.pacer.Call(func() (bool, error) { res, err = f.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(res, err, f, &opts) }) if err != nil { apiErr, ok := err.(*api.FileErrorResponse) if ok && apiErr.Status == 404 { return nil, fs.ErrorDirNotFound } return nil, err } if info.Body.Kind != "folder" { return nil, fs.ErrorIsFile } for _, item := range info.Body.List { entry, _, err := f.itemToDirEntry(ctx, &item) if err == nil { entries = append(entries, entry) } else { fs.Debugf(f, "Excluding path %q from list: %v", item.Home, err) } } return entries, nil } // list using protocol "bin" func (f *Fs) listBin(ctx context.Context, dirPath string, depth int) (entries fs.DirEntries, err error) { options := api.ListOptDefaults req := api.NewBinWriter() req.WritePu16(api.OperationFolderList) req.WriteString(f.opt.Enc.FromStandardPath(dirPath)) req.WritePu32(int64(depth)) req.WritePu32(int64(options)) req.WritePu32(0) token, err := f.accessToken() if err != nil { return nil, err } metaURL, err := f.metaServer(ctx) if err != nil { return nil, err } opts := rest.Opts{ Method: "POST", RootURL: metaURL, Parameters: url.Values{ "client_id": {api.OAuthClientID}, "token": {token}, }, ContentType: api.BinContentType, Body: req.Reader(), } var res *http.Response err = f.pacer.Call(func() (bool, error) { res, err = f.srv.Call(ctx, &opts) return shouldRetry(res, err, f, &opts) }) if err != nil { closeBody(res) return nil, err } r := api.NewBinReader(res.Body) defer closeBody(res) // read status switch status := r.ReadByteAsInt(); status { case api.ListResultOK: // go on... 
case api.ListResultNotExists: return nil, fs.ErrorDirNotFound default: return nil, fmt.Errorf("directory list error %d", status) } t := &treeState{ f: f, r: r, options: options, rootDir: parentDir(dirPath), lastDir: "", level: 0, } t.currDir = t.rootDir // read revision if err := t.revision.Read(r); err != nil { return nil, err } // read space if (options & api.ListOptTotalSpace) != 0 { t.totalSpace = int64(r.ReadULong()) } if (options & api.ListOptUsedSpace) != 0 { t.usedSpace = int64(r.ReadULong()) } t.fingerprint = r.ReadBytesByLength() // deserialize for { entry, err := t.NextRecord() if err != nil { break } if entry != nil { entries = append(entries, entry) } } if err != nil && err != fs.ErrorListAborted { fs.Debugf(f, "listBin failed at offset %d: %v", r.Count(), err) return nil, err } return entries, nil } func (t *treeState) NextRecord() (fs.DirEntry, error) { r := t.r parseOp := r.ReadByteAsShort() if r.Error() != nil { return nil, r.Error() } switch parseOp { case api.ListParseDone: return nil, fs.ErrorListAborted case api.ListParsePin: if t.lastDir == "" { return nil, errors.New("last folder is null") } t.currDir = t.lastDir t.level++ return nil, nil case api.ListParsePinUpper: if t.currDir == t.rootDir { return nil, nil } if t.level <= 0 { return nil, errors.New("no parent folder") } t.currDir = parentDir(t.currDir) t.level-- return nil, nil case api.ListParseUnknown15: skip := int(r.ReadPu32()) for i := 0; i < skip; i++ { r.ReadPu32() r.ReadPu32() } return nil, nil case api.ListParseReadItem: // get item (see below) default: return nil, fmt.Errorf("unknown parse operation %d", parseOp) } // get item head := r.ReadIntSpl() itemType := head & 3 if (head & 4096) != 0 { t.dunnoNodeID = r.ReadNBytes(api.DunnoNodeIDLength) } name := t.f.opt.Enc.FromStandardPath(string(r.ReadBytesByLength())) t.dunno1 = int(r.ReadULong()) t.dunno2 = 0 t.dunno3 = 0 if r.Error() != nil { return nil, r.Error() } var ( modTime time.Time size int64 binHash []byte dirSize int64 isDir = true ) switch itemType { case api.ListItemMountPoint: t.treeID = r.ReadNBytes(api.TreeIDLength) t.dunno2 = int(r.ReadULong()) t.dunno3 = int(r.ReadULong()) case api.ListItemFolder: t.dunno2 = int(r.ReadULong()) case api.ListItemSharedFolder: t.dunno2 = int(r.ReadULong()) t.treeID = r.ReadNBytes(api.TreeIDLength) case api.ListItemFile: isDir = false modTime = r.ReadDate() size = int64(r.ReadULong()) binHash = r.ReadNBytes(mrhash.Size) default: return nil, fmt.Errorf("unknown item type %d", itemType) } if isDir { t.lastDir = path.Join(t.currDir, name) if (t.options & api.ListOptDelete) != 0 { t.dunnoDel1 = int(r.ReadPu32()) t.dunnoDel2 = int(r.ReadPu32()) } if (t.options & api.ListOptFolderSize) != 0 { dirSize = int64(r.ReadULong()) } } if r.Error() != nil { return nil, r.Error() } if fs.Config.LogLevel >= fs.LogLevelDebug { ctime, _ := modTime.MarshalJSON() fs.Debugf(t.f, "binDir %d.%d %q %q (%d) %s", t.level, itemType, t.currDir, name, size, ctime) } if t.level != 1 { // TODO: implement recursion and ListR // Note: recursion is broken because maximum buffer size is 8K return nil, nil } remote, err := t.f.relPath(path.Join(t.currDir, name)) if err != nil { return nil, err } if isDir { return fs.NewDir(remote, modTime).SetSize(dirSize), nil } obj := &Object{ fs: t.f, remote: remote, hasMetaData: true, size: size, mrHash: binHash, modTime: modTime, } return obj, nil } type treeState struct { f *Fs r *api.BinReader options int rootDir string currDir string lastDir string level int revision treeRevision totalSpace int64 usedSpace 
int64 fingerprint []byte dunno1 int dunno2 int dunno3 int dunnoDel1 int dunnoDel2 int dunnoNodeID []byte treeID []byte } type treeRevision struct { ver int16 treeID []byte treeIDNew []byte bgn uint64 bgnNew uint64 } func (rev *treeRevision) Read(data *api.BinReader) error { rev.ver = data.ReadByteAsShort() switch rev.ver { case 0: // Revision() case 1, 2: rev.treeID = data.ReadNBytes(api.TreeIDLength) rev.bgn = data.ReadULong() case 3, 4: rev.treeID = data.ReadNBytes(api.TreeIDLength) rev.bgn = data.ReadULong() rev.treeIDNew = data.ReadNBytes(api.TreeIDLength) rev.bgnNew = data.ReadULong() case 5: rev.treeID = data.ReadNBytes(api.TreeIDLength) rev.bgn = data.ReadULong() rev.treeIDNew = data.ReadNBytes(api.TreeIDLength) default: return fmt.Errorf("unknown directory revision %d", rev.ver) } return data.Error() } // CreateDir makes a directory (parent must exist) func (f *Fs) CreateDir(ctx context.Context, path string) error { // fs.Debugf(f, ">>> CreateDir %q", path) req := api.NewBinWriter() req.WritePu16(api.OperationCreateFolder) req.WritePu16(0) // revision req.WriteString(f.opt.Enc.FromStandardPath(path)) req.WritePu32(0) token, err := f.accessToken() if err != nil { return err } metaURL, err := f.metaServer(ctx) if err != nil { return err } opts := rest.Opts{ Method: "POST", RootURL: metaURL, Parameters: url.Values{ "client_id": {api.OAuthClientID}, "token": {token}, }, ContentType: api.BinContentType, Body: req.Reader(), } var res *http.Response err = f.pacer.Call(func() (bool, error) { res, err = f.srv.Call(ctx, &opts) return shouldRetry(res, err, f, &opts) }) if err != nil { closeBody(res) return err } reply := api.NewBinReader(res.Body) defer closeBody(res) switch status := reply.ReadByteAsInt(); status { case api.MkdirResultOK: return nil case api.MkdirResultAlreadyExists, api.MkdirResultExistsDifferentCase: return ErrorDirAlreadyExists case api.MkdirResultSourceNotExists: return ErrorDirSourceNotExists case api.MkdirResultInvalidName: return ErrorInvalidName default: return fmt.Errorf("mkdir error %d", status) } } // Mkdir creates the container (and its parents) if it doesn't exist. // Normally it ignores the ErrorDirAlreadyExist, as required by rclone tests. // Nevertheless, such programs as borgbackup or restic use mkdir as a locking // primitive and depend on its atomicity, i.e. mkdir should fail if directory // already exists. As a workaround, users can add string "atomicmkdir" in the // hidden `quirks` parameter or in the `--mailru-quirks` command-line option. func (f *Fs) Mkdir(ctx context.Context, dir string) error { // fs.Debugf(f, ">>> Mkdir %q", dir) err := f.mkDirs(ctx, f.absPath(dir)) if err == ErrorDirAlreadyExists && !f.quirks.atomicmkdir { return nil } return err } // mkDirs creates container and its parents by absolute path, // fails with ErrorDirAlreadyExists if it already exists. func (f *Fs) mkDirs(ctx context.Context, path string) error { if path == "/" || path == "" { return nil } switch err := f.CreateDir(ctx, path); err { case nil: return nil case ErrorDirSourceNotExists: fs.Debugf(f, "mkDirs by part %q", path) // fall thru... default: return err } parts := strings.Split(strings.Trim(path, "/"), "/") path = "" for _, part := range parts { if part == "" { continue } path += "/" + part switch err := f.CreateDir(ctx, path); err { case nil, ErrorDirAlreadyExists: continue default: return err } } return nil } func parentDir(absPath string) string { parent := path.Dir(strings.TrimRight(absPath, "/")) if parent == "." 
{ parent = "" } return parent } // mkParentDirs creates parent containers by absolute path, // ignores the ErrorDirAlreadyExists func (f *Fs) mkParentDirs(ctx context.Context, path string) error { err := f.mkDirs(ctx, parentDir(path)) if err == ErrorDirAlreadyExists { return nil } return err } // Rmdir deletes a directory. // Returns an error if it isn't empty. func (f *Fs) Rmdir(ctx context.Context, dir string) error { // fs.Debugf(f, ">>> Rmdir %q", dir) return f.purgeWithCheck(ctx, dir, true, "rmdir") } // Purge deletes all the files in the directory // Optional interface: Only implement this if you have a way of deleting // all the files quicker than just running Remove() on the result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { // fs.Debugf(f, ">>> Purge") return f.purgeWithCheck(ctx, dir, false, "purge") } // purgeWithCheck() removes the root directory. // Refuses if `check` is set and directory has anything in. func (f *Fs) purgeWithCheck(ctx context.Context, dir string, check bool, opName string) error { path := f.absPath(dir) if path == "/" || path == "" { // Mailru will not allow to purge root space returning status 400 return fs.ErrorNotDeletingDirs } _, dirSize, err := f.readItemMetaData(ctx, path) if err != nil { return errors.Wrapf(err, "%s failed", opName) } if check && dirSize > 0 { return fs.ErrorDirectoryNotEmpty } return f.delete(ctx, path, false) } func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) error { token, err := f.accessToken() if err != nil { return err } data := url.Values{"home": {f.opt.Enc.FromStandardPath(path)}} opts := rest.Opts{ Method: "POST", Path: "/api/m1/file/remove", Parameters: url.Values{ "access_token": {token}, }, Body: strings.NewReader(data.Encode()), ContentType: api.BinContentType, } var response api.GenericResponse err = f.pacer.Call(func() (bool, error) { res, err := f.srv.CallJSON(ctx, &opts, nil, &response) return shouldRetry(res, err, f, &opts) }) switch { case err != nil: return err case response.Status == 200: return nil default: return fmt.Errorf("delete failed with code %d", response.Status) } } // Copy src to this remote using server side copy operations. // This is stored with the remote path given. // It returns the destination Object and a possible error. 
// Will only be called if src.Fs().Name() == f.Name() // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { // fs.Debugf(f, ">>> Copy %q %q", src.Remote(), remote) srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } if srcObj.fs.opt.Username != f.opt.Username { // Can copy across mailru configs only if they share common account fs.Debugf(src, "Can't copy - not same account") return nil, fs.ErrorCantCopy } srcPath := srcObj.absPath() dstPath := f.absPath(remote) overwrite := false // fs.Debugf(f, "copy %q -> %q\n", srcPath, dstPath) err := f.mkParentDirs(ctx, dstPath) if err != nil { return nil, err } data := url.Values{} data.Set("home", f.opt.Enc.FromStandardPath(srcPath)) data.Set("folder", f.opt.Enc.FromStandardPath(parentDir(dstPath))) data.Set("email", f.opt.Username) data.Set("x-email", f.opt.Username) if overwrite { data.Set("conflict", "rewrite") } else { data.Set("conflict", "rename") } token, err := f.accessToken() if err != nil { return nil, err } opts := rest.Opts{ Method: "POST", Path: "/api/m1/file/copy", Parameters: url.Values{ "access_token": {token}, }, Body: strings.NewReader(data.Encode()), ContentType: api.BinContentType, } var response api.GenericBodyResponse err = f.pacer.Call(func() (bool, error) { res, err := f.srv.CallJSON(ctx, &opts, nil, &response) return shouldRetry(res, err, f, &opts) }) if err != nil { return nil, errors.Wrap(err, "couldn't copy file") } if response.Status != 200 { return nil, fmt.Errorf("copy failed with code %d", response.Status) } tmpPath := f.opt.Enc.ToStandardPath(response.Body) if tmpPath != dstPath { // fs.Debugf(f, "rename temporary file %q -> %q\n", tmpPath, dstPath) err = f.moveItemBin(ctx, tmpPath, dstPath, "rename temporary file") if err != nil { _ = f.delete(ctx, tmpPath, false) // ignore error return nil, err } } // fix modification time at destination dstObj := &Object{ fs: f, remote: remote, } err = dstObj.readMetaData(ctx, true) if err == nil && dstObj.modTime != srcObj.modTime { dstObj.modTime = srcObj.modTime err = dstObj.addFileMetaData(ctx, true) } if err != nil { dstObj = nil } return dstObj, err } // Move src to this remote using server side move operations. // This is stored with the remote path given. // It returns the destination Object and a possible error. 
// Will only be called if src.Fs().Name() == f.Name() // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { // fs.Debugf(f, ">>> Move %q %q", src.Remote(), remote) srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } if srcObj.fs.opt.Username != f.opt.Username { // Can move across mailru configs only if they share common account fs.Debugf(src, "Can't move - not same account") return nil, fs.ErrorCantMove } srcPath := srcObj.absPath() dstPath := f.absPath(remote) err := f.mkParentDirs(ctx, dstPath) if err != nil { return nil, err } err = f.moveItemBin(ctx, srcPath, dstPath, "move file") if err != nil { return nil, err } return f.NewObject(ctx, remote) } // move/rename an object using BIN protocol func (f *Fs) moveItemBin(ctx context.Context, srcPath, dstPath, opName string) error { token, err := f.accessToken() if err != nil { return err } metaURL, err := f.metaServer(ctx) if err != nil { return err } req := api.NewBinWriter() req.WritePu16(api.OperationRename) req.WritePu32(0) // old revision req.WriteString(f.opt.Enc.FromStandardPath(srcPath)) req.WritePu32(0) // new revision req.WriteString(f.opt.Enc.FromStandardPath(dstPath)) req.WritePu32(0) // dunno opts := rest.Opts{ Method: "POST", RootURL: metaURL, Parameters: url.Values{ "client_id": {api.OAuthClientID}, "token": {token}, }, ContentType: api.BinContentType, Body: req.Reader(), } var res *http.Response err = f.pacer.Call(func() (bool, error) { res, err = f.srv.Call(ctx, &opts) return shouldRetry(res, err, f, &opts) }) if err != nil { closeBody(res) return err } reply := api.NewBinReader(res.Body) defer closeBody(res) switch status := reply.ReadByteAsInt(); status { case api.MoveResultOK: return nil default: return fmt.Errorf("%s failed with error %d", opName, status) } } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // Will only be called if src.Fs().Name() == f.Name() // If it isn't possible then return fs.ErrorCantDirMove // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { // fs.Debugf(f, ">>> DirMove %q %q", srcRemote, dstRemote) srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } if srcFs.opt.Username != f.opt.Username { // Can move across mailru configs only if they share common account fs.Debugf(src, "Can't move - not same account") return fs.ErrorCantDirMove } srcPath := srcFs.absPath(srcRemote) dstPath := f.absPath(dstRemote) // fs.Debugf(srcFs, "DirMove [%s]%q --> [%s]%q\n", srcRemote, srcPath, dstRemote, dstPath) // Refuse to move to or from the root if len(srcPath) <= len(srcFs.root) || len(dstPath) <= len(f.root) { fs.Debugf(src, "DirMove error: Can't move root") return errors.New("can't move root directory") } err := f.mkParentDirs(ctx, dstPath) if err != nil { return err } _, _, err = f.readItemMetaData(ctx, dstPath) switch err { case fs.ErrorObjectNotFound: // OK! 
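// (editor's note) "object not found" is the success path of this
// check: the BIN rename below requires a free destination, so moving
// onto an existing directory returns fs.ErrorDirExists before any
// move request is sent to the server.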
case nil: return fs.ErrorDirExists default: return err } return f.moveItemBin(ctx, srcPath, dstPath, "directory move") } // PublicLink generates a public link to the remote path (usually readable by anyone) func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) { // fs.Debugf(f, ">>> PublicLink %q", remote) token, err := f.accessToken() if err != nil { return "", err } data := url.Values{} data.Set("home", f.opt.Enc.FromStandardPath(f.absPath(remote))) data.Set("email", f.opt.Username) data.Set("x-email", f.opt.Username) opts := rest.Opts{ Method: "POST", Path: "/api/m1/file/publish", Parameters: url.Values{ "access_token": {token}, }, Body: strings.NewReader(data.Encode()), ContentType: api.BinContentType, } var response api.GenericBodyResponse err = f.pacer.Call(func() (bool, error) { res, err := f.srv.CallJSON(ctx, &opts, nil, &response) return shouldRetry(res, err, f, &opts) }) if err == nil && response.Body != "" { return api.PublicLinkURL + response.Body, nil } if err == nil { return "", errors.New("server returned empty link") } if apiErr, ok := err.(*api.FileErrorResponse); ok && apiErr.Status == 404 { return "", fs.ErrorObjectNotFound } return "", err } // CleanUp permanently deletes all trashed files/folders func (f *Fs) CleanUp(ctx context.Context) error { // fs.Debugf(f, ">>> CleanUp") token, err := f.accessToken() if err != nil { return err } data := url.Values{ "email": {f.opt.Username}, "x-email": {f.opt.Username}, } opts := rest.Opts{ Method: "POST", Path: "/api/m1/trashbin/empty", Parameters: url.Values{ "access_token": {token}, }, Body: strings.NewReader(data.Encode()), ContentType: api.BinContentType, } var response api.CleanupResponse err = f.pacer.Call(func() (bool, error) { res, err := f.srv.CallJSON(ctx, &opts, nil, &response) return shouldRetry(res, err, f, &opts) }) if err != nil { return err } switch response.StatusStr { case "200": return nil default: return fmt.Errorf("cleanup failed (%s)", response.StatusStr) } } // About gets quota information func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { // fs.Debugf(f, ">>> About") token, err := f.accessToken() if err != nil { return nil, err } opts := rest.Opts{ Method: "GET", Path: "/api/m1/user", Parameters: url.Values{ "access_token": {token}, }, } var info api.UserInfoResponse err = f.pacer.Call(func() (bool, error) { res, err := f.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(res, err, f, &opts) }) if err != nil { return nil, err } total := info.Body.Cloud.Space.BytesTotal used := int64(info.Body.Cloud.Space.BytesUsed) usage := &fs.Usage{ Total: fs.NewUsageValue(total), Used: fs.NewUsageValue(used), Free: fs.NewUsageValue(total - used), } return usage, nil } // Put the object // Copy the reader in to the new object which is returned // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { o := &Object{ fs: f, remote: src.Remote(), size: src.Size(), modTime: src.ModTime(ctx), } // fs.Debugf(f, ">>> Put: %q %d '%v'", o.remote, o.size, o.modTime) return o, o.Update(ctx, in, src, options...) 
} // Update an existing object // Copy the reader into the object updating modTime and size // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { wrapIn := in size := src.Size() if size < 0 { return errors.New("mailru does not support streaming uploads") } err := o.fs.mkParentDirs(ctx, o.absPath()) if err != nil { return err } var ( fileBuf []byte fileHash []byte newHash []byte trySpeedup bool ) // Don't disturb the source if file fits in hash. // Skip an extra speedup request if file fits in hash. if size > mrhash.Size { // Request hash from source. if srcHash, err := src.Hash(ctx, MrHashType); err == nil && srcHash != "" { fileHash, _ = mrhash.DecodeString(srcHash) } // Try speedup if it's globally enabled and source hash is available. trySpeedup = o.fs.opt.SpeedupEnable if trySpeedup && fileHash != nil { if o.putByHash(ctx, fileHash, src, "source") { return nil } trySpeedup = false // speedup failed, force upload } } // Need to calculate hash, check whether file is still eligible for speedup if trySpeedup { trySpeedup = o.fs.eligibleForSpeedup(o.Remote(), size, options...) } // Attempt to put by calculating hash in memory if trySpeedup && size <= int64(o.fs.opt.SpeedupMaxMem) { //fs.Debugf(o, "attempt to put by hash from memory") fileBuf, err = ioutil.ReadAll(in) if err != nil { return err } fileHash = mrhash.Sum(fileBuf) if o.putByHash(ctx, fileHash, src, "memory") { return nil } wrapIn = bytes.NewReader(fileBuf) trySpeedup = false // speedup failed, force upload } // Attempt to put by hash using a spool file if trySpeedup { tmpFs, err := fs.TemporaryLocalFs() if err != nil { fs.Infof(tmpFs, "Failed to create spool FS: %v", err) } else { defer func() { if err := operations.Purge(ctx, tmpFs, ""); err != nil { fs.Infof(tmpFs, "Failed to cleanup spool FS: %v", err) } }() spoolFile, mrHash, err := makeTempFile(ctx, tmpFs, wrapIn, src) if err != nil { return errors.Wrap(err, "Failed to create spool file") } if o.putByHash(ctx, mrHash, src, "spool") { // If put by hash is successful, ignore transitive error return nil } if wrapIn, err = spoolFile.Open(ctx); err != nil { return err } fileHash = mrHash } } // Upload object data if size <= mrhash.Size { // Optimize upload: skip extra request if data fits in the hash buffer. if fileBuf == nil { fileBuf, err = ioutil.ReadAll(wrapIn) } if fileHash == nil && err == nil { fileHash = mrhash.Sum(fileBuf) } newHash = fileHash } else { var hasher gohash.Hash if fileHash == nil { // Calculate hash in transit hasher = mrhash.New() wrapIn = io.TeeReader(wrapIn, hasher) } newHash, err = o.upload(ctx, wrapIn, size, options...) if fileHash == nil && err == nil { fileHash = hasher.Sum(nil) } } if err != nil { return err } if bytes.Compare(fileHash, newHash) != 0 { if o.fs.opt.CheckHash { return mrhash.ErrorInvalidHash } fs.Infof(o, "hash mismatch on upload: expected %x received %x", fileHash, newHash) } o.mrHash = newHash o.size = size o.modTime = src.ModTime(ctx) return o.addFileMetaData(ctx, true) } // eligibleForSpeedup checks whether file is eligible for speedup method (put by hash) func (f *Fs) eligibleForSpeedup(remote string, size int64, options ...fs.OpenOption) bool { if !f.opt.SpeedupEnable { return false } if size <= mrhash.Size || size < speedupMinSize || size >= int64(f.opt.SpeedupMaxDisk) { return false } _, _, partial := getTransferRange(size, options...) 
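// The checks below summarise speedup eligibility; a sketch of the
// outcomes (sizes illustrative, assuming *.mkv appears in the
// speedup_file_patterns list, as it does in the documented default):
//
//	f.eligibleForSpeedup("a/movie.mkv", 10<<20) // true: whole-file transfer, matching name
//	f.eligibleForSpeedup("a/movie.mkv", 10)     // false: payload fits into the hash itself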
if partial { return false } if f.speedupAny { return true } if f.speedupGlobs == nil { return false } nameLower := strings.ToLower(strings.TrimSpace(path.Base(remote))) for _, pattern := range f.speedupGlobs { if matches, _ := filepath.Match(pattern, nameLower); matches { return true } } return false } // parseSpeedupPatterns converts pattern string into list of unique glob patterns func (f *Fs) parseSpeedupPatterns(patternString string) (err error) { f.speedupGlobs = nil f.speedupAny = false uniqueValidPatterns := make(map[string]interface{}) for _, pattern := range strings.Split(patternString, ",") { pattern = strings.ToLower(strings.TrimSpace(pattern)) if pattern == "" { continue } if pattern == "*" { f.speedupAny = true } if _, err := filepath.Match(pattern, ""); err != nil { return fmt.Errorf("invalid file name pattern %q", pattern) } uniqueValidPatterns[pattern] = nil } for pattern := range uniqueValidPatterns { f.speedupGlobs = append(f.speedupGlobs, pattern) } return nil } func (o *Object) putByHash(ctx context.Context, mrHash []byte, info fs.ObjectInfo, method string) bool { oNew := new(Object) *oNew = *o oNew.mrHash = mrHash oNew.size = info.Size() oNew.modTime = info.ModTime(ctx) if err := oNew.addFileMetaData(ctx, true); err != nil { fs.Debugf(o, "Cannot put by hash from %s, performing upload", method) return false } *o = *oNew fs.Debugf(o, "File has been put by hash from %s", method) return true } func makeTempFile(ctx context.Context, tmpFs fs.Fs, wrapIn io.Reader, src fs.ObjectInfo) (spoolFile fs.Object, mrHash []byte, err error) { // Local temporary file system must support SHA1 hashType := hash.SHA1 // Calculate Mailru and spool verification hashes in transit hashSet := hash.NewHashSet(MrHashType, hashType) hasher, err := hash.NewMultiHasherTypes(hashSet) if err != nil { return nil, nil, err } wrapIn = io.TeeReader(wrapIn, hasher) // Copy stream into spool file tmpInfo := object.NewStaticObjectInfo(src.Remote(), src.ModTime(ctx), src.Size(), false, nil, nil) hashOption := &fs.HashesOption{Hashes: hashSet} if spoolFile, err = tmpFs.Put(ctx, wrapIn, tmpInfo, hashOption); err != nil { return nil, nil, err } // Validate spool file sums := hasher.Sums() checkSum := sums[hashType] fileSum, err := spoolFile.Hash(ctx, hashType) if spoolFile.Size() != src.Size() || err != nil || checkSum == "" || fileSum != checkSum { return nil, nil, mrhash.ErrorInvalidHash } mrHash, err = mrhash.DecodeString(sums[MrHashType]) return } func (o *Object) upload(ctx context.Context, in io.Reader, size int64, options ...fs.OpenOption) ([]byte, error) { token, err := o.fs.accessToken() if err != nil { return nil, err } shardURL, err := o.fs.uploadShard(ctx) if err != nil { return nil, err } opts := rest.Opts{ Method: "PUT", RootURL: shardURL, Body: in, Options: options, ContentLength: &size, Parameters: url.Values{ "client_id": {api.OAuthClientID}, "token": {token}, }, ExtraHeaders: map[string]string{ "Accept": "*/*", }, } var ( res *http.Response strHash string ) err = o.fs.pacer.Call(func() (bool, error) { res, err = o.fs.srv.Call(ctx, &opts) if err == nil { strHash, err = readBodyWord(res) } return fserrors.ShouldRetry(err), err }) if err != nil { closeBody(res) return nil, err } switch res.StatusCode { case 200, 201: return mrhash.DecodeString(strHash) default: return nil, fmt.Errorf("upload failed with code %s (%d)", res.Status, res.StatusCode) } } func (f *Fs) uploadShard(ctx context.Context) (string, error) { f.shardMu.Lock() defer f.shardMu.Unlock() if f.shardURL != "" && 
time.Now().Before(f.shardExpiry) { return f.shardURL, nil } opts := rest.Opts{ RootURL: api.DispatchServerURL, Method: "GET", Path: "/u", } var ( res *http.Response url string err error ) err = f.pacer.Call(func() (bool, error) { res, err = f.srv.Call(ctx, &opts) if err == nil { url, err = readBodyWord(res) } return fserrors.ShouldRetry(err), err }) if err != nil { closeBody(res) return "", err } f.shardURL = url f.shardExpiry = time.Now().Add(shardExpirySec * time.Second) fs.Debugf(f, "new upload shard: %s", f.shardURL) return f.shardURL, nil } // Object describes a mailru object type Object struct { fs *Fs // what this object is part of remote string // The remote path hasMetaData bool // whether info below has been set size int64 // Bytes in the object modTime time.Time // Modified time of the object mrHash []byte // Mail.ru flavored SHA1 hash of the object } // NewObject finds an Object at the remote. // If object can't be found it fails with fs.ErrorObjectNotFound func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { // fs.Debugf(f, ">>> NewObject %q", remote) o := &Object{ fs: f, remote: remote, } err := o.readMetaData(ctx, true) if err != nil { return nil, err } return o, nil } // absPath converts root-relative remote to absolute home path func (o *Object) absPath() string { return o.fs.absPath(o.remote) } // Object.readMetaData reads and fills a file info // If object can't be found it fails with fs.ErrorObjectNotFound func (o *Object) readMetaData(ctx context.Context, force bool) error { if o.hasMetaData && !force { return nil } entry, dirSize, err := o.fs.readItemMetaData(ctx, o.absPath()) if err != nil { return err } newObj, ok := entry.(*Object) if !ok || dirSize >= 0 { return fs.ErrorNotAFile } if newObj.remote != o.remote { return fmt.Errorf("File %q path has changed to %q", o.remote, newObj.remote) } o.hasMetaData = true o.size = newObj.size o.modTime = newObj.modTime o.mrHash = newObj.mrHash return nil } // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } //return fmt.Sprintf("[%s]%q", o.fs.root, o.remote) return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // ModTime returns the modification time of the object // It attempts to read the objects mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx, false) if err != nil { fs.Errorf(o, "%v", err) } return o.modTime } // Size returns the size of an object in bytes func (o *Object) Size() int64 { ctx := context.Background() // Note: Object.Size does not pass context! 
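// (editor's illustrative note) metadata is loaded lazily: NewObject
// fetches it eagerly, after which Size and ModTime are served from the
// cached fields:
//
//	obj, _ := f.NewObject(ctx, "dir/file.bin") // network round trip
//	_ = obj.Size()                             // cached, no request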
err := o.readMetaData(ctx, false) if err != nil { fs.Errorf(o, "%v", err) } return o.size } // Hash returns the MD5 or SHA1 sum of an object // returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t == MrHashType { return hex.EncodeToString(o.mrHash), nil } return "", hash.ErrUnsupported } // Storable returns whether this object is storable func (o *Object) Storable() bool { return true } // SetModTime sets the modification time of the local fs object // // Commits the datastore func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { // fs.Debugf(o, ">>> SetModTime [%v]", modTime) o.modTime = modTime return o.addFileMetaData(ctx, true) } func (o *Object) addFileMetaData(ctx context.Context, overwrite bool) error { if len(o.mrHash) != mrhash.Size { return mrhash.ErrorInvalidHash } token, err := o.fs.accessToken() if err != nil { return err } metaURL, err := o.fs.metaServer(ctx) if err != nil { return err } req := api.NewBinWriter() req.WritePu16(api.OperationAddFile) req.WritePu16(0) // revision req.WriteString(o.fs.opt.Enc.FromStandardPath(o.absPath())) req.WritePu64(o.size) req.WritePu64(o.modTime.Unix()) req.WritePu32(0) req.Write(o.mrHash) if overwrite { // overwrite req.WritePu32(1) } else { // don't add if not changed, add with rename if changed req.WritePu32(55) req.Write(o.mrHash) req.WritePu64(o.size) } opts := rest.Opts{ Method: "POST", RootURL: metaURL, Parameters: url.Values{ "client_id": {api.OAuthClientID}, "token": {token}, }, ContentType: api.BinContentType, Body: req.Reader(), } var res *http.Response err = o.fs.pacer.Call(func() (bool, error) { res, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(res, err, o.fs, &opts) }) if err != nil { closeBody(res) return err } reply := api.NewBinReader(res.Body) defer closeBody(res) switch status := reply.ReadByteAsInt(); status { case api.AddResultOK, api.AddResultNotModified, api.AddResultDunno04, api.AddResultDunno09: return nil case api.AddResultInvalidName: return ErrorInvalidName default: return fmt.Errorf("add file error %d", status) } } // Remove an object func (o *Object) Remove(ctx context.Context) error { // fs.Debugf(o, ">>> Remove") return o.fs.delete(ctx, o.absPath(), false) } // getTransferRange detects partial transfers and calculates start/end offsets into file func getTransferRange(size int64, options ...fs.OpenOption) (start int64, end int64, partial bool) { var offset, limit int64 = 0, -1 for _, option := range options { switch opt := option.(type) { case *fs.SeekOption: offset = opt.Offset case *fs.RangeOption: offset, limit = opt.Decode(size) default: if option.Mandatory() { fs.Errorf(nil, "Unsupported mandatory option: %v", option) } } } if limit < 0 { limit = size - offset } end = offset + limit if end > size { end = size } partial = !(offset == 0 && end == size) return offset, end, partial } // Open an object for read and download its content func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { // fs.Debugf(o, ">>> Open") token, err := o.fs.accessToken() if err != nil { return nil, err } start, end, partialRequest := getTransferRange(o.size, options...) 
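// (editor's worked example) getTransferRange above converts rclone's
// open options into half-open offsets: fs.RangeOption{Start: 100,
// End: 199} on a 1000 byte file yields start=100, end=200,
// partial=true, which is sent below as "Range: bytes=100-199".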
headers := map[string]string{ "Accept": "*/*", "Content-Type": "application/octet-stream", } if partialRequest { rangeStr := fmt.Sprintf("bytes=%d-%d", start, end-1) headers["Range"] = rangeStr // headers["Content-Range"] = rangeStr headers["Accept-Ranges"] = "bytes" } // TODO: set custom timeouts opts := rest.Opts{ Method: "GET", Options: options, Path: url.PathEscape(strings.TrimLeft(o.fs.opt.Enc.FromStandardPath(o.absPath()), "/")), Parameters: url.Values{ "client_id": {api.OAuthClientID}, "token": {token}, }, ExtraHeaders: headers, } var res *http.Response server := "" err = o.fs.pacer.Call(func() (bool, error) { server, err = o.fs.fileServers.Dispatch(ctx, server) if err != nil { return false, err } opts.RootURL = server res, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(res, err, o.fs, &opts) }) if err != nil { if res != nil && res.Body != nil { closeBody(res) } return nil, err } // Server should respond with Status 206 and Content-Range header to a range // request. Status 200 (and no Content-Range) means a full-content response. partialResponse := res.StatusCode == 206 var ( hasher gohash.Hash wrapStream io.ReadCloser ) if !partialResponse { // Cannot check hash of partial download hasher = mrhash.New() } wrapStream = &endHandler{ ctx: ctx, stream: res.Body, hasher: hasher, o: o, server: server, } if partialRequest && !partialResponse { fs.Debugf(o, "Server returned full content instead of range") if start > 0 { // Discard the beginning of the data _, err = io.CopyN(ioutil.Discard, wrapStream, start) if err != nil { return nil, err } } wrapStream = readers.NewLimitedReadCloser(wrapStream, end-start) } return wrapStream, nil } type endHandler struct { ctx context.Context stream io.ReadCloser hasher gohash.Hash o *Object server string done bool } func (e *endHandler) Read(p []byte) (n int, err error) { n, err = e.stream.Read(p) if e.hasher != nil { // hasher will not return an error, just panic _, _ = e.hasher.Write(p[:n]) } if err != nil { // I/O error or EOF err = e.handle(err) } return } func (e *endHandler) Close() error { _ = e.handle(nil) // ignore returned error return e.stream.Close() } func (e *endHandler) handle(err error) error { if e.done { return err } e.done = true o := e.o o.fs.fileServers.Free(e.server) if err != io.EOF || e.hasher == nil { return err } newHash := e.hasher.Sum(nil) if bytes.Equal(o.mrHash, newHash) { return io.EOF } if o.fs.opt.CheckHash { return mrhash.ErrorInvalidHash } fs.Infof(o, "hash mismatch on download: expected %x received %x", o.mrHash, newHash) return io.EOF } // serverPool backs the server dispatcher type serverPool struct { pool pendingServerMap mu sync.Mutex path string expirySec time.Duration fs *Fs } type pendingServerMap map[string]*pendingServer type pendingServer struct { locks int expiry time.Time } // Dispatch dispatches the next download server. // It prefers switching and tries to avoid the current server // in use by the caller because it may be overloaded or slow. func (p *serverPool) Dispatch(ctx context.Context, current string) (string, error) { now := time.Now() url := p.getServer(current, now) if url != "" { return url, nil } // Server not found - ask Mailru dispatcher.
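// (editor's sketch of the lock accounting implemented here) each
// successful Dispatch takes one lock on the returned URL until Free
// releases it, after which expired idle entries are purged:
//
//	server, err := p.Dispatch(ctx, "") // locks++ on a pooled or fresh URL
//	// ... use server ...
//	p.Free(server)                     // locks--, purge once unlocked and expired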
opts := rest.Opts{ Method: "GET", RootURL: api.DispatchServerURL, Path: p.path, } var ( res *http.Response err error ) err = p.fs.pacer.Call(func() (bool, error) { res, err = p.fs.srv.Call(ctx, &opts) if err != nil { return fserrors.ShouldRetry(err), err } url, err = readBodyWord(res) return fserrors.ShouldRetry(err), err }) if err != nil || url == "" { closeBody(res) return "", errors.Wrap(err, "Failed to request file server") } p.addServer(url, now) return url, nil } func (p *serverPool) Free(url string) { if url == "" { return } p.mu.Lock() defer p.mu.Unlock() srv := p.pool[url] if srv == nil { return } if srv.locks <= 0 { // Getting here indicates possible race fs.Infof(p.fs, "Purge file server: locks -, url %s", url) delete(p.pool, url) return } srv.locks-- if srv.locks == 0 && time.Now().After(srv.expiry) { delete(p.pool, url) fs.Debugf(p.fs, "Free file server: locks 0, url %s", url) return } fs.Debugf(p.fs, "Unlock file server: locks %d, url %s", srv.locks, url) } // Find an underlocked server func (p *serverPool) getServer(current string, now time.Time) string { p.mu.Lock() defer p.mu.Unlock() for url, srv := range p.pool { if url == "" || srv.locks < 0 { continue // Purged server slot } if url == current { continue // Current server - prefer another } if srv.locks >= maxServerLocks { continue // Overlocked server } if now.After(srv.expiry) { continue // Expired server } srv.locks++ fs.Debugf(p.fs, "Lock file server: locks %d, url %s", srv.locks, url) return url } return "" } func (p *serverPool) addServer(url string, now time.Time) { p.mu.Lock() defer p.mu.Unlock() expiry := now.Add(p.expirySec * time.Second) expiryStr := []byte("-") if fs.Config.LogLevel >= fs.LogLevelInfo { expiryStr, _ = expiry.MarshalJSON() } // Attach to a server proposed by dispatcher srv := p.pool[url] if srv != nil { srv.locks++ srv.expiry = expiry fs.Debugf(p.fs, "Reuse file server: locks %d, url %s, expiry %s", srv.locks, url, expiryStr) return } // Add new server p.pool[url] = &pendingServer{locks: 1, expiry: expiry} fs.Debugf(p.fs, "Switch file server: locks 1, url %s, expiry %s", url, expiryStr) } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("[%s]", f.root) } // Precision return the precision of this Fs func (f *Fs) Precision() time.Duration { return time.Second } // Hashes returns the supported hash sets func (f *Fs) Hashes() hash.Set { return hash.Set(MrHashType) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // close response body ignoring errors func closeBody(res *http.Response) { if res != nil { _ = res.Body.Close() } } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Object = (*Object)(nil) ) rclone-1.53.3/backend/mailru/mailru_test.go000066400000000000000000000006551375552240400206410ustar00rootroot00000000000000// Test Mailru filesystem interface package mailru_test import ( "testing" "github.com/rclone/rclone/backend/mailru" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ 
RemoteName: "TestMailru:", NilObject: (*mailru.Object)(nil), SkipBadWindowsCharacters: true, }) } rclone-1.53.3/backend/mailru/mrhash/000077500000000000000000000000001375552240400172365ustar00rootroot00000000000000rclone-1.53.3/backend/mailru/mrhash/mrhash.go000066400000000000000000000064051375552240400210540ustar00rootroot00000000000000// Package mrhash implements the mailru hash, which is a modified SHA1. // If file size is less than or equal to the SHA1 block size (20 bytes), // its hash is simply its data right-padded with zero bytes. // Hash sum of a larger file is computed as a SHA1 sum of the file data // bytes concatenated with a decimal representation of the data length. package mrhash import ( "crypto/sha1" "encoding" "encoding/hex" "errors" "hash" "strconv" ) const ( // BlockSize of the checksum in bytes. BlockSize = sha1.BlockSize // Size of the checksum in bytes. Size = sha1.Size startString = "mrCloud" hashError = "hash function returned error" ) // Global errors var ( ErrorInvalidHash = errors.New("invalid hash") ) type digest struct { total int // bytes written into hash so far sha hash.Hash // underlying SHA1 small []byte // small content } // New returns a new hash.Hash computing the Mailru checksum. func New() hash.Hash { d := &digest{} d.Reset() return d } // Write writes len(p) bytes from p to the underlying data stream. It returns // the number of bytes written from p (0 <= n <= len(p)) and any error // encountered that caused the write to stop early. Write must return a non-nil // error if it returns n < len(p). Write must not modify the slice data, even // temporarily. // // Implementations must not retain p. func (d *digest) Write(p []byte) (n int, err error) { n, err = d.sha.Write(p) if err != nil { panic(hashError) } d.total += n if d.total <= Size { d.small = append(d.small, p...) } return n, nil } // Sum appends the current hash to b and returns the resulting slice. // It does not change the underlying hash state. func (d *digest) Sum(b []byte) []byte { // If content is small, return it padded to Size if d.total <= Size { padded := make([]byte, Size) copy(padded, d.small) return append(b, padded...) } endString := strconv.Itoa(d.total) copy, err := cloneSHA1(d.sha) if err == nil { _, err = copy.Write([]byte(endString)) } if err != nil { panic(hashError) } return copy.Sum(b) } // cloneSHA1 clones state of SHA1 hash func cloneSHA1(orig hash.Hash) (clone hash.Hash, err error) { state, err := orig.(encoding.BinaryMarshaler).MarshalBinary() if err != nil { return nil, err } clone = sha1.New() err = clone.(encoding.BinaryUnmarshaler).UnmarshalBinary(state) return } // Reset resets the Hash to its initial state. func (d *digest) Reset() { d.sha = sha1.New() _, _ = d.sha.Write([]byte(startString)) d.total = 0 } // Size returns the number of bytes Sum will return. func (d *digest) Size() int { return Size } // BlockSize returns the hash's underlying block size. // The Write method must be able to accept any amount // of data, but it may operate more efficiently if all writes // are a multiple of the block size. func (d *digest) BlockSize() int { return BlockSize } // Sum returns the Mailru checksum of the data. 
func Sum(data []byte) []byte { var d digest d.Reset() _, _ = d.Write(data) return d.Sum(nil) } // DecodeString converts a string to the Mailru hash func DecodeString(s string) ([]byte, error) { b, err := hex.DecodeString(s) if err != nil || len(b) != Size { return nil, ErrorInvalidHash } return b, nil } // must implement this interface var ( _ hash.Hash = (*digest)(nil) ) rclone-1.53.3/backend/mailru/mrhash/mrhash_test.go000066400000000000000000000051361375552240400221130ustar00rootroot00000000000000package mrhash_test import ( "encoding/hex" "fmt" "testing" "github.com/rclone/rclone/backend/mailru/mrhash" "github.com/stretchr/testify/assert" ) func testChunk(t *testing.T, chunk int) { data := make([]byte, chunk) for i := 0; i < chunk; i++ { data[i] = 'A' } for _, test := range []struct { n int want string }{ {0, "0000000000000000000000000000000000000000"}, {1, "4100000000000000000000000000000000000000"}, {2, "4141000000000000000000000000000000000000"}, {19, "4141414141414141414141414141414141414100"}, {20, "4141414141414141414141414141414141414141"}, {21, "eb1d05e78a18691a5aa196a6c2b60cd40b5faafb"}, {22, "037e6d960601118a0639afbeff30fe716c66ed2d"}, {4096, "45a16aa192502b010280fb5b44274c601a91fd9f"}, {4194303, "fa019d5bd26498cf6abe35e0d61801bf19bf704b"}, {4194304, "5ed0e07aa6ea5c1beb9402b4d807258f27d40773"}, {4194305, "67bd0b9247db92e0e7d7e29a0947a50fedcb5452"}, {8388607, "41a8e2eb044c2e242971b5445d7be2a13fc0dd84"}, {8388608, "267a970917c624c11fe624276ec60233a66dc2c0"}, {8388609, "37b60b308d553d2732aefb62b3ea88f74acfa13f"}, } { d := mrhash.New() var toWrite int for toWrite = test.n; toWrite >= chunk; toWrite -= chunk { n, err := d.Write(data) assert.Nil(t, err) assert.Equal(t, chunk, n) } n, err := d.Write(data[:toWrite]) assert.Nil(t, err) assert.Equal(t, toWrite, n) got1 := hex.EncodeToString(d.Sum(nil)) assert.Equal(t, test.want, got1, fmt.Sprintf("when testing length %d", n)) got2 := hex.EncodeToString(d.Sum(nil)) assert.Equal(t, test.want, got2, fmt.Sprintf("when testing length %d (2nd sum)", n)) } } func TestHashChunk16M(t *testing.T) { testChunk(t, 16*1024*1024) } func TestHashChunk8M(t *testing.T) { testChunk(t, 8*1024*1024) } func TestHashChunk4M(t *testing.T) { testChunk(t, 4*1024*1024) } func TestHashChunk2M(t *testing.T) { testChunk(t, 2*1024*1024) } func TestHashChunk1M(t *testing.T) { testChunk(t, 1*1024*1024) } func TestHashChunk64k(t *testing.T) { testChunk(t, 64*1024) } func TestHashChunk32k(t *testing.T) { testChunk(t, 32*1024) } func TestHashChunk2048(t *testing.T) { testChunk(t, 2048) } func TestHashChunk2047(t *testing.T) { testChunk(t, 2047) } func TestSumCalledTwice(t *testing.T) { d := mrhash.New() assert.NotPanics(t, func() { d.Sum(nil) }) d.Reset() assert.NotPanics(t, func() { d.Sum(nil) }) assert.NotPanics(t, func() { d.Sum(nil) }) _, _ = d.Write([]byte{1}) assert.NotPanics(t, func() { d.Sum(nil) }) } func TestSize(t *testing.T) { d := mrhash.New() assert.Equal(t, 20, d.Size()) } func TestBlockSize(t *testing.T) { d := mrhash.New() assert.Equal(t, 64, d.BlockSize()) } rclone-1.53.3/backend/mega/000077500000000000000000000000001375552240400153745ustar00rootroot00000000000000rclone-1.53.3/backend/mega/mega.go000066400000000000000000000774521375552240400166530ustar00rootroot00000000000000// Package mega provides an interface to the Mega // object storage system. package mega /* Open questions * Does mega support a content hash - what exactly are the mega hashes? * Can mega support setting modification times? 
Improvements: * Uploads could be done in parallel * Downloads would be more efficient done in one go * Uploads would be more efficient with bigger chunks * Looks like mega can support server side copy, but it isn't implemented in go-mega * Upload can set modtime... - set as int64_t - can set ctime and mtime? */ import ( "context" "fmt" "io" "path" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" mega "github.com/t3rm1n4l/go-mega" ) const ( minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second eventWaitTime = 500 * time.Millisecond decayConstant = 2 // bigger for slower decay, exponential ) var ( megaCacheMu sync.Mutex // mutex for the below megaCache = map[string]*mega.Mega{} // cache logged in Mega's by user ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "mega", Description: "Mega", NewFs: NewFs, Options: []fs.Option{{ Name: "user", Help: "User name", Required: true, }, { Name: "pass", Help: "Password.", Required: true, IsPassword: true, }, { Name: "debug", Help: `Output more debug from Mega. If this flag is set (along with -vv) it will print further debugging information from the mega backend.`, Default: false, Advanced: true, }, { Name: "hard_delete", Help: `Delete files permanently rather than putting them into the trash. Normally the mega backend will put all deletions into the trash rather than permanently deleting them. If you specify this then rclone will permanently delete objects instead.`, Default: false, Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Encode invalid UTF-8 bytes as json doesn't handle them properly. Default: (encoder.Base | encoder.EncodeInvalidUtf8), }}, }) } // Options defines the configuration for this backend type Options struct { User string `config:"user"` Pass string `config:"pass"` Debug bool `config:"debug"` HardDelete bool `config:"hard_delete"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote mega type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed config options features *fs.Features // optional features srv *mega.Mega // the connection to the server pacer *fs.Pacer // pacer for API calls rootNodeMu sync.Mutex // mutex for _rootNode _rootNode *mega.Node // root node - call findRoot to use this mkdirMu sync.Mutex // used to serialize calls to mkdir / rmdir } // Object describes a mega object // // Will definitely have info but maybe not meta // // Normally rclone would just store an ID here but go-mega and mega.nz // expect you to build an entire tree of all the objects in memory. // In this case we just store a pointer to the object. 
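// (editor's note) a practical consequence of this design: all Objects
// obtained through the same cached *mega.Mega share one in-memory
// tree, so a server side Move through one handle is reflected in the
// node seen by every other handle to that file.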
type Object struct { fs *Fs // what this object is part of remote string // The remote path info *mega.Node // pointer to the mega node } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("mega root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a mega 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // shouldRetry returns a boolean as to whether this err deserves to be // retried. It returns the err as a convenience func shouldRetry(err error) (bool, error) { // Let the mega library handle the low level retries return false, err /* switch errors.Cause(err) { case mega.EAGAIN, mega.ERATELIMIT, mega.ETEMPUNAVAIL: return true, err } return fserrors.ShouldRetry(err), err */ } // readMetaDataForPath reads the metadata from the path func (f *Fs) readMetaDataForPath(remote string) (info *mega.Node, err error) { rootNode, err := f.findRoot(false) if err != nil { return nil, err } return f.findObject(rootNode, remote) } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } if opt.Pass != "" { var err error opt.Pass, err = obscure.Reveal(opt.Pass) if err != nil { return nil, errors.Wrap(err, "couldn't decrypt password") } } // cache *mega.Mega on username so we can re-use and share // them between remotes. They are expensive to make as they // contain all the objects and sharing the objects makes the // move code easier as we don't have to worry about mixing // them up between different remotes. megaCacheMu.Lock() defer megaCacheMu.Unlock() srv := megaCache[opt.User] if srv == nil { srv = mega.New().SetClient(fshttp.NewClient(fs.Config)) srv.SetRetries(fs.Config.LowLevelRetries) // let mega do the low level retries srv.SetLogger(func(format string, v ...interface{}) { fs.Infof("*go-mega*", format, v...) }) if opt.Debug { srv.SetDebugger(func(format string, v ...interface{}) { fs.Debugf("*go-mega*", format, v...) }) } err := srv.Login(opt.User, opt.Pass) if err != nil { return nil, errors.Wrap(err, "couldn't login") } megaCache[opt.User] = srv } root = parsePath(root) f := &Fs{ name: name, root: root, opt: *opt, srv: srv, pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), } f.features = (&fs.Features{ DuplicateFiles: true, CanHaveEmptyDirectories: true, }).Fill(f) // Find the root node and check if it is a file or not _, err = f.findRoot(false) switch err { case nil: // root node found and is a directory case fs.ErrorDirNotFound: // root node not found, so can't be a file case fs.ErrorIsFile: // root node is a file so point to parent directory root = path.Dir(root) if root == "." { root = "" } f.root = root return f, err } return f, nil } // splitNodePath splits nodePath into / separated parts, returning nil if it // should refer to the root. // It also encodes the parts into backend specific encoding func (f *Fs) splitNodePath(nodePath string) (parts []string) { nodePath = path.Clean(nodePath) if nodePath == "." 
|| nodePath == "/" { return nil } nodePath = f.opt.Enc.FromStandardPath(nodePath) return strings.Split(nodePath, "/") } // findNode looks up the node for the path of the name given from the root given // // It returns mega.ENOENT if it wasn't found func (f *Fs) findNode(rootNode *mega.Node, nodePath string) (*mega.Node, error) { parts := f.splitNodePath(nodePath) if parts == nil { return rootNode, nil } nodes, err := f.srv.FS.PathLookup(rootNode, parts) if err != nil { return nil, err } return nodes[len(nodes)-1], nil } // findDir finds the directory rooted from the node passed in func (f *Fs) findDir(rootNode *mega.Node, dir string) (node *mega.Node, err error) { node, err = f.findNode(rootNode, dir) if err == mega.ENOENT { return nil, fs.ErrorDirNotFound } else if err == nil && node.GetType() == mega.FILE { return nil, fs.ErrorIsFile } return node, err } // findObject looks up the node for the object of the name given func (f *Fs) findObject(rootNode *mega.Node, file string) (node *mega.Node, err error) { node, err = f.findNode(rootNode, file) if err == mega.ENOENT { return nil, fs.ErrorObjectNotFound } else if err == nil && node.GetType() != mega.FILE { return nil, fs.ErrorNotAFile } return node, err } // lookupDir looks up the node for the directory of the name given // // if create is true it tries to create the root directory if not found func (f *Fs) lookupDir(dir string) (*mega.Node, error) { rootNode, err := f.findRoot(false) if err != nil { return nil, err } return f.findDir(rootNode, dir) } // lookupParentDir finds the parent node for the remote passed in func (f *Fs) lookupParentDir(remote string) (dirNode *mega.Node, leaf string, err error) { parent, leaf := path.Split(remote) dirNode, err = f.lookupDir(parent) return dirNode, leaf, err } // mkdir makes the directory and any parent directories for the // directory of the name given func (f *Fs) mkdir(rootNode *mega.Node, dir string) (node *mega.Node, err error) { f.mkdirMu.Lock() defer f.mkdirMu.Unlock() parts := f.splitNodePath(dir) if parts == nil { return rootNode, nil } var i int // look up until we find a directory which exists for i = 0; i <= len(parts); i++ { var nodes []*mega.Node nodes, err = f.srv.FS.PathLookup(rootNode, parts[:len(parts)-i]) if err == nil { if len(nodes) == 0 { node = rootNode } else { node = nodes[len(nodes)-1] } break } if err != mega.ENOENT { return nil, errors.Wrap(err, "mkdir lookup failed") } } if err != nil { return nil, errors.Wrap(err, "internal error: mkdir called with non existent root node") } // i is number of directories to create (may be 0) // node is directory to create them from for _, name := range parts[len(parts)-i:] { // create directory called name in node err = f.pacer.Call(func() (bool, error) { node, err = f.srv.CreateDir(name, node) return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "mkdir create node failed") } } return node, nil } // mkdirParent creates the parent directory of remote func (f *Fs) mkdirParent(remote string) (dirNode *mega.Node, leaf string, err error) { rootNode, err := f.findRoot(true) if err != nil { return nil, "", err } parent, leaf := path.Split(remote) dirNode, err = f.mkdir(rootNode, parent) return dirNode, leaf, err } // findRoot looks up the root directory node and returns it. 
// // if create is true it tries to create the root directory if not found func (f *Fs) findRoot(create bool) (*mega.Node, error) { f.rootNodeMu.Lock() defer f.rootNodeMu.Unlock() // Check if we haven't found it already if f._rootNode != nil { return f._rootNode, nil } // Check for pre-existing root absRoot := f.srv.FS.GetRoot() node, err := f.findDir(absRoot, f.root) //log.Printf("findRoot findDir %p %v", node, err) if err == nil { f._rootNode = node return node, nil } if !create || err != fs.ErrorDirNotFound { return nil, err } //..not found so create the root directory f._rootNode, err = f.mkdir(absRoot, f.root) return f._rootNode, err } // clearRoot unsets the root directory func (f *Fs) clearRoot() { f.rootNodeMu.Lock() f._rootNode = nil f.rootNodeMu.Unlock() //log.Printf("cleared root directory") } // CleanUp deletes all files currently in trash func (f *Fs) CleanUp(ctx context.Context) (err error) { trash := f.srv.FS.GetTrash() items := []*mega.Node{} _, err = f.list(ctx, trash, func(item *mega.Node) bool { items = append(items, item) return false }) if err != nil { return errors.Wrap(err, "CleanUp failed to list items in trash") } fs.Infof(f, "Deleting %d items from the trash", len(items)) errors := 0 // similar to f.deleteNode(trash) but with HardDelete as true for _, item := range items { fs.Debugf(f, "Deleting trash %q", f.opt.Enc.ToStandardName(item.GetName())) deleteErr := f.pacer.Call(func() (bool, error) { err := f.srv.Delete(item, true) return shouldRetry(err) }) if deleteErr != nil { err = deleteErr errors++ } } fs.Infof(f, "Deleted %d items from the trash with %d errors", len(items), errors) return err } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(remote string, info *mega.Node) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { // Set info err = o.setMetaData(info) } else { err = o.readMetaData() // reads info and meta, returning an error } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(remote, nil) } // list the objects into the function supplied // // If directories is set it only sends directories // User function to process a File item from listAll // // Should return true to finish processing type listFn func(*mega.Node) bool // Lists the directory required calling the user function on each item found // // If the user fn ever returns true then it early exits with found = true func (f *Fs) list(ctx context.Context, dir *mega.Node, fn listFn) (found bool, err error) { nodes, err := f.srv.FS.GetChildren(dir) if err != nil { return false, errors.Wrapf(err, "list failed") } for _, item := range nodes { if fn(item) { found = true break } } return } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. 
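// A minimal usage sketch (editor's illustration; the path is hypothetical):
//
//	entries, err := f.List(ctx, "photos/2019")
//	if err == nil {
//		for _, entry := range entries {
//			fmt.Println(entry.Remote(), entry.Size())
//		}
//	}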
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { dirNode, err := f.lookupDir(dir) if err != nil { return nil, err } var iErr error _, err = f.list(ctx, dirNode, func(info *mega.Node) bool { remote := path.Join(dir, f.opt.Enc.ToStandardName(info.GetName())) switch info.GetType() { case mega.FOLDER, mega.ROOT, mega.INBOX, mega.TRASH: d := fs.NewDir(remote, info.GetTimeStamp()).SetID(info.GetHash()) entries = append(entries, d) case mega.FILE: o, err := f.newObjectWithInfo(remote, info) if err != nil { iErr = err return true } entries = append(entries, o) } return false }) if err != nil { return nil, err } if iErr != nil { return nil, iErr } return entries, nil } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Returns the dirNode, object, leaf and error // // Used to create new objects func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object, dirNode *mega.Node, leaf string, err error) { dirNode, leaf, err = f.mkdirParent(remote) if err != nil { return nil, nil, leaf, err } // Temporary Object under construction o = &Object{ fs: f, remote: remote, } return o, dirNode, leaf, nil } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned // PutUnchecked uploads the object // // This will create a duplicate if we upload a new file without // checking to see if there is one already - use Put() for that. func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { existingObj, err := f.newObjectWithInfo(src.Remote(), nil) switch err { case nil: return existingObj, existingObj.Update(ctx, in, src, options...) case fs.ErrorObjectNotFound: // Not found so create it return f.PutUnchecked(ctx, in, src) default: return nil, err } } // PutUnchecked the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned // PutUnchecked uploads the object // // This will create a duplicate if we upload a new file without // checking to see if there is one already - use Put() for that. func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { remote := src.Remote() size := src.Size() modTime := src.ModTime(ctx) o, _, _, err := f.createObject(remote, modTime, size) if err != nil { return nil, err } return o, o.Update(ctx, in, src, options...) 
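// (editor's note) mega allows duplicate names, which is why Put and
// PutUnchecked differ: Put looks the remote up first and updates it in
// place, while calling PutUnchecked twice with the same name leaves
// two files with that name in the same folder.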
} // Mkdir creates the directory if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { rootNode, err := f.findRoot(true) if err != nil { return err } _, err = f.mkdir(rootNode, dir) return errors.Wrap(err, "Mkdir failed") } // deleteNode removes a file or directory, observing useTrash func (f *Fs) deleteNode(node *mega.Node) (err error) { err = f.pacer.Call(func() (bool, error) { err = f.srv.Delete(node, f.opt.HardDelete) return shouldRetry(err) }) return err } // purgeCheck removes the directory dir. If check is set then it refuses to do so if the directory has anything in it func (f *Fs) purgeCheck(dir string, check bool) error { f.mkdirMu.Lock() defer f.mkdirMu.Unlock() rootNode, err := f.findRoot(false) if err != nil { return err } dirNode, err := f.findDir(rootNode, dir) if err != nil { return err } if check { children, err := f.srv.FS.GetChildren(dirNode) if err != nil { return errors.Wrap(err, "purgeCheck GetChildren failed") } if len(children) > 0 { return fs.ErrorDirectoryNotEmpty } } waitEvent := f.srv.WaitEventsStart() err = f.deleteNode(dirNode) if err != nil { return errors.Wrap(err, "delete directory node failed") } // Remove the root node if we just deleted it if dirNode == rootNode { f.clearRoot() } f.srv.WaitEvents(waitEvent, eventWaitTime) return nil } // Rmdir deletes the directory // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(dir, true) } // Precision returns the precision of this Fs func (f *Fs) Precision() time.Duration { return fs.ModTimeNotSupported } // Purge deletes all the files in the directory // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(dir, false) } // move a file or folder (srcFs, srcRemote, info) to (f, dstRemote) // // info will be updated func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node) (err error) { var ( dstFs = f srcDirNode, dstDirNode *mega.Node srcParent, dstParent string srcLeaf, dstLeaf string ) if dstRemote != "" { // lookup or create the destination parent directory dstDirNode, dstLeaf, err = dstFs.mkdirParent(dstRemote) } else { // find or create the parent of the root directory absRoot := dstFs.srv.FS.GetRoot() dstParent, dstLeaf = path.Split(dstFs.root) dstDirNode, err = dstFs.mkdir(absRoot, dstParent) } if err != nil { return errors.Wrap(err, "server side move failed to make dst parent dir") } if srcRemote != "" { // lookup the existing parent directory srcDirNode, srcLeaf, err = srcFs.lookupParentDir(srcRemote) } else { // lookup the existing root parent absRoot := srcFs.srv.FS.GetRoot() srcParent, srcLeaf = path.Split(srcFs.root) srcDirNode, err = f.findDir(absRoot, srcParent) } if err != nil { return errors.Wrap(err, "server side move failed to lookup src parent dir") } // move the object into its new directory if required if srcDirNode != dstDirNode && srcDirNode.GetHash() != dstDirNode.GetHash() { //log.Printf("move src %p %q dst %p %q", srcDirNode, srcDirNode.GetName(), dstDirNode, dstDirNode.GetName()) err = f.pacer.Call(func() (bool, error) { err = f.srv.Move(info, dstDirNode) return shouldRetry(err) }) if err != nil { return errors.Wrap(err, "server side move failed") } } waitEvent := f.srv.WaitEventsStart() // rename the object if required if srcLeaf != dstLeaf { //log.Printf("rename %q to %q", srcLeaf, dstLeaf) err =
f.pacer.Call(func() (bool, error) { err = f.srv.Rename(info, f.opt.Enc.FromStandardName(dstLeaf)) return shouldRetry(err) }) if err != nil { return errors.Wrap(err, "server side rename failed") } } f.srv.WaitEvents(waitEvent, eventWaitTime) return nil } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { dstFs := f //log.Printf("Move %q -> %q", src.Remote(), remote) srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } // Do the move err := f.move(remote, srcObj.fs, srcObj.remote, srcObj.info) if err != nil { return nil, err } // Create a destination object dstObj := &Object{ fs: dstFs, remote: remote, info: srcObj.info, } return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { dstFs := f srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } // find the source info, err := srcFs.lookupDir(srcRemote) if err != nil { return err } // check the destination doesn't exist _, err = dstFs.lookupDir(dstRemote) if err == nil { return fs.ErrorDirExists } else if err != fs.ErrorDirNotFound { return errors.Wrap(err, "DirMove error while checking dest directory") } // Do the move err = f.move(dstRemote, srcFs, srcRemote, info) if err != nil { return err } // Clear src if it was the root if srcRemote == "" { srcFs.clearRoot() } return nil } // DirCacheFlush an optional interface to flush internal directory cache func (f *Fs) DirCacheFlush() { // f.dirCache.ResetRoot() // FIXME Flush the mega somehow? } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.None) } // PublicLink generates a public link to the remote path (usually readable by anyone) func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) { root, err := f.findRoot(false) if err != nil { return "", errors.Wrap(err, "PublicLink failed to find root node") } node, err := f.findNode(root, remote) if err != nil { return "", errors.Wrap(err, "PublicLink failed to find path") } link, err = f.srv.Link(node, true) if err != nil { return "", errors.Wrap(err, "PublicLink failed to create link") } return link, nil } // MergeDirs merges the contents of all the directories passed // in into the first one and rmdirs the other directories. 
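// For example (directory values hypothetical), a sketch of merging three
// directories that all ended up with the same name:
//
//	dirs := []fs.Directory{photosA, photosB, photosC}
//	err := f.MergeDirs(ctx, dirs) // moves the contents of photosB and photosC into photosA, then removes them
//
// This is the optional interface "rclone dedupe" uses to merge duplicate directories.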
func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error { if len(dirs) < 2 { return nil } // find dst directory dstDir := dirs[0] dstDirNode := f.srv.FS.HashLookup(dstDir.ID()) if dstDirNode == nil { return errors.Errorf("MergeDirs failed to find node for: %v", dstDir) } for _, srcDir := range dirs[1:] { // find src directory srcDirNode := f.srv.FS.HashLookup(srcDir.ID()) if srcDirNode == nil { return errors.Errorf("MergeDirs failed to find node for: %v", srcDir) } // list the objects infos := []*mega.Node{} _, err := f.list(ctx, srcDirNode, func(info *mega.Node) bool { infos = append(infos, info) return false }) if err != nil { return errors.Wrapf(err, "MergeDirs list failed on %v", srcDir) } // move them into place for _, info := range infos { fs.Infof(srcDir, "merging %q", f.opt.Enc.ToStandardName(info.GetName())) err = f.pacer.Call(func() (bool, error) { err = f.srv.Move(info, dstDirNode) return shouldRetry(err) }) if err != nil { return errors.Wrapf(err, "MergeDirs move failed on %q in %v", f.opt.Enc.ToStandardName(info.GetName()), srcDir) } } // rmdir (into trash) the now empty source directory fs.Infof(srcDir, "removing empty directory") err = f.deleteNode(srcDirNode) if err != nil { return errors.Wrapf(err, "MergeDirs failed to rmdir %q", srcDir) } } return nil } // About gets quota information func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { var q mega.QuotaResp var err error err = f.pacer.Call(func() (bool, error) { q, err = f.srv.GetQuota() return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "failed to get Mega Quota") } usage := &fs.Usage{ Total: fs.NewUsageValue(int64(q.Mstrg)), // quota of bytes that can be used Used: fs.NewUsageValue(int64(q.Cstrg)), // bytes in use Free: fs.NewUsageValue(int64(q.Mstrg - q.Cstrg)), // bytes which can be uploaded before reaching the quota } return usage, nil } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "<nil>" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the hashes of an object func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { return "", hash.ErrUnsupported } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return o.info.GetSize() } // setMetaData sets the metadata from info func (o *Object) setMetaData(info *mega.Node) (err error) { if info.GetType() != mega.FILE { return fs.ErrorNotAFile } o.info = info return nil } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData() (err error) { if o.info != nil { return nil } info, err := o.fs.readMetaDataForPath(o.remote) if err != nil { if err == fs.ErrorDirNotFound { err = fs.ErrorObjectNotFound } return err } return o.setMetaData(info) } // ModTime returns the modification time of the object // // Mega doesn't keep a separate modification time, so this returns the // timestamp recorded on the node. func (o *Object) ModTime(ctx context.Context) time.Time { return o.info.GetTimeStamp() } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { return fs.ErrorCantSetModTime } // Storable returns a boolean showing whether this object is storable func (o *Object) Storable() bool { return 
true } // openObject represents a download in progress type openObject struct { mu sync.Mutex o *Object d *mega.Download id int skip int64 chunk []byte closed bool } // get the next chunk func (oo *openObject) getChunk() (err error) { if oo.id >= oo.d.Chunks() { return io.EOF } var chunk []byte err = oo.o.fs.pacer.Call(func() (bool, error) { chunk, err = oo.d.DownloadChunk(oo.id) return shouldRetry(err) }) if err != nil { return err } oo.id++ oo.chunk = chunk return nil } // Read reads up to len(p) bytes into p. func (oo *openObject) Read(p []byte) (n int, err error) { oo.mu.Lock() defer oo.mu.Unlock() if oo.closed { return 0, errors.New("read on closed file") } // Skip data at the start if requested for oo.skip > 0 { _, size, err := oo.d.ChunkLocation(oo.id) if err != nil { return 0, err } if oo.skip < int64(size) { break } oo.id++ oo.skip -= int64(size) } if len(oo.chunk) == 0 { err = oo.getChunk() if err != nil { return 0, err } if oo.skip > 0 { oo.chunk = oo.chunk[oo.skip:] oo.skip = 0 } } n = copy(p, oo.chunk) oo.chunk = oo.chunk[n:] return n, nil } // Close closed the file - MAC errors are reported here func (oo *openObject) Close() (err error) { oo.mu.Lock() defer oo.mu.Unlock() if oo.closed { return nil } err = oo.o.fs.pacer.Call(func() (bool, error) { err = oo.d.Finish() return shouldRetry(err) }) if err != nil { return errors.Wrap(err, "failed to finish download") } oo.closed = true return nil } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { var offset, limit int64 = 0, -1 for _, option := range options { switch x := option.(type) { case *fs.SeekOption: offset = x.Offset case *fs.RangeOption: offset, limit = x.Decode(o.Size()) default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } var d *mega.Download err = o.fs.pacer.Call(func() (bool, error) { d, err = o.fs.srv.NewDownload(o.info) return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "open download file failed") } oo := &openObject{ o: o, d: d, skip: offset, } return readers.NewLimitedReadCloser(oo, limit), nil } // Update the object with the contents of the io.Reader, modTime and size // // If existing is set then it updates the object rather than creating a new one // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { size := src.Size() if size < 0 { return errors.New("mega backend can't upload a file of unknown length") } //modTime := src.ModTime(ctx) remote := o.Remote() // Create the parent directory dirNode, leaf, err := o.fs.mkdirParent(remote) if err != nil { return errors.Wrap(err, "update make parent dir failed") } var u *mega.Upload err = o.fs.pacer.Call(func() (bool, error) { u, err = o.fs.srv.NewUpload(dirNode, o.fs.opt.Enc.FromStandardName(leaf), size) return shouldRetry(err) }) if err != nil { return errors.Wrap(err, "upload file failed to create session") } // Upload the chunks // FIXME do this in parallel for id := 0; id < u.Chunks(); id++ { _, chunkSize, err := u.ChunkLocation(id) if err != nil { return errors.Wrap(err, "upload failed to read chunk location") } chunk := make([]byte, chunkSize) _, err = io.ReadFull(in, chunk) if err != nil { return errors.Wrap(err, "upload failed to read data") } err = o.fs.pacer.Call(func() (bool, error) { err = u.UploadChunk(id, chunk) return shouldRetry(err) }) if err != nil { return errors.Wrap(err, "upload 
file failed to upload chunk") } } // Finish the upload var info *mega.Node err = o.fs.pacer.Call(func() (bool, error) { info, err = u.Finish() return shouldRetry(err) }) if err != nil { return errors.Wrap(err, "failed to finish upload") } // If the upload succeeded and the original object existed, then delete it if o.info != nil { err = o.fs.deleteNode(o.info) if err != nil { return errors.Wrap(err, "upload failed to remove old version") } o.info = nil } return o.setMetaData(info) } // Remove an object func (o *Object) Remove(ctx context.Context) error { err := o.fs.deleteNode(o.info) if err != nil { return errors.Wrap(err, "Remove object failed") } return nil } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.info.GetHash() } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.PutUncheckeder = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.MergeDirser = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.IDer = (*Object)(nil) ) rclone-1.53.3/backend/mega/mega_test.go000066400000000000000000000005451375552240400176770ustar00rootroot00000000000000// Test Mega filesystem interface package mega_test import ( "testing" "github.com/rclone/rclone/backend/mega" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestMega:", NilObject: (*mega.Object)(nil), }) } rclone-1.53.3/backend/memory/000077500000000000000000000000001375552240400157735ustar00rootroot00000000000000rclone-1.53.3/backend/memory/memory.go000066400000000000000000000375021375552240400176410ustar00rootroot00000000000000// Package memory provides an interface to an in memory object storage system package memory import ( "bytes" "context" "crypto/md5" "encoding/hex" "fmt" "io" "io/ioutil" "path" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/bucket" ) var ( hashType = hash.MD5 // the object storage is persistent buckets = newBucketsInfo() ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "memory", Description: "In memory object storage system.", NewFs: NewFs, Options: []fs.Option{}, }) } // Options defines the configuration for this backend type Options struct { } // Fs represents a remote memory server type Fs struct { name string // name of this remote root string // the path we are working on if any opt Options // parsed config options rootBucket string // bucket part of root (if any) rootDirectory string // directory part of root (if any) features *fs.Features // optional features } // bucketsInfo holds info about all the buckets type bucketsInfo struct { mu sync.RWMutex buckets map[string]*bucketInfo } func newBucketsInfo() *bucketsInfo { return &bucketsInfo{ buckets: make(map[string]*bucketInfo, 16), } } // getBucket gets a names bucket or nil func (bi *bucketsInfo) getBucket(name string) (b *bucketInfo) { bi.mu.RLock() b = bi.buckets[name] bi.mu.RUnlock() return b } // makeBucket returns the bucket or makes it func (bi *bucketsInfo) makeBucket(name string) (b *bucketInfo) { bi.mu.Lock() defer bi.mu.Unlock() b = bi.buckets[name] if b != nil { return b } b = 
newBucketInfo() bi.buckets[name] = b return b } // deleteBucket deletes the bucket or returns an error func (bi *bucketsInfo) deleteBucket(name string) error { bi.mu.Lock() defer bi.mu.Unlock() b := bi.buckets[name] if b == nil { return fs.ErrorDirNotFound } if !b.isEmpty() { return fs.ErrorDirectoryNotEmpty } delete(bi.buckets, name) return nil } // getObjectData gets an object from (bucketName, bucketPath) or nil func (bi *bucketsInfo) getObjectData(bucketName, bucketPath string) (od *objectData) { b := bi.getBucket(bucketName) if b == nil { return nil } return b.getObjectData(bucketPath) } // updateObjectData updates an object from (bucketName, bucketPath) func (bi *bucketsInfo) updateObjectData(bucketName, bucketPath string, od *objectData) { b := bi.makeBucket(bucketName) b.mu.Lock() b.objects[bucketPath] = od b.mu.Unlock() } // removeObjectData removes an object from (bucketName, bucketPath) returning true if removed func (bi *bucketsInfo) removeObjectData(bucketName, bucketPath string) (removed bool) { b := bi.getBucket(bucketName) if b != nil { b.mu.Lock() od := b.objects[bucketPath] if od != nil { delete(b.objects, bucketPath) removed = true } b.mu.Unlock() } return removed } // bucketInfo holds info about a single bucket type bucketInfo struct { mu sync.RWMutex objects map[string]*objectData } func newBucketInfo() *bucketInfo { return &bucketInfo{ objects: make(map[string]*objectData, 16), } } // getObjectData gets the named object's data or nil func (bi *bucketInfo) getObjectData(name string) (od *objectData) { bi.mu.RLock() od = bi.objects[name] bi.mu.RUnlock() return od } // isEmpty returns true if the bucket contains no objects func (bi *bucketInfo) isEmpty() (empty bool) { bi.mu.RLock() empty = len(bi.objects) == 0 bi.mu.RUnlock() return empty } // objectData holds the object data and metadata type objectData struct { modTime time.Time hash string mimeType string data []byte } // Object describes a memory object type Object struct { fs *Fs // what this object is part of remote string // The remote path od *objectData // the object data } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("Memory root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a remote 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // split returns bucket and bucketPath from the rootRelativePath // relative to f.root func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) { return bucket.Split(path.Join(f.root, rootRelativePath)) } // split returns bucket and bucketPath from the object func (o *Object) split() (bucket, bucketPath string) { return o.fs.split(o.remote) } // setRoot changes the root of the Fs func (f *Fs) setRoot(root string) { f.root = parsePath(root) f.rootBucket, f.rootDirectory = bucket.Split(f.root) } // NewFs constructs an Fs from the path, bucket:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } root = strings.Trim(root, "/") f := &Fs{ name: name, root: root, opt: *opt, } f.setRoot(root) f.features = (&fs.Features{ ReadMimeType: true, WriteMimeType: 
true, BucketBased: true, BucketBasedRootOK: true, }).Fill(f) if f.rootBucket != "" && f.rootDirectory != "" { od := buckets.getObjectData(f.rootBucket, f.rootDirectory) if od != nil { newRoot := path.Dir(f.root) if newRoot == "." { newRoot = "" } f.setRoot(newRoot) // return an error with an fs which points to the parent err = fs.ErrorIsFile } } return f, err } // newObject makes an object from a remote and an objectData func (f *Fs) newObject(remote string, od *objectData) *Object { return &Object{fs: f, remote: remote, od: od} } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { bucket, bucketPath := f.split(remote) od := buckets.getObjectData(bucket, bucketPath) if od == nil { return nil, fs.ErrorObjectNotFound } return f.newObject(remote, od), nil } // listFn is called from list to handle an object. type listFn func(remote string, entry fs.DirEntry, isDirectory bool) error // list the buckets to fn func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, fn listFn) (err error) { if prefix != "" { prefix += "/" } if directory != "" { directory += "/" } b := buckets.getBucket(bucket) if b == nil { return fs.ErrorDirNotFound } b.mu.RLock() defer b.mu.RUnlock() dirs := make(map[string]struct{}) for absPath, od := range b.objects { if strings.HasPrefix(absPath, directory) { remote := absPath[len(prefix):] if !recurse { localPath := absPath[len(directory):] slash := strings.IndexRune(localPath, '/') if slash >= 0 { // send a directory if have a slash dir := directory + localPath[:slash] if addBucket { dir = path.Join(bucket, dir) } _, found := dirs[dir] if !found { err = fn(dir, fs.NewDir(dir, time.Time{}), true) if err != nil { return err } dirs[dir] = struct{}{} } continue // don't send this file if not recursing } } // send an object if addBucket { remote = path.Join(bucket, remote) } err = fn(remote, f.newObject(remote, od), false) if err != nil { return err } } } return nil } // listDir lists the bucket to the entries func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) { // List the objects and directories err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, entry fs.DirEntry, isDirectory bool) error { entries = append(entries, entry) return nil }) return entries, err } // listBuckets lists the buckets to entries func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) { buckets.mu.RLock() defer buckets.mu.RUnlock() for name := range buckets.buckets { entries = append(entries, fs.NewDir(name, time.Time{})) } return entries, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. 
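// As an illustration (bucket and object names hypothetical), with a single
// object stored at "bucket/dir/file.txt":
//
//	entries, _ := f.List(ctx, "")          // -> directory "bucket"
//	entries, _ = f.List(ctx, "bucket")     // -> directory "bucket/dir"
//	entries, _ = f.List(ctx, "bucket/dir") // -> object "bucket/dir/file.txt"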
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { // defer fslog.Trace(dir, "")("entries = %q, err = %v", &entries, &err) bucket, directory := f.split(dir) if bucket == "" { if directory != "" { return nil, fs.ErrorListBucketRequired } return f.listBuckets(ctx) } return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "") } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively that doing a directory traversal. func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { bucket, directory := f.split(dir) list := walk.NewListRHelper(callback) listR := func(bucket, directory, prefix string, addBucket bool) error { return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, entry fs.DirEntry, isDirectory bool) error { return list.Add(entry) }) } if bucket == "" { entries, err := f.listBuckets(ctx) if err != nil { return err } for _, entry := range entries { err = list.Add(entry) if err != nil { return err } bucket := entry.Remote() err = listR(bucket, "", f.rootDirectory, true) if err != nil { return err } } } else { err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "") if err != nil { return err } } return list.Flush() } // Put the object into the bucket // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { // Temporary Object under construction fs := &Object{ fs: f, remote: src.Remote(), od: &objectData{ modTime: src.ModTime(ctx), }, } return fs, fs.Update(ctx, in, src, options...) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) } // Mkdir creates the bucket if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { bucket, _ := f.split(dir) buckets.makeBucket(bucket) return nil } // Rmdir deletes the bucket if the fs is at the root // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { bucket, directory := f.split(dir) if bucket == "" || directory != "" { return nil } return buckets.deleteBucket(bucket) } // Precision of the remote func (f *Fs) Precision() time.Duration { return time.Nanosecond } // Copy src to this remote using server side copy operations. 
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { dstBucket, dstPath := f.split(remote) _ = buckets.makeBucket(dstBucket) srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } srcBucket, srcPath := srcObj.split() od := buckets.getObjectData(srcBucket, srcPath) if od == nil { return nil, fs.ErrorObjectNotFound } buckets.updateObjectData(dstBucket, dstPath, od) return f.NewObject(ctx, remote) } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hashType) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "<nil>" } return o.Remote() } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the hash of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hashType { return "", hash.ErrUnsupported } if o.od.hash == "" { sum := md5.Sum(o.od.data) o.od.hash = hex.EncodeToString(sum[:]) } return o.od.hash, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return int64(len(o.od.data)) } // ModTime returns the modification time of the object
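// Since Precision() for this backend is time.Nanosecond, SetModTime
// round-trips exactly - a sketch, assuming o is an existing *Object:
//
//	when := time.Date(2020, 1, 2, 3, 4, 5, 6, time.UTC)
//	_ = o.SetModTime(ctx, when)
//	fmt.Println(o.ModTime(ctx).Equal(when)) // prints true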
func (o *Object) ModTime(ctx context.Context) (result time.Time) { return o.od.modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { o.od.modTime = modTime return nil } // Storable returns if this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { var offset, limit int64 = 0, -1 for _, option := range options { switch x := option.(type) { case *fs.RangeOption: offset, limit = x.Decode(int64(len(o.od.data))) case *fs.SeekOption: offset = x.Offset default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } if offset > int64(len(o.od.data)) { offset = int64(len(o.od.data)) } data := o.od.data[offset:] if limit >= 0 { if limit > int64(len(data)) { limit = int64(len(data)) } data = data[:limit] } return ioutil.NopCloser(bytes.NewBuffer(data)), nil } // Update the object with the contents of the io.Reader, modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { bucket, bucketPath := o.split() data, err := ioutil.ReadAll(in) if err != nil { return errors.Wrap(err, "failed to update memory object") } o.od = &objectData{ data: data, hash: "", modTime: src.ModTime(ctx), mimeType: fs.MimeType(ctx, o), } buckets.updateObjectData(bucket, bucketPath, o.od) return nil } // Remove an object func (o *Object) Remove(ctx context.Context) error { bucket, bucketPath := o.split() removed := buckets.removeObjectData(bucket, bucketPath) if !removed { return fs.ErrorObjectNotFound } return nil } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.od.mimeType } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.Copier = &Fs{} _ fs.PutStreamer = &Fs{} _ fs.ListRer = &Fs{} _ fs.Object = &Object{} _ fs.MimeTyper = &Object{} ) rclone-1.53.3/backend/memory/memory_test.go000066400000000000000000000004651375552240400206760ustar00rootroot00000000000000// Test memory filesystem interface package memory import ( "testing" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: ":memory:", NilObject: (*Object)(nil), }) } rclone-1.53.3/backend/onedrive/000077500000000000000000000000001375552240400162765ustar00rootroot00000000000000rclone-1.53.3/backend/onedrive/api/000077500000000000000000000000001375552240400170475ustar00rootroot00000000000000rclone-1.53.3/backend/onedrive/api/types.go000066400000000000000000000463641375552240400205570ustar00rootroot00000000000000// Types passed and returned to and from the API package api import ( "strings" "time" ) const ( timeFormat = `"` + time.RFC3339 + `"` // PackageTypeOneNote is the package type value for OneNote files PackageTypeOneNote = "oneNote" ) // Error is returned from one drive when things go wrong type Error struct { ErrorInfo struct { Code string `json:"code"` Message string `json:"message"` InnerError struct { Code string `json:"code"` } `json:"innererror"` } `json:"error"` } // Error returns a string for the error and satisfies the error interface func (e *Error) Error() string { out := e.ErrorInfo.Code if e.ErrorInfo.InnerError.Code != "" { out += ": " + 
e.ErrorInfo.InnerError.Code } out += ": " + e.ErrorInfo.Message return out } // Check Error satisfies the error interface var _ error = (*Error)(nil) // Identity represents an identity of an actor. For example, an actor // can be a user, device, or application. type Identity struct { DisplayName string `json:"displayName"` ID string `json:"id"` } // IdentitySet is a keyed collection of Identity objects. It is used // to represent a set of identities associated with various events for // an item, such as created by or last modified by. type IdentitySet struct { User Identity `json:"user"` Application Identity `json:"application"` Device Identity `json:"device"` } // Quota groups storage space quota-related information on OneDrive into a single structure. type Quota struct { Total int64 `json:"total"` Used int64 `json:"used"` Remaining int64 `json:"remaining"` Deleted int64 `json:"deleted"` State string `json:"state"` // normal | nearing | critical | exceeded } // Drive is a representation of a drive resource type Drive struct { ID string `json:"id"` DriveType string `json:"driveType"` Owner IdentitySet `json:"owner"` Quota Quota `json:"quota"` } // Timestamp represents date and time information for the // OneDrive API, using ISO 8601, and is always in UTC time. type Timestamp time.Time // MarshalJSON turns a Timestamp into JSON (in UTC) func (t *Timestamp) MarshalJSON() (out []byte, err error) { timeString := (*time.Time)(t).UTC().Format(timeFormat) return []byte(timeString), nil } // UnmarshalJSON turns JSON into a Timestamp func (t *Timestamp) UnmarshalJSON(data []byte) error { newT, err := time.Parse(timeFormat, string(data)) if err != nil { return err } *t = Timestamp(newT) return nil } // ItemReference groups data needed to reference a OneDrive item // across the service into a single structure. type ItemReference struct { DriveID string `json:"driveId"` // Unique identifier for the Drive that contains the item. Read-only. ID string `json:"id"` // Unique identifier for the item. Read/Write. Path string `json:"path"` // Path that is used to navigate to the item. Read/Write. DriveType string `json:"driveType"` // Type of the drive. Read-only. } // RemoteItemFacet groups data needed to reference a OneDrive remote item type RemoteItemFacet struct { ID string `json:"id"` // The unique identifier of the item within the remote Drive. Read-only. Name string `json:"name"` // The name of the item (filename and extension). Read-write. CreatedBy IdentitySet `json:"createdBy"` // Identity of the user, device, and application which created the item. Read-only. LastModifiedBy IdentitySet `json:"lastModifiedBy"` // Identity of the user, device, and application which last modified the item. Read-only. CreatedDateTime Timestamp `json:"createdDateTime"` // Date and time of item creation. Read-only. LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // Date and time the item was last modified. Read-only. Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only. File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only. Package *PackageFacet `json:"package"` // If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others. Read-only. FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write. ParentReference *ItemReference `json:"parentReference"` // Parent information, if the item has a parent. 
Read-write. Size int64 `json:"size"` // Size of the item in bytes. Read-only. WebURL string `json:"webUrl"` // URL that displays the resource in the browser. Read-only. } // FolderFacet groups folder-related data on OneDrive into a single structure type FolderFacet struct { ChildCount int64 `json:"childCount"` // Number of children contained immediately within this container. } // HashesType groups different types of hashes into a single structure, for an item on OneDrive. type HashesType struct { Sha1Hash string `json:"sha1Hash"` // hex encoded SHA1 hash for the contents of the file (if available) Crc32Hash string `json:"crc32Hash"` // hex encoded CRC32 value of the file (if available) QuickXorHash string `json:"quickXorHash"` // base64 encoded QuickXorHash value of the file (if available) } // FileFacet groups file-related data on OneDrive into a single structure. type FileFacet struct { MimeType string `json:"mimeType"` // The MIME type for the file. This is determined by logic on the server and might not be the value provided when the file was uploaded. Hashes HashesType `json:"hashes"` // Hashes of the file's binary content, if available. } // FileSystemInfoFacet contains properties that are reported by the // device's local file system for the local version of an item. This // facet can be used to specify the last modified date or created date // of the item as it was on the local device. type FileSystemInfoFacet struct { CreatedDateTime Timestamp `json:"createdDateTime"` // The UTC date and time the file was created on a client. LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // The UTC date and time the file was last modified on a client. } // DeletedFacet indicates that the item on OneDrive has been // deleted. In this version of the API, the presence (non-null) of the // facet value indicates that the file was deleted. A null (or // missing) value indicates that the file is not deleted. type DeletedFacet struct { } // PackageFacet indicates that a DriveItem is the top level item // in a "package" or a collection of items that should be treated as a collection instead of individual items. // `oneNote` is the only currently defined value. type PackageFacet struct { Type string `json:"type"` } // Item represents metadata for an item in OneDrive type Item struct { ID string `json:"id"` // The unique identifier of the item within the Drive. Read-only. Name string `json:"name"` // The name of the item (filename and extension). Read-write. ETag string `json:"eTag"` // eTag for the entire item (metadata + content). Read-only. CTag string `json:"cTag"` // An eTag for the content of the item. This eTag is not changed if only the metadata is changed. Read-only. CreatedBy IdentitySet `json:"createdBy"` // Identity of the user, device, and application which created the item. Read-only. LastModifiedBy IdentitySet `json:"lastModifiedBy"` // Identity of the user, device, and application which last modified the item. Read-only. CreatedDateTime Timestamp `json:"createdDateTime"` // Date and time of item creation. Read-only. LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // Date and time the item was last modified. Read-only. Size int64 `json:"size"` // Size of the item in bytes. Read-only. ParentReference *ItemReference `json:"parentReference"` // Parent information, if the item has a parent. Read-write. WebURL string `json:"webUrl"` // URL that displays the resource in the browser. Read-only. 
Description string `json:"description"` // Provide a user-visible description of the item. Read-write. Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only. File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only. RemoteItem *RemoteItemFacet `json:"remoteItem"` // Remote Item metadata, if the item is a remote shared item. Read-only. FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write. // Image *ImageFacet `json:"image"` // Image metadata, if the item is an image. Read-only. // Photo *PhotoFacet `json:"photo"` // Photo metadata, if the item is a photo. Read-only. // Audio *AudioFacet `json:"audio"` // Audio metadata, if the item is an audio file. Read-only. // Video *VideoFacet `json:"video"` // Video metadata, if the item is a video. Read-only. // Location *LocationFacet `json:"location"` // Location metadata, if the item has location data. Read-only. Package *PackageFacet `json:"package"` // If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others. Read-only. Deleted *DeletedFacet `json:"deleted"` // Information about the deleted state of the item. Read-only. } // ViewDeltaResponse is the response to the view delta method type ViewDeltaResponse struct { Value []Item `json:"value"` // An array of Item objects which have been created, modified, or deleted. NextLink string `json:"@odata.nextLink"` // A URL to retrieve the next available page of changes. DeltaLink string `json:"@odata.deltaLink"` // A URL returned instead of @odata.nextLink after all current changes have been returned. Used to read the next set of changes in the future. DeltaToken string `json:"@delta.token"` // A token value that can be used in the query string on manually-crafted calls to view.delta. Not needed if you're using nextLink and deltaLink. } // ListChildrenResponse is the response to the list children method type ListChildrenResponse struct { Value []Item `json:"value"` // An array of Item objects NextLink string `json:"@odata.nextLink"` // A URL to retrieve the next available page of items. } // CreateItemRequest is the request to create an item object type CreateItemRequest struct { Name string `json:"name"` // Name of the folder to be created. Folder FolderFacet `json:"folder"` // Empty Folder facet to indicate that folder is the type of resource to be created. ConflictBehavior string `json:"@name.conflictBehavior"` // Determines what to do if an item with a matching name already exists in this folder. Accepted values are: rename, replace, and fail (the default). } // SetFileSystemInfo is used to Update an object's FileSystemInfo. type SetFileSystemInfo struct { FileSystemInfo FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write. 
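// (Illustration, values hypothetical) when marshalled this produces a
// request body like:
//
//	{"fileSystemInfo":{"createdDateTime":"2020-01-01T10:00:00Z","lastModifiedDateTime":"2020-01-01T12:00:00Z"}}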
} // CreateUploadRequest is used by CreateUploadSession to set the dates correctly type CreateUploadRequest struct { Item SetFileSystemInfo `json:"item"` } // CreateUploadResponse is the response from creating an upload session type CreateUploadResponse struct { UploadURL string `json:"uploadUrl"` // "https://sn3302.up.1drv.com/up/fe6987415ace7X4e1eF866337", ExpirationDateTime Timestamp `json:"expirationDateTime"` // "2015-01-29T09:21:55.523Z", NextExpectedRanges []string `json:"nextExpectedRanges"` // ["0-"] } // UploadFragmentResponse is the response from uploading a fragment type UploadFragmentResponse struct { ExpirationDateTime Timestamp `json:"expirationDateTime"` // "2015-01-29T09:21:55.523Z", NextExpectedRanges []string `json:"nextExpectedRanges"` // ["0-"] } // CopyItemRequest is the request to copy an item object // // Note: The parentReference should include either an id or path but // not both. If both are included, they need to reference the same // item or an error will occur. type CopyItemRequest struct { ParentReference ItemReference `json:"parentReference"` // Reference to the parent item the copy will be created in. Name *string `json:"name"` // Optional The new name for the copy. If this isn't provided, the same name will be used as the original. } // MoveItemRequest is the request to copy an item object // // Note: The parentReference should include either an id or path but // not both. If both are included, they need to reference the same // item or an error will occur. type MoveItemRequest struct { ParentReference *ItemReference `json:"parentReference,omitempty"` // Reference to the destination parent directory Name string `json:"name,omitempty"` // Optional The new name for the file. If this isn't provided, the same name will be used as the original. FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo,omitempty"` // File system information on client. Read-write. } //CreateShareLinkRequest is the request to create a sharing link //Always Type:view and Scope:anonymous for public sharing type CreateShareLinkRequest struct { Type string `json:"type"` //Link type in View, Edit or Embed Scope string `json:"scope,omitempty"` //Optional. Scope in anonymousi, organization } //CreateShareLinkResponse is the response from CreateShareLinkRequest type CreateShareLinkResponse struct { ID string `json:"id"` Roles []string `json:"roles"` Link struct { Type string `json:"type"` Scope string `json:"scope"` WebURL string `json:"webUrl"` Application struct { ID string `json:"id"` DisplayName string `json:"displayName"` } `json:"application"` } `json:"link"` } // AsyncOperationStatus provides information on the status of an asynchronous job progress. // // The following API calls return AsyncOperationStatus resources: // // Copy Item // Upload From URL type AsyncOperationStatus struct { PercentageComplete float64 `json:"percentageComplete"` // A float value between 0 and 100 that indicates the percentage complete. Status string `json:"status"` // A string value that maps to an enumeration of possible values about the status of the job. 
"notStarted | inProgress | completed | updating | failed | deletePending | deleteFailed | waiting" } // GetID returns a normalized ID of the item // If DriveID is known it will be prefixed to the ID with # separator // Can be parsed using onedrive.parseNormalizedID(normalizedID) func (i *Item) GetID() string { if i.IsRemote() && i.RemoteItem.ID != "" { return i.RemoteItem.ParentReference.DriveID + "#" + i.RemoteItem.ID } else if i.ParentReference != nil && strings.Index(i.ID, "#") == -1 { return i.ParentReference.DriveID + "#" + i.ID } return i.ID } // GetDriveID returns a normalized ParentReference of the item func (i *Item) GetDriveID() string { return i.GetParentReference().DriveID } // GetName returns a normalized Name of the item func (i *Item) GetName() string { if i.IsRemote() && i.RemoteItem.Name != "" { return i.RemoteItem.Name } return i.Name } // GetFolder returns a normalized Folder of the item func (i *Item) GetFolder() *FolderFacet { if i.IsRemote() && i.RemoteItem.Folder != nil { return i.RemoteItem.Folder } return i.Folder } // GetPackage returns a normalized Package of the item func (i *Item) GetPackage() *PackageFacet { if i.IsRemote() && i.RemoteItem.Package != nil { return i.RemoteItem.Package } return i.Package } // GetPackageType returns the package type of the item if available, // otherwise "" func (i *Item) GetPackageType() string { pack := i.GetPackage() if pack == nil { return "" } return pack.Type } // GetFile returns a normalized File of the item func (i *Item) GetFile() *FileFacet { if i.IsRemote() && i.RemoteItem.File != nil { return i.RemoteItem.File } return i.File } // GetFileSystemInfo returns a normalized FileSystemInfo of the item func (i *Item) GetFileSystemInfo() *FileSystemInfoFacet { if i.IsRemote() && i.RemoteItem.FileSystemInfo != nil { return i.RemoteItem.FileSystemInfo } return i.FileSystemInfo } // GetSize returns a normalized Size of the item func (i *Item) GetSize() int64 { if i.IsRemote() && i.RemoteItem.Size != 0 { return i.RemoteItem.Size } return i.Size } // GetWebURL returns a normalized WebURL of the item func (i *Item) GetWebURL() string { if i.IsRemote() && i.RemoteItem.WebURL != "" { return i.RemoteItem.WebURL } return i.WebURL } // GetCreatedBy returns a normalized CreatedBy of the item func (i *Item) GetCreatedBy() IdentitySet { if i.IsRemote() && i.RemoteItem.CreatedBy != (IdentitySet{}) { return i.RemoteItem.CreatedBy } return i.CreatedBy } // GetLastModifiedBy returns a normalized LastModifiedBy of the item func (i *Item) GetLastModifiedBy() IdentitySet { if i.IsRemote() && i.RemoteItem.LastModifiedBy != (IdentitySet{}) { return i.RemoteItem.LastModifiedBy } return i.LastModifiedBy } // GetCreatedDateTime returns a normalized CreatedDateTime of the item func (i *Item) GetCreatedDateTime() Timestamp { if i.IsRemote() && i.RemoteItem.CreatedDateTime != (Timestamp{}) { return i.RemoteItem.CreatedDateTime } return i.CreatedDateTime } // GetLastModifiedDateTime returns a normalized LastModifiedDateTime of the item func (i *Item) GetLastModifiedDateTime() Timestamp { if i.IsRemote() && i.RemoteItem.LastModifiedDateTime != (Timestamp{}) { return i.RemoteItem.LastModifiedDateTime } return i.LastModifiedDateTime } // GetParentReference returns a normalized ParentReference of the item func (i *Item) GetParentReference() *ItemReference { if i.IsRemote() && i.ParentReference == nil { return i.RemoteItem.ParentReference } return i.ParentReference } // IsRemote checks if item is a remote item func (i *Item) IsRemote() bool { return 
i.RemoteItem != nil } // User details for each version type User struct { Email string `json:"email"` ID string `json:"id"` DisplayName string `json:"displayName"` } // LastModifiedBy for each version type LastModifiedBy struct { User User `json:"user"` } // Version info type Version struct { ID string `json:"id"` LastModifiedDateTime time.Time `json:"lastModifiedDateTime"` Size int `json:"size"` LastModifiedBy LastModifiedBy `json:"lastModifiedBy"` } // VersionsResponse is returned from /versions type VersionsResponse struct { Versions []Version `json:"value"` } rclone-1.53.3/backend/onedrive/onedrive.go000077500000000000000000001653231375552240400204550ustar00rootroot00000000000000// Package onedrive provides an interface to the Microsoft OneDrive // object storage system. package onedrive import ( "context" "encoding/base64" "encoding/hex" "encoding/json" "fmt" "io" "log" "net/http" "path" "strconv" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/onedrive/api" "github.com/rclone/rclone/backend/onedrive/quickxorhash" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/atexit" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/rest" "golang.org/x/oauth2" ) const ( rcloneClientID = "b15665d9-eda6-4092-8539-0eec376afd59" rcloneEncryptedClientSecret = "_JUdzh3LnKNqSPcf4Wu5fgMFIQOI8glZu_akYgR8yf6egowNBg-R" minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential graphURL = "https://graph.microsoft.com/v1.0" configDriveID = "drive_id" configDriveType = "drive_type" driveTypePersonal = "personal" driveTypeBusiness = "business" driveTypeSharepoint = "documentLibrary" defaultChunkSize = 10 * fs.MebiByte chunkSizeMultiple = 320 * fs.KibiByte ) // Globals var ( // Description of how to auth for this app for a business account oauthConfig = &oauth2.Config{ Endpoint: oauth2.Endpoint{ AuthURL: "https://login.microsoftonline.com/common/oauth2/v2.0/authorize", TokenURL: "https://login.microsoftonline.com/common/oauth2/v2.0/token", }, Scopes: []string{"Files.Read", "Files.ReadWrite", "Files.Read.All", "Files.ReadWrite.All", "offline_access", "Sites.Read.All"}, ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), RedirectURL: oauthutil.RedirectLocalhostURL, } // QuickXorHashType is the hash.Type for OneDrive QuickXorHashType hash.Type ) // Register with Fs func init() { QuickXorHashType = hash.RegisterHash("QuickXorHash", 40, quickxorhash.New) fs.Register(&fs.RegInfo{ Name: "onedrive", Description: "Microsoft OneDrive", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { ctx := context.TODO() err := oauthutil.Config("onedrive", name, m, oauthConfig, nil) if err != nil { log.Fatalf("Failed to configure token: %v", err) return } // Stop if we are running non-interactive config if fs.Config.AutoConfirm { return } type driveResource struct { DriveID string `json:"id"` DriveName string `json:"name"` DriveType string `json:"driveType"` } type drivesResponse struct { 
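// (Illustration, values hypothetical) the drive listing endpoints used
// below return JSON of this shape:
//
//	{"value":[{"id":"b!AbC...","name":"OneDrive","driveType":"business"}]}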
Drives []driveResource `json:"value"` } type siteResource struct { SiteID string `json:"id"` SiteName string `json:"displayName"` SiteURL string `json:"webUrl"` } type siteResponse struct { Sites []siteResource `json:"value"` } oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig) if err != nil { log.Fatalf("Failed to configure OneDrive: %v", err) } srv := rest.NewClient(oAuthClient) var opts rest.Opts var finalDriveID string var siteID string switch config.Choose("Your choice", []string{"onedrive", "sharepoint", "driveid", "siteid", "search"}, []string{"OneDrive Personal or Business", "Root Sharepoint site", "Type in driveID", "Type in SiteID", "Search a Sharepoint site"}, false) { case "onedrive": opts = rest.Opts{ Method: "GET", RootURL: graphURL, Path: "/me/drives", } case "sharepoint": opts = rest.Opts{ Method: "GET", RootURL: graphURL, Path: "/sites/root/drives", } case "driveid": fmt.Printf("Paste your Drive ID here> ") finalDriveID = config.ReadLine() case "siteid": fmt.Printf("Paste your Site ID here> ") siteID = config.ReadLine() case "search": fmt.Printf("What to search for> ") searchTerm := config.ReadLine() opts = rest.Opts{ Method: "GET", RootURL: graphURL, Path: "/sites?search=" + searchTerm, } sites := siteResponse{} _, err := srv.CallJSON(ctx, &opts, nil, &sites) if err != nil { log.Fatalf("Failed to query available sites: %v", err) } if len(sites.Sites) == 0 { log.Fatalf("Search for '%s' returned no results", searchTerm) } else { fmt.Printf("Found %d sites, please select the one you want to use:\n", len(sites.Sites)) for index, site := range sites.Sites { fmt.Printf("%d: %s (%s) id=%s\n", index, site.SiteName, site.SiteURL, site.SiteID) } siteID = sites.Sites[config.ChooseNumber("Chose drive to use:", 0, len(sites.Sites)-1)].SiteID } } // if we have a siteID we need to ask for the drives if siteID != "" { opts = rest.Opts{ Method: "GET", RootURL: graphURL, Path: "/sites/" + siteID + "/drives", } } // We don't have the final ID yet? 
// query Microsoft Graph if finalDriveID == "" { drives := drivesResponse{} _, err := srv.CallJSON(ctx, &opts, nil, &drives) if err != nil { log.Fatalf("Failed to query available drives: %v", err) } // Also call /me/drive as sometimes /me/drives doesn't return it #4068 if opts.Path == "/me/drives" { opts.Path = "/me/drive" meDrive := driveResource{} _, err := srv.CallJSON(ctx, &opts, nil, &meDrive) if err != nil { log.Fatalf("Failed to query available drives: %v", err) } found := false for _, drive := range drives.Drives { if drive.DriveID == meDrive.DriveID { found = true break } } // add the me drive if not found already if !found { fs.Debugf(nil, "Adding %v to drives list from /me/drive", meDrive) drives.Drives = append(drives.Drives, meDrive) } } if len(drives.Drives) == 0 { log.Fatalf("No drives found") } else { fmt.Printf("Found %d drives, please select the one you want to use:\n", len(drives.Drives)) for index, drive := range drives.Drives { fmt.Printf("%d: %s (%s) id=%s\n", index, drive.DriveName, drive.DriveType, drive.DriveID) } finalDriveID = drives.Drives[config.ChooseNumber("Chose drive to use:", 0, len(drives.Drives)-1)].DriveID } } // Test the driveID and get drive type opts = rest.Opts{ Method: "GET", RootURL: graphURL, Path: "/drives/" + finalDriveID + "/root"} var rootItem api.Item _, err = srv.CallJSON(ctx, &opts, nil, &rootItem) if err != nil { log.Fatalf("Failed to query root for drive %s: %v", finalDriveID, err) } fmt.Printf("Found drive '%s' of type '%s', URL: %s\nIs that okay?\n", rootItem.Name, rootItem.ParentReference.DriveType, rootItem.WebURL) // This does not work, YET :) if !config.ConfirmWithConfig(m, "config_drive_ok", true) { log.Fatalf("Cancelled by user") } m.Set(configDriveID, finalDriveID) m.Set(configDriveType, rootItem.ParentReference.DriveType) config.SaveConfig() }, Options: append(oauthutil.SharedOptions, []fs.Option{{ Name: "chunk_size", Help: `Chunk size to upload files with - must be multiple of 320k (327,680 bytes). Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter \"Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big.\" Note that the chunks will be buffered into memory.`, Default: defaultChunkSize, Advanced: true, }, { Name: "drive_id", Help: "The ID of the drive to use", Default: "", Advanced: true, }, { Name: "drive_type", Help: "The type of the drive ( " + driveTypePersonal + " | " + driveTypeBusiness + " | " + driveTypeSharepoint + " )", Default: "", Advanced: true, }, { Name: "expose_onenote_files", Help: `Set to make OneNote files show up in directory listings. By default rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listing, set this option.`, Default: false, Advanced: true, }, { Name: "server_side_across_configs", Default: false, Help: `Allow server side operations (eg copy) to work across different onedrive configs. This can be useful if you wish to do a server side copy between two different Onedrives. 
Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations.`, Advanced: true, }, { Name: "no_versions", Default: false, Help: `Remove all versions on modifying operations Onedrive for business creates versions when rclone uploads new files overwriting an existing one and when it sets the modification time. These versions take up space out of the quota. This flag checks for versions after file upload and setting modification time and removes all but the last version. **NB** Onedrive personal can't currently delete versions so don't use this flag there. `, Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // List of replaced characters: // < (less than) -> '<' // FULLWIDTH LESS-THAN SIGN // > (greater than) -> '>' // FULLWIDTH GREATER-THAN SIGN // : (colon) -> ':' // FULLWIDTH COLON // " (double quote) -> '"' // FULLWIDTH QUOTATION MARK // \ (backslash) -> '\' // FULLWIDTH REVERSE SOLIDUS // | (vertical line) -> '|' // FULLWIDTH VERTICAL LINE // ? (question mark) -> '?' // FULLWIDTH QUESTION MARK // * (asterisk) -> '*' // FULLWIDTH ASTERISK // # (number sign) -> '#' // FULLWIDTH NUMBER SIGN // % (percent sign) -> '%' // FULLWIDTH PERCENT SIGN // // Folder names cannot begin with a tilde ('~') // List of replaced characters: // ~ (tilde) -> '~' // FULLWIDTH TILDE // // Additionally names can't begin with a space ( ) or end with a period (.) or space ( ). // List of replaced characters: // . (period) -> '.' // FULLWIDTH FULL STOP // (space) -> '␠' // SYMBOL FOR SPACE // // Also encode invalid UTF-8 bytes as json doesn't handle them. // // The OneDrive API documentation lists the set of reserved characters, but // testing showed this list is incomplete. This are the differences: // - " (double quote) is rejected, but missing in the documentation // - space at the end of file and folder names is rejected, but missing in the documentation // - period at the end of file names is rejected, but missing in the documentation // // Adding these restrictions to the OneDrive API documentation yields exactly // the same rules as the Windows naming conventions. 
// // https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/addressing-driveitems?view=odsp-graph-online#path-encoding Default: (encoder.Display | encoder.EncodeBackSlash | encoder.EncodeHashPercent | encoder.EncodeLeftSpace | encoder.EncodeLeftTilde | encoder.EncodeRightPeriod | encoder.EncodeRightSpace | encoder.EncodeWin | encoder.EncodeInvalidUtf8), }}...), }) } // Options defines the configuration for this backend type Options struct { ChunkSize fs.SizeSuffix `config:"chunk_size"` DriveID string `config:"drive_id"` DriveType string `config:"drive_type"` ExposeOneNoteFiles bool `config:"expose_onenote_files"` ServerSideAcrossConfigs bool `config:"server_side_across_configs"` NoVersions bool `config:"no_versions"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote one drive type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed options features *fs.Features // optional features srv *rest.Client // the connection to the one drive server dirCache *dircache.DirCache // Map of directory path to directory id pacer *fs.Pacer // pacer for API calls tokenRenewer *oauthutil.Renew // renew the token on expiry driveID string // ID to use for querying Microsoft Graph driveType string // https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/resources/drive } // Object describes a one drive object // // Will definitely have info but maybe not meta type Object struct { fs *Fs // what this object is part of remote string // The remote path hasMetaData bool // whether info below has been set isOneNoteFile bool // Whether the object is a OneNote file size int64 // size of the object modTime time.Time // modification time of the object id string // ID of the object sha1 string // SHA-1 of the object content quickxorhash string // QuickXorHash of the object content mimeType string // Content-Type of object from server (may not be as uploaded) } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("One drive root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a one drive 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests. 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func shouldRetry(resp *http.Response, err error) (bool, error) { retry := false if resp != nil { switch resp.StatusCode { case 401: if len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 { retry = true fs.Debugf(nil, "Should retry: %v", err) } case 429: // Too Many Requests. 
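// (Illustration) a throttled response carries the wait time in seconds:
//
//	HTTP/1.1 429 Too Many Requests
//	Retry-After: 7
//
// which the code below converts into a pacer.RetryAfterError of 7s.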
// see https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online if values := resp.Header["Retry-After"]; len(values) == 1 && values[0] != "" { retryAfter, parseErr := strconv.Atoi(values[0]) if parseErr != nil { fs.Debugf(nil, "Failed to parse Retry-After: %q: %v", values[0], parseErr) } else { duration := time.Second * time.Duration(retryAfter) retry = true err = pacer.RetryAfterError(err, duration) fs.Debugf(nil, "Too many requests. Trying again in %d seconds.", retryAfter) } } case 507: // Insufficient Storage return false, fserrors.FatalError(err) } } return retry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // readMetaDataForPathRelativeToID reads the metadata for a path relative to an item that is addressed by its normalized ID. // if `relPath` == "", it reads the metadata for the item with that ID. // // We address items using the pattern `drives/driveID/items/itemID:/relativePath` // instead of simply using `drives/driveID/root:/itemPath` because it works for // "shared with me" folders in OneDrive Personal (See #2536, #2778) // This path pattern comes from https://github.com/OneDrive/onedrive-api-docs/issues/908#issuecomment-417488480 // // If `relPath` == '', do not append the slash (See #3664) func (f *Fs) readMetaDataForPathRelativeToID(ctx context.Context, normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) { if relPath != "" { relPath = "/" + withTrailingColon(rest.URLPathEscape(f.opt.Enc.FromStandardPath(relPath))) } opts := newOptsCall(normalizedID, "GET", ":"+relPath) err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(resp, err) }) return info, resp, err } // readMetaDataForPath reads the metadata from the path (relative to the absolute root) func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, resp *http.Response, err error) { firstSlashIndex := strings.IndexRune(path, '/') if f.driveType != driveTypePersonal || firstSlashIndex == -1 { var opts rest.Opts if len(path) == 0 { opts = rest.Opts{ Method: "GET", Path: "/root", } } else { opts = rest.Opts{ Method: "GET", Path: "/root:/" + rest.URLPathEscape(f.opt.Enc.FromStandardPath(path)), } } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(resp, err) }) return info, resp, err } // The following branch handles the case when we're using OneDrive Personal and the path is in a folder. // For OneDrive Personal, we need to consider the "shared with me" folders. // An item in such a folder can only be addressed by its ID relative to the sharer's driveID or // by its path relative to the folder's ID relative to the sharer's driveID. // Note: A "shared with me" folder can only be placed in the sharee's absolute root. // So we read metadata relative to a suitable folder's normalized ID. var dirCacheFoundRoot bool var rootNormalizedID string if f.dirCache != nil { rootNormalizedID, err = f.dirCache.RootID(ctx, false) dirCacheRootIDExists := err == nil if f.root == "" { // if f.root == "", it means f.root is the absolute root of the drive // and its ID should have been found in NewFs dirCacheFoundRoot = dirCacheRootIDExists } else if _, err := f.dirCache.RootParentID(ctx, false); err == nil { // if root is in a folder, it must have a parent folder, and // if dirCache has found root in NewFs, the parent folder's ID // should be present. 
// This RootParentID() check is a fix for #3164 which describes // a possible case where the root is not found. dirCacheFoundRoot = dirCacheRootIDExists } } relPath, insideRoot := getRelativePathInsideBase(f.root, path) var firstDir, baseNormalizedID string if !insideRoot || !dirCacheFoundRoot { // We do not have the normalized ID in dirCache for our query to base on. Query it manually. firstDir, relPath = path[:firstSlashIndex], path[firstSlashIndex+1:] info, resp, err := f.readMetaDataForPath(ctx, firstDir) if err != nil { return info, resp, err } baseNormalizedID = info.GetID() } else { if f.root != "" { // Read metadata based on root baseNormalizedID = rootNormalizedID } else { // Read metadata based on firstDir firstDir, relPath = path[:firstSlashIndex], path[firstSlashIndex+1:] baseNormalizedID, err = f.dirCache.FindDir(ctx, firstDir, false) if err != nil { return nil, nil, err } } } return f.readMetaDataForPathRelativeToID(ctx, baseNormalizedID, relPath) } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { // Decode error response errResponse := new(api.Error) err := rest.DecodeJSON(resp, &errResponse) if err != nil { fs.Debugf(nil, "Couldn't decode error response: %v", err) } if errResponse.ErrorInfo.Code == "" { errResponse.ErrorInfo.Code = resp.Status } return errResponse } func checkUploadChunkSize(cs fs.SizeSuffix) error { const minChunkSize = fs.Byte if cs%chunkSizeMultiple != 0 { return errors.Errorf("%s is not a multiple of %s", cs, chunkSizeMultiple) } if cs < minChunkSize { return errors.Errorf("%s is less than %s", cs, minChunkSize) } return nil } func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadChunkSize(cs) if err == nil { old, f.opt.ChunkSize = f.opt.ChunkSize, cs } return } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } err = checkUploadChunkSize(opt.ChunkSize) if err != nil { return nil, errors.Wrap(err, "onedrive: chunk size") } if opt.DriveID == "" || opt.DriveType == "" { return nil, errors.New("unable to get drive_id and drive_type - if you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend") } root = parsePath(root) oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig) if err != nil { return nil, errors.Wrap(err, "failed to configure OneDrive") } f := &Fs{ name: name, root: root, opt: *opt, driveID: opt.DriveID, driveType: opt.DriveType, srv: rest.NewClient(oAuthClient).SetRoot(graphURL + "/drives/" + opt.DriveID), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), } f.features = (&fs.Features{ CaseInsensitive: true, ReadMimeType: true, CanHaveEmptyDirectories: true, ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs, }).Fill(f) f.srv.SetErrorHandler(errorHandler) // Renew the token in the background f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { _, _, err := f.readMetaDataForPath(ctx, "") return err }) // Get rootID rootInfo, _, err := f.readMetaDataForPath(ctx, "") if err != nil || rootInfo.GetID() == "" { return nil, errors.Wrap(err, "failed to get root") } f.dirCache = dircache.New(root, rootInfo.GetID(), f) // Find the current root err = f.dirCache.FindRoot(ctx, false) if 
err != nil { // Assume it is a file newRoot, remote := dircache.SplitPath(root) tempF := *f tempF.dirCache = dircache.New(newRoot, rootInfo.ID, &tempF) tempF.root = newRoot // Make new Fs which is the parent err = tempF.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f return f, nil } _, err := tempF.newObjectWithInfo(ctx, remote, nil) if err != nil { if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f return f, nil } return nil, err } // XXX: update the old f here instead of returning tempF, since // `features` were already filled with functions having *f as a receiver. // See https://github.com/rclone/rclone/issues/2182 f.dirCache = tempF.dirCache f.root = tempF.root // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // rootSlash returns root with a trailing slash if root is non-empty, otherwise the empty string func (f *Fs) rootSlash() string { if f.root == "" { return f.root } return f.root + "/" } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Item) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { // Set info err = o.setMetaData(info) } else { err = o.readMetaData(ctx) // reads info and meta, returning an error } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { // fs.Debugf(f, "FindLeaf(%q, %q)", pathID, leaf) _, ok := f.dirCache.GetInv(pathID) if !ok { return "", false, errors.New("couldn't find parent ID") } info, resp, err := f.readMetaDataForPathRelativeToID(ctx, pathID, leaf) if err != nil { if resp != nil && resp.StatusCode == http.StatusNotFound { return "", false, nil } return "", false, err } if info.GetPackageType() == api.PackageTypeOneNote { return "", false, errors.New("found OneNote file when looking for folder") } if info.GetFolder() == nil { return "", false, errors.New("found file when looking for folder") } return info.GetID(), true, nil } // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, dirID, leaf string) (newID string, err error) { // fs.Debugf(f, "CreateDir(%q, %q)\n", dirID, leaf) var resp *http.Response var info *api.Item opts := newOptsCall(dirID, "POST", "/children") mkdir := api.CreateItemRequest{ Name: f.opt.Enc.FromStandardName(leaf), ConflictBehavior: "fail", } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info) return shouldRetry(resp, err) }) if err != nil { //fmt.Printf("...Error %v\n", err) return "", err } //fmt.Printf("...Id %q\n", *info.Id) return info.GetID(), nil } // listAllFn is a user function to process a File item from listAll // // Should return true to finish processing type listAllFn func(*api.Item) bool // listAll lists the directory required, calling the user function on each item found // // If directoriesOnly is set it only sends directories; if filesOnly is set it only sends files // If the user fn ever returns true then it early exits with found = true func (f *Fs) listAll(ctx
context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) { // Top parameter asks for bigger pages of data // https://dev.onedrive.com/odata/optional-query-parameters.htm opts := newOptsCall(dirID, "GET", "/children?$top=1000") OUTER: for { var result api.ListChildrenResponse var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return found, errors.Wrap(err, "couldn't list files") } if len(result.Value) == 0 { break } for i := range result.Value { item := &result.Value[i] isFolder := item.GetFolder() != nil if isFolder { if filesOnly { continue } } else { if directoriesOnly { continue } } if item.Deleted != nil { continue } item.Name = f.opt.Enc.ToStandardName(item.GetName()) if fn(item) { found = true break OUTER } } if result.NextLink == "" { break } opts.Path = "" opts.RootURL = result.NextLink } return } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } var iErr error _, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool { if !f.opt.ExposeOneNoteFiles && info.GetPackageType() == api.PackageTypeOneNote { fs.Debugf(info.Name, "OneNote file not shown in directory listing") return false } remote := path.Join(dir, info.GetName()) folder := info.GetFolder() if folder != nil { // cache the directory ID for later lookups id := info.GetID() f.dirCache.Put(remote, id) d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id) d.SetItems(folder.ChildCount) entries = append(entries, d) } else { o, err := f.newObjectWithInfo(ctx, remote, info) if err != nil { iErr = err return true } entries = append(entries, o) } return false }) if err != nil { return nil, err } if iErr != nil { return nil, iErr } return entries, nil } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Returns the object, leaf, directoryID and error // // Used to create new objects func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) { // Create the directory for the object if it doesn't exist leaf, directoryID, err = f.dirCache.FindPath(ctx, remote, true) if err != nil { return nil, leaf, directoryID, err } // Temporary Object under construction o = &Object{ fs: f, remote: remote, } return o, leaf, directoryID, nil } // Put the object into the container // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { remote := src.Remote() size := src.Size() modTime := src.ModTime(ctx) o, _, _, err := f.createObject(ctx, remote, modTime, size) if err != nil { return nil, err } return o, o.Update(ctx, in, src, options...) 
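	// A usage sketch of Put from the generic layer (values illustrative;
	// object.NewStaticObjectInfo is from fs/object and is assumed here
	// rather than used elsewhere in this file):
	//
	//	src := object.NewStaticObjectInfo("dir/file.txt", time.Now(), int64(len(data)), true, nil, nil)
	//	obj, err := f.Put(ctx, bytes.NewReader(data), src)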
} // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { _, err := f.dirCache.FindDir(ctx, dir, true) return err } // deleteObject removes an object by ID func (f *Fs) deleteObject(ctx context.Context, id string) error { opts := newOptsCall(id, "DELETE", "") opts.NoResponse = true return f.pacer.Call(func() (bool, error) { resp, err := f.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in it func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { root := path.Join(f.root, dir) if root == "" { return errors.New("can't purge root directory") } dc := f.dirCache rootID, err := dc.FindDir(ctx, dir, false) if err != nil { return err } if check { // check to see if there are any items found, err := f.listAll(ctx, rootID, false, false, func(item *api.Item) bool { return true }) if err != nil { return err } if found { return fs.ErrorDirectoryNotEmpty } } err = f.deleteObject(ctx, rootID) if err != nil { return err } f.dirCache.FlushDir(dir) return nil } // Rmdir deletes the root folder // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision returns the precision of this Fs func (f *Fs) Precision() time.Duration { return time.Second } // waitForJob waits for the job with status in url to complete func (f *Fs) waitForJob(ctx context.Context, location string, o *Object) error { deadline := time.Now().Add(fs.Config.Timeout) for time.Now().Before(deadline) { var resp *http.Response var err error var body []byte err = f.pacer.Call(func() (bool, error) { resp, err = http.Get(location) if err != nil { return fserrors.ShouldRetry(err), err } body, err = rest.ReadBody(resp) return fserrors.ShouldRetry(err), err }) if err != nil { return err } // Try to decode the body first as an api.AsyncOperationStatus var status api.AsyncOperationStatus err = json.Unmarshal(body, &status) if err != nil { return errors.Wrapf(err, "async status result not JSON: %q", body) } switch status.Status { case "failed", "deleteFailed": // combined into one case - previously "failed" matched an empty case and was silently ignored return errors.Errorf("%s: async operation returned %q", o.remote, status.Status) case "completed": err = o.readMetaData(ctx) return errors.Wrapf(err, "async operation completed but readMetaData failed") } time.Sleep(1 * time.Second) } return errors.Errorf("async operation didn't complete after %v", fs.Config.Timeout) } // Copy src to this remote using server side copy operations.
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } err := srcObj.readMetaData(ctx) if err != nil { return nil, err } // Check we aren't overwriting a file on the same remote if srcObj.fs == f { srcPath := srcObj.rootPath() dstPath := f.rootPath(remote) if strings.ToLower(srcPath) == strings.ToLower(dstPath) { return nil, errors.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath) } } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // Copy the object opts := newOptsCall(srcObj.id, "POST", "/copy") opts.ExtraHeaders = map[string]string{"Prefer": "respond-async"} opts.NoResponse = true id, dstDriveID, _ := parseNormalizedID(directoryID) replacedLeaf := f.opt.Enc.FromStandardName(leaf) copyReq := api.CopyItemRequest{ Name: &replacedLeaf, ParentReference: api.ItemReference{ DriveID: dstDriveID, ID: id, }, } var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, ©Req, nil) return shouldRetry(resp, err) }) if err != nil { return nil, err } // read location header location := resp.Header.Get("Location") if location == "" { return nil, errors.New("didn't receive location header in copy response") } // Wait for job to finish err = f.waitForJob(ctx, location, dstObj) if err != nil { return nil, err } // Copy does NOT copy the modTime from the source and there seems to // be no way to set date before // This will create TWO versions on OneDrive err = dstObj.SetModTime(ctx, srcObj.ModTime(ctx)) if err != nil { return nil, err } return dstObj, nil } // Purge deletes all the files in the directory // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } id, dstDriveID, _ := parseNormalizedID(directoryID) _, srcObjDriveID, _ := parseNormalizedID(srcObj.id) if f.canonicalDriveID(dstDriveID) != srcObj.fs.canonicalDriveID(srcObjDriveID) { // https://docs.microsoft.com/en-us/graph/api/driveitem-move?view=graph-rest-1.0 // "Items cannot be moved between Drives using this request." 
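	// For example (illustrative IDs), with normalized IDs of the form
	// driveID#itemID (see parseNormalizedID below):
	//
	//	dst directory "b!abc#01DST" -> drive "b!abc"
	//	src object    "b!xyz#01SRC" -> drive "b!xyz"
	//
	// Since "b!abc" != "b!xyz", Move returns fs.ErrorCantMove and the
	// generic layer falls back to copy followed by delete.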
fs.Debugf(f, "Can't move files between drives (%q != %q)", dstDriveID, srcObjDriveID) return nil, fs.ErrorCantMove } // Move the object opts := newOptsCall(srcObj.id, "PATCH", "") move := api.MoveItemRequest{ Name: f.opt.Enc.FromStandardName(leaf), ParentReference: &api.ItemReference{ DriveID: dstDriveID, ID: id, }, // We set the mod time too as it gets reset otherwise FileSystemInfo: &api.FileSystemInfoFacet{ CreatedDateTime: api.Timestamp(srcObj.modTime), LastModifiedDateTime: api.Timestamp(srcObj.modTime), }, } var resp *http.Response var info api.Item err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &move, &info) return shouldRetry(resp, err) }) if err != nil { return nil, err } err = dstObj.setMetaData(&info) if err != nil { return nil, err } return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcID, _, _, dstDirectoryID, dstLeaf, err := f.dirCache.DirMove(ctx, srcFs.dirCache, srcFs.root, srcRemote, f.root, dstRemote) if err != nil { return err } parsedDstDirID, dstDriveID, _ := parseNormalizedID(dstDirectoryID) _, srcDriveID, _ := parseNormalizedID(srcID) if f.canonicalDriveID(dstDriveID) != srcFs.canonicalDriveID(srcDriveID) { // https://docs.microsoft.com/en-us/graph/api/driveitem-move?view=graph-rest-1.0 // "Items cannot be moved between Drives using this request." fs.Debugf(f, "Can't move directories between drives (%q != %q)", dstDriveID, srcDriveID) return fs.ErrorCantDirMove } // Get timestamps of src so they can be preserved srcInfo, _, err := srcFs.readMetaDataForPathRelativeToID(ctx, srcID, "") if err != nil { return err } // Do the move opts := newOptsCall(srcID, "PATCH", "") move := api.MoveItemRequest{ Name: f.opt.Enc.FromStandardName(dstLeaf), ParentReference: &api.ItemReference{ DriveID: dstDriveID, ID: parsedDstDirID, }, // We set the mod time too as it gets reset otherwise FileSystemInfo: &api.FileSystemInfoFacet{ CreatedDateTime: srcInfo.CreatedDateTime, LastModifiedDateTime: srcInfo.LastModifiedDateTime, }, } var resp *http.Response var info api.Item err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &move, &info) return shouldRetry(resp, err) }) if err != nil { return err } srcFs.dirCache.FlushDir(srcRemote) return nil } // DirCacheFlush resets the directory cache - used in testing as an // optional interface func (f *Fs) DirCacheFlush() { f.dirCache.ResetRoot() } // About gets quota information func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) { var drive api.Drive opts := rest.Opts{ Method: "GET", Path: "", } var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &drive) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "about failed") } q := drive.Quota // On (some?) 
Onedrive sharepoints these are all 0 so return unknown in that case if q.Total == 0 && q.Used == 0 && q.Deleted == 0 && q.Remaining == 0 { return &fs.Usage{}, nil } usage = &fs.Usage{ Total: fs.NewUsageValue(q.Total), // quota of bytes that can be used Used: fs.NewUsageValue(q.Used), // bytes in use Trashed: fs.NewUsageValue(q.Deleted), // bytes in trash Free: fs.NewUsageValue(q.Remaining), // bytes which can be uploaded before reaching the quota } return usage, nil } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { if f.driveType == driveTypePersonal { return hash.Set(hash.SHA1) } return hash.Set(QuickXorHashType) } // PublicLink returns a link for downloading without account. func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) { info, _, err := f.readMetaDataForPath(ctx, f.rootPath(remote)) if err != nil { return "", err } opts := newOptsCall(info.GetID(), "POST", "/createLink") share := api.CreateShareLinkRequest{ Type: "view", Scope: "anonymous", } var resp *http.Response var result api.CreateShareLinkResponse err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &share, &result) return shouldRetry(resp, err) }) if err != nil { fmt.Println(err) return "", err } return result.Link.WebURL, nil } // CleanUp deletes all the hidden files. func (f *Fs) CleanUp(ctx context.Context) error { token := make(chan struct{}, fs.Config.Checkers) var wg sync.WaitGroup err := walk.Walk(ctx, f, "", true, -1, func(path string, entries fs.DirEntries, err error) error { err = entries.ForObjectError(func(obj fs.Object) error { o, ok := obj.(*Object) if !ok { return errors.New("internal error: not a onedrive object") } wg.Add(1) token <- struct{}{} go func() { defer func() { <-token wg.Done() }() err := o.deleteVersions(ctx) if err != nil { fs.Errorf(o, "Failed to remove versions: %v", err) } }() return nil }) wg.Wait() return err }) return err } // Finds and removes any old versions for o func (o *Object) deleteVersions(ctx context.Context) error { opts := newOptsCall(o.id, "GET", "/versions") var versions api.VersionsResponse err := o.fs.pacer.Call(func() (bool, error) { resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &versions) return shouldRetry(resp, err) }) if err != nil { return err } if len(versions.Versions) < 2 { return nil } for _, version := range versions.Versions[1:] { err = o.deleteVersion(ctx, version.ID) if err != nil { return err } } return nil } // Finds and removes any old versions for o func (o *Object) deleteVersion(ctx context.Context, ID string) error { if operations.SkipDestructive(ctx, fmt.Sprintf("%s of %s", ID, o.remote), "delete version") { return nil } fs.Infof(o, "removing version %q", ID) opts := newOptsCall(o.id, "DELETE", "/versions/"+ID) opts.NoResponse = true return o.fs.pacer.Call(func() (bool, error) { resp, err := o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // rootPath returns a path for use in server given a remote func (f *Fs) rootPath(remote string) string { return f.rootSlash() + remote } // rootPath returns a path for use in local functions func (o *Object) rootPath() string { return 
o.fs.rootPath(o.remote) } // srvPath returns a path for use in server given a remote func (f *Fs) srvPath(remote string) string { return f.opt.Enc.FromStandardPath(f.rootSlash() + remote) } // srvPath returns a path for use in server func (o *Object) srvPath() string { return o.fs.srvPath(o.remote) } // Hash returns the SHA-1 of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if o.fs.driveType == driveTypePersonal { if t == hash.SHA1 { return o.sha1, nil } } else { if t == QuickXorHashType { return o.quickxorhash, nil } } return "", hash.ErrUnsupported } // Size returns the size of an object in bytes func (o *Object) Size() int64 { err := o.readMetaData(context.TODO()) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return 0 } return o.size } // setMetaData sets the metadata from info func (o *Object) setMetaData(info *api.Item) (err error) { if info.GetFolder() != nil { return errors.Wrapf(fs.ErrorNotAFile, "%q", o.remote) } o.hasMetaData = true o.size = info.GetSize() o.isOneNoteFile = info.GetPackageType() == api.PackageTypeOneNote // Docs: https://docs.microsoft.com/en-us/onedrive/developer/rest-api/resources/hashes // // We use SHA1 for onedrive personal and QuickXorHash for onedrive for business file := info.GetFile() if file != nil { o.mimeType = file.MimeType if file.Hashes.Sha1Hash != "" { o.sha1 = strings.ToLower(file.Hashes.Sha1Hash) } if file.Hashes.QuickXorHash != "" { h, err := base64.StdEncoding.DecodeString(file.Hashes.QuickXorHash) if err != nil { fs.Errorf(o, "Failed to decode QuickXorHash %q: %v", file.Hashes.QuickXorHash, err) } else { o.quickxorhash = hex.EncodeToString(h) } } } fileSystemInfo := info.GetFileSystemInfo() if fileSystemInfo != nil { o.modTime = time.Time(fileSystemInfo.LastModifiedDateTime) } else { o.modTime = time.Time(info.GetLastModifiedDateTime()) } o.id = info.GetID() return nil } // readMetaData gets the metadata if it hasn't already been fetched // // It also sets the info func (o *Object) readMetaData(ctx context.Context) (err error) { if o.hasMetaData { return nil } info, _, err := o.fs.readMetaDataForPath(ctx, o.rootPath()) if err != nil { if apiErr, ok := err.(*api.Error); ok { if apiErr.ErrorInfo.Code == "itemNotFound" { return fs.ErrorObjectNotFound } } return err } return o.setMetaData(info) } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // setModTime sets the modification time of the local fs object func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item, error) { var opts rest.Opts leaf, directoryID, _ := o.fs.dirCache.FindPath(ctx, o.remote, false) trueDirID, drive, rootURL := parseNormalizedID(directoryID) if drive != "" { opts = rest.Opts{ Method: "PATCH", RootURL: rootURL, Path: "/" + drive + "/items/" + trueDirID + ":/" + withTrailingColon(rest.URLPathEscape(o.fs.opt.Enc.FromStandardName(leaf))), } } else { opts = rest.Opts{ Method: "PATCH", Path: "/root:/" + withTrailingColon(rest.URLPathEscape(o.srvPath())), } } update := api.SetFileSystemInfo{ FileSystemInfo: api.FileSystemInfoFacet{ CreatedDateTime: api.Timestamp(modTime), LastModifiedDateTime: api.Timestamp(modTime), }, } var info *api.Item err :=
o.fs.pacer.Call(func() (bool, error) { resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info) return shouldRetry(resp, err) }) // Remove versions if required if o.fs.opt.NoVersions { err := o.deleteVersions(ctx) if err != nil { fs.Errorf(o, "Failed to remove versions: %v", err) } } return info, err } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { info, err := o.setModTime(ctx, modTime) if err != nil { return err } return o.setMetaData(info) } // Storable returns a boolean showing whether this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { if o.id == "" { return nil, errors.New("can't download - no id") } if o.isOneNoteFile { return nil, errors.New("can't open a OneNote file") } fs.FixRangeOption(options, o.size) var resp *http.Response opts := newOptsCall(o.id, "GET", "/content") opts.Options = options err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return nil, err } if resp.StatusCode == http.StatusOK && resp.ContentLength > 0 && resp.Header.Get("Content-Range") == "" { // Overwrite size with the actual size since size readings from OneDrive are unreliable. o.size = resp.ContentLength } return resp.Body, err } // createUploadSession creates an upload session for the object func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (response *api.CreateUploadResponse, err error) { leaf, directoryID, _ := o.fs.dirCache.FindPath(ctx, o.remote, false) id, drive, rootURL := parseNormalizedID(directoryID) var opts rest.Opts if drive != "" { opts = rest.Opts{ Method: "POST", RootURL: rootURL, Path: fmt.Sprintf("/%s/items/%s:/%s:/createUploadSession", drive, id, rest.URLPathEscape(o.fs.opt.Enc.FromStandardName(leaf))), } } else { opts = rest.Opts{ Method: "POST", Path: "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/createUploadSession", } } createRequest := api.CreateUploadRequest{} createRequest.Item.FileSystemInfo.CreatedDateTime = api.Timestamp(modTime) createRequest.Item.FileSystemInfo.LastModifiedDateTime = api.Timestamp(modTime) var resp *http.Response err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, &createRequest, &response) if apiErr, ok := err.(*api.Error); ok { if apiErr.ErrorInfo.Code == "nameAlreadyExists" { // Make the error more user-friendly err = errors.New(err.Error() + " (is it a OneNote file?)") } } return shouldRetry(resp, err) }) return response, err } // getPosition gets the current position in a multipart upload func (o *Object) getPosition(ctx context.Context, url string) (pos int64, err error) { opts := rest.Opts{ Method: "GET", RootURL: url, } var info api.UploadFragmentResponse var resp *http.Response err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(resp, err) }) if err != nil { return 0, err } if len(info.NextExpectedRanges) != 1 { return 0, errors.Errorf("bad number of ranges in upload position: %v", info.NextExpectedRanges) } position := info.NextExpectedRanges[0] i := strings.IndexByte(position, '-') if i < 0 { return 0, errors.Errorf("no '-' in next expected range: %q", position) } position = position[:i] pos, err = strconv.ParseInt(position, 10, 64) if err != nil { return 0, errors.Wrapf(err, "bad expected range: %q",
position) } return pos, nil } // uploadFragment uploads a part func (o *Object) uploadFragment(ctx context.Context, url string, start int64, totalSize int64, chunk io.ReadSeeker, chunkSize int64, options ...fs.OpenOption) (info *api.Item, err error) { // var response api.UploadFragmentResponse var resp *http.Response var body []byte var skip = int64(0) err = o.fs.pacer.Call(func() (bool, error) { toSend := chunkSize - skip opts := rest.Opts{ Method: "PUT", RootURL: url, ContentLength: &toSend, ContentRange: fmt.Sprintf("bytes %d-%d/%d", start+skip, start+chunkSize-1, totalSize), Body: chunk, Options: options, } _, _ = chunk.Seek(skip, io.SeekStart) resp, err = o.fs.srv.Call(ctx, &opts) if err != nil && resp != nil && resp.StatusCode == http.StatusRequestedRangeNotSatisfiable { fs.Debugf(o, "Received 416 error - reading current position from server: %v", err) pos, posErr := o.getPosition(ctx, url) if posErr != nil { fs.Debugf(o, "Failed to read position: %v", posErr) return false, posErr } skip = pos - start fs.Debugf(o, "Read position %d, chunk is %d..%d, bytes to skip = %d", pos, start, start+chunkSize, skip) switch { case skip < 0: return false, errors.Wrapf(err, "sent block already (skip %d < 0), can't rewind", skip) case skip > chunkSize: return false, errors.Wrapf(err, "position is in the future (skip %d > chunkSize %d), can't skip forward", skip, chunkSize) case skip == chunkSize: fs.Debugf(o, "Skipping chunk as already sent (skip %d == chunkSize %d)", skip, chunkSize) return false, nil } return true, errors.Wrapf(err, "retry this chunk skipping %d bytes", skip) } if err != nil { return shouldRetry(resp, err) } body, err = rest.ReadBody(resp) if err != nil { return shouldRetry(resp, err) } if resp.StatusCode == 200 || resp.StatusCode == 201 { // we are done :) // read the item info = &api.Item{} return false, json.Unmarshal(body, info) } return false, nil }) return info, err } // cancelUploadSession cancels an upload session func (o *Object) cancelUploadSession(ctx context.Context, url string) (err error) { opts := rest.Opts{ Method: "DELETE", RootURL: url, NoResponse: true, } var resp *http.Response err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) return } // uploadMultipart uploads a file using multipart upload func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64, modTime time.Time, options ...fs.OpenOption) (info *api.Item, err error) { if size <= 0 { return nil, errors.New("unknown-sized upload not supported") } // Create upload session fs.Debugf(o, "Starting multipart upload") session, err := o.createUploadSession(ctx, modTime) if err != nil { return nil, err } uploadURL := session.UploadURL // Cancel the session if something went wrong defer atexit.OnError(&err, func() { fs.Debugf(o, "Cancelling multipart upload: %v", err) cancelErr := o.cancelUploadSession(ctx, uploadURL) if cancelErr != nil { fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr) } })() // Upload the chunks remaining := size position := int64(0) for remaining > 0 { n := int64(o.fs.opt.ChunkSize) if remaining < n { n = remaining } seg := readers.NewRepeatableReader(io.LimitReader(in, n)) fs.Debugf(o, "Uploading segment %d/%d size %d", position, size, n) info, err = o.uploadFragment(ctx, uploadURL, position, size, seg, n, options...) 
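		// Worked example (illustrative, assuming the default 10 MiB
		// chunk_size): a 25 MiB (26214400 byte) upload goes out as
		// three fragments with Content-Range headers
		//
		//	bytes 0-10485759/26214400
		//	bytes 10485760-20971519/26214400
		//	bytes 20971520-26214399/26214400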
if err != nil { return nil, err } remaining -= n position += n } return info, nil } // Update the content of a remote file within 4MB size in one single request // This function will set modtime after uploading, which will create a new version for the remote file func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64, modTime time.Time, options ...fs.OpenOption) (info *api.Item, err error) { if size < 0 || size > int64(fs.SizeSuffix(4*1024*1024)) { return nil, errors.New("size passed into uploadSinglepart must be >= 0 and <= 4MiB") } fs.Debugf(o, "Starting singlepart upload") var resp *http.Response var opts rest.Opts leaf, directoryID, _ := o.fs.dirCache.FindPath(ctx, o.remote, false) trueDirID, drive, rootURL := parseNormalizedID(directoryID) if drive != "" { opts = rest.Opts{ Method: "PUT", RootURL: rootURL, Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(o.fs.opt.Enc.FromStandardName(leaf)) + ":/content", ContentLength: &size, Body: in, Options: options, } } else { opts = rest.Opts{ Method: "PUT", Path: "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/content", ContentLength: &size, Body: in, Options: options, } } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info) if apiErr, ok := err.(*api.Error); ok { if apiErr.ErrorInfo.Code == "nameAlreadyExists" { // Make the error more user-friendly err = errors.New(err.Error() + " (is it a OneNote file?)") } } return shouldRetry(resp, err) }) if err != nil { return nil, err } err = o.setMetaData(info) if err != nil { return nil, err } // Set the mod time now and read metadata return o.setModTime(ctx, modTime) } // Update the object with the contents of the io.Reader, modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { if o.hasMetaData && o.isOneNoteFile { return errors.New("can't upload content to a OneNote file") } o.fs.tokenRenewer.Start() defer o.fs.tokenRenewer.Stop() size := src.Size() modTime := src.ModTime(ctx) var info *api.Item if size > 0 { info, err = o.uploadMultipart(ctx, in, size, modTime, options...) } else if size == 0 { info, err = o.uploadSinglepart(ctx, in, size, modTime, options...) } else { return errors.New("unknown-sized upload not supported") } if err != nil { return err } // If updating the file then remove versions if o.fs.opt.NoVersions && o.hasMetaData { err = o.deleteVersions(ctx) if err != nil { fs.Errorf(o, "Failed to remove versions: %v", err) } } return o.setMetaData(info) } // Remove an object func (o *Object) Remove(ctx context.Context) error { return o.fs.deleteObject(ctx, o.id) } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.mimeType } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.id } func newOptsCall(normalizedID string, method string, route string) (opts rest.Opts) { id, drive, rootURL := parseNormalizedID(normalizedID) if drive != "" { return rest.Opts{ Method: method, RootURL: rootURL, Path: "/" + drive + "/items/" + id + route, } } return rest.Opts{ Method: method, Path: "/items/" + id + route, } } // parseNormalizedID parses a normalized ID (may be in the form `driveID#itemID` or just `itemID`) // and returns itemID, driveID, rootURL. 
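// For example (illustrative values):
//
//	parseNormalizedID("b!abc#01XYZ") // -> ("01XYZ", "b!abc", graphURL+"/drives")
//	parseNormalizedID("01XYZ")       // -> ("01XYZ", "", "")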
// Such a normalized ID can come from (*Item).GetID() func parseNormalizedID(ID string) (string, string, string) { if strings.Index(ID, "#") >= 0 { s := strings.Split(ID, "#") return s[1], s[0], graphURL + "/drives" } return ID, "", "" } // Returns the canonical form of the driveID func (f *Fs) canonicalDriveID(driveID string) (canonicalDriveID string) { if driveID == "" { canonicalDriveID = f.opt.DriveID } else { canonicalDriveID = driveID } canonicalDriveID = strings.ToLower(canonicalDriveID) return canonicalDriveID } // getRelativePathInsideBase checks if `target` is inside `base`. If so, it // returns a relative path for `target` based on `base` and a boolean `true`. // Otherwise returns "", false. func getRelativePathInsideBase(base, target string) (string, bool) { if base == "" { return target, true } baseSlash := base + "/" if strings.HasPrefix(target+"/", baseSlash) { return target[len(baseSlash):], true } return "", false } // Adds a ":" at the end of `remotePath` in a proper manner. // If `remotePath` already ends with "/", change it to ":/" // If `remotePath` is "", return "". // A workaround for #2720 and #3039 func withTrailingColon(remotePath string) string { if remotePath == "" { return "" } if strings.HasSuffix(remotePath, "/") { return remotePath[:len(remotePath)-1] + ":/" } return remotePath + ":" } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.MimeTyper = &Object{} _ fs.IDer = &Object{} ) rclone-1.53.3/backend/onedrive/onedrive_test.go000066400000000000000000000011541375552240400215000ustar00rootroot00000000000000// Test OneDrive filesystem interface package onedrive import ( "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestOneDrive:", NilObject: (*Object)(nil), ChunkedUpload: fstests.ChunkedUploadConfig{ CeilChunkSize: fstests.NextMultipleOf(chunkSizeMultiple), }, }) } func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadChunkSize(cs) } var _ fstests.SetUploadChunkSizer = (*Fs)(nil) rclone-1.53.3/backend/onedrive/quickxorhash/000077500000000000000000000000001375552240400210075ustar00rootroot00000000000000rclone-1.53.3/backend/onedrive/quickxorhash/quickxorhash.go000066400000000000000000000142011375552240400240450ustar00rootroot00000000000000// Package quickxorhash provides the quickXorHash algorithm which is a // quick, simple non-cryptographic hash algorithm that works by XORing // the bytes in a circular-shifting fashion. // // It is used by Microsoft Onedrive for Business to hash data. 
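// A minimal usage sketch of this package (OneDrive reports the hash
// base64-encoded, as seen in the backend's setMetaData above):
//
//	h := quickxorhash.New()
//	_, _ = h.Write(fileBytes)
//	b64 := base64.StdEncoding.EncodeToString(h.Sum(nil)) // 20-byte digest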
// // See: https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash package quickxorhash // This code was ported from the code snippet linked from // https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash // Which has the copyright // ------------------------------------------------------------------------------ // Copyright (c) 2016 Microsoft Corporation // // Permission is hereby granted, free of charge, to any person obtaining a copy // of this software and associated documentation files (the "Software"), to deal // in the Software without restriction, including without limitation the rights // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell // copies of the Software, and to permit persons to whom the Software is // furnished to do so, subject to the following conditions: // // The above copyright notice and this permission notice shall be included in // all copies or substantial portions of the Software. // // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN // THE SOFTWARE. // ------------------------------------------------------------------------------ import ( "hash" ) const ( // BlockSize is the preferred size for hashing BlockSize = 64 // Size of the output checksum Size = 20 bitsInLastCell = 32 shift = 11 widthInBits = 8 * Size dataSize = (widthInBits-1)/64 + 1 ) type quickXorHash struct { data [dataSize]uint64 lengthSoFar uint64 shiftSoFar int } // New returns a new hash.Hash computing the quickXorHash checksum. func New() hash.Hash { return &quickXorHash{} } // Write (via the embedded io.Writer interface) adds more data to the running hash. // It never returns an error. // // Write writes len(p) bytes from p to the underlying data stream. It returns // the number of bytes written from p (0 <= n <= len(p)) and any error // encountered that caused the write to stop early. Write must return a non-nil // error if it returns n < len(p). Write must not modify the slice data, even // temporarily. // // Implementations must not retain p. 
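// In outline (a schematic sketch, not the exact loop below): each input
// byte is XORed into a 160-bit (widthInBits) window whose write position
// advances shift (11) bits per byte and wraps around:
//
//	data[pos/64] ^= uint64(b) << (pos % 64) // spilling into the next cell as needed
//	pos = (pos + 11) % 160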
func (q *quickXorHash) Write(p []byte) (n int, err error) { currentshift := q.shiftSoFar // The bitvector where we'll start xoring vectorArrayIndex := currentshift / 64 // The position within the bit vector at which we begin xoring vectorOffset := currentshift % 64 iterations := len(p) if iterations > widthInBits { iterations = widthInBits } for i := 0; i < iterations; i++ { isLastCell := vectorArrayIndex == len(q.data)-1 var bitsInVectorCell int if isLastCell { bitsInVectorCell = bitsInLastCell } else { bitsInVectorCell = 64 } // There's at least 2 bitvectors before we reach the end of the array if vectorOffset <= bitsInVectorCell-8 { for j := i; j < len(p); j += widthInBits { q.data[vectorArrayIndex] ^= uint64(p[j]) << uint(vectorOffset) } } else { index1 := vectorArrayIndex var index2 int if isLastCell { index2 = 0 } else { index2 = vectorArrayIndex + 1 } low := byte(bitsInVectorCell - vectorOffset) xoredByte := byte(0) for j := i; j < len(p); j += widthInBits { xoredByte ^= p[j] } q.data[index1] ^= uint64(xoredByte) << uint(vectorOffset) q.data[index2] ^= uint64(xoredByte) >> low } vectorOffset += shift for vectorOffset >= bitsInVectorCell { if isLastCell { vectorArrayIndex = 0 } else { vectorArrayIndex = vectorArrayIndex + 1 } vectorOffset -= bitsInVectorCell } } // Update the starting position in a circular shift pattern q.shiftSoFar = (q.shiftSoFar + shift*(len(p)%widthInBits)) % widthInBits q.lengthSoFar += uint64(len(p)) return len(p), nil } // Calculate the current checksum func (q *quickXorHash) checkSum() (h [Size]byte) { // Output the data as little endian bytes ph := 0 for _, d := range q.data[:len(q.data)-1] { _ = h[ph+7] // bounds check h[ph+0] = byte(d >> (8 * 0)) h[ph+1] = byte(d >> (8 * 1)) h[ph+2] = byte(d >> (8 * 2)) h[ph+3] = byte(d >> (8 * 3)) h[ph+4] = byte(d >> (8 * 4)) h[ph+5] = byte(d >> (8 * 5)) h[ph+6] = byte(d >> (8 * 6)) h[ph+7] = byte(d >> (8 * 7)) ph += 8 } // remaining 32 bits d := q.data[len(q.data)-1] h[Size-4] = byte(d >> (8 * 0)) h[Size-3] = byte(d >> (8 * 1)) h[Size-2] = byte(d >> (8 * 2)) h[Size-1] = byte(d >> (8 * 3)) // XOR the file length with the least significant bits in little endian format d = q.lengthSoFar h[Size-8] ^= byte(d >> (8 * 0)) h[Size-7] ^= byte(d >> (8 * 1)) h[Size-6] ^= byte(d >> (8 * 2)) h[Size-5] ^= byte(d >> (8 * 3)) h[Size-4] ^= byte(d >> (8 * 4)) h[Size-3] ^= byte(d >> (8 * 5)) h[Size-2] ^= byte(d >> (8 * 6)) h[Size-1] ^= byte(d >> (8 * 7)) return h } // Sum appends the current hash to b and returns the resulting slice. // It does not change the underlying hash state. func (q *quickXorHash) Sum(b []byte) []byte { hash := q.checkSum() return append(b, hash[:]...) } // Reset resets the Hash to its initial state. func (q *quickXorHash) Reset() { *q = quickXorHash{} } // Size returns the number of bytes Sum will return. func (q *quickXorHash) Size() int { return Size } // BlockSize returns the hash's underlying block size. // The Write method must be able to accept any amount // of data, but it may operate more efficiently if all writes // are a multiple of the block size. func (q *quickXorHash) BlockSize() int { return BlockSize } // Sum returns the quickXorHash checksum of the data. 
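// For example, hashing the single byte 0x4A (base64 "Sg==") reproduces
// the second test vector in quickxorhash_test.go below:
//
//	digest := Sum([]byte{0x4A})
//	// base64.StdEncoding.EncodeToString(digest[:]) == "SgAAAAAAAAAAAAAAAQAAAAAAAAA="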
func Sum(data []byte) [Size]byte { var d quickXorHash _, _ = d.Write(data) return d.checkSum() } rclone-1.53.3/backend/onedrive/quickxorhash/quickxorhash_test.go000066400000000000000000000214721375552240400251140ustar00rootroot00000000000000package quickxorhash import ( "encoding/base64" "fmt" "hash" "testing" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) var testVectors = []struct { size int in string out string }{ {0, ``, "AAAAAAAAAAAAAAAAAAAAAAAAAAA="}, {1, `Sg==`, "SgAAAAAAAAAAAAAAAQAAAAAAAAA="}, {2, `tbQ=`, "taAFAAAAAAAAAAAAAgAAAAAAAAA="}, {3, `0pZP`, "0rDEEwAAAAAAAAAAAwAAAAAAAAA="}, {4, `jRRDVA==`, "jaDAEKgAAAAAAAAABAAAAAAAAAA="}, {5, `eAV52qE=`, "eChAHrQRCgAAAAAABQAAAAAAAAA="}, {6, `luBZlaT6`, "lgBHFipBCn0AAAAABgAAAAAAAAA="}, {7, `qaApEj66lw==`, "qQBFCiTgA11cAgAABwAAAAAAAAA="}, {8, `/aNzzCFPS/A=`, "/RjFHJgRgicsAR4ACAAAAAAAAAA="}, {9, `n6Neh7p6fFgm`, "nxiFFw6hCz3wAQsmCQAAAAAAAAA="}, {10, `J9iPGCbfZSTNyw==`, "J8DGIzBggm+UgQTNUgYAAAAAAAA="}, {11, `i+UZyUGJKh+ISbk=`, "iyhHBpIRhESo4AOIQ0IuAAAAAAA="}, {12, `h490d57Pqz5q2rtT`, "h3gEHe7giWeswgdq3MYupgAAAAA="}, {13, `vPgoDjOfO6fm71RxLw==`, "vMAHChwwg0/s4BTmdQcV4vACAAA="}, {14, `XoJ1AsoR4fDYJrDqYs4=`, "XhBEHQSgjAiEAx7YPgEs1CEGZwA="}, {15, `gQaybEqS/4UlDc8e4IJm`, "gDCALNigBEn8oxAlZ8AzPAAOQZg="}, {16, `2fuxhBJXtpWFe8dOfdGeHw==`, "O9tHLAghgSvYohKFyMMxnNCHaHg="}, {17, `XBV6YKU9V7yMakZnFIxIkuU=`, "HbplHsBQih5cgReMQYMRzkABRiA="}, {18, `XJZSOiNO2bmfKnTKD7fztcQX`, "/6ZArHQwAidkIxefQgEdlPGAW8w="}, {19, `g8VtAh+2Kf4k0kY5tzji2i2zmA==`, "wDNrgwHWAVukwB8kg4YRcnALHIg="}, {20, `T6LYJIfDh81JrAK309H2JMJTXis=`, "zBTHrspn3mEcohlJdIUAbjGNaNg="}, {21, `DWAAX5/CIfrmErgZa8ot6ZraeSbu`, "LR2Z0PjuRYGKQB/mhQAuMrAGZbQ="}, {22, `N9abi3qy/mC1THZuVLHPpx7SgwtLOA==`, "1KTYttCBEen8Hwy1doId3ECFWDw="}, {23, `LlUe7wHerLqEtbSZLZgZa9u0m7hbiFs=`, "TqVZpxs3cN61BnuFvwUtMtECTGQ="}, {24, `bU2j/0XYdgfPFD4691jV0AOUEUPR4Z5E`, "bnLBiLpVgnxVkXhNsIAPdHAPLFQ="}, {25, `lScPwPsyUsH2T1Qsr31wXtP55Wqbe47Uyg==`, "VDMSy8eI26nBHCB0e8gVWPCKPsA="}, {26, `rJaKh1dLR1k+4hynliTZMGf8Nd4qKKoZiAM=`, "r7bjwkl8OYQeNaMcCY8fTmEJEmQ="}, {27, `pPsT0CPmHrd3Frsnva1pB/z1ytARLeHEYRCo`, "Rdg7rCcDomL59pL0s6GuTvqLVqQ="}, {28, `wSRChaqmrsnMrfB2yqI43eRWbro+f9kBvh+01w==`, "YTtloIi6frI7HX3vdLvE7I2iUOA="}, {29, `apL67KMIRxQeE9k1/RuW09ppPjbF1WeQpTjSWtI=`, "CIpedls+ZlSQ654fl+X26+Q7LVU="}, {30, `53yx0/QgMTVb7OOzHRHbkS7ghyRc+sIXxi7XHKgT`, "zfJtLGFgR9DB3Q64fAFIp+S5iOY="}, {31, `PwXNnutoLLmxD8TTog52k8cQkukmT87TTnDipKLHQw==`, "PTaGs7yV3FUyBy/SfU6xJRlCJlI="}, {32, `NbYXsp5/K6mR+NmHwExjvWeWDJFnXTKWVlzYHoesp2E=`, "wjuAuWDiq04qDt1R8hHWDDcwVoQ="}, {33, `qQ70RB++JAR5ljNv3lJt1PpqETPsckopfonItu18Cr3E`, "FkJaeg/0Z5+euShYlLpE2tJh+Lo="}, {34, `RhzSatQTQ9/RFvpHyQa1WLdkr3nIk6MjJUma998YRtp44A==`, "SPN2D29reImAqJezlqV2DLbi8tk="}, {35, `DND1u1uZ5SqZVpRUk6NxSUdVo7IjjL9zs4A1evDNCDLcXWc=`, "S6lBk2hxI2SWBfn7nbEl7D19UUs="}, {36, `jEi62utFz69JMYHjg1iXy7oO6ZpZSLcVd2B+pjm6BGsv/CWi`, "s0lYU9tr/bp9xsnrrjYgRS5EvV8="}, {37, `hfS3DZZnhy0hv7nJdXLv/oJOtIgAuP9SInt/v8KeuO4/IvVh4A==`, "CV+HQCdd2A/e/vdi12f2UU55GLA="}, {38, `EkPQAC6ymuRrYjIXD/LT/4Vb+7aTjYVZOHzC8GPCEtYDP0+T3Nc=`, "kE9H9sEmr3vHBYUiPbvsrcDgSEo="}, {39, `vtBOGIENG7yQ/N7xNWPNIgy66Gk/I2Ur/ZhdFNUK9/1FCZuu/KeS`, "+Fgp3HBimtCzUAyiinj3pkarYTk="}, {40, `YnF4smoy9hox2jBlJ3VUa4qyCRhOZbWcmFGIiszTT4zAdYHsqJazyg==`, "arkIn+ELddmE8N34J9ydyFKW+9w="}, {41, `0n7nl3YJtipy6yeUbVPWtc2h45WbF9u8hTz5tNwj3dZZwfXWkk+GN3g=`, "YJLNK7JR64j9aODWfqDvEe/u6NU="}, {42, `FnIIPHayc1pHkY4Lh8+zhWwG8xk6Knk/D3cZU1/fOUmRAoJ6CeztvMOL`, "22RPOylMtdk7xO/QEQiMli4ql0k="}, {43, 
`J82VT7ND0Eg1MorSfJMUhn+qocF7PsUpdQAMrDiHJ2JcPZAHZ2nyuwjoKg==`, "pOR5eYfwCLRJbJsidpc1rIJYwtM="}, {44, `Zbu+78+e35ZIymV5KTDdub5McyI3FEO8fDxs62uWHQ9U3Oh3ZqgaZ30SnmQ=`, "DbvbTkgNTgWRqRidA9r1jhtUjro="}, {45, `lgybK3Da7LEeY5aeeNrqcdHvv6mD1W4cuQ3/rUj2C/CNcSI0cAMw6vtpVY3y`, "700RQByn1lRQSSme9npQB/Ye+bY="}, {46, `jStZgKHv4QyJLvF2bYbIUZi/FscHALfKHAssTXkrV1byVR9eACwW9DNZQRHQwg==`, "uwN55He8xgE4g93dH9163xPew4U="}, {47, `V1PSud3giF5WW72JB/bgtltsWtEB5V+a+wUALOJOGuqztzVXUZYrvoP3XV++gM0=`, "U+3ZfUF/6mwOoHJcSHkQkckfTDA="}, {48, `VXs4t4tfXGiWAL6dlhEMm0YQF0f2w9rzX0CvIVeuW56o6/ec2auMpKeU2VeteEK5`, "sq24lSf7wXLH8eigHl07X+qPTps="}, {49, `bLUn3jLH+HFUsG3ptWTHgNvtr3eEv9lfKBf0jm6uhpqhRwtbEQ7Ovj/hYQf42zfdtQ==`, "uC8xrnopGiHebGuwgq607WRQyxQ="}, {50, `4SVmjtXIL8BB8SfkbR5Cpaljm2jpyUfAhIBf65XmKxHlz9dy5XixgiE/q1lv+esZW/E=`, "wxZ0rxkMQEnRNAp8ZgEZLT4RdLM="}, {51, `pMljctlXeFUqbG3BppyiNbojQO3ygg6nZPeUZaQcVyJ+Clgiw3Q8ntLe8+02ZSfyCc39`, "aZEPmNvOXnTt7z7wt+ewV7QGMlg="}, {52, `C16uQlxsHxMWnV2gJhFPuJ2/guZ4N1YgmNvAwL1yrouGQtwieGx8WvZsmYRnX72JnbVtTw==`, "QtlSNqXhVij64MMhKJ3EsDFB/z8="}, {53, `7ZVDOywvrl3L0GyKjjcNg2CcTI81n2CeUbzdYWcZOSCEnA/xrNHpiK01HOcGh3BbxuS4S6g=`, "4NznNJc4nmXeApfiCFTq/H5LbHw="}, {54, `JXm2tTVqpYuuz2Cc+ZnPusUb8vccPGrzWK2oVwLLl/FjpFoxO9FxGlhnB08iu8Q/XQSdzHn+`, "IwE5+2pKNcK366I2k2BzZYPibSI="}, {55, `TiiU1mxzYBSGZuE+TX0l9USWBilQ7dEml5lLrzNPh75xmhjIK8SGqVAkvIMgAmcMB+raXdMPZg==`, "yECGHtgR128ScP4XlvF96eLbIBE="}, {56, `zz+Q4zi6wh0fCJUFU9yUOqEVxlIA93gybXHOtXIPwQQ44pW4fyh6BRgc1bOneRuSWp85hwlTJl8=`, "+3Ef4D6yuoC8J+rbFqU1cegverE="}, {57, `sa6SHK9z/G505bysK5KgRO2z2cTksDkLoFc7sv0tWBmf2G2mCiozf2Ce6EIO+W1fRsrrtn/eeOAV`, "xZg1CwMNAjN0AIXw2yh4+1N3oos="}, {58, `0qx0xdyTHhnKJ22IeTlAjRpWw6y2sOOWFP75XJ7cleGJQiV2kyrmQOST4DGHIL0qqA7sMOdzKyTV iw==`, "bS0tRYPkP1Gfc+ZsBm9PMzPunG8="}, {59, `QuzaF0+5ooig6OLEWeibZUENl8EaiXAQvK9UjBEauMeuFFDCtNcGs25BDtJGGbX90gH4VZvCCDNC q4s=`, "rggokuJq1OGNOfB6aDp2g4rdPgw="}, {60, `+wg2x23GZQmMLkdv9MeAdettIWDmyK6Wr+ba23XD+Pvvq1lIMn9QIQT4Z7QHJE3iC/ZMFgaId9VA yY3d`, "ahQbTmOdiKUNdhYRHgv5/Ky+Y6k="}, {61, `y0ydRgreRQwP95vpNP92ioI+7wFiyldHRbr1SfoPNdbKGFA0lBREaBEGNhf9yixmfE+Azo2AuROx b7Yc7g==`, "cJKFc0dXfiN4hMg1lcMf5E4gqvo="}, {62, `LxlVvGXSQlSubK8r0pGf9zf7s/3RHe75a2WlSXQf3gZFR/BtRnR7fCIcaG//CbGfodBFp06DBx/S 9hUV8Bk=`, "NwuwhhRWX8QZ/vhWKWgQ1+rNomI="}, {63, `L+LSB8kmGMnHaWVA5P/+qFnfQliXvgJW7d2JGAgT6+koi5NQujFW1bwQVoXrBVyob/gBxGizUoJM gid5gGNo`, "ndX/KZBtFoeO3xKeo1ajO/Jy+rY="}, {64, `Mb7EGva2rEE5fENDL85P+BsapHEEjv2/siVhKjvAQe02feExVOQSkfmuYzU/kTF1MaKjPmKF/w+c bvwfdWL8aQ==`, "n1anP5NfvD4XDYWIeRPW3ZkPv1Y="}, {111, `jyibxJSzO6ZiZ0O1qe3tG/bvIAYssvukh9suIT5wEy1JBINVgPiqdsTW0cOpP0aUfP7mgqLfADkz I/m/GgCuVhr8oFLrOCoTx1/psBOWwhltCbhUx51Icm9aH8tY4Z3ccU+6BKpYQkLCy0B/A9Zc`, "hZfLIilSITC6N3e3tQ/iSgEzkto="}, {128, `ikwCorI7PKWz17EI50jZCGbV9JU2E8bXVfxNMg5zdmqSZ2NlsQPp0kqYIPjzwTg1MBtfWPg53k0h 0P2naJNEVgrqpoHTfV2b3pJ4m0zYPTJmUX4Bg/lOxcnCxAYKU29Y5F0U8Quz7ZXFBEweftXxJ7RS 4r6N7BzJrPsLhY7hgck=`, "imAoFvCWlDn4yVw3/oq1PDbbm6U="}, {222, `PfxMcUd0vIW6VbHG/uj/Y0W6qEoKmyBD0nYebEKazKaKG+UaDqBEcmQjbfQeVnVLuodMoPp7P7TR 1htX5n2VnkHh22xDyoJ8C/ZQKiSNqQfXvh83judf4RVr9exJCud8Uvgip6aVZTaPrJHVjQhMCp/d EnGvqg0oN5OVkM2qqAXvA0teKUDhgNM71sDBVBCGXxNOR2bpbD1iM4dnuT0ey4L+loXEHTL0fqMe UcEi2asgImnlNakwenDzz0x57aBwyq3AspCFGB1ncX4yYCr/OaCcS5OKi/00WH+wNQU3`, "QX/YEpG0gDsmhEpCdWhsxDzsfVE="}, {256, `qwGf2ESubE5jOUHHyc94ORczFYYbc2OmEzo+hBIyzJiNwAzC8PvJqtTzwkWkSslgHFGWQZR2BV5+ uYTrYT7HVwRM40vqfj0dBgeDENyTenIOL1LHkjtDKoXEnQ0mXAHoJ8PjbNC93zi5TovVRXTNzfGE s5dpWVqxUzb5lc7dwkyvOluBw482mQ4xrzYyIY1t+//OrNi1ObGXuUw2jBQOFfJVj2Y6BOyYmfB1 
y36eBxi3zxeG5d5NYjm2GSh6e08QMAwu3zrINcqIzLOuNIiGXBtl7DjKt7b5wqi4oFiRpZsCyx2s mhSrdrtK/CkdU6nDN+34vSR/M8rZpWQdBE7a8g==`, "WYT9JY3JIo/pEBp+tIM6Gt2nyTM="}, {333, `w0LGhqU1WXFbdavqDE4kAjEzWLGGzmTNikzqnsiXHx2KRReKVTxkv27u3UcEz9+lbMvYl4xFf2Z4 aE1xRBBNd1Ke5C0zToSaYw5o4B/7X99nKK2/XaUX1byLow2aju2XJl2OpKpJg+tSJ2fmjIJTkfuY Uz574dFX6/VXxSxwGH/xQEAKS5TCsBK3CwnuG1p5SAsQq3gGVozDWyjEBcWDMdy8/AIFrj/y03Lf c/RNRCQTAfZbnf2QwV7sluw4fH3XJr07UoD0YqN+7XZzidtrwqMY26fpLZnyZjnBEt1FAZWO7RnK G5asg8xRk9YaDdedXdQSJAOy6bWEWlABj+tVAigBxavaluUH8LOj+yfCFldJjNLdi90fVHkUD/m4 Mr5OtmupNMXPwuG3EQlqWUVpQoYpUYKLsk7a5Mvg6UFkiH596y5IbJEVCI1Kb3D1`, "e3+wo77iKcILiZegnzyUNcjCdoQ="}, } func TestQuickXorHash(t *testing.T) { for _, test := range testVectors { what := fmt.Sprintf("test size %d", test.size) in, err := base64.StdEncoding.DecodeString(test.in) require.NoError(t, err, what) got := Sum(in) want, err := base64.StdEncoding.DecodeString(test.out) require.NoError(t, err, what) assert.Equal(t, want, got[:], what) } } func TestQuickXorHashByBlock(t *testing.T) { for _, blockSize := range []int{1, 2, 4, 7, 8, 16, 32, 64, 128, 256, 512} { for _, test := range testVectors { what := fmt.Sprintf("test size %d blockSize %d", test.size, blockSize) in, err := base64.StdEncoding.DecodeString(test.in) require.NoError(t, err, what) h := New() for i := 0; i < len(in); i += blockSize { end := i + blockSize if end > len(in) { end = len(in) } n, err := h.Write(in[i:end]) require.Equal(t, end-i, n, what) require.NoError(t, err, what) } got := h.Sum(nil) want, err := base64.StdEncoding.DecodeString(test.out) require.NoError(t, err, what) assert.Equal(t, want, got, test.size, what) } } } func TestSize(t *testing.T) { d := New() assert.Equal(t, 20, d.Size()) } func TestBlockSize(t *testing.T) { d := New() assert.Equal(t, 64, d.BlockSize()) } func TestReset(t *testing.T) { d := New() zeroHash := d.Sum(nil) _, _ = d.Write([]byte{1}) assert.NotEqual(t, zeroHash, d.Sum(nil)) d.Reset() assert.Equal(t, zeroHash, d.Sum(nil)) } // check interface var _ hash.Hash = (*quickXorHash)(nil) rclone-1.53.3/backend/opendrive/000077500000000000000000000000001375552240400164565ustar00rootroot00000000000000rclone-1.53.3/backend/opendrive/opendrive.go000066400000000000000000000734271375552240400210150ustar00rootroot00000000000000package opendrive import ( "context" "fmt" "io" "net/http" "net/url" "path" "strconv" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/rest" ) const ( defaultEndpoint = "https://dev.opendrive.com/api/v1" minSleep = 10 * time.Millisecond maxSleep = 5 * time.Minute decayConstant = 1 // bigger for slower decay, exponential ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "opendrive", Description: "OpenDrive", NewFs: NewFs, Options: []fs.Option{{ Name: "username", Help: "Username", Required: true, }, { Name: "password", Help: "Password.", IsPassword: true, Required: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // List of replaced characters: // < (less than) -> '<' // FULLWIDTH LESS-THAN SIGN // > 
(greater than) -> '＞' // FULLWIDTH GREATER-THAN SIGN // : (colon) -> '：' // FULLWIDTH COLON // " (double quote) -> '＂' // FULLWIDTH QUOTATION MARK // \ (backslash) -> '＼' // FULLWIDTH REVERSE SOLIDUS // | (vertical line) -> '｜' // FULLWIDTH VERTICAL LINE // ? (question mark) -> '？' // FULLWIDTH QUESTION MARK // * (asterisk) -> '＊' // FULLWIDTH ASTERISK // // Additionally names can't begin or end with an ASCII whitespace. // List of replaced characters: // (space) -> '␠' // SYMBOL FOR SPACE // (horizontal tab) -> '␉' // SYMBOL FOR HORIZONTAL TABULATION // (line feed) -> '␊' // SYMBOL FOR LINE FEED // (vertical tab) -> '␋' // SYMBOL FOR VERTICAL TABULATION // (carriage return) -> '␍' // SYMBOL FOR CARRIAGE RETURN // // Also encode invalid UTF-8 bytes as json doesn't handle them properly. // // https://www.opendrive.com/wp-content/uploads/guides/OpenDrive_API_guide.pdf Default: (encoder.Base | encoder.EncodeWin | encoder.EncodeLeftCrLfHtVt | encoder.EncodeRightCrLfHtVt | encoder.EncodeBackSlash | encoder.EncodeLeftSpace | encoder.EncodeRightSpace | encoder.EncodeInvalidUtf8), }, { Name: "chunk_size", Help: `Files will be uploaded in chunks this size. Note that these chunks are buffered in memory so increasing them will increase memory use.`, Default: 10 * fs.MebiByte, Advanced: true, }}, }) } // Options defines the configuration for this backend type Options struct { UserName string `config:"username"` Password string `config:"password"` Enc encoder.MultiEncoder `config:"encoding"` ChunkSize fs.SizeSuffix `config:"chunk_size"` } // Fs represents a remote server type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed options features *fs.Features // optional features srv *rest.Client // the connection to the server pacer *fs.Pacer // To pace and retry the API calls session UserSessionInfo // contains the session data dirCache *dircache.DirCache // Map of directory path to directory id } // Object describes an object type Object struct { fs *Fs // what this object is part of remote string // The remote path id string // ID of the file modTime time.Time // The modified time of the object if known md5 string // MD5 hash if known size int64 // Size of the object } // parsePath parses an incoming 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("OpenDrive root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // Hashes returns the supported hash sets.
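//
// Illustrative sketch (added; not part of the original source): a caller
// could check for MD5 support before asking an object for its checksum,
// where obj is an assumed fs.Object previously returned by this backend:
//
//	if f.Hashes().Contains(hash.MD5) {
//		md5sum, err := obj.Hash(ctx, hash.MD5)
//		_, _ = md5sum, err
//	}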
func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } // DirCacheFlush resets the directory cache - used in testing as an // optional interface func (f *Fs) DirCacheFlush() { f.dirCache.ResetRoot() } // NewFs constructs an Fs from the path, bucket:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } root = parsePath(root) if opt.UserName == "" { return nil, errors.New("username not found") } opt.Password, err = obscure.Reveal(opt.Password) if err != nil { return nil, errors.New("password could not be revealed") } if opt.Password == "" { return nil, errors.New("password not found") } f := &Fs{ name: name, root: root, opt: *opt, srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), } f.dirCache = dircache.New(root, "0", f) // set the rootURL for the REST client f.srv.SetRoot(defaultEndpoint) // get sessionID var resp *http.Response err = f.pacer.Call(func() (bool, error) { account := Account{Username: opt.UserName, Password: opt.Password} opts := rest.Opts{ Method: "POST", Path: "/session/login.json", } resp, err = f.srv.CallJSON(ctx, &opts, &account, &f.session) return f.shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "failed to create session") } fs.Debugf(nil, "Starting OpenDrive session with ID: %s", f.session.SessionID) f.features = (&fs.Features{ CaseInsensitive: true, CanHaveEmptyDirectories: true, }).Fill(f) // Find the current root err = f.dirCache.FindRoot(ctx, false) if err != nil { // Assume it is a file newRoot, remote := dircache.SplitPath(root) tempF := *f tempF.dirCache = dircache.New(newRoot, "0", &tempF) tempF.root = newRoot // Make new Fs which is the parent err = tempF.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f return f, nil } _, err := tempF.newObjectWithInfo(ctx, remote, nil) if err != nil { if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f return f, nil } return nil, err } // XXX: update the old f here instead of returning tempF, since // `features` were already filled with functions having *f as a receiver.
// See https://github.com/rclone/rclone/issues/2182 f.dirCache = tempF.dirCache f.root = tempF.root // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // rootSlash returns root with a slash on if it is empty, otherwise empty string func (f *Fs) rootSlash() string { if f.root == "" { return f.root } return f.root + "/" } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { errResponse := new(Error) err := rest.DecodeJSON(resp, &errResponse) if err != nil { fs.Debugf(nil, "Couldn't decode error response: %v", err) } if errResponse.Info.Code == 0 { errResponse.Info.Code = resp.StatusCode } if errResponse.Info.Message == "" { errResponse.Info.Message = "Unknown " + resp.Status } return errResponse } // Mkdir creates the folder if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { // fs.Debugf(nil, "Mkdir(\"%s\")", dir) _, err := f.dirCache.FindDir(ctx, dir, true) return err } // deleteObject removes an object by ID func (f *Fs) deleteObject(ctx context.Context, id string) error { return f.pacer.Call(func() (bool, error) { removeDirData := removeFolder{SessionID: f.session.SessionID, FolderID: id} opts := rest.Opts{ Method: "POST", NoResponse: true, Path: "/folder/remove.json", } resp, err := f.srv.CallJSON(ctx, &opts, &removeDirData, nil) return f.shouldRetry(resp, err) }) } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { root := path.Join(f.root, dir) if root == "" { return errors.New("can't purge root directory") } dc := f.dirCache rootID, err := dc.FindDir(ctx, dir, false) if err != nil { return err } item, err := f.readMetaDataForFolderID(ctx, rootID) if err != nil { return err } if check && len(item.Files) != 0 { return errors.New("folder not empty") } err = f.deleteObject(ctx, rootID) if err != nil { return err } f.dirCache.FlushDir(dir) return nil } // Rmdir deletes the root folder // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { // fs.Debugf(nil, "Rmdir(\"%s\")", path.Join(f.root, dir)) return f.purgeCheck(ctx, dir, true) } // Precision of the remote func (f *Fs) Precision() time.Duration { return time.Second } // Copy src to this remote using server side copy operations.
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { // fs.Debugf(nil, "Copy(%v)", remote) srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } err := srcObj.readMetaData(ctx) if err != nil { return nil, err } srcPath := srcObj.fs.rootSlash() + srcObj.remote dstPath := f.rootSlash() + remote if strings.ToLower(srcPath) == strings.ToLower(dstPath) { return nil, errors.Errorf("Can't copy %q -> %q as are same name when lowercase", srcPath, dstPath) } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // fs.Debugf(nil, "...%#v\n...%#v", remote, directoryID) // Copy the object var resp *http.Response response := moveCopyFileResponse{} err = f.pacer.Call(func() (bool, error) { copyFileData := moveCopyFile{ SessionID: f.session.SessionID, SrcFileID: srcObj.id, DstFolderID: directoryID, Move: "false", OverwriteIfExists: "true", NewFileName: leaf, } opts := rest.Opts{ Method: "POST", Path: "/file/move_copy.json", } resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response) return f.shouldRetry(resp, err) }) if err != nil { return nil, err } size, _ := strconv.ParseInt(response.Size, 10, 64) dstObj.id = response.FileID dstObj.size = size return dstObj, nil } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { // fs.Debugf(nil, "Move(%v)", remote) srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } err := srcObj.readMetaData(ctx) if err != nil { return nil, err } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // Copy the object var resp *http.Response response := moveCopyFileResponse{} err = f.pacer.Call(func() (bool, error) { copyFileData := moveCopyFile{ SessionID: f.session.SessionID, SrcFileID: srcObj.id, DstFolderID: directoryID, Move: "true", OverwriteIfExists: "true", NewFileName: leaf, } opts := rest.Opts{ Method: "POST", Path: "/file/move_copy.json", } resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response) return f.shouldRetry(resp, err) }) if err != nil { return nil, err } size, _ := strconv.ParseInt(response.Size, 10, 64) dstObj.id = response.FileID dstObj.size = size return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations.
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcID, _, _, dstDirectoryID, dstLeaf, err := f.dirCache.DirMove(ctx, srcFs.dirCache, srcFs.root, srcRemote, f.root, dstRemote) if err != nil { return err } // Do the move var resp *http.Response response := moveCopyFolderResponse{} err = f.pacer.Call(func() (bool, error) { moveFolderData := moveCopyFolder{ SessionID: f.session.SessionID, FolderID: srcID, DstFolderID: dstDirectoryID, Move: "true", NewFolderName: dstLeaf, } opts := rest.Opts{ Method: "POST", Path: "/folder/move_copy.json", } resp, err = f.srv.CallJSON(ctx, &opts, &moveFolderData, &response) return f.shouldRetry(resp, err) }) if err != nil { fs.Debugf(src, "DirMove error %v", err) return err } srcFs.dirCache.FlushDir(srcRemote) return nil } // Purge deletes all the files in the directory // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *File) (fs.Object, error) { // fs.Debugf(nil, "newObjectWithInfo(%s, %v)", remote, file) var o *Object if nil != file { o = &Object{ fs: f, remote: remote, id: file.FileID, modTime: time.Unix(file.DateModified, 0), size: file.Size, md5: file.FileHash, } } else { o = &Object{ fs: f, remote: remote, } err := o.readMetaData(ctx) if err != nil { return nil, err } } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. 
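//
// Illustrative sketch (added; assumed caller code, not from the source):
//
//	obj, err := f.NewObject(ctx, "dir/file.txt")
//	if err == fs.ErrorObjectNotFound {
//		// "dir/file.txt" does not exist on this remote
//	}
//	_ = obj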
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { // fs.Debugf(nil, "NewObject(\"%s\")", remote) return f.newObjectWithInfo(ctx, remote, nil) } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Returns the object, leaf, directoryID and error // // Used to create new objects func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) { // Create the directory for the object if it doesn't exist leaf, directoryID, err = f.dirCache.FindPath(ctx, remote, true) if err != nil { return nil, leaf, directoryID, err } // fs.Debugf(nil, "\n...leaf %#v\n...id %#v", leaf, directoryID) // Temporary Object under construction o = &Object{ fs: f, remote: remote, } return o, f.opt.Enc.FromStandardName(leaf), directoryID, nil } // readMetaDataForFolderID reads the metadata for the folder with the given ID func (f *Fs) readMetaDataForFolderID(ctx context.Context, id string) (info *FolderList, err error) { var resp *http.Response opts := rest.Opts{ Method: "GET", Path: "/folder/list.json/" + f.session.SessionID + "/" + id, } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) return f.shouldRetry(resp, err) }) if err != nil { return nil, err } return info, err } // Put the object into the bucket // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { remote := src.Remote() size := src.Size() modTime := src.ModTime(ctx) // fs.Debugf(nil, "Put(%s)", remote) o, leaf, directoryID, err := f.createObject(ctx, remote, modTime, size) if err != nil { return nil, err } if "" == o.id { // Attempt to read ID, ignore error // FIXME is this correct? _ = o.readMetaData(ctx) } if "" == o.id { // We need to create an ID for this file var resp *http.Response response := createFileResponse{} err := o.fs.pacer.Call(func() (bool, error) { createFileData := createFile{ SessionID: o.fs.session.SessionID, FolderID: directoryID, Name: leaf, } opts := rest.Opts{ Method: "POST", Options: options, Path: "/upload/create_file.json", } resp, err = o.fs.srv.CallJSON(ctx, &opts, &createFileData, &response) return o.fs.shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "failed to create file") } o.id = response.FileID } return o, o.Update(ctx, in, src, options...) } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 401, // Unauthorized (seen in "Token has expired") 408, // Request Timeout 423, // Locked - get this on folders sometimes 429, // Rate exceeded. 500, // Get occasional 500 Internal Server Error 502, // Bad Gateway when doing big listings 503, // Service Unavailable 504, // Gateway Time-out } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried.
It returns the err as a convenience func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) { return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // DirCacher methods // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) { // fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, replaceReservedChars(leaf)) var resp *http.Response response := createFolderResponse{} err = f.pacer.Call(func() (bool, error) { createDirData := createFolder{ SessionID: f.session.SessionID, FolderName: f.opt.Enc.FromStandardName(leaf), FolderSubParent: pathID, FolderIsPublic: 0, FolderPublicUpl: 0, FolderPublicDisplay: 0, FolderPublicDnl: 0, } opts := rest.Opts{ Method: "POST", Path: "/folder.json", } resp, err = f.srv.CallJSON(ctx, &opts, &createDirData, &response) return f.shouldRetry(resp, err) }) if err != nil { return "", err } return response.FolderID, nil } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { // fs.Debugf(nil, "FindLeaf(\"%s\", \"%s\")", pathID, leaf) if pathID == "0" && leaf == "" { // fs.Debugf(nil, "Found OpenDrive root") // that's the root directory return pathID, true, nil } // get the folderIDs var resp *http.Response folderList := FolderList{} err = f.pacer.Call(func() (bool, error) { opts := rest.Opts{ Method: "GET", Path: "/folder/list.json/" + f.session.SessionID + "/" + pathID, } resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList) return f.shouldRetry(resp, err) }) if err != nil { return "", false, errors.Wrap(err, "failed to get folder list") } leaf = f.opt.Enc.FromStandardName(leaf) for _, folder := range folderList.Folders { // fs.Debugf(nil, "Folder: %s (%s)", folder.Name, folder.FolderID) if leaf == folder.Name { // found return folder.FolderID, true, nil } } return "", false, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. 
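//
// Illustrative sketch (added; assumed caller code, not from the source):
//
//	entries, err := f.List(ctx, "")
//	if err == nil {
//		for _, entry := range entries {
//			fmt.Println(entry.Remote())
//		}
//	}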
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { // fs.Debugf(nil, "List(%v)", dir) directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } var resp *http.Response opts := rest.Opts{ Method: "GET", Path: "/folder/list.json/" + f.session.SessionID + "/" + directoryID, } folderList := FolderList{} err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList) return f.shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "failed to get folder list") } for _, folder := range folderList.Folders { folder.Name = f.opt.Enc.ToStandardName(folder.Name) // fs.Debugf(nil, "Folder: %s (%s)", folder.Name, folder.FolderID) remote := path.Join(dir, folder.Name) // cache the directory ID for later lookups f.dirCache.Put(remote, folder.FolderID) d := fs.NewDir(remote, time.Unix(folder.DateModified, 0)).SetID(folder.FolderID) d.SetItems(int64(folder.ChildFolders)) entries = append(entries, d) } for _, file := range folderList.Files { file.Name = f.opt.Enc.ToStandardName(file.Name) // fs.Debugf(nil, "File: %s (%s)", file.Name, file.FileID) remote := path.Join(dir, file.Name) o, err := f.newObjectWithInfo(ctx, remote, &file) if err != nil { return nil, err } entries = append(entries, o) } return entries, nil } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the Md5sum of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } return o.md5, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return o.size // Object is likely PENDING } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { return o.modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { // fs.Debugf(nil, "SetModTime(%v)", modTime.String()) opts := rest.Opts{ Method: "PUT", NoResponse: true, Path: "/file/filesettings.json", } update := modTimeFile{ SessionID: o.fs.session.SessionID, FileID: o.id, FileModificationTime: strconv.FormatInt(modTime.Unix(), 10), } err := o.fs.pacer.Call(func() (bool, error) { resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, nil) return o.fs.shouldRetry(resp, err) }) o.modTime = modTime return err } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { // fs.Debugf(nil, "Open(\"%v\")", o.remote) fs.FixRangeOption(options, o.size) opts := rest.Opts{ Method: "GET", Path: "/download/file.json/" + o.id + "?session_id=" + o.fs.session.SessionID, Options: options, } var resp *http.Response err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return o.fs.shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "failed to open file") } return resp.Body, nil } // Remove an object func (o *Object) Remove(ctx context.Context) error { // fs.Debugf(nil, "Remove(\"%s\")", o.id) return
o.fs.pacer.Call(func() (bool, error) { opts := rest.Opts{ Method: "DELETE", NoResponse: true, Path: "/file.json/" + o.fs.session.SessionID + "/" + o.id, } resp, err := o.fs.srv.Call(ctx, &opts) return o.fs.shouldRetry(resp, err) }) } // Storable returns a boolean showing whether this object is storable func (o *Object) Storable() bool { return true } // Update the object with the contents of the io.Reader, modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { size := src.Size() modTime := src.ModTime(ctx) // fs.Debugf(nil, "Update(\"%s\", \"%s\")", o.id, o.remote) // Open file for upload var resp *http.Response openResponse := openUploadResponse{} err := o.fs.pacer.Call(func() (bool, error) { openUploadData := openUpload{SessionID: o.fs.session.SessionID, FileID: o.id, Size: size} // fs.Debugf(nil, "PreOpen: %#v", openUploadData) opts := rest.Opts{ Method: "POST", Options: options, Path: "/upload/open_file_upload.json", } resp, err := o.fs.srv.CallJSON(ctx, &opts, &openUploadData, &openResponse) return o.fs.shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "failed to create file") } // resp.Body.Close() // fs.Debugf(nil, "PostOpen: %#v", openResponse) buf := make([]byte, o.fs.opt.ChunkSize) chunkOffset := int64(0) remainingBytes := size chunkCounter := 0 for remainingBytes > 0 { currentChunkSize := int64(o.fs.opt.ChunkSize) if currentChunkSize > remainingBytes { currentChunkSize = remainingBytes } remainingBytes -= currentChunkSize fs.Debugf(o, "Uploading chunk %d, size=%d, remain=%d", chunkCounter, currentChunkSize, remainingBytes) chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, currentChunkSize) var reply uploadFileChunkReply err = o.fs.pacer.Call(func() (bool, error) { // seek to the start in case this is a retry if _, err = chunk.Seek(0, io.SeekStart); err != nil { return false, err } opts := rest.Opts{ Method: "POST", Path: "/upload/upload_file_chunk.json", Body: chunk, MultipartParams: url.Values{ "session_id": []string{o.fs.session.SessionID}, "file_id": []string{o.id}, "temp_location": []string{openResponse.TempLocation}, "chunk_offset": []string{strconv.FormatInt(chunkOffset, 10)}, "chunk_size": []string{strconv.FormatInt(currentChunkSize, 10)}, }, MultipartContentName: "file_data", // ..name of the parameter which is the attached file MultipartFileName: o.remote, // ..name of the file for the attached file } resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &reply) return o.fs.shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "failed to create file") } if reply.TotalWritten != currentChunkSize { return errors.Errorf("failed to create file: incomplete write of %d/%d bytes", reply.TotalWritten, currentChunkSize) } chunkCounter++ chunkOffset += currentChunkSize } // Close file for upload closeResponse := closeUploadResponse{} err = o.fs.pacer.Call(func() (bool, error) { closeUploadData := closeUpload{SessionID: o.fs.session.SessionID, FileID: o.id, Size: size, TempLocation: openResponse.TempLocation} // fs.Debugf(nil, "PreClose: %#v", closeUploadData) opts := rest.Opts{ Method: "POST", Path: "/upload/close_file_upload.json", } resp, err = o.fs.srv.CallJSON(ctx, &opts, &closeUploadData, &closeResponse) return o.fs.shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "failed to create file") } // fs.Debugf(nil, "PostClose: %#v", closeResponse) o.id = closeResponse.FileID o.size = closeResponse.Size
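// Note (added comment): close_file_upload commits the chunks written in the
// loop above, and its response echoes the final file ID and size, which are
// recorded here before the modification time and permissions are set below.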
// Set the mod time now err = o.SetModTime(ctx, modTime) if err != nil { return err } // Set permissions err = o.fs.pacer.Call(func() (bool, error) { update := permissions{SessionID: o.fs.session.SessionID, FileID: o.id, FileIsPublic: 0} // fs.Debugf(nil, "Permissions : %#v", update) opts := rest.Opts{ Method: "POST", NoResponse: true, Path: "/file/access.json", } resp, err = o.fs.srv.CallJSON(ctx, &opts, &update, nil) return o.fs.shouldRetry(resp, err) }) if err != nil { return err } return o.readMetaData(ctx) } func (o *Object) readMetaData(ctx context.Context) (err error) { leaf, directoryID, err := o.fs.dirCache.FindPath(ctx, o.remote, false) if err != nil { if err == fs.ErrorDirNotFound { return fs.ErrorObjectNotFound } return err } var resp *http.Response folderList := FolderList{} err = o.fs.pacer.Call(func() (bool, error) { opts := rest.Opts{ Method: "GET", Path: fmt.Sprintf("/folder/itembyname.json/%s/%s?name=%s", o.fs.session.SessionID, directoryID, url.QueryEscape(o.fs.opt.Enc.FromStandardName(leaf))), } resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &folderList) return o.fs.shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "failed to get folder list") } if len(folderList.Files) == 0 { return fs.ErrorObjectNotFound } leafFile := folderList.Files[0] o.id = leafFile.FileID o.modTime = time.Unix(leafFile.DateModified, 0) o.md5 = leafFile.FileHash o.size = leafFile.Size return nil } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.id } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.IDer = (*Object)(nil) ) rclone-1.53.3/backend/opendrive/opendrive_test.go000066400000000000000000000005761375552240400220470ustar00rootroot00000000000000// Test Opendrive filesystem interface package opendrive_test import ( "testing" "github.com/rclone/rclone/backend/opendrive" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestOpenDrive:", NilObject: (*opendrive.Object)(nil), }) } rclone-1.53.3/backend/opendrive/types.go000066400000000000000000000162401375552240400201540ustar00rootroot00000000000000package opendrive import ( "encoding/json" "fmt" ) // Error describes an openDRIVE error response type Error struct { Info struct { Code int `json:"code"` Message string `json:"message"` } `json:"error"` } // Error satisfies the error interface func (e *Error) Error() string { return fmt.Sprintf("%s (Error %d)", e.Info.Message, e.Info.Code) } // Account describes an OpenDRIVE account type Account struct { Username string `json:"username"` Password string `json:"passwd"` } // UserSessionInfo describes an OpenDRIVE session type UserSessionInfo struct { Username string `json:"username"` Password string `json:"passwd"` SessionID string `json:"SessionID"` UserName string `json:"UserName"` UserFirstName string `json:"UserFirstName"` UserLastName string `json:"UserLastName"` AccType string `json:"AccType"` UserLang string `json:"UserLang"` UserID string `json:"UserID"` IsAccountUser json.RawMessage `json:"IsAccountUser"` DriveName string `json:"DriveName"` UserLevel string `json:"UserLevel"` UserPlan string `json:"UserPlan"` FVersioning string `json:"FVersioning"` UserDomain string `json:"UserDomain"` PartnerUsersDomain string 
`json:"PartnerUsersDomain"` } // FolderList describes an OpenDRIVE listing type FolderList struct { // DirUpdateTime string `json:"DirUpdateTime,string"` Name string `json:"Name"` ParentFolderID string `json:"ParentFolderID"` DirectFolderLink string `json:"DirectFolderLink"` ResponseType int `json:"ResponseType"` Folders []Folder `json:"Folders"` Files []File `json:"Files"` } // Folder describes an OpenDRIVE folder type Folder struct { FolderID string `json:"FolderID"` Name string `json:"Name"` DateCreated int `json:"DateCreated"` DirUpdateTime int `json:"DirUpdateTime"` Access int `json:"Access"` DateModified int64 `json:"DateModified"` Shared string `json:"Shared"` ChildFolders int `json:"ChildFolders"` Link string `json:"Link"` Encrypted string `json:"Encrypted"` } type createFolder struct { SessionID string `json:"session_id"` FolderName string `json:"folder_name"` FolderSubParent string `json:"folder_sub_parent"` FolderIsPublic int64 `json:"folder_is_public"` // (0 = private, 1 = public, 2 = hidden) FolderPublicUpl int64 `json:"folder_public_upl"` // (0 = disabled, 1 = enabled) FolderPublicDisplay int64 `json:"folder_public_display"` // (0 = disabled, 1 = enabled) FolderPublicDnl int64 `json:"folder_public_dnl"` // (0 = disabled, 1 = enabled). } type createFolderResponse struct { FolderID string `json:"FolderID"` Name string `json:"Name"` DateCreated int `json:"DateCreated"` DirUpdateTime int `json:"DirUpdateTime"` Access int `json:"Access"` DateModified int `json:"DateModified"` Shared string `json:"Shared"` Description string `json:"Description"` Link string `json:"Link"` } type moveCopyFolder struct { SessionID string `json:"session_id"` FolderID string `json:"folder_id"` DstFolderID string `json:"dst_folder_id"` Move string `json:"move"` NewFolderName string `json:"new_folder_name"` // New name for destination folder. } type moveCopyFolderResponse struct { FolderID string `json:"FolderID"` } type removeFolder struct { SessionID string `json:"session_id"` FolderID string `json:"folder_id"` } // File describes an OpenDRIVE file type File struct { FileID string `json:"FileId"` FileHash string `json:"FileHash"` Name string `json:"Name"` GroupID int `json:"GroupID"` Extension string `json:"Extension"` Size int64 `json:"Size,string"` Views string `json:"Views"` Version string `json:"Version"` Downloads string `json:"Downloads"` DateModified int64 `json:"DateModified,string"` Access string `json:"Access"` Link string `json:"Link"` DownloadLink string `json:"DownloadLink"` StreamingLink string `json:"StreamingLink"` TempStreamingLink string `json:"TempStreamingLink"` EditLink string `json:"EditLink"` ThumbLink string `json:"ThumbLink"` Password string `json:"Password"` EditOnline int `json:"EditOnline"` } type moveCopyFile struct { SessionID string `json:"session_id"` SrcFileID string `json:"src_file_id"` DstFolderID string `json:"dst_folder_id"` Move string `json:"move"` OverwriteIfExists string `json:"overwrite_if_exists"` NewFileName string `json:"new_file_name"` // New name for destination file. 
} type moveCopyFileResponse struct { FileID string `json:"FileID"` Size string `json:"Size"` } type createFile struct { SessionID string `json:"session_id"` FolderID string `json:"folder_id"` Name string `json:"file_name"` } type createFileResponse struct { FileID string `json:"FileId"` Name string `json:"Name"` GroupID int `json:"GroupID"` Extension string `json:"Extension"` Size string `json:"Size"` Views string `json:"Views"` Downloads string `json:"Downloads"` DateModified string `json:"DateModified"` Access string `json:"Access"` Link string `json:"Link"` DownloadLink string `json:"DownloadLink"` StreamingLink string `json:"StreamingLink"` TempStreamingLink string `json:"TempStreamingLink"` DirUpdateTime int `json:"DirUpdateTime"` TempLocation string `json:"TempLocation"` SpeedLimit int `json:"SpeedLimit"` RequireCompression int `json:"RequireCompression"` RequireHash int `json:"RequireHash"` RequireHashOnly int `json:"RequireHashOnly"` } type modTimeFile struct { SessionID string `json:"session_id"` FileID string `json:"file_id"` FileModificationTime string `json:"file_modification_time"` } type openUpload struct { SessionID string `json:"session_id"` FileID string `json:"file_id"` Size int64 `json:"file_size"` } type openUploadResponse struct { TempLocation string `json:"TempLocation"` RequireCompression bool `json:"RequireCompression"` RequireHash bool `json:"RequireHash"` RequireHashOnly bool `json:"RequireHashOnly"` SpeedLimit int `json:"SpeedLimit"` } type closeUpload struct { SessionID string `json:"session_id"` FileID string `json:"file_id"` Size int64 `json:"file_size"` TempLocation string `json:"temp_location"` } type closeUploadResponse struct { FileID string `json:"FileID"` FileHash string `json:"FileHash"` Size int64 `json:"Size"` } type permissions struct { SessionID string `json:"session_id"` FileID string `json:"file_id"` FileIsPublic int64 `json:"file_ispublic"` } type uploadFileChunkReply struct { TotalWritten int64 `json:"TotalWritten"` } rclone-1.53.3/backend/pcloud/000077500000000000000000000000001375552240400157515ustar00rootroot00000000000000rclone-1.53.3/backend/pcloud/api/000077500000000000000000000000001375552240400165225ustar00rootroot00000000000000rclone-1.53.3/backend/pcloud/api/types.go000066400000000000000000000125571375552240400202260ustar00rootroot00000000000000// Package api has type definitions for pcloud // // Converted from the API docs with help from https://mholt.github.io/json-to-go/ package api import ( "fmt" "time" ) const ( // Sun, 16 Mar 2014 17:26:04 +0000 timeFormat = `"` + time.RFC1123Z + `"` ) // Time represents date and time information for the // pcloud API, by using RFC1123Z type Time time.Time // MarshalJSON turns a Time into JSON (in UTC) func (t *Time) MarshalJSON() (out []byte, err error) { timeString := (*time.Time)(t).Format(timeFormat) return []byte(timeString), nil } // UnmarshalJSON turns JSON into a Time func (t *Time) UnmarshalJSON(data []byte) error { newT, err := time.Parse(timeFormat, string(data)) if err != nil { return err } *t = Time(newT) return nil } // Error is returned from pcloud when things go wrong // // If result is 0 then everything is OK type Error struct { Result int `json:"result"` ErrorString string `json:"error"` } // Error returns a string for the error and satisfies the error interface func (e *Error) Error() string { return fmt.Sprintf("pcloud error: %s (%d)", e.ErrorString, e.Result) } // Update returns err directly if it was != nil, otherwise it returns // an Error or nil if no error was
detected func (e *Error) Update(err error) error { if err != nil { return err } if e.Result == 0 { return nil } return e } // Check Error satisfies the error interface var _ error = (*Error)(nil) // Item describes a folder or a file as returned by Get Folder Items and others type Item struct { Path string `json:"path"` Name string `json:"name"` Created Time `json:"created"` IsMine bool `json:"ismine"` Thumb bool `json:"thumb"` Modified Time `json:"modified"` Comments int `json:"comments"` ID string `json:"id"` IsShared bool `json:"isshared"` IsDeleted bool `json:"isdeleted"` Icon string `json:"icon"` IsFolder bool `json:"isfolder"` ParentFolderID int64 `json:"parentfolderid"` FolderID int64 `json:"folderid,omitempty"` Height int `json:"height,omitempty"` FileID int64 `json:"fileid,omitempty"` Width int `json:"width,omitempty"` Hash uint64 `json:"hash,omitempty"` Category int `json:"category,omitempty"` Size int64 `json:"size,omitempty"` ContentType string `json:"contenttype,omitempty"` Contents []Item `json:"contents"` } // ModTime returns the modification time of the item func (i *Item) ModTime() (t time.Time) { t = time.Time(i.Modified) if t.IsZero() { t = time.Time(i.Created) } return t } // ItemResult is returned from the /listfolder, /createfolder, /deletefolder, /deletefile etc methods type ItemResult struct { Error Metadata Item `json:"metadata"` } // Hashes contains the supported hashes type Hashes struct { SHA1 string `json:"sha1"` MD5 string `json:"md5"` } // UploadFileResponse is the response from /uploadfile type UploadFileResponse struct { Error Items []Item `json:"metadata"` Checksums []Hashes `json:"checksums"` Fileids []int64 `json:"fileids"` } // GetFileLinkResult is returned from /getfilelink type GetFileLinkResult struct { Error Dwltag string `json:"dwltag"` Hash uint64 `json:"hash"` Size int64 `json:"size"` Expires Time `json:"expires"` Path string `json:"path"` Hosts []string `json:"hosts"` } // IsValid returns whether the link is valid and has not expired func (g *GetFileLinkResult) IsValid() bool { if g == nil { return false } if len(g.Hosts) == 0 { return false } return time.Time(g.Expires).Sub(time.Now()) > 30*time.Second } // URL returns a URL from the Path and Hosts. Check with IsValid // before calling. func (g *GetFileLinkResult) URL() string { // FIXME rotate the hosts? 
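// (added note: one possible rotation would be to index g.Hosts with a
// counter, e.g. g.Hosts[n%len(g.Hosts)]; the code below simply keeps the
// first host, which is the simplest choice that is known to work)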
return "https://" + g.Hosts[0] + g.Path } // ChecksumFileResult is returned from /checksumfile type ChecksumFileResult struct { Error Hashes Metadata Item `json:"metadata"` } // PubLinkResult is returned from /getfilepublink and /getfolderpublink type PubLinkResult struct { Error LinkID int `json:"linkid"` Link string `json:"link"` LinkCode string `json:"code"` } // UserInfo is returned from /userinfo type UserInfo struct { Error Cryptosetup bool `json:"cryptosetup"` Plan int `json:"plan"` CryptoSubscription bool `json:"cryptosubscription"` PublicLinkQuota int64 `json:"publiclinkquota"` Email string `json:"email"` UserID int `json:"userid"` Quota int64 `json:"quota"` TrashRevretentionDays int `json:"trashrevretentiondays"` Premium bool `json:"premium"` PremiumLifetime bool `json:"premiumlifetime"` EmailVerified bool `json:"emailverified"` UsedQuota int64 `json:"usedquota"` Language string `json:"language"` Business bool `json:"business"` CryptoLifetime bool `json:"cryptolifetime"` Registered string `json:"registered"` Journey struct { Claimed bool `json:"claimed"` Steps struct { VerifyMail bool `json:"verifymail"` UploadFile bool `json:"uploadfile"` AutoUpload bool `json:"autoupload"` DownloadApp bool `json:"downloadapp"` DownloadDrive bool `json:"downloaddrive"` } `json:"steps"` } `json:"journey"` } rclone-1.53.3/backend/pcloud/pcloud.go000066400000000000000000001021631375552240400175710ustar00rootroot00000000000000// Package pcloud provides an interface to the Pcloud // object storage system. package pcloud // FIXME implement ListR? /listfolder can do recursive lists // FIXME cleanup returns login required? // FIXME mime type? Fix overview if implement. import ( "context" "fmt" "io" "log" "net/http" "net/url" "path" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/pcloud/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/rest" "golang.org/x/oauth2" ) const ( rcloneClientID = "DnONSzyJXpm" rcloneEncryptedClientSecret = "ej1OIF39VOQQ0PXaSdK9ztkLw3tdLNscW2157TKNQdQKkICR4uU7aFg4eFM" minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential defaultHostname = "api.pcloud.com" ) // Globals var ( // Description of how to auth for this app oauthConfig = &oauth2.Config{ Scopes: nil, Endpoint: oauth2.Endpoint{ AuthURL: "https://my.pcloud.com/oauth2/authorize", // TokenURL: "https://api.pcloud.com/oauth2_token", set by updateTokenURL }, ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), RedirectURL: oauthutil.RedirectLocalhostURL, } ) // Update the TokenURL with the actual hostname func updateTokenURL(oauthConfig *oauth2.Config, hostname string) { oauthConfig.Endpoint.TokenURL = "https://" + hostname + "/oauth2_token" } // Register with Fs func init() { updateTokenURL(oauthConfig, defaultHostname) fs.Register(&fs.RegInfo{ Name: "pcloud", Description: "Pcloud", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { optc := new(Options) err := configstruct.Set(m, optc) if err != nil { fs.Errorf(nil, "Failed to read config: %v", err) } 
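// (added note: pCloud serves each region from its own API hostname - see
// the "hostname" option below - so the token URL must point at the
// configured host before any oauth token exchange takes place)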
updateTokenURL(oauthConfig, optc.Hostname) checkAuth := func(oauthConfig *oauth2.Config, auth *oauthutil.AuthResult) error { if auth == nil || auth.Form == nil { return errors.New("form not found in response") } hostname := auth.Form.Get("hostname") if hostname == "" { hostname = defaultHostname } // Save the hostname in the config m.Set("hostname", hostname) // Update the token URL updateTokenURL(oauthConfig, hostname) fs.Debugf(nil, "pcloud: got hostname %q", hostname) return nil } opt := oauthutil.Options{ CheckAuth: checkAuth, StateBlankOK: true, // pCloud seems to drop the state parameter now - see #4210 } err = oauthutil.Config("pcloud", name, m, oauthConfig, &opt) if err != nil { log.Fatalf("Failed to configure token: %v", err) } }, Options: append(oauthutil.SharedOptions, []fs.Option{{ Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Encode invalid UTF-8 bytes as json doesn't handle them properly. // // TODO: Investigate Unicode simplification (＼ gets converted to \ server-side) Default: (encoder.Display | encoder.EncodeBackSlash | encoder.EncodeInvalidUtf8), }, { Name: "root_folder_id", Help: "Fill in for rclone to use a non root folder as its starting point.", Default: "d0", Advanced: true, }, { Name: "hostname", Help: `Hostname to connect to. This is normally set when rclone initially does the oauth connection, however you will need to set it by hand if you are using remote config with rclone authorize. `, Default: defaultHostname, Advanced: true, Examples: []fs.OptionExample{{ Value: defaultHostname, Help: "Original/US region", }, { Value: "eapi.pcloud.com", Help: "EU region", }}, }}...), }) } // Options defines the configuration for this backend type Options struct { Enc encoder.MultiEncoder `config:"encoding"` RootFolderID string `config:"root_folder_id"` Hostname string `config:"hostname"` } // Fs represents a remote pcloud type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed options features *fs.Features // optional features srv *rest.Client // the connection to the server dirCache *dircache.DirCache // Map of directory path to directory id pacer *fs.Pacer // pacer for API calls tokenRenewer *oauthutil.Renew // renew the token on expiry } // Object describes a pcloud object // // Will definitely have info but maybe not meta type Object struct { fs *Fs // what this object is part of remote string // The remote path hasMetaData bool // whether info below has been set size int64 // size of the object modTime time.Time // modification time of the object id string // ID of the object md5 string // MD5 if known sha1 string // SHA1 if known link *api.GetFileLinkResult } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("pcloud root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a pcloud 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests.
500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func shouldRetry(resp *http.Response, err error) (bool, error) { doRetry := false // Check if it is an api.Error if apiErr, ok := err.(*api.Error); ok { // See https://docs.pcloud.com/errors/ for error treatment // Errors are classified as 1xxx, 2xxx etc switch apiErr.Result / 1000 { case 4: // 4xxx: rate limiting doRetry = true case 5: // 5xxx: internal errors doRetry = true } } if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 { doRetry = true fs.Debugf(nil, "Should retry: %v", err) } return doRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // readMetaDataForPath reads the metadata from the path func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, err error) { // defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err) leaf, directoryID, err := f.dirCache.FindPath(ctx, path, false) if err != nil { if err == fs.ErrorDirNotFound { return nil, fs.ErrorObjectNotFound } return nil, err } found, err := f.listAll(ctx, directoryID, false, true, func(item *api.Item) bool { if item.Name == leaf { info = item return true } return false }) if err != nil { return nil, err } if !found { return nil, fs.ErrorObjectNotFound } return info, nil } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { // Decode error response errResponse := new(api.Error) err := rest.DecodeJSON(resp, &errResponse) if err != nil { fs.Debugf(nil, "Couldn't decode error response: %v", err) } if errResponse.ErrorString == "" { errResponse.ErrorString = resp.Status } if errResponse.Result == 0 { errResponse.Result = resp.StatusCode } return errResponse } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } root = parsePath(root) oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig) if err != nil { return nil, errors.Wrap(err, "failed to configure Pcloud") } updateTokenURL(oauthConfig, opt.Hostname) f := &Fs{ name: name, root: root, opt: *opt, srv: rest.NewClient(oAuthClient).SetRoot("https://" + opt.Hostname), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), } f.features = (&fs.Features{ CaseInsensitive: false, CanHaveEmptyDirectories: true, }).Fill(f) f.srv.SetErrorHandler(errorHandler) // Renew the token in the background f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { _, err := f.readMetaDataForPath(ctx, "") return err }) // Get rootFolderID rootID := f.opt.RootFolderID f.dirCache = dircache.New(root, rootID, f) // Find the current root err = f.dirCache.FindRoot(ctx, false) if err != nil { // Assume it is a file newRoot, remote := dircache.SplitPath(root) tempF := *f tempF.dirCache = dircache.New(newRoot, rootID, &tempF) tempF.root = newRoot // Make new Fs which is the parent err = tempF.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f return f, nil } _, err := 
tempF.newObjectWithInfo(ctx, remote, nil) if err != nil { if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f return f, nil } return nil, err } // XXX: update the old f here instead of returning tempF, since // `features` were already filled with functions having *f as a receiver. // See https://github.com/rclone/rclone/issues/2182 f.dirCache = tempF.dirCache f.root = tempF.root // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Item) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { // Set info err = o.setMetaData(info) } else { err = o.readMetaData(ctx) // reads info and meta, returning an error } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { // Find the leaf in pathID found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool { if item.Name == leaf { pathIDOut = item.ID return true } return false }) return pathIDOut, found, err } // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) { // fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf) var resp *http.Response var result api.ItemResult opts := rest.Opts{ Method: "POST", Path: "/createfolder", Parameters: url.Values{}, } opts.Parameters.Set("name", f.opt.Enc.FromStandardName(leaf)) opts.Parameters.Set("folderid", dirIDtoNumber(pathID)) err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { //fmt.Printf("...Error %v\n", err) return "", err } // fmt.Printf("...Id %q\n", *info.Id) return result.Metadata.ID, nil } // Converts a dirID which is usually 'd' followed by digits into just // the digits func dirIDtoNumber(dirID string) string { if len(dirID) > 0 && dirID[0] == 'd' { return dirID[1:] } fs.Debugf(nil, "Invalid directory id %q", dirID) return dirID } // Converts a fileID which is usually 'f' followed by digits into just // the digits func fileIDtoNumber(fileID string) string { if len(fileID) > 0 && fileID[0] == 'f' { return fileID[1:] } fs.Debugf(nil, "Invalid file id %q", fileID) return fileID } // list the objects into the function supplied // // If directories is set it only sends directories // User function to process a File item from listAll // // Should return true to finish processing type listAllFn func(*api.Item) bool // Lists the directory required calling the user function on each item found // // If the user fn ever returns true then it early exits with found = true func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) { opts := rest.Opts{ Method: "GET", Path: "/listfolder", Parameters: url.Values{}, } opts.Parameters.Set("folderid", dirIDtoNumber(dirID)) // FIXME can do recursive var result api.ItemResult var 
resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { return found, errors.Wrap(err, "couldn't list files") } for i := range result.Metadata.Contents { item := &result.Metadata.Contents[i] if item.IsFolder { if filesOnly { continue } } else { if directoriesOnly { continue } } item.Name = f.opt.Enc.ToStandardName(item.Name) if fn(item) { found = true break } } return } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } var iErr error _, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool { remote := path.Join(dir, info.Name) if info.IsFolder { // cache the directory ID for later lookups f.dirCache.Put(remote, info.ID) d := fs.NewDir(remote, info.ModTime()).SetID(info.ID) // FIXME more info from dir? entries = append(entries, d) } else { o, err := f.newObjectWithInfo(ctx, remote, info) if err != nil { iErr = err return true } entries = append(entries, o) } return false }) if err != nil { return nil, err } if iErr != nil { return nil, iErr } return entries, nil } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Returns the object, leaf, directoryID and error // // Used to create new objects func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) { // Create the directory for the object if it doesn't exist leaf, directoryID, err = f.dirCache.FindPath(ctx, remote, true) if err != nil { return } // Temporary Object under construction o = &Object{ fs: f, remote: remote, } return o, leaf, directoryID, nil } // Put the object into the container // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { remote := src.Remote() size := src.Size() modTime := src.ModTime(ctx) o, _, _, err := f.createObject(ctx, remote, modTime, size) if err != nil { return nil, err } return o, o.Update(ctx, in, src, options...) 
} // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { _, err := f.dirCache.FindDir(ctx, dir, true) return err } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { root := path.Join(f.root, dir) if root == "" { return errors.New("can't purge root directory") } dc := f.dirCache rootID, err := dc.FindDir(ctx, dir, false) if err != nil { return err } opts := rest.Opts{ Method: "POST", Path: "/deletefolder", Parameters: url.Values{}, } opts.Parameters.Set("folderid", dirIDtoNumber(rootID)) if !check { opts.Path = "/deletefolderrecursive" } var resp *http.Response var result api.ItemResult err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "rmdir failed") } f.dirCache.FlushDir(dir) if err != nil { return err } return nil } // Rmdir deletes the root folder // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision return the precision of this Fs func (f *Fs) Precision() time.Duration { return time.Second } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } err := srcObj.readMetaData(ctx) if err != nil { return nil, err } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // Copy the object opts := rest.Opts{ Method: "POST", Path: "/copyfile", Parameters: url.Values{}, } opts.Parameters.Set("fileid", fileIDtoNumber(srcObj.id)) opts.Parameters.Set("toname", f.opt.Enc.FromStandardName(leaf)) opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID)) opts.Parameters.Set("mtime", fmt.Sprintf("%d", srcObj.modTime.Unix())) var resp *http.Response var result api.ItemResult err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { return nil, err } err = dstObj.setMetaData(&result.Metadata) if err != nil { return nil, err } return dstObj, nil } // Purge deletes all the files in the directory // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // CleanUp empties the trash func (f *Fs) CleanUp(ctx context.Context) error { rootID, err := f.dirCache.RootID(ctx, false) if err != nil { return err } opts := rest.Opts{ Method: "POST", Path: "/trash_clear", Parameters: url.Values{}, } opts.Parameters.Set("folderid", dirIDtoNumber(rootID)) var resp *http.Response var result api.Error return f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) err = result.Update(err) return shouldRetry(resp, err) }) } // 
Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // Do the move opts := rest.Opts{ Method: "POST", Path: "/renamefile", Parameters: url.Values{}, } opts.Parameters.Set("fileid", fileIDtoNumber(srcObj.id)) opts.Parameters.Set("toname", f.opt.Enc.FromStandardName(leaf)) opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID)) var resp *http.Response var result api.ItemResult err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { return nil, err } err = dstObj.setMetaData(&result.Metadata) if err != nil { return nil, err } return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcID, _, _, dstDirectoryID, dstLeaf, err := f.dirCache.DirMove(ctx, srcFs.dirCache, srcFs.root, srcRemote, f.root, dstRemote) if err != nil { return err } // Do the move opts := rest.Opts{ Method: "POST", Path: "/renamefolder", Parameters: url.Values{}, } opts.Parameters.Set("folderid", dirIDtoNumber(srcID)) opts.Parameters.Set("toname", f.opt.Enc.FromStandardName(dstLeaf)) opts.Parameters.Set("tofolderid", dirIDtoNumber(dstDirectoryID)) var resp *http.Response var result api.ItemResult err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { return err } srcFs.dirCache.FlushDir(srcRemote) return nil } // DirCacheFlush resets the directory cache - used in testing as an // optional interface func (f *Fs) DirCacheFlush() { f.dirCache.ResetRoot() } func (f *Fs) linkDir(ctx context.Context, dirID string, expire fs.Duration) (string, error) { opts := rest.Opts{ Method: "POST", Path: "/getfolderpublink", Parameters: url.Values{}, } var result api.PubLinkResult opts.Parameters.Set("folderid", dirIDtoNumber(dirID)) err := f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { return "", err } return result.Link, err } func (f *Fs) linkFile(ctx context.Context, path string, expire fs.Duration) (string, error) { obj, err := f.NewObject(ctx, path) if err != nil { return "", err } o := obj.(*Object) opts := rest.Opts{ Method: "POST", Path: "/getfilepublink", Parameters: url.Values{}, } var result api.PubLinkResult opts.Parameters.Set("fileid", fileIDtoNumber(o.id)) err = f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, nil, 
&result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { return "", err } return result.Link, nil } // PublicLink adds a "readable by anyone with link" permission on the given file or folder. func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) { dirID, err := f.dirCache.FindDir(ctx, remote, false) if err == fs.ErrorDirNotFound { return f.linkFile(ctx, remote, expire) } if err != nil { return "", err } return f.linkDir(ctx, dirID, expire) } // About gets quota information func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) { opts := rest.Opts{ Method: "POST", Path: "/userinfo", } var resp *http.Response var q api.UserInfo err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &q) err = q.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "about failed") } usage = &fs.Usage{ Total: fs.NewUsageValue(q.Quota), // quota of bytes that can be used Used: fs.NewUsageValue(q.UsedQuota), // bytes in use Free: fs.NewUsageValue(q.Quota - q.UsedQuota), // bytes which can be uploaded before reaching the quota } return usage, nil } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5 | hash.SHA1) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // getHashes fetches the hashes into the object func (o *Object) getHashes(ctx context.Context) (err error) { var resp *http.Response var result api.ChecksumFileResult opts := rest.Opts{ Method: "GET", Path: "/checksumfile", Parameters: url.Values{}, } opts.Parameters.Set("fileid", fileIDtoNumber(o.id)) err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { return err } o.setHashes(&result.Hashes) return o.setMetaData(&result.Metadata) } // Hash returns the SHA-1 of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 && t != hash.SHA1 { return "", hash.ErrUnsupported } if o.md5 == "" && o.sha1 == "" { err := o.getHashes(ctx) if err != nil { return "", errors.Wrap(err, "failed to get hash") } } if t == hash.MD5 { return o.md5, nil } return o.sha1, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { err := o.readMetaData(context.TODO()) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return 0 } return o.size } // setMetaData sets the metadata from info func (o *Object) setMetaData(info *api.Item) (err error) { if info.IsFolder { return errors.Wrapf(fs.ErrorNotAFile, "%q is a folder", o.remote) } o.hasMetaData = true o.size = info.Size o.modTime = info.ModTime() o.id = info.ID return nil } // setHashes sets the hashes from that passed in func (o *Object) setHashes(hashes *api.Hashes) { o.sha1 = hashes.SHA1 o.md5 = hashes.MD5 } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData(ctx context.Context) (err error) { if o.hasMetaData { return nil } info, err := o.fs.readMetaDataForPath(ctx, o.remote) if err != nil { //if apiErr, ok := 
err.(*api.Error); ok { // FIXME // if apiErr.Code == "not_found" || apiErr.Code == "trashed" { // return fs.ErrorObjectNotFound // } //} return err } return o.setMetaData(info) } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present it // falls back to the LastModified returned in the HTTP headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { // Pcloud doesn't have a way of doing this so returning this // error will cause the file to be re-uploaded to set the time. return fs.ErrorCantSetModTime } // Storable returns a boolean showing whether this object is storable func (o *Object) Storable() bool { return true } // downloadURL fetches the download link func (o *Object) downloadURL(ctx context.Context) (URL string, err error) { if o.id == "" { return "", errors.New("can't download - no id") } if o.link.IsValid() { return o.link.URL(), nil } var resp *http.Response var result api.GetFileLinkResult opts := rest.Opts{ Method: "GET", Path: "/getfilelink", Parameters: url.Values{}, } opts.Parameters.Set("fileid", fileIDtoNumber(o.id)) err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { return "", err } if !result.IsValid() { return "", errors.Errorf("fetched invalid link %+v", result) } o.link = &result return o.link.URL(), nil } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { url, err := o.downloadURL(ctx) if err != nil { return nil, err } var resp *http.Response opts := rest.Opts{ Method: "GET", RootURL: url, Options: options, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return nil, err } return resp.Body, err } // Update the object with the contents of the io.Reader, modTime and size // // If existing is set then it updates the object rather than creating a new one // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { o.fs.tokenRenewer.Start() defer o.fs.tokenRenewer.Stop() size := src.Size() // NB can upload without size modTime := src.ModTime(ctx) remote := o.Remote() // Create the directory for the object if it doesn't exist leaf, directoryID, err := o.fs.dirCache.FindPath(ctx, remote, true) if err != nil { return err } // Experiments with pcloud indicate that it doesn't like any // form of request which doesn't have a Content-Length. // According to the docs if you close the connection at the // end then it should work without Content-Length, but I // couldn't get this to work using opts.Close (which sets // http.Request.Close). // // This means that chunked transfer encoding needs to be // disabled and a Content-Length needs to be supplied. This // also rules out streaming.
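// For callers that only have a stream of unknown length, the usual
// workaround (sketched here only as a hedged illustration - it is not
// something this backend does) is to spool the input to a temporary
// file first so that a Content-Length can be supplied:
//
//	tmp, err := ioutil.TempFile("", "pcloud-upload")
//	if err == nil {
//		defer func() { _ = tmp.Close(); _ = os.Remove(tmp.Name()) }()
//		if size, err = io.Copy(tmp, in); err == nil {
//			_, err = tmp.Seek(0, io.SeekStart)
//			in = tmp // upload tmp with ContentLength: &size
//		}
//	}
//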
// // Docs: https://docs.pcloud.com/methods/file/uploadfile.html var resp *http.Response var result api.UploadFileResponse opts := rest.Opts{ Method: "PUT", Path: "/uploadfile", Body: in, ContentType: fs.MimeType(ctx, o), ContentLength: &size, Parameters: url.Values{}, TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding Options: options, } leaf = o.fs.opt.Enc.FromStandardName(leaf) opts.Parameters.Set("filename", leaf) opts.Parameters.Set("folderid", dirIDtoNumber(directoryID)) opts.Parameters.Set("nopartial", "1") opts.Parameters.Set("mtime", fmt.Sprintf("%d", modTime.Unix())) // Special treatment for a 0 length upload. This doesn't work // with PUT even with Content-Length set (by setting // opts.Body=0), so upload it as a multpart form POST with // Content-Length set. if size == 0 { formReader, contentType, overhead, err := rest.MultipartUpload(in, opts.Parameters, "content", leaf) if err != nil { return errors.Wrap(err, "failed to make multipart upload for 0 length file") } contentLength := overhead + size opts.ContentType = contentType opts.Body = formReader opts.Method = "POST" opts.Parameters = nil opts.ContentLength = &contentLength } err = o.fs.pacer.CallNoRetry(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) if err != nil { // sometimes pcloud leaves a half complete file on // error, so delete it if it exists delObj, delErr := o.fs.NewObject(ctx, o.remote) if delErr == nil && delObj != nil { _ = delObj.Remove(ctx) } return err } if len(result.Items) != 1 { return errors.Errorf("failed to upload %v - not sure why", o) } o.setHashes(&result.Checksums[0]) return o.setMetaData(&result.Items[0]) } // Remove an object func (o *Object) Remove(ctx context.Context) error { opts := rest.Opts{ Method: "POST", Path: "/deletefile", Parameters: url.Values{}, } var result api.ItemResult opts.Parameters.Set("fileid", fileIDtoNumber(o.id)) return o.fs.pacer.Call(func() (bool, error) { resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &result) err = result.Error.Update(err) return shouldRetry(resp, err) }) } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.id } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.IDer = (*Object)(nil) ) rclone-1.53.3/backend/pcloud/pcloud_test.go000066400000000000000000000005571375552240400206340ustar00rootroot00000000000000// Test Pcloud filesystem interface package pcloud_test import ( "testing" "github.com/rclone/rclone/backend/pcloud" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestPcloud:", NilObject: (*pcloud.Object)(nil), }) } rclone-1.53.3/backend/premiumizeme/000077500000000000000000000000001375552240400171735ustar00rootroot00000000000000rclone-1.53.3/backend/premiumizeme/api/000077500000000000000000000000001375552240400177445ustar00rootroot00000000000000rclone-1.53.3/backend/premiumizeme/api/types.go000066400000000000000000000046041375552240400214430ustar00rootroot00000000000000// Package api contains definitions for using the premiumize.me API package api import "fmt" // Response is 
returned by all messages and embedded in the // structures below type Response struct { Message string `json:"message,omitempty"` Status string `json:"status"` } // Error satisfies the error interface func (e *Response) Error() string { return fmt.Sprintf("%s: %s", e.Status, e.Message) } // AsErr checks the status and returns an err if bad or nil if good func (e *Response) AsErr() error { if e.Status != "success" { return e } return nil } // Item Types const ( ItemTypeFolder = "folder" ItemTypeFile = "file" ) // Item refers to a file or folder type Item struct { Breadcrumbs []Breadcrumb `json:"breadcrumbs"` CreatedAt int64 `json:"created_at,omitempty"` ID string `json:"id"` Link string `json:"link,omitempty"` Name string `json:"name"` Size int64 `json:"size,omitempty"` StreamLink string `json:"stream_link,omitempty"` Type string `json:"type"` TranscodeStatus string `json:"transcode_status"` IP string `json:"ip"` MimeType string `json:"mime_type"` } // Breadcrumb is part the breadcrumb trail for a file or folder. It // is returned as part of folder/list if required type Breadcrumb struct { ID string `json:"id,omitempty"` Name string `json:"name,omitempty"` ParentID string `json:"parent_id,omitempty"` } // FolderListResponse is the response to folder/list type FolderListResponse struct { Response Content []Item `json:"content"` Name string `json:"name,omitempty"` ParentID string `json:"parent_id,omitempty"` } // FolderCreateResponse is the response to folder/create type FolderCreateResponse struct { Response ID string `json:"id,omitempty"` } // FolderUploadinfoResponse is the response to folder/uploadinfo type FolderUploadinfoResponse struct { Response Token string `json:"token,omitempty"` URL string `json:"url,omitempty"` } // AccountInfoResponse is the response to account/info type AccountInfoResponse struct { Response CustomerID string `json:"customer_id,omitempty"` LimitUsed float64 `json:"limit_used,omitempty"` // fraction 0..1 of download traffic limit PremiumUntil int64 `json:"premium_until,omitempty"` SpaceUsed float64 `json:"space_used,omitempty"` } rclone-1.53.3/backend/premiumizeme/premiumizeme.go000066400000000000000000000727431375552240400222470ustar00rootroot00000000000000// Package premiumizeme provides an interface to the premiumize.me // object storage system. 
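// Every API response embeds the api.Response envelope defined in the
// api package, so a typical call site checks the transport error and
// then the application-level status, along these lines (a minimal
// sketch, not a verbatim excerpt from this file):
//
//	var result api.FolderCreateResponse
//	resp, err := srv.CallJSON(ctx, &opts, nil, &result)
//	if err == nil {
//		err = result.AsErr() // turns status != "success" into an error
//	}
//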
package premiumizeme /* Run of rclone info stringNeedsEscaping = []rune{ 0x00, 0x0A, 0x0D, 0x22, 0x2F, 0x5C, 0xBF, 0xFE 0x00, 0x0A, 0x0D, '"', '/', '\\', 0xBF, 0xFE } maxFileLength = 255 canWriteUnnormalized = true canReadUnnormalized = true canReadRenormalized = false canStream = false */ import ( "context" "encoding/json" "fmt" "io" "log" "net" "net/http" "net/url" "path" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/premiumizeme/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/random" "github.com/rclone/rclone/lib/rest" "golang.org/x/oauth2" ) const ( rcloneClientID = "658922194" rcloneEncryptedClientSecret = "B5YIvQoRIhcpAYs8HYeyjb9gK-ftmZEbqdh_gNfc4RgO9Q" minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential rootID = "0" // ID of root folder is always this rootURL = "https://www.premiumize.me/api" ) // Globals var ( // Description of how to auth for this app oauthConfig = &oauth2.Config{ Scopes: nil, Endpoint: oauth2.Endpoint{ AuthURL: "https://www.premiumize.me/authorize", TokenURL: "https://www.premiumize.me/token", }, ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), RedirectURL: oauthutil.RedirectURL, } ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "premiumizeme", Description: "premiumize.me", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { err := oauthutil.Config("premiumizeme", name, m, oauthConfig, nil) if err != nil { log.Fatalf("Failed to configure token: %v", err) } }, Options: []fs.Option{{ Name: "api_key", Help: `API Key. This is not normally used - use oauth instead. `, Hide: fs.OptionHideBoth, Default: "", }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Encode invalid UTF-8 bytes as json doesn't handle them properly. 
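// As a rough illustration (assuming lib/encoder's documented
// behaviour of substituting safe equivalents for the listed runes):
//
//	enc := encoder.Display | encoder.EncodeBackSlash | encoder.EncodeDoubleQuote
//	sent := enc.FromStandardName(`a"b\c`) // backslash and double quote replaced
//	// enc.ToStandardName(sent) reverses the mapping on names read back
//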
Default: (encoder.Display | encoder.EncodeBackSlash | encoder.EncodeDoubleQuote | encoder.EncodeInvalidUtf8), }}, }) } // Options defines the configuration for this backend type Options struct { APIKey string `config:"api_key"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote cloud storage system type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed options features *fs.Features // optional features srv *rest.Client // the connection to the server dirCache *dircache.DirCache // Map of directory path to directory id pacer *fs.Pacer // pacer for API calls tokenRenewer *oauthutil.Renew // renew the token on expiry } // Object describes a file type Object struct { fs *Fs // what this object is part of remote string // The remote path hasMetaData bool // metadata is present and correct size int64 // size of the object modTime time.Time // modification time of the object id string // ID of the object parentID string // ID of parent directory mimeType string // Mime type of object url string // URL to download file } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("premiumize.me root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a premiumize.me 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests. 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. 
It returns the err as a convenience func shouldRetry(resp *http.Response, err error) (bool, error) { return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // readMetaDataForPath reads the metadata from the path func (f *Fs) readMetaDataForPath(ctx context.Context, path string, directoriesOnly bool, filesOnly bool) (info *api.Item, err error) { // defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err) leaf, directoryID, err := f.dirCache.FindPath(ctx, path, false) if err != nil { if err == fs.ErrorDirNotFound { return nil, fs.ErrorObjectNotFound } return nil, err } lcLeaf := strings.ToLower(leaf) found, err := f.listAll(ctx, directoryID, directoriesOnly, filesOnly, func(item *api.Item) bool { if strings.ToLower(item.Name) == lcLeaf { info = item return true } return false }) if err != nil { return nil, err } if !found { return nil, fs.ErrorObjectNotFound } return info, nil } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { body, err := rest.ReadBody(resp) if err != nil { body = nil } var e = api.Response{ Message: string(body), Status: fmt.Sprintf("%s (%d)", resp.Status, resp.StatusCode), } if body != nil { _ = json.Unmarshal(body, &e) } return &e } // Return a url.Values with the api key in func (f *Fs) baseParams() url.Values { params := url.Values{} if f.opt.APIKey != "" { params.Add("apikey", f.opt.APIKey) } return params } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } root = parsePath(root) var client *http.Client var ts *oauthutil.TokenSource if opt.APIKey == "" { client, ts, err = oauthutil.NewClient(name, m, oauthConfig) if err != nil { return nil, errors.Wrap(err, "failed to configure premiumize.me") } } else { client = fshttp.NewClient(fs.Config) } f := &Fs{ name: name, root: root, opt: *opt, srv: rest.NewClient(client).SetRoot(rootURL), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), } f.features = (&fs.Features{ CaseInsensitive: true, CanHaveEmptyDirectories: true, ReadMimeType: true, }).Fill(f) f.srv.SetErrorHandler(errorHandler) // Renew the token in the background if ts != nil { f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { _, err := f.About(ctx) return err }) } // Get rootID f.dirCache = dircache.New(root, rootID, f) // Find the current root err = f.dirCache.FindRoot(ctx, false) if err != nil { // Assume it is a file newRoot, remote := dircache.SplitPath(root) tempF := *f tempF.dirCache = dircache.New(newRoot, rootID, &tempF) tempF.root = newRoot // Make new Fs which is the parent err = tempF.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f return f, nil } _, err := tempF.newObjectWithInfo(ctx, remote, nil) if err != nil { if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f return f, nil } return nil, err } f.features.Fill(&tempF) // XXX: update the old f here instead of returning tempF, since // `features` were already filled with functions having *f as a receiver. 
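// For context, fs.ErrorIsFile returned below is the documented signal
// for "the path points at a file": the caller gets back an Fs rooted
// at the parent directory together with that sentinel error, roughly:
//
//	f, err := NewFs(name, "remote:path/file.txt", m)
//	if err == fs.ErrorIsFile {
//		// f is rooted at "remote:path"; the calling code limits
//		// operations to the single file "file.txt"
//	}
//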
// See https://github.com/rclone/rclone/issues/2182 f.dirCache = tempF.dirCache f.root = tempF.root // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Item) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { // Set info err = o.setMetaData(info) } else { err = o.readMetaData(ctx) // reads info and meta, returning an error } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { // Find the leaf in pathID found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool { if item.Name == leaf { pathIDOut = item.ID return true } return false }) return pathIDOut, found, err } // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) { // fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf) var resp *http.Response var info api.FolderCreateResponse opts := rest.Opts{ Method: "POST", Path: "/folder/create", Parameters: f.baseParams(), MultipartParams: url.Values{ "name": {f.opt.Enc.FromStandardName(leaf)}, "parent_id": {pathID}, }, } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(resp, err) }) if err != nil { //fmt.Printf("...Error %v\n", err) return "", errors.Wrap(err, "CreateDir http") } if err = info.AsErr(); err != nil { return "", errors.Wrap(err, "CreateDir") } // fmt.Printf("...Id %q\n", *info.Id) return info.ID, nil } // list the objects into the function supplied // // If directories is set it only sends directories // User function to process a File item from listAll // // Should return true to finish processing type listAllFn func(*api.Item) bool // Lists the directory required calling the user function on each item found // // If the user fn ever returns true then it early exits with found = true func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) { opts := rest.Opts{ Method: "GET", Path: "/folder/list", Parameters: f.baseParams(), } opts.Parameters.Set("id", dirID) opts.Parameters.Set("includebreadcrumbs", "false") var result api.FolderListResponse var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return found, errors.Wrap(err, "couldn't list files") } if err = result.AsErr(); err != nil { return found, errors.Wrap(err, "error while listing") } for i := range result.Content { item := &result.Content[i] if item.Type == api.ItemTypeFolder { if filesOnly { continue } } else if item.Type == api.ItemTypeFile { if directoriesOnly { continue } } else { fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type) continue } item.Name = f.opt.Enc.ToStandardName(item.Name) if fn(item) { found = true break } } return } // List the objects and directories 
in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } var iErr error _, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool { remote := path.Join(dir, info.Name) if info.Type == api.ItemTypeFolder { // cache the directory ID for later lookups f.dirCache.Put(remote, info.ID) d := fs.NewDir(remote, time.Unix(info.CreatedAt, 0)).SetID(info.ID) entries = append(entries, d) } else if info.Type == api.ItemTypeFile { o, err := f.newObjectWithInfo(ctx, remote, info) if err != nil { iErr = err return true } entries = append(entries, o) } return false }) if err != nil { return nil, err } if iErr != nil { return nil, iErr } return entries, nil } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Returns the object, leaf, directoryID and error // // Used to create new objects func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) { // Create the directory for the object if it doesn't exist leaf, directoryID, err = f.dirCache.FindPath(ctx, remote, true) if err != nil { return } // Temporary Object under construction o = &Object{ fs: f, remote: remote, } return o, leaf, directoryID, nil } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil) switch err { case nil: return existingObj, existingObj.Update(ctx, in, src, options...) case fs.ErrorObjectNotFound: // Not found so create it return f.PutUnchecked(ctx, in, src, options...) default: return nil, err } } // PutUnchecked the object into the container // // This will produce an error if the object already exists // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { remote := src.Remote() size := src.Size() modTime := src.ModTime(ctx) o, _, _, err := f.createObject(ctx, remote, modTime, size) if err != nil { return nil, err } return o, o.Update(ctx, in, src, options...) 
} // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { _, err := f.dirCache.FindDir(ctx, dir, true) return err } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { root := path.Join(f.root, dir) if root == "" { return errors.New("can't purge root directory") } dc := f.dirCache rootID, err := dc.FindDir(ctx, dir, false) if err != nil { return err } // need to check if empty as it will delete recursively by default if check { found, err := f.listAll(ctx, rootID, false, false, func(item *api.Item) bool { return true }) if err != nil { return errors.Wrap(err, "purgeCheck") } if found { return fs.ErrorDirectoryNotEmpty } } opts := rest.Opts{ Method: "POST", Path: "/folder/delete", MultipartParams: url.Values{ "id": {rootID}, }, Parameters: f.baseParams(), } var resp *http.Response var result api.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "rmdir failed") } if err = result.AsErr(); err != nil { return errors.Wrap(err, "rmdir") } f.dirCache.FlushDir(dir) if err != nil { return err } return nil } // Rmdir deletes the root folder // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision return the precision of this Fs func (f *Fs) Precision() time.Duration { return fs.ModTimeNotSupported } // Purge deletes all the files in the directory // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // move a file or folder // // This is complicated by the fact that there is an API to move files // between directories and a separate one to rename them. We try to // call the minimum number of API calls. func (f *Fs) move(ctx context.Context, isFile bool, id, oldLeaf, newLeaf, oldDirectoryID, newDirectoryID string) (err error) { newLeaf = f.opt.Enc.FromStandardName(newLeaf) oldLeaf = f.opt.Enc.FromStandardName(oldLeaf) doRenameLeaf := oldLeaf != newLeaf doMove := oldDirectoryID != newDirectoryID // Now rename the leaf to a temporary name if we are moving to // another directory to make sure we don't overwrite something // in the destination directory by accident if doRenameLeaf && doMove { tmpLeaf := newLeaf + "." 
+ random.String(8) err = f.renameLeaf(ctx, isFile, id, tmpLeaf) if err != nil { return errors.Wrap(err, "Move rename leaf") } } // Move the object to a new directory (with the existing name) // if required if doMove { opts := rest.Opts{ Method: "POST", Path: "/folder/paste", Parameters: f.baseParams(), MultipartParams: url.Values{ "id": {newDirectoryID}, }, } if isFile { opts.MultipartParams.Set("files[]", id) } else { opts.MultipartParams.Set("folders[]", id) } //replacedLeaf := enc.FromStandardName(leaf) var resp *http.Response var result api.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "Move http") } if err = result.AsErr(); err != nil { return errors.Wrap(err, "Move") } } // Rename the leaf to its final name if required if doRenameLeaf { err = f.renameLeaf(ctx, isFile, id, newLeaf) if err != nil { return errors.Wrap(err, "Move rename leaf") } } return nil } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // Do the move err = f.move(ctx, true, srcObj.id, path.Base(srcObj.remote), leaf, srcObj.parentID, directoryID) if err != nil { return nil, err } err = dstObj.readMetaData(ctx) if err != nil { return nil, err } return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcID, srcDirectoryID, srcLeaf, dstDirectoryID, dstLeaf, err := f.dirCache.DirMove(ctx, srcFs.dirCache, srcFs.root, srcRemote, f.root, dstRemote) if err != nil { return err } // Do the move err = f.move(ctx, false, srcID, srcLeaf, dstLeaf, srcDirectoryID, dstDirectoryID) if err != nil { return err } srcFs.dirCache.FlushDir(srcRemote) return nil } // PublicLink adds a "readable by anyone with link" permission on the given file or folder. 
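// From the command line this is what "rclone link remote:path" calls.
// premiumize.me can only generate links for files, so asking for a
// directory yields fs.ErrorCantShareDirectories, e.g. (sketch):
//
//	link, err := f.PublicLink(ctx, "videos/file.mp4", 0, false)
//	// link is the item's existing download URL; expire and unlink
//	// are accepted for interface compatibility but not used here
//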
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) { _, err := f.dirCache.FindDir(ctx, remote, false) if err == nil { return "", fs.ErrorCantShareDirectories } o, err := f.NewObject(ctx, remote) if err != nil { return "", err } return o.(*Object).url, nil } // About gets quota information func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) { var resp *http.Response var info api.AccountInfoResponse opts := rest.Opts{ Method: "POST", Path: "/account/info", Parameters: f.baseParams(), } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "About http") } if err = info.AsErr(); err != nil { return nil, errors.Wrap(err, "About") } usage = &fs.Usage{ Used: fs.NewUsageValue(int64(info.SpaceUsed)), } return usage, nil } // DirCacheFlush resets the directory cache - used in testing as an // optional interface func (f *Fs) DirCacheFlush() { f.dirCache.ResetRoot() } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.None) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns an empty string and hash.ErrUnsupported as premiumize.me // doesn't expose file hashes func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { return "", hash.ErrUnsupported } // Size returns the size of an object in bytes func (o *Object) Size() int64 { err := o.readMetaData(context.TODO()) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return 0 } return o.size } // setMetaData sets the metadata from info func (o *Object) setMetaData(info *api.Item) (err error) { if info.Type != "file" { return errors.Wrapf(fs.ErrorNotAFile, "%q is %q", o.remote, info.Type) } o.hasMetaData = true o.size = info.Size o.modTime = time.Unix(info.CreatedAt, 0) o.id = info.ID o.mimeType = info.MimeType o.url = info.Link return nil } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData(ctx context.Context) (err error) { if o.hasMetaData { return nil } info, err := o.fs.readMetaDataForPath(ctx, o.remote, false, true) if err != nil { return err } return o.setMetaData(info) } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present it // falls back to the LastModified returned in the HTTP headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { return fs.ErrorCantSetModTime } // Storable returns a boolean showing whether this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { if o.url == "" { return nil, errors.New("can't download - no URL") } fs.FixRangeOption(options, o.size) var resp *http.Response opts := rest.Opts{ Path: "", RootURL: o.url, Method: "GET", Options: options, } err
= o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return nil, err } return resp.Body, err } // Update the object with the contents of the io.Reader, modTime and size // // If existing is set then it updates the object rather than creating a new one // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { remote := o.Remote() size := src.Size() // Create the directory for the object if it doesn't exist leaf, directoryID, err := o.fs.dirCache.FindPath(ctx, remote, true) if err != nil { return err } leaf = o.fs.opt.Enc.FromStandardName(leaf) var resp *http.Response var info api.FolderUploadinfoResponse opts := rest.Opts{ Method: "POST", Path: "/folder/uploadinfo", Parameters: o.fs.baseParams(), Options: options, MultipartParams: url.Values{ "id": {directoryID}, }, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info) if err != nil { return shouldRetry(resp, err) } // Just check the download URL resolves - sometimes // the URLs returned by premiumize.me don't resolve so // this needs a retry. var u *url.URL u, err = url.Parse(info.URL) if err != nil { return true, errors.Wrap(err, "failed to parse download URL") } _, err = net.LookupIP(u.Hostname()) if err != nil { return true, errors.Wrap(err, "failed to resolve download URL") } return false, nil }) if err != nil { return errors.Wrap(err, "upload get URL http") } if err = info.AsErr(); err != nil { return errors.Wrap(err, "upload get URL") } // if file exists then rename it out the way otherwise uploads can fail uploaded := false var oldID = o.id if o.hasMetaData { newLeaf := leaf + "." 
+ random.String(8) fs.Debugf(o, "Moving old file out the way to %q", newLeaf) err = o.fs.renameLeaf(ctx, true, oldID, newLeaf) if err != nil { return errors.Wrap(err, "upload rename old file") } defer func() { // on failed upload rename old file back if !uploaded { fs.Debugf(o, "Renaming old file back (from %q to %q) since upload failed", leaf, newLeaf) newErr := o.fs.renameLeaf(ctx, true, oldID, leaf) if newErr != nil && err == nil { err = errors.Wrap(newErr, "upload renaming old file back") } } }() } opts = rest.Opts{ Method: "POST", RootURL: info.URL, Body: in, MultipartParams: url.Values{ "token": {info.Token}, }, MultipartContentName: "file", // ..name of the parameter which is the attached file MultipartFileName: leaf, // ..name of the file for the attached file ContentLength: &size, } var result api.Response err = o.fs.pacer.CallNoRetry(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "upload file http") } if err = result.AsErr(); err != nil { return errors.Wrap(err, "upload file") } // on successful upload, remove old file if it exists uploaded = true if o.hasMetaData { fs.Debugf(o, "Removing old file") err := o.fs.remove(ctx, oldID) if err != nil { return errors.Wrap(err, "upload remove old file") } } o.hasMetaData = false return o.readMetaData(ctx) } // Rename the leaf of a file or directory in a directory func (f *Fs) renameLeaf(ctx context.Context, isFile bool, id string, newLeaf string) (err error) { opts := rest.Opts{ Method: "POST", MultipartParams: url.Values{ "id": {id}, "name": {newLeaf}, }, Parameters: f.baseParams(), } if isFile { opts.Path = "/item/rename" } else { opts.Path = "/folder/rename" } var resp *http.Response var result api.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "rename http") } if err = result.AsErr(); err != nil { return errors.Wrap(err, "rename") } return nil } // Remove an object by ID func (f *Fs) remove(ctx context.Context, id string) (err error) { opts := rest.Opts{ Method: "POST", Path: "/item/delete", MultipartParams: url.Values{ "id": {id}, }, Parameters: f.baseParams(), } var resp *http.Response var result api.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "remove http") } if err = result.AsErr(); err != nil { return errors.Wrap(err, "remove") } return nil } // Remove an object func (o *Object) Remove(ctx context.Context) error { err := o.readMetaData(ctx) if err != nil { return errors.Wrap(err, "Remove: Failed to read metadata") } return o.fs.remove(ctx, o.id) } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.mimeType } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.id } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.MimeTyper = (*Object)(nil) _ fs.IDer = (*Object)(nil) ) rclone-1.53.3/backend/premiumizeme/premiumizeme_test.go000066400000000000000000000006001375552240400232650ustar00rootroot00000000000000// Test filesystem interface package 
premiumizeme_test import ( "testing" "github.com/rclone/rclone/backend/premiumizeme" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestPremiumizeMe:", NilObject: (*premiumizeme.Object)(nil), }) } rclone-1.53.3/backend/putio/000077500000000000000000000000001375552240400156235ustar00rootroot00000000000000rclone-1.53.3/backend/putio/error.go000066400000000000000000000020311375552240400172770ustar00rootroot00000000000000package putio import ( "fmt" "net/http" "github.com/putdotio/go-putio/putio" "github.com/rclone/rclone/fs/fserrors" ) func checkStatusCode(resp *http.Response, expected int) error { if resp.StatusCode != expected { return &statusCodeError{response: resp} } return nil } type statusCodeError struct { response *http.Response } func (e *statusCodeError) Error() string { return fmt.Sprintf("unexpected status code (%d) response while doing %s to %s", e.response.StatusCode, e.response.Request.Method, e.response.Request.URL.String()) } func (e *statusCodeError) Temporary() bool { return e.response.StatusCode == 429 || e.response.StatusCode >= 500 } // shouldRetry returns a boolean as to whether this err deserves to be // retried. It returns the err as a convenience func shouldRetry(err error) (bool, error) { if err == nil { return false, nil } if perr, ok := err.(*putio.ErrorResponse); ok { err = &statusCodeError{response: perr.Response} } if fserrors.ShouldRetry(err) { return true, err } return false, err } rclone-1.53.3/backend/putio/fs.go000066400000000000000000000516331375552240400165720ustar00rootroot00000000000000package putio import ( "bytes" "context" "encoding/base64" "fmt" "io" "net/http" "net/url" "path" "strconv" "strings" "time" "github.com/pkg/errors" "github.com/putdotio/go-putio/putio" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" ) // Fs represents a remote Putio server type Fs struct { name string // name of this remote root string // the path we are working on features *fs.Features // optional features opt Options // options for this Fs client *putio.Client // client for making API calls to Put.io pacer *fs.Pacer // To pace the API calls dirCache *dircache.DirCache // Map of directory path to directory id httpClient *http.Client // base http client oAuthClient *http.Client // http client with oauth Authorization } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("Putio root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a putio 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (f fs.Fs, err error) { // defer log.Trace(name, "root=%v", root)("f=%+v, err=%v", &f, &err) // Parse config into Options struct opt 
:= new(Options) err = configstruct.Set(m, opt) if err != nil { return nil, err } root = parsePath(root) httpClient := fshttp.NewClient(fs.Config) oAuthClient, _, err := oauthutil.NewClientWithBaseClient(name, m, putioConfig, httpClient) if err != nil { return nil, errors.Wrap(err, "failed to configure putio") } p := &Fs{ name: name, root: root, opt: *opt, pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), client: putio.NewClient(oAuthClient), httpClient: httpClient, oAuthClient: oAuthClient, } p.features = (&fs.Features{ DuplicateFiles: true, ReadMimeType: true, CanHaveEmptyDirectories: true, }).Fill(p) p.dirCache = dircache.New(root, "0", p) ctx := context.Background() // Find the current root err = p.dirCache.FindRoot(ctx, false) if err != nil { // Assume it is a file newRoot, remote := dircache.SplitPath(root) tempF := *p tempF.dirCache = dircache.New(newRoot, "0", &tempF) tempF.root = newRoot // Make new Fs which is the parent err = tempF.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f return p, nil } _, err := tempF.NewObject(ctx, remote) if err != nil { // unable to list folder so return old f return p, nil } // XXX: update the old f here instead of returning tempF, since // `features` were already filled with functions having *f as a receiver. // See https://github.com/rclone/rclone/issues/2182 p.dirCache = tempF.dirCache p.root = tempF.root return p, fs.ErrorIsFile } // fs.Debugf(p, "Root id: %s", p.dirCache.RootID()) return p, nil } func itoa(i int64) string { return strconv.FormatInt(i, 10) } func atoi(a string) int64 { i, err := strconv.ParseInt(a, 10, 64) if err != nil { panic(err) } return i } // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) { // defer log.Trace(f, "pathID=%v, leaf=%v", pathID, leaf)("newID=%v, err=%v", newID, &err) parentID := atoi(pathID) var entry putio.File err = f.pacer.Call(func() (bool, error) { // fs.Debugf(f, "creating folder. part: %s, parentID: %d", leaf, parentID) entry, err = f.client.Files.CreateFolder(ctx, f.opt.Enc.FromStandardName(leaf), parentID) return shouldRetry(err) }) return itoa(entry.ID), err } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { // defer log.Trace(f, "pathID=%v, leaf=%v", pathID, leaf)("pathIDOut=%v, found=%v, err=%v", pathIDOut, found, &err) if pathID == "0" && leaf == "" { // that's the root directory return pathID, true, nil } fileID := atoi(pathID) var children []putio.File err = f.pacer.Call(func() (bool, error) { // fs.Debugf(f, "listing file: %d", fileID) children, _, err = f.client.Files.List(ctx, fileID) return shouldRetry(err) }) if err != nil { if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode == 404 { err = nil } return } for _, child := range children { if f.opt.Enc.ToStandardName(child.Name) == leaf { found = true pathIDOut = itoa(child.ID) if !child.IsDir() { err = fs.ErrorIsFile } return } } return } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. 
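// Listing is one non-recursive API call per directory: putio's
// Files.List returns files and folders together, and folder IDs seen
// here are cached in dirCache so later lookups of subdirectories
// avoid extra round trips. A sketch of reading one level:
//
//	entries, err := f.List(ctx, "some/dir")
//	if err == nil {
//		for _, entry := range entries {
//			fmt.Println(entry.Remote())
//		}
//	}
//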
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { // defer log.Trace(f, "dir=%v", dir)("err=%v", &err) directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } parentID := atoi(directoryID) var children []putio.File err = f.pacer.Call(func() (bool, error) { // fs.Debugf(f, "listing files inside List: %d", parentID) children, _, err = f.client.Files.List(ctx, parentID) return shouldRetry(err) }) if err != nil { return } for _, child := range children { remote := path.Join(dir, f.opt.Enc.ToStandardName(child.Name)) // fs.Debugf(f, "child: %s", remote) if child.IsDir() { f.dirCache.Put(remote, itoa(child.ID)) d := fs.NewDir(remote, child.UpdatedAt.Time) entries = append(entries, d) } else { o, err := f.newObjectWithInfo(ctx, remote, child) if err != nil { return nil, err } entries = append(entries, o) } } return } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) { // defer log.Trace(f, "src=%+v", src)("o=%+v, err=%v", &o, &err) existingObj, err := f.NewObject(ctx, src.Remote()) switch err { case nil: return existingObj, existingObj.Update(ctx, in, src, options...) case fs.ErrorObjectNotFound: // Not found so create it return f.PutUnchecked(ctx, in, src, options...) default: return nil, err } } // PutUnchecked uploads the object // // This will create a duplicate if we upload a new file without // checking to see if there is one already - use Put() for that. func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) { // defer log.Trace(f, "src=%+v", src)("o=%+v, err=%v", &o, &err) size := src.Size() remote := src.Remote() leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true) if err != nil { return nil, err } loc, err := f.createUpload(ctx, leaf, size, directoryID, src.ModTime(ctx), options) if err != nil { return nil, err } fileID, err := f.sendUpload(ctx, loc, size, in) if err != nil { return nil, err } var entry putio.File err = f.pacer.Call(func() (bool, error) { // fs.Debugf(f, "getting file: %d", fileID) entry, err = f.client.Files.Get(ctx, fileID) return shouldRetry(err) }) if err != nil { return nil, err } return f.newObjectWithInfo(ctx, remote, entry) } func (f *Fs) createUpload(ctx context.Context, name string, size int64, parentID string, modTime time.Time, options []fs.OpenOption) (location string, err error) { // defer log.Trace(f, "name=%v, size=%v, parentID=%v, modTime=%v", name, size, parentID, modTime.String())("location=%v, err=%v", location, &err) err = f.pacer.Call(func() (bool, error) { req, err := http.NewRequest("POST", "https://upload.put.io/files/", nil) if err != nil { return false, err } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext req.Header.Set("tus-resumable", "1.0.0") req.Header.Set("upload-length", strconv.FormatInt(size, 10)) b64name := base64.StdEncoding.EncodeToString([]byte(f.opt.Enc.FromStandardName(name))) b64true := base64.StdEncoding.EncodeToString([]byte("true")) b64parentID := base64.StdEncoding.EncodeToString([]byte(parentID)) b64modifiedAt := base64.StdEncoding.EncodeToString([]byte(modTime.Format(time.RFC3339))) req.Header.Set("upload-metadata", fmt.Sprintf("name %s,no-torrent %s,parent_id %s,updated-at %s", b64name, b64true, b64parentID, b64modifiedAt))
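// The tus protocol (https://tus.io) used by upload.put.io requires
// each Upload-Metadata value to be base64 encoded, with "key value"
// pairs separated by commas. For a file "a.txt" uploaded into folder
// 123 the header would look like (illustrative values, updated-at
// elided):
//
//	Upload-Metadata: name YS50eHQ=,no-torrent dHJ1ZQ==,parent_id MTIz,updated-at ...
//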
fs.OpenOptionAddHTTPHeaders(req.Header, options) resp, err := f.oAuthClient.Do(req) retry, err := shouldRetry(err) if retry { return true, err } if err != nil { return false, err } if resp.StatusCode != 201 { return false, fmt.Errorf("unexpected status code from upload create: %d", resp.StatusCode) } location = resp.Header.Get("location") if location == "" { return false, errors.New("empty location header from upload create") } return false, nil }) return } func (f *Fs) sendUpload(ctx context.Context, location string, size int64, in io.Reader) (fileID int64, err error) { // defer log.Trace(f, "location=%v, size=%v", location, size)("fileID=%v, err=%v", &fileID, &err) if size == 0 { err = f.pacer.Call(func() (bool, error) { fs.Debugf(f, "Sending zero length chunk") _, fileID, err = f.transferChunk(ctx, location, 0, bytes.NewReader([]byte{}), 0) return shouldRetry(err) }) return } var clientOffset int64 var offsetMismatch bool buf := make([]byte, defaultChunkSize) for clientOffset < size { chunkSize := size - clientOffset if chunkSize >= int64(defaultChunkSize) { chunkSize = int64(defaultChunkSize) } chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, chunkSize) chunkStart := clientOffset reqSize := chunkSize transferOffset := clientOffset fs.Debugf(f, "chunkStart: %d, reqSize: %d", chunkStart, reqSize) // Transfer the chunk err = f.pacer.Call(func() (bool, error) { if offsetMismatch { // Get file offset and seek to the position offset, err := f.getServerOffset(ctx, location) if err != nil { return shouldRetry(err) } sentBytes := offset - chunkStart fs.Debugf(f, "sentBytes: %d", sentBytes) _, err = chunk.Seek(sentBytes, io.SeekStart) if err != nil { return shouldRetry(err) } transferOffset = offset reqSize = chunkSize - sentBytes offsetMismatch = false } fs.Debugf(f, "Sending chunk. 
transferOffset: %d length: %d", transferOffset, reqSize) var serverOffset int64 serverOffset, fileID, err = f.transferChunk(ctx, location, transferOffset, chunk, reqSize) if cerr, ok := err.(*statusCodeError); ok && cerr.response.StatusCode == 409 { offsetMismatch = true return true, err } if serverOffset != (transferOffset + reqSize) { offsetMismatch = true return true, errors.New("connection broken") } return shouldRetry(err) }) if err != nil { return } clientOffset += chunkSize } return } func (f *Fs) getServerOffset(ctx context.Context, location string) (offset int64, err error) { // defer log.Trace(f, "location=%v", location)("offset=%v, err=%v", &offset, &err) req, err := f.makeUploadHeadRequest(ctx, location) if err != nil { return 0, err } resp, err := f.oAuthClient.Do(req) if err != nil { return 0, err } err = checkStatusCode(resp, 200) if err != nil { return 0, err } return strconv.ParseInt(resp.Header.Get("upload-offset"), 10, 64) } func (f *Fs) transferChunk(ctx context.Context, location string, start int64, chunk io.ReadSeeker, chunkSize int64) (serverOffset, fileID int64, err error) { // defer log.Trace(f, "location=%v, start=%v, chunkSize=%v", location, start, chunkSize)("fileID=%v, err=%v", &fileID, &err) req, err := f.makeUploadPatchRequest(ctx, location, chunk, start, chunkSize) if err != nil { return } resp, err := f.oAuthClient.Do(req) if err != nil { return } defer func() { _ = resp.Body.Close() }() err = checkStatusCode(resp, 204) if err != nil { return } serverOffset, err = strconv.ParseInt(resp.Header.Get("upload-offset"), 10, 64) if err != nil { return } sfid := resp.Header.Get("putio-file-id") if sfid != "" { fileID, err = strconv.ParseInt(sfid, 10, 64) if err != nil { return } } return } func (f *Fs) makeUploadHeadRequest(ctx context.Context, location string) (*http.Request, error) { req, err := http.NewRequest("HEAD", location, nil) if err != nil { return nil, err } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext req.Header.Set("tus-resumable", "1.0.0") return req, nil } func (f *Fs) makeUploadPatchRequest(ctx context.Context, location string, in io.Reader, offset, length int64) (*http.Request, error) { req, err := http.NewRequest("PATCH", location, in) if err != nil { return nil, err } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext req.Header.Set("tus-resumable", "1.0.0") req.Header.Set("upload-offset", strconv.FormatInt(offset, 10)) req.Header.Set("content-length", strconv.FormatInt(length, 10)) req.Header.Set("content-type", "application/offset+octet-stream") return req, nil } // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) { // defer log.Trace(f, "dir=%v", dir)("err=%v", &err) _, err = f.dirCache.FindDir(ctx, dir, true) return err } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error) { // defer log.Trace(f, "dir=%v", dir)("err=%v", &err) root := strings.Trim(path.Join(f.root, dir), "/") // can't remove root if root == "" { return errors.New("can't remove root directory") } // check directory exists directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return errors.Wrap(err, "Rmdir") } dirID := atoi(directoryID) if check { // check directory empty var children []putio.File err = f.pacer.Call(func() (bool, error) { // fs.Debugf(f, "listing files: %d", dirID) children, _, err = f.client.Files.List(ctx, 
dirID) return shouldRetry(err) }) if err != nil { return errors.Wrap(err, "Rmdir") } if len(children) != 0 { return errors.New("directory not empty") } } // remove it err = f.pacer.Call(func() (bool, error) { // fs.Debugf(f, "deleting file: %d", dirID) err = f.client.Files.Delete(ctx, dirID) return shouldRetry(err) }) f.dirCache.FlushDir(dir) return err } // Rmdir deletes the container // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) { return f.purgeCheck(ctx, dir, true) } // Precision returns the precision func (f *Fs) Precision() time.Duration { return time.Second } // Purge deletes all the files in the directory // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) (err error) { // defer log.Trace(f, "")("err=%v", &err) return f.purgeCheck(ctx, dir, false) } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (o fs.Object, err error) { // defer log.Trace(f, "src=%+v, remote=%v", src, remote)("o=%+v, err=%v", &o, &err) srcObj, ok := src.(*Object) if !ok { return nil, fs.ErrorCantCopy } leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true) if err != nil { return nil, err } err = f.pacer.Call(func() (bool, error) { params := url.Values{} params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10)) params.Set("parent_id", directoryID) params.Set("name", f.opt.Enc.FromStandardName(leaf)) req, err := f.client.NewRequest(ctx, "POST", "/v2/files/copy", strings.NewReader(params.Encode())) if err != nil { return false, err } req.Header.Set("Content-Type", "application/x-www-form-urlencoded") // fs.Debugf(f, "copying file (%d) to parent_id: %s", srcObj.file.ID, directoryID) _, err = f.client.Do(req, nil) return shouldRetry(err) }) if err != nil { return nil, err } return f.NewObject(ctx, remote) } // Move src to this remote using server side move operations. 
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (o fs.Object, err error) { // defer log.Trace(f, "src=%+v, remote=%v", src, remote)("o=%+v, err=%v", &o, &err) srcObj, ok := src.(*Object) if !ok { return nil, fs.ErrorCantMove } leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true) if err != nil { return nil, err } err = f.pacer.Call(func() (bool, error) { params := url.Values{} params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10)) params.Set("parent_id", directoryID) params.Set("name", f.opt.Enc.FromStandardName(leaf)) req, err := f.client.NewRequest(ctx, "POST", "/v2/files/move", strings.NewReader(params.Encode())) if err != nil { return false, err } req.Header.Set("Content-Type", "application/x-www-form-urlencoded") // fs.Debugf(f, "moving file (%d) to parent_id: %s", srcObj.file.ID, directoryID) _, err = f.client.Do(req, nil) return shouldRetry(err) }) if err != nil { return nil, err } return f.NewObject(ctx, remote) } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) { // defer log.Trace(f, "src=%+v, srcRemote=%v, dstRemote", src, srcRemote, dstRemote)("err=%v", &err) srcFs, ok := src.(*Fs) if !ok { return fs.ErrorCantDirMove } srcID, _, _, dstDirectoryID, dstLeaf, err := f.dirCache.DirMove(ctx, srcFs.dirCache, srcFs.root, srcRemote, f.root, dstRemote) if err != nil { return err } err = f.pacer.Call(func() (bool, error) { params := url.Values{} params.Set("file_id", srcID) params.Set("parent_id", dstDirectoryID) params.Set("name", f.opt.Enc.FromStandardName(dstLeaf)) req, err := f.client.NewRequest(ctx, "POST", "/v2/files/move", strings.NewReader(params.Encode())) if err != nil { return false, err } req.Header.Set("Content-Type", "application/x-www-form-urlencoded") // fs.Debugf(f, "moving file (%s) to parent_id: %s", srcID, dstDirectoryID) _, err = f.client.Do(req, nil) return shouldRetry(err) }) srcFs.dirCache.FlushDir(srcRemote) return err } // About gets quota information func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) { // defer log.Trace(f, "")("usage=%+v, err=%v", usage, &err) var ai putio.AccountInfo err = f.pacer.Call(func() (bool, error) { // fs.Debugf(f, "getting account info") ai, err = f.client.Account.Info(ctx) return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "about failed") } return &fs.Usage{ Total: fs.NewUsageValue(ai.Disk.Size), // quota of bytes that can be used Used: fs.NewUsageValue(ai.Disk.Used), // bytes in use Free: fs.NewUsageValue(ai.Disk.Avail), // bytes which can be uploaded before reaching the quota }, nil } // Hashes returns the supported hash sets. 
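//
// Put.io exposes a CRC-32 per file. A hedged sketch of computing a matching
// checksum locally with the standard library, assuming put.io uses the
// common IEEE polynomial with lower-case hex encoding (the variable names
// here are illustrative, not part of this backend):
//
//	sum := crc32.ChecksumIEEE(data) // data is the file content as []byte
//	fmt.Printf("%08x\n", sum)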
func (f *Fs) Hashes() hash.Set { return hash.Set(hash.CRC32) } // DirCacheFlush resets the directory cache - used in testing as an // optional interface func (f *Fs) DirCacheFlush() { // defer log.Trace(f, "")("") f.dirCache.ResetRoot() } // CleanUp the trash in the Fs func (f *Fs) CleanUp(ctx context.Context) (err error) { // defer log.Trace(f, "")("err=%v", &err) return f.pacer.Call(func() (bool, error) { req, err := f.client.NewRequest(ctx, "POST", "/v2/trash/empty", nil) if err != nil { return false, err } // fs.Debugf(f, "emptying trash") _, err = f.client.Do(req, nil) return shouldRetry(err) }) } rclone-1.53.3/backend/putio/object.go000066400000000000000000000165611375552240400174310ustar00rootroot00000000000000package putio import ( "context" "io" "net/http" "net/url" "path" "strconv" "time" "github.com/pkg/errors" "github.com/putdotio/go-putio/putio" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" ) // Object describes a Putio object // // Putio Objects always have full metadata type Object struct { fs *Fs // what this object is part of file *putio.File remote string // The remote path modtime time.Time } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) { // defer log.Trace(f, "remote=%v", remote)("o=%+v, err=%v", &o, &err) obj := &Object{ fs: f, remote: remote, } err = obj.readEntryAndSetMetadata(ctx) if err != nil { return nil, err } return obj, err } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info putio.File) (o fs.Object, err error) { // defer log.Trace(f, "remote=%v, info=%+v", remote, &info)("o=%+v, err=%v", &o, &err) obj := &Object{ fs: f, remote: remote, } err = obj.setMetadataFromEntry(info) if err != nil { return nil, err } return obj, err } // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the CRC-32 checksum of the object func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.CRC32 { return "", hash.ErrUnsupported } err := o.readEntryAndSetMetadata(ctx) if err != nil { return "", errors.Wrap(err, "failed to read hash from metadata") } return o.file.CRC32, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { if o.file == nil { return 0 } return o.file.Size } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { if o.file == nil { return "" } return itoa(o.file.ID) } // MimeType returns the content type of the Object if // known, or "" if not func (o *Object) MimeType(ctx context.Context) string { err := o.readEntryAndSetMetadata(ctx) if err != nil { return "" } return o.file.ContentType } // setMetadataFromEntry sets the fs data from a putio.File // // This isn't a complete set of metadata and has an inaccurate date func (o *Object) setMetadataFromEntry(info putio.File) error { o.file = &info o.modtime = info.UpdatedAt.Time return nil } // Reads the entry for a file from putio func (o *Object) readEntry(ctx context.Context) (f *putio.File, err error) { // defer log.Trace(o, "")("f=%+v, err=%v", f, &err) leaf, directoryID, err :=
o.fs.dirCache.FindPath(ctx, o.remote, false) if err != nil { if err == fs.ErrorDirNotFound { return nil, fs.ErrorObjectNotFound } return nil, err } var resp struct { File putio.File `json:"file"` } err = o.fs.pacer.Call(func() (bool, error) { // fs.Debugf(o, "requesting child. directoryID: %s, name: %s", directoryID, leaf) req, err := o.fs.client.NewRequest(ctx, "GET", "/v2/files/"+directoryID+"/child?name="+url.QueryEscape(o.fs.opt.Enc.FromStandardName(leaf)), nil) if err != nil { return false, err } _, err = o.fs.client.Do(req, &resp) if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode == 404 { return false, fs.ErrorObjectNotFound } return shouldRetry(err) }) if err != nil { return nil, err } if resp.File.IsDir() { return nil, fs.ErrorNotAFile } return &resp.File, err } // Read entry if not set and set metadata from it func (o *Object) readEntryAndSetMetadata(ctx context.Context) error { if o.file != nil { return nil } entry, err := o.readEntry(ctx) if err != nil { return err } return o.setMetadataFromEntry(*entry) } // Returns the remote path for the object func (o *Object) remotePath() string { return path.Join(o.fs.root, o.remote) } // ModTime returns the modification time of the object // // It attempts to read the objects mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { if o.modtime.IsZero() { err := o.readEntryAndSetMetadata(ctx) if err != nil { fs.Debugf(o, "Failed to read metadata: %v", err) return time.Now() } } return o.modtime } // SetModTime sets the modification time of the local fs object // // Commits the datastore func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) { // defer log.Trace(o, "modTime=%v", modTime.String())("err=%v", &err) req, err := o.fs.client.NewRequest(ctx, "POST", "/v2/files/touch?file_id="+strconv.FormatInt(o.file.ID, 10)+"&updated_at="+url.QueryEscape(modTime.Format(time.RFC3339)), nil) if err != nil { return err } // fs.Debugf(o, "setting modtime: %s", modTime.String()) _, err = o.fs.client.Do(req, nil) if err != nil { return err } o.modtime = modTime if o.file != nil { o.file.UpdatedAt.Time = modTime } return nil } // Storable returns whether this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { // defer log.Trace(o, "")("err=%v", &err) var storageURL string err = o.fs.pacer.Call(func() (bool, error) { storageURL, err = o.fs.client.Files.URL(ctx, o.file.ID, true) return shouldRetry(err) }) if err != nil { return } var resp *http.Response headers := fs.OpenOptionHeaders(options) err = o.fs.pacer.Call(func() (bool, error) { req, err := http.NewRequest(http.MethodGet, storageURL, nil) if err != nil { return shouldRetry(err) } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext req.Header.Set("User-Agent", o.fs.client.UserAgent) // merge headers with extra headers for header, value := range headers { req.Header.Set(header, value) } // fs.Debugf(o, "opening file: id=%d", o.file.ID) resp, err = o.fs.httpClient.Do(req) return shouldRetry(err) }) if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode >= 400 && perr.Response.StatusCode <= 499 { _ = resp.Body.Close() return nil, fserrors.NoRetryError(err) } if err != nil { return nil, err } return resp.Body, nil } // Update the already existing object // // Copy the reader into the object 
updating modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { // defer log.Trace(o, "src=%+v", src)("err=%v", &err) remote := o.remotePath() if ignoredFiles.MatchString(remote) { fs.Logf(o, "File name disallowed - not uploading") return nil } err = o.Remove(ctx) if err != nil { return err } newObj, err := o.fs.PutUnchecked(ctx, in, src, options...) if err != nil { return err } *o = *(newObj.(*Object)) return err } // Remove an object func (o *Object) Remove(ctx context.Context) (err error) { // defer log.Trace(o, "")("err=%v", &err) return o.fs.pacer.Call(func() (bool, error) { // fs.Debugf(o, "removing file: id=%d", o.file.ID) err = o.fs.client.Files.Delete(ctx, o.file.ID) return shouldRetry(err) }) } rclone-1.53.3/backend/putio/putio.go000066400000000000000000000054121375552240400173140ustar00rootroot00000000000000package putio import ( "log" "regexp" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/oauthutil" "golang.org/x/oauth2" ) /* // TestPutio stringNeedsEscaping = []rune{ '/', '\x00' } maxFileLength = 255 canWriteUnnormalized = true canReadUnnormalized = true canReadRenormalized = true canStream = false */ // Constants const ( rcloneClientID = "4131" rcloneObscuredClientSecret = "cMwrjWVmrHZp3gf1ZpCrlyGAmPpB-YY5BbVnO1fj-G9evcd8" minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential defaultChunkSize = 48 * fs.MebiByte ) var ( // Description of how to auth for this app putioConfig = &oauth2.Config{ Scopes: []string{}, Endpoint: oauth2.Endpoint{ AuthURL: "https://api.put.io/v2/oauth2/authenticate", TokenURL: "https://api.put.io/v2/oauth2/access_token", }, ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneObscuredClientSecret), RedirectURL: oauthutil.RedirectLocalhostURL, } // A regexp matching path names for ignoring unnecessary files ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r)$`) ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "putio", Description: "Put.io", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { opt := oauthutil.Options{ NoOffline: true, } err := oauthutil.Config("putio", name, m, putioConfig, &opt) if err != nil { log.Fatalf("Failed to configure token: %v", err) } }, Options: []fs.Option{{ Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Note that \ is renamed to - // // Encode invalid UTF-8 bytes as json doesn't handle them properly. 
Default: (encoder.Display | encoder.EncodeBackSlash | encoder.EncodeInvalidUtf8), }}, }) } // Options defines the configuration for this backend type Options struct { Enc encoder.MultiEncoder `config:"encoding"` } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.PutUncheckeder = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ dircache.DirCacher = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.MimeTyper = (*Object)(nil) _ fs.IDer = (*Object)(nil) ) rclone-1.53.3/backend/putio/putio_test.go000066400000000000000000000004661375552240400203570ustar00rootroot00000000000000// Test Put.io filesystem interface package putio import ( "testing" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestPutio:", NilObject: (*Object)(nil), }) } rclone-1.53.3/backend/qingstor/000077500000000000000000000000001375552240400163315ustar00rootroot00000000000000rclone-1.53.3/backend/qingstor/qingstor.go000066400000000000000000000760411375552240400205360ustar00rootroot00000000000000// Package qingstor provides an interface to QingStor object storage // Home: https://www.qingcloud.com/ // +build !plan9,!js package qingstor import ( "context" "fmt" "io" "net/http" "path" "regexp" "strconv" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/bucket" "github.com/rclone/rclone/lib/encoder" qsConfig "github.com/yunify/qingstor-sdk-go/v3/config" qsErr "github.com/yunify/qingstor-sdk-go/v3/request/errors" qs "github.com/yunify/qingstor-sdk-go/v3/service" ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "qingstor", Description: "QingCloud Object Storage", NewFs: NewFs, Options: []fs.Option{{ Name: "env_auth", Help: "Get QingStor credentials from runtime. 
Only applies if access_key_id and secret_access_key are blank.", Default: false, Examples: []fs.OptionExample{{ Value: "false", Help: "Enter QingStor credentials in the next step", }, { Value: "true", Help: "Get QingStor credentials from the environment (env vars or IAM)", }}, }, { Name: "access_key_id", Help: "QingStor Access Key ID\nLeave blank for anonymous access or runtime credentials.", }, { Name: "secret_access_key", Help: "QingStor Secret Access Key (password)\nLeave blank for anonymous access or runtime credentials.", }, { Name: "endpoint", Help: "Enter an endpoint URL to connect to the QingStor API.\nLeave blank to use the default value \"https://qingstor.com:443\"", }, { Name: "zone", Help: "Zone to connect to.\nDefault is \"pek3a\".", Examples: []fs.OptionExample{{ Value: "pek3a", Help: "The Beijing (China) Three Zone\nNeeds location constraint pek3a.", }, { Value: "sh1a", Help: "The Shanghai (China) First Zone\nNeeds location constraint sh1a.", }, { Value: "gd2a", Help: "The Guangdong (China) Second Zone\nNeeds location constraint gd2a.", }}, }, { Name: "connection_retries", Help: "Number of connection retries.", Default: 3, Advanced: true, }, { Name: "upload_cutoff", Help: `Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.`, Default: defaultUploadCutoff, Advanced: true, }, { Name: "chunk_size", Help: `Chunk size to use for uploading. When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size. Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer. If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.`, Default: minChunkSize, Advanced: true, }, { Name: "upload_concurrency", Help: `Concurrency for multipart uploads. This is the number of chunks of the same file that are uploaded concurrently. NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though).
If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.`, Default: 1, Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, Default: (encoder.EncodeInvalidUtf8 | encoder.EncodeCtl | encoder.EncodeSlash), }}, }) } // Constants const ( listLimitSize = 1000 // Number of items to read at once maxSizeForCopy = 1024 * 1024 * 1024 * 5 // The maximum size of object we can COPY minChunkSize = fs.SizeSuffix(minMultiPartSize) defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024) maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024) ) // Globals func timestampToTime(tp int64) time.Time { timeLayout := time.RFC3339Nano ts := time.Unix(tp, 0).Format(timeLayout) tm, _ := time.Parse(timeLayout, ts) return tm.UTC() } // Options defines the configuration for this backend type Options struct { EnvAuth bool `config:"env_auth"` AccessKeyID string `config:"access_key_id"` SecretAccessKey string `config:"secret_access_key"` Endpoint string `config:"endpoint"` Zone string `config:"zone"` ConnectionRetries int `config:"connection_retries"` UploadCutoff fs.SizeSuffix `config:"upload_cutoff"` ChunkSize fs.SizeSuffix `config:"chunk_size"` UploadConcurrency int `config:"upload_concurrency"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote qingstor server type Fs struct { name string // The name of the remote root string // The root is a subdir, is a special object opt Options // parsed options features *fs.Features // optional features svc *qs.Service // The connection to the qingstor server zone string // The zone we are working on rootBucket string // bucket part of root (if any) rootDirectory string // directory part of root (if any) cache *bucket.Cache // cache for bucket creation status } // Object describes a qingstor object type Object struct { // Will definitely have everything but meta which may be nil // // List will read everything but meta & mimeType - to fill // that in you need to call readMetaData fs *Fs // what this object is part of remote string // The remote path etag string // md5sum of the object size int64 // length of the object content mimeType string // ContentType of object - may be "" lastModified time.Time // Last modified encrypted bool // whether the object is encrypted algo string // Custom encryption algorithms } // ------------------------------------------------------------ // parsePath parses a remote 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // split returns bucket and bucketPath from the rootRelativePath // relative to f.root func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) { bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath)) return f.opt.Enc.FromStandardName(bucketName), f.opt.Enc.FromStandardPath(bucketPath) } // split returns bucket and bucketPath from the object func (o *Object) split() (bucket, bucketPath string) { return o.fs.split(o.remote) } // Split a URL into three parts: protocol, host and port func qsParseEndpoint(endpoint string) (protocol, host, port string, err error) { /* Pattern to match an endpoint, eg: "http(s)://qingstor.com:443" --> "http(s)", "qingstor.com", 443 "http(s)://qingstor.com" --> "http(s)", "qingstor.com", "" "qingstor.com" --> "", "qingstor.com", "" */ defer func() { if r := recover(); r != nil { switch x := r.(type) { case error: err = x default: err = nil
} } }() var matcher = regexp.MustCompile(`^(?:(http|https)://)*(\w+\.(?:[\w\.])*)(?::(\d{0,5}))*$`) parts := matcher.FindStringSubmatch(endpoint) protocol, host, port = parts[1], parts[2], parts[3] return } // qsServiceConnection makes a connection to qingstor func qsServiceConnection(opt *Options) (*qs.Service, error) { accessKeyID := opt.AccessKeyID secretAccessKey := opt.SecretAccessKey switch { case opt.EnvAuth: // No need for empty checks if "env_auth" is true case accessKeyID == "" && secretAccessKey == "": // if no access key/secret and iam is explicitly disabled then fall back to anon interaction case accessKeyID == "": return nil, errors.New("access_key_id not found") case secretAccessKey == "": return nil, errors.New("secret_access_key not found") } protocol := "https" host := "qingstor.com" port := 443 endpoint := opt.Endpoint if endpoint != "" { _protocol, _host, _port, err := qsParseEndpoint(endpoint) if err != nil { return nil, fmt.Errorf("the endpoint %q has an invalid format", endpoint) } if _protocol != "" { protocol = _protocol } host = _host if _port != "" { port, _ = strconv.Atoi(_port) } else if protocol == "http" { port = 80 } } cf, err := qsConfig.NewDefault() if err != nil { return nil, err } cf.AccessKeyID = accessKeyID cf.SecretAccessKey = secretAccessKey cf.Protocol = protocol cf.Host = host cf.Port = port // unsupported in v3.1: cf.ConnectionRetries = opt.ConnectionRetries cf.Connection = fshttp.NewClient(fs.Config) return qs.Init(cf) } func checkUploadChunkSize(cs fs.SizeSuffix) error { if cs < minChunkSize { return errors.Errorf("%s is less than %s", cs, minChunkSize) } return nil } func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadChunkSize(cs) if err == nil { old, f.opt.ChunkSize = f.opt.ChunkSize, cs } return } func checkUploadCutoff(cs fs.SizeSuffix) error { if cs > maxUploadCutoff { return errors.Errorf("%s is greater than %s", cs, maxUploadCutoff) } return nil } func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadCutoff(cs) if err == nil { old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs } return } // setRoot changes the root of the Fs func (f *Fs) setRoot(root string) { f.root = parsePath(root) f.rootBucket, f.rootDirectory = bucket.Split(f.root) } // NewFs constructs an Fs from the path, bucket:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } err = checkUploadChunkSize(opt.ChunkSize) if err != nil { return nil, errors.Wrap(err, "qingstor: chunk size") } err = checkUploadCutoff(opt.UploadCutoff) if err != nil { return nil, errors.Wrap(err, "qingstor: upload cutoff") } svc, err := qsServiceConnection(opt) if err != nil { return nil, err } if opt.Zone == "" { opt.Zone = "pek3a" } f := &Fs{ name: name, opt: *opt, svc: svc, zone: opt.Zone, cache: bucket.NewCache(), } f.setRoot(root) f.features = (&fs.Features{ ReadMimeType: true, WriteMimeType: true, BucketBased: true, BucketBasedRootOK: true, SlowModTime: true, }).Fill(f) if f.rootBucket != "" && f.rootDirectory != "" { // Check to see if the object exists bucketInit, err := svc.Bucket(f.rootBucket, opt.Zone) if err != nil { return nil, err } encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory) _, err = bucketInit.HeadObject(encodedDirectory, &qs.HeadObjectInput{}) if err == nil { newRoot := path.Dir(f.root) if newRoot == "."
{ newRoot = "" } f.setRoot(newRoot) // return an error with an fs which points to the parent return f, fs.ErrorIsFile } } return f, nil } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { if f.rootBucket == "" { return "QingStor root" } if f.rootDirectory == "" { return fmt.Sprintf("QingStor bucket %s", f.rootBucket) } return fmt.Sprintf("QingStor bucket %s path %s", f.rootBucket, f.rootDirectory) } // Precision of the remote func (f *Fs) Precision() time.Duration { //return time.Nanosecond //Not supported temporarily return fs.ModTimeNotSupported } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) //return hash.HashSet(hash.HashNone) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // Put creates a new object func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { fsObj := &Object{ fs: f, remote: src.Remote(), } return fsObj, fsObj.Update(ctx, in, src, options...) } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { dstBucket, dstPath := f.split(remote) err := f.makeBucket(ctx, dstBucket) if err != nil { return nil, err } srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } srcBucket, srcPath := srcObj.split() source := path.Join("/", srcBucket, srcPath) // fs.Debugf(f, "Copied, source key is: %s, and dst key is: %s", source, key) req := qs.PutObjectInput{ XQSCopySource: &source, } bucketInit, err := f.svc.Bucket(dstBucket, f.zone) if err != nil { return nil, err } _, err = bucketInit.PutObject(dstPath, &req) if err != nil { // fs.Debugf(f, "Copy Failed, API Error: %v", err) return nil, err } return f.NewObject(ctx, remote) } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(remote, nil) } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(remote string, info *qs.KeyType) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } if info != nil { // Set info if info.Size != nil { o.size = *info.Size } if info.Etag != nil { o.etag = qs.StringValue(info.Etag) } if info.Modified == nil { fs.Logf(o, "Failed to read last modified") o.lastModified = time.Now() } else { o.lastModified = timestampToTime(int64(*info.Modified)) } if info.MimeType != nil { o.mimeType = qs.StringValue(info.MimeType) } if info.Encrypted != nil { o.encrypted = qs.BoolValue(info.Encrypted) } } else { err := o.readMetaData() // reads info and meta, returning an error if err != nil { return nil, err } } return o, nil } // listFn is called from list to handle an object.
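//
// A minimal sketch of a listFn that just collects the remote names it is
// given (illustrative only, not part of this backend):
//
//	var names []string
//	fn := func(remote string, object *qs.KeyType, isDirectory bool) error {
//		names = append(names, remote)
//		return nil
//	}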
type listFn func(remote string, object *qs.KeyType, isDirectory bool) error // list the objects into the function supplied // // dir is the starting directory, "" for root // // Set recurse to read sub directories func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, fn listFn) error { if prefix != "" { prefix += "/" } if directory != "" { directory += "/" } delimiter := "" if !recurse { delimiter = "/" } maxLimit := int(listLimitSize) var marker *string for { bucketInit, err := f.svc.Bucket(bucket, f.zone) if err != nil { return err } req := qs.ListObjectsInput{ Delimiter: &delimiter, Prefix: &directory, Limit: &maxLimit, Marker: marker, } resp, err := bucketInit.ListObjects(&req) if err != nil { if e, ok := err.(*qsErr.QingStorError); ok { if e.StatusCode == http.StatusNotFound { err = fs.ErrorDirNotFound } } return err } if !recurse { for _, commonPrefix := range resp.CommonPrefixes { if commonPrefix == nil { fs.Logf(f, "Nil common prefix received") continue } remote := *commonPrefix remote = f.opt.Enc.ToStandardPath(remote) if !strings.HasPrefix(remote, prefix) { fs.Logf(f, "Odd name received %q", remote) continue } remote = remote[len(prefix):] if addBucket { remote = path.Join(bucket, remote) } if strings.HasSuffix(remote, "/") { remote = remote[:len(remote)-1] } err = fn(remote, &qs.KeyType{Key: &remote}, true) if err != nil { return err } } } for _, object := range resp.Keys { remote := qs.StringValue(object.Key) remote = f.opt.Enc.ToStandardPath(remote) if !strings.HasPrefix(remote, prefix) { fs.Logf(f, "Odd name received %q", remote) continue } remote = remote[len(prefix):] if addBucket { remote = path.Join(bucket, remote) } err = fn(remote, object, false) if err != nil { return err } } if resp.HasMore != nil && !*resp.HasMore { break } // Use NextMarker if set, otherwise use last Key if resp.NextMarker == nil || *resp.NextMarker == "" { fs.Errorf(f, "Expecting NextMarker but didn't find one") break } else { marker = resp.NextMarker } } return nil } // Convert a list item into a BasicInfo func (f *Fs) itemToDirEntry(remote string, object *qs.KeyType, isDirectory bool) (fs.DirEntry, error) { if isDirectory { size := int64(0) if object.Size != nil { size = *object.Size } d := fs.NewDir(remote, time.Time{}).SetSize(size) return d, nil } o, err := f.newObjectWithInfo(remote, object) if err != nil { return nil, err } return o, nil } // listDir lists files and directories to out func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) { // List the objects and directories err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *qs.KeyType, isDirectory bool) error { entry, err := f.itemToDirEntry(remote, object, isDirectory) if err != nil { return err } if entry != nil { entries = append(entries, entry) } return nil }) if err != nil { return nil, err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) return entries, nil } // listBuckets lists the buckets to out func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) { req := qs.ListBucketsInput{ Location: &f.zone, } resp, err := f.svc.ListBuckets(&req) if err != nil { return nil, err } for _, bucket := range resp.Buckets { d := fs.NewDir(f.opt.Enc.ToStandardName(qs.StringValue(bucket.Name)), qs.TimeValue(bucket.Created)) entries = append(entries, d) } return entries, nil } // List the objects and directories in dir into entries. 
The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { bucket, directory := f.split(dir) if bucket == "" { if directory != "" { return nil, fs.ErrorListBucketRequired } return f.listBuckets(ctx) } return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "") } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively than doing a directory traversal. func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { bucket, directory := f.split(dir) list := walk.NewListRHelper(callback) listR := func(bucket, directory, prefix string, addBucket bool) error { return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *qs.KeyType, isDirectory bool) error { entry, err := f.itemToDirEntry(remote, object, isDirectory) if err != nil { return err } return list.Add(entry) }) } if bucket == "" { entries, err := f.listBuckets(ctx) if err != nil { return err } for _, entry := range entries { err = list.Add(entry) if err != nil { return err } bucket := entry.Remote() err = listR(bucket, "", f.rootDirectory, true) if err != nil { return err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) } } else { err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "") if err != nil { return err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) } return list.Flush() } // Mkdir creates the bucket if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { bucket, _ := f.split(dir) return f.makeBucket(ctx, bucket) } // makeBucket creates the bucket if it doesn't exist func (f *Fs) makeBucket(ctx context.Context, bucket string) error { return f.cache.Create(bucket, func() error { bucketInit, err := f.svc.Bucket(bucket, f.zone) if err != nil { return err } /* When a bucket is deleted, QingStor needs about 60 seconds to sync its status, so we have to wait for the sync to finish before operating on a just-deleted bucket */ wasDeleted := false retries := 0 for retries <= 120 { statistics, err := bucketInit.GetStatistics() if statistics == nil || err != nil { break } switch *statistics.Status { case "deleted": fs.Debugf(f, "Wait for qingstor bucket to be deleted, retries: %d", retries) time.Sleep(time.Second * 1) retries++ wasDeleted = true continue default: break } break } retries = 0 for retries <= 120 { _, err = bucketInit.Put() if e, ok := err.(*qsErr.QingStorError); ok { if e.StatusCode == http.StatusConflict { if wasDeleted { fs.Debugf(f, "Wait for qingstor bucket to be creatable, retries: %d", retries) time.Sleep(time.Second * 1) retries++ continue } err = nil } } break } return err }, nil) } // bucketIsEmpty checks if the bucket is empty func (f *Fs) bucketIsEmpty(bucket string) (bool, error) { bucketInit, err := f.svc.Bucket(bucket, f.zone) if err
!= nil { return true, err } statistics, err := bucketInit.GetStatistics() if err != nil { return true, err } if *statistics.Count == 0 { return true, nil } return false, nil } // Rmdir deletes a bucket func (f *Fs) Rmdir(ctx context.Context, dir string) error { bucket, directory := f.split(dir) if bucket == "" || directory != "" { return nil } isEmpty, err := f.bucketIsEmpty(bucket) if err != nil { return err } if !isEmpty { // fs.Debugf(f, "The bucket %s you tried to delete is not empty.", bucket) return errors.New("BucketNotEmpty: The bucket you tried to delete is not empty") } return f.cache.Remove(bucket, func() error { // fs.Debugf(f, "Deleting the bucket %s", bucket) bucketInit, err := f.svc.Bucket(bucket, f.zone) if err != nil { return err } retries := 0 for retries <= 10 { _, delErr := bucketInit.Delete() if delErr != nil { if e, ok := delErr.(*qsErr.QingStorError); ok { switch e.Code { // The bucket's "lease" status takes a few seconds to become "ready" after creating a new bucket // wait for lease status ready case "lease_not_ready": fs.Debugf(f, "QingStor bucket lease not ready, retries: %d", retries) retries++ time.Sleep(time.Second * 1) continue default: err = e break } } } else { err = delErr } break } return err }) } // cleanUpBucket removes all pending multipart uploads for a given bucket func (f *Fs) cleanUpBucket(ctx context.Context, bucket string) (err error) { fs.Infof(f, "cleaning bucket %q of pending multipart uploads older than 24 hours", bucket) bucketInit, err := f.svc.Bucket(bucket, f.zone) if err != nil { return err } maxLimit := int(listLimitSize) var marker *string for { req := qs.ListMultipartUploadsInput{ Limit: &maxLimit, KeyMarker: marker, } var resp *qs.ListMultipartUploadsOutput resp, err = bucketInit.ListMultipartUploads(&req) if err != nil { return errors.Wrap(err, "clean up bucket list multipart uploads") } for _, upload := range resp.Uploads { if upload.Created != nil && upload.Key != nil && upload.UploadID != nil { age := time.Since(*upload.Created) if age > 24*time.Hour { fs.Infof(f, "removing pending multipart upload for %q dated %v (%v ago)", *upload.Key, upload.Created, age) req := qs.AbortMultipartUploadInput{ UploadID: upload.UploadID, } _, abortErr := bucketInit.AbortMultipartUpload(*upload.Key, &req) if abortErr != nil { err = errors.Wrapf(abortErr, "failed to remove multipart upload for %q", *upload.Key) fs.Errorf(f, "%v", err) } } else { fs.Debugf(f, "ignoring pending multipart upload for %q dated %v (%v ago)", *upload.Key, upload.Created, age) } } } if resp.HasMore != nil && !*resp.HasMore { break } // Use NextKeyMarker if set, otherwise use last Key if resp.NextKeyMarker == nil || *resp.NextKeyMarker == "" { fs.Errorf(f, "Expecting NextKeyMarker but didn't find one") break } else { marker = resp.NextKeyMarker } } return err } // CleanUp removes all pending multipart uploads func (f *Fs) CleanUp(ctx context.Context) (err error) { if f.rootBucket != "" { return f.cleanUpBucket(ctx, f.rootBucket) } entries, err := f.listBuckets(ctx) if err != nil { return err } for _, entry := range entries { cleanErr := f.cleanUpBucket(ctx, f.opt.Enc.FromStandardName(entry.Remote())) if cleanErr != nil { fs.Errorf(f, "Failed to clean up bucket: %v", cleanErr) err = cleanErr } } return err } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData() (err error) { bucket, bucketPath := o.split() bucketInit, err := o.fs.svc.Bucket(bucket, o.fs.zone) if err != nil { return err } // fs.Debugf(o, "Read metadata of key:
%s", key) resp, err := bucketInit.HeadObject(bucketPath, &qs.HeadObjectInput{}) if err != nil { // fs.Debugf(o, "Read metadata failed, API Error: %v", err) if e, ok := err.(*qsErr.QingStorError); ok { if e.StatusCode == http.StatusNotFound { return fs.ErrorObjectNotFound } } return err } // Ignore missing Content-Length assuming it is 0 if resp.ContentLength != nil { o.size = *resp.ContentLength } if resp.ETag != nil { o.etag = qs.StringValue(resp.ETag) } if resp.LastModified == nil { fs.Logf(o, "Failed to read last modified from HEAD: %v", err) o.lastModified = time.Now() } else { o.lastModified = *resp.LastModified } if resp.ContentType != nil { o.mimeType = qs.StringValue(resp.ContentType) } if resp.XQSEncryptionCustomerAlgorithm != nil { o.algo = qs.StringValue(resp.XQSEncryptionCustomerAlgorithm) o.encrypted = true } return nil } // ModTime returns the modification date of the file // It should return a best guess if one isn't available func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData() if err != nil { fs.Logf(o, "Failed to read metadata, %v", err) return time.Now() } modTime := o.lastModified return modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { err := o.readMetaData() if err != nil { return err } o.lastModified = modTime mimeType := fs.MimeType(ctx, o) if o.size >= maxSizeForCopy { fs.Debugf(o, "SetModTime is unsupported for objects bigger than %v bytes", fs.SizeSuffix(maxSizeForCopy)) return nil } // Copy the object to itself to update the metadata bucket, bucketPath := o.split() sourceKey := path.Join("/", bucket, bucketPath) bucketInit, err := o.fs.svc.Bucket(bucket, o.fs.zone) if err != nil { return err } req := qs.PutObjectInput{ XQSCopySource: &sourceKey, ContentType: &mimeType, } _, err = bucketInit.PutObject(bucketPath, &req) return err } // Open opens the file for read. 
Call Close() on the returned io.ReadCloser func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { bucket, bucketPath := o.split() bucketInit, err := o.fs.svc.Bucket(bucket, o.fs.zone) if err != nil { return nil, err } req := qs.GetObjectInput{} fs.FixRangeOption(options, o.size) for _, option := range options { switch option.(type) { case *fs.RangeOption, *fs.SeekOption: _, value := option.Header() req.Range = &value default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } resp, err := bucketInit.GetObject(bucketPath, &req) if err != nil { return nil, err } return resp.Body, nil } // Update in to the object func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { // The maximum size of upload object is multipartUploadSize * MaxMultipleParts bucket, bucketPath := o.split() err := o.fs.makeBucket(ctx, bucket) if err != nil { return err } // Guess the content type mimeType := fs.MimeType(ctx, src) req := uploadInput{ body: in, qsSvc: o.fs.svc, bucket: bucket, zone: o.fs.zone, key: bucketPath, mimeType: mimeType, partSize: int64(o.fs.opt.ChunkSize), concurrency: o.fs.opt.UploadConcurrency, } uploader := newUploader(&req) size := src.Size() multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff) if multipart { err = uploader.upload() } else { err = uploader.singlePartUpload(in, size) } if err != nil { return err } // Read Metadata of object err = o.readMetaData() return err } // Remove this object func (o *Object) Remove(ctx context.Context) error { bucket, bucketPath := o.split() bucketInit, err := o.fs.svc.Bucket(bucket, o.fs.zone) if err != nil { return err } _, err = bucketInit.DeleteObject(bucketPath) return err } // Fs returns read only access to the Fs that this object is part of func (o *Object) Fs() fs.Info { return o.fs } var matchMd5 = regexp.MustCompile(`^[0-9a-f]{32}$`) // Hash returns the selected checksum of the file // If no checksum is available it returns "" func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } etag := strings.Trim(strings.ToLower(o.etag), `"`) // Check the etag is a valid md5sum if !matchMd5.MatchString(etag) { fs.Debugf(o, "Invalid md5sum (probably multipart uploaded) - ignoring: %q", etag) return "", nil } return etag, nil } // Storable says whether this object can be stored func (o *Object) Storable() bool { return true } // String returns a description of the Object func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Size returns the size of the file func (o *Object) Size() int64 { return o.size } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { err := o.readMetaData() if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return "" } return o.mimeType } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.CleanUpper = &Fs{} _ fs.Copier = &Fs{} _ fs.Object = &Object{} _ fs.ListRer = &Fs{} _ fs.MimeTyper = &Object{} ) rclone-1.53.3/backend/qingstor/qingstor_test.go000066400000000000000000000013161375552240400215660ustar00rootroot00000000000000// Test QingStor filesystem interface // +build !plan9,!js package qingstor import ( "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the 
remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestQingStor:", NilObject: (*Object)(nil), ChunkedUpload: fstests.ChunkedUploadConfig{ MinChunkSize: minChunkSize, }, }) } func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadChunkSize(cs) } func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadCutoff(cs) } var _ fstests.SetUploadChunkSizer = (*Fs)(nil) rclone-1.53.3/backend/qingstor/qingstor_unsupported.go000066400000000000000000000002111375552240400231700ustar00rootroot00000000000000// Build for unsupported platforms to stop go complaining // about "no buildable Go source files " // +build plan9 js package qingstor rclone-1.53.3/backend/qingstor/upload.go000066400000000000000000000261051375552240400201500ustar00rootroot00000000000000// Upload object to QingStor // +build !plan9,!js package qingstor import ( "bytes" "crypto/md5" "fmt" "hash" "io" "sort" "sync" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/lib/atexit" qs "github.com/yunify/qingstor-sdk-go/v3/service" ) const ( // maxSinglePartSize = 1024 * 1024 * 1024 * 5 // The maximum allowed size when uploading a single object to QingStor // maxMultiPartSize = 1024 * 1024 * 1024 * 1 // The maximum allowed part size when uploading a part to QingStor minMultiPartSize = 1024 * 1024 * 4 // The minimum allowed part size when uploading a part to QingStor maxMultiParts = 10000 // The maximum allowed number of parts in a multi-part upload ) const ( defaultUploadPartSize = 1024 * 1024 * 64 // The default part size to buffer chunks of a payload into. defaultUploadConcurrency = 4 // the default number of goroutines to spin up when using multiPartUpload. ) func readFillBuf(r io.Reader, b []byte) (offset int, err error) { for offset < len(b) && err == nil { var n int n, err = r.Read(b[offset:]) offset += n } return offset, err } // uploadInput contains all input for upload requests to QingStor. type uploadInput struct { body io.Reader qsSvc *qs.Service mimeType string zone string bucket string key string partSize int64 concurrency int maxUploadParts int } // uploader internal structure to manage an upload to QingStor. type uploader struct { cfg *uploadInput totalSize int64 // set to -1 if the size is not known readerPos int64 // current reader position readerSize int64 // current reader content size } // newUploader creates a new Uploader instance to upload objects to QingStor. func newUploader(in *uploadInput) *uploader { u := &uploader{ cfg: in, } return u } // bucketInit initiate as bucket controller func (u *uploader) bucketInit() (*qs.Bucket, error) { bucketInit, err := u.cfg.qsSvc.Bucket(u.cfg.bucket, u.cfg.zone) return bucketInit, err } // String converts uploader to a string func (u *uploader) String() string { return fmt.Sprintf("QingStor bucket %s key %s", u.cfg.bucket, u.cfg.key) } // nextReader returns a seekable reader representing the next packet of data. // This operation increases the shared u.readerPos counter, but note that it // does not need to be wrapped in a mutex because nextReader is only called // from the main thread. 
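//
// For example, assuming a seekable body, a partSize of 64 MiB and a
// 200 MiB file: successive calls return section readers covering
// [0, 64M), [64M, 128M) and [128M, 192M), and the final call returns a
// reader for [192M, 200M) together with io.EOF.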
func (u *uploader) nextReader() (io.ReadSeeker, int, error) { type readerAtSeeker interface { io.ReaderAt io.ReadSeeker } switch r := u.cfg.body.(type) { case readerAtSeeker: var err error n := u.cfg.partSize if u.totalSize >= 0 { bytesLeft := u.totalSize - u.readerPos if bytesLeft <= u.cfg.partSize { err = io.EOF n = bytesLeft } } reader := io.NewSectionReader(r, u.readerPos, n) u.readerPos += n u.readerSize = n return reader, int(n), err default: part := make([]byte, u.cfg.partSize) n, err := readFillBuf(r, part) u.readerPos += int64(n) u.readerSize = int64(n) return bytes.NewReader(part[0:n]), n, err } } // init will initialize all default options. func (u *uploader) init() { if u.cfg.concurrency == 0 { u.cfg.concurrency = defaultUploadConcurrency } if u.cfg.partSize == 0 { u.cfg.partSize = defaultUploadPartSize } if u.cfg.maxUploadParts == 0 { u.cfg.maxUploadParts = maxMultiParts } // Try to get the total size for some optimizations u.totalSize = -1 switch r := u.cfg.body.(type) { case io.Seeker: pos, _ := r.Seek(0, io.SeekCurrent) defer func() { _, _ = r.Seek(pos, io.SeekStart) }() n, err := r.Seek(0, io.SeekEnd) if err != nil { return } u.totalSize = n // Try to adjust partSize if it is too small and account for // integer division truncation. if u.totalSize/u.cfg.partSize >= u.cfg.partSize { // Add one to the part size to account for remainders // during the size calculation. e.g odd number of bytes. u.cfg.partSize = (u.totalSize / int64(u.cfg.maxUploadParts)) + 1 } } } // singlePartUpload upload a single object that contentLength less than "defaultUploadPartSize" func (u *uploader) singlePartUpload(buf io.Reader, size int64) error { bucketInit, _ := u.bucketInit() req := qs.PutObjectInput{ ContentLength: &size, ContentType: &u.cfg.mimeType, Body: buf, } _, err := bucketInit.PutObject(u.cfg.key, &req) if err == nil { fs.Debugf(u, "Upload single object finished") } return err } // Upload upload an object into QingStor func (u *uploader) upload() error { u.init() if u.cfg.partSize < minMultiPartSize { return errors.Errorf("part size must be at least %d bytes", minMultiPartSize) } // Do one read to determine if we have more than one part reader, _, err := u.nextReader() if err == io.EOF { // single part fs.Debugf(u, "Uploading as single part object to QingStor") return u.singlePartUpload(reader, u.readerPos) } else if err != nil { return errors.Errorf("read upload data failed: %s", err) } fs.Debugf(u, "Uploading as multi-part object to QingStor") mu := multiUploader{uploader: u} return mu.multiPartUpload(reader) } // internal structure to manage a specific multipart upload to QingStor. type multiUploader struct { *uploader wg sync.WaitGroup mtx sync.Mutex err error uploadID *string objectParts completedParts hashMd5 hash.Hash } // keeps track of a single chunk of data being sent to QingStor. type chunk struct { buffer io.ReadSeeker partNumber int size int64 } // completedParts is a wrapper to make parts sortable by their part number, // since QingStor required this list to be sent in sorted order. 
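//
// Implementing sort.Interface here is what lets complete() below do a
// plain sort.Sort(mu.objectParts) before sending the
// CompleteMultipartUpload request.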
type completedParts []*qs.ObjectPartType func (a completedParts) Len() int { return len(a) } func (a completedParts) Swap(i, j int) { a[i], a[j] = a[j], a[i] } func (a completedParts) Less(i, j int) bool { return *a[i].PartNumber < *a[j].PartNumber } // String converts multiUploader to a string func (mu *multiUploader) String() string { if uploadID := mu.uploadID; uploadID != nil { return fmt.Sprintf("QingStor bucket %s key %s uploadID %s", mu.cfg.bucket, mu.cfg.key, *uploadID) } return fmt.Sprintf("QingStor bucket %s key %s uploadID ", mu.cfg.bucket, mu.cfg.key) } // getErr is a thread-safe getter for the error object func (mu *multiUploader) getErr() error { mu.mtx.Lock() defer mu.mtx.Unlock() return mu.err } // setErr is a thread-safe setter for the error object func (mu *multiUploader) setErr(e error) { mu.mtx.Lock() defer mu.mtx.Unlock() mu.err = e } // readChunk runs in worker goroutines to pull chunks off of the ch channel // and send() them as UploadPart requests. func (mu *multiUploader) readChunk(ch chan chunk) { defer mu.wg.Done() for { c, ok := <-ch if !ok { break } if mu.getErr() == nil { if err := mu.send(c); err != nil { mu.setErr(err) } } } } // initiate initiates a multipart upload and obtains the UploadID func (mu *multiUploader) initiate() error { bucketInit, _ := mu.bucketInit() req := qs.InitiateMultipartUploadInput{ ContentType: &mu.cfg.mimeType, } fs.Debugf(mu, "Initiating a multi-part upload") rsp, err := bucketInit.InitiateMultipartUpload(mu.cfg.key, &req) if err == nil { mu.uploadID = rsp.UploadID mu.hashMd5 = md5.New() } return err } // send uploads a part to QingStor func (mu *multiUploader) send(c chunk) error { bucketInit, _ := mu.bucketInit() req := qs.UploadMultipartInput{ PartNumber: &c.partNumber, UploadID: mu.uploadID, ContentLength: &c.size, Body: c.buffer, } fs.Debugf(mu, "Uploading a part to QingStor with partNumber %d and partSize %d", c.partNumber, c.size) _, err := bucketInit.UploadMultipart(mu.cfg.key, &req) if err != nil { return err } fs.Debugf(mu, "Done uploading part partNumber %d and partSize %d", c.partNumber, c.size) mu.mtx.Lock() defer mu.mtx.Unlock() _, _ = c.buffer.Seek(0, 0) _, _ = io.Copy(mu.hashMd5, c.buffer) parts := qs.ObjectPartType{PartNumber: &c.partNumber, Size: &c.size} mu.objectParts = append(mu.objectParts, &parts) return err } // complete completes a multipart upload func (mu *multiUploader) complete() error { var err error if err = mu.getErr(); err != nil { return err } bucketInit, _ := mu.bucketInit() //if err = mu.list(); err != nil { // return err //} //md5String := fmt.Sprintf("\"%s\"", hex.EncodeToString(mu.hashMd5.Sum(nil))) md5String := fmt.Sprintf("\"%x\"", mu.hashMd5.Sum(nil)) sort.Sort(mu.objectParts) req := qs.CompleteMultipartUploadInput{ UploadID: mu.uploadID, ObjectParts: mu.objectParts, ETag: &md5String, } fs.Debugf(mu, "Completing multi-part object") _, err = bucketInit.CompleteMultipartUpload(mu.cfg.key, &req) if err == nil { fs.Debugf(mu, "Complete multi-part finished") } return err } // abort aborts a multipart upload func (mu *multiUploader) abort() error { var err error bucketInit, _ := mu.bucketInit() if uploadID := mu.uploadID; uploadID != nil { req := qs.AbortMultipartUploadInput{ UploadID: uploadID, } fs.Debugf(mu, "Aborting multi-part object %q", *uploadID) _, err = bucketInit.AbortMultipartUpload(mu.cfg.key, &req) } return err } // multiPartUpload uploads an object to QingStor using a multipart upload func (mu *multiUploader) multiPartUpload(firstBuf io.ReadSeeker) (err error) { // Initiate a multi-part upload if err =
// multiPartUpload uploads an object into QingStor as a multipart upload
func (mu *multiUploader) multiPartUpload(firstBuf io.ReadSeeker) (err error) {
	// Initiate a multi-part upload
	if err = mu.initiate(); err != nil {
		return err
	}

	// Cancel the session if something went wrong
	defer atexit.OnError(&err, func() {
		fs.Debugf(mu, "Cancelling multipart upload: %v", err)
		cancelErr := mu.abort()
		if cancelErr != nil {
			fs.Logf(mu, "Failed to cancel multipart upload: %v", cancelErr)
		}
	})()

	ch := make(chan chunk, mu.cfg.concurrency)
	for i := 0; i < mu.cfg.concurrency; i++ {
		mu.wg.Add(1)
		go mu.readChunk(ch)
	}

	var partNumber int
	ch <- chunk{partNumber: partNumber, buffer: firstBuf, size: mu.readerSize}

	for mu.getErr() == nil {
		partNumber++
		// This upload exceeded maximum number of supported parts, error now.
		if partNumber > mu.cfg.maxUploadParts || partNumber > maxMultiParts {
			var msg string
			if partNumber > mu.cfg.maxUploadParts {
				msg = fmt.Sprintf("exceeded total allowed configured maxUploadParts (%d). "+
					"Adjust PartSize to fit in this limit", mu.cfg.maxUploadParts)
			} else {
				msg = fmt.Sprintf("exceeded total allowed QingStor limit maxUploadParts (%d). "+
					"Adjust PartSize to fit in this limit", maxMultiParts)
			}
			mu.setErr(errors.New(msg))
			break
		}

		var reader io.ReadSeeker
		var nextChunkLen int
		reader, nextChunkLen, err = mu.nextReader()
		if err != nil && err != io.EOF {
			// empty ch
			go func() {
				for range ch {
				}
			}()
			// Wait for all goroutines to finish
			close(ch)
			mu.wg.Wait()
			return err
		}
		if nextChunkLen == 0 && partNumber > 0 {
			// No need to upload empty part, if file was empty to start
			// with empty single part would have been created and never
			// started multipart upload.
			break
		}
		num := partNumber
		ch <- chunk{partNumber: num, buffer: reader, size: mu.readerSize}
	}
	// Wait for all goroutines to finish
	close(ch)
	mu.wg.Wait()
	// Complete Multipart Upload
	return mu.complete()
}
rclone-1.53.3/backend/s3/000077500000000000000000000000001375552240400150105ustar00rootroot00000000000000rclone-1.53.3/backend/s3/s3.go000066400000000000000000003053111375552240400156670ustar00rootroot00000000000000// Package s3 provides an interface to Amazon S3 object storage package s3 import ( "bytes" "context" "crypto/md5" "encoding/base64" "encoding/hex" "encoding/xml" "fmt" "io" "net/http" "net/url" "path" "regexp" "sort" "strconv" "strings" "sync" "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/aws/corehandlers" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" "github.com/aws/aws-sdk-go/aws/credentials/stscreds" "github.com/aws/aws-sdk-go/aws/defaults" "github.com/aws/aws-sdk-go/aws/ec2metadata" "github.com/aws/aws-sdk-go/aws/endpoints" "github.com/aws/aws-sdk-go/aws/request" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/s3" "github.com/ncw/swift" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/atexit" "github.com/rclone/rclone/lib/bucket" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/pool" "github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/rest" "github.com/rclone/rclone/lib/structs" "golang.org/x/sync/errgroup" ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "s3", Description: "Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph,
Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)", NewFs: NewFs, CommandHelp: commandHelp, Options: []fs.Option{{ Name: fs.ConfigProvider, Help: "Choose your S3 provider.", Examples: []fs.OptionExample{{ Value: "AWS", Help: "Amazon Web Services (AWS) S3", }, { Value: "Alibaba", Help: "Alibaba Cloud Object Storage System (OSS) formerly Aliyun", }, { Value: "Ceph", Help: "Ceph Object Storage", }, { Value: "DigitalOcean", Help: "Digital Ocean Spaces", }, { Value: "Dreamhost", Help: "Dreamhost DreamObjects", }, { Value: "IBMCOS", Help: "IBM COS S3", }, { Value: "Minio", Help: "Minio Object Storage", }, { Value: "Netease", Help: "Netease Object Storage (NOS)", }, { Value: "Scaleway", Help: "Scaleway Object Storage", }, { Value: "StackPath", Help: "StackPath Object Storage", }, { Value: "TencentCOS", Help: "Tencent Cloud Object Storage (COS)", }, { Value: "Wasabi", Help: "Wasabi Object Storage", }, { Value: "Other", Help: "Any other S3 compatible provider", }}, }, { Name: "env_auth", Help: "Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).\nOnly applies if access_key_id and secret_access_key are blank.", Default: false, Examples: []fs.OptionExample{{ Value: "false", Help: "Enter AWS credentials in the next step", }, { Value: "true", Help: "Get AWS credentials from the environment (env vars or IAM)", }}, }, { Name: "access_key_id", Help: "AWS Access Key ID.\nLeave blank for anonymous access or runtime credentials.", }, { Name: "secret_access_key", Help: "AWS Secret Access Key (password)\nLeave blank for anonymous access or runtime credentials.", }, { // References: // 1. https://docs.aws.amazon.com/general/latest/gr/rande.html // 2. https://docs.aws.amazon.com/general/latest/gr/s3.html Name: "region", Help: "Region to connect to.", Provider: "AWS", Examples: []fs.OptionExample{{ Value: "us-east-1", Help: "The default endpoint - a good choice if you are unsure.\nUS Region, Northern Virginia or Pacific Northwest.\nLeave location constraint empty.", }, { Value: "us-east-2", Help: "US East (Ohio) Region\nNeeds location constraint us-east-2.", }, { Value: "us-west-1", Help: "US West (Northern California) Region\nNeeds location constraint us-west-1.", }, { Value: "us-west-2", Help: "US West (Oregon) Region\nNeeds location constraint us-west-2.", }, { Value: "ca-central-1", Help: "Canada (Central) Region\nNeeds location constraint ca-central-1.", }, { Value: "eu-west-1", Help: "EU (Ireland) Region\nNeeds location constraint EU or eu-west-1.", }, { Value: "eu-west-2", Help: "EU (London) Region\nNeeds location constraint eu-west-2.", }, { Value: "eu-west-3", Help: "EU (Paris) Region\nNeeds location constraint eu-west-3.", }, { Value: "eu-north-1", Help: "EU (Stockholm) Region\nNeeds location constraint eu-north-1.", }, { Value: "eu-south-1", Help: "EU (Milan) Region\nNeeds location constraint eu-south-1.", }, { Value: "eu-central-1", Help: "EU (Frankfurt) Region\nNeeds location constraint eu-central-1.", }, { Value: "ap-southeast-1", Help: "Asia Pacific (Singapore) Region\nNeeds location constraint ap-southeast-1.", }, { Value: "ap-southeast-2", Help: "Asia Pacific (Sydney) Region\nNeeds location constraint ap-southeast-2.", }, { Value: "ap-northeast-1", Help: "Asia Pacific (Tokyo) Region\nNeeds location constraint ap-northeast-1.", }, { Value: "ap-northeast-2", Help: "Asia Pacific (Seoul)\nNeeds location constraint ap-northeast-2.", }, { Value: "ap-northeast-3", Help: "Asia Pacific (Osaka-Local)\nNeeds location constraint ap-northeast-3.", }, { Value:
"ap-south-1", Help: "Asia Pacific (Mumbai)\nNeeds location constraint ap-south-1.", }, { Value: "ap-east-1", Help: "Asia Pacific (Hong Kong) Region\nNeeds location constraint ap-east-1.", }, { Value: "sa-east-1", Help: "South America (Sao Paulo) Region\nNeeds location constraint sa-east-1.", }, { Value: "me-south-1", Help: "Middle East (Bahrain) Region\nNeeds location constraint me-south-1.", }, { Value: "af-south-1", Help: "Africa (Cape Town) Region\nNeeds location constraint af-south-1.", }, { Value: "cn-north-1", Help: "China (Beijing) Region\nNeeds location constraint cn-north-1.", }, { Value: "cn-northwest-1", Help: "China (Ningxia) Region\nNeeds location constraint cn-northwest-1.", }, { Value: "us-gov-east-1", Help: "AWS GovCloud (US-East) Region\nNeeds location constraint us-gov-east-1.", }, { Value: "us-gov-west-1", Help: "AWS GovCloud (US) Region\nNeeds location constraint us-gov-west-1.", }}, }, { Name: "region", Help: "Region to connect to.", Provider: "Scaleway", Examples: []fs.OptionExample{{ Value: "nl-ams", Help: "Amsterdam, The Netherlands", }, { Value: "fr-par", Help: "Paris, France", }}, }, { Name: "region", Help: "Region to connect to.\nLeave blank if you are using an S3 clone and you don't have a region.", Provider: "!AWS,Alibaba,Scaleway,TencentCOS", Examples: []fs.OptionExample{{ Value: "", Help: "Use this if unsure. Will use v4 signatures and an empty region.", }, { Value: "other-v2-signature", Help: "Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.", }}, }, { Name: "endpoint", Help: "Endpoint for S3 API.\nLeave blank if using AWS to use the default endpoint for the region.", Provider: "AWS", }, { Name: "endpoint", Help: "Endpoint for IBM COS S3 API.\nSpecify if using an IBM COS On Premise.", Provider: "IBMCOS", Examples: []fs.OptionExample{{ Value: "s3.us.cloud-object-storage.appdomain.cloud", Help: "US Cross Region Endpoint", }, { Value: "s3.dal.us.cloud-object-storage.appdomain.cloud", Help: "US Cross Region Dallas Endpoint", }, { Value: "s3.wdc.us.cloud-object-storage.appdomain.cloud", Help: "US Cross Region Washington DC Endpoint", }, { Value: "s3.sjc.us.cloud-object-storage.appdomain.cloud", Help: "US Cross Region San Jose Endpoint", }, { Value: "s3.private.us.cloud-object-storage.appdomain.cloud", Help: "US Cross Region Private Endpoint", }, { Value: "s3.private.dal.us.cloud-object-storage.appdomain.cloud", Help: "US Cross Region Dallas Private Endpoint", }, { Value: "s3.private.wdc.us.cloud-object-storage.appdomain.cloud", Help: "US Cross Region Washington DC Private Endpoint", }, { Value: "s3.private.sjc.us.cloud-object-storage.appdomain.cloud", Help: "US Cross Region San Jose Private Endpoint", }, { Value: "s3.us-east.cloud-object-storage.appdomain.cloud", Help: "US Region East Endpoint", }, { Value: "s3.private.us-east.cloud-object-storage.appdomain.cloud", Help: "US Region East Private Endpoint", }, { Value: "s3.us-south.cloud-object-storage.appdomain.cloud", Help: "US Region South Endpoint", }, { Value: "s3.private.us-south.cloud-object-storage.appdomain.cloud", Help: "US Region South Private Endpoint", }, { Value: "s3.eu.cloud-object-storage.appdomain.cloud", Help: "EU Cross Region Endpoint", }, { Value: "s3.fra.eu.cloud-object-storage.appdomain.cloud", Help: "EU Cross Region Frankfurt Endpoint", }, { Value: "s3.mil.eu.cloud-object-storage.appdomain.cloud", Help: "EU Cross Region Milan Endpoint", }, { Value: "s3.ams.eu.cloud-object-storage.appdomain.cloud", Help: "EU Cross Region Amsterdam Endpoint", }, { Value: 
"s3.private.eu.cloud-object-storage.appdomain.cloud", Help: "EU Cross Region Private Endpoint", }, { Value: "s3.private.fra.eu.cloud-object-storage.appdomain.cloud", Help: "EU Cross Region Frankfurt Private Endpoint", }, { Value: "s3.private.mil.eu.cloud-object-storage.appdomain.cloud", Help: "EU Cross Region Milan Private Endpoint", }, { Value: "s3.private.ams.eu.cloud-object-storage.appdomain.cloud", Help: "EU Cross Region Amsterdam Private Endpoint", }, { Value: "s3.eu-gb.cloud-object-storage.appdomain.cloud", Help: "Great Britain Endpoint", }, { Value: "s3.private.eu-gb.cloud-object-storage.appdomain.cloud", Help: "Great Britain Private Endpoint", }, { Value: "s3.eu-de.cloud-object-storage.appdomain.cloud", Help: "EU Region DE Endpoint", }, { Value: "s3.private.eu-de.cloud-object-storage.appdomain.cloud", Help: "EU Region DE Private Endpoint", }, { Value: "s3.ap.cloud-object-storage.appdomain.cloud", Help: "APAC Cross Regional Endpoint", }, { Value: "s3.tok.ap.cloud-object-storage.appdomain.cloud", Help: "APAC Cross Regional Tokyo Endpoint", }, { Value: "s3.hkg.ap.cloud-object-storage.appdomain.cloud", Help: "APAC Cross Regional HongKong Endpoint", }, { Value: "s3.seo.ap.cloud-object-storage.appdomain.cloud", Help: "APAC Cross Regional Seoul Endpoint", }, { Value: "s3.private.ap.cloud-object-storage.appdomain.cloud", Help: "APAC Cross Regional Private Endpoint", }, { Value: "s3.private.tok.ap.cloud-object-storage.appdomain.cloud", Help: "APAC Cross Regional Tokyo Private Endpoint", }, { Value: "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud", Help: "APAC Cross Regional HongKong Private Endpoint", }, { Value: "s3.private.seo.ap.cloud-object-storage.appdomain.cloud", Help: "APAC Cross Regional Seoul Private Endpoint", }, { Value: "s3.jp-tok.cloud-object-storage.appdomain.cloud", Help: "APAC Region Japan Endpoint", }, { Value: "s3.private.jp-tok.cloud-object-storage.appdomain.cloud", Help: "APAC Region Japan Private Endpoint", }, { Value: "s3.au-syd.cloud-object-storage.appdomain.cloud", Help: "APAC Region Australia Endpoint", }, { Value: "s3.private.au-syd.cloud-object-storage.appdomain.cloud", Help: "APAC Region Australia Private Endpoint", }, { Value: "s3.ams03.cloud-object-storage.appdomain.cloud", Help: "Amsterdam Single Site Endpoint", }, { Value: "s3.private.ams03.cloud-object-storage.appdomain.cloud", Help: "Amsterdam Single Site Private Endpoint", }, { Value: "s3.che01.cloud-object-storage.appdomain.cloud", Help: "Chennai Single Site Endpoint", }, { Value: "s3.private.che01.cloud-object-storage.appdomain.cloud", Help: "Chennai Single Site Private Endpoint", }, { Value: "s3.mel01.cloud-object-storage.appdomain.cloud", Help: "Melbourne Single Site Endpoint", }, { Value: "s3.private.mel01.cloud-object-storage.appdomain.cloud", Help: "Melbourne Single Site Private Endpoint", }, { Value: "s3.osl01.cloud-object-storage.appdomain.cloud", Help: "Oslo Single Site Endpoint", }, { Value: "s3.private.osl01.cloud-object-storage.appdomain.cloud", Help: "Oslo Single Site Private Endpoint", }, { Value: "s3.tor01.cloud-object-storage.appdomain.cloud", Help: "Toronto Single Site Endpoint", }, { Value: "s3.private.tor01.cloud-object-storage.appdomain.cloud", Help: "Toronto Single Site Private Endpoint", }, { Value: "s3.seo01.cloud-object-storage.appdomain.cloud", Help: "Seoul Single Site Endpoint", }, { Value: "s3.private.seo01.cloud-object-storage.appdomain.cloud", Help: "Seoul Single Site Private Endpoint", }, { Value: "s3.mon01.cloud-object-storage.appdomain.cloud", Help: "Montreal 
Single Site Endpoint", }, { Value: "s3.private.mon01.cloud-object-storage.appdomain.cloud", Help: "Montreal Single Site Private Endpoint", }, { Value: "s3.mex01.cloud-object-storage.appdomain.cloud", Help: "Mexico Single Site Endpoint", }, { Value: "s3.private.mex01.cloud-object-storage.appdomain.cloud", Help: "Mexico Single Site Private Endpoint", }, { Value: "s3.sjc04.cloud-object-storage.appdomain.cloud", Help: "San Jose Single Site Endpoint", }, { Value: "s3.private.sjc04.cloud-object-storage.appdomain.cloud", Help: "San Jose Single Site Private Endpoint", }, { Value: "s3.mil01.cloud-object-storage.appdomain.cloud", Help: "Milan Single Site Endpoint", }, { Value: "s3.private.mil01.cloud-object-storage.appdomain.cloud", Help: "Milan Single Site Private Endpoint", }, { Value: "s3.hkg02.cloud-object-storage.appdomain.cloud", Help: "Hong Kong Single Site Endpoint", }, { Value: "s3.private.hkg02.cloud-object-storage.appdomain.cloud", Help: "Hong Kong Single Site Private Endpoint", }, { Value: "s3.par01.cloud-object-storage.appdomain.cloud", Help: "Paris Single Site Endpoint", }, { Value: "s3.private.par01.cloud-object-storage.appdomain.cloud", Help: "Paris Single Site Private Endpoint", }, { Value: "s3.sng01.cloud-object-storage.appdomain.cloud", Help: "Singapore Single Site Endpoint", }, { Value: "s3.private.sng01.cloud-object-storage.appdomain.cloud", Help: "Singapore Single Site Private Endpoint", }}, }, { // oss endpoints: https://help.aliyun.com/document_detail/31837.html Name: "endpoint", Help: "Endpoint for OSS API.", Provider: "Alibaba", Examples: []fs.OptionExample{{ Value: "oss-cn-hangzhou.aliyuncs.com", Help: "East China 1 (Hangzhou)", }, { Value: "oss-cn-shanghai.aliyuncs.com", Help: "East China 2 (Shanghai)", }, { Value: "oss-cn-qingdao.aliyuncs.com", Help: "North China 1 (Qingdao)", }, { Value: "oss-cn-beijing.aliyuncs.com", Help: "North China 2 (Beijing)", }, { Value: "oss-cn-zhangjiakou.aliyuncs.com", Help: "North China 3 (Zhangjiakou)", }, { Value: "oss-cn-huhehaote.aliyuncs.com", Help: "North China 5 (Huhehaote)", }, { Value: "oss-cn-shenzhen.aliyuncs.com", Help: "South China 1 (Shenzhen)", }, { Value: "oss-cn-hongkong.aliyuncs.com", Help: "Hong Kong (Hong Kong)", }, { Value: "oss-us-west-1.aliyuncs.com", Help: "US West 1 (Silicon Valley)", }, { Value: "oss-us-east-1.aliyuncs.com", Help: "US East 1 (Virginia)", }, { Value: "oss-ap-southeast-1.aliyuncs.com", Help: "Southeast Asia Southeast 1 (Singapore)", }, { Value: "oss-ap-southeast-2.aliyuncs.com", Help: "Asia Pacific Southeast 2 (Sydney)", }, { Value: "oss-ap-southeast-3.aliyuncs.com", Help: "Southeast Asia Southeast 3 (Kuala Lumpur)", }, { Value: "oss-ap-southeast-5.aliyuncs.com", Help: "Asia Pacific Southeast 5 (Jakarta)", }, { Value: "oss-ap-northeast-1.aliyuncs.com", Help: "Asia Pacific Northeast 1 (Japan)", }, { Value: "oss-ap-south-1.aliyuncs.com", Help: "Asia Pacific South 1 (Mumbai)", }, { Value: "oss-eu-central-1.aliyuncs.com", Help: "Central Europe 1 (Frankfurt)", }, { Value: "oss-eu-west-1.aliyuncs.com", Help: "West Europe (London)", }, { Value: "oss-me-east-1.aliyuncs.com", Help: "Middle East 1 (Dubai)", }}, }, { Name: "endpoint", Help: "Endpoint for Scaleway Object Storage.", Provider: "Scaleway", Examples: []fs.OptionExample{{ Value: "s3.nl-ams.scw.cloud", Help: "Amsterdam Endpoint", }, { Value: "s3.fr-par.scw.cloud", Help: "Paris Endpoint", }}, }, { Name: "endpoint", Help: "Endpoint for StackPath Object Storage.", Provider: "StackPath", Examples: []fs.OptionExample{{ Value: 
"s3.us-east-2.stackpathstorage.com", Help: "US East Endpoint", }, { Value: "s3.us-west-1.stackpathstorage.com", Help: "US West Endpoint", }, { Value: "s3.eu-central-1.stackpathstorage.com", Help: "EU Endpoint", }}, }, { // cos endpoints: https://intl.cloud.tencent.com/document/product/436/6224 Name: "endpoint", Help: "Endpoint for Tencent COS API.", Provider: "TencentCOS", Examples: []fs.OptionExample{{ Value: "cos.ap-beijing.myqcloud.com", Help: "Beijing Region.", }, { Value: "cos.ap-nanjing.myqcloud.com", Help: "Nanjing Region.", }, { Value: "cos.ap-shanghai.myqcloud.com", Help: "Shanghai Region.", }, { Value: "cos.ap-guangzhou.myqcloud.com", Help: "Guangzhou Region.", }, { Value: "cos.ap-nanjing.myqcloud.com", Help: "Nanjing Region.", }, { Value: "cos.ap-chengdu.myqcloud.com", Help: "Chengdu Region.", }, { Value: "cos.ap-chongqing.myqcloud.com", Help: "Chongqing Region.", }, { Value: "cos.ap-hongkong.myqcloud.com", Help: "Hong Kong (China) Region.", }, { Value: "cos.ap-singapore.myqcloud.com", Help: "Singapore Region.", }, { Value: "cos.ap-mumbai.myqcloud.com", Help: "Mumbai Region.", }, { Value: "cos.ap-seoul.myqcloud.com", Help: "Seoul Region.", }, { Value: "cos.ap-bangkok.myqcloud.com", Help: "Bangkok Region.", }, { Value: "cos.ap-tokyo.myqcloud.com", Help: "Tokyo Region.", }, { Value: "cos.na-siliconvalley.myqcloud.com", Help: "Silicon Valley Region.", }, { Value: "cos.na-ashburn.myqcloud.com", Help: "Virginia Region.", }, { Value: "cos.na-toronto.myqcloud.com", Help: "Toronto Region.", }, { Value: "cos.eu-frankfurt.myqcloud.com", Help: "Frankfurt Region.", }, { Value: "cos.eu-moscow.myqcloud.com", Help: "Moscow Region.", }, { Value: "cos.accelerate.myqcloud.com", Help: "Use Tencent COS Accelerate Endpoint.", }}, }, { Name: "endpoint", Help: "Endpoint for S3 API.\nRequired when using an S3 clone.", Provider: "!AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath", Examples: []fs.OptionExample{{ Value: "objects-us-east-1.dream.io", Help: "Dream Objects endpoint", Provider: "Dreamhost", }, { Value: "nyc3.digitaloceanspaces.com", Help: "Digital Ocean Spaces New York 3", Provider: "DigitalOcean", }, { Value: "ams3.digitaloceanspaces.com", Help: "Digital Ocean Spaces Amsterdam 3", Provider: "DigitalOcean", }, { Value: "sgp1.digitaloceanspaces.com", Help: "Digital Ocean Spaces Singapore 1", Provider: "DigitalOcean", }, { Value: "s3.wasabisys.com", Help: "Wasabi US East endpoint", Provider: "Wasabi", }, { Value: "s3.us-west-1.wasabisys.com", Help: "Wasabi US West endpoint", Provider: "Wasabi", }, { Value: "s3.eu-central-1.wasabisys.com", Help: "Wasabi EU Central endpoint", Provider: "Wasabi", }}, }, { Name: "location_constraint", Help: "Location constraint - must be set to match the Region.\nUsed when creating buckets only.", Provider: "AWS", Examples: []fs.OptionExample{{ Value: "", Help: "Empty for US Region, Northern Virginia or Pacific Northwest.", }, { Value: "us-east-2", Help: "US East (Ohio) Region.", }, { Value: "us-west-1", Help: "US West (Northern California) Region.", }, { Value: "us-west-2", Help: "US West (Oregon) Region.", }, { Value: "ca-central-1", Help: "Canada (Central) Region.", }, { Value: "eu-west-1", Help: "EU (Ireland) Region.", }, { Value: "eu-west-2", Help: "EU (London) Region.", }, { Value: "eu-west-3", Help: "EU (Paris) Region.", }, { Value: "eu-north-1", Help: "EU (Stockholm) Region.", }, { Value: "eu-south-1", Help: "EU (Milan) Region.", }, { Value: "EU", Help: "EU Region.", }, { Value: "ap-southeast-1", Help: "Asia Pacific (Singapore) Region.", }, { Value: 
"ap-southeast-2", Help: "Asia Pacific (Sydney) Region.", }, { Value: "ap-northeast-1", Help: "Asia Pacific (Tokyo) Region.", }, { Value: "ap-northeast-2", Help: "Asia Pacific (Seoul) Region.", }, { Value: "ap-northeast-3", Help: "Asia Pacific (Osaka-Local) Region.", }, { Value: "ap-south-1", Help: "Asia Pacific (Mumbai) Region.", }, { Value: "ap-east-1", Help: "Asia Pacific (Hong Kong) Region.", }, { Value: "sa-east-1", Help: "South America (Sao Paulo) Region.", }, { Value: "me-south-1", Help: "Middle East (Bahrain) Region.", }, { Value: "af-south-1", Help: "Africa (Cape Town) Region.", }, { Value: "cn-north-1", Help: "China (Beijing) Region", }, { Value: "cn-northwest-1", Help: "China (Ningxia) Region.", }, { Value: "us-gov-east-1", Help: "AWS GovCloud (US-East) Region.", }, { Value: "us-gov-west-1", Help: "AWS GovCloud (US) Region.", }}, }, { Name: "location_constraint", Help: "Location constraint - must match endpoint when using IBM Cloud Public.\nFor on-prem COS, do not make a selection from this list, hit enter", Provider: "IBMCOS", Examples: []fs.OptionExample{{ Value: "us-standard", Help: "US Cross Region Standard", }, { Value: "us-vault", Help: "US Cross Region Vault", }, { Value: "us-cold", Help: "US Cross Region Cold", }, { Value: "us-flex", Help: "US Cross Region Flex", }, { Value: "us-east-standard", Help: "US East Region Standard", }, { Value: "us-east-vault", Help: "US East Region Vault", }, { Value: "us-east-cold", Help: "US East Region Cold", }, { Value: "us-east-flex", Help: "US East Region Flex", }, { Value: "us-south-standard", Help: "US South Region Standard", }, { Value: "us-south-vault", Help: "US South Region Vault", }, { Value: "us-south-cold", Help: "US South Region Cold", }, { Value: "us-south-flex", Help: "US South Region Flex", }, { Value: "eu-standard", Help: "EU Cross Region Standard", }, { Value: "eu-vault", Help: "EU Cross Region Vault", }, { Value: "eu-cold", Help: "EU Cross Region Cold", }, { Value: "eu-flex", Help: "EU Cross Region Flex", }, { Value: "eu-gb-standard", Help: "Great Britain Standard", }, { Value: "eu-gb-vault", Help: "Great Britain Vault", }, { Value: "eu-gb-cold", Help: "Great Britain Cold", }, { Value: "eu-gb-flex", Help: "Great Britain Flex", }, { Value: "ap-standard", Help: "APAC Standard", }, { Value: "ap-vault", Help: "APAC Vault", }, { Value: "ap-cold", Help: "APAC Cold", }, { Value: "ap-flex", Help: "APAC Flex", }, { Value: "mel01-standard", Help: "Melbourne Standard", }, { Value: "mel01-vault", Help: "Melbourne Vault", }, { Value: "mel01-cold", Help: "Melbourne Cold", }, { Value: "mel01-flex", Help: "Melbourne Flex", }, { Value: "tor01-standard", Help: "Toronto Standard", }, { Value: "tor01-vault", Help: "Toronto Vault", }, { Value: "tor01-cold", Help: "Toronto Cold", }, { Value: "tor01-flex", Help: "Toronto Flex", }}, }, { Name: "location_constraint", Help: "Location constraint - must be set to match the Region.\nLeave blank if not sure. Used when creating buckets only.", Provider: "!AWS,IBMCOS,Alibaba,Scaleway,StackPath,TencentCOS", }, { Name: "acl", Help: `Canned ACL used when creating buckets and storing or copying objects. This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.`, Examples: []fs.OptionExample{{ Value: "default", Help: "Owner gets Full_CONTROL. 
No one else has access rights (default).", Provider: "TencentCOS", }, { Value: "private", Help: "Owner gets FULL_CONTROL. No one else has access rights (default).", Provider: "!IBMCOS,TencentCOS", }, { Value: "public-read", Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ access.", Provider: "!IBMCOS", }, { Value: "public-read-write", Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.\nGranting this on a bucket is generally not recommended.", Provider: "!IBMCOS", }, { Value: "authenticated-read", Help: "Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.", Provider: "!IBMCOS", }, { Value: "bucket-owner-read", Help: "Object owner gets FULL_CONTROL. Bucket owner gets READ access.\nIf you specify this canned ACL when creating a bucket, Amazon S3 ignores it.", Provider: "!IBMCOS", }, { Value: "bucket-owner-full-control", Help: "Both the object owner and the bucket owner get FULL_CONTROL over the object.\nIf you specify this canned ACL when creating a bucket, Amazon S3 ignores it.", Provider: "!IBMCOS", }, { Value: "private", Help: "Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS", Provider: "IBMCOS", }, { Value: "public-read", Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS", Provider: "IBMCOS", }, { Value: "public-read-write", Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS", Provider: "IBMCOS", }, { Value: "authenticated-read", Help: "Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS", Provider: "IBMCOS", }}, }, { Name: "bucket_acl", Help: `Canned ACL used when creating buckets. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Note that this ACL is applied when only when creating buckets. If it isn't set then "acl" is used instead.`, Advanced: true, Examples: []fs.OptionExample{{ Value: "private", Help: "Owner gets FULL_CONTROL. No one else has access rights (default).", }, { Value: "public-read", Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ access.", }, { Value: "public-read-write", Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.\nGranting this on a bucket is generally not recommended.", }, { Value: "authenticated-read", Help: "Owner gets FULL_CONTROL. 
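// An illustrative pairing of the two ACL options above (values made up
// for the example): setting acl = "private" together with bucket_acl =
// "public-read" uploads private objects into buckets whose listings are
// world readable, since bucket_acl only overrides acl for bucket
// creation.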
{ Name: "server_side_encryption", Help: "The server-side encryption algorithm used when storing this object in S3.", Provider: "AWS,Ceph,Minio", Examples: []fs.OptionExample{{ Value: "", Help: "None", }, { Value: "AES256", Help: "AES256", }, { Value: "aws:kms", Help: "aws:kms", }}, }, { Name: "sse_customer_algorithm", Help: "If using SSE-C, the server-side encryption algorithm used when storing this object in S3.", Provider: "AWS,Ceph,Minio", Advanced: true, Examples: []fs.OptionExample{{ Value: "", Help: "None", }, { Value: "AES256", Help: "AES256", }}, }, { Name: "sse_kms_key_id", Help: "If using KMS ID you must provide the ARN of the Key.", Provider: "AWS,Ceph,Minio", Examples: []fs.OptionExample{{ Value: "", Help: "None", }, { Value: "arn:aws:kms:us-east-1:*", Help: "arn:aws:kms:*", }}, }, { Name: "sse_customer_key", Help: "If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.", Provider: "AWS,Ceph,Minio", Advanced: true, Examples: []fs.OptionExample{{ Value: "", Help: "None", }}, }, { Name: "sse_customer_key_md5", Help: "If using SSE-C you must provide the secret encryption key MD5 checksum.", Provider: "AWS,Ceph,Minio", Advanced: true, Examples: []fs.OptionExample{{ Value: "", Help: "None", }}, }, { Name: "storage_class", Help: "The storage class to use when storing new objects in S3.", Provider: "AWS", Examples: []fs.OptionExample{{ Value: "", Help: "Default", }, { Value: "STANDARD", Help: "Standard storage class", }, { Value: "REDUCED_REDUNDANCY", Help: "Reduced redundancy storage class", }, { Value: "STANDARD_IA", Help: "Standard Infrequent Access storage class", }, { Value: "ONEZONE_IA", Help: "One Zone Infrequent Access storage class", }, { Value: "GLACIER", Help: "Glacier storage class", }, { Value: "DEEP_ARCHIVE", Help: "Glacier Deep Archive storage class", }, { Value: "INTELLIGENT_TIERING", Help: "Intelligent-Tiering storage class", }}, }, { // Mapping from here: https://www.alibabacloud.com/help/doc-detail/64919.htm Name: "storage_class", Help: "The storage class to use when storing new objects in OSS.", Provider: "Alibaba", Examples: []fs.OptionExample{{ Value: "", Help: "Default", }, { Value: "STANDARD", Help: "Standard storage class", }, { Value: "GLACIER", Help: "Archive storage mode.", }, { Value: "STANDARD_IA", Help: "Infrequent access storage mode.", }}, }, { // Mapping from here: https://intl.cloud.tencent.com/document/product/436/30925 Name: "storage_class", Help: "The storage class to use when storing new objects in Tencent COS.", Provider: "TencentCOS", Examples: []fs.OptionExample{{ Value: "", Help: "Default", }, { Value: "STANDARD", Help: "Standard storage class", }, { Value: "ARCHIVE", Help: "Archive storage mode.", }, { Value: "STANDARD_IA", Help: "Infrequent access storage mode.", }}, }, { // Mapping from here: https://www.scaleway.com/en/docs/object-storage-glacier/#-Scaleway-Storage-Classes Name: "storage_class", Help: "The storage class to use when storing new objects in S3.", Provider: "Scaleway", Examples: []fs.OptionExample{{ Value: "", Help: "Default", }, { Value: "STANDARD", Help: "The Standard class for any upload; suitable for on-demand content like streaming or CDN.", }, { Value: "GLACIER", Help: "Archived storage; prices are lower, but it needs to be restored first to be accessed.", }}, }, { Name: "upload_cutoff", Help: `Cutoff for switching to chunked upload Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5GB.`, Default: defaultUploadCutoff, Advanced: true, }, { Name: "chunk_size", Help: `Chunk size to use for uploading. When uploading files larger than upload_cutoff or files with unknown size (eg from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size. Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer. If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers. Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit. Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5MB and there can be at most 10,000 chunks, this means that by default the maximum size of file you can stream upload is 48GB. If you wish to stream upload larger files then you will need to increase chunk_size.`, Default: minChunkSize, Advanced: true, },
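// A quick sanity check of the 48GB figure quoted in the chunk_size help
// above: with the default 5MB chunk and the 10,000 part limit the
// largest streamable object is 5 MiB * 10,000 = 50,000 MiB, i.e. just
// under 49 GiB, which the help text rounds down to 48GB.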
{ Name: "max_upload_parts", Help: `Maximum number of parts in a multipart upload. This option defines the maximum number of multipart chunks to use when doing a multipart upload. This can be useful if a service does not support the AWS S3 specification of 10,000 chunks. Rclone will automatically increase the chunk size when uploading a large file of a known size to stay below this number of chunks limit. `, Default: maxUploadParts, Advanced: true, }, { Name: "copy_cutoff", Help: `Cutoff for switching to multipart copy Any files larger than this that need to be server side copied will be copied in chunks of this size. The minimum is 0 and the maximum is 5GB.`, Default: fs.SizeSuffix(maxSizeForCopy), Advanced: true, }, { Name: "disable_checksum", Help: `Don't store MD5 checksum with object metadata Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.`, Default: false, Advanced: true, }, { Name: "shared_credentials_file", Help: `Path to the shared credentials file If env_auth = true then rclone can use a shared credentials file. If this variable is empty rclone will look for the "AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty it will default to the current user's home directory. Linux/OSX: "$HOME/.aws/credentials" Windows: "%USERPROFILE%\.aws\credentials" `, Advanced: true, }, { Name: "profile", Help: `Profile to use in the shared credentials file If env_auth = true then rclone can use a shared credentials file. This variable controls which profile is used in that file. If empty it will default to the environment variable "AWS_PROFILE" or "default" if that environment variable is also not set. `, Advanced: true, }, { Name: "session_token", Help: "An AWS session token", Advanced: true, }, { Name: "upload_concurrency", Help: `Concurrency for multipart uploads. This is the number of chunks of the same file that are uploaded concurrently. If you are uploading small numbers of large files over a high speed link and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.`, Default: 4, Advanced: true, }, { Name: "force_path_style", Help: `If true use path style access if false use virtual hosted style. If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See [the AWS S3 docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro) for more info. Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to false - rclone will do this automatically based on the provider setting.`, Default: true, Advanced: true, }, { Name: "v2_auth", Help: `If true use v2 authentication. If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication. Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.`, Default: false, Advanced: true, }, { Name: "use_accelerate_endpoint", Provider: "AWS", Help: `If true use the AWS S3 accelerated endpoint. See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)`, Default: false, Advanced: true, }, { Name: "leave_parts_on_error", Provider: "AWS", Help: `If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery. It should be set to true for resuming uploads across different sessions. WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up. `, Default: false, Advanced: true, }, { Name: "list_chunk", Help: `Size of listing chunk (response list for each ListObject S3 request). This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification. Most services truncate the response list to 1000 objects even if requested more than that. In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html). In Ceph, this can be increased with the "rgw list buckets max chunk" option. `, Default: 1000, Advanced: true, }, { Name: "no_check_bucket", Help: `If set don't attempt to check the bucket exists or create it This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already. `, Default: false, Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Any UTF-8 character is valid in a key, however it can't handle // invalid UTF-8 and / have a special meaning. // // The SDK can't seem to handle uploading files called '.' // // FIXME would be nice to add // - initial / encoding // - doubled / encoding // - trailing / encoding // so that AWS keys are always valid file names Default: encoder.EncodeInvalidUtf8 | encoder.EncodeSlash | encoder.EncodeDot, }, { Name: "memory_pool_flush_time", Default: memoryPoolFlushTime, Advanced: true, Help: `How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.`, }, { Name: "memory_pool_use_mmap", Default: memoryPoolUseMmap, Advanced: true, Help: `Whether to use mmap buffers in internal memory pool.`, }, }}) } // Constants const ( metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime metaMD5Hash = "Md5chksum" // the meta key to store md5hash in // The maximum size of object we can COPY - this should be 5GiB but is < 5GB for b2 compatibility // See https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/76 maxSizeForCopy = 4768 * 1024 * 1024 maxUploadParts = 10000 // maximum allowed number of parts in a multi-part upload minChunkSize = fs.SizeSuffix(1024 * 1024 * 5) defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024) maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024) minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep. memoryPoolFlushTime = fs.Duration(time.Minute) // flush the cached buffers after this long memoryPoolUseMmap = false maxExpireDuration = fs.Duration(7 * 24 * time.Hour) // max expiry is 1 week ) // Options defines the configuration for this backend type Options struct { Provider string `config:"provider"` EnvAuth bool `config:"env_auth"` AccessKeyID string `config:"access_key_id"` SecretAccessKey string `config:"secret_access_key"` Region string `config:"region"` Endpoint string `config:"endpoint"` LocationConstraint string `config:"location_constraint"` ACL string `config:"acl"` BucketACL string `config:"bucket_acl"` ServerSideEncryption string `config:"server_side_encryption"` SSEKMSKeyID string `config:"sse_kms_key_id"` SSECustomerAlgorithm string `config:"sse_customer_algorithm"` SSECustomerKey string `config:"sse_customer_key"` SSECustomerKeyMD5 string `config:"sse_customer_key_md5"` StorageClass string `config:"storage_class"` UploadCutoff fs.SizeSuffix `config:"upload_cutoff"` CopyCutoff fs.SizeSuffix `config:"copy_cutoff"` ChunkSize fs.SizeSuffix `config:"chunk_size"` MaxUploadParts int64 `config:"max_upload_parts"` DisableChecksum bool `config:"disable_checksum"` SharedCredentialsFile string `config:"shared_credentials_file"` Profile string `config:"profile"` SessionToken string `config:"session_token"` UploadConcurrency int `config:"upload_concurrency"` ForcePathStyle bool `config:"force_path_style"` V2Auth bool `config:"v2_auth"` UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"` LeavePartsOnError bool `config:"leave_parts_on_error"` ListChunk int64 `config:"list_chunk"` NoCheckBucket bool `config:"no_check_bucket"` Enc encoder.MultiEncoder `config:"encoding"` MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"` MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"` } // Fs represents a remote s3 server type Fs struct { name string // the name of the remote root string // root of the bucket - ignore all objects above this opt Options // parsed options features *fs.Features // optional features c *s3.S3 // the connection to the s3 server ses *session.Session // the s3 session rootBucket string // bucket part of root (if any) rootDirectory string // directory part of root (if any) cache *bucket.Cache // cache for bucket creation status pacer *fs.Pacer // To pace the API calls srv *http.Client // a plain http client pool *pool.Pool // memory pool } // Object describes a s3 object type Object struct { // Will definitely have everything but meta which may be nil // // List will read everything but meta & mimeType - to fill // that in you need to call readMetaData fs 
*Fs // what this object is part of remote string // The remote path etag string // md5sum of the object bytes int64 // size of the object lastModified time.Time // Last modified meta map[string]*string // The object metadata if known - may be nil mimeType string // MimeType of object - may be "" storageClass string // eg GLACIER } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { if f.rootBucket == "" { return "S3 root" } if f.rootDirectory == "" { return fmt.Sprintf("S3 bucket %s", f.rootBucket) } return fmt.Sprintf("S3 bucket %s path %s", f.rootBucket, f.rootDirectory) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // retryErrorCodes is a slice of error codes that we will retry // See: https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html var retryErrorCodes = []int{ 500, // Internal Server Error - "We encountered an internal error. Please try again." 503, // Service Unavailable/Slow Down - "Reduce your request rate" } // S3 is pretty resilient, and the built in retry handling is probably sufficient // as it should notice closed connections and timeouts which are the most likely // sort of failure modes func (f *Fs) shouldRetry(err error) (bool, error) { // If this is an awserr object, try and extract more useful information to determine if we should retry if awsError, ok := err.(awserr.Error); ok { // Simple case, check the original embedded error in case it's generically retryable if fserrors.ShouldRetry(awsError.OrigErr()) { return true, err } // Failing that, if it's a RequestFailure it's probably got an http status code we can check if reqErr, ok := err.(awserr.RequestFailure); ok { // 301 if wrong region for bucket - can only update if running from a bucket if f.rootBucket != "" { if reqErr.StatusCode() == http.StatusMovedPermanently { urfbErr := f.updateRegionForBucket(f.rootBucket) if urfbErr != nil { fs.Errorf(f, "Failed to update region for bucket: %v", urfbErr) return false, err } return true, err } } for _, e := range retryErrorCodes { if reqErr.StatusCode() == e { return true, err } } } } // Ok, not an awserr, check for generic failure conditions return fserrors.ShouldRetry(err), err } // parsePath parses a remote 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // split returns bucket and bucketPath from the rootRelativePath // relative to f.root func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) { bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath)) return f.opt.Enc.FromStandardName(bucketName), f.opt.Enc.FromStandardPath(bucketPath) } // split returns bucket and bucketPath from the object func (o *Object) split() (bucket, bucketPath string) { return o.fs.split(o.remote) }
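// To illustrate the split helpers just above (paths are made up for the
// example): with f.root set to "mybucket/photos", f.split("2020/cat.jpg")
// joins the two and splits on the first "/", yielding bucketName
// "mybucket" and bucketPath "photos/2020/cat.jpg", both passed through
// f.opt.Enc before being sent to the API.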
// s3Connection makes a connection to s3 func s3Connection(opt *Options) (*s3.S3, *session.Session, error) { // Make the auth v := credentials.Value{ AccessKeyID: opt.AccessKeyID, SecretAccessKey: opt.SecretAccessKey, SessionToken: opt.SessionToken, } lowTimeoutClient := &http.Client{Timeout: 1 * time.Second} // low timeout to ec2 metadata service def := defaults.Get() def.Config.HTTPClient = lowTimeoutClient // start a new AWS session awsSession, err := session.NewSession() if err != nil { return nil, nil, errors.Wrap(err, "NewSession") } // first provider to supply a credential set "wins" providers := []credentials.Provider{ // use static credentials if they're present (checked by provider) &credentials.StaticProvider{Value: v}, // * Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY // * Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY &credentials.EnvProvider{}, // A SharedCredentialsProvider retrieves credentials // from the current user's home directory. It checks // AWS_SHARED_CREDENTIALS_FILE and AWS_PROFILE too. &credentials.SharedCredentialsProvider{ Filename: opt.SharedCredentialsFile, // If empty will look for "AWS_SHARED_CREDENTIALS_FILE" env variable. Profile: opt.Profile, // If empty will look for "AWS_PROFILE" env var or "default" if not set. }, // Pick up IAM role if we're in an ECS task defaults.RemoteCredProvider(*def.Config, def.Handlers), // Pick up IAM role in case we're on EC2 &ec2rolecreds.EC2RoleProvider{ Client: ec2metadata.New(awsSession, &aws.Config{ HTTPClient: lowTimeoutClient, }), ExpiryWindow: 3 * time.Minute, }, // Pick up IAM role if we are in EKS &stscreds.WebIdentityRoleProvider{ ExpiryWindow: 3 * time.Minute, }, } cred := credentials.NewChainCredentials(providers) switch { case opt.EnvAuth: // No need for empty checks if "env_auth" is true case v.AccessKeyID == "" && v.SecretAccessKey == "": // if no access key/secret and iam is explicitly disabled then fall back to anon interaction cred = credentials.AnonymousCredentials case v.AccessKeyID == "": return nil, nil, errors.New("access_key_id not found") case v.SecretAccessKey == "": return nil, nil, errors.New("secret_access_key not found") } if opt.Region == "" { opt.Region = "us-east-1" } if opt.Provider == "AWS" || opt.Provider == "Alibaba" || opt.Provider == "Netease" || opt.Provider == "Scaleway" || opt.Provider == "TencentCOS" || opt.UseAccelerateEndpoint { opt.ForcePathStyle = false } if opt.Provider == "Scaleway" && opt.MaxUploadParts > 1000 { opt.MaxUploadParts = 1000 } awsConfig := aws.NewConfig(). WithMaxRetries(0). // Rely on rclone's retry logic WithCredentials(cred). WithHTTPClient(fshttp.NewClient(fs.Config)). WithS3ForcePathStyle(opt.ForcePathStyle). WithS3UseAccelerate(opt.UseAccelerateEndpoint). WithS3UsEast1RegionalEndpoint(endpoints.RegionalS3UsEast1Endpoint) if opt.Region != "" { awsConfig.WithRegion(opt.Region) } if opt.Endpoint != "" { awsConfig.WithEndpoint(opt.Endpoint) } // awsConfig.WithLogLevel(aws.LogDebugWithSigning) awsSessionOpts := session.Options{ Config: *awsConfig, } if opt.EnvAuth && opt.AccessKeyID == "" && opt.SecretAccessKey == "" { // Enable loading config options from ~/.aws/config (selected by AWS_PROFILE env) awsSessionOpts.SharedConfigState = session.SharedConfigEnable // The session constructor (aws/session/mergeConfigSrcs) will only use the user's preferred credential source // (from the shared config file) if the passed-in Options.Config.Credentials is nil.
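// As an illustration (profile contents made up for the example), a
// ~/.aws/config entry such as:
//
//	[profile dev]
//	role_arn = arn:aws:iam::123456789012:role/dev
//	source_profile = default
//
// is only honoured by the SDK when Credentials is nil, which is why it
// is cleared below.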
awsSessionOpts.Config.Credentials = nil } ses, err := session.NewSessionWithOptions(awsSessionOpts) if err != nil { return nil, nil, err } c := s3.New(ses) if opt.V2Auth || opt.Region == "other-v2-signature" { fs.Debugf(nil, "Using v2 auth") signer := func(req *request.Request) { // Ignore AnonymousCredentials object if req.Config.Credentials == credentials.AnonymousCredentials { return } sign(v.AccessKeyID, v.SecretAccessKey, req.HTTPRequest) } c.Handlers.Sign.Clear() c.Handlers.Sign.PushBackNamed(corehandlers.BuildContentLengthHandler) c.Handlers.Sign.PushBack(signer) } return c, ses, nil } func checkUploadChunkSize(cs fs.SizeSuffix) error { if cs < minChunkSize { return errors.Errorf("%s is less than %s", cs, minChunkSize) } return nil } func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadChunkSize(cs) if err == nil { old, f.opt.ChunkSize = f.opt.ChunkSize, cs } return } func checkUploadCutoff(cs fs.SizeSuffix) error { if cs > maxUploadCutoff { return errors.Errorf("%s is greater than %s", cs, maxUploadCutoff) } return nil } func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadCutoff(cs) if err == nil { old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs } return } // setRoot changes the root of the Fs func (f *Fs) setRoot(root string) { f.root = parsePath(root) f.rootBucket, f.rootDirectory = bucket.Split(f.root) } // NewFs constructs an Fs from the path, bucket:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } err = checkUploadChunkSize(opt.ChunkSize) if err != nil { return nil, errors.Wrap(err, "s3: chunk size") } err = checkUploadCutoff(opt.UploadCutoff) if err != nil { return nil, errors.Wrap(err, "s3: upload cutoff") } if opt.ACL == "" { opt.ACL = "private" } if opt.BucketACL == "" { opt.BucketACL = opt.ACL } c, ses, err := s3Connection(opt) if err != nil { return nil, err } f := &Fs{ name: name, opt: *opt, c: c, ses: ses, pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))), cache: bucket.NewCache(), srv: fshttp.NewClient(fs.Config), pool: pool.New( time.Duration(opt.MemoryPoolFlushTime), int(opt.ChunkSize), opt.UploadConcurrency*fs.Config.Transfers, opt.MemoryPoolUseMmap, ), } f.setRoot(root) f.features = (&fs.Features{ ReadMimeType: true, WriteMimeType: true, BucketBased: true, BucketBasedRootOK: true, SetTier: true, GetTier: true, SlowModTime: true, }).Fill(f) if f.rootBucket != "" && f.rootDirectory != "" { // Check to see if the object exists encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory) req := s3.HeadObjectInput{ Bucket: &f.rootBucket, Key: &encodedDirectory, } err = f.pacer.Call(func() (bool, error) { _, err = f.c.HeadObject(&req) return f.shouldRetry(err) }) if err == nil { newRoot := path.Dir(f.root) if newRoot == "." { newRoot = "" } f.setRoot(newRoot) // return an error with an fs which points to the parent return f, fs.ErrorIsFile } } // f.listMultipartUploads() return f, nil } // Return an Object from a path // //If it can't be found it returns the error ErrorObjectNotFound. 
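// Note on cost: when newObjectWithInfo is called from a listing, info is
// the *s3.Object already fetched, so no extra transactions are used;
// with info == nil (e.g. via NewObject) it falls through to
// readMetaData, which costs a HEAD request per object.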
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *s3.Object) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } if info != nil { // Set info but not meta if info.LastModified == nil { fs.Logf(o, "Failed to read last modified") o.lastModified = time.Now() } else { o.lastModified = *info.LastModified } o.etag = aws.StringValue(info.ETag) o.bytes = aws.Int64Value(info.Size) o.storageClass = aws.StringValue(info.StorageClass) } else { err := o.readMetaData(ctx) // reads info and meta, returning an error if err != nil { return nil, err } } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // Gets the bucket location func (f *Fs) getBucketLocation(bucket string) (string, error) { req := s3.GetBucketLocationInput{ Bucket: &bucket, } var resp *s3.GetBucketLocationOutput var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.c.GetBucketLocation(&req) return f.shouldRetry(err) }) if err != nil { return "", err } return s3.NormalizeBucketLocation(aws.StringValue(resp.LocationConstraint)), nil } // Updates the region for the bucket by reading the region from the // bucket then updating the session. func (f *Fs) updateRegionForBucket(bucket string) error { region, err := f.getBucketLocation(bucket) if err != nil { return errors.Wrap(err, "reading bucket location failed") } if aws.StringValue(f.c.Config.Endpoint) != "" { return errors.Errorf("can't set region to %q as endpoint is set", region) } if aws.StringValue(f.c.Config.Region) == region { return errors.Errorf("region is already %q - not updating", region) } // Make a new session with the new region oldRegion := f.opt.Region f.opt.Region = region c, ses, err := s3Connection(&f.opt) if err != nil { return errors.Wrap(err, "creating new session failed") } f.c = c f.ses = ses fs.Logf(f, "Switched region to %q from %q", region, oldRegion) return nil } // listFn is called from list to handle an object. type listFn func(remote string, object *s3.Object, isDirectory bool) error // list lists the objects into the function supplied from // the bucket and directory supplied. The remote has prefix // removed from it and if addBucket is set then it adds the // bucket to the start. // // Set recurse to read sub directories func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, fn listFn) error { if prefix != "" { prefix += "/" } if directory != "" { directory += "/" } delimiter := "" if !recurse { delimiter = "/" } var marker *string // URL encode the listings so we can use control characters in object names // See: https://github.com/aws/aws-sdk-go/issues/1914 // // However this doesn't work perfectly under Ceph (and hence DigitalOcean/Dreamhost) because // it doesn't encode CommonPrefixes. // See: https://tracker.ceph.com/issues/41870 // // This also does not work under IBM COS: see https://github.com/rclone/rclone/issues/3345 // though maybe it does on some versions. // // This does work with minio but was only added relatively recently // https://github.com/minio/minio/pull/7265 // // So we enable only on providers we know support it properly, all others can retry when a // XML Syntax error is detected.
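// As a concrete example of why the encoding matters: an object key
// containing a control character such as 0x07 (BEL) cannot be
// represented in an XML 1.0 document at all, so without
// EncodingType=url the listing response for that bucket would be
// unparseable by the SDK.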
var urlEncodeListings = (f.opt.Provider == "AWS" || f.opt.Provider == "Wasabi" || f.opt.Provider == "Alibaba" || f.opt.Provider == "Minio" || f.opt.Provider == "TencentCOS") for { // FIXME need to implement ALL loop req := s3.ListObjectsInput{ Bucket: &bucket, Delimiter: &delimiter, Prefix: &directory, MaxKeys: &f.opt.ListChunk, Marker: marker, } if urlEncodeListings { req.EncodingType = aws.String(s3.EncodingTypeUrl) } var resp *s3.ListObjectsOutput var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.c.ListObjectsWithContext(ctx, &req) if err != nil && !urlEncodeListings { if awsErr, ok := err.(awserr.RequestFailure); ok { if origErr := awsErr.OrigErr(); origErr != nil { if _, ok := origErr.(*xml.SyntaxError); ok { // Retry the listing with URL encoding as there were characters that XML can't encode urlEncodeListings = true req.EncodingType = aws.String(s3.EncodingTypeUrl) fs.Debugf(f, "Retrying listing because of characters which can't be XML encoded") return true, err } } } } return f.shouldRetry(err) }) if err != nil { if awsErr, ok := err.(awserr.RequestFailure); ok { if awsErr.StatusCode() == http.StatusNotFound { err = fs.ErrorDirNotFound } } if f.rootBucket == "" { // if listing from the root ignore wrong region requests returning // empty directory if reqErr, ok := err.(awserr.RequestFailure); ok { // 301 if wrong region for bucket if reqErr.StatusCode() == http.StatusMovedPermanently { fs.Errorf(f, "Can't change region for bucket %q with no bucket specified", bucket) return nil } } } return err } if !recurse { for _, commonPrefix := range resp.CommonPrefixes { if commonPrefix.Prefix == nil { fs.Logf(f, "Nil common prefix received") continue } remote := *commonPrefix.Prefix if urlEncodeListings { remote, err = url.QueryUnescape(remote) if err != nil { fs.Logf(f, "failed to URL decode %q in listing common prefix: %v", *commonPrefix.Prefix, err) continue } } remote = f.opt.Enc.ToStandardPath(remote) if !strings.HasPrefix(remote, prefix) { fs.Logf(f, "Odd name received %q", remote) continue } remote = remote[len(prefix):] if addBucket { remote = path.Join(bucket, remote) } if strings.HasSuffix(remote, "/") { remote = remote[:len(remote)-1] } err = fn(remote, &s3.Object{Key: &remote}, true) if err != nil { return err } } } for _, object := range resp.Contents { remote := aws.StringValue(object.Key) if urlEncodeListings { remote, err = url.QueryUnescape(remote) if err != nil { fs.Logf(f, "failed to URL decode %q in listing: %v", aws.StringValue(object.Key), err) continue } } remote = f.opt.Enc.ToStandardPath(remote) if !strings.HasPrefix(remote, prefix) { fs.Logf(f, "Odd name received %q", remote) continue } remote = remote[len(prefix):] isDirectory := remote == "" || strings.HasSuffix(remote, "/") if addBucket { remote = path.Join(bucket, remote) } // is this a directory marker? 
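// (A "directory marker" here means a zero length object whose key ends
// in "/", which some tools create to make an empty folder visible; it
// is skipped so the folder doesn't also appear as a zero sized file.)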
if isDirectory && object.Size != nil && *object.Size == 0 { continue // skip directory marker } err = fn(remote, object, false) if err != nil { return err } } if !aws.BoolValue(resp.IsTruncated) { break } // Use NextMarker if set, otherwise use last Key if resp.NextMarker == nil || *resp.NextMarker == "" { if len(resp.Contents) == 0 { return errors.New("s3 protocol error: received listing with IsTruncated set, no NextMarker and no Contents") } marker = resp.Contents[len(resp.Contents)-1].Key } else { marker = resp.NextMarker } if urlEncodeListings { *marker, err = url.QueryUnescape(*marker) if err != nil { return errors.Wrapf(err, "failed to URL decode NextMarker %q", *marker) } } } return nil } // Convert a list item into a DirEntry func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *s3.Object, isDirectory bool) (fs.DirEntry, error) { if isDirectory { size := int64(0) if object.Size != nil { size = *object.Size } d := fs.NewDir(remote, time.Time{}).SetSize(size) return d, nil } o, err := f.newObjectWithInfo(ctx, remote, object) if err != nil { return nil, err } return o, nil } // listDir lists files and directories to out func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) { // List the objects and directories err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *s3.Object, isDirectory bool) error { entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory) if err != nil { return err } if entry != nil { entries = append(entries, entry) } return nil }) if err != nil { return nil, err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) return entries, nil } // listBuckets lists the buckets to out func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) { req := s3.ListBucketsInput{} var resp *s3.ListBucketsOutput err = f.pacer.Call(func() (bool, error) { resp, err = f.c.ListBucketsWithContext(ctx, &req) return f.shouldRetry(err) }) if err != nil { return nil, err } for _, bucket := range resp.Buckets { bucketName := f.opt.Enc.ToStandardName(aws.StringValue(bucket.Name)) f.cache.MarkOK(bucketName) d := fs.NewDir(bucketName, aws.TimeValue(bucket.CreationDate)) entries = append(entries, d) } return entries, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { bucket, directory := f.split(dir) if bucket == "" { if directory != "" { return nil, fs.ErrorListBucketRequired } return f.listBuckets(ctx) } return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "") } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively than doing a directory traversal. 
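// A minimal caller-side sketch (hypothetical usage) showing the shape
// of the callback, which may be invoked many times with batches:
//
//	err := f.ListR(ctx, "", func(entries fs.DirEntries) error {
//		for _, entry := range entries {
//			fmt.Println(entry.Remote())
//		}
//		return nil
//	})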
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { bucket, directory := f.split(dir) list := walk.NewListRHelper(callback) listR := func(bucket, directory, prefix string, addBucket bool) error { return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *s3.Object, isDirectory bool) error { entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory) if err != nil { return err } return list.Add(entry) }) } if bucket == "" { entries, err := f.listBuckets(ctx) if err != nil { return err } for _, entry := range entries { err = list.Add(entry) if err != nil { return err } bucket := entry.Remote() err = listR(bucket, "", f.rootDirectory, true) if err != nil { return err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) } } else { err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "") if err != nil { return err } // bucket must be present if listing succeeded f.cache.MarkOK(bucket) } return list.Flush() } // Put the Object into the bucket func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { // Temporary Object under construction fs := &Object{ fs: f, remote: src.Remote(), } return fs, fs.Update(ctx, in, src, options...) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) } // Check if the bucket exists // // NB this can return incorrect results if called immediately after bucket deletion func (f *Fs) bucketExists(ctx context.Context, bucket string) (bool, error) { req := s3.HeadBucketInput{ Bucket: &bucket, } err := f.pacer.Call(func() (bool, error) { _, err := f.c.HeadBucketWithContext(ctx, &req) return f.shouldRetry(err) }) if err == nil { return true, nil } if err, ok := err.(awserr.RequestFailure); ok { if err.StatusCode() == http.StatusNotFound { return false, nil } } return false, err } // Mkdir creates the bucket if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { bucket, _ := f.split(dir) return f.makeBucket(ctx, bucket) } // makeBucket creates the bucket if it doesn't exist func (f *Fs) makeBucket(ctx context.Context, bucket string) error { if f.opt.NoCheckBucket { return nil } return f.cache.Create(bucket, func() error { req := s3.CreateBucketInput{ Bucket: &bucket, ACL: &f.opt.BucketACL, } if f.opt.LocationConstraint != "" { req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{ LocationConstraint: &f.opt.LocationConstraint, } } err := f.pacer.Call(func() (bool, error) { _, err := f.c.CreateBucketWithContext(ctx, &req) return f.shouldRetry(err) }) if err == nil { fs.Infof(f, "Bucket %q created with ACL %q", bucket, f.opt.BucketACL) } if awsErr, ok := err.(awserr.Error); ok { if code := awsErr.Code(); code == "BucketAlreadyOwnedByYou" || code == "BucketAlreadyExists" { err = nil } } return err }, func() (bool, error) { return f.bucketExists(ctx, bucket) }) } // Rmdir deletes the bucket if the fs is at the root // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { bucket, directory := f.split(dir) if bucket == "" || directory != "" { return nil } return f.cache.Remove(bucket, func() error { req := s3.DeleteBucketInput{ Bucket: &bucket, } err := f.pacer.Call(func() (bool, error) { _, err := f.c.DeleteBucketWithContext(ctx, &req) return 
f.shouldRetry(err) }) if err == nil { fs.Infof(f, "Bucket %q deleted", bucket) } return err }) } // Precision of the remote func (f *Fs) Precision() time.Duration { return time.Nanosecond } // pathEscape escapes s as for a URL path. It uses rest.URLPathEscape // but also escapes '+' for S3 and Digital Ocean spaces compatibility func pathEscape(s string) string { return strings.Replace(rest.URLPathEscape(s), "+", "%2B", -1) } // copy does a server side copy // // It adds the boiler plate to the req passed in and calls the s3 // method func (f *Fs) copy(ctx context.Context, req *s3.CopyObjectInput, dstBucket, dstPath, srcBucket, srcPath string, src *Object) error { req.Bucket = &dstBucket req.ACL = &f.opt.ACL req.Key = &dstPath source := pathEscape(path.Join(srcBucket, srcPath)) req.CopySource = &source if f.opt.ServerSideEncryption != "" { req.ServerSideEncryption = &f.opt.ServerSideEncryption } if f.opt.SSEKMSKeyID != "" { req.SSEKMSKeyId = &f.opt.SSEKMSKeyID } if req.StorageClass == nil && f.opt.StorageClass != "" { req.StorageClass = &f.opt.StorageClass } if src.bytes >= int64(f.opt.CopyCutoff) { return f.copyMultipart(ctx, req, dstBucket, dstPath, srcBucket, srcPath, src) } return f.pacer.Call(func() (bool, error) { _, err := f.c.CopyObjectWithContext(ctx, req) return f.shouldRetry(err) }) } func calculateRange(partSize, partIndex, numParts, totalSize int64) string { start := partIndex * partSize var ends string if partIndex == numParts-1 { if totalSize >= 1 { ends = strconv.FormatInt(totalSize-1, 10) } } else { ends = strconv.FormatInt(start+partSize-1, 10) } return fmt.Sprintf("bytes=%v-%v", start, ends) } func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dstBucket, dstPath, srcBucket, srcPath string, src *Object) (err error) { info, err := src.headObject(ctx) if err != nil { return err } req := &s3.CreateMultipartUploadInput{} // Fill in the request from the head info structs.SetFrom(req, info) // If copy metadata was set then set the Metadata to that read // from the head request if aws.StringValue(copyReq.MetadataDirective) == s3.MetadataDirectiveCopy { copyReq.Metadata = info.Metadata } // Overwrite any from the copyReq structs.SetFrom(req, copyReq) req.Bucket = &dstBucket req.Key = &dstPath var cout *s3.CreateMultipartUploadOutput if err := f.pacer.Call(func() (bool, error) { var err error cout, err = f.c.CreateMultipartUploadWithContext(ctx, req) return f.shouldRetry(err) }); err != nil { return err } uid := cout.UploadId defer atexit.OnError(&err, func() { // Try to abort the upload, but ignore the error. 
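// (Sketch of the intent here: atexit.OnError runs the abort callback
// below when copyMultipart returns a non-nil err, and registers it to
// run on an unexpected exit too, so a failed server side copy does not
// leave a chargeable pending multipart upload behind.)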
fs.Debugf(src, "Cancelling multipart copy") _ = f.pacer.Call(func() (bool, error) { _, err := f.c.AbortMultipartUploadWithContext(context.Background(), &s3.AbortMultipartUploadInput{ Bucket: &dstBucket, Key: &dstPath, UploadId: uid, RequestPayer: req.RequestPayer, }) return f.shouldRetry(err) }) })() srcSize := src.bytes partSize := int64(f.opt.CopyCutoff) numParts := (srcSize-1)/partSize + 1 fs.Debugf(src, "Starting multipart copy with %d parts", numParts) var parts []*s3.CompletedPart for partNum := int64(1); partNum <= numParts; partNum++ { if err := f.pacer.Call(func() (bool, error) { partNum := partNum uploadPartReq := &s3.UploadPartCopyInput{} structs.SetFrom(uploadPartReq, copyReq) uploadPartReq.Bucket = &dstBucket uploadPartReq.Key = &dstPath uploadPartReq.PartNumber = &partNum uploadPartReq.UploadId = uid uploadPartReq.CopySourceRange = aws.String(calculateRange(partSize, partNum-1, numParts, srcSize)) uout, err := f.c.UploadPartCopyWithContext(ctx, uploadPartReq) if err != nil { return f.shouldRetry(err) } parts = append(parts, &s3.CompletedPart{ PartNumber: &partNum, ETag: uout.CopyPartResult.ETag, }) return false, nil }); err != nil { return err } } return f.pacer.Call(func() (bool, error) { _, err := f.c.CompleteMultipartUploadWithContext(ctx, &s3.CompleteMultipartUploadInput{ Bucket: &dstBucket, Key: &dstPath, MultipartUpload: &s3.CompletedMultipartUpload{ Parts: parts, }, RequestPayer: req.RequestPayer, UploadId: uid, }) return f.shouldRetry(err) }) } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { dstBucket, dstPath := f.split(remote) err := f.makeBucket(ctx, dstBucket) if err != nil { return nil, err } srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } srcBucket, srcPath := srcObj.split() req := s3.CopyObjectInput{ MetadataDirective: aws.String(s3.MetadataDirectiveCopy), } err = f.copy(ctx, &req, dstBucket, dstPath, srcBucket, srcPath, srcObj) if err != nil { return nil, err } return f.NewObject(ctx, remote) } // Hashes returns the supported hash sets. 
func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } func (f *Fs) getMemoryPool(size int64) *pool.Pool { if size == int64(f.opt.ChunkSize) { return f.pool } return pool.New( time.Duration(f.opt.MemoryPoolFlushTime), int(size), f.opt.UploadConcurrency*fs.Config.Transfers, f.opt.MemoryPoolUseMmap, ) } // PublicLink generates a public link to the remote path (usually readable by anyone) func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) { if strings.HasSuffix(remote, "/") { return "", fs.ErrorCantShareDirectories } if _, err := f.NewObject(ctx, remote); err != nil { return "", err } if expire > maxExpireDuration { fs.Logf(f, "Public Link: Reducing expiry to %v as %v is greater than the max time allowed", maxExpireDuration, expire) expire = maxExpireDuration } bucket, bucketPath := f.split(remote) httpReq, _ := f.c.GetObjectRequest(&s3.GetObjectInput{ Bucket: &bucket, Key: &bucketPath, }) return httpReq.Presign(time.Duration(expire)) } var commandHelp = []fs.CommandHelp{{ Name: "restore", Short: "Restore objects from GLACIER to normal storage", Long: `This command can be used to restore one or more objects from GLACIER to normal storage. Usage Examples: rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS] rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS] rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS] This command also obeys the filters. Test first with the -i/--interactive or --dry-run flags rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard All the objects shown will be marked for restore, then rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message if not. [ { "Status": "OK", "Remote": "test.txt" }, { "Status": "OK", "Remote": "test/file4.txt" } ] `, Opts: map[string]string{ "priority": "Priority of restore: Standard|Expedited|Bulk", "lifetime": "Lifetime of the active copy in days", "description": "The optional description for the job.", }, }, { Name: "list-multipart-uploads", Short: "List the unfinished multipart uploads", Long: `This command lists the unfinished multipart uploads in JSON format. rclone backend list-multipart s3:bucket/path/to/object It returns a dictionary of buckets with values as lists of unfinished multipart uploads. You can call it with no bucket in which case it lists all buckets, with a bucket, or with a bucket and path. { "rclone": [ { "Initiated": "2020-06-26T14:20:36Z", "Initiator": { "DisplayName": "XXX", "ID": "arn:aws:iam::XXX:user/XXX" }, "Key": "KEY", "Owner": { "DisplayName": null, "ID": "XXX" }, "StorageClass": "STANDARD", "UploadId": "XXX" } ], "rclone-1000files": [], "rclone-dst": [] } `, }, { Name: "cleanup", Short: "Remove unfinished multipart uploads.", Long: `This command removes unfinished multipart uploads of age greater than max-age, which defaults to 24 hours. Note that you can use -i/--dry-run with this command to see what it would do. rclone backend cleanup s3:bucket/path/to/object rclone backend cleanup -o max-age=7w s3:bucket/path/to/object Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
`, Opts: map[string]string{ "max-age": "Max age of upload to delete", }, }} // Command the backend to run a named command // // The command run is name // args may be used to read arguments from // opts may be used to read optional arguments from // // The result should be capable of being JSON encoded // If it is a string or a []string it will be shown to the user // otherwise it will be JSON encoded and shown to the user like that func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) { switch name { case "restore": req := s3.RestoreObjectInput{ //Bucket: &f.rootBucket, //Key: &encodedDirectory, RestoreRequest: &s3.RestoreRequest{}, } if lifetime := opt["lifetime"]; lifetime != "" { ilifetime, err := strconv.ParseInt(lifetime, 10, 64) if err != nil { return nil, errors.Wrap(err, "bad lifetime") } req.RestoreRequest.Days = &ilifetime } if priority := opt["priority"]; priority != "" { req.RestoreRequest.GlacierJobParameters = &s3.GlacierJobParameters{ Tier: &priority, } } if description := opt["description"]; description != "" { req.RestoreRequest.Description = &description } type status struct { Status string Remote string } var ( outMu sync.Mutex out = []status{} ) err = operations.ListFn(ctx, f, func(obj fs.Object) { // Remember this is run --checkers times concurrently o, ok := obj.(*Object) st := status{Status: "OK", Remote: obj.Remote()} defer func() { outMu.Lock() out = append(out, st) outMu.Unlock() }() if operations.SkipDestructive(ctx, obj, "restore") { return } if !ok { st.Status = "Not an S3 object" return } bucket, bucketPath := o.split() reqCopy := req reqCopy.Bucket = &bucket reqCopy.Key = &bucketPath err = f.pacer.Call(func() (bool, error) { _, err = f.c.RestoreObject(&reqCopy) return f.shouldRetry(err) }) if err != nil { st.Status = err.Error() } }) if err != nil { return out, err } return out, nil case "list-multipart-uploads": return f.listMultipartUploadsAll(ctx) case "cleanup": maxAge := 24 * time.Hour if opt["max-age"] != "" { maxAge, err = fs.ParseDuration(opt["max-age"]) if err != nil { return nil, errors.Wrap(err, "bad max-age") } } return nil, f.cleanUp(ctx, maxAge) default: return nil, fs.ErrorCommandNotFound } } // listMultipartUploads lists all outstanding multipart uploads for (bucket, key) // // Note that rather lazily we treat key as a prefix so it matches // directories and objects. This could surprise the user if they ask // for "dir" and it returns "dirKey" func (f *Fs) listMultipartUploads(ctx context.Context, bucket, key string) (uploads []*s3.MultipartUpload, err error) { var ( keyMarker *string uploadIDMarker *string ) uploads = []*s3.MultipartUpload{} for { req := s3.ListMultipartUploadsInput{ Bucket: &bucket, MaxUploads: &f.opt.ListChunk, KeyMarker: keyMarker, UploadIdMarker: uploadIDMarker, Prefix: &key, } var resp *s3.ListMultipartUploadsOutput err = f.pacer.Call(func() (bool, error) { resp, err = f.c.ListMultipartUploads(&req) return f.shouldRetry(err) }) if err != nil { return nil, errors.Wrapf(err, "list multipart uploads bucket %q key %q", bucket, key) } uploads = append(uploads, resp.Uploads...)
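// Pagination: when the response is truncated the server hands back
// NextKeyMarker/NextUploadIdMarker, which are fed into the next
// ListMultipartUploads request below until IsTruncated is false.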
if !aws.BoolValue(resp.IsTruncated) { break } keyMarker = resp.NextKeyMarker uploadIDMarker = resp.NextUploadIdMarker } return uploads, nil } func (f *Fs) listMultipartUploadsAll(ctx context.Context) (uploadsMap map[string][]*s3.MultipartUpload, err error) { uploadsMap = make(map[string][]*s3.MultipartUpload) bucket, directory := f.split("") if bucket != "" { uploads, err := f.listMultipartUploads(ctx, bucket, directory) if err != nil { return uploadsMap, err } uploadsMap[bucket] = uploads return uploadsMap, nil } entries, err := f.listBuckets(ctx) if err != nil { return uploadsMap, err } for _, entry := range entries { bucket := entry.Remote() uploads, listErr := f.listMultipartUploads(ctx, bucket, "") if listErr != nil { err = listErr fs.Errorf(f, "%v", err) } uploadsMap[bucket] = uploads } return uploadsMap, err } // cleanUpBucket removes all pending multipart uploads for a given bucket over the age of maxAge func (f *Fs) cleanUpBucket(ctx context.Context, bucket string, maxAge time.Duration, uploads []*s3.MultipartUpload) (err error) { fs.Infof(f, "cleaning bucket %q of pending multipart uploads older than %v", bucket, maxAge) for _, upload := range uploads { if upload.Initiated != nil && upload.Key != nil && upload.UploadId != nil { age := time.Since(*upload.Initiated) what := fmt.Sprintf("pending multipart upload for bucket %q key %q dated %v (%v ago)", bucket, *upload.Key, upload.Initiated, age) if age > maxAge { fs.Infof(f, "removing %s", what) if operations.SkipDestructive(ctx, what, "remove pending upload") { continue } req := s3.AbortMultipartUploadInput{ Bucket: &bucket, UploadId: upload.UploadId, Key: upload.Key, } _, abortErr := f.c.AbortMultipartUpload(&req) if abortErr != nil { err = errors.Wrapf(abortErr, "failed to remove %s", what) fs.Errorf(f, "%v", err) } } else { fs.Debugf(f, "ignoring %s", what) } } } return err } // cleanUp removes all pending multipart uploads older than maxAge func (f *Fs) cleanUp(ctx context.Context, maxAge time.Duration) (err error) { uploadsMap, err := f.listMultipartUploadsAll(ctx) if err != nil { return err } for bucket, uploads := range uploadsMap { cleanErr := f.cleanUpBucket(ctx, bucket, maxAge, uploads) if cleanErr != nil { fs.Errorf(f, "Failed to cleanup bucket %q: %v", bucket, cleanErr) err = cleanErr } } return err } // CleanUp removes all pending multipart uploads older than 24 hours func (f *Fs) CleanUp(ctx context.Context) (err error) { return f.cleanUp(ctx, 24*time.Hour) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } var matchMd5 = regexp.MustCompile(`^[0-9a-f]{32}$`) // Hash returns the Md5sum of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } hash := strings.Trim(strings.ToLower(o.etag), `"`) // Check the etag is a valid md5sum if !matchMd5.MatchString(hash) { err := o.readMetaData(ctx) if err != nil { return "", err } if md5sum, ok := o.meta[metaMD5Hash]; ok { md5sumBytes, err := base64.StdEncoding.DecodeString(*md5sum) if err != nil { return "", err } hash = hex.EncodeToString(md5sumBytes) } else { hash = "" } } return hash, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return o.bytes } func (o *Object)
headObject(ctx context.Context) (resp *s3.HeadObjectOutput, err error) { bucket, bucketPath := o.split() req := s3.HeadObjectInput{ Bucket: &bucket, Key: &bucketPath, } err = o.fs.pacer.Call(func() (bool, error) { var err error resp, err = o.fs.c.HeadObjectWithContext(ctx, &req) return o.fs.shouldRetry(err) }) if err != nil { if awsErr, ok := err.(awserr.RequestFailure); ok { if awsErr.StatusCode() == http.StatusNotFound { return nil, fs.ErrorObjectNotFound } } return nil, err } o.fs.cache.MarkOK(bucket) return resp, nil } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData(ctx context.Context) (err error) { if o.meta != nil { return nil } resp, err := o.headObject(ctx) if err != nil { return err } var size int64 // Ignore missing Content-Length assuming it is 0 // Some versions of ceph do this due to their apache proxies if resp.ContentLength != nil { size = *resp.ContentLength } o.etag = aws.StringValue(resp.ETag) o.bytes = size o.meta = resp.Metadata if o.meta == nil { o.meta = map[string]*string{} } o.storageClass = aws.StringValue(resp.StorageClass) if resp.LastModified == nil { fs.Logf(o, "Failed to read last modified from HEAD") o.lastModified = time.Now() } else { o.lastModified = *resp.LastModified } o.mimeType = aws.StringValue(resp.ContentType) return nil } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present it // falls back to the LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { if fs.Config.UseServerModTime { return o.lastModified } err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } // read mtime out of metadata if available d, ok := o.meta[metaMtime] if !ok || d == nil { // fs.Debugf(o, "No metadata") return o.lastModified } modTime, err := swift.FloatStringToTime(*d) if err != nil { fs.Logf(o, "Failed to read mtime from object: %v", err) return o.lastModified } return modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { err := o.readMetaData(ctx) if err != nil { return err } o.meta[metaMtime] = aws.String(swift.TimeToFloatString(modTime)) // Can't update metadata here, so return this error to force a recopy if o.storageClass == "GLACIER" || o.storageClass == "DEEP_ARCHIVE" { return fs.ErrorCantSetModTime } // Copy the object to itself to update the metadata bucket, bucketPath := o.split() req := s3.CopyObjectInput{ ContentType: aws.String(fs.MimeType(ctx, o)), // Guess the content type Metadata: o.meta, MetadataDirective: aws.String(s3.MetadataDirectiveReplace), // replace metadata with that passed in } return o.fs.copy(ctx, &req, bucket, bucketPath, bucket, bucketPath, o) } // Storable returns a boolean indicating if this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { bucket, bucketPath := o.split() req := s3.GetObjectInput{ Bucket: &bucket, Key: &bucketPath, } if o.fs.opt.SSECustomerAlgorithm != "" { req.SSECustomerAlgorithm = &o.fs.opt.SSECustomerAlgorithm } if o.fs.opt.SSECustomerKey != "" { req.SSECustomerKey = &o.fs.opt.SSECustomerKey } if o.fs.opt.SSECustomerKeyMD5 != "" { req.SSECustomerKeyMD5 = &o.fs.opt.SSECustomerKeyMD5 } httpReq, resp := o.fs.c.GetObjectRequest(&req)
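// Example of the option plumbing that follows (hypothetical values): a
// ranged read arrives as &fs.RangeOption{Start: 0, End: 1048575}, whose
// Header() is ("Range", "bytes=0-1048575"); that value is copied into
// req.Range so the GET only fetches the first MiB of the object.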
fs.FixRangeOption(options, o.bytes) for _, option := range options { switch option.(type) { case *fs.RangeOption, *fs.SeekOption: _, value := option.Header() req.Range = &value case *fs.HTTPOption: key, value := option.Header() httpReq.HTTPRequest.Header.Add(key, value) default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } err = o.fs.pacer.Call(func() (bool, error) { var err error httpReq.HTTPRequest = httpReq.HTTPRequest.WithContext(ctx) err = httpReq.Send() return o.fs.shouldRetry(err) }) if err, ok := err.(awserr.RequestFailure); ok { if err.Code() == "InvalidObjectState" { return nil, errors.Errorf("Object in GLACIER, restore first: bucket=%q, key=%q", bucket, bucketPath) } } if err != nil { return nil, err } return resp.Body, nil } var warnStreamUpload sync.Once func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, size int64, in io.Reader) (err error) { f := o.fs // make concurrency machinery concurrency := f.opt.UploadConcurrency if concurrency < 1 { concurrency = 1 } tokens := pacer.NewTokenDispenser(concurrency) uploadParts := f.opt.MaxUploadParts if uploadParts < 1 { uploadParts = 1 } else if uploadParts > maxUploadParts { uploadParts = maxUploadParts } // calculate size of parts partSize := int(f.opt.ChunkSize) // size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize // buffers here (default 5MB). With a maximum number of parts (10,000) this will be a file of // 48GB which seems like a not too unreasonable limit. if size == -1 { warnStreamUpload.Do(func() { fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v", f.opt.ChunkSize, fs.SizeSuffix(int64(partSize)*uploadParts)) }) } else { // Adjust partSize until the number of parts is small enough. if size/int64(partSize) >= uploadParts { // Calculate partition size rounded up to the nearest MB partSize = int((((size / uploadParts) >> 20) + 1) << 20) } } memPool := f.getMemoryPool(int64(partSize)) var mReq s3.CreateMultipartUploadInput structs.SetFrom(&mReq, req) var cout *s3.CreateMultipartUploadOutput err = f.pacer.Call(func() (bool, error) { var err error cout, err = f.c.CreateMultipartUploadWithContext(ctx, &mReq) return f.shouldRetry(err) }) if err != nil { return errors.Wrap(err, "multipart upload failed to initialise") } uid := cout.UploadId defer atexit.OnError(&err, func() { if o.fs.opt.LeavePartsOnError { return } fs.Debugf(o, "Cancelling multipart upload") errCancel := f.pacer.Call(func() (bool, error) { _, err := f.c.AbortMultipartUploadWithContext(context.Background(), &s3.AbortMultipartUploadInput{ Bucket: req.Bucket, Key: req.Key, UploadId: uid, RequestPayer: req.RequestPayer, }) return f.shouldRetry(err) }) if errCancel != nil { fs.Debugf(o, "Failed to cancel multipart upload: %v", errCancel) } })() var ( g, gCtx = errgroup.WithContext(ctx) finished = false partsMu sync.Mutex // to protect parts parts []*s3.CompletedPart off int64 ) for partNum := int64(1); !finished; partNum++ { // Get a block of memory from the pool and token which limits concurrency. tokens.Get() buf := memPool.Get() free := func() { // return the memory and token memPool.Put(buf) tokens.Put() } // Fail fast, in case an errgroup managed function returns an error // gCtx is cancelled. There is no point in uploading all the other parts. 
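// (errgroup.WithContext cancels gCtx as soon as any part upload started
// with g.Go returns an error, so this check stops the reader from
// buffering further chunks that would only be thrown away.)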
if gCtx.Err() != nil { free() break } // Read the chunk var n int n, err = readers.ReadFill(in, buf) // this can never return 0, nil if err == io.EOF { if n == 0 && partNum != 1 { // end if no data and if not first chunk free() break } finished = true } else if err != nil { free() return errors.Wrap(err, "multipart upload failed to read source") } buf = buf[:n] partNum := partNum fs.Debugf(o, "multipart upload starting chunk %d size %v offset %v/%v", partNum, fs.SizeSuffix(n), fs.SizeSuffix(off), fs.SizeSuffix(size)) off += int64(n) g.Go(func() (err error) { defer free() partLength := int64(len(buf)) // create checksum of buffer for integrity checking md5sumBinary := md5.Sum(buf) md5sum := base64.StdEncoding.EncodeToString(md5sumBinary[:]) err = f.pacer.Call(func() (bool, error) { uploadPartReq := &s3.UploadPartInput{ Body: bytes.NewReader(buf), Bucket: req.Bucket, Key: req.Key, PartNumber: &partNum, UploadId: uid, ContentMD5: &md5sum, ContentLength: &partLength, RequestPayer: req.RequestPayer, SSECustomerAlgorithm: req.SSECustomerAlgorithm, SSECustomerKey: req.SSECustomerKey, SSECustomerKeyMD5: req.SSECustomerKeyMD5, } uout, err := f.c.UploadPartWithContext(gCtx, uploadPartReq) if err != nil { if partNum <= int64(concurrency) { return f.shouldRetry(err) } // retry all chunks once have done the first batch return true, err } partsMu.Lock() parts = append(parts, &s3.CompletedPart{ PartNumber: &partNum, ETag: uout.ETag, }) partsMu.Unlock() return false, nil }) if err != nil { return errors.Wrap(err, "multipart upload failed to upload part") } return nil }) } err = g.Wait() if err != nil { return err } // sort the completed parts by part number sort.Slice(parts, func(i, j int) bool { return *parts[i].PartNumber < *parts[j].PartNumber }) err = f.pacer.Call(func() (bool, error) { _, err := f.c.CompleteMultipartUploadWithContext(ctx, &s3.CompleteMultipartUploadInput{ Bucket: req.Bucket, Key: req.Key, MultipartUpload: &s3.CompletedMultipartUpload{ Parts: parts, }, RequestPayer: req.RequestPayer, UploadId: uid, }) return f.shouldRetry(err) }) if err != nil { return errors.Wrap(err, "multipart upload failed to finalise") } return nil } // Update the Object from in with modTime and size func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { bucket, bucketPath := o.split() err := o.fs.makeBucket(ctx, bucket) if err != nil { return err } modTime := src.ModTime(ctx) size := src.Size() multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff) // Set the mtime in the meta data metadata := map[string]*string{ metaMtime: aws.String(swift.TimeToFloatString(modTime)), } // read the md5sum if available // - for non multpart // - so we can add a ContentMD5 // - for multipart provided checksums aren't disabled // - so we can add the md5sum in the metadata as metaMD5Hash var md5sum string if !multipart || !o.fs.opt.DisableChecksum { hash, err := src.Hash(ctx, hash.MD5) if err == nil && matchMd5.MatchString(hash) { hashBytes, err := hex.DecodeString(hash) if err == nil { md5sum = base64.StdEncoding.EncodeToString(hashBytes) if multipart { metadata[metaMD5Hash] = &md5sum } } } } // Guess the content type mimeType := fs.MimeType(ctx, src) req := s3.PutObjectInput{ Bucket: &bucket, ACL: &o.fs.opt.ACL, Key: &bucketPath, ContentType: &mimeType, Metadata: metadata, } if md5sum != "" { req.ContentMD5 = &md5sum } if o.fs.opt.ServerSideEncryption != "" { req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption } if o.fs.opt.SSECustomerAlgorithm != "" { 
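// SSE-C: with customer-supplied keys the same key material must be sent
// on every request that touches the object, so the headers set here for
// the upload mirror the ones sent on download in Open above.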
req.SSECustomerAlgorithm = &o.fs.opt.SSECustomerAlgorithm } if o.fs.opt.SSECustomerKey != "" { req.SSECustomerKey = &o.fs.opt.SSECustomerKey } if o.fs.opt.SSECustomerKeyMD5 != "" { req.SSECustomerKeyMD5 = &o.fs.opt.SSECustomerKeyMD5 } if o.fs.opt.SSEKMSKeyID != "" { req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID } if o.fs.opt.StorageClass != "" { req.StorageClass = &o.fs.opt.StorageClass } // Apply upload options for _, option := range options { key, value := option.Header() lowerKey := strings.ToLower(key) switch lowerKey { case "": // ignore case "cache-control": req.CacheControl = aws.String(value) case "content-disposition": req.ContentDisposition = aws.String(value) case "content-encoding": req.ContentEncoding = aws.String(value) case "content-language": req.ContentLanguage = aws.String(value) case "content-type": req.ContentType = aws.String(value) case "x-amz-tagging": req.Tagging = aws.String(value) default: const amzMetaPrefix = "x-amz-meta-" if strings.HasPrefix(lowerKey, amzMetaPrefix) { metaKey := lowerKey[len(amzMetaPrefix):] req.Metadata[metaKey] = aws.String(value) } else { fs.Errorf(o, "Don't know how to set key %q on upload", key) } } } if multipart { err = o.uploadMultipart(ctx, &req, size, in) if err != nil { return err } } else { // Create the request putObj, _ := o.fs.c.PutObjectRequest(&req) // Sign it so we can upload using a presigned request. // // Note the SDK doesn't currently support streaming to // PutObject so we'll use this work-around. url, headers, err := putObj.PresignRequest(15 * time.Minute) if err != nil { return errors.Wrap(err, "s3 upload: sign request") } if o.fs.opt.V2Auth && headers == nil { headers = putObj.HTTPRequest.Header } // Set request to nil if empty so as not to make chunked encoding if size == 0 { in = nil } // create the vanilla http request httpReq, err := http.NewRequest("PUT", url, in) if err != nil { return errors.Wrap(err, "s3 upload: new request") } httpReq = httpReq.WithContext(ctx) // go1.13 can use NewRequestWithContext // set the headers we signed and the length httpReq.Header = headers httpReq.ContentLength = size err = o.fs.pacer.CallNoRetry(func() (bool, error) { resp, err := o.fs.srv.Do(httpReq) if err != nil { return o.fs.shouldRetry(err) } body, err := rest.ReadBody(resp) if err != nil { return o.fs.shouldRetry(err) } if resp.StatusCode >= 200 && resp.StatusCode < 299 { return false, nil } err = errors.Errorf("s3 upload: %s: %s", resp.Status, body) return fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err }) if err != nil { return err } } // Read the metadata from the newly created object o.meta = nil // wipe old metadata err = o.readMetaData(ctx) return err } // Remove an object func (o *Object) Remove(ctx context.Context) error { bucket, bucketPath := o.split() req := s3.DeleteObjectInput{ Bucket: &bucket, Key: &bucketPath, } err := o.fs.pacer.Call(func() (bool, error) { _, err := o.fs.c.DeleteObjectWithContext(ctx, &req) return o.fs.shouldRetry(err) }) return err } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return "" } return o.mimeType } // SetTier performs changing storage class func (o *Object) SetTier(tier string) (err error) { ctx := context.TODO() tier = strings.ToUpper(tier) bucket, bucketPath := o.split() req := s3.CopyObjectInput{ MetadataDirective: aws.String(s3.MetadataDirectiveCopy), StorageClass: aws.String(tier), } err = o.fs.copy(ctx, &req, bucket, bucketPath, 
bucket, bucketPath, o) if err != nil { return err } o.storageClass = tier return err } // GetTier returns storage class as string func (o *Object) GetTier() string { if o.storageClass == "" { return "STANDARD" } return o.storageClass } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.Copier = &Fs{} _ fs.PutStreamer = &Fs{} _ fs.ListRer = &Fs{} _ fs.Commander = &Fs{} _ fs.CleanUpper = &Fs{} _ fs.Object = &Object{} _ fs.MimeTyper = &Object{} _ fs.GetTierer = &Object{} _ fs.SetTierer = &Object{} ) rclone-1.53.3/backend/s3/s3_test.go000066400000000000000000000013331375552240400167230ustar00rootroot00000000000000// Test S3 filesystem interface package s3 import ( "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestS3:", NilObject: (*Object)(nil), TiersToTest: []string{"STANDARD", "STANDARD_IA"}, ChunkedUpload: fstests.ChunkedUploadConfig{ MinChunkSize: minChunkSize, }, }) } func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadChunkSize(cs) } func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadCutoff(cs) } var _ fstests.SetUploadChunkSizer = (*Fs)(nil) rclone-1.53.3/backend/s3/v2sign.go000066400000000000000000000054161375552240400165550ustar00rootroot00000000000000// v2 signing package s3 import ( "crypto/hmac" "crypto/sha1" "encoding/base64" "net/http" "sort" "strings" "time" ) // URL parameters that need to be added to the signature var s3ParamsToSign = map[string]struct{}{ "acl": {}, "location": {}, "logging": {}, "notification": {}, "partNumber": {}, "policy": {}, "requestPayment": {}, "torrent": {}, "uploadId": {}, "uploads": {}, "versionId": {}, "versioning": {}, "versions": {}, "response-content-type": {}, "response-content-language": {}, "response-expires": {}, "response-cache-control": {}, "response-content-disposition": {}, "response-content-encoding": {}, } // sign signs requests using v2 auth // // Cobbled together from goamz and aws-sdk-go func sign(AccessKey, SecretKey string, req *http.Request) { // Set date date := time.Now().UTC().Format(time.RFC1123) req.Header.Set("Date", date) // Sort out URI uri := req.URL.EscapedPath() if uri == "" { uri = "/" } // Look through headers of interest var md5 string var contentType string var headersToSign []string for k, v := range req.Header { k = strings.ToLower(k) switch k { case "content-md5": md5 = v[0] case "content-type": contentType = v[0] default: if strings.HasPrefix(k, "x-amz-") { vall := strings.Join(v, ",") headersToSign = append(headersToSign, k+":"+vall) } } } // Make headers of interest into canonical string var joinedHeadersToSign string if len(headersToSign) > 0 { sort.StringSlice(headersToSign).Sort() joinedHeadersToSign = strings.Join(headersToSign, "\n") + "\n" } // Look for query parameters which need to be added to the signature params := req.URL.Query() var queriesToSign []string for k, vs := range params { if _, ok := s3ParamsToSign[k]; ok { for _, v := range vs { if v == "" { queriesToSign = append(queriesToSign, k) } else { queriesToSign = append(queriesToSign, k+"="+v) } } } } // Add query parameters to URI if len(queriesToSign) > 0 { sort.StringSlice(queriesToSign).Sort() uri += "?" 
+ strings.Join(queriesToSign, "&") } // Make signature payload := req.Method + "\n" + md5 + "\n" + contentType + "\n" + date + "\n" + joinedHeadersToSign + uri hash := hmac.New(sha1.New, []byte(SecretKey)) _, _ = hash.Write([]byte(payload)) signature := make([]byte, base64.StdEncoding.EncodedLen(hash.Size())) base64.StdEncoding.Encode(signature, hash.Sum(nil)) // Set signature in request req.Header.Set("Authorization", "AWS "+AccessKey+":"+string(signature)) } rclone-1.53.3/backend/seafile/000077500000000000000000000000001375552240400160735ustar00rootroot00000000000000rclone-1.53.3/backend/seafile/api/000077500000000000000000000000001375552240400166445ustar00rootroot00000000000000rclone-1.53.3/backend/seafile/api/types.go000066400000000000000000000105531375552240400203430ustar00rootroot00000000000000package api // Some api objects are duplicated with only small differences, // it's because the returned JSON objects are very inconsistent between api calls // AuthenticationRequest contains user credentials type AuthenticationRequest struct { Username string `json:"username"` Password string `json:"password"` } // AuthenticationResult is returned by a call to the authentication api type AuthenticationResult struct { Token string `json:"token"` Errors []string `json:"non_field_errors"` } // AccountInfo contains simple user properties type AccountInfo struct { Usage int64 `json:"usage"` Total int64 `json:"total"` Email string `json:"email"` Name string `json:"name"` } // ServerInfo contains server information type ServerInfo struct { Version string `json:"version"` } // DefaultLibrary when none specified type DefaultLibrary struct { ID string `json:"repo_id"` Exists bool `json:"exists"` } // CreateLibraryRequest contains the information needed to create a library type CreateLibraryRequest struct { Name string `json:"name"` Description string `json:"desc"` Password string `json:"passwd"` } // Library properties. Please note not all properties are going to be useful for rclone type Library struct { Encrypted bool `json:"encrypted"` Owner string `json:"owner"` ID string `json:"id"` Size int64 `json:"size"` Name string `json:"name"` Modified int64 `json:"mtime"` } // CreateLibrary properties. 
Seafile is not consistent and returns different types for different API calls type CreateLibrary struct { ID string `json:"repo_id"` Name string `json:"repo_name"` } // FileType is either "dir" or "file" type FileType string // File types var ( FileTypeDir FileType = "dir" FileTypeFile FileType = "file" ) // FileDetail contains file properties (for older api v2.0) type FileDetail struct { ID string `json:"id"` Type FileType `json:"type"` Name string `json:"name"` Size int64 `json:"size"` Parent string `json:"parent_dir"` Modified string `json:"last_modified"` } // DirEntries contains a list of DirEntry type DirEntries struct { Entries []DirEntry `json:"dirent_list"` } // DirEntry contains a directory entry type DirEntry struct { ID string `json:"id"` Type FileType `json:"type"` Name string `json:"name"` Size int64 `json:"size"` Path string `json:"parent_dir"` Modified int64 `json:"mtime"` } // Operation is move, copy or rename type Operation string // Operations var ( CopyFileOperation Operation = "copy" MoveFileOperation Operation = "move" RenameFileOperation Operation = "rename" ) // FileOperationRequest is sent to the api to copy, move or rename a file type FileOperationRequest struct { Operation Operation `json:"operation"` DestinationLibraryID string `json:"dst_repo"` // For copy/move operation DestinationPath string `json:"dst_dir"` // For copy/move operation NewName string `json:"newname"` // Only to be used by the rename operation } // FileInfo is returned by a server file copy/move/rename (new api v2.1) type FileInfo struct { Type string `json:"type"` LibraryID string `json:"repo_id"` Path string `json:"parent_dir"` Name string `json:"obj_name"` ID string `json:"obj_id"` Size int64 `json:"size"` } // CreateDirRequest only contain an operation field type CreateDirRequest struct { Operation string `json:"operation"` } // DirectoryDetail contains the directory details specific to the getDirectoryDetails call type DirectoryDetail struct { ID string `json:"repo_id"` Name string `json:"name"` Path string `json:"path"` } // ShareLinkRequest contains the information needed to create or list shared links type ShareLinkRequest struct { LibraryID string `json:"repo_id"` Path string `json:"path"` } // SharedLink contains the information returned by a call to shared link creation type SharedLink struct { Link string `json:"link"` IsExpired bool `json:"is_expired"` } // BatchSourceDestRequest contains JSON parameters for sending a batch copy or move operation type BatchSourceDestRequest struct { SrcLibraryID string `json:"src_repo_id"` SrcParentDir string `json:"src_parent_dir"` SrcItems []string `json:"src_dirents"` DstLibraryID string `json:"dst_repo_id"` DstParentDir string `json:"dst_parent_dir"` } rclone-1.53.3/backend/seafile/object.go000066400000000000000000000071231375552240400176730ustar00rootroot00000000000000package seafile import ( "context" "io" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" ) // Object describes a seafile object (also commonly called a file) type Object struct { fs *Fs // what this object is part of id string // internal ID of object remote string // The remote path (full path containing library name if target at root) pathInLibrary string // Path of the object without the library name size int64 // size of the object modTime time.Time // modification time of the object libraryID string // Needed to download the file } // ==================== Interface fs.DirEntry ==================== // Return a string version func (o *Object) String() 
string { if o == nil { return "" } return o.remote } // Remote returns the remote string func (o *Object) Remote() string { return o.remote } // ModTime returns last modified time func (o *Object) ModTime(context.Context) time.Time { return o.modTime } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return o.size } // ==================== Interface fs.ObjectInfo ==================== // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Hash returns the selected checksum of the file // If no checksum is available it returns "" func (o *Object) Hash(ctx context.Context, ty hash.Type) (string, error) { return "", hash.ErrUnsupported } // Storable says whether this object can be stored func (o *Object) Storable() bool { return true } // ==================== Interface fs.Object ==================== // SetModTime sets the metadata on the object to set the modification date func (o *Object) SetModTime(ctx context.Context, t time.Time) error { return fs.ErrorCantSetModTime } // Open opens the file for read. Call Close() on the returned io.ReadCloser func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { downloadLink, err := o.fs.getDownloadLink(ctx, o.libraryID, o.pathInLibrary) if err != nil { return nil, err } reader, err := o.fs.download(ctx, downloadLink, o.Size(), options...) if err != nil { return nil, err } return reader, nil } // Update in to the object with the modTime given of the given size // // When called from outside an Fs by rclone, src.Size() will always be >= 0. // But for unknown-sized objects (indicated by src.Size() == -1), Update should either // return an error or update the object properly (rather than e.g. calling panic). func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { // The upload sometimes returns a temporary 500 error // We cannot use the pacer to retry uploading the file as the upload link is single use only for retry := 0; retry <= 3; retry++ { uploadLink, err := o.fs.getUploadLink(ctx, o.libraryID) if err != nil { return err } uploaded, err := o.fs.upload(ctx, in, uploadLink, o.pathInLibrary) if err == ErrorInternalDuringUpload { // This is a temporary error, try again with a new upload link continue } if err != nil { return err } // Set the properties from the upload back to the object o.size = uploaded.Size o.id = uploaded.ID return nil } return ErrorInternalDuringUpload } // Remove this object func (o *Object) Remove(ctx context.Context) error { return o.fs.deleteFile(ctx, o.libraryID, o.pathInLibrary) } // ==================== Optional Interface fs.IDer ==================== // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.id } rclone-1.53.3/backend/seafile/pacer.go000066400000000000000000000025351375552240400175210ustar00rootroot00000000000000package seafile import ( "fmt" "net/url" "sync" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/lib/pacer" ) const ( minSleep = 100 * time.Millisecond maxSleep = 10 * time.Second decayConstant = 2 // bigger for slower decay, exponential ) // Use only one pacer per server URL var ( pacers map[string]*fs.Pacer pacerMutex sync.Mutex ) func init() { pacers = make(map[string]*fs.Pacer) } // getPacer returns the unique pacer for that remote URL func getPacer(remote string) *fs.Pacer { pacerMutex.Lock() defer pacerMutex.Unlock() remote = parseRemote(remote) if existing, found := pacers[remote]; found { return
existing } pacers[remote] = fs.NewPacer( pacer.NewDefault( pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant), ), ) return pacers[remote] } // parseRemote formats a remote url into "hostname:port" func parseRemote(remote string) string { remoteURL, err := url.Parse(remote) if err != nil { // Return a default value in the very unlikely event we're not going to parse remote fs.Infof(nil, "Cannot parse remote %s", remote) return "default" } host := remoteURL.Hostname() port := remoteURL.Port() if port == "" { if remoteURL.Scheme == "https" { port = "443" } else { port = "80" } } return fmt.Sprintf("%s:%s", host, port) } rclone-1.53.3/backend/seafile/seafile.go000066400000000000000000001133721375552240400200410ustar00rootroot00000000000000package seafile import ( "context" "fmt" "io" "net/http" "net/url" "path" "strconv" "strings" "sync" "time" "github.com/coreos/go-semver/semver" "github.com/pkg/errors" "github.com/rclone/rclone/backend/seafile/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/bucket" "github.com/rclone/rclone/lib/cache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/random" "github.com/rclone/rclone/lib/rest" ) const ( librariesCacheKey = "all" retryAfterHeader = "Retry-After" configURL = "url" configUser = "user" configPassword = "pass" config2FA = "2fa" configLibrary = "library" configLibraryKey = "library_key" configCreateLibrary = "create_library" configAuthToken = "auth_token" ) // This is global to all instances of fs // (copying from a seafile remote to another remote would create 2 fs) var ( rangeDownloadNotice sync.Once // Display the notice only once createLibraryMutex sync.Mutex // Mutex to protect library creation ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "seafile", Description: "seafile", NewFs: NewFs, Config: Config, Options: []fs.Option{{ Name: configURL, Help: "URL of seafile host to connect to", Required: true, Examples: []fs.OptionExample{{ Value: "https://cloud.seafile.com/", Help: "Connect to cloud.seafile.com", }}, }, { Name: configUser, Help: "User name (usually email address)", Required: true, }, { // Password is not required, it will be left blank for 2FA Name: configPassword, Help: "Password", IsPassword: true, }, { Name: config2FA, Help: "Two-factor authentication ('true' if the account has 2FA enabled)", Default: false, }, { Name: configLibrary, Help: "Name of the library. Leave blank to access all non-encrypted libraries.", }, { Name: configLibraryKey, Help: "Library password (for encrypted libraries only). 
Leave blank if you pass it through the command line.", IsPassword: true, }, { Name: configCreateLibrary, Help: "Should rclone create a library if it doesn't exist", Advanced: true, Default: false, }, { // Keep the authentication token after entering the 2FA code Name: configAuthToken, Help: "Authentication token", Hide: fs.OptionHideBoth, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, Default: (encoder.EncodeZero | encoder.EncodeCtl | encoder.EncodeSlash | encoder.EncodeBackSlash | encoder.EncodeDoubleQuote | encoder.EncodeInvalidUtf8), }}, }) } // Options defines the configuration for this backend type Options struct { URL string `config:"url"` User string `config:"user"` Password string `config:"pass"` Is2FA bool `config:"2fa"` AuthToken string `config:"auth_token"` LibraryName string `config:"library"` LibraryKey string `config:"library_key"` CreateLibrary bool `config:"create_library"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote seafile type Fs struct { name string // name of this remote root string // the path we are working on libraryName string // current library encrypted bool // Is this an encrypted library rootDirectory string // directory part of root (if any) opt Options // parsed options libraries *cache.Cache // Keep a cache of libraries librariesMutex sync.Mutex // Mutex to protect getLibraryID features *fs.Features // optional features endpoint *url.URL // URL of the host endpointURL string // endpoint as a string srv *rest.Client // the connection to the seafile server pacer *fs.Pacer // pacer for API calls authMu sync.Mutex // Mutex to protect library decryption createDirMutex sync.Mutex // Protect creation of directories useOldDirectoryAPI bool // Use the old API v2 if seafile < 7 moveDirNotAvailable bool // Versions < 7.0 don't have an API to move a directory } // ------------------------------------------------------------ // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } root = strings.Trim(root, "/") isLibraryRooted := opt.LibraryName != "" var libraryName, rootDirectory string if isLibraryRooted { libraryName = opt.LibraryName rootDirectory = root } else { libraryName, rootDirectory = bucket.Split(root) } if !strings.HasSuffix(opt.URL, "/") { opt.URL += "/" } if opt.Password != "" { var err error opt.Password, err = obscure.Reveal(opt.Password) if err != nil { return nil, errors.Wrap(err, "couldn't decrypt user password") } } if opt.LibraryKey != "" { var err error opt.LibraryKey, err = obscure.Reveal(opt.LibraryKey) if err != nil { return nil, errors.Wrap(err, "couldn't decrypt library password") } } // Parse the endpoint u, err := url.Parse(opt.URL) if err != nil { return nil, err } f := &Fs{ name: name, root: root, libraryName: libraryName, rootDirectory: rootDirectory, libraries: cache.New(), opt: *opt, endpoint: u, endpointURL: u.String(), srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()), pacer: getPacer(opt.URL), } f.features = (&fs.Features{ CanHaveEmptyDirectories: true, BucketBased: opt.LibraryName == "", }).Fill(f) ctx := context.Background() serverInfo, err := f.getServerInfo(ctx) if err != nil { return nil, err } fs.Debugf(nil, "Seafile server version %s", serverInfo.Version) // We don't support lower than seafile v6.0 (version 6.0 is already more than 3 years old) serverVersion
:= semver.New(serverInfo.Version) if serverVersion.Major < 6 { return nil, errors.New("unsupported Seafile server (version < 6.0)") } if serverVersion.Major < 7 { // Seafile 6 does not support recursive listing f.useOldDirectoryAPI = true f.features.ListR = nil // It also does not support moving directories f.moveDirNotAvailable = true } // Take the authentication token from the configuration first token := f.opt.AuthToken if token == "" { // If not available, send the user/password instead token, err = f.authorizeAccount(ctx) if err != nil { return nil, err } } f.setAuthorizationToken(token) if f.libraryName != "" { // Check if the library exists exists, err := f.libraryExists(ctx, f.libraryName) if err != nil { return f, err } if !exists { if f.opt.CreateLibrary { err := f.mkLibrary(ctx, f.libraryName, "") if err != nil { return f, err } } else { return f, fmt.Errorf("library '%s' was not found, and the option to create it is not activated (advanced option)", f.libraryName) } } libraryID, err := f.getLibraryID(ctx, f.libraryName) if err != nil { return f, err } f.encrypted, err = f.isEncrypted(ctx, libraryID) if err != nil { return f, err } if f.encrypted { // If we're inside an encrypted library, let's decrypt it now err = f.authorizeLibrary(ctx, libraryID) if err != nil { return f, err } // And remove the public link feature f.features.PublicLink = nil } } else { // Deactivate the cleaner feature since there's no library selected f.features.CleanUp = nil } if f.rootDirectory != "" { // Check to see if the root is an existing file remote := path.Base(rootDirectory) f.rootDirectory = path.Dir(rootDirectory) if f.rootDirectory == "." { f.rootDirectory = "" } _, err := f.NewObject(ctx, remote) if err != nil { if errors.Cause(err) == fs.ErrorObjectNotFound || errors.Cause(err) == fs.ErrorNotAFile { // File doesn't exist so return the original f f.rootDirectory = rootDirectory return f, nil } return f, err } // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // Config callback for 2FA func Config(name string, m configmap.Mapper) { serverURL, ok := m.Get(configURL) if !ok || serverURL == "" { // If there's no server URL, it means we're trying an operation at the backend level, like a "rclone authorize seafile" fmt.Print("\nOperation not supported on this remote.\nIf you need a 2FA code on your account, use the command:\n\nrclone config reconnect <remote name>:\n\n") return } // Stop if we are running non-interactive config if fs.Config.AutoConfirm { return } u, err := url.Parse(serverURL) if err != nil { fs.Errorf(nil, "Invalid server URL %s", serverURL) return } is2faEnabled, _ := m.Get(config2FA) if is2faEnabled != "true" { fmt.Println("Two-factor authentication is not enabled on this account.") return } username, _ := m.Get(configUser) if username == "" { fs.Errorf(nil, "A username is required") return } password, _ := m.Get(configPassword) if password != "" { password, _ = obscure.Reveal(password) } // Just make sure we do have a password for password == "" { fmt.Print("Two-factor authentication: please enter your password (it won't be saved in the configuration)\npassword> ") password = config.ReadPassword() } // Create rest client for getAuthorizationToken url := u.String() if !strings.HasSuffix(url, "/") { url += "/" } srv := rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(url) // We loop asking for a 2FA code for { code := "" for code == "" { fmt.Print("Two-factor authentication: please enter your 2FA code\n2fa code> ") code = config.ReadLine() }
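// Sketch of the flow below: each attempt gets its own 10 second
// timeout, asks the user for a fresh 2FA code, and exchanges
// username/password/code for a token; on success the token is saved
// in place of the password.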
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() fmt.Println("Authenticating...") token, err := getAuthorizationToken(ctx, srv, username, password, code) if err != nil { fmt.Printf("Authentication failed: %v\n", err) tryAgain := strings.ToLower(config.ReadNonEmptyLine("Do you want to try again (y/n)?")) if tryAgain != "y" && tryAgain != "yes" { // The user is giving up, we're done here break } } if token != "" { fmt.Println("Success!") // Let's save the token into the configuration m.Set(configAuthToken, token) // And delete any previous entry for password m.Set(configPassword, "") config.SaveConfig() // And we're done here break } } } // sets the AuthorizationToken up func (f *Fs) setAuthorizationToken(token string) { f.srv.SetHeader("Authorization", "Token "+token) } // authorizeAccount gets the auth token. func (f *Fs) authorizeAccount(ctx context.Context) (string, error) { f.authMu.Lock() defer f.authMu.Unlock() token, err := f.getAuthorizationToken(ctx) if err != nil { return "", err } return token, nil } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 408, // Request Timeout 429, // Rate exceeded. 500, // Get occasional 500 Internal Server Error 503, // Service Unavailable 504, // Gateway Time-out 520, // Operation failed (We get them sometimes when running tests in parallel) } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) { // For 429 errors look at the Retry-After: header and // set the retry appropriately, starting with a minimum of 1 // second if it isn't set. if resp != nil && (resp.StatusCode == 429) { var retryAfter = 1 retryAfterString := resp.Header.Get(retryAfterHeader) if retryAfterString != "" { var err error retryAfter, err = strconv.Atoi(retryAfterString) if err != nil { fs.Errorf(f, "Malformed %s header %q: %v", retryAfterHeader, retryAfterString, err) } } return true, pacer.RetryAfterError(err, time.Duration(retryAfter)*time.Second) } return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } func (f *Fs) shouldRetryUpload(ctx context.Context, resp *http.Response, err error) (bool, error) { if err != nil || (resp != nil && resp.StatusCode > 400) { return true, err } return false, nil } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { if f.libraryName == "" { return fmt.Sprintf("seafile root") } library := "library" if f.encrypted { library = "encrypted " + library } if f.rootDirectory == "" { return fmt.Sprintf("seafile %s '%s'", library, f.libraryName) } return fmt.Sprintf("seafile %s '%s' path '%s'", library, f.libraryName, f.rootDirectory) } // Precision of the ModTimes in this Fs func (f *Fs) Precision() time.Duration { // The API doesn't support setting the modified time return fs.ModTimeNotSupported } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.None) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. 
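// Example (added for illustration, not part of the original source): how the
// Retry-After handling in shouldRetry above plays out, assuming the
// retryAfterHeader constant is the standard "Retry-After" header name. A 429
// response carrying "Retry-After: 5" makes the pacer sleep five seconds
// before the next attempt; a missing or malformed header falls back to the
// one second minimum:
//
//	resp := &http.Response{
//		StatusCode: 429,
//		Header:     http.Header{"Retry-After": []string{"5"}},
//	}
//	retry, err := f.shouldRetry(resp, errors.New("rate limited"))
//	// retry == true and err is a pacer.RetryAfterError carrying a 5s delay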
// // dir should be "" to list the root, and should not have // trailing slashes. // // This should return fs.ErrorDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { if dir == "" && f.libraryName == "" { return f.listLibraries(ctx) } return f.listDir(ctx, dir, false) } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { libraryName, filePath := f.splitPath(remote) libraryID, err := f.getLibraryID(ctx, libraryName) if err != nil { return nil, err } err = f.authorizeLibrary(ctx, libraryID) if err != nil { return nil, err } fileDetails, err := f.getFileDetails(ctx, libraryID, filePath) if err != nil { return nil, err } modTime, err := time.Parse(time.RFC3339, fileDetails.Modified) if err != nil { fs.LogPrintf(fs.LogLevelWarning, fileDetails.Modified, "Cannot parse datetime") } o := &Object{ fs: f, libraryID: libraryID, id: fileDetails.ID, remote: remote, pathInLibrary: filePath, modTime: modTime, size: fileDetails.Size, } return o, nil } // Put in to the remote path with the modTime given of the given size // // When called from outside an Fs by rclone, src.Size() will always be >= 0. // But for unknown-sized objects (indicated by src.Size() == -1), Put should either // return an error or upload it properly (rather than e.g. calling panic). // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { object := f.newObject(ctx, src.Remote(), src.Size(), src.ModTime(ctx)) // Check if we need to create a new library at that point if object.libraryID == "" { library, _ := f.splitPath(object.remote) err := f.Mkdir(ctx, library) if err != nil { return object, err } libraryID, err := f.getLibraryID(ctx, library) if err != nil { return object, err } object.libraryID = libraryID } err := object.Update(ctx, in, src, options...) if err != nil { return object, err } return object, nil } // PutStream uploads to the remote path with the modTime given but of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) 
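// Example (added for illustration, not part of the original source): because
// Put above creates a missing library on demand, an upload into a brand new
// library works in a single call. A rough sketch for a remote configured
// without a default library, using the hypothetical name "NewLibrary" and
// the fs/object helper to describe the source:
//
//	src := object.NewStaticObjectInfo("NewLibrary/backup/file.txt",
//		time.Now(), size, true, nil, nil)
//	obj, err := f.Put(ctx, reader, src) // creates "NewLibrary" via Mkdir first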
} // Mkdir makes the directory or library // // Shouldn't return an error if it already exists func (f *Fs) Mkdir(ctx context.Context, dir string) error { libraryName, folder := f.splitPath(dir) if strings.HasPrefix(dir, libraryName) { err := f.mkLibrary(ctx, libraryName, "") if err != nil { return err } if folder == "" { // No directory to create after the library return nil } } err := f.mkDir(ctx, dir) if err != nil { return err } return nil } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { libraryName, dirPath := f.splitPath(dir) libraryID, err := f.getLibraryID(ctx, libraryName) if err != nil { return err } if check { directoryEntries, err := f.getDirectoryEntries(ctx, libraryID, dirPath, false) if err != nil { return err } if len(directoryEntries) > 0 { return fs.ErrorDirectoryNotEmpty } } if dirPath == "" || dirPath == "/" { return f.deleteLibrary(ctx, libraryID) } return f.deleteDir(ctx, libraryID, dirPath) } // Rmdir removes the directory or library if empty // // Return an error if it doesn't exist or isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // ==================== Optional Interface fs.ListRer ==================== // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) error { var err error if dir == "" && f.libraryName == "" { libraries, err := f.listLibraries(ctx) if err != nil { return err } // Send the library list as folders err = callback(libraries) if err != nil { return err } // Then list each library for _, library := range libraries { err = f.listDirCallback(ctx, library.Remote(), callback) if err != nil { return err } } return nil } err = f.listDirCallback(ctx, dir, callback) if err != nil { return err } return nil } // ==================== Optional Interface fs.Copier ==================== // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { return nil, fs.ErrorCantCopy } srcLibraryName, srcPath := srcObj.fs.splitPath(src.Remote()) srcLibraryID, err := srcObj.fs.getLibraryID(ctx, srcLibraryName) if err != nil { return nil, err } dstLibraryName, dstPath := f.splitPath(remote) dstLibraryID, err := f.getLibraryID(ctx, dstLibraryName) if err != nil { return nil, err } // Seafile does not accept a file name as a destination, only a path. 
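// Example (added for illustration, not part of the original source): how a
// caller might consume ListR above. Entries arrive in tranches through the
// callback instead of as one big slice, which keeps memory bounded on large
// libraries:
//
//	err := f.ListR(ctx, "", func(entries fs.DirEntries) error {
//		for _, entry := range entries {
//			fmt.Println(entry.Remote())
//		}
//		return nil
//	})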
// The destination filename will be the same as the original, or with (1) added in case it already exists dstDir, dstFilename := path.Split(dstPath) // We have to make sure the destination path exists on the server or it's going to bomb out with an obscure error message err = f.mkMultiDir(ctx, dstLibraryID, dstDir) if err != nil { return nil, err } op, err := f.copyFile(ctx, srcLibraryID, srcPath, dstLibraryID, dstDir) if err != nil { return nil, err } if op.Name != dstFilename { // The destination already existed, so we need to move the file back into place err = f.adjustDestination(ctx, dstLibraryID, op.Name, dstPath, dstDir, dstFilename) if err != nil { return nil, err } } // Create a new object from the result return f.NewObject(ctx, remote) } // ==================== Optional Interface fs.Mover ==================== // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { return nil, fs.ErrorCantMove } srcLibraryName, srcPath := srcObj.fs.splitPath(src.Remote()) srcLibraryID, err := srcObj.fs.getLibraryID(ctx, srcLibraryName) if err != nil { return nil, err } dstLibraryName, dstPath := f.splitPath(remote) dstLibraryID, err := f.getLibraryID(ctx, dstLibraryName) if err != nil { return nil, err } // anchor both source and destination paths from the root so we can compare them srcPath = path.Join("/", srcPath) dstPath = path.Join("/", dstPath) srcDir := path.Dir(srcPath) dstDir, dstFilename := path.Split(dstPath) if srcLibraryID == dstLibraryID && srcDir == dstDir { // It's only a simple case of renaming the file _, err := f.renameFile(ctx, srcLibraryID, srcPath, dstFilename) if err != nil { return nil, err } return f.NewObject(ctx, remote) } // We have to make sure the destination path exists on the server err = f.mkMultiDir(ctx, dstLibraryID, dstDir) if err != nil { return nil, err } // Seafile does not accept a file name as a destination, only a path. // The destination filename will be the same as the original, or with (1) added in case it already exists op, err := f.moveFile(ctx, srcLibraryID, srcPath, dstLibraryID, dstDir) if err != nil { return nil, err } if op.Name != dstFilename { // The destination already existed, so we need to move the file back into place err = f.adjustDestination(ctx, dstLibraryID, op.Name, dstPath, dstDir, dstFilename) if err != nil { return nil, err } } // Create a new object from the result return f.NewObject(ctx, remote) } // adjustDestination renames the file func (f *Fs) adjustDestination(ctx context.Context, libraryID, srcFilename, dstPath, dstDir, dstFilename string) error { // Seafile seems to be acting strangely if the renamed file already exists (some cache issue maybe?) // It's better to delete the destination if it already exists fileDetail, err := f.getFileDetails(ctx, libraryID, dstPath) if err != nil && err != fs.ErrorObjectNotFound { return err } if fileDetail != nil { err = f.deleteFile(ctx, libraryID, dstPath) if err != nil { return err } } _, err = f.renameFile(ctx, libraryID, path.Join(dstDir, srcFilename), dstFilename) if err != nil { return err } return nil } // ==================== Optional Interface fs.DirMover ==================== // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. 
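// Example (added for illustration, not part of the original source): the
// rename dance performed by Copy and Move above. Seafile's copy/move APIs
// take a destination directory rather than a file name and silently rename
// on conflict, so (with hypothetical paths):
//
//	op, _ := f.copyFile(ctx, srcLib, "/a.txt", dstLib, "/dir")
//	// if /dir/a.txt already existed, the server reports op.Name == "a.txt (1)";
//	// adjustDestination above then deletes the stale /dir/a.txt and renames
//	// "a.txt (1)" back into place as "a.txt"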
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { // Cast into a seafile Fs srcFs, ok := src.(*Fs) if !ok { return fs.ErrorCantDirMove } srcLibraryName, srcPath := srcFs.splitPath(srcRemote) srcLibraryID, err := srcFs.getLibraryID(ctx, srcLibraryName) if err != nil { return err } dstLibraryName, dstPath := f.splitPath(dstRemote) dstLibraryID, err := f.getLibraryID(ctx, dstLibraryName) if err != nil { return err } srcDir := path.Dir(srcPath) dstDir, dstName := path.Split(dstPath) // anchor both source and destination to the root so we can compare them srcDir = path.Join("/", srcDir) dstDir = path.Join("/", dstDir) // The destination should not exist entries, err := f.getDirectoryEntries(ctx, dstLibraryID, dstDir, false) if err != nil && err != fs.ErrorDirNotFound { return err } if err == nil { for _, entry := range entries { if entry.Name == dstName { // Destination exists return fs.ErrorDirExists } } } if srcLibraryID == dstLibraryID && srcDir == dstDir { // It's only renaming err = srcFs.renameDir(ctx, dstLibraryID, srcPath, dstName) if err != nil { return err } return nil } // Seafile < 7 does not support moving directories if f.moveDirNotAvailable { return fs.ErrorCantDirMove } // Make sure the destination path exists err = f.mkMultiDir(ctx, dstLibraryID, dstDir) if err != nil { return err } // If the destination already exists, seafile will add a " (n)" to the name. // Sadly this API call will not return the new given name like the move file version does // So the trick is to rename the directory to something random before moving it // After the move we rename the random name back to the expected one // Hopefully there won't be anything with the same name existing at destination ;) tempName := ".rclone-move-" + random.String(32) // 1- rename source err = srcFs.renameDir(ctx, srcLibraryID, srcPath, tempName) if err != nil { return errors.Wrap(err, "Cannot rename source directory to a temporary name") } // 2- move source to destination err = f.moveDir(ctx, srcLibraryID, srcDir, tempName, dstLibraryID, dstDir) if err != nil { // Doh! 
Let's rename the source back to its original name _ = srcFs.renameDir(ctx, srcLibraryID, path.Join(srcDir, tempName), path.Base(srcPath)) return err } // 3- rename destination back to source name err = f.renameDir(ctx, dstLibraryID, path.Join(dstDir, tempName), dstName) if err != nil { return errors.Wrap(err, "Cannot rename temporary directory to destination name") } return nil } // ==================== Optional Interface fs.Purger ==================== // Purge all files in the directory // // Implement this if you have a way of deleting all the files // quicker than just running Remove() on the result of List() // // Return an error if it doesn't exist func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // ==================== Optional Interface fs.CleanUpper ==================== // CleanUp the trash in the Fs func (f *Fs) CleanUp(ctx context.Context) error { if f.libraryName == "" { return errors.New("Cannot clean up at the root of the seafile server: please select a library to clean up") } libraryID, err := f.getLibraryID(ctx, f.libraryName) if err != nil { return err } return f.emptyLibraryTrash(ctx, libraryID) } // ==================== Optional Interface fs.Abouter ==================== // About gets quota information func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) { accountInfo, err := f.getUserAccountInfo(ctx) if err != nil { return nil, err } usage = &fs.Usage{ Used: fs.NewUsageValue(accountInfo.Usage), // bytes in use } if accountInfo.Total > 0 { usage.Total = fs.NewUsageValue(accountInfo.Total) // quota of bytes that can be used usage.Free = fs.NewUsageValue(accountInfo.Total - accountInfo.Usage) // bytes which can be uploaded before reaching the quota } return usage, nil } // ==================== Optional Interface fs.UserInfoer ==================== // UserInfo returns info about the connected user func (f *Fs) UserInfo(ctx context.Context) (map[string]string, error) { accountInfo, err := f.getUserAccountInfo(ctx) if err != nil { return nil, err } return map[string]string{ "Name": accountInfo.Name, "Email": accountInfo.Email, }, nil } // ==================== Optional Interface fs.PublicLinker ==================== // PublicLink generates a public link to the remote path (usually readable by anyone) func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) { libraryName, filePath := f.splitPath(remote) if libraryName == "" { // We cannot share the whole seafile server, we need at least a library return "", errors.New("Cannot share the root of the seafile server. 
Please select a library to share") } libraryID, err := f.getLibraryID(ctx, libraryName) if err != nil { return "", err } // List existing links first shareLinks, err := f.listShareLinks(ctx, libraryID, filePath) if err != nil { return "", err } if shareLinks != nil && len(shareLinks) > 0 { for _, shareLink := range shareLinks { if shareLink.IsExpired == false { return shareLink.Link, nil } } } // No link was found shareLink, err := f.createShareLink(ctx, libraryID, filePath) if err != nil { return "", err } if shareLink.IsExpired { return "", nil } return shareLink.Link, nil } func (f *Fs) listLibraries(ctx context.Context) (entries fs.DirEntries, err error) { libraries, err := f.getCachedLibraries(ctx) if err != nil { return nil, errors.New("cannot load libraries") } for _, library := range libraries { d := fs.NewDir(library.Name, time.Unix(library.Modified, 0)) d.SetSize(library.Size) entries = append(entries, d) } return entries, nil } func (f *Fs) libraryExists(ctx context.Context, libraryName string) (bool, error) { libraries, err := f.getCachedLibraries(ctx) if err != nil { return false, err } for _, library := range libraries { if library.Name == libraryName { return true, nil } } return false, nil } func (f *Fs) getLibraryID(ctx context.Context, name string) (string, error) { libraries, err := f.getCachedLibraries(ctx) if err != nil { return "", err } for _, library := range libraries { if library.Name == name { return library.ID, nil } } return "", fmt.Errorf("cannot find library '%s'", name) } func (f *Fs) isLibraryInCache(libraryName string) bool { f.librariesMutex.Lock() defer f.librariesMutex.Unlock() if f.libraries == nil { return false } value, found := f.libraries.GetMaybe(librariesCacheKey) if found == false { return false } libraries := value.([]api.Library) for _, library := range libraries { if library.Name == libraryName { return true } } return false } func (f *Fs) isEncrypted(ctx context.Context, libraryID string) (bool, error) { libraries, err := f.getCachedLibraries(ctx) if err != nil { return false, err } for _, library := range libraries { if library.ID == libraryID { return library.Encrypted, nil } } return false, fmt.Errorf("cannot find library ID %s", libraryID) } func (f *Fs) authorizeLibrary(ctx context.Context, libraryID string) error { if libraryID == "" { return errors.New("a library ID is needed") } if f.opt.LibraryKey == "" { // We have no password to send return nil } encrypted, err := f.isEncrypted(ctx, libraryID) if err != nil { return err } if encrypted { fs.Debugf(nil, "Decrypting library %s", libraryID) f.authMu.Lock() defer f.authMu.Unlock() err := f.decryptLibrary(ctx, libraryID, f.opt.LibraryKey) if err != nil { return err } } return nil } func (f *Fs) mkLibrary(ctx context.Context, libraryName, password string) error { // lock specific to library creation // we cannot reuse the same lock as we will dead-lock ourself if the libraries are not in cache createLibraryMutex.Lock() defer createLibraryMutex.Unlock() if libraryName == "" { return errors.New("a library name is needed") } // It's quite likely that multiple go routines are going to try creating the same library // at the start of a sync/copy. After releasing the mutex the calls waiting would try to create // the same library again. 
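// Example (added for illustration, not part of the original source): the
// check-inside-the-lock pattern this comment describes, reduced to its
// skeleton:
//
//	createLibraryMutex.Lock()
//	defer createLibraryMutex.Unlock()
//	if f.isLibraryInCache(libraryName) {
//		return nil // another goroutine created it while we waited for the lock
//	}
//	// ...create the library and update the cache...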
So we'd better check the library exists first if f.isLibraryInCache(libraryName) { return nil } fs.Debugf(nil, "%s: Create library '%s'", f.Name(), libraryName) f.librariesMutex.Lock() defer f.librariesMutex.Unlock() library, err := f.createLibrary(ctx, libraryName, password) if err != nil { return err } // Stores the library details into the cache value, found := f.libraries.GetMaybe(librariesCacheKey) if found == false { // Don't update the cache at that point return nil } libraries := value.([]api.Library) libraries = append(libraries, api.Library{ ID: library.ID, Name: library.Name, }) f.libraries.Put(librariesCacheKey, libraries) return nil } // splitPath returns the library name and the full path inside the library func (f *Fs) splitPath(dir string) (library, folder string) { library = f.libraryName folder = dir if library == "" { // The first part of the path is the library library, folder = bucket.Split(dir) } else if f.rootDirectory != "" { // Adds the root folder to the path to get a full path folder = path.Join(f.rootDirectory, folder) } return } func (f *Fs) listDir(ctx context.Context, dir string, recursive bool) (entries fs.DirEntries, err error) { libraryName, dirPath := f.splitPath(dir) libraryID, err := f.getLibraryID(ctx, libraryName) if err != nil { return nil, err } directoryEntries, err := f.getDirectoryEntries(ctx, libraryID, dirPath, recursive) if err != nil { return nil, err } return f.buildDirEntries(dir, libraryID, dirPath, directoryEntries, recursive), nil } // listDirCallback is calling listDir with the recursive option and is sending the result to the callback func (f *Fs) listDirCallback(ctx context.Context, dir string, callback fs.ListRCallback) error { entries, err := f.listDir(ctx, dir, true) if err != nil { return err } err = callback(entries) if err != nil { return err } return nil } func (f *Fs) buildDirEntries(parentPath, libraryID, parentPathInLibrary string, directoryEntries []api.DirEntry, recursive bool) (entries fs.DirEntries) { for _, entry := range directoryEntries { var filePath, filePathInLibrary string if recursive { // In recursive mode, paths are built from DirEntry (+ a starting point) entryPath := strings.TrimPrefix(entry.Path, "/") // If we're listing from some path inside the library (not the root) // there's already a path in parameter, which will also be included in the entry path entryPath = strings.TrimPrefix(entryPath, parentPathInLibrary) entryPath = strings.TrimPrefix(entryPath, "/") filePath = path.Join(parentPath, entryPath, entry.Name) filePathInLibrary = path.Join(parentPathInLibrary, entryPath, entry.Name) } else { // In non-recursive mode, paths are build from the parameters filePath = path.Join(parentPath, entry.Name) filePathInLibrary = path.Join(parentPathInLibrary, entry.Name) } if entry.Type == api.FileTypeDir { d := fs. NewDir(filePath, time.Unix(entry.Modified, 0)). SetSize(entry.Size). 
SetID(entry.ID) entries = append(entries, d) } else if entry.Type == api.FileTypeFile { object := &Object{ fs: f, id: entry.ID, remote: filePath, pathInLibrary: filePathInLibrary, size: entry.Size, modTime: time.Unix(entry.Modified, 0), libraryID: libraryID, } entries = append(entries, object) } } return entries } func (f *Fs) mkDir(ctx context.Context, dir string) error { library, fullPath := f.splitPath(dir) libraryID, err := f.getLibraryID(ctx, library) if err != nil { return err } return f.mkMultiDir(ctx, libraryID, fullPath) } func (f *Fs) mkMultiDir(ctx context.Context, libraryID, dir string) error { // rebuild the path one by one currentPath := "" for _, singleDir := range splitPath(dir) { currentPath = path.Join(currentPath, singleDir) err := f.mkSingleDir(ctx, libraryID, currentPath) if err != nil { return err } } return nil } func (f *Fs) mkSingleDir(ctx context.Context, libraryID, dir string) error { f.createDirMutex.Lock() defer f.createDirMutex.Unlock() dirDetails, err := f.getDirectoryDetails(ctx, libraryID, dir) if err == nil && dirDetails != nil { // Don't fail if the directory exists return nil } if err == fs.ErrorDirNotFound { err = f.createDir(ctx, libraryID, dir) if err != nil { return err } return nil } return err } func (f *Fs) getDirectoryEntries(ctx context.Context, libraryID, folder string, recursive bool) ([]api.DirEntry, error) { if f.useOldDirectoryAPI { return f.getDirectoryEntriesAPIv2(ctx, libraryID, folder) } return f.getDirectoryEntriesAPIv21(ctx, libraryID, folder, recursive) } // splitPath creates a slice of paths func splitPath(tree string) (paths []string) { tree, leaf := path.Split(path.Clean(tree)) for leaf != "" && leaf != "." { paths = append([]string{leaf}, paths...) tree, leaf = path.Split(path.Clean(tree)) } return } func (f *Fs) getCachedLibraries(ctx context.Context) ([]api.Library, error) { f.librariesMutex.Lock() defer f.librariesMutex.Unlock() libraries, err := f.libraries.Get(librariesCacheKey, func(key string) (value interface{}, ok bool, error error) { // Load the libraries if not present in the cache libraries, err := f.getLibraries(ctx) if err != nil { return nil, false, err } return libraries, true, nil }) if err != nil { return nil, err } // Type assertion return libraries.([]api.Library), nil } func (f *Fs) newObject(ctx context.Context, remote string, size int64, modTime time.Time) *Object { libraryName, remotePath := f.splitPath(remote) libraryID, _ := f.getLibraryID(ctx, libraryName) // If error it means the library does not exist (yet) object := &Object{ fs: f, remote: remote, libraryID: libraryID, pathInLibrary: remotePath, size: size, modTime: modTime, } return object } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.Abouter = &Fs{} _ fs.CleanUpper = &Fs{} _ fs.Copier = &Fs{} _ fs.Mover = &Fs{} _ fs.DirMover = &Fs{} _ fs.ListRer = &Fs{} _ fs.Purger = &Fs{} _ fs.PutStreamer = &Fs{} _ fs.PublicLinker = &Fs{} _ fs.UserInfoer = &Fs{} _ fs.Object = &Object{} _ fs.IDer = &Object{} ) rclone-1.53.3/backend/seafile/seafile_internal_test.go000066400000000000000000000060731375552240400227730ustar00rootroot00000000000000package seafile import ( "path" "testing" "github.com/stretchr/testify/assert" ) type pathData struct { configLibrary string // Library specified in the config configRoot string // Root directory specified in the config argumentPath string // Path given as an argument in the command line expectedLibrary string expectedPath string } // Test the method to split a library name and a path // from a mix of 
configuration data and path command line argument func TestSplitPath(t *testing.T) { testData := []pathData{ pathData{ configLibrary: "", configRoot: "", argumentPath: "", expectedLibrary: "", expectedPath: "", }, pathData{ configLibrary: "", configRoot: "", argumentPath: "Library", expectedLibrary: "Library", expectedPath: "", }, pathData{ configLibrary: "", configRoot: "", argumentPath: path.Join("Library", "path", "to", "file"), expectedLibrary: "Library", expectedPath: path.Join("path", "to", "file"), }, pathData{ configLibrary: "Library", configRoot: "", argumentPath: "", expectedLibrary: "Library", expectedPath: "", }, pathData{ configLibrary: "Library", configRoot: "", argumentPath: "path", expectedLibrary: "Library", expectedPath: "path", }, pathData{ configLibrary: "Library", configRoot: "", argumentPath: path.Join("path", "to", "file"), expectedLibrary: "Library", expectedPath: path.Join("path", "to", "file"), }, pathData{ configLibrary: "Library", configRoot: "root", argumentPath: "", expectedLibrary: "Library", expectedPath: "root", }, pathData{ configLibrary: "Library", configRoot: path.Join("root", "path"), argumentPath: "", expectedLibrary: "Library", expectedPath: path.Join("root", "path"), }, pathData{ configLibrary: "Library", configRoot: "root", argumentPath: "path", expectedLibrary: "Library", expectedPath: path.Join("root", "path"), }, pathData{ configLibrary: "Library", configRoot: "root", argumentPath: path.Join("path", "to", "file"), expectedLibrary: "Library", expectedPath: path.Join("root", "path", "to", "file"), }, pathData{ configLibrary: "Library", configRoot: path.Join("root", "path"), argumentPath: path.Join("subpath", "to", "file"), expectedLibrary: "Library", expectedPath: path.Join("root", "path", "subpath", "to", "file"), }, } for _, test := range testData { fs := &Fs{ libraryName: test.configLibrary, rootDirectory: test.configRoot, } libraryName, path := fs.splitPath(test.argumentPath) assert.Equal(t, test.expectedLibrary, libraryName) assert.Equal(t, test.expectedPath, path) } } func TestSplitPathIntoSlice(t *testing.T) { testData := map[string][]string{ "1": {"1"}, "/1": {"1"}, "/1/": {"1"}, "1/2/3": {"1", "2", "3"}, } for input, expected := range testData { output := splitPath(input) assert.Equal(t, expected, output) } } rclone-1.53.3/backend/seafile/seafile_test.go000066400000000000000000000005641375552240400210760ustar00rootroot00000000000000// Test Seafile filesystem interface package seafile_test import ( "testing" "github.com/rclone/rclone/backend/seafile" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestSeafile:", NilObject: (*seafile.Object)(nil), }) } rclone-1.53.3/backend/seafile/webapi.go000066400000000000000000001024331375552240400176740ustar00rootroot00000000000000package seafile import ( "bytes" "context" "fmt" "io" "io/ioutil" "net/http" "net/url" "path" "strings" "github.com/pkg/errors" "github.com/rclone/rclone/backend/seafile/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/rest" ) // Start of the API URLs const ( APIv20 = "api2/repos/" APIv21 = "api/v2.1/repos/" ) // Errors specific to seafile fs var ( ErrorInternalDuringUpload = errors.New("Internal server error during file upload") ) // ==================== Seafile API ==================== func (f *Fs) getAuthorizationToken(ctx context.Context) (string, error) { return 
getAuthorizationToken(ctx, f.srv, f.opt.User, f.opt.Password, "") } // getAuthorizationToken can be called outside of an fs (during configuration of the remote to get the authentication token) // it's doing a single call (no pacer involved) func getAuthorizationToken(ctx context.Context, srv *rest.Client, user, password, oneTimeCode string) (string, error) { // API Documentation // https://download.seafile.com/published/web-api/home.md#user-content-Quick%20Start opts := rest.Opts{ Method: "POST", Path: "api2/auth-token/", ExtraHeaders: map[string]string{"Authorization": ""}, // unset the Authorization for this request IgnoreStatus: true, // so we can load the error messages back into result } // 2FA if oneTimeCode != "" { opts.ExtraHeaders["X-SEAFILE-OTP"] = oneTimeCode } request := api.AuthenticationRequest{ Username: user, Password: password, } result := api.AuthenticationResult{} _, err := srv.CallJSON(ctx, &opts, &request, &result) if err != nil { // This is only going to be http errors here return "", errors.Wrap(err, "failed to authenticate") } if result.Errors != nil && len(result.Errors) > 0 { return "", errors.New(strings.Join(result.Errors, ", ")) } if result.Token == "" { // No error in "non_field_errors" field but still empty token return "", errors.New("failed to authenticate") } return result.Token, nil } func (f *Fs) getServerInfo(ctx context.Context) (account *api.ServerInfo, err error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/server-info.md#user-content-Get%20Server%20Information opts := rest.Opts{ Method: "GET", Path: "api2/server-info/", } result := api.ServerInfo{} var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } } return nil, errors.Wrap(err, "failed to get server info") } return &result, nil } func (f *Fs) getUserAccountInfo(ctx context.Context) (account *api.AccountInfo, err error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/account.md#user-content-Check%20Account%20Info opts := rest.Opts{ Method: "GET", Path: "api2/account/info/", } result := api.AccountInfo{} var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } } return nil, errors.Wrap(err, "failed to get account info") } return &result, nil } func (f *Fs) getLibraries(ctx context.Context) ([]api.Library, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/libraries.md#user-content-List%20Libraries opts := rest.Opts{ Method: "GET", Path: APIv20, } result := make([]api.Library, 1) var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } } return nil, errors.Wrap(err, "failed to get libraries") } return result, nil } func (f *Fs) createLibrary(ctx context.Context, libraryName, password string) (library *api.CreateLibrary, err error) { // API Documentation // 
https://download.seafile.com/published/web-api/v2.1/libraries.md#user-content-Create%20Library opts := rest.Opts{ Method: "POST", Path: APIv20, } request := api.CreateLibraryRequest{ Name: f.opt.Enc.FromStandardName(libraryName), Description: "Created by rclone", Password: password, } result := &api.CreateLibrary{} var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &request, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } } return nil, errors.Wrap(err, "failed to create library") } return result, nil } func (f *Fs) deleteLibrary(ctx context.Context, libraryID string) error { // API Documentation // https://download.seafile.com/published/web-api/v2.1/libraries.md#user-content-Delete%20Library opts := rest.Opts{ Method: "DELETE", Path: APIv20 + libraryID + "/", } result := "" var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return fs.ErrorPermissionDenied } } return errors.Wrap(err, "failed to delete library") } return nil } func (f *Fs) decryptLibrary(ctx context.Context, libraryID, password string) error { // API Documentation // https://download.seafile.com/published/web-api/v2.1/library-encryption.md#user-content-Decrypt%20Library if libraryID == "" { return errors.New("cannot list files without a library") } // This is another call that cannot accept a JSON input so we have to build it manually opts := rest.Opts{ Method: "POST", Path: APIv20 + libraryID + "/", ContentType: "application/x-www-form-urlencoded", Body: bytes.NewBuffer([]byte("password=" + f.opt.Enc.FromStandardName(password))), NoResponse: true, } var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 400 { return errors.New("incorrect password") } if resp.StatusCode == 409 { fs.Debugf(nil, "library is not encrypted") return nil } } return errors.Wrap(err, "failed to decrypt library") } return nil } func (f *Fs) getDirectoryEntriesAPIv21(ctx context.Context, libraryID, dirPath string, recursive bool) ([]api.DirEntry, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/directories.md#user-content-List%20Items%20in%20Directory // This is using the undocumented version 2.1 of the API (so we can use the recursive option which is not available in version 2) if libraryID == "" { return nil, errors.New("cannot list files without a library") } dirPath = path.Join("/", dirPath) recursiveFlag := "0" if recursive { recursiveFlag = "1" } opts := rest.Opts{ Method: "GET", Path: APIv21 + libraryID + "/dir/", Parameters: url.Values{ "recursive": {recursiveFlag}, "p": {f.opt.Enc.FromStandardPath(dirPath)}, }, } result := &api.DirEntries{} var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } if resp.StatusCode == 404 { return nil, fs.ErrorDirNotFound } if resp.StatusCode == 440 { // Encrypted library and password not provided return nil, 
fs.ErrorPermissionDenied } } return nil, errors.Wrap(err, "failed to get directory contents") } // Clean up encoded names for index, fileInfo := range result.Entries { fileInfo.Name = f.opt.Enc.ToStandardName(fileInfo.Name) fileInfo.Path = f.opt.Enc.ToStandardPath(fileInfo.Path) result.Entries[index] = fileInfo } return result.Entries, nil } func (f *Fs) getDirectoryDetails(ctx context.Context, libraryID, dirPath string) (*api.DirectoryDetail, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/directories.md#user-content-Get%20Directory%20Detail if libraryID == "" { return nil, errors.New("cannot read directory without a library") } dirPath = path.Join("/", dirPath) opts := rest.Opts{ Method: "GET", Path: APIv21 + libraryID + "/dir/detail/", Parameters: url.Values{"path": {f.opt.Enc.FromStandardPath(dirPath)}}, } result := &api.DirectoryDetail{} var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } if resp.StatusCode == 404 { return nil, fs.ErrorDirNotFound } } return nil, errors.Wrap(err, "failed to get directory details") } result.Name = f.opt.Enc.ToStandardName(result.Name) result.Path = f.opt.Enc.ToStandardPath(result.Path) return result, nil } // createDir creates a new directory. The API will add a number to the directory name if it already exist func (f *Fs) createDir(ctx context.Context, libraryID, dirPath string) error { // API Documentation // https://download.seafile.com/published/web-api/v2.1/directories.md#user-content-Create%20New%20Directory if libraryID == "" { return errors.New("cannot create directory without a library") } dirPath = path.Join("/", dirPath) // This call *cannot* handle json parameters in the body, so we have to build the request body manually opts := rest.Opts{ Method: "POST", Path: APIv20 + libraryID + "/dir/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(dirPath)}}, NoRedirect: true, ContentType: "application/x-www-form-urlencoded", Body: bytes.NewBuffer([]byte("operation=mkdir")), NoResponse: true, } var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return fs.ErrorPermissionDenied } } return errors.Wrap(err, "failed to create directory") } return nil } func (f *Fs) renameDir(ctx context.Context, libraryID, dirPath, newName string) error { // API Documentation // https://download.seafile.com/published/web-api/v2.1/directories.md#user-content-Rename%20Directory if libraryID == "" { return errors.New("cannot rename directory without a library") } dirPath = path.Join("/", dirPath) // This call *cannot* handle json parameters in the body, so we have to build the request body manually postParameters := url.Values{ "operation": {"rename"}, "newname": {f.opt.Enc.FromStandardPath(newName)}, } opts := rest.Opts{ Method: "POST", Path: APIv20 + libraryID + "/dir/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(dirPath)}}, ContentType: "application/x-www-form-urlencoded", Body: bytes.NewBuffer([]byte(postParameters.Encode())), NoResponse: true, } var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return f.shouldRetry(resp, err) }) 
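// Example (added for illustration, not part of the original source): several
// of the api2 endpoints above cannot parse a JSON body, which is why
// createDir and renameDir hand-build form-encoded requests. The shape, in
// isolation:
//
//	postParameters := url.Values{
//		"operation": {"rename"},
//		"newname":   {f.opt.Enc.FromStandardPath(newName)},
//	}
//	opts := rest.Opts{
//		Method:      "POST",
//		ContentType: "application/x-www-form-urlencoded",
//		Body:        bytes.NewBuffer([]byte(postParameters.Encode())),
//	}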
if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return fs.ErrorPermissionDenied } } return errors.Wrap(err, "failed to rename directory") } return nil } func (f *Fs) moveDir(ctx context.Context, srcLibraryID, srcDir, srcName, dstLibraryID, dstPath string) error { // API Documentation // https://download.seafile.com/published/web-api/v2.1/files-directories-batch-op.md#user-content-Batch%20Move%20Items%20Synchronously if srcLibraryID == "" || dstLibraryID == "" || srcName == "" { return errors.New("libraryID and/or file path argument(s) missing") } srcDir = path.Join("/", srcDir) dstPath = path.Join("/", dstPath) opts := rest.Opts{ Method: "POST", Path: APIv21 + "sync-batch-move-item/", NoResponse: true, } request := &api.BatchSourceDestRequest{ SrcLibraryID: srcLibraryID, SrcParentDir: f.opt.Enc.FromStandardPath(srcDir), SrcItems: []string{f.opt.Enc.FromStandardPath(srcName)}, DstLibraryID: dstLibraryID, DstParentDir: f.opt.Enc.FromStandardPath(dstPath), } var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &request, nil) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return fs.ErrorPermissionDenied } if resp.StatusCode == 404 { return fs.ErrorObjectNotFound } } return errors.Wrap(err, fmt.Sprintf("failed to move directory '%s' from '%s' to '%s'", srcName, srcDir, dstPath)) } return nil } func (f *Fs) deleteDir(ctx context.Context, libraryID, filePath string) error { // API Documentation // https://download.seafile.com/published/web-api/v2.1/directories.md#user-content-Delete%20Directory if libraryID == "" { return errors.New("cannot delete directory without a library") } filePath = path.Join("/", filePath) opts := rest.Opts{ Method: "DELETE", Path: APIv20 + libraryID + "/dir/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(filePath)}}, NoResponse: true, } var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, nil) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return fs.ErrorPermissionDenied } } return errors.Wrap(err, "failed to delete directory") } return nil } func (f *Fs) getFileDetails(ctx context.Context, libraryID, filePath string) (*api.FileDetail, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/file.md#user-content-Get%20File%20Detail if libraryID == "" { return nil, errors.New("cannot open file without a library") } filePath = path.Join("/", filePath) opts := rest.Opts{ Method: "GET", Path: APIv20 + libraryID + "/file/detail/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(filePath)}}, } result := &api.FileDetail{} var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 404 { return nil, fs.ErrorObjectNotFound } if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } } return nil, errors.Wrap(err, "failed to get file details") } result.Name = f.opt.Enc.ToStandardName(result.Name) result.Parent = f.opt.Enc.ToStandardPath(result.Parent) return result, nil } func (f *Fs) deleteFile(ctx context.Context, libraryID, filePath string) error { // API Documentation // 
https://download.seafile.com/published/web-api/v2.1/file.md#user-content-Delete%20File if libraryID == "" { return errors.New("cannot delete file without a library") } filePath = path.Join("/", filePath) opts := rest.Opts{ Method: "DELETE", Path: APIv20 + libraryID + "/file/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(filePath)}}, NoResponse: true, } err := f.pacer.Call(func() (bool, error) { resp, err := f.srv.CallJSON(ctx, &opts, nil, nil) return f.shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "failed to delete file") } return nil } func (f *Fs) getDownloadLink(ctx context.Context, libraryID, filePath string) (string, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/file.md#user-content-Download%20File if libraryID == "" { return "", errors.New("cannot download file without a library") } filePath = path.Join("/", filePath) opts := rest.Opts{ Method: "GET", Path: APIv20 + libraryID + "/file/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(filePath)}}, } result := "" var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 404 { return "", fs.ErrorObjectNotFound } } return "", errors.Wrap(err, "failed to get download link") } return result, nil } func (f *Fs) download(ctx context.Context, url string, size int64, options ...fs.OpenOption) (io.ReadCloser, error) { // Check if we need to download partial content var start, end int64 = 0, size partialContent := false for _, option := range options { switch x := option.(type) { case *fs.SeekOption: start = x.Offset partialContent = true case *fs.RangeOption: if x.Start >= 0 { start = x.Start if x.End > 0 && x.End < size { end = x.End + 1 } } else { // {-1, 20} should load the last 20 characters [len-20:len] start = size - x.End } partialContent = true default: if option.Mandatory() { fs.Logf(nil, "Unsupported mandatory option: %v", option) } } } // Build the http request opts := rest.Opts{ Method: "GET", RootURL: url, Options: options, } var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 404 { return nil, fmt.Errorf("file not found '%s'", url) } } return nil, err } // Non-encrypted libraries are accepting the HTTP Range header, // BUT encrypted libraries are simply ignoring it if partialContent && resp.StatusCode == 200 { // Partial content was requested through a Range header, but a full content was sent instead rangeDownloadNotice.Do(func() { fs.Logf(nil, "%s ignored our request of partial content. This is probably because encrypted libraries are not accepting range requests. Loading this file might be slow!", f.String()) }) if start > 0 { // We need to read and discard the beginning of the data... _, err = io.CopyN(ioutil.Discard, resp.Body, start) if err != nil { return nil, err } } // ... 
and return a limited reader for the remaining of the data return readers.NewLimitedReadCloser(resp.Body, end-start), nil } return resp.Body, nil } func (f *Fs) getUploadLink(ctx context.Context, libraryID string) (string, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/file-upload.md if libraryID == "" { return "", errors.New("cannot upload file without a library") } opts := rest.Opts{ Method: "GET", Path: APIv20 + libraryID + "/upload-link/", } result := "" var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return "", fs.ErrorPermissionDenied } } return "", errors.Wrap(err, "failed to get upload link") } return result, nil } func (f *Fs) upload(ctx context.Context, in io.Reader, uploadLink, filePath string) (*api.FileDetail, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/file-upload.md fileDir, filename := path.Split(filePath) parameters := url.Values{ "parent_dir": {"/"}, "relative_path": {f.opt.Enc.FromStandardPath(fileDir)}, "need_idx_progress": {"true"}, "replace": {"1"}, } formReader, contentType, _, err := rest.MultipartUpload(in, parameters, "file", f.opt.Enc.FromStandardName(filename)) if err != nil { return nil, errors.Wrap(err, "failed to make multipart upload") } opts := rest.Opts{ Method: "POST", RootURL: uploadLink, Body: formReader, ContentType: contentType, Parameters: url.Values{"ret-json": {"1"}}, // It needs to be on the url, not in the body parameters } result := make([]api.FileDetail, 1) var resp *http.Response // If an error occurs during the call, do not attempt to retry: The upload link is single use only err = f.pacer.CallNoRetry(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetryUpload(ctx, resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } if resp.StatusCode == 500 { // This is a temporary error - we will get a new upload link before retrying return nil, ErrorInternalDuringUpload } } return nil, errors.Wrap(err, "failed to upload file") } if len(result) > 0 { result[0].Parent = f.opt.Enc.ToStandardPath(result[0].Parent) result[0].Name = f.opt.Enc.ToStandardName(result[0].Name) return &result[0], nil } return nil, nil } func (f *Fs) listShareLinks(ctx context.Context, libraryID, remote string) ([]api.SharedLink, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/share-links.md#user-content-List%20Share%20Link%20of%20a%20Folder%20(File) if libraryID == "" { return nil, errors.New("cannot get share links without a library") } remote = path.Join("/", remote) opts := rest.Opts{ Method: "GET", Path: "api/v2.1/share-links/", Parameters: url.Values{"repo_id": {libraryID}, "path": {f.opt.Enc.FromStandardPath(remote)}}, } result := make([]api.SharedLink, 1) var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } if resp.StatusCode == 404 { return nil, fs.ErrorObjectNotFound } } return nil, errors.Wrap(err, "failed to list shared links") } return result, nil } // createShareLink will only work with 
non-encrypted libraries func (f *Fs) createShareLink(ctx context.Context, libraryID, remote string) (*api.SharedLink, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/share-links.md#user-content-Create%20Share%20Link if libraryID == "" { return nil, errors.New("cannot create a shared link without a library") } remote = path.Join("/", remote) opts := rest.Opts{ Method: "POST", Path: "api/v2.1/share-links/", } request := &api.ShareLinkRequest{ LibraryID: libraryID, Path: f.opt.Enc.FromStandardPath(remote), } result := &api.SharedLink{} var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &request, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } if resp.StatusCode == 404 { return nil, fs.ErrorObjectNotFound } } return nil, errors.Wrap(err, "failed to create a shared link") } return result, nil } func (f *Fs) copyFile(ctx context.Context, srcLibraryID, srcPath, dstLibraryID, dstPath string) (*api.FileInfo, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/file.md#user-content-Copy%20File // It's using the api/v2.1 which is not in the documentation (as of Apr 2020) but works better than api2 if srcLibraryID == "" || dstLibraryID == "" { return nil, errors.New("libraryID and/or file path argument(s) missing") } srcPath = path.Join("/", srcPath) dstPath = path.Join("/", dstPath) opts := rest.Opts{ Method: "POST", Path: APIv21 + srcLibraryID + "/file/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(srcPath)}}, } request := &api.FileOperationRequest{ Operation: api.CopyFileOperation, DestinationLibraryID: dstLibraryID, DestinationPath: f.opt.Enc.FromStandardPath(dstPath), } result := &api.FileInfo{} var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &request, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } if resp.StatusCode == 404 { fs.Debugf(nil, "Copy: %s", err) return nil, fs.ErrorObjectNotFound } } return nil, errors.Wrap(err, fmt.Sprintf("failed to copy file %s:'%s' to %s:'%s'", srcLibraryID, srcPath, dstLibraryID, dstPath)) } return f.decodeFileInfo(result), nil } func (f *Fs) moveFile(ctx context.Context, srcLibraryID, srcPath, dstLibraryID, dstPath string) (*api.FileInfo, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/file.md#user-content-Move%20File // It's using the api/v2.1 which is not in the documentation (as of Apr 2020) but works better than api2 if srcLibraryID == "" || dstLibraryID == "" { return nil, errors.New("libraryID and/or file path argument(s) missing") } srcPath = path.Join("/", srcPath) dstPath = path.Join("/", dstPath) opts := rest.Opts{ Method: "POST", Path: APIv21 + srcLibraryID + "/file/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(srcPath)}}, } request := &api.FileOperationRequest{ Operation: api.MoveFileOperation, DestinationLibraryID: dstLibraryID, DestinationPath: f.opt.Enc.FromStandardPath(dstPath), } result := &api.FileInfo{} var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &request, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || 
resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } if resp.StatusCode == 404 { fs.Debugf(nil, "Move: %s", err) return nil, fs.ErrorObjectNotFound } } return nil, errors.Wrap(err, fmt.Sprintf("failed to move file %s:'%s' to %s:'%s'", srcLibraryID, srcPath, dstLibraryID, dstPath)) } return f.decodeFileInfo(result), nil } func (f *Fs) renameFile(ctx context.Context, libraryID, filePath, newname string) (*api.FileInfo, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/file.md#user-content-Rename%20File // It's using the api/v2.1 which is not in the documentation (as of Apr 2020) but works better than api2 if libraryID == "" || newname == "" { return nil, errors.New("libraryID and/or file path argument(s) missing") } filePath = path.Join("/", filePath) opts := rest.Opts{ Method: "POST", Path: APIv21 + libraryID + "/file/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(filePath)}}, } request := &api.FileOperationRequest{ Operation: api.RenameFileOperation, NewName: f.opt.Enc.FromStandardName(newname), } result := &api.FileInfo{} var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &request, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } if resp.StatusCode == 404 { fs.Debugf(nil, "Rename: %s", err) return nil, fs.ErrorObjectNotFound } } return nil, errors.Wrap(err, fmt.Sprintf("failed to rename file '%s' to '%s'", filePath, newname)) } return f.decodeFileInfo(result), nil } func (f *Fs) decodeFileInfo(input *api.FileInfo) *api.FileInfo { input.Name = f.opt.Enc.ToStandardName(input.Name) input.Path = f.opt.Enc.ToStandardPath(input.Path) return input } func (f *Fs) emptyLibraryTrash(ctx context.Context, libraryID string) error { // API Documentation // https://download.seafile.com/published/web-api/v2.1/libraries.md#user-content-Clean%20Library%20Trash if libraryID == "" { return errors.New("cannot clean up trash without a library") } opts := rest.Opts{ Method: "DELETE", Path: APIv21 + libraryID + "/trash/", NoResponse: true, } var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, nil) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return fs.ErrorPermissionDenied } if resp.StatusCode == 404 { return fs.ErrorObjectNotFound } } return errors.Wrap(err, "failed to empty the library trash") } return nil } // === API v2 from the official documentation, which has been replaced by the much better v2.1 (undocumented as of Apr 2020) // === getDirectoryEntriesAPIv2 is needed to keep compatibility with seafile v6, // === the others can probably be removed after the API v2.1 is documented func (f *Fs) getDirectoryEntriesAPIv2(ctx context.Context, libraryID, dirPath string) ([]api.DirEntry, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/directories.md#user-content-List%20Items%20in%20Directory if libraryID == "" { return nil, errors.New("cannot list files without a library") } dirPath = path.Join("/", dirPath) opts := rest.Opts{ Method: "GET", Path: APIv20 + libraryID + "/dir/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(dirPath)}}, } result := make([]api.DirEntry, 1) var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = 
f.srv.CallJSON(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } if resp.StatusCode == 404 { return nil, fs.ErrorDirNotFound } if resp.StatusCode == 440 { // Encrypted library and password not provided return nil, fs.ErrorPermissionDenied } } return nil, errors.Wrap(err, "failed to get directory contents") } // Clean up encoded names for index, fileInfo := range result { fileInfo.Name = f.opt.Enc.ToStandardName(fileInfo.Name) fileInfo.Path = f.opt.Enc.ToStandardPath(fileInfo.Path) result[index] = fileInfo } return result, nil } func (f *Fs) copyFileAPIv2(ctx context.Context, srcLibraryID, srcPath, dstLibraryID, dstPath string) (*api.FileInfo, error) { // API Documentation // https://download.seafile.com/published/web-api/v2.1/file.md#user-content-Copy%20File if srcLibraryID == "" || dstLibraryID == "" { return nil, errors.New("libraryID and/or file path argument(s) missing") } srcPath = path.Join("/", srcPath) dstPath = path.Join("/", dstPath) // Older API does not seem to accept JSON input here either postParameters := url.Values{ "operation": {"copy"}, "dst_repo": {dstLibraryID}, "dst_dir": {f.opt.Enc.FromStandardPath(dstPath)}, } opts := rest.Opts{ Method: "POST", Path: APIv20 + srcLibraryID + "/file/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(srcPath)}}, ContentType: "application/x-www-form-urlencoded", Body: bytes.NewBuffer([]byte(postParameters.Encode())), } result := &api.FileInfo{} var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 401 || resp.StatusCode == 403 { return nil, fs.ErrorPermissionDenied } } return nil, errors.Wrap(err, fmt.Sprintf("failed to copy file %s:'%s' to %s:'%s'", srcLibraryID, srcPath, dstLibraryID, dstPath)) } err = rest.DecodeJSON(resp, &result) if err != nil { return nil, err } return f.decodeFileInfo(result), nil } func (f *Fs) renameFileAPIv2(ctx context.Context, libraryID, filePath, newname string) error { // API Documentation // https://download.seafile.com/published/web-api/v2.1/file.md#user-content-Rename%20File if libraryID == "" || newname == "" { return errors.New("libraryID and/or file path argument(s) missing") } filePath = path.Join("/", filePath) // No luck with JSON input with the older api2 postParameters := url.Values{ "operation": {"rename"}, "reloaddir": {"true"}, // This is an undocumented trick to avoid an http code 301 response (found in https://github.com/haiwen/seahub/blob/master/seahub/api2/views.py) "newname": {f.opt.Enc.FromStandardName(newname)}, } opts := rest.Opts{ Method: "POST", Path: APIv20 + libraryID + "/file/", Parameters: url.Values{"p": {f.opt.Enc.FromStandardPath(filePath)}}, ContentType: "application/x-www-form-urlencoded", Body: bytes.NewBuffer([]byte(postParameters.Encode())), NoRedirect: true, NoResponse: true, } var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return f.shouldRetry(resp, err) }) if err != nil { if resp != nil { if resp.StatusCode == 301 { // This is the normal response from the server return nil } if resp.StatusCode == 401 || resp.StatusCode == 403 { return fs.ErrorPermissionDenied } if resp.StatusCode == 404 { return fs.ErrorObjectNotFound } } return errors.Wrap(err, "failed to rename file") } return nil } 
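// Note (illustrative, not from the Seafile API docs): renameFileAPIv2 above
// posts a form-encoded body rather than JSON. url.Values.Encode sorts keys
// alphabetically, so a rename to "new.txt" (a hypothetical name) would send:
//
//	newname=new.txt&operation=rename&reloaddir=true
//
// with the source file path passed separately in the "p" query parameter.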
rclone-1.53.3/backend/sftp/000077500000000000000000000000001375552240400154375ustar00rootroot00000000000000rclone-1.53.3/backend/sftp/sftp.go000066400000000000000000001102171375552240400167440ustar00rootroot00000000000000// Package sftp provides a filesystem interface using github.com/pkg/sftp // +build !plan9 package sftp import ( "bytes" "context" "fmt" "io" "io/ioutil" "os" "os/user" "path" "regexp" "strconv" "strings" "sync" "time" "github.com/pkg/errors" "github.com/pkg/sftp" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/env" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" sshagent "github.com/xanzy/ssh-agent" "golang.org/x/crypto/ssh" ) const ( hashCommandNotSupported = "none" minSleep = 100 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential ) var ( currentUser = readCurrentUser() ) func init() { fsi := &fs.RegInfo{ Name: "sftp", Description: "SSH/SFTP Connection", NewFs: NewFs, Options: []fs.Option{{ Name: "host", Help: "SSH host to connect to", Required: true, Examples: []fs.OptionExample{{ Value: "example.com", Help: "Connect to example.com", }}, }, { Name: "user", Help: "SSH username, leave blank for current username, " + currentUser, }, { Name: "port", Help: "SSH port, leave blank to use default (22)", }, { Name: "pass", Help: "SSH password, leave blank to use ssh-agent.", IsPassword: true, }, { Name: "key_pem", Help: "Raw PEM-encoded private key, If specified, will override key_file parameter.", }, { Name: "key_file", Help: "Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent." + env.ShellExpandHelp, }, { Name: "key_file_pass", Help: `The passphrase to decrypt the PEM-encoded private key file. Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can't be used.`, IsPassword: true, }, { Name: "key_use_agent", Help: `When set forces the usage of the ssh-agent. When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is requested from the ssh-agent. This allows to avoid ` + "`Too many authentication failures for *username*`" + ` errors when the ssh-agent contains many keys.`, Default: false, }, { Name: "use_insecure_cipher", Help: `Enable the use of insecure ciphers and key exchange methods. This enables the use of the following insecure ciphers and key exchange methods: - aes128-cbc - aes192-cbc - aes256-cbc - 3des-cbc - diffie-hellman-group-exchange-sha256 - diffie-hellman-group-exchange-sha1 Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.`, Default: false, Examples: []fs.OptionExample{ { Value: "false", Help: "Use default Cipher list.", }, { Value: "true", Help: "Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange.", }, }, }, { Name: "disable_hashcheck", Default: false, Help: "Disable the execution of SSH commands to determine if remote file hashing is available.\nLeave blank or set to false to enable hashing (recommended), set to true to disable hashing.", }, { Name: "ask_password", Default: false, Help: `Allow asking for SFTP password when needed. 
If this is set and no password is supplied then rclone will: - ask for a password - not contact the ssh agent `, Advanced: true, }, { Name: "path_override", Default: "", Help: `Override path used by SSH connection. This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes. Shared folders can be found in directories representing volumes rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory Home directory can be found in a shared folder called "home" rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory`, Advanced: true, }, { Name: "set_modtime", Default: true, Help: "Set the modified time on the remote if set.", Advanced: true, }, { Name: "md5sum_command", Default: "", Help: "The command used to read md5 hashes. Leave blank for autodetect.", Advanced: true, }, { Name: "sha1sum_command", Default: "", Help: "The command used to read sha1 hashes. Leave blank for autodetect.", Advanced: true, }, { Name: "skip_links", Default: false, Help: "Set to skip any symlinks and any other non regular files.", Advanced: true, }, { Name: "subsystem", Default: "sftp", Help: "Specifies the SSH2 subsystem on the remote host.", Advanced: true, }, { Name: "server_command", Default: "", Help: `Specifies the path or command to run a sftp server on the remote host. The subsystem option is ignored when server_command is defined.`, Advanced: true, }}, } fs.Register(fsi) } // Options defines the configuration for this backend type Options struct { Host string `config:"host"` User string `config:"user"` Port string `config:"port"` Pass string `config:"pass"` KeyPem string `config:"key_pem"` KeyFile string `config:"key_file"` KeyFilePass string `config:"key_file_pass"` KeyUseAgent bool `config:"key_use_agent"` UseInsecureCipher bool `config:"use_insecure_cipher"` DisableHashCheck bool `config:"disable_hashcheck"` AskPassword bool `config:"ask_password"` PathOverride string `config:"path_override"` SetModTime bool `config:"set_modtime"` Md5sumCommand string `config:"md5sum_command"` Sha1sumCommand string `config:"sha1sum_command"` SkipLinks bool `config:"skip_links"` Subsystem string `config:"subsystem"` ServerCommand string `config:"server_command"` } // Fs stores the interface to the remote SFTP files type Fs struct { name string root string absRoot string opt Options // parsed options m configmap.Mapper // config features *fs.Features // optional features config *ssh.ClientConfig url string mkdirLock *stringLock cachedHashes *hash.Set poolMu sync.Mutex pool []*conn pacer *fs.Pacer // pacer for operations } // Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading) type Object struct { fs *Fs remote string size int64 // size of the object modTime time.Time // modification time of the object mode os.FileMode // mode bits from the file md5sum *string // Cached MD5 checksum sha1sum *string // Cached SHA1 checksum } // readCurrentUser finds the current user name or "" if not found func readCurrentUser() (userName string) { usr, err := user.Current() if err == nil { return usr.Username } // Fall back to reading $USER then $LOGNAME userName = os.Getenv("USER") if userName != "" { return userName } return os.Getenv("LOGNAME") } // dial starts a client connection to the given SSH server. It is a // convenience function that connects to the given network address, // initiates the SSH handshake, and then sets up a Client. 
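// A minimal illustrative call (the host and port below are placeholders,
// not values read from the config) would be:
//
//	sshClient, err := f.dial("tcp", "example.com:22", f.config)
//
// The TCP connection itself is made through fshttp.NewDialer, so rclone's
// global connection timeout settings apply to the dial.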
func (f *Fs) dial(network, addr string, sshConfig *ssh.ClientConfig) (*ssh.Client, error) { dialer := fshttp.NewDialer(fs.Config) conn, err := dialer.Dial(network, addr) if err != nil { return nil, err } c, chans, reqs, err := ssh.NewClientConn(conn, addr, sshConfig) if err != nil { return nil, err } fs.Debugf(f, "New connection %s->%s to %q", c.LocalAddr(), c.RemoteAddr(), c.ServerVersion()) return ssh.NewClient(c, chans, reqs), nil } // conn encapsulates an ssh client and corresponding sftp client type conn struct { sshClient *ssh.Client sftpClient *sftp.Client err chan error } // Wait for connection to close func (c *conn) wait() { c.err <- c.sshClient.Conn.Wait() } // Closes the connection func (c *conn) close() error { sftpErr := c.sftpClient.Close() sshErr := c.sshClient.Close() if sftpErr != nil { return sftpErr } return sshErr } // Returns an error if closed func (c *conn) closed() error { select { case err := <-c.err: return err default: } return nil } // Open a new connection to the SFTP server. func (f *Fs) sftpConnection() (c *conn, err error) { // Rate limit rate of new connections c = &conn{ err: make(chan error, 1), } c.sshClient, err = f.dial("tcp", f.opt.Host+":"+f.opt.Port, f.config) if err != nil { return nil, errors.Wrap(err, "couldn't connect SSH") } c.sftpClient, err = f.newSftpClient(c.sshClient) if err != nil { _ = c.sshClient.Close() return nil, errors.Wrap(err, "couldn't initialise SFTP") } go c.wait() return c, nil } // Creates a new SFTP client on conn, using the specified subsystem // or sftp server, and zero or more option functions func (f *Fs) newSftpClient(conn *ssh.Client, opts ...sftp.ClientOption) (*sftp.Client, error) { s, err := conn.NewSession() if err != nil { return nil, err } pw, err := s.StdinPipe() if err != nil { return nil, err } pr, err := s.StdoutPipe() if err != nil { return nil, err } if f.opt.ServerCommand != "" { if err := s.Start(f.opt.ServerCommand); err != nil { return nil, err } } else { if err := s.RequestSubsystem(f.opt.Subsystem); err != nil { return nil, err } } return sftp.NewClientPipe(pr, pw, opts...) } // Get an SFTP connection from the pool, or open a new one func (f *Fs) getSftpConnection() (c *conn, err error) { f.poolMu.Lock() for len(f.pool) > 0 { c = f.pool[0] f.pool = f.pool[1:] err := c.closed() if err == nil { break } fs.Errorf(f, "Discarding closed SSH connection: %v", err) c = nil } f.poolMu.Unlock() if c != nil { return c, nil } err = f.pacer.Call(func() (bool, error) { c, err = f.sftpConnection() if err != nil { return true, err } return false, nil }) return c, err } // Return an SFTP connection to the pool // // It nils the pointed to connection out so it can't be reused // // if err is not nil then it checks the connection is alive using a // Getwd request func (f *Fs) putSftpConnection(pc **conn, err error) { c := *pc *pc = nil if err != nil { // work out if this is an expected error underlyingErr := errors.Cause(err) isRegularError := false switch underlyingErr { case os.ErrNotExist: isRegularError = true default: switch underlyingErr.(type) { case *sftp.StatusError, *os.PathError: isRegularError = true } } // If not a regular SFTP error code then check the connection if !isRegularError { _, nopErr := c.sftpClient.Getwd() if nopErr != nil { fs.Debugf(f, "Connection failed, closing: %v", nopErr) _ = c.close() return } fs.Debugf(f, "Connection OK after error: %v", err) } } f.poolMu.Lock() f.pool = append(f.pool, c) f.poolMu.Unlock() } // NewFs creates a new Fs object from the name and root. 
It connects to // the host specified in the config file. func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } if opt.User == "" { opt.User = currentUser } if opt.Port == "" { opt.Port = "22" } sshConfig := &ssh.ClientConfig{ User: opt.User, Auth: []ssh.AuthMethod{}, HostKeyCallback: ssh.InsecureIgnoreHostKey(), Timeout: fs.Config.ConnectTimeout, ClientVersion: "SSH-2.0-" + fs.Config.UserAgent, } if opt.UseInsecureCipher { sshConfig.Config.SetDefaults() sshConfig.Config.Ciphers = append(sshConfig.Config.Ciphers, "aes128-cbc", "aes192-cbc", "aes256-cbc", "3des-cbc") sshConfig.Config.KeyExchanges = append(sshConfig.Config.KeyExchanges, "diffie-hellman-group-exchange-sha1", "diffie-hellman-group-exchange-sha256") } keyFile := env.ShellExpand(opt.KeyFile) //keyPem := env.ShellExpand(opt.KeyPem) // Add ssh agent-auth if no password or file or key PEM specified if (opt.Pass == "" && keyFile == "" && !opt.AskPassword && opt.KeyPem == "") || opt.KeyUseAgent { sshAgentClient, _, err := sshagent.New() if err != nil { return nil, errors.Wrap(err, "couldn't connect to ssh-agent") } signers, err := sshAgentClient.Signers() if err != nil { return nil, errors.Wrap(err, "couldn't read ssh agent signers") } if keyFile != "" { pubBytes, err := ioutil.ReadFile(keyFile + ".pub") if err != nil { return nil, errors.Wrap(err, "failed to read public key file") } pub, _, _, _, err := ssh.ParseAuthorizedKey(pubBytes) if err != nil { return nil, errors.Wrap(err, "failed to parse public key file") } pubM := pub.Marshal() found := false for _, s := range signers { if bytes.Equal(pubM, s.PublicKey().Marshal()) { sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(s)) found = true break } } if !found { return nil, errors.New("private key not found in the ssh-agent") } } else { sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(signers...)) } } // Load key file if specified if keyFile != "" || opt.KeyPem != "" { var key []byte if opt.KeyPem == "" { key, err = ioutil.ReadFile(keyFile) if err != nil { return nil, errors.Wrap(err, "failed to read private key file") } } else { // wrap in quotes because the config is a coming as a literal without them. opt.KeyPem, err = strconv.Unquote("\"" + opt.KeyPem + "\"") if err != nil { return nil, errors.Wrap(err, "pem key not formatted properly") } key = []byte(opt.KeyPem) } clearpass := "" if opt.KeyFilePass != "" { clearpass, err = obscure.Reveal(opt.KeyFilePass) if err != nil { return nil, err } } var signer ssh.Signer if clearpass == "" { signer, err = ssh.ParsePrivateKey(key) } else { signer, err = ssh.ParsePrivateKeyWithPassphrase(key, []byte(clearpass)) } if err != nil { return nil, errors.Wrap(err, "failed to parse private key file") } sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(signer)) } // Auth from password if specified if opt.Pass != "" { clearpass, err := obscure.Reveal(opt.Pass) if err != nil { return nil, err } sshConfig.Auth = append(sshConfig.Auth, ssh.Password(clearpass)) } // Ask for password if none was defined and we're allowed to if opt.Pass == "" && opt.AskPassword { _, _ = fmt.Fprint(os.Stderr, "Enter SFTP password: ") clearpass := config.ReadPassword() sshConfig.Auth = append(sshConfig.Auth, ssh.Password(clearpass)) } return NewFsWithConnection(ctx, name, root, m, opt, sshConfig) } // NewFsWithConnection creates a new Fs object from the name and root and an ssh.ClientConfig. 
It connects to // the host specified in the ssh.ClientConfig func NewFsWithConnection(ctx context.Context, name string, root string, m configmap.Mapper, opt *Options, sshConfig *ssh.ClientConfig) (fs.Fs, error) { f := &Fs{ name: name, root: root, absRoot: root, opt: *opt, m: m, config: sshConfig, url: "sftp://" + opt.User + "@" + opt.Host + ":" + opt.Port + "/" + root, mkdirLock: newStringLock(), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), } f.features = (&fs.Features{ CanHaveEmptyDirectories: true, SlowHash: true, }).Fill(f) // Make a connection and pool it to return errors early c, err := f.getSftpConnection() if err != nil { return nil, errors.Wrap(err, "NewFs") } cwd, err := c.sftpClient.Getwd() f.putSftpConnection(&c, nil) if err != nil { fs.Debugf(f, "Failed to read current directory - using relative paths: %v", err) } else if !path.IsAbs(f.root) { f.absRoot = path.Join(cwd, f.root) fs.Debugf(f, "Using absolute root directory %q", f.absRoot) } if root != "" { // Check to see if the root actually an existing file oldAbsRoot := f.absRoot remote := path.Base(root) f.root = path.Dir(root) f.absRoot = path.Dir(f.absRoot) if f.root == "." { f.root = "" } _, err := f.NewObject(ctx, remote) if err != nil { if err == fs.ErrorObjectNotFound || errors.Cause(err) == fs.ErrorNotAFile { // File doesn't exist so return old f f.root = root f.absRoot = oldAbsRoot return f, nil } return nil, err } // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // Name returns the configured name of the file system func (f *Fs) Name() string { return f.name } // Root returns the root for the filesystem func (f *Fs) Root() string { return f.root } // String returns the URL for the filesystem func (f *Fs) String() string { return f.url } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // Precision is the remote sftp file system's modtime precision, which we have no way of knowing. We estimate at 1s func (f *Fs) Precision() time.Duration { return time.Second } // NewObject creates a new remote sftp file object func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } err := o.stat() if err != nil { return nil, err } return o, nil } // dirExists returns true,nil if the directory exists, false, nil if // it doesn't or false, err func (f *Fs) dirExists(dir string) (bool, error) { if dir == "" { dir = "." } c, err := f.getSftpConnection() if err != nil { return false, errors.Wrap(err, "dirExists") } info, err := c.sftpClient.Stat(dir) f.putSftpConnection(&c, err) if err != nil { if os.IsNotExist(err) { return false, nil } return false, errors.Wrap(err, "dirExists stat failed") } if !info.IsDir() { return false, fs.ErrorIsFile } return true, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { root := path.Join(f.absRoot, dir) ok, err := f.dirExists(root) if err != nil { return nil, errors.Wrap(err, "List failed") } if !ok { return nil, fs.ErrorDirNotFound } sftpDir := root if sftpDir == "" { sftpDir = "." 
} c, err := f.getSftpConnection() if err != nil { return nil, errors.Wrap(err, "List") } infos, err := c.sftpClient.ReadDir(sftpDir) f.putSftpConnection(&c, err) if err != nil { return nil, errors.Wrapf(err, "error listing %q", dir) } for _, info := range infos { remote := path.Join(dir, info.Name()) // If file is a symlink (not a regular file is the best cross platform test we can do), do a stat to // pick up the size and type of the destination, instead of the size and type of the symlink. if !info.Mode().IsRegular() && !info.IsDir() { if f.opt.SkipLinks { // skip non regular file if SkipLinks is set continue } oldInfo := info info, err = f.stat(remote) if err != nil { if !os.IsNotExist(err) { fs.Errorf(remote, "stat of non-regular file failed: %v", err) } info = oldInfo } } if info.IsDir() { d := fs.NewDir(remote, info.ModTime()) entries = append(entries, d) } else { o := &Object{ fs: f, remote: remote, } o.setMetadata(info) entries = append(entries, o) } } return entries, nil } // Put data from into a new remote sftp file object described by and func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { err := f.mkParentDir(src.Remote()) if err != nil { return nil, errors.Wrap(err, "Put mkParentDir failed") } // Temporary object under construction o := &Object{ fs: f, remote: src.Remote(), } err = o.Update(ctx, in, src, options...) if err != nil { return nil, err } return o, nil } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) } // mkParentDir makes the parent of remote if necessary and any // directories above that func (f *Fs) mkParentDir(remote string) error { parent := path.Dir(remote) return f.mkdir(path.Join(f.absRoot, parent)) } // mkdir makes the directory and parents using native paths func (f *Fs) mkdir(dirPath string) error { f.mkdirLock.Lock(dirPath) defer f.mkdirLock.Unlock(dirPath) if dirPath == "." 
|| dirPath == "/" { return nil } ok, err := f.dirExists(dirPath) if err != nil { return errors.Wrap(err, "mkdir dirExists failed") } if ok { return nil } parent := path.Dir(dirPath) err = f.mkdir(parent) if err != nil { return err } c, err := f.getSftpConnection() if err != nil { return errors.Wrap(err, "mkdir") } err = c.sftpClient.Mkdir(dirPath) f.putSftpConnection(&c, err) if err != nil { return errors.Wrapf(err, "mkdir %q failed", dirPath) } return nil } // Mkdir makes the root directory of the Fs object func (f *Fs) Mkdir(ctx context.Context, dir string) error { root := path.Join(f.absRoot, dir) return f.mkdir(root) } // Rmdir removes the root directory of the Fs object func (f *Fs) Rmdir(ctx context.Context, dir string) error { // Check to see if directory is empty as some servers will // delete recursively with RemoveDirectory entries, err := f.List(ctx, dir) if err != nil { return errors.Wrap(err, "Rmdir") } if len(entries) != 0 { return fs.ErrorDirectoryNotEmpty } // Remove the directory root := path.Join(f.absRoot, dir) c, err := f.getSftpConnection() if err != nil { return errors.Wrap(err, "Rmdir") } err = c.sftpClient.RemoveDirectory(root) f.putSftpConnection(&c, err) return err } // Move renames a remote sftp file object func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } err := f.mkParentDir(remote) if err != nil { return nil, errors.Wrap(err, "Move mkParentDir failed") } c, err := f.getSftpConnection() if err != nil { return nil, errors.Wrap(err, "Move") } err = c.sftpClient.Rename( srcObj.path(), path.Join(f.absRoot, remote), ) f.putSftpConnection(&c, err) if err != nil { return nil, errors.Wrap(err, "Move Rename failed") } dstObj, err := f.NewObject(ctx, remote) if err != nil { return nil, errors.Wrap(err, "Move NewObject failed") } return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. 
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcPath := path.Join(srcFs.absRoot, srcRemote) dstPath := path.Join(f.absRoot, dstRemote) // Check if destination exists ok, err := f.dirExists(dstPath) if err != nil { return errors.Wrap(err, "DirMove dirExists dst failed") } if ok { return fs.ErrorDirExists } // Make sure the parent directory exists err = f.mkdir(path.Dir(dstPath)) if err != nil { return errors.Wrap(err, "DirMove mkParentDir dst failed") } // Do the move c, err := f.getSftpConnection() if err != nil { return errors.Wrap(err, "DirMove") } err = c.sftpClient.Rename( srcPath, dstPath, ) f.putSftpConnection(&c, err) if err != nil { return errors.Wrapf(err, "DirMove Rename(%q,%q) failed", srcPath, dstPath) } return nil } // run runs cmd on the remote end returning standard output func (f *Fs) run(cmd string) ([]byte, error) { c, err := f.getSftpConnection() if err != nil { return nil, errors.Wrap(err, "run: get SFTP connection") } defer f.putSftpConnection(&c, err) session, err := c.sshClient.NewSession() if err != nil { return nil, errors.Wrap(err, "run: get SFTP session") } defer func() { _ = session.Close() }() var stdout, stderr bytes.Buffer session.Stdout = &stdout session.Stderr = &stderr err = session.Run(cmd) if err != nil { return nil, errors.Wrapf(err, "failed to run %q: %s", cmd, stderr.Bytes()) } return stdout.Bytes(), nil } // Hashes returns the supported hash types of the filesystem func (f *Fs) Hashes() hash.Set { if f.opt.DisableHashCheck { return hash.Set(hash.None) } if f.cachedHashes != nil { return *f.cachedHashes } // look for a hash command which works checkHash := func(commands []string, expected string, hashCommand *string, changed *bool) bool { if *hashCommand == hashCommandNotSupported { return false } if *hashCommand != "" { return true } *changed = true for _, command := range commands { output, err := f.run(command) if err != nil { continue } output = bytes.TrimSpace(output) fs.Debugf(f, "checking %q command: %q", command, output) if parseHash(output) == expected { *hashCommand = command return true } } *hashCommand = hashCommandNotSupported return false } changed := false md5Works := checkHash([]string{"md5sum", "md5 -r"}, "d41d8cd98f00b204e9800998ecf8427e", &f.opt.Md5sumCommand, &changed) sha1Works := checkHash([]string{"sha1sum", "sha1 -r"}, "da39a3ee5e6b4b0d3255bfef95601890afd80709", &f.opt.Sha1sumCommand, &changed) if changed { f.m.Set("md5sum_command", f.opt.Md5sumCommand) f.m.Set("sha1sum_command", f.opt.Sha1sumCommand) } set := hash.NewHashSet() if sha1Works { set.Add(hash.SHA1) } if md5Works { set.Add(hash.MD5) } f.cachedHashes = &set return set } // About gets usage stats func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { escapedPath := shellEscape(f.root) if f.opt.PathOverride != "" { escapedPath = shellEscape(path.Join(f.opt.PathOverride, f.root)) } if len(escapedPath) == 0 { escapedPath = "/" } stdout, err := f.run("df -k " + escapedPath) if err != nil { return nil, errors.Wrap(err, "your remote may not support About") } usageTotal, usageUsed, usageAvail := parseUsage(stdout) usage := &fs.Usage{} if usageTotal >= 0 { usage.Total = fs.NewUsageValue(usageTotal) } if usageUsed >= 0 { usage.Used
= fs.NewUsageValue(usageUsed) } if usageAvail >= 0 { usage.Free = fs.NewUsageValue(usageAvail) } return usage, nil } // Fs is the filesystem this remote sftp file object is located within func (o *Object) Fs() fs.Info { return o.fs } // String returns the URL to the remote SFTP file func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote the name of the remote SFTP file, relative to the fs root func (o *Object) Remote() string { return o.remote } // Hash returns the selected checksum of the file // If no checksum is available it returns "" func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) { if o.fs.opt.DisableHashCheck { return "", nil } _ = o.fs.Hashes() var hashCmd string if r == hash.MD5 { if o.md5sum != nil { return *o.md5sum, nil } hashCmd = o.fs.opt.Md5sumCommand } else if r == hash.SHA1 { if o.sha1sum != nil { return *o.sha1sum, nil } hashCmd = o.fs.opt.Sha1sumCommand } else { return "", hash.ErrUnsupported } if hashCmd == "" || hashCmd == hashCommandNotSupported { return "", hash.ErrUnsupported } c, err := o.fs.getSftpConnection() if err != nil { return "", errors.Wrap(err, "Hash get SFTP connection") } session, err := c.sshClient.NewSession() o.fs.putSftpConnection(&c, err) if err != nil { return "", errors.Wrap(err, "Hash put SFTP connection") } var stdout, stderr bytes.Buffer session.Stdout = &stdout session.Stderr = &stderr escapedPath := shellEscape(o.path()) if o.fs.opt.PathOverride != "" { escapedPath = shellEscape(path.Join(o.fs.opt.PathOverride, o.remote)) } err = session.Run(hashCmd + " " + escapedPath) fs.Debugf(nil, "sftp cmd = %s", escapedPath) if err != nil { _ = session.Close() fs.Debugf(o, "Failed to calculate %v hash: %v (%s)", r, err, bytes.TrimSpace(stderr.Bytes())) return "", nil } _ = session.Close() b := stdout.Bytes() fs.Debugf(nil, "sftp output = %q", b) str := parseHash(b) fs.Debugf(nil, "sftp hash = %q", str) if r == hash.MD5 { o.md5sum = &str } else if r == hash.SHA1 { o.sha1sum = &str } return str, nil } var shellEscapeRegex = regexp.MustCompile("[^A-Za-z0-9_.,:/\\@\u0080-\uFFFFFFFF\n-]") // Escape a string s.t. it cannot cause unintended behavior // when sending it to a shell. func shellEscape(str string) string { safe := shellEscapeRegex.ReplaceAllString(str, `\$0`) return strings.Replace(safe, "\n", "'\n'", -1) } // Converts a byte array from the SSH session returned by // an invocation of md5sum/sha1sum to a hash string // as expected by the rest of this application func parseHash(bytes []byte) string { // For strings with backslash *sum writes a leading \ // https://unix.stackexchange.com/q/313733/94054 return strings.ToLower(strings.Split(strings.TrimLeft(string(bytes), "\\"), " ")[0]) // Split at hash / filename separator / all convert to lowercase } // Parses the byte array output from the SSH session // returned by an invocation of df into // the disk size, used space, and available space on the disk, in that order. 
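// For example (sample taken from the tests below), given the output:
//
//	Filesystem 1K-blocks     Used Available Use% Mounted on
//	/dev/root   91283092 81111888  10154820  89% /
//
// it returns 93473886208, 83058573312 and 10398535680, i.e. the 1K block
// counts converted to bytes.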
// Only works when `df` has output info on only one disk func parseUsage(bytes []byte) (spaceTotal int64, spaceUsed int64, spaceAvail int64) { spaceTotal, spaceUsed, spaceAvail = -1, -1, -1 lines := strings.Split(string(bytes), "\n") if len(lines) < 2 { return } split := strings.Fields(lines[1]) if len(split) < 6 { return } spaceTotal, err := strconv.ParseInt(split[1], 10, 64) if err != nil { spaceTotal = -1 } spaceUsed, err = strconv.ParseInt(split[2], 10, 64) if err != nil { spaceUsed = -1 } spaceAvail, err = strconv.ParseInt(split[3], 10, 64) if err != nil { spaceAvail = -1 } return spaceTotal * 1024, spaceUsed * 1024, spaceAvail * 1024 } // Size returns the size in bytes of the remote sftp file func (o *Object) Size() int64 { return o.size } // ModTime returns the modification time of the remote sftp file func (o *Object) ModTime(ctx context.Context) time.Time { return o.modTime } // path returns the native path of the object func (o *Object) path() string { return path.Join(o.fs.absRoot, o.remote) } // setMetadata updates the info in the object from the stat result passed in func (o *Object) setMetadata(info os.FileInfo) { o.modTime = info.ModTime() o.size = info.Size() o.mode = info.Mode() } // statRemote stats the file or directory at the remote given func (f *Fs) stat(remote string) (info os.FileInfo, err error) { c, err := f.getSftpConnection() if err != nil { return nil, errors.Wrap(err, "stat") } absPath := path.Join(f.absRoot, remote) info, err = c.sftpClient.Stat(absPath) f.putSftpConnection(&c, err) return info, err } // stat updates the info in the Object func (o *Object) stat() error { info, err := o.fs.stat(o.remote) if err != nil { if os.IsNotExist(err) { return fs.ErrorObjectNotFound } return errors.Wrap(err, "stat failed") } if info.IsDir() { return errors.Wrapf(fs.ErrorNotAFile, "%q", o.remote) } o.setMetadata(info) return nil } // SetModTime sets the modification and access time to the specified time // // it also updates the info field func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { if o.fs.opt.SetModTime { c, err := o.fs.getSftpConnection() if err != nil { return errors.Wrap(err, "SetModTime") } err = c.sftpClient.Chtimes(o.path(), modTime, modTime) o.fs.putSftpConnection(&c, err) if err != nil { return errors.Wrap(err, "SetModTime failed") } } err := o.stat() if err != nil { return errors.Wrap(err, "SetModTime stat failed") } return nil } // Storable returns whether the remote sftp file is a regular file (not a directory, symbolic link, block device, character device, named pipe, etc) func (o *Object) Storable() bool { return o.mode.IsRegular() } // objectReader represents a file open for reading on the SFTP server type objectReader struct { sftpFile *sftp.File pipeReader *io.PipeReader done chan struct{} } func newObjectReader(sftpFile *sftp.File) *objectReader { pipeReader, pipeWriter := io.Pipe() file := &objectReader{ sftpFile: sftpFile, pipeReader: pipeReader, done: make(chan struct{}), } go func() { // Use sftpFile.WriteTo to pump data so that it gets a // chance to build the window up. 
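// (WriteTo lets the sftp package drive its own read-ahead requests;
// assuming typical pkg/sftp behaviour this is much faster over
// high-latency links than plain sequential Read calls.)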
_, err := sftpFile.WriteTo(pipeWriter) // Close the pipeWriter so the pipeReader fails with // the same error or EOF if err == nil _ = pipeWriter.CloseWithError(err) // signal that we've finished close(file.done) }() return file } // Read from a remote sftp file object reader func (file *objectReader) Read(p []byte) (n int, err error) { n, err = file.pipeReader.Read(p) return n, err } // Close a reader of a remote sftp file func (file *objectReader) Close() (err error) { // Close the sftpFile - this will likely cause the WriteTo to error err = file.sftpFile.Close() // Close the pipeReader so writes to the pipeWriter fail _ = file.pipeReader.Close() // Wait for the background process to finish <-file.done return err } // Open a remote sftp file object for reading. Seek is supported func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { var offset, limit int64 = 0, -1 for _, option := range options { switch x := option.(type) { case *fs.SeekOption: offset = x.Offset case *fs.RangeOption: offset, limit = x.Decode(o.Size()) default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } c, err := o.fs.getSftpConnection() if err != nil { return nil, errors.Wrap(err, "Open") } sftpFile, err := c.sftpClient.Open(o.path()) o.fs.putSftpConnection(&c, err) if err != nil { return nil, errors.Wrap(err, "Open failed") } if offset > 0 { off, err := sftpFile.Seek(offset, io.SeekStart) if err != nil || off != offset { return nil, errors.Wrap(err, "Open Seek failed") } } in = readers.NewLimitedReadCloser(newObjectReader(sftpFile), limit) return in, nil } // Update a remote sftp file using the data and ModTime from func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { // Clear the hash cache since we are about to update the object o.md5sum = nil o.sha1sum = nil c, err := o.fs.getSftpConnection() if err != nil { return errors.Wrap(err, "Update") } file, err := c.sftpClient.OpenFile(o.path(), os.O_WRONLY|os.O_CREATE|os.O_TRUNC) o.fs.putSftpConnection(&c, err) if err != nil { return errors.Wrap(err, "Update Create failed") } // remove the file if upload failed remove := func() { c, removeErr := o.fs.getSftpConnection() if removeErr != nil { fs.Debugf(src, "Failed to open new SSH connection for delete: %v", removeErr) return } removeErr = c.sftpClient.Remove(o.path()) o.fs.putSftpConnection(&c, removeErr) if removeErr != nil { fs.Debugf(src, "Failed to remove: %v", removeErr) } else { fs.Debugf(src, "Removed after failed upload: %v", err) } } _, err = file.ReadFrom(in) if err != nil { remove() return errors.Wrap(err, "Update ReadFrom failed") } err = file.Close() if err != nil { remove() return errors.Wrap(err, "Update Close failed") } err = o.SetModTime(ctx, src.ModTime(ctx)) if err != nil { return errors.Wrap(err, "Update SetModTime failed") } return nil } // Remove a remote sftp file object func (o *Object) Remove(ctx context.Context) error { c, err := o.fs.getSftpConnection() if err != nil { return errors.Wrap(err, "Remove") } err = c.sftpClient.Remove(o.path()) o.fs.putSftpConnection(&c, err) return err } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.PutStreamer = &Fs{} _ fs.Mover = &Fs{} _ fs.DirMover = &Fs{} _ fs.Abouter = &Fs{} _ fs.Object = &Object{} ) rclone-1.53.3/backend/sftp/sftp_internal_test.go000066400000000000000000000035011375552240400216740ustar00rootroot00000000000000// +build !plan9 package sftp import ( "fmt" "testing" 
"github.com/stretchr/testify/assert" ) func TestShellEscape(t *testing.T) { for i, test := range []struct { unescaped, escaped string }{ {"", ""}, {"/this/is/harmless", "/this/is/harmless"}, {"$(rm -rf /)", "\\$\\(rm\\ -rf\\ /\\)"}, {"/test/\n", "/test/'\n'"}, {":\"'", ":\\\"\\'"}, } { got := shellEscape(test.unescaped) assert.Equal(t, test.escaped, got, fmt.Sprintf("Test %d unescaped = %q", i, test.unescaped)) } } func TestParseHash(t *testing.T) { for i, test := range []struct { sshOutput, checksum string }{ {"8dbc7733dbd10d2efc5c0a0d8dad90f958581821 RELEASE.md\n", "8dbc7733dbd10d2efc5c0a0d8dad90f958581821"}, {"03cfd743661f07975fa2f1220c5194cbaff48451 -\n", "03cfd743661f07975fa2f1220c5194cbaff48451"}, } { got := parseHash([]byte(test.sshOutput)) assert.Equal(t, test.checksum, got, fmt.Sprintf("Test %d sshOutput = %q", i, test.sshOutput)) } } func TestParseUsage(t *testing.T) { for i, test := range []struct { sshOutput string usage [3]int64 }{ {"Filesystem 1K-blocks Used Available Use% Mounted on\n/dev/root 91283092 81111888 10154820 89% /", [3]int64{93473886208, 83058573312, 10398535680}}, {"Filesystem 1K-blocks Used Available Use% Mounted on\ntmpfs 818256 1636 816620 1% /run", [3]int64{837894144, 1675264, 836218880}}, {"Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on\n/dev/disk0s2 244277768 94454848 149566920 39% 997820 4293969459 0% /", [3]int64{250140434432, 96721764352, 153156526080}}, } { gotSpaceTotal, gotSpaceUsed, gotSpaceAvail := parseUsage([]byte(test.sshOutput)) assert.Equal(t, test.usage, [3]int64{gotSpaceTotal, gotSpaceUsed, gotSpaceAvail}, fmt.Sprintf("Test %d sshOutput = %q", i, test.sshOutput)) } } rclone-1.53.3/backend/sftp/sftp_test.go000066400000000000000000000011721375552240400200020ustar00rootroot00000000000000// Test Sftp filesystem interface // +build !plan9 package sftp_test import ( "testing" "github.com/rclone/rclone/backend/sftp" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestSFTPOpenssh:", NilObject: (*sftp.Object)(nil), }) } func TestIntegration2(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("skipping as -remote is set") } fstests.Run(t, &fstests.Opt{ RemoteName: "TestSFTPRclone:", NilObject: (*sftp.Object)(nil), }) } rclone-1.53.3/backend/sftp/sftp_unsupported.go000066400000000000000000000002131375552240400214060ustar00rootroot00000000000000// Build for sftp for unsupported platforms to stop go complaining // about "no buildable Go source files " // +build plan9 package sftp rclone-1.53.3/backend/sftp/stringlock.go000066400000000000000000000017201375552240400201450ustar00rootroot00000000000000// +build !plan9 package sftp import "sync" // stringLock locks for string IDs passed in type stringLock struct { mu sync.Mutex // mutex to protect below locks map[string]chan struct{} // map of locks } // newStringLock creates a stringLock func newStringLock() *stringLock { return &stringLock{ locks: make(map[string]chan struct{}), } } // Lock locks on the id passed in func (l *stringLock) Lock(ID string) { l.mu.Lock() for { ch, ok := l.locks[ID] if !ok { break } // Wait for the channel to be closed l.mu.Unlock() // fs.Logf(nil, "Waiting for stringLock on %q", ID) <-ch l.mu.Lock() } l.locks[ID] = make(chan struct{}) l.mu.Unlock() } // Unlock unlocks on the id passed in. Will panic if Lock with the // given id wasn't called first. 
func (l *stringLock) Unlock(ID string) { l.mu.Lock() ch, ok := l.locks[ID] if !ok { panic("stringLock: Unlock before Lock") } close(ch) delete(l.locks, ID) l.mu.Unlock() } rclone-1.53.3/backend/sftp/stringlock_test.go000066400000000000000000000012401375552240400212010ustar00rootroot00000000000000// +build !plan9 package sftp import ( "fmt" "sync" "testing" "time" "github.com/stretchr/testify/assert" ) func TestStringLock(t *testing.T) { var wg sync.WaitGroup counter := [3]int{} lock := newStringLock() const ( outer = 10 inner = 100 total = outer * inner ) for k := 0; k < outer; k++ { for j := range counter { wg.Add(1) go func(j int) { defer wg.Done() ID := fmt.Sprintf("%d", j) for i := 0; i < inner; i++ { lock.Lock(ID) n := counter[j] time.Sleep(1 * time.Millisecond) counter[j] = n + 1 lock.Unlock(ID) } }(j) } } wg.Wait() assert.Equal(t, [3]int{total, total, total}, counter) } rclone-1.53.3/backend/sharefile/000077500000000000000000000000001375552240400164255ustar00rootroot00000000000000rclone-1.53.3/backend/sharefile/api/000077500000000000000000000000001375552240400171765ustar00rootroot00000000000000rclone-1.53.3/backend/sharefile/api/types.go000066400000000000000000000163761375552240400207060ustar00rootroot00000000000000// Package api contains definitions for using the premiumize.me API package api import ( "fmt" "time" "github.com/pkg/errors" ) // ListRequestSelect should be used in $select for Items/Children const ListRequestSelect = "odata.count,FileCount,Name,FileName,CreationDate,IsHidden,FileSizeBytes,odata.type,Id,Hash,ClientModifiedDate" // ListResponse is returned from the Items/Children call type ListResponse struct { OdataCount int `json:"odata.count"` Value []Item `json:"value"` } // Item Types const ( ItemTypeFolder = "ShareFile.Api.Models.Folder" ItemTypeFile = "ShareFile.Api.Models.File" ) // Item refers to a file or folder type Item struct { FileCount int32 `json:"FileCount,omitempty"` Name string `json:"Name,omitempty"` FileName string `json:"FileName,omitempty"` CreatedAt time.Time `json:"CreationDate,omitempty"` ModifiedAt time.Time `json:"ClientModifiedDate,omitempty"` IsHidden bool `json:"IsHidden,omitempty"` Size int64 `json:"FileSizeBytes,omitempty"` Type string `json:"odata.type,omitempty"` ID string `json:"Id,omitempty"` Hash string `json:"Hash,omitempty"` } // Error is an odata error return type Error struct { Code string `json:"code"` Message struct { Lang string `json:"lang"` Value string `json:"value"` } `json:"message"` Reason string `json:"reason"` } // Satisfy error interface func (e *Error) Error() string { return fmt.Sprintf("%s: %s: %s", e.Message.Value, e.Code, e.Reason) } // Check Error satisfies error interface var _ error = &Error{} // DownloadSpecification is the response to /Items/Download type DownloadSpecification struct { Token string `json:"DownloadToken"` URL string `json:"DownloadUrl"` Metadata string `json:"odata.metadata"` Type string `json:"odata.type"` } // UploadRequest is set to /Items/Upload2 to receive an UploadSpecification type UploadRequest struct { Method string `json:"method"` // Upload method: one of: standard, streamed or threaded Raw bool `json:"raw"` // Raw post if true or MIME upload if false Filename string `json:"fileName"` // Uploaded item file name. Filesize *int64 `json:"fileSize,omitempty"` // Uploaded item file size. Overwrite bool `json:"overwrite"` // Indicates whether items with the same name will be overwritten or not. CreatedDate time.Time `json:"ClientCreatedDate"` // Created Date of this Item. 
ModifiedDate time.Time `json:"ClientModifiedDate"` // Modified Date of this Item. BatchID string `json:"batchId,omitempty"` // Indicates part of a batch. Batched uploads do not send notification until the whole batch is completed. BatchLast *bool `json:"batchLast,omitempty"` // Indicates is the last in a batch. Upload notifications for the whole batch are sent after this upload. CanResume *bool `json:"canResume,omitempty"` // Indicates uploader supports resume. StartOver *bool `json:"startOver,omitempty"` // Indicates uploader wants to restart the file - i.e., ignore previous failed upload attempts. Tool string `json:"tool,omitempty"` // Identifies the uploader tool. Title string `json:"title,omitempty"` // Item Title Details string `json:"details,omitempty"` // Item description IsSend *bool `json:"isSend,omitempty"` // Indicates that this upload is part of a Send operation SendGUID string `json:"sendGuid,omitempty"` // Used if IsSend is true. Specifies which Send operation this upload is part of. OpID string `json:"opid,omitempty"` // Used for Asynchronous copy/move operations - called by Zones to push files to other Zones ThreadCount *int `json:"threadCount,omitempty"` // Specifies the number of threads the threaded uploader will use. Only used is method is threaded, ignored otherwise Notify *bool `json:"notify,omitempty"` // Indicates whether users will be notified of this upload - based on folder preferences ExpirationDays *int `json:"expirationDays,omitempty"` // File expiration days BaseFileID string `json:"baseFileId,omitempty"` // Used to check conflict in file during File Upload. } // UploadSpecification is returned from /Items/Upload type UploadSpecification struct { Method string `json:"Method"` // The Upload method that must be used for this upload PrepareURI string `json:"PrepareUri"` // If provided, clients must issue a request to this Uri before uploading any data. ChunkURI string `json:"ChunkUri"` // Specifies the URI the client must send the file data to FinishURI string `json:"FinishUri"` // If provided, specifies the final call the client must perform to finish the upload process ProgressData string `json:"ProgressData"` // Allows the client to check progress of standard uploads IsResume bool `json:"IsResume"` // Specifies a Resumable upload is supproted. ResumeIndex int64 `json:"ResumeIndex"` // Specifies the initial index for resuming, if IsResume is true. 
ResumeOffset int64 `json:"ResumeOffset"` // Specifies the initial file offset by bytes, if IsResume is true ResumeFileHash string `json:"ResumeFileHash"` // Specifies the MD5 hash of the first ResumeOffset bytes of the partial file found at the server MaxNumberOfThreads int `json:"MaxNumberOfThreads"` // Specifies the max number of chunks that can be sent simultaneously for threaded uploads } // UploadFinishResponse is returns from calling UploadSpecification.FinishURI type UploadFinishResponse struct { Error bool `json:"error"` ErrorMessage string `json:"errorMessage"` ErrorCode int `json:"errorCode,string"` Value []struct { UploadID string `json:"uploadid"` ParentID string `json:"parentid"` ID string `json:"id"` StreamID string `json:"streamid"` FileName string `json:"filename"` DisplayName string `json:"displayname"` Size int `json:"size,string"` Md5 string `json:"md5"` } `json:"value"` } // ID returns the ID of the first response if available func (finish *UploadFinishResponse) ID() (string, error) { if finish.Error { return "", errors.Errorf("upload failed: %s (%d)", finish.ErrorMessage, finish.ErrorCode) } if len(finish.Value) == 0 { return "", errors.New("upload failed: no results returned") } return finish.Value[0].ID, nil } // Parent is the ID of the parent folder type Parent struct { ID string `json:"Id,omitempty"` } // Zone is where the data is stored type Zone struct { ID string `json:"Id,omitempty"` } // UpdateItemRequest is sent to PATCH /v3/Items(id) type UpdateItemRequest struct { Name string `json:"Name,omitempty"` FileName string `json:"FileName,omitempty"` Description string `json:"Description,omitempty"` ExpirationDate *time.Time `json:"ExpirationDate,omitempty"` Parent *Parent `json:"Parent,omitempty"` Zone *Zone `json:"Zone,omitempty"` ModifiedAt *time.Time `json:"ClientModifiedDate,omitempty"` } rclone-1.53.3/backend/sharefile/generate_tzdata.go000066400000000000000000000005171375552240400221200ustar00rootroot00000000000000// +build ignore package main import ( "log" "net/http" "github.com/shurcooL/vfsgen" ) func main() { var AssetDir http.FileSystem = http.Dir("./tzdata") err := vfsgen.Generate(AssetDir, vfsgen.Options{ PackageName: "sharefile", BuildTags: "!dev", VariableName: "tzdata", }) if err != nil { log.Fatalln(err) } } rclone-1.53.3/backend/sharefile/sharefile.go000066400000000000000000001205761375552240400207310ustar00rootroot00000000000000// Package sharefile provides an interface to the Citrix Sharefile // object storage system. package sharefile //go:generate ./update-timezone.sh /* NOTES ## for docs Detail standard/chunked/streaming uploads? ## Bugs in API The times in updateItem are being parsed in EST/DST local time updateItem only sets times accurate to 1 second https://community.sharefilesupport.com/citrixsharefile/topics/bug-report-for-update-item-patch-items-id-setting-clientmodifieddate-ignores-timezone-and-milliseconds When doing a rename+move directory, the server appears to do the rename first in the local directory which can overwrite files of the same name in the local directory. https://community.sharefilesupport.com/citrixsharefile/topics/bug-report-for-update-item-patch-items-id-file-overwrite-under-certain-conditions The Copy command can't change the name at the same time which means we have to copy via a temporary directory. 
https://community.sharefilesupport.com/citrixsharefile/topics/copy-item-needs-to-be-able-to-set-a-new-name ## Allowed characters https://api.sharefile.com/rest/index/odata.aspx $select to limit returned fields https://www.odata.org/documentation/odata-version-3-0/odata-version-3-0-core-protocol/#theselectsystemqueryoption Also $filter to select only things we need https://support.citrix.com/article/CTX234774 The following characters should not be used in folder or file names. \ / . , : ; * ? " < > A filename ending with a period without an extension File names with leading or trailing whitespaces. // sharefile stringNeedsEscaping = []byte{ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1A, 0x1B, 0x1C, 0x1D, 0x1E, 0x1F, 0x20, 0x2A, 0x2E, 0x2F, 0x3A, 0x3C, 0x3E, 0x3F, 0x7C, 0xEFBCBC } maxFileLength = 256 canWriteUnnormalized = true canReadUnnormalized = true canReadRenormalized = false canStream = true Which is control chars + [' ', '*', '.', '/', ':', '<', '>', '?', '|'] - also \ and " */ import ( "context" "encoding/json" "fmt" "io" "io/ioutil" "log" "net/http" "net/url" "path" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/sharefile/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/random" "github.com/rclone/rclone/lib/rest" "golang.org/x/oauth2" ) const ( rcloneClientID = "djQUPlHTUM9EvayYBWuKC5IrVIoQde46" rcloneEncryptedClientSecret = "v7572bKhUindQL3yDnUAebmgP-QxiwT38JLxVPolcZBl6SSs329MtFzH73x7BeELmMVZtneUPvALSopUZ6VkhQ" minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential apiPath = "/sf/v3" // add to endpoint to get API path tokenPath = "/oauth/token" // add to endpoint to get Token path minChunkSize = 256 * fs.KibiByte maxChunkSize = 2 * fs.GibiByte defaultChunkSize = 64 * fs.MebiByte defaultUploadCutoff = 128 * fs.MebiByte ) // Generate a new oauth2 config which we will update when we know the TokenURL func newOauthConfig(tokenURL string) *oauth2.Config { return &oauth2.Config{ Scopes: nil, Endpoint: oauth2.Endpoint{ AuthURL: "https://secure.sharefile.com/oauth/authorize", TokenURL: tokenURL, }, ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), RedirectURL: oauthutil.RedirectPublicSecureURL, } } // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "sharefile", Description: "Citrix Sharefile", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { oauthConfig := newOauthConfig("") checkAuth := func(oauthConfig *oauth2.Config, auth *oauthutil.AuthResult) error { if auth == nil || auth.Form == nil { return errors.New("endpoint not found in response") } subdomain := auth.Form.Get("subdomain") apicp := auth.Form.Get("apicp") if subdomain == "" || apicp == "" { return errors.Errorf("subdomain or apicp not found in response: %+v", auth.Form) } endpoint := "https://" + subdomain + "." 
+ apicp m.Set("endpoint", endpoint) oauthConfig.Endpoint.TokenURL = endpoint + tokenPath return nil } opt := oauthutil.Options{ CheckAuth: checkAuth, } err := oauthutil.Config("sharefile", name, m, oauthConfig, &opt) if err != nil { log.Fatalf("Failed to configure token: %v", err) } }, Options: []fs.Option{{ Name: "upload_cutoff", Help: "Cutoff for switching to multipart upload.", Default: defaultUploadCutoff, Advanced: true, }, { Name: "root_folder_id", Help: `ID of the root folder Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID).`, Examples: []fs.OptionExample{{ Value: "", Help: `Access the Personal Folders. (Default)`, }, { Value: "favorites", Help: "Access the Favorites folder.", }, { Value: "allshared", Help: "Access all the shared folders.", }, { Value: "connectors", Help: "Access all the individual connectors.", }, { Value: "top", Help: "Access the home, favorites, and shared folders as well as the connectors.", }}, }, { Name: "chunk_size", Default: defaultChunkSize, Help: `Upload chunk size. Must a power of 2 >= 256k. Making this larger will improve performance, but note that each chunk is buffered in memory one per transfer. Reducing this will reduce memory usage but decrease performance.`, Advanced: true, }, { Name: "endpoint", Help: `Endpoint for API calls. This is usually auto discovered as part of the oauth process, but can be set manually to something like: https://XXX.sharefile.com `, Advanced: true, Default: "", }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, Default: (encoder.Base | encoder.EncodeWin | // :?"*<>| encoder.EncodeBackSlash | // \ encoder.EncodeCtl | encoder.EncodeRightSpace | encoder.EncodeRightPeriod | encoder.EncodeLeftSpace | encoder.EncodeLeftPeriod | encoder.EncodeInvalidUtf8), }}, }) } // Options defines the configuration for this backend type Options struct { RootFolderID string `config:"root_folder_id"` UploadCutoff fs.SizeSuffix `config:"upload_cutoff"` ChunkSize fs.SizeSuffix `config:"chunk_size"` Endpoint string `config:"endpoint"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote cloud storage system type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed options features *fs.Features // optional features srv *rest.Client // the connection to the server dirCache *dircache.DirCache // Map of directory path to directory id pacer *fs.Pacer // pacer for API calls bufferTokens chan []byte // control concurrency of multipart uploads tokenRenewer *oauthutil.Renew // renew the token on expiry rootID string // ID of the users root folder location *time.Location // timezone of server for SetModTime workaround } // Object describes a file type Object struct { fs *Fs // what this object is part of remote string // The remote path hasMetaData bool // metadata is present and correct size int64 // size of the object modTime time.Time // modification time of the object id string // ID of the object md5 string // hash of the object } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("sharefile root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() 
*fs.Features { return f.features } // parsePath parses a sharefile 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests. 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func shouldRetry(resp *http.Response, err error) (bool, error) { return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // Reads the metadata for the id passed in. If id is "" then it returns the root // if path is not "" then the item read uses id as the root and the path is relative func (f *Fs) readMetaDataForIDPath(ctx context.Context, id, path string, directoriesOnly bool, filesOnly bool) (info *api.Item, err error) { opts := rest.Opts{ Method: "GET", Path: "/Items", Parameters: url.Values{ "$select": {api.ListRequestSelect}, }, } if id != "" { opts.Path += "(" + id + ")" } if path != "" { opts.Path += "/ByPath" opts.Parameters.Set("path", "/"+f.opt.Enc.FromStandardPath(path)) } var item api.Item var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &item) return shouldRetry(resp, err) }) if err != nil { if resp != nil && resp.StatusCode == http.StatusNotFound { if filesOnly { return nil, fs.ErrorObjectNotFound } return nil, fs.ErrorDirNotFound } return nil, errors.Wrap(err, "couldn't find item") } if directoriesOnly && item.Type != api.ItemTypeFolder { return nil, fs.ErrorIsFile } if filesOnly && item.Type != api.ItemTypeFile { return nil, fs.ErrorNotAFile } return &item, nil } // Reads the metadata for the id passed in.
If id is "" then it returns the root func (f *Fs) readMetaDataForID(ctx context.Context, id string, directoriesOnly bool, filesOnly bool) (info *api.Item, err error) { return f.readMetaDataForIDPath(ctx, id, "", directoriesOnly, filesOnly) } // readMetaDataForPath reads the metadata from the path func (f *Fs) readMetaDataForPath(ctx context.Context, path string, directoriesOnly bool, filesOnly bool) (info *api.Item, err error) { leaf, directoryID, err := f.dirCache.FindPath(ctx, path, false) if err != nil { if err == fs.ErrorDirNotFound { return nil, fs.ErrorObjectNotFound } return nil, err } return f.readMetaDataForIDPath(ctx, directoryID, leaf, directoriesOnly, filesOnly) } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { body, err := rest.ReadBody(resp) if err != nil { body = nil } var e = api.Error{ Code: fmt.Sprint(resp.StatusCode), Reason: resp.Status, } e.Message.Lang = "en" e.Message.Value = string(body) if body != nil { _ = json.Unmarshal(body, &e) } return &e } func checkUploadChunkSize(cs fs.SizeSuffix) error { if cs < minChunkSize { return errors.Errorf("ChunkSize: %s is less than %s", cs, minChunkSize) } if cs > maxChunkSize { return errors.Errorf("ChunkSize: %s is greater than %s", cs, maxChunkSize) } return nil } func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadChunkSize(cs) if err == nil { old, f.opt.ChunkSize = f.opt.ChunkSize, cs f.fillBufferTokens() // reset the buffer tokens } return } func checkUploadCutoff(cs fs.SizeSuffix) error { return nil } func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadCutoff(cs) if err == nil { old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs } return } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } // Check parameters OK if opt.Endpoint == "" { return nil, errors.New("endpoint not set: rebuild the remote or set manually") } err = checkUploadChunkSize(opt.ChunkSize) if err != nil { return nil, err } err = checkUploadCutoff(opt.UploadCutoff) if err != nil { return nil, err } root = parsePath(root) oauthConfig := newOauthConfig(opt.Endpoint + tokenPath) var client *http.Client var ts *oauthutil.TokenSource client, ts, err = oauthutil.NewClient(name, m, oauthConfig) if err != nil { return nil, errors.Wrap(err, "failed to configure sharefile") } f := &Fs{ name: name, root: root, opt: *opt, srv: rest.NewClient(client).SetRoot(opt.Endpoint + apiPath), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), } f.features = (&fs.Features{ CaseInsensitive: true, CanHaveEmptyDirectories: true, ReadMimeType: false, }).Fill(f) f.srv.SetErrorHandler(errorHandler) f.fillBufferTokens() // Renew the token in the background if ts != nil { f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { _, err := f.List(ctx, "") return err }) } // Load the server timezone from an internal file // Used to correct the time in SetModTime const serverTimezone = "America/New_York" timezone, err := tzdata.Open(serverTimezone) if err != nil { return nil, errors.Wrap(err, "failed to open timezone db") } tzdata, err := ioutil.ReadAll(timezone) if err != nil { return nil, errors.Wrap(err, "failed to read timezone") } _ = 
timezone.Close() f.location, err = time.LoadLocationFromTZData(serverTimezone, tzdata) if err != nil { return nil, errors.Wrap(err, "failed to load location from timezone") } // Find ID of user's root folder if opt.RootFolderID == "" { item, err := f.readMetaDataForID(ctx, opt.RootFolderID, true, false) if err != nil { return nil, errors.Wrap(err, "couldn't find root ID") } f.rootID = item.ID } else { f.rootID = opt.RootFolderID } // Get rootID f.dirCache = dircache.New(root, f.rootID, f) // Find the current root err = f.dirCache.FindRoot(ctx, false) if err != nil { // Assume it is a file newRoot, remote := dircache.SplitPath(root) tempF := *f tempF.dirCache = dircache.New(newRoot, f.rootID, &tempF) tempF.root = newRoot // Make new Fs which is the parent err = tempF.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f return f, nil } _, err := tempF.newObjectWithInfo(ctx, remote, nil) if err != nil { if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f return f, nil } return nil, err } f.features.Fill(&tempF) // XXX: update the old f here instead of returning tempF, since // `features` were already filled with functions having *f as a receiver. // See https://github.com/rclone/rclone/issues/2182 f.dirCache = tempF.dirCache f.root = tempF.root // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // Fill up (or reset) the buffer tokens func (f *Fs) fillBufferTokens() { f.bufferTokens = make(chan []byte, fs.Config.Transfers) for i := 0; i < fs.Config.Transfers; i++ { f.bufferTokens <- nil } } // getUploadBlock gets a block from the pool of size chunkSize func (f *Fs) getUploadBlock() []byte { buf := <-f.bufferTokens if buf == nil { buf = make([]byte, f.opt.ChunkSize) } // fs.Debugf(f, "Getting upload block %p", buf) return buf } // putUploadBlock returns a block to the pool of size chunkSize func (f *Fs) putUploadBlock(buf []byte) { buf = buf[:cap(buf)] if len(buf) != int(f.opt.ChunkSize) { panic("bad blocksize returned to pool") } // fs.Debugf(f, "Returning upload block %p", buf) f.bufferTokens <- buf } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Item) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { // Set info err = o.setMetaData(info) } else { err = o.readMetaData(ctx) // reads info and meta, returning an error } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. 
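// Editorial aside, not part of the upstream source: fillBufferTokens,
// getUploadBlock and putUploadBlock above implement a semaphore with an
// attached free list - the channel holds one token per transfer, and a
// token may carry an already-allocated chunk buffer. A minimal sketch of
// the same pattern (transfers and chunkSize stand in for the real config
// values):
//
//	pool := make(chan []byte, transfers)
//	for i := 0; i < transfers; i++ {
//		pool <- nil // a token whose buffer is not yet allocated
//	}
//	get := func() []byte {
//		buf := <-pool // blocks while all tokens are in use
//		if buf == nil {
//			buf = make([]byte, chunkSize) // allocate lazily on first use
//		}
//		return buf
//	}
//	put := func(buf []byte) { pool <- buf }
//
// This bounds multipart upload memory to roughly transfers * chunkSize
// while recycling buffers between chunks.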
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { if pathID == "top" { // Find the leaf in pathID found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool { if item.Name == leaf { pathIDOut = item.ID return true } return false }) return pathIDOut, found, err } info, err := f.readMetaDataForIDPath(ctx, pathID, leaf, true, false) if err == nil { found = true pathIDOut = info.ID } else if err == fs.ErrorDirNotFound { err = nil // don't return an error if not found } return pathIDOut, found, err } // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) { var resp *http.Response leaf = f.opt.Enc.FromStandardName(leaf) var req = api.Item{ Name: leaf, FileName: leaf, CreatedAt: time.Now(), } var info api.Item opts := rest.Opts{ Method: "POST", Path: "/Items(" + pathID + ")/Folder", Parameters: url.Values{ "$select": {api.ListRequestSelect}, "overwrite": {"false"}, "passthrough": {"false"}, }, } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &req, &info) return shouldRetry(resp, err) }) if err != nil { return "", errors.Wrap(err, "CreateDir") } return info.ID, nil } // list the objects into the function supplied // // If directories is set it only sends directories // User function to process a File item from listAll // // Should return true to finish processing type listAllFn func(*api.Item) bool // Lists the directory required calling the user function on each item found // // If the user fn ever returns true then it early exits with found = true func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) { opts := rest.Opts{ Method: "GET", Path: "/Items(" + dirID + ")/Children", Parameters: url.Values{ "$select": {api.ListRequestSelect}, }, } var result api.ListResponse var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return found, errors.Wrap(err, "couldn't list files") } for i := range result.Value { item := &result.Value[i] if item.Type == api.ItemTypeFolder { if filesOnly { continue } } else if item.Type == api.ItemTypeFile { if directoriesOnly { continue } } else { fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type) continue } item.Name = f.opt.Enc.ToStandardName(item.Name) if fn(item) { found = true break } } return } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. 
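// Editorial note, not part of the upstream source: List below is built
// on listAll above, whose callback may stop the listing early by
// returning true and whose directoriesOnly/filesOnly flags filter what
// the callback sees. A minimal illustration of that contract, assuming
// an *Fs called f and a directory ID in dirID:
//
//	found, err := f.listAll(ctx, dirID, false, true, func(item *api.Item) bool {
//		return item.Name == "wanted.txt" // returning true terminates the listing
//	})
//
// where found reports whether the callback ever returned true.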
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } var iErr error _, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool { remote := path.Join(dir, info.Name) if info.Type == api.ItemTypeFolder { // cache the directory ID for later lookups f.dirCache.Put(remote, info.ID) d := fs.NewDir(remote, info.CreatedAt).SetID(info.ID).SetSize(info.Size).SetItems(int64(info.FileCount)) entries = append(entries, d) } else if info.Type == api.ItemTypeFile { o, err := f.newObjectWithInfo(ctx, remote, info) if err != nil { iErr = err return true } entries = append(entries, o) } return false }) if err != nil { return nil, err } if iErr != nil { return nil, iErr } return entries, nil } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Returns the object, leaf, directoryID and error // // Used to create new objects func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) { // Create the directory for the object if it doesn't exist leaf, directoryID, err = f.dirCache.FindPath(ctx, remote, true) if err != nil { return } // Temporary Object under construction o = &Object{ fs: f, remote: remote, } return o, leaf, directoryID, nil } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil) switch err { case nil: return existingObj, existingObj.Update(ctx, in, src, options...) case fs.ErrorObjectNotFound: // Not found so create it return f.PutUnchecked(ctx, in, src) default: return nil, err } } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) } // PutUnchecked the object into the container // // This will produce an error if the object already exists // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { remote := src.Remote() size := src.Size() modTime := src.ModTime(ctx) o, _, _, err := f.createObject(ctx, remote, modTime, size) if err != nil { return nil, err } return o, o.Update(ctx, in, src, options...) 
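// Editorial note, not part of the upstream source: Put above is the
// check-then-act half of the pair - it looks the object up first,
// updates the existing object on success, and only falls back to
// PutUnchecked on fs.ErrorObjectNotFound, so plain uploads never create
// duplicates. Schematically:
//
//	switch _, err := f.NewObject(ctx, src.Remote()); err {
//	case nil:
//		// exists: overwrite in place via Update
//	case fs.ErrorObjectNotFound:
//		// missing: create it via PutUnchecked
//	}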
} // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { _, err := f.dirCache.FindDir(ctx, dir, true) return err } // purgeCheck removes the directory, if check is set then it refuses // to do so if it has anything in it func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { root := path.Join(f.root, dir) if root == "" { return errors.New("can't purge root directory") } dc := f.dirCache rootID, err := dc.FindDir(ctx, dir, false) if err != nil { return err } // need to check if empty as it will delete recursively by default if check { found, err := f.listAll(ctx, rootID, false, false, func(item *api.Item) bool { return true }) if err != nil { return errors.Wrap(err, "purgeCheck") } if found { return fs.ErrorDirectoryNotEmpty } } err = f.remove(ctx, rootID) f.dirCache.FlushDir(dir) if err != nil { return err } return nil } // Rmdir deletes the directory // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision returns the precision of this Fs func (f *Fs) Precision() time.Duration { // sharefile returns times accurate to the millisecond, but // for some reason these seem only accurate to 2ms. // updateItem seems to only set times accurate to 1 second though. return time.Second // this doesn't appear to be documented anywhere } // Purge deletes all the files and the container // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // updateItem patches a file or folder // // if leaf = "" or directoryID = "" or modTime == nil then it will be // left alone // // Note that this seems to work by renaming first, then moving to a // new directory which means that it can overwrite existing objects // :-( func (f *Fs) updateItem(ctx context.Context, id, leaf, directoryID string, modTime *time.Time) (info *api.Item, err error) { // Move the object opts := rest.Opts{ Method: "PATCH", Path: "/Items(" + id + ")", Parameters: url.Values{ "$select": {api.ListRequestSelect}, "overwrite": {"false"}, }, } leaf = f.opt.Enc.FromStandardName(leaf) // FIXME this appears to be a bug in the API // // If you set the modified time via PATCH then the server // appears to parse it as a local time for America/New_York // // However if you set it when uploading the file then it is fine...
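//
// A worked example of the mis-parse (editorial note, not upstream text):
// if rclone PATCHes a modified time of 2019-07-02T15:04:05Z, the server
// stores it as if 15:04:05 were America/New_York local time, which in
// summer (UTC-4) is really 19:04:05Z - four hours late. The workaround
// below therefore renders the wanted instant in the server's zone and
// relabels it as UTC before sending:
//
//	modTime.In(loc).Format(time.RFC3339Nano) // "2019-07-02T11:04:05-04:00"
//	// chop the "-04:00" and append "Z"      -> "2019-07-02T11:04:05Z"
//
// so the server's mis-parse lands back on the intended instant.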
// // Also it only sets the time to 1 second resolution where it // uses 1ms resolution elsewhere if modTime != nil && f.location != nil { newTime := modTime.In(f.location) isoTime := newTime.Format(time.RFC3339Nano) // Chop TZ -05:00 off the end and replace with Z isoTime = isoTime[:len(isoTime)-6] + "Z" // Parse it back into a time newModTime, err := time.Parse(time.RFC3339Nano, isoTime) if err != nil { return nil, errors.Wrap(err, "updateItem: time parse") } modTime = &newModTime } update := api.UpdateItemRequest{ Name: leaf, FileName: leaf, ModifiedAt: modTime, } if directoryID != "" { update.Parent = &api.Parent{ ID: directoryID, } } var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, &update, &info) return shouldRetry(resp, err) }) if err != nil { return nil, err } return info, nil } // move a file or folder // // This is complicated by the fact that we can't use updateItem to move // to a different directory AND rename at the same time as it can // overwrite files in the source directory. func (f *Fs) move(ctx context.Context, isFile bool, id, oldLeaf, newLeaf, oldDirectoryID, newDirectoryID string) (item *api.Item, err error) { // To demonstrate bug // item, err = f.updateItem(ctx, id, newLeaf, newDirectoryID, nil) // if err != nil { // return nil, errors.Wrap(err, "Move rename leaf") // } // return item, nil doRenameLeaf := oldLeaf != newLeaf doMove := oldDirectoryID != newDirectoryID // Now rename the leaf to a temporary name if we are moving to // another directory to make sure we don't overwrite something // in the source directory by accident if doRenameLeaf && doMove { tmpLeaf := newLeaf + "." + random.String(8) item, err = f.updateItem(ctx, id, tmpLeaf, "", nil) if err != nil { return nil, errors.Wrap(err, "Move rename leaf") } } // Move the object to a new directory (with the existing name) // if required if doMove { item, err = f.updateItem(ctx, id, "", newDirectoryID, nil) if err != nil { return nil, errors.Wrap(err, "Move directory") } } // Rename the leaf to its final name if required if doRenameLeaf { item, err = f.updateItem(ctx, id, newLeaf, "", nil) if err != nil { return nil, errors.Wrap(err, "Move rename leaf") } } return item, nil } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } // Find ID of src parent, not creating subdirs srcLeaf, srcParentID, err := srcObj.fs.dirCache.FindPath(ctx, srcObj.remote, false) if err != nil { return nil, err } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // Do the move info, err := f.move(ctx, true, srcObj.id, srcLeaf, leaf, srcParentID, directoryID) if err != nil { return nil, err } err = dstObj.setMetaData(info) if err != nil { return nil, err } return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. 
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcID, srcDirectoryID, srcLeaf, dstDirectoryID, dstLeaf, err := f.dirCache.DirMove(ctx, srcFs.dirCache, srcFs.root, srcRemote, f.root, dstRemote) if err != nil { return err } // Do the move _, err = f.move(ctx, false, srcID, srcLeaf, dstLeaf, srcDirectoryID, dstDirectoryID) if err != nil { return err } srcFs.dirCache.FlushDir(srcRemote) return nil } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Object, err error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } err = srcObj.readMetaData(ctx) if err != nil { return nil, err } // Find ID of src parent, not creating subdirs srcLeaf, srcParentID, err := srcObj.fs.dirCache.FindPath(ctx, srcObj.remote, false) if err != nil { return nil, err } srcLeaf = f.opt.Enc.FromStandardName(srcLeaf) _ = srcParentID // Create temporary object dstObj, dstLeaf, dstParentID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } dstLeaf = f.opt.Enc.FromStandardName(dstLeaf) sameName := strings.ToLower(srcLeaf) == strings.ToLower(dstLeaf) if sameName && srcParentID == dstParentID { return nil, errors.Errorf("copy: can't copy to a file in the same directory whose name only differs in case: %q vs %q", srcLeaf, dstLeaf) } // Discover whether we can just copy directly or not directCopy := false if sameName { // if copying to same name can copy directly directCopy = true } else { // if (dstParentID, srcLeaf) does not exist then can // Copy then Rename without fear of overwriting // something _, err := f.readMetaDataForIDPath(ctx, dstParentID, srcLeaf, false, false) if err == fs.ErrorObjectNotFound || err == fs.ErrorDirNotFound { directCopy = true } else if err != nil { return nil, errors.Wrap(err, "copy: failed to examine destination dir") } else { // otherwise need to copy via a temporary directory } } // Copy direct to destination unless !directCopy in which case // copy via a temporary directory copyTargetDirID := dstParentID if !directCopy { // Create a temporary directory to copy the object in to tmpDir := "rclone-temp-dir-" + random.String(16) err = f.Mkdir(ctx, tmpDir) if err != nil { return nil, errors.Wrap(err, "copy: failed to make temp dir") } defer func() { rmdirErr := f.Rmdir(ctx, tmpDir) if rmdirErr != nil && err == nil { err = errors.Wrap(rmdirErr, "copy: failed to remove temp dir") } }() tmpDirID, err := f.dirCache.FindDir(ctx, tmpDir, false) if err != nil { return nil, errors.Wrap(err, "copy: failed to find temp dir") } copyTargetDirID = tmpDirID } // Copy the object opts := rest.Opts{ Method: "POST", Path: "/Items(" + srcObj.id + ")/Copy", Parameters: url.Values{ "$select": {api.ListRequestSelect}, "overwrite": {"false"}, "targetid": {copyTargetDirID}, }, } var resp *http.Response var info *api.Item err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(resp, err) }) if err != nil { return nil, err } // Rename into the correct name and directory if required and // set the modtime since the copy doesn't preserve it var updateParentID, updateLeaf string // only set these if necessary if srcLeaf != dstLeaf { updateLeaf = dstLeaf } if !directCopy { updateParentID = dstParentID } // set new modtime regardless info, err = f.updateItem(ctx, info.ID, updateLeaf, updateParentID, &srcObj.modTime) if err != nil { return nil, err } err = dstObj.setMetaData(info) if err != nil { return nil, err } return dstObj, nil } // DirCacheFlush resets the directory cache - used in testing as an // optional interface func (f *Fs) DirCacheFlush() { f.dirCache.ResetRoot() } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the MD5 of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } err := o.readMetaData(ctx) if err != nil { return "", err } return o.md5, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { err := o.readMetaData(context.TODO()) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return 0 } return o.size } // setMetaData sets the metadata from info func (o *Object) setMetaData(info *api.Item) (err error) { if info.Type != api.ItemTypeFile { return errors.Wrapf(fs.ErrorNotAFile, "%q is %q", o.remote, info.Type) } o.hasMetaData = true o.size = info.Size if !info.ModifiedAt.IsZero() { o.modTime = info.ModifiedAt } else { o.modTime = info.CreatedAt } o.id = info.ID o.md5 = info.Hash return nil } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData(ctx context.Context) (err error) { if o.hasMetaData { return nil } var info *api.Item if o.id != "" { info, err = o.fs.readMetaDataForID(ctx, o.id, false, true) } else { info, err = o.fs.readMetaDataForPath(ctx, o.remote, false, true) } if err != nil { return err } return o.setMetaData(info) } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) { info, err := o.fs.updateItem(ctx, o.id, "", "", &modTime) if err != nil { return err } err = o.setMetaData(info) if err != nil { return err } return nil } // Storable returns a boolean showing whether this object is storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { opts := rest.Opts{ Method: "GET", Path: "/Items(" + o.id + ")/Download", Parameters: url.Values{ "redirect": {"false"}, }, } var resp
*http.Response var dl api.DownloadSpecification err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &dl) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "open: fetch download specification") } fs.FixRangeOption(options, o.size) opts = rest.Opts{ Path: "", RootURL: dl.URL, Method: "GET", Options: options, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "open") } return resp.Body, err } // Update the object with the contents of the io.Reader, modTime and size // // If existing is set then it updates the object rather than creating a new one // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { remote := o.Remote() size := src.Size() modTime := src.ModTime(ctx) isLargeFile := size < 0 || size > int64(o.fs.opt.UploadCutoff) // Create the directory for the object if it doesn't exist leaf, directoryID, err := o.fs.dirCache.FindPath(ctx, remote, true) if err != nil { return err } leaf = o.fs.opt.Enc.FromStandardName(leaf) var req = api.UploadRequest{ Method: "standard", Raw: true, Filename: leaf, Overwrite: true, CreatedDate: modTime, ModifiedDate: modTime, Tool: fs.Config.UserAgent, } if isLargeFile { if size < 0 { // For files of indeterminate size, use streamed req.Method = "streamed" } else { // otherwise use threaded which is more efficient req.Method = "threaded" req.ThreadCount = &fs.Config.Transfers req.Filesize = &size } } var resp *http.Response var info api.UploadSpecification opts := rest.Opts{ Method: "POST", Path: "/Items(" + directoryID + ")/Upload2", Options: options, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, &req, &info) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "upload get specification") } // If file is large then upload in parts if isLargeFile { up, err := o.fs.newLargeUpload(ctx, o, in, src, &info) if err != nil { return err } return up.Upload(ctx) } // Single part upload opts = rest.Opts{ Method: "POST", RootURL: info.ChunkURI + "&fmt=json", Body: in, ContentLength: &size, } var finish api.UploadFinishResponse err = o.fs.pacer.CallNoRetry(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &finish) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "upload file") } return o.checkUploadResponse(ctx, &finish) } // Check the upload response and update the metadata on the object func (o *Object) checkUploadResponse(ctx context.Context, finish *api.UploadFinishResponse) (err error) { // Find returned ID id, err := finish.ID() if err != nil { return err } // Read metadata o.id = id o.hasMetaData = false return o.readMetaData(ctx) } // Remove an object by ID func (f *Fs) remove(ctx context.Context, id string) (err error) { opts := rest.Opts{ Method: "DELETE", Path: "/Items(" + id + ")", Parameters: url.Values{ "singleversion": {"false"}, "forceSync": {"true"}, }, NoResponse: true, } var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "remove") } return nil } // Remove an object func (o *Object) Remove(ctx context.Context) error { err := o.readMetaData(ctx) if err != nil { return errors.Wrap(err, "Remove: Failed to read metadata") } 
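// Editorial note, not part of the upstream source: the readMetaData
// call above is what resolves o.id when the object was constructed from
// a bare path, since the DELETE issued by f.remove addresses the item
// by ID rather than by path. With a hypothetical item ID of "abc123"
// the request is:
//
//	DELETE /sf/v3/Items(abc123)?singleversion=false&forceSync=true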
return o.fs.remove(ctx, o.id) } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.id } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.PutStreamer = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.IDer = (*Object)(nil) ) rclone-1.53.3/backend/sharefile/sharefile_test.go000066400000000000000000000014141375552240400217550ustar00rootroot00000000000000// Test filesystem interface package sharefile import ( "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestSharefile:", NilObject: (*Object)(nil), ChunkedUpload: fstests.ChunkedUploadConfig{ MinChunkSize: minChunkSize, CeilChunkSize: fstests.NextPowerOfTwo, }, }) } func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadChunkSize(cs) } func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadCutoff(cs) } var ( _ fstests.SetUploadChunkSizer = (*Fs)(nil) _ fstests.SetUploadCutoffer = (*Fs)(nil) ) rclone-1.53.3/backend/sharefile/tzdata_vfsdata.go000066400000000000000000000302331375552240400217540ustar00rootroot00000000000000// Code generated by vfsgen; DO NOT EDIT. // +build !dev package sharefile import ( "bytes" "compress/gzip" "fmt" "io" "io/ioutil" "net/http" "os" pathpkg "path" "time" ) // tzdata statically implements the virtual filesystem provided to vfsgen. var tzdata = func() http.FileSystem { fs := vfsgen۰FS{ "/": &vfsgen۰DirInfo{ name: "/", modTime: time.Date(2019, 9, 12, 14, 55, 27, 600751842, time.UTC), }, "/America": &vfsgen۰DirInfo{ name: "America", modTime: time.Date(2019, 9, 12, 14, 55, 27, 600751842, time.UTC), }, "/America/New_York": &vfsgen۰CompressedFileInfo{ name: "New_York", modTime: time.Date(2019, 7, 2, 0, 44, 57, 0, time.UTC), uncompressedSize: 3536, compressedContent: 
[]byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\xd6\x7f\x54\xd5\xf5\x1d\xc7\xf1\xaf\x4a\x6a\x28\xa1\x2b\x9a\xa6\xb1\xa6\xdb\x08\x13\xf0\x47\xe4\x2f\xb6\x68\xc9\x18\xda\x0d\xd3\x4b\x22\x39\xfc\xd4\x20\x0e\xea\xc6\x67\xfd\x20\xdc\x0c\xdb\xb4\x18\x9e\xd3\xdc\x4e\x1e\xd8\x56\xe7\x36\x53\x94\x5c\x44\x0a\x38\xc9\x76\xfd\xc5\xc8\xe3\x8e\xab\xb8\x66\xd8\x8f\xeb\x5b\xa7\x77\x65\x92\x1f\xa4\x8c\xc9\xba\x77\xe7\xf9\x9e\xff\xed\x9f\xf9\xdf\xfe\x48\x4f\x3e\xfc\x9c\x13\xf0\x8f\xf7\xf5\x7c\xfb\x8b\xca\x1f\x9c\xe6\xfd\xd7\xaf\xab\x2e\xff\xc7\xaf\x73\x97\xff\x7e\xdd\x13\x9e\xe7\x05\xb6\x26\xdb\xe7\x5f\xfd\xd8\xfc\xe1\x29\xcf\x6e\xfa\xfd\x11\xf3\x42\xe9\x29\xbb\x79\xed\x47\xb2\x65\xf9\xcb\xb6\x21\x73\x9b\xd9\xba\xe8\xb0\xdb\x96\x54\x6b\x1a\xa7\xbf\xe4\x1a\xa3\x0d\xb2\xfd\xda\x5f\xb9\xed\xe1\x1a\xf9\x63\x9f\x75\x2f\x05\xcb\xa5\x29\xb4\xd0\xbd\x1c\x98\x2f\xcd\x2d\xb7\xba\x57\xaa\xd3\x64\xc7\x73\xf7\xd8\x9d\x65\xf3\x4c\xcb\xea\xe9\xb6\x35\x77\xb2\x69\x5b\x9a\x64\x77\xa5\x5c\x63\xfe\x34\xe7\x73\xbb\x7b\xa8\x33\xed\xe3\x8e\xdb\xf6\x48\x97\xd9\x13\xf7\x99\xdb\xd3\xd9\x6a\x5e\x3b\xfd\x8e\xfb\x73\xf3\x9b\x12\xec\x68\x77\x7b\x37\xec\x94\x7d\x5b\x9e\x75\xfb\x2b\x36\xca\x81\x75\x8f\xbb\x83\xf9\x95\xd2\x51\xb2\xcc\xfd\x25\xa3\x50\x3a\x7d\xab\xed\xeb\x89\xb3\xe5\x50\x5a\xb1\x3d\xd4\xbf\xd8\x1c\x4e\xc8\xb6\x87\xbb\x67\x99\xbf\xfe\xd2\xd9\xae\x89\x9f\xda\x2e\x33\x20\xa1\x47\x4f\xbb\xa3\xd9\x1f\xc8\xdb\x05\x9d\xee\xd8\x4d\x7b\xe5\x9d\xcc\x46\xd7\xed\x6d\x92\xe3\x49\xeb\xdd\x71\x59\x2b\xef\x46\xb7\xd9\xf7\xf6\x95\xca\xfb\xe1\x5a\xfb\xc1\x8b\xbf\x30\xe1\xe0\x0a\x7b\xa2\xb6\xc4\x48\xc0\x67\x4f\x96\x7f\xcf\x9c\xaa\xce\xb0\x7f\xcf\xbb\xd9\x9c\x2e\x1e\x6d\xcf\x2c\x4e\x97\x48\x6e\x9a\xfb\xc7\x8c\x51\xf2\x61\x4a\xa2\xfb\xe8\xfa\x0b\x72\x76\x68\xaf\x3b\x7b\xf1\xa8\x7c\x1c\x09\xb9\x73\xc7\x76\x49\x4f\x67\x9b\xfb\x64\x6f\xc8\x9c\x6f\xee\xb2\xee\xf9\x36\xd3\xbb\xa1\xd5\x5e\x58\x53\x6f\xfa\x2a\xea\xec\xa7\xcb\x56\x99\xcf\xf2\xab\xec\xc5\xdb\xef\x33\x9f\x67\x14\xd9\xfe\x9b\x1f\x93\x7f\x26\x66\xd9\x4b\xc3\x97\xc8\xa5\xfe\x42\x37\xf0\xe1\x1c\xf9\x57\xf7\x6c\xf7\xc5\xa1\x1b\x25\xba\x7b\xbc\x8b\x6d\x8b\x89\x57\x1f\x75\x83\x6a\x4e\xca\xe0\xc7\xc4\x0d\xb1\x51\x13\x67\xbe\xb0\x57\x2d\x10\x33\x34\xfb\x84\x1d\x36\xe5\x80\x19\xf6\xf4\x58\x7b\xf5\xa8\x2d\xe6\xea\xa6\x8d\x2e\xde\x3d\x65\x46\xc8\x93\x76\xe4\xf1\x17\x24\x61\x5f\x99\xbd\xa6\x7d\x9d\x24\xbe\xb8\xd6\x8d\xfa\xdd\x83\x32\xba\xb6\xd4\x7d\x65\xd5\xf7\xe5\xda\xf2\x5c\x77\xdd\x92\x49\x92\x94\x97\xea\xae\x9f\x35\x52\xbe\x9a\x3a\xc2\x8d\x99\x90\x6a\xc6\x0e\xef\x71\x37\x0c\x1e\x61\x6e\xb8\x10\x6f\xc7\x9d\xec\x31\xe3\xdf\x3a\x67\x6f\xdc\xff\x86\x49\xde\xf1\x37\xfb\xb5\x4d\x3b\xcc\x4d\x95\x9e\xfb\xfa\xcf\x9f\x31\x13\x0a\x4e\xb9\x89\xcb\x9b\xe5\x1b\x99\x1d\xee\x9b\xf3\x7e\x23\xdf\x4a\xda\xea\x52\x26\x3d\x2c\x29\xd1\x83\x36\x35\xbe\x40\x52\xc3\x0d\x76\xd2\xd9\x19\x72\x4b\xb0\xc6\x4e\x0e\xf9\x4d\x5a\xa0\xdc\xa6\xb7\xdc\x66\x32\xaa\xe7\xdb\x29\xcf\x8c\x31\x53\xcb\xca\xdc\xb4\x87\x2e\x99\xe9\xb9\x79\xee\xd6\x85\xef\x9b\xcc\x94\x5b\xdc\x6d\xd3\x82\x66\xc6\xb0\x04\x37\x73\xdc\xbb\x32\x33\x72\xde\xcd\x1a\x78\x4d\x66\x77\xbe\xe5\xe6\xbc\x17\x90\xac\xe6\x4f\xec\xb7\xf7\x3c\x21\xdf\xd9\xf0\xa6\xbd\xfd\xd9\x07\x24\xbb\x62\xa7\xbd\x63\xdd\x1a\xf3\xdd\xfc\x8d\xf6\xce\x92\xfb\xcd\xdc\x8c\x4a\x9b\x33\xf7\x4e\x93\xd3\xd7\xe8\x72\xd3\x96\x49\x6e\x68\xbd\xcb\x4b\xb8\x43\xf2\x5a\x56\xba\x79\x3d\x13\x65\xfe\x73\xb5\xf6\xae\x63\xd9\xc6\xb7\x7a\x85\xbd\x7b\xd7\x04\x93\xbf\xd4\x67\x17\xd4\xc5\x99\x7b\xb2\x32\xec\xc2\x47\x23\x66\xd1\xf8\xd1\xd6\x5f\x70\xc8\xf8\x07\xfa\xec\xbd\x99\xdb\xcd\xbd\x67\x12\x5d\x61\x72\xa7\x1
4\x76\xf4\xba\x25\xd1\x46\x29\xda\x12\x72\xf7\x85\xd7\xcb\xd2\x75\x6d\xee\x07\xc1\x95\x52\x5c\x52\xef\x96\x05\xee\x16\xe3\x6b\xb5\xf7\xd7\xac\x30\x0f\xa4\xd5\xd9\x1f\x96\xf9\x4c\x49\x42\x95\x2d\xcd\xcd\x30\xa5\x3d\x45\xb6\x2c\x65\xb4\x29\x3b\x92\x65\xcb\x87\xf6\x99\xf2\xa6\x64\xbb\x3c\xf2\xb6\x59\x51\x37\xdb\xad\x7c\xa3\x57\x7e\x54\x39\xde\xfd\xb8\x39\x24\x15\x05\x51\x67\x37\xb4\xc9\x4f\x32\xc5\x3d\x54\x51\x2f\x0f\x27\x1d\x70\x8f\xe4\xaf\x92\x47\xa2\x27\x6c\xe5\xcc\x3a\x53\x19\xde\x6f\xab\xc6\x54\x99\xaa\xe0\x66\xbb\xaa\xbf\xc8\xfc\x34\xf0\xa4\xfd\x59\x77\x96\x59\x5d\x5d\x66\x1f\xdf\x9d\x6c\xaa\x8b\xf3\xec\x9a\xdf\x7a\x66\xf0\xa0\x2b\xfc\x3d\x24\xee\x8a\xbf\xe4\xff\xe5\x77\x2c\xf6\x6a\xc0\xf3\x62\xb1\xd7\xf7\x0d\x8a\x8b\xc5\xda\x5f\xf1\x86\xeb\xdf\x47\xea\x9f\xa3\xee\xf2\xf9\xbd\x9c\xb9\x7e\x2f\x67\x91\xdf\xcb\x59\xec\xf7\x72\x16\xf8\x75\xda\x06\xe9\x1f\x57\xb2\x81\xb1\x58\x2c\x56\x3c\xc4\xfd\x1a\xd9\x42\x64\x0f\x91\x4d\x44\x76\x11\xd9\x46\x64\x1f\x91\x8d\x44\x76\x12\xd9\x4a\x64\x2f\x91\xcd\x54\xa3\x0d\xfa\xff\xb3\x9d\x6a\xb8\x46\xdf\x6c\x28\xb2\xa3\xc8\x96\x22\x7b\x8a\x6c\x2a\xb2\xab\xc8\xb6\x22\xfb\x8a\x6c\x2c\xb2\xb3\xc8\xd6\x22\x7b\x8b\x6c\x2e\xb2\xbb\xc8\xf6\xaa\x91\x2e\x7d\xb3\xc1\x6a\x67\xab\xbe\xd9\x62\x64\x8f\x91\x4d\x46\x76\x19\xd9\x66\x64\x9f\x91\x8d\x46\x76\x1a\xd9\x6a\x64\xaf\x91\xcd\x46\x76\x1b\xd9\x6e\xb5\x7f\xb1\xfe\x3c\x36\x5c\xed\x9e\xa5\x6f\xb6\x1c\xd9\x73\xd5\x0c\xe8\xd7\xb1\xeb\xc8\xb6\x23\xfb\x8e\x6c\x3c\xb2\xf3\xc8\xd6\x23\x7b\xaf\xca\x5a\x7d\xb3\xfb\xc8\xf6\x23\xfb\x8f\x34\x00\xe9\x00\xd2\x02\xa4\x07\x48\x13\x90\x2e\x20\x6d\x40\xfa\x80\x34\x02\xe9\x04\xd2\x0a\xa4\x17\x48\x33\x90\x6e\xa8\x17\x8f\xea\x9b\x7e\x20\x0d\x41\x3a\x82\xb4\x04\xe9\x09\xd2\x14\xa4\x2b\x48\x5b\x90\xbe\x20\x8d\x41\x3a\x83\xb4\x06\xe9\x0d\xd2\x1c\xa4\x3b\x48\x7b\xd4\xfe\x42\xfd\x79\x34\x08\xe9\x10\xd2\x22\xd4\x1e\xe9\x3f\xe4\x98\xe8\xa7\xa5\x3e\xea\xf4\x83\x55\x73\x52\xdf\xf4\x09\x69\x14\xd2\x29\xfd\x80\x2d\x10\x7d\xd3\x2b\xa4\x59\xea\xd3\x63\xf5\x4d\xbb\xd4\xa6\x8d\xfa\xf5\x34\x0c\xe9\x18\xd2\x32\xa4\x67\x48\xd3\x90\xae\x21\x6d\x43\xfa\x86\x34\x0e\xe9\x1c\xd2\x3a\xa4\x77\x48\xf3\x90\xee\x21\xed\x43\xfa\x87\x34\x50\xbd\x10\xaf\x3f\x8f\x16\x22\x3d\x44\x9a\x88\x74\x11\x69\x23\xd2\x47\xa4\x91\x48\x27\x91\x56\x22\xbd\x44\x9a\x89\x74\x13\x69\xa7\x1a\x3d\xa8\xdf\x8f\x86\xaa\xe1\x06\x7d\xd3\x52\xa4\xa7\x48\x53\x91\xae\x22\x6d\x45\xfa\x8a\x34\x16\xe9\x2c\xd2\x5a\xa4\xb7\x48\x73\x91\xee\x22\xed\x45\xfa\x8b\x34\x58\x8d\x9c\xd7\x37\x2d\x46\x7a\x8c\x34\x19\xe9\x32\xd2\x66\xa4\xcf\x48\xa3\x91\x4e\x23\xad\x46\x7a\x8d\x34\x1b\xe9\x36\xd2\x6e\xb5\xaf\x51\xbf\x3f\x0d\x57\x43\xeb\xf5\x4d\xcb\xd5\x96\x95\xfa\xa6\xe9\x48\xd7\x91\xb6\x23\x7d\x47\x1a\x8f\x74\x1e\x69\x3d\xd2\x7b\xa4\xf9\x48\xf7\x91\xf6\xab\x03\x7d\xfa\xe6\x06\x50\xcf\x24\xea\xcf\xe3\x16\x50\x3b\x7a\xf5\xcd\x4d\x80\xdc\x05\xc8\x6d\x80\xdc\x07\xc8\x8d\x80\xdc\x09\xc8\xad\x80\xdc\x0b\xc8\xcd\x80\xdc\x0d\xc8\xed\x80\xdc\x0f\xc8\x0d\xa1\xf6\x14\xe9\x9b\x5b\x42\x3d\x92\xa5\x6f\x6e\x0a\xb5\x29\x59\xdf\xdc\x16\xc8\x7d\x81\xdc\x18\xc8\x9d\x81\xdc\x1a\xc8\xbd\x81\xdc\x1c\xc8\xdd\x81\xdc\x1e\xc8\xfd\x81\xdc\x20\x6a\xf4\x3f\x9f\x57\x6e\x11\x35\xbc\x5f\xdf\xdc\x24\x6a\x70\xb3\xbe\xb9\x4d\x90\xfb\x04\xb9\x51\x90\x3b\x05\xb9\x55\x90\x7b\x05\xbf\xbc\x59\xfe\xf7\x9b\x25\x3e\x67\x91\x3f\x33\x67\xae\x7f\xb2\x6f\x7a\xfa\xb4\xf4\x29\x93\x7d\x53\xa7\xa6\x4f\x4d\x9f\x12\xff\xef\x00\x00\x00\xff\xff\x96\x2d\xbf\x9f\xd0\x0d\x00\x00"), }, } fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{ fs["/America"].(os.FileInfo), } fs["/America"].(*vfsgen۰DirInfo).entries = []os.FileInfo{ 
fs["/America/New_York"].(os.FileInfo), } return fs }() type vfsgen۰FS map[string]interface{} func (fs vfsgen۰FS) Open(path string) (http.File, error) { path = pathpkg.Clean("/" + path) f, ok := fs[path] if !ok { return nil, &os.PathError{Op: "open", Path: path, Err: os.ErrNotExist} } switch f := f.(type) { case *vfsgen۰CompressedFileInfo: gr, err := gzip.NewReader(bytes.NewReader(f.compressedContent)) if err != nil { // This should never happen because we generate the gzip bytes such that they are always valid. panic("unexpected error reading own gzip compressed bytes: " + err.Error()) } return &vfsgen۰CompressedFile{ vfsgen۰CompressedFileInfo: f, gr: gr, }, nil case *vfsgen۰DirInfo: return &vfsgen۰Dir{ vfsgen۰DirInfo: f, }, nil default: // This should never happen because we generate only the above types. panic(fmt.Sprintf("unexpected type %T", f)) } } // vfsgen۰CompressedFileInfo is a static definition of a gzip compressed file. type vfsgen۰CompressedFileInfo struct { name string modTime time.Time compressedContent []byte uncompressedSize int64 } func (f *vfsgen۰CompressedFileInfo) Readdir(count int) ([]os.FileInfo, error) { return nil, fmt.Errorf("cannot Readdir from file %s", f.name) } func (f *vfsgen۰CompressedFileInfo) Stat() (os.FileInfo, error) { return f, nil } func (f *vfsgen۰CompressedFileInfo) GzipBytes() []byte { return f.compressedContent } func (f *vfsgen۰CompressedFileInfo) Name() string { return f.name } func (f *vfsgen۰CompressedFileInfo) Size() int64 { return f.uncompressedSize } func (f *vfsgen۰CompressedFileInfo) Mode() os.FileMode { return 0444 } func (f *vfsgen۰CompressedFileInfo) ModTime() time.Time { return f.modTime } func (f *vfsgen۰CompressedFileInfo) IsDir() bool { return false } func (f *vfsgen۰CompressedFileInfo) Sys() interface{} { return nil } // vfsgen۰CompressedFile is an opened compressedFile instance. type vfsgen۰CompressedFile struct { *vfsgen۰CompressedFileInfo gr *gzip.Reader grPos int64 // Actual gr uncompressed position. seekPos int64 // Seek uncompressed position. } func (f *vfsgen۰CompressedFile) Read(p []byte) (n int, err error) { if f.grPos > f.seekPos { // Rewind to beginning. err = f.gr.Reset(bytes.NewReader(f.compressedContent)) if err != nil { return 0, err } f.grPos = 0 } if f.grPos < f.seekPos { // Fast-forward. _, err = io.CopyN(ioutil.Discard, f.gr, f.seekPos-f.grPos) if err != nil { return 0, err } f.grPos = f.seekPos } n, err = f.gr.Read(p) f.grPos += int64(n) f.seekPos = f.grPos return n, err } func (f *vfsgen۰CompressedFile) Seek(offset int64, whence int) (int64, error) { switch whence { case io.SeekStart: f.seekPos = 0 + offset case io.SeekCurrent: f.seekPos += offset case io.SeekEnd: f.seekPos = f.uncompressedSize + offset default: panic(fmt.Errorf("invalid whence value: %v", whence)) } return f.seekPos, nil } func (f *vfsgen۰CompressedFile) Close() error { return f.gr.Close() } // vfsgen۰DirInfo is a static definition of a directory. 
type vfsgen۰DirInfo struct { name string modTime time.Time entries []os.FileInfo } func (d *vfsgen۰DirInfo) Read([]byte) (int, error) { return 0, fmt.Errorf("cannot Read from directory %s", d.name) } func (d *vfsgen۰DirInfo) Close() error { return nil } func (d *vfsgen۰DirInfo) Stat() (os.FileInfo, error) { return d, nil } func (d *vfsgen۰DirInfo) Name() string { return d.name } func (d *vfsgen۰DirInfo) Size() int64 { return 0 } func (d *vfsgen۰DirInfo) Mode() os.FileMode { return 0755 | os.ModeDir } func (d *vfsgen۰DirInfo) ModTime() time.Time { return d.modTime } func (d *vfsgen۰DirInfo) IsDir() bool { return true } func (d *vfsgen۰DirInfo) Sys() interface{} { return nil } // vfsgen۰Dir is an opened dir instance. type vfsgen۰Dir struct { *vfsgen۰DirInfo pos int // Position within entries for Seek and Readdir. } func (d *vfsgen۰Dir) Seek(offset int64, whence int) (int64, error) { if offset == 0 && whence == io.SeekStart { d.pos = 0 return 0, nil } return 0, fmt.Errorf("unsupported Seek in directory %s", d.name) } func (d *vfsgen۰Dir) Readdir(count int) ([]os.FileInfo, error) { if d.pos >= len(d.entries) && count > 0 { return nil, io.EOF } if count <= 0 || count > len(d.entries)-d.pos { count = len(d.entries) - d.pos } e := d.entries[d.pos : d.pos+count] d.pos += count return e, nil } rclone-1.53.3/backend/sharefile/update-timezone.sh000077500000000000000000000004211375552240400220730ustar00rootroot00000000000000#!/bin/bash set -e # Extract just the America/New_York timezone from tzinfo=$(go env GOROOT)/lib/time/zoneinfo.zip rm -rf tzdata mkdir tzdata cd tzdata unzip ${tzinfo} America/New_York cd .. # Make the embedded assets go run generate_tzdata.go # tidy up rm -rf tzdata rclone-1.53.3/backend/sharefile/upload.go000066400000000000000000000150711375552240400202440ustar00rootroot00000000000000// Upload large files for sharefile // // Docs - https://api.sharefile.com/rest/docs/resource.aspx?name=Items#Upload_File package sharefile import ( "bytes" "context" "crypto/md5" "encoding/hex" "encoding/json" "fmt" "io" "strings" "sync" "github.com/pkg/errors" "github.com/rclone/rclone/backend/sharefile/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/rest" ) // largeUpload is used to control the upload of large files which need chunking type largeUpload struct { ctx context.Context f *Fs // parent Fs o *Object // object being uploaded in io.Reader // read the data from here wrap accounting.WrapFn // account parts being transferred size int64 // total size parts int64 // calculated number of parts, if known info *api.UploadSpecification // where to post chunks etc threads int // number of threads to use in upload streamed bool // set if using streamed upload } // newLargeUpload starts an upload of object o from in with metadata in src func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, info *api.UploadSpecification) (up *largeUpload, err error) { size := src.Size() parts := int64(-1) if size >= 0 { parts = size / int64(o.fs.opt.ChunkSize) if size%int64(o.fs.opt.ChunkSize) != 0 { parts++ } } var streamed bool switch strings.ToLower(info.Method) { case "streamed": streamed = true case "threaded": streamed = false default: return nil, errors.Errorf("can't use method %q with newLargeUpload", info.Method) } threads := fs.Config.Transfers if threads > info.MaxNumberOfThreads { threads = info.MaxNumberOfThreads } // unwrap the accounting from the input, we use wrap to 
put it // back on after the buffering in, wrap := accounting.UnWrap(in) up = &largeUpload{ ctx: ctx, f: f, o: o, in: in, wrap: wrap, size: size, threads: threads, info: info, parts: parts, streamed: streamed, } return up, nil } // parse the api.UploadFinishResponse in respBody func (up *largeUpload) parseUploadFinishResponse(respBody []byte) (err error) { var finish api.UploadFinishResponse err = json.Unmarshal(respBody, &finish) if err != nil { // Sometimes the unmarshal fails in which case return the body return errors.Errorf("upload: bad response: %q", bytes.TrimSpace(respBody)) } return up.o.checkUploadResponse(up.ctx, &finish) } // Transfer a chunk func (up *largeUpload) transferChunk(ctx context.Context, part int64, offset int64, body []byte, fileHash string) error { md5sumRaw := md5.Sum(body) md5sum := hex.EncodeToString(md5sumRaw[:]) size := int64(len(body)) // Add some more parameters to the ChunkURI u := up.info.ChunkURI u += fmt.Sprintf("&index=%d&byteOffset=%d&hash=%s&fmt=json", part, offset, md5sum, ) if fileHash != "" { u += fmt.Sprintf("&finish=true&fileSize=%d&fileHash=%s", offset+int64(len(body)), fileHash, ) } opts := rest.Opts{ Method: "POST", RootURL: u, ContentLength: &size, } var respBody []byte err := up.f.pacer.Call(func() (bool, error) { fs.Debugf(up.o, "Sending chunk %d length %d", part, len(body)) opts.Body = up.wrap(bytes.NewReader(body)) resp, err := up.f.srv.Call(ctx, &opts) if err != nil { fs.Debugf(up.o, "Error sending chunk %d: %v", part, err) } else { respBody, err = rest.ReadBody(resp) } // retry all errors now that the multipart upload has started return err != nil, err }) if err != nil { fs.Debugf(up.o, "Error sending chunk %d: %v", part, err) return err } // If last chunk and using "streamed" transfer, get the response back now if up.streamed && fileHash != "" { return up.parseUploadFinishResponse(respBody) } fs.Debugf(up.o, "Done sending chunk %d", part) return nil } // finish closes off the large upload and reads the metadata func (up *largeUpload) finish(ctx context.Context) error { fs.Debugf(up.o, "Finishing large file upload") // For a streamed transfer we will already have read the info if up.streamed { return nil } opts := rest.Opts{ Method: "POST", RootURL: up.info.FinishURI, } var respBody []byte err := up.f.pacer.Call(func() (bool, error) { resp, err := up.f.srv.Call(ctx, &opts) if err != nil { return shouldRetry(resp, err) } respBody, err = rest.ReadBody(resp) // retry all errors now that the multipart upload has started return err != nil, err }) if err != nil { return err } return up.parseUploadFinishResponse(respBody) } // Upload uploads the chunks from the input func (up *largeUpload) Upload(ctx context.Context) error { if up.parts >= 0 { fs.Debugf(up.o, "Starting upload of large file in %d chunks", up.parts) } else { fs.Debugf(up.o, "Starting streaming upload of large file") } var ( offset int64 errs = make(chan error, 1) wg sync.WaitGroup err error wholeFileHash = md5.New() eof = false ) outer: for part := int64(0); !eof; part++ { // Check any errors select { case err = <-errs: break outer default: } // Get a block of memory buf := up.f.getUploadBlock() // Read the chunk var n int n, err = readers.ReadFill(up.in, buf) if err == io.EOF { eof = true buf = buf[:n] err = nil } else if err != nil { up.f.putUploadBlock(buf) break outer } // Hash it _, _ = io.Copy(wholeFileHash, bytes.NewBuffer(buf)) // Get file hash if was last chunk fileHash := "" if eof { fileHash = hex.EncodeToString(wholeFileHash.Sum(nil)) } // Transfer the chunk 
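// (Editorial sketch, not upstream code.) The dispatch below runs chunks
// serially for "streamed" uploads but fans each chunk out to a goroutine
// for "threaded" ones; concurrency is already bounded because every
// iteration first had to win a buffer token from getUploadBlock. The
// shape of the bounded fan-out, with send standing in for the real
// per-chunk transfer:
//
//	buf := <-tokens // at most one buffer per configured transfer in flight
//	go func() {
//		defer func() { tokens <- buf }() // return the token when done
//		send(buf)
//	}()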
wg.Add(1) transferChunk := func(part, offset int64, buf []byte, fileHash string) { defer wg.Done() defer up.f.putUploadBlock(buf) err := up.transferChunk(ctx, part, offset, buf, fileHash) if err != nil { select { case errs <- err: default: } } } if up.streamed { transferChunk(part, offset, buf, fileHash) // streamed } else { go transferChunk(part, offset, buf, fileHash) // multithreaded } offset += int64(n) } wg.Wait() // check size read is correct if eof && err == nil && up.size >= 0 && up.size != offset { err = errors.Errorf("upload: short read: read %d bytes expected %d", offset, up.size) } // read any errors if err == nil { select { case err = <-errs: default: } } // finish regardless of errors finishErr := up.finish(ctx) if err == nil { err = finishErr } return err } rclone-1.53.3/backend/sugarsync/000077500000000000000000000000001375552240400165015ustar00rootroot00000000000000rclone-1.53.3/backend/sugarsync/api/000077500000000000000000000000001375552240400172525ustar00rootroot00000000000000rclone-1.53.3/backend/sugarsync/api/types.go000066400000000000000000000116201375552240400207450ustar00rootroot00000000000000// Package api has type definitions for sugarsync // // Converted from the API docs with help from https://www.onlinetool.io/xmltogo/ package api import ( "encoding/xml" "time" ) // AppAuthorization is used to request a refresh token // // The token is returned in the Location: field type AppAuthorization struct { XMLName xml.Name `xml:"appAuthorization"` Username string `xml:"username"` Password string `xml:"password"` Application string `xml:"application"` AccessKeyID string `xml:"accessKeyId"` PrivateAccessKey string `xml:"privateAccessKey"` } // TokenAuthRequest is the request to get Authorization type TokenAuthRequest struct { XMLName xml.Name `xml:"tokenAuthRequest"` AccessKeyID string `xml:"accessKeyId"` PrivateAccessKey string `xml:"privateAccessKey"` RefreshToken string `xml:"refreshToken"` } // Authorization is returned from the TokenAuthRequest type Authorization struct { XMLName xml.Name `xml:"authorization"` Expiration time.Time `xml:"expiration"` User string `xml:"user"` } // File represents a single file type File struct { Name string `xml:"displayName"` Ref string `xml:"ref"` DsID string `xml:"dsid"` TimeCreated time.Time `xml:"timeCreated"` Parent string `xml:"parent"` Size int64 `xml:"size"` LastModified time.Time `xml:"lastModified"` MediaType string `xml:"mediaType"` PresentOnServer bool `xml:"presentOnServer"` FileData string `xml:"fileData"` Versions string `xml:"versions"` PublicLink PublicLink } // Collection represents // - Workspace Collection // - Sync Folders collection // - Folder type Collection struct { Type string `xml:"type,attr"` Name string `xml:"displayName"` Ref string `xml:"ref"` // only for Folder DsID string `xml:"dsid"` TimeCreated time.Time `xml:"timeCreated"` Parent string `xml:"parent"` Collections string `xml:"collections"` Files string `xml:"files"` Contents string `xml:"contents"` // Sharing bool `xml:"sharing>enabled,attr"` } // CollectionContents is the result of a list call type CollectionContents struct { //XMLName xml.Name `xml:"collectionContents"` Start int `xml:"start,attr"` HasMore bool `xml:"hasMore,attr"` End int `xml:"end,attr"` Collections []Collection `xml:"collection"` Files []File `xml:"file"` } // User is returned from the /user call type User struct { XMLName xml.Name `xml:"user"` Username string `xml:"username"` Nickname string `xml:"nickname"` Quota struct { Limit int64 `xml:"limit"` Usage int64 `xml:"usage"` }
`xml:"quota"` Workspaces string `xml:"workspaces"` SyncFolders string `xml:"syncfolders"` Deleted string `xml:"deleted"` MagicBriefcase string `xml:"magicBriefcase"` WebArchive string `xml:"webArchive"` MobilePhotos string `xml:"mobilePhotos"` Albums string `xml:"albums"` RecentActivities string `xml:"recentActivities"` ReceivedShares string `xml:"receivedShares"` PublicLinks string `xml:"publicLinks"` MaximumPublicLinkSize int `xml:"maximumPublicLinkSize"` } // CreateFolder is posted to a folder URL to create a folder type CreateFolder struct { XMLName xml.Name `xml:"folder"` Name string `xml:"displayName"` } // MoveFolder is posted to a folder URL to move a folder type MoveFolder struct { XMLName xml.Name `xml:"folder"` Name string `xml:"displayName"` Parent string `xml:"parent"` } // CreateSyncFolder is posted to the root folder URL to create a sync folder type CreateSyncFolder struct { XMLName xml.Name `xml:"syncFolder"` Name string `xml:"displayName"` } // CreateFile is posted to a folder URL to create a file type CreateFile struct { XMLName xml.Name `xml:"file"` Name string `xml:"displayName"` MediaType string `xml:"mediaType"` } // MoveFile is posted to a file URL to create a file type MoveFile struct { XMLName xml.Name `xml:"file"` Name string `xml:"displayName"` Parent string `xml:"parent"` } // CopyFile copies a file from source type CopyFile struct { XMLName xml.Name `xml:"fileCopy"` Source string `xml:"source,attr"` Name string `xml:"displayName"` } // PublicLink is the URL and enabled flag for a public link type PublicLink struct { XMLName xml.Name `xml:"publicLink"` URL string `xml:",chardata"` Enabled bool `xml:"enabled,attr"` } // SetPublicLink can be used to enable the file for sharing type SetPublicLink struct { XMLName xml.Name `xml:"file"` PublicLink PublicLink } // SetLastModified sets the modified time for a file type SetLastModified struct { XMLName xml.Name `xml:"file"` LastModified time.Time `xml:"lastModified"` } rclone-1.53.3/backend/sugarsync/sugarsync.go000066400000000000000000001074761375552240400210650ustar00rootroot00000000000000// Package sugarsync provides an interface to the Sugarsync // object storage system. package sugarsync /* FIXME DirMove tests fails with: Can not move sync folder. go test -v -short -run TestIntegration/FsMkdir/FsPutFiles/FsDirMove -verbose -dump-bodies To work around this we use the remote "TestSugarSync:Test" to test with. 
*/ import ( "context" "fmt" "io" "log" "net/http" "net/url" "path" "regexp" "strconv" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/sugarsync/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/rest" ) /* maxFileLength = 16383 canWriteUnnormalized = true canReadUnnormalized = true canReadRenormalized = false canStream = true */ const ( appID = "/sc/9068489/215_1736969337" accessKeyID = "OTA2ODQ4OTE1NzEzNDAwNTI4Njc" encryptedPrivateAccessKey = "JONdXuRLNSRI5ue2Cr-vn-5m_YxyMNq9yHRKUQevqo8uaZjH502Z-x1axhyqOa8cDyldGq08RfFxozo" minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential rootURL = "https://api.sugarsync.com" listChunks = 500 // chunk size to read directory listings expiryLeeway = 5 * time.Minute // time before the token expires to renew ) // withDefault returns value, but if value is "" then it returns defaultValue func withDefault(value, defaultValue string) string { if value == "" { value = defaultValue } return value } // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "sugarsync", Description: "Sugarsync", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { opt := new(Options) err := configstruct.Set(m, opt) if err != nil { log.Fatalf("Failed to read options: %v", err) } if opt.RefreshToken != "" { fmt.Printf("Already have a token - refresh?\n") if !config.ConfirmWithConfig(m, "config_refresh_token", true) { return } } fmt.Printf("Username (email address)> ") username := config.ReadLine() password := config.GetPassword("Your Sugarsync password is only required during setup and will not be stored.") authRequest := api.AppAuthorization{ Username: username, Password: password, Application: withDefault(opt.AppID, appID), AccessKeyID: withDefault(opt.AccessKeyID, accessKeyID), PrivateAccessKey: withDefault(opt.PrivateAccessKey, obscure.MustReveal(encryptedPrivateAccessKey)), } var resp *http.Response opts := rest.Opts{ Method: "POST", Path: "/app-authorization", } srv := rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(rootURL) // FIXME // FIXME //err = f.pacer.Call(func() (bool, error) { resp, err = srv.CallXML(context.Background(), &opts, &authRequest, nil) // return shouldRetry(resp, err) //}) if err != nil { log.Fatalf("Failed to get token: %v", err) } opt.RefreshToken = resp.Header.Get("Location") m.Set("refresh_token", opt.RefreshToken) }, Options: []fs.Option{{ Name: "app_id", Help: "Sugarsync App ID.\n\nLeave blank to use rclone's.", }, { Name: "access_key_id", Help: "Sugarsync Access Key ID.\n\nLeave blank to use rclone's.", }, { Name: "private_access_key", Help: "Sugarsync Private Access Key\n\nLeave blank to use rclone's.", }, { Name: "hard_delete", Help: "Permanently delete files if true\notherwise put them in the deleted files.", Default: false, }, { Name: "refresh_token", Help: "Sugarsync refresh token\n\nLeave blank normally, will be auto configured by rclone.", Advanced: true, }, { Name: "authorization", Help: "Sugarsync authorization\n\nLeave blank normally, will be auto configured by rclone.", Advanced: true, }, { Name:
"authorization_expiry", Help: "Sugarsync authorization expiry\n\nLeave blank normally, will be auto configured by rclone.", Advanced: true, }, { Name: "user", Help: "Sugarsync user\n\nLeave blank normally, will be auto configured by rclone.", Advanced: true, }, { Name: "root_id", Help: "Sugarsync root id\n\nLeave blank normally, will be auto configured by rclone.", Advanced: true, }, { Name: "deleted_id", Help: "Sugarsync deleted folder id\n\nLeave blank normally, will be auto configured by rclone.", Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, Default: (encoder.Base | encoder.EncodeCtl | encoder.EncodeInvalidUtf8), }}, }) } // Options defines the configuration for this backend type Options struct { AppID string `config:"app_id"` AccessKeyID string `config:"access_key_id"` PrivateAccessKey string `config:"private_access_key"` HardDelete bool `config:"hard_delete"` RefreshToken string `config:"refresh_token"` Authorization string `config:"authorization"` AuthorizationExpiry string `config:"authorization_expiry"` User string `config:"user"` RootID string `config:"root_id"` DeletedID string `config:"deleted_id"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote sugarsync type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed options features *fs.Features // optional features srv *rest.Client // the connection to the one drive server dirCache *dircache.DirCache // Map of directory path to directory id pacer *fs.Pacer // pacer for API calls m configmap.Mapper // config file access authMu sync.Mutex // used when doing authorization authExpiry time.Time // time the authorization expires } // Object describes a sugarsync object // // Will definitely have info but maybe not meta type Object struct { fs *Fs // what this object is part of remote string // The remote path hasMetaData bool // whether info below has been set size int64 // size of the object modTime time.Time // modification time of the object id string // ID of the object } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("sugarsync root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // parsePath parses a sugarsync 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests. 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. 
It returns the err as a convenience func shouldRetry(resp *http.Response, err error) (bool, error) { return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // readMetaDataForPath reads the metadata from the path func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.File, err error) { // defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err) leaf, directoryID, err := f.dirCache.FindPath(ctx, path, false) if err != nil { if err == fs.ErrorDirNotFound { return nil, fs.ErrorObjectNotFound } return nil, err } found, err := f.listAll(ctx, directoryID, func(item *api.File) bool { if item.Name == leaf { info = item return true } return false }, nil) if err != nil { return nil, err } if !found { return nil, fs.ErrorObjectNotFound } return info, nil } // readMetaDataForID reads the metadata for a file from the ID func (f *Fs) readMetaDataForID(ctx context.Context, ID string) (info *api.File, err error) { var resp *http.Response opts := rest.Opts{ Method: "GET", RootURL: ID, } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &info) return shouldRetry(resp, err) }) if err != nil { if resp != nil && resp.StatusCode == http.StatusNotFound { return nil, fs.ErrorObjectNotFound } return nil, errors.Wrap(err, "failed to get metadata") } return info, nil } // getAuthToken gets an Auth token from the refresh token func (f *Fs) getAuthToken(ctx context.Context) error { fs.Debugf(f, "Renewing token") var authRequest = api.TokenAuthRequest{ AccessKeyID: withDefault(f.opt.AccessKeyID, accessKeyID), PrivateAccessKey: withDefault(f.opt.PrivateAccessKey, obscure.MustReveal(encryptedPrivateAccessKey)), RefreshToken: f.opt.RefreshToken, } if authRequest.RefreshToken == "" { return errors.New("no refresh token found - run `rclone config reconnect`") } var authResponse api.Authorization var err error var resp *http.Response opts := rest.Opts{ Method: "POST", Path: "/authorization", ExtraHeaders: map[string]string{ "Authorization": "", // unset Authorization }, } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, &authRequest, &authResponse) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "failed to get authorization") } f.opt.Authorization = resp.Header.Get("Location") f.authExpiry = authResponse.Expiration f.opt.User = authResponse.User // Cache the results f.m.Set("authorization", f.opt.Authorization) f.m.Set("authorization_expiry", f.authExpiry.Format(time.RFC3339)) f.m.Set("user", f.opt.User) return nil } // Read the auth from the config file and refresh it if it is expired, setting it in srv func (f *Fs) getAuth(req *http.Request) (err error) { f.authMu.Lock() defer f.authMu.Unlock() ctx := req.Context() // if have auth, check it is in date if f.opt.Authorization == "" || f.opt.User == "" || f.authExpiry.IsZero() || time.Until(f.authExpiry) < expiryLeeway { // Get the auth token f.srv.SetSigner(nil) // temporarily remove the signer so we don't infinitely recurse err = f.getAuthToken(ctx) f.srv.SetSigner(f.getAuth) // replace signer if err != nil { return err } } // Set Authorization header req.Header.Set("Authorization", f.opt.Authorization) return nil } // Read the user info into f func (f *Fs) getUser(ctx context.Context) (user *api.User, err error) { var resp *http.Response opts := rest.Opts{ Method: "GET", Path: "/user", } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &user) return shouldRetry(resp, err) })
if err != nil { return nil, errors.Wrap(err, "failed to get user") } return user, nil } // Read the expiry time from a string func parseExpiry(expiryString string) time.Time { if expiryString == "" { return time.Time{} } expiry, err := time.Parse(time.RFC3339, expiryString) if err != nil { fs.Debugf("sugarsync", "Invalid expiry time %q read from config", expiryString) return time.Time{} } return expiry } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } root = parsePath(root) client := fshttp.NewClient(fs.Config) f := &Fs{ name: name, root: root, opt: *opt, srv: rest.NewClient(client).SetRoot(rootURL), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), m: m, authExpiry: parseExpiry(opt.AuthorizationExpiry), } f.features = (&fs.Features{ CaseInsensitive: true, CanHaveEmptyDirectories: true, }).Fill(f) f.srv.SetSigner(f.getAuth) // use signing hook to get the auth f.srv.SetErrorHandler(errorHandler) // Get rootID if f.opt.RootID == "" { user, err := f.getUser(ctx) if err != nil { return nil, err } f.opt.RootID = user.SyncFolders if strings.HasSuffix(f.opt.RootID, "/contents") { f.opt.RootID = f.opt.RootID[:len(f.opt.RootID)-9] } else { return nil, errors.Errorf("unexpected rootID %q", f.opt.RootID) } // Cache the results f.m.Set("root_id", f.opt.RootID) f.opt.DeletedID = user.Deleted f.m.Set("deleted_id", f.opt.DeletedID) } f.dirCache = dircache.New(root, f.opt.RootID, f) // Find the current root err = f.dirCache.FindRoot(ctx, false) if err != nil { // Assume it is a file newRoot, remote := dircache.SplitPath(root) oldDirCache := f.dirCache f.dirCache = dircache.New(newRoot, f.opt.RootID, f) f.root = newRoot resetF := func() { f.dirCache = oldDirCache f.root = root } // Make new Fs which is the parent err = f.dirCache.FindRoot(ctx, false) if err != nil { // No root so return old f resetF() return f, nil } _, err := f.newObjectWithInfo(ctx, remote, nil) if err != nil { if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f resetF() return f, nil } return nil, err } // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } var findError = regexp.MustCompile(`
<h3>(.*?)</h3>`) // errorHandler parses errors from the body // // Errors seem to be HTML with <h3> containing the error text // // <h3>Can not move sync folder.</h3>
func errorHandler(resp *http.Response) (err error) { body, err := rest.ReadBody(resp) if err != nil { return errors.Wrap(err, "error reading error out of body") } match := findError.FindSubmatch(body) if match == nil || len(match) < 2 || len(match[1]) == 0 { return errors.Errorf("HTTP error %v (%v) returned body: %q", resp.StatusCode, resp.Status, body) } return errors.Errorf("HTTP error %v (%v): %s", resp.StatusCode, resp.Status, match[1]) } // rootSlash returns root with a slash on if it is empty, otherwise empty string func (f *Fs) rootSlash() string { if f.root == "" { return f.root } return f.root + "/" } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.File) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { // Set info err = o.setMetaData(info) } else { err = o.readMetaData(ctx) // reads info and meta, returning an error } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // FindLeaf finds a directory of name leaf in the folder with ID pathID func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { //fs.Debugf(f, "FindLeaf(%q, %q)", pathID, leaf) // Find the leaf in pathID found, err = f.listAll(ctx, pathID, nil, func(item *api.Collection) bool { if item.Name == leaf { pathIDOut = item.Ref return true } return false }) // fs.Debugf(f, ">FindLeaf %q, %v, %v", pathIDOut, found, err) return pathIDOut, found, err } // CreateDir makes a directory with pathID as parent and name leaf func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) { // fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf) var resp *http.Response opts := rest.Opts{ Method: "POST", RootURL: pathID, NoResponse: true, } var mkdir interface{} if pathID == f.opt.RootID { // folders at the root are syncFolders mkdir = &api.CreateSyncFolder{ Name: f.opt.Enc.FromStandardName(leaf), } opts.ExtraHeaders = map[string]string{ "*X-SugarSync-API-Version": "1.5", // non canonical header } } else { mkdir = &api.CreateFolder{ Name: f.opt.Enc.FromStandardName(leaf), } } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, mkdir, nil) return shouldRetry(resp, err) }) if err != nil { return "", err } newID = resp.Header.Get("Location") if newID == "" { // look up ID if not returned (eg for syncFolder) var found bool newID, found, err = f.FindLeaf(ctx, pathID, leaf) if err != nil { return "", err } if !found { return "", errors.Errorf("couldn't find ID for newly created directory %q", leaf) } } return newID, nil } // list the objects into the function supplied // // Should return true to finish processing type listAllFileFn func(*api.File) bool // list the folders into the function supplied // // Should return true to finish processing type listAllFolderFn func(*api.Collection) bool // Lists the directory required calling the user function on each item found // // If the user fn ever returns true then it early exits with found = true func (f *Fs) listAll(ctx context.Context, dirID string, fileFn listAllFileFn, folderFn listAllFolderFn) (found bool, err error) { opts := rest.Opts{ Method: "GET", RootURL: dirID, Path: "/contents", 
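// Parameters carries the paging query - "max" and "start" are set per chunk below.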
Parameters: url.Values{}, } opts.Parameters.Set("max", strconv.Itoa(listChunks)) start := 0 OUTER: for { opts.Parameters.Set("start", strconv.Itoa(start)) var result api.CollectionContents var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &result) return shouldRetry(resp, err) }) if err != nil { return found, errors.Wrap(err, "couldn't list files") } if fileFn != nil { for i := range result.Files { item := &result.Files[i] item.Name = f.opt.Enc.ToStandardName(item.Name) if fileFn(item) { found = true break OUTER } } } if folderFn != nil { for i := range result.Collections { item := &result.Collections[i] item.Name = f.opt.Enc.ToStandardName(item.Name) if folderFn(item) { found = true break OUTER } } } if !result.HasMore { break } start = result.End + 1 } return } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { directoryID, err := f.dirCache.FindDir(ctx, dir, false) if err != nil { return nil, err } var iErr error _, err = f.listAll(ctx, directoryID, func(info *api.File) bool { remote := path.Join(dir, info.Name) o, err := f.newObjectWithInfo(ctx, remote, info) if err != nil { iErr = err return true } entries = append(entries, o) return false }, func(info *api.Collection) bool { remote := path.Join(dir, info.Name) id := info.Ref // cache the directory ID for later lookups f.dirCache.Put(remote, id) d := fs.NewDir(remote, info.TimeCreated).SetID(id) entries = append(entries, d) return false }) if err != nil { return nil, err } if iErr != nil { return nil, iErr } return entries, nil } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Returns the object, leaf, directoryID and error // // Used to create new objects func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) { // Create the directory for the object if it doesn't exist leaf, directoryID, err = f.dirCache.FindPath(ctx, remote, true) if err != nil { return } // Temporary Object under construction o = &Object{ fs: f, remote: remote, } return o, leaf, directoryID, nil } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil) switch err { case nil: return existingObj, existingObj.Update(ctx, in, src, options...) case fs.ErrorObjectNotFound: // Not found so create it return f.PutUnchecked(ctx, in, src, options...) default: return nil, err } } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) 
} // PutUnchecked the object into the container // // This will produce an error if the object already exists // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { remote := src.Remote() size := src.Size() modTime := src.ModTime(ctx) o, _, _, err := f.createObject(ctx, remote, modTime, size) if err != nil { return nil, err } return o, o.Update(ctx, in, src, options...) } // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { _, err := f.dirCache.FindDir(ctx, dir, true) return err } // delete removes an object or directory by ID either putting it // in the Deleted files or deleting it permanently func (f *Fs) delete(ctx context.Context, isFile bool, id string, remote string, hardDelete bool) (err error) { if hardDelete { opts := rest.Opts{ Method: "DELETE", RootURL: id, NoResponse: true, } return f.pacer.Call(func() (bool, error) { resp, err := f.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) } // Move file/dir to deleted files if not hard delete leaf := path.Base(remote) if isFile { _, err = f.moveFile(ctx, id, leaf, f.opt.DeletedID) } else { err = f.moveDir(ctx, id, leaf, f.opt.DeletedID) } return err } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { root := path.Join(f.root, dir) if root == "" { return errors.New("can't purge root directory") } dc := f.dirCache directoryID, err := dc.FindDir(ctx, dir, false) if err != nil { return err } if check { found, err := f.listAll(ctx, directoryID, func(item *api.File) bool { return true }, func(item *api.Collection) bool { return true }) if err != nil { return err } if found { return fs.ErrorDirectoryNotEmpty } } err = f.delete(ctx, false, directoryID, root, f.opt.HardDelete || check) if err != nil { return err } f.dirCache.FlushDir(dir) return nil } // Rmdir deletes the root folder // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision return the precision of this Fs func (f *Fs) Precision() time.Duration { return fs.ModTimeNotSupported } // Copy src to this remote using server side copy operations. 
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } err := srcObj.readMetaData(ctx) if err != nil { return nil, err } srcPath := srcObj.fs.rootSlash() + srcObj.remote dstPath := f.rootSlash() + remote if strings.ToLower(srcPath) == strings.ToLower(dstPath) { return nil, errors.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath) } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // Copy the object opts := rest.Opts{ Method: "POST", RootURL: directoryID, NoResponse: true, } copyFile := api.CopyFile{ Name: f.opt.Enc.FromStandardName(leaf), Source: srcObj.id, } var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, &copyFile, nil) return shouldRetry(resp, err) }) if err != nil { return nil, err } dstObj.id = resp.Header.Get("Location") err = dstObj.readMetaData(ctx) if err != nil { return nil, err } return dstObj, nil } // Purge deletes all the files in the directory // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { // Caution: Deleting a folder may orphan objects. It's important // to remove the contents of the folder before you delete the // folder. That's because removing a folder using DELETE does not // remove the objects contained within the folder. If you delete // a folder without first deleting its contents, the contents may // be rendered inaccessible. // // An alternative to permanently deleting a folder is moving it to the // Deleted Files folder. A folder (and all its contents) in the // Deleted Files folder can be recovered. Your app can retrieve the // link to the user's Deleted Files folder from the <deleted> element // in the user resource representation. Your application can then move // a folder to the Deleted Files folder by issuing an HTTP PUT request // to the URL that represents the file resource and provide as input, // XML that specifies in the <parent> element the link to the Deleted // Files folder. if f.opt.HardDelete { return fs.ErrorCantPurge } return f.purgeCheck(ctx, dir, false) } // moveFile moves a file server side func (f *Fs) moveFile(ctx context.Context, id, leaf, directoryID string) (info *api.File, err error) { opts := rest.Opts{ Method: "PUT", RootURL: id, } move := api.MoveFile{ Name: f.opt.Enc.FromStandardName(leaf), Parent: directoryID, } var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, &move, &info) return shouldRetry(resp, err) }) if err != nil { return nil, err } // The docs say that there is nothing returned but apparently // there is...
however it doesn't have Ref // // If ref not set, assume it hasn't changed if info.Ref == "" { info.Ref = id } return info, nil } // moveDir moves a folder server side func (f *Fs) moveDir(ctx context.Context, id, leaf, directoryID string) (err error) { // Move the object opts := rest.Opts{ Method: "PUT", RootURL: id, NoResponse: true, } move := api.MoveFolder{ Name: f.opt.Enc.FromStandardName(leaf), Parent: directoryID, } var resp *http.Response return f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, &move, nil) return shouldRetry(resp, err) }) } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } // Create temporary object dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size) if err != nil { return nil, err } // Do the move info, err := f.moveFile(ctx, srcObj.id, leaf, directoryID) if err != nil { return nil, err } err = dstObj.setMetaData(info) if err != nil { return nil, err } return dstObj, nil } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcID, _, _, dstDirectoryID, dstLeaf, err := f.dirCache.DirMove(ctx, srcFs.dirCache, srcFs.root, srcRemote, f.root, dstRemote) if err != nil { return err } // Do the move err = f.moveDir(ctx, srcID, dstLeaf, dstDirectoryID) if err != nil { return err } srcFs.dirCache.FlushDir(srcRemote) return nil } // PublicLink adds a "readable by anyone with link" permission on the given file or folder. func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) { obj, err := f.NewObject(ctx, remote) if err != nil { return "", err } o, ok := obj.(*Object) if !ok { return "", errors.New("internal error: not an Object") } opts := rest.Opts{ Method: "PUT", RootURL: o.id, } linkFile := api.SetPublicLink{ PublicLink: api.PublicLink{Enabled: true}, } var resp *http.Response var info *api.File err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, &linkFile, &info) return shouldRetry(resp, err) }) if err != nil { return "", err } return info.PublicLink.URL, err } // DirCacheFlush resets the directory cache - used in testing as an // optional interface func (f *Fs) DirCacheFlush() { f.dirCache.ResetRoot() } // Hashes returns the supported hash sets. 
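// SugarSync does not expose checksums for objects, so this is hash.None.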
func (f *Fs) Hashes() hash.Set { return hash.Set(hash.None) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the SHA-1 of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { return "", hash.ErrUnsupported } // Size returns the size of an object in bytes func (o *Object) Size() int64 { err := o.readMetaData(context.TODO()) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return 0 } return o.size } // setMetaData sets the metadata from info func (o *Object) setMetaData(info *api.File) (err error) { o.hasMetaData = true o.size = info.Size o.modTime = info.LastModified if info.Ref != "" { o.id = info.Ref } else if o.id == "" { return errors.New("no ID found in response") } return nil } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData(ctx context.Context) (err error) { if o.hasMetaData { return nil } var info *api.File if o.id != "" { info, err = o.fs.readMetaDataForID(ctx, o.id) } else { info, err = o.fs.readMetaDataForPath(ctx, o.remote) } if err != nil { return err } return o.setMetaData(info) } // ModTime returns the modification time of the object // // // It attempts to read the objects mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { // Sugarsync doesn't support setting the mod time. // // In theory (but not in the docs) you could patch the object, // however it doesn't work. 
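// (api.SetLastModified models the XML body such a patch would carry, but the server apparently ignores it.)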
return fs.ErrorCantSetModTime } // Storable returns a boolean showing whether this object storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { if o.id == "" { return nil, errors.New("can't download - no id") } fs.FixRangeOption(options, o.size) var resp *http.Response opts := rest.Opts{ Method: "GET", RootURL: o.id, Path: "/data", Options: options, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return nil, err } return resp.Body, err } // createFile makes an (empty) file with pathID as parent and name leaf and returns the ID func (f *Fs) createFile(ctx context.Context, pathID, leaf, mimeType string) (newID string, err error) { var resp *http.Response opts := rest.Opts{ Method: "POST", RootURL: pathID, NoResponse: true, } mkdir := api.CreateFile{ Name: f.opt.Enc.FromStandardName(leaf), MediaType: mimeType, } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, &mkdir, nil) return shouldRetry(resp, err) }) if err != nil { return "", err } return resp.Header.Get("Location"), nil } // Update the object with the contents of the io.Reader, modTime and size // // If existing is set then it updates the object rather than creating a new one // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { size := src.Size() // modTime := src.ModTime(ctx) remote := o.Remote() // Create the directory for the object if it doesn't exist leaf, directoryID, err := o.fs.dirCache.FindPath(ctx, remote, true) if err != nil { return err } // if file doesn't exist, create it if o.id == "" { o.id, err = o.fs.createFile(ctx, directoryID, leaf, fs.MimeType(ctx, src)) if err != nil { return errors.Wrap(err, "failed to create file") } if o.id == "" { return errors.New("failed to create file: no ID") } // if created the file and returning an error then delete the file defer func() { if err != nil { delErr := o.fs.delete(ctx, true, o.id, remote, o.fs.opt.HardDelete) if delErr != nil { fs.Errorf(o, "failed to remove failed upload: %v", delErr) } } }() } var resp *http.Response opts := rest.Opts{ Method: "PUT", RootURL: o.id, Path: "/data", NoResponse: true, Options: options, Body: in, } if size >= 0 { opts.ContentLength = &size } err = o.fs.pacer.CallNoRetry(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "failed to upload file") } o.hasMetaData = false return o.readMetaData(ctx) } // Remove an object func (o *Object) Remove(ctx context.Context) error { return o.fs.delete(ctx, true, o.id, o.remote, o.fs.opt.HardDelete) } // ID returns the ID of the Object if known, or "" if not func (o *Object) ID() string { return o.id } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.PutStreamer = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.IDer = (*Object)(nil) ) rclone-1.53.3/backend/sugarsync/sugarsync_internal_test.go000066400000000000000000000024501375552240400240020ustar00rootroot00000000000000package sugarsync import ( "bytes" "io/ioutil" "net/http" "testing" "github.com/stretchr/testify/assert" ) 
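// TestFindErrorSketch is an illustrative addition (not part of the
// original suite): it pins down that findError captures only the first
// <h3>...</h3> pair, so surrounding markup such as an <h1> title does
// not end up in the extracted message.
func TestFindErrorSketch(t *testing.T) {
	match := findError.FindSubmatch([]byte("<h1>oops</h1>\n<h3>Can not move sync folder.</h3>"))
	if assert.NotNil(t, match) {
		assert.Equal(t, "Can not move sync folder.", string(match[1]))
	}
}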
func TestErrorHandler(t *testing.T) { for _, test := range []struct { name string body string code int status string want string }{ { name: "empty", body: "", code: 500, status: "internal error", want: `HTTP error 500 (internal error) returned body: ""`, }, { name: "unknown", body: "
<h1>unknown</h1>", code: 500, status: "internal error", want: `HTTP error 500 (internal error) returned body: "<h1>unknown</h1>"`, }, { name: "blank", body: "Nothing here<h3></h3>", code: 500, status: "internal error", want: `HTTP error 500 (internal error) returned body: "Nothing here<h3></h3>"`, }, { name: "real", body: "<h1>an error</h1>\n<h3>Can not move sync folder.</h3>\n<h4>more stuff</h4>
", code: 500, status: "internal error", want: `HTTP error 500 (internal error): Can not move sync folder.`, }, } { t.Run(test.name, func(t *testing.T) { resp := http.Response{ Body: ioutil.NopCloser(bytes.NewBufferString(test.body)), StatusCode: test.code, Status: test.status, } got := errorHandler(&resp) assert.Equal(t, test.want, got.Error()) }) } } rclone-1.53.3/backend/sugarsync/sugarsync_test.go000066400000000000000000000006021375552240400221030ustar00rootroot00000000000000// Test Sugarsync filesystem interface package sugarsync_test import ( "testing" "github.com/rclone/rclone/backend/sugarsync" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestSugarSync:Test", NilObject: (*sugarsync.Object)(nil), }) } rclone-1.53.3/backend/swift/000077500000000000000000000000001375552240400156175ustar00rootroot00000000000000rclone-1.53.3/backend/swift/auth.go000066400000000000000000000035671375552240400171220ustar00rootroot00000000000000package swift import ( "net/http" "time" "github.com/ncw/swift" ) // auth is an authenticator for swift. It overrides the StorageUrl // and AuthToken with fixed values. type auth struct { parentAuth swift.Authenticator storageURL string authToken string } // newAuth creates a swift authenticator wrapper to override the // StorageUrl and AuthToken values. // // Note that parentAuth can be nil func newAuth(parentAuth swift.Authenticator, storageURL string, authToken string) *auth { return &auth{ parentAuth: parentAuth, storageURL: storageURL, authToken: authToken, } } // Request creates an http.Request for the auth - return nil if not needed func (a *auth) Request(c *swift.Connection) (*http.Request, error) { if a.parentAuth == nil { return nil, nil } return a.parentAuth.Request(c) } // Response parses the http.Response func (a *auth) Response(resp *http.Response) error { if a.parentAuth == nil { return nil } return a.parentAuth.Response(resp) } // The public storage URL - set Internal to true to read // internal/service net URL func (a *auth) StorageUrl(Internal bool) string { // nolint if a.storageURL != "" { return a.storageURL } if a.parentAuth == nil { return "" } return a.parentAuth.StorageUrl(Internal) } // The access token func (a *auth) Token() string { if a.authToken != "" { return a.authToken } if a.parentAuth == nil { return "" } return a.parentAuth.Token() } // Expires returns the time the token expires if known or Zero if not. 
func (a *auth) Expires() (t time.Time) { if do, ok := a.parentAuth.(swift.Expireser); ok { t = do.Expires() } return t } // The CDN url if available func (a *auth) CdnUrl() string { // nolint if a.parentAuth == nil { return "" } return a.parentAuth.CdnUrl() } // Check the interfaces are satisfied var ( _ swift.Authenticator = (*auth)(nil) _ swift.Expireser = (*auth)(nil) ) rclone-1.53.3/backend/swift/swift.go000066400000000000000000001246511375552240400173130ustar00rootroot00000000000000// Package swift provides an interface to the Swift object storage system package swift import ( "bufio" "bytes" "context" "fmt" "io" "net/url" "path" "strconv" "strings" "time" "github.com/ncw/swift" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/bucket" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" ) // Constants const ( directoryMarkerContentType = "application/directory" // content type of directory marker objects listChunks = 1000 // chunk size to read directory listings defaultChunkSize = 5 * fs.GibiByte minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep. ) // SharedOptions are shared between swift and hubic var SharedOptions = []fs.Option{{ Name: "chunk_size", Help: `Above this size files will be chunked into a _segments container. Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.`, Default: defaultChunkSize, Advanced: true, }, { Name: "no_chunk", Help: `Don't chunk files during streaming upload. When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM. Rclone will still chunk files bigger than chunk_size when doing normal copy operations.`, Default: false, Advanced: true, }, { Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, Default: (encoder.EncodeInvalidUtf8 | encoder.EncodeSlash), }} // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "swift", Description: "OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)", NewFs: NewFs, Options: append([]fs.Option{{ Name: "env_auth", Help: "Get swift credentials from environment variables in standard OpenStack form.", Default: false, Examples: []fs.OptionExample{ { Value: "false", Help: "Enter swift credentials in the next step", }, { Value: "true", Help: "Get swift credentials from environment vars. 
Leave other fields blank if using this.", }, }, }, { Name: "user", Help: "User name to log in (OS_USERNAME).", }, { Name: "key", Help: "API key or password (OS_PASSWORD).", }, { Name: "auth", Help: "Authentication URL for server (OS_AUTH_URL).", Examples: []fs.OptionExample{{ Help: "Rackspace US", Value: "https://auth.api.rackspacecloud.com/v1.0", }, { Help: "Rackspace UK", Value: "https://lon.auth.api.rackspacecloud.com/v1.0", }, { Help: "Rackspace v2", Value: "https://identity.api.rackspacecloud.com/v2.0", }, { Help: "Memset Memstore UK", Value: "https://auth.storage.memset.com/v1.0", }, { Help: "Memset Memstore UK v2", Value: "https://auth.storage.memset.com/v2.0", }, { Help: "OVH", Value: "https://auth.cloud.ovh.net/v3", }}, }, { Name: "user_id", Help: "User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).", }, { Name: "domain", Help: "User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)", }, { Name: "tenant", Help: "Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)", }, { Name: "tenant_id", Help: "Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)", }, { Name: "tenant_domain", Help: "Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)", }, { Name: "region", Help: "Region name - optional (OS_REGION_NAME)", }, { Name: "storage_url", Help: "Storage URL - optional (OS_STORAGE_URL)", }, { Name: "auth_token", Help: "Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)", }, { Name: "application_credential_id", Help: "Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)", }, { Name: "application_credential_name", Help: "Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)", }, { Name: "application_credential_secret", Help: "Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)", }, { Name: "auth_version", Help: "AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)", Default: 0, }, { Name: "endpoint_type", Help: "Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)", Default: "public", Examples: []fs.OptionExample{{ Help: "Public (default, choose this if not sure)", Value: "public", }, { Help: "Internal (use internal service net)", Value: "internal", }, { Help: "Admin", Value: "admin", }}, }, { Name: "storage_policy", Help: `The storage policy to use when creating a new container This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. 
The allowed configuration values and their meaning depend on your Swift storage provider.`, Default: "", Examples: []fs.OptionExample{{ Help: "Default", Value: "", }, { Help: "OVH Public Cloud Storage", Value: "pcs", }, { Help: "OVH Public Cloud Archive", Value: "pca", }}, }}, SharedOptions...), }) } // Options defines the configuration for this backend type Options struct { EnvAuth bool `config:"env_auth"` User string `config:"user"` Key string `config:"key"` Auth string `config:"auth"` UserID string `config:"user_id"` Domain string `config:"domain"` Tenant string `config:"tenant"` TenantID string `config:"tenant_id"` TenantDomain string `config:"tenant_domain"` Region string `config:"region"` StorageURL string `config:"storage_url"` AuthToken string `config:"auth_token"` AuthVersion int `config:"auth_version"` ApplicationCredentialID string `config:"application_credential_id"` ApplicationCredentialName string `config:"application_credential_name"` ApplicationCredentialSecret string `config:"application_credential_secret"` StoragePolicy string `config:"storage_policy"` EndpointType string `config:"endpoint_type"` ChunkSize fs.SizeSuffix `config:"chunk_size"` NoChunk bool `config:"no_chunk"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote swift server type Fs struct { name string // name of this remote root string // the path we are working on if any features *fs.Features // optional features opt Options // options for this backend c *swift.Connection // the connection to the swift server rootContainer string // container part of root (if any) rootDirectory string // directory part of root (if any) cache *bucket.Cache // cache of container status noCheckContainer bool // don't check the container before creating it pacer *fs.Pacer // To pace the API calls } // Object describes a swift object // // Will definitely have info but maybe not meta type Object struct { fs *Fs // what this object is part of remote string // The remote path size int64 lastModified time.Time contentType string md5 string headers swift.Headers // The object headers if known } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { if f.rootContainer == "" { return fmt.Sprintf("Swift root") } if f.rootDirectory == "" { return fmt.Sprintf("Swift container %s", f.rootContainer) } return fmt.Sprintf("Swift container %s path %s", f.rootContainer, f.rootDirectory) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 401, // Unauthorized (eg "Token has expired") 408, // Request Timeout 409, // Conflict - various states that could be resolved on a retry 429, // Rate exceeded. 500, // Get occasional 500 Internal Server Error 503, // Service Unavailable/Slow Down - "Reduce your request rate" 504, // Gateway Time-out } // shouldRetry returns a boolean as to whether this err deserves to be // retried. 
It returns the err as a convenience func shouldRetry(err error) (bool, error) { // If this is a swift.Error object extract the HTTP error code if swiftError, ok := err.(*swift.Error); ok { for _, e := range retryErrorCodes { if swiftError.StatusCode == e { return true, err } } } // Check for generic failure conditions return fserrors.ShouldRetry(err), err } // shouldRetryHeaders returns a boolean as to whether this err // deserves to be retried. It reads the headers passed in looking for // `Retry-After`. It returns the err as a convenience func shouldRetryHeaders(headers swift.Headers, err error) (bool, error) { if swiftError, ok := err.(*swift.Error); ok && swiftError.StatusCode == 429 { if value := headers["Retry-After"]; value != "" { retryAfter, parseErr := strconv.Atoi(value) if parseErr != nil { fs.Errorf(nil, "Failed to parse Retry-After: %q: %v", value, parseErr) } else { duration := time.Second * time.Duration(retryAfter) if duration <= 60*time.Second { // Do a short sleep immediately fs.Debugf(nil, "Sleeping for %v to obey Retry-After", duration) time.Sleep(duration) return true, err } // Delay a long sleep for a retry return false, fserrors.NewErrorRetryAfter(duration) } } } return shouldRetry(err) } // parsePath parses a remote 'url' func parsePath(path string) (root string) { root = strings.Trim(path, "/") return } // split returns container and containerPath from the rootRelativePath // relative to f.root func (f *Fs) split(rootRelativePath string) (container, containerPath string) { container, containerPath = bucket.Split(path.Join(f.root, rootRelativePath)) return f.opt.Enc.FromStandardName(container), f.opt.Enc.FromStandardPath(containerPath) } // split returns container and containerPath from the object func (o *Object) split() (container, containerPath string) { return o.fs.split(o.remote) } // swiftConnection makes a connection to swift func swiftConnection(opt *Options, name string) (*swift.Connection, error) { c := &swift.Connection{ // Keep these in the same order as the Config for ease of checking UserName: opt.User, ApiKey: opt.Key, AuthUrl: opt.Auth, UserId: opt.UserID, Domain: opt.Domain, Tenant: opt.Tenant, TenantId: opt.TenantID, TenantDomain: opt.TenantDomain, Region: opt.Region, StorageUrl: opt.StorageURL, AuthToken: opt.AuthToken, AuthVersion: opt.AuthVersion, ApplicationCredentialId: opt.ApplicationCredentialID, ApplicationCredentialName: opt.ApplicationCredentialName, ApplicationCredentialSecret: opt.ApplicationCredentialSecret, EndpointType: swift.EndpointType(opt.EndpointType), ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport Timeout: 10 * fs.Config.Timeout, // Use the timeouts in the transport Transport: fshttp.NewTransport(fs.Config), } if opt.EnvAuth { err := c.ApplyEnvironment() if err != nil { return nil, errors.Wrap(err, "failed to read environment variables") } } StorageUrl, AuthToken := c.StorageUrl, c.AuthToken // nolint if !c.Authenticated() { if (c.ApplicationCredentialId != "" || c.ApplicationCredentialName != "") && c.ApplicationCredentialSecret == "" { if c.UserName == "" && c.UserId == "" { return nil, errors.New("user name or user id not found for authentication (and no storage_url+auth_token is provided)") } if c.ApiKey == "" { return nil, errors.New("key not found") } } if c.AuthUrl == "" { return nil, errors.New("auth not found") } err := c.Authenticate() // fills in c.StorageUrl and c.AuthToken if err != nil { return nil, err } } // Make sure we re-auth with the AuthToken and StorageUrl // provided 
by wrapping the existing auth, so we can just // override one or the other or both. if StorageUrl != "" || AuthToken != "" { // Re-write StorageURL and AuthToken if they are being // overridden as c.Authenticate above will have // overwritten them. if StorageUrl != "" { c.StorageUrl = StorageUrl } if AuthToken != "" { c.AuthToken = AuthToken } c.Auth = newAuth(c.Auth, StorageUrl, AuthToken) } return c, nil } func checkUploadChunkSize(cs fs.SizeSuffix) error { const minChunkSize = fs.Byte if cs < minChunkSize { return errors.Errorf("%s is less than %s", cs, minChunkSize) } return nil } func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) { err = checkUploadChunkSize(cs) if err == nil { old, f.opt.ChunkSize = f.opt.ChunkSize, cs } return } // setRoot changes the root of the Fs func (f *Fs) setRoot(root string) { f.root = parsePath(root) f.rootContainer, f.rootDirectory = bucket.Split(f.root) } // NewFsWithConnection constructs an Fs from the path, container:path // and authenticated connection. // // if noCheckContainer is set then the Fs won't check the container // exists before creating it. func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, noCheckContainer bool) (fs.Fs, error) { f := &Fs{ name: name, opt: *opt, c: c, noCheckContainer: noCheckContainer, pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))), cache: bucket.NewCache(), } f.setRoot(root) f.features = (&fs.Features{ ReadMimeType: true, WriteMimeType: true, BucketBased: true, BucketBasedRootOK: true, SlowModTime: true, }).Fill(f) if f.rootContainer != "" && f.rootDirectory != "" { // Check to see if the object exists - ignoring directory markers var info swift.Object var err error encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory) err = f.pacer.Call(func() (bool, error) { var rxHeaders swift.Headers info, rxHeaders, err = f.c.Object(f.rootContainer, encodedDirectory) return shouldRetryHeaders(rxHeaders, err) }) if err == nil && info.ContentType != directoryMarkerContentType { newRoot := path.Dir(f.root) if newRoot == "." { newRoot = "" } f.setRoot(newRoot) // return an error with an fs which points to the parent return f, fs.ErrorIsFile } } return f, nil } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } err = checkUploadChunkSize(opt.ChunkSize) if err != nil { return nil, errors.Wrap(err, "swift: chunk size") } c, err := swiftConnection(opt, name) if err != nil { return nil, err } return NewFsWithConnection(opt, name, root, c, false) } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. func (f *Fs) newObjectWithInfo(remote string, info *swift.Object) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } // Note that due to a quirk of swift, dynamic large objects are // returned as 0 bytes in the listing. Correct this here by // making sure we read the full metadata for all 0 byte files. // We don't read the metadata for directory marker objects. 
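// (A dynamic large object is a zero byte manifest whose
// X-Object-Manifest header points at its segment objects, so only a
// HEAD on the object itself reports the assembled size.)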
if info != nil && info.Bytes == 0 && info.ContentType != "application/directory" { err := o.readMetaData() // reads info and headers, returning an error if err == fs.ErrorObjectNotFound { // We have a dangling large object here so just return the original metadata fs.Errorf(o, "dangling large object with no contents") } else if err != nil { return nil, err } else { return o, nil } } if info != nil { // Set info but not headers err := o.decodeMetaData(info) if err != nil { return nil, err } } else { err := o.readMetaData() // reads info and headers, returning an error if err != nil { return nil, err } } return o, nil } // NewObject finds the Object at remote. If it can't be found it // returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(remote, nil) } // listFn is called from list and listContainerRoot to handle an object. type listFn func(remote string, object *swift.Object, isDirectory bool) error // listContainerRoot lists the objects into the function supplied from // the container and directory supplied. The remote has prefix // removed from it and if addContainer is set then it adds the // container to the start. // // Set recurse to read sub directories func (f *Fs) listContainerRoot(container, directory, prefix string, addContainer bool, recurse bool, includeDirMarkers bool, fn listFn) error { if prefix != "" && !strings.HasSuffix(prefix, "/") { prefix += "/" } if directory != "" && !strings.HasSuffix(directory, "/") { directory += "/" } // Options for ObjectsWalk opts := swift.ObjectsOpts{ Prefix: directory, Limit: listChunks, } if !recurse { opts.Delimiter = '/' } return f.c.ObjectsWalk(container, &opts, func(opts *swift.ObjectsOpts) (interface{}, error) { var objects []swift.Object var err error err = f.pacer.Call(func() (bool, error) { objects, err = f.c.Objects(container, opts) return shouldRetry(err) }) if err == nil { for i := range objects { object := &objects[i] isDirectory := false if !recurse { isDirectory = strings.HasSuffix(object.Name, "/") } remote := f.opt.Enc.ToStandardPath(object.Name) if !strings.HasPrefix(remote, prefix) { fs.Logf(f, "Odd name received %q", remote) continue } if !includeDirMarkers && remote == prefix { // If we have zero length directory markers ending in / then swift // will return them in the listing for the directory which causes // duplicate directories. Ignore them here. 
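// For example, when listing "photos/" the zero length marker object
// named "photos/" is returned as part of its own listing - without
// this check it would surface as a duplicate, empty named entry.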
continue } remote = remote[len(prefix):] if addContainer { remote = path.Join(container, remote) } err = fn(remote, object, isDirectory) if err != nil { break } } } return objects, err }) } type addEntryFn func(fs.DirEntry) error // list the objects into the function supplied func (f *Fs) list(container, directory, prefix string, addContainer bool, recurse bool, includeDirMarkers bool, fn addEntryFn) error { err := f.listContainerRoot(container, directory, prefix, addContainer, recurse, includeDirMarkers, func(remote string, object *swift.Object, isDirectory bool) (err error) { if isDirectory { remote = strings.TrimRight(remote, "/") d := fs.NewDir(remote, time.Time{}).SetSize(object.Bytes) err = fn(d) } else { // newObjectWithInfo does a full metadata read on 0 size objects which might be dynamic large objects var o fs.Object o, err = f.newObjectWithInfo(remote, object) if err != nil { return err } if includeDirMarkers || o.Storable() { err = fn(o) } } return err }) if err == swift.ContainerNotFound { err = fs.ErrorDirNotFound } return err } // listDir lists a single directory func (f *Fs) listDir(container, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) { if container == "" { return nil, fs.ErrorListBucketRequired } // List the objects err = f.list(container, directory, prefix, addContainer, false, false, func(entry fs.DirEntry) error { entries = append(entries, entry) return nil }) if err != nil { return nil, err } // container must be present if listing succeeded f.cache.MarkOK(container) return entries, nil } // listContainers lists the containers func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err error) { var containers []swift.Container err = f.pacer.Call(func() (bool, error) { containers, err = f.c.ContainersAll(nil) return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "container listing failed") } for _, container := range containers { f.cache.MarkOK(container.Name) d := fs.NewDir(f.opt.Enc.ToStandardName(container.Name), time.Time{}).SetSize(container.Bytes).SetItems(container.Count) entries = append(entries, d) } return entries, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { container, directory := f.split(dir) if container == "" { if directory != "" { return nil, fs.ErrorListBucketRequired } return f.listContainers(ctx) } return f.listDir(container, directory, f.rootDirectory, f.rootContainer == "") } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively than doing a directory traversal. 
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { container, directory := f.split(dir) list := walk.NewListRHelper(callback) listR := func(container, directory, prefix string, addContainer bool) error { return f.list(container, directory, prefix, addContainer, true, false, func(entry fs.DirEntry) error { return list.Add(entry) }) } if container == "" { entries, err := f.listContainers(ctx) if err != nil { return err } for _, entry := range entries { err = list.Add(entry) if err != nil { return err } container := entry.Remote() err = listR(container, "", f.rootDirectory, true) if err != nil { return err } // container must be present if listing succeeded f.cache.MarkOK(container) } } else { err = listR(container, directory, f.rootDirectory, f.rootContainer == "") if err != nil { return err } // container must be present if listing succeeded f.cache.MarkOK(container) } return list.Flush() } // About gets quota information func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { var containers []swift.Container var err error err = f.pacer.Call(func() (bool, error) { containers, err = f.c.ContainersAll(nil) return shouldRetry(err) }) if err != nil { return nil, errors.Wrap(err, "container listing failed") } var total, objects int64 for _, c := range containers { total += c.Bytes objects += c.Count } usage := &fs.Usage{ Used: fs.NewUsageValue(total), // bytes in use Objects: fs.NewUsageValue(objects), // objects in use } return usage, nil } // Put the object into the container // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { // Temporary Object under construction fs := &Object{ fs: f, remote: src.Remote(), headers: swift.Headers{}, // Empty object headers to stop readMetaData being called } return fs, fs.Update(ctx, in, src, options...) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) 
} // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { container, _ := f.split(dir) return f.makeContainer(ctx, container) } // makeContainer creates the container if it doesn't exist func (f *Fs) makeContainer(ctx context.Context, container string) error { return f.cache.Create(container, func() error { // Check to see if container exists first var err error = swift.ContainerNotFound if !f.noCheckContainer { err = f.pacer.Call(func() (bool, error) { var rxHeaders swift.Headers _, rxHeaders, err = f.c.Container(container) return shouldRetryHeaders(rxHeaders, err) }) } if err == swift.ContainerNotFound { headers := swift.Headers{} if f.opt.StoragePolicy != "" { headers["X-Storage-Policy"] = f.opt.StoragePolicy } err = f.pacer.Call(func() (bool, error) { err = f.c.ContainerCreate(container, headers) return shouldRetry(err) }) if err == nil { fs.Infof(f, "Container %q created", container) } } return err }, nil) } // Rmdir deletes the container if the fs is at the root // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { container, directory := f.split(dir) if container == "" || directory != "" { return nil } err := f.cache.Remove(container, func() error { err := f.pacer.Call(func() (bool, error) { err := f.c.ContainerDelete(container) return shouldRetry(err) }) if err == nil { fs.Infof(f, "Container %q removed", container) } return err }) return err } // Precision of the remote func (f *Fs) Precision() time.Duration { return time.Nanosecond } // Purge deletes all the files in the directory // // Implemented here so we can make sure we delete directory markers func (f *Fs) Purge(ctx context.Context, dir string) error { container, directory := f.split(dir) if container == "" { return fs.ErrorListBucketRequired } // Delete all the files including the directory markers toBeDeleted := make(chan fs.Object, fs.Config.Transfers) delErr := make(chan error, 1) go func() { delErr <- operations.DeleteFiles(ctx, toBeDeleted) }() err := f.list(container, directory, f.rootDirectory, false, true, true, func(entry fs.DirEntry) error { if o, ok := entry.(*Object); ok { toBeDeleted <- o } return nil }) close(toBeDeleted) delError := <-delErr if err == nil { err = delError } if err != nil { return err } return f.Rmdir(ctx, dir) } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { dstContainer, dstPath := f.split(remote) err := f.makeContainer(ctx, dstContainer) if err != nil { return nil, err } srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } srcContainer, srcPath := srcObj.split() err = f.pacer.Call(func() (bool, error) { var rxHeaders swift.Headers rxHeaders, err = f.c.ObjectCopy(srcContainer, srcPath, dstContainer, dstPath, nil) return shouldRetryHeaders(rxHeaders, err) }) if err != nil { return nil, err } return f.NewObject(ctx, remote) } // Hashes returns the supported hash sets. 
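//
// A small illustrative sketch of how the hash set is typically used by a
// caller (the names f, o and ctx are assumptions, not part of this file):
//
//	if f.Hashes().Contains(hash.MD5) {
//		md5sum, err := o.Hash(ctx, hash.MD5) // lowercase hex string
//		...
//	}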
func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the Md5sum of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } isDynamicLargeObject, err := o.isDynamicLargeObject() if err != nil { return "", err } isStaticLargeObject, err := o.isStaticLargeObject() if err != nil { return "", err } if isDynamicLargeObject || isStaticLargeObject { fs.Debugf(o, "Returning empty Md5sum for swift large object") return "", nil } return strings.ToLower(o.md5), nil } // hasHeader checks for the header passed in returning false if the // object isn't found. func (o *Object) hasHeader(header string) (bool, error) { err := o.readMetaData() if err != nil { if err == fs.ErrorObjectNotFound { return false, nil } return false, err } _, found := o.headers[header] return found, nil } // isDynamicLargeObject checks for the X-Object-Manifest header func (o *Object) isDynamicLargeObject() (bool, error) { return o.hasHeader("X-Object-Manifest") } // isStaticLargeObject checks for the X-Static-Large-Object header func (o *Object) isStaticLargeObject() (bool, error) { return o.hasHeader("X-Static-Large-Object") } func (o *Object) isInContainerVersioning(container string) (bool, error) { _, headers, err := o.fs.c.Container(container) if err != nil { return false, err } xHistoryLocation := headers["X-History-Location"] if len(xHistoryLocation) > 0 { return true, nil } return false, nil } // Size returns the size of an object in bytes func (o *Object) Size() int64 { return o.size } // decodeMetaData sets the metadata in the object from a swift.Object // // Sets // o.lastModified // o.size // o.md5 // o.contentType func (o *Object) decodeMetaData(info *swift.Object) (err error) { o.lastModified = info.LastModified o.size = info.Bytes o.md5 = info.Hash o.contentType = info.ContentType return nil } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info // // it returns fs.ErrorObjectNotFound if the object isn't found func (o *Object) readMetaData() (err error) { if o.headers != nil { return nil } var info swift.Object var h swift.Headers container, containerPath := o.split() err = o.fs.pacer.Call(func() (bool, error) { info, h, err = o.fs.c.Object(container, containerPath) return shouldRetryHeaders(h, err) }) if err != nil { if err == swift.ObjectNotFound { return fs.ErrorObjectNotFound } return err } o.headers = h err = o.decodeMetaData(&info) if err != nil { return err } return nil } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present falls back to the // LastModified returned in the HTTP headers func (o *Object) ModTime(ctx context.Context) time.Time { if fs.Config.UseServerModTime { return o.lastModified } err := o.readMetaData() if err != nil { fs.Debugf(o, "Failed to read metadata: %s", err) return o.lastModified } modTime, err := o.headers.ObjectMetadata().GetModTime() if err != nil { // fs.Logf(o, "Failed to read mtime from object: %v", err) return o.lastModified } return modTime } // SetModTime sets the modification time of
the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { err := o.readMetaData() if err != nil { return err } meta := o.headers.ObjectMetadata() meta.SetModTime(modTime) newHeaders := meta.ObjectHeaders() for k, v := range newHeaders { o.headers[k] = v } // Include any other metadata from request for k, v := range o.headers { if strings.HasPrefix(k, "X-Object-") { newHeaders[k] = v } } container, containerPath := o.split() return o.fs.pacer.Call(func() (bool, error) { err = o.fs.c.ObjectUpdate(container, containerPath, newHeaders) return shouldRetry(err) }) } // Storable returns if this object is storable // // It compares the Content-Type to directoryMarkerContentType - that // makes it a directory marker which is not storable. func (o *Object) Storable() bool { return o.contentType != directoryMarkerContentType } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { fs.FixRangeOption(options, o.size) headers := fs.OpenOptionHeaders(options) _, isRanging := headers["Range"] container, containerPath := o.split() err = o.fs.pacer.Call(func() (bool, error) { var rxHeaders swift.Headers in, rxHeaders, err = o.fs.c.ObjectOpen(container, containerPath, !isRanging, headers) return shouldRetryHeaders(rxHeaders, err) }) return } // min returns the smallest of x, y func min(x, y int64) int64 { if x < y { return x } return y } // removeSegments removes any old segments from o // // if except is passed in then segments with that prefix won't be deleted func (o *Object) removeSegments(except string) error { segmentsContainer, _, err := o.getSegmentsDlo() if err != nil { return err } except = path.Join(o.remote, except) // fs.Debugf(o, "segmentsContainer %q prefix %q", segmentsContainer, prefix) err = o.fs.listContainerRoot(segmentsContainer, o.remote, "", false, true, true, func(remote string, object *swift.Object, isDirectory bool) error { if isDirectory { return nil } if except != "" && strings.HasPrefix(remote, except) { // fs.Debugf(o, "Ignoring current segment file %q in container %q", remote, segmentsContainer) return nil } fs.Debugf(o, "Removing segment file %q in container %q", remote, segmentsContainer) var err error return o.fs.pacer.Call(func() (bool, error) { err = o.fs.c.ObjectDelete(segmentsContainer, remote) return shouldRetry(err) }) }) if err != nil { return err } // remove the segments container if empty, ignore errors err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.c.ContainerDelete(segmentsContainer) if err == swift.ContainerNotFound || err == swift.ContainerNotEmpty { return false, err } return shouldRetry(err) }) if err == nil { fs.Debugf(o, "Removed empty container %q", segmentsContainer) } return nil } func (o *Object) getSegmentsDlo() (segmentsContainer string, prefix string, err error) { if err = o.readMetaData(); err != nil { return } dirManifest := o.headers["X-Object-Manifest"] dirManifest, err = url.PathUnescape(dirManifest) if err != nil { return } delimiter := strings.Index(dirManifest, "/") if len(dirManifest) == 0 || delimiter < 0 { err = errors.New("Missing or wrong structure of manifest of Dynamic large object") return } return dirManifest[:delimiter], dirManifest[delimiter+1:], nil } // urlEncode encodes a string so that it is a valid URL // // We don't use any of Go's standard methods as we need `/` not // encoded but we need '&' encoded. 
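//
// For example (inputs and outputs taken from the unit tests later in this
// package):
//
//	urlEncode("abc/ABC/123")                 // "abc/ABC/123" - unchanged
//	urlEncode("Vidéo Potato Sausage?&£.mkv") // "Vid%C3%A9o%20Potato%20Sausage%3F%26%C2%A3.mkv"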
func urlEncode(str string) string { var buf bytes.Buffer for i := 0; i < len(str); i++ { c := str[i] if (c >= '0' && c <= '9') || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '/' || c == '.' { _ = buf.WriteByte(c) } else { _, _ = buf.WriteString(fmt.Sprintf("%%%02X", c)) } } return buf.String() } // updateChunks updates the existing object using chunks to a separate // container. It returns a string which prefixes current segments. func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64, contentType string) (string, error) { container, containerPath := o.split() segmentsContainer := container + "_segments" // Create the segmentsContainer if it doesn't exist var err error err = o.fs.pacer.Call(func() (bool, error) { var rxHeaders swift.Headers _, rxHeaders, err = o.fs.c.Container(segmentsContainer) return shouldRetryHeaders(rxHeaders, err) }) if err == swift.ContainerNotFound { headers := swift.Headers{} if o.fs.opt.StoragePolicy != "" { headers["X-Storage-Policy"] = o.fs.opt.StoragePolicy } err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.c.ContainerCreate(segmentsContainer, headers) return shouldRetry(err) }) } if err != nil { return "", err } // Upload the chunks left := size i := 0 uniquePrefix := fmt.Sprintf("%s/%d", swift.TimeToFloatString(time.Now()), size) segmentsPath := path.Join(containerPath, uniquePrefix) in := bufio.NewReader(in0) segmentInfos := make([]string, 0, ((size / int64(o.fs.opt.ChunkSize)) + 1)) for { // can we read at least one byte? if _, err := in.Peek(1); err != nil { if left > 0 { return "", err // read less than expected } fs.Debugf(o, "Uploading segments into %q seems done (%v)", segmentsContainer, err) break } n := int64(o.fs.opt.ChunkSize) if size != -1 { n = min(left, n) headers["Content-Length"] = strconv.FormatInt(n, 10) // set Content-Length as we know it left -= n } segmentReader := io.LimitReader(in, n) segmentPath := fmt.Sprintf("%s/%08d", segmentsPath, i) fs.Debugf(o, "Uploading segment file %q into %q", segmentPath, segmentsContainer) err = o.fs.pacer.CallNoRetry(func() (bool, error) { var rxHeaders swift.Headers rxHeaders, err = o.fs.c.ObjectPut(segmentsContainer, segmentPath, segmentReader, true, "", "", headers) if err == nil { segmentInfos = append(segmentInfos, segmentPath) } return shouldRetryHeaders(rxHeaders, err) }) if err != nil { deleteChunks(o, segmentsContainer, segmentInfos) segmentInfos = nil return "", err } i++ } // Upload the manifest headers["X-Object-Manifest"] = urlEncode(fmt.Sprintf("%s/%s", segmentsContainer, segmentsPath)) headers["Content-Length"] = "0" // set Content-Length as we know it emptyReader := bytes.NewReader(nil) err = o.fs.pacer.Call(func() (bool, error) { var rxHeaders swift.Headers rxHeaders, err = o.fs.c.ObjectPut(container, containerPath, emptyReader, true, "", contentType, headers) return shouldRetryHeaders(rxHeaders, err) }) if err != nil { deleteChunks(o, segmentsContainer, segmentInfos) segmentInfos = nil } return uniquePrefix + "/", err } func deleteChunks(o *Object, segmentsContainer string, segmentInfos []string) { if segmentInfos != nil && len(segmentInfos) > 0 { for _, v := range segmentInfos { fs.Debugf(o, "Delete segment file %q on %q", v, segmentsContainer) e := o.fs.c.ObjectDelete(segmentsContainer, v) if e != nil { fs.Errorf(o, "Error occurred in delete segment file %q on %q, error: %q", v, segmentsContainer, e) } } } } // Update the object with the contents of the io.Reader, modTime and size // // The new object may have been created if an error is 
returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { container, containerPath := o.split() if container == "" { return fserrors.FatalError(errors.New("can't upload files to the root")) } err := o.fs.makeContainer(ctx, container) if err != nil { return err } size := src.Size() modTime := src.ModTime(ctx) // Note whether this is a dynamic large object before starting isDynamicLargeObject, err := o.isDynamicLargeObject() if err != nil { return err } // Set the mtime m := swift.Metadata{} m.SetModTime(modTime) contentType := fs.MimeType(ctx, src) headers := m.ObjectHeaders() fs.OpenOptionAddHeaders(options, headers) uniquePrefix := "" if size > int64(o.fs.opt.ChunkSize) || (size == -1 && !o.fs.opt.NoChunk) { uniquePrefix, err = o.updateChunks(in, headers, size, contentType) if err != nil { return err } o.headers = nil // wipe old metadata } else { var inCount *readers.CountingReader if size >= 0 { headers["Content-Length"] = strconv.FormatInt(size, 10) // set Content-Length if we know it } else { // otherwise count the size for later inCount = readers.NewCountingReader(in) in = inCount } var rxHeaders swift.Headers err = o.fs.pacer.CallNoRetry(func() (bool, error) { rxHeaders, err = o.fs.c.ObjectPut(container, containerPath, in, true, "", contentType, headers) return shouldRetryHeaders(rxHeaders, err) }) if err != nil { return err } // set Metadata since ObjectPut checked the hash and length so we know the // object has been safely uploaded o.lastModified = modTime o.size = size o.md5 = rxHeaders["Etag"] o.contentType = contentType o.headers = headers if inCount != nil { // update the size if streaming from the reader o.size = int64(inCount.BytesRead()) } } // If file was a dynamic large object then remove old/all segments if isDynamicLargeObject { err = o.removeSegments(uniquePrefix) if err != nil { fs.Logf(o, "Failed to remove old segments - carrying on with upload: %v", err) } } // Read the metadata from the newly created object if necessary return o.readMetaData() } // Remove an object func (o *Object) Remove(ctx context.Context) (err error) { container, containerPath := o.split() // Remove file/manifest first err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.c.ObjectDelete(container, containerPath) return shouldRetry(err) }) if err != nil { return err } isDynamicLargeObject, err := o.isDynamicLargeObject() if err != nil { return err } // ...then segments if required if isDynamicLargeObject { isInContainerVersioning, err := o.isInContainerVersioning(container) if err != nil { return err } if !isInContainerVersioning { err = o.removeSegments("") if err != nil { return err } } } return nil } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.contentType } // Check the interfaces are satisfied var ( _ fs.Fs = &Fs{} _ fs.Purger = &Fs{} _ fs.PutStreamer = &Fs{} _ fs.Copier = &Fs{} _ fs.ListRer = &Fs{} _ fs.Object = &Object{} _ fs.MimeTyper = &Object{} ) rclone-1.53.3/backend/swift/swift_internal_test.go000066400000000000000000000032651375552240400222430ustar00rootroot00000000000000package swift import ( "testing" "time" "github.com/ncw/swift" "github.com/rclone/rclone/fs/fserrors" "github.com/stretchr/testify/assert" ) func TestInternalUrlEncode(t *testing.T) { for _, test := range []struct { in string want string }{ {"", ""}, {"abcdefghijklmopqrstuvwxyz", "abcdefghijklmopqrstuvwxyz"}, {"ABCDEFGHIJKLMOPQRSTUVWXYZ", "ABCDEFGHIJKLMOPQRSTUVWXYZ"}, {"0123456789", 
"0123456789"}, {"abc/ABC/123", "abc/ABC/123"}, {" ", "%20%20%20"}, {"&", "%26"}, {"ߣ", "%C3%9F%C2%A3"}, {"Vidéo Potato Sausage?&£.mkv", "Vid%C3%A9o%20Potato%20Sausage%3F%26%C2%A3.mkv"}, } { got := urlEncode(test.in) if got != test.want { t.Logf("%q: want %q got %q", test.in, test.want, got) } } } func TestInternalShouldRetryHeaders(t *testing.T) { headers := swift.Headers{ "Content-Length": "64", "Content-Type": "text/html; charset=UTF-8", "Date": "Mon: 18 Mar 2019 12:11:23 GMT", "Retry-After": "1", } err := &swift.Error{ StatusCode: 429, Text: "Too Many Requests", } // Short sleep should just do the sleep start := time.Now() retry, gotErr := shouldRetryHeaders(headers, err) dt := time.Since(start) assert.True(t, retry) assert.Equal(t, err, gotErr) assert.True(t, dt > time.Second/2) // Long sleep should return RetryError headers["Retry-After"] = "3600" start = time.Now() retry, gotErr = shouldRetryHeaders(headers, err) dt = time.Since(start) assert.True(t, dt < time.Second) assert.False(t, retry) assert.Equal(t, true, fserrors.IsRetryAfterError(gotErr)) after := gotErr.(fserrors.RetryAfter).RetryAfter() dt = after.Sub(start) assert.True(t, dt >= time.Hour-time.Second && dt <= time.Hour+time.Second) } rclone-1.53.3/backend/swift/swift_test.go000066400000000000000000000037321375552240400203460ustar00rootroot00000000000000// Test Swift filesystem interface package swift import ( "bytes" "context" "io" "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/object" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/fstests" "github.com/rclone/rclone/lib/random" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestSwiftAIO:", NilObject: (*Object)(nil), }) } func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) { return f.setUploadChunkSize(cs) } var _ fstests.SetUploadChunkSizer = (*Fs)(nil) // Check that PutStream works with NoChunk as it is the major code // deviation func (f *Fs) testNoChunk(t *testing.T) { ctx := context.Background() f.opt.NoChunk = true defer func() { f.opt.NoChunk = false }() file := fstest.Item{ ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"), Path: "piped data no chunk.txt", Size: -1, // use unknown size during upload } const contentSize = 100 contents := random.String(contentSize) buf := bytes.NewBufferString(contents) uploadHash := hash.NewMultiHasher() in := io.TeeReader(buf, uploadHash) file.Size = -1 obji := object.NewStaticObjectInfo(file.Path, file.ModTime, file.Size, true, nil, nil) obj, err := f.Features().PutStream(ctx, in, obji) require.NoError(t, err) file.Hashes = uploadHash.Sums() file.Size = int64(contentSize) // use correct size when checking file.Check(t, obj, f.Precision()) // Re-read the object and check again obj, err = f.NewObject(ctx, file.Path) require.NoError(t, err) file.Check(t, obj, f.Precision()) // Delete the object assert.NoError(t, obj.Remove(ctx)) } // Additional tests that aren't in the framework func (f *Fs) InternalTest(t *testing.T) { t.Run("NoChunk", f.testNoChunk) } var _ fstests.InternalTester = (*Fs)(nil) rclone-1.53.3/backend/tardigrade/000077500000000000000000000000001375552240400165715ustar00rootroot00000000000000rclone-1.53.3/backend/tardigrade/fs.go000066400000000000000000000451641375552240400175420ustar00rootroot00000000000000// +build go1.13,!plan9 // Package tardigrade 
provides an interface to Tardigrade decentralized object storage. package tardigrade import ( "context" "fmt" "io" "log" "path" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/bucket" "golang.org/x/text/unicode/norm" "storj.io/uplink" ) const ( existingProvider = "existing" newProvider = "new" ) var satMap = map[string]string{ "us-central-1.tardigrade.io": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777", "europe-west-1.tardigrade.io": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs@europe-west-1.tardigrade.io:7777", "asia-east-1.tardigrade.io": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@asia-east-1.tardigrade.io:7777", } // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "tardigrade", Description: "Tardigrade Decentralized Cloud Storage", NewFs: NewFs, Config: func(name string, configMapper configmap.Mapper) { provider, _ := configMapper.Get(fs.ConfigProvider) config.FileDeleteKey(name, fs.ConfigProvider) if provider == newProvider { satelliteString, _ := configMapper.Get("satellite_address") apiKey, _ := configMapper.Get("api_key") passphrase, _ := configMapper.Get("passphrase") // satelliteString always contains a default value and passphrase can be empty if apiKey == "" { return } satellite, found := satMap[satelliteString] if !found { satellite = satelliteString } access, err := uplink.RequestAccessWithPassphrase(context.TODO(), satellite, apiKey, passphrase) if err != nil { log.Fatalf("Couldn't create access grant: %v", err) } serializedAccess, err := access.Serialize() if err != nil { log.Fatalf("Couldn't serialize access grant: %v", err) } configMapper.Set("satellite_address", satellite) configMapper.Set("access_grant", serializedAccess) } else if provider == existingProvider { config.FileDeleteKey(name, "satellite_address") config.FileDeleteKey(name, "api_key") config.FileDeleteKey(name, "passphrase") } else { log.Fatalf("Invalid provider type: %s", provider) } }, Options: []fs.Option{ { Name: fs.ConfigProvider, Help: "Choose an authentication method.", Required: true, Default: existingProvider, Examples: []fs.OptionExample{{ Value: "existing", Help: "Use an existing access grant.", }, { Value: newProvider, Help: "Create a new access grant from satellite address, API key, and passphrase.", }, }}, { Name: "access_grant", Help: "Access Grant.", Required: false, Provider: "existing", }, { Name: "satellite_address", Help: "Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.", Required: false, Provider: newProvider, Default: "us-central-1.tardigrade.io", Examples: []fs.OptionExample{{ Value: "us-central-1.tardigrade.io", Help: "US Central 1", }, { Value: "europe-west-1.tardigrade.io", Help: "Europe West 1", }, { Value: "asia-east-1.tardigrade.io", Help: "Asia East 1", }, }, }, { Name: "api_key", Help: "API Key.", Required: false, Provider: newProvider, }, { Name: "passphrase", Help: "Encryption Passphrase. To access existing objects enter passphrase used for uploading.", Required: false, Provider: newProvider, }, }, }) } // Options defines the configuration for this backend type Options struct { Access string `config:"access_grant"` SatelliteAddress string `config:"satellite_address"` APIKey string `config:"api_key"` Passphrase string `config:"passphrase"` } // Fs represents a remote to Tardigrade type Fs struct { name string // the name of the remote root string // root of the filesystem opts Options // parsed options features *fs.Features // optional features access *uplink.Access // parsed scope project *uplink.Project // project client } // Check the interfaces are satisfied. var ( _ fs.Fs = &Fs{} _ fs.ListRer = &Fs{} _ fs.PutStreamer = &Fs{} ) // NewFs creates a filesystem backed by Tardigrade. func NewFs(name, root string, m configmap.Mapper) (_ fs.Fs, err error) { ctx := context.Background() // Setup filesystem and connection to Tardigrade root = norm.NFC.String(root) root = strings.Trim(root, "/") f := &Fs{ name: name, root: root, } // Parse config into Options struct err = configstruct.Set(m, &f.opts) if err != nil { return nil, err } // Parse access var access *uplink.Access if f.opts.Access != "" { access, err = uplink.ParseAccess(f.opts.Access) if err != nil { return nil, errors.Wrap(err, "tardigrade: access") } } if access == nil && f.opts.SatelliteAddress != "" && f.opts.APIKey != "" && f.opts.Passphrase != "" { access, err = uplink.RequestAccessWithPassphrase(ctx, f.opts.SatelliteAddress, f.opts.APIKey, f.opts.Passphrase) if err != nil { return nil, errors.Wrap(err, "tardigrade: access") } serializedAccess, err := access.Serialize() if err != nil { return nil, errors.Wrap(err, "tardigrade: access") } err = config.SetValueAndSave(f.name, "access_grant", serializedAccess) if err != nil { return nil, errors.Wrap(err, "tardigrade: access") } } if access == nil { return nil, errors.New("access not found") } f.access = access f.features = (&fs.Features{ BucketBased: true, BucketBasedRootOK: true, }).Fill(f) project, err := f.connect(ctx) if err != nil { return nil, err } f.project = project // Root validation needs to check the following: If a bucket path is // specified and exists, then the object must be a directory. // // NOTE: At this point this must return the filesystem object we've // created so far even if there is an error. if root != "" { bucketName, bucketPath := bucket.Split(root) if bucketName != "" && bucketPath != "" { _, err = project.StatBucket(ctx, bucketName) if err != nil { return f, errors.Wrap(err, "tardigrade: bucket") } object, err := project.StatObject(ctx, bucketName, bucketPath) if err == nil { if !object.IsPrefix { // If the root is actually a file we // need to return the *parent* // directory of the root instead and an // error that the original root // requested is a file. newRoot := path.Dir(f.root) if newRoot == "." { newRoot = "" } f.root = newRoot return f, fs.ErrorIsFile } } } } return f, nil } // connect opens a connection to Tardigrade.
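//
// The sequence is the same one a standalone libuplink client would use; a
// rough sketch (accessGrant here is a hypothetical, previously serialized
// access grant string):
//
//	access, err := uplink.ParseAccess(accessGrant)
//	if err != nil { ... }
//	project, err := uplink.Config{UserAgent: "rclone"}.OpenProject(ctx, access)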
func (f *Fs) connect(ctx context.Context) (project *uplink.Project, err error) { fs.Debugf(f, "connecting...") defer fs.Debugf(f, "connected: %+v", err) cfg := uplink.Config{ UserAgent: "rclone", } project, err = cfg.OpenProject(ctx, f.access) if err != nil { return nil, errors.Wrap(err, "tardigrade: project") } return } // absolute computes the absolute bucket name and path from the filesystem root // and the relative path provided. func (f *Fs) absolute(relative string) (bucketName, bucketPath string) { bn, bp := bucket.Split(path.Join(f.root, relative)) // NOTE: Technically libuplink does not care about the encoding. It is // happy to work with them as opaque byte sequences. However, rclone // has a test that requires two paths with the same normalized form // (but different un-normalized forms) to point to the same file. This // means we have to normalize before we interact with libuplink. return norm.NFC.String(bn), norm.NFC.String(bp) } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String returns a description of the FS func (f *Fs) String() string { return fmt.Sprintf("FS sj://%s", f.root) } // Precision of the ModTimes in this Fs func (f *Fs) Precision() time.Duration { return time.Nanosecond } // Hashes returns the supported hash types of the filesystem. func (f *Fs) Hashes() hash.Set { return hash.NewHashSet() } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // List the objects and directories in relative into entries. The entries can // be returned in any order but should be for a complete directory. // // relative should be "" to list the root, and should not have trailing // slashes. // // This should return fs.ErrDirNotFound if the directory isn't found. func (f *Fs) List(ctx context.Context, relative string) (entries fs.DirEntries, err error) { fs.Debugf(f, "ls ./%s", relative) bucketName, bucketPath := f.absolute(relative) defer func() { if errors.Is(err, uplink.ErrBucketNotFound) { err = fs.ErrorDirNotFound } }() if bucketName == "" { if bucketPath != "" { return nil, fs.ErrorListBucketRequired } return f.listBuckets(ctx) } return f.listObjects(ctx, relative, bucketName, bucketPath) } func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) { fs.Debugf(f, "BKT ls") buckets := f.project.ListBuckets(ctx, nil) for buckets.Next() { bucket := buckets.Item() entries = append(entries, fs.NewDir(bucket.Name, bucket.Created)) } return entries, buckets.Err() } // newDirEntry creates a directory entry from an uplink object. // // NOTE: Getting the exact behavior required by rclone is somewhat tricky. The // path manipulation here is necessary to cover all the different ways the // filesystem and object could be initialized and combined. func (f *Fs) newDirEntry(relative, prefix string, object *uplink.Object) fs.DirEntry { if object.IsPrefix { // . The entry must include the relative path as its prefix. Depending on // | what is being listed and how the filesystem root was initialized the // | relative path may be empty (and so we use path joining here to ensure // | we don't end up with an empty path segment). // | // | . Remove the prefix used during listing. // | | // | | . Remove the trailing slash. 
// | | | // v v v return fs.NewDir(path.Join(relative, object.Key[len(prefix):len(object.Key)-1]), object.System.Created) } return newObjectFromUplink(f, relative, object) } func (f *Fs) listObjects(ctx context.Context, relative, bucketName, bucketPath string) (entries fs.DirEntries, err error) { fs.Debugf(f, "OBJ ls ./%s (%q, %q)", relative, bucketName, bucketPath) opts := &uplink.ListObjectsOptions{ Prefix: newPrefix(bucketPath), System: true, Custom: true, } fs.Debugf(f, "opts %+v", opts) objects := f.project.ListObjects(ctx, bucketName, opts) for objects.Next() { entries = append(entries, f.newDirEntry(relative, opts.Prefix, objects.Item())) } err = objects.Err() if err != nil { return nil, err } return entries, nil } // ListR lists the objects and directories of the Fs starting from dir // recursively into out. // // relative should be "" to start from the root, and should not have trailing // slashes. // // This should return ErrDirNotFound if the directory isn't found. // // It should call callback for each tranche of entries read. These need not be // returned in any particular order. If callback returns an error then the // listing will stop immediately. // // Don't implement this unless you have a more efficient way of listing // recursively than doing a directory traversal. func (f *Fs) ListR(ctx context.Context, relative string, callback fs.ListRCallback) (err error) { fs.Debugf(f, "ls -R ./%s", relative) bucketName, bucketPath := f.absolute(relative) defer func() { if errors.Is(err, uplink.ErrBucketNotFound) { err = fs.ErrorDirNotFound } }() if bucketName == "" { if bucketPath != "" { return fs.ErrorListBucketRequired } return f.listBucketsR(ctx, callback) } return f.listObjectsR(ctx, relative, bucketName, bucketPath, callback) } func (f *Fs) listBucketsR(ctx context.Context, callback fs.ListRCallback) (err error) { fs.Debugf(f, "BKT ls -R") buckets := f.project.ListBuckets(ctx, nil) for buckets.Next() { bucket := buckets.Item() err = f.listObjectsR(ctx, bucket.Name, bucket.Name, "", callback) if err != nil { return err } } return buckets.Err() } func (f *Fs) listObjectsR(ctx context.Context, relative, bucketName, bucketPath string, callback fs.ListRCallback) (err error) { fs.Debugf(f, "OBJ ls -R ./%s (%q, %q)", relative, bucketName, bucketPath) opts := &uplink.ListObjectsOptions{ Prefix: newPrefix(bucketPath), Recursive: true, System: true, Custom: true, } objects := f.project.ListObjects(ctx, bucketName, opts) for objects.Next() { object := objects.Item() err = callback(fs.DirEntries{f.newDirEntry(relative, opts.Prefix, object)}) if err != nil { return err } } err = objects.Err() if err != nil { return err } return nil } // NewObject finds the Object at relative. If it can't be found it returns the // error ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, relative string) (_ fs.Object, err error) { fs.Debugf(f, "stat ./%s", relative) bucketName, bucketPath := f.absolute(relative) object, err := f.project.StatObject(ctx, bucketName, bucketPath) if err != nil { fs.Debugf(f, "err: %+v", err) if errors.Is(err, uplink.ErrObjectNotFound) { return nil, fs.ErrorObjectNotFound } return nil, err } return newObjectFromUplink(f, relative, object), nil } // Put in to the remote path with the modTime given of the given size // // When called from outside an Fs by rclone, src.Size() will always be >= 0. // But for unknown-sized objects (indicated by src.Size() == -1), Put should // either return an error or upload it properly (rather than e.g. calling // panic).
// // May create the object even if it returns an error - if so will return the // object and the error, otherwise will return nil and the error func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (_ fs.Object, err error) { fs.Debugf(f, "cp input ./%s # %+v %d", src.Remote(), options, src.Size()) // Reject options we don't support. for _, option := range options { if option.Mandatory() { fs.Errorf(f, "Unsupported mandatory option: %v", option) return nil, errors.New("unsupported mandatory option") } } bucketName, bucketPath := f.absolute(src.Remote()) upload, err := f.project.UploadObject(ctx, bucketName, bucketPath, nil) if err != nil { return nil, err } defer func() { if err != nil { aerr := upload.Abort() if aerr != nil { fs.Errorf(f, "cp input ./%s %+v: %+v", src.Remote(), options, aerr) } } }() err = upload.SetCustomMetadata(ctx, uplink.CustomMetadata{ "rclone:mtime": src.ModTime(ctx).Format(time.RFC3339Nano), }) if err != nil { return nil, err } _, err = io.Copy(upload, in) if err != nil { err = fserrors.RetryError(err) fs.Errorf(f, "cp input ./%s %+v: %+v\n", src.Remote(), options, err) return nil, err } err = upload.Commit() if err != nil { if errors.Is(err, uplink.ErrBucketNotFound) { // Rclone assumes the backend will create the bucket if not existing yet. // Here we create the bucket and return a retry error for rclone to retry the upload. _, err = f.project.EnsureBucket(ctx, bucketName) if err != nil { return nil, err } err = fserrors.RetryError(errors.New("bucket was not available, now created, the upload must be retried")) } return nil, err } return newObjectFromUplink(f, "", upload.Info()), nil } // PutStream uploads to the remote path with the modTime given of indeterminate // size. // // May create the object even if it returns an error - if so will return the // object and the error, otherwise will return nil and the error. func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (_ fs.Object, err error) { return f.Put(ctx, in, src, options...) } // Mkdir makes the directory (container, bucket) // // Shouldn't return an error if it already exists func (f *Fs) Mkdir(ctx context.Context, relative string) (err error) { fs.Debugf(f, "mkdir -p ./%s", relative) bucketName, _ := f.absolute(relative) _, err = f.project.EnsureBucket(ctx, bucketName) return err } // Rmdir removes the directory (container, bucket) // // NOTE: Despite code documentation to the contrary, this method should not // return an error if the directory does not exist. func (f *Fs) Rmdir(ctx context.Context, relative string) (err error) { fs.Debugf(f, "rmdir ./%s", relative) bucketName, bucketPath := f.absolute(relative) if bucketPath != "" { // If we can successfully stat it, then it is an object (and not a prefix). _, err := f.project.StatObject(ctx, bucketName, bucketPath) if err != nil { if errors.Is(err, uplink.ErrObjectNotFound) { // At this point we know it is not an object, // but we don't know if it is a prefix for one. // // We check this by doing a listing and if we // get any results back, then we know this is a // valid prefix (which implies the directory is // not empty). 
opts := &uplink.ListObjectsOptions{ Prefix: newPrefix(bucketPath), System: true, Custom: true, } objects := f.project.ListObjects(ctx, bucketName, opts) if objects.Next() { return fs.ErrorDirectoryNotEmpty } return objects.Err() } return err } return fs.ErrorIsFile } _, err = f.project.DeleteBucket(ctx, bucketName) if err != nil { if errors.Is(err, uplink.ErrBucketNotFound) { return fs.ErrorDirNotFound } if errors.Is(err, uplink.ErrBucketNotEmpty) { return fs.ErrorDirectoryNotEmpty } return err } return nil } // newPrefix returns a new prefix for listing conforming to the libuplink // requirements. In particular, libuplink requires a trailing slash for // listings, but rclone does not always provide one. Further, depending on how // the path was initially path normalization may have removed it (e.g. a // trailing slash from the CLI is removed before it ever gets to the backend // code). func newPrefix(prefix string) string { if prefix == "" { return prefix } if prefix[len(prefix)-1] == '/' { return prefix } return prefix + "/" } rclone-1.53.3/backend/tardigrade/object.go000066400000000000000000000120441375552240400203670ustar00rootroot00000000000000// +build go1.13,!plan9 package tardigrade import ( "context" "io" "path" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/bucket" "golang.org/x/text/unicode/norm" "storj.io/uplink" ) // Object describes a Tardigrade object type Object struct { fs *Fs absolute string size int64 created time.Time modified time.Time } // Check the interfaces are satisfied. var _ fs.Object = &Object{} // newObjectFromUplink creates a new object from a Tardigrade uplink object. func newObjectFromUplink(f *Fs, relative string, object *uplink.Object) *Object { // Attempt to use the modified time from the metadata. Otherwise // fallback to the server time. modified := object.System.Created if modifiedStr, ok := object.Custom["rclone:mtime"]; ok { var err error modified, err = time.Parse(time.RFC3339Nano, modifiedStr) if err != nil { modified = object.System.Created } } bucketName, _ := bucket.Split(path.Join(f.root, relative)) return &Object{ fs: f, absolute: norm.NFC.String(bucketName + "/" + object.Key), size: object.System.ContentLength, created: object.System.Created, modified: modified, } } // String returns a description of the Object func (o *Object) String() string { if o == nil { return "" } return o.Remote() } // Remote returns the remote path func (o *Object) Remote() string { // It is possible that we have an empty root (meaning the filesystem is // rooted at the project level). In this case the relative path is just // the full absolute path to the object (including the bucket name). if o.fs.root == "" { return o.absolute } // At this point we know that the filesystem itself is at least a // bucket name (and possibly a prefix path). // // . This is necessary to remove the slash. 
// | // v return o.absolute[len(o.fs.root)+1:] } // ModTime returns the modification date of the file // It should return a best guess if one isn't available func (o *Object) ModTime(ctx context.Context) time.Time { return o.modified } // Size returns the size of the file func (o *Object) Size() int64 { return o.size } // Fs returns read only access to the Fs that this object is part of func (o *Object) Fs() fs.Info { return o.fs } // Hash returns the selected checksum of the file // If no checksum is available it returns "" func (o *Object) Hash(ctx context.Context, ty hash.Type) (_ string, err error) { fs.Debugf(o, "%s", ty) return "", hash.ErrUnsupported } // Storable says whether this object can be stored func (o *Object) Storable() bool { return true } // SetModTime sets the metadata on the object to set the modification date func (o *Object) SetModTime(ctx context.Context, t time.Time) (err error) { fs.Debugf(o, "touch -d %q sj://%s", t, o.absolute) return fs.ErrorCantSetModTime } // Open opens the file for read. Call Close() on the returned io.ReadCloser func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (_ io.ReadCloser, err error) { fs.Debugf(o, "cat sj://%s # %+v", o.absolute, options) bucketName, bucketPath := bucket.Split(o.absolute) // Convert the semantics of HTTP range headers to an offset and length // that libuplink can use. var ( offset int64 = 0 length int64 = -1 ) for _, option := range options { switch opt := option.(type) { case *fs.RangeOption: s := opt.Start >= 0 e := opt.End >= 0 switch { case s && e: offset = opt.Start length = (opt.End + 1) - opt.Start case s && !e: offset = opt.Start case !s && e: object, err := o.fs.project.StatObject(ctx, bucketName, bucketPath) if err != nil { return nil, err } offset = object.System.ContentLength - opt.End length = opt.End } case *fs.SeekOption: offset = opt.Offset default: if option.Mandatory() { fs.Errorf(o, "Unsupported mandatory option: %v", option) return nil, errors.New("unsupported mandatory option") } } } fs.Debugf(o, "range %d + %d", offset, length) return o.fs.project.DownloadObject(ctx, bucketName, bucketPath, &uplink.DownloadOptions{ Offset: offset, Length: length, }) } // Update in to the object with the modTime given of the given size // // When called from outside an Fs by rclone, src.Size() will always be >= 0. // But for unknown-sized objects (indicated by src.Size() == -1), Upload should either // return an error or update the object properly (rather than e.g. calling panic). func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { fs.Debugf(o, "cp input ./%s %+v", src.Remote(), options) oNew, err := o.fs.Put(ctx, in, src, options...) if err == nil { *o = *(oNew.(*Object)) } return err } // Remove this object. 
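//
// Illustrative call sequence (assumes f is an initialized tardigrade Fs
// and the path is hypothetical):
//
//	obj, err := f.NewObject(ctx, "path/to/file")
//	if err == nil {
//		err = obj.Remove(ctx)
//	}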
func (o *Object) Remove(ctx context.Context) (err error) { fs.Debugf(o, "rm sj://%s", o.absolute) bucketName, bucketPath := bucket.Split(o.absolute) _, err = o.fs.project.DeleteObject(ctx, bucketName, bucketPath) return err } rclone-1.53.3/backend/tardigrade/tardigrade_test.go000066400000000000000000000006341375552240400222700ustar00rootroot00000000000000// +build go1.13,!plan9 // Test Tardigrade filesystem interface package tardigrade_test import ( "testing" "github.com/rclone/rclone/backend/tardigrade" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestTardigrade:", NilObject: (*tardigrade.Object)(nil), }) } rclone-1.53.3/backend/tardigrade/tardigrade_unsupported.go000066400000000000000000000000541375552240400236750ustar00rootroot00000000000000// +build !go1.13 plan9 package tardigrade rclone-1.53.3/backend/union/000077500000000000000000000000001375552240400156135ustar00rootroot00000000000000rclone-1.53.3/backend/union/entry.go000066400000000000000000000071341375552240400173100ustar00rootroot00000000000000package union import ( "context" "io" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) // Object describes a union Object // // This is a wrapped object which returns the Union Fs as its parent type Object struct { *upstream.Object fs *Fs // what this object is part of co []upstream.Entry } // Directory describes a union Directory // // This is a wrapped object contains all candidates type Directory struct { *upstream.Directory cd []upstream.Entry } type entry interface { upstream.Entry candidates() []upstream.Entry } // UnWrap returns the Object that this Object is wrapping or // nil if it isn't wrapping anything func (o *Object) UnWrap() *upstream.Object { return o.Object } // Fs returns the union Fs as the parent func (o *Object) Fs() fs.Info { return o.fs } func (o *Object) candidates() []upstream.Entry { return o.co } func (d *Directory) candidates() []upstream.Entry { return d.cd } // Update in to the object with the modTime given of the given size // // When called from outside an Fs by rclone, src.Size() will always be >= 0. // But for unknown-sized objects (indicated by src.Size() == -1), Upload should either // return an error or update the object properly (rather than e.g. calling panic). func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { entries, err := o.fs.actionEntries(o.candidates()...) if err != nil { return err } if len(entries) == 1 { obj := entries[0].(*upstream.Object) return obj.Update(ctx, in, src, options...) } // Multi-threading readers, errChan := multiReader(len(entries), in) errs := Errors(make([]error, len(entries)+1)) multithread(len(entries), func(i int) { if o, ok := entries[i].(*upstream.Object); ok { err := o.Update(ctx, readers[i], src, options...) errs[i] = errors.Wrap(err, o.UpstreamFs().Name()) } else { errs[i] = fs.ErrorNotAFile } }) errs[len(entries)] = <-errChan return errs.Err() } // Remove candidate objects selected by ACTION policy func (o *Object) Remove(ctx context.Context) error { entries, err := o.fs.actionEntries(o.candidates()...) 
if err != nil { return err } errs := Errors(make([]error, len(entries))) multithread(len(entries), func(i int) { if o, ok := entries[i].(*upstream.Object); ok { err := o.Remove(ctx) errs[i] = errors.Wrap(err, o.UpstreamFs().Name()) } else { errs[i] = fs.ErrorNotAFile } }) return errs.Err() } // SetModTime sets the metadata on the object to set the modification date func (o *Object) SetModTime(ctx context.Context, t time.Time) error { entries, err := o.fs.actionEntries(o.candidates()...) if err != nil { return err } var wg sync.WaitGroup errs := Errors(make([]error, len(entries))) multithread(len(entries), func(i int) { if o, ok := entries[i].(*upstream.Object); ok { err := o.SetModTime(ctx, t) errs[i] = errors.Wrap(err, o.UpstreamFs().Name()) } else { errs[i] = fs.ErrorNotAFile } }) wg.Wait() return errs.Err() } // ModTime returns the modification date of the directory // It returns the latest ModTime of all candidates func (d *Directory) ModTime(ctx context.Context) (t time.Time) { entries := d.candidates() times := make([]time.Time, len(entries)) multithread(len(entries), func(i int) { times[i] = entries[i].ModTime(ctx) }) for _, ti := range times { if t.Before(ti) { t = ti } } return t } // Size returns the size of the directory // It returns the sum of all candidates func (d *Directory) Size() (s int64) { for _, e := range d.candidates() { s += e.Size() } return s } rclone-1.53.3/backend/union/errors.go000066400000000000000000000023751375552240400174650ustar00rootroot00000000000000package union import ( "bytes" "fmt" ) // The Errors type wraps a slice of errors type Errors []error // Map returns a copy of the error slice with all its errors modified // according to the mapping function. If mapping returns nil, // the error is dropped from the error slice with no replacement. func (e Errors) Map(mapping func(error) error) Errors { s := make([]error, len(e)) i := 0 for _, err := range e { nerr := mapping(err) if nerr == nil { continue } s[i] = nerr i++ } return Errors(s[:i]) } // FilterNil returns the Errors without nil func (e Errors) FilterNil() Errors { ne := e.Map(func(err error) error { return err }) return ne } // Err returns an error interface that filtered nil, // or nil if no non-nil Error is presented. func (e Errors) Err() error { ne := e.FilterNil() if len(ne) == 0 { return nil } return ne } // Error returns a concatenated string of the contained errors func (e Errors) Error() string { var buf bytes.Buffer if len(e) == 0 { buf.WriteString("no error") } if len(e) == 1 { buf.WriteString("1 error: ") } else { fmt.Fprintf(&buf, "%d errors: ", len(e)) } for i, err := range e { if i != 0 { buf.WriteString("; ") } buf.WriteString(err.Error()) } return buf.String() } rclone-1.53.3/backend/union/policy/000077500000000000000000000000001375552240400171125ustar00rootroot00000000000000rclone-1.53.3/backend/union/policy/all.go000066400000000000000000000021311375552240400202060ustar00rootroot00000000000000package policy import ( "context" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("all", &All{}) } // All policy behaves the same as EpAll except for the CREATE category // Action category: same as epall. // Create category: apply to all branches. // Search category: same as epall. 
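//
// A union remote might select this policy in its config like so (a sketch
// of an rclone.conf section; the remote names are hypothetical):
//
//	[combined]
//	type = union
//	upstreams = remote1: remote2:
//	create_policy = all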
type All struct { EpAll } // Create category policy, governing the creation of files and directories func (p *All) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterNC(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } return upstreams, nil } // CreateEntries is CREATE category policy but receiving a set of candidate entries func (p *All) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } entries = filterNCEntries(entries) if len(entries) == 0 { return nil, fs.ErrorPermissionDenied } return entries, nil } rclone-1.53.3/backend/union/policy/epall.go000066400000000000000000000047471375552240400205500ustar00rootroot00000000000000package policy import ( "context" "path" "sync" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("epall", &EpAll{}) } // EpAll stands for existing path, all // Action category: apply to all found. // Create category: apply to all found. // Search category: same as epff. type EpAll struct { EpFF } func (p *EpAll) epall(ctx context.Context, upstreams []*upstream.Fs, filePath string) ([]*upstream.Fs, error) { var wg sync.WaitGroup ufs := make([]*upstream.Fs, len(upstreams)) for i, u := range upstreams { wg.Add(1) i, u := i, u // Closure go func() { rfs := u.RootFs remote := path.Join(u.RootPath, filePath) if findEntry(ctx, rfs, remote) != nil { ufs[i] = u } wg.Done() }() } wg.Wait() var results []*upstream.Fs for _, f := range ufs { if f != nil { results = append(results, f) } } if len(results) == 0 { return nil, fs.ErrorObjectNotFound } return results, nil } // Action category policy, governing the modification of files and directories func (p *EpAll) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterRO(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } return p.epall(ctx, upstreams, path) } // ActionEntries is ACTION category policy but receiving a set of candidate entries func (p *EpAll) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } entries = filterROEntries(entries) if len(entries) == 0 { return nil, fs.ErrorPermissionDenied } return entries, nil } // Create category policy, governing the creation of files and directories func (p *EpAll) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterNC(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } upstreams, err := p.epall(ctx, upstreams, path+"/..") return upstreams, err } // CreateEntries is CREATE category policy but receiving a set of candidate entries func (p *EpAll) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } entries = filterNCEntries(entries) if len(entries) == 0 { return nil, fs.ErrorPermissionDenied } return entries, nil } rclone-1.53.3/backend/union/policy/epff.go000066400000000000000000000060731375552240400203670ustar00rootroot00000000000000package policy import ( "context" "path" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() {
registerPolicy("epff", &EpFF{}) } // EpFF stands for existing path, first found // Given the order of the candidates, act on the first one found where the relative path exists. type EpFF struct{} func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath string) (*upstream.Fs, error) { ch := make(chan *upstream.Fs) for _, u := range upstreams { u := u // Closure go func() { rfs := u.RootFs remote := path.Join(u.RootPath, filePath) if findEntry(ctx, rfs, remote) == nil { u = nil } ch <- u }() } var u *upstream.Fs for i := 0; i < len(upstreams); i++ { u = <-ch if u != nil { // close remaining goroutines go func(num int) { defer close(ch) for i := 0; i < num; i++ { <-ch } }(len(upstreams) - 1 - i) } } if u == nil { return nil, fs.ErrorObjectNotFound } return u, nil } // Action category policy, governing the modification of files and directories func (p *EpFF) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterRO(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } u, err := p.epff(ctx, upstreams, path) return []*upstream.Fs{u}, err } // ActionEntries is ACTION category policy but receiving a set of candidate entries func (p *EpFF) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } entries = filterROEntries(entries) if len(entries) == 0 { return nil, fs.ErrorPermissionDenied } return entries[:1], nil } // Create category policy, governing the creation of files and directories func (p *EpFF) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterNC(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } u, err := p.epff(ctx, upstreams, path+"/..") return []*upstream.Fs{u}, err } // CreateEntries is CREATE category policy but receiving a set of candidate entries func (p *EpFF) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } entries = filterNCEntries(entries) if len(entries) == 0 { return nil, fs.ErrorPermissionDenied } return entries[:1], nil } // Search category policy, governing the access to files and directories func (p *EpFF) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } return p.epff(ctx, upstreams, path) } // SearchEntries is SEARCH category policy but receiving a set of candidate entries func (p *EpFF) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } return entries[0], nil } rclone-1.53.3/backend/union/policy/eplfs.go000066400000000000000000000063471375552240400205640ustar00rootroot00000000000000package policy import ( "context" "math" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("eplfs", &EpLfs{}) } // EpLfs stands for existing path, least free space // Of all the candidates on which the path exists choose the one with the least free space. 
type EpLfs struct { EpAll } func (p *EpLfs) lfs(upstreams []*upstream.Fs) (*upstream.Fs, error) { var minFreeSpace int64 = math.MaxInt64 var lfsupstream *upstream.Fs for _, u := range upstreams { space, err := u.GetFreeSpace() if err != nil { fs.LogPrintf(fs.LogLevelNotice, nil, "Free Space is not supported for upstream %s, treating as infinite", u.Name()) } if space < minFreeSpace { minFreeSpace = space lfsupstream = u } } if lfsupstream == nil { return nil, fs.ErrorObjectNotFound } return lfsupstream, nil } func (p *EpLfs) lfsEntries(entries []upstream.Entry) (upstream.Entry, error) { var minFreeSpace int64 = math.MaxInt64 var lfsEntry upstream.Entry for _, e := range entries { space, err := e.UpstreamFs().GetFreeSpace() if err != nil { fs.LogPrintf(fs.LogLevelNotice, nil, "Free Space is not supported for upstream %s, treating as infinite", e.UpstreamFs().Name()) } if space < minFreeSpace { minFreeSpace = space lfsEntry = e } } return lfsEntry, nil } // Action category policy, governing the modification of files and directories func (p *EpLfs) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.EpAll.Action(ctx, upstreams, path) if err != nil { return nil, err } u, err := p.lfs(upstreams) return []*upstream.Fs{u}, err } // ActionEntries is ACTION category policy but receiving a set of candidate entries func (p *EpLfs) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.EpAll.ActionEntries(entries...) if err != nil { return nil, err } e, err := p.lfsEntries(entries) return []upstream.Entry{e}, err } // Create category policy, governing the creation of files and directories func (p *EpLfs) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.EpAll.Create(ctx, upstreams, path) if err != nil { return nil, err } u, err := p.lfs(upstreams) return []*upstream.Fs{u}, err } // CreateEntries is CREATE category policy but receiving a set of candidate entries func (p *EpLfs) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.EpAll.CreateEntries(entries...)
if err != nil { return nil, err } e, err := p.lfsEntries(entries) return []upstream.Entry{e}, err } // Search category policy, governing the access to files and directories func (p *EpLfs) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams, err := p.epall(ctx, upstreams, path) if err != nil { return nil, err } return p.lfs(upstreams) } // SearchEntries is SEARCH category policy but receiving a set of candidate entries func (p *EpLfs) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } return p.lfsEntries(entries) } rclone-1.53.3/backend/union/policy/eplno.go000066400000000000000000000063721375552240400205660ustar00rootroot00000000000000package policy import ( "context" "math" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("eplno", &EpLno{}) } // EpLno stands for existing path, least number of objects // Of all the candidates on which the path exists choose the one with the least number of objects type EpLno struct { EpAll } func (p *EpLno) lno(upstreams []*upstream.Fs) (*upstream.Fs, error) { var minNumObj int64 = math.MaxInt64 var lnoUpstream *upstream.Fs for _, u := range upstreams { numObj, err := u.GetNumObjects() if err != nil { fs.LogPrintf(fs.LogLevelNotice, nil, "Number of Objects is not supported for upstream %s, treating as 0", u.Name()) } if minNumObj > numObj { minNumObj = numObj lnoUpstream = u } } if lnoUpstream == nil { return nil, fs.ErrorObjectNotFound } return lnoUpstream, nil } func (p *EpLno) lnoEntries(entries []upstream.Entry) (upstream.Entry, error) { var minNumObj int64 = math.MaxInt64 var lnoEntry upstream.Entry for _, e := range entries { numObj, err := e.UpstreamFs().GetNumObjects() if err != nil { fs.LogPrintf(fs.LogLevelNotice, nil, "Number of Objects is not supported for upstream %s, treating as 0", e.UpstreamFs().Name()) } if minNumObj > numObj { minNumObj = numObj lnoEntry = e } } return lnoEntry, nil } // Action category policy, governing the modification of files and directories func (p *EpLno) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.EpAll.Action(ctx, upstreams, path) if err != nil { return nil, err } u, err := p.lno(upstreams) return []*upstream.Fs{u}, err } // ActionEntries is ACTION category policy but receiving a set of candidate entries func (p *EpLno) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.EpAll.ActionEntries(entries...) if err != nil { return nil, err } e, err := p.lnoEntries(entries) return []upstream.Entry{e}, err } // Create category policy, governing the creation of files and directories func (p *EpLno) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.EpAll.Create(ctx, upstreams, path) if err != nil { return nil, err } u, err := p.lno(upstreams) return []*upstream.Fs{u}, err } // CreateEntries is CREATE category policy but receiving a set of candidate entries func (p *EpLno) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.EpAll.CreateEntries(entries...) 
if err != nil { return nil, err } e, err := p.lnoEntries(entries) return []upstream.Entry{e}, err } // Search category policy, governing the access to files and directories func (p *EpLno) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams, err := p.epall(ctx, upstreams, path) if err != nil { return nil, err } return p.lno(upstreams) } // SearchEntries is SEARCH category policy but receiving a set of candidate entries func (p *EpLno) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } return p.lnoEntries(entries) } rclone-1.53.3/backend/union/policy/eplus.go000066400000000000000000000063311375552240400205740ustar00rootroot00000000000000package policy import ( "context" "math" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("eplus", &EpLus{}) } // EpLus stands for existing path, least used space // Of all the candidates on which the path exists choose the one with the least used space. type EpLus struct { EpAll } func (p *EpLus) lus(upstreams []*upstream.Fs) (*upstream.Fs, error) { var minUsedSpace int64 = math.MaxInt64 var lusupstream *upstream.Fs for _, u := range upstreams { space, err := u.GetUsedSpace() if err != nil { fs.LogPrintf(fs.LogLevelNotice, nil, "Used Space is not supported for upstream %s, treating as 0", u.Name()) } if space < minUsedSpace { minUsedSpace = space lusupstream = u } } if lusupstream == nil { return nil, fs.ErrorObjectNotFound } return lusupstream, nil } func (p *EpLus) lusEntries(entries []upstream.Entry) (upstream.Entry, error) { var minUsedSpace int64 = math.MaxInt64 var lusEntry upstream.Entry for _, e := range entries { space, err := e.UpstreamFs().GetUsedSpace() if err != nil { fs.LogPrintf(fs.LogLevelNotice, nil, "Used Space is not supported for upstream %s, treating as 0", e.UpstreamFs().Name()) } if space < minUsedSpace { minUsedSpace = space lusEntry = e } } return lusEntry, nil } // Action category policy, governing the modification of files and directories func (p *EpLus) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.EpAll.Action(ctx, upstreams, path) if err != nil { return nil, err } u, err := p.lus(upstreams) return []*upstream.Fs{u}, err } // ActionEntries is ACTION category policy but receiving a set of candidate entries func (p *EpLus) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.EpAll.ActionEntries(entries...) if err != nil { return nil, err } e, err := p.lusEntries(entries) return []upstream.Entry{e}, err } // Create category policy, governing the creation of files and directories func (p *EpLus) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.EpAll.Create(ctx, upstreams, path) if err != nil { return nil, err } u, err := p.lus(upstreams) return []*upstream.Fs{u}, err } // CreateEntries is CREATE category policy but receiving a set of candidate entries func (p *EpLus) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.EpAll.CreateEntries(entries...)
if err != nil { return nil, err } e, err := p.lusEntries(entries) return []upstream.Entry{e}, err } // Search category policy, governing the access to files and directories func (p *EpLus) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams, err := p.epall(ctx, upstreams, path) if err != nil { return nil, err } return p.lus(upstreams) } // SearchEntries is SEARCH category policy but receiving a set of candidate entries func (p *EpLus) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } return p.lusEntries(entries) } rclone-1.53.3/backend/union/policy/epmfs.go000066400000000000000000000063141375552240400205570ustar00rootroot00000000000000package policy import ( "context" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("epmfs", &EpMfs{}) } // EpMfs stands for existing path, most free space // Of all the candidates on which the path exists choose the one with the most free space. type EpMfs struct { EpAll } func (p *EpMfs) mfs(upstreams []*upstream.Fs) (*upstream.Fs, error) { var maxFreeSpace int64 var mfsupstream *upstream.Fs for _, u := range upstreams { space, err := u.GetFreeSpace() if err != nil { fs.LogPrintf(fs.LogLevelNotice, nil, "Free Space is not supported for upstream %s, treating as infinite", u.Name()) } if maxFreeSpace < space { maxFreeSpace = space mfsupstream = u } } if mfsupstream == nil { return nil, fs.ErrorObjectNotFound } return mfsupstream, nil } func (p *EpMfs) mfsEntries(entries []upstream.Entry) (upstream.Entry, error) { var maxFreeSpace int64 var mfsEntry upstream.Entry for _, e := range entries { space, err := e.UpstreamFs().GetFreeSpace() if err != nil { fs.LogPrintf(fs.LogLevelNotice, nil, "Free Space is not supported for upstream %s, treating as infinite", e.UpstreamFs().Name()) } if maxFreeSpace < space { maxFreeSpace = space mfsEntry = e } } return mfsEntry, nil } // Action category policy, governing the modification of files and directories func (p *EpMfs) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.EpAll.Action(ctx, upstreams, path) if err != nil { return nil, err } u, err := p.mfs(upstreams) return []*upstream.Fs{u}, err } // ActionEntries is ACTION category policy but receiving a set of candidate entries func (p *EpMfs) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.EpAll.ActionEntries(entries...) if err != nil { return nil, err } e, err := p.mfsEntries(entries) return []upstream.Entry{e}, err } // Create category policy, governing the creation of files and directories func (p *EpMfs) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.EpAll.Create(ctx, upstreams, path) if err != nil { return nil, err } u, err := p.mfs(upstreams) return []*upstream.Fs{u}, err } // CreateEntries is CREATE category policy but receiving a set of candidate entries func (p *EpMfs) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.EpAll.CreateEntries(entries...) 
if err != nil { return nil, err } e, err := p.mfsEntries(entries) return []upstream.Entry{e}, err } // Search category policy, governing the access to files and directories func (p *EpMfs) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams, err := p.epall(ctx, upstreams, path) if err != nil { return nil, err } return p.mfs(upstreams) } // SearchEntries is SEARCH category policy but receiving a set of candidate entries func (p *EpMfs) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } return p.mfsEntries(entries) } rclone-1.53.3/backend/union/policy/eprand.go000066400000000000000000000046611375552240400207150ustar00rootroot00000000000000package policy import ( "context" "math/rand" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("eprand", &EpRand{}) } // EpRand stands for existing path, random // Calls epall and then randomizes. Returns one candidate. type EpRand struct { EpAll } func (p *EpRand) rand(upstreams []*upstream.Fs) *upstream.Fs { return upstreams[rand.Intn(len(upstreams))] } func (p *EpRand) randEntries(entries []upstream.Entry) upstream.Entry { return entries[rand.Intn(len(entries))] } // Action category policy, governing the modification of files and directories func (p *EpRand) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.EpAll.Action(ctx, upstreams, path) if err != nil { return nil, err } return []*upstream.Fs{p.rand(upstreams)}, nil } // ActionEntries is ACTION category policy but receiving a set of candidate entries func (p *EpRand) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.EpAll.ActionEntries(entries...) if err != nil { return nil, err } return []upstream.Entry{p.randEntries(entries)}, nil } // Create category policy, governing the creation of files and directories func (p *EpRand) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.EpAll.Create(ctx, upstreams, path) if err != nil { return nil, err } return []*upstream.Fs{p.rand(upstreams)}, nil } // CreateEntries is CREATE category policy but receiving a set of candidate entries func (p *EpRand) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.EpAll.CreateEntries(entries...)
if err != nil { return nil, err } return []upstream.Entry{p.randEntries(entries)}, nil } // Search category policy, governing the access to files and directories func (p *EpRand) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams, err := p.epall(ctx, upstreams, path) if err != nil { return nil, err } return p.rand(upstreams), nil } // SearchEntries is SEARCH category policy but receiving a set of candidate entries func (p *EpRand) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } return p.randEntries(entries), nil } rclone-1.53.3/backend/union/policy/ff.go000066400000000000000000000013641375552240400200400ustar00rootroot00000000000000package policy import ( "context" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("ff", &FF{}) } // FF stands for first found // Search category: same as epff. // Action category: same as epff. // Create category: Given the order of the candidates, act on the first one found. type FF struct { EpFF } // Create category policy, governing the creation of files and directories func (p *FF) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterNC(upstreams) if len(upstreams) == 0 { return upstreams, fs.ErrorPermissionDenied } return upstreams[:1], nil } rclone-1.53.3/backend/union/policy/lfs.go000066400000000000000000000014071375552240400202270ustar00rootroot00000000000000package policy import ( "context" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("lfs", &Lfs{}) } // Lfs stands for least free space // Search category: same as eplfs. // Action category: same as eplfs. // Create category: Pick the drive with the least free space. type Lfs struct { EpLfs } // Create category policy, governing the creation of files and directories func (p *Lfs) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterNC(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } u, err := p.lfs(upstreams) return []*upstream.Fs{u}, err } rclone-1.53.3/backend/union/policy/lno.go000066400000000000000000000014251375552240400202330ustar00rootroot00000000000000package policy import ( "context" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("lno", &Lno{}) } // Lno stands for least number of objects // Search category: same as eplno. // Action category: same as eplno. // Create category: Pick the drive with the least number of objects.
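// Like ff and lfs above, the plain policies in this file drop the ep*
// "existing path" constraint only in the create category. A union remote
// selects its policies through the config keys registered in union.go; a
// minimal sketch of such a config (remote names here are hypothetical):
//
//	[myunion]
//	type = union
//	upstreams = remote1:path remote2:path
//	action_policy = epall
//	create_policy = lno
//	search_policy = ff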
type Lno struct { EpLno } // Create category policy, governing the creation of files and directories func (p *Lno) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterNC(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } u, err := p.lno(upstreams) return []*upstream.Fs{u}, err } rclone-1.53.3/backend/union/policy/lus.go000066400000000000000000000014071375552240400202460ustar00rootroot00000000000000package policy import ( "context" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("lus", &Lus{}) } // Lus stands for least used space // Search category: same as eplus. // Action category: same as eplus. // Create category: Pick the drive with the least used space. type Lus struct { EpLus } // Create category policy, governing the creation of files and directories func (p *Lus) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterNC(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } u, err := p.lus(upstreams) return []*upstream.Fs{u}, err } rclone-1.53.3/backend/union/policy/mfs.go000066400000000000000000000014051375552240400202260ustar00rootroot00000000000000package policy import ( "context" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("mfs", &Mfs{}) } // Mfs stands for most free space // Search category: same as epmfs. // Action category: same as epmfs. // Create category: Pick the drive with the most free space. type Mfs struct { EpMfs } // Create category policy, governing the creation of files and directories func (p *Mfs) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterNC(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } u, err := p.mfs(upstreams) return []*upstream.Fs{u}, err } rclone-1.53.3/backend/union/policy/newest.go000066400000000000000000000076211375552240400207540ustar00rootroot00000000000000package policy import ( "context" "path" "sync" "time" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("newest", &Newest{}) } // Newest policy picks the file / directory with the largest mtime // It implies the existence of a path type Newest struct { EpAll } func (p *Newest) newest(ctx context.Context, upstreams []*upstream.Fs, filePath string) (*upstream.Fs, error) { var wg sync.WaitGroup ufs := make([]*upstream.Fs, len(upstreams)) mtimes := make([]time.Time, len(upstreams)) for i, u := range upstreams { wg.Add(1) i, u := i, u // Closure go func() { defer wg.Done() rfs := u.RootFs remote := path.Join(u.RootPath, filePath) if e := findEntry(ctx, rfs, remote); e != nil { ufs[i] = u mtimes[i] = e.ModTime(ctx) } }() } wg.Wait() maxMtime := time.Time{} var newestFs *upstream.Fs for i, u := range ufs { if u != nil && mtimes[i].After(maxMtime) { maxMtime = mtimes[i] newestFs = u } } if newestFs == nil { return nil, fs.ErrorObjectNotFound } return newestFs, nil } func (p *Newest) newestEntries(entries []upstream.Entry) (upstream.Entry, error) { var wg sync.WaitGroup mtimes := make([]time.Time, len(entries)) ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) 
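// The 5 second timeout above bounds the concurrent ModTime lookups below so
// that a slow upstream should not stall the policy indefinitely.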
defer cancel() for i, e := range entries { wg.Add(1) i, e := i, e // Closure go func() { defer wg.Done() mtimes[i] = e.ModTime(ctx) }() } wg.Wait() maxMtime := time.Time{} var newestEntry upstream.Entry for i, t := range mtimes { if t.After(maxMtime) { maxMtime = t newestEntry = entries[i] } } if newestEntry == nil { return nil, fs.ErrorObjectNotFound } return newestEntry, nil } // Action category policy, governing the modification of files and directories func (p *Newest) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterRO(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } u, err := p.newest(ctx, upstreams, path) return []*upstream.Fs{u}, err } // ActionEntries is ACTION category policy but receiving a set of candidate entries func (p *Newest) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } entries = filterROEntries(entries) if len(entries) == 0 { return nil, fs.ErrorPermissionDenied } e, err := p.newestEntries(entries) return []upstream.Entry{e}, err } // Create category policy, governing the creation of files and directories func (p *Newest) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams = filterNC(upstreams) if len(upstreams) == 0 { return nil, fs.ErrorPermissionDenied } u, err := p.newest(ctx, upstreams, path+"/..") return []*upstream.Fs{u}, err } // CreateEntries is CREATE category policy but receiving a set of candidate entries func (p *Newest) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } entries = filterNCEntries(entries) if len(entries) == 0 { return nil, fs.ErrorPermissionDenied } e, err := p.newestEntries(entries) return []upstream.Entry{e}, err } // Search category policy, governing the access to files and directories func (p *Newest) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } return p.newest(ctx, upstreams, path) } // SearchEntries is SEARCH category policy but receiving a set of candidate entries func (p *Newest) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } return p.newestEntries(entries) } rclone-1.53.3/backend/union/policy/policy.go000066400000000000000000000061011375552240400207360ustar00rootroot00000000000000package policy import ( "context" "math/rand" "path" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) var policies = make(map[string]Policy) // Policy is the interface of a set of defined behavior choosing // the upstream Fs to operate on type Policy interface { // Action category policy, governing the modification of files and directories Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) // Create category policy, governing the creation of files and directories Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) // Search category policy, governing the access to files and directories Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) // ActionEntries is ACTION 
category policy but receiving a set of candidate entries ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) // CreateEntries is CREATE category policy but receiving a set of candidate entries CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) // SearchEntries is SEARCH category policy but receiving a set of candidate entries SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) } func registerPolicy(name string, p Policy) { policies[strings.ToLower(name)] = p } // Get a Policy from the list func Get(name string) (Policy, error) { p, ok := policies[strings.ToLower(name)] if !ok { return nil, errors.Errorf("didn't find policy called %q", name) } return p, nil } func filterRO(ufs []*upstream.Fs) (wufs []*upstream.Fs) { for _, u := range ufs { if u.IsWritable() { wufs = append(wufs, u) } } return wufs } func filterROEntries(ue []upstream.Entry) (wue []upstream.Entry) { for _, e := range ue { if e.UpstreamFs().IsWritable() { wue = append(wue, e) } } return wue } func filterNC(ufs []*upstream.Fs) (wufs []*upstream.Fs) { for _, u := range ufs { if u.IsCreatable() { wufs = append(wufs, u) } } return wufs } func filterNCEntries(ue []upstream.Entry) (wue []upstream.Entry) { for _, e := range ue { if e.UpstreamFs().IsCreatable() { wue = append(wue, e) } } return wue } func parentDir(absPath string) string { parent := path.Dir(strings.TrimRight(absPath, "/")) if parent == "." { parent = "" } return parent } func clean(absPath string) string { cleanPath := path.Clean(absPath) if cleanPath == "." { cleanPath = "" } return cleanPath } func findEntry(ctx context.Context, f fs.Fs, remote string) fs.DirEntry { remote = clean(remote) dir := parentDir(remote) entries, err := f.List(ctx, dir) if remote == dir { if err != nil { return nil } // random modtime for root randomNow := time.Unix(time.Now().Unix()-rand.Int63n(10000), 0) return fs.NewDir("", randomNow) } found := false for _, e := range entries { eRemote := e.Remote() if f.Features().CaseInsensitive { found = strings.EqualFold(remote, eRemote) } else { found = (remote == eRemote) } if found { return e } } return nil } rclone-1.53.3/backend/union/policy/rand.go000066400000000000000000000045761375552240400204010ustar00rootroot00000000000000package policy import ( "context" "math/rand" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" ) func init() { registerPolicy("rand", &Rand{}) } // Rand stands for random // Calls all and then randomizes. Returns one candidate. type Rand struct { All } func (p *Rand) rand(upstreams []*upstream.Fs) *upstream.Fs { return upstreams[rand.Intn(len(upstreams))] } func (p *Rand) randEntries(entries []upstream.Entry) upstream.Entry { return entries[rand.Intn(len(entries))] } // Action category policy, governing the modification of files and directories func (p *Rand) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.All.Action(ctx, upstreams, path) if err != nil { return nil, err } return []*upstream.Fs{p.rand(upstreams)}, nil } // ActionEntries is ACTION category policy but receiving a set of candidate entries func (p *Rand) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.All.ActionEntries(entries...) 
if err != nil { return nil, err } return []upstream.Entry{p.randEntries(entries)}, nil } // Create category policy, governing the creation of files and directories func (p *Rand) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) { upstreams, err := p.All.Create(ctx, upstreams, path) if err != nil { return nil, err } return []*upstream.Fs{p.rand(upstreams)}, nil } // CreateEntries is CREATE category policy but receiving a set of candidate entries func (p *Rand) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { entries, err := p.All.CreateEntries(entries...) if err != nil { return nil, err } return []upstream.Entry{p.randEntries(entries)}, nil } // Search category policy, governing the access to files and directories func (p *Rand) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) { if len(upstreams) == 0 { return nil, fs.ErrorObjectNotFound } upstreams, err := p.epall(ctx, upstreams, path) if err != nil { return nil, err } return p.rand(upstreams), nil } // SearchEntries is SEARCH category policy but receiving a set of candidate entries func (p *Rand) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) { if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } return p.randEntries(entries), nil } rclone-1.53.3/backend/union/union.go000066400000000000000000000566761375552240400173160ustar00rootroot00000000000000package union import ( "bufio" "context" "fmt" "io" "path" "path/filepath" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/union/policy" "github.com/rclone/rclone/backend/union/upstream" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/walk" ) // Register with Fs func init() { fsi := &fs.RegInfo{ Name: "union", Description: "Union merges the contents of several upstream fs", NewFs: NewFs, Options: []fs.Option{{ Name: "upstreams", Help: "List of space separated upstreams.\nCan be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc.\n", Required: true, }, { Name: "action_policy", Help: "Policy to choose upstream on ACTION category.", Required: true, Default: "epall", }, { Name: "create_policy", Help: "Policy to choose upstream on CREATE category.", Required: true, Default: "epmfs", }, { Name: "search_policy", Help: "Policy to choose upstream on SEARCH category.", Required: true, Default: "ff", }, { Name: "cache_time", Help: "Cache time of usage and free space (in seconds). 
This option is only useful when a path preserving policy is used.", Required: true, Default: 120, }}, } fs.Register(fsi) } // Options defines the configuration for this backend type Options struct { Upstreams fs.SpaceSepList `config:"upstreams"` Remotes fs.SpaceSepList `config:"remotes"` // Deprecated ActionPolicy string `config:"action_policy"` CreatePolicy string `config:"create_policy"` SearchPolicy string `config:"search_policy"` CacheTime int `config:"cache_time"` } // Fs represents a union of upstreams type Fs struct { name string // name of this remote features *fs.Features // optional features opt Options // options for this Fs root string // the path we are working on upstreams []*upstream.Fs // slice of upstreams hashSet hash.Set // intersection of hash types actionPolicy policy.Policy // policy for ACTION createPolicy policy.Policy // policy for CREATE searchPolicy policy.Policy // policy for SEARCH } // Wrap candidate objects in to a union Object func (f *Fs) wrapEntries(entries ...upstream.Entry) (entry, error) { e, err := f.searchEntries(entries...) if err != nil { return nil, err } switch e.(type) { case *upstream.Object: return &Object{ Object: e.(*upstream.Object), fs: f, co: entries, }, nil case *upstream.Directory: return &Directory{ Directory: e.(*upstream.Directory), cd: entries, }, nil default: return nil, errors.Errorf("unknown object type %T", e) } } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("union root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // Rmdir removes the root directory of the Fs object func (f *Fs) Rmdir(ctx context.Context, dir string) error { upstreams, err := f.action(ctx, dir) if err != nil { return err } errs := Errors(make([]error, len(upstreams))) multithread(len(upstreams), func(i int) { err := upstreams[i].Rmdir(ctx, dir) errs[i] = errors.Wrap(err, upstreams[i].Name()) }) return errs.Err() } // Hashes returns the intersection of the hash types supported by all the upstreams func (f *Fs) Hashes() hash.Set { return f.hashSet } // Mkdir makes the root directory of the Fs object func (f *Fs) Mkdir(ctx context.Context, dir string) error { upstreams, err := f.create(ctx, dir) if err == fs.ErrorObjectNotFound { if dir != parentDir(dir) { if err := f.Mkdir(ctx, parentDir(dir)); err != nil { return err } upstreams, err = f.create(ctx, dir) } else if dir == "" { // If root dirs not created then create them upstreams, err = f.upstreams, nil } } if err != nil { return err } errs := Errors(make([]error, len(upstreams))) multithread(len(upstreams), func(i int) { err := upstreams[i].Mkdir(ctx, dir) errs[i] = errors.Wrap(err, upstreams[i].Name()) }) return errs.Err() } // Purge all files in the directory // // Implement this if you have a way of deleting all the files // quicker than just running Remove() on the result of List() // // Return an error if it doesn't exist func (f *Fs) Purge(ctx context.Context, dir string) error { for _, r := range f.upstreams { if r.Features().Purge == nil { return fs.ErrorCantPurge } } upstreams, err := f.action(ctx, "") if err != nil { return err } errs := Errors(make([]error, len(upstreams))) multithread(len(upstreams), func(i int) { err := upstreams[i].Features().Purge(ctx, dir) if errors.Cause(err) == fs.ErrorDirNotFound { err
= nil } errs[i] = errors.Wrap(err, upstreams[i].Name()) }) return errs.Err() } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } o := srcObj.UnWrap() su := o.UpstreamFs() if su.Features().Copy == nil { return nil, fs.ErrorCantCopy } var du *upstream.Fs for _, u := range f.upstreams { if operations.Same(u.RootFs, su.RootFs) { du = u } } if du == nil { return nil, fs.ErrorCantCopy } if !du.IsCreatable() { return nil, fs.ErrorPermissionDenied } co, err := du.Features().Copy(ctx, o, remote) if err != nil || co == nil { return nil, err } wo, err := f.wrapEntries(du.WrapObject(co)) return wo.(*Object), err } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { o, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } entries, err := f.actionEntries(o.candidates()...) if err != nil { return nil, err } for _, e := range entries { if e.UpstreamFs().Features().Move == nil { return nil, fs.ErrorCantMove } } objs := make([]*upstream.Object, len(entries)) errs := Errors(make([]error, len(entries))) multithread(len(entries), func(i int) { su := entries[i].UpstreamFs() o, ok := entries[i].(*upstream.Object) if !ok { errs[i] = errors.Wrap(fs.ErrorNotAFile, su.Name()) return } var du *upstream.Fs for _, u := range f.upstreams { if operations.Same(u.RootFs, su.RootFs) { du = u } } if du == nil { errs[i] = errors.Wrap(fs.ErrorCantMove, su.Name()+":"+remote) return } mo, err := du.Features().Move(ctx, o.UnWrap(), remote) if err != nil || mo == nil { errs[i] = errors.Wrap(err, su.Name()) return } objs[i] = du.WrapObject(mo) }) var en []upstream.Entry for _, o := range objs { if o != nil { en = append(en, o) } } e, err := f.wrapEntries(en...) if err != nil { return nil, err } return e.(*Object), errs.Err() } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. 
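//
// The move is fanned out to every upstream chosen by the ACTION policy;
// fs.ErrorDirExists is only returned when each failing upstream reports it.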
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { sfs, ok := src.(*Fs) if !ok { fs.Debugf(src, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } upstreams, err := sfs.action(ctx, srcRemote) if err != nil { return err } for _, u := range upstreams { if u.Features().DirMove == nil { return fs.ErrorCantDirMove } } errs := Errors(make([]error, len(upstreams))) multithread(len(upstreams), func(i int) { su := upstreams[i] var du *upstream.Fs for _, u := range f.upstreams { if operations.Same(u.RootFs, su.RootFs) { du = u } } if du == nil { errs[i] = errors.Wrap(fs.ErrorCantDirMove, su.Name()+":"+su.Root()) return } err := du.Features().DirMove(ctx, su.Fs, srcRemote, dstRemote) errs[i] = errors.Wrap(err, du.Name()+":"+du.Root()) }) errs = errs.FilterNil() if len(errs) == 0 { return nil } for _, e := range errs { if errors.Cause(e) != fs.ErrorDirExists { return errs } } return fs.ErrorDirExists } // ChangeNotify calls the passed function with a path // that has had changes. If the implementation // uses polling, it should adhere to the given interval. // At least one value will be written to the channel, // specifying the initial value and updated values might // follow. A 0 Duration should pause the polling. // The ChangeNotify implementation must empty the channel // regularly. When the channel gets closed, the implementation // should stop polling and release resources. func (f *Fs) ChangeNotify(ctx context.Context, fn func(string, fs.EntryType), ch <-chan time.Duration) { var uChans []chan time.Duration for _, u := range f.upstreams { if ChangeNotify := u.Features().ChangeNotify; ChangeNotify != nil { ch := make(chan time.Duration) uChans = append(uChans, ch) ChangeNotify(ctx, fn, ch) } } go func() { for i := range ch { for _, c := range uChans { c <- i } } for _, c := range uChans { close(c) } }() } // DirCacheFlush resets the directory cache - used in testing // as an optional interface func (f *Fs) DirCacheFlush() { multithread(len(f.upstreams), func(i int) { if do := f.upstreams[i].Features().DirCacheFlush; do != nil { do() } }) } // Tee in into n outputs // // When finished read the error from the channel func multiReader(n int, in io.Reader) ([]io.Reader, <-chan error) { readers := make([]io.Reader, n) pipeWriters := make([]*io.PipeWriter, n) writers := make([]io.Writer, n) errChan := make(chan error, 1) for i := range writers { r, w := io.Pipe() bw := bufio.NewWriter(w) readers[i], pipeWriters[i], writers[i] = r, w, bw } go func() { mw := io.MultiWriter(writers...) 
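// es collects one flush error per writer in [0, n), one pipe close
// error per writer in [n, 2n), and the io.Copy error at index 2n.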
es := make([]error, 2*n+1) _, copyErr := io.Copy(mw, in) es[2*n] = copyErr // Flush the buffers for i, bw := range writers { es[i] = bw.(*bufio.Writer).Flush() } // Close the underlying pipes for i, pw := range pipeWriters { es[n+i] = pw.CloseWithError(copyErr) } errChan <- Errors(es).Err() }() return readers, errChan } func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, stream bool, options ...fs.OpenOption) (fs.Object, error) { srcPath := src.Remote() upstreams, err := f.create(ctx, srcPath) if err == fs.ErrorObjectNotFound { if err := f.Mkdir(ctx, parentDir(srcPath)); err != nil { return nil, err } upstreams, err = f.create(ctx, srcPath) } if err != nil { return nil, err } if len(upstreams) == 1 { u := upstreams[0] var o fs.Object var err error if stream { o, err = u.Features().PutStream(ctx, in, src, options...) } else { o, err = u.Put(ctx, in, src, options...) } if err != nil { return nil, err } e, err := f.wrapEntries(u.WrapObject(o)) return e.(*Object), err } // Multi-threading readers, errChan := multiReader(len(upstreams), in) errs := Errors(make([]error, len(upstreams)+1)) objs := make([]upstream.Entry, len(upstreams)) multithread(len(upstreams), func(i int) { u := upstreams[i] var o fs.Object var err error if stream { o, err = u.Features().PutStream(ctx, readers[i], src, options...) } else { o, err = u.Put(ctx, readers[i], src, options...) } if err != nil { errs[i] = errors.Wrap(err, u.Name()) return } objs[i] = u.WrapObject(o) }) errs[len(upstreams)] = <-errChan err = errs.Err() if err != nil { return nil, err } e, err := f.wrapEntries(objs...) return e.(*Object), err } // Put in to the remote path with the modTime given of the given size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { o, err := f.NewObject(ctx, src.Remote()) switch err { case nil: return o, o.Update(ctx, in, src, options...) case fs.ErrorObjectNotFound: return f.put(ctx, in, src, false, options...) default: return nil, err } } // PutStream uploads to the remote path with the modTime given of indeterminate size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { o, err := f.NewObject(ctx, src.Remote()) switch err { case nil: return o, o.Update(ctx, in, src, options...) case fs.ErrorObjectNotFound: return f.put(ctx, in, src, true, options...)
default: return nil, err } } // About gets quota information from the Fs func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { usage := &fs.Usage{ Total: new(int64), Used: new(int64), Trashed: new(int64), Other: new(int64), Free: new(int64), Objects: new(int64), } for _, u := range f.upstreams { usg, err := u.About(ctx) if errors.Cause(err) == fs.ErrorDirNotFound { continue } if err != nil { return nil, err } if usg.Total != nil && usage.Total != nil { *usage.Total += *usg.Total } else { usage.Total = nil } if usg.Used != nil && usage.Used != nil { *usage.Used += *usg.Used } else { usage.Used = nil } if usg.Trashed != nil && usage.Trashed != nil { *usage.Trashed += *usg.Trashed } else { usage.Trashed = nil } if usg.Other != nil && usage.Other != nil { *usage.Other += *usg.Other } else { usage.Other = nil } if usg.Free != nil && usage.Free != nil { *usage.Free += *usg.Free } else { usage.Free = nil } if usg.Objects != nil && usage.Objects != nil { *usage.Objects += *usg.Objects } else { usage.Objects = nil } } return usage, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { entriess := make([][]upstream.Entry, len(f.upstreams)) errs := Errors(make([]error, len(f.upstreams))) multithread(len(f.upstreams), func(i int) { u := f.upstreams[i] entries, err := u.List(ctx, dir) if err != nil { errs[i] = errors.Wrap(err, u.Name()) return } uEntries := make([]upstream.Entry, len(entries)) for j, e := range entries { uEntries[j], _ = u.WrapEntry(e) } entriess[i] = uEntries }) if len(errs) == len(errs.FilterNil()) { errs = errs.Map(func(e error) error { if errors.Cause(e) == fs.ErrorDirNotFound { return nil } return e }) if len(errs) == 0 { return nil, fs.ErrorDirNotFound } return nil, errs.Err() } return f.mergeDirEntries(entriess) } // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively than doing a directory traversal.
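//
// The union implementation gathers the entries of all upstreams first,
// merges duplicates via mergeDirEntries and then calls the callback once.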
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { var entriess [][]upstream.Entry errs := Errors(make([]error, len(f.upstreams))) var mutex sync.Mutex multithread(len(f.upstreams), func(i int) { u := f.upstreams[i] var err error callback := func(entries fs.DirEntries) error { uEntries := make([]upstream.Entry, len(entries)) for j, e := range entries { uEntries[j], _ = u.WrapEntry(e) } mutex.Lock() entriess = append(entriess, uEntries) mutex.Unlock() return nil } do := u.Features().ListR if do != nil { err = do(ctx, dir, callback) } else { err = walk.ListR(ctx, u, dir, true, -1, walk.ListAll, callback) } if err != nil { errs[i] = errors.Wrap(err, u.Name()) return } }) if len(errs) == len(errs.FilterNil()) { errs = errs.Map(func(e error) error { if errors.Cause(e) == fs.ErrorDirNotFound { return nil } return e }) if len(errs) == 0 { return fs.ErrorDirNotFound } return errs.Err() } entries, err := f.mergeDirEntries(entriess) if err != nil { return err } return callback(entries) } // NewObject creates a new remote union file object func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { objs := make([]*upstream.Object, len(f.upstreams)) errs := Errors(make([]error, len(f.upstreams))) multithread(len(f.upstreams), func(i int) { u := f.upstreams[i] o, err := u.NewObject(ctx, remote) if err != nil && err != fs.ErrorObjectNotFound { errs[i] = errors.Wrap(err, u.Name()) return } objs[i] = u.WrapObject(o) }) var entries []upstream.Entry for _, o := range objs { if o != nil { entries = append(entries, o) } } if len(entries) == 0 { return nil, fs.ErrorObjectNotFound } e, err := f.wrapEntries(entries...) if err != nil { return nil, err } return e.(*Object), errs.Err() } // Precision is the greatest Precision of all upstreams func (f *Fs) Precision() time.Duration { var greatestPrecision time.Duration for _, u := range f.upstreams { if u.Precision() > greatestPrecision { greatestPrecision = u.Precision() } } return greatestPrecision } func (f *Fs) action(ctx context.Context, path string) ([]*upstream.Fs, error) { return f.actionPolicy.Action(ctx, f.upstreams, path) } func (f *Fs) actionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { return f.actionPolicy.ActionEntries(entries...) } func (f *Fs) create(ctx context.Context, path string) ([]*upstream.Fs, error) { return f.createPolicy.Create(ctx, f.upstreams, path) } func (f *Fs) createEntries(entries ...upstream.Entry) ([]upstream.Entry, error) { return f.createPolicy.CreateEntries(entries...) } func (f *Fs) search(ctx context.Context, path string) (*upstream.Fs, error) { return f.searchPolicy.Search(ctx, f.upstreams, path) } func (f *Fs) searchEntries(entries ...upstream.Entry) (upstream.Entry, error) { return f.searchPolicy.SearchEntries(entries...) } func (f *Fs) mergeDirEntries(entriess [][]upstream.Entry) (fs.DirEntries, error) { entryMap := make(map[string]([]upstream.Entry)) for _, en := range entriess { if en == nil { continue } for _, entry := range en { remote := entry.Remote() if f.Features().CaseInsensitive { remote = strings.ToLower(remote) } entryMap[remote] = append(entryMap[remote], entry) } } var entries fs.DirEntries for path := range entryMap { e, err := f.wrapEntries(entryMap[path]...) if err != nil { return nil, err } entries = append(entries, e) } return entries, nil } // NewFs constructs an Fs from the path. 
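//
// Each upstream in the config may carry a :ro (read only) or :nc (no
// create) suffix, which upstream.New strips before opening the remote.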
// // The returned Fs is the actual Fs, referenced by remote in the config func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } // Backward compatible to old config if len(opt.Upstreams) == 0 && len(opt.Remotes) > 0 { for i := 0; i < len(opt.Remotes)-1; i++ { opt.Remotes[i] = opt.Remotes[i] + ":ro" } opt.Upstreams = opt.Remotes } if len(opt.Upstreams) == 0 { return nil, errors.New("union can't point to an empty upstream - check the value of the upstreams setting") } if len(opt.Upstreams) == 1 { return nil, errors.New("union can't point to a single upstream - check the value of the upstreams setting") } for _, u := range opt.Upstreams { if strings.HasPrefix(u, name+":") { return nil, errors.New("can't point union remote at itself - check the value of the upstreams setting") } } upstreams := make([]*upstream.Fs, len(opt.Upstreams)) errs := Errors(make([]error, len(opt.Upstreams))) multithread(len(opt.Upstreams), func(i int) { u := opt.Upstreams[i] upstreams[i], errs[i] = upstream.New(u, root, time.Duration(opt.CacheTime)*time.Second) }) var usedUpstreams []*upstream.Fs var fserr error for i, err := range errs { if err != nil && err != fs.ErrorIsFile { return nil, err } // Only the upstreams returns ErrorIsFile would be used if any if err == fs.ErrorIsFile { usedUpstreams = append(usedUpstreams, upstreams[i]) fserr = fs.ErrorIsFile } } if fserr == nil { usedUpstreams = upstreams } f := &Fs{ name: name, root: root, opt: *opt, upstreams: usedUpstreams, } f.actionPolicy, err = policy.Get(opt.ActionPolicy) if err != nil { return nil, err } f.createPolicy, err = policy.Get(opt.CreatePolicy) if err != nil { return nil, err } f.searchPolicy, err = policy.Get(opt.SearchPolicy) if err != nil { return nil, err } fs.Debugf(f, "actionPolicy = %T, createPolicy = %T, searchPolicy = %T", f.actionPolicy, f.createPolicy, f.searchPolicy) var features = (&fs.Features{ CaseInsensitive: true, DuplicateFiles: false, ReadMimeType: true, WriteMimeType: true, CanHaveEmptyDirectories: true, BucketBased: true, SetTier: true, GetTier: true, }).Fill(f) for _, f := range upstreams { features = features.Mask(f) // Mask all upstream fs } // Enable ListR when upstreams either support ListR or is local // But not when all upstreams are local if features.ListR == nil { for _, u := range upstreams { if u.Features().ListR != nil { features.ListR = f.ListR } else if !u.Features().IsLocal { features.ListR = nil break } } } f.features = features // Get common intersection of hashes hashSet := f.upstreams[0].Hashes() for _, u := range f.upstreams[1:] { hashSet = hashSet.Overlap(u.Hashes()) } f.hashSet = hashSet return f, fserr } func parentDir(absPath string) string { parent := path.Dir(strings.TrimRight(filepath.ToSlash(absPath), "/")) if parent == "." 
{ parent = "" } return parent } func multithread(num int, fn func(int)) { var wg sync.WaitGroup for i := 0; i < num; i++ { wg.Add(1) i := i go func() { defer wg.Done() fn(i) }() } wg.Wait() } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.PutStreamer = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil) _ fs.ChangeNotifier = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.ListRer = (*Fs)(nil) ) rclone-1.53.3/backend/union/union_test.go000066400000000000000000000153731375552240400203420ustar00rootroot00000000000000// Test Union filesystem interface package union_test import ( "os" "path/filepath" "testing" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/fstests" "github.com/stretchr/testify/require" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { if *fstest.RemoteName == "" { t.Skip("Skipping as -remote not set") } fstests.Run(t, &fstests.Opt{ RemoteName: *fstest.RemoteName, UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } func TestStandard(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("Skipping as -remote set") } tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-standard1") tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-standard2") tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-standard3") require.NoError(t, os.MkdirAll(tempdir1, 0744)) require.NoError(t, os.MkdirAll(tempdir2, 0744)) require.NoError(t, os.MkdirAll(tempdir3, 0744)) upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3 name := "TestUnion" fstests.Run(t, &fstests.Opt{ RemoteName: name + ":", ExtraConfig: []fstests.ExtraConfigItem{ {Name: name, Key: "type", Value: "union"}, {Name: name, Key: "upstreams", Value: upstreams}, {Name: name, Key: "action_policy", Value: "epall"}, {Name: name, Key: "create_policy", Value: "epmfs"}, {Name: name, Key: "search_policy", Value: "ff"}, }, UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } func TestRO(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("Skipping as -remote set") } tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-ro1") tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-ro2") tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-ro3") require.NoError(t, os.MkdirAll(tempdir1, 0744)) require.NoError(t, os.MkdirAll(tempdir2, 0744)) require.NoError(t, os.MkdirAll(tempdir3, 0744)) upstreams := tempdir1 + " " + tempdir2 + ":ro " + tempdir3 + ":ro" name := "TestUnionRO" fstests.Run(t, &fstests.Opt{ RemoteName: name + ":", ExtraConfig: []fstests.ExtraConfigItem{ {Name: name, Key: "type", Value: "union"}, {Name: name, Key: "upstreams", Value: upstreams}, {Name: name, Key: "action_policy", Value: "epall"}, {Name: name, Key: "create_policy", Value: "epmfs"}, {Name: name, Key: "search_policy", Value: "ff"}, }, UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } func TestNC(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("Skipping as -remote set") } tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-nc1") tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-nc2") tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-nc3") require.NoError(t, 
os.MkdirAll(tempdir1, 0744)) require.NoError(t, os.MkdirAll(tempdir2, 0744)) require.NoError(t, os.MkdirAll(tempdir3, 0744)) upstreams := tempdir1 + " " + tempdir2 + ":nc " + tempdir3 + ":nc" name := "TestUnionNC" fstests.Run(t, &fstests.Opt{ RemoteName: name + ":", ExtraConfig: []fstests.ExtraConfigItem{ {Name: name, Key: "type", Value: "union"}, {Name: name, Key: "upstreams", Value: upstreams}, {Name: name, Key: "action_policy", Value: "epall"}, {Name: name, Key: "create_policy", Value: "epmfs"}, {Name: name, Key: "search_policy", Value: "ff"}, }, UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } func TestPolicy1(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("Skipping as -remote set") } tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy11") tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy12") tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy13") require.NoError(t, os.MkdirAll(tempdir1, 0744)) require.NoError(t, os.MkdirAll(tempdir2, 0744)) require.NoError(t, os.MkdirAll(tempdir3, 0744)) upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3 name := "TestUnionPolicy1" fstests.Run(t, &fstests.Opt{ RemoteName: name + ":", ExtraConfig: []fstests.ExtraConfigItem{ {Name: name, Key: "type", Value: "union"}, {Name: name, Key: "upstreams", Value: upstreams}, {Name: name, Key: "action_policy", Value: "all"}, {Name: name, Key: "create_policy", Value: "lus"}, {Name: name, Key: "search_policy", Value: "all"}, }, UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } func TestPolicy2(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("Skipping as -remote set") } tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy21") tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy22") tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy23") require.NoError(t, os.MkdirAll(tempdir1, 0744)) require.NoError(t, os.MkdirAll(tempdir2, 0744)) require.NoError(t, os.MkdirAll(tempdir3, 0744)) upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3 name := "TestUnionPolicy2" fstests.Run(t, &fstests.Opt{ RemoteName: name + ":", ExtraConfig: []fstests.ExtraConfigItem{ {Name: name, Key: "type", Value: "union"}, {Name: name, Key: "upstreams", Value: upstreams}, {Name: name, Key: "action_policy", Value: "all"}, {Name: name, Key: "create_policy", Value: "rand"}, {Name: name, Key: "search_policy", Value: "ff"}, }, UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } func TestPolicy3(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("Skipping as -remote set") } tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy31") tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy32") tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy33") require.NoError(t, os.MkdirAll(tempdir1, 0744)) require.NoError(t, os.MkdirAll(tempdir2, 0744)) require.NoError(t, os.MkdirAll(tempdir3, 0744)) upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3 name := "TestUnionPolicy3" fstests.Run(t, &fstests.Opt{ RemoteName: name + ":", ExtraConfig: []fstests.ExtraConfigItem{ {Name: name, Key: "type", Value: "union"}, {Name: name, Key: "upstreams", Value: upstreams}, {Name: name, Key: "action_policy", Value: "all"}, {Name: name, Key: "create_policy", Value: "all"}, {Name: name, Key: "search_policy", Value: 
"all"}, }, UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"}, UnimplementableObjectMethods: []string{"MimeType"}, }) } rclone-1.53.3/backend/union/upstream/000077500000000000000000000000001375552240400174535ustar00rootroot00000000000000rclone-1.53.3/backend/union/upstream/upstream.go000066400000000000000000000202521375552240400216430ustar00rootroot00000000000000package upstream import ( "context" "io" "math" "path" "path/filepath" "strings" "sync" "sync/atomic" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/cache" ) var ( // ErrUsageFieldNotSupported stats the usage field is not supported by the backend ErrUsageFieldNotSupported = errors.New("this usage field is not supported") ) // Fs is a wrap of any fs and its configs type Fs struct { fs.Fs RootFs fs.Fs RootPath string writable bool creatable bool usage *fs.Usage // Cache the usage cacheTime time.Duration // cache duration cacheExpiry int64 // usage cache expiry time cacheMutex sync.RWMutex cacheOnce sync.Once cacheUpdate bool // if the cache is updating } // Directory describes a wrapped Directory // // This is a wrapped Directory which contains the upstream Fs type Directory struct { fs.Directory f *Fs } // Object describes a wrapped Object // // This is a wrapped Object which contains the upstream Fs type Object struct { fs.Object f *Fs } // Entry describe a warpped fs.DirEntry interface with the // information of upstream Fs type Entry interface { fs.DirEntry UpstreamFs() *Fs } // New creates a new Fs based on the // string formatted `type:root_path(:ro/:nc)` func New(remote, root string, cacheTime time.Duration) (*Fs, error) { _, configName, fsPath, err := fs.ParseRemote(remote) if err != nil { return nil, err } f := &Fs{ RootPath: root, writable: true, creatable: true, cacheExpiry: time.Now().Unix(), cacheTime: cacheTime, usage: &fs.Usage{}, } if strings.HasSuffix(fsPath, ":ro") { f.writable = false f.creatable = false fsPath = fsPath[0 : len(fsPath)-3] } else if strings.HasSuffix(fsPath, ":nc") { f.writable = true f.creatable = false fsPath = fsPath[0 : len(fsPath)-3] } if configName != "local" { fsPath = configName + ":" + fsPath } rFs, err := cache.Get(fsPath) if err != nil && err != fs.ErrorIsFile { return nil, err } f.RootFs = rFs rootString := path.Join(fsPath, filepath.ToSlash(root)) myFs, err := cache.Get(rootString) if err != nil && err != fs.ErrorIsFile { return nil, err } f.Fs = myFs cache.PinUntilFinalized(f.Fs, f) return f, err } // WrapDirectory wraps an fs.Directory to include the info // of the upstream Fs func (f *Fs) WrapDirectory(e fs.Directory) *Directory { if e == nil { return nil } return &Directory{ Directory: e, f: f, } } // WrapObject wraps an fs.Object to include the info // of the upstream Fs func (f *Fs) WrapObject(o fs.Object) *Object { if o == nil { return nil } return &Object{ Object: o, f: f, } } // WrapEntry wraps an fs.DirEntry to include the info // of the upstream Fs func (f *Fs) WrapEntry(e fs.DirEntry) (Entry, error) { switch e.(type) { case fs.Object: return f.WrapObject(e.(fs.Object)), nil case fs.Directory: return f.WrapDirectory(e.(fs.Directory)), nil default: return nil, errors.Errorf("unknown object type %T", e) } } // UpstreamFs get the upstream Fs the entry is stored in func (e *Directory) UpstreamFs() *Fs { return e.f } // UpstreamFs get the upstream Fs the entry is stored in func (o *Object) UpstreamFs() *Fs { return o.f } // UnWrap returns the Object that this Object is wrapping or // nil if it isn't wrapping anything 
func (o *Object) UnWrap() fs.Object { return o.Object } // IsCreatable return if the fs is allowed to create new objects func (f *Fs) IsCreatable() bool { return f.creatable } // IsWritable return if the fs is allowed to write func (f *Fs) IsWritable() bool { return f.writable } // Put in to the remote path with the modTime given of the given size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { o, err := f.Fs.Put(ctx, in, src, options...) if err != nil { return o, err } f.cacheMutex.Lock() defer f.cacheMutex.Unlock() size := src.Size() if f.usage.Used != nil { *f.usage.Used += size } if f.usage.Free != nil { *f.usage.Free -= size } if f.usage.Objects != nil { *f.usage.Objects++ } return o, nil } // PutStream uploads to the remote path with the modTime given of indeterminate size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { do := f.Features().PutStream if do == nil { return nil, fs.ErrorNotImplemented } o, err := do(ctx, in, src, options...) if err != nil { return o, err } f.cacheMutex.Lock() defer f.cacheMutex.Unlock() size := o.Size() if f.usage.Used != nil { *f.usage.Used += size } if f.usage.Free != nil { *f.usage.Free -= size } if f.usage.Objects != nil { *f.usage.Objects++ } return o, nil } // Update in to the object with the modTime given of the given size // // When called from outside an Fs by rclone, src.Size() will always be >= 0. // But for unknown-sized objects (indicated by src.Size() == -1), Upload should either // return an error or update the object properly (rather than e.g. calling panic). func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { size := o.Size() err := o.Object.Update(ctx, in, src, options...) 
if err != nil { return err } o.f.cacheMutex.Lock() defer o.f.cacheMutex.Unlock() delta := o.Size() - size if delta <= 0 { return nil } if o.f.usage.Used != nil { *o.f.usage.Used += delta } if o.f.usage.Free != nil { *o.f.usage.Free -= delta } return nil } // About gets quota information from the Fs func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { if atomic.LoadInt64(&f.cacheExpiry) <= time.Now().Unix() { err := f.updateUsage() if err != nil { return nil, ErrUsageFieldNotSupported } } f.cacheMutex.RLock() defer f.cacheMutex.RUnlock() return f.usage, nil } // GetFreeSpace gets the free space of the fs func (f *Fs) GetFreeSpace() (int64, error) { if atomic.LoadInt64(&f.cacheExpiry) <= time.Now().Unix() { err := f.updateUsage() if err != nil { return math.MaxInt64, ErrUsageFieldNotSupported } } f.cacheMutex.RLock() defer f.cacheMutex.RUnlock() if f.usage.Free == nil { return math.MaxInt64, ErrUsageFieldNotSupported } return *f.usage.Free, nil } // GetUsedSpace gets the used space of the fs func (f *Fs) GetUsedSpace() (int64, error) { if atomic.LoadInt64(&f.cacheExpiry) <= time.Now().Unix() { err := f.updateUsage() if err != nil { return 0, ErrUsageFieldNotSupported } } f.cacheMutex.RLock() defer f.cacheMutex.RUnlock() if f.usage.Used == nil { return 0, ErrUsageFieldNotSupported } return *f.usage.Used, nil } // GetNumObjects gets the number of objects of the fs func (f *Fs) GetNumObjects() (int64, error) { if atomic.LoadInt64(&f.cacheExpiry) <= time.Now().Unix() { err := f.updateUsage() if err != nil { return 0, ErrUsageFieldNotSupported } } f.cacheMutex.RLock() defer f.cacheMutex.RUnlock() if f.usage.Objects == nil { return 0, ErrUsageFieldNotSupported } return *f.usage.Objects, nil } func (f *Fs) updateUsage() (err error) { if do := f.RootFs.Features().About; do == nil { return ErrUsageFieldNotSupported } done := false f.cacheOnce.Do(func() { f.cacheMutex.Lock() err = f.updateUsageCore(false) f.cacheMutex.Unlock() done = true }) if done { return err } if !f.cacheUpdate { f.cacheUpdate = true go func() { _ = f.updateUsageCore(true) f.cacheUpdate = false }() } return nil } func (f *Fs) updateUsageCore(lock bool) error { // Run in background, should not be cancelled by user ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second) defer cancel() usage, err := f.RootFs.Features().About(ctx) if err != nil { f.cacheUpdate = false if errors.Cause(err) == fs.ErrorDirNotFound { err = nil } return err } if lock { f.cacheMutex.Lock() defer f.cacheMutex.Unlock() } // Store usage atomic.StoreInt64(&f.cacheExpiry, time.Now().Add(f.cacheTime).Unix()) f.usage = usage return nil } rclone-1.53.3/backend/webdav/000077500000000000000000000000001375552240400157335ustar00rootroot00000000000000rclone-1.53.3/backend/webdav/api/000077500000000000000000000000001375552240400165045ustar00rootroot00000000000000rclone-1.53.3/backend/webdav/api/types.go000066400000000000000000000145751375552240400202110ustar00rootroot00000000000000// Package api has type definitions for webdav package api import ( "encoding/xml" "regexp" "strconv" "strings" "sync" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" ) const ( // Wed, 27 Sep 2017 14:28:34 GMT timeFormat = time.RFC1123 // The same as time.RFC1123 with optional leading zeros on the date // see https://github.com/rclone/rclone/issues/2574 noZerosRFC1123 = "Mon, _2 Jan 2006 15:04:05 MST" ) // Multistatus contains responses returned from an HTTP 207 return code type Multistatus struct { Responses []Response `xml:"response"` } // Response
contains an Href the response it about and its properties type Response struct { Href string `xml:"href"` Props Prop `xml:"propstat"` } // Prop is the properties of a response // // This is a lazy way of decoding the multiple in the // response. // // The response might look like this // // // /remote.php/webdav/Nextcloud%20Manual.pdf // // // Tue, 19 Dec 2017 22:02:36 GMT // 4143665 // // "048d7be4437ff7deeae94db50ff3e209" // application/pdf // // HTTP/1.1 200 OK // // // // // // // HTTP/1.1 404 Not Found // // // // So we elide the array of and within that the array of // into one struct. // // Note that status collects all the status values for which we just // check the first is OK. type Prop struct { Status []string `xml:"DAV: status"` Name string `xml:"DAV: prop>displayname,omitempty"` Type *xml.Name `xml:"DAV: prop>resourcetype>collection,omitempty"` IsCollection *string `xml:"DAV: prop>iscollection,omitempty"` // this is a Microsoft extension see #2716 Size int64 `xml:"DAV: prop>getcontentlength,omitempty"` Modified Time `xml:"DAV: prop>getlastmodified,omitempty"` Checksums []string `xml:"prop>checksums>checksum,omitempty"` } // Parse a status of the form "HTTP/1.1 200 OK" or "HTTP/1.1 200" var parseStatus = regexp.MustCompile(`^HTTP/[0-9.]+\s+(\d+)`) // StatusOK examines the Status and returns an OK flag func (p *Prop) StatusOK() bool { // Assume OK if no statuses received if len(p.Status) == 0 { return true } match := parseStatus.FindStringSubmatch(p.Status[0]) if len(match) < 2 { return false } code, err := strconv.Atoi(match[1]) if err != nil { return false } if code >= 200 && code < 300 { return true } return false } // Hashes returns a map of all checksums - may be nil func (p *Prop) Hashes() (hashes map[hash.Type]string) { if len(p.Checksums) == 0 { return nil } hashes = make(map[hash.Type]string) for _, checksums := range p.Checksums { checksums = strings.ToLower(checksums) for _, checksum := range strings.Split(checksums, " ") { switch { case strings.HasPrefix(checksum, "sha1:"): hashes[hash.SHA1] = checksum[5:] case strings.HasPrefix(checksum, "md5:"): hashes[hash.MD5] = checksum[4:] } } } return hashes } // PropValue is a tagged name and value type PropValue struct { XMLName xml.Name `xml:""` Value string `xml:",chardata"` } // Error is used to describe webdav errors // // // Sabre\DAV\Exception\NotFound // File with name Photo could not be located // type Error struct { Exception string `xml:"exception,omitempty"` Message string `xml:"message,omitempty"` Status string StatusCode int } // Error returns a string for the error and satisfies the error interface func (e *Error) Error() string { var out []string if e.Message != "" { out = append(out, e.Message) } if e.Exception != "" { out = append(out, e.Exception) } if e.Status != "" { out = append(out, e.Status) } if len(out) == 0 { return "Webdav Error" } return strings.Join(out, ": ") } // Time represents date and time information for the // webdav API marshalling to and from timeFormat type Time time.Time // MarshalXML turns a Time into XML func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error { timeString := (*time.Time)(t).Format(timeFormat) return e.EncodeElement(timeString, start) } // Possible time formats to parse the time with var timeFormats = []string{ timeFormat, // Wed, 27 Sep 2017 14:28:34 GMT (as per RFC) time.RFC1123Z, // Fri, 05 Jan 2018 14:14:38 +0000 (as used by mydrive.ch) time.UnixDate, // Wed May 17 15:31:58 UTC 2017 (as used in an internal server) noZerosRFC1123, // Fri, 7 Sep 
2018 08:49:58 GMT (as used by server in #2574) time.RFC3339, // Wed, 31 Oct 2018 13:57:11 CET (as used by komfortcloud.de) } var oneTimeError sync.Once // UnmarshalXML turns XML into a Time func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error { var v string err := d.DecodeElement(&v, &start) if err != nil { return err } // If time is missing then return the epoch if v == "" { *t = Time(time.Unix(0, 0)) return nil } // Parse the time format in multiple possible ways var newT time.Time for _, timeFormat := range timeFormats { newT, err = time.Parse(timeFormat, v) if err == nil { *t = Time(newT) break } } if err != nil { oneTimeError.Do(func() { fs.Errorf(nil, "Failed to parse time %q - using the epoch", v) }) // Return the epoch instead *t = Time(time.Unix(0, 0)) // ignore error err = nil } return err } // Quota is used to read the bytes used and available // // // // /remote.php/webdav/ // // // -3 // 376461895 // // HTTP/1.1 200 OK // // // type Quota struct { Available string `xml:"DAV: response>propstat>prop>quota-available-bytes"` Used string `xml:"DAV: response>propstat>prop>quota-used-bytes"` } rclone-1.53.3/backend/webdav/odrvcookie/000077500000000000000000000000001375552240400200775ustar00rootroot00000000000000rclone-1.53.3/backend/webdav/odrvcookie/fetch.go000066400000000000000000000150561375552240400215260ustar00rootroot00000000000000// Package odrvcookie can fetch authentication cookies for a sharepoint webdav endpoint package odrvcookie import ( "bytes" "context" "encoding/xml" "fmt" "html/template" "net/http" "net/http/cookiejar" "net/url" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/fshttp" "golang.org/x/net/publicsuffix" ) // CookieAuth hold the authentication information // These are username and password as well as the authentication endpoint type CookieAuth struct { user string pass string endpoint string } // CookieResponse contains the requested cookies type CookieResponse struct { RtFa http.Cookie FedAuth http.Cookie } // SharepointSuccessResponse holds a response from a successful microsoft login type SharepointSuccessResponse struct { XMLName xml.Name `xml:"Envelope"` Body SuccessResponseBody `xml:"Body"` } // SuccessResponseBody is the body of a successful response, it holds the token type SuccessResponseBody struct { XMLName xml.Name Type string `xml:"RequestSecurityTokenResponse>TokenType"` Created time.Time `xml:"RequestSecurityTokenResponse>Lifetime>Created"` Expires time.Time `xml:"RequestSecurityTokenResponse>Lifetime>Expires"` Token string `xml:"RequestSecurityTokenResponse>RequestedSecurityToken>BinarySecurityToken"` } // SharepointError holds an error response microsoft login type SharepointError struct { XMLName xml.Name `xml:"Envelope"` Body ErrorResponseBody `xml:"Body"` } func (e *SharepointError) Error() string { return fmt.Sprintf("%s: %s (%s)", e.Body.FaultCode, e.Body.Reason, e.Body.Detail) } // ErrorResponseBody contains the body of an erroneous response type ErrorResponseBody struct { XMLName xml.Name FaultCode string `xml:"Fault>Code>Subcode>Value"` Reason string `xml:"Fault>Reason>Text"` Detail string `xml:"Fault>Detail>error>internalerror>text"` } // reqString is a template that gets populated with the user data in order to retrieve a "BinarySecurityToken" const reqString = ` http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue http://www.w3.org/2005/08/addressing/anonymous https://login.microsoftonline.com/extSTS.srf {{ .Username }} {{ .Password }} {{ .Address }} 
http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey http://schemas.xmlsoap.org/ws/2005/02/trust/Issue urn:oasis:names:tc:SAML:1.0:assertion ` // New creates a new CookieAuth struct func New(pUser, pPass, pEndpoint string) CookieAuth { retStruct := CookieAuth{ user: pUser, pass: pPass, endpoint: pEndpoint, } return retStruct } // Cookies creates a CookieResponse. It fetches the auth token and then // retrieves the Cookies func (ca *CookieAuth) Cookies(ctx context.Context) (*CookieResponse, error) { tokenResp, err := ca.getSPToken(ctx) if err != nil { return nil, err } return ca.getSPCookie(tokenResp) } func (ca *CookieAuth) getSPCookie(conf *SharepointSuccessResponse) (*CookieResponse, error) { spRoot, err := url.Parse(ca.endpoint) if err != nil { return nil, errors.Wrap(err, "Error while constructing endpoint URL") } u, err := url.Parse("https://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0") if err != nil { return nil, errors.Wrap(err, "Error while constructing login URL") } // To authenticate with davfs or anything else we need two cookies (rtFa and FedAuth) // In order to get them we use the token we got earlier and a cookieJar jar, err := cookiejar.New(&cookiejar.Options{PublicSuffixList: publicsuffix.List}) if err != nil { return nil, err } client := &http.Client{ Jar: jar, } // Send the previously acquired Token as a Post parameter if _, err = client.Post(u.String(), "text/xml", strings.NewReader(conf.Body.Token)); err != nil { return nil, errors.Wrap(err, "Error while grabbing cookies from endpoint: %v") } cookieResponse := CookieResponse{} for _, cookie := range jar.Cookies(u) { if (cookie.Name == "rtFa") || (cookie.Name == "FedAuth") { switch cookie.Name { case "rtFa": cookieResponse.RtFa = *cookie case "FedAuth": cookieResponse.FedAuth = *cookie } } } return &cookieResponse, nil } func (ca *CookieAuth) getSPToken(ctx context.Context) (conf *SharepointSuccessResponse, err error) { reqData := map[string]interface{}{ "Username": ca.user, "Password": ca.pass, "Address": ca.endpoint, } t := template.Must(template.New("authXML").Parse(reqString)) buf := &bytes.Buffer{} if err := t.Execute(buf, reqData); err != nil { return nil, errors.Wrap(err, "Error while filling auth token template") } // Create and execute the first request which returns an auth token for the sharepoint service // With this token we can authenticate on the login page and save the returned cookies req, err := http.NewRequest("POST", "https://login.microsoftonline.com/extSTS.srf", buf) if err != nil { return nil, err } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext client := fshttp.NewClient(fs.Config) resp, err := client.Do(req) if err != nil { return nil, errors.Wrap(err, "Error while logging in to endpoint") } defer fs.CheckClose(resp.Body, &err) respBuf := bytes.Buffer{} _, err = respBuf.ReadFrom(resp.Body) if err != nil { return nil, err } s := respBuf.Bytes() conf = &SharepointSuccessResponse{} err = xml.Unmarshal(s, conf) if conf.Body.Token == "" { // xml Unmarshal won't fail if the response doesn't contain a token // However, the token will be empty sErr := &SharepointError{} errSErr := xml.Unmarshal(s, sErr) if errSErr == nil { return nil, sErr } } if err != nil { return nil, errors.Wrap(err, "Error while reading endpoint response") } return } rclone-1.53.3/backend/webdav/odrvcookie/renew.go000066400000000000000000000010061375552240400215430ustar00rootroot00000000000000package odrvcookie import ( "time" ) // CookieRenew holds information for the renew type CookieRenew struct 
{ timer *time.Ticker renewFn func() } // NewRenew returns and starts a CookieRenew func NewRenew(interval time.Duration, renewFn func()) *CookieRenew { renew := CookieRenew{ timer: time.NewTicker(interval), renewFn: renewFn, } go renew.Renew() return &renew } // Renew calls the renewFn for every tick func (c *CookieRenew) Renew() { for { <-c.timer.C // wait for tick c.renewFn() } } rclone-1.53.3/backend/webdav/webdav.go000066400000000000000000001037031375552240400175360ustar00rootroot00000000000000// Package webdav provides an interface to the Webdav // object storage system. package webdav // SetModTime might be possible // https://stackoverflow.com/questions/3579608/webdav-can-a-client-modify-the-mtime-of-a-file // ...support for a PROPSET to lastmodified (mind the missing get) which does the utime() call might be an option. // For example the ownCloud WebDAV server does it that way. import ( "bytes" "context" "encoding/xml" "fmt" "io" "net/http" "net/url" "os/exec" "path" "strconv" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/webdav/api" "github.com/rclone/rclone/backend/webdav/odrvcookie" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/rest" ) const ( minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second decayConstant = 2 // bigger for slower decay, exponential defaultDepth = "1" // depth for PROPFIND ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "webdav", Description: "Webdav", NewFs: NewFs, Options: []fs.Option{{ Name: "url", Help: "URL of http host to connect to", Required: true, Examples: []fs.OptionExample{{ Value: "https://example.com", Help: "Connect to example.com", }}, }, { Name: "vendor", Help: "Name of the Webdav site/service/software you are using", Examples: []fs.OptionExample{{ Value: "nextcloud", Help: "Nextcloud", }, { Value: "owncloud", Help: "Owncloud", }, { Value: "sharepoint", Help: "Sharepoint", }, { Value: "other", Help: "Other site/service or software", }}, }, { Name: "user", Help: "User name", }, { Name: "pass", Help: "Password.", IsPassword: true, }, { Name: "bearer_token", Help: "Bearer token instead of user/pass (eg a Macaroon)", }, { Name: "bearer_token_command", Help: "Command to run to get a bearer token", Advanced: true, }}, }) } // Options defines the configuration for this backend type Options struct { URL string `config:"url"` Vendor string `config:"vendor"` User string `config:"user"` Pass string `config:"pass"` BearerToken string `config:"bearer_token"` BearerTokenCommand string `config:"bearer_token_command"` } // Fs represents a remote webdav type Fs struct { name string // name of this remote root string // the path we are working on opt Options // parsed options features *fs.Features // optional features endpoint *url.URL // URL of the host endpointURL string // endpoint as a string srv *rest.Client // the connection to the one drive server pacer *fs.Pacer // pacer for API calls precision time.Duration // mod time precision canStream bool // set if can stream useOCMtime bool // set if can use X-OC-Mtime retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default) hasMD5 bool // set if can use owncloud style checksums for MD5 hasSHA1 bool // set if can use 
owncloud style checksums for SHA1 } // Object describes a webdav object // // Will definitely have info but maybe not meta type Object struct { fs *Fs // what this object is part of remote string // The remote path hasMetaData bool // whether info below has been set size int64 // size of the object modTime time.Time // modification time of the object sha1 string // SHA-1 of the object content if known md5 string // MD5 of the object content if known } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("webdav root '%s'", f.root) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 423, // Locked 429, // Too Many Requests. 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) { // If we have a bearer token command and it has expired then refresh it if f.opt.BearerTokenCommand != "" && resp != nil && resp.StatusCode == 401 { fs.Debugf(f, "Bearer token expired: %v", err) authErr := f.fetchAndSetBearerToken() if authErr != nil { err = authErr } return true, err } return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // itemIsDir returns true if the item is a directory // // When a client sees a resourcetype it doesn't recognize it should // assume it is a regular non-collection resource. [WebDav book by // Lisa Dusseault ch 7.5.8 p170] func itemIsDir(item *api.Response) bool { if t := item.Props.Type; t != nil { if t.Space == "DAV:" && t.Local == "collection" { return true } fs.Debugf(nil, "Unknown resource type %q/%q on %q", t.Space, t.Local, item.Props.Name) } // the iscollection prop is a Microsoft extension, but if present it is a reliable indicator // if the above check failed - see #2716. This can be an integer or a boolean - see #2964 if t := item.Props.IsCollection; t != nil { switch x := strings.ToLower(*t); x { case "0", "false": return false case "1", "true": return true default: fs.Debugf(nil, "Unknown value %q for IsCollection", x) } } return false } // readMetaDataForPath reads the metadata from the path func (f *Fs) readMetaDataForPath(ctx context.Context, path string, depth string) (info *api.Prop, err error) { // FIXME how do we read back additional properties? 
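	// The request assembled below is a plain WebDAV PROPFIND (RFC 4918).
	// As an illustrative sketch (the path is made up), the wire request is
	//
	//	PROPFIND /remote.php/webdav/file.txt HTTP/1.1
	//	Depth: 1
	//
	// and the server answers with a 207 Multistatus body which CallXML
	// decodes into api.Multistatus below.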
opts := rest.Opts{ Method: "PROPFIND", Path: f.filePath(path), ExtraHeaders: map[string]string{ "Depth": depth, }, NoRedirect: true, } if f.hasMD5 || f.hasSHA1 { opts.Body = bytes.NewBuffer(owncloudProps) } var result api.Multistatus var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if apiErr, ok := err.(*api.Error); ok { // does not exist switch apiErr.StatusCode { case http.StatusNotFound: if f.retryWithZeroDepth && depth != "0" { return f.readMetaDataForPath(ctx, path, "0") } return nil, fs.ErrorObjectNotFound case http.StatusMovedPermanently, http.StatusFound, http.StatusSeeOther: // Some sort of redirect - go doesn't deal with these properly (it resets // the method to GET). However we can assume that if it was redirected the // object was not found. return nil, fs.ErrorObjectNotFound } } if err != nil { return nil, errors.Wrap(err, "read metadata failed") } if len(result.Responses) < 1 { return nil, fs.ErrorObjectNotFound } item := result.Responses[0] if !item.Props.StatusOK() { return nil, fs.ErrorObjectNotFound } if itemIsDir(&item) { return nil, fs.ErrorNotAFile } return &item.Props, nil } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { body, err := rest.ReadBody(resp) if err != nil { return errors.Wrap(err, "error when trying to read error from body") } // Decode error response errResponse := new(api.Error) err = xml.Unmarshal(body, &errResponse) if err != nil { // set the Message to be the body if can't parse the XML errResponse.Message = strings.TrimSpace(string(body)) } errResponse.Status = resp.Status errResponse.StatusCode = resp.StatusCode return errResponse } // addSlash makes sure s is terminated with a / if non empty func addSlash(s string) string { if s != "" && !strings.HasSuffix(s, "/") { s += "/" } return s } // filePath returns a file path (f.root, file) func (f *Fs) filePath(file string) string { return rest.URLPathEscape(path.Join(f.root, file)) } // dirPath returns a directory path (f.root, dir) func (f *Fs) dirPath(dir string) string { return addSlash(f.filePath(dir)) } // filePath returns a file path (f.root, remote) func (o *Object) filePath() string { return o.fs.filePath(o.remote) } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.Background() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } rootIsDir := strings.HasSuffix(root, "/") root = strings.Trim(root, "/") if !strings.HasSuffix(opt.URL, "/") { opt.URL += "/" } if opt.Pass != "" { var err error opt.Pass, err = obscure.Reveal(opt.Pass) if err != nil { return nil, errors.Wrap(err, "couldn't decrypt password") } } if opt.Vendor == "" { opt.Vendor = "other" } root = strings.Trim(root, "/") // Parse the endpoint u, err := url.Parse(opt.URL) if err != nil { return nil, err } f := &Fs{ name: name, root: root, opt: *opt, endpoint: u, endpointURL: u.String(), srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), precision: fs.ModTimeNotSupported, } f.features = (&fs.Features{ CanHaveEmptyDirectories: true, }).Fill(f) if opt.User != "" || opt.Pass != "" { f.srv.SetUserPass(opt.User, opt.Pass) } else if opt.BearerToken != "" { 
f.setBearerToken(opt.BearerToken) } else if f.opt.BearerTokenCommand != "" { err = f.fetchAndSetBearerToken() if err != nil { return nil, err } } f.srv.SetErrorHandler(errorHandler) err = f.setQuirks(ctx, opt.Vendor) if err != nil { return nil, err } f.srv.SetHeader("Referer", u.String()) if root != "" && !rootIsDir { // Check to see if the root actually an existing file remote := path.Base(root) f.root = path.Dir(root) if f.root == "." { f.root = "" } _, err := f.NewObject(ctx, remote) if err != nil { if errors.Cause(err) == fs.ErrorObjectNotFound || errors.Cause(err) == fs.ErrorNotAFile { // File doesn't exist so return old f f.root = root return f, nil } return nil, err } // return an error with an fs which points to the parent return f, fs.ErrorIsFile } return f, nil } // sets the BearerToken up func (f *Fs) setBearerToken(token string) { f.opt.BearerToken = token f.srv.SetHeader("Authorization", "Bearer "+token) } // fetch the bearer token using the command func (f *Fs) fetchBearerToken(cmd string) (string, error) { var ( args = strings.Split(cmd, " ") stdout bytes.Buffer stderr bytes.Buffer c = exec.Command(args[0], args[1:]...) ) c.Stdout = &stdout c.Stderr = &stderr var ( err = c.Run() stdoutString = strings.TrimSpace(stdout.String()) stderrString = strings.TrimSpace(stderr.String()) ) if err != nil { if stderrString == "" { stderrString = stdoutString } return "", errors.Wrapf(err, "failed to get bearer token using %q: %s", f.opt.BearerTokenCommand, stderrString) } return stdoutString, nil } // fetch the bearer token and set it if successful func (f *Fs) fetchAndSetBearerToken() error { if f.opt.BearerTokenCommand == "" { return nil } token, err := f.fetchBearerToken(f.opt.BearerTokenCommand) if err != nil { return err } f.setBearerToken(token) return nil } // setQuirks adjusts the Fs for the vendor passed in func (f *Fs) setQuirks(ctx context.Context, vendor string) error { switch vendor { case "owncloud": f.canStream = true f.precision = time.Second f.useOCMtime = true f.hasMD5 = true f.hasSHA1 = true case "nextcloud": f.precision = time.Second f.useOCMtime = true f.hasSHA1 = true case "sharepoint": // To mount sharepoint, two Cookies are required // They have to be set instead of BasicAuth f.srv.RemoveHeader("Authorization") // We don't need this Header if using cookies spCk := odrvcookie.New(f.opt.User, f.opt.Pass, f.endpointURL) spCookies, err := spCk.Cookies(ctx) if err != nil { return err } odrvcookie.NewRenew(12*time.Hour, func() { spCookies, err := spCk.Cookies(ctx) if err != nil { fs.Errorf("could not renew cookies: %s", err.Error()) return } f.srv.SetCookie(&spCookies.FedAuth, &spCookies.RtFa) fs.Debugf(spCookies, "successfully renewed sharepoint cookies") }) f.srv.SetCookie(&spCookies.FedAuth, &spCookies.RtFa) // sharepoint, unlike the other vendors, only lists files if the depth header is set to 0 // however, rclone defaults to 1 since it provides recursive directory listing // to determine if we may have found a file, the request has to be resent // with the depth set to 0 f.retryWithZeroDepth = true case "other": default: fs.Debugf(f, "Unknown vendor %q", vendor) } // Remove PutStream from optional features if !f.canStream { f.features.PutStream = nil } return nil } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound. 
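// fetchTokenSketch is an illustrative reduction (not part of this file's
// API) of the bearer_token_command pattern implemented by fetchBearerToken
// above: run the configured command and treat its trimmed stdout as the
// token. The real method also captures stderr to build better error
// messages.
func fetchTokenSketch(cmd string) (string, error) {
	args := strings.Split(cmd, " ")
	out, err := exec.Command(args[0], args[1:]...).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}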
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Prop) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { // Set info err = o.setMetaData(info) } else { err = o.readMetaData(ctx) // reads info and meta, returning an error } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // Read the normal props, plus the checksums // // SHA1:f572d396fae9206628714fb2ce00f72e94f2258f MD5:b1946ac92492d2347c6235b4d2611184 ADLER32:084b021f var owncloudProps = []byte(` `) // list the objects into the function supplied // // If directories is set it only sends directories // User function to process a File item from listAll // // Should return true to finish processing type listAllFn func(string, bool, *api.Prop) bool // Lists the directory required calling the user function on each item found // // If the user fn ever returns true then it early exits with found = true func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, filesOnly bool, depth string, fn listAllFn) (found bool, err error) { opts := rest.Opts{ Method: "PROPFIND", Path: f.dirPath(dir), // FIXME Should not start with / ExtraHeaders: map[string]string{ "Depth": depth, }, } if f.hasMD5 || f.hasSHA1 { opts.Body = bytes.NewBuffer(owncloudProps) } var result api.Multistatus var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) if err != nil { if apiErr, ok := err.(*api.Error); ok { // does not exist if apiErr.StatusCode == http.StatusNotFound { if f.retryWithZeroDepth && depth != "0" { return f.listAll(ctx, dir, directoriesOnly, filesOnly, "0", fn) } return found, fs.ErrorDirNotFound } } return found, errors.Wrap(err, "couldn't list files") } //fmt.Printf("result = %#v", &result) baseURL, err := rest.URLJoin(f.endpoint, opts.Path) if err != nil { return false, errors.Wrap(err, "couldn't join URL") } for i := range result.Responses { item := &result.Responses[i] isDir := itemIsDir(item) // Find name u, err := rest.URLJoin(baseURL, item.Href) if err != nil { fs.Errorf(nil, "URL Join failed for %q and %q: %v", baseURL, item.Href, err) continue } // Make sure directories end with a / if isDir { u.Path = addSlash(u.Path) } if !strings.HasPrefix(u.Path, baseURL.Path) { fs.Debugf(nil, "Item with unknown path received: %q, %q", u.Path, baseURL.Path) continue } remote := path.Join(dir, u.Path[len(baseURL.Path):]) if strings.HasSuffix(remote, "/") { remote = remote[:len(remote)-1] } // the listing contains info about itself which we ignore if remote == dir { continue } // Check OK if !item.Props.StatusOK() { fs.Debugf(remote, "Ignoring item with bad status %q", item.Props.Status) continue } if isDir { if filesOnly { continue } } else { if directoriesOnly { continue } } // item.Name = restoreReservedChars(item.Name) if fn(remote, isDir, &item.Props) { found = true break } } return } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. 
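// hrefToRemote is an illustrative sketch (not called by the code here) of
// the name computation inside listAll above: the item's href, already
// joined onto the endpoint, has the listing's base path stripped and any
// trailing slash removed to give the rclone remote name. It assumes
// itemPath has basePath as a prefix, which listAll checks first.
func hrefToRemote(dir, basePath, itemPath string) string {
	remote := path.Join(dir, itemPath[len(basePath):])
	return strings.TrimSuffix(remote, "/")
}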
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { var iErr error _, err = f.listAll(ctx, dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool { if isDir { d := fs.NewDir(remote, time.Time(info.Modified)) // .SetID(info.ID) // FIXME more info from dir? can set size, items? entries = append(entries, d) } else { o, err := f.newObjectWithInfo(ctx, remote, info) if err != nil { iErr = err return true } entries = append(entries, o) } return false }) if err != nil { return nil, err } if iErr != nil { return nil, iErr } return entries, nil } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Used to create new objects func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object) { // Temporary Object under construction o = &Object{ fs: f, remote: remote, size: size, modTime: modTime, } return o } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { o := f.createObject(src.Remote(), src.ModTime(ctx), src.Size()) return o, o.Update(ctx, in, src, options...) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) } // mkParentDir makes the parent of the native path dirPath if // necessary and any directories above that func (f *Fs) mkParentDir(ctx context.Context, dirPath string) (err error) { // defer log.Trace(dirPath, "")("err=%v", &err) // chop off trailing / if it exists if strings.HasSuffix(dirPath, "/") { dirPath = dirPath[:len(dirPath)-1] } parent := path.Dir(dirPath) if parent == "." { parent = "" } return f.mkdir(ctx, parent) } // _dirExists - list dirPath to see if it exists // // dirPath should be a native path ending in a / func (f *Fs) _dirExists(ctx context.Context, dirPath string) (exists bool) { opts := rest.Opts{ Method: "PROPFIND", Path: dirPath, ExtraHeaders: map[string]string{ "Depth": "0", }, } var result api.Multistatus var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &result) return f.shouldRetry(resp, err) }) return err == nil } // low level mkdir, only makes the directory, doesn't attempt to create parents func (f *Fs) _mkdir(ctx context.Context, dirPath string) error { // We assume the root is already created if dirPath == "" { return nil } // Collections must end with / if !strings.HasSuffix(dirPath, "/") { dirPath += "/" } opts := rest.Opts{ Method: "MKCOL", Path: dirPath, NoResponse: true, } err := f.pacer.Call(func() (bool, error) { resp, err := f.srv.Call(ctx, &opts) return f.shouldRetry(resp, err) }) if apiErr, ok := err.(*api.Error); ok { // Check if it already exists. The response code for this isn't // defined in the RFC so the implementations vary wildly. // // owncloud returns 423/StatusLocked if the create is already in progress if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable || apiErr.StatusCode == http.StatusLocked { return nil } // 4shared returns a 409/StatusConflict here which clashes // horribly with the intermediate paths don't exist meaning. So // check to see if actually exists. 
This will correct other // error codes too. if f._dirExists(ctx, dirPath) { return nil } } return err } // mkdir makes the directory and parents using native paths func (f *Fs) mkdir(ctx context.Context, dirPath string) (err error) { // defer log.Trace(dirPath, "")("err=%v", &err) err = f._mkdir(ctx, dirPath) if apiErr, ok := err.(*api.Error); ok { // parent does not exist so create it first then try again if apiErr.StatusCode == http.StatusConflict { err = f.mkParentDir(ctx, dirPath) if err == nil { err = f._mkdir(ctx, dirPath) } } } return err } // Mkdir creates the directory if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { dirPath := f.dirPath(dir) return f.mkdir(ctx, dirPath) } // dirNotEmpty returns true if the directory exists and is not Empty // // if the directory does not exist then err will be ErrorDirNotFound func (f *Fs) dirNotEmpty(ctx context.Context, dir string) (found bool, err error) { return f.listAll(ctx, dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool { return true }) } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { if check { notEmpty, err := f.dirNotEmpty(ctx, dir) if err != nil { return err } if notEmpty { return fs.ErrorDirectoryNotEmpty } } opts := rest.Opts{ Method: "DELETE", Path: f.dirPath(dir), NoResponse: true, } var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, nil) return f.shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "rmdir failed") } // FIXME parse Multistatus response return nil } // Rmdir deletes the root folder // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Precision return the precision of this Fs func (f *Fs) Precision() time.Duration { return f.precision } // Copy or Move src to this remote using server side copy operations. 
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy/fs.ErrorCantMove func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, method string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") if method == "COPY" { return nil, fs.ErrorCantCopy } return nil, fs.ErrorCantMove } dstPath := f.filePath(remote) err := f.mkParentDir(ctx, dstPath) if err != nil { return nil, errors.Wrap(err, "Copy mkParentDir failed") } destinationURL, err := rest.URLJoin(f.endpoint, dstPath) if err != nil { return nil, errors.Wrap(err, "copyOrMove couldn't join URL") } var resp *http.Response opts := rest.Opts{ Method: method, Path: srcObj.filePath(), NoResponse: true, ExtraHeaders: map[string]string{ "Destination": destinationURL.String(), "Overwrite": "F", }, } if f.useOCMtime { opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix()) } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return f.shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "Copy call failed") } dstObj, err := f.NewObject(ctx, remote) if err != nil { return nil, errors.Wrap(err, "Copy NewObject failed") } return dstObj, nil } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { return f.copyOrMove(ctx, src, remote, "COPY") } // Purge deletes all the files in the directory // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { return f.copyOrMove(ctx, src, remote, "MOVE") } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. 
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcPath := srcFs.filePath(srcRemote) dstPath := f.filePath(dstRemote) // Check if destination exists _, err := f.dirNotEmpty(ctx, dstRemote) if err == nil { return fs.ErrorDirExists } if err != fs.ErrorDirNotFound { return errors.Wrap(err, "DirMove dirExists dst failed") } // Make sure the parent directory exists err = f.mkParentDir(ctx, dstPath) if err != nil { return errors.Wrap(err, "DirMove mkParentDir dst failed") } destinationURL, err := rest.URLJoin(f.endpoint, dstPath) if err != nil { return errors.Wrap(err, "DirMove couldn't join URL") } var resp *http.Response opts := rest.Opts{ Method: "MOVE", Path: addSlash(srcPath), NoResponse: true, ExtraHeaders: map[string]string{ "Destination": addSlash(destinationURL.String()), "Overwrite": "F", }, } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return f.shouldRetry(resp, err) }) if err != nil { return errors.Wrap(err, "DirMove MOVE call failed") } return nil } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { hashes := hash.Set(hash.None) if f.hasMD5 { hashes.Add(hash.MD5) } if f.hasSHA1 { hashes.Add(hash.SHA1) } return hashes } // About gets quota information func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { opts := rest.Opts{ Method: "PROPFIND", Path: "", ExtraHeaders: map[string]string{ "Depth": "0", }, } opts.Body = bytes.NewBuffer([]byte(` `)) var q api.Quota var resp *http.Response var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallXML(ctx, &opts, nil, &q) return f.shouldRetry(resp, err) }) if err != nil { return nil, errors.Wrap(err, "about call failed") } usage := &fs.Usage{} if i, err := strconv.ParseInt(q.Used, 10, 64); err == nil && i >= 0 { usage.Used = fs.NewUsageValue(i) } if i, err := strconv.ParseInt(q.Available, 10, 64); err == nil && i >= 0 { usage.Free = fs.NewUsageValue(i) } if usage.Used != nil && usage.Free != nil { usage.Total = fs.NewUsageValue(*usage.Used + *usage.Free) } return usage, nil } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Hash returns the SHA1 or MD5 of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t == hash.MD5 && o.fs.hasMD5 { return o.md5, nil } if t == hash.SHA1 && o.fs.hasSHA1 { return o.sha1, nil } return "", hash.ErrUnsupported } // Size returns the size of an object in bytes func (o *Object) Size() int64 { ctx := context.TODO() err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return 0 } return o.size } // setMetaData sets the metadata from info func (o *Object) setMetaData(info *api.Prop) (err error) { o.hasMetaData = true o.size = info.Size o.modTime = time.Time(info.Modified) if o.fs.hasMD5 || o.fs.hasSHA1 { hashes := info.Hashes() if o.fs.hasSHA1 { o.sha1 = hashes[hash.SHA1] } if o.fs.hasMD5 { o.md5 = hashes[hash.MD5] 
} } return nil } // readMetaData gets the metadata if it hasn't already been fetched // // it also sets the info func (o *Object) readMetaData(ctx context.Context) (err error) { if o.hasMetaData { return nil } info, err := o.fs.readMetaDataForPath(ctx, o.remote, defaultDepth) if err != nil { return err } return o.setMetaData(info) } // ModTime returns the modification time of the object // // It attempts to read the objects mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // SetModTime sets the modification time of the local fs object func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { return fs.ErrorCantSetModTime } // Storable returns a boolean showing whether this object storable func (o *Object) Storable() bool { return true } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { var resp *http.Response opts := rest.Opts{ Method: "GET", Path: o.filePath(), Options: options, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return o.fs.shouldRetry(resp, err) }) if err != nil { return nil, err } return resp.Body, err } // Update the object with the contents of the io.Reader, modTime and size // // If existing is set then it updates the object rather than creating a new one // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { err = o.fs.mkParentDir(ctx, o.filePath()) if err != nil { return errors.Wrap(err, "Update mkParentDir failed") } size := src.Size() var resp *http.Response opts := rest.Opts{ Method: "PUT", Path: o.filePath(), Body: in, NoResponse: true, ContentLength: &size, // FIXME this isn't necessary with owncloud - See https://github.com/nextcloud/nextcloud-snap/issues/365 ContentType: fs.MimeType(ctx, src), Options: options, } if o.fs.useOCMtime || o.fs.hasMD5 || o.fs.hasSHA1 { opts.ExtraHeaders = map[string]string{} if o.fs.useOCMtime { opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix()) } // Set one upload checksum // Owncloud uses one checksum only to check the upload and stores its own SHA1 and MD5 // Nextcloud stores the checksum you supply (SHA1 or MD5) but only stores one if o.fs.hasSHA1 { if sha1, _ := src.Hash(ctx, hash.SHA1); sha1 != "" { opts.ExtraHeaders["OC-Checksum"] = "SHA1:" + sha1 } } if o.fs.hasMD5 && opts.ExtraHeaders["OC-Checksum"] == "" { if md5, _ := src.Hash(ctx, hash.MD5); md5 != "" { opts.ExtraHeaders["OC-Checksum"] = "MD5:" + md5 } } } err = o.fs.pacer.CallNoRetry(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return o.fs.shouldRetry(resp, err) }) if err != nil { // Give the WebDAV server a chance to get its internal state in order after the // error. The error may have been local in which case we closed the connection. // The server may still be dealing with it for a moment. 
A sleep isn't ideal but I // haven't been able to think of a better method to find out if the server has // finished - ncw time.Sleep(1 * time.Second) // Remove failed upload _ = o.Remove(ctx) return err } // read metadata from remote o.hasMetaData = false return o.readMetaData(ctx) } // Remove an object func (o *Object) Remove(ctx context.Context) error { opts := rest.Opts{ Method: "DELETE", Path: o.filePath(), NoResponse: true, } return o.fs.pacer.Call(func() (bool, error) { resp, err := o.fs.srv.Call(ctx, &opts) return o.fs.shouldRetry(resp, err) }) } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.PutStreamer = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Object = (*Object)(nil) ) rclone-1.53.3/backend/webdav/webdav_test.go000066400000000000000000000017161375552240400205760ustar00rootroot00000000000000// Test Webdav filesystem interface package webdav_test import ( "testing" "github.com/rclone/rclone/backend/webdav" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestWebdavNexcloud:", NilObject: (*webdav.Object)(nil), }) } // TestIntegration runs integration tests against the remote func TestIntegration2(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("skipping as -remote is set") } fstests.Run(t, &fstests.Opt{ RemoteName: "TestWebdavOwncloud:", NilObject: (*webdav.Object)(nil), }) } // TestIntegration runs integration tests against the remote func TestIntegration3(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("skipping as -remote is set") } fstests.Run(t, &fstests.Opt{ RemoteName: "TestWebdavRclone:", NilObject: (*webdav.Object)(nil), }) } rclone-1.53.3/backend/yandex/000077500000000000000000000000001375552240400157535ustar00rootroot00000000000000rclone-1.53.3/backend/yandex/api/000077500000000000000000000000001375552240400165245ustar00rootroot00000000000000rclone-1.53.3/backend/yandex/api/types.go000066400000000000000000000076031375552240400202250ustar00rootroot00000000000000package api import ( "fmt" "strings" ) // DiskInfo contains disk metadata type DiskInfo struct { TotalSpace int64 `json:"total_space"` UsedSpace int64 `json:"used_space"` TrashSize int64 `json:"trash_size"` } // ResourceInfoRequestOptions struct type ResourceInfoRequestOptions struct { SortMode *SortMode Limit uint64 Offset uint64 Fields []string } //ResourceInfoResponse struct is returned by the API for metedata requests. type ResourceInfoResponse struct { PublicKey string `json:"public_key"` Name string `json:"name"` Created string `json:"created"` CustomProperties map[string]interface{} `json:"custom_properties"` Preview string `json:"preview"` PublicURL string `json:"public_url"` OriginPath string `json:"origin_path"` Modified string `json:"modified"` Path string `json:"path"` Md5 string `json:"md5"` ResourceType string `json:"type"` MimeType string `json:"mime_type"` Size int64 `json:"size"` Embedded *ResourceListResponse `json:"_embedded"` } // ResourceListResponse struct type ResourceListResponse struct { Sort *SortMode `json:"sort"` PublicKey string `json:"public_key"` Items []ResourceInfoResponse `json:"items"` Path string `json:"path"` Limit *uint64 `json:"limit"` Offset *uint64 `json:"offset"` Total *uint64 `json:"total"` } // AsyncInfo struct is returned by the API for various async operations. 
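// decodeDiskInfo is an illustrative sketch showing that the structs above
// are plain JSON mappings: an API response body unmarshals directly into
// them. Note it assumes "encoding/json" is imported, which this file does
// not otherwise need.
func decodeDiskInfo(body []byte) (DiskInfo, error) {
	var info DiskInfo
	err := json.Unmarshal(body, &info)
	return info, err
}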
type AsyncInfo struct { HRef string `json:"href"` Method string `json:"method"` Templated bool `json:"templated"` } // AsyncStatus is returned when requesting the status of an async operations. Possible values in-progress, success, failure type AsyncStatus struct { Status string `json:"status"` } //CustomPropertyResponse struct we send and is returned by the API for CustomProperty request. type CustomPropertyResponse struct { CustomProperties map[string]interface{} `json:"custom_properties"` } // SortMode struct - sort mode type SortMode struct { mode string } // Default - sort mode func (m *SortMode) Default() *SortMode { return &SortMode{ mode: "", } } // ByName - sort mode func (m *SortMode) ByName() *SortMode { return &SortMode{ mode: "name", } } // ByPath - sort mode func (m *SortMode) ByPath() *SortMode { return &SortMode{ mode: "path", } } // ByCreated - sort mode func (m *SortMode) ByCreated() *SortMode { return &SortMode{ mode: "created", } } // ByModified - sort mode func (m *SortMode) ByModified() *SortMode { return &SortMode{ mode: "modified", } } // BySize - sort mode func (m *SortMode) BySize() *SortMode { return &SortMode{ mode: "size", } } // Reverse - sort mode func (m *SortMode) Reverse() *SortMode { if strings.HasPrefix(m.mode, "-") { return &SortMode{ mode: m.mode[1:], } } return &SortMode{ mode: "-" + m.mode, } } func (m *SortMode) String() string { return m.mode } // UnmarshalJSON sort mode func (m *SortMode) UnmarshalJSON(value []byte) error { if value == nil || len(value) == 0 { m.mode = "" return nil } m.mode = string(value) if strings.HasPrefix(m.mode, "\"") && strings.HasSuffix(m.mode, "\"") { m.mode = m.mode[1 : len(m.mode)-1] } return nil } // ErrorResponse represents erroneous API response. // Implements go's built in `error`. 
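// sortModeExample is an illustrative sketch of the SortMode API defined
// above: modes chain from a zero value, and Reverse toggles the leading
// "-" understood by the REST API's sort parameter.
func sortModeExample() string {
	m := (&SortMode{}).ByModified().Reverse()
	return m.String() // "-modified"
}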
type ErrorResponse struct { ErrorName string `json:"error"` Description string `json:"description"` Message string `json:"message"` StatusCode int `json:""` } func (e *ErrorResponse) Error() string { return fmt.Sprintf("[%d - %s] %s (%s)", e.StatusCode, e.ErrorName, e.Description, e.Message) } rclone-1.53.3/backend/yandex/yandex.go000066400000000000000000000751251375552240400176040ustar00rootroot00000000000000package yandex import ( "context" "encoding/json" "fmt" "io" "log" "net/http" "net/url" "path" "strconv" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/yandex/api" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/rest" "golang.org/x/oauth2" ) //oAuth const ( rcloneClientID = "ac39b43b9eba4cae8ffb788c06d816a8" rcloneEncryptedClientSecret = "EfyyNZ3YUEwXM5yAhi72G9YwKn2mkFrYwJNS7cY0TJAhFlX9K-uJFbGlpO-RYjrJ" rootURL = "https://cloud-api.yandex.com/v1/disk" minSleep = 10 * time.Millisecond maxSleep = 2 * time.Second // may needs to be increased, testing needed decayConstant = 2 // bigger for slower decay, exponential ) // Globals var ( // Description of how to auth for this app oauthConfig = &oauth2.Config{ Endpoint: oauth2.Endpoint{ AuthURL: "https://oauth.yandex.com/authorize", //same as https://oauth.yandex.ru/authorize TokenURL: "https://oauth.yandex.com/token", //same as https://oauth.yandex.ru/token }, ClientID: rcloneClientID, ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), RedirectURL: oauthutil.RedirectURL, } ) // Register with Fs func init() { fs.Register(&fs.RegInfo{ Name: "yandex", Description: "Yandex Disk", NewFs: NewFs, Config: func(name string, m configmap.Mapper) { err := oauthutil.Config("yandex", name, m, oauthConfig, nil) if err != nil { log.Fatalf("Failed to configure token: %v", err) return } }, Options: append(oauthutil.SharedOptions, []fs.Option{{ Name: config.ConfigEncoding, Help: config.ConfigEncodingHelp, Advanced: true, // Of the control characters \t \n \r are allowed // it doesn't seem worth making an exception for this Default: (encoder.Display | encoder.EncodeInvalidUtf8), }}...), }) } // Options defines the configuration for this backend type Options struct { Token string `config:"token"` Enc encoder.MultiEncoder `config:"encoding"` } // Fs represents a remote yandex type Fs struct { name string root string // root path opt Options // parsed options features *fs.Features // optional features srv *rest.Client // the connection to the yandex server pacer *fs.Pacer // pacer for API calls diskRoot string // root path with "disk:/" container name } // Object describes a swift object type Object struct { fs *Fs // what this object is part of remote string // The remote path hasMetaData bool // whether info below has been set md5sum string // The MD5Sum of the object size int64 // Bytes in the object modTime time.Time // Modified time of the object mimeType string // Content type according to the server } // ------------------------------------------------------------ // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f 
*Fs) Root() string { return f.root } // String converts this Fs to a string func (f *Fs) String() string { return fmt.Sprintf("Yandex %s", f.root) } // Precision return the precision of this Fs func (f *Fs) Precision() time.Duration { return time.Nanosecond } // Hashes returns the supported hash sets. func (f *Fs) Hashes() hash.Set { return hash.Set(hash.MD5) } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ 429, // Too Many Requests. 500, // Internal Server Error 502, // Bad Gateway 503, // Service Unavailable 504, // Gateway Timeout 509, // Bandwidth Limit Exceeded } // shouldRetry returns a boolean as to whether this resp and err // deserve to be retried. It returns the err as a convenience func shouldRetry(resp *http.Response, err error) (bool, error) { return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err } // errorHandler parses a non 2xx error response into an error func errorHandler(resp *http.Response) error { // Decode error response errResponse := new(api.ErrorResponse) err := rest.DecodeJSON(resp, &errResponse) if err != nil { fs.Debugf(nil, "Couldn't decode error response: %v", err) } if errResponse.Message == "" { errResponse.Message = resp.Status } if errResponse.StatusCode == 0 { errResponse.StatusCode = resp.StatusCode } return errResponse } // Sets root in f func (f *Fs) setRoot(root string) { //Set root path f.root = strings.Trim(root, "/") //Set disk root path. //Adding "disk:" to root path as all paths on disk start with it var diskRoot string if f.root == "" { diskRoot = "disk:/" } else { diskRoot = "disk:/" + f.root + "/" } f.diskRoot = diskRoot } // filePath returns an escaped file path (f.root, file) func (f *Fs) filePath(file string) string { return path.Join(f.diskRoot, file) } // dirPath returns an escaped file path (f.root, file) ending with '/' func (f *Fs) dirPath(file string) string { return path.Join(f.diskRoot, file) + "/" } func (f *Fs) readMetaDataForPath(ctx context.Context, path string, options *api.ResourceInfoRequestOptions) (*api.ResourceInfoResponse, error) { opts := rest.Opts{ Method: "GET", Path: "/resources", Parameters: url.Values{}, } opts.Parameters.Set("path", f.opt.Enc.FromStandardPath(path)) if options.SortMode != nil { opts.Parameters.Set("sort", options.SortMode.String()) } if options.Limit != 0 { opts.Parameters.Set("limit", strconv.FormatUint(options.Limit, 10)) } if options.Offset != 0 { opts.Parameters.Set("offset", strconv.FormatUint(options.Offset, 10)) } if options.Fields != nil { opts.Parameters.Set("fields", strings.Join(options.Fields, ",")) } var err error var info api.ResourceInfoResponse var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(resp, err) }) if err != nil { return nil, err } info.Name = f.opt.Enc.ToStandardName(info.Name) return &info, nil } // NewFs constructs an Fs from the path, container:path func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { ctx := context.TODO() // Parse config into Options struct opt := new(Options) err := configstruct.Set(m, opt) if err != nil { return nil, err } token, err := oauthutil.GetToken(name, m) if err != nil { log.Fatalf("Couldn't read OAuth token (this should never happen).") } if token.RefreshToken == "" { log.Fatalf("Unable to get RefreshToken. 
If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend.") } if token.TokenType != "OAuth" { token.TokenType = "OAuth" err = oauthutil.PutToken(name, m, token, false) if err != nil { log.Fatalf("Couldn't save OAuth token (this should never happen).") } log.Printf("Automatically upgraded OAuth config.") } oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig) if err != nil { log.Fatalf("Failed to configure Yandex: %v", err) } f := &Fs{ name: name, opt: *opt, srv: rest.NewClient(oAuthClient).SetRoot(rootURL), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), } f.setRoot(root) f.features = (&fs.Features{ ReadMimeType: true, WriteMimeType: true, CanHaveEmptyDirectories: true, }).Fill(f) f.srv.SetErrorHandler(errorHandler) // Check to see if the object exists and is a file //request object meta info if info, err := f.readMetaDataForPath(ctx, f.diskRoot, &api.ResourceInfoRequestOptions{}); err != nil { } else { if info.ResourceType == "file" { rootDir := path.Dir(root) if rootDir == "." { rootDir = "" } f.setRoot(rootDir) // return an error with an fs which points to the parent return f, fs.ErrorIsFile } } return f, nil } // Convert a list item into a DirEntry func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.ResourceInfoResponse) (fs.DirEntry, error) { switch object.ResourceType { case "dir": t, err := time.Parse(time.RFC3339Nano, object.Modified) if err != nil { return nil, errors.Wrap(err, "error parsing time in directory item") } d := fs.NewDir(remote, t).SetSize(object.Size) return d, nil case "file": o, err := f.newObjectWithInfo(ctx, remote, object) if err != nil { return nil, err } return o, nil default: fs.Debugf(f, "Unknown resource type %q", object.ResourceType) } return nil, nil } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { root := f.dirPath(dir) var limit uint64 = 1000 // max number of objects per request var itemsCount uint64 // number of items per page in response var offset uint64 // for the next page of requests for { opts := &api.ResourceInfoRequestOptions{ Limit: limit, Offset: offset, } info, err := f.readMetaDataForPath(ctx, root, opts) if err != nil { if apiErr, ok := err.(*api.ErrorResponse); ok { // does not exist if apiErr.ErrorName == "DiskNotFoundError" { return nil, fs.ErrorDirNotFound } } return nil, err } itemsCount = uint64(len(info.Embedded.Items)) if info.ResourceType == "dir" { //list all subdirs for _, element := range info.Embedded.Items { element.Name = f.opt.Enc.ToStandardName(element.Name) remote := path.Join(dir, element.Name) entry, err := f.itemToDirEntry(ctx, remote, &element) if err != nil { return nil, err } if entry != nil { entries = append(entries, entry) } } } else if info.ResourceType == "file" { return nil, fs.ErrorIsFile } //offset for the next page of items offset += itemsCount //check if we reached end of list if itemsCount < limit { break } } return entries, nil } // Return an Object from a path // // If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.ResourceInfoResponse) (fs.Object, error) { o := &Object{ fs: f, remote: remote, } var err error if info != nil { err = o.setMetaData(info) } else { err = o.readMetaData(ctx) if apiErr, ok := err.(*api.ErrorResponse); ok { // does not exist if apiErr.ErrorName == "DiskNotFoundError" { return nil, fs.ErrorObjectNotFound } } } if err != nil { return nil, err } return o, nil } // NewObject finds the Object at remote. If it can't be found it // returns the error fs.ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return f.newObjectWithInfo(ctx, remote, nil) } // Creates from the parameters passed in a half finished Object which // must have setMetaData called on it // // Used to create new objects func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object) { // Temporary Object under construction o = &Object{ fs: f, remote: remote, size: size, modTime: modTime, } return o } // Put the object // // Copy the reader in to the new object which is returned // // The new object may have been created if an error is returned func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { o := f.createObject(src.Remote(), src.ModTime(ctx), src.Size()) return o, o.Update(ctx, in, src, options...) } // PutStream uploads to the remote path with the modTime given of indeterminate size func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return f.Put(ctx, in, src, options...) } // CreateDir makes a directory func (f *Fs) CreateDir(ctx context.Context, path string) (err error) { //fmt.Printf("CreateDir: %s\n", path) var resp *http.Response opts := rest.Opts{ Method: "PUT", Path: "/resources", Parameters: url.Values{}, NoResponse: true, } // If creating a directory with a : use (undocumented) disk: prefix if strings.IndexRune(path, ':') >= 0 { path = "disk:" + path } opts.Parameters.Set("path", f.opt.Enc.FromStandardPath(path)) err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { // fmt.Printf("CreateDir %q Error: %s\n", path, err.Error()) return err } // fmt.Printf("...Id %q\n", *info.Id) return nil } // This really needs improvement and especially proper error checking // but Yandex does not publish a List of possible errors and when they're // expected to occur. func (f *Fs) mkDirs(ctx context.Context, path string) (err error) { //trim filename from path //dirString := strings.TrimSuffix(path, filepath.Base(path)) //trim "disk:" from path dirString := strings.TrimPrefix(path, "disk:") if dirString == "" { return nil } if err = f.CreateDir(ctx, dirString); err != nil { if apiErr, ok := err.(*api.ErrorResponse); ok { // already exists if apiErr.ErrorName != "DiskPathPointsToExistentDirectoryError" { // 2 if it fails then create all directories in the path from root. 
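// A sketch of what the fallback below does (path purely illustrative):
// after trimming "disk:", a dirString of "a/b/c" is split on "/" and
// CreateDir is called on each growing prefix in turn:
//   f.CreateDir(ctx, "/a/")
//   f.CreateDir(ctx, "/a/b/")
//   f.CreateDir(ctx, "/a/b/c/")
// Errors from the individual calls are deliberately ignored, since
// intermediate segments will usually already exist.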
dirs := strings.Split(dirString, "/") //path separator var mkdirpath = "/" //path separator / for _, element := range dirs { if element != "" { mkdirpath += element + "/" //path separator / if err = f.CreateDir(ctx, mkdirpath); err != nil { // ignore errors while creating dirs } } } } return nil } } return err } func (f *Fs) mkParentDirs(ctx context.Context, resPath string) error { // defer log.Trace(dirPath, "")("") // chop off trailing / if it exists if strings.HasSuffix(resPath, "/") { resPath = resPath[:len(resPath)-1] } parent := path.Dir(resPath) if parent == "." { parent = "" } return f.mkDirs(ctx, parent) } // Mkdir creates the container if it doesn't exist func (f *Fs) Mkdir(ctx context.Context, dir string) error { path := f.filePath(dir) return f.mkDirs(ctx, path) } // waitForJob waits for the job with status in url to complete func (f *Fs) waitForJob(ctx context.Context, location string) (err error) { opts := rest.Opts{ RootURL: location, Method: "GET", } deadline := time.Now().Add(fs.Config.Timeout) for time.Now().Before(deadline) { var resp *http.Response var body []byte err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) if err != nil { return fserrors.ShouldRetry(err), err } body, err = rest.ReadBody(resp) return fserrors.ShouldRetry(err), err }) if err != nil { return err } // Try to decode the body first as an api.AsyncStatus var status api.AsyncStatus err = json.Unmarshal(body, &status) if err != nil { return errors.Wrapf(err, "async status result not JSON: %q", body) } switch status.Status { case "failure": return errors.Errorf("async operation returned %q", status.Status) case "success": return nil } time.Sleep(1 * time.Second) } return errors.Errorf("async operation didn't complete after %v", fs.Config.Timeout) } func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err error) { opts := rest.Opts{ Method: "DELETE", Path: "/resources", Parameters: url.Values{}, } opts.Parameters.Set("path", f.opt.Enc.FromStandardPath(path)) opts.Parameters.Set("permanently", strconv.FormatBool(hardDelete)) var resp *http.Response var body []byte err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) if err != nil { return fserrors.ShouldRetry(err), err } body, err = rest.ReadBody(resp) return fserrors.ShouldRetry(err), err }) if err != nil { return err } // if 202 Accepted it's an async operation and we have to wait for it to complete before returning if resp.StatusCode == 202 { var info api.AsyncInfo err = json.Unmarshal(body, &info) if err != nil { return errors.Wrapf(err, "async info result not JSON: %q", body) } return f.waitForJob(ctx, info.HRef) } return nil } // purgeCheck removes the root directory, if check is set then it // refuses to do so if it has anything in it func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { root := f.filePath(dir) if check { //to comply with rclone logic we check if the directory is empty before delete. //send request to get list of objects in this directory.
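// A rough sketch of the check performed below (path illustrative):
// for root = "disk:/dir/subdir" we fetch the resource info for that
// path and inspect info.Embedded.Items; a non-empty listing aborts
// with fs.ErrorDirectoryNotEmpty, otherwise delete() issues the
// DELETE with permanently=false (i.e. a soft delete to the trash).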
info, err := f.readMetaDataForPath(ctx, root, &api.ResourceInfoRequestOptions{}) if err != nil { return errors.Wrap(err, "rmdir failed") } if len(info.Embedded.Items) != 0 { return fs.ErrorDirectoryNotEmpty } } //delete directory return f.delete(ctx, root, false) } // Rmdir deletes the container // // Returns an error if it isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, true) } // Purge deletes all the files in the directory // // Optional interface: Only implement this if you have a way of // deleting all the files quicker than just running Remove() on the // result of List() func (f *Fs) Purge(ctx context.Context, dir string) error { return f.purgeCheck(ctx, dir, false) } // copyOrMove copies or moves directories or files depending on the method parameter func (f *Fs) copyOrMove(ctx context.Context, method, src, dst string, overwrite bool) (err error) { opts := rest.Opts{ Method: "POST", Path: "/resources/" + method, Parameters: url.Values{}, } opts.Parameters.Set("from", f.opt.Enc.FromStandardPath(src)) opts.Parameters.Set("path", f.opt.Enc.FromStandardPath(dst)) opts.Parameters.Set("overwrite", strconv.FormatBool(overwrite)) var resp *http.Response var body []byte err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) if err != nil { return fserrors.ShouldRetry(err), err } body, err = rest.ReadBody(resp) return fserrors.ShouldRetry(err), err }) if err != nil { return err } // if 202 Accepted it's an async operation and we have to wait for it to complete before returning if resp.StatusCode == 202 { var info api.AsyncInfo err = json.Unmarshal(body, &info) if err != nil { return errors.Wrapf(err, "async info result not JSON: %q", body) } return f.waitForJob(ctx, info.HRef) } return nil } // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't copy - not same remote type") return nil, fs.ErrorCantCopy } dstPath := f.filePath(remote) err := f.mkParentDirs(ctx, dstPath) if err != nil { return nil, err } err = f.copyOrMove(ctx, "copy", srcObj.filePath(), dstPath, false) if err != nil { return nil, errors.Wrap(err, "couldn't copy file") } return f.NewObject(ctx, remote) } // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { srcObj, ok := src.(*Object) if !ok { fs.Debugf(src, "Can't move - not same remote type") return nil, fs.ErrorCantMove } dstPath := f.filePath(remote) err := f.mkParentDirs(ctx, dstPath) if err != nil { return nil, err } err = f.copyOrMove(ctx, "move", srcObj.filePath(), dstPath, false) if err != nil { return nil, errors.Wrap(err, "couldn't move file") } return f.NewObject(ctx, remote) } // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations.
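// For example (names illustrative only): with this remote rooted at
// "disk:/dst" and the source rooted at "disk:/src", DirMove(src, "a", "b")
// asks copyOrMove to issue one server side move from "disk:/src/a"
// to "disk:/dst/b/".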
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { srcFs, ok := src.(*Fs) if !ok { fs.Debugf(srcFs, "Can't move directory - not same remote type") return fs.ErrorCantDirMove } srcPath := path.Join(srcFs.diskRoot, srcRemote) dstPath := f.dirPath(dstRemote) //fmt.Printf("Move src: %s (FullPath: %s), dst: %s (FullPath: %s)\n", srcRemote, srcPath, dstRemote, dstPath) // Refuse to move to or from the root if srcPath == "disk:/" || dstPath == "disk:/" { fs.Debugf(src, "DirMove error: Can't move root") return errors.New("can't move root directory") } err := f.mkParentDirs(ctx, dstPath) if err != nil { return err } _, err = f.readMetaDataForPath(ctx, dstPath, &api.ResourceInfoRequestOptions{}) if apiErr, ok := err.(*api.ErrorResponse); ok { // does not exist if apiErr.ErrorName == "DiskNotFoundError" { // OK } } else if err != nil { return err } else { return fs.ErrorDirExists } err = f.copyOrMove(ctx, "move", srcPath, dstPath, false) if err != nil { return errors.Wrap(err, "couldn't move directory") } return nil } // PublicLink generates a public link to the remote path (usually readable by anyone) func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) { var path string if unlink { path = "/resources/unpublish" } else { path = "/resources/publish" } opts := rest.Opts{ Method: "PUT", Path: f.opt.Enc.FromStandardPath(path), Parameters: url.Values{}, NoResponse: true, } opts.Parameters.Set("path", f.opt.Enc.FromStandardPath(f.filePath(remote))) var resp *http.Response err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if apiErr, ok := err.(*api.ErrorResponse); ok { // does not exist if apiErr.ErrorName == "DiskNotFoundError" { return "", fs.ErrorObjectNotFound } } if err != nil { if unlink { return "", errors.Wrap(err, "couldn't remove public link") } return "", errors.Wrap(err, "couldn't create public link") } info, err := f.readMetaDataForPath(ctx, f.filePath(remote), &api.ResourceInfoRequestOptions{}) if err != nil { return "", err } if info.PublicURL == "" { return "", errors.New("couldn't create public link - no link path received") } return info.PublicURL, nil } // CleanUp permanently deletes all trashed files/folders func (f *Fs) CleanUp(ctx context.Context) (err error) { var resp *http.Response opts := rest.Opts{ Method: "DELETE", Path: "/trash/resources", NoResponse: true, } err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) return err } // About gets quota information func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { opts := rest.Opts{ Method: "GET", Path: "/", } var resp *http.Response var info api.DiskInfo var err error err = f.pacer.Call(func() (bool, error) { resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) return shouldRetry(resp, err) }) if err != nil { return nil, err } usage := &fs.Usage{ Total: fs.NewUsageValue(info.TotalSpace), Used: fs.NewUsageValue(info.UsedSpace), Free: fs.NewUsageValue(info.TotalSpace - info.UsedSpace), } return usage, nil } // ------------------------------------------------------------ // Fs returns the parent Fs func (o *Object) Fs() fs.Info { return o.fs } // Return a string version func (o *Object) String() string { if o == nil { return "" } return 
o.remote } // Remote returns the remote path func (o *Object) Remote() string { return o.remote } // Returns the full remote path for the object func (o *Object) filePath() string { return o.fs.filePath(o.remote) } // setMetaData sets the fs data from a storage.Object func (o *Object) setMetaData(info *api.ResourceInfoResponse) (err error) { o.hasMetaData = true o.size = info.Size o.md5sum = info.Md5 o.mimeType = info.MimeType var modTimeString string modTimeObj, ok := info.CustomProperties["rclone_modified"] if ok { // read modTime from rclone_modified custom_property of object modTimeString, ok = modTimeObj.(string) } if !ok { // read modTime from Modified property of object as a fallback modTimeString = info.Modified } t, err := time.Parse(time.RFC3339Nano, modTimeString) if err != nil { return errors.Wrapf(err, "failed to parse modtime from %q", modTimeString) } o.modTime = t return nil } // readMetaData reads and sets the new metadata for a storage.Object func (o *Object) readMetaData(ctx context.Context) (err error) { if o.hasMetaData { return nil } info, err := o.fs.readMetaDataForPath(ctx, o.filePath(), &api.ResourceInfoRequestOptions{}) if err != nil { return err } if info.ResourceType != "file" { return fs.ErrorNotAFile } return o.setMetaData(info) } // ModTime returns the modification time of the object // // It attempts to read the object's mtime and if that isn't present the // LastModified returned in the http headers func (o *Object) ModTime(ctx context.Context) time.Time { err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return time.Now() } return o.modTime } // Size returns the size of an object in bytes func (o *Object) Size() int64 { ctx := context.TODO() err := o.readMetaData(ctx) if err != nil { fs.Logf(o, "Failed to read metadata: %v", err) return 0 } return o.size } // Hash returns the Md5sum of an object returning a lowercase hex string func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { if t != hash.MD5 { return "", hash.ErrUnsupported } return o.md5sum, nil } // Storable returns whether this object is storable func (o *Object) Storable() bool { return true } func (o *Object) setCustomProperty(ctx context.Context, property string, value string) (err error) { var resp *http.Response opts := rest.Opts{ Method: "PATCH", Path: "/resources", Parameters: url.Values{}, NoResponse: true, } opts.Parameters.Set("path", o.fs.opt.Enc.FromStandardPath(o.filePath())) rcm := map[string]interface{}{ property: value, } cpr := api.CustomPropertyResponse{CustomProperties: rcm} err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, &cpr, nil) return shouldRetry(resp, err) }) return err } // SetModTime sets the modification time of the local fs object // // Commits the datastore func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { // set custom_property 'rclone_modified' of object to modTime err := o.setCustomProperty(ctx, "rclone_modified", modTime.Format(time.RFC3339Nano)) if err != nil { return err } o.modTime = modTime return nil } // Open an object for read func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { // prepare download var resp *http.Response var dl api.AsyncInfo opts := rest.Opts{ Method: "GET", Path: "/resources/download", Parameters: url.Values{}, } opts.Parameters.Set("path", o.fs.opt.Enc.FromStandardPath(o.filePath())) err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx,
&opts, nil, &dl) return shouldRetry(resp, err) }) if err != nil { return nil, err } // perform the download opts = rest.Opts{ RootURL: dl.HRef, Method: "GET", Options: options, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) if err != nil { return nil, err } return resp.Body, err } func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeType string, options ...fs.OpenOption) (err error) { // prepare upload var resp *http.Response var ur api.AsyncInfo opts := rest.Opts{ Method: "GET", Path: "/resources/upload", Parameters: url.Values{}, Options: options, } opts.Parameters.Set("path", o.fs.opt.Enc.FromStandardPath(o.filePath())) opts.Parameters.Set("overwrite", strconv.FormatBool(overwrite)) err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &ur) return shouldRetry(resp, err) }) if err != nil { return err } // perform the actual upload opts = rest.Opts{ RootURL: ur.HRef, Method: "PUT", ContentType: mimeType, Body: in, NoResponse: true, } err = o.fs.pacer.Call(func() (bool, error) { resp, err = o.fs.srv.Call(ctx, &opts) return shouldRetry(resp, err) }) return err } // Update the already existing object // // Copy the reader into the object updating modTime and size // // The new object may have been created if an error is returned func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { in1 := readers.NewCountingReader(in) modTime := src.ModTime(ctx) remote := o.filePath() //create full path to file before upload. err := o.fs.mkParentDirs(ctx, remote) if err != nil { return err } //upload file err = o.upload(ctx, in1, true, fs.MimeType(ctx, src), options...) if err != nil { return err } //if file uploaded successfully then return metadata o.modTime = modTime o.md5sum = "" // according to unit tests after put the md5 is empty. o.size = int64(in1.BytesRead()) // better solution o.readMetaData() ? 
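// The CountingReader wrapped around the input above is what lets this
// backend accept streams of indeterminate size (see PutStream): only
// once the upload has drained the reader is the true size known, e.g.
//   in1 := readers.NewCountingReader(in)
//   err = o.upload(ctx, in1, true, fs.MimeType(ctx, src), options...)
//   o.size = int64(in1.BytesRead())
// Re-reading the metadata would be authoritative but would cost an
// extra API round trip.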
//and set modTime of uploaded file err = o.SetModTime(ctx, modTime) return err } // Remove an object func (o *Object) Remove(ctx context.Context) error { return o.fs.delete(ctx, o.filePath(), false) } // MimeType of an Object if known, "" otherwise func (o *Object) MimeType(ctx context.Context) string { return o.mimeType } // Check the interfaces are satisfied var ( _ fs.Fs = (*Fs)(nil) _ fs.Purger = (*Fs)(nil) _ fs.Copier = (*Fs)(nil) _ fs.Mover = (*Fs)(nil) _ fs.DirMover = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil) _ fs.CleanUpper = (*Fs)(nil) _ fs.Abouter = (*Fs)(nil) _ fs.Object = (*Object)(nil) _ fs.MimeTyper = (*Object)(nil) ) rclone-1.53.3/backend/yandex/yandex_test.go000066400000000000000000000005571375552240400206400ustar00rootroot00000000000000// Test Yandex filesystem interface package yandex_test import ( "testing" "github.com/rclone/rclone/backend/yandex" "github.com/rclone/rclone/fstest/fstests" ) // TestIntegration runs integration tests against the remote func TestIntegration(t *testing.T) { fstests.Run(t, &fstests.Opt{ RemoteName: "TestYandex:", NilObject: (*yandex.Object)(nil), }) } rclone-1.53.3/bin/000077500000000000000000000000001375552240400136445ustar00rootroot00000000000000rclone-1.53.3/bin/.ignore-emails000066400000000000000000000001671375552240400164040ustar00rootroot00000000000000# Email addresses to ignore in the git log when making the authors.md file rclone-1.53.3/bin/bisect-go-rclone.sh000077500000000000000000000005761375552240400173470ustar00rootroot00000000000000#!/bin/bash # An example script to run when bisecting go with git bisect -run when # looking for an rclone regression # Run this from the go root set -e # Compile the go version cd src ./make.bash || exit 125 # Make sure we are using it source ~/bin/use-go1.11 go version # Compile rclone cd ~/go/src/github.com/rclone/rclone make # run the failing test go run -race race.go rclone-1.53.3/bin/bisect-rclone.sh000077500000000000000000000016061375552240400167370ustar00rootroot00000000000000#!/bin/bash # Example script for git bisect run # # Copy this file into /tmp say before running as it will be # overwritten by the bisect as it is checked in. 
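# Note on the `|| exit 125` convention used by these bisect scripts:
# git bisect run treats an exit status of 125 as "this commit cannot
# be tested" and skips it, so failing only the *build* with 125 (as in
# make.bash above and make below) stops unbuildable commits from being
# classified. Any other non-zero status marks the commit bad, and zero
# marks it good.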
# # Change the test below to find out whether rclone is working or not # # Run from the project root # # git bisect start # git checkout master # git bisect bad # git checkout v1.41 (or whatever is the first good one) # git bisect good # git bisect run /tmp/bisect-rclone.sh set -e # Compile notifying git on compile failure make || exit 125 rclone version # Test whatever it is that is going wrong - exit with non zero exit code on failure # commented out examples follow # truncate -s 10M /tmp/10M # rclone delete azure:rclone-test1/10M || true # rclone --retries 1 copyto -vv /tmp/10M azure:rclone-test1/10M --azureblob-upload-cutoff 1M # rm -f "/tmp/tests's.docx" || true # rclone -vv --retries 1 copy "drive:test/tests's.docx" /tmp rclone-1.53.3/bin/build-xgo-cgofuse.sh000077500000000000000000000002161375552240400175250ustar00rootroot00000000000000#!/bin/bash set -e docker build -t rclone/xgo-cgofuse https://github.com/billziss-gh/cgofuse.git docker images docker push rclone/xgo-cgofuse rclone-1.53.3/bin/check-merged.go000066400000000000000000000062071375552240400165160ustar00rootroot00000000000000// +build ignore // Attempt to work out if branches have already been merged package main import ( "bufio" "errors" "flag" "fmt" "log" "os" "os/exec" "regexp" ) var ( // Flags master = flag.String("master", "master", "Branch to work out if merged into") version = "development version" // overridden by goreleaser ) func usage() { fmt.Fprintf(os.Stderr, `Usage: %s [options] Version: %s Attempt to work out if in the current git repo branches have been merged into master. Example usage: %s Full options: `, os.Args[0], version, os.Args[0]) flag.PrintDefaults() } var ( printedSep = false ) const ( sep1 = "============================================================" sep2 = "------------------------------------------------------------" ) // Show the diff between two git revisions func gitDiffDiff(rev1, rev2 string) { fmt.Printf("Diff of diffs of %q and %q\n", rev1, rev2) cmd := exec.Command("bash", "-c", fmt.Sprintf(`diff <(git show "%s") <(git show "%s")`, rev1, rev2)) out, err := cmd.Output() if err != nil { var exitErr *exec.ExitError if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 { // OK just different } else { log.Fatalf("git diff failed: %#v", err) } } _, _ = os.Stdout.Write(out) } var reCommit = regexp.MustCompile(`commit ([0-9a-f]{32,})`) // Grep the git log for logLine func gitLogGrep(branch, rev, logLine string) { cmd := exec.Command("git", "log", "--grep", regexp.QuoteMeta(logLine), *master) out, err := cmd.Output() if err != nil { log.Fatalf("git log grep failed: %v", err) } if len(out) > 0 { if !printedSep { fmt.Println(sep1) printedSep = true } fmt.Printf("Branch: %s - MAY BE MERGED to %s\nLog: %s\n\n", branch, *master, logLine) _, _ = os.Stdout.Write(out) match := reCommit.FindSubmatch(out) if len(match) != 0 { commit := string(match[1]) fmt.Println(sep2) gitDiffDiff(rev, commit) } fmt.Println(sep1) } } // * b2-fix-download-url 4209c768a [gone] b2: fix transfers when using download_url var reLine = regexp.MustCompile(`^[ *] (\S+)\s+([0-9a-f]+)\s+(?:\[[^]]+\] )?(.*)$`) // Run git branch -v, parse the output and check it in the log func gitBranch() { cmd := exec.Command("git", "branch", "-v") cmd.Stderr = os.Stderr out, err := cmd.StdoutPipe() if err != nil { log.Fatalf("git branch pipe failed: %v", err) } if err := cmd.Start(); err != nil { log.Fatalf("git branch failed: %v", err) } scanner := bufio.NewScanner(out) for scanner.Scan() { line := scanner.Text() match := 
reLine.FindStringSubmatch(line) if len(match) != 4 { log.Printf("Invalid line %q", line) continue } branch, rev, logLine := match[1], match[2], match[3] if branch == *master { continue } //fmt.Printf("branch = %q, rev = %q, logLine = %q\n", branch, rev, logLine) gitLogGrep(branch, rev, logLine) } if err := scanner.Err(); err != nil { log.Fatalf("failed reading git branch: %v", err) } if err := cmd.Wait(); err != nil { log.Fatalf("git branch wait failed: %v", err) } } func main() { flag.Usage = usage flag.Parse() args := flag.Args() if len(args) != 0 { usage() log.Fatal("Wrong number of arguments") } gitBranch() } rclone-1.53.3/bin/cross-compile.go000066400000000000000000000253351375552240400167620ustar00rootroot00000000000000// +build ignore // Cross compile rclone - in go because I hate bash ;-) package main import ( "encoding/json" "flag" "fmt" "io/ioutil" "log" "os" "os/exec" "path" "path/filepath" "regexp" "runtime" "sort" "strings" "sync" "text/template" "time" "github.com/coreos/go-semver/semver" ) var ( // Flags debug = flag.Bool("d", false, "Print commands instead of running them.") parallel = flag.Int("parallel", runtime.NumCPU(), "Number of commands to run in parallel.") copyAs = flag.String("release", "", "Make copies of the releases with this name") gitLog = flag.String("git-log", "", "git log to include as well") include = flag.String("include", "^.*$", "os/arch regexp to include") exclude = flag.String("exclude", "^$", "os/arch regexp to exclude") cgo = flag.Bool("cgo", false, "Use cgo for the build") noClean = flag.Bool("no-clean", false, "Don't clean the build directory before running.") tags = flag.String("tags", "", "Space separated list of build tags") compileOnly = flag.Bool("compile-only", false, "Just build the binary, not the zip.") ) // GOOS/GOARCH pairs we build for // // If the GOARCH contains a - it is a synthetic arch with more parameters var osarches = []string{ "windows/386", "windows/amd64", "darwin/amd64", "linux/386", "linux/amd64", "linux/arm", "linux/arm-v7", "linux/arm64", "linux/mips", "linux/mipsle", "freebsd/386", "freebsd/amd64", "freebsd/arm", "freebsd/arm-v7", "netbsd/386", "netbsd/amd64", "netbsd/arm", "netbsd/arm-v7", "openbsd/386", "openbsd/amd64", "plan9/386", "plan9/amd64", "solaris/amd64", "js/wasm", } // Special environment flags for a given arch var archFlags = map[string][]string{ "386": {"GO386=387"}, "mips": {"GOMIPS=softfloat"}, "mipsle": {"GOMIPS=softfloat"}, "arm-v7": {"GOARM=7"}, } // runEnv - run a shell command with env func runEnv(args, env []string) error { if *debug { args = append([]string{"echo"}, args...) } cmd := exec.Command(args[0], args[1:]...) if env != nil { cmd.Env = append(os.Environ(), env...) 
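// A sketch of the effect (flag values illustrative): with
//   args = []string{"go", "build", "..."}
//   env = []string{"GOOS=linux", "GOARCH=arm"}
// the child runs `go build ...` with the parent environment plus the
// GOOS/GOARCH entries appended after the inherited os.Environ()
// values; env == nil leaves cmd.Env nil so the child simply inherits
// the parent environment unchanged.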
} if *debug { log.Printf("args = %v, env = %v\n", args, cmd.Env) } out, err := cmd.CombinedOutput() if err != nil { log.Print("----------------------------") log.Printf("Failed to run %v: %v", args, err) log.Printf("Command output was:\n%s", out) log.Print("----------------------------") } return err } // run a shell command func run(args ...string) { err := runEnv(args, nil) if err != nil { log.Fatalf("Exiting after error: %v", err) } } // chdir or die func chdir(dir string) { err := os.Chdir(dir) if err != nil { log.Fatalf("Couldn't cd into %q: %v", dir, err) } } // substitute data from go template file inFile into file outFile func substitute(inFile, outFile string, data interface{}) { t, err := template.ParseFiles(inFile) if err != nil { log.Fatalf("Failed to read template file %q: %v", inFile, err) } out, err := os.Create(outFile) if err != nil { log.Fatalf("Failed to create output file %q: %v", outFile, err) } defer func() { err := out.Close() if err != nil { log.Fatalf("Failed to close output file %q: %v", outFile, err) } }() err = t.Execute(out, data) if err != nil { log.Fatalf("Failed to substitute template file %q: %v", inFile, err) } } // build the zip package and return its name func buildZip(dir string) string { // Now build the zip run("cp", "-a", "../MANUAL.txt", filepath.Join(dir, "README.txt")) run("cp", "-a", "../MANUAL.html", filepath.Join(dir, "README.html")) run("cp", "-a", "../rclone.1", dir) if *gitLog != "" { run("cp", "-a", *gitLog, dir) } zip := dir + ".zip" run("zip", "-r9", zip, dir) return zip } // Build .deb and .rpm packages // // It returns a list of artifacts it has made func buildDebAndRpm(dir, version, goarch string) []string { // Make internal version number acceptable to .deb and .rpm pkgVersion := version[1:] pkgVersion = strings.Replace(pkgVersion, "β", "-beta", -1) pkgVersion = strings.Replace(pkgVersion, "-", ".", -1) // Make nfpm.yaml from the template substitute("../bin/nfpm.yaml", path.Join(dir, "nfpm.yaml"), map[string]string{ "Version": pkgVersion, "Arch": goarch, }) // build them var artifacts []string for _, pkg := range []string{".deb", ".rpm"} { artifact := dir + pkg run("bash", "-c", "cd "+dir+" && nfpm -f nfpm.yaml pkg -t ../"+artifact) artifacts = append(artifacts, artifact) } return artifacts } // generate system object (syso) file to be picked up by a following go build for embedding icon and version info resources into windows executable func buildWindowsResourceSyso(goarch string, versionTag string) string { type M map[string]interface{} version := strings.TrimPrefix(versionTag, "v") semanticVersion := semver.New(version) // Build json input to goversioninfo utility bs, err := json.Marshal(M{ "FixedFileInfo": M{ "FileVersion": M{ "Major": semanticVersion.Major, "Minor": semanticVersion.Minor, "Patch": semanticVersion.Patch, }, "ProductVersion": M{ "Major": semanticVersion.Major, "Minor": semanticVersion.Minor, "Patch": semanticVersion.Patch, }, }, "StringFileInfo": M{ "CompanyName": "https://rclone.org", "ProductName": "Rclone", "FileDescription": "Rsync for cloud storage", "InternalName": "rclone", "OriginalFilename": "rclone.exe", "LegalCopyright": "The Rclone Authors", "FileVersion": version, "ProductVersion": version, }, "IconPath": "../graphics/logo/ico/logo_symbol_color.ico", }) if err != nil { log.Printf("Failed to build version info json: %v", err) return "" } // Write json to temporary file that will only be used by the goversioninfo command executed below.
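// In outline, the rest of this function does the following (the file
// names are the ones this function builds; nothing here is mandated
// by goversioninfo itself):
//   1. write versioninfo_windows_<goarch>.json in the build directory
//   2. run: goversioninfo -o ../resource_windows_<goarch>.syso [-64] <json>
//   3. leave the .syso for the subsequent `go build` to embed
// The temporary json is removed again by the deferred cleanup below.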
jsonPath, err := filepath.Abs("versioninfo_windows_" + goarch + ".json") // Appending goos and goarch as suffix to avoid any race conditions if err != nil { log.Printf("Failed to resolve path: %v", err) return "" } err = ioutil.WriteFile(jsonPath, bs, 0644) if err != nil { log.Printf("Failed to write %s: %v", jsonPath, err) return "" } defer func() { if err := os.Remove(jsonPath); err != nil { if !os.IsNotExist(err) { log.Printf("Warning: Couldn't remove generated %s: %v. Please remove it manually.", jsonPath, err) } } }() // Execute goversioninfo utility using the json file as input. // It will produce a system object (syso) file that a following go build should pick up. sysoPath, err := filepath.Abs("../resource_windows_" + goarch + ".syso") // Appending goos and goarch as suffix to avoid any race conditions, and also it is recognized by go build and avoids any builds for other systems considering it if err != nil { log.Printf("Failed to resolve path: %v", err) return "" } args := []string{ "goversioninfo", "-o", sysoPath, } if goarch == "amd64" { args = append(args, "-64") // Make the syso a 64-bit coff file } args = append(args, jsonPath) err = runEnv(args, nil) if err != nil { return "" } return sysoPath } // delete generated system object (syso) resource file func cleanupResourceSyso(sysoFilePath string) { if sysoFilePath == "" { return } if err := os.Remove(sysoFilePath); err != nil { if !os.IsNotExist(err) { log.Printf("Warning: Couldn't remove generated %s: %v. Please remove it manually.", sysoFilePath, err) } } } // Strip a version suffix off the arch if present func stripVersion(goarch string) string { i := strings.Index(goarch, "-") if i < 0 { return goarch } return goarch[:i] } // build the binary in dir returning success or failure func compileArch(version, goos, goarch, dir string) bool { log.Printf("Compiling %s/%s into %s", goos, goarch, dir) output := filepath.Join(dir, "rclone") if goos == "windows" { output += ".exe" sysoPath := buildWindowsResourceSyso(goarch, version) if sysoPath == "" { log.Printf("Warning: Windows binaries will not have file information embedded") } defer cleanupResourceSyso(sysoPath) } err := os.MkdirAll(dir, 0777) if err != nil { log.Fatalf("Failed to mkdir: %v", err) } args := []string{ "go", "build", "--ldflags", "-s -X github.com/rclone/rclone/fs.Version=" + version, "-trimpath", "-o", output, "-tags", *tags, "..", } env := []string{ "GOOS=" + goos, "GOARCH=" + stripVersion(goarch), } if !*cgo { env = append(env, "CGO_ENABLED=0") } else { env = append(env, "CGO_ENABLED=1") } if flags, ok := archFlags[goarch]; ok { env = append(env, flags...) } err = runEnv(args, env) if err != nil { log.Printf("Error compiling %s/%s: %v", goos, goarch, err) return false } if !*compileOnly { if goos != "js" { artifacts := []string{buildZip(dir)} // build a .deb and .rpm if appropriate if goos == "linux" { artifacts = append(artifacts, buildDebAndRpm(dir, version, stripVersion(goarch))...)
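// Sketch of the artifact naming that the *copyAs handling just below
// relies on (version illustrative): a v1.53.3 linux/amd64 build yields
//   rclone-v1.53.3-linux-amd64.zip (plus .deb and .rpm alongside)
// and with -release beta each file is additionally hard linked to
//   rclone-beta-linux-amd64.zip
// via the strings.Replace on the "-"+version substring.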
} if *copyAs != "" { for _, artifact := range artifacts { run("ln", artifact, strings.Replace(artifact, "-"+version, "-"+*copyAs, 1)) } } } // tidy up run("rm", "-rf", dir) } log.Printf("Done compiling %s/%s", goos, goarch) return true } func compile(version string) { start := time.Now() wg := new(sync.WaitGroup) run := make(chan func(), *parallel) for i := 0; i < *parallel; i++ { wg.Add(1) go func() { defer wg.Done() for f := range run { f() } }() } includeRe, err := regexp.Compile(*include) if err != nil { log.Fatalf("Bad -include regexp: %v", err) } excludeRe, err := regexp.Compile(*exclude) if err != nil { log.Fatalf("Bad -exclude regexp: %v", err) } compiled := 0 var failuresMu sync.Mutex var failures []string for _, osarch := range osarches { if excludeRe.MatchString(osarch) || !includeRe.MatchString(osarch) { continue } parts := strings.Split(osarch, "/") if len(parts) != 2 { log.Fatalf("Bad osarch %q", osarch) } goos, goarch := parts[0], parts[1] userGoos := goos if goos == "darwin" { userGoos = "osx" } dir := filepath.Join("rclone-" + version + "-" + userGoos + "-" + goarch) run <- func() { if !compileArch(version, goos, goarch, dir) { failuresMu.Lock() failures = append(failures, goos+"/"+goarch) failuresMu.Unlock() } } compiled++ } close(run) wg.Wait() log.Printf("Compiled %d arches in %v", compiled, time.Since(start)) if len(failures) > 0 { sort.Strings(failures) log.Printf("%d compile failures:\n %s\n", len(failures), strings.Join(failures, "\n ")) os.Exit(1) } } func main() { flag.Parse() args := flag.Args() if len(args) != 1 { log.Fatalf("Syntax: %s ", os.Args[0]) } version := args[0] if !*noClean { run("rm", "-rf", "build") run("mkdir", "build") } chdir("build") err := ioutil.WriteFile("version.txt", []byte(fmt.Sprintf("rclone %s\n", version)), 0666) if err != nil { log.Fatalf("Couldn't write version.txt: %v", err) } compile(version) } rclone-1.53.3/bin/decrypt_names.py000077500000000000000000000032701375552240400170600ustar00rootroot00000000000000#!/usr/bin/env python3 """ This is a tool to decrypt file names in rclone logs. Pass two files in, the first should be a crypt mapping generated by rclone ls --crypt-show-mapping remote:path The second should be a log file that you want the paths decrypted in. Note that if the crypt mappings file is large it can take some time to run. """ import re import sys # Crypt line match_crypt = re.compile(r'NOTICE: (.*?): Encrypts to "(.*?)"$') def read_crypt_map(mapping_file): """ Read the crypt mapping file in, creating a dictionary of substitutions """ mapping = {} with open(mapping_file) as fd: for line in fd: match = match_crypt.search(line) if match: plaintext, ciphertext = match.groups() plaintexts = plaintext.split("/") ciphertexts = ciphertext.split("/") for plain, cipher in zip(plaintexts, ciphertexts): mapping[cipher] = plain return mapping def map_log_file(crypt_map, log_file): """ Substitute the crypt_map in the log file. This uses a straight forward O(N**2) algorithm. I tried using regexps to speed it up but it made it slower! 
""" with open(log_file) as fd: for line in fd: for cipher, plain in crypt_map.items(): line = line.replace(cipher, plain) sys.stdout.write(line) def main(): if len(sys.argv) < 3: print("Syntax: %s " % sys.argv[0]) raise SystemExit(1) mapping_file, log_file = sys.argv[1:] crypt_map = read_crypt_map(mapping_file) map_log_file(crypt_map, log_file) if __name__ == "__main__": main() rclone-1.53.3/bin/get-github-release.go000066400000000000000000000276711375552240400176650ustar00rootroot00000000000000// +build ignore // Get the latest release from a github project // // If GITHUB_USER and GITHUB_TOKEN are set then these will be used to // authenticate the request which is useful to avoid rate limits. package main import ( "archive/tar" "compress/bzip2" "compress/gzip" "encoding/json" "flag" "fmt" "io" "io/ioutil" "log" "net/http" "net/url" "os" "os/exec" "path" "path/filepath" "regexp" "runtime" "strings" "time" "github.com/rclone/rclone/lib/rest" "golang.org/x/net/html" "golang.org/x/sys/unix" ) var ( // Flags install = flag.Bool("install", false, "Install the downloaded package using sudo dpkg -i.") extract = flag.String("extract", "", "Extract the named executable from the .tar.gz and install into bindir.") bindir = flag.String("bindir", defaultBinDir(), "Directory to install files downloaded with -extract.") useAPI = flag.Bool("use-api", false, "Use the API for finding the release instead of scraping the page.") // Globals matchProject = regexp.MustCompile(`^([\w-]+)/([\w-]+)$`) osAliases = map[string][]string{ "darwin": {"macos", "osx"}, } archAliases = map[string][]string{ "amd64": {"x86_64"}, } ) // A github release // // Made by pasting the JSON into https://mholt.github.io/json-to-go/ type Release struct { URL string `json:"url"` AssetsURL string `json:"assets_url"` UploadURL string `json:"upload_url"` HTMLURL string `json:"html_url"` ID int `json:"id"` TagName string `json:"tag_name"` TargetCommitish string `json:"target_commitish"` Name string `json:"name"` Draft bool `json:"draft"` Author struct { Login string `json:"login"` ID int `json:"id"` AvatarURL string `json:"avatar_url"` GravatarID string `json:"gravatar_id"` URL string `json:"url"` HTMLURL string `json:"html_url"` FollowersURL string `json:"followers_url"` FollowingURL string `json:"following_url"` GistsURL string `json:"gists_url"` StarredURL string `json:"starred_url"` SubscriptionsURL string `json:"subscriptions_url"` OrganizationsURL string `json:"organizations_url"` ReposURL string `json:"repos_url"` EventsURL string `json:"events_url"` ReceivedEventsURL string `json:"received_events_url"` Type string `json:"type"` SiteAdmin bool `json:"site_admin"` } `json:"author"` Prerelease bool `json:"prerelease"` CreatedAt time.Time `json:"created_at"` PublishedAt time.Time `json:"published_at"` Assets []struct { URL string `json:"url"` ID int `json:"id"` Name string `json:"name"` Label string `json:"label"` Uploader struct { Login string `json:"login"` ID int `json:"id"` AvatarURL string `json:"avatar_url"` GravatarID string `json:"gravatar_id"` URL string `json:"url"` HTMLURL string `json:"html_url"` FollowersURL string `json:"followers_url"` FollowingURL string `json:"following_url"` GistsURL string `json:"gists_url"` StarredURL string `json:"starred_url"` SubscriptionsURL string `json:"subscriptions_url"` OrganizationsURL string `json:"organizations_url"` ReposURL string `json:"repos_url"` EventsURL string `json:"events_url"` ReceivedEventsURL string `json:"received_events_url"` Type string `json:"type"` SiteAdmin bool 
`json:"site_admin"` } `json:"uploader"` ContentType string `json:"content_type"` State string `json:"state"` Size int `json:"size"` DownloadCount int `json:"download_count"` CreatedAt time.Time `json:"created_at"` UpdatedAt time.Time `json:"updated_at"` BrowserDownloadURL string `json:"browser_download_url"` } `json:"assets"` TarballURL string `json:"tarball_url"` ZipballURL string `json:"zipball_url"` Body string `json:"body"` } // checks if a path has write access func writable(path string) bool { return unix.Access(path, unix.W_OK) == nil } // Directory to install releases in by default // // Find writable directories on $PATH. Use $GOPATH/bin if that is on // the path and writable or use the first writable directory which is // in $HOME or failing that the first writable directory. // // Returns "" if none of the above were found func defaultBinDir() string { home := os.Getenv("HOME") var ( bin string homeBin string goHomeBin string gopath = os.Getenv("GOPATH") ) for _, dir := range strings.Split(os.Getenv("PATH"), ":") { if writable(dir) { if strings.HasPrefix(dir, home) { if homeBin != "" { homeBin = dir } if gopath != "" && strings.HasPrefix(dir, gopath) && goHomeBin == "" { goHomeBin = dir } } if bin == "" { bin = dir } } } if goHomeBin != "" { return goHomeBin } if homeBin != "" { return homeBin } return bin } // read the body or an error message func readBody(in io.Reader) string { data, err := ioutil.ReadAll(in) if err != nil { return fmt.Sprintf("Error reading body: %v", err.Error()) } return string(data) } // Get an asset URL and name func getAsset(project string, matchName *regexp.Regexp) (string, string) { url := "https://api.github.com/repos/" + project + "/releases/latest" log.Printf("Fetching asset info for %q from %q", project, url) user, pass := os.Getenv("GITHUB_USER"), os.Getenv("GITHUB_TOKEN") req, err := http.NewRequest("GET", url, nil) if err != nil { log.Fatalf("Failed to make http request %q: %v", url, err) } if user != "" && pass != "" { log.Printf("Fetching using GITHUB_USER and GITHUB_TOKEN") req.SetBasicAuth(user, pass) } resp, err := http.DefaultClient.Do(req) if err != nil { log.Fatalf("Failed to fetch release info %q: %v", url, err) } if resp.StatusCode != http.StatusOK { log.Printf("Error: %s", readBody(resp.Body)) log.Fatalf("Bad status %d when fetching %q release info: %s", resp.StatusCode, url, resp.Status) } var release Release err = json.NewDecoder(resp.Body).Decode(&release) if err != nil { log.Fatalf("Failed to decode release info: %v", err) } err = resp.Body.Close() if err != nil { log.Fatalf("Failed to close body: %v", err) } for _, asset := range release.Assets { //log.Printf("Finding %s", asset.Name) if matchName.MatchString(asset.Name) && isOurOsArch(asset.Name) { return asset.BrowserDownloadURL, asset.Name } } log.Fatalf("Didn't find asset in info") return "", "" } // Get an asset URL and name by scraping the downloads page // // This doesn't use the API so isn't rate limited when not using GITHUB login details func getAssetFromReleasesPage(project string, matchName *regexp.Regexp) (assetURL string, assetName string) { baseURL := "https://github.com/" + project + "/releases" log.Printf("Fetching asset info for %q from %q", project, baseURL) base, err := url.Parse(baseURL) if err != nil { log.Fatalf("URL Parse failed: %v", err) } resp, err := http.Get(baseURL) if err != nil { log.Fatalf("Failed to fetch release info %q: %v", baseURL, err) } defer resp.Body.Close() if resp.StatusCode != http.StatusOK { log.Printf("Error: %s", 
readBody(resp.Body)) log.Fatalf("Bad status %d when fetching %q release info: %s", resp.StatusCode, baseURL, resp.Status) } doc, err := html.Parse(resp.Body) if err != nil { log.Fatalf("Failed to parse web page: %v", err) } var walk func(*html.Node) walk = func(n *html.Node) { if n.Type == html.ElementNode && n.Data == "a" { for _, a := range n.Attr { if a.Key == "href" { if name := path.Base(a.Val); matchName.MatchString(name) && isOurOsArch(name) { if u, err := rest.URLJoin(base, a.Val); err == nil { if assetName == "" { assetName = name assetURL = u.String() } } } break } } } for c := n.FirstChild; c != nil; c = c.NextSibling { walk(c) } } walk(doc) if assetName == "" || assetURL == "" { log.Fatalf("Didn't find URL in page") } return assetURL, assetName } // isOurOsArch returns true if s contains our OS and our Arch func isOurOsArch(s string) bool { s = strings.ToLower(s) check := func(base string, aliases map[string][]string) bool { names := []string{base} names = append(names, aliases[base]...) for _, name := range names { if strings.Contains(s, name) { return true } } return false } return check(runtime.GOARCH, archAliases) && check(runtime.GOOS, osAliases) } // get a file for download func getFile(url, fileName string) { log.Printf("Downloading %q from %q", fileName, url) out, err := os.Create(fileName) if err != nil { log.Fatalf("Failed to open %q: %v", fileName, err) } resp, err := http.Get(url) if err != nil { log.Fatalf("Failed to fetch asset %q: %v", url, err) } if resp.StatusCode != http.StatusOK { log.Printf("Error: %s", readBody(resp.Body)) log.Fatalf("Bad status %d when fetching %q asset: %s", resp.StatusCode, url, resp.Status) } n, err := io.Copy(out, resp.Body) if err != nil { log.Fatalf("Error while downloading: %v", err) } err = resp.Body.Close() if err != nil { log.Fatalf("Failed to close body: %v", err) } err = out.Close() if err != nil { log.Fatalf("Failed to close output file: %v", err) } log.Printf("Downloaded %q (%d bytes)", fileName, n) } // run a shell command func run(args ...string) { cmd := exec.Command(args[0], args[1:]...) 
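// (Sketch) run is only used for the -install path in main below, e.g.
//   run("sudo", "dpkg", "--force-bad-version", "-i", fileName)
// Unlike runEnv in bin/cross-compile.go, which captures combined
// output, this streams the child's stdout/stderr directly and aborts
// the whole program on failure.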
cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr err := cmd.Run() if err != nil { log.Fatalf("Failed to run %v: %v", args, err) } } // Untars fileName from srcFile func untar(srcFile, fileName, extractDir string) { f, err := os.Open(srcFile) if err != nil { log.Fatalf("Couldn't open tar: %v", err) } defer func() { err := f.Close() if err != nil { log.Fatalf("Couldn't close tar: %v", err) } }() var in io.Reader = f srcExt := filepath.Ext(srcFile) if srcExt == ".gz" || srcExt == ".tgz" { gzf, err := gzip.NewReader(f) if err != nil { log.Fatalf("Couldn't open gzip: %v", err) } in = gzf } else if srcExt == ".bz2" { in = bzip2.NewReader(f) } tarReader := tar.NewReader(in) for { header, err := tarReader.Next() if err == io.EOF { break } if err != nil { log.Fatalf("Trouble reading tar file: %v", err) } name := header.Name switch header.Typeflag { case tar.TypeReg: baseName := filepath.Base(name) if baseName == fileName { outPath := filepath.Join(extractDir, fileName) out, err := os.OpenFile(outPath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0777) if err != nil { log.Fatalf("Couldn't open output file: %v", err) } n, err := io.Copy(out, tarReader) if err != nil { log.Fatalf("Couldn't write output file: %v", err) } if err = out.Close(); err != nil { log.Fatalf("Couldn't close output: %v", err) } log.Printf("Wrote %s (%d bytes) as %q", fileName, n, outPath) } } } } func main() { flag.Parse() args := flag.Args() if len(args) != 2 { log.Fatalf("Syntax: %s ", os.Args[0]) } project, nameRe := args[0], args[1] if !matchProject.MatchString(project) { log.Fatalf("Project %q must be in form user/project", project) } matchName, err := regexp.Compile(nameRe) if err != nil { log.Fatalf("Invalid regexp for name %q: %v", nameRe, err) } var assetURL, assetName string if *useAPI { assetURL, assetName = getAsset(project, matchName) } else { assetURL, assetName = getAssetFromReleasesPage(project, matchName) } fileName := filepath.Join(os.TempDir(), assetName) getFile(assetURL, fileName) if *install { log.Printf("Installing %s", fileName) run("sudo", "dpkg", "--force-bad-version", "-i", fileName) log.Printf("Installed %s", fileName) } else if *extract != "" { if *bindir == "" { log.Fatalf("Need to set -bindir") } log.Printf("Unpacking %s from %s and installing into %s", *extract, fileName, *bindir) untar(fileName, *extract, *bindir+"/") } } rclone-1.53.3/bin/make_backend_docs.py000077500000000000000000000044751375552240400176270ustar00rootroot00000000000000#!/usr/bin/env python3 """ Make backend documentation """ import os import io import subprocess marker = "{{< rem autogenerated options" start = marker + " start" stop = marker + " stop" end = ">}}" def find_backends(): """Return a list of all backends""" return [ x for x in os.listdir("backend") if x not in ("all",) ] def output_docs(backend, out): """Output documentation for backend options to out""" out.flush() subprocess.check_call(["rclone", "help", "backend", backend], stdout=out) def output_backend_tool_docs(backend, out): """Output documentation for backend tool to out""" out.flush() subprocess.call(["rclone", "backend", "help", backend], stdout=out, stderr=subprocess.DEVNULL) def alter_doc(backend): """Alter the documentation for backend""" doc_file = "docs/content/"+backend+".md" if not os.path.exists(doc_file): raise ValueError("Didn't find doc file %s" % (doc_file,)) new_file = doc_file+"~new~" altered = False with open(doc_file, "r") as in_file, open(new_file, "w") as out_file: in_docs = False for line in in_file: if not in_docs: if start in line: in_docs = True 
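# A sketch of the scheme this loop implements: everything between
#   {{< rem autogenerated options start ... >}}
# and
#   {{< rem autogenerated options stop >}}
# in docs/content/<backend>.md is discarded and regenerated from the
# output of `rclone help backend <backend>` followed by
# `rclone backend help <backend>`.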
start_full = (start + "\" - DO NOT EDIT - instead edit fs.RegInfo in backend/%s/%s.go then run make backenddocs\" " + end + "\n") % (backend, backend) out_file.write(start_full) output_docs(backend, out_file) output_backend_tool_docs(backend, out_file) out_file.write(stop+" "+end+"\n") altered = True if not in_docs: out_file.write(line) if in_docs: if stop in line: in_docs = False os.rename(doc_file, doc_file+"~") os.rename(new_file, doc_file) if not altered: raise ValueError("Didn't find '%s' markers for in %s" % (start, doc_file)) if __name__ == "__main__": failed, success = 0, 0 for backend in find_backends(): try: alter_doc(backend) except Exception as e: print("Failed adding docs for %s backend: %s" % (backend, e)) failed += 1 else: success += 1 print("Added docs for %d backends with %d failures" % (success, failed)) rclone-1.53.3/bin/make_changelog.py000077500000000000000000000130371375552240400171510ustar00rootroot00000000000000#!/usr/bin/python3 """ Generate a markdown changelog for the rclone project """ import os import sys import re import datetime import subprocess from collections import defaultdict IGNORE_RES = [ r"^Add .* to contributors$", r"^Start v\d+\.\d+(\.\d+)?-DEV development$", r"^Version v\d+\.\d+(\.\d+)?$", ] IGNORE_RE = re.compile("(?:" + "|".join(IGNORE_RES) + ")") CATEGORY = re.compile(r"(^[\w/ ]+(?:, *[\w/ ]+)*):\s*(.*)$") backends = [ x for x in os.listdir("backend") if x != "all"] backend_aliases = { "amazon cloud drive" : "amazonclouddrive", "acd" : "amazonclouddrive", "google cloud storage" : "googlecloudstorage", "gcs" : "googlecloudstorage", "azblob" : "azureblob", "mountlib": "mount", "cmount": "mount", "mount/cmount": "mount", } backend_titles = { "amazonclouddrive": "Amazon Cloud Drive", "googlecloudstorage": "Google Cloud Storage", "azureblob": "Azure Blob", "ftp": "FTP", "sftp": "SFTP", "http": "HTTP", "webdav": "WebDAV", } STRIP_FIX_RE = re.compile(r"(\s+-)?\s+((fixes|addresses)\s+)?#\d+", flags=re.I) STRIP_PATH_RE = re.compile(r"^(backend|fs)/") IS_FIX_RE = re.compile(r"\b(fix|fixes)\b", flags=re.I) def make_out(data, indent=""): """Return a out, lines the first being a function for output into the second""" out_lines = [] def out(category, title=None): if title == None: title = category lines = data.get(category) if not lines: return del(data[category]) if indent != "" and len(lines) == 1: out_lines.append(indent+"* " + title+": " + lines[0]) return out_lines.append(indent+"* " + title) for line in lines: out_lines.append(indent+" * " + line) return out, out_lines def process_log(log): """Process the incoming log into a category dict of lists""" by_category = defaultdict(list) for log_line in reversed(log.split("\n")): log_line = log_line.strip() hash, author, timestamp, message = log_line.split("|", 3) message = message.strip() if IGNORE_RE.search(message): continue match = CATEGORY.search(message) categories = "UNKNOWN" if match: categories = match.group(1).lower() message = match.group(2) message = STRIP_FIX_RE.sub("", message) message = message +" ("+author+")" message = message[0].upper()+message[1:] seen = set() for category in categories.split(","): category = category.strip() category = STRIP_PATH_RE.sub("", category) category = backend_aliases.get(category, category) if category in seen: continue by_category[category].append(message) seen.add(category) #print category, hash, author, timestamp, message return by_category def main(): if len(sys.argv) != 3: print("Syntax: %s vX.XX vX.XY" % sys.argv[0], file=sys.stderr) sys.exit(1) version, 
next_version = sys.argv[1], sys.argv[2] log = subprocess.check_output(["git", "log", '''--pretty=format:%H|%an|%aI|%s'''] + [version+".."+next_version]) log = log.decode("utf-8") by_category = process_log(log) # Output backends first so remaining in by_category are core items out, backend_lines = make_out(by_category) out("mount", title="Mount") out("vfs", title="VFS") out("local", title="Local") out("cache", title="Cache") out("crypt", title="Crypt") backend_names = sorted(x for x in list(by_category.keys()) if x in backends) for backend_name in backend_names: if backend_name in backend_titles: backend_title = backend_titles[backend_name] else: backend_title = backend_name.title() out(backend_name, title=backend_title) # Split remaining in by_category into new features and fixes new_features = defaultdict(list) bugfixes = defaultdict(list) for name, messages in by_category.items(): for message in messages: if IS_FIX_RE.search(message): bugfixes[name].append(message) else: new_features[name].append(message) # Output new features out, new_features_lines = make_out(new_features, indent=" ") for name in sorted(new_features.keys()): out(name) # Output bugfixes out, bugfix_lines = make_out(bugfixes, indent=" ") for name in sorted(bugfixes.keys()): out(name) # Read old changlog and split with open("docs/content/changelog.md") as fd: old_changelog = fd.read() heading = "# Changelog" i = old_changelog.find(heading) if i < 0: raise AssertionError("Couldn't find heading in old changelog") i += len(heading) old_head, old_tail = old_changelog[:i], old_changelog[i:] # Update the build date old_head = re.sub(r"\d\d\d\d-\d\d-\d\d", str(datetime.date.today()), old_head) # Output combined changelog with new part sys.stdout.write(old_head) today = datetime.date.today() new_features = "\n".join(new_features_lines) bugfixes = "\n".join(bugfix_lines) backend_changes = "\n".join(backend_lines) sys.stdout.write(""" ## %(next_version)s - %(today)s [See commits](https://github.com/rclone/rclone/compare/%(version)s...%(next_version)s) * New backends * New commands * New Features %(new_features)s * Bug Fixes %(bugfixes)s %(backend_changes)s""" % locals()) sys.stdout.write(old_tail) if __name__ == "__main__": main() rclone-1.53.3/bin/make_manual.py000077500000000000000000000116751375552240400165050ustar00rootroot00000000000000#!/usr/bin/env python3 """ Make single page versions of the documentation for release and conversion into man pages etc. 
""" import os import re import time from datetime import datetime docpath = "docs/content" outfile = "MANUAL.md" # Order to add docs segments to make outfile docs = [ "_index.md", "install.md", "docs.md", "remote_setup.md", "filtering.md", "gui.md", "rc.md", "overview.md", "flags.md", # Keep these alphabetical by full name "fichier.md", "alias.md", "amazonclouddrive.md", "s3.md", "b2.md", "box.md", "cache.md", "chunker.md", "sharefile.md", "crypt.md", "dropbox.md", "ftp.md", "googlecloudstorage.md", "drive.md", "googlephotos.md", "http.md", "hubic.md", "jottacloud.md", "koofr.md", "mailru.md", "mega.md", "memory.md", "azureblob.md", "onedrive.md", "opendrive.md", "qingstor.md", "swift.md", "pcloud.md", "premiumizeme.md", "putio.md", "seafile.md", "sftp.md", "sugarsync.md", "tardigrade.md", "union.md", "webdav.md", "yandex.md", "local.md", "changelog.md", "bugs.md", "faq.md", "licence.md", "authors.md", "contact.md", ] # Order to put the commands in - any not on here will be in sorted order commands_order = [ "rclone_config.md", "rclone_copy.md", "rclone_sync.md", "rclone_move.md", "rclone_delete.md", "rclone_purge.md", "rclone_mkdir.md", "rclone_rmdir.md", "rclone_check.md", "rclone_ls.md", "rclone_lsd.md", "rclone_lsl.md", "rclone_md5sum.md", "rclone_sha1sum.md", "rclone_size.md", "rclone_version.md", "rclone_cleanup.md", "rclone_dedupe.md", ] # Docs which aren't made into outfile ignore_docs = [ "downloads.md", "privacy.md", "donate.md", ] def read_doc(doc): """Read file as a string""" path = os.path.join(docpath, doc) with open(path) as fd: contents = fd.read() parts = contents.split("---\n", 2) if len(parts) != 3: raise ValueError("Couldn't find --- markers: found %d parts" % len(parts)) contents = parts[2].strip()+"\n\n" # Remove icons contents = re.sub(r'}} contents = re.sub(r'\{\{<\s*img\s+(.*?)>\}\}', r"", contents) # Make any img tags absolute contents = re.sub(r'(\}\}', "- [Donate.](https://rclone.org/donate/)", contents) # Interpret provider shortcode # {{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}} contents = re.sub(r'\{\{<\s*provider.*?name="(.*?)".*?>\}\}', r"- \1", contents) # Remove remaining shortcodes contents = re.sub(r'\{\{<.*?>\}\}', r"", contents) contents = re.sub(r'\{\{%.*?%\}\}', r"", contents) return contents def check_docs(docpath): """Check all the docs are in docpath""" files = set(f for f in os.listdir(docpath) if f.endswith(".md")) files -= set(ignore_docs) docs_set = set(docs) if files == docs_set: return print("Files on disk but not in docs variable: %s" % ", ".join(files - docs_set)) print("Files in docs variable but not on disk: %s" % ", ".join(docs_set - files)) raise ValueError("Missing files") def read_command(command): doc = read_doc("commands/"+command) doc = re.sub(r"### Options inherited from parent commands.*$", "", doc, 0, re.S) doc = doc.strip()+"\n" return doc def read_commands(docpath): """Reads the commands an makes them into a single page""" files = set(f for f in os.listdir(docpath + "/commands") if f.endswith(".md")) docs = [] for command in commands_order: docs.append(read_command(command)) files.remove(command) for command in sorted(files): if command != "rclone.md": docs.append(read_command(command)) return "\n".join(docs) def main(): check_docs(docpath) command_docs = read_commands(docpath).replace("\\", "\\\\") # escape \ so we can use command_docs in re.sub build_date = datetime.utcfromtimestamp( int(os.environ.get('SOURCE_DATE_EPOCH', time.time()))) with open(outfile, "w") as out: out.write("""\ %% 
rclone(1) User Manual %% Nick Craig-Wood %% %s """ % build_date.strftime("%b %d, %Y")) for doc in docs: contents = read_doc(doc) # Substitute the commands into doc.md if doc == "docs.md": contents = re.sub(r"The main rclone commands.*?for the full list.", command_docs, contents, 0, re.S) out.write(contents) print("Written '%s'" % outfile) if __name__ == "__main__": main() rclone-1.53.3/bin/make_rc_docs.sh000077500000000000000000000012411375552240400166120ustar00rootroot00000000000000#!/bin/bash # Insert the rc docs into docs/content/rc.md set -e go install mkdir -p /tmp/rclone/cache_test mkdir -p /tmp/rclone/rc_mount export RCLONE_CONFIG_RCDOCS_TYPE=cache export RCLONE_CONFIG_RCDOCS_REMOTE=/tmp/rclone/cache_test rclone -q --rc mount rcdocs: /tmp/rclone/rc_mount & sleep 0.5 rclone rc > /tmp/rclone/z.md fusermount -u -z /tmp/rclone/rc_mount > /dev/null 2>&1 || umount /tmp/rclone/rc_mount awk ' BEGIN {p=1} /^\{\{< rem autogenerated start/ {print;system("cat /tmp/rclone/z.md");p=0} /^\{\{< rem autogenerated stop/ {p=1} p' docs/content/rc.md > /tmp/rclone/rc.md mv /tmp/rclone/rc.md docs/content/rc.md rm -rf /tmp/rclone rclone-1.53.3/bin/make_test_files.go000066400000000000000000000074321375552240400173370ustar00rootroot00000000000000// +build ignore // Build a directory structure with the required number of files in // // Run with go run make_test_files.go [flag] package main import ( cryptrand "crypto/rand" "flag" "io" "log" "math/rand" "os" "path/filepath" ) var ( // Flags numberOfFiles = flag.Int("n", 1000, "Number of files to create") averageFilesPerDirectory = flag.Int("files-per-directory", 10, "Average number of files per directory") maxDepth = flag.Int("max-depth", 10, "Maximum depth of directory heirachy") minFileSize = flag.Int64("min-size", 0, "Minimum size of file to create") maxFileSize = flag.Int64("max-size", 100, "Maximum size of files to create") minFileNameLength = flag.Int("min-name-length", 4, "Minimum size of file to create") maxFileNameLength = flag.Int("max-name-length", 12, "Maximum size of files to create") directoriesToCreate int totalDirectories int fileNames = map[string]struct{}{} // keep a note of which file name we've used already ) // randomString create a random string for test purposes func randomString(n int) string { const ( vowel = "aeiou" consonant = "bcdfghjklmnpqrstvwxyz" digit = "0123456789" ) pattern := []string{consonant, vowel, consonant, vowel, consonant, vowel, consonant, digit} out := make([]byte, n) p := 0 for i := range out { source := pattern[p] p = (p + 1) % len(pattern) out[i] = source[rand.Intn(len(source))] } return string(out) } // fileName creates a unique random file or directory name func fileName() (name string) { for { length := rand.Intn(*maxFileNameLength-*minFileNameLength) + *minFileNameLength name = randomString(length) if _, found := fileNames[name]; !found { break } } fileNames[name] = struct{}{} return name } // dir is a directory in the directory heirachy being built up type dir struct { name string depth int children []*dir parent *dir } // Create a random directory heirachy under d func (d *dir) createDirectories() { for totalDirectories < directoriesToCreate { newDir := &dir{ name: fileName(), depth: d.depth + 1, parent: d, } d.children = append(d.children, newDir) totalDirectories++ switch rand.Intn(4) { case 0: if d.depth < *maxDepth { newDir.createDirectories() } case 1: return } } return } // list the directory heirachy func (d *dir) list(path string, output []string) []string { dirPath := filepath.Join(path, d.name) 
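// Record this directory's path, then recurse into each child, flattening the whole tree into the output slice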
output = append(output, dirPath) for _, subDir := range d.children { output = subDir.list(dirPath, output) } return output } // writeFile writes a random file at dir/name func writeFile(dir, name string) { err := os.MkdirAll(dir, 0777) if err != nil { log.Fatalf("Failed to make directory %q: %v", dir, err) } path := filepath.Join(dir, name) fd, err := os.Create(path) if err != nil { log.Fatalf("Failed to open file %q: %v", path, err) } size := rand.Int63n(*maxFileSize-*minFileSize) + *minFileSize _, err = io.CopyN(fd, cryptrand.Reader, size) if err != nil { log.Fatalf("Failed to write %v bytes to file %q: %v", size, path, err) } err = fd.Close() if err != nil { log.Fatalf("Failed to close file %q: %v", path, err) } } func main() { flag.Parse() args := flag.Args() if len(args) != 1 { log.Fatalf("Require 1 directory argument") } outputDirectory := args[0] log.Printf("Output dir %q", outputDirectory) directoriesToCreate = *numberOfFiles / *averageFilesPerDirectory log.Printf("directoriesToCreate %v", directoriesToCreate) root := &dir{name: outputDirectory, depth: 1} for totalDirectories < directoriesToCreate { root.createDirectories() } dirs := root.list("", []string{}) for i := 0; i < *numberOfFiles; i++ { dir := dirs[rand.Intn(len(dirs))] writeFile(dir, fileName()) } } rclone-1.53.3/bin/nfpm.yaml000066400000000000000000000013471375552240400154750ustar00rootroot00000000000000name: "rclone" arch: "{{.Arch}}" platform: "linux" version: "{{.Version}}" section: "default" priority: "extra" provides: - rclone maintainer: "Nick Craig-Wood <nick@craig-wood.com>" description: | Rclone - "rsync for cloud storage" is a command line program to sync files and directories to and from most cloud providers. It can also mount, tree, ncdu and lots of other useful things. vendor: "rclone" homepage: "https://rclone.org" license: "MIT" # No longer supported? See https://github.com/goreleaser/nfpm/issues/144 # bindir: "/usr/bin" files: ./rclone: "/usr/bin/rclone" ./README.html: "/usr/share/doc/rclone/README.html" ./README.txt: "/usr/share/doc/rclone/README.txt" ./rclone.1: "/usr/share/man/man1/rclone.1" rclone-1.53.3/bin/not-in-stable.go000077500000000000000000000035721375552240400166570ustar00rootroot00000000000000// This shows the commits not yet in the stable branch package main import ( "bytes" "flag" "fmt" "io/ioutil" "log" "os" "os/exec" "regexp" ) // version=$(sed <VERSION -e 's/\.[0-9]+$//') // echo $version // git log --oneline ${version}.0..${version}-stable | cut -c11- | sort > /tmp/in-stable // git log --oneline ${version}.0..master | cut -c11- | sort > /tmp/in-master // // comm -23 /tmp/in-master /tmp/in-stable var logRe = regexp.MustCompile(`^([0-9a-f]{4,}) (.*)$`) // readCommits reads the commits between from and to, returning a map of commit message to hash and the list of messages func readCommits(from, to string) (logMap map[string]string, logs []string) { cmd := exec.Command("git", "log", "--oneline", from+".."+to) out, err := cmd.Output() if err != nil { log.Fatalf("failed to run git log: %v", err) } logMap = map[string]string{} logs = []string{} for _, line := range bytes.Split(out, []byte{'\n'}) { if len(line) == 0 { continue } match := logRe.FindSubmatch(line) if match == nil { log.Fatalf("failed to parse line: %q", line) } var hash, logMessage = string(match[1]), string(match[2]) logMap[logMessage] = hash logs = append(logs, logMessage) } return logMap, logs } func main() { flag.Parse() args := flag.Args() if len(args) != 0 { log.Fatalf("Syntax: %s", os.Args[0]) } versionBytes, err := ioutil.ReadFile("VERSION") if err != nil { log.Fatalf("Failed to read version: %v", err) } i := bytes.LastIndexByte(versionBytes, '.') version := string(versionBytes[:i]) log.Printf("Finding commits not in stable %s", version) masterMap, masterLogs := readCommits(version+".0", "master") stableMap, _ := readCommits(version+".0", version+"-stable") for _, logMessage := range masterLogs { // Commit found in stable already if _, found := stableMap[logMessage]; found { continue } hash := masterMap[logMessage] fmt.Printf("%s %s\n", hash, logMessage) } } rclone-1.53.3/bin/test-all-commits-compile.sh000077500000000000000000000016251375552240400210330ustar00rootroot00000000000000#!/bin/sh # This tests rclone compiles for all the commits in the branch # # It assumes that the branch is rebased onto master and checks all the commits from branch root to master # # Adapted from: https://blog.ploeh.dk/2013/10/07/verifying-every-single-commit-in-a-git-branch/ BRANCH=$(git rev-parse --abbrev-ref HEAD) if [ "$BRANCH" = "master" ]; then echo "Don't run on master branch" exit 1 fi COMMITS=$(git log --oneline --reverse master.. | cut -d " " -f 1) CODE=0 for COMMIT in $COMMITS do git checkout $COMMIT # run-tests echo "------------------------------------------------------------" go install ./... if [ $?
-eq 0 ] then echo $COMMIT - passed else echo $COMMIT - failed git checkout ${BRANCH} exit fi echo "------------------------------------------------------------" done git checkout ${BRANCH} echo "All OK" rclone-1.53.3/bin/test-repeat-vfs.sh000077500000000000000000000007431375552240400172400ustar00rootroot00000000000000#!/bin/bash # Thrash the VFS tests set -e # Optionally set the iterations with the first parameter iterations=${1:-100} base=$(dirname $(dirname $(realpath "$0"))) echo ${base} run=${base}/bin/test-repeat.sh echo ${run} testdirs=" vfs vfs/vfscache vfs/vfscache/writeback vfs/vfscache/downloaders cmd/cmount " for testdir in ${testdirs}; do echo "Testing ${testdir} with ${iterations} iterations" cd ${base}/${testdir} ${run} -i=${iterations} -race -tags=cmount done rclone-1.53.3/bin/test-repeat.sh000077500000000000000000000041441375552240400164430ustar00rootroot00000000000000#!/bin/bash # defaults buildflags="" binary="test.binary" flags="" iterations="100" logprefix="test.out" help=" This runs go tests repeatedly logging all the failures to separate files. It is very useful for debugging with printf for tests which don't fail very often. Syntax: $0 [flags] Note that flags for 'go test' need to be expanded, eg '-test.v' instead of just '-v'. '-race' does not need to be expanded. Flags this script understands -h, --help show this help -i=N, --iterations=N do N iterations (default ${iterations}) -b=NAME,--binary=NAME call the output binary NAME (default ${binary}) -l=NAME,--logprefix=NAME the log files generated will start with NAME (default ${logprefix}) -race build the binary with race testing enabled -tags=TAGS build the binary with the tags supplied Any other flags will be past to go test. Example $0 flags -race -test.run 'TestRWFileHandleOpenTests' " if [[ "$@" == "" ]]; then echo "${help}" exit 1 fi for i in "$@" do case $i in -h|--help) echo "${help}" exit 1 ;; -b=*|--binary=*) binary="${i#*=}" shift # past argument=value ;; -l=*|--log-prefix=*) logprefix="${i#*=}" shift # past argument=value ;; -i=*|--iterations=*) iterations="${i#*=}" shift # past argument=value ;; -race|--race|-tags=*|--tags=*) buildflags="${buildflags} $i" shift # past argument with no value ;; *) # unknown option flags="${flags} ${i#*=}" shift ;; esac done echo -n "Compiling ${buildflags} ${binary} ... " go test ${buildflags} -c -o "${binary}" || { echo "build failed" exit 1 } echo "OK" for i in $(seq -w ${iterations}); do echo -n "Test ${buildflags} ${flags} ${i} " log="${logprefix}${i}.log" ./${binary} ${flags} > ${log} 2>&1 ok=$? 
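# A zero exit status means this iteration passed - drop its log so only failing runs leave logs behind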
if [[ ${ok} == 0 ]]; then echo "OK" rm ${log} else echo "FAIL - log in ${log}" fi done rclone-1.53.3/bin/test_independence.go000066400000000000000000000026071375552240400176600ustar00rootroot00000000000000// +build ignore // Test that the tests in the suite passed in are independent package main import ( "flag" "log" "os" "os/exec" "regexp" ) var matchLine = regexp.MustCompile(`(?m)^=== RUN\s*(TestIntegration/\S*)\s*$`) // run the test passed in and grep out the test names func findTests(packageToTest string) (tests []string) { cmd := exec.Command("go", "test", "-v", packageToTest) out, err := cmd.CombinedOutput() if err != nil { _, _ = os.Stderr.Write(out) log.Fatal(err) } results := matchLine.FindAllSubmatch(out, -1) if results == nil { log.Fatal("No tests found") } for _, line := range results { tests = append(tests, string(line[1])) } return tests } // run the test passed in with the -run passed in func runTest(packageToTest string, testName string) { cmd := exec.Command("go", "test", "-v", packageToTest, "-run", "^"+testName+"$") out, err := cmd.CombinedOutput() if err != nil { log.Printf("%s FAILED ------------------", testName) _, _ = os.Stderr.Write(out) log.Printf("%s FAILED ------------------", testName) } else { log.Printf("%s OK", testName) } } func main() { flag.Parse() args := flag.Args() if len(args) != 1 { log.Fatalf("Syntax: %s <package>", os.Args[0]) } packageToTest := args[0] testNames := findTests(packageToTest) // fmt.Printf("%s\n", testNames) for _, testName := range testNames { runTest(packageToTest, testName) } } rclone-1.53.3/bin/test_proxy.py000077500000000000000000000011351375552240400164410ustar00rootroot00000000000000#!/usr/bin/env python3 """ A demo proxy for rclone serve sftp/webdav/ftp etc This takes the incoming user/pass and converts it into an sftp backend running on localhost.
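It reads a JSON object with "user" and "pass" keys on stdin and writes the corresponding sftp backend definition as JSON on stdout, with the password field marked for obscuring.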
""" import sys import json def main(): i = json.load(sys.stdin) o = { "type": "sftp", # type of backend "_root": "", # root of the fs "_obscure": "pass", # comma sep list of fields to obscure "user": i["user"], "pass": i["pass"], "host": "127.0.0.1", } json.dump(o, sys.stdout, indent="\t") if __name__ == "__main__": main() rclone-1.53.3/bin/tidy-beta000077500000000000000000000010011375552240400154440ustar00rootroot00000000000000#!/bin/sh # Use this script after a release to tidy the betas version="$1" if [ "$version" = "" ]; then echo "Syntax: $0 [delete]" exit 1 fi dry_run="--dry-run" if [ "$2" = "delete" ]; then dry_run="" else echo "Running in --dry-run mode" echo "Use '$0 $version delete' to actually delete files" fi rclone ${dry_run} -vv -P --checkers 16 --transfers 16 delete \ --include "/${version}**" \ --include "/branch/${version}**" \ memstore:beta-rclone-org rclone-1.53.3/bin/travis.rclone.conf000066400000000000000000000005621375552240400173070ustar00rootroot00000000000000# Encrypted rclone configuration File RCLONE_ENCRYPT_V0: XIkAr3p+y+zai82cHFH8UoW1y1XTe6dpTzo/g4uSwqI2pfsnSSJ4JbAsRZ9nGVpx3NzROKEewlusVHNokiA4/nD4NbT+2DJrpMLg/OtLREICfuRk3tVWPKLGsmA+TLKU+IfQMO4LfrrCe2DF/lW0qA5Xu16E0Vn++jNhbwW2oB+JTkaGka8Ae3CyisM/3NUGnCOG/yb5wLH7ybUstNYPHsNFCiU1brFXQ4DNIbUFMmca+5S44vrOWvhp9QijQXlG7/JjwrkqbB/LK2gMJPTuhY2OW+4tRw1IoCXbWmwJXv5xmhPqanW92A==rclone-1.53.3/bin/update-authors.py000077500000000000000000000023101375552240400171620ustar00rootroot00000000000000#!/usr/bin/env python3 """ Update the authors.md file with the authors from the git log """ import re import subprocess AUTHORS = "docs/content/authors.md" IGNORE = "bin/.ignore-emails" def load(filename): """ returns a set of emails already in the file """ with open(filename) as fd: authors = fd.read() return set(re.findall(r"<(.*?)>", authors)) def add_email(name, email): """ adds the email passed in to the end of authors.md """ print("Adding %s <%s>" % (name, email)) with open(AUTHORS, "a+") as fd: print(" * %s <%s>" % (name, email), file=fd) subprocess.check_call(["git", "commit", "-m", "Add %s to contributors" % name, AUTHORS]) def main(): out = subprocess.check_output(["git", "log", '--reverse', '--format=%an|%ae', "master"]) out = out.decode("utf-8") ignored = load(IGNORE) previous = load(AUTHORS) previous.update(ignored) for line in out.split("\n"): line = line.strip() if line == "": continue name, email = line.split("|") if email in previous: continue previous.add(email) add_email(name, email) if __name__ == "__main__": main() rclone-1.53.3/bin/upload-github000077500000000000000000000020421375552240400163340ustar00rootroot00000000000000#!/usr/bin/env bash # # Upload a release # # Needs github-release from https://github.com/aktau/github-release set -e REPO="rclone" if [ "$1" == "" ]; then echo "Syntax: $0 Version" exit 1 fi VERSION="$1" if [ "$GITHUB_USER" == "" ]; then echo 1>&2 "Need GITHUB_USER environment variable" exit 1 fi if [ "$GITHUB_TOKEN" == "" ]; then echo 1>&2 "Need GITHUB_TOKEN environment variable" exit 1 fi echo "Making release ${VERSION}" github-release release \ --repo ${REPO} \ --tag ${VERSION} \ --name "rclone" \ --description "Rclone - rsync for cloud storage. Sync files to and from many cloud storage providers." 
for build in `ls build | grep -v current | grep -v testbuilds`; do echo "Uploading ${build}" base="${build%.*}" parts=(${base//-/ }) os=${parts[3]} arch=${parts[4]} github-release upload \ --repo ${REPO} \ --tag ${VERSION} \ --name "${build}" \ --file build/${build} done github-release info \ --repo ${REPO} \ --tag ${VERSION} echo "Done" rclone-1.53.3/bin/win-build.bat000066400000000000000000000005541375552240400162320ustar00rootroot00000000000000@echo off echo Setting environment variables for mingw+WinFsp compile set GOPATH=Z:\go rem set PATH=C:\Program Files\mingw-w64\i686-7.1.0-win32-dwarf-rt_v5-rev0\mingw32\bin;%PATH% set PATH=C:\Program Files\mingw-w64\x86_64-8.1.0-win32-seh-rt_v6-rev0\mingw64\bin;%GOPATH%/bin;%PATH% set CPATH=C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse rclone-1.53.3/cmd/000077500000000000000000000000001375552240400136375ustar00rootroot00000000000000rclone-1.53.3/cmd/about/000077500000000000000000000000001375552240400147515ustar00rootroot00000000000000rclone-1.53.3/cmd/about/about.go000066400000000000000000000055201375552240400164140ustar00rootroot00000000000000package about import ( "context" "encoding/json" "fmt" "os" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/spf13/cobra" ) var ( jsonOutput bool fullOutput bool ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &jsonOutput, "json", "", false, "Format output as JSON") flags.BoolVarP(cmdFlags, &fullOutput, "full", "", false, "Full numbers instead of SI units") } // printValue formats uv to be output func printValue(what string, uv *int64) { what += ":" if uv == nil { return } var val string if fullOutput { val = fmt.Sprintf("%d", *uv) } else { val = fs.SizeSuffix(*uv).String() } fmt.Printf("%-9s%v\n", what, val) } var commandDefinition = &cobra.Command{ Use: "about remote:", Short: `Get quota information from the remote.`, Long: ` Get quota information from the remote, like bytes used/free/quota and bytes used in the trash. Not supported by all remotes. This will print to stdout something like this: Total: 17G Used: 7.444G Free: 1.315G Trashed: 100.000M Other: 8.241G Where the fields are: * Total: total size available. * Used: total size used * Free: total amount this user could upload. * Trashed: total amount in the trash * Other: total amount in other storage (eg Gmail, Google Photos) * Objects: total number of objects in the storage Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted. 
Use the --full flag to see the numbers written out in full, eg Total: 18253611008 Used: 7993453766 Free: 1411001220 Trashed: 104857602 Other: 8849156022 Use the --json flag for a computer readable output, eg { "total": 18253611008, "used": 7993453766, "trashed": 104857602, "other": 8849156022, "free": 1411001220 } `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) f := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { doAbout := f.Features().About if doAbout == nil { return errors.Errorf("%v doesn't support about", f) } u, err := doAbout(context.Background()) if err != nil { return errors.Wrap(err, "About call failed") } if u == nil { return errors.New("nil usage returned") } if jsonOutput { out := json.NewEncoder(os.Stdout) out.SetIndent("", "\t") return out.Encode(u) } printValue("Total", u.Total) printValue("Used", u.Used) printValue("Free", u.Free) printValue("Trashed", u.Trashed) printValue("Other", u.Other) printValue("Objects", u.Objects) return nil }) }, } rclone-1.53.3/cmd/all/000077500000000000000000000000001375552240400144075ustar00rootroot00000000000000rclone-1.53.3/cmd/all/all.go000066400000000000000000000043611375552240400155120ustar00rootroot00000000000000// Package all imports all the commands package all import ( // Active commands _ "github.com/rclone/rclone/cmd" _ "github.com/rclone/rclone/cmd/about" _ "github.com/rclone/rclone/cmd/authorize" _ "github.com/rclone/rclone/cmd/backend" _ "github.com/rclone/rclone/cmd/cachestats" _ "github.com/rclone/rclone/cmd/cat" _ "github.com/rclone/rclone/cmd/check" _ "github.com/rclone/rclone/cmd/cleanup" _ "github.com/rclone/rclone/cmd/cmount" _ "github.com/rclone/rclone/cmd/config" _ "github.com/rclone/rclone/cmd/copy" _ "github.com/rclone/rclone/cmd/copyto" _ "github.com/rclone/rclone/cmd/copyurl" _ "github.com/rclone/rclone/cmd/cryptcheck" _ "github.com/rclone/rclone/cmd/cryptdecode" _ "github.com/rclone/rclone/cmd/dbhashsum" _ "github.com/rclone/rclone/cmd/dedupe" _ "github.com/rclone/rclone/cmd/delete" _ "github.com/rclone/rclone/cmd/deletefile" _ "github.com/rclone/rclone/cmd/genautocomplete" _ "github.com/rclone/rclone/cmd/gendocs" _ "github.com/rclone/rclone/cmd/hashsum" _ "github.com/rclone/rclone/cmd/info" _ "github.com/rclone/rclone/cmd/link" _ "github.com/rclone/rclone/cmd/listremotes" _ "github.com/rclone/rclone/cmd/ls" _ "github.com/rclone/rclone/cmd/lsd" _ "github.com/rclone/rclone/cmd/lsf" _ "github.com/rclone/rclone/cmd/lsjson" _ "github.com/rclone/rclone/cmd/lsl" _ "github.com/rclone/rclone/cmd/md5sum" _ "github.com/rclone/rclone/cmd/memtest" _ "github.com/rclone/rclone/cmd/mkdir" _ "github.com/rclone/rclone/cmd/mount" _ "github.com/rclone/rclone/cmd/mount2" _ "github.com/rclone/rclone/cmd/move" _ "github.com/rclone/rclone/cmd/moveto" _ "github.com/rclone/rclone/cmd/ncdu" _ "github.com/rclone/rclone/cmd/obscure" _ "github.com/rclone/rclone/cmd/purge" _ "github.com/rclone/rclone/cmd/rc" _ "github.com/rclone/rclone/cmd/rcat" _ "github.com/rclone/rclone/cmd/rcd" _ "github.com/rclone/rclone/cmd/reveal" _ "github.com/rclone/rclone/cmd/rmdir" _ "github.com/rclone/rclone/cmd/rmdirs" _ "github.com/rclone/rclone/cmd/serve" _ "github.com/rclone/rclone/cmd/settier" _ "github.com/rclone/rclone/cmd/sha1sum" _ "github.com/rclone/rclone/cmd/size" _ "github.com/rclone/rclone/cmd/sync" _ "github.com/rclone/rclone/cmd/touch" _ "github.com/rclone/rclone/cmd/tree" _ "github.com/rclone/rclone/cmd/version" ) 
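The blank imports above are the whole mechanism: each command package registers itself with the root command from its own init function, so importing it for side effects is all cmd/all needs to do. Below is a minimal sketch of that registration pattern (illustrative only - the frob package and command are hypothetical, but the init/cmd.Root.AddCommand shape mirrors the real command packages such as authorize and backend which follow):

package frob // hypothetical example, not part of the tree

import (
	"fmt"

	"github.com/rclone/rclone/cmd"
	"github.com/spf13/cobra"
)

func init() {
	// Runs when the package is imported, which is why cmd/all only
	// needs blank imports to make every command available.
	cmd.Root.AddCommand(commandDefinition)
}

var commandDefinition = &cobra.Command{
	Use:   "frob remote:path",
	Short: `Frob the remote (hypothetical).`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fmt.Println("would frob", args[0])
	},
}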
rclone-1.53.3/cmd/authorize/000077500000000000000000000000001375552240400156515ustar00rootroot00000000000000rclone-1.53.3/cmd/authorize/authorize.go000066400000000000000000000016151375552240400202150ustar00rootroot00000000000000package authorize import ( "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/flags" "github.com/spf13/cobra" ) var ( noAutoBrowser bool ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &noAutoBrowser, "auth-no-open-browser", "", false, "Do not automatically open auth link in default browser") } var commandDefinition = &cobra.Command{ Use: "authorize", Short: `Remote authorization.`, Long: ` Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config. Use --auth-no-open-browser to prevent rclone from opening the auth link in the default browser automatically.`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 3, command, args) config.Authorize(args, noAutoBrowser) }, } rclone-1.53.3/cmd/backend/000077500000000000000000000000001375552240400152265ustar00rootroot00000000000000rclone-1.53.3/cmd/backend/backend.go000066400000000000000000000106451375552240400171500ustar00rootroot00000000000000package backend import ( "context" "encoding/json" "fmt" "os" "sort" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/rc" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var ( options []string useJSON bool ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.StringArrayVarP(cmdFlags, &options, "option", "o", options, "Option in the form name=value or name.") flags.BoolVarP(cmdFlags, &useJSON, "json", "", useJSON, "Always output in JSON format.") } var commandDefinition = &cobra.Command{ Use: "backend <command> remote:path [opts] <args>", Short: `Run a backend specific command.`, Long: ` This runs a backend specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions. You can discover what commands a backend implements by using rclone backend help remote: rclone backend help <backendname> You can also discover information about the backend using the features command (see [operations/fsinfo](/rc/#operations/fsinfo) in the remote control docs for more info). rclone backend features remote: Pass options to the backend command with -o. This should be key=value or key, eg: rclone backend stats remote:path -o format=json -o long Pass arguments to the backend by placing them at the end of the line rclone backend cleanup remote:path file1 file2 file3 Note: to run these commands on a running backend, see [backend/command](/rc/#backend/command) in the rc docs. `, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(2, 1e6, command, args) name, remote := args[0], args[1] cmd.Run(false, false, command, func() error { // show help if remote is a backend name if name == "help" { fsInfo, err := fs.Find(remote) if err == nil { return showHelp(fsInfo) } } // Create remote fsInfo, configName, fsPath, config, err := fs.ConfigFs(remote) if err != nil { return err } f, err := fsInfo.NewFs(configName, fsPath, config) if err != nil { return err } // Run the command var out interface{} switch name { case "help": return showHelp(fsInfo) case "features": out = operations.GetFsInfo(f) default: doCommand := f.Features().Command if doCommand == nil { return errors.Errorf("%v: doesn't support backend commands", f) } arg := args[2:] opt := rc.ParseOptions(options) out, err = doCommand(context.Background(), name, arg, opt) } if err != nil { return errors.Wrapf(err, "command %q failed", name) } // Output the result writeJSON := false if useJSON { writeJSON = true } else { switch x := out.(type) { case nil: case string: fmt.Println(out) case []string: for _, line := range x { fmt.Println(line) } default: writeJSON = true } } if writeJSON { // Write indented JSON to the output enc := json.NewEncoder(os.Stdout) enc.SetIndent("", "\t") err = enc.Encode(out) if err != nil { return errors.Wrap(err, "failed to write JSON") } } return nil }) return nil }, } // show help for a backend func showHelp(fsInfo *fs.RegInfo) error { cmds := fsInfo.CommandHelp name := fsInfo.Name if len(cmds) == 0 { return errors.Errorf("%s backend has no commands", name) } fmt.Printf("### Backend commands\n\n") fmt.Printf(`Here are the commands specific to the %s backend. Run them with rclone backend COMMAND remote: The help below will explain what arguments each command takes. See [the "rclone backend" command](/commands/rclone_backend/) for more info on how to pass options and arguments. These can be run on a running backend using the rc command [backend/command](/rc/#backend/command). `, name) for _, cmd := range cmds { fmt.Printf("#### %s\n\n", cmd.Name) fmt.Printf("%s\n\n", cmd.Short) fmt.Printf(" rclone backend %s remote: [options] [<arguments>+]\n\n", cmd.Name) if cmd.Long != "" { fmt.Printf("%s\n\n", cmd.Long) } if len(cmd.Opts) != 0 { fmt.Printf("Options:\n\n") ks := []string{} for k := range cmd.Opts { ks = append(ks, k) } sort.Strings(ks) for _, k := range ks { v := cmd.Opts[k] fmt.Printf("- %q: %s\n", k, v) } fmt.Printf("\n") } } return nil } rclone-1.53.3/cmd/cachestats/000077500000000000000000000000001375552240400157615ustar00rootroot00000000000000rclone-1.53.3/cmd/cachestats/cachestats.go000066400000000000000000000023161375552240400204340ustar00rootroot00000000000000// +build !plan9,!js package cachestats import ( "encoding/json" "fmt" "github.com/pkg/errors" "github.com/rclone/rclone/backend/cache" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "cachestats source:", Short: `Print cache stats for a remote`, Long: ` Print cache stats for a remote in JSON format `, Hidden: true, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fs.Logf(nil, `"rclone cachestats" is deprecated, use "rclone backend stats %s" instead`, args[0]) fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { var fsCache *cache.Fs fsCache, ok := fsrc.(*cache.Fs) if !ok { unwrap := fsrc.Features().UnWrap if unwrap != nil { fsCache, ok = unwrap().(*cache.Fs) } if !ok { return errors.Errorf("%s: is not a cache remote", fsrc.Name()) } } m, err := fsCache.Stats() if err != nil { return err } raw, err := json.MarshalIndent(m, "", " ") if err != nil { return err } fmt.Printf("%s\n", string(raw)) return nil }) }, } rclone-1.53.3/cmd/cachestats/cachestats_unsupported.go000066400000000000000000000002251375552240400231010ustar00rootroot00000000000000// Build for cache for unsupported platforms to stop go complaining // about "no buildable Go source files " // +build plan9 js package cachestats rclone-1.53.3/cmd/cat/000077500000000000000000000000001375552240400144065ustar00rootroot00000000000000rclone-1.53.3/cmd/cat/cat.go000066400000000000000000000044131375552240400155060ustar00rootroot00000000000000package cat import ( "context" "io" "io/ioutil" "log" "os" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) // Globals var ( head = int64(0) tail = int64(0) offset = int64(0) count = int64(-1) discard = false ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.Int64VarP(cmdFlags, &head, "head", "", head, "Only print the first N characters.") flags.Int64VarP(cmdFlags, &tail, "tail", "", tail, "Only print the last N characters.") flags.Int64VarP(cmdFlags, &offset, "offset", "", offset, "Start printing at offset N (or from end if -ve).") flags.Int64VarP(cmdFlags, &count, "count", "", count, "Only print N characters.") flags.BoolVarP(cmdFlags, &discard, "discard", "", discard, "Discard the output instead of printing.") } var commandDefinition = &cobra.Command{ Use: "cat remote:path", Short: `Concatenates any files and sends them to stdout.`, Long: ` rclone cat sends any files to standard output. You can use it like this to output a single file rclone cat remote:path/to/file Or like this to output any file in dir or its subdirectories.
rclone cat remote:path/to/dir Or like this to output any .txt files in dir or its subdirectories. rclone --include "*.txt" cat remote:path/to/dir Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1. `, Run: func(command *cobra.Command, args []string) { usedOffset := offset != 0 || count >= 0 usedHead := head > 0 usedTail := tail > 0 if usedHead && usedTail || usedHead && usedOffset || usedTail && usedOffset { log.Fatalf("Can only use one of --head, --tail or --offset with --count") } if head > 0 { offset = 0 count = head } if tail > 0 { offset = -tail count = -1 } cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) var w io.Writer = os.Stdout if discard { w = ioutil.Discard } cmd.Run(false, false, command, func() error { return operations.Cat(context.Background(), fsrc, w, offset, count) }) }, } rclone-1.53.3/cmd/check/000077500000000000000000000000001375552240400147145ustar00rootroot00000000000000rclone-1.53.3/cmd/check/check.go000066400000000000000000000121221375552240400163160ustar00rootroot00000000000000package check import ( "context" "io" "os" "strings" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" "github.com/spf13/pflag" ) // Globals var ( download = false oneway = false combined = "" missingOnSrc = "" missingOnDst = "" match = "" differ = "" errFile = "" ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &download, "download", "", download, "Check by downloading rather than with hash.") AddFlags(cmdFlags) } // AddFlags adds the check flags to the cmdFlags command func AddFlags(cmdFlags *pflag.FlagSet) { flags.BoolVarP(cmdFlags, &oneway, "one-way", "", oneway, "Check one way only, source files must exist on remote") flags.StringVarP(cmdFlags, &combined, "combined", "", combined, "Make a combined report of changes to this file") flags.StringVarP(cmdFlags, &missingOnSrc, "missing-on-src", "", missingOnSrc, "Report all files missing from the source to this file") flags.StringVarP(cmdFlags, &missingOnDst, "missing-on-dst", "", missingOnDst, "Report all files missing from the destination to this file") flags.StringVarP(cmdFlags, &match, "match", "", match, "Report all matching files to this file") flags.StringVarP(cmdFlags, &differ, "differ", "", differ, "Report all non-matching files to this file") flags.StringVarP(cmdFlags, &errFile, "error", "", errFile, "Report all files with errors (hashing or reading) to this file") } // FlagsHelp describes the flags for the help var FlagsHelp = strings.Replace(` If you supply the |--one-way| flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected. The |--differ|, |--missing-on-dst|, |--missing-on-src|, |--match| and |--error| flags write paths, one per line, to the file name (or stdout if it is |-|) supplied. What they write is described in the help below. For example |--differ| will write all paths which are present on both the source and destination but different. 
The |--combined| flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files. - |= path| means path was found in source and destination and was identical - |- path| means path was missing on the source, so only in the destination - |+ path| means path was missing on the destination, so only in the source - |* path| means path was present in source and destination but different. - |! path| means there was an error reading or hashing the source or dest. `, "|", "`", -1) // GetCheckOpt gets the options corresponding to the check flags func GetCheckOpt(fsrc, fdst fs.Fs) (opt *operations.CheckOpt, close func(), err error) { closers := []io.Closer{} opt = &operations.CheckOpt{ Fsrc: fsrc, Fdst: fdst, OneWay: oneway, } open := func(name string, pout *io.Writer) error { if name == "" { return nil } if name == "-" { *pout = os.Stdout return nil } out, err := os.Create(name) if err != nil { return err } *pout = out closers = append(closers, out) return nil } if err = open(combined, &opt.Combined); err != nil { return nil, nil, err } if err = open(missingOnSrc, &opt.MissingOnSrc); err != nil { return nil, nil, err } if err = open(missingOnDst, &opt.MissingOnDst); err != nil { return nil, nil, err } if err = open(match, &opt.Match); err != nil { return nil, nil, err } if err = open(differ, &opt.Differ); err != nil { return nil, nil, err } if err = open(errFile, &opt.Error); err != nil { return nil, nil, err } close = func() { for _, closer := range closers { err := closer.Close() if err != nil { fs.Errorf(nil, "Failed to close report output: %v", err) } } } return opt, close, nil } var commandDefinition = &cobra.Command{ Use: "check source:path dest:path", Short: `Checks the files in the source and destination match.`, Long: ` Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination. If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check. If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data. ` + FlagsHelp, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(2, 2, command, args) fsrc, fdst := cmd.NewFsSrcDst(args) cmd.Run(false, true, command, func() error { opt, close, err := GetCheckOpt(fsrc, fdst) if err != nil { return err } defer close() if download { return operations.CheckDownload(context.Background(), opt) } return operations.Check(context.Background(), opt) }) }, } rclone-1.53.3/cmd/cleanup/000077500000000000000000000000001375552240400152665ustar00rootroot00000000000000rclone-1.53.3/cmd/cleanup/cleanup.go000066400000000000000000000012331375552240400172430ustar00rootroot00000000000000package cleanup import ( "context" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "cleanup remote:path", Short: `Clean up the remote if possible.`, Long: ` Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes. 
`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(true, false, command, func() error { return operations.CleanUp(context.Background(), fsrc) }) }, } rclone-1.53.3/cmd/cmd.go000066400000000000000000000352021375552240400147330ustar00rootroot00000000000000// Package cmd implements the rclone command // // It is in a sub package so its internals can be re-used elsewhere package cmd // FIXME only attach the remote flags when using a remote??? // would probably mean bringing all the flags in to here? Or define some flagsets in fs... import ( "fmt" "log" "os" "os/exec" "path" "regexp" "runtime" "runtime/pprof" "strconv" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/config/configflags" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/filter/filterflags" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fspath" fslog "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/fs/rc/rcflags" "github.com/rclone/rclone/fs/rc/rcserver" "github.com/rclone/rclone/lib/atexit" "github.com/rclone/rclone/lib/random" "github.com/spf13/cobra" "github.com/spf13/pflag" ) // Globals var ( // Flags cpuProfile = flags.StringP("cpuprofile", "", "", "Write cpu profile to file") memProfile = flags.StringP("memprofile", "", "", "Write memory profile to file") statsInterval = flags.DurationP("stats", "", time.Minute*1, "Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable)") dataRateUnit = flags.StringP("stats-unit", "", "bytes", "Show data rate in stats as either 'bits' or 'bytes'/s") version bool retries = flags.IntP("retries", "", 3, "Retry operations this many times if they fail") retriesInterval = flags.DurationP("retries-sleep", "", 0, "Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)") // Errors errorCommandNotFound = errors.New("command not found") errorUncategorized = errors.New("uncategorized error") errorNotEnoughArguments = errors.New("not enough arguments") errorTooManyArguments = errors.New("too many arguments") ) const ( exitCodeSuccess = iota exitCodeUsageError exitCodeUncategorizedError exitCodeDirNotFound exitCodeFileNotFound exitCodeRetryError exitCodeNoRetryError exitCodeFatalError exitCodeTransferExceeded exitCodeNoFilesTransferred ) // ShowVersion prints the version to stdout func ShowVersion() { fmt.Printf("rclone %s\n", fs.Version) fmt.Printf("- os/arch: %s/%s\n", runtime.GOOS, runtime.GOARCH) fmt.Printf("- go version: %s\n", runtime.Version()) } // NewFsFile creates an Fs from a name but may point to a file. // // It returns a string with the file name if it points to a file // otherwise "".
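// // On any failure it calls log.Fatalf, so callers always get back a usable Fs.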
func NewFsFile(remote string) (fs.Fs, string) { _, _, fsPath, err := fs.ParseRemote(remote) if err != nil { err = fs.CountError(err) log.Fatalf("Failed to create file system for %q: %v", remote, err) } f, err := cache.Get(remote) switch err { case fs.ErrorIsFile: cache.Pin(f) // pin indefinitely since it was on the CLI return f, path.Base(fsPath) case nil: cache.Pin(f) // pin indefinitely since it was on the CLI return f, "" default: err = fs.CountError(err) log.Fatalf("Failed to create file system for %q: %v", remote, err) } return nil, "" } // newFsFileAddFilter creates an src Fs from a name // // This works the same as NewFsFile however it adds filters to the Fs // to limit it to a single file if the remote pointed to a file. func newFsFileAddFilter(remote string) (fs.Fs, string) { f, fileName := NewFsFile(remote) if fileName != "" { if !filter.Active.InActive() { err := errors.Errorf("Can't limit to single files when using filters: %v", remote) err = fs.CountError(err) log.Fatalf(err.Error()) } // Limit transfers to this file err := filter.Active.AddFile(fileName) if err != nil { err = fs.CountError(err) log.Fatalf("Failed to limit to single file %q: %v", remote, err) } } return f, fileName } // NewFsSrc creates a new src fs from the arguments. // // The source can be a file or a directory - if a file then it will // limit the Fs to a single file. func NewFsSrc(args []string) fs.Fs { fsrc, _ := newFsFileAddFilter(args[0]) return fsrc } // newFsDir creates an Fs from a name // // This must point to a directory func newFsDir(remote string) fs.Fs { f, err := cache.Get(remote) if err != nil { err = fs.CountError(err) log.Fatalf("Failed to create file system for %q: %v", remote, err) } cache.Pin(f) // pin indefinitely since it was on the CLI return f } // NewFsDir creates a new Fs from the arguments // // The argument must point a directory func NewFsDir(args []string) fs.Fs { fdst := newFsDir(args[0]) return fdst } // NewFsSrcDst creates a new src and dst fs from the arguments func NewFsSrcDst(args []string) (fs.Fs, fs.Fs) { fsrc, _ := newFsFileAddFilter(args[0]) fdst := newFsDir(args[1]) return fsrc, fdst } // NewFsSrcFileDst creates a new src and dst fs from the arguments // // The source may be a file, in which case the source Fs and file name is returned func NewFsSrcFileDst(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs) { fsrc, srcFileName = NewFsFile(args[0]) fdst = newFsDir(args[1]) return fsrc, srcFileName, fdst } // NewFsSrcDstFiles creates a new src and dst fs from the arguments // If src is a file then srcFileName and dstFileName will be non-empty func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs, dstFileName string) { fsrc, srcFileName = newFsFileAddFilter(args[0]) // If copying a file... dstRemote := args[1] // If file exists then srcFileName != "", however if the file // doesn't exist then we assume it is a directory... if srcFileName != "" { var err error dstRemote, dstFileName, err = fspath.Split(dstRemote) if err != nil { log.Fatalf("Parsing %q failed: %v", args[1], err) } if dstRemote == "" { dstRemote = "." 
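// an empty remote after fspath.Split means the destination is within the current directory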
} if dstFileName == "" { log.Fatalf("%q is a directory", args[1]) } } fdst, err := cache.Get(dstRemote) switch err { case fs.ErrorIsFile: _ = fs.CountError(err) log.Fatalf("Source doesn't exist or is a directory and destination is a file") case nil: default: _ = fs.CountError(err) log.Fatalf("Failed to create file system for destination %q: %v", dstRemote, err) } cache.Pin(fdst) // pin indefinitely since it was on the CLI return } // NewFsDstFile creates a new dst fs with a destination file name from the arguments func NewFsDstFile(args []string) (fdst fs.Fs, dstFileName string) { dstRemote, dstFileName, err := fspath.Split(args[0]) if err != nil { log.Fatalf("Parsing %q failed: %v", args[0], err) } if dstRemote == "" { dstRemote = "." } if dstFileName == "" { log.Fatalf("%q is a directory", args[0]) } fdst = newFsDir(dstRemote) return } // ShowStats returns true if the user added a `--stats` flag to the command line. // // This is called by Run to override the default value of the // showStats passed in. func ShowStats() bool { statsIntervalFlag := pflag.Lookup("stats") return statsIntervalFlag != nil && statsIntervalFlag.Changed } // Run the function with stats and retries if required func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) { var cmdErr error stopStats := func() {} if !showStats && ShowStats() { showStats = true } if fs.Config.Progress { stopStats = startProgress() } else if showStats { stopStats = StartStats() } SigInfoHandler() for try := 1; try <= *retries; try++ { cmdErr = f() cmdErr = fs.CountError(cmdErr) lastErr := accounting.GlobalStats().GetLastError() if cmdErr == nil { cmdErr = lastErr } if !Retry || !accounting.GlobalStats().Errored() { if try > 1 { fs.Errorf(nil, "Attempt %d/%d succeeded", try, *retries) } break } if accounting.GlobalStats().HadFatalError() { fs.Errorf(nil, "Fatal error received - not attempting retries") break } if accounting.GlobalStats().Errored() && !accounting.GlobalStats().HadRetryError() { fs.Errorf(nil, "Can't retry this error - not attempting retries") break } if retryAfter := accounting.GlobalStats().RetryAfter(); !retryAfter.IsZero() { d := retryAfter.Sub(time.Now()) if d > 0 { fs.Logf(nil, "Received retry after error - sleeping until %s (%v)", retryAfter.Format(time.RFC3339Nano), d) time.Sleep(d) } } if lastErr != nil { fs.Errorf(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, accounting.GlobalStats().GetErrors(), lastErr) } else { fs.Errorf(nil, "Attempt %d/%d failed with %d errors", try, *retries, accounting.GlobalStats().GetErrors()) } if try < *retries { accounting.GlobalStats().ResetErrors() } if *retriesInterval > 0 { time.Sleep(*retriesInterval) } } stopStats() if showStats && (accounting.GlobalStats().Errored() || *statsInterval > 0) { accounting.GlobalStats().Log() } fs.Debugf(nil, "%d go routines active\n", runtime.NumGoroutine()) // dump all running go-routines if fs.Config.Dump&fs.DumpGoRoutines != 0 { err := pprof.Lookup("goroutine").WriteTo(os.Stdout, 1) if err != nil { fs.Errorf(nil, "Failed to dump goroutines: %v", err) } } // dump open files if fs.Config.Dump&fs.DumpOpenFiles != 0 { c := exec.Command("lsof", "-p", strconv.Itoa(os.Getpid())) c.Stdout = os.Stdout c.Stderr = os.Stderr err := c.Run() if err != nil { fs.Errorf(nil, "Failed to list open files: %v", err) } } // Log the final error message and exit if cmdErr != nil { nerrs := accounting.GlobalStats().GetErrors() if nerrs <= 1 { log.Printf("Failed to %s: %v", cmd.Name(), cmdErr) } else { log.Printf("Failed to %s with %d 
errors: last error was: %v", cmd.Name(), nerrs, cmdErr) } } resolveExitCode(cmdErr) } // CheckArgs checks there are enough arguments and prints a message if not func CheckArgs(MinArgs, MaxArgs int, cmd *cobra.Command, args []string) { if len(args) < MinArgs { _ = cmd.Usage() _, _ = fmt.Fprintf(os.Stderr, "Command %s needs %d arguments minimum: you provided %d non flag arguments: %q\n", cmd.Name(), MinArgs, len(args), args) resolveExitCode(errorNotEnoughArguments) } else if len(args) > MaxArgs { _ = cmd.Usage() _, _ = fmt.Fprintf(os.Stderr, "Command %s needs %d arguments maximum: you provided %d non flag arguments: %q\n", cmd.Name(), MaxArgs, len(args), args) resolveExitCode(errorTooManyArguments) } } // StartStats prints the stats every statsInterval // // It returns a func which should be called to stop the stats. func StartStats() func() { if *statsInterval <= 0 { return func() {} } stopStats := make(chan struct{}) var wg sync.WaitGroup wg.Add(1) go func() { defer wg.Done() ticker := time.NewTicker(*statsInterval) for { select { case <-ticker.C: accounting.GlobalStats().Log() case <-stopStats: ticker.Stop() return } } }() return func() { close(stopStats) wg.Wait() } } // initConfig is run by cobra after initialising the flags func initConfig() { // Start the logger fslog.InitLogging() // Finish parsing any command line flags configflags.SetFlags() // Load filters err := filterflags.Reload() if err != nil { log.Fatalf("Failed to load filters: %v", err) } // Write the args for debug purposes fs.Debugf("rclone", "Version %q starting with parameters %q", fs.Version, os.Args) // Start the remote control server if configured _, err = rcserver.Start(&rcflags.Opt) if err != nil { log.Fatalf("Failed to start remote control: %v", err) } // Setup CPU profiling if desired if *cpuProfile != "" { fs.Infof(nil, "Creating CPU profile %q\n", *cpuProfile) f, err := os.Create(*cpuProfile) if err != nil { err = fs.CountError(err) log.Fatal(err) } err = pprof.StartCPUProfile(f) if err != nil { err = fs.CountError(err) log.Fatal(err) } atexit.Register(func() { pprof.StopCPUProfile() }) } // Setup memory profiling if desired if *memProfile != "" { atexit.Register(func() { fs.Infof(nil, "Saving Memory profile %q\n", *memProfile) f, err := os.Create(*memProfile) if err != nil { err = fs.CountError(err) log.Fatal(err) } err = pprof.WriteHeapProfile(f) if err != nil { err = fs.CountError(err) log.Fatal(err) } err = f.Close() if err != nil { err = fs.CountError(err) log.Fatal(err) } }) } if m, _ := regexp.MatchString("^(bits|bytes)$", *dataRateUnit); m == false { fs.Errorf(nil, "Invalid unit passed to --stats-unit. 
Defaulting to bytes.") fs.Config.DataRateUnit = "bytes" } else { fs.Config.DataRateUnit = *dataRateUnit } } func resolveExitCode(err error) { atexit.Run() if err == nil { if fs.Config.ErrorOnNoTransfer { if accounting.GlobalStats().GetTransfers() == 0 { os.Exit(exitCodeNoFilesTransferred) } } os.Exit(exitCodeSuccess) } _, unwrapped := fserrors.Cause(err) switch { case unwrapped == fs.ErrorDirNotFound: os.Exit(exitCodeDirNotFound) case unwrapped == fs.ErrorObjectNotFound: os.Exit(exitCodeFileNotFound) case unwrapped == errorUncategorized: os.Exit(exitCodeUncategorizedError) case unwrapped == accounting.ErrorMaxTransferLimitReached: os.Exit(exitCodeTransferExceeded) case fserrors.ShouldRetry(err): os.Exit(exitCodeRetryError) case fserrors.IsNoRetryError(err): os.Exit(exitCodeNoRetryError) case fserrors.IsFatalError(err): os.Exit(exitCodeFatalError) default: os.Exit(exitCodeUsageError) } } var backendFlags map[string]struct{} // AddBackendFlags creates flags for all the backend options func AddBackendFlags() { backendFlags = map[string]struct{}{} for _, fsInfo := range fs.Registry { done := map[string]struct{}{} for i := range fsInfo.Options { opt := &fsInfo.Options[i] // Skip if done already (eg with Provider options) if _, doneAlready := done[opt.Name]; doneAlready { continue } done[opt.Name] = struct{}{} // Make a flag from each option name := opt.FlagName(fsInfo.Prefix) found := pflag.CommandLine.Lookup(name) != nil if !found { // Take first line of help only help := strings.TrimSpace(opt.Help) if nl := strings.IndexRune(help, '\n'); nl >= 0 { help = help[:nl] } help = strings.TrimSpace(help) if opt.IsPassword { help += " (obscured)" } flag := pflag.CommandLine.VarPF(opt, name, opt.ShortOpt, help) if _, isBool := opt.Default.(bool); isBool { flag.NoOptDefVal = "true" } // Hide on the command line if requested if opt.Hide&fs.OptionHideCommandLine != 0 { flag.Hidden = true } backendFlags[name] = struct{}{} } else { fs.Errorf(nil, "Not adding duplicate flag --%s", name) } //flag.Hidden = true } } } // Main runs rclone interpreting flags and commands out of os.Args func Main() { if err := random.Seed(); err != nil { log.Fatalf("Fatal error: %v", err) } setupRootCommand(Root) AddBackendFlags() if err := Root.Execute(); err != nil { log.Fatalf("Fatal error: %v", err) } } rclone-1.53.3/cmd/cmount/000077500000000000000000000000001375552240400151445ustar00rootroot00000000000000rclone-1.53.3/cmd/cmount/fs.go000066400000000000000000000414171375552240400161120ustar00rootroot00000000000000// +build cmount // +build cgo // +build linux darwin freebsd windows package cmount import ( "io" "os" "path" "sync" "time" "github.com/billziss-gh/cgofuse/fuse" "github.com/pkg/errors" "github.com/rclone/rclone/cmd/mountlib" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/vfs" ) const fhUnset = ^uint64(0) // FS represents the top level filing system type FS struct { VFS *vfs.VFS f fs.Fs ready chan (struct{}) mu sync.Mutex // to protect the below handles []vfs.Handle } // NewFS makes a new FS func NewFS(VFS *vfs.VFS) *FS { fsys := &FS{ VFS: VFS, f: VFS.Fs(), ready: make(chan (struct{})), } return fsys } // Open a handle returning an integer file handle func (fsys *FS) openHandle(handle vfs.Handle) (fh uint64) { fsys.mu.Lock() defer fsys.mu.Unlock() var i int var oldHandle vfs.Handle for i, oldHandle = range fsys.handles { if oldHandle == nil { fsys.handles[i] = handle goto found } } fsys.handles = append(fsys.handles, handle) i = len(fsys.handles) - 1 found: return uint64(i) 
} // get the handle for fh, call with the lock held func (fsys *FS) _getHandle(fh uint64) (i int, handle vfs.Handle, errc int) { if fh >= uint64(len(fsys.handles)) { fs.Debugf(nil, "Bad file handle: too big: 0x%X", fh) return i, nil, -fuse.EBADF } i = int(fh) handle = fsys.handles[i] if handle == nil { fs.Debugf(nil, "Bad file handle: nil handle: 0x%X", fh) return i, nil, -fuse.EBADF } return i, handle, 0 } // Get the handle for the file handle func (fsys *FS) getHandle(fh uint64) (handle vfs.Handle, errc int) { fsys.mu.Lock() _, handle, errc = fsys._getHandle(fh) fsys.mu.Unlock() return } // Close the handle func (fsys *FS) closeHandle(fh uint64) (errc int) { fsys.mu.Lock() i, _, errc := fsys._getHandle(fh) if errc == 0 { fsys.handles[i] = nil } fsys.mu.Unlock() return } // lookup a Node given a path func (fsys *FS) lookupNode(path string) (node vfs.Node, errc int) { node, err := fsys.VFS.Stat(path) return node, translateError(err) } // lookup a Dir given a path func (fsys *FS) lookupDir(path string) (dir *vfs.Dir, errc int) { node, errc := fsys.lookupNode(path) if errc != 0 { return nil, errc } dir, ok := node.(*vfs.Dir) if !ok { return nil, -fuse.ENOTDIR } return dir, 0 } // lookup a parent Dir given a path returning the dir and the leaf func (fsys *FS) lookupParentDir(filePath string) (leaf string, dir *vfs.Dir, errc int) { parentDir, leaf := path.Split(filePath) dir, errc = fsys.lookupDir(parentDir) return leaf, dir, errc } // lookup a File given a path func (fsys *FS) lookupFile(path string) (file *vfs.File, errc int) { node, errc := fsys.lookupNode(path) if errc != 0 { return nil, errc } file, ok := node.(*vfs.File) if !ok { return nil, -fuse.EISDIR } return file, 0 } // get a node and handle from the path or from the fh if not fhUnset // // handle may be nil func (fsys *FS) getNode(path string, fh uint64) (node vfs.Node, handle vfs.Handle, errc int) { if fh == fhUnset { node, errc = fsys.lookupNode(path) } else { handle, errc = fsys.getHandle(fh) if errc == 0 { node = handle.Node() } } return } // stat fills up the stat block for Node func (fsys *FS) stat(node vfs.Node, stat *fuse.Stat_t) (errc int) { Size := uint64(node.Size()) Blocks := (Size + 511) / 512 modTime := node.ModTime() Mode := node.Mode().Perm() if node.IsDir() { Mode |= fuse.S_IFDIR } else { Mode |= fuse.S_IFREG } //stat.Dev = 1 stat.Ino = node.Inode() // FIXME do we need to set the inode number? stat.Mode = uint32(Mode) stat.Nlink = 1 stat.Uid = fsys.VFS.Opt.UID stat.Gid = fsys.VFS.Opt.GID //stat.Rdev stat.Size = int64(Size) t := fuse.NewTimespec(modTime) stat.Atim = t stat.Mtim = t stat.Ctim = t stat.Blksize = 512 stat.Blocks = int64(Blocks) stat.Birthtim = t // fs.Debugf(nil, "stat = %+v", *stat) return 0 } // Init is called after the filesystem is ready func (fsys *FS) Init() { defer log.Trace(fsys.f, "")("") close(fsys.ready) } // Destroy is called when it is unmounted (note that depending on how // the file system is terminated the file system may not receive the // Destroy call).
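// // Note that Init (above) closes fsys.ready to signal that the file // system is up - mount() in mount.go waits on that channel before // returning. There is no corresponding readiness signal on Destroy.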
func (fsys *FS) Destroy() { defer log.Trace(fsys.f, "")("") } // Getattr reads the attributes for path func (fsys *FS) Getattr(path string, stat *fuse.Stat_t, fh uint64) (errc int) { defer log.Trace(path, "fh=0x%X", fh)("errc=%v", &errc) node, _, errc := fsys.getNode(path, fh) if errc == 0 { errc = fsys.stat(node, stat) } return } // Opendir opens path as a directory func (fsys *FS) Opendir(path string) (errc int, fh uint64) { defer log.Trace(path, "")("errc=%d, fh=0x%X", &errc, &fh) handle, err := fsys.VFS.OpenFile(path, os.O_RDONLY, 0777) if err != nil { return translateError(err), fhUnset } return 0, fsys.openHandle(handle) } // Readdir reads the directory at dirPath func (fsys *FS) Readdir(dirPath string, fill func(name string, stat *fuse.Stat_t, ofst int64) bool, ofst int64, fh uint64) (errc int) { itemsRead := -1 defer log.Trace(dirPath, "ofst=%d, fh=0x%X", ofst, fh)("items=%d, errc=%d", &itemsRead, &errc) dir, errc := fsys.lookupDir(dirPath) if errc != 0 { return errc } // We can't seek in directories and FUSE should know that so // return an error if ofst is ever set. if ofst > 0 { return -fuse.ESPIPE } nodes, err := dir.ReadDirAll() if err != nil { return translateError(err) } // Optionally, create a struct stat that describes the file as // for getattr (but FUSE only looks at st_ino and the // file-type bits of st_mode). // // We have called host.SetCapReaddirPlus() so WinFsp will // use the full stat information - a useful optimization on // Windows. // // NB we are using the first mode for readdir: The readdir // implementation ignores the offset parameter, and passes // zero to the filler function's offset. The filler function // will not return '1' (unless an error happens), so the whole // directory is read in a single readdir operation. fill(".", nil, 0) fill("..", nil, 0) for _, node := range nodes { name := node.Name() if len(name) > mountlib.MaxLeafSize { fs.Errorf(dirPath, "Name too long (%d bytes) for FUSE, skipping: %s", len(name), name) continue } // We have called host.SetCapReaddirPlus() so supply the stat information // It is very cheap at this point so supply it regardless of OS capabilities var stat fuse.Stat_t _ = fsys.stat(node, &stat) // not capable of returning an error fill(name, &stat, 0) } itemsRead = len(nodes) return 0 } // Releasedir finishes reading the directory func (fsys *FS) Releasedir(path string, fh uint64) (errc int) { defer log.Trace(path, "fh=0x%X", fh)("errc=%d", &errc) return fsys.closeHandle(fh) } // Statfs reads overall stats on the filesystem func (fsys *FS) Statfs(path string, stat *fuse.Statfs_t) (errc int) { defer log.Trace(path, "")("stat=%+v, errc=%d", stat, &errc) const blockSize = 4096 total, _, free := fsys.VFS.Statfs() stat.Blocks = uint64(total) / blockSize // Total data blocks in file system. stat.Bfree = uint64(free) / blockSize // Free blocks in file system. stat.Bavail = stat.Bfree // Free blocks in file system if you're not root. stat.Files = 1e9 // Total files in file system. stat.Ffree = 1e9 // Free files in file system. stat.Bsize = blockSize // Block size stat.Namemax = 255 // Maximum file name length? stat.Frsize = blockSize // Fragment size, smallest addressable data size in the file system.
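// mountlib.ClipBlocks below appears to cap these counts so they fit // in the smaller integer types some FUSE layers use (an assumption // based on the name - see mountlib for the exact behaviour)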
mountlib.ClipBlocks(&stat.Blocks) mountlib.ClipBlocks(&stat.Bfree) mountlib.ClipBlocks(&stat.Bavail) return 0 } // OpenEx opens a file func (fsys *FS) OpenEx(path string, fi *fuse.FileInfo_t) (errc int) { defer log.Trace(path, "flags=0x%X", fi.Flags)("errc=%d, fh=0x%X", &errc, &fi.Fh) fi.Fh = fhUnset // translate the fuse flags to os flags flags := translateOpenFlags(fi.Flags) handle, err := fsys.VFS.OpenFile(path, flags, 0777) if err != nil { return translateError(err) } // If size unknown then use direct io to read if entry := handle.Node().DirEntry(); entry != nil && entry.Size() < 0 { fi.DirectIo = true } fi.Fh = fsys.openHandle(handle) return 0 } // Open opens a file func (fsys *FS) Open(path string, flags int) (errc int, fh uint64) { var fi = fuse.FileInfo_t{ Flags: flags, } errc = fsys.OpenEx(path, &fi) return errc, fi.Fh } // CreateEx creates and opens a file. func (fsys *FS) CreateEx(filePath string, mode uint32, fi *fuse.FileInfo_t) (errc int) { defer log.Trace(filePath, "flags=0x%X, mode=0%o", fi.Flags, mode)("errc=%d, fh=0x%X", &errc, &fi.Fh) fi.Fh = fhUnset leaf, parentDir, errc := fsys.lookupParentDir(filePath) if errc != 0 { return errc } file, err := parentDir.Create(leaf, fi.Flags) if err != nil { return translateError(err) } // translate the fuse flags to os flags flags := translateOpenFlags(fi.Flags) | os.O_CREATE handle, err := file.Open(flags) if err != nil { return translateError(err) } fi.Fh = fsys.openHandle(handle) return 0 } // Create creates and opens a file. func (fsys *FS) Create(filePath string, flags int, mode uint32) (errc int, fh uint64) { var fi = fuse.FileInfo_t{ Flags: flags, } errc = fsys.CreateEx(filePath, mode, &fi) return errc, fi.Fh } // Truncate truncates a file to size func (fsys *FS) Truncate(path string, size int64, fh uint64) (errc int) { defer log.Trace(path, "size=%d, fh=0x%X", size, fh)("errc=%d", &errc) node, handle, errc := fsys.getNode(path, fh) if errc != 0 { return errc } var err error if handle != nil { err = handle.Truncate(size) } else { err = node.Truncate(size) } if err != nil { return translateError(err) } return 0 } // Read data from file handle func (fsys *FS) Read(path string, buff []byte, ofst int64, fh uint64) (n int) { defer log.Trace(path, "ofst=%d, fh=0x%X", ofst, fh)("n=%d", &n) handle, errc := fsys.getHandle(fh) if errc != 0 { return errc } n, err := handle.ReadAt(buff, ofst) if err == io.EOF { } else if err != nil { return translateError(err) } return n } // Write data to file handle func (fsys *FS) Write(path string, buff []byte, ofst int64, fh uint64) (n int) { defer log.Trace(path, "ofst=%d, fh=0x%X", ofst, fh)("n=%d", &n) handle, errc := fsys.getHandle(fh) if errc != 0 { return errc } n, err := handle.WriteAt(buff, ofst) if err != nil { return translateError(err) } return n } // Flush flushes an open file descriptor or path func (fsys *FS) Flush(path string, fh uint64) (errc int) { defer log.Trace(path, "fh=0x%X", fh)("errc=%d", &errc) handle, errc := fsys.getHandle(fh) if errc != 0 { return errc } return translateError(handle.Flush()) } // Release closes the file if still open func (fsys *FS) Release(path string, fh uint64) (errc int) { defer log.Trace(path, "fh=0x%X", fh)("errc=%d", &errc) handle, errc := fsys.getHandle(fh) if errc != 0 { return errc } _ = fsys.closeHandle(fh) return translateError(handle.Release()) } // Unlink removes a file. 
func (fsys *FS) Unlink(filePath string) (errc int) { defer log.Trace(filePath, "")("errc=%d", &errc) leaf, parentDir, errc := fsys.lookupParentDir(filePath) if errc != 0 { return errc } return translateError(parentDir.RemoveName(leaf)) } // Mkdir creates a directory. func (fsys *FS) Mkdir(dirPath string, mode uint32) (errc int) { defer log.Trace(dirPath, "mode=0%o", mode)("errc=%d", &errc) leaf, parentDir, errc := fsys.lookupParentDir(dirPath) if errc != 0 { return errc } _, err := parentDir.Mkdir(leaf) return translateError(err) } // Rmdir removes a directory func (fsys *FS) Rmdir(dirPath string) (errc int) { defer log.Trace(dirPath, "")("errc=%d", &errc) leaf, parentDir, errc := fsys.lookupParentDir(dirPath) if errc != 0 { return errc } return translateError(parentDir.RemoveName(leaf)) } // Rename renames a file. func (fsys *FS) Rename(oldPath string, newPath string) (errc int) { defer log.Trace(oldPath, "newPath=%q", newPath)("errc=%d", &errc) return translateError(fsys.VFS.Rename(oldPath, newPath)) } // Windows sometimes seems to send times that are the epoch which is // 1601-01-01 +/- timezone so filter out times that are earlier than // this. var invalidDateCutoff = time.Date(1601, 1, 2, 0, 0, 0, 0, time.UTC) // Utimens changes the access and modification times of a file. func (fsys *FS) Utimens(path string, tmsp []fuse.Timespec) (errc int) { defer log.Trace(path, "tmsp=%+v", tmsp)("errc=%d", &errc) node, errc := fsys.lookupNode(path) if errc != 0 { return errc } if tmsp == nil || len(tmsp) < 2 { fs.Debugf(path, "Utimens: Not setting time as timespec isn't complete: %v", tmsp) return 0 } t := tmsp[1].Time() if t.Before(invalidDateCutoff) { fs.Debugf(path, "Utimens: Not setting out of range time: %v", t) return 0 } fs.Debugf(path, "Utimens: SetModTime: %v", t) return translateError(node.SetModTime(t)) } // Mknod creates a file node. func (fsys *FS) Mknod(path string, mode uint32, dev uint64) (errc int) { defer log.Trace(path, "mode=0x%X, dev=0x%X", mode, dev)("errc=%d", &errc) return -fuse.ENOSYS } // Fsync synchronizes file contents. func (fsys *FS) Fsync(path string, datasync bool, fh uint64) (errc int) { defer log.Trace(path, "datasync=%v, fh=0x%X", datasync, fh)("errc=%d", &errc) // This is a no-op for rclone return 0 } // Link creates a hard link to a file. func (fsys *FS) Link(oldpath string, newpath string) (errc int) { defer log.Trace(oldpath, "newpath=%q", newpath)("errc=%d", &errc) return -fuse.ENOSYS } // Symlink creates a symbolic link. func (fsys *FS) Symlink(target string, newpath string) (errc int) { defer log.Trace(target, "newpath=%q", newpath)("errc=%d", &errc) return -fuse.ENOSYS } // Readlink reads the target of a symbolic link. func (fsys *FS) Readlink(path string) (errc int, linkPath string) { defer log.Trace(path, "")("linkPath=%q, errc=%d", &linkPath, &errc) return -fuse.ENOSYS, "" } // Chmod changes the permission bits of a file. func (fsys *FS) Chmod(path string, mode uint32) (errc int) { defer log.Trace(path, "mode=0%o", mode)("errc=%d", &errc) // This is a no-op for rclone return 0 } // Chown changes the owner and group of a file. func (fsys *FS) Chown(path string, uid uint32, gid uint32) (errc int) { defer log.Trace(path, "uid=%d, gid=%d", uid, gid)("errc=%d", &errc) // This is a no-op for rclone return 0 } // Access checks file access permissions. func (fsys *FS) Access(path string, mask uint32) (errc int) { defer log.Trace(path, "mask=0%o", mask)("errc=%d", &errc) // This is a no-op for rclone return 0 } // Fsyncdir synchronizes directory contents. 
func (fsys *FS) Fsyncdir(path string, datasync bool, fh uint64) (errc int) { defer log.Trace(path, "datasync=%v, fh=0x%X", datasync, fh)("errc=%d", &errc) // This is a no-op for rclone return 0 } // Setxattr sets extended attributes. func (fsys *FS) Setxattr(path string, name string, value []byte, flags int) (errc int) { return -fuse.ENOSYS } // Getxattr gets extended attributes. func (fsys *FS) Getxattr(path string, name string) (errc int, value []byte) { return -fuse.ENOSYS, nil } // Removexattr removes extended attributes. func (fsys *FS) Removexattr(path string, name string) (errc int) { return -fuse.ENOSYS } // Listxattr lists extended attributes. func (fsys *FS) Listxattr(path string, fill func(name string) bool) (errc int) { return -fuse.ENOSYS } // Translate errors from mountlib func translateError(err error) (errc int) { if err == nil { return 0 } switch errors.Cause(err) { case vfs.OK: return 0 case vfs.ENOENT, fs.ErrorDirNotFound, fs.ErrorObjectNotFound: return -fuse.ENOENT case vfs.EEXIST, fs.ErrorDirExists: return -fuse.EEXIST case vfs.EPERM, fs.ErrorPermissionDenied: return -fuse.EPERM case vfs.ECLOSED: return -fuse.EBADF case vfs.ENOTEMPTY: return -fuse.ENOTEMPTY case vfs.ESPIPE: return -fuse.ESPIPE case vfs.EBADF: return -fuse.EBADF case vfs.EROFS: return -fuse.EROFS case vfs.ENOSYS, fs.ErrorNotImplemented: return -fuse.ENOSYS case vfs.EINVAL: return -fuse.EINVAL } fs.Errorf(nil, "IO error: %v", err) return -fuse.EIO } // Translate Open Flags from FUSE to os (as used in the vfs layer) func translateOpenFlags(inFlags int) (outFlags int) { switch inFlags & fuse.O_ACCMODE { case fuse.O_RDONLY: outFlags = os.O_RDONLY case fuse.O_WRONLY: outFlags = os.O_WRONLY case fuse.O_RDWR: outFlags = os.O_RDWR } if inFlags&fuse.O_APPEND != 0 { outFlags |= os.O_APPEND } if inFlags&fuse.O_CREAT != 0 { outFlags |= os.O_CREATE } if inFlags&fuse.O_EXCL != 0 { outFlags |= os.O_EXCL } if inFlags&fuse.O_TRUNC != 0 { outFlags |= os.O_TRUNC } // NB O_SYNC isn't defined by fuse return outFlags } // Make sure interfaces are satisfied var ( _ fuse.FileSystemInterface = (*FS)(nil) _ fuse.FileSystemOpenEx = (*FS)(nil) //_ fuse.FileSystemChflags = (*FS)(nil) //_ fuse.FileSystemSetcrtime = (*FS)(nil) //_ fuse.FileSystemSetchgtime = (*FS)(nil) ) rclone-1.53.3/cmd/cmount/mount.go000066400000000000000000000134311375552240400166370ustar00rootroot00000000000000// Package cmount implements a FUSE mounting system for rclone remotes. // // This uses the cgo based cgofuse library // +build cmount // +build cgo // +build linux darwin freebsd windows package cmount import ( "fmt" "os" "runtime" "strings" "time" "github.com/billziss-gh/cgofuse/fuse" "github.com/pkg/errors" "github.com/rclone/rclone/cmd/mountlib" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/vfs" ) func init() { name := "cmount" if runtime.GOOS == "windows" { name = "mount" } mountlib.NewMountCommand(name, false, mount) mountlib.AddRc("cmount", mount) } // mountOptions configures the options from the command line flags func mountOptions(VFS *vfs.VFS, device string, mountpoint string, opt *mountlib.Options) (options []string) { // Options options = []string{ "-o", "fsname=" + device, "-o", "subtype=rclone", "-o", fmt.Sprintf("max_readahead=%d", opt.MaxReadAhead), "-o", fmt.Sprintf("attr_timeout=%g", opt.AttrTimeout.Seconds()), // This causes FUSE to supply O_TRUNC with the Open // call which is more efficient for cmount. However // it does not work with cgofuse on Windows with // WinFSP so cmount must work with or without it. 
"-o", "atomic_o_trunc", } if opt.DebugFUSE { options = append(options, "-o", "debug") } // OSX options if runtime.GOOS == "darwin" { if opt.NoAppleDouble { options = append(options, "-o", "noappledouble") } if opt.NoAppleXattr { options = append(options, "-o", "noapplexattr") } } // determine if ExtraOptions already has an opt in hasOption := func(optionName string) bool { optionName += "=" for _, option := range opt.ExtraOptions { if strings.HasPrefix(option, optionName) { return true } } return false } // Windows options if runtime.GOOS == "windows" { // These cause WinFsp to mean the current user if !hasOption("uid") { options = append(options, "-o", "uid=-1") } if !hasOption("gid") { options = append(options, "-o", "gid=-1") } options = append(options, "--FileSystemName=rclone") } if runtime.GOOS == "darwin" || runtime.GOOS == "windows" { if opt.VolumeName != "" { options = append(options, "-o", "volname="+opt.VolumeName) } } if opt.AllowNonEmpty { options = append(options, "-o", "nonempty") } if opt.AllowOther { options = append(options, "-o", "allow_other") } if opt.AllowRoot { options = append(options, "-o", "allow_root") } if opt.DefaultPermissions { options = append(options, "-o", "default_permissions") } if VFS.Opt.ReadOnly { options = append(options, "-o", "ro") } if opt.WritebackCache { // FIXME? options = append(options, "-o", WritebackCache()) } if opt.DaemonTimeout != 0 { options = append(options, "-o", fmt.Sprintf("daemon_timeout=%d", int(opt.DaemonTimeout.Seconds()))) } for _, option := range opt.ExtraOptions { options = append(options, "-o", option) } for _, option := range opt.ExtraFlags { options = append(options, option) } return options } // waitFor runs fn() until it returns true or the timeout expires func waitFor(fn func() bool) (ok bool) { const totalWait = 10 * time.Second const individualWait = 10 * time.Millisecond for i := 0; i < int(totalWait/individualWait); i++ { ok = fn() if ok { return ok } time.Sleep(individualWait) } return false } // mount the file system // // The mount point will be ready when this returns. // // returns an error, and an error channel for the serve process to // report an error when fusermount is called. 
func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error, func() error, error) { f := VFS.Fs() fs.Debugf(f, "Mounting on %q", mountpoint) // Check the mountpoint - in Windows the mountpoint mustn't exist before the mount if runtime.GOOS != "windows" { fi, err := os.Stat(mountpoint) if err != nil { return nil, nil, errors.Wrap(err, "mountpoint") } if !fi.IsDir() { return nil, nil, errors.New("mountpoint is not a directory") } } // Create underlying FS fsys := NewFS(VFS) host := fuse.NewFileSystemHost(fsys) host.SetCapReaddirPlus(true) // only works on Windows host.SetCapCaseInsensitive(f.Features().CaseInsensitive) // Create options options := mountOptions(VFS, f.Name()+":"+f.Root(), mountpoint, opt) fs.Debugf(f, "Mounting with options: %q", options) // Serve the mount point in the background returning error to errChan errChan := make(chan error, 1) go func() { defer func() { if r := recover(); r != nil { errChan <- errors.Errorf("mount failed: %v", r) } }() var err error ok := host.Mount(mountpoint, options) if !ok { err = errors.New("mount failed") fs.Errorf(f, "Mount failed") } errChan <- err }() // unmount unmount := func() error { // Shutdown the VFS fsys.VFS.Shutdown() fs.Debugf(nil, "Calling host.Unmount") if host.Unmount() { fs.Debugf(nil, "host.Unmount succeeded") if runtime.GOOS == "windows" { if !waitFor(func() bool { _, err := os.Stat(mountpoint) return err != nil }) { fs.Errorf(nil, "mountpoint %q didn't disappear after unmount - continuing anyway", mountpoint) } } return nil } fs.Debugf(nil, "host.Unmount failed") return errors.New("host unmount failed") } // Wait for the filesystem to become ready, checking the file // system didn't blow up before starting select { case err := <-errChan: err = errors.Wrap(err, "mount stopped before calling Init") return nil, nil, err case <-fsys.ready: } // Wait for the mount point to be available on Windows // On Windows the Init signal comes slightly before the mount is ready if runtime.GOOS == "windows" { if !waitFor(func() bool { _, err := os.Stat(mountpoint) return err == nil }) { fs.Errorf(nil, "mountpoint %q didn't become available on mount - continuing anyway", mountpoint) } } return errChan, unmount, nil } rclone-1.53.3/cmd/cmount/mount_test.go000066400000000000000000000005511375552240400176750ustar00rootroot00000000000000// +build cmount // +build cgo // +build linux darwin freebsd windows // +build !race !windows // FIXME this doesn't work with the race detector under Windows either // hanging or producing lots of differences.
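// TestMount below runs the shared VFS test suite (vfstest.RunTests) // against the cmount implementation defined in mount.go.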
package cmount import ( "testing" "github.com/rclone/rclone/vfs/vfstest" ) func TestMount(t *testing.T) { vfstest.RunTests(t, false, mount) } rclone-1.53.3/cmd/cmount/mount_unsupported.go000066400000000000000000000002671375552240400213120ustar00rootroot00000000000000// Build for cmount for unsupported platforms to stop go complaining // about "no buildable Go source files " // +build !linux,!darwin,!freebsd,!windows !cgo !cmount package cmount rclone-1.53.3/cmd/config/000077500000000000000000000000001375552240400151045ustar00rootroot00000000000000rclone-1.53.3/cmd/config/config.go000066400000000000000000000226711375552240400167100ustar00rootroot00000000000000package config import ( "context" "encoding/json" "fmt" "os" "sort" "strings" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/rc" "github.com/spf13/cobra" "github.com/spf13/pflag" ) func init() { cmd.Root.AddCommand(configCommand) configCommand.AddCommand(configEditCommand) configCommand.AddCommand(configFileCommand) configCommand.AddCommand(configShowCommand) configCommand.AddCommand(configDumpCommand) configCommand.AddCommand(configProvidersCommand) configCommand.AddCommand(configCreateCommand) configCommand.AddCommand(configUpdateCommand) configCommand.AddCommand(configDeleteCommand) configCommand.AddCommand(configPasswordCommand) configCommand.AddCommand(configReconnectCommand) configCommand.AddCommand(configDisconnectCommand) configCommand.AddCommand(configUserInfoCommand) } var configCommand = &cobra.Command{ Use: "config", Short: `Enter an interactive configuration session.`, Long: `Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration. `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(0, 0, command, args) config.EditConfig() }, } var configEditCommand = &cobra.Command{ Use: "edit", Short: configCommand.Short, Long: configCommand.Long, Run: configCommand.Run, } var configFileCommand = &cobra.Command{ Use: "file", Short: `Show path of configuration file in use.`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(0, 0, command, args) config.ShowConfigLocation() }, } var configShowCommand = &cobra.Command{ Use: "show []", Short: `Print (decrypted) config file, or the config for a single remote.`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(0, 1, command, args) if len(args) == 0 { config.ShowConfig() } else { name := strings.TrimRight(args[0], ":") config.ShowRemote(name) } }, } var configDumpCommand = &cobra.Command{ Use: "dump", Short: `Dump the config file as JSON.`, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(0, 0, command, args) return config.Dump() }, } var configProvidersCommand = &cobra.Command{ Use: "providers", Short: `List in JSON format all the providers and options.`, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(0, 0, command, args) return config.JSONListProviders() }, } var ( configObscure bool configNoObscure bool ) const configPasswordHelp = ` If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file. 
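For example (a hypothetical remote "myremote" with a password field "pass"): rclone config create myremote sftp pass mysecretpassword would store an obscured form of "mysecretpassword" in the config file rather than the plain text.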
**NB** If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command. ` var configCreateCommand = &cobra.Command{ Use: "create `name` `type` [`key` `value`]*", Short: `Create a new remote with name, type and options.`, Long: ` Create a new remote of ` + "`name`" + ` with ` + "`type`" + ` and options. The options should be passed in pairs of ` + "`key` `value`" + `. For example to make a swift remote of name myremote using auto config you would do: rclone config create myremote swift env_auth true Note that if the config process would normally ask a question the default is taken. Each time that happens rclone will print a message saying how to affect the value taken. ` + configPasswordHelp + ` So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this: rclone config create mydrive drive config_is_local false `, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(2, 256, command, args) in, err := argsToMap(args[2:]) if err != nil { return err } err = config.CreateRemote(args[0], args[1], in, configObscure, configNoObscure) if err != nil { return err } config.ShowRemote(args[0]) return nil }, } func init() { for _, cmdFlags := range []*pflag.FlagSet{configCreateCommand.Flags(), configUpdateCommand.Flags()} { flags.BoolVarP(cmdFlags, &configObscure, "obscure", "", false, "Force any passwords to be obscured.") flags.BoolVarP(cmdFlags, &configNoObscure, "no-obscure", "", false, "Force any passwords not to be obscured.") } } var configUpdateCommand = &cobra.Command{ Use: "update `name` [`key` `value`]+", Short: `Update options in an existing remote.`, Long: ` Update an existing remote's options. The options should be passed in pairs of ` + "`key` `value`" + `. For example to update the env_auth field of a remote of name myremote you would do: rclone config update myremote swift env_auth true ` + configPasswordHelp + ` If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus: rclone config update myremote swift env_auth true config_refresh_token false `, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(3, 256, command, args) in, err := argsToMap(args[1:]) if err != nil { return err } err = config.UpdateRemote(args[0], in, configObscure, configNoObscure) if err != nil { return err } config.ShowRemote(args[0]) return nil }, } var configDeleteCommand = &cobra.Command{ Use: "delete `name`", Short: "Delete an existing remote `name`.", Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) config.DeleteRemote(args[0]) }, } var configPasswordCommand = &cobra.Command{ Use: "password `name` [`key` `value`]+", Short: `Update password in an existing remote.`, Long: ` Update an existing remote's password. The password should be passed in pairs of ` + "`key` `value`" + `. For example to set the password of a remote of name myremote you would do: rclone config password myremote fieldname mypassword This command is obsolete now that "config update" and "config create" both support obscuring passwords directly.
`, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(3, 256, command, args) in, err := argsToMap(args[1:]) if err != nil { return err } err = config.PasswordRemote(args[0], in) if err != nil { return err } config.ShowRemote(args[0]) return nil }, } // This takes a list of arguments in key value key value form and // converts it into a map func argsToMap(args []string) (out rc.Params, err error) { if len(args)%2 != 0 { return nil, errors.New("found key without value") } out = rc.Params{} // Set the config for i := 0; i < len(args); i += 2 { out[args[i]] = args[i+1] } return out, nil } var configReconnectCommand = &cobra.Command{ Use: "reconnect remote:", Short: `Re-authenticates user with remote.`, Long: ` This reconnects remote: passed in to the cloud storage system. To disconnect the remote use "rclone config disconnect". This normally means going through the interactive oauth flow again. `, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(1, 1, command, args) fsInfo, configName, _, config, err := fs.ConfigFs(args[0]) if err != nil { return err } if fsInfo.Config == nil { return errors.Errorf("%s: doesn't support Reconnect", configName) } fsInfo.Config(configName, config) return nil }, } var configDisconnectCommand = &cobra.Command{ Use: "disconnect remote:", Short: `Disconnects user from remote`, Long: ` This disconnects the remote: passed in to the cloud storage system. This normally means revoking the oauth token. To reconnect use "rclone config reconnect". `, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(1, 1, command, args) f := cmd.NewFsSrc(args) doDisconnect := f.Features().Disconnect if doDisconnect == nil { return errors.Errorf("%v doesn't support Disconnect", f) } err := doDisconnect(context.Background()) if err != nil { return errors.Wrap(err, "Disconnect call failed") } return nil }, } var ( jsonOutput bool ) func init() { flags.BoolVarP(configUserInfoCommand.Flags(), &jsonOutput, "json", "", false, "Format output as JSON") } var configUserInfoCommand = &cobra.Command{ Use: "userinfo remote:", Short: `Prints info about logged in user of remote.`, Long: ` This prints the details of the person logged in to the cloud storage system. 
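For example: rclone config userinfo remote: rclone config userinfo --json remote: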
`, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(1, 1, command, args) f := cmd.NewFsSrc(args) doUserInfo := f.Features().UserInfo if doUserInfo == nil { return errors.Errorf("%v doesn't support UserInfo", f) } u, err := doUserInfo(context.Background()) if err != nil { return errors.Wrap(err, "UserInfo call failed") } if jsonOutput { out := json.NewEncoder(os.Stdout) out.SetIndent("", "\t") return out.Encode(u) } var keys []string var maxKeyLen int for key := range u { keys = append(keys, key) if len(key) > maxKeyLen { maxKeyLen = len(key) } } sort.Strings(keys) for _, key := range keys { fmt.Printf("%*s: %s\n", maxKeyLen, key, u[key]) } return nil }, } rclone-1.53.3/cmd/copy/000077500000000000000000000000001375552240400146115ustar00rootroot00000000000000rclone-1.53.3/cmd/copy/copy.go000066400000000000000000000052621375552240400161170ustar00rootroot00000000000000package copy import ( "context" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/sync" "github.com/spf13/cobra" ) var ( createEmptySrcDirs = false ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after copy") } var commandDefinition = &cobra.Command{ Use: "copy source:path dest:path", Short: `Copy files from source to dest, skipping already copied.`, Long: ` Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination. Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. If dest:path doesn't exist, it is created and the source:path contents go there. For example rclone copy source:sourcepath dest:destpath Let's say there are two files in sourcepath sourcepath/one.txt sourcepath/two.txt This copies them to destpath/one.txt destpath/two.txt Not to destpath/sourcepath/one.txt destpath/sourcepath/two.txt If you are familiar with ` + "`rsync`" + `, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination. See the [--no-traverse](/docs/#no-traverse) option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly. For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this: rclone copy --max-age 24h --no-traverse /path/to/src remote: **Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics. **Note**: Use the ` + "`--dry-run` or the `--interactive`/`-i`" + ` flag to test without copying anything. 
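For example, to check what would be copied first and then run the copy showing live statistics: rclone copy --dry-run source:sourcepath dest:destpath rclone copy -P source:sourcepath dest:destpath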
`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(2, 2, command, args) fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args) cmd.Run(true, true, command, func() error { if srcFileName == "" { return sync.CopyDir(context.Background(), fdst, fsrc, createEmptySrcDirs) } return operations.CopyFile(context.Background(), fdst, fsrc, srcFileName, srcFileName) }) }, } rclone-1.53.3/cmd/copyto/000077500000000000000000000000001375552240400151545ustar00rootroot00000000000000rclone-1.53.3/cmd/copyto/copyto.go000066400000000000000000000031531375552240400170220ustar00rootroot00000000000000package copyto import ( "context" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/sync" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "copyto source:path dest:path", Short: `Copy files from source to dest, skipping already copied.`, Long: ` If source:path is a file or directory then it copies it to a file or directory named dest:path. This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command. So rclone copyto src dst where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows. This will: if src is file copy it to dst, overwriting an existing file if it exists if src is directory copy it to dst, overwriting existing files if they exist see copy command for full details This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination. **Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(2, 2, command, args) fsrc, srcFileName, fdst, dstFileName := cmd.NewFsSrcDstFiles(args) cmd.Run(true, true, command, func() error { if srcFileName == "" { return sync.CopyDir(context.Background(), fdst, fsrc, false) } return operations.CopyFile(context.Background(), fdst, fsrc, dstFileName, srcFileName) }) }, } rclone-1.53.3/cmd/copyurl/000077500000000000000000000000001375552240400153345ustar00rootroot00000000000000rclone-1.53.3/cmd/copyurl/copyurl.go000066400000000000000000000040551375552240400173640ustar00rootroot00000000000000package copyurl import ( "context" "errors" "os" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var ( autoFilename = false stdout = false noClobber = false ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &autoFilename, "auto-filename", "a", autoFilename, "Get the file name from the URL and use it for destination file path") flags.BoolVarP(cmdFlags, &noClobber, "no-clobber", "", noClobber, "Prevent overwriting file with same name") flags.BoolVarP(cmdFlags, &stdout, "stdout", "", stdout, "Write the output to stdout rather than a file") } var commandDefinition = &cobra.Command{ Use: "copyurl https://example.com dest:path", Short: `Copy url content to dest.`, Long: ` Download a URL's content and copy it to the destination without saving it in temporary storage. Setting --auto-filename will cause the file name to be retrieved from the URL (after any redirections) and used in the destination path.
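For example (assuming the URL ends in a file name such as file.zip): rclone copyurl -a https://example.com/file.zip dest:path would save the download as dest:path/file.zip.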
Setting --no-clobber will prevent overwriting file on the destination if there is one with the same name. Setting --stdout or making the output file name "-" will cause the output to be written to standard output. `, RunE: func(command *cobra.Command, args []string) (err error) { cmd.CheckArgs(1, 2, command, args) var dstFileName string var fsdst fs.Fs if !stdout { if len(args) < 2 { return errors.New("need 2 arguments if not using --stdout") } if args[1] == "-" { stdout = true } else if autoFilename { fsdst = cmd.NewFsDir(args[1:]) } else { fsdst, dstFileName = cmd.NewFsDstFile(args[1:]) } } cmd.Run(true, true, command, func() error { if stdout { err = operations.CopyURLToWriter(context.Background(), args[0], os.Stdout) } else { _, err = operations.CopyURL(context.Background(), fsdst, dstFileName, args[0], autoFilename, noClobber) } return err }) return nil }, } rclone-1.53.3/cmd/cryptcheck/000077500000000000000000000000001375552240400157765ustar00rootroot00000000000000rclone-1.53.3/cmd/cryptcheck/cryptcheck.go000066400000000000000000000065401375552240400204710ustar00rootroot00000000000000package cryptcheck import ( "context" "github.com/pkg/errors" "github.com/rclone/rclone/backend/crypt" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/check" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlag := commandDefinition.Flags() check.AddFlags(cmdFlag) } var commandDefinition = &cobra.Command{ Use: "cryptcheck remote:path cryptedremote:path", Short: `Cryptcheck checks the integrity of a crypted remote.`, Long: ` rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote. For it to work the underlying remote of the cryptedremote must support some kind of checksum. It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted. Use it like this rclone cryptcheck /path/to/files encryptedremote:path You can use it like this also, but that will involve downloading all the files in remote:path. rclone cryptcheck remote:path encryptedremote:path After it has run it will log the status of the encryptedremote:. 
` + check.FlagsHelp, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(2, 2, command, args) fsrc, fdst := cmd.NewFsSrcDst(args) cmd.Run(false, true, command, func() error { return cryptCheck(context.Background(), fdst, fsrc) }) }, } // cryptCheck checks the integrity of a crypted remote func cryptCheck(ctx context.Context, fdst, fsrc fs.Fs) error { // Check to see fcrypt is a crypt fcrypt, ok := fdst.(*crypt.Fs) if !ok { return errors.Errorf("%s:%s is not a crypt remote", fdst.Name(), fdst.Root()) } // Find a hash to use funderlying := fcrypt.UnWrap() hashType := funderlying.Hashes().GetOne() if hashType == hash.None { return errors.Errorf("%s:%s does not support any hashes", funderlying.Name(), funderlying.Root()) } fs.Infof(nil, "Using %v for hash comparisons", hashType) opt, close, err := check.GetCheckOpt(fsrc, fcrypt) if err != nil { return err } defer close() // checkIdentical checks to see if dst and src are identical // // it returns true if differences were found // it also returns whether it couldn't be hashed opt.Check = func(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) { cryptDst := dst.(*crypt.Object) underlyingDst := cryptDst.UnWrap() underlyingHash, err := underlyingDst.Hash(ctx, hashType) if err != nil { return true, false, errors.Wrapf(err, "error reading hash from underlying %v", underlyingDst) } if underlyingHash == "" { return false, true, nil } cryptHash, err := fcrypt.ComputeHash(ctx, cryptDst, src, hashType) if err != nil { return true, false, errors.Wrap(err, "error computing hash") } if cryptHash == "" { return false, true, nil } if cryptHash != underlyingHash { err = errors.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", fdst.Name(), fdst.Root(), cryptHash, fsrc.Name(), fsrc.Root(), underlyingHash) fs.Errorf(src, err.Error()) return true, false, nil } return false, false, nil } return operations.CheckFn(ctx, opt) } rclone-1.53.3/cmd/cryptdecode/000077500000000000000000000000001375552240400161445ustar00rootroot00000000000000rclone-1.53.3/cmd/cryptdecode/cryptdecode.go000066400000000000000000000043561375552240400210100ustar00rootroot00000000000000package cryptdecode import ( "errors" "fmt" "github.com/rclone/rclone/backend/crypt" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/spf13/cobra" ) // Options set by command line flags var ( Reverse = false ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &Reverse, "reverse", "", Reverse, "Reverse cryptdecode, encrypts filenames") } var commandDefinition = &cobra.Command{ Use: "cryptdecode encryptedremote: encryptedfilename", Short: `Cryptdecode returns unencrypted file names.`, Long: ` rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items. If you supply the --reverse flag, it will return encrypted file names. 
Use it like this rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 rclone cryptdecode --reverse encryptedremote: filename1 filename2 `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(2, 11, command, args) cmd.Run(false, false, command, func() error { fsInfo, _, _, config, err := fs.ConfigFs(args[0]) if err != nil { return err } if fsInfo.Name != "crypt" { return errors.New("The remote needs to be of type \"crypt\"") } cipher, err := crypt.NewCipher(config) if err != nil { return err } if Reverse { return cryptEncode(cipher, args[1:]) } return cryptDecode(cipher, args[1:]) }) }, } // cryptDecode returns the unencrypted file name func cryptDecode(cipher *crypt.Cipher, args []string) error { output := "" for _, encryptedFileName := range args { fileName, err := cipher.DecryptFileName(encryptedFileName) if err != nil { output += fmt.Sprintln(encryptedFileName, "\t", "Failed to decrypt") } else { output += fmt.Sprintln(encryptedFileName, "\t", fileName) } } fmt.Print(output) return nil } // cryptEncode returns the encrypted file name func cryptEncode(cipher *crypt.Cipher, args []string) error { output := "" for _, fileName := range args { encryptedFileName := cipher.EncryptFileName(fileName) output += fmt.Sprintln(fileName, "\t", encryptedFileName) } fmt.Print(output) return nil } rclone-1.53.3/cmd/dbhashsum/000077500000000000000000000000001375552240400156155ustar00rootroot00000000000000rclone-1.53.3/cmd/dbhashsum/dbhashsum.go000066400000000000000000000020751375552240400201260ustar00rootroot00000000000000package dbhashsum import ( "context" "os" "github.com/rclone/rclone/backend/dropbox" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "dbhashsum remote:path", Short: `Produces a Dropbox hash file for all the objects in the path.`, Long: ` Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to [Dropbox content hash rules](https://www.dropbox.com/developers/reference/content-hash). The output is in the same format as md5sum and sha1sum.
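Note that this command is deprecated - rclone will log a suggestion to run the equivalent rclone hashsum DropboxHash remote:path instead.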
`, Hidden: true, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) fs.Logf(nil, `"rclone dbhashsum" is deprecated, use "rclone hashsum %v %s" instead`, dropbox.DbHashType, args[0]) cmd.Run(false, false, command, func() error { return operations.HashLister(context.Background(), dropbox.DbHashType, fsrc, os.Stdout) }) }, } rclone-1.53.3/cmd/dedupe/000077500000000000000000000000001375552240400151055ustar00rootroot00000000000000rclone-1.53.3/cmd/dedupe/dedupe.go000066400000000000000000000126211375552240400167040ustar00rootroot00000000000000package dedupe import ( "context" "log" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var ( dedupeMode = operations.DeduplicateInteractive ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlag := commandDefinition.Flags() flags.FVarP(cmdFlag, &dedupeMode, "dedupe-mode", "", "Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename.") } var commandDefinition = &cobra.Command{ Use: "dedupe [mode] remote:path", Short: `Interactively find duplicate filenames and delete/rename them.`, Long: ` By default ` + "`dedupe`" + ` interactively finds files with duplicate names and offers to delete all but one or rename them to be different. This is only useful with backends like Google Drive which can have duplicate file names. It can be run on wrapping backends (eg crypt) if they wrap a backend which supports duplicate file names. In the first pass it will merge directories with the same name. It will do this iteratively until all the identically named directories have been merged. In the second pass, for every group of duplicate file names, it will delete all but one identical files it finds without confirmation. This means that for most duplicated files the ` + "`dedupe`" + ` command will not be interactive. ` + "`dedupe`" + ` considers files to be identical if they have the same hash. If the backend does not support hashes (eg crypt wrapping Google Drive) then they will never be found to be identical. If you use the ` + "`--size-only`" + ` flag then files will be considered identical if they have the same size (any hash will be ignored). This can be useful on crypt backends which do not support hashes. **Important**: Since this can cause data loss, test first with the ` + "`--dry-run` or the `--interactive`/`-i`" + ` flag. Here is an example run. Before - with duplicates $ rclone lsl drive:dupes 6048320 2016-03-05 16:23:16.798000000 one.txt 6048320 2016-03-05 16:23:11.775000000 one.txt 564374 2016-03-05 16:23:06.731000000 one.txt 6048320 2016-03-05 16:18:26.092000000 one.txt 6048320 2016-03-05 16:22:46.185000000 two.txt 1744073 2016-03-05 16:22:38.104000000 two.txt 564374 2016-03-05 16:22:52.118000000 two.txt Now the ` + "`dedupe`" + ` session $ rclone dedupe drive:dupes 2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode. 
one.txt: Found 4 files with duplicate names one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36") one.txt: 2 duplicates remain 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 s) Skip and do nothing k) Keep just one (choose which in next step) r) Rename all to be different (by changing file.jpg to file-1.jpg) s/k/r> k Enter the number of the file to keep> 1 one.txt: Deleted 1 extra copies two.txt: Found 3 files with duplicates names two.txt: 3 duplicates remain 1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802 s) Skip and do nothing k) Keep just one (choose which in next step) r) Rename all to be different (by changing file.jpg to file-1.jpg) s/k/r> r two-1.txt: renamed from: two.txt two-2.txt: renamed from: two.txt two-3.txt: renamed from: two.txt The result being $ rclone lsl drive:dupes 6048320 2016-03-05 16:23:16.798000000 one.txt 564374 2016-03-05 16:22:52.118000000 two-1.txt 6048320 2016-03-05 16:22:46.185000000 two-2.txt 1744073 2016-03-05 16:22:38.104000000 two-3.txt Dedupe can be run non interactively using the ` + "`" + `--dedupe-mode` + "`" + ` flag or by using an extra parameter with the same value * ` + "`" + `--dedupe-mode interactive` + "`" + ` - interactive as above. * ` + "`" + `--dedupe-mode skip` + "`" + ` - removes identical files then skips anything left. * ` + "`" + `--dedupe-mode first` + "`" + ` - removes identical files then keeps the first one. * ` + "`" + `--dedupe-mode newest` + "`" + ` - removes identical files then keeps the newest one. * ` + "`" + `--dedupe-mode oldest` + "`" + ` - removes identical files then keeps the oldest one. * ` + "`" + `--dedupe-mode largest` + "`" + ` - removes identical files then keeps the largest one. * ` + "`" + `--dedupe-mode smallest` + "`" + ` - removes identical files then keeps the smallest one. * ` + "`" + `--dedupe-mode rename` + "`" + ` - removes identical files then renames the rest to be different. 
For example to rename all the identically named photos in your Google Photos directory, do rclone dedupe --dedupe-mode rename "drive:Google Photos" Or rclone dedupe rename "drive:Google Photos" `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 2, command, args) if len(args) > 1 { err := dedupeMode.Set(args[0]) if err != nil { log.Fatal(err) } args = args[1:] } fdst := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { return operations.Deduplicate(context.Background(), fdst, dedupeMode) }) }, } rclone-1.53.3/cmd/delete/000077500000000000000000000000001375552240400151015ustar00rootroot00000000000000rclone-1.53.3/cmd/delete/delete.go000066400000000000000000000034721375552240400167000ustar00rootroot00000000000000package delete import ( "context" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var ( rmdirs = false ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &rmdirs, "rmdirs", "", rmdirs, "rmdirs removes empty directories but leaves root intact") } var commandDefinition = &cobra.Command{ Use: "delete remote:path", Short: `Remove the contents of path.`, Long: ` Remove the files in path. Unlike ` + "`" + `purge` + "`" + ` it obeys include/exclude filters so can be used to selectively delete files. ` + "`" + `rclone delete` + "`" + ` only deletes objects but leaves the directory structure alone. If you want to delete a directory and all of its contents use ` + "`" + `rclone purge` + "`" + ` If you supply the --rmdirs flag, it will remove all empty directories along with it. Eg delete all files bigger than 100MBytes Check what would be deleted first (use either) rclone --min-size 100M lsl remote:path rclone --dry-run --min-size 100M delete remote:path Then delete rclone --min-size 100M delete remote:path That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes. **Important**: Since this can cause data loss, test first with the ` + "`--dry-run` or the `--interactive`/`-i`" + ` flag. `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(true, false, command, func() error { if err := operations.Delete(context.Background(), fsrc); err != nil { return err } if rmdirs { fdst := cmd.NewFsDir(args) return operations.Rmdirs(context.Background(), fdst, "", true) } return nil }) }, } rclone-1.53.3/cmd/deletefile/000077500000000000000000000000001375552240400157415ustar00rootroot00000000000000rclone-1.53.3/cmd/deletefile/deletefile.go000066400000000000000000000020011375552240400203630ustar00rootroot00000000000000package deletefile import ( "context" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "deletefile remote:path", Short: `Remove a single file from remote.`, Long: ` Remove a single file from remote. Unlike ` + "`" + `delete` + "`" + ` it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed. 
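For example (an illustrative path): rclone deletefile remote:path/to/file.txt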
`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fs, fileName := cmd.NewFsFile(args[0]) cmd.Run(true, false, command, func() error { if fileName == "" { return errors.Errorf("%s is a directory or doesn't exist", args[0]) } fileObj, err := fs.NewObject(context.Background(), fileName) if err != nil { return err } return operations.DeleteFile(context.Background(), fileObj) }) }, } rclone-1.53.3/cmd/genautocomplete/000077500000000000000000000000001375552240400170325ustar00rootroot00000000000000rclone-1.53.3/cmd/genautocomplete/genautocomplete.go000066400000000000000000000006141375552240400225550ustar00rootroot00000000000000package genautocomplete import ( "github.com/rclone/rclone/cmd" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(completionDefinition) } var completionDefinition = &cobra.Command{ Use: "genautocomplete [shell]", Short: `Output completion script for a given shell.`, Long: ` Generates a shell completion script for rclone. Run with --help to list the supported shells. `, } rclone-1.53.3/cmd/genautocomplete/genautocomplete_bash.go000066400000000000000000000017131375552240400235530ustar00rootroot00000000000000package genautocomplete import ( "log" "github.com/rclone/rclone/cmd" "github.com/spf13/cobra" ) func init() { completionDefinition.AddCommand(bashCommandDefinition) } var bashCommandDefinition = &cobra.Command{ Use: "bash [output_file]", Short: `Output bash completion script for rclone.`, Long: ` Generates a bash shell autocompletion script for rclone. This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete bash Logout and login again to use the autocompletion scripts, or source them directly . /etc/bash_completion If you supply a command line argument the script will be written there. `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(0, 1, command, args) out := "/etc/bash_completion.d/rclone" if len(args) > 0 { out = args[0] } err := cmd.Root.GenBashCompletionFile(out) if err != nil { log.Fatal(err) } }, } rclone-1.53.3/cmd/genautocomplete/genautocomplete_fish.go000066400000000000000000000017401375552240400235670ustar00rootroot00000000000000package genautocomplete import ( "log" "github.com/rclone/rclone/cmd" "github.com/spf13/cobra" ) func init() { completionDefinition.AddCommand(fishCommandDefinition) } var fishCommandDefinition = &cobra.Command{ Use: "fish [output_file]", Short: `Output fish completion script for rclone.`, Long: ` Generates a fish autocompletion script for rclone. This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete fish Logout and login again to use the autocompletion scripts, or source them directly . /etc/fish/completions/rclone.fish If you supply a command line argument the script will be written there. 
`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(0, 1, command, args) out := "/etc/fish/completions/rclone.fish" if len(args) > 0 { out = args[0] } err := cmd.Root.GenFishCompletionFile(out, true) if err != nil { log.Fatal(err) } }, } rclone-1.53.3/cmd/genautocomplete/genautocomplete_test.go000066400000000000000000000024111375552240400236110ustar00rootroot00000000000000package genautocomplete import ( "io/ioutil" "os" "testing" "github.com/stretchr/testify/assert" ) func TestCompletionBash(t *testing.T) { tempFile, err := ioutil.TempFile("", "completion_bash") assert.NoError(t, err) defer func() { _ = tempFile.Close() }() defer func() { _ = os.Remove(tempFile.Name()) }() bashCommandDefinition.Run(bashCommandDefinition, []string{tempFile.Name()}) bs, err := ioutil.ReadFile(tempFile.Name()) assert.NoError(t, err) assert.NotEmpty(t, string(bs)) } func TestCompletionZsh(t *testing.T) { tempFile, err := ioutil.TempFile("", "completion_zsh") assert.NoError(t, err) defer func() { _ = tempFile.Close() }() defer func() { _ = os.Remove(tempFile.Name()) }() zshCommandDefinition.Run(zshCommandDefinition, []string{tempFile.Name()}) bs, err := ioutil.ReadFile(tempFile.Name()) assert.NoError(t, err) assert.NotEmpty(t, string(bs)) } func TestCompletionFish(t *testing.T) { tempFile, err := ioutil.TempFile("", "completion_fish") assert.NoError(t, err) defer func() { _ = tempFile.Close() }() defer func() { _ = os.Remove(tempFile.Name()) }() fishCommandDefinition.Run(fishCommandDefinition, []string{tempFile.Name()}) bs, err := ioutil.ReadFile(tempFile.Name()) assert.NoError(t, err) assert.NotEmpty(t, string(bs)) } rclone-1.53.3/cmd/genautocomplete/genautocomplete_zsh.go000066400000000000000000000021271375552240400234420ustar00rootroot00000000000000package genautocomplete import ( "log" "os" "github.com/rclone/rclone/cmd" "github.com/spf13/cobra" ) func init() { completionDefinition.AddCommand(zshCommandDefinition) } var zshCommandDefinition = &cobra.Command{ Use: "zsh [output_file]", Short: `Output zsh completion script for rclone.`, Long: ` Generates a zsh autocompletion script for rclone. This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete zsh Logout and login again to use the autocompletion scripts, or source them directly autoload -U compinit && compinit If you supply a command line argument the script will be written there. 
`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(0, 1, command, args) out := "/usr/share/zsh/vendor-completions/_rclone" if len(args) > 0 { out = args[0] } outFile, err := os.Create(out) if err != nil { log.Fatal(err) } defer func() { _ = outFile.Close() }() err = cmd.Root.GenZshCompletion(outFile) if err != nil { log.Fatal(err) } }, } rclone-1.53.3/cmd/gendocs/000077500000000000000000000000001375552240400152615ustar00rootroot00000000000000rclone-1.53.3/cmd/gendocs/gendocs.go000066400000000000000000000075731375552240400172460ustar00rootroot00000000000000package gendocs import ( "bytes" "io/ioutil" "log" "os" "path" "path/filepath" "regexp" "strings" "text/template" "time" "github.com/rclone/rclone/cmd" "github.com/spf13/cobra" "github.com/spf13/cobra/doc" "github.com/spf13/pflag" ) func init() { cmd.Root.AddCommand(commandDefinition) } // define things which go into the frontmatter type frontmatter struct { Date string Title string Description string Slug string URL string Source string } var frontmatterTemplate = template.Must(template.New("frontmatter").Parse(`--- title: "{{ .Title }}" description: "{{ .Description }}" slug: {{ .Slug }} url: {{ .URL }} # autogenerated - DO NOT EDIT, instead edit the source code in {{ .Source }} and as part of making a release run "make commanddocs" --- `)) var commandDefinition = &cobra.Command{ Use: "gendocs output_directory", Short: `Output markdown docs for rclone to the directory supplied.`, Long: ` This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.`, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(1, 1, command, args) now := time.Now().Format(time.RFC3339) // Create the directory structure root := args[0] out := filepath.Join(root, "commands") err := os.MkdirAll(out, 0777) if err != nil { return err } // Write the flags page var buf bytes.Buffer cmd.Root.SetOutput(&buf) cmd.Root.SetArgs([]string{"help", "flags"}) cmd.GeneratingDocs = true err = cmd.Root.Execute() if err != nil { return err } err = ioutil.WriteFile(filepath.Join(root, "flags.md"), buf.Bytes(), 0777) if err != nil { return err } // Look up name => description for prepender var description = map[string]string{} var addDescription func(root *cobra.Command) addDescription = func(root *cobra.Command) { name := strings.Replace(root.CommandPath(), " ", "_", -1) + ".md" description[name] = root.Short for _, c := range root.Commands() { addDescription(c) } } addDescription(cmd.Root) // markup for the docs files prepender := func(filename string) string { name := filepath.Base(filename) base := strings.TrimSuffix(name, path.Ext(name)) data := frontmatter{ Date: now, Title: strings.Replace(base, "_", " ", -1), Description: description[name], Slug: base, URL: "/commands/" + strings.ToLower(base) + "/", Source: strings.Replace(strings.Replace(base, "rclone", "cmd", -1), "_", "/", -1) + "/", } var buf bytes.Buffer err := frontmatterTemplate.Execute(&buf, data) if err != nil { log.Fatalf("Failed to render frontmatter template: %v", err) } return buf.String() } linkHandler := func(name string) string { base := strings.TrimSuffix(name, path.Ext(name)) return "/commands/" + strings.ToLower(base) + "/" } // Hide all of the root entries flags cmd.Root.Flags().VisitAll(func(flag *pflag.Flag) { flag.Hidden = true }) err = doc.GenMarkdownTreeCustom(cmd.Root, out, prepender, linkHandler) if err != nil { return err } var outdentTitle = 
regexp.MustCompile(`(?m)^#(#+)`) // Munge the files to add a link to the global flags page err = filepath.Walk(out, func(path string, info os.FileInfo, err error) error { if err != nil { return err } if !info.IsDir() { b, err := ioutil.ReadFile(path) if err != nil { return err } doc := string(b) doc = strings.Replace(doc, "\n### SEE ALSO", ` See the [global flags page](/flags/) for global options not listed here. ### SEE ALSO`, 1) // outdent all the titles by one doc = outdentTitle.ReplaceAllString(doc, `$1`) err = ioutil.WriteFile(path, []byte(doc), 0777) if err != nil { return err } } return nil }) if err != nil { return err } return nil }, } rclone-1.53.3/cmd/hashsum/000077500000000000000000000000001375552240400153075ustar00rootroot00000000000000rclone-1.53.3/cmd/hashsum/hashsum.go000066400000000000000000000032511375552240400173070ustar00rootroot00000000000000package hashsum import ( "context" "errors" "fmt" "os" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var ( outputBase64 = false ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &outputBase64, "base64", "", outputBase64, "Output base64 encoded hashsum") } var commandDefinition = &cobra.Command{ Use: "hashsum remote:path", Short: `Produces a hashsum file for all the objects in the path.`, Long: ` Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool. Run without a hash to see the list of supported hashes, eg $ rclone hashsum Supported hashes are: * MD5 * SHA-1 * DropboxHash * QuickXorHash Then $ rclone hashsum MD5 remote:path `, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(0, 2, command, args) if len(args) == 0 { fmt.Printf("Supported hashes are:\n") for _, ht := range hash.Supported().Array() { fmt.Printf(" * %v\n", ht) } return nil } else if len(args) == 1 { return errors.New("need hash type and remote") } var ht hash.Type err := ht.Set(args[0]) if err != nil { return err } fsrc := cmd.NewFsSrc(args[1:]) cmd.Run(false, false, command, func() error { if outputBase64 { return operations.HashListerBase64(context.Background(), ht, fsrc, os.Stdout) } return operations.HashLister(context.Background(), ht, fsrc, os.Stdout) }) return nil }, } rclone-1.53.3/cmd/help.go000066400000000000000000000240501375552240400151170ustar00rootroot00000000000000package cmd import ( "fmt" "log" "os" "regexp" "strings" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configflags" "github.com/rclone/rclone/fs/filter/filterflags" "github.com/rclone/rclone/fs/log/logflags" "github.com/rclone/rclone/fs/rc/rcflags" "github.com/rclone/rclone/lib/atexit" "github.com/spf13/cobra" "github.com/spf13/pflag" ) // Root is the main rclone command var Root = &cobra.Command{ Use: "rclone", Short: "Show help for rclone commands, flags and backends.", Long: ` Rclone syncs files to and from cloud storage providers as well as mounting them, listing them in lots of different ways. See the home page (https://rclone.org/) for installation, usage, documentation, changelog and configuration walkthroughs. 
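A few illustrative first commands (each is fully documented under its own help entry) rclone config - interactively set up a new remote rclone copy source:path dest:path - copy files from source to dest rclone ls remote:path - list the objects in the path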
`, PersistentPostRun: func(cmd *cobra.Command, args []string) { fs.Debugf("rclone", "Version %q finishing with parameters %q", fs.Version, os.Args) atexit.Run() }, BashCompletionFunction: bashCompletionFunc, DisableAutoGenTag: true, } const ( bashCompletionFunc = ` __rclone_custom_func() { if [[ ${#COMPREPLY[@]} -eq 0 ]]; then local cur cword prev words if declare -F _init_completion > /dev/null; then _init_completion -n : || return else __rclone_init_completion -n : || return fi local rclone=(command rclone --ask-password=false) if [[ $cur != *:* ]]; then local ifs=$IFS IFS=$'\n' local remotes=($("${rclone[@]}" listremotes 2> /dev/null)) IFS=$ifs local remote for remote in "${remotes[@]}"; do [[ $remote != $cur* ]] || COMPREPLY+=("$remote") done if [[ ${COMPREPLY[@]} ]]; then local paths=("$cur"*) [[ ! -f ${paths[0]} ]] || COMPREPLY+=("${paths[@]}") fi else local path=${cur#*:} if [[ $path == */* ]]; then local prefix=$(eval printf '%s' "${path%/*}") else local prefix= fi local ifs=$IFS IFS=$'\n' local lines=($("${rclone[@]}" lsf "${cur%%:*}:$prefix" 2> /dev/null)) IFS=$ifs local line for line in "${lines[@]}"; do local reply=${prefix:+$prefix/}$line [[ $reply != $path* ]] || COMPREPLY+=("$reply") done [[ ! ${COMPREPLY[@]} || $(type -t compopt) != builtin ]] || compopt -o filenames fi [[ ! ${COMPREPLY[@]} || $(type -t compopt) != builtin ]] || compopt -o nospace fi } ` ) // GeneratingDocs is set by rclone gendocs to alter the format of the // output suitable for the documentation. var GeneratingDocs = false // root help command var helpCommand = &cobra.Command{ Use: "help", Short: Root.Short, Long: Root.Long, Run: func(command *cobra.Command, args []string) { Root.SetOutput(os.Stdout) _ = Root.Usage() }, } // to filter the flags with var flagsRe *regexp.Regexp // Show the flags var helpFlags = &cobra.Command{ Use: "flags []", Short: "Show the global flags for rclone", Run: func(command *cobra.Command, args []string) { if len(args) > 0 { re, err := regexp.Compile(args[0]) if err != nil { log.Fatalf("Failed to compile flags regexp: %v", err) } flagsRe = re } if GeneratingDocs { Root.SetUsageTemplate(docFlagsTemplate) } else { Root.SetOutput(os.Stdout) } _ = command.Usage() }, } // Show the backends var helpBackends = &cobra.Command{ Use: "backends", Short: "List the backends available", Run: func(command *cobra.Command, args []string) { showBackends() }, } // Show a single backend var helpBackend = &cobra.Command{ Use: "backend ", Short: "List full info about a backend", Run: func(command *cobra.Command, args []string) { if len(args) == 0 { Root.SetOutput(os.Stdout) _ = command.Usage() return } showBackend(args[0]) }, } // runRoot implements the main rclone command with no subcommands func runRoot(cmd *cobra.Command, args []string) { if version { ShowVersion() resolveExitCode(nil) } else { _ = cmd.Usage() if len(args) > 0 { _, _ = fmt.Fprintf(os.Stderr, "Command not found.\n") } resolveExitCode(errorCommandNotFound) } } // setupRootCommand sets default usage, help, and error handling for // the root command. 
// // Helpful example: http://rtfcode.com/xref/moby-17.03.2-ce/cli/cobra.go func setupRootCommand(rootCmd *cobra.Command) { // Add global flags configflags.AddFlags(pflag.CommandLine) filterflags.AddFlags(pflag.CommandLine) rcflags.AddFlags(pflag.CommandLine) logflags.AddFlags(pflag.CommandLine) Root.Run = runRoot Root.Flags().BoolVarP(&version, "version", "V", false, "Print the version number") cobra.AddTemplateFunc("showGlobalFlags", func(cmd *cobra.Command) bool { return cmd.CalledAs() == "flags" }) cobra.AddTemplateFunc("showCommands", func(cmd *cobra.Command) bool { return cmd.CalledAs() != "flags" }) cobra.AddTemplateFunc("showLocalFlags", func(cmd *cobra.Command) bool { // Don't show local flags (which are the global ones on the root) on "rclone" and // "rclone help" (which shows the global help) return cmd.CalledAs() != "rclone" && cmd.CalledAs() != "" }) cobra.AddTemplateFunc("backendFlags", func(cmd *cobra.Command, include bool) *pflag.FlagSet { backendFlagSet := pflag.NewFlagSet("Backend Flags", pflag.ExitOnError) cmd.InheritedFlags().VisitAll(func(flag *pflag.Flag) { matched := flagsRe == nil || flagsRe.MatchString(flag.Name) if _, ok := backendFlags[flag.Name]; matched && ok == include { backendFlagSet.AddFlag(flag) } }) return backendFlagSet }) rootCmd.SetUsageTemplate(usageTemplate) // rootCmd.SetHelpTemplate(helpTemplate) // rootCmd.SetFlagErrorFunc(FlagErrorFunc) rootCmd.SetHelpCommand(helpCommand) // rootCmd.PersistentFlags().BoolP("help", "h", false, "Print usage") // rootCmd.PersistentFlags().MarkShorthandDeprecated("help", "please use --help") rootCmd.AddCommand(helpCommand) helpCommand.AddCommand(helpFlags) helpCommand.AddCommand(helpBackends) helpCommand.AddCommand(helpBackend) cobra.OnInitialize(initConfig) } var usageTemplate = `Usage:{{if .Runnable}} {{.UseLine}}{{end}}{{if .HasAvailableSubCommands}} {{.CommandPath}} [command]{{end}}{{if gt (len .Aliases) 0}} Aliases: {{.NameAndAliases}}{{end}}{{if .HasExample}} Examples: {{.Example}}{{end}}{{if and (showCommands .) .HasAvailableSubCommands}} Available Commands:{{range .Commands}}{{if (or .IsAvailableCommand (eq .Name "help"))}} {{rpad .Name .NamePadding }} {{.Short}}{{end}}{{end}}{{end}}{{if and (showLocalFlags .) .HasAvailableLocalFlags}} Flags: {{.LocalFlags.FlagUsages | trimTrailingWhitespaces}}{{end}}{{if and (showGlobalFlags .) .HasAvailableInheritedFlags}} Global Flags: {{(backendFlags . false).FlagUsages | trimTrailingWhitespaces}} Backend Flags: {{(backendFlags . true).FlagUsages | trimTrailingWhitespaces}}{{end}}{{if .HasHelpSubCommands}} Additional help topics:{{range .Commands}}{{if .IsAdditionalHelpTopicCommand}} {{rpad .CommandPath .CommandPathPadding}} {{.Short}}{{end}}{{end}}{{end}} Use "rclone [command] --help" for more information about a command. Use "rclone help flags" to see the global flags. Use "rclone help backends" for a list of supported services. ` var docFlagsTemplate = `--- title: "Global Flags" description: "Rclone Global Flags" --- # Global Flags This describes the global flags available to every rclone command split into two groups, non backend and backend flags. ## Non Backend Flags These flags are available for every command. ` + "```" + ` {{(backendFlags . false).FlagUsages | trimTrailingWhitespaces}} ` + "```" + ` ## Backend Flags These flags are available for every command. They control the backends and may be set in the config file. ` + "```" + ` {{(backendFlags . 
true).FlagUsages | trimTrailingWhitespaces}} ` + "```" + ` ` // show all the backends func showBackends() { fmt.Printf("All rclone backends:\n\n") for _, backend := range fs.Registry { fmt.Printf(" %-12s %s\n", backend.Prefix, backend.Description) } fmt.Printf("\nTo see more info about a particular backend use:\n") fmt.Printf(" rclone help backend \n") } func quoteString(v interface{}) string { switch v.(type) { case string: return fmt.Sprintf("%q", v) } return fmt.Sprint(v) } // show a single backend func showBackend(name string) { backend, err := fs.Find(name) if err != nil { log.Fatal(err) } var standardOptions, advancedOptions fs.Options done := map[string]struct{}{} for _, opt := range backend.Options { // Skip if done already (eg with Provider options) if _, doneAlready := done[opt.Name]; doneAlready { continue } if opt.Advanced { advancedOptions = append(advancedOptions, opt) } else { standardOptions = append(standardOptions, opt) } } optionsType := "standard" for _, opts := range []fs.Options{standardOptions, advancedOptions} { if len(opts) == 0 { optionsType = "advanced" continue } fmt.Printf("### %s Options\n\n", strings.Title(optionsType)) fmt.Printf("Here are the %s options specific to %s (%s).\n\n", optionsType, backend.Name, backend.Description) optionsType = "advanced" for _, opt := range opts { done[opt.Name] = struct{}{} shortOpt := "" if opt.ShortOpt != "" { shortOpt = fmt.Sprintf(" / -%s", opt.ShortOpt) } fmt.Printf("#### --%s%s\n\n", opt.FlagName(backend.Prefix), shortOpt) fmt.Printf("%s\n\n", opt.Help) if opt.IsPassword { fmt.Printf("**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).\n\n") } fmt.Printf("- Config: %s\n", opt.Name) fmt.Printf("- Env Var: %s\n", opt.EnvVarName(backend.Prefix)) fmt.Printf("- Type: %s\n", opt.Type()) fmt.Printf("- Default: %s\n", quoteString(opt.GetValue())) if len(opt.Examples) > 0 { fmt.Printf("- Examples:\n") for _, ex := range opt.Examples { fmt.Printf(" - %s\n", quoteString(ex.Value)) for _, line := range strings.Split(ex.Help, "\n") { fmt.Printf(" - %s\n", line) } } } fmt.Printf("\n") } } } rclone-1.53.3/cmd/info/000077500000000000000000000000001375552240400145725ustar00rootroot00000000000000rclone-1.53.3/cmd/info/all.sh000077500000000000000000000007351375552240400157060ustar00rootroot00000000000000#!/usr/bin/env bash exec rclone --check-normalization=true --check-control=true --check-length=true info \ /tmp/testInfo \ TestAmazonCloudDrive:testInfo \ TestB2:testInfo \ TestCryptDrive:testInfo \ TestCryptSwift:testInfo \ TestDrive:testInfo \ TestDropbox:testInfo \ TestGoogleCloudStorage:rclone-testinfo \ TestOneDrive:testInfo \ TestS3:rclone-testinfo \ TestSftp:testInfo \ TestSwift:testInfo \ TestYandex:testInfo \ TestFTP:testInfo # TestHubic:testInfo \ rclone-1.53.3/cmd/info/info.go000066400000000000000000000304211375552240400160540ustar00rootroot00000000000000package info // FIXME once translations are implemented will need a no-escape // option for Put so we can make these tests work again import ( "bytes" "context" "encoding/json" "fmt" "io" "os" "path" "regexp" "sort" "strconv" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/info/internal" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/object" "github.com/rclone/rclone/lib/random" "github.com/spf13/cobra" ) var ( writeJSON string checkNormalization bool checkControl bool checkLength bool 
checkStreaming bool uploadWait time.Duration positionLeftRe = regexp.MustCompile(`(?s)^(.*)-position-left-([[:xdigit:]]+)$`) positionMiddleRe = regexp.MustCompile(`(?s)^position-middle-([[:xdigit:]]+)-(.*)-$`) positionRightRe = regexp.MustCompile(`(?s)^position-right-([[:xdigit:]]+)-(.*)$`) ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.StringVarP(cmdFlags, &writeJSON, "write-json", "", "", "Write results to file.") flags.BoolVarP(cmdFlags, &checkNormalization, "check-normalization", "", true, "Check UTF-8 Normalization.") flags.BoolVarP(cmdFlags, &checkControl, "check-control", "", true, "Check control characters.") flags.DurationVarP(cmdFlags, &uploadWait, "upload-wait", "", 0, "Wait after writing a file.") flags.BoolVarP(cmdFlags, &checkLength, "check-length", "", true, "Check max filename length.") flags.BoolVarP(cmdFlags, &checkStreaming, "check-streaming", "", true, "Check uploads with indeterminate file size.") } var commandDefinition = &cobra.Command{ Use: "info [remote:path]+", Short: `Discovers file name or other limitations for paths.`, Long: `rclone info discovers what filenames and upload methods are possible to write to the paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of go code for each one. `, Hidden: true, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1e6, command, args) for i := range args { f := cmd.NewFsDir(args[i : i+1]) cmd.Run(false, false, command, func() error { return readInfo(context.Background(), f) }) } }, } type results struct { ctx context.Context f fs.Fs mu sync.Mutex stringNeedsEscaping map[string]internal.Position controlResults map[string]internal.ControlResult maxFileLength int canWriteUnnormalized bool canReadUnnormalized bool canReadRenormalized bool canStream bool } func newResults(ctx context.Context, f fs.Fs) *results { return &results{ ctx: ctx, f: f, stringNeedsEscaping: make(map[string]internal.Position), controlResults: make(map[string]internal.ControlResult), } } // Print the results to stdout func (r *results) Print() { fmt.Printf("// %s\n", r.f.Name()) if checkControl { escape := []string{} for c, needsEscape := range r.stringNeedsEscaping { if needsEscape != internal.PositionNone { k := strconv.Quote(c) k = k[1 : len(k)-1] escape = append(escape, fmt.Sprintf("'%s'", k)) } } sort.Strings(escape) fmt.Printf("stringNeedsEscaping = []rune{\n") fmt.Printf("\t%s\n", strings.Join(escape, ", ")) fmt.Printf("}\n") } if checkLength { fmt.Printf("maxFileLength = %d\n", r.maxFileLength) } if checkNormalization { fmt.Printf("canWriteUnnormalized = %v\n", r.canWriteUnnormalized) fmt.Printf("canReadUnnormalized = %v\n", r.canReadUnnormalized) fmt.Printf("canReadRenormalized = %v\n", r.canReadRenormalized) } if checkStreaming { fmt.Printf("canStream = %v\n", r.canStream) } } // WriteJSON writes the results to a JSON file when requested func (r *results) WriteJSON() { if writeJSON == "" { return } report := internal.InfoReport{ Remote: r.f.Name(), } if checkControl { report.ControlCharacters = &r.controlResults } if checkLength { report.MaxFileLength = &r.maxFileLength } if checkNormalization { report.CanWriteUnnormalized = &r.canWriteUnnormalized report.CanReadUnnormalized = &r.canReadUnnormalized report.CanReadRenormalized = &r.canReadRenormalized } if checkStreaming { report.CanStream = &r.canStream } if f, err := os.Create(writeJSON); err != nil { fs.Errorf(r.f, "Creating JSON file 
failed: %s", err) } else { defer fs.CheckClose(f, &err) enc := json.NewEncoder(f) enc.SetIndent("", " ") err := enc.Encode(report) if err != nil { fs.Errorf(r.f, "Writing JSON file failed: %s", err) } } fs.Infof(r.f, "Wrote JSON file: %s", writeJSON) } // writeFile writes a file with some random contents func (r *results) writeFile(path string) (fs.Object, error) { contents := random.String(50) src := object.NewStaticObjectInfo(path, time.Now(), int64(len(contents)), true, nil, r.f) obj, err := r.f.Put(r.ctx, bytes.NewBufferString(contents), src) if uploadWait > 0 { time.Sleep(uploadWait) } return obj, err } // check whether normalization is enforced and check whether it is // done on the files anyway func (r *results) checkUTF8Normalization() { unnormalized := "Héroique" normalized := "Héroique" _, err := r.writeFile(unnormalized) if err != nil { r.canWriteUnnormalized = false return } r.canWriteUnnormalized = true _, err = r.f.NewObject(r.ctx, unnormalized) if err == nil { r.canReadUnnormalized = true } _, err = r.f.NewObject(r.ctx, normalized) if err == nil { r.canReadRenormalized = true } } func (r *results) checkStringPositions(k, s string) { fs.Infof(r.f, "Writing position file 0x%0X", s) positionError := internal.PositionNone res := internal.ControlResult{ Text: s, WriteError: make(map[internal.Position]string, 3), GetError: make(map[internal.Position]string, 3), InList: make(map[internal.Position]internal.Presence, 3), } for _, pos := range internal.PositionList { path := "" switch pos { case internal.PositionMiddle: path = fmt.Sprintf("position-middle-%0X-%s-", s, s) case internal.PositionLeft: path = fmt.Sprintf("%s-position-left-%0X", s, s) case internal.PositionRight: path = fmt.Sprintf("position-right-%0X-%s", s, s) default: panic("invalid position: " + pos.String()) } _, writeError := r.writeFile(path) if writeError != nil { res.WriteError[pos] = writeError.Error() fs.Infof(r.f, "Writing %s position file 0x%0X Error: %s", pos.String(), s, writeError) } else { fs.Infof(r.f, "Writing %s position file 0x%0X OK", pos.String(), s) } obj, getErr := r.f.NewObject(r.ctx, path) if getErr != nil { res.GetError[pos] = getErr.Error() fs.Infof(r.f, "Getting %s position file 0x%0X Error: %s", pos.String(), s, getErr) } else { if obj.Size() != 50 { res.GetError[pos] = fmt.Sprintf("invalid size %d", obj.Size()) fs.Infof(r.f, "Getting %s position file 0x%0X Invalid Size: %d", pos.String(), s, obj.Size()) } else { fs.Infof(r.f, "Getting %s position file 0x%0X OK", pos.String(), s) } } if writeError != nil || getErr != nil { positionError += pos } } r.mu.Lock() r.stringNeedsEscaping[k] = positionError r.controlResults[k] = res r.mu.Unlock() } // check we can write a file with the control chars func (r *results) checkControls() { fs.Infof(r.f, "Trying to create control character file names") // Concurrency control tokens := make(chan struct{}, fs.Config.Checkers) for i := 0; i < fs.Config.Checkers; i++ { tokens <- struct{}{} } var wg sync.WaitGroup for i := rune(0); i < 128; i++ { s := string(i) if i == 0 || i == '/' { // We're not even going to check NULL or / r.stringNeedsEscaping[s] = internal.PositionAll continue } wg.Add(1) go func(s string) { defer wg.Done() token := <-tokens k := s r.checkStringPositions(k, s) tokens <- token }(s) } for _, s := range []string{"\", "\u00A0", "\xBF", "\xFE"} { wg.Add(1) go func(s string) { defer wg.Done() token := <-tokens k := s r.checkStringPositions(k, s) tokens <- token }(s) } wg.Wait() r.checkControlsList() fs.Infof(r.f, "Done trying to create control 
character file names") } func (r *results) checkControlsList() { l, err := r.f.List(context.TODO(), "") if err != nil { fs.Errorf(r.f, "Listing control character file names failed: %s", err) return } namesMap := make(map[string]struct{}, len(l)) for _, s := range l { namesMap[path.Base(s.Remote())] = struct{}{} } for path := range namesMap { var pos internal.Position var hex, value string if g := positionLeftRe.FindStringSubmatch(path); g != nil { pos, hex, value = internal.PositionLeft, g[2], g[1] } else if g := positionMiddleRe.FindStringSubmatch(path); g != nil { pos, hex, value = internal.PositionMiddle, g[1], g[2] } else if g := positionRightRe.FindStringSubmatch(path); g != nil { pos, hex, value = internal.PositionRight, g[1], g[2] } else { fs.Infof(r.f, "Unknown path %q", path) continue } var hexValue []byte for ; len(hex) >= 2; hex = hex[2:] { if b, err := strconv.ParseUint(hex[:2], 16, 8); err != nil { fs.Infof(r.f, "Invalid path %q: %s", path, err) continue } else { hexValue = append(hexValue, byte(b)) } } if hex != "" { fs.Infof(r.f, "Invalid path %q", path) continue } hexStr := string(hexValue) k := hexStr switch r.controlResults[k].InList[pos] { case internal.Absent: if hexStr == value { r.controlResults[k].InList[pos] = internal.Present } else { r.controlResults[k].InList[pos] = internal.Renamed } case internal.Present: r.controlResults[k].InList[pos] = internal.Multiple case internal.Renamed: r.controlResults[k].InList[pos] = internal.Multiple } delete(namesMap, path) } if len(namesMap) > 0 { fs.Infof(r.f, "Found additional control character file names:") for name := range namesMap { fs.Infof(r.f, "%q", name) } } } // find the max file name size we can use func (r *results) findMaxLength() { const maxLen = 16 * 1024 name := make([]byte, maxLen) for i := range name { name[i] = 'a' } // Find the first size of filename we can't write i := sort.Search(len(name), func(i int) (fail bool) { defer func() { if err := recover(); err != nil { fs.Infof(r.f, "Couldn't write file with name length %d: %v", i, err) fail = true } }() path := string(name[:i]) _, err := r.writeFile(path) if err != nil { fs.Infof(r.f, "Couldn't write file with name length %d: %v", i, err) return true } fs.Infof(r.f, "Wrote file with name length %d", i) return false }) r.maxFileLength = i - 1 fs.Infof(r.f, "Max file length is %d", r.maxFileLength) } func (r *results) checkStreaming() { putter := r.f.Put if r.f.Features().PutStream != nil { fs.Infof(r.f, "Given remote has specialized streaming function. 
Using that to test streaming.") putter = r.f.Features().PutStream } contents := "thinking of test strings is hard" buf := bytes.NewBufferString(contents) hashIn := hash.NewMultiHasher() in := io.TeeReader(buf, hashIn) objIn := object.NewStaticObjectInfo("checkStreamingTest", time.Now(), -1, true, nil, r.f) objR, err := putter(r.ctx, in, objIn) if err != nil { fs.Infof(r.f, "Streamed file failed to upload (%v)", err) r.canStream = false return } hashes := hashIn.Sums() types := objR.Fs().Hashes().Array() for _, Hash := range types { sum, err := objR.Hash(r.ctx, Hash) if err != nil { fs.Infof(r.f, "Streamed file failed when getting hash %v (%v)", Hash, err) r.canStream = false return } if !hash.Equals(hashes[Hash], sum) { fs.Infof(r.f, "Streamed file has incorrect hash %v: expecting %q got %q", Hash, hashes[Hash], sum) r.canStream = false return } } if int64(len(contents)) != objR.Size() { fs.Infof(r.f, "Streamed file has incorrect file size: expecting %d got %d", len(contents), objR.Size()) r.canStream = false return } r.canStream = true } func readInfo(ctx context.Context, f fs.Fs) error { err := f.Mkdir(ctx, "") if err != nil { return errors.Wrap(err, "couldn't mkdir") } r := newResults(ctx, f) if checkControl { r.checkControls() } if checkLength { r.findMaxLength() } if checkNormalization { r.checkUTF8Normalization() } if checkStreaming { r.checkStreaming() } r.Print() r.WriteJSON() return nil } rclone-1.53.3/cmd/info/internal/000077500000000000000000000000001375552240400164065ustar00rootroot00000000000000rclone-1.53.3/cmd/info/internal/build_csv/000077500000000000000000000000001375552240400203605ustar00rootroot00000000000000rclone-1.53.3/cmd/info/internal/build_csv/main.go000066400000000000000000000073141375552240400216400ustar00rootroot00000000000000package main import ( "encoding/csv" "encoding/json" "flag" "fmt" "io" "log" "os" "sort" "strconv" "github.com/rclone/rclone/cmd/info/internal" ) func main() { fOut := flag.String("o", "out.csv", "Output file") flag.Parse() args := flag.Args() remotes := make([]internal.InfoReport, 0, len(args)) for _, fn := range args { f, err := os.Open(fn) if err != nil { log.Fatalf("Unable to open %q: %s", fn, err) } var remote internal.InfoReport dec := json.NewDecoder(f) err = dec.Decode(&remote) if err != nil { log.Fatalf("Unable to decode %q: %s", fn, err) } if remote.ControlCharacters == nil { log.Printf("Skipping remote %s: no ControlCharacters", remote.Remote) } else { remotes = append(remotes, remote) } if err := f.Close(); err != nil { log.Fatalf("Closing %q failed: %s", fn, err) } } charsMap := make(map[string]string) var remoteNames []string for _, r := range remotes { remoteNames = append(remoteNames, r.Remote) for k, v := range *r.ControlCharacters { v.Text = k quoted := strconv.Quote(k) charsMap[k] = quoted[1 : len(quoted)-1] } } sort.Strings(remoteNames) chars := make([]string, 0, len(charsMap)) for k := range charsMap { chars = append(chars, k) } sort.Strings(chars) // char remote output recordsMap := make(map[string]map[string][]string) // remote output hRemoteMap := make(map[string][]string) hOperation := []string{"Write", "Write", "Write", "Get", "Get", "Get", "List", "List", "List"} hPosition := []string{"L", "M", "R", "L", "M", "R", "L", "M", "R"} // remote // write get list // left middle right left middle right left middle right for _, r := range remotes { hRemoteMap[r.Remote] = []string{r.Remote, "", "", "", "", "", "", "", ""} for k, v := range *r.ControlCharacters { cMap, ok := recordsMap[k] if !ok { cMap = 
make(map[string][]string, 1) recordsMap[k] = cMap } cMap[r.Remote] = []string{ sok(v.WriteError[internal.PositionLeft]), sok(v.WriteError[internal.PositionMiddle]), sok(v.WriteError[internal.PositionRight]), sok(v.GetError[internal.PositionLeft]), sok(v.GetError[internal.PositionMiddle]), sok(v.GetError[internal.PositionRight]), pok(v.InList[internal.PositionLeft]), pok(v.InList[internal.PositionMiddle]), pok(v.InList[internal.PositionRight]), } } } records := [][]string{ {"", ""}, {"", ""}, {"Bytes", "Char"}, } for _, r := range remoteNames { records[0] = append(records[0], hRemoteMap[r]...) records[1] = append(records[1], hOperation...) records[2] = append(records[2], hPosition...) } for _, c := range chars { k := charsMap[c] row := []string{fmt.Sprintf("%X", c), k} for _, r := range remoteNames { if m, ok := recordsMap[c][r]; ok { row = append(row, m...) } else { row = append(row, "", "", "", "", "", "", "", "", "") } } records = append(records, row) } var writer io.Writer if *fOut == "-" { writer = os.Stdout } else { f, err := os.Create(*fOut) if err != nil { log.Fatalf("Unable to create %q: %s", *fOut, err) } defer func() { if err := f.Close(); err != nil { log.Fatalln("Error writing csv:", err) } }() writer = f } w := csv.NewWriter(writer) err := w.WriteAll(records) if err != nil { log.Fatalln("Error writing csv:", err) } else if err := w.Error(); err != nil { log.Fatalln("Error writing csv:", err) } } func sok(s string) string { if s != "" { return "ERR" } return "OK" } func pok(p internal.Presence) string { switch p { case internal.Absent: return "MIS" case internal.Present: return "OK" case internal.Renamed: return "REN" case internal.Multiple: return "MUL" default: return "ERR" } } rclone-1.53.3/cmd/info/internal/internal.go000066400000000000000000000061571375552240400205620ustar00rootroot00000000000000package internal import ( "bytes" "encoding/json" "fmt" "strings" ) // Presence describes the presence of a filename in file listing type Presence int // Possible Presence states const ( Absent Presence = iota Present Renamed Multiple ) // Position is the placement of the test character in the filename type Position int // Predefined positions const ( PositionMiddle Position = 1 << iota PositionLeft PositionRight PositionNone Position = 0 PositionAll Position = PositionRight<<1 - 1 ) // PositionList contains all valid positions var PositionList = []Position{PositionMiddle, PositionLeft, PositionRight} // ControlResult contains the result of a single character test type ControlResult struct { Text string `json:"-"` WriteError map[Position]string GetError map[Position]string InList map[Position]Presence } // InfoReport is the structure of the JSON output type InfoReport struct { Remote string ControlCharacters *map[string]ControlResult MaxFileLength *int CanStream *bool CanWriteUnnormalized *bool CanReadUnnormalized *bool CanReadRenormalized *bool } func (e Position) String() string { switch e { case PositionNone: return "none" case PositionAll: return "all" } var buf bytes.Buffer if e&PositionMiddle != 0 { buf.WriteString("middle") e &= ^PositionMiddle } if e&PositionLeft != 0 { if buf.Len() != 0 { buf.WriteRune(',') } buf.WriteString("left") e &= ^PositionLeft } if e&PositionRight != 0 { if buf.Len() != 0 { buf.WriteRune(',') } buf.WriteString("right") e &= ^PositionRight } if e != PositionNone { panic("invalid position") } return buf.String() } // MarshalText encodes the position when used as a map key func (e Position) MarshalText() ([]byte, error) { return []byte(e.String()), 
nil } // UnmarshalText decodes a position when used as a map key func (e *Position) UnmarshalText(text []byte) error { switch s := strings.ToLower(string(text)); s { default: *e = PositionNone for _, p := range strings.Split(s, ",") { switch p { case "left": *e |= PositionLeft case "middle": *e |= PositionMiddle case "right": *e |= PositionRight default: return fmt.Errorf("unknown position: %s", p) } } case "none": *e = PositionNone case "all": *e = PositionAll } return nil } func (e Presence) String() string { switch e { case Absent: return "absent" case Present: return "present" case Renamed: return "renamed" case Multiple: return "multiple" default: panic("invalid presence") } } // MarshalJSON encodes the presence when used as a JSON value func (e Presence) MarshalJSON() ([]byte, error) { return json.Marshal(e.String()) } // UnmarshalJSON decodes a presence when used as a JSON value func (e *Presence) UnmarshalJSON(text []byte) error { var s string if err := json.Unmarshal(text, &s); err != nil { return err } switch s := strings.ToLower(s); s { case "absent": *e = Absent case "present": *e = Present case "renamed": *e = Renamed case "multiple": *e = Multiple default: return fmt.Errorf("unknown presence: %s", s) } return nil } rclone-1.53.3/cmd/info/test.cmd000066400000000000000000000003741375552240400162420ustar00rootroot00000000000000set RCLONE_CONFIG_LOCALWINDOWS_TYPE=local rclone.exe purge LocalWindows:info rclone.exe info -vv LocalWindows:info --write-json=info-LocalWindows.json > info-LocalWindows.log 2>&1 rclone.exe ls -vv LocalWindows:info > info-LocalWindows.list 2>&1 rclone-1.53.3/cmd/info/test.sh000077500000000000000000000022751375552240400161140ustar00rootroot00000000000000#!/usr/bin/env zsh # # example usage: # $GOPATH/src/github.com/rclone/rclone/cmd/info/test.sh --list | \ # parallel -P20 $GOPATH/src/github.com/rclone/rclone/cmd/info/test.sh export PATH=$GOPATH/src/github.com/rclone/rclone:$PATH typeset -A allRemotes allRemotes=( TestAmazonCloudDrive '--low-level-retries=2 --checkers=5 --upload-wait=5s' TestB2 '' TestBox '' TestDrive '--tpslimit=5' TestCrypt '' TestDropbox '--checkers=1' TestGCS '' TestJottacloud '' TestKoofr '' TestMega '' TestOneDrive '' TestOpenDrive '--low-level-retries=4 --checkers=5' TestPcloud '--low-level-retries=2 --timeout=15s' TestS3 '' Local '' ) set -euo pipefail if [[ $# -eq 0 ]]; then set -- ${(k)allRemotes[@]} elif [[ $1 = --list ]]; then printf '%s\n' ${(k)allRemotes[@]} exit 0 fi for remote; do case $remote in Local) l=Local$(uname) export RCLONE_CONFIG_${l:u}_TYPE=local dir=$l:infotest;; TestGCS) dir=$remote:$GCS_BUCKET/infotest;; *) dir=$remote:infotest;; esac rclone purge $dir || : rclone info -vv $dir --write-json=info-$remote.json ${=allRemotes[$remote]:-} &> info-$remote.log rclone ls -vv $dir &> info-$remote.list done rclone-1.53.3/cmd/link/000077500000000000000000000000001375552240400145745ustar00rootroot00000000000000rclone-1.53.3/cmd/link/link.go000066400000000000000000000037461375552240400160660ustar00rootroot00000000000000package link import ( "context" "fmt" "time" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var ( expire = fs.Duration(time.Hour * 24 * 365 * 100) unlink = false ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.FVarP(cmdFlags, &expire, "expire", "", "The amount of time that the link will be valid") flags.BoolVarP(cmdFlags, &unlink, 
"unlink", "", unlink, "Remove existing public link to file/folder") } var commandDefinition = &cobra.Command{ Use: "link remote:path", Short: `Generate public link to file/folder.`, Long: `rclone link will create, retrieve or remove a public link to the given file or folder. rclone link remote:path/to/file rclone link remote:path/to/folder/ rclone link --unlink remote:path/to/folder/ rclone link --expire 1d remote:path/to/file If you supply the --expire flag, it will set the expiration time otherwise it will use the default (100 years). **Note** not all backends support the --expire flag - if the backend doesn't support it then the link returned won't expire. Use the --unlink flag to remove existing public links to the file or folder. **Note** not all backends support "--unlink" flag - those that don't will just ignore it. If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default be created with the least constraints – e.g. no expiry, no password protection, accessible without account. `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc, remote := cmd.NewFsFile(args[0]) cmd.Run(false, false, command, func() error { link, err := operations.PublicLink(context.Background(), fsrc, remote, expire, unlink) if err != nil { return err } if link != "" { fmt.Println(link) } return nil }) }, } rclone-1.53.3/cmd/listremotes/000077500000000000000000000000001375552240400162115ustar00rootroot00000000000000rclone-1.53.3/cmd/listremotes/listremotes.go000066400000000000000000000021761375552240400211200ustar00rootroot00000000000000package ls import ( "fmt" "sort" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/flags" "github.com/spf13/cobra" ) // Globals var ( listLong bool ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &listLong, "long", "", listLong, "Show the type as well as names.") } var commandDefinition = &cobra.Command{ Use: "listremotes", Short: `List all the remotes in the config file.`, Long: ` rclone listremotes lists all the available remotes from the config file. When uses with the -l flag it lists the types too. `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(0, 0, command, args) remotes := config.FileSections() sort.Strings(remotes) maxlen := 1 for _, remote := range remotes { if len(remote) > maxlen { maxlen = len(remote) } } for _, remote := range remotes { if listLong { remoteType := config.FileGet(remote, "type", "UNKNOWN") fmt.Printf("%-*s %s\n", maxlen+1, remote+":", remoteType) } else { fmt.Printf("%s:\n", remote) } } }, } rclone-1.53.3/cmd/ls/000077500000000000000000000000001375552240400142555ustar00rootroot00000000000000rclone-1.53.3/cmd/ls/ls.go000066400000000000000000000015661375552240400152320ustar00rootroot00000000000000package ls import ( "context" "os" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/ls/lshelp" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "ls remote:path", Short: `List the objects in the path with size and path.`, Long: ` Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default. 
Eg $ rclone ls swift:bucket 60295 bevajer5jef 90613 canole 94467 diwogej7 37600 fubuwic ` + lshelp.Help, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { return operations.List(context.Background(), fsrc, os.Stdout) }) }, } rclone-1.53.3/cmd/ls/lshelp/000077500000000000000000000000001375552240400155445ustar00rootroot00000000000000rclone-1.53.3/cmd/ls/lshelp/lshelp.go000066400000000000000000000020651375552240400173650ustar00rootroot00000000000000package lshelp // Help describes the common help for all the list commands var Help = ` Any of the filtering options can be applied to this command. There are several related list commands * ` + "`ls`" + ` to list size and path of objects only * ` + "`lsl`" + ` to list modification time, size and path of objects only * ` + "`lsd`" + ` to list directories only * ` + "`lsf`" + ` to list objects and directories in easy to parse format * ` + "`lsjson`" + ` to list objects and directories in JSON format ` + "`ls`,`lsl`,`lsd`" + ` are designed to be human readable. ` + "`lsf`" + ` is designed to be human and machine readable. ` + "`lsjson`" + ` is designed to be machine readable. Note that ` + "`ls` and `lsl`" + ` recurse by default - use "--max-depth 1" to stop the recursion. The other list commands ` + "`lsd`,`lsf`,`lsjson`" + ` do not recurse by default - use "-R" to make them recurse. Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes). ` rclone-1.53.3/cmd/lsd/000077500000000000000000000000001375552240400144215ustar00rootroot00000000000000rclone-1.53.3/cmd/lsd/lsd.go000066400000000000000000000032111375552240400155300ustar00rootroot00000000000000package lsd import ( "context" "os" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/ls/lshelp" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var ( recurse bool ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &recurse, "recursive", "R", false, "Recurse into the listing.") } var commandDefinition = &cobra.Command{ Use: "lsd remote:path", Short: `List all directories/containers/buckets in the path.`, Long: ` Lists the directories in the source path to standard output. Does not recurse by default. Use the -R flag to recurse. This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of the directory. Eg $ rclone lsd swift: 494000 2018-04-26 08:43:20 10000 10000files 65 2018-04-26 08:43:20 1 1File Or $ rclone lsd drive:test -1 2016-10-17 17:41:53 -1 1000files -1 2017-01-03 14:40:54 -1 2500files -1 2017-07-08 14:39:28 -1 4000files If you just want the directory names use "rclone lsf --dirs-only". 
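Eg, reusing the swift: remote from the example above (the output shown is illustrative) $ rclone lsf --dirs-only swift: 10000files/ 1File/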
` + lshelp.Help, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) if recurse { fs.Config.MaxDepth = 0 } fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { return operations.ListDir(context.Background(), fsrc, os.Stdout) }) }, } rclone-1.53.3/cmd/lsf/000077500000000000000000000000001375552240400144235ustar00rootroot00000000000000rclone-1.53.3/cmd/lsf/lsf.go000066400000000000000000000146471375552240400155520ustar00rootroot00000000000000package lsf import ( "context" "fmt" "io" "os" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/ls/lshelp" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var ( format string separator string dirSlash bool recurse bool hashType = hash.MD5 filesOnly bool dirsOnly bool csv bool absolute bool ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.StringVarP(cmdFlags, &format, "format", "F", "p", "Output format - see help for details") flags.StringVarP(cmdFlags, &separator, "separator", "s", ";", "Separator for the items in the format.") flags.BoolVarP(cmdFlags, &dirSlash, "dir-slash", "d", true, "Append a slash to directory names.") flags.FVarP(cmdFlags, &hashType, "hash", "", "Use this hash when `h` is used in the format MD5|SHA-1|DropboxHash") flags.BoolVarP(cmdFlags, &filesOnly, "files-only", "", false, "Only list files.") flags.BoolVarP(cmdFlags, &dirsOnly, "dirs-only", "", false, "Only list directories.") flags.BoolVarP(cmdFlags, &csv, "csv", "", false, "Output in CSV format.") flags.BoolVarP(cmdFlags, &absolute, "absolute", "", false, "Put a leading / in front of path names.") flags.BoolVarP(cmdFlags, &recurse, "recursive", "R", false, "Recurse into the listing.") } var commandDefinition = &cobra.Command{ Use: "lsf remote:path", Short: `List directories and objects in remote:path formatted for parsing.`, Long: ` List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. Eg $ rclone lsf swift:bucket bevajer5jef canole diwogej7 ferejej3gux/ fubuwic Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: p - path s - size t - modification time h - hash i - ID of object o - Original ID of underlying object m - MimeType of object if known e - encrypted name T - tier of storage if known, eg "Hot" or "Cool" So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last. Eg $ rclone lsf --format "tsp" swift:bucket 2016-06-25 18:55:41;60295;bevajer5jef 2016-06-25 18:55:43;90613;canole 2016-06-25 18:55:43;94467;diwogej7 2018-04-26 08:50:45;0;ferejej3gux/ 2016-06-25 18:55:40;37600;fubuwic If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type. For example to emulate the md5sum command you can use rclone lsf -R --hash MD5 --format hp --separator " " --files-only . 
Eg $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket 7908e352297f0f530b84a756f188baa3 bevajer5jef cd65ac234e6fea5925974a51cdd865cc canole 03b5341b4f234b9d984d03ad076bae91 diwogej7 8fd37c3810dd660778137ac3a66cc06d fubuwic 99713e14a4c4ff553acaf1930fad985b gixacuh7ku (Though "rclone md5sum ." is an easier way of typing this.) By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy. Eg $ rclone lsf --separator "," --format "tshp" swift:bucket 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef 2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole 2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7 2018-04-26 08:52:53,0,,ferejej3gux/ 2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic You can output in CSV standard format. This will escape things in " if they contain , Eg $ rclone lsf --csv --files-only --format ps remote:path test.log,22355 test.sh,449 "this file contains a comma, in the file name.txt",6 Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag. For example to find all the files modified within one day and copy those only (without traversing the whole directory structure): rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files rclone copy --files-from-raw new_files /path/to/local remote:path ` + lshelp.Help, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { // Work out if the separatorFlag was supplied or not separatorFlag := command.Flags().Lookup("separator") separatorFlagSupplied := separatorFlag != nil && separatorFlag.Changed // Default the separator to , if using CSV if csv && !separatorFlagSupplied { separator = "," } return Lsf(context.Background(), fsrc, os.Stdout) }) }, } // Lsf lists all the objects in the path with modification time, size // and path in specific format. 
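// // A minimal usage sketch, mirroring TestDefaultLsf in the test file below (the "testfiles" path is just the test fixture): // f, err := fs.NewFs("testfiles") // if err == nil { err = Lsf(context.Background(), f, os.Stdout) }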
func Lsf(ctx context.Context, fsrc fs.Fs, out io.Writer) error { var list operations.ListFormat list.SetSeparator(separator) list.SetCSV(csv) list.SetDirSlash(dirSlash) list.SetAbsolute(absolute) var opt = operations.ListJSONOpt{ NoModTime: true, NoMimeType: true, DirsOnly: dirsOnly, FilesOnly: filesOnly, Recurse: recurse, } for _, char := range format { switch char { case 'p': list.AddPath() case 't': list.AddModTime() opt.NoModTime = false case 's': list.AddSize() case 'h': list.AddHash(hashType) opt.ShowHash = true opt.HashTypes = []string{hashType.String()} case 'i': list.AddID() case 'm': list.AddMimeType() opt.NoMimeType = false case 'e': list.AddEncrypted() opt.ShowEncrypted = true case 'o': list.AddOrigID() opt.ShowOrigIDs = true case 'T': list.AddTier() default: return errors.Errorf("Unknown format character %q", char) } } return operations.ListJSON(ctx, fsrc, "", &opt, func(item *operations.ListJSONItem) error { _, _ = fmt.Fprintln(out, list.Format(item)) return nil }) } rclone-1.53.3/cmd/lsf/lsf_test.go000066400000000000000000000110741375552240400166000ustar00rootroot00000000000000package lsf import ( "bytes" "context" "testing" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/list" "github.com/rclone/rclone/fstest" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestDefaultLsf(t *testing.T) { fstest.Initialise() buf := new(bytes.Buffer) f, err := fs.NewFs("testfiles") require.NoError(t, err) err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `file1 file2 file3 subdir/ `, buf.String()) } func TestRecurseFlag(t *testing.T) { fstest.Initialise() buf := new(bytes.Buffer) f, err := fs.NewFs("testfiles") require.NoError(t, err) recurse = true err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `file1 file2 file3 subdir/ subdir/file1 subdir/file2 subdir/file3 `, buf.String()) recurse = false } func TestDirSlashFlag(t *testing.T) { fstest.Initialise() buf := new(bytes.Buffer) f, err := fs.NewFs("testfiles") require.NoError(t, err) dirSlash = true format = "p" err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `file1 file2 file3 subdir/ `, buf.String()) buf = new(bytes.Buffer) dirSlash = false err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `file1 file2 file3 subdir `, buf.String()) } func TestFormat(t *testing.T) { fstest.Initialise() f, err := fs.NewFs("testfiles") require.NoError(t, err) buf := new(bytes.Buffer) format = "p" err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `file1 file2 file3 subdir `, buf.String()) buf = new(bytes.Buffer) format = "s" err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `0 321 1234 -1 `, buf.String()) buf = new(bytes.Buffer) format = "hp" err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `d41d8cd98f00b204e9800998ecf8427e;file1 409d6c19451dd39d4a94e42d2ff2c834;file2 9b4c8a5e36d3be7e2c4b1d75ded8c8a1;file3 ;subdir `, buf.String()) buf = new(bytes.Buffer) format = "p" filesOnly = true err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `file1 file2 file3 `, buf.String()) filesOnly = false buf = new(bytes.Buffer) format = "p" dirsOnly = true err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `subdir `, buf.String()) dirsOnly = false buf = new(bytes.Buffer) format = "t" err = Lsf(context.Background(), f, buf) 
require.NoError(t, err) items, _ := list.DirSorted(context.Background(), f, true, "") var expectedOutput string for _, item := range items { expectedOutput += item.ModTime(context.Background()).Format("2006-01-02 15:04:05") + "\n" } assert.Equal(t, expectedOutput, buf.String()) buf = new(bytes.Buffer) format = "sp" err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `0;file1 321;file2 1234;file3 -1;subdir `, buf.String()) format = "" } func TestSeparator(t *testing.T) { fstest.Initialise() f, err := fs.NewFs("testfiles") require.NoError(t, err) format = "ps" buf := new(bytes.Buffer) err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `file1;0 file2;321 file3;1234 subdir;-1 `, buf.String()) separator = "__SEP__" buf = new(bytes.Buffer) err = Lsf(context.Background(), f, buf) require.NoError(t, err) assert.Equal(t, `file1__SEP__0 file2__SEP__321 file3__SEP__1234 subdir__SEP__-1 `, buf.String()) format = "" separator = "" } func TestWholeLsf(t *testing.T) { fstest.Initialise() f, err := fs.NewFs("testfiles") require.NoError(t, err) format = "pst" separator = "_+_" recurse = true dirSlash = true buf := new(bytes.Buffer) err = Lsf(context.Background(), f, buf) require.NoError(t, err) items, _ := list.DirSorted(context.Background(), f, true, "") itemsInSubdir, _ := list.DirSorted(context.Background(), f, true, "subdir") var expectedOutput []string for _, item := range items { expectedOutput = append(expectedOutput, item.ModTime(context.Background()).Format("2006-01-02 15:04:05")) } for _, item := range itemsInSubdir { expectedOutput = append(expectedOutput, item.ModTime(context.Background()).Format("2006-01-02 15:04:05")) } assert.Equal(t, `file1_+_0_+_`+expectedOutput[0]+` file2_+_321_+_`+expectedOutput[1]+` file3_+_1234_+_`+expectedOutput[2]+` subdir/_+_-1_+_`+expectedOutput[3]+` subdir/file1_+_0_+_`+expectedOutput[4]+` subdir/file2_+_1_+_`+expectedOutput[5]+` subdir/file3_+_111_+_`+expectedOutput[6]+` `, buf.String()) format = "" separator = "" recurse = false dirSlash = false } rclone-1.53.3/cmd/lsf/testfiles/000077500000000000000000000000001375552240400164255ustar00rootroot00000000000000rclone-1.53.3/cmd/lsf/testfiles/file1000066400000000000000000000000001375552240400173360ustar00rootroot00000000000000rclone-1.53.3/cmd/lsf/testfiles/file2000066400000000000000000000005011375552240400173450ustar00rootroot00000000000000rclone-1.53.3/cmd/lsf/testfiles/file3000066400000000000000000000023221375552240400173510ustar00rootroot00000000000000rclone-1.53.3/cmd/lsf/testfiles/subdir/000077500000000000000000000000001375552240400177155ustar00rootroot00000000000000rclone-1.53.3/cmd/lsf/testfiles/subdir/file1000066400000000000000000000000001375552240400206260ustar00rootroot00000000000000rclone-1.53.3/cmd/lsf/testfiles/subdir/file2000066400000000000000000000000011375552240400206300ustar00rootroot00000000000000rclone-1.53.3/cmd/lsf/testfiles/subdir/file3000066400000000000000000000001571375552240400206450ustar00rootroot00000000000000rclone-1.53.3/cmd/lsjson/000077500000000000000000000000001375552240400151475ustar00rootroot00000000000000rclone-1.53.3/cmd/lsjson/lsjson.go000066400000000000000000000115231375552240400170100ustar00rootroot00000000000000package lsjson import ( "context" "encoding/json" "fmt" "os" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/ls/lshelp" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var ( opt operations.ListJSONOpt ) func init() 
{ cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &opt.Recurse, "recursive", "R", false, "Recurse into the listing.") flags.BoolVarP(cmdFlags, &opt.ShowHash, "hash", "", false, "Include hashes in the output (may take longer).") flags.BoolVarP(cmdFlags, &opt.NoModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).") flags.BoolVarP(cmdFlags, &opt.NoMimeType, "no-mimetype", "", false, "Don't read the mime type (can speed things up).") flags.BoolVarP(cmdFlags, &opt.ShowEncrypted, "encrypted", "M", false, "Show the encrypted names.") flags.BoolVarP(cmdFlags, &opt.ShowOrigIDs, "original", "", false, "Show the ID of the underlying Object.") flags.BoolVarP(cmdFlags, &opt.FilesOnly, "files-only", "", false, "Show only files in the listing.") flags.BoolVarP(cmdFlags, &opt.DirsOnly, "dirs-only", "", false, "Show only directories in the listing.") flags.StringArrayVarP(cmdFlags, &opt.HashTypes, "hash-type", "", nil, "Show only this hash type (may be repeated).") } var commandDefinition = &cobra.Command{ Use: "lsjson remote:path", Short: `List directories and objects in the path in JSON format.`, Long: `List directories and objects in the path in JSON format. The output is an array of Items, where each Item looks like this { "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsBucket" : false, "IsDir" : false, "MimeType" : "application/octet-stream", "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", } If --hash is not specified the Hashes property won't be emitted. The types of hash can be specified with the --hash-type parameter (which may be repeated). If --hash-type is set then it implies --hash. If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (eg s3, swift). If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (eg s3, swift). If --encrypted is not specified the Encrypted and EncryptedPath fields won't be emitted. If --dirs-only is not specified files in addition to directories are returned. If --files-only is not specified directories in addition to the files will be returned. The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name. If the directory is a bucket in a bucket-based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true". The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown ("2017-05-31T16:15:57+01:00"). 
The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line. ` + lshelp.Help, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { fmt.Println("[") first := true err := operations.ListJSON(context.Background(), fsrc, "", &opt, func(item *operations.ListJSONItem) error { out, err := json.Marshal(item) if err != nil { return errors.Wrap(err, "failed to marshal list object") } if first { first = false } else { fmt.Print(",\n") } _, err = os.Stdout.Write(out) if err != nil { return errors.Wrap(err, "failed to write to output") } return nil }) if err != nil { return err } if !first { fmt.Println() } fmt.Println("]") return nil }) }, } rclone-1.53.3/cmd/lsl/000077500000000000000000000000001375552240400144315ustar00rootroot00000000000000rclone-1.53.3/cmd/lsl/lsl.go000066400000000000000000000020271375552240400155530ustar00rootroot00000000000000package lsl import ( "context" "os" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/ls/lshelp" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "lsl remote:path", Short: `List the objects in path with modification time, size and path.`, Long: ` Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default. Eg $ rclone lsl swift:bucket 60295 2016-06-25 18:55:41.062626927 bevajer5jef 90613 2016-06-25 18:55:43.302607074 canole 94467 2016-06-25 18:55:43.046609333 diwogej7 37600 2016-06-25 18:55:40.814629136 fubuwic ` + lshelp.Help, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { return operations.ListLong(context.Background(), fsrc, os.Stdout) }) }, } rclone-1.53.3/cmd/md5sum/000077500000000000000000000000001375552240400150515ustar00rootroot00000000000000rclone-1.53.3/cmd/md5sum/md5sum.go000066400000000000000000000013201375552240400166060ustar00rootroot00000000000000package md5sum import ( "context" "os" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "md5sum remote:path", Short: `Produces an md5sum file for all the objects in the path.`, Long: ` Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces. 
`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { return operations.Md5sum(context.Background(), fsrc, os.Stdout) }) }, } rclone-1.53.3/cmd/memtest/000077500000000000000000000000001375552240400153155ustar00rootroot00000000000000rclone-1.53.3/cmd/memtest/memtest.go000066400000000000000000000025351375552240400173270ustar00rootroot00000000000000package memtest import ( "context" "runtime" "sync" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "memtest remote:path", Short: `Load all the objects at remote:path and report memory stats.`, Hidden: true, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { ctx := context.Background() objects, _, err := operations.Count(ctx, fsrc) if err != nil { return err } objs := make([]fs.Object, 0, objects) var before, after runtime.MemStats runtime.GC() runtime.ReadMemStats(&before) var mu sync.Mutex err = operations.ListFn(ctx, fsrc, func(o fs.Object) { mu.Lock() objs = append(objs, o) mu.Unlock() }) if err != nil { return err } runtime.GC() runtime.ReadMemStats(&after) usedMemory := after.Alloc - before.Alloc fs.Logf(nil, "%d objects took %d bytes, %.1f bytes/object", len(objs), usedMemory, float64(usedMemory)/float64(len(objs))) fs.Logf(nil, "System memory changed from %d to %d bytes a change of %d bytes", before.Sys, after.Sys, after.Sys-before.Sys) return nil }) }, } rclone-1.53.3/cmd/mkdir/000077500000000000000000000000001375552240400147455ustar00rootroot00000000000000rclone-1.53.3/cmd/mkdir/mkdir.go000066400000000000000000000014251375552240400164040ustar00rootroot00000000000000package mkdir import ( "context" "strings" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "mkdir remote:path", Short: `Make the path if it doesn't already exist.`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fdst := cmd.NewFsDir(args) if !fdst.Features().CanHaveEmptyDirectories && strings.Contains(fdst.Root(), "/") { fs.Logf(fdst, "Warning: running mkdir on a remote which can't have empty directories does nothing") } cmd.Run(true, false, command, func() error { return operations.Mkdir(context.Background(), fdst, "") }) }, } rclone-1.53.3/cmd/mount/000077500000000000000000000000001375552240400150015ustar00rootroot00000000000000rclone-1.53.3/cmd/mount/dir.go000066400000000000000000000137141375552240400161140ustar00rootroot00000000000000// +build linux,go1.13 darwin,go1.13 freebsd,go1.13 package mount import ( "context" "os" "time" "bazil.org/fuse" fusefs "bazil.org/fuse/fs" "github.com/pkg/errors" "github.com/rclone/rclone/cmd/mountlib" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/vfs" ) // Dir represents a directory entry type Dir struct { *vfs.Dir fsys *FS } // Check interface satisfied var _ fusefs.Node = (*Dir)(nil) // Attr updates the attributes of a directory func (d *Dir) Attr(ctx context.Context, a *fuse.Attr) (err error) { defer log.Trace(d, "")("attr=%+v, err=%v", a, &err) a.Valid = d.fsys.opt.AttrTimeout a.Gid = 
d.VFS().Opt.GID a.Uid = d.VFS().Opt.UID a.Mode = os.ModeDir | d.VFS().Opt.DirPerms modTime := d.ModTime() a.Atime = modTime a.Mtime = modTime a.Ctime = modTime a.Crtime = modTime // FIXME include Valid so get some caching? // FIXME fs.Debugf(d.path, "Dir.Attr %+v", a) return nil } // Check interface satisfied var _ fusefs.NodeSetattrer = (*Dir)(nil) // Setattr handles attribute changes from FUSE. Currently supports ModTime only. func (d *Dir) Setattr(ctx context.Context, req *fuse.SetattrRequest, resp *fuse.SetattrResponse) (err error) { defer log.Trace(d, "stat=%+v", req)("err=%v", &err) if d.VFS().Opt.NoModTime { return nil } if req.Valid.MtimeNow() { err = d.SetModTime(time.Now()) } else if req.Valid.Mtime() { err = d.SetModTime(req.Mtime) } return translateError(err) } // Check interface satisfied var _ fusefs.NodeRequestLookuper = (*Dir)(nil) // Lookup looks up a specific entry in the receiver. // // Lookup should return a Node corresponding to the entry. If the // name does not exist in the directory, Lookup should return ENOENT. // // Lookup need not handle the names "." and "..". func (d *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.LookupResponse) (node fusefs.Node, err error) { defer log.Trace(d, "name=%q", req.Name)("node=%+v, err=%v", &node, &err) mnode, err := d.Dir.Stat(req.Name) if err != nil { return nil, translateError(err) } resp.EntryValid = d.fsys.opt.AttrTimeout // Check the mnode to see if it has a fuse Node cached // We must return the same fuse nodes for vfs Nodes node, ok := mnode.Sys().(fusefs.Node) if ok { return node, nil } switch x := mnode.(type) { case *vfs.File: node = &File{x, d.fsys} case *vfs.Dir: node = &Dir{x, d.fsys} default: panic("bad type") } // Cache the node for later mnode.SetSys(node) return node, nil } // Check interface satisfied var _ fusefs.HandleReadDirAller = (*Dir)(nil) // ReadDirAll reads the contents of the directory func (d *Dir) ReadDirAll(ctx context.Context) (dirents []fuse.Dirent, err error) { itemsRead := -1 defer log.Trace(d, "")("item=%d, err=%v", &itemsRead, &err) items, err := d.Dir.ReadDirAll() if err != nil { return nil, translateError(err) } for _, node := range items { name := node.Name() if len(name) > mountlib.MaxLeafSize { fs.Errorf(d, "Name too long (%d bytes) for FUSE, skipping: %s", len(name), name) continue } var dirent = fuse.Dirent{ // Inode FIXME ??? 
Type: fuse.DT_File, Name: name, } if node.IsDir() { dirent.Type = fuse.DT_Dir } dirents = append(dirents, dirent) } itemsRead = len(dirents) return dirents, nil } var _ fusefs.NodeCreater = (*Dir)(nil) // Create makes a new file func (d *Dir) Create(ctx context.Context, req *fuse.CreateRequest, resp *fuse.CreateResponse) (node fusefs.Node, handle fusefs.Handle, err error) { defer log.Trace(d, "name=%q", req.Name)("node=%v, handle=%v, err=%v", &node, &handle, &err) file, err := d.Dir.Create(req.Name, int(req.Flags)) if err != nil { return nil, nil, translateError(err) } fh, err := file.Open(int(req.Flags) | os.O_CREATE) if err != nil { return nil, nil, translateError(err) } node = &File{file, d.fsys} file.SetSys(node) // cache the FUSE node for later return node, &FileHandle{fh}, err } var _ fusefs.NodeMkdirer = (*Dir)(nil) // Mkdir creates a new directory func (d *Dir) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (node fusefs.Node, err error) { defer log.Trace(d, "name=%q", req.Name)("node=%+v, err=%v", &node, &err) dir, err := d.Dir.Mkdir(req.Name) if err != nil { return nil, translateError(err) } node = &Dir{dir, d.fsys} dir.SetSys(node) // cache the FUSE node for later return node, nil } var _ fusefs.NodeRemover = (*Dir)(nil) // Remove removes the entry with the given name from // the receiver, which must be a directory. The entry to be removed // may correspond to a file (unlink) or to a directory (rmdir). func (d *Dir) Remove(ctx context.Context, req *fuse.RemoveRequest) (err error) { defer log.Trace(d, "name=%q", req.Name)("err=%v", &err) err = d.Dir.RemoveName(req.Name) if err != nil { return translateError(err) } return nil } // Check interface satisfied var _ fusefs.NodeRenamer = (*Dir)(nil) // Rename the file func (d *Dir) Rename(ctx context.Context, req *fuse.RenameRequest, newDir fusefs.Node) (err error) { defer log.Trace(d, "oldName=%q, newName=%q, newDir=%+v", req.OldName, req.NewName, newDir)("err=%v", &err) destDir, ok := newDir.(*Dir) if !ok { return errors.Errorf("Unknown Dir type %T", newDir) } err = d.Dir.Rename(req.OldName, req.NewName, destDir.Dir) if err != nil { return translateError(err) } return nil } // Check interface satisfied var _ fusefs.NodeFsyncer = (*Dir)(nil) // Fsync the directory func (d *Dir) Fsync(ctx context.Context, req *fuse.FsyncRequest) (err error) { defer log.Trace(d, "")("err=%v", &err) err = d.Dir.Sync() if err != nil { return translateError(err) } return nil } // Check interface satisfied var _ fusefs.NodeLinker = (*Dir)(nil) // Link creates a new directory entry in the receiver based on an // existing Node. Receiver must be a directory. 
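// Note: rclone's VFS has no concept of hard links, so rather than creating
// a new entry the implementation below simply returns fuse.ENOSYS to tell
// the kernel the operation is unsupported.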
func (d *Dir) Link(ctx context.Context, req *fuse.LinkRequest, old fusefs.Node) (newNode fusefs.Node, err error) { defer log.Trace(d, "req=%v, old=%v", req, old)("new=%v, err=%v", &newNode, &err) return nil, fuse.ENOSYS } rclone-1.53.3/cmd/mount/file.go000066400000000000000000000067701375552240400162610ustar00rootroot00000000000000// +build linux,go1.13 darwin,go1.13 freebsd,go1.13 package mount import ( "context" "time" "bazil.org/fuse" fusefs "bazil.org/fuse/fs" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/vfs" ) // File represents a file type File struct { *vfs.File fsys *FS } // Check interface satisfied var _ fusefs.Node = (*File)(nil) // Attr fills out the attributes for the file func (f *File) Attr(ctx context.Context, a *fuse.Attr) (err error) { defer log.Trace(f, "")("a=%+v, err=%v", a, &err) a.Valid = f.fsys.opt.AttrTimeout modTime := f.File.ModTime() Size := uint64(f.File.Size()) Blocks := (Size + 511) / 512 a.Gid = f.VFS().Opt.GID a.Uid = f.VFS().Opt.UID a.Mode = f.VFS().Opt.FilePerms a.Size = Size a.Atime = modTime a.Mtime = modTime a.Ctime = modTime a.Crtime = modTime a.Blocks = Blocks return nil } // Check interface satisfied var _ fusefs.NodeSetattrer = (*File)(nil) // Setattr handles attribute changes from FUSE. Currently supports ModTime and Size only func (f *File) Setattr(ctx context.Context, req *fuse.SetattrRequest, resp *fuse.SetattrResponse) (err error) { defer log.Trace(f, "a=%+v", req)("err=%v", &err) if !f.VFS().Opt.NoModTime { if req.Valid.Mtime() { err = f.File.SetModTime(req.Mtime) } else if req.Valid.MtimeNow() { err = f.File.SetModTime(time.Now()) } } if req.Valid.Size() { err = f.File.Truncate(int64(req.Size)) } return translateError(err) } // Check interface satisfied var _ fusefs.NodeOpener = (*File)(nil) // Open the file for read or write func (f *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fh fusefs.Handle, err error) { defer log.Trace(f, "flags=%v", req.Flags)("fh=%v, err=%v", &fh, &err) // fuse flags are based off syscall flags as are os flags, so // should be compatible handle, err := f.File.Open(int(req.Flags)) if err != nil { return nil, translateError(err) } // If size unknown then use direct io to read if entry := handle.Node().DirEntry(); entry != nil && entry.Size() < 0 { resp.Flags |= fuse.OpenDirectIO } return &FileHandle{handle}, nil } // Check interface satisfied var _ fusefs.NodeFsyncer = (*File)(nil) // Fsync the file // // Note that we don't do anything except return OK func (f *File) Fsync(ctx context.Context, req *fuse.FsyncRequest) (err error) { defer log.Trace(f, "")("err=%v", &err) return nil } // Getxattr gets an extended attribute by the given name from the // node. // // If there is no xattr by that name, returns fuse.ErrNoXattr. func (f *File) Getxattr(ctx context.Context, req *fuse.GetxattrRequest, resp *fuse.GetxattrResponse) error { return fuse.ENOSYS // we never implement this } var _ fusefs.NodeGetxattrer = (*File)(nil) // Listxattr lists the extended attributes recorded for the node. func (f *File) Listxattr(ctx context.Context, req *fuse.ListxattrRequest, resp *fuse.ListxattrResponse) error { return fuse.ENOSYS // we never implement this } var _ fusefs.NodeListxattrer = (*File)(nil) // Setxattr sets an extended attribute with the given name and // value for the node. 
func (f *File) Setxattr(ctx context.Context, req *fuse.SetxattrRequest) error { return fuse.ENOSYS // we never implement this } var _ fusefs.NodeSetxattrer = (*File)(nil) // Removexattr removes an extended attribute for the name. // // If there is no xattr by that name, returns fuse.ErrNoXattr. func (f *File) Removexattr(ctx context.Context, req *fuse.RemovexattrRequest) error { return fuse.ENOSYS // we never implement this } var _ fusefs.NodeRemovexattrer = (*File)(nil) rclone-1.53.3/cmd/mount/fs.go000066400000000000000000000053461375552240400157500ustar00rootroot00000000000000// FUSE main Fs // +build linux,go1.13 darwin,go1.13 freebsd,go1.13 package mount import ( "context" "syscall" "bazil.org/fuse" fusefs "bazil.org/fuse/fs" "github.com/pkg/errors" "github.com/rclone/rclone/cmd/mountlib" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/vfs" ) // FS represents the top level filing system type FS struct { *vfs.VFS f fs.Fs opt *mountlib.Options } // Check interface satisfied var _ fusefs.FS = (*FS)(nil) // NewFS makes a new FS func NewFS(VFS *vfs.VFS, opt *mountlib.Options) *FS { fsys := &FS{ VFS: VFS, f: VFS.Fs(), opt: opt, } return fsys } // Root returns the root node func (f *FS) Root() (node fusefs.Node, err error) { defer log.Trace("", "")("node=%+v, err=%v", &node, &err) root, err := f.VFS.Root() if err != nil { return nil, translateError(err) } return &Dir{root, f}, nil } // Check interface satisfied var _ fusefs.FSStatfser = (*FS)(nil) // Statfs is called to obtain file system metadata. // It should write that data to resp. func (f *FS) Statfs(ctx context.Context, req *fuse.StatfsRequest, resp *fuse.StatfsResponse) (err error) { defer log.Trace("", "")("stat=%+v, err=%v", resp, &err) const blockSize = 4096 total, _, free := f.VFS.Statfs() resp.Blocks = uint64(total) / blockSize // Total data blocks in file system. resp.Bfree = uint64(free) / blockSize // Free blocks in file system. resp.Bavail = resp.Bfree // Free blocks in file system if you're not root. resp.Files = 1e9 // Total files in file system. resp.Ffree = 1e9 // Free files in file system. resp.Bsize = blockSize // Block size resp.Namelen = 255 // Maximum file name length? resp.Frsize = blockSize // Fragment size, smallest addressable data size in the file system. 
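// The ClipBlocks calls below (from mountlib) presumably cap these counts so
// they remain representable on platforms with narrower statfs fields (eg
// 32 bit systems) - see mountlib for the actual limits.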
mountlib.ClipBlocks(&resp.Blocks) mountlib.ClipBlocks(&resp.Bfree) mountlib.ClipBlocks(&resp.Bavail) return nil } // Translate errors from mountlib func translateError(err error) error { if err == nil { return nil } switch errors.Cause(err) { case vfs.OK: return nil case vfs.ENOENT, fs.ErrorDirNotFound, fs.ErrorObjectNotFound: return fuse.ENOENT case vfs.EEXIST, fs.ErrorDirExists: return fuse.EEXIST case vfs.EPERM, fs.ErrorPermissionDenied: return fuse.EPERM case vfs.ECLOSED: return fuse.Errno(syscall.EBADF) case vfs.ENOTEMPTY: return fuse.Errno(syscall.ENOTEMPTY) case vfs.ESPIPE: return fuse.Errno(syscall.ESPIPE) case vfs.EBADF: return fuse.Errno(syscall.EBADF) case vfs.EROFS: return fuse.Errno(syscall.EROFS) case vfs.ENOSYS, fs.ErrorNotImplemented: return fuse.ENOSYS case vfs.EINVAL: return fuse.Errno(syscall.EINVAL) } return err } rclone-1.53.3/cmd/mount/handle.go000066400000000000000000000052431375552240400165670ustar00rootroot00000000000000// +build linux,go1.13 darwin,go1.13 freebsd,go1.13 package mount import ( "context" "io" "bazil.org/fuse" fusefs "bazil.org/fuse/fs" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/vfs" ) // FileHandle is an open file handle on a File type FileHandle struct { vfs.Handle } // Check interface satisfied var _ fusefs.HandleReader = (*FileHandle)(nil) // Read from the file handle func (fh *FileHandle) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) (err error) { var n int defer log.Trace(fh, "len=%d, offset=%d", req.Size, req.Offset)("read=%d, err=%v", &n, &err) data := make([]byte, req.Size) n, err = fh.Handle.ReadAt(data, req.Offset) if err == io.EOF { err = nil } else if err != nil { return translateError(err) } resp.Data = data[:n] return nil } // Check interface satisfied var _ fusefs.HandleWriter = (*FileHandle)(nil) // Write data to the file handle func (fh *FileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) (err error) { defer log.Trace(fh, "len=%d, offset=%d", len(req.Data), req.Offset)("written=%d, err=%v", &resp.Size, &err) n, err := fh.Handle.WriteAt(req.Data, req.Offset) if err != nil { return translateError(err) } resp.Size = n return nil } // Check interface satisfied var _ fusefs.HandleFlusher = (*FileHandle)(nil) // Flush is called on each close() of a file descriptor. So if a // filesystem wants to return write errors in close() and the file has // cached dirty data, this is a good place to write back data and // return any errors. Since many applications ignore close() errors // this is not always useful. // // NOTE: The flush() method may be called more than once for each // open(). This happens if more than one file descriptor refers to an // opened file due to dup(), dup2() or fork() calls. It is not // possible to determine if a flush is final, so each flush should be // treated equally. Multiple write-flush sequences are relatively // rare, so this shouldn't be a problem. // // Filesystems shouldn't assume that flush will always be called after // some writes, or that it will be called at all. 
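// Given that, the implementation below makes no attempt to detect a "final"
// flush - every call is simply forwarded to the VFS handle's Flush and the
// resulting error translated.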
func (fh *FileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) (err error) { defer log.Trace(fh, "")("err=%v", &err) return translateError(fh.Handle.Flush()) } var _ fusefs.HandleReleaser = (*FileHandle)(nil) // Release is called when we are finished with the file handle // // It isn't called directly from userspace so the error is ignored by // the kernel func (fh *FileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) (err error) { defer log.Trace(fh, "")("err=%v", &err) return translateError(fh.Handle.Release()) } rclone-1.53.3/cmd/mount/mount.go000066400000000000000000000063771375552240400165070ustar00rootroot00000000000000// Package mount implements a FUSE mounting system for rclone remotes. // +build linux,go1.13 darwin,go1.13 freebsd,go1.13 package mount import ( "fmt" "runtime" "bazil.org/fuse" fusefs "bazil.org/fuse/fs" "github.com/rclone/rclone/cmd/mountlib" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/vfs" ) func init() { mountlib.NewMountCommand("mount", false, mount) mountlib.AddRc("mount", mount) } // mountOptions configures the options from the command line flags func mountOptions(VFS *vfs.VFS, device string, opt *mountlib.Options) (options []fuse.MountOption) { options = []fuse.MountOption{ fuse.MaxReadahead(uint32(opt.MaxReadAhead)), fuse.Subtype("rclone"), fuse.FSName(device), fuse.VolumeName(opt.VolumeName), // Options from benchmarking in the fuse module //fuse.MaxReadahead(64 * 1024 * 1024), //fuse.WritebackCache(), } if opt.AsyncRead { options = append(options, fuse.AsyncRead()) } if opt.NoAppleDouble { options = append(options, fuse.NoAppleDouble()) } if opt.NoAppleXattr { options = append(options, fuse.NoAppleXattr()) } if opt.AllowNonEmpty { options = append(options, fuse.AllowNonEmptyMount()) } if opt.AllowOther { options = append(options, fuse.AllowOther()) } if opt.AllowRoot { // options = append(options, fuse.AllowRoot()) fs.Errorf(nil, "Ignoring --allow-root. Support has been removed upstream - see https://github.com/bazil/fuse/issues/144 for more info") } if opt.DefaultPermissions { options = append(options, fuse.DefaultPermissions()) } if VFS.Opt.ReadOnly { options = append(options, fuse.ReadOnly()) } if opt.WritebackCache { options = append(options, fuse.WritebackCache()) } if opt.DaemonTimeout != 0 { options = append(options, fuse.DaemonTimeout(fmt.Sprint(int(opt.DaemonTimeout.Seconds())))) } if len(opt.ExtraOptions) > 0 { fs.Errorf(nil, "-o/--option not supported with this FUSE backend") } if len(opt.ExtraFlags) > 0 { fs.Errorf(nil, "--fuse-flag not supported with this FUSE backend") } return options } // mount the file system // // The mount point will be ready when this returns. // // returns an error, and an error channel for the serve process to // report an error when fusermount is called. func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error, func() error, error) { if runtime.GOOS == "darwin" { fs.Logf(nil, "macOS users: please try \"rclone cmount\" as it will be the default in v1.54") } if opt.DebugFUSE { fuse.Debug = func(msg interface{}) { fs.Debugf("fuse", "%v", msg) } } f := VFS.Fs() fs.Debugf(f, "Mounting on %q", mountpoint) c, err := fuse.Mount(mountpoint, mountOptions(VFS, f.Name()+":"+f.Root(), opt)...) 
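// Errors can surface in two stages: fuse.Mount can fail immediately (checked
// just below), and later failures from the background Serve goroutine are
// reported through errChan once <-c.Ready has fired.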
if err != nil { return nil, nil, err } filesys := NewFS(VFS, opt) server := fusefs.New(c, nil) // Serve the mount point in the background returning error to errChan errChan := make(chan error, 1) go func() { err := server.Serve(filesys) closeErr := c.Close() if err == nil { err = closeErr } errChan <- err }() // check if the mount process has an error to report <-c.Ready if err := c.MountError; err != nil { return nil, nil, err } unmount := func() error { // Shutdown the VFS filesys.VFS.Shutdown() return fuse.Unmount(mountpoint) } return errChan, unmount, nil } rclone-1.53.3/cmd/mount/mount_test.go000066400000000000000000000005521375552240400175330ustar00rootroot00000000000000// +build linux,go1.13 darwin,go1.13 freebsd,go1.13 package mount import ( "runtime" "testing" "github.com/rclone/rclone/vfs/vfstest" ) func TestMount(t *testing.T) { if runtime.NumCPU() <= 2 { t.Skip("FIXME skipping mount tests as they lock up on <= 2 CPUs - See: https://github.com/rclone/rclone/issues/3154") } vfstest.RunTests(t, false, mount) } rclone-1.53.3/cmd/mount/mount_unsupported.go000066400000000000000000000007461375552240400211510ustar00rootroot00000000000000// Build for mount for unsupported platforms to stop go complaining // about "no buildable Go source files " // Invert the build constraint: linux,go1.13 darwin,go1.13 freebsd,go1.13 // // !((linux&&go1.13) || (darwin&&go1.13) || (freebsd&&go1.13)) // == !(linux&&go1.13) && !(darwin&&go1.13) && !(freebsd&&go1.13) // == (!linux || !go1.13) && (!darwin || !go1.13) && (!freebsd || !go1.13) // +build !linux !go1.13 // +build !darwin !go1.13 // +build !freebsd !go1.13 package mount rclone-1.53.3/cmd/mount/test/000077500000000000000000000000001375552240400157605ustar00rootroot00000000000000rclone-1.53.3/cmd/mount/test/seek_speed.go000066400000000000000000000032111375552240400204130ustar00rootroot00000000000000// +build ignore // Read blocks out of a single file to time the seeking code package main import ( "flag" "io" "log" "math/rand" "os" "time" ) var ( // Flags iterations = flag.Int("n", 25, "Iterations to try") maxBlockSize = flag.Int("b", 1024*1024, "Max block size to read") randSeed = flag.Int64("seed", 1, "Seed for the random number generator") ) func randomSeekTest(size int64, in *os.File, name string) { startTime := time.Now() start := rand.Int63n(size) blockSize := rand.Intn(*maxBlockSize) if int64(blockSize) > size-start { blockSize = int(size - start) } _, err := in.Seek(start, io.SeekStart) if err != nil { log.Fatalf("Seek failed on %q: %v", name, err) } buf := make([]byte, blockSize) _, err = io.ReadFull(in, buf) if err != nil { log.Fatalf("Read failed on %q: %v", name, err) } log.Printf("Reading %d from %d took %v ", blockSize, start, time.Since(startTime)) } func main() { flag.Parse() args := flag.Args() if len(args) != 1 { log.Fatalf("Require 1 file as argument") } rand.Seed(*randSeed) name := args[0] openStart := time.Now() in, err := os.Open(name) if err != nil { log.Fatalf("Couldn't open %q: %v", name, err) } log.Printf("File Open took %v", time.Since(openStart)) fi, err := in.Stat() if err != nil { log.Fatalf("Couldn't stat %q: %v", name, err) } start := time.Now() for i := 0; i < *iterations; i++ { randomSeekTest(fi.Size(), in, name) } dt := time.Since(start) log.Printf("That took %v for %d iterations, %v per iteration", dt, *iterations, dt/time.Duration(*iterations)) err = in.Close() if err != nil { log.Fatalf("Error closing %q: %v", name, err) } } 
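// Example usage (illustrative only - the path below is hypothetical):
//
//	go run seek_speed.go -n 50 -b 1048576 -seed 2 /mnt/rclone/bigfile
//
// which reads 50 random blocks of up to 1 MiB from the given file and logs
// how long each seek and read takes.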
rclone-1.53.3/cmd/mount/test/seeker.go000066400000000000000000000047131375552240400175720ustar00rootroot00000000000000// +build ignore // Read two files with lots of seeking to stress test the seek code package main import ( "bytes" "flag" "io" "io/ioutil" "log" "math/rand" "os" "time" ) var ( // Flags iterations = flag.Int("n", 1e6, "Iterations to try") maxBlockSize = flag.Int("b", 1024*1024, "Max block size to read") ) func init() { rand.Seed(time.Now().UnixNano()) } func randomSeekTest(size int64, in1, in2 *os.File, file1, file2 string) { start := rand.Int63n(size) blockSize := rand.Intn(*maxBlockSize) if int64(blockSize) > size-start { blockSize = int(size - start) } log.Printf("Reading %d from %d", blockSize, start) _, err := in1.Seek(start, io.SeekStart) if err != nil { log.Fatalf("Seek failed on %q: %v", file1, err) } _, err = in2.Seek(start, io.SeekStart) if err != nil { log.Fatalf("Seek failed on %q: %v", file2, err) } buf1 := make([]byte, blockSize) n1, err := io.ReadFull(in1, buf1) if err != nil { log.Fatalf("Read failed on %q: %v", file1, err) } buf2 := make([]byte, blockSize) n2, err := io.ReadFull(in2, buf2) if err != nil { log.Fatalf("Read failed on %q: %v", file2, err) } if n1 != n2 { log.Fatalf("Read different lengths %d (%q) != %d (%q)", n1, file1, n2, file2) } if !bytes.Equal(buf1, buf2) { log.Printf("Dumping different blocks") err = ioutil.WriteFile("/tmp/z1", buf1, 0777) if err != nil { log.Fatalf("Failed to write /tmp/z1: %v", err) } err = ioutil.WriteFile("/tmp/z2", buf2, 0777) if err != nil { log.Fatalf("Failed to write /tmp/z2: %v", err) } log.Fatalf("Read different contents - saved in /tmp/z1 and /tmp/z2") } } func main() { flag.Parse() args := flag.Args() if len(args) != 2 { log.Fatalf("Require 2 files as argument") } file1, file2 := args[0], args[1] in1, err := os.Open(file1) if err != nil { log.Fatalf("Couldn't open %q: %v", file1, err) } in2, err := os.Open(file2) if err != nil { log.Fatalf("Couldn't open %q: %v", file2, err) } fi1, err := in1.Stat() if err != nil { log.Fatalf("Couldn't stat %q: %v", file1, err) } fi2, err := in2.Stat() if err != nil { log.Fatalf("Couldn't stat %q: %v", file2, err) } if fi1.Size() != fi2.Size() { log.Fatalf("Files not the same size") } for i := 0; i < *iterations; i++ { randomSeekTest(fi1.Size(), in1, in2, file1, file2) } err = in1.Close() if err != nil { log.Fatalf("Error closing %q: %v", file1, err) } err = in2.Close() if err != nil { log.Fatalf("Error closing %q: %v", file2, err) } } rclone-1.53.3/cmd/mount/test/seekers.go000066400000000000000000000052251375552240400177540ustar00rootroot00000000000000// +build ignore // Read lots of files with lots of simultaneous seeking to stress test the seek code package main import ( "flag" "io" "log" "math/rand" "os" "path/filepath" "sort" "sync" "time" ) var ( // Flags iterations = flag.Int("n", 1e6, "Iterations to try") maxBlockSize = flag.Int("b", 1024*1024, "Max block size to read") simultaneous = flag.Int("transfers", 16, "Number of simultaneous files to open") seeksPerFile = flag.Int("seeks", 8, "Seeks per file") mask = flag.Int64("mask", 0, "mask for seek, eg 0x7fff") ) func init() { rand.Seed(time.Now().UnixNano()) } func seekTest(n int, file string) { in, err := os.Open(file) if err != nil { log.Fatalf("Couldn't open %q: %v", file, err) } fi, err := in.Stat() if err != nil { log.Fatalf("Couldn't stat %q: %v", file, err) } size := fi.Size() // FIXME make sure we try start and end maxBlockSize := *maxBlockSize if int64(maxBlockSize) > size { maxBlockSize = int(size) } for i := 0; 
i < n; i++ { start := rand.Int63n(size) if *mask != 0 { start &^= *mask } blockSize := rand.Intn(maxBlockSize) beyondEnd := false switch rand.Intn(10) { case 0: start = 0 case 1: start = size - int64(blockSize) case 2: // seek beyond the end start = size + int64(blockSize) beyondEnd = true default: } if !beyondEnd && int64(blockSize) > size-start { blockSize = int(size - start) } log.Printf("%s: Reading %d from %d", file, blockSize, start) _, err = in.Seek(start, io.SeekStart) if err != nil { log.Fatalf("Seek failed on %q: %v", file, err) } buf := make([]byte, blockSize) n, err := io.ReadFull(in, buf) if beyondEnd && err == io.EOF { // OK } else if err != nil { log.Fatalf("Read failed on %q: %v (%d)", file, err, n) } } err = in.Close() if err != nil { log.Fatalf("Error closing %q: %v", file, err) } } // Find all the files in dir func findFiles(dir string) (files []string) { filepath.Walk(dir, func(path string, info os.FileInfo, err error) error { if info.Mode().IsRegular() && info.Size() > 0 { files = append(files, path) } return nil }) sort.Strings(files) return files } func main() { flag.Parse() args := flag.Args() if len(args) != 1 { log.Fatalf("Require a directory as argument") } dir := args[0] files := findFiles(dir) jobs := make(chan string, *simultaneous) var wg sync.WaitGroup wg.Add(*simultaneous) for i := 0; i < *simultaneous; i++ { go func() { defer wg.Done() for file := range jobs { seekTest(*seeksPerFile, file) } }() } for i := 0; i < *iterations; i++ { i := rand.Intn(len(files)) jobs <- files[i] //jobs <- files[i] } close(jobs) wg.Wait() } rclone-1.53.3/cmd/mount2/000077500000000000000000000000001375552240400150635ustar00rootroot00000000000000rclone-1.53.3/cmd/mount2/file.go000066400000000000000000000116561375552240400163420ustar00rootroot00000000000000// +build linux darwin,amd64 package mount2 import ( "context" "fmt" "io" "syscall" fusefs "github.com/hanwen/go-fuse/v2/fs" "github.com/hanwen/go-fuse/v2/fuse" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/vfs" ) // FileHandle is a resource identifier for opened files. Usually, a // FileHandle should implement some of the FileXxxx interfaces. // // All of the FileXxxx operations can also be implemented at the // InodeEmbedder level, for example, one can implement NodeReader // instead of FileReader. // // FileHandles are useful in two cases: First, if the underlying // storage system needs a handle for reading/writing. This is the // case with Unix system calls, which need a file descriptor (See also // the function `NewLoopbackFile`). Second, it is useful for // implementing files whose contents are not tied to an inode. For // example, a file like `/proc/interrupts` has no fixed content, but // changes on each open call. This means that each file handle must // have its own view of the content; this view can be tied to a // FileHandle. Files that have such dynamic content should return the // FOPEN_DIRECT_IO flag from their `Open` method. See directio_test.go // for an example. type FileHandle struct { h vfs.Handle fsys *FS } // Create a new FileHandle func newFileHandle(h vfs.Handle, fsys *FS) *FileHandle { return &FileHandle{ h: h, fsys: fsys, } } // Check interface satisfied var _ fusefs.FileHandle = (*FileHandle)(nil) // The String method is for debug printing. func (f *FileHandle) String() string { return fmt.Sprintf("fh=%p(%s)", f, f.h.Node().Path()) } // Read data from a file. The data should be returned as // ReadResult, which may be constructed from the incoming // `dest` buffer. 
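// In the implementation below a read that hits end-of-file swallows io.EOF
// and returns a short ReadResult instead, which is what the kernel expects
// for the final partial block.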
func (f *FileHandle) Read(ctx context.Context, dest []byte, off int64) (res fuse.ReadResult, errno syscall.Errno) { var n int var err error defer log.Trace(f, "off=%d", off)("n=%d, off=%d, errno=%v", &n, &off, &errno) n, err = f.h.ReadAt(dest, off) if err == io.EOF { err = nil } return fuse.ReadResultData(dest[:n]), translateError(err) } var _ fusefs.FileReader = (*FileHandle)(nil) // Write the data into the file handle at given offset. After // returning, the data will be reused and may not referenced. func (f *FileHandle) Write(ctx context.Context, data []byte, off int64) (written uint32, errno syscall.Errno) { var n int var err error defer log.Trace(f, "off=%d", off)("n=%d, off=%d, errno=%v", &n, &off, &errno) n, err = f.h.WriteAt(data, off) return uint32(n), translateError(err) } var _ fusefs.FileWriter = (*FileHandle)(nil) // Flush is called for the close(2) call on a file descriptor. In case // of a descriptor that was duplicated using dup(2), it may be called // more than once for the same FileHandle. func (f *FileHandle) Flush(ctx context.Context) syscall.Errno { return translateError(f.h.Flush()) } var _ fusefs.FileFlusher = (*FileHandle)(nil) // Release is called to before a FileHandle is forgotten. The // kernel ignores the return value of this method, // so any cleanup that requires specific synchronization or // could fail with I/O errors should happen in Flush instead. func (f *FileHandle) Release(ctx context.Context) syscall.Errno { return translateError(f.h.Release()) } var _ fusefs.FileReleaser = (*FileHandle)(nil) // Fsync is a signal to ensure writes to the Inode are flushed // to stable storage. func (f *FileHandle) Fsync(ctx context.Context, flags uint32) (errno syscall.Errno) { return translateError(f.h.Sync()) } var _ fusefs.FileFsyncer = (*FileHandle)(nil) // Getattr reads attributes for an Inode. The library will ensure that // Mode and Ino are set correctly. For files that are not opened with // FOPEN_DIRECTIO, Size should be set so it can be read correctly. If // returning zeroed permissions, the default behavior is to change the // mode of 0755 (directory) or 0644 (files). This can be switched off // with the Options.NullPermissions setting. If blksize is unset, 4096 // is assumed, and the 'blocks' field is set accordingly. func (f *FileHandle) Getattr(ctx context.Context, out *fuse.AttrOut) (errno syscall.Errno) { defer log.Trace(f, "")("attr=%v, errno=%v", &out, &errno) f.fsys.setAttrOut(f.h.Node(), out) return 0 } var _ fusefs.FileGetattrer = (*FileHandle)(nil) // Setattr sets attributes for an Inode. 
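// Only two changes are honoured below: a size change (mapped to Truncate on
// the handle) and an mtime change (mapped to SetModTime on the node); all
// other attributes are just echoed back via setAttrOut.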
func (f *FileHandle) Setattr(ctx context.Context, in *fuse.SetAttrIn, out *fuse.AttrOut) (errno syscall.Errno) { defer log.Trace(f, "in=%v", in)("attr=%v, errno=%v", &out, &errno) var err error f.fsys.setAttrOut(f.h.Node(), out) size, ok := in.GetSize() if ok { err = f.h.Truncate(int64(size)) if err != nil { return translateError(err) } out.Attr.Size = size } mtime, ok := in.GetMTime() if ok { err = f.h.Node().SetModTime(mtime) if err != nil { return translateError(err) } out.Attr.Mtime = uint64(mtime.Unix()) out.Attr.Mtimensec = uint32(mtime.Nanosecond()) } return 0 } var _ fusefs.FileSetattrer = (*FileHandle)(nil) rclone-1.53.3/cmd/mount2/fs.go000066400000000000000000000056441375552240400160310ustar00rootroot00000000000000// FUSE main Fs // +build linux darwin,amd64 package mount2 import ( "os" "syscall" "github.com/hanwen/go-fuse/v2/fuse" "github.com/pkg/errors" "github.com/rclone/rclone/cmd/mountlib" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/vfs" ) // FS represents the top level filing system type FS struct { VFS *vfs.VFS f fs.Fs opt *mountlib.Options } // NewFS creates a new FS from the VFS passed in func NewFS(VFS *vfs.VFS, opt *mountlib.Options) *FS { fsys := &FS{ VFS: VFS, f: VFS.Fs(), opt: opt, } return fsys } // Root returns the root node func (f *FS) Root() (node *Node, err error) { defer log.Trace("", "")("node=%+v, err=%v", &node, &err) root, err := f.VFS.Root() if err != nil { return nil, err } return newNode(f, root), nil } // SetDebug, if called, provides debug output through the log package. func (f *FS) SetDebug(debug bool) { fs.Debugf(f.f, "SetDebug %v", debug) } // get the Mode from a vfs Node func getMode(node os.FileInfo) uint32 { Mode := node.Mode().Perm() if node.IsDir() { Mode |= fuse.S_IFDIR } else { Mode |= fuse.S_IFREG } return uint32(Mode) } // fill in attr from node func setAttr(node vfs.Node, attr *fuse.Attr) { Size := uint64(node.Size()) const BlockSize = 512 Blocks := (Size + BlockSize - 1) / BlockSize modTime := node.ModTime() // set attributes vfs := node.VFS() attr.Owner.Gid = vfs.Opt.GID attr.Owner.Uid = vfs.Opt.UID attr.Mode = getMode(node) attr.Size = Size attr.Nlink = 1 attr.Blocks = Blocks // attr.Blksize = BlockSize // not supported in freebsd/darwin, defaults to 4k if not set s := uint64(modTime.Unix()) ns := uint32(modTime.Nanosecond()) attr.Atime = s attr.Atimensec = ns attr.Mtime = s attr.Mtimensec = ns attr.Ctime = s attr.Ctimensec = ns //attr.Rdev } // fill in AttrOut from node func (f *FS) setAttrOut(node vfs.Node, out *fuse.AttrOut) { setAttr(node, &out.Attr) out.SetTimeout(f.opt.AttrTimeout) } // fill in EntryOut from node func (f *FS) setEntryOut(node vfs.Node, out *fuse.EntryOut) { setAttr(node, &out.Attr) out.SetEntryTimeout(f.opt.AttrTimeout) out.SetAttrTimeout(f.opt.AttrTimeout) } // Translate errors from mountlib into Syscall error numbers func translateError(err error) syscall.Errno { if err == nil { return 0 } switch errors.Cause(err) { case vfs.OK: return 0 case vfs.ENOENT, fs.ErrorDirNotFound, fs.ErrorObjectNotFound: return syscall.ENOENT case vfs.EEXIST, fs.ErrorDirExists: return syscall.EEXIST case vfs.EPERM, fs.ErrorPermissionDenied: return syscall.EPERM case vfs.ECLOSED: return syscall.EBADF case vfs.ENOTEMPTY: return syscall.ENOTEMPTY case vfs.ESPIPE: return syscall.ESPIPE case vfs.EBADF: return syscall.EBADF case vfs.EROFS: return syscall.EROFS case vfs.ENOSYS, fs.ErrorNotImplemented: return syscall.ENOSYS case vfs.EINVAL: return syscall.EINVAL } fs.Errorf(nil, "IO 
error: %v", err) return syscall.EIO } rclone-1.53.3/cmd/mount2/mount.go000066400000000000000000000137001375552240400165550ustar00rootroot00000000000000// Package mount implements a FUSE mounting system for rclone remotes. // +build linux darwin,amd64 package mount2 import ( "fmt" "log" "runtime" fusefs "github.com/hanwen/go-fuse/v2/fs" "github.com/hanwen/go-fuse/v2/fuse" "github.com/rclone/rclone/cmd/mountlib" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/vfs" ) func init() { mountlib.NewMountCommand("mount2", true, mount) mountlib.AddRc("mount2", mount) } // mountOptions configures the options from the command line flags // // man mount.fuse for more info and note the -o flag for other options func mountOptions(fsys *FS, f fs.Fs) (mountOpts *fuse.MountOptions) { device := f.Name() + ":" + f.Root() mountOpts = &fuse.MountOptions{ AllowOther: fsys.opt.AllowOther, FsName: device, Name: "rclone", DisableXAttrs: true, Debug: fsys.opt.DebugFUSE, MaxReadAhead: int(fsys.opt.MaxReadAhead), // RememberInodes: true, // SingleThreaded: true, /* AllowOther bool // Options are passed as -o string to fusermount. Options []string // Default is _DEFAULT_BACKGROUND_TASKS, 12. This numbers // controls the allowed number of requests that relate to // async I/O. Concurrency for synchronous I/O is not limited. MaxBackground int // Write size to use. If 0, use default. This number is // capped at the kernel maximum. MaxWrite int // Max read ahead to use. If 0, use default. This number is // capped at the kernel maximum. MaxReadAhead int // If IgnoreSecurityLabels is set, all security related xattr // requests will return NO_DATA without passing through the // user defined filesystem. You should only set this if you // file system implements extended attributes, and you are not // interested in security labels. IgnoreSecurityLabels bool // ignoring labels should be provided as a fusermount mount option. // If RememberInodes is set, we will never forget inodes. // This may be useful for NFS. RememberInodes bool // Values shown in "df -T" and friends // First column, "Filesystem" FsName string // Second column, "Type", will be shown as "fuse." + Name Name string // If set, wrap the file system in a single-threaded locking wrapper. SingleThreaded bool // If set, return ENOSYS for Getxattr calls, so the kernel does not issue any // Xattr operations at all. DisableXAttrs bool // If set, print debugging information. Debug bool // If set, ask kernel to forward file locks to FUSE. If using, // you must implement the GetLk/SetLk/SetLkw methods. EnableLocks bool // If set, ask kernel not to do automatic data cache invalidation. // The filesystem is fully responsible for invalidating data cache. ExplicitDataCacheControl bool */ } var opts []string // FIXME doesn't work opts = append(opts, fmt.Sprintf("max_readahead=%d", maxReadAhead)) if fsys.opt.AllowNonEmpty { opts = append(opts, "nonempty") } if fsys.opt.AllowOther { opts = append(opts, "allow_other") } if fsys.opt.AllowRoot { opts = append(opts, "allow_root") } if fsys.opt.DefaultPermissions { opts = append(opts, "default_permissions") } if fsys.VFS.Opt.ReadOnly { opts = append(opts, "ro") } if fsys.opt.WritebackCache { log.Printf("FIXME --write-back-cache not supported") // FIXME opts = append(opts,fuse.WritebackCache()) } // Some OS X only options if runtime.GOOS == "darwin" { opts = append(opts, // VolumeName sets the volume name shown in Finder. 
fmt.Sprintf("volname=%s", device), // NoAppleXattr makes OSXFUSE disallow extended attributes with the // prefix "com.apple.". This disables persistent Finder state and // other such information. "noapplexattr", // NoAppleDouble makes OSXFUSE disallow files with names used by OS X // to store extended attributes on file systems that do not support // them natively. // // Such file names are: // // ._* // .DS_Store "noappledouble", ) } mountOpts.Options = opts return mountOpts } // mount the file system // // The mount point will be ready when this returns. // // returns an error, and an error channel for the serve process to // report an error when fusermount is called. func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error, func() error, error) { f := VFS.Fs() fs.Debugf(f, "Mounting on %q", mountpoint) fsys := NewFS(VFS, opt) // nodeFsOpts := &fusefs.PathNodeFsOptions{ // ClientInodes: false, // Debug: mountlib.DebugFUSE, // } // nodeFs := fusefs.NewPathNodeFs(fsys, nodeFsOpts) //mOpts := fusefs.NewOptions() // default options // FIXME // mOpts.EntryTimeout = 10 * time.Second // mOpts.AttrTimeout = 10 * time.Second // mOpts.NegativeTimeout = 10 * time.Second //mOpts.Debug = mountlib.DebugFUSE //conn := fusefs.NewFileSystemConnector(nodeFs.Root(), mOpts) mountOpts := mountOptions(fsys, f) // FIXME fill out opts := fusefs.Options{ MountOptions: *mountOpts, EntryTimeout: &opt.AttrTimeout, AttrTimeout: &opt.AttrTimeout, // UID // GID } root, err := fsys.Root() if err != nil { return nil, nil, err } rawFS := fusefs.NewNodeFS(root, &opts) server, err := fuse.NewServer(rawFS, mountpoint, &opts.MountOptions) if err != nil { return nil, nil, err } //mountOpts := &fuse.MountOptions{} //server, err := fusefs.Mount(mountpoint, fsys, &opts) // server, err := fusefs.Mount(mountpoint, root, &opts) // if err != nil { // return nil, nil, err // } umount := func() error { // Shutdown the VFS fsys.VFS.Shutdown() return server.Unmount() } // serverSettings := server.KernelSettings() // fs.Debugf(f, "Server settings %+v", serverSettings) // Serve the mount point in the background returning error to errChan errs := make(chan error, 1) go func() { server.Serve() errs <- nil }() fs.Debugf(f, "Waiting for the mount to start...") err = server.WaitMount() if err != nil { return nil, nil, err } fs.Debugf(f, "Mount started") return errs, umount, nil } rclone-1.53.3/cmd/mount2/mount_test.go000066400000000000000000000002621375552240400176130ustar00rootroot00000000000000// +build linux darwin,amd64 package mount2 import ( "testing" "github.com/rclone/rclone/vfs/vfstest" ) func TestMount(t *testing.T) { vfstest.RunTests(t, false, mount) } rclone-1.53.3/cmd/mount2/mount_unsupported.go000066400000000000000000000002501375552240400212210ustar00rootroot00000000000000// Build for mount for unsupported platforms to stop go complaining // about "no buildable Go source files " // +build !linux // +build !darwin !amd64 package mount2 rclone-1.53.3/cmd/mount2/node.go000066400000000000000000000314001375552240400163350ustar00rootroot00000000000000// +build linux darwin,amd64 package mount2 import ( "context" "os" "path" "syscall" fusefs "github.com/hanwen/go-fuse/v2/fs" "github.com/hanwen/go-fuse/v2/fuse" "github.com/rclone/rclone/cmd/mountlib" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/vfs" ) // Node represents a directory or file type Node struct { fusefs.Inode node vfs.Node fsys *FS } // Node types must be InodeEmbedders var _ fusefs.InodeEmbedder = 
(*Node)(nil) // newNode creates a new fusefs.Node from a vfs Node func newNode(fsys *FS, vfsNode vfs.Node) (node *Node) { // Check the vfsNode to see if it has a fuse Node cached // We must return the same fuse nodes for vfs Nodes node, ok := vfsNode.Sys().(*Node) if ok { return node } node = &Node{ node: vfsNode, fsys: fsys, } // Cache the node for later vfsNode.SetSys(node) return node } // String used for pretty printing. func (n *Node) String() string { return n.node.Path() } // lookup a Node in a directory func (n *Node) lookupVfsNodeInDir(leaf string) (vfsNode vfs.Node, errno syscall.Errno) { dir, ok := n.node.(*vfs.Dir) if !ok { return nil, syscall.ENOTDIR } vfsNode, err := dir.Stat(leaf) return vfsNode, translateError(err) } // // lookup a Dir given a path // func (n *Node) lookupDir(path string) (dir *vfs.Dir, code fuse.Status) { // node, code := fsys.lookupVfsNodeInDir(path) // if !code.Ok() { // return nil, code // } // dir, ok := n.(*vfs.Dir) // if !ok { // return nil, fuse.ENOTDIR // } // return dir, fuse.OK // } // // lookup a parent Dir given a path returning the dir and the leaf // func (n *Node) lookupParentDir(filePath string) (leaf string, dir *vfs.Dir, code fuse.Status) { // parentDir, leaf := path.Split(filePath) // dir, code = fsys.lookupDir(parentDir) // return leaf, dir, code // } // Statfs implements statistics for the filesystem that holds this // Inode. If not defined, the `out` argument will be zeroed with an OK // result. This is because OSX filesystems must Statfs, or the mount // will not work. func (n *Node) Statfs(ctx context.Context, out *fuse.StatfsOut) syscall.Errno { defer log.Trace(n, "")("out=%+v", &out) out = new(fuse.StatfsOut) const blockSize = 4096 const fsBlocks = (1 << 50) / blockSize out.Blocks = fsBlocks // Total data blocks in file system. out.Bfree = fsBlocks // Free blocks in file system. out.Bavail = fsBlocks // Free blocks in file system if you're not root. out.Files = 1e9 // Total files in file system. out.Ffree = 1e9 // Free files in file system. out.Bsize = blockSize // Block size out.NameLen = 255 // Maximum file name length? out.Frsize = blockSize // Fragment size, smallest addressable data size in the file system. mountlib.ClipBlocks(&out.Blocks) mountlib.ClipBlocks(&out.Bfree) mountlib.ClipBlocks(&out.Bavail) return 0 } var _ = (fusefs.NodeStatfser)((*Node)(nil)) // Getattr reads attributes for an Inode. The library will ensure that // Mode and Ino are set correctly. For files that are not opened with // FOPEN_DIRECTIO, Size should be set so it can be read correctly. If // returning zeroed permissions, the default behavior is to change the // mode of 0755 (directory) or 0644 (files). This can be switched off // with the Options.NullPermissions setting. If blksize is unset, 4096 // is assumed, and the 'blocks' field is set accordingly. func (n *Node) Getattr(ctx context.Context, f fusefs.FileHandle, out *fuse.AttrOut) syscall.Errno { n.fsys.setAttrOut(n.node, out) return 0 } var _ = (fusefs.NodeGetattrer)((*Node)(nil)) // Setattr sets attributes for an Inode. 
func (n *Node) Setattr(ctx context.Context, f fusefs.FileHandle, in *fuse.SetAttrIn, out *fuse.AttrOut) (errno syscall.Errno) { defer log.Trace(n, "in=%v", in)("out=%#v, errno=%v", &out, &errno) var err error n.fsys.setAttrOut(n.node, out) size, ok := in.GetSize() if ok { err = n.node.Truncate(int64(size)) if err != nil { return translateError(err) } out.Attr.Size = size } mtime, ok := in.GetMTime() if ok { err = n.node.SetModTime(mtime) if err != nil { return translateError(err) } out.Attr.Mtime = uint64(mtime.Unix()) out.Attr.Mtimensec = uint32(mtime.Nanosecond()) } return 0 } var _ = (fusefs.NodeSetattrer)((*Node)(nil)) // Open opens an Inode (of regular file type) for reading. It // is optional but recommended to return a FileHandle. func (n *Node) Open(ctx context.Context, flags uint32) (fh fusefs.FileHandle, fuseFlags uint32, errno syscall.Errno) { defer log.Trace(n, "flags=%#o", flags)("errno=%v", &errno) // fuse flags are based off syscall flags as are os flags, so // should be compatible handle, err := n.node.Open(int(flags)) if err != nil { return nil, 0, translateError(err) } // If size unknown then use direct io to read if entry := n.node.DirEntry(); entry != nil && entry.Size() < 0 { fuseFlags |= fuse.FOPEN_DIRECT_IO } return newFileHandle(handle, n.fsys), fuseFlags, 0 } var _ = (fusefs.NodeOpener)((*Node)(nil)) // Lookup should find a direct child of a directory by the child's name. If // the entry does not exist, it should return ENOENT and optionally // set a NegativeTimeout in `out`. If it does exist, it should return // attribute data in `out` and return the Inode for the child. A new // inode can be created using `Inode.NewInode`. The new Inode will be // added to the FS tree automatically if the return status is OK. // // If a directory does not implement NodeLookuper, the library looks // for an existing child with the given name. // // The input to a Lookup is {parent directory, name string}. // // Lookup, if successful, must return an *Inode. Once the Inode is // returned to the kernel, the kernel can issue further operations, // such as Open or Getxattr on that node. // // A successful Lookup also returns an EntryOut. Among others, this // contains file attributes (mode, size, mtime, etc.). // // FUSE supports other operations that modify the namespace. For // example, the Symlink, Create, Mknod, Link methods all create new // children in directories. Hence, they also return *Inode and must // populate their fuse.EntryOut arguments. func (n *Node) Lookup(ctx context.Context, name string, out *fuse.EntryOut) (inode *fusefs.Inode, errno syscall.Errno) { defer log.Trace(n, "name=%q", name)("inode=%v, attr=%v, errno=%v", &inode, &out, &errno) vfsNode, errno := n.lookupVfsNodeInDir(name) if errno != 0 { return nil, errno } newNode := newNode(n.fsys, vfsNode) // FIXME // out.SetEntryTimeout(dt time.Duration) // out.SetAttrTimeout(dt time.Duration) n.fsys.setEntryOut(vfsNode, out) return n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode}), 0 } var _ = (fusefs.NodeLookuper)((*Node)(nil)) // Opendir opens a directory Inode for reading its // contents. The actual reading is driven from Readdir, so // this method is just for performing sanity/permission // checks. The default is to return success. func (n *Node) Opendir(ctx context.Context) syscall.Errno { if !n.node.IsDir() { return syscall.ENOTDIR } return 0 } var _ = (fusefs.NodeOpendirer)((*Node)(nil)) type dirStream struct { nodes []os.FileInfo i int } // HasNext indicates if there are further entries. 
HasNext // might be called on already closed streams. func (ds *dirStream) HasNext() bool { return ds.i < len(ds.nodes) } // Next retrieves the next entry. It is only called if HasNext // has previously returned true. The Errno return may be used to // indicate I/O errors. func (ds *dirStream) Next() (de fuse.DirEntry, errno syscall.Errno) { // defer log.Trace(nil, "")("de=%+v, errno=%v", &de, &errno) fi := ds.nodes[ds.i] de = fuse.DirEntry{ // Mode is the file's mode. Only the high bits (eg. S_IFDIR) // are considered. Mode: getMode(fi), // Name is the basename of the file in the directory. Name: path.Base(fi.Name()), // Ino is the inode number. Ino: 0, // FIXME } ds.i++ return de, 0 } // Close releases resources related to this directory // stream. func (ds *dirStream) Close() { } var _ fusefs.DirStream = (*dirStream)(nil) // Readdir opens a stream of directory entries. // // Readdir essentially returns a list of strings, and it is allowed // for Readdir to return different results from Lookup. For example, // you can return nothing for Readdir ("ls my-fuse-mount" is empty), // while still implementing Lookup ("ls my-fuse-mount/a-specific-file" // shows a single file). // // If a directory does not implement NodeReaddirer, a list of // currently known children from the tree is returned. This means that // static in-memory file systems need not implement NodeReaddirer. func (n *Node) Readdir(ctx context.Context) (ds fusefs.DirStream, errno syscall.Errno) { defer log.Trace(n, "")("ds=%v, errno=%v", &ds, &errno) if !n.node.IsDir() { return nil, syscall.ENOTDIR } fh, err := n.node.Open(os.O_RDONLY) if err != nil { return nil, translateError(err) } defer func() { closeErr := fh.Close() if errno == 0 && closeErr != nil { errno = translateError(closeErr) } }() items, err := fh.Readdir(-1) if err != nil { return nil, translateError(err) } return &dirStream{ nodes: items, }, 0 } var _ = (fusefs.NodeReaddirer)((*Node)(nil)) // Mkdir is similar to Lookup, but must create a directory entry and Inode. // Default is to return EROFS. func (n *Node) Mkdir(ctx context.Context, name string, mode uint32, out *fuse.EntryOut) (inode *fusefs.Inode, errno syscall.Errno) { defer log.Trace(name, "mode=0%o", mode)("inode=%v, errno=%v", &inode, &errno) dir, ok := n.node.(*vfs.Dir) if !ok { return nil, syscall.ENOTDIR } newDir, err := dir.Mkdir(name) if err != nil { return nil, translateError(err) } newNode := newNode(n.fsys, newDir) n.fsys.setEntryOut(newNode.node, out) newInode := n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode}) return newInode, 0 } var _ = (fusefs.NodeMkdirer)((*Node)(nil)) // Create is similar to Lookup, but should create a new // child. It typically also returns a FileHandle as a // reference for future reads/writes. // Default is to return EROFS. 
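// The flow below: create the file in the VFS, open a handle on it, then look
// the new node up again so the kernel receives a fully populated EntryOut
// for the new inode.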
func (n *Node) Create(ctx context.Context, name string, flags uint32, mode uint32, out *fuse.EntryOut) (node *fusefs.Inode, fh fusefs.FileHandle, fuseFlags uint32, errno syscall.Errno) { defer log.Trace(n, "name=%q, flags=%#o, mode=%#o", name, flags, mode)("node=%v, fh=%v, flags=%#o, errno=%v", &node, &fh, &fuseFlags, &errno) dir, ok := n.node.(*vfs.Dir) if !ok { return nil, nil, 0, syscall.ENOTDIR } // translate the fuse flags to os flags osFlags := int(flags) | os.O_CREATE file, err := dir.Create(name, osFlags) if err != nil { return nil, nil, 0, translateError(err) } handle, err := file.Open(osFlags) if err != nil { return nil, nil, 0, translateError(err) } fh = newFileHandle(handle, n.fsys) // FIXME // fh = &fusefs.WithFlags{ // File: fh, // //FuseFlags: fuse.FOPEN_NONSEEKABLE, // OpenFlags: flags, // } // Find the created node vfsNode, errno := n.lookupVfsNodeInDir(name) if errno != 0 { return nil, nil, 0, errno } n.fsys.setEntryOut(vfsNode, out) newNode := newNode(n.fsys, vfsNode) fs.Debugf(nil, "attr=%#v", out.Attr) newInode := n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode}) return newInode, fh, 0, 0 } var _ = (fusefs.NodeCreater)((*Node)(nil)) // Unlink should remove a child from this directory. If the // return status is OK, the Inode is removed as child in the // FS tree automatically. Default is to return EROFS. func (n *Node) Unlink(ctx context.Context, name string) (errno syscall.Errno) { defer log.Trace(n, "name=%q", name)("errno=%v", &errno) vfsNode, errno := n.lookupVfsNodeInDir(name) if errno != 0 { return errno } return translateError(vfsNode.Remove()) } var _ = (fusefs.NodeUnlinker)((*Node)(nil)) // Rmdir is like Unlink but for directories. // Default is to return EROFS. func (n *Node) Rmdir(ctx context.Context, name string) (errno syscall.Errno) { defer log.Trace(n, "name=%q", name)("errno=%v", &errno) vfsNode, errno := n.lookupVfsNodeInDir(name) if errno != 0 { return errno } return translateError(vfsNode.Remove()) } var _ = (fusefs.NodeRmdirer)((*Node)(nil)) // Rename should move a child from one directory to a different // one. The change is effected in the FS tree if the return status is // OK. Default is to return EROFS. 
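// The implementation below checks that both the receiver and newParent wrap
// a *vfs.Dir before delegating the actual move to vfs.Dir.Rename.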
func (n *Node) Rename(ctx context.Context, oldName string, newParent fusefs.InodeEmbedder, newName string, flags uint32) (errno syscall.Errno) { defer log.Trace(n, "oldName=%q, newParent=%v, newName=%q", oldName, newParent, newName)("errno=%v", &errno) oldDir, ok := n.node.(*vfs.Dir) if !ok { return syscall.ENOTDIR } newParentNode, ok := newParent.(*Node) if !ok { fs.Errorf(n, "newParent was not a *Node") return syscall.EIO } newDir, ok := newParentNode.node.(*vfs.Dir) if !ok { return syscall.ENOTDIR } return translateError(oldDir.Rename(oldName, newName, newDir)) } var _ = (fusefs.NodeRenamer)((*Node)(nil)) rclone-1.53.3/cmd/mountlib/000077500000000000000000000000001375552240400154705ustar00rootroot00000000000000rclone-1.53.3/cmd/mountlib/daemon.go000066400000000000000000000003661375552240400172670ustar00rootroot00000000000000// Daemonization interface for non-Unix variants only // +build windows package mountlib import ( "log" "runtime" ) func startBackgroundMode() bool { log.Fatalf("background mode not supported on %s platform", runtime.GOOS) return false } rclone-1.53.3/cmd/mountlib/daemon_unix.go000066400000000000000000000007121375552240400203250ustar00rootroot00000000000000// Daemonization interface for Unix variants only // +build !windows package mountlib import ( "log" daemon "github.com/sevlyar/go-daemon" ) func startBackgroundMode() bool { cntxt := &daemon.Context{} d, err := cntxt.Reborn() if err != nil { log.Fatalln(err) } if d != nil { return true } defer func() { if err := cntxt.Release(); err != nil { log.Printf("error encountered while killing daemon: %v", err) } }() return false } rclone-1.53.3/cmd/mountlib/mount.go000066400000000000000000000425131375552240400171660ustar00rootroot00000000000000package mountlib import ( "io" "log" "os" "os/signal" "path/filepath" "runtime" "strings" "syscall" "time" "github.com/okzk/sdnotify" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/lib/atexit" "github.com/rclone/rclone/vfs" "github.com/rclone/rclone/vfs/vfsflags" "github.com/spf13/cobra" "github.com/spf13/pflag" ) // Options for creating the mount type Options struct { DebugFUSE bool AllowNonEmpty bool AllowRoot bool AllowOther bool DefaultPermissions bool WritebackCache bool Daemon bool MaxReadAhead fs.SizeSuffix ExtraOptions []string ExtraFlags []string AttrTimeout time.Duration // how long the kernel caches attribute for VolumeName string NoAppleDouble bool NoAppleXattr bool DaemonTimeout time.Duration // OSXFUSE only AsyncRead bool } // DefaultOpt is the default values for creating the mount var DefaultOpt = Options{ MaxReadAhead: 128 * 1024, AttrTimeout: 1 * time.Second, // how long the kernel caches attribute for NoAppleDouble: true, // use noappledouble by default NoAppleXattr: false, // do not use noapplexattr by default AsyncRead: true, // do async reads by default } type ( // UnmountFn is called to unmount the file system UnmountFn func() error // MountFn is called to mount the file system MountFn func(VFS *vfs.VFS, mountpoint string, opt *Options) (<-chan error, func() error, error) ) // Global constants const ( MaxLeafSize = 1024 // don't pass file names longer than this ) func init() { // DaemonTimeout defaults to non zero for macOS if runtime.GOOS == "darwin" { DefaultOpt.DaemonTimeout = 15 * time.Minute } } // Options set by command line flags var ( Opt = DefaultOpt ) // AddFlags adds the non 
filing system specific flags to the command func AddFlags(flagSet *pflag.FlagSet) { rc.AddOption("mount", &Opt) flags.BoolVarP(flagSet, &Opt.DebugFUSE, "debug-fuse", "", Opt.DebugFUSE, "Debug the FUSE internals - needs -v.") flags.BoolVarP(flagSet, &Opt.AllowNonEmpty, "allow-non-empty", "", Opt.AllowNonEmpty, "Allow mounting over a non-empty directory (not Windows).") flags.BoolVarP(flagSet, &Opt.AllowRoot, "allow-root", "", Opt.AllowRoot, "Allow access to root user.") flags.BoolVarP(flagSet, &Opt.AllowOther, "allow-other", "", Opt.AllowOther, "Allow access to other users.") flags.BoolVarP(flagSet, &Opt.DefaultPermissions, "default-permissions", "", Opt.DefaultPermissions, "Makes kernel enforce access control based on the file mode.") flags.BoolVarP(flagSet, &Opt.WritebackCache, "write-back-cache", "", Opt.WritebackCache, "Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.") flags.FVarP(flagSet, &Opt.MaxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads.") flags.DurationVarP(flagSet, &Opt.AttrTimeout, "attr-timeout", "", Opt.AttrTimeout, "Time for which file/directory attributes are cached.") flags.StringArrayVarP(flagSet, &Opt.ExtraOptions, "option", "o", []string{}, "Option for libfuse/WinFsp. Repeat if required.") flags.StringArrayVarP(flagSet, &Opt.ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.") flags.BoolVarP(flagSet, &Opt.Daemon, "daemon", "", Opt.Daemon, "Run mount as a daemon (background mode).") flags.StringVarP(flagSet, &Opt.VolumeName, "volname", "", Opt.VolumeName, "Set the volume name (not supported by all OSes).") flags.DurationVarP(flagSet, &Opt.DaemonTimeout, "daemon-timeout", "", Opt.DaemonTimeout, "Time limit for rclone to respond to kernel (not supported by all OSes).") flags.BoolVarP(flagSet, &Opt.AsyncRead, "async-read", "", Opt.AsyncRead, "Use asynchronous reads.") if runtime.GOOS == "darwin" { flags.BoolVarP(flagSet, &Opt.NoAppleDouble, "noappledouble", "", Opt.NoAppleDouble, "Sets the OSXFUSE option noappledouble.") flags.BoolVarP(flagSet, &Opt.NoAppleXattr, "noapplexattr", "", Opt.NoAppleXattr, "Sets the OSXFUSE option noapplexattr.") } } // Check if folder is empty func checkMountEmpty(mountpoint string) error { fp, fpErr := os.Open(mountpoint) if fpErr != nil { return errors.Wrap(fpErr, "Can not open: "+mountpoint) } defer fs.CheckClose(fp, &fpErr) _, fpErr = fp.Readdirnames(1) // directory is not empty if fpErr != io.EOF { var e error var errorMsg = "Directory is not empty: " + mountpoint + " If you want to mount it anyway use: --allow-non-empty option" if fpErr == nil { e = errors.New(errorMsg) } else { e = errors.Wrap(fpErr, errorMsg) } return e } return nil } // Check the root doesn't overlap the mountpoint func checkMountpointOverlap(root, mountpoint string) error { abs := func(x string) string { if absX, err := filepath.EvalSymlinks(x); err == nil { x = absX } if absX, err := filepath.Abs(x); err == nil { x = absX } x = filepath.ToSlash(x) if !strings.HasSuffix(x, "/") { x += "/" } return x } rootAbs, mountpointAbs := abs(root), abs(mountpoint) if strings.HasPrefix(rootAbs, mountpointAbs) || strings.HasPrefix(mountpointAbs, rootAbs) { return errors.Errorf("mount point %q and directory to be mounted %q mustn't overlap", mountpoint, root) } return nil }
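checkMountpointOverlap above normalises both paths to an absolute, slash-terminated form and then detects nesting with a prefix test in either direction. A standalone sketch of the same idea (symlink resolution via EvalSymlinks omitted for brevity; overlaps is an illustrative name, not rclone's):

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"strings"
    )

    // overlaps reports whether one path is nested inside the other after
    // normalising both to absolute, slash-terminated form.
    func overlaps(root, mountpoint string) bool {
    	abs := func(x string) string {
    		if absX, err := filepath.Abs(x); err == nil {
    			x = absX
    		}
    		x = filepath.ToSlash(x)
    		if !strings.HasSuffix(x, "/") {
    			x += "/"
    		}
    		return x
    	}
    	rootAbs, mountAbs := abs(root), abs(mountpoint)
    	return strings.HasPrefix(rootAbs, mountAbs) || strings.HasPrefix(mountAbs, rootAbs)
    }

    func main() {
    	fmt.Println(overlaps("/home/user/files", "/home/user/files/mnt")) // true - nested
    	fmt.Println(overlaps("/home/user/files", "/home/user/mnt"))       // false - siblings
    }

The trailing slash in the normalisation matters: without it, /home/user/files would wrongly prefix-match /home/user/files2.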
// NewMountCommand makes a mount command with the given name and Mount function func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Command { var commandDefinition = &cobra.Command{ Use: commandName + " remote:path /path/to/mountpoint", Hidden: hidden, Short: `Mount the remote as file system on a mountpoint.`, Long: ` rclone ` + commandName + ` allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. First set up your remote using ` + "`rclone config`" + `. Check it works with ` + "`rclone ls`" + ` etc. You can either run mount in foreground mode or background (daemon) mode. Mount runs in foreground mode by default, use the --daemon flag to specify background mode. Background mode is only supported on Linux and OSX, you can only run mount in foreground mode on Windows. On Linux/macOS/FreeBSD Start the mount like this where ` + "`/path/to/local/mount`" + ` is an **empty** **existing** directory. rclone ` + commandName + ` remote:path/to/files /path/to/local/mount Or on Windows like this where ` + "`X:`" + ` is an unused drive letter or use a path to a **non-existent** directory. rclone ` + commandName + ` remote:path/to/files X: rclone ` + commandName + ` remote:path/to/files C:\path\to\nonexistent\directory When running in background mode the user will have to stop the mount manually (as described below). When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped. The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually. Stopping the mount manually: # Linux fusermount -u /path/to/local/mount # OS X umount /path/to/local/mount **Note**: As of ` + "`rclone` 1.52.2, `rclone mount`" + ` now requires Go version 1.13 or newer on some platforms depending on the underlying FUSE library in use. ### Installing on Windows To run rclone ` + commandName + ` on Windows, you will need to download and install [WinFsp](http://www.secfs.net/winfsp/). [WinFsp](https://github.com/billziss-gh/winfsp) is an open source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with [cgofuse](https://github.com/billziss-gh/cgofuse). Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone ` + commandName + ` for Windows. #### Windows caveats Note that drives created as Administrator are not visible by other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive. The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using [the WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)) which creates drives accessible for everyone on the system or alternatively using [the nssm service manager](https://nssm.cc/usage). #### Mount as a network drive By default, rclone will mount the remote as a normal drive. However, you can also mount it as a **Network Drive** (or **Network Share**, as mentioned in some places). Unlike other systems, Windows provides a different filesystem type for network drives.
Windows and other programs treat the network drives and fixed/removable drives differently: In network drives, many I/O operations are optimized, as the high latency and low reliability (compared to a normal drive) of a network are expected. Although many people prefer network shares to be mounted as normal system drives, this might cause some issues, such as programs not working as expected or freezes and errors while operating with the mounted remote in Windows Explorer. If you experience any of those, consider mounting rclone remotes as network shares, as Windows expects normal drives to be fast and reliable, while cloud storage is far from that. See also [Limitations](#limitations) section below for more info. Add "--fuse-flag --VolumePrefix=\server\share" to your "mount" command, **replacing "share" with any other name of your choice if you are mounting more than one remote**. Otherwise, the mountpoints will conflict and your mounted filesystems will overlap. [Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping) ### Limitations Without the use of "--vfs-cache-mode" this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the [File Caching](#file-caching) section for more info. The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache. Only supported on Linux, FreeBSD, OS X and Windows at the moment. ### rclone ` + commandName + ` vs rclone sync/copy File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone ` + commandName + ` can't use retries in the same way without making local copies of the uploads. Look at the [file caching](#file-caching) section for solutions to make ` + commandName + ` more reliable. ### Attribute caching You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries. The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel. In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as [rclone using too much memory](https://github.com/rclone/rclone/issues/2157), [rclone not serving files to samba](https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112) and [excessive time listing directories](https://github.com/rclone/rclone/issues/2095#issuecomment-371141147). The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above. If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.
If files don't change on the remote outside of the control of rclone then there is no chance of corruption. This is the same as setting the attr_timeout option in mount.fuse. ### Filters Note that all the rclone filters can be used to select a subset of the files to be visible in the mount. ### systemd When running rclone ` + commandName + ` as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone ` + commandName + ` service specified as a requirement will see all files and folders immediately in this mode. ### chunked reading --vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests. When --vfs-read-chunk-size-limit is also specified and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely. With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. ` + vfs.Help, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(2, 2, command, args) opt := Opt // make a copy of the options if opt.Daemon { config.PassConfigKeyForDaemonization = true } mountpoint := args[1] fdst := cmd.NewFsDir(args) if fdst.Name() == "" || fdst.Name() == "local" { err := checkMountpointOverlap(fdst.Root(), mountpoint) if err != nil { log.Fatalf("Fatal error: %v", err) } } // Show stats if the user has specifically requested them if cmd.ShowStats() { defer cmd.StartStats()() } // Skip checkMountEmpty if --allow-non-empty flag is used or if // the Operating System is Windows if !opt.AllowNonEmpty && runtime.GOOS != "windows" { err := checkMountEmpty(mountpoint) if err != nil { log.Fatalf("Fatal error: %v", err) } } else if opt.AllowNonEmpty && runtime.GOOS == "windows" { fs.Logf(nil, "--allow-non-empty flag does nothing on Windows") } // Work out the volume name, removing special // characters from it if necessary if opt.VolumeName == "" { opt.VolumeName = fdst.Name() + ":" + fdst.Root() } opt.VolumeName = strings.Replace(opt.VolumeName, ":", " ", -1) opt.VolumeName = strings.Replace(opt.VolumeName, "/", " ", -1) opt.VolumeName = strings.TrimSpace(opt.VolumeName) if runtime.GOOS == "windows" && len(opt.VolumeName) > 32 { opt.VolumeName = opt.VolumeName[:32] } // Start background task if --daemon is specified if opt.Daemon { daemonized := startBackgroundMode() if daemonized { return } } VFS := vfs.New(fdst, &vfsflags.Opt) err := Mount(VFS, mountpoint, mount, &opt) if err != nil { log.Fatalf("Fatal error: %v", err) } }, } // Register the command cmd.Root.AddCommand(commandDefinition) // Add flags cmdFlags := commandDefinition.Flags() AddFlags(cmdFlags) vfsflags.AddFlags(cmdFlags) return commandDefinition }
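The chunk-size doubling documented in the chunked reading section above reduces to a few lines of arithmetic. A small illustrative program - not rclone's actual reader code - that reproduces the documented example (all quantities in MiB for readability):

    package main

    import "fmt"

    // chunkRanges mimics the documented growth rule: read `size` bytes,
    // then double `size` after every chunk until `limit` is reached.
    // limit 0 keeps the size fixed; limit -1 never stops the growth.
    func chunkRanges(size, limit, total int64) {
    	var offset int64
    	for offset < total {
    		end := offset + size
    		if end > total {
    			end = total
    		}
    		fmt.Printf("%dM-%dM\n", offset, end)
    		offset = end
    		if limit != 0 && (limit < 0 || size < limit) {
    			size *= 2
    			if limit > 0 && size > limit {
    				size = limit
    			}
    		}
    	}
    }

    func main() {
    	// 100M start, 500M limit, 1700M file - matches the doc above:
    	// 0M-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M
    	chunkRanges(100, 500, 1700)
    }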
// ClipBlocks clips the blocks pointed to the OS max func ClipBlocks(b *uint64) { var max uint64 switch runtime.GOOS { case "windows": if runtime.GOARCH == "386" { max = (1 << 32) - 1 } else { max = (1 << 43) - 1 } case "darwin": // OSX FUSE only supports 32 bit number of blocks // https://github.com/osxfuse/osxfuse/issues/396 max = (1 << 32) - 1 default: // no clipping return } if *b > max { *b = max } } // Mount mounts the remote at mountpoint. func Mount(VFS *vfs.VFS, mountpoint string, mount MountFn, opt *Options) error { if opt == nil { opt = &DefaultOpt } // Mount it errChan, unmount, err := mount(VFS, mountpoint, opt) if err != nil { return errors.Wrap(err, "failed to mount FUSE fs") } // Unmount on exit fnHandle := atexit.Register(func() { _ = unmount() _ = sdnotify.Stopping() }) defer atexit.Unregister(fnHandle) // Notify systemd if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket { return errors.Wrap(err, "failed to notify systemd") } // Reload VFS cache on SIGHUP sigHup := make(chan os.Signal, 1) signal.Notify(sigHup, syscall.SIGHUP) waitloop: for { select { // umount triggered outside the app case err = <-errChan: break waitloop // user sent SIGHUP to clear the cache case <-sigHup: root, err := VFS.Root() if err != nil { fs.Errorf(VFS.Fs(), "Error reading root: %v", err) } else { root.ForgetAll() } } } _ = unmount() _ = sdnotify.Stopping() if err != nil { return errors.Wrap(err, "failed to umount FUSE fs") } return nil } rclone-1.53.3/cmd/mountlib/rc.go000066400000000000000000000160571375552240400164320ustar00rootroot00000000000000package mountlib import ( "context" "log" "sort" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/vfs" "github.com/rclone/rclone/vfs/vfscommon" "github.com/rclone/rclone/vfs/vfsflags" ) // MountInfo defines the configuration for a mount type MountInfo struct { unmountFn UnmountFn MountPoint string `json:"MountPoint"` MountedOn time.Time `json:"MountedOn"` Fs string `json:"Fs"` MountOpt *Options VFSOpt *vfscommon.Options } var ( // mutex to protect all the variables in this block mountMu sync.Mutex // Mount functions available mountFns = map[string]MountFn{} // Map of mounted path => MountInfo liveMounts = map[string]MountInfo{} ) // AddRc adds mount and unmount functionality to rc func AddRc(mountUtilName string, mountFunction MountFn) { mountMu.Lock() defer mountMu.Unlock() // rcMount allows the mount command to be run from rc mountFns[mountUtilName] = mountFunction } func init() { rc.Add(rc.Call{ Path: "mount/mount", AuthRequired: true, Fn: mountRc, Title: "Create a new mount point", Help: `rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2 This takes the following parameters - fs - a remote path to be mounted (required) - mountPoint: valid path on the local machine (required) - mountType: One of the values (mount, cmount, mount2) specifies the mount implementation to use - mountOpt: a JSON object with Mount options in. - vfsOpt: a JSON object with VFS options in. Eg rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}' The vfsOpt are as described in options/get and can be seen in the "vfs" section when running and the mountOpt can be seen in the "mount" section. rclone rc options/get `, }) }
// mountRc allows the mount command to be run from rc func mountRc(_ context.Context, in rc.Params) (out rc.Params, err error) { mountPoint, err := in.GetString("mountPoint") if err != nil { return nil, err } vfsOpt := vfsflags.Opt err = in.GetStructMissingOK("vfsOpt", &vfsOpt) if err != nil { return nil, err } mountOpt := Opt err = in.GetStructMissingOK("mountOpt", &mountOpt) if err != nil { return nil, err } mountType, err := in.GetString("mountType") mountMu.Lock() defer mountMu.Unlock() if err != nil || mountType == "" { if mountFns["mount"] != nil { mountType = "mount" } else if mountFns["cmount"] != nil { mountType = "cmount" } else if mountFns["mount2"] != nil { mountType = "mount2" } } // Get Fs.fs to be mounted from fs parameter in the params fdst, err := rc.GetFs(in) if err != nil { return nil, err } if mountFns[mountType] != nil { VFS := vfs.New(fdst, &vfsOpt) _, unmountFn, err := mountFns[mountType](VFS, mountPoint, &mountOpt) if err != nil { log.Printf("mount FAILED: %v", err) return nil, err } // Add mount to list if mount point was successfully created liveMounts[mountPoint] = MountInfo{ unmountFn: unmountFn, MountedOn: time.Now(), Fs: fdst.Name(), MountPoint: mountPoint, VFSOpt: &vfsOpt, MountOpt: &mountOpt, } fs.Debugf(nil, "Mount for %s created at %s using %s", fdst.String(), mountPoint, mountType) return nil, nil } return nil, errors.New("mount type specified is not registered, or is invalid") } func init() { rc.Add(rc.Call{ Path: "mount/unmount", AuthRequired: true, Fn: unMountRc, Title: "Unmount selected active mount", Help: ` rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. This takes the following parameters - mountPoint: valid path on the local machine where the mount was created (required) Eg rclone rc mount/unmount mountPoint=/home/<user>/mountPoint `, }) } // unMountRc allows the umount command to be run from rc func unMountRc(_ context.Context, in rc.Params) (out rc.Params, err error) { mountPoint, err := in.GetString("mountPoint") if err != nil { return nil, err } mountMu.Lock() defer mountMu.Unlock() err = performUnMount(mountPoint) if err != nil { return nil, err } return nil, nil } func init() { rc.Add(rc.Call{ Path: "mount/types", AuthRequired: true, Fn: mountTypesRc, Title: "Show all possible mount types", Help: `This shows all possible mount types and returns them as a list. This takes no parameters and returns - mountTypes: list of mount types The mount types are strings like "mount", "mount2", "cmount" and can be passed to mount/mount as the mountType parameter. Eg rclone rc mount/types `, }) }
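mountRc above is what a remote-control client ultimately reaches. A minimal sketch of invoking it over HTTP, assuming an rclone instance started with the remote control enabled on the default localhost:5572 with no authentication (callRc is a hypothetical helper, and the parameter names follow the mount/mount help text above):

    package main

    import (
    	"bytes"
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // callRc POSTs a JSON body to a running rclone remote control
    // endpoint and decodes the JSON reply.
    func callRc(path string, in map[string]interface{}) (map[string]interface{}, error) {
    	body, err := json.Marshal(in)
    	if err != nil {
    		return nil, err
    	}
    	resp, err := http.Post("http://localhost:5572/"+path, "application/json", bytes.NewReader(body))
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	var out map[string]interface{}
    	err = json.NewDecoder(resp.Body).Decode(&out)
    	return out, err
    }

    func main() {
    	out, err := callRc("mount/mount", map[string]interface{}{
    		"fs":         "mydrive:",
    		"mountPoint": "/mnt/tmp",
    		"vfsOpt":     map[string]interface{}{"CacheMode": 2},
    	})
    	fmt.Println(out, err)
    }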
// mountTypesRc returns a list of available mount types. func mountTypesRc(_ context.Context, in rc.Params) (out rc.Params, err error) { var mountTypes = []string{} mountMu.Lock() defer mountMu.Unlock() for mountType := range mountFns { mountTypes = append(mountTypes, mountType) } sort.Strings(mountTypes) return rc.Params{ "mountTypes": mountTypes, }, nil } func init() { rc.Add(rc.Call{ Path: "mount/listmounts", AuthRequired: true, Fn: listMountsRc, Title: "Show current mount points", Help: `This shows currently mounted points, which can be used for performing an unmount. This takes no parameters and returns - mountPoints: list of current mount points Eg rclone rc mount/listmounts `, }) } // listMountsRc returns a list of current mounts func listMountsRc(_ context.Context, in rc.Params) (out rc.Params, err error) { var mountTypes = []MountInfo{} mountMu.Lock() defer mountMu.Unlock() for _, a := range liveMounts { mountTypes = append(mountTypes, a) } return rc.Params{ "mountPoints": mountTypes, }, nil } func init() { rc.Add(rc.Call{ Path: "mount/unmountall", AuthRequired: true, Fn: unmountAll, Title: "Unmount all active mounts", Help: `This unmounts all active mount points. It takes no parameters and returns an error if any unmount does not succeed. Eg rclone rc mount/unmountall `, }) } // unmountAll unmounts all the created mounts func unmountAll(_ context.Context, in rc.Params) (out rc.Params, err error) { mountMu.Lock() defer mountMu.Unlock() for key, mountInfo := range liveMounts { err = performUnMount(mountInfo.MountPoint) if err != nil { fs.Debugf(nil, "Couldn't unmount: %s", key) return nil, err } } return nil, nil } // performUnMount unmounts the specified mountPoint func performUnMount(mountPoint string) (err error) { mountInfo, ok := liveMounts[mountPoint] if ok { err := mountInfo.unmountFn() if err != nil { return err } delete(liveMounts, mountPoint) } else { return errors.New("mount not found") } return nil } rclone-1.53.3/cmd/mountlib/rc_test.go000066400000000000000000000051321375552240400174630ustar00rootroot00000000000000package mountlib_test import ( "context" "io/ioutil" "os" "path/filepath" "runtime" "testing" "time" _ "github.com/rclone/rclone/backend/local" _ "github.com/rclone/rclone/cmd/cmount" _ "github.com/rclone/rclone/cmd/mount" _ "github.com/rclone/rclone/cmd/mount2" "github.com/rclone/rclone/fs/rc" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestRc(t *testing.T) { ctx := context.Background() mount := rc.Calls.Get("mount/mount") assert.NotNil(t, mount) unmount := rc.Calls.Get("mount/unmount") assert.NotNil(t, unmount) getMountTypes := rc.Calls.Get("mount/types") assert.NotNil(t, getMountTypes) localDir, err := ioutil.TempDir("", "rclone-mountlib-localDir") require.NoError(t, err) defer func() { _ = os.RemoveAll(localDir) }() err = ioutil.WriteFile(filepath.Join(localDir, "file.txt"), []byte("hello"), 0666) require.NoError(t, err) mountPoint, err := ioutil.TempDir("", "rclone-mountlib-mountPoint") require.NoError(t, err) if runtime.GOOS == "windows" { // Windows requires the mount point not to exist require.NoError(t, os.RemoveAll(mountPoint)) } else { defer func() { _ = os.RemoveAll(mountPoint) }() } out, err := getMountTypes.Fn(ctx, nil) require.NoError(t, err) var mountTypes []string err = out.GetStruct("mountTypes", &mountTypes) require.NoError(t, err) t.Logf("Mount types %v", mountTypes) t.Run("Errors", func(t *testing.T) { _, err := mount.Fn(ctx, rc.Params{}) assert.Error(t, err) _, err = mount.Fn(ctx, rc.Params{"fs": "/tmp"})
assert.Error(t, err) _, err = mount.Fn(ctx, rc.Params{"mountPoint": "/tmp"}) assert.Error(t, err) }) t.Run("Mount", func(t *testing.T) { if len(mountTypes) == 0 { t.Skip("Can't mount") } in := rc.Params{ "fs": localDir, "mountPoint": mountPoint, "vfsOpt": rc.Params{ "FilePerms": 0400, }, } // check file.txt is not there filePath := filepath.Join(mountPoint, "file.txt") _, err := os.Stat(filePath) require.Error(t, err) require.True(t, os.IsNotExist(err)) // mount _, err = mount.Fn(ctx, in) if err != nil { t.Skipf("Mount failed - skipping test: %v", err) } // check file.txt is there now fi, err := os.Stat(filePath) require.NoError(t, err) assert.Equal(t, int64(5), fi.Size()) if runtime.GOOS == "linux" { assert.Equal(t, os.FileMode(0400), fi.Mode()) } // FIXME the OS sometimes appears to be using the mount // immediately after it appears so wait a moment time.Sleep(100 * time.Millisecond) t.Run("Unmount", func(t *testing.T) { _, err := unmount.Fn(ctx, in) require.NoError(t, err) }) }) } rclone-1.53.3/cmd/move/000077500000000000000000000000001375552240400146055ustar00rootroot00000000000000rclone-1.53.3/cmd/move/move.go000066400000000000000000000047051375552240400161100ustar00rootroot00000000000000package move import ( "context" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/sync" "github.com/spf13/cobra" ) // Globals var ( deleteEmptySrcDirs = false createEmptySrcDirs = false ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &deleteEmptySrcDirs, "delete-empty-src-dirs", "", deleteEmptySrcDirs, "Delete empty source dirs after move") flags.BoolVarP(cmdFlags, &createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after move") } var commandDefinition = &cobra.Command{ Use: "move source:path dest:path", Short: `Move files from source to dest.`, Long: ` Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server side directory move operation. If no filters are in use and if possible this will server side move ` + "`source:path`" + ` into ` + "`dest:path`" + `. After this ` + "`source:path`" + ` will no longer exist. Otherwise for each file in ` + "`source:path`" + ` selected by the filters (if any) this will move it into ` + "`dest:path`" + `. If possible a server side move will be used, otherwise it will copy it (server side if possible) into ` + "`dest:path`" + ` then delete the original (if no errors on copy) in ` + "`source:path`" + `. If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag. See the [--no-traverse](/docs/#no-traverse) option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly. **Important**: Since this can cause data loss, test first with the ` + "`--dry-run` or the `--interactive`/`-i`" + ` flag. **Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics. 
`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(2, 2, command, args) fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args) cmd.Run(true, true, command, func() error { if srcFileName == "" { return sync.MoveDir(context.Background(), fdst, fsrc, deleteEmptySrcDirs, createEmptySrcDirs) } return operations.MoveFile(context.Background(), fdst, fsrc, srcFileName, srcFileName) }) }, } rclone-1.53.3/cmd/moveto/000077500000000000000000000000001375552240400151505ustar00rootroot00000000000000rclone-1.53.3/cmd/moveto/moveto.go000066400000000000000000000033611375552240400170130ustar00rootroot00000000000000package moveto import ( "context" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/sync" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "moveto source:path dest:path", Short: `Move file or directory from source to dest.`, Long: ` If source:path is a file or directory then it moves it to a file or directory named dest:path. This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command. So rclone moveto src dst where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows. This will: if src is file move it to dst, overwriting an existing file if it exists if src is directory move it to dst, overwriting existing files if they exist see move command for full details This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer. **Important**: Since this can cause data loss, test first with the ` + "`--dry-run` or the `--interactive`/`-i`" + ` flag. **Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics. `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(2, 2, command, args) fsrc, srcFileName, fdst, dstFileName := cmd.NewFsSrcDstFiles(args) cmd.Run(true, true, command, func() error { if srcFileName == "" { return sync.MoveDir(context.Background(), fdst, fsrc, false, false) } return operations.MoveFile(context.Background(), fdst, fsrc, dstFileName, srcFileName) }) }, } rclone-1.53.3/cmd/ncdu/000077500000000000000000000000001375552240400145705ustar00rootroot00000000000000rclone-1.53.3/cmd/ncdu/ncdu.go000066400000000000000000000422431375552240400160550ustar00rootroot00000000000000// Package ncdu implements a text based user interface for exploring a remote //+build !plan9,!solaris,!js package ncdu import ( "context" "fmt" "path" "reflect" "sort" "strings" "github.com/atotto/clipboard" runewidth "github.com/mattn/go-runewidth" termbox "github.com/nsf/termbox-go" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/ncdu/scan" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "ncdu remote:path", Short: `Explore a remote with a text based user interface.`, Long: ` This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?". {{< asciinema 157793 >}} To make the user interface it first scans the entire remote given and builds an in memory representation. 
rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along. Here are the keys - press '?' to toggle the help on and off ` + strings.Join(helpText()[1:], "\n ") + ` This is an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for rclone remotes. It is missing lots of features at the moment but is useful as it stands. Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously. `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { return NewUI(fsrc).Show() }) }, } // helpText returns help text for ncdu func helpText() (tr []string) { tr = []string{ "rclone ncdu", " ↑,↓ or k,j to Move", " →,l to enter", " ←,h to return", " c toggle counts", " g toggle graph", " n,s,C sort by name,size,count", " d delete file/directory", } if !clipboard.Unsupported { tr = append(tr, " y copy current path to clipboard") } tr = append(tr, []string{ " Y display current path", " ^L refresh screen", " ? to toggle help on and off", " q/ESC/c-C to quit", }...) return } // UI contains the state of the user interface type UI struct { f fs.Fs // fs being displayed fsName string // human name of Fs root *scan.Dir // root directory d *scan.Dir // current directory being displayed path string // path of current directory showBox bool // whether to show a box boxText []string // text to show in box boxMenu []string // box menu options boxMenuButton int boxMenuHandler func(fs fs.Fs, path string, option int) (string, error) entries fs.DirEntries // entries of current directory sortPerm []int // order to display entries in after sorting invSortPerm []int // inverse order dirListHeight int // height of listing listing bool // whether listing is in progress showGraph bool // toggle showing graph showCounts bool // toggle showing counts sortByName int8 // +1 for normal, 0 for off, -1 for reverse sortBySize int8 sortByCount int8 dirPosMap map[string]dirPos // store for directory positions } // Where we have got to in the directory listing type dirPos struct { entry int offset int } // Print a string func Print(x, y int, fg, bg termbox.Attribute, msg string) { for _, c := range msg { termbox.SetCell(x, y, c, fg, bg) x++ } } // Printf a string func Printf(x, y int, fg, bg termbox.Attribute, format string, args ...interface{}) { s := fmt.Sprintf(format, args...) Print(x, y, fg, bg, s) } // Line prints a string to given xmax, with given space func Line(x, y, xmax int, fg, bg termbox.Attribute, spacer rune, msg string) { for _, c := range msg { termbox.SetCell(x, y, c, fg, bg) x += runewidth.RuneWidth(c) if x >= xmax { return } } for ; x < xmax; x++ { termbox.SetCell(x, y, spacer, fg, bg) } } // Linef a string func Linef(x, y, xmax int, fg, bg termbox.Attribute, spacer rune, format string, args ...interface{}) { s := fmt.Sprintf(format, args...)
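Line above advances x by runewidth.RuneWidth(c) rather than by one cell per rune, which is what keeps double-width (for example CJK) characters from corrupting the layout. A tiny demonstration of the widths involved:

    package main

    import (
    	"fmt"

    	runewidth "github.com/mattn/go-runewidth"
    )

    func main() {
    	// ASCII runes occupy one terminal cell, CJK runes two.
    	for _, c := range "ab界c" {
    		fmt.Printf("%c -> %d cell(s)\n", c, runewidth.RuneWidth(c))
    	}
    	// StringWidth sums the per-rune widths.
    	fmt.Println(runewidth.StringWidth("ab界c")) // 5
    }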
Line(x, y, xmax, fg, bg, spacer, s) } // LineOptions Print line of selectable options func LineOptions(x, y, xmax int, fg, bg termbox.Attribute, options []string, selected int) { defaultBg := bg defaultFg := fg // Print left+right whitespace to center the options xoffset := ((xmax - x) - lineOptionLength(options)) / 2 for j := x; j < x+xoffset; j++ { termbox.SetCell(j, y, ' ', fg, bg) } for j := xmax - xoffset; j < xmax; j++ { termbox.SetCell(j, y, ' ', fg, bg) } x += xoffset for i, o := range options { termbox.SetCell(x, y, ' ', fg, bg) if i == selected { bg = termbox.ColorBlack fg = termbox.ColorWhite } termbox.SetCell(x+1, y, '<', fg, bg) x += 2 // print option text for _, c := range o { termbox.SetCell(x, y, c, fg, bg) x++ } termbox.SetCell(x, y, '>', fg, bg) bg = defaultBg fg = defaultFg termbox.SetCell(x+1, y, ' ', fg, bg) x += 2 } } func lineOptionLength(o []string) int { count := 0 for _, i := range o { count += len(i) } return count + 4*len(o) // spacer and arrows } // Box the u.boxText onto the screen func (u *UI) Box() { w, h := termbox.Size() // Find dimensions of text boxWidth := 10 for _, s := range u.boxText { if len(s) > boxWidth && len(s) < w-4 { boxWidth = len(s) } } boxHeight := len(u.boxText) // position x := (w - boxWidth) / 2 y := (h - boxHeight) / 2 xmax := x + boxWidth if len(u.boxMenu) != 0 { count := lineOptionLength(u.boxMenu) if x+boxWidth > x+count { xmax = x + boxWidth } else { xmax = x + count } } ymax := y + len(u.boxText) // draw text fg, bg := termbox.ColorRed, termbox.ColorWhite for i, s := range u.boxText { Line(x, y+i, xmax, fg, bg, ' ', s) fg = termbox.ColorBlack } if len(u.boxMenu) != 0 { ymax++ LineOptions(x, ymax-1, xmax, fg, bg, u.boxMenu, u.boxMenuButton) } // draw top border for i := y; i < ymax; i++ { termbox.SetCell(x-1, i, '│', fg, bg) termbox.SetCell(xmax, i, '│', fg, bg) } for j := x; j < xmax; j++ { termbox.SetCell(j, y-1, '─', fg, bg) termbox.SetCell(j, ymax, '─', fg, bg) } termbox.SetCell(x-1, y-1, '┌', fg, bg) termbox.SetCell(xmax, y-1, '┐', fg, bg) termbox.SetCell(x-1, ymax, '└', fg, bg) termbox.SetCell(xmax, ymax, '┘', fg, bg) } func (u *UI) moveBox(to int) { if len(u.boxMenu) == 0 { return } if to > 0 { // move right u.boxMenuButton++ } else { // move left u.boxMenuButton-- } if u.boxMenuButton >= len(u.boxMenu) { u.boxMenuButton = len(u.boxMenu) - 1 } else if u.boxMenuButton < 0 { u.boxMenuButton = 0 } } // find the biggest entry in the current listing func (u *UI) biggestEntry() (biggest int64) { if u.d == nil { return } for i := range u.entries { size, _, _, _ := u.d.AttrI(u.sortPerm[i]) if size > biggest { biggest = size } } return } // Draw the current screen func (u *UI) Draw() error { w, h := termbox.Size() u.dirListHeight = h - 3 // Plot err := termbox.Clear(termbox.ColorDefault, termbox.ColorDefault) if err != nil { return errors.Wrap(err, "failed to clear screen") } // Header line Linef(0, 0, w, termbox.ColorBlack, termbox.ColorWhite, ' ', "rclone ncdu %s - use the arrow keys to navigate, press ? 
for help", fs.Version) // Directory line Linef(0, 1, w, termbox.ColorWhite, termbox.ColorBlack, '-', "-- %s ", u.path) // graphs const ( graphBars = 10 graph = "########## " ) // Directory listing if u.d != nil { y := 2 perBar := u.biggestEntry() / graphBars if perBar == 0 { perBar = 1 } dirPos := u.dirPosMap[u.path] for i, j := range u.sortPerm[dirPos.offset:] { entry := u.entries[j] n := i + dirPos.offset if y >= h-1 { break } fg := termbox.ColorWhite bg := termbox.ColorBlack if n == dirPos.entry { fg, bg = bg, fg } size, count, isDir, readable := u.d.AttrI(u.sortPerm[n]) mark := ' ' if isDir { mark = '/' } message := "" if !readable { message = " [not read yet]" } extras := "" if u.showCounts { if count > 0 { extras += fmt.Sprintf("%8v ", fs.SizeSuffix(count)) } else { extras += " " } } if u.showGraph { bars := (size + perBar/2 - 1) / perBar // clip if necessary - only happens during startup if bars > 10 { bars = 10 } else if bars < 0 { bars = 0 } extras += "[" + graph[graphBars-bars:2*graphBars-bars] + "] " } Linef(0, y, w, fg, bg, ' ', "%8v %s%c%s%s", fs.SizeSuffix(size), extras, mark, path.Base(entry.Remote()), message) y++ } } // Footer if u.d == nil { Line(0, h-1, w, termbox.ColorBlack, termbox.ColorWhite, ' ', "Waiting for root directory...") } else { message := "" if u.listing { message = " [listing in progress]" } size, count := u.d.Attr() Linef(0, h-1, w, termbox.ColorBlack, termbox.ColorWhite, ' ', "Total usage: %v, Objects: %d%s", fs.SizeSuffix(size), count, message) } // Show the box on top if required if u.showBox { u.Box() } err = termbox.Flush() if err != nil { return errors.Wrap(err, "failed to flush screen") } return nil } // Move the cursor this many spaces adjusting the viewport as necessary func (u *UI) move(d int) { if u.d == nil { return } absD := d if d < 0 { absD = -d } entries := len(u.entries) // Fetch current dirPos dirPos := u.dirPosMap[u.path] dirPos.entry += d // check entry in range if dirPos.entry < 0 { dirPos.entry = 0 } else if dirPos.entry >= entries { dirPos.entry = entries - 1 } // check cursor still on screen p := dirPos.entry - dirPos.offset // where dirPos.entry appears on the screen if p < 0 { dirPos.offset -= absD } else if p >= u.dirListHeight { dirPos.offset += absD } // check dirPos.offset in bounds if entries == 0 || dirPos.offset < 0 { dirPos.offset = 0 } else if dirPos.offset >= entries { dirPos.offset = entries - 1 } // write dirPos back for later u.dirPosMap[u.path] = dirPos } func (u *UI) removeEntry(pos int) { u.d.Remove(pos) u.setCurrentDir(u.d) } // delete the entry at the current position func (u *UI) delete() { ctx := context.Background() dirPos := u.sortPerm[u.dirPosMap[u.path].entry] entry := u.entries[dirPos] u.boxMenu = []string{"cancel", "confirm"} if obj, isFile := entry.(fs.Object); isFile { u.boxMenuHandler = func(f fs.Fs, p string, o int) (string, error) { if o != 1 { return "Aborted!", nil } err := operations.DeleteFile(ctx, obj) if err != nil { return "", err } u.removeEntry(dirPos) return "Successfully deleted file!", nil } u.popupBox([]string{ "Delete this file?", u.fsName + entry.String()}) } else { u.boxMenuHandler = func(f fs.Fs, p string, o int) (string, error) { if o != 1 { return "Aborted!", nil } err := operations.Purge(ctx, f, entry.String()) if err != nil { return "", err } u.removeEntry(dirPos) return "Successfully purged folder!", nil } u.popupBox([]string{ "Purge this directory?", "ALL files in it will be deleted", u.fsName + entry.String()}) } } func (u *UI) displayPath() { u.togglePopupBox([]string{ 
"Current Path", u.path, }) } func (u *UI) copyPath() { if !clipboard.Unsupported { _ = clipboard.WriteAll(u.path) } } // Sort by the configured sort method type ncduSort struct { sortPerm []int entries fs.DirEntries d *scan.Dir u *UI } // Less is part of sort.Interface. func (ds *ncduSort) Less(i, j int) bool { isize, icount, _, _ := ds.d.AttrI(ds.sortPerm[i]) jsize, jcount, _, _ := ds.d.AttrI(ds.sortPerm[j]) iname, jname := ds.entries[ds.sortPerm[i]].Remote(), ds.entries[ds.sortPerm[j]].Remote() switch { case ds.u.sortByName < 0: return iname > jname case ds.u.sortByName > 0: break case ds.u.sortBySize < 0: if isize != jsize { return isize < jsize } case ds.u.sortBySize > 0: if isize != jsize { return isize > jsize } case ds.u.sortByCount < 0: if icount != jcount { return icount < jcount } case ds.u.sortByCount > 0: if icount != jcount { return icount > jcount } } // if everything equal, sort by name return iname < jname } // Swap is part of sort.Interface. func (ds *ncduSort) Swap(i, j int) { ds.sortPerm[i], ds.sortPerm[j] = ds.sortPerm[j], ds.sortPerm[i] } // Len is part of sort.Interface. func (ds *ncduSort) Len() int { return len(ds.sortPerm) } // sort the permutation map of the current directory func (u *UI) sortCurrentDir() { u.sortPerm = u.sortPerm[:0] for i := range u.entries { u.sortPerm = append(u.sortPerm, i) } data := ncduSort{ sortPerm: u.sortPerm, entries: u.entries, d: u.d, u: u, } sort.Sort(&data) if len(u.invSortPerm) < len(u.sortPerm) { u.invSortPerm = make([]int, len(u.sortPerm)) } for i, j := range u.sortPerm { u.invSortPerm[j] = i } } // setCurrentDir sets the current directory func (u *UI) setCurrentDir(d *scan.Dir) { u.d = d u.entries = d.Entries() u.path = path.Join(u.fsName, d.Path()) u.sortCurrentDir() } // enters the current entry func (u *UI) enter() { if u.d == nil || len(u.entries) == 0 { return } dirPos := u.dirPosMap[u.path] d, _ := u.d.GetDir(u.sortPerm[dirPos.entry]) if d == nil { return } u.setCurrentDir(d) } // handles a box option that was selected func (u *UI) handleBoxOption() { msg, err := u.boxMenuHandler(u.f, u.path, u.boxMenuButton) // reset u.boxMenuButton = 0 u.boxMenu = []string{} u.boxMenuHandler = nil if err != nil { u.popupBox([]string{ "error:", err.Error(), }) return } u.popupBox([]string{"Finished:", msg}) } // up goes up to the parent directory func (u *UI) up() { if u.d == nil { return } parent := u.d.Parent() if parent != nil { u.setCurrentDir(parent) } } // popupBox shows a box with the text in func (u *UI) popupBox(text []string) { u.boxText = text u.showBox = true } // togglePopupBox shows a box with the text in func (u *UI) togglePopupBox(text []string) { if u.showBox && reflect.DeepEqual(u.boxText, text) { u.showBox = false } else { u.popupBox(text) } } // toggle the sorting for the flag passed in func (u *UI) toggleSort(sortType *int8) { old := *sortType u.sortBySize = 0 u.sortByCount = 0 u.sortByName = 0 if old == 0 { *sortType = 1 } else { *sortType = -old } u.sortCurrentDir() } // NewUI creates a new user interface for ncdu on f func NewUI(f fs.Fs) *UI { return &UI{ f: f, path: "Waiting for root...", dirListHeight: 20, // updated in Draw fsName: f.Name() + ":" + f.Root(), showGraph: true, showCounts: false, sortByName: 0, // +1 for normal, 0 for off, -1 for reverse sortBySize: 1, sortByCount: 0, dirPosMap: make(map[string]dirPos), } } // Show shows the user interface func (u *UI) Show() error { err := termbox.Init() if err != nil { return errors.Wrap(err, "termbox init") } defer termbox.Close() // scan the disk in the 
background u.listing = true rootChan, errChan, updated := scan.Scan(context.Background(), u.f) // Poll the events into a channel events := make(chan termbox.Event) doneWithEvent := make(chan bool) go func() { for { events <- termbox.PollEvent() <-doneWithEvent } }() // Main loop, waiting for events and channels outer: for { //Reset() err := u.Draw() if err != nil { return errors.Wrap(err, "draw failed") } var root *scan.Dir select { case root = <-rootChan: u.root = root u.setCurrentDir(root) case err := <-errChan: if err != nil { return errors.Wrap(err, "ncdu directory listing") } u.listing = false case <-updated: // redraw // might want to limit updates per second u.sortCurrentDir() case ev := <-events: doneWithEvent <- true if ev.Type == termbox.EventKey { switch ev.Key + termbox.Key(ev.Ch) { case termbox.KeyEsc, termbox.KeyCtrlC, 'q': if u.showBox { u.showBox = false } else { break outer } case termbox.KeyArrowDown, 'j': u.move(1) case termbox.KeyArrowUp, 'k': u.move(-1) case termbox.KeyPgdn, '-', '_': u.move(u.dirListHeight) case termbox.KeyPgup, '=', '+': u.move(-u.dirListHeight) case termbox.KeyArrowLeft, 'h': if u.showBox { u.moveBox(-1) break } u.up() case termbox.KeyEnter: if len(u.boxMenu) > 0 { u.handleBoxOption() break } u.enter() case termbox.KeyArrowRight, 'l': if u.showBox { u.moveBox(1) break } u.enter() case 'c': u.showCounts = !u.showCounts case 'g': u.showGraph = !u.showGraph case 'n': u.toggleSort(&u.sortByName) case 's': u.toggleSort(&u.sortBySize) case 'C': u.toggleSort(&u.sortByCount) case 'y': u.copyPath() case 'Y': u.displayPath() case 'd': u.delete() case '?': u.togglePopupBox(helpText()) // Refresh the screen. Not obvious what key to map // this onto, but ^L is a common choice. case termbox.KeyCtrlL: err := termbox.Sync() if err != nil { fs.Errorf(nil, "termbox sync returned error: %v", err) } } } } // listen to key presses, etc } return nil } rclone-1.53.3/cmd/ncdu/ncdu_unsupported.go000066400000000000000000000002261375552240400205200ustar00rootroot00000000000000// Build for ncdu for unsupported platforms to stop go complaining // about "no buildable Go source files " // +build plan9 solaris js package ncdu rclone-1.53.3/cmd/ncdu/scan/000077500000000000000000000000001375552240400155145ustar00rootroot00000000000000rclone-1.53.3/cmd/ncdu/scan/scan.go000066400000000000000000000112201375552240400167630ustar00rootroot00000000000000// Package scan does concurrent scanning of an Fs building up a directory tree. 
package scan import ( "context" "path" "sync" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/walk" ) // Dir represents a directory found in the remote type Dir struct { parent *Dir path string mu sync.Mutex count int64 size int64 entries fs.DirEntries dirs map[string]*Dir } // Parent returns the directory above this one func (d *Dir) Parent() *Dir { // no locking needed since these are write once in newDir() return d.parent } // Path returns the position of the dir in the filesystem func (d *Dir) Path() string { // no locking needed since these are write once in newDir() return d.path } // make a new directory func newDir(parent *Dir, dirPath string, entries fs.DirEntries) *Dir { d := &Dir{ parent: parent, path: dirPath, entries: entries, dirs: make(map[string]*Dir), } // Count size in this dir for _, entry := range entries { if o, ok := entry.(fs.Object); ok { d.count++ d.size += o.Size() } } // Set my directory entry in parent if parent != nil { parent.mu.Lock() leaf := path.Base(dirPath) d.parent.dirs[leaf] = d parent.mu.Unlock() } // Accumulate counts in parents for ; parent != nil; parent = parent.parent { parent.mu.Lock() parent.count += d.count parent.size += d.size parent.mu.Unlock() } return d } // Entries returns a copy of the entries in the directory func (d *Dir) Entries() fs.DirEntries { return append(fs.DirEntries(nil), d.entries...) } // Remove removes the i-th entry from the // in-memory representation of the remote directory func (d *Dir) Remove(i int) { d.mu.Lock() defer d.mu.Unlock() d.remove(i) } // removes the i-th entry from the // in-memory representation of the remote directory // // Call with d.mu held func (d *Dir) remove(i int) { size := d.entries[i].Size() count := int64(1) subDir, ok := d.getDir(i) if ok { size = subDir.size count = subDir.count delete(d.dirs, path.Base(subDir.path)) } d.size -= size d.count -= count d.entries = append(d.entries[:i], d.entries[i+1:]...) 
dir := d // propagate the changed size and count to parent(s) for parent := d.parent; parent != nil; parent = parent.parent { parent.mu.Lock() parent.dirs[path.Base(dir.path)] = dir parent.size -= size parent.count -= count dir = parent parent.mu.Unlock() } } // gets the directory of the i-th entry // // returns nil if it is a file // returns a flag as to whether it is a directory or not // // Call with d.mu held func (d *Dir) getDir(i int) (subDir *Dir, isDir bool) { obj := d.entries[i] dir, ok := obj.(fs.Directory) if !ok { return nil, false } leaf := path.Base(dir.Remote()) subDir = d.dirs[leaf] return subDir, true } // GetDir returns the Dir of the i-th entry // // returns nil if it is a file // returns a flag as to whether it is a directory or not func (d *Dir) GetDir(i int) (subDir *Dir, isDir bool) { d.mu.Lock() defer d.mu.Unlock() return d.getDir(i) } // Attr returns the size and count for the directory func (d *Dir) Attr() (size int64, count int64) { d.mu.Lock() defer d.mu.Unlock() return d.size, d.count } // AttrI returns the size, count and flags for the i-th directory entry func (d *Dir) AttrI(i int) (size int64, count int64, isDir bool, readable bool) { d.mu.Lock() defer d.mu.Unlock() subDir, isDir := d.getDir(i) if !isDir { return d.entries[i].Size(), 0, false, true } if subDir == nil { return 0, 0, true, false } size, count = subDir.Attr() return size, count, true, true } // Scan the Fs passed in, returning a root directory channel, an // error channel and an update channel func Scan(ctx context.Context, f fs.Fs) (chan *Dir, chan error, chan struct{}) { root := make(chan *Dir, 1) errChan := make(chan error, 1) updated := make(chan struct{}, 1) go func() { parents := map[string]*Dir{} err := walk.Walk(ctx, f, "", false, fs.Config.MaxDepth, func(dirPath string, entries fs.DirEntries, err error) error { if err != nil { return err // FIXME mark directory as errored instead of aborting } var parent *Dir if dirPath != "" { parentPath := path.Dir(dirPath) if parentPath == "." { parentPath = "" } var ok bool parent, ok = parents[parentPath] if !ok { errChan <- errors.Errorf("couldn't find parent for %q", dirPath) } } d := newDir(parent, dirPath, entries) parents[dirPath] = d if dirPath == "" { root <- d } // Mark updated select { case updated <- struct{}{}: default: break } return nil }) if err != nil { errChan <- errors.Wrap(err, "ncdu listing failed") } errChan <- nil }() return root, errChan, updated } rclone-1.53.3/cmd/obscure/000077500000000000000000000000001375552240400153015ustar00rootroot00000000000000rclone-1.53.3/cmd/obscure/obscure.go000066400000000000000000000032461375552240400172770ustar00rootroot00000000000000package obscure import ( "fmt" "io/ioutil" "os" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config/obscure" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "obscure password", Short: `Obscure password for use in the rclone config file.`, Long: `In the rclone config file, human readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is **not** a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident. Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.
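Stepping back to the scan package above: Scan hands back three channels - the root Dir (sent once), an error channel (a nil send signals completion) and an update notifier. A minimal consumer mirroring what the ncdu UI loop does, assuming fs.NewFs still takes just a path in this rclone version:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	_ "github.com/rclone/rclone/backend/local"
    	"github.com/rclone/rclone/cmd/ncdu/scan"
    	"github.com/rclone/rclone/fs"
    )

    // consume drains scan.Scan's channel protocol: the root *scan.Dir
    // arrives once, `updated` fires as directories are added, and a send
    // on the error channel (nil on success) marks the end of the walk.
    func consume(f fs.Fs) error {
    	rootChan, errChan, updated := scan.Scan(context.Background(), f)
    	var root *scan.Dir
    	for {
    		select {
    		case root = <-rootChan:
    			fmt.Println("root received:", root.Path())
    		case <-updated:
    			if root != nil {
    				size, count := root.Attr()
    				fmt.Printf("so far: %d bytes in %d objects\n", size, count)
    			}
    		case err := <-errChan:
    			return err // nil means the scan completed cleanly
    		}
    	}
    }

    func main() {
    	f, err := fs.NewFs("/tmp")
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := consume(f); err != nil {
    		log.Fatal(err)
    	}
    }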
This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. Example: echo "secretpassword" | rclone obscure - If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself. If you want to encrypt the config file then please use config file encryption - see [rclone config](/commands/rclone_config/) for more info.`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) var password string fi, _ := os.Stdin.Stat() if args[0] == "-" && (fi.Mode()&os.ModeCharDevice) == 0 { bytes, _ := ioutil.ReadAll(os.Stdin) password = string(bytes) } else { password = args[0] } cmd.Run(false, false, command, func() error { obscured := obscure.MustObscure(password) fmt.Println(obscured) return nil }) }, } rclone-1.53.3/cmd/progress.go000066400000000000000000000045721375552240400160420ustar00rootroot00000000000000// Show the dynamic progress bar package cmd import ( "bytes" "fmt" "strings" "sync" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/lib/terminal" ) const ( // interval between progress prints defaultProgressInterval = 500 * time.Millisecond // time format for logging logTimeFormat = "2006-01-02 15:04:05" ) // startProgress starts the progress bar printing // // It returns a func which should be called to stop the stats. func startProgress() func() { stopStats := make(chan struct{}) oldLogPrint := fs.LogPrint if !log.Redirected() { // Intercept the log calls if not logging to file or syslog fs.LogPrint = func(level fs.LogLevel, text string) { printProgress(fmt.Sprintf("%s %-6s: %s", time.Now().Format(logTimeFormat), level, text)) } } var wg sync.WaitGroup wg.Add(1) go func() { defer wg.Done() progressInterval := defaultProgressInterval if ShowStats() && *statsInterval > 0 { progressInterval = *statsInterval } ticker := time.NewTicker(progressInterval) for { select { case <-ticker.C: printProgress("") case <-stopStats: ticker.Stop() printProgress("") fs.LogPrint = oldLogPrint fmt.Println("") return } } }() return func() { close(stopStats) wg.Wait() } } // state for the progress printing var ( nlines = 0 // number of lines in the previous stats block progressMu sync.Mutex ) // printProgress prints the progress with an optional log func printProgress(logMessage string) { progressMu.Lock() defer progressMu.Unlock() var buf bytes.Buffer w, _ := terminal.GetSize() stats := strings.TrimSpace(accounting.GlobalStats().String()) logMessage = strings.TrimSpace(logMessage) out := func(s string) { buf.WriteString(s) } if logMessage != "" { out("\n") out(terminal.MoveUp) } // Move to the start of the block we wrote erasing all the previous lines for i := 0; i < nlines-1; i++ { out(terminal.EraseLine) out(terminal.MoveUp) } out(terminal.EraseLine) out(terminal.MoveToStartOfLine) if logMessage != "" { out(terminal.EraseLine) out(logMessage + "\n") } fixedLines := strings.Split(stats, "\n") nlines = len(fixedLines) for i, line := range fixedLines { if len(line) > w { line = line[:w] } out(line) if i != nlines-1 { out("\n") } } terminal.Write(buf.Bytes()) } rclone-1.53.3/cmd/purge/000077500000000000000000000000001375552240400147615ustar00rootroot00000000000000rclone-1.53.3/cmd/purge/purge.go000066400000000000000000000015701375552240400164350ustar00rootroot00000000000000package purge import ( "context" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { 
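printProgress in cmd/progress.go above repaints a multi-line stats block in place by emitting cursor-movement and line-erase escape codes (wrapped by rclone's lib/terminal helpers). The underlying technique, reduced to raw ANSI sequences in a self-contained sketch:

    package main

    import (
    	"fmt"
    	"time"
    )

    const (
    	moveUp    = "\x1b[1A" // cursor up one line
    	eraseLine = "\x1b[2K" // erase the entire line
    	toStart   = "\r"      // carriage return to column 0
    )

    func main() {
    	// Draw a two-line block, then repaint it in place three times.
    	fmt.Print("transferred: 0\nelapsed: 0s\n")
    	for i := 1; i <= 3; i++ {
    		time.Sleep(500 * time.Millisecond)
    		// Move back over the 2 lines we printed and erase them.
    		fmt.Print(moveUp + eraseLine + moveUp + eraseLine + toStart)
    		fmt.Printf("transferred: %d\nelapsed: %dms\n", i*100, i*500)
    	}
    }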
cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "purge remote:path", Short: `Remove the path and all of its contents.`, Long: ` Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use ` + "`" + `delete` + "`" + ` if you want to selectively delete files. **Important**: Since this can cause data loss, test first with the ` + "`--dry-run` or the `--interactive`/`-i`" + ` flag. `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fdst := cmd.NewFsDir(args) cmd.Run(true, false, command, func() error { return operations.Purge(context.Background(), fdst, "") }) }, } rclone-1.53.3/cmd/rc/000077500000000000000000000000001375552240400142435ustar00rootroot00000000000000rclone-1.53.3/cmd/rc/rc.go000066400000000000000000000205661375552240400152060ustar00rootroot00000000000000package rc import ( "bytes" "context" "encoding/json" "fmt" "io/ioutil" "net/http" "os" "strings" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/rc" "github.com/spf13/cobra" "github.com/spf13/pflag" ) var ( noOutput = false url = "http://localhost:5572/" jsonInput = "" authUser = "" authPass = "" loopback = false options []string arguments []string ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &noOutput, "no-output", "", noOutput, "If set don't output the JSON result.") flags.StringVarP(cmdFlags, &url, "url", "", url, "URL to connect to rclone remote control.") flags.StringVarP(cmdFlags, &jsonInput, "json", "", jsonInput, "Input JSON - use instead of key=value args.") flags.StringVarP(cmdFlags, &authUser, "user", "", "", "Username to use to connect to rclone remote control.") flags.StringVarP(cmdFlags, &authPass, "pass", "", "", "Password to use to connect to rclone remote control.") flags.BoolVarP(cmdFlags, &loopback, "loopback", "", false, "If set connect to this rclone instance not via HTTP.") flags.StringArrayVarP(cmdFlags, &options, "opt", "o", options, "Option in the form name=value or name placed in the \"opt\" array.") flags.StringArrayVarP(cmdFlags, &arguments, "arg", "a", arguments, "Argument placed in the \"arg\" array.") } var commandDefinition = &cobra.Command{ Use: "rc commands parameter", Short: `Run a command against a running rclone.`, Long: ` This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port" A username and password can be passed in with --user and --pass. Note that --rc-addr, --rc-user, --rc-pass will also be read for --url, --user, --pass. Arguments should be passed in as parameter=value. The result will be returned as a JSON object by default. The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values. The -o/--opt option can be used to set a key "opt" with key, value options in the form "-o key=value" or "-o key". It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings.
-o key=value -o key2 Will place this in the "opt" value {"key":"value", "key2":""} The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings. -a value -a value2 Will place this in the "arg" value ["value", "value2"] Use --loopback to connect to the rclone instance running "rclone rc". This is very useful for testing commands without having to run an rclone rc server, eg: rclone rc --loopback operations/about fs=/ Use "rclone rc" to see a list of all possible commands.`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(0, 1e9, command, args) cmd.Run(false, false, command, func() error { ctx := context.Background() parseFlags() if len(args) == 0 { return list(ctx) } return run(ctx, args) }) }, } // Parse the flags func parseFlags() { // set alternates from alternate flags setAlternateFlag("rc-addr", &url) setAlternateFlag("rc-user", &authUser) setAlternateFlag("rc-pass", &authPass) // If url is just :port then fix it up if strings.HasPrefix(url, ":") { url = "localhost" + url } // if url is just host:port add http:// if !strings.HasPrefix(url, "http:") && !strings.HasPrefix(url, "https:") { url = "http://" + url } // if url doesn't end with / add it if !strings.HasSuffix(url, "/") { url += "/" } } // ParseOptions parses a slice of options in the form key=value or key // into a map func ParseOptions(options []string) (opt map[string]string) { opt = make(map[string]string, len(options)) for _, option := range options { equals := strings.IndexRune(option, '=') key := option value := "" if equals >= 0 { key = option[:equals] value = option[equals+1:] } opt[key] = value } return opt } // If the user set flagName set the output to its value func setAlternateFlag(flagName string, output *string) { if rcFlag := pflag.Lookup(flagName); rcFlag != nil && rcFlag.Changed { *output = rcFlag.Value.String() } } // do a call from (path, in) to (out, err).
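// When the --loopback flag is set, the call is dispatched directly to the in-process rc.Calls registry; otherwise it is sent as an HTTP POST with a JSON body to the running rclone's rc endpoint.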
// // if err is set, out may be a valid error return or it may be nil func doCall(ctx context.Context, path string, in rc.Params) (out rc.Params, err error) { // If loopback set, short circuit HTTP request if loopback { call := rc.Calls.Get(path) if call == nil { return nil, errors.Errorf("method %q not found", path) } out, err = call.Fn(context.Background(), in) if err != nil { return nil, errors.Wrap(err, "loopback call failed") } // Reshape (serialize then deserialize) the data so it is in the form expected err = rc.Reshape(&out, out) if err != nil { return nil, errors.Wrap(err, "loopback reshape failed") } return out, nil } // Do HTTP request client := fshttp.NewClient(fs.Config) data, err := json.Marshal(in) if err != nil { return nil, errors.Wrap(err, "failed to encode JSON") } // Build the request URL inline rather than mutating the package level url variable req, err := http.NewRequest("POST", url+path, bytes.NewBuffer(data)) if err != nil { return nil, errors.Wrap(err, "failed to make request") } req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext req.Header.Set("Content-Type", "application/json") if authUser != "" || authPass != "" { req.SetBasicAuth(authUser, authPass) } resp, err := client.Do(req) if err != nil { return nil, errors.Wrap(err, "connection failed") } defer fs.CheckClose(resp.Body, &err) // A non-OK status is an error - include the response body in the message if resp.StatusCode != http.StatusOK { var body []byte body, err = ioutil.ReadAll(resp.Body) var bodyString string if err == nil { bodyString = string(body) } else { bodyString = err.Error() } bodyString = strings.TrimSpace(bodyString) return nil, errors.Errorf("failed to read rc response: %s: %s", resp.Status, bodyString) } // Parse output - the status code was already checked above out = make(rc.Params) err = json.NewDecoder(resp.Body).Decode(&out) if err != nil { return nil, errors.Wrap(err, "failed to decode JSON") } return out, nil } // Run the remote control command passed in func run(ctx context.Context, args []string) (err error) { path := strings.Trim(args[0], "/") // parse input in := make(rc.Params) params := args[1:] if jsonInput == "" { for _, param := range params { equals := strings.IndexRune(param, '=') if equals < 0 { return errors.Errorf("no '=' found in parameter %q", param) } key, value := param[:equals], param[equals+1:] in[key] = value } } else { if len(params) > 0 { return errors.New("can't use --json and parameters together") } err = json.Unmarshal([]byte(jsonInput), &in) if err != nil { return errors.Wrap(err, "bad --json input") } } if len(options) > 0 { in["opt"] = ParseOptions(options) } if len(arguments) > 0 { in["arg"] = arguments } // Do the call out, callErr := doCall(ctx, path, in) // Write the JSON blob to stdout if required if out != nil && !noOutput { err := rc.WriteJSON(os.Stdout, out) if err != nil { return errors.Wrap(err, "failed to output JSON") } } return callErr } // List the available commands to stdout func list(ctx context.Context) error { list, err := doCall(ctx, "rc/list", nil) if err != nil { return errors.Wrap(err, "failed to list") } commands, ok := list["commands"].([]interface{}) if !ok { return errors.New("bad JSON") } for _, command := range commands { info, ok := command.(map[string]interface{}) if !ok { return errors.New("bad JSON") } fmt.Printf("### %s: %s {#%s}\n\n", info["Path"], info["Title"], strings.Replace(info["Path"].(string), "/", "-", -1)) fmt.Printf("%s\n\n", info["Help"]) if authRequired := info["AuthRequired"]; authRequired != nil { if authRequired.(bool) { fmt.Printf("**Authentication is
required for this call.**\n\n") } } } return nil } rclone-1.53.3/cmd/rcat/000077500000000000000000000000001375552240400145705ustar00rootroot00000000000000rclone-1.53.3/cmd/rcat/rcat.go000066400000000000000000000034761375552240400160600ustar00rootroot00000000000000package rcat import ( "context" "log" "os" "time" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "rcat remote:path", Short: `Copies standard input to file on remote.`, Long: ` rclone rcat reads from standard input (stdin) and copies it to a single remote file. echo "hello world" | rclone rcat remote:path/to/file ffmpeg - | rclone rcat remote:path/to/file If the remote file already exists, it will be overwritten. rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through ` + "`--streaming-upload-cutoff`" + `. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote; please see the documentation for your remote. Generally speaking, setting this cutoff too high will decrease your performance. Note also that the upload cannot be retried, because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching locally and then ` + "`rclone move`" + ` it to the destination.`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) stat, _ := os.Stdin.Stat() if (stat.Mode() & os.ModeCharDevice) != 0 { log.Fatalf("nothing to read from standard input (stdin).") } fdst, dstFileName := cmd.NewFsDstFile(args) cmd.Run(false, false, command, func() error { _, err := operations.Rcat(context.Background(), fdst, dstFileName, os.Stdin, time.Now()) return err }) }, } rclone-1.53.3/cmd/rcd/000077500000000000000000000000001375552240400144075ustar00rootroot00000000000000rclone-1.53.3/cmd/rcd/rcd.go000066400000000000000000000023271375552240400155120ustar00rootroot00000000000000package rcd import ( "log" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/rc/rcflags" "github.com/rclone/rclone/fs/rc/rcserver" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "rcd *", Short: `Run rclone listening to remote control commands only.`, Long: ` This runs rclone so that it only listens to remote control commands. This is useful if you are controlling rclone via the rc API. If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run. See the [rc documentation](/rc/) for more info on the rc flags.
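For example (the user, password and directory below are illustrative, not defaults): rclone rcd --rc-user=sally --rc-pass=mysecret /opt/rclone/webroot This serves the remote control API protected by HTTP basic auth and serves the given directory on the root URL.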
`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(0, 1, command, args) if rcflags.Opt.Enabled { log.Fatalf("Don't supply --rc flag when using rcd") } // Start the rc rcflags.Opt.Enabled = true if len(args) > 0 { rcflags.Opt.Files = args[0] } s, err := rcserver.Start(&rcflags.Opt) if err != nil { log.Fatalf("Failed to start remote control: %v", err) } if s == nil { log.Fatal("rc server not configured") } s.Wait() }, } rclone-1.53.3/cmd/reveal/000077500000000000000000000000001375552240400151155ustar00rootroot00000000000000rclone-1.53.3/cmd/reveal/reveal.go000066400000000000000000000011321375552240400167190ustar00rootroot00000000000000package reveal import ( "fmt" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config/obscure" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "reveal password", Short: `Reveal obscured password from rclone.conf`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) cmd.Run(false, false, command, func() error { revealed, err := obscure.Reveal(args[0]) if err != nil { return err } fmt.Println(revealed) return nil }) }, Hidden: true, } rclone-1.53.3/cmd/rmdir/000077500000000000000000000000001375552240400147545ustar00rootroot00000000000000rclone-1.53.3/cmd/rmdir/rmdir.go000066400000000000000000000012001375552240400164110ustar00rootroot00000000000000package rmdir import ( "context" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "rmdir remote:path", Short: `Remove the path if empty.`, Long: ` Remove the path. Note that you can't remove a path with objects in it; use purge for that.`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fdst := cmd.NewFsDir(args) cmd.Run(true, false, command, func() error { return operations.Rmdir(context.Background(), fdst, "") }) }, } rclone-1.53.3/cmd/rmdirs/000077500000000000000000000000001375552240400151375ustar00rootroot00000000000000rclone-1.53.3/cmd/rmdirs/rmdirs.go000066400000000000000000000020021375552240400167600ustar00rootroot00000000000000package rmdir import ( "context" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var ( leaveRoot = false ) func init() { cmd.Root.AddCommand(rmdirsCmd) rmdirsCmd.Flags().BoolVarP(&leaveRoot, "leave-root", "", leaveRoot, "Do not remove root directory if empty") } var rmdirsCmd = &cobra.Command{ Use: "rmdirs remote:path", Short: `Remove empty directories under the path.`, Long: `This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path itself if it has nothing in it. If you supply the --leave-root flag, it will not remove the root directory. This is useful for tidying up remotes where rclone has left behind a lot of empty directories.
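For example (the remote and path are illustrative): rclone rmdirs remote:path/to/dir --leave-root removes every empty directory below path/to/dir while keeping path/to/dir itself.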
`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fdst := cmd.NewFsDir(args) cmd.Run(true, false, command, func() error { return operations.Rmdirs(context.Background(), fdst, "", leaveRoot) }) }, } rclone-1.53.3/cmd/serve/000077500000000000000000000000001375552240400147635ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/dlna/000077500000000000000000000000001375552240400157015ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/dlna/cds.go000066400000000000000000000235071375552240400170100ustar00rootroot00000000000000package dlna import ( "context" "encoding/xml" "fmt" "log" "mime" "net/http" "net/url" "os" "path" "path/filepath" "regexp" "strings" "github.com/anacrolix/dms/dlna" "github.com/anacrolix/dms/upnp" "github.com/pkg/errors" "github.com/rclone/rclone/cmd/serve/dlna/upnpav" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/vfs" ) // Add a minimal number of mime types to augment go's built in types // for environments which don't have access to a mime.types file (eg // Termux on android) func init() { for _, t := range []struct { mimeType string extensions string }{ {"audio/flac", ".flac"}, {"audio/mpeg", ".mpga,.mpega,.mp2,.mp3,.m4a"}, {"audio/ogg", ".oga,.ogg,.opus,.spx"}, {"audio/x-wav", ".wav"}, {"image/tiff", ".tiff,.tif"}, {"video/dv", ".dif,.dv"}, {"video/fli", ".fli"}, {"video/mpeg", ".mpeg,.mpg,.mpe"}, {"video/MP2T", ".ts"}, {"video/mp4", ".mp4"}, {"video/quicktime", ".qt,.mov"}, {"video/ogg", ".ogv"}, {"video/webm", ".webm"}, {"video/x-msvideo", ".avi"}, {"video/x-matroska", ".mpv,.mkv"}, {"text/srt", ".srt"}, } { for _, ext := range strings.Split(t.extensions, ",") { err := mime.AddExtensionType(ext, t.mimeType) if err != nil { panic(err) } } } } type contentDirectoryService struct { *server upnp.Eventing } func (cds *contentDirectoryService) updateIDString() string { return fmt.Sprintf("%d", uint32(os.Getpid())) } var mediaMimeTypeRegexp = regexp.MustCompile("^(video|audio|image)/") // Turns the given entry and DMS host into a UPnP object. A nil object is // returned if the entry is not of interest. func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fileInfo vfs.Node, resources vfs.Nodes, host string) (ret interface{}, err error) { obj := upnpav.Object{ ID: cdsObject.ID(), Restricted: 1, ParentID: cdsObject.ParentID(), } if fileInfo.IsDir() { defaultChildCount := 1 obj.Class = "object.container.storageFolder" obj.Title = fileInfo.Name() return upnpav.Container{ Object: obj, ChildCount: &defaultChildCount, }, nil } if !fileInfo.Mode().IsRegular() { return } // Read the mime type from the fs.Object if possible, // otherwise fall back to working out what it is from the file path. var mimeType string if o, ok := fileInfo.DirEntry().(fs.Object); ok { mimeType = fs.MimeType(context.TODO(), o) } else { mimeType = fs.MimeTypeFromName(fileInfo.Name()) } mediaType := mediaMimeTypeRegexp.FindStringSubmatch(mimeType) if mediaType == nil { return } obj.Class = "object.item." 
+ mediaType[1] + "Item" obj.Title = fileInfo.Name() obj.Date = upnpav.Timestamp{Time: fileInfo.ModTime()} item := upnpav.Item{ Object: obj, Res: make([]upnpav.Resource, 0, 1), } item.Res = append(item.Res, upnpav.Resource{ URL: (&url.URL{ Scheme: "http", Host: host, Path: path.Join(resPath, cdsObject.Path), }).String(), ProtocolInfo: fmt.Sprintf("http-get:*:%s:%s", mimeType, dlna.ContentFeatures{ SupportRange: true, }.String()), Size: uint64(fileInfo.Size()), }) for _, resource := range resources { subtitleURL := (&url.URL{ Scheme: "http", Host: host, Path: path.Join(resPath, resource.Path()), }).String() item.Res = append(item.Res, upnpav.Resource{ URL: subtitleURL, ProtocolInfo: fmt.Sprintf("http-get:*:%s:*", "text/srt"), }) } ret = item return } // Returns all the upnpav objects in a directory. func (cds *contentDirectoryService) readContainer(o object, host string) (ret []interface{}, err error) { node, err := cds.vfs.Stat(o.Path) if err != nil { return } if !node.IsDir() { err = errors.New("not a directory") return } dir := node.(*vfs.Dir) dirEntries, err := dir.ReadDirAll() if err != nil { err = errors.New("failed to list directory") return } dirEntries, mediaResources := mediaWithResources(dirEntries) for _, de := range dirEntries { child := object{ path.Join(o.Path, de.Name()), } obj, err := cds.cdsObjectToUpnpavObject(child, de, mediaResources[de], host) if err != nil { fs.Errorf(cds, "error with %s: %s", child.FilePath(), err) continue } if obj == nil { fs.Debugf(cds, "unrecognized file type: %s", de) continue } ret = append(ret, obj) } return } // Given a list of nodes, separate them into potential media items and any associated resources (external subtitles, // for example.) // // The result is a slice of potential media nodes (in their original order) and a map containing associated // resources nodes of each media node, if any. func mediaWithResources(nodes vfs.Nodes) (vfs.Nodes, map[vfs.Node]vfs.Nodes) { media, mediaResources := vfs.Nodes{}, make(map[vfs.Node]vfs.Nodes) // First, separate out the subtitles and media into maps, keyed by their lowercase base names. mediaByName, subtitlesByName := make(map[string]vfs.Nodes), make(map[string]vfs.Node) for _, node := range nodes { baseName, ext := splitExt(strings.ToLower(node.Name())) switch ext { case ".srt": subtitlesByName[baseName] = node default: mediaByName[baseName] = append(mediaByName[baseName], node) media = append(media, node) } } // Find the associated media file for each subtitle for baseName, node := range subtitlesByName { // Find a media file with the same basename (video.mp4 for video.srt) mediaNodes, found := mediaByName[baseName] if !found { // Or basename of the basename (video.mp4 for video.en.srt) baseName, _ = splitExt(baseName) mediaNodes, found = mediaByName[baseName] } // Just advise if no match found if !found { fs.Infof(node, "could not find associated media for subtitle: %s", node.Name()) continue } // Associate with all potential media nodes fs.Debugf(mediaNodes, "associating subtitle: %s", node.Name()) for _, mediaNode := range mediaNodes { mediaResources[mediaNode] = append(mediaResources[mediaNode], node) } } return media, mediaResources } type browse struct { ObjectID string BrowseFlag string Filter string StartingIndex int RequestedCount int } // ContentDirectory object from ObjectID. 
func (cds *contentDirectoryService) objectFromID(id string) (o object, err error) { o.Path, err = url.QueryUnescape(id) if err != nil { return } if o.Path == "0" { o.Path = "/" } o.Path = path.Clean(o.Path) if !path.IsAbs(o.Path) { err = fmt.Errorf("bad ObjectID %v", o.Path) return } return } func (cds *contentDirectoryService) Handle(action string, argsXML []byte, r *http.Request) (map[string]string, error) { host := r.Host switch action { case "GetSystemUpdateID": return map[string]string{ "Id": cds.updateIDString(), }, nil case "GetSortCapabilities": return map[string]string{ "SortCaps": "dc:title", }, nil case "Browse": var browse browse if err := xml.Unmarshal(argsXML, &browse); err != nil { return nil, err } obj, err := cds.objectFromID(browse.ObjectID) if err != nil { return nil, upnp.Errorf(upnpav.NoSuchObjectErrorCode, err.Error()) } switch browse.BrowseFlag { case "BrowseDirectChildren": objs, err := cds.readContainer(obj, host) if err != nil { return nil, upnp.Errorf(upnpav.NoSuchObjectErrorCode, err.Error()) } totalMatches := len(objs) objs = objs[func() (low int) { low = browse.StartingIndex if low > len(objs) { low = len(objs) } return }():] if browse.RequestedCount != 0 && browse.RequestedCount < len(objs) { objs = objs[:browse.RequestedCount] } result, err := xml.Marshal(objs) if err != nil { return nil, err } return map[string]string{ "TotalMatches": fmt.Sprint(totalMatches), "NumberReturned": fmt.Sprint(len(objs)), "Result": didlLite(string(result)), "UpdateID": cds.updateIDString(), }, nil case "BrowseMetadata": node, err := cds.vfs.Stat(obj.Path) if err != nil { return nil, err } // TODO: External subtitles won't appear in the metadata here, but probably should. upnpObject, err := cds.cdsObjectToUpnpavObject(obj, node, vfs.Nodes{}, host) if err != nil { return nil, err } result, err := xml.Marshal(upnpObject) if err != nil { return nil, err } return map[string]string{ "Result": didlLite(string(result)), }, nil default: return nil, upnp.Errorf(upnp.ArgumentValueInvalidErrorCode, "unhandled browse flag: %v", browse.BrowseFlag) } case "GetSearchCapabilities": return map[string]string{ "SearchCaps": "", }, nil // Samsung Extensions case "X_GetFeatureList": return map[string]string{ "FeatureList": ` `}, nil case "X_SetBookmark": // just ignore return map[string]string{}, nil default: return nil, upnp.InvalidActionError } } // Represents a ContentDirectory object. type object struct { Path string // The cleaned, absolute path for the object relative to the server. } // Returns the actual local filesystem path for the object. func (o *object) FilePath() string { return filepath.FromSlash(o.Path) } // Returns the ObjectID for the object. This is used in various ContentDirectory actions. func (o object) ID() string { if !path.IsAbs(o.Path) { log.Panicf("Relative object path: %s", o.Path) } if len(o.Path) == 1 { return "0" } return url.QueryEscape(o.Path) } func (o *object) IsRoot() bool { return o.Path == "/" } // Returns the object's parent ObjectID. Fortunately it can be deduced from the // ObjectID (for now). 
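// The root object has no parent, for which the UPnP convention is the ObjectID "-1".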
func (o object) ParentID() string { if o.IsRoot() { return "-1" } o.Path = path.Dir(o.Path) return o.ID() } rclone-1.53.3/cmd/serve/dlna/cms.go000066400000000000000000000015761375552240400170210ustar00rootroot00000000000000package dlna import ( "net/http" "github.com/anacrolix/dms/upnp" ) const defaultProtocolInfo = "http-get:*:video/mpeg:*,http-get:*:video/mp4:*,http-get:*:video/vnd.dlna.mpeg-tts:*,http-get:*:video/avi:*,http-get:*:video/x-matroska:*,http-get:*:video/x-ms-wmv:*,http-get:*:video/wtv:*,http-get:*:audio/mpeg:*,http-get:*:audio/mp3:*,http-get:*:audio/mp4:*,http-get:*:audio/x-ms-wma:*,http-get:*:audio/wav:*,http-get:*:audio/L16:*,http-get:*:image/jpeg:*,http-get:*:image/png:*,http-get:*:image/gif:*,http-get:*:image/tiff:*" type connectionManagerService struct { *server upnp.Eventing } func (cms *connectionManagerService) Handle(action string, argsXML []byte, r *http.Request) (map[string]string, error) { switch action { case "GetProtocolInfo": return map[string]string{ "Source": defaultProtocolInfo, "Sink": "", }, nil default: return nil, upnp.InvalidActionError } } rclone-1.53.3/cmd/serve/dlna/data/000077500000000000000000000000001375552240400166125ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/dlna/data/assets_generate.go000066400000000000000000000007061375552240400223200ustar00rootroot00000000000000//go:generate go run assets_generate.go // The "go:generate" directive compiles static assets by running assets_generate.go // +build ignore package main import ( "log" "net/http" "github.com/shurcooL/vfsgen" ) func main() { var AssetDir http.FileSystem = http.Dir("./static") err := vfsgen.Generate(AssetDir, vfsgen.Options{ PackageName: "data", BuildTags: "!dev", VariableName: "Assets", }) if err != nil { log.Fatalln(err) } } rclone-1.53.3/cmd/serve/dlna/data/assets_vfsdata.go000066400000000000000000003673311375552240400221660ustar00rootroot00000000000000// Code generated by vfsgen; DO NOT EDIT. // +build !dev package data import ( "bytes" "compress/gzip" "fmt" "io" "io/ioutil" "net/http" "os" pathpkg "path" "time" ) // Assets statically implements the virtual filesystem provided to vfsgen.
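// The file contents below are stored gzip-compressed and are decompressed on demand when an asset is opened for reading.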
var Assets = func() http.FileSystem { fs := vfsgen۰FS{ "/": &vfsgen۰DirInfo{ name: "/", modTime: time.Date(2019, 9, 15, 16, 40, 10, 576397038, time.UTC), }, "/ConnectionManager.xml": &vfsgen۰CompressedFileInfo{ name: "ConnectionManager.xml", modTime: time.Date(2019, 5, 27, 15, 4, 10, 117829131, time.UTC), uncompressedSize: 5505, compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xd4\x57\x4d\x6f\xdb\x3c\x0c\x3e\x3b\xbf\x22\xf0\x3d\xaf\x5b\xe0\x3d\x0c\x85\xe2\xa2\x4b\x3f\x10\x6c\x45\x83\x36\x0d\xb0\x53\xa1\xc9\x6c\xaa\xd5\xa6\x0c\x89\xee\xc7\xbf\x1f\x62\xc7\xb5\xdd\x38\x8b\xe3\xc8\x59\x76\x93\x68\x91\xcf\x63\x92\x22\x29\x76\xfa\x16\x85\xfd\x17\xd0\x46\x2a\x1c\xba\xc7\xff\x1d\xb9\x7d\x40\xa1\x02\x89\xf3\xa1\x7b\x3f\xbd\x1c\x7c\x71\x4f\xfd\x1e\x33\x22\x0e\xfa\x6f\x51\x88\x66\xe8\x26\x1a\x4f\x8c\x78\x82\x88\x9b\x41\x12\x63\x3c\x50\x7a\x7e\x62\x40\xbf\x48\x01\x83\xe3\xc1\x91\xeb\xf7\x1c\x66\x62\x10\xb3\xcc\xac\xdf\x73\x1c\x16\xf1\x5f\x4a\xfb\xc7\xcc\xcb\x16\xa9\x48\xa2\xd2\xfe\x11\xf3\xb2\x45\xcf\x61\x5e\x55\x8b\x71\x41\x52\xe1\x77\x69\x28\x55\xc8\xb6\x8b\xa5\xc3\x90\x47\xe0\x5f\x01\x4d\xb4\x22\x25\x54\x38\xc6\x47\xc5\xbc\x54\x9a\x7e\xe7\x7a\x9e\x44\x80\x94\x2b\x97\x44\xd9\x76\x69\xe2\x4e\x25\x5a\x40\x49\xd3\x71\x58\x20\x35\x64\x50\x2a\x21\xe6\x15\xdb\xe5\x77\x0d\x21\x27\x08\xee\x88\x13\xcc\xb8\x96\xfc\x67\x98\x1b\xaa\xd2\xa9\x3d\x98\x91\xf1\xaa\x6c\xd6\x90\x93\xf8\x6c\x83\x9a\xc4\xe7\x96\xc4\x8a\xed\x47\x14\xbc\x22\x0c\xab\x11\x99\x68\x88\xb9\x86\x4b\xa5\x47\x0a\x31\xe3\xd6\x26\x2c\xb7\x10\x29\x82\x35\xc1\xad\xf8\x41\x62\x53\x37\x9c\x3d\x9c\xdd\x5e\x3d\x4c\x7f\x4c\x2e\x1e\xec\x86\x69\x02\x50\xfa\xdd\x6b\x8e\x7c\x0e\xda\x2a\xdf\x1a\xeb\x76\x49\x8f\xcf\x3b\xe2\xbb\x30\xbc\x2b\xd5\xf3\x1c\xde\x2a\xc7\x92\xd5\x5d\x09\x36\xf1\xe3\x16\xf7\xb5\x33\x47\x9e\xcd\xa6\x9a\xa3\x89\x95\x26\xdb\x44\x3f\x99\xde\x95\xe9\xad\x30\xb6\x19\x2e\x4d\x76\x56\xfa\x8a\x50\x8d\x54\x14\x87\x40\xd0\xa6\xf0\x1d\xda\x95\xdc\xd6\x0b\x57\x40\xa3\x44\x6b\x40\x2a\x03\x9a\x5d\x5d\x61\x2c\xe4\x42\x3d\xaf\xfd\x7a\xa2\xe5\x94\x72\x68\x59\x71\xb8\xb7\xf6\x9f\xaf\x7c\x4d\x66\x9e\x76\x44\xff\xe2\xd0\xb3\x6b\xf3\xdb\xfb\xd4\x73\x08\xdd\x7a\xe3\xd8\xd3\x8e\xa4\xc5\xb9\x67\xa1\x98\xd8\x28\xcd\xb5\x3e\xcc\xad\xdb\xa9\xd0\xf9\x72\xf9\x89\x2d\x1f\xac\xa9\xd5\x69\x6e\x92\x99\x32\x48\xdf\x00\x06\x17\x2f\x80\x64\x86\xee\x3b\x18\xb7\x54\xdd\xeb\x9e\x7b\x45\x5d\x0f\x38\xf1\xe9\x7b\x0c\xbe\x21\x2d\x71\xce\xbc\x0f\x41\x4a\xca\x7c\xfe\x95\x2d\x70\x57\xde\x72\xfb\x40\xdd\xd4\xd2\x2d\x22\xa3\x2a\x03\xff\x31\x31\x1a\xe2\x3b\x8c\x87\xa1\x7a\x85\x60\xc6\xc3\x04\xca\xad\xb6\x24\xf6\x6f\xbe\x31\xaf\x22\xa8\x39\x33\x52\x48\x80\x74\xa9\x74\xc4\xe9\x5a\x9a\x88\x93\x78\xda\xac\x36\x46\x93\x3c\x3e\x4a\x21\x01\xe9\x2b\xc7\xe0\x55\x06\xd4\x40\xed\x1e\x35\x84\xa9\x83\x46\x4f\x1c\x11\xc2\x26\x2a\xcf\xa8\x5e\xb1\xe6\x60\x55\x54\xdc\x0f\xcb\xa1\x59\xed\x03\x7b\xc9\x8d\xba\x52\x69\x23\x29\xc6\x18\x2f\x8a\xd8\x26\xb7\xdf\x24\x54\x7f\xae\x53\xaf\xef\xa1\x0c\x34\x88\x78\xa5\x87\x16\xd8\xf2\xff\x4e\x70\xd7\xcd\x71\x9d\x03\x7f\x1e\x6d\xb7\x01\x64\x5e\x4d\xb3\x61\x9e\x11\x71\xe0\xff\x0e\x00\x00\xff\xff\x2a\x62\x9d\xe1\x81\x15\x00\x00"), }, "/ContentDirectory.xml": &vfsgen۰CompressedFileInfo{ name: "ContentDirectory.xml", modTime: time.Date(2019, 5, 27, 15, 4, 10, 118094855, time.UTC), uncompressedSize: 14527, compressedContent: 
[]byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x5a\x5f\x6f\xa3\x38\x10\x7f\x4e\x3f\x45\x95\xf7\x1e\xad\x74\x4f\x2b\x37\xab\xdd\x24\xad\x22\xb5\x4d\x04\xd9\x6a\xef\x29\x72\x61\x9a\x78\x17\x6c\xce\x1e\xda\xe4\xdb\x9f\x80\x90\x40\x0a\xf9\x03\x86\x65\xf7\xf2\xc6\x3f\xff\xe6\x37\xc3\x78\xc6\x1e\x0f\xf9\xbc\xf4\xdc\xcb\x37\x90\x8a\x09\x7e\xdb\xbd\xf9\xeb\xba\xfb\xb9\x77\x41\x94\xed\x3b\x97\x4b\xcf\xe5\xea\xb6\x1b\x48\xfe\x49\xd9\x0b\xf0\xa8\xba\x0a\x7c\xee\x5f\x09\x39\xff\xa4\x40\xbe\x31\x1b\xae\x6e\xae\xae\xbb\xbd\x8b\x0e\x51\x3e\xd8\xcf\x31\x4a\xef\xa2\xd3\x21\x1e\xfd\x21\x64\xef\x86\x18\xf1\x45\xf4\x88\x71\x21\x7b\xd7\xc4\x88\x2f\x2e\x3a\xc4\xc8\x8e\x22\xd4\x46\x26\xf8\x03\x53\x18\x0d\x88\x6f\xc3\xcb\x0e\xe1\xd4\x83\xde\x3d\xa0\x05\x54\xda\x8b\x3e\xf5\xe9\x0b\x73\x19\x32\x50\xc4\x88\xde\x45\x5f\x51\x39\x0f\x3c\xe0\x98\x40\xa4\x1e\xc5\xb7\x6b\xa0\x0d\x4a\x7a\x74\xa7\x43\x1c\x26\x21\x16\x2a\x02\x24\xc6\xf6\x76\xfd\x5e\x82\x4b\x11\x1c\x0b\x29\xc2\x33\x95\x8c\xbe\xb8\x29\xb0\x14\xa5\xdc\x0f\x63\x42\x46\x86\xd1\xf6\x76\xa3\xb6\xb1\xd5\x3b\xdf\x04\x42\x62\x65\x03\xc4\x18\x5a\xd4\xff\x40\xa7\x6e\xe5\x87\x4b\x04\x1e\xfa\x8c\x0e\x2b\xa4\xc1\x74\x99\xa3\x80\x60\x9d\x76\xb9\x03\x8a\x81\x84\xf0\xfb\x32\x96\xc8\x1f\x5e\xd6\x06\x19\xb4\x5a\xbd\x61\xa5\x10\xbc\x6f\xbe\x43\x11\x46\x83\x32\x8a\x8f\x1c\x1d\xff\x7c\x87\x46\x6d\x2a\x7f\x95\xe2\x5d\x41\x19\x3d\xc7\x2f\x3f\xc0\xc6\x8c\x8d\x32\xda\x32\x7e\xac\xb2\x5f\x66\x5f\xcc\xfb\xd9\xf4\x9f\xc9\x70\xb6\x05\x3d\x5a\xe3\x02\x7a\xb1\x62\x77\x2e\x9d\x6b\x25\x98\x86\xad\x4a\xf1\x8e\xb9\x08\x52\x2b\xbd\x04\xb2\x2a\x35\x0b\xa9\x44\xc6\xe7\x23\xee\xc0\x52\x2b\xc3\x35\x62\x55\x82\x26\xfc\x1b\x80\x42\x70\xfa\x22\xe0\x85\x11\xa6\x14\xc3\x35\x62\x65\x13\x86\x59\x4c\x32\x04\xc9\xa8\x56\x7e\x59\xe0\xea\x86\x54\x81\xab\x23\x44\xa7\x18\x26\x98\x55\xb9\x3d\x05\xde\x0b\x48\x13\x30\x90\x1c\x74\x84\x55\xfd\x7f\x79\x2a\x90\xba\x8f\x14\xed\x05\xe8\xc8\xf5\xfa\x09\xe6\xa4\x32\x0d\xe4\x1a\xc8\x4c\xf1\x22\xb8\x4c\x66\xea\x0b\x8e\x94\x71\x90\xad\x4d\x4e\xeb\x05\x7e\x2d\xd1\x61\x07\xfa\x9c\xa4\xce\x49\xea\x9c\xa4\xce\x49\xea\x9c\xa4\xea\x48\x52\x7d\x09\x14\x21\x4e\x0c\x7f\x66\xaa\x1a\xba\x10\x3e\x2b\xf4\x9b\x52\xf4\x74\xcd\xbd\x43\x7b\xd0\x72\x7e\xa3\xcf\x78\xad\x88\x5b\xa7\xfa\xf4\x00\x14\x4a\xb1\x2a\xef\xd4\x6d\xa9\x0c\x9c\xaa\x78\x1c\x2f\x7e\x7f\xbd\x8b\x62\x4d\x20\x25\x70\x9c\xd2\xf9\x33\x75\x03\xd0\xca\x32\x01\x3d\xb1\x40\x57\x94\x52\xe1\xbd\x4d\x2c\x4f\xf5\xa3\x47\xf1\xf6\xe7\x7a\xd1\x13\xbc\x4f\x68\xe8\x47\x6d\x66\xd8\x9a\xbc\x70\xaa\xeb\x8c\x3c\x5f\x48\x34\x41\x89\x40\xda\xa5\xca\xb2\x56\x34\xf2\x9b\x39\xd2\xfa\x77\x22\xbc\xaa\x3f\x26\x4c\x2c\x8c\xd3\x50\x6e\x2b\xf9\x4d\x25\xe5\xea\x75\xdf\x5a\xac\x9c\xdf\xa4\x71\x6b\xf3\x9c\xe1\xf2\xec\x39\x67\xcf\x29\xe3\x39\x16\x0a\x3f\x11\x54\xc5\x7f\x0e\x1b\xa1\x5c\xd2\x6e\xc2\x06\x03\x70\x01\xa1\x8a\xf6\xc9\xd8\x5f\xeb\x9f\x25\x4e\x3c\x13\xfb\x4e\xa4\x98\x4b\x50\xa5\x8e\xbd\xdb\xf4\xeb\x0f\x50\x0c\x01\x02\xcd\x55\x90\x5d\x6c\x5d\x5c\x1f\x80\xcf\x71\x51\x0f\xd7\x04\x5b\x17\xd7\xa8\xc6\x54\x0f\xd5\x35\x74\xcd\x45\x1c\x13\x5e\x41\x02\x2f\x37\xfb\xdb\x5f\xc7\x69\xff\xb6\xe2\x37\x5c\xae\x7f\x9f\xb5\xad\x51\x26\x7d\xcc\x12\x03\xbb\xf5\x6e\x76\xbf\xcf\x2c\xc0\xaf\x42\xfc\xf4\xa8\xfc\x59\x6a\xea\x50\x84\xb9\x90\xab\xe9\xca\xd7\xbb\xdb\xcf\x02\x57\x2e\xe5\x69\x9e\x3a\xe6\xff\x60\x52\x4f\x84\xb2\xc0\x16\xbc\xf0\x54\xa4\x14\xbf\x18\x55\x97\x4b\x27\x97\xeb\x57\x64\xdd\x86\x1a\xa1\x4e\x13\x48\xa2\xd2\x42\x2e\x15\x70\x67\xf8\x06\x1c\xd5\x6d\x97\x8b\x6e\x7a\x2d\xbd\xb7\x99\xd4\xa1\x48\x43\x6f\xec\x29\x94\x8c\xcf\x89\xb1\x79\x10\x71\x52\xbb\x9a\x1c\x2
f\x76\x4f\x03\x67\xad\x42\x0f\x36\x4e\x6a\x94\xbe\x02\x95\x11\x5f\xd8\xa9\xb7\x11\x11\xb0\xbf\x35\x0a\xdc\xe4\xf8\x44\x66\x43\x8a\x6e\x57\xc1\xcd\xfc\xd7\x82\x7c\x56\x9b\xbc\xdc\xd0\xd3\xac\xdc\x0f\xe7\x34\x8d\x48\x2d\xec\x02\x69\x44\x7a\x6e\x83\xe4\x7e\xc9\x1d\x42\x5d\x57\xbc\x83\xb3\xa9\xa2\x27\xd1\x3f\xf5\x78\xdd\x79\xf9\x08\x48\xc3\xb1\xc4\xc8\xbc\x2c\xfc\x7e\x10\xa5\x81\xfe\x82\xb9\x8e\x04\x9e\x33\x2a\xfb\x68\x1b\xc9\xb5\x18\xe3\x43\x97\x4b\x33\x0e\x90\xdf\x7d\xd1\x88\xec\xdd\xa6\x19\x6d\x51\xb3\x50\xe2\x6e\x13\x4c\xfd\x12\xeb\x4c\x0e\x85\x42\x73\x6b\x21\xcd\x89\xfd\x50\xdf\xd0\x31\xa5\xfb\xe3\xc7\xc9\xc3\x70\x3a\x1c\x1c\x9e\xcd\x43\xd3\x1c\x9b\x87\x3f\x1b\x3d\xcd\x26\xe6\xf8\xde\x1c\x5a\xd6\xe1\x8f\xad\xe9\x78\x32\xc9\x15\x5e\x6b\x50\x28\x2c\xc3\x34\x32\x41\x8b\x0a\x2b\xcd\x08\xcf\x9c\x94\x36\x2b\x3b\x5b\x41\x4d\xcd\x1c\xc9\xea\x89\x4b\xf9\x1b\xd0\x3d\x53\xb6\x43\x1c\x78\xa5\x81\x8b\x91\x8d\x2e\x0d\x9d\x2b\x91\x63\xc3\x46\x8d\x1c\x92\x0d\xd6\x2f\xa6\x91\xa9\x5f\x1c\x1d\xce\x8e\xa1\x43\x8c\x9c\x5d\x1e\x31\x94\xed\x3b\xbd\xff\x02\x00\x00\xff\xff\x1a\x57\x6a\x92\xbf\x38\x00\x00"), }, "/X_MS_MediaReceiverRegistrar.xml": &vfsgen۰CompressedFileInfo{ name: "X_MS_MediaReceiverRegistrar.xml", modTime: time.Date(2019, 5, 27, 15, 4, 10, 118232995, time.UTC), uncompressedSize: 2485, compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xec\x56\xc1\x6a\xdb\x40\x10\x3d\xcb\x5f\x61\x74\x77\x36\x86\xd2\x43\x58\x6f\x30\x38\x04\x43\x0b\x45\x71\x0d\x3d\x99\xb5\x76\x50\xa6\x95\x66\xd5\x9d\x95\x49\xfa\xf5\x45\x5a\x39\x92\x13\x25\x14\x27\x4e\x2e\xbd\xcd\xcc\x7a\xde\x7b\xde\xa7\x19\x49\x5e\xde\x15\xf9\x78\x07\x8e\xd1\xd2\x2c\x9e\x9e\x9d\xc7\xe3\x4b\x35\x92\x9c\x96\x66\x7c\x57\xe4\xc4\xb3\xb8\x72\x74\xc1\xe9\x2d\x14\x9a\x27\x55\x49\xe5\xc4\xba\xec\x82\xc1\xed\x30\x85\xc9\x74\x72\x1e\xab\x51\x24\xb9\x84\x74\x1d\x60\xd4\x28\x8a\x64\xa1\x7f\x5a\xa7\xa6\x52\x84\xa0\x29\x21\x59\xa7\xce\xa5\x08\xc1\x28\x92\xe2\xb0\x4b\xea\xd4\xa3\xa5\x2f\xc8\xbe\x69\x08\x69\x1d\x46\x92\x74\x01\x6a\xc9\xf3\xca\xdf\x5a\x87\x7f\xc0\x48\xd1\x94\x9a\x43\xed\xb2\xaa\x00\xf2\xfb\xce\x5e\x29\xa4\x6d\xff\x02\x6a\xcd\xcb\x45\xaf\x37\x8a\xa4\x41\x07\x81\x09\x49\x8a\x2e\x6b\x8f\x1d\xe4\xda\x83\xb9\xf1\xda\xc3\x5a\x3b\xd4\xdb\x1c\xd4\x7c\x33\x4f\xae\x37\xab\x1f\xdf\xae\x36\x1d\xe8\xe0\x2f\x83\x1c\x71\xa8\x67\x58\x5e\x02\x5c\xe5\xfe\x39\x71\xb6\xf2\x47\xa8\xdb\x63\xfe\xb3\xb6\x2e\x7d\xf0\x41\x74\x46\x3c\xf5\x24\x81\x0c\xd9\x83\x0b\xd7\x70\x8c\x2b\x01\xc1\xe9\x1a\x38\x81\xdf\x5f\x39\x7b\x53\x7f\x86\xe0\x5f\xef\x54\x1f\x93\xcb\x17\x34\x1f\x6b\xdb\x00\xc1\xc9\x3c\x5c\xf2\x5a\xe7\x68\x6a\xf0\xff\x63\xf5\x01\x63\xb5\x0f\xdb\x23\xd9\x2e\xd7\x06\x75\xb5\x87\x94\xdc\x27\x19\x33\x90\xb9\xda\x01\x79\x9e\xc5\x64\xe3\x9e\x9b\x83\x97\xd8\xb9\x6a\xb4\xd7\xab\xfb\x12\x14\x7b\x87\x94\x49\xf1\x50\x68\x44\xf1\xe3\xbf\x72\x0c\xef\x93\x2b\xef\x58\x91\xfc\x89\x28\x5f\xd8\x22\x1d\xfd\x16\xe9\x6c\xab\x19\x3e\x7f\x7a\x07\x15\x8f\x17\xc3\x5b\xcb\xb8\x07\x3e\xd0\xd1\xbe\x1c\x1b\xf6\x6b\xa7\xc9\x83\xf9\x5e\xd6\x63\xfd\xcc\x13\x50\xe1\xc9\xe8\x17\x40\xf8\xae\xec\xed\x06\x43\x4b\x37\x55\x9a\x02\x98\x0f\x62\x4f\x60\x67\x7f\xbd\x9e\x5b\x8a\x81\x25\x20\x45\xfd\x5d\xa6\xfe\x06\x00\x00\xff\xff\xfa\x10\x27\xa0\xb5\x09\x00\x00"), }, "/rclone-120x120.png": &vfsgen۰FileInfo{ name: "rclone-120x120.png", modTime: time.Date(2019, 5, 27, 15, 4, 10, 118634349, time.UTC), content: 
[]byte("\x89\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\x00\x00\x78\x00\x00\x00\x78\x08\x06\x00\x00\x00\x39\x64\x36\xd2\x00\x00\x00\x04\x67\x41\x4d\x41\x00\x00\xb1\x8f\x0b\xfc\x61\x05\x00\x00\x00\x20\x63\x48\x52\x4d\x00\x00\x7a\x26\x00\x00\x80\x84\x00\x00\xfa\x00\x00\x00\x80\xe8\x00\x00\x75\x30\x00\x00\xea\x60\x00\x00\x3a\x98\x00\x00\x17\x70\x9c\xba\x51\x3c\x00\x00\x00\x06\x62\x4b\x47\x44\x00\xff\x00\xff\x00\xff\xa0\xbd\xa7\x93\x00\x00\x00\x09\x70\x48\x59\x73\x00\x00\x0b\x13\x00\x00\x0b\x13\x01\x00\x9a\x9c\x18\x00\x00\x00\x07\x74\x49\x4d\x45\x07\xe3\x03\x0b\x05\x07\x12\x76\x96\x1b\x39\x00\x00\x50\x82\x49\x44\x41\x54\x78\xda\x9d\xbd\x77\x7c\x1c\xc5\xfd\xff\xff\x9c\xdd\xbb\x93\x74\xea\xd5\x92\x2c\x57\xd9\x96\xbb\x71\xc1\xc6\x60\x30\x18\x30\x98\x5e\x42\x0f\x04\x42\xe0\x13\x92\x40\xc2\x87\x34\x42\x12\x48\x03\xd2\x49\x08\x9f\x84\x10\x02\x84\xd0\x7b\x09\xc5\x98\xe2\x42\x31\xd8\xb8\x77\x5b\xee\xb6\x9a\xd5\xbb\x6e\x77\x7e\x7f\x6c\x9b\xd9\x5b\x99\x7c\x7f\xeb\xc7\xfa\x4e\x77\xb7\xbb\x33\xf3\x9a\xf7\x7b\xde\x6d\xde\x6f\xf1\xc0\x8e\x2e\x29\x01\x5b\x7a\xa7\xc4\x72\xff\xb6\x94\xcf\x6c\xf7\x33\x09\x48\xf7\xd5\x3b\x04\x20\x84\xfb\xea\xbe\xf7\x0e\xe9\xfe\x27\xd1\xaf\xd1\xae\x75\xaf\x31\x00\x53\x08\x4c\x01\xa6\x01\x31\x21\x88\x09\x9c\xbf\x05\x08\x21\x10\xca\x7d\xa5\x94\x4e\xfb\xb4\xf6\x06\x6d\xf5\xbe\xf3\xda\x80\xdb\x46\x43\x6d\xaf\xd2\xee\xa8\x43\x86\xde\x0b\xe5\x34\x84\xc0\x10\x28\x6d\x74\xdb\x6b\x38\xef\x0d\xc0\x10\xce\xe9\xf5\x55\x86\xda\x66\x49\x89\x25\x21\x65\x43\x4a\x4a\x52\xee\x7b\xef\x73\x1f\x03\xa4\x3f\xfe\xea\xd8\x19\x02\x0c\x04\xc2\x6d\x83\x21\xc0\x54\xdb\x96\x30\x84\x33\x50\xc2\xbb\x99\xc0\x90\x60\x21\x11\x80\xe5\xdd\x4a\x4a\x10\xce\xc3\xa2\x3a\xee\x7e\xed\x80\xab\x7c\x29\x43\xbf\x0f\x83\xec\x0f\x96\xf7\x87\x3b\x8c\xc2\xeb\x14\x01\xa8\x86\x72\xb5\x37\xc9\x54\x30\xad\x30\xb8\xee\x20\x82\x0c\xda\x87\x40\x0a\x30\xa4\x0e\x96\xff\x78\x11\x4c\x48\xbe\xa0\xcd\x12\x89\x54\xda\x27\x90\x58\x42\x04\x33\xca\x7f\x1f\x80\xab\x4e\x74\xe9\xfe\x21\x23\x1e\x16\xfe\xad\xfa\x6c\xbd\x1d\xc2\x27\x8e\x60\xf2\x0a\x17\x78\x88\x25\x0c\x90\x08\x9f\x02\x2c\x29\x31\x24\x18\xd2\x19\x64\x61\x7b\x40\x2b\x20\x03\x22\x82\x2a\xfd\x7e\x85\x3b\xe1\x37\x56\xf9\xc3\x9f\xd5\x22\xbd\xf3\x52\x22\x8c\x80\x56\x25\x02\xd3\x05\xce\xbb\xce\x07\x58\x99\xdd\x1a\xd7\x51\x66\xbc\xd7\x16\x21\x1d\x10\x0c\x9c\x3e\x0b\xe1\x7d\xa6\x0c\xdc\x20\x13\x32\x8d\x63\xb9\x83\x69\xba\x9d\xf1\xbf\xb7\x65\x40\xb2\x6e\xdb\x7d\xca\x57\x26\x8f\x37\x1e\x83\x4d\x7c\xf5\x59\xd2\x1f\xa7\xe0\x57\x0e\xe7\x11\x3e\x07\xf2\x38\x85\xa1\x80\x6b\x08\x88\x65\x18\xce\x65\x01\x4b\x10\xa4\x24\x18\x2e\xb0\xc2\x70\x81\xb5\x9d\x41\x91\x52\xa6\xb1\x33\x3b\xa2\x75\xd2\xed\x49\xe4\xac\x55\x26\xb8\x70\x39\x85\x0d\x98\x08\xe7\x7b\xa1\x0f\x94\x74\x29\xd9\x08\x01\x60\x4b\xe9\x83\x1c\x06\x56\x05\xd7\x7b\xa6\x37\xd3\x3d\x48\x0c\x77\xe0\xc2\x94\x81\xd6\x66\x99\xc6\x85\xfc\x25\x45\x0a\x6c\x01\x52\x48\xa4\x11\xdc\x41\xda\xd2\x7f\x98\x07\xa2\x21\x1c\x82\x51\xc7\x2c\x6d\x6c\x22\x26\x91\xf7\x87\x90\xce\x3d\x02\xb8\x83\xe5\xc5\x0c\x81\x6b\x2a\x80\xc7\x32\x4c\x8f\x9d\x39\xc0\x7a\x14\x9c\x22\x00\xd6\x7b\x82\xda\x20\xe1\x51\x91\x74\x3b\xa1\xb2\x65\x75\x90\xc2\x1d\x09\x4d\x06\x95\x1a\xa4\x94\x18\x42\x04\xbf\xb5\x9d\x81\x93\x12\x0c\x11\x4c\x2c\xef\x16\xbe\x5c\xe0\x02\x6b\x29\xc0\xaa\xf2\x42\x3a\x35\xe8\x40\x87\x01\x8e\x6a\x7f\x1a\xc0\x12\x84\x90\x98\xb8\x13\xdf\x96\x01\x29\x79\x13\xc3\x76\xe4\x09\x6f\x92\x7a\x2c\x35\x4c\x00\x47\x3b\xfc\xe5\xc0\x05\xd9\x13\x24\x34\x59\xc2\x93\x5f\x08\x28\xd7\x74\x01\x8f\x25\x0c\xe1\xcf\x78\x43\x4a\x52\x52\x20\x6c\x10\x86\x04\x5b\x01\xd6\x65\xcd\x86\x4
b\xc5\x76\x44\x63\x54\x8a\x19\xfc\xbd\xc2\x66\x64\xc0\x62\x54\xb6\x85\x3b\x68\x1e\x35\xd8\x2e\xb5\x78\xc2\x5b\x70\x4f\xa9\x0b\x53\x8a\xc0\x15\x9e\x74\x01\xc0\x42\x01\x17\x0c\xc3\x70\x26\x95\xd0\x29\xd0\xb6\x6d\x6c\x29\xd3\xfa\xe5\xdf\xcb\x5d\xc7\xa5\x4b\xc1\x3e\xb0\x2e\xe7\x09\xae\x74\xde\x1b\x1e\x20\xa4\xdf\x73\xd0\xc9\x2f\x82\xc9\xa4\xce\xc2\x40\xc8\x73\x7e\xa3\x0a\x55\x1a\xab\xc6\x5d\x83\x3d\x16\x67\x48\xe1\xac\xb9\x06\x60\x0b\xa4\x21\xfd\xd9\x29\x0d\x81\x6d\x3b\x83\x26\x85\xc0\x70\x07\x57\x5d\x57\xc2\x6c\xd1\x26\x00\x55\x1d\xa0\xa0\xbd\xd2\x6f\xac\x2d\x83\xb5\x43\x02\xa6\x0c\x06\xca\x70\xa9\x45\x28\x33\x5e\x46\x81\xea\x4e\xbc\x30\xe5\x79\x83\x62\x78\x13\x48\x08\x4c\xc3\xc4\x4a\x0d\xd0\x5e\x7f\x88\xb6\x43\xfb\xe8\x3a\xd2\x40\xaa\xbf\x8f\x58\x46\x16\x85\xc3\x46\x51\x30\xbc\x9a\x78\x56\xb6\x0b\xb4\xad\xb5\xdd\x1b\x74\x3b\xc4\x6d\xc2\x20\xeb\xb4\xeb\x5e\x9b\x36\xa1\x3d\x61\x4b\x86\xc6\x46\x69\xb7\x08\xfa\xe3\x0b\x9c\x0a\xe5\x7a\x42\x95\xf7\xb7\x27\x70\x09\x01\xb1\xb8\x10\x2e\x85\xc8\x80\x05\xd8\x12\x69\x80\xb4\xbd\x19\xea\x0e\xa2\x10\xd8\xc2\x59\xbb\x0c\xb7\x0f\xe1\xf5\x57\x05\xd7\x5b\x23\x55\x30\xd4\x41\x52\x67\xa3\x10\xee\xe4\x51\x06\xc3\xa1\x5e\xb7\xf1\x32\xfd\x19\xe1\x75\xd8\x93\xaa\xa3\x58\x9f\x37\x48\xa6\x61\xd0\xdf\xd7\xc7\x81\xd5\xcb\xd9\xfc\xc6\x73\x1c\x5c\xf7\x09\x9d\x8d\x75\xa4\xfa\x7a\x91\xb6\x8d\x30\x4c\x32\xf3\x0b\xa9\x98\x38\x9d\xf1\x67\x5c\x4c\xf5\xfc\x45\x64\x16\x14\x63\x49\x2b\x8d\xdd\x1b\x0a\x1c\xc2\x9d\x75\xc2\x7f\x1f\xfc\xd6\x13\xc4\x7c\x6d\x41\x1d\xaf\x88\xb5\x38\xea\xf0\x08\xc9\x9b\x20\xfe\xd8\xb9\xaa\xa3\xaa\x6e\x0a\x11\x70\x46\xf1\x61\x63\x9f\xf4\xf4\x2d\x4b\xc2\x80\x2d\x19\xf0\x5e\x6d\xe8\xf7\x5e\xa5\xf3\xea\x7d\xef\xeb\x6d\x36\xa4\xa4\xab\x3f\x4b\x42\x3a\xa8\x2e\xf0\xa8\xe0\xf8\x8d\x46\x5f\x4f\x3c\xbd\xce\x54\xd8\x8d\x69\x18\x08\x21\xb0\x53\x29\x70\xd9\xa9\xa7\x03\x87\xd7\x5b\x55\x32\x55\x29\xc1\x10\x02\xd3\x30\xe8\xac\xdb\xcf\xca\xbf\xdf\xcb\xd6\xb7\x9e\xa7\xbf\xbb\x8b\xa3\x1d\x86\x69\x52\x35\x73\x1e\xc7\xdd\xf8\x43\x86\xce\x9c\xe7\xaf\x9b\xde\x8d\x1d\xbd\x5d\xd7\x7f\x3d\xdd\x3d\xe6\xbe\xf7\x74\x78\x53\xa1\xb6\xf0\x7a\xef\x4d\x52\x15\x07\x4f\x9f\x8f\x12\xf0\xbc\xb1\x33\x48\x17\xac\xbc\xb5\xd7\x1b\xbf\x98\x29\x84\xcf\x2a\xc1\xa5\x58\x85\x5a\x63\x42\x60\x1b\x92\x98\xed\xfc\x6d\x0b\xe1\x3c\x5c\x04\x54\x6c\xb8\x0d\x51\x75\x60\x6f\x0d\x0c\xb3\xa2\xb0\xda\xe2\x37\x5a\xa5\x52\xe1\xd2\xb8\x61\x62\x5b\x16\x47\xf6\x6c\x67\xf7\x8a\xb7\xa9\xdb\xb8\x9a\xec\x92\x72\x66\x5c\x7d\x33\xc9\xb2\x4a\x67\xc9\x88\x50\x87\xd4\x09\xe4\xcf\x6c\x43\xd0\xbc\xaf\x96\xf7\x7e\x71\x33\xfb\x3f\x5b\xca\x7f\x73\xd8\x96\xc5\xbe\x4f\x97\x72\xa4\x76\x1b\x27\xde\xfa\x2b\xc6\x2d\xba\x44\x53\x05\x83\x49\x24\xb1\x42\xba\xbb\x2d\x1d\x5b\x82\xfa\x4b\x6f\x45\x0e\xf4\x68\x15\xe4\xf4\x65\xe5\x68\xc7\xd1\x8c\x33\xea\x11\x33\x85\xd7\x44\x19\xcc\x08\x24\xa6\x70\x64\x2c\xdb\x9d\x1d\xb6\x3b\x3b\xbc\xcf\x3c\x96\x6d\xe0\x48\xdd\xb6\x10\x88\x08\x15\xca\xef\x9c\xb6\x3e\x2b\x22\x88\x4b\xc6\xfe\x7a\x8e\x40\x18\x26\xfd\xbd\x5d\x34\x6d\x59\xcb\xce\xb7\x5f\x60\xf7\xb2\x37\x69\x3f\xbc\xcf\xbf\x57\xa2\xb0\x84\x19\x5f\xfd\x2e\x96\xbb\x36\x46\x09\x41\xea\x20\x18\x42\x30\xd0\xd1\xce\x8a\x3f\xfd\x24\x12\xdc\xec\x92\x21\x94\x4f\x9c\x4e\x56\x61\x09\x5d\x47\xea\xa9\xdf\xbc\x96\xee\xe6\x46\xff\xfb\xae\xa6\x3a\x96\xfe\xf6\xfb\xc4\x72\xf2\x18\x75\xd2\x22\xa4\xb4\x40\x8a\x80\x18\xc0\x95\x49\x84\x3b\x2e\x12\x4b\x7a\xac\x33\x30\x85\x48\xa5\x4d\xaa\x46\xa0\x2f\x6b\xce\xf8\x44\xc9\x2c\xda\x78\x2a\xf7\xd3\xee\x83\x4e\x4c\xb1\x40\x37\x14\x18\x52\x3a\x14\x29\xd1\x24\x32\x13\x81\x25\x24\xa6\x50\xa8\x18\x87\x8a\xa5\x0b\xbe\x27\x1c\x79\xda\x82\x
f0\x48\x32\xbc\x46\x87\x50\x90\x52\x3a\x42\x9b\xcb\x86\x7b\x5a\x8f\x70\x78\xd5\x32\x76\xbe\xf5\x2c\x87\x56\x2d\xa7\xaf\xa3\x35\xad\x73\xdd\xad\xcd\x0c\x58\x12\xdb\x0e\x26\x8c\xc6\x19\xdc\x57\xc3\xd3\x17\x4d\x83\x1d\xef\xbc\x44\xed\x07\xaf\xa7\xdd\x6b\xcc\x29\x67\x73\xc2\xd7\x6f\xa7\xa4\x7a\x02\x46\x2c\x4e\xaa\xbf\x9f\xc6\x5d\x5b\xf8\xf4\x91\x3f\xb2\xe3\x9d\x97\xfc\xf6\xf6\xb4\x1e\x61\xe5\xdf\x7e\x45\x49\xcd\x14\x72\xca\x86\x22\xa5\xed\x1b\x31\x24\x80\x61\x3a\x36\x03\x01\xb6\xb4\x5d\x6a\x0e\xe0\x90\x21\x0b\x5a\x18\x2c\x5f\x20\x0d\x99\x85\x3d\xe2\x50\x67\xad\xb7\xc6\x4b\xe1\xf5\x3d\x98\x6c\xde\x32\x65\xbb\x4b\x41\x2c\x6d\x81\x96\x2e\x15\xbb\x3a\xa9\x21\x1c\x0a\xd5\xa8\x57\xe0\xb0\x21\xe1\xdc\xdc\xf6\x54\x29\xe9\xfc\xde\xeb\xb4\x2d\xf5\x35\xc3\x61\xe3\x22\x0d\x64\x81\xa0\xe3\xe0\x1e\xf6\x2d\x7b\x83\xda\xc5\x2f\xd0\xb4\x75\x2d\xf6\x40\x7f\x1a\x18\xf1\xec\x5c\x2a\x66\x9e\xc8\x98\x73\xbe\x4c\x4a\x4a\x4d\x88\x0b\xaf\xbb\xc2\x55\xd4\x0d\x21\xe8\xeb\x68\x63\xfb\x1b\x4f\x23\x2d\x4b\xbb\x5f\xd5\xcc\x13\x38\xfd\x8e\xfb\xc8\x1d\x32\x14\x69\xa5\x1c\xe9\x3d\x91\xa0\x7c\xd2\x4c\x4e\xfb\xf1\x7d\x58\xfd\x7d\xec\xfa\xe0\x3f\xfe\xef\x1b\xb7\xac\x65\xe7\x92\x97\x39\xe6\xaa\x6f\xf9\x0f\xf4\x84\xab\x5d\x2b\x16\x53\xb7\xe1\x33\x86\x4e\x9c\xce\xe8\xd9\x27\x92\x9d\x97\x8f\x6d\x5b\xbe\xcd\x46\xb5\x6a\x79\xe3\xa1\x82\xe7\xad\xc5\xb6\xd2\x9f\xb4\x25\xcd\xfd\x4c\xb1\xa9\x04\x93\x48\xfb\x91\xf7\x81\x20\xc6\x20\x47\x00\xb6\x7b\xba\xe0\xf9\x54\x8c\xbe\x5e\x4b\xa4\xfb\xb7\x70\x2d\x4f\xce\x35\xd2\x95\xba\x55\xb0\x51\x40\x96\x52\x52\xfb\xf6\xb3\xac\xfd\xe7\x6f\x68\xdb\xb3\x3d\xba\x2d\xa6\x49\xcd\xb9\x57\x33\xe6\x9c\x2b\x29\x1a\x3b\x05\x33\x2b\x9b\x94\x65\x0f\x3a\x08\x3e\x0b\x14\x20\x0c\x83\xd6\x7d\xbb\x38\xb2\x63\x93\x76\xcf\x58\x46\x26\x33\xbf\xfc\x2d\x72\x86\x0c\xc5\xb6\x52\xda\xba\x28\xad\x14\xc9\x82\x12\xa6\x5f\x79\x13\xfb\x3f\x5d\x4a\x7f\x77\xa7\x7f\xdd\x9e\x65\x6f\x32\xf9\xc2\x6b\x89\x67\x25\xdd\xb5\xdd\xe0\xc8\x8e\x8d\xbc\xf5\xb3\x9b\xe9\xa8\x3b\x40\x2c\x23\x93\xd1\x73\x4f\xe1\x84\xaf\xdc\xcc\xe8\x59\x27\x60\xc4\xe3\xbe\x9a\x65\x28\x02\xa0\xa6\xf2\xa1\x03\x1a\x96\x29\x7c\xb9\xc6\xa5\x5e\x4b\x2a\xaa\x50\x1a\xba\xf8\xae\x03\xe9\x2e\x9f\x83\x8a\xe8\xaa\x42\xed\x49\x68\x9e\xd4\xe8\x78\x7c\x9c\xd7\x98\xfa\x99\xbb\x66\xfb\xe6\x32\x14\x43\xb8\x2a\xf4\x08\x81\x61\x9a\xb4\xef\xd9\xce\x67\xf7\xff\x64\x50\x70\x01\xa4\x65\x51\x38\x6e\x32\xe5\x33\x4f\xc4\xc8\x4c\x92\xb2\xed\xc0\x76\x8e\xea\xf5\x22\x4d\x92\x07\x41\xfb\x81\xdd\xf4\x77\xb6\x6b\xf7\x2c\x1c\x39\x8e\xca\x63\xe6\x22\x5d\x2a\xd3\xfa\x2c\x00\xdb\xa2\x74\xec\x24\xf2\x86\x8e\xd0\xae\x6b\xdd\xbb\x93\x9e\x23\x0d\x98\x86\xe1\x4b\xf8\xed\x87\xf6\xd1\xd9\x70\x18\x80\x54\x5f\x2f\xdb\x3f\x78\x93\xa7\xbf\x7d\x25\xaf\xdf\xfd\x3d\x1a\x76\xef\x08\x84\x2e\xe5\x4c\x85\x5e\x2d\x5b\xff\xdb\xf7\xe2\x49\x89\x85\x74\x35\x12\xc5\xe6\xae\xf4\x3b\xb8\xaf\xd4\xee\xe5\xd8\x36\x64\xb0\xb0\xa7\xb9\xa3\x44\xe0\x0c\xf0\xd4\x17\x0f\xe8\x98\xff\x2a\x5c\x95\xc0\x51\x0d\x54\xe0\xbd\xc9\x60\xb8\x2a\x84\x69\x9a\xc4\x62\x31\xe7\x7b\x02\xd5\x48\xda\xc1\x6a\x95\x91\x5b\x40\xe9\xc4\x19\x18\x31\x9d\xb9\xec\x79\xff\x35\x7a\xdb\x5a\xd2\x8c\x1b\xea\x40\x84\x75\x62\xef\xec\x6b\x6f\x45\xda\x3a\x7b\x2e\x19\x33\x91\xac\xfc\x42\x9f\x47\x8a\x88\x33\x91\x95\x24\x33\xaf\x40\xbb\x6e\xa0\xbb\x93\x81\x8e\x16\xc7\xb6\x8c\x33\x11\x2a\x27\xcf\x64\xec\xfc\x45\x18\xa6\xe9\xff\xae\xa7\xbd\x95\x95\x4f\xfd\x9d\xa7\x6e\xbd\x9a\xba\x1d\x9b\x91\x86\x91\x06\xac\x7f\xda\xde\x7b\x89\x65\x4b\x2c\x29\x9d\xf7\x72\x30\x10\xbd\xdf\xe9\x13\xc6\xbb\x87\xef\x76\x94\x8e\x4f\xc1\x19\x2c\x95\x55\x28\xe6\x34\xd5\xee\xa9\x53\xb2\xab\xef\xa9\x7a\xa0\x10\
xe9\x54\xee\xbe\x17\x96\xc5\xa1\xcf\x3e\x60\xe7\x9b\x4f\x93\xea\x6a\xc7\x30\x04\x42\xda\x14\x8d\x1c\xc7\x8c\xeb\xbf\x47\xe9\x84\xe9\x8c\x5d\x74\x19\xa7\xff\xe6\xdf\x9c\xfe\xeb\x7f\x51\x34\x76\xb2\x36\xb0\x4d\x9b\x3f\xe7\xc8\xb6\x75\x60\x18\x21\xdd\x57\x06\x42\x8a\xf2\xb7\xb7\x0c\x49\x40\xc4\xd2\x57\xa2\xdc\xf2\xa1\x18\xb1\x58\x9a\xba\xa1\xca\x23\x58\x29\xac\xfe\xbe\x10\x3b\x09\x0c\xfc\x8e\x6e\x2b\xc9\x2f\xab\x60\xc1\xad\x3f\x27\xaf\xbc\x2a\xed\x39\x87\xb7\xae\x67\xe7\x27\x1f\x20\x31\xfc\x09\x99\x72\x41\x4a\xd9\xce\x39\xe0\x81\x62\x07\x40\x79\x94\x98\xb2\x3d\xff\x70\xd4\x29\x7d\x40\x7d\xa0\x95\xc9\x92\xb2\x25\x31\xcf\xe2\xa4\xb2\x39\x4f\x27\x53\x05\x09\xcf\xf6\x6a\xe2\xac\xab\xbe\x91\xdd\x13\xaa\x0c\xd7\xa4\x19\x5a\x8f\xbd\x9b\xac\x7f\xe9\x51\x3e\xbc\xff\x2e\x06\xba\x3b\x39\xe6\xea\x5b\x98\xf3\x8d\x9f\x62\x98\x26\xd2\x34\x98\x72\xc9\xd7\xa8\x39\xeb\x32\x62\x59\x49\xcc\x78\x06\x00\xe3\xcf\xbd\x8a\x15\x5b\xd7\xf9\x14\x36\xd0\xd5\xc1\xbe\xe5\x6f\x52\x72\xcc\xf1\x9a\x74\x99\xb6\xc4\x08\x43\x71\x35\x3a\x47\x66\x61\x09\x46\x2c\x8e\x9d\x1a\xf0\x3f\x8b\x67\xe5\xf8\x2a\x9a\xca\xad\xfc\xfe\x1a\x06\x7d\x6d\x2d\x74\x35\xd5\x6b\xf7\x4a\x24\xb3\x49\xe6\xe5\x3b\x56\x2c\x85\x43\xed\xfd\x74\x29\xed\x75\x07\x89\x3a\x84\x61\xba\xce\x10\x19\xdd\xf6\x41\xf4\xf8\xb4\x89\xa5\x18\x86\x20\x90\x91\x1c\xb3\xa9\x74\x4d\xba\xc2\xb7\x34\x9a\x02\x0c\x4b\x86\x7d\xa9\xba\xd7\xc8\x53\xb6\x02\x33\x58\x88\x92\xb5\x75\x38\xfd\x8c\xc5\x62\x58\x3d\x5d\x6c\x7d\xed\x49\xfa\x3b\xdb\x91\xb6\xcd\xc6\x67\x1f\x62\xf7\xfb\xaf\x12\x33\x4d\xff\x1e\x59\xb9\xf9\xc4\x63\x31\x84\xb4\x30\x90\x54\x2f\x38\x9f\x92\x10\x15\x1f\xfc\xe8\x1d\x7a\x9a\xea\x10\xc2\x08\x7a\xad\x81\x2b\xe8\x3a\xbc\x97\x8d\x0f\xdd\xcd\xda\x07\xee\xa4\xe3\xc0\x2e\x00\xb2\xcb\x2a\x89\x67\xe7\xe8\xe3\x65\xa7\xf0\xa6\x60\xda\x52\xe4\xae\xad\x47\x76\x6d\x4e\x03\x38\x7f\xe8\x08\x72\x4b\x86\x04\xd1\x27\x86\x41\xc7\xe1\xfd\x7c\xfe\xec\xc3\xd8\x56\x2a\x0d\x17\x33\x9e\x20\xbf\x72\xb8\x62\xdd\x33\xb0\x85\x89\x6d\x98\xa4\xa4\x20\x65\xe3\x50\xb0\x47\xcd\xb6\x54\xd8\x6c\xe8\x74\x7f\xa3\x51\x6c\x88\xcd\xab\xec\xdb\x67\xd1\x51\xa1\x39\xe1\xf5\x58\x65\x5f\xaa\xe7\xc2\x24\x42\xf8\x12\x82\x98\x69\x60\x22\x69\xd8\xbc\x86\x8f\xfe\xfa\x4b\x9a\x15\x21\x6a\xa0\xa7\x8b\xcf\x1e\xbc\x87\xd6\xdd\x5b\x5d\x90\xdd\x59\x87\xc0\xc4\x61\xdd\x79\x43\x2a\x19\x7f\xee\x55\xda\xf3\xdb\xf7\xed\xa0\x61\xed\x47\x18\xa6\xe1\xdb\x5c\xd5\x19\x0d\x82\x5d\x2f\x3e\xcc\xa6\x87\xef\x65\xcb\xe3\x7f\x64\xdb\xb3\x0f\x62\x5b\x36\xc9\xd2\x4a\xb2\x4b\x2b\xb5\x7b\xf5\x34\x37\x81\xbb\xf6\xab\xe0\xfa\xd2\xb4\x95\x62\xf7\x87\x4b\xb0\x42\xea\xda\xc8\x63\x4f\x24\x99\x9b\xe7\x1b\x83\x0c\x01\x1b\x5e\x7b\x8a\xfa\xed\x1b\x23\xa9\x37\x23\x3b\x87\xdc\x21\x43\xb1\x6c\x89\x2d\x0c\x1a\x6a\xb7\xf1\xfe\x5f\x7e\xc1\xda\x57\x9e\xa0\xa7\xab\x93\x14\x1e\xc8\xce\x99\x1a\xe4\xd4\xd6\x6f\x1b\x0d\x74\x6d\x2d\x56\xd7\x73\x29\x31\x82\x45\x3c\x1a\xdc\xb0\xdb\x4d\x35\xff\x05\xeb\xb2\x50\xec\xb2\x10\x33\x4d\x3a\x1b\x0e\xf1\xde\x9f\xee\xe2\x99\x6f\x5c\xc4\x67\x8f\x3f\x40\x5f\x48\x8a\x6d\xde\xbd\x8d\x95\x0f\xde\x83\xd5\xdd\x81\x69\x38\x82\x9a\x69\x04\x82\x9b\x09\x8c\x3b\xfd\x02\x8a\xc7\x4c\xf4\xaf\xb1\x53\x29\xf6\x7f\xf0\x3a\xb2\xbf\x57\x53\x13\x82\x53\x92\xea\x09\x54\x9a\x96\x5d\x9b\x48\xf5\x76\x93\x99\x5f\x44\x71\xcd\x34\xed\xf9\x47\x76\x6f\x63\xa0\xb7\x27\x98\x24\x0a\xfb\x33\x4d\x83\xd6\xfd\xbb\xd8\xb9\x7c\xb1\x76\x4d\xb2\xb0\x84\xf1\x0b\xce\xf1\x6d\xbd\xa6\x61\xd0\xb2\x77\x27\x6b\x5e\xfa\x17\x83\x1d\x59\x05\xc5\x64\x15\x96\x60\x49\x49\x57\x5b\x2b\x8b\x7f\xfd\x7d\x96\xff\xf5\x1e\xfe\xf3\x93\x9b\x58\xf2\x9b\x1f\xd0\xd1\x72\x84\x94\x30\x18\xb0\xf1\x29\x58\x3d
\x1d\xaa\x4d\x07\x3b\xbc\x16\x7b\x12\xb4\x13\xe7\xa5\x78\x08\x7d\xa7\x40\x08\x5c\xcd\x1b\x14\x45\xc9\x9e\xc7\x42\x84\x04\x30\xd3\xe4\xc0\xda\x95\x3c\xfd\xbf\x57\xb3\xe2\xe1\x3f\xd0\x19\x62\x71\xea\xb1\x73\xc9\xcb\x6c\x79\xed\xc9\xc0\x28\x4f\x10\x9d\x20\xb0\xc9\xaf\x18\xc6\xc4\x73\xaf\xd4\xae\xa9\x5b\xf3\x21\x1d\xfb\x76\x6a\x54\xec\x4f\x38\xc3\x24\xb7\xaa\xda\xff\x6d\x7f\x5b\x33\x56\x6f\x37\x46\x3c\x41\xd5\x71\x0b\x10\x8a\x94\xdb\xb8\x6d\x3d\x4d\x3b\x36\x61\x98\x66\xc8\x1b\x23\xc0\xb6\x58\xf3\xfc\xa3\xb4\x1e\xdc\xa3\x3d\x7b\xfc\x82\xb3\xa9\x9c\x30\x15\x43\x5a\xee\xda\x67\xb3\xe6\xc5\xc7\x68\xde\x57\xeb\xff\x26\x91\xcc\x26\x91\xcc\xf6\xff\xce\x29\x2d\x27\x91\x93\x8f\x04\xba\xdb\x9a\x69\xdc\xb9\xd9\x99\xac\x56\x8a\x0d\x2f\x3d\xc6\x07\xbf\xbf\x9d\xce\x96\x23\x58\x42\x1c\x85\x6a\x3d\x4a\x95\x1a\x98\xaa\x9a\x68\x29\xec\x59\x95\xba\x0d\x35\x12\x42\x8d\x50\x54\x75\x2e\x48\xd7\x95\xa3\xa8\x38\x66\x9a\xec\xfd\xfc\x63\x9e\xfa\xc1\xd7\xd8\xfb\xf9\x27\xda\xe0\x24\x92\xd9\x64\xe6\xe6\x6b\x82\x8d\x6d\x59\xec\x7c\xf7\x55\xec\xbe\x6e\xcd\x03\xe2\xe9\xcf\x02\x49\xcd\xc2\x8b\x29\x1e\x3d\xde\xbf\xa6\xa7\xa9\x8e\x83\x1f\xbf\xe3\x73\x8d\x20\x54\x45\x20\x84\x20\x7f\x54\x0d\x46\xc2\x11\xd4\xfa\x3b\xdb\x5d\x8a\x96\x54\xce\x3a\x91\xc2\x51\xca\x7d\x5a\x9b\x59\xfd\xef\xbf\xd0\xdb\xde\x8a\x19\x8b\x21\x0c\x03\xc3\x34\x31\x0d\xc1\xe6\xb7\x5e\xe4\xf3\xe7\x1f\xd1\xda\x9f\x5f\x5e\xc5\x9c\x2b\xff\x87\x78\x22\xe1\x38\xd2\x4d\x93\x86\xed\x1b\x59\xf7\xda\xd3\xc1\x98\x18\x06\x93\x17\x5e\xe8\xa8\x5f\xde\x75\x15\xc3\x30\x33\x32\xb1\x2c\x9b\xec\xb2\xa1\x8c\x5f\x74\x29\xc2\x08\x9c\x86\x5b\x5f\x7b\x92\x15\xbf\xbf\x9d\xae\x96\x66\x2c\x61\xa4\xe9\xf4\x83\x9f\x9e\xba\x14\x02\x1a\x3d\x22\xd3\xd0\x6e\x86\x27\x51\x07\xe1\xa8\x9e\x4a\x12\x15\x02\x13\x50\xb3\xab\xf0\x37\x1c\xe2\x95\xdf\xdc\x41\xe3\x9e\x9d\x01\x8b\xca\x2b\xe0\x84\xaf\x7c\x8b\xab\xff\xf6\x22\x5f\x79\xf8\x75\xce\xfe\xe9\x7d\x0c\x9d\x32\x13\x21\x0c\x32\x73\xf3\x19\x7f\xda\x79\x24\x32\xb2\xf4\xa8\x04\x17\x38\x61\xdb\xe4\x55\x0e\x63\x42\x88\x8a\xf7\x2f\x7f\x93\x54\x47\xab\x2b\xe0\x09\xcc\x98\x89\x69\x9a\xc8\xbe\x1e\xc7\xfe\x9a\xc8\x04\x20\xd5\xdd\x49\xaa\xb3\x1d\x21\x21\xa7\xac\x92\xf1\xe7\x5d\x15\xf0\x62\x60\xeb\xe2\x97\x79\xfb\x17\xb7\x38\x12\xf0\xe1\xfd\x1c\xa9\xdd\xca\x8a\x87\x7f\xcf\x9b\xf7\x7c\x8f\xde\x8e\x36\xff\x77\x46\x2c\xc6\x89\x5f\xfd\x0e\x55\x13\xa6\x61\xd8\x36\xa6\x21\xc0\x4a\xb1\xea\xd9\x87\x69\xaf\x3f\xe4\xff\x6e\xf4\xec\x93\xa8\x99\x7f\x26\x3d\x6d\x2d\xfe\x67\x05\xc3\x46\x23\xcc\x18\xb6\x94\x88\x58\x9c\xd9\x37\xfe\x90\x29\x5f\xba\x5e\xeb\xcf\xb6\xd7\x9f\xe4\xe3\x3f\xfe\x88\xee\xd6\x66\x24\x3a\xc8\xaa\x0a\x98\x4e\x84\xf8\xd2\x79\xfa\xa4\x70\x3e\x8f\x0d\x16\x7b\x1b\x0e\x25\x51\x71\xd5\x30\x76\xc5\x74\x81\x64\xf9\x93\x0f\xb3\x6b\xf5\xc7\xfe\x57\x79\x65\x15\x5c\xf0\xe3\xdf\x31\xf1\xb4\xf3\xc1\x8c\x61\x49\xc9\xb0\xa9\xb3\x18\x7f\xca\xd9\xec\xf9\x6c\x39\xd9\xc5\x43\xa8\x9a\x71\x3c\xc2\x50\xed\xaf\x8e\x29\x14\x57\x05\xb0\x81\x9a\x85\x17\xb1\xe9\x95\xc7\x69\xd9\xb3\x03\x80\x23\x5b\xd7\xd1\xbc\x75\x1d\xe5\xc7\xce\xa7\xb7\xa9\x9e\x96\xdd\x5b\x69\x58\xbf\x92\xc6\x75\x9f\xd0\xb2\x73\x23\x03\x5d\xce\x7a\x9f\xea\xeb\xa1\xbf\xbd\xc5\xd7\xb9\x27\x9c\x73\x25\xfb\x3f\x5a\xc2\xfe\x4f\xde\x73\xfa\x61\x5b\x6c\x7e\xf3\x79\x76\x2e\x7b\x8b\x64\x61\x09\x56\x5f\x2f\x9d\x4d\xf5\x69\xb6\xf2\xd9\x97\x5c\xc7\xec\x4b\xae\xc5\x74\x4d\xb3\xa6\x69\xb2\x7b\xed\x4a\x36\xbc\xf5\x92\xff\x9b\x64\x7e\x21\xa7\xde\xf4\x43\x8e\x1c\xd8\xe3\xfb\x99\x85\x10\x14\x0c\x1d\xe1\xdb\xa0\x6d\x04\x56\x2a\xa5\x99\x3e\xbd\x63\xfb\xeb\x4f\x80\x69\x32\xe7\xd6\x7b\x88\x27\x73\x34\x93\xa6\xa7\x6c\xaa\xf6\x6b\xdd\x77\xe0\x2
x35\xd7\x63\xcf\x1e\xb8\x11\xec\x59\xa6\x4d\xe3\x80\x30\x50\x28\x53\xa5\xde\x28\xfc\x55\xee\x18\x06\x57\xdd\x0f\x15\xf3\x33\xbc\x08\xbd\x12\xa7\x07\x5a\xda\xfa\xea\x01\x3b\x08\xd5\x6a\xef\x15\x93\xa7\x3a\x1b\xb5\x3a\x80\xae\xd0\x65\xba\x1d\x37\x10\x81\xf0\x21\x1c\x4a\xf7\xe2\xae\xc3\xbe\xde\xf0\xac\xf6\x26\x63\x78\x0f\x90\x49\x7a\x21\x4a\xb5\x5d\x61\x4b\xb6\x5a\x91\xed\xa8\x6b\xaf\x0b\x6e\x4a\x95\x9e\xfd\x35\x59\x31\x47\xda\x81\x87\x27\x1d\x5c\x1c\x1f\xb4\x5b\xbb\x31\x5c\x04\x33\xea\x50\x39\xe3\x60\x94\xeb\x17\x94\x56\x55\x03\x55\x3d\x08\x83\x2a\x08\xff\x9d\x5e\x64\x2a\x52\xc2\x0e\xad\x65\x0e\x07\x74\x87\xd4\x0b\x12\x50\x06\xd5\xdf\xbd\x0e\x7e\x72\x70\xdb\xbd\x46\x05\x3e\xbd\xc3\x3a\x07\x09\x00\x0e\x95\x90\x55\xfa\xe8\xb7\x2d\x44\x35\x42\x04\x6d\x55\x77\x14\x58\x0a\xb8\x29\x25\x74\xc6\x8b\xac\x50\xfd\xb7\xaa\xa1\x43\x35\x66\x84\x01\x0e\xea\x16\x4a\xd7\x6a\x16\xce\xdb\xf7\xdf\x81\x6b\xb8\x93\xd8\x0c\xd5\x85\x8c\x79\x42\x47\x58\x62\x8e\x02\xd8\x9b\xed\x51\xdf\x85\xc1\x0d\xb3\x69\x95\x82\xfd\x0c\x30\xde\x29\x95\xdf\x85\x66\xb0\xf7\x1b\xe9\x82\x88\x0c\xd2\x6d\x0a\x97\x74\x07\x9b\x64\xe1\x2c\x3b\x51\x6d\x1b\x74\x5b\x89\x8c\x02\x59\xf7\xec\x04\xd2\xbf\x27\x20\xca\xb4\xcf\x75\xf6\x1c\xb1\xf6\x8a\x80\x5b\x38\x11\xaa\x6e\xba\x89\xd0\x12\x34\x18\xb8\xce\x24\x51\x88\x94\x80\x4d\xc7\x04\xfc\x7f\x12\x0b\x95\xdb\xce\x10\x29\x8f\x00\x00\x00\x19\x74\x45\x58\x74\x43\x6f\x6d\x6d\x65\x6e\x74\x00\x43\x72\x65\x61\x74\x65\x64\x20\x77\x69\x74\x68\x20\x47\x49\x4d\x50\x57\x81\x0e\x17\x00\x00\x00\x25\x74\x45\x58\x74\x64\x61\x74\x65\x3a\x63\x72\x65\x61\x74\x65\x00\x32\x30\x31\x39\x2d\x30\x33\x2d\x31\x31\x54\x31\x30\x3a\x30\x37\x3a\x31\x38\x2d\x30\x35\x3a\x30\x30\x3f\x9b\x66\xad\x00\x00\x00\x25\x74\x45\x58\x74\x64\x61\x74\x65\x3a\x6d\x6f\x64\x69\x66\x79\x00\x32\x30\x31\x39\x2d\x30\x33\x2d\x31\x31\x54\x31\x30\x3a\x30\x37\x3a\x31\x38\x2d\x30\x35\x3a\x30\x30\x4e\xc6\xde\x11\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82"), }, "/rclone-48x48.png": &vfsgen۰FileInfo{ name: "rclone-48x48.png", modTime: time.Date(2019, 5, 27, 15, 4, 10, 118821186, time.UTC), content: 
[]byte("\x89\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\x00\x00\x30\x00\x00\x00\x30\x08\x06\x00\x00\x00\x57\x02\xf9\x87\x00\x00\x00\x04\x67\x41\x4d\x41\x00\x00\xb1\x8f\x0b\xfc\x61\x05\x00\x00\x00\x20\x63\x48\x52\x4d\x00\x00\x7a\x26\x00\x00\x80\x84\x00\x00\xfa\x00\x00\x00\x80\xe8\x00\x00\x75\x30\x00\x00\xea\x60\x00\x00\x3a\x98\x00\x00\x17\x70\x9c\xba\x51\x3c\x00\x00\x00\x06\x62\x4b\x47\x44\x00\xff\x00\xff\x00\xff\xa0\xbd\xa7\x93\x00\x00\x00\x09\x70\x48\x59\x73\x00\x00\x0b\x13\x00\x00\x0b\x13\x01\x00\x9a\x9c\x18\x00\x00\x00\x07\x74\x49\x4d\x45\x07\xe3\x03\x0b\x05\x07\x12\x76\x96\x1b\x39\x00\x00\x15\x39\x49\x44\x41\x54\x68\xde\x6d\x9a\x79\x78\x54\xd5\xdd\xc7\x3f\xe7\xdc\x7b\x67\xcb\x9e\x10\x92\x10\x48\xc2\x8e\x06\x35\xb2\x28\x9b\x68\x5d\x00\x8b\xb8\x61\x71\xa3\xf6\xd5\x56\x5f\xb5\xae\xd5\x5a\x5b\xb5\x5a\xeb\x6e\xab\x52\x95\xd6\x22\xad\x88\x4a\x45\x2d\x95\x4a\x5b\x51\xc4\x6a\x6b\x40\x04\xc1\x40\x58\xc3\x92\x04\x08\x84\x84\xac\xb3\xde\x7b\xce\xfb\xc7\xbd\x77\x66\x68\xdf\x79\x9e\xfb\xcc\x3c\x33\xf3\x9c\x7b\x7e\xdb\xf7\xf7\xfd\x7d\xcf\x15\xaf\xec\xe9\xd7\x29\x0d\x09\x47\x93\x50\x90\x54\x1a\x5b\x81\x06\x04\x20\x04\x00\x28\xed\x7e\x07\x60\x08\xb0\x04\x84\x0c\x41\xd8\x10\x98\xd2\xfd\xde\x51\x90\xd4\xd9\x6b\x68\x24\x02\x43\x82\xf4\xd6\xd3\x64\xd6\x91\x02\x82\x52\xb8\xeb\x48\xb0\xa4\xc0\x10\xa0\x70\xd7\x88\x3b\x10\x75\x34\x71\x47\x93\x54\x1a\xa5\xdd\x35\x4c\x29\x08\x48\x08\x48\x81\x59\x60\x49\x92\x4a\x13\x13\x20\x1d\x77\x61\xad\x35\x29\x0d\x8e\x76\x3f\x6b\xed\x2e\xaa\x75\xc6\x00\x25\x45\x7a\x2b\xa6\x16\x68\xad\x71\x34\xa4\x14\xa4\xb4\x6b\x00\x80\x21\x35\x16\x06\x52\x4a\xd0\xa0\x94\x83\xa3\x14\xda\x5f\xc7\x5b\x53\x20\xd0\x68\x4c\x29\x40\xbb\xf7\xf2\x0d\x05\x90\x08\x84\x70\x8d\xb6\x04\x58\x42\x60\x09\x30\x0b\x2c\x41\x42\x81\x21\x04\xa0\x50\x1a\x6c\x0d\x29\x07\x1c\x6f\x53\xee\xe5\x79\x40\xb8\x8b\xd9\x5a\xa3\xb4\x40\x09\x81\x81\x44\x39\x0e\xb6\xd6\xa4\x54\xe6\xbf\x52\x0a\x64\xd2\xa1\x6d\x7f\x23\x9d\xfb\x77\x23\x0d\x83\xd2\x31\xa7\x91\x57\x51\x85\xd2\x1a\x01\xd8\x52\xa7\x1d\xa3\xb5\xc0\xd2\x1a\xe9\x19\xa6\xbd\x1f\xa4\x67\xac\x6f\xb4\x29\x05\x96\x17\x31\x33\xd7\x14\x98\x5e\xb8\x1d\x2d\x48\xf9\x5e\x14\x9a\x14\xee\xe6\x53\x5a\xe3\x28\xf7\xb3\x1b\x7a\x8d\x92\x06\x4a\x69\x3a\x9b\xf7\xd3\xdd\xba\x8f\xd2\x93\xc7\x61\x44\x72\xb1\xfd\x50\x4b\x81\xd3\xdd\xcd\xc6\xc5\xcf\xb2\x6d\xe5\x9b\xc4\x7b\xba\x40\x08\x0a\x06\x55\x31\xe1\x7f\xee\xe2\xa4\x39\xf3\x31\x4c\x03\x8d\x70\x0d\xf5\x16\x57\xb8\xce\xd4\xf8\x19\xe0\xe6\x9e\xc8\x32\xc4\xf0\x22\x21\x05\x98\x01\x43\xa0\x80\xa0\x84\xa4\x01\x21\x05\x29\xc3\x4f\x03\x81\x2d\xdc\xfc\x51\x7e\x1d\x08\x70\x6c\x9b\x63\x3b\xbf\x61\xff\x9a\x15\xec\xfb\x64\x25\xc9\x58\x94\x99\xbf\x7a\x8b\x92\xda\x09\x38\xb6\x0d\x42\x60\x00\x0d\xcb\x5f\x65\xc3\x92\x05\x94\x0c\x1b\xc3\xf4\xbb\x1e\x05\x0d\x1b\x96\xbe\xc8\xda\x67\xee\xc3\xc8\x2d\x64\xd4\x8c\xcb\x49\x26\xe3\x24\xbb\x7a\x30\x4a\x06\x60\x48\x89\x56\x1a\x43\xb8\xc6\x64\x47\xde\x4f\x27\xed\xa5\xb2\xf2\x2e\x33\xdb\x32\x4b\x40\x40\x42\x48\x82\x2d\x05\x8e\xe1\xa6\x90\x2d\x40\x09\x09\x52\x90\x4a\x24\xf8\xfa\xd5\x27\xd9\xf1\xee\x62\x12\x3d\xc7\x09\x16\x14\x33\xf5\xa7\x2f\x10\x19\x7a\x32\xd1\x94\x8d\xd6\xae\x67\x54\x6f\x17\x4d\x9f\xfc\x15\x21\x0d\xa6\xdc\xf2\x33\x6a\x67\x5f\x89\x06\x02\xf9\x05\xfc\xf5\x9e\xef\xb2\x73\xd5\x32\x46\x9d\x3b\x87\xc3\x0d\x5f\xb1\xf6\x99\x1f\x33\xfa\xac\x0b\x98\x72\xcd\x4d\x94\x54\x0c\x46\x78\xdb\x55\x5e\xfa\xfa\x06\x08\xe1\x45\x44\x6a\x50\x6e\xcd\x48\x3f\xc7\xdd\xd4\x10\x58\x52\x10\xf4\xd0\x25\x6c\xb8\x9f\x43\x96\x49\xfc\x68\x2b\x47\x36\x7e\x8e\xa1\x1d\x4c\xd3\x62\xcc\xc5\xf3\x19\x38\x76\x22\x76\x2c\x8a\x9
1\x93\x8f\x0e\x46\x48\x39\x5e\xf1\x0b\x49\x2a\x99\x20\xd1\xd7\x45\xa4\x78\x00\x65\x63\x4e\x43\xdb\x29\x84\x63\x53\x50\x56\x89\x19\x0a\x11\xeb\x38\x82\x48\xc5\x29\x2c\x1f\x44\xb2\xbf\x8f\xcf\x16\xfd\x9a\x0f\x5f\xf8\x05\xf1\xa4\x4d\x02\x83\x98\x87\x40\x31\x47\x13\x57\x9a\x84\xd2\x1e\x52\xba\xe8\x14\xf3\x7e\x93\x49\xa5\xdd\x1c\xf7\x8c\x30\xbd\x28\x04\x0d\x17\xde\x22\x96\x81\xd3\xd5\xce\xc6\x57\x9e\x60\xf5\x7d\xf3\x39\xb6\xa5\x9e\xc9\x37\xff\x8c\xe9\xf7\x3e\x45\xdd\xd5\x37\xa3\x52\x09\x0e\x7c\xf2\x3e\xca\x4e\xa1\x85\x20\xd1\x7d\x9c\x86\x57\x9f\x24\x7a\xe4\x20\xe1\xe2\x81\x08\x21\x90\x86\x81\x14\x02\x53\x4a\xba\xf6\xef\x22\x15\x8b\x52\x54\x59\x4d\x4e\x4e\x0e\x3d\xcd\x4d\x44\x3b\xdb\x01\x48\xc6\x63\x44\xe3\x31\xf6\x37\x7c\x4d\x77\x5f\x94\xa8\x23\xd2\x46\xc4\x3c\x38\x8d\x65\x6d\x3e\x63\x80\x72\x91\x47\x7b\xa1\x33\x84\x5b\xe5\x41\x43\xd2\xfa\xd5\xe7\x2c\xbf\x7d\x1e\xdb\x3e\xf8\x13\xf1\xae\x0e\xd6\x2f\x7c\x14\xa7\xab\x9d\x80\xd0\x0c\x3f\x6b\x06\x15\x75\x93\x38\xb8\x6e\x0d\xb1\x23\x2d\x98\xa6\x85\x4e\xc5\xd9\xf7\xc1\x9b\x74\xef\xdd\x4e\xd9\xa9\x93\x88\x75\x75\xd0\xb9\x77\x07\x96\x65\x91\xec\xe9\xa4\x71\xd5\xdb\x48\xc3\x60\xec\xcc\xcb\x90\xa9\x04\x1b\xde\x5c\x88\x9d\x88\x23\xa5\x41\x61\xd5\x70\xda\x5b\xf6\xb3\xfc\xae\x6b\xf8\xe4\xa5\x5f\xd2\xd5\xd7\xe7\x1a\x61\x9f\x68\x44\xdc\x8f\x8a\x03\x66\x52\x71\x02\xf4\x79\xf0\x8d\x69\x48\x3a\x9a\x76\xf2\xee\x43\xb7\x81\x61\x70\xf9\x2f\x5f\xa6\xbf\xab\x93\x63\x07\x9a\x08\x48\xb7\x5e\xf2\x8a\x4a\x38\x75\xee\xf5\x7c\xf4\xe8\xed\xb4\xad\xfb\x98\x41\x53\x2d\x3a\x36\xd7\xa3\x52\x49\x12\x9d\x47\x19\x35\x6b\x2e\x7b\x56\xbd\xc5\x3f\x5f\xf8\x39\x5d\xfb\x77\xd1\xb6\x6d\x13\x4d\x5f\xac\x61\xdc\xa5\xd7\x72\xea\xf9\x73\x68\xfc\x78\x25\x07\xbe\xfa\x17\xb5\x17\x5c\xcc\x9e\xfa\xb5\xe4\x57\x54\x93\x3b\x78\x18\x05\x55\xc3\xd9\xb4\xf4\x45\x52\x4a\x33\xee\xe6\x07\xb1\x42\x11\x84\xd6\x48\x0f\x01\x0d\x01\x8e\x16\x28\xa9\x31\xb3\x23\xe0\x36\x15\xbf\xde\x05\xeb\xff\xb2\x8c\xae\xa3\x87\xb9\x71\xe1\xdb\x0c\x9f\x76\x01\xc9\x94\x4d\xc2\xb6\x51\x48\x1c\x04\x0e\x82\xaa\x09\x67\x51\x50\x59\xc3\xe6\x45\x4f\xd2\xb0\xe4\x79\x62\xc7\x0e\x63\xe5\xe4\x11\xca\xcd\xa3\xfc\xa4\x3a\xce\xbe\xe7\x09\xd6\xfd\xee\x09\xd6\x2e\x78\x84\x40\x38\xc2\x99\xf3\x6e\x60\xc6\xed\x0f\xd0\xdb\xba\x8f\x2f\x5e\x7f\x99\x9a\x71\x93\x18\x3d\xf5\x3c\x76\x7e\xb6\x9a\xbc\x41\x43\xd8\xb1\x7a\x05\x6d\x0d\x1b\x08\xe4\x16\xd0\xf4\xe1\xbb\xd4\x9c\x7f\x39\x65\xa7\x4d\x42\xdb\x29\x94\xc8\xda\x1e\x1a\xad\x84\x1b\x81\xa4\x57\x7c\x7e\xab\x96\x02\x94\x9d\x60\x5f\xc3\x46\xf2\x4b\x4a\xa9\x1e\x53\x4b\x40\x3b\x20\x34\xc2\x90\xd8\x08\xf6\x6c\xf8\x37\xca\x08\xd0\xbe\x6f\x17\x3d\x87\x0e\x60\x45\x72\x19\x30\x6c\x0c\x55\xf3\x6f\xa3\x72\xfc\x34\x8a\x6b\x46\x62\xa1\xa9\xfd\xf6\x3c\x86\x4d\x98\x42\xff\xc1\xfd\xe4\x15\x14\x30\x64\x74\x2d\xdd\x87\x5a\x58\x72\xfb\x55\xb4\xef\x6f\xe2\xda\x67\x17\x73\xbc\xad\x15\x69\x5a\xb4\x7c\xf5\x6f\xbe\x5e\xfe\x2a\xb9\x83\xaa\x39\xe3\xce\xc7\x31\xc3\x39\x18\xc1\x10\x5d\x7b\xb6\x51\x3c\x74\xd4\x09\xd4\xc6\xa7\x24\x66\x52\x69\xe2\x1e\x07\xf2\xdb\xba\x29\xdd\x6e\x6b\x85\x22\x1c\x3f\x7c\x90\xa6\x0d\xff\x62\xe2\x45\xf3\x10\xb8\x5d\xd2\x89\x27\xf8\xfc\x95\xa7\xb1\xf2\x8a\x38\xeb\xd6\x07\xb8\xe0\xa7\xbf\xa6\x7c\xec\x78\x0a\xab\x86\x63\x85\x73\x3c\x1e\xa0\x40\x6b\xa4\x10\x14\x0d\xaa\xa2\x7c\x48\x8d\x97\x7a\x9a\x82\xe2\x12\x4e\x9f\x71\x09\xff\x5e\xfe\x1a\x1d\x2d\xfb\x38\xd6\xbc\x97\x58\x4f\x17\x1b\x96\xbe\x48\x69\xed\x78\xa6\xfe\xe4\x39\x0a\x46\xd4\x22\xa5\x41\xfd\x33\xf7\x70\x78\xe3\xe7\x5c\xb4\xe0\x1d\xf2\xcb\x2a\x41\x6b\x84\xc8\x38\x5a\x26\x1d\x97\xc8\xc5\x1c\x4d\xbf\xa3\xdd\xaa\xb7\x35\x8e\x11\x60\xec\x39\x33\x71\xec\x14\x
cb\x9f\xb8\x9f\xf5\x2b\xde\x40\xc7\xa3\x58\x02\xf6\x7c\xbe\x9a\xe6\xaf\xd7\x33\x70\xd8\x28\x2a\x46\x8c\x61\xe2\xbc\x1b\x18\x72\xd2\xa9\xe4\x85\xc3\x84\x71\x08\x0b\x45\xc8\x43\x32\x53\x80\xd4\x0a\x94\x0d\xca\x01\xad\xc9\x29\x28\xe0\xe2\xbb\x7f\xce\x1d\x4b\x3e\xa0\x6e\xd6\x65\x1c\xdd\xbb\x0b\xb4\x62\xc8\xe4\xf3\x38\xf7\x17\xaf\x50\x3c\x72\x2c\x38\x0e\x87\xd7\x7f\x42\xdb\xa6\x7f\xd1\xb1\xab\x81\x96\xcf\xfe\x46\xc8\x94\x84\x0c\xb7\x4f\xb9\xef\x02\xb1\x70\x77\xbf\xf6\xa1\x2a\xa5\x40\x4a\x89\x25\x20\x6c\x19\xc8\x58\x2f\x7f\xfd\xf5\xc3\x7c\xfa\xd6\x22\xd0\x30\x62\xfc\x24\xf2\x4a\x06\xb2\xa3\xfe\x53\x8a\x87\xd4\x70\xf5\x73\x4b\xc9\xaf\xac\x46\x2b\x9d\x6e\xed\x5e\x84\xd3\x5d\xd4\xf1\x19\xa4\x87\x6c\x01\x8f\xc7\x48\x00\xc3\xe4\xf8\xf1\xe3\xfc\xf1\x8e\x6b\xb1\x8a\xcb\x99\xf2\xa3\x27\x30\x8b\xca\xe8\x3d\x72\x90\xed\xef\x2c\xa2\xf1\xdd\x45\xd8\xd1\x7e\xc2\x25\xa5\x14\x94\x0f\x66\xde\x8b\xcb\xc9\x2d\x2c\x42\xa2\xd3\x11\x10\xcf\xef\xec\xd7\x51\xdb\xf5\xbc\x8d\xe0\xe8\x8e\x6f\xd8\xf6\xfe\x52\x8a\x06\x94\x32\xfd\xca\xeb\x29\x2e\x2a\x62\xdb\xda\xbf\xb1\x7e\xe5\xdb\xb4\xec\xd8\x8a\x63\xdb\x0c\xab\x3b\x83\x19\xb7\xfe\x84\x01\x23\x6b\x49\x39\x8e\xb7\x98\x4b\x85\xf1\x5a\xbd\xed\x31\xda\x34\x2b\xf5\xfa\x8b\x25\x5d\x16\x29\x84\x6b\x64\xcc\xd1\x1c\x3e\x78\x90\xb8\x11\xa4\x75\x47\x03\xbd\x1d\xed\x6c\x7b\x67\x11\x47\x36\xd7\x23\xa4\x64\xda\xdd\x8f\x33\xe4\xf4\xc9\xac\xba\xef\x3a\x66\xde\xf3\x18\x75\xb3\xbf\x83\x54\x4e\x9a\x0f\x89\xa7\xb6\xf7\xe9\xa8\xad\x89\x29\xc1\xf1\xc3\xad\x7c\x70\xef\xb5\xb4\x6d\xdd\x08\xc0\xe4\x2b\xbe\xc7\x75\x4f\xbe\x4c\x7e\x4e\x84\x54\x6f\x37\x5d\x6d\x07\x09\x46\x22\x44\x8a\x4a\xc0\x0c\x92\x70\x14\x8e\x57\x38\xe2\x3f\xbc\xef\x92\x42\x7d\x82\x01\x96\xe7\x7d\x53\x80\x10\x02\x47\x7b\xa9\xab\x0c\x0e\x35\xef\x67\xd9\x8d\x73\xe8\x39\xb8\x9f\xa2\x11\x27\x53\x75\xd6\x85\xec\x5a\xb9\x94\xf2\x53\x26\x52\xf7\x9d\xef\xd3\xb8\xf2\x0d\x54\x7f\x0f\xf3\x17\xbc\x49\x24\x1c\xc2\xf4\x48\x9d\x99\x70\x34\x49\x05\x36\x82\x1d\xab\xff\xcc\xb1\xdd\xdb\x38\x79\xce\x35\x74\xec\xde\x4a\xe3\xe7\x1f\xb1\x7e\xe5\x72\x2c\x09\x2d\x0d\x9b\x38\xd6\xb2\x9f\x5b\x9e\x79\x99\xbc\x70\x08\xdb\x51\x58\x42\x60\x7b\x44\x0f\x8f\xaf\x38\x5e\x43\x14\x59\x88\xe6\x0f\x2f\x99\x4b\x78\x40\xed\xcd\x01\xca\x21\x32\xa0\x9c\x8a\xba\x49\xf4\xb6\xb5\x32\xe1\x8e\xc7\x08\x17\x97\xb2\x63\xc5\x6b\x34\xad\x79\x9f\xbc\xb2\x41\x8c\x3c\xef\x62\xd6\x3c\x7e\x37\xad\x5b\xbe\xa4\x76\xea\xb7\xb0\x70\xb0\x84\x40\xda\x1e\x17\x4a\x26\xe2\xb4\x7c\xf9\x29\x15\x75\x93\x39\xff\xe7\x2f\x71\xda\x15\x37\xd0\x7d\xe4\x10\x4b\xee\xfd\x01\x8b\x7e\xf4\x7d\x3e\x5e\xfa\x7b\x94\x63\x13\x0a\x06\x09\x08\x08\x1b\x82\x88\x01\x11\x53\x10\xf2\xf3\x3a\x2b\x85\x4e\x18\x46\x84\xdb\xdd\xa5\xf0\x72\x9f\xac\xff\x69\x8d\xd6\x1a\x23\x10\x64\xf4\xec\xab\x91\x86\xc1\xe1\x4d\xff\x86\x40\x98\x91\x17\x5f\xc7\xf9\xcf\xbd\x4d\xd5\x39\x73\xd8\xbc\xfc\x55\xfa\x3b\xdb\x69\xf8\xc7\x7b\x48\xed\x10\x10\x82\xa0\x04\xd3\xf1\xf8\x76\xb2\xe7\x38\xdd\x2d\x7b\xa9\x99\x36\x0b\xad\x35\x9d\xcd\x7b\x89\x14\x96\x70\xfa\x25\x57\x53\x3d\xe6\x14\xaa\x86\x0f\x67\xd8\x88\x91\xe4\x15\x14\x62\xa0\x91\x12\x94\x16\x48\xaf\xf9\x29\xe5\x4e\x65\x4a\x83\x93\x65\x84\xf0\x36\x2d\xd3\x3c\x5e\xa4\x47\x4b\xe5\x5d\x1a\x50\x8e\x43\xd9\x29\x13\xa9\x18\x3f\x8d\xa6\x0f\xde\x24\xd9\xd7\x43\xde\xa0\x2a\xaa\xa6\xcf\xa6\x73\xeb\x97\x24\xa3\xfd\x8c\xfa\xd6\x6c\xaa\xc7\x4d\x01\x8f\xf1\x9a\xd2\x45\x39\x0c\x29\x89\x1d\x39\x48\xac\xa3\x9d\x9d\xab\x96\xd1\xbe\x63\x33\x5d\xcd\x7b\xc8\x1d\x50\x46\xf5\xc4\xb3\xd0\xd1\x5e\x5a\x77\x6e\xa3\x76\xdc\x19\x99\x4d\x79\x39\xef\x93\x40\xad\xff\x9b\xbf\xfb\xde\x36\xbc\x1b\x99\x59\x85\xee\xe8\xec\x9e\x0f\x12\x4d\
x30\x92\xc3\x49\x97\x7c\x97\x75\xbf\x79\x98\xee\xdd\xdf\x50\x3c\x64\x28\xd2\x4e\x52\x51\x3b\x8e\x2b\x7e\xfb\x17\xf2\x73\x22\x14\x84\x83\x20\x32\xf1\x35\x0d\x01\x86\x21\xe9\x6b\x6d\x22\x19\xed\xa3\xb0\xb2\x9a\x9e\xd6\x7d\x38\x89\x38\x9d\x2d\x7b\x59\xfe\xa3\xeb\xb0\x53\x49\xaa\x6b\xeb\x98\x7e\xe9\x95\xe4\x86\x02\xe9\x8e\x9d\xf6\xa2\x87\x3a\xb6\xd2\xae\xf7\xfd\xfc\xf6\xd2\xc6\xf2\x0a\xd8\xf4\xe0\x53\x65\x19\xee\x1b\x60\x08\x81\xa9\x1c\x46\x4c\x9f\x45\xe5\x29\x13\x09\xe6\x17\x62\x85\x22\x80\xcb\x7d\x02\x81\x7c\x4c\x09\x4a\x29\x94\x14\x69\x67\x99\xa6\x74\x09\x52\x69\x65\x15\x67\x5c\xf9\x7d\x26\xce\xbf\x15\x69\x05\x89\x76\x1f\x27\xd6\xdd\x49\xec\x58\x1b\xf5\x4b\x5e\xa4\x7c\xc4\x18\xcc\x70\x0e\xb6\xd2\xe9\x91\x4f\x79\x48\x93\x52\xfe\x2c\x9c\x19\xd2\x7d\x98\x33\xfc\x01\x5c\xba\x11\x40\x78\xfd\x81\x0c\x6a\xb9\x73\x88\x6b\xb6\x15\x0a\x92\x37\xa8\xd2\xed\xfa\xe9\x71\xdf\x4b\x99\xac\xa8\xa7\x94\x46\x08\x81\x78\xaa\xb1\x4f\xa7\xa5\x13\xa5\xd1\x52\xba\xc4\x0e\x81\xc6\x0d\x7d\x6f\xf3\x6e\x72\x83\x16\x83\xab\xaa\x09\x19\x6e\xf1\xf8\x83\x77\x4a\x41\xdc\x1b\x34\x52\x2a\x93\xf7\xd9\x29\x63\x4a\xd7\x08\x4f\x70\xc0\x56\x90\x50\x19\x7a\xec\x32\x62\x9d\x61\xc2\x52\x62\x4a\x81\xd0\xea\x84\x31\x52\x64\xad\xed\xf7\x13\xd3\x90\x3e\x9c\x81\x36\x5c\x6c\xd6\x1a\xd0\xca\x5d\x50\x40\x69\xcd\x08\xc2\xde\x6f\x8e\xd6\xa4\x94\x48\x1b\x60\x6b\x9d\x2e\x44\x1f\x85\xcc\xac\x94\x31\x7c\x23\xfe\x9f\x9a\xc9\x46\x2b\xc3\x9b\xa3\x0d\x29\xe8\x39\xd4\x8c\xa1\x6c\x2a\x87\x8f\x72\x1b\x9e\xf2\xd2\x53\x6b\x1c\xc7\x8d\x9d\x5f\x43\xa6\xbf\xf9\x74\x38\xbd\x8d\x68\xdc\x01\x54\x20\x50\x4a\xe3\x48\x17\x5d\x6c\x05\x42\x6a\xa4\x16\x9e\x92\x91\xf1\x8e\x21\x32\xc3\x50\x36\x65\x70\x0b\x3e\xf3\x7f\x85\x76\x6b\xe6\x3f\x66\x5e\x29\x40\x2a\x9b\xd5\x0b\x1e\xe5\x68\x53\x23\x33\xbf\x7f\x27\x45\x65\x15\xe4\x14\x16\x13\xc8\x2f\x24\x1e\x4f\xb0\xe6\xb5\x97\xa8\xbb\xe8\x4a\x86\x8e\x9f\x82\xa3\x1c\xcc\x6c\xbc\xf6\x2b\x4a\x22\x50\x9e\x26\xa3\x81\x44\x2c\x4a\xb4\xaf\x9b\xa2\x92\x12\x64\x28\x08\x8a\xb4\x72\x90\xb9\xb9\xf0\x54\x33\x77\xf3\x41\x8f\x36\xf8\x68\xe5\xa6\x8e\x40\xe1\x4a\x34\xa9\x74\xfd\x90\xbe\x97\x14\x92\x54\x2c\xca\x91\xa6\xed\xb4\x36\x7e\xc3\x6b\x3f\xbd\x05\x21\x04\x56\x30\x44\x20\x9c\x83\x19\x08\xd2\x71\xf0\x00\xd5\xe3\x26\x53\x3d\x7e\x2a\xb6\x02\x33\x1b\xaf\x85\x90\xd8\xc9\x24\x89\x58\x0c\x19\xce\xc1\x56\x8a\x0d\x7f\x5a\xc4\xf6\x8f\xfe\x42\x5f\x7b\x1b\x73\x7e\xfc\x18\x13\x2e\xbc\x0c\xad\x1c\x2c\x99\xc1\xf3\x13\x1b\xd6\x89\xe2\x53\x36\xbd\xf0\x0b\xdf\xf6\x38\x52\xca\x4f\x41\xed\x47\x5d\x63\x85\x42\x4c\x9d\x77\x3d\x85\xf9\xb9\x14\xe4\x17\xb0\xec\x99\x87\xe9\xeb\xea\xa4\xbc\x66\x18\x2d\xbb\xb6\x13\x8c\xe4\x32\xb0\x7a\x44\xba\xe7\x48\x7f\x30\x50\x5a\xb3\x71\xe5\x9f\x58\x7a\xe7\xb5\x2c\xbe\xf1\x62\x9a\xb7\x6e\x42\x99\x01\x12\xb6\xa2\x64\xf4\x69\x24\x13\x49\xea\xdf\x59\x42\x6f\x5f\x3f\x29\x2d\x48\x29\x7d\x42\xf8\xff\x23\x88\x27\xbc\xb2\xef\xe1\x68\x3f\x7d\x74\x66\x8c\xcd\xba\xfa\xbb\xbb\xa8\x1c\x5d\x8b\x63\x3b\x34\xae\xfb\x8c\xe3\x47\xdb\x38\xfb\xb2\xab\x78\x64\xd9\xdf\x98\x3e\xf7\x1a\xf2\x06\x0c\xa4\x64\xd0\x60\x84\x56\x6e\x41\x67\x6e\x22\xd8\xb7\xf1\x0b\x1a\x3f\x59\xc5\xd0\x49\xe7\x10\x8b\xc5\x69\x5c\xf3\x01\xfd\x5d\x1d\x9c\x7a\xe5\xff\x62\xe5\xe4\xb3\xe9\x8d\x17\xd9\xf3\xf5\x7a\x6a\xa7\x9e\x8b\xd0\x2e\xe2\x0b\x21\xd2\x12\xa0\x40\xa4\xb5\x1c\xc7\x13\xc6\xb2\x91\x27\x6d\xf0\x7f\xf4\x00\xe1\x91\xbb\x78\x6f\x37\xef\xfd\xf4\x66\xf6\xd4\xaf\xc5\xb6\x53\x6e\xdd\x68\x45\xe5\xd0\xe1\x58\x52\xd0\xba\x63\x2b\x65\xd5\xc3\x29\x28\x2a\x22\x25\x40\x09\x57\xbe\x74\x17\x91\x06\x63\x67\xcd\x25\xa7\xb8\x94\xd3\xe7\xdd\xc8\xce\x35\x2b\xf9\xf3\x1d\xf3\xf8\xe6\xcf\x4b\xe8\x89
\xc6\x18\x78\xfa\x54\x1c\xdb\x61\xe3\xfb\x6f\x11\x8b\x27\xb2\x0a\x30\xa3\x9f\xfa\xda\x68\x52\xb9\xd0\x98\xf4\xf4\x9c\xa4\xf2\x3d\xce\x09\x9d\xdc\x14\x2e\x52\x99\xc2\x45\x1f\x95\x4c\x60\x98\x26\x67\xcf\xbf\x89\x9b\x9f\x5f\xcc\x9d\x0b\x16\x53\x52\x3e\x88\xbf\x2f\x5d\xc4\x4b\xf7\xde\xcc\xf6\x0d\xf5\x9c\x32\xed\x1c\x22\xe1\xb0\x27\xf0\x82\x69\x08\xbf\xb0\x14\x83\x4f\x9b\x48\xe5\xd8\x71\x6c\xff\xf8\x7d\x46\xcf\xb9\x96\x8e\xc3\xad\x1c\x69\xf8\x8a\x3d\x1f\xaf\xa0\xf5\xf3\xbf\xa3\x1d\x9b\x03\x9b\xd6\xd1\x79\xf4\x30\xa1\xaa\x1a\xa4\x76\xd2\xaa\xb5\xc6\xed\xf0\x4a\xf8\x7d\x58\xa3\xe4\x7f\x13\x38\x7f\xe3\x41\xaf\x3e\x6c\x0d\x5a\x48\x52\x89\x04\x06\x8a\x4b\xef\x7f\x92\x8a\xca\xc1\x0c\x2c\x29\xc2\x4c\xc5\x68\xac\xff\x27\xab\x97\xbd\x46\xd3\xd6\x2d\x8c\xac\x9b\xc0\xb4\x39\x57\x60\xa0\x31\xa5\x9b\x35\xc6\xec\x3b\x1e\x78\x04\xdc\x34\x90\x81\x20\x5a\x1a\x7c\xbd\xe2\x75\x46\x5c\x78\x15\x0e\x82\x03\x6b\x57\xd2\xb6\xf1\x5f\xe4\x0d\x1e\xc6\x99\xb7\x3d\xc2\xc9\x33\x2f\xa7\xb0\xbc\x92\x44\x77\x27\xe1\xdc\xfc\xb4\xf4\xae\xb4\xf6\x9a\x5f\x86\x26\x93\xf5\x9e\x99\xd8\x44\x9a\x52\x1b\x02\x02\x86\xc1\x91\x5d\xdb\x58\xf9\xd4\x4f\xf8\xe8\xe5\x27\xd9\xb4\x6a\x39\x7b\x37\x7e\x81\x29\x04\xd5\xc3\x47\x50\x77\xe6\x64\xf2\x0a\x0a\xa9\x19\x53\xcb\x55\x77\x3f\x40\xc5\xb0\x51\xd8\x8e\xca\xdc\x6b\xe1\xee\x7e\x6d\x6b\x3f\xdc\x82\xe3\xc7\x3b\x59\x76\xdb\x3c\xba\x0f\xb7\x12\xeb\x6c\xc7\x08\xe7\x30\x7c\xee\x0f\x38\xe9\x8a\x1b\x29\xa9\xa8\x44\x76\x1f\xe5\x8b\xe7\x7e\x46\xa2\xe3\x08\xd7\x2d\x58\x4a\xa4\xa0\x08\xc7\xb6\xd3\x5d\xd4\xa7\x0e\x01\xc3\x3d\xbc\x48\x37\x35\x91\x69\x74\x3e\x12\x39\x08\xda\x9a\xf7\xf3\xf2\x6d\xf3\x39\xd6\xbc\x8f\x93\xa7\x9c\x43\x4f\x47\x3b\xbb\xbe\xaa\x47\x08\xb8\xe8\xba\x9b\xb8\xf9\xa1\xc7\xc8\xc9\xcb\xc3\x76\x14\x49\x05\xfd\x29\x87\x98\x43\xfa\xd0\x43\x66\xe3\xb4\xa3\x15\x56\x41\x09\xa3\x66\x7d\x87\xbe\xb6\x16\xf2\x6a\x46\x31\xf1\xb1\x25\x8c\xbe\xfe\x7e\xac\x92\x72\x3a\x9b\x9b\xf8\xf0\xe1\x5b\x69\x58\xf9\x26\xf1\x68\x1f\x5d\xed\x47\x68\x69\xdc\x82\x2b\x4f\x8a\x13\x72\x3f\xe5\xd3\x83\xac\xbc\xf7\xe1\x35\x60\x1a\x84\x2c\x93\xa0\x69\x50\xff\xde\x52\x0e\xef\xd9\xc1\x8d\x4f\x2f\xe4\xce\x85\x6f\xf0\xc0\x92\x15\xdc\xf5\xfc\xef\x29\x1a\x58\xce\xfb\x7f\x58\xc8\xa7\x7f\x59\xee\x9e\xee\x78\x4a\x47\x3a\x92\x22\x43\xd5\xd3\x8c\xd2\x51\x90\x72\x14\x43\xa6\x7f\x9b\xe2\x51\xa7\x50\x52\x37\x95\xd2\x89\xdf\x42\x1b\x26\x9d\xbb\xb7\xf2\xf9\xc3\x37\xd1\xba\x6e\x0d\x08\xc9\x80\xa1\xa3\xd8\xf2\x8f\x15\x2c\xba\xe1\x22\x56\x2f\x78\x94\xbe\xee\x2e\x6c\x2d\xb0\x91\x38\xd2\xc0\x41\xa4\x29\x86\x00\x4c\xc3\xc0\x32\x2d\x54\x32\x41\xcb\xf6\x06\xd6\xbc\xb5\x98\xb6\x3d\xdb\x69\xdb\xbb\x8b\xd1\xe3\x27\x31\xf1\xbc\x59\x84\x0c\x41\x41\x5e\x2e\xb3\xae\xfa\x2e\xb7\x3c\xf2\x34\x86\x69\x51\xff\xd1\xdf\x49\x26\x93\xe9\xb5\x4e\xb8\x34\x98\xb6\x02\xdb\xf7\x98\x06\xdb\x51\x84\x06\x54\x30\xed\xd1\xc5\xc4\x92\x29\x3a\xb6\xd4\x63\xc7\xfa\x69\x7c\xe9\x41\xe2\x47\x5b\x38\xf5\xda\xdb\x68\x7c\xef\x0f\x94\x0c\x1d\x4d\x41\x79\x25\xd1\xae\xe3\x6c\xfb\x78\x25\xb5\x97\x7c\x97\xfc\x50\x0e\x9d\x7b\xb6\xe1\xf4\xf7\x30\xa8\x66\x18\x95\x83\x87\x20\xa5\xc0\x49\xc6\x39\xb0\x73\x37\x8d\xeb\x3e\x63\xe3\xda\xd5\xec\xfe\x66\x13\xb1\xfe\x3e\x6e\x79\xfa\x65\x2e\xbe\xf1\x0e\xec\x54\x92\x70\x28\x84\x44\x63\xa0\x11\x4a\x51\x37\x65\x3a\x03\x2b\x07\x13\x8d\x46\x49\xda\x0e\x98\x46\x9a\xf1\xfa\x6a\x87\xad\xc1\x4c\xa8\xec\xc3\xbd\x0c\x23\x2c\x1e\x59\xcb\xa1\xcd\xeb\xd9\xf0\xc0\x7c\x9c\x78\x0c\xd0\x4c\xba\xe7\x19\xf2\x06\x94\xb1\xf5\xed\x57\x08\xe4\x17\xb1\xe9\xcf\xaf\x53\x31\x76\x1c\xb3\x1f\xfd\x2d\x22\x18\xe6\x1f\x8f\xdf\xcd\x8e\x0f\x57\x90\x8a\x47\x29\xae\xac\xe2\xd2\xbb\x1e\xe4\xdc\xb
9\x57\xf3\xc9\x9b\xaf\xf3\xa7\x67\x7f\x41\xac\xaf\x97\x92\xf2\x0a\xa4\x94\x94\x0d\xae\xe6\xb4\x29\x67\x53\x3a\xa4\x06\xad\x14\x68\x85\xd0\x1a\xc3\x34\x91\xa6\x41\x53\xe3\x56\x8e\xb7\x1f\x65\xfa\xa5\x57\x62\x9b\x41\x52\x8e\x22\x95\x4e\x51\xbc\xcf\x1a\x19\xf7\x84\xad\x54\xd6\x11\x92\x21\xc0\x54\x36\xa5\x23\x4e\x62\xd8\x8c\x2b\x48\xf5\x75\x53\x5a\x3b\x81\xa1\xe7\x5f\x46\x5f\x5b\x0b\x86\x69\xd1\xf4\xc5\x1a\x5a\x36\xaf\x63\xc2\xfc\x1f\x92\x57\x39\x94\x4f\x17\x3c\xcc\xe6\x77\xff\xc8\xb0\xa9\xe7\x73\xc1\x5d\x8f\x60\x58\x01\xde\x79\xfc\x27\x1c\x6c\xdc\x4c\x2a\xda\x47\x30\x14\xe2\xc1\x85\x7f\xe4\xb7\xab\xfe\xc9\xf5\xf7\xfd\x9c\xb6\xe6\xfd\x6c\xab\xff\x0c\x03\x10\x68\x7a\x3a\xda\xf9\xdd\x83\x77\xf3\xde\xef\x5e\x60\xd5\x1b\x7f\xe0\xa5\x07\xee\xa6\xa8\xac\x82\x49\x97\x5e\x4d\x9f\xad\xe9\xb3\x15\xfd\x59\x0a\xb5\x2f\x46\x98\x09\xe5\xb3\x42\x9d\xde\xbc\x94\x02\x53\x6b\xac\xdc\x1c\xce\xb8\xf9\x01\x0c\x01\xfb\x3f\xfd\x80\xd8\xc1\xbd\xf4\x1d\x3e\x40\xbc\xb7\x9b\x5d\x6b\x56\x52\x3d\xe9\x5b\xd4\x4c\xff\x36\x3b\xd6\xae\xa2\xf1\x6f\x6f\x53\x3d\xf9\x3c\x66\x3e\xb4\x80\x81\x03\x07\x52\x35\x72\x34\x8b\x7f\x78\x15\xeb\x56\xbe\x43\xf5\xa8\x31\x58\x81\x00\xe3\xa7\x9c\x45\xc5\x90\x2a\xa6\x9e\x37\x93\xb7\x7e\xf3\x2c\xff\x78\xe3\x55\x26\xcf\x98\x4d\x4e\x61\x31\x00\xbb\xb7\x6c\x64\xd5\x6b\xbf\xc3\xb0\x2c\x6a\xc6\x9e\xce\x35\xf7\x3f\x4e\x7e\xf5\x48\xba\x12\xb6\xc7\x62\x33\xc3\x8c\x0f\x16\x32\x7b\xf3\xa6\xcc\x9c\xdb\xe6\x98\x92\x5c\x03\x0a\xf3\x72\x99\xfa\xc3\x87\x98\xfd\xab\x37\x29\x19\x5c\x43\x6f\xcb\x5e\xb4\x72\x30\x83\x21\xea\xae\xbe\x05\x6d\x05\xd9\xb9\x7a\x05\xda\x71\x18\x7d\xe1\x3c\x82\x05\xc5\x38\xa9\x24\xe5\x23\x4e\x22\xb7\xa8\x84\x1d\x5f\xd5\x33\x7c\x4c\x2d\x17\x5d\xf3\x3d\x22\xe1\x10\x42\xd9\x0c\xa9\xaa\x62\xf2\x79\x33\x68\xdc\x50\xcf\xa6\xb5\x1f\x62\x1a\x82\xc2\xd2\x81\x5c\x73\xdf\x23\xe4\x15\x0f\x60\xca\xa5\xd7\x70\xdb\xa2\xf7\x18\x32\xe1\x2c\xba\x92\x8a\x9e\x94\xa6\xd7\xd6\xf4\xd9\x9a\xa8\xed\x9e\xd0\xa4\x23\x60\x67\x09\x53\x86\x10\x69\xc1\xc8\x3d\x74\x75\x75\x9f\x50\x6e\x84\xc2\xb1\x75\xd8\xa9\x24\xc1\x50\x08\x80\x48\x71\x29\x45\xd5\x23\x48\xc5\xfa\xe9\x3d\x7c\x00\x33\x14\xa1\x70\xe8\x28\x94\x76\x9b\x8c\xe3\x38\x68\xad\x39\xde\x7e\x94\xd2\xca\xc1\xdc\x70\xdf\x43\x04\xd0\x48\xad\xb1\x02\x16\xe7\x5f\x3c\x97\x55\xcb\x96\xb2\xe2\xd5\x97\xd8\xf5\xcd\x26\xac\x48\x2e\x17\xdd\x72\x2f\xb7\xff\x76\x19\x79\x95\xd5\x18\x85\x03\xe8\x4d\xda\x24\x55\x66\xe6\xc8\x90\xc2\x4c\x31\xcb\xf4\x34\x85\xdf\x74\xdc\x28\xb8\xba\x8f\x20\xc7\x14\xe4\x19\x82\x1c\xa9\xc8\x0f\x5a\xcc\xb8\xf3\x61\xa6\xfd\xe0\x1e\x82\x39\xb9\xc4\xda\x0f\x11\x0c\x85\xc9\x29\x29\xc3\x49\xc4\x89\x1e\x3d\x84\x34\x4c\xa4\x69\xd2\xbc\x6d\x0b\xbd\x1d\xc7\x88\xe4\x17\x20\xac\xa0\xdb\x3d\x3d\x65\x39\x11\x8b\xd2\xdf\xd3\x4d\x24\x27\x87\xed\x1b\xbf\xe4\xc3\x65\xaf\x93\x48\xda\xa4\x90\x0c\x3d\x63\x3a\x91\xb2\xc1\x44\x93\x8e\x97\xeb\x90\x70\x20\xae\xb2\x73\xdf\x9d\xc3\x6d\xed\xb1\x51\xe9\xcf\xad\xe9\x49\x4a\xa4\x47\xc0\xf4\xb1\xa6\x37\xe4\x84\x87\x54\x71\xe1\xdd\x8f\x30\xe5\xba\xdb\x30\x72\x0b\xc0\xb2\x38\x65\xce\xd5\x34\xaf\x5f\x4b\xfd\xc2\xc7\x50\x7d\xdd\x58\x52\x50\xff\xda\x02\xb4\x72\x98\x30\xeb\x12\x02\x79\x05\xa4\x94\x76\xc9\x9b\x30\xd8\xb9\xad\x81\x5f\xfd\xec\x47\xe4\x15\x15\x33\xf3\xda\x1b\x38\xf3\xa2\xb9\x94\x8e\xa8\x25\x86\x41\x34\x69\x13\xcb\x9a\xb1\x7d\x12\xa8\xb2\x22\x90\xcd\xdf\xc5\xfd\x5b\x7a\xb4\x95\xf5\xec\x81\x3f\x0a\x9a\xc2\x3f\xbd\xcf\xbc\x94\x37\x06\xa6\xbc\x67\x22\x12\x8e\xcb\x32\x93\x29\x87\x2d\xef\xbf\xc1\xba\x3f\x3c\x4f\xf7\xa1\x66\xb4\xd6\x44\x0a\x0a\x99\x32\x77\x3e\x97\xdf\x71\x3f\x03\x8b\x0a\xc9\x31\x5c\x35\x2f\x60\x08\xa2\xbd\xbd\x34\x6c\xfa\x
8a\xd2\xaa\x61\xe4\x0f\xaa\xa2\x5f\x09\x7a\x12\x76\x3a\xc7\x13\x59\x93\x9a\x43\xd6\xbc\xe0\x6f\x3e\x4b\x2c\x13\x0f\x37\xf4\xea\x80\x67\x80\x2f\xbe\x5a\x5e\xcb\x4f\x9f\x88\x7b\xcf\x29\xf8\x06\xf8\x0f\x86\x24\x9c\xcc\x33\x15\x0a\xe8\x3a\xd8\xcc\x91\x9d\xdf\x20\x1d\x9b\x41\xc3\x47\x51\x3d\x6a\x0c\xb9\x41\x8b\xb0\x84\x1c\x4f\x82\x74\x1f\x0c\x11\x28\x21\xe9\xb7\x15\x7d\x49\x87\x5e\x5b\xd3\x93\x52\xf4\xdb\xee\x61\x8b\x4f\x41\x4e\x1c\x86\x32\xde\xcf\x28\x13\x60\x46\xcc\x13\xbd\xee\x4b\x21\x3e\x6f\xf1\x8b\xda\x55\x21\x04\x76\x5a\xd4\xd2\x68\x2d\x10\x59\x21\x2e\xab\xaa\x61\x50\xcd\x30\x4f\xf2\xd0\x04\x50\xe9\x67\x05\xd2\x0a\x86\xe7\xd9\x84\x72\x37\xdc\xe7\x6d\xbe\xd7\xd6\xf4\xdb\x2e\xb2\x38\x9e\x98\xe0\xf3\x1e\x7f\x00\x12\x64\x38\x95\xef\x74\x33\x6c\x78\xde\x17\xe2\x84\x67\x10\x7c\x44\xca\xd6\x74\x94\x06\xa1\x34\x8e\x14\x6e\x7e\x4a\xf7\xa0\x4d\xa0\xd1\x02\x04\x0a\xa9\x35\xa6\x06\x0b\x3f\x9a\x22\xc3\x42\x01\xe5\x51\x16\xff\xd8\xd4\x3f\x9b\xf0\x8f\x52\x93\xca\x07\x15\x97\xf3\x1b\x69\x42\x9e\x51\x3e\xdc\x5a\x75\x1f\xf7\xf9\x3f\x8a\x4e\x8b\x9e\xb4\xbd\x7d\x11\x00\x00\x00\x19\x74\x45\x58\x74\x43\x6f\x6d\x6d\x65\x6e\x74\x00\x43\x72\x65\x61\x74\x65\x64\x20\x77\x69\x74\x68\x20\x47\x49\x4d\x50\x57\x81\x0e\x17\x00\x00\x00\x25\x74\x45\x58\x74\x64\x61\x74\x65\x3a\x63\x72\x65\x61\x74\x65\x00\x32\x30\x31\x39\x2d\x30\x33\x2d\x31\x31\x54\x31\x30\x3a\x30\x37\x3a\x31\x38\x2d\x30\x35\x3a\x30\x30\x3f\x9b\x66\xad\x00\x00\x00\x25\x74\x45\x58\x74\x64\x61\x74\x65\x3a\x6d\x6f\x64\x69\x66\x79\x00\x32\x30\x31\x39\x2d\x30\x33\x2d\x31\x31\x54\x31\x30\x3a\x30\x37\x3a\x31\x38\x2d\x30\x35\x3a\x30\x30\x4e\xc6\xde\x11\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82"), }, "/rootDesc.xml.tmpl": &vfsgen۰CompressedFileInfo{ name: "rootDesc.xml.tmpl", modTime: time.Date(2019, 9, 15, 16, 40, 10, 576918038, time.UTC), uncompressedSize: 2521, compressedContent: 
[]byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xcc\x56\xd1\x6a\xe3\x3a\x10\x7d\xef\x57\x84\x3c\xdd\x0b\x8d\x65\x97\x5e\x08\x46\x55\x29\x31\x17\x02\x75\xb7\x24\x9b\xa5\x6f\x41\x95\x27\x8e\x76\x6d\xc9\x48\x72\x92\x52\xfa\xef\x8b\x64\x3b\xb6\x93\x36\x5b\x36\xb0\x6c\x5e\x2a\xcd\x9c\x73\x34\x3a\xd2\xa8\xc6\xb7\xbb\x3c\x1b\x6c\x40\x69\x2e\xc5\xcd\x30\xf0\xfc\xe1\x2d\xb9\xc0\x4a\x4a\x33\xd8\xe5\x99\xd0\x37\xc3\x52\x89\x50\xb3\x35\xe4\x54\x8f\xca\x42\x14\x23\xa9\xd2\x30\x81\x0d\x67\x30\x0a\x46\xfe\xf0\x62\xe0\x7e\x0e\x1d\x26\x99\xa0\x7d\x8a\x8d\x9c\xa4\x68\x60\x37\xc3\xb5\x31\x45\x88\xd0\x76\xbb\xf5\x34\x30\x8f\x49\xef\x87\x42\x96\x3a\x24\x17\x83\x01\xd6\x05\xb0\x6f\x55\x91\xc4\x91\x71\x4e\xbf\x4b\x45\x02\x8c\xaa\x41\x1d\xe4\x42\x2a\xe2\x63\x54\x0d\x2c\x13\x1d\x50\x71\x55\x46\x4d\xa8\x26\x5f\x5f\x0a\x20\x27\xb6\x19\xc6\x90\x70\x3a\x07\xb5\x01\x15\x06\x18\x75\x58\x95\xcc\x4a\x71\x10\x49\xf6\xf2\x40\x73\x20\xaf\xaf\xde\xff\x9d\xf9\xdb\x1b\x46\xbd\x7c\x53\xbf\x28\x57\x94\x99\x52\x81\x22\x8a\x65\x52\xc0\xe0\x9f\xea\xaf\x27\x55\xfa\xaf\xdd\x58\x07\x71\x4c\x5a\xcc\xee\x89\x75\x4d\x87\x08\xb5\x3c\xd4\xe7\x59\x50\x4d\x95\x09\x64\x11\x68\xa6\x78\x61\xac\x17\x15\x07\xa3\xa3\x44\x07\xef\xea\xed\x01\xbb\x3b\x70\xf3\x32\x7f\x06\x65\xf7\x1c\xb7\x53\xbb\xe5\x6e\xb6\x83\xff\xb8\xea\x26\x5b\x81\x35\x28\x4e\x1b\xbe\x5f\xff\x30\xea\x85\x2b\xe4\x22\x7a\xb0\xcb\xcf\xa4\x34\x91\x3b\x97\xc5\x62\x1a\xd9\x0a\x6c\xa2\x3e\xe5\x4c\xd0\xf0\x69\x19\xdd\x3f\xdc\x4d\xee\x1e\xd1\x71\x34\xfa\x32\x21\x51\x3c\x1f\x05\xde\x7f\x3e\x46\x07\x89\x77\xd1\xf1\xe8\x17\x78\x0d\x2c\x7c\x54\x32\x29\x99\x99\xd0\x82\xe8\x9c\x5f\x46\x93\x38\xf0\x2f\x53\x30\xee\x3a\x4d\xc5\x4a\xda\xbb\x6e\x03\x13\xea\xac\x6f\x42\x76\x9f\x3d\x76\x2b\xf9\xb4\x3c\x4f\xb4\xc7\xaf\x64\x39\x93\xe2\x9e\x6b\x43\xea\xae\x74\x81\x66\xe2\x9a\x2a\x07\x63\xef\x3a\xcf\x69\x0a\xa8\x10\xa9\xed\xaf\x3a\xd6\xc2\xb6\x3c\x31\x6b\x72\x3d\xc6\xa8\x1a\xb5\x99\x35\xf0\x74\x6d\x5c\xaa\x1e\xb6\xb9\x04\x0a\xb3\x26\x63\xdb\x53\x45\x8f\x54\xaa\x8c\x20\x6d\xa8\xe1\xac\xbe\x28\xa3\xeb\xf1\xee\x7a\xec\xb9\xf5\x6d\xb6\x29\x17\x75\xeb\x3d\xab\xf8\xe0\xca\xff\xb0\x7a\x97\x3b\xab\xfc\xe0\xca\xdf\x05\x57\xfe\xc9\x0d\x54\xe3\xf6\x34\x6c\x1f\xd8\x3b\xdd\x3b\x9f\x3a\xd6\x59\xad\x8e\x7c\xfc\x8e\xd5\x80\x70\x22\x85\x01\x61\x22\xae\x80\x19\xa9\x5e\xec\x6b\xd6\x25\x1f\x49\x4e\x13\x27\x78\x28\x34\x4d\x8e\xa4\xf6\x42\xd3\xa4\x23\x33\x9f\x3c\x46\xb6\xab\x1b\x2f\x0e\x59\xde\x2e\xcf\x30\x6a\x50\x2d\x8f\x49\x61\x94\x74\x0f\x02\x62\x26\xc3\xa8\x13\x68\x51\xb0\x01\x61\xe6\xe5\xb3\x8d\x62\xd4\x9d\xed\xbd\x3d\x30\xeb\x4c\xf3\x04\x30\xdb\x53\x31\x15\x34\xad\xfe\x17\xfc\xbe\x7b\x7d\xad\x4f\xdb\xd7\xa7\xfd\x85\xfe\xe5\x9c\x29\xa9\xe5\xca\x78\x4c\xe6\x7b\xf3\x9e\x96\xf1\x7c\xe9\xde\xa8\x19\x30\xe0\x1b\x50\x33\x48\xb9\x36\x8a\x7e\xda\xc6\x77\x85\xa7\xc9\x29\xe9\xcf\xb9\x7a\x42\xe0\xcf\xfa\xbb\x9f\x76\x5e\x80\x42\x81\x06\x61\x0b\x95\xc2\x2d\x87\xd1\x61\xc8\x7d\xe7\x34\xdf\x35\x18\xd9\x8f\x37\xf2\x33\x00\x00\xff\xff\xb4\xae\xdd\x58\xd9\x09\x00\x00"), }, } fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{ fs["/ConnectionManager.xml"].(os.FileInfo), fs["/ContentDirectory.xml"].(os.FileInfo), fs["/X_MS_MediaReceiverRegistrar.xml"].(os.FileInfo), fs["/rclone-120x120.png"].(os.FileInfo), fs["/rclone-48x48.png"].(os.FileInfo), fs["/rootDesc.xml.tmpl"].(os.FileInfo), } return fs }() type vfsgen۰FS map[string]interface{} func (fs vfsgen۰FS) Open(path string) (http.File, error) { path = pathpkg.Clean("/" + path) f, ok := fs[path] if !ok { return nil, &os.PathError{Op: "open", Path: path, Err: os.ErrNotExist} } switch f := 
f.(type) {
	case *vfsgen۰CompressedFileInfo:
		gr, err := gzip.NewReader(bytes.NewReader(f.compressedContent))
		if err != nil {
			// This should never happen because we generate the gzip bytes such that they are always valid.
			panic("unexpected error reading own gzip compressed bytes: " + err.Error())
		}
		return &vfsgen۰CompressedFile{
			vfsgen۰CompressedFileInfo: f,
			gr:                        gr,
		}, nil
	case *vfsgen۰FileInfo:
		return &vfsgen۰File{
			vfsgen۰FileInfo: f,
			Reader:          bytes.NewReader(f.content),
		}, nil
	case *vfsgen۰DirInfo:
		return &vfsgen۰Dir{
			vfsgen۰DirInfo: f,
		}, nil
	default:
		// This should never happen because we generate only the above types.
		panic(fmt.Sprintf("unexpected type %T", f))
	}
}

// vfsgen۰CompressedFileInfo is a static definition of a gzip compressed file.
type vfsgen۰CompressedFileInfo struct {
	name              string
	modTime           time.Time
	compressedContent []byte
	uncompressedSize  int64
}

func (f *vfsgen۰CompressedFileInfo) Readdir(count int) ([]os.FileInfo, error) {
	return nil, fmt.Errorf("cannot Readdir from file %s", f.name)
}
func (f *vfsgen۰CompressedFileInfo) Stat() (os.FileInfo, error) { return f, nil }

func (f *vfsgen۰CompressedFileInfo) GzipBytes() []byte {
	return f.compressedContent
}

func (f *vfsgen۰CompressedFileInfo) Name() string       { return f.name }
func (f *vfsgen۰CompressedFileInfo) Size() int64        { return f.uncompressedSize }
func (f *vfsgen۰CompressedFileInfo) Mode() os.FileMode  { return 0444 }
func (f *vfsgen۰CompressedFileInfo) ModTime() time.Time { return f.modTime }
func (f *vfsgen۰CompressedFileInfo) IsDir() bool        { return false }
func (f *vfsgen۰CompressedFileInfo) Sys() interface{}   { return nil }

// vfsgen۰CompressedFile is an opened compressedFile instance.
type vfsgen۰CompressedFile struct {
	*vfsgen۰CompressedFileInfo
	gr      *gzip.Reader
	grPos   int64 // Actual gr uncompressed position.
	seekPos int64 // Seek uncompressed position.
}

func (f *vfsgen۰CompressedFile) Read(p []byte) (n int, err error) {
	if f.grPos > f.seekPos {
		// Rewind to beginning.
		err = f.gr.Reset(bytes.NewReader(f.compressedContent))
		if err != nil {
			return 0, err
		}
		f.grPos = 0
	}
	if f.grPos < f.seekPos {
		// Fast-forward.
		_, err = io.CopyN(ioutil.Discard, f.gr, f.seekPos-f.grPos)
		if err != nil {
			return 0, err
		}
		f.grPos = f.seekPos
	}
	n, err = f.gr.Read(p)
	f.grPos += int64(n)
	f.seekPos = f.grPos
	return n, err
}
func (f *vfsgen۰CompressedFile) Seek(offset int64, whence int) (int64, error) {
	switch whence {
	case io.SeekStart:
		f.seekPos = 0 + offset
	case io.SeekCurrent:
		f.seekPos += offset
	case io.SeekEnd:
		f.seekPos = f.uncompressedSize + offset
	default:
		panic(fmt.Errorf("invalid whence value: %v", whence))
	}
	return f.seekPos, nil
}
func (f *vfsgen۰CompressedFile) Close() error {
	return f.gr.Close()
}
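// Illustrative sketch (added commentary, not part of the vfsgen-generated
// output): Read and Seek above emulate random access on a gzip stream by
// rewinding and then discarding bytes, since gzip can only be decompressed
// forwards. A minimal standalone version of the same technique, using only
// packages this file already imports, might look like this:
func exampleGzipReadAt(compressed []byte, off int64, p []byte) (int, error) {
	// Start decompressing from the beginning of the stream on every call...
	gr, err := gzip.NewReader(bytes.NewReader(compressed))
	if err != nil {
		return 0, err
	}
	defer func() { _ = gr.Close() }()
	// ...throw away the first off uncompressed bytes...
	if _, err := io.CopyN(ioutil.Discard, gr, off); err != nil {
		return 0, err
	}
	// ...then read the requested window.
	return io.ReadFull(gr, p)
}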
// vfsgen۰FileInfo is a static definition of an uncompressed file (because it's not worth gzip compressing).
type vfsgen۰FileInfo struct {
	name    string
	modTime time.Time
	content []byte
}

func (f *vfsgen۰FileInfo) Readdir(count int) ([]os.FileInfo, error) {
	return nil, fmt.Errorf("cannot Readdir from file %s", f.name)
}
func (f *vfsgen۰FileInfo) Stat() (os.FileInfo, error) { return f, nil }

func (f *vfsgen۰FileInfo) NotWorthGzipCompressing() {}

func (f *vfsgen۰FileInfo) Name() string       { return f.name }
func (f *vfsgen۰FileInfo) Size() int64        { return int64(len(f.content)) }
func (f *vfsgen۰FileInfo) Mode() os.FileMode  { return 0444 }
func (f *vfsgen۰FileInfo) ModTime() time.Time { return f.modTime }
func (f *vfsgen۰FileInfo) IsDir() bool        { return false }
func (f *vfsgen۰FileInfo) Sys() interface{}   { return nil }

// vfsgen۰File is an opened file instance.
type vfsgen۰File struct {
	*vfsgen۰FileInfo
	*bytes.Reader
}

func (f *vfsgen۰File) Close() error {
	return nil
}

// vfsgen۰DirInfo is a static definition of a directory.
type vfsgen۰DirInfo struct {
	name    string
	modTime time.Time
	entries []os.FileInfo
}

func (d *vfsgen۰DirInfo) Read([]byte) (int, error) {
	return 0, fmt.Errorf("cannot Read from directory %s", d.name)
}
func (d *vfsgen۰DirInfo) Close() error               { return nil }
func (d *vfsgen۰DirInfo) Stat() (os.FileInfo, error) { return d, nil }

func (d *vfsgen۰DirInfo) Name() string       { return d.name }
func (d *vfsgen۰DirInfo) Size() int64        { return 0 }
func (d *vfsgen۰DirInfo) Mode() os.FileMode  { return 0755 | os.ModeDir }
func (d *vfsgen۰DirInfo) ModTime() time.Time { return d.modTime }
func (d *vfsgen۰DirInfo) IsDir() bool        { return true }
func (d *vfsgen۰DirInfo) Sys() interface{}   { return nil }

// vfsgen۰Dir is an opened dir instance.
type vfsgen۰Dir struct {
	*vfsgen۰DirInfo
	pos int // Position within entries for Seek and Readdir.
}

func (d *vfsgen۰Dir) Seek(offset int64, whence int) (int64, error) {
	if offset == 0 && whence == io.SeekStart {
		d.pos = 0
		return 0, nil
	}
	return 0, fmt.Errorf("unsupported Seek in directory %s", d.name)
}

func (d *vfsgen۰Dir) Readdir(count int) ([]os.FileInfo, error) {
	if d.pos >= len(d.entries) && count > 0 {
		return nil, io.EOF
	}
	if count <= 0 || count > len(d.entries)-d.pos {
		count = len(d.entries) - d.pos
	}
	e := d.entries[d.pos : d.pos+count]
	d.pos += count
	return e, nil
}
rclone-1.53.3/cmd/serve/dlna/data/data.go000066400000000000000000000014731375552240400200570ustar00rootroot00000000000000
//go:generate go run assets_generate.go
// The "go:generate" directive compiles static assets by running assets_generate.go

package data

import (
	"io/ioutil"
	"text/template"

	"github.com/pkg/errors"
	"github.com/rclone/rclone/fs"
)

// GetTemplate returns the rootDesc XML template
func GetTemplate() (tpl *template.Template, err error) {
	templateFile, err := Assets.Open("rootDesc.xml.tmpl")
	if err != nil {
		return nil, errors.Wrap(err, "get template open")
	}
	defer fs.CheckClose(templateFile, &err)
	templateBytes, err := ioutil.ReadAll(templateFile)
	if err != nil {
		return nil, errors.Wrap(err, "get template read")
	}
	var templateString = string(templateBytes)
	tpl, err = template.New("rootDesc").Parse(templateString)
	if err != nil {
		return nil, errors.Wrap(err, "get template parse")
	}
	return
}
rclone-1.53.3/cmd/serve/dlna/data/static/000077500000000000000000000000001375552240400201015ustar00rootroot00000000000000
rclone-1.53.3/cmd/serve/dlna/data/static/ConnectionManager.xml000066400000000000000000000126011375552240400242150ustar00rootroot00000000000000 1 0 GetProtocolInfo Source out SourceProtocolInfo Sink out SinkProtocolInfo PrepareForConnection RemoteProtocolInfo in A_ARG_TYPE_ProtocolInfo
PeerConnectionManager in A_ARG_TYPE_ConnectionManager PeerConnectionID in A_ARG_TYPE_ConnectionID Direction in A_ARG_TYPE_Direction ConnectionID out A_ARG_TYPE_ConnectionID AVTransportID out A_ARG_TYPE_AVTransportID RcsID out A_ARG_TYPE_RcsID ConnectionComplete ConnectionID in A_ARG_TYPE_ConnectionID GetCurrentConnectionIDs ConnectionIDs out CurrentConnectionIDs GetCurrentConnectionInfo ConnectionID in A_ARG_TYPE_ConnectionID RcsID out A_ARG_TYPE_RcsID AVTransportID out A_ARG_TYPE_AVTransportID ProtocolInfo out A_ARG_TYPE_ProtocolInfo PeerConnectionManager out A_ARG_TYPE_ConnectionManager PeerConnectionID out A_ARG_TYPE_ConnectionID Direction out A_ARG_TYPE_Direction Status out A_ARG_TYPE_ConnectionStatus SourceProtocolInfo string SinkProtocolInfo string CurrentConnectionIDs string A_ARG_TYPE_ConnectionStatus string OK ContentFormatMismatch InsufficientBandwidth UnreliableChannel Unknown A_ARG_TYPE_ConnectionManager string A_ARG_TYPE_Direction string Input Output A_ARG_TYPE_ProtocolInfo string A_ARG_TYPE_ConnectionID i4 A_ARG_TYPE_AVTransportID i4 A_ARG_TYPE_RcsID i4 rclone-1.53.3/cmd/serve/dlna/data/static/ContentDirectory.xml000066400000000000000000000342771375552240400241370ustar00rootroot00000000000000 1 0 GetSearchCapabilities SearchCaps out SearchCapabilities GetSortCapabilities SortCaps out SortCapabilities GetSortExtensionCapabilities SortExtensionCaps out SortExtensionCapabilities GetFeatureList FeatureList out FeatureList GetSystemUpdateID Id out SystemUpdateID Browse ObjectID in A_ARG_TYPE_ObjectID BrowseFlag in A_ARG_TYPE_BrowseFlag Filter in A_ARG_TYPE_Filter StartingIndex in A_ARG_TYPE_Index RequestedCount in A_ARG_TYPE_Count SortCriteria in A_ARG_TYPE_SortCriteria Result out A_ARG_TYPE_Result NumberReturned out A_ARG_TYPE_Count TotalMatches out A_ARG_TYPE_Count UpdateID out A_ARG_TYPE_UpdateID Search ContainerID in A_ARG_TYPE_ObjectID SearchCriteria in A_ARG_TYPE_SearchCriteria Filter in A_ARG_TYPE_Filter StartingIndex in A_ARG_TYPE_Index RequestedCount in A_ARG_TYPE_Count SortCriteria in A_ARG_TYPE_SortCriteria Result out A_ARG_TYPE_Result NumberReturned out A_ARG_TYPE_Count TotalMatches out A_ARG_TYPE_Count UpdateID out A_ARG_TYPE_UpdateID CreateObject ContainerID in A_ARG_TYPE_ObjectID Elements in A_ARG_TYPE_Result ObjectID out A_ARG_TYPE_ObjectID Result out A_ARG_TYPE_Result DestroyObject ObjectID in A_ARG_TYPE_ObjectID UpdateObject ObjectID in A_ARG_TYPE_ObjectID CurrentTagValue in A_ARG_TYPE_TagValueList NewTagValue in A_ARG_TYPE_TagValueList MoveObject ObjectID in A_ARG_TYPE_ObjectID NewParentID in A_ARG_TYPE_ObjectID NewObjectID out A_ARG_TYPE_ObjectID ImportResource SourceURI in A_ARG_TYPE_URI DestinationURI in A_ARG_TYPE_URI TransferID out A_ARG_TYPE_TransferID ExportResource SourceURI in A_ARG_TYPE_URI DestinationURI in A_ARG_TYPE_URI TransferID out A_ARG_TYPE_TransferID StopTransferResource TransferID in A_ARG_TYPE_TransferID DeleteResource ResourceURI in A_ARG_TYPE_URI GetTransferProgress TransferID in A_ARG_TYPE_TransferID TransferStatus out A_ARG_TYPE_TransferStatus TransferLength out A_ARG_TYPE_TransferLength TransferTotal out A_ARG_TYPE_TransferTotal CreateReference ContainerID in A_ARG_TYPE_ObjectID ObjectID in A_ARG_TYPE_ObjectID NewID out A_ARG_TYPE_ObjectID X_GetFeatureList FeatureList out A_ARG_TYPE_Featurelist X_SetBookmark CategoryType in A_ARG_TYPE_CategoryType RID in A_ARG_TYPE_RID ObjectID in A_ARG_TYPE_ObjectID PosSecond in A_ARG_TYPE_PosSec SearchCapabilities string SortCapabilities string SortExtensionCapabilities string SystemUpdateID ui4 
ContainerUpdateIDs string TransferIDs string FeatureList string A_ARG_TYPE_ObjectID string A_ARG_TYPE_Result string A_ARG_TYPE_SearchCriteria string A_ARG_TYPE_BrowseFlag string BrowseMetadata BrowseDirectChildren A_ARG_TYPE_Filter string A_ARG_TYPE_SortCriteria string A_ARG_TYPE_Index ui4 A_ARG_TYPE_Count ui4 A_ARG_TYPE_UpdateID ui4 A_ARG_TYPE_TransferID ui4 A_ARG_TYPE_TransferStatus string COMPLETED ERROR IN_PROGRESS STOPPED A_ARG_TYPE_TransferLength string A_ARG_TYPE_TransferTotal string A_ARG_TYPE_TagValueList string A_ARG_TYPE_URI uri A_ARG_TYPE_CategoryType ui4 A_ARG_TYPE_RID ui4 A_ARG_TYPE_PosSec ui4 A_ARG_TYPE_Featurelist string
rclone-1.53.3/cmd/serve/dlna/data/static/X_MS_MediaReceiverRegistrar.xml000066400000000000000000000046651375552240400261100ustar00rootroot00000000000000 1 0 IsAuthorized DeviceID in A_ARG_TYPE_DeviceID Result out A_ARG_TYPE_Result RegisterDevice RegistrationReqMsg in A_ARG_TYPE_RegistrationReqMsg RegistrationRespMsg out A_ARG_TYPE_RegistrationRespMsg IsValidated DeviceID in A_ARG_TYPE_DeviceID Result out A_ARG_TYPE_Result A_ARG_TYPE_DeviceID string A_ARG_TYPE_Result int A_ARG_TYPE_RegistrationReqMsg bin.base64 A_ARG_TYPE_RegistrationRespMsg bin.base64 AuthorizationGrantedUpdateID ui4 AuthorizationDeniedUpdateID ui4 ValidationSucceededUpdateID ui4 ValidationRevokedUpdateID ui4
rclone-1.53.3/cmd/serve/dlna/data/static/rclone-120x120.png000066400000000000000000000506701375552240400230140ustar00rootroot00000000000000
[binary PNG image data (rclone logo, 120x120) omitted; tEXt metadata: "Created with GIMP", created/modified 2019-03-11]
rclone-1.53.3/cmd/serve/dlna/data/static/rclone-48x48.png000066400000000000000000000131571375552240400226750ustar00rootroot00000000000000
[binary PNG image data (rclone logo, 48x48) omitted; tEXt metadata: "Created with GIMP", created/modified 2019-03-11]
rclone-1.53.3/cmd/serve/dlna/data/static/rootDesc.xml.tmpl000066400000000000000000000047311375552240400233650ustar00rootroot00000000000000 1 0 urn:schemas-upnp-org:device:MediaServer:1 {{.FriendlyName}} rclone (rclone.org) https://rclone.org/ rclone rclone {{.ModelNumber}} https://rclone.org/ 00000000 {{.RootDeviceUUID}} DMS-1.50 M-DMS-1.50 smi,DCM10,getMediaInfo.sec,getCaptionInfo.sec smi,DCM10,getMediaInfo.sec,getCaptionInfo.sec image/png 48 48 8 /static/rclone-48x48.png image/png 120 120 8 /static/rclone-120x120.png urn:schemas-upnp-org:service:ContentDirectory:1 urn:upnp-org:serviceId:ContentDirectory /static/ContentDirectory.xml /ctl urn:schemas-upnp-org:service:ConnectionManager:1 urn:upnp-org:serviceId:ConnectionManager /static/ConnectionManager.xml /ctl urn:microsoft.com:service:X_MS_MediaReceiverRegistrar:1 urn:microsoft.com:serviceId:X_MS_MediaReceiverRegistrar /static/X_MS_MediaReceiverRegistrar.xml /ctl /
rclone-1.53.3/cmd/serve/dlna/dlna.go000066400000000000000000000247411375552240400171540ustar00rootroot00000000000000
package dlna

import (
	"bytes"
	"encoding/xml"
	"fmt"
	"net"
	"net/http"
	"net/url"
	"os"
	"strconv"
	"strings"
	"time"

	dms_dlna "github.com/anacrolix/dms/dlna"
	"github.com/anacrolix/dms/soap"
	"github.com/anacrolix/dms/ssdp"
	"github.com/anacrolix/dms/upnp"
	"github.com/rclone/rclone/cmd"
	"github.com/rclone/rclone/cmd/serve/dlna/data"
	"github.com/rclone/rclone/cmd/serve/dlna/dlnaflags"
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/vfs"
	"github.com/rclone/rclone/vfs/vfsflags"
	"github.com/spf13/cobra"
)

func init() {
	dlnaflags.AddFlags(Command.Flags())
	vfsflags.AddFlags(Command.Flags())
}

// Command definition for cobra.
var Command = &cobra.Command{
	Use:   "dlna remote:path",
	Short: `Serve remote:path over DLNA`,
	Long: `rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many
devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN
and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast
packets (SSDP) and will thus only work on LANs.

Rclone will list all files present in the remote, without filtering based on media formats or
file extensions. Additionally, there is no media transcoding support. This means that some
players might show files that they are not able to play back correctly.

` + dlnaflags.Help + vfs.Help,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		f := cmd.NewFsSrc(args)

		cmd.Run(false, false, command, func() error {
			s := newServer(f, &dlnaflags.Opt)
			if err := s.Serve(); err != nil {
				return err
			}
			s.Wait()
			return nil
		})
	},
}

const (
	serverField       = "Linux/3.4 DLNADOC/1.50 UPnP/1.0 DMS/1.0"
	rootDescPath      = "/rootDesc.xml"
	resPath           = "/r/"
	serviceControlURL = "/ctl"
)
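// Example invocation (illustrative; --addr and --name are the flags
// registered via dlnaflags.AddFlags — check `rclone serve dlna --help`
// for the authoritative list):
//
//	rclone serve dlna --addr :7879 --name "rclone media" remote:media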
type server struct {
	// The service SOAP handler keyed by service URN.
	services map[string]UPnPService

	Interfaces []net.Interface

	HTTPConn       net.Listener
	httpListenAddr string
	handler        http.Handler

	RootDeviceUUID string

	FriendlyName string

	// For waiting on the listener to close
	waitChan chan struct{}

	// Time interval between SSDP announces
	AnnounceInterval time.Duration

	f   fs.Fs
	vfs *vfs.VFS
}

func newServer(f fs.Fs, opt *dlnaflags.Options) *server {
	friendlyName := opt.FriendlyName
	if friendlyName == "" {
		friendlyName = makeDefaultFriendlyName()
	}

	s := &server{
		AnnounceInterval: 10 * time.Second,
		FriendlyName:     friendlyName,
		RootDeviceUUID:   makeDeviceUUID(friendlyName),
		Interfaces:       listInterfaces(),
		httpListenAddr:   opt.ListenAddr,
		f:                f,
		vfs:              vfs.New(f, &vfsflags.Opt),
	}

	s.services = map[string]UPnPService{
		"ContentDirectory": &contentDirectoryService{
			server: s,
		},
		"ConnectionManager": &connectionManagerService{
			server: s,
		},
		"X_MS_MediaReceiverRegistrar": &mediaReceiverRegistrarService{
			server: s,
		},
	}

	// Setup the various http routes.
	r := http.NewServeMux()
	r.Handle(resPath, http.StripPrefix(resPath, http.HandlerFunc(s.resourceHandler)))
	if opt.LogTrace {
		r.Handle(rootDescPath, traceLogging(http.HandlerFunc(s.rootDescHandler)))
		r.Handle(serviceControlURL, traceLogging(http.HandlerFunc(s.serviceControlHandler)))
	} else {
		r.HandleFunc(rootDescPath, s.rootDescHandler)
		r.HandleFunc(serviceControlURL, s.serviceControlHandler)
	}
	r.Handle("/static/", http.StripPrefix("/static/",
		withHeader("Cache-Control", "public, max-age=86400",
			http.FileServer(data.Assets))))
	s.handler = logging(withHeader("Server", serverField, r))

	return s
}

// UPnPService is the interface for the SOAP service.
type UPnPService interface {
	Handle(action string, argsXML []byte, r *http.Request) (respArgs map[string]string, err error)
	Subscribe(callback []*url.URL, timeoutSeconds int) (sid string, actualTimeout int, err error)
	Unsubscribe(sid string) error
}

// Formats the server as a string (used for logging).
func (s *server) String() string {
	return fmt.Sprintf("DLNA server on %v", s.httpListenAddr)
}

// Returns the rclone version number as the model number.
func (s *server) ModelNumber() string {
	return fs.Version
}

// Renders the root device descriptor.
func (s *server) rootDescHandler(w http.ResponseWriter, r *http.Request) {
	tmpl, err := data.GetTemplate()
	if err != nil {
		serveError(s, w, "Failed to load root descriptor template", err)
		return
	}

	buffer := new(bytes.Buffer)
	err = tmpl.Execute(buffer, s)
	if err != nil {
		serveError(s, w, "Failed to render root descriptor XML", err)
		return
	}

	w.Header().Set("content-type", `text/xml; charset="utf-8"`)
	w.Header().Set("cache-control", "private, max-age=60")
	w.Header().Set("content-length", strconv.FormatInt(int64(buffer.Len()), 10))
	_, err = buffer.WriteTo(w)
	if err != nil {
		// Network error
		fs.Debugf(s, "Error writing rootDesc: %v", err)
	}
}
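// Illustrative sketch (not part of the original source): SOAPACTION headers
// arrive quoted, in the form
// "urn:schemas-upnp-org:service:ContentDirectory:1#Browse". The handler
// below delegates the real parsing to parseActionHTTPHeader (defined
// elsewhere in this package); a minimal hand-rolled split, assuming
// well-formed input, would be:
func exampleSplitSOAPAction(header string) (serviceURN, action string) {
	s := strings.Trim(header, `"`)
	if i := strings.LastIndex(s, "#"); i >= 0 {
		return s[:i], s[i+1:]
	}
	return s, ""
}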
func (s *server) serviceControlHandler(w http.ResponseWriter, r *http.Request) { soapActionString := r.Header.Get("SOAPACTION") soapAction, err := parseActionHTTPHeader(soapActionString) if err != nil { serveError(s, w, "Could not parse SOAPACTION header", err) return } var env soap.Envelope if err := xml.NewDecoder(r.Body).Decode(&env); err != nil { serveError(s, w, "Could not parse SOAP request body", err) return } w.Header().Set("Content-Type", `text/xml; charset="utf-8"`) w.Header().Set("Ext", "") soapRespXML, code := func() ([]byte, int) { respArgs, err := s.soapActionResponse(soapAction, env.Body.Action, r) if err != nil { fs.Errorf(s, "Error invoking %v: %v", soapAction, err) upnpErr := upnp.ConvertError(err) return mustMarshalXML(soap.NewFault("UPnPError", upnpErr)), http.StatusInternalServerError } return marshalSOAPResponse(soapAction, respArgs), http.StatusOK }() bodyStr := fmt.Sprintf(`%s`, soapRespXML) w.WriteHeader(code) if _, err := w.Write([]byte(bodyStr)); err != nil { fs.Infof(s, "Error writing response: %v", err) } } // Handle a SOAP request and return the response arguments or UPnP error. func (s *server) soapActionResponse(sa upnp.SoapAction, actionRequestXML []byte, r *http.Request) (map[string]string, error) { service, ok := s.services[sa.Type] if !ok { // TODO: What's the invalid service error? return nil, upnp.Errorf(upnp.InvalidActionErrorCode, "Invalid service: %s", sa.Type) } return service.Handle(sa.Action, actionRequestXML, r) } // Serves actual resources (media files). func (s *server) resourceHandler(w http.ResponseWriter, r *http.Request) { remotePath := r.URL.Path node, err := s.vfs.Stat(r.URL.Path) if err != nil { http.NotFound(w, r) return } w.Header().Set("Content-Length", strconv.FormatInt(node.Size(), 10)) // add some DLNA specific headers if r.Header.Get("getContentFeatures.dlna.org") != "" { w.Header().Set("contentFeatures.dlna.org", dms_dlna.ContentFeatures{ SupportRange: true, }.String()) } w.Header().Set("transferMode.dlna.org", "Streaming") file := node.(*vfs.File) in, err := file.Open(os.O_RDONLY) if err != nil { serveError(node, w, "Could not open resource", err) return } defer fs.CheckClose(in, &err) http.ServeContent(w, r, remotePath, node.ModTime(), in) } // Serve runs the server - returns the error only if // the listener was not started; does not block, so // use s.Wait() to block on the listener indefinitely. func (s *server) Serve() (err error) { if s.HTTPConn == nil { s.HTTPConn, err = net.Listen("tcp", s.httpListenAddr) if err != nil { return } } go func() { s.startSSDP() }() go func() { fs.Logf(s.f, "Serving HTTP on %s", s.HTTPConn.Addr().String()) err = s.serveHTTP() if err != nil { fs.Logf(s.f, "Error on serving HTTP server: %v", err) } }() return nil } // Wait blocks while the listener is open. func (s *server) Wait() { <-s.waitChan } func (s *server) Close() { err := s.HTTPConn.Close() if err != nil { fs.Errorf(s.f, "Error closing HTTP server: %v", err) return } close(s.waitChan) } // Run SSDP (multicast for server discovery) on all interfaces. func (s *server) startSSDP() { active := 0 stopped := make(chan struct{}) for _, intf := range s.Interfaces { active++ go func(intf2 net.Interface) { defer func() { stopped <- struct{}{} }() s.ssdpInterface(intf2) }(intf) } for active > 0 { <-stopped active-- } } // Run SSDP server on an interface. func (s *server) ssdpInterface(intf net.Interface) { // Figure out which HTTP location to advertise based on the interface IP. 
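// For example, an interface IP of 192.168.1.5 with the server listening
// on the default port 7879 advertises (illustrative values):
//
//	http://192.168.1.5:7879/rootDesc.xml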
advertiseLocationFn := func(ip net.IP) string { url := url.URL{ Scheme: "http", Host: (&net.TCPAddr{ IP: ip, Port: s.HTTPConn.Addr().(*net.TCPAddr).Port, }).String(), Path: rootDescPath, } return url.String() } // Note that the devices and services advertised here via SSDP should be // in agreement with the rootDesc XML descriptor that is defined above. ssdpServer := ssdp.Server{ Interface: intf, Devices: []string{ "urn:schemas-upnp-org:device:MediaServer:1"}, Services: []string{ "urn:schemas-upnp-org:service:ContentDirectory:1", "urn:schemas-upnp-org:service:ConnectionManager:1", "urn:microsoft.com:service:X_MS_MediaReceiverRegistrar:1"}, Location: advertiseLocationFn, Server: serverField, UUID: s.RootDeviceUUID, NotifyInterval: s.AnnounceInterval, } // An interface with these flags should be valid for SSDP. const ssdpInterfaceFlags = net.FlagUp | net.FlagMulticast if err := ssdpServer.Init(); err != nil { if intf.Flags&ssdpInterfaceFlags != ssdpInterfaceFlags { // Didn't expect it to work anyway. return } if strings.Contains(err.Error(), "listen") { // OSX has a lot of dud interfaces. Failure to create a socket on // the interface are what we're expecting if the interface is no // good. return } fs.Errorf(s, "Error creating ssdp server on %s: %s", intf.Name, err) return } defer ssdpServer.Close() fs.Infof(s, "Started SSDP on %v", intf.Name) stopped := make(chan struct{}) go func() { defer close(stopped) if err := ssdpServer.Serve(); err != nil { fs.Errorf(s, "%q: %q\n", intf.Name, err) } }() select { case <-s.waitChan: // Returning will close the server. case <-stopped: } } func (s *server) serveHTTP() error { srv := &http.Server{ Handler: s.handler, } err := srv.Serve(s.HTTPConn) select { case <-s.waitChan: return nil default: return err } } rclone-1.53.3/cmd/serve/dlna/dlna_test.go000066400000000000000000000162441375552240400202140ustar00rootroot00000000000000package dlna import ( "bytes" "context" "fmt" "html" "io/ioutil" "net/http" "os" "strings" "testing" "github.com/anacrolix/dms/soap" "github.com/rclone/rclone/vfs" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/cmd/serve/dlna/dlnaflags" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) var ( dlnaServer *server baseURL string ) const ( testBindAddress = "localhost:0" ) func startServer(t *testing.T, f fs.Fs) { opt := dlnaflags.DefaultOpt opt.ListenAddr = testBindAddress dlnaServer = newServer(f, &opt) assert.NoError(t, dlnaServer.Serve()) baseURL = "http://" + dlnaServer.HTTPConn.Addr().String() } func TestInit(t *testing.T) { config.LoadConfig() f, err := fs.NewFs("testdata/files") l, _ := f.List(context.Background(), "") fmt.Println(l) require.NoError(t, err) startServer(t, f) } // Make sure that it serves rootDesc.xml (SCPD in uPnP parlance). func TestRootSCPD(t *testing.T) { req, err := http.NewRequest("GET", baseURL+rootDescPath, nil) require.NoError(t, err) resp, err := http.DefaultClient.Do(req) require.NoError(t, err) assert.Equal(t, http.StatusOK, resp.StatusCode) body, err := ioutil.ReadAll(resp.Body) require.NoError(t, err) // Make sure that the SCPD contains a CDS service. require.Contains(t, string(body), "urn:schemas-upnp-org:service:ContentDirectory:1") // Make sure that the SCPD contains a CM service. require.Contains(t, string(body), "urn:schemas-upnp-org:service:ConnectionManager:1") // Ensure that the SCPD url is configured. 
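// (in the rootDesc template these URLs point at /static/*.xml and /ctl)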
require.Regexp(t, "/.*", string(body)) } // Make sure that it serves content from the remote. func TestServeContent(t *testing.T) { req, err := http.NewRequest("GET", baseURL+resPath+"video.mp4", nil) require.NoError(t, err) resp, err := http.DefaultClient.Do(req) require.NoError(t, err) defer fs.CheckClose(resp.Body, &err) assert.Equal(t, http.StatusOK, resp.StatusCode) actualContents, err := ioutil.ReadAll(resp.Body) assert.NoError(t, err) // Now compare the contents with the golden file. node, err := dlnaServer.vfs.Stat("/video.mp4") assert.NoError(t, err) goldenFile := node.(*vfs.File) goldenReader, err := goldenFile.Open(os.O_RDONLY) assert.NoError(t, err) defer fs.CheckClose(goldenReader, &err) goldenContents, err := ioutil.ReadAll(goldenReader) assert.NoError(t, err) require.Equal(t, goldenContents, actualContents) } // Check that ContentDirectory#Browse returns appropriate metadata on the root container. func TestContentDirectoryBrowseMetadata(t *testing.T) { // Sample from: https://github.com/rclone/rclone/issues/3253#issuecomment-524317469 req, err := http.NewRequest("POST", baseURL+serviceControlURL, strings.NewReader(` 0 BrowseMetadata * 0 0 `)) require.NoError(t, err) req.Header.Set("SOAPACTION", `"urn:schemas-upnp-org:service:ContentDirectory:1#Browse"`) resp, err := http.DefaultClient.Do(req) require.NoError(t, err) assert.Equal(t, http.StatusOK, resp.StatusCode) body, err := ioutil.ReadAll(resp.Body) require.NoError(t, err) // expect a element require.Contains(t, string(body), html.EscapeString("")) } // Check that the X_MS_MediaReceiverRegistrar is faked out properly. func TestMediaReceiverRegistrarService(t *testing.T) { env := soap.Envelope{ Body: soap.Body{ Action: []byte("RegisterDevice"), }, } req, err := http.NewRequest("POST", baseURL+serviceControlURL, bytes.NewReader(mustMarshalXML(env))) require.NoError(t, err) req.Header.Set("SOAPACTION", `"urn:microsoft.com:service:X_MS_MediaReceiverRegistrar:1#RegisterDevice"`) resp, err := http.DefaultClient.Do(req) require.NoError(t, err) assert.Equal(t, http.StatusOK, resp.StatusCode) body, err := ioutil.ReadAll(resp.Body) require.NoError(t, err) require.Contains(t, string(body), "") } // Check that ContentDirectory#Browse returns the expected items. func TestContentDirectoryBrowseDirectChildren(t *testing.T) { // First the root... 
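// (Browse on ObjectID "0", the root container; the same request shape
// as TestContentDirectoryBrowseMetadata above)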
req, err := http.NewRequest("POST", baseURL+serviceControlURL, strings.NewReader(` 0 BrowseDirectChildren * 0 0 `)) require.NoError(t, err) req.Header.Set("SOAPACTION", `"urn:schemas-upnp-org:service:ContentDirectory:1#Browse"`) resp, err := http.DefaultClient.Do(req) require.NoError(t, err) assert.Equal(t, http.StatusOK, resp.StatusCode) body, err := ioutil.ReadAll(resp.Body) require.NoError(t, err) // expect video.mp4, video.srt, video.en.srt URLs to be in the DIDL require.Contains(t, string(body), "/r/video.mp4") require.Contains(t, string(body), "/r/video.srt") require.Contains(t, string(body), "/r/video.en.srt") // Then a subdirectory req, err = http.NewRequest("POST", baseURL+serviceControlURL, strings.NewReader(` %2Fsubdir BrowseDirectChildren * 0 0 `)) require.NoError(t, err) req.Header.Set("SOAPACTION", `"urn:schemas-upnp-org:service:ContentDirectory:1#Browse"`) resp, err = http.DefaultClient.Do(req) require.NoError(t, err) assert.Equal(t, http.StatusOK, resp.StatusCode) body, err = ioutil.ReadAll(resp.Body) require.NoError(t, err) // expect video.mp4, video.srt, URLs to be in the DIDL require.Contains(t, string(body), "/r/subdir/video.mp4") require.Contains(t, string(body), "/r/subdir/video.srt") } rclone-1.53.3/cmd/serve/dlna/dlna_util.go000066400000000000000000000136371375552240400202150ustar00rootroot00000000000000package dlna import ( "crypto/md5" "encoding/xml" "errors" "fmt" "io" "log" "net" "net/http" "net/http/httptest" "net/http/httputil" "os" "regexp" "strconv" "strings" "github.com/anacrolix/dms/soap" "github.com/anacrolix/dms/upnp" "github.com/rclone/rclone/fs" ) // Return a default "friendly name" for the server. func makeDefaultFriendlyName() string { hostName, err := os.Hostname() if err != nil { hostName = "" } else { hostName = " (" + hostName + ")" } return "rclone" + hostName } func makeDeviceUUID(unique string) string { h := md5.New() if _, err := io.WriteString(h, unique); err != nil { log.Panicf("makeDeviceUUID write failed: %s", err) } buf := h.Sum(nil) return upnp.FormatUUID(buf) } // Get all available active network interfaces. func listInterfaces() []net.Interface { ifs, err := net.Interfaces() if err != nil { log.Printf("list network interfaces: %v", err) return []net.Interface{} } var active []net.Interface for _, intf := range ifs { if intf.Flags&net.FlagUp != 0 && intf.Flags&net.FlagMulticast != 0 && intf.MTU > 0 { active = append(active, intf) } } return active } func didlLite(chardata string) string { return `` + chardata + `` } func mustMarshalXML(value interface{}) []byte { ret, err := xml.MarshalIndent(value, "", " ") if err != nil { log.Panicf("mustMarshalXML failed to marshal %v: %s", value, err) } return ret } // Marshal SOAP response arguments into a response XML snippet. 
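//
// For a Browse action on the ContentDirectory service the wrapper is,
// schematically (illustrative; argument content elided):
//
//	<u:BrowseResponse xmlns:u="urn:schemas-upnp-org:service:ContentDirectory:1">
//	  <Result>...</Result>
//	</u:BrowseResponse>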
func marshalSOAPResponse(sa upnp.SoapAction, args map[string]string) []byte { soapArgs := make([]soap.Arg, 0, len(args)) for argName, value := range args { soapArgs = append(soapArgs, soap.Arg{ XMLName: xml.Name{Local: argName}, Value: value, }) } return []byte(fmt.Sprintf(`%[3]s`, sa.Action, sa.ServiceURN.String(), mustMarshalXML(soapArgs))) } var serviceURNRegexp = regexp.MustCompile(`:service:(\w+):(\d+)$`) func parseServiceType(s string) (ret upnp.ServiceURN, err error) { matches := serviceURNRegexp.FindStringSubmatch(s) if matches == nil { err = errors.New(s) return } if len(matches) != 3 { log.Panicf("Invalid serviceURNRegexp ?") } ret.Type = matches[1] ret.Version, err = strconv.ParseUint(matches[2], 0, 0) return } func parseActionHTTPHeader(s string) (ret upnp.SoapAction, err error) { if s[0] != '"' || s[len(s)-1] != '"' { return } s = s[1 : len(s)-1] hashIndex := strings.LastIndex(s, "#") if hashIndex == -1 { return } ret.Action = s[hashIndex+1:] ret.ServiceURN, err = parseServiceType(s[:hashIndex]) return } type loggingResponseWriter struct { http.ResponseWriter request *http.Request committed bool } func (lrw *loggingResponseWriter) logRequest(code int, err interface{}) { // Choose appropriate log level based on response status code. var level fs.LogLevel if code < 400 && err == nil { level = fs.LogLevelInfo } else { level = fs.LogLevelError } if err == nil { err = "" } fs.LogPrintf(level, lrw.request.URL, "%s %s %d %s %s", lrw.request.RemoteAddr, lrw.request.Method, code, lrw.request.Header.Get("SOAPACTION"), err) } func (lrw *loggingResponseWriter) WriteHeader(code int) { lrw.committed = true lrw.logRequest(code, nil) lrw.ResponseWriter.WriteHeader(code) } // HTTP handler that logs requests and any errors or panics. func logging(next http.Handler) http.Handler { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { lrw := &loggingResponseWriter{ResponseWriter: w, request: r} defer func() { err := recover() if err != nil { if !lrw.committed { lrw.logRequest(http.StatusInternalServerError, err) http.Error(w, fmt.Sprint(err), http.StatusInternalServerError) } else { // Too late to send the error to client, but at least log it. fs.Errorf(r.URL.Path, "Recovered panic: %v", err) } } }() next.ServeHTTP(lrw, r) }) } // HTTP handler that logs complete request and response bodies for debugging. // Error recovery and general request logging are left to logging(). func traceLogging(next http.Handler) http.Handler { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { dump, err := httputil.DumpRequest(r, true) if err != nil { serveError(nil, w, "error dumping request", err) return } fs.Debugf(nil, "%s", dump) recorder := httptest.NewRecorder() next.ServeHTTP(recorder, r) dump, err = httputil.DumpResponse(recorder.Result(), true) if err != nil { // log the error but ignore it fs.Errorf(nil, "error dumping response: %v", err) } else { fs.Debugf(nil, "%s", dump) } // copy from recorder to the real response writer for k, v := range recorder.Header() { w.Header()[k] = v } w.WriteHeader(recorder.Code) _, err = recorder.Body.WriteTo(w) if err != nil { // Network error fs.Debugf(nil, "Error writing response: %v", err) } }) } // HTTP handler that sets headers. 
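//
// newServer uses this, for example, to stamp every response:
//
//	s.handler = logging(withHeader("Server", serverField, r))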
func withHeader(name string, value string, next http.Handler) http.Handler { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Header().Set(name, value) next.ServeHTTP(w, r) }) } // serveError returns an http.StatusInternalServerError and logs the error func serveError(what interface{}, w http.ResponseWriter, text string, err error) { err = fs.CountError(err) fs.Errorf(what, "%s: %v", text, err) http.Error(w, text+".", http.StatusInternalServerError) } // Splits a path into (root, ext) such that root + ext == path, and ext is empty // or begins with a period. Extended version of path.Ext(). func splitExt(path string) (string, string) { for i := len(path) - 1; i >= 0 && path[i] != '/'; i-- { if path[i] == '.' { return path[:i], path[i:] } } return path, "" } rclone-1.53.3/cmd/serve/dlna/dlnaflags/000077500000000000000000000000001375552240400176345ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/dlna/dlnaflags/dlnaflags.go000066400000000000000000000030431375552240400221160ustar00rootroot00000000000000package dlnaflags import ( "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/rc" "github.com/spf13/pflag" ) // Help contains the text for the command line help and manual. var Help = ` ### Server options Use ` + "`--addr`" + ` to specify which IP address and port the server should listen on, eg ` + "`--addr 1.2.3.4:8000` or `--addr :8080`" + ` to listen to all IPs. Use ` + "`--name`" + ` to choose the friendly server name, which is by default "rclone (hostname)". Use ` + "`--log-trace` in conjunction with `-vv`" + ` to enable additional debug logging of all UPNP traffic. ` // Options is the type for DLNA serving options. type Options struct { ListenAddr string FriendlyName string LogTrace bool } // DefaultOpt contains the defaults options for DLNA serving. var DefaultOpt = Options{ ListenAddr: ":7879", FriendlyName: "", LogTrace: false, } // Opt contains the options for DLNA serving. var ( Opt = DefaultOpt ) func addFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *Options) { rc.AddOption("dlna", &Opt) flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "ip:port or :port to bind the DLNA http server to.") flags.StringVarP(flagSet, &Opt.FriendlyName, prefix+"name", "", Opt.FriendlyName, "name of DLNA server") flags.BoolVarP(flagSet, &Opt.LogTrace, prefix+"log-trace", "", Opt.LogTrace, "enable trace logging of SOAP traffic") } // AddFlags add the command line flags for DLNA serving. 
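//
// Wired up from the dlna command's init():
//
//	dlnaflags.AddFlags(Command.Flags())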
func AddFlags(flagSet *pflag.FlagSet) { addFlagsPrefix(flagSet, "", &Opt) } rclone-1.53.3/cmd/serve/dlna/mrrs.go000066400000000000000000000010341375552240400172110ustar00rootroot00000000000000package dlna import ( "net/http" "github.com/anacrolix/dms/upnp" ) type mediaReceiverRegistrarService struct { *server upnp.Eventing } func (mrrs *mediaReceiverRegistrarService) Handle(action string, argsXML []byte, r *http.Request) (map[string]string, error) { switch action { case "IsAuthorized", "IsValidated": return map[string]string{ "Result": "1", }, nil case "RegisterDevice": return map[string]string{ "RegistrationRespMsg": mrrs.RootDeviceUUID, }, nil default: return nil, upnp.InvalidActionError } } rclone-1.53.3/cmd/serve/dlna/testdata/000077500000000000000000000000001375552240400175125ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/dlna/testdata/files/000077500000000000000000000000001375552240400206145ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/dlna/testdata/files/small_jpeg.jpg000066400000000000000000000001531375552240400234320ustar00rootroot00000000000000C        ? rclone-1.53.3/cmd/serve/dlna/testdata/files/subdir/000077500000000000000000000000001375552240400221045ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/dlna/testdata/files/subdir/video.mp4000066400000000000000000000004061375552240400236340ustar00rootroot00000000000000 ftypisomisomiso2avc1mp41freemdatmoovlmvhd@budtaZmeta!hdlrmdirappl-ilst%toodataLavf57.41.100rclone-1.53.3/cmd/serve/dlna/testdata/files/subdir/video.srt000066400000000000000000000000451375552240400237430ustar00rootroot000000000000001 00:00:00,000 --> 00:02:00,000 Test rclone-1.53.3/cmd/serve/dlna/testdata/files/video.en.srt000066400000000000000000000000451375552240400230540ustar00rootroot000000000000001 00:00:00,000 --> 00:02:00,000 Test rclone-1.53.3/cmd/serve/dlna/testdata/files/video.mp4000066400000000000000000000004061375552240400223440ustar00rootroot00000000000000 ftypisomisomiso2avc1mp41freemdatmoovlmvhd@budtaZmeta!hdlrmdirappl-ilst%toodataLavf57.41.100rclone-1.53.3/cmd/serve/dlna/testdata/files/video.nfo000066400000000000000000000000001375552240400224140ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/dlna/testdata/files/video.srt000066400000000000000000000000451375552240400224530ustar00rootroot000000000000001 00:00:00,000 --> 00:02:00,000 Test rclone-1.53.3/cmd/serve/dlna/upnpav/000077500000000000000000000000001375552240400172125ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/dlna/upnpav/upnpav.go000066400000000000000000000033471375552240400210610ustar00rootroot00000000000000package upnpav import ( "encoding/xml" "time" ) const ( // NoSuchObjectErrorCode : The specified ObjectID is invalid. 
NoSuchObjectErrorCode = 701 ) // Resource description type Resource struct { XMLName xml.Name `xml:"res"` ProtocolInfo string `xml:"protocolInfo,attr"` URL string `xml:",chardata"` Size uint64 `xml:"size,attr,omitempty"` Bitrate uint `xml:"bitrate,attr,omitempty"` Duration string `xml:"duration,attr,omitempty"` Resolution string `xml:"resolution,attr,omitempty"` } // Container description type Container struct { Object XMLName xml.Name `xml:"container"` ChildCount *int `xml:"childCount,attr"` } // Item description type Item struct { Object XMLName xml.Name `xml:"item"` Res []Resource InnerXML string `xml:",innerxml"` } // Object description type Object struct { ID string `xml:"id,attr"` ParentID string `xml:"parentID,attr"` Restricted int `xml:"restricted,attr"` // indicates whether the object is modifiable Class string `xml:"upnp:class"` Icon string `xml:"upnp:icon,omitempty"` Title string `xml:"dc:title"` Date Timestamp `xml:"dc:date"` Artist string `xml:"upnp:artist,omitempty"` Album string `xml:"upnp:album,omitempty"` Genre string `xml:"upnp:genre,omitempty"` AlbumArtURI string `xml:"upnp:albumArtURI,omitempty"` Searchable int `xml:"searchable,attr"` } // Timestamp wraps time.Time for formatting purposes type Timestamp struct { time.Time } // MarshalXML formats the Timestamp per DIDL-Lite spec func (t Timestamp) MarshalXML(e *xml.Encoder, start xml.StartElement) error { return e.EncodeElement(t.Format("2006-01-02"), start) } rclone-1.53.3/cmd/serve/ftp/000077500000000000000000000000001375552240400155545ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/ftp/ftp.go000066400000000000000000000300231375552240400166720ustar00rootroot00000000000000// Package ftp implements an FTP server for rclone //+build !plan9,go1.13 package ftp import ( "fmt" "io" "net" "os" "os/user" "strconv" "sync" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/serve/proxy" "github.com/rclone/rclone/cmd/serve/proxy/proxyflags" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/vfs" "github.com/rclone/rclone/vfs/vfsflags" "github.com/spf13/cobra" "github.com/spf13/pflag" ftp "goftp.io/server/core" ) // Options contains options for the http Server type Options struct { //TODO add more options ListenAddr string // Port to listen on PublicIP string // Passive ports range PassivePorts string // Passive ports range BasicUser string // single username for basic auth if not using Htpasswd BasicPass string // password for BasicUser } // DefaultOpt is the default values used for Options var DefaultOpt = Options{ ListenAddr: "localhost:2121", PublicIP: "", PassivePorts: "30000-32000", BasicUser: "anonymous", BasicPass: "", } // Opt is options set by command line flags var Opt = DefaultOpt // AddFlags adds flags for ftp func AddFlags(flagSet *pflag.FlagSet) { rc.AddOption("ftp", &Opt) flags.StringVarP(flagSet, &Opt.ListenAddr, "addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to.") flags.StringVarP(flagSet, &Opt.PublicIP, "public-ip", "", Opt.PublicIP, "Public IP address to advertise for passive connections.") flags.StringVarP(flagSet, &Opt.PassivePorts, "passive-port", "", Opt.PassivePorts, "Passive port range to use.") flags.StringVarP(flagSet, &Opt.BasicUser, "user", "", Opt.BasicUser, "User name for authentication.") flags.StringVarP(flagSet, &Opt.BasicPass, "pass", "", Opt.BasicPass, "Password for authentication. 
(empty value allow every password)") } func init() { vfsflags.AddFlags(Command.Flags()) proxyflags.AddFlags(Command.Flags()) AddFlags(Command.Flags()) } // Command definition for cobra var Command = &cobra.Command{ Use: "ftp remote:path", Short: `Serve remote:path over FTP.`, Long: ` rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with a ftp client or you can make a remote of type ftp to read and write it. ### Server options Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. #### Authentication By default this will serve files without needing a login. You can set a single username and password with the --user and --pass flags. ` + vfs.Help + proxy.Help, Run: func(command *cobra.Command, args []string) { var f fs.Fs if proxyflags.Opt.AuthProxy == "" { cmd.CheckArgs(1, 1, command, args) f = cmd.NewFsSrc(args) } else { cmd.CheckArgs(0, 0, command, args) } cmd.Run(false, false, command, func() error { s, err := newServer(f, &Opt) if err != nil { return err } return s.serve() }) }, } // server contains everything to run the server type server struct { f fs.Fs srv *ftp.Server opt Options vfs *vfs.VFS proxy *proxy.Proxy } // Make a new FTP to serve the remote func newServer(f fs.Fs, opt *Options) (*server, error) { host, port, err := net.SplitHostPort(opt.ListenAddr) if err != nil { return nil, errors.New("Failed to parse host:port") } portNum, err := strconv.Atoi(port) if err != nil { return nil, errors.New("Failed to parse host:port") } s := &server{ f: f, opt: *opt, } if proxyflags.Opt.AuthProxy != "" { s.proxy = proxy.New(&proxyflags.Opt) } else { s.vfs = vfs.New(f, &vfsflags.Opt) } ftpopt := &ftp.ServerOpts{ Name: "Rclone FTP Server", WelcomeMessage: "Welcome to Rclone " + fs.Version + " FTP Server", Factory: s, // implemented by NewDriver method Hostname: host, Port: portNum, PublicIP: opt.PublicIP, PassivePorts: opt.PassivePorts, Auth: s, // implemented by CheckPasswd method Logger: &Logger{}, //TODO implement a maximum of https://godoc.org/goftp.io/server#ServerOpts } s.srv = ftp.NewServer(ftpopt) return s, nil } // serve runs the ftp server func (s *server) serve() error { fs.Logf(s.f, "Serving FTP on %s", s.srv.Hostname+":"+strconv.Itoa(s.srv.Port)) return s.srv.ListenAndServe() } // serve runs the ftp server func (s *server) close() error { fs.Logf(s.f, "Stopping FTP on %s", s.srv.Hostname+":"+strconv.Itoa(s.srv.Port)) return s.srv.Shutdown() } //Logger ftp logger output formatted message type Logger struct{} //Print log simple text message func (l *Logger) Print(sessionID string, message interface{}) { fs.Infof(sessionID, "%s", message) } //Printf log formatted text message func (l *Logger) Printf(sessionID string, format string, v ...interface{}) { fs.Infof(sessionID, format, v...) 
} //PrintCommand log formatted command execution func (l *Logger) PrintCommand(sessionID string, command string, params string) { if command == "PASS" { fs.Infof(sessionID, "> PASS ****") } else { fs.Infof(sessionID, "> %s %s", command, params) } } //PrintResponse log responses func (l *Logger) PrintResponse(sessionID string, code int, message string) { fs.Infof(sessionID, "< %d %s", code, message) } // CheckPasswd handle auth based on configuration // // This is not used - the one in Driver should be called instead func (s *server) CheckPasswd(user, pass string) (ok bool, err error) { err = errors.New("internal error: server.CheckPasswd should never be called") fs.Errorf(nil, "Error: %v", err) return false, err } // NewDriver starts a new session for each client connection func (s *server) NewDriver() (ftp.Driver, error) { log.Trace("", "Init driver")("") d := &Driver{ s: s, vfs: s.vfs, // this can be nil if proxy set } return d, nil } //Driver implementation of ftp server type Driver struct { s *server vfs *vfs.VFS lock sync.Mutex } // CheckPasswd handle auth based on configuration func (d *Driver) CheckPasswd(user, pass string) (ok bool, err error) { s := d.s if s.proxy != nil { var VFS *vfs.VFS VFS, _, err = s.proxy.Call(user, pass, false) if err != nil { fs.Infof(nil, "proxy login failed: %v", err) return false, nil } d.vfs = VFS } else { ok = s.opt.BasicUser == user && (s.opt.BasicPass == "" || s.opt.BasicPass == pass) if !ok { fs.Infof(nil, "login failed: bad credentials") return false, nil } } return true, nil } //Stat get information on file or folder func (d *Driver) Stat(path string) (fi ftp.FileInfo, err error) { defer log.Trace(path, "")("fi=%+v, err = %v", &fi, &err) n, err := d.vfs.Stat(path) if err != nil { return nil, err } return &FileInfo{n, n.Mode(), d.vfs.Opt.UID, d.vfs.Opt.GID}, err } //ChangeDir move current folder func (d *Driver) ChangeDir(path string) (err error) { d.lock.Lock() defer d.lock.Unlock() defer log.Trace(path, "")("err = %v", &err) n, err := d.vfs.Stat(path) if err != nil { return err } if !n.IsDir() { return errors.New("Not a directory") } return nil } //ListDir list content of a folder func (d *Driver) ListDir(path string, callback func(ftp.FileInfo) error) (err error) { d.lock.Lock() defer d.lock.Unlock() defer log.Trace(path, "")("err = %v", &err) node, err := d.vfs.Stat(path) if err == vfs.ENOENT { return errors.New("Directory not found") } else if err != nil { return err } if !node.IsDir() { return errors.New("Not a directory") } dir := node.(*vfs.Dir) dirEntries, err := dir.ReadDirAll() if err != nil { return err } // Account the transfer tr := accounting.GlobalStats().NewTransferRemoteSize(path, node.Size()) defer func() { tr.Done(err) }() for _, file := range dirEntries { err = callback(&FileInfo{file, file.Mode(), d.vfs.Opt.UID, d.vfs.Opt.GID}) if err != nil { return err } } return nil } //DeleteDir delete a folder and his content func (d *Driver) DeleteDir(path string) (err error) { d.lock.Lock() defer d.lock.Unlock() defer log.Trace(path, "")("err = %v", &err) node, err := d.vfs.Stat(path) if err != nil { return err } if !node.IsDir() { return errors.New("Not a directory") } err = node.Remove() if err != nil { return err } return nil } //DeleteFile delete a file func (d *Driver) DeleteFile(path string) (err error) { d.lock.Lock() defer d.lock.Unlock() defer log.Trace(path, "")("err = %v", &err) node, err := d.vfs.Stat(path) if err != nil { return err } if !node.IsFile() { return errors.New("Not a file") } err = node.Remove() if err != nil { 
return err } return nil } //Rename rename a file or folder func (d *Driver) Rename(oldName, newName string) (err error) { d.lock.Lock() defer d.lock.Unlock() defer log.Trace(oldName, "newName=%q", newName)("err = %v", &err) return d.vfs.Rename(oldName, newName) } //MakeDir create a folder func (d *Driver) MakeDir(path string) (err error) { d.lock.Lock() defer d.lock.Unlock() defer log.Trace(path, "")("err = %v", &err) dir, leaf, err := d.vfs.StatParent(path) if err != nil { return err } _, err = dir.Mkdir(leaf) return err } //GetFile download a file func (d *Driver) GetFile(path string, offset int64) (size int64, fr io.ReadCloser, err error) { d.lock.Lock() defer d.lock.Unlock() defer log.Trace(path, "offset=%v", offset)("err = %v", &err) node, err := d.vfs.Stat(path) if err == vfs.ENOENT { fs.Infof(path, "File not found") return 0, nil, errors.New("File not found") } else if err != nil { return 0, nil, err } if !node.IsFile() { return 0, nil, errors.New("Not a file") } handle, err := node.Open(os.O_RDONLY) if err != nil { return 0, nil, err } _, err = handle.Seek(offset, io.SeekStart) if err != nil { return 0, nil, err } // Account the transfer tr := accounting.GlobalStats().NewTransferRemoteSize(path, node.Size()) defer tr.Done(nil) return node.Size(), handle, nil } //PutFile upload a file func (d *Driver) PutFile(path string, data io.Reader, appendData bool) (n int64, err error) { d.lock.Lock() defer d.lock.Unlock() defer log.Trace(path, "append=%v", appendData)("err = %v", &err) var isExist bool node, err := d.vfs.Stat(path) if err == nil { isExist = true if node.IsDir() { return 0, errors.New("A dir has the same name") } } else { if os.IsNotExist(err) { isExist = false } else { return 0, err } } if appendData && !isExist { appendData = false } if !appendData { if isExist { err = node.Remove() if err != nil { return 0, err } } f, err := d.vfs.OpenFile(path, os.O_RDWR|os.O_CREATE, 0660) if err != nil { return 0, err } defer closeIO(path, f) bytes, err := io.Copy(f, data) if err != nil { return 0, err } return bytes, nil } of, err := d.vfs.OpenFile(path, os.O_APPEND|os.O_RDWR, 0660) if err != nil { return 0, err } defer closeIO(path, of) _, err = of.Seek(0, os.SEEK_END) if err != nil { return 0, err } bytes, err := io.Copy(of, data) if err != nil { return 0, err } return bytes, nil } //FileInfo struct to hold file info for ftp server type FileInfo struct { os.FileInfo mode os.FileMode owner uint32 group uint32 } //Mode return mode of file. func (f *FileInfo) Mode() os.FileMode { return f.mode } //Owner return owner of file. Try to find the username if possible func (f *FileInfo) Owner() string { str := fmt.Sprint(f.owner) u, err := user.LookupId(str) if err != nil { return str //User not found } return u.Username } //Group return group of file. Try to find the group name if possible func (f *FileInfo) Group() string { str := fmt.Sprint(f.group) g, err := user.LookupGroupId(str) if err != nil { return str //Group not found default to numerical value } return g.Name } func closeIO(path string, c io.Closer) { err := c.Close() if err != nil { log.Trace(path, "")("err = %v", &err) } } rclone-1.53.3/cmd/serve/ftp/ftp_test.go000066400000000000000000000031471375552240400177400ustar00rootroot00000000000000// Serve ftp tests set up a server and run the integration tests // for the ftp remote against it. 
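// The server side is configured from DefaultOpt with the fixed host,
// port and credentials defined in the constants below.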
// // We skip tests on platforms with troublesome character mappings //+build !windows,!darwin,!plan9,go1.13 package ftp import ( "testing" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/cmd/serve/servetest" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/obscure" "github.com/stretchr/testify/assert" ftp "goftp.io/server/core" ) const ( testHOST = "localhost" testPORT = "51780" testPASSIVEPORTRANGE = "30000-32000" testUSER = "rclone" testPASS = "password" ) // TestFTP runs the ftp server then runs the unit tests for the // ftp remote against it. func TestFTP(t *testing.T) { // Configure and start the server start := func(f fs.Fs) (configmap.Simple, func()) { opt := DefaultOpt opt.ListenAddr = testHOST + ":" + testPORT opt.PassivePorts = testPASSIVEPORTRANGE opt.BasicUser = testUSER opt.BasicPass = testPASS w, err := newServer(f, &opt) assert.NoError(t, err) quit := make(chan struct{}) go func() { err := w.serve() close(quit) if err != ftp.ErrServerClosed { assert.NoError(t, err) } }() // Config for the backend we'll use to connect to the server config := configmap.Simple{ "type": "ftp", "host": testHOST, "port": testPORT, "user": testUSER, "pass": obscure.MustObscure(testPASS), } return config, func() { err := w.close() assert.NoError(t, err) <-quit } } servetest.Run(t, "ftp", start) } rclone-1.53.3/cmd/serve/ftp/ftp_unsupported.go000066400000000000000000000004021375552240400213400ustar00rootroot00000000000000// Build for unsupported platforms to stop go complaining // about "no buildable Go source files " // +build plan9 !go1.13 package ftp import "github.com/spf13/cobra" // Command definition is nil to show not implemented var Command *cobra.Command = nil rclone-1.53.3/cmd/serve/http/000077500000000000000000000000001375552240400157425ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/http/http.go000066400000000000000000000127141375552240400172550ustar00rootroot00000000000000package http import ( "net/http" "os" "path" "strconv" "strings" "time" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/serve/httplib" "github.com/rclone/rclone/cmd/serve/httplib/httpflags" "github.com/rclone/rclone/cmd/serve/httplib/serve" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/vfs" "github.com/rclone/rclone/vfs/vfsflags" "github.com/spf13/cobra" ) func init() { httpflags.AddFlags(Command.Flags()) vfsflags.AddFlags(Command.Flags()) } // Command definition for cobra var Command = &cobra.Command{ Use: "http remote:path", Short: `Serve the remote over HTTP.`, Long: `rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it. You can use the filter flags (eg --include, --exclude) to control what is served. The server will log errors. Use -v to see access logs. --bwlimit will be respected for file transfers. Use --stats to control the stats printing. 
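For example, to serve remote:path on all interfaces on port 8080
(illustrative invocation):

    rclone serve http remote:path --addr :8080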
` + httplib.Help + vfs.Help, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) f := cmd.NewFsSrc(args) cmd.Run(false, true, command, func() error { s := newServer(f, &httpflags.Opt) err := s.Serve() if err != nil { return err } s.Wait() return nil }) }, } // server contains everything to run the server type server struct { *httplib.Server f fs.Fs vfs *vfs.VFS } func newServer(f fs.Fs, opt *httplib.Options) *server { mux := http.NewServeMux() s := &server{ Server: httplib.NewServer(mux, opt), f: f, vfs: vfs.New(f, &vfsflags.Opt), } mux.HandleFunc(s.Opt.BaseURL+"/", s.handler) return s } // Serve runs the http server in the background. // // Use s.Close() and s.Wait() to shutdown server func (s *server) Serve() error { err := s.Server.Serve() if err != nil { return err } fs.Logf(s.f, "Serving on %s", s.URL()) return nil } // handler reads incoming requests and dispatches them func (s *server) handler(w http.ResponseWriter, r *http.Request) { if r.Method != "GET" && r.Method != "HEAD" { http.Error(w, "Method not allowed", http.StatusMethodNotAllowed) return } w.Header().Set("Accept-Ranges", "bytes") w.Header().Set("Server", "rclone/"+fs.Version) urlPath, ok := s.Path(w, r) if !ok { return } isDir := strings.HasSuffix(urlPath, "/") remote := strings.Trim(urlPath, "/") if isDir { s.serveDir(w, r, remote) } else { s.serveFile(w, r, remote) } } // serveDir serves a directory index at dirRemote func (s *server) serveDir(w http.ResponseWriter, r *http.Request, dirRemote string) { // List the directory node, err := s.vfs.Stat(dirRemote) if err == vfs.ENOENT { http.Error(w, "Directory not found", http.StatusNotFound) return } else if err != nil { serve.Error(dirRemote, w, "Failed to list directory", err) return } if !node.IsDir() { http.Error(w, "Not a directory", http.StatusNotFound) return } dir := node.(*vfs.Dir) dirEntries, err := dir.ReadDirAll() if err != nil { serve.Error(dirRemote, w, "Failed to list directory", err) return } // Make the entries for display directory := serve.NewDirectory(dirRemote, s.HTMLTemplate) for _, node := range dirEntries { if vfsflags.Opt.NoModTime { directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), time.Time{}) } else { directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), node.ModTime().UTC()) } } sortParm := r.URL.Query().Get("sort") orderParm := r.URL.Query().Get("order") directory.ProcessQueryParams(sortParm, orderParm) // Set the Last-Modified header to the timestamp w.Header().Set("Last-Modified", dir.ModTime().UTC().Format(http.TimeFormat)) directory.Serve(w, r) } // serveFile serves a file object at remote func (s *server) serveFile(w http.ResponseWriter, r *http.Request, remote string) { node, err := s.vfs.Stat(remote) if err == vfs.ENOENT { fs.Infof(remote, "%s: File not found", r.RemoteAddr) http.Error(w, "File not found", http.StatusNotFound) return } else if err != nil { serve.Error(remote, w, "Failed to find file", err) return } if !node.IsFile() { http.Error(w, "Not a file", http.StatusNotFound) return } entry := node.DirEntry() if entry == nil { http.Error(w, "Can't open file being written", http.StatusNotFound) return } obj := entry.(fs.Object) file := node.(*vfs.File) // Set content length since we know how long the object is w.Header().Set("Content-Length", strconv.FormatInt(node.Size(), 10)) // Set content type mimeType := fs.MimeType(r.Context(), obj) if mimeType == "application/octet-stream" && path.Ext(remote) == "" { // Leave header blank so http server guesses } else { 
w.Header().Set("Content-Type", mimeType) } // Set the Last-Modified header to the timestamp w.Header().Set("Last-Modified", file.ModTime().UTC().Format(http.TimeFormat)) // If HEAD no need to read the object since we have set the headers if r.Method == "HEAD" { return } // open the object in, err := file.Open(os.O_RDONLY) if err != nil { serve.Error(remote, w, "Failed to open file", err) return } defer func() { err := in.Close() if err != nil { fs.Errorf(remote, "Failed to close file: %v", err) } }() // Account the transfer tr := accounting.Stats(r.Context()).NewTransfer(obj) defer tr.Done(nil) // FIXME in = fs.NewAccount(in, obj).WithBuffer() // account the transfer // Serve the file http.ServeContent(w, r, remote, node.ModTime(), in) } rclone-1.53.3/cmd/serve/http/http_test.go000066400000000000000000000124571375552240400203200ustar00rootroot00000000000000package http import ( "context" "flag" "io/ioutil" "net/http" "strings" "testing" "time" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/cmd/serve/httplib" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/filter" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) var ( updateGolden = flag.Bool("updategolden", false, "update golden files for regression test") httpServer *server testURL string ) const ( testBindAddress = "localhost:0" testTemplate = "testdata/golden/testindex.html" ) func startServer(t *testing.T, f fs.Fs) { opt := httplib.DefaultOpt opt.ListenAddr = testBindAddress opt.Template = testTemplate httpServer = newServer(f, &opt) assert.NoError(t, httpServer.Serve()) testURL = httpServer.Server.URL() // try to connect to the test server pause := time.Millisecond for i := 0; i < 10; i++ { resp, err := http.Head(testURL) if err == nil { _ = resp.Body.Close() return } // t.Logf("couldn't connect, sleeping for %v: %v", pause, err) time.Sleep(pause) pause *= 2 } t.Fatal("couldn't connect to server") } var ( datedObject = "two.txt" expectedTime = time.Date(2000, 1, 2, 3, 4, 5, 0, time.UTC) ) func TestInit(t *testing.T) { // Configure the remote config.LoadConfig() // fs.Config.LogLevel = fs.LogLevelDebug // fs.Config.DumpHeaders = true // fs.Config.DumpBodies = true // exclude files called hidden.txt and directories called hidden require.NoError(t, filter.Active.AddRule("- hidden.txt")) require.NoError(t, filter.Active.AddRule("- hidden/**")) // Create a test Fs f, err := fs.NewFs("testdata/files") require.NoError(t, err) // set date of datedObject to expectedTime obj, err := f.NewObject(context.Background(), datedObject) require.NoError(t, err) require.NoError(t, obj.SetModTime(context.Background(), expectedTime)) startServer(t, f) } // check body against the file, or re-write body if -updategolden is // set. 
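//
// For example (run from the repository root; illustrative):
//
//	go test ./cmd/serve/http -updategolden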
func checkGolden(t *testing.T, fileName string, got []byte) { if *updateGolden { t.Logf("Updating golden file %q", fileName) err := ioutil.WriteFile(fileName, got, 0666) require.NoError(t, err) } else { want, err := ioutil.ReadFile(fileName) require.NoError(t, err) wants := strings.Split(string(want), "\n") gots := strings.Split(string(got), "\n") assert.Equal(t, wants, gots, fileName) } } func TestGET(t *testing.T) { for _, test := range []struct { URL string Status int Golden string Method string Range string }{ { URL: "", Status: http.StatusOK, Golden: "testdata/golden/index.html", }, { URL: "notfound", Status: http.StatusNotFound, Golden: "testdata/golden/notfound.html", }, { URL: "dirnotfound/", Status: http.StatusNotFound, Golden: "testdata/golden/dirnotfound.html", }, { URL: "hidden/", Status: http.StatusNotFound, Golden: "testdata/golden/hiddendir.html", }, { URL: "one%25.txt", Status: http.StatusOK, Golden: "testdata/golden/one.txt", }, { URL: "hidden.txt", Status: http.StatusNotFound, Golden: "testdata/golden/hidden.txt", }, { URL: "three/", Status: http.StatusOK, Golden: "testdata/golden/three.html", }, { URL: "three/a.txt", Status: http.StatusOK, Golden: "testdata/golden/a.txt", }, { URL: "", Method: "HEAD", Status: http.StatusOK, Golden: "testdata/golden/indexhead.txt", }, { URL: "one%25.txt", Method: "HEAD", Status: http.StatusOK, Golden: "testdata/golden/onehead.txt", }, { URL: "", Method: "POST", Status: http.StatusMethodNotAllowed, Golden: "testdata/golden/indexpost.txt", }, { URL: "one%25.txt", Method: "POST", Status: http.StatusMethodNotAllowed, Golden: "testdata/golden/onepost.txt", }, { URL: "two.txt", Status: http.StatusOK, Golden: "testdata/golden/two.txt", }, { URL: "two.txt", Status: http.StatusPartialContent, Range: "bytes=2-5", Golden: "testdata/golden/two2-5.txt", }, { URL: "two.txt", Status: http.StatusPartialContent, Range: "bytes=0-6", Golden: "testdata/golden/two-6.txt", }, { URL: "two.txt", Status: http.StatusPartialContent, Range: "bytes=3-", Golden: "testdata/golden/two3-.txt", }, } { method := test.Method if method == "" { method = "GET" } req, err := http.NewRequest(method, testURL+test.URL, nil) require.NoError(t, err) if test.Range != "" { req.Header.Add("Range", test.Range) } resp, err := http.DefaultClient.Do(req) require.NoError(t, err) assert.Equal(t, test.Status, resp.StatusCode, test.Golden) body, err := ioutil.ReadAll(resp.Body) require.NoError(t, err) // Check we got a Last-Modifed header and that it is a valid date if test.Status == http.StatusOK || test.Status == http.StatusPartialContent { lastModified := resp.Header.Get("Last-Modified") assert.NotEqual(t, "", lastModified, test.Golden) modTime, err := http.ParseTime(lastModified) assert.NoError(t, err, test.Golden) // check the actual date on our special file if test.URL == datedObject { assert.Equal(t, expectedTime, modTime, test.Golden) } } checkGolden(t, test.Golden, body) } } func TestFinalise(t *testing.T) { httpServer.Close() httpServer.Wait() } rclone-1.53.3/cmd/serve/http/testdata/000077500000000000000000000000001375552240400175535ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/http/testdata/files/000077500000000000000000000000001375552240400206555ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/http/testdata/files/hidden.txt000066400000000000000000000000071375552240400226460ustar00rootroot00000000000000hidden 
rclone-1.53.3/cmd/serve/http/testdata/files/hidden/000077500000000000000000000000001375552240400221105ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/http/testdata/files/hidden/file.txt000066400000000000000000000000131375552240400235620ustar00rootroot00000000000000hiddenfile rclone-1.53.3/cmd/serve/http/testdata/files/one%.txt000066400000000000000000000000051375552240400222370ustar00rootroot00000000000000one% rclone-1.53.3/cmd/serve/http/testdata/files/three/000077500000000000000000000000001375552240400217645ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/http/testdata/files/three/a.txt000066400000000000000000000000061375552240400227410ustar00rootroot00000000000000three rclone-1.53.3/cmd/serve/http/testdata/files/three/b.txt000066400000000000000000000000071375552240400227430ustar00rootroot00000000000000threeb rclone-1.53.3/cmd/serve/http/testdata/files/two.txt000066400000000000000000000000131375552240400222210ustar00rootroot000000000000000123456789 rclone-1.53.3/cmd/serve/http/testdata/golden/000077500000000000000000000000001375552240400210235ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/http/testdata/golden/a.txt000066400000000000000000000000061375552240400220000ustar00rootroot00000000000000three rclone-1.53.3/cmd/serve/http/testdata/golden/dirnotfound.html000066400000000000000000000000241375552240400242400ustar00rootroot00000000000000Directory not found rclone-1.53.3/cmd/serve/http/testdata/golden/hidden.txt000066400000000000000000000000171375552240400230150ustar00rootroot00000000000000File not found rclone-1.53.3/cmd/serve/http/testdata/golden/hiddendir.html000066400000000000000000000000241375552240400236370ustar00rootroot00000000000000Directory not found rclone-1.53.3/cmd/serve/http/testdata/golden/index.html000066400000000000000000000004221375552240400230160ustar00rootroot00000000000000 Directory listing of /

Directory listing of /

three/
one%.txt
two.txt
rclone-1.53.3/cmd/serve/http/testdata/golden/indexhead.txt000066400000000000000000000000001375552240400235030ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/http/testdata/golden/indexpost.txt000066400000000000000000000000231375552240400235740ustar00rootroot00000000000000Method not allowed rclone-1.53.3/cmd/serve/http/testdata/golden/notfound.html000066400000000000000000000000171375552240400235430ustar00rootroot00000000000000File not found rclone-1.53.3/cmd/serve/http/testdata/golden/one.txt000066400000000000000000000000051375552240400223400ustar00rootroot00000000000000one% rclone-1.53.3/cmd/serve/http/testdata/golden/onehead.txt000066400000000000000000000000001375552240400231550ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/http/testdata/golden/onepost.txt000066400000000000000000000000231375552240400232460ustar00rootroot00000000000000Method not allowed rclone-1.53.3/cmd/serve/http/testdata/golden/testindex.html000066400000000000000000000003421375552240400237170ustar00rootroot00000000000000 {{ .Title }}

{{ .Title }}

{{ range $i := .Entries }}{{ $i.Leaf }}
{{ end }} rclone-1.53.3/cmd/serve/http/testdata/golden/three.html000066400000000000000000000003561375552240400230240ustar00rootroot00000000000000 Directory listing of /three

Directory listing of /three

a.txt
b.txt
rclone-1.53.3/cmd/serve/http/testdata/golden/two-6.txt000066400000000000000000000000071375552240400225350ustar00rootroot000000000000000123456rclone-1.53.3/cmd/serve/http/testdata/golden/two.txt000066400000000000000000000000131375552240400223670ustar00rootroot000000000000000123456789 rclone-1.53.3/cmd/serve/http/testdata/golden/two2-5.txt000066400000000000000000000000041375552240400226130ustar00rootroot000000000000002345rclone-1.53.3/cmd/serve/http/testdata/golden/two3-.txt000066400000000000000000000000101375552240400225240ustar00rootroot000000000000003456789 rclone-1.53.3/cmd/serve/httplib/000077500000000000000000000000001375552240400164315ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/httplib/httpflags/000077500000000000000000000000001375552240400204255ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/httplib/httpflags/httpflags.go000066400000000000000000000041251375552240400227520ustar00rootroot00000000000000package httpflags import ( "github.com/rclone/rclone/cmd/serve/httplib" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/rc" "github.com/spf13/pflag" ) // Options set by command line flags var ( Opt = httplib.DefaultOpt ) // AddFlagsPrefix adds flags for the httplib func AddFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *httplib.Options) { rc.AddOption(prefix+"http", &Opt) flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to.") flags.DurationVarP(flagSet, &Opt.ServerReadTimeout, prefix+"server-read-timeout", "", Opt.ServerReadTimeout, "Timeout for server reading data") flags.DurationVarP(flagSet, &Opt.ServerWriteTimeout, prefix+"server-write-timeout", "", Opt.ServerWriteTimeout, "Timeout for server writing data") flags.IntVarP(flagSet, &Opt.MaxHeaderBytes, prefix+"max-header-bytes", "", Opt.MaxHeaderBytes, "Maximum size of request header") flags.StringVarP(flagSet, &Opt.SslCert, prefix+"cert", "", Opt.SslCert, "SSL PEM key (concatenation of certificate and CA certificate)") flags.StringVarP(flagSet, &Opt.SslKey, prefix+"key", "", Opt.SslKey, "SSL PEM Private key") flags.StringVarP(flagSet, &Opt.ClientCA, prefix+"client-ca", "", Opt.ClientCA, "Client certificate authority to verify clients with") flags.StringVarP(flagSet, &Opt.HtPasswd, prefix+"htpasswd", "", Opt.HtPasswd, "htpasswd file - if not provided no authentication is done") flags.StringVarP(flagSet, &Opt.Realm, prefix+"realm", "", Opt.Realm, "realm for authentication") flags.StringVarP(flagSet, &Opt.BasicUser, prefix+"user", "", Opt.BasicUser, "User name for authentication.") flags.StringVarP(flagSet, &Opt.BasicPass, prefix+"pass", "", Opt.BasicPass, "Password for authentication.") flags.StringVarP(flagSet, &Opt.BaseURL, prefix+"baseurl", "", Opt.BaseURL, "Prefix for URLs - leave blank for root.") flags.StringVarP(flagSet, &Opt.Template, prefix+"template", "", Opt.Template, "User Specified Template.") } // AddFlags adds flags for the httplib func AddFlags(flagSet *pflag.FlagSet) { AddFlagsPrefix(flagSet, "", &Opt) } rclone-1.53.3/cmd/serve/httplib/httplib.go000066400000000000000000000324061375552240400204330ustar00rootroot00000000000000// Package httplib provides common functionality for http servers package httplib import ( "context" "crypto/tls" "crypto/x509" "encoding/base64" "fmt" "html/template" "io/ioutil" "log" "net" "net/http" "strings" "time" auth "github.com/abbot/go-http-auth" "github.com/pkg/errors" "github.com/rclone/rclone/cmd/serve/httplib/serve/data" "github.com/rclone/rclone/fs" ) // Globals var () // 
Help contains text describing the http server to add to the command // help. var Help = ` ### Server options Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer. --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically. --template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages: | Parameter | Description | | :---------- | :---------- | | .Name | The full path of a file/directory. | | .Title | Directory listing of .Name | | .Sort | The current sort used. This is changeable via ?sort= parameter | | | Sort Options: namedirfist,name,size,time (default namedirfirst) | | .Order | The current ordering used. This is changeable via ?order= parameter | | | Order Options: asc,desc (default asc) | | .Query | Currently unused. | | .Breadcrumb | Allows for creating a relative navigation | |-- .Link | The relative to the root link of the Text. | |-- .Text | The Name of the directory. | | .Entries | Information about a specific file/directory. | |-- .URL | The 'url' of an entry. | |-- .Leaf | Currently same as 'URL' but intended to be 'just' the name. | |-- .IsDir | Boolean for if an entry is a directory or not. | |-- .Size | Size in Bytes of the entry. | |-- .ModTime | The UTC timestamp of an entry. | #### Authentication By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags. Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. To create an htpasswd file: touch htpasswd htpasswd -B htpasswd user htpasswd -B htpasswd anotherUser The password file can be updated while rclone is running. Use --realm to set the authentication realm. #### SSL/TLS By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also. --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate. 
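For example, to serve over https (illustrative file names):

    rclone serve http remote:path --cert cert.pem --key key.pem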
`

// Options contains options for the http Server
type Options struct {
	ListenAddr         string        // Port to listen on
	BaseURL            string        // prefix to strip from URLs
	ServerReadTimeout  time.Duration // Timeout for server reading data
	ServerWriteTimeout time.Duration // Timeout for server writing data
	MaxHeaderBytes     int           // Maximum size of request header
	SslCert            string        // SSL PEM key (concatenation of certificate and CA certificate)
	SslKey             string        // SSL PEM Private key
	ClientCA           string        // Client certificate authority to verify clients with
	HtPasswd           string        // htpasswd file - if not provided no authentication is done
	Realm              string        // realm for authentication
	BasicUser          string        // single username for basic auth if not using Htpasswd
	BasicPass          string        // password for BasicUser
	Auth               AuthFn        `json:"-"` // custom Auth (not set by command line flags)
	Template           string        // User specified template
}

// AuthFn, if set, is used to authenticate the user, pass pair.  If an
// error is returned then the user is not authenticated.
//
// If a non-nil value is returned then it is added to the request
// context under ContextAuthKey.
type AuthFn func(user, pass string) (value interface{}, err error)

// DefaultOpt contains the default values for Options
var DefaultOpt = Options{
	ListenAddr:         "localhost:8080",
	Realm:              "rclone",
	ServerReadTimeout:  1 * time.Hour,
	ServerWriteTimeout: 1 * time.Hour,
	MaxHeaderBytes:     4096,
}

// Server contains info about the running http server
type Server struct {
	Opt             Options
	handler         http.Handler // original handler
	listener        net.Listener
	waitChan        chan struct{} // for waiting on the listener to close
	httpServer      *http.Server
	basicPassHashed string
	useSSL          bool               // if server is configured for SSL/TLS
	usingAuth       bool               // set if authentication is configured
	HTMLTemplate    *template.Template // HTML template for web interface
}

type contextUserType struct{}

// ContextUserKey is a simple context key for storing the username of the request
var ContextUserKey = &contextUserType{}

type contextAuthType struct{}

// ContextAuthKey is a simple context key for storing info returned by AuthFn
var ContextAuthKey = &contextAuthType{}

// singleUserProvider provides the encrypted password for a single user
func (s *Server) singleUserProvider(user, realm string) string {
	if user == s.Opt.BasicUser {
		return s.basicPassHashed
	}
	return ""
}

// parseAuthorization parses the Authorization header into user, pass.
// It returns a boolean as to whether the parse was successful.
func parseAuthorization(r *http.Request) (user, pass string, ok bool) {
	authHeader := r.Header.Get("Authorization")
	if authHeader != "" {
		s := strings.SplitN(authHeader, " ", 2)
		if len(s) == 2 && s[0] == "Basic" {
			b, err := base64.StdEncoding.DecodeString(s[1])
			if err == nil {
				parts := strings.SplitN(string(b), ":", 2)
				user = parts[0]
				if len(parts) > 1 {
					pass = parts[1]
					ok = true
				}
			}
		}
	}
	return
}

// NewServer creates an http server.  The opt can be nil in which case
// the default options will be used.
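//
// A sketch of configuring custom authentication (the credentials and
// handler here are hypothetical, purely for illustration):
//
//	opt := DefaultOpt
//	opt.ListenAddr = "localhost:0" // port 0: let the OS pick a free port
//	opt.Auth = func(user, pass string) (interface{}, error) {
//		if user == "demo" && pass == "demo" { // illustrative check only
//			return user, nil // non-nil value is stored under ContextAuthKey
//		}
//		return nil, errors.New("bad credentials")
//	}
//	s := NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
//		user, _ := r.Context().Value(ContextUserKey).(string)
//		fmt.Fprintf(w, "hello %s\n", user)
//	}), &opt)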
func NewServer(handler http.Handler, opt *Options) *Server {
	s := &Server{
		handler: handler,
	}

	// Make a copy of the options
	if opt != nil {
		s.Opt = *opt
	} else {
		s.Opt = DefaultOpt
	}

	// Use htpasswd if required on everything
	if s.Opt.HtPasswd != "" || s.Opt.BasicUser != "" || s.Opt.Auth != nil {
		var authenticator *auth.BasicAuth
		if s.Opt.Auth == nil {
			var secretProvider auth.SecretProvider
			if s.Opt.HtPasswd != "" {
				fs.Infof(nil, "Using %q as htpasswd storage", s.Opt.HtPasswd)
				secretProvider = auth.HtpasswdFileProvider(s.Opt.HtPasswd)
			} else {
				fs.Infof(nil, "Using --user %s --pass XXXX as authenticated user", s.Opt.BasicUser)
				s.basicPassHashed = string(auth.MD5Crypt([]byte(s.Opt.BasicPass), []byte("dlPL2MqE"), []byte("$1$")))
				secretProvider = s.singleUserProvider
			}
			authenticator = auth.NewBasicAuthenticator(s.Opt.Realm, secretProvider)
		}
		oldHandler := handler
		handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// No auth wanted for OPTIONS method
			if r.Method == "OPTIONS" {
				oldHandler.ServeHTTP(w, r)
				return
			}
			unauthorized := func() {
				w.Header().Set("Content-Type", "text/plain")
				w.Header().Set("WWW-Authenticate", `Basic realm="`+s.Opt.Realm+`"`)
				http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
			}
			user, pass, authValid := parseAuthorization(r)
			if !authValid {
				unauthorized()
				return
			}
			if s.Opt.Auth == nil {
				if username := authenticator.CheckAuth(r); username == "" {
					fs.Infof(r.URL.Path, "%s: Unauthorized request from %s", r.RemoteAddr, user)
					unauthorized()
					return
				}
			} else {
				// Custom Auth
				value, err := s.Opt.Auth(user, pass)
				if err != nil {
					fs.Infof(r.URL.Path, "%s: Auth failed from %s: %v", r.RemoteAddr, user, err)
					unauthorized()
					return
				}
				if value != nil {
					r = r.WithContext(context.WithValue(r.Context(), ContextAuthKey, value))
				}
			}
			r = r.WithContext(context.WithValue(r.Context(), ContextUserKey, user))
			oldHandler.ServeHTTP(w, r)
		})
		s.usingAuth = true
	}

	s.useSSL = s.Opt.SslKey != ""
	if (s.Opt.SslCert != "") != s.useSSL {
		log.Fatalf("Need both --cert and --key to use SSL")
	}

	// If a Base URL is set then serve from there
	s.Opt.BaseURL = strings.Trim(s.Opt.BaseURL, "/")
	if s.Opt.BaseURL != "" {
		s.Opt.BaseURL = "/" + s.Opt.BaseURL
	}

	// FIXME make a transport?
	s.httpServer = &http.Server{
		Addr:              s.Opt.ListenAddr,
		Handler:           handler,
		ReadTimeout:       s.Opt.ServerReadTimeout,
		WriteTimeout:      s.Opt.ServerWriteTimeout,
		MaxHeaderBytes:    s.Opt.MaxHeaderBytes,
		ReadHeaderTimeout: 10 * time.Second, // time to send the headers
		IdleTimeout:       60 * time.Second, // time to keep idle connections open
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS10, // disable SSL v3.0 and earlier
		},
	}

	if s.Opt.ClientCA != "" {
		if !s.useSSL {
			log.Fatalf("Can't use --client-ca without --cert and --key")
		}
		certpool := x509.NewCertPool()
		pem, err := ioutil.ReadFile(s.Opt.ClientCA)
		if err != nil {
			log.Fatalf("Failed to read client certificate authority: %v", err)
		}
		if !certpool.AppendCertsFromPEM(pem) {
			log.Fatalf("Can't parse client certificate authority")
		}
		s.httpServer.TLSConfig.ClientCAs = certpool
		s.httpServer.TLSConfig.ClientAuth = tls.RequireAndVerifyClientCert
	}

	htmlTemplate, templateErr := data.GetTemplate(s.Opt.Template)
	if templateErr != nil {
		log.Fatal(templateErr.Error())
	}
	s.HTMLTemplate = htmlTemplate

	return s
}

// Serve runs the server - returns an error only if
// the listener was not started; does not block, so
// use s.Wait() to block on the listener indefinitely.
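//
// A typical lifecycle sketch (shutdownRequested is a hypothetical
// channel, eg fed by signal.Notify):
//
//	if err := s.Serve(); err != nil {
//		return err // the listener could not be started
//	}
//	log.Printf("serving on %s", s.URL())
//	go func() {
//		<-shutdownRequested
//		s.Close() // stops the server and releases Wait()
//	}()
//	s.Wait()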
func (s *Server) Serve() error { ln, err := net.Listen("tcp", s.httpServer.Addr) if err != nil { return errors.Wrapf(err, "start server failed") } s.listener = ln s.waitChan = make(chan struct{}) go func() { var err error if s.useSSL { // hacky hack to get this to work with old Go versions, which // don't have ServeTLS on http.Server; see PR #2194. type tlsServer interface { ServeTLS(ln net.Listener, cert, key string) error } srvIface := interface{}(s.httpServer) if tlsSrv, ok := srvIface.(tlsServer); ok { // yay -- we get easy TLS support with HTTP/2 err = tlsSrv.ServeTLS(s.listener, s.Opt.SslCert, s.Opt.SslKey) } else { // oh well -- we can still do TLS but might not have HTTP/2 tlsConfig := new(tls.Config) tlsConfig.Certificates = make([]tls.Certificate, 1) tlsConfig.Certificates[0], err = tls.LoadX509KeyPair(s.Opt.SslCert, s.Opt.SslKey) if err != nil { log.Printf("Error loading key pair: %v", err) } tlsLn := tls.NewListener(s.listener, tlsConfig) err = s.httpServer.Serve(tlsLn) } } else { err = s.httpServer.Serve(s.listener) } if err != nil { log.Printf("Error on serving HTTP server: %v", err) } }() return nil } // Wait blocks while the listener is open. func (s *Server) Wait() { <-s.waitChan } // Close shuts the running server down func (s *Server) Close() { err := s.httpServer.Close() if err != nil { log.Printf("Error on closing HTTP server: %v", err) return } close(s.waitChan) } // URL returns the serving address of this server func (s *Server) URL() string { proto := "http" if s.useSSL { proto = "https" } addr := s.Opt.ListenAddr if s.listener != nil { // prefer actual listener address; required if using 0-port // (i.e. port assigned by operating system) addr = s.listener.Addr().String() } return fmt.Sprintf("%s://%s%s/", proto, addr, s.Opt.BaseURL) } // UsingAuth returns true if authentication is required func (s *Server) UsingAuth() bool { return s.usingAuth } // Path returns the current path with the Prefix stripped // // If it returns false, then the path was invalid and the handler // should exit as the error response has already been sent func (s *Server) Path(w http.ResponseWriter, r *http.Request) (Path string, ok bool) { Path = r.URL.Path if s.Opt.BaseURL == "" { return Path, true } if !strings.HasPrefix(Path, s.Opt.BaseURL+"/") { http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound) return Path, false } Path = Path[len(s.Opt.BaseURL):] return Path, true } rclone-1.53.3/cmd/serve/httplib/serve/000077500000000000000000000000001375552240400175555ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/httplib/serve/data/000077500000000000000000000000001375552240400204665ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/httplib/serve/data/assets_generate.go000066400000000000000000000005151375552240400241720ustar00rootroot00000000000000// +build ignore package main import ( "log" "net/http" "github.com/shurcooL/vfsgen" ) func main() { var AssetDir http.FileSystem = http.Dir("./templates") err := vfsgen.Generate(AssetDir, vfsgen.Options{ PackageName: "data", BuildTags: "!dev", VariableName: "Assets", }) if err != nil { log.Fatalln(err) } } rclone-1.53.3/cmd/serve/httplib/serve/data/assets_vfsdata.go000066400000000000000000000716271375552240400240440ustar00rootroot00000000000000// Code generated by vfsgen; DO NOT EDIT. // +build !dev package data import ( "bytes" "compress/gzip" "fmt" "io" "io/ioutil" "net/http" "os" pathpkg "path" "time" ) // Assets statically implements the virtual filesystem provided to vfsgen. 
var Assets = func() http.FileSystem { fs := vfsgen۰FS{ "/": &vfsgen۰DirInfo{ name: "/", modTime: time.Date(2020, 5, 5, 16, 40, 6, 115915195, time.UTC), }, "/index.html": &vfsgen۰CompressedFileInfo{ name: "index.html", modTime: time.Date(2020, 5, 5, 16, 40, 5, 919909715, time.UTC), uncompressedSize: 15424, compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xbc\x7b\x6d\x73\xdb\x46\xf2\xe7\x6b\xe9\x53\x4c\x98\xcd\x8a\x4a\xc0\xe1\x3c\x3f\x48\xa4\xf6\x6c\xc6\x59\xbb\x56\x71\x52\xb1\x9d\xad\x6c\x2a\x2f\x20\x62\x48\xe2\x0c\x02\x0c\x00\xea\xc1\x3a\x55\xdd\x87\xb8\x4f\x78\x9f\xe4\xaa\x67\x00\x12\x90\x28\x27\x7b\xf5\xdf\xbf\xec\xa2\x80\x9e\x99\x9e\x7e\xf8\x75\x4f\x37\x08\x4d\xbe\x18\x8d\xd0\xf1\x78\x8c\x66\xc5\xe6\xae\x4c\x97\xab\x1a\x31\x42\x25\xfa\x3e\xae\xeb\x95\xbb\x41\xaf\x8b\xac\x46\x71\x9e\xa0\xf7\x2b\x87\x66\x71\x92\xdc\xa1\x17\xdb\x7a\x55\x94\xd5\xf1\x78\x0c\xeb\x2e\xd3\xb9\xcb\x2b\x97\xa0\x6d\x9e\xb8\x12\xd5\x2b\x87\x5e\x6c\xe2\xf9\xca\xb5\x23\x11\xfa\xd9\x95\x55\x5a\xe4\x88\x61\x82\x86\x30\x61\xd0\x0c\x0d\x4e\xcf\x81\xc5\x5d\xb1\x45\xeb\xf8\x0e\xe5\x45\x8d\xb6\x95\x43\xf5\x2a\xad\xd0\x22\xcd\x1c\x72\xb7\x73\xb7\xa9\x51\x9a\xa3\x79\xb1\xde\x64\x69\x9c\xcf\x1d\xba\x49\xeb\x95\xdf\xa7\xe1\x82\x81\xc7\x2f\x0d\x8f\xe2\xaa\x8e\xd3\x1c\xc5\x68\x5e\x6c\xee\x50\xb1\xe8\x4e\x44\x71\xdd\x08\x0d\x3f\xab\xba\xde\x9c\x8d\xc7\x37\x37\x37\x38\xf6\x02\xe3\xa2\x5c\x8e\xb3\x30\xb5\x1a\x5f\xbe\x99\xbd\x7a\xfb\xee\xd5\x88\x61\xd2\x2c\xfa\x90\x67\xae\xaa\x50\xe9\x7e\xdf\xa6\xa5\x4b\xd0\xd5\x1d\x8a\x37\x9b\x2c\x9d\xc7\x57\x99\x43\x59\x7c\x83\x8a\x12\xc5\xcb\xd2\xb9\x04\xd5\x05\x08\x7d\x53\xa6\x75\x9a\x2f\x23\x54\x15\x8b\xfa\x26\x2e\x1d\xb0\x49\xd2\xaa\x2e\xd3\xab\x6d\xdd\xb3\x59\x2b\x62\x5a\xf5\x26\x14\x39\x8a\x73\x34\x78\xf1\x0e\xbd\x79\x37\x40\x2f\x5f\xbc\x7b\xf3\x2e\x02\x26\xff\x7c\xf3\xfe\xf5\x0f\x1f\xde\xa3\x7f\xbe\xf8\xe9\xa7\x17\x6f\xdf\xbf\x79\xf5\x0e\xfd\xf0\x13\x9a\xfd\xf0\xf6\xdb\x37\xef\xdf\xfc\xf0\xf6\x1d\xfa\xe1\x3b\xf4\xe2\xed\x2f\xe8\x1f\x6f\xde\x7e\x1b\x21\x97\xd6\x2b\x57\x22\x77\xbb\x29\x41\x83\xa2\x44\x29\x58\xd3\x25\xde\x74\xef\x9c\xeb\x89\xb0\x28\x82\x48\xd5\xc6\xcd\xd3\x45\x3a\x47\x59\x9c\x2f\xb7\xf1\xd2\xa1\x65\x71\xed\xca\x3c\xcd\x97\x68\xe3\xca\x75\x5a\x81\x57\x2b\x40\x07\xb0\xc9\xd2\x75\x5a\xc7\xb5\x27\x3d\xd1\x0b\xa3\xe3\xef\x8b\x04\xb8\x85\x19\x67\x08\xbd\x48\xe2\x4d\x1d\x4c\x55\xce\xb3\x22\x77\x68\x1d\x97\x1f\xb7\x1b\x34\x1a\x5d\x1c\x1f\x4f\xbe\xf8\xf6\x87\xd9\xfb\x5f\x7e\x7c\x85\x56\xf5\x3a\xbb\x38\x9e\x84\x5f\x47\x93\x95\x8b\x93\x8b\xe3\xa3\xa3\x49\x9d\xd6\x99\xbb\xb8\xbf\x87\x01\x84\xdf\xc6\x6b\xf7\xf0\x30\x19\x07\x2a\x8c\xaf\x5d\x1d\xa3\xf9\x2a\x2e\x2b\x57\x4f\x07\xdb\x7a\x31\x32\x83\xfd\x40\x1e\xaf\xdd\x74\x70\x9d\xba\x9b\x4d\x51\xd6\x03\x34\x2f\xf2\xda\xe5\xf5\x74\x70\x93\x26\xf5\x6a\x9a\xb8\xeb\x74\xee\x46\xfe\x26\x42\x69\x9e\xd6\x69\x9c\x8d\xaa\x79\x9c\xb9\x29\xc5\xe4\x09\xa3\x65\x51\x2c\x33\xd7\x61\x93\x17\x75\x19\xe7\x55\x16\xd7\x6e\x70\x71\x3c\xa9\xea\x3b\x10\xeb\x6b\x74\x8f\x36\x71\x92\xa4\xf9\xf2\x0c\x91\x73\xd0\x78\x99\xe6\xfe\xf2\xe1\xf8\xaa\x48\xee\xd0\xfd\xf1\xd1\xa2\xc8\xeb\xd1\x22\x5e\xa7\xd9\xdd\x19\xaa\xe2\xbc\x1a\x55\xae\x4c\x17\xe7\xc7\x47\xb5\xbb\xad\x47\xa5\x03\xe3\x7a\x0e\xc5\xa6\x4e\xd7\xe9\x27\x57\x6d\x9c\x4b\xce\x8f\x8f\xae\xe2\xf9\xc7\x65\x59\x6c\xf3\x64\x34\x2f\xb2\xa2\x3c\x43\x5f\x2e\xfc\xcf\xf9\xf1\xc3\x71\x0c\xbc\x5b\x32\x21\xca\x25\xbc\x65\x99\xb8\x79\x51\x7a\xc7\x9c\xa1\xbc\xc8\x9d\x9f\x7e\xb6\x02\x6f\x47\xc7\x2b\x8a\x9a\xeb\x2e\x03\x4e\xed\x3c\xf0\x05\x87\xc0\xbc\x2f\xab\xed\x7a\x1d\x97\x5e\x85\x46\xc7\x51\xe6\x16\xf5\x19\x92\x5f\
x9d\xef\x49\x3e\xc7\x04\xda\xc3\x71\xbd\x3a\x5b\xa4\x65\x55\x8f\xe6\xab\x34\x4b\xa2\xe3\x3a\xe9\xde\x03\x27\xef\x81\x33\x44\xbf\x3a\x47\xe3\xaf\x51\x0d\x8b\x5d\xe9\x21\xba\x2e\xae\x20\x45\x7c\x3d\x0e\x7c\xb2\xb8\xc7\x66\x7f\xfb\xe7\xb9\x04\x4d\xba\xf2\xd7\xc5\xe6\x0c\x31\xb9\xb9\xed\x28\x70\x55\xd4\x75\xb1\x3e\x43\x34\x90\x0f\xd9\x9c\xc1\x3f\x6f\x1b\xba\x73\x68\x95\x7e\x72\x67\x88\x11\xbf\xc8\x53\x6e\x5c\x30\x45\x5e\x94\xeb\x38\x3b\x3f\x3e\xba\x59\xa5\xb5\x1b\x55\x9b\x78\xee\x80\x7a\x53\xc6\x9b\xf3\xe3\x23\xb0\xfc\x22\x2b\x6e\x46\xb7\x67\x68\x95\x26\x89\xcb\x5b\xb7\xb5\x23\x67\xc8\x65\x59\xba\xa9\xd2\xea\x7c\xef\x20\x6b\x6d\x23\xc1\x23\xc7\x93\xf3\xe3\xa3\x1d\xee\x90\x00\x79\x1e\x1e\x39\xf9\x09\x28\x7c\x3c\x67\x69\x40\x86\x9f\xfb\xc8\x4d\x7b\x20\x1f\x3f\x1c\xaf\x21\x03\xdf\x1f\x1f\x25\x69\xb5\xc9\xe2\xbb\x33\x74\x95\x15\xf3\x8f\x30\x82\x7d\xc8\xf4\x4d\x42\xd9\xde\x24\x2d\xea\x7f\x76\x65\x12\xe7\x71\xd4\x87\xff\x55\x51\x26\xae\xdc\x3b\x60\x73\x8b\xaa\x22\x4b\x13\xf4\xa5\x9d\xc1\xbf\xf3\x47\x8e\xa3\xe4\xb0\xe3\x48\xd0\xd9\x0b\x33\x4a\x6b\xb7\xde\x6b\xd0\xc2\x93\xba\x35\x4c\xf9\x72\x91\x66\x75\x0f\x12\x67\xc1\x62\x8d\x2c\x3d\x21\x66\xb3\x99\xc7\xb4\x3f\x0e\x3a\xa0\x23\xe4\xab\xbd\xf0\xf3\x22\xcb\xe2\x4d\xe5\xce\x50\x7b\xe5\xd7\xf8\x2d\x0e\xe8\x97\xc4\xd5\xca\x25\xe8\xcb\x24\x86\x7f\x7e\xaa\x4f\x13\x75\xb9\xf7\xd6\x33\x51\xef\xe6\x21\xc2\x20\x1c\x76\x4e\x8d\xb3\x74\x99\x9f\x21\x88\xcb\xf3\x8e\x4e\x60\x12\x04\x7e\x80\xf0\x98\xad\xe2\x7c\xe9\x12\xb4\x28\x8b\x35\x22\x90\x9f\x59\x1b\x65\x4f\x62\xe3\xb3\x26\xee\x79\x59\x79\xca\x41\x88\x7b\xce\x5d\x94\x5e\x65\x71\xc0\x4b\xbd\x42\xd5\xf5\x12\x46\xae\x5d\x59\xa7\xf3\x38\x6b\x35\x58\xa7\x49\x92\x05\xdb\x85\x08\x3f\x18\x3b\x5d\x01\x1a\xa4\xd7\xc9\x59\x5e\xaf\x02\x72\x87\xec\xb4\xe3\x28\x43\xbe\x7a\x32\x81\x9f\xf6\x7c\x4f\x7c\x00\x37\xbf\x9a\x04\xb6\x9f\x2c\x4e\xa3\xfe\x6a\x71\xfa\xd8\xf0\x1e\x5e\x87\xc4\x68\xd4\xdc\x14\x55\x1a\x42\x2e\xbe\xaa\x8a\x6c\x5b\xb7\x2a\x62\x38\x67\xbc\x2b\xf1\xb2\xd8\x6e\x3a\x88\x0d\x39\x96\x62\x2d\x01\xb3\x47\x37\x45\x99\x8c\xae\x4a\x17\x7f\x3c\x43\xfe\xd7\x28\xce\xb2\x6e\x1a\x01\xd3\xb4\x43\x30\xf9\xb1\x57\x36\xa5\x1b\xb5\x7e\xc1\xe9\xbc\xc8\x9f\x46\x87\x6c\x02\x08\x46\x71\x55\x94\x75\x2f\xda\xd3\x1c\x32\xc5\xa8\x09\xfa\x5d\x18\x78\xe9\x56\xae\x13\x5f\x1d\x6d\x4b\x97\xc5\x75\x7a\xed\x20\xb5\x01\xae\x30\x0b\x01\xd8\xd9\x02\xd7\xc5\xe6\x39\x13\x1d\x05\x23\x90\x76\xf9\x88\x3e\x91\x10\x07\x6c\x3e\xcb\xa1\x85\x6e\x58\xba\x67\xf8\x70\xbc\x28\x8a\x27\x39\xc0\xc7\xcb\x53\x90\x87\x54\xd6\x75\xf8\xdc\xe5\xb5\x2b\x81\xcd\xff\x58\xbb\x24\x8d\xd1\x70\x1d\xdf\x8e\x1a\x9b\x28\x42\x36\xb7\x80\x91\xf1\xd7\x47\x78\x95\x26\xae\x4d\x1d\x7b\x63\x86\xe3\xf8\xe8\x01\x95\x6e\x5d\x5c\x87\x72\xe9\xa3\x73\x1b\x94\xc4\xb5\xab\xa0\x3e\xdc\x9f\x60\x47\x07\xb0\xdd\x9a\x3f\xde\xd6\x05\xf0\x39\x3e\xea\x41\x96\x9f\x46\x8f\x96\x05\xc4\x1f\x3a\xae\x8f\x0e\x21\x19\x38\x86\x53\xee\xd1\x11\x13\xe8\x3e\xaa\xbb\xa7\x03\xd0\x3b\x59\xf5\xa8\x63\x0d\x4a\x82\x41\x1f\x8e\x1f\x8e\x27\xe3\xa6\x62\x3a\x9a\x8c\x9b\x8a\x6f\xe2\x13\x5f\x91\x67\x45\x9c\x4c\x4f\x02\x8b\xe1\xe9\x79\x5d\x2c\x97\x99\x1b\x0e\x7c\xee\x1c\x9c\x9e\xcf\x7d\xf6\x7a\x97\x7e\x72\xc3\xd3\x13\x5f\xa6\x41\x68\x5d\x87\x16\x64\x3a\xa0\x98\x0e\xd0\xed\x3a\xcb\xab\xe9\xa0\xd3\x01\xdc\x70\x5f\xfd\x33\x42\xc8\xb8\xba\x5e\x36\x53\xce\x6e\xb3\x34\xff\x78\x68\x22\xb5\xd6\x8e\xfd\xe8\x00\x05\x4c\x4f\x07\x64\x80\x42\xf1\x08\x57\x5e\xfc\xe9\xe0\x00\xd4\x7c\xed\x78\x34\x49\xdc\xa2\xf2\x57\x47\xbe\x05\xfb\xae\xc8\xa0\xf6\x80\xda\xd7\xd3\x96\x28\x4d\xa6\x83\x85\xa7\x0e\xa0\x19\xca\x46\xe5\x16\x38\xe6\x45\xfe\xc9\x95\x45\xa0
\xf9\x5b\x17\x38\x1e\x1d\x4d\x36\x71\xbd\x42\xc9\x74\xf0\x3d\x33\x12\x33\x86\xb8\xc6\x52\xae\x46\x54\x30\xac\x2e\x29\x25\xd8\x22\xf2\x9a\x53\xac\x67\x54\x60\x26\x11\x41\x04\x51\x05\xd4\x30\xf5\x5a\x4b\x4c\x57\x1c\x48\xec\x67\xb8\x9e\x93\x11\x23\x58\xc9\x11\xcc\x57\x23\x3f\x69\x04\x0c\xc2\xe5\xa7\x56\x8a\x2f\xbf\xfb\xee\x05\x21\x64\x30\x7e\x56\x12\xd5\xdd\x97\x2b\x44\x90\x24\x98\x19\x44\x90\xd2\x58\x8b\x6b\x2a\x0d\xd6\x73\x82\xa8\xc6\x42\x23\xbf\x1d\x82\x15\xd2\x7f\x86\xcb\xd7\x9e\xd9\x1c\xa6\x08\x10\x19\xe4\xa0\x02\xf3\x70\xe5\xa7\xfc\x0c\xdc\xe4\x9c\x8c\x3c\x9f\x56\x6c\x18\x19\xed\x27\x75\xc5\x9e\xbd\x60\xa6\x15\x7b\x32\x5e\x1e\xb0\xfe\xa8\x5a\x15\x65\x3d\xdf\xd6\xe0\xd4\xb2\xf8\xe8\x1a\xa3\x37\x77\xa3\xc6\xe7\xb4\xe7\x91\xae\xc7\xdc\xb5\xcb\x8b\x24\xd9\x79\xe9\x20\xf3\x11\x9c\xe0\x9b\x83\x9e\x6e\xd6\x3d\xb7\xb0\x5a\xc5\x9b\x1d\x04\x9e\x9a\x5e\x18\xad\x22\xf0\x96\x30\xca\x12\x86\x2e\x3d\x1a\x28\x13\xdc\xf4\xc9\x00\x0f\x46\xb4\x91\x11\x41\x97\x9c\x62\x65\xa9\x92\xcc\x46\x04\x79\xaf\x35\x4b\x08\x22\x11\x55\xd8\x58\x65\x29\x51\x88\xf4\x78\x90\x88\x52\x86\x95\x50\x44\x53\xe0\xa1\x70\xc3\xe3\x19\xb2\x96\x98\x58\xcd\x0d\x91\x68\xd6\x21\x4b\x81\x85\x90\x8a\x10\x83\x38\x61\x58\x49\xc9\x8c\xec\x6e\x74\x58\xb3\x7f\x0d\xbc\x7d\xde\x79\x7b\x3c\x02\xe6\xc5\x64\x0c\x76\xf9\x03\x2b\xa9\x9e\xe2\x5c\xf5\x34\x07\xd0\x46\x1e\xb4\xdc\x48\x65\x10\x89\x3c\x72\xa9\x25\xdc\x82\xea\x8c\x29\x2c\x24\x15\x4c\xa0\x19\x89\x98\xe0\xd8\x12\x2b\x34\x45\x1d\x1e\x4c\x1a\x4c\x2d\xe7\xcc\xa0\xce\x46\x1d\xea\x65\x47\x9c\x0e\x79\xd6\xb1\x43\x8f\xc7\xce\x66\x9d\xfd\xba\xd4\xbd\x4c\x5d\xbb\x77\x04\xef\xd9\x7d\xaf\x5c\xd7\xee\x0a\xf5\x6d\xf4\x8c\x9d\x7d\x24\x3d\xb2\xf3\x2e\xa2\xba\x16\xa7\x4c\x61\x2a\x05\xe5\x22\x62\x92\x60\x29\x2d\x35\x02\xcd\x80\x6c\x24\xb1\x1a\xc8\x14\x1b\xc3\x95\xe6\x88\x32\x8d\x39\x21\x52\x80\x99\x38\x26\x44\x51\xc6\x3c\x55\x6b\xa6\x08\x8b\x98\x14\x98\x06\xea\x8c\x32\x83\x85\xb2\x42\x00\x59\x62\xd6\x4e\x36\xd8\x52\x4b\x28\x98\x54\x61\x4a\x04\x31\x40\xb5\x58\x71\xc3\x39\x58\x54\x63\x42\x18\x11\x14\xcd\x28\xf7\x12\x59\xc5\xbc\xa1\x39\x53\x92\x53\x44\x21\x6f\x30\x63\x24\x4c\xb6\x88\x72\x8e\x29\x21\x44\x6a\x7f\x3b\xa3\x5c\x60\x61\xb9\xe6\xba\x19\x96\x58\x50\xc9\x95\xf0\x3c\xa4\xa4\x84\x21\xca\x15\xa6\x94\x31\x22\xfc\x7e\x4a\x4b\xe9\xb7\x53\xd8\x10\x4b\x84\xe8\x4a\x41\xb9\xc6\x4c\x1a\x45\xad\xd7\xc3\x1e\xa0\x0a\x2c\x75\xcb\xa2\x43\x06\x10\x04\xf5\xba\x54\x86\xcd\x9e\x4a\x38\x37\x9c\x79\x1b\x0b\xa9\xa9\xe0\x41\x0a\x6d\x94\x54\x2a\x62\xc2\x62\x4b\x0c\x55\xdc\x4b\x2c\x15\xd5\xda\x7a\x2a\xf1\xb6\xe8\x53\x0d\x96\xc1\x4d\x9e\x05\x31\x56\x33\x60\xc1\xb0\xa1\x82\x19\xe5\x2d\x61\x94\xb0\xdc\x46\x8c\x6b\xc8\x2f\x82\x98\x3e\x95\x63\xa6\xb9\x50\xde\x8a\x7b\x32\x93\x98\x04\xe1\xba\xb2\x51\x8d\x65\xcb\xd8\x60\x6a\x08\x13\x40\x25\xd8\x08\x65\xb9\x67\x61\xb1\xb6\x46\x53\x1e\x31\x22\x30\x6b\x0c\x27\x00\x4e\x96\x71\x11\x51\x6b\xb0\x82\xed\x04\xa2\x42\x60\xc1\x2c\x67\x26\xa2\x96\x63\xad\x40\x3f\x88\x78\x8d\x19\x55\xca\xd8\x88\x1a\x83\x8d\xb2\xdc\x18\x44\x25\xc1\x4a\x1b\x41\x69\x44\x8d\xc0\x26\x88\x4c\xa5\xc0\x86\x2b\xab\x79\x44\x0d\x6d\xc1\x32\x83\xb3\xcc\x5a\x29\xb9\x8c\xa8\x06\xa0\x5a\x69\x19\xa2\x8a\x63\xc5\x14\x15\x36\xa2\x5a\xec\xf0\xad\x0c\x16\x86\x4a\xc9\x22\xaa\x19\x56\x80\x58\x08\x06\xcd\x31\xe7\xca\x4a\x11\x51\x4d\xb0\xe0\x46\x6b\x85\xa8\xb6\x98\x52\x6e\x0d\x8f\x60\x9d\x52\x92\x13\x85\xa8\x91\x58\x1a\x6d\x80\x85\xd2\x98\x0b\x70\x1f\x9a\x51\xcb\x30\x51\x54\x33\x20\x2b\xcc\x28\x6c\x88\xc0\x00\x5a\x11\xae\x4d\x44\x95\xc4\x5c\x30\x23\x35\x62\x44\x7a\x29\xa8\x88\xa8\x12\x58\x05\xa5\x67\x8c\x32\xb0\x32\xd5\x9e\xca\x82\xf7\x18\xb
5\x58\x5a\x43\x85\x8a\x40\x25\x6b\x21\x7e\x11\x63\x06\x53\xc5\xbc\xce\x7b\xea\x25\x13\x0a\x13\x09\x21\xfe\x2c\xd9\xca\xd6\xa9\xb3\x1e\x59\x63\x0d\x16\x92\x08\xa8\x5a\x32\xc1\x81\x6a\x31\x44\x13\x11\x08\xc0\xc7\x35\x31\xd6\x46\x8c\x50\x4c\x9a\x2c\x02\x19\x85\x51\x88\xbe\x88\x41\x0e\x0b\x50\x86\x10\x20\xda\x1a\xa3\x22\x46\x38\x96\x21\x31\x40\x14\x71\xcd\x34\x95\x5d\xea\x0c\x92\x84\x50\x9c\xf1\x47\x93\x0d\x96\x9c\x32\xad\x7b\x8c\x15\xc1\x60\x62\xc6\xbb\x52\x5c\x72\x48\x71\x84\x31\x00\x11\xd7\x58\x5b\x80\x00\x9a\x71\x48\x5b\x8c\x68\xa9\x23\x80\x35\x13\xc6\x1a\xc4\x99\xc1\x4a\x30\x6e\x44\xe4\xf3\x88\x8f\xea\x1e\x91\x61\x06\x8e\xa6\xc0\xa0\x43\x26\x98\x00\x95\xa1\x2e\x5b\x66\x30\x0b\x81\xd3\x95\x81\x29\xac\x83\xc0\x97\x1d\x89\x15\xc7\x42\xb4\xae\xf6\x89\x8a\x6b\xa9\x22\x45\xb1\xb1\x01\xb3\x1d\x53\x28\x1a\xec\x65\x25\xb5\x02\xee\x66\x1d\xa3\xfa\x41\x82\x19\x57\x8a\xb3\x1e\x03\xf0\x92\xe5\x5c\xeb\xfe\x6e\xe0\x52\x2d\x2c\x8d\x94\x68\x40\x21\xbc\x9f\x89\x36\x44\x47\x4a\x61\x0d\x33\xb5\xe9\x12\x21\xa8\x02\x88\x2f\xf7\x54\x4a\x48\x8b\x88\xcb\x2e\x06\xf7\xe4\x19\xa0\xdf\x28\x4e\xc0\x68\x7b\x32\xe4\x7f\xaa\x14\x33\x2c\xa2\x54\x63\xaa\x34\x87\xc2\x93\x4a\xcc\xb4\x10\x5c\x47\x10\xf2\xa2\x3d\x9b\x28\xc1\x4a\x29\x4e\x00\xc6\x14\x2a\x0e\x6b\x10\x25\x06\x73\x49\xac\xe5\x11\xd5\x12\xf3\x86\x6f\x87\x6a\x29\xd6\x21\xc0\x66\x1d\x32\x04\x1b\x6b\x72\x02\x85\x84\x1d\xa0\xc6\x38\xb6\x10\xfc\x12\x51\x26\x20\x46\x85\x14\x11\x13\xba\x0d\xfe\x19\x85\xa4\x48\xb4\x66\x3e\xf1\xd2\x76\xae\x84\x34\xce\xac\x08\x49\xba\x55\xee\xd0\x11\xfb\xcc\xc1\x0d\x3f\x03\xe4\x1f\x57\x2f\x8a\x72\x3d\x1d\xec\x9e\x5c\x0f\x19\x35\x58\x58\x9f\x0c\x11\x55\x04\x13\xff\x73\x8a\xfc\x83\xf0\xe1\x88\x46\x88\x9e\xa2\xfd\xf4\x51\x77\xfe\xa8\xbb\xe0\x51\x61\xb0\xaf\xb4\x77\x17\xbe\x09\x82\x46\xf6\x71\x0b\x94\x66\x6e\x5f\x79\x43\x73\xf9\xb8\xf2\x66\xb2\xab\xcc\xc1\xd2\xbb\x5d\x91\xa5\xb9\x9b\xc7\x9b\xe9\xc0\x3f\x2e\xeb\x91\xff\x67\x91\xe6\x2d\xfd\x49\x17\x43\x39\x62\x02\x53\x76\xcd\x34\xb8\x66\x4e\x90\xc2\x54\x21\x89\x0d\x40\x06\x53\x38\x59\x31\x6d\xae\x5f\x33\x6e\xe7\x1a\x73\x68\xae\x80\x3a\x12\xd8\xaa\xe6\xd2\x4f\xf8\xd9\xd7\x02\xf2\x1d\x44\x36\x0c\xf8\x0a\x85\x6b\x44\xf9\x6b\xf0\x9b\x9e\x51\xe3\x19\x73\xff\x5f\x87\xd5\x41\x80\x4f\x07\x5a\x2c\x40\xb2\x5f\x7d\x49\x99\x0d\x90\x82\x46\x8a\x60\x69\x90\x86\x3e\x8a\x5a\x4c\xa1\xcf\x63\xda\x5f\xbe\x66\xc2\x5e\xee\x16\x7d\x7a\xb6\xfb\x49\x33\xf7\x9f\xea\x7d\xba\xac\xdb\xce\xe7\x20\x00\x29\x6f\x20\x14\xa1\xdd\xe5\xe9\x93\x8e\xa8\xc7\x2e\xf4\x43\x3d\xc4\xfc\x21\x68\x3c\x6e\xfe\xbf\x30\xd2\x75\x04\xb4\x3f\x98\x32\x2a\x8c\x51\xbe\x23\x30\x00\x10\x23\xb4\xf6\x1d\x01\x9c\xc7\xdc\x5a\x26\x3c\x6e\x84\x35\xbe\xc8\xb7\x0a\x5b\x6b\xad\xe1\x01\x21\xcc\x70\xcd\xbb\xd4\x4b\xa8\x85\xac\xd5\x70\xe8\x77\xc8\x33\x5f\x39\x59\xe9\xcb\xe5\x3d\x99\x71\x8b\xa9\x26\x86\xed\xb6\x13\xac\x4b\xdc\x4b\x74\xb9\xa7\x52\xc6\x31\x15\x32\x64\xe6\x43\x54\x0a\x47\xbe\x91\x82\x46\x0c\x1b\xc1\xa8\x26\x56\xb8\x11\x15\x3e\x5d\x72\x65\x05\x1c\x7f\xfd\x91\xcb\x46\x1b\x09\x3a\xf6\x87\x66\x20\x83\x24\xc4\x12\x1d\x8d\x28\xd6\x54\x68\x6b\x0d\x73\x23\x22\x11\x89\x20\x58\x08\x63\xd6\xc2\x4d\xc7\x9e\x4d\xf2\x2a\xdd\xbc\xa6\x54\xd3\xcf\x74\x74\x94\x2a\xa8\x0c\x88\x6f\x64\x29\x55\x3e\xeb\x5b\x22\xac\xf2\x99\x5c\x45\x94\x52\x2c\x0c\x9c\x55\x08\x94\x64\xd2\x10\x62\x22\xca\x20\x5e\x19\x94\xa3\x70\x5c\xc1\xed\x25\x24\x66\x7f\xf1\x98\xe7\xee\xa6\x2b\x96\xb6\xe2\x4f\x35\x40\x42\x63\x43\x38\x05\x73\x5a\x81\x49\x38\xe8\x66\x02\x52\xa7\xb5\x86\x02\x59\x62\x1a\xca\x7b\x61\xb0\x15\x56\x4a\x19\xbc\x4f\x42\x01\x25\x2c\x16\x8c\x2a\xb2\xc3\x84\xa7\xce\x24\xc1\x94\x1a\x21\x0c\x
60\x42\x63\x13\xc8\x70\x00\x28\x43\x18\xd3\x11\xf3\xe5\xaf\x2f\x1a\x24\xc5\x0c\xca\x58\x28\x7e\xac\xc5\x3c\xd4\x92\x33\xc9\x30\x23\xc6\x2a\x63\x22\x4e\x08\x16\xfe\xa4\x93\x1c\x73\xad\x7d\x33\xc1\x09\x45\x52\x60\x2d\x2c\x51\x5c\xf8\xdb\x19\x34\x55\x82\x69\xc1\x55\x18\xd6\x98\x28\xc1\x35\xb1\x9e\x85\x0a\x7d\x83\xd4\x58\x2b\xca\xbc\x76\x16\x3a\x4e\xce\x34\x9a\x49\x83\x85\x34\x44\x36\xe4\x46\x0a\xa8\x9f\x89\x56\x0c\x0e\x40\x0b\x2d\xdd\x53\xaa\xc6\x1c\xea\x19\xee\x59\xec\xc9\x0a\x9b\x46\xbd\x2e\x55\x62\xbb\xa3\x2a\x03\x3d\x2e\xf3\xa6\x37\x80\x4f\x1a\xa4\xe0\x52\x6a\x06\x93\x39\xb4\x37\x50\x84\x4b\x83\x19\x25\xbe\xae\x86\x60\x32\x8d\x2d\xfa\x54\xd1\x74\x61\xa0\x1e\x37\x9a\x43\x24\x18\x8d\x75\xa8\xc1\x24\x34\x2c\xdc\x0a\x49\x23\x66\x38\xd6\xa1\x8c\xeb\x52\xb5\xc5\xd6\xf7\x87\xb3\x1e\x95\x63\x16\x64\xeb\x8a\xa6\x74\xdb\x14\x49\x8b\x0d\xb3\x4c\x42\xb7\xa5\x28\x56\x4d\xf3\xaa\x28\x16\xc2\x17\x08\xe0\x92\x60\x35\xc5\xb1\xe4\x86\x09\xc2\x7d\xcb\xa7\x42\x81\xa0\x7c\xfd\xc4\x39\xb4\xd5\x42\x63\xc5\x84\xb0\x68\xa6\xa0\xdf\x91\xbe\xeb\x60\x02\xba\x15\x5f\xf0\x6b\x86\x39\xd3\x82\xfa\xc2\x83\x60\xee\xc5\xd5\x0a\x0b\x23\xad\xb6\xcc\x77\x76\xc1\x36\x33\x43\xb0\x12\x42\x0a\x0a\x54\x81\x65\x68\xcb\x0c\xd4\x3b\x92\x4a\x45\x23\xc6\x59\x8b\x6c\x4b\x30\xe5\x44\x4a\xf0\x05\x07\xb6\xa1\xd4\xb2\x02\x5b\x23\xad\x02\x79\x99\xc1\x32\x74\x33\x10\xc2\x5a\x31\x28\xf6\x99\xf6\x8d\x26\x1c\x8a\x44\x43\xc9\x69\x64\x78\xd0\x41\xda\xa7\x00\x94\x63\x4d\x89\x66\x26\xf4\x91\x06\xf6\x43\x94\x11\x2c\x20\xd6\x64\xc4\x18\xd4\xfd\x54\xc0\x69\xc9\xb4\x97\x82\xf9\xfa\xcb\x04\x85\x67\xd0\xdf\x1b\x66\x29\xd4\xfa\x8c\x63\x11\xdc\x06\x6d\x24\x13\x9a\x36\x93\x59\xd3\x19\x0a\x8b\x0d\xa5\x52\xf4\xa8\x97\xd0\x89\x69\x22\xa4\x35\xcf\x92\xa1\x5c\x6b\x1b\xf0\x0e\x59\x12\x48\x8f\x92\x86\xd6\x90\xd0\xf0\xdc\x88\xed\x1e\x45\x68\x82\x09\xb5\x96\x48\xdf\xee\xcb\x26\x7b\x50\x4d\xa1\xc8\xf5\xcf\x38\x04\x36\x01\xc1\xd0\x45\x6a\x66\x0c\x14\x9d\x52\x62\x19\xf2\x01\xd5\x0a\x13\xe6\x1b\xc3\x0e\x75\x46\x75\x53\x54\xf6\xc8\xd4\x10\xdf\x68\x43\x4a\xe9\x30\x36\x14\x1b\x46\x01\xec\x7b\x19\x2e\x01\x49\x5a\x52\x66\x95\x6f\x86\x4c\xe8\xb3\x67\xa0\x28\x34\xc9\x0a\x4a\x5f\xc8\x45\xbe\xd2\xf6\xfd\x82\xa5\xdc\x86\xae\x8e\x86\x84\xd0\xa3\x6a\xcc\x9a\xc6\xa9\x47\x96\x58\xb4\xcd\xc5\x8e\x31\x85\x44\x1a\x22\xa6\x23\x05\xb4\xc0\x3a\x48\x7c\xb9\x17\x19\xfc\xb8\x7b\xdc\x63\x08\x66\x84\x79\x16\xdc\x62\xdd\x3c\x1a\xd8\x9b\x82\x72\x1b\x0c\x26\x04\x83\xea\xdf\x3f\x65\xd8\x9b\x35\x0c\x53\x6c\x8c\x54\xdc\xf6\x79\x10\x4c\x9a\x56\xad\xbb\x21\x38\x95\x71\x0b\x3d\xb5\x60\x3b\x10\x81\xff\x99\x26\x70\xee\x08\x60\x1e\x9e\x93\x74\xa9\x12\xc2\x18\xfa\x80\xcb\x2e\x59\xef\x9e\x3a\x5c\x76\x80\xd8\x21\xcf\x8c\xc1\x92\x32\x62\xe1\x84\xdb\x93\x01\x64\x54\x32\xe8\xdd\xa8\x11\xd8\x86\x67\x54\x5c\x61\xcb\xb8\xef\x7e\x7c\xeb\xdf\x60\x8b\x33\xcc\xa9\xe4\x44\x43\xe8\x50\xcc\x82\x03\x39\xf1\xd1\x2c\x03\x43\xb8\x13\x12\xdb\x10\x56\x33\xb8\x85\x73\x20\x24\x00\x2e\xb1\x94\x60\x4e\x16\x31\xcd\xc0\x93\x86\x6b\x24\x14\x04\xa4\x50\x04\xce\x25\xda\x46\xfa\x4c\x28\xac\xa4\xd2\x4c\xa9\x50\xc3\x34\x93\x35\xa6\x44\x71\x42\x6c\x48\xc6\x61\xd7\x83\x27\x69\xb7\xcd\x19\xcd\x8a\xcd\xdd\xae\xd2\x6b\x4b\xc1\x43\x5f\xa7\x1c\x2e\x3f\x05\x81\x1a\x48\x59\x19\x21\xc6\xfe\xb8\xff\xe9\xce\x1f\x75\x17\xfc\xb9\xfe\xe7\xc3\x06\xc5\x65\x59\xdc\x3c\xee\x81\xb6\x9b\x91\xa7\x3f\x23\xe5\x08\x4e\x11\xc6\xd0\x88\x11\x83\x29\x3b\xed\xf7\x2f\x9d\x25\xeb\xb8\x2e\xd3\xdb\x21\x66\x4c\x50\xee\xbf\xfd\xc1\x14\x4e\x7b\xc4\xb9\xc4\x4a\x23\xaa\x04\xe6\xf2\xf4\x71\xad\x4c\x06\x50\xb6\xac\x47\x10\x64\x54\x23\x41\x19\x26\x74\x35\x62\x06\x1b\xa6\x9b\x5f\x19\
x15\x58\x50\x31\x62\x50\xbf\x49\x74\xe8\x0e\x85\xbb\x03\x0d\x07\xe8\xfe\x6d\x71\x93\x1f\xd6\x3e\x29\x6e\xf2\xff\x94\xfe\xa3\xbe\x01\x00\xb3\x96\xff\x77\x1b\x60\x32\x6e\xbf\x0c\x9c\x8c\xab\x6b\x4f\x9b\x84\x77\x91\xc2\xf0\x8a\x86\xf9\xf7\xf7\x65\x9c\x2f\x1d\xfa\x4b\x1a\xa1\xbf\xcc\xcb\xed\xfa\x0a\x9d\x4d\x11\x7e\x59\xba\x38\xf1\xb7\x0f\x0f\x93\x18\xad\x4a\xb7\x98\x0e\x9a\xf7\xe2\xc2\x34\x7c\x99\xe6\x1f\x1f\x1e\x06\x17\x7d\xea\x7b\x77\x5b\x3f\x3c\x4c\xc6\xf1\xc5\xfd\x7d\xba\x40\x39\x70\x46\xe4\xe1\x61\x7c\x7f\xef\xf2\xe4\xe1\xa1\xf9\x15\x44\x0c\x42\x84\x6f\x63\x83\x60\x93\x75\x9c\xe6\xcd\x97\x99\xe9\x35\x9a\x67\x71\x55\x4d\x07\x6b\x57\xc7\x8d\x03\x3c\x19\x3c\xd8\xbc\x19\xb6\xf3\x4b\xb5\x89\xf3\xee\x7c\xff\x12\xce\xe0\x62\x92\xe6\x9b\x6d\x8d\xea\xbb\x8d\x9b\x0e\x6a\x77\x5b\x0f\xd0\x26\x8b\xe7\x6e\xe5\xbf\xf1\xf2\x7d\x5e\xed\xca\x41\xdb\xf3\xf9\xeb\x22\xff\xe8\xee\xb6\x9b\xfd\x17\xc2\x27\x17\x93\x31\xf0\x6f\x4d\x9c\xa4\xd7\xad\x91\xdb\xab\x8e\xb4\x59\x5a\xd5\x69\xbe\x6c\x05\x0e\xef\xee\xc4\x65\x1a\x8f\x12\x57\xcd\xcb\xf4\xca\x25\x57\x77\x4f\x15\xa8\xdb\xb7\x10\xfd\x4d\xb9\x2b\xf1\xeb\xd5\xc5\x64\xdc\xa9\xfe\xbb\xfd\x49\xeb\x99\xbf\x55\x45\x59\x4f\xf3\x78\xed\x92\xb4\xf4\xaf\x51\xfd\xd5\x7f\x77\x3d\x8d\xab\xf9\xa0\x95\xcb\xbf\x77\xe1\xdf\x5b\x08\xdf\x6b\x5f\xf8\x6f\xb1\x9b\xc1\xba\xd8\xec\xbe\x6a\xa6\x6e\xbd\xff\x06\x1a\x4b\xb8\xeb\x7f\xd7\x7d\x9d\xba\x9b\x97\xc5\xed\x74\xe0\xbf\xec\x65\xd8\x32\x46\xad\x40\x0a\x13\x2e\x8d\xb1\x76\x70\x31\xd9\x56\x0e\xf9\xef\xb2\xcf\x82\x84\x5f\xee\xf2\xcd\xc5\x64\xbc\xad\xdc\x45\x80\x65\x57\x84\xf0\xb6\xc4\x7f\x56\x8a\x4e\xdc\xf7\xe5\x18\xc7\x3b\xab\x7e\xc6\xba\x07\xac\xda\xd8\xf2\x6d\xbc\x76\x1d\x26\x7f\xd2\x63\x55\xfa\xe9\x33\x3c\xdf\xa5\x9f\x3e\xc3\xb3\x9d\xdc\xbe\xe3\x31\x78\x6e\x93\x3a\xfd\x9c\xe0\xe1\x15\x5a\x97\xfc\x3b\x1b\x75\x26\x4c\xc6\x3b\xa8\x02\xb5\x0b\xe1\xab\x22\xb9\x3b\x84\xe7\x04\xd6\x27\xdd\xfb\x27\x82\x63\xbc\xd7\xa6\x1f\xda\xcb\x62\xbb\x19\x5c\xfc\xbd\x40\xdb\x4d\x37\x26\xfd\xf6\x5d\x05\x7a\xfc\xff\xba\x4e\xe2\x6a\x75\xfe\x88\xfc\x54\xaf\x3f\x3b\xaf\x33\xa1\xa3\xff\xfd\xfd\x08\x85\x64\x8a\x5f\xe5\x75\x99\xba\x2a\xe4\x39\xaf\x7e\xcb\xc4\x3f\x7a\x3c\xa0\xfb\x73\x26\x01\xa6\xe9\x02\xe1\x37\xd5\xb7\x69\xd9\xf2\x6b\x5e\x40\x69\x03\x25\x04\x47\x1b\x2a\xf4\x0f\x22\x85\x53\x38\x93\x0e\x46\x47\xf3\x6a\x48\x2f\x32\xba\x82\xb8\xac\x72\xff\x25\x32\x30\x25\x11\x67\xfc\xa0\x0c\x69\xb0\xf0\x33\x12\xb4\x87\xc7\x13\x60\x40\x78\x0e\x2e\x1e\x9f\x55\xf8\xc3\x4f\x97\x9d\x43\x0a\x5f\xba\x78\x11\x8e\xa7\x3e\x7a\xba\xe6\x3f\x6c\x72\x00\x42\x12\xd7\xf1\x28\x44\xd2\x60\x44\x0f\xe2\xe5\x89\x99\x1e\xaf\xbb\xbf\xc7\x10\xd7\x20\xd4\x04\xc2\xff\x62\x47\x98\x8c\xfd\xfd\x13\x6e\x1d\x95\x5b\xd1\xbe\x2f\x92\xf7\xe9\xda\xa1\xff\x85\xe2\x45\xed\xca\x57\x9b\x62\xbe\x42\xbd\x2d\x9f\x62\x16\xd2\x80\x7f\xc3\x0b\x2e\xbc\x1c\x2d\x97\x60\xa0\xce\xed\x64\x0c\x73\x9e\x4a\xf2\x58\xaf\x27\x9b\xfc\xdf\xff\xfd\x7f\x3e\x27\xfe\xbf\x19\x4c\x9d\xa5\x93\x71\x27\x9d\x4c\xc6\xfe\x4c\xed\x1f\xc1\x93\x71\x5b\x3a\x4c\xe0\x90\xdd\xd4\x7e\xf8\x3a\x2e\x51\x38\xc5\x5f\x65\x68\x8a\x92\x62\xbe\x5d\xbb\xbc\xc6\x4b\x57\xbf\xca\x1c\x5c\xbe\xbc\x7b\x93\x0c\x9b\x93\xfe\xe4\xf4\x1c\x16\xb5\x0b\xf0\xa2\x98\x6f\xab\x61\x43\xdc\xe6\xf3\x3a\x2d\x72\xd4\x16\x05\xfe\x55\xb3\xb0\xc3\xef\x68\xba\xdb\x05\x5f\xc7\xd9\xd6\xe1\xba\x4c\xd7\xc3\x53\x5c\x17\x97\xc5\x8d\x2b\x67\x71\xe5\x1a\x3e\x7e\x81\xcb\xdc\xba\xea\xca\xf3\xfb\xd6\x95\x77\xef\x5c\xe6\xe6\x75\x51\xbe\xc8\xb2\xe1\x49\x5d\x62\x08\x85\x46\xa4\x23\xbf\x02\x2f\x8a\xf2\x55\x3c\x5f\x0d\x5b\x61\x86\x2e\x6b\xe5\x38\x4a\x17\x68\xf8\xc5\xef\xbb\xdb\x23\x97\x61\xff\xc2\x18
\x6e\xde\xfb\x43\x53\x74\x72\x72\xde\x0c\x96\xae\xde\x96\x79\x73\xd7\xd8\x18\x04\x83\x28\xf2\x96\x72\x59\x5f\xa6\xe1\x89\x7f\x5b\xb4\x15\x67\x37\xf9\xe7\x18\x66\x87\x65\x18\xea\xab\x59\xf8\x6b\x85\xcf\x18\xc0\x4b\xda\xac\xc5\x69\x9e\xb8\xdb\x1f\x16\xc3\xdf\x4f\xd1\x17\xd3\x29\x1a\xd1\x3f\xa7\xc0\x83\x07\xe3\x67\xa7\xe6\x45\xee\x4e\x7a\x1a\x3e\x04\x01\x1e\x7a\xee\xcc\x8a\x79\x9c\xa5\x9f\xdc\xb7\x4d\x64\x0c\x5d\x84\xbc\x50\x11\x8a\xcb\x56\x18\x90\xd8\x75\xd5\x43\xd3\xe9\xd4\xbf\xc1\xbe\x48\x73\x97\xec\x64\xee\x9a\xf5\x61\xe7\xed\x04\x2c\xe4\x6e\x10\x6c\x31\x74\x80\xbd\x17\x75\xf3\xd7\x38\xc3\x93\x36\x22\x4f\x4e\x1b\xf3\xc0\x5e\x69\xf5\x36\x7e\x3b\x4c\x4e\x77\x8c\x1f\xb1\xe8\x48\xd2\x35\xea\x93\x65\x87\xfc\x1c\x3e\x1f\x69\x83\x12\xef\x29\x68\x41\xdf\xd5\x65\x9a\x2f\x87\xbf\xfe\x16\xa1\xfb\x24\xbe\x3b\x43\x03\x36\x4a\xd2\x65\x5a\x0f\x22\xb4\x2e\xf2\x7a\xd5\xa3\xdc\xb9\xb8\x3c\x43\x83\x7c\xbb\x76\x65\x3a\x1f\x44\x68\x55\x6c\xcb\xfe\x9a\x34\xdf\xd6\xae\x47\xaa\xdc\xbc\xc8\x93\x0e\xa9\xeb\x19\xb0\x18\x18\xe4\x32\xad\x40\xb0\x17\x65\x19\xdf\xe1\x4d\x59\xd4\x05\x14\xf1\xb8\xca\xd2\xb9\xc3\xf3\x38\xcb\x86\x07\xa2\xb9\x7a\x79\xf7\x3e\x5e\x42\x31\x36\x1c\x00\x93\x41\x63\xd5\x96\xe1\x2e\x82\x1e\xbb\xfd\xf4\xfc\xb8\xdd\x7c\xe9\xea\x0f\x65\xf6\x63\x5c\xc6\x6b\x57\xbb\x12\x62\xbb\x05\xcb\xa3\xa1\x61\xe5\x2f\xbb\xa9\xa0\xfa\x31\x5e\xba\x0f\x3f\x5d\xa2\x29\xba\x49\xf3\xa4\xb8\xc1\xb0\x13\x2c\xc6\x95\x8b\xcb\xf9\x0a\x57\xdb\xab\x2a\x98\x98\x9e\x46\x7e\x5d\xf5\xe1\xa7\xcb\x9f\xa1\x41\xb8\xca\x1c\x64\x85\x96\x07\xae\x36\x59\x5a\x0f\x4f\xfe\x7a\xd2\x4e\xdc\xed\xfc\xd6\xbf\xb9\xed\xfd\x1e\x04\x3f\x5a\x14\x25\x1a\xa6\x68\x8a\xc8\x39\x4a\xd1\x04\xf5\x98\xe2\xcc\xe5\xcb\x7a\x75\x8e\xd2\x6f\xbe\xd9\x81\xa3\xcf\x0d\xf6\xed\x2e\xf9\x35\xfd\xad\xdd\x7f\x7a\xd2\x58\x27\xa0\xac\xbf\xee\x57\xf2\x9b\x0f\x86\xbe\x29\x5a\xe4\xa1\x47\x93\xe9\x6f\xfd\xc8\x41\x7f\x43\x75\xb9\x75\xe8\x0c\x25\x6e\x5e\x24\xee\xc3\x4f\x6f\x66\xc5\x7a\x53\xe4\x2e\xaf\x1f\x6f\x44\x7f\x3b\x7d\x0a\xe4\x87\x7e\x72\x6e\xde\xdc\xf5\x87\x0c\x2c\x3a\xdd\x7b\xc6\x9f\xbf\x68\xfa\xc4\x87\x27\x7e\xe0\xe4\x51\x76\x06\x2c\x1d\x3e\x30\xaa\x97\x77\xb3\x96\x7d\x67\xa3\xf3\xd6\x0b\x43\x60\xe1\x1d\x11\xa1\x60\x76\x9f\x4e\xc3\xda\xbd\x23\xd0\x04\x1d\x72\x0a\x2c\x9e\x6f\xcb\xf2\x75\xe9\x16\x9d\x75\xe0\x0d\x28\x6c\x76\xc1\x3e\x0c\xe5\xc4\xf4\x04\x7a\xca\x93\xd3\x7b\xd4\x98\xdd\xaf\x5f\x2d\xd1\x74\xc7\x05\x97\xce\x77\xbc\xc3\x30\x35\x42\x27\x31\xac\x38\xdf\xa5\xce\xfe\x0e\xb0\x72\xb5\xec\x9f\x0c\x9d\xed\xe2\x3f\xbd\x5b\x1c\x36\x0b\xf2\xfd\x1b\xbb\x1d\x72\x6b\xe9\xe2\x04\x50\xf9\x5d\x9a\x85\xd7\xb0\xa1\x52\xea\x86\xdd\x36\x4f\xbd\xbf\x7e\x3d\x79\x09\x9b\xfe\xc3\x7f\x7e\xef\x3f\xff\xee\x3f\xdf\xfb\xcf\x1f\xfd\xe7\x2b\xff\xf9\x2f\xff\xf9\xcb\xcb\x93\xdf\xf6\x9e\x0f\xf1\xe3\x6f\x6f\x56\x69\x16\xf6\x41\x17\x53\x44\x09\x13\xfb\xc0\x01\xe2\x38\x10\x1b\xd1\xbf\xf9\x26\xed\x66\xfd\x06\xfc\x9b\xb8\xac\xdc\x77\x59\x11\xd7\x41\x60\x5c\x17\xdf\xa5\xb7\xce\xbf\x47\xff\x0d\x3a\x41\x27\xe8\x9b\x20\xf9\xaf\xe9\x6f\x4d\x02\xec\xa9\xdd\x7d\xef\xbc\x9b\x63\xd2\x4f\xee\x79\x70\xee\xf2\x1f\x4c\x1b\x9c\x76\xd3\xc3\x5e\xc5\x90\x22\x80\xcf\xc1\xd4\xb0\xda\xae\xe3\x1c\xf6\x45\xd3\xc3\xb6\xf7\x0e\x4c\xf3\xdc\x95\xaf\xdf\x7f\x7f\xd9\xba\xf7\xe9\x08\x9a\xa2\x1d\xaf\x8e\x77\xc3\x63\xa9\xb6\x4c\x9b\x8c\x43\x6d\x37\x19\x87\x3f\xc8\xfc\x7f\x01\x00\x00\xff\xff\xd1\xe7\x7f\xef\x40\x3c\x00\x00"), }, } fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{ fs["/index.html"].(os.FileInfo), } return fs }() type vfsgen۰FS map[string]interface{} func (fs vfsgen۰FS) Open(path string) (http.File, error) { path = 
pathpkg.Clean("/" + path) f, ok := fs[path] if !ok { return nil, &os.PathError{Op: "open", Path: path, Err: os.ErrNotExist} } switch f := f.(type) { case *vfsgen۰CompressedFileInfo: gr, err := gzip.NewReader(bytes.NewReader(f.compressedContent)) if err != nil { // This should never happen because we generate the gzip bytes such that they are always valid. panic("unexpected error reading own gzip compressed bytes: " + err.Error()) } return &vfsgen۰CompressedFile{ vfsgen۰CompressedFileInfo: f, gr: gr, }, nil case *vfsgen۰DirInfo: return &vfsgen۰Dir{ vfsgen۰DirInfo: f, }, nil default: // This should never happen because we generate only the above types. panic(fmt.Sprintf("unexpected type %T", f)) } } // vfsgen۰CompressedFileInfo is a static definition of a gzip compressed file. type vfsgen۰CompressedFileInfo struct { name string modTime time.Time compressedContent []byte uncompressedSize int64 } func (f *vfsgen۰CompressedFileInfo) Readdir(count int) ([]os.FileInfo, error) { return nil, fmt.Errorf("cannot Readdir from file %s", f.name) } func (f *vfsgen۰CompressedFileInfo) Stat() (os.FileInfo, error) { return f, nil } func (f *vfsgen۰CompressedFileInfo) GzipBytes() []byte { return f.compressedContent } func (f *vfsgen۰CompressedFileInfo) Name() string { return f.name } func (f *vfsgen۰CompressedFileInfo) Size() int64 { return f.uncompressedSize } func (f *vfsgen۰CompressedFileInfo) Mode() os.FileMode { return 0444 } func (f *vfsgen۰CompressedFileInfo) ModTime() time.Time { return f.modTime } func (f *vfsgen۰CompressedFileInfo) IsDir() bool { return false } func (f *vfsgen۰CompressedFileInfo) Sys() interface{} { return nil } // vfsgen۰CompressedFile is an opened compressedFile instance. type vfsgen۰CompressedFile struct { *vfsgen۰CompressedFileInfo gr *gzip.Reader grPos int64 // Actual gr uncompressed position. seekPos int64 // Seek uncompressed position. } func (f *vfsgen۰CompressedFile) Read(p []byte) (n int, err error) { if f.grPos > f.seekPos { // Rewind to beginning. err = f.gr.Reset(bytes.NewReader(f.compressedContent)) if err != nil { return 0, err } f.grPos = 0 } if f.grPos < f.seekPos { // Fast-forward. _, err = io.CopyN(ioutil.Discard, f.gr, f.seekPos-f.grPos) if err != nil { return 0, err } f.grPos = f.seekPos } n, err = f.gr.Read(p) f.grPos += int64(n) f.seekPos = f.grPos return n, err } func (f *vfsgen۰CompressedFile) Seek(offset int64, whence int) (int64, error) { switch whence { case io.SeekStart: f.seekPos = 0 + offset case io.SeekCurrent: f.seekPos += offset case io.SeekEnd: f.seekPos = f.uncompressedSize + offset default: panic(fmt.Errorf("invalid whence value: %v", whence)) } return f.seekPos, nil } func (f *vfsgen۰CompressedFile) Close() error { return f.gr.Close() } // vfsgen۰DirInfo is a static definition of a directory. type vfsgen۰DirInfo struct { name string modTime time.Time entries []os.FileInfo } func (d *vfsgen۰DirInfo) Read([]byte) (int, error) { return 0, fmt.Errorf("cannot Read from directory %s", d.name) } func (d *vfsgen۰DirInfo) Close() error { return nil } func (d *vfsgen۰DirInfo) Stat() (os.FileInfo, error) { return d, nil } func (d *vfsgen۰DirInfo) Name() string { return d.name } func (d *vfsgen۰DirInfo) Size() int64 { return 0 } func (d *vfsgen۰DirInfo) Mode() os.FileMode { return 0755 | os.ModeDir } func (d *vfsgen۰DirInfo) ModTime() time.Time { return d.modTime } func (d *vfsgen۰DirInfo) IsDir() bool { return true } func (d *vfsgen۰DirInfo) Sys() interface{} { return nil } // vfsgen۰Dir is an opened dir instance. 
type vfsgen۰Dir struct {
	*vfsgen۰DirInfo
	pos int // Position within entries for Seek and Readdir.
}

func (d *vfsgen۰Dir) Seek(offset int64, whence int) (int64, error) {
	if offset == 0 && whence == io.SeekStart {
		d.pos = 0
		return 0, nil
	}
	return 0, fmt.Errorf("unsupported Seek in directory %s", d.name)
}

func (d *vfsgen۰Dir) Readdir(count int) ([]os.FileInfo, error) {
	if d.pos >= len(d.entries) && count > 0 {
		return nil, io.EOF
	}
	if count <= 0 || count > len(d.entries)-d.pos {
		count = len(d.entries) - d.pos
	}
	e := d.entries[d.pos : d.pos+count]
	d.pos += count
	return e, nil
}
rclone-1.53.3/cmd/serve/httplib/serve/data/data.go000066400000000000000000000024271375552240400217320ustar00rootroot00000000000000//go:generate go run assets_generate.go
// The "go:generate" directive compiles static assets by running assets_generate.go

package data

import (
	"html/template"
	"io/ioutil"
	"time"

	"github.com/pkg/errors"
	"github.com/rclone/rclone/fs"
)

// AfterEpoch reports whether t is after the zero time.Time.  The
// templates use this (as "afterEpoch") to avoid displaying unset
// modification times.
func AfterEpoch(t time.Time) bool {
	return t.After(time.Time{})
}

// GetTemplate returns the HTML template for serving directories via HTTP/Webdav
func GetTemplate(tmpl string) (tpl *template.Template, err error) {
	var templateString string
	if tmpl == "" {
		templateFile, err := Assets.Open("index.html")
		if err != nil {
			return nil, errors.Wrap(err, "get template open")
		}
		defer fs.CheckClose(templateFile, &err)
		templateBytes, err := ioutil.ReadAll(templateFile)
		if err != nil {
			return nil, errors.Wrap(err, "get template read")
		}
		templateString = string(templateBytes)
	} else {
		templateFile, err := ioutil.ReadFile(tmpl)
		if err != nil {
			return nil, errors.Wrap(err, "get template open")
		}
		templateString = string(templateFile)
	}
	funcMap := template.FuncMap{
		"afterEpoch": AfterEpoch,
	}
	tpl, err = template.New("index").Funcs(funcMap).Parse(templateString)
	if err != nil {
		return nil, errors.Wrap(err, "get template parse")
	}
	return
}
rclone-1.53.3/cmd/serve/httplib/serve/data/templates/000077500000000000000000000000001375552240400224645ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/httplib/serve/data/templates/index.html000066400000000000000000000361001375552240400244610ustar00rootroot00000000000000 {{html .Name}}

{{range $i, $crumb := .Breadcrumb}}{{html $crumb.Text}}{{if ne $i 0}}/{{end}}{{end}}

{{- range .Entries}} {{- if .IsDir}} {{- else}} {{- end}} {{- if .ModTime | afterEpoch }} {{- else}} {{- end}} {{- end}}
Name Size Modified
Go up
{{- if .IsDir}} {{- else}} {{- end}} {{html .Leaf}} {{.Size}}
rclone-1.53.3/cmd/serve/httplib/serve/dir.go000066400000000000000000000134261375552240400206700ustar00rootroot00000000000000package serve import ( "bytes" "fmt" "html/template" "net/http" "net/url" "path" "sort" "strings" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/lib/rest" ) // DirEntry is a directory entry type DirEntry struct { remote string URL string Leaf string IsDir bool Size int64 ModTime time.Time } // Directory represents a directory type Directory struct { DirRemote string Title string Name string Entries []DirEntry Query string HTMLTemplate *template.Template Breadcrumb []Crumb Sort string Order string } // Crumb is a breadcrumb entry type Crumb struct { Link string Text string } // NewDirectory makes an empty Directory func NewDirectory(dirRemote string, htmlTemplate *template.Template) *Directory { var breadcrumb []Crumb // skip trailing slash lpath := "/" + dirRemote if lpath[len(lpath)-1] == '/' { lpath = lpath[:len(lpath)-1] } parts := strings.Split(lpath, "/") for i := range parts { txt := parts[i] if i == 0 && parts[i] == "" { txt = "/" } lnk := strings.Repeat("../", len(parts)-i-1) breadcrumb = append(breadcrumb, Crumb{Link: lnk, Text: txt}) } d := &Directory{ DirRemote: dirRemote, Title: fmt.Sprintf("Directory listing of /%s", dirRemote), Name: fmt.Sprintf("/%s", dirRemote), HTMLTemplate: htmlTemplate, Breadcrumb: breadcrumb, } return d } // SetQuery sets the query parameters for each URL func (d *Directory) SetQuery(queryParams url.Values) *Directory { d.Query = "" if len(queryParams) > 0 { d.Query = "?" + queryParams.Encode() } return d } // AddHTMLEntry adds an entry to that directory func (d *Directory) AddHTMLEntry(remote string, isDir bool, size int64, modTime time.Time) { leaf := path.Base(remote) if leaf == "." { leaf = "" } urlRemote := leaf if isDir { leaf += "/" urlRemote += "/" } d.Entries = append(d.Entries, DirEntry{ remote: remote, URL: rest.URLPathEscape(urlRemote) + d.Query, Leaf: leaf, IsDir: isDir, Size: size, ModTime: modTime, }) } // AddEntry adds an entry to that directory func (d *Directory) AddEntry(remote string, isDir bool) { leaf := path.Base(remote) if leaf == "." 
{
		leaf = ""
	}
	urlRemote := leaf
	if isDir {
		leaf += "/"
		urlRemote += "/"
	}
	d.Entries = append(d.Entries, DirEntry{
		remote: remote,
		URL:    rest.URLPathEscape(urlRemote) + d.Query,
		Leaf:   leaf,
	})
}

// Error logs the error and if a ResponseWriter is given it writes an http.StatusInternalServerError
func Error(what interface{}, w http.ResponseWriter, text string, err error) {
	err = fs.CountError(err)
	fs.Errorf(what, "%s: %v", text, err)
	if w != nil {
		http.Error(w, text+".", http.StatusInternalServerError)
	}
}

// ProcessQueryParams sorts and orders the directory entries based on
// the request's sort/order parameters.  The default is namedirfirst/asc.
func (d *Directory) ProcessQueryParams(sortParm string, orderParm string) *Directory {
	d.Sort = sortParm
	d.Order = orderParm

	var toSort sort.Interface

	switch d.Sort {
	case sortByName:
		toSort = byName(*d)
	case sortByNameDirFirst:
		toSort = byNameDirFirst(*d)
	case sortBySize:
		toSort = bySize(*d)
	case sortByTime:
		toSort = byTime(*d)
	default:
		toSort = byNameDirFirst(*d)
	}
	if d.Order == "desc" && toSort != nil {
		toSort = sort.Reverse(toSort)
	}
	if toSort != nil {
		sort.Sort(toSort)
	}

	return d
}

type byName Directory
type byNameDirFirst Directory
type bySize Directory
type byTime Directory

func (d byName) Len() int      { return len(d.Entries) }
func (d byName) Swap(i, j int) { d.Entries[i], d.Entries[j] = d.Entries[j], d.Entries[i] }
func (d byName) Less(i, j int) bool {
	return strings.ToLower(d.Entries[i].Leaf) < strings.ToLower(d.Entries[j].Leaf)
}

func (d byNameDirFirst) Len() int      { return len(d.Entries) }
func (d byNameDirFirst) Swap(i, j int) { d.Entries[i], d.Entries[j] = d.Entries[j], d.Entries[i] }
func (d byNameDirFirst) Less(i, j int) bool {
	// sort by name if both are dir or file
	if d.Entries[i].IsDir == d.Entries[j].IsDir {
		return strings.ToLower(d.Entries[i].Leaf) < strings.ToLower(d.Entries[j].Leaf)
	}
	// sort dir ahead of file
	return d.Entries[i].IsDir
}

func (d bySize) Len() int      { return len(d.Entries) }
func (d bySize) Swap(i, j int) { d.Entries[i], d.Entries[j] = d.Entries[j], d.Entries[i] }
func (d bySize) Less(i, j int) bool {
	const directoryOffset = -1 << 31 // = math.MinInt32

	iSize, jSize := d.Entries[i].Size, d.Entries[j].Size

	// directory sizes depend on the file system; to
	// provide a consistent experience, put them up front
	// and sort them by name
	if d.Entries[i].IsDir {
		iSize = directoryOffset
	}
	if d.Entries[j].IsDir {
		jSize = directoryOffset
	}
	if d.Entries[i].IsDir && d.Entries[j].IsDir {
		return strings.ToLower(d.Entries[i].Leaf) < strings.ToLower(d.Entries[j].Leaf)
	}

	return iSize < jSize
}

func (d byTime) Len() int           { return len(d.Entries) }
func (d byTime) Swap(i, j int)      { d.Entries[i], d.Entries[j] = d.Entries[j], d.Entries[i] }
func (d byTime) Less(i, j int) bool { return d.Entries[i].ModTime.Before(d.Entries[j].ModTime) }

const (
	sortByName         = "name"
	sortByNameDirFirst = "namedirfirst"
	sortBySize         = "size"
	sortByTime         = "time"
)

// Serve serves a directory
func (d *Directory) Serve(w http.ResponseWriter, r *http.Request) {
	// Account the transfer
	tr := accounting.Stats(r.Context()).NewTransferRemoteSize(d.DirRemote, -1)
	defer tr.Done(nil)

	fs.Infof(d.DirRemote, "%s: Serving directory", r.RemoteAddr)

	buf := &bytes.Buffer{}
	err := d.HTMLTemplate.Execute(buf, d)
	if err != nil {
		Error(d.DirRemote, w, "Failed to render template", err)
		return
	}
	_, err = buf.WriteTo(w)
	if err != nil {
		Error(d.DirRemote, nil, "Failed to drain template buffer", err)
	}
}
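
// Example (a sketch, not part of the package API - the names, sizes and
// request values below are made up for illustration):
//
//	d := NewDirectory("files", htmlTemplate).SetQuery(r.URL.Query())
//	d.AddHTMLEntry("files/sub", true, 0, modTime)
//	d.AddHTMLEntry("files/a.txt", false, 64, modTime)
//	d.AddHTMLEntry("files/b.txt", false, 128, modTime)
//	// honour ?sort=size&order=desc from the request, falling back to
//	// namedirfirst/asc for anything unrecognised
//	d.ProcessQueryParams(r.URL.Query().Get("sort"), r.URL.Query().Get("order"))
//	d.Serve(w, r) // renders the HTML template into the response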
rclone-1.53.3/cmd/serve/httplib/serve/dir_test.go000066400000000000000000000077661375552240400217410ustar00rootroot00000000000000package serve import ( "errors" "html/template" "io/ioutil" "net/http" "net/http/httptest" "net/url" "testing" "time" "github.com/rclone/rclone/cmd/serve/httplib/serve/data" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func GetTemplate(t *testing.T) *template.Template { htmlTemplate, err := data.GetTemplate("../../http/testdata/golden/testindex.html") require.NoError(t, err) return htmlTemplate } func TestNewDirectory(t *testing.T) { d := NewDirectory("z", GetTemplate(t)) assert.Equal(t, "z", d.DirRemote) assert.Equal(t, "Directory listing of /z", d.Title) } func TestSetQuery(t *testing.T) { d := NewDirectory("z", GetTemplate(t)) assert.Equal(t, "", d.Query) d.SetQuery(url.Values{"potato": []string{"42"}}) assert.Equal(t, "?potato=42", d.Query) d.SetQuery(url.Values{}) assert.Equal(t, "", d.Query) } func TestAddHTMLEntry(t *testing.T) { var modtime = time.Now() var d = NewDirectory("z", GetTemplate(t)) d.AddHTMLEntry("", true, 0, modtime) d.AddHTMLEntry("dir", true, 0, modtime) d.AddHTMLEntry("a/b/c/d.txt", false, 64, modtime) d.AddHTMLEntry("a/b/c/colon:colon.txt", false, 64, modtime) d.AddHTMLEntry("\"quotes\".txt", false, 64, modtime) assert.Equal(t, []DirEntry{ {remote: "", URL: "/", Leaf: "/", IsDir: true, Size: 0, ModTime: modtime}, {remote: "dir", URL: "dir/", Leaf: "dir/", IsDir: true, Size: 0, ModTime: modtime}, {remote: "a/b/c/d.txt", URL: "d.txt", Leaf: "d.txt", IsDir: false, Size: 64, ModTime: modtime}, {remote: "a/b/c/colon:colon.txt", URL: "./colon:colon.txt", Leaf: "colon:colon.txt", IsDir: false, Size: 64, ModTime: modtime}, {remote: "\"quotes\".txt", URL: "%22quotes%22.txt", Leaf: "\"quotes\".txt", Size: 64, IsDir: false, ModTime: modtime}, }, d.Entries) // Now test with a query parameter d = NewDirectory("z", GetTemplate(t)).SetQuery(url.Values{"potato": []string{"42"}}) d.AddHTMLEntry("file", false, 64, modtime) d.AddHTMLEntry("dir", true, 0, modtime) assert.Equal(t, []DirEntry{ {remote: "file", URL: "file?potato=42", Leaf: "file", IsDir: false, Size: 64, ModTime: modtime}, {remote: "dir", URL: "dir/?potato=42", Leaf: "dir/", IsDir: true, Size: 0, ModTime: modtime}, }, d.Entries) } func TestAddEntry(t *testing.T) { var d = NewDirectory("z", GetTemplate(t)) d.AddEntry("", true) d.AddEntry("dir", true) d.AddEntry("a/b/c/d.txt", false) d.AddEntry("a/b/c/colon:colon.txt", false) d.AddEntry("\"quotes\".txt", false) assert.Equal(t, []DirEntry{ {remote: "", URL: "/", Leaf: "/"}, {remote: "dir", URL: "dir/", Leaf: "dir/"}, {remote: "a/b/c/d.txt", URL: "d.txt", Leaf: "d.txt"}, {remote: "a/b/c/colon:colon.txt", URL: "./colon:colon.txt", Leaf: "colon:colon.txt"}, {remote: "\"quotes\".txt", URL: "%22quotes%22.txt", Leaf: "\"quotes\".txt"}, }, d.Entries) // Now test with a query parameter d = NewDirectory("z", GetTemplate(t)).SetQuery(url.Values{"potato": []string{"42"}}) d.AddEntry("file", false) d.AddEntry("dir", true) assert.Equal(t, []DirEntry{ {remote: "file", URL: "file?potato=42", Leaf: "file"}, {remote: "dir", URL: "dir/?potato=42", Leaf: "dir/"}, }, d.Entries) } func TestError(t *testing.T) { w := httptest.NewRecorder() err := errors.New("help") Error("potato", w, "sausage", err) resp := w.Result() assert.Equal(t, http.StatusInternalServerError, resp.StatusCode) body, _ := ioutil.ReadAll(resp.Body) assert.Equal(t, "sausage.\n", string(body)) } func TestServe(t *testing.T) { d := NewDirectory("aDirectory", 
GetTemplate(t)) d.AddEntry("file", false) d.AddEntry("dir", true) w := httptest.NewRecorder() r := httptest.NewRequest("GET", "http://example.com/aDirectory/", nil) d.Serve(w, r) resp := w.Result() assert.Equal(t, http.StatusOK, resp.StatusCode) body, _ := ioutil.ReadAll(resp.Body) assert.Equal(t, ` Directory listing of /aDirectory

Directory listing of /aDirectory

file
dir/
`, string(body)) } rclone-1.53.3/cmd/serve/httplib/serve/serve.go000066400000000000000000000050611375552240400212320ustar00rootroot00000000000000// Package serve deals with serving objects over HTTP package serve import ( "fmt" "io" "net/http" "path" "strconv" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" ) // Object serves an fs.Object via HEAD or GET func Object(w http.ResponseWriter, r *http.Request, o fs.Object) { if r.Method != "HEAD" && r.Method != "GET" { http.Error(w, http.StatusText(http.StatusMethodNotAllowed), http.StatusMethodNotAllowed) return } // Show that we accept ranges w.Header().Set("Accept-Ranges", "bytes") // Set content length since we know how long the object is if o.Size() >= 0 { w.Header().Set("Content-Length", strconv.FormatInt(o.Size(), 10)) } // Set content type mimeType := fs.MimeType(r.Context(), o) if mimeType == "application/octet-stream" && path.Ext(o.Remote()) == "" { // Leave header blank so http server guesses } else { w.Header().Set("Content-Type", mimeType) } if r.Method == "HEAD" { return } // Decode Range request if present code := http.StatusOK size := o.Size() var options []fs.OpenOption if rangeRequest := r.Header.Get("Range"); rangeRequest != "" { //fs.Debugf(nil, "Range: request %q", rangeRequest) option, err := fs.ParseRangeOption(rangeRequest) if err != nil { fs.Debugf(o, "Get request parse range request error: %v", err) http.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest) return } options = append(options, option) offset, limit := option.Decode(o.Size()) end := o.Size() // exclusive if limit >= 0 { end = offset + limit } if end > o.Size() { end = o.Size() } size = end - offset // fs.Debugf(nil, "Range: offset=%d, limit=%d, end=%d, size=%d (object size %d)", offset, limit, end, size, o.Size()) // Content-Range: bytes 0-1023/146515 w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", offset, end-1, o.Size())) // fs.Debugf(nil, "Range: Content-Range: %q", w.Header().Get("Content-Range")) code = http.StatusPartialContent } w.Header().Set("Content-Length", strconv.FormatInt(size, 10)) file, err := o.Open(r.Context(), options...) 
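	// Open can fail here (eg if the object was removed since it was
	// listed) - the error path below returns a 404 in that case.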
if err != nil { fs.Debugf(o, "Get request open error: %v", err) http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound) return } tr := accounting.Stats(r.Context()).NewTransfer(o) defer func() { tr.Done(err) }() in := tr.Account(r.Context(), file) // account the transfer (no buffering) w.WriteHeader(code) n, err := io.Copy(w, in) if err != nil { fs.Errorf(o, "Didn't finish writing GET request (wrote %d/%d bytes): %v", n, size, err) return } } rclone-1.53.3/cmd/serve/httplib/serve/serve_test.go000066400000000000000000000051331375552240400222710ustar00rootroot00000000000000package serve import ( "io/ioutil" "net/http" "net/http/httptest" "testing" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" ) func TestObjectBadMethod(t *testing.T) { w := httptest.NewRecorder() r := httptest.NewRequest("BADMETHOD", "http://example.com/aFile", nil) o := mockobject.New("aFile") Object(w, r, o) resp := w.Result() assert.Equal(t, http.StatusMethodNotAllowed, resp.StatusCode) body, _ := ioutil.ReadAll(resp.Body) assert.Equal(t, "Method Not Allowed\n", string(body)) } func TestObjectHEAD(t *testing.T) { w := httptest.NewRecorder() r := httptest.NewRequest("HEAD", "http://example.com/aFile", nil) o := mockobject.New("aFile").WithContent([]byte("hello"), mockobject.SeekModeNone) Object(w, r, o) resp := w.Result() assert.Equal(t, http.StatusOK, resp.StatusCode) assert.Equal(t, "5", resp.Header.Get("Content-Length")) assert.Equal(t, "bytes", resp.Header.Get("Accept-Ranges")) body, _ := ioutil.ReadAll(resp.Body) assert.Equal(t, "", string(body)) } func TestObjectGET(t *testing.T) { w := httptest.NewRecorder() r := httptest.NewRequest("GET", "http://example.com/aFile", nil) o := mockobject.New("aFile").WithContent([]byte("hello"), mockobject.SeekModeNone) Object(w, r, o) resp := w.Result() assert.Equal(t, http.StatusOK, resp.StatusCode) assert.Equal(t, "5", resp.Header.Get("Content-Length")) assert.Equal(t, "bytes", resp.Header.Get("Accept-Ranges")) body, _ := ioutil.ReadAll(resp.Body) assert.Equal(t, "hello", string(body)) } func TestObjectRange(t *testing.T) { w := httptest.NewRecorder() r := httptest.NewRequest("GET", "http://example.com/aFile", nil) r.Header.Add("Range", "bytes=3-5") o := mockobject.New("aFile").WithContent([]byte("0123456789"), mockobject.SeekModeNone) Object(w, r, o) resp := w.Result() assert.Equal(t, http.StatusPartialContent, resp.StatusCode) assert.Equal(t, "3", resp.Header.Get("Content-Length")) assert.Equal(t, "bytes", resp.Header.Get("Accept-Ranges")) assert.Equal(t, "bytes 3-5/10", resp.Header.Get("Content-Range")) body, _ := ioutil.ReadAll(resp.Body) assert.Equal(t, "345", string(body)) } func TestObjectBadRange(t *testing.T) { w := httptest.NewRecorder() r := httptest.NewRequest("GET", "http://example.com/aFile", nil) r.Header.Add("Range", "xxxbytes=3-5") o := mockobject.New("aFile").WithContent([]byte("0123456789"), mockobject.SeekModeNone) Object(w, r, o) resp := w.Result() assert.Equal(t, http.StatusBadRequest, resp.StatusCode) assert.Equal(t, "10", resp.Header.Get("Content-Length")) body, _ := ioutil.ReadAll(resp.Body) assert.Equal(t, "Bad Request\n", string(body)) } rclone-1.53.3/cmd/serve/proxy/000077500000000000000000000000001375552240400161445ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/proxy/proxy.go000066400000000000000000000210041375552240400176510ustar00rootroot00000000000000// Package proxy implements a programmable proxy for rclone serve package proxy import ( "bytes" "crypto/sha256" "crypto/subtle" 
"encoding/json" "os/exec" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/obscure" libcache "github.com/rclone/rclone/lib/cache" "github.com/rclone/rclone/vfs" "github.com/rclone/rclone/vfs/vfsflags" ) // Help contains text describing how to use the proxy var Help = strings.Replace(` ### Auth Proxy If you supply the parameter |--auth-proxy /path/to/program| then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocl with input on STDIN and output on STDOUT. **PLEASE NOTE:** |--auth-proxy| and |--authorized-keys| cannot be used together, if |--auth-proxy| is set the authorized keys option will be ignored. There is an example program [bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) in the rclone source code. The program's job is to take a |user| and |pass| on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. This config generated must have this extra parameter - |_root| - root to use for the backend And it may have this parameter - |_obscure| - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: ||| { "user": "me", "pass": "mypassword" } ||| If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: ||| { "user": "me", "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ||| And as an example return this on STDOUT ||| { "type": "sftp", "_root": "", "_obscure": "pass", "user": "me", "pass": "mypassword", "host": "sftp.example.com" } ||| This would mean that an SFTP backend would be created on the fly for the |user| and |pass|/|public_key| returned in the output to the host given. Note that since |_obscure| is set to |pass|, rclone will obscure the |pass| parameter before creating the backend (which is required for sftp backends). The program can manipulate the supplied |user| in any way, for example to make proxy to many different sftp backends, you could make the |user| be |user@example.com| and then set the |host| to |example.com| in the output and the user to |user|. For security you'd probably want to restrict the |host| to a limited list. Note that an internal cache is keyed on |user| so only use that for configuration, don't use |pass| or |public_key|. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect. This can be used to build general purpose proxies to any kind of backend that rclone supports. 
`, "|", "`", -1) // Options is options for creating the proxy type Options struct { AuthProxy string } // DefaultOpt is the default values uses for Opt var DefaultOpt = Options{ AuthProxy: "", } // Proxy represents a proxy to turn auth requests into a VFS type Proxy struct { cmdLine []string // broken down command line vfsCache *libcache.Cache Opt Options } // cacheEntry is what is stored in the vfsCache type cacheEntry struct { vfs *vfs.VFS // stored VFS pwHash [sha256.Size]byte // sha256 hash of the password/publicKey } // New creates a new proxy with the Options passed in func New(opt *Options) *Proxy { return &Proxy{ Opt: *opt, cmdLine: strings.Fields(opt.AuthProxy), vfsCache: libcache.New(), } } // run the proxy command returning a config map func (p *Proxy) run(in map[string]string) (config configmap.Simple, err error) { cmd := exec.Command(p.cmdLine[0], p.cmdLine[1:]...) inBytes, err := json.MarshalIndent(in, "", "\t") if err != nil { return nil, errors.Wrap(err, "Proxy.Call failed to marshal input: %v") } var stdout, stderr bytes.Buffer cmd.Stdin = bytes.NewBuffer(inBytes) cmd.Stdout = &stdout cmd.Stderr = &stderr start := time.Now() err = cmd.Run() fs.Debugf(nil, "Calling proxy %v", p.cmdLine) duration := time.Since(start) if err != nil { return nil, errors.Wrapf(err, "proxy: failed on %v: %q", p.cmdLine, strings.TrimSpace(string(stderr.Bytes()))) } err = json.Unmarshal(stdout.Bytes(), &config) if err != nil { return nil, errors.Wrapf(err, "proxy: failed to read output: %q", string(stdout.Bytes())) } fs.Debugf(nil, "Proxy returned in %v", duration) // Obscure any values in the config map that need it obscureFields, ok := config.Get("_obscure") if ok { for _, key := range strings.Split(obscureFields, ",") { value, ok := config.Get(key) if ok { obscuredValue, err := obscure.Obscure(value) if err != nil { return nil, errors.Wrap(err, "proxy") } config.Set(key, obscuredValue) } } } return config, nil } // call runs the auth proxy and returns a cacheEntry and an error func (p *Proxy) call(user, auth string, isPublicKey bool) (value interface{}, err error) { var config configmap.Simple // Contact the proxy if isPublicKey { config, err = p.run(map[string]string{ "user": user, "public_key": auth, }) } else { config, err = p.run(map[string]string{ "user": user, "pass": auth, }) } if err != nil { return nil, err } // Look for required fields in the answer fsName, ok := config.Get("type") if !ok { return nil, errors.New("proxy: type not set in result") } root, ok := config.Get("_root") if !ok { return nil, errors.New("proxy: _root not set in result") } // Find the backend fsInfo, err := fs.Find(fsName) if err != nil { return nil, errors.Wrapf(err, "proxy: couldn't find backend for %q", fsName) } // base name of config on user name. This may appear in logs name := "proxy-" + user fsString := name + ":" + root // Look for fs in the VFS cache value, err = p.vfsCache.Get(user, func(key string) (value interface{}, ok bool, err error) { // Create the Fs from the cache f, err := cache.GetFn(fsString, func(fsString string) (fs.Fs, error) { // Update the config with the default values for i := range fsInfo.Options { o := &fsInfo.Options[i] if _, found := config.Get(o.Name); !found && o.Default != nil && o.String() != "" { config.Set(o.Name, o.String()) } } return fsInfo.NewFs(name, root, config) }) if err != nil { return nil, false, err } // We hash the auth here so we don't copy the auth more than we // need to in memory. 
// An attacker would find it easier to go after the unencrypted password in memory, most likely. entry := cacheEntry{ vfs: vfs.New(f, &vfsflags.Opt), pwHash: sha256.Sum256([]byte(auth)), } return entry, true, nil }) if err != nil { return nil, errors.Wrap(err, "proxy: failed to create backend") } return value, nil } // Call runs the auth proxy with the username and password/public key provided // returning a *vfs.VFS and the key used in the VFS cache. func (p *Proxy) Call(user, auth string, isPublicKey bool) (VFS *vfs.VFS, vfsKey string, err error) { // Look in the cache first value, ok := p.vfsCache.GetMaybe(user) // If not found then call the proxy for a fresh answer if !ok { value, err = p.call(user, auth, isPublicKey) if err != nil { return nil, "", err } } // check we got what we were expecting entry, ok := value.(cacheEntry) if !ok { return nil, "", errors.Errorf("proxy: value is not cache entry: %#v", value) } // Check the password / public key is correct in the cached entry. This // prevents an attack where subsequent requests for the same // user don't have their auth checked. It does mean that if // the password is changed, the user will have to wait for // cache expiry (5m) before trying again. authHash := sha256.Sum256([]byte(auth)) if subtle.ConstantTimeCompare(authHash[:], entry.pwHash[:]) != 1 { if isPublicKey { return nil, "", errors.New("proxy: incorrect public key") } return nil, "", errors.New("proxy: incorrect password") } return entry.vfs, user, nil } // Get VFS from the cache using key - returns nil if not found func (p *Proxy) Get(key string) *vfs.VFS { value, ok := p.vfsCache.GetMaybe(key) if !ok { return nil } entry := value.(cacheEntry) return entry.vfs } rclone-1.53.3/cmd/serve/proxy/proxy_code.go000066400000000000000000000011441375552240400206460ustar00rootroot00000000000000// +build ignore // A simple auth proxy for testing purposes package main import ( "encoding/json" "log" "os" ) func main() { // Read the input var in map[string]string err := json.NewDecoder(os.Stdin).Decode(&in) if err != nil { log.Fatal(err) } // Write the output var out = map[string]string{} for k, v := range in { switch k { case "user": v += "-test" case "error": log.Fatal(v) } out[k] = v } if out["type"] == "" { out["type"] = "local" } // ensure the _root key is present even if empty if out["_root"] == "" { out["_root"] = "" } err = json.NewEncoder(os.Stdout).Encode(&out) if err != nil { log.Fatal(err) } } rclone-1.53.3/cmd/serve/proxy/proxy_test.go000066400000000000000000000144511375552240400207200ustar00rootroot00000000000000package proxy import ( "crypto/rand" "crypto/rsa" "crypto/sha256" "encoding/base64" "log" "strings" "testing" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/obscure" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "golang.org/x/crypto/ssh" ) func TestRun(t *testing.T) { opt := DefaultOpt cmd := "go run proxy_code.go" opt.AuthProxy = cmd p := New(&opt) t.Run("Normal", func(t *testing.T) { config, err := p.run(map[string]string{ "type": "ftp", "user": "me", "pass": "pass", "host": "127.0.0.1", }) require.NoError(t, err) assert.Equal(t, configmap.Simple{ "type": "ftp", "user": "me-test", "pass": "pass", "host": "127.0.0.1", "_root": "", }, config) }) t.Run("Error", func(t *testing.T) { config, err := p.run(map[string]string{ "error": "potato", }) assert.Nil(t, config) require.Error(t, err) require.Contains(t, err.Error(), "potato") }) t.Run("Obscure", func(t *testing.T) { config, err := p.run(map[string]string{
"type": "ftp", "user": "me", "pass": "pass", "host": "127.0.0.1", "_obscure": "pass,user", }) require.NoError(t, err) config["user"] = obscure.MustReveal(config["user"]) config["pass"] = obscure.MustReveal(config["pass"]) assert.Equal(t, configmap.Simple{ "type": "ftp", "user": "me-test", "pass": "pass", "host": "127.0.0.1", "_obscure": "pass,user", "_root": "", }, config) }) const testUser = "testUser" const testPass = "testPass" t.Run("call w/Password", func(t *testing.T) { // check cache empty assert.Equal(t, 0, p.vfsCache.Entries()) defer p.vfsCache.Clear() passwordBytes := []byte(testPass) value, err := p.call(testUser, testPass, false) require.NoError(t, err) entry, ok := value.(cacheEntry) require.True(t, ok) // check hash is correct in entry assert.Equal(t, entry.pwHash, sha256.Sum256(passwordBytes)) require.NotNil(t, entry.vfs) f := entry.vfs.Fs() require.NotNil(t, f) assert.Equal(t, "proxy-"+testUser, f.Name()) assert.True(t, strings.HasPrefix(f.String(), "Local file system")) // check it is in the cache assert.Equal(t, 1, p.vfsCache.Entries()) cacheValue, ok := p.vfsCache.GetMaybe(testUser) assert.True(t, ok) assert.Equal(t, value, cacheValue) }) t.Run("Call w/Password", func(t *testing.T) { // check cache empty assert.Equal(t, 0, p.vfsCache.Entries()) defer p.vfsCache.Clear() vfs, vfsKey, err := p.Call(testUser, testPass, false) require.NoError(t, err) require.NotNil(t, vfs) assert.Equal(t, "proxy-"+testUser, vfs.Fs().Name()) assert.Equal(t, testUser, vfsKey) // check it is in the cache assert.Equal(t, 1, p.vfsCache.Entries()) cacheValue, ok := p.vfsCache.GetMaybe(testUser) assert.True(t, ok) cacheEntry, ok := cacheValue.(cacheEntry) assert.True(t, ok) assert.Equal(t, vfs, cacheEntry.vfs) // Test Get works while we have something in the cache t.Run("Get", func(t *testing.T) { assert.Equal(t, vfs, p.Get(testUser)) assert.Nil(t, p.Get("unknown")) }) // now try again from the cache vfs, vfsKey, err = p.Call(testUser, testPass, false) require.NoError(t, err) require.NotNil(t, vfs) assert.Equal(t, "proxy-"+testUser, vfs.Fs().Name()) assert.Equal(t, testUser, vfsKey) // check cache is at the same level assert.Equal(t, 1, p.vfsCache.Entries()) // now try again from the cache but with wrong password vfs, vfsKey, err = p.Call(testUser, testPass+"wrong", false) require.Error(t, err) require.Contains(t, err.Error(), "incorrect password") require.Nil(t, vfs) require.Equal(t, "", vfsKey) // check cache is at the same level assert.Equal(t, 1, p.vfsCache.Entries()) }) privateKey, privateKeyErr := rsa.GenerateKey(rand.Reader, 2048) if privateKeyErr != nil { log.Fatal("error generating test private key " + privateKeyErr.Error()) } publicKey, publicKeyError := ssh.NewPublicKey(&privateKey.PublicKey) if privateKeyErr != nil { log.Fatal("error generating test public key " + publicKeyError.Error()) } publicKeyString := base64.StdEncoding.EncodeToString(publicKey.Marshal()) t.Run("Call w/PublicKey", func(t *testing.T) { // check cache empty assert.Equal(t, 0, p.vfsCache.Entries()) defer p.vfsCache.Clear() value, err := p.call(testUser, publicKeyString, true) require.NoError(t, err) entry, ok := value.(cacheEntry) require.True(t, ok) // check publicKey is correct in entry require.NoError(t, err) require.NotNil(t, entry.vfs) f := entry.vfs.Fs() require.NotNil(t, f) assert.Equal(t, "proxy-"+testUser, f.Name()) assert.True(t, strings.HasPrefix(f.String(), "Local file system")) // check it is in the cache assert.Equal(t, 1, p.vfsCache.Entries()) cacheValue, ok := p.vfsCache.GetMaybe(testUser) 
assert.True(t, ok) assert.Equal(t, value, cacheValue) }) t.Run("Call w/PublicKey", func(t *testing.T) { // check cache empty assert.Equal(t, 0, p.vfsCache.Entries()) defer p.vfsCache.Clear() vfs, vfsKey, err := p.Call( testUser, publicKeyString, true, ) require.NoError(t, err) require.NotNil(t, vfs) assert.Equal(t, "proxy-"+testUser, vfs.Fs().Name()) assert.Equal(t, testUser, vfsKey) // check it is in the cache assert.Equal(t, 1, p.vfsCache.Entries()) cacheValue, ok := p.vfsCache.GetMaybe(testUser) assert.True(t, ok) cacheEntry, ok := cacheValue.(cacheEntry) assert.True(t, ok) assert.Equal(t, vfs, cacheEntry.vfs) // Test Get works while we have something in the cache t.Run("Get", func(t *testing.T) { assert.Equal(t, vfs, p.Get(testUser)) assert.Nil(t, p.Get("unknown")) }) // now try again from the cache vfs, vfsKey, err = p.Call(testUser, publicKeyString, true) require.NoError(t, err) require.NotNil(t, vfs) assert.Equal(t, "proxy-"+testUser, vfs.Fs().Name()) assert.Equal(t, testUser, vfsKey) // check cache is at the same level assert.Equal(t, 1, p.vfsCache.Entries()) // now try again from the cache but with wrong public key vfs, vfsKey, err = p.Call(testUser, publicKeyString+"wrong", true) require.Error(t, err) require.Contains(t, err.Error(), "incorrect public key") require.Nil(t, vfs) require.Equal(t, "", vfsKey) // check cache is at the same level assert.Equal(t, 1, p.vfsCache.Entries()) }) } rclone-1.53.3/cmd/serve/proxy/proxyflags/000077500000000000000000000000001375552240400203425ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/proxy/proxyflags/proxyflags.go000066400000000000000000000010221375552240400230620ustar00rootroot00000000000000// Package proxyflags implements command line flags to set up a proxy package proxyflags import ( "github.com/rclone/rclone/cmd/serve/proxy" "github.com/rclone/rclone/fs/config/flags" "github.com/spf13/pflag" ) // Options set by command line flags var ( Opt = proxy.DefaultOpt ) // AddFlags adds the non filing system specific flags to the command func AddFlags(flagSet *pflag.FlagSet) { flags.StringVarP(flagSet, &Opt.AuthProxy, "auth-proxy", "", Opt.AuthProxy, "A program to use to create the backend from the auth.") } rclone-1.53.3/cmd/serve/restic/000077500000000000000000000000001375552240400162545ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/restic/restic-test.sh000077500000000000000000000011211375552240400210560ustar00rootroot00000000000000#!/bin/bash # # Test all the remotes against restic integration test # Run with: screen -S restic-test -L ./restic-test.sh remotes=" TestAzureBlob: TestB2: TestBox: TestCache: TestCryptDrive: TestCryptSwift: TestDrive: TestDropbox: TestFichier: TestFTP: TestGoogleCloudStorage: TestHubic: TestOneDrive: TestPcloud: TestQingStor: TestS3: TestSftp: TestSwift: TestWebdav: TestYandex: " # TestOss: # TestMega: for remote in $remotes; do echo `date -Is` $remote starting go test -remote $remote -v -timeout 30m 2>&1 | tee restic-test.$remote.log echo `date -Is` $remote ending done rclone-1.53.3/cmd/serve/restic/restic.go000066400000000000000000000266101375552240400201010ustar00rootroot00000000000000// Package restic serves a remote suitable for use with restic package restic import ( "encoding/json" "errors" "net/http" "os" "path" "regexp" "strings" "time" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/serve/httplib" "github.com/rclone/rclone/cmd/serve/httplib/httpflags" "github.com/rclone/rclone/cmd/serve/httplib/serve" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/terminal" "github.com/spf13/cobra" "golang.org/x/net/http2" ) var ( stdio bool appendOnly bool privateRepos bool ) func init() { httpflags.AddFlags(Command.Flags()) flagSet := Command.Flags() flags.BoolVarP(flagSet, &stdio, "stdio", "", false, "run an HTTP2 server on stdin/stdout") flags.BoolVarP(flagSet, &appendOnly, "append-only", "", false, "disallow deletion of repository data") flags.BoolVarP(flagSet, &privateRepos, "private-repos", "", false, "users can only access their private repo") } // Command definition for cobra var Command = &cobra.Command{ Use: "restic remote:path", Short: `Serve the remote for restic's REST API.`, Long: `rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly. [Restic](https://restic.net/) is a command line program for doing backups. The server will log errors. Use -v to see access logs. --bwlimit will be respected for file transfers. Use --stats to control the stats printing. ### Setting up rclone for use by restic ### First [set up a remote for your chosen cloud provider](/docs/#configure). Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions. Now start the rclone restic server rclone serve restic -v remote:backup Where you can replace "backup" in the above by whatever path in the remote you wish to use. By default this will serve on "localhost:8080" you can change this with use of the "--addr" flag. You might wish to start this server on boot. ### Setting up restic to use rclone ### Now you can [follow the restic instructions](http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) on setting up restic. Note that you will need restic 0.8.2 or later to interoperate with rclone. For the example above you will want to use "http://localhost:8080/" as the URL for the REST server. For example: $ export RESTIC_REPOSITORY=rest:http://localhost:8080/ $ export RESTIC_PASSWORD=yourpassword $ restic init created restic backend 8b1a4b56ae at rest:http://localhost:8080/ Please note that knowledge of your password is required to access the repository. Losing your password means that your data is irrecoverably lost. $ restic backup /path/to/files/to/backup scan [/path/to/files/to/backup] scanned 189 directories, 312 files in 0:00 [0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00 duration: 0:00 snapshot 45c8fdd8 saved #### Multiple repositories #### Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these **must** end with /. Eg $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/ # backup user1 stuff $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/ # backup user2 stuff #### Private repositories #### The "--private-repos" flag can be used to limit users to repositories starting with a path of ` + "`//`" + `. 
` + httplib.Help, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) f := cmd.NewFsSrc(args) cmd.Run(false, true, command, func() error { s := NewServer(f, &httpflags.Opt) if stdio { if terminal.IsTerminal(int(os.Stdout.Fd())) { return errors.New("Refusing to run HTTP2 server directly on a terminal, please let restic start rclone") } conn := &StdioConn{ stdin: os.Stdin, stdout: os.Stdout, } httpSrv := &http2.Server{} opts := &http2.ServeConnOpts{ Handler: s, } httpSrv.ServeConn(conn, opts) return nil } err := s.Serve() if err != nil { return err } s.Wait() return nil }) }, } const ( resticAPIV2 = "application/vnd.x.restic.rest.v2" ) // Server contains everything to run the Server type Server struct { *httplib.Server f fs.Fs } // NewServer returns an HTTP server that speaks the rest protocol func NewServer(f fs.Fs, opt *httplib.Options) *Server { mux := http.NewServeMux() s := &Server{ Server: httplib.NewServer(mux, opt), f: f, } mux.HandleFunc(s.Opt.BaseURL+"/", s.ServeHTTP) return s } // Serve runs the http server in the background. // // Use s.Close() and s.Wait() to shutdown server func (s *Server) Serve() error { err := s.Server.Serve() if err != nil { return err } fs.Logf(s.f, "Serving restic REST API on %s", s.URL()) return nil } var matchData = regexp.MustCompile("(?:^|/)data/([^/]{2,})$") // Makes a remote from a URL path. This implements the backend layout // required by restic. func makeRemote(path string) string { path = strings.Trim(path, "/") parts := matchData.FindStringSubmatch(path) // if no data directory, layout is flat if parts == nil { return path } // otherwise map // data/2159dd48 to // data/21/2159dd48 fileName := parts[1] prefix := path[:len(path)-len(fileName)] return prefix + fileName[:2] + "/" + fileName } // ServeHTTP reads incoming requests and dispatches them func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) { w.Header().Set("Accept-Ranges", "bytes") w.Header().Set("Server", "rclone/"+fs.Version) path, ok := s.Path(w, r) if !ok { return } remote := makeRemote(path) fs.Debugf(s.f, "%s %s", r.Method, path) v := r.Context().Value(httplib.ContextUserKey) if privateRepos && (v == nil || !strings.HasPrefix(path, "/"+v.(string)+"/")) { http.Error(w, http.StatusText(http.StatusForbidden), http.StatusForbidden) return } // Dispatch on path then method if strings.HasSuffix(path, "/") { switch r.Method { case "GET": s.listObjects(w, r, remote) case "POST": s.createRepo(w, r, remote) default: http.Error(w, http.StatusText(http.StatusMethodNotAllowed), http.StatusMethodNotAllowed) } } else { switch r.Method { case "GET", "HEAD": s.serveObject(w, r, remote) case "POST": s.postObject(w, r, remote) case "DELETE": s.deleteObject(w, r, remote) default: http.Error(w, http.StatusText(http.StatusMethodNotAllowed), http.StatusMethodNotAllowed) } } } // get the remote func (s *Server) serveObject(w http.ResponseWriter, r *http.Request, remote string) { o, err := s.f.NewObject(r.Context(), remote) if err != nil { fs.Debugf(remote, "%s request error: %v", r.Method, err) http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound) return } serve.Object(w, r, o) } // postObject posts an object to the repository func (s *Server) postObject(w http.ResponseWriter, r *http.Request, remote string) { if appendOnly { // make sure the file does not exist yet _, err := s.f.NewObject(r.Context(), remote) if err == nil { fs.Errorf(remote, "Post request: file already exists, refusing to overwrite in append-only mode") http.Error(w, 
http.StatusText(http.StatusForbidden), http.StatusForbidden) return } } _, err := operations.RcatSize(r.Context(), s.f, remote, r.Body, r.ContentLength, time.Now()) if err != nil { err = accounting.Stats(r.Context()).Error(err) fs.Errorf(remote, "Post request rcat error: %v", err) http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError) return } } // delete the remote func (s *Server) deleteObject(w http.ResponseWriter, r *http.Request, remote string) { if appendOnly { parts := strings.Split(r.URL.Path, "/") // if path doesn't end in "/locks/:name", disallow the operation if len(parts) < 2 || parts[len(parts)-2] != "locks" { http.Error(w, http.StatusText(http.StatusForbidden), http.StatusForbidden) return } } o, err := s.f.NewObject(r.Context(), remote) if err != nil { fs.Debugf(remote, "Delete request error: %v", err) http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound) return } if err := o.Remove(r.Context()); err != nil { fs.Errorf(remote, "Delete request remove error: %v", err) if err == fs.ErrorObjectNotFound { http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound) } else { http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError) } return } } // listItem is an element returned for the restic v2 list response type listItem struct { Name string `json:"name"` Size int64 `json:"size"` } // return type for list type listItems []listItem // add a DirEntry to the listItems func (ls *listItems) add(entry fs.DirEntry) { if o, ok := entry.(fs.Object); ok { *ls = append(*ls, listItem{ Name: path.Base(o.Remote()), Size: o.Size(), }) } } // listObjects lists all Objects of a given type in an arbitrary order. func (s *Server) listObjects(w http.ResponseWriter, r *http.Request, remote string) { fs.Debugf(remote, "list request") if r.Header.Get("Accept") != resticAPIV2 { fs.Errorf(remote, "Restic v2 API required") http.Error(w, "Restic v2 API required", http.StatusBadRequest) return } // make sure an empty list is returned, and not a 'nil' value ls := listItems{} // if remote supports ListR use that directly, otherwise use recursive Walk err := walk.ListR(r.Context(), s.f, remote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error { for _, entry := range entries { ls.add(entry) } return nil }) if err != nil { _, err = fserrors.Cause(err) if err != fs.ErrorDirNotFound { fs.Errorf(remote, "list failed: %#v %T", err, err) http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound) return } } w.Header().Set("Content-Type", "application/vnd.x.restic.rest.v2") enc := json.NewEncoder(w) err = enc.Encode(ls) if err != nil { fs.Errorf(remote, "failed to write list: %v", err) http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError) return } } // createRepo creates repository directories. 
// // We don't bother creating the data dirs as rclone will create them on the fly func (s *Server) createRepo(w http.ResponseWriter, r *http.Request, remote string) { fs.Infof(remote, "Creating repository") if r.URL.Query().Get("create") != "true" { http.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest) return } err := s.f.Mkdir(r.Context(), remote) if err != nil { fs.Errorf(remote, "Create repo failed to Mkdir: %v", err) http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError) return } for _, name := range []string{"data", "index", "keys", "locks", "snapshots"} { dirRemote := path.Join(remote, name) err := s.f.Mkdir(r.Context(), dirRemote) if err != nil { fs.Errorf(dirRemote, "Create repo failed to Mkdir: %v", err) http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError) return } } } rclone-1.53.3/cmd/serve/restic/restic_appendonly_test.go000066400000000000000000000071241375552240400233700ustar00rootroot00000000000000package restic import ( "crypto/rand" "encoding/hex" "io" "io/ioutil" "net/http" "os" "strings" "testing" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/serve/httplib/httpflags" "github.com/stretchr/testify/require" ) // createOverwriteDeleteSeq returns a sequence which will create a new file at // path, and then try to overwrite and delete it. func createOverwriteDeleteSeq(t testing.TB, path string) []TestRequest { // add a file, try to overwrite and delete it req := []TestRequest{ { req: newRequest(t, "GET", path, nil), want: []wantFunc{wantCode(http.StatusNotFound)}, }, { req: newRequest(t, "POST", path, strings.NewReader("foobar test config")), want: []wantFunc{wantCode(http.StatusOK)}, }, { req: newRequest(t, "GET", path, nil), want: []wantFunc{ wantCode(http.StatusOK), wantBody("foobar test config"), }, }, { req: newRequest(t, "POST", path, strings.NewReader("other config")), want: []wantFunc{wantCode(http.StatusForbidden)}, }, { req: newRequest(t, "GET", path, nil), want: []wantFunc{ wantCode(http.StatusOK), wantBody("foobar test config"), }, }, { req: newRequest(t, "DELETE", path, nil), want: []wantFunc{wantCode(http.StatusForbidden)}, }, { req: newRequest(t, "GET", path, nil), want: []wantFunc{ wantCode(http.StatusOK), wantBody("foobar test config"), }, }, } return req } // TestResticHandler runs tests on the restic handler code, especially in append-only mode. 
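// In append-only mode the expected behaviour (exercised by createOverwriteDeleteSeq
// above) is: the first POST of an object succeeds, and any further POST or DELETE of
// the same object gets 403 Forbidden - with the exception of lock files, which may
// still be deleted so that stale locks can be cleared.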
func TestResticHandler(t *testing.T) { buf := make([]byte, 32) _, err := io.ReadFull(rand.Reader, buf) require.NoError(t, err) randomID := hex.EncodeToString(buf) var tests = []struct { seq []TestRequest }{ {createOverwriteDeleteSeq(t, "/config")}, {createOverwriteDeleteSeq(t, "/data/"+randomID)}, { // ensure we can add and remove lock files []TestRequest{ { req: newRequest(t, "GET", "/locks/"+randomID, nil), want: []wantFunc{wantCode(http.StatusNotFound)}, }, { req: newRequest(t, "POST", "/locks/"+randomID, strings.NewReader("lock file")), want: []wantFunc{wantCode(http.StatusOK)}, }, { req: newRequest(t, "GET", "/locks/"+randomID, nil), want: []wantFunc{ wantCode(http.StatusOK), wantBody("lock file"), }, }, { req: newRequest(t, "POST", "/locks/"+randomID, strings.NewReader("other lock file")), want: []wantFunc{wantCode(http.StatusForbidden)}, }, { req: newRequest(t, "DELETE", "/locks/"+randomID, nil), want: []wantFunc{wantCode(http.StatusOK)}, }, { req: newRequest(t, "GET", "/locks/"+randomID, nil), want: []wantFunc{wantCode(http.StatusNotFound)}, }, }, }, } // setup rclone with a local backend in a temporary directory tempdir, err := ioutil.TempDir("", "rclone-restic-test-") require.NoError(t, err) // make sure the tempdir is properly removed defer func() { err := os.RemoveAll(tempdir) require.NoError(t, err) }() // globally set append-only mode prev := appendOnly appendOnly = true defer func() { appendOnly = prev // reset when done }() // make a new file system in the temp dir f := cmd.NewFsSrc([]string{tempdir}) srv := NewServer(f, &httpflags.Opt) // create the repo checkRequest(t, srv.ServeHTTP, newRequest(t, "POST", "/?create=true", nil), []wantFunc{wantCode(http.StatusOK)}) for _, test := range tests { t.Run("", func(t *testing.T) { for i, seq := range test.seq { t.Logf("request %v: %v %v", i, seq.req.Method, seq.req.URL.Path) checkRequest(t, srv.ServeHTTP, seq.req, seq.want) } }) } } rclone-1.53.3/cmd/serve/restic/restic_privaterepos_test.go000066400000000000000000000045761375552240400237520ustar00rootroot00000000000000package restic import ( "context" "crypto/rand" "io" "io/ioutil" "net/http" "os" "strings" "testing" "github.com/rclone/rclone/cmd/serve/httplib" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/serve/httplib/httpflags" "github.com/stretchr/testify/require" ) // newAuthenticatedRequest returns a new HTTP request with the given params. 
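// It marks the request as already authenticated by setting httplib.ContextUserKey
// to "test" on the request context (mimicking what httplib does after successful
// basic auth) and adds the restic v2 Accept header.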
func newAuthenticatedRequest(t testing.TB, method, path string, body io.Reader) *http.Request { req := newRequest(t, method, path, body) req = req.WithContext(context.WithValue(req.Context(), httplib.ContextUserKey, "test")) req.Header.Add("Accept", resticAPIV2) return req } // TestResticPrivateRepositories runs tests on the restic handler code for private repositories func TestResticPrivateRepositories(t *testing.T) { buf := make([]byte, 32) _, err := io.ReadFull(rand.Reader, buf) require.NoError(t, err) // setup rclone with a local backend in a temporary directory tempdir, err := ioutil.TempDir("", "rclone-restic-test-") require.NoError(t, err) // make sure the tempdir is properly removed defer func() { err := os.RemoveAll(tempdir) require.NoError(t, err) }() // globally set private-repos mode & test user prev := privateRepos prevUser := httpflags.Opt.BasicUser prevPassword := httpflags.Opt.BasicPass privateRepos = true httpflags.Opt.BasicUser = "test" httpflags.Opt.BasicPass = "password" // reset when done defer func() { privateRepos = prev httpflags.Opt.BasicUser = prevUser httpflags.Opt.BasicPass = prevPassword }() // make a new file system in the temp dir f := cmd.NewFsSrc([]string{tempdir}) srv := NewServer(f, &httpflags.Opt) // Requesting /test/ should allow access reqs := []*http.Request{ newAuthenticatedRequest(t, "POST", "/test/?create=true", nil), newAuthenticatedRequest(t, "POST", "/test/config", strings.NewReader("foobar test config")), newAuthenticatedRequest(t, "GET", "/test/config", nil), } for _, req := range reqs { checkRequest(t, srv.ServeHTTP, req, []wantFunc{wantCode(http.StatusOK)}) } // Requesting everything else should raise forbidden errors reqs = []*http.Request{ newAuthenticatedRequest(t, "GET", "/", nil), newAuthenticatedRequest(t, "POST", "/other_user", nil), newAuthenticatedRequest(t, "GET", "/other_user/config", nil), } for _, req := range reqs { checkRequest(t, srv.ServeHTTP, req, []wantFunc{wantCode(http.StatusForbidden)}) } } rclone-1.53.3/cmd/serve/restic/restic_test.go000066400000000000000000000043711375552240400211400ustar00rootroot00000000000000// Serve restic tests set up a server and run the integration tests // for restic against it. package restic import ( "context" "os" "os/exec" "testing" _ "github.com/rclone/rclone/backend/all" "github.com/rclone/rclone/cmd/serve/httplib" "github.com/rclone/rclone/fstest" "github.com/stretchr/testify/assert" ) const ( testBindAddress = "localhost:0" resticSource = "../../../../../restic/restic" ) // TestRestic runs the restic server then runs the unit tests for the // restic remote against it. func TestRestic(t *testing.T) { _, err := os.Stat(resticSource) if err != nil { t.Skipf("Skipping test as restic source not found: %v", err) } opt := httplib.DefaultOpt opt.ListenAddr = testBindAddress fstest.Initialise() fremote, _, clean, err := fstest.RandomRemote() assert.NoError(t, err) defer clean() err = fremote.Mkdir(context.Background(), "") assert.NoError(t, err) // Start the server w := NewServer(fremote, &opt) assert.NoError(t, w.Serve()) defer func() { w.Close() w.Wait() }() // Change directory to run the tests err = os.Chdir(resticSource) assert.NoError(t, err, "failed to cd to restic source code") // Run the restic tests runTests := func(path string) { args := []string{"test", "./internal/backend/rest", "-run", "TestBackendRESTExternalServer", "-count=1"} if testing.Verbose() { args = append(args, "-v") } cmd := exec.Command("go", args...) 
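// Point the restic integration tests at the server we just started by
// passing the repository location in the environment.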
cmd.Env = append(os.Environ(), "RESTIC_TEST_REST_REPOSITORY=rest:"+w.Server.URL()+path, "GO111MODULE=on", ) out, err := cmd.CombinedOutput() if len(out) != 0 { t.Logf("\n----------\n%s----------\n", string(out)) } assert.NoError(t, err, "Running restic integration tests") } // Run the tests with no path runTests("") //... and again with a path runTests("potato/sausage/") } func TestMakeRemote(t *testing.T) { for _, test := range []struct { in, want string }{ {"", ""}, {"/", ""}, {"/data", "data"}, {"/data/", "data"}, {"/data/1", "data/1"}, {"/data/12", "data/12/12"}, {"/data/123", "data/12/123"}, {"/data/123/", "data/12/123"}, {"/keys", "keys"}, {"/keys/1", "keys/1"}, {"/keys/12", "keys/12"}, {"/keys/123", "keys/123"}, } { got := makeRemote(test.in) assert.Equal(t, test.want, got, test.in) } } rclone-1.53.3/cmd/serve/restic/restic_utils_test.go000066400000000000000000000030311375552240400223500ustar00rootroot00000000000000package restic import ( "io" "net/http" "net/http/httptest" "testing" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // declare a few helper functions // wantFunc tests the HTTP response in res and marks the test as errored if something is incorrect. type wantFunc func(t testing.TB, res *httptest.ResponseRecorder) // newRequest returns a new HTTP request with the given params. On error, the // test is marked as failed. func newRequest(t testing.TB, method, path string, body io.Reader) *http.Request { req, err := http.NewRequest(method, path, body) require.NoError(t, err) return req } // wantCode returns a function which checks that the response has the correct HTTP status code. func wantCode(code int) wantFunc { return func(t testing.TB, res *httptest.ResponseRecorder) { assert.Equal(t, code, res.Code) } } // wantBody returns a function which checks that the response has the data in the body. func wantBody(body string) wantFunc { return func(t testing.TB, res *httptest.ResponseRecorder) { assert.NotNil(t, res.Body) assert.Equal(t, []byte(body), res.Body.Bytes()) } } // checkRequest uses f to process the request and runs the checker functions on the result. func checkRequest(t testing.TB, f http.HandlerFunc, req *http.Request, want []wantFunc) { rr := httptest.NewRecorder() f(rr, req) for _, fn := range want { fn(t, rr) } } // TestRequest is a sequence of HTTP requests with (optional) tests for the response. type TestRequest struct { req *http.Request want []wantFunc } rclone-1.53.3/cmd/serve/restic/stdio_conn.go000066400000000000000000000025641375552240400207470ustar00rootroot00000000000000package restic import ( "net" "os" "time" ) // Addr implements net.Addr for stdin/stdout. type Addr struct{} // Network returns the network type as a string. func (a Addr) Network() string { return "stdio" } // String returns the address as a string. func (a Addr) String() string { return "stdio" } // StdioConn implements a net.Conn via stdin/stdout. type StdioConn struct { stdin *os.File stdout *os.File } func (s *StdioConn) Read(p []byte) (int, error) { return s.stdin.Read(p) } func (s *StdioConn) Write(p []byte) (int, error) { return s.stdout.Write(p) } // Close closes both streams. func (s *StdioConn) Close() error { err1 := s.stdin.Close() err2 := s.stdout.Close() if err1 != nil { return err1 } return err2 } // LocalAddr returns a dummy stdio address. func (s *StdioConn) LocalAddr() net.Addr { return Addr{} } // RemoteAddr returns a dummy stdio address. func (s *StdioConn) RemoteAddr() net.Addr { return Addr{} } // SetDeadline sets the read/write deadline.
func (s *StdioConn) SetDeadline(t time.Time) error { err1 := s.stdin.SetReadDeadline(t) err2 := s.stdout.SetWriteDeadline(t) if err1 != nil { return err1 } return err2 } // SetReadDeadline sets the read deadline. func (s *StdioConn) SetReadDeadline(t time.Time) error { return s.stdin.SetReadDeadline(t) } // SetWriteDeadline sets the write deadline. func (s *StdioConn) SetWriteDeadline(t time.Time) error { return s.stdout.SetWriteDeadline(t) } rclone-1.53.3/cmd/serve/serve.go000066400000000000000000000025361375552240400164420ustar00rootroot00000000000000package serve import ( "errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/serve/dlna" "github.com/rclone/rclone/cmd/serve/ftp" "github.com/rclone/rclone/cmd/serve/http" "github.com/rclone/rclone/cmd/serve/restic" "github.com/rclone/rclone/cmd/serve/sftp" "github.com/rclone/rclone/cmd/serve/webdav" "github.com/spf13/cobra" ) func init() { Command.AddCommand(http.Command) if webdav.Command != nil { Command.AddCommand(webdav.Command) } if restic.Command != nil { Command.AddCommand(restic.Command) } if dlna.Command != nil { Command.AddCommand(dlna.Command) } if ftp.Command != nil { Command.AddCommand(ftp.Command) } if sftp.Command != nil { Command.AddCommand(sftp.Command) } cmd.Root.AddCommand(Command) } // Command definition for cobra var Command = &cobra.Command{ Use: "serve <protocol> [opts] <remote>", Short: `Serve a remote over a protocol.`, Long: `rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg rclone serve http remote: Each subcommand has its own options which you can see in their help. `, RunE: func(command *cobra.Command, args []string) error { if len(args) == 0 { return errors.New("serve requires a protocol, eg 'rclone serve http remote:'") } return errors.New("unknown protocol") }, } rclone-1.53.3/cmd/serve/servetest/000077500000000000000000000000001375552240400170075ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/servetest/proxy_code.go000066400000000000000000000010511375552240400215060ustar00rootroot00000000000000// +build ignore // A simple auth proxy for testing purposes package main import ( "encoding/json" "log" "os" ) func main() { if len(os.Args) < 2 { log.Fatalf("Syntax: %s <root>", os.Args[0]) } root := os.Args[1] // Read the input var in map[string]string err := json.NewDecoder(os.Stdin).Decode(&in) if err != nil { log.Fatal(err) } // Write the output var out = map[string]string{ "type": "local", "_root": root, "_obscure": "pass", } err = json.NewEncoder(os.Stdout).Encode(&out) if err != nil { log.Fatal(err) } } rclone-1.53.3/cmd/serve/servetest/servetest.go000066400000000000000000000055511375552240400213700ustar00rootroot00000000000000// Package servetest provides infrastructure for running loopback // tests of "rclone serve backend:" against the backend integration // tests. package servetest import ( "context" "fmt" "os" "os/exec" "path/filepath" "strings" "testing" "github.com/rclone/rclone/cmd/serve/proxy/proxyflags" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fstest" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // StartFn describes the callback which should start the server with // the Fs passed in. // It should return a config for the backend used to connect to the // server and a clean up function type StartFn func(f fs.Fs) (configmap.Simple, func()) // run runs the server then runs the unit tests for the remote against it.
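// When useProxy is set, the server is started with --auth-proxy pointing at
// servetest/proxy_code.go (run via "go run"), so the backend config is produced
// by the proxy program on the fly instead of being taken from the Fs passed in.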
func run(t *testing.T, name string, start StartFn, useProxy bool) { fremote, _, clean, err := fstest.RandomRemote() assert.NoError(t, err) defer clean() err = fremote.Mkdir(context.Background(), "") assert.NoError(t, err) f := fremote if useProxy { // If using a proxy don't pass in the backend f = nil // the backend config will be made by the proxy prog, err := filepath.Abs("../servetest/proxy_code.go") require.NoError(t, err) cmd := "go run " + prog + " " + fremote.Root() // FIXME this is untidy setting a global variable! proxyflags.Opt.AuthProxy = cmd defer func() { proxyflags.Opt.AuthProxy = "" }() } config, cleanup := start(f) defer cleanup() // Change directory to run the tests cwd, err := os.Getwd() require.NoError(t, err) err = os.Chdir("../../../backend/" + name) require.NoError(t, err, "failed to cd to "+name+" backend") defer func() { // Change back to the old directory require.NoError(t, os.Chdir(cwd)) }() // Run the backend tests with an on the fly remote args := []string{"test"} if testing.Verbose() { args = append(args, "-v") } if *fstest.Verbose { args = append(args, "-verbose") } remoteName := name + "test:" args = append(args, "-remote", remoteName) args = append(args, "-list-retries", fmt.Sprint(*fstest.ListRetries)) cmd := exec.Command("go", args...) // Configure the backend with environment variables cmd.Env = os.Environ() prefix := "RCLONE_CONFIG_" + strings.ToUpper(remoteName[:len(remoteName)-1]) + "_" for k, v := range config { cmd.Env = append(cmd.Env, prefix+strings.ToUpper(k)+"="+v) } // Run the test out, err := cmd.CombinedOutput() if len(out) != 0 { t.Logf("\n----------\n%s----------\n", string(out)) } assert.NoError(t, err, "Running "+name+" integration tests") } // Run runs the server then runs the unit tests for the remote against // it. 
func Run(t *testing.T, name string, start StartFn) { fstest.Initialise() t.Run("Normal", func(t *testing.T) { run(t, name, start, false) }) t.Run("AuthProxy", func(t *testing.T) { run(t, name, start, true) }) } rclone-1.53.3/cmd/serve/sftp/000077500000000000000000000000001375552240400157375ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/sftp/connection.go000066400000000000000000000153501375552240400204310ustar00rootroot00000000000000// +build !plan9 package sftp import ( "context" "fmt" "io" "net" "regexp" "strings" "github.com/pkg/errors" "github.com/pkg/sftp" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/vfs" "golang.org/x/crypto/ssh" ) func describeConn(c interface { RemoteAddr() net.Addr LocalAddr() net.Addr }) string { return fmt.Sprintf("serve sftp %s->%s", c.RemoteAddr(), c.LocalAddr()) } // Return the exit status of the command type exitStatus struct { RC uint32 } // The incoming exec command type execCommand struct { Command string } var shellUnEscapeRegex = regexp.MustCompile(`\\(.)`) // Unescape a string that was escaped by rclone func shellUnEscape(str string) string { str = strings.Replace(str, "'\n'", "\n", -1) str = shellUnEscapeRegex.ReplaceAllString(str, `$1`) return str } // Info about the current connection type conn struct { vfs *vfs.VFS handlers sftp.Handlers what string } // execCommand implements an extremely limited number of commands to // interoperate with the rclone sftp backend func (c *conn) execCommand(ctx context.Context, out io.Writer, command string) (err error) { binary, args := command, "" space := strings.Index(command, " ") if space >= 0 { binary = command[:space] args = strings.TrimLeft(command[space+1:], " ") } args = shellUnEscape(args) fs.Debugf(c.what, "exec command: binary = %q, args = %q", binary, args) switch binary { case "df": about := c.vfs.Fs().Features().About if about == nil { return errors.New("df not supported") } usage, err := about(ctx) if err != nil { return errors.Wrap(err, "About failed") } total, used, free := int64(-1), int64(-1), int64(-1) if usage.Total != nil { total = *usage.Total / 1024 } if usage.Used != nil { used = *usage.Used / 1024 } if usage.Free != nil { free = *usage.Free / 1024 } perc := int64(0) if total > 0 && used >= 0 { perc = (100 * used) / total } _, err = fmt.Fprintf(out, ` Filesystem 1K-blocks Used Available Use%% Mounted on /dev/root %d %d %d %d%% / `, total, used, free, perc) if err != nil { return errors.Wrap(err, "send output failed") } case "md5sum", "sha1sum": ht := hash.MD5 if binary == "sha1sum" { ht = hash.SHA1 } var hashSum string if args == "" { // empty hash for no input if ht == hash.MD5 { hashSum = "d41d8cd98f00b204e9800998ecf8427e" } else { hashSum = "da39a3ee5e6b4b0d3255bfef95601890afd80709" } args = "-" } else { node, err := c.vfs.Stat(args) if err != nil { return errors.Wrapf(err, "hash failed finding file %q", args) } if node.IsDir() { return errors.New("can't hash directory") } o, ok := node.DirEntry().(fs.ObjectInfo) if !ok { return errors.New("unexpected non file") } hashSum, err = o.Hash(ctx, ht) if err != nil { return errors.Wrap(err, "hash failed") } } _, err = fmt.Fprintf(out, "%s %s\n", hashSum, args) if err != nil { return errors.Wrap(err, "send output failed") } case "echo": // special cases for rclone command detection switch args { case "'abc' | md5sum": if c.vfs.Fs().Hashes().Contains(hash.MD5) { _, err = fmt.Fprintf(out, "0bee89b07a248e27c83fc3d5951213c1 -\n") if err != nil { return errors.Wrap(err, "send output failed") } } 
else { return errors.New("md5 hash not supported") } case "'abc' | sha1sum": if c.vfs.Fs().Hashes().Contains(hash.SHA1) { _, err = fmt.Fprintf(out, "03cfd743661f07975fa2f1220c5194cbaff48451 -\n") if err != nil { return errors.Wrap(err, "send output failed") } } else { return errors.New("sha1 hash not supported") } default: _, err = fmt.Fprintf(out, "%s\n", args) if err != nil { return errors.Wrap(err, "send output failed") } } default: return errors.Errorf("%q not implemented\n", command) } return nil } // handle a new incoming channel request func (c *conn) handleChannel(newChannel ssh.NewChannel) { fs.Debugf(c.what, "Incoming channel: %s\n", newChannel.ChannelType()) if newChannel.ChannelType() != "session" { err := newChannel.Reject(ssh.UnknownChannelType, "unknown channel type") fs.Debugf(c.what, "Unknown channel type: %s\n", newChannel.ChannelType()) if err != nil { fs.Errorf(c.what, "Failed to reject unknown channel: %v", err) } return } channel, requests, err := newChannel.Accept() if err != nil { fs.Errorf(c.what, "could not accept channel: %v", err) return } defer func() { err := channel.Close() if err != nil && err != io.EOF { fs.Debugf(c.what, "Failed to close channel: %v", err) } }() fs.Debugf(c.what, "Channel accepted\n") isSFTP := make(chan bool, 1) var command execCommand // Handle out-of-band requests go func(in <-chan *ssh.Request) { for req := range in { fs.Debugf(c.what, "Request: %v\n", req.Type) ok := false var subSystemIsSFTP bool var reply []byte switch req.Type { case "subsystem": fs.Debugf(c.what, "Subsystem: %s\n", req.Payload[4:]) if string(req.Payload[4:]) == "sftp" { ok = true subSystemIsSFTP = true } case "exec": err := ssh.Unmarshal(req.Payload, &command) if err != nil { fs.Errorf(c.what, "ignoring bad exec command: %v", err) } else { ok = true subSystemIsSFTP = false } } fs.Debugf(c.what, " - accepted: %v\n", ok) err = req.Reply(ok, reply) if err != nil { fs.Errorf(c.what, "Failed to Reply to request: %v", err) return } if ok { // Wake up main routine after we have responded isSFTP <- subSystemIsSFTP } } }(requests) // Wait for either subsystem "sftp" or "exec" request if <-isSFTP { fs.Debugf(c.what, "Starting SFTP server") server := sftp.NewRequestServer(channel, c.handlers) defer func() { err := server.Close() if err != nil && err != io.EOF { fs.Debugf(c.what, "Failed to close server: %v", err) } }() err = server.Serve() if err == io.EOF || err == nil { fs.Debugf(c.what, "exited session") } else { fs.Errorf(c.what, "completed with error: %v", err) } } else { var rc = uint32(0) err := c.execCommand(context.TODO(), channel, command.Command) if err != nil { rc = 1 _, errPrint := fmt.Fprintf(channel.Stderr(), "%v\n", err) if errPrint != nil { fs.Errorf(c.what, "Failed to write to stderr: %v", errPrint) } fs.Debugf(c.what, "command %q failed with error: %v", command.Command, err) } _, err = channel.SendRequest("exit-status", false, ssh.Marshal(exitStatus{RC: rc})) if err != nil { fs.Errorf(c.what, "Failed to send exit status: %v", err) } } } // Service the incoming Channel channel in go routine func (c *conn) handleChannels(chans <-chan ssh.NewChannel) { for newChannel := range chans { go c.handleChannel(newChannel) } } rclone-1.53.3/cmd/serve/sftp/connection_test.go000066400000000000000000000007671375552240400214760ustar00rootroot00000000000000// +build !plan9 package sftp import ( "fmt" "testing" "github.com/stretchr/testify/assert" ) func TestShellEscape(t *testing.T) { for i, test := range []struct { unescaped, escaped string }{ {"", ""}, 
{"/this/is/harmless", "/this/is/harmless"}, {"$(rm -rf /)", "\\$\\(rm\\ -rf\\ /\\)"}, {"/test/\n", "/test/'\n'"}, {":\"'", ":\\\"\\'"}, } { got := shellUnEscape(test.escaped) assert.Equal(t, test.unescaped, got, fmt.Sprintf("Test %d unescaped = %q", i, test.unescaped)) } } rclone-1.53.3/cmd/serve/sftp/handler.go000066400000000000000000000055031375552240400177060ustar00rootroot00000000000000// +build !plan9 package sftp import ( "io" "os" "syscall" "time" "github.com/pkg/sftp" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/vfs" ) // vfsHandler converts the VFS to be served by SFTP type vfsHandler struct { *vfs.VFS } // vfsHandler returns a Handlers object with the test handlers. func newVFSHandler(vfs *vfs.VFS) sftp.Handlers { v := vfsHandler{VFS: vfs} return sftp.Handlers{ FileGet: v, FilePut: v, FileCmd: v, FileList: v, } } func (v vfsHandler) Fileread(r *sftp.Request) (io.ReaderAt, error) { file, err := v.OpenFile(r.Filepath, os.O_RDONLY, 0777) if err != nil { return nil, err } return file, nil } func (v vfsHandler) Filewrite(r *sftp.Request) (io.WriterAt, error) { file, err := v.OpenFile(r.Filepath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0777) if err != nil { return nil, err } return file, nil } func (v vfsHandler) Filecmd(r *sftp.Request) error { switch r.Method { case "Setstat": attr := r.Attributes() if attr.Mtime != 0 { modTime := time.Unix(int64(attr.Mtime), 0) err := v.Chtimes(r.Filepath, modTime, modTime) if err != nil { return err } } return nil case "Rename": err := v.Rename(r.Filepath, r.Target) if err != nil { return err } case "Rmdir", "Remove": err := v.Remove(r.Filepath) if err != nil { return err } case "Mkdir": err := v.Mkdir(r.Filepath, 0777) if err != nil { return err } case "Symlink": // FIXME // _, err := v.fetch(r.Filepath) // if err != nil { // return err // } // link := newMemFile(r.Target, false) // link.symlink = r.Filepath // v.files[r.Target] = link return sftp.ErrSshFxOpUnsupported } return nil } type listerat []os.FileInfo // Modeled after strings.Reader's ReadAt() implementation func (f listerat) ListAt(ls []os.FileInfo, offset int64) (int, error) { var n int if offset >= int64(len(f)) { return 0, io.EOF } n = copy(ls, f[offset:]) if n < len(ls) { return n, io.EOF } return n, nil } func (v vfsHandler) Filelist(r *sftp.Request) (l sftp.ListerAt, err error) { var node vfs.Node var handle vfs.Handle switch r.Method { case "List": node, err = v.Stat(r.Filepath) if err != nil { return nil, err } if !node.IsDir() { return nil, syscall.ENOTDIR } handle, err = node.Open(os.O_RDONLY) if err != nil { return nil, err } defer fs.CheckClose(handle, &err) fis, err := handle.Readdir(-1) if err != nil { return nil, err } return listerat(fis), nil case "Stat": node, err = v.Stat(r.Filepath) if err != nil { return nil, err } return listerat([]os.FileInfo{node}), nil case "Readlink": // FIXME // if file.symlink != "" { // file, err = v.fetch(file.symlink) // if err != nil { // return nil, err // } // } // return listerat([]os.FileInfo{file}), nil } return nil, sftp.ErrSshFxOpUnsupported } rclone-1.53.3/cmd/serve/sftp/server.go000066400000000000000000000235771375552240400176120ustar00rootroot00000000000000// +build !plan9 package sftp import ( "bytes" "crypto/rand" "crypto/rsa" "crypto/subtle" "crypto/x509" "encoding/base64" "encoding/pem" "fmt" "io/ioutil" "net" "os" "path/filepath" "strings" "github.com/pkg/errors" "github.com/rclone/rclone/cmd/serve/proxy" "github.com/rclone/rclone/cmd/serve/proxy/proxyflags" "github.com/rclone/rclone/fs" 
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/lib/env" "github.com/rclone/rclone/vfs" "github.com/rclone/rclone/vfs/vfsflags" "golang.org/x/crypto/ssh" ) // server contains everything to run the server type server struct { f fs.Fs opt Options vfs *vfs.VFS config *ssh.ServerConfig listener net.Listener waitChan chan struct{} // for waiting on the listener to close proxy *proxy.Proxy } func newServer(f fs.Fs, opt *Options) *server { s := &server{ f: f, opt: *opt, waitChan: make(chan struct{}), } if proxyflags.Opt.AuthProxy != "" { s.proxy = proxy.New(&proxyflags.Opt) } else { s.vfs = vfs.New(f, &vfsflags.Opt) } return s } // getVFS gets the vfs from s or the proxy func (s *server) getVFS(what string, sshConn *ssh.ServerConn) (VFS *vfs.VFS) { if s.proxy == nil { return s.vfs } if sshConn.Permissions == nil && sshConn.Permissions.Extensions == nil { fs.Infof(what, "SSH Permissions Extensions not found") return nil } key := sshConn.Permissions.Extensions["_vfsKey"] if key == "" { fs.Infof(what, "VFS key not found") return nil } VFS = s.proxy.Get(key) if VFS == nil { fs.Infof(what, "failed to read VFS from cache") return nil } return VFS } func (s *server) acceptConnections() { for { nConn, err := s.listener.Accept() if err != nil { if strings.Contains(err.Error(), "use of closed network connection") { return } fs.Errorf(nil, "Failed to accept incoming connection: %v", err) continue } what := describeConn(nConn) // Before use, a handshake must be performed on the incoming net.Conn. sshConn, chans, reqs, err := ssh.NewServerConn(nConn, s.config) if err != nil { fs.Errorf(what, "SSH login failed: %v", err) continue } fs.Infof(what, "SSH login from %s using %s", sshConn.User(), sshConn.ClientVersion()) // Discard all global out-of-band Requests go ssh.DiscardRequests(reqs) c := &conn{ what: what, vfs: s.getVFS(what, sshConn), } if c.vfs == nil { fs.Infof(what, "Closing unauthenticated connection (couldn't find VFS)") _ = nConn.Close() continue } c.handlers = newVFSHandler(c.vfs) // Accept all channels go c.handleChannels(chans) } } // Based on example server code from golang.org/x/crypto/ssh and server_standalone func (s *server) serve() (err error) { var authorizedKeysMap map[string]struct{} // ensure the user isn't trying to use conflicting flags if proxyflags.Opt.AuthProxy != "" && s.opt.AuthorizedKeys != "" && s.opt.AuthorizedKeys != DefaultOpt.AuthorizedKeys { return errors.New("--auth-proxy and --authorized-keys cannot be used at the same time") } // Load the authorized keys if s.opt.AuthorizedKeys != "" && proxyflags.Opt.AuthProxy == "" { authKeysFile := env.ShellExpand(s.opt.AuthorizedKeys) authorizedKeysMap, err = loadAuthorizedKeys(authKeysFile) // If user set the flag away from the default then report an error if err != nil && s.opt.AuthorizedKeys != DefaultOpt.AuthorizedKeys { return err } fs.Logf(nil, "Loaded %d authorized keys from %q", len(authorizedKeysMap), authKeysFile) } if !s.opt.NoAuth && len(authorizedKeysMap) == 0 && s.opt.User == "" && s.opt.Pass == "" && s.proxy == nil { return errors.New("no authorization found, use --user/--pass or --authorized-keys or --no-auth or --auth-proxy") } // An SSH server is represented by a ServerConfig, which holds // certificate details and handles authentication of ServerConns. 
s.config = &ssh.ServerConfig{ ServerVersion: "SSH-2.0-" + fs.Config.UserAgent, PasswordCallback: func(c ssh.ConnMetadata, pass []byte) (*ssh.Permissions, error) { fs.Debugf(describeConn(c), "Password login attempt for %s", c.User()) if s.proxy != nil { // query the proxy for the config _, vfsKey, err := s.proxy.Call(c.User(), string(pass), false) if err != nil { return nil, err } // just return the Key so we can get it back from the cache return &ssh.Permissions{ Extensions: map[string]string{ "_vfsKey": vfsKey, }, }, nil } else if s.opt.User != "" && s.opt.Pass != "" { userOK := subtle.ConstantTimeCompare([]byte(c.User()), []byte(s.opt.User)) passOK := subtle.ConstantTimeCompare(pass, []byte(s.opt.Pass)) if (userOK & passOK) == 1 { return nil, nil } } return nil, fmt.Errorf("password rejected for %q", c.User()) }, PublicKeyCallback: func(c ssh.ConnMetadata, pubKey ssh.PublicKey) (*ssh.Permissions, error) { fs.Debugf(describeConn(c), "Public key login attempt for %s", c.User()) if s.proxy != nil { //query the proxy for the config _, vfsKey, err := s.proxy.Call( c.User(), base64.StdEncoding.EncodeToString(pubKey.Marshal()), true, ) if err != nil { return nil, err } // just return the Key so we can get it back from the cache return &ssh.Permissions{ Extensions: map[string]string{ "_vfsKey": vfsKey, }, }, nil } if _, ok := authorizedKeysMap[string(pubKey.Marshal())]; ok { return &ssh.Permissions{ // Record the public key used for authentication. Extensions: map[string]string{ "pubkey-fp": ssh.FingerprintSHA256(pubKey), }, }, nil } return nil, fmt.Errorf("unknown public key for %q", c.User()) }, AuthLogCallback: func(conn ssh.ConnMetadata, method string, err error) { status := "OK" if err != nil { status = err.Error() } fs.Debugf(describeConn(conn), "ssh auth %q from %q: %s", method, conn.ClientVersion(), status) }, NoClientAuth: s.opt.NoAuth, } // Load the private key, from the cache if not explicitly configured keyPaths := s.opt.HostKeys cachePath := filepath.Join(config.CacheDir, "serve-sftp") if len(keyPaths) == 0 { keyPaths = []string{filepath.Join(cachePath, "id_rsa")} } for _, keyPath := range keyPaths { private, err := loadPrivateKey(keyPath) if err != nil && len(s.opt.HostKeys) == 0 { fs.Debugf(nil, "Failed to load %q: %v", keyPath, err) // If loading a cached key failed, make the keys and retry err = os.MkdirAll(cachePath, 0700) if err != nil { return errors.Wrap(err, "failed to create cache path") } const bits = 2048 fs.Logf(nil, "Generating %d bit key pair at %q", bits, keyPath) err = makeSSHKeyPair(bits, keyPath+".pub", keyPath) if err != nil { return errors.Wrap(err, "failed to create SSH key pair") } // reload the new keys private, err = loadPrivateKey(keyPath) } if err != nil { return err } fs.Debugf(nil, "Loaded private key from %q", keyPath) s.config.AddHostKey(private) } // Once a ServerConfig has been configured, connections can be // accepted. s.listener, err = net.Listen("tcp", s.opt.ListenAddr) if err != nil { return errors.Wrap(err, "failed to listen for connection") } fs.Logf(nil, "SFTP server listening on %v\n", s.listener.Addr()) go s.acceptConnections() return nil } // Addr returns the address the server is listening on func (s *server) Addr() string { return s.listener.Addr().String() } // Serve runs the sftp server in the background. // // Use s.Close() and s.Wait() to shutdown server func (s *server) Serve() error { err := s.serve() if err != nil { return err } return nil } // Wait blocks while the listener is open. 
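// It returns once Close has been called (Close always closes waitChan, even if
// closing the listener fails).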
func (s *server) Wait() { <-s.waitChan } // Close shuts the running server down func (s *server) Close() { err := s.listener.Close() if err != nil { fs.Errorf(nil, "Error on closing SFTP server: %v", err) return } close(s.waitChan) } func loadPrivateKey(keyPath string) (ssh.Signer, error) { privateBytes, err := ioutil.ReadFile(keyPath) if err != nil { return nil, errors.Wrap(err, "failed to load private key") } private, err := ssh.ParsePrivateKey(privateBytes) if err != nil { return nil, errors.Wrap(err, "failed to parse private key") } return private, nil } // Public key authentication is done by comparing // the public key of a received connection // with the entries in the authorized_keys file. func loadAuthorizedKeys(authorizedKeysPath string) (authorizedKeysMap map[string]struct{}, err error) { authorizedKeysBytes, err := ioutil.ReadFile(authorizedKeysPath) if err != nil { return nil, errors.Wrap(err, "failed to load authorized keys") } authorizedKeysMap = make(map[string]struct{}) for len(authorizedKeysBytes) > 0 { pubKey, _, _, rest, err := ssh.ParseAuthorizedKey(authorizedKeysBytes) if err != nil { return nil, errors.Wrap(err, "failed to parse authorized keys") } authorizedKeysMap[string(pubKey.Marshal())] = struct{}{} authorizedKeysBytes = bytes.TrimSpace(rest) } return authorizedKeysMap, nil } // makeSSHKeyPair make a pair of public and private keys for SSH access. // Public key is encoded in the format for inclusion in an OpenSSH authorized_keys file. // Private Key generated is PEM encoded // // Originally from: https://stackoverflow.com/a/34347463/164234 func makeSSHKeyPair(bits int, pubKeyPath, privateKeyPath string) (err error) { privateKey, err := rsa.GenerateKey(rand.Reader, bits) if err != nil { return err } // generate and write private key as PEM privateKeyFile, err := os.OpenFile(privateKeyPath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600) if err != nil { return err } defer fs.CheckClose(privateKeyFile, &err) privateKeyPEM := &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(privateKey)} if err := pem.Encode(privateKeyFile, privateKeyPEM); err != nil { return err } // generate and write public key pub, err := ssh.NewPublicKey(&privateKey.PublicKey) if err != nil { return err } return ioutil.WriteFile(pubKeyPath, ssh.MarshalAuthorizedKey(pub), 0644) } rclone-1.53.3/cmd/serve/sftp/sftp.go000066400000000000000000000074031375552240400172460ustar00rootroot00000000000000// Package sftp implements an SFTP server to serve an rclone VFS // +build !plan9 package sftp import ( "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/serve/proxy" "github.com/rclone/rclone/cmd/serve/proxy/proxyflags" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/vfs" "github.com/rclone/rclone/vfs/vfsflags" "github.com/spf13/cobra" "github.com/spf13/pflag" ) // Options contains options for the http Server type Options struct { ListenAddr string // Port to listen on HostKeys []string // Paths to private host keys AuthorizedKeys string // Path to authorized keys file User string // single username Pass string // password for user NoAuth bool // allow no authentication on connections } // DefaultOpt is the default values used for Options var DefaultOpt = Options{ ListenAddr: "localhost:2022", AuthorizedKeys: "~/.ssh/authorized_keys", } // Opt is options set by command line flags var Opt = DefaultOpt // AddFlags adds flags for the sftp func AddFlags(flagSet *pflag.FlagSet, Opt *Options) { 
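// Registering the options with the rc package first makes them available // over the remote control API as well as via the command line flags below.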
rc.AddOption("sftp", &Opt) flags.StringVarP(flagSet, &Opt.ListenAddr, "addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to.") flags.StringArrayVarP(flagSet, &Opt.HostKeys, "key", "", Opt.HostKeys, "SSH private host key file (Can be multi-valued, leave blank to auto generate)") flags.StringVarP(flagSet, &Opt.AuthorizedKeys, "authorized-keys", "", Opt.AuthorizedKeys, "Authorized keys file") flags.StringVarP(flagSet, &Opt.User, "user", "", Opt.User, "User name for authentication.") flags.StringVarP(flagSet, &Opt.Pass, "pass", "", Opt.Pass, "Password for authentication.") flags.BoolVarP(flagSet, &Opt.NoAuth, "no-auth", "", Opt.NoAuth, "Allow connections with no authentication if set.") } func init() { vfsflags.AddFlags(Command.Flags()) proxyflags.AddFlags(Command.Flags()) AddFlags(Command.Flags(), &Opt) } // Command definition for cobra var Command = &cobra.Command{ Use: "sftp remote:path", Short: `Serve the remote over SFTP.`, Long: `rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it. You can use the filter flags (eg --include, --exclude) to control what is served. The server will log errors. Use -v to see access logs. --bwlimit will be respected for file transfers. Use --stats to control the stats printing. You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in. Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that is can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend. If you don't supply a --key then rclone will generate one and cache it for later use. By default the server binds to localhost:2022 - if you want it to be reachable externally then supply "--addr :2022" for example. Note that the default of "--vfs-cache-mode off" is fine for the rclone sftp backend, but it may not be with other SFTP clients. ` + vfs.Help + proxy.Help, Run: func(command *cobra.Command, args []string) { var f fs.Fs if proxyflags.Opt.AuthProxy == "" { cmd.CheckArgs(1, 1, command, args) f = cmd.NewFsSrc(args) } else { cmd.CheckArgs(0, 0, command, args) } cmd.Run(false, true, command, func() error { s := newServer(f, &Opt) err := s.Serve() if err != nil { return err } s.Wait() return nil }) }, } rclone-1.53.3/cmd/serve/sftp/sftp_test.go000066400000000000000000000031341375552240400203020ustar00rootroot00000000000000// Serve sftp tests set up a server and run the integration tests // for the sftp remote against it. 
// // We skip tests on platforms with troublesome character mappings //+build !windows,!darwin,!plan9 package sftp import ( "strings" "testing" "github.com/pkg/sftp" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/cmd/serve/servetest" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/obscure" "github.com/stretchr/testify/require" ) const ( testBindAddress = "localhost:0" testUser = "testuser" testPass = "testpass" ) // check interfaces var ( _ sftp.FileReader = vfsHandler{} _ sftp.FileWriter = vfsHandler{} _ sftp.FileCmder = vfsHandler{} _ sftp.FileLister = vfsHandler{} ) // TestSftp runs the sftp server then runs the unit tests for the // sftp remote against it. func TestSftp(t *testing.T) { // Configure and start the server start := func(f fs.Fs) (configmap.Simple, func()) { opt := DefaultOpt opt.ListenAddr = testBindAddress opt.User = testUser opt.Pass = testPass w := newServer(f, &opt) require.NoError(t, w.serve()) // Read the host and port we started on addr := w.Addr() colon := strings.LastIndex(addr, ":") // Config for the backend we'll use to connect to the server config := configmap.Simple{ "type": "sftp", "user": testUser, "pass": obscure.MustObscure(testPass), "host": addr[:colon], "port": addr[colon+1:], } // return a stop function return config, func() { w.Close() w.Wait() } } servetest.Run(t, "sftp", start) } rclone-1.53.3/cmd/serve/sftp/sftp_unsupported.go000066400000000000000000000004031375552240400217070ustar00rootroot00000000000000// Build for sftp for unsupported platforms to stop go complaining // about "no buildable Go source files " // +build plan9 package sftp import "github.com/spf13/cobra" // Command definition is nil to show not implemented var Command *cobra.Command = nil rclone-1.53.3/cmd/serve/webdav/000077500000000000000000000000001375552240400162335ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/webdav/testdata/000077500000000000000000000000001375552240400200445ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/webdav/testdata/golden/000077500000000000000000000000001375552240400213145ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/webdav/testdata/golden/a.txt000066400000000000000000000000061375552240400222710ustar00rootroot00000000000000three rclone-1.53.3/cmd/serve/webdav/testdata/golden/dirnotfound.html000066400000000000000000000000241375552240400245310ustar00rootroot00000000000000Directory not found rclone-1.53.3/cmd/serve/webdav/testdata/golden/hidden.txt000066400000000000000000000000111375552240400233000ustar00rootroot00000000000000Not Foundrclone-1.53.3/cmd/serve/webdav/testdata/golden/hiddendir.html000066400000000000000000000000241375552240400241300ustar00rootroot00000000000000Directory not found rclone-1.53.3/cmd/serve/webdav/testdata/golden/index.html000066400000000000000000000004221375552240400233070ustar00rootroot00000000000000 Directory listing of /

Directory listing of /

three/
one%.txt
two.txt
rclone-1.53.3/cmd/serve/webdav/testdata/golden/indexhead.txt000066400000000000000000000000001375552240400237740ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/webdav/testdata/golden/indexpost.txt000066400000000000000000000000221375552240400240640ustar00rootroot00000000000000Method Not Allowedrclone-1.53.3/cmd/serve/webdav/testdata/golden/notfound.html000066400000000000000000000000111375552240400240260ustar00rootroot00000000000000Not Foundrclone-1.53.3/cmd/serve/webdav/testdata/golden/one.txt000066400000000000000000000000051375552240400226310ustar00rootroot00000000000000one% rclone-1.53.3/cmd/serve/webdav/testdata/golden/onehead.txt000066400000000000000000000000001375552240400234460ustar00rootroot00000000000000rclone-1.53.3/cmd/serve/webdav/testdata/golden/onepost.txt000066400000000000000000000000051375552240400235370ustar00rootroot00000000000000one% rclone-1.53.3/cmd/serve/webdav/testdata/golden/three.html000066400000000000000000000003561375552240400233150ustar00rootroot00000000000000 Directory listing of /three

Directory listing of /three

a.txt
b.txt
rclone-1.53.3/cmd/serve/webdav/testdata/golden/two-6.txt000066400000000000000000000000071375552240400230260ustar00rootroot000000000000000123456rclone-1.53.3/cmd/serve/webdav/testdata/golden/two.txt000066400000000000000000000000131375552240400226600ustar00rootroot000000000000000123456789 rclone-1.53.3/cmd/serve/webdav/testdata/golden/two2-5.txt000066400000000000000000000000041375552240400231040ustar00rootroot000000000000002345rclone-1.53.3/cmd/serve/webdav/testdata/golden/two3-.txt000066400000000000000000000000101375552240400230150ustar00rootroot000000000000003456789 rclone-1.53.3/cmd/serve/webdav/webdav.go000066400000000000000000000244761375552240400200470ustar00rootroot00000000000000// Package webdav implements a WebDAV server backed by rclone VFS package webdav import ( "context" "net/http" "os" "strings" "time" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd/serve/httplib" "github.com/rclone/rclone/cmd/serve/httplib/httpflags" "github.com/rclone/rclone/cmd/serve/httplib/serve" "github.com/rclone/rclone/cmd/serve/proxy" "github.com/rclone/rclone/cmd/serve/proxy/proxyflags" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/errors" "github.com/rclone/rclone/vfs" "github.com/rclone/rclone/vfs/vfsflags" "github.com/spf13/cobra" "golang.org/x/net/webdav" ) var ( hashName string hashType = hash.None disableGETDir = false ) func init() { flagSet := Command.Flags() httpflags.AddFlags(flagSet) vfsflags.AddFlags(flagSet) proxyflags.AddFlags(flagSet) flags.StringVarP(flagSet, &hashName, "etag-hash", "", "", "Which hash to use for the ETag, or auto or blank for off") flags.BoolVarP(flagSet, &disableGETDir, "disable-dir-list", "", false, "Disable HTML directory list on GET request for a directory") } // Command definition for cobra var Command = &cobra.Command{ Use: "webdav remote:path", Short: `Serve remote:path over webdav.`, Long: ` rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it. ### Webdav options #### --etag-hash This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object. If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1". Use "rclone hashsum" to see the full list. ` + httplib.Help + vfs.Help + proxy.Help, RunE: func(command *cobra.Command, args []string) error { var f fs.Fs if proxyflags.Opt.AuthProxy == "" { cmd.CheckArgs(1, 1, command, args) f = cmd.NewFsSrc(args) } else { cmd.CheckArgs(0, 0, command, args) } hashType = hash.None if hashName == "auto" { hashType = f.Hashes().GetOne() } else if hashName != "" { err := hashType.Set(hashName) if err != nil { return err } } if hashType != hash.None { fs.Debugf(f, "Using hash %v for ETag", hashType) } cmd.Run(false, false, command, func() error { s := newWebDAV(f, &httpflags.Opt) err := s.serve() if err != nil { return err } s.Wait() return nil }) return nil }, } // WebDAV is a webdav.FileSystem interface // // A FileSystem implements access to a collection of named files. The elements // in a file path are separated by slash ('/', U+002F) characters, regardless // of host operating system convention. // // Each method has the same semantics as the os package's function of the same // name. 
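// (In rclone these methods are ultimately answered by the VFS layer, so the // usual vfs flags such as --vfs-cache-mode apply to this server too.)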
// // Note that the os.Rename documentation says that "OS-specific restrictions // might apply". In particular, whether or not renaming a file or directory // overwriting another existing file or directory is an error is OS-dependent. type WebDAV struct { *httplib.Server f fs.Fs _vfs *vfs.VFS // don't use directly, use getVFS webdavhandler *webdav.Handler proxy *proxy.Proxy } // check interface var _ webdav.FileSystem = (*WebDAV)(nil) // Make a new WebDAV to serve the remote func newWebDAV(f fs.Fs, opt *httplib.Options) *WebDAV { w := &WebDAV{ f: f, } if proxyflags.Opt.AuthProxy != "" { w.proxy = proxy.New(&proxyflags.Opt) // override auth copyOpt := *opt copyOpt.Auth = w.auth opt = &copyOpt } else { w._vfs = vfs.New(f, &vfsflags.Opt) } w.Server = httplib.NewServer(http.HandlerFunc(w.handler), opt) webdavHandler := &webdav.Handler{ Prefix: w.Server.Opt.BaseURL, FileSystem: w, LockSystem: webdav.NewMemLS(), Logger: w.logRequest, // FIXME } w.webdavhandler = webdavHandler return w } // Gets the VFS in use for this request func (w *WebDAV) getVFS(ctx context.Context) (VFS *vfs.VFS, err error) { if w._vfs != nil { return w._vfs, nil } value := ctx.Value(httplib.ContextAuthKey) if value == nil { return nil, errors.New("no VFS found in context") } VFS, ok := value.(*vfs.VFS) if !ok { return nil, errors.Errorf("context value is not VFS: %#v", value) } return VFS, nil } // auth does proxy authorization func (w *WebDAV) auth(user, pass string) (value interface{}, err error) { VFS, _, err := w.proxy.Call(user, pass, false) if err != nil { return nil, err } return VFS, err } func (w *WebDAV) handler(rw http.ResponseWriter, r *http.Request) { urlPath, ok := w.Path(rw, r) if !ok { return } isDir := strings.HasSuffix(urlPath, "/") remote := strings.Trim(urlPath, "/") if !disableGETDir && (r.Method == "GET" || r.Method == "HEAD") && isDir { w.serveDir(rw, r, remote) return } w.webdavhandler.ServeHTTP(rw, r) } // serveDir serves a directory index at dirRemote // This is similar to serveDir in serve http. func (w *WebDAV) serveDir(rw http.ResponseWriter, r *http.Request, dirRemote string) { VFS, err := w.getVFS(r.Context()) if err != nil { http.Error(rw, "Root directory not found", http.StatusNotFound) fs.Errorf(nil, "Failed to serve directory: %v", err) return } // List the directory node, err := VFS.Stat(dirRemote) if err == vfs.ENOENT { http.Error(rw, "Directory not found", http.StatusNotFound) return } else if err != nil { serve.Error(dirRemote, rw, "Failed to list directory", err) return } if !node.IsDir() { http.Error(rw, "Not a directory", http.StatusNotFound) return } dir := node.(*vfs.Dir) dirEntries, err := dir.ReadDirAll() if err != nil { serve.Error(dirRemote, rw, "Failed to list directory", err) return } // Make the entries for display directory := serve.NewDirectory(dirRemote, w.HTMLTemplate) for _, node := range dirEntries { if vfsflags.Opt.NoModTime { directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), time.Time{}) } else { directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), node.ModTime().UTC()) } } sortParm := r.URL.Query().Get("sort") orderParm := r.URL.Query().Get("order") directory.ProcessQueryParams(sortParm, orderParm) directory.Serve(rw, r) } // serve runs the http server in the background.
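// It delegates the listening to the embedded httplib Server and logs the // URL the WebDAV server can be reached on once the listener is up.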
// // Use s.Close() and s.Wait() to shutdown server func (w *WebDAV) serve() error { err := w.Serve() if err != nil { return err } fs.Logf(w.f, "WebDav Server started on %s", w.URL()) return nil } // logRequest is called by the webdav module on every request func (w *WebDAV) logRequest(r *http.Request, err error) { fs.Infof(r.URL.Path, "%s from %s", r.Method, r.RemoteAddr) } // Mkdir creates a directory func (w *WebDAV) Mkdir(ctx context.Context, name string, perm os.FileMode) (err error) { // defer log.Trace(name, "perm=%v", perm)("err = %v", &err) VFS, err := w.getVFS(ctx) if err != nil { return err } dir, leaf, err := VFS.StatParent(name) if err != nil { return err } _, err = dir.Mkdir(leaf) return err } // OpenFile opens a file or a directory func (w *WebDAV) OpenFile(ctx context.Context, name string, flags int, perm os.FileMode) (file webdav.File, err error) { // defer log.Trace(name, "flags=%v, perm=%v", flags, perm)("err = %v", &err) VFS, err := w.getVFS(ctx) if err != nil { return nil, err } f, err := VFS.OpenFile(name, flags, perm) if err != nil { return nil, err } return Handle{f}, nil } // RemoveAll removes a file or a directory and its contents func (w *WebDAV) RemoveAll(ctx context.Context, name string) (err error) { // defer log.Trace(name, "")("err = %v", &err) VFS, err := w.getVFS(ctx) if err != nil { return err } node, err := VFS.Stat(name) if err != nil { return err } err = node.RemoveAll() if err != nil { return err } return nil } // Rename a file or a directory func (w *WebDAV) Rename(ctx context.Context, oldName, newName string) (err error) { // defer log.Trace(oldName, "newName=%q", newName)("err = %v", &err) VFS, err := w.getVFS(ctx) if err != nil { return err } return VFS.Rename(oldName, newName) } // Stat returns info about the file or directory func (w *WebDAV) Stat(ctx context.Context, name string) (fi os.FileInfo, err error) { // defer log.Trace(name, "")("fi=%+v, err = %v", &fi, &err) VFS, err := w.getVFS(ctx) if err != nil { return nil, err } fi, err = VFS.Stat(name) if err != nil { return nil, err } return FileInfo{fi}, nil } // Handle represents an open file type Handle struct { vfs.Handle } // Readdir reads directory entries from the handle func (h Handle) Readdir(count int) (fis []os.FileInfo, err error) { fis, err = h.Handle.Readdir(count) if err != nil { return nil, err } // Wrap each FileInfo for i := range fis { fis[i] = FileInfo{fis[i]} } return fis, nil } // Stat the handle func (h Handle) Stat() (fi os.FileInfo, err error) { fi, err = h.Handle.Stat() if err != nil { return nil, err } return FileInfo{fi}, nil } // FileInfo represents info about a file satisfying os.FileInfo and // also some additional interfaces for webdav for ETag and ContentType type FileInfo struct { os.FileInfo } // ETag returns an ETag for the FileInfo func (fi FileInfo) ETag(ctx context.Context) (etag string, err error) { // defer log.Trace(fi, "")("etag=%q, err=%v", &etag, &err) if hashType == hash.None { return "", webdav.ErrNotImplemented } node, ok := (fi.FileInfo).(vfs.Node) if !ok { fs.Errorf(fi, "Expecting vfs.Node, got %T", fi.FileInfo) return "", webdav.ErrNotImplemented } entry := node.DirEntry() o, ok := entry.(fs.Object) if !ok { return "", webdav.ErrNotImplemented } hash, err := o.Hash(ctx, hashType) if err != nil || hash == "" { return "", webdav.ErrNotImplemented } return `"` + hash + `"`, nil } // ContentType returns a content type for the FileInfo func (fi FileInfo) ContentType(ctx context.Context) (contentType string, err error) { // defer log.Trace(fi, 
"")("etag=%q, err=%v", &contentType, &err) node, ok := (fi.FileInfo).(vfs.Node) if !ok { fs.Errorf(fi, "Expecting vfs.Node, got %T", fi.FileInfo) return "application/octet-stream", nil } entry := node.DirEntry() switch x := entry.(type) { case fs.Object: return fs.MimeType(ctx, x), nil case fs.Directory: return "inode/directory", nil } fs.Errorf(fi, "Expecting fs.Object or fs.Directory, got %T", entry) return "application/octet-stream", nil } rclone-1.53.3/cmd/serve/webdav/webdav_test.go000066400000000000000000000135321375552240400210750ustar00rootroot00000000000000// Serve webdav tests set up a server and run the integration tests // for the webdav remote against it. // // We skip tests on platforms with troublesome character mappings //+build !windows,!darwin package webdav import ( "flag" "io/ioutil" "net/http" "os" "strings" "testing" "time" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/cmd/serve/httplib" "github.com/rclone/rclone/cmd/serve/servetest" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/hash" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "golang.org/x/net/webdav" ) const ( testBindAddress = "localhost:0" testUser = "user" testPass = "pass" testTemplate = "../http/testdata/golden/testindex.html" ) // check interfaces var ( _ os.FileInfo = FileInfo{nil} _ webdav.ETager = FileInfo{nil} _ webdav.ContentTyper = FileInfo{nil} ) // TestWebDav runs the webdav server then runs the unit tests for the // webdav remote against it. func TestWebDav(t *testing.T) { // Configure and start the server start := func(f fs.Fs) (configmap.Simple, func()) { opt := httplib.DefaultOpt opt.ListenAddr = testBindAddress opt.BasicUser = testUser opt.BasicPass = testPass opt.Template = testTemplate hashType = hash.MD5 // Start the server w := newWebDAV(f, &opt) assert.NoError(t, w.serve()) // Config for the backend we'll use to connect to the server config := configmap.Simple{ "type": "webdav", "vendor": "other", "url": w.Server.URL(), "user": testUser, "pass": obscure.MustObscure(testPass), } return config, func() { w.Close() w.Wait() } } servetest.Run(t, "webdav", start) } // Test serve http functionality in serve webdav // While similar to http serve, there are some inconsistencies // in the handling of some requests such as POST requests var ( updateGolden = flag.Bool("updategolden", false, "update golden files for regression test") ) func TestHTTPFunction(t *testing.T) { // exclude files called hidden.txt and directories called hidden require.NoError(t, filter.Active.AddRule("- hidden.txt")) require.NoError(t, filter.Active.AddRule("- hidden/**")) // Uses the same test files as http tests but with different golden. f, err := fs.NewFs("../http/testdata/files") assert.NoError(t, err) opt := httplib.DefaultOpt opt.ListenAddr = testBindAddress opt.Template = testTemplate // Start the server w := newWebDAV(f, &opt) assert.NoError(t, w.serve()) defer func() { w.Close() w.Wait() }() testURL := w.Server.URL() pause := time.Millisecond i := 0 for ; i < 10; i++ { resp, err := http.Head(testURL) if err == nil { _ = resp.Body.Close() break } // t.Logf("couldn't connect, sleeping for %v: %v", pause, err) time.Sleep(pause) pause *= 2 } if i >= 10 { t.Fatal("couldn't connect to server") } HelpTestGET(t, testURL) } // check body against the file, or re-write body if -updategolden is // set. 
func checkGolden(t *testing.T, fileName string, got []byte) { if *updateGolden { t.Logf("Updating golden file %q", fileName) err := ioutil.WriteFile(fileName, got, 0666) require.NoError(t, err) } else { want, err := ioutil.ReadFile(fileName) require.NoError(t, err, "problem") wants := strings.Split(string(want), "\n") gots := strings.Split(string(got), "\n") assert.Equal(t, wants, gots, fileName) } } func HelpTestGET(t *testing.T, testURL string) { for _, test := range []struct { URL string Status int Golden string Method string Range string }{ { URL: "", Status: http.StatusOK, Golden: "testdata/golden/index.html", }, { URL: "notfound", Status: http.StatusNotFound, Golden: "testdata/golden/notfound.html", }, { URL: "dirnotfound/", Status: http.StatusNotFound, Golden: "testdata/golden/dirnotfound.html", }, { URL: "hidden/", Status: http.StatusNotFound, Golden: "testdata/golden/hiddendir.html", }, { URL: "one%25.txt", Status: http.StatusOK, Golden: "testdata/golden/one.txt", }, { URL: "hidden.txt", Status: http.StatusNotFound, Golden: "testdata/golden/hidden.txt", }, { URL: "three/", Status: http.StatusOK, Golden: "testdata/golden/three.html", }, { URL: "three/a.txt", Status: http.StatusOK, Golden: "testdata/golden/a.txt", }, { URL: "", Method: "HEAD", Status: http.StatusOK, Golden: "testdata/golden/indexhead.txt", }, { URL: "one%25.txt", Method: "HEAD", Status: http.StatusOK, Golden: "testdata/golden/onehead.txt", }, { URL: "", Method: "POST", Status: http.StatusMethodNotAllowed, Golden: "testdata/golden/indexpost.txt", }, { URL: "one%25.txt", Method: "POST", Status: http.StatusOK, Golden: "testdata/golden/onepost.txt", }, { URL: "two.txt", Status: http.StatusOK, Golden: "testdata/golden/two.txt", }, { URL: "two.txt", Status: http.StatusPartialContent, Range: "bytes=2-5", Golden: "testdata/golden/two2-5.txt", }, { URL: "two.txt", Status: http.StatusPartialContent, Range: "bytes=0-6", Golden: "testdata/golden/two-6.txt", }, { URL: "two.txt", Status: http.StatusPartialContent, Range: "bytes=3-", Golden: "testdata/golden/two3-.txt", }, } { method := test.Method if method == "" { method = "GET" } req, err := http.NewRequest(method, testURL+test.URL, nil) require.NoError(t, err) if test.Range != "" { req.Header.Add("Range", test.Range) } resp, err := http.DefaultClient.Do(req) require.NoError(t, err) assert.Equal(t, test.Status, resp.StatusCode, test.Golden) body, err := ioutil.ReadAll(resp.Body) require.NoError(t, err) checkGolden(t, test.Golden, body) } } rclone-1.53.3/cmd/settier/000077500000000000000000000000001375552240400153165ustar00rootroot00000000000000rclone-1.53.3/cmd/settier/settier.go000066400000000000000000000031701375552240400173250ustar00rootroot00000000000000package settier import ( "context" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "settier tier remote:path", Short: `Changes storage class/tier of objects in remote.`, Long: ` rclone settier changes storage tier or class at remote if supported. A few cloud storage services provide different storage classes on objects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive, Google Cloud Storage - Regional Storage, Nearline, Coldline etc. Note that certain tier changes make objects not available to access immediately.
For example, tiering to archive in Azure Blob storage puts objects into a frozen state; the user can restore them by setting the tier back to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible. You can use it to tier a single object rclone settier Cool remote:path/file Or use rclone filters to set tier on only specific files rclone --include "*.txt" settier Hot remote:path/dir Or just provide a remote directory and all files in the directory will be tiered rclone settier tier remote:path/dir `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(2, 2, command, args) tier := args[0] input := args[1:] fsrc := cmd.NewFsSrc(input) cmd.Run(false, false, command, func() error { isSupported := fsrc.Features().SetTier if !isSupported { return errors.Errorf("Remote %s does not support settier", fsrc.Name()) } return operations.SetTier(context.Background(), fsrc, tier) }) }, } rclone-1.53.3/cmd/sha1sum/000077500000000000000000000000001375552240400152205ustar00rootroot00000000000000rclone-1.53.3/cmd/sha1sum/sha1sum.go000066400000000000000000000013261375552240400171320ustar00rootroot00000000000000package sha1sum import ( "context" "os" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) func init() { cmd.Root.AddCommand(commandDefinition) } var commandDefinition = &cobra.Command{ Use: "sha1sum remote:path", Short: `Produces an sha1sum file for all the objects in the path.`, Long: ` Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces. `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { return operations.Sha1sum(context.Background(), fsrc, os.Stdout) }) }, } rclone-1.53.3/cmd/siginfo_darwin.go000066400000000000000000000005471375552240400171760ustar00rootroot00000000000000//+build darwin package cmd import ( "log" "os" "os/signal" "syscall" "github.com/rclone/rclone/fs/accounting" ) // SigInfoHandler creates SigInfo handler func SigInfoHandler() { signals := make(chan os.Signal, 1) signal.Notify(signals, syscall.SIGINFO) go func() { for range signals { log.Printf("%v\n", accounting.GlobalStats()) } }() } rclone-1.53.3/cmd/siginfo_others.go000066400000000000000000000001431375552240400172060ustar00rootroot00000000000000//+build !darwin package cmd // SigInfoHandler creates SigInfo handler func SigInfoHandler() { } rclone-1.53.3/cmd/size/000077500000000000000000000000001375552240400146115ustar00rootroot00000000000000rclone-1.53.3/cmd/size/size.go000066400000000000000000000023321375552240400161120ustar00rootroot00000000000000package size import ( "context" "encoding/json" "fmt" "os" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/spf13/cobra" ) var jsonOutput bool func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &jsonOutput, "json", "", false, "format output as JSON") } var commandDefinition = &cobra.Command{ Use: "size remote:path", Short: `Prints the total size and number of objects in remote:path.`, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) cmd.Run(false, false, command, func() error { var err error var results struct { Count int64 `json:"count"` Bytes int64 `json:"bytes"` } results.Count, results.Bytes, err =
operations.Count(context.Background(), fsrc) if err != nil { return err } if jsonOutput { return json.NewEncoder(os.Stdout).Encode(results) } fmt.Printf("Total objects: %d\n", results.Count) fmt.Printf("Total size: %s (%d Bytes)\n", fs.SizeSuffix(results.Bytes).Unit("Bytes"), results.Bytes) return nil }) }, } rclone-1.53.3/cmd/sync/000077500000000000000000000000001375552240400146135ustar00rootroot00000000000000rclone-1.53.3/cmd/sync/sync.go000066400000000000000000000037441375552240400161260ustar00rootroot00000000000000package sync import ( "context" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/sync" "github.com/spf13/cobra" ) var ( createEmptySrcDirs = false ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after sync") } var commandDefinition = &cobra.Command{ Use: "sync source:path dest:path", Short: `Make source and dest identical, modifying destination only.`, Long: ` Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary. **Important**: Since this can cause data loss, test first with the ` + "`--dry-run` or the `--interactive`/`-i`" + ` flag. rclone sync -i SOURCE remote:DESTINATION Note that files in the destination won't be deleted if there were any errors at any point. It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the ` + "`" + `copy` + "`" + ` command above if unsure. If dest:path doesn't exist, it is created and the source:path contents go there. 
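Add the `--create-empty-src-dirs` flag if you also want empty directories from the source to be created on the destination.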
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(2, 2, command, args) fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args) cmd.Run(true, true, command, func() error { if srcFileName == "" { return sync.Sync(context.Background(), fdst, fsrc, createEmptySrcDirs) } return operations.CopyFile(context.Background(), fdst, fsrc, srcFileName, srcFileName) }) }, } rclone-1.53.3/cmd/touch/000077500000000000000000000000001375552240400147615ustar00rootroot00000000000000rclone-1.53.3/cmd/touch/touch.go000066400000000000000000000060211375552240400164310ustar00rootroot00000000000000package touch import ( "bytes" "context" "time" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/object" "github.com/spf13/cobra" ) var ( notCreateNewFile bool timeAsArgument string localTime bool ) const ( defaultLayout string = "060102" layoutDateWithTime = "2006-01-02T15:04:05" layoutDateWithTimeNano = "2006-01-02T15:04:05.999999999" ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, ¬CreateNewFile, "no-create", "C", false, "Do not create the file if it does not exist.") flags.StringVarP(cmdFlags, &timeAsArgument, "timestamp", "t", "", "Use specified time instead of the current time of day.") flags.BoolVarP(cmdFlags, &localTime, "localtime", "", false, "Use localtime for timestamp, not UTC.") } var commandDefinition = &cobra.Command{ Use: "touch remote:path", Short: `Create new file or change file modification time.`, Long: ` Set the modification time on object(s) as specified by remote:path to have the current time. If remote:path does not exist then a zero sized object will be created unless the --no-create flag is provided. If --timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of: - 'YYMMDD' - eg. 17.10.30 - 'YYYY-MM-DDTHH:MM:SS' - eg. 2006-01-02T15:04:05 - 'YYYY-MM-DDTHH:MM:SS.SSS' - eg. 2006-01-02T15:04:05.123456789 Note that --timestamp is in UTC if you want local time then add the --localtime flag. `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(1, 1, command, args) fsrc, srcFileName := cmd.NewFsDstFile(args) cmd.Run(true, false, command, func() error { return Touch(context.Background(), fsrc, srcFileName) }) }, } //Touch create new file or change file modification time. 
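// If the object does not already exist a zero length file is created with // the chosen modification time, unless --no-create was given.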
func Touch(ctx context.Context, fsrc fs.Fs, srcFileName string) (err error) { timeAtr := time.Now() if timeAsArgument != "" { layout := defaultLayout if len(timeAsArgument) == len(layoutDateWithTime) { layout = layoutDateWithTime } else if len(timeAsArgument) > len(layoutDateWithTime) { layout = layoutDateWithTimeNano } var timeAtrFromFlags time.Time if localTime { timeAtrFromFlags, err = time.ParseInLocation(layout, timeAsArgument, time.Local) } else { timeAtrFromFlags, err = time.Parse(layout, timeAsArgument) } if err != nil { return errors.Wrap(err, "failed to parse date/time argument") } timeAtr = timeAtrFromFlags } file, err := fsrc.NewObject(ctx, srcFileName) if err != nil { if !notCreateNewFile { var buffer []byte src := object.NewStaticObjectInfo(srcFileName, timeAtr, int64(len(buffer)), true, nil, fsrc) _, err = fsrc.Put(ctx, bytes.NewBuffer(buffer), src) if err != nil { return err } } return nil } err = file.SetModTime(ctx, timeAtr) if err != nil { return errors.Wrap(err, "touch: couldn't set mod time") } return nil } rclone-1.53.3/cmd/touch/touch_test.go000066400000000000000000000057641375552240400175050ustar00rootroot00000000000000package touch import ( "context" "testing" "time" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest" "github.com/stretchr/testify/require" ) var ( t1 = fstest.Time("2017-02-03T04:05:06.499999999Z") ) func checkFile(t *testing.T, r fs.Fs, path string, content string) { layout := defaultLayout if len(timeAsArgument) == len(layoutDateWithTime) { layout = layoutDateWithTime } timeAtrFromFlags, err := time.Parse(layout, timeAsArgument) require.NoError(t, err) file1 := fstest.NewItem(path, content, timeAtrFromFlags) fstest.CheckItems(t, r, file1) } // TestMain drives the tests func TestMain(m *testing.M) { fstest.TestMain(m) } func TestTouchOneFile(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() err := Touch(context.Background(), r.Fremote, "newFile") require.NoError(t, err) _, err = r.Fremote.NewObject(context.Background(), "newFile") require.NoError(t, err) } func TestTouchWithNoCreateFlag(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() notCreateNewFile = true err := Touch(context.Background(), r.Fremote, "newFile") require.NoError(t, err) _, err = r.Fremote.NewObject(context.Background(), "newFile") require.Error(t, err) notCreateNewFile = false } func TestTouchWithTimestamp(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() timeAsArgument = "060102" srcFileName := "oldFile" err := Touch(context.Background(), r.Fremote, srcFileName) require.NoError(t, err) checkFile(t, r.Fremote, srcFileName, "") } func TestTouchWithLognerTimestamp(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() timeAsArgument = "2006-01-02T15:04:05" srcFileName := "oldFile" err := Touch(context.Background(), r.Fremote, srcFileName) require.NoError(t, err) checkFile(t, r.Fremote, srcFileName, "") } func TestTouchUpdateTimestamp(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() srcFileName := "a" content := "aaa" file1 := r.WriteObject(context.Background(), srcFileName, content, t1) fstest.CheckItems(t, r.Fremote, file1) timeAsArgument = "121212" err := Touch(context.Background(), r.Fremote, "a") require.NoError(t, err) checkFile(t, r.Fremote, srcFileName, content) } func TestTouchUpdateTimestampWithCFlag(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() srcFileName := "a" content := "aaa" file1 := r.WriteObject(context.Background(), srcFileName, content, t1) fstest.CheckItems(t, 
r.Fremote, file1) notCreateNewFile = true timeAsArgument = "121212" err := Touch(context.Background(), r.Fremote, "a") require.NoError(t, err) checkFile(t, r.Fremote, srcFileName, content) notCreateNewFile = false } func TestTouchCreateMultipleDirAndFile(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() longPath := "a/b/c.txt" err := Touch(context.Background(), r.Fremote, longPath) require.NoError(t, err) file1 := fstest.NewItem("a/b/c.txt", "", t1) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{file1}, []string{"a", "a/b"}, fs.ModTimeNotSupported) } rclone-1.53.3/cmd/tree/000077500000000000000000000000001375552240400145765ustar00rootroot00000000000000rclone-1.53.3/cmd/tree/testfiles/000077500000000000000000000000001375552240400166005ustar00rootroot00000000000000rclone-1.53.3/cmd/tree/testfiles/file1000066400000000000000000000000001375552240400175110ustar00rootroot00000000000000rclone-1.53.3/cmd/tree/testfiles/file2000066400000000000000000000000001375552240400175120ustar00rootroot00000000000000rclone-1.53.3/cmd/tree/testfiles/file3000066400000000000000000000000001375552240400175130ustar00rootroot00000000000000rclone-1.53.3/cmd/tree/testfiles/subdir/000077500000000000000000000000001375552240400200705ustar00rootroot00000000000000rclone-1.53.3/cmd/tree/testfiles/subdir/file4000066400000000000000000000000001375552240400210040ustar00rootroot00000000000000rclone-1.53.3/cmd/tree/testfiles/subdir/file5000066400000000000000000000000001375552240400210050ustar00rootroot00000000000000rclone-1.53.3/cmd/tree/tree.go000066400000000000000000000172541375552240400160750ustar00rootroot00000000000000package tree import ( "context" "fmt" "io" "os" "path" "path/filepath" "strings" "time" "github.com/a8m/tree" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/dirtree" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/fs/walk" "github.com/spf13/cobra" ) var ( opts tree.Options outFileName string noReport bool sort string ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() // List flags.BoolVarP(cmdFlags, &opts.All, "all", "a", false, "All files are listed (list . 
files too).") flags.BoolVarP(cmdFlags, &opts.DirsOnly, "dirs-only", "d", false, "List directories only.") flags.BoolVarP(cmdFlags, &opts.FullPath, "full-path", "", false, "Print the full path prefix for each file.") //flags.BoolVarP(cmdFlags, &opts.IgnoreCase, "ignore-case", "", false, "Ignore case when pattern matching.") flags.BoolVarP(cmdFlags, &noReport, "noreport", "", false, "Turn off file/directory count at end of tree listing.") // flags.BoolVarP(cmdFlags, &opts.FollowLink, "follow", "l", false, "Follow symbolic links like directories.") flags.IntVarP(cmdFlags, &opts.DeepLevel, "level", "", 0, "Descend only level directories deep.") // flags.StringVarP(cmdFlags, &opts.Pattern, "pattern", "P", "", "List only those files that match the pattern given.") // flags.StringVarP(cmdFlags, &opts.IPattern, "exclude", "", "", "Do not list files that match the given pattern.") flags.StringVarP(cmdFlags, &outFileName, "output", "o", "", "Output to file instead of stdout.") // Files flags.BoolVarP(cmdFlags, &opts.ByteSize, "size", "s", false, "Print the size in bytes of each file.") flags.BoolVarP(cmdFlags, &opts.UnitSize, "human", "", false, "Print the size in a more human readable way.") flags.BoolVarP(cmdFlags, &opts.FileMode, "protections", "p", false, "Print the protections for each file.") // flags.BoolVarP(cmdFlags, &opts.ShowUid, "uid", "", false, "Displays file owner or UID number.") // flags.BoolVarP(cmdFlags, &opts.ShowGid, "gid", "", false, "Displays file group owner or GID number.") flags.BoolVarP(cmdFlags, &opts.Quotes, "quote", "Q", false, "Quote filenames with double quotes.") flags.BoolVarP(cmdFlags, &opts.LastMod, "modtime", "D", false, "Print the date of last modification.") // flags.BoolVarP(cmdFlags, &opts.Inodes, "inodes", "", false, "Print inode number of each file.") // flags.BoolVarP(cmdFlags, &opts.Device, "device", "", false, "Print device ID number to which each file belongs.") // Sort flags.BoolVarP(cmdFlags, &opts.NoSort, "unsorted", "U", false, "Leave files unsorted.") flags.BoolVarP(cmdFlags, &opts.VerSort, "version", "", false, "Sort files alphanumerically by version.") flags.BoolVarP(cmdFlags, &opts.ModSort, "sort-modtime", "t", false, "Sort files by last modification time.") flags.BoolVarP(cmdFlags, &opts.CTimeSort, "sort-ctime", "", false, "Sort files by last status change time.") flags.BoolVarP(cmdFlags, &opts.ReverSort, "sort-reverse", "r", false, "Reverse the order of the sort.") flags.BoolVarP(cmdFlags, &opts.DirSort, "dirsfirst", "", false, "List directories before files (-U disables).") flags.StringVarP(cmdFlags, &sort, "sort", "", "", "Select sort: name,version,size,mtime,ctime.") // Graphics flags.BoolVarP(cmdFlags, &opts.NoIndent, "noindent", "", false, "Don't print indentation lines.") flags.BoolVarP(cmdFlags, &opts.Colorize, "color", "C", false, "Turn colorization on always.") } var commandDefinition = &cobra.Command{ Use: "tree remote:path", Short: `List the contents of the remote in a tree like fashion.`, Long: ` rclone tree lists the contents of a remote in a similar way to the unix tree command. For example $ rclone tree remote:path / ├── file1 ├── file2 ├── file3 └── subdir ├── file4 └── file5 1 directories, 5 files You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list. The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options. 
`, RunE: func(command *cobra.Command, args []string) error { cmd.CheckArgs(1, 1, command, args) fsrc := cmd.NewFsSrc(args) outFile := os.Stdout if outFileName != "" { var err error outFile, err = os.Create(outFileName) if err != nil { return errors.Errorf("failed to create output file: %v", err) } } opts.VerSort = opts.VerSort || sort == "version" opts.ModSort = opts.ModSort || sort == "mtime" opts.CTimeSort = opts.CTimeSort || sort == "ctime" opts.NameSort = sort == "name" opts.SizeSort = sort == "size" if opts.DeepLevel == 0 { opts.DeepLevel = fs.Config.MaxDepth } cmd.Run(false, false, command, func() error { return Tree(fsrc, outFile, &opts) }) return nil }, } // Tree lists fsrc to outFile using the Options passed in func Tree(fsrc fs.Fs, outFile io.Writer, opts *tree.Options) error { dirs, err := walk.NewDirTree(context.Background(), fsrc, "", false, opts.DeepLevel) if err != nil { return err } opts.Fs = NewFs(dirs) opts.OutFile = outFile inf := tree.New("/") var nd, nf int if d, f := inf.Visit(opts); f != 0 { nd, nf = nd+d, nf+f } inf.Print(opts) // Print footer report if !noReport { footer := fmt.Sprintf("\n%d directories", nd) if !opts.DirsOnly { footer += fmt.Sprintf(", %d files", nf) } _, _ = fmt.Fprintln(outFile, footer) } return nil } // FileInfo maps an fs.DirEntry into an os.FileInfo type FileInfo struct { entry fs.DirEntry } // Name is base name of the file func (to *FileInfo) Name() string { return path.Base(to.entry.Remote()) } // Size in bytes for regular files; system-dependent for others func (to *FileInfo) Size() int64 { return to.entry.Size() } // Mode is file mode bits func (to *FileInfo) Mode() os.FileMode { if to.IsDir() { return os.FileMode(0777) } return os.FileMode(0666) } // ModTime is modification time func (to *FileInfo) ModTime() time.Time { return to.entry.ModTime(context.Background()) } // IsDir is abbreviation for Mode().IsDir() func (to *FileInfo) IsDir() bool { _, ok := to.entry.(fs.Directory) return ok } // Sys is underlying data source (can return nil) func (to *FileInfo) Sys() interface{} { return nil } // String returns the full path func (to *FileInfo) String() string { return to.entry.Remote() } // Fs maps an fs.Fs into a tree.Fs type Fs dirtree.DirTree // NewFs creates a new tree func NewFs(dirs dirtree.DirTree) Fs { return Fs(dirs) } // Stat returns info about the file func (dirs Fs) Stat(filePath string) (fi os.FileInfo, err error) { defer log.Trace(nil, "filePath=%q", filePath)("fi=%+v, err=%v", &fi, &err) filePath = filepath.ToSlash(filePath) filePath = strings.TrimLeft(filePath, "/") if filePath == "" { return &FileInfo{fs.NewDir("", time.Now())}, nil } _, entry := dirtree.DirTree(dirs).Find(filePath) if entry == nil { return nil, errors.Errorf("Couldn't find %q in directory cache", filePath) } return &FileInfo{entry}, nil } // ReadDir returns info about the directory and fills up the directory cache func (dirs Fs) ReadDir(dir string) (names []string, err error) { defer log.Trace(nil, "dir=%s", dir)("names=%+v, err=%v", &names, &err) dir = filepath.ToSlash(dir) dir = strings.TrimLeft(dir, "/") entries, ok := dirs[dir] if !ok { return nil, errors.Errorf("Couldn't find directory %q", dir) } for _, entry := range entries { names = append(names, path.Base(entry.Remote())) } return } // check interfaces var ( _ tree.Fs = (*Fs)(nil) _ os.FileInfo = (*FileInfo)(nil) ) rclone-1.53.3/cmd/tree/tree_test.go000066400000000000000000000011621375552240400171230ustar00rootroot00000000000000package tree import ( "bytes" "testing" "github.com/a8m/tree" _ 
"github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestTree(t *testing.T) { fstest.Initialise() buf := new(bytes.Buffer) f, err := fs.NewFs("testfiles") require.NoError(t, err) err = Tree(f, buf, new(tree.Options)) require.NoError(t, err) assert.Equal(t, `/ ├── file1 ├── file2 ├── file3 └── subdir ├── file4 └── file5 1 directories, 5 files `, buf.String()) } rclone-1.53.3/cmd/version/000077500000000000000000000000001375552240400153245ustar00rootroot00000000000000rclone-1.53.3/cmd/version/version.go000066400000000000000000000063021375552240400173410ustar00rootroot00000000000000package version import ( "fmt" "io/ioutil" "net/http" "strings" "time" "github.com/coreos/go-semver/semver" "github.com/pkg/errors" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/flags" "github.com/spf13/cobra" ) var ( check = false ) func init() { cmd.Root.AddCommand(commandDefinition) cmdFlags := commandDefinition.Flags() flags.BoolVarP(cmdFlags, &check, "check", "", false, "Check for new version.") } var commandDefinition = &cobra.Command{ Use: "version", Short: `Show the version number.`, Long: ` Show the version number, the go version and the architecture. Eg $ rclone version rclone v1.41 - os/arch: linux/amd64 - go version: go1.10 If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta. $ rclone version --check yours: 1.42.0.6 latest: 1.42 (released 2018-06-16) beta: 1.42.0.5 (released 2018-06-17) Or $ rclone version --check yours: 1.41 latest: 1.42 (released 2018-06-16) upgrade: https://downloads.rclone.org/v1.42 beta: 1.42.0.5 (released 2018-06-17) upgrade: https://beta.rclone.org/v1.42-005-g56e1e820 `, Run: func(command *cobra.Command, args []string) { cmd.CheckArgs(0, 0, command, args) if check { checkVersion() } else { cmd.ShowVersion() } }, } // strip a leading v off the string func stripV(s string) string { if len(s) > 0 && s[0] == 'v' { return s[1:] } return s } // getVersion gets the version by checking the download repository passed in func getVersion(url string) (v *semver.Version, vs string, date time.Time, err error) { resp, err := http.Get(url) if err != nil { return v, vs, date, err } defer fs.CheckClose(resp.Body, &err) if resp.StatusCode != http.StatusOK { return v, vs, date, errors.New(resp.Status) } bodyBytes, err := ioutil.ReadAll(resp.Body) if err != nil { return v, vs, date, err } vs = strings.TrimSpace(string(bodyBytes)) if strings.HasPrefix(vs, "rclone ") { vs = vs[7:] } vs = strings.TrimRight(vs, "β") date, err = http.ParseTime(resp.Header.Get("Last-Modified")) if err != nil { return v, vs, date, err } v, err = semver.NewVersion(stripV(vs)) return v, vs, date, err } // check the current version against available versions func checkVersion() { // Get Current version vCurrent, err := semver.NewVersion(stripV(fs.Version)) if err != nil { fs.Errorf(nil, "Failed to parse version: %v", err) } const timeFormat = "2006-01-02" printVersion := func(what, url string) { v, vs, t, err := getVersion(url + "version.txt") if err != nil { fs.Errorf(nil, "Failed to get rclone %s version: %v", what, err) return } fmt.Printf("%-8s%-40v %20s\n", what+":", v, "(released "+t.Format(timeFormat)+")", ) if v.Compare(*vCurrent) > 0 { fmt.Printf(" upgrade: %s\n", url+vs) } } fmt.Printf("yours: %-13s\n", vCurrent) printVersion( "latest", 
"https://downloads.rclone.org/", ) printVersion( "beta", "https://beta.rclone.org/", ) if strings.HasSuffix(fs.Version, "-DEV") { fmt.Println("Your version is compiled from git so comparisons may be wrong.") } } rclone-1.53.3/cmd/version/version_test.go000066400000000000000000000020541375552240400204000ustar00rootroot00000000000000package version import ( "io/ioutil" "os" "runtime" "testing" "github.com/rclone/rclone/cmd" "github.com/rclone/rclone/fs/config" "github.com/stretchr/testify/assert" ) func TestVersionWorksWithoutAccessibleConfigFile(t *testing.T) { // create temp config file tempFile, err := ioutil.TempFile("", "unreadable_config.conf") assert.NoError(t, err) path := tempFile.Name() defer func() { err := os.Remove(path) assert.NoError(t, err) }() assert.NoError(t, tempFile.Close()) if runtime.GOOS != "windows" { assert.NoError(t, os.Chmod(path, 0000)) } // re-wire oldOsStdout := os.Stdout oldConfigPath := config.ConfigPath config.ConfigPath = path os.Stdout = nil defer func() { os.Stdout = oldOsStdout config.ConfigPath = oldConfigPath }() cmd.Root.SetArgs([]string{"version"}) assert.NotPanics(t, func() { assert.NoError(t, cmd.Root.Execute()) }) // This causes rclone to exit and the tests to stop! // cmd.Root.SetArgs([]string{"--version"}) // assert.NotPanics(t, func() { // assert.NoError(t, cmd.Root.Execute()) // }) } rclone-1.53.3/contrib/000077500000000000000000000000001375552240400145345ustar00rootroot00000000000000rclone-1.53.3/contrib/docker/000077500000000000000000000000001375552240400160035ustar00rootroot00000000000000rclone-1.53.3/contrib/docker/docker-compose.dlna-server.yml000066400000000000000000000013711375552240400236630ustar00rootroot00000000000000rclone-dlna-server: container_name: rclone-dlna-server image: rclone/rclone command: # Tweak here rclone's command line switches: # - "--config" # - "/path/to/mounted/rclone.conf" - "--verbose" - "serve" - "dlna" - "remote:/" - "--name" - "myDLNA server" - "--read-only" # - "--no-modtime" # - "--no-checksum" restart: unless-stopped # Use host networking for simplicity with DLNA broadcasts # and to avoid having to do port mapping. net: host # Here you have to map your host's rclone.conf directory to # container's /root/.config/rclone/ dir (R/O). # If you have any remote referencing local files, you have to # map them here, too. volumes: - ~/.config/rclone/:/root/.config/rclone/:ro rclone-1.53.3/contrib/docker/docker-compose.webdav-server.yml000066400000000000000000000016411375552240400242150ustar00rootroot00000000000000rclone-webdav-server: container_name: rclone-webdav-server image: rclone/rclone command: # Tweak here rclone's command line switches: # - "--config" # - "/path/to/mounted/rclone.conf" - "--verbose" - "serve" - "webdav" - "remote:/" # - "--addr" # - "0.0.0.0:8080" - "--read-only" # - "--no-modtime" # - "--no-checksum" restart: unless-stopped # Use host networking for simplicity. # It also enables server's default listen on 127.0.0.1 to work safely. net: host # If you want to use port mapping instead of host networking, # make sure to make rclone listen on 0.0.0.0. #ports: # - "127.0.0.1:8080:8080" # Here you have to map your host's rclone.conf directory to # container's /root/.config/rclone/ dir (R/O). # If you have any remote referencing local files, you have to # map them here, too. 
volumes: - ~/.config/rclone/:/root/.config/rclone/:ro rclone-1.53.3/docs/000077500000000000000000000000001375552240400140245ustar00rootroot00000000000000rclone-1.53.3/docs/README.md000066400000000000000000000072631375552240400153110ustar00rootroot00000000000000# Docs This directory tree is used to build all the different docs for rclone. See the `content` directory for the docs in markdown format. Note that some of the docs are auto generated - these should have a DO NOT EDIT marker near the top. Use [hugo](https://github.com/spf13/hugo) to build the website. ## Changing the layout If you want to change the layout then the main files to edit are - `layout/index.html` for the front page - `chrome/*.html` for the HTML fragments - `_default/single.md` for the default template - `page/single.md` for the page template Running `make serve` in a terminal gives a live preview of the website so it is easy to tweak stuff. ## What are all these files ``` ├── config.json - hugo config file ├── content - docs and backend docs │   ├── _index.md - the front page of rclone.org │   ├── commands - auto generated command docs - DO NOT EDIT ├── i18n │   └── en.toml - hugo multilingual config ├── layouts - how the markdown gets converted into HTML │   ├── 404.html - 404 page │   ├── chrome - contains parts of the HTML page included elsewhere │   │   ├── footer.copyright.html - copyright footer │   │   ├── footer.html - footer including scripts │   │   ├── header.html - the whole html header │   │   ├── header.includes.html - header includes eg css files │   │   ├── menu.html - left hand side menu │   │   ├── meta.html - meta tags for the header │   │   └── navbar.html - top navigation bar │   ├── _default │   │   └── single.html - the default HTML page render │   ├── index.html - the index page of the whole site │   ├── page │   │   └── single.html - the render of all "page" type markdown │   ├── partials - bits of HTML to include into layout .html files │   │   └── version.html - the current version number │   ├── rss.xml - template for the RSS output │   ├── section - rendering for sections │   │   └── commands.html - rendering for /commands/index.html │   ├── shortcodes - shortcodes to call from markdown files │   │   ├── cdownload.html - download the "current" version │   │   ├── download.html - download a version with the partials/version.html number │   │   ├── provider.html - used to make provider list on the front page │   │   └── version.html - used to insert the current version number │   └── sitemap.xml - sitemap template ├── public - render of the website ├── README.md - this file ├── resources - don't know! │   └── _gen │   ├── assets │   └── images └── static - static content for the website ├── css │   ├── bootstrap.css │   ├── custom.css - custom css goes here │   └── font-awesome.css ├── img - images used ├── js │   ├── bootstrap.js │   ├── custom.js - custom javascript goes here │   └── jquery.js └── webfonts ``` rclone-1.53.3/docs/config.json000066400000000000000000000012221375552240400161610ustar00rootroot00000000000000{ "indexes": { "tag": "tags", "group": "groups", "menu": "menu" }, "baseurl": "https://rclone.org", "title": "rclone - rsync for cloud storage", "description": "rclone - rsync for cloud storage: google drive, s3, swift, cloudfiles, dropbox, memstore...", "canonifyurls": false, "disableKinds": [ "taxonomy", "taxonomyTerm" ], "ignoreFiles": [ "~$", "^\\."
], "enableGitInfo": true, "markup": { "goldmark": { "extensions": { "typographer": false }, "parser": { "autoHeadingIDType": "blackfriday" }, "renderer": { "unsafe": false } } } } rclone-1.53.3/docs/content/000077500000000000000000000000001375552240400154765ustar00rootroot00000000000000rclone-1.53.3/docs/content/_index.md000066400000000000000000000232601375552240400172710ustar00rootroot00000000000000--- title: "Rclone" description: "Rclone syncs your files to cloud storage: Google Drive, S3, Swift, Dropbox, Google Cloud Storage, Azure, Box and many more." type: page --- # Rclone syncs your files to cloud storage {{< img width="50%" src="/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" >}} - [About rclone](#about) - [What can rclone do for you?](#what) - [What features does rclone have?](#features) - [What providers does rclone support?](#providers) - [Download](/downloads/) - [Install](/install/) {{< rem MAINPAGELINK >}} ## About rclone {#about} Rclone is a command line program to manage files on cloud storage. It is a feature rich alternative to cloud vendors' web storage interfaces. [Over 40 cloud storage products](#providers) support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols. Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support, and `--dry-run` protection. It is used at the command line, in scripts or via its [API](/rc). Users call rclone *"The Swiss army knife of cloud storage"*, and *"Technology indistinguishable from magic"*. Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth; intermittent connections, or subject to quota can be restarted, from the last good file transferred. You can [check](/commands/rclone_check/) the integrity of your files. Where possible, rclone employs server side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk. Virtual backends wrap local and cloud file systems to apply [encryption](/crypt/), [caching](/cache/), [chunking](/chunker/) and [joining](/union/). Rclone [mounts](/commands/rclone_mount/) any local, cloud or virtual filesystem as a disk on Windows, macOS, linux and FreeBSD, and also serves these over [SFTP](/commands/rclone_serve_sftp/), [HTTP](/commands/rclone_serve_http/), [WebDAV](/commands/rclone_serve_webdav/), [FTP](/commands/rclone_serve_ftp/) and [DLNA](/commands/rclone_serve_dlna/). Rclone is mature, open source software originally inspired by rsync and written in [Go](https://golang.org). The friendly support community are familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos. include rclone. For the latest version [downloading from rclone.org](/downloads/) is recommended. Rclone is widely used on Linux, Windows and Mac. Third party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API. Rclone does the heavy lifting of communicating with cloud storage. ## What can rclone do for you? 
Rclone helps you:

- Backup (and encrypt) files to cloud storage
- Restore (and decrypt) files from cloud storage
- Mirror cloud data to other cloud services or locally
- Migrate data to cloud, or between cloud storage vendors
- Mount multiple, encrypted, cached or diverse cloud storage as a disk
- Analyse and account for data held on cloud storage using [lsf](/commands/rclone_lsf/), [lsjson](/commands/rclone_lsjson/), [size](/commands/rclone_size/), [ncdu](/commands/rclone_ncdu/)
- [Union](/union/) file systems together to present multiple local and/or cloud file systems as one

## Features {#features}

- Transfers
    - MD5, SHA1 hashes are checked at all times for file integrity
    - Timestamps are preserved on files
    - Operations can be restarted at any time
    - Can be to and from network, eg two different cloud providers
    - Can use multi-threaded downloads to local disk
- [Copy](/commands/rclone_copy/) new or changed files to cloud storage
- [Sync](/commands/rclone_sync/) (one way) to make a directory identical
- [Move](/commands/rclone_move/) files to cloud storage deleting the local after verification
- [Check](/commands/rclone_check/) hashes and for missing/extra files
- [Mount](/commands/rclone_mount/) your cloud storage as a network disk
- [Serve](/commands/rclone_serve/) local or remote files over [HTTP](/commands/rclone_serve_http/)/[WebDav](/commands/rclone_serve_webdav/)/[FTP](/commands/rclone_serve_ftp/)/[SFTP](/commands/rclone_serve_sftp/)/[dlna](/commands/rclone_serve_dlna/)
- Experimental [Web based GUI](/gui/)

## Supported providers {#providers}

(There are many others, built on standard protocols such as WebDAV or
S3, that work out of the box.)

{{< provider_list >}}
{{< provider name="1Fichier" home="https://1fichier.com/" config="/fichier/" start="true">}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
{{< provider name="Amazon Drive" home="https://www.amazon.com/clouddrive" config="/amazonclouddrive/" note="#status">}}
{{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
{{< provider name="Backblaze B2" home="https://www.backblaze.com/b2/cloud-storage.html" config="/b2/" >}}
{{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="Citrix ShareFile" home="http://sharefile.com/" config="/sharefile/" >}}
{{< provider name="C14" home="https://www.online.net/en/storage/c14-cold-storage" config="/sftp/#c14" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
{{< provider name="Dropbox" home="https://www.dropbox.com/" config="/dropbox/" >}}
{{< provider name="FTP" home="https://en.wikipedia.org/wiki/File_Transfer_Protocol" config="/ftp/" >}}
{{< provider name="Google Cloud Storage" home="https://cloud.google.com/storage/" config="/googlecloudstorage/" >}}
{{< provider name="Google Drive" home="https://www.google.com/drive/" config="/drive/" >}}
{{< provider name="Google Photos" home="https://www.google.com/photos/about/" config="/googlephotos/" >}}
{{< provider name="HTTP" home="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol" config="/http/" >}}
{{< provider name="Hubic" home="https://hubic.com/" config="/hubic/" >}}
{{< provider
name="Jottacloud" home="https://www.jottacloud.com/en/" config="/jottacloud/" >}} {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}} {{< provider name="Koofr" home="https://koofr.eu/" config="/koofr/" >}} {{< provider name="Mail.ru Cloud" home="https://cloud.mail.ru/" config="/mailru/" >}} {{< provider name="Memset Memstore" home="https://www.memset.com/cloud/storage/" config="/swift/" >}} {{< provider name="Mega" home="https://mega.nz/" config="/mega/" >}} {{< provider name="Memory" home="/memory/" config="/memory/" >}} {{< provider name="Microsoft Azure Blob Storage" home="https://azure.microsoft.com/en-us/services/storage/blobs/" config="/azureblob/" >}} {{< provider name="Microsoft OneDrive" home="https://onedrive.live.com/" config="/onedrive/" >}} {{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}} {{< provider name="Nextcloud" home="https://nextcloud.com/" config="/webdav/#nextcloud" >}} {{< provider name="OVH" home="https://www.ovh.co.uk/public-cloud/storage/object-storage/" config="/swift/" >}} {{< provider name="OpenDrive" home="https://www.opendrive.com/" config="/opendrive/" >}} {{< provider name="OpenStack Swift" home="https://docs.openstack.org/swift/latest/" config="/swift/" >}} {{< provider name="Oracle Cloud Storage" home="https://cloud.oracle.com/storage-opc" config="/swift/" >}} {{< provider name="ownCloud" home="https://owncloud.org/" config="/webdav/#owncloud" >}} {{< provider name="pCloud" home="https://www.pcloud.com/" config="/pcloud/" >}} {{< provider name="premiumize.me" home="https://premiumize.me/" config="/premiumizeme/" >}} {{< provider name="put.io" home="https://put.io/" config="/putio/" >}} {{< provider name="QingStor" home="https://www.qingcloud.com/products/storage" config="/qingstor/" >}} {{< provider name="Rackspace Cloud Files" home="https://www.rackspace.com/cloud/files" config="/swift/" >}} {{< provider name="rsync.net" home="https://rsync.net/products/rclone.html" config="/sftp/#rsync-net" >}} {{< provider name="Scaleway" home="https://www.scaleway.com/object-storage/" config="/s3/#scaleway" >}} {{< provider name="Seafile" home="https://www.seafile.com/" config="/seafile/" >}} {{< provider name="SFTP" home="https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol" config="/sftp/" >}} {{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}} {{< provider name="SugarSync" home="https://sugarsync.com/" config="/sugarsync/" >}} {{< provider name="Tardigrade" home="https://tardigrade.io/" config="/tardigrade/" >}} {{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}} {{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}} {{< provider name="WebDAV" home="https://en.wikipedia.org/wiki/WebDAV" config="/webdav/" >}} {{< provider name="Yandex Disk" home="https://disk.yandex.com/" config="/yandex/" >}} {{< provider name="The local filesystem" home="/local/" config="/local/" end="true">}} {{< /provider_list >}} Links * {{< icon "fa fa-home" >}} [Home page](https://rclone.org/) * {{< icon "fab fa-github" >}} [GitHub project page for source and bug tracker](https://github.com/rclone/rclone) * {{< icon "fa fa-comments" >}} [Rclone Forum](https://forum.rclone.org) * {{< icon "fas fa-cloud-download-alt" >}}[Downloads](/downloads/) 
rclone-1.53.3/docs/content/alias.md000066400000000000000000000053431375552240400171160ustar00rootroot00000000000000--- title: "Alias" description: "Remote Aliases" --- {{< icon "fa fa-link" >}} Alias ----------------------------------------- The `alias` remote provides a new name for another remote. Paths may be as deep as required or a local path, eg `remote:directory/subdirectory` or `/directory/subdirectory`. During the initial setup with `rclone config` you will specify the target remote. The target remote can either be a local path or another remote. Subfolders can be used in target remote. Assume an alias remote named `backup` with the target `mydrive:private/backup`. Invoking `rclone mkdir backup:desktop` is exactly the same as invoking `rclone mkdir mydrive:private/backup/desktop`. There will be no special handling of paths containing `..` segments. Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking `rclone mkdir mydrive:private/backup/../desktop`. The empty path is not allowed as a remote. To alias the current directory use `.` instead. Here is an example of how to make an alias called `remote` for local folder. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Alias for an existing remote \ "alias" [snip] Storage> alias Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path". remote> /mnt/storage/backup Remote config -------------------- [remote] remote = /mnt/storage/backup -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Current remotes: Name Type ==== ==== remote alias e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> q ``` Once configured you can then use `rclone` like this, List directories in top level in `/mnt/storage/backup` rclone lsd remote: List all the files in `/mnt/storage/backup` rclone ls remote: Copy another local directory to the alias directory called source rclone copy /home/source remote:source {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/alias/alias.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to alias (Alias for an existing remote). #### --alias-remote Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path". - Config: remote - Env Var: RCLONE_ALIAS_REMOTE - Type: string - Default: "" {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/amazonclouddrive.md000066400000000000000000000231331375552240400213700ustar00rootroot00000000000000--- title: "Amazon Drive" description: "Rclone docs for Amazon Drive" --- {{< icon "fab fa-amazon" >}} Amazon Drive ----------------------------------------- Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers. ## Status **Important:** rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the [Amazon Drive developer program](https://developer.amazon.com/amazon-drive) is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive. 
For the history on why rclone no longer has a set of Amazon Drive API
keys see [the forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314).

If you happen to know anyone who works at Amazon then please ask them
to re-instate rclone into the Amazon Drive developer program - thanks!

## Setup

The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks
you through it.

The configuration process for Amazon Drive may involve using an
[oauth proxy](https://github.com/ncw/oauthproxy). This is used to
keep the Amazon credentials out of the source code. The proxy runs
in Google's very secure App Engine environment and doesn't store any
credentials which pass through it.

Since rclone doesn't currently have its own Amazon Drive credentials,
you will either need to have your own `client_id` and `client_secret`
with Amazon Drive, or use a third party oauth proxy in which case you
will need to enter `client_id`, `client_secret`, `auth_url` and
`token_url`.

Note also if you are not using Amazon's `auth_url` and `token_url`
(ie you filled in something for those) then if setting up on a remote
machine you can only use the [copying the config method of
configuration](https://rclone.org/remote_setup/#configuring-by-copying-the-config-file)
- `rclone authorize` will not work.

Here is an example of how to make a remote called `remote`. First run:

     rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon Drive
   \ "amazon cloud drive"
[snip]
Storage> amazon cloud drive
Amazon Application Client Id - required.
client_id> your client ID goes here
Amazon Application Client Secret - required.
client_secret> your client secret goes here
Auth server URL - leave blank to use Amazon's.
auth_url> Optional auth URL
Token server url - leave blank to use Amazon's.
token_url> Optional token URL
Remote config
Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id = your client ID goes here
client_secret = your client secret goes here
auth_url = Optional auth URL
token_url = Optional token URL
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from Amazon. This only runs from the moment it
opens your browser to the moment you get back the verification code.
This is on `http://127.0.0.1:53682/` and it may require you to
unblock it temporarily if you are running a host firewall.
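If you are configuring on a headless machine and you kept Amazon's
default `auth_url` and `token_url`, one possible approach (a sketch -
see the [remote setup docs](/remote_setup/) for the full procedure,
and note the placeholder client id and secret below) is to run the
authorization step on a machine which does have a browser:

    rclone authorize "amazon cloud drive" "your_client_id" "your_client_secret"

then paste the resulting token into the config on the headless
machine.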
Once configured you can then use `rclone` like this, List directories in top level of your Amazon Drive rclone lsd remote: List all the files in your Amazon Drive rclone ls remote: To copy a local directory to an Amazon Drive directory called backup rclone copy /home/source remote:backup ### Modified time and MD5SUMs ### Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing. It does store MD5SUMs so for a more accurate sync, you can use the `--checksum` flag. #### Restricted filename characters | Character | Value | Replacement | | --------- |:-----:|:-----------:| | NUL | 0x00 | ␀ | | / | 0x2F | / | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. ### Deleting files ### Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days. ### Using with non `.com` Amazon accounts ### Let's say you usually use `amazon.co.uk`. When you authenticate with rclone it will take you to an `amazon.com` page to log in. Your `amazon.co.uk` email and password should work here just fine. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/amazonclouddrive/amazonclouddrive.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to amazon cloud drive (Amazon Drive). #### --acd-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_ACD_CLIENT_ID - Type: string - Default: "" #### --acd-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_ACD_CLIENT_SECRET - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to amazon cloud drive (Amazon Drive). #### --acd-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_ACD_TOKEN - Type: string - Default: "" #### --acd-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_ACD_AUTH_URL - Type: string - Default: "" #### --acd-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_ACD_TOKEN_URL - Type: string - Default: "" #### --acd-checkpoint Checkpoint for internal polling (debug). - Config: checkpoint - Env Var: RCLONE_ACD_CHECKPOINT - Type: string - Default: "" #### --acd-upload-wait-per-gb Additional time per GB to wait after a failed complete upload to see if it appears. Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear. The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears. You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually. These values were determined empirically by observing lots of uploads of big files for a range of file sizes. Upload with the "-v" flag to see more info about what rclone is doing in this situation. 
- Config: upload_wait_per_gb - Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB - Type: Duration - Default: 3m0s #### --acd-templink-threshold Files >= this size will be downloaded via their tempLink. Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed. To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage. - Config: templink_threshold - Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD - Type: SizeSuffix - Default: 9G #### --acd-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_ACD_ENCODING - Type: MultiEncoder - Default: Slash,InvalidUtf8,Dot {{< rem autogenerated options stop >}} ### Limitations ### Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see `--retries` flag) which should hopefully work around this problem. Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail. At the time of writing (Jan 2016) is in the area of 50GB per file. This means that larger files are likely to fail. Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use `--max-size 50000M` option to limit the maximum size of uploaded files. Note that `--max-size` does not split files into segments, it only ignores files over this size. rclone-1.53.3/docs/content/authors.md000066400000000000000000000422071375552240400175120ustar00rootroot00000000000000--- title: "Authors" description: "Rclone Authors and Contributors" --- Authors ------- * Nick Craig-Wood Contributors ------------ {{< rem `email addresses removed from here need to be addeed to bin/.ignore-emails to make sure update-authors.py doesn't immediately put them back in again.` >}} * Alex Couper * Leonid Shalupov * Shimon Doodkin * Colin Nicholson * Klaus Post * Sergey Tolmachev * Adriano Aurélio Meirelles * C. Bess * Dmitry Burdeev * Joseph Spurrier * Björn Harrtell * Xavier Lucas * Werner Beroux * Brian Stengaard * Jakub Gedeon * Jim Tittsler * Michal Witkowski * Fabian Ruff * Leigh Klotz * Romain Lapray * Justin R. Wilson * Antonio Messina * Stefan G. Weichinger * Per Cederberg * Radek Šenfeld * Fredrik Fornwall * Asko Tamm * xor-zz * Tomasz Mazur * Marco Paganini * Felix Bünemann * Durval Menezes * Luiz Carlos Rumbelsperger Viana * Stefan Breunig * Alishan Ladhani * 0xJAKE <0xJAKE@users.noreply.github.com> * Thibault Molleman * Scott McGillivray * Bjørn Erik Pedersen * Lukas Loesche * emyarod * T.C. Ferguson * Brandur * Dario Giovannetti * Károly Oláh * Jon Yergatian * Jack Schmidt * Dedsec1 * Hisham Zarka * Jérôme Vizcaino * Mike Tesch * Marvin Watson * Danny Tsai * Yoni Jah * Stephen Harris * Ihor Dvoretskyi * Jon Craton * Hraban Luyat * Michael Ledin * Martin Kristensen * Too Much IO * Anisse Astier * Zahiar Ahmed * Igor Kharin * Bill Zissimopoulos * Bob Potter * Steven Lu * Sjur Fredriksen * Ruwbin * Fabian Möller * Edward Q. 
Bridges * Vasiliy Tolstov * Harshavardhana * sainaen * gdm85 * Yaroslav Halchenko * John Papandriopoulos * Zhiming Wang * Andy Pilate * Oliver Heyme * wuyu * Andrei Dragomir * Christian Brüggemann * Alex McGrath Kraak * bpicode * Daniel Jagszent * Josiah White * Ishuah Kariuki * Jan Varho * Girish Ramakrishnan * LingMan * Jacob McNamee * jersou * thierry * Simon Leinen * Dan Dascalescu * Jason Rose * Andrew Starr-Bochicchio * John Leach * Corban Raun * Pierre Carlson * Ernest Borowski * Remus Bunduc * Iakov Davydov * Jakub Tasiemski * David Minor * Tim Cooijmans * Laurence * Giovanni Pizzi * Filip Bartodziej * Jon Fautley * lewapm <32110057+lewapm@users.noreply.github.com> * Yassine Imounachen * Chris Redekop * Jon Fautley * Will Gunn * Lucas Bremgartner * Jody Frankowski * Andreas Roussos * nbuchanan * Durval Menezes * Victor * Mateusz * Daniel Loader * David0rk * Alexander Neumann * Giri Badanahatti * Leo R. Lundgren * wolfv * Dave Pedu * Stefan Lindblom * seuffert * gbadanahatti <37121690+gbadanahatti@users.noreply.github.com> * Keith Goldfarb * Steve Kriss * Chih-Hsuan Yen * Alexander Neumann * Matt Holt * Eri Bastos * Michael P. Dubner * Antoine GIRARD * Mateusz Piotrowski * Animosity022 * Peter Baumgartner * Craig Rachel * Michael G. Noll * hensur * Oliver Heyme * Richard Yang * Piotr Oleszczyk * Rodrigo * NoLooseEnds * Jakub Karlicek * John Clayton * Kasper Byrdal Nielsen * Benjamin Joseph Dag * themylogin * Onno Zweers * Jasper Lievisse Adriaanse * sandeepkru * HerrH * Andrew <4030760+sparkyman215@users.noreply.github.com> * dan smith * Oleg Kovalov * Ruben Vandamme * Cnly * Andres Alvarez <1671935+kir4h@users.noreply.github.com> * reddi1 * Matt Tucker * Sebastian Bünger * Martin Polden * Alex Chen * Denis * bsteiss <35940619+bsteiss@users.noreply.github.com> * Cédric Connes * Dr. Tobias Quathamer * dcpu <42736967+dcpu@users.noreply.github.com> * Sheldon Rupp * albertony <12441419+albertony@users.noreply.github.com> * cron410 * Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com> * Felix Brucker * Santiago Rodríguez * Craig Miskell * Antoine GIRARD * Joanna Marek * frenos * ssaqua * xnaas * Frantisek Fuka * Paul Kohout * dcpu <43330287+dcpu@users.noreply.github.com> * jackyzy823 * David Haguenauer * teresy * buergi * Florian Gamboeck * Ralf Hemberger <10364191+rhemberger@users.noreply.github.com> * Scott Edlund * Erik Swanson * Jake Coggiano * brused27 * Peter Kaminski * Henry Ptasinski * Alexander * Garry McNulty * Mathieu Carbou * Mark Otway * William Cocker <37018962+WilliamCocker@users.noreply.github.com> * François Leurent <131.js@cloudyks.org> * Arkadius Stefanski * Jay * andrea rota * nicolov * Dario Guzik * qip * yair@unicorn * Matt Robinson * kayrus * Rémy Léone * Wojciech Smigielski * weetmuts * Jonathan * James Carpenter * Vince * Nestar47 <47841759+Nestar47@users.noreply.github.com> * Six * Alexandru Bumbacea * calisro * Dr.Rx * marcintustin * jaKa Močnik * Fionera * Dan Walters * Danil Semelenov * xopez <28950736+xopez@users.noreply.github.com> * Ben Boeckel * Manu * Kyle E. 
Mitchell * Gary Kim * Jon * Jeff Quinn * Peter Berbec * didil <1284255+didil@users.noreply.github.com> * id01 * Robert Marko * Philip Harvey <32467456+pharveybattelle@users.noreply.github.com> * JorisE * garry415 * forgems * Florian Apolloner * Aleksandar Janković * Maran * nguyenhuuluan434 * Laura Hausmann * yparitcher * AbelThar * Matti Niemenmaa * Russell Davis * Yi FU * Paul Millar * justinalin * EliEron * justina777 * Chaitanya Bankanhal * Michał Matczuk * Macavirus * Abhinav Sharma * ginvine <34869051+ginvine@users.noreply.github.com> * Patrick Wang * Cenk Alti * Andreas Chlupka * Alfonso Montero * Ivan Andreev * David Baumgold * Lars Lehtonen * Matei David * David * Anthony Rusdi <33247310+antrusd@users.noreply.github.com> * Richard Patel * 庄天翼 * SwitchJS * Raphael * Sezal Agrawal * Tyler * Brett Dutro * Vighnesh SK * Arijit Biswas * Michele Caci * AlexandrBoltris * Bryce Larson * Carlos Ferreyra * Saksham Khanna * dausruddin <5763466+dausruddin@users.noreply.github.com> * zero-24 * Xiaoxing Ye * Barry Muldrey * Sebastian Brandt * Marco Molteni * Ankur Gupta <7876747+ankur0493@users.noreply.github.com> * Maciej Zimnoch * anuar45 * Fernando * David Cole * Wei He * Outvi V <19144373+outloudvi@users.noreply.github.com> * Thomas Kriechbaumer * Tennix * Ole Schütt * Kuang-che Wu * Thomas Eales * Paul Tinsley * Felix Hungenberg * Benjamin Richter * landall * thestigma * jtagcat <38327267+jtagcat@users.noreply.github.com> * Damon Permezel * boosh * unbelauscht <58393353+unbelauscht@users.noreply.github.com> * Motonori IWAMURO * Benjapol Worakan * Dave Koston * Durval Menezes * Tim Gallant * Frederick Zhang * valery1707 * Yves G * Shing Kit Chan * Franklyn Tackitt * Robert-André Mauchin * evileye <48332831+ibiruai@users.noreply.github.com> * Joachim Brandon LeBlanc * Patryk Jakuszew * fishbullet * greatroar <@> * Bernd Schoolmann * Elan Ruusamäe * Max Sum * Mark Spieth * harry * Samantha McVey * Jack Anderson * Michael G * Brandon Philips * Daven * Martin Stone * David Bramwell <13053834+dbramwell@users.noreply.github.com> * Sunil Patra * Adam Stroud * Kush * Matan Rosenberg * gitch1 <63495046+gitch1@users.noreply.github.com> * ElonH * Fred * Sébastien Gross * Maxime Suret <11944422+msuret@users.noreply.github.com> * Caleb Case * Ben Zenker * Martin Michlmayr * Brandon McNama * Daniel Slyman * Alex Guerrero * Matteo Pietro Dazzi * edwardxml <56691903+edwardxml@users.noreply.github.com> * Roman Kredentser * Kamil Trzciński * Zac Rubin * Vincent Feltz * Heiko Bornholdt * Matteo Pietro Dazzi * jtagcat * Petri Salminen * Tim Burke * Kai Lüke * Garrett Squire * Evan Harris * Kevin * Morten Linderud * Dmitry Ustalov * Jack <196648+jdeng@users.noreply.github.com> * kcris * tyhuber1 <68970760+tyhuber1@users.noreply.github.com> * David Ibarra * Tim Gallant * Kaloyan Raev * Jay McEntire * Leo Luan * aus <549081+aus@users.noreply.github.com> * Aaron Gokaslan * Egor Margineanu * Lucas Kanashiro * WarpedPixel * Sam Edwards rclone-1.53.3/docs/content/azureblob.md000066400000000000000000000236411375552240400200130ustar00rootroot00000000000000--- title: "Microsoft Azure Blob Storage" description: "Rclone docs for Microsoft Azure Blob Storage" --- {{< icon "fab fa-windows" >}} Microsoft Azure Blob Storage ----------------------------------------- Paths are specified as `remote:container` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called `remote`. 
First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Microsoft Azure Blob Storage \ "azureblob" [snip] Storage> azureblob Storage Account Name account> account_name Storage Account Key key> base64encodedkey== Endpoint for the service - leave blank normally. endpoint> Remote config -------------------- [remote] account = account_name key = base64encodedkey== endpoint = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` See all containers rclone lsd remote: Make a new container rclone mkdir remote:container List the contents of a container rclone ls remote:container Sync `/home/local/directory` to the remote container, deleting any excess files in the container. rclone sync -i /home/local/directory remote:container ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](/docs/#fast-list) for more details. ### Modified time ### The modified time is stored as metadata on the object with the `mtime` key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it. ### Restricted filename characters In addition to the [default restricted characters set](/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | / | 0x2F | / | | \ | 0x5C | \ | File names can also not end with the following characters. These only get replaced if they are the last character in the name: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | . | 0x2E | . | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. ### Hashes ### MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, eg the local disk. ### Authenticating with Azure Blob Storage Rclone has 3 ways of authenticating with Azure Blob Storage: #### Account and Key This is the most straight forward and least flexible way. Just fill in the `account` and `key` lines and leave the rest blank. #### SAS URL This can be an account level SAS URL or container level SAS URL. To use it leave `account`, `key` blank and fill in `sas_url`. An account level SAS URL or container level SAS URL can be obtained from the Azure portal or the Azure Storage Explorer. To get a container level SAS URL right click on a container in the Azure Blob explorer in the Azure portal. If you use a container level SAS URL, rclone operations are permitted only on a particular container, eg rclone ls azureblob:container You can also list the single container from the root. This will only show the container specified by the SAS URL. $ rclone lsd azureblob: container/ Note that you can't see or access any other containers - this will fail rclone ls azureblob:othercontainer Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server. ### Multipart uploads ### Rclone supports multipart uploads with Azure Blob storage. 
Files bigger than 256MB will be uploaded using chunked upload by
default.

The files will be uploaded in parallel in 4MB chunks (by default).
Note that these chunks are buffered in memory and there may be up to
`--transfers` of them being uploaded at once.

Files can't be split into more than 50,000 chunks, so by default the
largest file that can be uploaded with 4MB chunk size is 195GB. Above
this rclone will double the chunk size until it creates fewer than
50,000 chunks. By default this will mean a maximum file size of 3.2TB
can be uploaded. This can be raised to 5TB using
`--azureblob-chunk-size 100M`.

Note that rclone doesn't commit the block list until the end of the
upload which means that there is a limit of 9.5TB of multipart uploads
in progress as Azure won't allow more than that amount of uncommitted
blocks.

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).

#### --azureblob-account

Storage Account Name (leave blank to use SAS URL or Emulator)

- Config: account
- Env Var: RCLONE_AZUREBLOB_ACCOUNT
- Type: string
- Default: ""

#### --azureblob-key

Storage Account Key (leave blank to use SAS URL or Emulator)

- Config: key
- Env Var: RCLONE_AZUREBLOB_KEY
- Type: string
- Default: ""

#### --azureblob-sas-url

SAS URL for container level access only
(leave blank if using account/key or Emulator)

- Config: sas_url
- Env Var: RCLONE_AZUREBLOB_SAS_URL
- Type: string
- Default: ""

#### --azureblob-use-emulator

Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)

- Config: use_emulator
- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
- Type: bool
- Default: false

### Advanced Options

Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).

#### --azureblob-endpoint

Endpoint for the service
Leave blank normally.

- Config: endpoint
- Env Var: RCLONE_AZUREBLOB_ENDPOINT
- Type: string
- Default: ""

#### --azureblob-upload-cutoff

Cutoff for switching to chunked upload (<= 256MB).

- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 256M

#### --azureblob-chunk-size

Upload chunk size (<= 100MB).

Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.

- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4M

#### --azureblob-list-chunk

Size of blob list.

This sets the number of blobs requested in each listing chunk. Default
is the maximum, 5000. "List blobs" requests are permitted 2 minutes
per megabyte to complete. If an operation is taking longer than 2
minutes per megabyte on average, it will time out (
[source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval)
). This can be used to limit the number of blobs items to return, to
avoid the time out.

- Config: list_chunk
- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
- Type: int
- Default: 5000

#### --azureblob-access-tier

Access tier of blob: hot, cool or archive.

Archived blobs can be restored by setting access tier to hot or
cool. Leave blank if you intend to use default access tier, which is
set at account level

If there is no "access tier" specified, rclone doesn't apply any tier.
rclone performs a "Set Tier" operation on blobs while uploading. If
objects are not modified, specifying a new "access tier" will have no
effect. If blobs are in "archive tier" at the remote, trying to
perform data transfer operations from the remote will not be allowed.
The user should first restore the blob by tiering it to "Hot" or
"Cool".

- Config: access_tier
- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
- Type: string
- Default: ""

#### --azureblob-disable-checksum

Don't store MD5 checksum with object metadata.

Normally rclone will calculate the MD5 checksum of the input before
uploading it so it can add it to metadata on the object. This is
great for data integrity checking but can cause long delays for large
files to start uploading.

- Config: disable_checksum
- Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM
- Type: bool
- Default: false

#### --azureblob-memory-pool-flush-time

How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the
memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.

- Config: memory_pool_flush_time
- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME
- Type: Duration
- Default: 1m0s

#### --azureblob-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool.

- Config: memory_pool_use_mmap
- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP
- Type: bool
- Default: false

#### --azureblob-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8

{{< rem autogenerated options stop >}}
### Limitations ###

MD5 sums are only uploaded with chunked files if the source has an MD5
sum. This will always be the case for a local to azure copy.

### Azure Storage Emulator Support ###

You can test rclone with the storage emulator locally. To do this make
sure the Azure storage emulator is installed locally, then set up a
new remote with `rclone config` following the instructions described
in the introduction, setting the `use_emulator` config option to
`true`. You do not need to provide a default account name or key if
using the emulator.
rclone-1.53.3/docs/content/b2.md000066400000000000000000000352161375552240400163310ustar00rootroot00000000000000---
title: "B2"
description: "Backblaze B2"
---

{{< icon "fa fa-fire" >}} Backblaze B2
----------------------------------------

B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.

Here is an example of making a b2 configuration. First run

    rclone config

This will guide you through an interactive setup process. To
authenticate you will either need your Account ID (a short hex
number) and Master Application Key (a long hex number) OR an
Application Key, which is the recommended method. See below for
further details on generating and using an Application Key.

```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Backblaze B2
   \ "b2"
[snip]
Storage> b2
Account ID or Application Key ID
account> 123456789abc
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
endpoint> Remote config -------------------- [remote] account = 123456789abc key = 0123456789abcdef0123456789abcdef0123456789 endpoint = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` This remote is called `remote` and can now be used like this See all buckets rclone lsd remote: Create a new bucket rclone mkdir remote:bucket List the contents of a bucket rclone ls remote:bucket Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. rclone sync -i /home/local/directory remote:bucket ### Application Keys ### B2 supports multiple [Application Keys for different access permission to B2 Buckets](https://www.backblaze.com/b2/docs/application_keys.html). You can use these with rclone too; you will need to use rclone version 1.43 or later. Follow Backblaze's docs to create an Application Key with the required permission and add the `applicationKeyId` as the `account` and the `Application Key` itself as the `key`. Note that you must put the _applicationKeyId_ as the `account` – you can't use the master Account ID. If you try then B2 will return 401 errors. ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](/docs/#fast-list) for more details. ### Modified time ### The modified time is stored as metadata on the object as `X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time. Modified times are used in syncing and are fully supported. Note that if a modification time needs to be updated on an object then it will create a new version of the object. #### Restricted filename characters In addition to the [default restricted characters set](/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | \ | 0x5C | \ | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. Note that in 2020-05 Backblaze started allowing \ characters in file names. Rclone hasn't changed its encoding as this could cause syncs to re-transfer files. If you want rclone not to replace \ then see the `--b2-encoding` flag below and remove the `BackSlash` from the string. This can be set in the config. ### SHA1 checksums ### The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process. Large files (bigger than the limit in `--b2-upload-cutoff`) which are uploaded in chunks will store their SHA1 on the object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze. For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See [the overview](/overview/#features) for exactly which remotes support SHA1. Sources which don't support SHA1, in particular `crypt` will upload large files without SHA1 checksums. This may be fixed in the future (see [#1767](https://github.com/rclone/rclone/issues/1767)). Files sizes below `--b2-upload-cutoff` will always have an SHA1 regardless of the source. ### Transfers ### Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about `--transfers 32` though higher numbers may be used for a slight speed improvement. 
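For example (a sketch - the exact number is worth experimenting with
on your own hardware and files):

    rclone copy --transfers 32 /path/to/local b2:bucket/path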
The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of `--transfers 4` is definitely too low for Backblaze B2 though. Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most `--transfers` of these in use at any moment, so this sets the upper limit on the memory used. ### Versions ### When rclone uploads a new version of a file it creates a [new version of it](https://www.backblaze.com/b2/docs/file_versions.html). Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the `--b2-hard-delete` flag which would permanently remove the file instead of hiding it. Old versions of files, where available, are visible using the `--b2-versions` flag. **NB** Note that `--b2-versions` does not work with crypt at the moment [#1627](https://github.com/rclone/rclone/issues/1627). Using [--backup-dir](/docs/#backup-dir-dir) with rclone is the recommended way of working around this. If you wish to remove all the old versions then you can use the `rclone cleanup remote:bucket` command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg `rclone cleanup remote:bucket/path/to/stuff`. Note that `cleanup` will remove partially uploaded files from the bucket if they are more than a day old. When you `purge` a bucket, the current and the old versions will be deleted then the bucket will be deleted. However `delete` will cause the current versions of the files to become hidden old versions. Here is a session showing the listing and retrieval of an old version followed by a `cleanup` of the old versions. Show current version and all the versions with `--b2-versions` flag. ``` $ rclone -q ls b2:cleanup-test 9 one.txt $ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt 8 one-v2016-07-04-141032-000.txt 16 one-v2016-07-04-141003-000.txt 15 one-v2016-07-02-155621-000.txt ``` Retrieve an old version ``` $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp $ ls -l /tmp/one-v2016-07-04-141003-000.txt -rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt ``` Clean up all the old versions and show that they've gone. ``` $ rclone -q cleanup b2:cleanup-test $ rclone -q ls b2:cleanup-test 9 one.txt $ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt ``` ### Data usage ### It is useful to know how many requests are sent to the server in different scenarios. All copy commands send the following 4 requests: ``` /b2api/v1/b2_authorize_account /b2api/v1/b2_create_bucket /b2api/v1/b2_list_buckets /b2api/v1/b2_list_file_names ``` The `b2_list_file_names` request will be sent once for every 1k files in the remote path, providing the checksum and modification time of the listed files. As of version 1.33 issue [#818](https://github.com/rclone/rclone/issues/818) causes extra requests to be sent when using B2 with Crypt. When a copy operation does not require any files to be uploaded, no more requests will be sent. 
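Incidentally, if you want to check what requests rclone is actually
making in your own scenario, you can watch them go by with the HTTP
debugging flags, for example:

    rclone copy -vv --dump headers /path/to/local b2:bucket

(`--dump headers` logs the HTTP requests and responses, minus their
bodies, to the console.)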
Uploading files that do not require chunking, will send 2 requests per file upload: ``` /b2api/v1/b2_get_upload_url /b2api/v1/b2_upload_file/ ``` Uploading files requiring chunking, will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk: ``` /b2api/v1/b2_start_large_file /b2api/v1/b2_get_upload_part_url /b2api/v1/b2_upload_part/ /b2api/v1/b2_finish_large_file ``` #### Versions #### Versions can be viewed with the `--b2-versions` flag. When it is set rclone will show and act on older versions of files. For example Listing without `--b2-versions` ``` $ rclone -q ls b2:cleanup-test 9 one.txt ``` And with ``` $ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt 8 one-v2016-07-04-141032-000.txt 16 one-v2016-07-04-141003-000.txt 15 one-v2016-07-02-155621-000.txt ``` Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them. Note that when using `--b2-versions` no file write operations are permitted, so you can't upload files or delete them. ### B2 and rclone link ### Rclone supports generating file share links for private B2 buckets. They can either be for a file for example: ``` ./rclone link B2:bucket/path/to/file.txt https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx ``` or if run on a directory you will get: ``` ./rclone link B2:bucket/path https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx ``` you can then use the authorization token (the part of the url from the `?Authorization=` on) on any file path under that directory. For example: ``` https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx ``` {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/b2/b2.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to b2 (Backblaze B2). #### --b2-account Account ID or Application Key ID - Config: account - Env Var: RCLONE_B2_ACCOUNT - Type: string - Default: "" #### --b2-key Application Key - Config: key - Env Var: RCLONE_B2_KEY - Type: string - Default: "" #### --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - Config: hard_delete - Env Var: RCLONE_B2_HARD_DELETE - Type: bool - Default: false ### Advanced Options Here are the advanced options specific to b2 (Backblaze B2). #### --b2-endpoint Endpoint for the service. Leave blank normally. - Config: endpoint - Env Var: RCLONE_B2_ENDPOINT - Type: string - Default: "" #### --b2-test-mode A flag string for X-Bz-Test-Mode header for debugging. This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors: * "fail_some_uploads" * "expire_some_account_authorization_tokens" * "force_cap_exceeded" These will be set in the "X-Bz-Test-Mode" header which is documented in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html). - Config: test_mode - Env Var: RCLONE_B2_TEST_MODE - Type: string - Default: "" #### --b2-versions Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can't upload files or delete them. 
- Config: versions
- Env Var: RCLONE_B2_VERSIONS
- Type: bool
- Default: false

#### --b2-upload-cutoff

Cutoff for switching to chunked upload.

Files above this size will be uploaded in chunks of "--b2-chunk-size".

This value should be set no larger than 4.657GiB (== 5GB).

- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200M

#### --b2-copy-cutoff

Cutoff for switching to multipart copy

Any files larger than this that need to be server side copied will be
copied in chunks of this size.

The minimum is 0 and the maximum is 4.6GB.

- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
- Default: 4G

#### --b2-chunk-size

Upload chunk size. Must fit in memory.

When uploading large files, chunk the file into this size. Note that
these chunks are buffered in memory and there might be a maximum of
"--transfers" chunks in progress at once. 5,000,000 Bytes is the
minimum size.

- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
- Default: 96M

#### --b2-disable-checksum

Disable checksums for large (> upload cutoff) files

Normally rclone will calculate the SHA1 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.

- Config: disable_checksum
- Env Var: RCLONE_B2_DISABLE_CHECKSUM
- Type: bool
- Default: false

#### --b2-download-url

Custom endpoint for downloads.

This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network.
This is probably only useful for a public bucket.
Leave blank if you want to use the endpoint provided by Backblaze.

- Config: download_url
- Env Var: RCLONE_B2_DOWNLOAD_URL
- Type: string
- Default: ""

#### --b2-download-auth-duration

Time before the authorization token will expire in s or suffix ms|s|m|h|d.

The duration before the download authorization token will expire.
The minimum value is 1 second. The maximum value is one week.

- Config: download_auth_duration
- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
- Type: Duration
- Default: 1w

#### --b2-memory-pool-flush-time

How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the
memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.

- Config: memory_pool_flush_time
- Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME
- Type: Duration
- Default: 1m0s

#### --b2-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool.

- Config: memory_pool_use_mmap
- Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP
- Type: bool
- Default: false

#### --b2-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_B2_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

{{< rem autogenerated options stop >}}
rclone-1.53.3/docs/content/box.md000066400000000000000000000251121375552240400166110ustar00rootroot00000000000000---
title: "Box"
description: "Rclone docs for Box"
---

{{< icon "fa fa-archive" >}} Box
-----------------------------------------

Paths are specified as `remote:path`

Paths may be as deep as required, eg `remote:directory/subdirectory`.

The initial setup for Box involves getting a token from Box which you
can do either in your browser, or with a config.json downloaded from
Box to use JWT authentication.
`rclone config` walks you through it.

Here is an example of how to make a remote called `remote`. First run:

     rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Box
   \ "box"
[snip]
Storage> box
Box App Client Id - leave blank normally.
client_id>
Box App Client Secret - leave blank normally.
client_secret>
Box App config.json location
Leave blank normally.
Enter a string value. Press Enter for the default ("").
box_config_file>
Box App Primary Access Token
Leave blank normally.
Enter a string value. Press Enter for the default ("").
access_token>

Enter a string value. Press Enter for the default ("user").
Choose a number from below, or type in your own value
 1 / Rclone should act on behalf of a user
   \ "user"
 2 / Rclone should act on behalf of a service account
   \ "enterprise"
box_sub_type>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from Box. This only runs from the moment it opens
your browser to the moment you get back the verification code. This
is on `http://127.0.0.1:53682/` and it may require you to unblock it
temporarily if you are running a host firewall.

Once configured you can then use `rclone` like this,

List directories in top level of your Box

    rclone lsd remote:

List all the files in your Box

    rclone ls remote:

To copy a local directory to a Box directory called backup

    rclone copy /home/source remote:backup

### Using rclone with an Enterprise account with SSO ###

If you have an "Enterprise" account type with Box with single sign on
(SSO), you need to create a password to use Box with rclone. This can
be done at your Enterprise Box account by going to Settings, "Account"
Tab, and then setting the password in the "Authentication" field.

Once you have done this, you can set up your Enterprise Box account
using the same procedure detailed above, using the password you have
just set.

### Invalid refresh token ###

According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens):

> Each refresh_token is valid for one use in 60 days.

This means that if you

* Don't use the box remote for 60 days
* Copy the config file with a box refresh token in and use it in two places
* Get an error on a token refresh

then rclone will return an error which includes the text `Invalid
refresh token`.

To fix this you will need to use oauth2 again to update the refresh
token. You can use the methods in [the remote setup
docs](/remote_setup/), bearing in mind that if you use the copy the
config file method, you should not use that remote on the computer you
did the authentication on.

Here is how to do it.
```
$ rclone config
Current remotes:

Name                 Type
====                 ====
remote               box

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> e
Choose a number from below, or type in an existing value
 1 > remote
remote> remote
--------------------
[remote]
type = box
token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
--------------------
Edit remote
Value "client_id" = ""
Edit? (y/n)>
y) Yes
n) No
y/n> n
Value "client_secret" = ""
Edit? (y/n)>
y) Yes
n) No
y/n> n
Remote config
Already have a token - refresh?
y) Yes
n) No
y/n> y
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = box
token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

### Modified time and hashes ###

Box allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.

Box supports SHA1 type hashes, so you can use the `--checksum`
flag.

#### Restricted filename characters

In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \         | 0x5C  | ＼           |

File names can also not end with the following characters.
These only get replaced if they are the last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.

### Transfers ###

For files above 50MB rclone will use a chunked transfer. Rclone will
upload up to `--transfers` chunks at the same time (shared among all
the multipart uploads). Chunks are buffered in memory and are
normally 8MB so increasing `--transfers` will increase memory use.

### Deleting files ###

Depending on the enterprise settings for your user, the item will
either be actually deleted from Box or moved to the trash.

Emptying the trash is supported via the rclone `cleanup` command,
however this deletes every trashed file and folder individually so it
may take a very long time. Emptying the trash via the WebUI does not
have this limitation so it is advised to empty the trash via the WebUI.

### Root folder ID ###

You can set the `root_folder_id` for rclone. This is the directory
(identified by its `Folder ID`) that rclone considers to be the root
of your Box drive.

Normally you will leave this blank and rclone will determine the
correct root to use itself.

However you can set this to restrict rclone to a specific folder
hierarchy.

In order to do this you will have to find the `Folder ID` of the
directory you wish rclone to display. This will be the last segment
of the URL when you open the relevant folder in the Box web
interface.

So if the folder you want rclone to use has a URL which looks like
`https://app.box.com/folder/11xxxxxxxxx8` in the browser, then you use
`11xxxxxxxxx8` as the `root_folder_id` in the config.
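For example, you could add the ID to the remote's config file entry by
hand. This is only an illustrative sketch: the section name `box-subdir`
is made up, `11xxxxxxxxx8` is the placeholder from above and the token
value is elided:

```
[box-subdir]
type = box
root_folder_id = 11xxxxxxxxx8
token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
```

The same value can also be supplied with the `--box-root-folder-id`
flag or the `RCLONE_BOX_ROOT_FOLDER_ID` environment variable described
below.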
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/box/box.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to box (Box).

#### --box-client-id

OAuth Client Id
Leave blank normally.

- Config: client_id
- Env Var: RCLONE_BOX_CLIENT_ID
- Type: string
- Default: ""

#### --box-client-secret

OAuth Client Secret
Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_BOX_CLIENT_SECRET
- Type: string
- Default: ""

#### --box-box-config-file

Box App config.json location
Leave blank normally.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

- Config: box_config_file
- Env Var: RCLONE_BOX_BOX_CONFIG_FILE
- Type: string
- Default: ""

#### --box-access-token

Box App Primary Access Token
Leave blank normally.

- Config: access_token
- Env Var: RCLONE_BOX_ACCESS_TOKEN
- Type: string
- Default: ""

#### --box-box-sub-type

- Config: box_sub_type
- Env Var: RCLONE_BOX_BOX_SUB_TYPE
- Type: string
- Default: "user"
- Examples:
    - "user"
        - Rclone should act on behalf of a user
    - "enterprise"
        - Rclone should act on behalf of a service account

### Advanced Options

Here are the advanced options specific to box (Box).

#### --box-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_BOX_TOKEN
- Type: string
- Default: ""

#### --box-auth-url

Auth server URL.
Leave blank to use the provider defaults.

- Config: auth_url
- Env Var: RCLONE_BOX_AUTH_URL
- Type: string
- Default: ""

#### --box-token-url

Token server url.
Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_BOX_TOKEN_URL
- Type: string
- Default: ""

#### --box-root-folder-id

Fill in for rclone to use a non root folder as its starting point.

- Config: root_folder_id
- Env Var: RCLONE_BOX_ROOT_FOLDER_ID
- Type: string
- Default: "0"

#### --box-upload-cutoff

Cutoff for switching to multipart upload (>= 50MB).

- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 50M

#### --box-commit-retries

Max number of times to try committing a multipart file.

- Config: commit_retries
- Env Var: RCLONE_BOX_COMMIT_RETRIES
- Type: int
- Default: 100

#### --box-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_BOX_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot

{{< rem autogenerated options stop >}}

### Limitations ###

Note that Box is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".

Box file names can't have the `\` character in. rclone maps this to
and from an identical looking unicode equivalent `＼` (U+FF3C Fullwidth
Reverse Solidus).

Box only supports filenames up to 255 characters in length.
rclone-1.53.3/docs/content/bugs.md000066400000000000000000000026511375552240400167640ustar00rootroot00000000000000---
title: "Bugs"
description: "Rclone Bugs and Limitations"
---

# Bugs and Limitations

## Limitations

### Directory timestamps aren't preserved

Rclone doesn't currently preserve the timestamps of directories. This
is because rclone only really considers objects when syncing.

### Rclone struggles with millions of files in a directory/bucket

Currently rclone loads each directory/bucket entirely into memory before
using it. Since each rclone object takes 0.5k-1k of memory this can take
a very long time and use a large amount of memory.
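As a rough worked example (our own back-of-the-envelope estimate from
the 0.5k-1k figure above, not a measured number): a bucket containing
10 million objects would need on the order of

    10,000,000 objects × 0.5k-1k per object ≈ 5GB-10GB of RAM

just to hold the listing, before any syncing starts.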
Millions of files in a directory tend to occur on bucket-based remotes
(e.g. S3 buckets) since those remotes do not segregate subdirectories
within the bucket.

### Bucket based remotes and folders

Bucket based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of
directories. Rclone therefore cannot create directories in them which
means that empty directories on a bucket based remote will tend to
disappear.

Some software creates empty keys ending in `/` as directory markers.
Rclone doesn't do this as it potentially creates more objects and
costs more. This ability may be added in the future (probably via a
flag/option).

## Bugs

Bugs are stored in rclone's GitHub project:

* [Reported bugs](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug)
* [Known issues](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+milestone%3A%22Known+Problem%22)
rclone-1.53.3/docs/content/cache.md000066400000000000000000000511071375552240400170670ustar00rootroot00000000000000---
title: "Cache"
description: "Rclone docs for cache remote"
---

{{< icon "fa fa-archive" >}} Cache (BETA)
-----------------------------------------

The `cache` remote wraps another existing remote and stores file
structure and its data for long running tasks like `rclone mount`.

## Status

The cache backend code is working but it currently doesn't have a
maintainer so there are [outstanding bugs](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug+label%3A%22Remote%3A+Cache%22)
which aren't getting fixed.

The cache backend is due to be phased out in favour of the VFS caching
layer eventually which is more tightly integrated into rclone.

Until this happens we recommend only using the cache backend if you
find you can't work without it. There are many docs online describing
the use of the cache backend to minimize API hits and by-and-large
these are out of date and the cache backend isn't needed in those
scenarios any more.

## Setup

To get started you just need to have an existing remote which can be
configured with `cache`.

Here is an example of how to make a remote called `test-cache`. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> test-cache
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Cache a remote
   \ "cache"
[snip]
Storage> cache
Remote to cache.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
remote> local:/test
Optional: The URL of the Plex server
plex_url> http://127.0.0.1:32400
Optional: The username of the Plex user
plex_username> dummyusername
Optional: The password of the Plex user
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
The size of a chunk. Lower value good for slow connections but can affect seamless reading.
Default: 5M
Choose a number from below, or type in your own value
 1 / 1MB
   \ "1m"
 2 / 5 MB
   \ "5M"
 3 / 10 MB
   \ "10M"
chunk_size> 2
How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
Accepted units are: "s", "m", "h".
Default: 5m
Choose a number from below, or type in your own value
 1 / 1 hour
   \ "1h"
 2 / 24 hours
   \ "24h"
 3 / 48 hours
   \ "48h"
info_age> 2
The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
Default: 10G
Choose a number from below, or type in your own value
 1 / 500 MB
   \ "500M"
 2 / 1 GB
   \ "1G"
 3 / 10 GB
   \ "10G"
chunk_total_size> 3
Remote config
--------------------
[test-cache]
remote = local:/test
plex_url = http://127.0.0.1:32400
plex_username = dummyusername
plex_password = *** ENCRYPTED ***
chunk_size = 5M
info_age = 48h
chunk_total_size = 10G
```

You can then use it like this,

List directories in top level of your drive

    rclone lsd test-cache:

List all the files in your drive

    rclone ls test-cache:

To start a cached mount

    rclone mount --allow-other test-cache: /var/tmp/test-cache

### Write Features ###

### Offline uploading ###

In an effort to make writing through cache more reliable, the backend
now supports this feature which can be activated by specifying a
`cache-tmp-upload-path`.

A file goes through these states when using this feature:

1. An upload is started (usually by copying a file on the cache remote)
2. When the copy to the temporary location is complete the file is part
of the cached remote and looks and behaves like any other file (reading included)
3. After `cache-tmp-wait-time` passes and the file is next in line,
`rclone move` is used to move the file to the cloud provider
4. Reading the file still works during the upload but most
modifications on it will be prohibited
5. Once the move is complete the file is unlocked for modifications as
it becomes like any other regular file
6. If the file is being read through `cache` when it's actually
deleted from the temporary path then `cache` will simply swap the
source to the cloud provider without interrupting the reading (small
blip can happen though)

Files are uploaded in sequence and only one file is uploaded at a time.
Uploads will be stored in a queue and be processed based on the order
they were added. The queue and the temporary storage are persistent
across restarts but can be cleared on startup with the
`--cache-db-purge` flag.

### Write Support ###

Writes are supported through `cache`. One caveat is that a mounted
cache remote does not add any retry or fallback mechanism to the upload
operation. This will depend on the implementation of the wrapped
remote. Consider using `Offline uploading` for reliable writes.

One special case is covered with `cache-writes` which will cache the
file data at the same time as the upload when it is enabled making it
available from the cache store immediately once the upload is finished.

### Read Features ###

#### Multiple connections ####

To counter the high latency between a local PC where rclone is running
and cloud providers, the cache remote can split multiple requests to
the cloud provider for smaller file chunks and combine them together
locally where they can be available almost immediately before the
reader usually needs them.

This is similar to buffering when media files are played online. Rclone
will stay around the current marker but always try its best to stay
ahead and prepare the data before.

#### Plex Integration ####

There is a direct integration with Plex which allows cache to detect
during reading if the file is in playback or not. This helps cache to
adapt how it queries the cloud provider depending on what it is needed
for.
Scans will use a minimum number of workers (1) while during a confirmed
playback cache will deploy the configured number of workers.

This integration opens the doorway to additional performance
improvements which will be explored in the near future.

**Note:** If Plex options are not configured, `cache` will function
with its configured options without adapting any of its settings.

How to enable? Run `rclone config` and add all the Plex options
(endpoint, username and password) in your remote and it will be
automatically enabled.

Affected settings:
- `cache-workers`: _Configured value_ during confirmed playback or _1_ all the other times

##### Certificate Validation #####

When the Plex server is configured to only accept secure connections, it is
possible to use `.plex.direct` URLs to ensure certificate validation succeeds.
These URLs are used by Plex internally to connect to the Plex server securely.

The format for these URLs is the following:

https://ip-with-dots-replaced.server-hash.plex.direct:32400/

The `ip-with-dots-replaced` part can be any IPv4 address, where the dots
have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`.

To get the `server-hash` part, the easiest way is to visit

https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token

This page will list all the available Plex servers for your account
with at least one `.plex.direct` link for each. Copy one URL and
replace the IP address with the desired address. This can be used as
the `plex_url` value.

### Known issues ###

#### Mount and --dir-cache-time ####

--dir-cache-time controls the first layer of directory caching which
works at the mount layer. Being an independent caching mechanism from
the `cache` backend, it will manage its own entries based on the
configured time.

To avoid getting in a scenario where dir cache has obsolete data and
cache would have the correct one, try to set `--dir-cache-time` to a
lower time than `--cache-info-age`. Default values are already
configured in this way.

#### Windows support - Experimental ####

There are a couple of issues with Windows `mount` functionality that
still require some investigation. It should be considered experimental
for now as fixes come in for this OS.

Most of the issues seem to be related to the difference between
filesystems on Linux flavors and Windows as cache is heavily dependent
on them.

Any reports or feedback on how cache behaves on this OS is greatly
appreciated.

- https://github.com/rclone/rclone/issues/1935
- https://github.com/rclone/rclone/issues/1907
- https://github.com/rclone/rclone/issues/1834

#### Risk of throttling ####

Future iterations of the cache backend will make use of the pooling
functionality of the cloud provider to synchronize and at the same time
make writing through it more tolerant to failures.

There are a couple of enhancements in the works to add these but in the
meantime there is a valid concern that the expiring cache listings can
lead to cloud provider throttles or bans due to repeated queries on it
for very large mounts.

Some recommendations:
- don't use a very small interval for entry information (`--cache-info-age`)
- while writes aren't yet optimised, you can still write through `cache`
which gives you the advantage of adding the file in the cache at the
same time if configured to do so (see the example just after this list)
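For example, a mount that follows both of these recommendations could
look like the sketch below. This is our own illustration rather than an
upstream recipe: it reuses the `test-cache:` remote configured above,
and the durations are arbitrary but keep `--dir-cache-time` below
`--cache-info-age` as advised in the mount section earlier:

```
rclone mount --allow-other \
    --dir-cache-time 1h \
    --cache-info-age 48h \
    --cache-writes \
    test-cache: /var/tmp/test-cache
```

The longer `--cache-info-age` reduces repeated listing queries against
the cloud provider, and `--cache-writes` makes freshly written files
readable from the cache store straight away.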
Future enhancements:

- https://github.com/rclone/rclone/issues/1937
- https://github.com/rclone/rclone/issues/1936

#### cache and crypt ####

One common scenario is to keep your data encrypted in the cloud
provider using the `crypt` remote. `crypt` uses a similar technique to
wrap around an existing remote and handles this translation in a
seamless way.

There is an issue with wrapping the remotes in this order:
**cloud remote** -> **crypt** -> **cache**

During testing, I experienced a lot of bans with the remotes in this
order. I suspect it might be related to how crypt opens files on the
cloud provider which makes it think we're downloading the full file
instead of small chunks. Organizing the remotes in this order yields
better results:
**cloud remote** -> **cache** -> **crypt**

#### absolute remote paths ####

`cache` cannot differentiate between relative and absolute paths for
the wrapped remote. Any path given in the `remote` config setting and
on the command line will be passed to the wrapped remote as is, but for
storing the chunks on disk the path will be made relative by removing
any leading `/` character.

This behavior is irrelevant for most backend types, but there are
backends where a leading `/` changes the effective directory, e.g. in
the `sftp` backend paths starting with a `/` are relative to the root
of the SSH server and paths without are relative to the user home
directory. As a result `sftp:bin` and `sftp:/bin` will share the same
cache folder, even if they represent a different directory on the SSH
server.

### Cache and Remote Control (--rc) ###

Cache supports the new `--rc` mode in rclone and can be remote
controlled through the following endpoints. By default, the listener is
disabled if you do not add the flag.

### rc cache/expire

Purge a remote from the cache backend. Supports either a directory or a file.
It supports both encrypted and unencrypted file names if cache is wrapped by crypt.

Params:
  - **remote** = path to remote **(required)**
  - **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/cache/cache.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to cache (Cache a remote).

#### --cache-remote

Remote to cache.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).

- Config: remote
- Env Var: RCLONE_CACHE_REMOTE
- Type: string
- Default: ""

#### --cache-plex-url

The URL of the Plex server

- Config: plex_url
- Env Var: RCLONE_CACHE_PLEX_URL
- Type: string
- Default: ""

#### --cache-plex-username

The username of the Plex user

- Config: plex_username
- Env Var: RCLONE_CACHE_PLEX_USERNAME
- Type: string
- Default: ""

#### --cache-plex-password

The password of the Plex user

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

- Config: plex_password
- Env Var: RCLONE_CACHE_PLEX_PASSWORD
- Type: string
- Default: ""

#### --cache-chunk-size

The size of a chunk (partial file data).

Use lower numbers for slower connections. If the chunk size is
changed, any downloaded chunks will be invalid and cache-chunk-path
will need to be cleared or unexpected EOF errors will occur.
- Config: chunk_size
- Env Var: RCLONE_CACHE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5M
- Examples:
    - "1m"
        - 1MB
    - "5M"
        - 5 MB
    - "10M"
        - 10 MB

#### --cache-info-age

How long to cache file structure information (directory listings, file size, times etc).
If all write operations are done through the cache then you can safely make
this value very large as the cache store will also be updated in real time.

- Config: info_age
- Env Var: RCLONE_CACHE_INFO_AGE
- Type: Duration
- Default: 6h0m0s
- Examples:
    - "1h"
        - 1 hour
    - "24h"
        - 24 hours
    - "48h"
        - 48 hours

#### --cache-chunk-total-size

The total size that the chunks can take up on the local disk.

If the cache exceeds this value then it will start to delete the
oldest chunks until it goes under this value.

- Config: chunk_total_size
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
- Type: SizeSuffix
- Default: 10G
- Examples:
    - "500M"
        - 500 MB
    - "1G"
        - 1 GB
    - "10G"
        - 10 GB

### Advanced Options

Here are the advanced options specific to cache (Cache a remote).

#### --cache-plex-token

The plex token for authentication - auto set normally

- Config: plex_token
- Env Var: RCLONE_CACHE_PLEX_TOKEN
- Type: string
- Default: ""

#### --cache-plex-insecure

Skip all certificate verification when connecting to the Plex server

- Config: plex_insecure
- Env Var: RCLONE_CACHE_PLEX_INSECURE
- Type: string
- Default: ""

#### --cache-db-path

Directory to store file structure metadata DB.
The remote name is used as the DB file name.

- Config: db_path
- Env Var: RCLONE_CACHE_DB_PATH
- Type: string
- Default: "$HOME/.cache/rclone/cache-backend"

#### --cache-chunk-path

Directory to cache chunk files.

Path to where partial file data (chunks) are stored locally. The remote
name is appended to the final path.

This config follows the "--cache-db-path". If you specify a custom
location for "--cache-db-path" and don't specify one for
"--cache-chunk-path" then "--cache-chunk-path" will use the same path
as "--cache-db-path".

- Config: chunk_path
- Env Var: RCLONE_CACHE_CHUNK_PATH
- Type: string
- Default: "$HOME/.cache/rclone/cache-backend"

#### --cache-db-purge

Clear all the cached data for this remote on start.

- Config: db_purge
- Env Var: RCLONE_CACHE_DB_PURGE
- Type: bool
- Default: false

#### --cache-chunk-clean-interval

How often should the cache perform cleanups of the chunk storage.
The default value should be ok for most people. If you find that the
cache goes over "cache-chunk-total-size" too often then try to lower
this value to force it to perform cleanups more often.

- Config: chunk_clean_interval
- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
- Type: Duration
- Default: 1m0s

#### --cache-read-retries

How many times to retry a read from a cache storage.

Since reading from a cache stream is independent from downloading file
data, readers can get to a point where there's no more data in the
cache. Most of the time this can indicate a connectivity issue if
cache isn't able to provide file data anymore.

For really slow connections, increase this to a point where the stream
is able to provide data but your experience will be very stuttery.

- Config: read_retries
- Env Var: RCLONE_CACHE_READ_RETRIES
- Type: int
- Default: 10

#### --cache-workers

How many workers should run in parallel to download chunks.

Higher values will mean more parallel processing (better CPU needed)
and more concurrent requests on the cloud provider.
This impacts several aspects like the cloud provider API limits and the
stress on the hardware that rclone runs on, but it also means that
streams will be more fluid and data will be available much faster to
readers.

**Note**: If the optional Plex integration is enabled then this setting
will adapt to the type of reading performed and the value specified
here will be used as a maximum number of workers to use.

- Config: workers
- Env Var: RCLONE_CACHE_WORKERS
- Type: int
- Default: 4

#### --cache-chunk-no-memory

Disable the in-memory cache for storing chunks during streaming.

By default, cache will keep file data during streaming in RAM as well
to provide it to readers as fast as possible.

This transient data is evicted as soon as it is read and the number of
chunks stored doesn't exceed the number of workers. However, depending
on other settings like "cache-chunk-size" and "cache-workers" this
footprint can increase if there are parallel streams too (multiple
files being read at the same time).

If the hardware permits it, use this feature to provide an overall
better performance during streaming but it can also be disabled if RAM
is not available on the local machine.

- Config: chunk_no_memory
- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
- Type: bool
- Default: false

#### --cache-rps

Limits the number of requests per second to the source FS (-1 to disable)

This setting places a hard limit on the number of requests per second
that cache will be doing to the cloud provider remote and try to
respect that value by setting waits between reads.

If you find that you're getting banned or limited on the cloud
provider through cache and know that a smaller number of requests per
second will allow you to work with it then you can use this setting
for that.

A good balance of all the other settings should make this setting
useless but it is available to set for more special cases.

**NOTE**: This will limit the number of requests during streams but
other API calls to the cloud provider like directory listings will
still pass.

- Config: rps
- Env Var: RCLONE_CACHE_RPS
- Type: int
- Default: -1

#### --cache-writes

Cache file data on writes through the FS

If you need to read files immediately after you upload them through
cache you can enable this flag to have their data stored in the cache
store at the same time during upload.

- Config: writes
- Env Var: RCLONE_CACHE_WRITES
- Type: bool
- Default: false

#### --cache-tmp-upload-path

Directory to keep temporary files until they are uploaded.

This is the path that cache will use as a temporary storage area for
new files that need to be uploaded to the cloud provider.

Specifying a value will enable this feature. Without it, it is
completely disabled and files will be uploaded directly to the cloud
provider

- Config: tmp_upload_path
- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
- Type: string
- Default: ""

#### --cache-tmp-wait-time

How long should files be stored in local cache before being uploaded

This is the duration that a file must wait in the temporary location
_cache-tmp-upload-path_ before it is selected for upload.

Note that only one file is uploaded at a time and it can take longer
to start the upload if a queue has formed for this purpose.

- Config: tmp_wait_time
- Env Var: RCLONE_CACHE_TMP_WAIT_TIME
- Type: Duration
- Default: 15s

#### --cache-db-wait-time

How long to wait for the DB to be available - 0 is unlimited

Only one process can have the DB open at any one time, so rclone waits
for this duration for the DB to become available before it gives an
error.
If you set it to 0 then it will wait forever. - Config: db_wait_time - Env Var: RCLONE_CACHE_DB_WAIT_TIME - Type: Duration - Default: 1s ### Backend commands Here are the commands specific to the cache backend. Run them with rclone backend COMMAND remote: The help below will explain what arguments each command takes. See [the "rclone backend" command](/commands/rclone_backend/) for more info on how to pass options and arguments. These can be run on a running backend using the rc command [backend/command](/rc/#backend/command). #### stats Print stats on the cache backend in JSON format. rclone backend stats remote: [options] [+] {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/changelog.md000066400000000000000000004406621375552240400177630ustar00rootroot00000000000000--- title: "Documentation" description: "Rclone Changelog" --- # Changelog ## v1.53.3 - 2020-11-19 [See commits](https://github.com/rclone/rclone/compare/v1.53.2...v1.53.3) * Bug Fixes * random: Fix incorrect use of math/rand instead of crypto/rand CVE-2020-28924 (Nick Craig-Wood) * Passwords you have generated with `rclone config` may be insecure * See [issue #4783](https://github.com/rclone/rclone/issues/4783) for more details and a checking tool * random: Seed math/rand in one place with crypto strong seed (Nick Craig-Wood) * VFS * Fix vfs/refresh calls with fs= parameter (Nick Craig-Wood) * Sharefile * Fix backend due to API swapping integers for strings (Nick Craig-Wood) ## v1.53.2 - 2020-10-26 [See commits](https://github.com/rclone/rclone/compare/v1.53.1...v1.53.2) * Bug Fixes * acounting * Fix incorrect speed and transferTime in core/stats (Nick Craig-Wood) * Stabilize display order of transfers on Windows (Nick Craig-Wood) * operations * Fix use of --suffix without --backup-dir (Nick Craig-Wood) * Fix spurious "--checksum is in use but the source and destination have no hashes in common" (Nick Craig-Wood) * build * Work around GitHub actions brew problem (Nick Craig-Wood) * Stop using set-env and set-path in the GitHub actions (Nick Craig-Wood) * Mount * mount2: Fix the swapped UID / GID values (Russell Cattelan) * VFS * Detect and recover from a file being removed externally from the cache (Nick Craig-Wood) * Fix a deadlock vulnerability in downloaders.Close (Leo Luan) * Fix a race condition in retryFailedResets (Leo Luan) * Fix missed concurrency control between some item operations and reset (Leo Luan) * Add exponential backoff during ENOSPC retries (Leo Luan) * Add a missed update of used cache space (Leo Luan) * Fix --no-modtime to not attempt to set modtimes (as documented) (Nick Craig-Wood) * Local * Fix sizes and syncing with --links option on Windows (Nick Craig-Wood) * Chunker * Disable ListR to fix missing files on GDrive (workaround) (Ivan Andreev) * Fix upload over crypt (Ivan Andreev) * Fichier * Increase maximum file size from 100GB to 300GB (gyutw) * Jottacloud * Remove clientSecret from config when upgrading to token based authentication (buengese) * Avoid double url escaping of device/mountpoint (albertony) * Remove DirMove workaround as it's not required anymore - also (buengese) * Mailru * Fix uploads after recent changes on server (Ivan Andreev) * Fix range requests after june changes on server (Ivan Andreev) * Fix invalid timestamp on corrupted files (fixes) (Ivan Andreev) * Onedrive * Fix disk usage for sharepoint (Nick Craig-Wood) * S3 * Add missing regions for AWS (Anagh Kumar Baranwal) * Seafile * Fix accessing libraries > 2GB on 32 bit systems (Muffin King) * SFTP * Always 
convert the checksum to lower case (buengese) * Union * Create root directories if none exist (Nick Craig-Wood) ## v1.53.1 - 2020-09-13 [See commits](https://github.com/rclone/rclone/compare/v1.53.0...v1.53.1) * Bug Fixes * accounting: Remove new line from end of --stats-one-line display (Nick Craig-Wood) * check * Add back missing --download flag (Nick Craig-Wood) * Fix docs (Nick Craig-Wood) * docs * Note --log-file does append (Nick Craig-Wood) * Add full stops for consistency in rclone --help (edwardxml) * Add Tencent COS to s3 provider list (wjielai) * Updated mount command to reflect that it requires Go 1.13 or newer (Evan Harris) * jottacloud: Mention that uploads from local disk will not need to cache files to disk for md5 calculation (albertony) * Fix formatting of rc docs page (Nick Craig-Wood) * build * Include vendor tar ball in release and fix startdev (Nick Craig-Wood) * Fix "Illegal instruction" error for ARMv6 builds (Nick Craig-Wood) * Fix architecture name in ARMv7 build (Nick Craig-Wood) * VFS * Fix spurious error "vfs cache: failed to _ensure cache EOF" (Nick Craig-Wood) * Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood) * Local * Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood) * Drive * Re-adds special oauth help text (Tim Gallant) * Opendrive * Do not retry 400 errors (Evan Harris) ## v1.53.0 - 2020-09-02 [See commits](https://github.com/rclone/rclone/compare/v1.52.0...v1.53.0) * New Features * The [VFS layer](/commands/rclone_mount/#vfs-virtual-file-system) was heavily reworked for this release - see below for more details * Interactive mode [-i/--interactive](/docs/#interactive) for destructive operations (fishbullet) * Add [--bwlimit-file](/docs/#bwlimit-file-bandwidth-spec) flag to limit speeds of individual file transfers (Nick Craig-Wood) * Transfers are sorted by start time in the stats and progress output (Max Sum) * Make sure backends expand `~` and environment vars in file names they use (Nick Craig-Wood) * Add [--refresh-times](/docs/#refresh-times) flag to set modtimes on hashless backends (Nick Craig-Wood) * build * Remove vendor directory in favour of Go modules (Nick Craig-Wood) * Build with go1.15.x by default (Nick Craig-Wood) * Drop macOS 386 build as it is no longer supported by go1.15 (Nick Craig-Wood) * Add ARMv7 to the supported builds (Nick Craig-Wood) * Enable `rclone cmount` on macOS (Nick Craig-Wood) * Make rclone build with gccgo (Nick Craig-Wood) * Make rclone build with wasm (Nick Craig-Wood) * Change beta numbering to be semver compatible (Nick Craig-Wood) * Add file properties and icon to Windows executable (albertony) * Add experimental interface for integrating rclone into browsers (Nick Craig-Wood) * lib: Add file name compression (Klaus Post) * rc * Allow installation and use of plugins and test plugins with rclone-webui (Chaitanya Bankanhal) * Add reverse proxy pluginsHandler for serving plugins (Chaitanya Bankanhal) * Add `mount/listmounts` option for listing current mounts (Chaitanya Bankanhal) * Add `operations/uploadfile` to upload a file through rc using encoding multipart/form-data (Chaitanya Bankanhal) * Add `core/copmmand` to execute rclone terminal commands. 
(Chaitanya Bankanhal) * `rclone check` * Add reporting of filenames for same/missing/changed (Nick Craig-Wood) * Make check command obey `--dry-run`/`-i`/`--interactive` (Nick Craig-Wood) * Make check do `--checkers` files concurrently (Nick Craig-Wood) * Retry downloads if they fail when using the `--download` flag (Nick Craig-Wood) * Make it show stats by default (Nick Craig-Wood) * `rclone obscure`: Allow obscure command to accept password on STDIN (David Ibarra) * `rclone config` * Set RCLONE_CONFIG_DIR for use in config files and subprocesses (Nick Craig-Wood) * Reject remote names starting with a dash. (jtagcat) * `rclone cryptcheck`: Add reporting of filenames for same/missing/changed (Nick Craig-Wood) * `rclone dedupe`: Make it obey the `--size-only` flag for duplicate detection (Nick Craig-Wood) * `rclone link`: Add `--expire` and `--unlink` flags (Roman Kredentser) * `rclone mkdir`: Warn when using mkdir on remotes which can't have empty directories (Nick Craig-Wood) * `rclone rc`: Allow JSON parameters to simplify command line usage (Nick Craig-Wood) * `rclone serve ftp` * Don't compile on < go1.13 after dependency update (Nick Craig-Wood) * Add error message if auth proxy fails (Nick Craig-Wood) * Use refactored goftp.io/server library for binary shrink (Nick Craig-Wood) * `rclone serve restic`: Expose interfaces so that rclone can be used as a library from within restic (Jack) * `rclone sync`: Add `--track-renames-strategy leaf` (Nick Craig-Wood) * `rclone touch`: Add ability to set nanosecond resolution times (Nick Craig-Wood) * `rclone tree`: Remove `-i` shorthand for `--noindent` as it conflicts with `-i`/`--interactive` (Nick Craig-Wood) * Bug Fixes * accounting * Fix documentation for `speed`/`speedAvg` (Nick Craig-Wood) * Fix elapsed time not show actual time since beginning (Chaitanya Bankanhal) * Fix deadlock in stats printing (Nick Craig-Wood) * build * Fix file handle leak in GitHub release tool (Garrett Squire) * `rclone check`: Fix successful retries with `--download` counting errors (Nick Craig-Wood) * `rclone dedupe`: Fix logging to be easier to understand (Nick Craig-Wood) * Mount * Warn macOS users that mount implementation is changing (Nick Craig-Wood) * to test the new implementation use `rclone cmount` instead of `rclone mount` * this is because the library rclone uses has dropped macOS support * rc interface * Add call for unmount all (Chaitanya Bankanhal) * Make `mount/mount` remote control take vfsOpt option (Nick Craig-Wood) * Add mountOpt to `mount/mount` (Nick Craig-Wood) * Add VFS and Mount options to `mount/listmounts` (Nick Craig-Wood) * Catch panics in cgofuse initialization and turn into error messages (Nick Craig-Wood) * Always supply stat information in Readdir (Nick Craig-Wood) * Add support for reading unknown length files using direct IO (Windows) (Nick Craig-Wood) * Fix On Windows don't add `-o uid/gid=-1` if user supplies `-o uid/gid`. 
(Nick Craig-Wood) * Fix macOS losing directory contents in cmount (Nick Craig-Wood) * Fix volume name broken in recent refactor (Nick Craig-Wood) * VFS * Implement partial reads for `--vfs-cache-mode full` (Nick Craig-Wood) * Add `--vfs-writeback` option to delay writes back to cloud storage (Nick Craig-Wood) * Add `--vfs-read-ahead` parameter for use with `--vfs-cache-mode full` (Nick Craig-Wood) * Restart pending uploads on restart of the cache (Nick Craig-Wood) * Support synchronous cache space recovery upon ENOSPC (Leo Luan) * Allow ReadAt and WriteAt to run concurrently with themselves (Nick Craig-Wood) * Change modtime of file before upload to current (Rob Calistri) * Recommend `--vfs-cache-modes writes` on backends which can't stream (Nick Craig-Wood) * Add an optional `fs` parameter to vfs rc methods (Nick Craig-Wood) * Fix errors when using > 260 char files in the cache in Windows (Nick Craig-Wood) * Fix renaming of items while they are being uploaded (Nick Craig-Wood) * Fix very high load caused by slow directory listings (Nick Craig-Wood) * Fix renamed files not being uploaded with `--vfs-cache-mode minimal` (Nick Craig-Wood) * Fix directory locking caused by slow directory listings (Nick Craig-Wood) * Fix saving from chrome without `--vfs-cache-mode writes` (Nick Craig-Wood) * Local * Add `--local-no-updated` to provide a consistent view of changing objects (Nick Craig-Wood) * Add `--local-no-set-modtime` option to prevent modtime changes (tyhuber1) * Fix race conditions updating and reading Object metadata (Nick Craig-Wood) * Cache * Make any created backends be cached to fix rc problems (Nick Craig-Wood) * Fix dedupe on caches wrapping drives (Nick Craig-Wood) * Crypt * Add `--crypt-server-side-across-configs` flag (Nick Craig-Wood) * Make any created backends be cached to fix rc problems (Nick Craig-Wood) * Alias * Make any created backends be cached to fix rc problems (Nick Craig-Wood) * Azure Blob * Don't compile on < go1.13 after dependency update (Nick Craig-Wood) * B2 * Implement server side copy for files > 5GB (Nick Craig-Wood) * Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood) * Note that b2's encoding now allows \ but rclone's hasn't changed (Nick Craig-Wood) * Fix transfers when using download_url (Nick Craig-Wood) * Box * Implement rclone cleanup (buengese) * Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood) * Allow authentication with access token (David) * Chunker * Make any created backends be cached to fix rc problems (Nick Craig-Wood) * Drive * Add `rclone backend drives` to list shared drives (teamdrives) (Nick Craig-Wood) * Implement `rclone backend untrash` (Nick Craig-Wood) * Work around drive bug which didn't set modtime of copied docs (Nick Craig-Wood) * Added `--drive-starred-only` to only show starred files (Jay McEntire) * Deprecate `--drive-alternate-export` as it is no longer needed (themylogin) * Fix duplication of Google docs on server side copy (Nick Craig-Wood) * Fix "panic: send on closed channel" when recycling dir entries (Nick Craig-Wood) * Dropbox * Add copyright detector info in limitations section in the docs (Alex Guerrero) * Fix `rclone link` by removing expires parameter (Nick Craig-Wood) * Fichier * Detect Flood detected: IP Locked error and sleep for 30s (Nick Craig-Wood) * FTP * Add explicit TLS support (Heiko Bornholdt) * Add support for `--dump bodies` and `--dump auth` for debugging (Nick Craig-Wood) * Fix interoperation with pure-ftpd (Nick Craig-Wood) * Google 
Cloud Storage * Add support for anonymous access (Kai Lüke) * Jottacloud * Bring back legacy authentification for use with whitelabel versions (buengese) * Switch to new api root - also implement a very ugly workaround for the DirMove failures (buengese) * Onedrive * Rework cancel of multipart uploads on rclone exit (Nick Craig-Wood) * Implement rclone cleanup (Nick Craig-Wood) * Add `--onedrive-no-versions` flag to remove old versions (Nick Craig-Wood) * Pcloud * Implement `rclone link` for public link creation (buengese) * Qingstor * Cancel in progress multipart uploads on rclone exit (Nick Craig-Wood) * S3 * Preserve metadata when doing multipart copy (Nick Craig-Wood) * Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood) * Add `rclone link` for public link sharing (Roman Kredentser) * Add `rclone backend restore` command to restore objects from GLACIER (Nick Craig-Wood) * Add `rclone cleanup` and `rclone backend cleanup` to clean unfinished multipart uploads (Nick Craig-Wood) * Add `rclone backend list-multipart-uploads` to list unfinished multipart uploads (Nick Craig-Wood) * Add `--s3-max-upload-parts` support (Kamil Trzciński) * Add `--s3-no-check-bucket` for minimising rclone transactions and perms (Nick Craig-Wood) * Add `--s3-profile` and `--s3-shared-credentials-file` options (Nick Craig-Wood) * Use regional s3 us-east-1 endpoint (David) * Add Scaleway provider (Vincent Feltz) * Update IBM COS endpoints (Egor Margineanu) * Reduce the default `--s3-copy-cutoff` to < 5GB for Backblaze S3 compatibility (Nick Craig-Wood) * Fix detection of bucket existing (Nick Craig-Wood) * SFTP * Use the absolute path instead of the relative path for listing for improved compatibility (Nick Craig-Wood) * Add `--sftp-subsystem` and `--sftp-server-command` options (aus) * Swift * Fix dangling large objects breaking the listing (Nick Craig-Wood) * Fix purge not deleting directory markers (Nick Craig-Wood) * Fix update multipart object removing all of its own parts (Nick Craig-Wood) * Fix missing hash from object returned from upload (Nick Craig-Wood) * Tardigrade * Upgrade to uplink v1.2.0 (Kaloyan Raev) * Union * Fix writing with the all policy (Nick Craig-Wood) * WebDAV * Fix directory creation with 4shared (Nick Craig-Wood) ## v1.52.3 - 2020-08-07 [See commits](https://github.com/rclone/rclone/compare/v1.52.2...v1.52.3) * Bug Fixes * docs * Disable smart typography (eg en-dash) in MANUAL.* and man page (Nick Craig-Wood) * Update install.md to reflect minimum Go version (Evan Harris) * Update install from source instructions (Nick Craig-Wood) * make_manual: Support SOURCE_DATE_EPOCH (Morten Linderud) * log: Fix --use-json-log going to stderr not --log-file on Windows (Nick Craig-Wood) * serve dlna: Fix file list on Samsung Series 6+ TVs (Matteo Pietro Dazzi) * sync: Fix deadlock with --track-renames-strategy modtime (Nick Craig-Wood) * Cache * Fix moveto/copyto remote:file remote:file2 (Nick Craig-Wood) * Drive * Stop using root_folder_id as a cache (Nick Craig-Wood) * Make dangling shortcuts appear in listings (Nick Craig-Wood) * Drop "Disabling ListR" messages down to debug (Nick Craig-Wood) * Workaround and policy for Google Drive API (Dmitry Ustalov) * FTP * Add note to docs about home vs root directory selection (Nick Craig-Wood) * Onedrive * Fix reverting to Copy when Move would have worked (Nick Craig-Wood) * Avoid comma rendered in URL in onedrive.md (Kevin) * Pcloud * Fix oauth on European region "eapi.pcloud.com" (Nick Craig-Wood) * S3 * Fix bucket Region auto 
detection when Region unset in config (Nick Craig-Wood) ## v1.52.2 - 2020-06-24 [See commits](https://github.com/rclone/rclone/compare/v1.52.1...v1.52.2) * Bug Fixes * build * Fix docker release build action (Nick Craig-Wood) * Fix custom timezone in Docker image (NoLooseEnds) * check: Fix misleading message which printed errors instead of differences (Nick Craig-Wood) * errors: Add WSAECONNREFUSED and more to the list of retriable Windows errors (Nick Craig-Wood) * rcd: Fix incorrect prometheus metrics (Gary Kim) * serve restic: Fix flags so they use environment variables (Nick Craig-Wood) * serve webdav: Fix flags so they use environment variables (Nick Craig-Wood) * sync: Fix --track-renames-strategy modtime (Nick Craig-Wood) * Drive * Fix not being able to delete a directory with a trashed shortcut (Nick Craig-Wood) * Fix creating a directory inside a shortcut (Nick Craig-Wood) * Fix --drive-impersonate with cached root_folder_id (Nick Craig-Wood) * SFTP * Fix SSH key PEM loading (Zac Rubin) * Swift * Speed up deletes by not retrying segment container deletes (Nick Craig-Wood) * Tardigrade * Upgrade to uplink v1.1.1 (Caleb Case) * WebDAV * Fix free/used display for rclone about/df for certain backends (Nick Craig-Wood) ## v1.52.1 - 2020-06-10 [See commits](https://github.com/rclone/rclone/compare/v1.52.0...v1.52.1) * Bug Fixes * lib/file: Fix SetSparse on Windows 7 which fixes downloads of files > 250MB (Nick Craig-Wood) * build * Update go.mod to go1.14 to enable -mod=vendor build (Nick Craig-Wood) * Remove quicktest from Dockerfile (Nick Craig-Wood) * Build Docker images with GitHub actions (Matteo Pietro Dazzi) * Update Docker build workflows (Nick Craig-Wood) * Set user_allow_other in /etc/fuse.conf in the Docker image (Nick Craig-Wood) * Fix xgo build after go1.14 go.mod update (Nick Craig-Wood) * docs * Add link to source and modified time to footer of every page (Nick Craig-Wood) * Remove manually set dates and use git dates instead (Nick Craig-Wood) * Minor tense, punctuation, brevity and positivity changes for the home page (edwardxml) * Remove leading slash in page reference in footer when present (Nick Craig-Wood) * Note commands which need obscured input in the docs (Nick Craig-Wood) * obscure: Write more help as we are referencing it elsewhere (Nick Craig-Wood) * VFS * Fix OS vs Unix path confusion - fixes ChangeNotify on Windows (Nick Craig-Wood) * Drive * Fix missing items when listing using --fast-list / ListR (Nick Craig-Wood) * Putio * Fix panic on Object.Open (Cenk Alti) * S3 * Fix upload of single files into buckets without create permission (Nick Craig-Wood) * Fix --header-upload (Nick Craig-Wood) * Tardigrade * Fix listing bug by upgrading to v1.0.7 * Set UserAgent to rclone (Caleb Case) ## v1.52.0 - 2020-05-27 Special thanks to Martin Michlmayr for proof reading and correcting all the docs and Edward Barker for helping re-write the front page. 
[See commits](https://github.com/rclone/rclone/compare/v1.51.0...v1.52.0) * New backends * [Tardigrade](/tardigrade/) backend for use with storj.io (Caleb Case) * [Union](/union/) re-write to have multiple writable remotes (Max Sum) * [Seafile](/seafile) for Seafile server (Fred @creativeprojects) * New commands * backend: command for backend specific commands (see backends) (Nick Craig-Wood) * cachestats: Deprecate in favour of `rclone backend stats cache:` (Nick Craig-Wood) * dbhashsum: Deprecate in favour of `rclone hashsum DropboxHash` (Nick Craig-Wood) * New Features * Add `--header-download` and `--header-upload` flags for setting HTTP headers when uploading/downloading (Tim Gallant) * Add `--header` flag to add HTTP headers to every HTTP transaction (Nick Craig-Wood) * Add `--check-first` to do all checking before starting transfers (Nick Craig-Wood) * Add `--track-renames-strategy` for configurable matching criteria for `--track-renames` (Bernd Schoolmann) * Add `--cutoff-mode` hard,soft,catious (Shing Kit Chan & Franklyn Tackitt) * Filter flags (eg `--files-from -`) can read from stdin (fishbullet) * Add `--error-on-no-transfer` option (Jon Fautley) * Implement `--order-by xxx,mixed` for copying some small and some big files (Nick Craig-Wood) * Allow `--max-backlog` to be negative meaning as large as possible (Nick Craig-Wood) * Added `--no-unicode-normalization` flag to allow Unicode filenames to remain unique (Ben Zenker) * Allow `--min-age`/`--max-age` to take a date as well as a duration (Nick Craig-Wood) * Add rename statistics for file and directory renames (Nick Craig-Wood) * Add statistics output to JSON log (reddi) * Make stats be printed on non-zero exit code (Nick Craig-Wood) * When running `--password-command` allow use of stdin (Sébastien Gross) * Stop empty strings being a valid remote path (Nick Craig-Wood) * accounting: support WriterTo for less memory copying (Nick Craig-Wood) * build * Update to use go1.14 for the build (Nick Craig-Wood) * Add `-trimpath` to release build for reproduceable builds (Nick Craig-Wood) * Remove GOOS and GOARCH from Dockerfile (Brandon Philips) * config * Fsync the config file after writing to save more reliably (Nick Craig-Wood) * Add `--obscure` and `--no-obscure` flags to `config create`/`update` (Nick Craig-Wood) * Make `config show` take `remote:` as well as `remote` (Nick Craig-Wood) * copyurl: Add `--no-clobber` flag (Denis) * delete: Added `--rmdirs` flag to delete directories as well (Kush) * filter: Added `--files-from-raw` flag (Ankur Gupta) * genautocomplete: Add support for fish shell (Matan Rosenberg) * log: Add support for syslog LOCAL facilities (Patryk Jakuszew) * lsjson: Add `--hash-type` parameter and use it in lsf to speed up hashing (Nick Craig-Wood) * rc * Add `-o`/`--opt` and `-a`/`--arg` for more structured input (Nick Craig-Wood) * Implement `backend/command` for running backend specific commands remotely (Nick Craig-Wood) * Add `mount/mount` command for starting `rclone mount` via the API (Chaitanya) * rcd: Add Prometheus metrics support (Gary Kim) * serve http * Added a `--template` flag for user defined markup (calistri) * Add Last-Modified headers to files and directories (Nick Craig-Wood) * serve sftp: Add support for multiple host keys by repeating `--key` flag (Maxime Suret) * touch: Add `--localtime` flag to make `--timestamp` localtime not UTC (Nick Craig-Wood) * Bug Fixes * accounting * Restore "Max number of stats groups reached" log line (Michał Matczuk) * Correct exitcode on Transfer Limit Exceeded 
flag. (Anuar Serdaliyev) * Reset bytes read during copy retry (Ankur Gupta) * Fix race clearing stats (Nick Craig-Wood) * copy: Only create empty directories when they don't exist on the remote (Ishuah Kariuki) * dedupe: Stop dedupe deleting files with identical IDs (Nick Craig-Wood) * oauth * Use custom http client so that `--no-check-certificate` is honored by oauth token fetch (Mark Spieth) * Replace deprecated oauth2.NoContext (Lars Lehtonen) * operations * Fix setting the timestamp on Windows for multithread copy (Nick Craig-Wood) * Make rcat obey `--ignore-checksum` (Nick Craig-Wood) * Make `--max-transfer` more accurate (Nick Craig-Wood) * rc * Fix dropped error (Lars Lehtonen) * Fix misplaced http server config (Xiaoxing Ye) * Disable duplicate log (ElonH) * serve dlna * Cds: don't specify childCount at all when unknown (Dan Walters) * Cds: use modification time as date in dlna metadata (Dan Walters) * serve restic: Fix tests after restic project removed vendoring (Nick Craig-Wood) * sync * Fix incorrect "nothing to transfer" message using `--delete-before` (Nick Craig-Wood) * Only create empty directories when they don't exist on the remote (Ishuah Kariuki) * Mount * Add `--async-read` flag to disable asynchronous reads (Nick Craig-Wood) * Ignore `--allow-root` flag with a warning as it has been removed upstream (Nick Craig-Wood) * Warn if `--allow-non-empty` used on Windows and clarify docs (Nick Craig-Wood) * Constrain to go1.13 or above otherwise bazil.org/fuse fails to compile (Nick Craig-Wood) * Fix fail because of too long volume name (evileye) * Report 1PB free for unknown disk sizes (Nick Craig-Wood) * Map more rclone errors into file systems errors (Nick Craig-Wood) * Fix disappearing cwd problem (Nick Craig-Wood) * Use ReaddirPlus on Windows to improve directory listing performance (Nick Craig-Wood) * Send a hint as to whether the filesystem is case insensitive or not (Nick Craig-Wood) * Add rc command `mount/types` (Nick Craig-Wood) * Change maximum leaf name length to 1024 bytes (Nick Craig-Wood) * VFS * Add `--vfs-read-wait` and `--vfs-write-wait` flags to control time waiting for a sequential read/write (Nick Craig-Wood) * Change default `--vfs-read-wait` to 20ms (it was 5ms and not configurable) (Nick Craig-Wood) * Make `df` output more consistent on a rclone mount. 
(Yves G) * Report 1PB free for unknown disk sizes (Nick Craig-Wood) * Fix race condition caused by unlocked reading of Dir.path (Nick Craig-Wood) * Make File lock and Dir lock not overlap to avoid deadlock (Nick Craig-Wood) * Implement lock ordering between File and Dir to eliminate deadlocks (Nick Craig-Wood) * Factor the vfs cache into its own package (Nick Craig-Wood) * Pin the Fs in use in the Fs cache (Nick Craig-Wood) * Add SetSys() methods to Node to allow caching stuff on a node (Nick Craig-Wood) * Ignore file not found errors from Hash in Read.Release (Nick Craig-Wood) * Fix hang in read wait code (Nick Craig-Wood) * Local * Speed up multi thread downloads by using sparse files on Windows (Nick Craig-Wood) * Implement `--local-no-sparse` flag for disabling sparse files (Nick Craig-Wood) * Implement `rclone backend noop` for testing purposes (Nick Craig-Wood) * Fix "file not found" errors on post transfer Hash calculation (Nick Craig-Wood) * Cache * Implement `rclone backend stats` command (Nick Craig-Wood) * Fix Server Side Copy with Temp Upload (Brandon McNama) * Remove Unused Functions (Lars Lehtonen) * Disable race tests until bbolt is fixed (Nick Craig-Wood) * Move methods used for testing into test file (greatroar) * Add Pin and Unpin and canonicalised lookup (Nick Craig-Wood) * Use proper import path go.etcd.io/bbolt (Robert-André Mauchin) * Crypt * Calculate hashes for uploads from local disk (Nick Craig-Wood) * This allows crypted Jottacloud uploads without using local disk * This means crypted s3/b2 uploads will now have hashes * Added `rclone backend decode`/`encode` commands to replicate functionality of `cryptdecode` (Anagh Kumar Baranwal) * Get rid of the unused Cipher interface as it obfuscated the code (Nick Craig-Wood) * Azure Blob * Implement streaming of unknown sized files so `rcat` is now supported (Nick Craig-Wood) * Implement memory pooling to control memory use (Nick Craig-Wood) * Add `--azureblob-disable-checksum` flag (Nick Craig-Wood) * Retry `InvalidBlobOrBlock` error as it may indicate block concurrency problems (Nick Craig-Wood) * Remove unused `Object.parseTimeString()` (Lars Lehtonen) * Fix permission error on SAS URL limited to container (Nick Craig-Wood) * B2 * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Ignore directory markers at the root also (Nick Craig-Wood) * Force the case of the SHA1 to lowercase (Nick Craig-Wood) * Remove unused `largeUpload.clearUploadURL()` (Lars Lehtonen) * Box * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Implement About to read size used (Nick Craig-Wood) * Add token renew function for jwt auth (David Bramwell) * Added support for interchangeable root folder for Box backend (Sunil Patra) * Remove unnecessary iat from jws claims (David) * Drive * Follow shortcuts by default, skip with `--drive-skip-shortcuts` (Nick Craig-Wood) * Implement `rclone backend shortcut` command for creating shortcuts (Nick Craig-Wood) * Added `rclone backend` command to change `service_account_file` and `chunk_size` (Anagh Kumar Baranwal) * Fix missing files when using `--fast-list` and `--drive-shared-with-me` (Nick Craig-Wood) * Fix duplicate items when using `--drive-shared-with-me` (Nick Craig-Wood) * Extend `--drive-stop-on-upload-limit` to respond to `teamDriveFileLimitExceeded`. 
(harry) * Don't delete files with multiple parents to avoid data loss (Nick Craig-Wood) * Server side copy docs use default description if empty (Nick Craig-Wood) * Dropbox * Make error insufficient space to be fatal (harry) * Add info about required redirect url (Elan Ruusamäe) * Fichier * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Implement custom pacer to deal with the new rate limiting (buengese) * FTP * Fix lockup when using concurrency limit on failed connections (Nick Craig-Wood) * Fix lockup on failed upload when using concurrency limit (Nick Craig-Wood) * Fix lockup on Close failures when using concurrency limit (Nick Craig-Wood) * Work around pureftp sending spurious 150 messages (Nick Craig-Wood) * Google Cloud Storage * Add support for `--header-upload` and `--header-download` (Nick Craig-Wood) * Add `ARCHIVE` storage class to help (Adam Stroud) * Ignore directory markers at the root (Nick Craig-Wood) * Googlephotos * Make the start year configurable (Daven) * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Create feature/favorites directory (Brandon Philips) * Fix "concurrent map write" error (Nick Craig-Wood) * Don't put an image in error message (Nick Craig-Wood) * HTTP * Improved directory listing with new template from Caddy project (calisro) * Jottacloud * Implement `--jottacloud-trashed-only` (buengese) * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Use `RawURLEncoding` when decoding base64 encoded login token (buengese) * Implement cleanup (buengese) * Update docs regarding cleanup, removed remains from old auth, and added warning about special mountpoints. (albertony) * Mailru * Describe 2FA requirements (valery1707) * Onedrive * Implement `--onedrive-server-side-across-configs` (Nick Craig-Wood) * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Fix occasional 416 errors on multipart uploads (Nick Craig-Wood) * Added maximum chunk size limit warning in the docs (Harry) * Fix missing drive on config (Nick Craig-Wood) * Make error `quotaLimitReached` to be fatal (harry) * Opendrive * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Pcloud * Added support for interchangeable root folder for pCloud backend (Sunil Patra) * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Fix initial config "Auth state doesn't match" message (Nick Craig-Wood) * Premiumizeme * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Prune unused functions (Lars Lehtonen) * Putio * Add support for `--header-upload` and `--header-download` (Nick Craig-Wood) * Make downloading files use the rclone http Client (Nick Craig-Wood) * Fix parsing of remotes with leading and trailing / (Nick Craig-Wood) * Qingstor * Make `rclone cleanup` remove pending multipart uploads older than 24h (Nick Craig-Wood) * Try harder to cancel failed multipart uploads (Nick Craig-Wood) * Prune `multiUploader.list()` (Lars Lehtonen) * Lint fix (Lars Lehtonen) * S3 * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Use memory pool for buffer allocations (Maciej Zimnoch) * Add SSE-C support for AWS, Ceph, and MinIO (Jack Anderson) * Fail fast multipart upload (Michał Matczuk) * Report errors on bucket creation (mkdir) correctly (Nick Craig-Wood) * Specify that Minio supports URL encoding in listings (Nick Craig-Wood) * Added 500 as retryErrorCode (Michał Matczuk) * Use `--low-level-retries` as the number of SDK retries 
* Seafile * Implement 2FA (Fred) * SFTP * Added `--sftp-pem-key` to support inline key files (calisro) * Fix post transfer copies failing with 0 size when using `set_modtime=false` (Nick Craig-Wood) * Sharefile * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Sugarsync * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Swift * Add support for `--header-upload` and `--header-download` (Nick Craig-Wood) * Fix cosmetic issue in error message (Martin Michlmayr) * Union * Implement multiple writable remotes (Max Sum) * Fix server-side copy (Max Sum) * Implement ListR (Max Sum) * Enable ListR when upstreams contain local (Max Sum) * WebDAV * Add support for `--header-upload` and `--header-download` (Tim Gallant) * Fix `X-OC-Mtime` header for Transip compatibility (Nick Craig-Wood) * Report full and consistent usage with `about` (Yves G) * Yandex * Add support for `--header-upload` and `--header-download` (Tim Gallant) ## v1.51.0 - 2020-02-01 * New backends * [Memory](/memory/) (Nick Craig-Wood) * [Sugarsync](/sugarsync/) (Nick Craig-Wood) * New Features * Adjust all backends to have `--backend-encoding` parameter (Nick Craig-Wood) * this enables the encoding for special characters to be adjusted or disabled * Add `--max-duration` flag to control the maximum duration of a transfer session (boosh) * Add `--expect-continue-timeout` flag, default 1s (Nick Craig-Wood) * Add `--no-check-dest` flag for copying without testing the destination (Nick Craig-Wood) * Implement `--order-by` flag to order transfers (Nick Craig-Wood) (see the example below) * accounting * Don't show entries in both transferring and checking (Nick Craig-Wood) * Add option to delete stats (Aleksandar Jankovic) * build * Compress the test builds with gzip (Nick Craig-Wood) * Implement a framework for starting test servers during tests (Nick Craig-Wood) * cmd: Always print elapsed time to tenth place seconds in progress (Gary Kim) * config * Add `--password-command` to allow dynamic config password (Damon Permezel) * Give config questions default values (Nick Craig-Wood) * Check a remote exists when creating a new one (Nick Craig-Wood) * copyurl: Add `--stdout` flag to write to stdout (Nick Craig-Wood) * dedupe: Implement keep smallest too (Nick Craig-Wood) * hashsum: Add `--base64` flag (landall) * lsf: Speed up on s3/swift/etc by not reading mimetype by default (Nick Craig-Wood) * lsjson: Add `--no-mimetype` flag (Nick Craig-Wood) * rc: Add methods to turn on blocking and mutex profiling (Nick Craig-Wood) * rcd * Add group parameter to stats (Chaitanya) * Move webgui apart; add option to disable browser (Xiaoxing Ye) * serve sftp: Add support for public key with auth proxy (Paul Tinsley) * stats: Show deletes in stats and hide zero stats (anuar45)
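A hedged sketch of the new transfer shaping flags (`--order-by` and `--max-duration`), with placeholder remote names:

```
# Send the largest files first and stop a sync session after 30
# minutes. "remote:dir" is illustrative only; see the docs for the
# full --order-by syntax (name/size/modtime plus asc/desc/mixed).
rclone sync /local/dir remote:dir --order-by size,desc --max-duration 30m
```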
* Bug Fixes * accounting * Fix error counter counting multiple times (Ankur Gupta) * Fix error count shown as checks (Cnly) * Clear finished transfer in stats-reset (Maciej Zimnoch) * Added StatsInfo locking in statsGroups sum function (Michał Matczuk) * asyncreader: Fix EOF error (buengese) * check: Fix `--one-way` recursing more directories than it needs to (Nick Craig-Wood) * chunkedreader: Disable hash calculation for first segment (Nick Craig-Wood) * config * Do not open browser on headless on drive/gcs/google photos (Xiaoxing Ye) * SetValueAndSave ignore error if config section does not exist yet (buengese) * cmd: Fix completion with an encrypted config (Danil Semelenov) * dbhashsum: Stop it returning UNSUPPORTED on dropbox (Nick Craig-Wood) * dedupe: Add missing modes to help string (Nick Craig-Wood) * operations * Fix dedupe continuing on errors like insufficientFilePermissions (SezalAgrawal) * Clear accounting before low level retry (Maciej Zimnoch) * Write debug message when hashes could not be checked (Ole Schütt) * Move interface assertion to tests to remove pflag dependency (Nick Craig-Wood) * Make NewOverrideObjectInfo public and factor uses (Nick Craig-Wood) * proxy: Replace use of bcrypt with sha256 (Nick Craig-Wood) * vendor * Update bazil.org/fuse to fix FreeBSD 12.1 (Nick Craig-Wood) * Update github.com/t3rm1n4l/go-mega to fix mega "illegal base64 data at input byte 22" (Nick Craig-Wood) * Update termbox-go to fix ncdu command on FreeBSD (Kuang-che Wu) * Update t3rm1n4l/go-mega - fixes mega: couldn't login: crypto/aes: invalid key size 0 (Nick Craig-Wood) * Mount * Enable async reads for a 20% speedup (Nick Craig-Wood) * Replace use of WriteAt with Write for cache mode >= writes and O_APPEND (Brett Dutro) * Make sure we call unmount when exiting (Nick Craig-Wood) * Don't build on go1.10 as bazil/fuse no longer supports it (Nick Craig-Wood) * When setting dates discard out of range dates (Nick Craig-Wood) * VFS * Add a newly created file straight into the directory (Nick Craig-Wood) * Only calculate one hash for reads for a speedup (Nick Craig-Wood) * Make ReadAt for non cached files work better with non-sequential reads (Nick Craig-Wood) * Fix edge cases when reading ModTime from file (Nick Craig-Wood) * Make sure existing files opened for write show correct size (Nick Craig-Wood) * Don't cache the path in RW file objects to fix renaming (Nick Craig-Wood) * Fix rename of open files when using the VFS cache (Nick Craig-Wood) * When renaming files in the cache, rename the cache item in memory too (Nick Craig-Wood) * Fix open file renaming on drive when using `--vfs-cache-mode writes` (Nick Craig-Wood) * Fix incorrect modtime for mv into mount with `--vfs-cache-mode writes` (Nick Craig-Wood) * On rename, rename in cache too if the file exists (Anagh Kumar Baranwal) * Local * Make source file being updated errors be NoLowLevelRetry errors (Nick Craig-Wood) * Fix update of hidden files on Windows (Nick Craig-Wood) * Cache * Follow move of upstream library github.com/coreos/bbolt to github.com/etcd-io/bbolt (Nick Craig-Wood) * Fix `fatal error: concurrent map writes` (Nick Craig-Wood) * Crypt * Reorder the filename encryption options (Thomas Eales) * Correctly handle trailing dot (buengese) * Chunker * Reduce length of temporary suffix (Ivan Andreev) * Drive * Add `--drive-stop-on-upload-limit` flag to stop syncs when upload limit reached (Nick Craig-Wood) * Add `--drive-use-shared-date` to use date file was shared instead of modified date (Garry McNulty) * Make sure invalid auth for teamdrives always reports an error (Nick Craig-Wood) * Fix `--fast-list` when using appDataFolder (Nick Craig-Wood)
* Use multipart resumable uploads for streaming and uploads in mount (Nick Craig-Wood) * Log an ERROR if an incomplete search is returned (Nick Craig-Wood) * Hide dangerous config from the configurator (Nick Craig-Wood) * Dropbox * Treat `insufficient_space` errors as non retriable errors (Nick Craig-Wood) * Jottacloud * Use new auth method used by official client (buengese) * Add URL to generate Login Token to config wizard (Nick Craig-Wood) * Add support for whitelabel versions (buengese) * Koofr * Use rclone HTTP client. (jaKa) * Onedrive * Add Sites.Read.All permission (Benjamin Richter) * Add support for "Retry-After" header (Motonori IWAMURO) * Opendrive * Implement `--opendrive-chunk-size` (Nick Craig-Wood) * S3 * Re-implement multipart upload to fix memory issues (Nick Craig-Wood) * Add `--s3-copy-cutoff` for size to switch to multipart copy (Nick Craig-Wood) * Add new region Asia Pacific (Hong Kong) (Outvi V) * Reduce memory usage streaming files by reducing max stream upload size (Nick Craig-Wood) * Add `--s3-list-chunk` option for bucket listing (Thomas Kriechbaumer) * Force path style bucket access to off for AWS deprecation (Nick Craig-Wood) * Use AWS web identity role provider if available (Tennix) * Add StackPath Object Storage Support (Dave Koston) * Fix ExpiryWindow value (Aleksandar Jankovic) * Fix DisableChecksum condition (Aleksandar Janković) * Fix URL decoding of NextMarker (Nick Craig-Wood) * SFTP * Add `--sftp-skip-links` to skip symlinks and non regular files (Nick Craig-Wood) * Retry Creation of Connection (Sebastian Brandt) * Fix "failed to parse private key file: ssh: not an encrypted key" error (Nick Craig-Wood) * Open files for update write only to fix AWS SFTP interop (Nick Craig-Wood) * Swift * Reserve segments of dynamic large objects when deleting objects in a container with versioning enabled (Nguyễn Hữu Luân) * Fix parsing of X-Object-Manifest (Nick Craig-Wood) * Update OVH API endpoint (unbelauscht) * WebDAV * Make nextcloud only upload SHA1 checksums (Nick Craig-Wood) * Fix case of "Bearer" in Authorization: header to agree with RFC (Nick Craig-Wood) * Add Referer header to fix problems with WAFs (Nick Craig-Wood) ## v1.50.2 - 2019-11-19 * Bug Fixes * accounting: Fix memory leak on retries operations (Nick Craig-Wood) * Drive * Fix listing of the root directory with drive.files scope (Nick Craig-Wood) * Fix --drive-root-folder-id with team/shared drives (Nick Craig-Wood) ## v1.50.1 - 2019-11-02 * Bug Fixes * hash: Fix accidentally changed hash names for `DropboxHash` and `CRC-32` (Nick Craig-Wood) * fshttp: Fix error reporting on tpslimit token bucket errors (Nick Craig-Wood) * fshttp: Don't print token bucket errors on context cancelled (Nick Craig-Wood) * Local * Fix listings of . on Windows (Nick Craig-Wood) * Onedrive * Fix DirMove/Move after Onedrive change (Xiaoxing Ye) ## v1.50.0 - 2019-10-26 * New backends * [Citrix Sharefile](/sharefile/) (Nick Craig-Wood) * [Chunker](/chunker/) - an overlay backend to split files into smaller parts (Ivan Andreev) * [Mail.ru Cloud](/mailru/) (Ivan Andreev) * New Features * encodings (Fabian Möller & Nick Craig-Wood) * All backends now use file name encoding to ensure any file name can be written to any backend. * See the [restricted file name docs](/overview/#restricted-filenames) for more info and the [local backend docs](/local/#filenames). * Some file names may look different in rclone if you are using any control characters in names or [unicode FULLWIDTH symbols](https://en.wikipedia.org/wiki/Halfwidth_and_Fullwidth_Forms_(Unicode_block)) (see the example below).
* build * Update to use go1.13 for the build (Nick Craig-Wood) * Drop support for go1.9 (Nick Craig-Wood) * Build rclone with GitHub actions (Nick Craig-Wood) * Convert python scripts to python3 (Nick Craig-Wood) * Swap Azure/go-ansiterm for mattn/go-colorable (Nick Craig-Wood) * Dockerfile fixes (Matei David) * Add [plugin support](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#writing-a-plugin) for backends and commands (Richard Patel) * config * Use alternating Red/Green in config to make it more obvious (Nick Craig-Wood) * contrib * Add sample DLNA server Docker Compose manifest. (pataquets) * Add sample WebDAV server Docker Compose manifest. (pataquets) * copyurl * Add `--auto-filename` flag for using file name from URL in destination path (Denis) (see the example below) * serve dlna: * Many compatibility improvements (Dan Walters) * Support for external srt subtitles (Dan Walters) * rc * Added command core/quit (Saksham Khanna)
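A short sketch of the `--auto-filename` flag added above (the URL and remote are placeholders):

```
# Download a URL, naming the destination file after the last path
# segment of the URL. The URL and "remote:backups" are illustrative.
rclone copyurl --auto-filename https://example.com/releases/file.zip remote:backups
```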
* Bug Fixes * sync * Make `--update`/`-u` not transfer files that haven't changed (Nick Craig-Wood) * Free objects after they come out of the transfer pipe to save memory (Nick Craig-Wood) * Fix `--files-from` without `--no-traverse` doing a recursive scan (Nick Craig-Wood) * operations * Fix accounting for server side copies (Nick Craig-Wood) * Display 'All duplicates removed' only if dedupe successful (Sezal Agrawal) * Display 'Deleted X extra copies' only if dedupe successful (Sezal Agrawal) * accounting * Only allow up to 100 completed transfers in the accounting list to save memory (Nick Craig-Wood) * Cull the old time ranges when possible to save memory (Nick Craig-Wood) * Fix panic due to server-side copy fallback (Ivan Andreev) * Fix memory leak noticeable for transfers of large numbers of objects (Nick Craig-Wood) * Fix total duration calculation (Nick Craig-Wood) * cmd * Fix environment variables not setting command line flags (Nick Craig-Wood) * Make autocomplete compatible with bash's posix mode for macOS (Danil Semelenov) * Make `--progress` work in git bash on Windows (Nick Craig-Wood) * Fix 'compopt: command not found' on autocomplete on macOS (Danil Semelenov) * config * Fix setting of non top level flags from environment variables (Nick Craig-Wood) * Check config names more carefully and report errors (Nick Craig-Wood) * Remove error: can't use `--size-only` and `--ignore-size` together. (Nick Craig-Wood) * filter: Prevent mixing options when `--files-from` is in use (Michele Caci) * serve sftp: Fix crash on unsupported operations (eg Readlink) (Nick Craig-Wood) * Mount * Allow files of unknown size to be read properly (Nick Craig-Wood) * Skip tests on <= 2 CPUs to avoid lockup (Nick Craig-Wood) * Fix panic on File.Open (Nick Craig-Wood) * Fix "mount_fusefs: -o timeout=: option not supported" on FreeBSD (Nick Craig-Wood) * Don't pass huge filenames (>4k) to FUSE as it can't cope (Nick Craig-Wood) * VFS * Add flag `--vfs-case-insensitive` for windows/macOS mounts (Ivan Andreev) * Make objects of unknown size readable through the VFS (Nick Craig-Wood) * Move writeback of dirty data out of close() method into its own method (FlushWrites) and remove close() call from Flush() (Brett Dutro) * Stop empty dirs disappearing when renamed on bucket based remotes (Nick Craig-Wood) * Stop change notify polling clearing so much of the directory cache (Nick Craig-Wood) * Azure Blob * Disable logging to the Windows event log (Nick Craig-Wood) * B2 * Remove `unverified:` prefix on sha1 to improve interop (eg with CyberDuck) (Nick Craig-Wood) * Box * Add options to get access token via JWT auth (David) * Drive * Disable HTTP/2 by default to work around INTERNAL_ERROR problems (Nick Craig-Wood) * Make sure that drive root ID is always canonical (Nick Craig-Wood) * Fix `--drive-shared-with-me` from the root with lsd and `--fast-list` (Nick Craig-Wood) * Fix ChangeNotify polling for shared drives (Nick Craig-Wood) * Fix change notify polling when using appDataFolder (Nick Craig-Wood) * Dropbox * Make disallowed filenames errors not retry (Nick Craig-Wood) * Fix nil pointer exception on restricted files (Nick Craig-Wood) * Fichier * Fix accessing files > 2GB on 32 bit systems (Nick Craig-Wood) * FTP * Allow disabling EPSV mode (Jon Fautley) * HTTP * HEAD directory entries in parallel to speed up listings (Nick Craig-Wood) * Add `--http-no-head` to stop rclone doing HEAD in listings (Nick Craig-Wood) * Putio * Add ability to resume uploads (Cenk Alti) * S3 * Fix signature v2_auth headers (Anthony Rusdi) * Fix encoding for control characters (Nick Craig-Wood) * Only ask for URL encoded directory listings if we need them on Ceph (Nick Craig-Wood) * Add option for multipart failure behaviour (Aleksandar Jankovic) * Support for multipart copy (庄天翼) * Fix nil pointer reference if no metadata returned for object (Nick Craig-Wood) * SFTP * Fix `--sftp-ask-password` trying to contact the ssh agent (Nick Craig-Wood) * Fix hashes of files with backslashes (Nick Craig-Wood) * Include more ciphers with `--sftp-use-insecure-cipher` (Carlos Ferreyra) * WebDAV * Parse and return Sharepoint error response (Henning Surmeier) ## v1.49.5 - 2019-10-05 * Bug Fixes * Revert back to go1.12.x for the v1.49.x builds as go1.13.x was causing issues (Nick Craig-Wood) * Fix rpm packages by using master builds of nfpm (Nick Craig-Wood) * Fix macOS build after brew changes (Nick Craig-Wood) ## v1.49.4 - 2019-09-29 * Bug Fixes * cmd/rcd: Address ZipSlip vulnerability (Richard Patel) * accounting: Fix file handle leak on errors (Nick Craig-Wood) * oauthutil: Fix security problem when running with two users on the same machine (Nick Craig-Wood) * FTP * Fix listing of an empty root returning: error dir not found (Nick Craig-Wood) * S3 * Fix SetModTime on GLACIER/ARCHIVE objects and implement set/get tier (Nick Craig-Wood) ## v1.49.3 - 2019-09-15 * Bug Fixes * accounting * Fix total duration calculation (Aleksandar Jankovic) * Fix "file
already closed" on transfer retries (Nick Craig-Wood) ## v1.49.2 - 2019-09-08 * New Features * build: Add Docker workflow support (Alfonso Montero) * Bug Fixes * accounting: Fix locking in Transfer to avoid deadlock with `--progress` (Nick Craig-Wood) * docs: Fix template argument for mktemp in install.sh (Cnly) * operations: Fix `-u`/`--update` with google photos / files of unknown size (Nick Craig-Wood) * rc: Fix docs for config/create /update /password (Nick Craig-Wood) * Google Cloud Storage * Fix need for elevated permissions on SetModTime (Nick Craig-Wood) ## v1.49.1 - 2019-08-28 * Bug Fixes * config: Fix generated passwords being stored as empty password (Nick Craig-Wood) * rcd: Added missing parameter for web-gui info logs. (Chaitanya) * Googlephotos * Fix crash on error response (Nick Craig-Wood) * Onedrive * Fix crash on error response (Nick Craig-Wood) ## v1.49.0 - 2019-08-26 * New backends * [1fichier](/fichier/) (Laura Hausmann) * [Google Photos](/googlephotos/) (Nick Craig-Wood) * [Putio](/putio/) (Cenk Alti) * [premiumize.me](/premiumizeme/) (Nick Craig-Wood) * New Features * Experimental [web GUI](/gui/) (Chaitanya Bankanhal) * Implement `--compare-dest` & `--copy-dest` (yparitcher) * Implement `--suffix` without `--backup-dir` for backup to current dir (yparitcher) * `config reconnect` to re-login (re-run the oauth login) for the backend. (Nick Craig-Wood) * `config userinfo` to discover which user you are logged in as. (Nick Craig-Wood) * `config disconnect` to disconnect you (log out) from the backend. (Nick Craig-Wood) * Add `--use-json-log` for JSON logging (justinalin) * Add context propagation to rclone (Aleksandar Jankovic) * Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic) * Add Higher units for ETA (AbelThar) * Update rclone logos to new design (Andreas Chlupka) * hash: Add CRC-32 support (Cenk Alti) * help showbackend: Fixed advanced option category when there are no standard options (buengese) * ncdu: Display/Copy to Clipboard Current Path (Gary Kim) * operations: * Run hashing operations in parallel (Nick Craig-Wood) * Don't calculate checksums when using `--ignore-checksum` (Nick Craig-Wood) * Check transfer hashes when using `--size-only` mode (Nick Craig-Wood) * Disable multi thread copy for local to local copies (Nick Craig-Wood) * Debug successful hashes as well as failures (Nick Craig-Wood) * rc * Add ability to stop async jobs (Aleksandar Jankovic) * Return current settings if core/bwlimit called without parameters (Nick Craig-Wood) * Rclone-WebUI integration with rclone (Chaitanya Bankanhal) * Added command line parameter to control the cross origin resource sharing (CORS) in the rcd. 
(Security Improvement) (Chaitanya Bankanhal) * Add anchor tags to the docs so links are consistent (Nick Craig-Wood) * Remove _async key from input parameters after parsing so later operations won't get confused (buengese) * Add call to clear stats (Aleksandar Jankovic) * rcd * Auto-login for web-gui (Chaitanya Bankanhal) * Implement `--baseurl` for rcd and web-gui (Chaitanya Bankanhal) * serve dlna * Only select interfaces which can multicast for SSDP (Nick Craig-Wood) * Add more builtin mime types to cover standard audio/video (Nick Craig-Wood) * Fix missing mime types on Android causing missing videos (Nick Craig-Wood) * serve ftp * Refactor to bring into line with other serve commands (Nick Craig-Wood) * Implement `--auth-proxy` (Nick Craig-Wood) * serve http: Implement `--baseurl` (Nick Craig-Wood) * serve restic: Implement `--baseurl` (Nick Craig-Wood) * serve sftp * Implement auth proxy (Nick Craig-Wood) * Fix detection of whether server is authorized (Nick Craig-Wood) * serve webdav * Implement `--baseurl` (Nick Craig-Wood) * Support `--auth-proxy` (Nick Craig-Wood) * Bug Fixes * Make "bad record MAC" a retriable error (Nick Craig-Wood) * copyurl: Fix copying files that return HTTP errors (Nick Craig-Wood) * march: Fix checking sub-directories when using `--no-traverse` (buengese) * rc * Fix unmarshalable http.AuthFn in options and put in test for marshalability (Nick Craig-Wood) * Move job expire flags to rc to fix initialization problem (Nick Craig-Wood) * Fix `--loopback` with rc/list and others (Nick Craig-Wood) * rcat: Fix slowdown on systems with multiple hashes (Nick Craig-Wood) * rcd: Fix permissions problems on cache directory with web gui download (Nick Craig-Wood) * Mount * Default `--daemon-timeout` to 15 minutes on macOS and FreeBSD (Nick Craig-Wood) * Update docs to show mounting from root OK for bucket based (Nick Craig-Wood) * Remove nonseekable flag from write files (Nick Craig-Wood) * VFS * Make write without cache more efficient (Nick Craig-Wood) * Fix `--vfs-cache-mode minimal` and `writes` ignoring cached files (Nick Craig-Wood) * Local * Add `--local-case-sensitive` and `--local-case-insensitive` (Nick Craig-Wood) * Avoid polluting page cache when uploading local files to remote backends (Michał Matczuk) * Don't calculate any hashes by default (Nick Craig-Wood) * Run the fadvise syscall on a dedicated goroutine (Michał Matczuk) * Azure Blob * Azure Storage Emulator support (Sandeep) * Updated config help details to remove connection string references (Sandeep) * Make all operations work from the root (Nick Craig-Wood) * B2 * Implement link sharing (yparitcher) * Enable server side copy to copy between buckets (Nick Craig-Wood) * Make all operations work from the root (Nick Craig-Wood) * Drive * Fix server side copy of big files (Nick Craig-Wood) * Update API for teamdrive use (Nick Craig-Wood) * Add error for purge with `--drive-trashed-only` (ginvine) * Fichier * Make FolderID int and adjust related code (buengese) * Google Cloud Storage * Reduce oauth scope requested as suggested by Google (Nick Craig-Wood) * Make all operations work from the root (Nick Craig-Wood) * HTTP * Add `--http-headers` flag for setting arbitrary headers (Nick Craig-Wood) * Jottacloud * Use new api for retrieving internal username (buengese) * Refactor configuration and minor cleanup (buengese) * Koofr * Support setting modification times on Koofr backend. (jaKa)
* Opendrive * Refactor to use existing lib/rest facilities for uploads (Nick Craig-Wood) * Qingstor * Upgrade to v3 SDK and fix listing loop (Nick Craig-Wood) * Make all operations work from the root (Nick Craig-Wood) * S3 * Add INTELLIGENT_TIERING storage class (Matti Niemenmaa) * Make all operations work from the root (Nick Craig-Wood) * SFTP * Add missing interface check and fix About (Nick Craig-Wood) * Completely ignore all modtime checks if SetModTime=false (Jon Fautley) * Support md5/sha1 with rsync.net (Nick Craig-Wood) * Save the md5/sha1 command in use to the config file for efficiency (Nick Craig-Wood) * Opt-in support for diffie-hellman-group-exchange-sha256 and diffie-hellman-group-exchange-sha1 (Yi FU) * Swift * Use FixRangeOption to fix 0 length files via the VFS (Nick Craig-Wood) * Fix upload when using no_chunk to return the correct size (Nick Craig-Wood) * Make all operations work from the root (Nick Craig-Wood) * Fix segments leak during failed large file uploads. (nguyenhuuluan434) * WebDAV * Add `--webdav-bearer-token-command` (Nick Craig-Wood) * Refresh token when it expires with `--webdav-bearer-token-command` (Nick Craig-Wood) * Add docs for using bearer_token_command with oidc-agent (Paul Millar) ## v1.48.0 - 2019-06-15 * New commands * serve sftp: Serve an rclone remote over SFTP (Nick Craig-Wood) * New Features * Multi threaded downloads to local storage (Nick Craig-Wood) * controlled with `--multi-thread-cutoff` and `--multi-thread-streams` (see the example below) * Use rclone.conf from rclone executable directory to enable portable use (albertony) * Allow sync of a file and a directory with the same name (forgems) * this is common on bucket based remotes, eg s3, gcs * Add `--ignore-case-sync` for forced case insensitivity (garry415) * Implement `--stats-one-line-date` and `--stats-one-line-date-format` (Peter Berbec) * Log an ERROR for all commands which exit with non-zero status (Nick Craig-Wood) * Use go-homedir to read the home directory more reliably (Nick Craig-Wood) * Enable creating encrypted config through external script invocation (Wojciech Smigielski) * build: Drop support for go1.8 (Nick Craig-Wood) * config: Make config create/update encrypt passwords where necessary (Nick Craig-Wood) * copyurl: Honor `--no-check-certificate` (Stefan Breunig) * install: Linux skip man pages if no mandb (didil) * lsf: Support showing the Tier of the object (Nick Craig-Wood) * lsjson * Added EncryptedPath to output (calisro) * Support showing the Tier of the object (Nick Craig-Wood) * Add IsBucket field for bucket based remote listing of the root (Nick Craig-Wood) * rc * Add `--loopback` flag to run commands directly without a server (Nick Craig-Wood) * Add operations/fsinfo: Return information about the remote (Nick Craig-Wood) * Skip auth for OPTIONS request (Nick Craig-Wood) * cmd/providers: Add DefaultStr, ValueStr and Type fields (Nick Craig-Wood) * jobs: Make job expiry timeouts configurable (Aleksandar Jankovic) * serve dlna reworked and improved (Dan Walters) * serve ftp: add `--ftp-public-ip` flag to specify public IP (calistri) * serve restic: Add support for `--private-repos` in `serve restic` (Florian Apolloner) * serve webdav: Combine serve webdav and serve http (Gary Kim) * size: Ignore negative sizes when calculating total (Garry McNulty)
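A hedged sketch of the multi threaded download flags referenced above (the paths are placeholders; the values shown are the documented defaults):

```
# Download a large file to local storage using 4 parallel streams,
# switching to multi threaded mode for files over 250M.
rclone copy remote:big.iso /local/dir \
    --multi-thread-cutoff 250M \
    --multi-thread-streams 4
```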
* Bug Fixes * Make move and copy individual files obey `--backup-dir` (Nick Craig-Wood) * If `--ignore-checksum` is in effect, don't calculate checksum (Nick Craig-Wood) * moveto: Fix case-insensitive same remote move (Gary Kim) * rc: Fix serving bucket based objects with `--rc-serve` (Nick Craig-Wood) * serve webdav: Fix serveDir not being updated with changes from webdav (Gary Kim) * Mount * Fix poll interval documentation (Animosity022) * VFS * Make WriteAt for non cached files work with non-sequential writes (Nick Craig-Wood) * Local * Only calculate the required hashes for big speedup (Nick Craig-Wood) * Log errors when listing instead of returning an error (Nick Craig-Wood) * Fix preallocate warning on Linux with ZFS (Nick Craig-Wood) * Crypt * Make rclone dedupe work through crypt (Nick Craig-Wood) * Fix wrapping of ChangeNotify to decrypt directories properly (Nick Craig-Wood) * Support PublicLink (rclone link) of underlying backend (Nick Craig-Wood) * Implement Optional methods SetTier, GetTier (Nick Craig-Wood) * B2 * Implement server side copy (Nick Craig-Wood) * Implement SetModTime (Nick Craig-Wood) * Drive * Fix move and copy from TeamDrive to GDrive (Fionera) * Add notes that cleanup works in the background on drive (Nick Craig-Wood) * Add `--drive-server-side-across-configs` to restore the old server side copy semantics (Nick Craig-Wood) * Add `--drive-size-as-quota` to show storage quota usage for file size (Garry McNulty) * FTP * Add FTP List timeout (Jeff Quinn) * Add FTP over TLS support (Gary Kim) * Add `--ftp-no-check-certificate` option for FTPS (Gary Kim) * Google Cloud Storage * Fix upload errors when uploading pre 1970 files (Nick Craig-Wood) * Jottacloud * Add support for selecting device and mountpoint. (buengese) * Mega * Add cleanup support (Gary Kim) * Onedrive * More accurately check if root is found (Cnly) * S3 * Support S3 Accelerated endpoints with `--s3-use-accelerate-endpoint` (Nick Craig-Wood) * Add config info for Wasabi's EU Central endpoint (Robert Marko) * Make SetModTime work for GLACIER while syncing (Philip Harvey) * SFTP * Add About support (Gary Kim) * Fix about parsing of `df` results so it can cope with negative results (Nick Craig-Wood) * Send custom client version and debug server version (Nick Craig-Wood) * WebDAV * Retry on 423 Locked errors (Nick Craig-Wood) ## v1.47.0 - 2019-04-13 * New backends * Backend for Koofr cloud storage service. (jaKa) * New Features * Resume downloads if the reader fails in copy (Nick Craig-Wood) * this means rclone will restart transfers if the source has an error * this is most useful for downloads or cloud to cloud copies * Use `--fast-list` for listing operations where it won't use more memory (Nick Craig-Wood) * this should speed up the following operations on remotes which support `ListR` * `dedupe`, `serve restic`, `lsf`, `ls`, `lsl`, `lsjson`, `lsd`, `md5sum`, `sha1sum`, `hashsum`, `size`, `delete`, `cat`, `settier` * use `--disable ListR` to get old behaviour if required * Make `--files-from` traverse the destination unless `--no-traverse` is set (Nick Craig-Wood) * this fixes `--files-from` with Google drive and excessive API use in general.
* Make server side copy account bytes and obey `--max-transfer` (Nick Craig-Wood) * Add `--create-empty-src-dirs` flag and default to not creating empty dirs (ishuah) * Add client side TLS/SSL flags `--ca-cert`/`--client-cert`/`--client-key` (Nick Craig-Wood) (see the example below) * Implement `--suffix-keep-extension` for use with `--suffix` (Nick Craig-Wood) * build: * Switch to semver compliant version tags to be go modules compliant (Nick Craig-Wood) * Update to use go1.12.x for the build (Nick Craig-Wood) * serve dlna: Add connection manager service description to improve compatibility (Dan Walters) * lsf: Add 'e' format to show encrypted names and 'o' for original IDs (Nick Craig-Wood) * lsjson: Added `--files-only` and `--dirs-only` flags (calistri) * rc: Implement operations/publiclink the equivalent of `rclone link` (Nick Craig-Wood) * Bug Fixes * accounting: Fix total ETA when `--stats-unit bits` is in effect (Nick Craig-Wood) * Bash TAB completion * Use private custom func to fix clash between rclone and kubectl (Nick Craig-Wood) * Fix for remotes with underscores in their names (Six) * Fix completion of remotes (Florian Gamböck) * Fix autocompletion of remote paths with spaces (Danil Semelenov) * serve dlna: Fix root XML service descriptor (Dan Walters) * ncdu: Fix display corruption with Chinese characters (Nick Craig-Wood) * Add SIGTERM to signals which run the exit handlers on unix (Nick Craig-Wood) * rc: Reload filter when the options are set via the rc (Nick Craig-Wood) * VFS / Mount * Fix FreeBSD: Ignore Truncate if called with no readers and already the correct size (Nick Craig-Wood) * Read directory and check for a file before mkdir (Nick Craig-Wood) * Shorten the locking window for vfs/refresh (Nick Craig-Wood) * Azure Blob * Enable MD5 checksums when uploading files bigger than the "Cutoff" (Dr.Rx) * Fix SAS URL support (Nick Craig-Wood) * B2 * Allow manual configuration of backblaze downloadUrl (Vince) * Ignore already_hidden error on remove (Nick Craig-Wood) * Ignore malformed `src_last_modified_millis` (Nick Craig-Wood)
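A hedged sketch of the client side TLS flags added in v1.47 above (the certificate paths are placeholders):

```
# List a remote served over mutual TLS. The .pem paths are
# illustrative only.
rclone ls remote:dir \
    --ca-cert /etc/ssl/my-ca.pem \
    --client-cert /etc/ssl/my-cert.pem \
    --client-key /etc/ssl/my-key.pem
```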
* Drive * Add `--skip-checksum-gphotos` to ignore incorrect checksums on Google Photos (Nick Craig-Wood) * Allow server side move/copy between different remotes. (Fionera) * Add docs on team drives and `--fast-list` eventual consistency (Nestar47) * Fix imports of text files (Nick Craig-Wood) * Fix range requests on 0 length files (Nick Craig-Wood) * Fix creation of duplicates with server side copy (Nick Craig-Wood) * Dropbox * Retry blank errors to fix long listings (Nick Craig-Wood) * FTP * Add `--ftp-concurrency` to limit maximum number of connections (Nick Craig-Wood) * Google Cloud Storage * Fall back to default application credentials (marcintustin) * Allow bucket policy only buckets (Nick Craig-Wood) * HTTP * Add `--http-no-slash` for websites with directories with no slashes (Nick Craig-Wood) * Remove duplicates from listings (Nick Craig-Wood) * Fix socket leak on 404 errors (Nick Craig-Wood) * Jottacloud * Fix token refresh (Sebastian Bünger) * Add device registration (Oliver Heyme) * Onedrive * Implement graceful cancel of multipart uploads if rclone is interrupted (Cnly) * Always add trailing colon to path when addressing items (Cnly) * Return errors instead of panic for invalid uploads (Fabian Möller) * S3 * Add support for "Glacier Deep Archive" storage class (Manu) * Update Dreamhost endpoint (Nick Craig-Wood) * Note incompatibility with CEPH Jewel (Nick Craig-Wood) * SFTP * Allow custom ssh client config (Alexandru Bumbacea) * Swift * Obey Retry-After to enable OVH restore from cold storage (Nick Craig-Wood) * Work around token expiry on CEPH (Nick Craig-Wood) * WebDAV * Allow IsCollection property to be integer or boolean (Nick Craig-Wood) * Fix race when creating directories (Nick Craig-Wood) * Fix About/df when reading the available/total returns 0 (Nick Craig-Wood) ## v1.46 - 2019-02-09 * New backends * Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick Craig-Wood) * New commands * serve dlna: serves a remote via DLNA for the local network (nicolov) * New Features * copy, move: Restore deprecated `--no-traverse` flag (Nick Craig-Wood) * This is useful for when transferring a small number of files into a large destination * genautocomplete: Add remote path completion for bash completion (Christopher Peterson & Danil Semelenov) * Buffer memory handling reworked to return memory to the OS better (Nick Craig-Wood) * Buffer recycling library to replace sync.Pool * Optionally use memory mapped memory for better memory shrinking * Enable with `--use-mmap` if having memory problems - not default yet * Parallelise reading of files specified by `--files-from` (Nick Craig-Wood) * check: Add stats showing total files matched.
(Dario Guzik) * Allow rename/delete open files under Windows (Nick Craig-Wood) * lsjson: Use exactly the correct number of decimal places in the seconds (Nick Craig-Wood) * Add cookie support with cmdline switch `--use-cookies` for all HTTP based remotes (qip) * Warn if `--checksum` is set but there are no hashes available (Nick Craig-Wood) * Rework rate limiting (pacer) to be more accurate and allow bursting (Nick Craig-Wood) * Improve error reporting for too many/few arguments in commands (Nick Craig-Wood) * listremotes: Remove `-l` short flag as it conflicts with the new global flag (weetmuts) * Make http serving with auth generate INFO messages on auth fail (Nick Craig-Wood) * Bug Fixes * Fix layout of stats (Nick Craig-Wood) * Fix `--progress` crash under Windows Jenkins (Nick Craig-Wood) * Fix transfer of google/onedrive docs by calling Rcat in Copy when size is -1 (Cnly) * copyurl: Fix checking of `--dry-run` (Denis Skovpen) * Mount * Check that mountpoint and local directory to mount don't overlap (Nick Craig-Wood) * Fix mount size under 32 bit Windows (Nick Craig-Wood) * VFS * Implement renaming of directories for backends without DirMove (Nick Craig-Wood) * now all backends except b2 support renaming directories * Implement `--vfs-cache-max-size` to limit the total size of the cache (Nick Craig-Wood) * Add `--dir-perms` and `--file-perms` flags to set default permissions (Nick Craig-Wood) * Fix deadlock on concurrent operations on a directory (Nick Craig-Wood) * Fix deadlock between RWFileHandle.close and File.Remove (Nick Craig-Wood) * Fix renaming/deleting open files with cache mode "writes" under Windows (Nick Craig-Wood) * Fix panic on rename with `--dry-run` set (Nick Craig-Wood) * Fix vfs/refresh with recurse=true needing the `--fast-list` flag * Local * Add support for `-l`/`--links` (symbolic link translation) (yair@unicorn) * this works by showing links as `link.rclonelink` - see local backend docs for more info * this errors if used with `-L`/`--copy-links` * Fix renaming/deleting open files on Windows (Nick Craig-Wood) * Crypt * Check for maximum length before decrypting filename to fix panic (Garry McNulty) * Azure Blob * Allow building azureblob backend on *BSD (themylogin) * Use the rclone HTTP client to support `--dump headers`, `--tpslimit` etc (Nick Craig-Wood) * Use the s3 pacer for 0 delay in non error conditions (Nick Craig-Wood) * Ignore directory markers (Nick Craig-Wood) * Stop Mkdir attempting to create existing containers (Nick Craig-Wood) * B2 * cleanup: will remove unfinished large files >24hrs old (Garry McNulty) * For a bucket limited application key check the bucket name (Nick Craig-Wood) * before this, rclone would use the authorised bucket regardless of what you put on the command line * Added `--b2-disable-checksum` flag (Wojciech Smigielski) * this enables large files to be uploaded without a SHA-1 hash for speed reasons * Drive * Set default pacer to 100ms for 10 tps (Nick Craig-Wood) * This fits the Google defaults much better and reduces the 403 errors massively * Add `--drive-pacer-min-sleep` and `--drive-pacer-burst` to control the pacer * Improve ChangeNotify support for items with multiple parents (Fabian Möller) * Fix ListR for items with multiple parents - this fixes oddities with `vfs/refresh` (Fabian Möller) * Fix using `--drive-impersonate` and appfolders (Nick Craig-Wood) * Fix google docs in rclone mount for some (not all) applications (Nick Craig-Wood) * Dropbox * Retry-After support for Dropbox backend (Mathieu Carbou) * FTP * 
Wait for 60 seconds for a connection to Close then declare it dead (Nick Craig-Wood) * helps with indefinite hangs on some FTP servers * Google Cloud Storage * Update google cloud storage endpoints (weetmuts) * HTTP * Add an example with username and password which is supported but wasn't documented (Nick Craig-Wood) * Fix backend with `--files-from` and non-existent files (Nick Craig-Wood) * Hubic * Make error message more informative if authentication fails (Nick Craig-Wood) * Jottacloud * Resume and deduplication support (Oliver Heyme) * Use token auth for all API requests; don't store the password anymore (Sebastian Bünger) * Add support for 2-factor authentication (Sebastian Bünger) * Mega * Implement v2 account login which fixes logins for newer Mega accounts (Nick Craig-Wood) * Return error if an unknown length file is attempted to be uploaded (Nick Craig-Wood) * Add new error codes for better error reporting (Nick Craig-Wood) * Onedrive * Fix broken support for "shared with me" folders (Alex Chen) * Fix root ID not normalised (Cnly) * Return err instead of panic on unknown-sized uploads (Cnly) * Qingstor * Fix goroutine leak on multipart upload errors (Nick Craig-Wood) * Add upload chunk size/concurrency/cutoff control (Nick Craig-Wood) * Default `--qingstor-upload-concurrency` to 1 to work around bug (Nick Craig-Wood) * S3 * Implement `--s3-upload-cutoff` for single part uploads below this (Nick Craig-Wood) (see the example below) * Change `--s3-upload-concurrency` default to 4 to increase performance (Nick Craig-Wood) * Add `--s3-bucket-acl` to control bucket ACL (Nick Craig-Wood) * Auto detect region for buckets on operation failure (Nick Craig-Wood) * Add GLACIER storage class (William Cocker) * Add Scaleway to s3 documentation (Rémy Léone) * Add AWS endpoint eu-north-1 (weetmuts) * SFTP * Add support for PEM encrypted private keys (Fabian Möller) * Add option to force the usage of an ssh-agent (Fabian Möller) * Perform environment variable expansion on key-file (Fabian Möller) * Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood) * Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood) * Fix error on dangling symlinks (Nick Craig-Wood) * Swift * Add `--swift-no-chunk` to disable segmented uploads in rcat/mount (Nick Craig-Wood) * Introduce application credential auth support (kayrus) * Fix memory usage by slimming Object (Nick Craig-Wood) * Fix extra requests on upload (Nick Craig-Wood) * Fix reauth on big files (Nick Craig-Wood) * Union * Fix poll-interval not working (Nick Craig-Wood) * WebDAV * Support About which means rclone mount will show the correct disk size (Nick Craig-Wood) * Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick Craig-Wood) * Fail soft on time parsing errors (Nick Craig-Wood) * Fix infinite loop on failed directory creation (Nick Craig-Wood) * Fix identification of directories for Bitrix Site Manager (Nick Craig-Wood) * Fix upload of 0 length files on some servers (Nick Craig-Wood) * Fix if MKCOL fails with 423 Locked assume the directory exists (Nick Craig-Wood)
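A hedged sketch of the S3 upload tuning flags referenced above (the remote is a placeholder; 200M and 4 are the documented defaults):

```
# Use single part uploads below 200M and 4 concurrent chunks for
# multipart uploads above that. "remote:bucket" is illustrative only.
rclone copy /local/dir remote:bucket \
    --s3-upload-cutoff 200M \
    --s3-upload-concurrency 4
```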
## v1.45 - 2018-11-24 * New backends * The Yandex backend was re-written - see below for details (Sebastian Bünger) * New commands * rcd: New command just to serve the remote control API (Nick Craig-Wood) * New Features * The remote control API (rc) was greatly expanded to allow full control over rclone (Nick Craig-Wood) * sensitive operations require authorization or the `--rc-no-auth` flag * config/* operations to configure rclone * options/* for reading/setting command line flags * operations/* for all low level operations, eg copy file, list directory * sync/* for sync, copy and move * `--rc-files` flag to serve files on the rc http server * this is for building web native GUIs for rclone * Optionally serving objects on the rc http server * Ensure rclone fails to start up if the `--rc` port is in use already * See [the rc docs](https://rclone.org/rc/) for more info * sync/copy/move * Make `--files-from` only read the objects specified and don't scan directories (Nick Craig-Wood) * This is a huge speed improvement for destinations with lots of files * filter: Add `--ignore-case` flag (Nick Craig-Wood) * ncdu: Add remove function ('d' key) (Henning Surmeier) * rc command * Add `--json` flag for structured JSON input (Nick Craig-Wood) * Add `--user` and `--pass` flags and interpret `--rc-user`, `--rc-pass`, `--rc-addr` (Nick Craig-Wood) * build * Require go1.8 or later for compilation (Nick Craig-Wood) * Enable softfloat on MIPS arch (Scott Edlund) * Integration test framework revamped with a better report and better retries (Nick Craig-Wood) * Bug Fixes * cmd: Make `--progress` update the stats correctly at the end (Nick Craig-Wood) * config: Create config directory on save if it is missing (Nick Craig-Wood) * dedupe: Check for existing filename before renaming a dupe file (ssaqua) * move: Don't create directories with `--dry-run` (Nick Craig-Wood) * operations: Fix Purge and Rmdirs when dir is not the root (Nick Craig-Wood) * serve http/webdav/restic: Ensure rclone exits if the port is in use (Nick Craig-Wood) * Mount * Make `--volname` work for Windows and macOS (Nick Craig-Wood) * Azure Blob * Avoid context deadline exceeded error by setting a large TryTimeout value (brused27) * Fix erroneous Rmdir error "directory not empty" (Nick Craig-Wood) * Wait for up to 60s to create a just deleted container (Nick Craig-Wood) * Dropbox * Add dropbox impersonate support (Jake Coggiano) * Jottacloud * Fix bug in `--fast-list` handling of empty folders (albertony) * Opendrive * Fix transfer of files with `+` and `&` in them (Nick Craig-Wood) * Fix retries of upload chunks (Nick Craig-Wood) * S3 * Set ACL for server side copies to that provided by the user (Nick Craig-Wood) * Fix role_arn, credential_source, ...
(Erik Swanson) * Add config info for Wasabi's US-West endpoint (Henry Ptasinski) * SFTP * Ensure file hash checking is really disabled (Jon Fautley) * Swift * Add pacer for retries to make swift more reliable (Nick Craig-Wood) * WebDAV * Add Content-Type to PUT requests (Nick Craig-Wood) * Fix config parsing so `--webdav-user` and `--webdav-pass` flags work (Nick Craig-Wood) * Add RFC3339 date format (Ralf Hemberger) * Yandex * The yandex backend was re-written (Sebastian Bünger) * This implements low level retries (Sebastian Bünger) * Copy, Move, DirMove, PublicLink and About optional interfaces (Sebastian Bünger) * Improved general error handling (Sebastian Bünger) * Removed ListR for now due to inconsistent behaviour (Sebastian Bünger) ## v1.44 - 2018-10-15 * New commands * serve ftp: Add ftp server (Antoine GIRARD) * settier: perform storage tier changes on supported remotes (sandeepkru) * New Features * Reworked command line help * Make default help less verbose (Nick Craig-Wood) * Split flags up into global and backend flags (Nick Craig-Wood) * Implement specialised help for flags and backends (Nick Craig-Wood) * Show URL of backend help page when starting config (Nick Craig-Wood) * stats: Long names now split in center (Joanna Marek) * Add `--log-format` flag for more control over log output (dcpu) * rc: Add support for OPTIONS and basic CORS (frenos) * stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes) * Bug Fixes * Fix -P not ending with a new line (Nick Craig-Wood) * config: don't create default config dir when user supplies `--config` (albertony) * Don't print non-ASCII characters with `--progress` on windows (Nick Craig-Wood) * Correct logs for excluded items (ssaqua) * Mount * Remove EXPERIMENTAL tags (Nick Craig-Wood) * VFS * Fix race condition detected by serve ftp tests (Nick Craig-Wood) * Add vfs/poll-interval rc command (Fabian Möller) * Enable rename for nearly all remotes using server side Move or Copy (Nick Craig-Wood) * Reduce directory cache cleared by poll-interval (Fabian Möller) * Remove EXPERIMENTAL tags (Nick Craig-Wood) * Local * Skip bad symlinks in dir listing with -L enabled (Cédric Connes) * Preallocate files on Windows to reduce fragmentation (Nick Craig-Wood) * Preallocate files on linux with fallocate(2) (Nick Craig-Wood) * Cache * Add cache/fetch rc function (Fabian Möller) * Fix worker scale down (Fabian Möller) * Improve performance by not sending info requests for cached chunks (dcpu) * Fix error return value of cache/fetch rc method (Fabian Möller) * Documentation fix for cache-chunk-total-size (Anagh Kumar Baranwal) * Preserve leading / in wrapped remote path (Fabian Möller) * Add plex_insecure option to skip certificate validation (Fabian Möller) * Remove entries that no longer exist in the source (dcpu) * Crypt * Preserve leading / in wrapped remote path (Fabian Möller) * Alias * Fix handling of Windows network paths (Nick Craig-Wood) * Azure Blob * Add `--azureblob-list-chunk` parameter (Santiago Rodríguez) * Implemented settier command support on azureblob remote. (sandeepkru) * Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood) * Box * Implement link sharing. 
(Sebastian Bünger) * Drive * Add `--drive-import-formats` - google docs can now be imported (Fabian Möller) * Rewrite mime type and extension handling (Fabian Möller) * Add document links (Fabian Möller) * Add support for multipart document extensions (Fabian Möller) * Add support for apps-script to json export (Fabian Möller) * Fix escaped chars in documents during list (Fabian Möller) * Add `--drive-v2-download-min-size` a workaround for slow downloads (Fabian Möller) * Improve directory notifications in ChangeNotify (Fabian Möller) * When listing team drives in config, continue on failure (Nick Craig-Wood) * FTP * Add a small pause after failed upload before deleting file (Nick Craig-Wood) * Google Cloud Storage * Fix service_account_file being ignored (Fabian Möller) * Jottacloud * Minor improvement in quota info (omit if unlimited) (albertony) * Add `--fast-list` support (albertony) * Add permanent delete support: `--jottacloud-hard-delete` (albertony) * Add link sharing support (albertony) * Fix handling of reserved characters. (Sebastian Bünger) * Fix socket leak on Object.Remove (Nick Craig-Wood) * Onedrive * Rework to support Microsoft Graph (Cnly) * **NB** this will require re-authenticating the remote * Removed upload cutoff and always do session uploads (Oliver Heyme) * Use single-part upload for empty files (Cnly) * Fix new fields not saved when editing old config (Alex Chen) * Fix sometimes special chars in filenames not replaced (Alex Chen) * Ignore OneNote files by default (Alex Chen) * Add link sharing support (jackyzy823) * S3 * Use custom pacer, to retry operations when reasonable (Craig Miskell) * Use configured server-side-encryption and storage class options when calling CopyObject() (Paul Kohout) * Make `--s3-v2-auth` flag (Nick Craig-Wood) * Fix v2 auth on files with spaces (Nick Craig-Wood) * Union * Implement union backend which reads from multiple backends (Felix Brucker) * Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood) * Fix ChangeNotify to support multiple remotes (Fabian Möller) * Fix `--backup-dir` on union backend (Nick Craig-Wood) * WebDAV * Add another time format (Nick Craig-Wood) * Add a small pause after failed upload before deleting file (Nick Craig-Wood) * Add workaround for missing mtime (buergi) * Sharepoint: Renew cookies after 12hrs (Henning Surmeier) * Yandex * Remove redundant nil checks (teresy) ## v1.43.1 - 2018-09-07 Point release to fix hubic and azureblob backends. * Bug Fixes * ncdu: Return error instead of log.Fatal in Show (Fabian Möller) * cmd: Fix crash with `--progress` and `--stats 0` (Nick Craig-Wood) * docs: Tidy website display (Anagh Kumar Baranwal) * Azure Blob: * Fix multi-part uploads. 
(sandeepkru) * Hubic * Fix uploads (Nick Craig-Wood) * Retry auth fetching if it fails to make hubic more reliable (Nick Craig-Wood) ## v1.43 - 2018-09-01 * New backends * Jottacloud (Sebastian Bünger) * New commands * copyurl: copies a URL to a remote (Denis) * New Features * Reworked config for backends (Nick Craig-Wood) * All backend config can now be supplied by command line, env var or config file * Advanced section in the config wizard for the optional items * A large step towards rclone backends being usable in other go software * Allow on the fly remotes with :backend: syntax * Stats revamp * Add `--progress`/`-P` flag to show interactive progress (Nick Craig-Wood) * Show the total progress of the sync in the stats (Nick Craig-Wood) * Add `--stats-one-line` flag for single line stats (Nick Craig-Wood) * Added weekday schedule into `--bwlimit` (Mateusz) * lsjson: Add option to show the original object IDs (Fabian Möller) * serve webdav: Set Content-Type without reading the file and add `--etag-hash` (Nick Craig-Wood) * build * Build macOS with native compiler (Nick Craig-Wood) * Update to use go1.11 for the build (Nick Craig-Wood) * rc * Added core/stats to return the stats (reddi1) * `version --check`: Prints the current release and beta versions (Nick Craig-Wood) * Bug Fixes * accounting * Fix time to completion estimates (Nick Craig-Wood) * Fix moving average speed for file stats (Nick Craig-Wood) * config: Fix error reading password from piped input (Nick Craig-Wood) * move: Fix `--delete-empty-src-dirs` flag to delete all empty dirs on move (ishuah) * Mount * Implement `--daemon-timeout` flag for OSXFUSE (Nick Craig-Wood) * Fix mount `--daemon` not working with encrypted config (Alex Chen) * Clip the number of blocks to 2^32-1 on macOS - fixes borg backup (Nick Craig-Wood) * VFS * Enable vfs-read-chunk-size by default (Fabian Möller) * Add the vfs/refresh rc command (Fabian Möller) * Add non recursive mode to vfs/refresh rc command (Fabian Möller) * Try to seek buffer on read only files (Fabian Möller) * Local * Fix crash when deprecated `--local-no-unicode-normalization` is supplied (Nick Craig-Wood) * Fix mkdir error when trying to copy files to the root of a drive on windows (Nick Craig-Wood) * Cache * Fix nil pointer deref when using lsjson on cached directory (Nick Craig-Wood) * Fix nil pointer deref for occasional crash on playback (Nick Craig-Wood) * Crypt * Fix accounting when checking hashes on upload (Nick Craig-Wood) * Amazon Cloud Drive * Make very clear in the docs that rclone has no ACD keys (Nick Craig-Wood) * Azure Blob * Add connection string and SAS URL auth (Nick Craig-Wood) * List the container to see if it exists (Nick Craig-Wood) * Port new Azure Blob Storage SDK (sandeepkru) * Added blob tier support: tiering between Hot, Cool and Archive. (sandeepkru)
* Remove leading / from paths (Nick Craig-Wood) * B2 * Support Application Keys (Nick Craig-Wood) * Remove leading / from paths (Nick Craig-Wood) * Box * Fix upload of > 2GB files on 32 bit platforms (Nick Craig-Wood) * Make `--box-commit-retries` flag default to 100 to fix large uploads (Nick Craig-Wood) * Drive * Add `--drive-keep-revision-forever` flag (lewapm) * Handle gdocs when filtering file names in list (Fabian Möller) * Support using `--fast-list` for large speedups (Fabian Möller) * FTP * Fix Put mkParentDir failed: 521 for BunnyCDN (Nick Craig-Wood) * Google Cloud Storage * Fix index out of range error with `--fast-list` (Nick Craig-Wood) * Jottacloud * Fix MD5 error check (Oliver Heyme) * Handle empty time values (Martin Polden) * Calculate missing MD5s (Oliver Heyme) * Docs, fixes and tests for MD5 calculation (Nick Craig-Wood) * Add optional MimeTyper interface. (Sebastian Bünger) * Implement optional About interface (for `df` support). (Sebastian Bünger) * Mega * Wait for events instead of arbitrary sleeping (Nick Craig-Wood) * Add `--mega-hard-delete` flag (Nick Craig-Wood) * Fix failed logins with upper case chars in email (Nick Craig-Wood) * Onedrive * Shared folder support (Yoni Jah) * Implement DirMove (Cnly) * Fix rmdir sometimes deleting directories with contents (Nick Craig-Wood) * Pcloud * Delete half uploaded files on upload error (Nick Craig-Wood) * Qingstor * Remove leading / from paths (Nick Craig-Wood) * S3 * Fix index out of range error with `--fast-list` (Nick Craig-Wood) * Add `--s3-force-path-style` (Nick Craig-Wood) * Add support for KMS Key ID (bsteiss) * Remove leading / from paths (Nick Craig-Wood) * Swift * Add `storage_policy` (Ruben Vandamme) * Make it so just `storage_url` or `auth_token` can be overridden (Nick Craig-Wood) * Fix server side copy bug for unusual file names (Nick Craig-Wood) * Remove leading / from paths (Nick Craig-Wood) * WebDAV * Ensure we call MKCOL with a URL with a trailing / for QNAP interop (Nick Craig-Wood) * If root ends with / then don't check if it is a file (Nick Craig-Wood) * Don't accept redirects when reading metadata (Nick Craig-Wood) * Add bearer token (Macaroon) support for dCache (Nick Craig-Wood) * Document dCache and Macaroons (Onno Zweers) * Sharepoint recursion with different depth (Henning) * Attempt to remove failed uploads (Nick Craig-Wood) * Yandex * Fix listing/deleting files in the root (Nick Craig-Wood) ## v1.42 - 2018-06-16 * New backends * OpenDrive (Oliver Heyme, Jakub Karlicek, ncw) * New commands * deletefile command (Filip Bartodziej) * New Features * copy, move: Copy single files directly, don't use `--files-from` work-around * this makes them much more efficient * Implement `--max-transfer` flag to quit transferring at a limit * make exit code 8 for `--max-transfer` exceeded * copy: copy empty source directories to destination (Ishuah Kariuki) * check: Add `--one-way` flag (Kasper Byrdal Nielsen) * Add siginfo handler for macOS for ctrl-T stats (kubatasiemski) * rc * add core/gc to run a garbage collection on demand * enable go profiling by default on the `--rc` port * return error from remote on failure * lsf * Add `--absolute` flag to add a leading / onto path names * Add `--csv` flag for compliant CSV output * Add 'm' format specifier to show the MimeType * Implement 'i' format for showing object ID (see the example below) * lsjson * Add MimeType to the output * Add ID field to output to show Object ID * Add `--retries-sleep` flag (Benjamin Joseph Dag) * Oauth: tidy up web page and error handling (Henning Surmeier)
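A hedged sketch of the new lsf format specifiers (the remote is a placeholder):

```
# List path ('p'), object ID ('i') and MimeType ('m') as absolute
# paths in CSV form. "remote:dir" is illustrative only.
rclone lsf remote:dir --format "pim" --absolute --csv
```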
* Bug Fixes * Password prompt output with `--log-file` fixed for unix (Filip Bartodziej) * Calculate ModifyWindow each time on the fly to fix various problems (Stefan Breunig) * Mount * Only print "File.rename error" if there actually is an error (Stefan Breunig) * Delay rename if file has open writers instead of failing outright (Stefan Breunig) * Ensure atexit gets run on interrupt * macOS enhancements * Make `--noappledouble` `--noapplexattr` * Add `--volname` flag and remove special chars from it * Make Get/List/Set/Remove xattr return ENOSYS for efficiency * Make `--daemon` work for macOS without CGO * VFS * Add `--vfs-read-chunk-size` and `--vfs-read-chunk-size-limit` (Fabian Möller) * Fix ChangeNotify for new or changed folders (Fabian Möller) * Local * Fix symlink/junction point directory handling under Windows * **NB** you will need to add `-L` to your command line to copy files with reparse points * Cache * Add non cached dirs on notifications (Remus Bunduc) * Allow root to be expired from rc (Remus Bunduc) * Clean remaining empty folders from temp upload path (Remus Bunduc) * Cache lists using batch writes (Remus Bunduc) * Use secure websockets for HTTPS Plex addresses (John Clayton) * Reconnect plex websocket on failures (Remus Bunduc) * Fix panic when running without plex configs (Remus Bunduc) * Fix root folder caching (Remus Bunduc) * Crypt * Check the crypted hash of files when uploading for extra data security * Dropbox * Make Dropbox for business folders accessible using an initial `/` in the path * Google Cloud Storage * Low level retry all operations if necessary * Google Drive * Add `--drive-acknowledge-abuse` to download flagged files * Add `--drive-alternate-export` to fix large doc export * Don't attempt to choose Team Drives when using rclone config create * Fix change list polling with team drives * Fix ChangeNotify for folders (Fabian Möller) * Fix about (and df on a mount) for team drives * Onedrive * Error handler for onedrive for business requests (Henning Surmeier) * S3 * Adjust upload concurrency with `--s3-upload-concurrency` (themylogin) * Fix `--s3-chunk-size` which was always using the minimum * SFTP * Add `--ssh-path-override` flag (Piotr Oleszczyk) * Fix slow downloads for long latency connections * Webdav * Add workarounds for biz.mail.ru * Ignore Reason-Phrase in status line to fix 4shared (Rodrigo) * Better error message generation ## v1.41 - 2018-04-28 * New backends * Mega support added * Webdav now supports SharePoint cookie authentication (hensur) * New commands * link: create public link to files and folders (Stefan Breunig) * about: gets quota info from a remote (a-roussos, ncw) * hashsum: a generic tool for any hash to produce md5sum like output (see the example below) * New Features * lsd: Add -R flag and fix and update docs for all ls commands * ncdu: added a "refresh" key - CTRL-L (Keith Goldfarb) * serve restic: Add append-only mode (Steve Kriss) * serve restic: Disallow overwriting files in append-only mode (Alexander Neumann) * serve restic: Print actual listener address (Matt Holt) * size: Add --json flag (Matthew Holt) * sync: implement --ignore-errors (Mateusz Pabian) * dedupe: Add dedupe largest functionality (Richard Yang) * fs: Extend SizeSuffix to include TB and PB for rclone about * fs: add --dump goroutines and --dump openfiles for debugging * rc: implement core/memstats to print internal memory usage info * rc: new call rc/pid (Michael P. Dubner)
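A hedged sketch of the generic hashsum command added above (the remote is a placeholder):

```
# Produce md5sum-style output for any hash type the remote supports.
# "remote:dir" is illustrative only; running "rclone hashsum" with no
# arguments lists the supported hash types.
rclone hashsum SHA-1 remote:dir
```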
Dubner) * Compile * Drop support for go1.6 * Release * Fix `make tarball` (Chih-Hsuan Yen) * Bug Fixes * filter: fix the check when --min-age and --max-age are used together * fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport * lsd,lsf: make sure all times we output are in local time * rc: fix setting bwlimit to unlimited * rc: take note of the --rc-addr flag too as per the docs * Mount * Use About to return the correct disk total/used/free (eg in `df`) * Set `--attr-timeout` default to `1s` - fixes: * rclone using too much memory * rclone not serving files to samba * excessive time listing directories * Fix `df -i` (upstream fix) * VFS * Filter files `.` and `..` from directory listing * Only make the VFS cache if --vfs-cache-mode > Off * Local * Add --local-no-check-updated to disable updated file checks * Retry remove on Windows sharing violation error * Cache * Flush the memory cache after close * Purge file data on notification * Always forget parent dir for notifications * Integrate with Plex websocket * Add rc cache/stats (seuffert) * Add info log on notification * Box * Fix failure reading large directories - parse file/directory size as float * Dropbox * Fix crypt+obfuscate on dropbox * Fix repeatedly uploading the same files * FTP * Work around strange response from box FTP server * More workarounds for FTP servers to fix mkParentDir error * Fix no error on listing non-existent directory * Google Cloud Storage * Add service_account_credentials (Matt Holt) * Detect bucket presence by listing it - minimises permissions needed * Ignore zero length directory markers * Google Drive * Add service_account_credentials (Matt Holt) * Fix directory move leaving a hardlinked directory behind * Return proper google errors when Opening files * When initialized with a filepath, optional features used incorrect root path (Stefan Breunig) * HTTP * Fix sync for servers which don't return Content-Length in HEAD * Onedrive * Add QuickXorHash support for OneDrive for business * Fix socket leak in multipart session upload * S3 * Look in S3 named profile files for credentials * Add `--s3-disable-checksum` to disable checksum uploading (Chris Redekop) * Hierarchical configuration support (Giri Badanahatti) * Add in config for all the supported S3 providers * Add One Zone Infrequent Access storage class (Craig Rachel) * Add --use-server-modtime support (Peter Baumgartner) * Add --s3-chunk-size option to control multipart uploads * Ignore zero length directory markers * SFTP * Update docs to match code, fix typos and clarify disable_hashcheck prompt (Michael G. Noll) * Update docs with Synology quirks * Fail soft with a debug on hash failure * Swift * Add --use-server-modtime support (Peter Baumgartner) * Webdav * Support SharePoint cookie authentication (hensur) * Strip leading and trailing / off root ## v1.40 - 2018-03-19 * New backends * Alias backend to create aliases for existing remote names (Fabian Möller) * New commands * `lsf`: list for parsing purposes (Jakub Tasiemski) * by default this is a simple non recursive list of files and directories * it can be configured to add more info in an easy to parse way * `serve restic`: for serving a remote as a Restic REST endpoint * This enables restic to use any backends that rclone can access * Thanks Alexander Neumann for help, patches and review * `rc`: enable the remote control of a running rclone * The running rclone must be started with --rc and related flags. * Currently there is support for bwlimit, and flushing for mount and cache. 
* New Features * `--max-delete` flag to add a delete threshold (Bjørn Erik Pedersen) * All backends now support RangeOption for ranged Open * `cat`: Use RangeOption for limited fetches to make more efficient * `cryptcheck`: make reading of nonce more efficient with RangeOption * serve http/webdav/restic * support SSL/TLS * add `--user`, `--pass` and `--htpasswd` for authentication * `copy`/`move`: detect file size change during copy/move and abort transfer (ishuah) * `cryptdecode`: added option to return encrypted file names. (ishuah) * `lsjson`: add `--encrypted` to show encrypted name (Jakub Tasiemski) * Add `--stats-file-name-length` to specify the printed file name length for stats (Will Gunn) * Compile * Code base was shuffled and factored * backends moved into a backend directory * large packages split up * See the CONTRIBUTING.md doc for info as to what lives where now * Update to using go1.10 as the default go version * Implement daily [full integration tests](https://pub.rclone.org/integration-tests/) * Release * Include a source tarball and sign it and the binaries * Sign the git tags as part of the release process * Add .deb and .rpm packages as part of the build * Make a beta release for all branches on the main repo (but not pull requests) * Bug Fixes * config: fix errors on non-existent config by loading the config file only on first access * config: retry saving the config after failure (Mateusz) * sync: when using `--backup-dir` don't delete files if we can't set their modtime * this fixes odd behaviour with Dropbox and `--backup-dir` * fshttp: fix idle timeouts for HTTP connections * `serve http`: fix serving files with `:` in the name * Fix `--exclude-if-present` to ignore directories which it doesn't have permission for (Iakov Davydov) * Make accounting work properly with crypt and b2 * remove `--no-traverse` flag because it is obsolete * Mount * Add `--attr-timeout` flag to control attribute caching in kernel * this now defaults to 0 which is correct but less efficient * see [the mount docs](/commands/rclone_mount/#attribute-caching) for more info * Add `--daemon` flag to allow mount to run in the background (ishuah) * Fix: Return ENOSYS rather than EIO on attempted link * This fixes FileZilla accessing an rclone mount served over sftp. 
* Fix setting modtime twice * Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows * Many bugs fixed in the VFS layer - see below * VFS * Many fixes for `--vfs-cache-mode` writes and above * Update cached copy if we know it has changed (fixes stale data) * Clean path names before using them in the cache * Disable cache cleaner if `--vfs-cache-poll-interval=0` * Fill and clean the cache immediately on startup * Fix Windows opening every file when it stats the file * Fix applying modtime for an open Write Handle * Fix creation of files when truncating * Write 0 bytes when flushing unwritten handles to avoid race conditions in FUSE * Downgrade "poll-interval is not supported" message to Info * Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC * Local * Downgrade "invalid cross-device link: trying copy" to debug * Make DirMove return fs.ErrorCantDirMove to allow fallback to Copy for cross device * Fix race conditions updating the hashes * Cache * Add support for polling - cache will update when remote changes on supported backends * Reduce log level for Plex api * Fix dir cache issue * Implement `--cache-db-wait-time` flag * Improve efficiency with RangeOption and RangeSeek * Fix dirmove with temp fs enabled * Notify vfs when using temp fs * Offline uploading * Remote control support for path flushing * Amazon cloud drive * Rclone no longer has any working keys - disable integration tests * Implement DirChangeNotify to notify cache/vfs/mount of changes * Azureblob * Don't check for bucket/container presence if listing was OK * this makes rclone do one less request per invocation * Improve accounting for chunked uploads * Backblaze B2 * Don't check for bucket/container presence if listing was OK * this makes rclone do one less request per invocation * Box * Improve accounting for chunked uploads * Dropbox * Fix custom oauth client parameters * Google Cloud Storage * Don't check for bucket/container presence if listing was OK * this makes rclone do one less request per invocation * Google Drive * Migrate to api v3 (Fabian Möller) * Add scope configuration and root folder selection * Add `--drive-impersonate` for service accounts * thanks to everyone who tested, explored and contributed docs * Add `--drive-use-created-date` to use created date as modified date (nbuchanan) * Request the export formats only when required * This makes rclone quicker when there are no google docs * Fix finding paths with latin1 chars (a workaround for a drive bug) * Fix copying of a single Google doc file * Fix `--drive-auth-owner-only` to look in all directories * HTTP * Fix handling of directories with & in * Onedrive * Removed upload cutoff and always do session uploads * this stops the creation of multiple versions on business onedrive * Overwrite object size value with real size when reading file. 
(Victor) * this fixes oddities when onedrive misreports the size of images * Pcloud * Remove unused chunked upload flag and code * Qingstor * Don't check for bucket/container presence if listing was OK * this makes rclone do one less request per invocation * S3 * Support hashes for multipart files (Chris Redekop) * Initial support for IBM COS (S3) (Giri Badanahatti) * Update docs to discourage use of v2 auth with CEPH and others * Don't check for bucket/container presence if listing was OK * this makes rclone do one less request per invocation * Fix server side copy and set modtime on files with + in * SFTP * Add option to disable remote hash check command execution (Jon Fautley) * Add `--sftp-ask-password` flag to prompt for password when needed (Leo R. Lundgren) * Add `set_modtime` configuration option * Fix following of symlinks * Fix reading config file outside of Fs setup * Fix reading $USER in username fallback not $HOME * Fix running under crontab - Use correct OS way of reading username * Swift * Fix refresh of authentication token * in v1.39 a bug was introduced which ignored new tokens - this fixes it * Fix extra HEAD transaction when uploading a new file * Don't check for bucket/container presence if listing was OK * this makes rclone do one less request per invocation * Webdav * Add new time formats to support mydrive.ch and others ## v1.39 - 2017-12-23 * New backends * WebDAV * tested with nextcloud, owncloud, put.io and others! * Pcloud * cache - wraps a cache around other backends (Remus Bunduc) * useful in combination with mount * NB this feature is in beta so use with care * New commands * serve command with subcommands: * serve webdav: this implements a webdav server for any rclone remote. * serve http: command to serve a remote over HTTP * config: add sub commands for full config file management * create/delete/dump/edit/file/password/providers/show/update * touch: to create or update the timestamp of a file (Jakub Tasiemski) * New Features * curl install for rclone (Filip Bartodziej) * --stats now shows percentage, size, rate and ETA in condensed form (Ishuah Kariuki) * --exclude-if-present to exclude a directory if a file is present (Iakov Davydov) * rmdirs: add --leave-root flag (lewapm) * move: add --delete-empty-src-dirs flag to remove dirs after move (Ishuah Kariuki) * Add --dump flag, introduce --dump requests, responses and remove --dump-auth, --dump-filters * Obscure X-Auth-Token: from headers when dumping too * Document and implement exit codes for different failure modes (Ishuah Kariuki) * Compile * Bug Fixes * Retry lots more different types of errors to make multipart transfers more reliable * Save the config before asking for a token, fixes disappearing oauth config * Warn the user if --include and --exclude are used together (Ernest Borowski) * Fix duplicate files (eg on Google drive) causing spurious copies * Allow trailing and leading whitespace for passwords (Jason Rose) * ncdu: fix crashes on empty directories * rcat: fix goroutine leak * moveto/copyto: Fix to allow copying to the same name * Mount * --vfs-cache mode to make writes into mounts more reliable. 
* this requires caching files on the disk (see --cache-dir) * As this is a new feature, use with care * Use sdnotify to signal systemd the mount is ready (Fabian Möller) * Check if directory is not empty before mounting (Ernest Borowski) * Local * Add error message for cross file system moves * Fix equality check for times * Dropbox * Rework multipart upload * buffer the chunks when uploading large files so they can be retried * change default chunk size to 48MB now we are buffering them in memory * retry every error after the first chunk is done successfully * Fix error when renaming directories * Swift * Fix crash on bad authentication * Google Drive * Add service account support (Tim Cooijmans) * S3 * Make it work properly with Digital Ocean Spaces (Andrew Starr-Bochicchio) * Fix crash if a bad listing is received * Add support for ECS task IAM roles (David Minor) * Backblaze B2 * Fix multipart upload retries * Fix --hard-delete to make it work 100% of the time * Swift * Allow authentication with storage URL and auth key (Giovanni Pizzi) * Add new fields for swift configuration to support IBM Bluemix Swift (Pierre Carlson) * Add OS_TENANT_ID and OS_USER_ID to config * Allow configs with user id instead of user name * Check if swift segments container exists before creating (John Leach) * Fix memory leak in swift transfers (upstream fix) * SFTP * Add option to enable the use of aes128-cbc cipher (Jon Fautley) * Amazon cloud drive * Fix download of large files failing with "Only one auth mechanism allowed" * crypt * Option to encrypt directory names or leave them intact * Implement DirChangeNotify (Fabian Möller) * onedrive * Add option to choose resourceURL during setup of OneDrive Business account if more than one is available for user ## v1.38 - 2017-09-30 * New backends * Azure Blob Storage (thanks Andrei Dragomir) * Box * Onedrive for Business (thanks Oliver Heyme) * QingStor from QingCloud (thanks wuyu) * New commands * `rcat` - read from standard input and stream upload * `tree` - shows a nicely formatted recursive listing * `cryptdecode` - decode crypted file names (thanks ishuah) * `config show` - print the config file * `config file` - print the config file location * New Features * Empty directories are deleted on `sync` * `dedupe` - implement merging of duplicate directories * `check` and `cryptcheck` made more consistent and use less memory * `cleanup` for remaining remotes (thanks ishuah) * `--immutable` for ensuring that files don't change (thanks Jacob McNamee) * `--user-agent` option (thanks Alex McGrath Kraak) * `--disable` flag to disable optional features * `--bind` flag for choosing the local addr on outgoing connections * Support for zsh auto-completion (thanks bpicode) * Stop normalizing file names but do a normalized compare in `sync` * Compile * Update to using go1.9 as the default go version * Remove snapd build due to maintenance problems * Bug Fixes * Improve retriable error detection which makes multipart uploads better * Make `check` obey `--ignore-size` * Fix bwlimit toggle in conjunction with schedules (thanks cbruegg) * `config` ensures newly written config is on the same mount * Local * Revert to copy when moving file across file system boundaries * `--skip-links` to suppress symlink warnings (thanks Zhiming Wang) * Mount * Re-use `rcat` internals to support uploads from all remotes * Dropbox * Fix "entry doesn't belong in directory" error * Stop using deprecated API methods * Swift * Fix server side copy to empty container with `--fast-list` * Google 
Drive * Change the default for `--drive-use-trash` to `true` * S3 * Set session token when using STS (thanks Girish Ramakrishnan) * Glacier docs and error messages (thanks Jan Varho) * Read 1000 (not 1024) items in dir listings to fix Wasabi * Backblaze B2 * Fix SHA1 mismatch when downloading files with no SHA1 * Calculate missing hashes on the fly instead of spooling * `--b2-hard-delete` to permanently delete (not hide) files (thanks John Papandriopoulos) * Hubic * Fix creating containers - no longer have to use the `default` container * Swift * Optionally configure from a standard set of OpenStack environment vars * Add `endpoint_type` config * Google Cloud Storage * Fix bucket creation to work with limited permission users * SFTP * Implement connection pooling for multiple ssh connections * Limit new connections per second * Add support for MD5 and SHA1 hashes where available (thanks Christian Brüggemann) * HTTP * Fix URL encoding issues * Fix directories with `:` in * Fix panic with URL encoded content ## v1.37 - 2017-07-22 * New backends * FTP - thanks to Antonio Messina * HTTP - thanks to Vasiliy Tolstov * New commands * rclone ncdu - for exploring a remote with a text based user interface. * rclone lsjson - for listing with a machine readable output * rclone dbhashsum - to show Dropbox style hashes of files (local or Dropbox) * New Features * Implement --fast-list flag * This allows remotes to list recursively if they can * This uses less transactions (important if you pay for them) * This may or may not be quicker * This will use more memory as it has to hold the listing in memory * --old-sync-method deprecated - the remaining uses are covered by --fast-list * This involved a major re-write of all the listing code * Add --tpslimit and --tpslimit-burst to limit transactions per second * this is useful in conjunction with `rclone mount` to limit external apps * Add --stats-log-level so can see --stats without -v * Print password prompts to stderr - Hraban Luyat * Warn about duplicate files when syncing * Oauth improvements * allow auth_url and token_url to be set in the config file * Print redirection URI if using own credentials. 
* Don't Mkdir at the start of sync to save transactions * Compile * Update build to go1.8.3 * Require go1.6 for building rclone * Compile 386 builds with "GO386=387" for maximum compatibility * Bug Fixes * Fix menu selection when no remotes * Config saving reworked to not kill the file if disk gets full * Don't delete remote if name does not change while renaming * moveto, copyto: report transfers and checks as per move and copy * Local * Add --local-no-unicode-normalization flag - Bob Potter * Mount * Now supported on Windows using cgofuse and WinFsp - thanks to Bill Zissimopoulos for much help * Compare checksums on upload/download via FUSE * Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM - Jérôme Vizcaino * On read only open of file, make open pending until first read * Make --read-only reject modify operations * Implement ModTime via FUSE for remotes that support it * Allow modTime to be changed even before all writers are closed * Fix panic on renames * Fix hang on errored upload * Crypt * Report the name:root as specified by the user * Add an "obfuscate" option for filename encryption - Stephen Harris * Amazon Drive * Fix initialization order for token renewer * Remove revoked credentials, allow oauth proxy config and update docs * B2 * Reduce minimum chunk size to 5MB * Drive * Add team drive support * Reduce bandwidth by adding fields for partial responses - Martin Kristensen * Implement --drive-shared-with-me flag to view shared with me files - Danny Tsai * Add --drive-trashed-only to read only the files in the trash * Remove obsolete --drive-full-list * Add missing seek to start on retries of chunked uploads * Fix stats accounting for upload * Convert / in names to a unicode equivalent (／) * Poll for Google Drive changes when mounted * OneDrive * Fix the uploading of files with spaces * Fix initialization order for token renewer * Display speeds accurately when uploading - Yoni Jah * Swap to using http://localhost:53682/ as redirect URL - Michael Ledin * Retry on token expired error, reset upload body on retry - Yoni Jah * Google Cloud Storage * Add ability to specify location and storage class via config and command line - thanks gdm85 * Create container if necessary on server side copy * Increase directory listing chunk to 1000 to increase performance * Obtain a refresh token for GCS - Steven Lu * Yandex * Fix the name reported in log messages (was empty) * Correct error return for listing empty directory * Dropbox * Rewritten to use the v2 API * Now supports ModTime * Can only be set by uploading the file again * If you uploaded with an old rclone, rclone may upload everything again * Use `--size-only` or `--checksum` to avoid this * Now supports the Dropbox content hashing scheme * Now supports low level retries * S3 * Work around eventual consistency in bucket creation * Create container if necessary on server side copy * Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar Ahmed * Swift, Hubic * Fix zero length directory markers showing in the subdirectory listing * this caused lots of duplicate transfers * Fix paged directory listings * this caused duplicate directory errors * Create container if necessary on server side copy * Increase directory listing chunk to 1000 to increase performance * Make sensible error if the user forgets the container * SFTP * Add support for using ssh key files * Fix under Windows * Fix ssh agent on Windows * Adapt to latest version of library - Igor Kharin ## v1.36 - 2017-03-18 * New Features * SFTP remote (Jack Schmidt) * 
Re-implement sync routine to work a directory at a time reducing memory usage * Logging revamped to be more inline with rsync - now much quieter * -v only shows transfers * -vv is for full debug * --syslog to log to syslog on capable platforms * Implement --backup-dir and --suffix * Implement --track-renames (initial implementation by Bjørn Erik Pedersen) * Add time-based bandwidth limits (Lukas Loesche) * rclone cryptcheck: checks integrity of crypt remotes * Allow all config file variables and options to be set from environment variables * Add --buffer-size parameter to control buffer size for copy * Make --delete-after the default * Add --ignore-checksum flag (fixed by Hisham Zarka) * rclone check: Add --download flag to check all the data, not just hashes * rclone cat: add --head, --tail, --offset, --count and --discard * rclone config: when choosing from a list, allow the value to be entered too * rclone config: allow rename and copy of remotes * rclone obscure: for generating encrypted passwords for rclone's config (T.C. Ferguson) * Comply with XDG Base Directory specification (Dario Giovannetti) * this moves the default location of the config file in a backwards compatible way * Release changes * Ubuntu snap support (Dedsec1) * Compile with go 1.8 * MIPS/Linux big and little endian support * Bug Fixes * Fix copyto copying things to the wrong place if the destination dir didn't exist * Fix parsing of remotes in moveto and copyto * Fix --delete-before deleting files on copy * Fix --files-from with an empty file copying everything * Fix sync: don't update mod times if --dry-run set * Fix MimeType propagation * Fix filters to add ** rules to directory rules * Local * Implement -L, --copy-links flag to allow rclone to follow symlinks * Open files in write only mode so rclone can write to an rclone mount * Fix unnormalised unicode causing problems reading directories * Fix interaction between -x flag and --max-depth * Mount * Implement proper directory handling (mkdir, rmdir, renaming) * Make include and exclude filters apply to mount * Implement read and write async buffers - control with --buffer-size * Fix fsync on for directories * Fix retry on network failure when reading off crypt * Crypt * Add --crypt-show-mapping to show encrypted file mapping * Fix crypt writer getting stuck in a loop * **IMPORTANT** this bug had the potential to cause data corruption when * reading data from a network based remote and * writing to a crypt on Google Drive * Use the cryptcheck command to validate your data if you are concerned * If syncing two crypt remotes, sync the unencrypted remote * Amazon Drive * Fix panics on Move (rename) * Fix panic on token expiry * B2 * Fix inconsistent listings and rclone check * Fix uploading empty files with go1.8 * Constrain memory usage when doing multipart uploads * Fix upload url not being refreshed properly * Drive * Fix Rmdir on directories with trashed files * Fix "Ignoring unknown object" when downloading * Add --drive-list-chunk * Add --drive-skip-gdocs (Károly Oláh) * OneDrive * Implement Move * Fix Copy * Fix overwrite detection in Copy * Fix waitForJob to parse errors correctly * Use token renewer to stop auth errors on long uploads * Fix uploading empty files with go1.8 * Google Cloud Storage * Fix depth 1 directory listings * Yandex * Fix single level directory listing * Dropbox * Normalise the case for single level directory listings * Fix depth 1 listing * S3 * Added ca-central-1 region (Jon Yergatian) ## v1.35 - 2017-01-02 * New Features * moveto and 
copyto commands for choosing a destination name on copy/move * rmdirs command to recursively delete empty directories * Allow repeated --include/--exclude/--filter options * Only show transfer stats on commands which transfer stuff * show stats on any command using the `--stats` flag * Allow overlapping directories in move when server side dir move is supported * Add --stats-unit option - thanks Scott McGillivray * Bug Fixes * Fix the config file being overwritten when two rclone instances are running * Make rclone lsd obey the filters properly * Fix compilation on mips * Fix not transferring files that don't differ in size * Fix panic on nil retry/fatal error * Mount * Retry reads on error - should help with reliability a lot * Report the modification times for directories from the remote * Add bandwidth accounting and limiting (fixes --bwlimit) * If --stats provided will show stats and which files are transferring * Support R/W files if truncate is set. * Implement statfs interface so df works * Note that write is now supported on Amazon Drive * Report number of blocks in a file - thanks Stefan Breunig * Crypt * Prevent the user pointing crypt at itself * Fix failed to authenticate decrypted block errors * these will now return the underlying unexpected EOF instead * Amazon Drive * Add support for server side move and directory move - thanks Stefan Breunig * Fix nil pointer deref on size attribute * B2 * Use new prefix and delimiter parameters in directory listings * This makes --max-depth 1 dir listings as used in mount much faster * Reauth the account while doing uploads too - should help with token expiry * Drive * Make DirMove more efficient and complain about moving the root * Create destination directory on Move() ## v1.34 - 2016-11-06 * New Features * Stop single file and `--files-from` operations iterating through the source bucket. * Stop removing failed upload to cloud storage remotes * Make ContentType be preserved for cloud to cloud copies * Add support to toggle bandwidth limits via SIGUSR2 - thanks Marco Paganini * `rclone check` shows count of hashes that couldn't be checked * `rclone listremotes` command * Support linux/arm64 build - thanks Fredrik Fornwall * Remove `Authorization:` lines from `--dump-headers` output * Bug Fixes * Ignore files with control characters in the names * Fix `rclone move` command * Delete src files which already existed in dst * Fix deletion of src file when dst file older * Fix `rclone check` on crypted file systems * Make failed uploads not count as "Transferred" * Make sure high level retries show with `-q` * Use a vendor directory with godep for repeatable builds * `rclone mount` - FUSE * Implement FUSE mount options * `--no-modtime`, `--debug-fuse`, `--read-only`, `--allow-non-empty`, `--allow-root`, `--allow-other` * `--default-permissions`, `--write-back-cache`, `--max-read-ahead`, `--umask`, `--uid`, `--gid` * Add `--dir-cache-time` to control caching of directory entries * Implement seek for files opened for read (useful for video players) * with `-no-seek` flag to disable * Fix crash on 32 bit ARM (alignment of 64 bit counter) * ...and many more internal fixes and improvements! 
* Crypt * Don't show encrypted password in configurator to stop confusion * Amazon Drive * New wait for upload option `--acd-upload-wait-per-gb` * upload timeouts scale by file size and can be disabled * Add 502 Bad Gateway to list of errors we retry * Fix overwriting a file with a zero length file * Fix ACD file size warning limit - thanks Felix Bünemann * Local * Unix: implement `-x`/`--one-file-system` to stay on a single file system * thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana * Windows: ignore the symlink bit on files * Windows: Ignore directory based junction points * B2 * Make sure each upload has at least one upload slot - fixes strange upload stats * Fix uploads when using crypt * Fix download of large files (sha1 mismatch) * Return error when we try to create a bucket which someone else owns * Update B2 docs with Data usage, and Crypt section - thanks Tomasz Mazur * S3 * Command line and config file support for * Setting/overriding ACL - thanks Radek Senfeld * Setting storage class - thanks Asko Tamm * Drive * Make exponential backoff work exactly as per Google specification * add `.epub`, `.odp` and `.tsv` as export formats. * Swift * Don't read metadata for directory marker objects ## v1.33 - 2016-08-24 * New Features * Implement encryption * data encrypted in NACL secretbox format * with optional file name encryption * New commands * rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL) * works on Linux, FreeBSD and OS X (need testers for the last 2!) * rclone cat - outputs remote file or files to the terminal * rclone genautocomplete - command to make a bash completion script for rclone * Editing a remote using `rclone config` now goes through the wizard * Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors * Use cobra for sub commands and docs generation * drive * Document how to make your own client_id * s3 * User-configurable Amazon S3 ACL (thanks Radek Šenfeld) * b2 * Fix stats accounting for upload - no more jumping to 100% done * On cleanup delete hide marker if it is the current file * New B2 API endpoint (thanks Per Cederberg) * Set maximum backoff to 5 minutes * onedrive * Fix URL escaping in file names - eg uploading files with `+` in them. * amazon cloud drive * Fix token expiry during large uploads * Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors * local * Fix filenames with invalid UTF-8 not being uploaded * Fix problem with some UTF-8 characters on OS X ## v1.32 - 2016-07-13 * Backblaze B2 * Fix upload of large files not in root ## v1.31 - 2016-07-13 * New Features * Reduce memory on sync by about 50% * Implement --no-traverse flag to stop copy traversing the destination remote. * This can be used to reduce memory usage down to the smallest possible. * Useful to copy a small number of files into a large destination folder. * Implement cleanup command for emptying trash / removing old versions of files * Currently B2 only * Single file handling improved * Now copied with --files-from * Automatically sets --no-traverse when copying a single file * Info on installing with ansible - thanks Stefan Weichinger * Implement --no-update-modtime flag to stop rclone fixing the remote modified times. * Bug Fixes * Fix move command - stop it running for overlapping Fses - this was causing data loss. * Local * Fix incomplete hashes - this was causing problems for B2. * Amazon Drive * Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed. 
* Swift * Add support for non-default project domain - thanks Antonio Messina. * S3 * Add instructions on how to use rclone with minio. * Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions. * Skip setting the modified time for objects > 5GB as it isn't possible. * Backblaze B2 * Add --b2-versions flag so old versions can be listed and retrieved. * Treat 403 errors (eg cap exceeded) as fatal. * Implement cleanup command for deleting old file versions. * Make error handling compliant with B2 integrations notes. * Fix handling of token expiry. * Implement --b2-test-mode to set `X-Bz-Test-Mode` header. * Set cutoff for chunked upload to 200MB as per B2 guidelines. * Make upload multi-threaded. * Dropbox * Don't retry 461 errors. ## v1.30 - 2016-06-18 * New Features * Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables * Directory include filtering for efficiency * --max-depth parameter * Better error reporting * More to come * Retry more errors * Add --ignore-size flag - for uploading images to onedrive * Log -v output to stdout by default * Display the transfer stats in more human readable form * Make 0 size files specifiable with `--max-size 0b` * Add `b` suffix so we can specify bytes in --bwlimit, --min-size etc * Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz * Bug Fixes * Fix retry doing one too many retries * Local * Fix problems with OS X and UTF-8 characters * Amazon Drive * Check a file exists before uploading to help with 408 Conflict errors * Reauth on 401 errors - this has been causing a lot of problems * Work around spurious 403 errors * Restart directory listings on error * Google Drive * Check a file exists before uploading to help with duplicates * Fix retry of multipart uploads * Backblaze B2 * Implement large file uploading * S3 * Add AES256 server-side encryption - thanks Justin R. Wilson * Google Cloud Storage * Make sure we don't use conflicting content types on upload * Add service account support - thanks Michal Witkowski * Swift * Add auth version parameter * Add domain option for openstack (v3 auth) - thanks Fabian Ruff ## v1.29 - 2016-04-18 * New Features * Implement `-I, --ignore-times` for unconditional upload * Improve `dedupe` command * Now removes identical copies without asking * Now obeys `--dry-run` * Implement `--dedupe-mode` for non interactive running * `--dedupe-mode interactive` - interactive, the default. * `--dedupe-mode skip` - removes identical files then skips anything left. * `--dedupe-mode first` - removes identical files then keeps the first one. * `--dedupe-mode newest` - removes identical files then keeps the newest one. * `--dedupe-mode oldest` - removes identical files then keeps the oldest one. * `--dedupe-mode rename` - removes identical files then renames the rest to be different. * Bug fixes * Make rclone check obey the `--size-only` flag. * Use "application/octet-stream" if discovered mime type is invalid. * Fix missing "quit" option when there are no remotes. * Google Drive * Increase default chunk size to 8 MB - increases upload speed of big files * Speed up directory listings and make more reliable * Add missing retries for Move and DirMove - increases reliability * Preserve mime type on file update * Backblaze B2 * Enable mod time syncing * This means that B2 will now check modification times * It will upload new files to update the modification times * (there isn't an API to just set the mod time.) 
* If you want the old behaviour use `--size-only`. * Update API to new version * Fix parsing of mod time when not in metadata * Swift/Hubic * Don't return an MD5SUM for static large objects * S3 * Fix uploading files bigger than 50GB ## v1.28 - 2016-03-01 * New Features * Configuration file encryption - thanks Klaus Post * Improve `rclone config` adding more help and making it easier to understand * Implement `-u`/`--update` so creation times can be used on all remotes * Implement `--low-level-retries` flag * Optionally disable gzip compression on downloads with `--no-gzip-encoding` * Bug fixes * Don't make directories if `--dry-run` set * Fix and document the `move` command * Fix redirecting stderr on unix-like OSes when using `--log-file` * Fix `delete` command to wait until all finished - fixes missing deletes. * Backblaze B2 * Use one upload URL per goroutine - fixes `more than one upload using auth token` * Add pacing, retries and reauthentication - fixes token expiry problems * Upload without using a temporary file from local (and remotes which support SHA1) * Fix reading metadata for all files when it shouldn't have been * Drive * Fix listing drive documents at root * Disable copy and move for Google docs * Swift * Fix uploading of chunked files with non ASCII characters * Allow setting of `storage_url` in the config - thanks Xavier Lucas * S3 * Allow IAM role and credentials from environment variables - thanks Brian Stengaard * Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon * Amazon Drive * Retry on more things to make directory listings more reliable ## v1.27 - 2016-01-31 * New Features * Easier headless configuration with `rclone authorize` * Add support for multiple hash types - we now check SHA1 as well as MD5 hashes. * `delete` command which does obey the filters (unlike `purge`) * `dedupe` command to deduplicate a remote. Useful with Google Drive. * Add `--ignore-existing` flag to skip all files that exist on destination. * Add `--delete-before`, `--delete-during`, `--delete-after` flags. * Add `--memprofile` flag to debug memory use. * Warn the user about files with same name but different case * Make `--include` rules add their implicit exclude * at the end of the filter list * Deprecate compiling with go1.3 * Amazon Drive * Fix download of files > 10 GB * Fix directory traversal ("Next token is expired") for large directory listings * Remove 409 conflict from error codes we will retry - stops very long pauses * Backblaze B2 * SHA1 hashes now checked by rclone core * Drive * Add `--drive-auth-owner-only` to only consider files owned by the user - thanks Björn Harrtell * Export Google documents * Dropbox * Make file exclusion error controllable with -q * Swift * Fix upload from unprivileged user. * S3 * Fix updating of mod times of files with `+` in. * Local * Add local file system option to disable UNC on Windows. 
## v1.26 - 2016-01-02 * New Features * Yandex storage backend - thank you Dmitry Burdeev ("dibu") * Implement Backblaze B2 storage backend * Add --min-age and --max-age flags - thank you Adriano Aurélio Meirelles * Make ls/lsl/md5sum/size/check obey includes and excludes * Fixes * Fix crash in http logging * Upload releases to github too * Swift * Fix sync for chunked files * OneDrive * Re-enable server side copy * Don't mask HTTP error codes with JSON decode error * S3 * Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier) ## v1.25 - 2015-11-14 * New features * Implement Hubic storage system * Fixes * Fix deletion of some excluded files without --delete-excluded * This could have deleted files unexpectedly on sync * Always check first with `--dry-run`! * Swift * Stop SetModTime losing metadata (eg X-Object-Manifest) * This could have caused data loss for files > 5GB in size * Use ContentType from Object to avoid lookups in listings * OneDrive * disable server side copy as it seems to be broken at Microsoft ## v1.24 - 2015-11-07 * New features * Add support for Microsoft OneDrive * Add `--no-check-certificate` option to disable server certificate verification * Add async readahead buffer for faster transfer of big files * Fixes * Allow spaces in remotes and check remote names for validity at creation time * Allow '&' and disallow ':' in Windows filenames. * Swift * Ignore directory marker objects where appropriate - allows working with Hubic * Don't delete the container if fs wasn't at root * S3 * Don't delete the bucket if fs wasn't at root * Google Cloud Storage * Don't delete the bucket if fs wasn't at root ## v1.23 - 2015-10-03 * New features * Implement `rclone size` for measuring remotes * Fixes * Fix headless config for drive and gcs * Tell the user they should try again if the webserver method failed * Improve output of `--dump-headers` * S3 * Allow anonymous access to public buckets * Swift * Stop chunked operations logging "Failed to read info: Object Not Found" * Use Content-Length on uploads for extra reliability ## v1.22 - 2015-09-28 * Implement rsync like include and exclude flags * swift * Support files > 5GB - thanks Sergey Tolmachev ## v1.21 - 2015-09-22 * New features * Display individual transfer progress * Make lsl output times in localtime * Fixes * Fix allowing user to override credentials again in Drive, GCS and ACD * Amazon Drive * Implement compliant pacing scheme * Google Drive * Make directory reads concurrent for increased speed. 
## v1.20 - 2015-09-15 * New features * Amazon Drive support * Oauth support redone - fix many bugs and improve usability * Use "golang.org/x/oauth2" as oauth library of choice * Improve oauth usability for smoother initial signup * drive, googlecloudstorage: optionally use auto config for the oauth token * Implement --dump-headers and --dump-bodies debug flags * Show multiple matched commands if abbreviation too short * Implement server side move where possible * local * Always use UNC paths internally on Windows - fixes a lot of bugs * dropbox * force use of our custom transport which makes timeouts work * Thanks to Klaus Post for lots of help with this release ## v1.19 - 2015-08-28 * New features * Server side copies for s3/swift/drive/dropbox/gcs * Move command - uses server side copies if it can * Implement --retries flag - tries 3 times by default * Build for plan9/amd64 and solaris/amd64 too * Fixes * Make a current version download with a fixed URL for scripting * Ignore rmdir in limited fs rather than throwing error * dropbox * Increase chunk size to improve upload speeds massively * Issue an error message when trying to upload bad file name ## v1.18 - 2015-08-17 * drive * Add `--drive-use-trash` flag so rclone trashes instead of deletes * Add "Forbidden to download" message for files with no downloadURL * dropbox * Remove datastore * This was deprecated and it caused a lot of problems * Modification times and MD5SUMs no longer stored * Fix uploading files > 2GB * s3 * use official AWS SDK from github.com/aws/aws-sdk-go * **NB** will most likely require you to delete and recreate remote * enable multipart upload which enables files > 5GB * tested with Ceph / RadosGW / S3 emulation * many thanks to Sam Liston and Brian Haymore at the [Utah Center for High Performance Computing](https://www.chpc.utah.edu/) for a Ceph test account * misc * Show errors when reading the config file * Do not print stats in quiet mode - thanks Leonid Shalupov * Add FAQ * Fix created directories not obeying umask * Linux installation instructions - thanks Shimon Doodkin ## v1.17 - 2015-06-14 * dropbox: fix case insensitivity issues - thanks Leonid Shalupov ## v1.16 - 2015-06-09 * Fix uploading big files which was causing timeouts or panics * Don't check md5sum after download with --size-only ## v1.15 - 2015-06-06 * Add --checksum flag to only discard transfers by MD5SUM - thanks Alex Couper * Implement --size-only flag to sync on size not checksum & modtime * Expand docs and remove duplicated information * Document rclone's limitations with directories * dropbox: update docs about case insensitivity ## v1.14 - 2015-05-21 * local: fix encoding of non utf-8 file names - fixes a duplicate file problem * drive: docs about rate limiting * google cloud storage: Fix compile after API change in "google.golang.org/api/storage/v1" ## v1.13 - 2015-05-10 * Revise documentation (especially sync) * Implement --timeout and --conntimeout * s3: ignore etags from multipart uploads which aren't md5sums ## v1.12 - 2015-03-15 * drive: Use chunked upload for files above a certain size * drive: add --drive-chunk-size and --drive-upload-cutoff parameters * drive: switch to insert from update when a failed copy deletes the upload * core: Log duplicate files if they are detected ## v1.11 - 2015-03-04 * swift: add region parameter * drive: fix crash on failed to update remote mtime * In remote paths, change native directory separators to / * Add synchronization to ls/lsl/lsd output to stop corruptions * Ensure all stats/log messages 
go to stderr * Add --log-file flag to log everything (including panics) to file * Make it possible to disable stats printing with --stats=0 * Implement --bwlimit to limit data transfer bandwidth ## v1.10 - 2015-02-12 * s3: list an unlimited number of items * Fix getting stuck in the configurator ## v1.09 - 2015-02-07 * windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:) * local: Fix directory separators on Windows * drive: fix rate limit exceeded errors ## v1.08 - 2015-02-04 * drive: fix subdirectory listing to not list entire drive * drive: Fix SetModTime * dropbox: adapt code to recent library changes ## v1.07 - 2014-12-23 * google cloud storage: fix memory leak ## v1.06 - 2014-12-12 * Fix "Couldn't find home directory" on OSX * swift: Add tenant parameter * Use new location of Google API packages ## v1.05 - 2014-08-09 * Improved tests and consequently lots of minor fixes * core: Fix race detected by go race detector * core: Fixes after running errcheck * drive: reset root directory on Rmdir and Purge * fs: Document that Purger returns error on empty directory, test and fix * google cloud storage: fix ListDir on subdirectory * google cloud storage: re-read metadata in SetModTime * s3: make reading metadata more reliable to work around eventual consistency problems * s3: strip trailing / from ListDir() * swift: return directories without / in ListDir ## v1.04 - 2014-07-21 * google cloud storage: Fix crash on Update ## v1.03 - 2014-07-20 * swift, s3, dropbox: fix updated files being marked as corrupted * Make compile with go 1.1 again ## v1.02 - 2014-07-19 * Implement Dropbox remote * Implement Google Cloud Storage remote * Verify Md5sums and Sizes after copies * Remove times from "ls" command - lists sizes only * Add "lsl" - lists times and sizes * Add "md5sum" command ## v1.01 - 2014-07-04 * drive: fix transfer of big files using up lots of memory ## v1.00 - 2014-07-03 * drive: fix whole second dates ## v0.99 - 2014-06-26 * Fix --dry-run not working * Make compatible with go 1.1 ## v0.98 - 2014-05-30 * s3: Treat missing Content-Length as 0 for some ceph installations * rclonetest: add file with a space in ## v0.97 - 2014-05-05 * Implement copying of single files * s3 & swift: support paths inside containers/buckets ## v0.96 - 2014-04-24 * drive: Fix multiple files of same name being created * drive: Use o.Update and fs.Put to optimise transfers * Add version number, -V and --version ## v0.95 - 2014-03-28 * rclone.org: website, docs and graphics * drive: fix path parsing ## v0.94 - 2014-03-27 * Change remote format one last time * GNU style flags ## v0.93 - 2014-03-16 * drive: store token in config file * cross compile other versions * set strict permissions on config file ## v0.92 - 2014-03-15 * Config fixes and --config option ## v0.91 - 2014-03-15 * Make config file ## v0.90 - 2013-06-27 * Project named rclone ## v0.00 - 2012-11-18 * Project started rclone-1.53.3/docs/content/chunker.md000066400000000000000000000401711375552240400174620ustar00rootroot00000000000000--- title: "Chunker" description: "Split-chunking overlay remote" --- {{< icon "fa fa-cut" >}}Chunker (BETA) ---------------------------------------- The `chunker` overlay transparently splits large files into smaller chunks during upload to the wrapped remote and transparently assembles them back when the file is downloaded. This makes it possible to overcome the size limits imposed by storage providers. 
To use it, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote. First check your chosen remote is working - we'll call it `remote:path` here. Note that anything inside `remote:path` will be chunked and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote `s3:bucket`. Now configure `chunker` using `rclone config`. We will call this one `overlay` to separate it from the `remote` itself. ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> overlay Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Transparently chunk/split large files \ "chunker" [snip] Storage> chunker Remote to chunk/unchunk. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). Enter a string value. Press Enter for the default (""). remote> remote:path Files larger than chunk size will be split in chunks. Enter a size with suffix k,M,G,T. Press Enter for the default ("2G"). chunk_size> 100M Choose how chunker handles hash sums. All modes but "none" require metadata. Enter a string value. Press Enter for the default ("md5"). Choose a number from below, or type in your own value 1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise \ "none" 2 / MD5 for composite files \ "md5" 3 / SHA1 for composite files \ "sha1" 4 / MD5 for all files \ "md5all" 5 / SHA1 for all files \ "sha1all" 6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported \ "md5quick" 7 / Similar to "md5quick" but prefers SHA1 over MD5 \ "sha1quick" hash_type> md5 Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [overlay] type = chunker remote = remote:path chunk_size = 100M hash_type = md5 -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` ### Specifying the remote In normal use, make sure the remote has a `:` in. If you specify the remote without a `:` then rclone will use a local directory of that name. So if you use a remote of `/path/to/secret/files` then rclone will chunk stuff in that directory. If you use a remote of `name` then rclone will put files in a directory called `name` in the current directory. ### Chunking When rclone starts a file upload, chunker checks the file size. If it doesn't exceed the configured chunk size, chunker will just pass the file to the wrapped remote. If a file is large, chunker will transparently cut the data into pieces with temporary names and stream them one by one, on the fly. Each data chunk will contain the specified number of bytes, except for the last one, which may contain less. If the file size is unknown in advance (this is called a streaming upload), chunker will internally create a temporary copy, record its size and repeat the above process. When the upload completes, the temporary chunk files are finally renamed. This scheme guarantees that operations can be run in parallel and look atomic from the outside. A similar method with hidden temporary chunks is used for other operations (copy/move/rename etc). If an operation fails, hidden chunks are normally destroyed, and the target composite file stays intact. 
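To make the chunking scheme concrete, here is a hypothetical example. Suppose you upload a single 250 MiB file `video.avi` through a chunker remote called `overlay:` configured with `chunk_size = 100M`, the default chunk name format (described below) and default metadata. Listing the wrapped remote directly would reveal the individual chunks plus a small meta object named after the original file (metadata is covered below), while listing through the chunker remote shows only the assembled composite file. The file names and the meta object size here are illustrative, not real output:

```
$ rclone ls remote:bucket
       79 video.avi
104857600 video.avi.rclone_chunk.001
104857600 video.avi.rclone_chunk.002
 52428800 video.avi.rclone_chunk.003

$ rclone ls overlay:
262144000 video.avi
```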
When a composite file download is requested, chunker transparently assembles it by concatenating data chunks in order. As the split is trivial, one could even manually concatenate data chunks together to obtain the original content. When the `list` rclone command scans a directory on the wrapped remote, the potential chunk files are accounted for, grouped and assembled into composite directory entries. Any temporary chunks are hidden. List and other commands can sometimes come across composite files with missing or invalid chunks, eg. shadowed by a like-named directory or another file. This usually means that the wrapped file system has been directly tampered with or damaged. If chunker detects a missing chunk it will by default print a warning, skip the whole incomplete group of chunks, but proceed with the current command. You can set the `--chunker-fail-hard` flag to have commands abort with an error message in such cases. #### Chunk names The default chunk name format is `*.rclone_chunk.###`, hence by default chunk names are `BIG_FILE_NAME.rclone_chunk.001`, `BIG_FILE_NAME.rclone_chunk.002` etc. You can configure another name format using the `name_format` configuration file option. The format uses the asterisk `*` as a placeholder for the base file name and one or more consecutive hash characters `#` as a placeholder for the sequential chunk number. There must be one and only one asterisk. The number of consecutive hash characters defines the minimum length of a string representing a chunk number. If the decimal chunk number has fewer digits than the number of hashes, it is left-padded with zeros. If the decimal string is longer, it is left intact. By default numbering starts from 1 but there is another option that allows the user to start from 0, eg. for compatibility with legacy software. For example, if the name format is `big_*-##.part` and the original file name is `data.txt` and numbering starts from 0, then the first chunk will be named `big_data.txt-00.part`, the 99th chunk will be `big_data.txt-98.part` and the 302nd chunk will become `big_data.txt-301.part`. Note that `list` assembles composite directory entries only when chunk names match the configured format and treats non-conforming file names as normal non-chunked files. ### Metadata Besides data chunks, chunker will by default create a metadata object for a composite file. The object is named after the original file. Chunker allows the user to disable metadata completely (the `none` format). Note that metadata is normally not created for files smaller than the configured chunk size. This may change in future rclone releases. #### Simple JSON metadata format This is the default format. It supports hash sums and chunk validation for composite files. Meta objects carry the following fields: - `ver` - version of format, currently `1` - `size` - total size of composite file - `nchunks` - number of data chunks in file - `md5` - MD5 hashsum of composite file (if present) - `sha1` - SHA1 hashsum (if present) There is no field for the composite file name as it's simply equal to the name of the meta object on the wrapped remote. Please refer to the respective sections for details on hashsums and modified time handling. 
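For illustration, the meta object for the hypothetical 250 MiB upload from the earlier example, split into three chunks with the default `md5` hash type, might contain something along these lines (the exact JSON layout is a sketch and the hash value is made up):

```
{"ver":1,"size":262144000,"nchunks":3,"md5":"9a0364b9e99bb480dd25e1f0284c8555"}
```

There is no name field: the meta object is itself stored as `video.avi` on the wrapped remote, as noted above.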
#### No metadata You can disable meta objects by setting the meta format option to `none`. In this mode chunker will scan the directory for all files that follow the configured chunk name format, group them by detecting chunks with the same base name and show the group names as virtual composite files. This method is more prone to missing chunk errors (especially a missing last chunk) than the format with metadata enabled. ### Hashsums Chunker supports hashsums only when compatible metadata is present. Hence, if you choose a metadata format of `none`, chunker will report the hashsum as `UNSUPPORTED`. Please note that by default metadata is stored only for composite files. If a file is smaller than the configured chunk size, chunker will transparently redirect hash requests to the wrapped remote, so support depends on that. You will see the empty string as a hashsum of the requested type for small files if the wrapped remote doesn't support it. Many storage backends support MD5 and SHA1 hash types, and so does chunker. With chunker you can choose one or the other but not both. MD5 is set by default as the most widely supported type. Since chunker keeps hashes for composite files and falls back to the wrapped remote hash for non-chunked ones, we advise you to choose the same hash type as supported by the wrapped remote so that your file listings look coherent. If your storage backend does not support MD5 or SHA1 but you need consistent file hashing, configure chunker with `md5all` or `sha1all`. These two modes guarantee the given hash for all files. If the wrapped remote doesn't support it, chunker will then add metadata to all files, even small ones. However, this can double the number of small files in storage and incur additional service charges. You can even use chunker to force md5/sha1 support in any other remote at the expense of sidecar meta objects by setting eg. `hash_type=sha1all` to force hashsums and `chunk_size=1P` to effectively disable chunking. Normally, when a file is copied to a chunker-controlled remote, chunker will ask the file source for a compatible file hash and revert to on-the-fly calculation if none is found. This involves some CPU overhead but provides a guarantee that the given hashsum is available. Also, chunker will reject a server-side copy or move operation if the source and destination hashsum types are different, which forces a normal copy and so uses extra network bandwidth, too. In some rare cases this may be undesired, so chunker provides two optional choices: `sha1quick` and `md5quick`. If the source does not support the primary hash type and the quick mode is enabled, chunker will try to fall back to the secondary type. This will save CPU and bandwidth but can result in empty hashsums at the destination. Beware of the consequences: the `sync` command will revert (sometimes silently) to time/size comparison if compatible hashsums between source and target are not found. ### Modified time Chunker stores modification times using the wrapped remote so support depends on that. For a small non-chunked file the chunker overlay simply manipulates the modification time of the wrapped remote file. For a composite file with metadata chunker will get and set the modification time of the metadata object on the wrapped remote. If a file is chunked but the metadata format is `none` then chunker will use the modification time of the first data chunk. ### Migrations The idiomatic way to migrate to a different chunk size, hash type or chunk naming scheme is to: - Collect all your chunked files under a directory and have your chunker remote point to it. - Create another directory (most probably on the same cloud storage) and configure a new remote with the desired metadata format, hash type, chunk naming etc. - Now run `rclone sync -i oldchunks: newchunks:` and all your data will be transparently converted in transfer. 
  This may take some time, yet chunker will try a server-side copy if possible.
- After checking data integrity you may remove the configuration section of the old remote.

If rclone gets killed during a long operation on a big composite file, hidden temporary chunks may stay in the directory. They will not be shown by the `list` command but will eat up your account quota. Please note that the `deletefile` command deletes only active chunks of a file. As a workaround, you can use the remote of the wrapped file system to see them. An easy way to get rid of hidden garbage is to copy the littered directory somewhere using the chunker remote and purge the original directory. The `copy` command will copy only active chunks while `purge` will remove everything including the garbage.

### Caveats and Limitations

Chunker requires the wrapped remote to support server-side `move` (or `copy` + `delete`) operations, otherwise it will explicitly refuse to start. This is because it internally renames temporary chunk files to their final names when an operation completes successfully.

Chunker encodes the chunk number in the file name, so with the default `name_format` setting it adds 17 characters. Also chunker adds 7 characters of temporary suffix during operations. Many file systems limit the base file name (without path) to 255 characters. Using rclone's crypt remote as a base file system limits file names to 143 characters. Thus, the maximum name length is 231 for most files and 119 for chunker-over-crypt. A user in need can change the name format to eg. `*.rcc##` and save 10 characters (provided there are at most 99 chunks per file).

Note that a move implemented using the copy-and-delete method may incur double charging with some cloud storage providers.

Chunker will not automatically rename existing chunks when you run `rclone config` on a live remote and change the chunk name format. Beware that as a result of this some files which have been treated as chunks before the change can pop up in directory listings as normal files and vice versa. The same warning holds for the chunk size. If you desperately need to change critical chunking settings, you should run a data migration as described above.

If the wrapped remote is case insensitive, the chunker overlay will inherit that property (so you can't have a file called "Hello.doc" and "hello.doc" in the same directory).

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/chunker/chunker.go then run make backenddocs" >}}

### Standard Options

Here are the standard options specific to chunker (Transparently chunk/split large files).

#### --chunker-remote

Remote to chunk/unchunk. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).

- Config: remote
- Env Var: RCLONE_CHUNKER_REMOTE
- Type: string
- Default: ""

#### --chunker-chunk-size

Files larger than chunk size will be split in chunks.

- Config: chunk_size
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
- Type: SizeSuffix
- Default: 2G

#### --chunker-hash-type

Choose how chunker handles hash sums. All modes but "none" require metadata.
- Config: hash_type - Env Var: RCLONE_CHUNKER_HASH_TYPE - Type: string - Default: "md5" - Examples: - "none" - Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise - "md5" - MD5 for composite files - "sha1" - SHA1 for composite files - "md5all" - MD5 for all files - "sha1all" - SHA1 for all files - "md5quick" - Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported - "sha1quick" - Similar to "md5quick" but prefers SHA1 over MD5 ### Advanced Options Here are the advanced options specific to chunker (Transparently chunk/split large files). #### --chunker-name-format String format of chunk file names. The two placeholders are: base file name (*) and chunk number (#...). There must be one and only one asterisk and one or more consecutive hash characters. If chunk number has less digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match given format. - Config: name_format - Env Var: RCLONE_CHUNKER_NAME_FORMAT - Type: string - Default: "*.rclone_chunk.###" #### --chunker-start-from Minimum valid chunk number. Usually 0 or 1. By default chunk numbers start from 1. - Config: start_from - Env Var: RCLONE_CHUNKER_START_FROM - Type: int - Default: 1 #### --chunker-meta-format Format of the metadata object or "none". By default "simplejson". Metadata is a small JSON file named after the composite file. - Config: meta_format - Env Var: RCLONE_CHUNKER_META_FORMAT - Type: string - Default: "simplejson" - Examples: - "none" - Do not use metadata files at all. Requires hash type "none". - "simplejson" - Simple JSON supports hash sums and chunk validation. - It has the following fields: ver, size, nchunks, md5, sha1. #### --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks. - Config: fail_hard - Env Var: RCLONE_CHUNKER_FAIL_HARD - Type: bool - Default: false - Examples: - "true" - Report errors and abort current command. - "false" - Warn user, skip incomplete file and proceed. {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/commands/000077500000000000000000000000001375552240400172775ustar00rootroot00000000000000rclone-1.53.3/docs/content/commands/rclone.md000066400000000000000000000116171375552240400211110ustar00rootroot00000000000000--- title: "rclone" description: "Show help for rclone commands, flags and backends." slug: rclone url: /commands/rclone/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/ and as part of making a release run "make commanddocs" --- # rclone Show help for rclone commands, flags and backends. ## Synopsis Rclone syncs files to and from cloud storage providers as well as mounting them, listing them in lots of different ways. See the home page (https://rclone.org/) for installation, usage, documentation, changelog and configuration walkthroughs. ``` rclone [flags] ``` ## Options ``` -h, --help help for rclone ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone about](/commands/rclone_about/) - Get quota information from the remote. * [rclone authorize](/commands/rclone_authorize/) - Remote authorization. * [rclone backend](/commands/rclone_backend/) - Run a backend specific command. * [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout. 
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match. * [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible. * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. * [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied. * [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied. * [rclone copyurl](/commands/rclone_copyurl/) - Copy url content to dest. * [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote. * [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names. * [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate filenames and delete/rename them. * [rclone delete](/commands/rclone_delete/) - Remove the contents of path. * [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote. * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. * [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied. * [rclone hashsum](/commands/rclone_hashsum/) - Produces a hashsum file for all the objects in the path. * [rclone link](/commands/rclone_link/) - Generate public link to file/folder. * [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file. * [rclone ls](/commands/rclone_ls/) - List the objects in the path with size and path. * [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path. * [rclone lsf](/commands/rclone_lsf/) - List directories and objects in remote:path formatted for parsing. * [rclone lsjson](/commands/rclone_lsjson/) - List directories and objects in the path in JSON format. * [rclone lsl](/commands/rclone_lsl/) - List the objects in path with modification time, size and path. * [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path. * [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist. * [rclone mount](/commands/rclone_mount/) - Mount the remote as file system on a mountpoint. * [rclone move](/commands/rclone_move/) - Move files from source to dest. * [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest. * [rclone ncdu](/commands/rclone_ncdu/) - Explore a remote with a text based user interface. * [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone config file. * [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents. * [rclone rc](/commands/rclone_rc/) - Run a command against a running rclone. * [rclone rcat](/commands/rclone_rcat/) - Copies standard input to file on remote. * [rclone rcd](/commands/rclone_rcd/) - Run rclone listening to remote control commands only. * [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty. * [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path. * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. * [rclone settier](/commands/rclone_settier/) - Changes storage class/tier of objects in remote. * [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path. * [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path. 
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only. * [rclone touch](/commands/rclone_touch/) - Create new file or change file modification time. * [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion. * [rclone version](/commands/rclone_version/) - Show the version number. rclone-1.53.3/docs/content/commands/rclone_about.md000066400000000000000000000034531375552240400223020ustar00rootroot00000000000000--- title: "rclone about" description: "Get quota information from the remote." slug: rclone_about url: /commands/rclone_about/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/about/ and as part of making a release run "make commanddocs" --- # rclone about Get quota information from the remote. ## Synopsis Get quota information from the remote, like bytes used/free/quota and bytes used in the trash. Not supported by all remotes. This will print to stdout something like this: Total: 17G Used: 7.444G Free: 1.315G Trashed: 100.000M Other: 8.241G Where the fields are: * Total: total size available. * Used: total size used * Free: total amount this user could upload. * Trashed: total amount in the trash * Other: total amount in other storage (eg Gmail, Google Photos) * Objects: total number of objects in the storage Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted. Use the --full flag to see the numbers written out in full, eg Total: 18253611008 Used: 7993453766 Free: 1411001220 Trashed: 104857602 Other: 8849156022 Use the --json flag for a computer readable output, eg { "total": 18253611008, "used": 7993453766, "trashed": 104857602, "other": 8849156022, "free": 1411001220 } ``` rclone about remote: [flags] ``` ## Options ``` --full Full numbers instead of SI units -h, --help help for about --json Format output as JSON ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_authorize.md000066400000000000000000000016271375552240400232030ustar00rootroot00000000000000--- title: "rclone authorize" description: "Remote authorization." slug: rclone_authorize url: /commands/rclone_authorize/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/authorize/ and as part of making a release run "make commanddocs" --- # rclone authorize Remote authorization. ## Synopsis Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config. Use the --auth-no-open-browser to prevent rclone to open auth link in default browser automatically. ``` rclone authorize [flags] ``` ## Options ``` --auth-no-open-browser Do not automatically open auth link in default browser -h, --help help for authorize ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_backend.md000066400000000000000000000031661375552240400225600ustar00rootroot00000000000000--- title: "rclone backend" description: "Run a backend specific command." 
slug: rclone_backend url: /commands/rclone_backend/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/backend/ and as part of making a release run "make commanddocs" --- # rclone backend Run a backend specific command. ## Synopsis This runs a backend specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions. You can discover what commands a backend implements by using rclone backend help remote: rclone backend help You can also discover information about the backend using (see [operations/fsinfo](/rc/#operations/fsinfo) in the remote control docs for more info). rclone backend features remote: Pass options to the backend command with -o. This should be key=value or key, eg: rclone backend stats remote:path stats -o format=json -o long Pass arguments to the backend by placing them on the end of the line rclone backend cleanup remote:path file1 file2 file3 Note to run these commands on a running backend then see [backend/command](/rc/#backend/command) in the rc docs. ``` rclone backend remote:path [opts] [flags] ``` ## Options ``` -h, --help help for backend --json Always output in JSON format. -o, --option stringArray Option in the form name=value or name. ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_cat.md000066400000000000000000000027521375552240400217400ustar00rootroot00000000000000--- title: "rclone cat" description: "Concatenates any files and sends them to stdout." slug: rclone_cat url: /commands/rclone_cat/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/cat/ and as part of making a release run "make commanddocs" --- # rclone cat Concatenates any files and sends them to stdout. ## Synopsis rclone cat sends any files to standard output. You can use it like this to output a single file rclone cat remote:path/to/file Or like this to output any file in dir or its subdirectories. rclone cat remote:path/to/dir Or like this to output any .txt files in dir or its subdirectories. rclone --include "*.txt" cat remote:path/to/dir Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1. ``` rclone cat remote:path [flags] ``` ## Options ``` --count int Only print N characters. (default -1) --discard Discard the output instead of printing. --head int Only print the first N characters. -h, --help help for cat --offset int Start printing at offset N (or from end if -ve). --tail int Only print the last N characters. ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_check.md000066400000000000000000000056671375552240400222560ustar00rootroot00000000000000--- title: "rclone check" description: "Checks the files in the source and destination match." slug: rclone_check url: /commands/rclone_check/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/check/ and as part of making a release run "make commanddocs" --- # rclone check Checks the files in the source and destination match. 
## Synopsis Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination. If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check. If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data. If you supply the `--one-way` flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected. The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, one per line, to the file name (or stdout if it is `-`) supplied. What they write is described in the help below. For example `--differ` will write all paths which are present on both the source and destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files. - `= path` means path was found in source and destination and was identical - `- path` means path was missing on the source, so only in the destination - `+ path` means path was missing on the destination, so only in the source - `* path` means path was present in source and destination but different. - `! path` means there was an error reading or hashing the source or dest. ``` rclone check source:path dest:path [flags] ``` ## Options ``` --combined string Make a combined report of changes to this file --differ string Report all non-matching files to this file --download Check by downloading rather than with hash. --error string Report all files with errors (hashing or reading) to this file -h, --help help for check --match string Report all matching files to this file --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file --one-way Check one way only, source files must exist on remote ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_cleanup.md000066400000000000000000000013051375552240400226110ustar00rootroot00000000000000--- title: "rclone cleanup" description: "Clean up the remote if possible." slug: rclone_cleanup url: /commands/rclone_cleanup/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/cleanup/ and as part of making a release run "make commanddocs" --- # rclone cleanup Clean up the remote if possible. ## Synopsis Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes. ``` rclone cleanup remote:path [flags] ``` ## Options ``` -h, --help help for cleanup ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_config.md000066400000000000000000000037371375552240400224420ustar00rootroot00000000000000--- title: "rclone config" description: "Enter an interactive configuration session." 
slug: rclone_config url: /commands/rclone_config/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/ and as part of making a release run "make commanddocs" --- # rclone config Enter an interactive configuration session. ## Synopsis Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration. ``` rclone config [flags] ``` ## Options ``` -h, --help help for config ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options. * [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote `name`. * [rclone config disconnect](/commands/rclone_config_disconnect/) - Disconnects user from remote * [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON. * [rclone config edit](/commands/rclone_config_edit/) - Enter an interactive configuration session. * [rclone config file](/commands/rclone_config_file/) - Show path of configuration file in use. * [rclone config password](/commands/rclone_config_password/) - Update password in an existing remote. * [rclone config providers](/commands/rclone_config_providers/) - List in JSON format all the providers and options. * [rclone config reconnect](/commands/rclone_config_reconnect/) - Re-authenticates user with remote. * [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote. * [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote. * [rclone config userinfo](/commands/rclone_config_userinfo/) - Prints info about logged in user of remote. rclone-1.53.3/docs/content/commands/rclone_config_create.md000066400000000000000000000040601375552240400237530ustar00rootroot00000000000000--- title: "rclone config create" description: "Create a new remote with name, type and options." slug: rclone_config_create url: /commands/rclone_config_create/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/create/ and as part of making a release run "make commanddocs" --- # rclone config create Create a new remote with name, type and options. ## Synopsis Create a new remote of `name` with `type` and options. The options should be passed in pairs of `key` `value`. For example to make a swift remote of name myremote using auto config you would do: rclone config create myremote swift env_auth true Note that if the config process would normally ask a question the default is taken. Each time that happens rclone will print a message saying how to affect the value taken. If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file. **NB** If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set osbscured passwords using the "rclone config password" command. 
So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this: rclone config create mydrive drive config_is_local false ``` rclone config create `name` `type` [`key` `value`]* [flags] ``` ## Options ``` -h, --help help for create --no-obscure Force any passwords not to be obscured. --obscure Force any passwords to be obscured. ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_delete.md000066400000000000000000000012371375552240400237550ustar00rootroot00000000000000--- title: "rclone config delete" description: "Delete an existing remote `name`." slug: rclone_config_delete url: /commands/rclone_config_delete/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/delete/ and as part of making a release run "make commanddocs" --- # rclone config delete Delete an existing remote `name`. ## Synopsis Delete an existing remote `name`. ``` rclone config delete `name` [flags] ``` ## Options ``` -h, --help help for delete ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_disconnect.md000066400000000000000000000014621375552240400246440ustar00rootroot00000000000000--- title: "rclone config disconnect" description: "Disconnects user from remote" slug: rclone_config_disconnect url: /commands/rclone_config_disconnect/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/disconnect/ and as part of making a release run "make commanddocs" --- # rclone config disconnect Disconnects user from remote ## Synopsis This disconnects the remote: passed in to the cloud storage system. This normally means revoking the oauth token. To reconnect use "rclone config reconnect". ``` rclone config disconnect remote: [flags] ``` ## Options ``` -h, --help help for disconnect ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_dump.md000066400000000000000000000011761375552240400234620ustar00rootroot00000000000000--- title: "rclone config dump" description: "Dump the config file as JSON." slug: rclone_config_dump url: /commands/rclone_config_dump/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/dump/ and as part of making a release run "make commanddocs" --- # rclone config dump Dump the config file as JSON. ## Synopsis Dump the config file as JSON. ``` rclone config dump [flags] ``` ## Options ``` -h, --help help for dump ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_edit.md000066400000000000000000000014471375552240400234430ustar00rootroot00000000000000--- title: "rclone config edit" description: "Enter an interactive configuration session." slug: rclone_config_edit url: /commands/rclone_config_edit/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/edit/ and as part of making a release run "make commanddocs" --- # rclone config edit Enter an interactive configuration session. 
## Synopsis Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration. ``` rclone config edit [flags] ``` ## Options ``` -h, --help help for edit ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_file.md000066400000000000000000000012341375552240400234270ustar00rootroot00000000000000--- title: "rclone config file" description: "Show path of configuration file in use." slug: rclone_config_file url: /commands/rclone_config_file/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/file/ and as part of making a release run "make commanddocs" --- # rclone config file Show path of configuration file in use. ## Synopsis Show path of configuration file in use. ``` rclone config file [flags] ``` ## Options ``` -h, --help help for file ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_password.md000066400000000000000000000017731375552240400243620ustar00rootroot00000000000000--- title: "rclone config password" description: "Update password in an existing remote." slug: rclone_config_password url: /commands/rclone_config_password/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/password/ and as part of making a release run "make commanddocs" --- # rclone config password Update password in an existing remote. ## Synopsis Update an existing remote's password. The password should be passed in pairs of `key` `value`. For example to set password of a remote of name myremote you would do: rclone config password myremote fieldname mypassword This command is obsolete now that "config update" and "config create" both support obscuring passwords directly. ``` rclone config password `name` [`key` `value`]+ [flags] ``` ## Options ``` -h, --help help for password ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_providers.md000066400000000000000000000013401375552240400245230ustar00rootroot00000000000000--- title: "rclone config providers" description: "List in JSON format all the providers and options." slug: rclone_config_providers url: /commands/rclone_config_providers/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/providers/ and as part of making a release run "make commanddocs" --- # rclone config providers List in JSON format all the providers and options. ## Synopsis List in JSON format all the providers and options. ``` rclone config providers [flags] ``` ## Options ``` -h, --help help for providers ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_reconnect.md000066400000000000000000000015251375552240400244730ustar00rootroot00000000000000--- title: "rclone config reconnect" description: "Re-authenticates user with remote." 
slug: rclone_config_reconnect url: /commands/rclone_config_reconnect/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/reconnect/ and as part of making a release run "make commanddocs" --- # rclone config reconnect Re-authenticates user with remote. ## Synopsis This reconnects remote: passed in to the cloud storage system. To disconnect the remote use "rclone config disconnect". This normally means going through the interactive oauth flow again. ``` rclone config reconnect remote: [flags] ``` ## Options ``` -h, --help help for reconnect ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_show.md000066400000000000000000000013651375552240400234750ustar00rootroot00000000000000--- title: "rclone config show" description: "Print (decrypted) config file, or the config for a single remote." slug: rclone_config_show url: /commands/rclone_config_show/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/show/ and as part of making a release run "make commanddocs" --- # rclone config show Print (decrypted) config file, or the config for a single remote. ## Synopsis Print (decrypted) config file, or the config for a single remote. ``` rclone config show [] [flags] ``` ## Options ``` -h, --help help for show ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_update.md000066400000000000000000000035411375552240400237750ustar00rootroot00000000000000--- title: "rclone config update" description: "Update options in an existing remote." slug: rclone_config_update url: /commands/rclone_config_update/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/update/ and as part of making a release run "make commanddocs" --- # rclone config update Update options in an existing remote. ## Synopsis Update an existing remote's options. The options should be passed in in pairs of `key` `value`. For example to update the env_auth field of a remote of name myremote you would do: rclone config update myremote swift env_auth true If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file. **NB** If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set osbscured passwords using the "rclone config password" command. If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus: rclone config update myremote swift env_auth true config_refresh_token false ``` rclone config update `name` [`key` `value`]+ [flags] ``` ## Options ``` -h, --help help for update --no-obscure Force any passwords not to be obscured. --obscure Force any passwords to be obscured. ``` See the [global flags page](/flags/) for global options not listed here. 
## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_config_userinfo.md000066400000000000000000000014241375552240400243430ustar00rootroot00000000000000--- title: "rclone config userinfo" description: "Prints info about logged in user of remote." slug: rclone_config_userinfo url: /commands/rclone_config_userinfo/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/userinfo/ and as part of making a release run "make commanddocs" --- # rclone config userinfo Prints info about logged in user of remote. ## Synopsis This prints the details of the person logged in to the cloud storage system. ``` rclone config userinfo remote: [flags] ``` ## Options ``` -h, --help help for userinfo --json Format output as JSON ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. rclone-1.53.3/docs/content/commands/rclone_copy.md000066400000000000000000000045101375552240400221350ustar00rootroot00000000000000--- title: "rclone copy" description: "Copy files from source to dest, skipping already copied." slug: rclone_copy url: /commands/rclone_copy/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/copy/ and as part of making a release run "make commanddocs" --- # rclone copy Copy files from source to dest, skipping already copied. ## Synopsis Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination. Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. If dest:path doesn't exist, it is created and the source:path contents go there. For example rclone copy source:sourcepath dest:destpath Let's say there are two files in sourcepath sourcepath/one.txt sourcepath/two.txt This copies them to destpath/one.txt destpath/two.txt Not to destpath/sourcepath/one.txt destpath/sourcepath/two.txt If you are familiar with `rsync`, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination. See the [--no-traverse](/docs/#no-traverse) option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly. For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this: rclone copy --max-age 24h --no-traverse /path/to/src remote: **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. **Note**: Use the `--dry-run` or the `--interactive`/`-i` flag to test without copying anything. ``` rclone copy source:path dest:path [flags] ``` ## Options ``` --create-empty-src-dirs Create empty source dirs on destination after copy -h, --help help for copy ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. 
rclone-1.53.3/docs/content/commands/rclone_copyto.md000066400000000000000000000027101375552240400225000ustar00rootroot00000000000000--- title: "rclone copyto" description: "Copy files from source to dest, skipping already copied." slug: rclone_copyto url: /commands/rclone_copyto/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/copyto/ and as part of making a release run "make commanddocs" --- # rclone copyto Copy files from source to dest, skipping already copied. ## Synopsis If source:path is a file or directory then it copies it to a file or directory named dest:path. This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command. So rclone copyto src dst where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows. This will: if src is file copy it to dst, overwriting an existing file if it exists if src is directory copy it to dst, overwriting existing files if they exist see copy command for full details This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics ``` rclone copyto source:path dest:path [flags] ``` ## Options ``` -h, --help help for copyto ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_copyurl.md000066400000000000000000000024171375552240400226640ustar00rootroot00000000000000--- title: "rclone copyurl" description: "Copy url content to dest." slug: rclone_copyurl url: /commands/rclone_copyurl/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/copyurl/ and as part of making a release run "make commanddocs" --- # rclone copyurl Copy url content to dest. ## Synopsis Download a URL's content and copy it to the destination without saving it in temporary storage. Setting --auto-filename will cause the file name to be retrieved from the from URL (after any redirections) and used in the destination path. Setting --no-clobber will prevent overwriting file on the destination if there is one with the same name. Setting --stdout or making the output file name "-" will cause the output to be written to standard output. ``` rclone copyurl https://example.com dest:path [flags] ``` ## Options ``` -a, --auto-filename Get the file name from the URL and use it for destination file path -h, --help help for copyurl --no-clobber Prevent overwriting file with same name --stdout Write the output to stdout rather than a file ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_cryptcheck.md000066400000000000000000000062361375552240400233310ustar00rootroot00000000000000--- title: "rclone cryptcheck" description: "Cryptcheck checks the integrity of a crypted remote." slug: rclone_cryptcheck url: /commands/rclone_cryptcheck/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/cryptcheck/ and as part of making a release run "make commanddocs" --- # rclone cryptcheck Cryptcheck checks the integrity of a crypted remote. ## Synopsis rclone cryptcheck checks a remote against a crypted remote. 
This is the equivalent of running rclone check, but able to check the checksums of the crypted remote. For it to work the underlying remote of the cryptedremote must support some kind of checksum. It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted. Use it like this rclone cryptcheck /path/to/files encryptedremote:path You can use it like this also, but that will involve downloading all the files in remote:path. rclone cryptcheck remote:path encryptedremote:path After it has run it will log the status of the encryptedremote:. If you supply the `--one-way` flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected. The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, one per line, to the file name (or stdout if it is `-`) supplied. What they write is described in the help below. For example `--differ` will write all paths which are present on both the source and destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files. - `= path` means path was found in source and destination and was identical - `- path` means path was missing on the source, so only in the destination - `+ path` means path was missing on the destination, so only in the source - `* path` means path was present in source and destination but different. - `! path` means there was an error reading or hashing the source or dest. ``` rclone cryptcheck remote:path cryptedremote:path [flags] ``` ## Options ``` --combined string Make a combined report of changes to this file --differ string Report all non-matching files to this file --error string Report all files with errors (hashing or reading) to this file -h, --help help for cryptcheck --match string Report all matching files to this file --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file --one-way Check one way only, source files must exist on remote ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_cryptdecode.md000066400000000000000000000021051375552240400234660ustar00rootroot00000000000000--- title: "rclone cryptdecode" description: "Cryptdecode returns unencrypted file names." slug: rclone_cryptdecode url: /commands/rclone_cryptdecode/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/cryptdecode/ and as part of making a release run "make commanddocs" --- # rclone cryptdecode Cryptdecode returns unencrypted file names. ## Synopsis rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items. If you supply the --reverse flag, it will return encrypted file names. 
use it like this rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 rclone cryptdecode --reverse encryptedremote: filename1 filename2 ``` rclone cryptdecode encryptedremote: encryptedfilename [flags] ``` ## Options ``` -h, --help help for cryptdecode --reverse Reverse cryptdecode, encrypts filenames ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_dedupe.md000066400000000000000000000117221375552240400224340ustar00rootroot00000000000000--- title: "rclone dedupe" description: "Interactively find duplicate filenames and delete/rename them." slug: rclone_dedupe url: /commands/rclone_dedupe/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/dedupe/ and as part of making a release run "make commanddocs" --- # rclone dedupe Interactively find duplicate filenames and delete/rename them. ## Synopsis By default `dedupe` interactively finds files with duplicate names and offers to delete all but one or rename them to be different. This is only useful with backends like Google Drive which can have duplicate file names. It can be run on wrapping backends (eg crypt) if they wrap a backend which supports duplicate file names. In the first pass it will merge directories with the same name. It will do this iteratively until all the identically named directories have been merged. In the second pass, for every group of duplicate file names, it will delete all but one identical files it finds without confirmation. This means that for most duplicated files the `dedupe` command will not be interactive. `dedupe` considers files to be identical if they have the same hash. If the backend does not support hashes (eg crypt wrapping Google Drive) then they will never be found to be identical. If you use the `--size-only` flag then files will be considered identical if they have the same size (any hash will be ignored). This can be useful on crypt backends which do not support hashes. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. Here is an example run. Before - with duplicates $ rclone lsl drive:dupes 6048320 2016-03-05 16:23:16.798000000 one.txt 6048320 2016-03-05 16:23:11.775000000 one.txt 564374 2016-03-05 16:23:06.731000000 one.txt 6048320 2016-03-05 16:18:26.092000000 one.txt 6048320 2016-03-05 16:22:46.185000000 two.txt 1744073 2016-03-05 16:22:38.104000000 two.txt 564374 2016-03-05 16:22:52.118000000 two.txt Now the `dedupe` session $ rclone dedupe drive:dupes 2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode. 
one.txt: Found 4 files with duplicate names one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36") one.txt: 2 duplicates remain 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 s) Skip and do nothing k) Keep just one (choose which in next step) r) Rename all to be different (by changing file.jpg to file-1.jpg) s/k/r> k Enter the number of the file to keep> 1 one.txt: Deleted 1 extra copies two.txt: Found 3 files with duplicates names two.txt: 3 duplicates remain 1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802 s) Skip and do nothing k) Keep just one (choose which in next step) r) Rename all to be different (by changing file.jpg to file-1.jpg) s/k/r> r two-1.txt: renamed from: two.txt two-2.txt: renamed from: two.txt two-3.txt: renamed from: two.txt The result being $ rclone lsl drive:dupes 6048320 2016-03-05 16:23:16.798000000 one.txt 564374 2016-03-05 16:22:52.118000000 two-1.txt 6048320 2016-03-05 16:22:46.185000000 two-2.txt 1744073 2016-03-05 16:22:38.104000000 two-3.txt Dedupe can be run non interactively using the `--dedupe-mode` flag or by using an extra parameter with the same value * `--dedupe-mode interactive` - interactive as above. * `--dedupe-mode skip` - removes identical files then skips anything left. * `--dedupe-mode first` - removes identical files then keeps the first one. * `--dedupe-mode newest` - removes identical files then keeps the newest one. * `--dedupe-mode oldest` - removes identical files then keeps the oldest one. * `--dedupe-mode largest` - removes identical files then keeps the largest one. * `--dedupe-mode smallest` - removes identical files then keeps the smallest one. * `--dedupe-mode rename` - removes identical files then renames the rest to be different. For example to rename all the identically named photos in your Google Photos directory, do rclone dedupe --dedupe-mode rename "drive:Google Photos" Or rclone dedupe rename "drive:Google Photos" ``` rclone dedupe [mode] remote:path [flags] ``` ## Options ``` --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename. (default "interactive") -h, --help help for dedupe ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_delete.md000066400000000000000000000027301375552240400224270ustar00rootroot00000000000000--- title: "rclone delete" description: "Remove the contents of path." slug: rclone_delete url: /commands/rclone_delete/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/delete/ and as part of making a release run "make commanddocs" --- # rclone delete Remove the contents of path. ## Synopsis Remove the files in path. Unlike `purge` it obeys include/exclude filters so can be used to selectively delete files. `rclone delete` only deletes objects but leaves the directory structure alone. If you want to delete a directory and all of its contents use `rclone purge` If you supply the --rmdirs flag, it will remove all empty directories along with it. 
Eg delete all files bigger than 100MBytes Check what would be deleted first (use either) rclone --min-size 100M lsl remote:path rclone --dry-run --min-size 100M delete remote:path Then delete rclone --min-size 100M delete remote:path That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. ``` rclone delete remote:path [flags] ``` ## Options ``` -h, --help help for delete --rmdirs rmdirs removes empty directories but leaves root intact ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_deletefile.md000066400000000000000000000014611375552240400232670ustar00rootroot00000000000000--- title: "rclone deletefile" description: "Remove a single file from remote." slug: rclone_deletefile url: /commands/rclone_deletefile/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/deletefile/ and as part of making a release run "make commanddocs" --- # rclone deletefile Remove a single file from remote. ## Synopsis Remove a single file from remote. Unlike `delete` it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed. ``` rclone deletefile remote:path [flags] ``` ## Options ``` -h, --help help for deletefile ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_genautocomplete.md000066400000000000000000000020541375552240400243570ustar00rootroot00000000000000--- title: "rclone genautocomplete" description: "Output completion script for a given shell." slug: rclone_genautocomplete url: /commands/rclone_genautocomplete/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/genautocomplete/ and as part of making a release run "make commanddocs" --- # rclone genautocomplete Output completion script for a given shell. ## Synopsis Generates a shell completion script for rclone. Run with --help to list the supported shells. ## Options ``` -h, --help help for genautocomplete ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone. * [rclone genautocomplete fish](/commands/rclone_genautocomplete_fish/) - Output fish completion script for rclone. * [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone. rclone-1.53.3/docs/content/commands/rclone_genautocomplete_bash.md000066400000000000000000000021311375552240400253500ustar00rootroot00000000000000--- title: "rclone genautocomplete bash" description: "Output bash completion script for rclone." slug: rclone_genautocomplete_bash url: /commands/rclone_genautocomplete_bash/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/genautocomplete/bash/ and as part of making a release run "make commanddocs" --- # rclone genautocomplete bash Output bash completion script for rclone. ## Synopsis Generates a bash shell autocompletion script for rclone. 
This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete bash Logout and login again to use the autocompletion scripts, or source them directly . /etc/bash_completion If you supply a command line argument the script will be written there. ``` rclone genautocomplete bash [output_file] [flags] ``` ## Options ``` -h, --help help for bash ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. rclone-1.53.3/docs/content/commands/rclone_genautocomplete_fish.md000066400000000000000000000021441375552240400253700ustar00rootroot00000000000000--- title: "rclone genautocomplete fish" description: "Output fish completion script for rclone." slug: rclone_genautocomplete_fish url: /commands/rclone_genautocomplete_fish/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/genautocomplete/fish/ and as part of making a release run "make commanddocs" --- # rclone genautocomplete fish Output fish completion script for rclone. ## Synopsis Generates a fish autocompletion script for rclone. This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete fish Logout and login again to use the autocompletion scripts, or source them directly . /etc/fish/completions/rclone.fish If you supply a command line argument the script will be written there. ``` rclone genautocomplete fish [output_file] [flags] ``` ## Options ``` -h, --help help for fish ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. rclone-1.53.3/docs/content/commands/rclone_genautocomplete_zsh.md000066400000000000000000000021361375552240400252440ustar00rootroot00000000000000--- title: "rclone genautocomplete zsh" description: "Output zsh completion script for rclone." slug: rclone_genautocomplete_zsh url: /commands/rclone_genautocomplete_zsh/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/genautocomplete/zsh/ and as part of making a release run "make commanddocs" --- # rclone genautocomplete zsh Output zsh completion script for rclone. ## Synopsis Generates a zsh autocompletion script for rclone. This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg sudo rclone genautocomplete zsh Logout and login again to use the autocompletion scripts, or source them directly autoload -U compinit && compinit If you supply a command line argument the script will be written there. ``` rclone genautocomplete zsh [output_file] [flags] ``` ## Options ``` -h, --help help for zsh ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. rclone-1.53.3/docs/content/commands/rclone_gendocs.md000066400000000000000000000014571375552240400226140ustar00rootroot00000000000000--- title: "rclone gendocs" description: "Output markdown docs for rclone to the directory supplied." 
slug: rclone_gendocs
url: /commands/rclone_gendocs/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/gendocs/ and as part of making a release run "make commanddocs"
---
# rclone gendocs

Output markdown docs for rclone to the directory supplied.

## Synopsis

This produces markdown docs for the rclone commands to the directory
supplied. These are in a format suitable for hugo to render into the
rclone.org website.

```
rclone gendocs output_directory [flags]
```

## Options

```
  -h, --help   help for gendocs
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_hashsum.md
---
title: "rclone hashsum"
description: "Produces a hashsum file for all the objects in the path."
slug: rclone_hashsum
url: /commands/rclone_hashsum/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/hashsum/ and as part of making a release run "make commanddocs"
---
# rclone hashsum

Produces a hashsum file for all the objects in the path.

## Synopsis

Produces a hash file for all the objects in the path using the hash
named. The output is in the same format as the standard
md5sum/sha1sum tool.

Run without a hash to see the list of supported hashes, eg

    $ rclone hashsum
    Supported hashes are:
      * MD5
      * SHA-1
      * DropboxHash
      * QuickXorHash

Then

    $ rclone hashsum MD5 remote:path

```
rclone hashsum remote:path [flags]
```

## Options

```
      --base64   Output base64 encoded hashsum
  -h, --help     help for hashsum
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_link.md
---
title: "rclone link"
description: "Generate public link to file/folder."
slug: rclone_link
url: /commands/rclone_link/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/link/ and as part of making a release run "make commanddocs"
---
# rclone link

Generate public link to file/folder.

## Synopsis

rclone link will create, retrieve or remove a public link to the
given file or folder.

    rclone link remote:path/to/file
    rclone link remote:path/to/folder/
    rclone link --unlink remote:path/to/folder/
    rclone link --expire 1d remote:path/to/file

If you supply the --expire flag, it will set the expiration time
otherwise it will use the default (100 years).

**Note** not all backends support the --expire flag - if the backend
doesn't support it then the link returned won't expire.

Use the --unlink flag to remove existing public links to the file or
folder. **Note** not all backends support "--unlink" flag - those that
don't will just ignore it.

If successful, the last line of the output will contain the link.
Exact capabilities depend on the remote, but the link will always by
default be created with the least constraints – e.g. no expiry, no
password protection, accessible without account.

```
rclone link remote:path [flags]
```

## Options

```
      --expire Duration   The amount of time that the link will be valid (default 100y)
  -h, --help              help for link
      --unlink            Remove existing public link to file/folder
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
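As a quick hedged sketch of the create/revoke cycle described above - the
remote and file names here are placeholders, and `--expire` only takes
effect on backends that support it:

```
# Create a public link that (on supporting backends) expires after a week.
rclone link --expire 7d remote:reports/summary.pdf

# Later, revoke any public links to the same file.
rclone link --unlink remote:reports/summary.pdf
```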
rclone-1.53.3/docs/content/commands/rclone_listremotes.md
---
title: "rclone listremotes"
description: "List all the remotes in the config file."
slug: rclone_listremotes
url: /commands/rclone_listremotes/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/listremotes/ and as part of making a release run "make commanddocs"
---
# rclone listremotes

List all the remotes in the config file.

## Synopsis

rclone listremotes lists all the available remotes from the config file.

When used with the -l flag it lists the types too.

```
rclone listremotes [flags]
```

## Options

```
  -h, --help   help for listremotes
      --long   Show the type as well as names.
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_ls.md
---
title: "rclone ls"
description: "List the objects in the path with size and path."
slug: rclone_ls
url: /commands/rclone_ls/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/ls/ and as part of making a release run "make commanddocs"
---
# rclone ls

List the objects in the path with size and path.

## Synopsis

Lists the objects in the source path to standard output in a human
readable format with size and path. Recurses by default.

Eg

    $ rclone ls swift:bucket
        60295 bevajer5jef
        90613 canole
        94467 diwogej7
        37600 fubuwic

Any of the filtering options can be applied to this command.

There are several related list commands

  * `ls` to list size and path of objects only
  * `lsl` to list modification time, size and path of objects only
  * `lsd` to list directories only
  * `lsf` to list objects and directories in easy to parse format
  * `lsjson` to list objects and directories in JSON format

`ls`,`lsl`,`lsd` are designed to be human readable.
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.

Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.

The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.

Listing a non existent directory will produce an error except for
remotes which can't have empty directories (eg s3, swift, gcs, etc -
the bucket based remotes).

```
rclone ls remote:path [flags]
```

## Options

```
  -h, --help   help for ls
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_lsd.md
---
title: "rclone lsd"
description: "List all directories/containers/buckets in the path."
slug: rclone_lsd
url: /commands/rclone_lsd/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/lsd/ and as part of making a release run "make commanddocs"
---
# rclone lsd

List all directories/containers/buckets in the path.

## Synopsis

Lists the directories in the source path to standard output.
Does not recurse by default. Use the -R flag to recurse.
This command lists the total size of the directory (if known, -1 if
not), the modification time (if known, the current time if not), the
number of objects in the directory (if known, -1 if not) and the name
of the directory, Eg

    $ rclone lsd swift:
          494000 2018-04-26 08:43:20     10000 10000files
              65 2018-04-26 08:43:20         1 1File

Or

    $ rclone lsd drive:test
              -1 2016-10-17 17:41:53        -1 1000files
              -1 2017-01-03 14:40:54        -1 2500files
              -1 2017-07-08 14:39:28        -1 4000files

If you just want the directory names use "rclone lsf --dirs-only".

Any of the filtering options can be applied to this command.

There are several related list commands

  * `ls` to list size and path of objects only
  * `lsl` to list modification time, size and path of objects only
  * `lsd` to list directories only
  * `lsf` to list objects and directories in easy to parse format
  * `lsjson` to list objects and directories in JSON format

`ls`,`lsl`,`lsd` are designed to be human readable.
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.

Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.

The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.

Listing a non existent directory will produce an error except for
remotes which can't have empty directories (eg s3, swift, gcs, etc -
the bucket based remotes).

```
rclone lsd remote:path [flags]
```

## Options

```
  -h, --help        help for lsd
  -R, --recursive   Recurse into the listing.
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_lsf.md
---
title: "rclone lsf"
description: "List directories and objects in remote:path formatted for parsing."
slug: rclone_lsf
url: /commands/rclone_lsf/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/lsf/ and as part of making a release run "make commanddocs"
---
# rclone lsf

List directories and objects in remote:path formatted for parsing.

## Synopsis

List the contents of the source path (directories and objects) to
standard output in a form which is easy to parse by scripts. By
default this will just be the names of the objects and directories,
one per line. The directories will have a / suffix. Eg

    $ rclone lsf swift:bucket
    bevajer5jef
    canole
    diwogej7
    ferejej3gux/
    fubuwic

Use the --format option to control what gets listed. By default this
is just the path, but you can use these parameters to control the
output:

    p - path
    s - size
    t - modification time
    h - hash
    i - ID of object
    o - Original ID of underlying object
    m - MimeType of object if known
    e - encrypted name
    T - tier of storage if known, eg "Hot" or "Cool"

So if you wanted the path, size and modification time, you would use
--format "pst", or maybe --format "tsp" to put the path last. Eg

    $ rclone lsf --format "tsp" swift:bucket
    2016-06-25 18:55:41;60295;bevajer5jef
    2016-06-25 18:55:43;90613;canole
    2016-06-25 18:55:43;94467;diwogej7
    2018-04-26 08:50:45;0;ferejej3gux/
    2016-06-25 18:55:40;37600;fubuwic

If you specify "h" in the format you will get the MD5 hash by default,
use the "--hash" flag to change which hash you want.
Note that this can be returned as an empty string if it isn't
available on the object (and for directories), "ERROR" if there was an
error reading it from the object and "UNSUPPORTED" if that object does
not support that hash type.

For example to emulate the md5sum command you can use

    rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .

Eg

    $ rclone lsf -R --hash MD5 --format hp --separator "  " --files-only swift:bucket
    7908e352297f0f530b84a756f188baa3  bevajer5jef
    cd65ac234e6fea5925974a51cdd865cc  canole
    03b5341b4f234b9d984d03ad076bae91  diwogej7
    8fd37c3810dd660778137ac3a66cc06d  fubuwic
    99713e14a4c4ff553acaf1930fad985b  gixacuh7ku

(Though "rclone md5sum ." is an easier way of typing this.)

By default the separator is ";" this can be changed with the
--separator flag. Note that separators aren't escaped in the path so
putting it last is a good strategy. Eg

    $ rclone lsf --separator "," --format "tshp" swift:bucket
    2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
    2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
    2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
    2018-04-26 08:52:53,0,,ferejej3gux/
    2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic

You can output in CSV standard format. This will escape things in "
if they contain ,

Eg

    $ rclone lsf --csv --files-only --format ps remote:path
    test.log,22355
    test.sh,449
    "this file contains a comma, in the file name.txt",6

Note that the --absolute parameter is useful for making lists of files
to pass to an rclone copy with the --files-from-raw flag.

For example to find all the files modified within one day and copy
those only (without traversing the whole directory structure):

    rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
    rclone copy --files-from-raw new_files /path/to/local remote:path

Any of the filtering options can be applied to this command.

There are several related list commands

  * `ls` to list size and path of objects only
  * `lsl` to list modification time, size and path of objects only
  * `lsd` to list directories only
  * `lsf` to list objects and directories in easy to parse format
  * `lsjson` to list objects and directories in JSON format

`ls`,`lsl`,`lsd` are designed to be human readable.
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.

Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.

The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.

Listing a non existent directory will produce an error except for
remotes which can't have empty directories (eg s3, swift, gcs, etc -
the bucket based remotes).

```
rclone lsf remote:path [flags]
```

## Options

```
      --absolute           Put a leading / in front of path names.
      --csv                Output in CSV format.
  -d, --dir-slash          Append a slash to directory names. (default true)
      --dirs-only          Only list directories.
      --files-only         Only list files.
  -F, --format string      Output format - see help for details (default "p")
      --hash h             Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
  -h, --help               help for lsf
  -R, --recursive          Recurse into the listing.
  -s, --separator string   Separator for the items in the format. (default ";")
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
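As one more scripting sketch in the same spirit (the remote name below is
a placeholder), `--format` output feeds ordinary shell tools directly:

```
# Build a ";"-separated inventory of files (modtime;size;path),
# oldest first. "remote:backup" stands in for one of your remotes.
rclone lsf --files-only --format "tsp" remote:backup | sort > inventory.txt
```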
rclone-1.53.3/docs/content/commands/rclone_lsjson.md
---
title: "rclone lsjson"
description: "List directories and objects in the path in JSON format."
slug: rclone_lsjson
url: /commands/rclone_lsjson/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/lsjson/ and as part of making a release run "make commanddocs"
---
# rclone lsjson

List directories and objects in the path in JSON format.

## Synopsis

List directories and objects in the path in JSON format.

The output is an array of Items, where each Item looks like this

    {
      "Hashes" : {
         "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
         "MD5" : "b1946ac92492d2347c6235b4d2611184",
         "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
      },
      "ID": "y2djkhiujf83u33",
      "OrigID": "UYOJVTUW00Q1RzTDA",
      "IsBucket" : false,
      "IsDir" : false,
      "MimeType" : "application/octet-stream",
      "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
      "Name" : "file.txt",
      "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
      "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
      "Path" : "full/path/goes/here/file.txt",
      "Size" : 6,
      "Tier" : "hot",
    }

If --hash is not specified the Hashes property won't be emitted. The
types of hash can be specified with the --hash-type parameter (which
may be repeated). If --hash-type is set then it implies --hash.

If --no-modtime is specified then ModTime will be blank. This can
speed things up on remotes where reading the ModTime takes an extra
request (eg s3, swift).

If --no-mimetype is specified then MimeType will be blank. This can
speed things up on remotes where reading the MimeType takes an extra
request (eg s3, swift).

If --encrypted is not specified the Encrypted won't be emitted.

If --dirs-only is not specified files in addition to directories are
returned.

If --files-only is not specified directories in addition to the files
will be returned.

The Path field will only show folders below the remote path being
listed. If "remote:path" contains the file "subfolder/file.txt", the
Path for "file.txt" will be "subfolder/file.txt", not
"remote:path/subfolder/file.txt". When used without --recursive the
Path will always be the same as Name.

If the directory is a bucket in a bucket based backend, then
"IsBucket" will be set to true. This key won't be present unless it is
"true".

The time is in RFC3339 format with up to nanosecond precision. The
number of decimal digits in the seconds will depend on the precision
that the remote can hold the times, so if times are accurate to the
nearest millisecond (eg Google Drive) then 3 digits will always be
shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are
accurate to the nearest second (Dropbox, Box, WebDav etc) no digits
will be shown ("2017-05-31T16:15:57+01:00").

The whole output can be processed as a JSON blob, or alternatively it
can be processed line by line as each item is written one per line.

Any of the filtering options can be applied to this command.

There are several related list commands

  * `ls` to list size and path of objects only
  * `lsl` to list modification time, size and path of objects only
  * `lsd` to list directories only
  * `lsf` to list objects and directories in easy to parse format
  * `lsjson` to list objects and directories in JSON format

`ls`,`lsl`,`lsd` are designed to be human readable.
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
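Because the output is a plain JSON array, it pairs naturally with a JSON
processor such as `jq`; a minimal sketch, with the remote name as a
placeholder:

```
# Print just the paths of all files in the listing, recursively.
rclone lsjson -R remote:path | jq -r '.[].Path'
```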
Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.

The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.

Listing a non existent directory will produce an error except for
remotes which can't have empty directories (eg s3, swift, gcs, etc -
the bucket based remotes).

```
rclone lsjson remote:path [flags]
```

## Options

```
      --dirs-only              Show only directories in the listing.
  -M, --encrypted              Show the encrypted names.
      --files-only             Show only files in the listing.
      --hash                   Include hashes in the output (may take longer).
      --hash-type stringArray  Show only this hash type (may be repeated).
  -h, --help                   help for lsjson
      --no-mimetype            Don't read the mime type (can speed things up).
      --no-modtime             Don't read the modification time (can speed things up).
      --original               Show the ID of the underlying Object.
  -R, --recursive              Recurse into the listing.
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_lsl.md
---
title: "rclone lsl"
description: "List the objects in path with modification time, size and path."
slug: rclone_lsl
url: /commands/rclone_lsl/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/lsl/ and as part of making a release run "make commanddocs"
---
# rclone lsl

List the objects in path with modification time, size and path.

## Synopsis

Lists the objects in the source path to standard output in a human
readable format with modification time, size and path. Recurses by
default.

Eg

    $ rclone lsl swift:bucket
        60295 2016-06-25 18:55:41.062626927 bevajer5jef
        90613 2016-06-25 18:55:43.302607074 canole
        94467 2016-06-25 18:55:43.046609333 diwogej7
        37600 2016-06-25 18:55:40.814629136 fubuwic

Any of the filtering options can be applied to this command.

There are several related list commands

  * `ls` to list size and path of objects only
  * `lsl` to list modification time, size and path of objects only
  * `lsd` to list directories only
  * `lsf` to list objects and directories in easy to parse format
  * `lsjson` to list objects and directories in JSON format

`ls`,`lsl`,`lsd` are designed to be human readable.
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.

Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.

The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.

Listing a non existent directory will produce an error except for
remotes which can't have empty directories (eg s3, swift, gcs, etc -
the bucket based remotes).

```
rclone lsl remote:path [flags]
```

## Options

```
  -h, --help   help for lsl
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_md5sum.md
---
title: "rclone md5sum"
description: "Produces an md5sum file for all the objects in the path."
slug: rclone_md5sum
url: /commands/rclone_md5sum/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/md5sum/ and as part of making a release run "make commanddocs"
---
# rclone md5sum

Produces an md5sum file for all the objects in the path.
## Synopsis

Produces an md5sum file for all the objects in the path. This is in
the same format as the standard md5sum tool produces.

```
rclone md5sum remote:path [flags]
```

## Options

```
  -h, --help   help for md5sum
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_mkdir.md
---
title: "rclone mkdir"
description: "Make the path if it doesn't already exist."
slug: rclone_mkdir
url: /commands/rclone_mkdir/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/mkdir/ and as part of making a release run "make commanddocs"
---
# rclone mkdir

Make the path if it doesn't already exist.

## Synopsis

Make the path if it doesn't already exist.

```
rclone mkdir remote:path [flags]
```

## Options

```
  -h, --help   help for mkdir
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_mount.md
---
title: "rclone mount"
description: "Mount the remote as file system on a mountpoint."
slug: rclone_mount
url: /commands/rclone_mount/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/mount/ and as part of making a release run "make commanddocs"
---
# rclone mount

Mount the remote as file system on a mountpoint.

## Synopsis

rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of
Rclone's cloud storage systems as a file system with FUSE.

First set up your remote using `rclone config`. Check it works with `rclone ls` etc.

You can either run mount in foreground mode or background (daemon)
mode. Mount runs in foreground mode by default, use the --daemon flag
to specify background mode. Background mode is only supported on
Linux and OSX, you can only run mount in foreground mode on Windows.

On Linux/macOS/FreeBSD start the mount like this, where
`/path/to/local/mount` is an **empty** **existing** directory.

    rclone mount remote:path/to/files /path/to/local/mount

Or on Windows like this where `X:` is an unused drive letter or use a
path to **non-existent** directory.

    rclone mount remote:path/to/files X:
    rclone mount remote:path/to/files C:\path\to\nonexistent\directory

When running in background mode the user will have to stop the mount
manually (specified below).

When the program ends while in foreground mode, either via Ctrl+C or
receiving a SIGINT or SIGTERM signal, the mount is automatically
stopped.

The umount operation can fail, for example when the mountpoint is
busy. When that happens, it is the user's responsibility to stop the
mount manually.

Stopping the mount manually:

    # Linux
    fusermount -u /path/to/local/mount
    # OS X
    umount /path/to/local/mount

**Note**: As of `rclone` 1.52.2, `rclone mount` now requires Go version
1.13 or newer on some platforms depending on the underlying FUSE
library in use.

## Installing on Windows

To run rclone mount on Windows, you will need to download and install
[WinFsp](http://www.secfs.net/winfsp/).

[WinFsp](https://github.com/billziss-gh/winfsp) is an open source
Windows File System Proxy which makes it easy to write user space file
systems for Windows.
It provides a FUSE emulation layer which rclone uses in combination
with [cgofuse](https://github.com/billziss-gh/cgofuse). Both of these
packages are by Bill Zissimopoulos who was very helpful during the
implementation of rclone mount for Windows.

### Windows caveats

Note that drives created as Administrator are not visible by other
accounts (including the account that was elevated as
Administrator). So if you start a Windows drive from an Administrative
Command Prompt and then try to access the same drive from Explorer
(which does not run as Administrator), you will not be able to see the
new drive.

The easiest way around this is to start the drive from a normal
command prompt. It is also possible to start a drive from the SYSTEM
account (using [the WinFsp.Launcher
infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
which creates drives accessible for everyone on the system or
alternatively using [the nssm service manager](https://nssm.cc/usage).

### Mount as a network drive

By default, rclone will mount the remote as a normal drive. However,
you can also mount it as a **Network Drive** (or **Network Share**, as
mentioned in some places).

Unlike other systems, Windows provides a different filesystem type for
network drives. Windows and other programs treat the network drives
and fixed/removable drives differently: In network drives, many I/O
operations are optimized, as the high latency and low reliability
(compared to a normal drive) of a network is expected.

Although many people prefer network shares to be mounted as normal
system drives, this might cause some issues, such as programs not
working as expected or freezes and errors while operating with the
mounted remote in Windows Explorer. If you experience any of those,
consider mounting rclone remotes as network shares, as Windows expects
normal drives to be fast and reliable, while cloud storage is far from
that. See also the [Limitations](#limitations) section below for more
info.

Add "--fuse-flag --VolumePrefix=\server\share" to your "mount"
command, **replacing "share" with any other name of your choice if you
are mounting more than one remote**. Otherwise, the mountpoints will
conflict and your mounted filesystems will overlap.

[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping)

## Limitations

Without the use of "--vfs-cache-mode" this can only write files
sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
"--vfs-cache-mode writes" or "--vfs-cache-mode full". See the [File
Caching](#file-caching) section for more info.

The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
Hubic) do not support the concept of empty directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.

Only supported on Linux, FreeBSD, OS X and Windows at the moment.

## rclone mount vs rclone sync/copy

File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. Look at the [file caching](#file-caching) section for
solutions to make mount more reliable.

## Attribute caching

You can use the flag --attr-timeout to set the time the kernel caches
the attributes (size, modification time etc) for directory entries.
The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel. In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as [rclone using too much memory](https://github.com/rclone/rclone/issues/2157), [rclone not serving files to samba](https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112) and [excessive time listing directories](https://github.com/rclone/rclone/issues/2095#issuecomment-371141147). The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above. If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above. If files don't change on the remote outside of the control of rclone then there is no chance of corruption. This is the same as setting the attr_timeout option in mount.fuse. ## Filters Note that all the rclone filters can be used to select a subset of the files to be visible in the mount. ## systemd When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode. ## chunked reading ### --vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests. When --vfs-read-chunk-size-limit is also specified and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely. With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. ## VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. ## VFS Directory Cache Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for. 
    --poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
the directory cache expires if the backend configured does not support
polling for changes. If the backend supports polling, changes will be
picked up within the polling interval.

You can send a `SIGHUP` signal to rclone for it to flush all directory
caches, regardless of how old they are. Assuming only one rclone
instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a [remote control](/rc) then you can use
rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

## VFS File Buffering

The `--buffer-size` flag determines the amount of memory that will be
used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory
at all times. The buffered data is bound to one open file and won't be
shared.

This flag is an upper limit for the used memory per open file. The
buffer will only use memory for data that is downloaded but not yet
read. If the buffer is empty, only a small amount of memory will be
used.

The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.

## VFS File Caching

These flags control the VFS file caching options. File caching is
necessary to make the VFS layer appear compatible with a normal file
system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and
write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.

    --cache-dir string                     Directory rclone will use for caching.
    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration           Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration              Time to writeback files after last use when using cache. (default 5s)

If run with `-vv` rclone will print the location of the file cache.
The files are stored in the user cache file area which is OS dependent
but can be controlled with `--cache-dir` or setting the appropriate
environment variable.

The cache has 4 different modes selected by `--vfs-cache-mode`. The
higher the cache mode the more compatible rclone becomes at the cost
of using disk space.

Note that files are written back to the remote only when they are
closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.

If using --vfs-cache-max-size note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
evicted from the cache.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the
remote and write directly to the remote without caching anything on
disk.
This will mean some operations are not possible

  * Files can't be opened for both read AND write
  * Files opened for write can't be seeked
  * Existing files opened for write must have O_TRUNC set
  * Files open for read with O_TRUNC will be opened write only
  * Files open for write only will behave as if O_TRUNC was supplied
  * Open modes O_APPEND, O_TRUNC are ignored
  * If an upload fails it can't be retried

### --vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND
write will be buffered to disk. This means that files opened for write
will be a lot more compatible, but uses the minimal disk space.

These operations are not possible

  * Files opened for write only can't be seeked
  * Existing files opened for write must have O_TRUNC set
  * Files opened for write only will ignore O_APPEND, O_TRUNC
  * If an upload fails it can't be retried

### --vfs-cache-mode writes

In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing
intervals up to 1 minute.

### --vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When
data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone
will keep track of which bits of the files it has downloaded. So if an
application only reads the start of each file, then rclone will only
buffer the start of the file. These files will appear to be their full
size in the cache, but they will be sparse files with only the data
that has been downloaded present in them.

This mode should support all normal file system operations and is
otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus
--vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory
whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for
performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag
(or use --use-server-modtime for a slightly different effect) as each
read of the modification time takes a transaction.

    --no-checksum     Don't compare checksums on up/download.
    --no-modtime      Don't read/write the modification time (can speed things up).
    --no-seek         Don't allow seeking in files.
    --read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This is advantageous because some cloud providers
account for reads being all the data requested, not all the data
delivered.

Rclone will keep doubling the chunk size requested starting at
--vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit
unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
write to come in. These flags only come into effect when not using an
on disk cache file.

    --vfs-read-wait duration    Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

## VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by
case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but
case-preserving: although existing files can be opened using any case,
the exact case used to create the file is preserved and available for
programs to query. It is not allowed for two files in the same
directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to
make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles
these two cases. If its value is "false", rclone passes file names to
the mounted file system as-is. If the flag is "true" (or appears
without a value on command line), rclone may perform a "fixup" as
explained below.

The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument
refers to an existing file with exactly the same name, then the case
of the existing file on the disk will be used. However, if a file name
with exactly the same name is not found but a name differing only by
case exists, rclone will transparently fixup the name. This fixup
happens only when an existing file is requested. Case sensitivity of
file names created anew by rclone is controlled by an underlying
mounted file system.

Note that case sensitivity of the operating system running rclone (the
target) may differ from case sensitivity of a file system mounted by
rclone (the source). The flag controls whether "fixup" is performed to
satisfy the target.

If the flag is not provided on the command line, then its default
value depends on the operating system where rclone runs: "true" on
Windows and macOS, "false" otherwise. If the flag is provided without
a value, then it is "true".

```
rclone mount remote:path /path/to/mountpoint [flags]
```

## Options

```
      --allow-non-empty                        Allow mounting over a non-empty directory (not Windows).
      --allow-other                            Allow access to other users.
      --allow-root                             Allow access to root user.
      --async-read                             Use asynchronous reads. (default true)
      --attr-timeout duration                  Time for which file/directory attributes are cached. (default 1s)
      --daemon                                 Run mount as a daemon (background mode).
      --daemon-timeout duration                Time limit for rclone to respond to kernel (not supported by all OSes).
      --debug-fuse                             Debug the FUSE internals - needs -v.
      --default-permissions                    Makes kernel enforce access control based on the file mode.
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --fuse-flag stringArray                  Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for mount
      --max-read-ahead SizeSuffix              The number of bytes that can be prefetched for sequential reads. (default 128k)
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
  -o, --option stringArray                     Option for libfuse/WinFsp. Repeat if required.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem.
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
      --volname string                         Set the volume name (not supported by all OSes).
      --write-back-cache                       Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_move.md
---
title: "rclone move"
description: "Move files from source to dest."
slug: rclone_move
url: /commands/rclone_move/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/move/ and as part of making a release run "make commanddocs"
---
# rclone move

Move files from source to dest.

## Synopsis

Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap and
the remote does not support a server side directory move operation.

If no filters are in use and if possible this will server side move
`source:path` into `dest:path`. After this `source:path` will no
longer exist.

Otherwise for each file in `source:path` selected by the filters (if
any) this will move it into `dest:path`. If possible a server side
move will be used, otherwise it will copy it (server side if possible)
into `dest:path` then delete the original (if no errors on copy) in
`source:path`.

If you want to delete empty source directories after move, use the
--delete-empty-src-dirs flag.

See the [--no-traverse](/docs/#no-traverse) option for controlling
whether rclone lists the destination directory or not. Supplying this
option when moving a small number of files into a large destination
can speed transfers up greatly.

**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
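For example, a cautious workflow previews the transfer before committing
to it; a sketch with placeholder paths:

```
# See what would be moved without touching anything...
rclone move --dry-run /data/finished remote:archive

# ...then do the move and remove now-empty source directories.
rclone move --delete-empty-src-dirs /data/finished remote:archive
```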
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics.

```
rclone move source:path dest:path [flags]
```

## Options

```
      --create-empty-src-dirs   Create empty source dirs on destination after move
      --delete-empty-src-dirs   Delete empty source dirs after move
  -h, --help                    help for move
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_moveto.md
---
title: "rclone moveto"
description: "Move file or directory from source to dest."
slug: rclone_moveto
url: /commands/rclone_moveto/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/moveto/ and as part of making a release run "make commanddocs"
---
# rclone moveto

Move file or directory from source to dest.

## Synopsis

If source:path is a file or directory then it moves it to a file or
directory named dest:path.

This can be used to rename files or upload single files to other than
their existing name. If the source is a directory then it acts exactly
like the move command.

So

    rclone moveto src dst

where src and dst are rclone paths, either remote:path or
/path/to/local or C:\windows\path\if\on\windows.

This will:

    if src is file
        move it to dst, overwriting an existing file if it exists
    if src is directory
        move it to dst, overwriting existing files if they exist

    see move command for full details

This doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. src will be deleted on successful
transfer.

**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.

**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics.

```
rclone moveto source:path dest:path [flags]
```

## Options

```
  -h, --help   help for moveto
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_ncdu.md
---
title: "rclone ncdu"
description: "Explore a remote with a text based user interface."
slug: rclone_ncdu
url: /commands/rclone_ncdu/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/ncdu/ and as part of making a release run "make commanddocs"
---
# rclone ncdu

Explore a remote with a text based user interface.

## Synopsis

This displays a text based user interface allowing the navigation of a
remote. It is most useful for answering the question - "What is using
all my disk space?".

{{< asciinema 157793 >}}

To make the user interface it first scans the entire remote given and
builds an in memory representation. rclone ncdu can be used during
this scanning phase and you will see it building up the directory
structure as it goes along.

Here are the keys - press '?' to toggle the help on and off

     ↑,↓ or k,j to Move
     →,l to enter
     ←,h to return
     c toggle counts
     g toggle graph
     n,s,C sort by name,size,count
     d delete file/directory
     y copy current path to clipboard
     Y display current path
     ^L refresh screen
     ? to toggle help on and off
     q/ESC/c-C to quit

This is an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but
for rclone remotes. It is missing lots of features at the moment but
is useful as it stands.

Note that it might take some time to delete big files/folders.
The UI won't respond in the meantime since the deletion is done
synchronously.

```
rclone ncdu remote:path [flags]
```

## Options

```
  -h, --help   help for ncdu
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_obscure.md
---
title: "rclone obscure"
description: "Obscure password for use in the rclone config file."
slug: rclone_obscure
url: /commands/rclone_obscure/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/obscure/ and as part of making a release run "make commanddocs"
---
# rclone obscure

Obscure password for use in the rclone config file.

## Synopsis

In the rclone config file, human readable passwords are obscured.
Obscuring them is done by encrypting them and writing them out in
base64. This is **not** a secure way of encrypting these passwords as
rclone can decrypt them - it is to prevent "eyedropping" - namely
someone seeing a password in the rclone config file by accident.

Many equally important things (like access tokens) are not obscured in
the config file. However it is very hard to shoulder surf a 64
character hex token.

This command can also accept a password through STDIN instead of an
argument by passing a hyphen as an argument. Example:

    echo "secretpassword" | rclone obscure -

If there is no data on STDIN to read, rclone obscure will default to
obfuscating the hyphen itself.

If you want to encrypt the config file then please use config file
encryption - see [rclone config](/commands/rclone_config/) for more
info.

```
rclone obscure password [flags]
```

## Options

```
  -h, --help   help for obscure
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_purge.md
---
title: "rclone purge"
description: "Remove the path and all of its contents."
slug: rclone_purge
url: /commands/rclone_purge/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/purge/ and as part of making a release run "make commanddocs"
---
# rclone purge

Remove the path and all of its contents.

## Synopsis

Remove the path and all of its contents. Note that this does not obey
include/exclude filters - everything will be removed. Use `delete` if
you want to selectively delete files.

**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.

```
rclone purge remote:path [flags]
```

## Options

```
  -h, --help   help for purge
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_rc.md
---
title: "rclone rc"
description: "Run a command against a running rclone."
slug: rclone_rc
url: /commands/rclone_rc/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/rc/ and as part of making a release run "make commanddocs"
---
# rclone rc

Run a command against a running rclone.

## Synopsis

This runs a command against a running rclone. Use the --url flag to
specify a non-default URL to connect on.
This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port" A username and password can be passed in with --user and --pass. Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass. Arguments should be passed in as parameter=value. The result will be returned as a JSON object by default. The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values. The -o/--opt option can be used to set a key "opt" with key, value options in the form "-o key=value" or "-o key". It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings. -o key=value -o key2 Will place this in the "opt" value {"key":"value", "key2","") The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings. -a value -a value2 Will place this in the "arg" value ["value", "value2"] Use --loopback to connect to the rclone instance running "rclone rc". This is very useful for testing commands without having to run an rclone rc server, eg: rclone rc --loopback operations/about fs=/ Use "rclone rc" to see a list of all possible commands. ``` rclone rc commands parameter [flags] ``` ## Options ``` -a, --arg stringArray Argument placed in the "arg" array. -h, --help help for rc --json string Input JSON - use instead of key=value args. --loopback If set connect to this rclone instance not via HTTP. --no-output If set don't output the JSON result. -o, --opt stringArray Option in the form name=value or name placed in the "opt" array. --pass string Password to use to connect to rclone remote control. --url string URL to connect to rclone remote control. (default "http://localhost:5572/") --user string Username to use to rclone remote control. ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_rcat.md000066400000000000000000000031461375552240400221200ustar00rootroot00000000000000--- title: "rclone rcat" description: "Copies standard input to file on remote." slug: rclone_rcat url: /commands/rclone_rcat/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/rcat/ and as part of making a release run "make commanddocs" --- # rclone rcat Copies standard input to file on remote. ## Synopsis rclone rcat reads from standard input (stdin) and copies it to a single remote file. echo "hello world" | rclone rcat remote:path/to/file ffmpeg - | rclone rcat remote:path/to/file If the remote file already exists, it will be overwritten. rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through `--streaming-upload-cutoff`. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance. 
Note that the upload cannot be retried either, because the data is not
kept around until the upload succeeds. If you need to transfer a lot
of data, you're better off caching it locally and then moving it to
the destination with `rclone move`.

```
rclone rcat remote:path [flags]
```

## Options

```
  -h, --help   help for rcat
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_rcd.md
---
title: "rclone rcd"
description: "Run rclone listening to remote control commands only."
slug: rclone_rcd
url: /commands/rclone_rcd/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/rcd/ and as part of making a release run "make commanddocs"
---
# rclone rcd

Run rclone listening to remote control commands only.

## Synopsis

This runs rclone so that it only listens to remote control commands.

This is useful if you are controlling rclone via the rc API.

If you pass in a path to a directory, rclone will serve that directory
for GET requests on the URL passed in. It will also open the URL in
the browser when rclone is run.

See the [rc documentation](/rc/) for more info on the rc flags.

```
rclone rcd <path to files to serve>* [flags]
```

## Options

```
  -h, --help   help for rcd
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_rmdir.md
---
title: "rclone rmdir"
description: "Remove the path if empty."
slug: rclone_rmdir
url: /commands/rclone_rmdir/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/rmdir/ and as part of making a release run "make commanddocs"
---
# rclone rmdir

Remove the path if empty.

## Synopsis

Remove the path. Note that you can't remove a path with objects in it,
use purge for that.

```
rclone rmdir remote:path [flags]
```

## Options

```
  -h, --help   help for rmdir
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_rmdirs.md
---
title: "rclone rmdirs"
description: "Remove empty directories under the path."
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/rmdirs/ and as part of making a release run "make commanddocs"
---
# rclone rmdirs

Remove empty directories under the path.

## Synopsis

This removes any empty directories (or directories that only contain
empty directories) under the path that it finds, including the path if
it has nothing in it.

If you supply the --leave-root flag, it will not remove the root
directory.

This is useful for tidying up remotes that rclone has left a lot of
empty directories in.

```
rclone rmdirs remote:path [flags]
```

## Options

```
  -h, --help         help for rmdirs
      --leave-root   Do not remove root directory if empty
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
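A short usage sketch of the above (the remote name is a placeholder):

```
# Prune empty directories under remote:backup but keep remote:backup itself.
rclone rmdirs --leave-root remote:backup
```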
rclone-1.53.3/docs/content/commands/rclone_serve.md000066400000000000000000000025121375552240400223070ustar00rootroot00000000000000--- title: "rclone serve" description: "Serve a remote over a protocol." slug: rclone_serve url: /commands/rclone_serve/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/ and as part of making a release run "make commanddocs" --- # rclone serve Serve a remote over a protocol. ## Synopsis rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg rclone serve http remote: Each subcommand has its own options which you can see in their help. ``` rclone serve [opts] [flags] ``` ## Options ``` -h, --help help for serve ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone serve dlna](/commands/rclone_serve_dlna/) - Serve remote:path over DLNA * [rclone serve ftp](/commands/rclone_serve_ftp/) - Serve remote:path over FTP. * [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP. * [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API. * [rclone serve sftp](/commands/rclone_serve_sftp/) - Serve the remote over SFTP. * [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav. rclone-1.53.3/docs/content/commands/rclone_serve_dlna.md000066400000000000000000000344071375552240400233150ustar00rootroot00000000000000--- title: "rclone serve dlna" description: "Serve remote:path over DLNA" slug: rclone_serve_dlna url: /commands/rclone_serve_dlna/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/dlna/ and as part of making a release run "make commanddocs" --- # rclone serve dlna Serve remote:path over DLNA ## Synopsis rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs. Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly. ## Server options Use `--addr` to specify which IP address and port the server should listen on, eg `--addr 1.2.3.4:8000` or `--addr :8080` to listen to all IPs. Use `--name` to choose the friendly server name, which is by default "rclone (hostname)". Use `--log-trace` in conjunction with `-vv` to enable additional debug logging of all UPNP traffic. ## VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. 
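Putting the server options above together, a typical invocation might look like this (the address, friendly name and remote are illustrative):

    rclone serve dlna --addr :7879 --name "rclone media" remote:media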
## VFS Directory Cache Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --poll-interval duration Time to wait between polling for changes. However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval. You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: kill -SIGHUP $(pidof rclone) If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: rclone rc vfs/forget Or individual files or directories: rclone rc vfs/forget file=path/to/file dir=path/to/dir ## VFS File Buffering The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance. Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared. This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`. ## VFS File Caching These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable. The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space. Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
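For example (the size and interval values are illustrative), a bounded cache for this server could be configured like this:

    rclone serve dlna remote:media --vfs-cache-mode writes --vfs-cache-max-size 10G --vfs-cache-poll-interval 1m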
### --vfs-cache-mode off In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible * Files can't be opened for both read AND write * Files opened for write can't be seeked * Existing files opened for write must have O_TRUNC set * Files open for read with O_TRUNC will be opened write only * Files open for write only will behave as if O_TRUNC was supplied * Open modes O_APPEND, O_TRUNC are ignored * If an upload fails it can't be retried ### --vfs-cache-mode minimal This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. These operations are not possible * Files opened for write only can't be seeked * Existing files opened for write must have O_TRUNC set * Files opened for write only will ignore O_APPEND, O_TRUNC * If an upload fails it can't be retried ### --vfs-cache-mode writes In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried at exponentially increasing intervals up to 1 minute. ### --vfs-cache-mode full In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well. In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them. This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes. When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk. When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required. **IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. ## VFS Performance These flags may be used to enable/disable features of the VFS for performance or other reasons. In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Mount read-only. When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered. Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off") Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s) ## VFS Case Sensitivity Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". ``` rclone serve dlna remote:path [flags] ``` ## Options ``` --addr string ip:port or :port to bind the DLNA http server to. (default ":7879") --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-perms FileMode Directory permissions (default 0777) --file-perms FileMode File permissions (default 0666) --gid uint32 Override the gid field set by the filesystem. (default 1000) -h, --help help for dlna --log-trace enable trace logging of SOAP traffic --name string name of DLNA server --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) --read-only Mount read-only. --uid uint32 Override the uid field set by the filesystem. (default 1000) --umask int Override the permission bits set by the filesystem.
(default 2) --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match. --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full. --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s) ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. rclone-1.53.3/docs/content/commands/rclone_serve_ftp.md000066400000000000000000000420131375552240400231600ustar00rootroot00000000000000--- title: "rclone serve ftp" description: "Serve remote:path over FTP." slug: rclone_serve_ftp url: /commands/rclone_serve_ftp/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/ftp/ and as part of making a release run "make commanddocs" --- # rclone serve ftp Serve remote:path over FTP. ## Synopsis rclone serve ftp implements a basic ftp server to serve the remote over the FTP protocol. This can be viewed with an ftp client or you can make a remote of type ftp to read and write it. ## Server options Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. ### Authentication By default this will serve files without needing a login. You can set a single username and password with the --user and --pass flags. ## VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. ## VFS Directory Cache Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --poll-interval duration Time to wait between polling for changes.
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval. You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: kill -SIGHUP $(pidof rclone) If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: rclone rc vfs/forget Or individual files or directories: rclone rc vfs/forget file=path/to/file dir=path/to/dir ## VFS File Buffering The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance. Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared. This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`. ## VFS File Caching These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable. The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space. Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. ### --vfs-cache-mode off In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible * Files can't be opened for both read AND write * Files opened for write can't be seeked * Existing files opened for write must have O_TRUNC set * Files open for read with O_TRUNC will be opened write only * Files open for write only will behave as if O_TRUNC was supplied * Open modes O_APPEND, O_TRUNC are ignored * If an upload fails it can't be retried ### --vfs-cache-mode minimal This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. These operations are not possible * Files opened for write only can't be seeked * Existing files opened for write must have O_TRUNC set * Files opened for write only will ignore O_APPEND, O_TRUNC * If an upload fails it can't be retried ### --vfs-cache-mode writes In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried at exponentially increasing intervals up to 1 minute. ### --vfs-cache-mode full In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well. In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them. This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes. When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk. When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required. **IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. ## VFS Performance These flags may be used to enable/disable features of the VFS for performance or other reasons. In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Mount read-only. When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered. Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit. --vfs-read-chunk-size SizeSuffix Read the source objects in chunks.
(default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off") Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s) ## VFS Case Sensitivity Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". ## Auth Proxy If you supply the parameter `--auth-proxy /path/to/program` then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT. **PLEASE NOTE:** `--auth-proxy` and `--authorized-keys` cannot be used together, if `--auth-proxy` is set the authorized keys option will be ignored. There is an example program [bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) in the rclone source code. The program's job is to take a `user` and `pass` on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
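As a minimal sketch of such a program (illustrative only - the fixed `sftp.example.com` host is an assumption, and the `_root`/`_obscure` keys it emits are described just below):

```
#!/usr/bin/env python3
# Illustrative auth proxy sketch for --auth-proxy: reads one JSON object
# with "user" and "pass" on STDIN, writes a backend config on STDOUT.
import json
import sys

request = json.load(sys.stdin)          # e.g. {"user": "me", "pass": "mypassword"}

config = {
    "type": "sftp",                     # backend type created on the fly
    "_root": "",                        # root to use for the backend
    "_obscure": "pass",                 # tell rclone to obscure this field
    "user": request["user"],
    "pass": request.get("pass", ""),
    "host": "sftp.example.com",         # assumption: one fixed host for all users
}

json.dump(config, sys.stdout)
```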
The generated config must have this extra parameter - `_root` - root to use for the backend And it may have this parameter - `_obscure` - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: ``` { "user": "me", "pass": "mypassword" } ``` If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: ``` { "user": "me", "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ``` And as an example the program might return this on STDOUT ``` { "type": "sftp", "_root": "", "_obscure": "pass", "user": "me", "pass": "mypassword", "host": "sftp.example.com" } ``` This would mean that an SFTP backend would be created on the fly for the `user` and `pass`/`public_key` returned in the output to the host given. Note that since `_obscure` is set to `pass`, rclone will obscure the `pass` parameter before creating the backend (which is required for sftp backends). The program can manipulate the supplied `user` in any way, for example, to proxy many different sftp backends, you could make the `user` be `user@example.com` and then set the `host` to `example.com` in the output and the user to `user`. For security you'd probably want to restrict the `host` to a limited list. Note that an internal cache is keyed on `user` so only use that for configuration, don't use `pass` or `public_key`. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect. This can be used to build general purpose proxies to any kind of backend that rclone supports. ``` rclone serve ftp remote:path [flags] ``` ## Options ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121") --auth-proxy string A program to use to create the backend from the auth. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-perms FileMode Directory permissions (default 0777) --file-perms FileMode File permissions (default 0666) --gid uint32 Override the gid field set by the filesystem. (default 1000) -h, --help help for ftp --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --pass string Password for authentication. (empty value allow every password) --passive-port string Passive port range to use. (default "30000-32000") --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) --public-ip string Public IP address to advertise for passive connections. --read-only Mount read-only. --uid uint32 Override the uid field set by the filesystem. (default 1000) --umask int Override the permission bits set by the filesystem. (default 2) --user string User name for authentication. (default "anonymous") --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match. --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s) ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. rclone-1.53.3/docs/content/commands/rclone_serve_http.md000066400000000000000000000436621375552240400233550ustar00rootroot00000000000000--- title: "rclone serve http" description: "Serve the remote over HTTP." slug: rclone_serve_http url: /commands/rclone_serve_http/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/http/ and as part of making a release run "make commanddocs" --- # rclone serve http Serve the remote over HTTP. ## Synopsis rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it. You can use the filter flags (eg --include, --exclude) to control what is served. The server will log errors. Use -v to see access logs. --bwlimit will be respected for file transfers. Use --stats to control the stats printing. ## Server options Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer. --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically. --template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages: | Parameter | Description | | :---------- | :---------- | | .Name | The full path of a file/directory. | | .Title | Directory listing of .Name | | .Sort | The current sort used. This is changeable via ?sort= parameter | | | Sort Options: namedirfirst,name,size,time (default namedirfirst) | | .Order | The current ordering used. This is changeable via ?order= parameter | | | Order Options: asc,desc (default asc) | | .Query | Currently unused. | | .Breadcrumb | Allows for creating a relative navigation | |-- .Link | The relative to the root link of the Text. | |-- .Text | The Name of the directory. | | .Entries | Information about a specific file/directory. | |-- .URL | The 'url' of an entry.
| |-- .Leaf | Currently same as 'URL' but intended to be 'just' the name. | |-- .IsDir | Boolean for if an entry is a directory or not. | |-- .Size | Size in Bytes of the entry. | |-- .ModTime | The UTC timestamp of an entry. | ### Authentication By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags. Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. To create an htpasswd file: touch htpasswd htpasswd -B htpasswd user htpasswd -B htpasswd anotherUser The password file can be updated while rclone is running. Use --realm to set the authentication realm. ### SSL/TLS By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also. --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate. ## VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. ## VFS Directory Cache Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --poll-interval duration Time to wait between polling for changes. However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval. You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: kill -SIGHUP $(pidof rclone) If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: rclone rc vfs/forget Or individual files or directories: rclone rc vfs/forget file=path/to/file dir=path/to/dir ## VFS File Buffering The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance. Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared. This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.
The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`. ## VFS File Caching These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable. The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space. Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. ### --vfs-cache-mode off In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible * Files can't be opened for both read AND write * Files opened for write can't be seeked * Existing files opened for write must have O_TRUNC set * Files open for read with O_TRUNC will be opened write only * Files open for write only will behave as if O_TRUNC was supplied * Open modes O_APPEND, O_TRUNC are ignored * If an upload fails it can't be retried ### --vfs-cache-mode minimal This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. These operations are not possible * Files opened for write only can't be seeked * Existing files opened for write must have O_TRUNC set * Files opened for write only will ignore O_APPEND, O_TRUNC * If an upload fails it can't be retried ### --vfs-cache-mode writes In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried at exponentially increasing intervals up to 1 minute. ### --vfs-cache-mode full In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them. This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes. When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk. When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required. **IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. ## VFS Performance These flags may be used to enable/disable features of the VFS for performance or other reasons. In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Mount read-only. When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered. Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit. --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off") Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s) ## VFS Case Sensitivity Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". ``` rclone serve http remote:path [flags] ``` ## Options ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") --baseurl string Prefix for URLs - leave blank for root. --cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-perms FileMode Directory permissions (default 0777) --file-perms FileMode File permissions (default 0666) --gid uint32 Override the gid field set by the filesystem. (default 1000) -h, --help help for http --htpasswd string htpasswd file - if not provided no authentication is done --key string SSL PEM Private key --max-header-bytes int Maximum size of request header (default 4096) --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --pass string Password for authentication. --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) --read-only Mount read-only. --realm string realm for authentication (default "rclone") --server-read-timeout duration Timeout for server reading data (default 1h0m0s) --server-write-timeout duration Timeout for server writing data (default 1h0m0s) --template string User Specified Template. --uid uint32 Override the uid field set by the filesystem. (default 1000) --umask int Override the permission bits set by the filesystem. (default 2) --user string User name for authentication. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match. --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full. --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. 
(default off) --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s) ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. rclone-1.53.3/docs/content/commands/rclone_serve_restic.md000066400000000000000000000174151375552240400236640ustar00rootroot00000000000000--- title: "rclone serve restic" description: "Serve the remote for restic's REST API." slug: rclone_serve_restic url: /commands/rclone_serve_restic/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/restic/ and as part of making a release run "make commanddocs" --- # rclone serve restic Serve the remote for restic's REST API. ## Synopsis rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly. [Restic](https://restic.net/) is a command line program for doing backups. The server will log errors. Use -v to see access logs. --bwlimit will be respected for file transfers. Use --stats to control the stats printing. ## Setting up rclone for use by restic ### First [set up a remote for your chosen cloud provider](/docs/#configure). Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions. Now start the rclone restic server rclone serve restic -v remote:backup Where you can replace "backup" in the above by whatever path in the remote you wish to use. By default this will serve on "localhost:8080"; you can change this with the "--addr" flag. You might wish to start this server on boot. ## Setting up restic to use rclone ### Now you can [follow the restic instructions](http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) on setting up restic. Note that you will need restic 0.8.2 or later to interoperate with rclone. For the example above you will want to use "http://localhost:8080/" as the URL for the REST server. For example: $ export RESTIC_REPOSITORY=rest:http://localhost:8080/ $ export RESTIC_PASSWORD=yourpassword $ restic init created restic backend 8b1a4b56ae at rest:http://localhost:8080/ Please note that knowledge of your password is required to access the repository. Losing your password means that your data is irrecoverably lost. $ restic backup /path/to/files/to/backup scan [/path/to/files/to/backup] scanned 189 directories, 312 files in 0:00 [0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00 duration: 0:00 snapshot 45c8fdd8 saved ### Multiple repositories #### Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these **must** end with /. Eg $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/ # backup user1 stuff $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/ # backup user2 stuff ### Private repositories #### The "--private-repos" flag can be used to limit users to repositories starting with a path of `/<username>/`.
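For example (user name and password illustrative), combined with an htpasswd file each user is then confined to their own prefix:

    rclone serve restic --private-repos --htpasswd /path/to/htpasswd remote:backup
    export RESTIC_REPOSITORY=rest:http://alice:secret@localhost:8080/alice/

Here the htpasswd user "alice" can only reach repositories under the /alice/ path.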
## Server options Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer. --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically. --template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages: | Parameter | Description | | :---------- | :---------- | | .Name | The full path of a file/directory. | | .Title | Directory listing of .Name | | .Sort | The current sort used. This is changeable via ?sort= parameter | | | Sort Options: namedirfirst,name,size,time (default namedirfirst) | | .Order | The current ordering used. This is changeable via ?order= parameter | | | Order Options: asc,desc (default asc) | | .Query | Currently unused. | | .Breadcrumb | Allows for creating a relative navigation | |-- .Link | The relative to the root link of the Text. | |-- .Text | The Name of the directory. | | .Entries | Information about a specific file/directory. | |-- .URL | The 'url' of an entry. | |-- .Leaf | Currently same as 'URL' but intended to be 'just' the name. | |-- .IsDir | Boolean for if an entry is a directory or not. | |-- .Size | Size in Bytes of the entry. | |-- .ModTime | The UTC timestamp of an entry. | ### Authentication By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags. Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. To create an htpasswd file: touch htpasswd htpasswd -B htpasswd user htpasswd -B htpasswd anotherUser The password file can be updated while rclone is running. Use --realm to set the authentication realm. ### SSL/TLS By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also. --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate. ``` rclone serve restic remote:path [flags] ``` ## Options ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") --append-only disallow deletion of repository data --baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with -h, --help help for restic --htpasswd string htpasswd file - if not provided no authentication is done --key string SSL PEM Private key --max-header-bytes int Maximum size of request header (default 4096) --pass string Password for authentication. --private-repos users can only access their private repo --realm string realm for authentication (default "rclone") --server-read-timeout duration Timeout for server reading data (default 1h0m0s) --server-write-timeout duration Timeout for server writing data (default 1h0m0s) --stdio run an HTTP2 server on stdin/stdout --template string User Specified Template. --user string User name for authentication. ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. rclone-1.53.3/docs/content/commands/rclone_serve_sftp.md000066400000000000000000000431411375552240400233460ustar00rootroot00000000000000--- title: "rclone serve sftp" description: "Serve the remote over SFTP." slug: rclone_serve_sftp url: /commands/rclone_serve_sftp/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/sftp/ and as part of making a release run "make commanddocs" --- # rclone serve sftp Serve the remote over SFTP. ## Synopsis rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it. You can use the filter flags (eg --include, --exclude) to control what is served. The server will log errors. Use -v to see access logs. --bwlimit will be respected for file transfers. Use --stats to control the stats printing. You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in. Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that it can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend. If you don't supply a --key then rclone will generate one and cache it for later use. By default the server binds to localhost:2022 - if you want it to be reachable externally then supply "--addr :2022" for example. Note that the default of "--vfs-cache-mode off" is fine for the rclone sftp backend, but it may not be with other SFTP clients. ## VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. ## VFS Directory Cache Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

## VFS File Buffering

The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`. For example, ten open files with `--buffer-size 16M` could use up to 160M between them.

## VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
    --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable.

The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible

 * Files can't be opened for both read AND write
 * Files opened for write can't be seeked
 * Existing files opened for write must have O_TRUNC set
 * Files open for read with O_TRUNC will be opened write only
 * Files open for write only will behave as if O_TRUNC was supplied
 * Open modes O_APPEND, O_TRUNC are ignored
 * If an upload fails it can't be retried

### --vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible

 * Files opened for write only can't be seeked
 * Existing files opened for write must have O_TRUNC set
 * Files opened for write only will ignore O_APPEND, O_TRUNC
 * If an upload fails it can't be retried

### --vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

### --vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum     Don't compare checksums on up/download.
    --no-modtime      Don't read/write the modification time (can speed things up).
    --no-seek         Don't allow seeking in files.
    --read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")
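As a rough illustration of the doubling behaviour described above (a sketch only, not rclone's actual code), the requested chunk size grows like this:

```go
package main

import "fmt"

// nextChunkSize sketches how a chunked reader might grow its request
// size: double each time, clamped to limit. A limit of 0 stands in
// for "off" (no limit).
func nextChunkSize(current, limit int64) int64 {
	next := current * 2
	if limit > 0 && next > limit {
		return limit
	}
	return next
}

func main() {
	const mb = 1024 * 1024
	size := int64(128 * mb) // --vfs-read-chunk-size 128M
	var limit int64 = 0     // --vfs-read-chunk-size-limit off
	for i := 0; i < 4; i++ {
		fmt.Printf("read %d: chunk size %dM\n", i+1, size/mb)
		size = nextChunkSize(size, limit)
	}
	// prints 128M, 256M, 512M, 1024M
}
```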
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

## VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

## Auth Proxy

If you supply the parameter `--auth-proxy /path/to/program` then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

**PLEASE NOTE:** `--auth-proxy` and `--authorized-keys` cannot be used together, if `--auth-proxy` is set the authorized keys option will be ignored.

There is an example program [bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code.

The program's job is to take a `user` and `pass` on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
The config generated must have this extra parameter

- `_root` - root to use for the backend

And it may have this parameter

- `_obscure` - comma separated strings for parameters to obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

```
{
	"user": "me",
	"pass": "mypassword"
}
```

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

```
{
	"user": "me",
	"public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}
```

As an example, the program might return this on STDOUT:

```
{
	"type": "sftp",
	"_root": "",
	"_obscure": "pass",
	"user": "me",
	"pass": "mypassword",
	"host": "sftp.example.com"
}
```

This would mean that an SFTP backend would be created on the fly for the `user` and `pass`/`public_key` returned in the output to the host given. Note that since `_obscure` is set to `pass`, rclone will obscure the `pass` parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied `user` in any way, for example to proxy to many different sftp backends, you could make the `user` be `user@example.com` and then set the `host` to `example.com` in the output and the user to `user`. For security you'd probably want to restrict the `host` to a limited list.

Note that an internal cache is keyed on `user` so only use that for configuration, don't use `pass` or `public_key`. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.
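For illustration, here is a minimal auth proxy in Go following the protocol described above. This is a sketch only - `sftp.example.com` is a placeholder, and it accepts every user without verifying credentials, which a real proxy must not do:

```go
// A minimal sketch of an rclone auth proxy: read the JSON request
// on STDIN, write a backend config as JSON on STDOUT.
// Sketch only - no real credential checking is done here.
package main

import (
	"encoding/json"
	"os"
)

func main() {
	var in map[string]string
	if err := json.NewDecoder(os.Stdin).Decode(&in); err != nil {
		os.Exit(1)
	}
	out := map[string]string{
		"type":     "sftp",
		"_root":    "",
		"_obscure": "pass", // rclone obscures "pass" before creating the backend
		"user":     in["user"],
		"pass":     in["pass"],
		"host":     "sftp.example.com", // placeholder upstream host
	}
	json.NewEncoder(os.Stdout).Encode(out)
}
```

You would then run something like `rclone serve sftp remote: --auth-proxy /path/to/proxy` (paths illustrative).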
```
rclone serve sftp remote:path [flags]
```

## Options

```
      --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:2022")
      --auth-proxy string                      A program to use to create the backend from the auth.
      --authorized-keys string                 Authorized keys file (default "~/.ssh/authorized_keys")
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  -h, --help                                   help for sftp
      --key stringArray                        SSH private host key file (Can be multi-valued, leave blank to auto generate)
      --no-auth                                Allow connections with no authentication if set.
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --pass string                            Password for authentication.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --uid uint32                             Override the uid field set by the filesystem. (default 1000)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --user string                            User name for authentication.
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match.
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full.
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.

rclone-1.53.3/docs/content/commands/rclone_serve_webdav.md000066400000000000000000000520141375552240400236410ustar00rootroot00000000000000---
title: "rclone serve webdav"
description: "Serve remote:path over webdav."
slug: rclone_serve_webdav
url: /commands/rclone_serve_webdav/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/webdav/ and as part of making a release run "make commanddocs"
---
# rclone serve webdav

Serve remote:path over webdav.

## Synopsis

rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.

## Webdav options

### --etag-hash

This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.

If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1". Use "rclone hashsum" to see the full list.

## Server options

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:

| Parameter   | Description |
| :---------- | :---------- |
| .Name       | The full path of a file/directory. |
| .Title      | Directory listing of .Name |
| .Sort       | The current sort used. This is changeable via ?sort= parameter |
|             | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
| .Order      | The current ordering used. This is changeable via ?order= parameter |
|             | Order Options: asc,desc (default asc) |
| .Query      | Currently unused. |
| .Breadcrumb | Allows for creating a relative navigation |
|-- .Link     | The relative to the root link of the Text. |
|-- .Text     | The Name of the directory. |
| .Entries    | Information about a specific file/directory. |
|-- .URL      | The 'url' of an entry. |
|-- .Leaf     | Currently same as 'URL' but intended to be 'just' the name. |
|-- .IsDir    | Boolean for if an entry is a directory or not. |
|-- .Size     | Size in Bytes of the entry. |
|-- .ModTime  | The UTC timestamp of an entry. |
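As a sketch of how these exported parameters might be used in a custom --template file, the Go program below renders a minimal directory listing template against some made-up entries so it is self-contained and runnable. The template text between the backquotes is the interesting part; the field types of `entry` and `page` are assumptions for illustration, not rclone's internal types:

```go
// Render a minimal directory-listing template using the exported
// markup documented above. The data structures here are stand-ins
// so the sketch compiles and runs on its own.
package main

import (
	"html/template"
	"os"
)

type entry struct {
	URL, Leaf string
	IsDir     bool
	Size      int64
	ModTime   string
}

type page struct {
	Name, Title string
	Entries     []entry
}

const tpl = `<html><head><title>{{ .Title }}</title></head><body>
<h1>{{ .Name }}</h1>
<ul>
{{ range .Entries }}<li><a href="{{ .URL }}">{{ .Leaf }}</a> ({{ .Size }} bytes)</li>
{{ end }}</ul>
</body></html>`

func main() {
	t := template.Must(template.New("dir").Parse(tpl))
	t.Execute(os.Stdout, page{
		Name:  "/",
		Title: "Directory listing of /",
		Entries: []entry{
			{URL: "file1.txt", Leaf: "file1.txt", Size: 7},
			{URL: "subdir/", Leaf: "subdir/", IsDir: true},
		},
	})
}
```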
### Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

    touch htpasswd
    htpasswd -B htpasswd user
    htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

### SSL/TLS

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

## VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

## VFS Directory Cache

Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
    --poll-interval duration    Time to wait between polling for changes.

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache:

    rclone rc vfs/forget

Or individual files or directories:

    rclone rc vfs/forget file=path/to/file dir=path/to/dir

## VFS File Buffering

The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.
This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to `--buffer-size * open files`.

## VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
    --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
    --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache. (default off)
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-write-back duration            Time to writeback files after last use when using cache. (default 5s)

If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with `--cache-dir` or setting the appropriate environment variable.

The cache has 4 different modes selected by `--vfs-cache-mode`. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

 * Files can't be opened for both read AND write
 * Files opened for write can't be seeked
 * Existing files opened for write must have O_TRUNC set
 * Files open for read with O_TRUNC will be opened write only
 * Files open for write only will behave as if O_TRUNC was supplied
 * Open modes O_APPEND, O_TRUNC are ignored
 * If an upload fails it can't be retried

### --vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible

 * Files opened for write only can't be seeked
 * Existing files opened for write must have O_TRUNC set
 * Files opened for write only will ignore O_APPEND, O_TRUNC
 * If an upload fails it can't be retried

### --vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.
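For example, to serve a remote over webdav with write caching enabled (an illustrative command line - the remote name and credentials are placeholders):

    rclone serve webdav remote:path --addr :8080 --user me --pass mypassword --vfs-cache-mode writes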
### --vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    --no-checksum     Don't compare checksums on up/download.
    --no-modtime      Don't read/write the modification time (can speed things up).
    --no-seek         Don't allow seeking in files.
    --read-only       Mount read-only.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.

Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.

    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default "off")

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking. (default 20ms)
    --vfs-write-wait duration  Time to wait for in-sequence write before giving error. (default 1s)

## VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is.
If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". ## Auth Proxy If you supply the parameter `--auth-proxy /path/to/program` then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocl with input on STDIN and output on STDOUT. **PLEASE NOTE:** `--auth-proxy` and `--authorized-keys` cannot be used together, if `--auth-proxy` is set the authorized keys option will be ignored. There is an example program [bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) in the rclone source code. The program's job is to take a `user` and `pass` on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. This config generated must have this extra parameter - `_root` - root to use for the backend And it may have this parameter - `_obscure` - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: ``` { "user": "me", "pass": "mypassword" } ``` If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: ``` { "user": "me", "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ``` And as an example return this on STDOUT ``` { "type": "sftp", "_root": "", "_obscure": "pass", "user": "me", "pass": "mypassword", "host": "sftp.example.com" } ``` This would mean that an SFTP backend would be created on the fly for the `user` and `pass`/`public_key` returned in the output to the host given. Note that since `_obscure` is set to `pass`, rclone will obscure the `pass` parameter before creating the backend (which is required for sftp backends). The program can manipulate the supplied `user` in any way, for example to make proxy to many different sftp backends, you could make the `user` be `user@example.com` and then set the `host` to `example.com` in the output and the user to `user`. For security you'd probably want to restrict the `host` to a limited list. Note that an internal cache is keyed on `user` so only use that for configuration, don't use `pass` or `public_key`. 
This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect. This can be used to build general purpose proxies to any kind of backend that rclone supports. ``` rclone serve webdav remote:path [flags] ``` ## Options ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") --auth-proxy string A program to use to create the backend from the auth. --baseurl string Prefix for URLs - leave blank for root. --cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-perms FileMode Directory permissions (default 0777) --disable-dir-list Disable HTML directory list on GET request for a directory --etag-hash string Which hash to use for the ETag, or auto or blank for off --file-perms FileMode File permissions (default 0666) --gid uint32 Override the gid field set by the filesystem. (default 1000) -h, --help help for webdav --htpasswd string htpasswd file - if not provided no authentication is done --key string SSL PEM Private key --max-header-bytes int Maximum size of request header (default 4096) --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --pass string Password for authentication. --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) --read-only Mount read-only. --realm string realm for authentication (default "rclone") --server-read-timeout duration Timeout for server reading data (default 1h0m0s) --server-write-timeout duration Timeout for server writing data (default 1h0m0s) --template string User Specified Template. --uid uint32 Override the uid field set by the filesystem. (default 1000) --umask int Override the permission bits set by the filesystem. (default 2) --user string User name for authentication. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match. --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full. --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms) --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s) --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s) ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. 
rclone-1.53.3/docs/content/commands/rclone_settier.md000066400000000000000000000027321375552240400226460ustar00rootroot00000000000000---
title: "rclone settier"
description: "Changes storage class/tier of objects in remote."
slug: rclone_settier
url: /commands/rclone_settier/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/settier/ and as part of making a release run "make commanddocs"
---
# rclone settier

Changes storage class/tier of objects in remote.

## Synopsis

rclone settier changes storage tier or class at remote if supported. A few cloud storage services provide different storage classes for objects, for example AWS S3 and Glacier; Azure Blob storage with Hot, Cool and Archive; Google Cloud Storage with Regional Storage, Nearline, Coldline etc.

Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob storage puts objects in a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.

You can use it to tier a single object

    rclone settier Cool remote:path/file

Or use rclone filters to set the tier on only specific files

    rclone --include "*.txt" settier Hot remote:path/dir

Or just provide a remote directory and all files in the directory will be tiered

    rclone settier tier remote:path/dir

```
rclone settier tier remote:path [flags]
```

## Options

```
  -h, --help   help for settier
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_sha1sum.md000066400000000000000000000014071375552240400225460ustar00rootroot00000000000000---
title: "rclone sha1sum"
description: "Produces a sha1sum file for all the objects in the path."
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/sha1sum/ and as part of making a release run "make commanddocs"
---
# rclone sha1sum

Produces a sha1sum file for all the objects in the path.

## Synopsis

Produces a sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.

```
rclone sha1sum remote:path [flags]
```

## Options

```
  -h, --help   help for sha1sum
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_size.md000066400000000000000000000013301375552240400221320ustar00rootroot00000000000000---
title: "rclone size"
description: "Prints the total size and number of objects in remote:path."
slug: rclone_size
url: /commands/rclone_size/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/size/ and as part of making a release run "make commanddocs"
---
# rclone size

Prints the total size and number of objects in remote:path.

## Synopsis

Prints the total size and number of objects in remote:path.

```
rclone size remote:path [flags]
```

## Options

```
  -h, --help   help for size
      --json   format output as JSON
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone-1.53.3/docs/content/commands/rclone_sync.md000066400000000000000000000031661375552240400221450ustar00rootroot00000000000000---
title: "rclone sync"
description: "Make source and dest identical, modifying destination only."
slug: rclone_sync url: /commands/rclone_sync/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/sync/ and as part of making a release run "make commanddocs" --- # rclone sync Make source and dest identical, modifying destination only. ## Synopsis Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. rclone sync -i SOURCE remote:DESTINATION Note that files in the destination won't be deleted if there were any errors at any point. It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the `copy` command above if unsure. If dest:path doesn't exist, it is created and the source:path contents go there. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics ``` rclone sync source:path dest:path [flags] ``` ## Options ``` --create-empty-src-dirs Create empty source dirs on destination after sync -h, --help help for sync ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_touch.md000066400000000000000000000026201375552240400223050ustar00rootroot00000000000000--- title: "rclone touch" description: "Create new file or change file modification time." slug: rclone_touch url: /commands/rclone_touch/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/touch/ and as part of making a release run "make commanddocs" --- # rclone touch Create new file or change file modification time. ## Synopsis Set the modification time on object(s) as specified by remote:path to have the current time. If remote:path does not exist then a zero sized object will be created unless the --no-create flag is provided. If --timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of: - 'YYMMDD' - eg. 17.10.30 - 'YYYY-MM-DDTHH:MM:SS' - eg. 2006-01-02T15:04:05 - 'YYYY-MM-DDTHH:MM:SS.SSS' - eg. 2006-01-02T15:04:05.123456789 Note that --timestamp is in UTC if you want local time then add the --localtime flag. ``` rclone touch remote:path [flags] ``` ## Options ``` -h, --help help for touch --localtime Use localtime for timestamp, not UTC. -C, --no-create Do not create the file if it does not exist. -t, --timestamp string Use specified time instead of the current time of day. ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_tree.md000066400000000000000000000046561375552240400221350ustar00rootroot00000000000000--- title: "rclone tree" description: "List the contents of the remote in a tree like fashion." slug: rclone_tree url: /commands/rclone_tree/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/tree/ and as part of making a release run "make commanddocs" --- # rclone tree List the contents of the remote in a tree like fashion. 
## Synopsis rclone tree lists the contents of a remote in a similar way to the unix tree command. For example $ rclone tree remote:path / ├── file1 ├── file2 ├── file3 └── subdir ├── file4 └── file5 1 directories, 5 files You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list. The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options. ``` rclone tree remote:path [flags] ``` ## Options ``` -a, --all All files are listed (list . files too). -C, --color Turn colorization on always. -d, --dirs-only List directories only. --dirsfirst List directories before files (-U disables). --full-path Print the full path prefix for each file. -h, --help help for tree --human Print the size in a more human readable way. --level int Descend only level directories deep. -D, --modtime Print the date of last modification. --noindent Don't print indentation lines. --noreport Turn off file/directory count at end of tree listing. -o, --output string Output to file instead of stdout. -p, --protections Print the protections for each file. -Q, --quote Quote filenames with double quotes. -s, --size Print the size in bytes of each file. --sort string Select sort: name,version,size,mtime,ctime. --sort-ctime Sort files by last status change time. -t, --sort-modtime Sort files by last modification time. -r, --sort-reverse Reverse the order of the sort. -U, --unsorted Leave files unsorted. --version Sort files alphanumerically by version. ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. rclone-1.53.3/docs/content/commands/rclone_version.md000066400000000000000000000024331375552240400226520ustar00rootroot00000000000000--- title: "rclone version" description: "Show the version number." slug: rclone_version url: /commands/rclone_version/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/version/ and as part of making a release run "make commanddocs" --- # rclone version Show the version number. ## Synopsis Show the version number, the go version and the architecture. Eg $ rclone version rclone v1.41 - os/arch: linux/amd64 - go version: go1.10 If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta. $ rclone version --check yours: 1.42.0.6 latest: 1.42 (released 2018-06-16) beta: 1.42.0.5 (released 2018-06-17) Or $ rclone version --check yours: 1.41 latest: 1.42 (released 2018-06-16) upgrade: https://downloads.rclone.org/v1.42 beta: 1.42.0.5 (released 2018-06-17) upgrade: https://beta.rclone.org/v1.42-005-g56e1e820 ``` rclone version [flags] ``` ## Options ``` --check Check for new version. -h, --help help for version ``` See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. 
rclone-1.53.3/docs/content/contact.md000066400000000000000000000012771375552240400174620ustar00rootroot00000000000000--- title: "Contact" description: "Contact the rclone project" --- # Contact the rclone project # ## Forum ## Forum for questions and general discussion: * https://forum.rclone.org ## GitHub repository ## The project's repository is located at: * https://github.com/rclone/rclone There you can file bug reports or contribute with pull requests. ## Twitter ## You can also follow me on twitter for rclone announcements: * [@njcw](https://twitter.com/njcw) ## Email ## Or if all else fails or you want to ask something private or confidential email [Nick Craig-Wood](mailto:nick@craig-wood.com). Please don't email me requests for help - those are better directed to the forum. Thanks! rclone-1.53.3/docs/content/crypt.md000066400000000000000000000406221375552240400171650ustar00rootroot00000000000000--- title: "Crypt" description: "Encryption overlay remote" --- {{< icon "fa fa-lock" >}}Crypt ---------------------------------------- Rclone `crypt` remotes encrypt and decrypt other remotes. To use `crypt`, first set up the underlying remote. Follow the `rclone config` instructions for that remote. `crypt` applied to a local pathname instead of a remote will encrypt and decrypt that directory, and can be used to encrypt USB removable drives. Before configuring the crypt remote, check the underlying remote is working. In this example the underlying remote is called `remote:path`. Anything inside `remote:path` will be encrypted and anything outside will not. In the case of an S3 based underlying remote (eg Amazon S3, B2, Swift) it is generally advisable to define a crypt remote in the underlying remote `s3:bucket`. If `s3:` alone is specified alongside file name encryption, rclone will encrypt the bucket name. Configure `crypt` using `rclone config`. In this example the `crypt` remote is called `secret`, to differentiate it from the underlying `remote`. ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> secret Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Encrypt/Decrypt a remote \ "crypt" [snip] Storage> crypt Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). remote> remote:path How to encrypt the filenames. Choose a number from below, or type in your own value 1 / Don't encrypt the file names. Adds a ".bin" extension only. \ "off" 2 / Encrypt the filenames see the docs for the details. \ "standard" 3 / Very simple filename obfuscation. \ "obfuscate" filename_encryption> 2 Option to either encrypt directory names or leave them intact. Choose a number from below, or type in your own value 1 / Encrypt directory names. \ "true" 2 / Don't encrypt directory names, leave them intact. \ "false" filename_encryption> 1 Password or pass phrase for encryption. y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Password or pass phrase for salt. Optional but recommended. Should be different to the previous password. y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> g Password strength in bits. 64 is just about memorable 128 is secure 1024 is the maximum Bits> 128 Your password is: JAsJvRcgR-_veXNfy_sGmQ Use this password? 
y) Yes n) No y/n> y Remote config -------------------- [secret] remote = remote:path filename_encryption = standard password = *** ENCRYPTED *** password2 = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` **Important** The crypt password stored in `rclone.conf` is lightly obscured. That only protects it from cursory inspection. It is not secure unless encryption of `rclone.conf` is specified. A long passphrase is recommended, or `rclone config` can generate a random one. The obscured password is created using AES-CTR with a static key. The salt is stored verbatim at the beginning of the obscured password. This static key is shared between all versions of rclone. If you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible, but the obscured version will be different due to the different salt. Rclone does not encrypt * file length - this can be calculated within 16 bytes * modification time - used for syncing ## Specifying the remote ## In normal use, ensure the remote has a `:` in. If specified without, rclone uses a local directory of that name. For example if a remote `/path/to/secret/files` is specified, rclone encrypts content to that directory. If a remote `name` is specified, rclone targets a directory `name` in the current directory. If remote `remote:path/to/dir` is specified, rclone stores encrypted files in `path/to/dir` on the remote. With file name encryption, files saved to `secret:subdir/subfile` are stored in the unencrypted path `path/to/dir` but the `subdir/subpath` element is encrypted. ## Example ## Create the following file structure using "standard" file name encryption. ``` plaintext/ ├── file0.txt ├── file1.txt └── subdir ├── file2.txt ├── file3.txt └── subsubdir └── file4.txt ``` Copy these to the remote, and list them ``` $ rclone -q copy plaintext secret: $ rclone -q ls secret: 7 file1.txt 6 file0.txt 8 subdir/file2.txt 10 subdir/subsubdir/file4.txt 9 subdir/file3.txt ``` The crypt remote looks like ``` $ rclone -q ls remote:path 55 hagjclgavj2mbiqm6u6cnjjqcg 54 v05749mltvv1tf4onltun46gls 57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo 58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc 56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps ``` The directory structure is preserved ``` $ rclone -q ls secret:subdir 8 file2.txt 9 file3.txt 10 subsubdir/file4.txt ``` Without file name encryption `.bin` extensions are added to underlying names. This prevents the cloud provider attempting to interpret file content. ``` $ rclone -q ls remote:path 54 file0.txt.bin 57 subdir/file3.txt.bin 56 subdir/file2.txt.bin 58 subdir/subsubdir/file4.txt.bin 55 file1.txt.bin ``` ### File name encryption modes ### Off * doesn't hide file names or directory structure * allows for longer file names (~246 characters) * can use sub paths and copy single files Standard * file names encrypted * file names can't be as long (~143 characters) * can use sub paths and copy single files * directory structure visible * identical files names will have identical uploaded names * can use shortcuts to shorten the directory recursion Obfuscation This is a simple "rotate" of the filename, with each file having a rot distance based on the filename. Rclone stores the distance at the beginning of the filename. A file called "hello" may become "53.jgnnq". 
Obfuscation is not a strong encryption of filenames, but it hinders automated scanning tools from picking up on filename patterns. It is an intermediate between "off" and "standard" which allows for longer path segment names.

There is a possibility with some unicode based filenames that the obfuscation is weak and may map lower case characters to upper case equivalents.

Obfuscation cannot be relied upon for strong protection.

* file names very lightly obfuscated
* file names can be longer than standard encryption
* can use sub paths and copy single files
* directory structure visible
* identical files names will have identical uploaded names

Cloud storage systems have limits on file name length and total path length which rclone is more likely to breach using "Standard" file name encryption. Where file names are less than 156 characters in length issues should not be encountered, irrespective of cloud storage provider.

An alternative, future rclone file name encryption mode may tolerate backend provider path length limits.

### Directory name encryption ###

Crypt offers the option of encrypting dir names or leaving them intact. There are two options:

True

Encrypts the whole file path including directory names

Example: `1/12/123.txt` is encrypted to `p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0`

False

Only encrypts file names, skips directory names

Example: `1/12/123.txt` is encrypted to `1/12/qgm4avr35m5loi1th53ato71v0`

### Modified time and hashes ###

Crypt stores modification times using the underlying remote so support depends on that.

Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.

Use the `rclone cryptcheck` command to check the integrity of a crypted remote instead of `rclone check` which can't check the checksums properly.

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/crypt/crypt.go then run make backenddocs" >}}

### Standard Options

Here are the standard options specific to crypt (Encrypt/Decrypt a remote).

#### --crypt-remote

Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).

- Config: remote
- Env Var: RCLONE_CRYPT_REMOTE
- Type: string
- Default: ""

#### --crypt-filename-encryption

How to encrypt the filenames.

- Config: filename_encryption
- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
- Type: string
- Default: "standard"
- Examples:
    - "standard"
        - Encrypt the filenames see the docs for the details.
    - "obfuscate"
        - Very simple filename obfuscation.
    - "off"
        - Don't encrypt the file names. Adds a ".bin" extension only.

#### --crypt-directory-name-encryption

Option to either encrypt directory names or leave them intact.

NB If filename_encryption is "off" then this option will do nothing.

- Config: directory_name_encryption
- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
- Type: bool
- Default: true
- Examples:
    - "true"
        - Encrypt directory names.
    - "false"
        - Don't encrypt directory names, leave them intact.

#### --crypt-password

Password or pass phrase for encryption.

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

- Config: password
- Env Var: RCLONE_CRYPT_PASSWORD
- Type: string
- Default: ""

#### --crypt-password2

Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/). - Config: password2 - Env Var: RCLONE_CRYPT_PASSWORD2 - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to crypt (Encrypt/Decrypt a remote). #### --crypt-server-side-across-configs Allow server side operations (eg copy) to work across different crypt configs. Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it. This can be used, for example, to change file name encryption type without re-uploading all the data. Just make two crypt backends pointing to two different directories with the single changed parameter and use rclone move to move the files between the crypt remotes. - Config: server_side_across_configs - Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS - Type: bool - Default: false #### --crypt-show-mapping For all files listed show how the names encrypt. If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name. This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes. - Config: show_mapping - Env Var: RCLONE_CRYPT_SHOW_MAPPING - Type: bool - Default: false ### Backend commands Here are the commands specific to the crypt backend. Run them with rclone backend COMMAND remote: The help below will explain what arguments each command takes. See [the "rclone backend" command](/commands/rclone_backend/) for more info on how to pass options and arguments. These can be run on a running backend using the rc command [backend/command](/rc/#backend/command). #### encode Encode the given filename(s) rclone backend encode remote: [options] [+] This encodes the filenames given as arguments returning a list of strings of the encoded results. Usage Example: rclone backend encode crypt: file1 [file2...] rclone rc backend/command command=encode fs=crypt: file1 [file2...] #### decode Decode the given filename(s) rclone backend decode remote: [options] [+] This decodes the filenames given as arguments returning a list of strings of the decoded results. It will return an error if any of the inputs are invalid. Usage Example: rclone backend decode crypt: encryptedfile1 [encryptedfile2...] rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] {{< rem autogenerated options stop >}} ## Backing up a crypted remote ## If you wish to backup a crypted remote, it is recommended that you use `rclone sync` on the encrypted files, and make sure the passwords are the same in the new encrypted remote. This will have the following advantages * `rclone sync` will check the checksums while copying * you can use `rclone check` between the encrypted remotes * you don't decrypt and encrypt unnecessarily For example, let's say you have your original remote at `remote:` with the encrypted version at `eremote:` with path `remote:crypt`. You would then set up the new remote `remote2:` and then the encrypted version `eremote2:` with path `remote2:crypt` using the same passwords as `eremote:`. To sync the two remotes you would do rclone sync -i remote:crypt remote2:crypt And to check the integrity you would do rclone check remote:crypt remote2:crypt ## File formats ## ### File encryption ### Files are encrypted 1:1 source file to destination object. 
The file has a header and is divided into chunks.

#### Header ####

* 8 bytes magic string `RCLONE\x00\x00`
* 24 bytes Nonce (IV)

The initial nonce is generated from the operating system's
cryptographically strong random number generator. The nonce is
incremented for each chunk read making sure each nonce is unique for
each block written. The chance of a nonce being re-used is minuscule.
If you wrote an exabyte of data (10¹⁸ bytes) you would have a
probability of approximately 2×10⁻³² of re-using a nonce.

#### Chunk ####

Each chunk will contain 64kB of data, except for the last one which
may have less data. The data chunk is in standard NACL secretbox
format. Secretbox uses XSalsa20 and Poly1305 to encrypt and
authenticate messages.

Each chunk contains:

* 16 Bytes of Poly1305 authenticator
* 1 - 65536 bytes XSalsa20 encrypted data

64k chunk size was chosen as the best performing chunk size (the
authenticator takes too much time below this and the performance drops
off due to cache effects above this). Note that these chunks are
buffered in memory so they can't be too big.

This uses a 32 byte (256 bit) key derived from the user password.

#### Examples ####

1 byte file will encrypt to

* 32 bytes header
* 17 bytes data chunk

49 bytes total

1MB (1048576 bytes) file will encrypt to

* 32 bytes header
* 16 chunks of 65568 bytes

1049120 bytes total (a 0.05% overhead). This is the overhead for big
files.

### Name encryption ###

File names are encrypted segment by segment - the path is broken up
into `/` separated strings and these are encrypted individually.

File segments are padded using PKCS#7 to a multiple of 16 bytes
before encryption.

They are then encrypted with EME using AES with a 256 bit key. EME
(ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003
paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.

This makes for deterministic encryption which is what we want - the
same filename must encrypt to the same thing otherwise we can't find
it on the cloud storage system.

This means that

* filenames with the same name will encrypt the same
* filenames which start the same won't have a common prefix

This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of
which are derived from the user password.

After encryption they are written out using a modified version of
standard `base32` encoding as described in RFC4648. The standard
encoding is modified in two ways:

* it becomes lower case (no-one likes upper case filenames!)
* we strip the padding character `=`

`base32` is used rather than the more efficient `base64` so rclone can
be used on case insensitive remotes (eg Windows, Amazon Drive).

### Key derivation ###

Rclone uses `scrypt` with parameters `N=16384, r=8, p=1` with an
optional user supplied salt (password2) to derive the 32+32+16 = 80
bytes of key material required. If the user doesn't supply a salt
then rclone uses an internal one.

`scrypt` makes it impractical to mount a dictionary attack on rclone
encrypted data. For full protection against this you should always use
a salt.

---
title: "Documentation"
description: "Rclone Usage"
---

Configure
---------

First, you'll need to configure rclone. As the object storage systems
have quite complicated authentication these are kept in a config file.
(See the `--config` entry for how to find the config file and choose
its location.)

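You can confirm which config file rclone is reading by running the
`rclone config file` command, which prints its location:

    rclone config file
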
The easiest way to make the config is to run rclone with the config
option:

    rclone config

See the following for detailed instructions for

* [1Fichier](/fichier/)
* [Alias](/alias/)
* [Amazon Drive](/amazonclouddrive/)
* [Amazon S3](/s3/)
* [Backblaze B2](/b2/)
* [Box](/box/)
* [Cache](/cache/)
* [Chunker](/chunker/) - transparently splits large files for other remotes
* [Citrix ShareFile](/sharefile/)
* [Crypt](/crypt/) - to encrypt other remotes
* [DigitalOcean Spaces](/s3/#digitalocean-spaces)
* [Dropbox](/dropbox/)
* [FTP](/ftp/)
* [Google Cloud Storage](/googlecloudstorage/)
* [Google Drive](/drive/)
* [Google Photos](/googlephotos/)
* [HTTP](/http/)
* [Hubic](/hubic/)
* [Jottacloud / GetSky.no](/jottacloud/)
* [Koofr](/koofr/)
* [Mail.ru Cloud](/mailru/)
* [Mega](/mega/)
* [Memory](/memory/)
* [Microsoft Azure Blob Storage](/azureblob/)
* [Microsoft OneDrive](/onedrive/)
* [OpenStack Swift / Rackspace Cloudfiles / Memset Memstore](/swift/)
* [OpenDrive](/opendrive/)
* [Pcloud](/pcloud/)
* [premiumize.me](/premiumizeme/)
* [put.io](/putio/)
* [QingStor](/qingstor/)
* [Seafile](/seafile/)
* [SFTP](/sftp/)
* [SugarSync](/sugarsync/)
* [Tardigrade](/tardigrade/)
* [Union](/union/)
* [WebDAV](/webdav/)
* [Yandex Disk](/yandex/)
* [The local filesystem](/local/)

Usage
-----

Rclone syncs a directory tree from one storage system to another.

Its syntax is like this

    Syntax: [options] subcommand <parameters> <parameters...>

Source and destination paths are specified by the name you gave the
storage system in the config file then the sub path, eg
"drive:myfolder" to look at "myfolder" in Google drive.

You can define as many storage paths as you like in the config file.

Please use the [`-i` / `--interactive`](#interactive) flag while
learning rclone to avoid accidental data loss.

Subcommands
-----------

rclone uses a system of subcommands. For example

    rclone ls remote:path # lists a remote
    rclone copy /local/path remote:path # copies /local/path to the remote
    rclone sync -i /local/path remote:path # syncs /local/path to the remote

The main rclone commands with most used first

* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove any empty directories under the path.
* [rclone check](/commands/rclone_check/) - Check if the files in the source and destination match.
* [rclone ls](/commands/rclone_ls/) - List all the objects in the path with size and path.
* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
* [rclone lsl](/commands/rclone_lsl/) - List all the objects in the path with size, modification time and path.
* [rclone md5sum](/commands/rclone_md5sum/) - Produce an md5sum file for all the objects in the path.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produce a sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Return the total size and number of objects in remote:path.
* [rclone version](/commands/rclone_version/) - Show the version number.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files and delete/rename them.
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone cat](/commands/rclone_cat/) - Concatenate any files and send them to stdout.
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output shell completion scripts for rclone.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file.
* [rclone mount](/commands/rclone_mount/) - Mount the remote as a mountpoint.
* [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest.
* [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone.conf
* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Check the integrity of a crypted remote.
* [rclone about](/commands/rclone_about/) - Get quota information from the remote.

See the [commands index](/commands/) for the full list.

Copying single files
--------------------

rclone normally syncs or copies directories. However, if the source
remote points to a file, rclone will just copy that file. The
destination remote must point to a directory - rclone will give the
error `Failed to create file system for "remote:file": is a file not a
directory` if it isn't.

For example, suppose you have a remote with a file in called
`test.jpg`, then you could copy just that file like this

    rclone copy remote:test.jpg /tmp/download

The file `test.jpg` will be placed inside `/tmp/download`.

This is equivalent to specifying

    rclone copy --files-from /tmp/files remote: /tmp/download

Where `/tmp/files` contains the single line

    test.jpg

It is recommended to use `copy` when copying individual files, not
`sync`. They have pretty much the same effect but `copy` will use a
lot less memory.

Syntax of remote paths
----------------------

The syntax of the paths passed to the rclone command are as follows.

### /path/to/dir

This refers to the local file system.

On Windows only `\` may be used instead of `/` in local paths
**only**, non local paths must use `/`.

These paths needn't start with a leading `/` - if they don't then they
will be relative to the current directory.

### remote:path/to/dir

This refers to a directory `path/to/dir` on `remote:` as defined in
the config file (configured with `rclone config`).

### remote:/path/to/dir

On most backends this refers to the same directory as
`remote:path/to/dir` and that format should be preferred. On a very
small number of remotes (FTP, SFTP, Dropbox for business) this will
refer to a different directory. On these, paths without a leading `/`
will refer to your "home" directory and paths with a leading `/` will
refer to the root.

### :backend:path/to/dir

This is an advanced form for creating remotes on the fly. `backend`
should be the name or prefix of a backend (the `type` in the config
file) and all the configuration for the backend should be provided on
the command line (or in environment variables).

Here are some examples:

    rclone lsd --http-url https://pub.rclone.org :http:

To list all the directories in the root of `https://pub.rclone.org/`.

    rclone lsf --http-url https://example.com :http:path/to/dir

To list files and directories in `https://example.com/path/to/dir/`

    rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir

To copy files and directories in `https://example.com/path/to/dir`
to `/tmp/dir`.

    rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir

To copy files and directories from `example.com` in the relative
directory `path/to/dir` to `/tmp/dir` using sftp.

### Valid remote names

- Remote names may only contain 0-9, A-Z, a-z, _, - and space.
- Remote names may not start with -.

Quoting and the shell
---------------------

When you are typing commands to your computer you are using something
called the command line shell. This interprets various characters in
an OS specific way.

Here are some gotchas which may help users unfamiliar with the shell rules

### Linux / OSX ###

If your names have spaces or shell metacharacters (eg `*`, `?`, `$`,
`'`, `"` etc) then you must quote them. Use single quotes `'` by default.

    rclone copy 'Important files?' remote:backup

If you want to send a `'` you will need to use `"`, eg

    rclone copy "O'Reilly Reviews" remote:backup

The rules for quoting metacharacters are complicated and if you want
the full details you'll have to consult the manual page for your
shell.

### Windows ###

If your names have spaces in them you need to put them in `"`, eg

    rclone copy "E:\folder name\folder name\folder name" remote:backup

If you are using the root directory on its own then don't quote it
(see [#464](https://github.com/rclone/rclone/issues/464) for why), eg

    rclone copy E:\ remote:backup

Copying files or directories with `:` in the names
--------------------------------------------------

rclone uses `:` to mark a remote name. This is, however, a valid
filename component in non-Windows OSes. The remote name parser will
only search for a `:` up to the first `/` so if you need to act on a
file or directory like this then use the full path starting with a
`/`, or use `./` as a current directory prefix.

So to sync a directory called `sync:me` to a remote called `remote:` use

    rclone sync -i ./sync:me remote:path

or

    rclone sync -i /full/path/to/sync:me remote:path

Server Side Copy
----------------

Most remotes (but not all - see [the
overview](/overview/#optional-features)) support server side copy.

This means if you want to copy one folder to another then rclone won't
download all the files and re-upload them; it will instruct the server
to copy them in place.

Eg

    rclone copy s3:oldbucket s3:newbucket

Will copy the contents of `oldbucket` to `newbucket` without
downloading and re-uploading.

Remotes which don't support server side copy **will** download and
re-upload in this case.

Server side copies are used with `sync` and `copy` and will be
identified in the log when using the `-v` flag. The `move` command may
also use them if the remote doesn't support server side move directly.
This is done by issuing a server side copy then a delete which is much
quicker than a download and re-upload.

Server side copies will only be attempted if the remote names are the
same.

This can be used when scripting to make aged backups efficiently, eg

    rclone sync -i remote:current-backup remote:previous-backup
    rclone sync -i /path/to/files remote:current-backup

Options
-------

Rclone has a number of options to control its behaviour.

Options that take parameters can have the values passed in two ways,
`--option=value` or `--option value`.

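For example, both of these invocations set the `--transfers` flag to 8
(the paths here are illustrative):

    rclone copy --transfers=8 /path/to/local remote:backup
    rclone copy --transfers 8 /path/to/local remote:backup
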
However boolean (true/false) options behave slightly differently to
the other options in that `--boolean` sets the option to `true` and
the absence of the flag sets it to `false`. It is also possible to
specify `--boolean=false` or `--boolean=true`. Note that
`--boolean false` is not valid - this is parsed as `--boolean` and the
`false` is parsed as an extra command line argument for rclone.

Options which use TIME use the go time parser. A duration string is a
possibly signed sequence of decimal numbers, each with optional
fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid
time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Options which use SIZE use kByte by default. However, a suffix of `b`
for bytes, `k` for kBytes, `M` for MBytes, `G` for GBytes, `T` for
TBytes and `P` for PBytes may be used. These are the binary units, eg
1, 2\*\*10, 2\*\*20, 2\*\*30 respectively.

### --backup-dir=DIR ###

When using `sync`, `copy` or `move` any files which would have been
overwritten or deleted are moved in their original hierarchy into this
directory.

If `--suffix` is set, then the moved files will have the suffix added
to them. If there is a file with the same path (after the suffix has
been added) in DIR, then it will be overwritten.

The remote in use must support server side move or copy and you must
use the same remote as the destination of the sync. The backup
directory must not overlap the destination directory.

For example

    rclone sync -i /path/to/local remote:current --backup-dir remote:old

will sync `/path/to/local` to `remote:current`, but any files which
would have been updated or deleted will be stored in `remote:old`.

If running rclone from a script you might want to use today's date as
the directory name passed to `--backup-dir` to store the old files, or
you might want to pass `--suffix` with today's date.

See `--compare-dest` and `--copy-dest`.

### --bind string ###

Local address to bind to for outgoing connections. This can be an
IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If
the host name doesn't resolve or resolves to more than one IP address
it will give an error.

### --bwlimit=BANDWIDTH_SPEC ###

This option controls the bandwidth limit. Limits can be specified
in two ways: As a single limit, or as a timetable.

Single limits last for the duration of the session. To use a single
limit, specify the desired bandwidth in kBytes/s, or use a suffix
b|k|M|G. The default is `0` which means to not limit bandwidth.

For example, to limit bandwidth usage to 10 MBytes/s use `--bwlimit 10M`

It is also possible to specify a "timetable" of limits, which will
cause certain limits to be applied at certain times. To specify a
timetable, format your entries as `WEEKDAY-HH:MM,BANDWIDTH
WEEKDAY-HH:MM,BANDWIDTH...` where: `WEEKDAY` is an optional element.
It can be written as the whole word or using only the first 3
characters. `HH:MM` is an hour from 00:00 to 23:59.

An example of a typical timetable to avoid link saturation during
daytime working hours could be:

`--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"`

In this example, the transfer bandwidth will be set to 512kBytes/sec
at 8am every day. At noon, it will rise to 10Mbytes/s, and drop back
to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to
30MBytes/s, and at 11pm it will be completely disabled (full speed).
Anything between 11pm and 8am will remain unlimited.

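On a full command line, a sync using this timetable might look like
this (the paths are illustrative):

    rclone sync -i /path/to/files remote:backup --bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"
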
An example of timetable with `WEEKDAY` could be:

`--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"`

This means that the transfer bandwidth will be set to 512kBytes/sec on
Monday. It will rise to 10Mbytes/s before the end of Friday. At 10:00
on Saturday it will be set to 1Mbyte/s. From 20:00 on Sunday it will
be unlimited.

Timeslots without a weekday are extended to the whole week. So this
example:

`--bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"`

is equivalent to this:

`--bwlimit "Mon-00:00,512 Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"`

Bandwidth limits only apply to the data transfer. They don't apply to
the bandwidth of the directory listings etc.

Note that the units are Bytes/s, not Bits/s. Typically connections are
measured in Bits/s - to convert divide by 8. For example, let's say
you have a 10 Mbit/s connection and you wish rclone to use half of it
- 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a
`--bwlimit 0.625M` parameter for rclone.

On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled
by sending a `SIGUSR2` signal to rclone. This allows you to remove the
limitations of a long running rclone transfer and to restore it back
to the value specified with `--bwlimit` quickly when needed. Assuming
there is only one rclone instance running, you can toggle the limiter
like this:

    kill -SIGUSR2 $(pidof rclone)

If you configure rclone with a [remote control](/rc) then you can
change the bwlimit dynamically:

    rclone rc core/bwlimit rate=1M

### --bwlimit-file=BANDWIDTH_SPEC ###

This option controls the per file bandwidth limit. For the options see
the `--bwlimit` flag.

For example use this to allow no transfers to be faster than 1MByte/s

    --bwlimit-file 1M

This can be used in conjunction with `--bwlimit`.

Note that if a schedule is provided the file will use the schedule in
effect at the start of the transfer.

### --buffer-size=SIZE ###

Use this sized buffer to speed up file transfers. Each `--transfer`
will use this much memory for buffering.

When using `mount` or `cmount` each open file descriptor will use this
much memory for buffering.
See the [mount](/commands/rclone_mount/#file-buffering) documentation
for more details.

Set to `0` to disable the buffering for the minimum memory usage.

Note that the memory allocation of the buffers is influenced by the
[--use-mmap](#use-mmap) flag.

### --check-first ###

If this flag is set then in a `sync`, `copy` or `move`, rclone will do
all the checks to see whether files need to be transferred before
doing any of the transfers. Normally rclone would start running
transfers as soon as possible.

This flag can be useful on IO limited systems where transfers
interfere with checking.

Using this flag can use more memory as it effectively sets
`--max-backlog` to infinite. This means that all the info on the
objects to transfer is held in memory before the transfers start.

### --checkers=N ###

The number of checkers to run in parallel. Checkers do the equality
checking of files during a sync. For some storage systems (eg S3,
Swift, Dropbox) this can take a significant amount of time so they are
run in parallel.

The default is to run 8 checkers in parallel.

### -c, --checksum ###

Normally rclone will look at modification time and size of files to
see if they are equal. If you set this flag then rclone will check
the file hash and size to determine if files are equal.

This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size. This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the [overview section](/overview/). Eg `rclone --checksum sync s3:/bucket swift:/bucket` would run much quicker than without the `--checksum` flag. When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally. ### --compare-dest=DIR ### When using `sync`, `copy` or `move` DIR is checked in addition to the destination for files. If a file identical to the source is found that file is NOT copied from source. This is useful to copy just files that have changed since the last backup. You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory. See `--copy-dest` and `--backup-dir`. ### --config=CONFIG_FILE ### Specify the location of the rclone config file. Normally the config file is in your home directory as a file called `.config/rclone/rclone.conf` (or `.rclone.conf` if created with an older version). If `$XDG_CONFIG_HOME` is set it will be at `$XDG_CONFIG_HOME/rclone/rclone.conf`. If there is a file `rclone.conf` in the same directory as the rclone executable it will be preferred. This file must be created manually for Rclone to use it, it will never be created automatically. If you run `rclone config file` you will see where the default location is for you. Use this flag to override the config location, eg `rclone --config=".myconfig" .config`. ### --contimeout=TIME ### Set the connection timeout. This should be in go time format which looks like `5s` for 5 seconds, `10m` for 10 minutes, or `3h30m`. The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is `1m` by default. ### --copy-dest=DIR ### When using `sync`, `copy` or `move` DIR is checked in addition to the destination for files. If a file identical to the source is found that file is server side copied from DIR to the destination. This is useful for incremental backup. The remote in use must support server side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory. See `--compare-dest` and `--backup-dir`. ### --dedupe-mode MODE ### Mode to run dedupe command in. One of `interactive`, `skip`, `first`, `newest`, `oldest`, `rename`. The default is `interactive`. See the dedupe command for more information as to what these options mean. ### --disable FEATURE,FEATURE,... ### This disables a comma separated list of optional features. For example to disable server side move and server side copy use: --disable move,copy The features can be put in any case. To see a list of which features can be disabled use: --disable help See the overview [features](/overview/#features) and [optional features](/overview/#optional-features) to get an idea of which feature does what. This flag can be useful for debugging and in exceptional circumstances (eg Google Drive limiting the total volume of Server Side Copies to 100GB/day). ### -n, --dry-run ### Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the `sync` command which deletes files in the destination. 
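For example, to see what a sync would change without actually changing
anything (the paths are illustrative):

    rclone sync --dry-run /path/to/local remote:current
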
### --expect-continue-timeout=TIME ### This specifies the amount of time to wait for a server's first response headers after fully writing the request headers if the request has an "Expect: 100-continue" header. Not all backends support using this. Zero means no timeout and causes the body to be sent immediately, without waiting for the server to approve. This time does not include the time to send the request header. The default is `1s`. Set to `0` to disable. ### --error-on-no-transfer ### By default, rclone will exit with return code 0 if there were no errors. This option allows rclone to return exit code 9 if no files were transferred between the source and destination. This allows using rclone in scripts, and triggering follow-on actions if data was copied, or skipping if not. NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check and adjust your scripts accordingly! ### --header ### Add an HTTP header for all transactions. The flag can be repeated to add multiple headers. If you want to add headers only for uploads use `--header-upload` and if you want to add headers only for downloads use `--header-download`. This flag is supported for all HTTP based backends even those not supported by `--header-upload` and `--header-download` so may be used as a workaround for those with care. ``` rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes" ``` ### --header-download ### Add an HTTP header for all download transactions. The flag can be repeated to add multiple headers. ``` rclone sync -i s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar" ``` See the GitHub issue [here](https://github.com/rclone/rclone/issues/59) for currently supported backends. ### --header-upload ### Add an HTTP header for all upload transactions. The flag can be repeated to add multiple headers. ``` rclone sync -i ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar" ``` See the GitHub issue [here](https://github.com/rclone/rclone/issues/59) for currently supported backends. ### --ignore-case-sync ### Using this option will cause rclone to ignore the case of the files when synchronizing so files will not be copied/synced when the existing filenames are the same, even if the casing is different. ### --ignore-checksum ### Normally rclone will check that the checksums of transferred files match, and give an error "corrupted on transfer" if they don't. You can use this option to skip that check. You should only use it if you have had the "corrupted on transfer" error message and you are sure you might want to transfer potentially corrupted data. ### --ignore-existing ### Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files. While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted. ### --ignore-size ### Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If `--checksum` is set then it only checks the checksum. It will also cause rclone to skip verifying the sizes are the same after transfer. 
This can be useful for transferring files to and from OneDrive which
occasionally misreports the size of image files (see
[#399](https://github.com/rclone/rclone/issues/399) for more info).

### -I, --ignore-times ###

Using this option will cause rclone to unconditionally upload all
files regardless of the state of files on the destination.

Normally rclone would skip any files that have the same
modification time and are the same size (or have the same checksum if
using `--checksum`).

### --immutable ###

Treat source and destination files as immutable and disallow
modification.

With this option set, files will be created and deleted as requested,
but existing files will never be updated. If an existing file does
not match between the source and destination, rclone will give the
error `Source and destination exist but do not match: immutable file
modified`.

Note that only commands which transfer files (e.g. `sync`, `copy`,
`move`) are affected by this behavior, and only modification is
disallowed. Files may still be deleted explicitly (e.g. `delete`,
`purge`) or implicitly (e.g. `sync`, `move`). Use `copy --immutable`
if it is desired to avoid deletion as well as modification.

This can be useful as an additional layer of protection for immutable
or append-only data sets (notably backup archives), where modification
implies corruption and should not be propagated.

### -i / --interactive {#interactive}

This flag can be used to tell rclone that you wish a manual
confirmation before destructive operations.

It is **recommended** that you use this flag while learning rclone
especially with `rclone sync`.

For example

```
$ rclone delete -i /tmp/dir
rclone: delete "important-file.txt"?
y) Yes, this is OK (default)
n) No, skip this
s) Skip all delete operations with no more questions
!) Do all delete operations with no more questions
q) Exit rclone now.
y/n/s/!/q> n
```

The options mean

- `y`: **Yes**, this operation should go ahead. You can also press
  Return for this to happen. You'll be asked every time unless you
  choose `s` or `!`.
- `n`: **No**, do not do this operation. You'll be asked every time
  unless you choose `s` or `!`.
- `s`: **Skip** all the following operations of this type with no more
  questions. This takes effect until rclone exits. If there are any
  different kind of operations you'll be prompted for them.
- `!`: **Do all** the following operations with no more questions.
  Useful if you've decided that you don't mind rclone doing that kind
  of operation. This takes effect until rclone exits. If there are
  any different kind of operations you'll be prompted for them.
- `q`: **Quit** rclone now, just in case!

### --leave-root ###

During rmdirs it will not remove the root directory, even if it's
empty.

### --log-file=FILE ###

Log all of rclone's output to FILE. This is not active by default.
This can be useful for tracking down problems with syncs in
combination with the `-v` flag. See the [Logging section](#logging)
for more info.

If FILE exists then rclone will append to it.

Note that if you are using the `logrotate` program to manage rclone's
logs, then you should use the `copytruncate` option as rclone doesn't
have a signal to rotate logs.

### --log-format LIST ###

Comma separated list of log format options. `date`, `time`,
`microseconds`, `longfile`, `shortfile`, `UTC`. The default is
"`date`,`time`".

### --log-level LEVEL ###

This sets the log level for rclone. The default log level is
`NOTICE`.

`DEBUG` is equivalent to `-vv`.
It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing. `INFO` is equivalent to `-v`. It outputs information about each transfer and prints stats once a minute by default. `NOTICE` is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events. `ERROR` is equivalent to `-q`. It only outputs error messages. ### --use-json-log ### This switches the log format to JSON for rclone. The fields of json log are level, msg, source, time. ### --low-level-retries NUMBER ### This controls the number of low level retries rclone does. A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the `-v` flag. This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the `--retries` flag) quicker. Disable low level retries with `--low-level-retries 1`. ### --max-backlog=N ### This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred. This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use in the order of N kB of memory when the backlog is in use. Setting this large allows rclone to calculate how many files are pending more accurately, give a more accurate estimated finish time and make `--order-by` work more accurately. Setting this small will make rclone more synchronous to the listings of the remote which may be desirable. Setting this to a negative number will make the backlog as large as possible. ### --max-delete=N ### This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress. ### --max-depth=N ### This modifies the recursion depth for all the commands except purge. So if you do `rclone --max-depth 1 ls remote:path` you will see only the files in the top level directory. Using `--max-depth 2` means you will see all the files in first two directory levels and so on. For historical reasons the `lsd` command defaults to using a `--max-depth` of 1 - you can override this with the command line flag. You can use this command to disable recursion (with `--max-depth 1`). Note that if you use this with `sync` and `--delete-excluded` the files not recursed through are considered excluded and will be deleted on the destination. Test first with `--dry-run` if you are not sure what will happen. ### --max-duration=TIME ### Rclone will stop scheduling new transfers when it has run for the duration specified. Defaults to off. When the limit is reached any existing transfers will complete. Rclone won't exit with an error if the transfer limit is reached. ### --max-transfer=SIZE ### Rclone will stop transferring when it has reached the size specified. Defaults to off. When the limit is reached all transfers will stop immediately. Rclone will exit with exit code 8 if the transfer limit is reached. ### --cutoff-mode=hard|soft|cautious ### This modifies the behavior of `--max-transfer` Defaults to `--cutoff-mode=hard`. Specifying `--cutoff-mode=hard` will stop transferring immediately when Rclone reaches the limit. Specifying `--cutoff-mode=soft` will stop starting new transfers when Rclone reaches the limit. 
Specifying `--cutoff-mode=cautious` will try to prevent Rclone
from reaching the limit.

### --modify-window=TIME ###

When checking whether a file has been modified, this is the maximum
allowed time difference that a file can have and still be considered
equivalent.

The default is `1ns` unless this is overridden by a remote. For
example OS X only stores modification times to the nearest second so
if you are reading and writing to an OS X filing system this will be
`1s` by default.

This command line flag allows you to override that computed default.

### --multi-thread-cutoff=SIZE ###

When downloading files to the local backend above this size, rclone
will use multiple threads to download the file (default 250M).

Rclone preallocates the file (using `fallocate(FALLOC_FL_KEEP_SIZE)`
on unix or `NTSetInformationFile` on Windows both of which take no
time) then each thread writes directly into the file at the correct
place. This means that rclone won't create fragmented or sparse files
and there won't be any assembly time at the end of the transfer.

The number of threads used to download is controlled by
`--multi-thread-streams`.

Use `-vv` if you wish to see info about the threads.

This will work with the `sync`/`copy`/`move` commands and friends
`copyto`/`moveto`. Multi thread downloads will be used with `rclone
mount` and `rclone serve` if `--vfs-cache-mode` is set to `writes` or
above.

**NB** that this **only** works for a local destination but will work
with any source.

**NB** that multi thread copies are disabled for local to local copies
as they are faster without unless `--multi-thread-streams` is set
explicitly.

**NB** on Windows using multi-thread downloads will cause the
resulting files to be
[sparse](https://en.wikipedia.org/wiki/Sparse_file). Use
`--local-no-sparse` to disable sparse files (which may cause long
delays at the start of downloads) or disable multi-thread downloads
with `--multi-thread-streams 0`.

### --multi-thread-streams=N ###

When using multi thread downloads (see above `--multi-thread-cutoff`)
this sets the maximum number of streams to use. Set to `0` to disable
multi thread downloads (Default 4).

Exactly how many streams rclone uses for the download depends on the
size of the file. To calculate the number of download streams Rclone
divides the size of the file by the `--multi-thread-cutoff` and rounds
up, up to the maximum set with `--multi-thread-streams`.

So if `--multi-thread-cutoff 250MB` and `--multi-thread-streams 4` are
in effect (the defaults):

- 0MB..250MB files will be downloaded with 1 stream
- 250MB..500MB files will be downloaded with 2 streams
- 500MB..750MB files will be downloaded with 3 streams
- 750MB+ files will be downloaded with 4 streams

### --no-check-dest ###

The `--no-check-dest` flag can be used with `move` or `copy` and it
causes rclone not to check the destination at all when copying files.

This means that:

- the destination is not listed minimising the API calls
- files are always transferred
- this can cause duplicates on remotes which allow it (eg Google Drive)
- `--retries 1` is recommended otherwise you'll transfer everything again on a retry

This flag is useful to minimise the transactions if you know that none
of the files are on the destination.

This is a specialized flag which should be ignored by most users!

### --no-gzip-encoding ###

Don't set `Accept-Encoding: gzip`. This means that rclone won't ask
the server for compressed files automatically.
Useful if you've set the server to return files with
`Content-Encoding: gzip` but you uploaded compressed files.

There is no need to set this in normal operation, and doing so will
decrease the network transfer efficiency of rclone.

### --no-traverse ###

The `--no-traverse` flag controls whether the destination file system
is traversed when using the `copy` or `move` commands.
`--no-traverse` is not compatible with `sync` and will be ignored if
you supply it with `sync`.

If you are only copying a small number of files (or are filtering most
of the files) and/or have a large number of files on the destination
then `--no-traverse` will stop rclone listing the destination and save
time.

However, if you are copying a large number of files, especially if you
are doing a copy where lots of the files under consideration haven't
changed and won't need copying then you shouldn't use `--no-traverse`.

See [rclone copy](/commands/rclone_copy/) for an example of how to use
it.

### --no-unicode-normalization ###

Don't normalize unicode characters in filenames during the sync
routine.

Sometimes, an operating system will store filenames containing unicode
parts in their decomposed form (particularly macOS). Some cloud
storage systems will then recompose the unicode, resulting in
duplicate files if the data is ever copied back to a local filesystem.

Using this flag will disable that functionality, treating each unicode
character as unique. For example, by default é and é will be
normalized into the same character. With `--no-unicode-normalization`
they will be treated as unique characters.

### --no-update-modtime ###

When using this flag, rclone won't update modification times of remote
files if they are incorrect as it would normally.

This can be used if the remote is being synced with another tool also
(eg the Google Drive client).

### --order-by string ###

The `--order-by` flag controls the order in which files in the backlog
are processed in `rclone sync`, `rclone copy` and `rclone move`.

The order by string is constructed like this. The first part
describes what aspect is being measured:

- `size` - order by the size of the files
- `name` - order by the full path of the files
- `modtime` - order by the modification date of the files

This can have a modifier appended with a comma:

- `ascending` or `asc` - order so that the smallest (or oldest) is processed first
- `descending` or `desc` - order so that the largest (or newest) is processed first
- `mixed` - order so that the smallest is processed first for some threads and the largest for others

If the modifier is `mixed` then it can have an optional percentage
(which defaults to `50`), eg `size,mixed,25` which means that 25% of
the threads should be taking the smallest items and 75% the largest.
The threads which take the smallest first will always take the
smallest first and likewise the largest first threads.

The `mixed` mode can be useful to minimise the transfer time when you
are transferring a mixture of large and small files - the large files
are guaranteed upload threads and bandwidth and the small files will
be processed continuously.

If no modifier is supplied then the order is `ascending`.

For example

- `--order-by size,desc` - send the largest files first
- `--order-by modtime,ascending` - send the oldest files first
- `--order-by name` - send the files alphabetically by path

If the `--order-by` flag is not supplied or it is supplied with an
empty string then the default ordering will be used which is as
scanned.
With `--checkers 1` this is mostly alphabetical, however with the
default `--checkers 8` it is somewhat random.

#### Limitations

The `--order-by` flag does not do a separate pass over the data. This
means that it may transfer some files out of the order specified if

- there are no files in the backlog or the source has not been fully scanned yet
- there are more than [--max-backlog](#max-backlog-n) files in the backlog

Rclone will do its best to transfer the best file it has so in
practice this should not cause a problem. Think of `--order-by` as
being more of a best efforts flag rather than a perfect ordering.

### --password-command SpaceSepList ###

This flag supplies a program which should supply the config password
when run. This is an alternative to rclone prompting for the password
or setting the `RCLONE_CONFIG_PASS` variable.

The argument to this should be a command with a space separated list
of arguments. If one of the arguments has a space in it then enclose
it in `"`, if you want a literal `"` in an argument then enclose the
argument in `"` and double the `"`. See [CSV
encoding](https://godoc.org/encoding/csv) for more info.

Eg

    --password-command echo hello
    --password-command echo "hello with space"
    --password-command echo "hello with ""quotes"" and space"

See the [Configuration Encryption](#configuration-encryption) for more
info.

See a [Windows PowerShell example on the
Wiki](https://github.com/rclone/rclone/wiki/Windows-Powershell-use-rclone-password-command-for-Config-file-password).

### -P, --progress ###

This flag makes rclone update the stats in a static block in the
terminal providing a realtime overview of the transfer.

Any log messages will scroll above the static block. Log messages
will push the static block down to the bottom of the terminal where it
will stay.

Normally this is updated every 500ms but this period can be overridden
with the `--stats` flag.

This can be used with the `--stats-one-line` flag for a simpler
display.

Note: On Windows until [this
bug](https://github.com/Azure/go-ansiterm/issues/26) is fixed all
non-ASCII characters will be replaced with `.` when `--progress` is in
use.

### -q, --quiet ###

This flag will limit rclone's output to error messages only.

### --refresh-times ###

The `--refresh-times` flag can be used to update modification times of
existing files when they are out of sync on backends which don't
support hashes.

This is useful if you uploaded files with the incorrect timestamps and
you now wish to correct them.

This flag is **only** useful for destinations which don't support
hashes (eg `crypt`).

This can be used with any of the sync commands `sync`, `copy` or
`move`.

To use this flag you will need to be doing a modification time sync
(so not using `--size-only` or `--checksum`). The flag will have no
effect when using `--size-only` or `--checksum`.

If this flag is used when rclone comes to upload a file it will check
to see if there is an existing file on the destination. If this file
matches the source with size (and checksum if available) but has a
differing timestamp then instead of re-uploading it, rclone will
update the timestamp on the destination file. If the checksum does not
match rclone will upload the new file. If the checksum is absent (eg
on a `crypt` backend) then rclone will update the timestamp.

Note that some remotes can't set the modification time without
re-uploading the file so this flag is less useful on them.

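For example, a run to repair out-of-sync timestamps on a crypt
destination might look like this (`cryptremote:` is an illustrative
remote name):

    rclone copy --refresh-times /path/to/local cryptremote:backup
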
Normally if you are doing a modification time sync rclone will update
modification times without `--refresh-times` provided that the remote
supports checksums **and** the checksums match on the file. However if
the checksums are absent then rclone will upload the file rather than
setting the timestamp as this is the safe behaviour.

### --retries int ###

Retry the entire sync if it fails this many times (default 3).

Some remotes can be unreliable and a few retries help pick up the
files which didn't get transferred because of errors.

Disable retries with `--retries 1`.

### --retries-sleep=TIME ###

This sets the interval between each retry specified by `--retries`.

The default is `0`. Use `0` to disable.

### --size-only ###

Normally rclone will look at modification time and size of files to
see if they are equal. If you set this flag then rclone will check
only the size.

This can be useful when transferring files from Dropbox which have
been modified by the desktop sync client which doesn't set checksums
or modification times in the same way as rclone.

### --stats=TIME ###

Commands which transfer data (`sync`, `copy`, `copyto`, `move`,
`moveto`) will print data transfer stats at regular intervals to show
their progress.

This sets the interval.

The default is `1m`. Use `0` to disable.

If you set the stats interval then all commands can show stats. This
can be useful when running other commands, `check` or `mount` for
example.

Stats are logged at `INFO` level by default which means they won't
show at default log level `NOTICE`. Use `--stats-log-level NOTICE` or
`-v` to make them show. See the [Logging section](#logging) for more
info on log levels.

Note that on macOS you can send a SIGINFO (which is normally ctrl-T in
the terminal) to make the stats print immediately.

### --stats-file-name-length integer ###

By default, the `--stats` output will truncate file names and paths
longer than 40 characters. This is equivalent to providing
`--stats-file-name-length 40`. Use `--stats-file-name-length 0` to
disable any truncation of file names printed by stats.

### --stats-log-level string ###

Log level to show `--stats` output at. This can be `DEBUG`, `INFO`,
`NOTICE`, or `ERROR`. The default is `INFO`. This means at the
default level of logging which is `NOTICE` the stats won't show - if
you want them to then use `--stats-log-level NOTICE`. See the
[Logging section](#logging) for more info on log levels.

### --stats-one-line ###

When this is specified, rclone condenses the stats into a single line
showing the most important stats only.

### --stats-one-line-date ###

When this is specified, rclone enables the single-line stats and
prepends the display with a date string. The default is
`2006/01/02 15:04:05 - `

### --stats-one-line-date-format ###

When this is specified, rclone enables the single-line stats and
prepends the display with a user-supplied date string. The date string
MUST be enclosed in quotes. Follow [golang
specs](https://golang.org/pkg/time/#Time.Format) for date formatting
syntax.

### --stats-unit=bits|bytes ###

By default, data transfer rates will be printed in bytes/second.

This option allows the data rate to be printed in bits/second.

Data transfer volume will still be reported in bytes.

The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals
1,048,576 bits/s and not 1,000,000 bits/s.

The default is `bytes`.

### --suffix=SUFFIX ###

When using `sync`, `copy` or `move` any files which would have been
overwritten or deleted will have the suffix added to them.

If there is a file with the same path (after the suffix has been
added), then it will be overwritten.

The remote in use must support server side move or copy and you must
use the same remote as the destination of the sync.

This is for use with files to add the suffix in the current directory
or with `--backup-dir`. See `--backup-dir` for more info.

For example

    rclone copy -i /path/to/local/file remote:current --suffix .bak

will copy `/path/to/local` to `remote:current`, but any files which
would have been updated or deleted will have `.bak` added.

If using `rclone sync` with `--suffix` and without `--backup-dir` then
it is recommended to put a filter rule in excluding the suffix
otherwise the `sync` will delete the backup files.

    rclone sync -i /path/to/local/file remote:current --suffix .bak --exclude "*.bak"

### --suffix-keep-extension ###

When using `--suffix`, setting this causes rclone to put the SUFFIX
before the extension of the files that it backs up rather than after.

So let's say we had `--suffix -2019-01-01`, without the flag
`file.txt` would be backed up to `file.txt-2019-01-01` and with the
flag it would be backed up to `file-2019-01-01.txt`. This can be
helpful to make sure the suffixed files can still be opened.

### --syslog ###

On capable OSes (not Windows or Plan9) send all log output to syslog.

This can be useful for running rclone in a script or `rclone mount`.

### --syslog-facility string ###

If using `--syslog` this sets the syslog facility (eg `KERN`, `USER`).
See `man syslog` for a list of possible facilities. The default
facility is `DAEMON`.

### --tpslimit float ###

Limit HTTP transactions per second to this. Default is 0 which is used
to mean unlimited transactions per second.

For example to limit rclone to 10 HTTP transactions per second use
`--tpslimit 10`, or to 1 transaction every 2 seconds use
`--tpslimit 0.5`.

Use this when the number of transactions per second from rclone is
causing a problem with the cloud storage provider (eg getting you
banned or rate limited).

This can be very useful for `rclone mount` to control the behaviour of
applications using it.

See also `--tpslimit-burst`.

### --tpslimit-burst int ###

Max burst of transactions for `--tpslimit` (default `1`).

Normally `--tpslimit` will do exactly the number of transactions per
second specified. However if you supply `--tpslimit-burst` then
rclone can save up some transactions from when it was idle giving a
burst of up to the parameter supplied.

For example if you provide `--tpslimit-burst 10` then if rclone has
been idle for more than 10*`--tpslimit` then it can do 10 transactions
very quickly before they are limited again.

This may be used to increase performance of `--tpslimit` without
changing the long term average number of transactions per second.

### --track-renames ###

By default, rclone doesn't keep track of renamed files, so if you
rename a file locally then sync it to a remote, rclone will delete the
old file on the remote and upload a new copy.

If you use this flag, and the remote supports server side copy or
server side move, and the source and destination have a compatible
hash, then this will track renames during `sync` operations and
perform renaming server-side.

Files will be matched by size and hash - if both match then a rename
will be considered.

If the destination does not support server-side copy or move, rclone
will fall back to the default behaviour and log an error level message
to the console.

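For example, a sync that takes advantage of rename tracking might be
run as (the paths are illustrative):

    rclone sync -i --track-renames /path/to/local remote:backup
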
Encrypted destinations are not currently supported by `--track-renames` if `--track-renames-strategy` includes `hash`. Note that `--track-renames` is incompatible with `--no-traverse` and that it uses extra memory to keep track of all the rename candidates. Note also that `--track-renames` is incompatible with `--delete-before` and will select `--delete-after` instead of `--delete-during`. ### --track-renames-strategy (hash,modtime,leaf,size) ### This option changes the matching criteria for `--track-renames`. The matching is controlled by a comma separated selection of these tokens: - `modtime` - the modification time of the file - not supported on all backends - `hash` - the hash of the file contents - not supported on all backends - `leaf` - the name of the file not including its directory name - `size` - the size of the file (this is always enabled) So using `--track-renames-strategy modtime,leaf` would match files based on modification time, the leaf of the file name and the size only. Using `--track-renames-strategy modtime` or `leaf` can enable `--track-renames` support for encrypted destinations. If nothing is specified, the default option is matching by `hash`es. Note that the `hash` strategy is not supported with encrypted destinations. ### --delete-(before,during,after) ### This option allows you to specify when files on your destination are deleted when you sync folders. Specifying the value `--delete-before` will delete all files present on the destination, but not on the source *before* starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies. Specifying `--delete-during` will delete files while checking and uploading files. This is the fastest option and uses the least memory. Specifying `--delete-after` (the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message `not deleting files as there were IO errors`. ### --fast-list ### When doing anything which involves a directory listing (eg `sync`, `copy`, `ls` - in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory. However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic). If you use the `--fast-list` flag then rclone will use this method for listing directories. This will have the following consequences for the listing: * It **will** use fewer transactions (important if you pay for them) * It **will** use more memory. Rclone has to load the whole listing into memory. * It *may* be faster because it uses fewer transactions * It *may* be slower because it can't be parallelized rclone should always give identical results with and without `--fast-list`. If you pay for transactions and can fit your entire sync listing into memory then `--fast-list` is recommended. 
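For example, to copy a bucket based remote using the minimum number of
listing transactions (the bucket names are illustrative):

    rclone copy --fast-list s3:mybucket remote:backup
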
If you have a very big sync to do then don't use `--fast-list`
otherwise you will run out of memory.

If you use `--fast-list` on a remote which doesn't support it, then
rclone will just ignore it.

### --timeout=TIME ###

This sets the IO idle timeout. If a transfer has started but then
becomes idle for this long it is considered broken and disconnected.

The default is `5m`. Set to `0` to disable.

### --transfers=N ###

The number of file transfers to run in parallel. It can sometimes be
useful to set this to a smaller number if the remote is giving a lot
of timeouts or bigger if you have lots of bandwidth and a fast remote.

The default is to run 4 file transfers in parallel.

### -u, --update ###

This forces rclone to skip any files which exist on the destination
and have a modified time that is newer than the source file.

This can be useful when transferring to a remote which doesn't support
mod times directly (or when using `--use-server-modtime` to avoid
extra API calls) as it is more accurate than a `--size-only` check and
faster than using `--checksum`.

If an existing destination file has a modification time equal (within
the computed modify window precision) to the source file's, it will be
updated if the sizes are different. If `--checksum` is set then
rclone will update the destination if the checksums differ too.

If an existing destination file is older than the source file then it
will be updated if the size or checksum differs from the source file.

On remotes which don't support mod time directly (or when using
`--use-server-modtime`) the time checked will be the uploaded time.
This means that if uploading to one of these remotes, rclone will skip
any files which exist on the destination and have an uploaded time
that is newer than the modification time of the source file.

### --use-mmap ###

If this flag is set then rclone will use anonymous memory allocated by
mmap on Unix based platforms and VirtualAlloc on Windows for its
transfer buffers (size controlled by `--buffer-size`). Memory
allocated like this does not go on the Go heap and can be returned to
the OS immediately when it is finished with.

If this flag is not set then rclone will allocate and free the buffers
using the Go memory allocator which may use more memory as memory
pages are returned less aggressively to the OS.

It is possible this does not work well on all platforms so it is
disabled by default; in the future it may be enabled by default.

### --use-server-modtime ###

Some object-store backends (eg Swift, S3) do not preserve file
modification times (modtime). On these backends, rclone stores the
original modtime as additional metadata on the object. By default it
will make an API call to retrieve the metadata when the modtime is
needed by an operation.

Use this flag to disable the extra API call and rely instead on the
server's modified time. In cases such as a local to remote sync using
`--update`, knowing the local file is newer than the time it was last
uploaded to the remote is sufficient. In those cases, this flag can
speed up the process and reduce the number of API calls necessary.

Using this flag on a sync operation without also using `--update`
would cause all files modified at any time other than the last upload
time to be uploaded again, which is probably not what you want.

### -v, -vv, --verbose ###

With `-v` rclone will tell you about each file that is transferred and
a small number of significant events.

With `-vv` rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.

### -V, --version ###

Prints the version number.

SSL/TLS options
---------------

The outgoing SSL/TLS connections rclone makes can be controlled with these options. For example this can be very useful with the HTTP or WebDAV backends. Rclone HTTP servers have their own set of configuration for SSL/TLS which you can find in their documentation.

### --ca-cert string

This loads the PEM encoded certificate authority certificate and uses it to verify the certificates of the servers rclone connects to.

If you have generated certificates signed with a local CA then you will need this flag to connect to servers using those certificates.

### --client-cert string

This loads the PEM encoded client side certificate.

This is used for [mutual TLS authentication](https://en.wikipedia.org/wiki/Mutual_authentication).

The `--client-key` flag is required too when using this.

### --client-key string

This loads the PEM encoded client side private key used for mutual TLS authentication. Used in conjunction with `--client-cert`.

### --no-check-certificate=true/false ###

`--no-check-certificate` controls whether a client verifies the server's certificate chain and host name. If `--no-check-certificate` is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.

This option defaults to `false`.

**This should be used only for testing.**

Configuration Encryption
------------------------

Your configuration file contains information for logging in to your cloud services. This means that you should keep your `.rclone.conf` file in a secure location.

If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to supply the password every time you start rclone.

To add a password to your rclone configuration, execute `rclone config`.

```
>rclone config
Current remotes:

e) Edit existing remote
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/s/q>
```

Go into `s`, Set configuration password:

```
e/n/d/s/q> s
Your configuration is not encrypted.
If you add a password, you will protect your login information to cloud services.
a) Add Password
q) Quit to main menu
a/q> a
Enter NEW configuration password:
password:
Confirm NEW password:
password:
Password set
Your configuration is encrypted.
c) Change Password
u) Unencrypt configuration
q) Quit to main menu
c/u/q>
```

Your configuration is now encrypted, and every time you start rclone you will have to supply the password. See below for details. In the same menu, you can change the password or completely remove encryption from your configuration.

There is no way to recover the configuration if you lose your password.

rclone uses [nacl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.

While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, except perhaps if you use a very strong password.
If it is safe in your environment, you can set the `RCLONE_CONFIG_PASS` environment variable to contain your password, in which case it will be used for decrypting the configuration.

You can set this for a session from a script. For unix like systems save this to a file called `set-rclone-password`:

```
#!/bin/echo Source this file don't run it

read -s RCLONE_CONFIG_PASS
export RCLONE_CONFIG_PASS
```

Then source the file when you want to use it. From the shell you would do `source set-rclone-password`. It will then ask you for the password and set it in the environment variable.

An alternate means of supplying the password is to provide a script which will retrieve the password and print it on standard output. This script should have a fully specified path name and not rely on any environment variables. The script is supplied either via the `--password-command="..."` command line argument or via the `RCLONE_PASSWORD_COMMAND` environment variable.

One useful example of this is using the `passwordstore` application to retrieve the password:

```
export RCLONE_PASSWORD_COMMAND="pass rclone/config"
```

If the `passwordstore` password manager holds the password for the rclone configuration, using the script method means the password is primarily protected by the `passwordstore` system, and is never embedded in the clear in scripts, nor available for examination using the standard commands available. It is quite possible with long running rclone sessions for copies of passwords to be innocently captured in log files or terminal scroll buffers, etc. Using the script method of supplying the password enhances the security of the config password considerably.

If you are running rclone inside a script, unless you are using the `--password-command` method, you might want to disable password prompts. To do that, pass the parameter `--ask-password=false` to rclone. This will make rclone fail instead of asking for a password if `RCLONE_CONFIG_PASS` doesn't contain a valid password, and `--password-command` has not been supplied.

Developer options
-----------------

These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with the remote name, eg `--drive-test-option` - see the docs for the remote in question.

### --cpuprofile=FILE ###

Write CPU profile to file. This can be analysed with `go tool pprof`.

#### --dump flag,flag,flag ####

The `--dump` flag takes a comma separated list of flags to dump info about.

Note that some headers including `Accept-Encoding` as shown may not be correct in the request and the response may not show `Content-Encoding` if the Go standard library's automatic gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.

The available flags are:

#### --dump headers ####

Dump HTTP headers with `Authorization:` lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.

Use `--dump auth` if you do want the `Authorization:` headers.

#### --dump bodies ####

Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.

Note that the bodies are buffered in memory so don't use this for enormous files.

#### --dump requests ####

Like `--dump bodies` but dumps the request bodies and the response headers. Useful for debugging download problems.

#### --dump responses ####

Like `--dump bodies` but dumps the response bodies and the request headers.
Useful for debugging upload problems.

#### --dump auth ####

Dump HTTP headers - will contain sensitive info such as `Authorization:` headers - use `--dump headers` to dump without `Authorization:` headers. Can be very verbose. Useful for debugging only.

#### --dump filters ####

Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

#### --dump goroutines ####

This dumps a list of the running go-routines at the end of the command to standard output.

#### --dump openfiles ####

This dumps a list of the open files at the end of the command. It uses the `lsof` command to do that so you'll need that installed to use it.

### --memprofile=FILE ###

Write memory profile to file. This can be analysed with `go tool pprof`.

Filtering
---------

For the filtering options

  * `--delete-excluded`
  * `--filter`
  * `--filter-from`
  * `--exclude`
  * `--exclude-from`
  * `--include`
  * `--include-from`
  * `--files-from`
  * `--files-from-raw`
  * `--min-size`
  * `--max-size`
  * `--min-age`
  * `--max-age`
  * `--dump filters`

See the [filtering section](/filtering/).

Remote control
--------------

For the remote control options and for instructions on how to remote control rclone

  * `--rc`
  * and anything starting with `--rc-`

See [the remote control section](/rc/).

Logging
-------

rclone has 4 levels of logging, `ERROR`, `NOTICE`, `INFO` and `DEBUG`.

By default, rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (eg `rclone ls`).

By default, rclone will produce `Error` and `Notice` level messages.

If you use the `-q` flag, rclone will only produce `Error` messages.

If you use the `-v` flag, rclone will produce `Error`, `Notice` and `Info` messages.

If you use the `-vv` flag, rclone will produce `Error`, `Notice`, `Info` and `Debug` messages.

You can also control the log levels with the `--log-level` flag.

If you use the `--log-file=FILE` option, rclone will redirect `Error`, `Info` and `Debug` messages along with standard error to FILE.

If you use the `--syslog` flag then rclone will log to syslog and the `--syslog-facility` flag controls which facility it uses.

Rclone prefixes all log messages with their level in capitals, eg INFO, which makes it easy to grep the log file for different kinds of information.

Exit Code
---------

If any errors occur during the command execution, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.

During the startup phase, rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.

When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with `-q`) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.
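As an example, a wrapper script can branch on the exit status (a minimal sketch - the source path and remote name are placeholders); the individual codes are listed below:

```
#!/bin/sh
# Run a sync, then react to the overall result via the exit status.
rclone sync /home/source remote:backup
status=$?
if [ $status -ne 0 ]; then
    echo "rclone sync failed with exit code $status" >&2
fi
```
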
### List of exit codes ###

  * `0` - success
  * `1` - Syntax or usage error
  * `2` - Error not otherwise categorised
  * `3` - Directory not found
  * `4` - File not found
  * `5` - Temporary error (one that more retries might fix) (Retry errors)
  * `6` - Less serious errors (like 461 errors from dropbox) (NoRetry errors)
  * `7` - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
  * `8` - Transfer exceeded - limit set by --max-transfer reached
  * `9` - Operation successful, but no files transferred

Environment Variables
---------------------

Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

### Options ###

Every option in rclone can have its default set by environment variable.

To find the name of the environment variable, first, take the long option name, strip the leading `--`, change `-` to `_`, make upper case and prepend `RCLONE_`.

For example, to always set `--stats 5s`, set the environment variable `RCLONE_STATS=5s`. If you set stats on the command line this will override the environment variable setting.

Or to always use the trash in drive `--drive-use-trash`, set `RCLONE_DRIVE_USE_TRASH=true`.

The same parser is used for the options and the environment variables so they take exactly the same form.

### Config file ###

You can set defaults for values in the config file on an individual remote basis. If you want to use this feature, you will need to discover the name of the config items that you want. The easiest way is to run through `rclone config` by hand, then look in the config file to see what the values are (the config file can be found by looking at the help for `--config` in `rclone help`).

To find the name of the environment variable that you need to set, take `RCLONE_CONFIG_` + name of remote + `_` + name of config file option and make it all uppercase.

For example, to configure an S3 remote named `mys3:` without a config file (using unix ways of setting environment variables):

```
$ export RCLONE_CONFIG_MYS3_TYPE=s3
$ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
$ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
$ rclone lsd MYS3:
          -1 2016-09-21 12:54:21        -1 my-bucket
$ rclone listremotes | grep mys3
mys3:
```

Note that if you want to create a remote using environment variables you must create the `..._TYPE` variable as above.

### Precedence

The various different methods of backend configuration are read in this order and the first one with a value is used.

- Flag values as supplied on the command line, eg `--drive-use-trash`.
- Remote specific environment vars, eg `RCLONE_CONFIG_MYREMOTE_USE_TRASH` (see above).
- Backend specific environment vars, eg `RCLONE_DRIVE_USE_TRASH`.
- Config file, eg `use_trash = false`.
- Default values, eg `true` - these can't be changed.

So if both `--drive-use-trash` is supplied on the command line and an environment variable `RCLONE_DRIVE_USE_TRASH` is set, the command line flag will take precedence.

For non-backend configuration the order is as follows:

- Flag values as supplied on the command line, eg `--stats 5s`.
- Environment vars, eg `RCLONE_STATS=5s`.
- Default values, eg `1m` - these can't be changed.

### Other environment variables ###

- `RCLONE_CONFIG_PASS` set to contain your config file password (see [Configuration Encryption](#configuration-encryption) section)
- `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` (or the lowercase versions thereof).
    - `HTTPS_PROXY` takes precedence over `HTTP_PROXY` for https requests.
    - The environment values may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
- `RCLONE_CONFIG_DIR` - rclone **sets** this variable for use in config files and sub processes to point to the directory holding the config file.
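For example, to route rclone's HTTPS traffic through a proxy for a single session (the proxy address here is a placeholder):

```
export HTTPS_PROXY=http://proxy.example.com:8080
rclone lsd remote:
```
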
rclone-1.53.3/docs/content/donate.md000066400000000000000000000043471375552240400173020ustar00rootroot00000000000000---
title: "Donations"
description: "Donations to the rclone project."
type: page
---

# {{< icon "fa fa-heart heart" >}} Donations to the rclone project

Rclone is a free open source project with thousands of contributions from volunteers all round the world and I would like to thank all of you for donating your time to the project.

However, maintaining rclone is a lot of work - easily the equivalent of a **full time job** - for me. Nothing stands still in the world of cloud storage. Rclone needs constant attention adapting to changes by cloud providers, adding new providers, adding new features, keeping the integration tests working, fixing bugs and many more things!

I love doing the work and I'd like to spend more time doing it - your support helps make that possible.

Thank you :-)

{{< nick >}}

PS I'm available for rclone and object storage related consultancy - [email me](mailto:nick@craig-wood.com) for more info.

{{< monthly_donations >}}

## Personal users

If you are a personal user and you would like to support the project with sponsorship as a way of saying thank you that would be most appreciated. {{< icon "fa fa-heart heart" >}}

## Business users

If your business distributes rclone as part of its products (which the generous MIT licence allows) or uses it internally then it would make business sense to sponsor the rclone project to ensure that the project you rely on stays healthy and well maintained.

If you run one of the cloud storage providers that rclone supports and rclone is driving revenue your way then you know it makes sense to sponsor the project. {{< icon "far fa-smile" >}}

Note that if you choose the "GitHub Sponsors" option they will provide proper tax invoices appropriate for your country.

## Monthly donations

Monthly donations help keep rclone development sustainable in the long run so this is the preferred option. A small amount every month is much better than a one off donation as it allows planning for the future.

{{< monthly_donations >}}

## One off donations

If you don't want to contribute monthly then of course we'd love a one off donation.

{{< one_off_donations >}}

If you require a receipt or wish to contribute in a different way then please [drop me an email](mailto:nick@craig-wood.com).
rclone-1.53.3/docs/content/downloads.md000066400000000000000000000126711375552240400200170ustar00rootroot00000000000000---
title: "Rclone downloads"
description: "Download rclone binaries for your OS."
type: page
---

Rclone Download {{< version >}}
=====================

| Arch-OS | Windows | macOS | Linux | .deb | .rpm | FreeBSD | NetBSD | OpenBSD | Plan9 | Solaris |
|:-------:|:-------:|:-----:|:-----:|:----:|:----:|:-------:|:------:|:-------:|:-----:|:-------:|
| Intel/AMD - 64 Bit | {{< download windows amd64 >}} | {{< download osx amd64 >}} | {{< download linux amd64 >}} | {{< download linux amd64 deb >}} | {{< download linux amd64 rpm >}} | {{< download freebsd amd64 >}} | {{< download netbsd amd64 >}} | {{< download openbsd amd64 >}} | {{< download plan9 amd64 >}} | {{< download solaris amd64 >}} |
| Intel/AMD - 32 Bit | {{< download windows 386 >}} | - | {{< download linux 386 >}} | {{< download linux 386 deb >}} | {{< download linux 386 rpm >}} | {{< download freebsd 386 >}} | {{< download netbsd 386 >}} | {{< download openbsd 386 >}} | {{< download plan9 386 >}} | - |
| ARMv6 - 32 Bit | - | - | {{< download linux arm >}} | {{< download linux arm deb >}} | {{< download linux arm rpm >}} | {{< download freebsd arm >}} | {{< download netbsd arm >}} | - | - | - |
| ARMv7 - 32 Bit | - | - | {{< download linux arm-v7 >}} | {{< download linux arm-v7 deb >}} | {{< download linux arm-v7 rpm >}} | {{< download freebsd arm-v7 >}} | {{< download netbsd arm-v7 >}} | - | - | - |
| ARM - 64 Bit | - | - | {{< download linux arm64 >}} | {{< download linux arm64 deb >}} | {{< download linux arm64 rpm >}} | - | - | - | - | - |
| MIPS - Big Endian | - | - | {{< download linux mips >}} | {{< download linux mips deb >}} | {{< download linux mips rpm >}} | - | - | - | - | - |
| MIPS - Little Endian | - | - | {{< download linux mipsle >}} | {{< download linux mipsle deb >}} | {{< download linux mipsle rpm >}} | - | - | - | - | - |

You can also find a [mirror of the downloads on GitHub](https://github.com/rclone/rclone/releases/tag/{{< version >}}).

## Script download and install ##

To install rclone on Linux/macOS/BSD systems, run:

    curl https://rclone.org/install.sh | sudo bash

For beta installation, run:

    curl https://rclone.org/install.sh | sudo bash -s beta

Note that this script checks the version of rclone installed first and won't re-download if not needed.

Beta releases
=============

[Beta releases](https://beta.rclone.org) are generated from each commit to master. Note these are named like

    {Version Tag}-beta.{Commit Number}.{Git Commit Hash}

eg

    v1.53.0-beta.4677.b657a2204

The `Version Tag` is the version that the beta release will become when it is released. You can match the `Git Commit Hash` up with the [git log](https://github.com/rclone/rclone/commits/master). The most recent release will have the largest `Version Tag` and `Commit Number` and will normally be at the end of the list.

Some beta releases may have a branch name also:

    {Version Tag}-beta.{Commit Number}.{Git Commit Hash}.{Branch Name}

eg

    v1.53.0-beta.4677.b657a2204.semver

The presence of `Branch Name` indicates that this is a feature under development which will at some point be merged into the normal betas and then into a normal release.

The beta releases haven't been through the [full integration test suite](https://pub.rclone.org/integration-tests/) like the releases. However it is useful to try the latest beta before reporting an issue.

Note that [rclone.org](https://rclone.org/) is only updated on releases - to see the documentation for the latest beta go to [tip.rclone.org](https://tip.rclone.org/).
Downloads for scripting ======================= If you would like to download the current version (maybe from a script) from a URL which doesn't change then you can use these links. | Arch-OS | Windows | macOS | Linux | .deb | .rpm | FreeBSD | NetBSD | OpenBSD | Plan9 | Solaris | |:-------:|:-------:|:-----:|:-----:|:----:|:----:|:-------:|:------:|:-------:|:-----:|:-------:| | Intel/AMD - 64 Bit | {{< cdownload windows amd64 >}} | {{< cdownload osx amd64 >}} | {{< cdownload linux amd64 >}} | {{< cdownload linux amd64 deb >}} | {{< cdownload linux amd64 rpm >}} | {{< cdownload freebsd amd64 >}} | {{< cdownload netbsd amd64 >}} | {{< cdownload openbsd amd64 >}} | {{< cdownload plan9 amd64 >}} | {{< cdownload solaris amd64 >}} | | Intel/AMD - 32 Bit | {{< cdownload windows 386 >}} | - | {{< cdownload linux 386 >}} | {{< cdownload linux 386 deb >}} | {{< cdownload linux 386 rpm >}} | {{< cdownload freebsd 386 >}} | {{< cdownload netbsd 386 >}} | {{< cdownload openbsd 386 >}} | {{< cdownload plan9 386 >}} | - | | ARMv6 - 32 Bit | - | - | {{< cdownload linux arm >}} | {{< cdownload linux arm deb >}} | {{< cdownload linux arm rpm >}} | {{< cdownload freebsd arm >}} | {{< cdownload netbsd arm >}} | - | - | - | | ARMv7 - 32 Bit | - | - | {{< cdownload linux arm-v7 >}} | {{< cdownload linux arm-v7 deb >}} | {{< cdownload linux arm-v7 rpm >}} | {{< cdownload freebsd arm-v7 >}} | {{< cdownload netbsd arm-v7 >}} | - | - | - | | ARM - 64 Bit | - | - | {{< cdownload linux arm64 >}} | {{< cdownload linux arm64 deb >}} | {{< cdownload linux arm64 rpm >}} | - | - | - | - | - | | MIPS - Big Endian | - | - | {{< cdownload linux mips >}} | {{< cdownload linux mips deb >}} | {{< cdownload linux mips rpm >}} | - | - | - | - | - | | MIPS - Little Endian | - | - | {{< cdownload linux mipsle >}} | {{< cdownload linux mipsle deb >}} | {{< cdownload linux mipsle rpm >}} | - | - | - | - | - | Older Downloads ============== Older downloads can be found [here](https://downloads.rclone.org/). rclone-1.53.3/docs/content/drive.md000066400000000000000000001250471375552240400171420ustar00rootroot00000000000000--- title: "Google drive" description: "Rclone docs for Google drive" --- {{< icon "fab fa-google" >}} Google Drive ----------------------------------------- Paths are specified as `drive:path` Drive paths may be as deep as required, eg `drive:directory/subdirectory`. The initial setup for drive involves getting a token from Google drive which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Google Drive \ "drive" [snip] Storage> drive Google Application Client Id - leave blank normally. client_id> Google Application Client Secret - leave blank normally. client_secret> Scope that rclone should use when requesting access from drive. Choose a number from below, or type in your own value 1 / Full access all files, excluding Application Data Folder. \ "drive" 2 / Read-only access to file metadata and file contents. \ "drive.readonly" / Access to files created by rclone only. 3 | These are visible in the drive website. | File authorization is revoked when the user deauthorizes the app. 
\ "drive.file" / Allows read and write access to the Application Data folder. 4 | This is not visible in the drive website. \ "drive.appfolder" / Allows read-only access to file metadata but 5 | does not allow any access to read or download file content. \ "drive.metadata.readonly" scope> 1 ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs). root_folder_id> Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. service_account_file> Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine or Y didn't work y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code Configure this as a team drive? y) Yes n) No y/n> n -------------------- [remote] client_id = client_secret = scope = drive root_folder_id = service_account_file = token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this it may require you to unblock it temporarily if you are running a host firewall, or use manual mode. You can then use it like this, List directories in top level of your drive rclone lsd remote: List all the files in your drive rclone ls remote: To copy a local directory to a drive directory called backup rclone copy /home/source remote:backup ### Scopes ### Rclone allows you to select which scope you would like for rclone to use. This changes what type of token is granted to rclone. [The scopes are defined here](https://developers.google.com/drive/v3/web/about-auth). The scope are #### drive #### This is the default scope and allows full access to all files, except for the Application Data Folder (see below). Choose this one if you aren't sure. #### drive.readonly #### This allows read only access to all files. Files may be listed and downloaded but not uploaded, renamed or deleted. #### drive.file #### With this scope rclone can read/view/modify only those files and folders it creates. So if you uploaded files to drive via the web interface (or any other means) they will not be visible to rclone. This can be useful if you are using rclone to backup data and you want to be sure confidential data on your drive is not visible to rclone. Files created with this scope are visible in the web interface. #### drive.appfolder #### This gives rclone its own private area to store files. Rclone will not be able to see any other files on your drive and you won't be able to see rclone's files from the web interface either. #### drive.metadata.readonly #### This allows read only access to file names only. It does not allow rclone to download or upload data, or rename or delete files or directories. ### Root folder ID ### You can set the `root_folder_id` for rclone. This is the directory (identified by its `Folder ID`) that rclone considers to be the root of your drive. Normally you will leave this blank and rclone will determine the correct root to use itself. 
However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go).

In order to do this you will have to find the `Folder ID` of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.

So if the folder you want rclone to use has a URL which looks like `https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh` in the browser, then you use `1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh` as the `root_folder_id` in the config.

**NB** folders under the "Computers" tab seem to be read only (drive gives a 500 error) when using rclone.

There doesn't appear to be an API to discover the folder IDs of the "Computers" tab - please contact us if you know otherwise!

Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet.

### Service Account support ###

You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the `service_account_file` prompt during `rclone config` and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set `service_account_credentials` with the actual contents of the file instead, or set the equivalent environment variable.

#### Use case - Google Apps/G-suite account and individual Drive ####

Let's say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on the Drive account of an individual who IS a member of the domain. We'll call the domain **example.com**, and the user **foo@example.com**.

There are a few steps we need to go through to accomplish this:

##### 1. Create a service account for example.com #####

- To create a service account and obtain its credentials, go to the [Google Developer Console](https://console.developers.google.com).
- You must have a project - create one if you don't.
- Then go to "IAM & admin" -> "Service Accounts".
- Use the "Create Credentials" button. Fill in "Service account name" with something that identifies your client. "Role" can be empty.
- Tick "Furnish a new private key" - select "Key type JSON".
- Tick "Enable G Suite Domain-wide Delegation". This option makes "impersonation" possible, as documented here: [Delegating domain-wide authority to the service account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority)
- These credentials are what rclone will use for authentication. If you ever need to remove access, press the "Delete service account key" button.

##### 2. Allowing API access to example.com Google Drive #####

- Go to example.com's admin console
- Go into "Security" (or use the search bar)
- Select "Show more" and then "Advanced settings"
- Select "Manage API client access" in the "Authentication" section
- In the "Client Name" field enter the service account's "Client ID" - this can be found in the Developer Console under "IAM & Admin" -> "Service Accounts", then "View Client ID" for the newly created service account. It is a ~21 character numerical string.
- In the next field, "One or More API Scopes", enter `https://www.googleapis.com/auth/drive` to grant access to Google Drive specifically.

##### 3. Configure rclone, assuming a new install #####

```
rclone config

n/s/q> n         # New
name>gdrive      # Gdrive is an example name
Storage>         # Select the number shown for Google Drive
client_id>       # Can be left blank
client_secret>   # Can be left blank
scope>           # Select your scope, 1 for example
root_folder_id>  # Can be left blank
service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
y/n>             # Auto config, y
```

##### 4. Verify that it's working #####

- `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup`
- The arguments do:
    - `-v` - verbose logging
    - `--drive-impersonate foo@example.com` - this is what does the magic, pretending to be user foo.
    - `lsf` - list files in a parsing friendly way
    - `gdrive:backup` - use the remote called gdrive, work in the folder named backup.

Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using `--drive-impersonate`, do this instead:

- in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1
- use rclone without specifying the `--drive-impersonate` option, like this: `rclone -v lsf gdrive:backup`

### Team drives ###

If you want to configure the remote to point to a Google Team Drive then answer `y` to the question `Configure this as a team drive?`.

This will fetch the list of Team Drives from google and allow you to configure which one you want to use. You can also type in a team drive ID if you prefer.

For example:

```
Configure this as a team drive?
y) Yes
n) No
y/n> y
Fetching team drive list...
Choose a number from below, or type in your own value
 1 / Rclone Test
   \ "xxxxxxxxxxxxxxxxxxxx"
 2 / Rclone Test 2
   \ "yyyyyyyyyyyyyyyyyyyy"
 3 / Rclone Test 3
   \ "zzzzzzzzzzzzzzzzzzzz"
Enter a Team Drive ID> 1
--------------------
[remote]
client_id =
client_secret =
token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
team_drive = xxxxxxxxxxxxxxxxxxxx
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](/docs/#fast-list) for more details.

It does this by combining multiple `list` calls into a single API request.

This works by combining many `'%s' in parents` filters into one expression. To list the contents of directories a, b and c, the following requests will be sent by the regular `List` function:

```
trashed=false and 'a' in parents
trashed=false and 'b' in parents
trashed=false and 'c' in parents
```

These can now be combined into a single request:

```
trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)
```

The implementation of `ListR` will put up to 50 `parents` filters into one request. It will use the `--checkers` value to specify the number of requests to run in parallel.

In tests, these batch requests were up to 20x faster than the regular method.
Running the following command against different sized folders gives:

```
rclone lsjson -vv -R --checkers=6 gdrive:folder
```

small folder (220 directories, 700 files):

- without `--fast-list`: 38s
- with `--fast-list`: 10s

large folder (10600 directories, 39000 files):

- without `--fast-list`: 22:05 min
- with `--fast-list`: 58s

### Modified time ###

Google drive stores modification times accurate to 1 ms.

#### Restricted filename characters

Only Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings.

In contrast to other backends, `/` can also be used in names and `.` or `..` are valid names.

### Revisions ###

Google drive stores revisions of files. When you upload a change to an existing file to google drive using rclone it will create a new revision of that file.

Revisions follow the standard google policy which at time of writing was

* They are deleted after 30 days or 100 revisions (whatever comes first).
* They do not count towards a user storage quota.

### Deleting files ###

By default rclone will send all files to the trash when deleting files. If deleting them permanently is required then use the `--drive-use-trash=false` flag, or set the equivalent environment variable.

### Shortcuts ###

In March 2020 Google introduced a new feature in Google Drive called [drive shortcuts](https://support.google.com/drive/answer/9700156) ([API](https://developers.google.com/drive/api/v3/shortcuts)). These will (by September 2020) [replace the ability for files or folders to be in multiple folders at once](https://cloud.google.com/blog/products/g-suite/simplifying-google-drives-folder-structure-and-sharing-models).

Shortcuts are files that link to other files on Google Drive somewhat like a symlink in unix, except they point to the underlying file data (eg the inode in unix terms) so they don't break if the source is renamed or moved about.

By default rclone treats these as follows.

For shortcuts pointing to files:

- When listing a file shortcut appears as the destination file.
- When downloading the contents of the destination file is downloaded.
- When updating a shortcut file with a non-shortcut file, the shortcut is removed then a new file is uploaded in place of the shortcut.
- When server side moving (renaming) the shortcut is renamed, not the destination file.
- When server side copying the shortcut is copied, not the contents of the shortcut.
- When deleting the shortcut is deleted not the linked file.
- When setting the modification time, the modification time of the linked file will be set.

For shortcuts pointing to folders:

- When listing the shortcut appears as a folder and that folder will contain the contents of the linked folder (including any sub folders)
- When downloading the contents of the linked folder and sub contents are downloaded
- When uploading to a shortcut folder the file will be placed in the linked folder
- When server side moving (renaming) the shortcut is renamed, not the destination folder
- When server side copying the contents of the linked folder is copied, not the shortcut.
- When deleting with `rclone rmdir` or `rclone purge` the shortcut is deleted not the linked folder.
- **NB** When deleting with `rclone remove` or `rclone mount` the contents of the linked folder will be deleted.

The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be used to create shortcuts.
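For example, to create a shortcut (a minimal sketch - the source and destination paths are placeholders):

    rclone backend shortcut drive: source/file.txt shortcuts/file-link.txt

This creates a shortcut at `shortcuts/file-link.txt` pointing at `source/file.txt`, both paths being relative to the root of `drive:`. The full set of options is described in the backend commands section below.
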
Shortcuts can be completely ignored with the `--drive-skip-shortcuts` flag or the corresponding `skip_shortcuts` configuration setting.

### Emptying trash ###

If you wish to empty your trash you can use the `rclone cleanup remote:` command which will permanently delete all your trashed files. This command does not take any path arguments.

Note that Google Drive takes some time (minutes to days) to empty the trash even though the command returns within a few seconds. No output is echoed, so there will be no confirmation even using -v or -vv.

### Quota information ###

To view your current quota you can use the `rclone about remote:` command which will display your usage limit (quota), the usage in Google Drive, the size of all files in the Trash and the space used by other Google services such as Gmail. This command does not take any path arguments.

#### Import/Export of google documents ####

Google documents can be exported from and uploaded to Google Drive.

When rclone downloads a Google doc it chooses a format to download depending upon the `--drive-export-formats` setting. By default the export formats are `docx,xlsx,pptx,svg` which are a sensible default for an editable document.

When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.

If you prefer an archive copy then you might use `--drive-export-formats pdf`, or if you prefer openoffice/libreoffice formats you might use `--drive-export-formats ods,odt,odp`.

Note that rclone adds the extension to the google doc, so if it is called `My Spreadsheet` on google docs, it will be exported as `My Spreadsheet.xlsx` or `My Spreadsheet.pdf` etc.

When importing files into Google Drive, rclone will convert all files with an extension in `--drive-import-formats` to their associated document type. rclone will not convert any files by default, since the conversion is a lossy process.

The conversion must result in a file with the same extension when the `--drive-export-formats` rules are applied to the uploaded document.

Here are some examples for allowed and prohibited conversions.

| export-formats | import-formats | Upload Ext | Document Ext | Allowed |
| -------------- | -------------- | ---------- | ------------ | ------- |
| odt | odt | odt | odt | Yes |
| odt | docx,odt | odt | odt | Yes |
|  | docx | docx | docx | Yes |
|  | odt | odt | docx | No |
| odt,docx | docx,odt | docx | odt | No |
| docx,odt | docx,odt | docx | docx | Yes |
| docx,odt | docx,odt | odt | docx | No |

This limitation can be disabled by specifying `--drive-allow-import-name-change`. When using this flag, rclone can convert multiple file types resulting in the same document type at once, eg with `--drive-import-formats docx,odt,txt`, all files having these extensions would result in a document represented as a docx file. This brings the additional risk of overwriting a document, if multiple files have the same stem. Many rclone operations will not handle this name change in any way. They assume an equal name when copying files and might copy the file again or delete them when the name changes.

Here are the possible export extensions with their corresponding mime types. Most of these can also be used for importing, but there are more that are not listed here. Some of these additional ones might only be available when the operating system provides the correct MIME type entries.
This list can be changed by Google Drive at any time and might not represent the currently available conversions.

| Extension | Mime Type | Description |
| --------- |-----------| ------------|
| csv | text/csv | Standard CSV format for Spreadsheets |
| docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document |
| epub | application/epub+zip | E-book format |
| html | text/html | An HTML Document |
| jpg | image/jpeg | A JPEG Image File |
| json | application/vnd.google-apps.script+json | JSON Text Format |
| odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
| ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| odt | application/vnd.oasis.opendocument.text | Openoffice Document |
| pdf | application/pdf | Adobe PDF Format |
| png | image/png | PNG Image Format |
| pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint |
| rtf | application/rtf | Rich Text Format |
| svg | image/svg+xml | Scalable Vector Graphics Format |
| tsv | text/tab-separated-values | Standard TSV format for spreadsheets |
| txt | text/plain | Plain Text |
| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
| zip | application/zip | A ZIP file of HTML, Images and CSS |

Google documents can also be exported as link files. These files will open a browser window for the Google Docs website of that document when opened. The link file extension has to be specified as a `--drive-export-formats` parameter. They will match all available Google Documents.

| Extension | Description | OS Support |
| --------- | ----------- | ---------- |
| desktop | freedesktop.org specified desktop entry | Linux |
| link.html | An HTML Document with a redirect | All |
| url | INI style link file | macOS, Windows |
| webloc | macOS specific XML format | macOS |

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/drive/drive.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to drive (Google Drive).

#### --drive-client-id

Google Application Client Id
Setting your own is recommended.
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
If you leave this blank, it will use an internal key which is low performance.

- Config: client_id
- Env Var: RCLONE_DRIVE_CLIENT_ID
- Type: string
- Default: ""

#### --drive-client-secret

OAuth Client Secret
Leave blank normally.

- Config: client_secret
- Env Var: RCLONE_DRIVE_CLIENT_SECRET
- Type: string
- Default: ""

#### --drive-scope

Scope that rclone should use when requesting access from drive.

- Config: scope
- Env Var: RCLONE_DRIVE_SCOPE
- Type: string
- Default: ""
- Examples:
    - "drive"
        - Full access all files, excluding Application Data Folder.
    - "drive.readonly"
        - Read-only access to file metadata and file contents.
    - "drive.file"
        - Access to files created by rclone only.
        - These are visible in the drive website.
        - File authorization is revoked when the user deauthorizes the app.
    - "drive.appfolder"
        - Allows read and write access to the Application Data folder.
        - This is not visible in the drive website.
    - "drive.metadata.readonly"
        - Allows read-only access to file metadata but
        - does not allow any access to read or download file content.
#### --drive-root-folder-id

ID of the root folder
Leave blank normally.

Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.

- Config: root_folder_id
- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
- Type: string
- Default: ""

#### --drive-service-account-file

Service Account Credentials JSON file path
Leave blank normally.
Needed only if you want to use SA instead of interactive login.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

- Config: service_account_file
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
- Type: string
- Default: ""

#### --drive-alternate-export

Deprecated: no longer needed

- Config: alternate_export
- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
- Type: bool
- Default: false

### Advanced Options

Here are the advanced options specific to drive (Google Drive).

#### --drive-token

OAuth Access Token as a JSON blob.

- Config: token
- Env Var: RCLONE_DRIVE_TOKEN
- Type: string
- Default: ""

#### --drive-auth-url

Auth server URL.
Leave blank to use the provider defaults.

- Config: auth_url
- Env Var: RCLONE_DRIVE_AUTH_URL
- Type: string
- Default: ""

#### --drive-token-url

Token server url.
Leave blank to use the provider defaults.

- Config: token_url
- Env Var: RCLONE_DRIVE_TOKEN_URL
- Type: string
- Default: ""

#### --drive-service-account-credentials

Service Account Credentials JSON blob
Leave blank normally.
Needed only if you want to use SA instead of interactive login.

- Config: service_account_credentials
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
- Type: string
- Default: ""

#### --drive-team-drive

ID of the Team Drive

- Config: team_drive
- Env Var: RCLONE_DRIVE_TEAM_DRIVE
- Type: string
- Default: ""

#### --drive-auth-owner-only

Only consider files owned by the authenticated user.

- Config: auth_owner_only
- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
- Type: bool
- Default: false

#### --drive-use-trash

Send files to the trash instead of deleting permanently.
Defaults to true, namely sending files to the trash.
Use `--drive-use-trash=false` to delete files permanently instead.

- Config: use_trash
- Env Var: RCLONE_DRIVE_USE_TRASH
- Type: bool
- Default: true

#### --drive-skip-gdocs

Skip google documents in all listings.
If given, gdocs practically become invisible to rclone.

- Config: skip_gdocs
- Env Var: RCLONE_DRIVE_SKIP_GDOCS
- Type: bool
- Default: false

#### --drive-skip-checksum-gphotos

Skip MD5 checksum on Google photos and videos only.

Use this if you get checksum errors when transferring Google photos or videos.

Setting this flag will cause Google photos and videos to return a blank MD5 checksum.

Google photos are identified by being in the "photos" space.

Corrupted checksums are caused by Google modifying the image/video but not updating the checksum.

- Config: skip_checksum_gphotos
- Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS
- Type: bool
- Default: false

#### --drive-shared-with-me

Only show files that are shared with me.

Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you).

This works both with the "list" (lsd, lsl, etc) and the "copy" commands (copy, sync, etc), and with all other commands too.

- Config: shared_with_me
- Env Var: RCLONE_DRIVE_SHARED_WITH_ME
- Type: bool
- Default: false

#### --drive-trashed-only

Only show files that are in the trash.
This will show trashed files in their original directory structure.
- Config: trashed_only
- Env Var: RCLONE_DRIVE_TRASHED_ONLY
- Type: bool
- Default: false

#### --drive-starred-only

Only show files that are starred.

- Config: starred_only
- Env Var: RCLONE_DRIVE_STARRED_ONLY
- Type: bool
- Default: false

#### --drive-formats

Deprecated: see export_formats

- Config: formats
- Env Var: RCLONE_DRIVE_FORMATS
- Type: string
- Default: ""

#### --drive-export-formats

Comma separated list of preferred formats for downloading Google docs.

- Config: export_formats
- Env Var: RCLONE_DRIVE_EXPORT_FORMATS
- Type: string
- Default: "docx,xlsx,pptx,svg"

#### --drive-import-formats

Comma separated list of preferred formats for uploading Google docs.

- Config: import_formats
- Env Var: RCLONE_DRIVE_IMPORT_FORMATS
- Type: string
- Default: ""

#### --drive-allow-import-name-change

Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.

- Config: allow_import_name_change
- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
- Type: bool
- Default: false

#### --drive-use-created-date

Use file created date instead of modified date.

Useful when downloading data and you want the creation date used in place of the last modified date.

**WARNING**: This flag may have some unexpected consequences.

When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag.

This feature was implemented to retain photos capture date as recorded by google photos. You will first need to check the "Create a Google Photos folder" option in your google drive settings. You can then copy or move the photos locally and use the date the image was taken (created) set as the modification date.

- Config: use_created_date
- Env Var: RCLONE_DRIVE_USE_CREATED_DATE
- Type: bool
- Default: false

#### --drive-use-shared-date

Use date file was shared instead of modified date.

Note that, as with "--drive-use-created-date", this flag may have unexpected consequences when uploading/downloading files.

If both this flag and "--drive-use-created-date" are set, the created date is used.

- Config: use_shared_date
- Env Var: RCLONE_DRIVE_USE_SHARED_DATE
- Type: bool
- Default: false

#### --drive-list-chunk

Size of listing chunk 100-1000. 0 to disable.

- Config: list_chunk
- Env Var: RCLONE_DRIVE_LIST_CHUNK
- Type: int
- Default: 1000

#### --drive-impersonate

Impersonate this user when using a service account.

- Config: impersonate
- Env Var: RCLONE_DRIVE_IMPERSONATE
- Type: string
- Default: ""

#### --drive-upload-cutoff

Cutoff for switching to chunked upload

- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 8M

#### --drive-chunk-size

Upload chunk size. Must be a power of 2 >= 256k.

Making this larger will improve performance, but note that each chunk is buffered in memory one per transfer.

Reducing this will reduce memory usage but decrease performance.

- Config: chunk_size
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 8M

#### --drive-acknowledge-abuse

Set to allow files which return cannotDownloadAbusiveFile to be downloaded.

If downloading a file returns the error "This file has been identified as malware or spam and cannot be downloaded" with the error code "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.
- Config: acknowledge_abuse
- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
- Type: bool
- Default: false

#### --drive-keep-revision-forever

Keep new head revision of each file forever.

- Config: keep_revision_forever
- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
- Type: bool
- Default: false

#### --drive-size-as-quota

Show sizes as storage quota usage, not actual size.

Show the size of a file as the storage quota used. This is the current version plus any older versions that have been set to keep forever.

**WARNING**: This flag may have some unexpected consequences.

It is not recommended to set this flag in your config - the recommended usage is using the flag form --drive-size-as-quota when doing rclone ls/lsl/lsf/lsjson/etc only.

If you do use this flag for syncing (not recommended) then you will need to use --ignore-size also.

- Config: size_as_quota
- Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA
- Type: bool
- Default: false

#### --drive-v2-download-min-size

If objects are greater than this, use the drive v2 API to download.

- Config: v2_download_min_size
- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
- Type: SizeSuffix
- Default: off

#### --drive-pacer-min-sleep

Minimum time to sleep between API calls.

- Config: pacer_min_sleep
- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP
- Type: Duration
- Default: 100ms

#### --drive-pacer-burst

Number of API calls to allow without sleeping.

- Config: pacer_burst
- Env Var: RCLONE_DRIVE_PACER_BURST
- Type: int
- Default: 100

#### --drive-server-side-across-configs

Allow server side operations (eg copy) to work across different drive configs.

This can be useful if you wish to do a server side copy between two different Google drives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations.

- Config: server_side_across_configs
- Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS
- Type: bool
- Default: false

#### --drive-disable-http2

Disable drive using http2

There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed.

See: https://github.com/rclone/rclone/issues/3631

- Config: disable_http2
- Env Var: RCLONE_DRIVE_DISABLE_HTTP2
- Type: bool
- Default: true

#### --drive-stop-on-upload-limit

Make upload limit errors be fatal

At the time of writing it is only possible to upload 750GB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.

Note that this detection is relying on error message strings which Google don't document so it may break in the future.

See: https://github.com/rclone/rclone/issues/3857

- Config: stop_on_upload_limit
- Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT
- Type: bool
- Default: false

#### --drive-skip-shortcuts

If set skip shortcut files

Normally rclone dereferences shortcut files making them appear as if they are the original file (see [the shortcuts section](#shortcuts)). If this flag is set then rclone will ignore shortcut files completely.

- Config: skip_shortcuts
- Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS
- Type: bool
- Default: false

#### --drive-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
- Type: MultiEncoder
- Default: InvalidUtf8

### Backend commands

Here are the commands specific to the drive backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See [the "rclone backend" command](/commands/rclone_backend/) for more info on how to pass options and arguments.

These can be run on a running backend using the rc command [backend/command](/rc/#backend/command).

#### get

Get command for fetching the drive config parameters

    rclone backend get remote: [options] [<arguments>+]

This is a get command which will be used to fetch the various drive config parameters

Usage Examples:

    rclone backend get drive: [-o service_account_file] [-o chunk_size]
    rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]

Options:

- "chunk_size": show the current upload chunk size
- "service_account_file": show the current service account file

#### set

Set command for updating the drive config parameters

    rclone backend set remote: [options] [<arguments>+]

This is a set command which will be used to update the various drive config parameters

Usage Examples:

    rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
    rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]

Options:

- "chunk_size": update the current upload chunk size
- "service_account_file": update the current service account file

#### shortcut

Create shortcuts from files or directories

    rclone backend shortcut remote: [options] [<arguments>+]

This command creates shortcuts from files or directories.

Usage:

    rclone backend shortcut drive: source_item destination_shortcut
    rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut

In the first example this creates a shortcut from the "source_item" which can be a file or a directory to the "destination_shortcut". The "source_item" and the "destination_shortcut" should be relative paths from "drive:"

In the second example this creates a shortcut from the "source_item" relative to "drive:" to the "destination_shortcut" relative to "drive2:". This may fail with a permission error if the user authenticated with "drive2:" can't read files from "drive:".

Options:

- "target": optional target remote for the shortcut destination

#### drives

List the shared drives available to this account

    rclone backend drives remote: [options] [<arguments>+]

This command lists the shared drives (teamdrives) available to this account.

Usage:

    rclone backend drives drive:

This will return a JSON list of objects like this

    [
        {
            "id": "0ABCDEF-01234567890",
            "kind": "drive#teamDrive",
            "name": "My Drive"
        },
        {
            "id": "0ABCDEFabcdefghijkl",
            "kind": "drive#teamDrive",
            "name": "Test Drive"
        }
    ]

#### untrash

Untrash files and directories

    rclone backend untrash remote: [options] [<arguments>+]

This command untrashes all the files and directories in the directory passed in recursively.

Usage:

This takes an optional directory to trash which makes this easier to use via the API.

    rclone backend untrash drive:directory
    rclone backend -i untrash drive:directory subdir

Use the -i flag to see what would be restored before restoring it.

Result:

    {
        "Untrashed": 17,
        "Errors": 0
    }

{{< rem autogenerated options stop >}}

### Limitations ###

Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only.
### Rclone appears to be re-copying files it shouldn't ### The most likely cause of this is the duplicated file issue above - run `rclone dedupe` and check your logs for duplicate object or directory messages. This can also be caused by a delay/caching on google drive's end when comparing directory listings, specifically with team drives used in combination with --fast-list. Files that were uploaded recently may not appear on the directory list sent to rclone when using --fast-list. Waiting a moderate period of time between attempts (estimated to be approximately 1 hour) and/or not using --fast-list both seem to be effective in preventing the problem. ### Making your own client_id ### When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google. It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second, so it is recommended to stay under that number; if you use more than that, rclone will be rate limited and everything will be slower. Here is how to create your own Google Drive client ID for rclone: 1. Log into the [Google API Console](https://console.developers.google.com/) with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access) 2. Select a project or create a new project. 3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API". 4. Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials" 5. If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK) then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen. (PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far). 6. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID". 7. Choose an application type of "Desktop app" if you are using a Google account or "Other" if you are using a GSuite account and click "Create". (the default name is fine) 8. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote. Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal). (Thanks to @balazer on github for these instructions.)
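If you prefer not to use the interactive `rclone config`, a non-interactive sketch using `rclone config create` (the remote name and placeholder values here are illustrative):

    rclone config create mydrive drive client_id YOUR_CLIENT_ID client_secret YOUR_CLIENT_SECRET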
Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials" 5. If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK) then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen. (PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far). 6. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID". 7. Choose an application type of "Desktop app" if you using a Google account or "Other" if you using a GSuite account and click "Create". (the default name is fine) 8. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote. Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal). (Thanks to @balazer on github for these instructions.) Sometimes, creation of an OAuth consent in Google API Console fails due to an error message “The request failed because changes to one of the field of the resource is not supported”. As a convenient workaround, the necessary Google Drive API key can be created on the [Python Quickstart](https://developers.google.com/drive/api/v3/quickstart/python) page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console. rclone-1.53.3/docs/content/dropbox.md000066400000000000000000000166141375552240400175050ustar00rootroot00000000000000--- title: "Dropbox" description: "Rclone docs for Dropbox" --- {{< icon "fab fa-dropbox" >}} Dropbox --------------------------------- Paths are specified as `remote:path` Dropbox paths may be as deep as required, eg `remote:directory/subdirectory`. The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` n) New remote d) Delete remote q) Quit config e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Dropbox \ "dropbox" [snip] Storage> dropbox Dropbox App Key - leave blank normally. app_key> Dropbox App Secret - leave blank normally. 
You can then use it like this, List directories in top level of your dropbox rclone lsd remote: List all the files in your dropbox rclone ls remote: To copy a local directory to a dropbox directory called backup rclone copy /home/source remote:backup ### Dropbox for business ### Rclone supports Dropbox for business and Team Folders. When using Dropbox for business `remote:` and `remote:path/to/file` will refer to your personal folder. If you wish to see Team Folders you must use a leading `/` in the path, so `rclone lsd remote:/` will refer to the root and show you all Team Folders and your User Folder. You can then use team folders like this `remote:/TeamFolder` and `remote:/TeamFolder/path/to/file`. A leading `/` for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so it should be avoided. ### Modified time and Hashes ### Dropbox supports modified times, but the only way to set a modification time is to re-upload the file. This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use the `--size-only` or `--checksum` flag to stop it. Dropbox supports [its own hash type](https://www.dropbox.com/developers/reference/content-hash) which is checked for all transfers. #### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |
| DEL       | 0x7F  | ␡           |
| \         | 0x5C  | ＼          |

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/dropbox/dropbox.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to dropbox (Dropbox). #### --dropbox-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_DROPBOX_CLIENT_ID - Type: string - Default: "" #### --dropbox-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_DROPBOX_CLIENT_SECRET - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to dropbox (Dropbox). #### --dropbox-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_DROPBOX_TOKEN - Type: string - Default: "" #### --dropbox-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_DROPBOX_AUTH_URL - Type: string - Default: "" #### --dropbox-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_DROPBOX_TOKEN_URL - Type: string - Default: "" #### --dropbox-chunk-size Upload chunk size. (< 150M). Any files larger than this will be uploaded in chunks of this size. Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory. - Config: chunk_size - Env Var: RCLONE_DROPBOX_CHUNK_SIZE - Type: SizeSuffix - Default: 48M
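For example, to upload large files in bigger chunks at the cost of more memory (a sketch; any value below the 150M limit above should be accepted):

    rclone copy --dropbox-chunk-size 128M /path/to/bigfiles remote:backup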
#### --dropbox-impersonate Impersonate this user when using a business account. - Config: impersonate - Env Var: RCLONE_DROPBOX_IMPERSONATE - Type: string - Default: "" #### --dropbox-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_DROPBOX_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot {{< rem autogenerated options stop >}} ### Limitations ### Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are some file names such as `thumbs.db` which Dropbox can't store. There is a full list of them in the ["Ignored Files" section of this document](https://www.dropbox.com/en/help/145). Rclone will issue an error message `File name disallowed - not uploading` if it attempts to upload one of those file names, but the sync won't fail. Some errors may occur if you try to sync copyright-protected files because Dropbox has its own [copyright detector](https://techcrunch.com/2014/03/30/how-dropbox-knows-when-youre-sharing-copyrighted-stuff-without-actually-looking-at-your-stuff/) that prevents this sort of file being downloaded. This will return the error `ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/.` If you have more than 10,000 files in a directory then `rclone purge dropbox:dir` will return the error `Failed to purge: There are too many files involved in this operation`. As a work-around do an `rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`. ### Get your own Dropbox App ID ### When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users. Here is how to create your own Dropbox App ID for rclone: 1. Log into the [Dropbox App console](https://www.dropbox.com/developers/apps/create) with your Dropbox Account (It need not be the same account as the Dropbox you want to access) 2. Choose an API => Usually this should be `Dropbox API` 3. Choose the type of access you want to use => `Full Dropbox` or `App Folder` 4. Name your App. The app name is global, so you can't use `rclone` for example 5. Click the button `Create App` 6. Fill `Redirect URIs` as `http://localhost:53682/` 7. Find the `App key` and `App secret`. Use these values in rclone config to add a new remote or edit an existing remote. rclone-1.53.3/docs/content/faq.md000066400000000000000000000201211375552240400165630ustar00rootroot00000000000000--- title: "FAQ" description: "Rclone Frequently Asked Questions" --- Frequently Asked Questions -------------------------- ### Do all cloud storage systems support all rclone commands ### Yes they do. All the rclone commands (eg `sync`, `copy` etc) will work on all the remote storage systems. ### Can I copy the config from one machine to another ### Sure! Rclone stores all of its config in a single file. If you want to find this file, run `rclone config file` which will tell you where it is. See the [remote setup docs](/remote_setup/) for more info.
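For example (a sketch - the exact path in the output will vary by system):

```
$ rclone config file
Configuration file is stored at:
/home/user/.config/rclone/rclone.conf
```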
### How do I configure rclone on a remote / headless box with no browser? ### This has now been documented in its own [remote setup page](/remote_setup/). ### Can rclone sync directly from drive to s3 ### Rclone can sync between two remote cloud storage systems just fine. Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth. The syncs would be incremental (on a file by file basis). Eg rclone sync -i drive:Folder s3:bucket ### Using rclone from multiple locations at the same time ### You can use rclone from multiple places at the same time if you choose a different subdirectory for the output, eg ``` Server A> rclone sync -i /tmp/whatever remote:ServerA Server B> rclone sync -i /tmp/whatever remote:ServerB ``` If you sync to the same directory then you should use rclone copy otherwise the two instances of rclone may delete each other's files, eg ``` Server A> rclone copy /tmp/whatever remote:Backup Server B> rclone copy /tmp/whatever remote:Backup ``` The file names you upload from Server A and Server B should be different in this case, otherwise some file systems (eg Drive) may make duplicates. ### Why doesn't rclone support partial transfers / binary diffs like rsync? ### Rclone stores each file you transfer as a native object on the remote cloud storage system. This means that you can see the files you upload as expected using alternative access methods (eg using the Google Drive web interface). There is a 1:1 mapping between files on your hard disk and objects created in the cloud storage system. Cloud storage systems (at least none I've come across yet) don't support partially uploading an object. You can't take an existing object and change some bytes in the middle of it. It would be possible to make a sync system which stored binary diffs instead of whole objects like rclone does, but that would break the 1:1 mapping of files on your hard disk to objects in the remote cloud storage system. All the cloud storage systems support partial downloads of content, so it would be possible to make partial downloads work. However to make this work efficiently this would require storing a significant amount of metadata, which breaks the desired 1:1 mapping of files to objects. ### Can rclone do bi-directional sync? ### No, not at present. rclone only does uni-directional sync from A -> B. It may do so in the future though since it has all the primitives - it just requires writing the algorithm to do it. ### Can I use rclone with an HTTP proxy? ### Yes. rclone will follow the standard environment variables for proxies, similar to cURL and other programs. In general the variables are called `http_proxy` (for services reached over `http`) and `https_proxy` (for services reached over `https`). Most public services will be using `https`, but you may wish to set both. The content of the variable is `protocol://server:port`. The protocol value is the one used to talk to the proxy server itself, and is commonly either `http` or `socks5`. Slightly annoyingly, there is no _standard_ for the name; some applications may use `http_proxy` but another may use `HTTP_PROXY`. The `Go` libraries used by `rclone` will try both variations, but you may wish to set all possibilities. So, on Linux, you may end up with code similar to export http_proxy=http://proxyserver:12345 export https_proxy=$http_proxy export HTTP_PROXY=$http_proxy export HTTPS_PROXY=$http_proxy The `NO_PROXY` variable allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance "foo.com" also matches "bar.foo.com". e.g. export no_proxy=localhost,127.0.0.0/8,my.host.name export NO_PROXY=$no_proxy Note that the ftp backend does not support `ftp_proxy` yet.
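For a SOCKS5 proxy the same variables apply (a sketch; the proxy address is illustrative and support depends on the Go proxy handling mentioned above):

```
export https_proxy=socks5://localhost:1080
rclone lsd remote:
```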
### Rclone gives x509: failed to load system roots and no roots provided error ### This means that `rclone` can't find the SSL root certificates. Likely you are running `rclone` on a NAS with a cut-down Linux OS, or possibly on Solaris. Rclone (via the Go runtime) tries to load the root certificates from these places on Linux. "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc. "/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL "/etc/ssl/ca-bundle.pem", // OpenSUSE "/etc/pki/tls/cacert.pem", // OpenELEC So doing something like this should fix the problem. It also sets the time which is important for SSL to work properly. ``` mkdir -p /etc/ssl/certs/ curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt ntpclient -s -h pool.ntp.org ``` The two environment variables `SSL_CERT_FILE` and `SSL_CERT_DIR`, mentioned in the [x509 package](https://godoc.org/crypto/x509), provide an additional way to provide the SSL root certificates. Note that you may need to add the `--insecure` option to the `curl` command line if it doesn't work without. ``` curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt ``` ### Rclone gives Failed to load config file: function not implemented error ### Likely this means that you are running rclone on a Linux kernel version not supported by the go runtime, ie earlier than version 2.6.23. See the [system requirements section in the go install docs](https://golang.org/doc/install) for full details. ### All my uploaded docx/xlsx/pptx files appear as archive/zip ### This is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix this is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats ### tcp lookup some.domain.com no such host ### This happens when rclone cannot resolve a domain. Please check that your DNS setup is generally working, e.g. ``` # both should print a long list of possible IP addresses dig www.googleapis.com # resolve using your default DNS dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server ``` If you are using `systemd-resolved` (default on Arch Linux), ensure it is at version 233 or higher. Previous releases contain a bug which causes not all domains to be resolved properly. Additionally, the Go resolver choice can be influenced with the `GODEBUG=netdns=` environment variable. This can also resolve certain issues with DNS resolution. See the [name resolution section in the go docs](https://golang.org/pkg/net/#hdr-Name_Resolution).
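For example, to force one resolver or the other (standard Go runtime behaviour rather than an rclone-specific flag):

```
GODEBUG=netdns=go rclone lsd remote:   # force the pure Go resolver
GODEBUG=netdns=cgo rclone lsd remote:  # force the system (cgo) resolver
```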
### The total size reported in the stats for a sync is wrong and keeps changing It is likely you have more than 10,000 files that need to be synced. By default rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the [--max-backlog](/docs/#max-backlog-n) flag. ### Rclone is using too much memory or appears to have a memory leak Rclone is written in Go which uses a garbage collector. The default settings for the garbage collector mean that it runs when the heap size has doubled. However it is possible to tune the garbage collector to use less memory by [setting GOGC](https://dave.cheney.net/tag/gogc) to a lower value, say `export GOGC=20`. This will make the garbage collector work harder, reducing memory size at the expense of CPU usage. The most common cause of rclone using lots of memory is a single directory with thousands or millions of files in it. Rclone has to load this entirely into memory as rclone objects. Each rclone object takes 0.5k-1k of memory. rclone-1.53.3/docs/content/fichier.md000066400000000000000000000077411375552240400174410ustar00rootroot00000000000000--- title: "1Fichier" description: "Rclone docs for 1Fichier" --- {{< icon "fa fa-archive" >}} 1Fichier ----------------------------------------- This is a backend for the [1fichier](https://1fichier.com) cloud storage service. Note that a Premium subscription is required to use the API. Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. The initial setup for 1Fichier involves getting the API key from the website which you need to do in your browser. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / 1Fichier \ "fichier" [snip] Storage> fichier ** See help for fichier backend at: https://rclone.org/fichier/ ** Your API Key, get it from https://1fichier.com/console/params.pl Enter a string value. Press Enter for the default (""). api_key> example_key Edit advanced config? (y/n) y) Yes n) No y/n> Remote config -------------------- [remote] type = fichier api_key = example_key -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` Once configured you can then use `rclone` like this, List directories in top level of your 1Fichier account rclone lsd remote: List all the files in your 1Fichier account rclone ls remote: To copy a local directory to a 1Fichier directory called backup rclone copy /home/source remote:backup ### Modified time and hashes ### 1Fichier does not support modification times. It supports the Whirlpool hash algorithm. ### Duplicated files ### 1Fichier can have two files with exactly the same name and path (unlike a normal file system). Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. #### Restricted filename characters In addition to the [default restricted characters set](/overview/#restricted-characters) the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \         | 0x5C  | ＼          |
| <         | 0x3C  | ＜          |
| >         | 0x3E  | ＞          |
| "         | 0x22  | ＂          |
| $         | 0x24  | ＄          |
| `         | 0x60  | ｀          |
| '         | 0x27  | ＇          |

File names can also not start or end with the following characters. These only get replaced if they are the first or last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings.
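Since 1Fichier supports Whirlpool hashes (see the modified time and hashes section above), you can print them with the generic `hashsum` command (a sketch; the path is illustrative):

    rclone hashsum Whirlpool remote:path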
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/fichier/fichier.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to fichier (1Fichier). #### --fichier-api-key Your API Key, get it from https://1fichier.com/console/params.pl - Config: api_key - Env Var: RCLONE_FICHIER_API_KEY - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to fichier (1Fichier). #### --fichier-shared-folder If you want to download a shared folder, add this parameter - Config: shared_folder - Env Var: RCLONE_FICHIER_SHARED_FOLDER - Type: string - Default: "" #### --fichier-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_FICHIER_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/filtering.md000066400000000000000000000402261375552240400200070ustar00rootroot00000000000000--- title: "Filtering" description: "Filtering, includes and excludes" --- # Filtering, includes and excludes # Rclone has a sophisticated set of include and exclude rules. Some of these are based on patterns and some on other things like file size. The filters are applied for the `copy`, `sync`, `move`, `ls`, `lsl`, `md5sum`, `sha1sum`, `size`, `delete` and `check` operations. Note that `purge` does not obey the filters. Each path as it passes through rclone is matched against the include and exclude rules like `--include`, `--exclude`, `--include-from`, `--exclude-from`, `--filter`, or `--filter-from`. The simplest way to try them out is using the `ls` command, or `--dry-run` together with `-v`. `--filter-from`, `--exclude-from`, `--include-from`, `--files-from`, `--files-from-raw` understand `-` as a file name to mean read from standard input. ## Patterns ## The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell. If the pattern starts with a `/` then it only matches at the top level of the directory tree, **relative to the root of the remote** (not necessarily the root of the local drive). If it doesn't start with `/` then it is matched starting at the **end of the path**, but it will only match a complete path element: file.jpg - matches "file.jpg" - matches "directory/file.jpg" - doesn't match "afile.jpg" - doesn't match "directory/afile.jpg" /file.jpg - matches "file.jpg" in the root directory of the remote - doesn't match "afile.jpg" - doesn't match "directory/file.jpg" **Important** Note that you must use `/` in patterns and not `\` even if running on Windows. A `*` matches anything but not a `/`. *.jpg - matches "file.jpg" - matches "directory/file.jpg" - doesn't match "file.jpg/something" Use `**` to match anything, including slashes (`/`). dir/** - matches "dir/file.jpg" - matches "dir/dir1/dir2/file.jpg" - doesn't match "directory/file.jpg" - doesn't match "adir/file.jpg" A `?` matches any character except a slash `/`. l?ss - matches "less" - matches "lass" - doesn't match "floss" A `[` and `]` together make a character class, such as `[a-z]` or `[aeiou]` or `[[:alpha:]]`. See the [go regexp docs](https://golang.org/pkg/regexp/syntax/) for more info on these. h[ae]llo - matches "hello" - matches "hallo" - doesn't match "hullo" A `{` and `}` define a choice between elements. 
It should contain a comma separated list of patterns, any of which might match. These patterns can contain wildcards. {one,two}_potato - matches "one_potato" - matches "two_potato" - doesn't match "three_potato" - doesn't match "_potato" Special characters can be escaped with a `\` before them. \*.jpg - matches "*.jpg" \\.jpg - matches "\.jpg" \[one\].jpg - matches "[one].jpg" Patterns are case sensitive unless the `--ignore-case` flag is used. Without `--ignore-case` (default) potato - matches "potato" - doesn't match "POTATO" With `--ignore-case` potato - matches "potato" - matches "POTATO" Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so `rclone copy "remote:dir*.jpg" /path/to/dir` won't work - what is required is `rclone --include "*.jpg" copy remote:dir /path/to/dir` ### Directories ### Rclone keeps track of directories that could match any file patterns. Eg if you add the include rule /a/*.jpg Rclone will synthesize the directory include rule /a/ If you put any rules which end in `/` then it will only match directories. Directory matches are **only** used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don't have a concept of directory. ### Differences between rsync and rclone patterns ### Rclone implements bash style `{a,b,c}` glob matching which rsync doesn't. Rclone always does a wildcard match so `\` must always escape a `\`. ## How the rules are used ## Rclone maintains a combined list of include rules and exclude rules. Each file is matched in order, starting from the top, against the rule in the list until it finds a match. The file is then included or excluded according to the rule type. If the matcher fails to find a match after testing against all the entries in the list then the path is included. For example given the following rules, `+` being include, `-` being exclude, - secret*.jpg + *.jpg + *.png + file2.avi - * This would include * `file1.jpg` * `file3.png` * `file2.avi` This would exclude * `secret17.jpg` * non `*.jpg` and `*.png` A similar process is done on directory entries before recursing into them. This only works on remotes which have a concept of directory (eg local, google drive, onedrive, amazon drive) and not on bucket based remotes (eg s3, swift, google compute storage, b2). ## Adding filtering rules ## Filtering rules are added with the following command line flags. ### Repeating options ### You can repeat the following options to add more than one rule of that type. * `--include` * `--include-from` * `--exclude` * `--exclude-from` * `--filter` * `--filter-from` * `--filter-from-raw` **Important** You should not use `--include*` together with `--exclude*`. It may produce different results than you expected. In that case try to use `--filter*` instead. Note that all the options of the same type are processed together in the order above, regardless of what order they were placed on the command line. So all `--include` options are processed first in the order they appeared on the command line, then all `--include-from` options etc. To mix up the order of includes and excludes, the `--filter` flag can be used. ### `--exclude` - Exclude files matching pattern ### Add a single exclude rule with `--exclude`. This flag can be repeated. See above for the order the flags are processed in.
Eg `--exclude *.bak` to exclude all bak files from the sync. ### `--exclude-from` - Read exclude patterns from file ### Add exclude rules from a file. This flag can be repeated. See above for the order the flags are processed in. Prepare a file like this `exclude-file.txt` # a sample exclude rule file *.bak file2.jpg Then use as `--exclude-from exclude-file.txt`. This will sync all files except those ending in `bak` and `file2.jpg`. This is useful if you have a lot of rules. ### `--include` - Include files matching pattern ### Add a single include rule with `--include`. This flag can be repeated. See above for the order the flags are processed in. Eg `--include *.{png,jpg}` to include all `png` and `jpg` files in the backup and no others. This adds an implicit `--exclude *` at the very end of the filter list. This means you can mix `--include` and `--include-from` with the other filters (eg `--exclude`) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use `--filter-from`. ### `--include-from` - Read include patterns from file ### Add include rules from a file. This flag can be repeated. See above for the order the flags are processed in. Prepare a file like this `include-file.txt` # a sample include rule file *.jpg *.png file2.avi Then use as `--include-from include-file.txt`. This will sync all `jpg`, `png` files and `file2.avi`. This is useful if you have a lot of rules. This adds an implicit `--exclude *` at the very end of the filter list. This means you can mix `--include` and `--include-from` with the other filters (eg `--exclude`) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use `--filter-from`. ### `--filter` - Add a file-filtering rule ### This can be used to add a single include or exclude rule. Include rules start with `+ ` and exclude rules start with `- `. A special rule called `!` can be used to clear the existing rules. This flag can be repeated. See above for the order the flags are processed in. Eg `--filter "- *.bak"` to exclude all bak files from the sync. ### `--filter-from` - Read filtering patterns from a file ### Add include/exclude rules from a file. This flag can be repeated. See above for the order the flags are processed in. Prepare a file like this `filter-file.txt` # a sample filter rule file - secret*.jpg + *.jpg + *.png + file2.avi - /dir/Trash/** + /dir/** # exclude everything else - * Then use as `--filter-from filter-file.txt`. The rules are processed in the order that they are defined. This example will include all `jpg` and `png` files, exclude any files matching `secret*.jpg` and include `file2.avi`. It will also include everything in the directory `dir` at the root of the sync, except `dir/Trash` which it will exclude. Everything else will be excluded from the sync. ### `--files-from` - Read list of source-file names ### This reads a list of file names from the file passed in and **only** these files are transferred. The **filtering rules are ignored** completely if you use this option. `--files-from` expects a list of files as its input. Leading / trailing whitespace is stripped from the input lines and lines starting with `#` and `;` are ignored. Rclone will traverse the file system if you use `--files-from`, effectively using the files in `--files-from` as a set of filters. Rclone will not error if any of the files are missing. 
If you use `--no-traverse` as well as `--files-from` then rclone will not traverse the destination file system, it will find each file individually using approximately 1 API call. This can be more efficient for small lists of files. This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line. Paths within the `--files-from` file will be interpreted as starting with the root specified in the command. Leading `/` characters are ignored. See [--files-from-raw](#files-from-raw-read-list-of-source-file-names-without-any-processing) if you need the input to be processed in a raw manner. For example, suppose you had `files-from.txt` with this content: # comment file1.jpg subdir/file2.jpg You could then use it like this: rclone copy --files-from files-from.txt /home/me/pics remote:pics This will transfer these files only (if they exist) /home/me/pics/file1.jpg → remote:pics/file1.jpg /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths: /home/user1/important /home/user1/dir/file /home/user2/stuff To copy these you'd find a common subdirectory - in this case `/home` and put the remaining files in `files-from.txt` with or without leading `/`, eg user1/important user1/dir/file user2/stuff You could then copy these to a remote like this rclone copy --files-from files-from.txt /home remote:backup The 3 files will arrive in `remote:backup` with the paths as in the `files-from.txt` like this: /home/user1/important → remote:backup/user1/important /home/user1/dir/file → remote:backup/user1/dir/file /home/user2/stuff → remote:backup/user2/stuff You could of course choose `/` as the root too in which case your `files-from.txt` might look like this. /home/user1/important /home/user1/dir/file /home/user2/stuff And you would transfer it like this rclone copy --files-from files-from.txt / remote:backup In this case there will be an extra `home` directory on the remote: /home/user1/important → remote:backup/home/user1/important /home/user1/dir/file → remote:backup/home/user1/dir/file /home/user2/stuff → remote:backup/home/user2/stuff ### `--files-from-raw` - Read list of source-file names without any processing ### This option is same as `--files-from` with the only difference being that the input is read in a raw manner. This means that lines with leading/trailing whitespace and lines starting with `;` or `#` are read without any processing. [rclone lsf](/commands/rclone_lsf/) has a compatible format that can be used to export file lists from remotes, which can then be used as an input to `--files-from-raw`. ### `--min-size` - Don't transfer any file smaller than this ### This option controls the minimum size file which will be transferred. This defaults to `kBytes` but a suffix of `k`, `M`, or `G` can be used. For example `--min-size 50k` means no files smaller than 50kByte will be transferred. ### `--max-size` - Don't transfer any file larger than this ### This option controls the maximum size file which will be transferred. This defaults to `kBytes` but a suffix of `k`, `M`, or `G` can be used. For example `--max-size 1G` means no files larger than 1GByte will be transferred. ### `--max-age` - Don't transfer any file older than this ### This option controls the maximum age of files to transfer. 
Give in seconds or with a suffix of: * `ms` - Milliseconds * `s` - Seconds * `m` - Minutes * `h` - Hours * `d` - Days * `w` - Weeks * `M` - Months * `y` - Years For example `--max-age 2d` means no files older than 2 days will be transferred. This can also be an absolute time in one of these formats - RFC3339 - eg "2006-01-02T15:04:05Z07:00" - ISO8601 Date and time, local timezone - "2006-01-02T15:04:05" - ISO8601 Date and time, local timezone - "2006-01-02 15:04:05" - ISO8601 Date - "2006-01-02" (YYYY-MM-DD) ### `--min-age` - Don't transfer any file younger than this ### This option controls the minimum age of files to transfer. Give in seconds or with a suffix (see `--max-age` for list of suffixes) For example `--min-age 2d` means no files younger than 2 days will be transferred. ### `--delete-excluded` - Delete files on dest excluded from sync ### **Important** this flag is dangerous - use with `--dry-run` and `-v` first. When doing `rclone sync` this will delete any files which are excluded from the sync on the destination. If for example you did a sync from `A` to `B` without the `--min-size 50k` flag rclone sync -i A: B: Then you repeated it like this with the `--delete-excluded` rclone --min-size 50k --delete-excluded sync A: B: This would delete all files on `B` which are less than 50 kBytes as these are now excluded from the sync. Always test first with `--dry-run` and `-v` before using this flag. ### `--dump filters` - dump the filters to the output ### This dumps the defined filters to the output as regular expressions. Useful for debugging. ### `--ignore-case` - make searches case insensitive ### Normally filter patterns are case sensitive. If this flag is supplied then filter patterns become case insensitive. Normally a `--include "file.txt"` will not match a file called `FILE.txt`. However if you use the `--ignore-case` flag then `--include "file.txt"` will match a file called `FILE.txt`. ## Quoting shell metacharacters ## The examples above may not work verbatim in your shell as they have shell metacharacters in them (eg `*`), and may require quoting. Eg linux, OSX * `--include \*.jpg` * `--include '*.jpg'` * `--include='*.jpg'` In Windows the expansion is done by the command not the shell so this should work fine * `--include *.jpg` ## Exclude directory based on a file ## It is possible to exclude a directory based on a file that is present in the directory. The filename should be specified using the `--exclude-if-present` flag. This flag has a priority over the other filtering flags. Imagine you have the following directory structure: dir1/file1 dir1/dir2/file2 dir1/dir2/dir3/file3 dir1/dir2/dir3/.ignore You can exclude `dir3` from sync by running the following command: rclone sync -i --exclude-if-present .ignore dir1 remote:backup Currently only one filename is supported, i.e. `--exclude-if-present` should not be used multiple times. rclone-1.53.3/docs/content/flags.md000077500000000000000000001576521375552240400171260ustar00rootroot00000000000000--- title: "Global Flags" description: "Rclone Global Flags" --- # Global Flags This describes the global flags available to every rclone command split into two groups, non backend and backend flags. ## Non Backend Flags These flags are available for every command. ``` --ask-password Allow prompt for password for encrypted configuration. (default true) --auto-confirm If enabled, do not request console confirmation. --backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --bwlimit-file BwTimetable Bandwidth limit per file in kBytes/s, or use suffix b|k|M|G or a full timetable. --ca-cert string CA certificate used to verify servers --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --check-first Do all the checks before starting transfers. --checkers int Number of checkers to run in parallel. (default 8) -c, --checksum Skip based on checksum (if available) & size, not mod-time & size --client-cert string Client SSL certificate (PEM) for mutual TLS auth --client-key string Client SSL private key (PEM) for mutual TLS auth --compare-dest string Include additional server-side path during comparison. --config string Config file. (default "$HOME/.config/rclone/rclone.conf") --contimeout duration Connect timeout (default 1m0s) --copy-dest string Implies --compare-dest but also copies files from path into destination. --cpuprofile string Write cpu profile to file --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer --delete-excluded Delete files on dest excluded from sync --disable string Disable a comma separated list of features. Use help to see a list. -n, --dry-run Do a trial run with no permanent changes --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP headers - may contain sensitive info --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file (use - to read from stdin) --exclude-if-present string Exclude directories if filename is present --expect-continue-timeout duration Timeout when using expect / 100-continue in HTTP (default 1s) --fast-list Use recursive list if available. Uses more memory but fewer transactions. --files-from stringArray Read list of source-file names from file (use - to read from stdin) --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file (use - to read from stdin) --header stringArray Set HTTP header for all transactions --header-download stringArray Set HTTP header for download transactions --header-upload stringArray Set HTTP header for upload transactions --ignore-case Ignore case in filters (case insensitive) --ignore-case-sync Ignore case when synchronizing --ignore-checksum Skip post copy check of checksums. --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. 
Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file (use - to read from stdin) -i, --interactive Enable interactive mode --log-file string Log everything to this file --log-format string Comma separated list of log format options (default "date,time") --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) --max-duration duration Maximum duration rclone will transfer data for. --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) --max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer. (default off) --memprofile string Write memory profile to file --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250M) --multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-check-dest Don't check the destination, copy regardless. --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-unicode-normalization Don't normalize unicode characters in filenames. --no-update-modtime Don't update destination mod-time if files identical. --order-by string Instructions on how to order the transfers, eg 'size,descending' --password-command SpaceSepList Command for supplying password for encrypted configuration. -P, --progress Show progress during transfer. -q, --quiet Print as little stuff as possible --rc Enable the remote control server. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --rc-allow-origin string Set the allowed origin for CORS. --rc-baseurl string Prefix for URLs - leave blank for root. --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --rc-client-ca string Client certificate authority to verify clients with --rc-enable-metrics Enable prometheus metrics on /metrics --rc-files string Path to local files to serve on the HTTP server. --rc-htpasswd string htpasswd file - if not provided no authentication is done --rc-job-expire-duration duration expire finished async jobs older than this value (default 1m0s) --rc-job-expire-interval duration interval to check for expired async jobs (default 10s) --rc-key string SSL PEM Private key --rc-max-header-bytes int Maximum size of request header (default 4096) --rc-no-auth Don't require auth for certain methods. --rc-pass string Password for authentication. --rc-realm string realm for authentication (default "rclone") --rc-serve Enable the serving of remote objects. 
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-template string User Specified Template. --rc-user string User name for authentication. --rc-web-fetch-url string URL to fetch the releases for webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") --rc-web-gui Launch WebGUI on localhost --rc-web-gui-force-update Force update to latest version of web gui --rc-web-gui-no-open-browser Don't open the browser automatically --rc-web-gui-update Check and update to latest version of web gui --refresh-times Refresh the modtime of remote files. --retries int Retry operations this many times if they fail (default 3) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --size-only Skip based on size only, not mod-time or checksum --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-one-line Make the stats fit on one line. --stats-one-line-date Enables --stats-one-line and add current date/time prefix. --stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix to add to changed files. --suffix-keep-extension Preserve the extension when using --suffix. --syslog Use Syslog for logging --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --timeout duration IO idle timeout (default 5m0s) --tpslimit float Limit HTTP transactions per second to this. --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --track-renames When synchronizing, track file renames and do a server side move if possible --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash") --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. --use-cookies Enable session cookiejar. --use-json-log Use json log format. --use-mmap Use mmap allocator (see docs). --use-server-modtime Use server modified time instead of object metadata --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.53.3") -v, --verbose count Print lots more stuff (repeat for more) ``` ## Backend Flags These flags are available for every command. They control the backends and may be set in the config file. ``` --acd-auth-url string Auth server URL. --acd-client-id string OAuth Client Id --acd-client-secret string OAuth Client Secret --acd-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot) --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) --acd-token string OAuth Access Token as a JSON blob. --acd-token-url string Token server url. 
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --alias-remote string Remote or path to alias. --azureblob-access-tier string Access tier of blob: hot, cool or archive. --azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator) --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) --azureblob-disable-checksum Don't store MD5 checksum with object metadata. --azureblob-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8) --azureblob-endpoint string Endpoint for the service --azureblob-key string Storage Account Key (leave blank to use SAS URL or Emulator) --azureblob-list-chunk int Size of blob list. (default 5000) --azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s) --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. --azureblob-sas-url string SAS URL for container level access only --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint) --b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4G) --b2-disable-checksum Disable checksums for large (> upload cutoff) files --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w) --b2-download-url string Custom endpoint for downloads. --b2-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --b2-endpoint string Endpoint for the service. --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-key string Application Key --b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s) --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) --b2-versions Include old versions in directory listings. --box-access-token string Box App Primary Access Token --box-auth-url string Auth server URL. --box-box-config-file string Box App config.json location --box-box-sub-type string (default "user") --box-client-id string OAuth Client Id --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point. --box-token string OAuth Access Token as a JSON blob. --box-token-url string Token server url. --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-path string Directory to cache chunk files. 
(default "$HOME/.cache/rclone/cache-backend") --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-db-purge Clear all the cached data for this remote on start. --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server --cache-plex-password string The password of the Plex user (obscured) --cache-plex-url string The URL of the Plex server --cache-plex-username string The username of the Plex user --cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-remote string Remote to cache. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-writes Cache file data on writes through the FS --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2G) --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks. --chunker-hash-type string Choose how chunker handles hash sums. All modes but "none" require metadata. (default "md5") --chunker-meta-format string Format of the metadata object or "none". By default "simplejson". (default "simplejson") --chunker-name-format string String format of chunk file names. (default "*.rclone_chunk.###") --chunker-remote string Remote to chunk/unchunk. --chunker-start-from int Minimum valid chunk number. Usually 0 or 1. (default 1) -L, --copy-links Follow symlinks and copy the pointed to item. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-password string Password or pass phrase for encryption. (obscured) --crypt-password2 string Password or pass phrase for salt. Optional but recommended. (obscured) --crypt-remote string Remote to encrypt/decrypt. --crypt-server-side-across-configs Allow server side operations (eg copy) to work across different crypt configs. --crypt-show-mapping For all files listed show how the names encrypt. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-auth-url string Auth server URL. --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. 
(default 8M) --drive-client-id string Google Application Client Id --drive-client-secret string OAuth Client Secret --drive-disable-http2 Disable drive using http2 (default true) --drive-encoding MultiEncoder This sets the encoding for the backend. (default InvalidUtf8) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-formats string Deprecated: see export_formats --drive-impersonate string Impersonate this user when using a service account. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-keep-revision-forever Keep new head revision of each file forever. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) --drive-root-folder-id string ID of the root folder --drive-scope string Scope that rclone should use when requesting access from drive. --drive-server-side-across-configs Allow server side operations (eg copy) to work across different drive configs. --drive-service-account-credentials string Service Account Credentials JSON blob --drive-service-account-file string Service Account Credentials JSON file path --drive-shared-with-me Only show files that are shared with me. --drive-size-as-quota Show sizes as storage quota usage, not actual size. --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only. --drive-skip-gdocs Skip google documents in all listings. --drive-skip-shortcuts If set skip shortcut files --drive-starred-only Only show files that are starred. --drive-stop-on-upload-limit Make upload limit errors be fatal --drive-team-drive string ID of the Team Drive --drive-token string OAuth Access Token as a JSON blob. --drive-token-url string Token server url. --drive-trashed-only Only show files that are in the trash. --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-use-created-date Use file created date instead of modified date., --drive-use-shared-date Use date file was shared instead of modified date. --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --dropbox-auth-url string Auth server URL. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --dropbox-client-id string OAuth Client Id --dropbox-client-secret string OAuth Client Secret --dropbox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot) --dropbox-impersonate string Impersonate this user when using a business account. --dropbox-token string OAuth Access Token as a JSON blob. --dropbox-token-url string Token server url. --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl --fichier-encoding MultiEncoder This sets the encoding for the backend. 
(default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot) --fichier-shared-folder string If you want to download a shared folder, add this parameter --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,RightSpace,Dot) --ftp-explicit-tls Use FTP over TLS (Explicit) --ftp-host string FTP host to connect to --ftp-no-check-certificate Do not verify the TLS certificate of the server --ftp-pass string FTP password (obscured) --ftp-port string FTP port, leave blank to use default (21) --ftp-tls Use FTPS over TLS (Implicit) --ftp-user string FTP username, leave blank for current username, $USER --gcs-anonymous Access public buckets and objects without credentials --gcs-auth-url string Auth server URL. --gcs-bucket-acl string Access Control List for new buckets. --gcs-bucket-policy-only Access checks should use bucket-level IAM policies. --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret --gcs-encoding MultiEncoder This sets the encoding for the backend. (default Slash,CrLf,InvalidUtf8,Dot) --gcs-location string Location for the newly created buckets. --gcs-object-acl string Access Control List for new objects. --gcs-project-number string Project number. --gcs-service-account-file string Service Account Credentials JSON file path --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-token string OAuth Access Token as a JSON blob. --gcs-token-url string Token server url. --gphotos-auth-url string Auth server URL. --gphotos-client-id string OAuth Client Id --gphotos-client-secret string OAuth Client Secret --gphotos-read-only Set to make the Google Photos backend read only. --gphotos-read-size Set to read the size of media items. --gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000) --gphotos-token string OAuth Access Token as a JSON blob. --gphotos-token-url string Token server url. --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests to find file sizes in dir listing --http-no-slash Set this if the site doesn't end directories with / --http-url string URL of http host to connect to --hubic-auth-url string Auth server URL. --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --hubic-client-id string OAuth Client Id --hubic-client-secret string OAuth Client Secret --hubic-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8) --hubic-no-chunk Don't chunk files during streaming upload. --hubic-token string OAuth Access Token as a JSON blob. --hubic-token-url string Token server url. --jottacloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --jottacloud-trashed-only Only show files that are in the trash. --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. 
(default 10M) --koofr-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net") --koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used. --koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured) --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true) --koofr-user string Your Koofr user name -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension --local-case-insensitive Force the filesystem to report itself as case insensitive --local-case-sensitive Force the filesystem to report itself as case sensitive. --local-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Dot) --local-no-check-updated Don't check to see if the files change during upload --local-no-set-modtime Disable setting modtime --local-no-sparse Disable sparse files for multi-thread downloads --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --local-nounc string Disable UNC (long path names) conversion on Windows --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) --mailru-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot) --mailru-pass string Password (obscured) --mailru-speedup-enable Skip full upload if there is another file with same data hash. (default true) --mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash). (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf") --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3G) --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32M) --mailru-user string User name (usually email) --mega-debug Output more debug from Mega. --mega-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot) --mega-hard-delete Delete files permanently rather than putting them into the trash. --mega-pass string Password. (obscured) --mega-user string User name -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --onedrive-auth-url string Auth server URL. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes). (default 10M) --onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) --onedrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-no-versions Remove all versions on modifying operations --onedrive-server-side-across-configs Allow server side operations (eg copy) to work across different onedrive configs. 
--onedrive-token string OAuth Access Token as a JSON blob. --onedrive-token-url string Token server url. --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10M) --opendrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --opendrive-password string Password. (obscured) --opendrive-username string Username --pcloud-auth-url string Auth server URL. --pcloud-client-id string OAuth Client Id --pcloud-client-secret string OAuth Client Secret --pcloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to. (default "api.pcloud.com") --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point. (default "d0") --pcloud-token string OAuth Access Token as a JSON blob. --pcloud-token-url string Token server url. --premiumizeme-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --qingstor-access-key-id string QingStor Access Key ID --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) --qingstor-connection-retries int Number of connection retries. (default 3) --qingstor-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8) --qingstor-endpoint string Enter an endpoint URL to connection QingStor API. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --qingstor-secret-access-key string QingStor Secret Access Key (password) --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1) --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) --qingstor-zone string Zone to connect to. --s3-access-key-id string AWS Access Key ID. --s3-acl string Canned ACL used when creating buckets and storing or copying objects. --s3-bucket-acl string Canned ACL used when creating buckets. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656G) --s3-disable-checksum Don't store MD5 checksum with object metadata --s3-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot) --s3-endpoint string Endpoint for S3 API. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery. --s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request). (default 1000) --s3-location-constraint string Location constraint - must be set to match the Region. --s3-max-upload-parts int Maximum number of parts in a multipart upload. (default 10000) --s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s) --s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. 
--s3-no-check-bucket If set don't attempt to check the bucket exists or create it --s3-profile string Profile to use in the shared credentials file --s3-provider string Choose your S3 provider. --s3-region string Region to connect to. --s3-secret-access-key string AWS Secret Access Key (password) --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-session-token string An AWS session token --s3-shared-credentials-file string Path to the shared credentials file --s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3. --s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data. --s3-sse-customer-key-md5 string If using SSE-C you must provide the secret encryption key MD5 checksum. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-storage-class string The storage class to use when storing new objects in S3. --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint. --s3-v2-auth If true use v2 authentication. --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn't exist --seafile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8) --seafile-library string Name of the library. Leave blank to access all non-encrypted libraries. --seafile-library-key string Library password (for encrypted libraries only). Leave blank if you pass it through the command line. (obscured) --seafile-pass string Password (obscured) --seafile-url string URL of seafile host to connect to --seafile-user string User name (usually email address) --sftp-ask-password Allow asking for SFTP password when needed. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --sftp-host string SSH host to connect to --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. (obscured) --sftp-key-pem string Raw PEM-encoded private key, If specified, will override key_file parameter. --sftp-key-use-agent When set forces the usage of the ssh-agent. --sftp-md5sum-command string The command used to read md5 hashes. Leave blank for autodetect. --sftp-pass string SSH password, leave blank to use ssh-agent. (obscured) --sftp-path-override string Override path used by SSH connection. --sftp-port string SSH port, leave blank to use default (22) --sftp-server-command string Specifies the path or command to run a sftp server on the remote host. --sftp-set-modtime Set the modified time on the remote if set. (default true) --sftp-sha1sum-command string The command used to read sha1 hashes. Leave blank for autodetect. --sftp-skip-links Set to skip any symlinks and any other non regular files. --sftp-subsystem string Specifies the SSH2 subsystem on the remote host. (default "sftp") --sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods. --sftp-user string SSH username, leave blank for current username, ncw --sharefile-chunk-size SizeSuffix Upload chunk size. 
Must a power of 2 >= 256k. (default 64M) --sharefile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls. --sharefile-root-folder-id string ID of the root folder --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128M) --skip-links Don't warn about skipped symlinks. --sugarsync-access-key-id string Sugarsync Access Key ID. --sugarsync-app-id string Sugarsync App ID. --sugarsync-authorization string Sugarsync authorization --sugarsync-authorization-expiry string Sugarsync authorization expiry --sugarsync-deleted-id string Sugarsync deleted folder id --sugarsync-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8,Dot) --sugarsync-hard-delete Permanently delete files if true --sugarsync-private-access-key string Sugarsync Private Access Key --sugarsync-refresh-token string Sugarsync refresh token --sugarsync-root-id string Sugarsync root id --sugarsync-user string Sugarsync user --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) --swift-auth string Authentication URL for server (OS_AUTH_URL). --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --swift-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --swift-key string API key or password (OS_PASSWORD). --swift-no-chunk Don't chunk files during streaming upload. --swift-region string Region name - optional (OS_REGION_NAME) --swift-storage-policy string The storage policy to use when creating a new container --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-user string User name to log in (OS_USERNAME). --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --tardigrade-access-grant string Access Grant. --tardigrade-api-key string API Key. --tardigrade-passphrase string Encryption Passphrase. To access existing objects enter passphrase used for uploading. --tardigrade-provider string Choose an authentication method. (default "existing") --tardigrade-satellite-address @
address:port Satellite Address. Custom satellite address should match the format nodeid@address
:. (default "us-central-1.tardigrade.io") --union-action-policy string Policy to choose upstream on ACTION category. (default "epall") --union-cache-time int Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. (default 120) --union-create-policy string Policy to choose upstream on CREATE category. (default "epmfs") --union-search-policy string Policy to choose upstream on SEARCH category. (default "ff") --union-upstreams string List of space separated upstreams. --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --webdav-bearer-token-command string Command to run to get a bearer token --webdav-pass string Password. (obscured) --webdav-url string URL of http host to connect to --webdav-user string User name --webdav-vendor string Name of the Webdav site/service/software you are using --yandex-auth-url string Auth server URL. --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret --yandex-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot) --yandex-token string OAuth Access Token as a JSON blob. --yandex-token-url string Token server url. ``` rclone-1.53.3/docs/content/ftp.md000066400000000000000000000156241375552240400166210ustar00rootroot00000000000000--- title: "FTP" description: "Rclone docs for FTP backend" --- {{< icon "fa fa-file" >}} FTP ------------------------------ FTP is the File Transfer Protocol. FTP support is provided using the [github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp) package. Paths are specified as `remote:path`. If the path does not begin with a `/` it is relative to the home directory of the user. An empty path `remote:` refers to the user's home directory. Here is an example of making an FTP configuration. First run rclone config This will guide you through an interactive setup process. An FTP remote only needs a host together with and a username and a password. With anonymous FTP server, you will need to use `anonymous` as username and your email address as the password. ``` No remotes found - make a new one n) New remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config n/r/c/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / FTP Connection \ "ftp" [snip] Storage> ftp ** See help for ftp backend at: https://rclone.org/ftp/ ** FTP host to connect to Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Connect to ftp.example.com \ "ftp.example.com" host> ftp.example.com FTP username, leave blank for current username, ncw Enter a string value. Press Enter for the default (""). user> FTP port, leave blank to use default (21) Enter a string value. Press Enter for the default (""). port> FTP password y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Use FTP over TLS (Implicit) Enter a boolean value (true or false). Press Enter for the default ("false"). tls> Use FTP over TLS (Explicit) Enter a boolean value (true or false). Press Enter for the default ("false"). 
explicit_tls> Remote config -------------------- [remote] type = ftp host = ftp.example.com pass = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` This remote is called `remote` and can now be used like this See all directories in the home directory rclone lsd remote: Make a new directory rclone mkdir remote:path/to/directory List the contents of a directory rclone ls remote:path/to/directory Sync `/home/local/directory` to the remote directory, deleting any excess files in the directory. rclone sync -i /home/local/directory remote:directory ### Modified time ### FTP does not support modified times. Any times you see on the server will be time of upload. ### Checksums ### FTP does not support any checksums. ### Usage without a config file ### An example how to use the ftp remote without a config file: rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=`rclone obscure dummy` #### Restricted filename characters In addition to the [default restricted characters set](/overview/#restricted-characters) the following characters are also replaced: File names can also not end with the following characters. These only get replaced if they are the last character in the name: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | SP | 0x20 | ␠ | Note that not all FTP servers can have all characters in file names, for example: | FTP Server| Forbidden characters | | --------- |:--------------------:| | proftpd | `*` | | pureftpd | `\ [ ]` | ### Implicit TLS ### FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the config for the remote. The default FTPS port is `990` so the port will likely have to be explicitly set in the config for the remote. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/ftp/ftp.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to ftp (FTP Connection). #### --ftp-host FTP host to connect to - Config: host - Env Var: RCLONE_FTP_HOST - Type: string - Default: "" - Examples: - "ftp.example.com" - Connect to ftp.example.com #### --ftp-user FTP username, leave blank for current username, $USER - Config: user - Env Var: RCLONE_FTP_USER - Type: string - Default: "" #### --ftp-port FTP port, leave blank to use default (21) - Config: port - Env Var: RCLONE_FTP_PORT - Type: string - Default: "" #### --ftp-pass FTP password **NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/). - Config: pass - Env Var: RCLONE_FTP_PASS - Type: string - Default: "" #### --ftp-tls Use FTPS over TLS (Implicit) When using implicit FTP over TLS the client will connect using TLS right from the start, which in turn breaks the compatibility with non-TLS-aware servers. This is usually served over port 990 rather than port 21. Cannot be used in combination with explicit FTP. - Config: tls - Env Var: RCLONE_FTP_TLS - Type: bool - Default: false #### --ftp-explicit-tls Use FTP over TLS (Explicit) When using explicit FTP over TLS the client explicitly request security from the server in order to upgrade a plain text connection to an encrypted one. Cannot be used in combination with implicit FTP. - Config: explicit_tls - Env Var: RCLONE_FTP_EXPLICIT_TLS - Type: bool - Default: false ### Advanced Options Here are the advanced options specific to ftp (FTP Connection). 
#### --ftp-concurrency Maximum number of FTP simultaneous connections, 0 for unlimited - Config: concurrency - Env Var: RCLONE_FTP_CONCURRENCY - Type: int - Default: 0 #### --ftp-no-check-certificate Do not verify the TLS certificate of the server - Config: no_check_certificate - Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE - Type: bool - Default: false #### --ftp-disable-epsv Disable using EPSV even if server advertises support - Config: disable_epsv - Env Var: RCLONE_FTP_DISABLE_EPSV - Type: bool - Default: false #### --ftp-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_FTP_ENCODING - Type: MultiEncoder - Default: Slash,Del,Ctl,RightSpace,Dot {{< rem autogenerated options stop >}} ### Limitations ### Note that FTP does have its own implementation of : `--dump headers`, `--dump bodies`, `--dump auth` for debugging which isn't the same as the HTTP based backends - it has less fine grained control. Note that `--timeout` isn't supported (but `--contimeout` is). Note that `--bind` isn't supported. FTP could support server side move but doesn't yet. Note that the ftp backend does not support the `ftp_proxy` environment variable yet. rclone-1.53.3/docs/content/googlecloudstorage.md000066400000000000000000000361761375552240400217250ustar00rootroot00000000000000--- title: "Google Cloud Storage" description: "Rclone docs for Google Cloud Storage" --- {{< icon "fab fa-google" >}} Google Cloud Storage ------------------------------------------------- Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` n) New remote d) Delete remote q) Quit config e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" [snip] Storage> google cloud storage Google Application Client Id - leave blank normally. client_id> Google Application Client Secret - leave blank normally. client_secret> Project number optional - needed only for list/create/delete buckets - see your developer console. project_number> 12345678 Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. service_account_file> Access Control List for new objects. Choose a number from below, or type in your own value 1 / Object owner gets OWNER access, and all Authenticated Users get READER access. \ "authenticatedRead" 2 / Object owner gets OWNER access, and project team owners get OWNER access. \ "bucketOwnerFullControl" 3 / Object owner gets OWNER access, and project team owners get READER access. \ "bucketOwnerRead" 4 / Object owner gets OWNER access [default if left blank]. \ "private" 5 / Object owner gets OWNER access, and project team members get access according to their roles. \ "projectPrivate" 6 / Object owner gets OWNER access, and all Users get READER access. \ "publicRead" object_acl> 4 Access Control List for new buckets. 
Choose a number from below, or type in your own value 1 / Project team owners get OWNER access, and all Authenticated Users get READER access. \ "authenticatedRead" 2 / Project team owners get OWNER access [default if left blank]. \ "private" 3 / Project team members get access according to their roles. \ "projectPrivate" 4 / Project team owners get OWNER access, and all Users get READER access. \ "publicRead" 5 / Project team owners get OWNER access, and all Users get WRITER access. \ "publicReadWrite" bucket_acl> 2 Location for the newly created buckets. Choose a number from below, or type in your own value 1 / Empty for default location (US). \ "" 2 / Multi-regional location for Asia. \ "asia" 3 / Multi-regional location for Europe. \ "eu" 4 / Multi-regional location for United States. \ "us" 5 / Taiwan. \ "asia-east1" 6 / Tokyo. \ "asia-northeast1" 7 / Singapore. \ "asia-southeast1" 8 / Sydney. \ "australia-southeast1" 9 / Belgium. \ "europe-west1" 10 / London. \ "europe-west2" 11 / Iowa. \ "us-central1" 12 / South Carolina. \ "us-east1" 13 / Northern Virginia. \ "us-east4" 14 / Oregon. \ "us-west1" location> 12 The storage class to use when storing objects in Google Cloud Storage. Choose a number from below, or type in your own value 1 / Default \ "" 2 / Multi-regional storage class \ "MULTI_REGIONAL" 3 / Regional storage class \ "REGIONAL" 4 / Nearline storage class \ "NEARLINE" 5 / Coldline storage class \ "COLDLINE" 6 / Durable reduced availability storage class \ "DURABLE_REDUCED_AVAILABILITY" storage_class> 5 Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine or Y didn't work y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] type = google cloud storage client_id = client_secret = token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null} project_number = 12345678 object_acl = private bucket_acl = private -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this it may require you to unblock it temporarily if you are running a host firewall, or use manual mode. This remote is called `remote` and can now be used like this See all the buckets in your project rclone lsd remote: Make a new bucket rclone mkdir remote:bucket List the contents of a bucket rclone ls remote:bucket Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. rclone sync -i /home/local/directory remote:bucket ### Service Account support ### You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines. 
To get credentials for Google Cloud Platform [IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts), please head to the [Service Account](https://console.cloud.google.com/permissions/serviceaccounts) section of the Google Developer Console. Service Accounts behave just like normal `User` permissions in [Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control), so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication. To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the `service_account_file` prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set `service_account_credentials` with the actual contents of the file instead, or set the equivalent environment variable. ### Anonymous Access ### For downloads of objects that permit public access you can configure rclone to use anonymous access by setting `anonymous` to `true`. With unauthorized access you can't write or create files but only read or list those buckets and objects that have public read access. ### Application Default Credentials ### If no other source of credentials is provided, rclone will fall back to [Application Default Credentials](https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials) this is useful both when you already have configured authentication for your developer account, or in production when running on a google compute host. Note that if running in docker, you may need to run additional commands on your google compute machine - [see this page](https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper). Note that in the case application default credentials are used, there is no need to explicitly configure a project number. ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](/docs/#fast-list) for more details. ### Custom upload headers ### You can set custom upload headers with the `--header-upload` flag. Google Cloud Storage supports the headers as described in the [working with metadata documentation](https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata) - Cache-Control - Content-Disposition - Content-Encoding - Content-Language - Content-Type - X-Goog-Meta- Eg `--header-upload "Content-Type text/potato"` Note that the last of these is for setting custom metadata in the form `--header-upload "x-goog-meta-key: value"` ### Modified time ### Google google cloud storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns. #### Restricted filename characters | Character | Value | Replacement | | --------- |:-----:|:-----------:| | NUL | 0x00 | ␀ | | LF | 0x0A | ␊ | | CR | 0x0D | ␍ | | / | 0x2F | / | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. 
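As a concrete sketch of the custom upload headers described above - the bucket name, header values and local path here are purely illustrative:

```
rclone copy /home/local/directory remote:bucket \
  --header-upload "Cache-Control: max-age=3600" \
  --header-upload "X-Goog-Meta-Source: rclone-example"
```

The headers apply to every object uploaded by that invocation, so if different files need different metadata they should be uploaded in separate runs.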
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlecloudstorage/googlecloudstorage.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). #### --gcs-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_GCS_CLIENT_ID - Type: string - Default: "" #### --gcs-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_GCS_CLIENT_SECRET - Type: string - Default: "" #### --gcs-project-number Project number. Optional - needed only for list/create/delete buckets - see your developer console. - Config: project_number - Env Var: RCLONE_GCS_PROJECT_NUMBER - Type: string - Default: "" #### --gcs-service-account-file Service Account Credentials JSON file path Leave blank normally. Needed only if you want use SA instead of interactive login. Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`. - Config: service_account_file - Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE - Type: string - Default: "" #### --gcs-service-account-credentials Service Account Credentials JSON blob Leave blank normally. Needed only if you want use SA instead of interactive login. - Config: service_account_credentials - Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS - Type: string - Default: "" #### --gcs-anonymous Access public buckets and objects without credentials Set to 'true' if you just want to download files and don't configure credentials. - Config: anonymous - Env Var: RCLONE_GCS_ANONYMOUS - Type: bool - Default: false #### --gcs-object-acl Access Control List for new objects. - Config: object_acl - Env Var: RCLONE_GCS_OBJECT_ACL - Type: string - Default: "" - Examples: - "authenticatedRead" - Object owner gets OWNER access, and all Authenticated Users get READER access. - "bucketOwnerFullControl" - Object owner gets OWNER access, and project team owners get OWNER access. - "bucketOwnerRead" - Object owner gets OWNER access, and project team owners get READER access. - "private" - Object owner gets OWNER access [default if left blank]. - "projectPrivate" - Object owner gets OWNER access, and project team members get access according to their roles. - "publicRead" - Object owner gets OWNER access, and all Users get READER access. #### --gcs-bucket-acl Access Control List for new buckets. - Config: bucket_acl - Env Var: RCLONE_GCS_BUCKET_ACL - Type: string - Default: "" - Examples: - "authenticatedRead" - Project team owners get OWNER access, and all Authenticated Users get READER access. - "private" - Project team owners get OWNER access [default if left blank]. - "projectPrivate" - Project team members get access according to their roles. - "publicRead" - Project team owners get OWNER access, and all Users get READER access. - "publicReadWrite" - Project team owners get OWNER access, and all Users get WRITER access. #### --gcs-bucket-policy-only Access checks should use bucket-level IAM policies. If you want to upload objects to a bucket with Bucket Policy Only set then you will need to set this. When it is set, rclone: - ignores ACLs set on buckets - ignores ACLs set on objects - creates buckets with Bucket Policy Only set Docs: https://cloud.google.com/storage/docs/bucket-policy-only - Config: bucket_policy_only - Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY - Type: bool - Default: false #### --gcs-location Location for the newly created buckets. 
- Config: location - Env Var: RCLONE_GCS_LOCATION - Type: string - Default: "" - Examples: - "" - Empty for default location (US). - "asia" - Multi-regional location for Asia. - "eu" - Multi-regional location for Europe. - "us" - Multi-regional location for United States. - "asia-east1" - Taiwan. - "asia-east2" - Hong Kong. - "asia-northeast1" - Tokyo. - "asia-south1" - Mumbai. - "asia-southeast1" - Singapore. - "australia-southeast1" - Sydney. - "europe-north1" - Finland. - "europe-west1" - Belgium. - "europe-west2" - London. - "europe-west3" - Frankfurt. - "europe-west4" - Netherlands. - "us-central1" - Iowa. - "us-east1" - South Carolina. - "us-east4" - Northern Virginia. - "us-west1" - Oregon. - "us-west2" - California. #### --gcs-storage-class The storage class to use when storing objects in Google Cloud Storage. - Config: storage_class - Env Var: RCLONE_GCS_STORAGE_CLASS - Type: string - Default: "" - Examples: - "" - Default - "MULTI_REGIONAL" - Multi-regional storage class - "REGIONAL" - Regional storage class - "NEARLINE" - Nearline storage class - "COLDLINE" - Coldline storage class - "ARCHIVE" - Archive storage class - "DURABLE_REDUCED_AVAILABILITY" - Durable reduced availability storage class ### Advanced Options Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). #### --gcs-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_GCS_TOKEN - Type: string - Default: "" #### --gcs-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_GCS_AUTH_URL - Type: string - Default: "" #### --gcs-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_GCS_TOKEN_URL - Type: string - Default: "" #### --gcs-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_GCS_ENCODING - Type: MultiEncoder - Default: Slash,CrLf,InvalidUtf8,Dot {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/googlephotos.md000066400000000000000000000300421375552240400205300ustar00rootroot00000000000000--- title: "Google Photos" description: "Rclone docs for Google Photos" --- {{< icon "fa fa-images" >}} Google Photos ------------------------------------------------- The rclone backend for [Google Photos](https://www.google.com/photos/about/) is a specialized backend for transferring photos and videos to and from Google Photos. **NB** The Google Photos API which rclone uses has quite a few limitations, so please read the [limitations section](#limitations) carefully to make sure it is suitable for your use. ## Configuring Google Photos The initial setup for google cloud storage involves getting a token from Google Photos which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Google Photos \ "google photos" [snip] Storage> google photos ** See help for google photos backend at: https://rclone.org/googlephotos/ ** Google Application Client Id Leave blank normally. Enter a string value. 
Press Enter for the default (""). client_id> Google Application Client Secret Leave blank normally. Enter a string value. Press Enter for the default (""). client_secret> Set to make the Google Photos backend read only. If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access. Enter a boolean value (true or false). Press Enter for the default ("false"). read_only> Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code *** IMPORTANT: All media items uploaded to Google Photos with rclone *** are stored in full resolution at original quality. These uploads *** will count towards storage in your Google Account. -------------------- [remote] type = google photos token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this may require you to unblock it temporarily if you are running a host firewall, or use manual mode. This remote is called `remote` and can now be used like this See all the albums in your photos rclone lsd remote:album Make a new album rclone mkdir remote:album/newAlbum List the contents of an album rclone ls remote:album/newAlbum Sync `/home/local/images` to the Google Photos, removing any excess files in the album. rclone sync -i /home/local/image remote:album/newAlbum ## Layout As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it. The directories under `media` show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup `remote:media/by-month`. (**NB** `remote:media/by-day` is rather slow at the moment so avoid for syncing.) Note that all your photos and videos will appear somewhere under `media`, but they may not appear under `album` unless you've put them into albums. ``` / - upload - file1.jpg - file2.jpg - ... - media - all - file1.jpg - file2.jpg - ... - by-year - 2000 - file1.jpg - ... - 2001 - file2.jpg - ... - ... - by-month - 2000 - 2000-01 - file1.jpg - ... - 2000-02 - file2.jpg - ... - ... - by-day - 2000 - 2000-01-01 - file1.jpg - ... - 2000-01-02 - file2.jpg - ... - ... - album - album name - album name/sub - shared-album - album name - album name/sub - feature - favorites - file1.jpg - file2.jpg ``` There are two writable parts of the tree, the `upload` directory and sub directories of the `album` directory. The `upload` directory is for uploading files you don't want to put into albums. This will be empty to start with and will contain the files you've uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to once off dump into Google Photos. For repeated syncing, uploading to `album` will work better. 
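For example, the two approaches might look like this (the local paths and album name are only illustrative):

```
# One-off dump of files which don't need to be in an album
rclone copy /path/to/files remote:upload

# Repeated sync into an album - a better target for ongoing backups
rclone sync -i /path/to/photos remote:album/holiday
```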
Directories within the `album` directory are also writeable and you may create new directories (albums) under `album`. If you copy files with a directory hierarchy in there then rclone will create albums with the `/` character in them. For example if you do rclone copy /path/to/images remote:album/images and the images directory contains ``` images - file1.jpg dir file2.jpg dir2 dir3 file3.jpg ``` Then rclone will create the following albums with the following files in - images - file1.jpg - images/dir - file2.jpg - images/dir2/dir3 - file3.jpg This means that you can use the `album` path pretty much like a normal filesystem and it is a good target for repeated syncing. The `shared-album` directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface. ## Limitations Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is put turned into a media item. Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and **will** count towards your storage quota in your Google Account. The API does **not** offer a way to upload in "high quality" mode.. ### Downloading Images When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by [bug #112096115](https://issuetracker.google.com/issues/112096115). **The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort** ### Downloading Videos When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by [bug #113672044](https://issuetracker.google.com/issues/113672044). ### Duplicates If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called `file.jpg` would then appear as `file {123456}.jpg` and `file {ABCDEF}.jpg` (the actual IDs are a lot longer alas!). If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to `upload` then uploaded the same image to `album/my_album` the filename of the image in `album/my_album` will be what it was uploaded with initially, not what you uploaded it with to `album`. In practise this shouldn't cause too many problems. ### Modified time The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known. This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes. ### Size The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check. It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is **very slow** and uses up a lot of transactions. 
This can be enabled with the `--gphotos-read-size` option or the `read_size = true` config parameter. If you want to use the backend with `rclone mount` you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag. ### Albums Rclone can only upload files to albums it created. This is a [limitation of the Google Photos API](https://developers.google.com/photos/library/guides/manage-albums). Rclone can remove files it uploaded from albums it created only. ### Deleting files Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See [bug #109759781](https://issuetracker.google.com/issues/109759781). Rclone cannot delete files anywhere except under `album`. ### Deleting albums The Google Photos API does not support deleting albums - see [bug #135714733](https://issuetracker.google.com/issues/135714733). {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlephotos/googlephotos.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to google photos (Google Photos). #### --gphotos-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_GPHOTOS_CLIENT_ID - Type: string - Default: "" #### --gphotos-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_GPHOTOS_CLIENT_SECRET - Type: string - Default: "" #### --gphotos-read-only Set to make the Google Photos backend read only. If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access. - Config: read_only - Env Var: RCLONE_GPHOTOS_READ_ONLY - Type: bool - Default: false ### Advanced Options Here are the advanced options specific to google photos (Google Photos). #### --gphotos-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_GPHOTOS_TOKEN - Type: string - Default: "" #### --gphotos-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_GPHOTOS_AUTH_URL - Type: string - Default: "" #### --gphotos-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_GPHOTOS_TOKEN_URL - Type: string - Default: "" #### --gphotos-read-size Set to read the size of media items. Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media. - Config: read_size - Env Var: RCLONE_GPHOTOS_READ_SIZE - Type: bool - Default: false #### --gphotos-start-year Year limits the photos to be downloaded to those which are uploaded after the given year - Config: start_year - Env Var: RCLONE_GPHOTOS_START_YEAR - Type: int - Default: 2000 {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/gui.md000066400000000000000000000066741375552240400166210ustar00rootroot00000000000000--- title: "GUI" description: "Web based Graphical User Interface" --- # GUI (Experimental) Rclone can serve a web based GUI (graphical user interface). This is somewhat experimental at the moment so things may be subject to change. 
Run this command in a terminal and rclone will download and then display the GUI in a web browser. ``` rclone rcd --rc-web-gui ``` This will produce logs like this and rclone needs to continue to run to serve the GUI: ``` 2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip 2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path : /home/USER/.cache/rclone/webgui/v0.0.6.zip] 2019/08/25 11:40:16 NOTICE: Unzipping 2019/08/25 11:40:16 NOTICE: Serving remote control on http://127.0.0.1:5572/ ``` This assumes you are running rclone locally on your machine. It is possible to separate the rclone and the GUI - see below for details. If you wish to check for updates then you can add `--rc-web-gui-update` to the command line. If you find your GUI broken, you may force it to update by add `--rc-web-gui-force-update`. By default, rclone will open your browser. Add `--rc-web-gui-no-open-browser` to disable this feature. ## Using the GUI Once the GUI opens, you will be looking at the dashboard which has an overall overview. On the left hand side you will see a series of view buttons you can click on: - Dashboard - main overview - Configs - examine and create new configurations - Explorer - view, download and upload files to the cloud storage systems - Backend - view or alter the backend config - Log out (More docs and walkthrough video to come!) ## How it works When you run the `rclone rcd --rc-web-gui` this is what happens - Rclone starts but only runs the remote control API ("rc"). - The API is bound to localhost with an auto generated username and password. - If the API bundle is missing then rclone will download it. - rclone will start serving the files from the API bundle over the same port as the API - rclone will open the browser with a `login_token` so it can log straight in. ## Advanced use The `rclone rcd` may use any of the [flags documented on the rc page](https://rclone.org/rc/#supported-parameters). The flag `--rc-web-gui` is shorthand for - Download the web GUI if necessary - Check we are using some authentication - `--rc-user gui` - `--rc-pass ` - `--rc-serve` These flags can be overridden as desired. See also the [rclone rcd documentation](https://rclone.org/commands/rclone_rcd/). ### Example: Running a public GUI For example the GUI could be served on a public port over SSL using an htpasswd file using the following flags: - `--rc-web-gui` - `--rc-addr :443` - `--rc-htpasswd /path/to/htpasswd` - `--rc-cert /path/to/ssl.crt` - `--rc-key /path/to/ssl.key` ### Example: Running a GUI behind a proxy If you want to run the GUI behind a proxy at `/rclone` you could use these flags: - `--rc-web-gui` - `--rc-baseurl rclone` - `--rc-htpasswd /path/to/htpasswd` Or instead of htpasswd if you just want a single user and password: - `--rc-user me` - `--rc-pass mypassword` ## Project The GUI is being developed in the: [rclone/rclone-webui-react repository](https://github.com/rclone/rclone-webui-react). Bug reports and contributions are very welcome :-) If you have questions then please ask them on the [rclone forum](https://forum.rclone.org/). 
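As a sketch, the "public GUI" example above assembled into a single command line - the certificate, key and htpasswd paths are placeholders:

```
rclone rcd --rc-web-gui \
  --rc-addr :443 \
  --rc-htpasswd /path/to/htpasswd \
  --rc-cert /path/to/ssl.crt \
  --rc-key /path/to/ssl.key
```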
rclone-1.53.3/docs/content/http.md000066400000000000000000000111641375552240400170020ustar00rootroot00000000000000--- title: "HTTP Remote" description: "Read only remote for HTTP servers" --- {{< icon "fa fa-globe" >}} HTTP ------------------------------------------------- The HTTP remote is a read only remote for reading files of a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!) Paths are specified as `remote:` or `remote:path/to/dir`. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / http Connection \ "http" [snip] Storage> http URL of http host to connect to Choose a number from below, or type in your own value 1 / Connect to example.com \ "https://example.com" url> https://beta.rclone.org Remote config -------------------- [remote] url = https://beta.rclone.org -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Current remotes: Name Type ==== ==== remote http e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> q ``` This remote is called `remote` and can now be used like this See all the top level directories rclone lsd remote: List the contents of a directory rclone ls remote:directory Sync the remote `directory` to `/home/local/directory`, deleting any excess files. rclone sync -i remote:directory /home/local/directory ### Read only ### This remote is read only - you can't upload files to an HTTP server. ### Modified time ### Most HTTP servers store time accurate to 1 second. ### Checksum ### No checksums are stored. ### Usage without a config file ### Since the http remote only has one config parameter it is easy to use without a config file: rclone lsd --http-url https://beta.rclone.org :http: {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/http/http.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to http (http Connection). #### --http-url URL of http host to connect to - Config: url - Env Var: RCLONE_HTTP_URL - Type: string - Default: "" - Examples: - "https://example.com" - Connect to example.com - "https://user:pass@example.com" - Connect to example.com using a username and password ### Advanced Options Here are the advanced options specific to http (http Connection). #### --http-headers Set HTTP headers for all transactions Use this to set additional HTTP headers for all transactions The input format is comma separated list of key,value pairs. Standard [CSV encoding](https://godoc.org/encoding/csv) may be used. For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'. - Config: headers - Env Var: RCLONE_HTTP_HEADERS - Type: CommaSepList - Default: #### --http-no-slash Set this if the site doesn't end directories with / Use this if your target website does not use / on the end of directories. 
A / on the end of a path is how rclone normally tells the difference between files and directories. If this flag is set, then rclone will treat all files with Content-Type: text/html as directories and read URLs from them rather than downloading them. Note that this may cause rclone to confuse genuine HTML files with directories. - Config: no_slash - Env Var: RCLONE_HTTP_NO_SLASH - Type: bool - Default: false #### --http-no-head Don't use HEAD requests to find file sizes in dir listing If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to: - find its size - check it really exists - check to see if it is a directory If you set this option, rclone will not do the HEAD request. This will mean - directory listings are much quicker - rclone won't have the times or sizes of any files - some files that don't exist may be in the listing - Config: no_head - Env Var: RCLONE_HTTP_NO_HEAD - Type: bool - Default: false {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/hubic.md000066400000000000000000000126401375552240400171150ustar00rootroot00000000000000--- title: "Hubic" description: "Rclone docs for Hubic" --- {{< icon "fa fa-space-shuttle" >}} Hubic ----------------------------------------- Paths are specified as `remote:container` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` n) New remote s) Set configuration password n/s> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Hubic \ "hubic" [snip] Storage> hubic Hubic Client Id - leave blank normally. client_id> Hubic Client Secret - leave blank normally. client_secret> Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] client_id = client_secret = token = {"access_token":"XXXXXX"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` See the [remote setup docs](/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Hubic. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this, List containers in the top level of your Hubic rclone lsd remote: List all the files in your Hubic rclone ls remote: To copy a local directory to a Hubic directory called backup rclone copy /home/source remote:backup If you want the directory to be visible in the official *Hubic browser*, you need to copy your files to the `default` directory rclone copy /home/source remote:default/backup ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](/docs/#fast-list) for more details. ### Modified time ### The modified time is stored as metadata on the object as `X-Object-Meta-Mtime` as floating point since the epoch accurate to 1 ns. This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object. Note that Hubic wraps the Swift backend, so most of the properties are the same. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hubic/hubic.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to hubic (Hubic). #### --hubic-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_HUBIC_CLIENT_ID - Type: string - Default: "" #### --hubic-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_HUBIC_CLIENT_SECRET - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to hubic (Hubic). #### --hubic-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_HUBIC_TOKEN - Type: string - Default: "" #### --hubic-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_HUBIC_AUTH_URL - Type: string - Default: "" #### --hubic-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_HUBIC_TOKEN_URL - Type: string - Default: "" #### --hubic-chunk-size Above this size files will be chunked into a _segments container. Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value. - Config: chunk_size - Env Var: RCLONE_HUBIC_CHUNK_SIZE - Type: SizeSuffix - Default: 5G #### --hubic-no-chunk Don't chunk files during streaming upload. When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM. Rclone will still chunk files bigger than chunk_size when doing normal copy operations. - Config: no_chunk - Env Var: RCLONE_HUBIC_NO_CHUNK - Type: bool - Default: false #### --hubic-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_HUBIC_ENCODING - Type: MultiEncoder - Default: Slash,InvalidUtf8 {{< rem autogenerated options stop >}} ### Limitations ### This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API. The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
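Since hashes can't be relied on for these segmented files, one hedged workaround is to verify a transfer by size only. For example, using the `remote:` name from the example config above (the global `--size-only` flag makes `rclone check` compare sizes instead of MD5SUMs, so it will not catch corruption that leaves the size unchanged):

```
rclone check --size-only /home/source remote:backup
```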
rclone-1.53.3/docs/content/install.md000066400000000000000000000163451375552240400174750ustar00rootroot00000000000000--- title: "Install" description: "Rclone Installation" --- # Install # Rclone is a Go program and comes as a single binary file. ## Quickstart ## * [Download](/downloads/) the relevant binary. * Extract the `rclone` or `rclone.exe` binary from the archive * Run `rclone config` to setup. See [rclone config docs](/docs/) for more details. See below for some expanded Linux / macOS instructions. See the [Usage section](/docs/#usage) of the docs for how to use rclone, or run `rclone -h`. ## Script installation ## To install rclone on Linux/macOS/BSD systems, run: curl https://rclone.org/install.sh | sudo bash For beta installation, run: curl https://rclone.org/install.sh | sudo bash -s beta Note that this script checks the version of rclone installed first and won't re-download if not needed. ## Linux installation from precompiled binary ## Fetch and unpack curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip unzip rclone-current-linux-amd64.zip cd rclone-*-linux-amd64 Copy binary file sudo cp rclone /usr/bin/ sudo chown root:root /usr/bin/rclone sudo chmod 755 /usr/bin/rclone Install manpage sudo mkdir -p /usr/local/share/man/man1 sudo cp rclone.1 /usr/local/share/man/man1/ sudo mandb Run `rclone config` to setup. See [rclone config docs](/docs/) for more details. rclone config ## macOS installation with brew ## brew install rclone ## macOS installation from precompiled binary, using curl ## To avoid problems with macOS gatekeeper enforcing the binary to be signed and notarized it is enough to download with `curl`. Download the latest version of rclone. cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip Unzip the download and cd to the extracted folder. unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64 Move rclone to your $PATH. You will be prompted for your password. sudo mkdir -p /usr/local/bin sudo mv rclone /usr/local/bin/ (the `mkdir` command is safe to run, even if the directory already exists). Remove the leftover files. cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip Run `rclone config` to setup. See [rclone config docs](/docs/) for more details. rclone config ## macOS installation from precompiled binary, using a web browser ## When downloading a binary with a web browser, the browser will set the macOS gatekeeper quarantine attribute. Starting from Catalina, when attempting to run `rclone`, a pop-up will appear saying: “rclone” cannot be opened because the developer cannot be verified. macOS cannot verify that this app is free from malware. The simplest fix is to run xattr -d com.apple.quarantine rclone ## Install with docker ## The rclone project maintains a [docker image for rclone](https://hub.docker.com/r/rclone/rclone). These images are autobuilt by docker hub from the rclone source based on a minimal Alpine Linux image. The `:latest` tag will always point to the latest stable release. You can use the `:beta` tag to get the latest build from master. You can also use version tags, eg `:1.49.1`, `:1.49` or `:1`. ``` $ docker pull rclone/rclone:latest latest: Pulling from rclone/rclone Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11 ... $ docker run --rm rclone/rclone:latest version rclone v1.49.1 - os/arch: linux/amd64 - go version: go1.12.9 ``` There are a few command line options to consider when starting an rclone Docker container from the rclone image.
- You need to mount the host rclone config dir at `/config/rclone` into the Docker container. Due to the fact that rclone updates tokens inside its config file, and that the update process involves a file rename, you need to mount the whole host rclone config dir, not just the single host rclone config file. - You need to mount a host data dir at `/data` into the Docker container. - By default, the rclone binary inside a Docker container runs with UID=0 (root). As a result, all files created in a run will have UID=0. If your config and data files reside on the host with a non-root UID:GID, you need to pass these on the container start command line. - If you want to access the RC interface (either via the API or the Web UI), it is required to set the `--rc-addr` to `:5572` in order to connect to it from outside the container. An explanation about why this is necessary is present [here](https://web.archive.org/web/20200808071950/https://pythonspeed.com/articles/docker-connection-refused/). * NOTE: Users running this container with the docker network set to `host` should probably set it to listen to localhost only, with `127.0.0.1:5572` as the value for `--rc-addr` - It is possible to use `rclone mount` inside a userspace Docker container, and expose the resulting fuse mount to the host. The exact `docker run` options to do that might vary slightly between hosts. See, e.g. the discussion in this [thread](https://github.com/moby/moby/issues/9448). You also need to mount the host `/etc/passwd` and `/etc/group` for fuse to work inside the container. Here are some commands tested on an Ubuntu 18.04.3 host: ``` # config on host at ~/.config/rclone/rclone.conf # data on host at ~/data # make sure the config is ok by listing the remotes docker run --rm \ --volume ~/.config/rclone:/config/rclone \ --volume ~/data:/data:shared \ --user $(id -u):$(id -g) \ rclone/rclone \ listremotes # perform mount inside Docker container, expose result to host mkdir -p ~/data/mount docker run --rm \ --volume ~/.config/rclone:/config/rclone \ --volume ~/data:/data:shared \ --user $(id -u):$(id -g) \ --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro \ --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined \ rclone/rclone \ mount dropbox:Photos /data/mount & ls ~/data/mount kill %1 ``` ## Install from source ## Make sure you have at least [Go](https://golang.org/) 1.11 installed. [Download go](https://golang.org/dl/) if necessary. The latest release is recommended. Then git clone https://github.com/rclone/rclone.git cd rclone go build ./rclone version This will leave you with a checked out version of rclone you can modify and send pull requests with. If you use `make` instead of `go build` then the rclone build will have the correct version information in it. You can also build the latest stable rclone with: go get github.com/rclone/rclone or the latest version (equivalent to the beta) with go get github.com/rclone/rclone@master These will build the binary in `$(go env GOPATH)/bin` (`~/go/bin/rclone` by default) after downloading the source to the go module cache. Note - do **not** use the `-u` flag here. This causes go to try to update the dependencies that rclone uses and sometimes these don't work with the current version of rclone. ## Installation with Ansible ## This can be done with [Stefan Weichinger's ansible role](https://github.com/stefangweichinger/ansible-rclone). Instructions 1.
`git clone https://github.com/stefangweichinger/ansible-rclone.git` into your local roles-directory 2. add the role to the hosts you want rclone installed to: ``` - hosts: rclone-hosts roles: - rclone ``` rclone-1.53.3/docs/content/install.sh000077500000000000000000000105241375552240400175050ustar00rootroot00000000000000#!/usr/bin/env bash # error codes # 0 - exited without problems # 1 - parameters not supported were used or some unexpected error occurred # 2 - OS not supported by this script # 3 - installed version of rclone is up to date # 4 - supported unzip tools are not available set -e #when adding a tool to the list make sure to also add its corresponding command further in the script unzip_tools_list=('unzip' '7z' 'busybox') usage() { echo "Usage: curl https://rclone.org/install.sh | sudo bash [-s beta]" 1>&2; exit 1; } #check for beta flag if [ -n "$1" ] && [ "$1" != "beta" ]; then usage fi if [ -n "$1" ]; then install_beta="beta " fi #create tmp directory and move to it with macOS compatibility fallback tmp_dir=`mktemp -d 2>/dev/null || mktemp -d -t 'rclone-install.XXXXXXXXXX'`; cd $tmp_dir #make sure unzip tool is available and choose one to work with set +e for tool in ${unzip_tools_list[*]}; do trash=`hash $tool 2>>errors` if [ "$?" -eq 0 ]; then unzip_tool="$tool" break fi done set -e # exit if no unzip tools available if [ -z "${unzip_tool}" ]; then printf "\nNone of the supported tools for extracting zip archives (${unzip_tools_list[*]}) were found. " printf "Please install one of them and try again.\n\n" exit 4 fi # Make sure we don't create a root owned .config/rclone directory #2127 export XDG_CONFIG_HOME=config #check installed version of rclone to determine if update is necessary version=`rclone --version 2>>errors | head -n 1` if [ -z "${install_beta}" ]; then current_version=`curl https://downloads.rclone.org/version.txt` else current_version=`curl https://beta.rclone.org/version.txt` fi if [ "$version" = "$current_version" ]; then printf "\nThe latest ${install_beta}version of rclone ${version} is already installed.\n\n" exit 3 fi #detect the platform OS="`uname`" case $OS in Linux) OS='linux' ;; FreeBSD) OS='freebsd' ;; NetBSD) OS='netbsd' ;; OpenBSD) OS='openbsd' ;; Darwin) OS='osx' ;; SunOS) OS='solaris' echo 'OS not supported' exit 2 ;; *) echo 'OS not supported' exit 2 ;; esac OS_type="`uname -m`" case $OS_type in x86_64|amd64) OS_type='amd64' ;; i?86|x86) OS_type='386' ;; arm*) OS_type='arm' ;; aarch64) OS_type='arm64' ;; *) echo 'OS type not supported' exit 2 ;; esac #download and unzip if [ -z "${install_beta}" ]; then download_link="https://downloads.rclone.org/rclone-current-$OS-$OS_type.zip" rclone_zip="rclone-current-$OS-$OS_type.zip" else download_link="https://beta.rclone.org/rclone-beta-latest-$OS-$OS_type.zip" rclone_zip="rclone-beta-latest-$OS-$OS_type.zip" fi curl -O $download_link unzip_dir="tmp_unzip_dir_for_rclone" # there should be an entry in this switch for each element of unzip_tools_list case $unzip_tool in 'unzip') unzip -a $rclone_zip -d $unzip_dir ;; '7z') 7z x $rclone_zip -o$unzip_dir ;; 'busybox') mkdir -p $unzip_dir busybox unzip $rclone_zip -d $unzip_dir ;; esac cd $unzip_dir/* #mounting rclone to environment case $OS in 'linux') #binary cp rclone /usr/bin/rclone.new chmod 755 /usr/bin/rclone.new chown root:root /usr/bin/rclone.new mv /usr/bin/rclone.new /usr/bin/rclone #manuals if ! [ -x "$(command -v mandb)" ]; then echo 'mandb not found. The rclone man docs will not be installed.' 
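    # Note: the rclone binary was already installed above; when mandb is
    # missing we only skip installing the rclone.1 man page (else branch below).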
else mkdir -p /usr/local/share/man/man1 cp rclone.1 /usr/local/share/man/man1/ mandb fi ;; 'freebsd'|'openbsd'|'netbsd') #bin cp rclone /usr/bin/rclone.new chown root:wheel /usr/bin/rclone.new mv /usr/bin/rclone.new /usr/bin/rclone #man mkdir -p /usr/local/man/man1 cp rclone.1 /usr/local/man/man1/ makewhatis ;; 'osx') #binary mkdir -p /usr/local/bin cp rclone /usr/local/bin/rclone.new mv /usr/local/bin/rclone.new /usr/local/bin/rclone #manual mkdir -p /usr/local/share/man/man1 cp rclone.1 /usr/local/share/man/man1/ ;; *) echo 'OS not supported' exit 2 esac #update version variable post install version=`rclone --version 2>>errors | head -n 1` printf "\n${version} has successfully installed." printf '\nNow run "rclone config" for setup. Check https://rclone.org/docs/ for more details.\n\n' exit 0 rclone-1.53.3/docs/content/jottacloud.md000066400000000000000000000215221375552240400201720ustar00rootroot00000000000000--- title: "Jottacloud" description: "Rclone docs for Jottacloud" --- {{< icon "fa fa-cloud" >}} Jottacloud ----------------------------------------- Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. In addition to the official service at [jottacloud.com](https://www.jottacloud.com/), there are also several whitelabel versions which should work with this backend. Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. ## Setup ### Default Setup To configure Jottacloud you will need to generate a personal security token in the Jottacloud web interface. You will find the option to do this in your [account security settings](https://www.jottacloud.com/web/secure) (for a whitelabel version you need to find this page in its web interface). Note that the web interface may refer to this token as a JottaCli token. ### Legacy Setup If you are using one of the whitelabel versions (Elgiganten, Com Hem Cloud) you may not have the option to generate a CLI token. In this case you'll have to use legacy authentication. To do this, select yes when the setup asks for legacy authentication and enter your username and password. The rest of the setup is identical to the default setup. Here is an example of how to make a remote called `remote` with the default setup. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Jottacloud \ "jottacloud" [snip] Storage> jottacloud ** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config Use legacy authentification?. This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. y) Yes n) No (default) y/n> n Generate a personal login token here: https://www.jottacloud.com/web/secure Login Token> Do you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client? y) Yes n) No y/n> y Please select the device to use. Normally this will be Jotta Choose a number from below, or type in an existing value 1 > DESKTOP-3H31129 2 > Jotta Devices> 2 Please select the mountpoint to user.
Normally this will be Archive Choose a number from below, or type in an existing value 1 > Archive 2 > Links 3 > Sync Mountpoints> 1 -------------------- [jotta] type = jottacloud token = {........} device = Jotta mountpoint = Archive configVersion = 1 -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` Once configured you can then use `rclone` like this, List directories in top level of your Jottacloud rclone lsd remote: List all the files in your Jottacloud rclone ls remote: To copy a local directory to a Jottacloud directory called backup rclone copy /home/source remote:backup ### Devices and Mountpoints The official Jottacloud client registers a device for each computer you install it on, and then creates a mountpoint for each folder you select for Backup. The web interface uses a special device called Jotta for the Archive and Sync mountpoints. In most cases you'll want to use the Jotta/Archive device/mountpoint, however if you want to access files uploaded by any of the official clients rclone provides the option to select other devices and mountpoints during config. The built-in Jotta device may also contain several other mountpoints, such as: Latest, Links, Shared and Trash. These are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone will only support them to a very limited degree. Generally you should avoid these, unless you know what you are doing. ### --fast-list This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](/docs/#fast-list) for more details. Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to long wait time before the first results are shown. ### Modified time and hashes Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. Jottacloud supports MD5 type hashes, so you can use the `--checksum` flag. Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the `TMPDIR` environment variable points to) before it is uploaded. Small files will be cached in memory - see the [--jottacloud-md5-memory-limit](#jottacloud-md5-memory-limit) flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above). #### Restricted filename characters In addition to the [default restricted characters set](/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | " | 0x22 | ＂ | | * | 0x2A | ＊ | | : | 0x3A | ： | | < | 0x3C | ＜ | | > | 0x3E | ＞ | | ? | 0x3F | ？ | | \| | 0x7C | ｜ | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in XML strings. ### Deleting files By default rclone will send all files to the trash when deleting files. They will be permanently deleted automatically after 30 days.
You may bypass the trash and permanently delete files immediately by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) flag, or by setting the equivalent environment variable. Emptying the trash is supported by the [cleanup](/commands/rclone_cleanup/) command. ### Versions Jottacloud supports file versioning. When rclone uploads a changed file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud website. ### Quota information To view your current quota you can use the `rclone about remote:` command which will display your usage limit (unless it is unlimited) and the current usage. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/jottacloud/jottacloud.go then run make backenddocs" >}} ### Advanced Options Here are the advanced options specific to jottacloud (Jottacloud). #### --jottacloud-md5-memory-limit Files bigger than this will be cached on disk to calculate the MD5 if required. - Config: md5_memory_limit - Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT - Type: SizeSuffix - Default: 10M #### --jottacloud-trashed-only Only show files that are in the trash. This will show trashed files in their original directory structure. - Config: trashed_only - Env Var: RCLONE_JOTTACLOUD_TRASHED_ONLY - Type: bool - Default: false #### --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - Config: hard_delete - Env Var: RCLONE_JOTTACLOUD_HARD_DELETE - Type: bool - Default: false #### --jottacloud-upload-resume-limit Files bigger than this can be resumed if the upload fails. - Config: upload_resume_limit - Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT - Type: SizeSuffix - Default: 10M #### --jottacloud-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_JOTTACLOUD_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot {{< rem autogenerated options stop >}} ### Limitations Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead. Jottacloud only supports filenames up to 255 characters in length. ### Troubleshooting Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.
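For example, a minimal recovery sequence before retrying a previously failing server side move might look like this (using the `remote:` name from the example config above; the paths are illustrative):

```
rclone cleanup remote:
rclone moveto remote:some/old/path remote:some/new/path
```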
First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> koofr Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Koofr \ "koofr" [snip] Storage> koofr ** See help for koofr backend at: https://rclone.org/koofr/ ** Your Koofr user name Enter a string value. Press Enter for the default (""). user> USER@NAME Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [koofr] type = koofr baseurl = https://app.koofr.net user = USER@NAME password = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` You can choose to edit advanced config in order to enter your own service URL if you use an on-premise or white label Koofr instance, or choose an alternative mount instead of your primary storage. Once configured you can then use `rclone` like this, List directories in top level of your Koofr rclone lsd koofr: List all the files in your Koofr rclone ls koofr: To copy a local directory to a Koofr directory called backup rclone copy /home/source koofr:backup #### Restricted filename characters In addition to the [default restricted characters set](/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | \ | 0x5C | ＼ | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in XML strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/koofr/koofr.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to koofr (Koofr). #### --koofr-user Your Koofr user name - Config: user - Env Var: RCLONE_KOOFR_USER - Type: string - Default: "" #### --koofr-password Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) **NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/). - Config: password - Env Var: RCLONE_KOOFR_PASSWORD - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to koofr (Koofr). #### --koofr-endpoint The Koofr API endpoint to use - Config: endpoint - Env Var: RCLONE_KOOFR_ENDPOINT - Type: string - Default: "https://app.koofr.net" #### --koofr-mountid Mount ID of the mount to use. If omitted, the primary mount is used. - Config: mountid - Env Var: RCLONE_KOOFR_MOUNTID - Type: string - Default: "" #### --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. - Config: setmtime - Env Var: RCLONE_KOOFR_SETMTIME - Type: bool - Default: true #### --koofr-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding - Env Var: RCLONE_KOOFR_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot {{< rem autogenerated options stop >}} ### Limitations ### Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". rclone-1.53.3/docs/content/licence.md000066400000000000000000000024131375552240400174220ustar00rootroot00000000000000--- title: "Licence" description: "Rclone Licence" --- License ------- This is free software under the terms of the MIT license (check the COPYING file included with the source code). ``` Copyright (C) 2019 by Nick Craig-Wood https://www.craig-wood.com/nick/ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` rclone-1.53.3/docs/content/local.md000066400000000000000000000332601375552240400171160ustar00rootroot00000000000000--- title: "Local Filesystem" description: "Rclone docs for the local filesystem" --- {{< icon "fas fa-hdd" >}} Local Filesystem ------------------------------------------- Local paths are specified as normal filesystem paths, eg `/path/to/wherever`, so rclone sync -i /home/source /tmp/destination Will sync `/home/source` to `/tmp/destination` These can be configured into the config file for consistency's sake, but it is probably easier not to. ### Modified time ### Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X. ### Filenames ### Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X. There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded file names. If you are using an old Linux filesystem with non UTF-8 file names (eg latin1) then you can use the `convmv` tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers. If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with a quoted representation of the invalid bytes. The name `gro\xdf` will be transferred as `gro‛DF`. `rclone` will emit a debug message in this case (use `-v` to see), eg ``` Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf" ``` #### Restricted characters On non Windows platforms the following characters are replaced when handling file names. | Character | Value | Replacement | | --------- |:-----:|:-----------:| | NUL | 0x00 | ␀ | | / | 0x2F | ／ | When running on Windows the following characters are replaced.
This list is based on the [Windows file naming conventions](https://docs.microsoft.com/de-de/windows/desktop/FileIO/naming-a-file#naming-conventions). | Character | Value | Replacement | | --------- |:-----:|:-----------:| | NUL | 0x00 | ␀ | | SOH | 0x01 | ␁ | | STX | 0x02 | ␂ | | ETX | 0x03 | ␃ | | EOT | 0x04 | ␄ | | ENQ | 0x05 | ␅ | | ACK | 0x06 | ␆ | | BEL | 0x07 | ␇ | | BS | 0x08 | ␈ | | HT | 0x09 | ␉ | | LF | 0x0A | ␊ | | VT | 0x0B | ␋ | | FF | 0x0C | ␌ | | CR | 0x0D | ␍ | | SO | 0x0E | ␎ | | SI | 0x0F | ␏ | | DLE | 0x10 | ␐ | | DC1 | 0x11 | ␑ | | DC2 | 0x12 | ␒ | | DC3 | 0x13 | ␓ | | DC4 | 0x14 | ␔ | | NAK | 0x15 | ␕ | | SYN | 0x16 | ␖ | | ETB | 0x17 | ␗ | | CAN | 0x18 | ␘ | | EM | 0x19 | ␙ | | SUB | 0x1A | ␚ | | ESC | 0x1B | ␛ | | FS | 0x1C | ␜ | | GS | 0x1D | ␝ | | RS | 0x1E | ␞ | | US | 0x1F | ␟ | | / | 0x2F | ／ | | " | 0x22 | ＂ | | * | 0x2A | ＊ | | : | 0x3A | ： | | < | 0x3C | ＜ | | > | 0x3E | ＞ | | ? | 0x3F | ？ | | \ | 0x5C | ＼ | | \| | 0x7C | ｜ | File names on Windows can also not end with the following characters. These only get replaced if they are the last character in the name: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | SP | 0x20 | ␠ | | . | 0x2E | ． | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be converted to UTF-16. ### Long paths on Windows ### Rclone handles long paths automatically, by converting all paths to long [UNC paths](https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx#maxpath) which allows paths up to 32,767 characters. This is why you will see that your paths, for instance `c:\files` is converted to the UNC path `\\?\c:\files` in the output, and `\\server\share` is converted to `\\?\UNC\server\share`. However, in rare cases this may cause problems with buggy file system drivers like [EncFS](https://github.com/rclone/rclone/issues/261). To disable UNC conversion globally, add this to your `.rclone.conf` file: ``` [local] nounc = true ``` If you want to selectively disable UNC, you can add it to a separate entry like this: ``` [nounc] type = local nounc = true ``` And use rclone like this: `rclone copy c:\src nounc:z:\dst` This will use UNC paths on `c:\src` but not on `z:\dst`. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to. ### Symlinks / Junction points Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows). If you supply `--copy-links` or `-L` then rclone will follow the symlink and copy the pointed to file or directory. Note that this flag is incompatible with `--links` / `-l`. This flag applies to all commands. For example, supposing you have a directory structure like this ``` $ tree /tmp/a /tmp/a ├── b -> ../b ├── expected -> ../expected ├── one └── two └── three ``` Then you can see the difference with and without the flag like this ``` $ rclone ls /tmp/a 6 one 6 two/three ``` and ``` $ rclone -L ls /tmp/a 4174 expected 6 one 6 two/three 6 b/two 6 b/one ``` #### --links, -l Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows). If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a '.rclonelink' suffix in the remote storage. The text file will contain the target of the symbolic link (see example). This flag applies to all commands.
For example, supposing you have a directory structure like this ``` $ tree /tmp/a /tmp/a ├── file1 -> ./file4 └── file2 -> /home/user/file3 ``` Copying the entire directory with '-l' ``` $ rclone copyto -l /tmp/a/file1 remote:/tmp/a/ ``` The remote files are created with a '.rclonelink' suffix ``` $ rclone ls remote:/tmp/a 5 file1.rclonelink 14 file2.rclonelink ``` The remote files will contain the target of the symbolic links ``` $ rclone cat remote:/tmp/a/file1.rclonelink ./file4 $ rclone cat remote:/tmp/a/file2.rclonelink /home/user/file3 ``` Copying them back with '-l' ``` $ rclone copyto -l remote:/tmp/a/ /tmp/b/ $ tree /tmp/b /tmp/b ├── file1 -> ./file4 └── file2 -> /home/user/file3 ``` However, if copied back without '-l' ``` $ rclone copyto remote:/tmp/a/ /tmp/b/ $ tree /tmp/b /tmp/b ├── file1.rclonelink └── file2.rclonelink ``` Note that this flag is incompatible with `--copy-links` / `-L`. ### Restricting filesystems with --one-file-system Normally rclone will recurse through filesystems as mounted. However if you set `--one-file-system` or `-x` this tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems. For example if you have a directory hierarchy like this ``` root ├── disk1 - disk1 mounted on the root │   └── file3 - stored on disk1 ├── disk2 - disk2 mounted on the root │   └── file4 - stored on disk2 ├── file1 - stored on the root disk └── file2 - stored on the root disk ``` Using `rclone --one-file-system copy root remote:` will only copy `file1` and `file2`. Eg ``` $ rclone -q --one-file-system ls root 0 file1 0 file2 ``` ``` $ rclone -q ls root 0 disk1/file3 0 disk2/file4 0 file1 0 file2 ``` **NB** Rclone (like most unix tools such as `du`, `rsync` and `tar`) treats a bind mount to the same device as being on the same filesystem. **NB** This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will be ignored. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/local/local.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to local (Local Disk). #### --local-nounc Disable UNC (long path names) conversion on Windows - Config: nounc - Env Var: RCLONE_LOCAL_NOUNC - Type: string - Default: "" - Examples: - "true" - Disables long file names ### Advanced Options Here are the advanced options specific to local (Local Disk). #### --copy-links / -L Follow symlinks and copy the pointed to item. - Config: copy_links - Env Var: RCLONE_LOCAL_COPY_LINKS - Type: bool - Default: false #### --links / -l Translate symlinks to/from regular files with a '.rclonelink' extension - Config: links - Env Var: RCLONE_LOCAL_LINKS - Type: bool - Default: false #### --skip-links Don't warn about skipped symlinks. This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped. - Config: skip_links - Env Var: RCLONE_LOCAL_SKIP_LINKS - Type: bool - Default: false #### --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.
- Config: no_unicode_normalization - Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION - Type: bool - Default: false #### --local-no-check-updated Don't check to see if the files change during upload Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts "can't copy - source file is being updated" if the file changes during upload. However on some file systems this modification time check may fail (eg [Glusterfs #2206](https://github.com/rclone/rclone/issues/2206)) so this check can be disabled with this flag. If this flag is set, rclone will use its best efforts to transfer a file which is being updated. If the file is only having things appended to it (eg a log) then rclone will transfer the log file with the size it had the first time rclone saw it. If the file is being modified throughout (not just appended to) then the transfer may fail with a hash check failure. In detail, once the file has had stat() called on it for the first time we: - Only transfer the size that stat gave - Only checksum the size that stat gave - Don't update the stat info for the file - Config: no_check_updated - Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED - Type: bool - Default: false #### --one-file-system / -x Don't cross filesystem boundaries (unix/macOS only). - Config: one_file_system - Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM - Type: bool - Default: false #### --local-case-sensitive Force the filesystem to report itself as case sensitive. Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice. - Config: case_sensitive - Env Var: RCLONE_LOCAL_CASE_SENSITIVE - Type: bool - Default: false #### --local-case-insensitive Force the filesystem to report itself as case insensitive Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice. - Config: case_insensitive - Env Var: RCLONE_LOCAL_CASE_INSENSITIVE - Type: bool - Default: false #### --local-no-sparse Disable sparse files for multi-thread downloads On Windows platforms rclone will make sparse files when doing multi-thread downloads. This avoids long pauses on large files where the OS zeros the file. However sparse files may be undesirable as they cause disk fragmentation and can be slow to work with. - Config: no_sparse - Env Var: RCLONE_LOCAL_NO_SPARSE - Type: bool - Default: false #### --local-no-set-modtime Disable setting modtime Normally rclone updates modification time of files after they are done uploading. This can cause permissions issues on Linux platforms when the user rclone is running as does not own the file uploaded, such as when copying to a CIFS mount owned by another user. If this option is enabled, rclone will no longer update the modtime after copying a file. - Config: no_set_modtime - Env Var: RCLONE_LOCAL_NO_SET_MODTIME - Type: bool - Default: false #### --local-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_LOCAL_ENCODING - Type: MultiEncoder - Default: Slash,Dot ### Backend commands Here are the commands specific to the local backend. Run them with rclone backend COMMAND remote: The help below will explain what arguments each command takes. 
See [the "rclone backend" command](/commands/rclone_backend/) for more info on how to pass options and arguments. These can be run on a running backend using the rc command [backend/command](/rc/#backend/command). #### noop A null operation for testing backend commands rclone backend noop remote: [options] [+] This is a test command which has some options you can try to change the output. Options: - "echo": echo the input arguments - "error": return an error based on option value {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/mailru.md000066400000000000000000000232621375552240400173160ustar00rootroot00000000000000--- title: "Mailru" description: "Mail.ru Cloud" --- {{< icon "fas fa-at" >}} Mail.ru Cloud ---------------------------------------- [Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/), available only on Windows. (Please note that official sites are in Russian) Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until it gets eventually implemented. ### Features highlights ### - Paths may be as deep as required, eg `remote:directory/subdirectory` - Files have a `last modified time` property, directories don't - Deleted files are by default moved to the trash - Files and directories can be shared via public links - Partial uploads or streaming are not supported, file size must be known before upload - Maximum file size is limited to 2G for a free account, unlimited for paid accounts - Storage keeps hash for all files and performs transparent deduplication, the hash algorithm is a modified SHA1 - If a particular file is already present in storage, one can quickly submit file hash instead of long file upload (this optimization is supported by rclone) ### Configuration ### Here is an example of making a mailru configuration. First create a Mail.ru Cloud account and choose a tariff, then run rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Mail.ru Cloud \ "mailru" [snip] Storage> mailru User name (usually email) Enter a string value. Press Enter for the default (""). user> username@mail.ru Password y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Skip full upload if there is another file with same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips [snip] Enter a boolean value (true or false). Press Enter for the default ("true"). Choose a number from below, or type in your own value 1 / Enable \ "true" 2 / Disable \ "false" speedup_enable> 1 Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [remote] type = mailru user = username@mail.ru pass = *** ENCRYPTED *** speedup_enable = true -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` Configuration of this backend does not require a local web browser. 
You can use the configured backend as shown below: See top level directories rclone lsd remote: Make a new directory rclone mkdir remote:directory List the contents of a directory rclone ls remote:directory Sync `/home/local/directory` to the remote path, deleting any excess files in the path. rclone sync -i /home/local/directory remote:directory ### Modified time ### Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970". ### Hash checksums ### Hash sums use a custom Mail.ru algorithm based on SHA1. If file size is less than or equal to the SHA1 digest size (20 bytes), its hash is simply its data right-padded with zero bytes. Hash sum of a larger file is computed as a SHA1 sum of the file data bytes concatenated with a decimal representation of the data length. ### Emptying Trash ### Removing a file or directory actually moves it to the trash, which is not visible to rclone but can be seen in a web browser. The trashed file still occupies part of the total quota. If you wish to empty your trash and free some quota, you can use the `rclone cleanup remote:` command, which will permanently delete all your trashed files. This command does not take any path arguments. ### Quota information ### To view your current quota you can use the `rclone about remote:` command which will display your usage limit (quota) and the current usage. #### Restricted filename characters In addition to the [default restricted characters set](/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | " | 0x22 | ＂ | | * | 0x2A | ＊ | | : | 0x3A | ： | | < | 0x3C | ＜ | | > | 0x3E | ＞ | | ? | 0x3F | ？ | | \ | 0x5C | ＼ | | \| | 0x7C | ｜ | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. ### Limitations ### File size limits depend on your account. A single file size is limited to 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits. Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/mailru/mailru.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to mailru (Mail.ru Cloud). #### --mailru-user User name (usually email) - Config: user - Env Var: RCLONE_MAILRU_USER - Type: string - Default: "" #### --mailru-pass Password **NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/). - Config: pass - Env Var: RCLONE_MAILRU_PASS - Type: string - Default: "" #### --mailru-speedup-enable Skip full upload if there is another file with same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization.
- Config: speedup_enable - Env Var: RCLONE_MAILRU_SPEEDUP_ENABLE - Type: bool - Default: true - Examples: - "true" - Enable - "false" - Disable ### Advanced Options Here are the advanced options specific to mailru (Mail.ru Cloud). #### --mailru-speedup-file-patterns Comma separated list of file name patterns eligible for speedup (put by hash). Patterns are case insensitive and can contain '*' or '?' meta characters. - Config: speedup_file_patterns - Env Var: RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS - Type: string - Default: "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf" - Examples: - "" - Empty list completely disables speedup (put by hash). - "*" - All files will be attempted for speedup. - "*.mkv,*.avi,*.mp4,*.mp3" - Only common audio/video files will be tried for put by hash. - "*.zip,*.gz,*.rar,*.pdf" - Only common archives or PDF books will be tried for speedup. #### --mailru-speedup-max-disk This option allows you to disable speedup (put by hash) for large files (because preliminary hashing can exhaust your RAM or disk space) - Config: speedup_max_disk - Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK - Type: SizeSuffix - Default: 3G - Examples: - "0" - Completely disable speedup (put by hash). - "1G" - Files larger than 1Gb will be uploaded directly. - "3G" - Choose this option if you have less than 3Gb free on local disk. #### --mailru-speedup-max-memory Files larger than the size given below will always be hashed on disk. - Config: speedup_max_memory - Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY - Type: SizeSuffix - Default: 32M - Examples: - "0" - Preliminary hashing will always be done in a temporary disk location. - "32M" - Do not dedicate more than 32Mb RAM for preliminary hashing. - "256M" - You have at most 256Mb RAM free for hash calculations. #### --mailru-check-hash What should copy do if file checksum is mismatched or invalid - Config: check_hash - Env Var: RCLONE_MAILRU_CHECK_HASH - Type: bool - Default: true - Examples: - "true" - Fail with error. - "false" - Ignore and continue. #### --mailru-user-agent HTTP user agent used internally by client. Defaults to "rclone/VERSION" or "--user-agent" provided on command line. - Config: user_agent - Env Var: RCLONE_MAILRU_USER_AGENT - Type: string - Default: "" #### --mailru-quirks Comma separated list of internal maintenance flags. This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist gzip insecure retry400 - Config: quirks - Env Var: RCLONE_MAILRU_QUIRKS - Type: string - Default: "" #### --mailru-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_MAILRU_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/mega.md000066400000000000000000000150271375552240400167360ustar00rootroot00000000000000--- title: "Mega" description: "Rclone docs for Mega" --- {{< icon "fa fa-archive" >}} Mega ----------------------------------------- [Mega](https://mega.nz/) is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded.
This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption. This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption. Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Mega \ "mega" [snip] Storage> mega User name user> you@example.com Password. y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> y Enter the password: password: Confirm the password: password: Remote config -------------------- [remote] type = mega user = you@example.com pass = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` **NOTE:** The encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in `rclone` will fail. Once configured you can then use `rclone` like this, List directories in top level of your Mega rclone lsd remote: List all the files in your Mega rclone ls remote: To copy a local directory to a Mega directory called backup rclone copy /home/source remote:backup ### Modified time and hashes ### Mega does not support modification times or hashes yet. #### Restricted filename characters | Character | Value | Replacement | | --------- |:-----:|:-----------:| | NUL | 0x00 | ␀ | | / | 0x2F | / | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. ### Duplicated files ### Mega can have two files with exactly the same name and path (unlike a normal file system). Duplicated files cause problems with syncing and you will see messages in the log about duplicates. Use `rclone dedupe` to fix duplicated files. ### Failure to log-in ### Mega remotes seem to get blocked (reject logins) under "heavy use". We haven't worked out the exact blocking rules, but they seem to be related to fast-paced, successive rclone commands. For example, executing a command like `rclone link remote:file` 90 times in a row will cause the remote to become "blocked". This is not an abnormal situation - you might hit it, for example, if you wish to get the public links of a directory with hundreds of files. After more or less a week, the remote will accept rclone logins normally again. You can mitigate this issue by mounting the remote with `rclone mount`. This logs in once when mounting and logs out only when unmounting. You can also run `rclone rcd` and then use `rclone rc` to run the commands over the API to avoid logging in each time. Rclone does not currently close mega sessions (you can see them in the web interface), however closing the sessions does not solve the issue. If you space rclone commands by 3 seconds it will avoid blocking the remote. We haven't identified the exact blocking rules, so perhaps the first 80 commands could be executed without waiting, with a 3 second pause between the remaining ones. Note that this has been observed by trial and error and might not be set in stone.
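For example, a simple way to add that spacing when generating many links is a shell loop (a sketch - the remote name and file names are placeholders):

```
# create public links one at a time, pausing 3 seconds between calls
for f in file1.txt file2.txt file3.txt; do
    rclone link "remote:$f"
    sleep 3
done
```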
Other tools seem not to produce this blocking effect, as they use a different approach (state-based, using session IDs rather than logging in on each run) which isn't compatible with the current stateless rclone approach. Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 min, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though. Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum. So, if rclone was working nicely and suddenly you are unable to log in and you are sure the username and password are correct, it is likely that the remote has been blocked for a while. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/mega/mega.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to mega (Mega). #### --mega-user User name - Config: user - Env Var: RCLONE_MEGA_USER - Type: string - Default: "" #### --mega-pass Password. **NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/). - Config: pass - Env Var: RCLONE_MEGA_PASS - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to mega (Mega). #### --mega-debug Output more debug from Mega. If this flag is set (along with -vv) it will print further debugging information from the mega backend. - Config: debug - Env Var: RCLONE_MEGA_DEBUG - Type: bool - Default: false #### --mega-hard-delete Delete files permanently rather than putting them into the trash. Normally the mega backend will put all deletions into the trash rather than permanently deleting them. If you specify this then rclone will permanently delete objects instead. - Config: hard_delete - Env Var: RCLONE_MEGA_HARD_DELETE - Type: bool - Default: false #### --mega-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_MEGA_ENCODING - Type: MultiEncoder - Default: Slash,InvalidUtf8,Dot {{< rem autogenerated options stop >}} ### Limitations ### This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) which is an open source Go library implementing the Mega API. There doesn't appear to be any documentation for the mega protocol beyond the [mega C++ SDK](https://github.com/meganz/sdk) source code, so there are likely quite a few errors still remaining in this library. Mega allows duplicate files which may confuse rclone. rclone-1.53.3/docs/content/memory.md000066400000000000000000000032271375552240400173340ustar00rootroot00000000000000--- title: "Memory" description: "Rclone docs for Memory backend" --- {{< icon "fas fa-memory" >}} Memory ----------------------------------------- The memory backend is an in-RAM backend. It does not persist its data - use the local backend for that. The memory backend behaves like a bucket-based remote (eg like s3). Because it has no parameters you can just use it with the `:memory:` remote name. You can configure it as a remote like this with `rclone config` too if you want to: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value [snip] XX / Memory \ "memory" [snip] Storage> memory ** See help for memory backend at: https://rclone.org/memory/ ** Remote config -------------------- [remote] type = memory -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y ``` Because the memory backend isn't persistent it is most useful for testing or with an rclone server or rclone mount, eg rclone mount :memory: /mnt/tmp rclone serve webdav :memory: rclone serve sftp :memory: ### Modified time and hashes ### The memory backend supports MD5 hashes and modification times accurate to 1 ns. #### Restricted filename characters The memory backend replaces the [default restricted characters set](/overview/#restricted-characters). {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/memory/memory.go then run make backenddocs" >}} {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/onedrive.md000066400000000000000000000465311375552240400176440ustar00rootroot00000000000000--- title: "Microsoft OneDrive" description: "Rclone docs for Microsoft OneDrive" --- {{< icon "fab fa-windows" >}} Microsoft OneDrive ----------------------------------------- Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Microsoft OneDrive \ "onedrive" [snip] Storage> onedrive Microsoft App Client Id Leave blank normally. Enter a string value. Press Enter for the default (""). client_id> Microsoft App Client Secret Leave blank normally. Enter a string value. Press Enter for the default (""). client_secret> Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code Choose a number from below, or type in an existing value 1 / OneDrive Personal or Business \ "onedrive" 2 / Sharepoint site \ "sharepoint" 3 / Type in driveID \ "driveid" 4 / Type in SiteID \ "siteid" 5 / Search a Sharepoint site \ "search" Your choice> 1 Found 1 drives, please select the one you want to use: 0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk Chose drive to use:> 0 Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents Is that okay?
y) Yes n) No y/n> y -------------------- [remote] type = onedrive token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"} drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk drive_type = business -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` See the [remote setup docs](/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall. Once configured you can then use `rclone` like this, List directories in top level of your OneDrive rclone lsd remote: List all the files in your OneDrive rclone ls remote: To copy a local directory to a OneDrive directory called backup rclone copy /home/source remote:backup ### Getting your own Client ID and Key ### You can use your own Client ID if the default (`client_id` left blank) one doesn't work for you or you see lots of throttling. The default Client ID and Key are shared by all rclone users when performing requests. If you are having problems with them (e.g. seeing a lot of throttling), you can get your own Client ID and Key by following the steps below: 1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click `New registration`. 2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, enter `http://localhost:53682/`, and click Register. Copy and keep the `Application (client) ID` under the app name for later use. 3. Under `manage` select `Certificates & secrets`, click `New client secret`. Copy and keep that secret for later use. 4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`. 5. Search and select the following permissions: `Files.Read`, `Files.ReadWrite`, `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read`. Once selected click `Add permissions` at the bottom. Now the application is complete. Run `rclone config` to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps. ### Modification time and hashes ### OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support [QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash). For all types of OneDrive you can use the `--checksum` flag. ### Restricted filename characters ### In addition to the [default restricted characters set](/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | " | 0x22 | " | | * | 0x2A | * | | : | 0x3A | : | | < | 0x3C | < | | > | 0x3E | > | | ? | 0x3F | ? | | \ | 0x5C | \ | | \| | 0x7C | | | | # | 0x23 | # | | % | 0x25 | % | File names can also not end with the following characters. These only get replaced if they are the last character in the name: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | SP | 0x20 | ␠ | | . | 0x2E | . | File names can also not begin with the following characters. These only get replaced if they are the first character in the name: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | SP | 0x20 | ␠ | | ~ | 0x7E | ~ | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. ### Deleting files ### Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/onedrive/onedrive.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to onedrive (Microsoft OneDrive). #### --onedrive-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_ONEDRIVE_CLIENT_ID - Type: string - Default: "" #### --onedrive-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to onedrive (Microsoft OneDrive). #### --onedrive-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_ONEDRIVE_TOKEN - Type: string - Default: "" #### --onedrive-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_ONEDRIVE_AUTH_URL - Type: string - Default: "" #### --onedrive-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_ONEDRIVE_TOKEN_URL - Type: string - Default: "" #### --onedrive-chunk-size Chunk size to upload files with - must be a multiple of 320k (327,680 bytes). Above this size files will be chunked - must be a multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter \"Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big.\" Note that the chunks will be buffered into memory. - Config: chunk_size - Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE - Type: SizeSuffix - Default: 10M #### --onedrive-drive-id The ID of the drive to use - Config: drive_id - Env Var: RCLONE_ONEDRIVE_DRIVE_ID - Type: string - Default: "" #### --onedrive-drive-type The type of the drive ( personal | business | documentLibrary ) - Config: drive_type - Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE - Type: string - Default: "" #### --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. By default rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listings, set this option. - Config: expose_onenote_files - Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES - Type: bool - Default: false #### --onedrive-server-side-across-configs Allow server side operations (eg copy) to work across different onedrive configs. This can be useful if you wish to do a server side copy between two different Onedrives.
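For example (a sketch - the two remote names are placeholders for onedrive remotes you have already configured):

    rclone copy --onedrive-server-side-across-configs onedrive-work:Documents onedrive-personal:Backup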
Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations. - Config: server_side_across_configs - Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS - Type: bool - Default: false #### --onedrive-no-versions Remove all versions on modifying operations Onedrive for business creates versions when rclone uploads a new file overwriting an existing one and when it sets the modification time. These versions take up space out of the quota. This flag checks for versions after file upload and setting modification time and removes all but the last version. **NB** Onedrive personal can't currently delete versions so don't use this flag there. - Config: no_versions - Env Var: RCLONE_ONEDRIVE_NO_VERSIONS - Type: bool - Default: false #### --onedrive-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_ONEDRIVE_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot {{< rem autogenerated options stop >}} ### Limitations If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the `rclone config reconnect remote:` command to get a new token and refresh token. #### Naming #### Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a `?` in it, it will be mapped to `?` instead. #### File sizes #### The largest allowed file size is 100GB for both OneDrive Personal and OneDrive for Business [(Updated 17 June 2020)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize). #### Path length #### The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones. #### Number of files #### OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like `couldn’t list files: UnknownError:`. See [#2707](https://github.com/rclone/rclone/issues/2707) for more info. An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). ### Versions Every change to a file in OneDrive causes the service to create a new version of the file. This counts against a user's quota. For example, changing the modification time of a file creates a second version, so the file apparently uses twice the space. The `copy` command is affected by this: rclone copies the file and then afterwards sets the modification time to match the source file, which uses another version.
You can use the `rclone cleanup` command (see below) to remove all old versions. Or you can set the `no_versions` parameter to `true` and rclone will remove versions after operations which create new versions. This takes extra transactions so only enable it if you need it. **Note** At the time of writing Onedrive Personal creates versions (but not for setting the modification time) but the API for removing them returns "API not found" so cleanup and `no_versions` should not be used on Onedrive Personal. ### Disabling versioning Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has introduced an [update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) to the mechanism. To change this new default setting, a SharePoint admin needs to run a PowerShell command. If you are an admin, you can run these commands in PowerShell to change that setting: 1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already) 2. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking` 3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials) 4. `Set-SPOTenant -EnableMinimumVersionRequirement $False` 5. `Disconnect-SPOService` (to disconnect from the server) *Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.* User [Weropol](https://github.com/Weropol) has found a method to disable versioning on OneDrive: 1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page. 2. Click Site settings. 3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists. 4. Click Customize "Documents". 5. Click General Settings > Versioning Settings. 6. Under Document Version History select the option No versioning. Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe. 7. Apply the changes by clicking OK. 8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag) 9. Restore the versioning settings after using rclone. (Optional) ### Cleanup OneDrive supports `rclone cleanup` which causes rclone to look through every file under the path supplied and delete all versions but the current one. Because this involves traversing all the files, then querying each file for versions, it can be quite slow. Rclone does `--checkers` tests in parallel. The command also supports `-i` which is a great way to see what it would do. rclone cleanup -i remote:path/subdir # interactively remove all old versions for path/subdir rclone cleanup remote:path/subdir # unconditionally remove all old versions for path/subdir **NB** Onedrive personal can't currently delete versions ### Troubleshooting ### #### Unexpected file size/hash differences on Sharepoint #### It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631) issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and hash checks to fail. There are also other situations that will cause OneDrive to report inconsistent file sizes.
To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments: ``` --ignore-checksum --ignore-size ``` Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files by attempting the steps below. Open the web interface for [OneDrive](https://onedrive.live.com) and find the affected files (which will be in the error messages/log for rclone). Simply click on each of these files, causing OneDrive to open them on the web. This will cause each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above. #### Replacing/deleting existing files on Sharepoint gets "item not found" #### It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.). As a workaround, you may use the `--backup-dir <BACKUP_DIR>` command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory `rclone-backup-dir` on backend `mysharepoint`, you may use: ``` --backup-dir mysharepoint:rclone-backup-dir ``` #### access\_denied (AADSTS65005) #### ``` Error: access_denied Code: AADSTS65005 Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned. ``` This means that rclone can't use the OneDrive for Business API with your account. There is not much you can do about it yourself; consider writing an email to your admins. However, there are other ways to interact with your OneDrive account. Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint #### invalid\_grant (AADSTS50076) #### ``` Error: invalid_grant Code: AADSTS50076 Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'. ``` If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`. For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend. rclone-1.53.3/docs/content/opendrive.md000066400000000000000000000101771375552240400200210ustar00rootroot00000000000000--- title: "OpenDrive" description: "Rclone docs for OpenDrive" --- {{< icon "fa fa-file" >}} OpenDrive ------------------------------------ Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`.
First run: rclone config This will guide you through an interactive setup process: ``` n) New remote d) Delete remote q) Quit config e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / OpenDrive \ "opendrive" [snip] Storage> opendrive Username username> Password y) Yes type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: -------------------- [remote] username = password = *** ENCRYPTED *** -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` List directories in top level of your OpenDrive rclone lsd remote: List all the files in your OpenDrive rclone ls remote: To copy a local directory to an OpenDrive directory called backup rclone copy /home/source remote:backup ### Modified time and MD5SUMs ### OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. #### Restricted filename characters | Character | Value | Replacement | | --------- |:-----:|:-----------:| | NUL | 0x00 | ␀ | | / | 0x2F | / | | " | 0x22 | " | | * | 0x2A | * | | : | 0x3A | : | | < | 0x3C | < | | > | 0x3E | > | | ? | 0x3F | ? | | \ | 0x5C | \ | | \| | 0x7C | | | File names can also not begin or end with the following characters. These only get replaced if they are the first or last character in the name: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | SP | 0x20 | ␠ | | HT | 0x09 | ␉ | | LF | 0x0A | ␊ | | VT | 0x0B | ␋ | | CR | 0x0D | ␍ | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/opendrive/opendrive.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to opendrive (OpenDrive). #### --opendrive-username Username - Config: username - Env Var: RCLONE_OPENDRIVE_USERNAME - Type: string - Default: "" #### --opendrive-password Password. **NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/). - Config: password - Env Var: RCLONE_OPENDRIVE_PASSWORD - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to opendrive (OpenDrive). #### --opendrive-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_OPENDRIVE_ENCODING - Type: MultiEncoder - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot #### --opendrive-chunk-size Files will be uploaded in chunks this size. Note that these chunks are buffered in memory so increasing them will increase memory use. - Config: chunk_size - Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE - Type: SizeSuffix - Default: 10M {{< rem autogenerated options stop >}} ### Limitations ### Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a `?` in it, it will be mapped to `?` instead.
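As a sketch of what this looks like in practice (the file name, directory and remote name are invented for illustration), a file whose name contains `?` round-trips cleanly even though OpenDrive stores it under the substitute character:

    rclone copy "report?.txt" remote:docs
    rclone lsf remote:docs    # prints report?.txt, mapped back from the stored name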
rclone-1.53.3/docs/content/overview.md000066400000000000000000000540031375552240400176700ustar00rootroot00000000000000--- title: "Overview of cloud storage systems" description: "Overview of cloud storage systems" type: page --- # Overview of cloud storage systems # Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through. ## Features ## Here is an overview of the major features of each cloud storage system. | Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type | | ---------------------------- |:-----------:|:-------:|:----------------:|:---------------:|:---------:| | 1Fichier | Whirlpool | No | No | Yes | R | | Amazon Drive | MD5 | No | Yes | No | R | | Amazon S3 | MD5 | Yes | No | No | R/W | | Backblaze B2 | SHA1 | Yes | No | No | R/W | | Box | SHA1 | Yes | Yes | No | - | | Citrix ShareFile | MD5 | Yes | Yes | No | - | | Dropbox | DBHASH † | Yes | Yes | No | - | | FTP | - | No | No | No | - | | Google Cloud Storage | MD5 | Yes | No | No | R/W | | Google Drive | MD5 | Yes | No | Yes | R/W | | Google Photos | - | No | No | Yes | R | | HTTP | - | No | No | No | R | | Hubic | MD5 | Yes | No | No | R/W | | Jottacloud | MD5 | Yes | Yes | No | R/W | | Koofr | MD5 | No | Yes | No | - | | Mail.ru Cloud | Mailru ‡‡‡ | Yes | Yes | No | - | | Mega | - | No | No | Yes | - | | Memory | MD5 | Yes | No | No | - | | Microsoft Azure Blob Storage | MD5 | Yes | No | No | R/W | | Microsoft OneDrive | SHA1 ‡‡ | Yes | Yes | No | R | | OpenDrive | MD5 | Yes | Yes | Partial \* | - | | OpenStack Swift | MD5 | Yes | No | No | R/W | | pCloud | MD5, SHA1 | Yes | No | No | W | | premiumize.me | - | No | Yes | No | R | | put.io | CRC-32 | Yes | No | Yes | R | | QingStor | MD5 | No | No | No | R/W | | Seafile | - | No | No | No | - | | SFTP | MD5, SHA1 ‡ | Yes | Depends | No | - | | SugarSync | - | No | No | No | - | | Tardigrade | - | Yes | No | No | - | | WebDAV | MD5, SHA1 ††| Yes ††† | Depends | No | - | | Yandex Disk | MD5 | Yes | No | No | R/W | | The local filesystem | All | Yes | Depends | No | - | ### Hash ### The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the `--checksum` flag in syncs and in the `check` command. To verify checksums when transferring between cloud storage systems they must support a common hash type. † Note that Dropbox supports [its own custom hash](https://www.dropbox.com/developers/reference/content-hash). This is an SHA256 sum of all the 4MB block SHA256s. ‡ SFTP supports checksums if the same login has shell access and `md5sum` or `sha1sum` as well as `echo` are in the remote's PATH. †† WebDAV supports hashes when used with Owncloud and Nextcloud only. ††† WebDAV supports modtimes when used with Owncloud and Nextcloud only. ‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own [QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash). ‡‡‡ Mail.ru uses its own modified SHA1 hash ### ModTime ### The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the `--checksum` flag.
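For example, to transfer based on checksums rather than size and modification time, and then to verify the result using hashes, you could run something like this (a sketch - the paths are placeholders, and both sides must share a hash type):

    rclone sync --checksum /home/source remote:backup
    rclone check /home/source remote:backup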
All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system. ### Case Insensitive ### If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, eg `file.txt` and `FILE.txt`. If a cloud storage system is case insensitive then that isn't possible. This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully. The local filesystem and SFTP may or may not be case sensitive depending on OS. * Windows - usually case insensitive, though case is preserved * OSX - usually case insensitive, though it is possible to format case sensitive * Linux - usually case sensitive, but there are case insensitive file systems (eg FAT formatted USB keys) Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems. ### Duplicate files ### If a cloud storage system allows duplicate files then it can have two objects with the same name. This confuses rclone greatly when syncing - use the `rclone dedupe` command to rename or remove duplicates. \* Opendrive does not support creation of duplicate files using their web client interface or other stock clients, but the underlying storage platform has been determined to allow duplicate files, and it is possible to create them with `rclone`. It may be that this is a mistake or an unsupported feature. ### Restricted filenames ### Some cloud storage systems might have restrictions on the characters that are usable in file or directory names. When `rclone` detects such a name during a file upload, it will transparently replace the restricted characters with similar looking Unicode characters. This process is designed to avoid ambiguous file names as much as possible and to allow files to be moved between many cloud storage systems transparently. The name shown by `rclone` to the user or during log output will only contain a minimal set of [replaced characters](#restricted-characters) to ensure correct formatting and not necessarily the actual name used on the cloud storage. This transformation is reversed when downloading a file or parsing `rclone` arguments. For example, a file named `my file?.txt` uploaded to Onedrive will be displayed as `my file?.txt` on the console, but stored as `my file?.txt` (the `?` gets replaced by the similar looking `?` character) on Onedrive. The reverse transformation allows reading a file `unusual/name.txt` from Google Drive by passing the name `unusual/name.txt` (the `/` needs to be replaced by the similar looking `/` character) on the command line. #### Default restricted characters {#restricted-characters} The table below shows the characters that are replaced by default. When a replacement character is found in a filename, this character will be escaped with the `‛` character to avoid ambiguous file names. (e.g. a file named `␀.txt` would be shown as `‛␀.txt`) Each cloud storage backend can use a different set of characters, which will be specified in the documentation for each backend.
| Character | Value | Replacement | | --------- |:-----:|:-----------:| | NUL | 0x00 | ␀ | | SOH | 0x01 | ␁ | | STX | 0x02 | ␂ | | ETX | 0x03 | ␃ | | EOT | 0x04 | ␄ | | ENQ | 0x05 | ␅ | | ACK | 0x06 | ␆ | | BEL | 0x07 | ␇ | | BS | 0x08 | ␈ | | HT | 0x09 | ␉ | | LF | 0x0A | ␊ | | VT | 0x0B | ␋ | | FF | 0x0C | ␌ | | CR | 0x0D | ␍ | | SO | 0x0E | ␎ | | SI | 0x0F | ␏ | | DLE | 0x10 | ␐ | | DC1 | 0x11 | ␑ | | DC2 | 0x12 | ␒ | | DC3 | 0x13 | ␓ | | DC4 | 0x14 | ␔ | | NAK | 0x15 | ␕ | | SYN | 0x16 | ␖ | | ETB | 0x17 | ␗ | | CAN | 0x18 | ␘ | | EM | 0x19 | ␙ | | SUB | 0x1A | ␚ | | ESC | 0x1B | ␛ | | FS | 0x1C | ␜ | | GS | 0x1D | ␝ | | RS | 0x1E | ␞ | | US | 0x1F | ␟ | | / | 0x2F | / | | DEL | 0x7F | ␡ | The default encoding will also encode these file names as they are problematic with many cloud storage systems. | File name | Replacement | | --------- |:-----------:| | . | . | | .. | .. | #### Invalid UTF-8 bytes {#invalid-utf8} Some backends only support a sequence of well-formed UTF-8 bytes as file or directory names. In this case all invalid UTF-8 bytes will be replaced with a quoted representation of the byte value to allow uploading a file to such a backend. For example, the invalid byte `0xFE` will be encoded as `‛FE`. A common source of invalid UTF-8 bytes is local filesystems that store names in an encoding other than UTF-8 or UTF-16, such as latin1. See the [local filenames](/local/#filenames) section for details. #### Encoding option {#encoding} Most backends have an encoding option, specified as a flag `--backend-encoding` where `backend` is the name of the backend, or as a config parameter `encoding` (you'll need to select the Advanced config in `rclone config` to see it). This will have a default value which encodes and decodes characters in such a way as to preserve the maximum number of characters (see above). However this can be incorrect in some scenarios, for example if you have a Windows file system with characters such as `*` and `?` that you want to remain as those characters on the remote rather than being translated to `*` and `?`. The `--backend-encoding` flags allow you to change that. You can disable the encoding completely with `--backend-encoding None` or set `encoding = None` in the config file. Encoding takes a comma separated list of encodings. You can see the list of all available characters by passing an invalid value to this flag, eg `--local-encoding "help"`, and `rclone help flags encoding` will show you the defaults for the backends.
| Encoding | Characters | | --------- | ---------- | | Asterisk | `*` | | BackQuote | `` ` `` | | BackSlash | `\` | | Colon | `:` | | CrLf | CR 0x0D, LF 0x0A | | Ctl | All control characters 0x00-0x1F | | Del | DEL 0x7F | | Dollar | `$` | | Dot | `.` | | DoubleQuote | `"` | | Hash | `#` | | InvalidUtf8 | An invalid UTF-8 character (eg latin1) | | LeftCrLfHtVt | CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the left of a string | | LeftPeriod | `.` on the left of a string | | LeftSpace | SPACE on the left of a string | | LeftTilde | `~` on the left of a string | | LtGt | `<`, `>` | | None | No characters are encoded | | Percent | `%` | | Pipe | \| | | Question | `?` | | RightCrLfHtVt | CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string | | RightPeriod | `.` on the right of a string | | RightSpace | SPACE on the right of a string | | SingleQuote | `'` | | Slash | `/` | To take a specific example, the FTP backend's default encoding is --ftp-encoding "Slash,Del,Ctl,RightSpace,Dot" However, let's say the FTP server is running on Windows and can't have any of the invalid Windows characters in file names. You are backing up Linux servers to this FTP server which do have those characters in file names. So you would add the Windows set, which is Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot to the existing ones, giving: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del This can be specified using the `--ftp-encoding` flag or using an `encoding` parameter in the config file. Or let's say you have a Windows server but you want to preserve `*` and `?`, you would then have this as the encoding (the Windows encoding minus `Asterisk` and `Question`). Slash,LtGt,DoubleQuote,Colon,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot This can be specified using the `--local-encoding` flag or using an `encoding` parameter in the config file.
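For instance, the FTP example above could equally be set persistently in the config file (a sketch - the remote name and host are placeholders):

```
[myftp]
type = ftp
host = ftp.example.com
encoding = Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del
```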
| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | EmptyDir | | ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------:|:-----:| :------: | | 1Fichier | No | No | No | No | No | No | No | No | No | Yes | | Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | Yes | | Amazon S3 | No | Yes | No | No | Yes | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | | Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | | Box | Yes | Yes | Yes | Yes | Yes ‡‡ | No | Yes | Yes | No | Yes | | Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | Yes | No | No | Yes | | Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | Yes | Yes | Yes | Yes | | FTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | Yes | | Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | | Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | | Google Photos | No | No | No | No | No | No | No | No | No | No | | HTTP | No | No | No | No | No | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | Yes | | Hubic | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | No | | Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | | Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | | Mega | Yes | No | Yes | Yes | Yes | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes | | Memory | No | Yes | No | No | No | Yes | Yes | No | No | No | | Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | | Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | | OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | | OpenStack Swift | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | No | | pCloud | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | | premiumize.me | Yes | No | Yes | Yes | No | No | No | Yes | Yes | Yes | | put.io | Yes | No | Yes | Yes | Yes | No | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes | | QingStor | No | Yes | No | No | Yes | Yes | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | | Seafile | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | | SFTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes | | SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes | | Tardigrade | Yes † | No | No | No | No | Yes | Yes | No | No | No | | WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes | | Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes | | The local filesystem | Yes | No | Yes | Yes | No | No | Yes | No | Yes | Yes | ### Purge ### This deletes a directory quicker than just deleting all the files in the directory. 
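For example (the path is a placeholder):

    rclone purge remote:path/to/dir   # remove the directory and all of its contents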
† Note Swift, Hubic, and Tardigrade implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually. ‡ StreamUpload is not supported with Nextcloud ### Copy ### Used when copying an object to and from the same remote. This is known as a server side copy so you can copy a file without downloading it and uploading it again. It is used by `rclone copy`, and by `rclone move` if the remote doesn't support `Move` directly. If the server doesn't support `Copy` directly then for copy operations the file is downloaded then re-uploaded. ### Move ### Used when moving/renaming an object on the same remote. This is known as a server side move of a file. This is used in `rclone move` if the server doesn't support `DirMove`. If the server isn't capable of `Move` then rclone simulates it with `Copy` then delete. If the server doesn't support `Copy` then rclone will download the file and re-upload it. ### DirMove ### This is used to implement `rclone move` to move a directory if possible. If it isn't then it will use `Move` on each file (which falls back to `Copy` then download and upload - see `Move` section). ### CleanUp ### This is used for emptying the trash for a remote by `rclone cleanup`. If the server can't do `CleanUp` then `rclone cleanup` will return an error. ‡‡ Note that while Box implements this it has to delete every file individually so it will be slower than emptying the trash via the WebUI ### ListR ### The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the `--fast-list` flag to work. See the [rclone docs](/docs/#fast-list) for more details. ### StreamUpload ### Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. `rclone rcat`. ### LinkSharing ### Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider. ### About ### This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash. This is also used to return the space used and available for `rclone mount`. If the server can't do `About` then `rclone about` will return an error. ### EmptyDir ### The remote supports empty directories. See [Limitations](/bugs/#limitations) for details. Most Object/Bucket based remotes do not support this. rclone-1.53.3/docs/content/pcloud.md000066400000000000000000000137051375552240400173140ustar00rootroot00000000000000--- title: "pCloud" description: "Rclone docs for pCloud" --- {{< icon "fa fa-cloud" >}} pCloud ----------------------------------------- Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Pcloud \ "pcloud" [snip] Storage> pcloud Pcloud App Client Id - leave blank normally. client_id> Pcloud App Client Secret - leave blank normally.
client_secret> Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] client_id = client_secret = token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` See the [remote setup docs](/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall. Once configured you can then use `rclone` like this, List directories in top level of your pCloud rclone lsd remote: List all the files in your pCloud rclone ls remote: To copy a local directory to a pCloud directory called backup rclone copy /home/source remote:backup ### Modified time and hashes ### pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a modification time pCloud requires the object to be re-uploaded. pCloud supports MD5 and SHA1 type hashes, so you can use the `--checksum` flag. #### Restricted filename characters In addition to the [default restricted characters set](/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | \ | 0x5C | \ | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. ### Deleting files ### Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. `rclone cleanup` can be used to empty the trash. ### Root folder ID ### You can set the `root_folder_id` for rclone. This is the directory (identified by its `Folder ID`) that rclone considers to be the root of your pCloud drive. Normally you will leave this blank and rclone will determine the correct root to use itself. However you can set this to restrict rclone to a specific folder hierarchy. In order to do this you will have to find the `Folder ID` of the directory you wish rclone to display. This will be the `folder` field of the URL when you open the relevant folder in the pCloud web interface. So if the folder you want rclone to use has a URL which looks like `https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid` in the browser, then you use `5xxxxxxxx8` as the `root_folder_id` in the config. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/pcloud/pcloud.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to pcloud (Pcloud). #### --pcloud-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_PCLOUD_CLIENT_ID - Type: string - Default: "" #### --pcloud-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_PCLOUD_CLIENT_SECRET - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to pcloud (Pcloud).
#### --pcloud-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_PCLOUD_TOKEN - Type: string - Default: "" #### --pcloud-auth-url Auth server URL. Leave blank to use the provider defaults. - Config: auth_url - Env Var: RCLONE_PCLOUD_AUTH_URL - Type: string - Default: "" #### --pcloud-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_PCLOUD_TOKEN_URL - Type: string - Default: "" #### --pcloud-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_PCLOUD_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot #### --pcloud-root-folder-id Fill in for rclone to use a non root folder as its starting point. - Config: root_folder_id - Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID - Type: string - Default: "d0" #### --pcloud-hostname Hostname to connect to. This is normally set when rclone initially does the oauth connection; however, you will need to set it by hand if you are using remote config with rclone authorize. - Config: hostname - Env Var: RCLONE_PCLOUD_HOSTNAME - Type: string - Default: "api.pcloud.com" - Examples: - "api.pcloud.com" - Original/US region - "eapi.pcloud.com" - EU region {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/premiumizeme.md000066400000000000000000000101751375552240400205340ustar00rootroot00000000000000--- title: "premiumize.me" description: "Rclone docs for premiumize.me" --- {{< icon "fa fa-user" >}} premiumize.me ----------------------------------------- Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / premiumize.me \ "premiumizeme" [snip] Storage> premiumizeme ** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ ** Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] type = premiumizeme token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> ``` See the [remote setup docs](/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this, List directories in top level of your premiumize.me rclone lsd remote: List all the files in your premiumize.me rclone ls remote: To copy a local directory to an premiumize.me directory called backup rclone copy /home/source remote:backup ### Modified time and hashes ### premiumize.me does not support modification times or hashes, therefore syncing will default to `--size-only` checking. Note that using `--update` will work. #### Restricted filename characters In addition to the [default restricted characters set](/overview/#restricted-characters) the following characters are also replaced: | Character | Value | Replacement | | --------- |:-----:|:-----------:| | \ | 0x5C | \ | | " | 0x22 | " | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/premiumizeme/premiumizeme.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to premiumizeme (premiumize.me). #### --premiumizeme-api-key API Key. This is not normally used - use oauth instead. - Config: api_key - Env Var: RCLONE_PREMIUMIZEME_API_KEY - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to premiumizeme (premiumize.me). #### --premiumizeme-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_PREMIUMIZEME_ENCODING - Type: MultiEncoder - Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot {{< rem autogenerated options stop >}} ### Limitations ### Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". premiumize.me file names can't have the `\` or `"` characters in. rclone maps these to and from an identical looking unicode equivalents `\` and `"` premiumize.me only supports filenames up to 255 characters in length. rclone-1.53.3/docs/content/privacy.md000066400000000000000000000163771375552240400175130ustar00rootroot00000000000000--- title: "Privacy Policy" description: "Rclone Privacy Policy" --- # Rclone Privacy Policy # ## What is this Privacy Policy for? ## This privacy policy is for this website https://rclone.org and governs the privacy of its users who choose to use it. The policy sets out the different areas where user privacy is concerned and outlines the obligations & requirements of the users, the website and website owners. Furthermore the way this website processes, stores and protects user data and information will also be detailed within this policy. ## The Website ## This website and its owners take a proactive approach to user privacy and ensure the necessary steps are taken to protect the privacy of its users throughout their visiting experience. This website complies to all UK national laws and requirements for user privacy. ## Use of Cookies ## This website uses cookies to better the users experience while visiting the website. Where applicable this website uses a cookie control system allowing the user on their first visit to the website to allow or disallow the use of cookies on their computer / device. This complies with recent legislation requirements for websites to obtain explicit consent from users before leaving behind or reading files such as cookies on a user's computer / device. 
Cookies are small files saved to the user's computers hard drive that track, save and store information about the user's interactions and usage of the website. This allows the website, through its server to provide the users with a tailored experience within this website. Users are advised that if they wish to deny the use and saving of cookies from this website on to their computers hard drive they should take necessary steps within their web browsers security settings to block all cookies from this website and its external serving vendors. This website uses tracking software to monitor its visitors to better understand how they use it. This software is provided by Google Analytics which uses cookies to track visitor usage. The software will save a cookie to your computers hard drive in order to track and monitor your engagement and usage of the website, but will not store, save or collect personal information. You can read [Google's privacy policy here](https://www.google.com/privacy.html) for further information. Other cookies may be stored to your computers hard drive by external vendors when this website uses referral programs, sponsored links or adverts. Such cookies are used for conversion and referral tracking and typically expire after 30 days, though some may take longer. No personal information is stored, saved or collected. ## Contact & Communication ## Users contacting this website and/or its owners do so at their own discretion and provide any such personal details requested at their own risk. Your personal information is kept private and stored securely until a time it is no longer required or has no use, as detailed in the Data Protection Act 1998. This website and its owners use any information submitted to provide you with further information about the products / services they offer or to assist you in answering any questions or queries you may have submitted. ## External Links ## Although this website only looks to include quality, safe and relevant external links, users are advised adopt a policy of caution before clicking any external web links mentioned throughout this website. The owners of this website cannot guarantee or verify the contents of any externally linked website despite their best efforts. Users should therefore note they click on external links at their own risk and this website and its owners cannot be held liable for any damages or implications caused by visiting any external links mentioned. ## Adverts and Sponsored Links ## This website may contain sponsored links and adverts. These will typically be served through our advertising partners, to whom may have detailed privacy policies relating directly to the adverts they serve. Clicking on any such adverts will send you to the advertisers website through a referral program which may use cookies and will track the number of referrals sent from this website. This may include the use of cookies which may in turn be saved on your computers hard drive. Users should therefore note they click on sponsored external links at their own risk and this website and its owners cannot be held liable for any damages or implications caused by visiting any external links mentioned. ### Social Media Platforms ## Communication, engagement and actions taken through external social media platforms that this website and its owners participate on are subject to the terms and conditions as well as the privacy policies held with each social media platform respectively. 
Users are advised to use social media platforms wisely and communicate / engage upon them with due care and caution in regard to their own privacy and personal details. This website nor its owners will ever ask for personal or sensitive information through social media platforms and encourage users wishing to discuss sensitive details to contact them through primary communication channels such as email. This website may use social sharing buttons which help share web content directly from web pages to the social media platform in question. Users are advised before using such social sharing buttons that they do so at their own discretion and note that the social media platform may track and save your request to share a web page respectively through your social media platform account. ## Use of Cloud API User Data ## Rclone is a command line program to manage files on cloud storage. Its sole purpose is to access and manipulate user content in the [supported](/overview/) cloud storage systems from a local machine of the end user. For accessing the user content via the cloud provider API, Rclone uses authentication mechanisms, such as OAuth or HTTP Cookies, depending on the particular cloud provider offerings. Use of these authentication mechanisms and user data is governed by the privacy policies mentioned in the [Resources & Further Information](/privacy/#resources-further-information) section and followed by the privacy policy of Rclone. * Rclone provides the end user with access to their files available in a storage system associated by the authentication credentials via the publicly exposed API of the storage system. * Rclone allows storing the authentication credentials on the user machine in the local configuration file. * Rclone does not share any user data with third parties. ## Resources & Further Information ## * [Data Protection Act 1998](http://www.legislation.gov.uk/ukpga/1998/29/contents) * [Privacy and Electronic Communications Regulations 2003](http://www.legislation.gov.uk/uksi/2003/2426/contents/made) * [Privacy and Electronic Communications Regulations 2003 - The Guide](https://ico.org.uk/for-organisations/guide-to-pecr/) * [Twitter Privacy Policy](https://twitter.com/privacy) * [Facebook Privacy Policy](https://www.facebook.com/about/privacy/) * [Google Privacy Policy](https://www.google.com/privacy.html) * [Google API Services User Data Policy](https://developers.google.com/terms/api-services-user-data-policy) * [Sample Website Privacy Policy](http://www.jamieking.co.uk/resources/free_sample_privacy_policy.html) rclone-1.53.3/docs/content/putio.md000066400000000000000000000062711375552240400171660ustar00rootroot00000000000000--- title: "put.io" description: "Rclone docs for put.io" --- {{< icon "fas fa-parking" >}} put.io --------------------------------- Paths are specified as `remote:path` put.io paths may be as deep as required, eg `remote:directory/subdirectory`. The initial setup for put.io involves getting a token from put.io which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> putio Type of storage to configure. Enter a string value. Press Enter for the default (""). 
Choose a number from below, or type in your own value
[snip]
XX / Put.io
   \ "putio"
[snip]
Storage> putio
** See help for putio backend at: https://rclone.org/putio/ **

Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[putio]
type = putio
token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
putio                putio

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
```

Note that rclone runs a webserver on your local machine to collect the
token as returned from put.io if you use auto config mode. This only runs
from the moment it opens your browser to the moment you get back the
verification code. This is on `http://127.0.0.1:53682/` and it may
require you to unblock it temporarily if you are running a host
firewall, or use manual mode.

You can then use it like this,

List directories in top level of your put.io

    rclone lsd remote:

List all the files in your put.io

    rclone ls remote:

To copy a local directory to a put.io directory called backup

    rclone copy /home/source remote:backup

#### Restricted filename characters

In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \         | 0x5C  | ＼           |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/putio/putio.go then run make backenddocs" >}}

### Advanced Options

Here are the advanced options specific to putio (Put.io).

#### --putio-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_PUTIO_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

{{< rem autogenerated options stop >}}
rclone-1.53.3/docs/content/qingstor.md000066400000000000000000000176521375552240400177010ustar00rootroot00000000000000---
title: "QingStor"
description: "Rclone docs for QingStor Object Storage"
---

{{< icon "fas fa-hdd" >}} QingStor
---------------------------------------

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.

Here is an example of making a QingStor configuration. First run

    rclone config

This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / QingStor Object Storage
   \ "qingstor"
[snip]
Storage> qingstor
Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value 1 / Enter QingStor credentials in the next step \ "false" 2 / Get QingStor credentials from the environment (env vars or IAM) \ "true" env_auth> 1 QingStor Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> access_key QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> secret_key Enter an endpoint URL to connection QingStor API. Leave blank will use the default value "https://qingstor.com:443" endpoint> Zone connect to. Default is "pek3a". Choose a number from below, or type in your own value / The Beijing (China) Three Zone 1 | Needs location constraint pek3a. \ "pek3a" / The Shanghai (China) First Zone 2 | Needs location constraint sh1a. \ "sh1a" zone> 1 Number of connection retry. Leave blank will use the default value "3". connection_retries> Remote config -------------------- [remote] env_auth = false access_key_id = access_key secret_access_key = secret_key endpoint = zone = pek3a connection_retries = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` This remote is called `remote` and can now be used like this See all buckets rclone lsd remote: Make a new bucket rclone mkdir remote:bucket List the contents of a bucket rclone ls remote:bucket Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. rclone sync -i /home/local/directory remote:bucket ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](/docs/#fast-list) for more details. ### Multipart uploads ### rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM. Note that incomplete multipart uploads older than 24 hours can be removed with `rclone cleanup remote:bucket` just for one bucket `rclone cleanup remote:` for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time. ### Buckets and Zone ### With QingStor you can list buckets (`rclone lsd`) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, `incorrect zone, the bucket is not in 'XXX' zone`. ### Authentication ### There are two ways to supply `rclone` with a set of QingStor credentials. In order of precedence: - Directly in the rclone configuration file (as configured by `rclone config`) - set `access_key_id` and `secret_access_key` - Runtime configuration: - set `env_auth` to `true` in the config file - Exporting the following environment variables before running `rclone` - Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY` - Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY` ### Restricted filename characters The control characters 0x00-0x1F and / are replaced as in the [default restricted characters set](/overview/#restricted-characters). Note that 0x7F is not replaced. Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/qingstor/qingstor.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to qingstor (QingCloud Object Storage). 
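The first of these, `env_auth`, enables the runtime credentials route described above. A minimal sketch of that route (the key values here are placeholders):

```
export QS_ACCESS_KEY_ID=myAccessKey
export QS_SECRET_ACCESS_KEY=mySecretKey
rclone lsd remote:
```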
#### --qingstor-env-auth

Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.

- Config: env_auth
- Env Var: RCLONE_QINGSTOR_ENV_AUTH
- Type: bool
- Default: false
- Examples:
    - "false"
        - Enter QingStor credentials in the next step
    - "true"
        - Get QingStor credentials from the environment (env vars or IAM)

#### --qingstor-access-key-id

QingStor Access Key ID
Leave blank for anonymous access or runtime credentials.

- Config: access_key_id
- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
- Type: string
- Default: ""

#### --qingstor-secret-access-key

QingStor Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.

- Config: secret_access_key
- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
- Type: string
- Default: ""

#### --qingstor-endpoint

Enter an endpoint URL to connect to the QingStor API.
Leave blank to use the default value "https://qingstor.com:443"

- Config: endpoint
- Env Var: RCLONE_QINGSTOR_ENDPOINT
- Type: string
- Default: ""

#### --qingstor-zone

Zone to connect to.
Default is "pek3a".

- Config: zone
- Env Var: RCLONE_QINGSTOR_ZONE
- Type: string
- Default: ""
- Examples:
    - "pek3a"
        - The Beijing (China) Three Zone
        - Needs location constraint pek3a.
    - "sh1a"
        - The Shanghai (China) First Zone
        - Needs location constraint sh1a.
    - "gd2a"
        - The Guangdong (China) Second Zone
        - Needs location constraint gd2a.

### Advanced Options

Here are the advanced options specific to qingstor (QingCloud Object Storage).

#### --qingstor-connection-retries

Number of connection retries.

- Config: connection_retries
- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
- Type: int
- Default: 3

#### --qingstor-upload-cutoff

Cutoff for switching to chunked upload.

Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5GB.

- Config: upload_cutoff
- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200M

#### --qingstor-chunk-size

Chunk size to use for uploading.

When uploading files larger than upload_cutoff they will be uploaded
as multipart uploads using this chunk size.

Note that "--qingstor-upload-concurrency" chunks of this size are buffered
in memory per transfer.

If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.

- Config: chunk_size
- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4M

#### --qingstor-upload-concurrency

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded
concurrently.

NB if you set this to > 1 then the checksums of multipart uploads
become corrupted (the uploads themselves are not corrupted though).

If you are uploading a small number of large files over a high speed link
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.

- Config: upload_concurrency
- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
- Type: int
- Default: 1

#### --qingstor-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding - Env Var: RCLONE_QINGSTOR_ENCODING - Type: MultiEncoder - Default: Slash,Ctl,InvalidUtf8 {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/rc.md000066400000000000000000001302621375552240400164300ustar00rootroot00000000000000--- title: "Remote Control / API" description: "Remote controlling rclone with its API" --- # Remote controlling rclone with its API If rclone is run with the `--rc` flag then it starts an http server which can be used to remote control rclone using its API. If you just want to run a remote control then see the [rcd command](/commands/rclone_rcd/). ## Supported parameters ### --rc Flag to start the http server listen on remote requests ### --rc-addr=IP IPaddress:Port or :Port to bind server to. (default "localhost:5572") ### --rc-cert=KEY SSL PEM key (concatenation of certificate and CA certificate) ### --rc-client-ca=PATH Client certificate authority to verify clients with ### --rc-htpasswd=PATH htpasswd file - if not provided no authentication is done ### --rc-key=PATH SSL PEM Private key ### --rc-max-header-bytes=VALUE Maximum size of request header (default 4096) ### --rc-user=VALUE User name for authentication. ### --rc-pass=VALUE Password for authentication. ### --rc-realm=VALUE Realm for authentication (default "rclone") ### --rc-server-read-timeout=DURATION Timeout for server reading data (default 1h0m0s) ### --rc-server-write-timeout=DURATION Timeout for server writing data (default 1h0m0s) ### --rc-serve Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object Default Off. ### --rc-files /path/to/directory Path to local files to serve on the HTTP server. If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions. If `--rc-user` or `--rc-pass` is set then the URL that is opened will have the authorization in the URL in the `http://user:pass@localhost/` style. Default Off. ### --rc-enable-metrics Enable OpenMetrics/Prometheus compatible endpoint at `/metrics`. Default Off. ### --rc-web-gui Set this flag to serve the default web gui on the same port as rclone. Default Off. ### --rc-allow-origin Set the allowed Access-Control-Allow-Origin for rc requests. Can be used with --rc-web-gui if the rclone is running on different IP than the web-gui. Default is IP address on which rc is running. ### --rc-web-fetch-url Set the URL to fetch the rclone-web-gui files from. Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest. ### --rc-web-gui-update Set this flag to check and update rclone-webui-react from the rc-web-fetch-url. Default Off. ### --rc-web-gui-force-update Set this flag to force update rclone-webui-react from the rc-web-fetch-url. Default Off. ### --rc-web-gui-no-open-browser Set this flag to disable opening browser automatically when using web-gui. Default Off. ### --rc-job-expire-duration=DURATION Expire finished async jobs older than DURATION (default 60s). ### --rc-job-expire-interval=DURATION Interval duration to check for expired async jobs (default 10s). 
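A typical remote control daemon combining several of these flags might be started like this (a sketch — pick your own credentials and address):

    rclone rcd --rc-addr=localhost:5572 --rc-user=me --rc-pass=secret --rc-serve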
### --rc-no-auth By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg `operations/list` is denied as it involved creating a remote as is `sync/copy`. If this is set then no authorisation will be required on the server to use these methods. The alternative is to use `--rc-user` and `--rc-pass` and use these credentials in the request. Default Off. ## Accessing the remote control via the rclone rc command Rclone itself implements the remote control protocol in its `rclone rc` command. You can use it like this ``` $ rclone rc rc/noop param1=one param2=two { "param1": "one", "param2": "two" } ``` Run `rclone rc` on its own to see the help for the installed remote control commands. ## JSON input `rclone rc` also supports a `--json` flag which can be used to send more complicated input parameters. ``` $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop { "p1": [ 1, "2", null, 4 ], "p2": { "a": 1, "b": 2 } } ``` If the parameter being passed is an object then it can be passed as a JSON string rather than using the `--json` flag which simplifies the command line. ``` rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}' ``` Rather than ``` rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}' ``` ## Special parameters The rc interface supports some special parameters which apply to **all** commands. These start with `_` to show they are different. ### Running asynchronous jobs with _async = true Each rc call is classified as a job and it is assigned its own id. By default jobs are executed immediately as they are created or synchronously. If `_async` has a true value when supplied to an rc call then it will return immediately with a job id and the task will be run in the background. The `job/status` call can be used to get information of the background job. The job can be queried for up to 1 minute after it has finished. It is recommended that potentially long running jobs, eg `sync/sync`, `sync/copy`, `sync/move`, `operations/purge` are run with the `_async` flag to avoid any potential problems with the HTTP request and response timing out. Starting a job with the `_async` flag: ``` $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop { "jobid": 2 } ``` Query the status to see if the job has finished. For more information on the meaning of these return parameters see the `job/status` call. ``` $ rclone rc --json '{ "jobid":2 }' job/status { "duration": 0.000124163, "endTime": "2018-10-27T11:38:07.911245881+01:00", "error": "", "finished": true, "id": 2, "output": { "_async": true, "p1": [ 1, "2", null, 4 ], "p2": { "a": 1, "b": 2 } }, "startTime": "2018-10-27T11:38:07.911121728+01:00", "success": true } ``` `job/list` can be used to show the running or recently completed jobs ``` $ rclone rc job/list { "jobids": [ 2 ] } ``` ### Assigning operations to groups with _group = value Each rc call has its own stats group for tracking its metrics. By default grouping is done by the composite group name from prefix `job/` and id of the job like so `job/1`. If `_group` has a value then stats for that request will be grouped under that value. This allows caller to group stats under their own name. Stats for specific group can be accessed by passing `group` to `core/stats`: ``` $ rclone rc --json '{ "group": "job/1" }' core/stats { "speed": 12345 ... 
} ``` ## Supported commands {{< rem autogenerated start "- run make rcdocs - don't edit here" >}} ### backend/command: Runs a backend command. {#backend-command} This takes the following parameters - command - a string with the command name - fs - a remote name string eg "drive:" - arg - a list of arguments for the backend command - opt - a map of string to string of options Returns - result - result from the backend command For example rclone rc backend/command command=noop fs=. -o echo=yes -o blue -a path1 -a path2 Returns ``` { "result": { "arg": [ "path1", "path2" ], "name": "noop", "opt": { "blue": "", "echo": "yes" } } } ``` Note that this is the direct equivalent of using this "backend" command: rclone backend noop . -o echo=yes -o blue path1 path2 Note that arguments must be preceded by the "-a" flag See the [backend](/commands/rclone_backend/) command for more information. **Authentication is required for this call.** ### cache/expire: Purge a remote from cache {#cache-expire} Purge a remote from the cache backend. Supports either a directory or a file. Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional) Eg rclone rc cache/expire remote=path/to/sub/folder/ rclone rc cache/expire remote=/ withData=true ### cache/fetch: Fetch file chunks {#cache-fetch} Ensure the specified file chunks are cached on disk. The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end] start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file. Some valid examples are: ":5,-5:" -> the first and last five chunks "0,-2" -> the first and the second last chunk "0:10" -> the first ten chunks Any parameter with a key that starts with "file" can be used to specify files to fetch, eg rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye File names will automatically be encrypted when the a crypt remote is used on top of the cache. ### cache/stats: Get cache stats {#cache-stats} Show statistics for the cache remote. ### config/create: create the config for a remote. {#config-create} This takes the following parameters - name - name of remote - parameters - a map of \{ "key": "value" \} pairs - type - type of the new remote - obscure - optional bool - forces obscuring of passwords - noObscure - optional bool - forces passwords not to be obscured See the [config create command](/commands/rclone_config_create/) command for more information on the above. **Authentication is required for this call.** ### config/delete: Delete a remote in the config file. {#config-delete} Parameters: - name - name of remote to delete See the [config delete command](/commands/rclone_config_delete/) command for more information on the above. **Authentication is required for this call.** ### config/dump: Dumps the config file. {#config-dump} Returns a JSON object: - key: value Where keys are remote names and values are the config parameters. See the [config dump command](/commands/rclone_config_dump/) command for more information on the above. **Authentication is required for this call.** ### config/get: Get a remote in the config file. 
{#config-get} Parameters: - name - name of remote to get See the [config dump command](/commands/rclone_config_dump/) command for more information on the above. **Authentication is required for this call.** ### config/listremotes: Lists the remotes in the config file. {#config-listremotes} Returns - remotes - array of remote names See the [listremotes command](/commands/rclone_listremotes/) command for more information on the above. **Authentication is required for this call.** ### config/password: password the config for a remote. {#config-password} This takes the following parameters - name - name of remote - parameters - a map of \{ "key": "value" \} pairs See the [config password command](/commands/rclone_config_password/) command for more information on the above. **Authentication is required for this call.** ### config/providers: Shows how providers are configured in the config file. {#config-providers} Returns a JSON object: - providers - array of objects See the [config providers command](/commands/rclone_config_providers/) command for more information on the above. **Authentication is required for this call.** ### config/update: update the config for a remote. {#config-update} This takes the following parameters - name - name of remote - parameters - a map of \{ "key": "value" \} pairs - obscure - optional bool - forces obscuring of passwords - noObscure - optional bool - forces passwords not to be obscured See the [config update command](/commands/rclone_config_update/) command for more information on the above. **Authentication is required for this call.** ### core/bwlimit: Set the bandwidth limit. {#core-bwlimit} This sets the bandwidth limit to that passed in. Eg rclone rc core/bwlimit rate=off { "bytesPerSecond": -1, "rate": "off" } rclone rc core/bwlimit rate=1M { "bytesPerSecond": 1048576, "rate": "1M" } If the rate parameter is not supplied then the bandwidth is queried rclone rc core/bwlimit { "bytesPerSecond": 1048576, "rate": "1M" } The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified. In either case "rate" is returned as a human readable string, and "bytesPerSecond" is returned as a number. ### core/command: Run a rclone terminal command over rc. {#core-command} This takes the following parameters - command - a string with the command name - arg - a list of arguments for the backend command - opt - a map of string to string of options Returns - result - result from the backend command - error - set if rclone exits with an error code - returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT". "STREAM_ONLY_STDERR") For example rclone rc core/command command=ls -a mydrive:/ -o max-depth=1 rclone rc core/command -a ls -a mydrive:/ -o max-depth=1 Returns ``` { "error": false, "result": "" } OR { "error": true, "result": "" } ``` **Authentication is required for this call.** ### core/gc: Runs a garbage collection. {#core-gc} This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems. ### core/group-list: Returns list of stats. {#core-group-list} This returns list of stats groups currently in memory. Returns the following values: ``` { "groups": an array of group names: [ "group1", "group2", ... ] } ``` ### core/memstats: Returns the memory statistics {#core-memstats} This returns the memory statistics of the running program. 
What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats The most interesting values for most people are: * HeapAlloc: This is the amount of memory rclone is actually using * HeapSys: This is the amount of memory rclone has obtained from the OS * Sys: this is the total amount of memory requested from the OS * It is virtual memory so may include unused memory ### core/obscure: Obscures a string passed in. {#core-obscure} Pass a clear string and rclone will obscure it for the config file: - clear - string Returns - obscured - string ### core/pid: Return PID of current process {#core-pid} This returns PID of current process. Useful for stopping rclone process. ### core/quit: Terminates the app. {#core-quit} (optional) Pass an exit code to be used for terminating the app: - exitCode - int ### core/stats: Returns stats about current transfers. {#core-stats} This returns all available stats: rclone rc core/stats If group is not provided then summed up stats for all groups will be returned. Parameters - group - name of the stats group (string) Returns the following values: ``` { "speed": average speed in bytes/sec since start of the process, "bytes": total transferred bytes since the start of the process, "errors": number of errors, "fatalError": whether there has been at least one FatalError, "retryError": whether there has been at least one non-NoRetryError, "checks": number of checked files, "transfers": number of transferred files, "deletes" : number of deleted files, "renames" : number of renamed files, "transferTime" : total time spent on running jobs, "elapsedTime": time in seconds since the start of the process, "lastError": last occurred error, "transferring": an array of currently active file transfers: [ { "bytes": total transferred bytes for this file, "eta": estimated time in seconds until file transfer completion "name": name of the file, "percentage": progress of the file transfer in percent, "speed": average speed over the whole transfer in bytes/sec, "speedAvg": current speed in bytes/sec as an exponentially weighted moving average, "size": size of the file in bytes } ], "checking": an array of names of currently active file checks [] } ``` Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined. ### core/stats-delete: Delete stats group. {#core-stats-delete} This deletes entire stats group Parameters - group - name of the stats group (string) ### core/stats-reset: Reset stats. {#core-stats-reset} This clears counters, errors and finished transfers for all stats or specific stats group if group is provided. Parameters - group - name of the stats group (string) ### core/transferred: Returns stats about completed transfers. {#core-transferred} This returns stats about completed transfers: rclone rc core/transferred If group is not provided then completed transfers for all groups will be returned. Note only the last 100 completed transfers are returned. 
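For example, to fetch the completed transfers for one stats group (a sketch; the group name is illustrative):

    rclone rc core/transferred group=job/1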
Parameters - group - name of the stats group (string) Returns the following values: ``` { "transferred": an array of completed transfers (including failed ones): [ { "name": name of the file, "size": size of the file in bytes, "bytes": total transferred bytes for this file, "checked": if the transfer is only checked (skipped, deleted), "timestamp": integer representing millisecond unix epoch, "error": string description of the error (empty if successful), "jobid": id of the job that this transfer belongs to } ] } ``` ### core/version: Shows the current version of rclone and the go runtime. {#core-version} This shows the current version of go and the go runtime - version - rclone version, eg "v1.53.0" - decomposed - version number as [major, minor, patch] - isGit - boolean - true if this was compiled from the git version - isBeta - boolean - true if this is a beta version - os - OS in use as according to Go - arch - cpu architecture in use according to Go - goVersion - version of Go runtime in use ### debug/set-block-profile-rate: Set runtime.SetBlockProfileRate for blocking profiling. {#debug-set-block-profile-rate} SetBlockProfileRate controls the fraction of goroutine blocking events that are reported in the blocking profile. The profiler aims to sample an average of one blocking event per rate nanoseconds spent blocked. To include every blocking event in the profile, pass rate = 1. To turn off profiling entirely, pass rate <= 0. After calling this you can use this to see the blocking profile: go tool pprof http://localhost:5572/debug/pprof/block Parameters - rate - int ### debug/set-mutex-profile-fraction: Set runtime.SetMutexProfileFraction for mutex profiling. {#debug-set-mutex-profile-fraction} SetMutexProfileFraction controls the fraction of mutex contention events that are reported in the mutex profile. On average 1/rate events are reported. The previous rate is returned. To turn off profiling entirely, pass rate 0. To just read the current rate, pass rate < 0. (For n>1 the details of sampling may change.) 
Once this is set you can use this to profile the mutex contention:

    go tool pprof http://localhost:5572/debug/pprof/mutex

Parameters

- rate - int

Results

- previousRate - int

### job/list: Lists the IDs of the running jobs {#job-list}

Parameters - None

Results

- jobids - array of integer job ids

### job/status: Reads the status of the job ID {#job-status}

Parameters

- jobid - id of the job (integer)

Results

- finished - boolean whether the job has finished or not
- duration - time in seconds that the job ran for
- endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00")
- error - error from the job or empty string for no error
- id - as passed in above
- startTime - time the job started (eg "2018-10-26T18:50:20.528336039+01:00")
- success - boolean - true for success false otherwise
- output - output of the job as would have been returned if called synchronously
- progress - output of the progress related to the underlying job

### job/stop: Stop the running job {#job-stop}

Parameters

- jobid - id of the job (integer)

### mount/listmounts: Show current mount points {#mount-listmounts}

This shows currently mounted points, which can be used for performing an unmount.

This takes no parameters and returns

- mountPoints: list of current mount points

Eg

    rclone rc mount/listmounts

**Authentication is required for this call.**

### mount/mount: Create a new mount point {#mount-mount}

rclone allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.

If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2

This takes the following parameters

- fs - a remote path to be mounted (required)
- mountPoint: valid path on the local machine (required)
- mountType: One of the values (mount, cmount, mount2) specifies the mount implementation to use
- mountOpt: a JSON object with Mount options in.
- vfsOpt: a JSON object with VFS options in.

Eg

    rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
    rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
    rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'

The vfsOpt are as described in options/get and can be seen in the "vfs"
section, and the mountOpt in the "mount" section, when running:

    rclone rc options/get

**Authentication is required for this call.**

### mount/types: Show all possible mount types {#mount-types}

This shows all possible mount types and returns them as a list.

This takes no parameters and returns

- mountTypes: list of mount types

The mount types are strings like "mount", "mount2", "cmount" and can
be passed to mount/mount as the mountType parameter.

Eg

    rclone rc mount/types

**Authentication is required for this call.**

### mount/unmount: Unmount selected active mount {#mount-unmount}

rclone allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.

This takes the following parameters

- mountPoint: valid path on the local machine where the mount was created (required)

Eg

    rclone rc mount/unmount mountPoint=/home/<user>/mountPoint

**Authentication is required for this call.**

### mount/unmountall: Unmount all active mounts {#mount-unmountall}

This unmounts all currently mounted points.

This takes no parameters and returns an error if any unmount does not succeed.
Eg

    rclone rc mount/unmountall

**Authentication is required for this call.**

### operations/about: Return the space used on the remote {#operations-about}

This takes the following parameters

- fs - a remote name string eg "drive:"

The result is as returned from rclone about --json

See the [about command](/commands/rclone_about/) command for more information on the above.

**Authentication is required for this call.**

### operations/cleanup: Remove trashed files in the remote or path {#operations-cleanup}

This takes the following parameters

- fs - a remote name string eg "drive:"

See the [cleanup command](/commands/rclone_cleanup/) command for more information on the above.

**Authentication is required for this call.**

### operations/copyfile: Copy a file from source remote to destination remote {#operations-copyfile}

This takes the following parameters

- srcFs - a remote name string eg "drive:" for the source
- srcRemote - a path within that remote eg "file.txt" for the source
- dstFs - a remote name string eg "drive2:" for the destination
- dstRemote - a path within that remote eg "file2.txt" for the destination

**Authentication is required for this call.**

### operations/copyurl: Copy the URL to the object {#operations-copyurl}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- url - string, URL to read from
- autoFilename - boolean, set to true to retrieve destination file name from url

See the [copyurl command](/commands/rclone_copyurl/) command for more information on the above.

**Authentication is required for this call.**

### operations/delete: Remove files in the path {#operations-delete}

This takes the following parameters

- fs - a remote name string eg "drive:"

See the [delete command](/commands/rclone_delete/) command for more information on the above.

**Authentication is required for this call.**

### operations/deletefile: Remove the single file pointed to {#operations-deletefile}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the [deletefile command](/commands/rclone_deletefile/) command for more information on the above.
**Authentication is required for this call.** ### operations/fsinfo: Return information about the remote {#operations-fsinfo} This takes the following parameters - fs - a remote name string eg "drive:" This returns info about the remote passed in; ``` { // optional features and whether they are available or not "Features": { "About": true, "BucketBased": false, "CanHaveEmptyDirectories": true, "CaseInsensitive": false, "ChangeNotify": false, "CleanUp": false, "Copy": false, "DirCacheFlush": false, "DirMove": true, "DuplicateFiles": false, "GetTier": false, "ListR": false, "MergeDirs": false, "Move": true, "OpenWriterAt": true, "PublicLink": false, "Purge": true, "PutStream": true, "PutUnchecked": false, "ReadMimeType": false, "ServerSideAcrossConfigs": false, "SetTier": false, "SetWrapper": false, "UnWrap": false, "WrapFs": false, "WriteMimeType": false }, // Names of hashes available "Hashes": [ "MD5", "SHA-1", "DropboxHash", "QuickXorHash" ], "Name": "local", // Name as created "Precision": 1, // Precision of timestamps in ns "Root": "/", // Path as created "String": "Local file system at /" // how the remote will appear in logs } ``` This command does not have a command line equivalent so use this instead: rclone rc --loopback operations/fsinfo fs=remote: ### operations/list: List the given remote and path in JSON format {#operations-list} This takes the following parameters - fs - a remote name string eg "drive:" - remote - a path within that remote eg "dir" - opt - a dictionary of options to control the listing (optional) - recurse - If set recurse directories - noModTime - If set return modification time - showEncrypted - If set show decrypted names - showOrigIDs - If set show the IDs for each item if known - showHash - If set return a dictionary of hashes The result is - list - This is an array of objects as described in the lsjson command See the [lsjson command](/commands/rclone_lsjson/) for more information on the above and examples. **Authentication is required for this call.** ### operations/mkdir: Make a destination directory or container {#operations-mkdir} This takes the following parameters - fs - a remote name string eg "drive:" - remote - a path within that remote eg "dir" See the [mkdir command](/commands/rclone_mkdir/) command for more information on the above. **Authentication is required for this call.** ### operations/movefile: Move a file from source remote to destination remote {#operations-movefile} This takes the following parameters - srcFs - a remote name string eg "drive:" for the source - srcRemote - a path within that remote eg "file.txt" for the source - dstFs - a remote name string eg "drive2:" for the destination - dstRemote - a path within that remote eg "file2.txt" for the destination **Authentication is required for this call.** ### operations/publiclink: Create or retrieve a public link to the given file or folder. {#operations-publiclink} This takes the following parameters - fs - a remote name string eg "drive:" - remote - a path within that remote eg "dir" - unlink - boolean - if set removes the link rather than adding it (optional) - expire - string - the expiry time of the link eg "1d" (optional) Returns - url - URL of the resource See the [link command](/commands/rclone_link/) command for more information on the above. 
**Authentication is required for this call.**

### operations/purge: Remove a directory or container and all of its contents {#operations-purge}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the [purge command](/commands/rclone_purge/) command for more information on the above.

**Authentication is required for this call.**

### operations/rmdir: Remove an empty directory or container {#operations-rmdir}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"

See the [rmdir command](/commands/rclone_rmdir/) command for more information on the above.

**Authentication is required for this call.**

### operations/rmdirs: Remove all the empty directories in the path {#operations-rmdirs}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- leaveRoot - boolean, set to true not to delete the root

See the [rmdirs command](/commands/rclone_rmdirs/) command for more information on the above.

**Authentication is required for this call.**

### operations/size: Count the number of bytes and files in remote {#operations-size}

This takes the following parameters

- fs - a remote name string eg "drive:path/to/dir"

Returns

- count - number of files
- bytes - number of bytes in those files

See the [size command](/commands/rclone_size/) command for more information on the above.

**Authentication is required for this call.**

### operations/uploadfile: Upload file using multipart/form-data {#operations-uploadfile}

This takes the following parameters

- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- each part in body represents a file to be uploaded

See the [uploadfile command](/commands/rclone_uploadfile/) command for more information on the above.

**Authentication is required for this call.**

### options/blocks: List all the option blocks {#options-blocks}

Returns

- options - a list of the options block names

### options/get: Get all the options {#options-get}

Returns an object where keys are option block names and values are an
object with the current option values in.

This shows the internal names of the option within rclone which
should map to the external options very easily with a few exceptions.

### options/set: Set an option {#options-set}

Parameters

- option block name containing an object with
  - key: value

Repeated as often as required. Only supply the options you wish to
change. If an option is unknown it will be silently ignored. Not all
options will have an effect when changed like this.
For example:

This sets DEBUG level logs (-vv)

    rclone rc options/set --json '{"main": {"LogLevel": 8}}'

And this sets INFO level logs (-v)

    rclone rc options/set --json '{"main": {"LogLevel": 7}}'

And this sets NOTICE level logs (normal without -v)

    rclone rc options/set --json '{"main": {"LogLevel": 6}}'

### pluginsctl/addPlugin: Add a plugin using url {#pluginsctl-addPlugin}

Used for adding a plugin to the webgui.

This takes the following parameters

- url: http url of the github repo where the plugin is hosted (http://github.com/rclone/rclone-webui-react)

Eg

    rclone rc pluginsctl/addPlugin

**Authentication is required for this call.**

### pluginsctl/getPluginsForType: Get plugins with type criteria {#pluginsctl-getPluginsForType}

This shows all possible plugins by a mime type.

This takes the following parameters

- type: supported mime type by a loaded plugin eg (video/mp4, audio/mp3)
- pluginType: filter plugins based on their type eg (DASHBOARD, FILE_HANDLER, TERMINAL)

and returns

- loadedPlugins: list of current production plugins
- testPlugins: list of temporarily loaded development plugins, usually running on a different server.

Eg

    rclone rc pluginsctl/getPluginsForType type=video/mp4

**Authentication is required for this call.**

### pluginsctl/listPlugins: Get the list of currently loaded plugins {#pluginsctl-listPlugins}

This allows you to get the currently enabled plugins and their details.

This takes no parameters and returns

- loadedPlugins: list of current production plugins
- testPlugins: list of temporarily loaded development plugins, usually running on a different server.

Eg

    rclone rc pluginsctl/listPlugins

**Authentication is required for this call.**

### pluginsctl/listTestPlugins: Show currently loaded test plugins {#pluginsctl-listTestPlugins}

Allows listing of test plugins with rclone.test set to true in the package.json of the plugin.

This takes no parameters and returns

- loadedTestPlugins: list of currently available test plugins

Eg

    rclone rc pluginsctl/listTestPlugins

**Authentication is required for this call.**

### pluginsctl/removePlugin: Remove a loaded plugin {#pluginsctl-removePlugin}

This allows you to remove a plugin using its name.

This takes parameters

- name: name of the plugin in the format `author`/`plugin_name`

Eg

    rclone rc pluginsctl/removePlugin name=rclone/video-plugin

**Authentication is required for this call.**

### pluginsctl/removeTestPlugin: Remove a test plugin {#pluginsctl-removeTestPlugin}

This allows you to remove a plugin using its name.

This takes the following parameters

- name: name of the plugin in the format `author`/`plugin_name`

Eg

    rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react

**Authentication is required for this call.**

### rc/error: This returns an error {#rc-error}

This returns an error with the input as part of its error string.
Useful for testing error handling.

### rc/list: List all the registered remote control commands {#rc-list}

This lists all the registered remote control commands as a JSON map in
the commands response.

### rc/noop: Echo the input to the output parameters {#rc-noop}

This echoes the input parameters to the output parameters for testing
purposes. It can be used to check that rclone is still alive and to
check that parameter passing is working properly.

### rc/noopauth: Echo the input to the output parameters requiring auth {#rc-noopauth}

This echoes the input parameters to the output parameters for testing
purposes.
It can be used to check that rclone is still alive and to check that parameter passing is working properly. **Authentication is required for this call.** ### sync/copy: copy a directory from source remote to destination remote {#sync-copy} This takes the following parameters - srcFs - a remote name string eg "drive:src" for the source - dstFs - a remote name string eg "drive:dst" for the destination See the [copy command](/commands/rclone_copy/) command for more information on the above. **Authentication is required for this call.** ### sync/move: move a directory from source remote to destination remote {#sync-move} This takes the following parameters - srcFs - a remote name string eg "drive:src" for the source - dstFs - a remote name string eg "drive:dst" for the destination - deleteEmptySrcDirs - delete empty src directories if set See the [move command](/commands/rclone_move/) command for more information on the above. **Authentication is required for this call.** ### sync/sync: sync a directory from source remote to destination remote {#sync-sync} This takes the following parameters - srcFs - a remote name string eg "drive:src" for the source - dstFs - a remote name string eg "drive:dst" for the destination See the [sync command](/commands/rclone_sync/) command for more information on the above. **Authentication is required for this call.** ### vfs/forget: Forget files or directories in the directory cache. {#vfs-forget} This forgets the paths in the directory cache causing them to be re-read from the remote when needed. If no paths are passed in then it will forget all the paths in the directory cache. rclone rc vfs/forget Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, eg rclone rc vfs/forget file=hello file2=goodbye dir=home/junk This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied. ### vfs/list: List active VFSes. {#vfs-list} This lists the active VFSes. It returns a list under the key "vfses" where the values are the VFS names that could be passed to the other VFS commands in the "fs" parameter. ### vfs/poll-interval: Get the status or update the value of the poll-interval option. {#vfs-poll-interval} Without any parameter given this returns the current status of the poll-interval setting. When the interval=duration parameter is set, the poll-interval value is updated and the polling function is notified. Setting interval=0 disables poll-interval. rclone rc vfs/poll-interval interval=5m The timeout=duration parameter can be used to specify a time to wait for the current poll function to apply the new value. If timeout is less or equal 0, which is the default, wait indefinitely. The new poll-interval value will only be active when the timeout is not reached. If poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the used remote. This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied. ### vfs/refresh: Refresh the directory cache. {#vfs-refresh} This reads the directories for the specified paths and freshens the directory cache. 
{{< rem autogenerated stop >}}

## Accessing the remote control via HTTP

Rclone implements a simple HTTP based protocol.

Each endpoint takes a JSON object and returns a JSON object or an
error. The JSON objects are essentially a map of string names to
values.

All calls must be made using POST.

The input objects can be supplied using URL parameters, POST
parameters or by supplying "Content-Type: application/json" and a JSON
blob in the body. There are examples of these below using `curl`.

The response will be a JSON blob in the body of the response. This is
formatted to be reasonably human readable.

### Error returns

If an error occurs then there will be an HTTP error status (eg 500)
and the body of the response will contain a JSON encoded error object,
eg

```
{
    "error": "Expecting string value for key \"remote\" (was float64)",
    "input": {
        "fs": "/tmp",
        "remote": 3
    },
    "path": "operations/rmdir",
    "status": 400
}
```

The keys in the error response are
- error - error string
- input - the input parameters to the call
- status - the HTTP status code
- path - the path of the call

### CORS

The server implements basic CORS support and allows all origins.
The response to a preflight OPTIONS request will echo the requested
"Access-Control-Request-Headers" back.

### Using POST with URL parameters only

```
curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'
```

Response

```
{
    "potato": "1",
    "sausage": "2"
}
```

Here is what an error response looks like:

```
curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
```

```
{
    "error": "arbitrary error on input map[potato:1 sausage:2]",
    "input": {
        "potato": "1",
        "sausage": "2"
    }
}
```

Note that curl doesn't return errors to the shell unless you use the
`-f` option

```
$ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
curl: (22) The requested URL returned error: 400 Bad Request
$ echo $?
22
```

### Using POST with a form

```
curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop
```

Response

```
{
    "potato": "1",
    "sausage": "2"
}
```

Note that you can combine these with URL parameters too with the POST
parameters taking precedence.

```
curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"
```

Response

```
{
    "potato": "1",
    "rutabaga": "3",
    "sausage": "4"
}
```

### Using POST with a JSON blob

```
curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop
```

Response

```
{
    "potato": 2,
    "sausage": 1
}
```

This can be combined with URL parameters too if required. The JSON
blob takes precedence.

```
curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'
```

```
{
    "potato": 2,
    "rutabaga": "3",
    "sausage": 1
}
```
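The same JSON blob mechanism works for any of the commands documented above. For example, this sets INFO level logs, equivalent to the `rclone rc options/set` call shown earlier (assuming rclone is listening on the default address):

```
curl -H "Content-Type: application/json" -X POST -d '{"main": {"LogLevel": 7}}' http://localhost:5572/options/set
```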
## Debugging rclone with pprof ##

If you use the `--rc` flag this will also enable the use of the go
profiling tools on the same port.

To use these, first [install go](https://golang.org/doc/install).

### Debugging memory use

To profile rclone's memory use you can run:

    go tool pprof -web http://localhost:5572/debug/pprof/heap

This should open a page in your browser showing what is using what
memory.

You can also use the `-text` flag to produce a textual summary

```
$ go tool pprof -text http://localhost:5572/debug/pprof/heap
Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
      flat  flat%   sum%        cum   cum%
 1024.03kB 66.62% 66.62%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
     513kB 33.38%   100%      513kB 33.38%  net/http.newBufioWriterSize
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/all.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve/restic.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init
         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init.0
         0     0%   100%  1024.03kB 66.62%  main.init
         0     0%   100%      513kB 33.38%  net/http.(*conn).readRequest
         0     0%   100%      513kB 33.38%  net/http.(*conn).serve
         0     0%   100%  1024.03kB 66.62%  runtime.main
```

### Debugging go routine leaks

Memory leaks are most often caused by go routine leaks keeping memory
alive which should have been garbage collected.

See all active go routines using

    curl http://localhost:5572/debug/pprof/goroutine?debug=1

Or go to http://localhost:5572/debug/pprof/goroutine?debug=1 in your browser.

### Other profiles to look at

You can see a summary of profiles available at http://localhost:5572/debug/pprof/

Here is how to use some of them:

- Memory: `go tool pprof http://localhost:5572/debug/pprof/heap`
- Go routines: `curl http://localhost:5572/debug/pprof/goroutine?debug=1`
- 30-second CPU profile: `go tool pprof http://localhost:5572/debug/pprof/profile`
- 5-second execution trace: `wget http://localhost:5572/debug/pprof/trace?seconds=5`
- Goroutine blocking profile
    - Enable first with: `rclone rc debug/set-block-profile-rate rate=1` ([docs](#debug/set-block-profile-rate))
    - `go tool pprof http://localhost:5572/debug/pprof/block`
- Contended mutexes:
    - Enable first with: `rclone rc debug/set-mutex-profile-fraction rate=1` ([docs](#debug/set-mutex-profile-fraction))
    - `go tool pprof http://localhost:5572/debug/pprof/mutex`

See the [net/http/pprof docs](https://golang.org/pkg/net/http/pprof/)
for more info on how to use the profiling and for a general overview
see [the Go team's blog post on profiling go programs](https://blog.golang.org/profiling-go-programs).

The profiling hook is [zero overhead unless it is used](https://stackoverflow.com/q/26545159/164234).
rclone-1.53.3/docs/content/remote_setup.md000066400000000000000000000043651375552240400205420ustar00rootroot00000000000000
---
title: "Remote Setup"
description: "Configuring rclone on a remote / headless machine"
---

# Configuring rclone on a remote / headless machine #

Some of the configurations (those involving oauth2) require an
Internet connected web browser.
If you are trying to set rclone up on a remote or headless box with no
browser available on it (eg a NAS or a server in a datacenter) then
you will need to use an alternative means of configuration.  There are
two ways of doing it, described below.

## Configuring using rclone authorize ##

On the headless box run `rclone config` but answer `N` to the `Use
auto config?` question.

```
...
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n> n
For this to work, you will need rclone available on a machine that has
a web browser available.
For more help and alternate methods see: https://rclone.org/remote_setup/
Execute the following on the machine with the web browser (same rclone
version recommended):
	rclone authorize "amazon cloud drive"
Then paste the result below:
result>
```

Then on your main desktop machine

```
rclone authorize "amazon cloud drive"
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Paste the following into your remote machine --->
SECRET_TOKEN
<---End paste
```

Then back to the headless box, paste in the code

```
result> SECRET_TOKEN
--------------------
[acd12]
client_id =
client_secret =
token = SECRET_TOKEN
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
```

## Configuring by copying the config file ##

Rclone stores all of its config in a single configuration file.  This
can easily be copied to configure a remote rclone.

So first configure rclone on your desktop machine with

    rclone config

to set up the config file.

Find the config file by running `rclone config file`, for example

```
$ rclone config file
Configuration file is stored at:
/home/user/.rclone.conf
```

Now transfer it to the remote box (scp, cut and paste, ftp, sftp, etc)
and place it in the correct place (use `rclone config file` on the
remote box to find out where).
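For example, to transfer the config file with `scp` (the host name here is illustrative; check the destination path with `rclone config file` on the remote box first):

```
scp /home/user/.rclone.conf user@remotebox:/home/user/.rclone.conf
```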
rclone-1.53.3/docs/content/s3.md000066400000000000000000002301511375552240400163470ustar00rootroot00000000000000--- title: "Amazon S3" description: "Rclone docs for Amazon S3" --- {{< icon "fab fa-amazon" >}} Amazon S3 Storage Providers -------------------------------------------------------- The S3 backend can be used with a number of different providers: {{< provider_list >}} {{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#amazon-s3" start="true" >}} {{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}} {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}} {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}} {{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}} {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}} {{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}} {{< provider name="Scaleway" home="https://www.scaleway.com/en/object-storage/" config="/s3/#scaleway" >}} {{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}} {{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}} {{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" end="true" >}} {{< /provider_list >}} Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. Once you have made a remote (see the provider specific section above) you can use it like this: See all buckets rclone lsd remote: Make a new bucket rclone mkdir remote:bucket List the contents of a bucket rclone ls remote:bucket Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. rclone sync -i /home/local/directory remote:bucket ## AWS S3 {#amazon-s3} Here is an example of making an s3 configuration. First run rclone config This will guide you through an interactive setup process. ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) \ "s3" [snip] Storage> s3 Choose your S3 provider. Choose a number from below, or type in your own value 1 / Amazon Web Services (AWS) S3 \ "AWS" 2 / Ceph Object Storage \ "Ceph" 3 / Digital Ocean Spaces \ "DigitalOcean" 4 / Dreamhost DreamObjects \ "Dreamhost" 5 / IBM COS S3 \ "IBMCOS" 6 / Minio Object Storage \ "Minio" 7 / Wasabi Object Storage \ "Wasabi" 8 / Any other S3 compatible provider \ "Other" provider> 1 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> XXX AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. 
secret_access_key> YYY Region to connect to. Choose a number from below, or type in your own value / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia or Pacific Northwest. | Leave location constraint empty. \ "us-east-1" / US East (Ohio) Region 2 | Needs location constraint us-east-2. \ "us-east-2" / US West (Oregon) Region 3 | Needs location constraint us-west-2. \ "us-west-2" / US West (Northern California) Region 4 | Needs location constraint us-west-1. \ "us-west-1" / Canada (Central) Region 5 | Needs location constraint ca-central-1. \ "ca-central-1" / EU (Ireland) Region 6 | Needs location constraint EU or eu-west-1. \ "eu-west-1" / EU (London) Region 7 | Needs location constraint eu-west-2. \ "eu-west-2" / EU (Frankfurt) Region 8 | Needs location constraint eu-central-1. \ "eu-central-1" / Asia Pacific (Singapore) Region 9 | Needs location constraint ap-southeast-1. \ "ap-southeast-1" / Asia Pacific (Sydney) Region 10 | Needs location constraint ap-southeast-2. \ "ap-southeast-2" / Asia Pacific (Tokyo) Region 11 | Needs location constraint ap-northeast-1. \ "ap-northeast-1" / Asia Pacific (Seoul) 12 | Needs location constraint ap-northeast-2. \ "ap-northeast-2" / Asia Pacific (Mumbai) 13 | Needs location constraint ap-south-1. \ "ap-south-1" / Asia Pacific (Hong Kong) Region 14 | Needs location constraint ap-east-1. \ "ap-east-1" / South America (Sao Paulo) Region 15 | Needs location constraint sa-east-1. \ "sa-east-1" region> 1 Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. endpoint> Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" 2 / US East (Ohio) Region. \ "us-east-2" 3 / US West (Oregon) Region. \ "us-west-2" 4 / US West (Northern California) Region. \ "us-west-1" 5 / Canada (Central) Region. \ "ca-central-1" 6 / EU (Ireland) Region. \ "eu-west-1" 7 / EU (London) Region. \ "eu-west-2" 8 / EU Region. \ "EU" 9 / Asia Pacific (Singapore) Region. \ "ap-southeast-1" 10 / Asia Pacific (Sydney) Region. \ "ap-southeast-2" 11 / Asia Pacific (Tokyo) Region. \ "ap-northeast-1" 12 / Asia Pacific (Seoul) \ "ap-northeast-2" 13 / Asia Pacific (Mumbai) \ "ap-south-1" 14 / Asia Pacific (Hong Kong) \ "ap-east-1" 15 / South America (Sao Paulo) Region. \ "sa-east-1" location_constraint> 1 Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. \ "public-read" / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. 3 | Granting this on a bucket is generally not recommended. \ "public-read-write" 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. \ "authenticated-read" / Object owner gets FULL_CONTROL. Bucket owner gets READ access. 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ "bucket-owner-read" / Both the object owner and the bucket owner get FULL_CONTROL over the object. 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ "bucket-owner-full-control" acl> 1 The server-side encryption algorithm used when storing this object in S3. 
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
 6 / Glacier storage class
   \ "GLACIER"
 7 / Glacier Deep Archive storage class
   \ "DEEP_ARCHIVE"
 8 / Intelligent-Tiering storage class
   \ "INTELLIGENT_TIERING"
storage_class> 1
Remote config
--------------------
[remote]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
```

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.

### --update and --use-server-modtime ###

As noted below, the modified time is stored as metadata on the object. It is
used by default for all operations that require checking the time a file was
last updated. It allows rclone to treat the remote more like a true
filesystem, but it is inefficient because it requires an extra API call to
retrieve the metadata.

For many operations, the time the object was last uploaded to the remote is
sufficient to determine if it is "dirty". By using `--update` along with
`--use-server-modtime`, you can avoid the extra API call and simply upload
files whose local modtime is newer than the time it was last uploaded.

### Modified time ###

The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime`, as a floating point number of seconds since the
epoch, accurate to 1 ns.

If the modification time needs to be updated rclone will attempt to
perform a server side copy to update the modification time if the
object can be copied in a single part.  If the object is larger than
5GB or is in Glacier or Glacier Deep Archive storage the object will
be uploaded rather than copied.

### Cleanup ###

If you run `rclone cleanup s3:bucket` then it will remove all pending
multipart uploads older than 24 hours. You can use the `-i` flag to
see exactly what it will do. If you want more control over the expiry
date then run `rclone backend cleanup s3:bucket -o max-age=1h` to
expire all uploads older than one hour. You can use `rclone backend
list-multipart-uploads s3:bucket` to see the pending multipart
uploads.

#### Restricted filename characters

S3 allows any valid UTF-8 string as a key.

Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), as
they can't be used in XML.

The following characters are replaced since these are problematic
when dealing with the REST API:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／           |

The encoding will also encode these file names as they don't seem to
work with the SDK properly:

| File name | Replacement |
| --------- |:-----------:|
| .         | ．          |
| ..        | ．．         |

### Multipart uploads ###

rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5GB.

Note that files uploaded *both* with multipart upload *and* through
crypt remotes do not have MD5 sums.

rclone switches from single part uploads to multipart uploads at the
point specified by `--s3-upload-cutoff`.  This can be a maximum of 5GB
and a minimum of 0 (ie always upload multipart files).

The chunk sizes used in the multipart upload are specified by
`--s3-chunk-size` and the number of chunks uploaded concurrently is
specified by `--s3-upload-concurrency`.

Multipart uploads will use `--transfers` * `--s3-upload-concurrency` *
`--s3-chunk-size` extra memory.  Single part uploads do not use extra
memory.

Single part transfers can be faster than multipart transfers or slower
depending on your latency from S3 - the more latency, the more likely
single part transfers will be faster.

Increasing `--s3-upload-concurrency` will increase throughput (8 would
be a sensible value) and increasing `--s3-chunk-size` also increases
throughput (16M would be sensible).  Increasing either of these will
use more memory.  The default values are high enough to gain most of
the possible performance without using too much memory.
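As a sketch of how these options combine on the command line (the values are illustrative, in line with the notes above, rather than recommendations for every link):

```
rclone copy --transfers 4 --s3-upload-concurrency 8 --s3-chunk-size 16M /path/to/files remote:bucket
```

This would allow up to 4 * 8 * 16M = 512M of extra memory to be used for in-flight chunks.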
### Buckets and Regions ###

With Amazon S3 you can list buckets (`rclone lsd`) using any region,
but you can only access the content of a bucket from the region it was
created in.  If you attempt to access a bucket from the wrong region,
you will get an error, `incorrect region, the bucket is not in 'XXX'
region`.

### Authentication ###

There are a number of ways to supply `rclone` with a set of AWS
credentials, with and without using the environment.

The different authentication methods are tried in this order:

 - Directly in the rclone configuration file (`env_auth = false` in the config file):
   - `access_key_id` and `secret_access_key` are required.
   - `session_token` can be optionally set when using AWS STS.
 - Runtime configuration (`env_auth = true` in the config file):
   - Export the following environment variables before running `rclone`:
     - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
     - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
     - Session Token: `AWS_SESSION_TOKEN` (optional)
   - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html):
     - Profile files are standard files used by AWS CLI tools
     - By default it will use the profile file in your home directory
       (eg `~/.aws/credentials` on unix based systems) and the
       "default" profile; to change this, set these environment
       variables:
       - `AWS_SHARED_CREDENTIALS_FILE` to control which file.
       - `AWS_PROFILE` to control which profile to use.
   - Or, run `rclone` in an ECS task with an IAM role (AWS only).
   - Or, run `rclone` on an EC2 instance with an IAM role (AWS only).
   - Or, run `rclone` in an EKS pod with an IAM role that is associated with a service account (AWS only).

If none of these options actually end up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see
below).
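For example, with `env_auth = true` you might supply credentials for a one-off run like this (placeholder values shown):

```
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
rclone lsd remote:
```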
### S3 Permissions ###

When using the `sync` subcommand of `rclone` the following minimum
permissions are required to be available on the bucket being written to:

* `ListBucket`
* `DeleteObject`
* `GetObject`
* `PutObject`
* `PutObjectACL`

When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required.

Example policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
              "arn:aws:s3:::BUCKET_NAME/*",
              "arn:aws:s3:::BUCKET_NAME"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
```

Notes on above:

1. This is a policy that can be used when creating a bucket. It
   assumes that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
   the bucket and the other implies the bucket's objects.

For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.

### Key Management System (KMS) ###

If you are using server side encryption with KMS then you will find
you can't transfer small objects.  As a work-around you can use the
`--ignore-checksum` flag.

A proper fix is being worked on in [issue #1824](https://github.com/rclone/rclone/issues/1824).

### Glacier and Glacier Deep Archive ###

You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
The bucket can still be synced or copied into normally, but if rclone
tries to access data from the glacier storage class you will see an error like below.

    2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before using rclone.

Note that rclone only speaks the S3 API; it does not speak the
Glacier Vault API, so rclone cannot directly access Glacier Vaults.

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)).

#### --s3-provider

Choose your S3 provider.

- Config: provider
- Env Var: RCLONE_S3_PROVIDER
- Type: string
- Default: ""
- Examples:
    - "AWS"
        - Amazon Web Services (AWS) S3
    - "Alibaba"
        - Alibaba Cloud Object Storage System (OSS) formerly Aliyun
    - "Ceph"
        - Ceph Object Storage
    - "DigitalOcean"
        - Digital Ocean Spaces
    - "Dreamhost"
        - Dreamhost DreamObjects
    - "IBMCOS"
        - IBM COS S3
    - "Minio"
        - Minio Object Storage
    - "Netease"
        - Netease Object Storage (NOS)
    - "Scaleway"
        - Scaleway Object Storage
    - "StackPath"
        - StackPath Object Storage
    - "TencentCOS"
        - Tencent Cloud Object Storage (COS)
    - "Wasabi"
        - Wasabi Object Storage
    - "Other"
        - Any other S3 compatible provider

#### --s3-env-auth

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key are blank.

- Config: env_auth
- Env Var: RCLONE_S3_ENV_AUTH
- Type: bool
- Default: false
- Examples:
    - "false"
        - Enter AWS credentials in the next step
    - "true"
        - Get AWS credentials from the environment (env vars or IAM)

#### --s3-access-key-id

AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
- Config: access_key_id - Env Var: RCLONE_S3_ACCESS_KEY_ID - Type: string - Default: "" #### --s3-secret-access-key AWS Secret Access Key (password) Leave blank for anonymous access or runtime credentials. - Config: secret_access_key - Env Var: RCLONE_S3_SECRET_ACCESS_KEY - Type: string - Default: "" #### --s3-region Region to connect to. - Config: region - Env Var: RCLONE_S3_REGION - Type: string - Default: "" - Examples: - "us-east-1" - The default endpoint - a good choice if you are unsure. - US Region, Northern Virginia or Pacific Northwest. - Leave location constraint empty. - "us-east-2" - US East (Ohio) Region - Needs location constraint us-east-2. - "us-west-1" - US West (Northern California) Region - Needs location constraint us-west-1. - "us-west-2" - US West (Oregon) Region - Needs location constraint us-west-2. - "ca-central-1" - Canada (Central) Region - Needs location constraint ca-central-1. - "eu-west-1" - EU (Ireland) Region - Needs location constraint EU or eu-west-1. - "eu-west-2" - EU (London) Region - Needs location constraint eu-west-2. - "eu-west-3" - EU (Paris) Region - Needs location constraint eu-west-3. - "eu-north-1" - EU (Stockholm) Region - Needs location constraint eu-north-1. - "eu-south-1" - EU (Milan) Region - Needs location constraint eu-south-1. - "eu-central-1" - EU (Frankfurt) Region - Needs location constraint eu-central-1. - "ap-southeast-1" - Asia Pacific (Singapore) Region - Needs location constraint ap-southeast-1. - "ap-southeast-2" - Asia Pacific (Sydney) Region - Needs location constraint ap-southeast-2. - "ap-northeast-1" - Asia Pacific (Tokyo) Region - Needs location constraint ap-northeast-1. - "ap-northeast-2" - Asia Pacific (Seoul) - Needs location constraint ap-northeast-2. - "ap-northeast-3" - Asia Pacific (Osaka-Local) - Needs location constraint ap-northeast-3. - "ap-south-1" - Asia Pacific (Mumbai) - Needs location constraint ap-south-1. - "ap-east-1" - Asia Pacific (Hong Kong) Region - Needs location constraint ap-east-1. - "sa-east-1" - South America (Sao Paulo) Region - Needs location constraint sa-east-1. - "me-south-1" - Middle East (Bahrain) Region - Needs location constraint me-south-1. - "af-south-1" - Africa (Cape Town) Region - Needs location constraint af-south-1. - "cn-north-1" - China (Beijing) Region - Needs location constraint cn-north-1. - "cn-northwest-1" - China (Ningxia) Region - Needs location constraint cn-northwest-1. - "us-gov-east-1" - AWS GovCloud (US-East) Region - Needs location constraint us-gov-east-1. - "us-gov-west-1" - AWS GovCloud (US) Region - Needs location constraint us-gov-west-1. #### --s3-region Region to connect to. - Config: region - Env Var: RCLONE_S3_REGION - Type: string - Default: "" - Examples: - "nl-ams" - Amsterdam, The Netherlands - "fr-par" - Paris, France #### --s3-region Region to connect to. Leave blank if you are using an S3 clone and you don't have a region. - Config: region - Env Var: RCLONE_S3_REGION - Type: string - Default: "" - Examples: - "" - Use this if unsure. Will use v4 signatures and an empty region. - "other-v2-signature" - Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. #### --s3-endpoint Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" #### --s3-endpoint Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise. 
- Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" - Examples: - "s3.us.cloud-object-storage.appdomain.cloud" - US Cross Region Endpoint - "s3.dal.us.cloud-object-storage.appdomain.cloud" - US Cross Region Dallas Endpoint - "s3.wdc.us.cloud-object-storage.appdomain.cloud" - US Cross Region Washington DC Endpoint - "s3.sjc.us.cloud-object-storage.appdomain.cloud" - US Cross Region San Jose Endpoint - "s3.private.us.cloud-object-storage.appdomain.cloud" - US Cross Region Private Endpoint - "s3.private.dal.us.cloud-object-storage.appdomain.cloud" - US Cross Region Dallas Private Endpoint - "s3.private.wdc.us.cloud-object-storage.appdomain.cloud" - US Cross Region Washington DC Private Endpoint - "s3.private.sjc.us.cloud-object-storage.appdomain.cloud" - US Cross Region San Jose Private Endpoint - "s3.us-east.cloud-object-storage.appdomain.cloud" - US Region East Endpoint - "s3.private.us-east.cloud-object-storage.appdomain.cloud" - US Region East Private Endpoint - "s3.us-south.cloud-object-storage.appdomain.cloud" - US Region South Endpoint - "s3.private.us-south.cloud-object-storage.appdomain.cloud" - US Region South Private Endpoint - "s3.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Endpoint - "s3.fra.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Frankfurt Endpoint - "s3.mil.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Milan Endpoint - "s3.ams.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Amsterdam Endpoint - "s3.private.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Private Endpoint - "s3.private.fra.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Frankfurt Private Endpoint - "s3.private.mil.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Milan Private Endpoint - "s3.private.ams.eu.cloud-object-storage.appdomain.cloud" - EU Cross Region Amsterdam Private Endpoint - "s3.eu-gb.cloud-object-storage.appdomain.cloud" - Great Britain Endpoint - "s3.private.eu-gb.cloud-object-storage.appdomain.cloud" - Great Britain Private Endpoint - "s3.eu-de.cloud-object-storage.appdomain.cloud" - EU Region DE Endpoint - "s3.private.eu-de.cloud-object-storage.appdomain.cloud" - EU Region DE Private Endpoint - "s3.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Endpoint - "s3.tok.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Tokyo Endpoint - "s3.hkg.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional HongKong Endpoint - "s3.seo.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Seoul Endpoint - "s3.private.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Private Endpoint - "s3.private.tok.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Tokyo Private Endpoint - "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional HongKong Private Endpoint - "s3.private.seo.ap.cloud-object-storage.appdomain.cloud" - APAC Cross Regional Seoul Private Endpoint - "s3.jp-tok.cloud-object-storage.appdomain.cloud" - APAC Region Japan Endpoint - "s3.private.jp-tok.cloud-object-storage.appdomain.cloud" - APAC Region Japan Private Endpoint - "s3.au-syd.cloud-object-storage.appdomain.cloud" - APAC Region Australia Endpoint - "s3.private.au-syd.cloud-object-storage.appdomain.cloud" - APAC Region Australia Private Endpoint - "s3.ams03.cloud-object-storage.appdomain.cloud" - Amsterdam Single Site Endpoint - "s3.private.ams03.cloud-object-storage.appdomain.cloud" - Amsterdam Single Site Private Endpoint - 
"s3.che01.cloud-object-storage.appdomain.cloud" - Chennai Single Site Endpoint - "s3.private.che01.cloud-object-storage.appdomain.cloud" - Chennai Single Site Private Endpoint - "s3.mel01.cloud-object-storage.appdomain.cloud" - Melbourne Single Site Endpoint - "s3.private.mel01.cloud-object-storage.appdomain.cloud" - Melbourne Single Site Private Endpoint - "s3.osl01.cloud-object-storage.appdomain.cloud" - Oslo Single Site Endpoint - "s3.private.osl01.cloud-object-storage.appdomain.cloud" - Oslo Single Site Private Endpoint - "s3.tor01.cloud-object-storage.appdomain.cloud" - Toronto Single Site Endpoint - "s3.private.tor01.cloud-object-storage.appdomain.cloud" - Toronto Single Site Private Endpoint - "s3.seo01.cloud-object-storage.appdomain.cloud" - Seoul Single Site Endpoint - "s3.private.seo01.cloud-object-storage.appdomain.cloud" - Seoul Single Site Private Endpoint - "s3.mon01.cloud-object-storage.appdomain.cloud" - Montreal Single Site Endpoint - "s3.private.mon01.cloud-object-storage.appdomain.cloud" - Montreal Single Site Private Endpoint - "s3.mex01.cloud-object-storage.appdomain.cloud" - Mexico Single Site Endpoint - "s3.private.mex01.cloud-object-storage.appdomain.cloud" - Mexico Single Site Private Endpoint - "s3.sjc04.cloud-object-storage.appdomain.cloud" - San Jose Single Site Endpoint - "s3.private.sjc04.cloud-object-storage.appdomain.cloud" - San Jose Single Site Private Endpoint - "s3.mil01.cloud-object-storage.appdomain.cloud" - Milan Single Site Endpoint - "s3.private.mil01.cloud-object-storage.appdomain.cloud" - Milan Single Site Private Endpoint - "s3.hkg02.cloud-object-storage.appdomain.cloud" - Hong Kong Single Site Endpoint - "s3.private.hkg02.cloud-object-storage.appdomain.cloud" - Hong Kong Single Site Private Endpoint - "s3.par01.cloud-object-storage.appdomain.cloud" - Paris Single Site Endpoint - "s3.private.par01.cloud-object-storage.appdomain.cloud" - Paris Single Site Private Endpoint - "s3.sng01.cloud-object-storage.appdomain.cloud" - Singapore Single Site Endpoint - "s3.private.sng01.cloud-object-storage.appdomain.cloud" - Singapore Single Site Private Endpoint #### --s3-endpoint Endpoint for OSS API. - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Type: string - Default: "" - Examples: - "oss-cn-hangzhou.aliyuncs.com" - East China 1 (Hangzhou) - "oss-cn-shanghai.aliyuncs.com" - East China 2 (Shanghai) - "oss-cn-qingdao.aliyuncs.com" - North China 1 (Qingdao) - "oss-cn-beijing.aliyuncs.com" - North China 2 (Beijing) - "oss-cn-zhangjiakou.aliyuncs.com" - North China 3 (Zhangjiakou) - "oss-cn-huhehaote.aliyuncs.com" - North China 5 (Huhehaote) - "oss-cn-shenzhen.aliyuncs.com" - South China 1 (Shenzhen) - "oss-cn-hongkong.aliyuncs.com" - Hong Kong (Hong Kong) - "oss-us-west-1.aliyuncs.com" - US West 1 (Silicon Valley) - "oss-us-east-1.aliyuncs.com" - US East 1 (Virginia) - "oss-ap-southeast-1.aliyuncs.com" - Southeast Asia Southeast 1 (Singapore) - "oss-ap-southeast-2.aliyuncs.com" - Asia Pacific Southeast 2 (Sydney) - "oss-ap-southeast-3.aliyuncs.com" - Southeast Asia Southeast 3 (Kuala Lumpur) - "oss-ap-southeast-5.aliyuncs.com" - Asia Pacific Southeast 5 (Jakarta) - "oss-ap-northeast-1.aliyuncs.com" - Asia Pacific Northeast 1 (Japan) - "oss-ap-south-1.aliyuncs.com" - Asia Pacific South 1 (Mumbai) - "oss-eu-central-1.aliyuncs.com" - Central Europe 1 (Frankfurt) - "oss-eu-west-1.aliyuncs.com" - West Europe (London) - "oss-me-east-1.aliyuncs.com" - Middle East 1 (Dubai) #### --s3-endpoint Endpoint for Scaleway Object Storage. 
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
    - "s3.nl-ams.scw.cloud"
        - Amsterdam Endpoint
    - "s3.fr-par.scw.cloud"
        - Paris Endpoint

#### --s3-endpoint

Endpoint for StackPath Object Storage.

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
    - "s3.us-east-2.stackpathstorage.com"
        - US East Endpoint
    - "s3.us-west-1.stackpathstorage.com"
        - US West Endpoint
    - "s3.eu-central-1.stackpathstorage.com"
        - EU Endpoint

#### --s3-endpoint

Endpoint for Tencent COS API.

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
    - "cos.ap-beijing.myqcloud.com"
        - Beijing Region.
    - "cos.ap-nanjing.myqcloud.com"
        - Nanjing Region.
    - "cos.ap-shanghai.myqcloud.com"
        - Shanghai Region.
    - "cos.ap-guangzhou.myqcloud.com"
        - Guangzhou Region.
    - "cos.ap-chengdu.myqcloud.com"
        - Chengdu Region.
    - "cos.ap-chongqing.myqcloud.com"
        - Chongqing Region.
    - "cos.ap-hongkong.myqcloud.com"
        - Hong Kong (China) Region.
    - "cos.ap-singapore.myqcloud.com"
        - Singapore Region.
    - "cos.ap-mumbai.myqcloud.com"
        - Mumbai Region.
    - "cos.ap-seoul.myqcloud.com"
        - Seoul Region.
    - "cos.ap-bangkok.myqcloud.com"
        - Bangkok Region.
    - "cos.ap-tokyo.myqcloud.com"
        - Tokyo Region.
    - "cos.na-siliconvalley.myqcloud.com"
        - Silicon Valley Region.
    - "cos.na-ashburn.myqcloud.com"
        - Virginia Region.
    - "cos.na-toronto.myqcloud.com"
        - Toronto Region.
    - "cos.eu-frankfurt.myqcloud.com"
        - Frankfurt Region.
    - "cos.eu-moscow.myqcloud.com"
        - Moscow Region.
    - "cos.accelerate.myqcloud.com"
        - Use Tencent COS Accelerate Endpoint.

#### --s3-endpoint

Endpoint for S3 API.
Required when using an S3 clone.

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
    - "objects-us-east-1.dream.io"
        - Dream Objects endpoint
    - "nyc3.digitaloceanspaces.com"
        - Digital Ocean Spaces New York 3
    - "ams3.digitaloceanspaces.com"
        - Digital Ocean Spaces Amsterdam 3
    - "sgp1.digitaloceanspaces.com"
        - Digital Ocean Spaces Singapore 1
    - "s3.wasabisys.com"
        - Wasabi US East endpoint
    - "s3.us-west-1.wasabisys.com"
        - Wasabi US West endpoint
    - "s3.eu-central-1.wasabisys.com"
        - Wasabi EU Central endpoint

#### --s3-location-constraint

Location constraint - must be set to match the Region.
Used when creating buckets only.

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Type: string
- Default: ""
- Examples:
    - ""
        - Empty for US Region, Northern Virginia or Pacific Northwest.
    - "us-east-2"
        - US East (Ohio) Region.
    - "us-west-1"
        - US West (Northern California) Region.
    - "us-west-2"
        - US West (Oregon) Region.
    - "ca-central-1"
        - Canada (Central) Region.
    - "eu-west-1"
        - EU (Ireland) Region.
    - "eu-west-2"
        - EU (London) Region.
    - "eu-west-3"
        - EU (Paris) Region.
    - "eu-north-1"
        - EU (Stockholm) Region.
    - "eu-south-1"
        - EU (Milan) Region.
    - "EU"
        - EU Region.
    - "ap-southeast-1"
        - Asia Pacific (Singapore) Region.
    - "ap-southeast-2"
        - Asia Pacific (Sydney) Region.
    - "ap-northeast-1"
        - Asia Pacific (Tokyo) Region.
    - "ap-northeast-2"
        - Asia Pacific (Seoul) Region.
    - "ap-northeast-3"
        - Asia Pacific (Osaka-Local) Region.
    - "ap-south-1"
        - Asia Pacific (Mumbai) Region.
    - "ap-east-1"
        - Asia Pacific (Hong Kong) Region.
    - "sa-east-1"
        - South America (Sao Paulo) Region.
    - "me-south-1"
        - Middle East (Bahrain) Region.
    - "af-south-1"
        - Africa (Cape Town) Region.
    - "cn-north-1"
        - China (Beijing) Region.
    - "cn-northwest-1"
        - China (Ningxia) Region.
    - "us-gov-east-1"
        - AWS GovCloud (US-East) Region.
    - "us-gov-west-1"
        - AWS GovCloud (US) Region.
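Like any backend option, this can also be supplied as a flag for a single command; as a sketch, creating a bucket in a specific region might look like this (the remote and bucket names are illustrative):

```
rclone mkdir --s3-location-constraint eu-west-2 remote:mybucket
```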
- "us-gov-west-1" - AWS GovCloud (US) Region. #### --s3-location-constraint Location constraint - must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT - Type: string - Default: "" - Examples: - "us-standard" - US Cross Region Standard - "us-vault" - US Cross Region Vault - "us-cold" - US Cross Region Cold - "us-flex" - US Cross Region Flex - "us-east-standard" - US East Region Standard - "us-east-vault" - US East Region Vault - "us-east-cold" - US East Region Cold - "us-east-flex" - US East Region Flex - "us-south-standard" - US South Region Standard - "us-south-vault" - US South Region Vault - "us-south-cold" - US South Region Cold - "us-south-flex" - US South Region Flex - "eu-standard" - EU Cross Region Standard - "eu-vault" - EU Cross Region Vault - "eu-cold" - EU Cross Region Cold - "eu-flex" - EU Cross Region Flex - "eu-gb-standard" - Great Britain Standard - "eu-gb-vault" - Great Britain Vault - "eu-gb-cold" - Great Britain Cold - "eu-gb-flex" - Great Britain Flex - "ap-standard" - APAC Standard - "ap-vault" - APAC Vault - "ap-cold" - APAC Cold - "ap-flex" - APAC Flex - "mel01-standard" - Melbourne Standard - "mel01-vault" - Melbourne Vault - "mel01-cold" - Melbourne Cold - "mel01-flex" - Melbourne Flex - "tor01-standard" - Toronto Standard - "tor01-vault" - Toronto Vault - "tor01-cold" - Toronto Cold - "tor01-flex" - Toronto Flex #### --s3-location-constraint Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only. - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT - Type: string - Default: "" #### --s3-acl Canned ACL used when creating buckets and storing or copying objects. This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one. - Config: acl - Env Var: RCLONE_S3_ACL - Type: string - Default: "" - Examples: - "default" - Owner gets Full_CONTROL. No one else has access rights (default). - "private" - Owner gets FULL_CONTROL. No one else has access rights (default). - "public-read" - Owner gets FULL_CONTROL. The AllUsers group gets READ access. - "public-read-write" - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. - Granting this on a bucket is generally not recommended. - "authenticated-read" - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. - "bucket-owner-read" - Object owner gets FULL_CONTROL. Bucket owner gets READ access. - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - "bucket-owner-full-control" - Both the object owner and the bucket owner get FULL_CONTROL over the object. - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - "private" - Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS - "public-read" - Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS - "public-read-write" - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. 
#### --s3-server-side-encryption

The server-side encryption algorithm used when storing this object in S3.

- Config: server_side_encryption
- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
- Type: string
- Default: ""
- Examples:
    - ""
        - None
    - "AES256"
        - AES256
    - "aws:kms"
        - aws:kms

#### --s3-sse-kms-key-id

If using KMS ID you must provide the ARN of Key.

- Config: sse_kms_key_id
- Env Var: RCLONE_S3_SSE_KMS_KEY_ID
- Type: string
- Default: ""
- Examples:
    - ""
        - None
    - "arn:aws:kms:us-east-1:*"
        - arn:aws:kms:*

#### --s3-storage-class

The storage class to use when storing new objects in S3.

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
    - ""
        - Default
    - "STANDARD"
        - Standard storage class
    - "REDUCED_REDUNDANCY"
        - Reduced redundancy storage class
    - "STANDARD_IA"
        - Standard Infrequent Access storage class
    - "ONEZONE_IA"
        - One Zone Infrequent Access storage class
    - "GLACIER"
        - Glacier storage class
    - "DEEP_ARCHIVE"
        - Glacier Deep Archive storage class
    - "INTELLIGENT_TIERING"
        - Intelligent-Tiering storage class

#### --s3-storage-class

The storage class to use when storing new objects in OSS.

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
    - ""
        - Default
    - "STANDARD"
        - Standard storage class
    - "GLACIER"
        - Archive storage mode.
    - "STANDARD_IA"
        - Infrequent access storage mode.

#### --s3-storage-class

The storage class to use when storing new objects in Tencent COS.

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
    - ""
        - Default
    - "STANDARD"
        - Standard storage class
    - "ARCHIVE"
        - Archive storage mode.
    - "STANDARD_IA"
        - Infrequent access storage mode.

#### --s3-storage-class

The storage class to use when storing new objects in S3.

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
    - ""
        - Default
    - "STANDARD"
        - The Standard class for any upload; suitable for on-demand content like streaming or CDN.
    - "GLACIER"
        - Archived storage; prices are lower, but it needs to be restored first to be accessed.

### Advanced Options

Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)).

#### --s3-bucket-acl

Canned ACL used when creating buckets.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied only when creating buckets.  If it
isn't set then "acl" is used instead.

- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
- Type: string
- Default: ""
- Examples:
    - "private"
        - Owner gets FULL_CONTROL. No one else has access rights (default).
    - "public-read"
        - Owner gets FULL_CONTROL. The AllUsers group gets READ access.
    - "public-read-write"
        - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
        - Granting this on a bucket is generally not recommended.
    - "authenticated-read"
        - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.

#### --s3-sse-customer-algorithm

If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
- Config: sse_customer_algorithm - Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM - Type: string - Default: "" - Examples: - "" - None - "AES256" - AES256 #### --s3-sse-customer-key If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data. - Config: sse_customer_key - Env Var: RCLONE_S3_SSE_CUSTOMER_KEY - Type: string - Default: "" - Examples: - "" - None #### --s3-sse-customer-key-md5 If using SSE-C you must provide the secret encryption key MD5 checksum. - Config: sse_customer_key_md5 - Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5 - Type: string - Default: "" - Examples: - "" - None #### --s3-upload-cutoff Cutoff for switching to chunked upload Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB. - Config: upload_cutoff - Env Var: RCLONE_S3_UPLOAD_CUTOFF - Type: SizeSuffix - Default: 200M #### --s3-chunk-size Chunk size to use for uploading. When uploading files larger than upload_cutoff or files with unknown size (eg from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size. Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer. If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers. Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit. Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5MB and there can be at most 10,000 chunks, this means that by default the maximum size of file you can stream upload is 48GB. If you wish to stream upload larger files then you will need to increase chunk_size. - Config: chunk_size - Env Var: RCLONE_S3_CHUNK_SIZE - Type: SizeSuffix - Default: 5M #### --s3-max-upload-parts Maximum number of parts in a multipart upload. This option defines the maximum number of multipart chunks to use when doing a multipart upload. This can be useful if a service does not support the AWS S3 specification of 10,000 chunks. Rclone will automatically increase the chunk size when uploading a large file of a known size to stay below this number of chunks limit. - Config: max_upload_parts - Env Var: RCLONE_S3_MAX_UPLOAD_PARTS - Type: int - Default: 10000 #### --s3-copy-cutoff Cutoff for switching to multipart copy Any files larger than this that need to be server side copied will be copied in chunks of this size. The minimum is 0 and the maximum is 5GB. - Config: copy_cutoff - Env Var: RCLONE_S3_COPY_CUTOFF - Type: SizeSuffix - Default: 4.656G #### --s3-disable-checksum Don't store MD5 checksum with object metadata Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading. - Config: disable_checksum - Env Var: RCLONE_S3_DISABLE_CHECKSUM - Type: bool - Default: false #### --s3-shared-credentials-file Path to the shared credentials file If env_auth = true then rclone can use a shared credentials file. If this variable is empty rclone will look for the "AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty it will default to the current user's home directory. 
Linux/OSX: "$HOME/.aws/credentials"
Windows:   "%USERPROFILE%\.aws\credentials"

- Config: shared_credentials_file
- Env Var: RCLONE_S3_SHARED_CREDENTIALS_FILE
- Type: string
- Default: ""

#### --s3-profile

Profile to use in the shared credentials file

If env_auth = true then rclone can use a shared credentials file. This
variable controls which profile is used in that file.

If empty it will default to the environment variable "AWS_PROFILE" or
"default" if that environment variable is also not set.

- Config: profile
- Env Var: RCLONE_S3_PROFILE
- Type: string
- Default: ""

#### --s3-session-token

An AWS session token

- Config: session_token
- Env Var: RCLONE_S3_SESSION_TOKEN
- Type: string
- Default: ""

#### --s3-upload-concurrency

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded
concurrently.

If you are uploading small numbers of large files over high speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.

- Config: upload_concurrency
- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
- Type: int
- Default: 4

#### --s3-force-path-style

If true use path style access if false use virtual hosted style.

If this is true (the default) then rclone will use path style access,
if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.

Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to
false - rclone will do this automatically based on the provider
setting.

- Config: force_path_style
- Env Var: RCLONE_S3_FORCE_PATH_STYLE
- Type: bool
- Default: true

#### --s3-v2-auth

If true use v2 authentication.

If this is false (the default) then rclone will use v4 authentication.
If it is set then rclone will use v2 authentication.

Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.

- Config: v2_auth
- Env Var: RCLONE_S3_V2_AUTH
- Type: bool
- Default: false

#### --s3-use-accelerate-endpoint

If true use the AWS S3 accelerated endpoint.

See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)

- Config: use_accelerate_endpoint
- Env Var: RCLONE_S3_USE_ACCELERATE_ENDPOINT
- Type: bool
- Default: false

#### --s3-leave-parts-on-error

If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.

It should be set to true for resuming uploads across different sessions.

WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.

- Config: leave_parts_on_error
- Env Var: RCLONE_S3_LEAVE_PARTS_ON_ERROR
- Type: bool
- Default: false

#### --s3-list-chunk

Size of listing chunk (response list for each ListObject S3 request).

This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.
Most services truncate the response list to 1000 objects even if requested more than that.
In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html).
In Ceph, this can be increased with the "rgw list buckets max chunk" option.
- Config: list_chunk
- Env Var: RCLONE_S3_LIST_CHUNK
- Type: int
- Default: 1000

#### --s3-no-check-bucket

If set don't attempt to check the bucket exists or create it

This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.

- Config: no_check_bucket
- Env Var: RCLONE_S3_NO_CHECK_BUCKET
- Type: bool
- Default: false

#### --s3-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_S3_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot

#### --s3-memory-pool-flush-time

How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (eg multipart) will use the memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.

- Config: memory_pool_flush_time
- Env Var: RCLONE_S3_MEMORY_POOL_FLUSH_TIME
- Type: Duration
- Default: 1m0s

#### --s3-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool.

- Config: memory_pool_use_mmap
- Env Var: RCLONE_S3_MEMORY_POOL_USE_MMAP
- Type: bool
- Default: false

### Backend commands

Here are the commands specific to the s3 backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See [the "rclone backend" command](/commands/rclone_backend/) for more
info on how to pass options and arguments.

These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).

#### restore

Restore objects from GLACIER to normal storage

    rclone backend restore remote: [options] [<arguments>+]

This command can be used to restore one or more objects from GLACIER
to normal storage.

Usage Examples:

    rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS]
    rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
    rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]

This command also obeys the filters.

Test first with -i/--interactive or --dry-run flags

    rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard

All the objects shown will be marked for restore, then

    rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard

It returns a list of status dictionaries with Remote and Status
keys. The Status will be OK if it was successful or an error message
if not.

    [
        {
            "Status": "OK",
            "Path": "test.txt"
        },
        {
            "Status": "OK",
            "Path": "test/file4.txt"
        }
    ]

Options:

- "description": The optional description for the job.
- "lifetime": Lifetime of the active copy in days
- "priority": Priority of restore: Standard|Expedited|Bulk

#### list-multipart-uploads

List the unfinished multipart uploads

    rclone backend list-multipart-uploads remote: [options] [<arguments>+]

This command lists the unfinished multipart uploads in JSON format.

    rclone backend list-multipart s3:bucket/path/to/object

It returns a dictionary of buckets with values as lists of unfinished
multipart uploads.

You can call it with no bucket in which case it lists all buckets, with
a bucket or with a bucket and path.

    {
      "rclone": [
        {
          "Initiated": "2020-06-26T14:20:36Z",
          "Initiator": {
            "DisplayName": "XXX",
            "ID": "arn:aws:iam::XXX:user/XXX"
          },
          "Key": "KEY",
          "Owner": {
            "DisplayName": null,
            "ID": "XXX"
          },
          "StorageClass": "STANDARD",
          "UploadId": "XXX"
        }
      ],
      "rclone-1000files": [],
      "rclone-dst": []
    }

#### cleanup

Remove unfinished multipart uploads.
    rclone backend cleanup remote: [options] [<arguments>+]

This command removes unfinished multipart uploads of age greater than
max-age, which defaults to 24 hours.

Note that you can use the -i/--interactive or --dry-run flags with this
command to see what it would do.

    rclone backend cleanup s3:bucket/path/to/object
    rclone backend cleanup -o max-age=7w s3:bucket/path/to/object

Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

Options:

- "max-age": Max age of upload to delete

{{< rem autogenerated options stop >}}

### Anonymous access to public buckets ###

If you want to use rclone to access a public bucket, configure with a
blank `access_key_id` and `secret_access_key`.  Your config should end
up looking like this:

```
[anons3]
type = s3
provider = AWS
env_auth = false
access_key_id =
secret_access_key =
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
```

Then use it as normal with the name of the public bucket, eg

    rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.

### Ceph ###

[Ceph](https://ceph.com/) is an open source unified, distributed
storage system designed for excellent performance, reliability and
scalability.  It has an S3 compatible object storage interface.

To use rclone with Ceph, configure as above but leave the region blank
and set the endpoint.  You should end up with something like this in
your config:

```
[ceph]
type = s3
provider = Ceph
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```

If you are using an older version of CEPH, eg 10.2.x Jewel, then you
may need to supply the parameter `--s3-upload-cutoff 0` or put this in
the config file as `upload_cutoff 0` to work around a bug which causes
uploading of small files to fail.

Note also that Ceph sometimes puts `/` in the passwords it gives
users.  If you read the secret access key using the command line tools
you will get a JSON blob with the `/` escaped as `\/`.  Make sure you
only write `/` in the secret access key.

Eg the dump from Ceph looks something like this (irrelevant keys
removed).

```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```

Because this is a JSON dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.

### Dreamhost ###

Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
an object storage system based on CEPH.

To use rclone with Dreamhost, configure as above but leave the region
blank and set the endpoint.  You should end up with something like
this in your config:

```
[dreamobjects]
type = s3
provider = DreamHost
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-west-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =
```

### DigitalOcean Spaces ###

[Spaces](https://www.digitalocean.com/products/object-storage/) is an
[S3-interoperable](https://developers.digitalocean.com/documentation/spaces/)
object storage service from cloud provider DigitalOcean.

To connect to DigitalOcean Spaces you will need an access key and
secret key. These can be retrieved on the "[Applications &
API](https://cloud.digitalocean.com/settings/api/tokens)" page of the
DigitalOcean control panel.
They will be needed when prompted by `rclone config` for your
`access_key_id` and `secret_access_key`.

When prompted for a `region` or `location_constraint`, press enter to
use the default value. The region must be included in the `endpoint`
setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be
used for other settings.

When going through the whole process of creating a new remote by running
`rclone config`, each prompt should be answered as shown below:

```
Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region>
endpoint> nyc3.digitaloceanspaces.com
location_constraint>
acl>
storage_class>
```

The resulting configuration file should look like:

```
[spaces]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```

Once configured, you can create a new Space and begin copying files. For example:

```
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```

### IBM COS (S3) ###

Information stored with IBM Cloud Object Storage is encrypted and
dispersed across multiple geographic locations, and accessed through an
implementation of the S3 API. This service makes use of the distributed
storage technologies provided by IBM’s Cloud Object Storage System
(formerly Cleversafe). For more information visit:
(http://www.ibm.com/cloud/object-storage)

To configure access to IBM COS S3, follow the steps below:

1. Run rclone config and select n for a new remote.
```
2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```

2. Enter the name for the configuration
```
name>
```

3. Select "s3" storage.
```
Choose a number from below, or type in your own value
 1 / Alias for an existing remote
   \ "alias"
 2 / Amazon Drive
   \ "amazon cloud drive"
 3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
   \ "s3"
 4 / Backblaze B2
   \ "b2"
[snip]
23 / http Connection
   \ "http"
Storage> 3
```

4. Select IBM COS as the S3 Storage Provider.
```
Choose the S3 provider.
Choose a number from below, or type in your own value
 1 / Choose this option to configure Storage to AWS S3
   \ "AWS"
 2 / Choose this option to configure Storage to Ceph Systems
   \ "Ceph"
 3 / Choose this option to configure Storage to Dreamhost
   \ "Dreamhost"
 4 / Choose this option to configure Storage to IBM COS S3
   \ "IBMCOS"
 5 / Choose this option to configure Storage to Minio
   \ "Minio"
Provider>4
```

5. Enter the Access Key and Secret.
```
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> <>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> <>
```

6. Specify the endpoint for IBM COS. For Public IBM COS, choose from
the option below. For On Premise IBM COS, enter an endpoint address.
```
Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
Choose a number from below, or type in your own value
 1 / US Cross Region Endpoint
   \ "s3-api.us-geo.objectstorage.softlayer.net"
 2 / US Cross Region Dallas Endpoint
   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
 3 / US Cross Region Washington DC Endpoint
   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
 4 / US Cross Region San Jose Endpoint
   \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
 5 / US Cross Region Private Endpoint
   \ "s3-api.us-geo.objectstorage.service.networklayer.com"
 6 / US Cross Region Dallas Private Endpoint
   \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
 7 / US Cross Region Washington DC Private Endpoint
   \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
 8 / US Cross Region San Jose Private Endpoint
   \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
 9 / US Region East Endpoint
   \ "s3.us-east.objectstorage.softlayer.net"
10 / US Region East Private Endpoint
   \ "s3.us-east.objectstorage.service.networklayer.com"
11 / US Region South Endpoint
[snip]
34 / Toronto Single Site Private Endpoint
   \ "s3.tor01.objectstorage.service.networklayer.com"
endpoint>1
```

7. Specify an IBM COS Location Constraint. The location constraint must
match the endpoint when using IBM Cloud Public. For on-prem COS, do not
make a selection from this list, hit enter.
```
 1 / US Cross Region Standard
   \ "us-standard"
 2 / US Cross Region Vault
   \ "us-vault"
 3 / US Cross Region Cold
   \ "us-cold"
 4 / US Cross Region Flex
   \ "us-flex"
 5 / US East Region Standard
   \ "us-east-standard"
 6 / US East Region Vault
   \ "us-east-vault"
 7 / US East Region Cold
   \ "us-east-cold"
 8 / US East Region Flex
   \ "us-east-flex"
 9 / US South Region Standard
   \ "us-south-standard"
10 / US South Region Vault
   \ "us-south-vault"
[snip]
32 / Toronto Flex
   \ "tor01-flex"
location_constraint>1
```

8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and
"private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise
COS supports all the canned ACLs.
```
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
   \ "public-read"
 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
   \ "authenticated-read"
acl> 1
```

9. Review the displayed configuration and accept to save the "remote"
then quit. The config file should look like this
```
[xxx]
type = s3
Provider = IBMCOS
access_key_id = xxx
secret_access_key = yyy
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
```

10. Execute rclone commands
```
1)  Create a bucket.
    rclone mkdir IBM-COS-XREGION:newbucket
2)  List available buckets.
    rclone lsd IBM-COS-XREGION:
    -1 2017-11-08 21:16:22        -1 test
    -1 2018-02-14 20:16:39        -1 newbucket
3)  List contents of a bucket.
    rclone ls IBM-COS-XREGION:newbucket
    18685952 test.exe
4)  Copy a file from local to remote.
    rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
5)  Copy a file from remote to local.
    rclone copy IBM-COS-XREGION:newbucket/file.txt .
6)  Delete a file on remote.
    rclone delete IBM-COS-XREGION:newbucket/file.txt
```

### Minio ###

[Minio](https://minio.io/) is an object storage server built for cloud
application developers and devops.

It is very easy to install and provides an S3 compatible server which
can be used by rclone.

To use it, install Minio following the instructions
[here](https://docs.minio.io/docs/minio-quickstart-guide).

When it configures itself Minio will print something like this

```
Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region:    us-east-1
SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis

Browser Access:
   http://192.168.1.106:9000  http://172.23.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide

Drive Capacity: 26 GiB Free, 165 GiB Total
```

These details need to go into `rclone config` like this.  Note that it
is important to put the region in as stated above.

```
env_auth> 1
access_key_id> USWUXHGYZQYFYFFIT3RE
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>
```

Which makes the config file look like this

```
[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
```

So once set up, for example to copy files into a bucket

```
rclone copy /path/to/files minio:bucket
```

### Scaleway {#scaleway}

[Scaleway](https://www.scaleway.com/object-storage/) Object Storage is
a platform that allows you to store anything from backups, logs and web
assets to documents and photos. Files can be dropped from the Scaleway
console or transferred through its API and CLI or using any
S3-compatible tool.

Scaleway provides an S3 interface which can be configured for use with
rclone like this:

```
[scaleway]
type = s3
provider = Scaleway
env_auth = false
endpoint = s3.nl-ams.scw.cloud
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
region = nl-ams
location_constraint =
acl = private
server_side_encryption =
storage_class =
```

### Wasabi ###

[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
broad range of applications and use cases. Wasabi is designed for
individuals and organizations that require a high-performance,
reliable, and secure data storage infrastructure at minimal cost.

Wasabi provides an S3 interface which can be configured for use with
rclone like this.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
n/s> n
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value [snip] XX / Amazon S3 (also Dreamhost, Ceph, Minio) \ "s3" [snip] Storage> s3 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> YOURACCESSKEY AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> YOURSECRETACCESSKEY Region to connect to. Choose a number from below, or type in your own value / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia or Pacific Northwest. | Leave location constraint empty. \ "us-east-1" [snip] region> us-east-1 Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. Specify if using an S3 clone such as Ceph. endpoint> s3.wasabisys.com Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" [snip] location_constraint> Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" [snip] acl> The server-side encryption algorithm used when storing this object in S3. Choose a number from below, or type in your own value 1 / None \ "" 2 / AES256 \ "AES256" server_side_encryption> The storage class to use when storing objects in S3. Choose a number from below, or type in your own value 1 / Default \ "" 2 / Standard storage class \ "STANDARD" 3 / Reduced redundancy storage class \ "REDUCED_REDUNDANCY" 4 / Standard Infrequent Access storage class \ "STANDARD_IA" storage_class> Remote config -------------------- [wasabi] env_auth = false access_key_id = YOURACCESSKEY secret_access_key = YOURSECRETACCESSKEY region = us-east-1 endpoint = s3.wasabisys.com location_constraint = acl = server_side_encryption = storage_class = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` This will leave the config file looking like this. ``` [wasabi] type = s3 provider = Wasabi env_auth = false access_key_id = YOURACCESSKEY secret_access_key = YOURSECRETACCESSKEY region = endpoint = s3.wasabisys.com location_constraint = acl = server_side_encryption = storage_class = ``` ### Alibaba OSS {#alibaba-oss} Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/) configuration. First run: rclone config This will guide you through an interactive setup process. ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> oss Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) \ "s3" [snip] Storage> s3 Choose your S3 provider. Enter a string value. Press Enter for the default (""). 
Choose a number from below, or type in your own value 1 / Amazon Web Services (AWS) S3 \ "AWS" 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun \ "Alibaba" 3 / Ceph Object Storage \ "Ceph" [snip] provider> Alibaba Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Enter a boolean value (true or false). Press Enter for the default ("false"). Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID. Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). access_key_id> accesskeyid AWS Secret Access Key (password) Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). secret_access_key> secretaccesskey Endpoint for OSS API. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / East China 1 (Hangzhou) \ "oss-cn-hangzhou.aliyuncs.com" 2 / East China 2 (Shanghai) \ "oss-cn-shanghai.aliyuncs.com" 3 / North China 1 (Qingdao) \ "oss-cn-qingdao.aliyuncs.com" [snip] endpoint> 1 Canned ACL used when creating buckets and storing or copying objects. Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. \ "public-read" / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. [snip] acl> 1 The storage class to use when storing new objects in OSS. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Default \ "" 2 / Standard storage class \ "STANDARD" 3 / Archive storage mode. \ "GLACIER" 4 / Infrequent access storage mode. \ "STANDARD_IA" storage_class> 1 Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config -------------------- [oss] type = s3 provider = Alibaba env_auth = false access_key_id = accesskeyid secret_access_key = secretaccesskey endpoint = oss-cn-hangzhou.aliyuncs.com acl = private storage_class = Standard -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` ### Tencent COS {#tencent-cos} [Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost. To configure access to Tencent COS, follow the steps below: 1. Run `rclone config` and select `n` for a new remote. ``` rclone config No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n ``` 2. Give the name of the configuration. For example, name it 'cos'. ``` name> cos ``` 3. Select `s3` storage. ``` Choose a number from below, or type in your own value 1 / 1Fichier \ "fichier" 2 / Alias for an existing remote \ "alias" 3 / Amazon Drive \ "amazon cloud drive" 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc) \ "s3" [snip] Storage> s3 ``` 4. 
Select `TencentCOS` provider.
```
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
   \ "AWS"
[snip]
11 / Tencent Cloud Object Storage (COS)
   \ "TencentCOS"
[snip]
provider> TencentCOS
```

5. Enter your SecretId and SecretKey of Tencent Cloud.
```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx
```

6. Select the endpoint for Tencent COS. This is the standard endpoint
for your region.
```
 1 / Beijing Region.
   \ "cos.ap-beijing.myqcloud.com"
 2 / Nanjing Region.
   \ "cos.ap-nanjing.myqcloud.com"
 3 / Shanghai Region.
   \ "cos.ap-shanghai.myqcloud.com"
 4 / Guangzhou Region.
   \ "cos.ap-guangzhou.myqcloud.com"
[snip]
endpoint> 4
```

7. Choose acl and storage class.
```
Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "default"
[snip]
acl> 1
The storage class to use when storing new objects in Tencent COS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Default
   \ ""
[snip]
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[cos]
type = s3
provider = TencentCOS
env_auth = false
access_key_id = xxx
secret_access_key = xxx
endpoint = cos.ap-guangzhou.myqcloud.com
acl = default
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
cos                  s3
```

### Netease NOS ###

For Netease NOS, configure as per the configurator `rclone config`,
setting the provider to `Netease`.  This will automatically set
`force_path_style = false` which is necessary for it to run properly.
rclone-1.53.3/docs/content/seafile.md000066400000000000000000000237141375552240400174370ustar00rootroot00000000000000---
title: "Seafile"
description: "Seafile"
---

{{< icon "fa fa-server" >}}Seafile
----------------------------------------

This is a backend for the [Seafile](https://www.seafile.com/) storage service:

- It works with both the free community edition and the professional edition.
- Seafile versions 6.x and 7.x are both supported.
- Encrypted libraries are also supported.
- It supports 2FA-enabled users.

### Root mode vs Library mode ###

There are two distinct modes in which you can set up your remote:

- you point your remote to the **root of the server**, meaning you don't specify a library during the configuration:
Paths are specified as `remote:library`. You may put subdirectories in too, eg `remote:library/path/to/dir`.
- you point your remote to a specific library during the configuration:
Paths are specified as `remote:path/to/dir`.
**This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_)

### Configuration in root mode ###

Here is an example of making a seafile configuration for a user with
**no** two-factor authentication.  First run

    rclone config

This will guide you through an interactive setup process. To
authenticate you will need the URL of your server, your email (or
username) and your password.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> seafile
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Seafile
   \ "seafile"
[snip]
Storage> seafile
** See help for seafile backend at: https://rclone.org/seafile/ **

URL of seafile host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to cloud.seafile.com
   \ "https://cloud.seafile.com/"
url> http://my.seafile.server/
User name (usually email address)
Enter a string value. Press Enter for the default ("").
user> me@example.com
Password
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g> y
Enter the password:
password:
Confirm the password:
password:
Two-factor authentication ('true' if the account has 2FA enabled)
Enter a boolean value (true or false). Press Enter for the default ("false").
2fa> false
Name of the library. Leave blank to access all non-encrypted libraries.
Enter a string value. Press Enter for the default ("").
library>
Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
Two-factor authentication is not enabled on this account.
--------------------
[seafile]
type = seafile
url = http://my.seafile.server/
user = me@example.com
pass = *** ENCRYPTED ***
2fa = false
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This remote is called `seafile`. It's pointing to the root of your
seafile server and can now be used like this:

See all libraries

    rclone lsd seafile:

Create a new library

    rclone mkdir seafile:library

List the contents of a library

    rclone ls seafile:library

Sync `/home/local/directory` to the remote library, deleting any
excess files in the library.

    rclone sync -i /home/local/directory seafile:library

### Configuration in library mode ###

Here's an example of a configuration in library mode with a user that
has two-factor authentication enabled. You will be asked for your 2FA
code at the end of the configuration, and rclone will attempt to
authenticate you:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> seafile
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Seafile
   \ "seafile"
[snip]
Storage> seafile
** See help for seafile backend at: https://rclone.org/seafile/ **

URL of seafile host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to cloud.seafile.com
   \ "https://cloud.seafile.com/"
url> http://my.seafile.server/
User name (usually email address)
Enter a string value. Press Enter for the default ("").
user> me@example.com
Password
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g> y
Enter the password:
password:
Confirm the password:
password:
Two-factor authentication ('true' if the account has 2FA enabled)
Enter a boolean value (true or false). Press Enter for the default ("false").
2fa> true
Name of the library. Leave blank to access all non-encrypted libraries.
Enter a string value. Press Enter for the default ("").
library> My Library
Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
Two-factor authentication: please enter your 2FA code
2fa code> 123456
Authenticating...
Success!
--------------------
[seafile]
type = seafile
url = http://my.seafile.server/
user = me@example.com
pass =
2fa = true
library = My Library
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

You'll notice your password is blank in the configuration. That's
because rclone only needs the password to authenticate you once.

You specified `My Library` during the configuration. The root of the
remote is pointing at the root of the library `My Library`:

See all directories in the library

    rclone lsd seafile:

Create a new directory inside the library

    rclone mkdir seafile:directory

List the contents of a directory

    rclone ls seafile:directory

Sync `/home/local/directory` to the remote library, deleting any
excess files in the library.

    rclone sync -i /home/local/directory seafile:

### --fast-list ###

Seafile version 7+ supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.

Please note this is not supported on seafile server versions 6.x.

#### Restricted filename characters

In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| /         | 0x2F  | ／          |
| "         | 0x22  | ＂          |
| \         | 0x5C  | ＼          |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.

### Seafile and rclone link ###

Rclone supports generating share links for non-encrypted libraries
only. They can either be for a file or a directory:

```
rclone link seafile:seafile-tutorial.doc
http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/

```

or if run on a directory you will get:

```
rclone link seafile:dir
http://my.seafile.server/d/9ea2455f6f55478bbb0d/
```

Please note a share link is unique for each file or directory. If you
run a link command on a file/dir that has already been shared, you
will get the exact same link.

### Compatibility ###

This backend has been actively tested using the [seafile docker
image](https://github.com/haiwen/seafile-docker) of these versions:

- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition

Versions below 6.0 are not supported.
Versions between 6.0 and 6.3 haven't been tested and might not work properly.
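### Passing the library password on the command line ###

As the configuration prompts above note, the password of an encrypted
library doesn't have to be stored in the config file - you can pass it
through the command line instead. A sketch of doing so, assuming a
POSIX shell and a remote named `seafile` pointing at the encrypted
library (the value must be obscured first):

    rclone ls seafile: --seafile-library-key $(rclone obscure "your-library-password")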
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/seafile/seafile.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to seafile (seafile).

#### --seafile-url

URL of seafile host to connect to

- Config: url
- Env Var: RCLONE_SEAFILE_URL
- Type: string
- Default: ""
- Examples:
    - "https://cloud.seafile.com/"
        - Connect to cloud.seafile.com

#### --seafile-user

User name (usually email address)

- Config: user
- Env Var: RCLONE_SEAFILE_USER
- Type: string
- Default: ""

#### --seafile-pass

Password

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

- Config: pass
- Env Var: RCLONE_SEAFILE_PASS
- Type: string
- Default: ""

#### --seafile-2fa

Two-factor authentication ('true' if the account has 2FA enabled)

- Config: 2fa
- Env Var: RCLONE_SEAFILE_2FA
- Type: bool
- Default: false

#### --seafile-library

Name of the library. Leave blank to access all non-encrypted libraries.

- Config: library
- Env Var: RCLONE_SEAFILE_LIBRARY
- Type: string
- Default: ""

#### --seafile-library-key

Library password (for encrypted libraries only). Leave blank if you pass it through the command line.

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

- Config: library_key
- Env Var: RCLONE_SEAFILE_LIBRARY_KEY
- Type: string
- Default: ""

#### --seafile-auth-token

Authentication token

- Config: auth_token
- Env Var: RCLONE_SEAFILE_AUTH_TOKEN
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to seafile (seafile).

#### --seafile-create-library

Should rclone create a library if it doesn't exist

- Config: create_library
- Env Var: RCLONE_SEAFILE_CREATE_LIBRARY
- Type: bool
- Default: false

#### --seafile-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_SEAFILE_ENCODING
- Type: MultiEncoder
- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8

{{< rem autogenerated options stop >}}
rclone-1.53.3/docs/content/sftp.md000066400000000000000000000301761375552240400170030ustar00rootroot00000000000000---
title: "SFTP"
description: "SFTP"
---

{{< icon "fa fa-server" >}} SFTP
----------------------------------------

SFTP is the [Secure (or SSH) File Transfer
Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).

The SFTP backend can be used with a number of different providers:

{{< provider_list >}}
{{< provider name="C14" home="https://www.online.net/en/storage/c14-cold-storage" config="/sftp/#c14">}}
{{< provider name="rsync.net" home="https://rsync.net/products/rclone.html" config="/sftp/#rsync-net">}}
{{< /provider_list >}}

SFTP runs over SSH v2 and is installed as standard with most modern
SSH installations.

Paths are specified as `remote:path`. If the path does not begin with
a `/` it is relative to the home directory of the user.  An empty path
`remote:` refers to the user's home directory.

Note that some SFTP servers will need the leading / - Synology is a
good example of this. rsync.net, on the other hand, requires users to
OMIT the leading /.

Here is an example of making an SFTP configuration.  First run

    rclone config

This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / SSH/SFTP Connection
   \ "sftp"
[snip]
Storage> sftp
SSH host to connect to
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "example.com"
host> example.com
SSH username, leave blank for current username, ncw
user> sftpuser
SSH port, leave blank to use default (22)
port>
SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> n
Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
key_file>
Remote config
--------------------
[remote]
host = example.com
user = sftpuser
port =
pass =
key_file =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This remote is called `remote` and can now be used like this:

See all directories in the home directory

    rclone lsd remote:

Make a new directory

    rclone mkdir remote:path/to/directory

List the contents of a directory

    rclone ls remote:path/to/directory

Sync `/home/local/directory` to the remote directory, deleting any
excess files in the directory.

    rclone sync -i /home/local/directory remote:directory

### SSH Authentication ###

The SFTP remote supports three authentication methods:

  * Password
  * Key file
  * ssh-agent

Key files should be PEM-encoded private key files. For instance
`/home/$USER/.ssh/id_rsa`. Only unencrypted OpenSSH or PEM encrypted
files are supported.

The key file can be specified in either an external file (key_file) or
contained within the rclone config file (key_pem).  If using key_pem in
the config file, the entry should be on a single line with literal new
line escapes ('\n' or '\r\n') separating the lines of the key. i.e.

    key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----

This will generate a correctly formatted value for key_pem for use in
the config:

    awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa

If you don't specify `pass`, `key_file`, or `key_pem` then rclone will
attempt to contact an ssh-agent.

You can also specify `key_use_agent` to force the usage of an
ssh-agent. In this case `key_file` or `key_pem` can also be specified
to force the usage of a specific key in the ssh-agent.

Using an ssh-agent is the only way to load encrypted OpenSSH keys at
the moment.

If you set the `--sftp-ask-password` option, rclone will prompt for a
password when needed and no password has been configured.

### ssh-agent on macOS ###

Note that there seem to be various problems with using an ssh-agent on
macOS due to recent changes in the OS. The most effective work-around
seems to be to start an ssh-agent in each session, eg

    eval `ssh-agent -s` && ssh-add -A

And then at the end of the session

    eval `ssh-agent -k`

These commands can be used in scripts of course.

### Modified time ###

Modified times are stored on the server to 1 second precision.

Modified times are used in syncing and are fully supported.

Some SFTP servers disable setting/modifying the file modification time
after upload (for example, certain configurations of ProFTPd with
mod_sftp). If you are using one of these servers, you can set the
option `set_modtime = false` in your rclone backend configuration to
disable this behaviour.

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sftp/sftp.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to sftp (SSH/SFTP Connection).
#### --sftp-host

SSH host to connect to

- Config: host
- Env Var: RCLONE_SFTP_HOST
- Type: string
- Default: ""
- Examples:
    - "example.com"
        - Connect to example.com

#### --sftp-user

SSH username, leave blank for current username, ncw

- Config: user
- Env Var: RCLONE_SFTP_USER
- Type: string
- Default: ""

#### --sftp-port

SSH port, leave blank to use default (22)

- Config: port
- Env Var: RCLONE_SFTP_PORT
- Type: string
- Default: ""

#### --sftp-pass

SSH password, leave blank to use ssh-agent.

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

- Config: pass
- Env Var: RCLONE_SFTP_PASS
- Type: string
- Default: ""

#### --sftp-key-pem

Raw PEM-encoded private key. If specified, this will override the
key_file parameter.

- Config: key_pem
- Env Var: RCLONE_SFTP_KEY_PEM
- Type: string
- Default: ""

#### --sftp-key-file

Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

- Config: key_file
- Env Var: RCLONE_SFTP_KEY_FILE
- Type: string
- Default: ""

#### --sftp-key-file-pass

The passphrase to decrypt the PEM-encoded private key file.

Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
in the new OpenSSH format can't be used.

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

- Config: key_file_pass
- Env Var: RCLONE_SFTP_KEY_FILE_PASS
- Type: string
- Default: ""

#### --sftp-key-use-agent

When set, this forces the usage of the ssh-agent.

When key-file is also set, the ".pub" file of the specified key-file is
read and only the associated key is requested from the ssh-agent. This
allows you to avoid `Too many authentication failures for *username*`
errors when the ssh-agent contains many keys.

- Config: key_use_agent
- Env Var: RCLONE_SFTP_KEY_USE_AGENT
- Type: bool
- Default: false

#### --sftp-use-insecure-cipher

Enable the use of insecure ciphers and key exchange methods.

This enables the use of the following insecure ciphers and key exchange methods:

- aes128-cbc
- aes192-cbc
- aes256-cbc
- 3des-cbc
- diffie-hellman-group-exchange-sha256
- diffie-hellman-group-exchange-sha1

Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.

- Config: use_insecure_cipher
- Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
- Type: bool
- Default: false
- Examples:
    - "false"
        - Use default Cipher list.
    - "true"
        - Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange.

#### --sftp-disable-hashcheck

Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.

- Config: disable_hashcheck
- Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
- Type: bool
- Default: false

### Advanced Options

Here are the advanced options specific to sftp (SSH/SFTP Connection).

#### --sftp-ask-password

Allow asking for SFTP password when needed.

If this is set and no password is supplied then rclone will:
- ask for a password
- not contact the ssh agent

- Config: ask_password
- Env Var: RCLONE_SFTP_ASK_PASSWORD
- Type: bool
- Default: false

#### --sftp-path-override

Override path used by SSH connection.

This allows checksum calculation when SFTP and SSH paths are
different. This issue affects among others Synology NAS boxes.
Shared folders can be found in directories representing volumes

    rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory

Home directory can be found in a shared folder called "home"

    rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory

- Config: path_override
- Env Var: RCLONE_SFTP_PATH_OVERRIDE
- Type: string
- Default: ""

#### --sftp-set-modtime

Set the modified time on the remote if set.

- Config: set_modtime
- Env Var: RCLONE_SFTP_SET_MODTIME
- Type: bool
- Default: true

#### --sftp-md5sum-command

The command used to read md5 hashes. Leave blank for autodetect.

- Config: md5sum_command
- Env Var: RCLONE_SFTP_MD5SUM_COMMAND
- Type: string
- Default: ""

#### --sftp-sha1sum-command

The command used to read sha1 hashes. Leave blank for autodetect.

- Config: sha1sum_command
- Env Var: RCLONE_SFTP_SHA1SUM_COMMAND
- Type: string
- Default: ""

#### --sftp-skip-links

Set to skip any symlinks and any other non regular files.

- Config: skip_links
- Env Var: RCLONE_SFTP_SKIP_LINKS
- Type: bool
- Default: false

#### --sftp-subsystem

Specifies the SSH2 subsystem on the remote host.

- Config: subsystem
- Env Var: RCLONE_SFTP_SUBSYSTEM
- Type: string
- Default: "sftp"

#### --sftp-server-command

Specifies the path or command to run an sftp server on the remote host.

The subsystem option is ignored when server_command is defined.

- Config: server_command
- Env Var: RCLONE_SFTP_SERVER_COMMAND
- Type: string
- Default: ""

{{< rem autogenerated options stop >}}

### Limitations ###

SFTP supports checksums if the same login has shell access and
`md5sum` or `sha1sum` as well as `echo` are in the remote's PATH.
This remote checksumming (file hashing) is recommended and enabled by
default. Disabling the checksumming may be required if you are
connecting to SFTP servers which are not under your control, and to
which the execution of remote commands is prohibited.  Set the
configuration option `disable_hashcheck` to `true` to disable
checksumming.

SFTP also supports `about` if the same login has shell access and
`df` is in the remote's PATH. `about` will return the total space,
free space, and used space on the remote for the disk of the specified
path on the remote or, if not set, the disk of the root on the remote.

`about` will fail if it does not have shell access or if `df` is not
in the remote's PATH.

Note that on some SFTP servers (eg Synology) the paths are different
for SSH and SFTP so the hashes can't be calculated properly. For them
using `disable_hashcheck` is a good idea.

The only ssh agent supported under Windows is Putty's pageant.

The Go SSH library disables the use of the aes128-cbc cipher by
default, due to security concerns. This can be re-enabled on a
per-connection basis by setting the `use_insecure_cipher` setting in
the configuration file to `true`. Further details on the insecurity of
this cipher can be found
[in this paper](http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).

SFTP isn't supported under plan9 until
[this issue](https://github.com/pkg/sftp/issues/156) is fixed.

Note that since SFTP isn't HTTP based the following flags don't work
with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`

Note that `--timeout` isn't supported (but `--contimeout` is).

## C14 {#c14}

C14 is supported through the SFTP backend.

See [C14's documentation](https://www.online.net/en/storage/c14-cold-storage)

## rsync.net {#rsync-net}

rsync.net is supported through the SFTP backend.
See [rsync.net's documentation of rclone examples](https://www.rsync.net/products/rclone.html).
rclone-1.53.3/docs/content/sharefile.md000066400000000000000000000154771375552240400200000ustar00rootroot00000000000000---
title: "Citrix ShareFile"
description: "Rclone docs for Citrix ShareFile"
---

## {{< icon "fas fa-share-square" >}} Citrix ShareFile

[Citrix ShareFile](https://sharefile.com) is a secure file sharing and
transfer service aimed at business.

The initial setup for Citrix ShareFile involves getting a token from
Citrix ShareFile which you can do in your browser. `rclone config`
walks you through it.

Here is an example of how to make a remote called `remote`.  First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
XX / Citrix Sharefile
   \ "sharefile"
Storage> sharefile
** See help for sharefile backend at: https://rclone.org/sharefile/ **

ID of the root folder

Leave blank to access "Personal Folders".  You can use one of the
standard values here or any folder ID (long hex number ID).
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Access the Personal Folders. (Default)
   \ ""
 2 / Access the Favorites folder.
   \ "favorites"
 3 / Access all the shared folders.
   \ "allshared"
 4 / Access all the individual connectors.
   \ "connectors"
 5 / Access the home, favorites, and shared folders as well as the connectors.
   \ "top"
root_folder_id>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = sharefile
endpoint = https://XXX.sharefile.com
token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2019-09-30T19:41:45.878561877+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from Citrix ShareFile. This only runs from the moment
it opens your browser to the moment you get back the verification
code.  This is on `http://127.0.0.1:53682/` and it may require you to
unblock it temporarily if you are running a host firewall.

Once configured you can then use `rclone` like this,

List directories in the top level of your ShareFile

    rclone lsd remote:

List all the files in your ShareFile

    rclone ls remote:

To copy a local directory to a ShareFile directory called backup

    rclone copy /home/source remote:backup

Paths may be as deep as required, eg `remote:directory/subdirectory`.

### Modified time and hashes ###

ShareFile allows modification times to be set on objects accurate to 1
second.  These will be used to detect whether objects need syncing or
not.

ShareFile supports MD5 type hashes, so you can use the `--checksum`
flag.

### Transfers ###

For files above 128MB rclone will use a chunked transfer.
Rclone will upload up to `--transfers` chunks at the same time (shared
among all the multipart uploads).  Chunks are buffered in memory and
are normally 64MB, so increasing `--transfers` will increase memory
use.

### Limitations ###

Note that ShareFile is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".

ShareFile only supports filenames up to 256 characters in length.

#### Restricted filename characters

In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \\        | 0x5C  | ＼          |
| *         | 0x2A  | ＊          |
| <         | 0x3C  | ＜          |
| >         | 0x3E  | ＞          |
| ?         | 0x3F  | ？          |
| :         | 0x3A  | ：          |
| \|        | 0x7C  | ｜          |
| "         | 0x22  | ＂          |

File names can also not start or end with the following characters.
These only get replaced if they are the first or last character in the
name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |
| .         | 0x2E  | ．          |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sharefile/sharefile.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to sharefile (Citrix Sharefile).

#### --sharefile-root-folder-id

ID of the root folder

Leave blank to access "Personal Folders".  You can use one of the
standard values here or any folder ID (long hex number ID).

- Config: root_folder_id
- Env Var: RCLONE_SHAREFILE_ROOT_FOLDER_ID
- Type: string
- Default: ""
- Examples:
    - ""
        - Access the Personal Folders. (Default)
    - "favorites"
        - Access the Favorites folder.
    - "allshared"
        - Access all the shared folders.
    - "connectors"
        - Access all the individual connectors.
    - "top"
        - Access the home, favorites, and shared folders as well as the connectors.

### Advanced Options

Here are the advanced options specific to sharefile (Citrix Sharefile).

#### --sharefile-upload-cutoff

Cutoff for switching to multipart upload.

- Config: upload_cutoff
- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 128M

#### --sharefile-chunk-size

Upload chunk size. Must be a power of 2 >= 256k.

Making this larger will improve performance, but note that each chunk
is buffered in memory, one per transfer.

Reducing this will reduce memory usage but decrease performance.

- Config: chunk_size
- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 64M

#### --sharefile-endpoint

Endpoint for API calls.

This is usually auto discovered as part of the oauth process, but can
be set manually to something like: https://XXX.sharefile.com

- Config: endpoint
- Env Var: RCLONE_SHAREFILE_ENDPOINT
- Type: string
- Default: ""

#### --sharefile-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_SHAREFILE_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot

{{< rem autogenerated options stop >}}
rclone-1.53.3/docs/content/sugarsync.md000066400000000000000000000141661375552240400200460ustar00rootroot00000000000000---
title: "SugarSync"
description: "Rclone docs for SugarSync"
---

{{< icon "fas fa-dove" >}} SugarSync
-----------------------------------------

[SugarSync](https://sugarsync.com) is a cloud service that enables
active synchronization of files across computers and other devices for
file backup, access, syncing, and sharing.

The initial setup for SugarSync involves getting a token from SugarSync
which you can do with rclone. `rclone config` walks you through it.

Here is an example of how to make a remote called `remote`.  First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Sugarsync
   \ "sugarsync"
[snip]
Storage> sugarsync
** See help for sugarsync backend at: https://rclone.org/sugarsync/ **

Sugarsync App ID.
Leave blank to use rclone's.
Enter a string value. Press Enter for the default ("").
app_id>
Sugarsync Access Key ID.
Leave blank to use rclone's.
Enter a string value. Press Enter for the default ("").
access_key_id>
Sugarsync Private Access Key
Leave blank to use rclone's.
Enter a string value. Press Enter for the default ("").
private_access_key>
Permanently delete files if true
otherwise put them in the deleted files.
Enter a boolean value (true or false). Press Enter for the default ("false").
hard_delete>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
Username (email address)> nick@craig-wood.com
Your Sugarsync password is only required during setup and will not be stored.
password:
--------------------
[remote]
type = sugarsync
refresh_token = https://api.sugarsync.com/app-authorization/XXXXXXXXXXXXXXXXXX
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

Note that the config asks for your email and password but doesn't
store them; it only uses them to get the initial token.

Once configured you can then use `rclone` like this,

List directories (sync folders) in the top level of your SugarSync

    rclone lsd remote:

List all the files in your SugarSync folder "Test"

    rclone ls remote:Test

To copy a local directory to a SugarSync folder called backup

    rclone copy /home/source remote:backup

Paths are specified as `remote:path`

Paths may be as deep as required, eg `remote:directory/subdirectory`.

**NB** you can't create files in the top level folder; you have to
create a folder, which rclone will create as a "Sync Folder" with
SugarSync.

### Modified time and hashes ###

SugarSync does not support modification times or hashes, therefore
syncing will default to `--size-only` checking.  Note that using
`--update` will work as rclone can read the time files were uploaded
(there is a sketch of this below).

#### Restricted filename characters

SugarSync replaces the [default restricted characters
set](/overview/#restricted-characters) except for DEL.

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in XML strings.
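As mentioned under Modified time and hashes above, a plain sync can
only compare file sizes. If upload times are good enough for your use
case, a sketch of combining them with the global `--update` flag,
assuming a remote named `remote`:

    rclone copy --update /home/source remote:backup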
### Deleting files ### Deleted files will be moved to the "Deleted items" folder by default. However you can supply the flag `--sugarsync-hard-delete` or set the config parameter `hard_delete = true` if you would like files to be deleted straight away. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sugarsync/sugarsync.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to sugarsync (Sugarsync). #### --sugarsync-app-id Sugarsync App ID. Leave blank to use rclone's. - Config: app_id - Env Var: RCLONE_SUGARSYNC_APP_ID - Type: string - Default: "" #### --sugarsync-access-key-id Sugarsync Access Key ID. Leave blank to use rclone's. - Config: access_key_id - Env Var: RCLONE_SUGARSYNC_ACCESS_KEY_ID - Type: string - Default: "" #### --sugarsync-private-access-key Sugarsync Private Access Key Leave blank to use rclone's. - Config: private_access_key - Env Var: RCLONE_SUGARSYNC_PRIVATE_ACCESS_KEY - Type: string - Default: "" #### --sugarsync-hard-delete Permanently delete files if true otherwise put them in the deleted files. - Config: hard_delete - Env Var: RCLONE_SUGARSYNC_HARD_DELETE - Type: bool - Default: false ### Advanced Options Here are the advanced options specific to sugarsync (Sugarsync). #### --sugarsync-refresh-token Sugarsync refresh token Leave blank normally, will be auto configured by rclone. - Config: refresh_token - Env Var: RCLONE_SUGARSYNC_REFRESH_TOKEN - Type: string - Default: "" #### --sugarsync-authorization Sugarsync authorization Leave blank normally, will be auto configured by rclone. - Config: authorization - Env Var: RCLONE_SUGARSYNC_AUTHORIZATION - Type: string - Default: "" #### --sugarsync-authorization-expiry Sugarsync authorization expiry Leave blank normally, will be auto configured by rclone. - Config: authorization_expiry - Env Var: RCLONE_SUGARSYNC_AUTHORIZATION_EXPIRY - Type: string - Default: "" #### --sugarsync-user Sugarsync user Leave blank normally, will be auto configured by rclone. - Config: user - Env Var: RCLONE_SUGARSYNC_USER - Type: string - Default: "" #### --sugarsync-root-id Sugarsync root id Leave blank normally, will be auto configured by rclone. - Config: root_id - Env Var: RCLONE_SUGARSYNC_ROOT_ID - Type: string - Default: "" #### --sugarsync-deleted-id Sugarsync deleted folder id Leave blank normally, will be auto configured by rclone. - Config: deleted_id - Env Var: RCLONE_SUGARSYNC_DELETED_ID - Type: string - Default: "" #### --sugarsync-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_SUGARSYNC_ENCODING - Type: MultiEncoder - Default: Slash,Ctl,InvalidUtf8,Dot {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/swift.md000066400000000000000000000346311375552240400171630ustar00rootroot00000000000000--- title: "Swift" description: "Swift" --- {{< icon "fa fa-space-shuttle" >}}Swift ---------------------------------------- Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/). 
Commercial implementations include: * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) * [Memset Memstore](https://www.memset.com/cloud/storage/) * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/) * [Oracle Cloud Storage](https://cloud.oracle.com/storage-opc) * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) Paths are specified as `remote:container` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. Here is an example of making a swift configuration. First run rclone config This will guide you through an interactive setup process. ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" [snip] Storage> swift Get swift credentials from environment variables in standard OpenStack form. Choose a number from below, or type in your own value 1 / Enter swift credentials in the next step \ "false" 2 / Get swift credentials from environment vars. Leave other fields blank if using this. \ "true" env_auth> true User name to log in (OS_USERNAME). user> API key or password (OS_PASSWORD). key> Authentication URL for server (OS_AUTH_URL). Choose a number from below, or type in your own value 1 / Rackspace US \ "https://auth.api.rackspacecloud.com/v1.0" 2 / Rackspace UK \ "https://lon.auth.api.rackspacecloud.com/v1.0" 3 / Rackspace v2 \ "https://identity.api.rackspacecloud.com/v2.0" 4 / Memset Memstore UK \ "https://auth.storage.memset.com/v1.0" 5 / Memset Memstore UK v2 \ "https://auth.storage.memset.com/v2.0" 6 / OVH \ "https://auth.cloud.ovh.net/v3" auth> User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). user_id> User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) domain> Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) tenant> Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) tenant_id> Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) tenant_domain> Region name - optional (OS_REGION_NAME) region> Storage URL - optional (OS_STORAGE_URL) storage_url> Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) auth_token> AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) auth_version> Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) Choose a number from below, or type in your own value 1 / Public (default, choose this if not sure) \ "public" 2 / Internal (use internal service net) \ "internal" 3 / Admin \ "admin" endpoint_type> Remote config -------------------- [remote] env_auth = true user = key = auth = user_id = domain = tenant = tenant_id = tenant_domain = region = storage_url = auth_token = auth_version = endpoint_type = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` This remote is called `remote` and can now be used like this See all containers rclone lsd remote: Make a new container rclone mkdir remote:container List the contents of a container rclone ls remote:container Sync `/home/local/directory` to the remote container, deleting any excess files in the container.
rclone sync -i /home/local/directory remote:container ### Configuration from an OpenStack credentials file ### An OpenStack credentials file typically looks something like this (without the comments) ``` export OS_AUTH_URL=https://a.provider.net/v2.0 export OS_TENANT_ID=ffffffffffffffffffffffffffffffff export OS_TENANT_NAME="1234567890123456" export OS_USERNAME="123abc567xy" echo "Please enter your OpenStack Password: " read -sr OS_PASSWORD_INPUT export OS_PASSWORD=$OS_PASSWORD_INPUT export OS_REGION_NAME="SBG1" if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi ``` The config file needs to look something like this where `$OS_USERNAME` represents the value of the `OS_USERNAME` variable - `123abc567xy` in the example above. ``` [remote] type = swift user = $OS_USERNAME key = $OS_PASSWORD auth = $OS_AUTH_URL tenant = $OS_TENANT_NAME ``` Note that you may (or may not) need to set `region` too - try without first. ### Configuration from the environment ### If you prefer you can configure rclone to use swift using a standard set of OpenStack environment variables. When you run through the config, make sure you choose `true` for `env_auth` and leave everything else blank. rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is [a list of the variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment) in the docs for the swift library. ### Using an alternate authentication method ### If your OpenStack installation uses a non-standard authentication method that might not yet be supported by rclone or the underlying swift library, you can authenticate externally (e.g. by manually calling the `openstack` commands to get a token). Then, you just need to pass the two configuration variables ``auth_token`` and ``storage_url``. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation. #### Using rclone without a config file #### You can use rclone with swift without a config file, if desired, like this: ``` source openstack-credentials-file export RCLONE_CONFIG_MYREMOTE_TYPE=swift export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true rclone lsd myremote: ``` ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](/docs/#fast-list) for more details. ### --update and --use-server-modtime ### As noted below, the modified time is stored as metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata. For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using `--update` along with `--use-server-modtime`, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/swift/swift.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). #### --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- Config: env_auth - Env Var: RCLONE_SWIFT_ENV_AUTH - Type: bool - Default: false - Examples: - "false" - Enter swift credentials in the next step - "true" - Get swift credentials from environment vars. Leave other fields blank if using this. #### --swift-user User name to log in (OS_USERNAME). - Config: user - Env Var: RCLONE_SWIFT_USER - Type: string - Default: "" #### --swift-key API key or password (OS_PASSWORD). - Config: key - Env Var: RCLONE_SWIFT_KEY - Type: string - Default: "" #### --swift-auth Authentication URL for server (OS_AUTH_URL). - Config: auth - Env Var: RCLONE_SWIFT_AUTH - Type: string - Default: "" - Examples: - "https://auth.api.rackspacecloud.com/v1.0" - Rackspace US - "https://lon.auth.api.rackspacecloud.com/v1.0" - Rackspace UK - "https://identity.api.rackspacecloud.com/v2.0" - Rackspace v2 - "https://auth.storage.memset.com/v1.0" - Memset Memstore UK - "https://auth.storage.memset.com/v2.0" - Memset Memstore UK v2 - "https://auth.cloud.ovh.net/v3" - OVH #### --swift-user-id User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - Config: user_id - Env Var: RCLONE_SWIFT_USER_ID - Type: string - Default: "" #### --swift-domain User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - Config: domain - Env Var: RCLONE_SWIFT_DOMAIN - Type: string - Default: "" #### --swift-tenant Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - Config: tenant - Env Var: RCLONE_SWIFT_TENANT - Type: string - Default: "" #### --swift-tenant-id Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - Config: tenant_id - Env Var: RCLONE_SWIFT_TENANT_ID - Type: string - Default: "" #### --swift-tenant-domain Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - Config: tenant_domain - Env Var: RCLONE_SWIFT_TENANT_DOMAIN - Type: string - Default: "" #### --swift-region Region name - optional (OS_REGION_NAME) - Config: region - Env Var: RCLONE_SWIFT_REGION - Type: string - Default: "" #### --swift-storage-url Storage URL - optional (OS_STORAGE_URL) - Config: storage_url - Env Var: RCLONE_SWIFT_STORAGE_URL - Type: string - Default: "" #### --swift-auth-token Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - Config: auth_token - Env Var: RCLONE_SWIFT_AUTH_TOKEN - Type: string - Default: "" #### --swift-application-credential-id Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) - Config: application_credential_id - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID - Type: string - Default: "" #### --swift-application-credential-name Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) - Config: application_credential_name - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME - Type: string - Default: "" #### --swift-application-credential-secret Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) - Config: application_credential_secret - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET - Type: string - Default: "" #### --swift-auth-version AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - Config: auth_version - Env Var: RCLONE_SWIFT_AUTH_VERSION - Type: int - Default: 0 #### --swift-endpoint-type Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) - Config: endpoint_type - Env Var: RCLONE_SWIFT_ENDPOINT_TYPE - Type: string - Default: "public" - Examples: - "public" - Public (default, choose this if not sure) - "internal" - Internal (use internal 
service net) - "admin" - Admin #### --swift-storage-policy The storage policy to use when creating a new container This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider. - Config: storage_policy - Env Var: RCLONE_SWIFT_STORAGE_POLICY - Type: string - Default: "" - Examples: - "" - Default - "pcs" - OVH Public Cloud Storage - "pca" - OVH Public Cloud Archive ### Advanced Options Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). #### --swift-chunk-size Above this size files will be chunked into a _segments container. Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value. - Config: chunk_size - Env Var: RCLONE_SWIFT_CHUNK_SIZE - Type: SizeSuffix - Default: 5G #### --swift-no-chunk Don't chunk files during streaming upload. When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM. Rclone will still chunk files bigger than chunk_size when doing normal copy operations. - Config: no_chunk - Env Var: RCLONE_SWIFT_NO_CHUNK - Type: bool - Default: false #### --swift-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_SWIFT_ENCODING - Type: MultiEncoder - Default: Slash,InvalidUtf8 {{< rem autogenerated options stop >}} ### Modified time ### The modified time is stored as metadata on the object as `X-Object-Meta-Mtime` as floating point since the epoch accurate to 1 ns. This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object. ### Restricted filename characters | Character | Value | Replacement | | --------- |:-----:|:-----------:| | NUL | 0x00 | ␀ | | / | 0x2F | / | Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. ### Limitations ### The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these. ### Troubleshooting ### #### Rclone gives Failed to create file system for "remote:": Bad Request #### Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift. So this most likely means your username / password is wrong. You can investigate further with the `--dump-bodies` flag. This may also be caused by specifying the region when you shouldn't have (eg OVH). #### Rclone gives Failed to create file system: Response didn't have storage url and auth token #### This is most likely caused by forgetting to specify your tenant when setting up a swift remote. rclone-1.53.3/docs/content/tardigrade.md000066400000000000000000000207701375552240400201340ustar00rootroot00000000000000--- title: "Tardigrade" description: "Rclone docs for Tardigrade" --- {{< icon "fas fa-dove" >}} Tardigrade ----------------------------------------- [Tardigrade](https://tardigrade.io) is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner. 
## Setup To make a new Tardigrade configuration you need one of the following: * Access Grant that someone else shared with you. * [API Key](https://documentation.tardigrade.io/getting-started/uploading-your-first-object/create-an-api-key) of a Tardigrade project you are a member of. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ### Setup with access grant ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Tardigrade Decentralized Cloud Storage \ "tardigrade" [snip] Storage> tardigrade ** See help for tardigrade backend at: https://rclone.org/tardigrade/ ** Choose an authentication method. Enter a string value. Press Enter for the default ("existing"). Choose a number from below, or type in your own value 1 / Use an existing access grant. \ "existing" 2 / Create a new access grant from satellite address, API key, and passphrase. \ "new" provider> existing Access Grant. Enter a string value. Press Enter for the default (""). access_grant> your-access-grant-received-by-someone-else Remote config -------------------- [remote] type = tardigrade access_grant = your-access-grant-received-by-someone-else -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y ``` ### Setup with API key and passphrase ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] XX / Tardigrade Decentralized Cloud Storage \ "tardigrade" [snip] Storage> tardigrade ** See help for tardigrade backend at: https://rclone.org/tardigrade/ ** Choose an authentication method. Enter a string value. Press Enter for the default ("existing"). Choose a number from below, or type in your own value 1 / Use an existing access grant. \ "existing" 2 / Create a new access grant from satellite address, API key, and passphrase. \ "new" provider> new Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
Enter a string value. Press Enter for the default ("us-central-1.tardigrade.io"). Choose a number from below, or type in your own value 1 / US Central 1 \ "us-central-1.tardigrade.io" 2 / Europe West 1 \ "europe-west-1.tardigrade.io" 3 / Asia East 1 \ "asia-east-1.tardigrade.io" satellite_address> 1 API Key. Enter a string value. Press Enter for the default (""). api_key> your-api-key-for-your-tardigrade-project Encryption Passphrase. To access existing objects enter passphrase used for uploading. Enter a string value. Press Enter for the default (""). passphrase> your-human-readable-encryption-passphrase Remote config -------------------- [remote] type = tardigrade satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777 api_key = your-api-key-for-your-tardigrade-project passphrase = your-human-readable-encryption-passphrase access_grant = the-access-grant-generated-from-the-api-key-and-passphrase -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y ``` ## Usage Paths are specified as `remote:bucket` (or `remote:` for the `lsf` command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. Once configured you can then use `rclone` like this. ### Create a new bucket Use the `mkdir` command to create a new bucket, e.g. `bucket`. rclone mkdir remote:bucket ### List all buckets Use the `lsf` command to list all buckets. rclone lsf remote: Note the colon (`:`) character at the end of the command line. ### Delete a bucket Use the `rmdir` command to delete an empty bucket. rclone rmdir remote:bucket Use the `purge` command to delete a non-empty bucket with all its content. rclone purge remote:bucket ### Upload objects Use the `copy` command to upload an object. rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/ The `--progress` flag is for displaying progress information. Remove it if you don't need this information. Use a folder in the local path to upload all its objects. rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/ Only modified files will be copied. ### List objects Use the `ls` command to list recursively all objects in a bucket. rclone ls remote:bucket Add the folder to the remote path to list recursively all objects in this folder. rclone ls remote:bucket/path/to/dir/ Use the `lsf` command to list non-recursively all objects in a bucket or a folder. rclone lsf remote:bucket/path/to/dir/ ### Download objects Use the `copy` command to download an object. rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/ The `--progress` flag is for displaying progress information. Remove it if you don't need this information. Use a folder in the remote path to download all its objects. rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/ ### Delete objects Use the `deletefile` command to delete a single object. rclone deletefile remote:bucket/path/to/dir/file.ext Use the `delete` command to delete all objects in a folder. rclone delete remote:bucket/path/to/dir/ ### Print the total size of objects Use the `size` command to print the total size of objects in a bucket or a folder. rclone size remote:bucket/path/to/dir/ ### Sync two Locations Use the `sync` command to sync the source to the destination, changing the destination only, deleting any excess files. rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/ The `--progress` flag is for displaying progress information.
Remove it if you don't need this information. Since this can cause data loss, test first with the `--dry-run` flag to see exactly what would be copied and deleted. The sync can also be done from Tardigrade to the local file system. rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/ Or between two Tardigrade buckets. rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/ Or even between another cloud storage and Tardigrade. rclone sync -i --progress s3:bucket/path/to/dir/ tardigrade:bucket/path/to/dir/ {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/tardigrade/tardigrade.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage). #### --tardigrade-provider Choose an authentication method. - Config: provider - Env Var: RCLONE_TARDIGRADE_PROVIDER - Type: string - Default: "existing" - Examples: - "existing" - Use an existing access grant. - "new" - Create a new access grant from satellite address, API key, and passphrase. #### --tardigrade-access-grant Access Grant. - Config: access_grant - Env Var: RCLONE_TARDIGRADE_ACCESS_GRANT - Type: string - Default: "" #### --tardigrade-satellite-address Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
- Config: satellite_address - Env Var: RCLONE_TARDIGRADE_SATELLITE_ADDRESS - Type: string - Default: "us-central-1.tardigrade.io" - Examples: - "us-central-1.tardigrade.io" - US Central 1 - "europe-west-1.tardigrade.io" - Europe West 1 - "asia-east-1.tardigrade.io" - Asia East 1 #### --tardigrade-api-key API Key. - Config: api_key - Env Var: RCLONE_TARDIGRADE_API_KEY - Type: string - Default: "" #### --tardigrade-passphrase Encryption Passphrase. To access existing objects enter passphrase used for uploading. - Config: passphrase - Env Var: RCLONE_TARDIGRADE_PASSPHRASE - Type: string - Default: "" {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/union.md000066400000000000000000000234351375552240400171570ustar00rootroot00000000000000--- title: "Union" description: "Remote Unification" --- {{< icon "fa fa-link" >}} Union ----------------------------------------- The `union` remote provides a unification similar to UnionFS using other remotes. Paths may be as deep as required or a local path, eg `remote:directory/subdirectory` or `/directory/subdirectory`. During the initial setup with `rclone config` you will specify the upstream remotes as a space separated list. The upstream remotes can be either local paths or other remotes. Attributes `:ro` and `:nc` can be attached to the end of a path to tag the remote as **read only** or **no create**, eg `remote:directory/subdirectory:ro` or `remote:directory/subdirectory:nc`. Subfolders can be used in upstream remotes. Assume a union remote named `backup` with the remotes `mydrive:private/backup mydrive2:/backup`. Invoking `rclone mkdir backup:desktop` is exactly the same as invoking `rclone mkdir mydrive2:/backup/desktop`. There will be no special handling of paths containing `..` segments. Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking `rclone mkdir mydrive2:/backup/../desktop`. ### Behavior / Policies The behavior of the union backend is inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs). All functions are grouped into 3 categories: **action**, **create** and **search**. These functions and categories can be assigned a policy which dictates what file or directory is chosen when performing that behavior. Any policy can be assigned to a function or category though some may not be very useful in practice. For instance: **rand** (random) may be useful for file creation (create) but could lead to very odd behavior if used for `delete` when there is more than one copy of the file. #### Function / Category classifications | Category | Description | Functions | |----------|--------------------------|-------------------------------------------------------------------------------------| | action | Writing Existing file | move, rmdir, rmdirs, delete, purge and copy, sync (as destination when file exists) | | create | Create non-existing file | copy, sync (as destination when file does not exist) | | search | Reading and listing file | ls, lsd, lsl, cat, md5sum, sha1sum and copy, sync (as source) | | N/A | | size, about | #### Path Preservation Policies, as described below, are of two basic types: `path preserving` and `non-path preserving`. All policies which start with `ep` (**epff**, **eplfs**, **eplus**, **epmfs**, **eprand**) are `path preserving`. `ep` stands for `existing path`. A path preserving policy will only consider upstreams where the relative path being accessed already exists. When using non-path preserving policies paths will be created in target upstreams as necessary.
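To make the distinction concrete, here is a hypothetical union configuration; the remote name and upstream paths are invented for this sketch:

```
[backup]
type = union
upstreams = /mnt/disk1 /mnt/disk2 archive:old:ro
create_policy = epmfs
action_policy = epall
search_policy = ff
```

With the path preserving `epmfs` create policy, a new file is only written to an upstream where its relative path already exists; switching `create_policy` to the non-path preserving `mfs` would instead create any missing directories on whichever upstream has the most free space. The `:ro` tag keeps `archive:old` out of consideration for create and action functions entirely, as described under Filters below.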
#### Quota Relevant Policies Some policies rely on quota information. These policies should be used only if your upstreams support the respective quota fields. | Policy | Required Field | |------------|----------------| | lfs, eplfs | Free | | mfs, epmfs | Free | | lus, eplus | Used | | lno, eplno | Objects | To check if your upstream supports the field, run `rclone about remote: [flags]` and see if the required field exists. #### Filters Policies basically search upstream remotes and create a list of files / paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting but filtering is mostly uniform as described below. * No **search** policies filter. * All **action** policies will filter out remotes which are tagged as **read-only**. * All **create** policies will filter out remotes which are tagged **read-only** or **no-create**. If all remotes are filtered an error will be returned. #### Policy descriptions The policy definitions are inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs) but are not exactly the same. Some policy definitions may differ due to the much larger latency of remote file systems. | Policy | Description | |------------------|------------------------------------------------------------| | all | Search category: same as **epall**. Action category: same as **epall**. Create category: act on all upstreams. | | epall (existing path, all) | Search category: Given this order configured, act on the first one found where the relative path exists. Action category: apply to all found. Create category: act on all upstreams where the relative path exists. | | epff (existing path, first found) | Act on the first one found, by the time upstreams reply, where the relative path exists. | | eplfs (existing path, least free space) | Of all the upstreams on which the relative path exists choose the one with the least free space. | | eplus (existing path, least used space) | Of all the upstreams on which the relative path exists choose the one with the least used space. | | eplno (existing path, least number of objects) | Of all the upstreams on which the relative path exists choose the one with the least number of objects. | | epmfs (existing path, most free space) | Of all the upstreams on which the relative path exists choose the one with the most free space. | | eprand (existing path, random) | Calls **epall** and then randomizes. Returns only one upstream. | | ff (first found) | Search category: same as **epff**. Action category: same as **epff**. Create category: Act on the first one found by the time upstreams reply. | | lfs (least free space) | Search category: same as **eplfs**. Action category: same as **eplfs**. Create category: Pick the upstream with the least available free space. | | lus (least used space) | Search category: same as **eplus**. Action category: same as **eplus**. Create category: Pick the upstream with the least used space. | | lno (least number of objects) | Search category: same as **eplno**. Action category: same as **eplno**. Create category: Pick the upstream with the least number of objects. | | mfs (most free space) | Search category: same as **epmfs**. Action category: same as **epmfs**. Create category: Pick the upstream with the most available free space. | | newest | Pick the file / directory with the largest mtime. | | rand (random) | Calls **all** and then randomizes. Returns only one upstream.
| ### Setup Here is an example of how to make a union called `remote` for local folders. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Union merges the contents of several remotes \ "union" [snip] Storage> union List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc. Enter a string value. Press Enter for the default (""). upstreams> Policy to choose upstream on ACTION class. Enter a string value. Press Enter for the default ("epall"). action_policy> Policy to choose upstream on CREATE class. Enter a string value. Press Enter for the default ("epmfs"). create_policy> Policy to choose upstream on SEARCH class. Enter a string value. Press Enter for the default ("ff"). search_policy> Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. Enter a signed integer. Press Enter for the default ("120"). cache_time> Remote config -------------------- [remote] type = union upstreams = C:\dir1 C:\dir2 C:\dir3 -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Current remotes: Name Type ==== ==== remote union e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> q ``` Once configured you can then use `rclone` like this, List directories in top level in `C:\dir1`, `C:\dir2` and `C:\dir3` rclone lsd remote: List all the files in `C:\dir1`, `C:\dir2` and `C:\dir3` rclone ls remote: Copy another local directory to the union directory called source, which will be placed into `C:\dir3` rclone copy C:\source remote:source {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/union/union.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to union (Union merges the contents of several upstream fs). #### --union-upstreams List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc. - Config: upstreams - Env Var: RCLONE_UNION_UPSTREAMS - Type: string - Default: "" #### --union-action-policy Policy to choose upstream on ACTION category. - Config: action_policy - Env Var: RCLONE_UNION_ACTION_POLICY - Type: string - Default: "epall" #### --union-create-policy Policy to choose upstream on CREATE category. - Config: create_policy - Env Var: RCLONE_UNION_CREATE_POLICY - Type: string - Default: "epmfs" #### --union-search-policy Policy to choose upstream on SEARCH category. - Config: search_policy - Env Var: RCLONE_UNION_SEARCH_POLICY - Type: string - Default: "ff" #### --union-cache-time Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. - Config: cache_time - Env Var: RCLONE_UNION_CACHE_TIME - Type: int - Default: 120 {{< rem autogenerated options stop >}} rclone-1.53.3/docs/content/webdav.md000066400000000000000000000227071375552240400173000ustar00rootroot00000000000000--- title: "WebDAV" description: "Rclone docs for WebDAV" --- {{< icon "fa fa-globe" >}} WebDAV ----------------------------------------- Paths are specified as `remote:path` Paths may be as deep as required, eg `remote:directory/subdirectory`. 
To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features. Here is an example of how to make a remote called `remote`. First run: rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Webdav \ "webdav" [snip] Storage> webdav URL of http host to connect to Choose a number from below, or type in your own value 1 / Connect to example.com \ "https://example.com" url> https://example.com/remote.php/webdav/ Name of the Webdav site/service/software you are using Choose a number from below, or type in your own value 1 / Nextcloud \ "nextcloud" 2 / Owncloud \ "owncloud" 3 / Sharepoint \ "sharepoint" 4 / Other site/service or software \ "other" vendor> 1 User name user> user Password. y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> y Enter the password: password: Confirm the password: password: Bearer token instead of user/pass (eg a Macaroon) bearer_token> Remote config -------------------- [remote] type = webdav url = https://example.com/remote.php/webdav/ vendor = nextcloud user = user pass = *** ENCRYPTED *** bearer_token = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` Once configured you can then use `rclone` like this, List directories in top level of your WebDAV rclone lsd remote: List all the files in your WebDAV rclone ls remote: To copy a local directory to a WebDAV directory called backup rclone copy /home/source remote:backup ### Modified time and hashes ### Plain WebDAV does not support modified times. However, when used with Owncloud or Nextcloud rclone will support modified times. Likewise plain WebDAV does not support hashes; however, when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/webdav/webdav.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to webdav (Webdav). #### --webdav-url URL of http host to connect to - Config: url - Env Var: RCLONE_WEBDAV_URL - Type: string - Default: "" - Examples: - "https://example.com" - Connect to example.com #### --webdav-vendor Name of the Webdav site/service/software you are using - Config: vendor - Env Var: RCLONE_WEBDAV_VENDOR - Type: string - Default: "" - Examples: - "nextcloud" - Nextcloud - "owncloud" - Owncloud - "sharepoint" - Sharepoint - "other" - Other site/service or software #### --webdav-user User name - Config: user - Env Var: RCLONE_WEBDAV_USER - Type: string - Default: "" #### --webdav-pass Password. **NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/). - Config: pass - Env Var: RCLONE_WEBDAV_PASS - Type: string - Default: "" #### --webdav-bearer-token Bearer token instead of user/pass (eg a Macaroon) - Config: bearer_token - Env Var: RCLONE_WEBDAV_BEARER_TOKEN - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to webdav (Webdav).
#### --webdav-bearer-token-command Command to run to get a bearer token - Config: bearer_token_command - Env Var: RCLONE_WEBDAV_BEARER_TOKEN_COMMAND - Type: string - Default: "" {{< rem autogenerated options stop >}} ## Provider notes ## See below for notes on specific providers. ### Owncloud ### Click on the settings cog in the bottom right of the page and this will show the WebDAV URL that rclone needs in the config step. It will look something like `https://example.com/remote.php/webdav/`. Owncloud supports modified times using the `X-OC-Mtime` header. ### Nextcloud ### This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (`rcat`) whereas Owncloud does. This [may be fixed](https://github.com/nextcloud/nextcloud-snap/issues/365) in the future. ### Sharepoint ### Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education accounts. This feature is only needed for a few of these accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner [github#1975](https://github.com/rclone/rclone/issues/1975). This means that these accounts can't be added using the official API (other accounts should work with the "onedrive" option). However, it is possible to access them using WebDAV. To use a sharepoint remote with rclone, add it like this: First, you need to get your remote's URL: - Go [here](https://onedrive.live.com/about/en-us/signin/) to open your OneDrive or to sign in - Now take a look at your address bar, the URL should look like this: `https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx` You'll only need this URL up to the email address. After that, you'll most likely want to add "/Documents". That subdirectory contains the actual data stored on your OneDrive. Add the remote to rclone like this: Configure the `url` as `https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents` and use your normal account email and password for `user` and `pass`. If you have 2FA enabled, you have to generate an app password. Set the `vendor` to `sharepoint`. Your config file should look like this: ``` [sharepoint] type = webdav url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents vendor = sharepoint user = YourEmailAddress pass = encryptedpassword ``` #### Required Flags for SharePoint #### As SharePoint does some special things with uploaded documents, you won't be able to use the document size or the document hash to compare if a file has been changed since the upload / which file is newer. For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure Rclone uses the "Last Modified" datetime property to compare your documents: ``` --ignore-size --ignore-checksum --update ``` ### dCache ### dCache is a storage system that supports many protocols and authentication/authorisation schemes. For WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos, and various bearer tokens, including [Macaroons](https://www.dcache.org/manuals/workshop-2017-05-29-Umea/000-Final/anupam_macaroons_v02.pdf) and [OpenID-Connect](https://en.wikipedia.org/wiki/OpenID_Connect) access tokens. Configure as normal using the `other` type. Don't enter a username or password; instead enter your Macaroon as the `bearer_token`. The config will end up looking something like this.
``` [dcache] type = webdav url = https://dcache... vendor = other user = pass = bearer_token = your-macaroon ``` There is a [script](https://github.com/sara-nl/GridScripts/blob/master/get-macaroon) that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file. Macaroons may also be obtained from the dCacheView web-browser/JavaScript client that comes with dCache. ### OpenID-Connect ### dCache also supports authenticating with OpenID-Connect access tokens. OpenID-Connect is a protocol (based on OAuth 2.0) that allows services to identify users who have authenticated with some central service. Support for OpenID-Connect in rclone is currently achieved using another software package called [oidc-agent](https://github.com/indigo-dc/oidc-agent). This is a command-line tool that facilitates obtaining an access token. Once installed and configured, an access token is obtained by running the `oidc-token` command. The following example shows a (shortened) access token obtained from the *XDC* OIDC Provider. ``` paul@celebrimbor:~$ oidc-token XDC eyJraWQ[...]QFXDt0 paul@celebrimbor:~$ ``` **Note** Before the `oidc-token` command will work, the refresh token must be loaded into the oidc agent. This is done with the `oidc-add` command (e.g., `oidc-add XDC`). This is typically done once per login session. Full details on this and how to register oidc-agent with your OIDC Provider are provided in the [oidc-agent documentation](https://indigo-dc.gitbooks.io/oidc-agent/). The rclone `bearer_token_command` configuration option is used to fetch the access token from oidc-agent. Configure as a normal WebDAV endpoint, using the 'other' vendor, leaving the username and password empty. When prompted, choose to edit the advanced config and enter the command to get a bearer token (e.g., `oidc-token XDC`). The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the *XDC* OIDC Provider. ``` [dcache] type = webdav url = https://dcache.example.org/ vendor = other bearer_token_command = oidc-token XDC ``` rclone-1.53.3/docs/content/yandex.md000066400000000000000000000122171375552240400173130ustar00rootroot00000000000000--- title: "Yandex" description: "Yandex Disk" --- {{< icon "fa fa-space-shuttle" >}}Yandex Disk ---------------------------------------- [Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](https://yandex.com). Here is an example of making a yandex configuration. First run rclone config This will guide you through an interactive setup process: ``` No remotes found - make a new one n) New remote s) Set configuration password n/s> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Yandex Disk \ "yandex" [snip] Storage> yandex Yandex Client Id - leave blank normally. client_id> Yandex Client Secret - leave blank normally. client_secret> Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code...
Got code -------------------- [remote] client_id = client_secret = token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` See the [remote setup docs](/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it may require you to unblock it temporarily if you are running a host firewall. Once configured you can then use `rclone` like this, See top level directories rclone lsd remote: Make a new directory rclone mkdir remote:directory List the contents of a directory rclone ls remote:directory Sync `/home/local/directory` to the remote path, deleting any excess files in the path. rclone sync -i /home/local/directory remote:directory Yandex paths may be as deep as required, eg `remote:directory/subdirectory`. ### Modified time ### Modified times are supported and are stored accurate to 1 ns in custom metadata called `rclone_modified` in RFC3339 with nanoseconds format. ### MD5 checksums ### MD5 checksums are natively supported by Yandex Disk. ### Emptying Trash ### If you wish to empty your trash you can use the `rclone cleanup remote:` command which will permanently delete all your trashed files. This command does not take any path arguments. ### Quota information ### To view your current quota you can use the `rclone about remote:` command which will display your usage limit (quota) and the current usage. #### Restricted filename characters The [default restricted characters set](/overview/#restricted-characters) is replaced. Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. ### Limitations ### When uploading very large files (bigger than about 5GB) you will need to increase the `--timeout` parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see `net/http: timeout awaiting response headers` errors in the logs if this is happening. Setting the timeout to twice the max size of the file in GB should be enough, so if you want to upload a 30GB file set a timeout of `2 * 30 = 60m`, that is `--timeout 60m`. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/yandex/yandex.go then run make backenddocs" >}} ### Standard Options Here are the standard options specific to yandex (Yandex Disk). #### --yandex-client-id OAuth Client Id Leave blank normally. - Config: client_id - Env Var: RCLONE_YANDEX_CLIENT_ID - Type: string - Default: "" #### --yandex-client-secret OAuth Client Secret Leave blank normally. - Config: client_secret - Env Var: RCLONE_YANDEX_CLIENT_SECRET - Type: string - Default: "" ### Advanced Options Here are the advanced options specific to yandex (Yandex Disk). #### --yandex-token OAuth Access Token as a JSON blob. - Config: token - Env Var: RCLONE_YANDEX_TOKEN - Type: string - Default: "" #### --yandex-auth-url Auth server URL. Leave blank to use the provider defaults.
- Config: auth_url - Env Var: RCLONE_YANDEX_AUTH_URL - Type: string - Default: "" #### --yandex-token-url Token server url. Leave blank to use the provider defaults. - Config: token_url - Env Var: RCLONE_YANDEX_TOKEN_URL - Type: string - Default: "" #### --yandex-encoding This sets the encoding for the backend. See: the [encoding section in the overview](/overview/#encoding) for more info. - Config: encoding - Env Var: RCLONE_YANDEX_ENCODING - Type: MultiEncoder - Default: Slash,Del,Ctl,InvalidUtf8,Dot {{< rem autogenerated options stop >}} rclone-1.53.3/docs/i18n/000077500000000000000000000000001375552240400146035ustar00rootroot00000000000000rclone-1.53.3/docs/i18n/en.toml000066400000000000000000000002311375552240400160760ustar00rootroot00000000000000# Dummy translation file to make errors go away # See: https://discourse.gohugo.io/t/monolingual-site/6300 [wordCount] other = "{{ .WordCount }} words" rclone-1.53.3/docs/layouts/000077500000000000000000000000001375552240400155245ustar00rootroot00000000000000rclone-1.53.3/docs/layouts/404.html000066400000000000000000000002001375552240400167110ustar00rootroot00000000000000{{ define "main"}}

Page not found :-(

Click here to return to the homepage. {{ end }} rclone-1.53.3/docs/layouts/_default/000077500000000000000000000000001375552240400173075ustar00rootroot00000000000000rclone-1.53.3/docs/layouts/_default/baseof.html000066400000000000000000000057231375552240400214430ustar00rootroot00000000000000 {{ block "title" . }}{{ .Title }}{{ end }} {{ $RSSLink := "" }}{{ with .OutputFormats.Get "RSS" }}{{ $RSSLink = .RelPermalink }}{{ end }}{{ if $RSSLink }}{{ end }} {{ template "chrome/navbar.html" . }}
{{ block "main" . }} {{ end }}
{{ template "chrome/menu.html" . }}
{{ block "footer" . }} {{ end }}
rclone-1.53.3/docs/layouts/_default/single.html000066400000000000000000000000551375552240400214560ustar00rootroot00000000000000{{ define "main" }} {{ .Content }} {{ end }} rclone-1.53.3/docs/layouts/chrome/000077500000000000000000000000001375552240400170015ustar00rootroot00000000000000rclone-1.53.3/docs/layouts/chrome/menu.html000066400000000000000000000040001375552240400206250ustar00rootroot00000000000000
Share and Enjoy
rclone-1.53.3/docs/layouts/chrome/navbar.html000066400000000000000000000213741375552240400211470ustar00rootroot00000000000000 rclone-1.53.3/docs/layouts/index.html000066400000000000000000000000551375552240400175210ustar00rootroot00000000000000{{ define "main" }} {{ .Content }} {{ end }} rclone-1.53.3/docs/layouts/partials/000077500000000000000000000000001375552240400173435ustar00rootroot00000000000000rclone-1.53.3/docs/layouts/partials/version.html000066400000000000000000000000071375552240400217130ustar00rootroot00000000000000v1.53.3rclone-1.53.3/docs/layouts/rss.xml000066400000000000000000000013621375552240400170570ustar00rootroot00000000000000 {{ .Title }} on {{ .Site.Title }} {{ .Permalink }} en-US Nick Craig-Wood Copyright (c) 2017, Nick Craig-Wood; all rights reserved. {{ .Date.Format "Mon, 02 Jan 2006 15:04:05 MST" }} {{ range first 15 .Data.Pages }} {{ .Title }} {{ .Permalink }} {{ .Date.Format "Mon, 02 Jan 2006 15:04:05 MST" }} Nick Craig-Wood {{ .Permalink }} {{ .Content | html }} {{ end }} rclone-1.53.3/docs/layouts/section/000077500000000000000000000000001375552240400171705ustar00rootroot00000000000000rclone-1.53.3/docs/layouts/section/commands.html000066400000000000000000000007631375552240400216650ustar00rootroot00000000000000{{ define "main" }}

Rclone Commands

This is an index of all commands in rclone. Run rclone command --help to see the help for that command.

{{ range .Data.Pages }} {{ end }}
Command Description
{{ .Title | markdownify }} {{ .Description | markdownify }}
{{ end }} rclone-1.53.3/docs/layouts/shortcodes/000077500000000000000000000000001375552240400177015ustar00rootroot00000000000000rclone-1.53.3/docs/layouts/shortcodes/asciinema.html000066400000000000000000000001421375552240400225150ustar00rootroot00000000000000 rclone-1.53.3/docs/layouts/shortcodes/cdownload.html000066400000000000000000000004361375552240400225440ustar00rootroot00000000000000 rclone-1.53.3/docs/layouts/shortcodes/color.html000066400000000000000000000000661375552240400217070ustar00rootroot00000000000000{{ .Inner }} rclone-1.53.3/docs/layouts/shortcodes/download.html000066400000000000000000000005241375552240400223770ustar00rootroot00000000000000 rclone-1.53.3/docs/layouts/shortcodes/icon.html000066400000000000000000000000601375552240400215130ustar00rootroot00000000000000 rclone-1.53.3/docs/layouts/shortcodes/img.html000066400000000000000000000001641375552240400213440ustar00rootroot00000000000000{{ .Get rclone-1.53.3/docs/layouts/shortcodes/monthly_donations.html000066400000000000000000000026541375552240400243460ustar00rootroot00000000000000 rclone-1.53.3/docs/layouts/shortcodes/nick.html000066400000000000000000000001311375552240400215060ustar00rootroot00000000000000

Nick
Nick Craig-Wood

rclone-1.53.3/docs/layouts/shortcodes/one_off_donations.html000066400000000000000000000040201375552240400242540ustar00rootroot00000000000000
PayPal.Me
rclone-1.53.3/docs/layouts/shortcodes/provider.html000066400000000000000000000010251375552240400224170ustar00rootroot00000000000000
  • {{ .Get "name" }}{{ if .Get "note"}} (See note){{end}} Home Config
  • rclone-1.53.3/docs/layouts/shortcodes/provider_list.html000066400000000000000000000000531375552240400234520ustar00rootroot00000000000000
      {{ .Inner }}
    rclone-1.53.3/docs/layouts/shortcodes/rem.html000066400000000000000000000000001375552240400213400ustar00rootroot00000000000000rclone-1.53.3/docs/layouts/shortcodes/version.html000066400000000000000000000000361375552240400222530ustar00rootroot00000000000000{{ partial "version.html" . }}rclone-1.53.3/docs/layouts/sitemap.xml000066400000000000000000000006301375552240400177070ustar00rootroot00000000000000 {{ range .Data.Pages }} {{ .Permalink }} {{ safeHTML ( .Date.Format "2006-01-02T15:04:05-07:00" ) }}{{ with .Sitemap.ChangeFreq }} {{ . }}{{ end }}{{ if ge .Sitemap.Priority 0.0 }} {{ .Sitemap.Priority }}{{ end }} {{ end }} rclone-1.53.3/docs/static/000077500000000000000000000000001375552240400153135ustar00rootroot00000000000000rclone-1.53.3/docs/static/css/000077500000000000000000000000001375552240400161035ustar00rootroot00000000000000rclone-1.53.3/docs/static/css/bootstrap.min.4.4.1.css000066400000000000000000004674331375552240400221000ustar00rootroot00000000000000/*! * Bootstrap v4.4.1 (https://getbootstrap.com/) * Copyright 2011-2019 The Bootstrap Authors * Copyright 2011-2019 Twitter, Inc. * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) */:root{--blue:#007bff;--indigo:#6610f2;--purple:#6f42c1;--pink:#e83e8c;--red:#dc3545;--orange:#fd7e14;--yellow:#ffc107;--green:#28a745;--teal:#20c997;--cyan:#17a2b8;--white:#fff;--gray:#6c757d;--gray-dark:#343a40;--primary:#007bff;--secondary:#6c757d;--success:#28a745;--info:#17a2b8;--warning:#ffc107;--danger:#dc3545;--light:#f8f9fa;--dark:#343a40;--breakpoint-xs:0;--breakpoint-sm:576px;--breakpoint-md:768px;--breakpoint-lg:992px;--breakpoint-xl:1200px;--font-family-sans-serif:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";--font-family-monospace:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace}*,::after,::before{box-sizing:border-box}html{font-family:sans-serif;line-height:1.15;-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:transparent}article,aside,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}body{margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-size:1rem;font-weight:400;line-height:1.5;color:#212529;text-align:left;background-color:#fff}[tabindex="-1"]:focus:not(:focus-visible){outline:0!important}hr{box-sizing:content-box;height:0;overflow:visible}h1,h2,h3,h4,h5,h6{margin-top:0;margin-bottom:.5rem}p{margin-top:0;margin-bottom:1rem}abbr[data-original-title],abbr[title]{text-decoration:underline;-webkit-text-decoration:underline dotted;text-decoration:underline dotted;cursor:help;border-bottom:0;-webkit-text-decoration-skip-ink:none;text-decoration-skip-ink:none}address{margin-bottom:1rem;font-style:normal;line-height:inherit}dl,ol,ul{margin-top:0;margin-bottom:1rem}ol ol,ol ul,ul ol,ul ul{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-left:0}blockquote{margin:0 0 
1rem}b,strong{font-weight:bolder}small{font-size:80%}sub,sup{position:relative;font-size:75%;line-height:0;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}a{color:#007bff;text-decoration:none;background-color:transparent}a:hover{color:#0056b3;text-decoration:underline}a:not([href]){color:inherit;text-decoration:none}a:not([href]):hover{color:inherit;text-decoration:none}code,kbd,pre,samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;font-size:1em}pre{margin-top:0;margin-bottom:1rem;overflow:auto}figure{margin:0 0 1rem}img{vertical-align:middle;border-style:none}svg{overflow:hidden;vertical-align:middle}table{border-collapse:collapse}caption{padding-top:.75rem;padding-bottom:.75rem;color:#6c757d;text-align:left;caption-side:bottom}th{text-align:inherit}label{display:inline-block;margin-bottom:.5rem}button{border-radius:0}button:focus{outline:1px dotted;outline:5px auto -webkit-focus-ring-color}button,input,optgroup,select,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,input{overflow:visible}button,select{text-transform:none}select{word-wrap:normal}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button}[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled),button:not(:disabled){cursor:pointer}[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner,button::-moz-focus-inner{padding:0;border-style:none}input[type=checkbox],input[type=radio]{box-sizing:border-box;padding:0}input[type=date],input[type=datetime-local],input[type=month],input[type=time]{-webkit-appearance:listbox}textarea{overflow:auto;resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{display:block;width:100%;max-width:100%;padding:0;margin-bottom:.5rem;font-size:1.5rem;line-height:inherit;color:inherit;white-space:normal}progress{vertical-align:baseline}[type=number]::-webkit-inner-spin-button,[type=number]::-webkit-outer-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:none}[type=search]::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}summary{display:list-item;cursor:pointer}template{display:none}[hidden]{display:none!important}.h1,.h2,.h3,.h4,.h5,.h6,h1,h2,h3,h4,h5,h6{margin-bottom:.5rem;font-weight:500;line-height:1.2}.h1,h1{font-size:2.5rem}.h2,h2{font-size:2rem}.h3,h3{font-size:1.75rem}.h4,h4{font-size:1.5rem}.h5,h5{font-size:1.25rem}.h6,h6{font-size:1rem}.lead{font-size:1.25rem;font-weight:300}.display-1{font-size:6rem;font-weight:300;line-height:1.2}.display-2{font-size:5.5rem;font-weight:300;line-height:1.2}.display-3{font-size:4.5rem;font-weight:300;line-height:1.2}.display-4{font-size:3.5rem;font-weight:300;line-height:1.2}hr{margin-top:1rem;margin-bottom:1rem;border:0;border-top:1px solid rgba(0,0,0,.1)}.small,small{font-size:80%;font-weight:400}.mark,mark{padding:.2em;background-color:#fcf8e3}.list-unstyled{padding-left:0;list-style:none}.list-inline{padding-left:0;list-style:none}.list-inline-item{display:inline-block}.list-inline-item:not(:last-child){margin-right:.5rem}.initialism{font-size:90%;text-transform:uppercase}.blockquote{margin-bottom:1rem;font-size:1.25rem}.blockquote-footer{display:block;font-size:80%;color:#6c757d}.blockquote-footer::before{content:"\2014\00A0"}.img-fluid{max-width:100%;height:auto}.img-thumbnail{padding:.25rem;background-color:#fff;border:1px solid 
#dee2e6;border-radius:.25rem;max-width:100%;height:auto}.figure{display:inline-block}.figure-img{margin-bottom:.5rem;line-height:1}.figure-caption{font-size:90%;color:#6c757d}code{font-size:87.5%;color:#e83e8c;word-wrap:break-word}a>code{color:inherit}kbd{padding:.2rem .4rem;font-size:87.5%;color:#fff;background-color:#212529;border-radius:.2rem}kbd kbd{padding:0;font-size:100%;font-weight:700}pre{display:block;font-size:87.5%;color:#212529}pre code{font-size:inherit;color:inherit;word-break:normal}.pre-scrollable{max-height:340px;overflow-y:scroll}.container{width:100%;padding-right:15px;padding-left:15px;margin-right:auto;margin-left:auto}@media (min-width:576px){.container{max-width:540px}}@media (min-width:768px){.container{max-width:720px}}@media (min-width:992px){.container{max-width:960px}}@media (min-width:1200px){.container{max-width:1140px}}.container-fluid,.container-lg,.container-md,.container-sm,.container-xl{width:100%;padding-right:15px;padding-left:15px;margin-right:auto;margin-left:auto}@media (min-width:576px){.container,.container-sm{max-width:540px}}@media (min-width:768px){.container,.container-md,.container-sm{max-width:720px}}@media (min-width:992px){.container,.container-lg,.container-md,.container-sm{max-width:960px}}@media (min-width:1200px){.container,.container-lg,.container-md,.container-sm,.container-xl{max-width:1140px}}.row{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;margin-right:-15px;margin-left:-15px}.no-gutters{margin-right:0;margin-left:0}.no-gutters>.col,.no-gutters>[class*=col-]{padding-right:0;padding-left:0}.col,.col-1,.col-10,.col-11,.col-12,.col-2,.col-3,.col-4,.col-5,.col-6,.col-7,.col-8,.col-9,.col-auto,.col-lg,.col-lg-1,.col-lg-10,.col-lg-11,.col-lg-12,.col-lg-2,.col-lg-3,.col-lg-4,.col-lg-5,.col-lg-6,.col-lg-7,.col-lg-8,.col-lg-9,.col-lg-auto,.col-md,.col-md-1,.col-md-10,.col-md-11,.col-md-12,.col-md-2,.col-md-3,.col-md-4,.col-md-5,.col-md-6,.col-md-7,.col-md-8,.col-md-9,.col-md-auto,.col-sm,.col-sm-1,.col-sm-10,.col-sm-11,.col-sm-12,.col-sm-2,.col-sm-3,.col-sm-4,.col-sm-5,.col-sm-6,.col-sm-7,.col-sm-8,.col-sm-9,.col-sm-auto,.col-xl,.col-xl-1,.col-xl-10,.col-xl-11,.col-xl-12,.col-xl-2,.col-xl-3,.col-xl-4,.col-xl-5,.col-xl-6,.col-xl-7,.col-xl-8,.col-xl-9,.col-xl-auto{position:relative;width:100%;padding-right:15px;padding-left:15px}.col{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;max-width:100%}.row-cols-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-10{-ms-flex:0 0 83.333333%;flex:0 0 
83.333333%;max-width:83.333333%}.col-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-first{-ms-flex-order:-1;order:-1}.order-last{-ms-flex-order:13;order:13}.order-0{-ms-flex-order:0;order:0}.order-1{-ms-flex-order:1;order:1}.order-2{-ms-flex-order:2;order:2}.order-3{-ms-flex-order:3;order:3}.order-4{-ms-flex-order:4;order:4}.order-5{-ms-flex-order:5;order:5}.order-6{-ms-flex-order:6;order:6}.order-7{-ms-flex-order:7;order:7}.order-8{-ms-flex-order:8;order:8}.order-9{-ms-flex-order:9;order:9}.order-10{-ms-flex-order:10;order:10}.order-11{-ms-flex-order:11;order:11}.order-12{-ms-flex-order:12;order:12}.offset-1{margin-left:8.333333%}.offset-2{margin-left:16.666667%}.offset-3{margin-left:25%}.offset-4{margin-left:33.333333%}.offset-5{margin-left:41.666667%}.offset-6{margin-left:50%}.offset-7{margin-left:58.333333%}.offset-8{margin-left:66.666667%}.offset-9{margin-left:75%}.offset-10{margin-left:83.333333%}.offset-11{margin-left:91.666667%}@media (min-width:576px){.col-sm{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;max-width:100%}.row-cols-sm-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-sm-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-sm-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-sm-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-sm-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-sm-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-sm-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-sm-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-sm-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-sm-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-sm-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-sm-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-sm-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-sm-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-sm-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-sm-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-sm-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-sm-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-sm-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-sm-first{-ms-flex-order:-1;order:-1}.order-sm-last{-ms-flex-order:13;order:13}.order-sm-0{-ms-flex-order:0;order:0}.order-sm-1{-ms-flex-order:1;order:1}.order-sm-2{-ms-flex-order:2;order:2}.order-sm-3{-ms-flex-order:3;order:3}.order-sm-4{-ms-flex-order:4;order:4}.order-sm-5{-ms-flex-order:5;order:5}.order-sm-6{-ms-flex-order:6;order:6}.order-sm-7{-ms-flex-order:7;order:7}.order-sm-8{-ms-flex-order:8;order:8}.order-sm-9{-ms-flex-order:9;order:9}.order-sm-10{-ms-flex-order:10;order:10}.order-sm-11{-ms-flex-order:11;order:11}.order-sm-12{-ms-flex-order:12;order:12}.offset-sm-0{margin-left:0}.offset-sm-1{margin-left:8.333333%}.offset-sm-2{margin-left:16.666667%}.offset-sm-3{margin-left:25%}.offset-sm-4{margin-left:33.333333%}.offset-sm-5{margin-left:41.666667%}.offset-sm-6{margin-left:50%}.offset-sm-7{margin-left:58.333333%}.offset-sm-8{margin-left:66.666667%}.offset-sm-9{margin-left:75%}.offset-sm-10{margin-left:83.333333%}.offset-sm-11{margin-left:91.666667%}}@media 
(min-width:768px){.col-md{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;max-width:100%}.row-cols-md-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-md-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-md-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-md-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-md-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-md-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-md-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-md-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-md-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-md-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-md-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-md-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-md-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-md-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-md-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-md-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-md-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-md-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-md-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-md-first{-ms-flex-order:-1;order:-1}.order-md-last{-ms-flex-order:13;order:13}.order-md-0{-ms-flex-order:0;order:0}.order-md-1{-ms-flex-order:1;order:1}.order-md-2{-ms-flex-order:2;order:2}.order-md-3{-ms-flex-order:3;order:3}.order-md-4{-ms-flex-order:4;order:4}.order-md-5{-ms-flex-order:5;order:5}.order-md-6{-ms-flex-order:6;order:6}.order-md-7{-ms-flex-order:7;order:7}.order-md-8{-ms-flex-order:8;order:8}.order-md-9{-ms-flex-order:9;order:9}.order-md-10{-ms-flex-order:10;order:10}.order-md-11{-ms-flex-order:11;order:11}.order-md-12{-ms-flex-order:12;order:12}.offset-md-0{margin-left:0}.offset-md-1{margin-left:8.333333%}.offset-md-2{margin-left:16.666667%}.offset-md-3{margin-left:25%}.offset-md-4{margin-left:33.333333%}.offset-md-5{margin-left:41.666667%}.offset-md-6{margin-left:50%}.offset-md-7{margin-left:58.333333%}.offset-md-8{margin-left:66.666667%}.offset-md-9{margin-left:75%}.offset-md-10{margin-left:83.333333%}.offset-md-11{margin-left:91.666667%}}@media (min-width:992px){.col-lg{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;max-width:100%}.row-cols-lg-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-lg-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-lg-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-lg-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-lg-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-lg-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-lg-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-lg-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-lg-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-lg-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-lg-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-lg-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-lg-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-lg-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-lg-8{-ms-flex:0 0 66.666667%;flex:0 0 
66.666667%;max-width:66.666667%}.col-lg-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-lg-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-lg-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-lg-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-lg-first{-ms-flex-order:-1;order:-1}.order-lg-last{-ms-flex-order:13;order:13}.order-lg-0{-ms-flex-order:0;order:0}.order-lg-1{-ms-flex-order:1;order:1}.order-lg-2{-ms-flex-order:2;order:2}.order-lg-3{-ms-flex-order:3;order:3}.order-lg-4{-ms-flex-order:4;order:4}.order-lg-5{-ms-flex-order:5;order:5}.order-lg-6{-ms-flex-order:6;order:6}.order-lg-7{-ms-flex-order:7;order:7}.order-lg-8{-ms-flex-order:8;order:8}.order-lg-9{-ms-flex-order:9;order:9}.order-lg-10{-ms-flex-order:10;order:10}.order-lg-11{-ms-flex-order:11;order:11}.order-lg-12{-ms-flex-order:12;order:12}.offset-lg-0{margin-left:0}.offset-lg-1{margin-left:8.333333%}.offset-lg-2{margin-left:16.666667%}.offset-lg-3{margin-left:25%}.offset-lg-4{margin-left:33.333333%}.offset-lg-5{margin-left:41.666667%}.offset-lg-6{margin-left:50%}.offset-lg-7{margin-left:58.333333%}.offset-lg-8{margin-left:66.666667%}.offset-lg-9{margin-left:75%}.offset-lg-10{margin-left:83.333333%}.offset-lg-11{margin-left:91.666667%}}@media (min-width:1200px){.col-xl{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;max-width:100%}.row-cols-xl-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-xl-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-xl-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-xl-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-xl-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-xl-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-xl-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-xl-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-xl-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-xl-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-xl-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-xl-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-xl-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-xl-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-xl-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-xl-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-xl-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-xl-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-xl-12{-ms-flex:0 0 100%;flex:0 0 
100%;max-width:100%}.order-xl-first{-ms-flex-order:-1;order:-1}.order-xl-last{-ms-flex-order:13;order:13}.order-xl-0{-ms-flex-order:0;order:0}.order-xl-1{-ms-flex-order:1;order:1}.order-xl-2{-ms-flex-order:2;order:2}.order-xl-3{-ms-flex-order:3;order:3}.order-xl-4{-ms-flex-order:4;order:4}.order-xl-5{-ms-flex-order:5;order:5}.order-xl-6{-ms-flex-order:6;order:6}.order-xl-7{-ms-flex-order:7;order:7}.order-xl-8{-ms-flex-order:8;order:8}.order-xl-9{-ms-flex-order:9;order:9}.order-xl-10{-ms-flex-order:10;order:10}.order-xl-11{-ms-flex-order:11;order:11}.order-xl-12{-ms-flex-order:12;order:12}.offset-xl-0{margin-left:0}.offset-xl-1{margin-left:8.333333%}.offset-xl-2{margin-left:16.666667%}.offset-xl-3{margin-left:25%}.offset-xl-4{margin-left:33.333333%}.offset-xl-5{margin-left:41.666667%}.offset-xl-6{margin-left:50%}.offset-xl-7{margin-left:58.333333%}.offset-xl-8{margin-left:66.666667%}.offset-xl-9{margin-left:75%}.offset-xl-10{margin-left:83.333333%}.offset-xl-11{margin-left:91.666667%}}.table{width:100%;margin-bottom:1rem;color:#212529}.table td,.table th{padding:.75rem;vertical-align:top;border-top:1px solid #dee2e6}.table thead th{vertical-align:bottom;border-bottom:2px solid #dee2e6}.table tbody+tbody{border-top:2px solid #dee2e6}.table-sm td,.table-sm th{padding:.3rem}.table-bordered{border:1px solid #dee2e6}.table-bordered td,.table-bordered th{border:1px solid #dee2e6}.table-bordered thead td,.table-bordered thead th{border-bottom-width:2px}.table-borderless tbody+tbody,.table-borderless td,.table-borderless th,.table-borderless thead th{border:0}.table-striped tbody tr:nth-of-type(odd){background-color:rgba(0,0,0,.05)}.table-hover tbody tr:hover{color:#212529;background-color:rgba(0,0,0,.075)}.table-primary,.table-primary>td,.table-primary>th{background-color:#b8daff}.table-primary tbody+tbody,.table-primary td,.table-primary th,.table-primary thead th{border-color:#7abaff}.table-hover .table-primary:hover{background-color:#9fcdff}.table-hover .table-primary:hover>td,.table-hover .table-primary:hover>th{background-color:#9fcdff}.table-secondary,.table-secondary>td,.table-secondary>th{background-color:#d6d8db}.table-secondary tbody+tbody,.table-secondary td,.table-secondary th,.table-secondary thead th{border-color:#b3b7bb}.table-hover .table-secondary:hover{background-color:#c8cbcf}.table-hover .table-secondary:hover>td,.table-hover .table-secondary:hover>th{background-color:#c8cbcf}.table-success,.table-success>td,.table-success>th{background-color:#c3e6cb}.table-success tbody+tbody,.table-success td,.table-success th,.table-success thead th{border-color:#8fd19e}.table-hover .table-success:hover{background-color:#b1dfbb}.table-hover .table-success:hover>td,.table-hover .table-success:hover>th{background-color:#b1dfbb}.table-info,.table-info>td,.table-info>th{background-color:#bee5eb}.table-info tbody+tbody,.table-info td,.table-info th,.table-info thead th{border-color:#86cfda}.table-hover .table-info:hover{background-color:#abdde5}.table-hover .table-info:hover>td,.table-hover .table-info:hover>th{background-color:#abdde5}.table-warning,.table-warning>td,.table-warning>th{background-color:#ffeeba}.table-warning tbody+tbody,.table-warning td,.table-warning th,.table-warning thead th{border-color:#ffdf7e}.table-hover .table-warning:hover{background-color:#ffe8a1}.table-hover .table-warning:hover>td,.table-hover .table-warning:hover>th{background-color:#ffe8a1}.table-danger,.table-danger>td,.table-danger>th{background-color:#f5c6cb}.table-danger tbody+tbody,.table-danger 
td,.table-danger th,.table-danger thead th{border-color:#ed969e}.table-hover .table-danger:hover{background-color:#f1b0b7}.table-hover .table-danger:hover>td,.table-hover .table-danger:hover>th{background-color:#f1b0b7}.table-light,.table-light>td,.table-light>th{background-color:#fdfdfe}.table-light tbody+tbody,.table-light td,.table-light th,.table-light thead th{border-color:#fbfcfc}.table-hover .table-light:hover{background-color:#ececf6}.table-hover .table-light:hover>td,.table-hover .table-light:hover>th{background-color:#ececf6}.table-dark,.table-dark>td,.table-dark>th{background-color:#c6c8ca}.table-dark tbody+tbody,.table-dark td,.table-dark th,.table-dark thead th{border-color:#95999c}.table-hover .table-dark:hover{background-color:#b9bbbe}.table-hover .table-dark:hover>td,.table-hover .table-dark:hover>th{background-color:#b9bbbe}.table-active,.table-active>td,.table-active>th{background-color:rgba(0,0,0,.075)}.table-hover .table-active:hover{background-color:rgba(0,0,0,.075)}.table-hover .table-active:hover>td,.table-hover .table-active:hover>th{background-color:rgba(0,0,0,.075)}.table .thead-dark th{color:#fff;background-color:#343a40;border-color:#454d55}.table .thead-light th{color:#495057;background-color:#e9ecef;border-color:#dee2e6}.table-dark{color:#fff;background-color:#343a40}.table-dark td,.table-dark th,.table-dark thead th{border-color:#454d55}.table-dark.table-bordered{border:0}.table-dark.table-striped tbody tr:nth-of-type(odd){background-color:rgba(255,255,255,.05)}.table-dark.table-hover tbody tr:hover{color:#fff;background-color:rgba(255,255,255,.075)}@media (max-width:575.98px){.table-responsive-sm{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-sm>.table-bordered{border:0}}@media (max-width:767.98px){.table-responsive-md{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-md>.table-bordered{border:0}}@media (max-width:991.98px){.table-responsive-lg{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-lg>.table-bordered{border:0}}@media (max-width:1199.98px){.table-responsive-xl{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-xl>.table-bordered{border:0}}.table-responsive{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive>.table-bordered{border:0}.form-control{display:block;width:100%;height:calc(1.5em + .75rem + 2px);padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;background-color:#fff;background-clip:padding-box;border:1px solid #ced4da;border-radius:.25rem;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-control{transition:none}}.form-control::-ms-expand{background-color:transparent;border:0}.form-control:-moz-focusring{color:transparent;text-shadow:0 0 0 #495057}.form-control:focus{color:#495057;background-color:#fff;border-color:#80bdff;outline:0;box-shadow:0 0 0 .2rem 
rgba(0,123,255,.25)}.form-control::-webkit-input-placeholder{color:#6c757d;opacity:1}.form-control::-moz-placeholder{color:#6c757d;opacity:1}.form-control:-ms-input-placeholder{color:#6c757d;opacity:1}.form-control::-ms-input-placeholder{color:#6c757d;opacity:1}.form-control::placeholder{color:#6c757d;opacity:1}.form-control:disabled,.form-control[readonly]{background-color:#e9ecef;opacity:1}select.form-control:focus::-ms-value{color:#495057;background-color:#fff}.form-control-file,.form-control-range{display:block;width:100%}.col-form-label{padding-top:calc(.375rem + 1px);padding-bottom:calc(.375rem + 1px);margin-bottom:0;font-size:inherit;line-height:1.5}.col-form-label-lg{padding-top:calc(.5rem + 1px);padding-bottom:calc(.5rem + 1px);font-size:1.25rem;line-height:1.5}.col-form-label-sm{padding-top:calc(.25rem + 1px);padding-bottom:calc(.25rem + 1px);font-size:.875rem;line-height:1.5}.form-control-plaintext{display:block;width:100%;padding:.375rem 0;margin-bottom:0;font-size:1rem;line-height:1.5;color:#212529;background-color:transparent;border:solid transparent;border-width:1px 0}.form-control-plaintext.form-control-lg,.form-control-plaintext.form-control-sm{padding-right:0;padding-left:0}.form-control-sm{height:calc(1.5em + .5rem + 2px);padding:.25rem .5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.form-control-lg{height:calc(1.5em + 1rem + 2px);padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}select.form-control[multiple],select.form-control[size]{height:auto}textarea.form-control{height:auto}.form-group{margin-bottom:1rem}.form-text{display:block;margin-top:.25rem}.form-row{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;margin-right:-5px;margin-left:-5px}.form-row>.col,.form-row>[class*=col-]{padding-right:5px;padding-left:5px}.form-check{position:relative;display:block;padding-left:1.25rem}.form-check-input{position:absolute;margin-top:.3rem;margin-left:-1.25rem}.form-check-input:disabled~.form-check-label,.form-check-input[disabled]~.form-check-label{color:#6c757d}.form-check-label{margin-bottom:0}.form-check-inline{display:-ms-inline-flexbox;display:inline-flex;-ms-flex-align:center;align-items:center;padding-left:0;margin-right:.75rem}.form-check-inline .form-check-input{position:static;margin-top:0;margin-right:.3125rem;margin-left:0}.valid-feedback{display:none;width:100%;margin-top:.25rem;font-size:80%;color:#28a745}.valid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;line-height:1.5;color:#fff;background-color:rgba(40,167,69,.9);border-radius:.25rem}.is-valid~.valid-feedback,.is-valid~.valid-tooltip,.was-validated :valid~.valid-feedback,.was-validated :valid~.valid-tooltip{display:block}.form-control.is-valid,.was-validated .form-control:valid{border-color:#28a745;padding-right:calc(1.5em + .75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath fill='%2328a745' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-valid:focus,.was-validated .form-control:valid:focus{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.was-validated textarea.form-control:valid,textarea.form-control.is-valid{padding-right:calc(1.5em + 
.75rem);background-position:top calc(.375em + .1875rem) right calc(.375em + .1875rem)}.custom-select.is-valid,.was-validated .custom-select:valid{border-color:#28a745;padding-right:calc(.75em + 2.3125rem);background:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5' viewBox='0 0 4 5'%3e%3cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3e%3c/svg%3e") no-repeat right .75rem center/8px 10px,url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath fill='%2328a745' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e") #fff no-repeat center right 1.75rem/calc(.75em + .375rem) calc(.75em + .375rem)}.custom-select.is-valid:focus,.was-validated .custom-select:valid:focus{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.form-check-input.is-valid~.form-check-label,.was-validated .form-check-input:valid~.form-check-label{color:#28a745}.form-check-input.is-valid~.valid-feedback,.form-check-input.is-valid~.valid-tooltip,.was-validated .form-check-input:valid~.valid-feedback,.was-validated .form-check-input:valid~.valid-tooltip{display:block}.custom-control-input.is-valid~.custom-control-label,.was-validated .custom-control-input:valid~.custom-control-label{color:#28a745}.custom-control-input.is-valid~.custom-control-label::before,.was-validated .custom-control-input:valid~.custom-control-label::before{border-color:#28a745}.custom-control-input.is-valid:checked~.custom-control-label::before,.was-validated .custom-control-input:valid:checked~.custom-control-label::before{border-color:#34ce57;background-color:#34ce57}.custom-control-input.is-valid:focus~.custom-control-label::before,.was-validated .custom-control-input:valid:focus~.custom-control-label::before{box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.custom-control-input.is-valid:focus:not(:checked)~.custom-control-label::before,.was-validated .custom-control-input:valid:focus:not(:checked)~.custom-control-label::before{border-color:#28a745}.custom-file-input.is-valid~.custom-file-label,.was-validated .custom-file-input:valid~.custom-file-label{border-color:#28a745}.custom-file-input.is-valid:focus~.custom-file-label,.was-validated .custom-file-input:valid:focus~.custom-file-label{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.invalid-feedback{display:none;width:100%;margin-top:.25rem;font-size:80%;color:#dc3545}.invalid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;line-height:1.5;color:#fff;background-color:rgba(220,53,69,.9);border-radius:.25rem}.is-invalid~.invalid-feedback,.is-invalid~.invalid-tooltip,.was-validated :invalid~.invalid-feedback,.was-validated :invalid~.invalid-tooltip{display:block}.form-control.is-invalid,.was-validated .form-control:invalid{border-color:#dc3545;padding-right:calc(1.5em + .75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' fill='none' stroke='%23dc3545' viewBox='0 0 12 12'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-invalid:focus,.was-validated 
.form-control:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.was-validated textarea.form-control:invalid,textarea.form-control.is-invalid{padding-right:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) right calc(.375em + .1875rem)}.custom-select.is-invalid,.was-validated .custom-select:invalid{border-color:#dc3545;padding-right:calc(.75em + 2.3125rem);background:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5' viewBox='0 0 4 5'%3e%3cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3e%3c/svg%3e") no-repeat right .75rem center/8px 10px,url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' fill='none' stroke='%23dc3545' viewBox='0 0 12 12'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e") #fff no-repeat center right 1.75rem/calc(.75em + .375rem) calc(.75em + .375rem)}.custom-select.is-invalid:focus,.was-validated .custom-select:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.form-check-input.is-invalid~.form-check-label,.was-validated .form-check-input:invalid~.form-check-label{color:#dc3545}.form-check-input.is-invalid~.invalid-feedback,.form-check-input.is-invalid~.invalid-tooltip,.was-validated .form-check-input:invalid~.invalid-feedback,.was-validated .form-check-input:invalid~.invalid-tooltip{display:block}.custom-control-input.is-invalid~.custom-control-label,.was-validated .custom-control-input:invalid~.custom-control-label{color:#dc3545}.custom-control-input.is-invalid~.custom-control-label::before,.was-validated .custom-control-input:invalid~.custom-control-label::before{border-color:#dc3545}.custom-control-input.is-invalid:checked~.custom-control-label::before,.was-validated .custom-control-input:invalid:checked~.custom-control-label::before{border-color:#e4606d;background-color:#e4606d}.custom-control-input.is-invalid:focus~.custom-control-label::before,.was-validated .custom-control-input:invalid:focus~.custom-control-label::before{box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.custom-control-input.is-invalid:focus:not(:checked)~.custom-control-label::before,.was-validated .custom-control-input:invalid:focus:not(:checked)~.custom-control-label::before{border-color:#dc3545}.custom-file-input.is-invalid~.custom-file-label,.was-validated .custom-file-input:invalid~.custom-file-label{border-color:#dc3545}.custom-file-input.is-invalid:focus~.custom-file-label,.was-validated .custom-file-input:invalid:focus~.custom-file-label{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.form-inline{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap;-ms-flex-align:center;align-items:center}.form-inline .form-check{width:100%}@media (min-width:576px){.form-inline label{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;margin-bottom:0}.form-inline .form-group{display:-ms-flexbox;display:flex;-ms-flex:0 0 auto;flex:0 0 auto;-ms-flex-flow:row wrap;flex-flow:row wrap;-ms-flex-align:center;align-items:center;margin-bottom:0}.form-inline .form-control{display:inline-block;width:auto;vertical-align:middle}.form-inline .form-control-plaintext{display:inline-block}.form-inline .custom-select,.form-inline .input-group{width:auto}.form-inline 
.form-check{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;width:auto;padding-left:0}.form-inline .form-check-input{position:relative;-ms-flex-negative:0;flex-shrink:0;margin-top:0;margin-right:.25rem;margin-left:0}.form-inline .custom-control{-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center}.form-inline .custom-control-label{margin-bottom:0}}.btn{display:inline-block;font-weight:400;color:#212529;text-align:center;vertical-align:middle;cursor:pointer;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;background-color:transparent;border:1px solid transparent;padding:.375rem .75rem;font-size:1rem;line-height:1.5;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.btn{transition:none}}.btn:hover{color:#212529;text-decoration:none}.btn.focus,.btn:focus{outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.btn.disabled,.btn:disabled{opacity:.65}a.btn.disabled,fieldset:disabled a.btn{pointer-events:none}.btn-primary{color:#fff;background-color:#007bff;border-color:#007bff}.btn-primary:hover{color:#fff;background-color:#0069d9;border-color:#0062cc}.btn-primary.focus,.btn-primary:focus{color:#fff;background-color:#0069d9;border-color:#0062cc;box-shadow:0 0 0 .2rem rgba(38,143,255,.5)}.btn-primary.disabled,.btn-primary:disabled{color:#fff;background-color:#007bff;border-color:#007bff}.btn-primary:not(:disabled):not(.disabled).active,.btn-primary:not(:disabled):not(.disabled):active,.show>.btn-primary.dropdown-toggle{color:#fff;background-color:#0062cc;border-color:#005cbf}.btn-primary:not(:disabled):not(.disabled).active:focus,.btn-primary:not(:disabled):not(.disabled):active:focus,.show>.btn-primary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(38,143,255,.5)}.btn-secondary{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-secondary:hover{color:#fff;background-color:#5a6268;border-color:#545b62}.btn-secondary.focus,.btn-secondary:focus{color:#fff;background-color:#5a6268;border-color:#545b62;box-shadow:0 0 0 .2rem rgba(130,138,145,.5)}.btn-secondary.disabled,.btn-secondary:disabled{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-secondary:not(:disabled):not(.disabled).active,.btn-secondary:not(:disabled):not(.disabled):active,.show>.btn-secondary.dropdown-toggle{color:#fff;background-color:#545b62;border-color:#4e555b}.btn-secondary:not(:disabled):not(.disabled).active:focus,.btn-secondary:not(:disabled):not(.disabled):active:focus,.show>.btn-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(130,138,145,.5)}.btn-success{color:#fff;background-color:#28a745;border-color:#28a745}.btn-success:hover{color:#fff;background-color:#218838;border-color:#1e7e34}.btn-success.focus,.btn-success:focus{color:#fff;background-color:#218838;border-color:#1e7e34;box-shadow:0 0 0 .2rem rgba(72,180,97,.5)}.btn-success.disabled,.btn-success:disabled{color:#fff;background-color:#28a745;border-color:#28a745}.btn-success:not(:disabled):not(.disabled).active,.btn-success:not(:disabled):not(.disabled):active,.show>.btn-success.dropdown-toggle{color:#fff;background-color:#1e7e34;border-color:#1c7430}.btn-success:not(:disabled):not(.disabled).active:focus,.btn-success:not(:disabled):not(.disabled):active:focus,.show>.btn-success.dropdown-toggle:focus{box-shadow:0 0 0 .2rem 
rgba(72,180,97,.5)}.btn-info{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-info:hover{color:#fff;background-color:#138496;border-color:#117a8b}.btn-info.focus,.btn-info:focus{color:#fff;background-color:#138496;border-color:#117a8b;box-shadow:0 0 0 .2rem rgba(58,176,195,.5)}.btn-info.disabled,.btn-info:disabled{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-info:not(:disabled):not(.disabled).active,.btn-info:not(:disabled):not(.disabled):active,.show>.btn-info.dropdown-toggle{color:#fff;background-color:#117a8b;border-color:#10707f}.btn-info:not(:disabled):not(.disabled).active:focus,.btn-info:not(:disabled):not(.disabled):active:focus,.show>.btn-info.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(58,176,195,.5)}.btn-warning{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-warning:hover{color:#212529;background-color:#e0a800;border-color:#d39e00}.btn-warning.focus,.btn-warning:focus{color:#212529;background-color:#e0a800;border-color:#d39e00;box-shadow:0 0 0 .2rem rgba(222,170,12,.5)}.btn-warning.disabled,.btn-warning:disabled{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-warning:not(:disabled):not(.disabled).active,.btn-warning:not(:disabled):not(.disabled):active,.show>.btn-warning.dropdown-toggle{color:#212529;background-color:#d39e00;border-color:#c69500}.btn-warning:not(:disabled):not(.disabled).active:focus,.btn-warning:not(:disabled):not(.disabled):active:focus,.show>.btn-warning.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(222,170,12,.5)}.btn-danger{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-danger:hover{color:#fff;background-color:#c82333;border-color:#bd2130}.btn-danger.focus,.btn-danger:focus{color:#fff;background-color:#c82333;border-color:#bd2130;box-shadow:0 0 0 .2rem rgba(225,83,97,.5)}.btn-danger.disabled,.btn-danger:disabled{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-danger:not(:disabled):not(.disabled).active,.btn-danger:not(:disabled):not(.disabled):active,.show>.btn-danger.dropdown-toggle{color:#fff;background-color:#bd2130;border-color:#b21f2d}.btn-danger:not(:disabled):not(.disabled).active:focus,.btn-danger:not(:disabled):not(.disabled):active:focus,.show>.btn-danger.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(225,83,97,.5)}.btn-light{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light:hover{color:#212529;background-color:#e2e6ea;border-color:#dae0e5}.btn-light.focus,.btn-light:focus{color:#212529;background-color:#e2e6ea;border-color:#dae0e5;box-shadow:0 0 0 .2rem rgba(216,217,219,.5)}.btn-light.disabled,.btn-light:disabled{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light:not(:disabled):not(.disabled).active,.btn-light:not(:disabled):not(.disabled):active,.show>.btn-light.dropdown-toggle{color:#212529;background-color:#dae0e5;border-color:#d3d9df}.btn-light:not(:disabled):not(.disabled).active:focus,.btn-light:not(:disabled):not(.disabled):active:focus,.show>.btn-light.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(216,217,219,.5)}.btn-dark{color:#fff;background-color:#343a40;border-color:#343a40}.btn-dark:hover{color:#fff;background-color:#23272b;border-color:#1d2124}.btn-dark.focus,.btn-dark:focus{color:#fff;background-color:#23272b;border-color:#1d2124;box-shadow:0 0 0 .2rem 
rgba(82,88,93,.5)}.btn-dark.disabled,.btn-dark:disabled{color:#fff;background-color:#343a40;border-color:#343a40}.btn-dark:not(:disabled):not(.disabled).active,.btn-dark:not(:disabled):not(.disabled):active,.show>.btn-dark.dropdown-toggle{color:#fff;background-color:#1d2124;border-color:#171a1d}.btn-dark:not(:disabled):not(.disabled).active:focus,.btn-dark:not(:disabled):not(.disabled):active:focus,.show>.btn-dark.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(82,88,93,.5)}.btn-outline-primary{color:#007bff;border-color:#007bff}.btn-outline-primary:hover{color:#fff;background-color:#007bff;border-color:#007bff}.btn-outline-primary.focus,.btn-outline-primary:focus{box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.btn-outline-primary.disabled,.btn-outline-primary:disabled{color:#007bff;background-color:transparent}.btn-outline-primary:not(:disabled):not(.disabled).active,.btn-outline-primary:not(:disabled):not(.disabled):active,.show>.btn-outline-primary.dropdown-toggle{color:#fff;background-color:#007bff;border-color:#007bff}.btn-outline-primary:not(:disabled):not(.disabled).active:focus,.btn-outline-primary:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-primary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.btn-outline-secondary{color:#6c757d;border-color:#6c757d}.btn-outline-secondary:hover{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-outline-secondary.focus,.btn-outline-secondary:focus{box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.btn-outline-secondary.disabled,.btn-outline-secondary:disabled{color:#6c757d;background-color:transparent}.btn-outline-secondary:not(:disabled):not(.disabled).active,.btn-outline-secondary:not(:disabled):not(.disabled):active,.show>.btn-outline-secondary.dropdown-toggle{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-outline-secondary:not(:disabled):not(.disabled).active:focus,.btn-outline-secondary:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.btn-outline-success{color:#28a745;border-color:#28a745}.btn-outline-success:hover{color:#fff;background-color:#28a745;border-color:#28a745}.btn-outline-success.focus,.btn-outline-success:focus{box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.btn-outline-success.disabled,.btn-outline-success:disabled{color:#28a745;background-color:transparent}.btn-outline-success:not(:disabled):not(.disabled).active,.btn-outline-success:not(:disabled):not(.disabled):active,.show>.btn-outline-success.dropdown-toggle{color:#fff;background-color:#28a745;border-color:#28a745}.btn-outline-success:not(:disabled):not(.disabled).active:focus,.btn-outline-success:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-success.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.btn-outline-info{color:#17a2b8;border-color:#17a2b8}.btn-outline-info:hover{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-outline-info.focus,.btn-outline-info:focus{box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.btn-outline-info.disabled,.btn-outline-info:disabled{color:#17a2b8;background-color:transparent}.btn-outline-info:not(:disabled):not(.disabled).active,.btn-outline-info:not(:disabled):not(.disabled):active,.show>.btn-outline-info.dropdown-toggle{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-outline-info:not(:disabled):not(.disabled).active:focus,.btn-outline-info:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-info.dropdown-toggle:focus{box-shadow:0 0 0 .2rem 
rgba(23,162,184,.5)}.btn-outline-warning{color:#ffc107;border-color:#ffc107}.btn-outline-warning:hover{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-outline-warning.focus,.btn-outline-warning:focus{box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.btn-outline-warning.disabled,.btn-outline-warning:disabled{color:#ffc107;background-color:transparent}.btn-outline-warning:not(:disabled):not(.disabled).active,.btn-outline-warning:not(:disabled):not(.disabled):active,.show>.btn-outline-warning.dropdown-toggle{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-outline-warning:not(:disabled):not(.disabled).active:focus,.btn-outline-warning:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-warning.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.btn-outline-danger{color:#dc3545;border-color:#dc3545}.btn-outline-danger:hover{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-outline-danger.focus,.btn-outline-danger:focus{box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.btn-outline-danger.disabled,.btn-outline-danger:disabled{color:#dc3545;background-color:transparent}.btn-outline-danger:not(:disabled):not(.disabled).active,.btn-outline-danger:not(:disabled):not(.disabled):active,.show>.btn-outline-danger.dropdown-toggle{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-outline-danger:not(:disabled):not(.disabled).active:focus,.btn-outline-danger:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-danger.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.btn-outline-light{color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light:hover{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light.focus,.btn-outline-light:focus{box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.btn-outline-light.disabled,.btn-outline-light:disabled{color:#f8f9fa;background-color:transparent}.btn-outline-light:not(:disabled):not(.disabled).active,.btn-outline-light:not(:disabled):not(.disabled):active,.show>.btn-outline-light.dropdown-toggle{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light:not(:disabled):not(.disabled).active:focus,.btn-outline-light:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-light.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.btn-outline-dark{color:#343a40;border-color:#343a40}.btn-outline-dark:hover{color:#fff;background-color:#343a40;border-color:#343a40}.btn-outline-dark.focus,.btn-outline-dark:focus{box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.btn-outline-dark.disabled,.btn-outline-dark:disabled{color:#343a40;background-color:transparent}.btn-outline-dark:not(:disabled):not(.disabled).active,.btn-outline-dark:not(:disabled):not(.disabled):active,.show>.btn-outline-dark.dropdown-toggle{color:#fff;background-color:#343a40;border-color:#343a40}.btn-outline-dark:not(:disabled):not(.disabled).active:focus,.btn-outline-dark:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-dark.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.btn-link{font-weight:400;color:#007bff;text-decoration:none}.btn-link:hover{color:#0056b3;text-decoration:underline}.btn-link.focus,.btn-link:focus{text-decoration:underline;box-shadow:none}.btn-link.disabled,.btn-link:disabled{color:#6c757d;pointer-events:none}.btn-group-lg>.btn,.btn-lg{padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}.btn-group-sm>.btn,.btn-sm{padding:.25rem 
.5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.btn-block{display:block;width:100%}.btn-block+.btn-block{margin-top:.5rem}input[type=button].btn-block,input[type=reset].btn-block,input[type=submit].btn-block{width:100%}.fade{transition:opacity .15s linear}@media (prefers-reduced-motion:reduce){.fade{transition:none}}.fade:not(.show){opacity:0}.collapse:not(.show){display:none}.collapsing{position:relative;height:0;overflow:hidden;transition:height .35s ease}@media (prefers-reduced-motion:reduce){.collapsing{transition:none}}.dropdown,.dropleft,.dropright,.dropup{position:relative}.dropdown-toggle{white-space:nowrap}.dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid;border-right:.3em solid transparent;border-bottom:0;border-left:.3em solid transparent}.dropdown-toggle:empty::after{margin-left:0}.dropdown-menu{position:absolute;top:100%;left:0;z-index:1000;display:none;float:left;min-width:10rem;padding:.5rem 0;margin:.125rem 0 0;font-size:1rem;color:#212529;text-align:left;list-style:none;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.15);border-radius:.25rem}.dropdown-menu-left{right:auto;left:0}.dropdown-menu-right{right:0;left:auto}@media (min-width:576px){.dropdown-menu-sm-left{right:auto;left:0}.dropdown-menu-sm-right{right:0;left:auto}}@media (min-width:768px){.dropdown-menu-md-left{right:auto;left:0}.dropdown-menu-md-right{right:0;left:auto}}@media (min-width:992px){.dropdown-menu-lg-left{right:auto;left:0}.dropdown-menu-lg-right{right:0;left:auto}}@media (min-width:1200px){.dropdown-menu-xl-left{right:auto;left:0}.dropdown-menu-xl-right{right:0;left:auto}}.dropup .dropdown-menu{top:auto;bottom:100%;margin-top:0;margin-bottom:.125rem}.dropup .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:0;border-right:.3em solid transparent;border-bottom:.3em solid;border-left:.3em solid transparent}.dropup .dropdown-toggle:empty::after{margin-left:0}.dropright .dropdown-menu{top:0;right:auto;left:100%;margin-top:0;margin-left:.125rem}.dropright .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-right:0;border-bottom:.3em solid transparent;border-left:.3em solid}.dropright .dropdown-toggle:empty::after{margin-left:0}.dropright .dropdown-toggle::after{vertical-align:0}.dropleft .dropdown-menu{top:0;right:100%;left:auto;margin-top:0;margin-right:.125rem}.dropleft .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:""}.dropleft .dropdown-toggle::after{display:none}.dropleft .dropdown-toggle::before{display:inline-block;margin-right:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-right:.3em solid;border-bottom:.3em solid transparent}.dropleft .dropdown-toggle:empty::after{margin-left:0}.dropleft .dropdown-toggle::before{vertical-align:0}.dropdown-menu[x-placement^=bottom],.dropdown-menu[x-placement^=left],.dropdown-menu[x-placement^=right],.dropdown-menu[x-placement^=top]{right:auto;bottom:auto}.dropdown-divider{height:0;margin:.5rem 0;overflow:hidden;border-top:1px solid #e9ecef}.dropdown-item{display:block;width:100%;padding:.25rem 
1.5rem;clear:both;font-weight:400;color:#212529;text-align:inherit;white-space:nowrap;background-color:transparent;border:0}.dropdown-item:focus,.dropdown-item:hover{color:#16181b;text-decoration:none;background-color:#f8f9fa}.dropdown-item.active,.dropdown-item:active{color:#fff;text-decoration:none;background-color:#007bff}.dropdown-item.disabled,.dropdown-item:disabled{color:#6c757d;pointer-events:none;background-color:transparent}.dropdown-menu.show{display:block}.dropdown-header{display:block;padding:.5rem 1.5rem;margin-bottom:0;font-size:.875rem;color:#6c757d;white-space:nowrap}.dropdown-item-text{display:block;padding:.25rem 1.5rem;color:#212529}.btn-group,.btn-group-vertical{position:relative;display:-ms-inline-flexbox;display:inline-flex;vertical-align:middle}.btn-group-vertical>.btn,.btn-group>.btn{position:relative;-ms-flex:1 1 auto;flex:1 1 auto}.btn-group-vertical>.btn:hover,.btn-group>.btn:hover{z-index:1}.btn-group-vertical>.btn.active,.btn-group-vertical>.btn:active,.btn-group-vertical>.btn:focus,.btn-group>.btn.active,.btn-group>.btn:active,.btn-group>.btn:focus{z-index:1}.btn-toolbar{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-pack:start;justify-content:flex-start}.btn-toolbar .input-group{width:auto}.btn-group>.btn-group:not(:first-child),.btn-group>.btn:not(:first-child){margin-left:-1px}.btn-group>.btn-group:not(:last-child)>.btn,.btn-group>.btn:not(:last-child):not(.dropdown-toggle){border-top-right-radius:0;border-bottom-right-radius:0}.btn-group>.btn-group:not(:first-child)>.btn,.btn-group>.btn:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.dropdown-toggle-split{padding-right:.5625rem;padding-left:.5625rem}.dropdown-toggle-split::after,.dropright .dropdown-toggle-split::after,.dropup .dropdown-toggle-split::after{margin-left:0}.dropleft .dropdown-toggle-split::before{margin-right:0}.btn-group-sm>.btn+.dropdown-toggle-split,.btn-sm+.dropdown-toggle-split{padding-right:.375rem;padding-left:.375rem}.btn-group-lg>.btn+.dropdown-toggle-split,.btn-lg+.dropdown-toggle-split{padding-right:.75rem;padding-left:.75rem}.btn-group-vertical{-ms-flex-direction:column;flex-direction:column;-ms-flex-align:start;align-items:flex-start;-ms-flex-pack:center;justify-content:center}.btn-group-vertical>.btn,.btn-group-vertical>.btn-group{width:100%}.btn-group-vertical>.btn-group:not(:first-child),.btn-group-vertical>.btn:not(:first-child){margin-top:-1px}.btn-group-vertical>.btn-group:not(:last-child)>.btn,.btn-group-vertical>.btn:not(:last-child):not(.dropdown-toggle){border-bottom-right-radius:0;border-bottom-left-radius:0}.btn-group-vertical>.btn-group:not(:first-child)>.btn,.btn-group-vertical>.btn:not(:first-child){border-top-left-radius:0;border-top-right-radius:0}.btn-group-toggle>.btn,.btn-group-toggle>.btn-group>.btn{margin-bottom:0}.btn-group-toggle>.btn input[type=checkbox],.btn-group-toggle>.btn input[type=radio],.btn-group-toggle>.btn-group>.btn input[type=checkbox],.btn-group-toggle>.btn-group>.btn input[type=radio]{position:absolute;clip:rect(0,0,0,0);pointer-events:none}.input-group{position:relative;display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:stretch;align-items:stretch;width:100%}.input-group>.custom-file,.input-group>.custom-select,.input-group>.form-control,.input-group>.form-control-plaintext{position:relative;-ms-flex:1 1 0%;flex:1 1 
0%;min-width:0;margin-bottom:0}.input-group>.custom-file+.custom-file,.input-group>.custom-file+.custom-select,.input-group>.custom-file+.form-control,.input-group>.custom-select+.custom-file,.input-group>.custom-select+.custom-select,.input-group>.custom-select+.form-control,.input-group>.form-control+.custom-file,.input-group>.form-control+.custom-select,.input-group>.form-control+.form-control,.input-group>.form-control-plaintext+.custom-file,.input-group>.form-control-plaintext+.custom-select,.input-group>.form-control-plaintext+.form-control{margin-left:-1px}.input-group>.custom-file .custom-file-input:focus~.custom-file-label,.input-group>.custom-select:focus,.input-group>.form-control:focus{z-index:3}.input-group>.custom-file .custom-file-input:focus{z-index:4}.input-group>.custom-select:not(:last-child),.input-group>.form-control:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.custom-select:not(:first-child),.input-group>.form-control:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.input-group>.custom-file{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center}.input-group>.custom-file:not(:last-child) .custom-file-label,.input-group>.custom-file:not(:last-child) .custom-file-label::after{border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.custom-file:not(:first-child) .custom-file-label{border-top-left-radius:0;border-bottom-left-radius:0}.input-group-append,.input-group-prepend{display:-ms-flexbox;display:flex}.input-group-append .btn,.input-group-prepend .btn{position:relative;z-index:2}.input-group-append .btn:focus,.input-group-prepend .btn:focus{z-index:3}.input-group-append .btn+.btn,.input-group-append .btn+.input-group-text,.input-group-append .input-group-text+.btn,.input-group-append .input-group-text+.input-group-text,.input-group-prepend .btn+.btn,.input-group-prepend .btn+.input-group-text,.input-group-prepend .input-group-text+.btn,.input-group-prepend .input-group-text+.input-group-text{margin-left:-1px}.input-group-prepend{margin-right:-1px}.input-group-append{margin-left:-1px}.input-group-text{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;padding:.375rem .75rem;margin-bottom:0;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;text-align:center;white-space:nowrap;background-color:#e9ecef;border:1px solid #ced4da;border-radius:.25rem}.input-group-text input[type=checkbox],.input-group-text input[type=radio]{margin-top:0}.input-group-lg>.custom-select,.input-group-lg>.form-control:not(textarea){height:calc(1.5em + 1rem + 2px)}.input-group-lg>.custom-select,.input-group-lg>.form-control,.input-group-lg>.input-group-append>.btn,.input-group-lg>.input-group-append>.input-group-text,.input-group-lg>.input-group-prepend>.btn,.input-group-lg>.input-group-prepend>.input-group-text{padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}.input-group-sm>.custom-select,.input-group-sm>.form-control:not(textarea){height:calc(1.5em + .5rem + 2px)}.input-group-sm>.custom-select,.input-group-sm>.form-control,.input-group-sm>.input-group-append>.btn,.input-group-sm>.input-group-append>.input-group-text,.input-group-sm>.input-group-prepend>.btn,.input-group-sm>.input-group-prepend>.input-group-text{padding:.25rem 
.5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.input-group-lg>.custom-select,.input-group-sm>.custom-select{padding-right:1.75rem}.input-group>.input-group-append:last-child>.btn:not(:last-child):not(.dropdown-toggle),.input-group>.input-group-append:last-child>.input-group-text:not(:last-child),.input-group>.input-group-append:not(:last-child)>.btn,.input-group>.input-group-append:not(:last-child)>.input-group-text,.input-group>.input-group-prepend>.btn,.input-group>.input-group-prepend>.input-group-text{border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.input-group-append>.btn,.input-group>.input-group-append>.input-group-text,.input-group>.input-group-prepend:first-child>.btn:not(:first-child),.input-group>.input-group-prepend:first-child>.input-group-text:not(:first-child),.input-group>.input-group-prepend:not(:first-child)>.btn,.input-group>.input-group-prepend:not(:first-child)>.input-group-text{border-top-left-radius:0;border-bottom-left-radius:0}.custom-control{position:relative;display:block;min-height:1.5rem;padding-left:1.5rem}.custom-control-inline{display:-ms-inline-flexbox;display:inline-flex;margin-right:1rem}.custom-control-input{position:absolute;left:0;z-index:-1;width:1rem;height:1.25rem;opacity:0}.custom-control-input:checked~.custom-control-label::before{color:#fff;border-color:#007bff;background-color:#007bff}.custom-control-input:focus~.custom-control-label::before{box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-control-input:focus:not(:checked)~.custom-control-label::before{border-color:#80bdff}.custom-control-input:not(:disabled):active~.custom-control-label::before{color:#fff;background-color:#b3d7ff;border-color:#b3d7ff}.custom-control-input:disabled~.custom-control-label,.custom-control-input[disabled]~.custom-control-label{color:#6c757d}.custom-control-input:disabled~.custom-control-label::before,.custom-control-input[disabled]~.custom-control-label::before{background-color:#e9ecef}.custom-control-label{position:relative;margin-bottom:0;vertical-align:top}.custom-control-label::before{position:absolute;top:.25rem;left:-1.5rem;display:block;width:1rem;height:1rem;pointer-events:none;content:"";background-color:#fff;border:#adb5bd solid 1px}.custom-control-label::after{position:absolute;top:.25rem;left:-1.5rem;display:block;width:1rem;height:1rem;content:"";background:no-repeat 50%/50% 50%}.custom-checkbox .custom-control-label::before{border-radius:.25rem}.custom-checkbox .custom-control-input:checked~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath fill='%23fff' d='M6.564.75l-3.59 3.612-1.538-1.55L0 4.26l2.974 2.99L8 2.193z'/%3e%3c/svg%3e")}.custom-checkbox .custom-control-input:indeterminate~.custom-control-label::before{border-color:#007bff;background-color:#007bff}.custom-checkbox .custom-control-input:indeterminate~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='4' viewBox='0 0 4 4'%3e%3cpath stroke='%23fff' d='M0 2h4'/%3e%3c/svg%3e")}.custom-checkbox .custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-checkbox .custom-control-input:disabled:indeterminate~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-radio .custom-control-label::before{border-radius:50%}.custom-radio 
.custom-control-input:checked~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%23fff'/%3e%3c/svg%3e")}.custom-radio .custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-switch{padding-left:2.25rem}.custom-switch .custom-control-label::before{left:-2.25rem;width:1.75rem;pointer-events:all;border-radius:.5rem}.custom-switch .custom-control-label::after{top:calc(.25rem + 2px);left:calc(-2.25rem + 2px);width:calc(1rem - 4px);height:calc(1rem - 4px);background-color:#adb5bd;border-radius:.5rem;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out,-webkit-transform .15s ease-in-out;transition:transform .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:transform .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out,-webkit-transform .15s ease-in-out}@media (prefers-reduced-motion:reduce){.custom-switch .custom-control-label::after{transition:none}}.custom-switch .custom-control-input:checked~.custom-control-label::after{background-color:#fff;-webkit-transform:translateX(.75rem);transform:translateX(.75rem)}.custom-switch .custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-select{display:inline-block;width:100%;height:calc(1.5em + .75rem + 2px);padding:.375rem 1.75rem .375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;vertical-align:middle;background:#fff url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5' viewBox='0 0 4 5'%3e%3cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3e%3c/svg%3e") no-repeat right .75rem center/8px 10px;border:1px solid #ced4da;border-radius:.25rem;-webkit-appearance:none;-moz-appearance:none;appearance:none}.custom-select:focus{border-color:#80bdff;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-select:focus::-ms-value{color:#495057;background-color:#fff}.custom-select[multiple],.custom-select[size]:not([size="1"]){height:auto;padding-right:.75rem;background-image:none}.custom-select:disabled{color:#6c757d;background-color:#e9ecef}.custom-select::-ms-expand{display:none}.custom-select:-moz-focusring{color:transparent;text-shadow:0 0 0 #495057}.custom-select-sm{height:calc(1.5em + .5rem + 2px);padding-top:.25rem;padding-bottom:.25rem;padding-left:.5rem;font-size:.875rem}.custom-select-lg{height:calc(1.5em + 1rem + 2px);padding-top:.5rem;padding-bottom:.5rem;padding-left:1rem;font-size:1.25rem}.custom-file{position:relative;display:inline-block;width:100%;height:calc(1.5em + .75rem + 2px);margin-bottom:0}.custom-file-input{position:relative;z-index:2;width:100%;height:calc(1.5em + .75rem + 2px);margin:0;opacity:0}.custom-file-input:focus~.custom-file-label{border-color:#80bdff;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-file-input:disabled~.custom-file-label,.custom-file-input[disabled]~.custom-file-label{background-color:#e9ecef}.custom-file-input:lang(en)~.custom-file-label::after{content:"Browse"}.custom-file-input~.custom-file-label[data-browse]::after{content:attr(data-browse)}.custom-file-label{position:absolute;top:0;right:0;left:0;z-index:1;height:calc(1.5em + .75rem + 2px);padding:.375rem 
.75rem;font-weight:400;line-height:1.5;color:#495057;background-color:#fff;border:1px solid #ced4da;border-radius:.25rem}.custom-file-label::after{position:absolute;top:0;right:0;bottom:0;z-index:3;display:block;height:calc(1.5em + .75rem);padding:.375rem .75rem;line-height:1.5;color:#495057;content:"Browse";background-color:#e9ecef;border-left:inherit;border-radius:0 .25rem .25rem 0}.custom-range{width:100%;height:1.4rem;padding:0;background-color:transparent;-webkit-appearance:none;-moz-appearance:none;appearance:none}.custom-range:focus{outline:0}.custom-range:focus::-webkit-slider-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range:focus::-moz-range-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range:focus::-ms-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range::-moz-focus-outer{border:0}.custom-range::-webkit-slider-thumb{width:1rem;height:1rem;margin-top:-.25rem;background-color:#007bff;border:0;border-radius:1rem;-webkit-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-webkit-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-webkit-slider-thumb{-webkit-transition:none;transition:none}}.custom-range::-webkit-slider-thumb:active{background-color:#b3d7ff}.custom-range::-webkit-slider-runnable-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.custom-range::-moz-range-thumb{width:1rem;height:1rem;background-color:#007bff;border:0;border-radius:1rem;-moz-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-moz-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-moz-range-thumb{-moz-transition:none;transition:none}}.custom-range::-moz-range-thumb:active{background-color:#b3d7ff}.custom-range::-moz-range-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.custom-range::-ms-thumb{width:1rem;height:1rem;margin-top:0;margin-right:.2rem;margin-left:.2rem;background-color:#007bff;border:0;border-radius:1rem;-ms-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none}@media 
(prefers-reduced-motion:reduce){.custom-range::-ms-thumb{-ms-transition:none;transition:none}}.custom-range::-ms-thumb:active{background-color:#b3d7ff}.custom-range::-ms-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:transparent;border-color:transparent;border-width:.5rem}.custom-range::-ms-fill-lower{background-color:#dee2e6;border-radius:1rem}.custom-range::-ms-fill-upper{margin-right:15px;background-color:#dee2e6;border-radius:1rem}.custom-range:disabled::-webkit-slider-thumb{background-color:#adb5bd}.custom-range:disabled::-webkit-slider-runnable-track{cursor:default}.custom-range:disabled::-moz-range-thumb{background-color:#adb5bd}.custom-range:disabled::-moz-range-track{cursor:default}.custom-range:disabled::-ms-thumb{background-color:#adb5bd}.custom-control-label::before,.custom-file-label,.custom-select{transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.custom-control-label::before,.custom-file-label,.custom-select{transition:none}}.nav{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;padding-left:0;margin-bottom:0;list-style:none}.nav-link{display:block;padding:.5rem 1rem}.nav-link:focus,.nav-link:hover{text-decoration:none}.nav-link.disabled{color:#6c757d;pointer-events:none;cursor:default}.nav-tabs{border-bottom:1px solid #dee2e6}.nav-tabs .nav-item{margin-bottom:-1px}.nav-tabs .nav-link{border:1px solid transparent;border-top-left-radius:.25rem;border-top-right-radius:.25rem}.nav-tabs .nav-link:focus,.nav-tabs .nav-link:hover{border-color:#e9ecef #e9ecef #dee2e6}.nav-tabs .nav-link.disabled{color:#6c757d;background-color:transparent;border-color:transparent}.nav-tabs .nav-item.show .nav-link,.nav-tabs .nav-link.active{color:#495057;background-color:#fff;border-color:#dee2e6 #dee2e6 #fff}.nav-tabs .dropdown-menu{margin-top:-1px;border-top-left-radius:0;border-top-right-radius:0}.nav-pills .nav-link{border-radius:.25rem}.nav-pills .nav-link.active,.nav-pills .show>.nav-link{color:#fff;background-color:#007bff}.nav-fill .nav-item{-ms-flex:1 1 auto;flex:1 1 auto;text-align:center}.nav-justified .nav-item{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;text-align:center}.tab-content>.tab-pane{display:none}.tab-content>.active{display:block}.navbar{position:relative;display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:justify;justify-content:space-between;padding:.5rem 1rem}.navbar .container,.navbar .container-fluid,.navbar .container-lg,.navbar .container-md,.navbar .container-sm,.navbar .container-xl{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:justify;justify-content:space-between}.navbar-brand{display:inline-block;padding-top:.3125rem;padding-bottom:.3125rem;margin-right:1rem;font-size:1.25rem;line-height:inherit;white-space:nowrap}.navbar-brand:focus,.navbar-brand:hover{text-decoration:none}.navbar-nav{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;padding-left:0;margin-bottom:0;list-style:none}.navbar-nav .nav-link{padding-right:0;padding-left:0}.navbar-nav .dropdown-menu{position:static;float:none}.navbar-text{display:inline-block;padding-top:.5rem;padding-bottom:.5rem}.navbar-collapse{-ms-flex-preferred-size:100%;flex-basis:100%;-ms-flex-positive:1;flex-grow:1;-ms-flex-align:center;align-items:center}.navbar-toggler{padding:.25rem 
.75rem;font-size:1.25rem;line-height:1;background-color:transparent;border:1px solid transparent;border-radius:.25rem}.navbar-toggler:focus,.navbar-toggler:hover{text-decoration:none}.navbar-toggler-icon{display:inline-block;width:1.5em;height:1.5em;vertical-align:middle;content:"";background:no-repeat center center;background-size:100% 100%}@media (max-width:575.98px){.navbar-expand-sm>.container,.navbar-expand-sm>.container-fluid,.navbar-expand-sm>.container-lg,.navbar-expand-sm>.container-md,.navbar-expand-sm>.container-sm,.navbar-expand-sm>.container-xl{padding-right:0;padding-left:0}}@media (min-width:576px){.navbar-expand-sm{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-sm .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-sm .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-sm .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-sm>.container,.navbar-expand-sm>.container-fluid,.navbar-expand-sm>.container-lg,.navbar-expand-sm>.container-md,.navbar-expand-sm>.container-sm,.navbar-expand-sm>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-sm .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-sm .navbar-toggler{display:none}}@media (max-width:767.98px){.navbar-expand-md>.container,.navbar-expand-md>.container-fluid,.navbar-expand-md>.container-lg,.navbar-expand-md>.container-md,.navbar-expand-md>.container-sm,.navbar-expand-md>.container-xl{padding-right:0;padding-left:0}}@media (min-width:768px){.navbar-expand-md{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-md .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-md .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-md .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-md>.container,.navbar-expand-md>.container-fluid,.navbar-expand-md>.container-lg,.navbar-expand-md>.container-md,.navbar-expand-md>.container-sm,.navbar-expand-md>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-md .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-md .navbar-toggler{display:none}}@media (max-width:991.98px){.navbar-expand-lg>.container,.navbar-expand-lg>.container-fluid,.navbar-expand-lg>.container-lg,.navbar-expand-lg>.container-md,.navbar-expand-lg>.container-sm,.navbar-expand-lg>.container-xl{padding-right:0;padding-left:0}}@media (min-width:992px){.navbar-expand-lg{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-lg .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-lg .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-lg .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-lg>.container,.navbar-expand-lg>.container-fluid,.navbar-expand-lg>.container-lg,.navbar-expand-lg>.container-md,.navbar-expand-lg>.container-sm,.navbar-expand-lg>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-lg .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-lg .navbar-toggler{display:none}}@media 
(max-width:1199.98px){.navbar-expand-xl>.container,.navbar-expand-xl>.container-fluid,.navbar-expand-xl>.container-lg,.navbar-expand-xl>.container-md,.navbar-expand-xl>.container-sm,.navbar-expand-xl>.container-xl{padding-right:0;padding-left:0}}@media (min-width:1200px){.navbar-expand-xl{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-xl .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-xl .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-xl .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-xl>.container,.navbar-expand-xl>.container-fluid,.navbar-expand-xl>.container-lg,.navbar-expand-xl>.container-md,.navbar-expand-xl>.container-sm,.navbar-expand-xl>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-xl .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-xl .navbar-toggler{display:none}}.navbar-expand{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand>.container,.navbar-expand>.container-fluid,.navbar-expand>.container-lg,.navbar-expand>.container-md,.navbar-expand>.container-sm,.navbar-expand>.container-xl{padding-right:0;padding-left:0}.navbar-expand .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand .navbar-nav .dropdown-menu{position:absolute}.navbar-expand .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand>.container,.navbar-expand>.container-fluid,.navbar-expand>.container-lg,.navbar-expand>.container-md,.navbar-expand>.container-sm,.navbar-expand>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand .navbar-toggler{display:none}.navbar-light .navbar-brand{color:rgba(0,0,0,.9)}.navbar-light .navbar-brand:focus,.navbar-light .navbar-brand:hover{color:rgba(0,0,0,.9)}.navbar-light .navbar-nav .nav-link{color:rgba(0,0,0,.5)}.navbar-light .navbar-nav .nav-link:focus,.navbar-light .navbar-nav .nav-link:hover{color:rgba(0,0,0,.7)}.navbar-light .navbar-nav .nav-link.disabled{color:rgba(0,0,0,.3)}.navbar-light .navbar-nav .active>.nav-link,.navbar-light .navbar-nav .nav-link.active,.navbar-light .navbar-nav .nav-link.show,.navbar-light .navbar-nav .show>.nav-link{color:rgba(0,0,0,.9)}.navbar-light .navbar-toggler{color:rgba(0,0,0,.5);border-color:rgba(0,0,0,.1)}.navbar-light .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30' viewBox='0 0 30 30'%3e%3cpath stroke='rgba(0, 0, 0, 0.5)' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-light .navbar-text{color:rgba(0,0,0,.5)}.navbar-light .navbar-text a{color:rgba(0,0,0,.9)}.navbar-light .navbar-text a:focus,.navbar-light .navbar-text a:hover{color:rgba(0,0,0,.9)}.navbar-dark .navbar-brand{color:#fff}.navbar-dark .navbar-brand:focus,.navbar-dark .navbar-brand:hover{color:#fff}.navbar-dark .navbar-nav .nav-link{color:rgba(255,255,255,.5)}.navbar-dark .navbar-nav .nav-link:focus,.navbar-dark .navbar-nav .nav-link:hover{color:rgba(255,255,255,.75)}.navbar-dark .navbar-nav .nav-link.disabled{color:rgba(255,255,255,.25)}.navbar-dark .navbar-nav .active>.nav-link,.navbar-dark .navbar-nav .nav-link.active,.navbar-dark .navbar-nav .nav-link.show,.navbar-dark .navbar-nav 
.show>.nav-link{color:#fff}.navbar-dark .navbar-toggler{color:rgba(255,255,255,.5);border-color:rgba(255,255,255,.1)}.navbar-dark .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30' viewBox='0 0 30 30'%3e%3cpath stroke='rgba(255, 255, 255, 0.5)' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-dark .navbar-text{color:rgba(255,255,255,.5)}.navbar-dark .navbar-text a{color:#fff}.navbar-dark .navbar-text a:focus,.navbar-dark .navbar-text a:hover{color:#fff}.card{position:relative;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;min-width:0;word-wrap:break-word;background-color:#fff;background-clip:border-box;border:1px solid rgba(0,0,0,.125);border-radius:.25rem}.card>hr{margin-right:0;margin-left:0}.card>.list-group:first-child .list-group-item:first-child{border-top-left-radius:.25rem;border-top-right-radius:.25rem}.card>.list-group:last-child .list-group-item:last-child{border-bottom-right-radius:.25rem;border-bottom-left-radius:.25rem}.card-body{-ms-flex:1 1 auto;flex:1 1 auto;min-height:1px;padding:1.25rem}.card-title{margin-bottom:.75rem}.card-subtitle{margin-top:-.375rem;margin-bottom:0}.card-text:last-child{margin-bottom:0}.card-link:hover{text-decoration:none}.card-link+.card-link{margin-left:1.25rem}.card-header{padding:.75rem 1.25rem;margin-bottom:0;background-color:rgba(0,0,0,.03);border-bottom:1px solid rgba(0,0,0,.125)}.card-header:first-child{border-radius:calc(.25rem - 1px) calc(.25rem - 1px) 0 0}.card-header+.list-group .list-group-item:first-child{border-top:0}.card-footer{padding:.75rem 1.25rem;background-color:rgba(0,0,0,.03);border-top:1px solid rgba(0,0,0,.125)}.card-footer:last-child{border-radius:0 0 calc(.25rem - 1px) calc(.25rem - 1px)}.card-header-tabs{margin-right:-.625rem;margin-bottom:-.75rem;margin-left:-.625rem;border-bottom:0}.card-header-pills{margin-right:-.625rem;margin-left:-.625rem}.card-img-overlay{position:absolute;top:0;right:0;bottom:0;left:0;padding:1.25rem}.card-img,.card-img-bottom,.card-img-top{-ms-flex-negative:0;flex-shrink:0;width:100%}.card-img,.card-img-top{border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card-img,.card-img-bottom{border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card-deck .card{margin-bottom:15px}@media (min-width:576px){.card-deck{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap;margin-right:-15px;margin-left:-15px}.card-deck .card{-ms-flex:1 0 0%;flex:1 0 0%;margin-right:15px;margin-bottom:0;margin-left:15px}}.card-group>.card{margin-bottom:15px}@media (min-width:576px){.card-group{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap}.card-group>.card{-ms-flex:1 0 0%;flex:1 0 0%;margin-bottom:0}.card-group>.card+.card{margin-left:0;border-left:0}.card-group>.card:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.card-group>.card:not(:last-child) .card-header,.card-group>.card:not(:last-child) .card-img-top{border-top-right-radius:0}.card-group>.card:not(:last-child) .card-footer,.card-group>.card:not(:last-child) .card-img-bottom{border-bottom-right-radius:0}.card-group>.card:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.card-group>.card:not(:first-child) .card-header,.card-group>.card:not(:first-child) .card-img-top{border-top-left-radius:0}.card-group>.card:not(:first-child) 
.card-footer,.card-group>.card:not(:first-child) .card-img-bottom{border-bottom-left-radius:0}}.card-columns .card{margin-bottom:.75rem}@media (min-width:576px){.card-columns{-webkit-column-count:3;-moz-column-count:3;column-count:3;-webkit-column-gap:1.25rem;-moz-column-gap:1.25rem;column-gap:1.25rem;orphans:1;widows:1}.card-columns .card{display:inline-block;width:100%}}.accordion>.card{overflow:hidden}.accordion>.card:not(:last-of-type){border-bottom:0;border-bottom-right-radius:0;border-bottom-left-radius:0}.accordion>.card:not(:first-of-type){border-top-left-radius:0;border-top-right-radius:0}.accordion>.card>.card-header{border-radius:0;margin-bottom:-1px}.breadcrumb{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;padding:.75rem 1rem;margin-bottom:1rem;list-style:none;background-color:#e9ecef;border-radius:.25rem}.breadcrumb-item+.breadcrumb-item{padding-left:.5rem}.breadcrumb-item+.breadcrumb-item::before{display:inline-block;padding-right:.5rem;color:#6c757d;content:"/"}.breadcrumb-item+.breadcrumb-item:hover::before{text-decoration:underline}.breadcrumb-item+.breadcrumb-item:hover::before{text-decoration:none}.breadcrumb-item.active{color:#6c757d}.pagination{display:-ms-flexbox;display:flex;padding-left:0;list-style:none;border-radius:.25rem}.page-link{position:relative;display:block;padding:.5rem .75rem;margin-left:-1px;line-height:1.25;color:#007bff;background-color:#fff;border:1px solid #dee2e6}.page-link:hover{z-index:2;color:#0056b3;text-decoration:none;background-color:#e9ecef;border-color:#dee2e6}.page-link:focus{z-index:3;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.page-item:first-child .page-link{margin-left:0;border-top-left-radius:.25rem;border-bottom-left-radius:.25rem}.page-item:last-child .page-link{border-top-right-radius:.25rem;border-bottom-right-radius:.25rem}.page-item.active .page-link{z-index:3;color:#fff;background-color:#007bff;border-color:#007bff}.page-item.disabled .page-link{color:#6c757d;pointer-events:none;cursor:auto;background-color:#fff;border-color:#dee2e6}.pagination-lg .page-link{padding:.75rem 1.5rem;font-size:1.25rem;line-height:1.5}.pagination-lg .page-item:first-child .page-link{border-top-left-radius:.3rem;border-bottom-left-radius:.3rem}.pagination-lg .page-item:last-child .page-link{border-top-right-radius:.3rem;border-bottom-right-radius:.3rem}.pagination-sm .page-link{padding:.25rem .5rem;font-size:.875rem;line-height:1.5}.pagination-sm .page-item:first-child .page-link{border-top-left-radius:.2rem;border-bottom-left-radius:.2rem}.pagination-sm .page-item:last-child .page-link{border-top-right-radius:.2rem;border-bottom-right-radius:.2rem}.badge{display:inline-block;padding:.25em .4em;font-size:75%;font-weight:700;line-height:1;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.badge{transition:none}}a.badge:focus,a.badge:hover{text-decoration:none}.badge:empty{display:none}.btn .badge{position:relative;top:-1px}.badge-pill{padding-right:.6em;padding-left:.6em;border-radius:10rem}.badge-primary{color:#fff;background-color:#007bff}a.badge-primary:focus,a.badge-primary:hover{color:#fff;background-color:#0062cc}a.badge-primary.focus,a.badge-primary:focus{outline:0;box-shadow:0 0 0 .2rem 
rgba(0,123,255,.5)}.badge-secondary{color:#fff;background-color:#6c757d}a.badge-secondary:focus,a.badge-secondary:hover{color:#fff;background-color:#545b62}a.badge-secondary.focus,a.badge-secondary:focus{outline:0;box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.badge-success{color:#fff;background-color:#28a745}a.badge-success:focus,a.badge-success:hover{color:#fff;background-color:#1e7e34}a.badge-success.focus,a.badge-success:focus{outline:0;box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.badge-info{color:#fff;background-color:#17a2b8}a.badge-info:focus,a.badge-info:hover{color:#fff;background-color:#117a8b}a.badge-info.focus,a.badge-info:focus{outline:0;box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.badge-warning{color:#212529;background-color:#ffc107}a.badge-warning:focus,a.badge-warning:hover{color:#212529;background-color:#d39e00}a.badge-warning.focus,a.badge-warning:focus{outline:0;box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.badge-danger{color:#fff;background-color:#dc3545}a.badge-danger:focus,a.badge-danger:hover{color:#fff;background-color:#bd2130}a.badge-danger.focus,a.badge-danger:focus{outline:0;box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.badge-light{color:#212529;background-color:#f8f9fa}a.badge-light:focus,a.badge-light:hover{color:#212529;background-color:#dae0e5}a.badge-light.focus,a.badge-light:focus{outline:0;box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.badge-dark{color:#fff;background-color:#343a40}a.badge-dark:focus,a.badge-dark:hover{color:#fff;background-color:#1d2124}a.badge-dark.focus,a.badge-dark:focus{outline:0;box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.jumbotron{padding:2rem 1rem;margin-bottom:2rem;background-color:#e9ecef;border-radius:.3rem}@media (min-width:576px){.jumbotron{padding:4rem 2rem}}.jumbotron-fluid{padding-right:0;padding-left:0;border-radius:0}.alert{position:relative;padding:.75rem 1.25rem;margin-bottom:1rem;border:1px solid transparent;border-radius:.25rem}.alert-heading{color:inherit}.alert-link{font-weight:700}.alert-dismissible{padding-right:4rem}.alert-dismissible .close{position:absolute;top:0;right:0;padding:.75rem 1.25rem;color:inherit}.alert-primary{color:#004085;background-color:#cce5ff;border-color:#b8daff}.alert-primary hr{border-top-color:#9fcdff}.alert-primary .alert-link{color:#002752}.alert-secondary{color:#383d41;background-color:#e2e3e5;border-color:#d6d8db}.alert-secondary hr{border-top-color:#c8cbcf}.alert-secondary .alert-link{color:#202326}.alert-success{color:#155724;background-color:#d4edda;border-color:#c3e6cb}.alert-success hr{border-top-color:#b1dfbb}.alert-success .alert-link{color:#0b2e13}.alert-info{color:#0c5460;background-color:#d1ecf1;border-color:#bee5eb}.alert-info hr{border-top-color:#abdde5}.alert-info .alert-link{color:#062c33}.alert-warning{color:#856404;background-color:#fff3cd;border-color:#ffeeba}.alert-warning hr{border-top-color:#ffe8a1}.alert-warning .alert-link{color:#533f03}.alert-danger{color:#721c24;background-color:#f8d7da;border-color:#f5c6cb}.alert-danger hr{border-top-color:#f1b0b7}.alert-danger .alert-link{color:#491217}.alert-light{color:#818182;background-color:#fefefe;border-color:#fdfdfe}.alert-light hr{border-top-color:#ececf6}.alert-light .alert-link{color:#686868}.alert-dark{color:#1b1e21;background-color:#d6d8d9;border-color:#c6c8ca}.alert-dark hr{border-top-color:#b9bbbe}.alert-dark .alert-link{color:#040505}@-webkit-keyframes progress-bar-stripes{from{background-position:1rem 0}to{background-position:0 0}}@keyframes progress-bar-stripes{from{background-position:1rem 0}to{background-position:0 
0}}.progress{display:-ms-flexbox;display:flex;height:1rem;overflow:hidden;font-size:.75rem;background-color:#e9ecef;border-radius:.25rem}.progress-bar{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;-ms-flex-pack:center;justify-content:center;overflow:hidden;color:#fff;text-align:center;white-space:nowrap;background-color:#007bff;transition:width .6s ease}@media (prefers-reduced-motion:reduce){.progress-bar{transition:none}}.progress-bar-striped{background-image:linear-gradient(45deg,rgba(255,255,255,.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,.15) 50%,rgba(255,255,255,.15) 75%,transparent 75%,transparent);background-size:1rem 1rem}.progress-bar-animated{-webkit-animation:progress-bar-stripes 1s linear infinite;animation:progress-bar-stripes 1s linear infinite}@media (prefers-reduced-motion:reduce){.progress-bar-animated{-webkit-animation:none;animation:none}}.media{display:-ms-flexbox;display:flex;-ms-flex-align:start;align-items:flex-start}.media-body{-ms-flex:1;flex:1}.list-group{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;padding-left:0;margin-bottom:0}.list-group-item-action{width:100%;color:#495057;text-align:inherit}.list-group-item-action:focus,.list-group-item-action:hover{z-index:1;color:#495057;text-decoration:none;background-color:#f8f9fa}.list-group-item-action:active{color:#212529;background-color:#e9ecef}.list-group-item{position:relative;display:block;padding:.75rem 1.25rem;background-color:#fff;border:1px solid rgba(0,0,0,.125)}.list-group-item:first-child{border-top-left-radius:.25rem;border-top-right-radius:.25rem}.list-group-item:last-child{border-bottom-right-radius:.25rem;border-bottom-left-radius:.25rem}.list-group-item.disabled,.list-group-item:disabled{color:#6c757d;pointer-events:none;background-color:#fff}.list-group-item.active{z-index:2;color:#fff;background-color:#007bff;border-color:#007bff}.list-group-item+.list-group-item{border-top-width:0}.list-group-item+.list-group-item.active{margin-top:-1px;border-top-width:1px}.list-group-horizontal{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal .list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal .list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal .list-group-item.active{margin-top:0}.list-group-horizontal .list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal .list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}@media (min-width:576px){.list-group-horizontal-sm{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-sm .list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-sm .list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-sm .list-group-item.active{margin-top:0}.list-group-horizontal-sm .list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-sm .list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:768px){.list-group-horizontal-md{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-md .list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-md .list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-md 
.list-group-item.active{margin-top:0}.list-group-horizontal-md .list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-md .list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:992px){.list-group-horizontal-lg{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-lg .list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-lg .list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-lg .list-group-item.active{margin-top:0}.list-group-horizontal-lg .list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-lg .list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:1200px){.list-group-horizontal-xl{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-xl .list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-xl .list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-xl .list-group-item.active{margin-top:0}.list-group-horizontal-xl .list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-xl .list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}.list-group-flush .list-group-item{border-right-width:0;border-left-width:0;border-radius:0}.list-group-flush .list-group-item:first-child{border-top-width:0}.list-group-flush:last-child .list-group-item:last-child{border-bottom-width:0}.list-group-item-primary{color:#004085;background-color:#b8daff}.list-group-item-primary.list-group-item-action:focus,.list-group-item-primary.list-group-item-action:hover{color:#004085;background-color:#9fcdff}.list-group-item-primary.list-group-item-action.active{color:#fff;background-color:#004085;border-color:#004085}.list-group-item-secondary{color:#383d41;background-color:#d6d8db}.list-group-item-secondary.list-group-item-action:focus,.list-group-item-secondary.list-group-item-action:hover{color:#383d41;background-color:#c8cbcf}.list-group-item-secondary.list-group-item-action.active{color:#fff;background-color:#383d41;border-color:#383d41}.list-group-item-success{color:#155724;background-color:#c3e6cb}.list-group-item-success.list-group-item-action:focus,.list-group-item-success.list-group-item-action:hover{color:#155724;background-color:#b1dfbb}.list-group-item-success.list-group-item-action.active{color:#fff;background-color:#155724;border-color:#155724}.list-group-item-info{color:#0c5460;background-color:#bee5eb}.list-group-item-info.list-group-item-action:focus,.list-group-item-info.list-group-item-action:hover{color:#0c5460;background-color:#abdde5}.list-group-item-info.list-group-item-action.active{color:#fff;background-color:#0c5460;border-color:#0c5460}.list-group-item-warning{color:#856404;background-color:#ffeeba}.list-group-item-warning.list-group-item-action:focus,.list-group-item-warning.list-group-item-action:hover{color:#856404;background-color:#ffe8a1}.list-group-item-warning.list-group-item-action.active{color:#fff;background-color:#856404;border-color:#856404}.list-group-item-danger{color:#721c24;background-color:#f5c6cb}.list-group-item-danger.list-group-item-action:focus,.list-group-item-danger.list-group-item-action:hover{color:#721c24;background-color:#f1b0b7}.list-group-item-danger.list-group-item-action.active{color:#fff;
background-color:#721c24;border-color:#721c24}.list-group-item-light{color:#818182;background-color:#fdfdfe}.list-group-item-light.list-group-item-action:focus,.list-group-item-light.list-group-item-action:hover{color:#818182;background-color:#ececf6}.list-group-item-light.list-group-item-action.active{color:#fff;background-color:#818182;border-color:#818182}.list-group-item-dark{color:#1b1e21;background-color:#c6c8ca}.list-group-item-dark.list-group-item-action:focus,.list-group-item-dark.list-group-item-action:hover{color:#1b1e21;background-color:#b9bbbe}.list-group-item-dark.list-group-item-action.active{color:#fff;background-color:#1b1e21;border-color:#1b1e21}.close{float:right;font-size:1.5rem;font-weight:700;line-height:1;color:#000;text-shadow:0 1px 0 #fff;opacity:.5}.close:hover{color:#000;text-decoration:none}.close:not(:disabled):not(.disabled):focus,.close:not(:disabled):not(.disabled):hover{opacity:.75}button.close{padding:0;background-color:transparent;border:0;-webkit-appearance:none;-moz-appearance:none;appearance:none}a.close.disabled{pointer-events:none}.toast{max-width:350px;overflow:hidden;font-size:.875rem;background-color:rgba(255,255,255,.85);background-clip:padding-box;border:1px solid rgba(0,0,0,.1);box-shadow:0 .25rem .75rem rgba(0,0,0,.1);-webkit-backdrop-filter:blur(10px);backdrop-filter:blur(10px);opacity:0;border-radius:.25rem}.toast:not(:last-child){margin-bottom:.75rem}.toast.showing{opacity:1}.toast.show{display:block;opacity:1}.toast.hide{display:none}.toast-header{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;padding:.25rem .75rem;color:#6c757d;background-color:rgba(255,255,255,.85);background-clip:padding-box;border-bottom:1px solid rgba(0,0,0,.05)}.toast-body{padding:.75rem}.modal-open{overflow:hidden}.modal-open .modal{overflow-x:hidden;overflow-y:auto}.modal{position:fixed;top:0;left:0;z-index:1050;display:none;width:100%;height:100%;overflow:hidden;outline:0}.modal-dialog{position:relative;width:auto;margin:.5rem;pointer-events:none}.modal.fade .modal-dialog{transition:-webkit-transform .3s ease-out;transition:transform .3s ease-out;transition:transform .3s ease-out,-webkit-transform .3s ease-out;-webkit-transform:translate(0,-50px);transform:translate(0,-50px)}@media (prefers-reduced-motion:reduce){.modal.fade .modal-dialog{transition:none}}.modal.show .modal-dialog{-webkit-transform:none;transform:none}.modal.modal-static .modal-dialog{-webkit-transform:scale(1.02);transform:scale(1.02)}.modal-dialog-scrollable{display:-ms-flexbox;display:flex;max-height:calc(100% - 1rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 1rem);overflow:hidden}.modal-dialog-scrollable .modal-footer,.modal-dialog-scrollable .modal-header{-ms-flex-negative:0;flex-shrink:0}.modal-dialog-scrollable .modal-body{overflow-y:auto}.modal-dialog-centered{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;min-height:calc(100% - 1rem)}.modal-dialog-centered::before{display:block;height:calc(100vh - 1rem);content:""}.modal-dialog-centered.modal-dialog-scrollable{-ms-flex-direction:column;flex-direction:column;-ms-flex-pack:center;justify-content:center;height:100%}.modal-dialog-centered.modal-dialog-scrollable .modal-content{max-height:none}.modal-dialog-centered.modal-dialog-scrollable::before{content:none}.modal-content{position:relative;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;width:100%;pointer-events:auto;background-color:#fff;background-clip:padding-box;border:1px solid
rgba(0,0,0,.2);border-radius:.3rem;outline:0}.modal-backdrop{position:fixed;top:0;left:0;z-index:1040;width:100vw;height:100vh;background-color:#000}.modal-backdrop.fade{opacity:0}.modal-backdrop.show{opacity:.5}.modal-header{display:-ms-flexbox;display:flex;-ms-flex-align:start;align-items:flex-start;-ms-flex-pack:justify;justify-content:space-between;padding:1rem 1rem;border-bottom:1px solid #dee2e6;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.modal-header .close{padding:1rem 1rem;margin:-1rem -1rem -1rem auto}.modal-title{margin-bottom:0;line-height:1.5}.modal-body{position:relative;-ms-flex:1 1 auto;flex:1 1 auto;padding:1rem}.modal-footer{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:end;justify-content:flex-end;padding:.75rem;border-top:1px solid #dee2e6;border-bottom-right-radius:calc(.3rem - 1px);border-bottom-left-radius:calc(.3rem - 1px)}.modal-footer>*{margin:.25rem}.modal-scrollbar-measure{position:absolute;top:-9999px;width:50px;height:50px;overflow:scroll}@media (min-width:576px){.modal-dialog{max-width:500px;margin:1.75rem auto}.modal-dialog-scrollable{max-height:calc(100% - 3.5rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 3.5rem)}.modal-dialog-centered{min-height:calc(100% - 3.5rem)}.modal-dialog-centered::before{height:calc(100vh - 3.5rem)}.modal-sm{max-width:300px}}@media (min-width:992px){.modal-lg,.modal-xl{max-width:800px}}@media (min-width:1200px){.modal-xl{max-width:1140px}}.tooltip{position:absolute;z-index:1070;display:block;margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;opacity:0}.tooltip.show{opacity:.9}.tooltip .arrow{position:absolute;display:block;width:.8rem;height:.4rem}.tooltip .arrow::before{position:absolute;content:"";border-color:transparent;border-style:solid}.bs-tooltip-auto[x-placement^=top],.bs-tooltip-top{padding:.4rem 0}.bs-tooltip-auto[x-placement^=top] .arrow,.bs-tooltip-top .arrow{bottom:0}.bs-tooltip-auto[x-placement^=top] .arrow::before,.bs-tooltip-top .arrow::before{top:0;border-width:.4rem .4rem 0;border-top-color:#000}.bs-tooltip-auto[x-placement^=right],.bs-tooltip-right{padding:0 .4rem}.bs-tooltip-auto[x-placement^=right] .arrow,.bs-tooltip-right .arrow{left:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=right] .arrow::before,.bs-tooltip-right .arrow::before{right:0;border-width:.4rem .4rem .4rem 0;border-right-color:#000}.bs-tooltip-auto[x-placement^=bottom],.bs-tooltip-bottom{padding:.4rem 0}.bs-tooltip-auto[x-placement^=bottom] .arrow,.bs-tooltip-bottom .arrow{top:0}.bs-tooltip-auto[x-placement^=bottom] .arrow::before,.bs-tooltip-bottom .arrow::before{bottom:0;border-width:0 .4rem .4rem;border-bottom-color:#000}.bs-tooltip-auto[x-placement^=left],.bs-tooltip-left{padding:0 .4rem}.bs-tooltip-auto[x-placement^=left] .arrow,.bs-tooltip-left .arrow{right:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=left] .arrow::before,.bs-tooltip-left .arrow::before{left:0;border-width:.4rem 0 .4rem .4rem;border-left-color:#000}.tooltip-inner{max-width:200px;padding:.25rem 
.5rem;color:#fff;text-align:center;background-color:#000;border-radius:.25rem}.popover{position:absolute;top:0;left:0;z-index:1060;display:block;max-width:276px;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem}.popover .arrow{position:absolute;display:block;width:1rem;height:.5rem;margin:0 .3rem}.popover .arrow::after,.popover .arrow::before{position:absolute;display:block;content:"";border-color:transparent;border-style:solid}.bs-popover-auto[x-placement^=top],.bs-popover-top{margin-bottom:.5rem}.bs-popover-auto[x-placement^=top]>.arrow,.bs-popover-top>.arrow{bottom:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=top]>.arrow::before,.bs-popover-top>.arrow::before{bottom:0;border-width:.5rem .5rem 0;border-top-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=top]>.arrow::after,.bs-popover-top>.arrow::after{bottom:1px;border-width:.5rem .5rem 0;border-top-color:#fff}.bs-popover-auto[x-placement^=right],.bs-popover-right{margin-left:.5rem}.bs-popover-auto[x-placement^=right]>.arrow,.bs-popover-right>.arrow{left:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=right]>.arrow::before,.bs-popover-right>.arrow::before{left:0;border-width:.5rem .5rem .5rem 0;border-right-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=right]>.arrow::after,.bs-popover-right>.arrow::after{left:1px;border-width:.5rem .5rem .5rem 0;border-right-color:#fff}.bs-popover-auto[x-placement^=bottom],.bs-popover-bottom{margin-top:.5rem}.bs-popover-auto[x-placement^=bottom]>.arrow,.bs-popover-bottom>.arrow{top:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=bottom]>.arrow::before,.bs-popover-bottom>.arrow::before{top:0;border-width:0 .5rem .5rem .5rem;border-bottom-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=bottom]>.arrow::after,.bs-popover-bottom>.arrow::after{top:1px;border-width:0 .5rem .5rem .5rem;border-bottom-color:#fff}.bs-popover-auto[x-placement^=bottom] .popover-header::before,.bs-popover-bottom .popover-header::before{position:absolute;top:0;left:50%;display:block;width:1rem;margin-left:-.5rem;content:"";border-bottom:1px solid #f7f7f7}.bs-popover-auto[x-placement^=left],.bs-popover-left{margin-right:.5rem}.bs-popover-auto[x-placement^=left]>.arrow,.bs-popover-left>.arrow{right:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=left]>.arrow::before,.bs-popover-left>.arrow::before{right:0;border-width:.5rem 0 .5rem .5rem;border-left-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=left]>.arrow::after,.bs-popover-left>.arrow::after{right:1px;border-width:.5rem 0 .5rem .5rem;border-left-color:#fff}.popover-header{padding:.5rem .75rem;margin-bottom:0;font-size:1rem;background-color:#f7f7f7;border-bottom:1px solid #ebebeb;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.popover-header:empty{display:none}.popover-body{padding:.5rem 
.75rem;color:#212529}.carousel{position:relative}.carousel.pointer-event{-ms-touch-action:pan-y;touch-action:pan-y}.carousel-inner{position:relative;width:100%;overflow:hidden}.carousel-inner::after{display:block;clear:both;content:""}.carousel-item{position:relative;display:none;float:left;width:100%;margin-right:-100%;-webkit-backface-visibility:hidden;backface-visibility:hidden;transition:-webkit-transform .6s ease-in-out;transition:transform .6s ease-in-out;transition:transform .6s ease-in-out,-webkit-transform .6s ease-in-out}@media (prefers-reduced-motion:reduce){.carousel-item{transition:none}}.carousel-item-next,.carousel-item-prev,.carousel-item.active{display:block}.active.carousel-item-right,.carousel-item-next:not(.carousel-item-left){-webkit-transform:translateX(100%);transform:translateX(100%)}.active.carousel-item-left,.carousel-item-prev:not(.carousel-item-right){-webkit-transform:translateX(-100%);transform:translateX(-100%)}.carousel-fade .carousel-item{opacity:0;transition-property:opacity;-webkit-transform:none;transform:none}.carousel-fade .carousel-item-next.carousel-item-left,.carousel-fade .carousel-item-prev.carousel-item-right,.carousel-fade .carousel-item.active{z-index:1;opacity:1}.carousel-fade .active.carousel-item-left,.carousel-fade .active.carousel-item-right{z-index:0;opacity:0;transition:opacity 0s .6s}@media (prefers-reduced-motion:reduce){.carousel-fade .active.carousel-item-left,.carousel-fade .active.carousel-item-right{transition:none}}.carousel-control-next,.carousel-control-prev{position:absolute;top:0;bottom:0;z-index:1;display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;width:15%;color:#fff;text-align:center;opacity:.5;transition:opacity .15s ease}@media (prefers-reduced-motion:reduce){.carousel-control-next,.carousel-control-prev{transition:none}}.carousel-control-next:focus,.carousel-control-next:hover,.carousel-control-prev:focus,.carousel-control-prev:hover{color:#fff;text-decoration:none;outline:0;opacity:.9}.carousel-control-prev{left:0}.carousel-control-next{right:0}.carousel-control-next-icon,.carousel-control-prev-icon{display:inline-block;width:20px;height:20px;background:no-repeat 50%/100% 100%}.carousel-control-prev-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M5.25 0l-4 4 4 4 1.5-1.5L4.25 4l2.5-2.5L5.25 0z'/%3e%3c/svg%3e")}.carousel-control-next-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M2.75 0l-1.5 1.5L3.75 4l-2.5 2.5L2.75 8l4-4-4-4z'/%3e%3c/svg%3e")}.carousel-indicators{position:absolute;right:0;bottom:0;left:0;z-index:15;display:-ms-flexbox;display:flex;-ms-flex-pack:center;justify-content:center;padding-left:0;margin-right:15%;margin-left:15%;list-style:none}.carousel-indicators li{box-sizing:content-box;-ms-flex:0 1 auto;flex:0 1 auto;width:30px;height:3px;margin-right:3px;margin-left:3px;text-indent:-999px;cursor:pointer;background-color:#fff;background-clip:padding-box;border-top:10px solid transparent;border-bottom:10px solid transparent;opacity:.5;transition:opacity .6s ease}@media (prefers-reduced-motion:reduce){.carousel-indicators li{transition:none}}.carousel-indicators 
.active{opacity:1}.carousel-caption{position:absolute;right:15%;bottom:20px;left:15%;z-index:10;padding-top:20px;padding-bottom:20px;color:#fff;text-align:center}@-webkit-keyframes spinner-border{to{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}@keyframes spinner-border{to{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}.spinner-border{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;border:.25em solid currentColor;border-right-color:transparent;border-radius:50%;-webkit-animation:spinner-border .75s linear infinite;animation:spinner-border .75s linear infinite}.spinner-border-sm{width:1rem;height:1rem;border-width:.2em}@-webkit-keyframes spinner-grow{0%{-webkit-transform:scale(0);transform:scale(0)}50%{opacity:1}}@keyframes spinner-grow{0%{-webkit-transform:scale(0);transform:scale(0)}50%{opacity:1}}.spinner-grow{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;background-color:currentColor;border-radius:50%;opacity:0;-webkit-animation:spinner-grow .75s linear infinite;animation:spinner-grow .75s linear infinite}.spinner-grow-sm{width:1rem;height:1rem}.align-baseline{vertical-align:baseline!important}.align-top{vertical-align:top!important}.align-middle{vertical-align:middle!important}.align-bottom{vertical-align:bottom!important}.align-text-bottom{vertical-align:text-bottom!important}.align-text-top{vertical-align:text-top!important}.bg-primary{background-color:#007bff!important}a.bg-primary:focus,a.bg-primary:hover,button.bg-primary:focus,button.bg-primary:hover{background-color:#0062cc!important}.bg-secondary{background-color:#6c757d!important}a.bg-secondary:focus,a.bg-secondary:hover,button.bg-secondary:focus,button.bg-secondary:hover{background-color:#545b62!important}.bg-success{background-color:#28a745!important}a.bg-success:focus,a.bg-success:hover,button.bg-success:focus,button.bg-success:hover{background-color:#1e7e34!important}.bg-info{background-color:#17a2b8!important}a.bg-info:focus,a.bg-info:hover,button.bg-info:focus,button.bg-info:hover{background-color:#117a8b!important}.bg-warning{background-color:#ffc107!important}a.bg-warning:focus,a.bg-warning:hover,button.bg-warning:focus,button.bg-warning:hover{background-color:#d39e00!important}.bg-danger{background-color:#dc3545!important}a.bg-danger:focus,a.bg-danger:hover,button.bg-danger:focus,button.bg-danger:hover{background-color:#bd2130!important}.bg-light{background-color:#f8f9fa!important}a.bg-light:focus,a.bg-light:hover,button.bg-light:focus,button.bg-light:hover{background-color:#dae0e5!important}.bg-dark{background-color:#343a40!important}a.bg-dark:focus,a.bg-dark:hover,button.bg-dark:focus,button.bg-dark:hover{background-color:#1d2124!important}.bg-white{background-color:#fff!important}.bg-transparent{background-color:transparent!important}.border{border:1px solid #dee2e6!important}.border-top{border-top:1px solid #dee2e6!important}.border-right{border-right:1px solid #dee2e6!important}.border-bottom{border-bottom:1px solid #dee2e6!important}.border-left{border-left:1px solid 
#dee2e6!important}.border-0{border:0!important}.border-top-0{border-top:0!important}.border-right-0{border-right:0!important}.border-bottom-0{border-bottom:0!important}.border-left-0{border-left:0!important}.border-primary{border-color:#007bff!important}.border-secondary{border-color:#6c757d!important}.border-success{border-color:#28a745!important}.border-info{border-color:#17a2b8!important}.border-warning{border-color:#ffc107!important}.border-danger{border-color:#dc3545!important}.border-light{border-color:#f8f9fa!important}.border-dark{border-color:#343a40!important}.border-white{border-color:#fff!important}.rounded-sm{border-radius:.2rem!important}.rounded{border-radius:.25rem!important}.rounded-top{border-top-left-radius:.25rem!important;border-top-right-radius:.25rem!important}.rounded-right{border-top-right-radius:.25rem!important;border-bottom-right-radius:.25rem!important}.rounded-bottom{border-bottom-right-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-left{border-top-left-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-lg{border-radius:.3rem!important}.rounded-circle{border-radius:50%!important}.rounded-pill{border-radius:50rem!important}.rounded-0{border-radius:0!important}.clearfix::after{display:block;clear:both;content:""}.d-none{display:none!important}.d-inline{display:inline!important}.d-inline-block{display:inline-block!important}.d-block{display:block!important}.d-table{display:table!important}.d-table-row{display:table-row!important}.d-table-cell{display:table-cell!important}.d-flex{display:-ms-flexbox!important;display:flex!important}.d-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}@media (min-width:576px){.d-sm-none{display:none!important}.d-sm-inline{display:inline!important}.d-sm-inline-block{display:inline-block!important}.d-sm-block{display:block!important}.d-sm-table{display:table!important}.d-sm-table-row{display:table-row!important}.d-sm-table-cell{display:table-cell!important}.d-sm-flex{display:-ms-flexbox!important;display:flex!important}.d-sm-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media (min-width:768px){.d-md-none{display:none!important}.d-md-inline{display:inline!important}.d-md-inline-block{display:inline-block!important}.d-md-block{display:block!important}.d-md-table{display:table!important}.d-md-table-row{display:table-row!important}.d-md-table-cell{display:table-cell!important}.d-md-flex{display:-ms-flexbox!important;display:flex!important}.d-md-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media (min-width:992px){.d-lg-none{display:none!important}.d-lg-inline{display:inline!important}.d-lg-inline-block{display:inline-block!important}.d-lg-block{display:block!important}.d-lg-table{display:table!important}.d-lg-table-row{display:table-row!important}.d-lg-table-cell{display:table-cell!important}.d-lg-flex{display:-ms-flexbox!important;display:flex!important}.d-lg-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media 
(min-width:1200px){.d-xl-none{display:none!important}.d-xl-inline{display:inline!important}.d-xl-inline-block{display:inline-block!important}.d-xl-block{display:block!important}.d-xl-table{display:table!important}.d-xl-table-row{display:table-row!important}.d-xl-table-cell{display:table-cell!important}.d-xl-flex{display:-ms-flexbox!important;display:flex!important}.d-xl-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media print{.d-print-none{display:none!important}.d-print-inline{display:inline!important}.d-print-inline-block{display:inline-block!important}.d-print-block{display:block!important}.d-print-table{display:table!important}.d-print-table-row{display:table-row!important}.d-print-table-cell{display:table-cell!important}.d-print-flex{display:-ms-flexbox!important;display:flex!important}.d-print-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}.embed-responsive{position:relative;display:block;width:100%;padding:0;overflow:hidden}.embed-responsive::before{display:block;content:""}.embed-responsive .embed-responsive-item,.embed-responsive embed,.embed-responsive iframe,.embed-responsive object,.embed-responsive video{position:absolute;top:0;bottom:0;left:0;width:100%;height:100%;border:0}.embed-responsive-21by9::before{padding-top:42.857143%}.embed-responsive-16by9::before{padding-top:56.25%}.embed-responsive-4by3::before{padding-top:75%}.embed-responsive-1by1::before{padding-top:100%}.flex-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-center{-ms-flex-align:center!important;align-items:center!important}.align-items-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}@media (min-width:576px){.flex-sm-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-sm-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-sm-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-sm-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-sm-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-sm-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-sm-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-sm-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-sm-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-sm-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-sm-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-sm-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-sm-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-sm-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-sm-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-sm-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-sm-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-sm-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-sm-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-sm-center{-ms-flex-align:center!important;align-items:center!important}.align-items-sm-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-sm-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-sm-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-sm-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-sm-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-sm-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-sm-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-sm-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-sm-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-sm-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-sm-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-sm-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-sm-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-sm-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:768px){.flex-md-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-md-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-md-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-md-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-md-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-md-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-md-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-md-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-md-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-md-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-md-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-md-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-md-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-md-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-md-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-md-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-md-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-md-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-md-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-md-center{-ms-flex-align:center!important;align-items:center!important}.align-items-md-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-md-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-md-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-md-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-md-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-md-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-md-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-md-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-md-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-md-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-md-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-md-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-md-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-md-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:992px){.flex-lg-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-lg-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-lg-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-lg-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-lg-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-lg-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-lg-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-lg-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-lg-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-lg-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-lg-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-lg-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-lg-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-lg-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-lg-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-lg-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-lg-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-lg-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-lg-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-lg-center{-ms-flex-align:center!important;align-items:center!important}.align-items-lg-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-lg-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-lg-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-lg-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-lg-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-lg-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-lg-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-lg-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-lg-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-lg-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-lg-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-lg-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-lg-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-lg-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:1200px){.flex-xl-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-xl-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-xl-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-xl-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-xl-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-xl-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-xl-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-xl-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-xl-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-xl-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-xl-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-xl-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-xl-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-xl-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-xl-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-xl-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-xl-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-xl-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-xl-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-xl-center{-ms-flex-align:center!important;align-items:center!important}.align-items-xl-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-xl-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-xl-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-xl-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-xl-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-xl-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-xl-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-xl-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-xl-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-xl-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-xl-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-xl-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-xl-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-xl-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}.float-left{float:left!important}.float-right{float:right!important}.float-none{float:none!important}@media (min-width:576px){.float-sm-left{float:left!important}.float-sm-right{float:right!important}.float-sm-none{float:none!important}}@media (min-width:768px){.float-md-left{float:left!important}.float-md-right{float:right!important}.float-md-none{float:none!important}}@media (min-width:992px){.float-lg-left{float:left!important}.float-lg-right{float:right!important}.float-lg-none{float:none!important}}@media (min-width:1200px){.float-xl-left{float:left!important}.float-xl-right{float:right!important}.float-xl-none{float:none!important}}.overflow-auto{overflow:auto!important}.overflow-hidden{overflow:hidden!important}.position-static{position:static!important}.position-relative{position:relative!important}.position-absolute{position:absolute!important}.position-fixed{position:fixed!important}.position-sticky{position:-webkit-sticky!important;position:sticky!important}.fixed-top{position:fixed;top:0;right:0;left:0;z-index:1030}.fixed-bottom{position:fixed;right:0;bottom:0;left:0;z-index:1030}@supports ((position:-webkit-sticky) or 
(position:sticky)){.sticky-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);white-space:nowrap;border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;overflow:visible;clip:auto;white-space:normal}.shadow-sm{box-shadow:0 .125rem .25rem rgba(0,0,0,.075)!important}.shadow{box-shadow:0 .5rem 1rem rgba(0,0,0,.15)!important}.shadow-lg{box-shadow:0 1rem 3rem rgba(0,0,0,.175)!important}.shadow-none{box-shadow:none!important}.w-25{width:25%!important}.w-50{width:50%!important}.w-75{width:75%!important}.w-100{width:100%!important}.w-auto{width:auto!important}.h-25{height:25%!important}.h-50{height:50%!important}.h-75{height:75%!important}.h-100{height:100%!important}.h-auto{height:auto!important}.mw-100{max-width:100%!important}.mh-100{max-height:100%!important}.min-vw-100{min-width:100vw!important}.min-vh-100{min-height:100vh!important}.vw-100{width:100vw!important}.vh-100{height:100vh!important}.stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;pointer-events:auto;content:"";background-color:rgba(0,0,0,0)}.m-0{margin:0!important}.mt-0,.my-0{margin-top:0!important}.mr-0,.mx-0{margin-right:0!important}.mb-0,.my-0{margin-bottom:0!important}.ml-0,.mx-0{margin-left:0!important}.m-1{margin:.25rem!important}.mt-1,.my-1{margin-top:.25rem!important}.mr-1,.mx-1{margin-right:.25rem!important}.mb-1,.my-1{margin-bottom:.25rem!important}.ml-1,.mx-1{margin-left:.25rem!important}.m-2{margin:.5rem!important}.mt-2,.my-2{margin-top:.5rem!important}.mr-2,.mx-2{margin-right:.5rem!important}.mb-2,.my-2{margin-bottom:.5rem!important}.ml-2,.mx-2{margin-left:.5rem!important}.m-3{margin:1rem!important}.mt-3,.my-3{margin-top:1rem!important}.mr-3,.mx-3{margin-right:1rem!important}.mb-3,.my-3{margin-bottom:1rem!important}.ml-3,.mx-3{margin-left:1rem!important}.m-4{margin:1.5rem!important}.mt-4,.my-4{margin-top:1.5rem!important}.mr-4,.mx-4{margin-right:1.5rem!important}.mb-4,.my-4{margin-bottom:1.5rem!important}.ml-4,.mx-4{margin-left:1.5rem!important}.m-5{margin:3rem!important}.mt-5,.my-5{margin-top:3rem!important}.mr-5,.mx-5{margin-right:3rem!important}.mb-5,.my-5{margin-bottom:3rem!important}.ml-5,.mx-5{margin-left:3rem!important}.p-0{padding:0!important}.pt-0,.py-0{padding-top:0!important}.pr-0,.px-0{padding-right:0!important}.pb-0,.py-0{padding-bottom:0!important}.pl-0,.px-0{padding-left:0!important}.p-1{padding:.25rem!important}.pt-1,.py-1{padding-top:.25rem!important}.pr-1,.px-1{padding-right:.25rem!important}.pb-1,.py-1{padding-bottom:.25rem!important}.pl-1,.px-1{padding-left:.25rem!important}.p-2{padding:.5rem!important}.pt-2,.py-2{padding-top:.5rem!important}.pr-2,.px-2{padding-right:.5rem!important}.pb-2,.py-2{padding-bottom:.5rem!important}.pl-2,.px-2{padding-left:.5rem!important}.p-3{padding:1rem!important}.pt-3,.py-3{padding-top:1rem!important}.pr-3,.px-3{padding-right:1rem!important}.pb-3,.py-3{padding-bottom:1rem!important}.pl-3,.px-3{padding-left:1rem!important}.p-4{padding:1.5rem!important}.pt-4,.py-4{padding-top:1.5rem!important}.pr-4,.px-4{padding-right:1.5rem!important}.pb-4,.py-4{padding-bottom:1.5rem!important}.pl-4,.px-4{padding-left:1.5rem!important}.p-5{padding:3rem!important}.pt-5,.py-5{padding-top:3rem!important}.pr-5,.px-5{padding-right:3rem!important}.pb-5,.py-5{padding-bottom:3rem!important}.pl-5,.px-5{padding-left:3rem!important}.m-n1{margin:-.25rem!important}.mt-n1,.my-n1{margin-top:-.25rem!important}.mr-
n1,.mx-n1{margin-right:-.25rem!important}.mb-n1,.my-n1{margin-bottom:-.25rem!important}.ml-n1,.mx-n1{margin-left:-.25rem!important}.m-n2{margin:-.5rem!important}.mt-n2,.my-n2{margin-top:-.5rem!important}.mr-n2,.mx-n2{margin-right:-.5rem!important}.mb-n2,.my-n2{margin-bottom:-.5rem!important}.ml-n2,.mx-n2{margin-left:-.5rem!important}.m-n3{margin:-1rem!important}.mt-n3,.my-n3{margin-top:-1rem!important}.mr-n3,.mx-n3{margin-right:-1rem!important}.mb-n3,.my-n3{margin-bottom:-1rem!important}.ml-n3,.mx-n3{margin-left:-1rem!important}.m-n4{margin:-1.5rem!important}.mt-n4,.my-n4{margin-top:-1.5rem!important}.mr-n4,.mx-n4{margin-right:-1.5rem!important}.mb-n4,.my-n4{margin-bottom:-1.5rem!important}.ml-n4,.mx-n4{margin-left:-1.5rem!important}.m-n5{margin:-3rem!important}.mt-n5,.my-n5{margin-top:-3rem!important}.mr-n5,.mx-n5{margin-right:-3rem!important}.mb-n5,.my-n5{margin-bottom:-3rem!important}.ml-n5,.mx-n5{margin-left:-3rem!important}.m-auto{margin:auto!important}.mt-auto,.my-auto{margin-top:auto!important}.mr-auto,.mx-auto{margin-right:auto!important}.mb-auto,.my-auto{margin-bottom:auto!important}.ml-auto,.mx-auto{margin-left:auto!important}@media (min-width:576px){.m-sm-0{margin:0!important}.mt-sm-0,.my-sm-0{margin-top:0!important}.mr-sm-0,.mx-sm-0{margin-right:0!important}.mb-sm-0,.my-sm-0{margin-bottom:0!important}.ml-sm-0,.mx-sm-0{margin-left:0!important}.m-sm-1{margin:.25rem!important}.mt-sm-1,.my-sm-1{margin-top:.25rem!important}.mr-sm-1,.mx-sm-1{margin-right:.25rem!important}.mb-sm-1,.my-sm-1{margin-bottom:.25rem!important}.ml-sm-1,.mx-sm-1{margin-left:.25rem!important}.m-sm-2{margin:.5rem!important}.mt-sm-2,.my-sm-2{margin-top:.5rem!important}.mr-sm-2,.mx-sm-2{margin-right:.5rem!important}.mb-sm-2,.my-sm-2{margin-bottom:.5rem!important}.ml-sm-2,.mx-sm-2{margin-left:.5rem!important}.m-sm-3{margin:1rem!important}.mt-sm-3,.my-sm-3{margin-top:1rem!important}.mr-sm-3,.mx-sm-3{margin-right:1rem!important}.mb-sm-3,.my-sm-3{margin-bottom:1rem!important}.ml-sm-3,.mx-sm-3{margin-left:1rem!important}.m-sm-4{margin:1.5rem!important}.mt-sm-4,.my-sm-4{margin-top:1.5rem!important}.mr-sm-4,.mx-sm-4{margin-right:1.5rem!important}.mb-sm-4,.my-sm-4{margin-bottom:1.5rem!important}.ml-sm-4,.mx-sm-4{margin-left:1.5rem!important}.m-sm-5{margin:3rem!important}.mt-sm-5,.my-sm-5{margin-top:3rem!important}.mr-sm-5,.mx-sm-5{margin-right:3rem!important}.mb-sm-5,.my-sm-5{margin-bottom:3rem!important}.ml-sm-5,.mx-sm-5{margin-left:3rem!important}.p-sm-0{padding:0!important}.pt-sm-0,.py-sm-0{padding-top:0!important}.pr-sm-0,.px-sm-0{padding-right:0!important}.pb-sm-0,.py-sm-0{padding-bottom:0!important}.pl-sm-0,.px-sm-0{padding-left:0!important}.p-sm-1{padding:.25rem!important}.pt-sm-1,.py-sm-1{padding-top:.25rem!important}.pr-sm-1,.px-sm-1{padding-right:.25rem!important}.pb-sm-1,.py-sm-1{padding-bottom:.25rem!important}.pl-sm-1,.px-sm-1{padding-left:.25rem!important}.p-sm-2{padding:.5rem!important}.pt-sm-2,.py-sm-2{padding-top:.5rem!important}.pr-sm-2,.px-sm-2{padding-right:.5rem!important}.pb-sm-2,.py-sm-2{padding-bottom:.5rem!important}.pl-sm-2,.px-sm-2{padding-left:.5rem!important}.p-sm-3{padding:1rem!important}.pt-sm-3,.py-sm-3{padding-top:1rem!important}.pr-sm-3,.px-sm-3{padding-right:1rem!important}.pb-sm-3,.py-sm-3{padding-bottom:1rem!important}.pl-sm-3,.px-sm-3{padding-left:1rem!important}.p-sm-4{padding:1.5rem!important}.pt-sm-4,.py-sm-4{padding-top:1.5rem!important}.pr-sm-4,.px-sm-4{padding-right:1.5rem!important}.pb-sm-4,.py-sm-4{padding-bottom:1.5rem!important}.pl-sm-4,.px-sm-4{padding-left:1.5rem!important
}.p-sm-5{padding:3rem!important}.pt-sm-5,.py-sm-5{padding-top:3rem!important}.pr-sm-5,.px-sm-5{padding-right:3rem!important}.pb-sm-5,.py-sm-5{padding-bottom:3rem!important}.pl-sm-5,.px-sm-5{padding-left:3rem!important}.m-sm-n1{margin:-.25rem!important}.mt-sm-n1,.my-sm-n1{margin-top:-.25rem!important}.mr-sm-n1,.mx-sm-n1{margin-right:-.25rem!important}.mb-sm-n1,.my-sm-n1{margin-bottom:-.25rem!important}.ml-sm-n1,.mx-sm-n1{margin-left:-.25rem!important}.m-sm-n2{margin:-.5rem!important}.mt-sm-n2,.my-sm-n2{margin-top:-.5rem!important}.mr-sm-n2,.mx-sm-n2{margin-right:-.5rem!important}.mb-sm-n2,.my-sm-n2{margin-bottom:-.5rem!important}.ml-sm-n2,.mx-sm-n2{margin-left:-.5rem!important}.m-sm-n3{margin:-1rem!important}.mt-sm-n3,.my-sm-n3{margin-top:-1rem!important}.mr-sm-n3,.mx-sm-n3{margin-right:-1rem!important}.mb-sm-n3,.my-sm-n3{margin-bottom:-1rem!important}.ml-sm-n3,.mx-sm-n3{margin-left:-1rem!important}.m-sm-n4{margin:-1.5rem!important}.mt-sm-n4,.my-sm-n4{margin-top:-1.5rem!important}.mr-sm-n4,.mx-sm-n4{margin-right:-1.5rem!important}.mb-sm-n4,.my-sm-n4{margin-bottom:-1.5rem!important}.ml-sm-n4,.mx-sm-n4{margin-left:-1.5rem!important}.m-sm-n5{margin:-3rem!important}.mt-sm-n5,.my-sm-n5{margin-top:-3rem!important}.mr-sm-n5,.mx-sm-n5{margin-right:-3rem!important}.mb-sm-n5,.my-sm-n5{margin-bottom:-3rem!important}.ml-sm-n5,.mx-sm-n5{margin-left:-3rem!important}.m-sm-auto{margin:auto!important}.mt-sm-auto,.my-sm-auto{margin-top:auto!important}.mr-sm-auto,.mx-sm-auto{margin-right:auto!important}.mb-sm-auto,.my-sm-auto{margin-bottom:auto!important}.ml-sm-auto,.mx-sm-auto{margin-left:auto!important}}@media (min-width:768px){.m-md-0{margin:0!important}.mt-md-0,.my-md-0{margin-top:0!important}.mr-md-0,.mx-md-0{margin-right:0!important}.mb-md-0,.my-md-0{margin-bottom:0!important}.ml-md-0,.mx-md-0{margin-left:0!important}.m-md-1{margin:.25rem!important}.mt-md-1,.my-md-1{margin-top:.25rem!important}.mr-md-1,.mx-md-1{margin-right:.25rem!important}.mb-md-1,.my-md-1{margin-bottom:.25rem!important}.ml-md-1,.mx-md-1{margin-left:.25rem!important}.m-md-2{margin:.5rem!important}.mt-md-2,.my-md-2{margin-top:.5rem!important}.mr-md-2,.mx-md-2{margin-right:.5rem!important}.mb-md-2,.my-md-2{margin-bottom:.5rem!important}.ml-md-2,.mx-md-2{margin-left:.5rem!important}.m-md-3{margin:1rem!important}.mt-md-3,.my-md-3{margin-top:1rem!important}.mr-md-3,.mx-md-3{margin-right:1rem!important}.mb-md-3,.my-md-3{margin-bottom:1rem!important}.ml-md-3,.mx-md-3{margin-left:1rem!important}.m-md-4{margin:1.5rem!important}.mt-md-4,.my-md-4{margin-top:1.5rem!important}.mr-md-4,.mx-md-4{margin-right:1.5rem!important}.mb-md-4,.my-md-4{margin-bottom:1.5rem!important}.ml-md-4,.mx-md-4{margin-left:1.5rem!important}.m-md-5{margin:3rem!important}.mt-md-5,.my-md-5{margin-top:3rem!important}.mr-md-5,.mx-md-5{margin-right:3rem!important}.mb-md-5,.my-md-5{margin-bottom:3rem!important}.ml-md-5,.mx-md-5{margin-left:3rem!important}.p-md-0{padding:0!important}.pt-md-0,.py-md-0{padding-top:0!important}.pr-md-0,.px-md-0{padding-right:0!important}.pb-md-0,.py-md-0{padding-bottom:0!important}.pl-md-0,.px-md-0{padding-left:0!important}.p-md-1{padding:.25rem!important}.pt-md-1,.py-md-1{padding-top:.25rem!important}.pr-md-1,.px-md-1{padding-right:.25rem!important}.pb-md-1,.py-md-1{padding-bottom:.25rem!important}.pl-md-1,.px-md-1{padding-left:.25rem!important}.p-md-2{padding:.5rem!important}.pt-md-2,.py-md-2{padding-top:.5rem!important}.pr-md-2,.px-md-2{padding-right:.5rem!important}.pb-md-2,.py-md-2{padding-bottom:.5rem!important}.pl-md-2,.px-md-2{padding-left:
.5rem!important}.p-md-3{padding:1rem!important}.pt-md-3,.py-md-3{padding-top:1rem!important}.pr-md-3,.px-md-3{padding-right:1rem!important}.pb-md-3,.py-md-3{padding-bottom:1rem!important}.pl-md-3,.px-md-3{padding-left:1rem!important}.p-md-4{padding:1.5rem!important}.pt-md-4,.py-md-4{padding-top:1.5rem!important}.pr-md-4,.px-md-4{padding-right:1.5rem!important}.pb-md-4,.py-md-4{padding-bottom:1.5rem!important}.pl-md-4,.px-md-4{padding-left:1.5rem!important}.p-md-5{padding:3rem!important}.pt-md-5,.py-md-5{padding-top:3rem!important}.pr-md-5,.px-md-5{padding-right:3rem!important}.pb-md-5,.py-md-5{padding-bottom:3rem!important}.pl-md-5,.px-md-5{padding-left:3rem!important}.m-md-n1{margin:-.25rem!important}.mt-md-n1,.my-md-n1{margin-top:-.25rem!important}.mr-md-n1,.mx-md-n1{margin-right:-.25rem!important}.mb-md-n1,.my-md-n1{margin-bottom:-.25rem!important}.ml-md-n1,.mx-md-n1{margin-left:-.25rem!important}.m-md-n2{margin:-.5rem!important}.mt-md-n2,.my-md-n2{margin-top:-.5rem!important}.mr-md-n2,.mx-md-n2{margin-right:-.5rem!important}.mb-md-n2,.my-md-n2{margin-bottom:-.5rem!important}.ml-md-n2,.mx-md-n2{margin-left:-.5rem!important}.m-md-n3{margin:-1rem!important}.mt-md-n3,.my-md-n3{margin-top:-1rem!important}.mr-md-n3,.mx-md-n3{margin-right:-1rem!important}.mb-md-n3,.my-md-n3{margin-bottom:-1rem!important}.ml-md-n3,.mx-md-n3{margin-left:-1rem!important}.m-md-n4{margin:-1.5rem!important}.mt-md-n4,.my-md-n4{margin-top:-1.5rem!important}.mr-md-n4,.mx-md-n4{margin-right:-1.5rem!important}.mb-md-n4,.my-md-n4{margin-bottom:-1.5rem!important}.ml-md-n4,.mx-md-n4{margin-left:-1.5rem!important}.m-md-n5{margin:-3rem!important}.mt-md-n5,.my-md-n5{margin-top:-3rem!important}.mr-md-n5,.mx-md-n5{margin-right:-3rem!important}.mb-md-n5,.my-md-n5{margin-bottom:-3rem!important}.ml-md-n5,.mx-md-n5{margin-left:-3rem!important}.m-md-auto{margin:auto!important}.mt-md-auto,.my-md-auto{margin-top:auto!important}.mr-md-auto,.mx-md-auto{margin-right:auto!important}.mb-md-auto,.my-md-auto{margin-bottom:auto!important}.ml-md-auto,.mx-md-auto{margin-left:auto!important}}@media 
(min-width:992px){.m-lg-0{margin:0!important}.mt-lg-0,.my-lg-0{margin-top:0!important}.mr-lg-0,.mx-lg-0{margin-right:0!important}.mb-lg-0,.my-lg-0{margin-bottom:0!important}.ml-lg-0,.mx-lg-0{margin-left:0!important}.m-lg-1{margin:.25rem!important}.mt-lg-1,.my-lg-1{margin-top:.25rem!important}.mr-lg-1,.mx-lg-1{margin-right:.25rem!important}.mb-lg-1,.my-lg-1{margin-bottom:.25rem!important}.ml-lg-1,.mx-lg-1{margin-left:.25rem!important}.m-lg-2{margin:.5rem!important}.mt-lg-2,.my-lg-2{margin-top:.5rem!important}.mr-lg-2,.mx-lg-2{margin-right:.5rem!important}.mb-lg-2,.my-lg-2{margin-bottom:.5rem!important}.ml-lg-2,.mx-lg-2{margin-left:.5rem!important}.m-lg-3{margin:1rem!important}.mt-lg-3,.my-lg-3{margin-top:1rem!important}.mr-lg-3,.mx-lg-3{margin-right:1rem!important}.mb-lg-3,.my-lg-3{margin-bottom:1rem!important}.ml-lg-3,.mx-lg-3{margin-left:1rem!important}.m-lg-4{margin:1.5rem!important}.mt-lg-4,.my-lg-4{margin-top:1.5rem!important}.mr-lg-4,.mx-lg-4{margin-right:1.5rem!important}.mb-lg-4,.my-lg-4{margin-bottom:1.5rem!important}.ml-lg-4,.mx-lg-4{margin-left:1.5rem!important}.m-lg-5{margin:3rem!important}.mt-lg-5,.my-lg-5{margin-top:3rem!important}.mr-lg-5,.mx-lg-5{margin-right:3rem!important}.mb-lg-5,.my-lg-5{margin-bottom:3rem!important}.ml-lg-5,.mx-lg-5{margin-left:3rem!important}.p-lg-0{padding:0!important}.pt-lg-0,.py-lg-0{padding-top:0!important}.pr-lg-0,.px-lg-0{padding-right:0!important}.pb-lg-0,.py-lg-0{padding-bottom:0!important}.pl-lg-0,.px-lg-0{padding-left:0!important}.p-lg-1{padding:.25rem!important}.pt-lg-1,.py-lg-1{padding-top:.25rem!important}.pr-lg-1,.px-lg-1{padding-right:.25rem!important}.pb-lg-1,.py-lg-1{padding-bottom:.25rem!important}.pl-lg-1,.px-lg-1{padding-left:.25rem!important}.p-lg-2{padding:.5rem!important}.pt-lg-2,.py-lg-2{padding-top:.5rem!important}.pr-lg-2,.px-lg-2{padding-right:.5rem!important}.pb-lg-2,.py-lg-2{padding-bottom:.5rem!important}.pl-lg-2,.px-lg-2{padding-left:.5rem!important}.p-lg-3{padding:1rem!important}.pt-lg-3,.py-lg-3{padding-top:1rem!important}.pr-lg-3,.px-lg-3{padding-right:1rem!important}.pb-lg-3,.py-lg-3{padding-bottom:1rem!important}.pl-lg-3,.px-lg-3{padding-left:1rem!important}.p-lg-4{padding:1.5rem!important}.pt-lg-4,.py-lg-4{padding-top:1.5rem!important}.pr-lg-4,.px-lg-4{padding-right:1.5rem!important}.pb-lg-4,.py-lg-4{padding-bottom:1.5rem!important}.pl-lg-4,.px-lg-4{padding-left:1.5rem!important}.p-lg-5{padding:3rem!important}.pt-lg-5,.py-lg-5{padding-top:3rem!important}.pr-lg-5,.px-lg-5{padding-right:3rem!important}.pb-lg-5,.py-lg-5{padding-bottom:3rem!important}.pl-lg-5,.px-lg-5{padding-left:3rem!important}.m-lg-n1{margin:-.25rem!important}.mt-lg-n1,.my-lg-n1{margin-top:-.25rem!important}.mr-lg-n1,.mx-lg-n1{margin-right:-.25rem!important}.mb-lg-n1,.my-lg-n1{margin-bottom:-.25rem!important}.ml-lg-n1,.mx-lg-n1{margin-left:-.25rem!important}.m-lg-n2{margin:-.5rem!important}.mt-lg-n2,.my-lg-n2{margin-top:-.5rem!important}.mr-lg-n2,.mx-lg-n2{margin-right:-.5rem!important}.mb-lg-n2,.my-lg-n2{margin-bottom:-.5rem!important}.ml-lg-n2,.mx-lg-n2{margin-left:-.5rem!important}.m-lg-n3{margin:-1rem!important}.mt-lg-n3,.my-lg-n3{margin-top:-1rem!important}.mr-lg-n3,.mx-lg-n3{margin-right:-1rem!important}.mb-lg-n3,.my-lg-n3{margin-bottom:-1rem!important}.ml-lg-n3,.mx-lg-n3{margin-left:-1rem!important}.m-lg-n4{margin:-1.5rem!important}.mt-lg-n4,.my-lg-n4{margin-top:-1.5rem!important}.mr-lg-n4,.mx-lg-n4{margin-right:-1.5rem!important}.mb-lg-n4,.my-lg-n4{margin-bottom:-1.5rem!important}.ml-lg-n4,.mx-lg-n4{margin-left:-1.5rem!important}.m-lg-n5{margi
n:-3rem!important}.mt-lg-n5,.my-lg-n5{margin-top:-3rem!important}.mr-lg-n5,.mx-lg-n5{margin-right:-3rem!important}.mb-lg-n5,.my-lg-n5{margin-bottom:-3rem!important}.ml-lg-n5,.mx-lg-n5{margin-left:-3rem!important}.m-lg-auto{margin:auto!important}.mt-lg-auto,.my-lg-auto{margin-top:auto!important}.mr-lg-auto,.mx-lg-auto{margin-right:auto!important}.mb-lg-auto,.my-lg-auto{margin-bottom:auto!important}.ml-lg-auto,.mx-lg-auto{margin-left:auto!important}}@media (min-width:1200px){.m-xl-0{margin:0!important}.mt-xl-0,.my-xl-0{margin-top:0!important}.mr-xl-0,.mx-xl-0{margin-right:0!important}.mb-xl-0,.my-xl-0{margin-bottom:0!important}.ml-xl-0,.mx-xl-0{margin-left:0!important}.m-xl-1{margin:.25rem!important}.mt-xl-1,.my-xl-1{margin-top:.25rem!important}.mr-xl-1,.mx-xl-1{margin-right:.25rem!important}.mb-xl-1,.my-xl-1{margin-bottom:.25rem!important}.ml-xl-1,.mx-xl-1{margin-left:.25rem!important}.m-xl-2{margin:.5rem!important}.mt-xl-2,.my-xl-2{margin-top:.5rem!important}.mr-xl-2,.mx-xl-2{margin-right:.5rem!important}.mb-xl-2,.my-xl-2{margin-bottom:.5rem!important}.ml-xl-2,.mx-xl-2{margin-left:.5rem!important}.m-xl-3{margin:1rem!important}.mt-xl-3,.my-xl-3{margin-top:1rem!important}.mr-xl-3,.mx-xl-3{margin-right:1rem!important}.mb-xl-3,.my-xl-3{margin-bottom:1rem!important}.ml-xl-3,.mx-xl-3{margin-left:1rem!important}.m-xl-4{margin:1.5rem!important}.mt-xl-4,.my-xl-4{margin-top:1.5rem!important}.mr-xl-4,.mx-xl-4{margin-right:1.5rem!important}.mb-xl-4,.my-xl-4{margin-bottom:1.5rem!important}.ml-xl-4,.mx-xl-4{margin-left:1.5rem!important}.m-xl-5{margin:3rem!important}.mt-xl-5,.my-xl-5{margin-top:3rem!important}.mr-xl-5,.mx-xl-5{margin-right:3rem!important}.mb-xl-5,.my-xl-5{margin-bottom:3rem!important}.ml-xl-5,.mx-xl-5{margin-left:3rem!important}.p-xl-0{padding:0!important}.pt-xl-0,.py-xl-0{padding-top:0!important}.pr-xl-0,.px-xl-0{padding-right:0!important}.pb-xl-0,.py-xl-0{padding-bottom:0!important}.pl-xl-0,.px-xl-0{padding-left:0!important}.p-xl-1{padding:.25rem!important}.pt-xl-1,.py-xl-1{padding-top:.25rem!important}.pr-xl-1,.px-xl-1{padding-right:.25rem!important}.pb-xl-1,.py-xl-1{padding-bottom:.25rem!important}.pl-xl-1,.px-xl-1{padding-left:.25rem!important}.p-xl-2{padding:.5rem!important}.pt-xl-2,.py-xl-2{padding-top:.5rem!important}.pr-xl-2,.px-xl-2{padding-right:.5rem!important}.pb-xl-2,.py-xl-2{padding-bottom:.5rem!important}.pl-xl-2,.px-xl-2{padding-left:.5rem!important}.p-xl-3{padding:1rem!important}.pt-xl-3,.py-xl-3{padding-top:1rem!important}.pr-xl-3,.px-xl-3{padding-right:1rem!important}.pb-xl-3,.py-xl-3{padding-bottom:1rem!important}.pl-xl-3,.px-xl-3{padding-left:1rem!important}.p-xl-4{padding:1.5rem!important}.pt-xl-4,.py-xl-4{padding-top:1.5rem!important}.pr-xl-4,.px-xl-4{padding-right:1.5rem!important}.pb-xl-4,.py-xl-4{padding-bottom:1.5rem!important}.pl-xl-4,.px-xl-4{padding-left:1.5rem!important}.p-xl-5{padding:3rem!important}.pt-xl-5,.py-xl-5{padding-top:3rem!important}.pr-xl-5,.px-xl-5{padding-right:3rem!important}.pb-xl-5,.py-xl-5{padding-bottom:3rem!important}.pl-xl-5,.px-xl-5{padding-left:3rem!important}.m-xl-n1{margin:-.25rem!important}.mt-xl-n1,.my-xl-n1{margin-top:-.25rem!important}.mr-xl-n1,.mx-xl-n1{margin-right:-.25rem!important}.mb-xl-n1,.my-xl-n1{margin-bottom:-.25rem!important}.ml-xl-n1,.mx-xl-n1{margin-left:-.25rem!important}.m-xl-n2{margin:-.5rem!important}.mt-xl-n2,.my-xl-n2{margin-top:-.5rem!important}.mr-xl-n2,.mx-xl-n2{margin-right:-.5rem!important}.mb-xl-n2,.my-xl-n2{margin-bottom:-.5rem!important}.ml-xl-n2,.mx-xl-n2{margin-left:-.5rem!important}.m-xl-n3{margin:
-1rem!important}.mt-xl-n3,.my-xl-n3{margin-top:-1rem!important}.mr-xl-n3,.mx-xl-n3{margin-right:-1rem!important}.mb-xl-n3,.my-xl-n3{margin-bottom:-1rem!important}.ml-xl-n3,.mx-xl-n3{margin-left:-1rem!important}.m-xl-n4{margin:-1.5rem!important}.mt-xl-n4,.my-xl-n4{margin-top:-1.5rem!important}.mr-xl-n4,.mx-xl-n4{margin-right:-1.5rem!important}.mb-xl-n4,.my-xl-n4{margin-bottom:-1.5rem!important}.ml-xl-n4,.mx-xl-n4{margin-left:-1.5rem!important}.m-xl-n5{margin:-3rem!important}.mt-xl-n5,.my-xl-n5{margin-top:-3rem!important}.mr-xl-n5,.mx-xl-n5{margin-right:-3rem!important}.mb-xl-n5,.my-xl-n5{margin-bottom:-3rem!important}.ml-xl-n5,.mx-xl-n5{margin-left:-3rem!important}.m-xl-auto{margin:auto!important}.mt-xl-auto,.my-xl-auto{margin-top:auto!important}.mr-xl-auto,.mx-xl-auto{margin-right:auto!important}.mb-xl-auto,.my-xl-auto{margin-bottom:auto!important}.ml-xl-auto,.mx-xl-auto{margin-left:auto!important}}.text-monospace{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace!important}.text-justify{text-align:justify!important}.text-wrap{white-space:normal!important}.text-nowrap{white-space:nowrap!important}.text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.text-left{text-align:left!important}.text-right{text-align:right!important}.text-center{text-align:center!important}@media (min-width:576px){.text-sm-left{text-align:left!important}.text-sm-right{text-align:right!important}.text-sm-center{text-align:center!important}}@media (min-width:768px){.text-md-left{text-align:left!important}.text-md-right{text-align:right!important}.text-md-center{text-align:center!important}}@media (min-width:992px){.text-lg-left{text-align:left!important}.text-lg-right{text-align:right!important}.text-lg-center{text-align:center!important}}@media (min-width:1200px){.text-xl-left{text-align:left!important}.text-xl-right{text-align:right!important}.text-xl-center{text-align:center!important}}.text-lowercase{text-transform:lowercase!important}.text-uppercase{text-transform:uppercase!important}.text-capitalize{text-transform:capitalize!important}.font-weight-light{font-weight:300!important}.font-weight-lighter{font-weight:lighter!important}.font-weight-normal{font-weight:400!important}.font-weight-bold{font-weight:700!important}.font-weight-bolder{font-weight:bolder!important}.font-italic{font-style:italic!important}.text-white{color:#fff!important}.text-primary{color:#007bff!important}a.text-primary:focus,a.text-primary:hover{color:#0056b3!important}.text-secondary{color:#6c757d!important}a.text-secondary:focus,a.text-secondary:hover{color:#494f54!important}.text-success{color:#28a745!important}a.text-success:focus,a.text-success:hover{color:#19692c!important}.text-info{color:#17a2b8!important}a.text-info:focus,a.text-info:hover{color:#0f6674!important}.text-warning{color:#ffc107!important}a.text-warning:focus,a.text-warning:hover{color:#ba8b00!important}.text-danger{color:#dc3545!important}a.text-danger:focus,a.text-danger:hover{color:#a71d2a!important}.text-light{color:#f8f9fa!important}a.text-light:focus,a.text-light:hover{color:#cbd3da!important}.text-dark{color:#343a40!important}a.text-dark:focus,a.text-dark:hover{color:#121416!important}.text-body{color:#212529!important}.text-muted{color:#6c757d!important}.text-black-50{color:rgba(0,0,0,.5)!important}.text-white-50{color:rgba(255,255,255,.5)!important}.text-hide{font:0/0 
a;color:transparent;text-shadow:none;background-color:transparent;border:0}.text-decoration-none{text-decoration:none!important}.text-break{word-break:break-word!important;overflow-wrap:break-word!important}.text-reset{color:inherit!important}.visible{visibility:visible!important}.invisible{visibility:hidden!important}@media print{*,::after,::before{text-shadow:none!important;box-shadow:none!important}a:not(.btn){text-decoration:underline}abbr[title]::after{content:" (" attr(title) ")"}pre{white-space:pre-wrap!important}blockquote,pre{border:1px solid #adb5bd;page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}h2,h3,p{orphans:3;widows:3}h2,h3{page-break-after:avoid}@page{size:a3}body{min-width:992px!important}.container{min-width:992px!important}.navbar{display:none}.badge{border:1px solid #000}.table{border-collapse:collapse!important}.table td,.table th{background-color:#fff!important}.table-bordered td,.table-bordered th{border:1px solid #dee2e6!important}.table-dark{color:inherit}.table-dark tbody+tbody,.table-dark td,.table-dark th,.table-dark thead th{border-color:#dee2e6}.table .thead-dark th{color:inherit;border-color:#dee2e6}} /*# sourceMappingURL=bootstrap.min.css.map */rclone-1.53.3/docs/static/css/custom.css000066400000000000000000000060101375552240400201260ustar00rootroot00000000000000body { font-family: "Open Sans","Helvetica Neue",Helvetica,Arial,sans-serif; } .btn-group a { color: #000000; } a { color: #3f79ad; text-decoration: none; } a:hover { color: #70caf2; } table { background-color:#f0f0f0; } tbody td, th { border: 1px solid #c0c0c0; padding: 3px 7px 2px 7px; } thead td, th { padding: 3px 7px 2px 7px; font-weight: bold; background-color:#e0e0e0; } tbody tr:nth-child(even) { background-color:#e8e8e8; } /* Preserve whitespace. Wrap text only at line breaks. */ /* Avoids auto-wrapping the code lines. */ pre code { white-space: pre } /* Hover over links on headers */ .header-link { position: absolute; left: -0.5em; opacity: 0; transition: opacity 0.2s ease-in-out 0.1s; } h2:hover .header-link, h3:hover .header-link, h4:hover .header-link, h5:hover .header-link, h6:hover .header-link { opacity: 1; } /* more space before headings */ h1, h2, h3, h4, h5, h6 { margin-top: 12px; padding: 15px 0px; } /* Fix spacing of info boxes */ .card { margin-top: 0.75rem; } /* less padding around info box items */ .card-body { padding: 0.5rem; } /* make menus longer */ .pre-scrollable { max-height: 30rem; } /* Fix spacing between menu items */ /* .navbar-default .dropdown-menu>li>a { */ /* padding-top: 6px; */ /* padding-bottom: 6px; */ /* } */ /* custom logo in navbar */ .rclone-logo { height: 36px; margin-top: -10px; margin-bottom: -10px; } .heart { color: #e31b23; } blockquote { display: block; background-color: #EEE; padding: 0.25rem; border: 1px solid black; } pre { padding: 10px; margin: 0 0 10.5px; font-size: 14px; line-height: 1.42857143; word-break: break-all; word-wrap: break-word; color: #333333; background-color: #f5f5f5; border: 1px solid #cccccc; border-radius: 0.25rem; } code { font-size: 87.5%; color: #e83e8c; word-wrap: break-word; } /* reduce h12345 sizes */ h1 { font-size: 160%; } h2 { font-size: 130%; } h3 { font-size: 110%; font-weight: bold; } h4 { font-size: 100%; font-weight: bold; } h5 { font-size: 90%; font-weight: bold; } .menu { font-size: 95%; } /* Make primary buttons rclone colours. Should learn sass and do this the proper way! 
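   One way to do this without sass is CSS custom properties: declare the two
   brand colours once and reuse them with var(). A minimal sketch only, kept
   inside this comment rather than shipped; the --rclone-primary names below
   are made up for illustration:

     :root { --rclone-primary: #3f79ad; --rclone-primary-light: #70caf2; }
     .btn-primary { background-color: var(--rclone-primary); border-color: var(--rclone-primary); }
     .btn-primary:hover { background-color: var(--rclone-primary-light); border-color: var(--rclone-primary-light); }
     .badge-primary { background-color: var(--rclone-primary); }

   Custom properties cascade, so a theme override would only need to redefine
   the two variables instead of repeating the hex values in every rule below.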
*/ .btn-primary { background-color: #3f79ad; border-color: #3f79ad; } .btn-primary:hover { background-color: #70caf2; border-color: #70caf2; } .btn-primary:not(:disabled):not(.disabled).active, .btn-primary:not(:disabled):not(.disabled):active, .show>.btn-primary.dropdown-toggle { background-color: #70caf2; border-color: #70caf2; } .btn-primary.focus, .btn-primary:focus { background-color: #3f79ad; border-color: #3f79ad; box-shadow: 0 0 0 0.2rem rgba(63,121,173,.5); } .badge-primary { background-color: #3f79ad; border-color: #3f79ad; } a.badge-primary:focus, a.badge-primary:hover { background-color: #70caf2; } a.badge-primary.focus, a.badge-primary:focus { outline: 0; box-shadow: 0 0 0 0.2rem rgba(112,202,242,.5); } rclone-1.53.3/docs/static/css/font-awesome.min.5.10.2.css000066400000000000000000001563051375552240400225370ustar00rootroot00000000000000/*! * Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com * License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) */ .fa,.fab,.fad,.fal,.far,.fas{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;display:inline-block;font-style:normal;font-variant:normal;text-rendering:auto;line-height:1}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-.0667em}.fa-xs{font-size:.75em}.fa-sm{font-size:.875em}.fa-1x{font-size:1em}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-6x{font-size:6em}.fa-7x{font-size:7em}.fa-8x{font-size:8em}.fa-9x{font-size:9em}.fa-10x{font-size:10em}.fa-fw{text-align:center;width:1.25em}.fa-ul{list-style-type:none;margin-left:2.5em;padding-left:0}.fa-ul>li{position:relative}.fa-li{left:-2em;position:absolute;text-align:center;width:2em;line-height:inherit}.fa-border{border:.08em solid #eee;border-radius:.1em;padding:.2em .25em .15em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa.fa-pull-left,.fab.fa-pull-left,.fal.fa-pull-left,.far.fa-pull-left,.fas.fa-pull-left{margin-right:.3em}.fa.fa-pull-right,.fab.fa-pull-right,.fal.fa-pull-right,.far.fa-pull-right,.fas.fa-pull-right{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(1turn);transform:rotate(1turn)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(1turn);transform:rotate(1turn)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-webkit-transform:scaleY(-1);transform:scaleY(-1)}.fa-flip-both,.fa-flip-horizontal.fa-flip-vertical,.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)"}.fa-flip-both,.fa-flip-horizontal.fa-flip-vertical{-webkit-transform:scale(-1);transform:scale(-1)}:root .fa-flip-both,:root .fa-flip-horizontal,:root 
.fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270{-webkit-filter:none;filter:none}.fa-stack{display:inline-block;height:2em;line-height:2em;position:relative;vertical-align:middle;width:2.5em}.fa-stack-1x,.fa-stack-2x{left:0;position:absolute;text-align:center;width:100%}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-500px:before{content:"\f26e"}.fa-accessible-icon:before{content:"\f368"}.fa-accusoft:before{content:"\f369"}.fa-acquisitions-incorporated:before{content:"\f6af"}.fa-ad:before{content:"\f641"}.fa-address-book:before{content:"\f2b9"}.fa-address-card:before{content:"\f2bb"}.fa-adjust:before{content:"\f042"}.fa-adn:before{content:"\f170"}.fa-adobe:before{content:"\f778"}.fa-adversal:before{content:"\f36a"}.fa-affiliatetheme:before{content:"\f36b"}.fa-air-freshener:before{content:"\f5d0"}.fa-airbnb:before{content:"\f834"}.fa-algolia:before{content:"\f36c"}.fa-align-center:before{content:"\f037"}.fa-align-justify:before{content:"\f039"}.fa-align-left:before{content:"\f036"}.fa-align-right:before{content:"\f038"}.fa-alipay:before{content:"\f642"}.fa-allergies:before{content:"\f461"}.fa-amazon:before{content:"\f270"}.fa-amazon-pay:before{content:"\f42c"}.fa-ambulance:before{content:"\f0f9"}.fa-american-sign-language-interpreting:before{content:"\f2a3"}.fa-amilia:before{content:"\f36d"}.fa-anchor:before{content:"\f13d"}.fa-android:before{content:"\f17b"}.fa-angellist:before{content:"\f209"}.fa-angle-double-down:before{content:"\f103"}.fa-angle-double-left:before{content:"\f100"}.fa-angle-double-right:before{content:"\f101"}.fa-angle-double-up:before{content:"\f102"}.fa-angle-down:before{content:"\f107"}.fa-angle-left:before{content:"\f104"}.fa-angle-right:before{content:"\f105"}.fa-angle-up:before{content:"\f106"}.fa-angry:before{content:"\f556"}.fa-angrycreative:before{content:"\f36e"}.fa-angular:before{content:"\f420"}.fa-ankh:before{content:"\f644"}.fa-app-store:before{content:"\f36f"}.fa-app-store-ios:before{content:"\f370"}.fa-apper:before{content:"\f371"}.fa-apple:before{content:"\f179"}.fa-apple-alt:before{content:"\f5d1"}.fa-apple-pay:before{content:"\f415"}.fa-archive:before{content:"\f187"}.fa-archway:before{content:"\f557"}.fa-arrow-alt-circle-down:before{content:"\f358"}.fa-arrow-alt-circle-left:before{content:"\f359"}.fa-arrow-alt-circle-right:before{content:"\f35a"}.fa-arrow-alt-circle-up:before{content:"\f35b"}.fa-arrow-circle-down:before{content:"\f0ab"}.fa-arrow-circle-left:before{content:"\f0a8"}.fa-arrow-circle-right:before{content:"\f0a9"}.fa-arrow-circle-up:before{content:"\f0aa"}.fa-arrow-down:before{content:"\f063"}.fa-arrow-left:before{content:"\f060"}.fa-arrow-right:before{content:"\f061"}.fa-arrow-up:before{content:"\f062"}.fa-arrows-alt:before{content:"\f0b2"}.fa-arrows-alt-h:before{content:"\f337"}.fa-arrows-alt-v:before{content:"\f338"}.fa-artstation:before{content:"\f77a"}.fa-assistive-listening-systems:before{content:"\f2a2"}.fa-asterisk:before{content:"\f069"}.fa-asymmetrik:before{content:"\f372"}.fa-at:before{content:"\f1fa"}.fa-atlas:before{content:"\f558"}.fa-atlassian:before{content:"\f77b"}.fa-atom:before{content:"\f5d2"}.fa-audible:before{content:"\f373"}.fa-audio-description:before{content:"\f29e"}.fa-autoprefixer:before{content:"\f41c"}.fa-avianex:before{content:"\f374"}.fa-aviato:before{content:"\f421"}.fa-award:before{content:"\f559"}.fa-aws:before{content:"\f375"}.fa-baby:before{content:"\f77c"}.fa-baby-carriage:before{content:"\f77d"}.fa-backspace:before{content:"\f55a"}.fa-
backward:before{content:"\f04a"}.fa-bacon:before{content:"\f7e5"}.fa-balance-scale:before{content:"\f24e"}.fa-balance-scale-left:before{content:"\f515"}.fa-balance-scale-right:before{content:"\f516"}.fa-ban:before{content:"\f05e"}.fa-band-aid:before{content:"\f462"}.fa-bandcamp:before{content:"\f2d5"}.fa-barcode:before{content:"\f02a"}.fa-bars:before{content:"\f0c9"}.fa-baseball-ball:before{content:"\f433"}.fa-basketball-ball:before{content:"\f434"}.fa-bath:before{content:"\f2cd"}.fa-battery-empty:before{content:"\f244"}.fa-battery-full:before{content:"\f240"}.fa-battery-half:before{content:"\f242"}.fa-battery-quarter:before{content:"\f243"}.fa-battery-three-quarters:before{content:"\f241"}.fa-battle-net:before{content:"\f835"}.fa-bed:before{content:"\f236"}.fa-beer:before{content:"\f0fc"}.fa-behance:before{content:"\f1b4"}.fa-behance-square:before{content:"\f1b5"}.fa-bell:before{content:"\f0f3"}.fa-bell-slash:before{content:"\f1f6"}.fa-bezier-curve:before{content:"\f55b"}.fa-bible:before{content:"\f647"}.fa-bicycle:before{content:"\f206"}.fa-biking:before{content:"\f84a"}.fa-bimobject:before{content:"\f378"}.fa-binoculars:before{content:"\f1e5"}.fa-biohazard:before{content:"\f780"}.fa-birthday-cake:before{content:"\f1fd"}.fa-bitbucket:before{content:"\f171"}.fa-bitcoin:before{content:"\f379"}.fa-bity:before{content:"\f37a"}.fa-black-tie:before{content:"\f27e"}.fa-blackberry:before{content:"\f37b"}.fa-blender:before{content:"\f517"}.fa-blender-phone:before{content:"\f6b6"}.fa-blind:before{content:"\f29d"}.fa-blog:before{content:"\f781"}.fa-blogger:before{content:"\f37c"}.fa-blogger-b:before{content:"\f37d"}.fa-bluetooth:before{content:"\f293"}.fa-bluetooth-b:before{content:"\f294"}.fa-bold:before{content:"\f032"}.fa-bolt:before{content:"\f0e7"}.fa-bomb:before{content:"\f1e2"}.fa-bone:before{content:"\f5d7"}.fa-bong:before{content:"\f55c"}.fa-book:before{content:"\f02d"}.fa-book-dead:before{content:"\f6b7"}.fa-book-medical:before{content:"\f7e6"}.fa-book-open:before{content:"\f518"}.fa-book-reader:before{content:"\f5da"}.fa-bookmark:before{content:"\f02e"}.fa-bootstrap:before{content:"\f836"}.fa-border-all:before{content:"\f84c"}.fa-border-none:before{content:"\f850"}.fa-border-style:before{content:"\f853"}.fa-bowling-ball:before{content:"\f436"}.fa-box:before{content:"\f466"}.fa-box-open:before{content:"\f49e"}.fa-boxes:before{content:"\f468"}.fa-braille:before{content:"\f2a1"}.fa-brain:before{content:"\f5dc"}.fa-bread-slice:before{content:"\f7ec"}.fa-briefcase:before{content:"\f0b1"}.fa-briefcase-medical:before{content:"\f469"}.fa-broadcast-tower:before{content:"\f519"}.fa-broom:before{content:"\f51a"}.fa-brush:before{content:"\f55d"}.fa-btc:before{content:"\f15a"}.fa-buffer:before{content:"\f837"}.fa-bug:before{content:"\f188"}.fa-building:before{content:"\f1ad"}.fa-bullhorn:before{content:"\f0a1"}.fa-bullseye:before{content:"\f140"}.fa-burn:before{content:"\f46a"}.fa-buromobelexperte:before{content:"\f37f"}.fa-bus:before{content:"\f207"}.fa-bus-alt:before{content:"\f55e"}.fa-business-time:before{content:"\f64a"}.fa-buysellads:before{content:"\f20d"}.fa-calculator:before{content:"\f1ec"}.fa-calendar:before{content:"\f133"}.fa-calendar-alt:before{content:"\f073"}.fa-calendar-check:before{content:"\f274"}.fa-calendar-day:before{content:"\f783"}.fa-calendar-minus:before{content:"\f272"}.fa-calendar-plus:before{content:"\f271"}.fa-calendar-times:before{content:"\f273"}.fa-calendar-week:before{content:"\f784"}.fa-camera:before{content:"\f030"}.fa-camera-retro:before{content:"\f083"}.fa-campgro
und:before{content:"\f6bb"}.fa-canadian-maple-leaf:before{content:"\f785"}.fa-candy-cane:before{content:"\f786"}.fa-cannabis:before{content:"\f55f"}.fa-capsules:before{content:"\f46b"}.fa-car:before{content:"\f1b9"}.fa-car-alt:before{content:"\f5de"}.fa-car-battery:before{content:"\f5df"}.fa-car-crash:before{content:"\f5e1"}.fa-car-side:before{content:"\f5e4"}.fa-caret-down:before{content:"\f0d7"}.fa-caret-left:before{content:"\f0d9"}.fa-caret-right:before{content:"\f0da"}.fa-caret-square-down:before{content:"\f150"}.fa-caret-square-left:before{content:"\f191"}.fa-caret-square-right:before{content:"\f152"}.fa-caret-square-up:before{content:"\f151"}.fa-caret-up:before{content:"\f0d8"}.fa-carrot:before{content:"\f787"}.fa-cart-arrow-down:before{content:"\f218"}.fa-cart-plus:before{content:"\f217"}.fa-cash-register:before{content:"\f788"}.fa-cat:before{content:"\f6be"}.fa-cc-amazon-pay:before{content:"\f42d"}.fa-cc-amex:before{content:"\f1f3"}.fa-cc-apple-pay:before{content:"\f416"}.fa-cc-diners-club:before{content:"\f24c"}.fa-cc-discover:before{content:"\f1f2"}.fa-cc-jcb:before{content:"\f24b"}.fa-cc-mastercard:before{content:"\f1f1"}.fa-cc-paypal:before{content:"\f1f4"}.fa-cc-stripe:before{content:"\f1f5"}.fa-cc-visa:before{content:"\f1f0"}.fa-centercode:before{content:"\f380"}.fa-centos:before{content:"\f789"}.fa-certificate:before{content:"\f0a3"}.fa-chair:before{content:"\f6c0"}.fa-chalkboard:before{content:"\f51b"}.fa-chalkboard-teacher:before{content:"\f51c"}.fa-charging-station:before{content:"\f5e7"}.fa-chart-area:before{content:"\f1fe"}.fa-chart-bar:before{content:"\f080"}.fa-chart-line:before{content:"\f201"}.fa-chart-pie:before{content:"\f200"}.fa-check:before{content:"\f00c"}.fa-check-circle:before{content:"\f058"}.fa-check-double:before{content:"\f560"}.fa-check-square:before{content:"\f14a"}.fa-cheese:before{content:"\f7ef"}.fa-chess:before{content:"\f439"}.fa-chess-bishop:before{content:"\f43a"}.fa-chess-board:before{content:"\f43c"}.fa-chess-king:before{content:"\f43f"}.fa-chess-knight:before{content:"\f441"}.fa-chess-pawn:before{content:"\f443"}.fa-chess-queen:before{content:"\f445"}.fa-chess-rook:before{content:"\f447"}.fa-chevron-circle-down:before{content:"\f13a"}.fa-chevron-circle-left:before{content:"\f137"}.fa-chevron-circle-right:before{content:"\f138"}.fa-chevron-circle-up:before{content:"\f139"}.fa-chevron-down:before{content:"\f078"}.fa-chevron-left:before{content:"\f053"}.fa-chevron-right:before{content:"\f054"}.fa-chevron-up:before{content:"\f077"}.fa-child:before{content:"\f1ae"}.fa-chrome:before{content:"\f268"}.fa-chromecast:before{content:"\f838"}.fa-church:before{content:"\f51d"}.fa-circle:before{content:"\f111"}.fa-circle-notch:before{content:"\f1ce"}.fa-city:before{content:"\f64f"}.fa-clinic-medical:before{content:"\f7f2"}.fa-clipboard:before{content:"\f328"}.fa-clipboard-check:before{content:"\f46c"}.fa-clipboard-list:before{content:"\f46d"}.fa-clock:before{content:"\f017"}.fa-clone:before{content:"\f24d"}.fa-closed-captioning:before{content:"\f20a"}.fa-cloud:before{content:"\f0c2"}.fa-cloud-download-alt:before{content:"\f381"}.fa-cloud-meatball:before{content:"\f73b"}.fa-cloud-moon:before{content:"\f6c3"}.fa-cloud-moon-rain:before{content:"\f73c"}.fa-cloud-rain:before{content:"\f73d"}.fa-cloud-showers-heavy:before{content:"\f740"}.fa-cloud-sun:before{content:"\f6c4"}.fa-cloud-sun-rain:before{content:"\f743"}.fa-cloud-upload-alt:before{content:"\f382"}.fa-cloudscale:before{content:"\f383"}.fa-cloudsmith:before{content:"\f384"}.fa-cloudversify:before{conten
t:"\f385"}.fa-cocktail:before{content:"\f561"}.fa-code:before{content:"\f121"}.fa-code-branch:before{content:"\f126"}.fa-codepen:before{content:"\f1cb"}.fa-codiepie:before{content:"\f284"}.fa-coffee:before{content:"\f0f4"}.fa-cog:before{content:"\f013"}.fa-cogs:before{content:"\f085"}.fa-coins:before{content:"\f51e"}.fa-columns:before{content:"\f0db"}.fa-comment:before{content:"\f075"}.fa-comment-alt:before{content:"\f27a"}.fa-comment-dollar:before{content:"\f651"}.fa-comment-dots:before{content:"\f4ad"}.fa-comment-medical:before{content:"\f7f5"}.fa-comment-slash:before{content:"\f4b3"}.fa-comments:before{content:"\f086"}.fa-comments-dollar:before{content:"\f653"}.fa-compact-disc:before{content:"\f51f"}.fa-compass:before{content:"\f14e"}.fa-compress:before{content:"\f066"}.fa-compress-arrows-alt:before{content:"\f78c"}.fa-concierge-bell:before{content:"\f562"}.fa-confluence:before{content:"\f78d"}.fa-connectdevelop:before{content:"\f20e"}.fa-contao:before{content:"\f26d"}.fa-cookie:before{content:"\f563"}.fa-cookie-bite:before{content:"\f564"}.fa-copy:before{content:"\f0c5"}.fa-copyright:before{content:"\f1f9"}.fa-cotton-bureau:before{content:"\f89e"}.fa-couch:before{content:"\f4b8"}.fa-cpanel:before{content:"\f388"}.fa-creative-commons:before{content:"\f25e"}.fa-creative-commons-by:before{content:"\f4e7"}.fa-creative-commons-nc:before{content:"\f4e8"}.fa-creative-commons-nc-eu:before{content:"\f4e9"}.fa-creative-commons-nc-jp:before{content:"\f4ea"}.fa-creative-commons-nd:before{content:"\f4eb"}.fa-creative-commons-pd:before{content:"\f4ec"}.fa-creative-commons-pd-alt:before{content:"\f4ed"}.fa-creative-commons-remix:before{content:"\f4ee"}.fa-creative-commons-sa:before{content:"\f4ef"}.fa-creative-commons-sampling:before{content:"\f4f0"}.fa-creative-commons-sampling-plus:before{content:"\f4f1"}.fa-creative-commons-share:before{content:"\f4f2"}.fa-creative-commons-zero:before{content:"\f4f3"}.fa-credit-card:before{content:"\f09d"}.fa-critical-role:before{content:"\f6c9"}.fa-crop:before{content:"\f125"}.fa-crop-alt:before{content:"\f565"}.fa-cross:before{content:"\f654"}.fa-crosshairs:before{content:"\f05b"}.fa-crow:before{content:"\f520"}.fa-crown:before{content:"\f521"}.fa-crutch:before{content:"\f7f7"}.fa-css3:before{content:"\f13c"}.fa-css3-alt:before{content:"\f38b"}.fa-cube:before{content:"\f1b2"}.fa-cubes:before{content:"\f1b3"}.fa-cut:before{content:"\f0c4"}.fa-cuttlefish:before{content:"\f38c"}.fa-d-and-d:before{content:"\f38d"}.fa-d-and-d-beyond:before{content:"\f6ca"}.fa-dashcube:before{content:"\f210"}.fa-database:before{content:"\f1c0"}.fa-deaf:before{content:"\f2a4"}.fa-delicious:before{content:"\f1a5"}.fa-democrat:before{content:"\f747"}.fa-deploydog:before{content:"\f38e"}.fa-deskpro:before{content:"\f38f"}.fa-desktop:before{content:"\f108"}.fa-dev:before{content:"\f6cc"}.fa-deviantart:before{content:"\f1bd"}.fa-dharmachakra:before{content:"\f655"}.fa-dhl:before{content:"\f790"}.fa-diagnoses:before{content:"\f470"}.fa-diaspora:before{content:"\f791"}.fa-dice:before{content:"\f522"}.fa-dice-d20:before{content:"\f6cf"}.fa-dice-d6:before{content:"\f6d1"}.fa-dice-five:before{content:"\f523"}.fa-dice-four:before{content:"\f524"}.fa-dice-one:before{content:"\f525"}.fa-dice-six:before{content:"\f526"}.fa-dice-three:before{content:"\f527"}.fa-dice-two:before{content:"\f528"}.fa-digg:before{content:"\f1a6"}.fa-digital-ocean:before{content:"\f391"}.fa-digital-tachograph:before{content:"\f566"}.fa-directions:before{content:"\f5eb"}.fa-discord:before{content:"\f392"}.fa-discourse:before
{content:"\f393"}.fa-divide:before{content:"\f529"}.fa-dizzy:before{content:"\f567"}.fa-dna:before{content:"\f471"}.fa-dochub:before{content:"\f394"}.fa-docker:before{content:"\f395"}.fa-dog:before{content:"\f6d3"}.fa-dollar-sign:before{content:"\f155"}.fa-dolly:before{content:"\f472"}.fa-dolly-flatbed:before{content:"\f474"}.fa-donate:before{content:"\f4b9"}.fa-door-closed:before{content:"\f52a"}.fa-door-open:before{content:"\f52b"}.fa-dot-circle:before{content:"\f192"}.fa-dove:before{content:"\f4ba"}.fa-download:before{content:"\f019"}.fa-draft2digital:before{content:"\f396"}.fa-drafting-compass:before{content:"\f568"}.fa-dragon:before{content:"\f6d5"}.fa-draw-polygon:before{content:"\f5ee"}.fa-dribbble:before{content:"\f17d"}.fa-dribbble-square:before{content:"\f397"}.fa-dropbox:before{content:"\f16b"}.fa-drum:before{content:"\f569"}.fa-drum-steelpan:before{content:"\f56a"}.fa-drumstick-bite:before{content:"\f6d7"}.fa-drupal:before{content:"\f1a9"}.fa-dumbbell:before{content:"\f44b"}.fa-dumpster:before{content:"\f793"}.fa-dumpster-fire:before{content:"\f794"}.fa-dungeon:before{content:"\f6d9"}.fa-dyalog:before{content:"\f399"}.fa-earlybirds:before{content:"\f39a"}.fa-ebay:before{content:"\f4f4"}.fa-edge:before{content:"\f282"}.fa-edit:before{content:"\f044"}.fa-egg:before{content:"\f7fb"}.fa-eject:before{content:"\f052"}.fa-elementor:before{content:"\f430"}.fa-ellipsis-h:before{content:"\f141"}.fa-ellipsis-v:before{content:"\f142"}.fa-ello:before{content:"\f5f1"}.fa-ember:before{content:"\f423"}.fa-empire:before{content:"\f1d1"}.fa-envelope:before{content:"\f0e0"}.fa-envelope-open:before{content:"\f2b6"}.fa-envelope-open-text:before{content:"\f658"}.fa-envelope-square:before{content:"\f199"}.fa-envira:before{content:"\f299"}.fa-equals:before{content:"\f52c"}.fa-eraser:before{content:"\f12d"}.fa-erlang:before{content:"\f39d"}.fa-ethereum:before{content:"\f42e"}.fa-ethernet:before{content:"\f796"}.fa-etsy:before{content:"\f2d7"}.fa-euro-sign:before{content:"\f153"}.fa-evernote:before{content:"\f839"}.fa-exchange-alt:before{content:"\f362"}.fa-exclamation:before{content:"\f12a"}.fa-exclamation-circle:before{content:"\f06a"}.fa-exclamation-triangle:before{content:"\f071"}.fa-expand:before{content:"\f065"}.fa-expand-arrows-alt:before{content:"\f31e"}.fa-expeditedssl:before{content:"\f23e"}.fa-external-link-alt:before{content:"\f35d"}.fa-external-link-square-alt:before{content:"\f360"}.fa-eye:before{content:"\f06e"}.fa-eye-dropper:before{content:"\f1fb"}.fa-eye-slash:before{content:"\f070"}.fa-facebook:before{content:"\f09a"}.fa-facebook-f:before{content:"\f39e"}.fa-facebook-messenger:before{content:"\f39f"}.fa-facebook-square:before{content:"\f082"}.fa-fan:before{content:"\f863"}.fa-fantasy-flight-games:before{content:"\f6dc"}.fa-fast-backward:before{content:"\f049"}.fa-fast-forward:before{content:"\f050"}.fa-fax:before{content:"\f1ac"}.fa-feather:before{content:"\f52d"}.fa-feather-alt:before{content:"\f56b"}.fa-fedex:before{content:"\f797"}.fa-fedora:before{content:"\f798"}.fa-female:before{content:"\f182"}.fa-fighter-jet:before{content:"\f0fb"}.fa-figma:before{content:"\f799"}.fa-file:before{content:"\f15b"}.fa-file-alt:before{content:"\f15c"}.fa-file-archive:before{content:"\f1c6"}.fa-file-audio:before{content:"\f1c7"}.fa-file-code:before{content:"\f1c9"}.fa-file-contract:before{content:"\f56c"}.fa-file-csv:before{content:"\f6dd"}.fa-file-download:before{content:"\f56d"}.fa-file-excel:before{content:"\f1c3"}.fa-file-export:before{content:"\f56e"}.fa-file-image:before{content:"\f1c5"}.fa-fi
le-import:before{content:"\f56f"}.fa-file-invoice:before{content:"\f570"}.fa-file-invoice-dollar:before{content:"\f571"}.fa-file-medical:before{content:"\f477"}.fa-file-medical-alt:before{content:"\f478"}.fa-file-pdf:before{content:"\f1c1"}.fa-file-powerpoint:before{content:"\f1c4"}.fa-file-prescription:before{content:"\f572"}.fa-file-signature:before{content:"\f573"}.fa-file-upload:before{content:"\f574"}.fa-file-video:before{content:"\f1c8"}.fa-file-word:before{content:"\f1c2"}.fa-fill:before{content:"\f575"}.fa-fill-drip:before{content:"\f576"}.fa-film:before{content:"\f008"}.fa-filter:before{content:"\f0b0"}.fa-fingerprint:before{content:"\f577"}.fa-fire:before{content:"\f06d"}.fa-fire-alt:before{content:"\f7e4"}.fa-fire-extinguisher:before{content:"\f134"}.fa-firefox:before{content:"\f269"}.fa-first-aid:before{content:"\f479"}.fa-first-order:before{content:"\f2b0"}.fa-first-order-alt:before{content:"\f50a"}.fa-firstdraft:before{content:"\f3a1"}.fa-fish:before{content:"\f578"}.fa-fist-raised:before{content:"\f6de"}.fa-flag:before{content:"\f024"}.fa-flag-checkered:before{content:"\f11e"}.fa-flag-usa:before{content:"\f74d"}.fa-flask:before{content:"\f0c3"}.fa-flickr:before{content:"\f16e"}.fa-flipboard:before{content:"\f44d"}.fa-flushed:before{content:"\f579"}.fa-fly:before{content:"\f417"}.fa-folder:before{content:"\f07b"}.fa-folder-minus:before{content:"\f65d"}.fa-folder-open:before{content:"\f07c"}.fa-folder-plus:before{content:"\f65e"}.fa-font:before{content:"\f031"}.fa-font-awesome:before{content:"\f2b4"}.fa-font-awesome-alt:before{content:"\f35c"}.fa-font-awesome-flag:before{content:"\f425"}.fa-font-awesome-logo-full:before{content:"\f4e6"}.fa-fonticons:before{content:"\f280"}.fa-fonticons-fi:before{content:"\f3a2"}.fa-football-ball:before{content:"\f44e"}.fa-fort-awesome:before{content:"\f286"}.fa-fort-awesome-alt:before{content:"\f3a3"}.fa-forumbee:before{content:"\f211"}.fa-forward:before{content:"\f04e"}.fa-foursquare:before{content:"\f180"}.fa-free-code-camp:before{content:"\f2c5"}.fa-freebsd:before{content:"\f3a4"}.fa-frog:before{content:"\f52e"}.fa-frown:before{content:"\f119"}.fa-frown-open:before{content:"\f57a"}.fa-fulcrum:before{content:"\f50b"}.fa-funnel-dollar:before{content:"\f662"}.fa-futbol:before{content:"\f1e3"}.fa-galactic-republic:before{content:"\f50c"}.fa-galactic-senate:before{content:"\f50d"}.fa-gamepad:before{content:"\f11b"}.fa-gas-pump:before{content:"\f52f"}.fa-gavel:before{content:"\f0e3"}.fa-gem:before{content:"\f3a5"}.fa-genderless:before{content:"\f22d"}.fa-get-pocket:before{content:"\f265"}.fa-gg:before{content:"\f260"}.fa-gg-circle:before{content:"\f261"}.fa-ghost:before{content:"\f6e2"}.fa-gift:before{content:"\f06b"}.fa-gifts:before{content:"\f79c"}.fa-git:before{content:"\f1d3"}.fa-git-alt:before{content:"\f841"}.fa-git-square:before{content:"\f1d2"}.fa-github:before{content:"\f09b"}.fa-github-alt:before{content:"\f113"}.fa-github-square:before{content:"\f092"}.fa-gitkraken:before{content:"\f3a6"}.fa-gitlab:before{content:"\f296"}.fa-gitter:before{content:"\f426"}.fa-glass-cheers:before{content:"\f79f"}.fa-glass-martini:before{content:"\f000"}.fa-glass-martini-alt:before{content:"\f57b"}.fa-glass-whiskey:before{content:"\f7a0"}.fa-glasses:before{content:"\f530"}.fa-glide:before{content:"\f2a5"}.fa-glide-g:before{content:"\f2a6"}.fa-globe:before{content:"\f0ac"}.fa-globe-africa:before{content:"\f57c"}.fa-globe-americas:before{content:"\f57d"}.fa-globe-asia:before{content:"\f57e"}.fa-globe-europe:before{content:"\f7a2"}.fa-gofore:before{content:"\
f3a7"}.fa-golf-ball:before{content:"\f450"}.fa-goodreads:before{content:"\f3a8"}.fa-goodreads-g:before{content:"\f3a9"}.fa-google:before{content:"\f1a0"}.fa-google-drive:before{content:"\f3aa"}.fa-google-play:before{content:"\f3ab"}.fa-google-plus:before{content:"\f2b3"}.fa-google-plus-g:before{content:"\f0d5"}.fa-google-plus-square:before{content:"\f0d4"}.fa-google-wallet:before{content:"\f1ee"}.fa-gopuram:before{content:"\f664"}.fa-graduation-cap:before{content:"\f19d"}.fa-gratipay:before{content:"\f184"}.fa-grav:before{content:"\f2d6"}.fa-greater-than:before{content:"\f531"}.fa-greater-than-equal:before{content:"\f532"}.fa-grimace:before{content:"\f57f"}.fa-grin:before{content:"\f580"}.fa-grin-alt:before{content:"\f581"}.fa-grin-beam:before{content:"\f582"}.fa-grin-beam-sweat:before{content:"\f583"}.fa-grin-hearts:before{content:"\f584"}.fa-grin-squint:before{content:"\f585"}.fa-grin-squint-tears:before{content:"\f586"}.fa-grin-stars:before{content:"\f587"}.fa-grin-tears:before{content:"\f588"}.fa-grin-tongue:before{content:"\f589"}.fa-grin-tongue-squint:before{content:"\f58a"}.fa-grin-tongue-wink:before{content:"\f58b"}.fa-grin-wink:before{content:"\f58c"}.fa-grip-horizontal:before{content:"\f58d"}.fa-grip-lines:before{content:"\f7a4"}.fa-grip-lines-vertical:before{content:"\f7a5"}.fa-grip-vertical:before{content:"\f58e"}.fa-gripfire:before{content:"\f3ac"}.fa-grunt:before{content:"\f3ad"}.fa-guitar:before{content:"\f7a6"}.fa-gulp:before{content:"\f3ae"}.fa-h-square:before{content:"\f0fd"}.fa-hacker-news:before{content:"\f1d4"}.fa-hacker-news-square:before{content:"\f3af"}.fa-hackerrank:before{content:"\f5f7"}.fa-hamburger:before{content:"\f805"}.fa-hammer:before{content:"\f6e3"}.fa-hamsa:before{content:"\f665"}.fa-hand-holding:before{content:"\f4bd"}.fa-hand-holding-heart:before{content:"\f4be"}.fa-hand-holding-usd:before{content:"\f4c0"}.fa-hand-lizard:before{content:"\f258"}.fa-hand-middle-finger:before{content:"\f806"}.fa-hand-paper:before{content:"\f256"}.fa-hand-peace:before{content:"\f25b"}.fa-hand-point-down:before{content:"\f0a7"}.fa-hand-point-left:before{content:"\f0a5"}.fa-hand-point-right:before{content:"\f0a4"}.fa-hand-point-up:before{content:"\f0a6"}.fa-hand-pointer:before{content:"\f25a"}.fa-hand-rock:before{content:"\f255"}.fa-hand-scissors:before{content:"\f257"}.fa-hand-spock:before{content:"\f259"}.fa-hands:before{content:"\f4c2"}.fa-hands-helping:before{content:"\f4c4"}.fa-handshake:before{content:"\f2b5"}.fa-hanukiah:before{content:"\f6e6"}.fa-hard-hat:before{content:"\f807"}.fa-hashtag:before{content:"\f292"}.fa-hat-wizard:before{content:"\f6e8"}.fa-haykal:before{content:"\f666"}.fa-hdd:before{content:"\f0a0"}.fa-heading:before{content:"\f1dc"}.fa-headphones:before{content:"\f025"}.fa-headphones-alt:before{content:"\f58f"}.fa-headset:before{content:"\f590"}.fa-heart:before{content:"\f004"}.fa-heart-broken:before{content:"\f7a9"}.fa-heartbeat:before{content:"\f21e"}.fa-helicopter:before{content:"\f533"}.fa-highlighter:before{content:"\f591"}.fa-hiking:before{content:"\f6ec"}.fa-hippo:before{content:"\f6ed"}.fa-hips:before{content:"\f452"}.fa-hire-a-helper:before{content:"\f3b0"}.fa-history:before{content:"\f1da"}.fa-hockey-puck:before{content:"\f453"}.fa-holly-berry:before{content:"\f7aa"}.fa-home:before{content:"\f015"}.fa-hooli:before{content:"\f427"}.fa-hornbill:before{content:"\f592"}.fa-horse:before{content:"\f6f0"}.fa-horse-head:before{content:"\f7ab"}.fa-hospital:before{content:"\f0f8"}.fa-hospital-alt:before{content:"\f47d"}.fa-hospital-symbol:before{conten
t:"\f47e"}.fa-hot-tub:before{content:"\f593"}.fa-hotdog:before{content:"\f80f"}.fa-hotel:before{content:"\f594"}.fa-hotjar:before{content:"\f3b1"}.fa-hourglass:before{content:"\f254"}.fa-hourglass-end:before{content:"\f253"}.fa-hourglass-half:before{content:"\f252"}.fa-hourglass-start:before{content:"\f251"}.fa-house-damage:before{content:"\f6f1"}.fa-houzz:before{content:"\f27c"}.fa-hryvnia:before{content:"\f6f2"}.fa-html5:before{content:"\f13b"}.fa-hubspot:before{content:"\f3b2"}.fa-i-cursor:before{content:"\f246"}.fa-ice-cream:before{content:"\f810"}.fa-icicles:before{content:"\f7ad"}.fa-icons:before{content:"\f86d"}.fa-id-badge:before{content:"\f2c1"}.fa-id-card:before{content:"\f2c2"}.fa-id-card-alt:before{content:"\f47f"}.fa-igloo:before{content:"\f7ae"}.fa-image:before{content:"\f03e"}.fa-images:before{content:"\f302"}.fa-imdb:before{content:"\f2d8"}.fa-inbox:before{content:"\f01c"}.fa-indent:before{content:"\f03c"}.fa-industry:before{content:"\f275"}.fa-infinity:before{content:"\f534"}.fa-info:before{content:"\f129"}.fa-info-circle:before{content:"\f05a"}.fa-instagram:before{content:"\f16d"}.fa-intercom:before{content:"\f7af"}.fa-internet-explorer:before{content:"\f26b"}.fa-invision:before{content:"\f7b0"}.fa-ioxhost:before{content:"\f208"}.fa-italic:before{content:"\f033"}.fa-itch-io:before{content:"\f83a"}.fa-itunes:before{content:"\f3b4"}.fa-itunes-note:before{content:"\f3b5"}.fa-java:before{content:"\f4e4"}.fa-jedi:before{content:"\f669"}.fa-jedi-order:before{content:"\f50e"}.fa-jenkins:before{content:"\f3b6"}.fa-jira:before{content:"\f7b1"}.fa-joget:before{content:"\f3b7"}.fa-joint:before{content:"\f595"}.fa-joomla:before{content:"\f1aa"}.fa-journal-whills:before{content:"\f66a"}.fa-js:before{content:"\f3b8"}.fa-js-square:before{content:"\f3b9"}.fa-jsfiddle:before{content:"\f1cc"}.fa-kaaba:before{content:"\f66b"}.fa-kaggle:before{content:"\f5fa"}.fa-key:before{content:"\f084"}.fa-keybase:before{content:"\f4f5"}.fa-keyboard:before{content:"\f11c"}.fa-keycdn:before{content:"\f3ba"}.fa-khanda:before{content:"\f66d"}.fa-kickstarter:before{content:"\f3bb"}.fa-kickstarter-k:before{content:"\f3bc"}.fa-kiss:before{content:"\f596"}.fa-kiss-beam:before{content:"\f597"}.fa-kiss-wink-heart:before{content:"\f598"}.fa-kiwi-bird:before{content:"\f535"}.fa-korvue:before{content:"\f42f"}.fa-landmark:before{content:"\f66f"}.fa-language:before{content:"\f1ab"}.fa-laptop:before{content:"\f109"}.fa-laptop-code:before{content:"\f5fc"}.fa-laptop-medical:before{content:"\f812"}.fa-laravel:before{content:"\f3bd"}.fa-lastfm:before{content:"\f202"}.fa-lastfm-square:before{content:"\f203"}.fa-laugh:before{content:"\f599"}.fa-laugh-beam:before{content:"\f59a"}.fa-laugh-squint:before{content:"\f59b"}.fa-laugh-wink:before{content:"\f59c"}.fa-layer-group:before{content:"\f5fd"}.fa-leaf:before{content:"\f06c"}.fa-leanpub:before{content:"\f212"}.fa-lemon:before{content:"\f094"}.fa-less:before{content:"\f41d"}.fa-less-than:before{content:"\f536"}.fa-less-than-equal:before{content:"\f537"}.fa-level-down-alt:before{content:"\f3be"}.fa-level-up-alt:before{content:"\f3bf"}.fa-life-ring:before{content:"\f1cd"}.fa-lightbulb:before{content:"\f0eb"}.fa-line:before{content:"\f3c0"}.fa-link:before{content:"\f0c1"}.fa-linkedin:before{content:"\f08c"}.fa-linkedin-in:before{content:"\f0e1"}.fa-linode:before{content:"\f2b8"}.fa-linux:before{content:"\f17c"}.fa-lira-sign:before{content:"\f195"}.fa-list:before{content:"\f03a"}.fa-list-alt:before{content:"\f022"}.fa-list-ol:before{content:"\f0cb"}.fa-list-ul:before{content:"\f0ca
"}.fa-location-arrow:before{content:"\f124"}.fa-lock:before{content:"\f023"}.fa-lock-open:before{content:"\f3c1"}.fa-long-arrow-alt-down:before{content:"\f309"}.fa-long-arrow-alt-left:before{content:"\f30a"}.fa-long-arrow-alt-right:before{content:"\f30b"}.fa-long-arrow-alt-up:before{content:"\f30c"}.fa-low-vision:before{content:"\f2a8"}.fa-luggage-cart:before{content:"\f59d"}.fa-lyft:before{content:"\f3c3"}.fa-magento:before{content:"\f3c4"}.fa-magic:before{content:"\f0d0"}.fa-magnet:before{content:"\f076"}.fa-mail-bulk:before{content:"\f674"}.fa-mailchimp:before{content:"\f59e"}.fa-male:before{content:"\f183"}.fa-mandalorian:before{content:"\f50f"}.fa-map:before{content:"\f279"}.fa-map-marked:before{content:"\f59f"}.fa-map-marked-alt:before{content:"\f5a0"}.fa-map-marker:before{content:"\f041"}.fa-map-marker-alt:before{content:"\f3c5"}.fa-map-pin:before{content:"\f276"}.fa-map-signs:before{content:"\f277"}.fa-markdown:before{content:"\f60f"}.fa-marker:before{content:"\f5a1"}.fa-mars:before{content:"\f222"}.fa-mars-double:before{content:"\f227"}.fa-mars-stroke:before{content:"\f229"}.fa-mars-stroke-h:before{content:"\f22b"}.fa-mars-stroke-v:before{content:"\f22a"}.fa-mask:before{content:"\f6fa"}.fa-mastodon:before{content:"\f4f6"}.fa-maxcdn:before{content:"\f136"}.fa-medal:before{content:"\f5a2"}.fa-medapps:before{content:"\f3c6"}.fa-medium:before{content:"\f23a"}.fa-medium-m:before{content:"\f3c7"}.fa-medkit:before{content:"\f0fa"}.fa-medrt:before{content:"\f3c8"}.fa-meetup:before{content:"\f2e0"}.fa-megaport:before{content:"\f5a3"}.fa-meh:before{content:"\f11a"}.fa-meh-blank:before{content:"\f5a4"}.fa-meh-rolling-eyes:before{content:"\f5a5"}.fa-memory:before{content:"\f538"}.fa-mendeley:before{content:"\f7b3"}.fa-menorah:before{content:"\f676"}.fa-mercury:before{content:"\f223"}.fa-meteor:before{content:"\f753"}.fa-microchip:before{content:"\f2db"}.fa-microphone:before{content:"\f130"}.fa-microphone-alt:before{content:"\f3c9"}.fa-microphone-alt-slash:before{content:"\f539"}.fa-microphone-slash:before{content:"\f131"}.fa-microscope:before{content:"\f610"}.fa-microsoft:before{content:"\f3ca"}.fa-minus:before{content:"\f068"}.fa-minus-circle:before{content:"\f056"}.fa-minus-square:before{content:"\f146"}.fa-mitten:before{content:"\f7b5"}.fa-mix:before{content:"\f3cb"}.fa-mixcloud:before{content:"\f289"}.fa-mizuni:before{content:"\f3cc"}.fa-mobile:before{content:"\f10b"}.fa-mobile-alt:before{content:"\f3cd"}.fa-modx:before{content:"\f285"}.fa-monero:before{content:"\f3d0"}.fa-money-bill:before{content:"\f0d6"}.fa-money-bill-alt:before{content:"\f3d1"}.fa-money-bill-wave:before{content:"\f53a"}.fa-money-bill-wave-alt:before{content:"\f53b"}.fa-money-check:before{content:"\f53c"}.fa-money-check-alt:before{content:"\f53d"}.fa-monument:before{content:"\f5a6"}.fa-moon:before{content:"\f186"}.fa-mortar-pestle:before{content:"\f5a7"}.fa-mosque:before{content:"\f678"}.fa-motorcycle:before{content:"\f21c"}.fa-mountain:before{content:"\f6fc"}.fa-mouse-pointer:before{content:"\f245"}.fa-mug-hot:before{content:"\f7b6"}.fa-music:before{content:"\f001"}.fa-napster:before{content:"\f3d2"}.fa-neos:before{content:"\f612"}.fa-network-wired:before{content:"\f6ff"}.fa-neuter:before{content:"\f22c"}.fa-newspaper:before{content:"\f1ea"}.fa-nimblr:before{content:"\f5a8"}.fa-node:before{content:"\f419"}.fa-node-js:before{content:"\f3d3"}.fa-not-equal:before{content:"\f53e"}.fa-notes-medical:before{content:"\f481"}.fa-npm:before{content:"\f3d4"}.fa-ns8:before{content:"\f3d5"}.fa-nutritionix:before{content:"\f3d6"}.fa
-object-group:before{content:"\f247"}.fa-object-ungroup:before{content:"\f248"}.fa-odnoklassniki:before{content:"\f263"}.fa-odnoklassniki-square:before{content:"\f264"}.fa-oil-can:before{content:"\f613"}.fa-old-republic:before{content:"\f510"}.fa-om:before{content:"\f679"}.fa-opencart:before{content:"\f23d"}.fa-openid:before{content:"\f19b"}.fa-opera:before{content:"\f26a"}.fa-optin-monster:before{content:"\f23c"}.fa-osi:before{content:"\f41a"}.fa-otter:before{content:"\f700"}.fa-outdent:before{content:"\f03b"}.fa-page4:before{content:"\f3d7"}.fa-pagelines:before{content:"\f18c"}.fa-pager:before{content:"\f815"}.fa-paint-brush:before{content:"\f1fc"}.fa-paint-roller:before{content:"\f5aa"}.fa-palette:before{content:"\f53f"}.fa-palfed:before{content:"\f3d8"}.fa-pallet:before{content:"\f482"}.fa-paper-plane:before{content:"\f1d8"}.fa-paperclip:before{content:"\f0c6"}.fa-parachute-box:before{content:"\f4cd"}.fa-paragraph:before{content:"\f1dd"}.fa-parking:before{content:"\f540"}.fa-passport:before{content:"\f5ab"}.fa-pastafarianism:before{content:"\f67b"}.fa-paste:before{content:"\f0ea"}.fa-patreon:before{content:"\f3d9"}.fa-pause:before{content:"\f04c"}.fa-pause-circle:before{content:"\f28b"}.fa-paw:before{content:"\f1b0"}.fa-paypal:before{content:"\f1ed"}.fa-peace:before{content:"\f67c"}.fa-pen:before{content:"\f304"}.fa-pen-alt:before{content:"\f305"}.fa-pen-fancy:before{content:"\f5ac"}.fa-pen-nib:before{content:"\f5ad"}.fa-pen-square:before{content:"\f14b"}.fa-pencil-alt:before{content:"\f303"}.fa-pencil-ruler:before{content:"\f5ae"}.fa-penny-arcade:before{content:"\f704"}.fa-people-carry:before{content:"\f4ce"}.fa-pepper-hot:before{content:"\f816"}.fa-percent:before{content:"\f295"}.fa-percentage:before{content:"\f541"}.fa-periscope:before{content:"\f3da"}.fa-person-booth:before{content:"\f756"}.fa-phabricator:before{content:"\f3db"}.fa-phoenix-framework:before{content:"\f3dc"}.fa-phoenix-squadron:before{content:"\f511"}.fa-phone:before{content:"\f095"}.fa-phone-alt:before{content:"\f879"}.fa-phone-slash:before{content:"\f3dd"}.fa-phone-square:before{content:"\f098"}.fa-phone-square-alt:before{content:"\f87b"}.fa-phone-volume:before{content:"\f2a0"}.fa-photo-video:before{content:"\f87c"}.fa-php:before{content:"\f457"}.fa-pied-piper:before{content:"\f2ae"}.fa-pied-piper-alt:before{content:"\f1a8"}.fa-pied-piper-hat:before{content:"\f4e5"}.fa-pied-piper-pp:before{content:"\f1a7"}.fa-piggy-bank:before{content:"\f4d3"}.fa-pills:before{content:"\f484"}.fa-pinterest:before{content:"\f0d2"}.fa-pinterest-p:before{content:"\f231"}.fa-pinterest-square:before{content:"\f0d3"}.fa-pizza-slice:before{content:"\f818"}.fa-place-of-worship:before{content:"\f67f"}.fa-plane:before{content:"\f072"}.fa-plane-arrival:before{content:"\f5af"}.fa-plane-departure:before{content:"\f5b0"}.fa-play:before{content:"\f04b"}.fa-play-circle:before{content:"\f144"}.fa-playstation:before{content:"\f3df"}.fa-plug:before{content:"\f1e6"}.fa-plus:before{content:"\f067"}.fa-plus-circle:before{content:"\f055"}.fa-plus-square:before{content:"\f0fe"}.fa-podcast:before{content:"\f2ce"}.fa-poll:before{content:"\f681"}.fa-poll-h:before{content:"\f682"}.fa-poo:before{content:"\f2fe"}.fa-poo-storm:before{content:"\f75a"}.fa-poop:before{content:"\f619"}.fa-portrait:before{content:"\f3e0"}.fa-pound-sign:before{content:"\f154"}.fa-power-off:before{content:"\f011"}.fa-pray:before{content:"\f683"}.fa-praying-hands:before{content:"\f684"}.fa-prescription:before{content:"\f5b1"}.fa-prescription-bottle:before{content:"\f485"}.fa-prescription-
bottle-alt:before{content:"\f486"}.fa-print:before{content:"\f02f"}.fa-procedures:before{content:"\f487"}.fa-product-hunt:before{content:"\f288"}.fa-project-diagram:before{content:"\f542"}.fa-pushed:before{content:"\f3e1"}.fa-puzzle-piece:before{content:"\f12e"}.fa-python:before{content:"\f3e2"}.fa-qq:before{content:"\f1d6"}.fa-qrcode:before{content:"\f029"}.fa-question:before{content:"\f128"}.fa-question-circle:before{content:"\f059"}.fa-quidditch:before{content:"\f458"}.fa-quinscape:before{content:"\f459"}.fa-quora:before{content:"\f2c4"}.fa-quote-left:before{content:"\f10d"}.fa-quote-right:before{content:"\f10e"}.fa-quran:before{content:"\f687"}.fa-r-project:before{content:"\f4f7"}.fa-radiation:before{content:"\f7b9"}.fa-radiation-alt:before{content:"\f7ba"}.fa-rainbow:before{content:"\f75b"}.fa-random:before{content:"\f074"}.fa-raspberry-pi:before{content:"\f7bb"}.fa-ravelry:before{content:"\f2d9"}.fa-react:before{content:"\f41b"}.fa-reacteurope:before{content:"\f75d"}.fa-readme:before{content:"\f4d5"}.fa-rebel:before{content:"\f1d0"}.fa-receipt:before{content:"\f543"}.fa-recycle:before{content:"\f1b8"}.fa-red-river:before{content:"\f3e3"}.fa-reddit:before{content:"\f1a1"}.fa-reddit-alien:before{content:"\f281"}.fa-reddit-square:before{content:"\f1a2"}.fa-redhat:before{content:"\f7bc"}.fa-redo:before{content:"\f01e"}.fa-redo-alt:before{content:"\f2f9"}.fa-registered:before{content:"\f25d"}.fa-remove-format:before{content:"\f87d"}.fa-renren:before{content:"\f18b"}.fa-reply:before{content:"\f3e5"}.fa-reply-all:before{content:"\f122"}.fa-replyd:before{content:"\f3e6"}.fa-republican:before{content:"\f75e"}.fa-researchgate:before{content:"\f4f8"}.fa-resolving:before{content:"\f3e7"}.fa-restroom:before{content:"\f7bd"}.fa-retweet:before{content:"\f079"}.fa-rev:before{content:"\f5b2"}.fa-ribbon:before{content:"\f4d6"}.fa-ring:before{content:"\f70b"}.fa-road:before{content:"\f018"}.fa-robot:before{content:"\f544"}.fa-rocket:before{content:"\f135"}.fa-rocketchat:before{content:"\f3e8"}.fa-rockrms:before{content:"\f3e9"}.fa-route:before{content:"\f4d7"}.fa-rss:before{content:"\f09e"}.fa-rss-square:before{content:"\f143"}.fa-ruble-sign:before{content:"\f158"}.fa-ruler:before{content:"\f545"}.fa-ruler-combined:before{content:"\f546"}.fa-ruler-horizontal:before{content:"\f547"}.fa-ruler-vertical:before{content:"\f548"}.fa-running:before{content:"\f70c"}.fa-rupee-sign:before{content:"\f156"}.fa-sad-cry:before{content:"\f5b3"}.fa-sad-tear:before{content:"\f5b4"}.fa-safari:before{content:"\f267"}.fa-salesforce:before{content:"\f83b"}.fa-sass:before{content:"\f41e"}.fa-satellite:before{content:"\f7bf"}.fa-satellite-dish:before{content:"\f7c0"}.fa-save:before{content:"\f0c7"}.fa-schlix:before{content:"\f3ea"}.fa-school:before{content:"\f549"}.fa-screwdriver:before{content:"\f54a"}.fa-scribd:before{content:"\f28a"}.fa-scroll:before{content:"\f70e"}.fa-sd-card:before{content:"\f7c2"}.fa-search:before{content:"\f002"}.fa-search-dollar:before{content:"\f688"}.fa-search-location:before{content:"\f689"}.fa-search-minus:before{content:"\f010"}.fa-search-plus:before{content:"\f00e"}.fa-searchengin:before{content:"\f3eb"}.fa-seedling:before{content:"\f4d8"}.fa-sellcast:before{content:"\f2da"}.fa-sellsy:before{content:"\f213"}.fa-server:before{content:"\f233"}.fa-servicestack:before{content:"\f3ec"}.fa-shapes:before{content:"\f61f"}.fa-share:before{content:"\f064"}.fa-share-alt:before{content:"\f1e0"}.fa-share-alt-square:before{content:"\f1e1"}.fa-share-square:before{content:"\f14d"}.fa-shekel-sign:before{content:
"\f20b"}.fa-shield-alt:before{content:"\f3ed"}.fa-ship:before{content:"\f21a"}.fa-shipping-fast:before{content:"\f48b"}.fa-shirtsinbulk:before{content:"\f214"}.fa-shoe-prints:before{content:"\f54b"}.fa-shopping-bag:before{content:"\f290"}.fa-shopping-basket:before{content:"\f291"}.fa-shopping-cart:before{content:"\f07a"}.fa-shopware:before{content:"\f5b5"}.fa-shower:before{content:"\f2cc"}.fa-shuttle-van:before{content:"\f5b6"}.fa-sign:before{content:"\f4d9"}.fa-sign-in-alt:before{content:"\f2f6"}.fa-sign-language:before{content:"\f2a7"}.fa-sign-out-alt:before{content:"\f2f5"}.fa-signal:before{content:"\f012"}.fa-signature:before{content:"\f5b7"}.fa-sim-card:before{content:"\f7c4"}.fa-simplybuilt:before{content:"\f215"}.fa-sistrix:before{content:"\f3ee"}.fa-sitemap:before{content:"\f0e8"}.fa-sith:before{content:"\f512"}.fa-skating:before{content:"\f7c5"}.fa-sketch:before{content:"\f7c6"}.fa-skiing:before{content:"\f7c9"}.fa-skiing-nordic:before{content:"\f7ca"}.fa-skull:before{content:"\f54c"}.fa-skull-crossbones:before{content:"\f714"}.fa-skyatlas:before{content:"\f216"}.fa-skype:before{content:"\f17e"}.fa-slack:before{content:"\f198"}.fa-slack-hash:before{content:"\f3ef"}.fa-slash:before{content:"\f715"}.fa-sleigh:before{content:"\f7cc"}.fa-sliders-h:before{content:"\f1de"}.fa-slideshare:before{content:"\f1e7"}.fa-smile:before{content:"\f118"}.fa-smile-beam:before{content:"\f5b8"}.fa-smile-wink:before{content:"\f4da"}.fa-smog:before{content:"\f75f"}.fa-smoking:before{content:"\f48d"}.fa-smoking-ban:before{content:"\f54d"}.fa-sms:before{content:"\f7cd"}.fa-snapchat:before{content:"\f2ab"}.fa-snapchat-ghost:before{content:"\f2ac"}.fa-snapchat-square:before{content:"\f2ad"}.fa-snowboarding:before{content:"\f7ce"}.fa-snowflake:before{content:"\f2dc"}.fa-snowman:before{content:"\f7d0"}.fa-snowplow:before{content:"\f7d2"}.fa-socks:before{content:"\f696"}.fa-solar-panel:before{content:"\f5ba"}.fa-sort:before{content:"\f0dc"}.fa-sort-alpha-down:before{content:"\f15d"}.fa-sort-alpha-down-alt:before{content:"\f881"}.fa-sort-alpha-up:before{content:"\f15e"}.fa-sort-alpha-up-alt:before{content:"\f882"}.fa-sort-amount-down:before{content:"\f160"}.fa-sort-amount-down-alt:before{content:"\f884"}.fa-sort-amount-up:before{content:"\f161"}.fa-sort-amount-up-alt:before{content:"\f885"}.fa-sort-down:before{content:"\f0dd"}.fa-sort-numeric-down:before{content:"\f162"}.fa-sort-numeric-down-alt:before{content:"\f886"}.fa-sort-numeric-up:before{content:"\f163"}.fa-sort-numeric-up-alt:before{content:"\f887"}.fa-sort-up:before{content:"\f0de"}.fa-soundcloud:before{content:"\f1be"}.fa-sourcetree:before{content:"\f7d3"}.fa-spa:before{content:"\f5bb"}.fa-space-shuttle:before{content:"\f197"}.fa-speakap:before{content:"\f3f3"}.fa-speaker-deck:before{content:"\f83c"}.fa-spell-check:before{content:"\f891"}.fa-spider:before{content:"\f717"}.fa-spinner:before{content:"\f110"}.fa-splotch:before{content:"\f5bc"}.fa-spotify:before{content:"\f1bc"}.fa-spray-can:before{content:"\f5bd"}.fa-square:before{content:"\f0c8"}.fa-square-full:before{content:"\f45c"}.fa-square-root-alt:before{content:"\f698"}.fa-squarespace:before{content:"\f5be"}.fa-stack-exchange:before{content:"\f18d"}.fa-stack-overflow:before{content:"\f16c"}.fa-stackpath:before{content:"\f842"}.fa-stamp:before{content:"\f5bf"}.fa-star:before{content:"\f005"}.fa-star-and-crescent:before{content:"\f699"}.fa-star-half:before{content:"\f089"}.fa-star-half-alt:before{content:"\f5c0"}.fa-star-of-david:before{content:"\f69a"}.fa-star-of-life:before{content:"\f621"}.fa-sta
ylinked:before{content:"\f3f5"}.fa-steam:before{content:"\f1b6"}.fa-steam-square:before{content:"\f1b7"}.fa-steam-symbol:before{content:"\f3f6"}.fa-step-backward:before{content:"\f048"}.fa-step-forward:before{content:"\f051"}.fa-stethoscope:before{content:"\f0f1"}.fa-sticker-mule:before{content:"\f3f7"}.fa-sticky-note:before{content:"\f249"}.fa-stop:before{content:"\f04d"}.fa-stop-circle:before{content:"\f28d"}.fa-stopwatch:before{content:"\f2f2"}.fa-store:before{content:"\f54e"}.fa-store-alt:before{content:"\f54f"}.fa-strava:before{content:"\f428"}.fa-stream:before{content:"\f550"}.fa-street-view:before{content:"\f21d"}.fa-strikethrough:before{content:"\f0cc"}.fa-stripe:before{content:"\f429"}.fa-stripe-s:before{content:"\f42a"}.fa-stroopwafel:before{content:"\f551"}.fa-studiovinari:before{content:"\f3f8"}.fa-stumbleupon:before{content:"\f1a4"}.fa-stumbleupon-circle:before{content:"\f1a3"}.fa-subscript:before{content:"\f12c"}.fa-subway:before{content:"\f239"}.fa-suitcase:before{content:"\f0f2"}.fa-suitcase-rolling:before{content:"\f5c1"}.fa-sun:before{content:"\f185"}.fa-superpowers:before{content:"\f2dd"}.fa-superscript:before{content:"\f12b"}.fa-supple:before{content:"\f3f9"}.fa-surprise:before{content:"\f5c2"}.fa-suse:before{content:"\f7d6"}.fa-swatchbook:before{content:"\f5c3"}.fa-swimmer:before{content:"\f5c4"}.fa-swimming-pool:before{content:"\f5c5"}.fa-symfony:before{content:"\f83d"}.fa-synagogue:before{content:"\f69b"}.fa-sync:before{content:"\f021"}.fa-sync-alt:before{content:"\f2f1"}.fa-syringe:before{content:"\f48e"}.fa-table:before{content:"\f0ce"}.fa-table-tennis:before{content:"\f45d"}.fa-tablet:before{content:"\f10a"}.fa-tablet-alt:before{content:"\f3fa"}.fa-tablets:before{content:"\f490"}.fa-tachometer-alt:before{content:"\f3fd"}.fa-tag:before{content:"\f02b"}.fa-tags:before{content:"\f02c"}.fa-tape:before{content:"\f4db"}.fa-tasks:before{content:"\f0ae"}.fa-taxi:before{content:"\f1ba"}.fa-teamspeak:before{content:"\f4f9"}.fa-teeth:before{content:"\f62e"}.fa-teeth-open:before{content:"\f62f"}.fa-telegram:before{content:"\f2c6"}.fa-telegram-plane:before{content:"\f3fe"}.fa-temperature-high:before{content:"\f769"}.fa-temperature-low:before{content:"\f76b"}.fa-tencent-weibo:before{content:"\f1d5"}.fa-tenge:before{content:"\f7d7"}.fa-terminal:before{content:"\f120"}.fa-text-height:before{content:"\f034"}.fa-text-width:before{content:"\f035"}.fa-th:before{content:"\f00a"}.fa-th-large:before{content:"\f009"}.fa-th-list:before{content:"\f00b"}.fa-the-red-yeti:before{content:"\f69d"}.fa-theater-masks:before{content:"\f630"}.fa-themeco:before{content:"\f5c6"}.fa-themeisle:before{content:"\f2b2"}.fa-thermometer:before{content:"\f491"}.fa-thermometer-empty:before{content:"\f2cb"}.fa-thermometer-full:before{content:"\f2c7"}.fa-thermometer-half:before{content:"\f2c9"}.fa-thermometer-quarter:before{content:"\f2ca"}.fa-thermometer-three-quarters:before{content:"\f2c8"}.fa-think-peaks:before{content:"\f731"}.fa-thumbs-down:before{content:"\f165"}.fa-thumbs-up:before{content:"\f164"}.fa-thumbtack:before{content:"\f08d"}.fa-ticket-alt:before{content:"\f3ff"}.fa-times:before{content:"\f00d"}.fa-times-circle:before{content:"\f057"}.fa-tint:before{content:"\f043"}.fa-tint-slash:before{content:"\f5c7"}.fa-tired:before{content:"\f5c8"}.fa-toggle-off:before{content:"\f204"}.fa-toggle-on:before{content:"\f205"}.fa-toilet:before{content:"\f7d8"}.fa-toilet-paper:before{content:"\f71e"}.fa-toolbox:before{content:"\f552"}.fa-tools:before{content:"\f7d9"}.fa-tooth:before{content:"\f5c9"}.fa-torah:before
{content:"\f6a0"}.fa-torii-gate:before{content:"\f6a1"}.fa-tractor:before{content:"\f722"}.fa-trade-federation:before{content:"\f513"}.fa-trademark:before{content:"\f25c"}.fa-traffic-light:before{content:"\f637"}.fa-train:before{content:"\f238"}.fa-tram:before{content:"\f7da"}.fa-transgender:before{content:"\f224"}.fa-transgender-alt:before{content:"\f225"}.fa-trash:before{content:"\f1f8"}.fa-trash-alt:before{content:"\f2ed"}.fa-trash-restore:before{content:"\f829"}.fa-trash-restore-alt:before{content:"\f82a"}.fa-tree:before{content:"\f1bb"}.fa-trello:before{content:"\f181"}.fa-tripadvisor:before{content:"\f262"}.fa-trophy:before{content:"\f091"}.fa-truck:before{content:"\f0d1"}.fa-truck-loading:before{content:"\f4de"}.fa-truck-monster:before{content:"\f63b"}.fa-truck-moving:before{content:"\f4df"}.fa-truck-pickup:before{content:"\f63c"}.fa-tshirt:before{content:"\f553"}.fa-tty:before{content:"\f1e4"}.fa-tumblr:before{content:"\f173"}.fa-tumblr-square:before{content:"\f174"}.fa-tv:before{content:"\f26c"}.fa-twitch:before{content:"\f1e8"}.fa-twitter:before{content:"\f099"}.fa-twitter-square:before{content:"\f081"}.fa-typo3:before{content:"\f42b"}.fa-uber:before{content:"\f402"}.fa-ubuntu:before{content:"\f7df"}.fa-uikit:before{content:"\f403"}.fa-umbrella:before{content:"\f0e9"}.fa-umbrella-beach:before{content:"\f5ca"}.fa-underline:before{content:"\f0cd"}.fa-undo:before{content:"\f0e2"}.fa-undo-alt:before{content:"\f2ea"}.fa-uniregistry:before{content:"\f404"}.fa-universal-access:before{content:"\f29a"}.fa-university:before{content:"\f19c"}.fa-unlink:before{content:"\f127"}.fa-unlock:before{content:"\f09c"}.fa-unlock-alt:before{content:"\f13e"}.fa-untappd:before{content:"\f405"}.fa-upload:before{content:"\f093"}.fa-ups:before{content:"\f7e0"}.fa-usb:before{content:"\f287"}.fa-user:before{content:"\f007"}.fa-user-alt:before{content:"\f406"}.fa-user-alt-slash:before{content:"\f4fa"}.fa-user-astronaut:before{content:"\f4fb"}.fa-user-check:before{content:"\f4fc"}.fa-user-circle:before{content:"\f2bd"}.fa-user-clock:before{content:"\f4fd"}.fa-user-cog:before{content:"\f4fe"}.fa-user-edit:before{content:"\f4ff"}.fa-user-friends:before{content:"\f500"}.fa-user-graduate:before{content:"\f501"}.fa-user-injured:before{content:"\f728"}.fa-user-lock:before{content:"\f502"}.fa-user-md:before{content:"\f0f0"}.fa-user-minus:before{content:"\f503"}.fa-user-ninja:before{content:"\f504"}.fa-user-nurse:before{content:"\f82f"}.fa-user-plus:before{content:"\f234"}.fa-user-secret:before{content:"\f21b"}.fa-user-shield:before{content:"\f505"}.fa-user-slash:before{content:"\f506"}.fa-user-tag:before{content:"\f507"}.fa-user-tie:before{content:"\f508"}.fa-user-times:before{content:"\f235"}.fa-users:before{content:"\f0c0"}.fa-users-cog:before{content:"\f509"}.fa-usps:before{content:"\f7e1"}.fa-ussunnah:before{content:"\f407"}.fa-utensil-spoon:before{content:"\f2e5"}.fa-utensils:before{content:"\f2e7"}.fa-vaadin:before{content:"\f408"}.fa-vector-square:before{content:"\f5cb"}.fa-venus:before{content:"\f221"}.fa-venus-double:before{content:"\f226"}.fa-venus-mars:before{content:"\f228"}.fa-viacoin:before{content:"\f237"}.fa-viadeo:before{content:"\f2a9"}.fa-viadeo-square:before{content:"\f2aa"}.fa-vial:before{content:"\f492"}.fa-vials:before{content:"\f493"}.fa-viber:before{content:"\f409"}.fa-video:before{content:"\f03d"}.fa-video-slash:before{content:"\f4e2"}.fa-vihara:before{content:"\f6a7"}.fa-vimeo:before{content:"\f40a"}.fa-vimeo-square:before{content:"\f194"}.fa-vimeo-v:before{content:"\f27d"}.fa-vine:before{con
tent:"\f1ca"}.fa-vk:before{content:"\f189"}.fa-vnv:before{content:"\f40b"}.fa-voicemail:before{content:"\f897"}.fa-volleyball-ball:before{content:"\f45f"}.fa-volume-down:before{content:"\f027"}.fa-volume-mute:before{content:"\f6a9"}.fa-volume-off:before{content:"\f026"}.fa-volume-up:before{content:"\f028"}.fa-vote-yea:before{content:"\f772"}.fa-vr-cardboard:before{content:"\f729"}.fa-vuejs:before{content:"\f41f"}.fa-walking:before{content:"\f554"}.fa-wallet:before{content:"\f555"}.fa-warehouse:before{content:"\f494"}.fa-water:before{content:"\f773"}.fa-wave-square:before{content:"\f83e"}.fa-waze:before{content:"\f83f"}.fa-weebly:before{content:"\f5cc"}.fa-weibo:before{content:"\f18a"}.fa-weight:before{content:"\f496"}.fa-weight-hanging:before{content:"\f5cd"}.fa-weixin:before{content:"\f1d7"}.fa-whatsapp:before{content:"\f232"}.fa-whatsapp-square:before{content:"\f40c"}.fa-wheelchair:before{content:"\f193"}.fa-whmcs:before{content:"\f40d"}.fa-wifi:before{content:"\f1eb"}.fa-wikipedia-w:before{content:"\f266"}.fa-wind:before{content:"\f72e"}.fa-window-close:before{content:"\f410"}.fa-window-maximize:before{content:"\f2d0"}.fa-window-minimize:before{content:"\f2d1"}.fa-window-restore:before{content:"\f2d2"}.fa-windows:before{content:"\f17a"}.fa-wine-bottle:before{content:"\f72f"}.fa-wine-glass:before{content:"\f4e3"}.fa-wine-glass-alt:before{content:"\f5ce"}.fa-wix:before{content:"\f5cf"}.fa-wizards-of-the-coast:before{content:"\f730"}.fa-wolf-pack-battalion:before{content:"\f514"}.fa-won-sign:before{content:"\f159"}.fa-wordpress:before{content:"\f19a"}.fa-wordpress-simple:before{content:"\f411"}.fa-wpbeginner:before{content:"\f297"}.fa-wpexplorer:before{content:"\f2de"}.fa-wpforms:before{content:"\f298"}.fa-wpressr:before{content:"\f3e4"}.fa-wrench:before{content:"\f0ad"}.fa-x-ray:before{content:"\f497"}.fa-xbox:before{content:"\f412"}.fa-xing:before{content:"\f168"}.fa-xing-square:before{content:"\f169"}.fa-y-combinator:before{content:"\f23b"}.fa-yahoo:before{content:"\f19e"}.fa-yammer:before{content:"\f840"}.fa-yandex:before{content:"\f413"}.fa-yandex-international:before{content:"\f414"}.fa-yarn:before{content:"\f7e3"}.fa-yelp:before{content:"\f1e9"}.fa-yen-sign:before{content:"\f157"}.fa-yin-yang:before{content:"\f6ad"}.fa-yoast:before{content:"\f2b1"}.fa-youtube:before{content:"\f167"}.fa-youtube-square:before{content:"\f431"}.fa-zhihu:before{content:"\f63f"}.sr-only{border:0;clip:rect(0,0,0,0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.sr-only-focusable:active,.sr-only-focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}@font-face{font-family:"Font Awesome 5 Brands";font-style:normal;font-weight:normal;font-display:auto;src:url(../webfonts/fa-brands-400.eot);src:url(../webfonts/fa-brands-400.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-brands-400.woff2) format("woff2"),url(../webfonts/fa-brands-400.woff) format("woff"),url(../webfonts/fa-brands-400.ttf) format("truetype"),url(../webfonts/fa-brands-400.svg#fontawesome) format("svg")}.fab{font-family:"Font Awesome 5 Brands"}@font-face{font-family:"Font Awesome 5 Free";font-style:normal;font-weight:400;font-display:auto;src:url(../webfonts/fa-regular-400.eot);src:url(../webfonts/fa-regular-400.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-regular-400.woff2) format("woff2"),url(../webfonts/fa-regular-400.woff) format("woff"),url(../webfonts/fa-regular-400.ttf) format("truetype"),url(../webfonts/fa-regular-400.svg#fontawesome) 
format("svg")}.far{font-weight:400}@font-face{font-family:"Font Awesome 5 Free";font-style:normal;font-weight:900;font-display:auto;src:url(../webfonts/fa-solid-900.eot);src:url(../webfonts/fa-solid-900.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-solid-900.woff2) format("woff2"),url(../webfonts/fa-solid-900.woff) format("woff"),url(../webfonts/fa-solid-900.ttf) format("truetype"),url(../webfonts/fa-solid-900.svg#fontawesome) format("svg")}.fa,.far,.fas{font-family:"Font Awesome 5 Free"}.fa,.fas{font-weight:900}rclone-1.53.3/docs/static/img/000077500000000000000000000000001375552240400160675ustar00rootroot00000000000000rclone-1.53.3/docs/static/img/logo_on_dark__horizontal_color.svg000066400000000000000000000123371375552240400250600ustar00rootroot00000000000000 rclone-1.53.3/docs/static/img/logo_on_light__horizontal_color.svg000066400000000000000000000127601375552240400252470ustar00rootroot00000000000000 rclone-1.53.3/docs/static/img/ncw-bitcoin-address.png000066400000000000000000000021141375552240400224320ustar00rootroot00000000000000[binary PNG data]rclone-1.53.3/docs/static/img/nick.svg000066400000000000000000000223741375552240400175420ustar00rootroot00000000000000 image/svg+xml rclone-1.53.3/docs/static/img/rclone-32x32.png000066400000000000000000000021221375552240400206330ustar00rootroot00000000000000[binary PNG data]rclone-1.53.3/docs/static/js/000077500000000000000000000000001375552240400157275ustar00rootroot00000000000000rclone-1.53.3/docs/static/js/bootstrap.min.4.4.1.js000066400000000000000000001651521375552240400215370ustar00rootroot00000000000000/*!
* Bootstrap v4.4.1 (https://getbootstrap.com/) * Copyright 2011-2019 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors) * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) */ !function(t,e){"object"==typeof exports&&"undefined"!=typeof module?e(exports,require("jquery"),require("popper.js")):"function"==typeof define&&define.amd?define(["exports","jquery","popper.js"],e):e((t=t||self).bootstrap={},t.jQuery,t.Popper)}(this,function(t,g,u){"use strict";function i(t,e){for(var n=0;nthis._items.length-1||t<0))if(this._isSliding)g(this._element).one(Y.SLID,function(){return e.to(t)});else{if(n===t)return this.pause(),void this.cycle();var i=ndocument.documentElement.clientHeight;!this._isBodyOverflowing&&t&&(this._element.style.paddingLeft=this._scrollbarWidth+"px"),this._isBodyOverflowing&&!t&&(this._element.style.paddingRight=this._scrollbarWidth+"px")},t._resetAdjustments=function(){this._element.style.paddingLeft="",this._element.style.paddingRight=""},t._checkScrollbar=function(){var t=document.body.getBoundingClientRect();this._isBodyOverflowing=t.left+t.right
    ',trigger:"hover focus",title:"",delay:0,html:!1,selector:!1,placement:"top",offset:0,container:!1,fallbackPlacement:"flip",boundary:"scrollParent",sanitize:!0,sanitizeFn:null,whiteList:Se,popperConfig:null},Fe="show",Ue="out",We={HIDE:"hide"+Oe,HIDDEN:"hidden"+Oe,SHOW:"show"+Oe,SHOWN:"shown"+Oe,INSERTED:"inserted"+Oe,CLICK:"click"+Oe,FOCUSIN:"focusin"+Oe,FOCUSOUT:"focusout"+Oe,MOUSEENTER:"mouseenter"+Oe,MOUSELEAVE:"mouseleave"+Oe},qe="fade",Me="show",Ke=".tooltip-inner",Qe=".arrow",Be="hover",Ve="focus",Ye="click",ze="manual",Xe=function(){function i(t,e){if("undefined"==typeof u)throw new TypeError("Bootstrap's tooltips require Popper.js (https://popper.js.org/)");this._isEnabled=!0,this._timeout=0,this._hoverState="",this._activeTrigger={},this._popper=null,this.element=t,this.config=this._getConfig(e),this.tip=null,this._setListeners()}var t=i.prototype;return t.enable=function(){this._isEnabled=!0},t.disable=function(){this._isEnabled=!1},t.toggleEnabled=function(){this._isEnabled=!this._isEnabled},t.toggle=function(t){if(this._isEnabled)if(t){var e=this.constructor.DATA_KEY,n=g(t.currentTarget).data(e);n||(n=new this.constructor(t.currentTarget,this._getDelegateConfig()),g(t.currentTarget).data(e,n)),n._activeTrigger.click=!n._activeTrigger.click,n._isWithActiveTrigger()?n._enter(null,n):n._leave(null,n)}else{if(g(this.getTipElement()).hasClass(Me))return void this._leave(null,this);this._enter(null,this)}},t.dispose=function(){clearTimeout(this._timeout),g.removeData(this.element,this.constructor.DATA_KEY),g(this.element).off(this.constructor.EVENT_KEY),g(this.element).closest(".modal").off("hide.bs.modal",this._hideModalHandler),this.tip&&g(this.tip).remove(),this._isEnabled=null,this._timeout=null,this._hoverState=null,this._activeTrigger=null,this._popper&&this._popper.destroy(),this._popper=null,this.element=null,this.config=null,this.tip=null},t.show=function(){var e=this;if("none"===g(this.element).css("display"))throw new Error("Please use show on visible elements");var t=g.Event(this.constructor.Event.SHOW);if(this.isWithContent()&&this._isEnabled){g(this.element).trigger(t);var n=_.findShadowRoot(this.element),i=g.contains(null!==n?n:this.element.ownerDocument.documentElement,this.element);if(t.isDefaultPrevented()||!i)return;var o=this.getTipElement(),r=_.getUID(this.constructor.NAME);o.setAttribute("id",r),this.element.setAttribute("aria-describedby",r),this.setContent(),this.config.animation&&g(o).addClass(qe);var s="function"==typeof this.config.placement?this.config.placement.call(this,o,this.element):this.config.placement,a=this._getAttachment(s);this.addAttachmentClass(a);var l=this._getContainer();g(o).data(this.constructor.DATA_KEY,this),g.contains(this.element.ownerDocument.documentElement,this.tip)||g(o).appendTo(l),g(this.element).trigger(this.constructor.Event.INSERTED),this._popper=new u(this.element,o,this._getPopperConfig(a)),g(o).addClass(Me),"ontouchstart"in document.documentElement&&g(document.body).children().on("mouseover",null,g.noop);var c=function(){e.config.animation&&e._fixTransition();var t=e._hoverState;e._hoverState=null,g(e.element).trigger(e.constructor.Event.SHOWN),t===Ue&&e._leave(null,e)};if(g(this.tip).hasClass(qe)){var h=_.getTransitionDurationFromElement(this.tip);g(this.tip).one(_.TRANSITION_END,c).emulateTransitionEnd(h)}else c()}},t.hide=function(t){function 
e(){n._hoverState!==Fe&&i.parentNode&&i.parentNode.removeChild(i),n._cleanTipClass(),n.element.removeAttribute("aria-describedby"),g(n.element).trigger(n.constructor.Event.HIDDEN),null!==n._popper&&n._popper.destroy(),t&&t()}var n=this,i=this.getTipElement(),o=g.Event(this.constructor.Event.HIDE);if(g(this.element).trigger(o),!o.isDefaultPrevented()){if(g(i).removeClass(Me),"ontouchstart"in document.documentElement&&g(document.body).children().off("mouseover",null,g.noop),this._activeTrigger[Ye]=!1,this._activeTrigger[Ve]=!1,this._activeTrigger[Be]=!1,g(this.tip).hasClass(qe)){var r=_.getTransitionDurationFromElement(i);g(i).one(_.TRANSITION_END,e).emulateTransitionEnd(r)}else e();this._hoverState=""}},t.update=function(){null!==this._popper&&this._popper.scheduleUpdate()},t.isWithContent=function(){return Boolean(this.getTitle())},t.addAttachmentClass=function(t){g(this.getTipElement()).addClass(Pe+"-"+t)},t.getTipElement=function(){return this.tip=this.tip||g(this.config.template)[0],this.tip},t.setContent=function(){var t=this.getTipElement();this.setElementContent(g(t.querySelectorAll(Ke)),this.getTitle()),g(t).removeClass(qe+" "+Me)},t.setElementContent=function(t,e){"object"!=typeof e||!e.nodeType&&!e.jquery?this.config.html?(this.config.sanitize&&(e=we(e,this.config.whiteList,this.config.sanitizeFn)),t.html(e)):t.text(e):this.config.html?g(e).parent().is(t)||t.empty().append(e):t.text(g(e).text())},t.getTitle=function(){var t=this.element.getAttribute("data-original-title");return t=t||("function"==typeof this.config.title?this.config.title.call(this.element):this.config.title)},t._getPopperConfig=function(t){var e=this;return l({},{placement:t,modifiers:{offset:this._getOffset(),flip:{behavior:this.config.fallbackPlacement},arrow:{element:Qe},preventOverflow:{boundariesElement:this.config.boundary}},onCreate:function(t){t.originalPlacement!==t.placement&&e._handlePopperPlacementChange(t)},onUpdate:function(t){return e._handlePopperPlacementChange(t)}},{},this.config.popperConfig)},t._getOffset=function(){var e=this,t={};return"function"==typeof this.config.offset?t.fn=function(t){return t.offsets=l({},t.offsets,{},e.config.offset(t.offsets,e.element)||{}),t}:t.offset=this.config.offset,t},t._getContainer=function(){return!1===this.config.container?document.body:_.isElement(this.config.container)?g(this.config.container):g(document).find(this.config.container)},t._getAttachment=function(t){return Re[t.toUpperCase()]},t._setListeners=function(){var i=this;this.config.trigger.split(" ").forEach(function(t){if("click"===t)g(i.element).on(i.constructor.Event.CLICK,i.config.selector,function(t){return i.toggle(t)});else if(t!==ze){var e=t===Be?i.constructor.Event.MOUSEENTER:i.constructor.Event.FOCUSIN,n=t===Be?i.constructor.Event.MOUSELEAVE:i.constructor.Event.FOCUSOUT;g(i.element).on(e,i.config.selector,function(t){return i._enter(t)}).on(n,i.config.selector,function(t){return i._leave(t)})}}),this._hideModalHandler=function(){i.element&&i.hide()},g(this.element).closest(".modal").on("hide.bs.modal",this._hideModalHandler),this.config.selector?this.config=l({},this.config,{trigger:"manual",selector:""}):this._fixTitle()},t._fixTitle=function(){var t=typeof this.element.getAttribute("data-original-title");!this.element.getAttribute("title")&&"string"==t||(this.element.setAttribute("data-original-title",this.element.getAttribute("title")||""),this.element.setAttribute("title",""))},t._enter=function(t,e){var n=this.constructor.DATA_KEY;(e=e||g(t.currentTarget).data(n))||(e=new 
this.constructor(t.currentTarget,this._getDelegateConfig()),g(t.currentTarget).data(n,e)),t&&(e._activeTrigger["focusin"===t.type?Ve:Be]=!0),g(e.getTipElement()).hasClass(Me)||e._hoverState===Fe?e._hoverState=Fe:(clearTimeout(e._timeout),e._hoverState=Fe,e.config.delay&&e.config.delay.show?e._timeout=setTimeout(function(){e._hoverState===Fe&&e.show()},e.config.delay.show):e.show())},t._leave=function(t,e){var n=this.constructor.DATA_KEY;(e=e||g(t.currentTarget).data(n))||(e=new this.constructor(t.currentTarget,this._getDelegateConfig()),g(t.currentTarget).data(n,e)),t&&(e._activeTrigger["focusout"===t.type?Ve:Be]=!1),e._isWithActiveTrigger()||(clearTimeout(e._timeout),e._hoverState=Ue,e.config.delay&&e.config.delay.hide?e._timeout=setTimeout(function(){e._hoverState===Ue&&e.hide()},e.config.delay.hide):e.hide())},t._isWithActiveTrigger=function(){for(var t in this._activeTrigger)if(this._activeTrigger[t])return!0;return!1},t._getConfig=function(t){var e=g(this.element).data();return Object.keys(e).forEach(function(t){-1!==je.indexOf(t)&&delete e[t]}),"number"==typeof(t=l({},this.constructor.Default,{},e,{},"object"==typeof t&&t?t:{})).delay&&(t.delay={show:t.delay,hide:t.delay}),"number"==typeof t.title&&(t.title=t.title.toString()),"number"==typeof t.content&&(t.content=t.content.toString()),_.typeCheckConfig(Ae,t,this.constructor.DefaultType),t.sanitize&&(t.template=we(t.template,t.whiteList,t.sanitizeFn)),t},t._getDelegateConfig=function(){var t={};if(this.config)for(var e in this.config)this.constructor.Default[e]!==this.config[e]&&(t[e]=this.config[e]);return t},t._cleanTipClass=function(){var t=g(this.getTipElement()),e=t.attr("class").match(Le);null!==e&&e.length&&t.removeClass(e.join(""))},t._handlePopperPlacementChange=function(t){var e=t.instance;this.tip=e.popper,this._cleanTipClass(),this.addAttachmentClass(this._getAttachment(t.placement))},t._fixTransition=function(){var t=this.getTipElement(),e=this.config.animation;null===t.getAttribute("x-placement")&&(g(t).removeClass(qe),this.config.animation=!1,this.hide(),this.show(),this.config.animation=e)},i._jQueryInterface=function(n){return this.each(function(){var t=g(this).data(Ne),e="object"==typeof n&&n;if((t||!/dispose|hide/.test(n))&&(t||(t=new i(this,e),g(this).data(Ne,t)),"string"==typeof n)){if("undefined"==typeof t[n])throw new TypeError('No method named "'+n+'"');t[n]()}})},s(i,null,[{key:"VERSION",get:function(){return"4.4.1"}},{key:"Default",get:function(){return xe}},{key:"NAME",get:function(){return Ae}},{key:"DATA_KEY",get:function(){return Ne}},{key:"Event",get:function(){return We}},{key:"EVENT_KEY",get:function(){return Oe}},{key:"DefaultType",get:function(){return He}}]),i}();g.fn[Ae]=Xe._jQueryInterface,g.fn[Ae].Constructor=Xe,g.fn[Ae].noConflict=function(){return g.fn[Ae]=ke,Xe._jQueryInterface};var $e="popover",Ge="bs.popover",Je="."+Ge,Ze=g.fn[$e],tn="bs-popover",en=new RegExp("(^|\\s)"+tn+"\\S+","g"),nn=l({},Xe.Default,{placement:"right",trigger:"click",content:"",template:''}),on=l({},Xe.DefaultType,{content:"(string|element|function)"}),rn="fade",sn="show",an=".popover-header",ln=".popover-body",cn={HIDE:"hide"+Je,HIDDEN:"hidden"+Je,SHOW:"show"+Je,SHOWN:"shown"+Je,INSERTED:"inserted"+Je,CLICK:"click"+Je,FOCUSIN:"focusin"+Je,FOCUSOUT:"focusout"+Je,MOUSEENTER:"mouseenter"+Je,MOUSELEAVE:"mouseleave"+Je},hn=function(t){function i(){return t.apply(this,arguments)||this}!function(t,e){t.prototype=Object.create(e.prototype),(t.prototype.constructor=t).__proto__=e}(i,t);var e=i.prototype;return 
e.isWithContent=function(){return this.getTitle()||this._getContent()},e.addAttachmentClass=function(t){g(this.getTipElement()).addClass(tn+"-"+t)},e.getTipElement=function(){return this.tip=this.tip||g(this.config.template)[0],this.tip},e.setContent=function(){var t=g(this.getTipElement());this.setElementContent(t.find(an),this.getTitle());var e=this._getContent();"function"==typeof e&&(e=e.call(this.element)),this.setElementContent(t.find(ln),e),t.removeClass(rn+" "+sn)},e._getContent=function(){return this.element.getAttribute("data-content")||this.config.content},e._cleanTipClass=function(){var t=g(this.getTipElement()),e=t.attr("class").match(en);null!==e&&0=this._offsets[o]&&("undefined"==typeof this._offsets[o+1]||t").addClass("header-link").attr("href", "#" + id).html(icon)); } }); // Wire up copy to clipboard buttons $(".copy-to-clipboard").click(function() { var copyText = $(this).prev(); copyText.select(); document.execCommand("copy"); $(this).attr("title", "Copied!"); }); }); rclone-1.53.3/docs/static/js/jquery.min.3.5.1.js000066400000000000000000002566041375552240400210460ustar00rootroot00000000000000/*! jQuery v3.5.1 | (c) JS Foundation and other contributors | jquery.org/license */ !function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(C,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,g=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,v=n.hasOwnProperty,a=v.toString,l=a.call(Object),y={},m=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType},x=function(e){return null!=e&&e===e.window},E=C.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function b(e,t,n){var r,i,o=(n=n||E).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function w(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.5.1",S=function(e,t){return new S.fn.init(e,t)};function p(e){var t=!!e&&"length"in e&&e.length,n=w(e);return!m(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+W),PSEUDO:new RegExp("^"+F),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+R+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\[\\da-fA-F]{1,6}"+M+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" 
":"\\"+e},oe=function(){T()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(t=O.call(p.childNodes),p.childNodes),t[p.childNodes.length].nodeType}catch(e){H={apply:t.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,p=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==p&&9!==p&&11!==p)return n;if(!r&&(T(e),e=e||C,E)){if(11!==p&&(u=Z.exec(t)))if(i=u[1]){if(9===p){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return H.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&d.getElementsByClassName&&e.getElementsByClassName)return H.apply(n,e.getElementsByClassName(i)),n}if(d.qsa&&!N[t+" "]&&(!v||!v.test(t))&&(1!==p||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===p&&(U.test(t)||z.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&d.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=S)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+xe(l[o]);c=l.join(",")}try{return H.apply(n,f.querySelectorAll(c)),n}catch(e){N(t,!0)}finally{s===S&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>b.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[S]=!0,e}function ce(e){var t=C.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){var n=e.split("|"),r=n.length;while(r--)b.attrHandle[n[r]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function de(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in d=se.support={},i=se.isXML=function(e){var t=e.namespaceURI,n=(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},T=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:p;return r!=C&&9===r.nodeType&&r.documentElement&&(a=(C=r).documentElement,E=!i(C),p!=C&&(n=C.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),d.scope=ce(function(e){return a.appendChild(e).appendChild(C.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),d.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),d.getElementsByTagName=ce(function(e){return e.appendChild(C.createComment("")),!e.getElementsByTagName("*").length}),d.getElementsByClassName=K.test(C.getElementsByClassName),d.getById=ce(function(e){return a.appendChild(e).id=S,!C.getElementsByName||!C.getElementsByName(S).length}),d.getById?(b.filter.ID=function(e){var 
t=e.replace(te,ne);return function(e){return e.getAttribute("id")===t}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(b.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),b.find.TAG=d.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):d.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},b.find.CLASS=d.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(d.qsa=K.test(C.querySelectorAll))&&(ce(function(e){var t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+R+")"),e.querySelectorAll("[id~="+S+"-]").length||v.push("~="),(t=C.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+M+"*name"+M+"*="+M+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+S+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var t=C.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(d.matchesSelector=K.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){d.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",F)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=K.test(a.compareDocumentPosition),y=t||K.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},D=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!d.sortDetached&&t.compareDocumentPosition(e)===n?e==C||e.ownerDocument==p&&y(p,e)?-1:t==C||t.ownerDocument==p&&y(p,t)?1:u?P(u,e)-P(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==C?-1:t==C?1:i?-1:o?1:u?P(u,e)-P(u,t):0;if(i===o)return pe(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?pe(a[r],s[r]):a[r]==p?-1:s[r]==p?1:0}),C},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(T(e),d.matchesSelector&&E&&!N[t+" 
"]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||d.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){N(t,!0)}return 0":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&m(e,function(e){return t.test("string"==typeof e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function D(e,n,r){return m(n)?S.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?S.grep(e,function(e){return e===n!==r}):"string"!=typeof n?S.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(S.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||j,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:q.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof S?t[0]:t,S.merge(this,S.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:E,!0)),N.test(r[1])&&S.isPlainObject(t))for(r in t)m(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=E.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):m(e)?void 0!==n.ready?n.ready(e):e(S):S.makeArray(e,this)}).prototype=S.fn,j=S(E);var L=/^(?:parents|prev(?:Until|All))/,H={children:!0,contents:!0,next:!0,prev:!0};function O(e,t){while((e=e[t])&&1!==e.nodeType);return e}S.fn.extend({has:function(e){var t=S(e,this),n=t.length;return this.filter(function(){for(var e=0;e\x20\t\r\n\f]*)/i,he=/^$|^module$|\/(?:java|ecma)script/i;ce=E.createDocumentFragment().appendChild(E.createElement("div")),(fe=E.createElement("input")).setAttribute("type","radio"),fe.setAttribute("checked","checked"),fe.setAttribute("name","t"),ce.appendChild(fe),y.checkClone=ce.cloneNode(!0).cloneNode(!0).lastChild.checked,ce.innerHTML="",y.noCloneChecked=!!ce.cloneNode(!0).lastChild.defaultValue,ce.innerHTML="",y.option=!!ce.lastChild;var ge={thead:[1,"","
    "],col:[2,"","
    "],tr:[2,"","
    "],td:[3,"","
    "],_default:[0,"",""]};function ve(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&A(e,t)?S.merge([e],n):n}function ye(e,t){for(var n=0,r=e.length;n",""]);var me=/<|&#?\w+;/;function xe(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d\s*$/g;function qe(e,t){return A(e,"table")&&A(11!==t.nodeType?t:t.firstChild,"tr")&&S(e).children("tbody")[0]||e}function Le(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function He(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Oe(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n").attr(n.scriptAttrs||{}).prop({charset:n.scriptCharset,src:n.url}).on("load error",i=function(e){r.remove(),i=null,e&&t("error"===e.type?404:200,e.type)}),E.head.appendChild(r[0])},abort:function(){i&&i()}}});var Ut,Xt=[],Vt=/(=)\?(?=&|$)|\?\?/;S.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=Xt.pop()||S.expando+"_"+Ct.guid++;return this[e]=!0,e}}),S.ajaxPrefilter("json jsonp",function(e,t,n){var r,i,o,a=!1!==e.jsonp&&(Vt.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Vt.test(e.data)&&"data");if(a||"jsonp"===e.dataTypes[0])return r=e.jsonpCallback=m(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,a?e[a]=e[a].replace(Vt,"$1"+r):!1!==e.jsonp&&(e.url+=(Et.test(e.url)?"&":"?")+e.jsonp+"="+r),e.converters["script json"]=function(){return o||S.error(r+" was not called"),o[0]},e.dataTypes[0]="json",i=C[r],C[r]=function(){o=arguments},n.always(function(){void 0===i?S(C).removeProp(r):C[r]=i,e[r]&&(e.jsonpCallback=t.jsonpCallback,Xt.push(r)),o&&m(i)&&i(o[0]),o=i=void 0}),"script"}),y.createHTMLDocument=((Ut=E.implementation.createHTMLDocument("").body).innerHTML="
    ",2===Ut.childNodes.length),S.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(y.createHTMLDocument?((r=(t=E.implementation.createHTMLDocument("")).createElement("base")).href=E.location.href,t.head.appendChild(r)):t=E),o=!n&&[],(i=N.exec(e))?[t.createElement(i[1])]:(i=xe([e],t,o),o&&o.length&&S(o).remove(),S.merge([],i.childNodes)));var r,i,o},S.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return-1").append(S.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},S.expr.pseudos.animated=function(t){return S.grep(S.timers,function(e){return t===e.elem}).length},S.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=S.css(e,"position"),c=S(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=S.css(e,"top"),u=S.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),m(t)&&(t=t.call(e,n,S.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):("number"==typeof f.top&&(f.top+="px"),"number"==typeof f.left&&(f.left+="px"),c.css(f))}},S.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){S.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===S.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===S.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=S(e).offset()).top+=S.css(e,"borderTopWidth",!0),i.left+=S.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-S.css(r,"marginTop",!0),left:t.left-i.left-S.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===S.css(e,"position"))e=e.offsetParent;return e||re})}}),S.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;S.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),S.each(["top","left"],function(e,n){S.cssHooks[n]=$e(y.pixelPosition,function(e,t){if(t)return t=Be(e,n),Me.test(t)?S(e).position()[n]+"px":t})}),S.each({Height:"height",Width:"width"},function(a,s){S.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){S.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?S.css(e,t,i):S.style(e,t,n,i)},s,n?e:void 0,n)}})}),S.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){S.fn[t]=function(e){return this.on(t,e)}}),S.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 
1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return this.mouseenter(e).mouseleave(t||e)}}),S.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){S.fn[n]=function(e,t){return 0=o.clientWidth&&n>=o.clientHeight}),l=0a[e]&&!t.escapeWithReference&&(n=Q(f[o],a[e]-('right'===e?f.width:f.height))),ae({},o,n)}};return l.forEach(function(e){var t=-1===['left','top'].indexOf(e)?'secondary':'primary';f=le({},f,m[t](e))}),e.offsets.popper=f,e},priority:['left','right','top','bottom'],padding:5,boundariesElement:'scrollParent'},keepTogether:{order:400,enabled:!0,fn:function(e){var t=e.offsets,o=t.popper,n=t.reference,i=e.placement.split('-')[0],r=Z,p=-1!==['top','bottom'].indexOf(i),s=p?'right':'bottom',d=p?'left':'top',a=p?'width':'height';return o[s]r(n[s])&&(e.offsets.popper[d]=r(n[s])),e}},arrow:{order:500,enabled:!0,fn:function(e,o){var n;if(!K(e.instance.modifiers,'arrow','keepTogether'))return e;var i=o.element;if('string'==typeof i){if(i=e.instance.popper.querySelector(i),!i)return e;}else if(!e.instance.popper.contains(i))return console.warn('WARNING: `arrow.element` must be child of its popper element!'),e;var r=e.placement.split('-')[0],p=e.offsets,s=p.popper,d=p.reference,a=-1!==['left','right'].indexOf(r),l=a?'height':'width',f=a?'Top':'Left',m=f.toLowerCase(),h=a?'left':'top',c=a?'bottom':'right',u=S(i)[l];d[c]-us[c]&&(e.offsets.popper[m]+=d[m]+u-s[c]),e.offsets.popper=g(e.offsets.popper);var b=d[m]+d[l]/2-u/2,w=t(e.instance.popper),y=parseFloat(w['margin'+f],10),E=parseFloat(w['border'+f+'Width'],10),v=b-e.offsets.popper[m]-y-E;return v=ee(Q(s[l]-u,v),0),e.arrowElement=i,e.offsets.arrow=(n={},ae(n,m,$(v)),ae(n,h,''),n),e},element:'[x-arrow]'},flip:{order:600,enabled:!0,fn:function(e,t){if(W(e.instance.modifiers,'inner'))return e;if(e.flipped&&e.placement===e.originalPlacement)return e;var o=v(e.instance.popper,e.instance.reference,t.padding,t.boundariesElement,e.positionFixed),n=e.placement.split('-')[0],i=T(n),r=e.placement.split('-')[1]||'',p=[];switch(t.behavior){case ce.FLIP:p=[n,i];break;case ce.CLOCKWISE:p=G(n);break;case ce.COUNTERCLOCKWISE:p=G(n,!0);break;default:p=t.behavior;}return p.forEach(function(s,d){if(n!==s||p.length===d+1)return e;n=e.placement.split('-')[0],i=T(n);var a=e.offsets.popper,l=e.offsets.reference,f=Z,m='left'===n&&f(a.right)>f(l.left)||'right'===n&&f(a.left)f(l.top)||'bottom'===n&&f(a.top)f(o.right),g=f(a.top)f(o.bottom),b='left'===n&&h||'right'===n&&c||'top'===n&&g||'bottom'===n&&u,w=-1!==['top','bottom'].indexOf(n),y=!!t.flipVariations&&(w&&'start'===r&&h||w&&'end'===r&&c||!w&&'start'===r&&g||!w&&'end'===r&&u),E=!!t.flipVariationsByContent&&(w&&'start'===r&&c||w&&'end'===r&&h||!w&&'start'===r&&u||!w&&'end'===r&&g),v=y||E;(m||b||v)&&(e.flipped=!0,(m||b)&&(n=p[d+1]),v&&(r=z(r)),e.placement=n+(r?'-'+r:''),e.offsets.popper=le({},e.offsets.popper,C(e.instance.popper,e.offsets.reference,e.placement)),e=P(e.instance.modifiers,e,'flip'))}),e},behavior:'flip',padding:5,boundariesElement:'viewport',flipVariations:!1,flipVariationsByContent:!1},inner:{order:700,enabled:!1,fn:function(e){var t=e.placement,o=t.split('-')[0],n=e.offsets,i=n.popper,r=n.reference,p=-1!==['left','right'].indexOf(o),s=-1===['top','left'].indexOf(o);return 
i[p?'left':'top']=r[o]-(s?i[p?'width':'height']:0),e.placement=T(t),e.offsets.popper=g(i),e}},hide:{order:800,enabled:!0,fn:function(e){if(!K(e.instance.modifiers,'hide','preventOverflow'))return e;var t=e.offsets.reference,o=D(e.instance.modifiers,function(e){return'preventOverflow'===e.name}).boundaries;if(t.bottomo.right||t.top>o.bottom||t.rightwindow.devicePixelRatio||!fe),c='bottom'===o?'top':'bottom',g='right'===n?'left':'right',b=B('transform');if(d='bottom'==c?'HTML'===l.nodeName?-l.clientHeight+h.bottom:-f.height+h.bottom:h.top,s='right'==g?'HTML'===l.nodeName?-l.clientWidth+h.right:-f.width+h.right:h.left,a&&b)m[b]='translate3d('+s+'px, '+d+'px, 0)',m[c]=0,m[g]=0,m.willChange='transform';else{var w='bottom'==c?-1:1,y='right'==g?-1:1;m[c]=d*w,m[g]=s*y,m.willChange=c+', '+g}var E={"x-placement":e.placement};return e.attributes=le({},E,e.attributes),e.styles=le({},m,e.styles),e.arrowStyles=le({},e.offsets.arrow,e.arrowStyles),e},gpuAcceleration:!0,x:'bottom',y:'right'},applyStyle:{order:900,enabled:!0,fn:function(e){return V(e.instance.popper,e.styles),j(e.instance.popper,e.attributes),e.arrowElement&&Object.keys(e.arrowStyles).length&&V(e.arrowElement,e.arrowStyles),e},onLoad:function(e,t,o,n,i){var r=L(i,t,e,o.positionFixed),p=O(o.placement,r,t,e,o.modifiers.flip.boundariesElement,o.modifiers.flip.padding);return t.setAttribute('x-placement',p),V(t,{position:o.positionFixed?'fixed':'absolute'}),o},gpuAcceleration:void 0}}},ge}); //# sourceMappingURL=popper.min.js.map rclone-1.53.3/docs/static/webfonts/000077500000000000000000000000001375552240400171425ustar00rootroot00000000000000rclone-1.53.3/docs/static/webfonts/fa-brands-400.eot000066400000000000000000003750661375552240400220320ustar00rootroot000000000000006LP;:Font Awesome 5 Brands RegularRegularL330.242 (Font Awesome version: 5.10.2):Font Awesome 5 Brands Regular PFFTMtfGDEF*OS/2BX`cmap u tgaspglyfФ)`ͼhead,6hhea6$hmtx6loca6`maxpO8 namepost{.J=;_< لQلUL'@LfGLfPfEd.T:  @@  p@@ @@@ @  @@@@@@.@ @@@@ @@ @  @@Lh@@P  @ @@@@@ @@ @@ @E@ @@@ @  h6L^knp~\u} !#1MRWY?B1]x{=B6;Zgkpsy 17:K^`mp|\hx#%MRWY ?B0]xz4?ytn8-zwhedXTQP620$ R G E D B @ > = < : 9 8 7 5 3 2 1 . - + * ) ( % $       w i h X  k i = ; U *     l k ZB<x8X B@ r J  z ` .&Lz$@,D\xJlh>h$F.t!"D$H%0%&Z&'B'(*)N)|)**++,l,-2-\--0V01112\223x34F44557&7b7889j9~99:D::;B;d;<^<==^===>:>>?p?@\@AjABBCnCDE.EFXpY4ZZ4Z`\].]]^`^^__z_`a^aabbJbddeef4fffgLjk^klDllmVmpnpqHqr^rrsxttrtuđHғʕ^|6vz8j@ܝ"Th$`Ģ(j̣8fP`Nتt:ޭv"r>zF*nvZ6ʾFlü\*ʬ@ TҺPӼ6tJֲ&N6tڤlz4NߪDtz2;2#!"&54636767&#"&'&''3#"'32>54/(S5 "$- .79X)`  0!@ $:T-$2+537#54;5&#"#3#"&5463: E()??H/&=,7H`52#!"&546335#6264&"54.#"#5#354632   gC  @!!@C    u"j  h V[cipw}2#!"&5463>54&#"54&5".'&'0'43767.547&7662>&&364#"32&'&6'6&'6'&&6'&'6'"2=aEE]>2       +    S   `W7D]]D7W      ) @  0#"'327"&'327.=&547&546326767(ItFXI I:#5 &/'q@=+.$ "( 5iW6/-)+  81: +>!% %537#54;5&#"#3.5462xY: E()??YxΑ\H/&=,7H\gemu}7#54366'72&2&5<54'>54&'6'&&".&'./"2?'.5466'&'6'&6'&'6&!(GDtC]K $   " A%  "    L^, " 3@pDS? (%   %!(     0Sh ! J#"'6?32654&#"7>76'&54632#"&7>54.#".5462g&# 7IXALW=3.:$  GW'Α (; YA=RX>9 "+A9,!4 $  " g.~NgR+6?32>54&#"74676'&54632#"&7>54'&#"&5463!2 %:!WAKW=2-:#  Y`p$; (F*=QW>8 !+@8, 4 #  7  `+72#!"&54632654'#3#".546327&#"%5#5##335t+5^9$');;`6,"%;R;R t'%#"&4632&#"3267#53##5#5353fRPppPK54. 7 F126mn888778Sipp22 8!2G6C88888(3#3'"&4632#54.#"#33>32d]]. |]!]Z . "- +) , *#+))0! 
3LT6#".5462#*.'&547&5472632>34&#""'&#";2>&2"&4 #9 ) 76-*#1*/*&1"'@(" ,0+ "(*&00/d##&$ $;$  (7A0 .w#00#& +V$&$$& {%#76&+###'!2a41$% `--=!'7#3/#3?#'# 56/cat44 0>&L1'73?!7!7!7@RtM PP_),,aR5R.AT%#5"##5"#723>75."#57536353'262>54.2>54.#6MGB1 1 X  T 1 1        ! R5,DCCD:4ONMO :T`      2$'#"'&'.'547>7>7327'& j,,*'  j,,*ǏD $# Z+ %CY+ RQ#70+"&?'&6;2%+"/>76;2F AE,A  [B \H) B| zL V 6H  12#!"&54632767'&+"3%6&+";26/\0 .1 /fB/B` U7 6W x w '7'7757'7LUTTUTTTUUTTTTTUTT~ %'?'7'?'#553!53# zw w4((')'.$T%((Pxx3k2"&4264&"6"&462"'&'.5&47676762><.'&'.*#*:3:>76`CC`CT>,,>,\C==! !!==C- #)(#  #)'#3C`CC`{,>,,>6=C! 76;232+766 5!*> / *(F T T#!  ., D B% s R  @ 72#!"&54636/&'&&=32=4+54+";3244, 1&*`( S4H 8+Z}(%"&#"#.'&5467232>''&7>7?= : ./ 1E.8 938 % &?&P26*'?F (n&( ( 7#53537#b"  $;DP_7"&=463253+&=#&=#"&7#4>7'&7676&4&"3264&#"322#"&=46Z   &   66" we  t  t  ==== #$$#52 t  t   k~ )C#'476#"&"54762"10'&'&"#'.'&'&76'&5476767>7&767674.>32&63"6'&""60'&672676&367676'.'&#""6&/&'&6567656767636'6'&'"&#&'&''.'&'&76#%4'.'&'&'"1767>&72132767>2#"'&   %    +!  !    0       (   $ 29#%                  E    .3% "#"(#*%9   )  <    5$  2  # 1!.2   3$     &   #  ".6<2"&4%6&'.>'7&>7&''7&'67&ΑΑ:# IKG'd3+$"70!^e$76H% lnR.t3 =YM?EΑ* JR# 9C &(,9!AR<8.  %MSK.4Z PS%#"'#"&547&546326322654'&'.543232>54&"#".#"B.&UxB.&Ux-C[ *  DWA      G&.BxU&.BxUr-)9 ()    &t?2+"'&54636&+"?>;2>7676&+"&=4>;2>5C0  S  Y I Z j ! J l% #52#!"&54=46354&+";26754&+";26!!! S  S  Q  Q !!}(!   {   2"&4>&'&'.ΑΑk ! & " qΑQ    4`M0+".'&#"#"&'.54;2326=.54>;227>?6;2!)" ;  ( /`$"-; =]$ ;JD3    &! 84.[0  =` W t<  &0>JT\6&&676&"&547676766.766&76&&6'.7>'&>&7&76  &3  ULH<,7"'>77.=RB:6^:)?8-? ?-7!6'XvR EZ=iE Q..QX3?M)\ZR%&/#"43267.'>67".'665.674>2/>21-a9 0Q%/,7% $4##& #*)  &&7&' >H;3 -'   $,  " $$6 *+( %2)%   7!+5#"&=!%5!'2!5463&W&dA&d&t'ZZ'-UUnUU((42#!"&54636'&6&'."6323276P2B! -A@`?N  *f  #!3TR+4BKW7"&46;462"&5"&4622+"&54>3462+'"&=46322"&=7"&46;2#^&/''/'/v  3&/'  /'/v'/v3&/'  /'/v  &/''/7AIQ.54'76&#'&33'76&#"#>32"#"#"'76546"&4624&"2>e6BeL "8 !/M #^7S=A#$:0A<ΑΆċg>-"4G [,48": 2/7:b($ ΑΑċċ%.546777'7&'57DWumS7EM:D%(E3# R85Q + 8$&:T!+7&#"7.'327>7327     g ;/%9*7-39`O$4%#"&4632.32>7#53kggaEC YM7Z@#7jΑAA !M2A\#"V;IU6"&5462$"&462"&'7264&#"'&&#"32>54'>&6"'&7672"&54>Α  $28 =2$ N61$UL C   ΑΑ N V (7$t Y    %5`%"'&76276'#"&5463272#"&5467#!"&5463!2"&'73264&#"'&&#"32654'>&RHW  [  `d '6;   B6& T:;Sf1    ` U ] ! *<<* !%92"&42754&""&=#326=405#"&='32>ΑΑ ",>,  :,,:  ",#Α!  +*l -.,+kq.0 / /+#1%"&=726='54&"#"&=332>=462B]B"4B/.BV  C[C4F.BA.FG  H:  .AB.FE  -?@-$%3#!"'&'&'.5463!225#5#"3326 `  | `   ` !%)53#55#%3#535#75#53#535'35'3#3#R3RHRRRͅR4444L]]))]]))]]u3 #2B#"'56322#"'567#!"&5463!22654.#"#754&#"7532  G  `+"-3+ 4 !DiD`/ % 3:B. 39@D7'26322/>&67>77367>?"&75.# &'<7465&#".'&'#"&54?.>3263263232654.547'6'632763'7>31#  37 6I  G#     ,g ; #,  "  ?! ,    0  5/)    '  E<     -      \ PK< 3  !    4. "    #C3^ (:T#"&54>767676'&#"&/"6'763276&'&"76'&#"#".#">2@ &pNNt& (   + $P&  V  = "+#3(* 0M,E'RhmQ%D* #&  D %**"$  11Je46326&".'.7.67>>32'64&"#".'&'7264/7'#"&5467&72767#!6*, d,3! V6"#-, Z#9, d-[(b0' (7#,,  Y d# *, d,2"6!_U #!8-- Zl!#  , d,[ (0' ( #!7,- X  @u 397+32'3254#254+%#53#32>73#"&546323&#"AH5qO/5 :8U8! 
>\7#!"&5463!235#4'654+32674&"327##"534-.+(?`NN+#>Z]'$?( 2" "d47--`}) 0+ +*  / #!2:B%#"&'326'7654&"&'>32>&/6'&'6"&462264&"BsCU_&*T(87M7;bg   ( 3%%3%R''CsBeO' +<8''66'V 7`   (v$4$$4&&6>I>&/6'&'2#!"&=326'7654&"&'5463"&462264&" &Y'O%53I38u0""0#;% N && ^q%(84%$33$Q 0"0""0$  .C2"&42654'&327672654'&#"327672654'&#"3276ΑΑ] `  |S  Md20  ?0$  R{B4  <5Α  9 2B  -  L 0 @3#0#57'#53?3@b[ , ]]V+ bc T ] T 5K #-9DN`u7#"5'743272#"5'7472#"5'742"5'?2#"5'7462#"5'762"5'?#"5/7547632#"'&5'?47622#"5'72+&=476326%2#"5'74'2#"5'742#"5'7o s ~!..! 7O.AEEA)iEEiDD`####9989 DBBDFDDFr@ :BCC8/ !.  I6<AACCCC+%#'.'367.54632&'654#"O* +%$ J ;+-%+7/(0  "L44\ ;K~Km5,>N12A)<: .:9 $(+/&=4?67'775'?'7'75 _MجM_7_MNNNdM_7   ;g?3ss3?5J%g?3444s3?5J%A$I%##"'.'&676'&76067>6&'&5.*&'&35236'#"'.'&67>76'.5676676.&76" $#J$; '&/("MD7 + "/26(&"# D+;'I!  ,!%B' $ "1D$  ."#C1)6% ($X 6% ('B<s L  !   &&H %  /, ["&7>721>?>././76&/7>?>?>./65g<7      -,       (8h7"67527'676"&4624&"2'67'654'7&'"'7&'&'7&47'6767'6327' :a )= :b ); a:=)$ b:=)oΑ΅Ċc   4 @@ 4  4 @@ 490  .  2991 w90 /-  09 . 09 /ΑΑĊĊW2r1 CC,  ,CC,  , "*K]}7&5472#"&6%#!"&5463!23254#"#.32>54&'654'7'"=#37'.541535#"545#23232e !!#  O`1$ @ 0  ! U 4He&$ r |`YD     XiK  = B'4<R]&'#"547&5467&54>4&'"&"'6'26&#"527>=4&'733'".5432#"&=4&"#5>73;#067%" %:5f#)-(  453T?  %%!!$  &9= "#  #&2 ,&    $##$d w!  ###j% >)a!!77#.'#3@Q#  %P@6-/0"gg-'&67&632'2'&6654&#""'&>H 77 !;kEb{Q>jP9%>"  'M0_93!!sDbENeRC9P$9G# (VG-2$./"'"'&67.5'"&6767&632 n##m   !QRQR!  7  7 0 Q^yx_Q 0 3(6GS"#"'7&54632&'"3264&264&#"'#"&46322>54.#"32>54&#"E` 1DNrQGq r   D6(F``FBc   o   Z@ ";7ME`N;5   0   =40 SuST!     F^2"&462"&46'&7<5"&#'&7&'&625463!254.#!"6366263$$3$3$$3% #G6"(& &("6H# g  B)  *D&!0""0!!0""09 *f4! *  ) !4g+      !##5#5!3735#5##5(uW932&54?6#"#&7&6#"'7&?632%67232/&54+dl   1 J  +mi  Zk > 5; " h1     R o " (  [  )  "K7+"&7>3262"&7#*+"&76763:>7676oK; H$ *m $ y@  s O,Y BpW   )&'>&42#&'&6373#&&'&'&6;25 07h<m~r |JY AUM21c2 ">LvnM X¦nfjaJH@%)HS%#70>?#!"&5463!27#/&'#37#74'&56367&#"#"/26'#"3673!m X@+'@$("$' <*4&+`jG  2  !  # @ 4BR^d -9F$"4!2#"462#72#"42011"1"1"1"1&1<143413030310030410+353#!"&5463!2327&67&#"64'4#"&#"5#3<>323<>3237#&#"327374&5427&"'63'=35#5##3734#"7'&7&5#3546467&7'7#&737&5#35467#&7373535#45&1&0#"0#111323210606107#'#537374&#"326 xu uJ Q9*#76#*9555Y - ( #   :    5"  *Q9*#67#*9Q1 Z`fsQ-,)))v%    %         %     % %  |~U:Q--Q @'3:BSkuy%+53272#!"&54633533'654.*#35#535#535+37#&"264'&75&546"'654&'&5467&2654.+35#5!26%+532  :-r#$;%%^  ! a  ,  $ l  [3z *  `R!!#RTT7:%%  /        RRR i 6A.7:AMN`g{~1Iamxy#<.+#532254+2*##'#5377'#3#57254+'#3#3#53#54.+#532254+'#5#'#'##7353733':>7#!"&=3673353>7352335;67323535#&'#&'#"1#"5#.*+.'#.'#5463!2"#"5*5#."&*#&+.'#3673465273<523272?"+73254&"&54;254&&54;#"6+5'#3##53'63023"'&47'3*;735353#'#'##"543'3#E '   !BC3)$$j?  '&&'77I ' "#   5B 4)(+  (  g .(#  O  :SR 0 '$    !W  !   '&&'889 ""!L p  EZ E6  8  Ej  EP6666EBB//(Z        \    & n]    <  (  (  E^ 7BB11E44"# (E @ ,AXr7#"54632'2+743!2+7437#!"&5463!24+";2?46:3267#"&#"327;2?47454+"'&+"0;274+";2?>:1267#"&#"327;2?474+";27'2#"546  j   @"(T  %@  "(T  ,J   !  `f   @b\'&nf   @!h1  @#@PZ^bq%2#4>2#"'5672#!"&54634.54325&#"#"'32675#53275&=5&'#3565#752654&#"'#757454&#"3275#"' $ q   J  E#e $ =$$$n  $      5` "    "Lr 3! }U b}}! " ( 3$  1OX7#"&546323254.'&54632&#" #"&'.#"26?#>BD@%)=  D<-^;+   zFE   ) (& Q FCFM % <-'J#    Q60 ( /3* g2#!"&5463254.1&54327&#"2#"'. #"326?'#"&54632L  $:%+ &.    (+#   `3 /% . 
0,&      ;G2+#"&'#"&46;&546324'!"&463!.#"!2#!326%32+"&46h  5gL|C  5gL|  ^8Su*  ^8Su   gTDgTD`/9uS/9uk.'/8E%#"&547&54632&54>32>32654#"6'&"32654&>&'.#"723201&'"##"&732654&#"32'67&[!iVF_3 /  %7 P!"/ ." =0 Y!r ( 1  .  C-A6 0&Xs\99# 3%/^xeC Q\P:( '! ;A   ?$!)/ #%#!"&5463!2 #37+V `A__[p)`*[[ 3JMR_finquz%#"'#"'##"&547'&547'&54?4&14?&54>32362363225#7575'#075175573'7'37'77&='&'#'#7#"7'"7'737'37#3'#362?#6?#747'74547'67& 7 4kj 4 7 6 8jj8,,-,3483,.,rb@#  PSPY|D"uSb: 9 h3K x63)O<<$$kR#[ci\&\a |sZ#V.# _ Z Y _ \ a` L30?\4Z`H!7&'677&'>7\\gq!!p$74(-CyU#"g \$a!rp!0>}B-k#V h":%:#&&#"6326"'&#"&#"+632632 &#"&#">32632 8/-:8C,0=*)$"75>))>57#"*>G@**@G>"/6>))>6/"!18//8#9Q+)))]##&& z,<M]%#!"&5467&54>32>3254&+";26754&+";26754.+";26754&+";26)5I22I' '[:Ec   Y   Z  W   C+3HH3$< &8HcEo o       ' #'+/37;?CGMUuy}?7?7'777?#5##5##53#53#53#5##53#5#57#5#53"&462'6#"'32654&&543237&#"'#553%#5!'%!5!#5#57#553'53'#55353'#53#55#53'#5d,r" " "!"Q"=!" Q"$ "L66L5!    y a *zz " ! !Q"u"/"&      B    > g "" " 5L55L:    I""""M!!0jbbU;;!!+!! !!+!!x M"""" " "" "*2#!"&546;235463264&"264&"  >  j  7''7'7''7'  L  (( '7''7''7''7|Y%#".#"327>32#"&54>3232>.#"#"&54654&#"#"&547632632D26()!)'5*9 <+C; S(@Z)F*!<,,"'%.& % ! C2(&@3%.Bw1B#-/,$,>>,*"R?*E&",/,")) 2A ,%3 >@2#"&'"'&54>7&547632>54&#"#"&546Ik$=&(       % & A50Pd H#"'7&546322656'&"77'&'&5476323217676}C\91v \\\Ln96mF+  )1-  ! _CZ\s3<\imLJ86lL5-D? ) 3#3#'#53'#53'3377#00DRgYYgRD00@Q^Q6p0 00 0p@(!!5#'#35'&753735'&75&7@tS;DW Z O Cv@V"&&s!7#/#3@N!. -#L@[ [^B'8I[nz%'''.567&667&674665".'&7>762263&'&'45&'&67666667&6767&'6:>5>"23636#6666'6&'.6767&'"#276&67&54&&7&'6757"2654&&#327>'"#276464'*#'7>7>7&7.#'&''''&'&'65.'&26'67&'.''&'5767#65&'&'&7&5&'6&3>.>6&5467"&7654'64>76&%'.'&6'&'&6=3 3  !   (  !'% /T '     N 4  P  2 #-?,,T" 5#J $a 8 KQ/K*     1 255+7 S  98 7172 $B! <4   %  %0   %   ))   "   G   "G% "$         ! '(!$  !&' ("# -- -# .&  $6  ( O 4*!      & O6R  "3%&      #    H  + 0$"&462"2"&42>..'&&&&Q';( '3 3&XQ:aB9 *e&&&&) #,8%'   ,)#3?OW_2"&4#;2=4>;2=4&#"54&+";26'+"=4;2+"=&54622"&4264&"}}}t  %$  9(-       ΑΑ}}}    (9-  5    Α@ ;Nc%531"#527#!"&5463!2##"'232>4&7.675&464.'>54.'+326& ((h v9  73!"651""1   u{  #`   % !! 2 = @ '62"&45>54.'72#!"&54632654&+"3ggg ((p(   OywQ[5X4rOpghh 9F8 8#x``qMTn0Z8Np)8D7&#"327#".54632&#"327#".54632'2#"&54>2654&#"! !! , -.! !! + -.yi)EZ0f?sFQzvUVuwX'"(%$X'!(%h6^?%gBrD=sXUvzQSx t 7'7'7'7'77''gg-4M-4Mggf-4L-4Lg 2"&47''77'7'7'7'ΑΑJM5MM #~~~JM5NN ΑKN5MM $~|~~KN5MM  Gk 3;MU]ju}7#"&462%2"&46&'''.7&'3>3232323$4&"27>7.#"#"#"2.6$"&46254&#"26%#"&4624&"2    d/w%$s/0# Z$o+)m$` #C_DD_`P7NP2%!VZ(!WY*;**;  ': **;)''   %//.-"#v0 /u_CC_D~7L  #1+Y(!VZ(!{;**;*H  !*;**1''<(4@%'&'.7617&'.7>26?6%4632"&72654&#"!9L 9L # L;  140  L6#<#LlLB&4&&rL # 9L M    6L#;$5LL5%%%D6462"7#!"&5463!2264&".#"&/&?6/67>  `/D//D% % 0  0$  / # !!`SD//C0   /  0$ 0  +2#"&=46367>54&#"&'.#"2]]") "  *" ]] '  !  (  L#'&1.'.<53>7.'5676.#<5:3.#5'+&2RJa 0 j !; -  <)&;@z  ( cZs)%780   8=   B7.   {  jp74>32#"&$"&4624&"27'4#"#"'614#"&'764#&5473254.#672/632327"37.#"'67   Αv9+K70 B -K90  @!`  oΑΑ| A +M90 A+K8/ ",7'>32'&4>32"&%'76&'7.547M#f7C:@!)#&#11F1;!%0"#"'01&'&'&'&747676747673467676773316676767"67203&'&'&'01&'#'#'>76'&"'&/32656'.'&'&>?677%06($"?     %     .@       4,    &B05 -) "         A- #&"  ]%"327#"#.546;2&&'>4&'>:=`* &=W5.F_cg^F-0G*76+GOJk~0K?dg?sb~bJ !*2<!673#"'.546767>667&734.#"7654.#"& /?DywJF ">59O5EQapI #)IY/*=C- DI$3 DN&# Q gFOUk+9'%9! *,6J&27#"&546;%2+6''.'.676767&'- P: 4>7#  $6 /*A|0#>=g>  3E/;  73!J_q7&6765>32'&62>54&6764'&#"#'&=4;2+3>.4>#".&6.'&66'47'&>3276#"'#"'&g  9((:O2 $*8'Z &&'77& %eU;%9HD<0%,-. 
%0C# 1oT  5IRR   h ^%79(1; &  / &m'&&  ("Q5*E' 1*   1&_?0J%  -6Nd<.#*'4632&'.54>576#".'&62>76&7>'&&>76  JI>. 9& .2-%3%T $T/0^37&@9J'$  *  &   (@ 8$ + ,W)7  ))    &  %##3#im2uhV&#"'&'.#"'>76?676&6Y[?'   ( 4 "!]E&ArwI#-)0% L5-l !!%'7#@<@@@@\@{VVY,?N!!274&#"#3235'&74=37#4754>35'"454=#?'#'##77@3#!,8 DSgPQ   @3 %oq% l  !*9GT%2#".547.6326?6632#"&5'632>54&#"6'&"'&272>54&#" {X9a8 9O a XP9   ky  &1 ?Y)E* 0'( !}'i  hR  '701!67"'.5&736.'#>32>W\Nb'4Dz Gq$ zc@h4-6 ib69!.+ G4];64-%2+#5#"&4632'#"&546327.#"326  WAX2ggGp#=+O>WW>H-;]7[[:b,a.C&ΑIAmnHW>>W?#/5; %''7'57#d%S}.*}TA`&::Yeq%2#54&"#54;2354;2354;235435&4663232632#"&#"2354;2354;23543%54+";2754+";2"*"$$%    %%$``   p<&   @@@@Y$#"'&=#;54;2+"=#".+#"&46322>7>23>32#"&'#"!546Y  Y Y 'e&**&       6 Y 5$ <" Y ,6,*:* "  !< #6$+5326"&4624&+3532FFFȑΑ3$x2F$JΑΑH3J|7FU%#!"&5467>322>54&'&7654&#"'&#"3$#"&764'&>#"&764'&>"8'/B7*X6A`  J4%?  ** %   "   ''8C.+?3?X   4J)! );)x8  -l-  Ia(  H  :7&54636'7&"76&%#/7327654'&7466*^I"<53%8C)G!G( 4RmR=#!@P_"*&3IP  H'%*# 6 $*+),IH'0A( &'77#".4>327''77%+*| ,..C97D-,B3O}J]]J+VV+Y3O7& 6fi89gKXI^\I' 7''7'75#'7bVllV422gV ll VΝ2djd2 1#7627'&73%'#762i98f ^ 889Xڸ e &!)-%#"&'*##".67.>2%35#5&''35#7#77#7!=ZfZ=!**-e)&lm~8//8&TJ;##;JT-: "j:  $(+/36:#!"&5463!24+''#"3!2#5#5?##5#5?##5j% o==o jIIIT6vp@vcujjK22K%%J&&,u%%J&&,%%12#'".'.&56'&'&'.VP5QF! %&#D $GI& %6 (,>=E10RG .A9l ,! g,i#"&54676327#!"&5463!24&'&#"'>54#"32321/4>14&#"3267>4>7632   `@   D+    "3 ! + ,(0 3`"??%*)  <   8$  Ha%#&#"#"&546327674"#"&54>32>32'654.#"3:>76#2 (5R6(. /  BI%R6$0    7  8Y1'6  *-GB-XA-$f  P M(?1F&'67"'.>32&#">54'654#"'6767&547>.%$ 33(&Y;! :N3!#1 m;Ob/&KB%  0**JMX CJ88*p^B  T:";'AUM' 79DR;$cI)+ !2 @U2#!"&5463>''3654'5.'.5463267&#"2726&'>7 1%D   '1$%8  c< ) .`H%3+U$"#! 1./6( 0 $5 );F#.>   h2"&46'.'#&76763654'&"'6'.#0"1"#"'&3263277632327>76ΑΑ         ,'Α   <  <  l%#"&#"''.#"#"'.5.'0547>7056'.'&7>327&47>3023232761<  ("*       %( /)  $7             I$%I      n2#!"&54636'.5#&76763654'&"'6'.#0"1"#"'&03263277632327>76Z    .   ,'`   < <   #7463!1#!>7>7>7>7 F1GE1YXF1 3 /# "%!<H1FZ9f1F&  4H) GMSYc?27777''''"'7&'7&'7&'7&547'67'67'67'6'7'7'5$"2654 - 2= @DC@ =2 -  -3= @DD@ =3 - ʼbbbۄD@ =3 - -3= AED@ =3 - -3> @DuuvvPmnn;bFEccEF.93#"3#"&=4>73 5>574&'&'37#6734'7[++) %6*C( 0! >2 1=t))5%)82i@E&( 3. GA1.  Xcu462#"##"'"'"'#".5#"'67&'7&'&54>3267>761>32632632$2654.#"47327654#"&4&"326574.#"26574&"26574&"2657#"&5476326262632>74&#"32>6  &N    " $6 $G'  -    - 47)   1   1   2   M/?!FZ%"    & 3)-")R-_   O-    7!25$" &(Dt  !B6#     .iH2>.VN*?$)%12"&42>54'#3#".546327&#"%35#5##33ΑΑ#7uF) - !"13II##$$$$Α7# +  .! IfIj$##$#=2#!"&546354#"#"'&#"5654&"32756=6276323276.( %"  &*  \   - BFJNRVZcgos%#/#/##&1'&'&?&5'&?&/&?2327674?677'///7'75'??'? 7CTA&{Z"$#;Fe[ yjIP9@ F+: >NOU\_BX@)J3H,!6.CEN d ',# $&9a=/791=^:=D<>F=,DC/DJC("?$A4%#"'.5463232>5'654&#"327.#"'632(;%:tKxNOwJ r588338   +!)= ">(dAUpoV_:$ :%?PNNPON  8}h#".54>32.54654&'&563254'&7254>54'�&'<54'.#"2#"&4>7>&'&5462#!"&54>3E #!$:     ,   " !  ' &  05}2  1N+'I. /Y  !! "0$     & (1N*%H0'fm) 2"&46&762?7ΑΑr   =s>Α ] Yh?.  2"&47#ΑΑ(UUΑε .6>%/&?6''&766'&7.7627&76"&4626&'.>;4'>4&5'676&'&.'&7&'">374'&"417670.#636=4'#&'6767-    N ! )!  ΑO           2  '  /].('     $ 7 ΑΑ  %    ; 3     ! 5 n*%#X%&5>56'4.'57#.+"72>73#.'&#;2>7   h X C-Y   d54&#">&6?6'6873  Fm    D)# !  {  )S7 0;qPJ6I;J$ 0_ 3@!- "! $m>!4DUB4A %"2( 9 f - N4J)l^8Pr1N  '7\2#!"&54636&'.14"&'&'."04&'&7676&"&306/.>1276a,8''88'*$  / $|%Q)% / ,'88''8IQ &  /)    HQ & /)  7&7>7>6&'&6W?_E#ZZ#W?`E04P<'D,P<;ZW>VFe  W=WFe ;X"@'.6766&'&>7'&6&6.''0&'&'.67&67>6.'&6'&'"'&#&'&'&767>7667>763266.6a  ?  
1 m  + T   FE&#'f*3T ^(+      )  %  ,P       L   9  <   w!K$ 2?#J2$ A5 @`  b    "k  F  " K  S -=M2#"&#"##"&=&54622763232672#!"&54634&#!"3!26S ('  #" &<j\,      \r\ )4E%&?'>.#"'6?'.?6176&"&463227.547  )*'1''#.L+3  D  :j-*'FA6!!(/ z $>-'<2#' V. = Q C*('A&5*'8.1S%01#547>73&'.623632&7.#.#&5&7067672?B.,]rvy3 :: q z   ;9v *)"%txBy+ / L""6 J4AX|2#!"&546367676'.6'<.'&67>4&*#"#"&63!22654*#".6767626325'&'&7>0&".5&7676 ; ';1C# @=  73    *   )- 4 %#  5E& |   '!"   ZG  22!. ;32H-"&>#"&'>76MU,MU,!I}J=l&5V $7k4!bj4 ^ K~J4-F13!T"4EM2"&46'&'"?2#!"&546337632=4&+"767676/&"264&"J44J4)### - T BiJJiJ 4J44J$C##J#U - JiJJi3I2"#"'&=#".5<>7>754.#"&'&'&76532a_@5X"0 D (V  LE (I?    & 7k   0NGV_cf%777'77?'7&7>721?6?77?7?''&"76?6&2>77'/???>761?6'7?7''7#''767676'&76767?&'&#7'###"@ /  x  $!% '%A$$%%6 K :(   ;  ( 9 &  '  +"%  )&p "? qsvPSxx$&" (uoV\>  & B< &(Mzfh?B)   +.N--+$'%' *(?BO@    &"7?K7>32#"&46;7'&>.?6732+&'.'&67&2"&44&#"326 W? 4BF5 2  ΑΑYZ~YZ~G   s$ " +  " V% ΑgZ~YZ~2G2#!"&54637&676&+76.'.#"3!264&+&'>/O>V A3  4+  3`!= #r   J$X! V!,AMan7254#"#47632#7##"&5462654&#"733>32#"&'##74&#"32673>32#"&'##74&#"326747632#32653#"'&7454&#"733632#454&#"#* & %, ' '  Z& 'Q  :& 'Q  5 ! L & #& M ]' # '   L0 _  %( Fl  O  %( Fl   #    3  .   DA/7%#'#7'#7&74767676&'&>67'?If.!$!e"E*;f!1+ 2I:rkrTWZ`6M&Of!$!nG 5%"37)*= !.SN,2.'7!9 z5*+%%57.#>32632&#"'>32'.@~6D[5#B5#<<#."4)MM)1166ȀG8*2)1110o ?II?DAT02#!"&7>37/>16./'7'67"91"91_ g E :4)s)))4)w qd 0= @ n1>f7#*&'#"1&7&6256&"#&47>323226765&#&/&5"54;67636763:'#/'"'&=47>'.'.5&632#.#"#"#"&'&6;26766&&7>&7   ),(V 2 $!  !% 4  !! !  #  '      3CK7=F4/ I4&     "  e          "  #h&)G2#1   M#42#!"&546354&+"#5#353;26'+".=46;2   @(@@@@ 9@(@. .   #*('K#'(H?Rg"&4626'7'&'7'&/022130"''&1377676'6".#727&".'72Α=    %   #   "3 (9          'ΑΑ*-+,-  3H -,-- -6< Q6  )5O62'&"..7>7&67>'&6'"&=462746#"&=4632>NGI?>i`b UT   OR^ GE11E%"!%}CG=:e`  T  Q  ]1 H   G0++0G  !1! 2 s!)2:+73272+72+732+7%2+72+72+7"&GD~!%FIFGFFF( FFyFFKYY1Y1Y61Y 1Y61Y>`7>232"'.2"'.76#"'&'&'&'&547>76 '&'.'&54'&'&'"76767> 9C) 04  I!V"2+ %v  X  .:G S&6G%"'.'.567>7>3%327654'&#">'.#"?   ";  - '48%( ,  -- .$  CB  TM %  3%-L^ #5  )22   #'+/37;?C3#75#73#75#73#75#3#75#73#75#73#75#3#75#73#75#73#xpxpxpxpxpxpxpxpppxppxpp(ppxppxpp(ppxppx#$.>.7>7>76.'6L)5!)58Gs8 K8/fYH+H5K;+35!*5!P~F@e$>]4m"C2. )u ;'/6"&4767&#"&546326&&2"&4264&">L  $'7L5-'UI^^^&X  8* 6N"q߃^^^M %#"&462'""&54632654632L'"22E1T".0G12"#)5##--&1F11-#".."#5)#"21F1q)Xhu7#"&=46;;2>=4#67>76#'&77>76/6.'&.&677&'#"&=46;2'"=4;2#754;2+"+"&=46;21/  .*75/ 62 G2$q%;Q9, 7$>4R60      &>(D( ( }    p (  ,5 +Y1+9%">,%)&". #@/ 4C3 93$  ( ( (     T 2S^w7+76;26'&++76;2";2#"&7>;#72+".>;+";2?6+"&?%2#7632+"&576&++"&?3+"&?>;2+76;2?6+"   # %    ( 9  0 # : $    I B   /5 !    _  %( #4  "c#!( @bJ (  !'#31/#?#7#{p651cb ost44XA#_5gg5_#A 6*>>*&R#&+Α+&#R>T>*.y &767676.'&'&76'&'&0&'&0&&'&1.'241676727"#00'671327>7213"767676'&'4.501&'&'&76767632767617'"#7&'0'"'"#.76&'.'&'&'4'.'&7:30236'45"37>'&'6%&'6&74&5&'&5&76767&'&'&'&7671676'&'6'&272637'67'676&'&67&'67.4>56'&'&"#>767&R "8       % ") )    ,H+#   )%     !0             "           "   | ] )  ;%   $X-. -   V   15 +* !$  %           !                     .:JZ3#5'.546762654&#"'3#5'.546762654.#"%!"3!2654&'2#!"&5463~44-.@ 44 .  -@ D    &&&&8 &' :  ' :      &&&&GL767>454'.'37#73'##".5&#".7547;2254'326& O 4" wZJ! [ !&R T&6 @$@& 3+7    # / )$J"'53'3#%#512>'.'&#4>1#53WJ>>Bj<3P -7]<`6[v@8S__"JJ=@v[6`<]7- Q232"2"&4%&'!"&5463!24&/.+.67'"367./17672>7)    s(:=XI  ,h+.#%  ( "S+ !   
5390Z$K O$      2+4546264&"7\\Z5KKjKR[KjKK5!K #53+32&+3250`ӽ__q ~Td  -159=%#535#5#5'#53#5'#"'.7!27&6?6%#53#53#5'#5^BBBBZBBW C)vn8 %+ BBB B;==;;H<<<'.#"&76767>357$4&"23676&/&'YH0  !"!H)("M["/ 'sZ& #0 162R@]R/   "*4 .>>XA76  bD!3$ 4=B #)1AI7>77&'6>7&7&504576&'67&&7#!"&5463!24&"2Z(C2!@. KE.# #DI'-( 2'4/0`@^^^3$,K+%  71 $(4 :".d %0 `^^^32+532654&+#rE> <_:V_dQk@F?[*P@&@a?@`7 $,>2&&#&676.2#".54&2"&4'3237.670>636"'&5"5&?"1"&46323&''&676767.'547>3>367.>>2'%6.67>4.'&""232>7&'&7>9        N 5j!@   --H  b9 " $-  6*"(  ' G 3 (# H    l    h          ]    (    . %    O 1   U . O #2#>%13#67'.5!456WANdE#-3 TQ `AWGDuOqS .%$/ k"/0ijY_*%##5#5354>32#"3KdQQ7$$ (Y]F(8O<'B2'&*#"&5<&<.'&54>6&"/&#"?62327i`L  N'B\O :  I O :  hT{-  Fl3Z?% ;,t <, ####5353533##5353##5!###@Z@@@@Z)<K73##576=4+53546324&#"#576=4/73''7'73733r]M; &55*/E  ! n *+ & &3"# 32.*  "  (9-  -% && Q72+"=4;2+"=437010101010"01"101"1110101#0**1#*1"#"*#"#0"1*#"#*#0*&*1&#"#*1&#*#&'05&'4.#04'.5.14"5&'014&5&4&4&4&5'"&</0&1&54'4&'45"41&54&<&41&5<&<&504'41'041&5041<545&504145041<'45<547676767676767676320154+"#54+"#54+"#54#563232=4#"#"&#"5654""#54+"#54+"#54+"54667676.;ed<.*33+    (,,(4444}AOVSOA   H1 ZH LL  (>.'&7667"&54>32&760m5 %#01}~~:c;4/  M`5m #%%1U 19YY;c:!4Hlw5'.54765&'.546765.547.=46723>54&'&765>54'&76"32642>54&"6s s b 5T1E9  4>N>  dd  >N>4  &8 iQ @i=    J15tt51  -1d |h Dc8Dr  g>Eo\       \pE=g @S-Vh|JrC0-   "  "%2#"&54632&"327572&+5DDg]\[A?&nNN74&%34v R\]B?'NpN$"q50*Y L'.7>327#!"&5463!245#"'.#"7670703'&'4+76,* ,!!!!P!v(&4 <4. !#O!C%!  5 *-"O!!P!!"(*$:<,    c0?732327676564/.'47>32753.'&"67>'.+  5'*2 7"3L+ O;2,f%IE +D"(A"<-0 1'* #'Q;% 7@="EA1Q U.8@ %3!!'S6Q!SQ %'% &54'77E54'71"&5>54'#".5470>54/p   O,+!(!C  D    8* >105,9)6 CF-4,  * P~&-OVd#067&'6?#64>4'&'.'&'4.77&"&576&'&'.77>2"'&'&637#"'&'.'&6?654&'&'.54746165>'&767674765&'.#'763263&67067>367632"22#66763267&#""67>76%2726767&'&#"&'&#667676;2&'67&76767>54/0&5&'&7676'&'&#"767&'&"'&'&#"45673&?67&'&'"276'&+"767#7>'12'"&/327"'&'&63=.%    n %.  ?     (/((/        #         (    1   t     & T  &  ( =5  f>v  F< )   1     +          +      4   e %(--$#                    #1   + 30+..+-3     !  !     *+9 0@@<+*      7#".5/267#"527654&654#"0#"76746767476'&#"47>76'&#"#"7676'0#&327717670767464676767676767"./3276732767>54&#"'&7>7632'.7&54632?64'"=32%i         +-*    ttD0 =9 DF9P P    @      !  _P          1@<) 2#!"&54637#.'#3Q#  %P`6-/0"g!.'>72&#&'.=4>76&#"#"&#"0+".5&54>16&#"#"&#""3262324'&'.46546;23262'')N n$p$(&s)k  1Fe    :  :   eF1   + + $>&''&'&'&67>./ O: # & o"*>$! THN(_-.R&`J7-# AMI06L2 "2",8}()95EQ2"&4654""7676547676762+"&=4632>4.#"ede r  X /+<<++<<+y4Y33Y4Pqqpeee    q" 4<++<<++767>5&540'.7>767676763636~*  &  <, 43&#7Jc  <  +SlLcs 27PZg%!&'45.'&>767656.'&'&'.7>74.5&7>7676>367>7>72>5&626.5&6'6&'&'7676'&'.&>.'.'&'&7>36'4&'&6'>7'"76'&'465.'30'&'&'".4173&'"7<5&#>7<5"'&'&7>7&#&'342;&76'4.'.'636>&&>7&721&'&7&72>>7>.'.56'.'.#".7&6&7>7&'.'&'&36 Z .     %g  4   k* " 6#-6,   FC    e "* )=    9 $S     #  `     $!%*! 8/V %   %!   -?    (|0     " ^4  F $'?^2      !%   !)        ;  /          &  Z  )/ #   D   '    $(    ">[.7>3267"1326'6&'&>76767>320676zX/lȯ/6#q?FpG3C   7a3 57K/+&F-)  ! ."   3 ?6ȯ/lW8>     6>u".)KEP&/    6   " !6!!75##"'3262654&/.54>327&#"#"'@* "-"c'    !)"  "@c'   " 0  +B2#!"&54635##"'3262654&/.54>327&#"#"'* "-"c'    !)"  "`'   " 0  +.D\huv?7'&'2#"&546%016"'&7'676'%&767''&>%2#"'&'&'67&54>2.7>3'654&'7@<1N/' 1;C3$)z/  -? .'. r ?1N/'1;C3)  ;H^uHQ J "":';B  +#'17D5  02J*(} 1 ,;,&18D5 $'/76&'&`   < ! 1( & , `.    
*L   ?:  @7>&/#"&54>32SFK[>p( $'" )NGv 5ks6L**/%$*2DTa%&'&''.'&767:7.'&7>366/&#"#"7264&'&7>7676'&'&6~ PK  ?k ?A `!8 Ahp3>:[\ 109EeD r. O  RK 9T1d= N%6Rbx%+"/+"=4;254;2#2+"=432+"&=4;272+32+32+"=437#!"&5463!24&"67>76 !    R - .0""/0""/>_`N;  .9"G,*G,*GG< G9<     GW"/0""/06NN61J,'u *H3'&5%3".=4&3#'546323#%53'57670&'&=325N , /''<*%8M*#&(  -Mo+ '" M/MM'+<.#MM(6 F  4q  ''7'7@@ ?__? @$\\%\$77$@2:Bt7>3>32"".''&/*.7#"4;2#"4;27&767>54&"&'&'&'.54>32v     (XX KhK  +J+B^  .  A   @  ,1(3HH3(1,  ' /*G*[@/ $4/5373#576=##576'H-m_'&'l33 2 6%%& >> ++O#"&54632>'&'&'66&'.732+"+"=4+"=4;2=4;2rnSggS:&./ .7&"*  b1j((0((0gg);B^B/ ) 1-r+&U#* n0((0(( 3#73#3#73#!&=4&"&=4.&5,,*+$ ,,t$2"&454&"6754&"6324&"ΑΑ"!""/Α = %3#"&'357#&5462#5'`lAR,Bq!lhhK Α K@$6A7iiP')gg)'П6G\h%&'7&"&'>='6267&"6267.'&47>330#'236"'&'.4"'6?7**h+&$:FF:$' ,d, 0>>,,>VcU?,, #:  $ 4&&J  -F%$G- -EEEg:T&&U;g(5 {  ;j"/&7676?6=4/&6=4;2#"/&=4?67#"54;23254.'.54632+"'.#" >  /200,b#7 ",(V0).&<%jjj    k k  k!D @ @@  %%#53%!#5#5#35337#3537#3533533  @ @@@ @@ `` ````7G DNW3#'#73%'.'&5#7327676&+"&7676763#";2675&66'67676&'&-093-117)O%   X    X!= =%&i 1 %    jjrs6&' ( $   $ #*h 7FNV^fnv~:4>'4>7"&.'."'&5476324&"264&"264&"24&"264&"264&"24&"264&"264&"24&"264&"264&"27&'&"62X      # 549 BB (B^^B(XXX(>> O     !(;O5005OBB11BSStSStSStSSD' (  #/9AT"&463235#737'2#547#'35##5##5##54#"35'5'57'#353#"&7'326ggΕ`o)F9{!8)E DEDD$   8Α ?.  471M/:HJ0!! g V  ,8 3m%#"'67>454676765&5654&.3276'&#"'4>;>3735467;23232+!".='#"&'6) &.^O8 ,9    V b   &:/I^,5 (< [:4% H  R    K    %#"&46323lLLllLLlZLllmm"5#"&'&546322>54&#"#".672654'6r/mKPn/?@B-$9%%B&?WSn +?*mTX~9Qf(PttP(fQ9,A$5A ;!,AI%#'#//75'737573724&3>7"'&'&476762564&"C       - ^88)8? 88Y~ZZ~Y !       .0_87*80?!87YZ~YY FMl7&'.'&'&"#67>3021'.'&70703'.'&167&'&'57>01&'&76"45."&'&74"'50>767>0&'01&"'&167>76'0'27:63231'&'.'&'6160&'&'61"Q  . K"#8)'4   R?(&$  D"%#  F 3* %=C"' "  ! " (  )!  $%\    h744D "//%  , &0      #     i        @(;%57>'&5676'54'&'.7>??; ͗  e%'E= ^6L;/=  a  .PLI66  #8 U  <97*l #  /6#  1(-&'&67%6'76&7'&>7'6/ ##i  at  S jZP Q \-!1(7 = %GV%+3'&=46;26=32"32>54'"#"&'&7>;5#54>76#'2>4.#" k5656 k(((  %$k 38r   ?.)&&f )0   167 )   f   /2#!"&5463&63265#"265265#"26=&6a'88''88'2 8   8''88''8   8 U2"&46#"#"3232#*#"&7676#*#"&767676&#*#"32327656323276ΑΑ | <  5  '   #.[A Α  F5 B?F+E&!@GQ[#"&=46;2'&#"#'#35>32#"&'365654'&'&'&#"32767&#>33264&#"3264&#"@5KK55KK  $  x S #"( 3\ K55KK55K Z F %%"'f  x   %(%'762#"/7/7676/&'&>&5( BDrBfE:0 ]$# 0"-{&9b6 'BsCf#P% " !)%0 @Sk>FA#:EP['.'>'.676&/>6264&"0676?32'"&'46746."&'46,..,7=!L>(     (?K!>_4 +,n"#F:  WzWW=5),7  q! 2"&4'616/&"37#"ΑΑeJ " ZUTK pΑw( cc)x #'+/37;?'77'7'7'7'777'?'7'?'/7'77?^6JN."7Z %# 8 8<+!)#+.|G7-+3=5PCBY".',";##B1 " &  /  )@ H96`-2W B@D57'.546'0'.'&/?>54&'514"D66&/$:MO< ) &,$-5+7(& ,>҃s|5 7"';6[<4*9 0* ( <7!>'#.'6X-~wx2M )ps^` 5EE"9m<Me>`L-T?- Q '#"&462264&".?''"&'&>?'&>767'&67676767'- ^ 4  -5  - ] - .- ^]/ / .  0  Z / // / Z 9 Z77&>&7&'&'"'3>56'.54>32@< w C !,z'/)7 /("2  /8Bw < +)+&$ *! "& =%/&?6?6/&/&?67/&6/&?6, -]TD:4-\_92+ *"JR A,V$H\ 7'* 2>"&462##"./567>324&#"'.'3264&#"32$3##3XC/m1!:aGB/5$&%, $- (3##3$>/CO!, k'e/B54% && "?,#-A%'"'<.'.'.01"'&76&*'&'&'&656&7>4645&>'"'.'&0#"'&'46'.'&54'&7>5&"5.6'.767>67>7>7>23>32307>302     +%   >59 ,     N        '*   $  $+-   , $*    !     2637??7'7''''/'7'?'8P C]8Y9}Rsf|1.+l &  ,ND& @'$J .3D6 {l&Jc&Tz8a %   Ea@NZamt%#"'232654'%.54632&#73254.54632#54#"#"&'73"&=3255#532+53254+5#532+53254+54+532;2=3+"75#532#54+3#;2=3+"7#53##7373#5#'#ApC#$ e3.53.5e%" e<     v$"'    9'    9  %?4'; 6J5+#1,"10EJ]      M777H+HH+?=> ?   
U/#?6'&'&67%6Dg2 f]"L0 i (2#!"&5463>'."354;2+"=#pGNhN^DD^| W84FF4D8 '575'7''7XeW"XVY@-uKK:55/7 #'+/39=AEIOSW]cko"'3'#4=3#&'7#&'7#57#57#5#5#57#5#5673'5353'53'3#&'!5353&'!'673'53#4=3v0ggbnfggggggggggggg ibgggggggg %ggg S t tVggg  F # E#EhE # % JMgs % R "y-In'.'&?6?>76367'432'&5676322'&'&#&56/&'&74>76  4 5"     %    %; !& 7"     "    } 7#&'&761'''''#'&'7&'7&'7&'7&'7&'7&545'767'67'67'67'67'67'6?27777774&"26"&462654'&5'654'&5654'&5&=#+&'#&'#.'1#.547&+'&54704&5232673275#327.'&,+- "%    $" -+3378:; 7; 16 (."  ".( 61 ;8 <:883JghhOYYY| )              > k  "  %$ '&&&&' #% "   ,' 84 A?GFHHFG?A 48 ',  !hhg~ZZ~Zy   Z p@$         !eB ;Q>;2=4632+""&5&54.+"&=4>2;2"'4.5&6327632 c  !"d  e"!  c $11  CC  a3    %     %     UUvv 3Zes#'"'.5'7#.767>;6&'.'*?>72>'"5&'"&636&'"&63&&54&#"432'.'&'&'./&6762?6!  "J%    $ 0d$#4U  I,-)0) (D@  +(%*//E4:L ?       }\&    4 "  m0Nm([)/)#OO #f 8!,4& ;C K"       22#!"&54636'&6'&'&6323276L2C!  % .B@f@N  *h8 />4VR`,U7"&/.>;2326?>;2#!"&/&;2326?>;2#'2+"./&#"+"&?>3i7!$ $! 8  h8 !$ %! 7  8 !% %! 8  `pK K p pK K p pK K p BRa2#"/7'&546'&'&'&'.'&>76'.'&'"#"07>2#!"&54632654'&"7m'(N7$2     eB^0.]T#E'(56M 0 &7o        `]A@1.]A+%R/:''#/'7''7/5?'77'7?3772>54&"''#/'7''7/5?'77'7?3/3267;?3?'         w"/!!`/7'6'' * $%7&66'5%$ *  $/>,%:           B!!/!( #&4(66'2#&)'$4'4. :&+>/#  @J2"&427&/7654'&54632&#"23767/"&63376>54'ΑΑE9k&$DC?P@W&G9<  R0"%<  D3<ΑgAl$+  GJ6 #;-_ f=92 %++AQ%'&'.547>''&'&'&7>7>576'&'67632%'76"&'".rG;14;3 Y6TZ!   69  `+# ?y2 1)W $ ! /_0RmU- 2G)"   1A<"Rx( @      7#7&546;#5#";W@`KU@R7&2X&gMT8;m.533p8R^4y@*<D/;C\gy'&7670#1#"&#"#'.7672326732+#73264&#"&54?54#"#4>32#5#'26="#532?'3373u     nI%*#6& " n3$#5 "! 9 #$BM%&   a  J ss@'DPj|+534?#"7#!"&5463!276'&5&7&'&#"&#3262327674&+3532>4.#"363232737##'##"#3267.M  @  >      3  Z   &   f$77Z  `  W  ( B/   cQQcl'.871#"&767672#&'.'&763276&'6"  _3+3U*'(%+}"  3>Acf5O===%.$>%===Q    =>==KL !B!!B BB B!!B!B  )  _,  ,_$$d$$G# ^     ##G$$*##F$$6-V7>76'&'&767654&'&'&'.4&707676'&7>7676'.zZfVR  %&-  X+C;0-@(7!5 cRrc v`Y! FE '*#+  FF "U   8b:;8 0  /-;:.IX[x }<JRZqy&'&''.767&'.546767&'&67667>'676&'&&'&'67673276767&'&'&"67&'767&''6&'6767&1.67&'67&'&'76767#"'6'&'767>54&'&'2&"&462*47- ;!!?  +52*  @"!; !" 0 '   .   _ "!4%  #*'z"" 2 !## ',#  }&&),0$  >* ,* *?  C-R'6    JF A aN3$ "!   d $1 0."   ! ! && #'##73'#'%'7>MxNyT)A<  QQ 88 7I)P%+5352>54&5475&54654&+532#"'73254'&'&54>32&#"23#"=#";#"&54654#5654&546;7#"'73254'&'&54>32&#"'2#327#".5&634#"e j+"*    +  K+ !*   #_##-9 !  !  "  !*"    <"6 "  "  !  "  ;"     D+! # &%.B%72'47>&'&'&767&'&''&7676767&'&'5#"&?#"'.7671'.7676727.'&67676'.'&767>'.24767>2630>201276705076767>563206767632266'0"676'&2>65417654'.)1%#             <  M -# !9^Bw ;/%H -HW hK"1:  ! ' $ r  "p[  EW, $    " (   % >%$& )75"*)V Z""#'+      V    :    "-& j3  2   3 37#'e[08ਨ6rrYY  ?'' 373&&& R0n\\Jbb$BB X( 0D7'##7'3'373537#5#53".46767624.'&"27>"&5475"4753#5##5353#5##535&/3#5##535'"475'"&5475'&5&5'43>?##7#'#077'373'k4>>>']>--)d          W   P  &  &  P   YAA ?>@@Y**c+<``ttt %   %D           )) Q((+yy%'.'5#.='./&/'.?#+'&6?#'.5'"&'&'&'&4?673676720326?>7>;67>2?>026?67>?6766>?4567>72>?>7>3>?>6?67>'&%>'&>./.677>/&     :             0             d   :              "  6  $  - 5:/   -= ( %  1%  m7 ,   "   .%#"&#"##"&5&54622>3232632 7%H2B"/"  %P C YNw"" p 7#3#3#3#5B22e332~BLLL L) ?G7573'7$"&546253#57'+'.45454>;654"2'#''.4547&'632&'54#"'&'#5.6?>767>36'7&54"2&&)6:::Y   k   #A  ! :2)@&  .`  x nnw        Z( ( T      6^ 1J   '9( ( y #'#!3'3Y>>Y.CoqD-ttX <D>RYhsw3#267#"&=6&#5375#"'53254.54632&#"#327#".5463234&#"'2#"'5362654&#"53""    ,&_,!SW  %"X. q$#,'  r,/&&? 
"* )g;$ $^  (  * +  ! $#..3+%(( 2 {B ! v"#"&'53254.54632&#"!090 l\"[TH7 090 iXLBDJ1%;(KO .#8%KV$#327#".547>772#".546)3 &R$Y; :F${% %rYW@76"+"#&#&'&'.'4'231'&'1#"#&50=46;230>76700137654'&'&723230#"#"'4'"'&'&767621450454&'&5<547676362<=4#"&'"#"76760"&747676'4'&'&""+"5&1476767632'00?4656;2'"'&5<:232767>76'&'&5&3232=B ((#_!+9%&)'"U   "  +!  ,  #        ##$   {   3I )**  ( j         "  udZY   IH5@(8r770=476'"'&.7676232#%#!"&5463!2#0&#"376767674#0"1"45.'&'&#"#"6761323250=4'&'&'"&+"176760"1&'&'323254=01767656'&'"'+01054&5&+"&'&'&'&'&230;21676767676&7&'&'&32376213676767674} " $    ' ~%           AH((926D # /$.   $#        `, Y 2eT <j  Y  T<   0    8%'7'8]]\2#!"&546337#53'#35####Qo>BvX`###G#Wttr2#!"&54635#35#75#75# l (xxxxx l  ((P((P((27%#!"&5463!24'&'.+";2767>5__`* Gr Gr66`; ,< , !!5!3535@fYZ@ YYZx /[nz+"=4;2+"&=6/"+"54;2632##"'&=463276/&'&76322'&"'2#"'+"=4>2654.#"&2"&4))`  ))$ $!5 &!+:.?@-!)# $$$" # { Q # "  -   i?[?Y+ %%3%i.?f{"264%2".54>6<.*#:>732"#73262#67>'.*##%64.*#:>732""#WVVV);GR:  x!G   %   % %  x!G     X[z[[zk-M\M--M.,$   G e6,-  "`? G d6, '#"#"&462&'6.#"3762"&4: "#eˏ *?"B]+I+%R::R:  ˏf3. "A0]A+I+:R::R@!-9EQ]i+""'.+"&=46;2>354+";2=4+";2=4+";254+";2=4+";2=4+";2ZI> =2Z$88$#(.##.@@v@@'=Ohv%'&767>;0?'.67>'07>&'&766'&'"6766'&'&7672"67>/07'&76>?'&'67'&767Of9( )%)RM `  0% R w %  =eC !/0'G22+!  ).46EF$&S4( YK$ B     (27@"   G2 ##  <   -  Z   G1&7>01'&".54327>3270>7>" $ &', 5+:3'<#5 )bBU!1&A5f7 "2B  7  `  fX 1!"4( -%##5#546;2'42"72#"&546"32>54&;M ffEE#idfjVuwT4]:vfxxf 9##"hkge-zQSx2^;Uv )B2#"&54632>7'#5"'774'&7">753&654'&idfV xS/"]')""=j@_$" # 0-;;ikfe$Sw )&&"  ]+''" 9V:<13M2#"&5462>741'#327#"&'#53<5#53'7#7654&#"67632&#"3i0>J&fj!<%" [ 8 %-GA xCNvUl>Q&:" !Fi+N<,fe=#82 .50 #Sx<Uv]$ * 0   !%:2#"&54627'##5#535'#53'7'7654&#"'3733idfjlDR@=@@: x7@evU :* QB(+B?&hkfe=c$$88' '8 Sx3-Uv$3V `t  2#"&546"32>54&#5#5idfjVuwT4]:v hkge-zQSx2^;Uv++P**#72"&42>7&'327#".547'703654&#"632&ΑΑ%E5 $!.&'O3XwS4YJ!D. (0 Α/` , 4##0`L0Sw2*!4) (4;2#"&5462>54&#"2+5254.+'2+#526&+idfj4]:vUVuwFF56" ^2 "? hkge=2^;UvzQSxl'Ol.  ,C) ,37;2#"&546"32>54&#".''574575#5'?'5idfjVuwT4]:vM :D @?;Cqq(%'Y-hkge-zQSx2^;UvF @F>>f,.-/:5++ <2#"&546"32>54&>32#".'332654.#"3'idfjVuwT4]:v$#5 <*+'>4$ 211hkge-zQSx2^;Uv*++!;(.$.(+",11 s2#"&546"32>54&2742763276;#&/#"'4'#"'4'"5'#"'0&5'#"/+53767463274>276idfjVuwT4]:vQ :3     4- hkge-zQSx2^;Uv5l< P@ a , \ LH W_ WT Y9 Lb E\Pb  _s2#"&546"32>54&3#&/"/"5'"/"/"/+53767627427627>7427624+54"#";2=idfjVuwT4]:v;3 3+ Fhkge-zQSx2^;Uv,S @: HQ IF N+ I Z 9P CT ^-  E3 ]  4<@2#"&546"32>54&2+"&=#".=4732354735##3idfjVuwT4]:v6 ) 7lllhkge-zQSx2^;Uv6 6se (C (192#"&546"32>54&2".4>"76'&3254idfjVuwT4]:vr:* *:* G,6 %=  ,hkge-zQSx2^;Uv<+/6/++/6/+Yd .i Y?A9AMaf3#7'#&5#"&547#"'#6=3632672354#"#632'3264&"75"326%673#"&54632'.^"c$=) ('  &. C)!"C0 %7--/ d)+!*)*)$+"--6t B   P($ !!+m00  )!$*&&O`$2"&47'.7654'"1'&7'&'54>767'.?4763226?6'67'&3&545'7654/&#"32?32?322"&4  $  .<,.      (  7  R"# l    5:0'  &, !K< !"70$   ]  %`   t  4`'2>73'.45454>?62<.'&'.3547635476 A)fM 316  ./KD D/ .  H2,  %-7 ..-   4r ??$ rE2EK%#'#5.5463253:654&#"22>54.*#27&#EXCp(!%cfxON-E8   Ft`[     R9rK >> lGQr4ZC'A % #HBCN] <_i!!%5"'&'>54&#"'22#23235".4=372><1#3#"=463274.#""'5632@ <   $     d %@r!m.*     -E##[g7'&'&7>'&'47>76327'&5<5&'&"#&=76767676'&76760'.67276 3$# (f42,E%?  d(\N;  ,:m 4!Q!e"( !!%$ ,y. " s7%G. 
+'  1+1,i 7   *%c2"&4264&"2"&4264&"%'''"'7&'7&'7&'7&47'67'67'67'627777'ΑΑƍƍ|||J=7$  $6>JMRSNJ=7$  $7=J NSRΑƍ?|||'$7=JNSSNJ>7$  $6>JMRRMJ>6#  ' '?/?557`$##T))33d#$$T33),,+y--::"++,::- DLRX^dj"&462"264'./'6373#'#5&''7&'#5367'76?#.'%#63&%3'6&'756_ΑΑ8-18.%-D  ;; RR ;;   ;; RR ;; " ""%.8--8.8ΑNr" "+R ;;  ;; RR;;   ;; 4-8  %.8p.%-88-%L""2Js{3&=4'&'&">7#C26&'676&'676727""'&'.232.#"'.#"&'5>326="264&2"&46"2642327367632&#"'5.'"&'6  &      O      &) =!(&? :(!1  &(: ?&(!=""7,  * *  xP5[= y!')      )&@  @L"" 3   ++ r%>7#./.>?.>?677'7'=57&'>&'>&'>4'   #,F**E-"      &   "????,      J ( &#+!''! '5,* 5)# 0[>0 60+ !<-'D$0 7...7 0 U *1P-1-713D# &'544J{&76567<'45&'&54545&567>767>�'&5&'&7674'&'&'&676567>75'&'&/7<5/.454'&'"'&'&?'&'&'414767676367654'45&5&'&'&'&'&'&'&'&/5'7>32674&'&762763676?32?#"367>7676767674676761#"7&54'&54'&5476'.'4'4'&54'&43607&'.66'&'67&                I W D)! I&+%& #%@        8     '                   &-    %      9,:L8'', % % ^^K2322230##"'"'*&#&#&'&'&'&'&'&'&'&'4&54&45&547<65<656167>567676767672636212":372676767676767676767676'4&1&'&'&'&'&'"#'&67"#"'&'&'&'.5&'&'&7676767"'041465<5<5<5456767676767654'&'&'&'0&673676745<5<5<645<5<5<5454512174767676763632367676?0'&'&'&'3"#"'&'.'&'676767'&'&'&'&'.'2303232763667676767464567<5<545<54'&'&'&'&'"'47676767610#4&'<'"#*#"#"#41>12676345"'.#4):3670254>721633230"""02220#*#"#"14=46123#"'&'&'&'&'&'&'.50'&'67<7676%0#&'"/476767676767676367676747654'&   ( +        5  +    +      (!  '/  u              f      "       &            t        '    2#  ,A    &   -<   #     2                      : $%                     &&,               jp66&67676327676765&'7>767677676'4&45>7654'&'.'&'5>6767`I\*,(!B. %$6!  # */1!- 0! (M;753 93 -!"       ' #!  -*6 $/($/<4-I#":,14^0 !)1'62777''''777&47'6"264&2"&4w [  [ wF;FF;Fw [[ wF;FF;gHIfIH44H4F;FF;Fw [[ wF;FF;Fw [  [ HgIIf%4H44H +5?IS[iq2"&4264&"%#3##5#'353#535##3353'7'7'7''7'7'7'7'7'7'7'7'&2"&4#535##3353'"353&2654'#3##5#'?'7'7'ΑΑŠŠTT0v%IS|S   x  켅BTƝ(u4T_Fp?}1TT@s,  ΑŠ¿#/.R\8P]        R !3R2RARAU}XM<3?8 XF   ''.'4.'6'777''4&467&7467>'.'.67'77'??'?67.67'''''776'7.'5'7'677&'67'677&'6'4&''&'47'67&'6       '' Y:k, !,l:Y '   E (  ! ' ,  '   ((&""!# % % "!=2b']<&1xx1&<]'b2=!"  % #!""&((   n ,   >' L  '  J '?  #Mq7"&4632.4>?3:1>'.0#&7"#"&54>32#"#7>.'.#"56&546212#".'&5./>7>5<13L  M)u$=,# [  $46  0   M'pH x  >-$ [ %6N$ \ &6}  M'wG    ?.$ [ %41    M(m  +39^ )%&'&76&"&#'476322'"#"'&6"&7662"&31&'46&'&#"#".4>76%2.'"&'4547'&>017626'.'67>?&'&6'6720'&676'&'&'&'&'4'&'&#""#.#'"76767676.&'&27'53>2241&#"15674#"1675#K8   =   2 3=E>4)(22, ,?"  ,& > 8<1 +Jq 9,'    H( (  ((   G    -/ # ,37,! #  :"qT1 6!F a       -        '?'2"&45'5'5'757757"!!"EΑΑ;<<;!""!""B!! ΑW;X2>54&"j  ?$B/ih-P $=#MnNN  /B#JggJO!##=$6MMmM (%#"&462#.5467552>54.'7"&&6'SuoPmm&'*#3533'2+5'2+57"326&2"& l u  * C !6** l  m m \f$$e# f""T!&!(!55#$$.S2#"'#"'.4&54632>32>654.#"#'&"#'&#"32?3327&1  C==C  3%- $ .8  ,- : -,A ! '( " ~/$ N+F66E7"#  -# #s   %"uu"%?B:Uq0467>23>'.'.'&"+'67>27'#5<>7>7>7'&'&#'7'6767>  r B     B #)  S$ *+  SS# +,  < # =1 1> mm  S{ = > {| ? ? 2"&46&'&"&'.26ΑΑ  ARA   PfPΑΌ '33' 1??L#".'&4>3226'&<74#"#"#6=276&#"133:325&53" ga gb% P%P@&m5? &m5?$VB%/i$ta-0'2+"/+"54;276;20Cn33E6lΆ "2#!"&54635#'#3577#5##R$=>=>>>=\>==.MMxMMxlii #'#'#735737537#75'3#_lVb(m{WaCHU4N4POI 6SPߌHH@\?q?235';er5F=f5+Q GT`3#'#75#37'2#7''67#6;65#>7>3732#%'67>''>+#"'&'236=#"&/35"'&'26?#036z1*aI Z=6@B 4  X3 ) z    l &) t,+<o,^  !A" 1L3    '#(  ! :LI  G d e5>2.'>?#535#5##3#3&#"3267#!"&5463&632#"z)d Xmm3nn\ Q.:-.0-MKa ))#-"K7?C)$ - 33/#/-$"6 )4)! 
Id#n'1Uk%'#"'&54&5&7461/>36767'767'36&'&5476736'>6762#'&3&476726#&5�&'&#27"/67676'&7676'6=&'/7'7*#67654'&'&'&#"'#&?#7"ȼ363236767'"'5767"'"'&#"#7?7>4.5#&''.#'".'&'47&''&'367.'''2>7'2&/?&+57''#776.'&'.'77#6'.22077'327''7'3''76767/'#7#"#'67&7'&66'.67'&'&6327'52&#"#5#63'647674>7632.73230765654'&6?#&3'&'&547&'676 ]\BA(  ! -5  #!     o      (        )/        (        8: $0 @8 '&  ,  )    / 6 ,       I,, !  +&+       7' F       (      \x<=Z-#  & -                  !*       $ &         <    ) ! $,,   +/   0 H      v % 03    21        C     mp!"76743765&54'&'3#6743627>7>46'4&#*#.'&3!#"#23#5232'0&'&32&2>767&'.'+er)"s  7S0,   t  d  (  k  IH       D-<@') hZ&G!-1#.5"=4547>7'0.#0>162;2='#"'&'&#"2*&543:65654'&"#&72012=432720;&'27215"5&'4327&'&'0.'#"#6272'&'4&'"013'&672=674#"#"'67625654#&54632'>3"=&'&7>3201&'&6#"'>5454'"572725454'"4732#32#*#"'&'&*32%&+"#&76'&'4#"#"#160+"72676742325454'"7032#";276"'67236"54&#&#"7636'05&#&36762#"#"43656'1"54'&767456'.7623>76"+"&3656'4'"43:'32#"#"4;265454'"#&63230#:3>7+"0'&1&541474+'3676&'&"&5653276&/"326&3645.' gi=81  aa  .  0  ()'#Zc4W &(*'+_     !   "  +.e  +g!  "   &$(  YeV  j    U{}~:< # t87 87O&   C  FGC=#'SAGJBJ7z  H1    ! "R  %&  )/('E3n  q!: g  &3: $&e Z&"'[r~'<JX_l%".5&&7'46?67632766''7&5&'46&'.<.523036'74'6'67&672>30?2&&&2#"'75'654%'4>2&16&#"&'6'3&74'&7&'&'&7276&5&633#65#6'&67"'6&261&>&'707&#&#'5063632*#75'35'32'22>=4.#":>5#54.*#3#75'3#372#".>26&"'#75'37'3'&'320637+676=40:>4."&3#.'#75'35'#6'254.*#&63&674'&':  $ '  !;//   *     89e<QI4%  #J*,-*I!  #',')r        31NL! d    l    < M<  N  B&&U '"O U,0 L '*i     #5>6+53%2#!"&546354&+32675#"735#535#57#'#r (n0/e? >5 &! & F hhGj } &&srr!GKO'36767&'&7>672&1&'&67.'.'1'&'6 37'7:  )"AJD!   !& 5=*9   & &K|D!   % 6   x2K^66'6%'#"&/45467'&54>67676>7>.#0#"77''77''77'?( Wboo w  QVRO9   X2$ s0mC=J!q: . >/ .18'8   ,-2$3   $! ,s,- -,~,'}%Wah*=U`W\by76&654&'&#6'76"10'4?#&'2#4'5654'&'654&#"#./>6'&7'7'>15.'675&'#&'56767.#5675".'565'#34>307&'.'&7"2#04/5"267&'4.17676767#'4.56461##&563&'5#6376535"'&'474656'.#672<15&7507&7>7#7.767.7&#76&'&&561"&'2157>7202'3>17&67>7&'&75."&7#76'&7627&'24767?.3'45'3654&5473.#&7'7'"'0#&7''''7''&'.?'>0327'7'067''&/67'7"'76%63''>'2#54.#"#&'56#"7>'&76567"& \9+aWYc.: 3o   I9E|,r6CIo*    8  7   &        +     .        &HL%CA"    ; _ (   !$ $*"  = %[UCB-K+m  AnI3,MsB<27  @  f  D(P G ,  sR      ) 1G9Si 2& G         -      P7  y   Y%  \ *    /B%=A=  + $ *   ;   !^+ XF D;7%+P  '/#b (& |2=; (P@( 7   ) % '73# 7 F!v'a_jk]A 2Nem Ql 0:?'373'7'373'2#"=432+"54"2545<632+"=432+32+'+"5'#+"=4;2'54+327+"5'#+"?4;23'+"=#"=4;2+2+"=4;2+32+'2+"=4354"2'+"5'#+"=4;2'2=4+72+3+"=4354+323'7'3?#"&'&'>7&'.7.5467&76326&'."&'>%16767632.#"6'&'67#"'32767>.'&'&'67&'.'&#"&'67&#&'6703273&'27327672767&'6753'32+"=4;22=433'7'37'7'?2+"=4;21+;2+73'7'3:a N   P    UJ .   `  --=8 "b2-( *#  1A7 "a3+* )(I3 *+.X&%0-XT $ (  (   4:9:$$$$ m  6 ( $<; ; ;*265@?((4$ $ @   ! @ 300/[+HF ?@  N> $2[+GG >A 'Q;  ">@6'T ">@'   +-- _442 2U?d 3#'#<ުlJOM7!!"&'%32#7;; )X =tGg f G  7+"&?>762+"'.>2  }b   ' 6;J*H3G  < Z#+37;CGKOW_573'7#'5#'73?5#'735%53#'%'#57'#'7'57#7'537'#5'53'357'#573"Lv -#78-  88ZL B8 P [Z ZZd- M!LNZZ L/-!mj87, z P!Z^Mllv88 -78- M -7- -N[[.N [[ LG- MdO Z[[- Mj l/ -77 * O [#F767>'&'./&"&'&'.?>>7076($533613373'3#73'3#7333#"&7;VRD#d)3 F>(>M[pu- N;ZIIM R:'(:8^ 75< 7 N55.7"1"&5064.'&>1250616232>1000#0&XVWAG HIgEESVY]w<~eNNf0z?}Z 48?KR^%#'#5##"&'#73&547##53#3>>32353#3733.26&#"5#535#535#737'#3'7#'#3J6@ "V# * @511m<7 32  8w> /&! 
[binary font data omitted — embedded name table: Font Awesome 5 Brands Regular, version 330.242 (Font Awesome version: 5.10.2), Copyright (c) Font Awesome, "The web's most popular icon set and toolkit.", https://fontawesome.com; post table lists the brand icon glyph names (twitter-square, facebook-square, …, cotton-bureau)]
rclone-1.53.3/docs/static/webfonts/fa-brands-400.svg000066400000000000000000025055351375552240400220340ustar00rootroot00000000000000
[SVG font data omitted — header comment: "Created by FontForge 20190801 at Thu Aug 22 14:41:09 2019 By Robert Madole", Copyright (c) Font Awesome]
rclone-1.53.3/docs/static/webfonts/fa-brands-400.ttf000066400000000000000000003744041375552240400220300ustar00rootroot00000000000000
[binary TrueType font data omitted — Font Awesome 5 Brands Regular glyph outlines plus the same name/post metadata as above]
rclone-1.53.3/docs/static/webfonts/fa-brands-400.woff000066400000000000000000002527401375552240400221650ustar00rootroot00000000000000
[binary WOFF font data omitted — compressed Font Awesome 5 Brands Regular web font]
d˨ˢO۹|r޹<5K*09/YY<◸؀[pnOq%a-"%qNp4(?i1V ]?يJ#E|QRUvjG".DDAPC0?"%,""|):\l<\`UdDU<d-a,g"3g7K@N$!L%{Q0 A RcY{98Hkl#Z3(PB;ԾX3/^fYxT\/R)DmQvpLJSYlB!;Dt}jF (|Oї 4|~^8 !W%zo=҉'O>e>qO 2wя _&ni{D?~oH 3ta% {xZkUB-Ԭ)ʄDYTJJ|IJIFͣ$**x[$g󾍓ʊfꞩßZ36KL z >3(1yrD/X"H(*ҁD#Hf:#ob5ٙaer(fF9[-C2=E ,ens8#{Ux6\g:D1VRF\(n;8.aAG@h$ GOjjӊ\|}WBwG/Nz9}BsBo=˺}ծ\-.∈$MZyWUVj9QaL7>g;890Tx_TEݡt'kaa!I~XGnbe3,luz ,-F,!h\),^zqktd'TB3kYYI,N5:yJm&RDebfYk7~ d $`btx+ Gat $R<:"aUBZk$% } suy1ZtD% 03)lfvfɭ>SF&lbfh^]\{y[!4;k]slT}Ub:o(!t.6CdfL {Akɰ9HmU97۞_~+3zÐ{aO̊B NJgŽ˨:CȑA,]o Aªi68?믤}@@Yc&/m}F:{vbzà -K^#@kqI4ߠX-ɠ APP%.7EnJӽ 3g}>k+W9HTQyo -䩁U\'ojW7Ãiqsccs !Y#>B@/Lw=US+5ڪi`QxA邌y_D%RD>K]ύe $A6KHJZI!iRndKSꡛm*ǧk o#\5>$3[g~6\k`G S^fW@NK\ہRE{y,0()e>NYUld]UգqU%+th7l^h>СNu^&wt( 0}yN>}: qђ53bp{6E e#PA4 Lyx(wN«*Dk;qY-/@ ,ꋅB!K>VlD +l]*^aJw5Ib}b[_N /?!g k*g~Mk²)}p8g7l )tnRFÈ9 =F%M"q]]sԟ_7_w/T}>k*(2"fPΏm[OO|5Mwǂz6-Cg2u`[_fL".3vP z/ z-{ _G*@ЏVojߥ6?G0ຎ (@ g戍ڽZr7J W4j4΁UM8[S h^*`n9ܬasRYP߈_:H/ސsgҀڛ 9LrrH GOw}~&MyM_39 U50d9% )rhY.2/[ $aaHE՚Ad* PHWe O)brqΤ:tDEbǦjdhMƾ2V54.%#L$I Q"3!Rm!sk{!`*dqUX)氨8ܷ͂L쏙bvDIXeHx;dR"ی_d:KѭOaz,LC%~bcnS]7g#5觖уV&BwFU+gmUѓvDee[;i p]5C-kTzS|,+XJAs7)wzz$w ZT"T{HP@hj%bH]G ရ?ޭCF^S>gwQZ܁`mU[8,U2QCnY]_m{uڌ/k착L_*tJWZ?qܯ :0 '}F4 U:RkA @O_If03Y~QUNeY SEz?fђ *@ +=T0MfLk*սA ӠuD˾A eCyDׇ9cr( Or,[f O4D=TO ZM)fq5h AQk}={WognB7nj [řc:vwۅ0,/I7N nJtjk&aQ:y g*A~>cG*2+*A̮s_ZC䀢vetK!eQB5kB cD2PEa J.\*\LjR">K/1E2ؓx"?nS@H@('~_Sh l&*ZY?;$+zP??7@[08N) CG[,.ة'?@z"΂P\9K71RU2|ŶsVN)˳zVd'.buU;'^|wBUU֪NDqͺ<ߖt2H1ArjJC ^մ("#nmTMg9F>_ӄjV5Lu ,C^XG-|`1Q$g^46O,U,ef]3-2MD/Y5SNl]éJoVmtK}⇠Zr;zar"vC6u/h8!v؅̽tiiգs5Mz8u æfѥ#f-亐d8{;/Y#j3b[mAN⼖ʲ222;kg9k_AEI G0u)Gsxa@SjӫӪ.M/.^-.?}U&KUU锧wu昪YU-OqwϠ RFIs`.ƂɵqOOOX>Y$WoxQg}W"mRWs]띋vk3k;gH0G FPW8Dhۓ‡Byz*L`L(C) }"HG L_x""c 8p b#;g %GUC  Ƈ{.>z7\>|_?'ɓ_8p^-`I7R/=|A2(x%47]CcL3@#o:3=SS%*R#,uHo ӟ_;l%lʷԞAgaꎝ2^Mq6/ / _9kuWIì}IЃ=y s(k&,2W:$KJI[+({1wxZe qBz5lqTAE4,刁mI$a e0m!Gd' Jt>}Ƞ& ҙe52V@i)"L05&*m?]Ȕwc,KHsX`m`ϖ"u &rU,z_R,+,tuږ~ ) Oul1CFa>T#"3'idaQ?z]\u>8)]XT1I4*CJ"f*(zX~TU$1vMʞWb"Ïg^D,q"+ ρgGv&g?4 RgGxq լA}cCA;L3{mxbiS-->*`SDYQ1^!{ydX6Z 2LמT9_ wzd$7Uo}^6H q&)Hd OfZ]5(8͠cFa8+#'vO x)ǎ)R YM4抿w h@,J8~MK n3s8USԛ{U7d?Mo%(B{f KzOR5.v!nZn0 LiSU)}rڄolo3<'x G=RoZi#5@=U15MR[4/h_{&aʹ^7A w89Yi/ke*Sj -t92y^*RYv__1Ca00.g ŌBUn W/F zhtq_E܂oŽнܟe/mt@@uPͦ%35m1o{::?\=su_+fqq[f#y2J`/ƓTQ{ҐV.Aϛ)VrpZxaᓀHY2R+vG.:5R^1M/ljve< L:dƯ.}5;Xϑ}gybjk)ERS%Iψ<K#aC++<8 `G^sBov:ș\rdm5t(E3!R_ڵ[w'vKۤ$m>,]G7K1YX$\'d}S"mE`;eR UX#<{g7(xlAʿ0\4|[.w`kbe)DX5a'YZ~ Sΐ BE(ٴа(in Zl K ]UWǮ8pSz*TV;˧z[\xvs@5^q \PBeᵠR1[S xQ0F (=t,YZẏsG z94sgX+s zD  ff 58-GpKU`>L71eZꢩW4ߊtѕX̙\ ,WC5*YPh@8X@^VP`4*2q@r2Z5k|6p_wsIꮋVιR?7yT*Iz2 `3ajyՐBa9б¸81caʆe2BHXt[i4oW /@T˺/ ˦qƩȆgn0_|gTD2S3~*i &o\DHNJDZSoGpu)vN\$qn,1۠xmq.aDƑ~(mmĄ,U X'V{gzށ{yܣCxF3MMs]ְ^tuLГ%L:H-'w_w` Ѐϳ;sEGVy/q#p\ O HU5Fxl:W-YZ3.`tBÃ/SɵKU0A8mW Wk:vf6,9FVv8 ߓUWMԬrXMMnW5u鸞Ldu-`V*N)#wc}/ [V<S ^a1H H, X\=w7h3?>u5VSnazPsRL'=yND:͔,T]M, &`4ҁgރM&_ PUzVM`FSwʾuS\ҪPUo~o 5VyVsH5Rb?찮ۯU fV%IK\$Tr' ;b}A7gΜ8(K5x|jM'(|ZxI}/ @3h]6Ư]?a8In0G9|o99D=zS7q-[.WA]JƻI? 
G}|^$?%Ztꝝ"Ioa[QѝpB^!O|s.t^>^ e~n87ht.%PC+= ~s>i4B'%¡8Axr=*KDY3)apbgd]Yۑ %lIT21 pY*"TIUl ge dPI8{B}4J(B%9FEW$:~rhO&xR QU6I:4E$"`w1S8&[ (b# Ib "S"|OH"OVV>3-<>ǖx!gC a~#qF CD=N*;dϯO؞>j|d0bh-wNpZ;lMbْ$KN#C?4S,8\wF.]c<TXT5USwd٬_FoDW zz*[,QȾ\Na`[ħ2XfhѣR?Jt_3MϵYUPU%2"kLe#kz?]-R~r2> rBGHw S)[<]D7gчJ g`PYP]6%ic-~ dt8!(Qsj 6y^=kI錷`{+0 ݞ?tH'͏T^G$S48B(O_3+q9)drkܯⲂAeZXKBp^5h⢽]A>Z\"p e40 $ݺfcBEn勚L \ ;Qߞ]S@k5&TF73VϯrKB>D:.GҞǢ'\5TֺDPA$<і6@7$ْ~JbXC mlK#u+XQBYSaQbP}Y($RT?"%QDԩlfzbFZ?laayC ބ2Bd|WݧGw;ʘ H"8$3xY=; ['S7MsSMS(PcYggۅ |w?B5T* $9)j  S$CbSi6q FrydFϷj`"+ .9ŢV5l|U ,FDB / 8nYP&&~߲j^hq/a=c¯ FrX lyL0 +<pA 7\5?%tq0ǃӓ(yt*Zfog^mu3|QWmCoJ 5SqQ(l$z$J(uEl7皶#bx4,>GD6xYDzĭ=t͋UŰ*fkn?|`AE9L2= 'VSe\$.~)sV@Ko'KGw u.X7EUQxr2y3E B/uIyUBWm%:gz>o:G8dnsW;]A[:ԍO3ovΉs 衈K<6`_;נ oʔU lzh8ͲQ:O (7SIF4Th:ىQ %b}. ry\)R((bH T4]=9LLc\D aȆ( l+[>=$GT .4}U+A9W)<9'iќr|}I( ž&Xx(wcV5U+E4VF[UTӧ=X8gHQ&w‘)G[m};oP({飛gwd`v#;wy z p X::=Q.u`ULtPf4x6ү+K;0&@bZX$%IiDK+/']Y7 ?:Xj:Xl aIt?WUQ#b ˞'{Y 1V+f*Q,[s6fv (ſW{bq e7-j)LBטw }ص"yCSE,Z'^,\MR`Y*9QՉOykn4~GpḍbKW.Gڹ^}wM"|\xAe7F#-IǕcʿ#!hmo6a;/ҍzCMg,JR"BǙ"H֥ ZIqN+c0-ZqYAF1e5G5'}>dc 9 HsLv (!R>:{hxc3hG>Z{Ӹ3ag=9x ݬtcꭽg.@!4qJ~s8FW߈#h^t=!rQ'66tr-:91<[$Iet1S~RNELS(}ro}|OT? X]4nxe=:|Na+^y(YPB{!:tdڽeo_{!r;+n~=LzS pB"77:yA5bkiz2hEs0hNS,IЕnk5l} &7cxX?P3|m%cp0LBW%g(}LĻѩ=_pwI=FJ=s_9?˝xT'Jw?_vn;G.[?T˧}z$4i{%^TC# mN4fg6s;&'~T%Չ V4g=6;Z:Y]ѡABaojb3vvb"kqo}ef3]WE;Oc'jjX%-Ż ښ?\\z{qBVTΘW/q -3Eae\8Hv -d|Wݘ۫X5`a/_Z~#g7`w*lə9Xitد5^6"CxBx>ᗅ >*<|$|ExY?Q1P2FOFo1$8CF잏m"ȋ 4~ %UNՉA5uD<-J9㣛IJ(K%XzT~Pe|OY,fߧkj;mյڗ OdjAVW䢥5tam}tM/[ohv|]>U=T?֚IJP WM@ >[ߩվYD5/}K%ٲ\tlf.Zj߮Whnn凫թjaMh!Si y.tX|C_ۙ^OZ`-ݢӖy>,8nkZ D Wu(L1{|z['Aqr0W6~Vwh2=]k5:ЁFNJ@6vW͝^#@YzN׃I$色]^ jӳӵ81QJ3+ꊴC;4zԧ-S[D-v7gwxM(̰4>S/J.ekr0Rh0f≋K5ѧ -L[2gzz^ҝKVi$Wth{Gkɞ;nN5cFOI?k82D܍698J'N -gQ.bSWa$&g"2QV_؄({VN>Ϊ / l9[ovAsuLJTfF<lBz KDZiZ( tne!lVuTEdؔRLCE#mЀ/0"j#֌GW͗floM" t-?RgϘOnIdp#P#2/ G'I}c0$ҁg(RKV ʗ: ӕRhܴ(Ͻ!Q4- d|f15tC5X"EB'E-[hajB0f?r4g˚,>zh1O[+]IlPR PFL&mʀJA9$2e|UmI &%飴s`sE@Der A,e0vzId tRR(&$(fIY۬6^W[t dT$d?L 2jB1Dk,B5@M kL*c1GD QLQ| _ >JA;uI%'j&Zs`#hG*SN W2~ P5-Q6.W[7 $ڔV-9"%-BTff*O#7@qYD*vhSEj]Ю2ey?)U`/]hg *MQ4B&B}Y>[!krB'a"*|}hٮ!D|61$/@ua(@ `U~EHT8k#\P'QS2%h;tE˔h ͱ#4j'j+OI_-:-V s|ɡCvI{+-]X.Ws\z=ll)p!%"MOsĀׄ˄+*{`QP::HTtiq8"_egxzϽC/* (e50i(㙜ScjY7}9eEi<̨L|mpuW>hs43jXF}Pi j&$ ~/j$%n)F&jm82|KDd :n+0?){8ˮ^}g;Cu[]U=wWtzJ:9$HBg@@0*S$A}(PS?H>zWpiK(PR{g֓zٍu]ϘcZ栛)r-"3pufw!w{za* e% Sm"*ubWSS4|f3;vN<<,R\HrD?ZZrsVl ALsA-#[ {~~n֮lrH-?P:h?]LjjĽO!~ΈL?G$ WH  ΨEqhƘhl#~e1o%;XoTcT9*˅5=56߱[lfݢ{_BFMazgʖ9"[R} hds piyǑ/؉ϑq|gV9}F+!?\N2G;wJtef9`LF0fl 7kWJ%rJt{WQTQ/R1ȼYH6چh|BޑKxe{>_"jŴ5 ]nO?LW'mМ*<|LϞ49v_s'NMk\nyJNcs/z9_B,{y3BI3^inkl=a3\eͮg<;>/xe&nʅ }MԢZ%I, R o4Ȅy'?kA Qi7k4mhN3.WoI6%D "P8w2F&@keF:}.0-g5!eR1pkۃSX,ceo`["1\8>s=ܢ3C  ^EFfDbڂr+O^K>#2IPeIԻ_?0նzN;ðE>1$~18>k[;G;{NBV|vWt誗Hә?6+>5̩ͤoˮ5Nڸo[s [p7FHE-έ5Q ʬVXBDLé0Vhi a(?k˹ '`ⲮFoRdcTU^ϒ_Yq tB~F!|$,NP-V-dRT&M@y8EaQpğ0pNDYtJZ𑋨uCL6#̲QH櫖{vln{/bˏ5~R类Ķ#G=JD"(Ij(smc8Y&jhO杭fV5AT>W vLC$EMqC NyfEER|gȁ'92IRjWIh/,nYEm=;ь8$4nw}ye l `Jik{KsiQŦlug F\r|ڞbQsuq%j7MTR4]j*T$іM jXĪo3P u˅-FUگWsI_@d5tCDb%ۈ9> CjB%5;I<u>FL7֜∻W!?Y煓Wr`JP7ҞSDÂ]94~mv95"clChE}UVWqW>ws'_\ b&OoKu9D2߂KBTHp]ㄈbx',jl˃1>1#}+:a9D@8NNT\FOTVR&lw˄f3"F۠4id"h;6Z¯͂k Sye'@rnTSG>pҌa/!k0Iz(]2~، SEwti9\WX~AȨ\,*Qz#vuËӎbk,vtO3M _,^9iljmv Z99sv˙^(+q-6'f݊!hƸDg&[-)Aiw j n (O 6ֿH_SA_U~C%}oxō gfvp\q: ?a&|f8\D4a(ɘ;]p=n=E>-Ӈ_3T:=Ǯtÿes3׿cn{L[K~l?{e'xI#[~#Fnj<߮|WM#_rGW_|cÖ$KL : sp&^G_1CVNE^آO累UrR}nTg$T́m!_coKlE.=ȥ (nVD@Ͳ(⪔|ǽ>N1z3gQ֊t/"q+Ɲ&c>n,@Ot7j9KY]"q$2JSFfN=. 
ͬ)LM2M&!80rX+QV%CU:M @\p&.Nm*Ջ)Yq t+cveY0L|9UmY'uR1=KvE]8t{V (?{z?X9?5JBFRvaĻ,u]v$,r~h=1zTӮiJ\,*FVsL7 9d#TZ̵[9eciɊ1Sɥɇ^FzHOD~OAؠ_%_K0- :m)65j idN6v~J_yFnaB%&I󈽥VY=|&K U z{d{ҭU9~,F}jzҞ!{=V{+ciE\a MI,r= ޼/sa: _vKb[SEeA f+,&Q8@G(+TrS[*ۧ1{d;?G l:-/Gזjzo|BKŒqsC"Z9dBjşX0߳n$\\5$volm7|d3F“4cqab1VIS!`+X.M?[K[1Ԩn{6iʍe2򖝝td$pn&W.9 ˝͆3҅YJ݆"M\hlGqt5̆,nȰ<й4Mdvb:+ܥHD D$ײvd hjOA0d2֜ )eښػĉU⮤S d,Q6V9d:3`] 3'u8B(:1AG~b/a/˽.O兩D:1I+A\!6 èmoȰD`XD}@"*zaG#YRrHĘfN+3"pՀ⬸{쉭l=F**DTUd u~3p]n]3y[, $./&ѐG@#%AfԨG)dr&ɥJAqv|R*ݐ|n_{["k=nH=b֫hY*RƽiSt?(yq_EMj޷aFEqKoV&A<ި;l"hy<ӥ@ Ӓ; CZ`fbSw`+?h,tqlZx<Μܟ_ʂ4zdzڙpBer},S\ڕr۪l.Qejz Yk ȘSւ6ۮv g@8EˆNx'RXиǼ (f2эJ+WS4+{LqsYq`" ;6IReʞ)9Ak<:*ɂYS0Έͮ6-)a.#W5A,قD`"pzGRw IFxhEzyhmllpST1$E) cTS9kکbZoZ_ X:/wu~#o!/' _.L' Vw6'4X4sv-si :OԓޛwD  η7(f+P$Gwr1yd<^q4NH2)YS׃Q2>wO5q@Јs<"),mjYUʪꨦ-U]ꥡ%^=vX񌄙-Mq3 4kȚig?}֬lCI& ,FG$݅/k`q_:Ŋ*ћNa0VYr5Tו,Xb"9UbmE3shaŊ^Zc[u7f*mHj?R˒̠y`{BϾsd2cNVOR Lc͙%K `؟x,zHmT&N>4fŠR%ZcҨk_)eHX\l /תAp\2awCy ɛb1X~9LHT0otgYp̨,g/|MP\7z@Y_uX-Űg&[::::=l<>!4Ukva&*'큙cjr$*%D8|50p,'>Kv񠅫G+D9ty"Q Fv&GRP/!$9"SjiB(Efj^\5wMoҖ&W$QlLhYSz_=TUCJst;!;A+2&J} ORfh-$)R'Iw#q\鏷-3X&v*c~bjn2O@b=Fde%.FB+HfxsDr&KM/C/[)K^od M]1sӕ-j/;7mꞻTg&mj>x𡃒s/Gt<ʃ}XM5꫉t IpIÀ ٍЖHΨYE74J$A''WvG$PBjeGwiAM $gTJ`Hv&t!ŞC)۱mX0h!~V-UrEU˧a7<5҆)IJspw.NoBdlD"VRJb!#n3H0fg,&4J<Ġtb[h%ƞvA@hr$eӎʢ,]^p)i"uشMcrՕD&iq*ci",uEVK3N\[Ww[~wo˩֛PC7yx,Ƌ0qBfRf)32VU"&4Φ\fC?HAX&[ސܢۢ1X @Tq^$1[cR] X d@ >AGe"tS'+.LS7ɀ8%C5tYb=t%jjƢJK(b@ʌ9LW'60r@0*Lښ?I⩪XNOS?ߨq' xc*@%RSJu,wIMO5E^+\9bMnrΆ)}x ٦1ၗ&1ra`JԘs`z01i #4&$m#HYlĥ"Qh" +>wIUW?nMXuU:htЋ E*4ec XQw݃18ձEcd08^dCP\dU4p4@"f8zIIG&2w3/R]wz-R=_e}UdS6_AKL!$ Ε#F)[k/{xm齙tD>}*ʬV4N߹\; `ݫ}\{k65"IiT:6e45T7dDH_p]0rRtLeyiV%nŎ \F FV3>KbLk̑ju-sOmҳj&@zA1c7 t16 3zkaɩ.J |SLj$]Z 6òiydqF(:%ffm-+B:`wɘ4 > 3CfD{.]\Q/0<r.jJAWxfgԩIngͭ3T8yy]7KǷ_,"6 yN8 R ݴވV b0εw|ڃO| _ݎՋYq'MJ8I\Ƒko%U.(dROP|,0WnڀvL}4Tj(U,(Blb H qE"a"^ٰ9~9—`E̤@LR-T(I5>1 7 kԿz!V "D\/*E^g g!%q%鋺dr*=|R6-h:` Q!pJ+'w`5tXkL%Z:A4"KlZUݬ"GTR0bПJ0řP a, hM{{1_Q+԰2`ɃYTifU0B*B4t fJT[cz:VӘ`Ӂa\aj>G8]G| $Qv :KK"Uti- .#tkq-&.Aa_4G{cY. 
@H;$W$9f[Lk)|`Wl67_YoWp\B:נi|Ms$z4p^W'ɯ@+/[h|rO+alH:+"I/$J)f&% DtZuJB6gesM[TZbUصuXL84={|_AV[˶ʱ̜i-~R638UsDlyT+0>EٹRdXsbgx BsHL7Qpt )RmnL#%\DIYiogl+3B_}f~D$I}A7??5YI /} ܐm/4j_T!'qؤZ3LBS"OI ɡLq .N6b|ΓL%866S*5%ߜi_| T A_ܓ7Gp+-?@t9i}H10%O7ܷk-aHR (EKv"#|D@*!'oXY_~ZZŕIx`yc_?pGk V~~{r SXA4J,STkZA"O ⻝SKvVK~ g(c p5@(an[[m} E?adc]e'l0f%h TPEp#p=Ř^IT tYDu;:j-I$\:Q-3jr3;'DL3Ka 7AݻgpbDɸ/[dPjZDAƂDtQ@-hMGKjs ;u,u&ucRN}>{ l:E*Z6ZXQ꠬^!ɚk\7x+!eö<_AwGjE G@}%'-B^XxG6aP5xWUc.̼-_ a=[QPL$FVC uh x e5 im8JNuf!혹%b؂,k"G%-dEVV^RٌKRArh5c:i&s{bL )a]+iO'h@+=DMRRRΥ'(ID\NS@wy)Ol,S0MgOGHDmT9q>[n+aGz 0=8KgV6ZLIH-TXXCRiZ4(@Xh9{ V!<ͣ$5Ac:OA~Qsuѥ:$)Ut-I:bq-$wfpv!qUtY7NGMv;:;GeiWJǬ۟M}UTwfzyD, DfZlq^VPM'阘{6O.u& YcpuD1A4~iJg"I ^4lWҊ[e!C\/aʯ)9V`R׺KJU8RKTCԈh+Ov԰fUAaZȽ$plȶjamtI$WRgt>ɜAo@qG|XɔkWi21-Jc/hibʩ&d6 VwSȰD3ac %XAFޏ$VkKkD؁"zU{(n[BHgBSQƄƮ;BCB8c3mQMe D.jzR@AS>c/W=5}q#s*b?N|VV f(ea~-sBq\In)T:i;{}GrCTG]5eMNm)ka2weDߵuݒ5ӝ֨x6>6j\SIMT1F .ג X`zhqN=d/spmkm\_)+|&;`/ tnw ҩnݿnO<`,]a"~Z`;xam ~)hf6Wo?m]^>'kICNXƊF”V͇~t-x!)-UFF T-D+l1LBFGεwwV͏RxVjfǸ{ NnM=z, Ȓ^"`=XT(Y9>P {)>$?lA"]W#`wꙚ7T7(F,Kyo خ`IcEAE&b&qT5$* ~}^IԷXEwy!@RE& k=ʹnqvLNd0k,7M]a%VUnWr*$L*|ݰ5Xrd-̊Rf}nE[lwb̓Z%jyRG8v ݀_<ط\+2_"UW/;weBND[kgO>8S,H9 tv$.-]@,$"Kͥ^1;Ȼi8pvY;At/+(˭ 9Tg@26k`[] $ƨ* S3X%1Sv=5C%a3` ̼DR)s9+݆PP7%XM5,*R 7]\` +W8pUL I%3F?kDBGV)+,7S|?y ө-KI@`lF= 0^i VӈaA:߾O:C)kY:+tQݩkD>s^8kb Prnec\ږ80E7fxya>:ex b4#@GBn{uoOT$MeQT 9H]5qƝ$M0E{֣MT@,cx<Ԯ;zVWL$3RI; S9bLŻ/,R Rrn8w*ůjv9}.{s5z|fn 7c˨i!+"wl)(o0½ x<p x>)Soԋ{}-}*`05S3b1k.1QЦ5(5C3lJ/ w FM ڻ^;WT<*cpت?WcElCk6ԥcrz*&Wd~RZ>gvБSdp%s@Γ̇N 0FQ,ǚ;mc; kLnOQ6E;6%tPmW;{S9%u8E 7[wqژ8dFC*4ynb!VKPUf7oU[Ns#W?pŴŹQ+1 _ǿk$ݻq7&W1'o¿Ȁ~E#Q||?q!_K7'cv ɜE>‹&HŁ6jNG{nq?B;mժ5Ua3Y] ><ROҜ,ɠA2;x<ǣu{,%p$9iyn vv.}/\ul۸ aA_$P C_m+jKdz2KJ[tjva*jqa}ekRλ3Z]|ҧ6SdidٹlAt^9{IV>ݷ= //tR*i6y/oUd-"L֣z~bd+ͻK#YFs -9CKzZf_\ yz&}VY9YTi)hSڻlO1j#{?XQc-]Y,Ȯ_#YJ#)Qz=_#˶y_ZRq oqr\{=cSFd$_#dlJ>wm߾]vV9XI?Zv_ +dZοm X$t$Ej dW,SZ9h&6;S#j6'_ ۔Eq[5 )0ŔzZ}5,NgA"l6)zy;[e\7,MˍiBUBz2\\ܢɢ5 /_$_';R12ԅ_1q2" Ny1:G$+"d$V\.R2j4gI1sryG+åg^Fc~ʰb,0颟ɚf `07OŞ926Zlr-I tLY-$ˣ<${KA^~xlpp7"r0KdgYwaEϕ<"#%joSM-zyr>[Cq")JkWlvDq^(U}DAg6< h9xѰPǰD B|.L[Bar2[HZR#Yy}h yvy<= DD q8s/:p~/f<(p[(҈dc!! Kd(5 > :<~̅i 4jp{UMUM*3s)tJͰmT_Ո&zAa񡽅&?|r.!JRVv ;ZtQ #Jt }=,WC 3mb*3jX 6yRŇ=v#tc0 Pu)k}=ՅQjS5Ou_0L&uKTy#x毂YXGڔ\EcIF̳>T EdQh9!ß[p- [M9O/#-@"6"Nph@NL n4P[:yaoqpbE n2 w'iT =WL1r ۫) 0 V `$z2A1Mh"ÇDդ3KT5H&"tXl\6Uپt=5RcHg3pz-LVp3152-9ϊTh"dMwfe_4ڍz[IK9/h'7LvŅs \u_vtqҴmnI _w}8 nDkg$%41NCi+Nz&LV6RqpLmeY rU'4)xҲ&˶VF[^S~DrDlm`EpV%Q= X,>6ʓw2@qR;xOb*IyML_2*~XK҅çO>r7>ƟTcɋ^T7k sd}`b SP*YoY9~yy5yEED,c*g!yRB8I" 7siaI^U,^ )0 /8]zW.BT=f= zM>:|E6% &UY7.:Q2%W߬<<|QCOVE!Y+r$ٽ][s&:Ƀt'7o~ӛJIk4b̈ih&bՄxGߔYa[σUm|;>yr9y7p˨Ӟ"H. 
Obk\g'FNNVӓi=Lly&![eޮGd<\[d ,'LT IyFB<cyOQ%ErTDJ(SI?'=^xnBg /3 5tͲ^%.U4:0(R(-.|-Ynz`P !*tm%B::w>ϻnЋ']#\~eEΛ&r"$OYI8*@|^7Qkfbo}iRݵgd V`RsF`,"U37vUӡ̹0ԺDIΨs~ y[/O\㱨;٘1E.E 58/^r6NcN9_kP6DUiRٕf74̖oU*|Zm}7ɾ }5x{Iҧwk*\+8 *uK&wa:WޠTGƥXaU1?vhG0] pKpzl/* Cک,*ǔ +f_,qBF=k!ݝq G.ko뺹^=x hA|,_|1@u`ϙ:<y ,礳/”UÓ RUDLUt1aL퍏q?$Ր 3[GѨ4LB,ݣlwlŹﹼR-oV/G]Ww[k7-O:-g@{?um?%^Ǐy% MKstlO7r ʪhx׿W;ɯ!<wA]ٯE2f9.\; ONϰDبYrYZ)`e 4fsJ񩼹X//9V%x 0&cgPwpkl/qG/NQ%R?FZ0^{ z䆻}fku?8ѝ#zİot; qwS7~|$ױ-x޻NqWĬp7/2볷X&'9ҾęQJJ5y3CvO4go5G^_,|PW|-\`UQJ+N@,ey+nW=fZdkP'&.{xq>q +U+{X̤O P2>~h' WL\CD<3bԨZ鋑yyuGDZ)-6Ms([`)>o9)hb=qË7 b{n1,ɫ8cb:5]:T*gc_UIMwܯprݮXWSN үf֝_5hjT` }}15Jy!W6&i(L !)3IGu4~h4Hc3O>y,/4,ӍӍ37px.(#r%չV3䌨!ΔT-E>S' 䀸T8]}rnqW>4"*.sK;^3"'y 񂣙Hd WW^co{j-šŝR2XL<%~/QRN)*@*uPP~Q\!@Xʪtʺ6jګY?;lyMZw1\>NydQlsJĵr ոD&բdsY8 Kv5~o1u_Gϳua\nqB%$gٜ  m:ۮg4K)%Ng_/,..ZZkK2s1JzY>&a6q Q(mz9*1^/7`Ю˳W7WGq$ɁN<0mFa1Xo??l-SwVFKbwaZbVk/ʶULӨgzUQWFF\ Y$NHNJ{v|gtG`XI;!ypu{#V]JX:\7sS0.30 ]?CRYyҴtMI=@=8,gв Ytp 6p&׬kvtK֝N"57%N3W_~QlL}Wm.u]>oba6{1$e-k,@ryyUE"9*4Ed׌8uDM\LR@"p)jl4`a['^uN|"^(M8qmUOj8QDpÝLF~O5!Vb./(W@k,8;gII/:KYX4Bf̎}Ǭ.Ha|JN.jNVj`Ѩfl) t"=Ff H1V@ZI jH=]LMhGRf+[Rd:BNЌ(-* CN k&5d$`e2$]8J0¥^әqMnPAʮ=2 af~;7I֩E{nz0^w#lPHKUv{QL?YטlJi]黌m. r`ɓ?<"QUsrsǺQg"tPFp4^o@;cMuToOϪ Q-ݠ]6Oek݄,h.IU}OR%omw{93I yqRI$mo 2ݡk.ʤICWpbhNxq3;Y| a=|;"fp*1_)S]icm6փjq8*V)H C'ܰ*yV~Ǩ |_9:ٳ{ |FnDq\fj\YӍ*ˆOkl5ǯ9:kW熤B2Rʷhp-s+F\uLFxDU5`TwjXDIA5YiuQq }SG2ZL:}G\u0HC$60-̦ *xxhTVv {gwZҞ>%!zEᘦYᨛJ޼;KʗӃ#GQD2A^rJ4Mi6zEuMqLym§g\<SYt,ĝNWVsͶczmG5hY-6Z c%4suەOM4Kܿ[H}uŶm?_v4D!z`oi8r̚SE]u6/s5r >hmerP9:B=/뭫jsut kD* b|tR'{v-m{OZ*ma*ԵZZGo}ϼnũRR.~<3Kqcjk6qC)Khe4*܄^.hyP;ɫu_v{S#G}AjW'T09RyN,? b ) >R:>f|+8ν㾱_Ld7OyHQr;_lTPAI1))*uhɥ6NAw u`Z=b_vrxMnru]Ē39`cE +%g]IEh\=@y{Ջ,_̝{8` lѿ^S/>|7_4wƜsڿShJ*t-kn鱛|ֽ 7|h4[c/?"bqTe>]7^$YÑdSmsΗk_~uLϹeqsfS> { psf2uC³M c W&Ose;1j{&4`SF]IhFqJbqO&Hl0<7f/6͔/{ԈP&/dkBDB1tyiu#/6#%kuDLw"'&!ʺ!b!"^(gu7uzp?C 8# U?Ftq.v k|Ų.>8/Osq6s~|8_z9q>B}JCYV?&r!2ȆٸfKގL0o/͖n%dk>DR{HMx)l# 2M/쿴9kE7 [߭twN Íٸ>Y?cNsMe](٭ؓ:P",7Mr\x)֝ΓNN W cg(XS*Rq"eTH\uGR"E$xI $\EZoBe!rւD#Q4 c:'X\*T1RUDݩ9Wg:ոg$rאTU*a(e;4.]$'(*xeWl(~w`L brK@hR $֧kCFh4|ż@iͼFyXc5Bb%i3z:'Lyvё#^FLgai֪?))6 - ~58 Z%yVHxBPy)x:gQգ4j*c?Mf[x$@Kt Iz =9Y84_Ph(˲PCr!7ꨁ/D9ܨܭ<^y!Ԑ9{˒W#8Kg6TQ!)$ZI;=ds2!C"q,pcVIޒSŒO5'@q lqV=H!i3ߞӏ?<=~rӏıo#ax{"l߷=,˞g{ڻ47eQ"=}(qYE=h L_-xP#uirõLW7@H6-WGnQ/t_ -m쥮w cN"^Ii(i A10d m˦[7z!n7[E-^{{Cj#9c62euy 'ei 7Gm^KnD;%;ɓ;W[WWZSqI z,piȧ}խW3g6#G~z ,|ao052yʥuʝ p4-dV6?cV;p@^1!\ZMrx>m71~v?pva7OC:nX_Rmrn>9 zi4ޥB iۖe _մ6uv;hm~|4;}X/۷yO6yNcPHOӗPw<ą02*z+|aJJXXEa="\[j Dcۜ- t2h@!:9Va Օ5>qBm")ymNT0B ,^ 4;{ꙩkr@n9 Aaq4-YGMF[Sb~+*oTޡOub TldbomM~c.|cnHœ̞Vfr\e{->۩ݼ(A</.e%EwFJX F$40q F8H4b2̜n:_ũRij4(}ges3:h{1cKC۝wr {xOp6|m7~[.=ЉB*TfN_ZP)'11b`R !>-NQhbjڝӶm$'#{w0[.w;{0Y6&wm\Q18t[H.ܰ%Q #Y aFܰ nf!n;[~bhB_Z201704ȍo:چ;w5Mq4iXvjE'}r啀@(Dr > WU"3Z`XtbfMK9Zr̎^AR\z1Ҋlea.pmV+;WwLGURSeYޝS͊?R7 uT 6*"CY/nh\(YEԞ7sS`q3]=P߬q8iڏ:B˘IdG=< \\|EYq- t ꑡ_{xfrꘘ)dBD“Rj%Dxț&Zj KQV k*x.0]m΢JWES~~ȻDZATHAkn'L70 YFlSQs\aWlGDZ[R)Y|&H\?ja X\$O֨MB/14hLNz]ꘒkH+7ЇY*W*ju<\c>qAU@co_{iXr'2UY)h0 d i@.$ꪕȆC?pD䖿%74&e0`g;ҋ4M<"lh .%̐#;Bt${j> ʖr BIJ#!(ňǕ7J9-'z8[ 9=Gb稚%+*%4X̒ Lw_Zϖ|h#%mr79^2gQktG/p5d:AL<_5zh^'Ȅ`%zu679F㹶?fA٭< =ñ\q׈`CJQ>|iUKɿg 嶉,a<푧4HŽb]̘Ah[]#V[I$DW+f󆅑ڋn%δNt٭;j4x f;a*K.FJY/?"-\hHVI_.^S^ܛpW9WC;@cʕurqTm hyiW;a?.-GjepGra;dH;?Nr`}hDiyu] bVAS5I2ٙNm]21=OA.ðh~)0Ja0fk[ZLϑ&R"G?){k{3\y^Y3f]JyJ {K+{s0TO5:t;ˇ[JvplLUa&M]~҈6^lK՘b}քizEB+"1cho5fID,c/tbV:uuڎ5xZنFJ=7o8rQ|ҽ+wuƲ:~YNtIB=C@Z_VMy)2 ),!0!ף- \pO|"(GleHv^CA#N ɲYԇB7QZrH&uU,*BOЧyU8rjЩNj8M3Iyح2RBKZUaߣ3M8'H>0 HؤWL!4,vJ `YmU&~0* [Ƭx(]SIՓJdW:epI9B) yM~+SLd^GT1(l 8j'vjpqq[kߐi0#Lܢ嘪k7~RuVhE,K]32#5-b:AU=,F',59IJbZWI` 
a4EV\xtJpM脕uW͗]v3w}_.>\yeD,+%(Pv)-Xſ"$ÔcuӼ%Ze_gdgI/9}7ǵP7HQҭ,(xy~k_ƹ?G] Gpނ9gk_ G^Aה"}\;1|K9Fj`)=+|.`$>a~d*K\9dn<_Qke$frlLse'o}g_u%p-WT5`EYϨE!=*M3.22YDTK:j,-7c7?f;/4s蠭=)kW 3&:di+upMUޣjc @sE٭_|e^ _TSN*EvGO*I-9 LŘ)3uIrr4QÜ2]z(45Sbf|g\q6衘[WulK- ϲ/0!?Rktnnbar9񞦮ڈPS3iL58l=q#CH.gM\7yZ We5Q3X! IAz߭Gldti5!2tL5Mў~'^iՂ@L8AX_|2$R!$!teRe kHsT`.0! -vo*pGX܃ۘ#$I!1{y%v,> ^z\f Fp ߈=ƭŭRDIi˜J MQ5B}ӝqM;A;ŝ䑎|w ~|n_?u$B@R\3hy;R-k'dnW)lŸHT0fCZl*E9Mkk0D\q\Rpe?=ȁf9V~҇S_chyac-i2vsvFxO] 0@@Z+Ma*CϚNTDx8|vWqX5gߖ8g+r0%TEXj.dq8d] Fl,,>Im[-<=ҙI-k:I%T;S1QFZZ]H)Yz 'X*+ƦeTJUgX Pᚺj0?X9_I"\֋-sBBӋ0C.ڷn?ZBuDR 7÷K5*Uu L d]@d.R/t)o_L:-7 So5cZYM[8kJgA,BHt]iTEdYb-0^Z,d#vJ!40Œ8rQW;.]4Sq0 ZU(,C8*V.U}f .5cBpUyf /8?BImz≈H"f1?+6j+ZDEg>?~rf 9sDcߖvgȇȓul` =COt/7>9Ë{JM7xdb[[yrl$CfEkR> F['c"r$lr*9c]*bO%XLb4k\H#̳z=zɳGa +},?aQD?;(X__z׭ssAf8ṵP)ih]QDsi&kiW(p,U$kӺ*i`w92U͂qaOȾ}9'0X&!wBQNO?3kpÙ3g|Aet Mu4 - 1X(J 8f&f-  Z4L89W蚗MN(ٳKK?S0~IVzYoІ[)Jgd*m6(\ㆈX{I6Ӿ4&.LF>,gІrm9Ŧ5L7DM|ڒbWy]CZH:>dC g}\M%.UOsy N(J]FgEXղ85*,m\ SCZEG@$F=LS6m]hcQĞ*ǩ*rbE?hX@g"1xB 9=ifB!2"H:!â&H踄:wL .f {jc:6 |-S5$X=IM6oM$=4X7Tb)nڶfhpǁԆPM .Lzv DjBQY!a8 qP A'K9d{d g!<6^n*'iv %<@DĹ`<dT}8B[w|fVOg"Yxå 1Wvii2d#)RWɃnwj5ױT5Iމ\e}n9(-Km&ab+dnTe;[:27@8=s[J689Pk,+ zrҲ&+ 0 Ωcl$WRc);\*+/x_MCBbxfR1Us9.yhB;PBO}p|?}b}̑ySN(+(/F7.+fIW%bR 1}I_HX0S&ueep1!旧ضj +.=bg{٤Bd\XMQ^kw ӽ8.QPW'd0Ly!~|:5;L!KBz0$5cѴR{~>r=?ә/Zg|;r=Rgɋڷ`uQ.?\ܵظ5+.5/:pxt-L4r^= ^9rUu>ClFa$Hf 2:~T?EXy}h}+'T!xMv髞\\V;:`1u ]ܧ|TQ.$Y+EM i7^VTۉzI3n{_FCr}S~e9Z× Hm=9$wC׈TO "j⹟{mg3.'Η dbYwp{xˮLp>9ܓnr RIU*U)g D Q& m7imm1ƀ4ƞqO5vi=W>7Nz{9 [ߪw25 xa^.%uN%{:J8]aiJ2 _í A{͔FTrDrB! ҉ԥgyR:P J+e!zT 59n jYA|uq* ц"MawϬQjK̠AUQ] !4&ax.60tKA%4쇑-$SzP3РqTn Ifq L8 Q<_ }/a)_L R42vۭ` `R?QYſ s}G ZHJq‘?Y5:1asyI-wo~k|yͣ9EL^mF:|pcu'݆/~qn r'O< 1r htQ6*xKק8K/&c36LDz`V<<\ /EXx L.͎WIe[xZ bVCuRY(qS\]a}6޻ .&,Ig R'a4g4ΐG׌&cn)?jUۋ QF+;e,ʌ_*33 3̉od|YQ4(}#LX[OQw`z3L1G2vn oel%X>΅B zCp ;OljfV6MHWnmMNPup,>f`.<$[]ٝWEE#2/?nz/t~`:-?C}\%n;tCzm>]7N!>/}%?x+y'EպsVA-CX<ʟH9?I>ER\dYapإOjNd'Uj?: PQGjЉK:Z;BHEL:5!*db]͋0p2zN?1$I;{4P4g'{$6fiƧXKZq=c)Z;1ܶ}cX)-.pٷ4ǮwՑJ(}ϙa4cط^3a0ZP~{^r(2&x$[w<_zQ^09$Kn&Ґ=4l  NHL{34I 6qi%nΟ)va"+;?|n9;Vk˯O=γכ3g^b0|\^)oܹ$H){GWT7ml.o pba#op=˺&RT 2;IEM.sΡtUֶaö7g;jdf47'n<}N*%|œK)>V\z:׷VZ0H9p\;g,c^3p=+la l:\d2{iu<}`Q{qv[ ꡹tN|K^ۗl_uJ8X5o|"h6ÙM=_VRi~kk}/o_\yS\$q6r"DbCa#;'>3aض[{vTa੧e^Qέh-=u8KSߚ$V I&d8֍d=V+;'&/z>7ݺ?#`jZ;P"GgpתRaVn3X~d Q$+ ȁ̞UՙUBilXk}Q\_Q5BQV9oefen?,?T<qDOuF0Zo>]% ϔ (H,Yy=: %{c2x 9H@S/ ewALMH#dsIO0{,kǠo za\5I>.@KM;uCk?0wƴ:eMgAC ӷ DY-,j$ѴhXL]P;.s~i1"=ˏj?Ltg,e޲y\N}i^qʲ`Yw-ĺj`wėܷL aqwi"ˮb(N[dt 9<%| >$SL7ۓs;wG=kuq`  qǏ=`ӹ;S_z&iGMŎNgUэ0%:-mvUxu;Yyqo-;sT' \ ٝd،i7+N`dxP[A(|5u-Ƴ`أT0ږ(zjmE 5Km|kvZm%Uv{?_mV 3 q=YU/5ʆX>U16d=5"#$-WKv'|{fJWU۸+[=7U(L$wH3IBc69  @O랇ix1k|G1=9Lӳ^=i`gڥXtF\#s:ǹan]a?cn-[NOcNÀ)taGΉ9?Y\r?0XJL亄 5υS zg9CLуSDw˸8Mҝ~ڳXK=X]1uT1uh⏫{{}C?}ֆ{;GHFȫARENwȸ6dqWYƸ<TϞ ԯx.zfHm>g ꄡ+_ez׀I(WȜR+9/-iv(yLKD)Xaܜ붨TCjQe=R-Z`jUh"JѣMM-ggg ~b. kX*\i7hLDS|4 i B%'}LC0ѕqҫn9¹s>׽U~0=XFd?NhjexvZX_?}ţ|{>wG_qQ/fȬ{wC.tɄP[Ub,zÌR nYц[CH%6:,B:R>Cv2Dq0Hei9AKLxihTt[jQL&& 7MϪ̇J5?)QWr`†j;-T@մ2T3CUGy|ȟ]-^xΧ<C DmIv^G1!C m1M+49Od} Xh n"yůt"ϝUF8L-*;NW{J%11ni(Fl'D|`z6S~ Ѥ3WttݞKk794R?" cѩۏM٣+Ns1H5+%izq!b[SC:km-Tf/˲BgTrY!o'`g~K~~o$:/<Uk?½zˏ'3i9W+8vʼnӔ9 U'hr:3y ! ?War,?;'L 廝>'ɸTTLgH,jXcƫd5 Lvo337ăAU7E(X}Pkvb|-X>P5/+4_i]G9A )UlmEZ!2T@Y!N YHuSMbkE>YM&L, :99\E 9aZ s D 9+rWѼh"}0U;rh"_E_aԀ¥B |$UJQ戗 L &fq+uηK{_lTZ @'YLD6PcTPKM+2!+:j)tO 4 l]X+V]5l2tP!A yq1,j:vL5 f&'> $ÑPWQJLf92Gp[sEKrdC(^F=JtHHUD E̥Q#\Y 5 `B?# @cER=? 
&0DZe₡ ʵ#}֣g5=UJ aiSd4­.`xwizV+KY[g].\<7 ,1]kȮAo${S 'Z"$q:kd PK0Ҵ+Vy |Gzg4tlf3tl^•?B glv= bWS3l v|%v1H5y)"| 7X8?dyQm覨a1mMXk6 ##b,uL-pbȰѨRD50XtLoMԩk{s]EW-PHRD&&3w]**iaDlr`^+(=:) (zEX-q7 n΅fy 51.H{?KxD9QDb<å0fRұú % *jid.uӶ4 @k;\"膤:[~*tگ{QxIec^/p<~De S G3t^"@+1Qm3 ϋI%mh-C" S]X1ψ%q6!i9a oc[gsdKW49@BLƷ+ ճ'g*5=mI*t%VC*?v5FԑX-P73_ þT, QQ0;@i%(O.q:N{9 =QG2Djܓs2vR렂fTFȽ m9~\ tvF)Xiql n5e6_X]o/ %=LUYk7Aũ$27V;Md$E+waT `iym S,8&k)ɨ_Ͳi kya g2(mjs3a4\>ޭo޵yiva^(ii'gz; D/ow*K?iT*{Ǝ?{džcC釗w*%}u ^<#/ŝ_+֡-H/vx ɿ&맻;N;bшJ(QS8j[ 0T߿onN?/>x [1ЊK Vv3俑~~ _)F -|5 ~ wT@SÏx"{9٠zMݤl & D$ у{'~Df.3U8 {\?`O9CaDX~N 0/-KӐ:rS6ZMbe , 2JbD@ ߪm`ɯՎY_*s|ߪ " *$6hȲ^PTbC4+}AYZsoB'-Ve[Ҷi՗\]IY-֬ Q-sKqٲA4 [#ͱ@m״jŊEf4KI+ ުܔگur~mku/l$#1g#ͨkjʆ;Ff7%6|K`)9aqw 3w-/|Pl.GleR &0߅\qCv )`SL6Zw{5Pu4Dc'TnG^HKza*J~& $$sE.\ e˳㐞$͔g4;J`|I^Rq RdChkڰ>Z;+3F^f$%PT@Jѡv%ѐ7ɯS*YM*x/fY`Z>V4J լҼDJNOh%J<4g8JYiq)tDeo[|Vp |`2Ȩ[YNHeNbLT-&qhq 2ޫ%3 WYzf^hMjC # bK|}<uS)Jd ;=^dV0AO@%Ul1}S-J yy.Km9K;e>8RimKQDEL}:^$ɋXʠZ2ԑ}XL8}U rQPT {>bov& ̊DAi:O=i̡Czr|5=' Nt[~z{^MVĉGN 6(A/5U&gh^ӸCR9BQU4)6^okg^S:Ak\(Q{1&RִєvSOyu݋t??u)ҝ\}\zދu=t>^~mS> *G ע%$ dl." ABefQgqUֈPK5]+~XM4szKiYeVifK5R+B,y|4EO"lcf.oN?h0hTLpL ř<73¸90SܮMiq9=L>[7dVubĶ#q 6ucQ%> \"憹LdOnmed\3Nd.G㫪DEeZ+MZT>V▯T[$4y; p`u(\ᢢigI(TM EۈBBSf6"Vn´[l<5`M"IdRJ(ob˶l Fl | [[|"j?-,ɁOȲLݙ 䢡WM̙n`-Jd MPtڍ<ݰHG4u܀\߶JbGiZ~IgrP9Vڤ˸gn6ftz˅o!焮_]bBC4^7"OI,0,b"|9u"}6d`5>S B${ a=,bAmo@!}Lqd'2|F(E v>xO/fƋjN)_ _ډ_;yvg; U, 'Q8tIT2$HŚ$w龹G^v󽻇'w?7k<"{/ TJHxQlm&u**צ!k"5-xDKhYs*-y[mj96 ז%% RbGmMf q>X<@.8ad 3aEʹcP9}e=vKh#?_1W69[Y[>Qް~>?_OIwsd0|#Zvf8͠(v31?_'׋wa\3a ta٥iFy-=R"0Sh_:pzCJO ( ] n azNRwTzEx-vi$ :<ڶBedʂz? #~er, ~o YY9*Er(A5*G  7;YCfVhQĊTLv2UQz!+g@!FR b>3i4ΐ@p]t?D^h)( ->'i6{J|L '/3ΫEr @0ŧoj$Ab[_NP VNDH3*zCbN[x%bӂx!ʒ}0%a{oSgCZ-ߢ± ݥ8t@qAgtDRz y)("c]':!c$]SYl iFmPL! <ǝ8- ]ɸT||V#|2dG+SJc婬A`SN`񫦁Wut1!50gj|DcJ%wA3zS/’:͐i,k|ht:lQA0t!IiSӈ٣dILfqFTO>ôݥ*’ M}aBp} BVh' )ojr #o$!%_[_%!$f$؆pa8 V,*(|xJK#T|s&&!؍hyJ@,%$I#t"r6Cr=D:=PEI#ȣD&qqOT Uy' >M7^mrM&q+{]4Gkct)/.Tfj*zzi20OnmL]+NKc2Cv]Ζ:"T}U"TA"^۠J퀜H|ۻyMYmYznQMȆ_K :tʹdg`Eiqd ni͵Vizf9`0akj{U}uu @Ǹ yG -w09mHT`OcShJ1F-pl]o`Jh!];5t01Z:P ,`6<gO%G'u!qz]ϻZ.ȥe Z8.K[oy]X[B)Ѥ- ]2I٣^@yѓF SU(FHʯ GP5#QI-2>)װZ@UR{ =>9>Lwl1#O{K|PM֩~R׊vK_2PxFn∑;ϩ䚃6v TR6ǭq¢3%$)mQ a;Ymjha~)Vgb$GE('؋7b^OFxHzFaT1 0=|K3w#)3nK<"ioÛ!:t Q(v@&x,+YEޠ7mȸ$7<\%&n%Wh6.]zo+&xY 7L>(:0.\E(N7H%@AAKA1Qvd٤@Tj?z ~?MGx+t~;QdOv0Tnl8$ &StL0ڻH^G0n*L;zg @k b݆\C/̧ U 3Dͦ, I+G>:ZCGV2*#;òLJ{oFlv5 >l1hз]0,5.Zr^Dj^>| NE[-C0PT`<&0}>bGxo5@PGo"iNU쯲)hSUÚpv-PZnhuB@EڢZ@.e!- `_icױ5VtĄs@X [>KtL`dc 幙ְ E$! #j^ˋOd -HP-];{`.Whz#p@9 b2WP6nڌ`B?bLFINҘo<|htPf+aU=ピݝKJ: y jFJW8e-m㺴L=v2 k&!+iOӃ*Yp@« + -A LB/a8tn vPeĐޑm`0b 1yr2;~ A~yzIZLj1rN"Y`)O>s"I ~D/3 ҽtlsxWKS%A3!Ak@e(&+ =TQ!Rs!f1 (_cW dEx%WozJA[7$3IKbϣkY.@BZkԆ eFO |QDrYA0/R>> }԰jt\ JP6;KM;;ιm 1#( 6+0Ar8K]۰ Wd3-s"gtZrmX*%*~ٳ-iۮRj UzU߾X;v;,oDӊWc{rYL{pѣ>J 5jO)❆klg73_ʸHê%yaxjPGC3(4qVF G!IJc աe8+ƸdCB'7N%%4;JS t#_/ɓk 1.7>sW7˓[DL/(~vچEj.~˩ A8{8Ǐ>Xg3_DLyl3A wTBq? #Vm~΍_ '|]Ќ,nچQ]u ]macr;%M . @i|H\r#h$ Yn3 >i 0ՆjôbO\fġQdY5Y°Oq\#1rD)jYLǵd^h fRSʦ̡]YCהO+Q>'lB!IbKS㑈F.:ˏvgXK?u{ٲv  ͣG0U!Ojp -cZ[J 31w\bpb~ 5J1pzlk3:*qM5'DEYoZ}5]k>Ma$Ӗ<1߷BQH+XK/7n,kk׍ǭ!>g#׎׎IrEl=×%?Yu'I5r-Hu7,Ds%,X;Lhdo\Qk4U㦶;cWOܔ5.'^T>~mlh @>(wf~2xvE1te .0Lp)Od.K JGqtݲ녖mrѴSZyc-#́'F :!}A靁OݿlwC@ ,{MA|I0>q1 po; X7;+w7pט}>{[*8g.0P1@lIz-P 3 :,C̣l)s'PUmKD٨=I'ᨿ5ѮVWfF\4(0oך2)w"~8ܳ]aVI#?܂_? 
~َ0ӭmLqlFzc߶3e=+w:ey hǟ%]yPyDC$÷Ik'%?F\ls {E9-pL.~Ұvq\Ô :h%l/ Io 3R0(')t -hQ&of9[ty3ȭ&m0E{>JQΗ%<Cnm'E^ xijrH!]0;.*͹s6tytYc\(3"W$gF:n%IWf&憮/7li07TbiA9\a$lR#%ktXmfU+qTv J 5XBYUz=f྘9& S&SfQ*HztY槎mx3*PTFLdX@six ZD9dп߀i"q{Lf|XgP}4NճQ.:t]iѲM7TxfgjMZ5ڮUE2,] l|!.9U`&&N, EM`6iYNٰnoĭ!;IqML>e"Lˀ_cBYN2 o9d@S}ڌMh,\€uܘeBS1Sh1n]w\bm.6z/ʜ9Am0,B\CGIZ_KqD:^,U!O1Cl[HDg* KW-(wńOEbuts~ZR4~-n} - >9MkHj}4nƥL`/^ v"X(J.RpmxEYkHtHe_=/;:vo=3rW!'ŽO|A կ3~@zawXpm"Zr0 + ^Ʃ "! '؝L]|E}B0+*Tl^9JL/nhNܩdQsFӚYCղj~/e:0:p>OͮE%ӗlt.#W&ӎSr;Tlpڧ&٣vç3Ygq{YF.'5iH F/ 2T2JkZ D0׆)5NwѶJ 4CҢ4rOI²jAn(M(L'Mb!a0p/%WνZfx,HP$^Y ,1^0Jóf46_J} 8 yƝNGhpFe1:qYPxZƣ8_5䆎k7Vkc)S2[ȴ-6o5[_QSk['*fN>3;C p@3T\s@:"\ |:$+T(Zyu8LN#؊0CDu7 uw[/udmg.F)asƽi OzmGG.{ZoֵbeV`^fotyZOC/Z}%N߁[(PޭA%pARPU3%(kj)3K>^ \/Qzu,47gǝrzw>N~ݬbY;O\"ՉniQ3_H- $pS";/ #?"7+cT=$wHV0%V. 3X#MFTj67DVY{ ~<<ǜVI%aZ;< 1ND뚥XAay`ٚŴ>V%r\zVZC1\eXU#UF$乥،eԹݣ1i7I &-jUkPp ]ZY_8v$_7ip\] P9RsdWS1vjİ+$9~#ToZK^})+}Y7e{.v0hT{hURV%6`mf ' { wG27"-j1 {vs~qn~%SS6)ΞMv6_"u|M?ըv5 rnKWT~W,z2* _r+c`wq(8RN+[V-iS`?EI/ft0ijk|yc{ItFX T=zc6 ݌|!)l?\JSM5O[@j5둒Gi1#qkQ37:և4XJ<7jWS?.OBXӓ;Ԋ#qDiIfZno-@5H5&cq>EBDYTK R:L45y~72#eGUʾZn{lνmZeJ0?I0BF ܇ = Bi|*;wckG釽AWRJ}3".K(mNHJ$NghɸK%TaGE?4`fȬ>}sӝ7{1W_GɟnQڪ}#j ؞03^͆o:c@ W_?2`N^>e>o_R)E^:Y2%}sT-L2CkOç~G?xxw]zȿL{JoR7Qw˄, Ja\:Ta5'TRjh .ү<~+ǷzM<AӃ_ ݹ'(@ڱl^:̱vg>A) 1 wXE?`b`jxGcm>3JWiu騙x!% zpY F6bədt.yPw‹NVʢG %KMÏ=ǞUdLJ!w[Nylml7EHݨ,v9:ihӻ~5" X_|;|F6t|myCHwYŚFJl m8nALY`3;]'%]B3y*xP&R|kJƂt!8|e i'GHg7Y'wPK[)wPg ywD2iX%Fwqӭ„loVU)Qny\r Eֳ`-9:}#tdJ0!zWs ͮQRYSȕ+߿ZŔ0})JY8f]& E#3$c˯6 0j1/ɺQ/_O{ zq\0b>R)'*XzADSqF:#ӥyzJ\W^,kgryђ M(/q`YZ ݳ P6)#뤙yh_ցu27>-xfK̖~#d=@.VWN D^[5٨^1K:>9+cدpfOqfۈ)kqZl#}k]]]lk TzF?l7c-9rmJgA{mXqj5-\ݻkX_z>2hwm;mhϰTp$yA=zqOGw{٧t'{ ?[Kړ 3#i65kyՙ5NTyOU%ac>ڥdr`PrF3f:œs)ŚpPb zIyȱ_9&p7ï<+cUKټI*G/y3uο̈/kĝ+m)fᄻ˺ν9YClFJw5M鑯Yh$F.t!"D$H%0%&Z&'B'(*)N)|)**++,l,-2-\--0V01112\223x34F44557&7b7889j9~99:D::;B;d;<^<==^===>:>>?p?@\@AjABBCnCDE.EFXpY4ZZ4Z`\].]]^`^^__z_`a^aabbJbddeef4fffgLjk^klDllmVmpnpqHqr^rrsxttrtuđHғʕ^|6vz8j@ܝ"Th$`Ģ(j̣8fP`Nتt:ޭv"r>zF*nvZ6ʾFlü\*ʬ@ TҺPӼ6tJֲ&N6tڤlz4NߪDtz2xc`d``\àL@ `> lxN@ FP#mIƁozhPի1&w!B}$C}ڧ؞BSP++߻-dhB _m<4Eʼg'"Wxa}f^ļy 5[3ian Ԝ gf|cP5}12>XCU<0n μ(dD#a F)+{"yBo2Gc#aGȽiUB>''.11>~'Z$_7إ ;ئglyܥQ2؇.<.3RJRlψ^DUMά7c{.U.7.*CBeu#+B$ţ2ĊgcƑF--TV8TԑԆ4JMbCQ6$h`Ƨ['nL^37Tɿl;sA[Ͽ2xuW츕U]ؿ?L~`&N&Ʉd[e˶%afffl 2s7̔\:%'=w >}+'Ν}_N01O (!hLҌ$):Mg,]Bet=:Gק F'Ltc ݔnF7[-Vtk ݖnG@w+Nt]Mеtg ]GwtOݛC@z=BHz=C Dz=BOLz=CbJ(%Is('EJHSMɐ%G -iEkj鐞Kϣ "z1^J/+*z5^K&z3Jo;.z7K!0}>J')4}>KDLBo;;{{%A!aQ1q I)iY9yE%22 *:&}~~~~~=y#<=>|O>˗|_|C_W&|Sߜo[6|[ߞ#ߑ;U'w|7;߃{}|???C0~8?ɏGc8~>@?Jͻ*ʢpb+Q҅I-3KDt^l\p[&(ʼmn-+εu#EMkKtU!06O\a!EU7q_8)ʺhFn U=\YQ!hU%ʧ)m2VptPRvLسuL-l VsaTF2s`FvZ=FtBzRjnM-G/)9Yt[%P4C`PI@_t=aוXB6:mM冥Zw lONkOZQxU2S8sj[@VKe ( k8T涣Z[v(:yݦ:A€P4V#R*:FSPjppF͍U%Nҳ0>z˾t2Cdaڡl`!^6QR?3I$ 5R o!)D'sU(dPFGg(39Q8;jEJzҌmRP4 Kr+ /u| ~B4 Bg4=@ ,_?ɱ$8`bQ%Dl5AR#z1*1N 9W6R42ئ: j @qVNN@aT;GNFUwNj m2€YIOS #ݟ>j8) Z3gӣjx NMG0b!+(#1ũvrԊX`E UgAfP)`g @ |`.@:(@k'[Ti'Y-|vt*X6c9^ch~+xh.,!h F FJCR6 @%8^U^7)]ۅkc\a-.ا?ZN\F%OչIXGsuXL|v?UP76iP. 
rclone-1.53.3/docs/static/webfonts/fa-brands-400.woff2000066400000000000000000002214341375552240400222530ustar00rootroot00000000000000[binary WOFF2 font data omitted]
rclone-1.53.3/docs/static/webfonts/fa-regular-400.eot000066400000000000000000001031321375552240400222010ustar00rootroot00000000000000[binary EOT font data omitted; embedded name-table strings: Copyright (c) Font Awesome; Font Awesome 5 Free Regular; 330.242 (Font Awesome version: 5.10.2); FontAwesome5Free-Regular; The web's most popular icon set and toolkit.; https://fontawesome.com]
rclone-1.53.3/docs/static/webfonts/fa-regular-400.svg000066400000000000000000004321041375552240400222150ustar00rootroot00000000000000[SVG font source omitted; embedded comment: Created by FontForge 20190801 at Thu Aug 22 14:41:09 2019 By Robert Madole Copyright (c) Font Awesome]
rclone-1.53.3/docs/static/webfonts/fa-regular-400.ttf000066400000000000000000001024601375552240400222120ustar00rootroot00000000000000[binary TrueType font data omitted; same Font Awesome 5 Free Regular face as the EOT above]
S  1(Αg.,  Suuu-2"&464&"6"&462"&4622+"&'&63ΑΑKuuuuE..EΑuuuuS -<<- /=2"&464&"2/&"&76&7>2/&"2+"&'&63ΑΑKuuuu&#  &   #&#  & E..EΑuuuu0)#))M -<<- )72"&464&"/&4?6&?'&62+"&'&63ΑΑKuuuu!PP !!PE..EΑuuuu(00b((0R -<<- !)72"&464&"2/&"&7>"&4622+"&'&63ΑΑKuuuu & lE..EΑuuuu$    2S -<<- 2"&4264&"62"&462"&4ΑΑuuukΑuuu)1CO2"&4264&"$2"&42654'"&547&"&4622654'"&5472+"&463ΑΑuuu<**<*7"@*<**3262/&"&762"&4ΑΑ8uu8-f- &  %p(% & >.!!.!Α:PSuuSP:r    2     * D&4&&4&.92"&4264&"72&'&#"&46&"&46262"&46"&54ΑΑuuu<&& As$"Αuuu#.@-P14E2"&4264&"%>"'&>2''&7>32/&"62/&"'&76ΑΑuuu//!f  #  &&#  & Αuuu 99 '  )%)'2"&4264&"62"&46"&46:"&4ΑΑuuu4&&4&Αuuu;&4&&4{)82"&4264&"$/&4?&?'&62'&".76ΑΑuuuC !! PPP !!PHC #r#Αuuu((0030((0J6'  'V6"^&! , 1U 4 6Q  D 6& Lz 0 X: . &  E 6] & Copyright (c) Font AwesomeCopyright (c) Font AwesomeFont Awesome 5 Free RegularFont Awesome 5 Free RegularRegularRegularFont Awesome 5 Free Regular-5.10.2Font Awesome 5 Free Regular-5.10.2Font Awesome 5 Free RegularFont Awesome 5 Free Regular330.242 (Font Awesome version: 5.10.2)330.242 (Font Awesome version: 5.10.2)FontAwesome5Free-RegularFontAwesome5Free-RegularThe web's most popular icon set and toolkit.The web's most popular icon set and toolkit.https://fontawesome.comhttps://fontawesome.comFont Awesome 5 FreeFont Awesome 5 FreeRegularRegularFont Awesome 5 Free RegularFont Awesome 5 Free RegularFont Awesome 5 FreeFont Awesome 5 FreeRegularRegular      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~heartstaruserclocklist-altflagbookmarkimageedit times-circle check-circlequestion-circleeye eye-slash calendar-altcommentfolder folder-open chart-barcomments star-halflemon credit-cardhddhand-point-righthand-point-left hand-point-uphand-point-downcopysavesquareenvelope lightbulbbellhospital plus-squarecirclesmilefrownmehkeyboardcalendar play-circle minus-square check-square share-squarecompasscaret-square-downcaret-square-upcaret-square-rightfilefile-alt thumbs-up thumbs-downsunmooncaret-square-left dot-circlebuildingfile-pdf file-word file-excelfile-powerpoint file-image file-archive file-audio file-video file-code life-ring paper-planefutbol newspaper bell-slashclosed-captioning object-groupobject-ungroup sticky-noteclone hourglass hand-rock hand-paper hand-scissors hand-lizard hand-spock hand-pointer hand-peace calendar-pluscalendar-minuscalendar-timescalendar-checkmap comment-alt pause-circle stop-circle handshake envelope-open address-book address-card user-circleid-badgeid-cardwindow-maximizewindow-minimizewindow-restore snowflake trash-altimages clipboardarrow-alt-circle-downarrow-alt-circle-leftarrow-alt-circle-rightarrow-alt-circle-upgemmoney-bill-alt window-close comment-dots smile-winkangrydizzyflushed frown-opengrimacegringrin-alt grin-beamgrin-beam-sweat grin-hearts grin-squintgrin-squint-tears grin-stars grin-tears grin-tonguegrin-tongue-squintgrin-tongue-wink grin-winkkiss kiss-beamkiss-wink-heartlaugh laugh-beam laugh-squint laugh-wink meh-blankmeh-rolling-eyessad-crysad-tear smile-beamsurprisetired k%لPلUrclone-1.53.3/docs/static/webfonts/fa-regular-400.woff000066400000000000000000000406441375552240400223630ustar00rootroot00000000000000wOFFA 0J=FFTM0tfGDEFL*OS/2lO`AcmapǠgaspglyf5Fnlhead836hhea9$5hmtx9<Ttloca966Eˈmaxp;$ name;D[.3&post=XIiAxc```d٪?t˂( K xc`a|8ч1Jedha``b`efFHsMah04ՀZ1.R xKq>n ϳ3c)ɵ\|q7s+?>_\ dF*dR/WM奼QiQWhFИ^XS}E~ԏd[r6f/6nm}1βXOͳy֥XKeD吜ڝeXwcZ3keցu_ƞ7l}JiJKt5MN tQ-ŨS:ZCS>8#8؀1UXw0L [x>$nA##E"Hܻz7NSr~PɀCN0샿 ~3Sx} $Gu`FyUVeUVfuuuuwW3}LϩjiZnIF !VԆAAlVY1lˀ~<֞X,TȬa33"##_#?;\P ™i4ꍺ,"^-8@_ M~oЎ%%ıdn 
k&O[)ccʥk21<X5E8 nϠ/p1.D]]Xa1MICn5 wL wneDMNwq񽧜B_0N?q}XFφg?zwXw޽q\;ƕ9._LaU땲];j! =zPmܭc n%5jJVؑL?ʝ˳˳5V"aBx U +ɇfiN+0Dž5^IE *}`>_+)}B}qz[EDԸ dϡux W*. }َ.5Ge8A *+52.-19(UzDOcܼctY4u<< <11}wT`/r#,FIBZwȱZR'v텽] %q/O?.yGrd0=3 59nE!I#M 3A&r*^,րUBEHŋ 7!lΜ$m5޳ByvؘfDz52NQF/ovd'#<,Dkk_0U(L8(-swÜN̚0\wJ)#3]6Ɏ`#" jD:z"%m&K7rUDRxܼo dar:%רDs^͟|º0͙v37Ͽ Oת.p!?|ᩱv#Eux]yfGN!8Q|tifZZjtTC]-m>5jnֹl f$Y|~DGU}E"%(_JT-t;QߵNGUuJ'X1o?OF7o`5`PKZ䷞|Wt:>܃#*A)u(SCoóNߛ|ʘ +[?r_Q{.kJZJXm4/_%oN~`uDYݱSGaփGl5҄%ecyTk.2Q #[nȥؚ^極5Na} ۵<<.g+=eg@6zI,WCw0}Ю_gvRw(Zf_,z d?IsG$*ł1u0L($Q;T!D+A܅4-ѐw'KcAcnOPWă'zm}-% E$f5kbE1h (/o칹>\7ߘRcz(d~dTƢf@%56=J1>KVAȹy6 ms0~iZD-.Fjh3 ZP\paGEE}]&DG[zW(HMue*'{ޏ*Ok\q CO8 ^:/ڒ8H&U8 MвW]Nf&$E զ㪠A㡱2ˆFNjHҬԠ96."Ha33t534RMtSE saDɄ5尝/VxځډOp7s\"J*a{Q- 5#G(бvF)>ZvH‹99Bq=,*fTM\T Y"sRԸVVtu!;l ,M I@igf4=70Pc(3#Y(Ĝͬ5A$5#kT0QN./4( EF$Qu{E-jE-?][%doH܌G5` @8(>,dpׅ w#U& e\_6[ .b(A3` : $JdeR#frrc u@4Ӛ8u8/3~n|ol $2Y7]Ԫy#a-{F0ī05O]/|ð?Qd0x ͯUC=IȒ'uUgVvF+OPim+czb[kkb^P19X^\yw#|#珞R CV^K'\XmeLc ,Hxkh䭷g)+Wʋ| t QL B쩄Nc$Neh={01v-JȡzAGBl5HHHF3Vـ"5(Bœ81;¨EG4;%/ YbrTb'aR#Cx/IKx-:m ql㒘'$/N_9,JCWK??\: ?;<S0 D h; $ ;) p0#X\SiUߙ |?S= U9(*;OhQ:A)L2w MوNy~#Cܭ{4 fJ%ԼњW^ۦwM~sk^ߣzo]?EV,ȓ0Bc)FtnEǘ@|i5T3NQ'z#O/(!-A䵢P^2 b'4s\'wwtR7AśA.G n3`{ .Fi^%BjS_B;Uc n']wܚ{;Ujgr\ͳ<9B<[J|"`ޫMҟdX;,R@^dx˟b̷`jsS w#4Ng U@*FhH#J/\5%dixSl* I6r*!$L6ScrlPɊ؇H#A;t!!UmB~Igt27,Ո^DH*;5(Q5fUY"DC {Hږ!ǽNc?GKsQe'5 JZm E0t ~mjF9S*CTxh+&,ȖB͛mjA ez)_g@3#rȐͨ],:N:Sv)!Fٲ*ù䝌FZX1bh0!{ /dr Y@dG[G#l1/igTYj2=AQl"):&4Bo!+FѤh̳$T^Uȑ a‰ WOS`}`ԭõ7/>nO> צDb6hdB Z):ȍrzɻbNV"[A;'J-E֓QË]:,|\k+1,WQPK]+P#o)~5^0)3K=Q~$gֵru6^ǽ"[yʾ_̃D]3xrK-%4sWm}OS7y n]EWwAbJék5p#cu"FIBTÚ`8T `!'H>P <HѨTE͗aLcR$G I1I8lPeippwp-bayq" D1Y R ]: 4D̫Sy BWT^/Kʃ%(w%ehܔ5f 0TJϷpsY\ 5 X עU@[Ym ,!")Q(@Dꂎ̣XBcpNGuxa_Q/˱l+ jEq5% O^iۚ? Kui? 2r -2@EYz14KTDN *1ݲL|T^HXXDjtgTcz-/7_]Է n?"5]SaKP&My.9\ ĒiyQ(n:m9v由DAAM?<)BW1 O`i;&W1Fm6jc!8c`{ 0L^dPˍ".z>t2 4v-$:LGLw2;=CCh4V/Vef>v}-_m-[hn!/?>zD X>E'"9E8/Z*gƍԽ 1L>WsջE F:|Ѱ05g {]л^ѿB Ƌq\ 8`zn]ЖEkZ(( ȶmkP(Y!uⱳ\N]=_PXikDJ ynKP]j{7S~CkH;(}W68j~O~##dm!vgG`ԍl|eeI7TμHIzcQ9eJ%+ꪪ?>V/+!ER  1'왊̤t]EdR$%^QXv?FV0hjOs5?h&M>@O&ά㦂Yl:C Μݵp,42W0X.<%EU+pzϊ iۙAQ{r+ܡsK_w%A,f@M'%0֡( /-&"(&TU;h& Jxpz:w<:k*cnf*Jӝ XM;i3kY"kb@JY*1 ԛi{ఔj.@QM1b>}nuV?68skf4ѱo/R3g.9S5(7l/HC@@s̽UivPuFъZIPv=咢>'* +n{X^bݩDHѿDoZ 2uUix7WR3F:=[M)i_Y-_Z,sF66+lA]4VYk/7/F?E#^ jajA>`orNid)Y/z00F1MJl7fɗa'/b$r8J]vJ40t<=]uѻF>_=t!sޜLyׇps[\t0byPl;6Ʌ;g2* GUHgBl2g/ٱql>aHsy\Bo MRTTwd'q_ ,u B,$dB"bQ w$k͸J)w-pg|:4beѠ[\Q,= twARiCSRR05շd{(]I`8%:`5V1% J71i4yֱߥn7BQ9_8L}ZTu *:[M?o|lXXeaHE4lxE@]̀(-׀`и;p̈27ٳCܖrJxA嶠jKD>ZD?*S!}bT=fq%ƴZ)~K| iRC1ZR9Qӱ:0(&G*n737[+`n@D37 &0p),3fI~2r(uܽ?IOW(Fߢ?Aoˑ_Elt䳁CӲN7? @d8cRQc.nlџ{i,]ۡf:چEڧfGqِ%>U/T{ZyZSÀ2ļq!jp-:L)4dNcĥA+.FR ܠᇾa!JTcUS1!e c( و\{"VD4.#;#"R2RIzP>׵G4CSDY]S]2 G<_IY9Ƈ ׸Ƕi0ץ3T&כS-RϽγb 0:+ tGz?(3=L6ŴlD +ADjI Qa'ceZ8(\2?G+2y3x $URLުai9V_ zVH^EApb^fYar*8kvVC+;KQwH`^e[N?GtWtdQkH#Qb +?L" H+[T8.7$k#F>hmfYHc7SuR0<@F- ~((ء l5xz UMmyTv{ezkթƉPJ(ʲx~gJ ,#Wh<\[Ѵsse ѽә7}L?6j,x`{~hkpM>t#WMYSuY1ezJk[|e^j9PW]NOwVKx#K6M/+X4;us1o*@i a1*wM$tLԨD2IKXT)= tXk0Fh]!͏c]k )])l~w %4Mn~Gx*nR݀H.:/.*BkTX' X7/ 7 M ub;.ez@d{Ȥ%YFRÙeP:>݃9=oFD;T%t`_CkWƹe1/E >MUq-i[wt3ujYf{Y5VOR Sz֌$3\(R{l^m"zRw+!$űĭzZ %AX ekNǗ6H L>rVrP2XBJdk){.Nb "j."+ym[2ASEĞ@'HСTN 6oPL;]@܂*DCʺ XvlHW/ "iӍ1zX8%WWѠcMੇFmFF6q@j!PdZn bӇj#p&uT]WbHh/dRwѝ >IT&H" ~9H`KܝA7S *&@ݳ,]g Xr 8O+bBq1Q ((tɕ+X/m/CH,#˙XD>WNfs-]-ҁ/5S}X3ݿ~6#$duwgq,_)`lJ%lDM%CÓ(]򹼰NΜdkV**KRoթc_^m9g 6/ul/\),hUaҤڏor,v,lqBcҨ8ڱS M$.aq24;u7h>58]!EQ[>m,FȰD9t)omw\rWmXf:}s$R"@vx Vm!s} Q_WOTIi4}q*L/ ȲQT 9P6Q@΃Q^Ay}^tNj!xL'='M4"mM޲d$#걕9Jěõh+cW{g0*3 xl`~iګ%c\?Ͳ}. 
/ٽfs q"0!(h!W5O!C<~, 7 _l^:=k 9144$ Zn^(m@T+QGjg*>Eڙ6it `_T}c&;۷9CSiK)&Gsf׺u hef>1>K e_u`ӟ ճ: 1Aۼn;hN/[g+c,t},M4XB+Ufݡid|P|M~Ef_;ݮE˺޷mzW++o}G=G/q #?#~g/8`%YRpϸ+)])fIAbiFExE.4o:/Vk_*j>PE4[ѵJ߹h}ֻOEǮq+֌ !٤U" QQY^6,zzI~̟SKy0GyTM|RUAyC@ /dv}Ӂoct+nmnky|‚ز'VR%9~mEڱ4qsU}!]_7^t|(uݱYv6ݗ@NbxWPkOjj (U+`5xX XX%I[&Sioh>]4FѪSvwSy`?+na,F"$%겢 XUT%ܳtpjlc(U j.ELAVdZoJ 84q&2an'_9=a}~S{Nפ[=Ug=t-'ҝGlϭ֗vzKg~c,a k7 1ky5s#'vXokw7oA >$g:Nq bA噃R/, ~QR,Uf9"1_Ŗ؛ތTbUju\EbůDNŶ$cA( (B@&|Ot; ]%M61ϓhl_8`u=1iFUo^0iÄ~"U\YSL[^ *NQ:w!w{-HR~M۟h;뀠Գx"?r:(B9}uܴVUc OU8|n3B$ pcF-%#t@- )a]W}Bü| 6# ]{yxyyVۯHmm_;l}a{\Đ1^zP05mu:(A$B$"*={ vSqWqqw>ŠQoo|f<;F;y0ӺbtPirc<[_"H35h<=_#t) mv\yR"xtΙNT"UEBʅ#ΕaHa H(31Th"17YtǕHš+JٛUr=^; uJg Z WGK[׋g_oϻ!aAy0ﳒ 1aQF[vxŝWdca ~HcYIC!^: -IVD/=s vyH(a7\"Ow!?[4bOoMD6v:ƻpwܝ k{KcZ];s߷DeWRi?x^Nss`jfnwxtpuLn>-63H\D3 |6TcG3ta&, ?HRlXox8i&ffD?D̾٘P$M 5MREs ܲ~e&X 5ow?_4/)( *C*PSI(Yz5-/&7Qn4C}o'T ]OZSa!ṀA!]tƊU799!NlS\wqKnIuBp1]caHJ8Ǹx',=@t$mcjё.Ӳ3XIږ g$4F+8튋[,?54-]mQ-suwtϷOs[eكuRsG^Um>: ''X= @@=sZ;sS ]2}^wa+pSV@&[ b# ݾ/P>qn 3Г=%[d(,DX̌ll= -jbOt~Oc]@acV72/K1펕]2ȻAcc--hsk;YiI݋<[-K̾oO[@|n*b H*Nbb ^  : :  6  ND>n8(fLDLl.lDfZhDh  h ! !J!!"f"#N#$$j$$%%P&B&' 'P'''(((l()$))*N*+6+~+,J,--..f//0,01 12 2p2334:445 566Z677bxc`d``Ű$3$xn@iR b`&q~d]UȊ UAl@rS7d,{ڨ{xKx!cVX|3s|=Aa򻃖B,!^ ϡ> sOx 7˸|^ņ[^ÂTxy&5_Aw[䬑ubO(CKXGU}^s- $(:Z Mj oQ{0p!zB.J;e+#@#LtnIwq!zHlN15^D7u'N>7Nͫ]5Ty_\nzWP;&-4O.gǜ^D(luq,Ќztd֠9P,k𴤽b 4A_e4;f֚A9;hn;2ۉDfjt1RYKX3 ~HQҮT˟fO xmTgw6Ԝ$d.ޙ{$W$N Ae9'wCZ~Ll?G Yb#Ήt衏%,#@ac Qc83qNlsqb\Kq.Wj\k:\p#n͸6܎;p'ݸ>܏ x1<'$x9<"^x597&x=#A rpJHT؄A-̰9v!>3|/%;|S8#~7?'ĔiT֤4\ ӛӂi,܌ /HG WQS2mx&m,[ܥ9it>J2eU_NdQPi&R#a[^QT.ҡ#LL YD Si7OӵiTI^H,7} AT^"yO-Zқ5SPE’ "$DK]qİKfv([[PLiKK0ݷ}wVrQmb >bZ۝i@6`߄ :.bmȕRz6HiqEˬV3ҁh;!1nf匔on33jh$[Yrل[<%ٔLd>q{(a*+ ۸&Ki#in)#DuڂphELQ) 9՗VC e0hTG'\k#w6te׏Te5XB+ ;Ѭ.Wڃ^݂U GNmBVmY[LieWK#f,{-6AUN%Bh+Ew`b0A=3ա*èo"WLJG%GHNe %eBLb&_m1$#0xa db^y.5]]Ƹb%m6UO146"T:%CP?KayaLk%csνa$%`֕f& s5'цnbv_B &׿Q ZFG,q.GG{LK0LE&*lCCu>0!^#E0vٱ~E8?p{8D4 dr%J?M?&K(A3mh7ݶ`B*)QπSq7Yi ؼ2V (SX *SʅWo J , `|$a<n-K-lhQ ?2 P<|$Nvp/cGcGaCs`Mݬϗ\@4kLAuU=4BX]Mp[ 1?PgЄĈcQ9}#"htl2ږ7R~G8mx[X9EVIgy:,uN+1%:"z.& re&о ! Ԏ^Ix7Sz5 3"/,q@wyH!v{%(~c yx*\E4ELWR?7@K rL=#"{'N۹bϜ? ޽q]_'ϝt)yfP쫢嬶* gV#$U8~n*:prj1 n!~+5Xİ6 G с]gGt*Xt&횺ɮ+YɺF w(`[+{]pSOPMB\7H3sݡn;[uONÖ$]QcӦly]Gaq^j.|'5ː!:Qm.D&+7S&`ʐJ*"l;@-dU1N׈GeMx% *p[VFNy&/7RB1 d>f S:|eYJ- b)ԎZol~j gY^0 }uý 5'gͺ"K~'5\h5z)|mHg`&m" l3}b NwV'sd z,Hh,5Fɿ D!BlZ~-mϘRґ1Fz'Mxs6'Ez>V@xvF!ȿԨe[zEiYR-ܥT<)jhv Tq]`Fdӱ|&bx[ M\\k\M % b+2OJ=ڇqq<1-64E 7?S gʛK@bqe_){!od]zMi"9fwRi4TbbKFI +uYj;R<_[<R7Ψӈc#`PY0bGX #Ij FpLce*{1%K( C@յ {<5"R;|2n=;G4S(sL[vXoiMD:HI2zKٛJW!+TD )XHj9NF i$77BdJ#y#ӁfdoG ״*/B8) qX!'y=ߑftE!=&ESF@CEkXW YS1 ]a))a|ۆx,E /K%R\w |dzF~yP;UU!zh'lEy.E7AA6nrx(zdb>xA*r`( .v!'(z*/:KM@cu¹ ޳_%H`8cO*w< @[.o8Eٞ!"6ل؍`f #Owz oLllJkE c 媾H&OMK72{riwv,kGj-eq[IW}x_t /8kʊ@;A(].rݗvؚcY:ou *N}(JT.Rhzm9ZRQUnj$Y䤋dݓx]q0"r{=~[,XQ÷~m/хⅮAn W˨z*Hc[5ﲦfuwyRդ.(~ryqAlL7.,:mc?g1ⲟ=dzОy~ih'::Т8 x Vpp}t I `$D/Xv|@5d:&( ƙbLH6F筳k_,6M K9 Hq漰cjψ\YR:44=٩|"M eTjY&%?} 5ߥk=ZvԔnu! O+;#f;+єI"h^/`/jcx(t@c\ 6Hɽ@5$>+?/*|_PgE㚙.Y#.[/ B^K6q'mmɖ2T'Sly$Đch%@¸a BJI/ AQDHGZ [QTli c1{XV?!ɞV|TO9ҮǂX lO@YRC $tKv(c7>P;szҢZ!tfֹ Y6MԶW]F`ޭZ  (/dPsh^*)|aP]61\[벸@xR1e# 9e L6-ڽP~%w_vg*jc{S7olf0ĪZQO %Y rWDt%슰jf4dq[j/"B3ν9?!"gO>{&IZɠeIdq%/dq.X&dܤ$AOnpmOO^qAQo~"A/{QMÎґ .զs S^%A/a^fHT'%fzձ$b\G2!w| K&kjA#}ƒ$uhu +(^*v8=$~ދL5 #trrQ2ʬvNg8gPX JfjJC?p~'Xsgu&IfgUEo49wƞB% /ݼE)H"Yn3<&9B挧oAzYqDkYlb.8SƩ sQw[^;A$ i.moؿhϿ1^? 
$>'1WmMK{:ѺR:炏xW#.'BXѓx?ȣ y28 (cbpy~V!LJ֖3`9Qܚ] JvmmSULeE{P4K67 T\塪B&'QX`;׊14E<ܷfI۫+ FwFzX-|y)M"~`[Ï;*e꺒ګsg/u"ζupb#Wz92 `s%#$m ϵX'VI:k6(%zv]_8_*~b}hN͊Q'5O >nK](0@]03Z mOR֪HPg{|̿(TʍOA2jVc=O^-eh^'ZQO?Jh?V$DB)}g&^K{tc ʯ LyyH 3y;l' {{_Db vZ?;Ɖvl bHFNx}L+ ـze,V>+[G2Rx^<%Flꪗ&yyQJAJw_0kKdJ9_s;b&W;!1%ۣ^gOfH0KPŕ}r?R_D`U䨿wQ~Es彜M- 4c8PfDǗlbɾ5f%wH_=*)&ٽP"}A[ Q_fVD1 ~y^zzR|l~b㷠>Bm8u$nJ7%E- ~:q/hzԪfNLӺ+-8}95l@Ljs,l^oWGqׯo8Eɳ[waqyЂ8$6J, UY"@}5? 5xݢfnII\@ <MJd=+;/@<: NjZjtO1ޗׯMH%jrPb_Dfa D8g A*J̽{Zp+x5hmUFɵ=/2~^ `\ѩ:!X:pX7Qن ͉DD6ͥQ;).c.Koͅ1b̤-c7u9%1gV麅k=M^.4?0ǻ"?A= [ޒ|G,1 2zكݘ,t;DxnJ$N[Mo{Q^4O-k_:v7 n@Y[Xݷxa 4Z/ޫBF-sȡ xX&0ON83 WrSC3~<.uˤӷpxZ'X8ĆGPޮB\$1Cw&n0֔<'O^Lx<J<M ZuӅVݷ7ṖoS=X?R.`e}Tu9֏m,ۂlf{hyy]FG%wكJSy*u&ж2y;5* otcF㇙\#=;΋ǝ4Ƞ0=nDVЍLT!et\ ٙ[ pn/6(q#dםϸ]h 9 !kŁYG򣊊θ6=3G7i,Oe!#C|q[0)HS1&eҬN#T(ĊYm:>8"42E 2 BD*lqjIs!L߄ңU܂(Aܾa{`OWt/ׯ:,._\̴FqЌRwgC:X1Ҽu %, _ʻs|S1觉ĞhN#n0ptGӺC?˘:6/̮3p7KB71ѣelJH 'c11i\_6'81roWUr3 ȵf| Ix!iS vxX8|y?8 C K2D\8H 㛪ܑu- .]\a|D"+?ܳ\=7^(@3>|0WPTiixy~`Mš G9fYSuS8CX,cFI|5&2PьO&%.oΊӧԘ?2"ۑp zEg&Ŗ[7?2GNf>yq>v&8T?a'/gNȜkU|r`QX2aaEhZ"UO[[; '֞W8vz[ۿ dŜQʒZKD%o>Sgnmϐтvġ@4(3 !@qX8. v ی> yڭUA2iVoUhP(85"€xh:N%-dA@*4'-.<9Xa@nw+v7Pw .RP]2g[Uue{7r*z`p͐k `s,A@&l? b('š"_.YYƷA_gաԢxKS3®b2,`3ҰI f0Ui/RXm8Qb`ϐ).۬ ~U 2! XRެnTjtDoʑUSe-hHjTJhl'ʱgRN.\[kaxRR@IX(bA ¡BS/GW>8,*^\X\pN" C~Fԗ2GB &8|Y1Ľ&œEhZ D7 tupT6n4 O0etՈ},M]}U*-$mb.f(BRU3jUIS!˺R\9iR?ļd`o!GC%~Nj3ŠܲABLl=.`)|X,mKWl?|L!טk>+ͬNΗyƭTCy.h~?WWP/lZr{>?ʸJ|FqfyQVuv0Nnq^~?C08D1X@Pi:bs<@(K2Bo`hdlbjfnaZox:_zNWDY횡U\a==%ذ@uМ'w[VIo8ɜ1X=0*xG/)aU8NẒ-CEgL'RLyCx+B)?Œk}Qs\r:3쿁,GtK  Up3P8[",m m0D[$ʱdmEXFfɛb+:*n%a+[;biyܜgkK"Y8٧ U҅ʛ o h/ᤅ#xZ4d>_ڡ3~^V@ަExx17TL*Y P*IY?m*\- OV7b8Fi1¯´6Of6PpKؤCɨ; 4Y_-F=gA a ΩjFSp XۅrKpDwkE8ZFaDZ*X&9ylav֜}~;,b;=ZqSJ#DN[^n| ".15:>DFKNY^e -69IN]lwz (8[]`b46:JLPScmy}!AHP^`p  $037=@FJMP[` !38@MQlqy (7X]`b369JLPScmy{|{zywjihd`]TSRQNEDCA?431,+%"  jigfH&   } | { z t f a ] [ Z Y X P L J H F 0 % $ # "               z w u t r h g f e d c ` ] [ Z W U N M K A @ ; : 4 1 , (        | n c b _ ] N E : 9 6 5 , 'LX \@h  | 4 <D@|h4 t|h0TT h !!!"T""#<#t#$($p$%%d%&&&'p'())|)*@*++,X,,-../4/0$11l123344545667(889T;<`<==>?0?@@@ABBCtD,DEFLFG4GHHIhJJK<KLMMN`OOP(PPQdR0STUHUVLVWWWXXLXXY$YTYZx[$[|\T]]^D_8``|`axbchd0dePefPfgdghthiiTij jjk$klPlm mn no$pqrrsHst uuvhvw wxy0zz{h{|},}}~L~ t d\8T$<XP4L$,XXD$Xtt|LH<`,840LXdTL8\tX,8(´ôŜ$ƘǸTH(̀͜,<Д ќhHӸ$ԈՔDltڠ`Xެ0t(X\dL8lDL4Pdlp$ , t    h   0 t   LL<4Ph4p@P h8  !\!"#`$4%D&&D&''(h)*L*+P+,(,-L.</L04012823`4456D67p789H:8; ;<==p>>?`@A,AB0BCDDEFGH8HJK KL\LM$MN$NOPPtPQRxSTdTUVHWWXXY$YZ@[\]^_ ```aTabcHcdDdefgghh(hiihijLjk$klmn no<opxpqLrrst8tvvwTwx<xyz`{{|T|}|}~pP<0xlh,ptH l|<L4 0DhL``tdx\|8tL”Ü lƤ4h@x4̰<ʹΠϠXlpӄԜD8נH$ۀP@(` 8@,L`@44T 4pL    T  lX`hX(0 X !#$$%L%&h() * *+,L-D-.(//0<1 12d2446H77889;4< =8>@\ABtCD`EPFGH@HIK,KLNNOpPPQS0T0TVVWXYZZ[D[\D]^,^_8_`daabcdHde4ffghijlHmxmnpo@opdqdrrPrsXstttu`v vxwwxy,yzl{ {|||}~,hL||x, 32+"546;5'&63!288**"&46325"&463247%68P88(8P88(@%%6%K%%6%^!%"/&=#"&46232&264&"d8HVzzz, jKKjKd ,zzzVH8KjKKj"/&6767>/+  +/)k&&k(|+ +|(# '' 162/&?'&6? 
( A j  j &g DD g&$"&4622#!"&=46;27jKKjK&7OO7#L#KjKKjkO7**7O /;GS_kw2+54+"!54+"#"&546;;2=!;2=54+";2=4+";2=4+";254+";2=4+";254+";2=4+";2=4+";2   ( (    ( @ ( ( ( ( ( ( (  p ( ( ( ( ( (     P   ( ( l( ( l( ( ` ` ` ` ( ( l( ( l( ( /?32+"&=46#2+"&=46346;2+"&5"&=46;2#(  F     (              0BRbr+"&=46;2+"&=46;2746;2+"&5#+"&=46;22+"&=46346;2+"&5%"&=46;2#"&=46;2#%46;2+"&5 e  e  f  f   e  e  f  f   e   e  e   e  e  e   f  f P  P   P   P    P   P  P   P   P  P  P  P h  P  /?O_7+"&=46;246;2+"&52+"&=463"&=463!2#463!2#!"&5"&=463!2# e  e  e  e }  e            P  P   P   P  P @ P  P   P   P  P 7'&4?62762"%p% $p$aq#7"/"/&4?'&4?62762d   dd   dd   dd   d   dd   dd   dd   5=++"=#"=4;54;232"/&=#"&46232&4&"20 8 8 8 8 d8HVzzz, dPpPPp 8 8 8 8d ,zzzVH8dpPPpP %-+"=4;2"/&=#"&46232&4&"20 d8HVzzz, dPpPPp d ,zzzVH8dpPPpP)9#".54>7632654&'.?>+"&=46;208gCrC.  bFEc$ _  "k=gBrD)L>  "'EcbG'G    h/?O72+"&=4632+"&=463%2+"&546372+"&546372+"&5463 0 P 0 0 0  0  0    ` ` `    `   `    <D%/'&=&''&'&?&47'&767667547676264&" &+" 76 "* & ** & *" 77 "* & *B//B/ 5) 1   1 )5 $ 5) 1   1 )5 ##-/B//BAD62+"&=4.+"+"&=%#"/&"#"/&54?62546;2 p @ p " Z8, ` ` 1 JI2"&4?6/54+"ΑΑ1  @ 0 Α ' . E"0B%+'4&+"#"&76;;265'32;26/&+"2>5'4&+"3= D bb((7 . -bb@4,,00+3;3232"/&6;546#!"&=46;2?324&"264&"2P X  X2 0  1*1 |  L      p  p 11j    @%#!"&=4?>3!23373'8 j  U{ p {U    +@@<2+"&=46303.#"3267632#".54632'41463eU'IggID")-.CsBfIA/ .gg"  BrDf)R*T4146;2+"&=463.#"+"&545>322676;2#"&'1+"=46;2#'/ fU':d  1ZIA9d 1Z0n/ fV / .O8 Ws)O8 Ws3$R / .'3?K!"&5463!2"264"264"264754+";254+";254+";2`"""""" `h""I""I"" T T %2#!"&=46;5462#54&"Y~YP*<*H?YY?HH**H'276#"&#"+"&5&54662^0E;?$h>4  "!Gh]  )#^  ! #12+"&=46;254&"6;2+"&/.=4Ԗ A-  /!qq!/  -A j0-? "OqqO" ?-0j6/#"&=46;  Yf  fy  Y #6/#"&=46;.6764'.>  Yf  f    x  Y  )0)  . @'>P6/#"&=46;%.67>54&'.>&'&67>4.'.>.6764'.>  Yf  fB;2324&"26"&462`X   ~ XFdFFd&4H44H0 ! !dFFdFH44H4-0%2+"&=46;'#32+"&=46;>;2'3'     0 ^/     @@     j %-%+"&=46;#"&=46;2'3264&#264&+MR6   9LW W?$4F 0  0 T9$a`(!.!p@#+32+"&=46;#"&=46;2@ ?P/  ?P/        @   C-I2+"&=#32+"&=46;#+"&=4632"/&6;5#"&?62+0   8(  (8    PP 00 PP 0 `      0  ` PPPP-I2+"&=#32+"&=46;5#+"&=463&=#/&4?63546   x  x   [PPPP P p     p P PP 00 PP 00 /?7"&=463!2#"&=463!2#2#!"&=4632#!"&=463  `  ` `&&&&@         /?2#!"&=4632#!"&=463"&=46;2##"&=46;2 `  ` \         @((((/?7"&=463!2#2#!"&=4632#!"&=4632#!"&=463   `          &&&&/?%2#!"&=463%2#!"&=463%2#!"&=463%2#!"&=463 `  `  `  `                  /?O_72+"&=4632+"&=4632+"&=4632#!"&=4632#!"&=4632#!"&=463P @ @ @ @ @   @  @  P @ @ @ @ @ @ @     @          +;K7'&4?62#!"&=463%2+"&=46372+"&=46372#!"&=463e``D `  ` U`` -     &&&&      +;K7&=462#!"&=463%2+"&=46372+"&=46372#!"&=463`5 `  ` U `     &&&&     @2#!"&54636/5P"#n &  K")"&5463!2"264!5'&'&`.!!.!hX8H H!.!!.pX8H.5462"=" pp "=6#X$5"PppP"5$X#  6462"7264&#ΑgLllLYΑΑ@ll`!"&54>762264&#"&54&"6,!gg!-5- !/  B4_6I&JhhJ&H8^4@  /! .B@ 7.?67/&?624?6#!"&5463!2+!Z\   $Z$ ((@mZ  \ #Z# (`(@4;276/+"@ 0 %% 0   54;27676//+" ( %%%% ( h     7&4?6'7&4?6' %%@ %%      %&546136#"&546;2%+"&546;2`````0`2#!"&5463` %&546&546 %%@ %%  @  @ +"=&=&54654654;2 ( %%%% ( t  @  @+"=&54654;2 0 %% 0 X  %#!"&=463!2%"&?62#   p*@@  @ 399%?62"/&4#%%"/.?'&6?62#2"&454+54+"#";;2=32ΑΑ \ 8 \ \ 8 \ Α΃8 \ \ 8 \ \2"&4!2=4#!"ΑΑt ΑΏ 8 8 #2"&4'76/&'&??6ΑΑrBB( AA (BB( AA (ΑΨAA (BB( AA (BB("&46276/&'&Α΄  F  h 'ΑΑ  F  h +3"&462"7>32;2=4>54&"264Α`J+ #    8 H &&'ΑΑR@     &*<&&'2"&4$"26454+54+";#";2ΑΑ "" @ X Α#"" d  @  (PX%2++"=.'#"=4;>754;2>7#"=4;.'+"=32+54;24"&462 3N. ( Fe   eF ( Fe ,? ) ) ?, ( ,? ) ) ?, (  ( .N3  eF ( Fe   eF ?, ( ,? ) ) ?, ( ,? 
) y 2"&4.67ΑΑ`4&`4&Α` &4` &4%"/&4?62!2#!x  xs s762"/&6?!"&=463!'.x  x}s s7'&4?62&/+"&5#s sx  x%"/&4?646;27>s sx  x&='.54>7546 1D2  #02LP4  X *@+ U,,B) P !2C546;2++"%4;2+"=#"52+"=4;543+"&=4;232 | T (  |  ( T  | T |  ( T  |  ( T  | T |  ( T  | T !2C#"&=4;232%+"=4;54;2+"=#"=4;2+"=46;2+|  ( T  | T ( ( T |  (  | T | T (   ( T X T (  | |  ( #%2++"&=#"&=46;546;2            %2#!"&=463     6%/+"&?&/&6?'.?>'&>;276  &   &  r " T  T " NN " T  T " N"&462"264';2574&+"ΑT&&Z 0 @'ΑΑ&&  .9C353#"&53#2#!"&=46;&54632>32!3.#"3264&#"      , 4$..$4 V#!     @ P P $4 $$ 4$5  "$%@'&'.7>3264&#"&546;26762"u8[5  @Q}G upPP/OBN .%,.  />0"  TPp/)!"&547632654.676#11#qq7 %& "2,2M/OqqON9 U&% "*=% @-%#".'&47>22654&"72"&54732654'6=*[32'654&#"654'6320@ 4)%[*i'@_57[* 0IT<6I '80) aPQ%=#:c  aP<9;2BBri B 1g+(   (+g1B i'2' : ll : !-9EQo!#!"&%;2=4+";2=4+"';2=4+";2=4+"';2=4+";2=4+"2!546;546;23546;2@ ( ( ( ( ( ( ( ( ( ( ( ( P@0     ( ( ( ( t( ( ( ( t( ( ( ( @000 00 0;%&=#"/73546%"=4;2'!#+"=4;76;546&5P ;F55  c oF55 o T; PP YP (K:9( g 8 K:9 8 ( PP 2#"'#"&'&>7&54Ԗj83AL9zz319JV?#"&=46;2%+"=46;232".54546;226=46hY hNt{tO2<2,PP  P|, Ht;;tG14-44-4;E/&/&4?62=;E7'&4?>76"C}!C%"/&4?625#"/&6;2762+"&="/&4?62"/32vee (   (   ( ee ( hdd +  +   + dd +B.%#!!2"&547#"&547#"&=46;2!2  !.!!.!FF  g     #!!!! W   - 2#!"&546;`@@ @D %#!"&?>3!2%"46;32=I!pI!T  E@|  | 8 v@0/?T%"&=46;2#3"&=46;2#!"&=46;2#3"&=46;2#2#!"&546;2M&:&&:& 0    FF@    P -5=M!2#!"&546";2=4#2=4&+"#"3264&"62"&4265463264&#"0` t@ dFFdFTH44H4!   &`  `J -$FdFFd4H44H4    &"*#"'+++"&=4?&54>32264&"gI % ( p  /Q0Ig((Ig( (  N 0Q/g((;C'&'&?&'#"'&76;67'&767667632+/'>'&/'.=&''&'&?&7'&76766754676276>.6'&'&?&'#"'&76;67'&767667632+/'>'&            & & g" !  &&  !!  && "{#0#0             & &                     > # # (  ""  '   '  " "  & 50#0#                    > # #B+#"'#"&'&767&5462#"'#"&'32654'zV<3+.&z.+3<@hc9HB^")3B^" 6,qO Q13  &?'&6?6  j A HD g&,%+"/&=#"&546?#"&=463!2+*&   0& *    * - h00h $<v 0  0 @19A232#!"=46;5.'&'&=46;5463!2&'#%5#676( >0>%0% %0$>0> h   @@ # 88-"4H H &"-88 (  ( )--) /7?%#"&=#"&?62+7#!"&=46;;26=324&"264&"2(P X  X 0  !P! |  L  @    p  p !!j    #7&.7>&>676>.2327>"  'BF:;-+ +"  'BF:;-+  4j  ^"+ +-;:FB'  "+ +-;:FB'  c j4 ,^#"/&?6>7'&?6p 1;[<0h0<\:2 p.2#!"&54632654/"#"'&#".x A %#KF `x  FK#% A &%2#!"&=46;5462+"&=4&#"Y}Z +*f@ZY?  *+g@ !+=!#!"&7;2=4+";2=4+"!5463!2@  H H X( ( ( ( 0016"&462+"&5.'"&=463+"&5.'"&=463%5&&5 0 wT t 0 ː VsF;5&&5%o Tw 0 t  0 Gr@&.%#!"&=463!2'!"7>3!2&"264&"264@ 0 a  a3s``<  `@,4$#"/&++"'&547#"&=46;2?632#2@  UBU K"%%UBU @!TkkTJ D5+"#5:%`%5D C`C;%//&/&?'.?'&6?'&67>7676.  ?> -- >?  ..  ?> -- >?  - >? .. ?> -- >? //  ?> '7?%+#*#"&'.=47>763232+"&=46;24&"2d  ##G E   ` 0  0    +% $  -+     &6>3&'&632#*#"&7.7.7#"&54632+"&=46264&"-   E G#('  d0  0   $ +-  $$#%+     '7?2666+"'.'&54654632+"&=46"264 +%#$$  -+     d  '(#G E   ` 0  0   &6>5&547>76;2'''#"&546;2+"&64&"2\ +-  $$#%+        E G#('  d0  0   %"&462'326=4&+764/&"2?64gΑΑL  L  8ΑH  H  %2"&47#";2?64/&"ΑΑL  L  ΑH  H  %6462"'7;26=2?64/&"2ΑH  H  YΑΑL  L  %"&46254&+"'&"2?64/&"ΑH  H  'ΑΑL  L    "+17=#>2473#%#&'%#>#64'#&4733"&673%3.P 0:0Bl#5S$lSk rr r( 0:0$lSl#5S DTT@ @ \2K[2\3K !>!@"!>!!DTTJ2\3K~\2K$'"&4?&67>76264&" P&5% J$J DK <S$I %5%Q  KD J+3CSc"/&4?62762"/&4?627622"&4%2#!"&=4632#!"&=4632#!"&=463I / @ H / @ \((         H0 ? H/ ?((     @         2/&='&63  P   8 ι '+%53#!"&=3;2672!546;546;2#5#@` ` P@p00 PP00 ;%"/&6;5#/&4?635#"&?62+3546&=#32`OO 3e OO e3 OO 3e OO e3 OO e3 OO 3e OO e3 OO 3e&:H6"&462"&4622+.'63*&4622#!"&=46;27'#"&=46;2z4&&4&4&&4& & B \BB\B#0CC0 !F!(B &@&4&&4&&4&&4F& *! 
B\BB\bC00C :# &%K"&4?62?64'&'&5&?66&'&'&?>'&"&'&4?6G,,D,~Y,&  <C  jY,&  <C   ,,D,-~,D,Y~,&  ;C<  Y~,&  ;C<   -~,D,%#!"&5467454632632.K5`%6%%6  h 2#!"&5463`/"&=463!2#"&=463!2#"&=463!2#  `  `  < ( ( ( ( ( ( '7G2"&42"&42"&4%2#!"&=4632#!"&=4632#!"&=463((((((  @  @  ((((((     @         *:JZt7#"'&?63254+"/&?67#"=4;20%2#!"&=463%2#!"&=4632#!"&=463'"=4;5#"54?6;232#"=4>54#"/&763232#>  9   @  @  D       '/                  @X    1T%2#!"&=46;&'&54>;2#"'.+"3+"./&54?632;2654&   f!9"D C +  B W^H4D.%  +  B     "9!)   ` 3I!   /?"&=46;2+26=#"&=46;2+"&=2#!"&=463    /B/    ^^p `      !//!     B^^B     2#!"&54635#75#5#75#``````````'+/?/?/?"/&47627'  5555E5555b  U l  ;W3V`  `555555  U  l V3W'/7<%2+"&5#"&5#"&5463!232264&"264&"75'#p 08P88P8@,d,((\((d,`   (88((88(@0dlp(((( dp!&+2#!"&546334ᒑ#264&"5"75#`   @%%@B//B/`%@@%  @ %%8P88Ph@%@%W9 !2"/&6    `9 %!"&?62!  `  'Y /&4?6  A  'Y 546&  ?  2#!"&5463#!#`` 3 732"/&6%+"&?62) ww   w ww i  w 3 732"/&6) ww  ww  4 %#"&?62 ww  ww :6#!"&=4632>76".#&'&=463!2"`!y  vz4  1}X  Ue Y(  &[ +7#"&=4;2>3#"'&?63264&#"32 0 #a6gg_G " 2BIggI+Jb N'+͑@ ",gg'!07%"/&4?'"/&4?627'"/&4?62762|(Q s  . s  Q(}q}(Q  s .  s Q(|qD2#"&?#".?>;2( .w    *      5EU72+"&=4637#546;5#"&=46;2+32#5##52+"&=463!2+"&=463  `  H0(   (00H  `  P  `  ` `  ` P0:@ `  ` @:000P `  `  `  ` B-C%'."'0.#""'.&7>75462632#"&'&>3265@ 57   ( 74 \\/!*     #+ '"+$ cz    zx!/  /8#"&546;6232#"6"2643+"&546;7#532h  QJQ !*h  r` B P   H!y  0 h &`B` 053+"/&46320#41&'&7264&#"26546` > `dKJg,%%, .B  /&&  GigIB2+00+2  B. !/U$"&462462"7#!"&=4672654&'527267?6/546?6=4&'jKKjK6J@0!.! :$00 $KjKKjN5--1KQ!!R .,,  *,BJ2#"&'.=46?6326='.?>326=&7>264&" gIGg7I ?  9('8  ? I73.B!$  P%qEc_D Y9    {(8:'z    9X .=+q&%P   !46;235+32#"&546;00`00 p P`+"&537#!".54767>546754624&   H8%:!@%q   C,:V   ,B&,C'7"&=463!2+##3264&!"'&763!2(8 5KK5 8( &&! H@8( KjK(8 &4&  '3;GSo!54;46;546;23232#";2=432=4+"#"3547#";2=44+";25'3;2=32=4+54+"#"@  X p X  ( ( ( ( t( @4( ( ( ( , h H  H  ( ( @ ( ( TT ` ( ( ( '/KSX%2+"&5#"&5#"&5463!232264&"754+54+"#";;2=32264&"75'#p 08P88P8@,d,((808808((d,`   (88((88(@0dlp((088088(( dp %I3546;2335+32#"&546;4&+54&+"#";;26=3265`  @ 0   0 0   0 `00 P` 0 0   0 0 /%#32+535##'53535'575#5#57335#532+3 ``0u(s0C" 0@@0 "C0s(u0 P E*E P ".:2#!"&5463!254&+7626=4&"26=4&"p!/Q   @ 07   W  `/!' $*    (с E    3#!"&5463!2#"#54&+";26=3;26=4&`p        p`D PP  PP  +2#!"&546354+54+"#";;2=32@ \ 8 \ \ 8 \ `8 \ \ 8 \ \a*?62"/&4&4?62"'````ш`` ``a+7"/&4?'&4?627"/&4?'&4?>2```` ````A+7"/"/&4?62'62"/"&/&47```` ````A)7'&4?62762""/&4?62762````````a?62"/&4 ``ш`` a7"/&4?'&4?62````XA("/"/&4?62`` ``X@(7'&4?62762"`````@#2+32#!"&46;7#"&5463!H   H@00@ !%%2#!"&=46;;267!463!2!p &&  =@ && pP2#!"&5463264&"` @2+"&5463264&"c` 9%2+"&=46;2+"#2+"&=46;2+"^B  &^B  &B^ 0 &@B^ 0 &@92+"&=46;26=#"&=463#2+"&=46;26=#"&=463^B  &P^B  &PB^ 0 &@B^ 0 &@'/7"&4622"&462"&4 "&4622"&4$2"&42"&40(((((((((((B((((((`((((((((((B((2"&4ΑΑΑ)2"&4"264&"26464."'.2ΑΑU #n# -Α ** 6%2"&4"264&"264>'&"762ΑΑU--#nΑ66*2"&4"26424+"36264&"ΑΑΑ  `+3;2#"'##"&46354+54+"#";;2=32264&"6264&"B^^BC/\/CB^^B` 4 ( 4 4 ( 4 ((l((`^^00^^( 4 4 ( 4 48((4(( @'3?KWco{)"&5463!254+";2754+";2754+";2754+";2754+";254+";2754+";2754+";2754+";254+";2%54+";2754+";2 \ ( ( ` ( ( ` ( ( ` ( ( ` ( ( ( ( ` ( ( ` ( ( ` ( ( ( (  ` ( (   ( ( ( ( ( ( ( ( ( ( T( ( ( ( ( ( ( ( T( ( ( ( ( ( .4X.'76#"&#"+"&5&546623256%5'5&'&56765767556 2 2;?$i>4  "!Gh0"(#M'##& >&#7.(! ?&#%% *D )"^  ! #HF GH DD GFFDD HFG$%"/&4?'&4?62#!"&=463!2w  0     +'&76'/&?6/&?'&?6=  = } + [[  + [[ + @   * t   . PP / . PP .  
@076&76&'/&4&4?61' 3N>!0# 9T x m m  S)F/,U ZR T  ^ ^  6.=#"&67/    /`%- 7%2++"&53#"&5#"&=46;546;27#5376  ( 0 s (  ( 0 s;  ;` 0 (  `  0 (  `;  ;6>FN"&54>75.546267>767.5462$"264264&""264 - 2 /B//B/9 /B/      0** )!//!*!//!*   !///    7  /?%"&4?62?6/&?62/&?64&""'&4?620 -,~Y,- (-*<-( -,~Y,- (-*<-G* -,Y~,- (-<*-( -,Y~,- (-<*-Gz)12+"&=4>7>54&#"/.762"&4Bn%% H  ,+?F:((:(Z@ 1   "  ![)9))9&735#"&=46;232+"&=462"&4 p   6<**<* 0  0 0 *<**<6"&46246;2+"&5/B//Bh ^  B 1B//B/G  S2+"&=46;5#"&54?6;2'2+32+"/+"&=46;7'#"&=46;2763 `   0  !NN! CPPC !NN! CPP     `  ` 0 pp 0 ss 0 pp 0 ssS!2+"&=46;5#"&54?6;22+32+"/+"&=46;7'#"&=46;2763 `   0  !NN! CPPC !NN! CPP     `   0 pp 0 ss 0 pp 0 ss%32#!"/&4762%37 `(|PrD ( `((1}PC@K%2#"&#"".54654&#"&/054&54632>?0326 1 ! $$#%54@p '#$#**" ) 2#$##  0  M 2 20 Ek` <6"&=4627232+"&=46;5.=46;26=463P88P8@ WA8  8AW  B19T `8((88((h 0Bc "   " iD( *3NM80 %F'&?6546326=46;22+"&=46;5.=7z   8(,  L  8AW4/2    -(8,0 0*&   " iD),'" +!2#!"&54%#!"=46;54;2354;232   X 0 ( ( 0  , $4 44 48@6/&5#+"&=4675*.767&632347264&" (  ) < !":   t  !5   L1-  *2#"&="/&4?#"&54?>;>32264&"D< b   3 h 1 h&eH3 ""!2Hd'h 1 h 3   c =C"""&462%2?64/764/&"gΑΑee8ΑVff2"&4'&"2?64ΑΑjeeΑVff6462"2?2?64/&"ΑVffYΑΑjee"&462764/&"'&"2ΑVff'ΑΑeeBBJ7"&?6+5#"=4;5.54632+>7#"&?6+"&'"264 DD#R04 4$9((7$440R#DD `DD'1 (  2(98'2 (1'DDIWWI &2%2#!"&=46;5462+"&=4&#"54&"26Y}Z +*p""f@ZY?  *+g002"&4264&"62"&4264&"ΑΑllljKKjKf4&&4&ΑlllKjKKju&4&&4x$"&46:"&4$2"&4H*<**4&/&7ΑΑl  Α k2#!"&5463!2=4#!", ` 8 8 !"&5463!2'76/&'&`  F  h `b  F  h *!"&5463!23?6/&7'&"?64`97g7 `.98 7A@&="'.54>3546463276#!"&546;2+!8 -?0  +-FJ/   y 3@ H ';( O)*>& H Y`>2"&2"&46.?67GΑΑv B ΑB B #!"&5463!2?6&+"`||p`||5463!2#!"&%'&;26`d||`||!2#!"&54676/&0`||`||BE%#"&'#"&=4;&7#"&=4;>32'&#"32+32+32767  Mp  !oH  (>  r& ", VF  @P, (#      @6%2#!"=4;5#"=4;54632'&#"32+354634 $ O=7/    T T{` h ( ( B7G# $ @ ( 3*A7+"&="'.?6;2654/.'&6;546;22'&+",- :$   0&" B g".A1   0&" B  T-!'0 0"    4"1G0 0"   @4#32++"/&=46;267#"=4;&+"&=43!24I ; 5L9ST' )U ( ` ( 8F5  ( - ( n:232+32++"=#"=4;5'#"=4;'&6;236?63_P: Xl l 8 l lX :PA7 7  % \ \ %  qH%#q)27#32++"=#"=4;5#"=46;54;2'32654&#\ ; 4 44 4 @QQM$((# ( 4 4 ( - OQۖ)#"(@ENRZdh#32++"/#+"/#"=4;'#"=4;'&6;2376;2376;2327#32>?#;'&'#7#136?#4?F U* 9 +7* 9 )T F> 0* m , n .0 x & Q  ' Q ( ( ( Q VV VV Q ( 66/  66   ;#!"&546;#532   Ƞ b8    zb)5>;#!"&546;54+";2=4+";254+";257#532   @ ` b8      L  T  zb5OR72"/&6;46;2%232+"&=4?#"&=4637+"/#+"&54?6;23' PP 0     =8  =8   G  ; * 4 ```0 @  F    F   U  e05OR"&?62++"&5232+"&=4?#"&=4637+"/#+"&54?6;23' PP 0   ` =8  =8   G  ; * 4  `` 0  F    F   U  e0%5EU%2+"&=463'2"/&6;46;2%2+"&=4632+"&=4632#!"&=4630 @ @ PP 0   0           @``0               %5EU%2+"&=463"&?62++"&5!2+"&=4632+"&=4632#!"&=4630 @  PP 0   p           `` 0              5=S"&54?6;232+"&=46;56&/&767.7>264&"2"/&6;46;20  0  `  (>%   $+    PP 0   `  p     @ 1' #2" =&\  ``0 =S%6&/&767.7>264&"'"&54?6;232+"&=46;5'++"&5#"&?62J(>%   $+ !    0  ` P 0   0 P 1' #2" =&\    p     @;` 0`D72+"&=463264&"32#+"'&#"&=47>767632h  P  X e#   *!0H#5    y?#.) M,:$F46;2+"&56264&""'&'.'&=463276;23+ P  P ( #H0$   #e    2:, (+  ).#?$$2"&4++"&=#"&?>;732e6%%6% 8 8 0  %%  %6%%6 h  h   $2"&42++"&=#"&=46;7E6%%6%p  @   %%%6%%6k    /72"&4/"/&?'&4?'&67627664&"P88P8V ^! d0/d !^ ^! d//d !cKKjKK 8P88P/d !^ ^! d//d !^ ^! dKjKKjK".5463276EvEj/7]>J@EuFj^6^y* %3!#!"&7;2=4+"%2#!"&=463   h h          0 0 ?E%+"/#54+""'"&4?&=#"&546;5'&4623762322#4 7=  6(3  3(6  =7  8/  66  /8 \B  <  7 7  <  ;.  77  .;B..!"&5463!26=4&`|| `d||2"&44&"2ΑΑH/B//BΑΈB//B/-C%&/#"&'&54>32+3276'#"&54673267A >  # y9% W5Ig=2B.*?> !  $!   
{7/:gI7Y'3.B7):%2+"=&=4?5&=4?54;27676265463t{eP 1 71 7 8   Fbdp )   )  E 3) ) MH3<%#!3#5#"&="=435"=435463352#!226&#"Q//4E (B+*7" "7*+B( E5 4 #$#%@((@%#$# p((@%=2#!"&546322>36754&#!"".'&'3!265 J( (I I  `5    5    5  %9+#!"=#"=4?622#!"=4637335335332!54; x  0 8@@@@@$ ` $@ XX    #./+"&?&54767'&47%67"&57n 8 0 pp!'"U<  s s  "U,q%%q,  `DNe7#?%2#!54+54+"#";&'&767?6'&'6732%463!!"&57;2?3;26/&+" &    @  @ r           ;  <  9 &&! @x             ##l  8  %5EUen2+"&54632#!"&546;254&+";26=4&+";2654&+";26=4&+";2675#"&=#@      .                   @  @      .       y       w`  )5=IUa2!54;463!2;2=4+";2=4+"2=4+"354+"754+";2=4+";2=4+";2 @  P  ( ( ( ( 4 ( t ( ( ( ( ( ( (    8( ( l( ( ( ( T T( ( l( ( l( ( )462"6+"&=#+"&5'&462376x*<**< _    _  VfV Z<**<*P ^  pp   ^  WW (3?62#"&#"#"&546&.'&>.676.7>.>L\>&"LM"&>,/- *,-/ C_&&_j4- -  &A7 !-4 - !7A& 7!6/&=46?7575N h h HNNPB!'+/37%//&=4?54?65'75'57'57'dhhdad d"UwfffUUfffVUUfff n 2442 n $l && lI$E))&K'O''*FK'O''*8U7/;2+".?'&6?67&"/&?6276/.?+/&?6326/&?6 (3 4 4$6 3)  n | $  " j*(  n  (-`P P`  "  Q! ( 0A"QI  --B n5+0 P P 0!,  7APa2++"&=!+"&=&=47#"/&6;7>;2%!'.+":>54&"!264&#":    < "9 9  "N     06   60  * & *22 ! 19BJ%+"&=!+"&=&=46?>;546;232264&"7!'.+"264&"    +   +5   #0$)   )%0#U"  "]C/%++"&?5#"&?#"&?#"'&?>2++z  `  P O nn O F 11 Z [ uu [ ##".=4632"&= "&= ]=g<]=g$ O %+(6V       $/  76 & >"  <E;#!"&546;"&'&+"&'&+";27673;2?6.#7#532   9    % %  & &O b8     es s_  c X b4=;#!"&546;6&+"4'&+";2767;26/7#532   <#%%#<<#"#< b8     ED ]^ =!$ ^b"=72+57#532;#!"&546;4&+";26=:>   7   5(!Q  =     !,9 "-#5323#!"&546;"2645'&'& b  j''( g((Fb0    ((p(h((0*.O#5322"&5467;#!"&546;3533526/&+535#535#535##3#3#y   w   h @ 2  W         &W a!/=F;#!"&546;54&#";67764'&7&7647#532   @$ $!    O    )@ b8    l$ 8 $8  ;  n  N  *ub4#532;#!"&546;4&54&+";26=65 b   ` 7 p  p 7 Fb     7&  p  &7 3G`#5323#!"&546;7654/7654/&#"32032?454/0#"764/&#"327 b  s((AA5>>A((Fb0    $$==1g=$$ !'2"&4%&'264&"'677&'67'ΑΑ?P88P8 @m?@Α@8P88P?@m?#46#"&'46762654&'&5  QiBrCgiQ :LllL: VCrCgV b?LllL?b6/&=6&'.7 H }@ j P 4M Q%,%7#"'.?>3264&#"+"&=46632/546;2gXD 2@LllLI53  2Hcg$A  )'͒7 (ll23  2E3  h K32+"&=46;5#32+"&=46;#"&=46;2+35#"&=46;2#            `         @          !++"&5#+"&=#"&46;2 0     B^^B    p p^^ #Gk%2#!+"&=#"&=46;546;2%2++"&=!"&=463!546;272++"&=#"&=46;546;2    P P   P P    P   P        @                        %%2"&547'#"&46327&5462#"'6`(88P8f"(88("f8P88("ff8P88( @8P8@ (88P8@  @5#!"&5463!2"'64'73264&"&#"327264&`DD!!.!D!!D!.!!p`))!.!!)!.!)!!.!'/7?O"&4632762?2+"43&2"=&?6'&6/&6463264&#"264zzzV-)4M  0. U ; & (8  h4)-Vzzz4 <  +     &  8( %*"&462'7&''77'77'?6'7Αa?"'>KK>'#?&U%>>%U&NN'ΑΑ:T6**6T:B4" MM "4\88\ !-9EQ]iu>#"/&5417&0#"/&54+"=4;2+"=4;2+"=4;2+"=4;2+"=4;2+"=4;2+"=4;2+"=4;2+"=4;2+"=4;2+"=4;2+"=4;2DD + W `` W +` ( ( ` ( ( ` ( ( ` ( ( ` ( ( ( ( ` ( ( ` ( ( ` ( ( ( (  ` ( ( XD11D E# <##< #E ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (  3=A#546;246;#+"&=>%+"&=#532%2#546353` @  `  `    `  `   ` p@00 w   ,$;)%-9)G,,  w 00 #+54622+#5.=#"&=4637#54620 I7@7I p@0p  p   :X cc X:   p  @&2>JV2#!"&546;54632=#72=4+"3!2=4+"3'2=4+"3!2=4+"372=4#!"3( (! ( ( h 8 h  !       `     ` ( ( ,'& /&47>2"&4%'&"/&4762{" nn "W6%%6% " ?? 
"W%" dd "Q99%6%%6y" 66 "L/?O_o2#!"&546354&+";26=4&+";2654&+";26=4&+";2654&+";2654&#!"3!26P&&&&&&&&&&&`M&&&&{&&&&{ff#/7'&?66754621%#".547>"&53z   %=8H ~   &    p1    V:,C   2%!2#!"&=46;76;2!+"& ` x r       MS82"&4654/.#"&546326?654'.#"32>ΑΑm   #'("$   >R$C) 8Α" 0 !+# R=)B&:F2#".'#"&546326;2>54&#"3276#"&462654&#"jJL *,4Q9* -   jNLllL6-  =JgQ%$|d?T 91BV  y 2JVll (Α))  #?+'754"/&4?62763~~-8 8M )) Ms~~ 8-`8PM )) M,70#"&532762#"/&54>76XF:@@   j UP. @ '%?J8@XC  D:a5  #8#4?JU%"&""&#"#"&"#546;5335335332!526226226323"&546523"&546523"&54652 '/''( '/' @@@@@@''/''/' '  r  r  @ P` @ %& %& %&%2#!"54;2%6!57>  ( 5 ZW U@ ( h hr!#%2/7+5463&'&676  =  Y;23'#"&=46;263&267#"&?&#"?#654&#"&/5KK54K)V 4 E,5KL41  W r3 @ N+J( Q- !//J$"2/! -  -,M55JK48& *6K64K0     }J&v I/B/`x( 0`0"!/H  G,4<LT2+++"&=#+"&="&=#"&=46;5462264&"7!26=4&#!"264&"        0  @ P       P 0".."0]  p'?2#!"&5463&546?6'.676/&&546?6'.676/&`=@RA?S>@RA@R %"4./6%"4./694&++"&546;2+"&52+"&546;23265463!p 0  8P 0  P8  0 p! !  P8 0 8P  !( B3W%!!2#"&'&7#'.'4>7#"&=46;2!2'#54&+"#";;26=326=4&  ! # FF  g   /t0  0 0  0  #! ! W   - ( (  ( (  B3F%!!2#"&'&7#'.'4>7#"&=46;2!2'#54+"#"?6&  ! # FF  g   /y+  +DD  #! ! W   - < <DD<D%32#"'+".'#"&=463267'&6?546;546;232%7625=,  X7 0#7X  ,> F * @ @ * v vK")  ;!;  )"F # (  ( # X&&X26:V%#!"&=467'&6;&5&54767>?6327/77475#"'0326762676#% :@F  F@;  2*2J,42. =#--$><!  1"  "1  !x( (  HWm2'"&547'+#"&7>767&+"&546;23'#"&=46;276;2+6267#"&?&#"%6&#"&/7>4KL64K-   T E,6KH2  8  8EB P& -  R ( Q)!//0"1  22"+K46LK4;')- *6M63H (  &* 7 K/B/L"2Q R "0+$@%"&54672654&'56&"&462"&=46;2732#+"&5pAOԖOA!3zz=3M6%%6%  &   @ v .(88(.  !!%6%%6 `` `   )%3"/3727#'&"'&"#&6?62762@m^9$!w1:$f!$&n''n&,H  %i(7 b V(i%(('' !)32++"=#"=4;5.5462264&" ?1$ $ ( $ $1?TxT/B//B3N 4 ( $ $ ( 4 N3327'&?67'&63264&"t  TxT'B'*# B//B/O  #*KjKKjkO7**7O+6"&462%2+"&=!+"&546;235463B//B/.B       /B//BQB. 00 ` А '/##!"&?"&546;254&#!"3!26"264N4??4NO12N0  0 .!!.!`'92 29''99p  p 6!.!!.'7?G##!"&?"&546;254&+";26754&+";26"264$"264N4??4NO12N h  h  h  h ((((`'92 29''99p  p  p  p .(((( `!%!!535#7232+#!"&=463!5    ` @@ ` `!%!!535#7232+#!"&=463!5    p @@ ` `!%!!535#7232+#!"&=463#5     @@ ` `!%!!535#7232+#!"&=463#5     @@ ` `!!!535#7232+#!"&=463     @@ D%#&/&546.j81 5W wY  H#"32+72#".'#"&=4632=#"=4;54".=4>32>32T$ $T-.T$ $T-.0p ( p1((0p ( p1(,8G32+"=!+"=4;#"=4;2!54;2+;2=4+"54++;2 ( (  (  (  @ T H @ (  ( @ (  ( H 4 @+]7+"=4;5#"=4;2!54;2+32+"=732+"=!+"=4;53;26=4&+5354;2+@44 4444 @  h44444@44H  @ H4%"!"&5463!2+538    b     7b !2#!"&54633#!"&546;0/ `0 !/47c%"&5054676762'3#"&505467>762'3'2#!"&=46;&'#"&=46;6232+KjK  <  H";#5K    <  ȐH ` & N &p!//!<4&(59 %/!<  (59      ')     )*02#2#!"&=463467.5"&=4634&"h  D55D   D5#7  =V=  CkkC    Ck 8O-  @;UU;*062#2#!"&=463467.5"&=463."65#h  D55D   D5#7   2<2   CkkC    Ck 8O-  $,,$""*02#!"&=463467.5"&=463!2265#hD55D   D5#7  P V=CkkC    Ck 8O-    U;;*2#!"&=463467.5"&=463!2hD55D   D5#7  P CkkC    Ck 8O-    ?2+"&=4/&=463254632354632354632346  0  p p t  d+B00/ 82+"&/&>54>32354>323546323546   } !     @  q ! ,     ױW.4635#"&46;5#"&46;5'.7>7>+"  в $  p P ""(E K @  @&54/&+"&=46;26?6&+"&=463!2 p  %| ! N  =G % 3 !  t7+"/.>5'&>763'&67637>376 $ ! =' # 0 40 , c * Y ~ ! 9P  -159%+"/&>46235462354623462#37#37#3  ! """"XX`! 
,((a`````-$2+"./&6?'&>354623546234"  @ K%$E("P p  $   `G2++"&=#"&=43+"/5#+"/#+"&76;236?6;2U6U t 6    2   6 A + + A `++ 6"UU6 ( 5  *%+5326"&462&'654+";26=3;26ۑΑ+&jG   ,8 'ΑΑO2Y  HS #2+32#!"&46;5#"&5463!P     @ +G!"=46;54;2354;232!2#!"&544+54+"#";;2=325X 0 ( ( 0L H < ( < < ( <  $4 44 4$  < < ( < < +7!"=46;54;2354;232!2#!"&542=4+"3X 0 ( ( 0L <  $4 44 4$  ( ( +G!"=46;54;2354;232!2#!"&5476/&'&??6'X 0 ( ( 0L  0  00  00  00   $4 44 4$  0  00  00  00  +;!"=46;54;2354;232!2#!"&54'&'&?6X 0 ( ( 0L Y j.  R   $4 44 4$  ` j/ T 6#!"&546;276  0  p      XD  727"/2"&4724#"2546p xTTxT &6(!!TxTTx6& ($9#!"&=46;546;23253+"&2#!"/&?6;53 +      @     + + @k ,  P   9pp  P  , , @ 4>?&576   J 8? 0@ 82+&=#"&5463&&} `&&&&^T& &'2"&454&+";26754&+";26ΑΑ 0 0 p 0 0 Αη    2"&454&+";26ΑΑX   Αη  !)3#!"&535462#354&"264&"264&"``/!!/`KjK&4& !//! 5KK5 &&@*6BN%+#!"&/#"&=46;7>3'&>3254&"26754&"26'54&"26@     CkPPkC p       mm  p  p  p  p  p  p CG+32++"&?#+"&?#"&?6;7#"&?6;76;2376;2327# OK P (b )K OK P (b )Kc ( V RV R ( ( V RV R -6"&462&"2642"&4264&"32+"&76\BB\B\((\BB\B\((h  " pB\BB\^((B\BB\^(( 0   G"264$2"&462"&46"264>."'&67673>'.56VzzzΑΑpppR HLH 5       zzzΑYpppDM   /A.!!.A/ B#"/67'&/"&462&/&#".?5#"&=76;2|}~? $$$ofB 6 = ?Ú  X<$$'l  P  _O 7G7#76772+57#!"&5463!2'&+";2?3;2>74.+";26  /`9 $9 4 -99.5 1b_   T+4!3CS.676#"#'7232#"/&764'&?67/&764'&?6/&764'&?6a@-.? @  8 8  @   _--  %%  &  ;@@ ( Q ZZ Q ("V66 -t- 7#X# B '/7?GOW_6"&4622"&42"&42"&42"&42"&42"&42"&42"&42"&42"&42"&4%6%%6(6%%6%6%%6%6%%6%6%%6%s%6%%6s%6%%6s%6%%6S8@H`d7"&5462"&54&"&2#"&4632654>54&""&542"&42"&4%"&54=.'.7>'7/B/1g?-  FdFSmQ@  Qf"P"  !//!  gI+ -?%&2FF2  Im  Do  V"P"2At&2#2#'&/&?6766766&#"32763&%.6767&'&>767'.7>7"'&63"&7>376"#32654&# != +^C,:) F "(  -+ %    p:) F "(  -+  != +\C   3   !Y !JZ9    A 4!JZ9     3   !G8DQ7"&5462"&54&"&2#"&4632654>54&""&54%/&?6"/&?6/B/1g?-  FdF W W      !//!  gI+ -?%&2FF2  I W W   2?k46;5#"&546;5#"&546;5#"&746;'.>+"&7'&6767154/&'&6767'&6767'&676'&>[ >~    }    | Pk I%   ={4 N^ _M g$    Z  .   #  0 ZC  cy  zb  &  @-E%&'#"/&'&767'&6?6632654&#"6"326591W+ )w }q<1W+ 5(*Y"P8*&    T+: e :(fT+: H W'38P    = (/7W2&/76'&"&'&4757633#"&5:64&"3#&"264///#576;? 7  P " b `@ )  `        5[7 TR; 7v I  ! Z@  ) @     &  R7 K<F#!"&547670>321&0#".'&'&2>767654'`)||)B#G G# $FF$ Z Z B3  32  2'/D#32+32+#!"&5463!232&"26454&+"'#";26  @ 4&&4&0'*  @ ( @ ( 00 ( &4&&4" @+7CO2#!"&5463"26454&+"'#";26754+";2=4+";2=4+";2 4&&4&0'*' ``&4&&4"" =HH2"&4$"26427.#""'&#"ΑΑH44H494 ( 4Α14H44HD!!#72#!"&5463";264&#"26454&+"'#";26P` ` 4&&4&0'*' `   &4&&4"" @ +7?R2!5463!#!"&%;2=4+";2=4+";2=4+"&"264;26'.+"'#"0@ `4&&4&  "*p0HH&4&&4  3C#0#"&54754622654.'54&"37"&5475462 K55J 8P8`!/(%A%6%  `%05KL50$(88h/!      %P%%%  "C7"&54754627#0#"&54754624.'54&";26%6%  K55J 8P8(%!/@%%%  0%05KL50$(88(      %/"C7"&54754627#0#"&54754624.'54&";26%6%  K55J 8P8(%!/@%%%i  i0%05KL50$(88(      %/"C7"&54754627#0#"&54754624.'54&";26%6%  K55J 8P8(%!/@%%%)  )0%05KL50$(88(      %/;6"&4627#0#"&54754624.'54&";26%6%%6E K55J 8P8(%!/[6%%6%%05KL50$(88(      %/!)19AIQYaiqy"/&47.7&#"#4632662"&462462"6"&462"&46:"&42"&462"&462462"6"&462462"6"&462"&462"&462"&462"&462  .@S;7):   )  W    2  `    )  W  w  W  W  I      8 O . 2;S%  H      )     )      7    )    )  )  F%2++"&=!+"&=&=#"&=46;5463266/&7.7&#"          /!*$ _       +    +   !/"   _ /  +QY"'.542''&546'&'&7654&2&767>54.#"'.542"&4 Ju.W=9R0;)%5F8%,!+6Ig*$9E6%%6%)U,,U +>=UQ9A, ()95%+]BlN.5, gH-NlB]%6%%62#!"&546354#!"` `T T`%2#!"&=463``  !)+54&+5463!2#!"&5463!24+"30/!  
D !/0 0 4 /?O_o+"&546;2++53232++53232++53232++53232%3#"=#"=4;5473#"=#"=4;5473#"=#"=4;5473#"=#"=4;54`****************`N 0f 0f 0f 0 0 `0 `0 `0 %/&/"/+"&="/&4?5/.?/&?'.?>7'&/&6?'&?6'&>?65'&4?6546;2762?>76/76! ?5       6? !" F@@F "!?6     6? ! " F@@F g   G$N5  ( (  6N$G    %%   G$N5  ( (  5N$G    %%  '"/&4?&67>(0$X% 3  $1*1$  3 %X$0*<+"&?.5476227>56227562.>32+"&7%  @  ,,,y,"*9  8 _"&7  *"_ G;#VLB& 0  &2#"'&?632676&+"&=466CrCg_G (0?FaeEB0*   $GBsCg@ (*bEGc,*   $E !-E!#!"&26=4&"26=4&"26=4&"%2#!"&=46;76;2    `   `   @ ` x r P           :&#"+"&7>3276+"&73232>76;2#"'&=46s1B:[ 9ZcH$   ׆ *1B&D0  9ZcH$ ;-G8 WsE$   y *-!9% WsE$  "0%"&54675#"=4;2+7654&+";2zzeK x 8,  "( ( VzzVMu " ( ( " $  3b 1%&=#"&=46;546+"&=46;2+";2     T(88(T T  T ` ` `  8((8 (   1!#"=4;26=4&+"=4;2'&=#"&=46;546T T  T T(88W    (   ( 8((8 ` ` ` $276+"&?&'&3276#"&46dG$   *0BEeaF?0( G_gE$   *,cGEb*( @Α08DL%#!"&5467&546;&546;2654'6323232$"2646&+"26&264&"#**#*&!/ (8&*<8<%O(**(#*&/!8(&*#[x `@!)4%#!"&546;3%#!"&5463!24&"2!5'&'&/!((D`X( !/PD((`0pX( !(/&?62'6&?6?6&5#75. o .(< o z \  $$ @2. o .<( o  z   h0@ $&?'762#r  99(c  r)(99 !'762&7>?'/&?6299(US"r/ 7"f  w !v(99(TS"7 /r"f  v  732"/&6;4;2. VV . 8 f VV . M37/&4?6!2# VV . . VV . 8 M3%546&=!"=43: VV  . V V . 8  #"&?62++"5X. VV . 8  VV  ;%+"&?'+"&=467'&=46;27'&6;2/76 p $kk$ p  $kk$  p $kk$ p  $kk$ hp  $kk$  p $kk$ p  $kk$  p $kk$ )#!"&546;46232&"26454+";2P&4&PHP`&&4dM346&=#/&4?63z VV  VV  VV .. VV . 72"/&6;5#"&?62+ VV .. VV .F VV  VV "&462#"?6&+54+"ΑΓGssG @ 'ΑΑtsst "&462'#54&6=32=4gΑΑtsst 8ΑΓGssG @ 2"&43?6/&#"ΑΑtsst ΑΓGssG @ 6462"75326/&;;2ΑΓGssG @ YΑΑtsst @2/"/&4?'&6;26#!"&5463!2#!!547@ $$   (  @ $$  `  ##!"&5463!2#"?6=4&`Xp  p`$  p +5463!546&=!"&2#!/&4?6 h PP     PP  0 PP 0  0 PP 01%#!"&54674546326326&+54&+"#"7.K5Wh0    $%2 p+2#!"&5463"26454&+"'#";26P4&&4&0'*' `&4&&4""  76&7>.'/&4 4PL20#  .6"  P )B,,U 2& X '.54>?62>7'.=9*M@' Kcl?sSC) C_G PF%lJ#2#!"&5463264&"74#!"3!2 H ` Y8 @ &.BLT2#!"'&54%"32?4761.264&"6264&"%6.'&3654'73264&#"264&"' D '   C =# n L   .wNCCNwi   5  %@/!!$3#!"&=264ᕗ!2"'4&#!"3!265@ L  P  t(``(``8   $"&4622#!"&=46;27?>=#"'4"2642#!"&=463   Oq  9      (     pP  2         @#37.546232+#6=#"&=4632#!"&=463i=V=    /+==+/   P++P       I"&4622#!"&=463!'&54?6323>546;22676;232676.!!.!x  ff  '   '  P!.!!.            #+;2!65'546;23546;23546354&"2#!"&=463p @ @ 8 0 P 0 X    VJJV  00 00 @  @     /S2+"&=#"=4;54632++"&546;2'2+"&=#+"&546;235463h  0  @ 0  0   0  0  0  `  h0h 0h    h p      Q'&'&'&'>7'&'&'&??6/7?6/7?6/76/& 634 63ggf        45[45ooT        #-8B73+"+"=4&+"&%#.54622654''4''32674''326`   (   @$-5zzH  @    5,&]7Vzz/  M<62"'67"&5ԖԖ77ԖP88P8((n(88(#5%/6?6&67>32"&54>'7>76W 65_<Q "..C.  ";V " m($*  pJ/B//! A +  m7+ !!@  '%&'762&/"/&4?'.72"&40q"8??/ )$Y5g# >P88P8 /8??/$g5Y#Q"8P88P (2@7.'>7'&67&'.'667&'>7#"&'6#0#6&-V! `D$>_GE#707%0344LS)( F>/Ov]&K 0K*,-'p4/Gr 2Oc?Q&6'2U( 1)+*WR"/D_5E:BJRZbjr2+"/&>5462;2=462;2=462;2=4264&"6264&"264&"6264&"264&"264&"6264&"} !     I    I  )  )  P q! !      H  W    W  )  I  w   ` '/746;#"&52+!&"264"264&"264"264&``&@&&`Fn &&&&@@RnR 0#532'#617>3!#!"&5 3c` !@)>%2+"&=46;75%"&=46;7532#2+"&=46;750  P  P P   P   ``@  ``  @   ``92#!"&546;546;2'3554+54+"#";;2=32`P 808808@ 00 088088 "&5462654'Ukkkj,T6``M:^vv^:5*8ii8*G%/%#"/&'"&=46326763254&"7'&#"+'$<"B\BB.-@ $<"(#RA  ']1 .BB..B>,!1pp-:] )-2#!"&546;462&"2646/&'&7PP&4&6a  j.  
R `&& j/ T%-5AMY2#!"&546;462264&"6264&"6264&"6"26454+";2=4+";2=4+";2PP&4&h`&&RRhh3;CK[$"&462&"&462'&67676563267'&'#5&$"264&"264&"&4622#!"&=463  H44H4 7VJ7.'&6;2!676;23&'67+"'&'!+"&67#7!36!&'# B--B   8   *M1,"  *9    $  ./DIID/   (?9@ $Z8   hB5=%'&>?7'76&'.'&7#"&=46;276264&"&5>!=!=<D-+,\Y pd- ((cc 5) G,8,7#   $G((B7"&546;7532#2+"&547#"&547#"&5#"&=46;2 00   S((S 0 `   @     p   6#532;#!"&546;4+54+"#";;2=325y 7   @808808W     88088/8;#!"&=32?3264&+'&"'&+"=4;546;#532   F# 9Z F# 98 ș 8   Fr,  Fr( i @ )346;#"&5!%;;2=32=4+54+"#"%2+00@808808P0p@0@088088@%1=Ieq}2#!"&546;546;254+";2=4+";254+";2=4+";2754+54+"#";;2=3254+";2=4+";2      ( ( ( ( ( ( ( (  ( ( ( ( `  p @  @( ( ( ( t( ( ( ( ( ( ( ( #2"&454+"#54+";2=3;2ԖԖp0`00`0ԖXXXX@.82#!"&546;35"26426'.+"'#"3#546;2 F4&&4& "*  @ ```&4&&4    `` 9E2#!"&546;462&"26454+54+"#";;2=32=4+";2PP&4&6H808808`&&0880888<@7"&=46;7532##32#!"&=46;5#"&=463!25#!5# p@@p 00  00 `    @     @     @@@@@D '2"&=454&"7.7>/&76B\BB\B(/vT"S #/B..BB..pp# Sv/7Tv/"!15!#!"&=32=4+532=4+532=4#72#!"&=463 @ xxxxx   @  @@@ 0  0 52#!"&=463!#!"&7;;2=32=4+54+"#"h   @ @808808 0  0  ` 088088#?G%2+"&=!+"&546;235463'"=4;276232+'"/"&462.B       2f z2L4&&4&B. 00 `  (c 7  ,c 7&4&&4IQY^%2+"&5#"&=32=4+"=4;2=4+"=4;2=4#!"=4;5463!232264&"264&"75'#p 08P88P88,d,((\((d,`   (88((88(00dlp(((( dp #37NZ%2+"=43+"=4&'&=4;22#!"&=4635#+"=4'&=4;22+"=43x0).0$0p `0$<0X0` W1&B%>>  @p@@B9!,*AB>,>7//&?'&??6/7?6'7//&?67'&?6@&@B C8 7-8  -I""9- AC B@&8 7.8  -J""8-  $.2#!"&562#".'463/&76.76dx[  / [<7*T #/F/vT #M::c:M$1kTv/#F# Tv/(#"&4?57?6/7?6/7?6/7>#dYY-2 2-2 2-2 3.M!RYYe.3 2.3 2.3 2./#"'&6?'&?67'" +/ "FO" $ R "EO';?"=4;2+"&=3352#!"&=463"=4;2+"&=335H/B/0@  X/B/0@00!//!``     00!//!`` #A%2#!"=432#!"=43%2#!"5743%+"=4&#!"+"547%62ppPP`00`0000   W q!22#!"&546;2654'"&462'"2654'76.&&&&qqd^^^" " &&&@&-3OqqO3-^^^: O 3}62"&4"&4622#!"&=4632#!"&=46;!'54+532=4+54+"#";#";#";#"26=3264&+532=4+532      `  0`hhhhhh@(@(@hh@         @     `    ((  "-%2?/.=32?7/76!6/.7  @@ \  \ ' 66 'kkp8   8$,2#"'#"&'&>7&54264&"264&"264&"Ԗj83AL9szz319JVv$747#"'#"&767&'&?6632@ E&(83AL, 9:  iI`j9 3 4+9    R5zVH:974&+463!2#"!%2+"&=!+"&=&546;2!5463& 8(@(8 && @ @ &  &(88(&@`&$y  y$& `` Fh$"&462'+"&;26=>54&/&546;2?6&'&'54&+"2#!"&=46;#"3!2=4+67Vzzz"? &    "> '     @  +? l @, zzz$!   $!    `  ` '' )1&'67673+#&'&6?&'&54762546264&" N5  ,pp ^BMA k.U}N/  ;1% E8K@B^8H$Qh=7 EX"#!"&=46;76;2+";2?62m= <m   7/ * N v] rH HrH  y ` &   J!>e.'&67546;2'&+"+"&=&'&4?6;254'#!"&=46;76;2+";2?62   2    !    5+' J pX 0%    %     y ` &   J"E7+"'&/&=4622?6/&>$2+"&=4?>2?549  i Z & i  9& ZM"+p   l 3   p+"M  3 l %@2+++&/&6?54?26=%54&+"&=4?6;76   &g P PA*<*, P!"!gg  0 @ &< .0$'z**8% .!X~  < 5++"&=47'#"&7>7#6;5#462#327#4'      .C)>t0pQ^Qp0t>?X     $A,DlRnnRlDTen"&462"&462#"/?#"#.?6?'#&/&/"#"&/&?676546;2?676.7((d((N ) C     +3 3+     C$    #) `((((p E.d L] \1G !! G1\ ]L P '" ` ` "' .E  @CKZ%2++"&=#+"&=.5#"&7>32+";>;26;264&"'"4&54621&#0 1  @ @ ("  + X:,  i  8P8    Q 00 Q'0&   7I"L@  (88()?/.%'.4>?6762'67&"uO[D  E , "k ,"O4!Z!Xf  # I  I@&W;&.;C%2#!673264&+"&46;&5462#"3"2642.54264&"(88(  `(88(--8P80`  mxP80 5S8P8!8P8@ (88(P8(P &V(H2+"&="&5%3&'>@]   ]@tU+)2;`] ]@V =/*#'2#!+"&5#"&=46;546;25!    0 0   @    p   0 0"/<62"4&"276'.#"?62&276&"Α7   % &--  #nYΑΑ     o66  *!2"&42#!"&462&264&"4&&4& p]CP88P8&4&&4   ]^B@8P88P)17'&>?7'72&!#"&'/&7%46264&"21M>M2 LB.-Aw Z((H\] :~p.B?-l.^D @(()19?G%+"&547##"'#"&5463!232264&"264&"%3'&#264&"m /B//!((!/  &0\+V !//!!/ /!P  `0+&/'&?632576%5#!"&z %  N n" n    !  
[binary webfont data omitted: tail of a preceding Font Awesome webfont payload, including its glyph-name table (icon names from glass-martini through voicemail)]
rclone-1.53.3/docs/static/webfonts/fa-solid-900.svg
[SVG webfont body omitted; surviving metadata: Created by FontForge 20190801 at Thu Aug 22 14:41:09 2019, By Robert Madole, Copyright (c) Font Awesome]
rclone-1.53.3/docs/static/webfonts/fa-solid-900.ttf
[binary TrueType webfont payload omitted]
00 `  (c 7  ,c 7&4&&4IQY^%2+"&5#"&=32=4+"=4;2=4+"=4;2=4#!"=4;5463!232264&"264&"75'#p 08P88P88,d,((\((d,`   (88((88(00dlp(((( dp #37NZ%2+"=43+"=4&'&=4;22#!"&=4635#+"=4'&=4;22+"=43x0).0$0p `0$<0X0` W1&B%>>  @p@@B9!,*AB>,>7//&?'&??6/7?6'7//&?67'&?6@&@B C8 7-8  -I""9- AC B@&8 7.8  -J""8-  $.2#!"&562#".'463/&76.76dx[  / [<7*T #/F/vT #M::c:M$1kTv/#F# Tv/(#"&4?57?6/7?6/7?6/7>#dYY-2 2-2 2-2 3.M!RYYe.3 2.3 2.3 2./#"'&6?'&?67'" +/ "FO" $ R "EO';?"=4;2+"&=3352#!"&=463"=4;2+"&=335H/B/0@  X/B/0@00!//!``     00!//!`` #A%2#!"=432#!"=43%2#!"5743%+"=4&#!"+"547%62ppPP`00`0000   W q!22#!"&546;2654'"&462'"2654'76.&&&&qqd^^^" " &&&@&-3OqqO3-^^^: O 3}62"&4"&4622#!"&=4632#!"&=46;!'54+532=4+54+"#";#";#";#"26=3264&+532=4+532      `  0`hhhhhh@(@(@hh@         @     `    ((  "-%2?/.=32?7/76!6/.7  @@ \  \ ' 66 'kkp8   8$,2#"'#"&'&>7&54264&"264&"264&"Ԗj83AL9szz319JVv$747#"'#"&767&'&?6632@ E&(83AL, 9:  iI`j9 3 4+9    R5zVH:974&+463!2#"!%2+"&=!+"&=&546;2!5463& 8(@(8 && @ @ &  &(88(&@`&$y  y$& `` Fh$"&462'+"&;26=>54&/&546;2?6&'&'54&+"2#!"&=46;#"3!2=4+67Vzzz"? &    "> '     @  +? l @, zzz$!   $!    `  ` '' )1&'67673+#&'&6?&'&54762546264&" N5  ,pp ^BMA k.U}N/  ;1% E8K@B^8H$Qh=7 EX"#!"&=46;76;2+";2?62m= <m   7/ * N v] rH HrH  y ` &   J!>e.'&67546;2'&+"+"&=&'&4?6;254'#!"&=46;76;2+";2?62   2    !    5+' J pX 0%    %     y ` &   J"E7+"'&/&=4622?6/&>$2+"&=4?>2?549  i Z & i  9& ZM"+p   l 3   p+"M  3 l %@2+++&/&6?54?26=%54&+"&=4?6;76   &g P PA*<*, P!"!gg  0 @ &< .0$'z**8% .!X~  < 5++"&=47'#"&7>7#6;5#462#327#4'      .C)>t0pQ^Qp0t>?X     $A,DlRnnRlDTen"&462"&462#"/?#"#.?6?'#&/&/"#"&/&?676546;2?676.7((d((N ) C     +3 3+     C$    #) `((((p E.d L] \1G !! G1\ ]L P '" ` ` "' .E  @CKZ%2++"&=#+"&=.5#"&7>32+";>;26;264&"'"4&54621&#0 1  @ @ ("  + X:,  i  8P8    Q 00 Q'0&   7I"L@  (88()?/.%'.4>?6762'67&"uO[D  E , "k ,"O4!Z!Xf  # I  I@&W;&.;C%2#!673264&+"&46;&5462#"3"2642.54264&"(88(  `(88(--8P80`  mxP80 5S8P8!8P8@ (88(P8(P &V(H2+"&="&5%3&'>@]   ]@tU+)2;`] ]@V =/*#'2#!+"&5#"&=46;546;25!    0 0   @    p   0 0"/<62"4&"276'.#"?62&276&"Α7   % &--  #nYΑΑ     o66  *!2"&42#!"&462&264&"4&&4& p]CP88P8&4&&4   ]^B@8P88P)17'&>?7'72&!#"&'/&7%46264&"21M>M2 LB.-Aw Z((H\] :~p.B?-l.^D @(()19?G%+"&547##"'#"&5463!232264&"264&"%3'&#264&"m /B//!((!/  &0\+V !//!!/ /!P  `0+&/'&?632576%5#!"&z %  N n" n    !  =K   %2+"546;5.?4>;2(4@ @4u W77W u!'&?6>323!"&=46z  M23232+"&'7;26=4&+"/?+54&+"#"&=46722"&4&2#54@ P0.%  P`P8(0(8H $$ $$3EP PE30p  W ` +5$ ` +55+X(88(Y$ $$ L3 @  @ 3L`   00+$"&4622#!"&=46;27%/&?676jKKjK&7OO7#L#T  Q  -i KjKKjkO7**7O R  -h +3$2"&454+54+";2'!"&=46;2732&"&462xTTxT & < (#O7#L# +jKKjKTxTTxB 6 L ,L*7O&HKjKKj;CKv%/'&=&''&'&?&7'&767667547676264&"&"&462#!"&=46;2732332?b !!  !! ((jKKjKI O7#L#     K    0((tKjKKj *7O  ' $/$"&4622!"&=46;277&7%'?62jKKjK&;)NO7#L#>G= *H%KjKKjk.M = *7OG *H&#76"&4622#!"&=46;27$"&4622+46=4'6;27\BB\B#0CC0 !F!P88P80.B(*B\BB\bC00C 8P88PXB.'9+:%#!"&=467&4?6"&547'+"&?&5475?6KK6_`KjKB * M6  6M_P..5KK55 > > ;3;C$"&462!"&=46;2730%2+"&=46;5462264&"754&"jKKjK O7#L#   /B/]@KjKKj*7O  P!//!P}P  P+%2+"&=463"&4622#!"&=46;27p  jKKjK&7OO7#L#     0KjKKjkO7**7O%-%#!"&=467'47&52>32#"&'7"34&#E4GG4e++1#5KK5,C g  L4**4Lf33( KjK5)` 7%'.54?6>7'&"&462#!"&=46;273230n$" 5Ps !;`jKKjK 7, O7#L#$@/&sN-N:&KjKKjsDm#*7O!'&?6>324>7!"&5z  #8!5K0&3     r 6K5)B 9%+@%"/&=46;22>4.#"'".54632#!"&=46;2732w \  [O\   #;"K5#;"KK:O7#L#S  ] [O E X";#5K";#5Ko(:*7O$"&462#!"&=4677'3jKKjK 5KK50 ` KjKKjlN5**5N88;CKb%/'&=&''&'&?&7'&767667547676264&"$"&462"&5462"'&#""#"&=46;323630#3273'#"&=46;2b !!  !! ((4&&4&.BA]B    fD/ !##   (B &@k    0((t&4&&4FB..BB. 
" 0C ' :# &*BE[^!2+"&5&'&/&6?&5463276%#"&505467>762'3'1"&50546767623'    /!.v    ";#5K   < SȐH KjK  AWZ46;&/.?>632/+"&5'054676762"&73'0546767621"&73'`    v.!/   `  < SKjK8H  <  KjK8H '!, '(/!+ 0  96(!//1 95(&4<!//1;?%2#!"&=463264&"";#";#";!'#"&=463!3'#% %sHri gQQ F@%  %`@@@`@@"6#.5476'&'"&546 O M{O  ' l //  ' R $9Kil"'&476;2#'+"'&476;2%+".7>&'&>;22+"&764'&63&/#/.7&54>23'  "  _  #  #  #   # y  "  R 11  "p`0 6 , 4]+ 9|9 9|9  )[6 @ 6 , d  vv  :t)%/6?6&67>3%'7>76W 65_<Q ";V " m)$*  pJ +  m6+  !#463!2#2#!"&=463!53`@@P  x`     @@472+"&54632276&"&4622+&'3533!ᕗ/AA/(P88P8P L@!`B//B 8P88P&@@ 2.-7@%#54&"#54?5#"&=46;546;232+46?#"&5%+5`%6%`p0 0   0 0 mp m p `%%` D3   0 0   3 / d  (085 7"&5"&46232>7"&5%567$2"&4&'67==qqqqqh:&H>qq?!#ppp 0n37++++&&@&4&&4 4&&)? +  /B//B "%+2"&44635"264&"62"&4ΑΑX^BOqP88P8SΑgB^ qO`8P88PH)12/"#"+/#&54754632264&" (8`M=)  ,'&  ,ay t/!'J%PAep  xh  x= %!/ H;!2#!"&=4632#"'!'#"&4626?&5462?&54 ` (HH(H R(R H     @(( +  + (08@HP2+"&=7>'264&"'"/&4?62264&"264&"6264&"6264&"264&"P ((.1((rr'/72#!"&5463264&"6264&"264&"264&"6264&"%%%%3ss%%%@%ss'/2#!"&5463264&"6264&"264&"6264&"%%%%3%%%@%2#!"&5463264&"%%%%%%%@%'/7?2#!"&5463264&"6264&"6264&"264&"6264&"6264&"%%%%3%%%@%MMMM'2#!"&5463264&"264&"264&"%%%%3ss%%%@%ss2#!"&5463264&"264&"%%%%3%%%@%62"&46"&4622#!"&=4636%%6%[6%%6%   `%6%%6%6%%6U  !!2#!"&=46;463!2&264&"p  p m     s"*!2+#5326!"&=46;467264&"p `p P    @!   m 0P%2#!"&=463%2#!"&=463          "33&'"&476&&67>7>' !b'bGE:B  )!.,_Q'g9,'1B  5t*"-,!@9A32+'6.'&"?632#!"&54>7>2264&":) b6  Zw F#  #& #O  &5HX0,:,S_#Vi  ~,  B0  %1]L<%%%C@D!2#!"&=463'.=4&+!46;23226=.='&4?625#P  /!%&&$4&      U +,p`&&4$  >& @CP]%+"&/#+"&=4?>/&6323632'&'&&/&6?67&#";26%5&#";26>C0%.B$B.%0C- C$ '));7J7;))' $C ##%$%6$%##%F/C=,'',=C/F&%        %&) %% )%&/&6?'.?>n    "   ff  (.?>&/&6?2#!"&=4637   .   p  T  xx  D 0  0 &,>%"/&6;235#"&=463!2+2#'3.#!"&=463!260 V  (+а  ] }#4!# F  @ s@l :@     @] |4#     `)2#"'#"&4632627&#"!2>54&#"FccFQFFQFccFQFF////!+L +///`^^OO^^OO@@&  &@@@4@D%"#"/#"+"&=#"'+"&=.54>76;2264&#"5@  J LL    ,4$0 TFCP .Ew  h 1 ; 16 JY4)D& 0.=K  c %'%&=467%6m   3  Z  "   f&7&=47%6'2#!"&=4637.    D  p  x  DD  B 0  0  $H!5>4&'5463!25#35#35#5!#54&"#54&"#54&"#54&"  @ `@@@`@     = cc#  ``    5U'&?65462#";#";#36=46;22+"&=46;5.=7z   8P8U UU UU),  L  8AW4 <)2    -(88(  0 0*&   " iD)(9'"&,4:@#"'&#"#"'&54632326322>7&.#6264&"575&'m +1>>?5 +1>>?  
$"%B//B/`   >= >d=  $8P88PT:1 #&#"'&#"#"'&5463232632264&"m +1>>?5 +1>>?B//B/  >= >8P88P %1=G1!#!"&%;26=4&+";2=4+"%3!2=4#!";2=4+"2!5463  ` ` pp00 @     Xh@ 00 KWco2#!"&54632>54&/&546;2?6'&'54+"+"'&;25754+";2754+";2=4#!"3!2`    -  -  ppPP              h3%#32+/&?#"&=46;7#"&=463!7632bK  S  37  K  R  4 ` j  C ` j  C %-56+".7>264&"6264&"6264&"264&"PU&P&(/hP) n/3=N&@";*P}CJmm-S"*2#!"&5463264&+";26=72+5(88(`   0  0`8P8  0@"&462"&467"/&4762%6%%6%%%6%%F     %%%6%%%6%%6%     $4%2+"&=4633#"'#"&=46;2%2+"&=463   @@$I`        @0p    %1=I6/"/"/&54676276254+";2=4+";2=4+";2f&6 66 6&&6 66 6 ------  -----hh !%-15=G546;#"&52#!"&546;54625#6264&"5#35#6264&"72+5 !/&&/!p`@""@@""    /!&&!/@  @ x"" x""1 7&/&6??6/7?6/7?6/7?6/7?6/76|  P E< <87<<77< ;E   'ef/0ff/0ff' #G7&546;2#";#";#"32#!"'73;2=3;2=3;2= ` 88888x  @)@@)  @@@ ` 888888@@@72#!"&=46;;2=3;2=3;2=3;2=3;2=   0@@@@@  XXXXXXXXXX/73+"&546;2#";#";#";#"X  XXXXXXX @    @@@@ /7A546;#"&52+"=4;27#54&+"#4?62264&"%2+ @P h0 ` 3B//B/p  P  @@(@  u k/B//B  @ #.'7562"/&47@`>S S>+u  5 @S S>+>u 5  !4"&46;46;#"%2'&/52>76#"'.#57676%% % ,IfHM-4#-:+0MH2N/80+:-#3 %6%6%2.&B   B&,  *2:2+54+"#54+"#"&?6'.54264&"264&"Ԗ* N@N  ,24&&4&4&&4&]"A6 B8888 B Z4]&4&&4&&4&&4 /J7546;#"&2"&427735654&#"32+'2+"'.#".'46;2` `v 6ԖԖ=2$q| 3$qO=2| #,# @ ` 7Ԗ$ 2=Oq o2=Oq$| @   t3#"'"'"'#"'.?63!22767#!".=327!5Z&/,XX-/&A     @   I(Z!!!!!!Z(h   dd %%53#!"&53%#!"&?63!23+"&@@ @  U  %@   @  "" /"&=463!2#2#!"&=4632#!"&=463  @ ` ` ` @ @ @ P @ @ @ @  7'7'772"&464/76/&'7?6/76/&'&'76/&'&'76/&'&?'&?'&??6/7??6/7??6/76/&'77'7----------jԖԖ ."   "-  -"   !-  ."   "-  -"   !-------D------qԖԒ -"   !-  ."   "-  -"   !-  ."   "-3---+/M%#54&+"#54&+"#54?6;546;232'5#53#!"&=3;26=3;26     . 33  @      S  S . PP @@`  `  %/&#!"&=4&&/&6?267w9 9 9 9?L?` r  r `C:I"&462/&/&/&/&?.?67>327"&4?6((.   -   < " #8 ( ;  2 `(( E2 Y  J B"<      *"4 ," <  2 $2#!"&5463!2#!"3264&"s%%p  C@%@%  $6O2"&42654':12764./&>'&"67627>'.3232654'ΑΑ P  b FbP  ΑW    &&      @"2!2+"&=4&"+"&=46;!2#!"&=4630  8P8       (88(   `      #:BJ%6?#&''#>#6&'37.'#!"&5463!2"2645!"3>- '' -V ." '--'L    )77)FjKKjK 1) )#*)11))*# 4  7)@)7&KjKKj@,x7327/"&?#'&/32767672>367'&"&'.'&'.'&764'&7>76767676276&2654.#"a4%4 .  ! 4%4  J          7N7+'U &p Y&           P88(,-2#!"/&4?63'76/&'&??6@%% >>  >>  >>  >>  %%  >>  >>  >>  >>  '7M]2+"&=463#>7##"&46322+"&=4632#"&'##&'6=3>2+"&=463p  `  @62 9*P  %% %  `  %% P*9 26U   `   `  ` 8H '?0O  %6% `  ` %6%O0?' H `  ` 1;#"'&54675#"&=46;2+7'&?6/'!&/5#.&4,  ' 4 +%%`@!,34Y     ' 4 +.53,!''  #2!54635!+"&=#"&"264`  %@%6%@%  %@%%@%M4@HR\d2+++"&=#+"&="&=#"&=46;5462%;2=4+"264&"75#";26=4&+264&"        =p  p  ps@ P       P 0".."08] `  ` [%#"##"'&'+"=#"'&767"#"'&4767.'4546320454>762>7632 (8 n ?6264&"6264&"264&"$ E'*M+ 7  $!E'*M*7 3*'E $  7 +L E"$ 7 +#+3;%/././&54?>?63264&"6264&"264&"# E'*M+ 7  $!E&*J5J3*&E $  7 +L E"$4J5Jq1%2++"&5#5323#"&5#"&=46;546;2  ( 0   (  ( 0 ` 0 (  H` H`  0 (  `'3?K[g2#!"&546354+";2';2=4+";2=4+";2=4+";2=4+"754&+";2654+";2`   @@@(   `   H00 0 y!)A2"&4>/7>.'&7264&"7'76.'&?>ΑΑ c4&&4&Α> &4&&4@H%/67#"'".5'7&'&6?67&5462'0"0"132676&"264767,RL\142G6G! D 8P8 4#03C#"9d h`:_~)= {:|"  v(88(Y$ Xt 7/  C-D54&"54&"&'54&"&=4632762654.'&'&6?*#"7?@:FF:@w('   c9%  WcG6-#`  zo  oz  `#-5K`/!a 5/B/@  *62"&=46/627.#"75&"26>54&'8DF;<<"} @ &4&/&:!D8K55KK55S, &&    &&m& ,!33#"&476&67> -jQe:K? 
9  2D[OV,}n5119  ."3W8.)T];#!"&546;;2=4+";2=4+""'.'&"+";26?212?62;264&##532   ȠPPPP +"       / I 8    HH )6  2     '0;#!"&546;6&+54&+"#"27#532   L A   A ` 8    P P`R G3#532&=#53546;#!"&546;;#" b `@@     Fb ` A@A |      373#"&=46%#532;#!"&=3?6/&#46;pp  7  ` ` Ƞ@       A ` ` A  *6BR%#5%#532;#!"&546;;2=4+";2=4+"54+";254&+";265  7   ȠPPPPPP   @@     HH ` &2m#532;#!"&546;;2=4+";2=4+">54./&546;2?6'&'54+"+"'&;25y 7   ȠPPPP -  -  W     HX          ;CL;#!"&546;&'654&+";26=3??6/76/#5327#532   E !!%P  ;   00 ɀ b8     !&%  0;   7  bA7@EO7;#!"&546;;&'.'&"+";26??62#5327#%'762     +"     b`DED (    /R )6  2  (bDD'0;#!"&546;26/&";;26=7#532   A `` A    8    ``P P &%"/&4?'&?67627'"/ Pu_V  VR  [1;;  ;Q  uP_V  VR 0;;  ;R @ "1%"&546?"/&4?'&?67627'"/@%6%  Pu_V  VR  [1;;  ;Q]#%%@  uP_V  VR 0;;  ;R  -A\n62#"&76'47'.76'4&'".76'&6&'.76'&767'.76'.'".676#"&54'&676'&'.#&#0#"&=&762 +>   " ;I   (Lm Q9     ' "&(X<9  GKl>60 rn ik `<*MK  GH  ML  FI)< #.><  79?1:kJ55 897O z,  ' V   <9R  fHJ* @`$2#"&'&?'&6>264&#"G0a@((@a0>p&X  W&p  `)7337)<*B nn B*<  '/$2"&4&2"&462"&4264&"24+"36264&"NΑΑP*<**<<**<*ΑΑ)<**<* h*<**<'2"&4264&"6'.#"7626264&"ΑΑ =(" 14)Α* "  32+"546;5'&63!2!!88J0**0j2"&446;.'+";2;2/&+";232?676?6?6=4&+"/&6?63232?65ΑΑ Y;  Z  /       ΑG :R               _n2"&4654/&+.#"/&54?632;26/&54?6;2?6/&?6//2?676767654'ΑΑJ  C       G    _ Α         G&     /d /*8m2"&4762;2=4/&4?&#"32?6;26=4/&?63254/&=4+"+"/&+";237632;2?6ΑΑ 3Su>     '     Α_N  & uS-  f        !%)-15=DK2"&45#375#"6264&"5#75#5#75#5#75#6264&"5#326=4&+ΑΑ(   5000p000p000`(  Α( 8( ((8((8((8((8((8((h(% (&2"&4"264&"2642676&"'&ΑΑU`Y // Α6'  '#22"&4%2767&'&"2767&'&"2676&"'&ΑΑ8+`Y // Α!" !" !" !" 6'  '+92"&4$"?626'&'"?626'.2676&"'&ΑΑ[&#  &  #  &   `Y // Α)))  6'  ' -AP"&5476227"&4632&"?626'&'"?626'.2676&"'&(**0 ΑgJ>/L&#  &  #  &   `Y // @77<03gΑ)!.)))  6'  '':2"&4?6&'&'&2676&"'&%>'.'.7ΑΑZFi`Y //  ΑF  6'  '  F#12"&46/764.?64/&2676&"'&ΑΑP!!!!PPX`Y // Α-0(( ((006'  ' /<JZ"&767622"&47667'676&&>'?6&/.?6.>'&26 * )!)yG!)yG Z  4]Z  4"k,<@PH)H) Gy) Gy) ([ 4   Z`"a$,k .G2"&4?6/76&/&"2676&"'&%6&/&"?6/ΑΑ_# F`Y // '# #Α ##   6'  '    ## 6FVd72"&476"'&'&61"&'?676&>32&'&&"?626'."?626'&2676&"'&f )S) Hr  ss   _?lE K&#  &  &#  &  `Y // H))G R9DD9R]:e=b)))6'  '%-K26=676&"'&.54264&"264&":#".=6362?>ΑYH * // * HY&    gO~.$  $.~OgW@%  ?   ;HU%:#".=6362?>26=676&"'&.5464/&?'76&6%&    ΑYH * // * HYP!!!! PPI@%  ?   lgO~.$  $.~OgA0((((00%8@^2"&4&26=676&"'&.546'.#"?62264&":#".=6362?>NΑYH * // * HY  % & 4&&4& &    gO~.$  $.~OgP    0&4&&4@%  ?   "1<62"4&"276'.#"?62&2676."Α7  % & Y`Y /YΑΑ     a '66'  `/?O_72+"&=46;2+"&=46;2+"&=463%2+"&=46;2+"&=46;2+"&=463`  @   @   @   @   @   @   @  @  @  @  @  @  @  @  @  @  @  @ @/?O_2+"&=4632+"&=4632+"&=4632+"&=4632+"&=4632+"&=463`  @  @  @  @  @    @  @  @  @  @   @  @  @  @  @  @ @ @  @  @  @  @  @ "@72+".=4>;2+"&=4632+"&=4&#"+"&=6  %  Ҕ  zV8`8   @&@ ip pVz8`8p piF7+"&=46;2#"&=46;22+"&46;23265454&"+"&=6 %%   %%Ҕ5% fzz  p %0% p %0%`i%5(VzzV i!77/'7'&54%'76DC#)#+3`3 + FC##* 3`3++ǩ';GS_ks&'&'&6;2+"'&'&'&6;2+"2#!"&=46;254+";2754+";2754+";2754+";2"&462+ + n+ +  %%%+o```6%%6%($> ($> ($> ($> @ %%% Spppppppp@%6%%6@-=M]ms#32+54&+"#"&=46;#"&=463!2;26=4&+";26=4&+"';26=4&+"26=4&+"334&"%54&+";26=4&+";260        &&&&&&3&38P8&&&&   P P        -&&e&&[&&&&(88e&&e&&7DQ&=4;2+"=46#"'6+"=4.'&=4;22#"./2!2+"&/<0(0$qYZ).00vYuV   v *AB>,9!,P8!W1/' %>>    p  -52"&4264&"4'654.'&7>264&"ΑΑ##") ΑW      5G2"&465."?624'654.'&7>765.#"?62ΑΑ #&#  & h##")A  #  & ΑK))      !)&.Ma%/&>7676#"&462&#.&264&"4'654.'&7>?626'.#"S  /4gΑ T## ) '   .! 
S   TΑg.,%#           %2"&4"264&"2642676&#!"3ΑΑU87Q  Q7ΑH6 6H.<2"&4?626'.#"?626'.#"6&#!";26ΑΑ  &  #   &      Q77QΑ6)!  !w 6HH#12"&46/764.?64/&6&#!";26ΑΑP!!!!PP  Q77QΑ0(( ((00 6HH!/2"&4?626'.#"&"2646&#!";26ΑΑ  &  %Z  Q77QΑ5      6HH ALP7"&=46;%+5322+"&547#"&547#"&5#"&=46;2#46;25#  @ 0 S((S 0 ` ``0`      p    000@ #/2"'&547.=462767'5%6hJrrw ,!!/  J4>>40? (%@%( 8 @ !+72"'&54264&"7.=462767'5%6hJrrm""w ,!!/  J4>>4^""I0? (%@%( 8  ?&7>'7'/&?6276^KK"r/ 75bW  i ! %KK"7 /rD5KbW  h $ 4'&6;2%2&'7632"&46&/&"?6/?/o o U o/?E ʒggg 44&  //  = ++ sggg6 // %4  42"&4264&"264&"ΑΑΑW!32"&42654'"&54724+"362654'"&547ΑΑX&4&""4&""ΑG&&&   p&&  &*!2#!"&=463!6?654&+";26p  M M0FF     [; L L&& +#76322++"&767.5#"&=463d  C6  6C c$   76732#!"&5463264&+"36264&"7#&- '' -V.  'j  %%  ;jKKjK-'1) )**h*#f)1 @ %%`   `KjKKj1)?'73264&"'766'OTc!] ];1#]$b!cT;] ]'T(b&7'73264&"'>%'762+  (] o99(5+](  B(99$67'&4?622?'762'&?%"/7?md Z  "> >,.q.(-qk  Z  d,> >d  Z "> >,.q.-(q k  Z e,> >.!2#!"&=4637&=46'46'%&'p   'f0 A db+ (&+     e  C   1 N 1!2#!"&=4637'&6?627'&6?63276+"p  AM(HhAc++.,     kS$4b!R21'$*2%//&?'#+"&546;276%3264&#-N  NN  N   (80$TN  `  `N  NN  NP  8($6TN  @.6J254&""'54&".546'.#"?62264&"76'.#"?62ΑA7  .d.  7A   % &9((  % & gBq!  !qBgG     &4&&4    #.2"&4"2642654'&"6264&">'&#"32ΑΑU(**3-F7Α776 !)-59%+"&5#"&5#"&5463!235#264&"75#264&"'3'#u  8P88P8  R``L((`((0PB o (88((88(  `((``((`=2##"'&7#"&?6&&/&?6>76327>o )"( C#+M b:  7:& *KJ 6 E=  8= ' %#3hG6! ''92"&4?626'.#"6&"'&276'.#"?62ΑΑp  &   #  #n#  -3  #  & ΑF  )  **  6!)(,048<@!2+"&=4>;53#!"&54>3!23'7#?#3'#73'3'#  07  7   j ` gFrj        B 1``pp``p``pA,2"'.543267>&'&'6768'SS''?D++D?'!FG"%;?QQ?;%$'88'$oo==o"%/&&?6/&6?6%">H6 F&3 &)'H!.L  C>"( 4949/!=  !)19AIQ#546;22"&42#!"&=463264&""&4622"&4&2"&462"&42"&4 @ (8 8(B//B/Mss`` m 8(  (8/B//B3s*5!2#!"&=46;26=4'&5463 @(8 @ 8(C  D-+ @@@8(  (8 (2,:+ *$ '#"/#"&?'&6?627// jj A ( ARr 3f&gDDg&Qg 6r#/;I2++"&=#+"&=#"&=46354+";2=4+";2'#546;2#5P     0`0   hPPPP2"&4264&"264&"6264&"ΑΑC4&&4&Α*&4&&4 (048%2#!>?465762'"&546;2264&"75#75#  ? K  9O8  j8@@@   K -(88(  _@@@@ hT\7&'76?6/&"'&+""2+"'"'"'+"&=46;276;2276;2276;23$"&462 D P(1d"dq 0  0 :&&t&&t&&: %JJ%1B//B/b 9 "& P      `/B//B4j%2+"'"'"'+"&=46;276;2276;2276;23%"'&'5462+"&=4&"35462+"&=4&"#5#p :&&t&&t&&: %JJ% 8P8   8P8          (88(   ``(88(    ` '&?66762%47#"&z   (-6,!2BIg    >@4_6I& (&-h#22"&46/764.?64/&"7626'&ΑΑP!!!!PPRM )) Α-0(( ((00>-  -3#"./.""&/&'&'&7>76676/676%   "" %2" #d   0)"2`>1(CS9"    "9SC(2="3 @# 3 1.7>>327/6'&2#!"&=46;7s-G'I!`+ .#:?f  I;&5463232&264&"`I <8(,< ))$ (8, %"2+"546;5.?4>;23'(4@ @4ru W77W uPP/7;%+32#!"&=46;5#"&?#"&?&5462+&"264#3{ p  p ^1 r(r 1V  `  ` `  a ~  ~   y "5#"'&"#"'&'&767676676'#'&7676763_' $+A A+$ '*$$*?#/++G0:  :0G++/#  % $  (/6BJV^%'"''&7&766267&'&'767*#">"6727&'76'&'66''6'&#"#"#2"&47W$|$W77W$|$Wk% , 2 `!!!! % K54 VV 45KK54 VV 45  # # %%x####( # #   `O%#"&'4.#!"#"&=46764'.=46323!2>5>32W,' ',,  ', $,   ,$  $,   ,$+"&462'&'.=46%2.=476`8P88P> F|  < |E DP88P8#   ")@%K2#"&'#"&547.547&5467&5463021>#"'#"&546320212&*&*(&"*&*&"&(&** (.( &( **x%& (+5HY%+"&=!+"&=&=46?>;2%!'.+":>54&#"!264&#":    "9 9  " N   !06   60!2 & 222 !  #/K2#!"&546;546;23546;254+";2%54+54+"#";;2=32  @  ` ` pp((((@   0 00 0((((*fz7&?6/.?6/&6?>?6/.?'/&'&?&'&?676?676767676%&>76&'&'&/&#"&>76 #  NA 2M  +=   %          { }     {  l)   .M  +=  NA 28+$/ 4 C 4. + ! 5 d   )6! +.O $,04<2+"&5#"&5#"&=46?>;2264&"75#;'#264&" (8 08P88P80 0! 
m|((xM&MY((8(P (88((88( p"z ((```((@Qh!2#!"&=46332'.=4&+!46;2327>=.=46;5462354626&+76&+";327P   2"'%%$4     : D ;     @  *x!..p`%%4$  v*  0 00 ` 3 kS'%"/&4?626/&#";2=37   {U p  `    N6 P@6'8@HPX`%2#"'##"&5475&546323632#0#23'"1"&46;7#264&"64&"2$"264264&"264&"%%%%% %%%%%''&%%&  w     @  `%6% %%%% %6%A A @%6%@b        7AE%'&?67&?'&?6'2#!"&=46;;267!463!2!: : $$ U $$ : :  &&  =@; ; $$  $$ ; ; && pP0&4?62'%/&4?2?/&4?2?   ;  : :  ,jjjjjIIjjII!;G7"&=463546;22++"&=2#!"&5463!2644%"=4;2#  @  ( $   5KK5Oq1     KjK@qOI7  `26A6#!"&='.=4>3235#"&=46;2+325'46?"&v F   88  8920 +$   /  _ 0   0A?>] +>0%#!"&5467&546;&546;2654'6323232#**#*%!/ (8%*O(**(#*%/!8(%*##%+"&=46;2$2"&4%#"&?62  KjKKjK߾ _ & _   3KjKKjk  7%/+"&=&/&6?'.?>546;276   @      @  r 8O O8 NN 8O O8 N +9GUcq2#!"&546354&+"26=4&";2654&+"26=4&";2654&+"26=4&";2654&+"26=4&";26 (88(@(88(@ @ (( @ P !.!!.! P P !.!!.! P @ (( @ 8((88(@(8@ @@@ q8 8!!X!!X q8 8!!X!!X y@ @@@ +9GWes2#!"&=46354&";26754&";26754&";26754&";262#!"&=46354&+"26754&+"26754&+"26754&+"26 (8%%8(@( @ !.! P !.! P ( @ %8(@(8%` @ ( P !.! P !.! @ (8(@%%@(8   8!!8 8!!8    % (88( %P    !! !!   (6HT\j7&>#"./&67632*#"0&'&567%.?>766676.67&'7676.'&6 ]F& #YJ gu$$ dY ,   SaAO6 ,  SF ." ,+ -2 # $D&' 91+   )H! A`) ,'*(O  .6>F3"&'.535.535.53546;23264&"6264&"6264&"  @ F\F %@$@$@ @((((((  &%,99, 2 & 1& 1  %&((d((d((15%2+.#"#."#"&=46;546;546;232%3'#2+/+"&=&'/&?&'#"&=46;67'&?667546;276264&"'2+/+"&=&'/&?&'#"&=46;67'&?667546;276264&"p '0&CBCLC   q SJ 3Q0                ((                ((   """   P `  h @`@                P((4                 P((7;CK%2+"&547#"&547#"&=46;546;546;232%3'264&"264&"p 2B\BDB\B2   q m0 M((<((   .BB. .BB.   @    @``(((( 3K?62"&472#!"&546326/.+";2?33754&+"&#"327;26` 6 6  F  **  55      $*<* @#/%2++"&=#"&=46;&5462&">54(  ` `  -%JlJ%E,     ;5AOOA5;"9 !(:B%#!"&5463!2;;26=326=4&+54&+"#"5!"3   )77)F 0   0 0   0 Z 4  7)@)7v  p p   0 0 @/3B$2"&454+54+";2'"!546;546;232&'5#!"&=33xTTxT & < @7.uPP  TxTTxB 6 L P001` *&0 -9EQ]iu2#!"&546;546;23546;23546;254+";2=4+";2=4+";254+";2=4+";2=4+";254+";2=4+";2=4+";254+";2=4+";2h   (  @  @  ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (   h P PP PH  ( ( l( ( l( ( ( ( l( ( l( ( T( ( l( ( l( ( ( ( l( ( T2#"'#"547>7&54>'./&54;2?6&'&'54&+"+"'&;265Ԗj83AL92    2   zz329JV%      BOg#"'#"54767&5462;2=>54./&54>;2?6'&'54+"+"'&#"'#"&'32654'zV<3+.&zf -  -  Z.+3<@hc9HB^)3B^          6,qO P23#2++"&=#"&=46;546;2`  ` @ `  ` @ @ @    @ `  ` W]ciou}%2#'"/+"&57&'"/&4?&'"&=46367'&4?6267'46;2762'7&'67#677&'7&'6264&"67'367'?&' $  /;  ;/  $ $  /;  ;/  $ +Y%+ l@ L1 C +2:%+ l@ @1  ;/  $ $  /;  ;/  $ $  /;@ 11 % L%+ @ 1w1 % H%+ -T7"&=46;2#546;2+"&2?#!"&=%5!'54767546;>3232     F4```NN   P  r՗ FF  , , !2#!"&546;54&+";26`@`   @ @  52#!"&546;54&+54&+"#";;26=326`@` H  H H  H @ @ H H  H H Z/&='&63!22"&4>54&/&546;2?6'&'54+""+"'&;25?R;#Py^^^-    -  gCQ9 < ''^^^          eoy%2+5#5#5#33#54&+"#53535####"&=46;546;546;54623546235462354623232'354&+"54&+" P P @ P P     @  @  @    0  8   `  ``P P``  p P p    p P p00 @ @1?G%"/&5476;5462;2=462;2=4623226?."62"&4 f88f @       @00 200 = n<76&'&67654'&6323(!QM(;/)V  :)- **    **%5:  U)/`HXWE; )f<G+%74#A !  !   !   !A5% >+G=f)%#!"&5463!20#"/267#"'&54?<14'"'&?&''3276/764/7454&#"#'4"'&32?.547&547"'5!"3   )77)F% =N= %        Z 4  7)@)7v &11&  #  po ! @A .=KY%&546?626"/.=54&?6754&3766=4/&'6=4/&*  ,  PP``PP``l1PP1 OmR 66 "-$ nv~/612#".5'"&5475#"&46307'/&'&6765'&?.5467'&?7>54'&54764'6'76/82 5 P/  ))  /P 8 /< &!M,;;,L"& U ~s?'$= *!       
!* AE  (/(F6$ 33 55 33 $6F(/( t && "" &&'7#!"&=4?62335335332!546;2#!"&=463    @`@`@ @    d % % Z 00 `     @'9=L:767#!"&=72"#".'.'5463%2+5.+54635#%#5463!2#"*+ @ @ ,)" ),   $`  @` @ & 0  0 ! " ` ! @@`  `&  '2=HS^it2#546;2#546;2#546;2#54636"&546?"&546?"&546?"&546?"&546?"&546?"&546?546;2+32#!"&=46;5#"&=46;2;546;2326 @  @  @  @ } s s s s s s    8( ` (8      @       .   .   .   .   .   .   . Ӑ (8@     @8(    CP3+"&%!&5476?2+54&"#54&/#54&"#"&=463#5467 @ CXE++EX  @ @    @ C8+5 5+8C< @  @H$ &H@  @ @! 0 h'&4?6"2#".5'546326=4&"+#".5463264&+&/&6;264&#"/&47632?6#".'&66iR;;%'  #/  )K5*A$  !1 &&!   (8% / q%  &'5;)d$4 &+ d !5K01&4& ( & $'$  $     $&#".'&'32#".'"'#"&463267&'#"'&&'&4>7>6767&'#".'&#"&463232>7'.5462627&5462327>32#"#"'7>"264$"264g  !   (  (   !       *#%6% : %6%#*         h   #8-%%-8#      !4"%%%%%%%%"4!  '    2"&44&'657677ΑΑWAY&s3&+E(ΑgCg g)rG"ir "?3N,/)g /%+54>?#"&5%#54&"#54?54?6m p mp `%6%`0 L L Q d L /  `%%` s K K s/?2#!"&546354&+";26754&+";26754&+";26p     `     `     `    @ @ /?%#!"&5463!2326=4&+"326=4&+"326=4&+"`    @ @ `p     `     `     ("&462'+"&46;'.?67676&6%%6%_#n\- 1&(': ! X !@%6%%6An1"#G!\!./ ! H )R2#"&=46?54?03>26=4#"/.=46226=4/&67621/& Zu M  z&/  M u '? / ` Q  M P ` / ?'  P M  Q1NV%#!"&5463!2"3237232/76&#/&"'"327654#0"#"&463:1324#&5!"3   )77)F  ;/DD/'88'u Z 4  7)@)7x CC`C8N8@$_%"/&=#"&462322654.#"7+"=&'&?6;2654/.54>354;2'&+"d8HVzzz, xT'B'+CCJ  d ,zzzVH8 T<'C&Tx+$OO$5  %57#"&'&6?535#546;272#546353#"&'&67( . W   s - ^)!HA  =W!HG;%//&?'&?67672+#"/#"&=46;2>3; ..  ..  ..  ..   b X,  Q :U ..  ..  ..  ..  0  0 jC 92#"&4632#0&#"3267#"/#"&?'&6?62T +.jj.,  W||W7 DD 7L" "  Ԗ || 6L$$L6  EE &),/25%+"/#"&?'&6;76232#'#3373'7#73'5 k8 * 8k 55 k8 * 8k B( 5p55p8.((.(Y$^^$YY$^^$!7XXX&8!!!&8! 'E762#547!#762'#54&#4?6'76&+'&"#";2?326F &r&  `-` H''''+  +f `'&]z f(((( FRUX^%'37'$2"&54737!;32?327654/7654&+'"#"$2"&54#''73@$Q((| < <  < < s(($,B!!B!!R @   `11 1 1 11 1 1 @   o88887;?267+32++"&=!+"&=#"&=46;5#"&=3355#xP 0 0     0 0 P`` ` @        @ ` `@@@@@@DH%+"&=#+"&=#+"&='&'&?5'&4?5'&6?%35!5!y7       7Y7 ww 7@/ 0 00 00 0 1@2@VV@2@@@@-6/#"&=46;//&?'&?676  Yf  fP-  ..  --  ..  y  Y `.  --  ..  --  %2"&4264&"&264&#"3"&462"&4ΑΑP88(PppP(8SΑm8P8pp8P@4DL";#";#";!!&'.7>/762#!"&=463264&"rip ' FF '  -- [% %@@@`@ MN  ? JJ p%  %`0LT\"&462#!"&5463!2";26=654??6/76/&'&5!"3"&462    )77)FB/ @ GGbbFFbb w  8   4  7)@)7&     ++**@8   #!2#!"&=46;'&?676%3p  5  ))  5t     JJ  88  J-7?G327+"&=32+"&54&"&4632>?"&=264&"264&"#!*    & (833@8P8@  Y  & `  & 8(V'`@(88(@P    7#546;2#54'#5##5#+"&=!+"&="&?63!2p0K5@5K0(000(N      j@5KK5.p pp p B<%+"&54674&5463263276&'654&#"&'&54632W*/!(8$8(, &5T%8( qO#)m_.!/8(1 (8&' A$(8 Pp F)F[DQ%#!"&54674&454632632%&?'&4?'&676276&'&"74632.?$8((84%B."6&R L SS X))X  !\B1#) ; 'z 1(88(&6.B$ & X))X SS Y !B\.#1$D/  '0:EKO7'"&?&=46&63''&4?6#76276/&>2".=7#'3'klMArQ =m__}LPujddd8.BY1b zbWBߜŰ #'&4?66&='46/&5   R ww e&] mt ]&;2+'54632264&"3+"&=#+"&=.5462 % 57  @ @ $`  %$.  0  Y6 pp  1 CI7&?>#"&?#!"&547>7.='&4?'&6;2+"/#77 u*x [ #H0  GUZ-3<<> K @  u W+*)EB<#/@ _6m 2 c  %  .&#"+"'&7'&54>32?547>323HV( #2" (?).F3I X! ) #" (VO?   0@P`p7+".767>%+"/&67622+"&=4632+"&=463/.7676"'&'&?6#''&'&?>2+"&=4632+"&=463'62+"5'67+"5+"5  a% * %@ ` ` `   S*   S *S  $ *I ` ` `  @   .+ h h  @ @ @ @   3$ 3+. 3  Z @ @ @ @   -Ohq;#!"&546;54+";2=4+"&=46;22654/&54;2=4+"+"374+"276=4+"&57#532   ` ,   `$$y 8        `  x7''7  DP`p#"546;2!5'&=;27;2732?>72=4+"&546;2'&+546;2"&=46;2#3"&=46;2#0    $%    # p&       @     X' $@@%5N  (   %?p      &.6/&/&"/&&546264&"264&"Rt + )) + lqQ  0 .. 
0  PvG'%/&?'/&='6276"&47%; Z  $1Z//-  14% Z   1-//-$ 1%4'/mx32#54#32#54;2#54;2#54#54;2!32#54!2+32#!"&=46;5#"&=46;2;546;2326=4636"&546?"&546?%".546?"&546?"&546?2"&546?"&546?"&546?"&546? 8  H ` H  8( ` (8            (   C  2 (    xxxxxxxxxxxx p(8@     @8(p p   p   '   '   ' 0  '   '   '   '   '   ' $,!2#!"&=4637#7>?#?/7?/   o8V@@ @       @ " k @@    JR7#"'.7/.?>2+"&=#"/1"&='&54?>3235463&"&462Q#4   @:!  0 /W  /# \((5e  b" +   P/S  SV k/ @((>F2#+"&=#+"&="'+"&=4632546;2632>264&"E   @ 3z3 @ pPI7  #  `\     GG B^)1   P  AKS&/0+"&='+"/&54?&5457+"&=4636;46;2264&"@ !  @  B   4$+K5x G  sM   &f-! @i d D) 8 8$4!5K   G43+'7'#"&=265%/&/&7%62546;2  (hFN2+32+'32+";2+"&'&76=46;76;2"264267#7` 8(\7  PK@   p  l0(pP8 9  Mwf (82n  P   %*&.PpX   W;7 2"&=4$"6265427&"ړԖPp II 99:R>b;SS;b>/! ** !ZB"&462+"&46;%2+"/#"'.?'.?'&.6?6$((   M  C  6  >  U % (  '%,G*`(()N$ f W2 .W   (5 &2#"&=4&54'!2!!2#!265(P -P(8 P B.(8P @&'p8( .B8(#7?G/&/&6?'.?>76'.5462+"&76"264&"264     &TxT&  ~  ZZ EE ZZ E:"5KK5":   &?6S  M  =  9  Go'&54?>#/#+"&='"&547+"&=4?#/&?6;'&/&?637>23?632/&6?6#'&'   0/=  J7R7J  =/0  4 NE 5  2=$ $ $=2  5 EN s6N H(  Ha N Iw (==(wI N aH  O P  L44L  P H N6 @$,4<DL!#!"&7>=46264&"264&"264&"264&"62"&4264&"   8  I  I  I  P88P8S:J)1     Pp        ppp%6%%6muz$2"&4264&"2&#"#+"/+"&=&'"/&4?&'#"&=46;67'&4?6546;2354762264&"73'#H44H4Nh  3;$Q  ,   8f  oB//B/fn)kp4H44H< 3 3 0   ,   (.$ (/B//B``  (<#>32#7"&547367#'547"&%2+'72+654&+'327ZR@(\gB6jKbP0 *q7OP/!F*#L#D$,P2K5 5w0*=)D(`O7*!/`$,2+"./&"+"&5463264&"264&"`   :    e6%%6%e6%%6%   == @ %6%%6%%6%%6  A[72#"&'&6;232>'.+"&=4635"&=463!2676&#"+".7>#2&'&6;23264&+&'%9:+#5      N  !26( :-06D /+H " t /$,A,"           ' (.G T7. +' (&'"/&4?>7&?67'  M5ZN#L  zZzw  L#NZ5M  ^zZz-W62"&4$2"&4'#"'"'#"'&7&47&66266+&'&#&""#"&5467&54632632:12((((G  2  $ 2 #8(+ < +(8%B. 5%!/(`(((( 2    2 #$P8  8(2 .B!/! B8ER_l%#!"&5467<146326326#"&"'.'&'>32#"'&?>#"'&?>#"'&?>#"'&?>^%/!!/%8(1,+E+T: %2$ % [$ % [$ % [$ % ,!//!,(8(%5#+:P 5.@@@@@@@@+;J2#!"&5467&54632632:62"&546762#"&546762"&54>(88((8%B. 5%!/ "(" "  " "(@8P88(2 .B!/! 232 32   )7Ec6#"'.?>#"'.?6&#"'.?6$#"'.?6&#"'.?672#!"&5467&54632632: @ @m @ @ @ @ @ @S @ @;(88((8%B. 5%!/Q p p p p p p p p p p8P88(2 .B!/! @EP]jw%#!"&5467<146326321&?'&4?'&6762760"1"&7&54632#"'&?>#"'&?>#"'&?>#"'&?>%/!!/$8(1"; GG K##K  % Ib&D$ % [$ % [$ % [$ % ,!//!,(8(5 K##K GG P & @@@@@@@@1Jc|%'.'!'+"/&?67&'.7676326&#/&""?65'76&#/&""?65'76&#/&""?65'5!+"&=#+"&}  6&   K  +*  Q&C      ` @ @    Nl# c # +**Q$    퐐 PP  1U]emy2+"&54327.5>7327&'.5>7&'327&'.5>676.264&"6264&"264&"7264&"   ,>&3=1S?AW3+O-%>&3=+O-%&.T,+O-. 5!6H+O-%&.T,1N.1(A( 4$)    I     0  $ E V>  = t!E     (  / 9 / !)196"&4?6766264&"62"&42"&4   '-9r9DZjKKjKSI   aZ.@>9r9BL ^KjKKjuM  @ 5=RYf53+"&2+"/#".='0&5#".=46;2&"&462753"&5#"&'#"&?'46;#%2+"&5@    :)   &-Y((  @`    0 Q= 8   8+e   `&-`((D `     0 M%2#"&?#".576;27+76'&+76&+"#"&5467&546;2654'632324X; D|+/! D0!/+ &!/ (8&p  ax Ao.!/"x/!.&/!8(&@<Z6+"&=4&'&+"&=466+"&=4&'&+"&=466+"&=4&"+"&=46 SS   oSf   lW   B19T   _N5      +KQ V e tZ tV 3NM8 JrW0    #9 ";T!546;26&/&"?6/76&/&"?6/76&/&"?6/2'.=#+"&=#+"&=!26=463 ^BB^       )) ` `   @@B^^,    @'1 +p PP p @ /G%2#!"&=4632+"&=463!2#!"&=463'"&4632632632+"'p  @     P3!2?6/&'&`   @    @ J_) `  ` @@@@  K^)@,X%#&'"'"'"&=46767>7>7627#&'"'"'"&=46767>7>7627#&'"'"'"&=46767>7>7622+$)l+)l+) !  X  Y  !+$)l+)l+#, !  X  Y  !+$)l+)l+#, !  X  Y  @                         %="&462#"/&4?7#"'.?'6#5&/.>2?6B//B/   0 .=u. 0   )) )'b') /B//B' ! @  9339  @ ! '       2:B!46762+"&547"'"&54>7&5!5463264&" 4&"2o0+ !h 0<"/B/!F!/B/<&`6`" \   @K8+!//! 
!//!!8K@&B  go"&'62&.'632767632&7'&'&'&''&4767>.&76767&547632>54'&6264&"  'V' %, 54 ,$k6 +`2+=66>+2`+ 6 ) F2!7  ) ((P  P/C**C/P = Xb   bX= ?13H!8"0@(( 4I7&=46;22654.'&=467+"&5.'.=46+"&5.'".=46:D <)Ej 0 ( 0z   a Sv   S9 d=)< WD   2 z a   `vS :R   7!#!"&7;26=4&+"%2!546;546;23546;2@ ` ` P@0     ` ` 000 00 0 7!#!"&73!26=4&#!"%2!546;546;23546;2@    P@0     @ @ 000 00 0 ")-15;C#"/&67%>'&#"#"/&6?63267'7'?'?'77&'67')6  a     ! %,.O =<<Q - $=d9w!6  7 18>='''.  *'&&'&7276/>76/&67*."'f9 13  7SDX!"&#b.&29 1B 2 8 &"!XD +;K[_o%#!"&=4?>;5#"&=463!2+32;26=4&+"";26=4&#'#";26=4&'35#26=4&+"3;26=4&+"54+";2754&+";26754&+";26 @  U`  `      0  (        0   E[  [ @ ` ` @   G   P     @  x  W  +@V72/"/&4?'&637"&=46762#/&=46;26+"&?'&4?62  c c!   c c! c c  p  p !c c p !c c @ p !c c c  c! p   p  c cB C#'32%#73#5"&?6;2++"&=!+"&='#"&=46;'!0ac  ~~  c        ` ` @         %?N^#53##7/32210'&/&"&?6;#+"&='#"&=46;'%"&54676>'&'&'32=d~! c  '  c51    %4^^D4( '?3&XH > `  ` K7H6    !`A]]A&m/$(?/S!('132+5##5##5##5##"&=46;546;546;232 P @ @ @ P 0 0  0    0 0 0 07>Eeoy7#"&546;'&?6'&6?67>763253#"&53#2#5+#546;&54632632#3&#"7654&#"36            )   ( 7!!7 (5  5   `   66   ``   ``  ,KK, 0  09=A%'&6?'#"&/#"'/&7>7.?>76767'7' ' (?  ?( ' '%W rr W%'_$p$_C h.'KK'.h C  fW( 00 (Wf  G'@..@'2#!".'&63!!889e@cs2"&44&#"+;2=732;2?6;2;2=4?6=46;2+";2;6$;2?6=4&">7#"/&+'&+";2ΑΑuS(    %       }5W  +  + ΑgSu            !  8- %   ` %2#!"&=463%2#!"&=463                +"&546;2+"&546;2`         0     )1"'.767>767>7&4?62264&" D  7#mT #&K"6 D  ((  D 6"K&# Tm#6  D (("/&6?>'7'6$!!$"b&`0`"&a"v%i((i%#W@@h#@j"&4622462"&"&462&54'&>367>'&'&636762'&'.7676&'"'6'&5<546((@(((((@  72   94 27   49  ((((`((=3    :2   1:   +3   /7%+"/#"&'.+"&=4>?6264&". 3 ./   7") %B0 0%zt( @  &R Q?bU;K 6!8"'"/"/"'&63!2WB. ". ,$ W,l}@ '08!>32#47%#534'3+#."#553#"&62#54@'~J`Bp?B 2:F:2 P8~;EC=_=C@`  ##``8(``(%2#!"&=463$!'&>76p  e(H0@te  @ @ "5W:e@:(:B&'&'&6;2+"7&'&'4>;2+"2++"&=463264&+#  n# .BB.8((8 p.#1 , #1, B\B(88( (` "/%4'7>#"'.?2&"&462%"&5>76H&Q,6 Q7|7Q-(( 6,Q&, `8 D   (( 8` , &.6>%4'7>#27"'.7'"&7676"&462"264"&4628*4$*"H"C 4*ΑΑlll#C*B6 C  y B*C#Αllll )9V"&462"&462++"&=#"&?6727%2+"&5463++"&=#"&=46724&&4&Z4&&4&? 7 0 7 .8   |    P  8@&4&&4&&4&&4 h  h         7;?%"//"&46327'&67>7'&4?62762%7'7' a  P k l'Z( Q a  P0  / 0EIEIEI  a Q (Z'l k Q  a Q0 /  09EIEIEI-B7.7>7&5462#"'+"&5.'.=46+"&5.'".=46t9b u z   a Sv   S9 fta9u >z a   `vS :R   2#!"&575#35#35#@&&& 000&&&@`````` #*.546;#!"&575#354&+326=#35#;5'!5%35#"&&&@ @ @@ &&&@@ @@ @@ @`@@@ @(2O2"&42+"&46;264!6"/&627"&4?"&46;2"1"&='&6?|(() ` ` 'D  D =\  0  R=N((\    D  D \  1S=Y  SN8$C"&462#"'%.>7'&776'&76776/&//'((%x   .Kk(  1# >R04  : s/`((fEK5( #JF a  3   9 @.BN"&4622#!"&46;0&5&6?63232647'.?'&3737#"/d((*  +6 (%,G*, F U % '4> +!  = `(( *   (5 "Q2 .W  [)N$ LC%#!"&=463!2654/&?6%5"&46;2;26=32###5##5.e,    &" /= &`  8(@@*6a&'    w ,&@`(80004 E6Wy2#"'#"&'&>7&542654/&54;2=4+"+"3754&+"'&+";2=2?;232654/&54;2=4+"+"3Ԗj83AL9        0   zz319JV  h ## hD88D  EQ"&462'76#"'%&'&>&'&6?54?'&/&>#"'&7((TB4Z  !& J#*  ?o   <U`((?'f" !    5(:   4S --= T\dlt+"'.547&507'.?>54>;2167&5462704=46;276$264&"264&"6264&"6264&"7654&"6264&"! #%c !&,  8 8P8 8     )          ( 7 "-.Q C(?, 7  . !(88(" .  
*    7  7  O  2  [`l62"&462"&462"&462"&4"/&=##!"&54>7546;546;2354?623'264&#!"3n^^^$ %)+ B..B O@)% $@zD:H#% $*:.BB.( [p 3:*$ % f S`@((2++"=#"=43%2#!"=43t 8 h  8 8 8 8 %19#+"&?.5475#"&=463!2;2=4+"264&"p /((/  ` 006tRRtR2TGGT2   /9A%"/.7'#'76'"'&7>76'&"&4?264&" 5  u k>`@k04 R*I%J DK   |5%u4  5 u0k@`>k 6R=+ KD J$J R {%5)-15="&462232#!"&=46;5"#"'&67%65#%35+3"&462-    ````c <\  S;  `````` "&54676>'&'&'32D4H_I9* $:\K8+0+__6D5: Y' Cu ((8EA,X7'&67670767>767632%"#"/67>7>7>767  5#$  $7E #3 %.&% L  $7E %C( %.     !5#p  !# J $'  % %&! J6'/  %  &!! !2:%#!"&5463!2;;2=32=4+54+"#"5!"3   )77)F808808 Z 4  7)@)7088088@@2##!"&5".546 AoH(# 2;    >k 5!#!"&5!%630 @ ,Y{  ZA,H3#!"&=26554+54+"#";;2=327#"/&"#"/&547%62  ` ;808808   M D088088 02#"'#"547>7&5454+54+"#";;2=32Ԗj83AL9`808808zz31:IVn088088%."/&4?627"/&4?6?7'76 7-nxf fn-7 D P  7-nf fxn-7{ D  P 2"&546b]2pp2u7PppP7 '/7?%2#!"&4632#!"&=4637"&7>2#&"264&"264"264` && ~~>      (( && 42BB24p  )     @%+"&/&=46?2=4>3762=463212=464/     '$ S4 J L   2 #%!5467546;272#!"&=463@B5) ` )5B   @@;aRp pRa     AKU#"&5467>264&"2767>7>7>7>76'.?6&/  "!/ ` !!(   ' '    ' ' | 3 3 ! /!" ` i  ' '   ' '  3 3  2#!"&46;&5462'!"&pTxTcc ((77'#5'6326'&54?632Kk%KjL!!1$ 56YK$( '"G@)!.!#+2-,&B9N" + $7&.6.'7>#"&5476264&"6264&"264&"Zb  9R] 3XK rC{ _Y ]P9 qJVi L{ 7!+"&7;;26=326/&"72#!"&=46;76;25E 9   9 Z ` x r Sp p^       7!#!"&7;;26=326/&"72#!"&=46;76;2 [ 9   9 Z ` x r Pp p^      7?N7"&547676=7+"'7;;2=32=4+54+"#"26=#!"&5467:  R%^%,(/B/6K""K6_ <$)1001)$< ] !//! N6""6N_-#"&5#+"&=46;546;23546;2+ܘ` p` p \      '/7?G"&462'"&='&54?63232+"2"&4264&"$2"&4264&"((4);*R p G5  @ 6jKKjKf4&&4&jKKjKf4&&4&`((!2   o6  `9 KjKKju&4&&4KjKKju&4&&42#!"&5463#3'#335#35#   `   @/?O_o/?O72+"&=46;2+"&=46;2+"&=463!2+"&=4632+"&=46;2+"&=46;2+"&=46372+"&=46372+"&=4632+"&=46372+"&=4632+"&=4632+"&=46;2+"&=46;2+"&=4632+"&=4632+"&=46372+"&=46372+"&=46372+"&=46;2+"&=463                                    @                                                               `                                          `         `         /?O_o72+"&=463#2+"&=46;2+"&=46372+"&=4632+"&=4632+"&=4632+"&=46372#!+"&5463   @                                           `     `          `     $,2/&?#"&'&6&546766264&"a=Z { %A'  #*=Z { Q<  #D@Q<  #*(E- { Q<  #*=Z { .6Lj7'&6767>"2+"&=46;7>;2264&"%2"&?#"&?6;26"&46325"&46325467ua5 5a    0H[,,\ > L'&5%%p&5%% c> >c`    ,,V Sk 3`(0l(k #%#"54>?6327632  h0 <h21 Vh  p 12h< /2#!"&5463454/&#".'7654/&#"#327P FA& A x ` &AF  x (4@PX_2+##546354+";254+";2=4+";2=4+";22#!"&5463"2645''`   H   `         M` @ @@` g    q  q  0   @`` @ ;?%2+"&=46;7#"'&54?6325463!2+"&=#'7#P   C 5s    v1C v     P4*Y, `  4`5OR72"/&6;46;27"&=4?#"&=46;232#+"/#+"&54?6;23' PP 0    =8  =8   G  ; * 4 ```0 Ѐ  F    F     e05OR"&?62++"&5"&=4?#"&=46;232#+"/#+"&54?6;23' PP 0    =8  =8   G  ; * 4  `` 0@  F    F     e0/?U"&=46;2#"&=46;2#2#!"&=4635"&=46;2#!2"/&6;46;2 @ @       PP 0   `             @     ``0 /?U"&=46;2#"&=46;2#2#!"&=4635"&=46;2#%"&?62++"&5 @ @      ` PP 0   `             @     `` 05JR72"/&6;46;22+"&=46;5#"&54?6;26&/&767.7>264&" PP 0    `   0 6(>*,   $+ !  ```0 @     @  p} 1' 3>=&\  5JR++"&5#"&?622+"&=46;5#"&54?6;26&/&767.7>264&"kP 0   0 P) `   0 6(>*,   $+ !  
` 0`     @  p} 1' 3>=&\  A'DG\%"&=46;2#'3264&#3264&#'+"/#+"&54?>;23'"/&4?62762 K 2 2!3(  (8  D   X   D  ?.p- 7   * # +0`0 $$   E p -8 @&2#!"&4623.546264&"264&"?@ABCDEFGHIJKLMNOPQRSTU VWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~"      !"#$%&'()*+,-./0123456#789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ glass-martinimusicsearchheartstaruserfilmth-largethth-listchecktimes search-plus search-minus power-offsignalcoghomeclockroaddownloadinboxredosynclist-altlockflag headphones volume-off volume-down volume-upqrcodebarcodetagtagsbookbookmarkprintcamerafontbolditalic text-height text-width align-left align-center align-right align-justifylistoutdentindentvideoimage map-markeradjusttintedit step-backward fast-backwardbackwardplaypausestopforward fast-forward step-forwardeject chevron-left chevron-right plus-circle minus-circle times-circle check-circlequestion-circle info-circle crosshairsban arrow-left arrow-rightarrow-up arrow-downshareexpandcompressexclamation-circlegiftleaffireeye eye-slashexclamation-triangleplane calendar-altrandomcommentmagnet chevron-up chevron-downretweet shopping-cartfolder folder-open chart-bar camera-retrokeycogscomments star-half thumbtacktrophyuploadlemonphone phone-squareunlock credit-cardrsshddbullhorn certificatehand-point-righthand-point-left hand-point-uphand-point-downarrow-circle-leftarrow-circle-rightarrow-circle-uparrow-circle-downglobewrenchtasksfilter briefcase arrows-altuserslinkcloudflaskcutcopy paperclipsavesquarebarslist-ullist-ol strikethrough underlinetablemagictruck money-bill caret-downcaret-up caret-left caret-rightcolumnssort sort-downsort-upenvelopeundogavelboltsitemapumbrellapaste lightbulbuser-md stethoscopesuitcasebellcoffeehospital ambulancemedkit fighter-jetbeerh-square plus-squareangle-double-leftangle-double-rightangle-double-upangle-double-down angle-left angle-rightangle-up angle-downdesktoplaptoptabletmobile quote-left quote-rightspinnercirclesmilefrownmehgamepadkeyboardflag-checkeredterminalcode reply-alllocation-arrowcrop code-branchunlinkinfo exclamation superscript subscripteraser puzzle-piece microphonemicrophone-slashcalendarfire-extinguisherrocketchevron-circle-leftchevron-circle-rightchevron-circle-upchevron-circle-downanchor unlock-altbullseye ellipsis-h ellipsis-v rss-square play-circle minus-square check-square pen-square share-squarecompasscaret-square-downcaret-square-upcaret-square-right euro-sign pound-sign dollar-sign rupee-signyen-sign ruble-signwon-signfilefile-altsort-alpha-down sort-alpha-upsort-amount-downsort-amount-upsort-numeric-downsort-numeric-up thumbs-up thumbs-downfemalemalesunmoonarchivebugcaret-square-left dot-circle wheelchair lira-sign space-shuttleenvelope-square universitygraduation-caplanguagefaxbuildingchildpawcubecubesrecyclecartaxitreedatabasefile-pdf file-word file-excelfile-powerpoint file-image file-archive file-audio file-video file-code life-ring circle-notch paper-planehistoryheading sliders-h share-altshare-alt-squarebombfutboltty binocularsplug newspaperwifi calculator bell-slashtrash eye-dropper paint-brush birthday-cake chart-area chart-pie chart-line toggle-off toggle-onbicyclebusclosed-captioning shekel-sign cart-pluscart-arrow-downship user-secret motorcycle street-view heartbeatvenusmarsmercury transgendertransgender-alt venus-double mars-double venus-mars 
mars-stroke mars-stroke-v mars-stroke-hneuter genderlessserver user-plus user-timesbedtrainsubway battery-fullbattery-three-quarters battery-halfbattery-quarter battery-empty mouse-pointeri-cursor object-groupobject-ungroup sticky-noteclone balance-scalehourglass-starthourglass-half hourglass-end hourglass hand-rock hand-paper hand-scissors hand-lizard hand-spock hand-pointer hand-peacetv calendar-pluscalendar-minuscalendar-timescalendar-checkindustrymap-pin map-signsmap comment-alt pause-circle stop-circle shopping-bagshopping-baskethashtaguniversal-accessblindaudio-description phone-volumebrailleassistive-listening-systems#american-sign-language-interpretingdeaf sign-language low-vision handshake envelope-open address-book address-card user-circleid-badgeid-cardthermometer-fullthermometer-three-quartersthermometer-halfthermometer-quarterthermometer-emptyshowerbathpodcastwindow-maximizewindow-minimizewindow-restore microchip snowflake utensil-spoonutensilsundo-alt trash-altsync-alt stopwatch sign-out-alt sign-in-altredo-altpooimages pencil-altpenpen-altlong-arrow-alt-downlong-arrow-alt-leftlong-arrow-alt-rightlong-arrow-alt-upexpand-arrows-alt clipboard arrows-alt-h arrows-alt-varrow-alt-circle-downarrow-alt-circle-leftarrow-alt-circle-rightarrow-alt-circle-upexternal-link-altexternal-link-square-alt exchange-altcloud-download-altcloud-upload-altgemlevel-down-alt level-up-alt lock-openmap-marker-altmicrophone-alt mobile-altmoney-bill-alt phone-slashportraitreply shield-alt tablet-alttachometer-alt ticket-altuser-alt window-close baseball-ballbasketball-ball bowling-ballchess chess-bishop chess-board chess-king chess-knight chess-pawn chess-queen chess-rookdumbbell football-ball golf-ball hockey-puck quidditch square-full table-tennisvolleyball-ball allergiesband-aidboxboxesbriefcase-medicalburncapsulesclipboard-checkclipboard-list diagnosesdnadolly dolly-flatbed file-medicalfile-medical-alt first-aid hospital-althospital-symbol id-card-alt notes-medicalpalletpillsprescription-bottleprescription-bottle-alt procedures shipping-fastsmokingsyringetablets thermometervialvials warehouseweightx-raybox-open comment-dots comment-slashcouchdonatedove hand-holdinghand-holding-hearthand-holding-usdhands hands-helping parachute-box people-carry piggy-bankribbonrouteseedlingsign smile-winktape truck-loading truck-moving video-slash wine-glassuser-alt-slashuser-astronaut user-check user-clockuser-cog user-edit user-friends user-graduate user-lock user-minus user-ninja user-shield user-slashuser-taguser-tie users-cogbalance-scale-leftbalance-scale-rightblender book-openbroadcast-towerbroom chalkboardchalkboard-teacherchurchcoins compact-disccrowcrowndice dice-five dice-fourdice-onedice-six dice-threedice-two door-closed door-openequalsfeatherfroggas-pumpglasses greater-thangreater-than-equal helicopter kiwi-bird less-thanless-than-equalmemorymicrophone-alt-slashmoney-bill-wavemoney-bill-wave-alt money-checkmoney-check-alt not-equalpaletteparking percentageproject-diagramreceiptrobotrulerruler-combinedruler-horizontalruler-verticalschool screwdriver shoe-printsskull smoking-banstore store-altstream stroopwafeltoolboxtshirtwalkingwalletangryarchwayatlasaward backspace bezier-curvebongbrushbus-altcannabis check-doublecocktailconcierge-bellcookie cookie-bitecrop-altdigital-tachographdizzydrafting-compassdrum drum-steelpan feather-alt file-contract file-download file-export file-import file-invoicefile-invoice-dollarfile-prescriptionfile-signature file-uploadfill fill-drip fingerprintfishflushed 
frown-openglass-martini-alt globe-africaglobe-americas globe-asiagrimacegringrin-alt grin-beamgrin-beam-sweat grin-hearts grin-squintgrin-squint-tears grin-stars grin-tears grin-tonguegrin-tongue-squintgrin-tongue-wink grin-winkgrip-horizontal grip-verticalheadphones-altheadset highlighterhot-tubhoteljointkiss kiss-beamkiss-wink-heartlaugh laugh-beam laugh-squint laugh-wink luggage-cart map-markedmap-marked-altmarkermedal meh-blankmeh-rolling-eyesmonument mortar-pestle paint-rollerpassport pen-fancypen-nib pencil-ruler plane-arrivalplane-departure prescriptionsad-crysad-tear shuttle-van signature smile-beam solar-panelspasplotch spray-canstamp star-half-altsuitcase-rollingsurprise swatchbookswimmer swimming-pool tint-slashtiredtoothumbrella-beach vector-squareweight-hangingwine-glass-alt air-freshener apple-altatombone book-readerbraincar-alt car-battery car-crashcar-sidecharging-station directions draw-polygon laptop-code layer-group microscopeoil-canpoopshapes star-of-lifeteeth teeth-open theater-masks traffic-light truck-monster truck-pickupadankhbible business-timecitycomment-dollarcomments-dollarcross dharmachakraenvelope-open-text folder-minus folder-plus funnel-dollargopuramhamsahaykaljedijournal-whillskaabakhandalandmark mail-bulkmenorahmosqueompastafarianismpeaceplace-of-worshippollpoll-hpray praying-handsquran search-dollarsearch-locationsockssquare-root-altstar-and-crescent star-of-david synagoguetorah torii-gatevihara volume-muteyin-yang blender-phone book-dead campgroundcatchair cloud-moon cloud-sundice-d20dice-d6dogdragondrumstick-bitedungeonfile-csv fist-raisedghosthammerhanukiah hat-wizardhikinghippohorse house-damagehryvniamaskmountain network-wiredotterrunningscrollskull-crossbonesspider toilet-papertractor user-injured vr-cardboardwind wine-bottlecloud-meatballcloud-moon-rain cloud-raincloud-showers-heavycloud-sun-raindemocratflag-usameteor person-booth poo-stormrainbow republicansmogtemperature-hightemperature-lowvote-yeawaterbaby baby-carriage biohazardblog calendar-day calendar-week candy-canecarrot cash-registercompress-arrows-altdumpster dumpster-fireethernetgifts glass-cheers glass-whiskey globe-europe grip-linesgrip-lines-verticalguitar heart-broken holly-berry horse-headiciclesigloomittenmug-hot radiation radiation-altrestroom satellitesatellite-dishsd-cardsim-cardskatingskiing skiing-nordicsleighsms snowboardingsnowmansnowplowtengetoilettoolstramfire-altbacon book-medical bread-slicecheeseclinic-medicalcomment-medicalcrutchegg hamburgerhand-middle-fingerhard-hathotdog ice-creamlaptop-medicalpager pepper-hot pizza-slice trash-restoretrash-restore-alt user-nurse wave-squarebiking border-all border-none border-stylefanicons phone-altphone-square-alt photo-video remove-formatsort-alpha-down-altsort-alpha-up-altsort-amount-down-altsort-amount-up-altsort-numeric-down-altsort-numeric-up-alt spell-check voicemail k%لRلUrclone-1.53.3/docs/static/webfonts/fa-solid-900.woff000066400000000000000000002773401375552240400220460ustar00rootroot00000000000000wOFF~ \J=FFTM0tfGDEFL*OS/2lO`Ccmapx r*b3{gasp4glyf<KfheadS56hheaT $ChmtxT@aJlocaW chmaxpa !Snamea +""postc.|"xc```d٪?t˂ ( K xc`aØ2H200012304|`1 zgBj )FxkPVPgy51DmҚFMFi%j&1MEq"X(*b#[% RR1x{Nۏm>?;y>>ec>ݭ B_Z `ʃy{xgƻ<x$Sk{}Юci>o_[vW@5H UOUF1jTQ*ZMRjl5GU -Ze@T{U*WꀪQ'TjRNq6;۝]NSt;5q֩s&sݹ}=u?=X?#(=F?szx)z^zz.e\G1}BMu>[E}UKݪoˀa&t1ᦗyҩvD6~M䍯vhOd'ruGOd^|GgDD~r_)Rn@do Kd3/rI4bkMDD0USZ]"ۻ D>oD.pܔ"}}A> Th@ ^‹xl:?X V`^܃Ÿ63bn-y sq#`6.L8pp&8şc4 'b$Q8#i?# {`'dHMv2)&`9v5&NZvUC`X)+a.e fX<˾,v,j(_ӴaOtHsh6͢h&}K3h:MoӷJ&]N%4&Et!M 
|:h,tqt,}Їh/ڃ>D郴;}O0ڙv2  V-p@ (2a7.@>A.lȆ 2!ކ@2̇y; s`&"̀D0PAGhOn!Wer|IZH3i"u%'g&I $''cd #I.y&$E$Fid*%d4JAq҇om7'AZwXykbϏ(x G}(W5===s;ޣյc -G^۲%c0M q a8'$\/"{_U3=+>ihs9X}&sQ1܎r\USwЍw|w͍(J]56?q28r<۬ϸ )Gf[fȺBt,_>̃dlyqI.qFraw"H&P@y9[.3w~C7 :fMXy^a/OrQdWJՌ_@3ϓ;>:~xhfn|$HvϏ&y A9Jyv,Q8cʩxp)# SFk䳾΅u6y>K܁z\UZJ=~\MiqUIP$ " bR_0 4'a4 addMşɄaIxa3~A c?k1 Gd)uOG[4ZնQ3$$Xhxݰ8eE @Opeh{e (alh'vf2t @_@*wf~N8yRW^K(~b_/HB~gD_2F4_V)o^! m:Pt IQ |VR(r <VZ{Tr6ap6q.D>.GClZ訉fEXx˃yau1uњ=< U)-f#7s93z9væœS֕pݭ~N;dŐrGޔ;{ӑo{ ~o!e|^a^G;(ݷ.2,[M]:6BD-C?#)G*O8눢$rܱMw;0hK<-a)7u)%Y%"7=@:fca%)tb6@M7t5A!*~U}bx x Gu5W7iYﺺvL9ZZn>򊁄08pT`^!6<9oVǏ,);t>SpX Aj I`= HZ•q9k?-W-;qH]W]kq1;6]$sUq!pCޘ Ӆw`̡,t/5955 `B1$FQG1 W-2#EJk`c5aԇ^[ i4C~HSL~$g>z[~&wC7ה'w`n/I,F͌ ġXM:ɌfW1Q7YЯ2319~[k.Yv,)*8?%cʐt}An[ڬRycB$WvݻCqtQOnQEݩN6=٪9umT޷Tdr΋*O%ǒX~eRՖC#lnp_hlX EqۉJhcVC .AE V+ktr\{9n^RJny-8K[ 0 {K N]%RGU5hZ%RtSiE+_#\]neOjo8po7*dۡguEҵsv#ˎ\.glF[EIuϞ [3tîALJ̭iGwBOb7hVXA1ч%@APg-d&RJ!d)5TvL1WJLǩti5dCP,/IRұf{w 1F+w;}p# 4}G 1NQ'szgggQs1yT"f>]zY:^0 㟅R8{Qʗ]WE($N8ݐv!ԨgMTL0pݬBj3B?DA=^.ab$ÎD!.$$W4Es}ŲWWS;K\]#) rqxu0;SS:P" @A[v-M{hЏ@:kzi=@Ȇ#p Cvcx33Wx2qs^%(K).)^hCsgU<ԺsKs]׎Άv6FIɁKQ!xY w%)qW38z`U:y!ѥit p Ëu4Nv3ZL^cX%\ϿY} J"7_Rsp~@Xz>|OW[|tQ{sR7՞ݓ8?^:@5ik>p{5eIZƎNw21˰Ǭd`z-} i](mOڂM2ÚfkO>&9o5#P ߊV9GQQ9r_?ySy8OrzNQY1H');]sc43UC©SBH/v&pY<:҈S"B1r:CפkТBə'.喋-,O T1;Ϸ3D9oZ͉N'tpUS]@y|ɓ;mΓS n<No G59\Ă$G=eY ? 7[F(o$TeY(0,U_Ø|Dw*!d'åRI88P8`sK ԠF/܌h(Q#rt0dyq:kNAɬidZ X#YhYml5cqIpC,ɴ]H"Yucd6gtfsss lzu. I F|ؙVyV>vZ~^vh``<TLP̲Sgih{OR`+DvN 'cg1l,9nhԷkn^ *^2q>NO(zSDw U;)zONO̧~bva&kθG3sxK7 ]e7ue[Er2KM^֠9H2Ԍcb7w/Ed\'5i,/UOp(j *qiȼAPhvt_-TZYp$E_ϳmb4E;hkZy*sRMծ[꧆./,nRR= zOnq6Jja ?"MȁjЙ޲|BL'I3Lwp)Z-V+bC0FEbo)J1SԇeZRrb\"4[dqA^L1%&-(QA1hoaVmqs%ż٬x}{|ۯy+yRigZl! ߧ)#-`LȒt^6`ۻ89Q%uKޡ O%FFX|PAMvIhzx- HP^ $0d> 2 K4@b7yIxf FXZ b%A 4p&;o󀴅Pf'gQl٦%9Mg#vn`r31=f8 >XC{FSPZ%؉ Xg߈3ͬBPǝ?D!/ƿ-=kj?'ox0<] ̸>!0A yK;?QH7y9G2nf?UI9x0\TOn>&,+A {dTg15Pj.mRzxy`BiI&w,ʘwpibAgLsuvq`(C I];9pE޻%c*(ҶM"r+jbww4w9d:dT z 4]*M6D\rF'HӃSo(*)˨-Ͼ[FބڑӜ5",_UaR;SwSYC_j92J{&t> c" d}a;q0b%^2*ƿN_^;x,BBZj$R V:S|DNc8Sp}Ѯ+P29@=I|=LV7N В/_dOH3D$d~y EM "iQSMt$Hd fT˺Twj>&R.pb(i8|TrFt(rYIJۂ/p藯 B*`֣6ªl`Q BX听BfG0 FPMa4>T5EbzAT z,+i[}֬VPC^;n3`RHT!w+Bބ:;`MD‘ 0%at@rnbG}|B,\џ.JS퓉yA[}ǒI.D0Z_8ֹZ2ٜoLWo3|;s0)"]Wr=u*Z,U]fYo'{K+H{Ff@cɲkjQ:1FVqM&5$D'MmdGF ?Ǟ c'K`.Sje Kb6?\KFF)]ej'Y:]u.Թ/fS؛.Z9 ssS7NZkM*TWӥZTQ]Sӥ1S}}yw͟קhS(6Be@bIHr8 ݻ6dyw>Ch/c^,߶}r<e :D$!jW m*VTuC %|%( Oe4{ЭÄ~NvӦyn}2lUzk~G:'mkSQɗxS6YӨP$K1gJ x8uק݈,KCkOeF_ *d5hwO|+_S{}PaWSxvh = MEϞ3y"pR*e+[2Lf8mߊhϽ"g}jߢǩL]*>*B6 +FpXV5|'?~DbMAo'$3΂?+7 4QUU7Xktt]N&wP!OU)3sNp7`IU`1B.`Ki?0=0l،˷sGdc%ҥRz Tݲïy{QZmXx*F,^N{q\(ڳh~A*t1CvtP NU`Ss2䤚yB65VKޮ&&'NkoYϽ\۝܊˘3F5J(։/åptxE5b ⁨bdhtgwe|$GKJ"PH}t8F( ۥ.sMnh}ay-oM!Nhi@,֟JP =PW+ihQ:MG)Z 42Cd(4(PHQ)ET$rzM[H{ܼה݌/! 
rclone-1.53.3/fs/000077500000000000000000000000001375552240400135045ustar00rootroot00000000000000
rclone-1.53.3/fs/accounting/000077500000000000000000000000001375552240400156365ustar00rootroot00000000000000
rclone-1.53.3/fs/accounting/accounting.go000066400000000000000000000362261375552240400203270ustar00rootroot00000000000000
// Package accounting provides an accounting and limiting reader
package accounting

import (
    "context"
    "fmt"
    "io"
    "sync"
    "time"
    "unicode/utf8"

    "github.com/rclone/rclone/fs/rc"

    "golang.org/x/time/rate"

    "github.com/pkg/errors"
    "github.com/rclone/rclone/fs"
    "github.com/rclone/rclone/fs/asyncreader"
    "github.com/rclone/rclone/fs/fserrors"
)

// ErrorMaxTransferLimitReached defines error when transfer limit is reached.
// Used for checking on exit and matching to correct exit code.
var ErrorMaxTransferLimitReached = errors.New("Max transfer limit reached as set by --max-transfer")

// ErrorMaxTransferLimitReachedFatal is returned from Read when the max
// transfer limit is reached.
var ErrorMaxTransferLimitReachedFatal = fserrors.FatalError(ErrorMaxTransferLimitReached)

// Account limits and accounts for one transfer
type Account struct {
    stats *StatsInfo
    // The mutex is to make sure Read() and Close() aren't called
    // concurrently. Unfortunately the persistent connection loop
    // in http transport calls Read() after Do() returns on
    // CancelRequest so this race can happen when it apparently
    // shouldn't.
    mu      sync.Mutex // mutex protects these values
    in      io.Reader
    ctx     context.Context // current context for transfer - may change
    origIn  io.ReadCloser
    close   io.Closer
    size    int64
    name    string
    closed  bool          // set if the file is closed
    exit    chan struct{} // channel that will be closed when transfer is finished
    withBuf bool          // is using a buffered in

    tokenBucket *rate.Limiter // per file bandwidth limiter (may be nil)

    values accountValues
}

// accountValues holds statistics for this Account
type accountValues struct {
    mu      sync.Mutex // Mutex for stat values.
    bytes   int64      // Total number of bytes read
    max     int64      // if >=0 the max number of bytes to transfer
    start   time.Time  // Start time of first read
    lpTime  time.Time  // Time of last average measurement
    lpBytes int        // Number of bytes read since last measurement
    avg     float64    // Moving average of last few measurements in bytes/s
}

const averagePeriod = 16 // period to do exponentially weighted averages over

// newAccountSizeName makes an Account reader for an io.ReadCloser of
// the given size and name
func newAccountSizeName(ctx context.Context, stats *StatsInfo, in io.ReadCloser, size int64, name string) *Account {
    acc := &Account{
        stats:  stats,
        in:     in,
        ctx:    ctx,
        close:  in,
        origIn: in,
        size:   size,
        name:   name,
        exit:   make(chan struct{}),
        values: accountValues{
            avg:    0,
            lpTime: time.Now(),
            max:    -1,
        },
    }
    if fs.Config.CutoffMode == fs.CutoffModeHard {
        acc.values.max = int64(fs.Config.MaxTransfer)
    }
    currLimit := fs.Config.BwLimitFile.LimitAt(time.Now())
    if currLimit.Bandwidth > 0 {
        fs.Debugf(acc.name, "Limiting file transfer to %v", currLimit.Bandwidth)
        acc.tokenBucket = newTokenBucket(currLimit.Bandwidth)
    }
    go acc.averageLoop()
    stats.inProgress.set(acc.name, acc)
    return acc
}

// WithBuffer - If the file is above a certain size it adds an Async reader
func (acc *Account) WithBuffer() *Account {
    // if already have a buffer then just return
    if acc.withBuf {
        return acc
    }
    acc.withBuf = true
    var buffers int
    if acc.size >= int64(fs.Config.BufferSize) || acc.size == -1 {
        buffers = int(int64(fs.Config.BufferSize) / asyncreader.BufferSize)
    } else {
        buffers = int(acc.size / asyncreader.BufferSize)
    }
    // On big files add a buffer
    if buffers > 0 {
        rc, err := asyncreader.New(acc.origIn, buffers)
        if err != nil {
            fs.Errorf(acc.name, "Failed to make buffer: %v", err)
        } else {
            acc.in = rc
            acc.close = rc
        }
    }
    return acc
}

// HasBuffer - returns true if this Account has an AsyncReader with a buffer
func (acc *Account) HasBuffer() bool {
    acc.mu.Lock()
    defer acc.mu.Unlock()
    _, ok := acc.in.(*asyncreader.AsyncReader)
    return ok
}

// GetReader returns the underlying io.ReadCloser under any Buffer
func (acc *Account) GetReader() io.ReadCloser {
    acc.mu.Lock()
    defer acc.mu.Unlock()
    return acc.origIn
}

// GetAsyncReader returns the current AsyncReader or nil if Account is unbuffered
func (acc *Account) GetAsyncReader() *asyncreader.AsyncReader {
    acc.mu.Lock()
    defer acc.mu.Unlock()
    if asyncIn, ok := acc.in.(*asyncreader.AsyncReader); ok {
        return asyncIn
    }
    return nil
}

// StopBuffering stops the async buffer doing any more buffering
func (acc *Account) StopBuffering() {
    if asyncIn, ok := acc.in.(*asyncreader.AsyncReader); ok {
        asyncIn.StopBuffering()
    }
}

// Abandon stops the async buffer doing any more buffering
func (acc *Account) Abandon() {
    if asyncIn, ok := acc.in.(*asyncreader.AsyncReader); ok {
        asyncIn.Abandon()
    }
}
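// accountLifecycleSketch shows the intended call sequence around an
// Account, pieced together from the constructors and methods above. It is
// an illustrative sketch added for this write-up, NOT part of the original
// rclone source; it only uses names defined in this package (NewStats,
// newAccountSizeName, WithBuffer, Read, Close, Done).
func accountLifecycleSketch(ctx context.Context, in io.ReadCloser, size int64) error {
    stats := NewStats()
    // Wrap the reader so every Read is accounted and rate limited.
    acc := newAccountSizeName(ctx, stats, in, size, "example.txt")
    // Optionally add the asynchronous read-ahead buffer for big files.
    acc = acc.WithBuffer()
    buf := make([]byte, 4096)
    for {
        _, err := acc.Read(buf) // bytes are accounted into stats here
        if err == io.EOF {
            break
        }
        if err != nil {
            return err
        }
    }
    if err := acc.Close(); err != nil {
        return err
    }
    acc.Done() // frees the averaging goroutine and the in-progress entry
    return nil
}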
// UpdateReader updates the underlying io.ReadCloser stopping the
// async buffer (if any) and re-adding it
func (acc *Account) UpdateReader(ctx context.Context, in io.ReadCloser) {
    acc.mu.Lock()
    withBuf := acc.withBuf
    if withBuf {
        acc.Abandon()
        acc.withBuf = false
    }
    acc.in = in
    acc.ctx = ctx
    acc.close = in
    acc.origIn = in
    acc.closed = false
    if withBuf {
        acc.WithBuffer()
    }
    acc.mu.Unlock()

    // Reset counter to stop percentage going over 100%
    acc.values.mu.Lock()
    acc.values.lpBytes = 0
    acc.values.bytes = 0
    acc.values.mu.Unlock()
}

// averageLoop calculates averages for the stats in the background
func (acc *Account) averageLoop() {
    tick := time.NewTicker(time.Second)
    var period float64
    defer tick.Stop()
    for {
        select {
        case now := <-tick.C:
            acc.values.mu.Lock()
            // Add average of last second.
            elapsed := now.Sub(acc.values.lpTime).Seconds()
            avg := float64(acc.values.lpBytes) / elapsed
            // Soft start the moving average
            if period < averagePeriod {
                period++
            }
            acc.values.avg = (avg + (period-1)*acc.values.avg) / period
            acc.values.lpBytes = 0
            acc.values.lpTime = now
            // Unlock stats
            acc.values.mu.Unlock()
        case <-acc.exit:
            return
        }
    }
}
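// ewmaSketch is an illustrative sketch of the update rule used by
// averageLoop above, NOT part of the original rclone source. The per-tick
// sample values are invented for illustration: with a soft start, the
// window ramps from 1 up to averagePeriod (16), so early samples are not
// drowned out by a zero-initialised average.
func ewmaSketch() float64 {
    avg, period := 0.0, 0.0
    for _, sample := range []float64{1e6, 1e6, 2e6} { // bytes/s measured each tick
        if period < averagePeriod { // soft start: grow the effective window
            period++
        }
        avg = (sample + (period-1)*avg) / period
    }
    return avg // after the three ticks above: (2e6 + 2*1e6) / 3 ≈ 1.33e6 bytes/s
}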
// Check the read before it has happened is valid returning the number
// of bytes remaining to read.
func (acc *Account) checkReadBefore() (bytesUntilLimit int64, err error) {
    // Check to see if context is cancelled
    if err = acc.ctx.Err(); err != nil {
        return 0, err
    }
    acc.values.mu.Lock()
    if acc.values.max >= 0 {
        bytesUntilLimit = acc.values.max - acc.stats.GetBytes()
        if bytesUntilLimit < 0 {
            acc.values.mu.Unlock()
            return bytesUntilLimit, ErrorMaxTransferLimitReachedFatal
        }
    } else {
        bytesUntilLimit = 1 << 62
    }
    // Set start time.
    if acc.values.start.IsZero() {
        acc.values.start = time.Now()
    }
    acc.values.mu.Unlock()
    return bytesUntilLimit, nil
}

// Check the read call after the read has happened
func (acc *Account) checkReadAfter(bytesUntilLimit int64, n int, err error) (outN int, outErr error) {
    bytesUntilLimit -= int64(n)
    if bytesUntilLimit < 0 {
        // chop the overage off
        n += int(bytesUntilLimit)
        if n < 0 {
            n = 0
        }
        err = ErrorMaxTransferLimitReachedFatal
    }
    return n, err
}

// ServerSideCopyStart should be called at the start of a server side copy
//
// This pretends a transfer has started
func (acc *Account) ServerSideCopyStart() {
    acc.values.mu.Lock()
    // Set start time.
    if acc.values.start.IsZero() {
        acc.values.start = time.Now()
    }
    acc.values.mu.Unlock()
}

// ServerSideCopyEnd accounts for a read of n bytes in a server side copy
func (acc *Account) ServerSideCopyEnd(n int64) {
    // Update Stats
    acc.values.mu.Lock()
    acc.values.bytes += n
    acc.values.mu.Unlock()

    acc.stats.Bytes(n)
}

// Account for n bytes from the current file bandwidth limit (if any)
func (acc *Account) limitPerFileBandwidth(n int) {
    acc.values.mu.Lock()
    tokenBucket := acc.tokenBucket
    acc.values.mu.Unlock()
    if tokenBucket != nil {
        err := tokenBucket.WaitN(context.Background(), n)
        if err != nil {
            fs.Errorf(nil, "Token bucket error: %v", err)
        }
    }
}

// Account the read and limit bandwidth
func (acc *Account) accountRead(n int) {
    // Update Stats
    acc.values.mu.Lock()
    acc.values.lpBytes += n
    acc.values.bytes += int64(n)
    acc.values.mu.Unlock()

    acc.stats.Bytes(int64(n))

    limitBandwidth(n)
    acc.limitPerFileBandwidth(n)
}

// read bytes from the io.Reader passed in and account them
func (acc *Account) read(in io.Reader, p []byte) (n int, err error) {
    bytesUntilLimit, err := acc.checkReadBefore()
    if err == nil {
        n, err = in.Read(p)
        acc.accountRead(n)
        n, err = acc.checkReadAfter(bytesUntilLimit, n, err)
    }
    return n, err
}

// Read bytes from the object - see io.Reader
func (acc *Account) Read(p []byte) (n int, err error) {
    acc.mu.Lock()
    defer acc.mu.Unlock()
    return acc.read(acc.in, p)
}

// Thin wrapper for w
type accountWriteTo struct {
    w   io.Writer
    acc *Account
}

// Write writes len(p) bytes from p to the underlying data stream. It
// returns the number of bytes written from p (0 <= n <= len(p)) and
// any error encountered that caused the write to stop early. Write
// must return a non-nil error if it returns n < len(p). Write must
// not modify the slice data, even temporarily.
//
// Implementations must not retain p.
func (awt *accountWriteTo) Write(p []byte) (n int, err error) {
    bytesUntilLimit, err := awt.acc.checkReadBefore()
    if err == nil {
        n, err = awt.w.Write(p)
        n, err = awt.acc.checkReadAfter(bytesUntilLimit, n, err)
        awt.acc.accountRead(n)
    }
    return n, err
}

// WriteTo writes data to w until there's no more data to write or
// when an error occurs. The return value n is the number of bytes
// written. Any error encountered during the write is also returned.
func (acc *Account) WriteTo(w io.Writer) (n int64, err error) {
    acc.mu.Lock()
    in := acc.in
    acc.mu.Unlock()
    wrappedWriter := accountWriteTo{w: w, acc: acc}
    if do, ok := in.(io.WriterTo); ok {
        n, err = do.WriteTo(&wrappedWriter)
    } else {
        n, err = io.Copy(&wrappedWriter, in)
    }
    return
}

// AccountRead account having read n bytes
func (acc *Account) AccountRead(n int) (err error) {
    acc.mu.Lock()
    defer acc.mu.Unlock()
    bytesUntilLimit, err := acc.checkReadBefore()
    if err == nil {
        n, err = acc.checkReadAfter(bytesUntilLimit, n, err)
        acc.accountRead(n)
    }
    return err
}

// Close the object
func (acc *Account) Close() error {
    acc.mu.Lock()
    defer acc.mu.Unlock()
    if acc.closed {
        return nil
    }
    acc.closed = true
    if acc.close == nil {
        return nil
    }
    return acc.close.Close()
}

// Done with accounting - must be called to free accounting goroutine
func (acc *Account) Done() {
    acc.mu.Lock()
    defer acc.mu.Unlock()
    close(acc.exit)
    acc.stats.inProgress.clear(acc.name)
}
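// limitChopSketch illustrates, with invented numbers, how the
// checkReadBefore / checkReadAfter pair above clamps a read at the
// --max-transfer boundary in hard cutoff mode. It is an illustrative
// sketch added for this write-up, NOT part of the original rclone source.
func limitChopSketch() {
    // Suppose 5 bytes remain under the limit but the Read returned 8:
    bytesUntilLimit, n := int64(5), 8
    bytesUntilLimit -= int64(n) // now -3: the read overshot by 3 bytes
    if bytesUntilLimit < 0 {
        n += int(bytesUntilLimit) // chop the overage: n becomes 5
    }
    fmt.Printf("account %d bytes, then return ErrorMaxTransferLimitReachedFatal\n", n)
}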
// progress returns bytes read as well as the size.
// Size can be <= 0 if the size is unknown.
func (acc *Account) progress() (bytes, size int64) {
    if acc == nil {
        return 0, 0
    }
    acc.values.mu.Lock()
    bytes, size = acc.values.bytes, acc.size
    acc.values.mu.Unlock()
    return bytes, size
}

// speed returns the speed of the current file transfer
// in bytes per second, as well as an exponentially weighted moving average.
// If no read has completed yet, 0 is returned for both values.
func (acc *Account) speed() (bps, current float64) {
    if acc == nil {
        return 0, 0
    }
    acc.values.mu.Lock()
    defer acc.values.mu.Unlock()
    if acc.values.bytes == 0 {
        return 0, 0
    }
    // Calculate speed from first read.
    total := float64(time.Now().Sub(acc.values.start)) / float64(time.Second)
    bps = float64(acc.values.bytes) / total
    current = acc.values.avg
    return
}

// eta returns the ETA of the current operation,
// rounded to full seconds.
// If the ETA cannot be determined 'ok' returns false.
func (acc *Account) eta() (etaDuration time.Duration, ok bool) {
    if acc == nil {
        return 0, false
    }
    acc.values.mu.Lock()
    defer acc.values.mu.Unlock()
    return eta(acc.values.bytes, acc.size, acc.values.avg)
}

// shortenName shortens in to size runes long
// If size <= 0 then in is left untouched
func shortenName(in string, size int) string {
    if size <= 0 {
        return in
    }
    if utf8.RuneCountInString(in) <= size {
        return in
    }
    name := []rune(in)
    size-- // don't count ellipsis rune
    suffixLength := size / 2
    prefixLength := size - suffixLength
    suffixStart := len(name) - suffixLength
    name = append(append(name[:prefixLength], '…'), name[suffixStart:]...)
    return string(name)
}

// String produces stats for this file
func (acc *Account) String() string {
    a, b := acc.progress()
    _, cur := acc.speed()
    eta, etaok := acc.eta()
    etas := "-"
    if etaok {
        if eta > 0 {
            etas = fmt.Sprintf("%v", eta)
        } else {
            etas = "0s"
        }
    }

    if fs.Config.DataRateUnit == "bits" {
        cur = cur * 8
    }

    percentageDone := 0
    if b > 0 {
        percentageDone = int(100 * float64(a) / float64(b))
    }

    return fmt.Sprintf("%*s:%3d%% /%s, %s/s, %s",
        fs.Config.StatsFileNameLength,
        shortenName(acc.name, fs.Config.StatsFileNameLength),
        percentageDone,
        fs.SizeSuffix(b),
        fs.SizeSuffix(cur),
        etas,
    )
}

// rcStats produces remote control stats for this file
func (acc *Account) rcStats() (out rc.Params) {
    out = make(rc.Params)
    a, b := acc.progress()
    out["bytes"] = a
    out["size"] = b

    spd, cur := acc.speed()
    out["speed"] = spd
    out["speedAvg"] = cur

    eta, etaok := acc.eta()
    out["eta"] = nil
    if etaok {
        if eta > 0 {
            out["eta"] = eta.Seconds()
        } else {
            out["eta"] = 0
        }
    }
    out["name"] = acc.name

    percentageDone := 0
    if b > 0 {
        percentageDone = int(100 * float64(a) / float64(b))
    }
    out["percentage"] = percentageDone
    out["group"] = acc.stats.group

    return out
}

// OldStream returns the top io.Reader
func (acc *Account) OldStream() io.Reader {
    acc.mu.Lock()
    defer acc.mu.Unlock()
    return acc.in
}

// SetStream updates the top io.Reader
func (acc *Account) SetStream(in io.Reader) {
    acc.mu.Lock()
    acc.in = in
    acc.mu.Unlock()
}

// WrapStream wraps an io Reader so it will be accounted in the same
// way as account
func (acc *Account) WrapStream(in io.Reader) io.Reader {
    return &accountStream{
        acc: acc,
        in:  in,
    }
}

// accountStream accounts a single io.Reader into a parent *Account
type accountStream struct {
    acc *Account
    in  io.Reader
}

// OldStream returns the underlying stream
func (a *accountStream) OldStream() io.Reader {
    return a.in
}

// SetStream sets the underlying stream
func (a *accountStream) SetStream(in io.Reader) {
    a.in = in
}

// WrapStream wraps in in an accounter
func (a *accountStream) WrapStream(in io.Reader) io.Reader {
    return a.acc.WrapStream(in)
}
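// shortenNameSketch demonstrates shortenName (defined above): the name is
// cut to the requested rune count with a '…' in the middle. An illustrative
// sketch added for this write-up, NOT part of the original rclone source.
func shortenNameSketch() {
    fmt.Println(shortenName("0123456789", 5)) // prints "01…89": 2 prefix + ellipsis + 2 suffix runes
    fmt.Println(shortenName("short", 10))     // prints "short": already within the limit
}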
// Read bytes from the object - see io.Reader
func (a *accountStream) Read(p []byte) (n int, err error) {
    return a.acc.read(a.in, p)
}

// Accounter accounts a stream allowing the accounting to be removed and re-added
type Accounter interface {
    io.Reader
    OldStream() io.Reader
    SetStream(io.Reader)
    WrapStream(io.Reader) io.Reader
}

// WrapFn wraps an io.Reader (for accounting purposes usually)
type WrapFn func(io.Reader) io.Reader

// UnWrap unwraps a reader returning unwrapped and wrap, a function to
// wrap it back up again. If `in` is an Accounter then this function
// will take the accounting unwrapped and wrap will put it back on
// again the new Reader passed in.
//
// This allows functions which wrap io.Readers to move the accounting
// to the end of the wrapped chain of readers. This is very important
// if buffering is being introduced and if the Reader might be wrapped
// again.
func UnWrap(in io.Reader) (unwrapped io.Reader, wrap WrapFn) {
    acc, ok := in.(Accounter)
    if !ok {
        return in, func(r io.Reader) io.Reader { return r }
    }
    return acc.OldStream(), acc.WrapStream
}
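// unwrapSketch shows the intended use of UnWrap above: strip the accounting
// off a reader, wrap something else around the raw stream, then re-apply
// the accounting at the outermost position. An illustrative sketch added
// for this write-up, NOT part of the original rclone source; io.LimitReader
// stands in for whatever buffering or transforming layer a caller inserts.
func unwrapSketch(r io.Reader, max int64) io.Reader {
    unwrapped, wrap := UnWrap(r)
    // Wrap the raw stream with an extra layer...
    limited := io.LimitReader(unwrapped, max)
    // ...and put the accounting back on the outside so reads stay counted.
    return wrap(limited)
}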
acc, stats.inProgress.get("test")) if doClose { // close the account before swapping it out require.NoError(t, acc.Close()) } in2 := ioutil.NopCloser(bytes.NewBuffer([]byte{1})) acc.UpdateReader(context.Background(), in2) assert.Equal(t, in2, acc.GetReader()) assert.Equal(t, acc, stats.inProgress.get("test")) assert.NoError(t, acc.Close()) } } t.Run("NoClose", test(false)) t.Run("Close", test(true)) } func TestAccountRead(t *testing.T) { in := ioutil.NopCloser(bytes.NewBuffer([]byte{1, 2, 3})) stats := NewStats() acc := newAccountSizeName(context.Background(), stats, in, 1, "test") assert.True(t, acc.values.start.IsZero()) acc.values.mu.Lock() assert.Equal(t, 0, acc.values.lpBytes) assert.Equal(t, int64(0), acc.values.bytes) acc.values.mu.Unlock() assert.Equal(t, int64(0), stats.bytes) var buf = make([]byte, 2) n, err := acc.Read(buf) assert.NoError(t, err) assert.Equal(t, 2, n) assert.Equal(t, []byte{1, 2}, buf[:n]) assert.False(t, acc.values.start.IsZero()) acc.values.mu.Lock() assert.Equal(t, 2, acc.values.lpBytes) assert.Equal(t, int64(2), acc.values.bytes) acc.values.mu.Unlock() assert.Equal(t, int64(2), stats.bytes) n, err = acc.Read(buf) assert.NoError(t, err) assert.Equal(t, 1, n) assert.Equal(t, []byte{3}, buf[:n]) n, err = acc.Read(buf) assert.Equal(t, io.EOF, err) assert.Equal(t, 0, n) assert.NoError(t, acc.Close()) } func testAccountWriteTo(t *testing.T, withBuffer bool) { buf := make([]byte, 2*asyncreader.BufferSize+1) for i := range buf { buf[i] = byte(i % 251) } in := ioutil.NopCloser(bytes.NewBuffer(buf)) stats := NewStats() acc := newAccountSizeName(context.Background(), stats, in, int64(len(buf)), "test") if withBuffer { acc = acc.WithBuffer() } assert.True(t, acc.values.start.IsZero()) acc.values.mu.Lock() assert.Equal(t, 0, acc.values.lpBytes) assert.Equal(t, int64(0), acc.values.bytes) acc.values.mu.Unlock() assert.Equal(t, int64(0), stats.bytes) var out bytes.Buffer n, err := acc.WriteTo(&out) assert.NoError(t, err) assert.Equal(t, int64(len(buf)), n) assert.Equal(t, buf, out.Bytes()) assert.False(t, acc.values.start.IsZero()) acc.values.mu.Lock() assert.Equal(t, len(buf), acc.values.lpBytes) assert.Equal(t, int64(len(buf)), acc.values.bytes) acc.values.mu.Unlock() assert.Equal(t, int64(len(buf)), stats.bytes) assert.NoError(t, acc.Close()) } func TestAccountWriteTo(t *testing.T) { testAccountWriteTo(t, false) } func TestAccountWriteToWithBuffer(t *testing.T) { testAccountWriteTo(t, true) } func TestAccountString(t *testing.T) { in := ioutil.NopCloser(bytes.NewBuffer([]byte{1, 2, 3})) stats := NewStats() acc := newAccountSizeName(context.Background(), stats, in, 3, "test") // FIXME not an exhaustive test! 
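// Added note (not in the original source): the progress line asserted
// below has the form "<name>: <percent>% /<size>, <speed>/s, <eta>", so
// "test: 0% /3, 0/s, -" means a 3 byte transfer with nothing read yet,
// where "-" is an ETA that cannot be determined.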
assert.Equal(t, "test: 0% /3, 0/s, -", strings.TrimSpace(acc.String())) var buf = make([]byte, 2) n, err := acc.Read(buf) assert.NoError(t, err) assert.Equal(t, 2, n) assert.Equal(t, "test: 66% /3, 0/s, -", strings.TrimSpace(acc.String())) assert.NoError(t, acc.Close()) } // Test the Accounter interface methods on Account and accountStream func TestAccountAccounter(t *testing.T) { in := ioutil.NopCloser(bytes.NewBuffer([]byte{1, 2, 3})) stats := NewStats() acc := newAccountSizeName(context.Background(), stats, in, 3, "test") assert.True(t, in == acc.OldStream()) in2 := ioutil.NopCloser(bytes.NewBuffer([]byte{2, 3, 4})) acc.SetStream(in2) assert.True(t, in2 == acc.OldStream()) r := acc.WrapStream(in) as, ok := r.(Accounter) require.True(t, ok) assert.True(t, in == as.OldStream()) assert.True(t, in2 == acc.OldStream()) accs, ok := r.(*accountStream) require.True(t, ok) assert.Equal(t, acc, accs.acc) assert.True(t, in == accs.in) // Check Read on the accountStream var buf = make([]byte, 2) n, err := r.Read(buf) assert.NoError(t, err) assert.Equal(t, 2, n) assert.Equal(t, []byte{1, 2}, buf[:n]) // Test that we can get another accountstream out in3 := ioutil.NopCloser(bytes.NewBuffer([]byte{3, 1, 2})) r2 := as.WrapStream(in3) as2, ok := r2.(Accounter) require.True(t, ok) assert.True(t, in3 == as2.OldStream()) assert.True(t, in2 == acc.OldStream()) accs2, ok := r2.(*accountStream) require.True(t, ok) assert.Equal(t, acc, accs2.acc) assert.True(t, in3 == accs2.in) // Test we can set this new accountStream as2.SetStream(in) assert.True(t, in == as2.OldStream()) // Test UnWrap on accountStream unwrapped, wrap := UnWrap(r2) assert.True(t, unwrapped == in) r3 := wrap(in2) assert.True(t, in2 == r3.(Accounter).OldStream()) // TestUnWrap on a normal io.Reader unwrapped, wrap = UnWrap(in2) assert.True(t, unwrapped == in2) assert.True(t, wrap(in3) == in3) } func TestAccountMaxTransfer(t *testing.T) { old := fs.Config.MaxTransfer oldMode := fs.Config.CutoffMode fs.Config.MaxTransfer = 15 defer func() { fs.Config.MaxTransfer = old fs.Config.CutoffMode = oldMode }() in := ioutil.NopCloser(bytes.NewBuffer(make([]byte, 100))) stats := NewStats() acc := newAccountSizeName(context.Background(), stats, in, 1, "test") var b = make([]byte, 10) n, err := acc.Read(b) assert.Equal(t, 10, n) assert.NoError(t, err) n, err = acc.Read(b) assert.Equal(t, 5, n) assert.Equal(t, ErrorMaxTransferLimitReachedFatal, err) n, err = acc.Read(b) assert.Equal(t, 0, n) assert.Equal(t, ErrorMaxTransferLimitReachedFatal, err) assert.True(t, fserrors.IsFatalError(err)) fs.Config.CutoffMode = fs.CutoffModeSoft stats = NewStats() acc = newAccountSizeName(context.Background(), stats, in, 1, "test") n, err = acc.Read(b) assert.Equal(t, 10, n) assert.NoError(t, err) n, err = acc.Read(b) assert.Equal(t, 10, n) assert.NoError(t, err) n, err = acc.Read(b) assert.Equal(t, 10, n) assert.NoError(t, err) } func TestAccountMaxTransferWriteTo(t *testing.T) { old := fs.Config.MaxTransfer oldMode := fs.Config.CutoffMode fs.Config.MaxTransfer = 15 defer func() { fs.Config.MaxTransfer = old fs.Config.CutoffMode = oldMode }() in := ioutil.NopCloser(readers.NewPatternReader(1024)) stats := NewStats() acc := newAccountSizeName(context.Background(), stats, in, 1, "test") var b bytes.Buffer n, err := acc.WriteTo(&b) assert.Equal(t, int64(15), n) assert.Equal(t, ErrorMaxTransferLimitReachedFatal, err) } func TestAccountReadCtx(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) in := ioutil.NopCloser(bytes.NewBuffer(make([]byte, 100))) stats 
:= NewStats() acc := newAccountSizeName(ctx, stats, in, 1, "test") var b = make([]byte, 10) n, err := acc.Read(b) assert.Equal(t, 10, n) assert.NoError(t, err) cancel() n, err = acc.Read(b) assert.Equal(t, 0, n) assert.Equal(t, context.Canceled, err) } func TestShortenName(t *testing.T) { for _, test := range []struct { in string size int want string }{ {"", 0, ""}, {"abcde", 10, "abcde"}, {"abcde", 0, "abcde"}, {"abcde", -1, "abcde"}, {"abcde", 5, "abcde"}, {"abcde", 4, "ab…e"}, {"abcde", 3, "a…e"}, {"abcde", 2, "a…"}, {"abcde", 1, "…"}, {"abcdef", 6, "abcdef"}, {"abcdef", 5, "ab…ef"}, {"abcdef", 4, "ab…f"}, {"abcdef", 3, "a…f"}, {"abcdef", 2, "a…"}, {"áßcdèf", 1, "…"}, {"áßcdè", 5, "áßcdè"}, {"áßcdè", 4, "áß…è"}, {"áßcdè", 3, "á…è"}, {"áßcdè", 2, "á…"}, {"áßcdè", 1, "…"}, {"áßcdèł", 6, "áßcdèł"}, {"áßcdèł", 5, "áß…èł"}, {"áßcdèł", 4, "áß…ł"}, {"áßcdèł", 3, "á…ł"}, {"áßcdèł", 2, "á…"}, {"áßcdèł", 1, "…"}, } { t.Run(fmt.Sprintf("in=%q, size=%d", test.in, test.size), func(t *testing.T) { got := shortenName(test.in, test.size) assert.Equal(t, test.want, got) if test.size > 0 { assert.True(t, utf8.RuneCountInString(got) <= test.size, "too big") } }) } } rclone-1.53.3/fs/accounting/accounting_unix.go000066400000000000000000000014351375552240400213650ustar00rootroot00000000000000// Accounting and limiting reader // Unix specific functions. // +build darwin dragonfly freebsd linux netbsd openbsd solaris package accounting import ( "os" "os/signal" "syscall" "github.com/rclone/rclone/fs" ) // startSignalHandler() sets a signal handler to catch SIGUSR2 and toggle throttling. func startSignalHandler() { signals := make(chan os.Signal, 1) signal.Notify(signals, syscall.SIGUSR2) go func() { // This runs forever, but blocks until the signal is received. for { <-signals tokenBucketMu.Lock() bwLimitToggledOff = !bwLimitToggledOff tokenBucket, prevTokenBucket = prevTokenBucket, tokenBucket s := "disabled" if tokenBucket != nil { s = "enabled" } tokenBucketMu.Unlock() fs.Logf(nil, "Bandwidth limit %s by user", s) } }() } rclone-1.53.3/fs/accounting/inprogress.go000066400000000000000000000020161375552240400203570ustar00rootroot00000000000000package accounting import ( "sync" "github.com/rclone/rclone/fs" ) // inProgress holds a synchronized map of in-progress transfers type inProgress struct { mu sync.Mutex m map[string]*Account } // newInProgress makes a new inProgress object func newInProgress() *inProgress { return &inProgress{ m: make(map[string]*Account, fs.Config.Transfers), } } // set marks the name as in progress func (ip *inProgress) set(name string, acc *Account) { ip.mu.Lock() defer ip.mu.Unlock() ip.m[name] = acc } // clear marks the name as no longer in progress func (ip *inProgress) clear(name string) { ip.mu.Lock() defer ip.mu.Unlock() delete(ip.m, name) } // get gets the account for name, or nil if not found func (ip *inProgress) get(name string) *Account { ip.mu.Lock() defer ip.mu.Unlock() return ip.m[name] } // merge adds items from another inProgress func (ip *inProgress) merge(m *inProgress) { ip.mu.Lock() defer ip.mu.Unlock() m.mu.Lock() defer m.mu.Unlock() for key, val := range m.m { ip.m[key] = val } } rclone-1.53.3/fs/accounting/prometheus.go000066400000000000000000000066071375552240400203670ustar00rootroot00000000000000package accounting import ( "github.com/prometheus/client_golang/prometheus" ) var namespace = "rclone_" // RcloneCollector is a Prometheus collector for Rclone type RcloneCollector struct { bytesTransferred *prometheus.Desc transferSpeed *prometheus.Desc numOfErrors
*prometheus.Desc numOfCheckFiles *prometheus.Desc transferredFiles *prometheus.Desc deletes *prometheus.Desc renames *prometheus.Desc fatalError *prometheus.Desc retryError *prometheus.Desc } // NewRcloneCollector make a new RcloneCollector func NewRcloneCollector() *RcloneCollector { return &RcloneCollector{ bytesTransferred: prometheus.NewDesc(namespace+"bytes_transferred_total", "Total transferred bytes since the start of the Rclone process", nil, nil, ), transferSpeed: prometheus.NewDesc(namespace+"speed", "Average speed in bytes/sec since the start of the Rclone process", nil, nil, ), numOfErrors: prometheus.NewDesc(namespace+"errors_total", "Number of errors thrown", nil, nil, ), numOfCheckFiles: prometheus.NewDesc(namespace+"checked_files_total", "Number of checked files", nil, nil, ), transferredFiles: prometheus.NewDesc(namespace+"files_transferred_total", "Number of transferred files", nil, nil, ), deletes: prometheus.NewDesc(namespace+"files_deleted_total", "Total number of files deleted", nil, nil, ), renames: prometheus.NewDesc(namespace+"files_renamed_total", "Total number of files renamed", nil, nil, ), fatalError: prometheus.NewDesc(namespace+"fatal_error", "Whether a fatal error has occurred", nil, nil, ), retryError: prometheus.NewDesc(namespace+"retry_error", "Whether there has been an error that will be retried", nil, nil, ), } } // Describe is part of the Collector interface: https://godoc.org/github.com/prometheus/client_golang/prometheus#Collector func (c *RcloneCollector) Describe(ch chan<- *prometheus.Desc) { ch <- c.bytesTransferred ch <- c.transferSpeed ch <- c.numOfErrors ch <- c.numOfCheckFiles ch <- c.transferredFiles ch <- c.deletes ch <- c.renames ch <- c.fatalError ch <- c.retryError } // Collect is part of the Collector interface: https://godoc.org/github.com/prometheus/client_golang/prometheus#Collector func (c *RcloneCollector) Collect(ch chan<- prometheus.Metric) { s := groups.sum() s.mu.RLock() ch <- prometheus.MustNewConstMetric(c.bytesTransferred, prometheus.CounterValue, float64(s.bytes)) ch <- prometheus.MustNewConstMetric(c.transferSpeed, prometheus.GaugeValue, s.Speed()) ch <- prometheus.MustNewConstMetric(c.numOfErrors, prometheus.CounterValue, float64(s.errors)) ch <- prometheus.MustNewConstMetric(c.numOfCheckFiles, prometheus.CounterValue, float64(s.checks)) ch <- prometheus.MustNewConstMetric(c.transferredFiles, prometheus.CounterValue, float64(s.transfers)) ch <- prometheus.MustNewConstMetric(c.deletes, prometheus.CounterValue, float64(s.deletes)) ch <- prometheus.MustNewConstMetric(c.renames, prometheus.CounterValue, float64(s.renames)) ch <- prometheus.MustNewConstMetric(c.fatalError, prometheus.GaugeValue, bool2Float(s.fatalError)) ch <- prometheus.MustNewConstMetric(c.retryError, prometheus.GaugeValue, bool2Float(s.retryError)) s.mu.RUnlock() } // bool2Float is a small function to convert a boolean into a float64 value that can be used for Prometheus func bool2Float(e bool) float64 { if e { return 1 } return 0 } rclone-1.53.3/fs/accounting/stats.go000066400000000000000000000415401375552240400173270ustar00rootroot00000000000000package accounting import ( "bytes" "fmt" "sort" "strings" "sync" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/rc" ) // MaxCompletedTransfers specifies maximum number of completed transfers in startedTransfers list var MaxCompletedTransfers = 100 var startTime = time.Now() // StatsInfo accounts all transfers type StatsInfo struct { mu sync.RWMutex bytes 
int64 errors int64 lastError error fatalError bool retryError bool retryAfter time.Time checks int64 checking *transferMap checkQueue int checkQueueSize int64 transfers int64 transferring *transferMap transferQueue int transferQueueSize int64 renames int64 renameQueue int renameQueueSize int64 deletes int64 inProgress *inProgress startedTransfers []*Transfer // currently active transfers oldTimeRanges timeRanges // a merged list of time ranges for the transfers oldDuration time.Duration // duration of transfers we have culled group string } // NewStats creates an initialised StatsInfo func NewStats() *StatsInfo { return &StatsInfo{ checking: newTransferMap(fs.Config.Checkers, "checking"), transferring: newTransferMap(fs.Config.Transfers, "transferring"), inProgress: newInProgress(), } } // RemoteStats returns stats for rc func (s *StatsInfo) RemoteStats() (out rc.Params, err error) { out = make(rc.Params) s.mu.RLock() out["speed"] = s.Speed() out["bytes"] = s.bytes out["errors"] = s.errors out["fatalError"] = s.fatalError out["retryError"] = s.retryError out["checks"] = s.checks out["transfers"] = s.transfers out["deletes"] = s.deletes out["renames"] = s.renames out["transferTime"] = s.totalDuration().Seconds() out["elapsedTime"] = time.Since(startTime).Seconds() s.mu.RUnlock() if !s.checking.empty() { out["checking"] = s.checking.remotes() } if !s.transferring.empty() { out["transferring"] = s.transferring.rcStats(s.inProgress) } if s.errors > 0 { out["lastError"] = s.lastError.Error() } return out, nil } // Speed returns the average speed of the transfer in bytes/second func (s *StatsInfo) Speed() float64 { dt := s.totalDuration() dtSeconds := dt.Seconds() speed := 0.0 if dt > 0 { speed = float64(s.bytes) / dtSeconds } return speed } // timeRange is a start and end time of a transfer type timeRange struct { start time.Time end time.Time } // timeRanges is a list of non-overlapping start and end times for // transfers type timeRanges []timeRange // merge all the overlapping time ranges func (trs *timeRanges) merge() { Trs := *trs // Sort by the starting time. sort.Slice(Trs, func(i, j int) bool { return Trs[i].start.Before(Trs[j].start) }) // Merge overlaps and add distinctive ranges together var ( newTrs = Trs[:0] i, j = 0, 1 ) for i < len(Trs) { if j < len(Trs) { if !Trs[i].end.Before(Trs[j].start) { if Trs[i].end.Before(Trs[j].end) { Trs[i].end = Trs[j].end } j++ continue } } newTrs = append(newTrs, Trs[i]) i = j j++ } *trs = newTrs } // cull remove any ranges whose start and end are before cutoff // returning their duration sum func (trs *timeRanges) cull(cutoff time.Time) (d time.Duration) { var newTrs = (*trs)[:0] for _, tr := range *trs { if cutoff.Before(tr.start) || cutoff.Before(tr.end) { newTrs = append(newTrs, tr) } else { d += tr.end.Sub(tr.start) } } *trs = newTrs return d } // total the time out of the time ranges func (trs timeRanges) total() (total time.Duration) { for _, tr := range trs { total += tr.end.Sub(tr.start) } return total } // Total duration is union of durations of all transfers belonging to this // object. // Needs to be protected by mutex. func (s *StatsInfo) totalDuration() time.Duration { // copy of s.oldTimeRanges with extra room for the current transfers timeRanges := make(timeRanges, len(s.oldTimeRanges), len(s.oldTimeRanges)+len(s.startedTransfers)) copy(timeRanges, s.oldTimeRanges) // Extract time ranges of all transfers. 
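// Illustrative example (added, not in the original source): two transfers
// covering 0s-10s and 5s-15s merge into the single range 0s-15s, so the
// total duration is 15s of wall-clock time rather than the 20s you would
// get by summing the two transfers individually.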
now := time.Now() for i := range s.startedTransfers { start, end := s.startedTransfers[i].TimeRange() if end.IsZero() { end = now } timeRanges = append(timeRanges, timeRange{start, end}) } timeRanges.merge() return s.oldDuration + timeRanges.total() } // eta returns the ETA of the current operation, // rounded to full seconds. // If the ETA cannot be determined 'ok' returns false. func eta(size, total int64, rate float64) (eta time.Duration, ok bool) { if total <= 0 || size < 0 || rate <= 0 { return 0, false } remaining := total - size if remaining < 0 { return 0, false } seconds := float64(remaining) / rate return time.Second * time.Duration(seconds), true } // etaString returns the ETA of the current operation, // rounded to full seconds. // If the ETA cannot be determined it returns "-" func etaString(done, total int64, rate float64) string { d, ok := eta(done, total, rate) if !ok { return "-" } return fs.Duration(d).ReadableString() } // percent returns a/b as a percentage rounded to the nearest integer // as a string // // if the percentage is invalid it returns "-" func percent(a int64, b int64) string { if a < 0 || b <= 0 { return "-" } return fmt.Sprintf("%d%%", int(float64(a)*100/float64(b)+0.5)) } // String convert the StatsInfo to a string for printing func (s *StatsInfo) String() string { // checking and transferring have their own locking so read // here before lock to prevent deadlock on GetBytes transferring, checking := s.transferring.count(), s.checking.count() transferringBytesDone, transferringBytesTotal := s.transferring.progress(s) s.mu.RLock() elapsedTime := time.Since(startTime) elapsedTimeSecondsOnly := elapsedTime.Truncate(time.Second/10) % time.Minute dt := s.totalDuration() dtSeconds := dt.Seconds() speed := 0.0 if dt > 0 { speed = float64(s.bytes) / dtSeconds } displaySpeed := speed if fs.Config.DataRateUnit == "bits" { displaySpeed *= 8 } var ( totalChecks = int64(s.checkQueue) + s.checks + int64(checking) totalTransfer = int64(s.transferQueue) + s.transfers + int64(transferring) // note that s.bytes already includes transferringBytesDone so // we take it off here to avoid double counting totalSize = s.transferQueueSize + s.bytes + transferringBytesTotal - transferringBytesDone currentSize = s.bytes buf = &bytes.Buffer{} xfrchkString = "" dateString = "" ) if !fs.Config.StatsOneLine { _, _ = fmt.Fprintf(buf, "\nTransferred: ") } else { xfrchk := []string{} if totalTransfer > 0 && s.transferQueue > 0 { xfrchk = append(xfrchk, fmt.Sprintf("xfr#%d/%d", s.transfers, totalTransfer)) } if totalChecks > 0 && s.checkQueue > 0 { xfrchk = append(xfrchk, fmt.Sprintf("chk#%d/%d", s.checks, totalChecks)) } if len(xfrchk) > 0 { xfrchkString = fmt.Sprintf(" (%s)", strings.Join(xfrchk, ", ")) } if fs.Config.StatsOneLineDate { t := time.Now() dateString = t.Format(fs.Config.StatsOneLineDateFormat) // Including the separator so people can customize it } } _, _ = fmt.Fprintf(buf, "%s%10s / %s, %s, %s, ETA %s%s", dateString, fs.SizeSuffix(s.bytes), fs.SizeSuffix(totalSize).Unit("Bytes"), percent(s.bytes, totalSize), fs.SizeSuffix(displaySpeed).Unit(strings.Title(fs.Config.DataRateUnit)+"/s"), etaString(currentSize, totalSize, speed), xfrchkString, ) if !fs.Config.StatsOneLine { _, _ = buf.WriteRune('\n') errorDetails := "" switch { case s.fatalError: errorDetails = " (fatal error encountered)" case s.retryError: errorDetails = " (retrying may help)" case s.errors != 0: errorDetails = " (no need to retry)" } // Add only non zero stats if s.errors != 0 { _, _ = fmt.Fprintf(buf, 
"Errors: %10d%s\n", s.errors, errorDetails) } if s.checks != 0 || totalChecks != 0 { _, _ = fmt.Fprintf(buf, "Checks: %10d / %d, %s\n", s.checks, totalChecks, percent(s.checks, totalChecks)) } if s.deletes != 0 { _, _ = fmt.Fprintf(buf, "Deleted: %10d\n", s.deletes) } if s.renames != 0 { _, _ = fmt.Fprintf(buf, "Renamed: %10d\n", s.renames) } if s.transfers != 0 || totalTransfer != 0 { _, _ = fmt.Fprintf(buf, "Transferred: %10d / %d, %s\n", s.transfers, totalTransfer, percent(s.transfers, totalTransfer)) } _, _ = fmt.Fprintf(buf, "Elapsed time: %10ss\n", strings.TrimRight(elapsedTime.Truncate(time.Minute).String(), "0s")+fmt.Sprintf("%.1f", elapsedTimeSecondsOnly.Seconds())) } // checking and transferring have their own locking so unlock // here to prevent deadlock on GetBytes s.mu.RUnlock() // Add per transfer stats if required if !fs.Config.StatsOneLine { if !s.checking.empty() { _, _ = fmt.Fprintf(buf, "Checking:\n%s\n", s.checking.String(s.inProgress, s.transferring)) } if !s.transferring.empty() { _, _ = fmt.Fprintf(buf, "Transferring:\n%s\n", s.transferring.String(s.inProgress, nil)) } } return buf.String() } // Transferred returns list of all completed transfers including checked and // failed ones. func (s *StatsInfo) Transferred() []TransferSnapshot { ts := make([]TransferSnapshot, 0, len(s.startedTransfers)) for _, tr := range s.startedTransfers { if tr.IsDone() { ts = append(ts, tr.Snapshot()) } } return ts } // Log outputs the StatsInfo to the log func (s *StatsInfo) Log() { if fs.Config.UseJSONLog { out, _ := s.RemoteStats() fs.LogLevelPrintf(fs.Config.StatsLogLevel, nil, "%v%v\n", s, fs.LogValue("stats", out)) } else { fs.LogLevelPrintf(fs.Config.StatsLogLevel, nil, "%v\n", s) } } // Bytes updates the stats for bytes bytes func (s *StatsInfo) Bytes(bytes int64) { s.mu.Lock() defer s.mu.Unlock() s.bytes += bytes } // GetBytes returns the number of bytes transferred so far func (s *StatsInfo) GetBytes() int64 { s.mu.RLock() defer s.mu.RUnlock() return s.bytes } // GetBytesWithPending returns the number of bytes transferred and remaining transfers func (s *StatsInfo) GetBytesWithPending() int64 { s.mu.RLock() defer s.mu.RUnlock() pending := int64(0) for _, tr := range s.startedTransfers { if tr.acc != nil { bytes, size := tr.acc.progress() if bytes < size { pending += size - bytes } } } return s.bytes + pending } // Errors updates the stats for errors func (s *StatsInfo) Errors(errors int64) { s.mu.Lock() defer s.mu.Unlock() s.errors += errors } // GetErrors reads the number of errors func (s *StatsInfo) GetErrors() int64 { s.mu.RLock() defer s.mu.RUnlock() return s.errors } // GetLastError returns the lastError func (s *StatsInfo) GetLastError() error { s.mu.RLock() defer s.mu.RUnlock() return s.lastError } // GetChecks returns the number of checks func (s *StatsInfo) GetChecks() int64 { s.mu.RLock() defer s.mu.RUnlock() return s.checks } // FatalError sets the fatalError flag func (s *StatsInfo) FatalError() { s.mu.Lock() defer s.mu.Unlock() s.fatalError = true } // HadFatalError returns whether there has been at least one FatalError func (s *StatsInfo) HadFatalError() bool { s.mu.RLock() defer s.mu.RUnlock() return s.fatalError } // RetryError sets the retryError flag func (s *StatsInfo) RetryError() { s.mu.Lock() defer s.mu.Unlock() s.retryError = true } // HadRetryError returns whether there has been at least one non-NoRetryError func (s *StatsInfo) HadRetryError() bool { s.mu.RLock() defer s.mu.RUnlock() return s.retryError } // Deletes updates the stats for deletes func 
(s *StatsInfo) Deletes(deletes int64) int64 { s.mu.Lock() defer s.mu.Unlock() s.deletes += deletes return s.deletes } // Renames updates the stats for renames func (s *StatsInfo) Renames(renames int64) int64 { s.mu.Lock() defer s.mu.Unlock() s.renames += renames return s.renames } // ResetCounters sets the counters (bytes, checks, errors, transfers, deletes, renames) to 0 and resets lastError, fatalError and retryError func (s *StatsInfo) ResetCounters() { s.mu.Lock() defer s.mu.Unlock() s.bytes = 0 s.errors = 0 s.lastError = nil s.fatalError = false s.retryError = false s.retryAfter = time.Time{} s.checks = 0 s.transfers = 0 s.deletes = 0 s.renames = 0 s.startedTransfers = nil s.oldDuration = 0 } // ResetErrors sets the errors count to 0 and resets lastError, fatalError and retryError func (s *StatsInfo) ResetErrors() { s.mu.Lock() defer s.mu.Unlock() s.errors = 0 s.lastError = nil s.fatalError = false s.retryError = false s.retryAfter = time.Time{} } // Errored returns whether there have been any errors func (s *StatsInfo) Errored() bool { s.mu.RLock() defer s.mu.RUnlock() return s.errors != 0 } // Error adds a single error into the stats, assigns lastError and eventually sets fatalError or retryError func (s *StatsInfo) Error(err error) error { if err == nil || fserrors.IsCounted(err) { return err } s.mu.Lock() defer s.mu.Unlock() s.errors++ s.lastError = err err = fserrors.FsError(err) fserrors.Count(err) switch { case fserrors.IsFatalError(err): s.fatalError = true case fserrors.IsRetryAfterError(err): retryAfter := fserrors.RetryAfterErrorTime(err) if s.retryAfter.IsZero() || retryAfter.Sub(s.retryAfter) > 0 { s.retryAfter = retryAfter } s.retryError = true case !fserrors.IsNoRetryError(err): s.retryError = true } return err } // RetryAfter returns the time to retry after if it is set. It will // be Zero if it isn't set. func (s *StatsInfo) RetryAfter() time.Time { s.mu.Lock() defer s.mu.Unlock() return s.retryAfter } // NewCheckingTransfer adds a checking transfer to the stats, from the object. func (s *StatsInfo) NewCheckingTransfer(obj fs.Object) *Transfer { tr := newCheckingTransfer(s, obj) s.checking.add(tr) return tr } // DoneChecking removes a check from the stats func (s *StatsInfo) DoneChecking(remote string) { s.checking.del(remote) s.mu.Lock() s.checks++ s.mu.Unlock() } // GetTransfers reads the number of transfers func (s *StatsInfo) GetTransfers() int64 { s.mu.RLock() defer s.mu.RUnlock() return s.transfers } // NewTransfer adds a transfer to the stats from the object. func (s *StatsInfo) NewTransfer(obj fs.Object) *Transfer { tr := newTransfer(s, obj) s.transferring.add(tr) return tr } // NewTransferRemoteSize adds a transfer to the stats based on remote and size. 
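//
// A minimal usage sketch (illustrative only; the name and size here are
// made up, not from the original source):
//
//	tr := stats.NewTransferRemoteSize("file.bin", 1024)
//	// ... do the copy, wrapping the source with tr.Account(ctx, in) ...
//	tr.Done(err) // finalises the transfer and updates the stats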
func (s *StatsInfo) NewTransferRemoteSize(remote string, size int64) *Transfer { tr := newTransferRemoteSize(s, remote, size, false) s.transferring.add(tr) return tr } // DoneTransferring removes a transfer from the stats // // if ok is true then it increments the transfers count func (s *StatsInfo) DoneTransferring(remote string, ok bool) { s.transferring.del(remote) if ok { s.mu.Lock() s.transfers++ s.mu.Unlock() } } // SetCheckQueue sets the number of queued checks func (s *StatsInfo) SetCheckQueue(n int, size int64) { s.mu.Lock() s.checkQueue = n s.checkQueueSize = size s.mu.Unlock() } // SetTransferQueue sets the number of queued transfers func (s *StatsInfo) SetTransferQueue(n int, size int64) { s.mu.Lock() s.transferQueue = n s.transferQueueSize = size s.mu.Unlock() } // SetRenameQueue sets the number of queued renames func (s *StatsInfo) SetRenameQueue(n int, size int64) { s.mu.Lock() s.renameQueue = n s.renameQueueSize = size s.mu.Unlock() } // AddTransfer adds reference to the started transfer. func (s *StatsInfo) AddTransfer(transfer *Transfer) { s.mu.Lock() s.startedTransfers = append(s.startedTransfers, transfer) s.mu.Unlock() } // removeTransfer removes a reference to the started transfer in // position i. // // Must be called with the lock held func (s *StatsInfo) removeTransfer(transfer *Transfer, i int) { now := time.Now() // add finished transfer onto old time ranges start, end := transfer.TimeRange() if end.IsZero() { end = now } s.oldTimeRanges = append(s.oldTimeRanges, timeRange{start, end}) s.oldTimeRanges.merge() // remove the found entry s.startedTransfers = append(s.startedTransfers[:i], s.startedTransfers[i+1:]...) // Find the oldest start among the remaining transfers oldestStart := now for i := range s.startedTransfers { start, _ := s.startedTransfers[i].TimeRange() if start.Before(oldestStart) { oldestStart = start } } // remove old entries which finished before that s.oldDuration += s.oldTimeRanges.cull(oldestStart) } // RemoveTransfer removes a reference to the started transfer. func (s *StatsInfo) RemoveTransfer(transfer *Transfer) { s.mu.Lock() for i, tr := range s.startedTransfers { if tr == transfer { s.removeTransfer(tr, i) break } } s.mu.Unlock() } // PruneTransfers makes sure there aren't too many old transfers by removing // a single finished transfer. func (s *StatsInfo) PruneTransfers() { if MaxCompletedTransfers < 0 { return } s.mu.Lock() // remove a transfer from the start if we are over quota if len(s.startedTransfers) > MaxCompletedTransfers+fs.Config.Transfers { for i, tr := range s.startedTransfers { if tr.IsDone() { s.removeTransfer(tr, i) break } } } s.mu.Unlock() } rclone-1.53.3/fs/accounting/stats_groups.go000066400000000000000000000224031375552240400207230ustar00rootroot00000000000000package accounting import ( "context" "sync" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/fs" ) const globalStats = "global_stats" var groups *statsGroups func init() { // Init stats container groups = newStatsGroups() // Set the function pointer up in fs fs.CountError = GlobalStats().Error } func rcListStats(ctx context.Context, in rc.Params) (rc.Params, error) { out := make(rc.Params) out["groups"] = groups.names() return out, nil } func init() { rc.Add(rc.Call{ Path: "core/group-list", Fn: rcListStats, Title: "Returns a list of stats groups.", Help: ` This returns a list of the stats groups currently in memory. Returns the following values: ` + "```" + ` { "groups": an array of group names: [ "group1", "group2", ...
] } ` + "```" + ` `, }) } func rcRemoteStats(ctx context.Context, in rc.Params) (rc.Params, error) { // Check to see if we should filter by group. group, err := in.GetString("group") if rc.NotErrParamNotFound(err) { return rc.Params{}, err } if group != "" { return StatsGroup(group).RemoteStats() } return groups.sum().RemoteStats() } func init() { rc.Add(rc.Call{ Path: "core/stats", Fn: rcRemoteStats, Title: "Returns stats about current transfers.", Help: ` This returns all available stats: rclone rc core/stats If group is not provided then summed up stats for all groups will be returned. Parameters - group - name of the stats group (string) Returns the following values: ` + "```" + ` { "speed": average speed in bytes/sec since start of the process, "bytes": total transferred bytes since the start of the process, "errors": number of errors, "fatalError": whether there has been at least one FatalError, "retryError": whether there has been at least one non-NoRetryError, "checks": number of checked files, "transfers": number of transferred files, "deletes" : number of deleted files, "renames" : number of renamed files, "transferTime" : total time spent on running jobs, "elapsedTime": time in seconds since the start of the process, "lastError": last occurred error, "transferring": an array of currently active file transfers: [ { "bytes": total transferred bytes for this file, "eta": estimated time in seconds until file transfer completion "name": name of the file, "percentage": progress of the file transfer in percent, "speed": average speed over the whole transfer in bytes/sec, "speedAvg": current speed in bytes/sec as an exponentially weighted moving average, "size": size of the file in bytes } ], "checking": an array of names of currently active file checks [] } ` + "```" + ` Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined. `, }) } func rcTransferredStats(ctx context.Context, in rc.Params) (rc.Params, error) { // Check to see if we should filter by group. group, err := in.GetString("group") if rc.NotErrParamNotFound(err) { return rc.Params{}, err } out := make(rc.Params) if group != "" { out["transferred"] = StatsGroup(group).Transferred() } else { out["transferred"] = groups.sum().Transferred() } return out, nil } func init() { rc.Add(rc.Call{ Path: "core/transferred", Fn: rcTransferredStats, Title: "Returns stats about completed transfers.", Help: ` This returns stats about completed transfers: rclone rc core/transferred If group is not provided then completed transfers for all groups will be returned. Note only the last 100 completed transfers are returned. Parameters - group - name of the stats group (string) Returns the following values: ` + "```" + ` { "transferred": an array of completed transfers (including failed ones): [ { "name": name of the file, "size": size of the file in bytes, "bytes": total transferred bytes for this file, "checked": if the transfer is only checked (skipped, deleted), "timestamp": integer representing millisecond unix epoch, "error": string description of the error (empty if successful), "jobid": id of the job that this transfer belongs to } ] } ` + "```" + ` `, }) } func rcResetStats(ctx context.Context, in rc.Params) (rc.Params, error) { // Check to see if we should filter by group. 
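// For example (added comment, invocation is illustrative):
// `rclone rc core/stats-reset group=job/1` resets just that group, while
// omitting the group parameter resets every group via groups.reset().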
group, err := in.GetString("group") if rc.NotErrParamNotFound(err) { return rc.Params{}, err } if group != "" { stats := groups.get(group) stats.ResetErrors() stats.ResetCounters() } else { groups.reset() } return rc.Params{}, nil } func init() { rc.Add(rc.Call{ Path: "core/stats-reset", Fn: rcResetStats, Title: "Reset stats.", Help: ` This clears counters, errors and finished transfers for all stats or specific stats group if group is provided. Parameters - group - name of the stats group (string) `, }) } func rcDeleteStats(ctx context.Context, in rc.Params) (rc.Params, error) { // Group name required because we only do single group. group, err := in.GetString("group") if rc.NotErrParamNotFound(err) { return rc.Params{}, err } if group != "" { groups.delete(group) } return rc.Params{}, nil } func init() { rc.Add(rc.Call{ Path: "core/stats-delete", Fn: rcDeleteStats, Title: "Delete stats group.", Help: ` This deletes entire stats group Parameters - group - name of the stats group (string) `, }) } type statsGroupCtx int64 const statsGroupKey statsGroupCtx = 1 // WithStatsGroup returns copy of the parent context with assigned group. func WithStatsGroup(parent context.Context, group string) context.Context { return context.WithValue(parent, statsGroupKey, group) } // StatsGroupFromContext returns group from the context if it's available. // Returns false if group is empty. func StatsGroupFromContext(ctx context.Context) (string, bool) { statsGroup, ok := ctx.Value(statsGroupKey).(string) if statsGroup == "" { ok = false } return statsGroup, ok } // Stats gets stats by extracting group from context. func Stats(ctx context.Context) *StatsInfo { group, ok := StatsGroupFromContext(ctx) if !ok { return GlobalStats() } return StatsGroup(group) } // StatsGroup gets stats by group name. func StatsGroup(group string) *StatsInfo { stats := groups.get(group) if stats == nil { return NewStatsGroup(group) } return stats } // GlobalStats returns special stats used for global accounting. func GlobalStats() *StatsInfo { return StatsGroup(globalStats) } // NewStatsGroup creates new stats under named group. func NewStatsGroup(group string) *StatsInfo { stats := NewStats() stats.group = group groups.set(group, stats) return stats } // statsGroups holds a synchronized map of stats type statsGroups struct { mu sync.Mutex m map[string]*StatsInfo order []string } // newStatsGroups makes a new statsGroups object func newStatsGroups() *statsGroups { return &statsGroups{ m: make(map[string]*StatsInfo), } } // set marks the stats as belonging to a group func (sg *statsGroups) set(group string, stats *StatsInfo) { sg.mu.Lock() defer sg.mu.Unlock() // Limit number of groups kept in memory. if len(sg.order) >= fs.Config.MaxStatsGroups { group := sg.order[0] fs.LogPrintf(fs.LogLevelDebug, nil, "Max number of stats groups reached removing %s", group) delete(sg.m, group) r := (len(sg.order) - fs.Config.MaxStatsGroups) + 1 sg.order = sg.order[r:] } // Exclude global stats from listing if group != globalStats { sg.order = append(sg.order, group) } sg.m[group] = stats } // get gets the stats for group, or nil if not found func (sg *statsGroups) get(group string) *StatsInfo { sg.mu.Lock() defer sg.mu.Unlock() stats, ok := sg.m[group] if !ok { return nil } return stats } func (sg *statsGroups) names() []string { sg.mu.Lock() defer sg.mu.Unlock() return sg.order } // sum returns aggregate stats that contains summation of all groups. 
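// For example (added comment): if group "a" holds 5 bytes and 2 errors
// and group "b" holds 10 bytes and 1 error, the summed StatsInfo reports
// 15 bytes and 3 errors, and lastError is taken from the first group
// encountered that has one.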
func (sg *statsGroups) sum() *StatsInfo { sg.mu.Lock() defer sg.mu.Unlock() sum := NewStats() for _, stats := range sg.m { stats.mu.RLock() { sum.bytes += stats.bytes sum.errors += stats.errors sum.fatalError = sum.fatalError || stats.fatalError sum.retryError = sum.retryError || stats.retryError sum.checks += stats.checks sum.transfers += stats.transfers sum.deletes += stats.deletes sum.renames += stats.renames sum.checking.merge(stats.checking) sum.transferring.merge(stats.transferring) sum.inProgress.merge(stats.inProgress) if sum.lastError == nil && stats.lastError != nil { sum.lastError = stats.lastError } sum.startedTransfers = append(sum.startedTransfers, stats.startedTransfers...) sum.oldDuration += stats.oldDuration sum.oldTimeRanges = append(sum.oldTimeRanges, stats.oldTimeRanges...) } stats.mu.RUnlock() } return sum } func (sg *statsGroups) reset() { sg.mu.Lock() defer sg.mu.Unlock() for _, stats := range sg.m { stats.ResetErrors() stats.ResetCounters() } sg.m = make(map[string]*StatsInfo) sg.order = nil } // delete removes all references to the group. func (sg *statsGroups) delete(group string) { sg.mu.Lock() defer sg.mu.Unlock() stats := sg.m[group] if stats == nil { return } stats.ResetErrors() stats.ResetCounters() delete(sg.m, group) // Remove group reference from the ordering slice. tmp := sg.order[:0] for _, g := range sg.order { if g != group { tmp = append(tmp, g) } } sg.order = tmp } rclone-1.53.3/fs/accounting/stats_groups_test.go000066400000000000000000000056741375552240400217750ustar00rootroot00000000000000package accounting import ( "fmt" "runtime" "testing" "time" "github.com/rclone/rclone/fstest/testy" "github.com/stretchr/testify/assert" ) func TestStatsGroupOperations(t *testing.T) { t.Run("empty group returns nil", func(t *testing.T) { t.Parallel() sg := newStatsGroups() sg.get("invalid-group") }) t.Run("set assigns stats to group", func(t *testing.T) { t.Parallel() stats := NewStats() sg := newStatsGroups() sg.set("test", stats) sg.set("test1", stats) if len(sg.m) != len(sg.names()) || len(sg.m) != 2 { t.Fatalf("Expected two stats got %d, %d", len(sg.m), len(sg.order)) } }) t.Run("get returns correct group", func(t *testing.T) { t.Parallel() stats := NewStats() sg := newStatsGroups() sg.set("test", stats) sg.set("test1", stats) got := sg.get("test") if got != stats { t.Fatal("get returns incorrect stats") } }) t.Run("sum returns correct values", func(t *testing.T) { t.Parallel() stats1 := NewStats() stats1.bytes = 5 stats1.errors = 6 stats1.oldDuration = time.Second stats1.oldTimeRanges = []timeRange{{time.Now(), time.Now().Add(time.Second)}} stats2 := NewStats() stats2.bytes = 10 stats2.errors = 12 stats2.oldDuration = 2 * time.Second stats2.oldTimeRanges = []timeRange{{time.Now(), time.Now().Add(2 * time.Second)}} sg := newStatsGroups() sg.set("test1", stats1) sg.set("test2", stats2) sum := sg.sum() assert.Equal(t, stats1.bytes+stats2.bytes, sum.bytes) assert.Equal(t, stats1.errors+stats2.errors, sum.errors) assert.Equal(t, stats1.oldDuration+stats2.oldDuration, sum.oldDuration) // dict can iterate in either order a := timeRanges{stats1.oldTimeRanges[0], stats2.oldTimeRanges[0]} b := timeRanges{stats2.oldTimeRanges[0], stats1.oldTimeRanges[0]} if !assert.ObjectsAreEqual(a, sum.oldTimeRanges) { assert.Equal(t, b, sum.oldTimeRanges) } }) t.Run("delete removes stats", func(t *testing.T) { t.Parallel() stats := NewStats() sg := newStatsGroups() sg.set("test", stats) sg.set("test1", stats) sg.delete("test1") if sg.get("test1") != nil { t.Fatal("stats not 
deleted") } if len(sg.m) != len(sg.names()) || len(sg.m) != 1 { t.Fatalf("Expected two stats got %d, %d", len(sg.m), len(sg.order)) } }) t.Run("memory is reclaimed", func(t *testing.T) { testy.SkipUnreliable(t) var ( count = 1000 start, end runtime.MemStats sg = newStatsGroups() ) runtime.GC() runtime.ReadMemStats(&start) for i := 0; i < count; i++ { sg.set(fmt.Sprintf("test-%d", i), NewStats()) } for i := 0; i < count; i++ { sg.delete(fmt.Sprintf("test-%d", i)) } runtime.GC() runtime.ReadMemStats(&end) t.Log(fmt.Sprintf("%+v\n%+v", start, end)) diff := percentDiff(start.HeapObjects, end.HeapObjects) if diff > 1 || diff < 0 { t.Errorf("HeapObjects = %d, expected %d", end.HeapObjects, start.HeapObjects) } }) } func percentDiff(start, end uint64) uint64 { return (start - end) * 100 / start } rclone-1.53.3/fs/accounting/stats_test.go000066400000000000000000000300621375552240400203630ustar00rootroot00000000000000package accounting import ( "fmt" "io" "testing" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/fserrors" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestETA(t *testing.T) { for _, test := range []struct { size, total int64 rate float64 wantETA time.Duration wantOK bool wantString string }{ // Custom String Cases {size: 0, total: 365 * 86400, rate: 1.0, wantETA: 365 * 86400 * time.Second, wantOK: true, wantString: "1y"}, {size: 0, total: 7 * 86400, rate: 1.0, wantETA: 7 * 86400 * time.Second, wantOK: true, wantString: "1w"}, {size: 0, total: 1 * 86400, rate: 1.0, wantETA: 1 * 86400 * time.Second, wantOK: true, wantString: "1d"}, {size: 0, total: 1110 * 86400, rate: 1.0, wantETA: 1110 * 86400 * time.Second, wantOK: true, wantString: "3y2w1d"}, {size: 0, total: 15 * 86400, rate: 1.0, wantETA: 15 * 86400 * time.Second, wantOK: true, wantString: "2w1d"}, // Composite Custom String Cases {size: 0, total: 1.5 * 86400, rate: 1.0, wantETA: 1.5 * 86400 * time.Second, wantOK: true, wantString: "1d12h"}, {size: 0, total: 95000, rate: 1.0, wantETA: 95000 * time.Second, wantOK: true, wantString: "1d2h23m20s"}, // Standard Duration String Cases {size: 0, total: 100, rate: 1.0, wantETA: 100 * time.Second, wantOK: true, wantString: "1m40s"}, {size: 50, total: 100, rate: 1.0, wantETA: 50 * time.Second, wantOK: true, wantString: "50s"}, {size: 100, total: 100, rate: 1.0, wantETA: 0 * time.Second, wantOK: true, wantString: "0s"}, // No String Cases {size: -1, total: 100, rate: 1.0, wantETA: 0, wantOK: false, wantString: "-"}, {size: 200, total: 100, rate: 1.0, wantETA: 0, wantOK: false, wantString: "-"}, {size: 10, total: -1, rate: 1.0, wantETA: 0, wantOK: false, wantString: "-"}, {size: 10, total: 20, rate: 0.0, wantETA: 0, wantOK: false, wantString: "-"}, {size: 10, total: 20, rate: -1.0, wantETA: 0, wantOK: false, wantString: "-"}, {size: 0, total: 0, rate: 1.0, wantETA: 0, wantOK: false, wantString: "-"}, } { t.Run(fmt.Sprintf("size=%d/total=%d/rate=%f", test.size, test.total, test.rate), func(t *testing.T) { gotETA, gotOK := eta(test.size, test.total, test.rate) assert.Equal(t, test.wantETA, gotETA) assert.Equal(t, test.wantOK, gotOK) gotString := etaString(test.size, test.total, test.rate) assert.Equal(t, test.wantString, gotString) }) } } func TestPercentage(t *testing.T) { assert.Equal(t, percent(0, 1000), "0%") assert.Equal(t, percent(1, 1000), "0%") assert.Equal(t, percent(9, 1000), "1%") assert.Equal(t, percent(500, 1000), "50%") assert.Equal(t, percent(1000, 1000), "100%") assert.Equal(t, percent(1e8, 1e9), 
"10%") assert.Equal(t, percent(1e8, 1e9), "10%") assert.Equal(t, percent(0, 0), "-") assert.Equal(t, percent(100, -100), "-") assert.Equal(t, percent(-100, 100), "-") assert.Equal(t, percent(-100, -100), "-") } func TestStatsError(t *testing.T) { s := NewStats() assert.Equal(t, int64(0), s.GetErrors()) assert.False(t, s.HadFatalError()) assert.False(t, s.HadRetryError()) assert.Equal(t, time.Time{}, s.RetryAfter()) assert.Equal(t, nil, s.GetLastError()) assert.False(t, s.Errored()) t0 := time.Now() t1 := t0.Add(time.Second) _ = s.Error(nil) assert.Equal(t, int64(0), s.GetErrors()) assert.False(t, s.HadFatalError()) assert.False(t, s.HadRetryError()) assert.Equal(t, time.Time{}, s.RetryAfter()) assert.Equal(t, nil, s.GetLastError()) assert.False(t, s.Errored()) _ = s.Error(io.EOF) assert.Equal(t, int64(1), s.GetErrors()) assert.False(t, s.HadFatalError()) assert.True(t, s.HadRetryError()) assert.Equal(t, time.Time{}, s.RetryAfter()) assert.Equal(t, io.EOF, s.GetLastError()) assert.True(t, s.Errored()) e := fserrors.ErrorRetryAfter(t0) _ = s.Error(e) assert.Equal(t, int64(2), s.GetErrors()) assert.False(t, s.HadFatalError()) assert.True(t, s.HadRetryError()) assert.Equal(t, t0, s.RetryAfter()) assert.Equal(t, e, s.GetLastError()) err := errors.Wrap(fserrors.ErrorRetryAfter(t1), "potato") err = s.Error(err) assert.Equal(t, int64(3), s.GetErrors()) assert.False(t, s.HadFatalError()) assert.True(t, s.HadRetryError()) assert.Equal(t, t1, s.RetryAfter()) assert.Equal(t, t1, fserrors.RetryAfterErrorTime(err)) _ = s.Error(fserrors.FatalError(io.EOF)) assert.Equal(t, int64(4), s.GetErrors()) assert.True(t, s.HadFatalError()) assert.True(t, s.HadRetryError()) assert.Equal(t, t1, s.RetryAfter()) s.ResetErrors() assert.Equal(t, int64(0), s.GetErrors()) assert.False(t, s.HadFatalError()) assert.False(t, s.HadRetryError()) assert.Equal(t, time.Time{}, s.RetryAfter()) assert.Equal(t, nil, s.GetLastError()) assert.False(t, s.Errored()) _ = s.Error(fserrors.NoRetryError(io.EOF)) assert.Equal(t, int64(1), s.GetErrors()) assert.False(t, s.HadFatalError()) assert.False(t, s.HadRetryError()) assert.Equal(t, time.Time{}, s.RetryAfter()) } func TestStatsTotalDuration(t *testing.T) { startTime := time.Now() time1 := startTime.Add(-40 * time.Second) time2 := time1.Add(10 * time.Second) time3 := time2.Add(10 * time.Second) time4 := time3.Add(10 * time.Second) t.Run("Single completed transfer", func(t *testing.T) { s := NewStats() tr1 := &Transfer{ startedAt: time1, completedAt: time2, } s.AddTransfer(tr1) s.mu.Lock() total := s.totalDuration() s.mu.Unlock() assert.Equal(t, 1, len(s.startedTransfers)) assert.Equal(t, 10*time.Second, total) s.RemoveTransfer(tr1) assert.Equal(t, 10*time.Second, total) assert.Equal(t, 0, len(s.startedTransfers)) }) t.Run("Single uncompleted transfer", func(t *testing.T) { s := NewStats() tr1 := &Transfer{ startedAt: time1, } s.AddTransfer(tr1) s.mu.Lock() total := s.totalDuration() s.mu.Unlock() assert.Equal(t, time.Since(time1)/time.Second, total/time.Second) s.RemoveTransfer(tr1) assert.Equal(t, time.Since(time1)/time.Second, total/time.Second) }) t.Run("Overlapping without ending", func(t *testing.T) { s := NewStats() tr1 := &Transfer{ startedAt: time2, completedAt: time3, } s.AddTransfer(tr1) tr2 := &Transfer{ startedAt: time2, completedAt: time2.Add(time.Second), } s.AddTransfer(tr2) tr3 := &Transfer{ startedAt: time1, completedAt: time3, } s.AddTransfer(tr3) tr4 := &Transfer{ startedAt: time3, completedAt: time4, } s.AddTransfer(tr4) tr5 := &Transfer{ startedAt: time.Now(), } 
s.AddTransfer(tr5) time.Sleep(time.Millisecond) s.mu.Lock() total := s.totalDuration() s.mu.Unlock() assert.Equal(t, time.Duration(30), total/time.Second) s.RemoveTransfer(tr1) assert.Equal(t, time.Duration(30), total/time.Second) s.RemoveTransfer(tr2) assert.Equal(t, time.Duration(30), total/time.Second) s.RemoveTransfer(tr3) assert.Equal(t, time.Duration(30), total/time.Second) s.RemoveTransfer(tr4) assert.Equal(t, time.Duration(30), total/time.Second) }) t.Run("Mixed completed and uncompleted transfers", func(t *testing.T) { s := NewStats() s.AddTransfer(&Transfer{ startedAt: time1, completedAt: time2, }) s.AddTransfer(&Transfer{ startedAt: time2, }) s.AddTransfer(&Transfer{ startedAt: time3, }) s.AddTransfer(&Transfer{ startedAt: time3, }) s.mu.Lock() total := s.totalDuration() s.mu.Unlock() assert.Equal(t, startTime.Sub(time1)/time.Second, total/time.Second) }) } // make time ranges from string description for testing func makeTimeRanges(t *testing.T, in []string) timeRanges { trs := make(timeRanges, len(in)) for i, Range := range in { var start, end int64 n, err := fmt.Sscanf(Range, "%d-%d", &start, &end) require.NoError(t, err) require.Equal(t, 2, n) trs[i] = timeRange{time.Unix(start, 0), time.Unix(end, 0)} } return trs } func (trs timeRanges) toStrings() (out []string) { out = []string{} for _, tr := range trs { out = append(out, fmt.Sprintf("%d-%d", tr.start.Unix(), tr.end.Unix())) } return out } func TestTimeRangeMerge(t *testing.T) { for _, test := range []struct { in []string want []string }{{ in: []string{}, want: []string{}, }, { in: []string{"1-2"}, want: []string{"1-2"}, }, { in: []string{"1-4", "2-3"}, want: []string{"1-4"}, }, { in: []string{"2-3", "1-4"}, want: []string{"1-4"}, }, { in: []string{"1-3", "2-4"}, want: []string{"1-4"}, }, { in: []string{"2-4", "1-3"}, want: []string{"1-4"}, }, { in: []string{"1-2", "2-3"}, want: []string{"1-3"}, }, { in: []string{"2-3", "1-2"}, want: []string{"1-3"}, }, { in: []string{"1-2", "3-4"}, want: []string{"1-2", "3-4"}, }, { in: []string{"1-3", "7-8", "4-6", "2-5", "7-8", "7-8"}, want: []string{"1-6", "7-8"}, }} { in := makeTimeRanges(t, test.in) in.merge() got := in.toStrings() assert.Equal(t, test.want, got) } } func TestTimeRangeCull(t *testing.T) { for _, test := range []struct { in []string cutoff int64 want []string wantDuration time.Duration }{{ in: []string{}, cutoff: 1, want: []string{}, wantDuration: 0 * time.Second, }, { in: []string{"1-2"}, cutoff: 1, want: []string{"1-2"}, wantDuration: 0 * time.Second, }, { in: []string{"2-5", "7-9"}, cutoff: 1, want: []string{"2-5", "7-9"}, wantDuration: 0 * time.Second, }, { in: []string{"2-5", "7-9"}, cutoff: 4, want: []string{"2-5", "7-9"}, wantDuration: 0 * time.Second, }, { in: []string{"2-5", "7-9"}, cutoff: 5, want: []string{"7-9"}, wantDuration: 3 * time.Second, }, { in: []string{"2-5", "7-9", "2-5", "2-5"}, cutoff: 6, want: []string{"7-9"}, wantDuration: 9 * time.Second, }, { in: []string{"7-9", "3-3", "2-5"}, cutoff: 7, want: []string{"7-9"}, wantDuration: 3 * time.Second, }, { in: []string{"2-5", "7-9"}, cutoff: 8, want: []string{"7-9"}, wantDuration: 3 * time.Second, }, { in: []string{"2-5", "7-9"}, cutoff: 9, want: []string{}, wantDuration: 5 * time.Second, }, { in: []string{"2-5", "7-9"}, cutoff: 10, want: []string{}, wantDuration: 5 * time.Second, }} { in := makeTimeRanges(t, test.in) cutoff := time.Unix(test.cutoff, 0) gotDuration := in.cull(cutoff) what := fmt.Sprintf("in=%q, cutoff=%d", test.in, test.cutoff) got := in.toStrings() assert.Equal(t, test.want, got, 
what) assert.Equal(t, test.wantDuration, gotDuration, what) } } func TestTimeRangeDuration(t *testing.T) { assert.Equal(t, 0*time.Second, timeRanges{}.total()) assert.Equal(t, 1*time.Second, makeTimeRanges(t, []string{"1-2"}).total()) assert.Equal(t, 91*time.Second, makeTimeRanges(t, []string{"1-2", "10-100"}).total()) } func TestPruneTransfers(t *testing.T) { for _, test := range []struct { Name string Transfers int Limit int ExpectedStartedTransfers int }{ { Name: "Limited number of StartedTransfers", Limit: 100, Transfers: 200, ExpectedStartedTransfers: 100 + fs.Config.Transfers, }, { Name: "Unlimited number of StartedTransfers", Limit: -1, Transfers: 200, ExpectedStartedTransfers: 200, }, } { t.Run(test.Name, func(t *testing.T) { prevLimit := MaxCompletedTransfers MaxCompletedTransfers = test.Limit defer func() { MaxCompletedTransfers = prevLimit }() s := NewStats() for i := int64(1); i <= int64(test.Transfers); i++ { s.AddTransfer(&Transfer{ startedAt: time.Unix(i, 0), completedAt: time.Unix(i+1, 0), }) } s.mu.Lock() assert.Equal(t, time.Duration(test.Transfers)*time.Second, s.totalDuration()) assert.Equal(t, test.Transfers, len(s.startedTransfers)) s.mu.Unlock() for i := 0; i < test.Transfers; i++ { s.PruneTransfers() } s.mu.Lock() assert.Equal(t, time.Duration(test.Transfers)*time.Second, s.totalDuration()) assert.Equal(t, test.ExpectedStartedTransfers, len(s.startedTransfers)) s.mu.Unlock() }) } } rclone-1.53.3/fs/accounting/token_bucket.go000066400000000000000000000120061375552240400206410ustar00rootroot00000000000000package accounting import ( "context" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/rc" "golang.org/x/time/rate" ) // Globals var ( tokenBucketMu sync.Mutex // protects the token bucket variables tokenBucket *rate.Limiter prevTokenBucket = tokenBucket bwLimitToggledOff = false currLimitMu sync.Mutex // protects changes to the timeslot currLimit fs.BwTimeSlot ) const maxBurstSize = 4 * 1024 * 1024 // must be bigger than the biggest request // make a new empty token bucket with the bandwidth given func newTokenBucket(bandwidth fs.SizeSuffix) *rate.Limiter { newTokenBucket := rate.NewLimiter(rate.Limit(bandwidth), maxBurstSize) // empty the bucket err := newTokenBucket.WaitN(context.Background(), maxBurstSize) if err != nil { fs.Errorf(nil, "Failed to empty token bucket: %v", err) } return newTokenBucket } // StartTokenBucket starts the token bucket if necessary func StartTokenBucket() { currLimitMu.Lock() currLimit := fs.Config.BwLimit.LimitAt(time.Now()) currLimitMu.Unlock() if currLimit.Bandwidth > 0 { tokenBucket = newTokenBucket(currLimit.Bandwidth) fs.Infof(nil, "Starting bandwidth limiter at %vBytes/s", &currLimit.Bandwidth) // Start the SIGUSR2 signal handler to toggle bandwidth. // This function does nothing in windows systems. startSignalHandler() } } // StartTokenTicker creates a ticker to update the bandwidth limiter every minute. func StartTokenTicker() { // If the timetable has a single entry or was not specified, we don't need // a ticker to update the bandwidth. 
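// Added example (illustrative; see the --bwlimit docs for the format): a
// timetable such as "08:00,512 12:00,10M 23:00,off" has three entries, so
// the ticker below re-evaluates which limit applies once a minute.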
if len(fs.Config.BwLimit) <= 1 { return } ticker := time.NewTicker(time.Minute) go func() { for range ticker.C { limitNow := fs.Config.BwLimit.LimitAt(time.Now()) currLimitMu.Lock() if currLimit.Bandwidth != limitNow.Bandwidth { tokenBucketMu.Lock() // If bwlimit is toggled off, the change should only // become active on the next toggle, which causes // an exchange of tokenBucket <-> prevTokenBucket var targetBucket **rate.Limiter if bwLimitToggledOff { targetBucket = &prevTokenBucket } else { targetBucket = &tokenBucket } // Set the new bandwidth. If unlimited, set the token bucket to nil. if limitNow.Bandwidth > 0 { *targetBucket = newTokenBucket(limitNow.Bandwidth) if bwLimitToggledOff { fs.Logf(nil, "Scheduled bandwidth change. "+ "Limit will be set to %vBytes/s when toggled on again.", &limitNow.Bandwidth) } else { fs.Logf(nil, "Scheduled bandwidth change. Limit set to %vBytes/s", &limitNow.Bandwidth) } } else { *targetBucket = nil fs.Logf(nil, "Scheduled bandwidth change. Bandwidth limits disabled") } currLimit = limitNow tokenBucketMu.Unlock() } currLimitMu.Unlock() } }() } // limitBandwidth sleeps for the correct amount of time for the passage // of n bytes according to the current bandwidth limit func limitBandwidth(n int) { tokenBucketMu.Lock() // Limit the transfer speed if required if tokenBucket != nil { err := tokenBucket.WaitN(context.Background(), n) if err != nil { fs.Errorf(nil, "Token bucket error: %v", err) } } tokenBucketMu.Unlock() } // SetBwLimit sets the current bandwidth limit func SetBwLimit(bandwidth fs.SizeSuffix) { tokenBucketMu.Lock() defer tokenBucketMu.Unlock() if bandwidth > 0 { tokenBucket = newTokenBucket(bandwidth) fs.Logf(nil, "Bandwidth limit set to %v", bandwidth) } else { tokenBucket = nil fs.Logf(nil, "Bandwidth limit reset to unlimited") } } // Remote control for the token bucket func init() { rc.Add(rc.Call{ Path: "core/bwlimit", Fn: func(ctx context.Context, in rc.Params) (out rc.Params, err error) { if in["rate"] != nil { bwlimit, err := in.GetString("rate") if err != nil { return out, err } var bws fs.BwTimetable err = bws.Set(bwlimit) if err != nil { return out, errors.Wrap(err, "bad bwlimit") } if len(bws) != 1 { return out, errors.New("need exactly 1 bandwidth setting") } bw := bws[0] SetBwLimit(bw.Bandwidth) } bytesPerSecond := int64(-1) if tokenBucket != nil { bytesPerSecond = int64(tokenBucket.Limit()) } out = rc.Params{ "rate": fs.SizeSuffix(bytesPerSecond).String(), "bytesPerSecond": bytesPerSecond, } return out, nil }, Title: "Set the bandwidth limit.", Help: ` This sets the bandwidth limit to that passed in. Eg rclone rc core/bwlimit rate=off { "bytesPerSecond": -1, "rate": "off" } rclone rc core/bwlimit rate=1M { "bytesPerSecond": 1048576, "rate": "1M" } If the rate parameter is not supplied then the bandwidth is queried rclone rc core/bwlimit { "bytesPerSecond": 1048576, "rate": "1M" } The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified. In either case "rate" is returned as a human-readable string, and "bytesPerSecond" is returned as a number.
`, }) } rclone-1.53.3/fs/accounting/token_bucket_test.go000066400000000000000000000022641375552240400217050ustar00rootroot00000000000000package accounting import ( "context" "testing" "github.com/rclone/rclone/fs/rc" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "golang.org/x/time/rate" ) func TestRcBwLimit(t *testing.T) { call := rc.Calls.Get("core/bwlimit") assert.NotNil(t, call) // Set in := rc.Params{ "rate": "1M", } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params{ "bytesPerSecond": int64(1048576), "rate": "1M", }, out) assert.Equal(t, rate.Limit(1048576), tokenBucket.Limit()) // Query in = rc.Params{} out, err = call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params{ "bytesPerSecond": int64(1048576), "rate": "1M", }, out) // Reset in = rc.Params{ "rate": "off", } out, err = call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params{ "bytesPerSecond": int64(-1), "rate": "off", }, out) assert.Nil(t, tokenBucket) // Query in = rc.Params{} out, err = call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params{ "bytesPerSecond": int64(-1), "rate": "off", }, out) } rclone-1.53.3/fs/accounting/transfer.go000066400000000000000000000110731375552240400200130ustar00rootroot00000000000000package accounting import ( "context" "encoding/json" "io" "sync" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/rc" ) // TransferSnapshot represents state of an account at point in time. type TransferSnapshot struct { Name string `json:"name"` Size int64 `json:"size"` Bytes int64 `json:"bytes"` Checked bool `json:"checked"` StartedAt time.Time `json:"started_at"` CompletedAt time.Time `json:"completed_at,omitempty"` Error error `json:"-"` Group string `json:"group"` } // MarshalJSON implements json.Marshaler interface. func (as TransferSnapshot) MarshalJSON() ([]byte, error) { err := "" if as.Error != nil { err = as.Error.Error() } type Alias TransferSnapshot return json.Marshal(&struct { Error string `json:"error"` Alias }{ Error: err, Alias: (Alias)(as), }) } // Transfer keeps track of initiated transfers and provides access to // accounting functions. // Transfer needs to be closed on completion. type Transfer struct { // these are initialised at creation and may be accessed without locking stats *StatsInfo remote string size int64 startedAt time.Time checking bool // Protects all below // // NB to avoid deadlocks we must release this lock before // calling any methods on Transfer.stats. This is because // StatsInfo calls back into Transfer. mu sync.RWMutex acc *Account err error completedAt time.Time } // newCheckingTransfer instantiates new checking of the object. func newCheckingTransfer(stats *StatsInfo, obj fs.Object) *Transfer { return newTransferRemoteSize(stats, obj.Remote(), obj.Size(), true) } // newTransfer instantiates new transfer. func newTransfer(stats *StatsInfo, obj fs.Object) *Transfer { return newTransferRemoteSize(stats, obj.Remote(), obj.Size(), false) } func newTransferRemoteSize(stats *StatsInfo, remote string, size int64, checking bool) *Transfer { tr := &Transfer{ stats: stats, remote: remote, size: size, startedAt: time.Now(), checking: checking, } stats.AddTransfer(tr) return tr } // Done ends the transfer. // Must be called after transfer is finished to run proper cleanups. 
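//
// A sketch of the intended call pattern (hypothetical caller; doTheCopy
// stands in for the real transfer work and is not part of this package):
//
//	tr := newTransfer(stats, obj)
//	err := doTheCopy(obj)
//	tr.Done(err) // records the error, closes the account, prunes old transfers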
func (tr *Transfer) Done(err error) { if err != nil { err = tr.stats.Error(err) tr.mu.Lock() tr.err = err tr.mu.Unlock() } tr.mu.RLock() acc := tr.acc tr.mu.RUnlock() if acc != nil { // Close the file if it is still open if err := acc.Close(); err != nil { fs.LogLevelPrintf(fs.Config.StatsLogLevel, nil, "can't close account: %+v\n", err) } // Signal done with accounting acc.Done() // free the account since we may keep the transfer acc = nil } tr.mu.Lock() tr.completedAt = time.Now() tr.mu.Unlock() if tr.checking { tr.stats.DoneChecking(tr.remote) } else { tr.stats.DoneTransferring(tr.remote, err == nil) } tr.stats.PruneTransfers() } // Reset allows switching the Account to another transfer method. func (tr *Transfer) Reset() { tr.mu.RLock() acc := tr.acc tr.acc = nil tr.mu.RUnlock() if acc != nil { if err := acc.Close(); err != nil { fs.LogLevelPrintf(fs.Config.StatsLogLevel, nil, "can't close account: %+v\n", err) } } } // Account returns a reader that knows how to keep track of transfer progress. func (tr *Transfer) Account(ctx context.Context, in io.ReadCloser) *Account { tr.mu.Lock() if tr.acc == nil { tr.acc = newAccountSizeName(ctx, tr.stats, in, tr.size, tr.remote) } else { tr.acc.UpdateReader(ctx, in) } tr.mu.Unlock() return tr.acc } // TimeRange returns the time transfer started and ended at. If not completed // it will return zero time for end time. func (tr *Transfer) TimeRange() (time.Time, time.Time) { tr.mu.RLock() defer tr.mu.RUnlock() return tr.startedAt, tr.completedAt } // IsDone returns true if transfer is completed. func (tr *Transfer) IsDone() bool { tr.mu.RLock() defer tr.mu.RUnlock() return !tr.completedAt.IsZero() } // Snapshot produces stats for this account at a point in time. func (tr *Transfer) Snapshot() TransferSnapshot { tr.mu.RLock() defer tr.mu.RUnlock() var s, b int64 = tr.size, 0 if tr.acc != nil { b, s = tr.acc.progress() } return TransferSnapshot{ Name: tr.remote, Checked: tr.checking, Size: s, Bytes: b, StartedAt: tr.startedAt, CompletedAt: tr.completedAt, Error: tr.err, Group: tr.stats.group, } } // rcStats returns stats for the transfer suitable for the rc func (tr *Transfer) rcStats() rc.Params { return rc.Params{ "name": tr.remote, // no locking needed to access these "size": tr.size, } } rclone-1.53.3/fs/accounting/transfermap.go000066400000000000000000000070711375552240400205140ustar00rootroot00000000000000package accounting import ( "fmt" "sort" "strings" "sync" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/rc" ) // transferMap holds name to transfer map type transferMap struct { mu sync.RWMutex items map[string]*Transfer name string } // newTransferMap creates a new empty transfer map of capacity size func newTransferMap(size int, name string) *transferMap { return &transferMap{ items: make(map[string]*Transfer, size), name: name, } } // add adds a new transfer to the map func (tm *transferMap) add(tr *Transfer) { tm.mu.Lock() tm.items[tr.remote] = tr tm.mu.Unlock() } // del removes a transfer from the map by name func (tm *transferMap) del(remote string) { tm.mu.Lock() delete(tm.items, remote) tm.mu.Unlock() } // merge adds items from another map func (tm *transferMap) merge(m *transferMap) { tm.mu.Lock() m.mu.Lock() for name, tr := range m.items { tm.items[name] = tr } m.mu.Unlock() tm.mu.Unlock() } // empty returns whether the map has any items func (tm *transferMap) empty() bool { tm.mu.RLock() defer tm.mu.RUnlock() return len(tm.items) == 0 } // count returns the number of items in the map func (tm *transferMap) count() int {
tm.mu.RLock() defer tm.mu.RUnlock() return len(tm.items) } // _sortedSlice returns all transfers sorted by start time // // Call with mu.Rlock held func (tm *transferMap) _sortedSlice() []*Transfer { s := make([]*Transfer, 0, len(tm.items)) for _, tr := range tm.items { s = append(s, tr) } // sort by time first and if equal by name. Note that the relatively // low time resolution on Windows can cause equal times. sort.Slice(s, func(i, j int) bool { a, b := s[i], s[j] if a.startedAt.Before(b.startedAt) { return true } else if !a.startedAt.Equal(b.startedAt) { return false } return a.remote < b.remote }) return s } // String returns string representation of map items excluding any in // exclude (if set). func (tm *transferMap) String(progress *inProgress, exclude *transferMap) string { tm.mu.RLock() defer tm.mu.RUnlock() strngs := make([]string, 0, len(tm.items)) for _, tr := range tm._sortedSlice() { if exclude != nil { exclude.mu.RLock() _, found := exclude.items[tr.remote] exclude.mu.RUnlock() if found { continue } } var out string if acc := progress.get(tr.remote); acc != nil { out = acc.String() } else { out = fmt.Sprintf("%*s: %s", fs.Config.StatsFileNameLength, shortenName(tr.remote, fs.Config.StatsFileNameLength), tm.name, ) } strngs = append(strngs, " * "+out) } return strings.Join(strngs, "\n") } // progress returns total bytes read as well as the size. func (tm *transferMap) progress(stats *StatsInfo) (totalBytes, totalSize int64) { tm.mu.RLock() defer tm.mu.RUnlock() for name := range tm.items { if acc := stats.inProgress.get(name); acc != nil { bytes, size := acc.progress() if size >= 0 && bytes >= 0 { totalBytes += bytes totalSize += size } } } return totalBytes, totalSize } // remotes returns a []string of the remote names for the transferMap func (tm *transferMap) remotes() (c []string) { tm.mu.RLock() defer tm.mu.RUnlock() for _, tr := range tm._sortedSlice() { c = append(c, tr.remote) } return c } // rcStats returns a []rc.Params of the stats for the transferMap func (tm *transferMap) rcStats(progress *inProgress) (t []rc.Params) { tm.mu.RLock() defer tm.mu.RUnlock() for _, tr := range tm._sortedSlice() { if acc := progress.get(tr.remote); acc != nil { t = append(t, acc.rcStats()) } else { t = append(t, tr.rcStats()) } } return t } rclone-1.53.3/fs/asyncreader/000077500000000000000000000000001375552240400160045ustar00rootroot00000000000000rclone-1.53.3/fs/asyncreader/asyncreader.go000066400000000000000000000204251375552240400206360ustar00rootroot00000000000000// Package asyncreader provides an asynchronous reader which reads // independently of write package asyncreader import ( "io" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/lib/pool" "github.com/rclone/rclone/lib/readers" ) const ( // BufferSize is the default size of the async buffer BufferSize = 1024 * 1024 softStartInitial = 4 * 1024 bufferCacheSize = 64 // max number of buffers to keep in cache bufferCacheFlushTime = 5 * time.Second // flush the cached buffers after this long ) // ErrorStreamAbandoned is returned when the input is closed before the end of the stream var ErrorStreamAbandoned = errors.New("stream abandoned") // AsyncReader will do async read-ahead from the input reader // and make the data available as an io.Reader. // This should be fully transparent, except that once an error // has been returned from the Reader, it will not recover. 
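//
// A minimal usage sketch (illustrative only; error handling elided):
//
//	ar, _ := asyncreader.New(in, 4) // read ahead into 4 buffers
//	_, _ = io.Copy(dst, ar)         // drains via the WriteTo fast path
//	_ = ar.Close()                  // also closes the wrapped reader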
type AsyncReader struct { in io.ReadCloser // Input reader ready chan *buffer // Buffers ready to be handed to the reader token chan struct{} // Tokens which allow a buffer to be taken exit chan struct{} // Closes when finished buffers int // Number of buffers err error // If an error has occurred it is here cur *buffer // Current buffer being served exited chan struct{} // Channel is closed when the async reader shuts down size int // size of buffer to use closed bool // whether we have closed the underlying stream mu sync.Mutex // lock for Read/WriteTo/Abandon/Close } // New returns a reader that will asynchronously read from // the supplied Reader into a number of buffers each of size BufferSize. // It will start reading from the input at once, maybe even before this // function has returned. // The input can be read from the returned reader. // When done use Close to release the buffers and close the supplied input. func New(rd io.ReadCloser, buffers int) (*AsyncReader, error) { if buffers <= 0 { return nil, errors.New("number of buffers too small") } if rd == nil { return nil, errors.New("nil reader supplied") } a := &AsyncReader{} a.init(rd, buffers) return a, nil } func (a *AsyncReader) init(rd io.ReadCloser, buffers int) { a.in = rd a.ready = make(chan *buffer, buffers) a.token = make(chan struct{}, buffers) a.exit = make(chan struct{}) a.exited = make(chan struct{}) a.buffers = buffers a.cur = nil a.size = softStartInitial // Create tokens for i := 0; i < buffers; i++ { a.token <- struct{}{} } // Start async reader go func() { // Ensure that when we exit this is signalled. defer close(a.exited) defer close(a.ready) for { select { case <-a.token: b := a.getBuffer() if a.size < BufferSize { b.buf = b.buf[:a.size] a.size <<= 1 } err := b.read(a.in) a.ready <- b if err != nil { return } case <-a.exit: return } } }() } // bufferPool is a global pool of buffers var bufferPool *pool.Pool var bufferPoolOnce sync.Once // return the buffer to the pool (clearing it) func (a *AsyncReader) putBuffer(b *buffer) { bufferPool.Put(b.buf) b.buf = nil } // get a buffer from the pool func (a *AsyncReader) getBuffer() *buffer { bufferPoolOnce.Do(func() { // Initialise the buffer pool when used bufferPool = pool.New(bufferCacheFlushTime, BufferSize, bufferCacheSize, fs.Config.UseMmap) }) return &buffer{ buf: bufferPool.Get(), } } // fill puts the next ready buffer into a.cur when the current one is // exhausted, returning the stored error if the stream has ended or been // abandoned. func (a *AsyncReader) fill() (err error) { if a.cur.isEmpty() { if a.cur != nil { a.putBuffer(a.cur) a.token <- struct{}{} a.cur = nil } b, ok := <-a.ready if !ok { // Return an error to show fill failed if a.err == nil { return ErrorStreamAbandoned } return a.err } a.cur = b } return nil } // Read will return the next available data. func (a *AsyncReader) Read(p []byte) (n int, err error) { a.mu.Lock() defer a.mu.Unlock() // Swap buffer and maybe return error err = a.fill() if err != nil { return 0, err } // Copy what we can n = copy(p, a.cur.buffer()) a.cur.increment(n) // If at end of buffer, return any error, if present if a.cur.isEmpty() { a.err = a.cur.err return n, a.err } return n, nil } // WriteTo writes data to w until there's no more data to write or when an error occurs. // The return value n is the number of bytes written. // Any error encountered during the write is also returned.
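//
// Callers typically reach this through io.Copy, which prefers a source's
// WriteTo when it implements io.WriterTo, e.g.:
//
//	n, err := io.Copy(dst, ar)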
func (a *AsyncReader) WriteTo(w io.Writer) (n int64, err error) { a.mu.Lock() defer a.mu.Unlock() n = 0 for { err = a.fill() if err == io.EOF { return n, nil } if err != nil { return n, err } n2, err := w.Write(a.cur.buffer()) a.cur.increment(n2) n += int64(n2) if err != nil { return n, err } if a.cur.err == io.EOF { a.err = a.cur.err return n, err } if a.cur.err != nil { a.err = a.cur.err return n, a.cur.err } } } // SkipBytes will try to seek 'skip' bytes relative to the current position. // On success it returns true. If 'skip' is outside the current buffer data or // an error occurs, Abandon is called and false is returned. func (a *AsyncReader) SkipBytes(skip int) (ok bool) { a.mu.Lock() defer func() { a.mu.Unlock() if !ok { a.Abandon() } }() if a.err != nil { return false } if skip < 0 { // seek backwards if skip is inside current buffer if a.cur != nil && a.cur.offset+skip >= 0 { a.cur.offset += skip return true } return false } // early return if skip is past the maximum buffer capacity if skip >= (len(a.ready)+1)*BufferSize { return false } refillTokens := 0 for { if a.cur.isEmpty() { if a.cur != nil { a.putBuffer(a.cur) refillTokens++ a.cur = nil } select { case b, ok := <-a.ready: if !ok { return false } a.cur = b default: return false } } n := len(a.cur.buffer()) if n > skip { n = skip } a.cur.increment(n) skip -= n if skip == 0 { for ; refillTokens > 0; refillTokens-- { a.token <- struct{}{} } // If at end of buffer, store any error, if present if a.cur.isEmpty() && a.cur.err != nil { a.err = a.cur.err } return true } if a.cur.err != nil { a.err = a.cur.err return false } } } // StopBuffering will ensure that the underlying async reader is shut // down so no more is read from the input. // // This does not free the memory so Abandon() or Close() need to be // called on the input. // // This does not wait for Read/WriteTo to complete so can be called // concurrently to those. func (a *AsyncReader) StopBuffering() { select { case <-a.exit: // Do nothing if reader routine already exited return default: } // Close and wait for go routine close(a.exit) <-a.exited } // Abandon will ensure that the underlying async reader is shut down // and memory is returned. It does everything but close the input. // // It will NOT close the input supplied on New. func (a *AsyncReader) Abandon() { a.StopBuffering() // take the lock to wait for Read/WriteTo to complete a.mu.Lock() defer a.mu.Unlock() // Return any outstanding buffers to the Pool if a.cur != nil { a.putBuffer(a.cur) a.cur = nil } for b := range a.ready { a.putBuffer(b) } } // Close will ensure that the underlying async reader is shut down. // It will also close the input supplied on New. func (a *AsyncReader) Close() (err error) { a.Abandon() if a.closed { return nil } a.closed = true return a.in.Close() } // Internal buffer // If an error is present, it must be returned // once all buffer content has been served. type buffer struct { buf []byte err error offset int } // isEmpty returns true if the offset is at the end of the // buffer, or if the buffer is nil func (b *buffer) isEmpty() bool { if b == nil { return true } if len(b.buf)-b.offset <= 0 { return true } return false } // read into start of the buffer from the supplied reader, // resets the offset and updates the size of the buffer. // Any error encountered during the read is returned.
func (b *buffer) read(rd io.Reader) error { var n int n, b.err = readers.ReadFill(rd, b.buf) b.buf = b.buf[0:n] b.offset = 0 return b.err } // Return the buffer at current offset func (b *buffer) buffer() []byte { return b.buf[b.offset:] } // increment the offset func (b *buffer) increment(n int) { b.offset += n } rclone-1.53.3/fs/asyncreader/asyncreader_test.go000066400000000000000000000216631375552240400217020ustar00rootroot00000000000000package asyncreader import ( "bufio" "bytes" "fmt" "io" "io/ioutil" "math/rand" "strings" "sync" "testing" "testing/iotest" "time" "github.com/rclone/rclone/lib/israce" "github.com/rclone/rclone/lib/readers" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestAsyncReader(t *testing.T) { buf := ioutil.NopCloser(bytes.NewBufferString("Testbuffer")) ar, err := New(buf, 4) require.NoError(t, err) var dst = make([]byte, 100) n, err := ar.Read(dst) assert.Equal(t, io.EOF, err) assert.Equal(t, 10, n) n, err = ar.Read(dst) assert.Equal(t, io.EOF, err) assert.Equal(t, 0, n) // Test read after error n, err = ar.Read(dst) assert.Equal(t, io.EOF, err) assert.Equal(t, 0, n) err = ar.Close() require.NoError(t, err) // Test double close err = ar.Close() require.NoError(t, err) // Test Close without reading everything buf = ioutil.NopCloser(bytes.NewBuffer(make([]byte, 50000))) ar, err = New(buf, 4) require.NoError(t, err) err = ar.Close() require.NoError(t, err) } func TestAsyncWriteTo(t *testing.T) { buf := ioutil.NopCloser(bytes.NewBufferString("Testbuffer")) ar, err := New(buf, 4) require.NoError(t, err) var dst = &bytes.Buffer{} n, err := io.Copy(dst, ar) require.NoError(t, err) assert.Equal(t, int64(10), n) // Should still not return any errors n, err = io.Copy(dst, ar) require.NoError(t, err) assert.Equal(t, int64(0), n) err = ar.Close() require.NoError(t, err) } func TestAsyncReaderErrors(t *testing.T) { // test nil reader _, err := New(nil, 4) require.Error(t, err) // invalid buffer number buf := ioutil.NopCloser(bytes.NewBufferString("Testbuffer")) _, err = New(buf, 0) require.Error(t, err) _, err = New(buf, -1) require.Error(t, err) } // Complex read tests, leveraged from "bufio". type readMaker struct { name string fn func(io.Reader) io.Reader } var readMakers = []readMaker{ {"full", func(r io.Reader) io.Reader { return r }}, {"byte", iotest.OneByteReader}, {"half", iotest.HalfReader}, {"data+err", iotest.DataErrReader}, {"timeout", iotest.TimeoutReader}, } // Call Read to accumulate the text of a file func reads(buf io.Reader, m int) string { var b [1000]byte nb := 0 for { n, err := buf.Read(b[nb : nb+m]) nb += n if err == io.EOF { break } else if err != nil && err != iotest.ErrTimeout { panic("Data: " + err.Error()) } else if err != nil { break } } return string(b[0:nb]) } type bufReader struct { name string fn func(io.Reader) string } var bufreaders = []bufReader{ {"1", func(b io.Reader) string { return reads(b, 1) }}, {"2", func(b io.Reader) string { return reads(b, 2) }}, {"3", func(b io.Reader) string { return reads(b, 3) }}, {"4", func(b io.Reader) string { return reads(b, 4) }}, {"5", func(b io.Reader) string { return reads(b, 5) }}, {"7", func(b io.Reader) string { return reads(b, 7) }}, } const minReadBufferSize = 16 var bufsizes = []int{ 0, minReadBufferSize, 23, 32, 46, 64, 93, 128, 1024, 4096, } // Test various input buffer sizes, number of buffers and read sizes. 
func TestAsyncReaderSizes(t *testing.T) { var texts [31]string str := "" all := "" for i := 0; i < len(texts)-1; i++ { texts[i] = str + "\n" all += texts[i] str += string(rune(i)%26 + 'a') } texts[len(texts)-1] = all for h := 0; h < len(texts); h++ { text := texts[h] for i := 0; i < len(readMakers); i++ { for j := 0; j < len(bufreaders); j++ { for k := 0; k < len(bufsizes); k++ { for l := 1; l < 10; l++ { readmaker := readMakers[i] bufreader := bufreaders[j] bufsize := bufsizes[k] read := readmaker.fn(strings.NewReader(text)) buf := bufio.NewReaderSize(read, bufsize) ar, _ := New(ioutil.NopCloser(buf), l) s := bufreader.fn(ar) // "timeout" expects the Reader to recover, AsyncReader does not. if s != text && readmaker.name != "timeout" { t.Errorf("reader=%s fn=%s bufsize=%d want=%q got=%q", readmaker.name, bufreader.name, bufsize, text, s) } err := ar.Close() require.NoError(t, err) } } } } } } // Test various input buffer sizes, number of buffers and read sizes. func TestAsyncReaderWriteTo(t *testing.T) { var texts [31]string str := "" all := "" for i := 0; i < len(texts)-1; i++ { texts[i] = str + "\n" all += texts[i] str += string(rune(i)%26 + 'a') } texts[len(texts)-1] = all for h := 0; h < len(texts); h++ { text := texts[h] for i := 0; i < len(readMakers); i++ { for j := 0; j < len(bufreaders); j++ { for k := 0; k < len(bufsizes); k++ { for l := 1; l < 10; l++ { readmaker := readMakers[i] bufreader := bufreaders[j] bufsize := bufsizes[k] read := readmaker.fn(strings.NewReader(text)) buf := bufio.NewReaderSize(read, bufsize) ar, _ := New(ioutil.NopCloser(buf), l) dst := &bytes.Buffer{} _, err := ar.WriteTo(dst) if err != nil && err != io.EOF && err != iotest.ErrTimeout { t.Fatal("Copy:", err) } s := dst.String() // "timeout" expects the Reader to recover, AsyncReader does not. 
if s != text && readmaker.name != "timeout" { t.Errorf("reader=%s fn=%s bufsize=%d want=%q got=%q", readmaker.name, bufreader.name, bufsize, text, s) } err = ar.Close() require.NoError(t, err) } } } } } } // Read an infinite number of zeros type zeroReader struct { closed bool } func (z *zeroReader) Read(p []byte) (n int, err error) { if z.closed { return 0, io.EOF } for i := range p { p[i] = 0 } return len(p), nil } func (z *zeroReader) Close() error { if z.closed { panic("double close on zeroReader") } z.closed = true return nil } // Test closing and abandoning func testAsyncReaderClose(t *testing.T, writeto bool) { zr := &zeroReader{} a, err := New(zr, 16) require.NoError(t, err) var copyN int64 var copyErr error var wg sync.WaitGroup started := make(chan struct{}) wg.Add(1) go func() { defer wg.Done() close(started) if writeto { // exercise the WriteTo path copyN, copyErr = a.WriteTo(ioutil.Discard) } else { // exercise the Read path buf := make([]byte, 64*1024) for { var n int n, copyErr = a.Read(buf) copyN += int64(n) if copyErr != nil { break } } } }() // Do some copying <-started time.Sleep(100 * time.Millisecond) // Abandon the copy a.Abandon() wg.Wait() assert.Equal(t, ErrorStreamAbandoned, copyErr) // t.Logf("Copied %d bytes, err %v", copyN, copyErr) assert.True(t, copyN > 0) } func TestAsyncReaderCloseRead(t *testing.T) { testAsyncReaderClose(t, false) } func TestAsyncReaderCloseWriteTo(t *testing.T) { testAsyncReaderClose(t, true) } func TestAsyncReaderSkipBytes(t *testing.T) { t.Parallel() data := make([]byte, 15000) buf := make([]byte, len(data)) r := rand.New(rand.NewSource(42)) n, err := r.Read(data) require.NoError(t, err) require.Equal(t, len(data), n) initialReads := []int{0, 1, 100, 2048, softStartInitial - 1, softStartInitial, softStartInitial + 1, 8000, len(data)} skips := []int{-1000, -101, -100, -99, 0, 1, 2048, softStartInitial - 1, softStartInitial, softStartInitial + 1, 8000, len(data), BufferSize, 2 * BufferSize} for buffers := 1; buffers <= 5; buffers++ { if israce.Enabled && buffers > 1 { t.Skip("FIXME Skipping further tests with race detector until https://github.com/golang/go/issues/27070 is fixed.") } t.Run(fmt.Sprintf("%d", buffers), func(t *testing.T) { for _, initialRead := range initialReads { t.Run(fmt.Sprintf("%d", initialRead), func(t *testing.T) { for _, skip := range skips { t.Run(fmt.Sprintf("%d", skip), func(t *testing.T) { ar, err := New(ioutil.NopCloser(bytes.NewReader(data)), buffers) require.NoError(t, err) wantSkipFalse := false buf = buf[:initialRead] n, err := readers.ReadFill(ar, buf) if initialRead >= len(data) { wantSkipFalse = true if initialRead > len(data) { assert.Equal(t, err, io.EOF) } else { assert.True(t, err == nil || err == io.EOF) } assert.Equal(t, len(data), n) assert.Equal(t, data, buf[:len(data)]) } else { assert.NoError(t, err) assert.Equal(t, initialRead, n) assert.Equal(t, data[:initialRead], buf) } skipped := ar.SkipBytes(skip) buf = buf[:1024] n, err = readers.ReadFill(ar, buf) offset := initialRead + skip if skipped { assert.False(t, wantSkipFalse) l := len(buf) if offset >= len(data) { assert.Equal(t, err, io.EOF) } else { if offset+1024 >= len(data) { l = len(data) - offset } assert.Equal(t, l, n) assert.Equal(t, data[offset:offset+l], buf[:l]) } } else { if initialRead >= len(data) { assert.Equal(t, err, io.EOF) } else { assert.True(t, err == ErrorStreamAbandoned || err == io.EOF) } } }) } }) } }) } } 
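// ExampleNew is an illustrative sketch (not part of the original test file)
// of the basic read-ahead flow: wrap an io.ReadCloser, drain it, close it.
func ExampleNew() {
	in := ioutil.NopCloser(strings.NewReader("Testbuffer"))
	ar, err := New(in, 4) // 4 read-ahead buffers
	if err != nil {
		panic(err)
	}
	var out bytes.Buffer
	if _, err := io.Copy(&out, ar); err != nil {
		panic(err)
	}
	if err := ar.Close(); err != nil { // also closes the wrapped reader
		panic(err)
	}
	fmt.Println(out.String())
	// Output: Testbuffer
}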
rclone-1.53.3/fs/bwtimetable.go000066400000000000000000000126601375552240400163370ustar00rootroot00000000000000package fs import ( "fmt" "strconv" "strings" "time" "github.com/pkg/errors" ) // BwTimeSlot represents a bandwidth configuration at a point in time. type BwTimeSlot struct { DayOfTheWeek int HHMM int Bandwidth SizeSuffix } // BwTimetable contains all configured time slots. type BwTimetable []BwTimeSlot // String returns a printable representation of BwTimetable. func (x BwTimetable) String() string { ret := []string{} for _, ts := range x { ret = append(ret, fmt.Sprintf("%s-%04.4d,%s", time.Weekday(ts.DayOfTheWeek), ts.HHMM, ts.Bandwidth.String())) } return strings.Join(ret, " ") } // Basic hour format checking func validateHour(HHMM string) error { if len(HHMM) != 5 { return errors.Errorf("invalid time specification (hh:mm): %q", HHMM) } hh, err := strconv.Atoi(HHMM[0:2]) if err != nil { return errors.Errorf("invalid hour in time specification %q: %v", HHMM, err) } if hh < 0 || hh > 23 { return errors.Errorf("invalid hour (must be between 00 and 23): %q", hh) } mm, err := strconv.Atoi(HHMM[3:]) if err != nil { return errors.Errorf("invalid minute in time specification: %q: %v", HHMM, err) } if mm < 0 || mm > 59 { return errors.Errorf("invalid minute (must be between 00 and 59): %q", mm) } return nil } // Basic weekday format checking func parseWeekday(dayOfWeek string) (int, error) { dayOfWeek = strings.ToLower(dayOfWeek) if dayOfWeek == "sun" || dayOfWeek == "sunday" { return 0, nil } if dayOfWeek == "mon" || dayOfWeek == "monday" { return 1, nil } if dayOfWeek == "tue" || dayOfWeek == "tuesday" { return 2, nil } if dayOfWeek == "wed" || dayOfWeek == "wednesday" { return 3, nil } if dayOfWeek == "thu" || dayOfWeek == "thursday" { return 4, nil } if dayOfWeek == "fri" || dayOfWeek == "friday" { return 5, nil } if dayOfWeek == "sat" || dayOfWeek == "saturday" { return 6, nil } return 0, errors.Errorf("invalid weekday: %q", dayOfWeek) } // Set the bandwidth timetable. func (x *BwTimetable) Set(s string) error { // The timetable is formatted as: // "dayOfWeek-hh:mm,bandwidth dayOfWeek-hh:mm,bandwidth..." ex: "Mon-10:00,10G Mon-11:30,1G Tue-18:00,off" // If only a single bandwidth identifier is provided, we assume constant bandwidth. if len(s) == 0 { return errors.New("empty string") } // Single value without time specification.
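	// e.g. Set("666") produces the single slot
	// {DayOfTheWeek: 0, HHMM: 0, Bandwidth: 666 KiByte/s} which applies at
	// all times (worked example mirroring TestBwTimetableSet).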
if !strings.Contains(s, " ") && !strings.Contains(s, ",") { ts := BwTimeSlot{} if err := ts.Bandwidth.Set(s); err != nil { return err } ts.DayOfTheWeek = 0 ts.HHMM = 0 *x = BwTimetable{ts} return nil } for _, tok := range strings.Split(s, " ") { tv := strings.Split(tok, ",") // Format must be dayOfWeek-HH:MM,BW if len(tv) != 2 { return errors.Errorf("invalid time/bandwidth specification: %q", tok) } weekday := 0 HHMM := "" if !strings.Contains(tv[0], "-") { HHMM = tv[0] if err := validateHour(HHMM); err != nil { return err } for i := 0; i < 7; i++ { hh, _ := strconv.Atoi(HHMM[0:2]) mm, _ := strconv.Atoi(HHMM[3:]) ts := BwTimeSlot{ DayOfTheWeek: i, HHMM: (hh * 100) + mm, } if err := ts.Bandwidth.Set(tv[1]); err != nil { return err } *x = append(*x, ts) } } else { timespec := strings.Split(tv[0], "-") if len(timespec) != 2 { return errors.Errorf("invalid time specification: %q", tv[0]) } var err error weekday, err = parseWeekday(timespec[0]) if err != nil { return err } HHMM = timespec[1] if err := validateHour(HHMM); err != nil { return err } hh, _ := strconv.Atoi(HHMM[0:2]) mm, _ := strconv.Atoi(HHMM[3:]) ts := BwTimeSlot{ DayOfTheWeek: weekday, HHMM: (hh * 100) + mm, } // Bandwidth limit for this time slot. if err := ts.Bandwidth.Set(tv[1]); err != nil { return err } *x = append(*x, ts) } } return nil } // Difference in minutes between lateDayOfWeekHHMM and earlyDayOfWeekHHMM func timeDiff(lateDayOfWeekHHMM int, earlyDayOfWeekHHMM int) int { lateTimeMinutes := (lateDayOfWeekHHMM / 10000) * 24 * 60 lateTimeMinutes += ((lateDayOfWeekHHMM / 100) % 100) * 60 lateTimeMinutes += lateDayOfWeekHHMM % 100 earlyTimeMinutes := (earlyDayOfWeekHHMM / 10000) * 24 * 60 earlyTimeMinutes += ((earlyDayOfWeekHHMM / 100) % 100) * 60 earlyTimeMinutes += earlyDayOfWeekHHMM % 100 return lateTimeMinutes - earlyTimeMinutes } // LimitAt returns a BwTimeSlot for the time requested. func (x BwTimetable) LimitAt(tt time.Time) BwTimeSlot { // If the timetable is empty, we return an unlimited BwTimeSlot starting at Sunday midnight. if len(x) == 0 { return BwTimeSlot{DayOfTheWeek: 0, HHMM: 0, Bandwidth: -1} } dayOfWeekHHMM := int(tt.Weekday())*10000 + tt.Hour()*100 + tt.Minute() // By default, we return the last element in the timetable. This // satisfies two conditions: 1) If there's only one element it // will always be selected, and 2) The last element of the table // will "wrap around" until overridden by an earlier time slot. ret := x[len(x)-1] mindif := 0 first := true // Look for most recent time slot.
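	// Worked example (mirroring TestBwTimetableLimitAt): with slots at
	// Mon-11:00, Tue-13:40, Fri-00:00, Sat-10:00 and Sun-23:00, a query at
	// Thu 23:59 picks Tue-13:40 (the nearest slot in the past), while a
	// query at Mon 10:59 picks Sun-23:00.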
for _, ts := range x { // Ignore the past if dayOfWeekHHMM < (ts.DayOfTheWeek*10000)+ts.HHMM { continue } dif := timeDiff(dayOfWeekHHMM, (ts.DayOfTheWeek*10000)+ts.HHMM) if first { mindif = dif first = false } if dif <= mindif { mindif = dif ret = ts } } return ret } // Type of the value func (x BwTimetable) Type() string { return "BwTimetable" } rclone-1.53.3/fs/bwtimetable_test.go000066400000000000000000000360241375552240400173760ustar00rootroot00000000000000package fs import ( "testing" "time" "github.com/spf13/pflag" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // Check it satisfies the interface var _ pflag.Value = (*BwTimetable)(nil) func TestBwTimetableSet(t *testing.T) { for _, test := range []struct { in string want BwTimetable err bool }{ {"", BwTimetable{}, true}, {"bad,bad", BwTimetable{}, true}, {"bad bad", BwTimetable{}, true}, {"bad", BwTimetable{}, true}, {"1000X", BwTimetable{}, true}, {"2401,666", BwTimetable{}, true}, {"1061,666", BwTimetable{}, true}, {"bad-10:20,666", BwTimetable{}, true}, {"Mon-bad,666", BwTimetable{}, true}, {"Mon-10:20,bad", BwTimetable{}, true}, { "0", BwTimetable{ BwTimeSlot{DayOfTheWeek: 0, HHMM: 0, Bandwidth: 0}, }, false, }, { "666", BwTimetable{ BwTimeSlot{DayOfTheWeek: 0, HHMM: 0, Bandwidth: 666 * 1024}, }, false, }, { "10:20,666", BwTimetable{ BwTimeSlot{DayOfTheWeek: 0, HHMM: 1020, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1020, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1020, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1020, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1020, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1020, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1020, Bandwidth: 666 * 1024}, }, false, }, { "11:00,333 13:40,666 23:50,10M 23:59,off", BwTimetable{ BwTimeSlot{DayOfTheWeek: 0, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2350, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 2350, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 2350, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 2350, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 2350, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 2350, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 2350, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2359, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 2359, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 2359, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 2359, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 2359, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 
5, HHMM: 2359, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 2359, Bandwidth: -1}, }, false, }, { "Mon-11:00,333 Tue-13:40,666 Fri-00:00,10M Sat-10:00,off Sun-23:00,666", BwTimetable{ BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 0000, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1000, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2300, Bandwidth: 666 * 1024}, }, false, }, { "Mon-11:00,333 Tue-13:40,666 Fri-00:00,10M 00:01,off Sun-23:00,666", BwTimetable{ BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 0000, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 1, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2300, Bandwidth: 666 * 1024}, }, false, }, } { tt := BwTimetable{} err := tt.Set(test.in) if test.err { require.Error(t, err) } else { require.NoError(t, err) } assert.Equal(t, test.want, tt) } } func TestBwTimetableLimitAt(t *testing.T) { for _, test := range []struct { tt BwTimetable now time.Time want BwTimeSlot }{ { BwTimetable{}, time.Date(2017, time.April, 20, 15, 0, 0, 0, time.UTC), BwTimeSlot{DayOfTheWeek: 0, HHMM: 0, Bandwidth: -1}, }, { BwTimetable{ BwTimeSlot{DayOfTheWeek: 0, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1100, Bandwidth: 333 * 1024}, }, time.Date(2017, time.April, 20, 15, 0, 0, 0, time.UTC), BwTimeSlot{DayOfTheWeek: 4, HHMM: 1100, Bandwidth: 333 * 1024}, }, { BwTimetable{ BwTimeSlot{DayOfTheWeek: 0, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 2301, Bandwidth: 1024 * 1024}, 
BwTimeSlot{DayOfTheWeek: 6, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 2350, Bandwidth: -1}, }, time.Date(2017, time.April, 20, 10, 15, 0, 0, time.UTC), BwTimeSlot{DayOfTheWeek: 3, HHMM: 2350, Bandwidth: -1}, }, { BwTimetable{ BwTimeSlot{DayOfTheWeek: 0, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 2350, Bandwidth: -1}, }, time.Date(2017, time.April, 20, 11, 0, 0, 0, time.UTC), BwTimeSlot{DayOfTheWeek: 4, HHMM: 1100, Bandwidth: 333 * 1024}, }, { BwTimetable{ BwTimeSlot{DayOfTheWeek: 0, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 2301, Bandwidth: 
1024 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 2350, Bandwidth: -1}, }, time.Date(2017, time.April, 20, 13, 1, 0, 0, time.UTC), BwTimeSlot{DayOfTheWeek: 4, HHMM: 1300, Bandwidth: 666 * 1024}, }, { BwTimetable{ BwTimeSlot{DayOfTheWeek: 0, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1300, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 2301, Bandwidth: 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 1, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 3, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 4, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 2350, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 2350, Bandwidth: -1}, }, time.Date(2017, time.April, 20, 23, 59, 0, 0, time.UTC), BwTimeSlot{DayOfTheWeek: 4, HHMM: 2350, Bandwidth: -1}, }, { BwTimetable{ BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 0000, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1000, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2300, Bandwidth: 666 * 1024}, }, time.Date(2017, time.April, 20, 23, 59, 0, 0, time.UTC), BwTimeSlot{DayOfTheWeek: 2, HHMM: 1340, Bandwidth: 666 * 1024}, }, { BwTimetable{ BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 0000, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1000, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2300, Bandwidth: 666 * 1024}, }, time.Date(2017, time.April, 21, 23, 59, 0, 0, time.UTC), BwTimeSlot{DayOfTheWeek: 5, 
HHMM: 0000, Bandwidth: 10 * 1024 * 1024}, }, { BwTimetable{ BwTimeSlot{DayOfTheWeek: 1, HHMM: 1100, Bandwidth: 333 * 1024}, BwTimeSlot{DayOfTheWeek: 2, HHMM: 1340, Bandwidth: 666 * 1024}, BwTimeSlot{DayOfTheWeek: 5, HHMM: 0000, Bandwidth: 10 * 1024 * 1024}, BwTimeSlot{DayOfTheWeek: 6, HHMM: 1000, Bandwidth: -1}, BwTimeSlot{DayOfTheWeek: 0, HHMM: 2300, Bandwidth: 666 * 1024}, }, time.Date(2017, time.April, 17, 10, 59, 0, 0, time.UTC), BwTimeSlot{DayOfTheWeek: 0, HHMM: 2300, Bandwidth: 666 * 1024}, }, } { slot := test.tt.LimitAt(test.now) assert.Equal(t, test.want, slot) } } rclone-1.53.3/fs/cache/000077500000000000000000000000001375552240400145475ustar00rootroot00000000000000rclone-1.53.3/fs/cache/cache.go000066400000000000000000000060131375552240400161410ustar00rootroot00000000000000// Package cache implements the Fs cache package cache import ( "runtime" "sync" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/lib/cache" ) var ( c = cache.New() mu sync.Mutex // mutex to protect remap remap = map[string]string{} // map user supplied names to canonical names ) // Canonicalize looks up fsString in the mapping from user supplied // names to canonical names and returns the canonical form func Canonicalize(fsString string) string { mu.Lock() canonicalName, ok := remap[fsString] mu.Unlock() if !ok { return fsString } fs.Debugf(nil, "fs cache: switching user supplied name %q for canonical name %q", fsString, canonicalName) return canonicalName } // Put in a mapping from fsString => canonicalName if they are different func addMapping(fsString, canonicalName string) { if canonicalName == fsString { return } mu.Lock() remap[fsString] = canonicalName mu.Unlock() } // GetFn gets an fs.Fs named fsString either from the cache or creates // it afresh with the create function func GetFn(fsString string, create func(fsString string) (fs.Fs, error)) (f fs.Fs, err error) { fsString = Canonicalize(fsString) created := false value, err := c.Get(fsString, func(fsString string) (f interface{}, ok bool, err error) { f, err = create(fsString) ok = err == nil || err == fs.ErrorIsFile created = ok return f, ok, err }) if err != nil && err != fs.ErrorIsFile { return nil, err } f = value.(fs.Fs) // Check we stored the Fs at the canonical name if created { canonicalName := fs.ConfigString(f) if canonicalName != fsString { // Note that if err == fs.ErrorIsFile at this moment // then we can't rename the remote as it will have the // wrong error status, we need to add a new one. if err == nil { fs.Debugf(nil, "fs cache: renaming cache item %q to be canonical %q", fsString, canonicalName) value, found := c.Rename(fsString, canonicalName) if found { f = value.(fs.Fs) } addMapping(fsString, canonicalName) } else { fs.Debugf(nil, "fs cache: adding new entry for parent of %q, %q", fsString, canonicalName) Put(canonicalName, f) } } } return f, err } // Pin f into the cache until Unpin is called func Pin(f fs.Fs) { c.Pin(fs.ConfigString(f)) } // PinUntilFinalized pins f into the cache until x is garbage collected // // This calls runtime.SetFinalizer on x so it shouldn't have a // finalizer already.
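//
// A hypothetical usage sketch: pin f for the lifetime of a wrapper object
// and let the garbage collector unpin it (handle is illustrative only):
//
//	type handle struct{ f fs.Fs }
//	h := &handle{f: f}
//	cache.PinUntilFinalized(f, h) // unpinned when h is collected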
func PinUntilFinalized(f fs.Fs, x interface{}) { Pin(f) runtime.SetFinalizer(x, func(_ interface{}) { Unpin(f) }) } // Unpin f from the cache func Unpin(f fs.Fs) { c.Unpin(fs.ConfigString(f)) } // Get gets an fs.Fs named fsString either from the cache or creates it afresh func Get(fsString string) (f fs.Fs, err error) { return GetFn(fsString, fs.NewFs) } // Put puts an fs.Fs named fsString into the cache func Put(fsString string, f fs.Fs) { canonicalName := fs.ConfigString(f) c.Put(canonicalName, f) addMapping(fsString, canonicalName) } // Clear removes everything from the cache func Clear() { c.Clear() } rclone-1.53.3/fs/cache/cache_test.go000066400000000000000000000066041375552240400172060ustar00rootroot00000000000000package cache import ( "errors" "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/mockfs" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) var ( called = 0 errSentinel = errors.New("an error") ) func mockNewFs(t *testing.T) (func(), func(path string) (fs.Fs, error)) { called = 0 create := func(path string) (f fs.Fs, err error) { assert.Equal(t, 0, called) called++ switch path { case "mock:/": return mockfs.NewFs("mock", "/"), nil case "mock:/file.txt", "mock:file.txt": return mockfs.NewFs("mock", "/"), fs.ErrorIsFile case "mock:/error": return nil, errSentinel } t.Fatalf("Unknown path %q", path) panic("unreachable") } cleanup := func() { c.Clear() } return cleanup, create } func TestGet(t *testing.T) { cleanup, create := mockNewFs(t) defer cleanup() assert.Equal(t, 0, c.Entries()) f, err := GetFn("mock:/", create) require.NoError(t, err) assert.Equal(t, 1, c.Entries()) f2, err := GetFn("mock:/", create) require.NoError(t, err) assert.Equal(t, f, f2) } func TestGetFile(t *testing.T) { cleanup, create := mockNewFs(t) defer cleanup() assert.Equal(t, 0, c.Entries()) f, err := GetFn("mock:/file.txt", create) require.Equal(t, fs.ErrorIsFile, err) require.NotNil(t, f) assert.Equal(t, 2, c.Entries()) f2, err := GetFn("mock:/file.txt", create) require.Equal(t, fs.ErrorIsFile, err) require.NotNil(t, f2) assert.Equal(t, f, f2) // check parent is there too f2, err = GetFn("mock:/", create) require.Nil(t, err) require.NotNil(t, f2) assert.Equal(t, f, f2) } func TestGetFile2(t *testing.T) { cleanup, create := mockNewFs(t) defer cleanup() assert.Equal(t, 0, c.Entries()) f, err := GetFn("mock:file.txt", create) require.Equal(t, fs.ErrorIsFile, err) require.NotNil(t, f) assert.Equal(t, 2, c.Entries()) f2, err := GetFn("mock:file.txt", create) require.Equal(t, fs.ErrorIsFile, err) require.NotNil(t, f2) assert.Equal(t, f, f2) // check parent is there too f2, err = GetFn("mock:/", create) require.Nil(t, err) require.NotNil(t, f2) assert.Equal(t, f, f2) } func TestGetError(t *testing.T) { cleanup, create := mockNewFs(t) defer cleanup() assert.Equal(t, 0, c.Entries()) f, err := GetFn("mock:/error", create) require.Equal(t, errSentinel, err) require.Equal(t, nil, f) assert.Equal(t, 0, c.Entries()) } func TestPut(t *testing.T) { cleanup, create := mockNewFs(t) defer cleanup() f := mockfs.NewFs("mock", "/alien") assert.Equal(t, 0, c.Entries()) Put("mock:/alien", f) assert.Equal(t, 1, c.Entries()) fNew, err := GetFn("mock:/alien", create) require.NoError(t, err) require.Equal(t, f, fNew) assert.Equal(t, 1, c.Entries()) // Check canonicalisation Put("mock:/alien/", f) fNew, err = GetFn("mock:/alien/", create) require.NoError(t, err) require.Equal(t, f, fNew) assert.Equal(t, 1, c.Entries()) } func TestPin(t *testing.T) { cleanup, create := mockNewFs(t)
defer cleanup() // Test pinning and unpinning non-existent f := mockfs.NewFs("mock", "/alien") Pin(f) Unpin(f) // Now test pinning an existing f2, err := GetFn("mock:/", create) require.NoError(t, err) Pin(f2) Unpin(f2) } func TestClear(t *testing.T) { cleanup, create := mockNewFs(t) defer cleanup() // Create something _, err := GetFn("mock:/", create) require.NoError(t, err) assert.Equal(t, 1, c.Entries()) Clear() assert.Equal(t, 0, c.Entries()) } rclone-1.53.3/fs/chunkedreader/000077500000000000000000000000001375552240400163105ustar00rootroot00000000000000rclone-1.53.3/fs/chunkedreader/chunkedreader.go000066400000000000000000000161321375552240400214460ustar00rootroot00000000000000package chunkedreader import ( "context" "errors" "io" "sync" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" ) // io related errors returned by ChunkedReader var ( ErrorFileClosed = errors.New("file already closed") ErrorInvalidSeek = errors.New("invalid seek position") ) // ChunkedReader is a reader for an Object with the possibility // of reading the source in chunks of given size // // An initialChunkSize of <= 0 will disable chunked reading. type ChunkedReader struct { ctx context.Context mu sync.Mutex // protects following fields o fs.Object // source to read from rc io.ReadCloser // reader for the current open chunk offset int64 // offset the next Read will start. -1 forces a reopen of o chunkOffset int64 // beginning of the current or next chunk chunkSize int64 // length of the current or next chunk. -1 will open o from chunkOffset to the end initialChunkSize int64 // default chunkSize after the chunk specified by RangeSeek is complete maxChunkSize int64 // consecutive read chunks will double in size until reached. -1 means no limit customChunkSize bool // is the current chunkSize set by RangeSeek? closed bool // has Close been called? } // New returns a ChunkedReader for the Object. // // An initialChunkSize of <= 0 will disable chunked reading. // If maxChunkSize is greater than initialChunkSize, the chunk size will be // doubled after each chunk read with a maximum of maxChunkSize. // A Seek or RangeSeek will reset the chunk size to its initial value. func New(ctx context.Context, o fs.Object, initialChunkSize int64, maxChunkSize int64) *ChunkedReader { if initialChunkSize <= 0 { initialChunkSize = -1 } if maxChunkSize != -1 && maxChunkSize < initialChunkSize { maxChunkSize = initialChunkSize } return &ChunkedReader{ ctx: ctx, o: o, offset: -1, chunkSize: initialChunkSize, initialChunkSize: initialChunkSize, maxChunkSize: maxChunkSize, } } // Read from the file - for details see io.Reader func (cr *ChunkedReader) Read(p []byte) (n int, err error) { cr.mu.Lock() defer cr.mu.Unlock() if cr.closed { return 0, ErrorFileClosed } for reqSize := int64(len(p)); reqSize > 0; reqSize = int64(len(p)) { // the current chunk boundary. valid only when chunkSize > 0 chunkEnd := cr.chunkOffset + cr.chunkSize fs.Debugf(cr.o, "ChunkedReader.Read at %d length %d chunkOffset %d chunkSize %d", cr.offset, reqSize, cr.chunkOffset, cr.chunkSize) switch { case cr.chunkSize > 0 && cr.offset == chunkEnd: // last chunk read completely cr.chunkOffset = cr.offset if cr.customChunkSize { // last chunkSize was set by RangeSeek cr.customChunkSize = false cr.chunkSize = cr.initialChunkSize } else { cr.chunkSize *= 2 if cr.chunkSize > cr.maxChunkSize && cr.maxChunkSize != -1 { cr.chunkSize = cr.maxChunkSize } } // recalculate the chunk boundary.
valid only when chunkSize > 0 chunkEnd = cr.chunkOffset + cr.chunkSize fallthrough case cr.offset == -1: // first Read or Read after RangeSeek err = cr.openRange() if err != nil { return } } var buf []byte chunkRest := chunkEnd - cr.offset // limit read to chunk boundaries if chunkSize > 0 if reqSize > chunkRest && cr.chunkSize > 0 { buf, p = p[0:chunkRest], p[chunkRest:] } else { buf, p = p, nil } var rn int rn, err = io.ReadFull(cr.rc, buf) n += rn cr.offset += int64(rn) if err != nil { if err == io.ErrUnexpectedEOF { err = io.EOF } return } } return n, nil } // Close the file - for details see io.Closer // // All methods on ChunkedReader will return ErrorFileClosed afterwards func (cr *ChunkedReader) Close() error { cr.mu.Lock() defer cr.mu.Unlock() if cr.closed { return ErrorFileClosed } cr.closed = true return cr.resetReader(nil, 0) } // Seek the file - for details see io.Seeker func (cr *ChunkedReader) Seek(offset int64, whence int) (int64, error) { return cr.RangeSeek(context.TODO(), offset, whence, -1) } // RangeSeek the file - for details see RangeSeeker // // The specified length will only apply to the next chunk opened. // RangeSeek will not reopen the source until Read is called. func (cr *ChunkedReader) RangeSeek(ctx context.Context, offset int64, whence int, length int64) (int64, error) { cr.mu.Lock() defer cr.mu.Unlock() fs.Debugf(cr.o, "ChunkedReader.RangeSeek from %d to %d length %d", cr.offset, offset, length) if cr.closed { return 0, ErrorFileClosed } size := cr.o.Size() switch whence { case io.SeekStart: cr.offset = 0 case io.SeekEnd: cr.offset = size } // set the new chunk start cr.chunkOffset = cr.offset + offset // force reopen on next Read cr.offset = -1 if length > 0 { cr.customChunkSize = true cr.chunkSize = length } else { cr.chunkSize = cr.initialChunkSize } if cr.chunkOffset < 0 || cr.chunkOffset >= size { cr.chunkOffset = 0 return 0, ErrorInvalidSeek } return cr.chunkOffset, nil } // Open forces the connection to be opened func (cr *ChunkedReader) Open() (*ChunkedReader, error) { cr.mu.Lock() defer cr.mu.Unlock() if cr.rc != nil && cr.offset != -1 { return cr, nil } return cr, cr.openRange() } // openRange will open the source Object with the current chunk range // // If the current open reader implements RangeSeeker, it is tried first. // When RangeSeek fails, o.Open with a RangeOption is used. // // A length <= 0 will request till the end of the file func (cr *ChunkedReader) openRange() error { offset, length := cr.chunkOffset, cr.chunkSize fs.Debugf(cr.o, "ChunkedReader.openRange at %d length %d", offset, length) if cr.closed { return ErrorFileClosed } if rs, ok := cr.rc.(fs.RangeSeeker); ok { n, err := rs.RangeSeek(cr.ctx, offset, io.SeekStart, length) if err == nil && n == offset { cr.offset = offset return nil } if err != nil { fs.Debugf(cr.o, "ChunkedReader.openRange seek failed (%s). Trying Open", err) } else { fs.Debugf(cr.o, "ChunkedReader.openRange seeked to wrong offset. Wanted %d, got %d. 
Trying Open", offset, n) } } var rc io.ReadCloser var err error if length <= 0 { if offset == 0 { rc, err = cr.o.Open(cr.ctx, &fs.HashesOption{Hashes: hash.Set(hash.None)}) } else { rc, err = cr.o.Open(cr.ctx, &fs.HashesOption{Hashes: hash.Set(hash.None)}, &fs.RangeOption{Start: offset, End: -1}) } } else { rc, err = cr.o.Open(cr.ctx, &fs.HashesOption{Hashes: hash.Set(hash.None)}, &fs.RangeOption{Start: offset, End: offset + length - 1}) } if err != nil { return err } return cr.resetReader(rc, offset) } // resetReader switches the current reader to the given reader. // The old reader will be Close'd before setting the new reader. func (cr *ChunkedReader) resetReader(rc io.ReadCloser, offset int64) error { if cr.rc != nil { if err := cr.rc.Close(); err != nil { return err } } cr.rc = rc cr.offset = offset return nil } var ( _ io.ReadCloser = (*ChunkedReader)(nil) _ io.Seeker = (*ChunkedReader)(nil) _ fs.RangeSeeker = (*ChunkedReader)(nil) ) rclone-1.53.3/fs/chunkedreader/chunkedreader_test.go000066400000000000000000000055211375552240400225050ustar00rootroot00000000000000package chunkedreader import ( "context" "fmt" "io" "math/rand" "testing" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestChunkedReader(t *testing.T) { content := makeContent(t, 1024) for _, mode := range mockobject.SeekModes { t.Run(mode.String(), testRead(content, mode)) } } func testRead(content []byte, mode mockobject.SeekMode) func(*testing.T) { return func(t *testing.T) { chunkSizes := []int64{-1, 0, 1, 15, 16, 17, 1023, 1024, 1025, 2000} offsets := []int64{0, 1, 2, 3, 4, 5, 7, 8, 9, 15, 16, 17, 31, 32, 33, 63, 64, 65, 511, 512, 513, 1023, 1024, 1025} limits := []int64{-1, 0, 1, 31, 32, 33, 1023, 1024, 1025} cl := int64(len(content)) bl := 32 buf := make([]byte, bl) o := mockobject.New("test.bin").WithContent(content, mode) for ics, cs := range chunkSizes { for icsMax, csMax := range chunkSizes { // skip tests where chunkSize is much bigger than maxChunkSize if ics > icsMax+1 { continue } t.Run(fmt.Sprintf("Chunksize_%d_%d", cs, csMax), func(t *testing.T) { cr := New(context.Background(), o, cs, csMax) for _, offset := range offsets { for _, limit := range limits { what := fmt.Sprintf("offset %d, limit %d", offset, limit) p, err := cr.RangeSeek(context.Background(), offset, io.SeekStart, limit) if offset >= cl { require.Error(t, err, what) return } require.NoError(t, err, what) require.Equal(t, offset, p, what) n, err := cr.Read(buf) end := offset + int64(bl) if end > cl { end = cl } l := int(end - offset) if l < bl { require.Equal(t, io.EOF, err, what) } else { require.NoError(t, err, what) } require.Equal(t, l, n, what) require.Equal(t, content[offset:end], buf[:n], what) } } }) } } } } func TestErrorAfterClose(t *testing.T) { content := makeContent(t, 1024) o := mockobject.New("test.bin").WithContent(content, mockobject.SeekModeNone) // Close cr := New(context.Background(), o, 0, 0) require.NoError(t, cr.Close()) require.Error(t, cr.Close()) // Read cr = New(context.Background(), o, 0, 0) require.NoError(t, cr.Close()) var buf [1]byte _, err := cr.Read(buf[:]) require.Error(t, err) // Seek cr = New(context.Background(), o, 0, 0) require.NoError(t, cr.Close()) _, err = cr.Seek(1, io.SeekCurrent) require.Error(t, err) // RangeSeek cr = New(context.Background(), o, 0, 0) require.NoError(t, cr.Close()) _, err = cr.RangeSeek(context.Background(), 1, io.SeekCurrent, 0) require.Error(t, err) } func makeContent(t *testing.T, size int) []byte 
{ content := make([]byte, size) r := rand.New(rand.NewSource(42)) _, err := io.ReadFull(r, content) assert.NoError(t, err) return content } rclone-1.53.3/fs/config.go000066400000000000000000000131451375552240400153040ustar00rootroot00000000000000package fs import ( "net" "strings" "time" "github.com/pkg/errors" ) // Global var ( // Config is the global config Config = NewConfig() // Read a value from the config file // // This is a function pointer to decouple the config // implementation from the fs ConfigFileGet = func(section, key string) (string, bool) { return "", false } // Set a value into the config file and persist it // // This is a function pointer to decouple the config // implementation from the fs ConfigFileSet = func(section, key, value string) (err error) { return errors.New("no config file set handler") } // CountError counts an error. If any errors have been // counted then it will exit with a non-zero error code. // // This is a function pointer to decouple the config // implementation from the fs CountError = func(err error) error { return nil } // ConfigProvider is the config key used for provider options ConfigProvider = "provider" ) // ConfigInfo is filesystem config options type ConfigInfo struct { LogLevel LogLevel StatsLogLevel LogLevel UseJSONLog bool DryRun bool Interactive bool CheckSum bool SizeOnly bool IgnoreTimes bool IgnoreExisting bool IgnoreErrors bool ModifyWindow time.Duration Checkers int Transfers int ConnectTimeout time.Duration // Connect timeout Timeout time.Duration // Data channel timeout ExpectContinueTimeout time.Duration Dump DumpFlags InsecureSkipVerify bool // Skip server certificate verification DeleteMode DeleteMode MaxDelete int64 TrackRenames bool // Track file renames. TrackRenamesStrategy string // Comma-separated list of strategies used to track renames LowLevelRetries int UpdateOlder bool // Skip files that are newer on the destination NoGzip bool // Disable compression MaxDepth int IgnoreSize bool IgnoreChecksum bool IgnoreCaseSync bool NoTraverse bool CheckFirst bool NoCheckDest bool NoUnicodeNormalization bool NoUpdateModTime bool DataRateUnit string CompareDest string CopyDest string BackupDir string Suffix string SuffixKeepExtension bool UseListR bool BufferSize SizeSuffix BwLimit BwTimetable BwLimitFile BwTimetable TPSLimit float64 TPSLimitBurst int BindAddr net.IP DisableFeatures []string UserAgent string Immutable bool AutoConfirm bool StreamingUploadCutoff SizeSuffix StatsFileNameLength int AskPassword bool PasswordCommand SpaceSepList UseServerModTime bool MaxTransfer SizeSuffix MaxDuration time.Duration CutoffMode CutoffMode MaxBacklog int MaxStatsGroups int StatsOneLine bool StatsOneLineDate bool // If we want a date prefix at all StatsOneLineDateFormat string // If we want to customize the prefix ErrorOnNoTransfer bool // Set appropriate exit code if no files transferred Progress bool Cookie bool UseMmap bool CaCert string // Client Side CA ClientCert string // Client Side Cert ClientKey string // Client Side Key MultiThreadCutoff SizeSuffix MultiThreadStreams int MultiThreadSet bool // whether MultiThreadStreams was set (set in fs/config/configflags) OrderBy string // instructions on how to order the transfer UploadHeaders []*HTTPOption DownloadHeaders []*HTTPOption Headers []*HTTPOption RefreshTimes bool } // NewConfig creates a new config with everything set to the default // value. These are the ultimate defaults and are overridden by the // config module.
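//
// An illustrative sketch of overriding these defaults (assumed usage, not
// taken from the original source; the field names are those defined in
// ConfigInfo above):
//
//	c := NewConfig()
//	c.Transfers = 8              // raise from the default of 4
//	c.Checkers = 16              // raise from the default of 8
//	c.ConnectTimeout = 30 * time.Second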
func NewConfig() *ConfigInfo { c := new(ConfigInfo) // Set any values which aren't the zero for the type c.LogLevel = LogLevelNotice c.StatsLogLevel = LogLevelInfo c.ModifyWindow = time.Nanosecond c.Checkers = 8 c.Transfers = 4 c.ConnectTimeout = 60 * time.Second c.Timeout = 5 * 60 * time.Second c.ExpectContinueTimeout = 1 * time.Second c.DeleteMode = DeleteModeDefault c.MaxDelete = -1 c.LowLevelRetries = 10 c.MaxDepth = -1 c.DataRateUnit = "bytes" c.BufferSize = SizeSuffix(16 << 20) c.UserAgent = "rclone/" + Version c.StreamingUploadCutoff = SizeSuffix(100 * 1024) c.MaxStatsGroups = 1000 c.StatsFileNameLength = 45 c.AskPassword = true c.TPSLimitBurst = 1 c.MaxTransfer = -1 c.MaxBacklog = 10000 // We do not want to set the default here. We use this variable being empty as part of the fall-through of options. // c.StatsOneLineDateFormat = "2006/01/02 15:04:05 - " c.MultiThreadCutoff = SizeSuffix(250 * 1024 * 1024) c.MultiThreadStreams = 4 c.TrackRenamesStrategy = "hash" return c } // ConfigToEnv converts a config section and name, eg ("myremote", // "ignore-size") into an environment name // "RCLONE_CONFIG_MYREMOTE_IGNORE_SIZE" func ConfigToEnv(section, name string) string { return "RCLONE_CONFIG_" + strings.ToUpper(strings.Replace(section+"_"+name, "-", "_", -1)) } // OptionToEnv converts an option name, eg "ignore-size" into an // environment name "RCLONE_IGNORE_SIZE" func OptionToEnv(name string) string { return "RCLONE_" + strings.ToUpper(strings.Replace(name, "-", "_", -1)) } rclone-1.53.3/fs/config/000077500000000000000000000000001375552240400147515ustar00rootroot00000000000000rclone-1.53.3/fs/config/config.go000066400000000000000000001245151375552240400165550ustar00rootroot00000000000000// Package config reads, writes and edits the config file and deals with command line flags package config import ( "bufio" "bytes" "crypto/rand" "crypto/sha256" "encoding/base64" "encoding/json" "fmt" "io" "io/ioutil" "log" mathrand "math/rand" "os" "os/exec" "path/filepath" "regexp" "runtime" "sort" "strconv" "strings" "time" "unicode/utf8" "github.com/Unknwon/goconfig" homedir "github.com/mitchellh/go-homedir" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/driveletter" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/fspath" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/lib/random" "github.com/rclone/rclone/lib/terminal" "golang.org/x/crypto/nacl/secretbox" "golang.org/x/text/unicode/norm" ) const ( configFileName = "rclone.conf" hiddenConfigFileName = "." + configFileName // ConfigToken is the key used to store the token under ConfigToken = "token" // ConfigClientID is the config key used to store the client id ConfigClientID = "client_id" // ConfigClientSecret is the config key used to store the client secret ConfigClientSecret = "client_secret" // ConfigAuthURL is the config key used to store the auth server endpoint ConfigAuthURL = "auth_url" // ConfigTokenURL is the config key used to store the token server endpoint ConfigTokenURL = "token_url" // ConfigEncoding is the config key to change the encoding for a backend ConfigEncoding = "encoding" // ConfigEncodingHelp is the help for ConfigEncoding ConfigEncodingHelp = "This sets the encoding for the backend.\n\nSee: the [encoding section in the overview](/overview/#encoding) for more info." 
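//
// Illustrative sketch of how these keys appear in a config file section
// (the remote name and values are hypothetical, not from the original
// source):
//
//	[myremote]
//	type = drive
//	client_id = 1234.apps.example.com
//	client_secret = XXX
//	token = {"access_token":"XXX","expiry":"XXX"}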
// ConfigAuthorize indicates that we just want "rclone authorize" ConfigAuthorize = "config_authorize" // ConfigAuthNoBrowser indicates that we do not want to open a browser ConfigAuthNoBrowser = "config_auth_no_browser" ) // Global var ( // configFile is the global config data structure. Don't read it directly, use getConfigData() configFile *goconfig.ConfigFile // ConfigPath points to the config file ConfigPath = makeConfigPath() // CacheDir points to the cache directory. Users of this // should make a subdirectory and use MkdirAll() to create it // and any parents. CacheDir = makeCacheDir() // Key to use for password en/decryption. // When nil, no encryption will be used for saving. configKey []byte // PasswordPromptOutput is the output for the password prompt PasswordPromptOutput = os.Stderr // If set to true, the configKey is obscured with obscure.Obscure and saved to a temp file when it is // calculated from the password. The path of that temp file is then written to the environment variable // `_RCLONE_CONFIG_KEY_FILE`. If `_RCLONE_CONFIG_KEY_FILE` is present, password prompt is skipped and `RCLONE_CONFIG_PASS` ignored. // For security reasons, the temp file is deleted once the configKey is successfully loaded. // This can be used to pass the configKey to a child process. PassConfigKeyForDaemonization = false // Password can be used to configure the random password generator Password = random.Password ) func init() { // Set the function pointers up in fs fs.ConfigFileGet = FileGetFlag fs.ConfigFileSet = SetValueAndSave } func getConfigData() *goconfig.ConfigFile { if configFile == nil { LoadConfig() } return configFile } // Return the path to the configuration file func makeConfigPath() string { // Use rclone.conf from rclone executable directory if already existing exe, err := os.Executable() if err == nil { exedir := filepath.Dir(exe) cfgpath := filepath.Join(exedir, configFileName) _, err := os.Stat(cfgpath) if err == nil { return cfgpath } } // Find user's home directory homeDir, err := homedir.Dir() // Find user's configuration directory. // Prefer XDG config path, with fallback to $HOME/.config. // See XDG Base Directory specification // https://specifications.freedesktop.org/basedir-spec/latest/ xdgdir := os.Getenv("XDG_CONFIG_HOME") var cfgdir string if xdgdir != "" { // User's configuration directory for rclone is $XDG_CONFIG_HOME/rclone cfgdir = filepath.Join(xdgdir, "rclone") } else if homeDir != "" { // User's configuration directory for rclone is $HOME/.config/rclone cfgdir = filepath.Join(homeDir, ".config", "rclone") } // Use rclone.conf from user's configuration directory if already existing var cfgpath string if cfgdir != "" { cfgpath = filepath.Join(cfgdir, configFileName) _, err := os.Stat(cfgpath) if err == nil { return cfgpath } } // Use .rclone.conf from user's home directory if already existing var homeconf string if homeDir != "" { homeconf = filepath.Join(homeDir, hiddenConfigFileName) _, err := os.Stat(homeconf) if err == nil { return homeconf } } // Check to see if user supplied a --config variable or environment // variable. We can't use pflag for this because it isn't initialised // yet so we search the command line manually. _, configSupplied := os.LookupEnv("RCLONE_CONFIG") if !configSupplied { for _, item := range os.Args { if item == "--config" || strings.HasPrefix(item, "--config=") { configSupplied = true break } } } // If user's configuration directory was found, then try to create it // and assume rclone.conf can be written there.
If user supplied config // then skip creating the directory since it will not be used. if cfgpath != "" { // cfgpath != "" implies cfgdir != "" if configSupplied { return cfgpath } err := os.MkdirAll(cfgdir, os.ModePerm) if err == nil { return cfgpath } } // Assume .rclone.conf can be written to user's home directory. if homeconf != "" { return homeconf } // Default to ./.rclone.conf (current working directory) if everything else fails. if !configSupplied { fs.Errorf(nil, "Couldn't find home directory or read HOME or XDG_CONFIG_HOME environment variables.") fs.Errorf(nil, "Defaulting to storing config in current directory.") fs.Errorf(nil, "Use the --config flag to work around this.") fs.Errorf(nil, "Error was: %v", err) } return hiddenConfigFileName } // LoadConfig loads the config file func LoadConfig() { // Set RCLONE_CONFIG_DIR for backend config and subprocesses _ = os.Setenv("RCLONE_CONFIG_DIR", filepath.Dir(ConfigPath)) // Load configuration file. var err error configFile, err = loadConfigFile() if err == errorConfigFileNotFound { fs.Logf(nil, "Config file %q not found - using defaults", ConfigPath) configFile, _ = goconfig.LoadFromReader(&bytes.Buffer{}) } else if err != nil { log.Fatalf("Failed to load config file %q: %v", ConfigPath, err) } else { fs.Debugf(nil, "Using config file from %q", ConfigPath) } // Start the token bucket limiter accounting.StartTokenBucket() // Start the bandwidth update ticker accounting.StartTokenTicker() // Start the transactions per second limiter fshttp.StartHTTPTokenBucket() } var errorConfigFileNotFound = errors.New("config file not found") // loadConfigFile will load a config file, and // automatically decrypt it. func loadConfigFile() (*goconfig.ConfigFile, error) { var usingPasswordCommand bool b, err := ioutil.ReadFile(ConfigPath) if err != nil { if os.IsNotExist(err) { return nil, errorConfigFileNotFound } return nil, err } // Find first non-empty line r := bufio.NewReader(bytes.NewBuffer(b)) for { line, _, err := r.ReadLine() if err != nil { if err == io.EOF { return goconfig.LoadFromReader(bytes.NewBuffer(b)) } return nil, err } l := strings.TrimSpace(string(line)) if len(l) == 0 || strings.HasPrefix(l, ";") || strings.HasPrefix(l, "#") { continue } // First non-empty or non-comment must be ENCRYPT_V0 if l == "RCLONE_ENCRYPT_V0:" { break } if strings.HasPrefix(l, "RCLONE_ENCRYPT_V") { return nil, errors.New("unsupported configuration encryption - update rclone for support") } return goconfig.LoadFromReader(bytes.NewBuffer(b)) } if len(configKey) == 0 { if len(fs.Config.PasswordCommand) != 0 { var stdout bytes.Buffer var stderr bytes.Buffer cmd := exec.Command(fs.Config.PasswordCommand[0], fs.Config.PasswordCommand[1:]...) cmd.Stdout = &stdout cmd.Stderr = &stderr cmd.Stdin = os.Stdin if err := cmd.Run(); err != nil { // One does not always get the stderr returned in the wrapped error.
fs.Errorf(nil, "Using --password-command returned: %v", err) if ers := strings.TrimSpace(stderr.String()); ers != "" { fs.Errorf(nil, "--password-command stderr: %s", ers) } return nil, errors.Wrap(err, "password command failed") } if pass := strings.Trim(stdout.String(), "\r\n"); pass != "" { err := setConfigPassword(pass) if err != nil { return nil, errors.Wrap(err, "incorrect password") } } else { return nil, errors.New("password-command returned empty string") } if len(configKey) == 0 { return nil, errors.New("unable to decrypt configuration: incorrect password") } usingPasswordCommand = true } else { usingPasswordCommand = false envpw := os.Getenv("RCLONE_CONFIG_PASS") if envpw != "" { err := setConfigPassword(envpw) if err != nil { fs.Errorf(nil, "Using RCLONE_CONFIG_PASS returned: %v", err) } else { fs.Debugf(nil, "Using RCLONE_CONFIG_PASS password.") } } } } // Encrypted content is base64 encoded. dec := base64.NewDecoder(base64.StdEncoding, r) box, err := ioutil.ReadAll(dec) if err != nil { return nil, errors.Wrap(err, "failed to load base64 encoded data") } if len(box) < 24+secretbox.Overhead { return nil, errors.New("Configuration data too short") } var out []byte for { if envKeyFile := os.Getenv("_RCLONE_CONFIG_KEY_FILE"); len(envKeyFile) > 0 { fs.Debugf(nil, "attempting to obtain configKey from temp file %s", envKeyFile) obscuredKey, err := ioutil.ReadFile(envKeyFile) if err != nil { errRemove := os.Remove(envKeyFile) if errRemove != nil { log.Fatalf("unable to read obscured config key and unable to delete the temp file: %v", err) } log.Fatalf("unable to read obscured config key: %v", err) } errRemove := os.Remove(envKeyFile) if errRemove != nil { log.Fatalf("unable to delete temp file with configKey: %v", err) } configKey = []byte(obscure.MustReveal(string(obscuredKey))) fs.Debugf(nil, "using _RCLONE_CONFIG_KEY_FILE for configKey") } else { if len(configKey) == 0 { if usingPasswordCommand { return nil, errors.New("using --password-command derived password, unable to decrypt configuration") } if !fs.Config.AskPassword { return nil, errors.New("unable to decrypt configuration and not allowed to ask for password - set RCLONE_CONFIG_PASS to your configuration password") } getConfigPassword("Enter configuration password:") } } // Nonce is first 24 bytes of the ciphertext var nonce [24]byte copy(nonce[:], box[:24]) var key [32]byte copy(key[:], configKey[:32]) // Attempt to decrypt var ok bool out, ok = secretbox.Open(nil, box[24:], &nonce, &key) if ok { break } // Retry fs.Errorf(nil, "Couldn't decrypt configuration, most likely wrong password.") configKey = nil } return goconfig.LoadFromReader(bytes.NewBuffer(out)) } // checkPassword normalises and validates the password func checkPassword(password string) (string, error) { if !utf8.ValidString(password) { return "", errors.New("password contains invalid utf8 characters") } // Check for leading/trailing whitespace trimmedPassword := strings.TrimSpace(password) // Warn user if password has leading+trailing whitespace if len(password) != len(trimmedPassword) { _, _ = fmt.Fprintln(os.Stderr, "Your password contains leading/trailing whitespace - in previous versions of rclone this was stripped") } // Normalize to reduce weird variations. password = norm.NFKC.String(password) if len(password) == 0 || len(trimmedPassword) == 0 { return "", errors.New("no characters in password") } return password, nil } // GetPassword asks the user for a password with the prompt given. 
func GetPassword(prompt string) string { _, _ = fmt.Fprintln(PasswordPromptOutput, prompt) for { _, _ = fmt.Fprint(PasswordPromptOutput, "password:") password := ReadPassword() password, err := checkPassword(password) if err == nil { return password } _, _ = fmt.Fprintf(os.Stderr, "Bad password: %v\n", err) } } // ChangePassword will query the user twice for the named password. If // the same password is entered it is returned. func ChangePassword(name string) string { for { a := GetPassword(fmt.Sprintf("Enter %s password:", name)) b := GetPassword(fmt.Sprintf("Confirm %s password:", name)) if a == b { return a } fmt.Println("Passwords do not match!") } } // getConfigPassword will query the user for a password the // first time it is required. func getConfigPassword(q string) { if len(configKey) != 0 { return } for { password := GetPassword(q) err := setConfigPassword(password) if err == nil { return } _, _ = fmt.Fprintln(os.Stderr, "Error:", err) } } // setConfigPassword will set the configKey to the hash of // the password. If the length of the password is // zero after trimming+normalization, an error is returned. func setConfigPassword(password string) error { password, err := checkPassword(password) if err != nil { return err } // Create SHA256 hash of the password sha := sha256.New() _, err = sha.Write([]byte("[" + password + "][rclone-config]")) if err != nil { return err } configKey = sha.Sum(nil) if PassConfigKeyForDaemonization { tempFile, err := ioutil.TempFile("", "rclone") if err != nil { log.Fatalf("cannot create temp file to store configKey: %v", err) } _, err = tempFile.WriteString(obscure.MustObscure(string(configKey))) if err != nil { errRemove := os.Remove(tempFile.Name()) if errRemove != nil { log.Fatalf("error writing configKey to temp file and also error deleting it: %v", err) } log.Fatalf("error writing configKey to temp file: %v", err) } err = tempFile.Close() if err != nil { errRemove := os.Remove(tempFile.Name()) if errRemove != nil { log.Fatalf("error closing temp file with configKey and also error deleting it: %v", err) } log.Fatalf("error closing temp file with configKey: %v", err) } fs.Debugf(nil, "saving configKey to temp file") err = os.Setenv("_RCLONE_CONFIG_KEY_FILE", tempFile.Name()) if err != nil { errRemove := os.Remove(tempFile.Name()) if errRemove != nil { log.Fatalf("unable to set environment variable _RCLONE_CONFIG_KEY_FILE and unable to delete the temp file: %v", err) } log.Fatalf("unable to set environment variable _RCLONE_CONFIG_KEY_FILE: %v", err) } } return nil } // changeConfigPassword will query the user twice // for a password. If the same password is entered // twice the key is updated. func changeConfigPassword() { err := setConfigPassword(ChangePassword("NEW configuration")) if err != nil { fmt.Printf("Failed to set config password: %v\n", err) return } } // saveConfig saves the configuration file. // If configKey has been set, the file will be encrypted.
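//
// Sketch of the resulting on-disk layout of an encrypted config, derived
// from the write path below (illustrative only):
//
//	# Encrypted rclone configuration File
//
//	RCLONE_ENCRYPT_V0:
//	<base64 of 24-byte nonce followed by the secretbox ciphertext>
//
// where the secretbox key is sha256("[" + password + "][rclone-config]")
// as computed in setConfigPassword above.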
func saveConfig() error { dir, name := filepath.Split(ConfigPath) err := os.MkdirAll(dir, os.ModePerm) if err != nil { return errors.Wrap(err, "failed to create config directory") } f, err := ioutil.TempFile(dir, name) if err != nil { return errors.Errorf("Failed to create temp file for new config: %v", err) } defer func() { if err := os.Remove(f.Name()); err != nil && !os.IsNotExist(err) { fs.Errorf(nil, "Failed to remove temp config file: %v", err) } }() var buf bytes.Buffer err = goconfig.SaveConfigData(getConfigData(), &buf) if err != nil { return errors.Errorf("Failed to save config file: %v", err) } if len(configKey) == 0 { if _, err := buf.WriteTo(f); err != nil { return errors.Errorf("Failed to write temp config file: %v", err) } } else { _, _ = fmt.Fprintln(f, "# Encrypted rclone configuration File") _, _ = fmt.Fprintln(f, "") _, _ = fmt.Fprintln(f, "RCLONE_ENCRYPT_V0:") // Generate new nonce and write it to the start of the ciphertext var nonce [24]byte n, _ := rand.Read(nonce[:]) if n != 24 { return errors.Errorf("nonce short read: %d", n) } enc := base64.NewEncoder(base64.StdEncoding, f) _, err = enc.Write(nonce[:]) if err != nil { return errors.Errorf("Failed to write temp config file: %v", err) } var key [32]byte copy(key[:], configKey[:32]) b := secretbox.Seal(nil, buf.Bytes(), &nonce, &key) _, err = enc.Write(b) if err != nil { return errors.Errorf("Failed to write temp config file: %v", err) } _ = enc.Close() } _ = f.Sync() err = f.Close() if err != nil { return errors.Errorf("Failed to close config file: %v", err) } var fileMode os.FileMode = 0600 info, err := os.Stat(ConfigPath) if err != nil { fs.Debugf(nil, "Using default permissions for config file: %v", fileMode) } else if info.Mode() != fileMode { fs.Debugf(nil, "Keeping previous permissions for config file: %v", info.Mode()) fileMode = info.Mode() } attemptCopyGroup(ConfigPath, f.Name()) err = os.Chmod(f.Name(), fileMode) if err != nil { fs.Errorf(nil, "Failed to set permissions on config file: %v", err) } if err = os.Rename(ConfigPath, ConfigPath+".old"); err != nil && !os.IsNotExist(err) { return errors.Errorf("Failed to move previous config to backup location: %v", err) } if err = os.Rename(f.Name(), ConfigPath); err != nil { return errors.Errorf("Failed to move newly written config from %s to final location: %v", f.Name(), err) } if err := os.Remove(ConfigPath + ".old"); err != nil && !os.IsNotExist(err) { fs.Errorf(nil, "Failed to remove backup config file: %v", err) } return nil } // SaveConfig saves the configuration file by calling saveConfig, // retrying after a short random sleep if it returns an error. func SaveConfig() { var err error for i := 0; i < fs.Config.LowLevelRetries+1; i++ { if err = saveConfig(); err == nil { return } waitingTimeMs := mathrand.Intn(1000) time.Sleep(time.Duration(waitingTimeMs) * time.Millisecond) } log.Fatalf("Failed to save config after %d tries: %v", fs.Config.LowLevelRetries, err) return } // SetValueAndSave sets the key to the value and saves just that // value in the config file. It loads the old config file in from // disk first and overwrites the given value only.
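//
// Illustrative usage (assumed call site, not from the original source):
//
//	if err := SetValueAndSave("myremote", "token", newToken); err != nil {
//		fs.Errorf(nil, "Failed to save token: %v", err)
//	}
//
// This is also the handler installed as fs.ConfigFileSet in init() above.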
func SetValueAndSave(name, key, value string) (err error) { // Set the value in config in case we fail to reload it getConfigData().SetValue(name, key, value) // Reload the config file reloadedConfigFile, err := loadConfigFile() if err == errorConfigFileNotFound { // Config file not written yet so ignore reload return nil } else if err != nil { return err } _, err = reloadedConfigFile.GetSection(name) if err != nil { // Section doesn't exist yet so ignore reload return nil } // Update the config file with the reloaded version configFile = reloadedConfigFile // Set the value in the reloaded version reloadedConfigFile.SetValue(name, key, value) // Save it again SaveConfig() return nil } // FileGetFresh reads the config key under section, returning the value or // an error if the config file was not found or that value couldn't be // read. func FileGetFresh(section, key string) (value string, err error) { reloadedConfigFile, err := loadConfigFile() if err != nil { return "", err } return reloadedConfigFile.GetValue(section, key) } // ShowRemotes shows an overview of the config file func ShowRemotes() { remotes := getConfigData().GetSectionList() if len(remotes) == 0 { return } sort.Strings(remotes) fmt.Printf("%-20s %s\n", "Name", "Type") fmt.Printf("%-20s %s\n", "====", "====") for _, remote := range remotes { fmt.Printf("%-20s %s\n", remote, FileGet(remote, "type")) } } // ChooseRemote chooses a remote name func ChooseRemote() string { remotes := getConfigData().GetSectionList() sort.Strings(remotes) return Choose("remote", remotes, nil, false) } // ReadLine reads some input var ReadLine = func() string { buf := bufio.NewReader(os.Stdin) line, err := buf.ReadString('\n') if err != nil { log.Fatalf("Failed to read line: %v", err) } return strings.TrimSpace(line) } // ReadNonEmptyLine prints prompt and calls ReadLine until the result is non-empty func ReadNonEmptyLine(prompt string) string { result := "" for result == "" { fmt.Print(prompt) result = strings.TrimSpace(ReadLine()) } return result } // CommandDefault - choose one. If return is pressed then it will // choose the defaultIndex if it is >= 0 func CommandDefault(commands []string, defaultIndex int) byte { opts := []string{} for i, text := range commands { def := "" if i == defaultIndex { def = " (default)" } fmt.Printf("%c) %s%s\n", text[0], text[1:], def) opts = append(opts, text[:1]) } optString := strings.Join(opts, "") optHelp := strings.Join(opts, "/") for { fmt.Printf("%s> ", optHelp) result := strings.ToLower(ReadLine()) if len(result) == 0 && defaultIndex >= 0 { return optString[defaultIndex] } if len(result) != 1 { continue } i := strings.Index(optString, string(result[0])) if i >= 0 { return result[0] } } } // Command - choose one func Command(commands []string) byte { return CommandDefault(commands, -1) } // Confirm asks the user for Yes or No and returns true or false // // If the user presses enter then the Default will be used func Confirm(Default bool) bool { defaultIndex := 0 if !Default { defaultIndex = 1 } return CommandDefault([]string{"yYes", "nNo"}, defaultIndex) == 'y' } // ConfirmWithConfig asks the user for Yes or No and returns true or // false.
// // If AutoConfirm is set, it will look up the value in m and return // that, but if it isn't set then it will return the Default value // passed in func ConfirmWithConfig(m configmap.Getter, configName string, Default bool) bool { if fs.Config.AutoConfirm { configString, ok := m.Get(configName) if ok { configValue, err := strconv.ParseBool(configString) if err != nil { fs.Errorf(nil, "Failed to parse config parameter %s=%q as boolean - using default %v: %v", configName, configString, Default, err) } else { Default = configValue } } answer := "No" if Default { answer = "Yes" } fmt.Printf("Auto confirm is set: answering %s, override by setting config parameter %s=%v\n", answer, configName, !Default) return Default } return Confirm(Default) } // Choose one of the defaults or type a new string if newOk is set func Choose(what string, defaults, help []string, newOk bool) string { valueDescription := "an existing" if newOk { valueDescription = "your own" } fmt.Printf("Choose a number from below, or type in %s value\n", valueDescription) attributes := []string{terminal.HiRedFg, terminal.HiGreenFg} for i, text := range defaults { var lines []string if help != nil { parts := strings.Split(help[i], "\n") lines = append(lines, parts...) } lines = append(lines, fmt.Sprintf("%q", text)) pos := i + 1 terminal.WriteString(attributes[i%len(attributes)]) if len(lines) == 1 { fmt.Printf("%2d > %s\n", pos, text) } else { mid := (len(lines) - 1) / 2 for i, line := range lines { var sep rune switch i { case 0: sep = '/' case len(lines) - 1: sep = '\\' default: sep = '|' } number := " " if i == mid { number = fmt.Sprintf("%2d", pos) } fmt.Printf("%s %c %s\n", number, sep, line) } } terminal.WriteString(terminal.Reset) } for { fmt.Printf("%s> ", what) result := ReadLine() i, err := strconv.Atoi(result) if err != nil { if newOk { return result } for _, v := range defaults { if result == v { return result } } continue } if i >= 1 && i <= len(defaults) { return defaults[i-1] } } } // ChooseNumber asks the user to enter a number between min and max // inclusive, prompting them with what. func ChooseNumber(what string, min, max int) int { for { fmt.Printf("%s> ", what) result := ReadLine() i, err := strconv.Atoi(result) if err != nil { fmt.Printf("Bad number: %v\n", err) continue } if i < min || i > max { fmt.Printf("Out of range - %d to %d inclusive\n", min, max) continue } return i } } // ShowRemote shows the contents of the remote func ShowRemote(name string) { fmt.Printf("--------------------\n") fmt.Printf("[%s]\n", name) fs := MustFindByName(name) for _, key := range getConfigData().GetKeyList(name) { isPassword := false for _, option := range fs.Options { if option.Name == key && option.IsPassword { isPassword = true break } } value := FileGet(name, key) if isPassword && value != "" { fmt.Printf("%s = *** ENCRYPTED ***\n", key) } else { fmt.Printf("%s = %s\n", key, value) } } fmt.Printf("--------------------\n") } // OkRemote prints the contents of the remote and asks if it is OK func OkRemote(name string) bool { ShowRemote(name) switch i := CommandDefault([]string{"yYes this is OK", "eEdit this remote", "dDelete this remote"}, 0); i { case 'y': return true case 'e': return false case 'd': getConfigData().DeleteSection(name) return true default: fs.Errorf(nil, "Bad choice %c", i) } return false } // MustFindByName finds the RegInfo for the remote name passed in or // exits with a fatal error.
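//
// Illustrative usage (the remote name is hypothetical, not from the
// original source):
//
//	ri := MustFindByName("myremote") // reads the section's "type" key,
//	                                 // then calls fs.MustFind on it
//	fmt.Println(ri.Name)             // e.g. "drive"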
func MustFindByName(name string) *fs.RegInfo { fsType := FileGet(name, "type") if fsType == "" { log.Fatalf("Couldn't find type of fs for %q", name) } return fs.MustFind(fsType) } // RemoteConfig runs the config helper for the remote if needed func RemoteConfig(name string) { fmt.Printf("Remote config\n") f := MustFindByName(name) if f.Config != nil { m := fs.ConfigMap(f, name) f.Config(name, m) } } // matchProvider returns true if provider matches the providerConfig string. // // The providerConfig string can either be a list of providers to // match, or if it starts with "!" it will be a list of providers not // to match. // // If either providerConfig or provider is blank then it will return true func matchProvider(providerConfig, provider string) bool { if providerConfig == "" || provider == "" { return true } negate := false if strings.HasPrefix(providerConfig, "!") { providerConfig = providerConfig[1:] negate = true } providers := strings.Split(providerConfig, ",") matched := false for _, p := range providers { if p == provider { matched = true break } } if negate { return !matched } return matched } // ChooseOption asks the user to choose an option func ChooseOption(o *fs.Option, name string) string { var subProvider = getConfigData().MustValue(name, fs.ConfigProvider, "") fmt.Println(o.Help) if o.IsPassword { actions := []string{"yYes type in my own password", "gGenerate random password"} defaultAction := -1 if !o.Required { defaultAction = len(actions) actions = append(actions, "nNo leave this optional password blank") } var password string var err error switch i := CommandDefault(actions, defaultAction); i { case 'y': password = ChangePassword("the") case 'g': for { fmt.Printf("Password strength in bits.\n64 is just about memorable\n128 is secure\n1024 is the maximum\n") bits := ChooseNumber("Bits", 64, 1024) password, err = Password(bits) if err != nil { log.Fatalf("Failed to make password: %v", err) } fmt.Printf("Your password is: %s\n", password) fmt.Printf("Use this password? Please note that an obscured version of this \npassword (and not the " + "password itself) will be stored under your \nconfiguration file, so keep this generated password " + "in a safe place.\n") if Confirm(true) { break } } case 'n': return "" default: fs.Errorf(nil, "Bad choice %c", i) } return obscure.MustObscure(password) } what := fmt.Sprintf("%T value", o.Default) switch o.Default.(type) { case bool: what = "boolean value (true or false)" case fs.SizeSuffix: what = "size with suffix k,M,G,T" case fs.Duration: what = "duration s,m,h,d,w,M,y" case int, int8, int16, int32, int64: what = "signed integer" case uint, byte, uint16, uint32, uint64: what = "unsigned integer" } var in string for { fmt.Printf("Enter a %s. 
Press Enter for the default (%q).\n", what, fmt.Sprint(o.Default)) if len(o.Examples) > 0 { var values []string var help []string for _, example := range o.Examples { if matchProvider(example.Provider, subProvider) { values = append(values, example.Value) help = append(help, example.Help) } } in = Choose(o.Name, values, help, true) } else { fmt.Printf("%s> ", o.Name) in = ReadLine() } if in == "" { if o.Required && fmt.Sprint(o.Default) == "" { fmt.Printf("This value is required and it has no default.\n") continue } break } newIn, err := configstruct.StringToInterface(o.Default, in) if err != nil { fmt.Printf("Failed to parse %q: %v\n", in, err) continue } in = fmt.Sprint(newIn) // canonicalise break } return in } // Suppress the confirm prompts and return a function to undo that func suppressConfirm() func() { old := fs.Config.AutoConfirm fs.Config.AutoConfirm = true return func() { fs.Config.AutoConfirm = old } } // UpdateRemote adds the keyValues passed in to the remote of name. // keyValues should be key, value pairs. func UpdateRemote(name string, keyValues rc.Params, doObscure, noObscure bool) error { if doObscure && noObscure { return errors.New("can't use --obscure and --no-obscure together") } err := fspath.CheckConfigName(name) if err != nil { return err } defer suppressConfirm()() // Work out which options need to be obscured needsObscure := map[string]struct{}{} if !noObscure { if fsType := FileGet(name, "type"); fsType != "" { if ri, err := fs.Find(fsType); err != nil { fs.Debugf(nil, "Couldn't find fs for type %q", fsType) } else { for _, opt := range ri.Options { if opt.IsPassword { needsObscure[opt.Name] = struct{}{} } } } } else { fs.Debugf(nil, "UpdateRemote: Couldn't find fs type") } } // Set the config for k, v := range keyValues { vStr := fmt.Sprint(v) // Obscure parameter if necessary if _, ok := needsObscure[k]; ok { _, err := obscure.Reveal(vStr) if err != nil || doObscure { // If error => not already obscured, so obscure it // or we are forced to obscure vStr, err = obscure.Obscure(vStr) if err != nil { return errors.Wrap(err, "UpdateRemote: obscure failed") } } } getConfigData().SetValue(name, k, vStr) } RemoteConfig(name) SaveConfig() return nil } // CreateRemote creates a new remote with name, provider and a list of // parameters which are key, value pairs. If update is set then it // adds the new keys rather than replacing all of them. func CreateRemote(name string, provider string, keyValues rc.Params, doObscure, noObscure bool) error { err := fspath.CheckConfigName(name) if err != nil { return err } // Delete the old config if it exists getConfigData().DeleteSection(name) // Set the type getConfigData().SetValue(name, "type", provider) // Set the remaining values return UpdateRemote(name, keyValues, doObscure, noObscure) } // PasswordRemote adds the keyValues passed in to the remote of name. // keyValues should be key, value pairs. 
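//
// Illustrative usage (the remote name and value are hypothetical, not from
// the original source):
//
//	err := PasswordRemote("myremote", rc.Params{
//		"pass": "secret", // obscured before being stored
//	})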
func PasswordRemote(name string, keyValues rc.Params) error { err := fspath.CheckConfigName(name) if err != nil { return err } defer suppressConfirm()() for k, v := range keyValues { keyValues[k] = obscure.MustObscure(fmt.Sprint(v)) } return UpdateRemote(name, keyValues, false, true) } // JSONListProviders prints all the providers and options in JSON format func JSONListProviders() error { b, err := json.MarshalIndent(fs.Registry, "", " ") if err != nil { return errors.Wrap(err, "failed to marshal examples") } _, err = os.Stdout.Write(b) if err != nil { return errors.Wrap(err, "failed to write providers list") } return nil } // fsOption returns an Option describing the possible remotes func fsOption() *fs.Option { o := &fs.Option{ Name: "Storage", Help: "Type of storage to configure.", Default: "", } for _, item := range fs.Registry { example := fs.OptionExample{ Value: item.Name, Help: item.Description, } o.Examples = append(o.Examples, example) } o.Examples.Sort() return o } // NewRemoteName asks the user for a name for a new remote func NewRemoteName() (name string) { for { fmt.Printf("name> ") name = ReadLine() _, err := getConfigData().GetSection(name) if err == nil { fmt.Printf("Remote %q already exists.\n", name) continue } err = fspath.CheckConfigName(name) switch { case name == "": fmt.Printf("Can't use empty name.\n") case driveletter.IsDriveLetter(name): fmt.Printf("Can't use %q as it can be confused with a drive letter.\n", name) case err != nil: fmt.Printf("Can't use %q as %v.\n", name, err) default: return name } } } // editOptions edits the options. If new is true then it just allows // entry and doesn't show any old values. func editOptions(ri *fs.RegInfo, name string, isNew bool) { fmt.Printf("** See help for %s backend at: https://rclone.org/%s/ **\n\n", ri.Name, ri.FileName()) hasAdvanced := false for _, advanced := range []bool{false, true} { if advanced { if !hasAdvanced { break } fmt.Printf("Edit advanced config? (y/n)\n") if !Confirm(false) { break } } for _, option := range ri.Options { isVisible := option.Hide&fs.OptionHideConfigurator == 0 hasAdvanced = hasAdvanced || (option.Advanced && isVisible) if option.Advanced != advanced { continue } subProvider := getConfigData().MustValue(name, fs.ConfigProvider, "") if matchProvider(option.Provider, subProvider) && isVisible { if !isNew { fmt.Printf("Value %q = %q\n", option.Name, FileGet(name, option.Name)) fmt.Printf("Edit? (y/n)>\n") if !Confirm(false) { continue } } FileSet(name, option.Name, ChooseOption(&option, name)) } } } } // NewRemote make a new remote from its name func NewRemote(name string) { var ( newType string ri *fs.RegInfo err error ) // Set the type first for { newType = ChooseOption(fsOption(), name) ri, err = fs.Find(newType) if err != nil { fmt.Printf("Bad remote %q: %v\n", newType, err) continue } break } getConfigData().SetValue(name, "type", newType) editOptions(ri, name, true) RemoteConfig(name) if OkRemote(name) { SaveConfig() return } EditRemote(ri, name) } // EditRemote gets the user to edit a remote func EditRemote(ri *fs.RegInfo, name string) { ShowRemote(name) fmt.Printf("Edit remote\n") for { editOptions(ri, name, false) if OkRemote(name) { break } } SaveConfig() RemoteConfig(name) } // DeleteRemote gets the user to delete a remote func DeleteRemote(name string) { getConfigData().DeleteSection(name) SaveConfig() } // copyRemote asks the user for a new remote name and copies name into // it. Returns the new name. 
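//
// Illustrative flow (assumed, not from the original source): given a
// section [old], copyRemote("old") prompts for a new name via
// NewRemoteName, copies every key of [old] into the new section and
// returns the new name. The caller decides what happens to [old]:
// RenameRemote deletes it afterwards, CopyRemote keeps it.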
func copyRemote(name string) string { newName := NewRemoteName() // Copy the keys for _, key := range getConfigData().GetKeyList(name) { value := getConfigData().MustValue(name, key, "") getConfigData().SetValue(newName, key, value) } return newName } // RenameRemote renames a config section func RenameRemote(name string) { fmt.Printf("Enter new name for %q remote.\n", name) newName := copyRemote(name) if name != newName { getConfigData().DeleteSection(name) SaveConfig() } } // CopyRemote copies a config section func CopyRemote(name string) { fmt.Printf("Enter name for copy of %q remote.\n", name) copyRemote(name) SaveConfig() } // ShowConfigLocation prints the location of the config file in use func ShowConfigLocation() { if _, err := os.Stat(ConfigPath); os.IsNotExist(err) { fmt.Println("Configuration file doesn't exist, but rclone will use this path:") } else { fmt.Println("Configuration file is stored at:") } fmt.Printf("%s\n", ConfigPath) } // ShowConfig prints the (unencrypted) config options func ShowConfig() { var buf bytes.Buffer if err := goconfig.SaveConfigData(getConfigData(), &buf); err != nil { log.Fatalf("Failed to serialize config: %v", err) } str := buf.String() if str == "" { str = "; empty config\n" } fmt.Printf("%s", str) } // EditConfig edits the config file interactively func EditConfig() { for { haveRemotes := len(getConfigData().GetSectionList()) != 0 what := []string{"eEdit existing remote", "nNew remote", "dDelete remote", "rRename remote", "cCopy remote", "sSet configuration password", "qQuit config"} if haveRemotes { fmt.Printf("Current remotes:\n\n") ShowRemotes() fmt.Printf("\n") } else { fmt.Printf("No remotes found - make a new one\n") // take 2nd item and last 2 items of menu list what = append(what[1:2], what[len(what)-2:]...) } switch i := Command(what); i { case 'e': name := ChooseRemote() fs := MustFindByName(name) EditRemote(fs, name) case 'n': NewRemote(NewRemoteName()) case 'd': name := ChooseRemote() DeleteRemote(name) case 'r': RenameRemote(ChooseRemote()) case 'c': CopyRemote(ChooseRemote()) case 's': SetPassword() case 'q': return } } } // SetPassword will allow the user to modify the current // configuration encryption settings. func SetPassword() { for { if len(configKey) > 0 { fmt.Println("Your configuration is encrypted.") what := []string{"cChange Password", "uUnencrypt configuration", "qQuit to main menu"} switch i := Command(what); i { case 'c': changeConfigPassword() SaveConfig() fmt.Println("Password changed") continue case 'u': configKey = nil SaveConfig() continue case 'q': return } } else { fmt.Println("Your configuration is not encrypted.") fmt.Println("If you add a password, you will protect your login information to cloud services.") what := []string{"aAdd Password", "qQuit to main menu"} switch i := Command(what); i { case 'a': changeConfigPassword() SaveConfig() fmt.Println("Password set") continue case 'q': return } } } } // Authorize is for remote authorization of headless machines. 
// // It expects 1 or 3 arguments // // rclone authorize "fs name" // rclone authorize "fs name" "client id" "client secret" func Authorize(args []string, noAutoBrowser bool) { defer suppressConfirm()() switch len(args) { case 1, 3: default: log.Fatalf("Invalid number of arguments: %d", len(args)) } newType := args[0] f := fs.MustFind(newType) if f.Config == nil { log.Fatalf("Can't authorize fs %q", newType) } // Name used for temporary fs name := "**temp-fs**" // Make sure we delete it defer DeleteRemote(name) // Indicate that we are running rclone authorize getConfigData().SetValue(name, ConfigAuthorize, "true") if noAutoBrowser { getConfigData().SetValue(name, ConfigAuthNoBrowser, "true") } if len(args) == 3 { getConfigData().SetValue(name, ConfigClientID, args[1]) getConfigData().SetValue(name, ConfigClientSecret, args[2]) } m := fs.ConfigMap(f, name) f.Config(name, m) } // FileGetFlag gets the config key under section, returning the // value and true if found, or ("", false) otherwise func FileGetFlag(section, key string) (string, bool) { newValue, err := getConfigData().GetValue(section, key) return newValue, err == nil } // FileGet gets the config key under section returning the // default or empty string if not set. // // It looks up defaults in the environment if they are present func FileGet(section, key string, defaultVal ...string) string { envKey := fs.ConfigToEnv(section, key) newValue, found := os.LookupEnv(envKey) if found { defaultVal = []string{newValue} } return getConfigData().MustValue(section, key, defaultVal...) } // FileSet sets the key in section to value. It doesn't save // the config file. func FileSet(section, key, value string) { if value != "" { getConfigData().SetValue(section, key, value) } else { FileDeleteKey(section, key) } } // FileDeleteKey deletes the config key in the config file. // It returns true if the key was deleted, // or returns false if the section or key didn't exist. func FileDeleteKey(section, key string) bool { return getConfigData().DeleteKey(section, key) } var matchEnv = regexp.MustCompile(`^RCLONE_CONFIG_(.*?)_TYPE=.*$`) // FileRefresh ensures the latest configFile is loaded from disk func FileRefresh() error { reloadedConfigFile, err := loadConfigFile() if err != nil { return err } configFile = reloadedConfigFile return nil } // FileSections returns the sections in the config file // including any defined by environment variables. func FileSections() []string { sections := getConfigData().GetSectionList() for _, item := range os.Environ() { matches := matchEnv.FindStringSubmatch(item) if len(matches) == 2 { sections = append(sections, strings.ToLower(matches[1])) } } return sections } // DumpRcRemote dumps the config for a single remote func DumpRcRemote(name string) (dump rc.Params) { params := rc.Params{} for _, key := range getConfigData().GetKeyList(name) { params[key] = FileGet(name, key) } return params } // DumpRcBlob dumps all the config as an unstructured blob suitable // for the rc func DumpRcBlob() (dump rc.Params) { dump = rc.Params{} for _, name := range getConfigData().GetSectionList() { dump[name] = DumpRcRemote(name) } return dump } // Dump dumps all the config as a JSON file func Dump() error { dump := DumpRcBlob() b, err := json.MarshalIndent(dump, "", " ") if err != nil { return errors.Wrap(err, "failed to marshal config dump") } _, err = os.Stdout.Write(b) if err != nil { return errors.Wrap(err, "failed to write config dump") } return nil } // makeCacheDir returns a directory to use for caching.
// // Code borrowed from go stdlib until it is made public func makeCacheDir() (dir string) { // Compute default location. switch runtime.GOOS { case "windows": dir = os.Getenv("LocalAppData") case "darwin": dir = os.Getenv("HOME") if dir != "" { dir += "/Library/Caches" } case "plan9": dir = os.Getenv("home") if dir != "" { // Plan 9 has no established per-user cache directory, // but $home/lib/xyz is the usual equivalent of $HOME/.xyz on Unix. dir += "/lib/cache" } default: // Unix // https://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html dir = os.Getenv("XDG_CACHE_HOME") if dir == "" { dir = os.Getenv("HOME") if dir != "" { dir += "/.cache" } } } // if no dir found then use TempDir - we will have a cachedir! if dir == "" { dir = os.TempDir() } return filepath.Join(dir, "rclone") } rclone-1.53.3/fs/config/config_other.go000066400000000000000000000005271375552240400177520ustar00rootroot00000000000000// Read, write and edit the config file // Non-unix specific functions. // +build !darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris package config // attemptCopyGroup tries to keep the group the same, which only makes sense // for systems with a user-group-world permission model. func attemptCopyGroup(fromPath, toPath string) {} rclone-1.53.3/fs/config/config_read_password.go000066400000000000000000000012321375552240400214600ustar00rootroot00000000000000// ReadPassword for OSes which are supported by golang.org/x/crypto/ssh/terminal // See https://github.com/golang/go/issues/14441 - plan9 // https://github.com/golang/go/issues/13085 - solaris // +build !solaris,!plan9 package config import ( "fmt" "log" "os" "github.com/rclone/rclone/lib/terminal" ) // ReadPassword reads a password without echoing it to the terminal. func ReadPassword() string { stdin := int(os.Stdin.Fd()) if !terminal.IsTerminal(stdin) { return ReadLine() } line, err := terminal.ReadPassword(stdin) _, _ = fmt.Fprintln(os.Stderr) if err != nil { log.Fatalf("Failed to read password: %v", err) } return string(line) } rclone-1.53.3/fs/config/config_read_password_unsupported.go000066400000000000000000000005471375552240400241400ustar00rootroot00000000000000// ReadPassword for OSes which are not supported by golang.org/x/crypto/ssh/terminal // See https://github.com/golang/go/issues/14441 - plan9 // https://github.com/golang/go/issues/13085 - solaris // +build solaris plan9 package config // ReadPassword reads a password, echoing it to the terminal.
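//
// The two files pair complementary build constraints, so exactly one
// ReadPassword implementation is compiled per platform (shown for
// reference):
//
//	config_read_password.go:             // +build !solaris,!plan9
//	config_read_password_unsupported.go: // +build solaris plan9
//
// On these platforms terminal.ReadPassword is unavailable, so input falls
// back to ReadLine and is visible as typed.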
func ReadPassword() string { return ReadLine() } rclone-1.53.3/fs/config/config_test.go000066400000000000000000000272251375552240400176140ustar00rootroot00000000000000package config import ( "bytes" "fmt" "io/ioutil" "os" "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/rc" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func testConfigFile(t *testing.T, configFileName string) func() { configKey = nil // reset password _ = os.Unsetenv("_RCLONE_CONFIG_KEY_FILE") _ = os.Unsetenv("RCLONE_CONFIG_PASS") // create temp config file tempFile, err := ioutil.TempFile("", configFileName) assert.NoError(t, err) path := tempFile.Name() assert.NoError(t, tempFile.Close()) // temporarily adapt configuration oldOsStdout := os.Stdout oldConfigPath := ConfigPath oldConfig := fs.Config oldConfigFile := configFile oldReadLine := ReadLine oldPassword := Password os.Stdout = nil ConfigPath = path fs.Config = &fs.ConfigInfo{} configFile = nil LoadConfig() assert.Equal(t, []string{}, getConfigData().GetSectionList()) // Fake a remote fs.Register(&fs.RegInfo{ Name: "config_test_remote", Options: fs.Options{ { Name: "bool", Default: false, IsPassword: false, }, { Name: "pass", Default: "", IsPassword: true, }, }, }) // Undo the above return func() { err := os.Remove(path) assert.NoError(t, err) os.Stdout = oldOsStdout ConfigPath = oldConfigPath ReadLine = oldReadLine Password = oldPassword fs.Config = oldConfig configFile = oldConfigFile _ = os.Unsetenv("_RCLONE_CONFIG_KEY_FILE") _ = os.Unsetenv("RCLONE_CONFIG_PASS") } } // makeReadLine makes a simple readLine which returns a fixed list of // strings func makeReadLine(answers []string) func() string { i := 0 return func() string { i = i + 1 return answers[i-1] } } func TestCRUD(t *testing.T) { defer testConfigFile(t, "crud.conf")() // script for creating remote ReadLine = makeReadLine([]string{ "config_test_remote", // type "true", // bool value "y", // type my own password "secret", // password "secret", // repeat "y", // looks good, save }) NewRemote("test") assert.Equal(t, []string{"test"}, configFile.GetSectionList()) assert.Equal(t, "config_test_remote", FileGet("test", "type")) assert.Equal(t, "true", FileGet("test", "bool")) assert.Equal(t, "secret", obscure.MustReveal(FileGet("test", "pass"))) // normal rename, test → asdf ReadLine = makeReadLine([]string{ "asdf", "asdf", "asdf", }) RenameRemote("test") assert.Equal(t, []string{"asdf"}, configFile.GetSectionList()) assert.Equal(t, "config_test_remote", FileGet("asdf", "type")) assert.Equal(t, "true", FileGet("asdf", "bool")) assert.Equal(t, "secret", obscure.MustReveal(FileGet("asdf", "pass"))) // delete remote DeleteRemote("asdf") assert.Equal(t, []string{}, configFile.GetSectionList()) } func TestChooseOption(t *testing.T) { defer testConfigFile(t, "crud.conf")() // script for creating remote ReadLine = makeReadLine([]string{ "config_test_remote", // type "false", // bool value "x", // bad choice "g", // generate password "1024", // very big "y", // password OK "y", // looks good, save }) Password = func(bits int) (string, error) { assert.Equal(t, 1024, bits) return "not very random password", nil } NewRemote("test") assert.Equal(t, "false", FileGet("test", "bool")) assert.Equal(t, "not very random password", obscure.MustReveal(FileGet("test", "pass"))) // script for creating remote ReadLine = makeReadLine([]string{ "config_test_remote", // type "true", // bool value "n", // not required "y", // looks good, save 
}) NewRemote("test") assert.Equal(t, "true", FileGet("test", "bool")) assert.Equal(t, "", FileGet("test", "pass")) } func TestNewRemoteName(t *testing.T) { defer testConfigFile(t, "crud.conf")() // script for creating remote ReadLine = makeReadLine([]string{ "config_test_remote", // type "true", // bool value "n", // not required "y", // looks good, save }) NewRemote("test") ReadLine = makeReadLine([]string{ "test", // already exists "", // empty string not allowed "bad@characters", // bad characters "newname", // OK }) assert.Equal(t, "newname", NewRemoteName()) } func TestCreateUpatePasswordRemote(t *testing.T) { defer testConfigFile(t, "update.conf")() for _, doObscure := range []bool{false, true} { for _, noObscure := range []bool{false, true} { if doObscure && noObscure { break } t.Run(fmt.Sprintf("doObscure=%v,noObscure=%v", doObscure, noObscure), func(t *testing.T) { require.NoError(t, CreateRemote("test2", "config_test_remote", rc.Params{ "bool": true, "pass": "potato", }, doObscure, noObscure)) assert.Equal(t, []string{"test2"}, configFile.GetSectionList()) assert.Equal(t, "config_test_remote", FileGet("test2", "type")) assert.Equal(t, "true", FileGet("test2", "bool")) gotPw := FileGet("test2", "pass") if !noObscure { gotPw = obscure.MustReveal(gotPw) } assert.Equal(t, "potato", gotPw) wantPw := obscure.MustObscure("potato2") require.NoError(t, UpdateRemote("test2", rc.Params{ "bool": false, "pass": wantPw, "spare": "spare", }, doObscure, noObscure)) assert.Equal(t, []string{"test2"}, configFile.GetSectionList()) assert.Equal(t, "config_test_remote", FileGet("test2", "type")) assert.Equal(t, "false", FileGet("test2", "bool")) gotPw = FileGet("test2", "pass") if doObscure { gotPw = obscure.MustReveal(gotPw) } assert.Equal(t, wantPw, gotPw) require.NoError(t, PasswordRemote("test2", rc.Params{ "pass": "potato3", })) assert.Equal(t, []string{"test2"}, configFile.GetSectionList()) assert.Equal(t, "config_test_remote", FileGet("test2", "type")) assert.Equal(t, "false", FileGet("test2", "bool")) assert.Equal(t, "potato3", obscure.MustReveal(FileGet("test2", "pass"))) }) } } } // Test some error cases func TestReveal(t *testing.T) { for _, test := range []struct { in string wantErr string }{ {"YmJiYmJiYmJiYmJiYmJiYp*gcEWbAw", "base64 decode failed when revealing password - is it obscured?: illegal base64 data at input byte 22"}, {"aGVsbG8", "input too short when revealing password - is it obscured?"}, {"", "input too short when revealing password - is it obscured?"}, } { gotString, gotErr := obscure.Reveal(test.in) assert.Equal(t, "", gotString) assert.Equal(t, test.wantErr, gotErr.Error()) } } func TestConfigLoad(t *testing.T) { oldConfigPath := ConfigPath ConfigPath = "./testdata/plain.conf" defer func() { ConfigPath = oldConfigPath }() configKey = nil // reset password c, err := loadConfigFile() if err != nil { t.Fatal(err) } sections := c.GetSectionList() var expect = []string{"RCLONE_ENCRYPT_V0", "nounc", "unc"} assert.Equal(t, expect, sections) keys := c.GetKeyList("nounc") expect = []string{"type", "nounc"} assert.Equal(t, expect, keys) } func TestConfigLoadEncrypted(t *testing.T) { var err error oldConfigPath := ConfigPath ConfigPath = "./testdata/encrypted.conf" defer func() { ConfigPath = oldConfigPath configKey = nil // reset password }() // Set correct password err = setConfigPassword("asdf") require.NoError(t, err) c, err := loadConfigFile() require.NoError(t, err) sections := c.GetSectionList() var expect = []string{"nounc", "unc"} assert.Equal(t, expect, sections) keys := 
c.GetKeyList("nounc") expect = []string{"type", "nounc"} assert.Equal(t, expect, keys) } func TestConfigLoadEncryptedWithValidPassCommand(t *testing.T) { oldConfigPath := ConfigPath oldConfig := fs.Config ConfigPath = "./testdata/encrypted.conf" // using fs.Config.PasswordCommand, correct password fs.Config.PasswordCommand = fs.SpaceSepList{"echo", "asdf"} defer func() { ConfigPath = oldConfigPath configKey = nil // reset password fs.Config = oldConfig fs.Config.PasswordCommand = nil }() configKey = nil // reset password c, err := loadConfigFile() require.NoError(t, err) sections := c.GetSectionList() var expect = []string{"nounc", "unc"} assert.Equal(t, expect, sections) keys := c.GetKeyList("nounc") expect = []string{"type", "nounc"} assert.Equal(t, expect, keys) } func TestConfigLoadEncryptedWithInvalidPassCommand(t *testing.T) { oldConfigPath := ConfigPath oldConfig := fs.Config ConfigPath = "./testdata/encrypted.conf" // using fs.Config.PasswordCommand, incorrect password fs.Config.PasswordCommand = fs.SpaceSepList{"echo", "asdf-blurfl"} defer func() { ConfigPath = oldConfigPath configKey = nil // reset password fs.Config = oldConfig fs.Config.PasswordCommand = nil }() configKey = nil // reset password _, err := loadConfigFile() require.Error(t, err) assert.Contains(t, err.Error(), "using --password-command derived password") } func TestConfigLoadEncryptedFailures(t *testing.T) { var err error // This file should be too short to be decoded. oldConfigPath := ConfigPath ConfigPath = "./testdata/enc-short.conf" defer func() { ConfigPath = oldConfigPath }() _, err = loadConfigFile() require.Error(t, err) // This file contains invalid base64 characters. ConfigPath = "./testdata/enc-invalid.conf" _, err = loadConfigFile() require.Error(t, err) // This file contains invalid base64 characters. ConfigPath = "./testdata/enc-too-new.conf" _, err = loadConfigFile() require.Error(t, err) // This file does not exist. 
ConfigPath = "./testdata/filenotfound.conf" c, err := loadConfigFile() assert.Equal(t, errorConfigFileNotFound, err) assert.Nil(t, c) } func TestPassword(t *testing.T) { defer func() { configKey = nil // reset password }() var err error // Empty password should give error err = setConfigPassword(" \t ") require.Error(t, err) // Test invalid utf8 sequence err = setConfigPassword(string([]byte{0xff, 0xfe, 0xfd}) + "abc") require.Error(t, err) // Simple check of wrong passwords hashedKeyCompare(t, "mis", "match", false) // Check that passwords match after unicode normalization hashedKeyCompare(t, "ff\u0041\u030A", "ffÅ", true) // Check that passwords preserves case hashedKeyCompare(t, "abcdef", "ABCDEF", false) } func hashedKeyCompare(t *testing.T, a, b string, shouldMatch bool) { err := setConfigPassword(a) require.NoError(t, err) k1 := configKey err = setConfigPassword(b) require.NoError(t, err) k2 := configKey if shouldMatch { assert.Equal(t, k1, k2) } else { assert.NotEqual(t, k1, k2) } } func TestMatchProvider(t *testing.T) { for _, test := range []struct { config string provider string want bool }{ {"", "", true}, {"one", "one", true}, {"one,two", "two", true}, {"one,two,three", "two", true}, {"one", "on", false}, {"one,two,three", "tw", false}, {"!one,two,three", "two", false}, {"!one,two,three", "four", true}, } { what := fmt.Sprintf("%q,%q", test.config, test.provider) got := matchProvider(test.config, test.provider) assert.Equal(t, test.want, got, what) } } func TestFileRefresh(t *testing.T) { defer testConfigFile(t, "refresh.conf")() require.NoError(t, CreateRemote("refresh_test", "config_test_remote", rc.Params{ "bool": true, }, false, false)) b, err := ioutil.ReadFile(ConfigPath) assert.NoError(t, err) b = bytes.Replace(b, []byte("refresh_test"), []byte("refreshed_test"), 1) err = ioutil.WriteFile(ConfigPath, b, 0644) assert.NoError(t, err) assert.NotEqual(t, []string{"refreshed_test"}, configFile.GetSectionList()) err = FileRefresh() assert.NoError(t, err) assert.Equal(t, []string{"refreshed_test"}, configFile.GetSectionList()) } rclone-1.53.3/fs/config/config_unix.go000066400000000000000000000016361375552240400176160ustar00rootroot00000000000000// Read, write and edit the config file // Unix specific functions. // +build darwin dragonfly freebsd linux netbsd openbsd solaris package config import ( "os" "os/user" "strconv" "syscall" "github.com/rclone/rclone/fs" ) // attemptCopyGroups tries to keep the group the same. User will be the one // who is currently running this process. func attemptCopyGroup(fromPath, toPath string) { info, err := os.Stat(fromPath) if err != nil || info.Sys() == nil { return } if stat, ok := info.Sys().(*syscall.Stat_t); ok { uid := int(stat.Uid) // prefer self over previous owner of file, because it has a higher chance // of success if user, err := user.Current(); err == nil { if tmpUID, err := strconv.Atoi(user.Uid); err == nil { uid = tmpUID } } if err = os.Chown(toPath, uid, int(stat.Gid)); err != nil { fs.Debugf(nil, "Failed to keep previous owner of config file: %v", err) } } } rclone-1.53.3/fs/config/configflags/000077500000000000000000000000001375552240400172335ustar00rootroot00000000000000rclone-1.53.3/fs/config/configflags/configflags.go000066400000000000000000000410751375552240400220530ustar00rootroot00000000000000// Package configflags defines the flags used by rclone. It is // decoupled into a separate package so it can be replaced. 
package configflags // Options set by command line flags import ( "log" "net" "path/filepath" "strings" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/flags" fsLog "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/fs/rc" "github.com/sirupsen/logrus" "github.com/spf13/pflag" ) var ( // these will get interpreted into fs.Config via SetFlags() below verbose int quiet bool dumpHeaders bool dumpBodies bool deleteBefore bool deleteDuring bool deleteAfter bool bindAddr string disableFeatures string uploadHeaders []string downloadHeaders []string headers []string ) // AddFlags adds the non filing system specific flags to the command func AddFlags(flagSet *pflag.FlagSet) { rc.AddOption("main", fs.Config) // NB defaults which aren't the zero for the type should be set in fs/config.go NewConfig flags.CountVarP(flagSet, &verbose, "verbose", "v", "Print lots more stuff (repeat for more)") flags.BoolVarP(flagSet, &quiet, "quiet", "q", false, "Print as little stuff as possible") flags.DurationVarP(flagSet, &fs.Config.ModifyWindow, "modify-window", "", fs.Config.ModifyWindow, "Max time diff to be considered the same") flags.IntVarP(flagSet, &fs.Config.Checkers, "checkers", "", fs.Config.Checkers, "Number of checkers to run in parallel.") flags.IntVarP(flagSet, &fs.Config.Transfers, "transfers", "", fs.Config.Transfers, "Number of file transfers to run in parallel.") flags.StringVarP(flagSet, &config.ConfigPath, "config", "", config.ConfigPath, "Config file.") flags.StringVarP(flagSet, &config.CacheDir, "cache-dir", "", config.CacheDir, "Directory rclone will use for caching.") flags.BoolVarP(flagSet, &fs.Config.CheckSum, "checksum", "c", fs.Config.CheckSum, "Skip based on checksum (if available) & size, not mod-time & size") flags.BoolVarP(flagSet, &fs.Config.SizeOnly, "size-only", "", fs.Config.SizeOnly, "Skip based on size only, not mod-time or checksum") flags.BoolVarP(flagSet, &fs.Config.IgnoreTimes, "ignore-times", "I", fs.Config.IgnoreTimes, "Don't skip files that match size and time - transfer all files") flags.BoolVarP(flagSet, &fs.Config.IgnoreExisting, "ignore-existing", "", fs.Config.IgnoreExisting, "Skip all files that exist on destination") flags.BoolVarP(flagSet, &fs.Config.IgnoreErrors, "ignore-errors", "", fs.Config.IgnoreErrors, "delete even if there are I/O errors") flags.BoolVarP(flagSet, &fs.Config.DryRun, "dry-run", "n", fs.Config.DryRun, "Do a trial run with no permanent changes") flags.BoolVarP(flagSet, &fs.Config.Interactive, "interactive", "i", fs.Config.Interactive, "Enable interactive mode") flags.DurationVarP(flagSet, &fs.Config.ConnectTimeout, "contimeout", "", fs.Config.ConnectTimeout, "Connect timeout") flags.DurationVarP(flagSet, &fs.Config.Timeout, "timeout", "", fs.Config.Timeout, "IO idle timeout") flags.DurationVarP(flagSet, &fs.Config.ExpectContinueTimeout, "expect-continue-timeout", "", fs.Config.ExpectContinueTimeout, "Timeout when using expect / 100-continue in HTTP") flags.BoolVarP(flagSet, &dumpHeaders, "dump-headers", "", false, "Dump HTTP headers - may contain sensitive info") flags.BoolVarP(flagSet, &dumpBodies, "dump-bodies", "", false, "Dump HTTP headers and bodies - may contain sensitive info") flags.BoolVarP(flagSet, &fs.Config.InsecureSkipVerify, "no-check-certificate", "", fs.Config.InsecureSkipVerify, "Do not verify the server SSL certificate. 
Insecure.") flags.BoolVarP(flagSet, &fs.Config.AskPassword, "ask-password", "", fs.Config.AskPassword, "Allow prompt for password for encrypted configuration.") flags.FVarP(flagSet, &fs.Config.PasswordCommand, "password-command", "", "Command for supplying password for encrypted configuration.") flags.BoolVarP(flagSet, &deleteBefore, "delete-before", "", false, "When synchronizing, delete files on destination before transferring") flags.BoolVarP(flagSet, &deleteDuring, "delete-during", "", false, "When synchronizing, delete files during transfer") flags.BoolVarP(flagSet, &deleteAfter, "delete-after", "", false, "When synchronizing, delete files on destination after transferring (default)") flags.Int64VarP(flagSet, &fs.Config.MaxDelete, "max-delete", "", -1, "When synchronizing, limit the number of deletes") flags.BoolVarP(flagSet, &fs.Config.TrackRenames, "track-renames", "", fs.Config.TrackRenames, "When synchronizing, track file renames and do a server side move if possible") flags.StringVarP(flagSet, &fs.Config.TrackRenamesStrategy, "track-renames-strategy", "", fs.Config.TrackRenamesStrategy, "Strategies to use when synchronizing using track-renames hash|modtime|leaf") flags.IntVarP(flagSet, &fs.Config.LowLevelRetries, "low-level-retries", "", fs.Config.LowLevelRetries, "Number of low level retries to do.") flags.BoolVarP(flagSet, &fs.Config.UpdateOlder, "update", "u", fs.Config.UpdateOlder, "Skip files that are newer on the destination.") flags.BoolVarP(flagSet, &fs.Config.UseServerModTime, "use-server-modtime", "", fs.Config.UseServerModTime, "Use server modified time instead of object metadata") flags.BoolVarP(flagSet, &fs.Config.NoGzip, "no-gzip-encoding", "", fs.Config.NoGzip, "Don't set Accept-Encoding: gzip.") flags.IntVarP(flagSet, &fs.Config.MaxDepth, "max-depth", "", fs.Config.MaxDepth, "If set limits the recursion depth to this.") flags.BoolVarP(flagSet, &fs.Config.IgnoreSize, "ignore-size", "", false, "Ignore size when skipping use mod-time or checksum.") flags.BoolVarP(flagSet, &fs.Config.IgnoreChecksum, "ignore-checksum", "", fs.Config.IgnoreChecksum, "Skip post copy check of checksums.") flags.BoolVarP(flagSet, &fs.Config.IgnoreCaseSync, "ignore-case-sync", "", fs.Config.IgnoreCaseSync, "Ignore case when synchronizing") flags.BoolVarP(flagSet, &fs.Config.NoTraverse, "no-traverse", "", fs.Config.NoTraverse, "Don't traverse destination file system on copy.") flags.BoolVarP(flagSet, &fs.Config.CheckFirst, "check-first", "", fs.Config.CheckFirst, "Do all the checks before starting transfers.") flags.BoolVarP(flagSet, &fs.Config.NoCheckDest, "no-check-dest", "", fs.Config.NoCheckDest, "Don't check the destination, copy regardless.") flags.BoolVarP(flagSet, &fs.Config.NoUnicodeNormalization, "no-unicode-normalization", "", fs.Config.NoUnicodeNormalization, "Don't normalize unicode characters in filenames.") flags.BoolVarP(flagSet, &fs.Config.NoUpdateModTime, "no-update-modtime", "", fs.Config.NoUpdateModTime, "Don't update destination mod-time if files identical.") flags.StringVarP(flagSet, &fs.Config.CompareDest, "compare-dest", "", fs.Config.CompareDest, "Include additional server-side path during comparison.") flags.StringVarP(flagSet, &fs.Config.CopyDest, "copy-dest", "", fs.Config.CopyDest, "Implies --compare-dest but also copies files from path into destination.") flags.StringVarP(flagSet, &fs.Config.BackupDir, "backup-dir", "", fs.Config.BackupDir, "Make backups into hierarchy based in DIR.") flags.StringVarP(flagSet, &fs.Config.Suffix, "suffix", "", fs.Config.Suffix, 
"Suffix to add to changed files.") flags.BoolVarP(flagSet, &fs.Config.SuffixKeepExtension, "suffix-keep-extension", "", fs.Config.SuffixKeepExtension, "Preserve the extension when using --suffix.") flags.BoolVarP(flagSet, &fs.Config.UseListR, "fast-list", "", fs.Config.UseListR, "Use recursive list if available. Uses more memory but fewer transactions.") flags.Float64VarP(flagSet, &fs.Config.TPSLimit, "tpslimit", "", fs.Config.TPSLimit, "Limit HTTP transactions per second to this.") flags.IntVarP(flagSet, &fs.Config.TPSLimitBurst, "tpslimit-burst", "", fs.Config.TPSLimitBurst, "Max burst of transactions for --tpslimit.") flags.StringVarP(flagSet, &bindAddr, "bind", "", "", "Local address to bind to for outgoing connections, IPv4, IPv6 or name.") flags.StringVarP(flagSet, &disableFeatures, "disable", "", "", "Disable a comma separated list of features. Use help to see a list.") flags.StringVarP(flagSet, &fs.Config.UserAgent, "user-agent", "", fs.Config.UserAgent, "Set the user-agent to a specified string. The default is rclone/ version") flags.BoolVarP(flagSet, &fs.Config.Immutable, "immutable", "", fs.Config.Immutable, "Do not modify files. Fail if existing files have been modified.") flags.BoolVarP(flagSet, &fs.Config.AutoConfirm, "auto-confirm", "", fs.Config.AutoConfirm, "If enabled, do not request console confirmation.") flags.IntVarP(flagSet, &fs.Config.StatsFileNameLength, "stats-file-name-length", "", fs.Config.StatsFileNameLength, "Max file name length in stats. 0 for no limit") flags.FVarP(flagSet, &fs.Config.LogLevel, "log-level", "", "Log level DEBUG|INFO|NOTICE|ERROR") flags.FVarP(flagSet, &fs.Config.StatsLogLevel, "stats-log-level", "", "Log level to show --stats output DEBUG|INFO|NOTICE|ERROR") flags.FVarP(flagSet, &fs.Config.BwLimit, "bwlimit", "", "Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.") flags.FVarP(flagSet, &fs.Config.BwLimitFile, "bwlimit-file", "", "Bandwidth limit per file in kBytes/s, or use suffix b|k|M|G or a full timetable.") flags.FVarP(flagSet, &fs.Config.BufferSize, "buffer-size", "", "In memory buffer size when reading files for each --transfer.") flags.FVarP(flagSet, &fs.Config.StreamingUploadCutoff, "streaming-upload-cutoff", "", "Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends.") flags.FVarP(flagSet, &fs.Config.Dump, "dump", "", "List of items to dump from: "+fs.DumpFlagsList) flags.FVarP(flagSet, &fs.Config.MaxTransfer, "max-transfer", "", "Maximum size of data to transfer.") flags.DurationVarP(flagSet, &fs.Config.MaxDuration, "max-duration", "", 0, "Maximum duration rclone will transfer data for.") flags.FVarP(flagSet, &fs.Config.CutoffMode, "cutoff-mode", "", "Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS") flags.IntVarP(flagSet, &fs.Config.MaxBacklog, "max-backlog", "", fs.Config.MaxBacklog, "Maximum number of objects in sync or check backlog.") flags.IntVarP(flagSet, &fs.Config.MaxStatsGroups, "max-stats-groups", "", fs.Config.MaxStatsGroups, "Maximum number of stats groups to keep in memory. 
On max oldest is discarded.") flags.BoolVarP(flagSet, &fs.Config.StatsOneLine, "stats-one-line", "", fs.Config.StatsOneLine, "Make the stats fit on one line.") flags.BoolVarP(flagSet, &fs.Config.StatsOneLineDate, "stats-one-line-date", "", fs.Config.StatsOneLineDate, "Enables --stats-one-line and add current date/time prefix.") flags.StringVarP(flagSet, &fs.Config.StatsOneLineDateFormat, "stats-one-line-date-format", "", fs.Config.StatsOneLineDateFormat, "Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes (\"). See https://golang.org/pkg/time/#Time.Format") flags.BoolVarP(flagSet, &fs.Config.ErrorOnNoTransfer, "error-on-no-transfer", "", fs.Config.ErrorOnNoTransfer, "Sets exit code 9 if no files are transferred, useful in scripts") flags.BoolVarP(flagSet, &fs.Config.Progress, "progress", "P", fs.Config.Progress, "Show progress during transfer.") flags.BoolVarP(flagSet, &fs.Config.Cookie, "use-cookies", "", fs.Config.Cookie, "Enable session cookiejar.") flags.BoolVarP(flagSet, &fs.Config.UseMmap, "use-mmap", "", fs.Config.UseMmap, "Use mmap allocator (see docs).") flags.StringVarP(flagSet, &fs.Config.CaCert, "ca-cert", "", fs.Config.CaCert, "CA certificate used to verify servers") flags.StringVarP(flagSet, &fs.Config.ClientCert, "client-cert", "", fs.Config.ClientCert, "Client SSL certificate (PEM) for mutual TLS auth") flags.StringVarP(flagSet, &fs.Config.ClientKey, "client-key", "", fs.Config.ClientKey, "Client SSL private key (PEM) for mutual TLS auth") flags.FVarP(flagSet, &fs.Config.MultiThreadCutoff, "multi-thread-cutoff", "", "Use multi-thread downloads for files above this size.") flags.IntVarP(flagSet, &fs.Config.MultiThreadStreams, "multi-thread-streams", "", fs.Config.MultiThreadStreams, "Max number of streams to use for multi-thread downloads.") flags.BoolVarP(flagSet, &fs.Config.UseJSONLog, "use-json-log", "", fs.Config.UseJSONLog, "Use json log format.") flags.StringVarP(flagSet, &fs.Config.OrderBy, "order-by", "", fs.Config.OrderBy, "Instructions on how to order the transfers, eg 'size,descending'") flags.StringArrayVarP(flagSet, &uploadHeaders, "header-upload", "", nil, "Set HTTP header for upload transactions") flags.StringArrayVarP(flagSet, &downloadHeaders, "header-download", "", nil, "Set HTTP header for download transactions") flags.StringArrayVarP(flagSet, &headers, "header", "", nil, "Set HTTP header for all transactions") flags.BoolVarP(flagSet, &fs.Config.RefreshTimes, "refresh-times", "", fs.Config.RefreshTimes, "Refresh the modtime of remote files.") } // ParseHeaders converts the strings passed in via the header flags into HTTPOptions func ParseHeaders(headers []string) []*fs.HTTPOption { opts := []*fs.HTTPOption{} for _, header := range headers { parts := strings.SplitN(header, ":", 2) if len(parts) == 1 { log.Fatalf("Failed to parse '%s' as an HTTP header. 
Expecting a string like: 'Content-Encoding: gzip'", header) } option := &fs.HTTPOption{ Key: strings.TrimSpace(parts[0]), Value: strings.TrimSpace(parts[1]), } opts = append(opts, option) } return opts } // SetFlags converts any flags into config which weren't straightforward func SetFlags() { if verbose >= 2 { fs.Config.LogLevel = fs.LogLevelDebug } else if verbose >= 1 { fs.Config.LogLevel = fs.LogLevelInfo } if quiet { if verbose > 0 { log.Fatalf("Can't set -v and -q") } fs.Config.LogLevel = fs.LogLevelError } logLevelFlag := pflag.Lookup("log-level") if logLevelFlag != nil && logLevelFlag.Changed { if verbose > 0 { log.Fatalf("Can't set -v and --log-level") } if quiet { log.Fatalf("Can't set -q and --log-level") } } if fs.Config.UseJSONLog { logrus.AddHook(fsLog.NewCallerHook()) logrus.SetFormatter(&logrus.JSONFormatter{ TimestampFormat: "2006-01-02T15:04:05.999999-07:00", }) logrus.SetLevel(logrus.DebugLevel) switch fs.Config.LogLevel { case fs.LogLevelEmergency, fs.LogLevelAlert: logrus.SetLevel(logrus.PanicLevel) case fs.LogLevelCritical: logrus.SetLevel(logrus.FatalLevel) case fs.LogLevelError: logrus.SetLevel(logrus.ErrorLevel) case fs.LogLevelWarning, fs.LogLevelNotice: logrus.SetLevel(logrus.WarnLevel) case fs.LogLevelInfo: logrus.SetLevel(logrus.InfoLevel) case fs.LogLevelDebug: logrus.SetLevel(logrus.DebugLevel) } } if dumpHeaders { fs.Config.Dump |= fs.DumpHeaders fs.Logf(nil, "--dump-headers is obsolete - please use --dump headers instead") } if dumpBodies { fs.Config.Dump |= fs.DumpBodies fs.Logf(nil, "--dump-bodies is obsolete - please use --dump bodies instead") } switch { case deleteBefore && (deleteDuring || deleteAfter), deleteDuring && deleteAfter: log.Fatalf(`Only one of --delete-before, --delete-during or --delete-after can be used.`) case deleteBefore: fs.Config.DeleteMode = fs.DeleteModeBefore case deleteDuring: fs.Config.DeleteMode = fs.DeleteModeDuring case deleteAfter: fs.Config.DeleteMode = fs.DeleteModeAfter default: fs.Config.DeleteMode = fs.DeleteModeDefault } if fs.Config.CompareDest != "" && fs.Config.CopyDest != "" { log.Fatalf(`Can't use --compare-dest with --copy-dest.`) } switch { case len(fs.Config.StatsOneLineDateFormat) > 0: fs.Config.StatsOneLineDate = true fs.Config.StatsOneLine = true case fs.Config.StatsOneLineDate: fs.Config.StatsOneLineDateFormat = "2006/01/02 15:04:05 - " fs.Config.StatsOneLine = true } if bindAddr != "" { addrs, err := net.LookupIP(bindAddr) if err != nil { log.Fatalf("--bind: Failed to parse %q as IP address: %v", bindAddr, err) } if len(addrs) != 1 { log.Fatalf("--bind: Expecting 1 IP address for %q but got %d", bindAddr, len(addrs)) } fs.Config.BindAddr = addrs[0] } if disableFeatures != "" { if disableFeatures == "help" { log.Fatalf("Possible backend features are: %s\n", strings.Join(new(fs.Features).List(), ", ")) } fs.Config.DisableFeatures = strings.Split(disableFeatures, ",") } if len(uploadHeaders) != 0 { fs.Config.UploadHeaders = ParseHeaders(uploadHeaders) } if len(downloadHeaders) != 0 { fs.Config.DownloadHeaders = ParseHeaders(downloadHeaders) } if len(headers) != 0 { fs.Config.Headers = ParseHeaders(headers) } // Make the config file absolute configPath, err := filepath.Abs(config.ConfigPath) if err == nil { config.ConfigPath = configPath } // Set whether multi-thread-streams was set multiThreadStreamsFlag := pflag.Lookup("multi-thread-streams") fs.Config.MultiThreadSet = multiThreadStreamsFlag != nil && multiThreadStreamsFlag.Changed }
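// Editor's note: a minimal usage sketch, not part of the original source.
// It shows how ParseHeaders above turns the raw strings collected by the
// --header, --header-upload and --header-download flags into fs.HTTPOption
// values. The header strings used here are hypothetical examples.
//
//	opts := configflags.ParseHeaders([]string{
//		"Content-Encoding: gzip",
//		"X-Example: potato",
//	})
//	for _, opt := range opts {
//		fmt.Printf("%s: %s\n", opt.Key, opt.Value)
//	}
//	// prints:
//	// Content-Encoding: gzip
//	// X-Example: potato
//
// Note that ParseHeaders calls log.Fatalf on a string with no ":" separator,
// so a malformed header terminates the process rather than returning an error.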
rclone-1.53.3/fs/config/configmap/000077500000000000000000000000001375552240400167145ustar00rootroot00000000000000rclone-1.53.3/fs/config/configmap/configmap.go000066400000000000000000000037671375552240400212230ustar00rootroot00000000000000// Package configmap provides an abstraction for reading and writing config package configmap // Getter provides an interface to get config items type Getter interface { // Get should get an item with the key passed in and return // the value. If the item is found then it should return true, // otherwise false. Get(key string) (value string, ok bool) } // Setter provides an interface to set config items type Setter interface { // Set should set an item into persistent config store. Set(key, value string) } // Mapper provides an interface to read and write config type Mapper interface { Getter Setter } // Map provides a wrapper around multiple Setter and // Getter interfaces. type Map struct { setters []Setter getters []Getter } // New returns an empty Map func New() *Map { return &Map{} } // AddGetter appends a getter onto the end of the getters func (c *Map) AddGetter(getter Getter) *Map { c.getters = append(c.getters, getter) return c } // AddGetters appends multiple getters onto the end of the getters func (c *Map) AddGetters(getters ...Getter) *Map { c.getters = append(c.getters, getters...) return c } // AddSetter appends a setter onto the end of the setters func (c *Map) AddSetter(setter Setter) *Map { c.setters = append(c.setters, setter) return c } // Get gets an item with the key passed in and returns the value from // the first getter. If the item is found then it returns true, // otherwise false. func (c *Map) Get(key string) (value string, ok bool) { for _, do := range c.getters { value, ok = do.Get(key) if ok { return value, ok } } return "", false } // Set sets an item into all the stored setters.
func (c *Map) Set(key, value string) { for _, do := range c.setters { do.Set(key, value) } } // Simple is a simple Mapper for testing type Simple map[string]string // Get the value func (c Simple) Get(key string) (value string, ok bool) { value, ok = c[key] return value, ok } // Set the value func (c Simple) Set(key, value string) { c[key] = value } rclone-1.53.3/fs/config/configmap/configmap_test.go000066400000000000000000000027061375552240400222520ustar00rootroot00000000000000package configmap import ( "testing" "github.com/stretchr/testify/assert" ) var ( _ Mapper = Simple(nil) _ Getter = Simple(nil) _ Setter = Simple(nil) ) func TestConfigMapGet(t *testing.T) { m := New() value, found := m.Get("config1") assert.Equal(t, "", value) assert.Equal(t, false, found) value, found = m.Get("config2") assert.Equal(t, "", value) assert.Equal(t, false, found) m1 := Simple{ "config1": "one", } m.AddGetter(m1) value, found = m.Get("config1") assert.Equal(t, "one", value) assert.Equal(t, true, found) value, found = m.Get("config2") assert.Equal(t, "", value) assert.Equal(t, false, found) m2 := Simple{ "config1": "one2", "config2": "two2", } m.AddGetter(m2) value, found = m.Get("config1") assert.Equal(t, "one", value) assert.Equal(t, true, found) value, found = m.Get("config2") assert.Equal(t, "two2", value) assert.Equal(t, true, found) } func TestConfigMapSet(t *testing.T) { m := New() m1 := Simple{ "config1": "one", } m2 := Simple{ "config1": "one2", "config2": "two2", } m.AddSetter(m1).AddSetter(m2) m.Set("config2", "potato") assert.Equal(t, Simple{ "config1": "one", "config2": "potato", }, m1) assert.Equal(t, Simple{ "config1": "one2", "config2": "potato", }, m2) m.Set("config1", "beetroot") assert.Equal(t, Simple{ "config1": "beetroot", "config2": "potato", }, m1) assert.Equal(t, Simple{ "config1": "beetroot", "config2": "potato", }, m2) } rclone-1.53.3/fs/config/configstruct/000077500000000000000000000000001375552240400174635ustar00rootroot00000000000000rclone-1.53.3/fs/config/configstruct/configstruct.go000066400000000000000000000072561375552240400225360ustar00rootroot00000000000000// Package configstruct parses unstructured maps into structures package configstruct import ( "fmt" "reflect" "regexp" "strings" "github.com/pkg/errors" "github.com/rclone/rclone/fs/config/configmap" ) var matchUpper = regexp.MustCompile("([A-Z]+)") // camelToSnake converts CamelCase to snake_case func camelToSnake(in string) string { out := matchUpper.ReplaceAllString(in, "_$1") out = strings.ToLower(out) out = strings.Trim(out, "_") return out } // StringToInterface turns in into an interface{} the same type as def func StringToInterface(def interface{}, in string) (newValue interface{}, err error) { typ := reflect.TypeOf(def) switch typ.Kind() { case reflect.String: // Pass strings unmodified return in, nil } // Otherwise parse with Sscanln // // This means any types we use here must implement fmt.Scanner o := reflect.New(typ) n, err := fmt.Sscanln(in, o.Interface()) if err != nil { return newValue, errors.Wrapf(err, "parsing %q as %T failed", in, def) } if n != 1 { return newValue, errors.New("no items parsed") } return o.Elem().Interface(), nil } // Item describes a single entry in the options structure type Item struct { Name string // snake_case Field string // CamelCase Num int // number of the field in the struct Value interface{} } // Items parses the opt struct and returns a slice of Item objects. // // opt must be a pointer to a struct. The struct should have entirely // public fields. 
// // The config_name is looked up in a struct tag called "config" or if // not found is the field name converted from CamelCase to snake_case. func Items(opt interface{}) (items []Item, err error) { def := reflect.ValueOf(opt) if def.Kind() != reflect.Ptr { return nil, errors.New("argument must be a pointer") } def = def.Elem() // indirect the pointer if def.Kind() != reflect.Struct { return nil, errors.New("argument must be a pointer to a struct") } defType := def.Type() for i := 0; i < def.NumField(); i++ { field := defType.Field(i) fieldName := field.Name configName, ok := field.Tag.Lookup("config") if !ok { configName = camelToSnake(fieldName) } defaultItem := Item{ Name: configName, Field: fieldName, Num: i, Value: def.Field(i).Interface(), } items = append(items, defaultItem) } return items, nil } // Set interprets the field names in defaults and looks up config // values in the config passed in. Any values found in config will be // set in the opt structure. // // opt must be a pointer to a struct. The struct should have entirely // public fields. The field names are converted from CamelCase to // snake_case and looked up in the config supplied or a // `config:"field_name"` is looked up. // // If items are found then they are converted from string to native // types and set in opt. // // All the field types in the struct must implement fmt.Scanner. func Set(config configmap.Getter, opt interface{}) (err error) { defaultItems, err := Items(opt) if err != nil { return err } defStruct := reflect.ValueOf(opt).Elem() for _, defaultItem := range defaultItems { newValue := defaultItem.Value if configValue, ok := config.Get(defaultItem.Name); ok { var newNewValue interface{} newNewValue, err = StringToInterface(newValue, configValue) if err != nil { // Mask errors if setting an empty string as // it isn't valid for all types. This makes // empty string be the equivalent of unset. 
if configValue != "" { return errors.Wrapf(err, "couldn't parse config item %q = %q as %T", defaultItem.Name, configValue, defaultItem.Value) } } else { newValue = newNewValue } } defStruct.Field(defaultItem.Num).Set(reflect.ValueOf(newValue)) } return nil } rclone-1.53.3/fs/config/configstruct/configstruct_test.go000066400000000000000000000057101375552240400235660ustar00rootroot00000000000000package configstruct_test import ( "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/configstruct" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) type conf struct { A string B string } type conf2 struct { PotatoPie string `config:"spud_pie"` BeanStew bool RaisinRoll int SausageOnStick int64 ForbiddenFruit uint CookingTime fs.Duration TotalWeight fs.SizeSuffix } func TestItemsError(t *testing.T) { _, err := configstruct.Items(nil) assert.EqualError(t, err, "argument must be a pointer") _, err = configstruct.Items(new(int)) assert.EqualError(t, err, "argument must be a pointer to a struct") } func TestItems(t *testing.T) { in := &conf2{ PotatoPie: "yum", BeanStew: true, RaisinRoll: 42, SausageOnStick: 101, ForbiddenFruit: 6, CookingTime: fs.Duration(42 * time.Second), TotalWeight: fs.SizeSuffix(17 << 20), } got, err := configstruct.Items(in) require.NoError(t, err) want := []configstruct.Item{ {Name: "spud_pie", Field: "PotatoPie", Num: 0, Value: string("yum")}, {Name: "bean_stew", Field: "BeanStew", Num: 1, Value: true}, {Name: "raisin_roll", Field: "RaisinRoll", Num: 2, Value: int(42)}, {Name: "sausage_on_stick", Field: "SausageOnStick", Num: 3, Value: int64(101)}, {Name: "forbidden_fruit", Field: "ForbiddenFruit", Num: 4, Value: uint(6)}, {Name: "cooking_time", Field: "CookingTime", Num: 5, Value: fs.Duration(42 * time.Second)}, {Name: "total_weight", Field: "TotalWeight", Num: 6, Value: fs.SizeSuffix(17 << 20)}, } assert.Equal(t, want, got) } func TestSetBasics(t *testing.T) { c := &conf{A: "one", B: "two"} err := configstruct.Set(configMap{}, c) require.NoError(t, err) assert.Equal(t, &conf{A: "one", B: "two"}, c) } // a simple configmap.Getter for testing type configMap map[string]string // Get the value func (c configMap) Get(key string) (value string, ok bool) { value, ok = c[key] return value, ok } func TestSetMore(t *testing.T) { c := &conf{A: "one", B: "two"} m := configMap{ "a": "ONE", } err := configstruct.Set(m, c) require.NoError(t, err) assert.Equal(t, &conf{A: "ONE", B: "two"}, c) } func TestSetFull(t *testing.T) { in := &conf2{ PotatoPie: "yum", BeanStew: true, RaisinRoll: 42, SausageOnStick: 101, ForbiddenFruit: 6, CookingTime: fs.Duration(42 * time.Second), TotalWeight: fs.SizeSuffix(17 << 20), } m := configMap{ "spud_pie": "YUM", "bean_stew": "FALSE", "raisin_roll": "43 ", "sausage_on_stick": " 102 ", "forbidden_fruit": "0x7", "cooking_time": "43s", "total_weight": "18M", } want := &conf2{ PotatoPie: "YUM", BeanStew: false, RaisinRoll: 43, SausageOnStick: 102, ForbiddenFruit: 7, CookingTime: fs.Duration(43 * time.Second), TotalWeight: fs.SizeSuffix(18 << 20), } err := configstruct.Set(m, in) require.NoError(t, err) assert.Equal(t, want, in) } rclone-1.53.3/fs/config/configstruct/internal_test.go000066400000000000000000000030651375552240400226710ustar00rootroot00000000000000package configstruct import ( "fmt" "testing" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestCamelToSnake(t *testing.T) { for _, test := range []struct { in string want string }{ {"", ""}, {"Type", "type"}, 
{"AuthVersion", "auth_version"}, {"AccessKeyID", "access_key_id"}, } { got := camelToSnake(test.in) assert.Equal(t, test.want, got, test.in) } } func TestStringToInterface(t *testing.T) { item := struct{ A int }{2} for _, test := range []struct { in string def interface{} want interface{} err string }{ {"", string(""), "", ""}, {" string ", string(""), " string ", ""}, {"123", int(0), int(123), ""}, {"0x123", int(0), int(0x123), ""}, {" 0x123 ", int(0), int(0x123), ""}, {"-123", int(0), int(-123), ""}, {"0", false, false, ""}, {"1", false, true, ""}, {"FALSE", false, false, ""}, {"true", false, true, ""}, {"123", uint(0), uint(123), ""}, {"123", int64(0), int64(123), ""}, {"123x", int64(0), nil, "parsing \"123x\" as int64 failed: expected newline"}, {"truth", false, nil, "parsing \"truth\" as bool failed: syntax error scanning boolean"}, {"struct", item, nil, "parsing \"struct\" as struct { A int } failed: can't scan type: *struct { A int }"}, } { what := fmt.Sprintf("parse %q as %T", test.in, test.def) got, err := StringToInterface(test.def, test.in) if test.err == "" { require.NoError(t, err, what) assert.Equal(t, test.want, got, what) } else { assert.Nil(t, got) assert.EqualError(t, err, test.err, what) } } } rclone-1.53.3/fs/config/flags/000077500000000000000000000000001375552240400160455ustar00rootroot00000000000000rclone-1.53.3/fs/config/flags/flags.go000066400000000000000000000154311375552240400174740ustar00rootroot00000000000000// Package flags contains enhanced versions of spf13/pflag flag // routines which will read from the environment also. package flags import ( "log" "os" "time" "github.com/rclone/rclone/fs" "github.com/spf13/pflag" ) // setDefaultFromEnv constructs a name from the flag passed in and // sets the default from the environment if possible. 
func setDefaultFromEnv(flags *pflag.FlagSet, name string) { key := fs.OptionToEnv(name) newValue, found := os.LookupEnv(key) if found { flag := flags.Lookup(name) if flag == nil { log.Fatalf("Couldn't find flag %q", name) } err := flag.Value.Set(newValue) if err != nil { log.Fatalf("Invalid value for environment variable %q: %v", key, err) } fs.Debugf(nil, "Set default for %q from %q to %q (%v)", name, key, newValue, flag.Value) flag.DefValue = newValue } } // StringP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.StringP func StringP(name, shorthand string, value string, usage string) (out *string) { out = pflag.StringP(name, shorthand, value, usage) setDefaultFromEnv(pflag.CommandLine, name) return out } // StringVarP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.StringVarP func StringVarP(flags *pflag.FlagSet, p *string, name, shorthand string, value string, usage string) { flags.StringVarP(p, name, shorthand, value, usage) setDefaultFromEnv(flags, name) } // BoolP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.BoolP func BoolP(name, shorthand string, value bool, usage string) (out *bool) { out = pflag.BoolP(name, shorthand, value, usage) setDefaultFromEnv(pflag.CommandLine, name) return out } // BoolVarP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.BoolVarP func BoolVarP(flags *pflag.FlagSet, p *bool, name, shorthand string, value bool, usage string) { flags.BoolVarP(p, name, shorthand, value, usage) setDefaultFromEnv(flags, name) } // IntP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.IntP func IntP(name, shorthand string, value int, usage string) (out *int) { out = pflag.IntP(name, shorthand, value, usage) setDefaultFromEnv(pflag.CommandLine, name) return out } // Int64P defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.Int64P func Int64P(name, shorthand string, value int64, usage string) (out *int64) { out = pflag.Int64P(name, shorthand, value, usage) setDefaultFromEnv(pflag.CommandLine, name) return out } // Int64VarP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.Int64VarP func Int64VarP(flags *pflag.FlagSet, p *int64, name, shorthand string, value int64, usage string) { flags.Int64VarP(p, name, shorthand, value, usage) setDefaultFromEnv(flags, name) } // IntVarP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.IntVarP func IntVarP(flags *pflag.FlagSet, p *int, name, shorthand string, value int, usage string) { flags.IntVarP(p, name, shorthand, value, usage) setDefaultFromEnv(flags, name) } // Uint32VarP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.Uint32VarP func Uint32VarP(flags *pflag.FlagSet, p *uint32, name, shorthand string, value uint32, usage string) { flags.Uint32VarP(p, name, shorthand, value, usage) setDefaultFromEnv(flags, name) } // Float64P defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.Float64P func Float64P(name, shorthand string, value float64, usage string) (out *float64) { out = pflag.Float64P(name, shorthand, value, usage) setDefaultFromEnv(pflag.CommandLine, name) return out } //
Float64VarP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.Float64VarP func Float64VarP(flags *pflag.FlagSet, p *float64, name, shorthand string, value float64, usage string) { flags.Float64VarP(p, name, shorthand, value, usage) setDefaultFromEnv(flags, name) } // DurationP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.DurationP func DurationP(name, shorthand string, value time.Duration, usage string) (out *time.Duration) { out = pflag.DurationP(name, shorthand, value, usage) setDefaultFromEnv(pflag.CommandLine, name) return out } // DurationVarP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.DurationVarP func DurationVarP(flags *pflag.FlagSet, p *time.Duration, name, shorthand string, value time.Duration, usage string) { flags.DurationVarP(p, name, shorthand, value, usage) setDefaultFromEnv(flags, name) } // VarP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.VarP func VarP(value pflag.Value, name, shorthand, usage string) { pflag.VarP(value, name, shorthand, usage) setDefaultFromEnv(pflag.CommandLine, name) } // FVarP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.VarP func FVarP(flags *pflag.FlagSet, value pflag.Value, name, shorthand, usage string) { flags.VarP(value, name, shorthand, usage) setDefaultFromEnv(flags, name) } // StringArrayP defines a flag which can be overridden by an environment variable // // It sets one value only - command line flags can be used to set more. // // It is a thin wrapper around pflag.StringArrayP func StringArrayP(name, shorthand string, value []string, usage string) (out *[]string) { out = pflag.StringArrayP(name, shorthand, value, usage) setDefaultFromEnv(pflag.CommandLine, name) return out } // StringArrayVarP defines a flag which can be overridden by an environment variable // // It sets one value only - command line flags can be used to set more. 
// // It is a thin wrapper around pflag.StringArrayVarP func StringArrayVarP(flags *pflag.FlagSet, p *[]string, name, shorthand string, value []string, usage string) { flags.StringArrayVarP(p, name, shorthand, value, usage) setDefaultFromEnv(flags, name) } // CountP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.CountP func CountP(name, shorthand string, usage string) (out *int) { out = pflag.CountP(name, shorthand, usage) setDefaultFromEnv(pflag.CommandLine, name) return out } // CountVarP defines a flag which can be overridden by an environment variable // // It is a thin wrapper around pflag.CountVarP func CountVarP(flags *pflag.FlagSet, p *int, name, shorthand string, usage string) { flags.CountVarP(p, name, shorthand, usage) setDefaultFromEnv(flags, name) } rclone-1.53.3/fs/config/obscure/000077500000000000000000000000001375552240400164135ustar00rootroot00000000000000rclone-1.53.3/fs/config/obscure/obscure.go000066400000000000000000000046121375552240400204070ustar00rootroot00000000000000// Package obscure contains the Obscure and Reveal commands package obscure import ( "crypto/aes" "crypto/cipher" "crypto/rand" "encoding/base64" "io" "log" "github.com/pkg/errors" ) // crypt internals var ( cryptKey = []byte{ 0x9c, 0x93, 0x5b, 0x48, 0x73, 0x0a, 0x55, 0x4d, 0x6b, 0xfd, 0x7c, 0x63, 0xc8, 0x86, 0xa9, 0x2b, 0xd3, 0x90, 0x19, 0x8e, 0xb8, 0x12, 0x8a, 0xfb, 0xf4, 0xde, 0x16, 0x2b, 0x8b, 0x95, 0xf6, 0x38, } cryptBlock cipher.Block cryptRand = rand.Reader ) // crypt transforms in to out using iv under AES-CTR. // // in and out may be the same buffer. // // Note encryption and decryption are the same operation func crypt(out, in, iv []byte) error { if cryptBlock == nil { var err error cryptBlock, err = aes.NewCipher(cryptKey) if err != nil { return err } } stream := cipher.NewCTR(cryptBlock, iv) stream.XORKeyStream(out, in) return nil } // Obscure a value // // This is done by encrypting with AES-CTR func Obscure(x string) (string, error) { plaintext := []byte(x) ciphertext := make([]byte, aes.BlockSize+len(plaintext)) iv := ciphertext[:aes.BlockSize] if _, err := io.ReadFull(cryptRand, iv); err != nil { return "", errors.Wrap(err, "failed to read iv") } if err := crypt(ciphertext[aes.BlockSize:], plaintext, iv); err != nil { return "", errors.Wrap(err, "encrypt failed") } return base64.RawURLEncoding.EncodeToString(ciphertext), nil } // MustObscure obscures a value, exiting with a fatal error if it failed func MustObscure(x string) string { out, err := Obscure(x) if err != nil { log.Fatalf("Obscure failed: %v", err) } return out } // Reveal an obscured value func Reveal(x string) (string, error) { ciphertext, err := base64.RawURLEncoding.DecodeString(x) if err != nil { return "", errors.Wrap(err, "base64 decode failed when revealing password - is it obscured?") } if len(ciphertext) < aes.BlockSize { return "", errors.New("input too short when revealing password - is it obscured?") } buf := ciphertext[aes.BlockSize:] iv := ciphertext[:aes.BlockSize] if err := crypt(buf, buf, iv); err != nil { return "", errors.Wrap(err, "decrypt failed when revealing password - is it obscured?") } return string(buf), nil } // MustReveal reveals an obscured value, exiting with a fatal error if it failed func MustReveal(x string) string { out, err := Reveal(x) if err != nil { log.Fatalf("Reveal failed: %v", err) } return out } 
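// Editor's note: a minimal round-trip sketch, not part of the original source.
// It demonstrates the Obscure/Reveal pair defined above. Obscure draws a fresh
// random IV from cryptRand, so the base64 output differs between runs, but
// Reveal always recovers the original plaintext. "potato" is just an example
// value.
//
//	obscured, err := obscure.Obscure("potato")
//	if err != nil {
//		log.Fatalf("obscure failed: %v", err)
//	}
//	plain, err := obscure.Reveal(obscured)
//	if err != nil {
//		log.Fatalf("reveal failed: %v", err)
//	}
//	fmt.Println(plain) // prints "potato"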
rclone-1.53.3/fs/config/obscure/obscure_test.go000066400000000000000000000027741375552240400214550ustar00rootroot00000000000000package obscure import ( "bytes" "crypto/rand" "testing" "github.com/stretchr/testify/assert" ) func TestObscure(t *testing.T) { for _, test := range []struct { in string want string iv string }{ {"", "YWFhYWFhYWFhYWFhYWFhYQ", "aaaaaaaaaaaaaaaa"}, {"potato", "YWFhYWFhYWFhYWFhYWFhYXMaGgIlEQ", "aaaaaaaaaaaaaaaa"}, {"potato", "YmJiYmJiYmJiYmJiYmJiYp3gcEWbAw", "bbbbbbbbbbbbbbbb"}, } { cryptRand = bytes.NewBufferString(test.iv) got, err := Obscure(test.in) cryptRand = rand.Reader assert.NoError(t, err) assert.Equal(t, test.want, got) recoveredIn, err := Reveal(got) assert.NoError(t, err) assert.Equal(t, test.in, recoveredIn, "not bidirectional") // Now the Must variants cryptRand = bytes.NewBufferString(test.iv) got = MustObscure(test.in) cryptRand = rand.Reader assert.Equal(t, test.want, got) recoveredIn = MustReveal(got) assert.Equal(t, test.in, recoveredIn, "not bidirectional") } } func TestReveal(t *testing.T) { for _, test := range []struct { in string want string iv string }{ {"YWFhYWFhYWFhYWFhYWFhYQ", "", "aaaaaaaaaaaaaaaa"}, {"YWFhYWFhYWFhYWFhYWFhYXMaGgIlEQ", "potato", "aaaaaaaaaaaaaaaa"}, {"YmJiYmJiYmJiYmJiYmJiYp3gcEWbAw", "potato", "bbbbbbbbbbbbbbbb"}, } { cryptRand = bytes.NewBufferString(test.iv) got, err := Reveal(test.in) assert.NoError(t, err) assert.Equal(t, test.want, got) // Now the Must variants cryptRand = bytes.NewBufferString(test.iv) got = MustReveal(test.in) assert.Equal(t, test.want, got) } } rclone-1.53.3/fs/config/rc.go000066400000000000000000000110301375552240400156770ustar00rootroot00000000000000package config import ( "context" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/rc" ) func init() { rc.Add(rc.Call{ Path: "config/dump", Fn: rcDump, Title: "Dumps the config file.", AuthRequired: true, Help: ` Returns a JSON object: - key: value Where keys are remote names and values are the config parameters. See the [config dump command](/commands/rclone_config_dump/) command for more information on the above. `, }) } // Return the config file dump func rcDump(ctx context.Context, in rc.Params) (out rc.Params, err error) { return DumpRcBlob(), nil } func init() { rc.Add(rc.Call{ Path: "config/get", Fn: rcGet, Title: "Get a remote in the config file.", AuthRequired: true, Help: ` Parameters: - name - name of remote to get See the [config dump command](/commands/rclone_config_dump/) command for more information on the above. `, }) } // Return the config file get func rcGet(ctx context.Context, in rc.Params) (out rc.Params, err error) { name, err := in.GetString("name") if err != nil { return nil, err } return DumpRcRemote(name), nil } func init() { rc.Add(rc.Call{ Path: "config/listremotes", Fn: rcListRemotes, Title: "Lists the remotes in the config file.", AuthRequired: true, Help: ` Returns - remotes - array of remote names See the [listremotes command](/commands/rclone_listremotes/) command for more information on the above. 
`, }) } // Return a list of remotes in the config file func rcListRemotes(ctx context.Context, in rc.Params) (out rc.Params, err error) { var remotes = []string{} for _, remote := range getConfigData().GetSectionList() { remotes = append(remotes, remote) } out = rc.Params{ "remotes": remotes, } return out, nil } func init() { rc.Add(rc.Call{ Path: "config/providers", Fn: rcProviders, Title: "Shows how providers are configured in the config file.", AuthRequired: true, Help: ` Returns a JSON object: - providers - array of objects See the [config providers command](/commands/rclone_config_providers/) command for more information on the above. `, }) } // Return the config file providers func rcProviders(ctx context.Context, in rc.Params) (out rc.Params, err error) { out = rc.Params{ "providers": fs.Registry, } return out, nil } func init() { for _, name := range []string{"create", "update", "password"} { name := name extraHelp := "" if name == "create" { extraHelp = "- type - type of the new remote\n" } if name == "create" || name == "update" { extraHelp += "- obscure - optional bool - forces obscuring of passwords\n" extraHelp += "- noObscure - optional bool - forces passwords not to be obscured\n" } rc.Add(rc.Call{ Path: "config/" + name, AuthRequired: true, Fn: func(ctx context.Context, in rc.Params) (rc.Params, error) { return rcConfig(ctx, in, name) }, Title: name + " the config for a remote.", Help: `This takes the following parameters - name - name of remote - parameters - a map of \{ "key": "value" \} pairs ` + extraHelp + ` See the [config ` + name + ` command](/commands/rclone_config_` + name + `/) command for more information on the above.`, }) } } // Manipulate the config file func rcConfig(ctx context.Context, in rc.Params, what string) (out rc.Params, err error) { name, err := in.GetString("name") if err != nil { return nil, err } parameters := rc.Params{} err = in.GetStruct("parameters", &parameters) if err != nil { return nil, err } doObscure, _ := in.GetBool("obscure") noObscure, _ := in.GetBool("noObscure") switch what { case "create": remoteType, err := in.GetString("type") if err != nil { return nil, err } return nil, CreateRemote(name, remoteType, parameters, doObscure, noObscure) case "update": return nil, UpdateRemote(name, parameters, doObscure, noObscure) case "password": return nil, PasswordRemote(name, parameters) } panic("unknown rcConfig type") } func init() { rc.Add(rc.Call{ Path: "config/delete", Fn: rcDelete, Title: "Delete a remote in the config file.", AuthRequired: true, Help: ` Parameters: - name - name of remote to delete See the [config delete command](/commands/rclone_config_delete/) command for more information on the above.
`, }) } // Return the config file delete func rcDelete(ctx context.Context, in rc.Params) (out rc.Params, err error) { name, err := in.GetString("name") if err != nil { return nil, err } DeleteRemote(name) return nil, nil } rclone-1.53.3/fs/config/rc_test.go000066400000000000000000000077111375552240400167510ustar00rootroot00000000000000package config_test import ( "context" "testing" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/rc" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) const testName = "configTestNameForRc" func TestRc(t *testing.T) { // Create the test remote call := rc.Calls.Get("config/create") assert.NotNil(t, call) in := rc.Params{ "name": testName, "type": "local", "parameters": rc.Params{ "test_key": "sausage", }, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.Nil(t, out) assert.Equal(t, "local", config.FileGet(testName, "type")) assert.Equal(t, "sausage", config.FileGet(testName, "test_key")) // The sub tests rely on the remote created above but they can // all be run independently t.Run("Dump", func(t *testing.T) { call := rc.Calls.Get("config/dump") assert.NotNil(t, call) in := rc.Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) require.NotNil(t, out[testName]) config := out[testName].(rc.Params) assert.Equal(t, "local", config["type"]) assert.Equal(t, "sausage", config["test_key"]) }) t.Run("Get", func(t *testing.T) { call := rc.Calls.Get("config/get") assert.NotNil(t, call) in := rc.Params{ "name": testName, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) assert.Equal(t, "local", out["type"]) assert.Equal(t, "sausage", out["test_key"]) }) t.Run("ListRemotes", func(t *testing.T) { call := rc.Calls.Get("config/listremotes") assert.NotNil(t, call) in := rc.Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) var remotes []string err = out.GetStruct("remotes", &remotes) require.NoError(t, err) assert.Contains(t, remotes, testName) }) t.Run("Update", func(t *testing.T) { call := rc.Calls.Get("config/update") assert.NotNil(t, call) in := rc.Params{ "name": testName, "parameters": rc.Params{ "test_key": "rutabaga", "test_key2": "cabbage", }, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Nil(t, out) assert.Equal(t, "local", config.FileGet(testName, "type")) assert.Equal(t, "rutabaga", config.FileGet(testName, "test_key")) assert.Equal(t, "cabbage", config.FileGet(testName, "test_key2")) }) t.Run("Password", func(t *testing.T) { call := rc.Calls.Get("config/password") assert.NotNil(t, call) pw2 := obscure.MustObscure("password") in := rc.Params{ "name": testName, "parameters": rc.Params{ "test_key": "rutabaga", "test_key2": pw2, // check we encode an already encoded password }, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Nil(t, out) assert.Equal(t, "local", config.FileGet(testName, "type")) assert.Equal(t, "rutabaga", obscure.MustReveal(config.FileGet(testName, "test_key"))) assert.Equal(t, pw2, obscure.MustReveal(config.FileGet(testName, "test_key2"))) }) // Delete the test remote call = rc.Calls.Get("config/delete") assert.NotNil(t, call) in = rc.Params{ "name": testName, } out, err = call.Fn(context.Background(), in) require.NoError(t, err) assert.Nil(t, out) 
assert.Equal(t, "", config.FileGet(testName, "type")) assert.Equal(t, "", config.FileGet(testName, "test_key")) } func TestRcProviders(t *testing.T) { call := rc.Calls.Get("config/providers") assert.NotNil(t, call) in := rc.Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) var registry []*fs.RegInfo err = out.GetStruct("providers", ®istry) require.NoError(t, err) foundLocal := false for _, provider := range registry { if provider.Name == "local" { foundLocal = true break } } assert.True(t, foundLocal, "didn't find local provider") } rclone-1.53.3/fs/config/testdata/000077500000000000000000000000001375552240400165625ustar00rootroot00000000000000rclone-1.53.3/fs/config/testdata/enc-invalid.conf000066400000000000000000000003341375552240400216220ustar00rootroot00000000000000# Encrypted rclone configuration File RCLONE_ENCRYPT_V0: b5Uk6mE3cUn5Wb8xiWYnVBAxXUirAaEG1PO/GIDiO9274AOæøå+Yj790BwJA4d2y7lNkmHt4nJwIsoueFvUYmm7RDyzER8IA3XOCrjzl3OUcczZqcplk5JfBdhxMZpt1aGYWUdle1IgO/kAFne6sLD6IuxPySEbrclone-1.53.3/fs/config/testdata/enc-short.conf000066400000000000000000000001131375552240400213260ustar00rootroot00000000000000# Encrypted rclone configuration File RCLONE_ENCRYPT_V0: b5Uk6mE3cUn5Wb8xirclone-1.53.3/fs/config/testdata/enc-too-new.conf000066400000000000000000000003261375552240400215650ustar00rootroot00000000000000# Encrypted rclone configuration File RCLONE_ENCRYPT_V1: b5Uk6mE3cUn5Wb8xiWYnVBAxXUirAaEG1PO/GIDiO9274AO+Yj790BwJA4d2y7lNkmHt4nJwIsoueFvUYmm7RDyzER8IA3XOCrjzl3OUcczZqcplk5JfBdhxMZpt1aGYWUdle1IgO/kAFne6sLD6IuxPySEbrclone-1.53.3/fs/config/testdata/encrypted.conf000066400000000000000000000003261375552240400214270ustar00rootroot00000000000000# Encrypted rclone configuration File RCLONE_ENCRYPT_V0: b5Uk6mE3cUn5Wb8xiWYnVBAxXUirAaEG1PO/GIDiO9274AO+Yj790BwJA4d2y7lNkmHt4nJwIsoueFvUYmm7RDyzER8IA3XOCrjzl3OUcczZqcplk5JfBdhxMZpt1aGYWUdle1IgO/kAFne6sLD6IuxPySEbrclone-1.53.3/fs/config/testdata/plain.conf000066400000000000000000000002001375552240400205240ustar00rootroot00000000000000[RCLONE_ENCRYPT_V0] type = local nounc = true [nounc] type = local nounc = true [unc] type = local nounc = false rclone-1.53.3/fs/config_list.go000066400000000000000000000037711375552240400163430ustar00rootroot00000000000000package fs import ( "bytes" "encoding/csv" "fmt" ) // CommaSepList is a comma separated config value // It uses the encoding/csv rules for quoting and escaping type CommaSepList []string // SpaceSepList is a space separated config value // It uses the encoding/csv rules for quoting and escaping type SpaceSepList []string type genericList []string func (l CommaSepList) String() string { return genericList(l).string(',') } // Set the List entries func (l *CommaSepList) Set(s string) error { return (*genericList)(l).set(',', []byte(s)) } // Type of the value func (CommaSepList) Type() string { return "CommaSepList" } // Scan implements the fmt.Scanner interface func (l *CommaSepList) Scan(s fmt.ScanState, ch rune) error { return (*genericList)(l).scan(',', s, ch) } func (l SpaceSepList) String() string { return genericList(l).string(' ') } // Set the List entries func (l *SpaceSepList) Set(s string) error { return (*genericList)(l).set(' ', []byte(s)) } // Type of the value func (SpaceSepList) Type() string { return "SpaceSepList" } // Scan implements the fmt.Scanner interface func (l *SpaceSepList) Scan(s fmt.ScanState, ch rune) error { return (*genericList)(l).scan(' ', s, ch) } func (gl genericList) string(sep rune) string { var buf bytes.Buffer w := 
csv.NewWriter(&buf) w.Comma = sep err := w.Write(gl) if err != nil { // can only happen if w.Comma is invalid panic(err) } w.Flush() return string(bytes.TrimSpace(buf.Bytes())) } func (gl *genericList) set(sep rune, b []byte) error { if len(b) == 0 { *gl = nil return nil } r := csv.NewReader(bytes.NewReader(b)) r.Comma = sep record, err := r.Read() switch _err := err.(type) { case nil: *gl = record case *csv.ParseError: err = _err.Err // remove line numbers from the error message } return err } func (gl *genericList) scan(sep rune, s fmt.ScanState, ch rune) error { token, err := s.Token(true, func(rune) bool { return true }) if err != nil { return err } return gl.set(sep, bytes.TrimSpace(token)) } rclone-1.53.3/fs/config_list_test.go000066400000000000000000000045101375552240400173720ustar00rootroot00000000000000package fs import ( "fmt" "testing" "github.com/stretchr/testify/require" ) func must(err error) { if err != nil { panic(err) } } func ExampleSpaceSepList() { for _, s := range []string{ `remotea:test/dir remoteb:`, `"remotea:test/space dir" remoteb:`, `"remotea:test/quote""dir" remoteb:`, } { var l SpaceSepList must(l.Set(s)) fmt.Printf("%#v\n", l) } // Output: // fs.SpaceSepList{"remotea:test/dir", "remoteb:"} // fs.SpaceSepList{"remotea:test/space dir", "remoteb:"} // fs.SpaceSepList{"remotea:test/quote\"dir", "remoteb:"} } func ExampleCommaSepList() { for _, s := range []string{ `remotea:test/dir,remoteb:`, `"remotea:test/space dir",remoteb:`, `"remotea:test/quote""dir",remoteb:`, } { var l CommaSepList must(l.Set(s)) fmt.Printf("%#v\n", l) } // Output: // fs.CommaSepList{"remotea:test/dir", "remoteb:"} // fs.CommaSepList{"remotea:test/space dir", "remoteb:"} // fs.CommaSepList{"remotea:test/quote\"dir", "remoteb:"} } func TestSpaceSepListSet(t *testing.T) { type tc struct { in string out SpaceSepList err string } tests := []tc{ {``, nil, ""}, {`\`, SpaceSepList{`\`}, ""}, {`\\`, SpaceSepList{`\\`}, ""}, {`potato`, SpaceSepList{`potato`}, ""}, {`po\tato`, SpaceSepList{`po\tato`}, ""}, {`potato\`, SpaceSepList{`potato\`}, ""}, {`'potato`, SpaceSepList{`'potato`}, ""}, {`pot'ato`, SpaceSepList{`pot'ato`}, ""}, {`potato'`, SpaceSepList{`potato'`}, ""}, {`"potato"`, SpaceSepList{`potato`}, ""}, {`'potato'`, SpaceSepList{`'potato'`}, ""}, {`potato apple`, SpaceSepList{`potato`, `apple`}, ""}, {`potato\ apple`, SpaceSepList{`potato\`, `apple`}, ""}, {`"potato apple"`, SpaceSepList{`potato apple`}, ""}, {`"potato'apple"`, SpaceSepList{`potato'apple`}, ""}, {`"potato''apple"`, SpaceSepList{`potato''apple`}, ""}, {`"potato' 'apple"`, SpaceSepList{`potato' 'apple`}, ""}, {`potato="apple"`, nil, `bare " in non-quoted-field`}, {`apple "potato`, nil, "extraneous"}, {`apple pot"ato`, nil, "bare \" in non-quoted-field"}, {`potato"`, nil, "bare \" in non-quoted-field"}, } for _, tc := range tests { var l SpaceSepList err := l.Set(tc.in) if tc.err == "" { require.NoErrorf(t, err, "input: %q", tc.in) } else { require.Containsf(t, err.Error(), tc.err, "input: %q", tc.in) } require.Equalf(t, tc.out, l, "input: %q", tc.in) } } rclone-1.53.3/fs/cutoffmode.go000066400000000000000000000016741375552240400161760ustar00rootroot00000000000000package fs import ( "fmt" "strings" "github.com/pkg/errors" ) // CutoffMode describes the possible delete modes in the config type CutoffMode byte // MaxTransferMode constants const ( CutoffModeHard CutoffMode = iota CutoffModeSoft CutoffModeCautious CutoffModeDefault = CutoffModeHard ) var cutoffModeToString = []string{ CutoffModeHard: "HARD", CutoffModeSoft: "SOFT", 
CutoffModeCautious: "CAUTIOUS", } // String turns a CutoffMode into a string func (m CutoffMode) String() string { if m >= CutoffMode(len(cutoffModeToString)) { return fmt.Sprintf("CutoffMode(%d)", m) } return cutoffModeToString[m] } // Set a CutoffMode func (m *CutoffMode) Set(s string) error { for n, name := range cutoffModeToString { if s != "" && name == strings.ToUpper(s) { *m = CutoffMode(n) return nil } } return errors.Errorf("Unknown cutoff mode %q", s) } // Type of the value func (m *CutoffMode) Type() string { return "string" } rclone-1.53.3/fs/cutoffmode_test.go000066400000000000000000000001701375552240400172230ustar00rootroot00000000000000package fs import "github.com/spf13/pflag" // Check it satisfies the interface var _ pflag.Value = (*CutoffMode)(nil) rclone-1.53.3/fs/deletemode.go000066400000000000000000000004171375552240400161440ustar00rootroot00000000000000package fs // DeleteMode describes the possible delete modes in the config type DeleteMode byte // DeleteMode constants const ( DeleteModeOff DeleteMode = iota DeleteModeBefore DeleteModeDuring DeleteModeAfter DeleteModeOnly DeleteModeDefault = DeleteModeAfter ) rclone-1.53.3/fs/dir.go000066400000000000000000000041161375552240400146130ustar00rootroot00000000000000package fs import ( "context" "time" ) // Dir describes an unspecialized directory for directory/container/bucket lists type Dir struct { remote string // name of the directory modTime time.Time // modification or creation time - IsZero for unknown size int64 // size of directory and contents or -1 if unknown items int64 // number of objects or -1 for unknown id string // optional ID } // NewDir creates an unspecialized Directory object func NewDir(remote string, modTime time.Time) *Dir { return &Dir{ remote: remote, modTime: modTime, size: -1, items: -1, } } // NewDirCopy creates an unspecialized copy of the Directory object passed in func NewDirCopy(ctx context.Context, d Directory) *Dir { return &Dir{ remote: d.Remote(), modTime: d.ModTime(ctx), size: d.Size(), items: d.Items(), id: d.ID(), } } // String returns the name func (d *Dir) String() string { return d.remote } // Remote returns the remote path func (d *Dir) Remote() string { return d.remote } // SetRemote sets the remote func (d *Dir) SetRemote(remote string) *Dir { d.remote = remote return d } // ID gets the optional ID func (d *Dir) ID() string { return d.id } // SetID sets the optional ID func (d *Dir) SetID(id string) *Dir { d.id = id return d } // ModTime returns the modification date of the directory // It should return a best guess if one isn't available func (d *Dir) ModTime(ctx context.Context) time.Time { if !d.modTime.IsZero() { return d.modTime } return time.Now() } // Size returns the size of the directory and its contents func (d *Dir) Size() int64 { return d.size } // SetSize sets the size of the directory func (d *Dir) SetSize(size int64) *Dir { d.size = size return d } // Items returns the count of items in this directory or this // directory and subdirectories if known, -1 for unknown func (d *Dir) Items() int64 { return d.items } // SetItems sets the number of items in the directory func (d *Dir) SetItems(items int64) *Dir { d.items = items return d } // Check interfaces var ( _ DirEntry = (*Dir)(nil) _ Directory = (*Dir)(nil) ) rclone-1.53.3/fs/direntries.go000066400000000000000000000042301375552240400162020ustar00rootroot00000000000000package fs import "fmt" // DirEntries is a slice of Object or *Dir type DirEntries []DirEntry // Len is part of sort.Interface.
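// Because DirEntries implements sort.Interface, a listing can be
// ordered with the standard library. A minimal sketch (assuming
// entries is a DirEntries, as in TestDirEntriesSort further down):
//
//	sort.Stable(entries)
//
// The ordering comes from CompareDirEntries below, which sorts by
// name and puts a directory before an object of the same name.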
func (ds DirEntries) Len() int { return len(ds) } // Swap is part of sort.Interface. func (ds DirEntries) Swap(i, j int) { ds[i], ds[j] = ds[j], ds[i] } // Less is part of sort.Interface. func (ds DirEntries) Less(i, j int) bool { return CompareDirEntries(ds[i], ds[j]) < 0 } // ForObject runs the function supplied on every object in the entries func (ds DirEntries) ForObject(fn func(o Object)) { for _, entry := range ds { o, ok := entry.(Object) if ok { fn(o) } } } // ForObjectError runs the function supplied on every object in the entries func (ds DirEntries) ForObjectError(fn func(o Object) error) error { for _, entry := range ds { o, ok := entry.(Object) if ok { err := fn(o) if err != nil { return err } } } return nil } // ForDir runs the function supplied on every Directory in the entries func (ds DirEntries) ForDir(fn func(dir Directory)) { for _, entry := range ds { dir, ok := entry.(Directory) if ok { fn(dir) } } } // ForDirError runs the function supplied on every Directory in the entries func (ds DirEntries) ForDirError(fn func(dir Directory) error) error { for _, entry := range ds { dir, ok := entry.(Directory) if ok { err := fn(dir) if err != nil { return err } } } return nil } // DirEntryType returns a string description of the DirEntry, either // "object", "directory" or "unknown type XXX" func DirEntryType(d DirEntry) string { switch d.(type) { case Object: return "object" case Directory: return "directory" } return fmt.Sprintf("unknown type %T", d) } // CompareDirEntries returns 1 if a > b, 0 if a == b and -1 if a < b // If two dir entries have the same name, compare their types (directories are before objects) func CompareDirEntries(a, b DirEntry) int { aName := a.Remote() bName := b.Remote() if aName > bName { return 1 } else if aName < bName { return -1 } typeA := DirEntryType(a) typeB := DirEntryType(b) // same name, compare types if typeA > typeB { return 1 } else if typeA < typeB { return -1 } return 0 } rclone-1.53.3/fs/direntries_test.go000066400000000000000000000011441375552240400172420ustar00rootroot00000000000000package fs_test import ( "sort" "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/mockdir" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" ) func TestDirEntriesSort(t *testing.T) { a := mockobject.New("a") aDir := mockdir.New("a") b := mockobject.New("b") bDir := mockdir.New("b") c := mockobject.New("c") cDir := mockdir.New("c") anotherc := mockobject.New("c") dirEntries := fs.DirEntries{bDir, b, aDir, a, c, cDir, anotherc} sort.Stable(dirEntries) assert.Equal(t, fs.DirEntries{aDir, a, bDir, b, cDir, c, anotherc}, dirEntries) } rclone-1.53.3/fs/dirtree/000077500000000000000000000000001375552240400151425ustar00rootroot00000000000000rclone-1.53.3/fs/dirtree/dirtree.go000066400000000000000000000123611375552240400171320ustar00rootroot00000000000000// Package dirtree contains the DirTree type which is used for // building filesystem hierarchies in memory. package dirtree import ( "bytes" "fmt" "path" "sort" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/lib/errors" ) // DirTree is a map of directories to entries type DirTree map[string]fs.DirEntries // New returns a fresh DirTree func New() DirTree { return make(DirTree) } // parentDir finds the parent directory of path func parentDir(entryPath string) string { dirPath := path.Dir(entryPath) if dirPath == "."
{ dirPath = "" } return dirPath } // Add an entry to the tree // it doesn't create parents func (dt DirTree) Add(entry fs.DirEntry) { dirPath := parentDir(entry.Remote()) dt[dirPath] = append(dt[dirPath], entry) } // AddDir adds a directory entry to the tree // this creates the directory itself if required // it doesn't create parents func (dt DirTree) AddDir(entry fs.DirEntry) { dt.Add(entry) // create the directory itself if it doesn't exist already dirPath := entry.Remote() if _, ok := dt[dirPath]; !ok { dt[dirPath] = nil } } // AddEntry adds the entry and creates the parents for it regardless // of whether it is a file or a directory. func (dt DirTree) AddEntry(entry fs.DirEntry) { switch entry.(type) { case fs.Directory: dt.AddDir(entry) case fs.Object: dt.Add(entry) default: panic("unknown entry type") } remoteParent := parentDir(entry.Remote()) dt.CheckParent("", remoteParent) } // Find returns the DirEntry for filePath or nil if not found func (dt DirTree) Find(filePath string) (parentPath string, entry fs.DirEntry) { parentPath = parentDir(filePath) for _, entry := range dt[parentPath] { if entry.Remote() == filePath { return parentPath, entry } } return parentPath, nil } // CheckParent checks that dirPath has a *Dir in its parent func (dt DirTree) CheckParent(root, dirPath string) { if dirPath == root { return } parentPath, entry := dt.Find(dirPath) if entry != nil { return } dt[parentPath] = append(dt[parentPath], fs.NewDir(dirPath, time.Now())) dt.CheckParent(root, parentPath) } // CheckParents checks every directory in the tree has *Dir in its parent func (dt DirTree) CheckParents(root string) { for dirPath := range dt { dt.CheckParent(root, dirPath) } } // Sort sorts all the Entries func (dt DirTree) Sort() { for _, entries := range dt { sort.Stable(entries) } } // Dirs returns the directories in sorted order func (dt DirTree) Dirs() (dirNames []string) { for dirPath := range dt { dirNames = append(dirNames, dirPath) } sort.Strings(dirNames) return dirNames } // Prune remove directories from a directory tree. dirNames contains // all directories to remove as keys, with true as values. dirNames // will be modified in the function. func (dt DirTree) Prune(dirNames map[string]bool) error { // We use map[string]bool to avoid recursion (and potential // stack exhaustion). // First we need delete directories from their parents. for dName, remove := range dirNames { if !remove { // Currently all values should be // true, therefore this should not // happen. But this makes function // more predictable. fs.Infof(dName, "Directory in the map for prune, but the value is false") continue } if dName == "" { // if dName is root, do nothing (no parent exist) continue } parent := parentDir(dName) // It may happen that dt does not have a dName key, // since directory was excluded based on a filter. In // such case the loop will be skipped. for i, entry := range dt[parent] { switch x := entry.(type) { case fs.Directory: if x.Remote() == dName { // the slice is not sorted yet // to delete item // a) replace it with the last one dt[parent][i] = dt[parent][len(dt[parent])-1] // b) remove last dt[parent] = dt[parent][:len(dt[parent])-1] // we modify a slice within a loop, but we stop // iterating immediately break } case fs.Object: // do nothing default: return errors.Errorf("unknown object type %T", entry) } } } for len(dirNames) > 0 { // According to golang specs, if new keys were added // during range iteration, they may be skipped. 
for dName, remove := range dirNames { if !remove { fs.Infof(dName, "Directory in the map for prune, but the value is false") continue } // First, add all subdirectories to dirNames. // It may happen that dt[dName] does not exist. // If so, the loop will be skipped. for _, entry := range dt[dName] { switch x := entry.(type) { case fs.Directory: excludeDir := x.Remote() dirNames[excludeDir] = true case fs.Object: // do nothing default: return errors.Errorf("unknown object type %T", entry) } } // Then remove current directory from DirTree delete(dt, dName) // and from dirNames delete(dirNames, dName) } } return nil } // String emits a simple representation of the DirTree func (dt DirTree) String() string { out := new(bytes.Buffer) for _, dir := range dt.Dirs() { _, _ = fmt.Fprintf(out, "%s/\n", dir) for _, entry := range dt[dir] { flag := "" if _, ok := entry.(fs.Directory); ok { flag = "/" } _, _ = fmt.Fprintf(out, " %s%s\n", path.Base(entry.Remote()), flag) } } return out.String() } rclone-1.53.3/fs/dirtree/dirtree_test.go000066400000000000000000000063661375552240400202010ustar00rootroot00000000000000package dirtree import ( "testing" "github.com/rclone/rclone/fstest/mockdir" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestNew(t *testing.T) { dt := New() assert.Equal(t, "", dt.String()) } func TestParentDir(t *testing.T) { assert.Equal(t, "root/parent", parentDir("root/parent/file")) assert.Equal(t, "parent", parentDir("parent/file")) assert.Equal(t, "", parentDir("parent")) assert.Equal(t, "", parentDir("")) } func TestDirTreeAdd(t *testing.T) { dt := New() o := mockobject.New("potato") dt.Add(o) assert.Equal(t, `/ potato `, dt.String()) o = mockobject.New("dir/subdir/sausage") dt.Add(o) assert.Equal(t, `/ potato dir/subdir/ sausage `, dt.String()) } func TestDirTreeAddDir(t *testing.T) { dt := New() d := mockdir.New("potato") dt.Add(d) assert.Equal(t, `/ potato/ `, dt.String()) d = mockdir.New("dir/subdir/sausage") dt.AddDir(d) assert.Equal(t, `/ potato/ dir/subdir/ sausage/ dir/subdir/sausage/ `, dt.String()) } func TestDirTreeAddEntry(t *testing.T) { dt := New() d := mockdir.New("dir/subdir/sausagedir") dt.AddEntry(d) o := mockobject.New("dir/subdir2/sausage2") dt.AddEntry(o) assert.Equal(t, `/ dir/ dir/ subdir/ subdir2/ dir/subdir/ sausagedir/ dir/subdir/sausagedir/ dir/subdir2/ sausage2 `, dt.String()) } func TestDirTreeFind(t *testing.T) { dt := New() parent, foundObj := dt.Find("dir/subdir/sausage") assert.Equal(t, "dir/subdir", parent) assert.Nil(t, foundObj) o := mockobject.New("dir/subdir/sausage") dt.Add(o) parent, foundObj = dt.Find("dir/subdir/sausage") assert.Equal(t, "dir/subdir", parent) assert.Equal(t, o, foundObj) } func TestDirTreeCheckParent(t *testing.T) { dt := New() o := mockobject.New("dir/subdir/sausage") dt.Add(o) assert.Equal(t, `dir/subdir/ sausage `, dt.String()) dt.CheckParent("", "dir/subdir") assert.Equal(t, `/ dir/ dir/ subdir/ dir/subdir/ sausage `, dt.String()) } func TestDirTreeCheckParents(t *testing.T) { dt := New() dt.Add(mockobject.New("dir/subdir/sausage")) dt.Add(mockobject.New("dir/subdir2/sausage2")) dt.CheckParents("") dt.Sort() // sort since the exact order of adding parents is not defined assert.Equal(t, `/ dir/ dir/ subdir/ subdir2/ dir/subdir/ sausage dir/subdir2/ sausage2 `, dt.String()) } func TestDirTreeSort(t *testing.T) { dt := New() dt.Add(mockobject.New("dir/subdir/B")) dt.Add(mockobject.New("dir/subdir/A")) assert.Equal(t, `dir/subdir/ B A `, 
dt.String()) dt.Sort() assert.Equal(t, `dir/subdir/ A B `, dt.String()) } func TestDirTreeDirs(t *testing.T) { dt := New() dt.Add(mockobject.New("dir/subdir/sausage")) dt.Add(mockobject.New("dir/subdir2/sausage2")) dt.CheckParents("") assert.Equal(t, []string{ "", "dir", "dir/subdir", "dir/subdir2", }, dt.Dirs()) } func TestDirTreePrune(t *testing.T) { dt := New() dt.Add(mockobject.New("file")) dt.Add(mockobject.New("dir/subdir/sausage")) dt.Add(mockobject.New("dir/subdir2/sausage2")) dt.Add(mockobject.New("dir/file")) dt.Add(mockobject.New("dir2/file")) dt.CheckParents("") err := dt.Prune(map[string]bool{ "dir": true, }) require.NoError(t, err) assert.Equal(t, `/ file dir2/ dir2/ file `, dt.String()) } rclone-1.53.3/fs/driveletter/000077500000000000000000000000001375552240400160355ustar00rootroot00000000000000rclone-1.53.3/fs/driveletter/driveletter.go000066400000000000000000000005321375552240400207150ustar00rootroot00000000000000// Package driveletter returns whether a name is a valid drive letter // +build !windows package driveletter // IsDriveLetter returns a bool indicating whether name is a valid // Windows drive letter // // On non windows platforms we don't have drive letters so we always // return false func IsDriveLetter(name string) bool { return false } rclone-1.53.3/fs/driveletter/driveletter_windows.go000066400000000000000000000004321375552240400224660ustar00rootroot00000000000000// +build windows package driveletter // IsDriveLetter returns a bool indicating whether name is a valid // Windows drive letter func IsDriveLetter(name string) bool { if len(name) != 1 { return false } c := name[0] return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') } rclone-1.53.3/fs/dump.go000066400000000000000000000033151375552240400150020ustar00rootroot00000000000000package fs import ( "fmt" "strings" "github.com/pkg/errors" ) // DumpFlags describes the Dump options in force type DumpFlags int // DumpFlags definitions const ( DumpHeaders DumpFlags = 1 << iota DumpBodies DumpRequests DumpResponses DumpAuth DumpFilters DumpGoRoutines DumpOpenFiles ) var dumpFlags = []struct { flag DumpFlags name string }{ {DumpHeaders, "headers"}, {DumpBodies, "bodies"}, {DumpRequests, "requests"}, {DumpResponses, "responses"}, {DumpAuth, "auth"}, {DumpFilters, "filters"}, {DumpGoRoutines, "goroutines"}, {DumpOpenFiles, "openfiles"}, } // DumpFlagsList is a list of dump flags used in the help var DumpFlagsList string func init() { // calculate the dump flags list var out []string for _, info := range dumpFlags { out = append(out, info.name) } DumpFlagsList = strings.Join(out, ",") } // String turns a DumpFlags into a string func (f DumpFlags) String() string { var out []string for _, info := range dumpFlags { if f&info.flag != 0 { out = append(out, info.name) f &^= info.flag } } if f != 0 { out = append(out, fmt.Sprintf("Unknown-0x%X", int(f))) } return strings.Join(out, ",") } // Set a DumpFlags as a comma separated list of flags func (f *DumpFlags) Set(s string) error { var flags DumpFlags parts := strings.Split(s, ",") for _, part := range parts { found := false part = strings.ToLower(strings.TrimSpace(part)) if part == "" { continue } for _, info := range dumpFlags { if part == info.name { found = true flags |= info.flag } } if !found { return errors.Errorf("Unknown dump flag %q", part) } } *f = flags return nil } // Type of the value func (f *DumpFlags) Type() string { return "DumpFlags" } rclone-1.53.3/fs/dump_test.go000066400000000000000000000032741375552240400160450ustar00rootroot00000000000000package 
fs import ( "testing" "github.com/spf13/pflag" "github.com/stretchr/testify/assert" ) // Check it satisfies the interface var _ pflag.Value = (*DumpFlags)(nil) func TestDumpFlagsString(t *testing.T) { assert.Equal(t, "", DumpFlags(0).String()) assert.Equal(t, "headers", (DumpHeaders).String()) assert.Equal(t, "headers,bodies", (DumpHeaders | DumpBodies).String()) assert.Equal(t, "headers,bodies,requests,responses,auth,filters", (DumpHeaders | DumpBodies | DumpRequests | DumpResponses | DumpAuth | DumpFilters).String()) assert.Equal(t, "headers,Unknown-0x8000", (DumpHeaders | DumpFlags(0x8000)).String()) } func TestDumpFlagsSet(t *testing.T) { for _, test := range []struct { in string want DumpFlags wantErr string }{ {"", DumpFlags(0), ""}, {"bodies", DumpBodies, ""}, {"bodies,headers,auth", DumpBodies | DumpHeaders | DumpAuth, ""}, {"bodies,headers,auth", DumpBodies | DumpHeaders | DumpAuth, ""}, {"headers,bodies,requests,responses,auth,filters", DumpHeaders | DumpBodies | DumpRequests | DumpResponses | DumpAuth | DumpFilters, ""}, {"headers,bodies,unknown,auth", 0, "Unknown dump flag \"unknown\""}, } { f := DumpFlags(-1) initial := f err := f.Set(test.in) if err != nil { if test.wantErr == "" { t.Errorf("Got an error when not expecting one on %q: %v", test.in, err) } else { assert.Contains(t, err.Error(), test.wantErr) } assert.Equal(t, initial, f, test.want) } else { if test.wantErr != "" { t.Errorf("Got no error when expecting one on %q", test.in) } else { assert.Equal(t, test.want, f) } } } } func TestDumpFlagsType(t *testing.T) { f := DumpFlags(0) assert.Equal(t, "DumpFlags", f.Type()) } rclone-1.53.3/fs/filter/000077500000000000000000000000001375552240400147715ustar00rootroot00000000000000rclone-1.53.3/fs/filter/filter.go000066400000000000000000000341501375552240400166100ustar00rootroot00000000000000// Package filter controls the filtering of files package filter import ( "bufio" "context" "fmt" "log" "os" "path" "regexp" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "golang.org/x/sync/errgroup" ) // Active is the globally active filter var Active = mustNewFilter(nil) // rule is one filter rule type rule struct { Include bool Regexp *regexp.Regexp } // Match returns true if rule matches path func (r *rule) Match(path string) bool { return r.Regexp.MatchString(path) } // String the rule func (r *rule) String() string { c := "-" if r.Include { c = "+" } return fmt.Sprintf("%s %s", c, r.Regexp.String()) } // rules is a slice of rules type rules struct { rules []rule existing map[string]struct{} } // add adds a rule if it doesn't exist already func (rs *rules) add(Include bool, re *regexp.Regexp) { if rs.existing == nil { rs.existing = make(map[string]struct{}) } newRule := rule{ Include: Include, Regexp: re, } newRuleString := newRule.String() if _, ok := rs.existing[newRuleString]; ok { return // rule already exists } rs.rules = append(rs.rules, newRule) rs.existing[newRuleString] = struct{}{} } // clear clears all the rules func (rs *rules) clear() { rs.rules = nil rs.existing = nil } // len returns the number of rules func (rs *rules) len() int { return len(rs.rules) } // FilesMap describes the map of files to transfer type FilesMap map[string]struct{} // Opt configures the filter type Opt struct { DeleteExcluded bool FilterRule []string FilterFrom []string ExcludeRule []string ExcludeFrom []string ExcludeFile string IncludeRule []string IncludeFrom []string FilesFrom []string FilesFromRaw []string MinAge fs.Duration MaxAge fs.Duration MinSize fs.SizeSuffix 
MaxSize fs.SizeSuffix IgnoreCase bool } // DefaultOpt is the default config for the filter var DefaultOpt = Opt{ MinAge: fs.DurationOff, MaxAge: fs.DurationOff, MinSize: fs.SizeSuffix(-1), MaxSize: fs.SizeSuffix(-1), } // Filter describes any filtering in operation type Filter struct { Opt Opt ModTimeFrom time.Time ModTimeTo time.Time fileRules rules dirRules rules files FilesMap // files if filesFrom dirs FilesMap // dirs from filesFrom } // NewFilter parses the command line options and creates a Filter // object. If opt is nil, then DefaultOpt will be used func NewFilter(opt *Opt) (f *Filter, err error) { f = &Filter{} // Make a copy of the options if opt != nil { f.Opt = *opt } else { f.Opt = DefaultOpt } // Filter flags if f.Opt.MinAge.IsSet() { f.ModTimeTo = time.Now().Add(-time.Duration(f.Opt.MinAge)) fs.Debugf(nil, "--min-age %v to %v", f.Opt.MinAge, f.ModTimeTo) } if f.Opt.MaxAge.IsSet() { f.ModTimeFrom = time.Now().Add(-time.Duration(f.Opt.MaxAge)) if !f.ModTimeTo.IsZero() && f.ModTimeTo.Before(f.ModTimeFrom) { log.Fatal("filter: --min-age can't be larger than --max-age") } fs.Debugf(nil, "--max-age %v to %v", f.Opt.MaxAge, f.ModTimeFrom) } addImplicitExclude := false foundExcludeRule := false for _, rule := range f.Opt.IncludeRule { err = f.Add(true, rule) if err != nil { return nil, err } addImplicitExclude = true } for _, rule := range f.Opt.IncludeFrom { err := forEachLine(rule, false, func(line string) error { return f.Add(true, line) }) if err != nil { return nil, err } addImplicitExclude = true } for _, rule := range f.Opt.ExcludeRule { err = f.Add(false, rule) if err != nil { return nil, err } foundExcludeRule = true } for _, rule := range f.Opt.ExcludeFrom { err := forEachLine(rule, false, func(line string) error { return f.Add(false, line) }) if err != nil { return nil, err } foundExcludeRule = true } if addImplicitExclude && foundExcludeRule { fs.Errorf(nil, "Using --filter is recommended instead of both --include and --exclude as the order they are parsed in is indeterminate") } for _, rule := range f.Opt.FilterRule { err = f.AddRule(rule) if err != nil { return nil, err } } for _, rule := range f.Opt.FilterFrom { err := forEachLine(rule, false, f.AddRule) if err != nil { return nil, err } } inActive := f.InActive() for _, rule := range f.Opt.FilesFrom { if !inActive { return nil, fmt.Errorf("The usage of --files-from overrides all other filters, it should be used alone or with --files-from-raw") } f.initAddFile() // init to show --files-from set even if no files within err := forEachLine(rule, false, func(line string) error { return f.AddFile(line) }) if err != nil { return nil, err } } for _, rule := range f.Opt.FilesFromRaw { // --files-from-raw can be used with --files-from, hence we do // not need to get the value of f.InActive again if !inActive { return nil, fmt.Errorf("The usage of --files-from-raw overrides all other filters, it should be used alone or with --files-from") } f.initAddFile() // init to show --files-from set even if no files within err := forEachLine(rule, true, func(line string) error { return f.AddFile(line) }) if err != nil { return nil, err } } if addImplicitExclude { err = f.Add(false, "/**") if err != nil { return nil, err } } if fs.Config.Dump&fs.DumpFilters != 0 { fmt.Println("--- start filters ---") fmt.Println(f.DumpFilters()) fmt.Println("--- end filters ---") } return f, nil } func mustNewFilter(opt *Opt) *Filter { f, err := NewFilter(opt) if err != nil { panic(err) } return f } // addDirGlobs adds directory globs from the file glob 
passed in func (f *Filter) addDirGlobs(Include bool, glob string) error { for _, dirGlob := range globToDirGlobs(glob) { // Don't add "/" as we always include the root if dirGlob == "/" { continue } dirRe, err := globToRegexp(dirGlob, f.Opt.IgnoreCase) if err != nil { return err } f.dirRules.add(Include, dirRe) } return nil } // Add adds a filter rule with include or exclude status indicated func (f *Filter) Add(Include bool, glob string) error { isDirRule := strings.HasSuffix(glob, "/") isFileRule := !isDirRule if strings.Contains(glob, "**") { isDirRule, isFileRule = true, true } re, err := globToRegexp(glob, f.Opt.IgnoreCase) if err != nil { return err } if isFileRule { f.fileRules.add(Include, re) // If include rule work out what directories are needed to scan // if exclude rule, we can't rule anything out // Unless it is `*` which matches everything // NB ** and /** are DirRules if Include || glob == "*" { err = f.addDirGlobs(Include, glob) if err != nil { return err } } } if isDirRule { f.dirRules.add(Include, re) } return nil } // AddRule adds a filter rule with include/exclude indicated by the prefix // // These are // // + glob // - glob // ! // // '+' includes the glob, '-' excludes it and '!' resets the filter list // // Line comments may be introduced with '#' or ';' func (f *Filter) AddRule(rule string) error { switch { case rule == "!": f.Clear() return nil case strings.HasPrefix(rule, "- "): return f.Add(false, rule[2:]) case strings.HasPrefix(rule, "+ "): return f.Add(true, rule[2:]) } return errors.Errorf("malformed rule %q", rule) } // initAddFile creates f.files and f.dirs func (f *Filter) initAddFile() { if f.files == nil { f.files = make(FilesMap) f.dirs = make(FilesMap) } } // AddFile adds a single file to the files from list func (f *Filter) AddFile(file string) error { f.initAddFile() file = strings.Trim(file, "/") f.files[file] = struct{}{} // Put all the parent directories into f.dirs for { file = path.Dir(file) if file == "." { break } if _, found := f.dirs[file]; found { break } f.dirs[file] = struct{}{} } return nil } // Files returns all the files from the `--files-from` list // // It may be nil if the list is empty func (f *Filter) Files() FilesMap { return f.files } // Clear clears all the filter rules func (f *Filter) Clear() { f.fileRules.clear() f.dirRules.clear() } // InActive returns false if any filters are active func (f *Filter) InActive() bool { return (f.files == nil && f.ModTimeFrom.IsZero() && f.ModTimeTo.IsZero() && f.Opt.MinSize < 0 && f.Opt.MaxSize < 0 && f.fileRules.len() == 0 && f.dirRules.len() == 0 && len(f.Opt.ExcludeFile) == 0) } // includeRemote returns whether this remote passes the filter rules. func (f *Filter) includeRemote(remote string) bool { for _, rule := range f.fileRules.rules { if rule.Match(remote) { return rule.Include } } return true } // ListContainsExcludeFile checks if exclude file is present in the list. func (f *Filter) ListContainsExcludeFile(entries fs.DirEntries) bool { if len(f.Opt.ExcludeFile) == 0 { return false } for _, entry := range entries { obj, ok := entry.(fs.Object) if ok { basename := path.Base(obj.Remote()) if basename == f.Opt.ExcludeFile { return true } } } return false } // IncludeDirectory returns a function which checks whether this // directory should be included in the sync or not. 
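// A minimal usage sketch (ctx and fremote are assumed to be in
// scope):
//
//	include := f.IncludeDirectory(ctx, fremote)
//	ok, err := include("path/to/dir")
//
// ok is false when the directory is ruled out by
// --exclude-if-present, by --files-from or by the directory filter
// rules.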
func (f *Filter) IncludeDirectory(ctx context.Context, fs fs.Fs) func(string) (bool, error) { return func(remote string) (bool, error) { remote = strings.Trim(remote, "/") // first check if we need to remove directory based on // the exclude file excl, err := f.DirContainsExcludeFile(ctx, fs, remote) if err != nil { return false, err } if excl { return false, nil } // filesFrom takes precedence if f.files != nil { _, include := f.dirs[remote] return include, nil } remote += "/" for _, rule := range f.dirRules.rules { if rule.Match(remote) { return rule.Include, nil } } return true, nil } } // DirContainsExcludeFile checks if exclude file is present in a // directory. If fs is nil, it works properly if ExcludeFile is an // empty string (for testing). func (f *Filter) DirContainsExcludeFile(ctx context.Context, fremote fs.Fs, remote string) (bool, error) { if len(f.Opt.ExcludeFile) > 0 { exists, err := fs.FileExists(ctx, fremote, path.Join(remote, f.Opt.ExcludeFile)) if err != nil { return false, err } if exists { return true, nil } } return false, nil } // Include returns whether this object should be included into the // sync or not func (f *Filter) Include(remote string, size int64, modTime time.Time) bool { // filesFrom takes precedence if f.files != nil { _, include := f.files[remote] return include } if !f.ModTimeFrom.IsZero() && modTime.Before(f.ModTimeFrom) { return false } if !f.ModTimeTo.IsZero() && modTime.After(f.ModTimeTo) { return false } if f.Opt.MinSize >= 0 && size < int64(f.Opt.MinSize) { return false } if f.Opt.MaxSize >= 0 && size > int64(f.Opt.MaxSize) { return false } return f.includeRemote(remote) } // IncludeObject returns whether this object should be included into // the sync or not. This is a convenience function to avoid calling // o.ModTime(), which is an expensive operation. 
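// A sketch of typical use from outside this package, assuming o is
// an fs.Object:
//
//	if filter.Active.IncludeObject(ctx, o) {
//		// ... transfer o ...
//	}
//
// Active is the globally active filter declared at the top of this
// file.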
func (f *Filter) IncludeObject(ctx context.Context, o fs.Object) bool { var modTime time.Time if !f.ModTimeFrom.IsZero() || !f.ModTimeTo.IsZero() { modTime = o.ModTime(ctx) } else { modTime = time.Unix(0, 0) } return f.Include(o.Remote(), o.Size(), modTime) } // forEachLine calls fn on every line in the file pointed to by path // // It ignores empty lines and lines starting with '#' or ';' if raw is false func forEachLine(path string, raw bool, fn func(string) error) (err error) { var scanner *bufio.Scanner if path == "-" { scanner = bufio.NewScanner(os.Stdin) } else { in, err := os.Open(path) if err != nil { return err } scanner = bufio.NewScanner(in) defer fs.CheckClose(in, &err) } for scanner.Scan() { line := scanner.Text() if !raw { line = strings.TrimSpace(line) if len(line) == 0 || line[0] == '#' || line[0] == ';' { continue } } err := fn(line) if err != nil { return err } } return scanner.Err() } // DumpFilters dumps the filters in textual form, 1 per line func (f *Filter) DumpFilters() string { rules := []string{} if !f.ModTimeFrom.IsZero() { rules = append(rules, fmt.Sprintf("Last-modified date must be equal or greater than: %s", f.ModTimeFrom.String())) } if !f.ModTimeTo.IsZero() { rules = append(rules, fmt.Sprintf("Last-modified date must be equal or less than: %s", f.ModTimeTo.String())) } rules = append(rules, "--- File filter rules ---") for _, rule := range f.fileRules.rules { rules = append(rules, rule.String()) } rules = append(rules, "--- Directory filter rules ---") for _, dirRule := range f.dirRules.rules { rules = append(rules, dirRule.String()) } return strings.Join(rules, "\n") } // HaveFilesFrom returns true if --files-from has been supplied func (f *Filter) HaveFilesFrom() bool { return f.files != nil } var errFilesFromNotSet = errors.New("--files-from not set so can't use Filter.ListR") // MakeListR makes function to return all the files set using --files-from func (f *Filter) MakeListR(ctx context.Context, NewObject func(ctx context.Context, remote string) (fs.Object, error)) fs.ListRFn { return func(ctx context.Context, dir string, callback fs.ListRCallback) error { if !f.HaveFilesFrom() { return errFilesFromNotSet } var ( remotes = make(chan string, fs.Config.Checkers) g errgroup.Group ) for i := 0; i < fs.Config.Checkers; i++ { g.Go(func() (err error) { var entries = make(fs.DirEntries, 1) for remote := range remotes { entries[0], err = NewObject(ctx, remote) if err == fs.ErrorObjectNotFound { // Skip files that are not found } else if err != nil { return err } else { err = callback(entries) if err != nil { return err } } } return nil }) } for remote := range f.files { remotes <- remote } close(remotes) return g.Wait() } } // UsesDirectoryFilters returns true if the filter uses directory // filters and false if it doesn't. 
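// For example, a lone "- *.jpg" rule generates no directory rules, so
// this returns false, whereas "+ dir/**" does generate one and it
// returns true - see TestNewFilterUsesDirectoryFilters for more
// cases.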
// // This is used in deciding whether to walk directories or use ListR func (f *Filter) UsesDirectoryFilters() bool { if len(f.dirRules.rules) == 0 { return false } rule := f.dirRules.rules[0] re := rule.Regexp.String() if rule.Include == true && re == "^.*$" { return false } return true } rclone-1.53.3/fs/filter/filter_test.go000066400000000000000000000421631375552240400176520ustar00rootroot00000000000000package filter import ( "context" "fmt" "io/ioutil" "os" "strings" "sync" "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestNewFilterDefault(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) assert.False(t, f.Opt.DeleteExcluded) assert.Equal(t, fs.SizeSuffix(-1), f.Opt.MinSize) assert.Equal(t, fs.SizeSuffix(-1), f.Opt.MaxSize) assert.Len(t, f.fileRules.rules, 0) assert.Len(t, f.dirRules.rules, 0) assert.Nil(t, f.files) assert.True(t, f.InActive()) } // testFile creates a temp file with the contents func testFile(t *testing.T, contents string) string { out, err := ioutil.TempFile("", "filter_test") require.NoError(t, err) defer func() { err := out.Close() require.NoError(t, err) }() _, err = out.Write([]byte(contents)) require.NoError(t, err) s := out.Name() return s } func TestNewFilterForbiddenMixOfFilesFromAndFilterRule(t *testing.T) { Opt := DefaultOpt // Set up the input Opt.FilterRule = []string{"- filter1", "- filter1b"} Opt.FilesFrom = []string{testFile(t, "#comment\nfiles1\nfiles2\n")} rm := func(p string) { err := os.Remove(p) if err != nil { t.Logf("error removing %q: %v", p, err) } } // Reset the input defer func() { rm(Opt.FilesFrom[0]) }() _, err := NewFilter(&Opt) require.Error(t, err) require.Contains(t, err.Error(), "The usage of --files-from overrides all other filters") } func TestNewFilterForbiddenMixOfFilesFromRawAndFilterRule(t *testing.T) { Opt := DefaultOpt // Set up the input Opt.FilterRule = []string{"- filter1", "- filter1b"} Opt.FilesFromRaw = []string{testFile(t, "#comment\nfiles1\nfiles2\n")} rm := func(p string) { err := os.Remove(p) if err != nil { t.Logf("error removing %q: %v", p, err) } } // Reset the input defer func() { rm(Opt.FilesFromRaw[0]) }() _, err := NewFilter(&Opt) require.Error(t, err) require.Contains(t, err.Error(), "The usage of --files-from-raw overrides all other filters") } func TestNewFilterWithFilesFromAlone(t *testing.T) { Opt := DefaultOpt // Set up the input Opt.FilesFrom = []string{testFile(t, "#comment\nfiles1\nfiles2\n")} rm := func(p string) { err := os.Remove(p) if err != nil { t.Logf("error removing %q: %v", p, err) } } // Reset the input defer func() { rm(Opt.FilesFrom[0]) }() f, err := NewFilter(&Opt) require.NoError(t, err) assert.Len(t, f.files, 2) for _, name := range []string{"files1", "files2"} { _, ok := f.files[name] if !ok { t.Errorf("Didn't find file %q in f.files", name) } } } func TestNewFilterWithFilesFromRaw(t *testing.T) { Opt := DefaultOpt // Set up the input Opt.FilesFromRaw = []string{testFile(t, "#comment\nfiles1\nfiles2\n")} rm := func(p string) { err := os.Remove(p) if err != nil { t.Logf("error removing %q: %v", p, err) } } // Reset the input defer func() { rm(Opt.FilesFromRaw[0]) }() f, err := NewFilter(&Opt) require.NoError(t, err) assert.Len(t, f.files, 3) for _, name := range []string{"#comment", "files1", "files2"} { _, ok := f.files[name] if !ok { t.Errorf("Didn't find file %q in f.files", name) } } } func TestNewFilterFullExceptFilesFromOpt(t *testing.T) { 
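// Builds a filter from most of the Opt fields (everything except the
// files-from options) and checks the generated rules via DumpFilters.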
Opt := DefaultOpt mins := fs.SizeSuffix(100 * 1024) maxs := fs.SizeSuffix(1000 * 1024) // Set up the input Opt.DeleteExcluded = true Opt.FilterRule = []string{"- filter1", "- filter1b"} Opt.FilterFrom = []string{testFile(t, "#comment\n+ filter2\n- filter3\n")} Opt.ExcludeRule = []string{"exclude1"} Opt.ExcludeFrom = []string{testFile(t, "#comment\nexclude2\nexclude3\n")} Opt.IncludeRule = []string{"include1"} Opt.IncludeFrom = []string{testFile(t, "#comment\ninclude2\ninclude3\n")} Opt.MinSize = mins Opt.MaxSize = maxs rm := func(p string) { err := os.Remove(p) if err != nil { t.Logf("error removing %q: %v", p, err) } } // Reset the input defer func() { rm(Opt.FilterFrom[0]) rm(Opt.ExcludeFrom[0]) rm(Opt.IncludeFrom[0]) }() f, err := NewFilter(&Opt) require.NoError(t, err) assert.True(t, f.Opt.DeleteExcluded) assert.Equal(t, f.Opt.MinSize, mins) assert.Equal(t, f.Opt.MaxSize, maxs) got := f.DumpFilters() want := `--- File filter rules --- + (^|/)include1$ + (^|/)include2$ + (^|/)include3$ - (^|/)exclude1$ - (^|/)exclude2$ - (^|/)exclude3$ - (^|/)filter1$ - (^|/)filter1b$ + (^|/)filter2$ - (^|/)filter3$ - ^.*$ --- Directory filter rules --- + ^.*$ - ^.*$` assert.Equal(t, want, got) assert.False(t, f.InActive()) } type includeTest struct { in string size int64 modTime int64 want bool } func testInclude(t *testing.T, f *Filter, tests []includeTest) { for _, test := range tests { got := f.Include(test.in, test.size, time.Unix(test.modTime, 0)) assert.Equal(t, test.want, got, fmt.Sprintf("in=%q, size=%v, modTime=%v", test.in, test.size, time.Unix(test.modTime, 0))) } } type includeDirTest struct { in string want bool } func testDirInclude(t *testing.T, f *Filter, tests []includeDirTest) { for _, test := range tests { got, err := f.IncludeDirectory(context.Background(), nil)(test.in) require.NoError(t, err) assert.Equal(t, test.want, got, test.in) } } func TestNewFilterIncludeFiles(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) err = f.AddFile("file1.jpg") require.NoError(t, err) err = f.AddFile("/file2.jpg") require.NoError(t, err) assert.Equal(t, FilesMap{ "file1.jpg": {}, "file2.jpg": {}, }, f.files) assert.Equal(t, FilesMap{}, f.dirs) testInclude(t, f, []includeTest{ {"file1.jpg", 0, 0, true}, {"file2.jpg", 1, 0, true}, {"potato/file2.jpg", 2, 0, false}, {"file3.jpg", 3, 0, false}, }) assert.False(t, f.InActive()) } func TestNewFilterIncludeFilesDirs(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) for _, path := range []string{ "path/to/dir/file1.png", "/path/to/dir/file2.png", "/path/to/file3.png", "/path/to/dir2/file4.png", } { err = f.AddFile(path) require.NoError(t, err) } assert.Equal(t, FilesMap{ "path": {}, "path/to": {}, "path/to/dir": {}, "path/to/dir2": {}, }, f.dirs) testDirInclude(t, f, []includeDirTest{ {"path", true}, {"path/to", true}, {"path/to/", true}, {"/path/to", true}, {"/path/to/", true}, {"path/to/dir", true}, {"path/to/dir2", true}, {"path/too", false}, {"path/three", false}, {"four", false}, }) } func TestNewFilterHaveFilesFrom(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) assert.Equal(t, false, f.HaveFilesFrom()) require.NoError(t, f.AddFile("file")) assert.Equal(t, true, f.HaveFilesFrom()) } func TestNewFilterMakeListR(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) // Check error if no files listR := f.MakeListR(context.Background(), nil) err = listR(context.Background(), "", nil) assert.EqualError(t, err, errFilesFromNotSet.Error()) // Add some files for _, path := range []string{ 
"path/to/dir/file1.png", "/path/to/dir/file2.png", "/path/to/file3.png", "/path/to/dir2/file4.png", "notfound", } { err = f.AddFile(path) require.NoError(t, err) } assert.Equal(t, 5, len(f.files)) // NewObject function for MakeListR newObjects := FilesMap{} var newObjectMu sync.Mutex NewObject := func(ctx context.Context, remote string) (fs.Object, error) { newObjectMu.Lock() defer newObjectMu.Unlock() if remote == "notfound" { return nil, fs.ErrorObjectNotFound } else if remote == "error" { return nil, assert.AnError } newObjects[remote] = struct{}{} return mockobject.New(remote), nil } // Callback for ListRFn listRObjects := FilesMap{} var callbackMu sync.Mutex listRcallback := func(entries fs.DirEntries) error { callbackMu.Lock() defer callbackMu.Unlock() for _, entry := range entries { listRObjects[entry.Remote()] = struct{}{} } return nil } // Make the listR and call it listR = f.MakeListR(context.Background(), NewObject) err = listR(context.Background(), "", listRcallback) require.NoError(t, err) // Check that the correct objects were created and listed want := FilesMap{ "path/to/dir/file1.png": {}, "path/to/dir/file2.png": {}, "path/to/file3.png": {}, "path/to/dir2/file4.png": {}, } assert.Equal(t, want, newObjects) assert.Equal(t, want, listRObjects) // Now check an error is returned from NewObject require.NoError(t, f.AddFile("error")) err = listR(context.Background(), "", listRcallback) require.EqualError(t, err, assert.AnError.Error()) } func TestNewFilterMinSize(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) f.Opt.MinSize = 100 testInclude(t, f, []includeTest{ {"file1.jpg", 100, 0, true}, {"file2.jpg", 101, 0, true}, {"potato/file2.jpg", 99, 0, false}, }) assert.False(t, f.InActive()) } func TestNewFilterMaxSize(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) f.Opt.MaxSize = 100 testInclude(t, f, []includeTest{ {"file1.jpg", 100, 0, true}, {"file2.jpg", 101, 0, false}, {"potato/file2.jpg", 99, 0, true}, }) assert.False(t, f.InActive()) } func TestNewFilterMinAndMaxAge(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) f.ModTimeFrom = time.Unix(1440000002, 0) f.ModTimeTo = time.Unix(1440000003, 0) testInclude(t, f, []includeTest{ {"file1.jpg", 100, 1440000000, false}, {"file2.jpg", 101, 1440000001, false}, {"file3.jpg", 102, 1440000002, true}, {"potato/file1.jpg", 98, 1440000003, true}, {"potato/file2.jpg", 99, 1440000004, false}, }) assert.False(t, f.InActive()) } func TestNewFilterMinAge(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) f.ModTimeTo = time.Unix(1440000002, 0) testInclude(t, f, []includeTest{ {"file1.jpg", 100, 1440000000, true}, {"file2.jpg", 101, 1440000001, true}, {"file3.jpg", 102, 1440000002, true}, {"potato/file1.jpg", 98, 1440000003, false}, {"potato/file2.jpg", 99, 1440000004, false}, }) assert.False(t, f.InActive()) } func TestNewFilterMaxAge(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) f.ModTimeFrom = time.Unix(1440000002, 0) testInclude(t, f, []includeTest{ {"file1.jpg", 100, 1440000000, false}, {"file2.jpg", 101, 1440000001, false}, {"file3.jpg", 102, 1440000002, true}, {"potato/file1.jpg", 98, 1440000003, true}, {"potato/file2.jpg", 99, 1440000004, true}, }) assert.False(t, f.InActive()) } func TestNewFilterMatches(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) add := func(s string) { err := f.AddRule(s) require.NoError(t, err) } add("+ cleared") add("!") add("- /file1.jpg") add("+ /file2.png") add("+ /*.jpg") add("- /*.png") add("- /potato") add("+ 
/sausage1") add("+ /sausage2*") add("+ /sausage3**") add("+ /a/*.jpg") add("- *") testInclude(t, f, []includeTest{ {"cleared", 100, 0, false}, {"file1.jpg", 100, 0, false}, {"file2.png", 100, 0, true}, {"FILE2.png", 100, 0, false}, {"afile2.png", 100, 0, false}, {"file3.jpg", 101, 0, true}, {"file4.png", 101, 0, false}, {"potato", 101, 0, false}, {"sausage1", 101, 0, true}, {"sausage1/potato", 101, 0, false}, {"sausage2potato", 101, 0, true}, {"sausage2/potato", 101, 0, false}, {"sausage3/potato", 101, 0, true}, {"a/one.jpg", 101, 0, true}, {"a/one.png", 101, 0, false}, {"unicorn", 99, 0, false}, }) testDirInclude(t, f, []includeDirTest{ {"sausage1", false}, {"sausage2", false}, {"sausage2/sub", false}, {"sausage2/sub/dir", false}, {"sausage3", true}, {"SAUSAGE3", false}, {"sausage3/sub", true}, {"sausage3/sub/dir", true}, {"sausage4", false}, {"a", true}, }) assert.False(t, f.InActive()) } func TestNewFilterMatchesIgnoreCase(t *testing.T) { f, err := NewFilter(nil) require.NoError(t, err) f.Opt.IgnoreCase = true add := func(s string) { err := f.AddRule(s) require.NoError(t, err) } add("+ /file2.png") add("+ /sausage3**") add("- *") testInclude(t, f, []includeTest{ {"file2.png", 100, 0, true}, {"FILE2.png", 100, 0, true}, }) testDirInclude(t, f, []includeDirTest{ {"sausage3", true}, {"SAUSAGE3", true}, }) assert.False(t, f.InActive()) } func TestFilterAddDirRuleOrFileRule(t *testing.T) { for _, test := range []struct { included bool glob string want string }{ { false, "potato", `--- File filter rules --- - (^|/)potato$ --- Directory filter rules ---`, }, { true, "potato", `--- File filter rules --- + (^|/)potato$ --- Directory filter rules --- + ^.*$`, }, { false, "*", `--- File filter rules --- - (^|/)[^/]*$ --- Directory filter rules --- - ^.*$`, }, { true, "*", `--- File filter rules --- + (^|/)[^/]*$ --- Directory filter rules --- + ^.*$`, }, { false, ".*{,/**}", `--- File filter rules --- - (^|/)\.[^/]*(|/.*)$ --- Directory filter rules --- - (^|/)\.[^/]*(|/.*)$`, }, { true, "a/b/c/d", `--- File filter rules --- + (^|/)a/b/c/d$ --- Directory filter rules --- + (^|/)a/b/c/$ + (^|/)a/b/$ + (^|/)a/$`, }, } { f, err := NewFilter(nil) require.NoError(t, err) err = f.Add(test.included, test.glob) require.NoError(t, err) got := f.DumpFilters() assert.Equal(t, test.want, got, fmt.Sprintf("Add(%v, %q)", test.included, test.glob)) } } func testFilterForEachLine(t *testing.T, useStdin, raw bool) { file := testFile(t, `; comment one # another comment two # indented comment three four five six `) defer func() { err := os.Remove(file) require.NoError(t, err) }() lines := []string{} fileName := file if useStdin { in, err := os.Open(file) require.NoError(t, err) oldStdin := os.Stdin os.Stdin = in defer func() { os.Stdin = oldStdin _ = in.Close() }() fileName = "-" } err := forEachLine(fileName, raw, func(s string) error { lines = append(lines, s) return nil }) require.NoError(t, err) if raw { assert.Equal(t, "; comment,one,# another comment,,,two, # indented comment,three ,four ,five, six ", strings.Join(lines, ",")) } else { assert.Equal(t, "one,two,three,four,five,six", strings.Join(lines, ",")) } } func TestFilterForEachLine(t *testing.T) { testFilterForEachLine(t, false, false) } func TestFilterForEachLineStdin(t *testing.T) { testFilterForEachLine(t, true, false) } func TestFilterForEachLineWithRaw(t *testing.T) { testFilterForEachLine(t, false, true) } func TestFilterForEachLineStdinWithRaw(t *testing.T) { testFilterForEachLine(t, true, true) } func TestFilterMatchesFromDocs(t *testing.T) { for 
_, test := range []struct { glob string included bool file string ignoreCase bool }{ {"file.jpg", true, "file.jpg", false}, {"file.jpg", true, "directory/file.jpg", false}, {"file.jpg", false, "afile.jpg", false}, {"file.jpg", false, "directory/afile.jpg", false}, {"/file.jpg", true, "file.jpg", false}, {"/file.jpg", false, "afile.jpg", false}, {"/file.jpg", false, "directory/file.jpg", false}, {"*.jpg", true, "file.jpg", false}, {"*.jpg", true, "directory/file.jpg", false}, {"*.jpg", false, "file.jpg/anotherfile.png", false}, {"dir/**", true, "dir/file.jpg", false}, {"dir/**", true, "dir/dir1/dir2/file.jpg", false}, {"dir/**", false, "directory/file.jpg", false}, {"dir/**", false, "adir/file.jpg", false}, {"l?ss", true, "less", false}, {"l?ss", true, "lass", false}, {"l?ss", false, "floss", false}, {"h[ae]llo", true, "hello", false}, {"h[ae]llo", true, "hallo", false}, {"h[ae]llo", false, "hullo", false}, {"{one,two}_potato", true, "one_potato", false}, {"{one,two}_potato", true, "two_potato", false}, {"{one,two}_potato", false, "three_potato", false}, {"{one,two}_potato", false, "_potato", false}, {"\\*.jpg", true, "*.jpg", false}, {"\\\\.jpg", true, "\\.jpg", false}, {"\\[one\\].jpg", true, "[one].jpg", false}, {"potato", true, "potato", false}, {"potato", false, "POTATO", false}, {"potato", true, "potato", true}, {"potato", true, "POTATO", true}, } { f, err := NewFilter(nil) require.NoError(t, err) if test.ignoreCase { f.Opt.IgnoreCase = true } err = f.Add(true, test.glob) require.NoError(t, err) err = f.Add(false, "*") require.NoError(t, err) included := f.Include(test.file, 0, time.Unix(0, 0)) if included != test.included { t.Errorf("%q match %q: want %v got %v", test.glob, test.file, test.included, included) } } } func TestNewFilterUsesDirectoryFilters(t *testing.T) { for i, test := range []struct { rules []string want bool }{ { rules: []string{}, want: false, }, { rules: []string{ "+ *", }, want: false, }, { rules: []string{ "+ *.jpg", "- *", }, want: false, }, { rules: []string{ "- *.jpg", }, want: false, }, { rules: []string{ "- *.jpg", "+ *", }, want: false, }, { rules: []string{ "+ dir/*.jpg", "- *", }, want: true, }, { rules: []string{ "+ dir/**", }, want: true, }, { rules: []string{ "- dir/**", }, want: true, }, { rules: []string{ "- /dir/**", }, want: true, }, } { what := fmt.Sprintf("#%d", i) f, err := NewFilter(nil) require.NoError(t, err) for _, rule := range test.rules { err := f.AddRule(rule) require.NoError(t, err, what) } got := f.UsesDirectoryFilters() assert.Equal(t, test.want, got, fmt.Sprintf("%s: %s", what, f.DumpFilters())) } } rclone-1.53.3/fs/filter/filterflags/000077500000000000000000000000001375552240400172735ustar00rootroot00000000000000rclone-1.53.3/fs/filter/filterflags/filterflags.go000066400000000000000000000050031375552240400221220ustar00rootroot00000000000000// Package filterflags implements command line flags to set up a filter package filterflags import ( "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/rc" "github.com/spf13/pflag" ) // Options set by command line flags var ( Opt = filter.DefaultOpt ) // Reload the filters from the flags func Reload() (err error) { filter.Active, err = filter.NewFilter(&Opt) return err } // AddFlags adds the non filing system specific flags to the command func AddFlags(flagSet *pflag.FlagSet) { rc.AddOptionReload("filter", &Opt, Reload) flags.BoolVarP(flagSet, &Opt.DeleteExcluded, "delete-excluded", "", false, "Delete files on dest excluded from sync") 
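// Each flag below binds directly to a field of Opt (initialised from
// filter.DefaultOpt above), so Reload can rebuild filter.Active from
// the current flag values - this is what the
// rc.AddOptionReload("filter", &Opt, Reload) call above wires up.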
flags.StringArrayVarP(flagSet, &Opt.FilterRule, "filter", "f", nil, "Add a file-filtering rule") flags.StringArrayVarP(flagSet, &Opt.FilterFrom, "filter-from", "", nil, "Read filtering patterns from a file (use - to read from stdin)") flags.StringArrayVarP(flagSet, &Opt.ExcludeRule, "exclude", "", nil, "Exclude files matching pattern") flags.StringArrayVarP(flagSet, &Opt.ExcludeFrom, "exclude-from", "", nil, "Read exclude patterns from file (use - to read from stdin)") flags.StringVarP(flagSet, &Opt.ExcludeFile, "exclude-if-present", "", "", "Exclude directories if filename is present") flags.StringArrayVarP(flagSet, &Opt.IncludeRule, "include", "", nil, "Include files matching pattern") flags.StringArrayVarP(flagSet, &Opt.IncludeFrom, "include-from", "", nil, "Read include patterns from file (use - to read from stdin)") flags.StringArrayVarP(flagSet, &Opt.FilesFrom, "files-from", "", nil, "Read list of source-file names from file (use - to read from stdin)") flags.StringArrayVarP(flagSet, &Opt.FilesFromRaw, "files-from-raw", "", nil, "Read list of source-file names from file without any processing of lines (use - to read from stdin)") flags.FVarP(flagSet, &Opt.MinAge, "min-age", "", "Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y") flags.FVarP(flagSet, &Opt.MaxAge, "max-age", "", "Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y") flags.FVarP(flagSet, &Opt.MinSize, "min-size", "", "Only transfer files bigger than this in k or suffix b|k|M|G") flags.FVarP(flagSet, &Opt.MaxSize, "max-size", "", "Only transfer files smaller than this in k or suffix b|k|M|G") flags.BoolVarP(flagSet, &Opt.IgnoreCase, "ignore-case", "", false, "Ignore case in filters (case insensitive)") //cvsExclude = BoolP("cvs-exclude", "C", false, "Exclude files in the same way CVS does") } rclone-1.53.3/fs/filter/glob.go000066400000000000000000000067561375552240400162610ustar00rootroot00000000000000// rsync style glob parser package filter import ( "bytes" "regexp" "strings" "github.com/pkg/errors" ) // globToRegexp converts an rsync style glob to a regexp // // documented in filtering.md func globToRegexp(glob string, ignoreCase bool) (*regexp.Regexp, error) { var re bytes.Buffer if ignoreCase { _, _ = re.WriteString("(?i)") } if strings.HasPrefix(glob, "/") { glob = glob[1:] _, _ = re.WriteRune('^') } else { _, _ = re.WriteString("(^|/)") } consecutiveStars := 0 insertStars := func() error { if consecutiveStars > 0 { switch consecutiveStars { case 1: _, _ = re.WriteString(`[^/]*`) case 2: _, _ = re.WriteString(`.*`) default: return errors.Errorf("too many stars in %q", glob) } } consecutiveStars = 0 return nil } inBraces := false inBrackets := 0 slashed := false for _, c := range glob { if slashed { _, _ = re.WriteRune(c) slashed = false continue } if c != '*' { err := insertStars() if err != nil { return nil, err } } if inBrackets > 0 { _, _ = re.WriteRune(c) if c == '[' { inBrackets++ } if c == ']' { inBrackets-- } continue } switch c { case '\\': _, _ = re.WriteRune(c) slashed = true case '*': consecutiveStars++ case '?': _, _ = re.WriteString(`[^/]`) case '[': _, _ = re.WriteRune(c) inBrackets++ case ']': return nil, errors.Errorf("mismatched ']' in glob %q", glob) case '{': if inBraces { return nil, errors.Errorf("can't nest '{' '}' in glob %q", glob) } inBraces = true _, _ = re.WriteRune('(') case '}': if !inBraces { return nil, errors.Errorf("mismatched '{' and '}' in glob %q", glob) } _, _ = re.WriteRune(')') inBraces = false case ',': if inBraces { _, _ = 
re.WriteRune('|') } else { _, _ = re.WriteRune(c) } case '.', '+', '(', ')', '|', '^', '$': // regexp meta characters not dealt with above _, _ = re.WriteRune('\\') _, _ = re.WriteRune(c) default: _, _ = re.WriteRune(c) } } err := insertStars() if err != nil { return nil, err } if inBrackets > 0 { return nil, errors.Errorf("mismatched '[' and ']' in glob %q", glob) } if inBraces { return nil, errors.Errorf("mismatched '{' and '}' in glob %q", glob) } _, _ = re.WriteRune('$') result, err := regexp.Compile(re.String()) if err != nil { return nil, errors.Wrapf(err, "bad glob pattern %q (regexp %q)", glob, re.String()) } return result, nil } var ( // Can't deal with / or ** in {} tooHardRe = regexp.MustCompile(`{[^{}]*(\*\*|/)[^{}]*}`) // Squash all / squashSlash = regexp.MustCompile(`/{2,}`) ) // globToDirGlobs takes a file glob and turns it into a series of // directory globs. When matched with a directory (with a trailing /) // this should answer the question as to whether this glob could be in // this directory. func globToDirGlobs(glob string) (out []string) { if tooHardRe.MatchString(glob) { // Can't figure this one out so return any directory might match out = append(out, "/**") return out } // Get rid of multiple /s glob = squashSlash.ReplaceAllString(glob, "/") // Split on / or ** // (** can contain /) for { i := strings.LastIndex(glob, "/") j := strings.LastIndex(glob, "**") what := "" if j > i { i = j what = "**" } if i < 0 { if len(out) == 0 { out = append(out, "/**") } break } glob = glob[:i] newGlob := glob + what + "/" if len(out) == 0 || out[len(out)-1] != newGlob { out = append(out, newGlob) } } return out } rclone-1.53.3/fs/filter/glob_test.go000066400000000000000000000071421375552240400173060ustar00rootroot00000000000000package filter import ( "testing" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestGlobToRegexp(t *testing.T) { for _, test := range []struct { in string want string error string }{ {``, `(^|/)$`, ``}, {`potato`, `(^|/)potato$`, ``}, {`potato,sausage`, `(^|/)potato,sausage$`, ``}, {`/potato`, `^potato$`, ``}, {`potato?sausage`, `(^|/)potato[^/]sausage$`, ``}, {`potat[oa]`, `(^|/)potat[oa]$`, ``}, {`potat[a-z]or`, `(^|/)potat[a-z]or$`, ``}, {`potat[[:alpha:]]or`, `(^|/)potat[[:alpha:]]or$`, ``}, {`'.' '+' '(' ')' '|' '^' '$'`, `(^|/)'\.' 
'\+' '\(' '\)' '\|' '\^' '\$'$`, ``}, {`*.jpg`, `(^|/)[^/]*\.jpg$`, ``}, {`a{b,c,d}e`, `(^|/)a(b|c|d)e$`, ``}, {`potato**`, `(^|/)potato.*$`, ``}, {`potato**sausage`, `(^|/)potato.*sausage$`, ``}, {`*.p[lm]`, `(^|/)[^/]*\.p[lm]$`, ``}, {`[\[\]]`, `(^|/)[\[\]]$`, ``}, {`***potato`, `(^|/)`, `too many stars`}, {`***`, `(^|/)`, `too many stars`}, {`ab]c`, `(^|/)`, `mismatched ']'`}, {`ab[c`, `(^|/)`, `mismatched '[' and ']'`}, {`ab{{cd`, `(^|/)`, `can't nest`}, {`ab{}}cd`, `(^|/)`, `mismatched '{' and '}'`}, {`ab}c`, `(^|/)`, `mismatched '{' and '}'`}, {`ab{c`, `(^|/)`, `mismatched '{' and '}'`}, {`*.{jpg,png,gif}`, `(^|/)[^/]*\.(jpg|png|gif)$`, ``}, {`[a--b]`, `(^|/)`, `bad glob pattern`}, {`a\*b`, `(^|/)a\*b$`, ``}, {`a\\b`, `(^|/)a\\b$`, ``}, } { for _, ignoreCase := range []bool{false, true} { gotRe, err := globToRegexp(test.in, ignoreCase) if test.error == "" { prefix := "" if ignoreCase { prefix = "(?i)" } got := gotRe.String() require.NoError(t, err, test.in) assert.Equal(t, prefix+test.want, got, test.in) } else { require.Error(t, err, test.in) assert.Contains(t, err.Error(), test.error, test.in) assert.Nil(t, gotRe) } } } } func TestGlobToDirGlobs(t *testing.T) { for _, test := range []struct { in string want []string }{ {`*`, []string{"/**"}}, {`/*`, []string{"/"}}, {`*.jpg`, []string{"/**"}}, {`/*.jpg`, []string{"/"}}, {`//*.jpg`, []string{"/"}}, {`///*.jpg`, []string{"/"}}, {`/a/*.jpg`, []string{"/a/", "/"}}, {`/a//*.jpg`, []string{"/a/", "/"}}, {`/a///*.jpg`, []string{"/a/", "/"}}, {`/a/b/*.jpg`, []string{"/a/b/", "/a/", "/"}}, {`a/*.jpg`, []string{"a/"}}, {`a/b/*.jpg`, []string{"a/b/", "a/"}}, {`*/*/*.jpg`, []string{"*/*/", "*/"}}, {`a/b/`, []string{"a/b/", "a/"}}, {`a/b`, []string{"a/"}}, {`a/b/*.{jpg,png,gif}`, []string{"a/b/", "a/"}}, {`/a/{jpg,png,gif}/*.{jpg,png,gif}`, []string{"/a/{jpg,png,gif}/", "/a/", "/"}}, {`a/{a,a*b,a**c}/d/`, []string{"/**"}}, {`/a/{a,a*b,a/c,d}/d/`, []string{"/**"}}, {`**`, []string{"**/"}}, {`a**`, []string{"a**/"}}, {`a**b`, []string{"a**/"}}, {`a**b**c**d`, []string{"a**b**c**/", "a**b**/", "a**/"}}, {`a**b/c**d`, []string{"a**b/c**/", "a**b/", "a**/"}}, {`/A/a**b/B/c**d/C/`, []string{"/A/a**b/B/c**d/C/", "/A/a**b/B/c**d/", "/A/a**b/B/c**/", "/A/a**b/B/", "/A/a**b/", "/A/a**/", "/A/", "/"}}, {`/var/spool/**/ncw`, []string{"/var/spool/**/", "/var/spool/", "/var/", "/"}}, {`var/spool/**/ncw/`, []string{"var/spool/**/ncw/", "var/spool/**/", "var/spool/", "var/"}}, {"/file1.jpg", []string{`/`}}, {"/file2.png", []string{`/`}}, {"/*.jpg", []string{`/`}}, {"/*.png", []string{`/`}}, {"/potato", []string{`/`}}, {"/sausage1", []string{`/`}}, {"/sausage2*", []string{`/`}}, {"/sausage3**", []string{`/sausage3**/`, "/"}}, {"/a/*.jpg", []string{`/a/`, "/"}}, } { _, err := globToRegexp(test.in, false) assert.NoError(t, err) got := globToDirGlobs(test.in) assert.Equal(t, test.want, got, test.in) } } rclone-1.53.3/fs/fingerprint.go000066400000000000000000000027301375552240400163640ustar00rootroot00000000000000package fs import ( "context" "fmt" "strings" "github.com/rclone/rclone/fs/hash" ) // Fingerprint produces a unique-ish string for an object. // // This is for detecting whether an object has changed since we last // saw it, not for checking object identity between two different // remotes - operations.Equal should be used for that. // // If fast is set then Fingerprint will only include attributes where // usually another operation is not required to fetch them. For // example if fast is set then this won't include hashes on the local // backend. 
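// As a concrete example (values taken from TestFingerprint below), a
// 4 byte object with an MD5 hash fingerprints as:
//
//	4,0001-01-01 00:00:00 +0000 UTC,8d777f385d3dfec8815d20f7496026dc
//
// while with fast set on a backend with SlowHash the hash is
// dropped, giving:
//
//	4,0001-01-01 00:00:00 +0000 UTC
//
// and with both SlowModTime and SlowHash set only the size remains:
// "4".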
func Fingerprint(ctx context.Context, o ObjectInfo, fast bool) string { var ( out strings.Builder f = o.Fs() features = f.Features() ) fmt.Fprintf(&out, "%d", o.Size()) // Whether we want to do a slow operation or not // // fast true false true false // opIsSlow true true false false // do Op false true true true // // If !fast (slow) do the operation or if !OpIsSlow == // OpIsFast do the operation. // // Eg don't do this for S3 where modtimes are expensive if !fast || !features.SlowModTime { if f.Precision() != ModTimeNotSupported { fmt.Fprintf(&out, ",%v", o.ModTime(ctx).UTC()) } } // Eg don't do this for SFTP/local where hashes are expensive? if !fast || !features.SlowHash { hashType := f.Hashes().GetOne() if hashType != hash.None { hash, err := o.Hash(ctx, hashType) if err == nil { fmt.Fprintf(&out, ",%v", hash) } } } return out.String() } rclone-1.53.3/fs/fingerprint_test.go000066400000000000000000000033721375552240400174260ustar00rootroot00000000000000package fs_test import ( "context" "fmt" "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fstest/mockfs" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" ) func TestFingerprint(t *testing.T) { ctx := context.Background() f := mockfs.NewFs("test", "root") f.SetHashes(hash.NewHashSet(hash.MD5)) for i, test := range []struct { fast bool slowModTime bool slowHash bool want string }{ {fast: false, slowModTime: false, slowHash: false, want: "4,0001-01-01 00:00:00 +0000 UTC,8d777f385d3dfec8815d20f7496026dc"}, {fast: false, slowModTime: false, slowHash: true, want: "4,0001-01-01 00:00:00 +0000 UTC,8d777f385d3dfec8815d20f7496026dc"}, {fast: false, slowModTime: true, slowHash: false, want: "4,0001-01-01 00:00:00 +0000 UTC,8d777f385d3dfec8815d20f7496026dc"}, {fast: false, slowModTime: true, slowHash: true, want: "4,0001-01-01 00:00:00 +0000 UTC,8d777f385d3dfec8815d20f7496026dc"}, {fast: true, slowModTime: false, slowHash: false, want: "4,0001-01-01 00:00:00 +0000 UTC,8d777f385d3dfec8815d20f7496026dc"}, {fast: true, slowModTime: false, slowHash: true, want: "4,0001-01-01 00:00:00 +0000 UTC"}, {fast: true, slowModTime: true, slowHash: false, want: "4,8d777f385d3dfec8815d20f7496026dc"}, {fast: true, slowModTime: true, slowHash: true, want: "4"}, } { what := fmt.Sprintf("#%d fast=%v,slowModTime=%v,slowHash=%v", i, test.fast, test.slowModTime, test.slowHash) o := mockobject.New("potato").WithContent([]byte("data"), mockobject.SeekModeRegular) o.SetFs(f) f.Features().SlowModTime = test.slowModTime f.Features().SlowHash = test.slowHash got := fs.Fingerprint(ctx, o, test.fast) assert.Equal(t, test.want, got, what) } } rclone-1.53.3/fs/fs.go000066400000000000000000001301201375552240400144400ustar00rootroot00000000000000// Package fs is a generic file system interface for rclone object storage systems package fs import ( "context" "encoding/json" "fmt" "io" "io/ioutil" "log" "math" "os" "path/filepath" "reflect" "sort" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fspath" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/lib/pacer" ) // EntryType can be associated with remote paths to identify their type type EntryType int // Constants const ( // ModTimeNotSupported is a very large precision value to show // mod time isn't supported on this Fs ModTimeNotSupported = 100 * 365 * 24 * time.Hour // MaxLevel is 
a sentinel representing an infinite depth for listings MaxLevel = math.MaxInt32 // EntryDirectory should be used to classify remote paths in directories EntryDirectory EntryType = iota // 0 // EntryObject should be used to classify remote paths in objects EntryObject // 1 ) // Globals var ( // Filesystem registry Registry []*RegInfo // ErrorNotFoundInConfigFile is returned by NewFs if not found in config file ErrorNotFoundInConfigFile = errors.New("didn't find section in config file") ErrorCantPurge = errors.New("can't purge directory") ErrorCantCopy = errors.New("can't copy object - incompatible remotes") ErrorCantMove = errors.New("can't move object - incompatible remotes") ErrorCantDirMove = errors.New("can't move directory - incompatible remotes") ErrorCantUploadEmptyFiles = errors.New("can't upload empty files to this remote") ErrorDirExists = errors.New("can't copy directory - destination already exists") ErrorCantSetModTime = errors.New("can't set modified time") ErrorCantSetModTimeWithoutDelete = errors.New("can't set modified time without deleting existing object") ErrorDirNotFound = errors.New("directory not found") ErrorObjectNotFound = errors.New("object not found") ErrorLevelNotSupported = errors.New("level value not supported") ErrorListAborted = errors.New("list aborted") ErrorListBucketRequired = errors.New("bucket or container name is needed in remote") ErrorIsFile = errors.New("is a file not a directory") ErrorNotAFile = errors.New("is not a regular file") ErrorNotDeleting = errors.New("not deleting files as there were IO errors") ErrorNotDeletingDirs = errors.New("not deleting directories as there were IO errors") ErrorOverlapping = errors.New("can't sync or move files on overlapping remotes") ErrorDirectoryNotEmpty = errors.New("directory not empty") ErrorImmutableModified = errors.New("immutable file modified") ErrorPermissionDenied = errors.New("permission denied") ErrorCantShareDirectories = errors.New("this backend can't share directories with link") ErrorNotImplemented = errors.New("optional feature not implemented") ErrorCommandNotFound = errors.New("command not found") ) // RegInfo provides information about a filesystem type RegInfo struct { // Name of this fs Name string // Description of this fs - defaults to Name Description string // Prefix for command line flags for this fs - defaults to Name if not set Prefix string // Create a new file system. If root refers to an existing // object, then it should return an Fs which points to // the parent of that object and ErrorIsFile. NewFs func(name string, root string, config configmap.Mapper) (Fs, error) `json:"-"` // Function to call to help with config Config func(name string, config configmap.Mapper) `json:"-"` // Options for the Fs configuration Options Options // The command help, if any CommandHelp []CommandHelp } // FileName returns the on disk file name for this backend func (ri *RegInfo) FileName() string { return strings.Replace(ri.Name, " ", "", -1) } // Options is a slice of configuration Option for a backend type Options []Option // Set the default values for the options func (os Options) setValues() { for i := range os { o := &os[i] if o.Default == nil { o.Default = "" } } } // Get the Option corresponding to name or return nil if not found func (os Options) Get(name string) *Option { for i := range os { opt := &os[i] if opt.Name == name { return opt } } return nil } // OptionVisibility controls whether the options are visible in the // configurator or the command line. 
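//
// A hedged sketch of how a backend might use it when registering a
// hypothetical option (the backend and option names are illustrative
// only):
//
//	fs.Register(&fs.RegInfo{
//		Name: "mybackend",
//		Options: fs.Options{{
//			Name: "internal_token",
//			Help: "For internal use only.",
//			Hide: fs.OptionHideBoth, // show in neither flags nor the config wizard
//		}},
//	})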
type OptionVisibility byte // Constants for Option.Hide const ( OptionHideCommandLine OptionVisibility = 1 << iota OptionHideConfigurator OptionHideBoth = OptionHideCommandLine | OptionHideConfigurator ) // Option describes an option for the config wizard // // This also describes command line options and environment variables type Option struct { Name string // name of the option in snake_case Help string // Help, the first line only is used for the command line help Provider string // Set to filter on provider Default interface{} // default value, nil => "" Value interface{} // value to be set by flags Examples OptionExamples `json:",omitempty"` // config examples ShortOpt string // the short option for this if required Hide OptionVisibility // set this to hide the config from the configurator or the command line Required bool // this option is required IsPassword bool // set if the option is a password NoPrefix bool // set if the option for this should not use the backend prefix Advanced bool // set if this is an advanced config option } // BaseOption is an alias for Option used internally type BaseOption Option // MarshalJSON turns an Option into JSON // // It adds some generated fields for ease of use // - DefaultStr - a string rendering of Default // - ValueStr - a string rendering of Value // - Type - the type of the option func (o *Option) MarshalJSON() ([]byte, error) { return json.Marshal(struct { BaseOption DefaultStr string ValueStr string Type string }{ BaseOption: BaseOption(*o), DefaultStr: fmt.Sprint(o.Default), ValueStr: o.String(), Type: o.Type(), }) } // GetValue gets the current value which is the default if not set func (o *Option) GetValue() interface{} { val := o.Value if val == nil { val = o.Default if val == nil { val = "" } } return val } // String turns Option into a string func (o *Option) String() string { return fmt.Sprint(o.GetValue()) } // Set an Option from a string func (o *Option) Set(s string) (err error) { newValue, err := configstruct.StringToInterface(o.GetValue(), s) if err != nil { return err } o.Value = newValue return nil } // Type of the value func (o *Option) Type() string { return reflect.TypeOf(o.GetValue()).Name() } // FlagName for the option func (o *Option) FlagName(prefix string) string { name := strings.Replace(o.Name, "_", "-", -1) // convert snake_case to kebab-case if !o.NoPrefix { name = prefix + "-" + name } return name } // EnvVarName for the option func (o *Option) EnvVarName(prefix string) string { return OptionToEnv(prefix + "-" + o.Name) } // OptionExamples is a slice of examples type OptionExamples []OptionExample // Len is part of sort.Interface. func (os OptionExamples) Len() int { return len(os) } // Swap is part of sort.Interface. func (os OptionExamples) Swap(i, j int) { os[i], os[j] = os[j], os[i] } // Less is part of sort.Interface. func (os OptionExamples) Less(i, j int) bool { return os[i].Help < os[j].Help } // Sort sorts an OptionExamples func (os OptionExamples) Sort() { sort.Sort(os) } // OptionExample describes an example for an Option type OptionExample struct { Value string Help string Provider string } // Register a filesystem // // Fs modules should use this in an init() function func Register(info *RegInfo) { info.Options.setValues() if info.Prefix == "" { info.Prefix = info.Name } Registry = append(Registry, info) } // Fs is the interface a cloud storage system must provide type Fs interface { Info // List the objects and directories in dir into entries. 
The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. List(ctx context.Context, dir string) (entries DirEntries, err error) // NewObject finds the Object at remote. If it can't be found // it returns the error ErrorObjectNotFound. NewObject(ctx context.Context, remote string) (Object, error) // Put in to the remote path with the modTime given of the given size // // When called from outside an Fs by rclone, src.Size() will always be >= 0. // But for unknown-sized objects (indicated by src.Size() == -1), Put should either // return an error or upload it properly (rather than e.g. calling panic). // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error Put(ctx context.Context, in io.Reader, src ObjectInfo, options ...OpenOption) (Object, error) // Mkdir makes the directory (container, bucket) // // Shouldn't return an error if it already exists Mkdir(ctx context.Context, dir string) error // Rmdir removes the directory (container, bucket) if empty // // Return an error if it doesn't exist or isn't empty Rmdir(ctx context.Context, dir string) error } // Info provides a read only interface to information about a filesystem. type Info interface { // Name of the remote (as passed into NewFs) Name() string // Root of the remote (as passed into NewFs) Root() string // String returns a description of the FS String() string // Precision of the ModTimes in this Fs Precision() time.Duration // Returns the supported hash types of the filesystem Hashes() hash.Set // Features returns the optional features of this Fs Features() *Features } // Object is a filesystem like object provided by an Fs type Object interface { ObjectInfo // SetModTime sets the metadata on the object to set the modification date SetModTime(ctx context.Context, t time.Time) error // Open opens the file for read. Call Close() on the returned io.ReadCloser Open(ctx context.Context, options ...OpenOption) (io.ReadCloser, error) // Update in to the object with the modTime given of the given size // // When called from outside an Fs by rclone, src.Size() will always be >= 0. // But for unknown-sized objects (indicated by src.Size() == -1), Upload should either // return an error or update the object properly (rather than e.g. calling panic). Update(ctx context.Context, in io.Reader, src ObjectInfo, options ...OpenOption) error // Removes this object Remove(ctx context.Context) error } // ObjectInfo provides read only information about an object. type ObjectInfo interface { DirEntry // Fs returns read only access to the Fs that this object is part of Fs() Info // Hash returns the selected checksum of the file // If no checksum is available it returns "" Hash(ctx context.Context, ty hash.Type) (string, error) // Storable says whether this object can be stored Storable() bool } // DirEntry provides read only information about the common subset of // a Dir or Object. These are returned from directory listings - type // assert them into the correct type. 
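//
// A minimal sketch (assuming a context ctx and an Fs value f) of
// classifying the entries returned from a listing:
//
//	entries, err := f.List(ctx, "")
//	if err != nil {
//		return err
//	}
//	for _, entry := range entries {
//		switch x := entry.(type) {
//		case Object:
//			fmt.Printf("file %q (%d bytes)\n", x.Remote(), x.Size())
//		case Directory:
//			fmt.Printf("dir %q (%d items)\n", x.Remote(), x.Items())
//		}
//	}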
type DirEntry interface { // String returns a description of the Object String() string // Remote returns the remote path Remote() string // ModTime returns the modification date of the file // It should return a best guess if one isn't available ModTime(context.Context) time.Time // Size returns the size of the file Size() int64 } // Directory is a filesystem like directory provided by an Fs type Directory interface { DirEntry // Items returns the count of items in this directory or this // directory and subdirectories if known, -1 for unknown Items() int64 // ID returns the internal ID of this directory if known, or // "" otherwise ID() string } // MimeTyper is an optional interface for Object type MimeTyper interface { // MimeType returns the content type of the Object if // known, or "" if not MimeType(ctx context.Context) string } // IDer is an optional interface for Object type IDer interface { // ID returns the ID of the Object if known, or "" if not ID() string } // ObjectUnWrapper is an optional interface for Object type ObjectUnWrapper interface { // UnWrap returns the Object that this Object is wrapping or // nil if it isn't wrapping anything UnWrap() Object } // SetTierer is an optional interface for Object type SetTierer interface { // SetTier performs changing storage tier of the Object if // multiple storage classes supported SetTier(tier string) error } // GetTierer is an optional interface for Object type GetTierer interface { // GetTier returns storage tier or class of the Object GetTier() string } // FullObjectInfo contains all the read-only optional interfaces // // Use for checking making wrapping ObjectInfos implement everything type FullObjectInfo interface { ObjectInfo MimeTyper IDer ObjectUnWrapper GetTierer } // FullObject contains all the optional interfaces for Object // // Use for checking making wrapping Objects implement everything type FullObject interface { Object MimeTyper IDer ObjectUnWrapper GetTierer SetTierer } // ObjectOptionalInterfaces returns the names of supported and // unsupported optional interfaces for an Object func ObjectOptionalInterfaces(o Object) (supported, unsupported []string) { store := func(ok bool, name string) { if ok { supported = append(supported, name) } else { unsupported = append(unsupported, name) } } _, ok := o.(MimeTyper) store(ok, "MimeType") _, ok = o.(IDer) store(ok, "ID") _, ok = o.(ObjectUnWrapper) store(ok, "UnWrap") _, ok = o.(SetTierer) store(ok, "SetTier") _, ok = o.(GetTierer) store(ok, "GetTier") return supported, unsupported } // ListRCallback defines a callback function for ListR to use // // It is called for each tranche of entries read from the listing and // if it returns an error, the listing stops. 
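//
// A hedged sketch of driving a recursive listing with a callback that
// just counts entries (assuming a context ctx and an Fs value f whose
// ListR feature is non-nil):
//
//	var total int
//	err := f.Features().ListR(ctx, "", func(entries DirEntries) error {
//		total += len(entries)
//		return nil // returning an error here aborts the listing
//	})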
type ListRCallback func(entries DirEntries) error // ListRFn is defines the call used to recursively list a directory type ListRFn func(ctx context.Context, dir string, callback ListRCallback) error // NewUsageValue makes a valid value func NewUsageValue(value int64) *int64 { p := new(int64) *p = value return p } // Usage is returned by the About call // // If a value is nil then it isn't supported by that backend type Usage struct { Total *int64 `json:"total,omitempty"` // quota of bytes that can be used Used *int64 `json:"used,omitempty"` // bytes in use Trashed *int64 `json:"trashed,omitempty"` // bytes in trash Other *int64 `json:"other,omitempty"` // other usage eg gmail in drive Free *int64 `json:"free,omitempty"` // bytes which can be uploaded before reaching the quota Objects *int64 `json:"objects,omitempty"` // objects in the storage system } // WriterAtCloser wraps io.WriterAt and io.Closer type WriterAtCloser interface { io.WriterAt io.Closer } // Features describe the optional features of the Fs type Features struct { // Feature flags, whether Fs CaseInsensitive bool // has case insensitive files DuplicateFiles bool // allows duplicate files ReadMimeType bool // can read the mime type of objects WriteMimeType bool // can set the mime type of objects CanHaveEmptyDirectories bool // can have empty directories BucketBased bool // is bucket based (like s3, swift etc) BucketBasedRootOK bool // is bucket based and can use from root SetTier bool // allows set tier functionality on objects GetTier bool // allows to retrieve storage tier of objects ServerSideAcrossConfigs bool // can server side copy between different remotes of the same type IsLocal bool // is the local backend SlowModTime bool // if calling ModTime() generally takes an extra transaction SlowHash bool // if calling Hash() generally takes an extra transaction // Purge all files in the directory specified // // Implement this if you have a way of deleting all the files // quicker than just running Remove() on the result of List() // // Return an error if it doesn't exist Purge func(ctx context.Context, dir string) error // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy Copy func(ctx context.Context, src Object, remote string) (Object, error) // Move src to this remote using server side move operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove Move func(ctx context.Context, src Object, remote string) (Object, error) // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists DirMove func(ctx context.Context, src Fs, srcRemote, dstRemote string) error // ChangeNotify calls the passed function with a path // that has had changes. If the implementation // uses polling, it should adhere to the given interval. 
ChangeNotify func(context.Context, func(string, EntryType), <-chan time.Duration) // UnWrap returns the Fs that this Fs is wrapping UnWrap func() Fs // WrapFs returns the Fs that is wrapping this Fs WrapFs func() Fs // SetWrapper sets the Fs that is wrapping this Fs SetWrapper func(f Fs) // DirCacheFlush resets the directory cache - used in testing // as an optional interface DirCacheFlush func() // PublicLink generates a public link to the remote path (usually readable by anyone) PublicLink func(ctx context.Context, remote string, expire Duration, unlink bool) (string, error) // Put in to the remote path with the modTime given of the given size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error // // May create duplicates or return errors if src already // exists. PutUnchecked func(ctx context.Context, in io.Reader, src ObjectInfo, options ...OpenOption) (Object, error) // PutStream uploads to the remote path with the modTime given of indeterminate size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error PutStream func(ctx context.Context, in io.Reader, src ObjectInfo, options ...OpenOption) (Object, error) // MergeDirs merges the contents of all the directories passed // in into the first one and rmdirs the other directories. MergeDirs func(ctx context.Context, dirs []Directory) error // CleanUp the trash in the Fs // // Implement this if you have a way of emptying the trash or // otherwise cleaning up old versions of files. CleanUp func(ctx context.Context) error // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively that doing a directory traversal. ListR ListRFn // About gets quota information from the Fs About func(ctx context.Context) (*Usage, error) // OpenWriterAt opens with a handle for random access writes // // Pass in the remote desired and the size if known. // // It truncates any existing object OpenWriterAt func(ctx context.Context, remote string, size int64) (WriterAtCloser, error) // UserInfo returns info about the connected user UserInfo func(ctx context.Context) (map[string]string, error) // Disconnect the current user Disconnect func(ctx context.Context) error // Command the backend to run a named command // // The command run is name // args may be used to read arguments from // opts may be used to read optional arguments from // // The result should be capable of being JSON encoded // If it is a string or a []string it will be shown to the user // otherwise it will be JSON encoded and shown to the user like that Command func(ctx context.Context, name string, arg []string, opt map[string]string) (interface{}, error) } // Disable nil's out the named feature. If it isn't found then it // will log a message. 
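//
// For example, mirroring the tests below:
//
//	ft.Disable("copy")            // ft.Copy is now nil
//	ft.Disable("CaseInsensitive") // names are matched case insensitively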
func (ft *Features) Disable(name string) *Features { v := reflect.ValueOf(ft).Elem() vType := v.Type() for i := 0; i < v.NumField(); i++ { vName := vType.Field(i).Name field := v.Field(i) if strings.EqualFold(name, vName) { if !field.CanSet() { Errorf(nil, "Can't set Feature %q", name) } else { zero := reflect.Zero(field.Type()) field.Set(zero) Debugf(nil, "Reset feature %q", name) } } } return ft } // List returns a slice of all the possible feature names func (ft *Features) List() (out []string) { v := reflect.ValueOf(ft).Elem() vType := v.Type() for i := 0; i < v.NumField(); i++ { out = append(out, vType.Field(i).Name) } return out } // Enabled returns a map of features with keys showing whether they // are enabled or not func (ft *Features) Enabled() (features map[string]bool) { v := reflect.ValueOf(ft).Elem() vType := v.Type() features = make(map[string]bool, v.NumField()) for i := 0; i < v.NumField(); i++ { vName := vType.Field(i).Name field := v.Field(i) if field.Kind() == reflect.Func { // Can't compare functions features[vName] = !field.IsNil() } else { zero := reflect.Zero(field.Type()) features[vName] = field.Interface() != zero.Interface() } } return features } // DisableList nil's out the comma separated list of named features. // If it isn't found then it will log a message. func (ft *Features) DisableList(list []string) *Features { for _, feature := range list { ft.Disable(strings.TrimSpace(feature)) } return ft } // Fill fills in the function pointers in the Features struct from the // optional interfaces. It returns the original updated Features // struct passed in. func (ft *Features) Fill(f Fs) *Features { if do, ok := f.(Purger); ok { ft.Purge = do.Purge } if do, ok := f.(Copier); ok { ft.Copy = do.Copy } if do, ok := f.(Mover); ok { ft.Move = do.Move } if do, ok := f.(DirMover); ok { ft.DirMove = do.DirMove } if do, ok := f.(ChangeNotifier); ok { ft.ChangeNotify = do.ChangeNotify } if do, ok := f.(UnWrapper); ok { ft.UnWrap = do.UnWrap } if do, ok := f.(Wrapper); ok { ft.WrapFs = do.WrapFs ft.SetWrapper = do.SetWrapper } if do, ok := f.(DirCacheFlusher); ok { ft.DirCacheFlush = do.DirCacheFlush } if do, ok := f.(PublicLinker); ok { ft.PublicLink = do.PublicLink } if do, ok := f.(PutUncheckeder); ok { ft.PutUnchecked = do.PutUnchecked } if do, ok := f.(PutStreamer); ok { ft.PutStream = do.PutStream } if do, ok := f.(MergeDirser); ok { ft.MergeDirs = do.MergeDirs } if do, ok := f.(CleanUpper); ok { ft.CleanUp = do.CleanUp } if do, ok := f.(ListRer); ok { ft.ListR = do.ListR } if do, ok := f.(Abouter); ok { ft.About = do.About } if do, ok := f.(OpenWriterAter); ok { ft.OpenWriterAt = do.OpenWriterAt } if do, ok := f.(UserInfoer); ok { ft.UserInfo = do.UserInfo } if do, ok := f.(Disconnecter); ok { ft.Disconnect = do.Disconnect } if do, ok := f.(Commander); ok { ft.Command = do.Command } return ft.DisableList(Config.DisableFeatures) } // Mask the Features with the Fs passed in // // Only optional features which are implemented in both the original // Fs AND the one passed in will be advertised. Any features which // aren't in both will be set to false/nil, except for UnWrap/Wrap which // will be left untouched. 
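//
// A hedged sketch of the intent: a wrapping backend (crypt, cache,
// etc.) fills in its own Features and then ANDs them with those of
// the Fs it wraps, so it never advertises, say, PublicLink when the
// underlying remote can't provide it:
//
//	features := (&Features{}).Fill(f).Mask(wrappedFs)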
func (ft *Features) Mask(f Fs) *Features { mask := f.Features() ft.CaseInsensitive = ft.CaseInsensitive && mask.CaseInsensitive ft.DuplicateFiles = ft.DuplicateFiles && mask.DuplicateFiles ft.ReadMimeType = ft.ReadMimeType && mask.ReadMimeType ft.WriteMimeType = ft.WriteMimeType && mask.WriteMimeType ft.CanHaveEmptyDirectories = ft.CanHaveEmptyDirectories && mask.CanHaveEmptyDirectories ft.BucketBased = ft.BucketBased && mask.BucketBased ft.BucketBasedRootOK = ft.BucketBasedRootOK && mask.BucketBasedRootOK ft.SetTier = ft.SetTier && mask.SetTier ft.GetTier = ft.GetTier && mask.GetTier ft.ServerSideAcrossConfigs = ft.ServerSideAcrossConfigs && mask.ServerSideAcrossConfigs // ft.IsLocal = ft.IsLocal && mask.IsLocal Don't propagate IsLocal ft.SlowModTime = ft.SlowModTime && mask.SlowModTime ft.SlowHash = ft.SlowHash && mask.SlowHash if mask.Purge == nil { ft.Purge = nil } if mask.Copy == nil { ft.Copy = nil } if mask.Move == nil { ft.Move = nil } if mask.DirMove == nil { ft.DirMove = nil } if mask.ChangeNotify == nil { ft.ChangeNotify = nil } // if mask.UnWrap == nil { // ft.UnWrap = nil // } // if mask.Wrapper == nil { // ft.Wrapper = nil // } if mask.DirCacheFlush == nil { ft.DirCacheFlush = nil } if mask.PublicLink == nil { ft.PublicLink = nil } if mask.PutUnchecked == nil { ft.PutUnchecked = nil } if mask.PutStream == nil { ft.PutStream = nil } if mask.MergeDirs == nil { ft.MergeDirs = nil } if mask.CleanUp == nil { ft.CleanUp = nil } if mask.ListR == nil { ft.ListR = nil } if mask.About == nil { ft.About = nil } if mask.OpenWriterAt == nil { ft.OpenWriterAt = nil } if mask.UserInfo == nil { ft.UserInfo = nil } if mask.Disconnect == nil { ft.Disconnect = nil } // Command is always local so we don't mask it return ft.DisableList(Config.DisableFeatures) } // Wrap makes a Copy of the features passed in, overriding the UnWrap/Wrap // method only if available in f. func (ft *Features) Wrap(f Fs) *Features { ftCopy := new(Features) *ftCopy = *ft if do, ok := f.(UnWrapper); ok { ftCopy.UnWrap = do.UnWrap } if do, ok := f.(Wrapper); ok { ftCopy.WrapFs = do.WrapFs ftCopy.SetWrapper = do.SetWrapper } return ftCopy } // WrapsFs adds extra information between `f` which wraps `w` func (ft *Features) WrapsFs(f Fs, w Fs) *Features { wFeatures := w.Features() if wFeatures.WrapFs != nil && wFeatures.SetWrapper != nil { wFeatures.SetWrapper(f) } return ft } // Purger is an optional interfaces for Fs type Purger interface { // Purge all files in the directory specified // // Implement this if you have a way of deleting all the files // quicker than just running Remove() on the result of List() // // Return an error if it doesn't exist Purge(ctx context.Context, dir string) error } // Copier is an optional interface for Fs type Copier interface { // Copy src to this remote using server side copy operations. // // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantCopy Copy(ctx context.Context, src Object, remote string) (Object, error) } // Mover is an optional interface for Fs type Mover interface { // Move src to this remote using server side move operations. 
// // This is stored with the remote path given // // It returns the destination Object and a possible error // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantMove Move(ctx context.Context, src Object, remote string) (Object, error) } // DirMover is an optional interface for Fs type DirMover interface { // DirMove moves src, srcRemote to this remote at dstRemote // using server side move operations. // // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists DirMove(ctx context.Context, src Fs, srcRemote, dstRemote string) error } // ChangeNotifier is an optional interface for Fs type ChangeNotifier interface { // ChangeNotify calls the passed function with a path // that has had changes. If the implementation // uses polling, it should adhere to the given interval. // At least one value will be written to the channel, // specifying the initial value and updated values might // follow. A 0 Duration should pause the polling. // The ChangeNotify implementation must empty the channel // regularly. When the channel gets closed, the implementation // should stop polling and release resources. ChangeNotify(context.Context, func(string, EntryType), <-chan time.Duration) } // UnWrapper is an optional interfaces for Fs type UnWrapper interface { // UnWrap returns the Fs that this Fs is wrapping UnWrap() Fs } // Wrapper is an optional interfaces for Fs type Wrapper interface { // Wrap returns the Fs that is wrapping this Fs WrapFs() Fs // SetWrapper sets the Fs that is wrapping this Fs SetWrapper(f Fs) } // DirCacheFlusher is an optional interface for Fs type DirCacheFlusher interface { // DirCacheFlush resets the directory cache - used in testing // as an optional interface DirCacheFlush() } // PutUncheckeder is an optional interface for Fs type PutUncheckeder interface { // Put in to the remote path with the modTime given of the given size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error // // May create duplicates or return errors if src already // exists. PutUnchecked(ctx context.Context, in io.Reader, src ObjectInfo, options ...OpenOption) (Object, error) } // PutStreamer is an optional interface for Fs type PutStreamer interface { // PutStream uploads to the remote path with the modTime given of indeterminate size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error PutStream(ctx context.Context, in io.Reader, src ObjectInfo, options ...OpenOption) (Object, error) } // PublicLinker is an optional interface for Fs type PublicLinker interface { // PublicLink generates a public link to the remote path (usually readable by anyone) PublicLink(ctx context.Context, remote string, expire Duration, unlink bool) (string, error) } // MergeDirser is an option interface for Fs type MergeDirser interface { // MergeDirs merges the contents of all the directories passed // in into the first one and rmdirs the other directories. MergeDirs(ctx context.Context, dirs []Directory) error } // CleanUpper is an optional interfaces for Fs type CleanUpper interface { // CleanUp the trash in the Fs // // Implement this if you have a way of emptying the trash or // otherwise cleaning up old versions of files. 
CleanUp(ctx context.Context) error } // ListRer is an optional interfaces for Fs type ListRer interface { // ListR lists the objects and directories of the Fs starting // from dir recursively into out. // // dir should be "" to start from the root, and should not // have trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. // // It should call callback for each tranche of entries read. // These need not be returned in any particular order. If // callback returns an error then the listing will stop // immediately. // // Don't implement this unless you have a more efficient way // of listing recursively that doing a directory traversal. ListR(ctx context.Context, dir string, callback ListRCallback) error } // RangeSeeker is the interface that wraps the RangeSeek method. // // Some of the returns from Object.Open() may optionally implement // this method for efficiency purposes. type RangeSeeker interface { // RangeSeek behaves like a call to Seek(offset int64, whence // int) with the output wrapped in an io.LimitedReader // limiting the total length to limit. // // RangeSeek with a limit of < 0 is equivalent to a regular Seek. RangeSeek(ctx context.Context, offset int64, whence int, length int64) (int64, error) } // Abouter is an optional interface for Fs type Abouter interface { // About gets quota information from the Fs About(ctx context.Context) (*Usage, error) } // OpenWriterAter is an optional interface for Fs type OpenWriterAter interface { // OpenWriterAt opens with a handle for random access writes // // Pass in the remote desired and the size if known. // // It truncates any existing object OpenWriterAt(ctx context.Context, remote string, size int64) (WriterAtCloser, error) } // UserInfoer is an optional interface for Fs type UserInfoer interface { // UserInfo returns info about the connected user UserInfo(ctx context.Context) (map[string]string, error) } // Disconnecter is an optional interface for Fs type Disconnecter interface { // Disconnect the current user Disconnect(ctx context.Context) error } // CommandHelp describes a single backend Command // // These are automatically inserted in the docs type CommandHelp struct { Name string // Name of the command, eg "link" Short string // Single line description Long string // Long multi-line description Opts map[string]string // maps option name to a single line help } // Commander is an interface to wrap the Command function type Commander interface { // Command the backend to run a named command // // The command run is name // args may be used to read arguments from // opts may be used to read optional arguments from // // The result should be capable of being JSON encoded // If it is a string or a []string it will be shown to the user // otherwise it will be JSON encoded and shown to the user like that Command(ctx context.Context, name string, arg []string, opt map[string]string) (interface{}, error) } // ObjectsChan is a channel of Objects type ObjectsChan chan Object // Objects is a slice of Object~s type Objects []Object // ObjectPair is a pair of Objects used to describe a potential copy // operation. 
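
// A minimal sketch of probing for one of the optional interfaces
// above at runtime - the same pattern Features.Fill uses internally
// (assuming a context ctx and an Fs value f):
//
//	if do, ok := f.(Abouter); ok {
//		if usage, err := do.About(ctx); err == nil && usage.Free != nil {
//			fmt.Printf("%d bytes free\n", *usage.Free)
//		}
//	}
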
type ObjectPair struct { Src, Dst Object } // UnWrapFs unwraps f as much as possible and returns the base Fs func UnWrapFs(f Fs) Fs { for { unwrap := f.Features().UnWrap if unwrap == nil { break // not a wrapped Fs, use current } next := unwrap() if next == nil { break // no base Fs found, use current } f = next } return f } // UnWrapObject unwraps o as much as possible and returns the base object func UnWrapObject(o Object) Object { for { u, ok := o.(ObjectUnWrapper) if !ok { break // not a wrapped object, use current } next := u.UnWrap() if next == nil { break // no base object found, use current } o = next } return o } // Find looks for a RegInfo object for the name passed in. The name // can be either the Name or the Prefix. // // Services are looked up in the config file func Find(name string) (*RegInfo, error) { for _, item := range Registry { if item.Name == name || item.Prefix == name || item.FileName() == name { return item, nil } } return nil, errors.Errorf("didn't find backend called %q", name) } // MustFind looks for an Info object for the type name passed in // // Services are looked up in the config file // // Exits with a fatal error if not found func MustFind(name string) *RegInfo { fs, err := Find(name) if err != nil { log.Fatalf("Failed to find remote: %v", err) } return fs } // ParseRemote deconstructs a path into configName, fsPath, looking up // the fsName in the config file (returning NotFoundInConfigFile if not found) func ParseRemote(path string) (fsInfo *RegInfo, configName, fsPath string, err error) { configName, fsPath, err = fspath.Parse(path) if err != nil { return nil, "", "", err } var fsName string var ok bool if configName != "" { if strings.HasPrefix(configName, ":") { fsName = configName[1:] } else { m := ConfigMap(nil, configName) fsName, ok = m.Get("type") if !ok { return nil, "", "", ErrorNotFoundInConfigFile } } } else { fsName = "local" configName = "local" } fsInfo, err = Find(fsName) return fsInfo, configName, fsPath, err } // A configmap.Getter to read from the environment RCLONE_CONFIG_backend_option_name type configEnvVars string // Get a config item from the environment variables if possible func (configName configEnvVars) Get(key string) (value string, ok bool) { return os.LookupEnv(ConfigToEnv(string(configName), key)) } // A configmap.Getter to read from the environment RCLONE_option_name type optionEnvVars struct { fsInfo *RegInfo } // Get a config item from the option environment variables if possible func (oev optionEnvVars) Get(key string) (value string, ok bool) { opt := oev.fsInfo.Options.Get(key) if opt == nil { return "", false } // For options with NoPrefix set, check without prefix too if opt.NoPrefix { value, ok = os.LookupEnv(OptionToEnv(key)) if ok { return value, ok } } return os.LookupEnv(OptionToEnv(oev.fsInfo.Prefix + "-" + key)) } // A configmap.Getter to read either the default value or the set // value from the RegInfo.Options type regInfoValues struct { fsInfo *RegInfo useDefault bool } // override the values in configMap with the either the flag values or // the default values func (r *regInfoValues) Get(key string) (value string, ok bool) { opt := r.fsInfo.Options.Get(key) if opt != nil && (r.useDefault || opt.Value != nil) { return opt.String(), true } return "", false } // A configmap.Setter to read from the config file type setConfigFile string // Set a config item into the config file func (section setConfigFile) Set(key, value string) { Debugf(nil, "Saving config %q = %q in section %q of the config file", key, 
value, section) err := ConfigFileSet(string(section), key, value) if err != nil { Errorf(nil, "Failed saving config %q = %q in section %q of the config file: %v", key, value, section, err) } } // A configmap.Getter to read from the config file type getConfigFile string // Get a config item from the config file func (section getConfigFile) Get(key string) (value string, ok bool) { value, ok = ConfigFileGet(string(section), key) // Ignore empty lines in the config file if value == "" { ok = false } return value, ok } // ConfigMap creates a configmap.Map from the *RegInfo and the // configName passed in. // // If fsInfo is nil then the returned configmap.Map should only be // used for reading non backend specific parameters, such as "type". func ConfigMap(fsInfo *RegInfo, configName string) (config *configmap.Map) { // Create the config config = configmap.New() // Read the config, more specific to least specific // flag values if fsInfo != nil { config.AddGetter(®InfoValues{fsInfo, false}) } // remote specific environment vars config.AddGetter(configEnvVars(configName)) // backend specific environment vars if fsInfo != nil { config.AddGetter(optionEnvVars{fsInfo: fsInfo}) } // config file config.AddGetter(getConfigFile(configName)) // default values if fsInfo != nil { config.AddGetter(®InfoValues{fsInfo, true}) } // Set Config config.AddSetter(setConfigFile(configName)) return config } // ConfigFs makes the config for calling NewFs with. // // It parses the path which is of the form remote:path // // Remotes are looked up in the config file. If the remote isn't // found then NotFoundInConfigFile will be returned. func ConfigFs(path string) (fsInfo *RegInfo, configName, fsPath string, config *configmap.Map, err error) { // Parse the remote path fsInfo, configName, fsPath, err = ParseRemote(path) if err != nil { return } config = ConfigMap(fsInfo, configName) return } // NewFs makes a new Fs object from the path // // The path is of the form remote:path // // Remotes are looked up in the config file. If the remote isn't // found then NotFoundInConfigFile will be returned. // // On Windows avoid single character remote names as they can be mixed // up with drive letters. func NewFs(path string) (Fs, error) { Debugf(nil, "Creating backend with remote %q", path) fsInfo, configName, fsPath, config, err := ConfigFs(path) if err != nil { return nil, err } return fsInfo.NewFs(configName, fsPath, config) } // ConfigString returns a canonical version of the config string used // to configure the Fs as passed to fs.NewFs func ConfigString(f Fs) string { name := f.Name() root := f.Root() if name == "local" && f.Features().IsLocal { return root } return name + ":" + root } // TemporaryLocalFs creates a local FS in the OS's temporary directory. // // No cleanup is performed, the caller must call Purge on the Fs themselves. func TemporaryLocalFs() (Fs, error) { path, err := ioutil.TempDir("", "rclone-spool") if err == nil { err = os.Remove(path) } if err != nil { return nil, err } path = filepath.ToSlash(path) return NewFs(path) } // CheckClose is a utility function used to check the return from // Close in a defer statement. func CheckClose(c io.Closer, err *error) { cerr := c.Close() if *err == nil { *err = cerr } } // FileExists returns true if a file remote exists. // If remote is a directory, FileExists returns false. 
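
// A minimal sketch of the usual entry point that the machinery above
// serves ("myremote" is a hypothetical remote defined in the config
// file):
//
//	f, err := NewFs("myremote:some/path")
//	if err != nil {
//		log.Fatalf("failed to create backend: %v", err)
//	}
//
// ConfigMap resolves each backend option for "myremote" from, in
// order: flag values, RCLONE_CONFIG_MYREMOTE_* environment variables,
// backend specific RCLONE_* environment variables, the config file,
// and finally the option defaults.
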
func FileExists(ctx context.Context, fs Fs, remote string) (bool, error) { _, err := fs.NewObject(ctx, remote) if err != nil { if err == ErrorObjectNotFound || err == ErrorNotAFile || err == ErrorPermissionDenied { return false, nil } return false, err } return true, nil } // GetModifyWindow calculates the maximum modify window between the given Fses // and the Config.ModifyWindow parameter. func GetModifyWindow(fss ...Info) time.Duration { window := Config.ModifyWindow for _, f := range fss { if f != nil { precision := f.Precision() if precision == ModTimeNotSupported { return ModTimeNotSupported } if precision > window { window = precision } } } return window } // Pacer is a simple wrapper around a pacer.Pacer with logging. type Pacer struct { *pacer.Pacer } type logCalculator struct { pacer.Calculator } // NewPacer creates a Pacer for the given Fs and Calculator. func NewPacer(c pacer.Calculator) *Pacer { p := &Pacer{ Pacer: pacer.New( pacer.InvokerOption(pacerInvoker), pacer.MaxConnectionsOption(Config.Checkers+Config.Transfers), pacer.RetriesOption(Config.LowLevelRetries), pacer.CalculatorOption(c), ), } p.SetCalculator(c) return p } func (d *logCalculator) Calculate(state pacer.State) time.Duration { oldSleepTime := state.SleepTime newSleepTime := d.Calculator.Calculate(state) if state.ConsecutiveRetries > 0 { if newSleepTime != oldSleepTime { Debugf("pacer", "Rate limited, increasing sleep to %v", newSleepTime) } } else { if newSleepTime != oldSleepTime { Debugf("pacer", "Reducing sleep to %v", newSleepTime) } } return newSleepTime } // SetCalculator sets the pacing algorithm. Don't modify the Calculator object // afterwards, use the ModifyCalculator method when needed. // // It will choose the default algorithm if nil is passed in. func (p *Pacer) SetCalculator(c pacer.Calculator) { switch c.(type) { case *logCalculator: Logf("pacer", "Invalid Calculator in fs.Pacer.SetCalculator") case nil: c = &logCalculator{pacer.NewDefault()} default: c = &logCalculator{c} } p.Pacer.SetCalculator(c) } // ModifyCalculator calls the given function with the currently configured // Calculator and the Pacer lock held. 
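
// A hedged sketch of constructing and using the wrapper, mirroring
// the tests below:
//
//	p := NewPacer(pacer.NewDefault(pacer.MinSleep(time.Millisecond)))
//	err := p.Call(func() (bool, error) {
//		// do a remote operation here; return true to request a retry
//		return false, nil
//	})
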
func (p *Pacer) ModifyCalculator(f func(pacer.Calculator)) { p.Pacer.ModifyCalculator(func(c pacer.Calculator) { switch _c := c.(type) { case *logCalculator: f(_c.Calculator) default: Logf("pacer", "Invalid Calculator in fs.Pacer: %T", c) f(c) } }) } func pacerInvoker(try, retries int, f pacer.Paced) (retry bool, err error) { retry, err = f() if retry { Debugf("pacer", "low level retry %d/%d (error %v)", try, retries, err) err = fserrors.RetryError(err) } return } rclone-1.53.3/fs/fs_test.go000066400000000000000000000212561375552240400155100ustar00rootroot00000000000000package fs import ( "context" "encoding/json" "fmt" "os" "strings" "sync" "testing" "time" "github.com/stretchr/testify/require" "github.com/pkg/errors" "github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/lib/pacer" "github.com/spf13/pflag" "github.com/stretchr/testify/assert" ) func TestFeaturesDisable(t *testing.T) { ft := new(Features) ft.Copy = func(ctx context.Context, src Object, remote string) (Object, error) { return nil, nil } ft.CaseInsensitive = true assert.NotNil(t, ft.Copy) assert.Nil(t, ft.Purge) ft.Disable("copy") assert.Nil(t, ft.Copy) assert.Nil(t, ft.Purge) assert.True(t, ft.CaseInsensitive) assert.False(t, ft.DuplicateFiles) ft.Disable("caseinsensitive") assert.False(t, ft.CaseInsensitive) assert.False(t, ft.DuplicateFiles) } func TestFeaturesList(t *testing.T) { ft := new(Features) names := strings.Join(ft.List(), ",") assert.True(t, strings.Contains(names, ",Copy,")) } func TestFeaturesEnabled(t *testing.T) { ft := new(Features) ft.CaseInsensitive = true ft.Purge = func(ctx context.Context, dir string) error { return nil } enabled := ft.Enabled() flag, ok := enabled["CaseInsensitive"] assert.Equal(t, true, ok) assert.Equal(t, true, flag, enabled) flag, ok = enabled["Purge"] assert.Equal(t, true, ok) assert.Equal(t, true, flag, enabled) flag, ok = enabled["DuplicateFiles"] assert.Equal(t, true, ok) assert.Equal(t, false, flag, enabled) flag, ok = enabled["Copy"] assert.Equal(t, true, ok) assert.Equal(t, false, flag, enabled) assert.Equal(t, len(ft.List()), len(enabled)) } func TestFeaturesDisableList(t *testing.T) { ft := new(Features) ft.Copy = func(ctx context.Context, src Object, remote string) (Object, error) { return nil, nil } ft.CaseInsensitive = true assert.NotNil(t, ft.Copy) assert.Nil(t, ft.Purge) assert.True(t, ft.CaseInsensitive) assert.False(t, ft.DuplicateFiles) ft.DisableList([]string{"copy", "caseinsensitive"}) assert.Nil(t, ft.Copy) assert.Nil(t, ft.Purge) assert.False(t, ft.CaseInsensitive) assert.False(t, ft.DuplicateFiles) } // Check it satisfies the interface var _ pflag.Value = (*Option)(nil) func TestOption(t *testing.T) { d := &Option{ Name: "potato", Value: SizeSuffix(17 << 20), } assert.Equal(t, "17M", d.String()) assert.Equal(t, "SizeSuffix", d.Type()) err := d.Set("18M") assert.NoError(t, err) assert.Equal(t, SizeSuffix(18<<20), d.Value) err = d.Set("sdfsdf") assert.Error(t, err) } var errFoo = errors.New("foo") type dummyPaced struct { retry bool called int wait *sync.Cond } func (dp *dummyPaced) fn() (bool, error) { if dp.wait != nil { dp.wait.L.Lock() dp.wait.Wait() dp.wait.L.Unlock() } dp.called++ return dp.retry, errFoo } func TestPacerCall(t *testing.T) { expectedCalled := Config.LowLevelRetries if expectedCalled == 0 { expectedCalled = 20 Config.LowLevelRetries = expectedCalled defer func() { Config.LowLevelRetries = 0 }() } p := NewPacer(pacer.NewDefault(pacer.MinSleep(1*time.Millisecond), 
pacer.MaxSleep(2*time.Millisecond))) dp := &dummyPaced{retry: true} err := p.Call(dp.fn) require.Equal(t, expectedCalled, dp.called) require.Implements(t, (*fserrors.Retrier)(nil), err) } func TestPacerCallNoRetry(t *testing.T) { p := NewPacer(pacer.NewDefault(pacer.MinSleep(1*time.Millisecond), pacer.MaxSleep(2*time.Millisecond))) dp := &dummyPaced{retry: true} err := p.CallNoRetry(dp.fn) require.Equal(t, 1, dp.called) require.Implements(t, (*fserrors.Retrier)(nil), err) } // Test options var ( nouncOption = Option{ Name: "nounc", } copyLinksOption = Option{ Name: "copy_links", Default: false, NoPrefix: true, ShortOpt: "L", Advanced: true, } caseInsensitiveOption = Option{ Name: "case_insensitive", Default: false, Value: true, Advanced: true, } testOptions = Options{nouncOption, copyLinksOption, caseInsensitiveOption} ) func TestOptionsSetValues(t *testing.T) { assert.Nil(t, testOptions[0].Default) assert.Equal(t, false, testOptions[1].Default) assert.Equal(t, false, testOptions[2].Default) testOptions.setValues() assert.Equal(t, "", testOptions[0].Default) assert.Equal(t, false, testOptions[1].Default) assert.Equal(t, false, testOptions[2].Default) } func TestOptionsGet(t *testing.T) { opt := testOptions.Get("copy_links") assert.Equal(t, ©LinksOption, opt) opt = testOptions.Get("not_found") assert.Nil(t, opt) } func TestOptionMarshalJSON(t *testing.T) { out, err := json.MarshalIndent(&caseInsensitiveOption, "", "") assert.NoError(t, err) require.Equal(t, `{ "Name": "case_insensitive", "Help": "", "Provider": "", "Default": false, "Value": true, "ShortOpt": "", "Hide": 0, "Required": false, "IsPassword": false, "NoPrefix": false, "Advanced": true, "DefaultStr": "false", "ValueStr": "true", "Type": "bool" }`, string(out)) } func TestOptionGetValue(t *testing.T) { assert.Equal(t, "", nouncOption.GetValue()) assert.Equal(t, false, copyLinksOption.GetValue()) assert.Equal(t, true, caseInsensitiveOption.GetValue()) } func TestOptionString(t *testing.T) { assert.Equal(t, "", nouncOption.String()) assert.Equal(t, "false", copyLinksOption.String()) assert.Equal(t, "true", caseInsensitiveOption.String()) } func TestOptionSet(t *testing.T) { o := caseInsensitiveOption assert.Equal(t, true, o.Value) err := o.Set("FALSE") assert.NoError(t, err) assert.Equal(t, false, o.Value) o = copyLinksOption assert.Equal(t, nil, o.Value) err = o.Set("True") assert.NoError(t, err) assert.Equal(t, true, o.Value) err = o.Set("INVALID") assert.Error(t, err) assert.Equal(t, true, o.Value) } func TestOptionType(t *testing.T) { assert.Equal(t, "string", nouncOption.Type()) assert.Equal(t, "bool", copyLinksOption.Type()) assert.Equal(t, "bool", caseInsensitiveOption.Type()) } func TestOptionFlagName(t *testing.T) { assert.Equal(t, "local-nounc", nouncOption.FlagName("local")) assert.Equal(t, "copy-links", copyLinksOption.FlagName("local")) assert.Equal(t, "local-case-insensitive", caseInsensitiveOption.FlagName("local")) } func TestOptionEnvVarName(t *testing.T) { assert.Equal(t, "RCLONE_LOCAL_NOUNC", nouncOption.EnvVarName("local")) assert.Equal(t, "RCLONE_LOCAL_COPY_LINKS", copyLinksOption.EnvVarName("local")) assert.Equal(t, "RCLONE_LOCAL_CASE_INSENSITIVE", caseInsensitiveOption.EnvVarName("local")) } func TestOptionGetters(t *testing.T) { // Set up env vars envVars := [][2]string{ {"RCLONE_CONFIG_LOCAL_POTATO_PIE", "yes"}, {"RCLONE_COPY_LINKS", "TRUE"}, {"RCLONE_LOCAL_NOUNC", "NOUNC"}, } for _, ev := range envVars { assert.NoError(t, os.Setenv(ev[0], ev[1])) } defer func() { for _, ev := range envVars { 
assert.NoError(t, os.Unsetenv(ev[0])) } }() fsInfo := &RegInfo{ Name: "local", Prefix: "local", Options: testOptions, } oldConfigFileGet := ConfigFileGet ConfigFileGet = func(section, key string) (string, bool) { if section == "sausage" && key == "key1" { return "value1", true } return "", false } defer func() { ConfigFileGet = oldConfigFileGet }() // set up getters // A configmap.Getter to read from the environment RCLONE_CONFIG_backend_option_name configEnvVarsGetter := configEnvVars("local") // A configmap.Getter to read from the environment RCLONE_option_name optionEnvVarsGetter := optionEnvVars{fsInfo} // A configmap.Getter to read either the default value or the set // value from the RegInfo.Options regInfoValuesGetterFalse := ®InfoValues{ fsInfo: fsInfo, useDefault: false, } regInfoValuesGetterTrue := ®InfoValues{ fsInfo: fsInfo, useDefault: true, } // A configmap.Setter to read from the config file configFileGetter := getConfigFile("sausage") for i, test := range []struct { get configmap.Getter key string wantValue string wantOk bool }{ {configEnvVarsGetter, "not_found", "", false}, {configEnvVarsGetter, "potato_pie", "yes", true}, {optionEnvVarsGetter, "not_found", "", false}, {optionEnvVarsGetter, "copy_links", "TRUE", true}, {optionEnvVarsGetter, "nounc", "NOUNC", true}, {optionEnvVarsGetter, "case_insensitive", "", false}, {regInfoValuesGetterFalse, "not_found", "", false}, {regInfoValuesGetterFalse, "case_insensitive", "true", true}, {regInfoValuesGetterFalse, "copy_links", "", false}, {regInfoValuesGetterTrue, "not_found", "", false}, {regInfoValuesGetterTrue, "case_insensitive", "true", true}, {regInfoValuesGetterTrue, "copy_links", "false", true}, {configFileGetter, "not_found", "", false}, {configFileGetter, "key1", "value1", true}, } { what := fmt.Sprintf("%d: %+v: %q", i, test.get, test.key) gotValue, gotOk := test.get.Get(test.key) assert.Equal(t, test.wantValue, gotValue, what) assert.Equal(t, test.wantOk, gotOk, what) } } rclone-1.53.3/fs/fserrors/000077500000000000000000000000001375552240400153515ustar00rootroot00000000000000rclone-1.53.3/fs/fserrors/enospc_error.go000066400000000000000000000006031375552240400203770ustar00rootroot00000000000000// +build !plan9 package fserrors import ( "syscall" "github.com/rclone/rclone/lib/errors" ) // IsErrNoSpace checks a possibly wrapped error to // see if it contains a ENOSPC error func IsErrNoSpace(cause error) (isNoSpc bool) { errors.Walk(cause, func(c error) bool { if c == syscall.ENOSPC { isNoSpc = true return true } isNoSpc = false return false }) return } rclone-1.53.3/fs/fserrors/enospc_error_notsupported.go000066400000000000000000000003211375552240400232220ustar00rootroot00000000000000// +build plan9 package fserrors // IsErrNoSpace() on plan9 returns false because // plan9 does not support syscall.ENOSPC error. func IsErrNoSpace(cause error) (isNoSpc bool) { isNoSpc = false return } rclone-1.53.3/fs/fserrors/error.go000066400000000000000000000256511375552240400170420ustar00rootroot00000000000000// Package fserrors provides errors and error handling package fserrors import ( "fmt" "io" "net/http" "strings" "time" "github.com/rclone/rclone/lib/errors" ) // Retrier is an optional interface for error as to whether the // operation should be retried at a high level. 
// // This should be returned from Update or Put methods as required type Retrier interface { error Retry() bool } // retryError is a type of error type retryError string // Error interface func (r retryError) Error() string { return string(r) } // Retry interface func (r retryError) Retry() bool { return true } // Check interface var _ Retrier = retryError("") // RetryErrorf makes an error which indicates it would like to be retried func RetryErrorf(format string, a ...interface{}) error { return retryError(fmt.Sprintf(format, a...)) } // wrappedRetryError is an error wrapped so it will satisfy the // Retrier interface and return true type wrappedRetryError struct { error } // Retry interface func (err wrappedRetryError) Retry() bool { return true } // Check interface var _ Retrier = wrappedRetryError{error(nil)} // RetryError makes an error which indicates it would like to be retried func RetryError(err error) error { if err == nil { err = errors.New("needs retry") } return wrappedRetryError{err} } func (err wrappedRetryError) Cause() error { return err.error } // IsRetryError returns true if err conforms to the Retry interface // and calling the Retry method returns true. func IsRetryError(err error) (isRetry bool) { errors.Walk(err, func(err error) bool { if r, ok := err.(Retrier); ok { isRetry = r.Retry() return true } return false }) return } // Fataler is an optional interface for error as to whether the // operation should cause the entire operation to finish immediately. // // This should be returned from Update or Put methods as required type Fataler interface { error Fatal() bool } // wrappedFatalError is an error wrapped so it will satisfy the // Retrier interface and return true type wrappedFatalError struct { error } // Fatal interface func (err wrappedFatalError) Fatal() bool { return true } // Check interface var _ Fataler = wrappedFatalError{error(nil)} // FatalError makes an error which indicates it is a fatal error and // the sync should stop. func FatalError(err error) error { if err == nil { err = errors.New("fatal error") } return wrappedFatalError{err} } func (err wrappedFatalError) Cause() error { return err.error } // IsFatalError returns true if err conforms to the Fatal interface // and calling the Fatal method returns true. func IsFatalError(err error) (isFatal bool) { errors.Walk(err, func(err error) bool { if r, ok := err.(Fataler); ok { isFatal = r.Fatal() return true } return false }) return } // NoRetrier is an optional interface for error as to whether the // operation should not be retried at a high level. // // If only NoRetry errors are returned in a sync then the sync won't // be retried. // // This should be returned from Update or Put methods as required type NoRetrier interface { error NoRetry() bool } // wrappedNoRetryError is an error wrapped so it will satisfy the // Retrier interface and return true type wrappedNoRetryError struct { error } // NoRetry interface func (err wrappedNoRetryError) NoRetry() bool { return true } // Check interface var _ NoRetrier = wrappedNoRetryError{error(nil)} // NoRetryError makes an error which indicates the sync shouldn't be // retried. func NoRetryError(err error) error { return wrappedNoRetryError{err} } func (err wrappedNoRetryError) Cause() error { return err.error } // IsNoRetryError returns true if err conforms to the NoRetry // interface and calling the NoRetry method returns true. 
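
// A minimal sketch of the intended flow for these wrappers: a backend
// marks an error, and the layers driving the sync inspect it later
// (the HTTP status check is illustrative only):
//
//	// in a backend, after a failed call:
//	if resp.StatusCode == 429 {
//		return RetryError(err)
//	}
//
//	// in the code driving the sync:
//	if IsRetryError(err) {
//		// schedule the operation to run again
//	}
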
func IsNoRetryError(err error) (isNoRetry bool) { errors.Walk(err, func(err error) bool { if r, ok := err.(NoRetrier); ok { isNoRetry = r.NoRetry() return true } return false }) return } // NoLowLevelRetrier is an optional interface for error as to whether // the operation should not be retried at a low level. // // NoLowLevelRetry errors won't be retried by low level retry loops. type NoLowLevelRetrier interface { error NoLowLevelRetry() bool } // wrappedNoLowLevelRetryError is an error wrapped so it will satisfy the // NoLowLevelRetrier interface and return true type wrappedNoLowLevelRetryError struct { error } // NoLowLevelRetry interface func (err wrappedNoLowLevelRetryError) NoLowLevelRetry() bool { return true } // Check interface var _ NoLowLevelRetrier = wrappedNoLowLevelRetryError{error(nil)} // NoLowLevelRetryError makes an error which indicates the sync // shouldn't be low level retried. func NoLowLevelRetryError(err error) error { return wrappedNoLowLevelRetryError{err} } // Cause returns the underlying error func (err wrappedNoLowLevelRetryError) Cause() error { return err.error } // IsNoLowLevelRetryError returns true if err conforms to the NoLowLevelRetry // interface and calling the NoLowLevelRetry method returns true. func IsNoLowLevelRetryError(err error) (isNoLowLevelRetry bool) { errors.Walk(err, func(err error) bool { if r, ok := err.(NoLowLevelRetrier); ok { isNoLowLevelRetry = r.NoLowLevelRetry() return true } return false }) return } // RetryAfter is an optional interface for error as to whether the // operation should be retried after a given delay // // This should be returned from Update or Put methods as required and // will cause the entire sync to be retried after a delay. type RetryAfter interface { error RetryAfter() time.Time } // ErrorRetryAfter is an error which expresses a time that should be // waited for until trying again type ErrorRetryAfter time.Time // NewErrorRetryAfter returns an ErrorRetryAfter with the given // duration as an endpoint func NewErrorRetryAfter(d time.Duration) ErrorRetryAfter { return ErrorRetryAfter(time.Now().Add(d)) } // Error returns the textual version of the error func (e ErrorRetryAfter) Error() string { return fmt.Sprintf("try again after %v (%v)", time.Time(e).Format(time.RFC3339Nano), time.Time(e).Sub(time.Now())) } // RetryAfter returns the time the operation should be retried at or // after func (e ErrorRetryAfter) RetryAfter() time.Time { return time.Time(e) } // Check interface var _ RetryAfter = ErrorRetryAfter{} // RetryAfterErrorTime returns the time that the RetryAfter error // indicates or a Zero time.Time func RetryAfterErrorTime(err error) (retryAfter time.Time) { errors.Walk(err, func(err error) bool { if r, ok := err.(RetryAfter); ok { retryAfter = r.RetryAfter() return true } return false }) return } // IsRetryAfterError returns true if err is an ErrorRetryAfter func IsRetryAfterError(err error) bool { return !RetryAfterErrorTime(err).IsZero() } // CountableError is an optional interface for error. 
It stores a boolean // which signifies if the error has already been counted or not type CountableError interface { error Count() IsCounted() bool } // wrappedCountableError is an error wrapped so it will satisfy the // CountableError interface and keep track of whether it was counted type wrappedCountableError struct { error isCounted bool } // CountableError interface func (err *wrappedCountableError) Count() { err.isCounted = true } // CountableError interface func (err *wrappedCountableError) IsCounted() bool { return err.isCounted } func (err *wrappedCountableError) Cause() error { return err.error } // IsCounted returns true if err conforms to the CountableError interface // and has already been counted func IsCounted(err error) bool { if r, ok := err.(CountableError); ok { return r.IsCounted() } return false } // Count sets the isCounted variable on the error if it conforms to the // CountableError interface func Count(err error) { if r, ok := err.(CountableError); ok { r.Count() } } // Check interface var _ CountableError = &wrappedCountableError{error: error(nil)} // FsError makes an error which can keep a record that it is already counted // or not func FsError(err error) error { if err == nil { err = errors.New("countable error") } return &wrappedCountableError{error: err} } // Cause is a souped up errors.Cause which can unwrap some standard // library errors too. It returns true if any of the intermediate // errors had a Timeout() or Temporary() method which returned true. func Cause(cause error) (retriable bool, err error) { errors.Walk(cause, func(c error) bool { // Check for net error Timeout() if x, ok := c.(interface { Timeout() bool }); ok && x.Timeout() { retriable = true } // Check for net error Temporary() if x, ok := c.(interface { Temporary() bool }); ok && x.Temporary() { retriable = true } err = c return false }) return } // retriableErrorStrings is a list of phrases which, when we find one // in an error, tell us it is a networking error which should be // retried. // // This is incredibly ugly - if only errors.Cause worked for all // errors and all errors were exported from the stdlib. var retriableErrorStrings = []string{ "use of closed network connection", // internal/poll/fd.go "unexpected EOF reading trailer", // net/http/transfer.go "transport connection broken", // net/http/transport.go "http: ContentLength=", // net/http/transfer.go "server closed idle connection", // net/http/transport.go "bad record MAC", // crypto/tls/alert.go "stream error:", // net/http/h2_bundle.go "tls: use of closed connection", // crypto/tls/conn.go } // Errors which indicate networking errors which should be retried // // These are added to in retriable_errors*.go var retriableErrors = []error{ io.EOF, io.ErrUnexpectedEOF, } // ShouldRetry looks at an error and tries to work out if retrying the // operation that caused it would be a good idea. It returns true if // the error implements Timeout() or Temporary() or if the error // indicates a premature closing of the connection. func ShouldRetry(err error) bool { if err == nil { return false } // If error has been marked to NoLowLevelRetry then don't retry if IsNoLowLevelRetryError(err) { return false } // Find root cause if available retriable, err := Cause(err) if retriable { return true } // Check if it is a retriable error for _, retriableErr := range retriableErrors { if err == retriableErr { return true } } // Check error strings (yuch!)
too errString := err.Error() for _, phrase := range retriableErrorStrings { if strings.Contains(errString, phrase) { return true } } return false } // ShouldRetryHTTP returns a boolean as to whether this resp deserves // to be retried. It checks to see if the HTTP response code is in the slice // retryErrorCodes. func ShouldRetryHTTP(resp *http.Response, retryErrorCodes []int) bool { if resp == nil { return false } for _, e := range retryErrorCodes { if resp.StatusCode == e { return true } } return false } type causer interface { Cause() error } var ( _ causer = wrappedRetryError{} _ causer = wrappedFatalError{} _ causer = wrappedNoRetryError{} ) rclone-1.53.3/fs/fserrors/error_test.go000066400000000000000000000104771375552240400201010ustar00rootroot00000000000000package fserrors import ( "fmt" "io" "net" "net/url" "os" "syscall" "testing" "time" "github.com/pkg/errors" "github.com/stretchr/testify/assert" ) var errUseOfClosedNetworkConnection = errors.New("use of closed network connection") // make a plausible network error with the underlying errno func makeNetErr(errno syscall.Errno) error { return &net.OpError{ Op: "write", Net: "tcp", Source: &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 123}, Addr: &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 8080}, Err: &os.SyscallError{ Syscall: "write", Err: errno, }, } } type myError1 struct { Err error } func (e myError1) Error() string { return e.Err.Error() } type myError2 struct { Err error } func (e *myError2) Error() string { if e == nil { return "myError2(nil)" } if e.Err == nil { return "myError2{Err: nil}" } return e.Err.Error() } type myError3 struct { Err int } func (e *myError3) Error() string { return "hello" } type myError4 struct { e error } func (e *myError4) Error() string { return e.e.Error() } type myError5 struct{} func (e *myError5) Error() string { return "" } func (e *myError5) Temporary() bool { return true } type errorCause struct { e error } func (e *errorCause) Error() string { return fmt.Sprintf("%#v", e) } func (e *errorCause) Cause() error { return e.e } func TestCause(t *testing.T) { e3 := &myError3{3} e4 := &myError4{io.EOF} e5 := &myError5{} eNil1 := &myError2{nil} eNil2 := &myError2{Err: (*myError2)(nil)} errPotato := errors.New("potato") nilCause1 := &errorCause{nil} nilCause2 := &errorCause{(*myError2)(nil)} for i, test := range []struct { err error wantRetriable bool wantErr error }{ {nil, false, nil}, {errPotato, false, errPotato}, {errors.Wrap(errPotato, "potato"), false, errPotato}, {errors.Wrap(errors.Wrap(errPotato, "potato2"), "potato"), false, errPotato}, {errUseOfClosedNetworkConnection, false, errUseOfClosedNetworkConnection}, {makeNetErr(syscall.EAGAIN), true, syscall.EAGAIN}, {makeNetErr(syscall.Errno(123123123)), false, syscall.Errno(123123123)}, {eNil1, false, eNil1}, {eNil2, false, eNil2.Err}, {myError1{io.EOF}, false, io.EOF}, {&myError2{io.EOF}, false, io.EOF}, {e3, false, e3}, {e4, false, e4}, {e5, true, e5}, {&errorCause{errPotato}, false, errPotato}, {nilCause1, false, nilCause1}, {nilCause2, false, nilCause2.e}, } { gotRetriable, gotErr := Cause(test.err) what := fmt.Sprintf("test #%d: %v", i, test.err) assert.Equal(t, test.wantErr, gotErr, what) assert.Equal(t, test.wantRetriable, gotRetriable, what) } } func TestShouldRetry(t *testing.T) { for i, test := range []struct { err error want bool }{ {nil, false}, {errors.New("potato"), false}, {errors.Wrap(errUseOfClosedNetworkConnection, "connection"), true}, {io.EOF, true}, {io.ErrUnexpectedEOF, true}, {makeNetErr(syscall.EAGAIN), true},
{makeNetErr(syscall.Errno(123123123)), false}, {&url.Error{Op: "post", URL: "/", Err: io.EOF}, true}, {&url.Error{Op: "post", URL: "/", Err: errUseOfClosedNetworkConnection}, true}, {&url.Error{Op: "post", URL: "/", Err: fmt.Errorf("net/http: HTTP/1.x transport connection broken: %v", fmt.Errorf("http: ContentLength=%d with Body length %d", 100663336, 99590598))}, true}, { errors.Wrap(&url.Error{ Op: "post", URL: "http://localhost/", Err: makeNetErr(syscall.EPIPE), }, "potato error"), true, }, { errors.Wrap(&url.Error{ Op: "post", URL: "http://localhost/", Err: makeNetErr(syscall.Errno(123123123)), }, "listing error"), false, }, } { got := ShouldRetry(test.err) assert.Equal(t, test.want, got, fmt.Sprintf("test #%d: %v", i, test.err)) } } func TestRetryAfter(t *testing.T) { e := NewErrorRetryAfter(time.Second) after := e.RetryAfter() dt := after.Sub(time.Now()) assert.True(t, dt >= 900*time.Millisecond && dt <= 1100*time.Millisecond) assert.True(t, IsRetryAfterError(e)) assert.False(t, IsRetryAfterError(io.EOF)) assert.Equal(t, time.Time{}, RetryAfterErrorTime(io.EOF)) assert.False(t, IsRetryAfterError(nil)) assert.Contains(t, e.Error(), "try again after") t0 := time.Now() err := errors.Wrap(ErrorRetryAfter(t0), "potato") assert.Equal(t, t0, RetryAfterErrorTime(err)) assert.True(t, IsRetryAfterError(err)) assert.Contains(t, e.Error(), "try again after") } rclone-1.53.3/fs/fserrors/retriable_errors.go000066400000000000000000000004731375552240400212510ustar00rootroot00000000000000// +build !plan9 package fserrors import ( "syscall" ) func init() { retriableErrors = append(retriableErrors, syscall.EPIPE, syscall.ETIMEDOUT, syscall.ECONNREFUSED, syscall.EHOSTDOWN, syscall.EHOSTUNREACH, syscall.ECONNABORTED, syscall.EAGAIN, syscall.EWOULDBLOCK, syscall.ECONNRESET, ) } rclone-1.53.3/fs/fserrors/retriable_errors_windows.go000066400000000000000000000025351375552240400230240ustar00rootroot00000000000000// +build windows package fserrors import ( "syscall" ) // Windows error code list // https://docs.microsoft.com/en-us/windows/win32/winsock/windows-sockets-error-codes-2 const ( WSAENETDOWN syscall.Errno = 10050 WSAENETUNREACH syscall.Errno = 10051 WSAENETRESET syscall.Errno = 10052 WSAECONNABORTED syscall.Errno = 10053 WSAECONNRESET syscall.Errno = 10054 WSAENOBUFS syscall.Errno = 10055 WSAENOTCONN syscall.Errno = 10057 WSAESHUTDOWN syscall.Errno = 10058 WSAETIMEDOUT syscall.Errno = 10060 WSAECONNREFUSED syscall.Errno = 10061 WSAEHOSTDOWN syscall.Errno = 10064 WSAEHOSTUNREACH syscall.Errno = 10065 WSAEDISCON syscall.Errno = 10101 WSAEREFUSED syscall.Errno = 10112 WSAHOST_NOT_FOUND syscall.Errno = 11001 WSATRY_AGAIN syscall.Errno = 11002 ) func init() { // append some lower level errors since the standardized ones // don't seem to happen retriableErrors = append(retriableErrors, syscall.WSAECONNRESET, WSAENETDOWN, WSAENETUNREACH, WSAENETRESET, WSAECONNABORTED, WSAECONNRESET, WSAENOBUFS, WSAENOTCONN, WSAESHUTDOWN, WSAETIMEDOUT, WSAECONNREFUSED, WSAEHOSTDOWN, WSAEHOSTUNREACH, WSAEDISCON, WSAEREFUSED, WSAHOST_NOT_FOUND, WSATRY_AGAIN, syscall.ERROR_HANDLE_EOF, syscall.ERROR_NETNAME_DELETED, syscall.ERROR_BROKEN_PIPE, ) } rclone-1.53.3/fs/fshttp/000077500000000000000000000000001375552240400150145ustar00rootroot00000000000000rclone-1.53.3/fs/fshttp/http.go000066400000000000000000000241601375552240400163250ustar00rootroot00000000000000// Package fshttp contains the common http parts of the config, Transport and Client package fshttp import ( "bytes" "context" "crypto/tls" "crypto/x509" "io/ioutil" "log" 
"net" "net/http" "net/http/cookiejar" "net/http/httputil" "sync" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/lib/structs" "golang.org/x/net/publicsuffix" "golang.org/x/time/rate" ) const ( separatorReq = ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" separatorResp = "<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<" ) var ( transport http.RoundTripper noTransport = new(sync.Once) tpsBucket *rate.Limiter // for limiting number of http transactions per second cookieJar, _ = cookiejar.New(&cookiejar.Options{PublicSuffixList: publicsuffix.List}) ) // StartHTTPTokenBucket starts the token bucket if necessary func StartHTTPTokenBucket() { if fs.Config.TPSLimit > 0 { tpsBurst := fs.Config.TPSLimitBurst if tpsBurst < 1 { tpsBurst = 1 } tpsBucket = rate.NewLimiter(rate.Limit(fs.Config.TPSLimit), tpsBurst) fs.Infof(nil, "Starting HTTP transaction limiter: max %g transactions/s with burst %d", fs.Config.TPSLimit, tpsBurst) } } // A net.Conn that sets a deadline for every Read or Write operation type timeoutConn struct { net.Conn timeout time.Duration } // create a timeoutConn using the timeout func newTimeoutConn(conn net.Conn, timeout time.Duration) (c *timeoutConn, err error) { c = &timeoutConn{ Conn: conn, timeout: timeout, } err = c.nudgeDeadline() return } // Nudge the deadline for an idle timeout on by c.timeout if non-zero func (c *timeoutConn) nudgeDeadline() (err error) { if c.timeout == 0 { return nil } when := time.Now().Add(c.timeout) return c.Conn.SetDeadline(when) } // readOrWrite bytes doing idle timeouts func (c *timeoutConn) readOrWrite(f func([]byte) (int, error), b []byte) (n int, err error) { n, err = f(b) // Don't nudge if no bytes or an error if n == 0 || err != nil { return } // Nudge the deadline on successful Read or Write err = c.nudgeDeadline() return } // Read bytes doing idle timeouts func (c *timeoutConn) Read(b []byte) (n int, err error) { return c.readOrWrite(c.Conn.Read, b) } // Write bytes doing idle timeouts func (c *timeoutConn) Write(b []byte) (n int, err error) { return c.readOrWrite(c.Conn.Write, b) } // dial with context and timeouts func dialContextTimeout(ctx context.Context, network, address string, ci *fs.ConfigInfo) (net.Conn, error) { dialer := NewDialer(ci) c, err := dialer.DialContext(ctx, network, address) if err != nil { return c, err } return newTimeoutConn(c, ci.Timeout) } // ResetTransport resets the existing transport, allowing it to take new settings. // Should only be used for testing. func ResetTransport() { noTransport = new(sync.Once) } // NewTransportCustom returns an http.RoundTripper with the correct timeouts. // The customize function is called if set to give the caller an opportunity to // customize any defaults in the Transport. func NewTransportCustom(ci *fs.ConfigInfo, customize func(*http.Transport)) http.RoundTripper { // Start with a sensible set of defaults then override. 
// This also means we get new stuff when it gets added to Go t := new(http.Transport) structs.SetDefaults(t, http.DefaultTransport.(*http.Transport)) t.Proxy = http.ProxyFromEnvironment t.MaxIdleConnsPerHost = 2 * (ci.Checkers + ci.Transfers + 1) t.MaxIdleConns = 2 * t.MaxIdleConnsPerHost t.TLSHandshakeTimeout = ci.ConnectTimeout t.ResponseHeaderTimeout = ci.Timeout // TLS Config t.TLSClientConfig = &tls.Config{ InsecureSkipVerify: ci.InsecureSkipVerify, } // Load client certs if ci.ClientCert != "" || ci.ClientKey != "" { if ci.ClientCert == "" || ci.ClientKey == "" { log.Fatalf("Both --client-cert and --client-key must be set") } cert, err := tls.LoadX509KeyPair(ci.ClientCert, ci.ClientKey) if err != nil { log.Fatalf("Failed to load --client-cert/--client-key pair: %v", err) } t.TLSClientConfig.Certificates = []tls.Certificate{cert} t.TLSClientConfig.BuildNameToCertificate() } // Load CA cert if ci.CaCert != "" { caCert, err := ioutil.ReadFile(ci.CaCert) if err != nil { log.Fatalf("Failed to read --ca-cert: %v", err) } caCertPool := x509.NewCertPool() ok := caCertPool.AppendCertsFromPEM(caCert) if !ok { log.Fatalf("Failed to add certificates from --ca-cert") } t.TLSClientConfig.RootCAs = caCertPool } t.DisableCompression = ci.NoGzip t.DialContext = func(ctx context.Context, network, addr string) (net.Conn, error) { return dialContextTimeout(ctx, network, addr, ci) } t.IdleConnTimeout = 60 * time.Second t.ExpectContinueTimeout = ci.ExpectContinueTimeout if ci.Dump&(fs.DumpHeaders|fs.DumpBodies|fs.DumpAuth|fs.DumpRequests|fs.DumpResponses) != 0 { fs.Debugf(nil, "You have specified to dump information. Please note that the "+ "Accept-Encoding header shown may not be correct in the request and the response may not show "+ "Content-Encoding if the Go standard library's automatic gzip encoding was in effect. In this case"+ " the body of the request will be gunzipped before being shown.") } // customize the transport if required if customize != nil { customize(t) } // Wrap that http.Transport in our own transport return newTransport(ci, t) } // NewTransport returns an http.RoundTripper with the correct timeouts func NewTransport(ci *fs.ConfigInfo) http.RoundTripper { (*noTransport).Do(func() { transport = NewTransportCustom(ci, nil) }) return transport } // NewClient returns an http.Client with the correct timeouts func NewClient(ci *fs.ConfigInfo) *http.Client { client := &http.Client{ Transport: NewTransport(ci), } if ci.Cookie { client.Jar = cookieJar } return client } // Transport is our http Transport which wraps an http.Transport // * Sets the User Agent // * Does logging type Transport struct { *http.Transport dump fs.DumpFlags filterRequest func(req *http.Request) userAgent string headers []*fs.HTTPOption } // newTransport wraps the http.Transport passed in and logs all // roundtrips, including the bodies if body dumping is enabled in ci.Dump.
func newTransport(ci *fs.ConfigInfo, transport *http.Transport) *Transport { return &Transport{ Transport: transport, dump: ci.Dump, userAgent: ci.UserAgent, headers: ci.Headers, } } // SetRequestFilter sets a filter to be used on each request func (t *Transport) SetRequestFilter(f func(req *http.Request)) { t.filterRequest = f } // A mutex to protect this map var checkedHostMu sync.RWMutex // A map of servers we have checked for time var checkedHost = make(map[string]struct{}, 1) // Check the server time is the same as ours, once for each server func checkServerTime(req *http.Request, resp *http.Response) { host := req.URL.Host if req.Host != "" { host = req.Host } checkedHostMu.RLock() _, ok := checkedHost[host] checkedHostMu.RUnlock() if ok { return } dateString := resp.Header.Get("Date") if dateString == "" { return } date, err := http.ParseTime(dateString) if err != nil { fs.Debugf(nil, "Couldn't parse Date: from server %s: %q: %v", host, dateString, err) return } dt := time.Since(date) const window = 5 * 60 * time.Second if dt > window || dt < -window { fs.Logf(nil, "Time may be set wrong - time from %q is %v different from this computer", host, dt) } checkedHostMu.Lock() checkedHost[host] = struct{}{} checkedHostMu.Unlock() } // cleanAuth gets rid of one authBuf header within the first 4k func cleanAuth(buf, authBuf []byte) []byte { // Find how much buffer to check n := 4096 if len(buf) < n { n = len(buf) } // See if there is an Authorization: header i := bytes.Index(buf[:n], authBuf) if i < 0 { return buf } i += len(authBuf) // Overwrite the next 4 chars with 'X' for j := 0; i < len(buf) && j < 4; j++ { if buf[i] == '\n' { break } buf[i] = 'X' i++ } // Snip out to the next '\n' j := bytes.IndexByte(buf[i:], '\n') if j < 0 { return buf[:i] } n = copy(buf[i:], buf[i+j:]) return buf[:i+n] } var authBufs = [][]byte{ []byte("Authorization: "), []byte("X-Auth-Token: "), } // cleanAuths gets rid of all the possible Auth headers func cleanAuths(buf []byte) []byte { for _, authBuf := range authBufs { buf = cleanAuth(buf, authBuf) } return buf } // RoundTrip implements the RoundTripper interface. 
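//
// A summary of the order of operations implemented below: wait for a
// transactions-per-second token if --tps-limit is in use, force the user
// agent and any user supplied headers, apply the request filter if one is
// set, optionally dump the request, do the underlying round trip,
// optionally dump the response, then compare the server's Date header
// against our own clock.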
func (t *Transport) RoundTrip(req *http.Request) (resp *http.Response, err error) { // Get transactions per second token first if limiting if tpsBucket != nil { tbErr := tpsBucket.Wait(req.Context()) if tbErr != nil && tbErr != context.Canceled { fs.Errorf(nil, "HTTP token bucket error: %v", tbErr) } } // Force user agent req.Header.Set("User-Agent", t.userAgent) // Set user defined headers for _, option := range t.headers { req.Header.Set(option.Key, option.Value) } // Filter the request if required if t.filterRequest != nil { t.filterRequest(req) } // Logf request if t.dump&(fs.DumpHeaders|fs.DumpBodies|fs.DumpAuth|fs.DumpRequests|fs.DumpResponses) != 0 { buf, _ := httputil.DumpRequestOut(req, t.dump&(fs.DumpBodies|fs.DumpRequests) != 0) if t.dump&fs.DumpAuth == 0 { buf = cleanAuths(buf) } fs.Debugf(nil, "%s", separatorReq) fs.Debugf(nil, "%s (req %p)", "HTTP REQUEST", req) fs.Debugf(nil, "%s", string(buf)) fs.Debugf(nil, "%s", separatorReq) } // Do round trip resp, err = t.Transport.RoundTrip(req) // Logf response if t.dump&(fs.DumpHeaders|fs.DumpBodies|fs.DumpAuth|fs.DumpRequests|fs.DumpResponses) != 0 { fs.Debugf(nil, "%s", separatorResp) fs.Debugf(nil, "%s (req %p)", "HTTP RESPONSE", req) if err != nil { fs.Debugf(nil, "Error: %v", err) } else { buf, _ := httputil.DumpResponse(resp, t.dump&(fs.DumpBodies|fs.DumpResponses) != 0) fs.Debugf(nil, "%s", string(buf)) } fs.Debugf(nil, "%s", separatorResp) } if err == nil { checkServerTime(req, resp) } return resp, err } // NewDialer creates a net.Dialer structure with Timeout, Keepalive // and LocalAddr set from rclone flags. func NewDialer(ci *fs.ConfigInfo) *net.Dialer { dialer := &net.Dialer{ Timeout: ci.ConnectTimeout, KeepAlive: 30 * time.Second, } if ci.BindAddr != nil { dialer.LocalAddr = &net.TCPAddr{IP: ci.BindAddr} } return dialer } rclone-1.53.3/fs/fshttp/http_test.go000066400000000000000000000027321375552240400173650ustar00rootroot00000000000000package fshttp import ( "testing" "github.com/stretchr/testify/assert" ) func TestCleanAuth(t *testing.T) { for _, test := range []struct { in string want string }{ {"", ""}, {"floo", "floo"}, {"Authorization: ", "Authorization: "}, {"Authorization: \n", "Authorization: \n"}, {"Authorization: A", "Authorization: X"}, {"Authorization: A\n", "Authorization: X\n"}, {"Authorization: AAAA", "Authorization: XXXX"}, {"Authorization: AAAA\n", "Authorization: XXXX\n"}, {"Authorization: AAAAA", "Authorization: XXXX"}, {"Authorization: AAAAA\n", "Authorization: XXXX\n"}, {"Authorization: AAAA\n", "Authorization: XXXX\n"}, {"Authorization: AAAAAAAAA\nPotato: Help\n", "Authorization: XXXX\nPotato: Help\n"}, {"Sausage: 1\nAuthorization: AAAAAAAAA\nPotato: Help\n", "Sausage: 1\nAuthorization: XXXX\nPotato: Help\n"}, } { got := string(cleanAuth([]byte(test.in), authBufs[0])) assert.Equal(t, test.want, got, test.in) } } func TestCleanAuths(t *testing.T) { for _, test := range []struct { in string want string }{ {"", ""}, {"floo", "floo"}, {"Authorization: AAAAAAAAA\nPotato: Help\n", "Authorization: XXXX\nPotato: Help\n"}, {"X-Auth-Token: AAAAAAAAA\nPotato: Help\n", "X-Auth-Token: XXXX\nPotato: Help\n"}, {"X-Auth-Token: AAAAAAAAA\nAuthorization: AAAAAAAAA\nPotato: Help\n", "X-Auth-Token: XXXX\nAuthorization: XXXX\nPotato: Help\n"}, } { got := string(cleanAuths([]byte(test.in))) assert.Equal(t, test.want, got, test.in) } } 
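// A minimal usage sketch for this package (hedged, not one of the
// original tests): backends typically obtain the shared HTTP client as
// below, assuming the global fs.Config has been initialised by rclone's
// usual startup code:
//
//	client := fshttp.NewClient(fs.Config)
//	resp, err := client.Get("https://example.com/")
//	if err == nil {
//		_ = resp.Body.Close()
//	}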
rclone-1.53.3/fs/fspath/000077500000000000000000000000001375552240400147715ustar00rootroot00000000000000rclone-1.53.3/fs/fspath/path.go000066400000000000000000000073701375552240400162630ustar00rootroot00000000000000// Package fspath contains routines for fspath manipulation package fspath import ( "errors" "path" "path/filepath" "regexp" "strings" "github.com/rclone/rclone/fs/driveletter" ) const ( configNameRe = `[\w_ -]+` remoteNameRe = `^(:?` + configNameRe + `):` ) var ( errInvalidCharacters = errors.New("config name contains invalid characters - may only contain 0-9, A-Z, a-z, _, - and space") errCantBeEmpty = errors.New("can't use empty string as a path") errCantStartWithDash = errors.New("config name starts with -") // urlMatcher is a pattern to match an rclone URL // note that this matches invalid remoteNames urlMatcher = regexp.MustCompile(`^(:?[^\\/:]*):(.*)$`) // configNameMatcher is a pattern to match an rclone config name configNameMatcher = regexp.MustCompile(`^` + configNameRe + `$`) // remoteNameMatcher is a pattern to match an rclone remote name remoteNameMatcher = regexp.MustCompile(remoteNameRe + `$`) ) // CheckConfigName returns an error if configName is invalid func CheckConfigName(configName string) error { if !configNameMatcher.MatchString(configName) { return errInvalidCharacters } // Reject configName if it starts with -, as that complicates usage (#4261) if strings.HasPrefix(configName, "-") { return errCantStartWithDash } return nil } // CheckRemoteName returns an error if remoteName is invalid func CheckRemoteName(remoteName string) error { if !remoteNameMatcher.MatchString(remoteName) { return errInvalidCharacters } return nil } // Parse deconstructs a remote path into configName and fsPath // // If the path is a local path then configName will be returned as "". // // So "remote:path/to/dir" will return "remote", "path/to/dir" // and "/path/to/local" will return ("", "/path/to/local") // // Note that this will turn \ into / in the fsPath on Windows // // An error may be returned if the remote name has invalid characters // in it or if the path is empty. func Parse(path string) (configName, fsPath string, err error) { if path == "" { return "", "", errCantBeEmpty } parts := urlMatcher.FindStringSubmatch(path) configName, fsPath = "", path if parts != nil && !driveletter.IsDriveLetter(parts[1]) { configName, fsPath = parts[1], parts[2] err = CheckRemoteName(configName + ":") if err != nil { return configName, fsPath, errInvalidCharacters } } // change native directory separators to / if there are any fsPath = filepath.ToSlash(fsPath) return configName, fsPath, nil } // Split splits a remote into a parent and a leaf // // if it returns leaf as an empty string then remote is a directory // // if it returns parent as an empty string then that means the current directory // // The returned values have the property that parent + leaf == remote // (except under Windows where \ will be translated into /) func Split(remote string) (parent string, leaf string, err error) { remoteName, remotePath, err := Parse(remote) if err != nil { return "", "", err } if remoteName != "" { remoteName += ":" } // Construct new remote name without last segment parent, leaf = path.Split(remotePath) return remoteName + parent, leaf, nil } // JoinRootPath joins any number of path elements into a single path, adding a // separating slash if necessary. The result is Cleaned; in particular, // all empty strings are ignored. // // If the first non-empty element has a leading "//" this is preserved.
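// (this keeps UNC-style paths such as //server/share intact)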
// // If the path contains \ these will be converted to / on Windows. func JoinRootPath(elem ...string) string { es := make([]string, len(elem)) for i := range es { es[i] = filepath.ToSlash(elem[i]) } for i, e := range es { if e != "" { if strings.HasPrefix(e, "//") { return "/" + path.Clean(strings.Join(es[i:], "/")) } return path.Clean(strings.Join(es[i:], "/")) } } return "" } rclone-1.53.3/fs/fspath/path_test.go000066400000000000000000000122051375552240400173130ustar00rootroot00000000000000package fspath import ( "fmt" "path/filepath" "runtime" "strings" "testing" "github.com/stretchr/testify/assert" ) func TestCheckConfigName(t *testing.T) { for _, test := range []struct { in string want error }{ {"remote", nil}, {"", errInvalidCharacters}, {":remote:", errInvalidCharacters}, {"remote:", errInvalidCharacters}, {"rem:ote", errInvalidCharacters}, {"rem/ote", errInvalidCharacters}, {"rem\\ote", errInvalidCharacters}, {"[remote", errInvalidCharacters}, {"*", errInvalidCharacters}, {"-remote", errCantStartWithDash}, {"r-emote-", nil}, {"_rem_ote_", nil}, } { got := CheckConfigName(test.in) assert.Equal(t, test.want, got, test.in) } } func TestCheckRemoteName(t *testing.T) { for _, test := range []struct { in string want error }{ {":remote:", nil}, {"remote:", nil}, {"", errInvalidCharacters}, {"rem:ote", errInvalidCharacters}, {"rem:ote:", errInvalidCharacters}, {"remote", errInvalidCharacters}, {"rem/ote:", errInvalidCharacters}, {"rem\\ote:", errInvalidCharacters}, {"[remote:", errInvalidCharacters}, {"*:", errInvalidCharacters}, } { got := CheckRemoteName(test.in) assert.Equal(t, test.want, got, test.in) } } func TestParse(t *testing.T) { for _, test := range []struct { in, wantConfigName, wantFsPath string wantErr error }{ {"", "", "", errCantBeEmpty}, {":", "", "", errInvalidCharacters}, {"::", ":", "", errInvalidCharacters}, {":/:", "", "/:", errInvalidCharacters}, {"/:", "", "/:", nil}, {"\\backslash:", "", "\\backslash:", nil}, {"/slash:", "", "/slash:", nil}, {"with\\backslash:", "", "with\\backslash:", nil}, {"with/slash:", "", "with/slash:", nil}, {"/path/to/file", "", "/path/to/file", nil}, {"/path:/to/file", "", "/path:/to/file", nil}, {"./path:/to/file", "", "./path:/to/file", nil}, {"./:colon.txt", "", "./:colon.txt", nil}, {"path/to/file", "", "path/to/file", nil}, {"remote:path/to/file", "remote", "path/to/file", nil}, {"rem*ote:path/to/file", "rem*ote", "path/to/file", errInvalidCharacters}, {"remote:/path/to/file", "remote", "/path/to/file", nil}, {"rem.ote:/path/to/file", "rem.ote", "/path/to/file", errInvalidCharacters}, {":backend:/path/to/file", ":backend", "/path/to/file", nil}, {":bac*kend:/path/to/file", ":bac*kend", "/path/to/file", errInvalidCharacters}, } { gotConfigName, gotFsPath, gotErr := Parse(test.in) if runtime.GOOS == "windows" { test.wantFsPath = strings.Replace(test.wantFsPath, `\`, `/`, -1) } assert.Equal(t, test.wantErr, gotErr) assert.Equal(t, test.wantConfigName, gotConfigName) assert.Equal(t, test.wantFsPath, gotFsPath) } } func TestSplit(t *testing.T) { for _, test := range []struct { remote, wantParent, wantLeaf string wantErr error }{ {"", "", "", errCantBeEmpty}, {"remote:", "remote:", "", nil}, {"remote:potato", "remote:", "potato", nil}, {"remote:/", "remote:/", "", nil}, {"remote:/potato", "remote:/", "potato", nil}, {"remote:/potato/potato", "remote:/potato/", "potato", nil}, {"remote:potato/sausage", "remote:potato/", "sausage", nil}, {"rem.ote:potato/sausage", "", "", errInvalidCharacters}, {":remote:", ":remote:", "", nil}, 
{":remote:potato", ":remote:", "potato", nil}, {":remote:/", ":remote:/", "", nil}, {":remote:/potato", ":remote:/", "potato", nil}, {":remote:/potato/potato", ":remote:/potato/", "potato", nil}, {":remote:potato/sausage", ":remote:potato/", "sausage", nil}, {":rem[ote:potato/sausage", "", "", errInvalidCharacters}, {"/", "/", "", nil}, {"/root", "/", "root", nil}, {"/a/b", "/a/", "b", nil}, {"root", "", "root", nil}, {"a/b", "a/", "b", nil}, {"root/", "root/", "", nil}, {"a/b/", "a/b/", "", nil}, } { gotParent, gotLeaf, gotErr := Split(test.remote) assert.Equal(t, test.wantErr, gotErr) assert.Equal(t, test.wantParent, gotParent, test.remote) assert.Equal(t, test.wantLeaf, gotLeaf, test.remote) if gotErr == nil { assert.Equal(t, test.remote, gotParent+gotLeaf, fmt.Sprintf("%s: %q + %q != %q", test.remote, gotParent, gotLeaf, test.remote)) } } } func TestJoinRootPath(t *testing.T) { for _, test := range []struct { elements []string want string }{ {nil, ""}, {[]string{""}, ""}, {[]string{"/"}, "/"}, {[]string{"/", "/"}, "/"}, {[]string{"/", "//"}, "/"}, {[]string{"/root", ""}, "/root"}, {[]string{"/root", "/"}, "/root"}, {[]string{"/root", "//"}, "/root"}, {[]string{"/a/b"}, "/a/b"}, {[]string{"//", "/"}, "//"}, {[]string{"//server", "path"}, "//server/path"}, {[]string{"//server/sub", "path"}, "//server/sub/path"}, {[]string{"//server", "//path"}, "//server/path"}, {[]string{"//server/sub", "//path"}, "//server/sub/path"}, {[]string{"", "//", "/"}, "//"}, {[]string{"", "//server", "path"}, "//server/path"}, {[]string{"", "//server/sub", "path"}, "//server/sub/path"}, {[]string{"", "//server", "//path"}, "//server/path"}, {[]string{"", "//server/sub", "//path"}, "//server/sub/path"}, {[]string{"", filepath.FromSlash("//server/sub"), filepath.FromSlash("//path")}, "//server/sub/path"}, } { got := JoinRootPath(test.elements...) assert.Equal(t, test.want, got) } } rclone-1.53.3/fs/hash/000077500000000000000000000000001375552240400144275ustar00rootroot00000000000000rclone-1.53.3/fs/hash/hash.go000066400000000000000000000161721375552240400157100ustar00rootroot00000000000000package hash import ( "crypto/md5" "crypto/sha1" "encoding/hex" "fmt" "hash" "hash/crc32" "io" "strings" "github.com/jzelinskie/whirlpool" "github.com/pkg/errors" ) // Type indicates a standard hashing algorithm type Type int type hashDefinition struct { width int name string newFunc func() hash.Hash hashType Type } var hashes []*hashDefinition var highestType Type = 1 // RegisterHash adds a new Hash to the list and returns it Type func RegisterHash(name string, width int, newFunc func() hash.Hash) Type { definition := &hashDefinition{ name: name, width: width, newFunc: newFunc, hashType: highestType, } hashes = append(hashes, definition) highestType = highestType << 1 return definition.hashType } // ErrUnsupported should be returned by filesystem, // if it is requested to deliver an unsupported hash type. var ErrUnsupported = errors.New("hash type not supported") var ( // None indicates no hashes are supported None Type // MD5 indicates MD5 support MD5 Type // SHA1 indicates SHA-1 support SHA1 Type // Whirlpool indicates Whirlpool support Whirlpool Type // CRC32 indicates CRC-32 support CRC32 Type ) func init() { MD5 = RegisterHash("MD5", 32, md5.New) SHA1 = RegisterHash("SHA-1", 40, sha1.New) Whirlpool = RegisterHash("Whirlpool", 128, whirlpool.New) CRC32 = RegisterHash("CRC-32", 8, func() hash.Hash { return crc32.NewIEEE() }) } // Supported returns a set of all the supported hashes by // HashStream and MultiHasher. 
func Supported() Set { var types []Type for _, v := range hashes { types = append(types, v.hashType) } return NewHashSet(types...) } // Width returns the width in characters for any hash Type func Width(hashType Type) int { for _, v := range hashes { if v.hashType == hashType { return v.width } } return 0 } // Stream will calculate hashes of all supported hash types. func Stream(r io.Reader) (map[Type]string, error) { return StreamTypes(r, Supported()) } // StreamTypes will calculate hashes of the requested hash types. func StreamTypes(r io.Reader, set Set) (map[Type]string, error) { hashers, err := fromTypes(set) if err != nil { return nil, err } _, err = io.Copy(toMultiWriter(hashers), r) if err != nil { return nil, err } var ret = make(map[Type]string) for k, v := range hashers { ret[k] = hex.EncodeToString(v.Sum(nil)) } return ret, nil } // String returns a string representation of the hash type. // The function will panic if the hash type is unknown. func (h Type) String() string { if h == None { return "None" } for _, v := range hashes { if v.hashType == h { return v.name } } err := fmt.Sprintf("internal error: unknown hash type: 0x%x", int(h)) panic(err) } // Set a Type from a flag func (h *Type) Set(s string) error { if s == "None" { *h = None return nil } for _, v := range hashes { if v.name == s { *h = v.hashType return nil } } return errors.Errorf("Unknown hash type %q", s) } // Type of the value func (h Type) Type() string { return "string" } // fromTypes will return hashers for all the requested types. // The types must be a subset of Supported(), // and this function must support all types. func fromTypes(set Set) (map[Type]hash.Hash, error) { if !set.SubsetOf(Supported()) { return nil, errors.Errorf("requested set %08x contains unknown hash types", int(set)) } var hashers = make(map[Type]hash.Hash) types := set.Array() for _, t := range types { for _, v := range hashes { if t != v.hashType { continue } hashers[t] = v.newFunc() break } if hashers[t] == nil { err := fmt.Sprintf("internal error: Unsupported hash type %v", t) panic(err) } } return hashers, nil } // toMultiWriter combines a set of hashers into a // single multiwriter, where one write will update all // the hashers. func toMultiWriter(h map[Type]hash.Hash) io.Writer { // Convert to a slice var w = make([]io.Writer, 0, len(h)) for _, v := range h { w = append(w, v) } return io.MultiWriter(w...) } // A MultiHasher will construct various hashes on // all incoming writes. type MultiHasher struct { w io.Writer size int64 h map[Type]hash.Hash // Hashes } // NewMultiHasher will return a hash writer that will write all // supported hash types. func NewMultiHasher() *MultiHasher { h, err := NewMultiHasherTypes(Supported()) if err != nil { panic("internal error: could not create multihasher") } return h } // NewMultiHasherTypes will return a hash writer that will write // the requested hash types. func NewMultiHasherTypes(set Set) (*MultiHasher, error) { hashers, err := fromTypes(set) if err != nil { return nil, err } m := MultiHasher{h: hashers, w: toMultiWriter(hashers)} return &m, nil } func (m *MultiHasher) Write(p []byte) (n int, err error) { n, err = m.w.Write(p) m.size += int64(n) return n, err } // Sums returns the sums of all accumulated hashes as hex encoded // strings.
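//
// A minimal usage sketch (hedged; the sums depend entirely on the data
// written, and r stands for any io.Reader):
//
//	h := NewMultiHasher()
//	if _, err := io.Copy(h, r); err == nil {
//		fmt.Println(h.Sums()[MD5]) // hex encoded MD5 of everything written
//	}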
func (m *MultiHasher) Sums() map[Type]string { dst := make(map[Type]string) for k, v := range m.h { dst[k] = hex.EncodeToString(v.Sum(nil)) } return dst } // Size returns the number of bytes written func (m *MultiHasher) Size() int64 { return m.size } // A Set indicates one or more hash types. type Set int // NewHashSet will create a new hash set with the hash types supplied func NewHashSet(t ...Type) Set { h := Set(None) return h.Add(t...) } // Add one or more hash types to the set. // Returns the modified hash set. func (h *Set) Add(t ...Type) Set { for _, v := range t { *h |= Set(v) } return *h } // Contains returns true if the set contains the given hash type func (h Set) Contains(t Type) bool { return int(h)&int(t) != 0 } // Overlap returns the overlapping hash types func (h Set) Overlap(t Set) Set { return Set(int(h) & int(t)) } // SubsetOf will return true if all types of h // are present in the set c func (h Set) SubsetOf(c Set) bool { return int(h)|int(c) == int(c) } // GetOne will return a hash type. // Currently the first is returned, but it could be // improved to return the strongest. func (h Set) GetOne() Type { v := int(h) i := uint(0) for v != 0 { if v&1 != 0 { return Type(1 << i) } i++ v >>= 1 } return None } // Array returns an array of all hash types in the set func (h Set) Array() (ht []Type) { v := int(h) i := uint(0) for v != 0 { if v&1 != 0 { ht = append(ht, Type(1<<i)) } i++ v >>= 1 } return ht } // Count returns the number of hash types in the set func (h Set) Count() int { if int(h) == 0 { return 0 } // credit: https://code.google.com/u/arnehormann/ x := uint64(h) x -= (x >> 1) & 0x5555555555555555 x = (x>>2)&0x3333333333333333 + x&0x3333333333333333 x += x >> 4 x &= 0x0f0f0f0f0f0f0f0f x *= 0x0101010101010101 return int(x >> 56) } // String returns a string representation of the hash set. // The function will panic if it contains an unknown type. func (h Set) String() string { a := h.Array() var r []string for _, v := range a { r = append(r, v.String()) } return "[" + strings.Join(r, ", ") + "]" } // Equals checks to see if src == dst, but ignores empty strings // and returns true if either is empty.
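//
// For example Equals("", "abc") and Equals("abc", "abc") are true,
// while Equals("abc", "abd") is false.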
func Equals(src, dst string) bool { if src == "" || dst == "" { return true } return src == dst } rclone-1.53.3/fs/hash/hash_test.go000066400000000000000000000107101375552240400167370ustar00rootroot00000000000000package hash_test import ( "bytes" "io" "log" "testing" "github.com/rclone/rclone/fs/hash" "github.com/spf13/pflag" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // Check it satisfies the interface var _ pflag.Value = (*hash.Type)(nil) func TestHashSet(t *testing.T) { var h hash.Set assert.Equal(t, 0, h.Count()) a := h.Array() assert.Len(t, a, 0) h = h.Add(hash.MD5) log.Println(h) assert.Equal(t, 1, h.Count()) assert.Equal(t, hash.MD5, h.GetOne()) a = h.Array() assert.Len(t, a, 1) assert.Equal(t, a[0], hash.MD5) // Test overlap, with all hashes h = h.Overlap(hash.Supported()) assert.Equal(t, 1, h.Count()) assert.Equal(t, hash.MD5, h.GetOne()) assert.True(t, h.SubsetOf(hash.Supported())) assert.True(t, h.SubsetOf(hash.NewHashSet(hash.MD5))) h = h.Add(hash.SHA1) assert.Equal(t, 2, h.Count()) one := h.GetOne() if !(one == hash.MD5 || one == hash.SHA1) { t.Fatalf("expected to be either MD5 or SHA1, got %v", one) } assert.True(t, h.SubsetOf(hash.Supported())) assert.False(t, h.SubsetOf(hash.NewHashSet(hash.MD5))) assert.False(t, h.SubsetOf(hash.NewHashSet(hash.SHA1))) assert.True(t, h.SubsetOf(hash.NewHashSet(hash.MD5, hash.SHA1))) a = h.Array() assert.Len(t, a, 2) ol := h.Overlap(hash.NewHashSet(hash.MD5)) assert.Equal(t, 1, ol.Count()) assert.True(t, ol.Contains(hash.MD5)) assert.False(t, ol.Contains(hash.SHA1)) ol = h.Overlap(hash.NewHashSet(hash.MD5, hash.SHA1)) assert.Equal(t, 2, ol.Count()) assert.True(t, ol.Contains(hash.MD5)) assert.True(t, ol.Contains(hash.SHA1)) } type hashTest struct { input []byte output map[hash.Type]string } var hashTestSet = []hashTest{ { input: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14}, output: map[hash.Type]string{ hash.MD5: "bf13fc19e5151ac57d4252e0e0f87abe", hash.SHA1: "3ab6543c08a75f292a5ecedac87ec41642d12166", hash.Whirlpool: "eddf52133d4566d763f716e853d6e4efbabd29e2c2e63f56747b1596172851d34c2df9944beb6640dbdbe3d9b4eb61180720a79e3d15baff31c91e43d63869a4", hash.CRC32: "a6041d7e", }, }, // Empty data set { input: []byte{}, output: map[hash.Type]string{ hash.MD5: "d41d8cd98f00b204e9800998ecf8427e", hash.SHA1: "da39a3ee5e6b4b0d3255bfef95601890afd80709", hash.Whirlpool: "19fa61d75522a4669b44e39c1d2e1726c530232130d407f89afee0964997f7a73e83be698b288febcf88e3e03c4f0757ea8964e59b63d93708b138cc42a66eb3", hash.CRC32: "00000000", }, }, } func TestMultiHasher(t *testing.T) { for _, test := range hashTestSet { mh := hash.NewMultiHasher() n, err := io.Copy(mh, bytes.NewBuffer(test.input)) require.NoError(t, err) assert.Len(t, test.input, int(n)) sums := mh.Sums() for k, v := range sums { expect, ok := test.output[k] require.True(t, ok, "test output for hash not found") assert.Equal(t, expect, v) } // Test that all are present for k, v := range test.output { expect, ok := sums[k] require.True(t, ok, "test output for hash not found") assert.Equal(t, expect, v) } } } func TestMultiHasherTypes(t *testing.T) { h := hash.SHA1 for _, test := range hashTestSet { mh, err := hash.NewMultiHasherTypes(hash.NewHashSet(h)) if err != nil { t.Fatal(err) } n, err := io.Copy(mh, bytes.NewBuffer(test.input)) require.NoError(t, err) assert.Len(t, test.input, int(n)) sums := mh.Sums() assert.Len(t, sums, 1) assert.Equal(t, sums[h], test.output[h]) } } func TestHashStream(t *testing.T) { for _, test := range hashTestSet { sums, err := 
hash.Stream(bytes.NewBuffer(test.input)) require.NoError(t, err) for k, v := range sums { expect, ok := test.output[k] require.True(t, ok) assert.Equal(t, v, expect) } // Test that all are present for k, v := range test.output { expect, ok := sums[k] require.True(t, ok) assert.Equal(t, v, expect) } } } func TestHashStreamTypes(t *testing.T) { h := hash.SHA1 for _, test := range hashTestSet { sums, err := hash.StreamTypes(bytes.NewBuffer(test.input), hash.NewHashSet(h)) require.NoError(t, err) assert.Len(t, sums, 1) assert.Equal(t, sums[h], test.output[h]) } } func TestHashSetStringer(t *testing.T) { h := hash.NewHashSet(hash.SHA1, hash.MD5) assert.Equal(t, h.String(), "[MD5, SHA-1]") h = hash.NewHashSet(hash.SHA1) assert.Equal(t, h.String(), "[SHA-1]") h = hash.NewHashSet() assert.Equal(t, h.String(), "[]") } func TestHashStringer(t *testing.T) { h := hash.MD5 assert.Equal(t, h.String(), "MD5") h = hash.None assert.Equal(t, h.String(), "None") } rclone-1.53.3/fs/list/000077500000000000000000000000001375552240400144575ustar00rootroot00000000000000rclone-1.53.3/fs/list/list.go000066400000000000000000000057661375552240400157710ustar00rootroot00000000000000// Package list contains list functions package list import ( "context" "sort" "strings" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/filter" ) // DirSorted reads Object and *Dir into entries for the given Fs. // // dir is the start directory, "" for root // // If includeAll is specified all files will be added, otherwise only // files and directories passing the filter will be added. // // Files will be returned in sorted order func DirSorted(ctx context.Context, f fs.Fs, includeAll bool, dir string) (entries fs.DirEntries, err error) { // Get unfiltered entries from the fs entries, err = f.List(ctx, dir) if err != nil { return nil, err } // This should happen only if the exclude file lives in the // starting directory, otherwise ListDirSorted should not be // called.
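// (the "exclude file" is a marker file in a directory, e.g. one named by
// --exclude-if-present, whose presence excludes the whole directory)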
if !includeAll && filter.Active.ListContainsExcludeFile(entries) { fs.Debugf(dir, "Excluded") return nil, nil } return filterAndSortDir(ctx, entries, includeAll, dir, filter.Active.IncludeObject, filter.Active.IncludeDirectory(ctx, f)) } // filter (if required) and check the entries, then sort them func filterAndSortDir(ctx context.Context, entries fs.DirEntries, includeAll bool, dir string, IncludeObject func(ctx context.Context, o fs.Object) bool, IncludeDirectory func(remote string) (bool, error)) (newEntries fs.DirEntries, err error) { newEntries = entries[:0] // in place filter prefix := "" if dir != "" { prefix = dir + "/" } for _, entry := range entries { ok := true // check includes and types switch x := entry.(type) { case fs.Object: // Make sure we don't delete excluded files if not required if !includeAll && !IncludeObject(ctx, x) { ok = false fs.Debugf(x, "Excluded") } case fs.Directory: if !includeAll { include, err := IncludeDirectory(x.Remote()) if err != nil { return nil, err } if !include { ok = false fs.Debugf(x, "Excluded") } } default: return nil, errors.Errorf("unknown object type %T", entry) } // check remote name belongs in this directory remote := entry.Remote() switch { case !ok: // ignore case !strings.HasPrefix(remote, prefix): ok = false fs.Errorf(entry, "Entry doesn't belong in directory %q (too short) - ignoring", dir) case remote == prefix: ok = false fs.Errorf(entry, "Entry doesn't belong in directory %q (same as directory) - ignoring", dir) case strings.ContainsRune(remote[len(prefix):], '/'): ok = false fs.Errorf(entry, "Entry doesn't belong in directory %q (contains subdir) - ignoring", dir) default: // ok } if ok { newEntries = append(newEntries, entry) } } entries = newEntries // Sort the directory entries by Remote // // We use a stable sort here just in case there are // duplicates. Assuming the remote delivers the entries in a // consistent order, this will give the best user experience // in syncing as it will use the first entry for the sync // comparison. 
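// For example, if a remote delivered two entries both named "a", a
// stable sort keeps them in the order the remote returned them, so the
// same entry wins the comparison on every run.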
sort.Stable(entries) return entries, nil } rclone-1.53.3/fs/list/list_test.go000066400000000000000000000062621375552240400170260ustar00rootroot00000000000000package list import ( "context" "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/mockdir" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // NB integration tests for DirSorted are in // fs/operations/listdirsorted_test.go func TestFilterAndSortIncludeAll(t *testing.T) { da := mockdir.New("a") oA := mockobject.Object("A") db := mockdir.New("b") oB := mockobject.Object("B") dc := mockdir.New("c") oC := mockobject.Object("C") dd := mockdir.New("d") oD := mockobject.Object("D") entries := fs.DirEntries{da, oA, db, oB, dc, oC, dd, oD} includeObject := func(ctx context.Context, o fs.Object) bool { return o != oB } includeDirectory := func(remote string) (bool, error) { return remote != "c", nil } // no filter newEntries, err := filterAndSortDir(context.Background(), entries, true, "", includeObject, includeDirectory) require.NoError(t, err) assert.Equal(t, newEntries, fs.DirEntries{oA, oB, oC, oD, da, db, dc, dd}, ) // filter newEntries, err = filterAndSortDir(context.Background(), entries, false, "", includeObject, includeDirectory) require.NoError(t, err) assert.Equal(t, newEntries, fs.DirEntries{oA, oC, oD, da, db, dd}, ) } func TestFilterAndSortCheckDir(t *testing.T) { // Check the different kinds of error when listing "dir" da := mockdir.New("dir/") oA := mockobject.Object("diR/a") db := mockdir.New("dir/b") oB := mockobject.Object("dir/B/sub") dc := mockdir.New("dir/c") oC := mockobject.Object("dir/C") dd := mockdir.New("dir/d") oD := mockobject.Object("dir/D") entries := fs.DirEntries{da, oA, db, oB, dc, oC, dd, oD} newEntries, err := filterAndSortDir(context.Background(), entries, true, "dir", nil, nil) require.NoError(t, err) assert.Equal(t, newEntries, fs.DirEntries{oC, oD, db, dc, dd}, ) } func TestFilterAndSortCheckDirRoot(t *testing.T) { // Check the different kinds of error when listing the root "" da := mockdir.New("") oA := mockobject.Object("A") db := mockdir.New("b") oB := mockobject.Object("B/sub") dc := mockdir.New("c") oC := mockobject.Object("C") dd := mockdir.New("d") oD := mockobject.Object("D") entries := fs.DirEntries{da, oA, db, oB, dc, oC, dd, oD} newEntries, err := filterAndSortDir(context.Background(), entries, true, "", nil, nil) require.NoError(t, err) assert.Equal(t, newEntries, fs.DirEntries{oA, oC, oD, db, dc, dd}, ) } type unknownDirEntry string func (o unknownDirEntry) String() string { return string(o) } func (o unknownDirEntry) Remote() string { return string(o) } func (o unknownDirEntry) ModTime(ctx context.Context) (t time.Time) { return t } func (o unknownDirEntry) Size() int64 { return 0 } func TestFilterAndSortUnknown(t *testing.T) { // Check that an unknown entry produces an error da := mockdir.New("") oA := mockobject.Object("A") ub := unknownDirEntry("b") oB := mockobject.Object("B/sub") entries := fs.DirEntries{da, oA, ub, oB} newEntries, err := filterAndSortDir(context.Background(), entries, true, "", nil, nil) assert.Error(t, err, "error") assert.Nil(t, newEntries) } rclone-1.53.3/fs/log.go000066400000000000000000000121501375552240400146130ustar00rootroot00000000000000package fs import ( "fmt" "log" "github.com/pkg/errors" "github.com/sirupsen/logrus" ) // LogLevel describes rclone's logs. These are a subset of the syslog log levels. type LogLevel byte // Log levels. 
These are the syslog levels of which we only use a // subset. // // LOG_EMERG system is unusable // LOG_ALERT action must be taken immediately // LOG_CRIT critical conditions // LOG_ERR error conditions // LOG_WARNING warning conditions // LOG_NOTICE normal, but significant, condition // LOG_INFO informational message // LOG_DEBUG debug-level message const ( LogLevelEmergency LogLevel = iota LogLevelAlert LogLevelCritical LogLevelError // Error - can't be suppressed LogLevelWarning LogLevelNotice // Normal logging, -q suppresses LogLevelInfo // Transfers, needs -v LogLevelDebug // Debug level, needs -vv ) var logLevelToString = []string{ LogLevelEmergency: "EMERGENCY", LogLevelAlert: "ALERT", LogLevelCritical: "CRITICAL", LogLevelError: "ERROR", LogLevelWarning: "WARNING", LogLevelNotice: "NOTICE", LogLevelInfo: "INFO", LogLevelDebug: "DEBUG", } // String turns a LogLevel into a string func (l LogLevel) String() string { if l >= LogLevel(len(logLevelToString)) { return fmt.Sprintf("LogLevel(%d)", l) } return logLevelToString[l] } // Set a LogLevel func (l *LogLevel) Set(s string) error { for n, name := range logLevelToString { if s != "" && name == s { *l = LogLevel(n) return nil } } return errors.Errorf("Unknown log level %q", s) } // Type of the value func (l *LogLevel) Type() string { return "string" } // LogPrint sends the text to the logger at the given level var LogPrint = func(level LogLevel, text string) { text = fmt.Sprintf("%-6s: %s", level, text) _ = log.Output(4, text) } // LogValueItem describes a keyed item for a JSON log entry type LogValueItem struct { key string value interface{} } // LogValue should be used as an argument to any logging calls to // augment the JSON output with more structured information. // // key is the dictionary parameter used to store value. func LogValue(key string, value interface{}) LogValueItem { return LogValueItem{key: key, value: value} } // String returns an empty string so LogValueItem entries won't show // in the textual representation of logs. They need to be put in so // the number of parameters of the log call matches. func (j LogValueItem) String() string { return "" } // LogPrintf produces a log string from the arguments passed in func LogPrintf(level LogLevel, o interface{}, text string, args ...interface{}) { out := fmt.Sprintf(text, args...) if Config.UseJSONLog { fields := logrus.Fields{} if o != nil { fields = logrus.Fields{ "object": fmt.Sprintf("%+v", o), "objectType": fmt.Sprintf("%T", o), } } for _, arg := range args { if item, ok := arg.(LogValueItem); ok { fields[item.key] = item.value } } switch level { case LogLevelDebug: logrus.WithFields(fields).Debug(out) case LogLevelInfo: logrus.WithFields(fields).Info(out) case LogLevelNotice, LogLevelWarning: logrus.WithFields(fields).Warn(out) case LogLevelError: logrus.WithFields(fields).Error(out) case LogLevelCritical: logrus.WithFields(fields).Fatal(out) case LogLevelEmergency, LogLevelAlert: logrus.WithFields(fields).Panic(out) } } else { if o != nil { out = fmt.Sprintf("%v: %s", o, out) } LogPrint(level, out) } } // LogLevelPrintf writes logs at the given level func LogLevelPrintf(level LogLevel, o interface{}, text string, args ...interface{}) { if Config.LogLevel >= level { LogPrintf(level, o, text, args...) } } // Errorf writes error log output for this Object or Fs. It // should always be seen by the user. func Errorf(o interface{}, text string, args ...interface{}) { if Config.LogLevel >= LogLevelError { LogPrintf(LogLevelError, o, text, args...)
} } // Logf writes log output for this Object or Fs. This should be // considered to be Info level logging. It is the default level. By // default rclone should not log very much so only use this for // important things the user should see. The user can filter these // out with the -q flag. func Logf(o interface{}, text string, args ...interface{}) { if Config.LogLevel >= LogLevelNotice { LogPrintf(LogLevelNotice, o, text, args...) } } // Infof writes info on transfers for this Object or Fs. Use this // level for logging transfers, deletions and things which should // appear with the -v flag. func Infof(o interface{}, text string, args ...interface{}) { if Config.LogLevel >= LogLevelInfo { LogPrintf(LogLevelInfo, o, text, args...) } } // Debugf writes debugging output for this Object or Fs. Use this for // debug only. The user must specify -vv to see this. func Debugf(o interface{}, text string, args ...interface{}) { if Config.LogLevel >= LogLevelDebug { LogPrintf(LogLevelDebug, o, text, args...) } } // LogDirName returns an object for the logger, logging a root // directory which would normally be "" as the Fs func LogDirName(f Fs, dir string) interface{} { if dir != "" { return dir } return f } rclone-1.53.3/fs/log/000077500000000000000000000000001375552240400142655ustar00rootroot00000000000000rclone-1.53.3/fs/log/caller_hook.go000066400000000000000000000026111375552240400170760ustar00rootroot00000000000000package log import ( "fmt" "runtime" "strings" "github.com/sirupsen/logrus" ) // CallerHook is a logrus hook which logs the calling file and line of each log entry type CallerHook struct { Field string Skip int levels []logrus.Level } // NewCallerHook makes a CallerHook for the given levels (all levels if none are given) func NewCallerHook(levels ...logrus.Level) logrus.Hook { hook := CallerHook{ Field: "source", Skip: 7, levels: levels, } if len(hook.levels) == 0 { hook.levels = logrus.AllLevels } return &hook } // Levels returns the log levels this hook applies to func (h *CallerHook) Levels() []logrus.Level { return h.levels } // Fire adds the caller information (filename and line) to the log entry func (h *CallerHook) Fire(entry *logrus.Entry) error { entry.Data[h.Field] = findCaller(h.Skip) return nil } // findCaller skips callers belonging to logrus or this log package and then // reports the first caller outside them func findCaller(skip int) string { file := "" line := 0 for i := 0; i < 10; i++ { file, line = getCaller(skip + i) if !strings.HasPrefix(file, "logrus") && strings.Index(file, "log.go") < 0 { break } } return fmt.Sprintf("%s:%d", file, line) } func getCaller(skip int) (string, int) { _, file, line, ok := runtime.Caller(skip) // fmt.Println(file,":",line) if !ok { return "", 0 } n := 0 for i := len(file) - 1; i > 0; i-- { if file[i] == '/' { n++ if n >= 2 { file = file[i+1:] break } } } return file, line } rclone-1.53.3/fs/log/log.go000066400000000000000000000066311375552240400154030ustar00rootroot00000000000000// Package log provides logging for rclone package log import ( "io" "log" "os" "reflect" "runtime" "strings" "github.com/rclone/rclone/fs" "github.com/sirupsen/logrus" ) // Options contains options for controlling the logging type Options struct { File string // Log everything to this file Format string // Comma separated list of log format options UseSyslog bool // Use Syslog for logging SyslogFacility string // Facility for syslog, eg KERN,USER,...
// DefaultOpt contains the default values used for Opt var DefaultOpt = Options{ Format: "date,time", SyslogFacility: "DAEMON", } // Opt is the options for the logger var Opt = DefaultOpt // fnName returns the name of the function which called the function calling fnName func fnName() string { pc, _, _, ok := runtime.Caller(2) name := "*Unknown*" if ok { name = runtime.FuncForPC(pc).Name() dot := strings.LastIndex(name, ".") if dot >= 0 { name = name[dot+1:] } } return name } // Trace logs the entry and exit of the calling function at debug level // // It is designed to be used in a defer statement so it returns a // function that logs the exit parameters. // // Any pointers in the exit function will be dereferenced func Trace(o interface{}, format string, a ...interface{}) func(string, ...interface{}) { if fs.Config.LogLevel < fs.LogLevelDebug { return func(format string, a ...interface{}) {} } name := fnName() fs.LogPrintf(fs.LogLevelDebug, o, name+": "+format, a...) return func(format string, a ...interface{}) { for i := range a { // read the values of the pointed to items typ := reflect.TypeOf(a[i]) if typ.Kind() == reflect.Ptr { value := reflect.ValueOf(a[i]) if value.IsNil() { a[i] = nil } else { pointedToValue := reflect.Indirect(value) a[i] = pointedToValue.Interface() } } } fs.LogPrintf(fs.LogLevelDebug, o, ">"+name+": "+format, a...) } } // Stack logs a stack trace of callers with the o and info passed in func Stack(o interface{}, info string) { if fs.Config.LogLevel < fs.LogLevelDebug { return } arr := [16 * 1024]byte{} buf := arr[:] n := runtime.Stack(buf, false) buf = buf[:n] fs.LogPrintf(fs.LogLevelDebug, o, "%s\nStack trace:\n%s", info, buf) } // InitLogging starts the logging as per the command line flags func InitLogging() { flagsStr := "," + Opt.Format + "," var flags int if strings.Contains(flagsStr, ",date,") { flags |= log.Ldate } if strings.Contains(flagsStr, ",time,") { flags |= log.Ltime } if strings.Contains(flagsStr, ",microseconds,") { flags |= log.Lmicroseconds } if strings.Contains(flagsStr, ",longfile,") { flags |= log.Llongfile } if strings.Contains(flagsStr, ",shortfile,") { flags |= log.Lshortfile } if strings.Contains(flagsStr, ",UTC,") { flags |= log.LUTC } log.SetFlags(flags) // Log file output if Opt.File != "" { f, err := os.OpenFile(Opt.File, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0640) if err != nil { log.Fatalf("Failed to open log file: %v", err) } _, err = f.Seek(0, io.SeekEnd) if err != nil { fs.Errorf(nil, "Failed to seek log file to end: %v", err) } log.SetOutput(f) logrus.SetOutput(f) redirectStderr(f) } // Syslog output if Opt.UseSyslog { if Opt.File != "" { log.Fatalf("Can't use --syslog and --log-file together") } startSysLog() } } // Redirected returns true if the log has been redirected from stdout func Redirected() bool { return Opt.UseSyslog || Opt.File != "" } rclone-1.53.3/fs/log/logflags/000077500000000000000000000000001375552240400160635ustar00rootroot00000000000000rclone-1.53.3/fs/log/logflags/logflags.go000066400000000000000000000014741375552240400202160ustar00rootroot00000000000000// Package logflags implements command line flags to set up the log package logflags import ( "github.com/rclone/rclone/fs/config/flags" "github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/fs/rc" "github.com/spf13/pflag" ) // AddFlags adds the log flags to the flagSet func AddFlags(flagSet *pflag.FlagSet) { rc.AddOption("log", &log.Opt) flags.StringVarP(flagSet, &log.Opt.File, "log-file", "", log.Opt.File, "Log everything to this file") flags.StringVarP(flagSet, &log.Opt.Format,
"", log.Opt.Format, "Comma separated list of log format options") flags.BoolVarP(flagSet, &log.Opt.UseSyslog, "syslog", "", log.Opt.UseSyslog, "Use Syslog for logging") flags.StringVarP(flagSet, &log.Opt.SyslogFacility, "syslog-facility", "", log.Opt.SyslogFacility, "Facility for syslog, eg KERN,USER,...") } rclone-1.53.3/fs/log/redirect_stderr.go000066400000000000000000000005131375552240400177770ustar00rootroot00000000000000// Log the panic to the log file - for oses which can't do this // +build !windows,!darwin,!dragonfly,!freebsd,!linux,!nacl,!netbsd,!openbsd package log import ( "os" "github.com/rclone/rclone/fs" ) // redirectStderr to the file passed in func redirectStderr(f *os.File) { fs.Errorf(nil, "Can't redirect stderr to file") } rclone-1.53.3/fs/log/redirect_stderr_unix.go000066400000000000000000000011241375552240400210410ustar00rootroot00000000000000// Log the panic under unix to the log file // +build !windows,!solaris,!plan9,!js package log import ( "log" "os" "github.com/rclone/rclone/fs/config" "golang.org/x/sys/unix" ) // redirectStderr to the file passed in func redirectStderr(f *os.File) { passPromptFd, err := unix.Dup(int(os.Stderr.Fd())) if err != nil { log.Fatalf("Failed to duplicate stderr: %v", err) } config.PasswordPromptOutput = os.NewFile(uintptr(passPromptFd), "passPrompt") err = unix.Dup2(int(f.Fd()), int(os.Stderr.Fd())) if err != nil { log.Fatalf("Failed to redirect stderr to file: %v", err) } } rclone-1.53.3/fs/log/redirect_stderr_windows.go000066400000000000000000000014321375552240400215520ustar00rootroot00000000000000// Log the panic under windows to the log file // // Code from minix, via // // https://play.golang.org/p/kLtct7lSUg // +build windows package log import ( "log" "os" "syscall" ) var ( kernel32 = syscall.MustLoadDLL("kernel32.dll") procSetStdHandle = kernel32.MustFindProc("SetStdHandle") ) func setStdHandle(stdhandle int32, handle syscall.Handle) error { r0, _, e1 := syscall.Syscall(procSetStdHandle.Addr(), 2, uintptr(stdhandle), uintptr(handle), 0) if r0 == 0 { if e1 != 0 { return error(e1) } return syscall.EINVAL } return nil } // redirectStderr to the file passed in func redirectStderr(f *os.File) { err := setStdHandle(syscall.STD_ERROR_HANDLE, syscall.Handle(f.Fd())) if err != nil { log.Fatalf("Failed to redirect stderr to file: %v", err) } } rclone-1.53.3/fs/log/syslog.go000066400000000000000000000004451375552240400161370ustar00rootroot00000000000000// Syslog interface for non-Unix variants only // +build windows nacl plan9 package log import ( "log" "runtime" ) // Starts syslog if configured, returns true if it was started func startSysLog() bool { log.Fatalf("--syslog not supported on %s platform", runtime.GOOS) return false } rclone-1.53.3/fs/log/syslog_unix.go000066400000000000000000000032731375552240400172040ustar00rootroot00000000000000// Syslog interface for Unix variants only // +build !windows,!nacl,!plan9 package log import ( "log" "log/syslog" "os" "path" "github.com/rclone/rclone/fs" ) var ( syslogFacilityMap = map[string]syslog.Priority{ "KERN": syslog.LOG_KERN, "USER": syslog.LOG_USER, "MAIL": syslog.LOG_MAIL, "DAEMON": syslog.LOG_DAEMON, "AUTH": syslog.LOG_AUTH, "SYSLOG": syslog.LOG_SYSLOG, "LPR": syslog.LOG_LPR, "NEWS": syslog.LOG_NEWS, "UUCP": syslog.LOG_UUCP, "CRON": syslog.LOG_CRON, "AUTHPRIV": syslog.LOG_AUTHPRIV, "FTP": syslog.LOG_FTP, "LOCAL0": syslog.LOG_LOCAL0, "LOCAL1": syslog.LOG_LOCAL1, "LOCAL2": syslog.LOG_LOCAL2, "LOCAL3": syslog.LOG_LOCAL3, "LOCAL4": syslog.LOG_LOCAL4, "LOCAL5": syslog.LOG_LOCAL5, "LOCAL6": 
syslog.LOG_LOCAL6, "LOCAL7": syslog.LOG_LOCAL7, } ) // startSysLog starts syslog, returning true if it was started func startSysLog() bool { facility, ok := syslogFacilityMap[Opt.SyslogFacility] if !ok { log.Fatalf("Unknown syslog facility %q - man syslog for list", Opt.SyslogFacility) } Me := path.Base(os.Args[0]) w, err := syslog.New(syslog.LOG_NOTICE|facility, Me) if err != nil { log.Fatalf("Failed to start syslog: %v", err) } log.SetFlags(0) log.SetOutput(w) fs.LogPrint = func(level fs.LogLevel, text string) { switch level { case fs.LogLevelEmergency: _ = w.Emerg(text) case fs.LogLevelAlert: _ = w.Alert(text) case fs.LogLevelCritical: _ = w.Crit(text) case fs.LogLevelError: _ = w.Err(text) case fs.LogLevelWarning: _ = w.Warning(text) case fs.LogLevelNotice: _ = w.Notice(text) case fs.LogLevelInfo: _ = w.Info(text) case fs.LogLevelDebug: _ = w.Debug(text) } } return true } rclone-1.53.3/fs/log_test.go000066400000000000000000000001661375552240400156560ustar00rootroot00000000000000package fs import "github.com/spf13/pflag" // Check it satisfies the interface var _ pflag.Value = (*LogLevel)(nil) rclone-1.53.3/fs/march/000077500000000000000000000000001375552240400145765ustar00rootroot00000000000000rclone-1.53.3/fs/march/march.go000066400000000000000000000320461375552240400162240ustar00rootroot00000000000000// Package march traverses two directories in lock step package march import ( "context" "path" "sort" "strings" "sync" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/dirtree" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/list" "github.com/rclone/rclone/fs/walk" "golang.org/x/text/unicode/norm" ) // March holds the data used to traverse two Fs simultaneously, // calling Callback for each match type March struct { // parameters Ctx context.Context // context for background goroutines Fdst fs.Fs // destination Fs Fsrc fs.Fs // source Fs Dir string // directory NoTraverse bool // don't traverse the destination SrcIncludeAll bool // include all files in the src DstIncludeAll bool // include all files in the destination Callback Marcher // object to call with results NoCheckDest bool // transfer all objects regardless, without checking the dst NoUnicodeNormalization bool // don't normalize unicode characters in filenames // internal state srcListDir listDirFn // function to call to list a directory in the src dstListDir listDirFn // function to call to list a directory in the dst transforms []matchTransformFn } // Marcher is called on each match type Marcher interface { // SrcOnly is called for a DirEntry found only in the source SrcOnly(src fs.DirEntry) (recurse bool) // DstOnly is called for a DirEntry found only in the destination DstOnly(dst fs.DirEntry) (recurse bool) // Match is called for a DirEntry found both in the source and destination Match(ctx context.Context, dst, src fs.DirEntry) (recurse bool) } // init sets up a march over m.Fsrc and m.Fdst, calling back m.Callback for each match func (m *March) init() { m.srcListDir = m.makeListDir(m.Fsrc, m.SrcIncludeAll) if !m.NoTraverse { m.dstListDir = m.makeListDir(m.Fdst, m.DstIncludeAll) } // Now create the matching transform // ..normalise the UTF8 first if !m.NoUnicodeNormalization { m.transforms = append(m.transforms, norm.NFC.String) } // ..if destination is caseInsensitive then make it lower case // case Insensitive | src | dst | lower case compare | // | No | No | No | // | Yes | No | No | // | No | Yes | Yes | // | Yes | Yes | Yes | if m.Fdst.Features().CaseInsensitive || fs.Config.IgnoreCaseSync { m.transforms = append(m.transforms, strings.ToLower) } }
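// A minimal sketch of a Marcher from a calling package (illustrative only;
// printer, fdst and fsrc are assumed names): implement the three callbacks
// and hand the value to March, e.g.
//
//	type printer struct{}
//
//	func (printer) SrcOnly(src fs.DirEntry) bool { fmt.Println("+", src.Remote()); return true }
//	func (printer) DstOnly(dst fs.DirEntry) bool { fmt.Println("-", dst.Remote()); return true }
//	func (printer) Match(ctx context.Context, dst, src fs.DirEntry) bool { return true }
//
//	m := &march.March{Ctx: ctx, Fdst: fdst, Fsrc: fsrc, Callback: printer{}}
//	err := m.Run()
//
// Returning true from a callback makes the march recurse into that entry
// when it is a directory.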
// listDirFn is the type of a function which lists a directory into entries, err type listDirFn func(dir string) (entries fs.DirEntries, err error) // makeListDir constructs a listing function for the given fs // and includeAll flags for marching through the file system. func (m *March) makeListDir(f fs.Fs, includeAll bool) listDirFn { if !(fs.Config.UseListR && f.Features().ListR != nil) && // !--fast-list active and !(fs.Config.NoTraverse && filter.Active.HaveFilesFrom()) { // !(--files-from and --no-traverse) return func(dir string) (entries fs.DirEntries, err error) { return list.DirSorted(m.Ctx, f, includeAll, dir) } } // This returns a closure for use when --fast-list is active or for when // --files-from and --no-traverse is set var ( mu sync.Mutex started bool dirs dirtree.DirTree dirsErr error ) return func(dir string) (entries fs.DirEntries, err error) { mu.Lock() defer mu.Unlock() if !started { dirs, dirsErr = walk.NewDirTree(m.Ctx, f, m.Dir, includeAll, fs.Config.MaxDepth) started = true } if dirsErr != nil { return nil, dirsErr } entries, ok := dirs[dir] if !ok { err = fs.ErrorDirNotFound } else { delete(dirs, dir) } return entries, err } } // listDirJob describes a directory listing that needs to be done type listDirJob struct { srcRemote string dstRemote string srcDepth int dstDepth int noSrc bool noDst bool } // Run starts the matching process off func (m *March) Run() error { m.init() srcDepth := fs.Config.MaxDepth if srcDepth < 0 { srcDepth = fs.MaxLevel } dstDepth := srcDepth if filter.Active.Opt.DeleteExcluded { dstDepth = fs.MaxLevel } var mu sync.Mutex // Protects vars below var jobError error var errCount int // Start some directory listing goroutines var wg sync.WaitGroup // sync closing of goroutines var traversing sync.WaitGroup // running directory traversals in := make(chan listDirJob, fs.Config.Checkers) for i := 0; i < fs.Config.Checkers; i++ { wg.Add(1) go func() { defer wg.Done() for { select { case <-m.Ctx.Done(): return case job, ok := <-in: if !ok { return } jobs, err := m.processJob(job) if err != nil { mu.Lock() // Keep reference only to the first encountered error if jobError == nil { jobError = err } errCount++ mu.Unlock() } if len(jobs) > 0 { traversing.Add(len(jobs)) go func() { // Now we have traversed this directory, send these // jobs off for traversal in the background for _, newJob := range jobs { select { case <-m.Ctx.Done(): // discard job if finishing traversing.Done() case in <- newJob: } } }() } traversing.Done() } } }() } // Start the process traversing.Add(1) in <- listDirJob{ srcRemote: m.Dir, srcDepth: srcDepth - 1, dstRemote: m.Dir, dstDepth: dstDepth - 1, noDst: m.NoCheckDest, } go func() { // when the context is cancelled discard the remaining jobs <-m.Ctx.Done() for range in { traversing.Done() } }() traversing.Wait() close(in) wg.Wait() if errCount > 1 { return errors.Wrapf(jobError, "march failed with %d error(s): first error", errCount) } return jobError } // aborting checks to see if the context has been cancelled func (m *March) aborting() bool { select { case <-m.Ctx.Done(): return true default: } return false } // matchEntry is an entry plus transformed name type matchEntry struct { entry fs.DirEntry leaf string name string } // matchEntries contains many matchEntry items type matchEntries []matchEntry // Len is part of sort.Interface. func (es matchEntries) Len() int { return len(es) } // Swap is part of sort.Interface. func (es matchEntries) Swap(i, j int) { es[i], es[j] = es[j], es[i] } // Less is part of sort.Interface.
// // Compare in order (name, leaf, remote) func (es matchEntries) Less(i, j int) bool { ei, ej := &es[i], &es[j] if ei.name == ej.name { if ei.leaf == ej.leaf { return fs.CompareDirEntries(ei.entry, ej.entry) < 0 } return ei.leaf < ej.leaf } return ei.name < ej.name } // sort sorts the directory entries by (name, leaf, remote) // // We use a stable sort here just in case there are // duplicates. Assuming the remote delivers the entries in a // consistent order, this will give the best user experience // in syncing as it will use the first entry for the sync // comparison. func (es matchEntries) sort() { sort.Stable(es) } // newMatchEntries makes a matchEntries from the entries passed in func newMatchEntries(entries fs.DirEntries, transforms []matchTransformFn) matchEntries { es := make(matchEntries, len(entries)) for i := range es { es[i].entry = entries[i] name := path.Base(entries[i].Remote()) es[i].leaf = name for _, transform := range transforms { name = transform(name) } es[i].name = name } es.sort() return es } // matchPair is a matched pair of direntries returned by matchListings type matchPair struct { src, dst fs.DirEntry } // matchTransformFn converts a name into a form which is used for // comparison in matchListings. type matchTransformFn func(name string) string // matchListings processes the two listings, matching up the items in the two slices // using the transform function on each name first. // // Into srcOnly go Entries which only exist in the srcList // Into dstOnly go Entries which only exist in the dstList // Into matches go matchPair's of src and dst which have the same name // // This checks for duplicates and checks the list is sorted. func matchListings(srcListEntries, dstListEntries fs.DirEntries, transforms []matchTransformFn) (srcOnly fs.DirEntries, dstOnly fs.DirEntries, matches []matchPair) { srcList := newMatchEntries(srcListEntries, transforms) dstList := newMatchEntries(dstListEntries, transforms) for iSrc, iDst := 0, 0; ; iSrc, iDst = iSrc+1, iDst+1 { var src, dst fs.DirEntry var srcName, dstName string if iSrc < len(srcList) { src = srcList[iSrc].entry srcName = srcList[iSrc].name } if iDst < len(dstList) { dst = dstList[iDst].entry dstName = dstList[iDst].name } if src == nil && dst == nil { break } if src != nil && iSrc > 0 { prev := srcList[iSrc-1].entry prevName := srcList[iSrc-1].name if srcName == prevName && fs.DirEntryType(prev) == fs.DirEntryType(src) { fs.Logf(src, "Duplicate %s found in source - ignoring", fs.DirEntryType(src)) iDst-- // ignore the src and retry the dst continue } else if srcName < prevName { // this should never happen since we sort the listings panic("Out of order listing in source") } } if dst != nil && iDst > 0 { prev := dstList[iDst-1].entry prevName := dstList[iDst-1].name if dstName == prevName && fs.DirEntryType(dst) == fs.DirEntryType(prev) { fs.Logf(dst, "Duplicate %s found in destination - ignoring", fs.DirEntryType(dst)) iSrc-- // ignore the dst and retry the src continue } else if dstName < prevName { // this should never happen since we sort the listings panic("Out of order listing in destination") } } if src != nil && dst != nil { // we can't use CompareDirEntries because srcName, dstName could // be different than src.Remote() or dst.Remote() srcType := fs.DirEntryType(src) dstType := fs.DirEntryType(dst) if srcName > dstName || (srcName == dstName && srcType > dstType) { src = nil iSrc-- } else if srcName < dstName || (srcName == dstName && srcType < dstType) { dst = nil iDst-- } } // Debugf(nil, "src = %v, dst = %v", src, dst) switch { case src ==
nil && dst == nil: // do nothing case src == nil: dstOnly = append(dstOnly, dst) case dst == nil: srcOnly = append(srcOnly, src) default: matches = append(matches, matchPair{src: src, dst: dst}) } } return } // processJob processes a listDirJob listing the source and // destination directories, comparing them and returning a slice of // more jobs // // returns errors using processError func (m *March) processJob(job listDirJob) ([]listDirJob, error) { var ( jobs []listDirJob srcList, dstList fs.DirEntries srcListErr, dstListErr error wg sync.WaitGroup ) // List the src and dst directories if !job.noSrc { wg.Add(1) go func() { defer wg.Done() srcList, srcListErr = m.srcListDir(job.srcRemote) }() } if !m.NoTraverse && !job.noDst { wg.Add(1) go func() { defer wg.Done() dstList, dstListErr = m.dstListDir(job.dstRemote) }() } // Wait for listings to complete and report errors wg.Wait() if srcListErr != nil { fs.Errorf(job.srcRemote, "error reading source directory: %v", srcListErr) srcListErr = fs.CountError(srcListErr) return nil, srcListErr } if dstListErr == fs.ErrorDirNotFound { // Copy the stuff anyway } else if dstListErr != nil { fs.Errorf(job.dstRemote, "error reading destination directory: %v", dstListErr) dstListErr = fs.CountError(dstListErr) return nil, dstListErr } // If NoTraverse is set, then try to find a matching object // for each item in the srcList if m.NoTraverse && !m.NoCheckDest { for _, src := range srcList { if srcObj, ok := src.(fs.Object); ok { leaf := path.Base(srcObj.Remote()) dstObj, err := m.Fdst.NewObject(m.Ctx, path.Join(job.dstRemote, leaf)) if err == nil { dstList = append(dstList, dstObj) } } } } // Work out what to do and do it srcOnly, dstOnly, matches := matchListings(srcList, dstList, m.transforms) for _, src := range srcOnly { if m.aborting() { return nil, m.Ctx.Err() } recurse := m.Callback.SrcOnly(src) if recurse && job.srcDepth > 0 { jobs = append(jobs, listDirJob{ srcRemote: src.Remote(), dstRemote: src.Remote(), srcDepth: job.srcDepth - 1, noDst: true, }) } } for _, dst := range dstOnly { if m.aborting() { return nil, m.Ctx.Err() } recurse := m.Callback.DstOnly(dst) if recurse && job.dstDepth > 0 { jobs = append(jobs, listDirJob{ srcRemote: dst.Remote(), dstRemote: dst.Remote(), dstDepth: job.dstDepth - 1, noSrc: true, }) } } for _, match := range matches { if m.aborting() { return nil, m.Ctx.Err() } recurse := m.Callback.Match(m.Ctx, match.dst, match.src) if recurse && job.srcDepth > 0 && job.dstDepth > 0 { jobs = append(jobs, listDirJob{ srcRemote: match.src.Remote(), dstRemote: match.dst.Remote(), srcDepth: job.srcDepth - 1, dstDepth: job.dstDepth - 1, }) } } return jobs, nil } rclone-1.53.3/fs/march/march_test.go000066400000000000000000000273071375552240400172670ustar00rootroot00000000000000// Internal tests for march package march import ( "context" "errors" "fmt" "strings" "sync" "testing" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/mockdir" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "golang.org/x/text/unicode/norm" ) // Some times used in the tests var ( t1 = fstest.Time("2001-02-03T04:05:06.499999999Z") ) func TestMain(m *testing.M) { fstest.TestMain(m) } type marchTester struct { ctx context.Context // internal context for controlling go-routines cancel func() // cancel the context srcOnly 
fs.DirEntries dstOnly fs.DirEntries match fs.DirEntries entryMutex sync.Mutex errorMu sync.Mutex // Mutex covering the error variables err error noRetryErr error fatalErr error noTraverse bool } // DstOnly have an object which is in the destination only func (mt *marchTester) DstOnly(dst fs.DirEntry) (recurse bool) { mt.entryMutex.Lock() mt.dstOnly = append(mt.dstOnly, dst) mt.entryMutex.Unlock() switch dst.(type) { case fs.Object: return false case fs.Directory: return true default: panic("Bad object in DirEntries") } } // SrcOnly have an object which is in the source only func (mt *marchTester) SrcOnly(src fs.DirEntry) (recurse bool) { mt.entryMutex.Lock() mt.srcOnly = append(mt.srcOnly, src) mt.entryMutex.Unlock() switch src.(type) { case fs.Object: return false case fs.Directory: return true default: panic("Bad object in DirEntries") } } // Match is called when src and dst are present, so sync src to dst func (mt *marchTester) Match(ctx context.Context, dst, src fs.DirEntry) (recurse bool) { mt.entryMutex.Lock() mt.match = append(mt.match, src) mt.entryMutex.Unlock() switch src.(type) { case fs.Object: return false case fs.Directory: // Do the same thing to the entire contents of the directory _, ok := dst.(fs.Directory) if ok { return true } // FIXME src is dir, dst is file err := errors.New("can't overwrite file with directory") fs.Errorf(dst, "%v", err) mt.processError(err) default: panic("Bad object in DirEntries") } return false } func (mt *marchTester) processError(err error) { if err == nil { return } mt.errorMu.Lock() defer mt.errorMu.Unlock() switch { case fserrors.IsFatalError(err): if !mt.aborting() { fs.Errorf(nil, "Cancelling sync due to fatal error: %v", err) mt.cancel() } mt.fatalErr = err case fserrors.IsNoRetryError(err): mt.noRetryErr = err default: mt.err = err } } func (mt *marchTester) currentError() error { mt.errorMu.Lock() defer mt.errorMu.Unlock() if mt.fatalErr != nil { return mt.fatalErr } if mt.err != nil { return mt.err } return mt.noRetryErr } func (mt *marchTester) aborting() bool { return mt.ctx.Err() != nil } func TestMarch(t *testing.T) { for _, test := range []struct { what string fileSrcOnly []string dirSrcOnly []string fileDstOnly []string dirDstOnly []string fileMatch []string dirMatch []string }{ { what: "source only", fileSrcOnly: []string{"test", "test2", "test3", "sub dir/test4"}, dirSrcOnly: []string{"sub dir"}, }, { what: "identical", fileMatch: []string{"test", "test2", "sub dir/test3", "sub dir/sub sub dir/test4"}, dirMatch: []string{"sub dir", "sub dir/sub sub dir"}, }, { what: "typical sync", fileSrcOnly: []string{"srcOnly", "srcOnlyDir/sub"}, dirSrcOnly: []string{"srcOnlyDir"}, fileMatch: []string{"match", "matchDir/match file"}, dirMatch: []string{"matchDir"}, fileDstOnly: []string{"dstOnly", "dstOnlyDir/sub"}, dirDstOnly: []string{"dstOnlyDir"}, }, } { t.Run(fmt.Sprintf("TestMarch-%s", test.what), func(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() var srcOnly []fstest.Item var dstOnly []fstest.Item var match []fstest.Item ctx, cancel := context.WithCancel(context.Background()) for _, f := range test.fileSrcOnly { srcOnly = append(srcOnly, r.WriteFile(f, "hello world", t1)) } for _, f := range test.fileDstOnly { dstOnly = append(dstOnly, r.WriteObject(ctx, f, "hello world", t1)) } for _, f := range test.fileMatch { match = append(match, r.WriteBoth(ctx, f, "hello world", t1)) } mt := &marchTester{ ctx: ctx, cancel: cancel, noTraverse: false, } m := &March{ Ctx: ctx, Fdst: r.Fremote, Fsrc: r.Flocal, Dir: "", NoTraverse: 
mt.noTraverse, Callback: mt, DstIncludeAll: filter.Active.Opt.DeleteExcluded, } mt.processError(m.Run()) mt.cancel() err := mt.currentError() require.NoError(t, err) precision := fs.GetModifyWindow(r.Fremote, r.Flocal) fstest.CompareItems(t, mt.srcOnly, srcOnly, test.dirSrcOnly, precision, "srcOnly") fstest.CompareItems(t, mt.dstOnly, dstOnly, test.dirDstOnly, precision, "dstOnly") fstest.CompareItems(t, mt.match, match, test.dirMatch, precision, "match") }) } } func TestMarchNoTraverse(t *testing.T) { for _, test := range []struct { what string fileSrcOnly []string dirSrcOnly []string fileMatch []string dirMatch []string }{ { what: "source only", fileSrcOnly: []string{"test", "test2", "test3", "sub dir/test4"}, dirSrcOnly: []string{"sub dir"}, }, { what: "identical", fileMatch: []string{"test", "test2", "sub dir/test3", "sub dir/sub sub dir/test4"}, }, { what: "typical sync", fileSrcOnly: []string{"srcOnly", "srcOnlyDir/sub"}, fileMatch: []string{"match", "matchDir/match file"}, }, } { t.Run(fmt.Sprintf("TestMarch-%s", test.what), func(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() var srcOnly []fstest.Item var match []fstest.Item ctx, cancel := context.WithCancel(context.Background()) for _, f := range test.fileSrcOnly { srcOnly = append(srcOnly, r.WriteFile(f, "hello world", t1)) } for _, f := range test.fileMatch { match = append(match, r.WriteBoth(ctx, f, "hello world", t1)) } mt := &marchTester{ ctx: ctx, cancel: cancel, noTraverse: true, } m := &March{ Ctx: ctx, Fdst: r.Fremote, Fsrc: r.Flocal, Dir: "", NoTraverse: mt.noTraverse, Callback: mt, DstIncludeAll: filter.Active.Opt.DeleteExcluded, } mt.processError(m.Run()) mt.cancel() err := mt.currentError() require.NoError(t, err) precision := fs.GetModifyWindow(r.Fremote, r.Flocal) fstest.CompareItems(t, mt.srcOnly, srcOnly, test.dirSrcOnly, precision, "srcOnly") fstest.CompareItems(t, mt.match, match, test.dirMatch, precision, "match") }) } } func TestNewMatchEntries(t *testing.T) { var ( a = mockobject.Object("path/a") A = mockobject.Object("path/A") B = mockobject.Object("path/B") c = mockobject.Object("path/c") ) es := newMatchEntries(fs.DirEntries{a, A, B, c}, nil) assert.Equal(t, es, matchEntries{ {name: "A", leaf: "A", entry: A}, {name: "B", leaf: "B", entry: B}, {name: "a", leaf: "a", entry: a}, {name: "c", leaf: "c", entry: c}, }) es = newMatchEntries(fs.DirEntries{a, A, B, c}, []matchTransformFn{strings.ToLower}) assert.Equal(t, es, matchEntries{ {name: "a", leaf: "A", entry: A}, {name: "a", leaf: "a", entry: a}, {name: "b", leaf: "B", entry: B}, {name: "c", leaf: "c", entry: c}, }) } func TestMatchListings(t *testing.T) { var ( a = mockobject.Object("a") A = mockobject.Object("A") b = mockobject.Object("b") c = mockobject.Object("c") d = mockobject.Object("d") uE1 = mockobject.Object("é") // one of the unicode E characters uE2 = mockobject.Object("é") // a different unicode E character dirA = mockdir.New("A") dirb = mockdir.New("b") ) for _, test := range []struct { what string input fs.DirEntries // pairs of input src, dst srcOnly fs.DirEntries dstOnly fs.DirEntries matches []matchPair // pairs of output transforms []matchTransformFn }{ { what: "only src or dst", input: fs.DirEntries{ a, nil, b, nil, c, nil, d, nil, }, srcOnly: fs.DirEntries{ a, b, c, d, }, }, { what: "typical sync #1", input: fs.DirEntries{ a, nil, b, b, nil, c, nil, d, }, srcOnly: fs.DirEntries{ a, }, dstOnly: fs.DirEntries{ c, d, }, matches: []matchPair{ {b, b}, }, }, { what: "typical sync #2", input: fs.DirEntries{ a, a, b, b, nil, c, d, d, 
}, dstOnly: fs.DirEntries{ c, }, matches: []matchPair{ {a, a}, {b, b}, {d, d}, }, }, { what: "One duplicate", input: fs.DirEntries{ A, A, a, a, a, nil, b, b, }, matches: []matchPair{ {A, A}, {a, a}, {b, b}, }, }, { what: "Two duplicates", input: fs.DirEntries{ a, a, a, a, a, nil, }, matches: []matchPair{ {a, a}, }, }, { what: "Case insensitive duplicate - no transform", input: fs.DirEntries{ a, a, A, A, }, matches: []matchPair{ {A, A}, {a, a}, }, }, { what: "Case insensitive duplicate - transform to lower case", input: fs.DirEntries{ a, a, A, A, }, matches: []matchPair{ {A, A}, }, transforms: []matchTransformFn{strings.ToLower}, }, { what: "Unicode near-duplicate that becomes duplicate with normalization", input: fs.DirEntries{ uE1, uE1, uE2, uE2, }, matches: []matchPair{ {uE1, uE1}, }, transforms: []matchTransformFn{norm.NFC.String}, }, { what: "Unicode near-duplicate with no normalization", input: fs.DirEntries{ uE1, uE1, uE2, uE2, }, matches: []matchPair{ {uE1, uE1}, {uE2, uE2}, }, }, { what: "File and directory are not duplicates - srcOnly", input: fs.DirEntries{ dirA, nil, A, nil, }, srcOnly: fs.DirEntries{ dirA, A, }, }, { what: "File and directory are not duplicates - matches", input: fs.DirEntries{ dirA, dirA, A, A, }, matches: []matchPair{ {dirA, dirA}, {A, A}, }, }, { what: "Sync with directory #1", input: fs.DirEntries{ dirA, nil, A, nil, b, b, nil, c, nil, d, }, srcOnly: fs.DirEntries{ dirA, A, }, dstOnly: fs.DirEntries{ c, d, }, matches: []matchPair{ {b, b}, }, }, { what: "Sync with 2 directories", input: fs.DirEntries{ dirA, dirA, A, nil, nil, dirb, nil, b, }, srcOnly: fs.DirEntries{ A, }, dstOnly: fs.DirEntries{ dirb, b, }, matches: []matchPair{ {dirA, dirA}, }, }, } { t.Run(fmt.Sprintf("TestMatchListings-%s", test.what), func(t *testing.T) { var srcList, dstList fs.DirEntries for i := 0; i < len(test.input); i += 2 { src, dst := test.input[i], test.input[i+1] if src != nil { srcList = append(srcList, src) } if dst != nil { dstList = append(dstList, dst) } } srcOnly, dstOnly, matches := matchListings(srcList, dstList, test.transforms) assert.Equal(t, test.srcOnly, srcOnly, test.what, "srcOnly differ") assert.Equal(t, test.dstOnly, dstOnly, test.what, "dstOnly differ") assert.Equal(t, test.matches, matches, test.what, "matches differ") // now swap src and dst dstOnly, srcOnly, matches = matchListings(dstList, srcList, test.transforms) assert.Equal(t, test.srcOnly, srcOnly, test.what, "srcOnly differ") assert.Equal(t, test.dstOnly, dstOnly, test.what, "dstOnly differ") assert.Equal(t, test.matches, matches, test.what, "matches differ") }) } } rclone-1.53.3/fs/mimetype.go000066400000000000000000000021451375552240400156660ustar00rootroot00000000000000package fs import ( "context" "mime" "path" "strings" ) // MimeTypeFromName returns a guess at the mime type from the name func MimeTypeFromName(remote string) (mimeType string) { mimeType = mime.TypeByExtension(path.Ext(remote)) if !strings.ContainsRune(mimeType, '/') { mimeType = "application/octet-stream" } return mimeType } // MimeType returns the MimeType from the object, either by calling // the MimeTyper interface or using MimeTypeFromName func MimeType(ctx context.Context, o ObjectInfo) (mimeType string) { // Read the MimeType from the optional interface if available if do, ok := o.(MimeTyper); ok { mimeType = do.MimeType(ctx) // Debugf(o, "Read MimeType as %q", mimeType) if mimeType != "" { return mimeType } } return MimeTypeFromName(o.Remote()) } // MimeTypeDirEntry returns the MimeType of a DirEntry // // It returns 
"inode/directory" for directories, or uses // MimeType(Object) func MimeTypeDirEntry(ctx context.Context, item DirEntry) string { switch x := item.(type) { case Object: return MimeType(ctx, x) case Directory: return "inode/directory" } return "" } rclone-1.53.3/fs/object/000077500000000000000000000000001375552240400147525ustar00rootroot00000000000000rclone-1.53.3/fs/object/object.go000066400000000000000000000154501375552240400165540ustar00rootroot00000000000000// Package object defines some useful Objects package object import ( "bytes" "context" "errors" "io" "io/ioutil" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" ) // NewStaticObjectInfo returns a static ObjectInfo // If hashes is nil and fs is not nil, the hash map will be replaced with // empty hashes of the types supported by the fs. func NewStaticObjectInfo(remote string, modTime time.Time, size int64, storable bool, hashes map[hash.Type]string, fs fs.Info) fs.ObjectInfo { info := &staticObjectInfo{ remote: remote, modTime: modTime, size: size, storable: storable, hashes: hashes, fs: fs, } if fs != nil && hashes == nil { set := fs.Hashes().Array() info.hashes = make(map[hash.Type]string) for _, ht := range set { info.hashes[ht] = "" } } return info } type staticObjectInfo struct { remote string modTime time.Time size int64 storable bool hashes map[hash.Type]string fs fs.Info } func (i *staticObjectInfo) Fs() fs.Info { return i.fs } func (i *staticObjectInfo) Remote() string { return i.remote } func (i *staticObjectInfo) String() string { return i.remote } func (i *staticObjectInfo) ModTime(ctx context.Context) time.Time { return i.modTime } func (i *staticObjectInfo) Size() int64 { return i.size } func (i *staticObjectInfo) Storable() bool { return i.storable } func (i *staticObjectInfo) Hash(ctx context.Context, h hash.Type) (string, error) { if len(i.hashes) == 0 { return "", hash.ErrUnsupported } if hash, ok := i.hashes[h]; ok { return hash, nil } return "", hash.ErrUnsupported } // MemoryFs is an in memory Fs, it only supports FsInfo and Put var MemoryFs memoryFs // memoryFs is an in memory fs type memoryFs struct{} // Name of the remote (as passed into NewFs) func (memoryFs) Name() string { return "memory" } // Root of the remote (as passed into NewFs) func (memoryFs) Root() string { return "" } // String returns a description of the FS func (memoryFs) String() string { return "memory" } // Precision of the ModTimes in this Fs func (memoryFs) Precision() time.Duration { return time.Nanosecond } // Returns the supported hash types of the filesystem func (memoryFs) Hashes() hash.Set { return hash.Supported() } // Features returns the optional features of this Fs func (memoryFs) Features() *fs.Features { return &fs.Features{} } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found. func (memoryFs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { return nil, nil } // NewObject finds the Object at remote. If it can't be found // it returns the error ErrorObjectNotFound. 
func (memoryFs) NewObject(ctx context.Context, remote string) (fs.Object, error) { return nil, fs.ErrorObjectNotFound } // Put puts the object in to the remote path with the given modTime and size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (memoryFs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { o := NewMemoryObject(src.Remote(), src.ModTime(ctx), nil) return o, o.Update(ctx, in, src, options...) } // Mkdir makes the directory (container, bucket) // // Shouldn't return an error if it already exists func (memoryFs) Mkdir(ctx context.Context, dir string) error { return errors.New("memoryFs: can't make directory") } // Rmdir removes the directory (container, bucket) if empty // // Return an error if it doesn't exist or isn't empty func (memoryFs) Rmdir(ctx context.Context, dir string) error { return fs.ErrorDirNotFound } var _ fs.Fs = MemoryFs // MemoryObject is an in memory object type MemoryObject struct { remote string modTime time.Time content []byte } // NewMemoryObject returns an in memory Object with the modTime and content passed in func NewMemoryObject(remote string, modTime time.Time, content []byte) *MemoryObject { return &MemoryObject{ remote: remote, modTime: modTime, content: content, } } // Content returns the underlying buffer func (o *MemoryObject) Content() []byte { return o.content } // Fs returns read only access to the Fs that this object is part of func (o *MemoryObject) Fs() fs.Info { return MemoryFs } // Remote returns the remote path func (o *MemoryObject) Remote() string { return o.remote } // String returns a description of the Object func (o *MemoryObject) String() string { return o.remote } // ModTime returns the modification date of the file func (o *MemoryObject) ModTime(ctx context.Context) time.Time { return o.modTime } // Size returns the size of the file func (o *MemoryObject) Size() int64 { return int64(len(o.content)) } // Storable says whether this object can be stored func (o *MemoryObject) Storable() bool { return true } // Hash returns the requested hash of the contents func (o *MemoryObject) Hash(ctx context.Context, h hash.Type) (string, error) { hash, err := hash.NewMultiHasherTypes(hash.Set(h)) if err != nil { return "", err } _, err = hash.Write(o.content) if err != nil { return "", err } return hash.Sums()[h], nil } // SetModTime sets the metadata on the object to set the modification date func (o *MemoryObject) SetModTime(ctx context.Context, modTime time.Time) error { o.modTime = modTime return nil } // Open opens the file for read. Call Close() on the returned io.ReadCloser func (o *MemoryObject) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { content := o.content for _, option := range options { switch x := option.(type) { case *fs.RangeOption: content = o.content[x.Start:x.End] case *fs.SeekOption: content = o.content[x.Offset:] default: if option.Mandatory() { fs.Logf(o, "Unsupported mandatory option: %v", option) } } } return ioutil.NopCloser(bytes.NewBuffer(content)), nil } // Update the object in place with the given modTime and size // // This re-uses the internal buffer if at all possible.
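//
// A sketch of the buffer re-use (illustrative only; ctx, t0 and src are
// assumed names in scope, with src.Size() == 2):
//
//	o := object.NewMemoryObject("a", t0, make([]byte, 0, 64))
//	err := o.Update(ctx, strings.NewReader("hi"), src) // fits in cap, buffer re-used
//
// Payloads of unknown size or larger than the buffer's capacity are read
// with ioutil.ReadAll instead.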
func (o *MemoryObject) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { size := src.Size() if size == 0 { o.content = nil } else if size < 0 || int64(cap(o.content)) < size { o.content, err = ioutil.ReadAll(in) } else { o.content = o.content[:size] _, err = io.ReadFull(in, o.content) } o.modTime = src.ModTime(ctx) return err } // Remove this object func (o *MemoryObject) Remove(ctx context.Context) error { return errors.New("memoryObject.Remove not supported") } rclone-1.53.3/fs/object/object_test.go000066400000000000000000000127061375552240400176140ustar00rootroot00000000000000package object_test import ( "bytes" "context" "io" "io/ioutil" "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/object" "github.com/stretchr/testify/assert" ) func TestStaticObject(t *testing.T) { now := time.Now() remote := "path/to/object" size := int64(1024) o := object.NewStaticObjectInfo(remote, now, size, true, nil, object.MemoryFs) assert.Equal(t, object.MemoryFs, o.Fs()) assert.Equal(t, remote, o.Remote()) assert.Equal(t, remote, o.String()) assert.Equal(t, now, o.ModTime(context.Background())) assert.Equal(t, size, o.Size()) assert.Equal(t, true, o.Storable()) Hash, err := o.Hash(context.Background(), hash.MD5) assert.NoError(t, err) assert.Equal(t, "", Hash) o = object.NewStaticObjectInfo(remote, now, size, true, nil, nil) _, err = o.Hash(context.Background(), hash.MD5) assert.Equal(t, hash.ErrUnsupported, err) hs := map[hash.Type]string{ hash.MD5: "potato", } o = object.NewStaticObjectInfo(remote, now, size, true, hs, nil) Hash, err = o.Hash(context.Background(), hash.MD5) assert.NoError(t, err) assert.Equal(t, "potato", Hash) _, err = o.Hash(context.Background(), hash.SHA1) assert.Equal(t, hash.ErrUnsupported, err) } func TestMemoryFs(t *testing.T) { f := object.MemoryFs assert.Equal(t, "memory", f.Name()) assert.Equal(t, "", f.Root()) assert.Equal(t, "memory", f.String()) assert.Equal(t, time.Nanosecond, f.Precision()) assert.Equal(t, hash.Supported(), f.Hashes()) assert.Equal(t, &fs.Features{}, f.Features()) entries, err := f.List(context.Background(), "") assert.NoError(t, err) assert.Nil(t, entries) o, err := f.NewObject(context.Background(), "obj") assert.Equal(t, fs.ErrorObjectNotFound, err) assert.Nil(t, o) buf := bytes.NewBufferString("potato") now := time.Now() src := object.NewStaticObjectInfo("remote", now, int64(buf.Len()), true, nil, nil) o, err = f.Put(context.Background(), buf, src) assert.NoError(t, err) hash, err := o.Hash(context.Background(), hash.SHA1) assert.NoError(t, err) assert.Equal(t, "3e2e95f5ad970eadfa7e17eaf73da97024aa5359", hash) err = f.Mkdir(context.Background(), "dir") assert.Error(t, err) err = f.Rmdir(context.Background(), "dir") assert.Equal(t, fs.ErrorDirNotFound, err) } func TestMemoryObject(t *testing.T) { remote := "path/to/object" now := time.Now() content := []byte("potatoXXXXXXXXXXXXX") content = content[:6] // make some extra cap o := object.NewMemoryObject(remote, now, content) assert.Equal(t, content, o.Content()) assert.Equal(t, object.MemoryFs, o.Fs()) assert.Equal(t, remote, o.Remote()) assert.Equal(t, remote, o.String()) assert.Equal(t, now, o.ModTime(context.Background())) assert.Equal(t, int64(len(content)), o.Size()) assert.Equal(t, true, o.Storable()) Hash, err := o.Hash(context.Background(), hash.MD5) assert.NoError(t, err) assert.Equal(t, "8ee2027983915ec78acc45027d874316", Hash) Hash, err = o.Hash(context.Background(), hash.SHA1) 
assert.NoError(t, err) assert.Equal(t, "3e2e95f5ad970eadfa7e17eaf73da97024aa5359", Hash) newNow := now.Add(time.Minute) err = o.SetModTime(context.Background(), newNow) assert.NoError(t, err) assert.Equal(t, newNow, o.ModTime(context.Background())) checkOpen := func(rc io.ReadCloser, expected string) { actual, err := ioutil.ReadAll(rc) assert.NoError(t, err) err = rc.Close() assert.NoError(t, err) assert.Equal(t, expected, string(actual)) } checkContent := func(o fs.Object, expected string) { rc, err := o.Open(context.Background()) assert.NoError(t, err) checkOpen(rc, expected) } checkContent(o, string(content)) rc, err := o.Open(context.Background(), &fs.RangeOption{Start: 1, End: 3}) assert.NoError(t, err) checkOpen(rc, "ot") rc, err = o.Open(context.Background(), &fs.SeekOption{Offset: 3}) assert.NoError(t, err) checkOpen(rc, "ato") // check it fits within the buffer newNow = now.Add(2 * time.Minute) newContent := bytes.NewBufferString("Rutabaga") assert.True(t, newContent.Len() < cap(content)) // fits within cap(content) src := object.NewStaticObjectInfo(remote, newNow, int64(newContent.Len()), true, nil, nil) err = o.Update(context.Background(), newContent, src) assert.NoError(t, err) checkContent(o, "Rutabaga") assert.Equal(t, newNow, o.ModTime(context.Background())) assert.Equal(t, "Rutaba", string(content)) // check we re-used the buffer // not within the buffer newStr := "0123456789" newStr = newStr + newStr + newStr + newStr + newStr + newStr + newStr + newStr + newStr + newStr newContent = bytes.NewBufferString(newStr) assert.True(t, newContent.Len() > cap(content)) // does not fit within cap(content) src = object.NewStaticObjectInfo(remote, newNow, int64(newContent.Len()), true, nil, nil) err = o.Update(context.Background(), newContent, src) assert.NoError(t, err) checkContent(o, newStr) assert.Equal(t, "Rutaba", string(content)) // check we didn't re-use the buffer // now try streaming newStr = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" newContent = bytes.NewBufferString(newStr) src = object.NewStaticObjectInfo(remote, newNow, -1, true, nil, nil) err = o.Update(context.Background(), newContent, src) assert.NoError(t, err) checkContent(o, newStr) // and zero length newStr = "" newContent = bytes.NewBufferString(newStr) src = object.NewStaticObjectInfo(remote, newNow, 0, true, nil, nil) err = o.Update(context.Background(), newContent, src) assert.NoError(t, err) checkContent(o, newStr) err = o.Remove(context.Background()) assert.Error(t, err) } rclone-1.53.3/fs/operations/000077500000000000000000000000001375552240400156675ustar00rootroot00000000000000rclone-1.53.3/fs/operations/check.go000066400000000000000000000235251375552240400173020ustar00rootroot00000000000000package operations import ( "bytes" "context" "fmt" "io" "sync" "sync/atomic" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/march" "github.com/rclone/rclone/lib/readers" ) // checkFn is the type of the checking function used in CheckFn() // // It should check the two objects (a, b) and return if they differ // and whether the hash was used. // // If there are differences then this should Errorf the difference and // the reason but return with err = nil. It should not CountError in // this case. 
type checkFn func(ctx context.Context, a, b fs.Object) (differ bool, noHash bool, err error) // CheckOpt contains options for the Check functions type CheckOpt struct { Fdst, Fsrc fs.Fs // fses to check Check checkFn // function to use for checking OneWay bool // one way only? Combined io.Writer // a file with file names with leading sigils MissingOnSrc io.Writer // files only in the destination MissingOnDst io.Writer // files only in the source Match io.Writer // matching files Differ io.Writer // differing files Error io.Writer // files with errors of some kind } // checkMarch is used to march over two Fses in the same way as // sync/copy type checkMarch struct { ioMu sync.Mutex wg sync.WaitGroup tokens chan struct{} differences int32 noHashes int32 srcFilesMissing int32 dstFilesMissing int32 matches int32 opt CheckOpt } // report outputs the fileName to out if required and to the combined log func (c *checkMarch) report(o fs.DirEntry, out io.Writer, sigil rune) { if out != nil { c.ioMu.Lock() _, _ = fmt.Fprintf(out, "%v\n", o) c.ioMu.Unlock() } if c.opt.Combined != nil { c.ioMu.Lock() _, _ = fmt.Fprintf(c.opt.Combined, "%c %v\n", sigil, o) c.ioMu.Unlock() } } // DstOnly is called for an object which is in the destination only func (c *checkMarch) DstOnly(dst fs.DirEntry) (recurse bool) { switch dst.(type) { case fs.Object: if c.opt.OneWay { return false } err := errors.Errorf("File not in %v", c.opt.Fsrc) fs.Errorf(dst, "%v", err) _ = fs.CountError(err) atomic.AddInt32(&c.differences, 1) atomic.AddInt32(&c.srcFilesMissing, 1) c.report(dst, c.opt.MissingOnSrc, '-') case fs.Directory: // Do the same thing to the entire contents of the directory if c.opt.OneWay { return false } return true default: panic("Bad object in DirEntries") } return false } // SrcOnly is called for an object which is in the source only func (c *checkMarch) SrcOnly(src fs.DirEntry) (recurse bool) { switch src.(type) { case fs.Object: err := errors.Errorf("File not in %v", c.opt.Fdst) fs.Errorf(src, "%v", err) _ = fs.CountError(err) atomic.AddInt32(&c.differences, 1) atomic.AddInt32(&c.dstFilesMissing, 1) c.report(src, c.opt.MissingOnDst, '+') case fs.Directory: // Do the same thing to the entire contents of the directory return true default: panic("Bad object in DirEntries") } return false } // checkIdentical checks to see if two objects are identical using the check function func (c *checkMarch) checkIdentical(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) { tr := accounting.Stats(ctx).NewCheckingTransfer(src) defer func() { tr.Done(err) }() if sizeDiffers(src, dst) { err = errors.Errorf("Sizes differ") fs.Errorf(src, "%v", err) return true, false, nil } if fs.Config.SizeOnly { return false, false, nil } return c.opt.Check(ctx, dst, src) }
// Match is called when src and dst are present, so check them to see if they are identical func (c *checkMarch) Match(ctx context.Context, dst, src fs.DirEntry) (recurse bool) { switch srcX := src.(type) { case fs.Object: dstX, ok := dst.(fs.Object) if ok { if SkipDestructive(ctx, src, "check") { return false } c.wg.Add(1) c.tokens <- struct{}{} // put a token to limit concurrency go func() { defer func() { <-c.tokens // get the token back to free up a slot c.wg.Done() }() differ, noHash, err := c.checkIdentical(ctx, dstX, srcX) if err != nil { fs.Errorf(src, "%v", err) _ = fs.CountError(err) c.report(src, c.opt.Error, '!') } else if differ { atomic.AddInt32(&c.differences, 1) err := errors.New("files differ") // the checkFn has already logged the reason _ = fs.CountError(err) c.report(src, c.opt.Differ, '*') } else { atomic.AddInt32(&c.matches, 1) c.report(src, c.opt.Match, '=') if noHash { atomic.AddInt32(&c.noHashes, 1) fs.Debugf(dstX, "OK - could not check hash") } else { fs.Debugf(dstX, "OK") } } }() } else { err := errors.Errorf("is file on %v but directory on %v", c.opt.Fsrc, c.opt.Fdst) fs.Errorf(src, "%v", err) _ = fs.CountError(err) atomic.AddInt32(&c.differences, 1) atomic.AddInt32(&c.dstFilesMissing, 1) c.report(src, c.opt.MissingOnDst, '+') } case fs.Directory: // Do the same thing to the entire contents of the directory _, ok := dst.(fs.Directory) if ok { return true } err := errors.Errorf("is file on %v but directory on %v", c.opt.Fdst, c.opt.Fsrc) fs.Errorf(dst, "%v", err) _ = fs.CountError(err) atomic.AddInt32(&c.differences, 1) atomic.AddInt32(&c.srcFilesMissing, 1) c.report(dst, c.opt.MissingOnSrc, '-') default: panic("Bad object in DirEntries") } return false } // CheckFn checks the files in fsrc and fdst according to Size and // hash using opt.Check on each file to check the hashes. // // opt.Check sees if dst and src are identical // // it returns true if differences were found // it also returns whether it couldn't be hashed func CheckFn(ctx context.Context, opt *CheckOpt) error { if opt.Check == nil { return errors.New("internal error: nil check function") } c := &checkMarch{ tokens: make(chan struct{}, fs.Config.Checkers), opt: *opt, } // set up a march over fdst and fsrc m := &march.March{ Ctx: ctx, Fdst: c.opt.Fdst, Fsrc: c.opt.Fsrc, Dir: "", Callback: c, } fs.Debugf(c.opt.Fdst, "Waiting for checks to finish") err := m.Run() c.wg.Wait() // wait for background go-routines if c.dstFilesMissing > 0 { fs.Logf(c.opt.Fdst, "%d files missing", c.dstFilesMissing) } if c.srcFilesMissing > 0 { fs.Logf(c.opt.Fsrc, "%d files missing", c.srcFilesMissing) } fs.Logf(c.opt.Fdst, "%d differences found", accounting.Stats(ctx).GetErrors()) if errs := accounting.Stats(ctx).GetErrors(); errs > 0 { fs.Logf(c.opt.Fdst, "%d errors while checking", errs) } if c.noHashes > 0 { fs.Logf(c.opt.Fdst, "%d hashes could not be checked", c.noHashes) } if c.matches > 0 { fs.Logf(c.opt.Fdst, "%d matching files", c.matches) } if c.differences > 0 { return errors.Errorf("%d differences found", c.differences) } return err } // Check the files in fsrc and fdst according to Size and hash func Check(ctx context.Context, opt *CheckOpt) error { optCopy := *opt optCopy.Check = func(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) { same, ht, err := CheckHashes(ctx, src, dst) if err != nil { return true, false, err } if ht == hash.None { return false, true, nil } if !same { err = errors.Errorf("%v differ", ht) fs.Errorf(src, "%v", err) return true, false, nil } return false, false, nil } return CheckFn(ctx, &optCopy) }
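// A minimal sketch of calling Check (illustrative only; fdst and fsrc are
// assumed to be open fs.Fs values):
//
//	opt := operations.CheckOpt{Fdst: fdst, Fsrc: fsrc}
//	err := operations.Check(ctx, &opt)
//
// err is non-nil if any differences were found. The optional writers such
// as opt.Combined receive one line per file prefixed with a sigil: = match,
// - missing on source, + missing on destination, * differ, ! error.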
// CheckEqualReaders checks to see if in1 and in2 have the same // content when read. // // it returns true if differences were found func CheckEqualReaders(in1, in2 io.Reader) (differ bool, err error) { const bufSize = 64 * 1024 buf1 := make([]byte, bufSize) buf2 := make([]byte, bufSize) for { n1, err1 := readers.ReadFill(in1, buf1) n2, err2 := readers.ReadFill(in2, buf2) // check errors if err1 != nil && err1 != io.EOF { return true, err1 } else if err2 != nil && err2 != io.EOF { return true, err2 } // err1 && err2 are nil or io.EOF here // process the data if n1 != n2 || !bytes.Equal(buf1[:n1], buf2[:n2]) { return true, nil } // if both streams have finished then we have finished if err1 == io.EOF && err2 == io.EOF { break } } return false, nil } // CheckIdenticalDownload checks to see if dst and src are identical // by reading all their bytes if necessary. // // it returns true if differences were found func CheckIdenticalDownload(ctx context.Context, dst, src fs.Object) (differ bool, err error) { err = Retry(src, fs.Config.LowLevelRetries, func() error { differ, err = checkIdenticalDownload(ctx, dst, src) return err }) return differ, err } // checkIdenticalDownload does the work for CheckIdenticalDownload func checkIdenticalDownload(ctx context.Context, dst, src fs.Object) (differ bool, err error) { in1, err := dst.Open(ctx) if err != nil { return true, errors.Wrapf(err, "failed to open %q", dst) } tr1 := accounting.Stats(ctx).NewTransfer(dst) defer func() { tr1.Done(nil) // error handling is done by the caller }() in1 = tr1.Account(ctx, in1).WithBuffer() // account and buffer the transfer in2, err := src.Open(ctx) if err != nil { return true, errors.Wrapf(err, "failed to open %q", src) } tr2 := accounting.Stats(ctx).NewTransfer(dst) defer func() { tr2.Done(nil) // error handling is done by the caller }() in2 = tr2.Account(ctx, in2).WithBuffer() // account and buffer the transfer // assign to the named return values before returning so both are set differ, err = CheckEqualReaders(in1, in2) return } // CheckDownload checks the files in fsrc and fdst according to Size // and the actual contents of the files.
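//
// A sketch of use (illustrative only; fdst and fsrc are assumed names):
//
//	err := operations.CheckDownload(ctx, &operations.CheckOpt{Fdst: fdst, Fsrc: fsrc})
//
// Unlike Check this downloads and compares every byte of every file, so it
// works between remotes with no common hash but is far more expensive.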
func CheckDownload(ctx context.Context, opt *CheckOpt) error { optCopy := *opt optCopy.Check = func(ctx context.Context, a, b fs.Object) (differ bool, noHash bool, err error) { differ, err = CheckIdenticalDownload(ctx, a, b) if err != nil { return true, true, errors.Wrap(err, "failed to download") } return differ, false, nil } return CheckFn(ctx, &optCopy) } rclone-1.53.3/fs/operations/check_test.go000066400000000000000000000200031375552240400203250ustar00rootroot00000000000000package operations_test import ( "bytes" "context" "fmt" "io" "log" "os" "sort" "strings" "testing" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/lib/readers" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func testCheck(t *testing.T, checkFunction func(ctx context.Context, opt *operations.CheckOpt) error) { r := fstest.NewRun(t) defer r.Finalise() addBuffers := func(opt *operations.CheckOpt) { opt.Combined = new(bytes.Buffer) opt.MissingOnSrc = new(bytes.Buffer) opt.MissingOnDst = new(bytes.Buffer) opt.Match = new(bytes.Buffer) opt.Differ = new(bytes.Buffer) opt.Error = new(bytes.Buffer) } sortLines := func(in string) []string { if in == "" { return []string{} } lines := strings.Split(in, "\n") sort.Strings(lines) return lines } checkBuffer := func(name string, want map[string]string, out io.Writer) { expected := want[name] buf, ok := out.(*bytes.Buffer) require.True(t, ok) assert.Equal(t, sortLines(expected), sortLines(buf.String()), name) } checkBuffers := func(opt *operations.CheckOpt, want map[string]string) { checkBuffer("combined", want, opt.Combined) checkBuffer("missingonsrc", want, opt.MissingOnSrc) checkBuffer("missingondst", want, opt.MissingOnDst) checkBuffer("match", want, opt.Match) checkBuffer("differ", want, opt.Differ) checkBuffer("error", want, opt.Error) } check := func(i int, wantErrors int64, wantChecks int64, oneway bool, wantOutput map[string]string) { t.Run(fmt.Sprintf("%d", i), func(t *testing.T) { accounting.GlobalStats().ResetCounters() var buf bytes.Buffer log.SetOutput(&buf) defer func() { log.SetOutput(os.Stderr) }() opt := operations.CheckOpt{ Fdst: r.Fremote, Fsrc: r.Flocal, OneWay: oneway, } addBuffers(&opt) err := checkFunction(context.Background(), &opt) gotErrors := accounting.GlobalStats().GetErrors() gotChecks := accounting.GlobalStats().GetChecks() if wantErrors == 0 && err != nil { t.Errorf("%d: Got error when not expecting one: %v", i, err) } if wantErrors != 0 && err == nil { t.Errorf("%d: No error when expecting one", i) } if wantErrors != gotErrors { t.Errorf("%d: Expecting %d errors but got %d", i, wantErrors, gotErrors) } if gotChecks > 0 && !strings.Contains(buf.String(), "matching files") { t.Errorf("%d: Total files matching line missing", i) } if wantChecks != gotChecks { t.Errorf("%d: Expecting %d total matching files but got %d", i, wantChecks, gotChecks) } checkBuffers(&opt, wantOutput) }) } file1 := r.WriteBoth(context.Background(), "rutabaga", "is tasty", t3) fstest.CheckItems(t, r.Fremote, file1) fstest.CheckItems(t, r.Flocal, file1) check(1, 0, 1, false, map[string]string{ "combined": "= rutabaga\n", "missingonsrc": "", "missingondst": "", "match": "rutabaga\n", "differ": "", "error": "", }) file2 := r.WriteFile("potato2", "------------------------------------------------------------", t1) fstest.CheckItems(t, r.Flocal, file1, file2) check(2, 1, 1, false, map[string]string{ 
"combined": "+ potato2\n= rutabaga\n", "missingonsrc": "", "missingondst": "potato2\n", "match": "rutabaga\n", "differ": "", "error": "", }) file3 := r.WriteObject(context.Background(), "empty space", "-", t2) fstest.CheckItems(t, r.Fremote, file1, file3) check(3, 2, 1, false, map[string]string{ "combined": "- empty space\n+ potato2\n= rutabaga\n", "missingonsrc": "empty space\n", "missingondst": "potato2\n", "match": "rutabaga\n", "differ": "", "error": "", }) file2r := file2 if fs.Config.SizeOnly { file2r = r.WriteObject(context.Background(), "potato2", "--Some-Differences-But-Size-Only-Is-Enabled-----------------", t1) } else { r.WriteObject(context.Background(), "potato2", "------------------------------------------------------------", t1) } fstest.CheckItems(t, r.Fremote, file1, file2r, file3) check(4, 1, 2, false, map[string]string{ "combined": "- empty space\n= potato2\n= rutabaga\n", "missingonsrc": "empty space\n", "missingondst": "", "match": "rutabaga\npotato2\n", "differ": "", "error": "", }) file3r := file3 file3l := r.WriteFile("empty space", "DIFFER", t2) fstest.CheckItems(t, r.Flocal, file1, file2, file3l) check(5, 1, 3, false, map[string]string{ "combined": "* empty space\n= potato2\n= rutabaga\n", "missingonsrc": "", "missingondst": "", "match": "potato2\nrutabaga\n", "differ": "empty space\n", "error": "", }) file4 := r.WriteObject(context.Background(), "remotepotato", "------------------------------------------------------------", t1) fstest.CheckItems(t, r.Fremote, file1, file2r, file3r, file4) check(6, 2, 3, false, map[string]string{ "combined": "* empty space\n= potato2\n= rutabaga\n- remotepotato\n", "missingonsrc": "remotepotato\n", "missingondst": "", "match": "potato2\nrutabaga\n", "differ": "empty space\n", "error": "", }) check(7, 1, 3, true, map[string]string{ "combined": "* empty space\n= potato2\n= rutabaga\n", "missingonsrc": "", "missingondst": "", "match": "potato2\nrutabaga\n", "differ": "empty space\n", "error": "", }) } func TestCheck(t *testing.T) { testCheck(t, operations.Check) } func TestCheckFsError(t *testing.T) { dstFs, err := fs.NewFs("non-existent") if err != nil { t.Fatal(err) } srcFs, err := fs.NewFs("non-existent") if err != nil { t.Fatal(err) } opt := operations.CheckOpt{ Fdst: dstFs, Fsrc: srcFs, OneWay: false, } err = operations.Check(context.Background(), &opt) require.Error(t, err) } func TestCheckDownload(t *testing.T) { testCheck(t, operations.CheckDownload) } func TestCheckSizeOnly(t *testing.T) { fs.Config.SizeOnly = true defer func() { fs.Config.SizeOnly = false }() TestCheck(t) } func TestCheckEqualReaders(t *testing.T) { b65a := make([]byte, 65*1024) b65b := make([]byte, 65*1024) b65b[len(b65b)-1] = 1 b66 := make([]byte, 66*1024) differ, err := operations.CheckEqualReaders(bytes.NewBuffer(b65a), bytes.NewBuffer(b65a)) assert.NoError(t, err) assert.Equal(t, differ, false) differ, err = operations.CheckEqualReaders(bytes.NewBuffer(b65a), bytes.NewBuffer(b65b)) assert.NoError(t, err) assert.Equal(t, differ, true) differ, err = operations.CheckEqualReaders(bytes.NewBuffer(b65a), bytes.NewBuffer(b66)) assert.NoError(t, err) assert.Equal(t, differ, true) differ, err = operations.CheckEqualReaders(bytes.NewBuffer(b66), bytes.NewBuffer(b65a)) assert.NoError(t, err) assert.Equal(t, differ, true) myErr := errors.New("sentinel") wrap := func(b []byte) io.Reader { r := bytes.NewBuffer(b) e := readers.ErrorReader{Err: myErr} return io.MultiReader(r, e) } differ, err = operations.CheckEqualReaders(wrap(b65a), bytes.NewBuffer(b65a)) 
assert.Equal(t, myErr, err) assert.Equal(t, differ, true) differ, err = operations.CheckEqualReaders(wrap(b65a), bytes.NewBuffer(b65b)) assert.Equal(t, myErr, err) assert.Equal(t, differ, true) differ, err = operations.CheckEqualReaders(wrap(b65a), bytes.NewBuffer(b66)) assert.Equal(t, myErr, err) assert.Equal(t, differ, true) differ, err = operations.CheckEqualReaders(wrap(b66), bytes.NewBuffer(b65a)) assert.Equal(t, myErr, err) assert.Equal(t, differ, true) differ, err = operations.CheckEqualReaders(bytes.NewBuffer(b65a), wrap(b65a)) assert.Equal(t, myErr, err) assert.Equal(t, differ, true) differ, err = operations.CheckEqualReaders(bytes.NewBuffer(b65a), wrap(b65b)) assert.Equal(t, myErr, err) assert.Equal(t, differ, true) differ, err = operations.CheckEqualReaders(bytes.NewBuffer(b65a), wrap(b66)) assert.Equal(t, myErr, err) assert.Equal(t, differ, true) differ, err = operations.CheckEqualReaders(bytes.NewBuffer(b66), wrap(b65a)) assert.Equal(t, myErr, err) assert.Equal(t, differ, true) } rclone-1.53.3/fs/operations/dedupe.go000066400000000000000000000237371375552240400174760ustar00rootroot00000000000000// dedupe - gets rid of identical files on remotes which can have duplicate file names (drive, mega) package operations import ( "context" "fmt" "log" "path" "sort" "strings" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/walk" ) // dedupeRename renames the objs slice to different names func dedupeRename(ctx context.Context, f fs.Fs, remote string, objs []fs.Object) { doMove := f.Features().Move if doMove == nil { log.Fatalf("Fs %v doesn't support Move", f) } ext := path.Ext(remote) base := remote[:len(remote)-len(ext)] outer: for i, o := range objs { suffix := 1 newName := fmt.Sprintf("%s-%d%s", base, i+suffix, ext) _, err := f.NewObject(ctx, newName) for ; err != fs.ErrorObjectNotFound; suffix++ { if err != nil { err = fs.CountError(err) fs.Errorf(o, "Failed to check for existing object: %v", err) continue outer } if suffix > 100 { fs.Errorf(o, "Could not find an available new name") continue outer } newName = fmt.Sprintf("%s-%d%s", base, i+suffix, ext) _, err = f.NewObject(ctx, newName) } if !SkipDestructive(ctx, o, "rename") { newObj, err := doMove(ctx, o, newName) if err != nil { err = fs.CountError(err) fs.Errorf(o, "Failed to rename: %v", err) continue } fs.Infof(newObj, "renamed from: %v", o) } } } // dedupeDeleteAllButOne deletes all but the one in keep func dedupeDeleteAllButOne(ctx context.Context, keep int, remote string, objs []fs.Object) { count := 0 for i, o := range objs { if i == keep { continue } err := DeleteFile(ctx, o) if err == nil { count++ } } if count > 0 { fs.Logf(remote, "Deleted %d extra copies", count) } } // dedupeDeleteIdentical deletes all but one of identical (by hash) copies func dedupeDeleteIdentical(ctx context.Context, ht hash.Type, remote string, objs []fs.Object) (remainingObjs []fs.Object) { // Make map of IDs IDs := make(map[string]int, len(objs)) for _, o := range objs { if do, ok := o.(fs.IDer); ok { if ID := do.ID(); ID != "" { IDs[ID]++ } } } // Remove duplicate IDs newObjs := objs[:0] for _, o := range objs { if do, ok := o.(fs.IDer); ok { if ID := do.ID(); ID != "" { if IDs[ID] <= 1 { newObjs = append(newObjs, o) } else { fs.Logf(o, "Ignoring as it appears %d times in the listing and deleting would lead to data loss", IDs[ID]) } } } } objs = newObjs // See how many of these duplicates are identical dupesByID =
make(map[string][]fs.Object, len(objs)) for _, o := range objs { ID := "" if fs.Config.SizeOnly && o.Size() >= 0 { ID = fmt.Sprintf("size %d", o.Size()) } else if ht != hash.None { hashValue, err := o.Hash(ctx, ht) if err == nil && hashValue != "" { ID = fmt.Sprintf("%v %s", ht, hashValue) } } if ID == "" { remainingObjs = append(remainingObjs, o) } else { dupesByID[ID] = append(dupesByID[ID], o) } } // Delete identical duplicates, filling remainingObjs with the ones remaining for ID, dupes := range dupesByID { remainingObjs = append(remainingObjs, dupes[0]) if len(dupes) > 1 { fs.Logf(remote, "Deleting %d/%d identical duplicates (%s)", len(dupes)-1, len(dupes), ID) for _, o := range dupes[1:] { err := DeleteFile(ctx, o) if err != nil { remainingObjs = append(remainingObjs, o) } } } } return remainingObjs } // dedupeInteractive interactively dedupes the slice of objects func dedupeInteractive(ctx context.Context, f fs.Fs, ht hash.Type, remote string, objs []fs.Object) { fmt.Printf("%s: %d duplicates remain\n", remote, len(objs)) for i, o := range objs { hashValue := "" if ht != hash.None { var err error hashValue, err = o.Hash(ctx, ht) if err != nil { hashValue = err.Error() } } fmt.Printf(" %d: %12d bytes, %s, %v %32s\n", i+1, o.Size(), o.ModTime(ctx).Local().Format("2006-01-02 15:04:05.000000000"), ht, hashValue) } switch config.Command([]string{"sSkip and do nothing", "kKeep just one (choose which in next step)", "rRename all to be different (by changing file.jpg to file-1.jpg)"}) { case 's': case 'k': keep := config.ChooseNumber("Enter the number of the file to keep", 1, len(objs)) dedupeDeleteAllButOne(ctx, keep-1, remote, objs) case 'r': dedupeRename(ctx, f, remote, objs) } } // DeduplicateMode is how the dedupe command chooses what to do type DeduplicateMode int // Deduplicate modes const ( DeduplicateInteractive DeduplicateMode = iota // interactively ask the user DeduplicateSkip // skip all conflicts DeduplicateFirst // choose the first object DeduplicateNewest // choose the newest object DeduplicateOldest // choose the oldest object DeduplicateRename // rename the objects DeduplicateLargest // choose the largest object DeduplicateSmallest // choose the smallest object ) func (x DeduplicateMode) String() string { switch x { case DeduplicateInteractive: return "interactive" case DeduplicateSkip: return "skip" case DeduplicateFirst: return "first" case DeduplicateNewest: return "newest" case DeduplicateOldest: return "oldest" case DeduplicateRename: return "rename" case DeduplicateLargest: return "largest" case DeduplicateSmallest: return "smallest" } return "unknown" } // Set a DeduplicateMode from a string func (x *DeduplicateMode) Set(s string) error { switch strings.ToLower(s) { case "interactive": *x = DeduplicateInteractive case "skip": *x = DeduplicateSkip case "first": *x = DeduplicateFirst case "newest": *x = DeduplicateNewest case "oldest": *x = DeduplicateOldest case "rename": *x = DeduplicateRename case "largest": *x = DeduplicateLargest case "smallest": *x = DeduplicateSmallest default: return errors.Errorf("Unknown mode for dedupe %q.", s) } return nil } // Type of the value func (x *DeduplicateMode) Type() string { return "string" } // dedupeFindDuplicateDirs scans f for duplicate directories func dedupeFindDuplicateDirs(ctx context.Context, f fs.Fs) ([][]fs.Directory, error) { dirs := map[string][]fs.Directory{} err := walk.ListR(ctx, f, "", true, fs.Config.MaxDepth, walk.ListDirs, func(entries fs.DirEntries) error { entries.ForDir(func(d fs.Directory) { 
dirs[d.Remote()] = append(dirs[d.Remote()], d) }) return nil }) if err != nil { return nil, errors.Wrap(err, "find duplicate dirs") } // make sure parents are before children duplicateNames := []string{} for name, ds := range dirs { if len(ds) > 1 { duplicateNames = append(duplicateNames, name) } } sort.Strings(duplicateNames) duplicateDirs := [][]fs.Directory{} for _, name := range duplicateNames { duplicateDirs = append(duplicateDirs, dirs[name]) } return duplicateDirs, nil } // dedupeMergeDuplicateDirs merges all the duplicate directories found func dedupeMergeDuplicateDirs(ctx context.Context, f fs.Fs, duplicateDirs [][]fs.Directory) error { mergeDirs := f.Features().MergeDirs if mergeDirs == nil { return errors.Errorf("%v: can't merge directories", f) } dirCacheFlush := f.Features().DirCacheFlush if dirCacheFlush == nil { return errors.Errorf("%v: can't flush dir cache", f) } for _, dirs := range duplicateDirs { if !SkipDestructive(ctx, dirs[0], "merge duplicate directories") { fs.Infof(dirs[0], "Merging contents of duplicate directories") err := mergeDirs(ctx, dirs) if err != nil { err = fs.CountError(err) fs.Errorf(nil, "merge duplicate dirs: %v", err) } } } dirCacheFlush() return nil } // sort oldest first func sortOldestFirst(objs []fs.Object) { sort.Slice(objs, func(i, j int) bool { return objs[i].ModTime(context.TODO()).Before(objs[j].ModTime(context.TODO())) }) } // sort smallest first func sortSmallestFirst(objs []fs.Object) { sort.Slice(objs, func(i, j int) bool { return objs[i].Size() < objs[j].Size() }) } // Deduplicate interactively finds duplicate files and offers to // delete all but one or rename them to be different. Only useful with // Google Drive which can have duplicate file names. func Deduplicate(ctx context.Context, f fs.Fs, mode DeduplicateMode) error { fs.Infof(f, "Looking for duplicates using %v mode.", mode) // Find duplicate directories first and fix them duplicateDirs, err := dedupeFindDuplicateDirs(ctx, f) if err != nil { return err } if len(duplicateDirs) != 0 { err = dedupeMergeDuplicateDirs(ctx, f, duplicateDirs) if err != nil { return err } } // find a hash to use ht := f.Hashes().GetOne() // Now find duplicate files files := map[string][]fs.Object{} err = walk.ListR(ctx, f, "", true, fs.Config.MaxDepth, walk.ListObjects, func(entries fs.DirEntries) error { entries.ForObject(func(o fs.Object) { remote := o.Remote() files[remote] = append(files[remote], o) }) return nil }) if err != nil { return err } for remote, objs := range files { if len(objs) > 1 { fs.Logf(remote, "Found %d files with duplicate names", len(objs)) objs = dedupeDeleteIdentical(ctx, ht, remote, objs) if len(objs) <= 1 { fs.Logf(remote, "All duplicates removed") continue } switch mode { case DeduplicateInteractive: dedupeInteractive(ctx, f, ht, remote, objs) case DeduplicateFirst: dedupeDeleteAllButOne(ctx, 0, remote, objs) case DeduplicateNewest: sortOldestFirst(objs) dedupeDeleteAllButOne(ctx, len(objs)-1, remote, objs) case DeduplicateOldest: sortOldestFirst(objs) dedupeDeleteAllButOne(ctx, 0, remote, objs) case DeduplicateRename: dedupeRename(ctx, f, remote, objs) case DeduplicateLargest: sortSmallestFirst(objs) dedupeDeleteAllButOne(ctx, len(objs)-1, remote, objs) case DeduplicateSmallest: sortSmallestFirst(objs) dedupeDeleteAllButOne(ctx, 0, remote, objs) case DeduplicateSkip: fs.Logf(remote, "Skipping %d files with duplicate names", len(objs)) default: //skip } } } return nil } 
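// dedupeExampleUsage is an illustrative sketch, not part of the original
// source: it shows how a caller might pick a DeduplicateMode from a string
// and run Deduplicate, much as the dedupe command wires up its mode flag.
func dedupeExampleUsage(ctx context.Context, f fs.Fs, modeName string) error {
	var mode DeduplicateMode // the zero value is DeduplicateInteractive
	if err := mode.Set(modeName); err != nil {
		return err // e.g. an unknown mode name
	}
	return Deduplicate(ctx, f, mode)
}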
rclone-1.53.3/fs/operations/dedupe_test.go000066400000000000000000000210251375552240400205230ustar00rootroot00000000000000package operations_test import ( "context" "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/fstest" "github.com/spf13/pflag" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // Check flag satisfies the interface var _ pflag.Value = (*operations.DeduplicateMode)(nil) func skipIfCantDedupe(t *testing.T, f fs.Fs) { if !f.Features().DuplicateFiles { t.Skip("Can't test deduplicate - no duplicate files possible") } if f.Features().PutUnchecked == nil { t.Skip("Can't test deduplicate - no PutUnchecked") } if f.Features().MergeDirs == nil { t.Skip("Can't test deduplicate - no MergeDirs") } } func skipIfNoHash(t *testing.T, f fs.Fs) { if f.Hashes().GetOne() == hash.None { t.Skip("Can't run this test without a hash") } } func TestDeduplicateInteractive(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() skipIfCantDedupe(t, r.Fremote) skipIfNoHash(t, r.Fremote) file1 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) file2 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) file3 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) r.CheckWithDuplicates(t, file1, file2, file3) err := operations.Deduplicate(context.Background(), r.Fremote, operations.DeduplicateInteractive) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file1) } func TestDeduplicateSkip(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() skipIfCantDedupe(t, r.Fremote) haveHash := r.Fremote.Hashes().GetOne() != hash.None file1 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) files := []fstest.Item{file1} if haveHash { file2 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) files = append(files, file2) } file3 := r.WriteUncheckedObject(context.Background(), "one", "This is another one", t1) files = append(files, file3) r.CheckWithDuplicates(t, files...) 
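// DeduplicateSkip should remove the extra identical copy (only written when
// the remote has a hash to compare with) but must leave the file with
// different contents alone.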
err := operations.Deduplicate(context.Background(), r.Fremote, operations.DeduplicateSkip) require.NoError(t, err) r.CheckWithDuplicates(t, file1, file3) } func TestDeduplicateSizeOnly(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() skipIfCantDedupe(t, r.Fremote) file1 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) file2 := r.WriteUncheckedObject(context.Background(), "one", "THIS IS ONE", t1) file3 := r.WriteUncheckedObject(context.Background(), "one", "This is another one", t1) r.CheckWithDuplicates(t, file1, file2, file3) fs.Config.SizeOnly = true defer func() { fs.Config.SizeOnly = false }() err := operations.Deduplicate(context.Background(), r.Fremote, operations.DeduplicateSkip) require.NoError(t, err) r.CheckWithDuplicates(t, file1, file3) } func TestDeduplicateFirst(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() skipIfCantDedupe(t, r.Fremote) file1 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) file2 := r.WriteUncheckedObject(context.Background(), "one", "This is one A", t1) file3 := r.WriteUncheckedObject(context.Background(), "one", "This is one BB", t1) r.CheckWithDuplicates(t, file1, file2, file3) err := operations.Deduplicate(context.Background(), r.Fremote, operations.DeduplicateFirst) require.NoError(t, err) // list until we get one object var objects, size int64 for try := 1; try <= *fstest.ListRetries; try++ { objects, size, err = operations.Count(context.Background(), r.Fremote) require.NoError(t, err) if objects == 1 { break } time.Sleep(time.Second) } assert.Equal(t, int64(1), objects) if size != file1.Size && size != file2.Size && size != file3.Size { t.Errorf("Size not one of the object sizes %d", size) } } func TestDeduplicateNewest(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() skipIfCantDedupe(t, r.Fremote) file1 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) file2 := r.WriteUncheckedObject(context.Background(), "one", "This is one too", t2) file3 := r.WriteUncheckedObject(context.Background(), "one", "This is another one", t3) r.CheckWithDuplicates(t, file1, file2, file3) err := operations.Deduplicate(context.Background(), r.Fremote, operations.DeduplicateNewest) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file3) } func TestDeduplicateOldest(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() skipIfCantDedupe(t, r.Fremote) file1 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) file2 := r.WriteUncheckedObject(context.Background(), "one", "This is one too", t2) file3 := r.WriteUncheckedObject(context.Background(), "one", "This is another one", t3) r.CheckWithDuplicates(t, file1, file2, file3) err := operations.Deduplicate(context.Background(), r.Fremote, operations.DeduplicateOldest) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file1) } func TestDeduplicateLargest(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() skipIfCantDedupe(t, r.Fremote) file1 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) file2 := r.WriteUncheckedObject(context.Background(), "one", "This is one too", t2) file3 := r.WriteUncheckedObject(context.Background(), "one", "This is another one", t3) r.CheckWithDuplicates(t, file1, file2, file3) err := operations.Deduplicate(context.Background(), r.Fremote, operations.DeduplicateLargest) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file3) } func TestDeduplicateSmallest(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() skipIfCantDedupe(t, 
r.Fremote) file1 := r.WriteUncheckedObject(context.Background(), "one", "This is one", t1) file2 := r.WriteUncheckedObject(context.Background(), "one", "This is one too", t2) file3 := r.WriteUncheckedObject(context.Background(), "one", "This is another one", t3) r.CheckWithDuplicates(t, file1, file2, file3) err := operations.Deduplicate(context.Background(), r.Fremote, operations.DeduplicateSmallest) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file1) } func TestDeduplicateRename(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() skipIfCantDedupe(t, r.Fremote) file1 := r.WriteUncheckedObject(context.Background(), "one.txt", "This is one", t1) file2 := r.WriteUncheckedObject(context.Background(), "one.txt", "This is one too", t2) file3 := r.WriteUncheckedObject(context.Background(), "one.txt", "This is another one", t3) file4 := r.WriteUncheckedObject(context.Background(), "one-1.txt", "This is not a duplicate", t1) r.CheckWithDuplicates(t, file1, file2, file3, file4) err := operations.Deduplicate(context.Background(), r.Fremote, operations.DeduplicateRename) require.NoError(t, err) require.NoError(t, walk.ListR(context.Background(), r.Fremote, "", true, -1, walk.ListObjects, func(entries fs.DirEntries) error { entries.ForObject(func(o fs.Object) { remote := o.Remote() if remote != "one-1.txt" && remote != "one-2.txt" && remote != "one-3.txt" && remote != "one-4.txt" { t.Errorf("Bad file name after rename %q", remote) } size := o.Size() if size != file1.Size && size != file2.Size && size != file3.Size && size != file4.Size { t.Errorf("Size not one of the object sizes %d", size) } if remote == "one-1.txt" && size != file4.Size { t.Errorf("Existing non-duplicate file modified %q", remote) } }) return nil })) } // This should really be a unit test, but the test framework there // doesn't have enough tools to make it easy func TestMergeDirs(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() mergeDirs := r.Fremote.Features().MergeDirs if mergeDirs == nil { t.Skip("Can't merge directories") } file1 := r.WriteObject(context.Background(), "dupe1/one.txt", "This is one", t1) file2 := r.WriteObject(context.Background(), "dupe2/two.txt", "This is one too", t2) file3 := r.WriteObject(context.Background(), "dupe3/three.txt", "This is another one", t3) objs, dirs, err := walk.GetAll(context.Background(), r.Fremote, "", true, 1) require.NoError(t, err) assert.Equal(t, 3, len(dirs)) assert.Equal(t, 0, len(objs)) err = mergeDirs(context.Background(), dirs) require.NoError(t, err) file2.Path = "dupe1/two.txt" file3.Path = "dupe1/three.txt" fstest.CheckItems(t, r.Fremote, file1, file2, file3) objs, dirs, err = walk.GetAll(context.Background(), r.Fremote, "", true, 1) require.NoError(t, err) assert.Equal(t, 1, len(dirs)) assert.Equal(t, 0, len(objs)) assert.Equal(t, "dupe1", dirs[0].Remote()) } rclone-1.53.3/fs/operations/listdirsorted_test.go000066400000000000000000000066161375552240400221610ustar00rootroot00000000000000package operations_test import ( "context" "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/list" "github.com/rclone/rclone/fstest" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // TestListDirSorted is integration testing code in fs/list/list.go // which can't be tested there due to import loops. 
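// The MaxSize filter of 10 bytes set below excludes the 11 byte "hello
// world" files from normal listings, but they still appear when DirSorted
// is called with includeAll=true.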
func TestListDirSorted(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() filter.Active.Opt.MaxSize = 10 defer func() { filter.Active.Opt.MaxSize = -1 }() files := []fstest.Item{ r.WriteObject(context.Background(), "a.txt", "hello world", t1), r.WriteObject(context.Background(), "zend.txt", "hello", t1), r.WriteObject(context.Background(), "sub dir/hello world", "hello world", t1), r.WriteObject(context.Background(), "sub dir/hello world2", "hello world", t1), r.WriteObject(context.Background(), "sub dir/ignore dir/.ignore", "-", t1), r.WriteObject(context.Background(), "sub dir/ignore dir/should be ignored", "to ignore", t1), r.WriteObject(context.Background(), "sub dir/sub sub dir/hello world3", "hello world", t1), } fstest.CheckItems(t, r.Fremote, files...) var items fs.DirEntries var err error // Turn the DirEntry into a name, ending with a / if it is a // dir str := func(i int) string { item := items[i] name := item.Remote() switch item.(type) { case fs.Object: case fs.Directory: name += "/" default: t.Fatalf("Unknown type %+v", item) } return name } items, err = list.DirSorted(context.Background(), r.Fremote, true, "") require.NoError(t, err) require.Len(t, items, 3) assert.Equal(t, "a.txt", str(0)) assert.Equal(t, "sub dir/", str(1)) assert.Equal(t, "zend.txt", str(2)) items, err = list.DirSorted(context.Background(), r.Fremote, false, "") require.NoError(t, err) require.Len(t, items, 2) assert.Equal(t, "sub dir/", str(0)) assert.Equal(t, "zend.txt", str(1)) items, err = list.DirSorted(context.Background(), r.Fremote, true, "sub dir") require.NoError(t, err) require.Len(t, items, 4) assert.Equal(t, "sub dir/hello world", str(0)) assert.Equal(t, "sub dir/hello world2", str(1)) assert.Equal(t, "sub dir/ignore dir/", str(2)) assert.Equal(t, "sub dir/sub sub dir/", str(3)) items, err = list.DirSorted(context.Background(), r.Fremote, false, "sub dir") require.NoError(t, err) require.Len(t, items, 2) assert.Equal(t, "sub dir/ignore dir/", str(0)) assert.Equal(t, "sub dir/sub sub dir/", str(1)) // testing ignore file filter.Active.Opt.ExcludeFile = ".ignore" items, err = list.DirSorted(context.Background(), r.Fremote, false, "sub dir") require.NoError(t, err) require.Len(t, items, 1) assert.Equal(t, "sub dir/sub sub dir/", str(0)) items, err = list.DirSorted(context.Background(), r.Fremote, false, "sub dir/ignore dir") require.NoError(t, err) require.Len(t, items, 0) items, err = list.DirSorted(context.Background(), r.Fremote, true, "sub dir/ignore dir") require.NoError(t, err) require.Len(t, items, 2) assert.Equal(t, "sub dir/ignore dir/.ignore", str(0)) assert.Equal(t, "sub dir/ignore dir/should be ignored", str(1)) filter.Active.Opt.ExcludeFile = "" items, err = list.DirSorted(context.Background(), r.Fremote, false, "sub dir/ignore dir") require.NoError(t, err) require.Len(t, items, 2) assert.Equal(t, "sub dir/ignore dir/.ignore", str(0)) assert.Equal(t, "sub dir/ignore dir/should be ignored", str(1)) } rclone-1.53.3/fs/operations/lsjson.go000066400000000000000000000134341375552240400175330ustar00rootroot00000000000000package operations import ( "context" "path" "time" "github.com/pkg/errors" "github.com/rclone/rclone/backend/crypt" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/walk" ) // ListJSONItem is the struct which gets marshalled for each line type ListJSONItem struct { Path string Name string EncryptedPath string `json:",omitempty"` Encrypted string `json:",omitempty"` Size int64 MimeType string `json:",omitempty"` ModTime
Timestamp //`json:",omitempty"` IsDir bool Hashes map[string]string `json:",omitempty"` ID string `json:",omitempty"` OrigID string `json:",omitempty"` Tier string `json:",omitempty"` IsBucket bool `json:",omitempty"` } // Timestamp a time in the provided format type Timestamp struct { When time.Time Format string } // MarshalJSON turns a Timestamp into JSON func (t Timestamp) MarshalJSON() (out []byte, err error) { if t.When.IsZero() { return []byte(`""`), nil } return []byte(`"` + t.When.Format(t.Format) + `"`), nil } // Returns a time format for the given precision func formatForPrecision(precision time.Duration) string { switch { case precision <= time.Nanosecond: return "2006-01-02T15:04:05.000000000Z07:00" case precision <= 10*time.Nanosecond: return "2006-01-02T15:04:05.00000000Z07:00" case precision <= 100*time.Nanosecond: return "2006-01-02T15:04:05.0000000Z07:00" case precision <= time.Microsecond: return "2006-01-02T15:04:05.000000Z07:00" case precision <= 10*time.Microsecond: return "2006-01-02T15:04:05.00000Z07:00" case precision <= 100*time.Microsecond: return "2006-01-02T15:04:05.0000Z07:00" case precision <= time.Millisecond: return "2006-01-02T15:04:05.000Z07:00" case precision <= 10*time.Millisecond: return "2006-01-02T15:04:05.00Z07:00" case precision <= 100*time.Millisecond: return "2006-01-02T15:04:05.0Z07:00" } return time.RFC3339 } // ListJSONOpt describes the options for ListJSON type ListJSONOpt struct { Recurse bool `json:"recurse"` NoModTime bool `json:"noModTime"` NoMimeType bool `json:"noMimeType"` ShowEncrypted bool `json:"showEncrypted"` ShowOrigIDs bool `json:"showOrigIDs"` ShowHash bool `json:"showHash"` DirsOnly bool `json:"dirsOnly"` FilesOnly bool `json:"filesOnly"` HashTypes []string `json:"hashTypes"` // hash types to show if ShowHash is set, eg "MD5", "SHA-1" } // ListJSON lists fsrc using the options in opt calling callback for each item func ListJSON(ctx context.Context, fsrc fs.Fs, remote string, opt *ListJSONOpt, callback func(*ListJSONItem) error) error { var cipher *crypt.Cipher if opt.ShowEncrypted { fsInfo, _, _, config, err := fs.ConfigFs(fsrc.Name() + ":" + fsrc.Root()) if err != nil { return errors.Wrap(err, "ListJSON failed to load config for crypt remote") } if fsInfo.Name != "crypt" { return errors.New("The remote needs to be of type \"crypt\"") } cipher, err = crypt.NewCipher(config) if err != nil { return errors.Wrap(err, "ListJSON failed to make new crypt remote") } } features := fsrc.Features() canGetTier := features.GetTier format := formatForPrecision(fsrc.Precision()) isBucket := features.BucketBased && remote == "" && fsrc.Root() == "" // if bucket based remote listing the root mark directories as buckets showHash := opt.ShowHash hashTypes := fsrc.Hashes().Array() if len(opt.HashTypes) != 0 { showHash = true hashTypes = []hash.Type{} for _, hashType := range opt.HashTypes { var ht hash.Type err := ht.Set(hashType) if err != nil { return err } hashTypes = append(hashTypes, ht) } } err := walk.ListR(ctx, fsrc, remote, false, ConfigMaxDepth(opt.Recurse), walk.ListAll, func(entries fs.DirEntries) (err error) { for _, entry := range entries { switch entry.(type) { case fs.Directory: if opt.FilesOnly { continue } case fs.Object: if opt.DirsOnly { continue } default: fs.Errorf(nil, "Unknown type %T in listing", entry) } item := ListJSONItem{ Path: entry.Remote(), Name: path.Base(entry.Remote()), Size: entry.Size(), } if !opt.NoModTime { item.ModTime = Timestamp{When: entry.ModTime(ctx), Format: format} } if !opt.NoMimeType { 
item.MimeType = fs.MimeTypeDirEntry(ctx, entry) } if cipher != nil { switch entry.(type) { case fs.Directory: item.EncryptedPath = cipher.EncryptDirName(entry.Remote()) case fs.Object: item.EncryptedPath = cipher.EncryptFileName(entry.Remote()) default: fs.Errorf(nil, "Unknown type %T in listing", entry) } item.Encrypted = path.Base(item.EncryptedPath) } if do, ok := entry.(fs.IDer); ok { item.ID = do.ID() } if o, ok := entry.(fs.Object); opt.ShowOrigIDs && ok { if do, ok := fs.UnWrapObject(o).(fs.IDer); ok { item.OrigID = do.ID() } } switch x := entry.(type) { case fs.Directory: item.IsDir = true item.IsBucket = isBucket case fs.Object: item.IsDir = false if showHash { item.Hashes = make(map[string]string) for _, hashType := range hashTypes { hash, err := x.Hash(ctx, hashType) if err != nil { fs.Errorf(x, "Failed to read hash: %v", err) } else if hash != "" { item.Hashes[hashType.String()] = hash } } } if canGetTier { if do, ok := x.(fs.GetTierer); ok { item.Tier = do.GetTier() } } default: fs.Errorf(nil, "Unknown type %T in listing in ListJSON", entry) } err = callback(&item) if err != nil { return errors.Wrap(err, "callback failed in ListJSON") } } return nil }) if err != nil { return errors.Wrap(err, "error in ListJSON") } return nil } rclone-1.53.3/fs/operations/multithread.go000066400000000000000000000126521375552240400205460ustar00rootroot00000000000000package operations import ( "context" "io" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "golang.org/x/sync/errgroup" ) const ( multithreadChunkSize = 64 << 10 multithreadChunkSizeMask = multithreadChunkSize - 1 multithreadBufferSize = 32 * 1024 ) // Return a boolean as to whether we should use multi thread copy for // this transfer func doMultiThreadCopy(f fs.Fs, src fs.Object) bool { // Disable multi thread if... 
// ...it isn't configured if fs.Config.MultiThreadStreams <= 1 { return false } // ...size of object is less than cutoff if src.Size() < int64(fs.Config.MultiThreadCutoff) { return false } // ...source doesn't support it dstFeatures := f.Features() if dstFeatures.OpenWriterAt == nil { return false } // ...if --multi-thread-streams not in use and source and // destination are both local if !fs.Config.MultiThreadSet && dstFeatures.IsLocal && src.Fs().Features().IsLocal { return false } return true } // state for a multi-thread copy type multiThreadCopyState struct { ctx context.Context partSize int64 size int64 wc fs.WriterAtCloser src fs.Object acc *accounting.Account streams int } // Copy a single stream into place func (mc *multiThreadCopyState) copyStream(ctx context.Context, stream int) (err error) { defer func() { if err != nil { fs.Debugf(mc.src, "multi-thread copy: stream %d/%d failed: %v", stream+1, mc.streams, err) } }() start := int64(stream) * mc.partSize if start >= mc.size { return nil } end := start + mc.partSize if end > mc.size { end = mc.size } fs.Debugf(mc.src, "multi-thread copy: stream %d/%d (%d-%d) size %v starting", stream+1, mc.streams, start, end, fs.SizeSuffix(end-start)) rc, err := NewReOpen(ctx, mc.src, fs.Config.LowLevelRetries, &fs.RangeOption{Start: start, End: end - 1}) if err != nil { return errors.Wrap(err, "multipart copy: failed to open source") } defer fs.CheckClose(rc, &err) // Copy the data buf := make([]byte, multithreadBufferSize) offset := start for { // Check if context cancelled and exit if so if mc.ctx.Err() != nil { return mc.ctx.Err() } nr, er := rc.Read(buf) if nr > 0 { err = mc.acc.AccountRead(nr) if err != nil { return errors.Wrap(err, "multipart copy: accounting failed") } nw, ew := mc.wc.WriteAt(buf[0:nr], offset) if nw > 0 { offset += int64(nw) } if ew != nil { return errors.Wrap(ew, "multipart copy: write failed") } if nr != nw { return errors.Wrap(io.ErrShortWrite, "multipart copy") } } if er != nil { if er != io.EOF { return errors.Wrap(er, "multipart copy: read failed") } break } } if offset != end { return errors.Errorf("multipart copy: wrote %d bytes but expected to write %d", offset-start, end-start) } fs.Debugf(mc.src, "multi-thread copy: stream %d/%d (%d-%d) size %v finished", stream+1, mc.streams, start, end, fs.SizeSuffix(end-start)) return nil } // Calculate the chunk sizes and update the number of streams func (mc *multiThreadCopyState) calculateChunks() { partSize := mc.size / int64(mc.streams) // Round partition size up so partSize * streams >= size if (mc.size % int64(mc.streams)) != 0 { partSize++ } // round partSize up to nearest multithreadChunkSize boundary mc.partSize = (partSize + multithreadChunkSizeMask) &^ multithreadChunkSizeMask // recalculate number of streams mc.streams = int(mc.size / mc.partSize) // round streams up so partSize * streams >= size if (mc.size % mc.partSize) != 0 { mc.streams++ } } // Copy src to (f, remote) using streams download threads and the OpenWriterAt feature func multiThreadCopy(ctx context.Context, f fs.Fs, remote string, src fs.Object, streams int, tr *accounting.Transfer) (newDst fs.Object, err error) { openWriterAt := f.Features().OpenWriterAt if openWriterAt == nil { return nil, errors.New("multi-thread copy: OpenWriterAt not supported") } if src.Size() < 0 { return nil, errors.New("multi-thread copy: can't copy unknown sized file") } if src.Size() == 0 { return nil, errors.New("multi-thread copy: can't copy zero sized file") } g, gCtx := errgroup.WithContext(ctx) mc =
&multiThreadCopyState{ ctx: gCtx, size: src.Size(), src: src, streams: streams, } mc.calculateChunks() // Make accounting mc.acc = tr.Account(ctx, nil) // create write file handle mc.wc, err = openWriterAt(gCtx, remote, mc.size) if err != nil { return nil, errors.Wrap(err, "multipart copy: failed to open destination") } fs.Debugf(src, "Starting multi-thread copy with %d parts of size %v", mc.streams, fs.SizeSuffix(mc.partSize)) for stream := 0; stream < mc.streams; stream++ { stream := stream g.Go(func() (err error) { return mc.copyStream(gCtx, stream) }) } err = g.Wait() closeErr := mc.wc.Close() if err != nil { return nil, err } if closeErr != nil { return nil, errors.Wrap(closeErr, "multi-thread copy: failed to close object after copy") } obj, err := f.NewObject(ctx, remote) if err != nil { return nil, errors.Wrap(err, "multi-thread copy: failed to find object after copy") } err = obj.SetModTime(ctx, src.ModTime(ctx)) switch err { case nil, fs.ErrorCantSetModTime, fs.ErrorCantSetModTimeWithoutDelete: default: return nil, errors.Wrap(err, "multi-thread copy: failed to set modification time") } fs.Debugf(src, "Finished multi-thread copy with %d parts of size %v", mc.streams, fs.SizeSuffix(mc.partSize)) return obj, nil } rclone-1.53.3/fs/operations/multithread_test.go000066400000000000000000000110231375552240400215740ustar00rootroot00000000000000package operations import ( "context" "fmt" "testing" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fstest/mockfs" "github.com/rclone/rclone/fstest/mockobject" "github.com/rclone/rclone/lib/random" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestDoMultiThreadCopy(t *testing.T) { f := mockfs.NewFs("potato", "") src := mockobject.New("file.txt").WithContent([]byte(random.String(100)), mockobject.SeekModeNone) srcFs := mockfs.NewFs("sausage", "") src.SetFs(srcFs) oldStreams := fs.Config.MultiThreadStreams oldCutoff := fs.Config.MultiThreadCutoff oldIsSet := fs.Config.MultiThreadSet defer func() { fs.Config.MultiThreadStreams = oldStreams fs.Config.MultiThreadCutoff = oldCutoff fs.Config.MultiThreadSet = oldIsSet }() fs.Config.MultiThreadStreams, fs.Config.MultiThreadCutoff = 4, 50 fs.Config.MultiThreadSet = false nullWriterAt := func(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) { panic("don't call me") } f.Features().OpenWriterAt = nullWriterAt assert.True(t, doMultiThreadCopy(f, src)) fs.Config.MultiThreadStreams = 0 assert.False(t, doMultiThreadCopy(f, src)) fs.Config.MultiThreadStreams = 1 assert.False(t, doMultiThreadCopy(f, src)) fs.Config.MultiThreadStreams = 2 assert.True(t, doMultiThreadCopy(f, src)) fs.Config.MultiThreadCutoff = 200 assert.False(t, doMultiThreadCopy(f, src)) fs.Config.MultiThreadCutoff = 101 assert.False(t, doMultiThreadCopy(f, src)) fs.Config.MultiThreadCutoff = 100 assert.True(t, doMultiThreadCopy(f, src)) f.Features().OpenWriterAt = nil assert.False(t, doMultiThreadCopy(f, src)) f.Features().OpenWriterAt = nullWriterAt assert.True(t, doMultiThreadCopy(f, src)) f.Features().IsLocal = true srcFs.Features().IsLocal = true assert.False(t, doMultiThreadCopy(f, src)) fs.Config.MultiThreadSet = true assert.True(t, doMultiThreadCopy(f, src)) fs.Config.MultiThreadSet = false assert.False(t, doMultiThreadCopy(f, src)) srcFs.Features().IsLocal = false assert.True(t, doMultiThreadCopy(f, src)) srcFs.Features().IsLocal = true assert.False(t, doMultiThreadCopy(f, src))
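// Making either side non-local again re-enables multi-thread copy even
// without --multi-thread-streams being set explicitly.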
f.Features().IsLocal = false assert.True(t, doMultiThreadCopy(f, src)) srcFs.Features().IsLocal = false assert.True(t, doMultiThreadCopy(f, src)) } func TestMultithreadCalculateChunks(t *testing.T) { for _, test := range []struct { size int64 streams int wantPartSize int64 wantStreams int }{ {size: 1, streams: 10, wantPartSize: multithreadChunkSize, wantStreams: 1}, {size: 1 << 20, streams: 1, wantPartSize: 1 << 20, wantStreams: 1}, {size: 1 << 20, streams: 2, wantPartSize: 1 << 19, wantStreams: 2}, {size: (1 << 20) + 1, streams: 2, wantPartSize: (1 << 19) + multithreadChunkSize, wantStreams: 2}, {size: (1 << 20) - 1, streams: 2, wantPartSize: (1 << 19), wantStreams: 2}, } { t.Run(fmt.Sprintf("%+v", test), func(t *testing.T) { mc := &multiThreadCopyState{ size: test.size, streams: test.streams, } mc.calculateChunks() assert.Equal(t, test.wantPartSize, mc.partSize) assert.Equal(t, test.wantStreams, mc.streams) }) } } func TestMultithreadCopy(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() for _, test := range []struct { size int streams int }{ {size: multithreadChunkSize*2 - 1, streams: 2}, {size: multithreadChunkSize * 2, streams: 2}, {size: multithreadChunkSize*2 + 1, streams: 2}, } { t.Run(fmt.Sprintf("%+v", test), func(t *testing.T) { if *fstest.SizeLimit > 0 && int64(test.size) > *fstest.SizeLimit { t.Skipf("exceeded file size limit %d > %d", test.size, *fstest.SizeLimit) } var err error contents := random.String(test.size) t1 := fstest.Time("2001-02-03T04:05:06.499999999Z") file1 := r.WriteObject(context.Background(), "file1", contents, t1) fstest.CheckItems(t, r.Fremote, file1) fstest.CheckItems(t, r.Flocal) src, err := r.Fremote.NewObject(context.Background(), "file1") require.NoError(t, err) accounting.GlobalStats().ResetCounters() tr := accounting.GlobalStats().NewTransfer(src) defer func() { tr.Done(err) }() dst, err := multiThreadCopy(context.Background(), r.Flocal, "file1", src, 2, tr) require.NoError(t, err) assert.Equal(t, src.Size(), dst.Size()) assert.Equal(t, "file1", dst.Remote()) fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1}, nil, fs.GetModifyWindow(r.Flocal, r.Fremote)) require.NoError(t, dst.Remove(context.Background())) }) } } rclone-1.53.3/fs/operations/operations.go000066400000000000000000001656051375552240400204160ustar00rootroot00000000000000// Package operations does generic operations on filesystems and objects package operations import ( "bytes" "context" "encoding/base64" "encoding/csv" "encoding/hex" "fmt" "io" "io/ioutil" "net/http" "os" "path" "path/filepath" "sort" "strconv" "strings" "sync" "sync/atomic" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/object" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/atexit" "github.com/rclone/rclone/lib/random" "github.com/rclone/rclone/lib/readers" "golang.org/x/sync/errgroup" ) // CheckHashes checks the two files to see if they have common // known hash types and compares them // // Returns // // equal - which is equality of the hashes // // hash - the HashType. This is HashNone if either of the hashes were // unset or a compatible hash couldn't be found. 
// // err - may return an error which will already have been logged // // If an error is returned it will return equal as false func CheckHashes(ctx context.Context, src fs.ObjectInfo, dst fs.Object) (equal bool, ht hash.Type, err error) { common := src.Fs().Hashes().Overlap(dst.Fs().Hashes()) // fs.Debugf(nil, "Shared hashes: %v", common) if common.Count() == 0 { return true, hash.None, nil } equal, ht, _, _, err = checkHashes(ctx, src, dst, common.GetOne()) return equal, ht, err } // checkHashes does the work of CheckHashes but takes a hash.Type and // returns the effective hash type used. func checkHashes(ctx context.Context, src fs.ObjectInfo, dst fs.Object, ht hash.Type) (equal bool, htOut hash.Type, srcHash, dstHash string, err error) { // Calculate hashes in parallel g, ctx := errgroup.WithContext(ctx) g.Go(func() (err error) { srcHash, err = src.Hash(ctx, ht) if err != nil { err = fs.CountError(err) fs.Errorf(src, "Failed to calculate src hash: %v", err) } return err }) g.Go(func() (err error) { dstHash, err = dst.Hash(ctx, ht) if err != nil { err = fs.CountError(err) fs.Errorf(dst, "Failed to calculate dst hash: %v", err) } return err }) err = g.Wait() if err != nil { return false, ht, srcHash, dstHash, err } if srcHash == "" { return true, hash.None, srcHash, dstHash, nil } if dstHash == "" { return true, hash.None, srcHash, dstHash, nil } if srcHash != dstHash { fs.Debugf(src, "%v = %s (%v)", ht, srcHash, src.Fs()) fs.Debugf(dst, "%v = %s (%v)", ht, dstHash, dst.Fs()) } else { fs.Debugf(src, "%v = %s OK", ht, srcHash) } return srcHash == dstHash, ht, srcHash, dstHash, nil } // Equal checks to see if the src and dst objects are equal by looking at // size, mtime and hash // // If the src and dst size are different then it is considered to be // not equal. If --size-only is in effect then this is the only check // that is done. If --ignore-size is in effect then this check is // skipped and the files are considered the same size. // // If the size is the same and the mtime is the same then it is // considered to be equal. This check is skipped if using --checksum. // // If the size is the same and mtime is different, unreadable or // --checksum is set and the hash is the same then the file is // considered to be equal. In this case the mtime on the dst is // updated if --checksum is not set. // // Otherwise the file is considered to be not equal including if there // were errors reading info. 
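//
// An illustrative use (a sketch, not part of the original source), as a
// sync might use it to decide whether src still needs to be transferred:
//
//	if !Equal(ctx, src, dst) {
//		// transfer src to dst
//	}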
func Equal(ctx context.Context, src fs.ObjectInfo, dst fs.Object) bool { return equal(ctx, src, dst, defaultEqualOpt()) } // sizeDiffers compares the size of src and dst taking into account the // various ways of ignoring sizes func sizeDiffers(src, dst fs.ObjectInfo) bool { if fs.Config.IgnoreSize || src.Size() < 0 || dst.Size() < 0 { return false } return src.Size() != dst.Size() } var checksumWarning sync.Once // options for equal function() type equalOpt struct { sizeOnly bool // if set only check size checkSum bool // if set check checksum+size instead of modtime+size updateModTime bool // if set update the modtime if hashes identical and checking with modtime+size forceModTimeMatch bool // if set assume modtimes match } // default set of options for equal() func defaultEqualOpt() equalOpt { return equalOpt{ sizeOnly: fs.Config.SizeOnly, checkSum: fs.Config.CheckSum, updateModTime: !fs.Config.NoUpdateModTime, forceModTimeMatch: false, } } func equal(ctx context.Context, src fs.ObjectInfo, dst fs.Object, opt equalOpt) bool { if sizeDiffers(src, dst) { fs.Debugf(src, "Sizes differ (src %d vs dst %d)", src.Size(), dst.Size()) return false } if opt.sizeOnly { fs.Debugf(src, "Sizes identical") return true } // Assert: Size is equal or being ignored // If checking checksum and not modtime if opt.checkSum { // Check the hash same, ht, _ := CheckHashes(ctx, src, dst) if !same { fs.Debugf(src, "%v differ", ht) return false } if ht == hash.None { common := src.Fs().Hashes().Overlap(dst.Fs().Hashes()) if common.Count() == 0 { checksumWarning.Do(func() { fs.Logf(dst.Fs(), "--checksum is in use but the source and destination have no hashes in common; falling back to --size-only") }) } fs.Debugf(src, "Size of src and dst objects identical") } else { fs.Debugf(src, "Size and %v of src and dst objects identical", ht) } return true } srcModTime := src.ModTime(ctx) if !opt.forceModTimeMatch { // Sizes the same so check the mtime modifyWindow := fs.GetModifyWindow(src.Fs(), dst.Fs()) if modifyWindow == fs.ModTimeNotSupported { fs.Debugf(src, "Sizes identical") return true } dstModTime := dst.ModTime(ctx) dt := dstModTime.Sub(srcModTime) if dt < modifyWindow && dt > -modifyWindow { fs.Debugf(src, "Size and modification time the same (differ by %s, within tolerance %s)", dt, modifyWindow) return true } fs.Debugf(src, "Modification times differ by %s: %v, %v", dt, srcModTime, dstModTime) } // Check if the hashes are the same same, ht, _ := CheckHashes(ctx, src, dst) if !same { fs.Debugf(src, "%v differ", ht) return false } if ht == hash.None && !fs.Config.RefreshTimes { // if couldn't check hash, return that they differ return false } // mod time differs but hash is the same so reset mod time if required if opt.updateModTime { if !SkipDestructive(ctx, src, "update modification time") { // Size and hash the same but mtime different // Error if objects are treated as immutable if fs.Config.Immutable { fs.Errorf(dst, "Timestamp mismatch between immutable objects") return false } // Update the mtime of the dst object here err := dst.SetModTime(ctx, srcModTime) if err == fs.ErrorCantSetModTime { fs.Debugf(dst, "src and dst identical but can't set mod time without re-uploading") return false } else if err == fs.ErrorCantSetModTimeWithoutDelete { fs.Debugf(dst, "src and dst identical but can't set mod time without deleting and re-uploading") // Remove the file if BackupDir isn't set.
If BackupDir is set we would rather have the old file // moved into the BackupDir than deleted, which is what will happen if we leave it in place. if fs.Config.BackupDir == "" { err = dst.Remove(ctx) if err != nil { fs.Errorf(dst, "failed to delete before re-upload: %v", err) } } return false } else if err != nil { err = fs.CountError(err) fs.Errorf(dst, "Failed to set modification time: %v", err) } else { fs.Infof(src, "Updated modification time in destination") } } } return true } // Used to remove a failed copy // // Returns whether the file was successfully removed or not func removeFailedCopy(ctx context.Context, dst fs.Object) bool { if dst == nil { return false } fs.Infof(dst, "Removing failed copy") removeErr := dst.Remove(ctx) if removeErr != nil { fs.Infof(dst, "Failed to remove failed copy: %s", removeErr) return false } return true } // OverrideRemote is a wrapper to override the Remote for an // ObjectInfo type OverrideRemote struct { fs.ObjectInfo remote string } // NewOverrideRemote returns an OverrideRemote which will // return the remote specified func NewOverrideRemote(oi fs.ObjectInfo, remote string) *OverrideRemote { return &OverrideRemote{ ObjectInfo: oi, remote: remote, } } // Remote returns the overridden remote name func (o *OverrideRemote) Remote() string { return o.remote } // MimeType returns the mime type of the underlying object or "" if it // can't be worked out func (o *OverrideRemote) MimeType(ctx context.Context) string { if do, ok := o.ObjectInfo.(fs.MimeTyper); ok { return do.MimeType(ctx) } return "" } // ID returns the ID of the Object if known, or "" if not func (o *OverrideRemote) ID() string { if do, ok := o.ObjectInfo.(fs.IDer); ok { return do.ID() } return "" } // UnWrap returns the Object that this Object is wrapping or nil if it // isn't wrapping anything func (o *OverrideRemote) UnWrap() fs.Object { if o, ok := o.ObjectInfo.(fs.Object); ok { return o } return nil } // GetTier returns storage tier or class of the Object func (o *OverrideRemote) GetTier() string { if do, ok := o.ObjectInfo.(fs.GetTierer); ok { return do.GetTier() } return "" } // Check all optional interfaces satisfied var _ fs.FullObjectInfo = (*OverrideRemote)(nil) // CommonHash returns a single hash.Type and a HashOption with that // type which is in common between the two fs.Fs. func CommonHash(fa, fb fs.Info) (hash.Type, *fs.HashesOption) { // work out which hash to use - limit to 1 hash in common var common hash.Set hashType := hash.None if !fs.Config.IgnoreChecksum { common = fb.Hashes().Overlap(fa.Hashes()) if common.Count() > 0 { hashType = common.GetOne() common = hash.Set(hashType) } } return hashType, &fs.HashesOption{Hashes: common} } // Copy src object to dst or f if nil. If dst is nil then it uses // remote as the name of the new object. // // It returns the destination object if possible. Note that this may // be nil.
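//
// An illustrative call (a sketch, not part of the original source), copying
// src into f under the same name when no destination object is known yet:
//
//	newDst, err := Copy(ctx, f, nil, src.Remote(), src)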
func Copy(ctx context.Context, f fs.Fs, dst fs.Object, remote string, src fs.Object) (newDst fs.Object, err error) { tr := accounting.Stats(ctx).NewTransfer(src) defer func() { tr.Done(err) }() newDst = dst if SkipDestructive(ctx, src, "copy") { return newDst, nil } maxTries := fs.Config.LowLevelRetries tries := 0 doUpdate := dst != nil hashType, hashOption := CommonHash(f, src.Fs()) var actionTaken string for { // Try server side copy first - if has optional interface and // is same underlying remote actionTaken = "Copied (server side copy)" if fs.Config.MaxTransfer >= 0 && (accounting.Stats(ctx).GetBytes() >= int64(fs.Config.MaxTransfer) || (fs.Config.CutoffMode == fs.CutoffModeCautious && accounting.Stats(ctx).GetBytesWithPending()+src.Size() >= int64(fs.Config.MaxTransfer))) { return nil, accounting.ErrorMaxTransferLimitReachedFatal } if doCopy := f.Features().Copy; doCopy != nil && (SameConfig(src.Fs(), f) || (SameRemoteType(src.Fs(), f) && f.Features().ServerSideAcrossConfigs)) { in := tr.Account(ctx, nil) // account the transfer in.ServerSideCopyStart() newDst, err = doCopy(ctx, src, remote) if err == nil { dst = newDst in.ServerSideCopyEnd(dst.Size()) // account the bytes for the server side transfer err = in.Close() } else { _ = in.Close() } if err == fs.ErrorCantCopy { tr.Reset() // skip incomplete accounting - will be overwritten by the manual copy below } } else { err = fs.ErrorCantCopy } // If can't server side copy, do it manually if err == fs.ErrorCantCopy { if doMultiThreadCopy(f, src) { // Number of streams proportional to size streams := src.Size() / int64(fs.Config.MultiThreadCutoff) // With maximum if streams > int64(fs.Config.MultiThreadStreams) { streams = int64(fs.Config.MultiThreadStreams) } if streams < 2 { streams = 2 } dst, err = multiThreadCopy(ctx, f, remote, src, int(streams), tr) if doUpdate { actionTaken = "Multi-thread Copied (replaced existing)" } else { actionTaken = "Multi-thread Copied (new)" } } else { var in0 io.ReadCloser options := []fs.OpenOption{hashOption} for _, option := range fs.Config.DownloadHeaders { options = append(options, option) } in0, err = NewReOpen(ctx, src, fs.Config.LowLevelRetries, options...) if err != nil { err = errors.Wrap(err, "failed to open source object") } else { if src.Size() == -1 { // -1 indicates unknown size. Use Rcat to handle both remotes supporting and not supporting PutStream. if doUpdate { actionTaken = "Copied (Rcat, replaced existing)" } else { actionTaken = "Copied (Rcat, new)" } // NB Rcat closes in0 dst, err = Rcat(ctx, f, remote, in0, src.ModTime(ctx)) newDst = dst } else { in := tr.Account(ctx, in0).WithBuffer() // account and buffer the transfer var wrappedSrc fs.ObjectInfo = src // We try to pass the original object if possible if src.Remote() != remote { wrappedSrc = NewOverrideRemote(src, remote) } options := []fs.OpenOption{hashOption} for _, option := range fs.Config.UploadHeaders { options = append(options, option) } if doUpdate { actionTaken = "Copied (replaced existing)" err = dst.Update(ctx, in, wrappedSrc, options...) } else { actionTaken = "Copied (new)" dst, err = f.Put(ctx, in, wrappedSrc, options...) 
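// (f.Put above creates a brand new object, whereas dst.Update in the other
// branch overwrites the existing destination in place.)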
} closeErr := in.Close() if err == nil { newDst = dst err = closeErr } } } } } tries++ if tries >= maxTries { break } // Retry if err returned a retry error if fserrors.IsRetryError(err) || fserrors.ShouldRetry(err) { fs.Debugf(src, "Received error: %v - low level retry %d/%d", err, tries, maxTries) tr.Reset() // skip incomplete accounting - will be overwritten by retry continue } // otherwise finish break } if err != nil { err = fs.CountError(err) fs.Errorf(src, "Failed to copy: %v", err) return newDst, err } // Verify sizes are the same after transfer if sizeDiffers(src, dst) { err = errors.Errorf("corrupted on transfer: sizes differ %d vs %d", src.Size(), dst.Size()) fs.Errorf(dst, "%v", err) err = fs.CountError(err) removeFailedCopy(ctx, dst) return newDst, err } // Verify hashes are the same after transfer - ignoring blank hashes if hashType != hash.None { // checkHashes has logged and counted errors equal, _, srcSum, dstSum, _ := checkHashes(ctx, src, dst, hashType) if !equal { err = errors.Errorf("corrupted on transfer: %v hash differ %q vs %q", hashType, srcSum, dstSum) fs.Errorf(dst, "%v", err) err = fs.CountError(err) removeFailedCopy(ctx, dst) return newDst, err } } fs.Infof(src, actionTaken) return newDst, err } // SameObject returns true if src and dst could be pointing to the // same object. func SameObject(src, dst fs.Object) bool { if !SameConfig(src.Fs(), dst.Fs()) { return false } srcPath := path.Join(src.Fs().Root(), src.Remote()) dstPath := path.Join(dst.Fs().Root(), dst.Remote()) if dst.Fs().Features().CaseInsensitive { srcPath = strings.ToLower(srcPath) dstPath = strings.ToLower(dstPath) } return srcPath == dstPath } // Move src object to dst or fdst if nil. If dst is nil then it uses // remote as the name of the new object. // // Note that you must check the destination does not exist before // calling this and pass it as dst. If you pass dst=nil and the // destination does exist then this may create duplicates or return // errors. // // It returns the destination object if possible. Note that this may // be nil. 
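//
// An illustrative call (a sketch, not part of the original source); the
// caller has already checked that nothing exists at the target, so dst is nil:
//
//	newDst, err := Move(ctx, fdst, nil, src.Remote(), src)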
func Move(ctx context.Context, fdst fs.Fs, dst fs.Object, remote string, src fs.Object) (newDst fs.Object, err error) { tr := accounting.Stats(ctx).NewCheckingTransfer(src) defer func() { if err == nil { accounting.Stats(ctx).Renames(1) } tr.Done(err) }() newDst = dst if SkipDestructive(ctx, src, "move") { return newDst, nil } // See if we have Move available if doMove := fdst.Features().Move; doMove != nil && (SameConfig(src.Fs(), fdst) || (SameRemoteType(src.Fs(), fdst) && fdst.Features().ServerSideAcrossConfigs)) { // Delete destination if it exists and is not the same file as src (could be same file while seemingly different if the remote is case insensitive) if dst != nil && !SameObject(src, dst) { err = DeleteFile(ctx, dst) if err != nil { return newDst, err } } // Move dst <- src newDst, err = doMove(ctx, src, remote) switch err { case nil: fs.Infof(src, "Moved (server side)") return newDst, nil case fs.ErrorCantMove: fs.Debugf(src, "Can't move, switching to copy") default: err = fs.CountError(err) fs.Errorf(src, "Couldn't move: %v", err) return newDst, err } } // Move not found or didn't work so copy dst <- src newDst, err = Copy(ctx, fdst, dst, remote, src) if err != nil { fs.Errorf(src, "Not deleting source as copy failed: %v", err) return newDst, err } // Delete src if no error on copy return newDst, DeleteFile(ctx, src) } // CanServerSideMove returns true if fdst support server side moves or // server side copies // // Some remotes simulate rename by server-side copy and delete, so include // remotes that implements either Mover or Copier. func CanServerSideMove(fdst fs.Fs) bool { canMove := fdst.Features().Move != nil canCopy := fdst.Features().Copy != nil return canMove || canCopy } // SuffixName adds the current --suffix to the remote, obeying // --suffix-keep-extension if set func SuffixName(remote string) string { if fs.Config.Suffix == "" { return remote } if fs.Config.SuffixKeepExtension { ext := path.Ext(remote) base := remote[:len(remote)-len(ext)] return base + fs.Config.Suffix + ext } return remote + fs.Config.Suffix } // DeleteFileWithBackupDir deletes a single file respecting --dry-run // and accumulating stats and errors. // // If backupDir is set then it moves the file to there instead of // deleting func DeleteFileWithBackupDir(ctx context.Context, dst fs.Object, backupDir fs.Fs) (err error) { tr := accounting.Stats(ctx).NewCheckingTransfer(dst) defer func() { tr.Done(err) }() numDeletes := accounting.Stats(ctx).Deletes(1) if fs.Config.MaxDelete != -1 && numDeletes > fs.Config.MaxDelete { return fserrors.FatalError(errors.New("--max-delete threshold reached")) } action, actioned := "delete", "Deleted" if backupDir != nil { action, actioned = "move into backup dir", "Moved into backup dir" } skip := SkipDestructive(ctx, dst, action) if skip { // do nothing } else if backupDir != nil { err = MoveBackupDir(ctx, backupDir, dst) } else { err = dst.Remove(ctx) } if err != nil { fs.Errorf(dst, "Couldn't %s: %v", action, err) err = fs.CountError(err) } else if !skip { fs.Infof(dst, actioned) } return err } // DeleteFile deletes a single file respecting --dry-run and accumulating stats and errors. 
// // If useBackupDir is set and --backup-dir is in effect then it moves // the file to there instead of deleting func DeleteFile(ctx context.Context, dst fs.Object) (err error) { return DeleteFileWithBackupDir(ctx, dst, nil) } // DeleteFilesWithBackupDir removes all the files passed in the // channel // // If backupDir is set the files will be placed into that directory // instead of being deleted. func DeleteFilesWithBackupDir(ctx context.Context, toBeDeleted fs.ObjectsChan, backupDir fs.Fs) error { var wg sync.WaitGroup wg.Add(fs.Config.Transfers) var errorCount int32 var fatalErrorCount int32 for i := 0; i < fs.Config.Transfers; i++ { go func() { defer wg.Done() for dst := range toBeDeleted { err := DeleteFileWithBackupDir(ctx, dst, backupDir) if err != nil { atomic.AddInt32(&errorCount, 1) if fserrors.IsFatalError(err) { fs.Errorf(nil, "Got fatal error on delete: %s", err) atomic.AddInt32(&fatalErrorCount, 1) return } } } }() } fs.Debugf(nil, "Waiting for deletions to finish") wg.Wait() if errorCount > 0 { err := errors.Errorf("failed to delete %d files", errorCount) if fatalErrorCount > 0 { return fserrors.FatalError(err) } return err } return nil } // DeleteFiles removes all the files passed in the channel func DeleteFiles(ctx context.Context, toBeDeleted fs.ObjectsChan) error { return DeleteFilesWithBackupDir(ctx, toBeDeleted, nil) } // SameRemoteType returns true if fdst and fsrc are the same type func SameRemoteType(fdst, fsrc fs.Info) bool { return fmt.Sprintf("%T", fdst) == fmt.Sprintf("%T", fsrc) } // SameConfig returns true if fdst and fsrc are using the same config // file entry func SameConfig(fdst, fsrc fs.Info) bool { return fdst.Name() == fsrc.Name() } // Same returns true if fdst and fsrc point to the same underlying Fs func Same(fdst, fsrc fs.Info) bool { return SameConfig(fdst, fsrc) && strings.Trim(fdst.Root(), "/") == strings.Trim(fsrc.Root(), "/") } // fixRoot returns the Root with a trailing / if not empty. It is // aware of case insensitive filesystems. func fixRoot(f fs.Info) string { s := strings.Trim(filepath.ToSlash(f.Root()), "/") if s != "" { s += "/" } if f.Features().CaseInsensitive { s = strings.ToLower(s) } return s } // Overlapping returns true if fdst and fsrc point to the same // underlying Fs and they overlap. func Overlapping(fdst, fsrc fs.Info) bool { if !SameConfig(fdst, fsrc) { return false } fdstRoot := fixRoot(fdst) fsrcRoot := fixRoot(fsrc) return strings.HasPrefix(fdstRoot, fsrcRoot) || strings.HasPrefix(fsrcRoot, fdstRoot) } // SameDir returns true if fdst and fsrc point to the same // underlying Fs and they are the same directory. 
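//
// For example (illustrative, hypothetical remotes): "remote:a" and
// "remote:a/" are the same directory, "remote:a" and "remote:a/b" merely
// overlap (see Overlapping above), and paths on two differently named
// config entries are never the same directory.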
func SameDir(fdst, fsrc fs.Info) bool { if !SameConfig(fdst, fsrc) { return false } fdstRoot := fixRoot(fdst) fsrcRoot := fixRoot(fsrc) return fdstRoot == fsrcRoot } // Retry runs fn up to maxTries times if it returns a retriable error func Retry(o interface{}, maxTries int, fn func() error) (err error) { for tries := 1; tries <= maxTries; tries++ { // Call the function which might error err = fn() if err == nil { break } // Retry if err returned a retry error if fserrors.IsRetryError(err) || fserrors.ShouldRetry(err) { fs.Debugf(o, "Received error: %v - low level retry %d/%d", err, tries, maxTries) continue } break } return err } // ListFn lists the Fs to the supplied function // // Lists in parallel which may get them out of order func ListFn(ctx context.Context, f fs.Fs, fn func(fs.Object)) error { return walk.ListR(ctx, f, "", false, fs.Config.MaxDepth, walk.ListObjects, func(entries fs.DirEntries) error { entries.ForObject(fn) return nil }) } // mutex for synchronized output var outMutex sync.Mutex // Synchronized fmt.Fprintf // // Ignores errors from Fprintf func syncFprintf(w io.Writer, format string, a ...interface{}) { outMutex.Lock() defer outMutex.Unlock() _, _ = fmt.Fprintf(w, format, a...) } // List the Fs to the supplied writer // // Shows size and path - obeys includes and excludes // // Lists in parallel which may get them out of order func List(ctx context.Context, f fs.Fs, w io.Writer) error { return ListFn(ctx, f, func(o fs.Object) { syncFprintf(w, "%9d %s\n", o.Size(), o.Remote()) }) } // ListLong lists the Fs to the supplied writer // // Shows size, mod time and path - obeys includes and excludes // // Lists in parallel which may get them out of order func ListLong(ctx context.Context, f fs.Fs, w io.Writer) error { return ListFn(ctx, f, func(o fs.Object) { tr := accounting.Stats(ctx).NewCheckingTransfer(o) defer func() { tr.Done(nil) }() modTime := o.ModTime(ctx) syncFprintf(w, "%9d %s %s\n", o.Size(), modTime.Local().Format("2006-01-02 15:04:05.000000000"), o.Remote()) }) } // Md5sum lists the Fs to the supplied writer // // Produces the same output as the md5sum command - obeys includes and // excludes // // Lists in parallel which may get them out of order func Md5sum(ctx context.Context, f fs.Fs, w io.Writer) error { return HashLister(ctx, hash.MD5, f, w) } // Sha1sum lists the Fs to the supplied writer // // Obeys includes and excludes // // Lists in parallel which may get them out of order func Sha1sum(ctx context.Context, f fs.Fs, w io.Writer) error { return HashLister(ctx, hash.SHA1, f, w) } // hashSum returns the human readable hash for ht passed in. This may // be UNSUPPORTED or ERROR. If it can't return a valid hash it will // also return an error. 
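//
// For example, on a backend with no MD5 support this returns
// ("UNSUPPORTED", hash.ErrUnsupported), and on a read failure it returns
// ("ERROR", err); callers such as HashLister print the string and ignore
// the error.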
func hashSum(ctx context.Context, ht hash.Type, o fs.Object) (string, error) { var err error tr := accounting.Stats(ctx).NewCheckingTransfer(o) defer func() { tr.Done(err) }() sum, err := o.Hash(ctx, ht) if err == hash.ErrUnsupported { sum = "UNSUPPORTED" } else if err != nil { fs.Debugf(o, "Failed to read %v: %v", ht, err) sum = "ERROR" } return sum, err } // HashLister does an md5sum equivalent for the hash type passed in func HashLister(ctx context.Context, ht hash.Type, f fs.Fs, w io.Writer) error { return ListFn(ctx, f, func(o fs.Object) { sum, _ := hashSum(ctx, ht, o) syncFprintf(w, "%*s %s\n", hash.Width(ht), sum, o.Remote()) }) } // HashListerBase64 does an md5sum equivalent for the hash type passed in with base64 encoded func HashListerBase64(ctx context.Context, ht hash.Type, f fs.Fs, w io.Writer) error { return ListFn(ctx, f, func(o fs.Object) { sum, err := hashSum(ctx, ht, o) if err == nil { hexBytes, _ := hex.DecodeString(sum) sum = base64.URLEncoding.EncodeToString(hexBytes) } width := base64.URLEncoding.EncodedLen(hash.Width(ht) / 2) syncFprintf(w, "%*s %s\n", width, sum, o.Remote()) }) } // Count counts the objects and their sizes in the Fs // // Obeys includes and excludes func Count(ctx context.Context, f fs.Fs) (objects int64, size int64, err error) { err = ListFn(ctx, f, func(o fs.Object) { atomic.AddInt64(&objects, 1) objectSize := o.Size() if objectSize > 0 { atomic.AddInt64(&size, objectSize) } }) return } // ConfigMaxDepth returns the depth to use for a recursive or non recursive listing. func ConfigMaxDepth(recursive bool) int { depth := fs.Config.MaxDepth if !recursive && depth < 0 { depth = 1 } return depth } // ListDir lists the directories/buckets/containers in the Fs to the supplied writer func ListDir(ctx context.Context, f fs.Fs, w io.Writer) error { return walk.ListR(ctx, f, "", false, ConfigMaxDepth(false), walk.ListDirs, func(entries fs.DirEntries) error { entries.ForDir(func(dir fs.Directory) { if dir != nil { syncFprintf(w, "%12d %13s %9d %s\n", dir.Size(), dir.ModTime(ctx).Local().Format("2006-01-02 15:04:05"), dir.Items(), dir.Remote()) } }) return nil }) } // Mkdir makes a destination directory or container func Mkdir(ctx context.Context, f fs.Fs, dir string) error { if SkipDestructive(ctx, fs.LogDirName(f, dir), "make directory") { return nil } fs.Debugf(fs.LogDirName(f, dir), "Making directory") err := f.Mkdir(ctx, dir) if err != nil { err = fs.CountError(err) return err } return nil } // TryRmdir removes a container but not if not empty. It doesn't // count errors but may return one. 
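//
// Use Rmdir below when the failure should also be recorded in the
// accounting stats; it wraps this function with fs.CountError.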
func TryRmdir(ctx context.Context, f fs.Fs, dir string) error { if SkipDestructive(ctx, fs.LogDirName(f, dir), "remove directory") { return nil } fs.Debugf(fs.LogDirName(f, dir), "Removing directory") return f.Rmdir(ctx, dir) } // Rmdir removes a container but not if not empty func Rmdir(ctx context.Context, f fs.Fs, dir string) error { err := TryRmdir(ctx, f, dir) if err != nil { err = fs.CountError(err) return err } return err } // Purge removes a directory and all of its contents func Purge(ctx context.Context, f fs.Fs, dir string) (err error) { doFallbackPurge := true if doPurge := f.Features().Purge; doPurge != nil { doFallbackPurge = false if SkipDestructive(ctx, fs.LogDirName(f, dir), "purge directory") { return nil } err = doPurge(ctx, dir) if err == fs.ErrorCantPurge { doFallbackPurge = true } } if doFallbackPurge { // DeleteFiles and Rmdir observe --dry-run err = DeleteFiles(ctx, listToChan(ctx, f, dir)) if err != nil { return err } err = Rmdirs(ctx, f, dir, false) } if err != nil { err = fs.CountError(err) return err } return nil } // Delete removes all the contents of a container. Unlike Purge, it // obeys includes and excludes. func Delete(ctx context.Context, f fs.Fs) error { delChan := make(fs.ObjectsChan, fs.Config.Transfers) delErr := make(chan error, 1) go func() { delErr <- DeleteFiles(ctx, delChan) }() err := ListFn(ctx, f, func(o fs.Object) { delChan <- o }) close(delChan) delError := <-delErr if err == nil { err = delError } return err } // listToChan will transfer all objects in the listing to the output // // If an error occurs, the error will be logged, and it will close the // channel. // // If the error was ErrorDirNotFound then it will be ignored func listToChan(ctx context.Context, f fs.Fs, dir string) fs.ObjectsChan { o := make(fs.ObjectsChan, fs.Config.Checkers) go func() { defer close(o) err := walk.ListR(ctx, f, dir, true, fs.Config.MaxDepth, walk.ListObjects, func(entries fs.DirEntries) error { entries.ForObject(func(obj fs.Object) { o <- obj }) return nil }) if err != nil && err != fs.ErrorDirNotFound { err = errors.Wrap(err, "failed to list") err = fs.CountError(err) fs.Errorf(nil, "%v", err) } }() return o } // CleanUp removes the trash for the Fs func CleanUp(ctx context.Context, f fs.Fs) error { doCleanUp := f.Features().CleanUp if doCleanUp == nil { return errors.Errorf("%v doesn't support cleanup", f) } if SkipDestructive(ctx, f, "clean up old files") { return nil } return doCleanUp(ctx) } // wrap a Reader and a Closer together into a ReadCloser type readCloser struct { io.Reader io.Closer } // Cat any files to the io.Writer // // if offset == 0 it will be ignored // if offset > 0 then the file will be seeked to that offset // if offset < 0 then the file will be seeked that far from the end // // if count < 0 then it will be ignored // if count >= 0 then only that many characters will be output func Cat(ctx context.Context, f fs.Fs, w io.Writer, offset, count int64) error { var mu sync.Mutex return ListFn(ctx, f, func(o fs.Object) { var err error tr := accounting.Stats(ctx).NewTransfer(o) defer func() { tr.Done(err) }() opt := fs.RangeOption{Start: offset, End: -1} size := o.Size() if opt.Start < 0 { opt.Start += size } if count >= 0 { opt.End = opt.Start + count - 1 } var options []fs.OpenOption if opt.Start > 0 || opt.End >= 0 { options = append(options, &opt) } for _, option := range fs.Config.DownloadHeaders { options = append(options, option) } in, err := o.Open(ctx, options...) 
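// (opt was normalised above: e.g. for a 10 byte object, offset=-3 and
// count=-1 become Start=7, End=-1, i.e. the last 3 bytes)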
if err != nil { err = fs.CountError(err) fs.Errorf(o, "Failed to open: %v", err) return } if count >= 0 { in = &readCloser{Reader: &io.LimitedReader{R: in, N: count}, Closer: in} } in = tr.Account(ctx, in).WithBuffer() // account and buffer the transfer // take the lock just before we output stuff, so at the last possible moment mu.Lock() defer mu.Unlock() _, err = io.Copy(w, in) if err != nil { err = fs.CountError(err) fs.Errorf(o, "Failed to send to output: %v", err) } }) } // Rcat reads data from the Reader until EOF and uploads it to a file on remote func Rcat(ctx context.Context, fdst fs.Fs, dstFileName string, in io.ReadCloser, modTime time.Time) (dst fs.Object, err error) { tr := accounting.Stats(ctx).NewTransferRemoteSize(dstFileName, -1) defer func() { tr.Done(err) }() in = tr.Account(ctx, in).WithBuffer() readCounter := readers.NewCountingReader(in) var trackingIn io.Reader var hasher *hash.MultiHasher var options []fs.OpenOption if !fs.Config.IgnoreChecksum { hashes := hash.NewHashSet(fdst.Hashes().GetOne()) // just pick one hash hashOption := &fs.HashesOption{Hashes: hashes} options = append(options, hashOption) hasher, err = hash.NewMultiHasherTypes(hashes) if err != nil { return nil, err } trackingIn = io.TeeReader(readCounter, hasher) } else { trackingIn = readCounter } for _, option := range fs.Config.UploadHeaders { options = append(options, option) } compare := func(dst fs.Object) error { var sums map[hash.Type]string if hasher != nil { sums = hasher.Sums() } src := object.NewStaticObjectInfo(dstFileName, modTime, int64(readCounter.BytesRead()), false, sums, fdst) if !Equal(ctx, src, dst) { err = errors.Errorf("corrupted on transfer") err = fs.CountError(err) fs.Errorf(dst, "%v", err) return err } return nil } // check if file small enough for direct upload buf := make([]byte, fs.Config.StreamingUploadCutoff) if n, err := io.ReadFull(trackingIn, buf); err == io.EOF || err == io.ErrUnexpectedEOF { fs.Debugf(fdst, "File to upload is small (%d bytes), uploading instead of streaming", n) src := object.NewMemoryObject(dstFileName, modTime, buf[:n]) return Copy(ctx, fdst, nil, dstFileName, src) } // Make a new ReadCloser with the bits we've already read in = &readCloser{ Reader: io.MultiReader(bytes.NewReader(buf), trackingIn), Closer: in, } fStreamTo := fdst canStream := fdst.Features().PutStream != nil if !canStream { fs.Debugf(fdst, "Target remote doesn't support streaming uploads, creating temporary local FS to spool file") tmpLocalFs, err := fs.TemporaryLocalFs() if err != nil { return nil, errors.Wrap(err, "Failed to create temporary local FS to spool file") } defer func() { err := Purge(ctx, tmpLocalFs, "") if err != nil { fs.Infof(tmpLocalFs, "Failed to cleanup temporary FS: %v", err) } }() fStreamTo = tmpLocalFs } if SkipDestructive(ctx, dstFileName, "upload from pipe") { // prevents "broken pipe" errors _, err = io.Copy(ioutil.Discard, in) return nil, err } objInfo := object.NewStaticObjectInfo(dstFileName, modTime, -1, false, nil, nil) if dst, err = fStreamTo.Features().PutStream(ctx, in, objInfo, options...); err != nil { return dst, err } if err = compare(dst); err != nil { return dst, err } if !canStream { // copy dst (which is the local object we have just streamed to) to the remote return Copy(ctx, fdst, nil, dstFileName, dst) } return dst, nil } // PublicLink adds a "readable by anyone with link" permission on the given file or folder. 
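//
// A minimal sketch of a call (hypothetical names; how expire and unlink
// are honoured is backend dependent):
//
//	link, err := operations.PublicLink(ctx, f, "file.txt", fs.Duration(time.Hour), false)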
func PublicLink(ctx context.Context, f fs.Fs, remote string, expire fs.Duration, unlink bool) (string, error) { doPublicLink := f.Features().PublicLink if doPublicLink == nil { return "", errors.Errorf("%v doesn't support public links", f) } return doPublicLink(ctx, remote, expire, unlink) } // Rmdirs removes any empty directories (or directories only // containing empty directories) under f, including f. func Rmdirs(ctx context.Context, f fs.Fs, dir string, leaveRoot bool) error { dirEmpty := make(map[string]bool) dirEmpty[dir] = !leaveRoot err := walk.Walk(ctx, f, dir, true, fs.Config.MaxDepth, func(dirPath string, entries fs.DirEntries, err error) error { if err != nil { err = fs.CountError(err) fs.Errorf(f, "Failed to list %q: %v", dirPath, err) return nil } for _, entry := range entries { switch x := entry.(type) { case fs.Directory: // add a new directory as empty dir := x.Remote() _, found := dirEmpty[dir] if !found { dirEmpty[dir] = true } case fs.Object: // mark the parents of the file as being non-empty dir := x.Remote() for dir != "" { dir = path.Dir(dir) if dir == "." || dir == "/" { dir = "" } empty, found := dirEmpty[dir] // End if we reach a directory which is non-empty if found && !empty { break } dirEmpty[dir] = false } } } return nil }) if err != nil { return errors.Wrap(err, "failed to rmdirs") } // Now delete the empty directories, starting from the longest path var toDelete []string for dir, empty := range dirEmpty { if empty { toDelete = append(toDelete, dir) } } sort.Strings(toDelete) for i := len(toDelete) - 1; i >= 0; i-- { dir := toDelete[i] err := TryRmdir(ctx, f, dir) if err != nil { err = fs.CountError(err) fs.Errorf(dir, "Failed to rmdir: %v", err) return err } } return nil } // GetCompareDest sets up --compare-dest func GetCompareDest() (CompareDest fs.Fs, err error) { CompareDest, err = cache.Get(fs.Config.CompareDest) if err != nil { return nil, fserrors.FatalError(errors.Errorf("Failed to make fs for --compare-dest %q: %v", fs.Config.CompareDest, err)) } return CompareDest, nil } // compareDest checks --compare-dest to see if src needs to // be copied // // Returns True if src is in --compare-dest func compareDest(ctx context.Context, dst, src fs.Object, CompareDest fs.Fs) (NoNeedTransfer bool, err error) { var remote string if dst == nil { remote = src.Remote() } else { remote = dst.Remote() } CompareDestFile, err := CompareDest.NewObject(ctx, remote) switch err { case fs.ErrorObjectNotFound: return false, nil case nil: break default: return false, err } if Equal(ctx, src, CompareDestFile) { fs.Debugf(src, "Destination found in --compare-dest, skipping") return true, nil } return false, nil } // GetCopyDest sets up --copy-dest func GetCopyDest(fdst fs.Fs) (CopyDest fs.Fs, err error) { CopyDest, err = cache.Get(fs.Config.CopyDest) if err != nil { return nil, fserrors.FatalError(errors.Errorf("Failed to make fs for --copy-dest %q: %v", fs.Config.CopyDest, err)) } if !SameConfig(fdst, CopyDest) { return nil, fserrors.FatalError(errors.New("parameter to --copy-dest has to be on the same remote as destination")) } if CopyDest.Features().Copy == nil { return nil, fserrors.FatalError(errors.New("can't use --copy-dest on a remote which doesn't support server side copy")) } return CopyDest, nil } // copyDest checks --copy-dest to see if src needs to // be copied // // Returns True if src was copied from --copy-dest func copyDest(ctx context.Context, fdst fs.Fs, dst, src fs.Object, CopyDest, backupDir fs.Fs) (NoNeedTransfer bool, err error) { var remote string if 
dst == nil { remote = src.Remote() } else { remote = dst.Remote() } CopyDestFile, err := CopyDest.NewObject(ctx, remote) switch err { case fs.ErrorObjectNotFound: return false, nil case nil: break default: return false, err } opt := defaultEqualOpt() opt.updateModTime = false if equal(ctx, src, CopyDestFile, opt) { if dst == nil || !Equal(ctx, src, dst) { if dst != nil && backupDir != nil { err = MoveBackupDir(ctx, backupDir, dst) if err != nil { return false, errors.Wrap(err, "moving to --backup-dir failed") } // If successful zero out the dstObj as it is no longer there dst = nil } _, err := Copy(ctx, fdst, dst, remote, CopyDestFile) if err != nil { fs.Errorf(src, "Destination found in --copy-dest, error copying") return false, nil } fs.Debugf(src, "Destination found in --copy-dest, using server side copy") return true, nil } fs.Debugf(src, "Unchanged skipping") return true, nil } fs.Debugf(src, "Destination not found in --copy-dest") return false, nil } // CompareOrCopyDest checks --compare-dest and --copy-dest to see if src // does not need to be copied // // Returns True if src does not need to be copied func CompareOrCopyDest(ctx context.Context, fdst fs.Fs, dst, src fs.Object, CompareOrCopyDest, backupDir fs.Fs) (NoNeedTransfer bool, err error) { if fs.Config.CompareDest != "" { return compareDest(ctx, dst, src, CompareOrCopyDest) } else if fs.Config.CopyDest != "" { return copyDest(ctx, fdst, dst, src, CompareOrCopyDest, backupDir) } return false, nil } // NeedTransfer checks to see if src needs to be copied to dst using // the current config. // // Returns a flag which indicates whether the file needs to be // transferred or not. func NeedTransfer(ctx context.Context, dst, src fs.Object) bool { if dst == nil { fs.Debugf(src, "Need to transfer - File not found at Destination") return true } // If we should ignore existing files, don't transfer if fs.Config.IgnoreExisting { fs.Debugf(src, "Destination exists, skipping") return false } // If we should upload unconditionally if fs.Config.IgnoreTimes { fs.Debugf(src, "Transferring unconditionally as --ignore-times is in use") return true } // If UpdateOlder is in effect, skip if dst is newer than src if fs.Config.UpdateOlder { srcModTime := src.ModTime(ctx) dstModTime := dst.ModTime(ctx) dt := dstModTime.Sub(srcModTime) // If have a mutually agreed precision then use that modifyWindow := fs.GetModifyWindow(dst.Fs(), src.Fs()) if modifyWindow == fs.ModTimeNotSupported { // Otherwise use 1 second as a safe default as // the resolution of the time a file was // uploaded. modifyWindow = time.Second } switch { case dt >= modifyWindow: fs.Debugf(src, "Destination is newer than source, skipping") return false case dt <= -modifyWindow: // force --checksum on for the check and do update modtimes by default opt := defaultEqualOpt() opt.forceModTimeMatch = true if equal(ctx, src, dst, opt) { fs.Debugf(src, "Unchanged skipping") return false } default: // Do a size only compare unless --checksum is set opt := defaultEqualOpt() opt.sizeOnly = !fs.Config.CheckSum if equal(ctx, src, dst, opt) { fs.Debugf(src, "Destination mod time is within %v of source and files identical, skipping", modifyWindow) return false } fs.Debugf(src, "Destination mod time is within %v of source but files differ, transferring", modifyWindow) } } else { // Check to see if changed or not if Equal(ctx, src, dst) { fs.Debugf(src, "Unchanged skipping") return false } } return true } // RcatSize reads data from the Reader until EOF and uploads it to a file on remote. 
// Pass in size >=0 if known, <0 if not known func RcatSize(ctx context.Context, fdst fs.Fs, dstFileName string, in io.ReadCloser, size int64, modTime time.Time) (dst fs.Object, err error) { var obj fs.Object if size >= 0 { var err error // Size known use Put tr := accounting.Stats(ctx).NewTransferRemoteSize(dstFileName, size) defer func() { tr.Done(err) }() body := ioutil.NopCloser(in) // we let the server close the body in := tr.Account(ctx, body) // account the transfer (no buffering) if SkipDestructive(ctx, dstFileName, "upload from pipe") { // prevents "broken pipe" errors _, err = io.Copy(ioutil.Discard, in) return nil, err } info := object.NewStaticObjectInfo(dstFileName, modTime, size, true, nil, fdst) obj, err = fdst.Put(ctx, in, info) if err != nil { fs.Errorf(dstFileName, "Post request put error: %v", err) return nil, err } } else { // Size unknown use Rcat obj, err = Rcat(ctx, fdst, dstFileName, in, modTime) if err != nil { fs.Errorf(dstFileName, "Post request rcat error: %v", err) return nil, err } } return obj, nil } // copyURLFunc is called from CopyURLFn type copyURLFunc func(ctx context.Context, dstFileName string, in io.ReadCloser, size int64, modTime time.Time) (err error) // copyURLFn copies the data from the url to the function supplied func copyURLFn(ctx context.Context, dstFileName string, url string, dstFileNameFromURL bool, fn copyURLFunc) (err error) { client := fshttp.NewClient(fs.Config) resp, err := client.Get(url) if err != nil { return err } defer fs.CheckClose(resp.Body, &err) if resp.StatusCode < 200 || resp.StatusCode >= 300 { return errors.Errorf("CopyURL failed: %s", resp.Status) } modTime, err := http.ParseTime(resp.Header.Get("Last-Modified")) if err != nil { modTime = time.Now() } if dstFileNameFromURL { dstFileName = path.Base(resp.Request.URL.Path) if dstFileName == "." 
|| dstFileName == "/" { return errors.Errorf("CopyURL failed: file name wasn't found in url") } } return fn(ctx, dstFileName, resp.Body, resp.ContentLength, modTime) } // CopyURL copies the data from the url to (fdst, dstFileName) func CopyURL(ctx context.Context, fdst fs.Fs, dstFileName string, url string, dstFileNameFromURL bool, noClobber bool) (dst fs.Object, err error) { err = copyURLFn(ctx, dstFileName, url, dstFileNameFromURL, func(ctx context.Context, dstFileName string, in io.ReadCloser, size int64, modTime time.Time) (err error) { if noClobber { _, err = fdst.NewObject(ctx, dstFileName) if err == nil { return errors.New("CopyURL failed: file already exists") } } dst, err = RcatSize(ctx, fdst, dstFileName, in, size, modTime) return err }) return dst, err } // CopyURLToWriter copies the data from the url to the io.Writer supplied func CopyURLToWriter(ctx context.Context, url string, out io.Writer) (err error) { return copyURLFn(ctx, "", url, false, func(ctx context.Context, dstFileName string, in io.ReadCloser, size int64, modTime time.Time) (err error) { _, err = io.Copy(out, in) return err }) } // BackupDir returns the correctly configured --backup-dir func BackupDir(fdst fs.Fs, fsrc fs.Fs, srcFileName string) (backupDir fs.Fs, err error) { if fs.Config.BackupDir != "" { backupDir, err = cache.Get(fs.Config.BackupDir) if err != nil { return nil, fserrors.FatalError(errors.Errorf("Failed to make fs for --backup-dir %q: %v", fs.Config.BackupDir, err)) } if !SameConfig(fdst, backupDir) { return nil, fserrors.FatalError(errors.New("parameter to --backup-dir has to be on the same remote as destination")) } if srcFileName == "" { if Overlapping(fdst, backupDir) { return nil, fserrors.FatalError(errors.New("destination and parameter to --backup-dir mustn't overlap")) } if Overlapping(fsrc, backupDir) { return nil, fserrors.FatalError(errors.New("source and parameter to --backup-dir mustn't overlap")) } } else { if fs.Config.Suffix == "" { if SameDir(fdst, backupDir) { return nil, fserrors.FatalError(errors.New("destination and parameter to --backup-dir mustn't be the same")) } if SameDir(fsrc, backupDir) { return nil, fserrors.FatalError(errors.New("source and parameter to --backup-dir mustn't be the same")) } } } } else if fs.Config.Suffix != "" { // --backup-dir is not set but --suffix is - use the destination as the backupDir backupDir = fdst } else { return nil, fserrors.FatalError(errors.New("internal error: BackupDir called when --backup-dir and --suffix both empty")) } if !CanServerSideMove(backupDir) { return nil, fserrors.FatalError(errors.New("can't use --backup-dir on a remote which doesn't support server side move or copy")) } return backupDir, nil } // MoveBackupDir moves a file to the backup dir func MoveBackupDir(ctx context.Context, backupDir fs.Fs, dst fs.Object) (err error) { remoteWithSuffix := SuffixName(dst.Remote()) overwritten, _ := backupDir.NewObject(ctx, remoteWithSuffix) _, err = Move(ctx, backupDir, overwritten, remoteWithSuffix, dst) return err } // moveOrCopyFile moves or copies a single file possibly to a new name func moveOrCopyFile(ctx context.Context, fdst fs.Fs, fsrc fs.Fs, dstFileName string, srcFileName string, cp bool) (err error) { dstFilePath := path.Join(fdst.Root(), dstFileName) srcFilePath := path.Join(fsrc.Root(), srcFileName) if fdst.Name() == fsrc.Name() && dstFilePath == srcFilePath { fs.Debugf(fdst, "don't need to copy/move %s, it is already at target location", dstFileName) return nil } // Choose operations Op := Move if cp { Op = Copy } 
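// (Move and Copy share the signature
// func(ctx, fdst, dst, remote, src) (fs.Object, error), so Op can be
// called identically for both operations below.)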
// Find src object srcObj, err := fsrc.NewObject(ctx, srcFileName) if err != nil { return err } // Find dst object if it exists var dstObj fs.Object if !fs.Config.NoCheckDest { dstObj, err = fdst.NewObject(ctx, dstFileName) if err == fs.ErrorObjectNotFound { dstObj = nil } else if err != nil { return err } } // Special case for changing case of a file on a case insensitive remote // This will move the file to a temporary name then // move it back to the intended destination. This is required // to avoid issues with certain remotes and avoid file deletion. if !cp && fdst.Name() == fsrc.Name() && fdst.Features().CaseInsensitive && dstFileName != srcFileName && strings.ToLower(dstFilePath) == strings.ToLower(srcFilePath) { // Create random name to temporarily move file to tmpObjName := dstFileName + "-rclone-move-" + random.String(8) _, err := fdst.NewObject(ctx, tmpObjName) if err != fs.ErrorObjectNotFound { if err == nil { return errors.New("found an already existing file with a randomly generated name. Try the operation again") } return errors.Wrap(err, "error while attempting to move file to a temporary location") } tr := accounting.Stats(ctx).NewTransfer(srcObj) defer func() { tr.Done(err) }() tmpObj, err := Op(ctx, fdst, nil, tmpObjName, srcObj) if err != nil { return errors.Wrap(err, "error while moving file to temporary location") } _, err = Op(ctx, fdst, nil, dstFileName, tmpObj) return err } var backupDir, copyDestDir fs.Fs if fs.Config.BackupDir != "" || fs.Config.Suffix != "" { backupDir, err = BackupDir(fdst, fsrc, srcFileName) if err != nil { return errors.Wrap(err, "creating Fs for --backup-dir failed") } } if fs.Config.CompareDest != "" { copyDestDir, err = GetCompareDest() if err != nil { return err } } else if fs.Config.CopyDest != "" { copyDestDir, err = GetCopyDest(fdst) if err != nil { return err } } NoNeedTransfer, err := CompareOrCopyDest(ctx, fdst, dstObj, srcObj, copyDestDir, backupDir) if err != nil { return err } if !NoNeedTransfer && NeedTransfer(ctx, dstObj, srcObj) { // If destination already exists, then we must move it into --backup-dir if required if dstObj != nil && backupDir != nil { err = MoveBackupDir(ctx, backupDir, dstObj) if err != nil { return errors.Wrap(err, "moving to --backup-dir failed") } // If successful zero out the dstObj as it is no longer there dstObj = nil } _, err = Op(ctx, fdst, dstObj, dstFileName, srcObj) } else { tr := accounting.Stats(ctx).NewCheckingTransfer(srcObj) if !cp { err = DeleteFile(ctx, srcObj) } tr.Done(err) } return err } // MoveFile moves a single file possibly to a new name func MoveFile(ctx context.Context, fdst fs.Fs, fsrc fs.Fs, dstFileName string, srcFileName string) (err error) { return moveOrCopyFile(ctx, fdst, fsrc, dstFileName, srcFileName, false) } // CopyFile copies a single file possibly to a new name func CopyFile(ctx context.Context, fdst fs.Fs, fsrc fs.Fs, dstFileName string, srcFileName string) (err error) { return moveOrCopyFile(ctx, fdst, fsrc, dstFileName, srcFileName, true) } // SetTier changes tier of object in remote func SetTier(ctx context.Context, fsrc fs.Fs, tier string) error { return ListFn(ctx, fsrc, func(o fs.Object) { objImpl, ok := o.(fs.SetTierer) if !ok { fs.Errorf(fsrc, "Remote object does not implement SetTier") return } err := objImpl.SetTier(tier) if err != nil { fs.Errorf(fsrc, "Failed to do SetTier, %v", err) } }) } // ListFormat defines the file information print format type ListFormat struct { separator string dirSlash bool absolute bool output []func(entry *ListJSONItem) string csv 
*csv.Writer buf bytes.Buffer } // SetSeparator changes separator in struct func (l *ListFormat) SetSeparator(separator string) { l.separator = separator } // SetDirSlash defines if slash should be printed func (l *ListFormat) SetDirSlash(dirSlash bool) { l.dirSlash = dirSlash } // SetAbsolute prints a leading slash in front of path names func (l *ListFormat) SetAbsolute(absolute bool) { l.absolute = absolute } // SetCSV defines if the output should be csv // // Note that you should call SetSeparator before this if you want a // custom separator func (l *ListFormat) SetCSV(useCSV bool) { if useCSV { l.csv = csv.NewWriter(&l.buf) if l.separator != "" { l.csv.Comma = []rune(l.separator)[0] } } else { l.csv = nil } } // SetOutput sets functions used to create files information func (l *ListFormat) SetOutput(output []func(entry *ListJSONItem) string) { l.output = output } // AddModTime adds file's Mod Time to output func (l *ListFormat) AddModTime() { l.AppendOutput(func(entry *ListJSONItem) string { return entry.ModTime.When.Local().Format("2006-01-02 15:04:05") }) } // AddSize adds file's size to output func (l *ListFormat) AddSize() { l.AppendOutput(func(entry *ListJSONItem) string { return strconv.FormatInt(entry.Size, 10) }) } // normalisePath makes sure the path has the correct slashes for the current mode func (l *ListFormat) normalisePath(entry *ListJSONItem, remote string) string { if l.absolute && !strings.HasPrefix(remote, "/") { remote = "/" + remote } if entry.IsDir && l.dirSlash { remote += "/" } return remote } // AddPath adds path to file to output func (l *ListFormat) AddPath() { l.AppendOutput(func(entry *ListJSONItem) string { return l.normalisePath(entry, entry.Path) }) } // AddEncrypted adds the encrypted path to file to output func (l *ListFormat) AddEncrypted() { l.AppendOutput(func(entry *ListJSONItem) string { return l.normalisePath(entry, entry.Encrypted) }) } // AddHash adds the hash of the type given to the output func (l *ListFormat) AddHash(ht hash.Type) { hashName := ht.String() l.AppendOutput(func(entry *ListJSONItem) string { if entry.IsDir { return "" } return entry.Hashes[hashName] }) } // AddID adds file's ID to the output if known func (l *ListFormat) AddID() { l.AppendOutput(func(entry *ListJSONItem) string { return entry.ID }) } // AddOrigID adds file's Original ID to the output if known func (l *ListFormat) AddOrigID() { l.AppendOutput(func(entry *ListJSONItem) string { return entry.OrigID }) } // AddTier adds file's Tier to the output if known func (l *ListFormat) AddTier() { l.AppendOutput(func(entry *ListJSONItem) string { return entry.Tier }) } // AddMimeType adds file's MimeType to the output if known func (l *ListFormat) AddMimeType() { l.AppendOutput(func(entry *ListJSONItem) string { return entry.MimeType }) } // AppendOutput adds string generated by specific function to printed output func (l *ListFormat) AppendOutput(functionToAppend func(item *ListJSONItem) string) { l.output = append(l.output, functionToAppend) } // Format prints information about the DirEntry in the format defined func (l *ListFormat) Format(entry *ListJSONItem) (result string) { var out []string for _, fun := range l.output { out = append(out, fun(entry)) } if l.csv != nil { l.buf.Reset() _ = l.csv.Write(out) // can't fail writing to bytes.Buffer l.csv.Flush() result = strings.TrimRight(l.buf.String(), "\n") } else { result = strings.Join(out, l.separator) } return result } // DirMove renames srcRemote to dstRemote // // It does this by loading the directory tree into memory 
(using ListR // if available) and doing renames in parallel. func DirMove(ctx context.Context, f fs.Fs, srcRemote, dstRemote string) (err error) { // Use DirMove if possible if doDirMove := f.Features().DirMove; doDirMove != nil { err = doDirMove(ctx, f, srcRemote, dstRemote) if err == nil { accounting.Stats(ctx).Renames(1) } return err } // Load the directory tree into memory tree, err := walk.NewDirTree(ctx, f, srcRemote, true, -1) if err != nil { return errors.Wrap(err, "RenameDir tree walk") } // Get the directories in sorted order dirs := tree.Dirs() // Make the destination directories - must be done in order not in parallel for _, dir := range dirs { dstPath := dstRemote + dir[len(srcRemote):] err := f.Mkdir(ctx, dstPath) if err != nil { return errors.Wrap(err, "RenameDir mkdir") } } // Rename the files in parallel type rename struct { o fs.Object newPath string } renames := make(chan rename, fs.Config.Transfers) g, gCtx := errgroup.WithContext(context.Background()) for i := 0; i < fs.Config.Transfers; i++ { g.Go(func() error { for job := range renames { dstOverwritten, _ := f.NewObject(gCtx, job.newPath) _, err := Move(gCtx, f, dstOverwritten, job.newPath, job.o) if err != nil { return err } select { case <-gCtx.Done(): return gCtx.Err() default: } } return nil }) } for dir, entries := range tree { dstPath := dstRemote + dir[len(srcRemote):] for _, entry := range entries { if o, ok := entry.(fs.Object); ok { renames <- rename{o, path.Join(dstPath, path.Base(o.Remote()))} } } } close(renames) err = g.Wait() if err != nil { return errors.Wrap(err, "RenameDir renames") } // Remove the source directories in reverse order for i := len(dirs) - 1; i >= 0; i-- { err := f.Rmdir(ctx, dirs[i]) if err != nil { return errors.Wrap(err, "RenameDir rmdir") } } return nil } // FsInfo provides information about a remote type FsInfo struct { // Name of the remote (as passed into NewFs) Name string // Root of the remote (as passed into NewFs) Root string // String returns a description of the FS String string // Precision of the ModTimes in this Fs in Nanoseconds Precision time.Duration // Returns the supported hash types of the filesystem Hashes []string // Features returns the optional features of this Fs Features map[string]bool } // GetFsInfo gets the information (FsInfo) about a given Fs func GetFsInfo(f fs.Fs) *FsInfo { info := &FsInfo{ Name: f.Name(), Root: f.Root(), String: f.String(), Precision: f.Precision(), Hashes: make([]string, 0, 4), Features: f.Features().Enabled(), } for _, hashType := range f.Hashes().Array() { info.Hashes = append(info.Hashes, hashType.String()) } return info } var ( interactiveMu sync.Mutex skipped = map[string]bool{} ) // skipDestructiveChoose asks the user which action to take // // Call with interactiveMu held func skipDestructiveChoose(ctx context.Context, subject interface{}, action string) (skip bool) { fmt.Printf("rclone: %s \"%v\"?\n", action, subject) switch i := config.CommandDefault([]string{ "yYes, this is OK", "nNo, skip this", fmt.Sprintf("sSkip all %s operations with no more questions", action), fmt.Sprintf("!Do all %s operations with no more questions", action), "qExit rclone now.", }, 0); i { case 'y': skip = false case 'n': skip = true case 's': skip = true skipped[action] = true fs.Logf(nil, "Skipping all %s operations from now on without asking", action) case '!': skip = false skipped[action] = false fs.Logf(nil, "Doing all %s operations from now on without asking", action) case 'q': fs.Logf(nil, "Quitting rclone now") atexit.Run() os.Exit(0) 
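// any other rune (e.g. a mistyped key) is reported and treated as skip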
default: skip = true fs.Errorf(nil, "Bad choice %c", i) } return skip } // SkipDestructive should be called whenever rclone is about to do a destructive operation. // // It will check the --dry-run flag and it will ask the user if the --interactive flag is set. // // subject should be the object or directory in use // // action should be a descriptive word or short phrase // // Together they should make sense in this sentence: "Rclone is about // to action subject". func SkipDestructive(ctx context.Context, subject interface{}, action string) (skip bool) { var flag string switch { case fs.Config.DryRun: flag = "--dry-run" skip = true case fs.Config.Interactive: flag = "--interactive" interactiveMu.Lock() defer interactiveMu.Unlock() var found bool skip, found = skipped[action] if !found { skip = skipDestructiveChoose(ctx, subject, action) } default: return false } if skip { fs.Logf(subject, "Skipped %s as %s is set", action, flag) } return skip } rclone-1.53.3/fs/operations/operations_internal_test.go000066400000000000000000000017571375552240400233460ustar00rootroot00000000000000// Internal tests for operations package operations import ( "fmt" "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/object" "github.com/stretchr/testify/assert" ) func TestSizeDiffers(t *testing.T) { when := time.Now() for _, test := range []struct { ignoreSize bool srcSize int64 dstSize int64 want bool }{ {false, 0, 0, false}, {false, 1, 2, true}, {false, 1, -1, false}, {false, -1, 1, false}, {true, 0, 0, false}, {true, 1, 2, false}, {true, 1, -1, false}, {true, -1, 1, false}, } { src := object.NewStaticObjectInfo("a", when, test.srcSize, true, nil, nil) dst := object.NewStaticObjectInfo("a", when, test.dstSize, true, nil, nil) oldIgnoreSize := fs.Config.IgnoreSize fs.Config.IgnoreSize = test.ignoreSize got := sizeDiffers(src, dst) fs.Config.IgnoreSize = oldIgnoreSize assert.Equal(t, test.want, got, fmt.Sprintf("ignoreSize=%v, srcSize=%v, dstSize=%v", test.ignoreSize, test.srcSize, test.dstSize)) } } rclone-1.53.3/fs/operations/operations_test.go000066400000000000000000001256221375552240400214500ustar00rootroot00000000000000// Integration tests - test rclone by doing real transactions to a // storage provider to and from the local disk. // // By default it will use a local fs, however you can provide a // -remote option to use a different remote. The test_all.go script // is a wrapper to call this for all the test remotes. // // FIXME not safe for concurrent running of tests until fs.Config is // no longer a global // // NB When writing tests // // Make sure every series of writes to the remote has a // fstest.CheckItems() before use. This makes sure the directory // listing is now consistent and stops cascading errors. // // Call accounting.GlobalStats().ResetCounters() before every fs.Sync() as it // uses the error count internally. 
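//
// For example (hypothetical remote name; the default is a local fs):
//
//	go test -v -remote TestS3: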
package operations_test import ( "bytes" "context" "fmt" "io" "io/ioutil" "net/http" "net/http/httptest" "os" "regexp" "strings" "testing" "time" _ "github.com/rclone/rclone/backend/all" // import all backends "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/lib/random" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // Some times used in the tests var ( t1 = fstest.Time("2001-02-03T04:05:06.499999999Z") t2 = fstest.Time("2011-12-25T12:59:59.123456789Z") t3 = fstest.Time("2011-12-30T12:59:59.000000000Z") ) // TestMain drives the tests func TestMain(m *testing.M) { fstest.TestMain(m) } func TestMkdir(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() err := operations.Mkdir(context.Background(), r.Fremote, "") require.NoError(t, err) fstest.CheckListing(t, r.Fremote, []fstest.Item{}) err = operations.Mkdir(context.Background(), r.Fremote, "") require.NoError(t, err) } func TestLsd(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteObject(context.Background(), "sub dir/hello world", "hello world", t1) fstest.CheckItems(t, r.Fremote, file1) var buf bytes.Buffer err := operations.ListDir(context.Background(), r.Fremote, &buf) require.NoError(t, err) res := buf.String() assert.Contains(t, res, "sub dir\n") } func TestLs(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteBoth(context.Background(), "potato2", "------------------------------------------------------------", t1) file2 := r.WriteBoth(context.Background(), "empty space", "-", t2) fstest.CheckItems(t, r.Fremote, file1, file2) var buf bytes.Buffer err := operations.List(context.Background(), r.Fremote, &buf) require.NoError(t, err) res := buf.String() assert.Contains(t, res, " 1 empty space\n") assert.Contains(t, res, " 60 potato2\n") } func TestLsWithFilesFrom(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteBoth(context.Background(), "potato2", "------------------------------------------------------------", t1) file2 := r.WriteBoth(context.Background(), "empty space", "-", t2) fstest.CheckItems(t, r.Fremote, file1, file2) // Set the --files-from equivalent f, err := filter.NewFilter(nil) require.NoError(t, err) require.NoError(t, f.AddFile("potato2")) require.NoError(t, f.AddFile("notfound")) // Monkey patch the active filter oldFilter := filter.Active filter.Active = f defer func() { filter.Active = oldFilter }() var buf bytes.Buffer err = operations.List(context.Background(), r.Fremote, &buf) require.NoError(t, err) assert.Equal(t, " 60 potato2\n", buf.String()) // Now try with --no-traverse oldNoTraverse := fs.Config.NoTraverse fs.Config.NoTraverse = true defer func() { fs.Config.NoTraverse = oldNoTraverse }() buf.Reset() err = operations.List(context.Background(), r.Fremote, &buf) require.NoError(t, err) assert.Equal(t, " 60 potato2\n", buf.String()) } func TestLsLong(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteBoth(context.Background(), "potato2", "------------------------------------------------------------", t1) file2 := r.WriteBoth(context.Background(), "empty space", "-", t2) fstest.CheckItems(t, r.Fremote, file1, file2) var buf bytes.Buffer err := operations.ListLong(context.Background(), r.Fremote, &buf) require.NoError(t, err) 
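// ListLong lines have the form "%9d %s %s\n": size right justified in 9
// columns, nanosecond-precision local mod time, then path. The regexps
// below parse the timestamps back out of that format.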
res := buf.String() lines := strings.Split(strings.Trim(res, "\n"), "\n") assert.Equal(t, 2, len(lines)) timeFormat := "2006-01-02 15:04:05.000000000" precision := r.Fremote.Precision() location := time.Now().Location() checkTime := func(m, filename string, expected time.Time) { modTime, err := time.ParseInLocation(timeFormat, m, location) // parse as localtime if err != nil { t.Errorf("Error parsing %q: %v", m, err) } else { fstest.AssertTimeEqualWithPrecision(t, filename, expected, modTime, precision) } } m1 := regexp.MustCompile(`(?m)^ 1 (\d{4}-\d\d-\d\d \d\d:\d\d:\d\d\.\d{9}) empty space$`) if ms := m1.FindStringSubmatch(res); ms == nil { t.Errorf("empty space missing: %q", res) } else { checkTime(ms[1], "empty space", t2.Local()) } m2 := regexp.MustCompile(`(?m)^ 60 (\d{4}-\d\d-\d\d \d\d:\d\d:\d\d\.\d{9}) potato2$`) if ms := m2.FindStringSubmatch(res); ms == nil { t.Errorf("potato2 missing: %q", res) } else { checkTime(ms[1], "potato2", t1.Local()) } } func TestHashSums(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteBoth(context.Background(), "potato2", "------------------------------------------------------------", t1) file2 := r.WriteBoth(context.Background(), "empty space", "-", t2) fstest.CheckItems(t, r.Fremote, file1, file2) // MD5 Sum var buf bytes.Buffer err := operations.Md5sum(context.Background(), r.Fremote, &buf) require.NoError(t, err) res := buf.String() if !strings.Contains(res, "336d5ebc5436534e61d16e63ddfca327 empty space\n") && !strings.Contains(res, " UNSUPPORTED empty space\n") && !strings.Contains(res, " empty space\n") { t.Errorf("empty space missing: %q", res) } if !strings.Contains(res, "d6548b156ea68a4e003e786df99eee76 potato2\n") && !strings.Contains(res, " UNSUPPORTED potato2\n") && !strings.Contains(res, " potato2\n") { t.Errorf("potato2 missing: %q", res) } // SHA1 Sum buf.Reset() err = operations.Sha1sum(context.Background(), r.Fremote, &buf) require.NoError(t, err) res = buf.String() if !strings.Contains(res, "3bc15c8aae3e4124dd409035f32ea2fd6835efc9 empty space\n") && !strings.Contains(res, " UNSUPPORTED empty space\n") && !strings.Contains(res, " empty space\n") { t.Errorf("empty space missing: %q", res) } if !strings.Contains(res, "9dc7f7d3279715991a22853f5981df582b7f9f6d potato2\n") && !strings.Contains(res, " UNSUPPORTED potato2\n") && !strings.Contains(res, " potato2\n") { t.Errorf("potato2 missing: %q", res) } // QuickXorHash Sum buf.Reset() var ht hash.Type err = ht.Set("QuickXorHash") require.NoError(t, err) err = operations.HashLister(context.Background(), ht, r.Fremote, &buf) require.NoError(t, err) res = buf.String() if !strings.Contains(res, "2d00000000000000000000000100000000000000 empty space\n") && !strings.Contains(res, " UNSUPPORTED empty space\n") && !strings.Contains(res, " empty space\n") { t.Errorf("empty space missing: %q", res) } if !strings.Contains(res, "4001dad296b6b4a52d6d694b67dad296b6b4a52d potato2\n") && !strings.Contains(res, " UNSUPPORTED potato2\n") && !strings.Contains(res, " potato2\n") { t.Errorf("potato2 missing: %q", res) } // QuickXorHash Sum with Base64 Encoded buf.Reset() err = operations.HashListerBase64(context.Background(), ht, r.Fremote, &buf) require.NoError(t, err) res = buf.String() if !strings.Contains(res, "LQAAAAAAAAAAAAAAAQAAAAAAAAA= empty space\n") && !strings.Contains(res, " UNSUPPORTED empty space\n") && !strings.Contains(res, " empty space\n") { t.Errorf("empty space missing: %q", res) } if !strings.Contains(res, "QAHa0pa2tKUtbWlLZ9rSlra0pS0= potato2\n") && 
!strings.Contains(res, " UNSUPPORTED potato2\n") && !strings.Contains(res, " potato2\n") { t.Errorf("potato2 missing: %q", res) } } func TestSuffixName(t *testing.T) { origSuffix, origKeepExt := fs.Config.Suffix, fs.Config.SuffixKeepExtension defer func() { fs.Config.Suffix, fs.Config.SuffixKeepExtension = origSuffix, origKeepExt }() for _, test := range []struct { remote string suffix string keepExt bool want string }{ {"test.txt", "", false, "test.txt"}, {"test.txt", "", true, "test.txt"}, {"test.txt", "-suffix", false, "test.txt-suffix"}, {"test.txt", "-suffix", true, "test-suffix.txt"}, {"test.txt.csv", "-suffix", false, "test.txt.csv-suffix"}, {"test.txt.csv", "-suffix", true, "test.txt-suffix.csv"}, {"test", "-suffix", false, "test-suffix"}, {"test", "-suffix", true, "test-suffix"}, } { fs.Config.Suffix = test.suffix fs.Config.SuffixKeepExtension = test.keepExt got := operations.SuffixName(test.remote) assert.Equal(t, test.want, got, fmt.Sprintf("%+v", test)) } } func TestCount(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteBoth(context.Background(), "potato2", "------------------------------------------------------------", t1) file2 := r.WriteBoth(context.Background(), "empty space", "-", t2) file3 := r.WriteBoth(context.Background(), "sub dir/potato3", "hello", t2) fstest.CheckItems(t, r.Fremote, file1, file2, file3) // Check the MaxDepth too fs.Config.MaxDepth = 1 defer func() { fs.Config.MaxDepth = -1 }() objects, size, err := operations.Count(context.Background(), r.Fremote) require.NoError(t, err) assert.Equal(t, int64(2), objects) assert.Equal(t, int64(61), size) } func TestDelete(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteObject(context.Background(), "small", "1234567890", t2) // 10 bytes file2 := r.WriteObject(context.Background(), "medium", "------------------------------------------------------------", t1) // 60 bytes file3 := r.WriteObject(context.Background(), "large", "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", t1) // 100 bytes fstest.CheckItems(t, r.Fremote, file1, file2, file3) filter.Active.Opt.MaxSize = 60 defer func() { filter.Active.Opt.MaxSize = -1 }() err := operations.Delete(context.Background(), r.Fremote) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file3) } func TestRetry(t *testing.T) { var i int var err error fn := func() error { i-- if i <= 0 { return nil } return err } i, err = 3, io.EOF assert.Equal(t, nil, operations.Retry(nil, 5, fn)) assert.Equal(t, 0, i) i, err = 10, io.EOF assert.Equal(t, io.EOF, operations.Retry(nil, 5, fn)) assert.Equal(t, 5, i) i, err = 10, fs.ErrorObjectNotFound assert.Equal(t, fs.ErrorObjectNotFound, operations.Retry(nil, 5, fn)) assert.Equal(t, 9, i) } func TestCat(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteBoth(context.Background(), "file1", "ABCDEFGHIJ", t1) file2 := r.WriteBoth(context.Background(), "file2", "012345678", t2) fstest.CheckItems(t, r.Fremote, file1, file2) for _, test := range []struct { offset int64 count int64 a string b string }{ {0, -1, "ABCDEFGHIJ", "012345678"}, {0, 5, "ABCDE", "01234"}, {-3, -1, "HIJ", "678"}, {1, 3, "BCD", "123"}, } { var buf bytes.Buffer err := operations.Cat(context.Background(), r.Fremote, &buf, test.offset, test.count) require.NoError(t, err) res := buf.String() if res != test.a+test.b && res != test.b+test.a { t.Errorf("Incorrect output from Cat(%d,%d): %q", test.offset, test.count, res) } } } func TestPurge(t *testing.T) { r 
:= fstest.NewRunIndividual(t) // make new container (azureblob has delayed mkdir after rmdir) defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) // Make some files and dirs r.ForceMkdir(context.Background(), r.Fremote) file1 := r.WriteObject(context.Background(), "A1/B1/C1/one", "aaa", t1) //..and dirs we expect to delete require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A2")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1/B2")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1/B2/C2")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1/B1/C3")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A3")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A3/B3")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A3/B3/C4")) //..and one more file at the end file2 := r.WriteObject(context.Background(), "A1/two", "bbb", t2) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file1, file2, }, []string{ "A1", "A1/B1", "A1/B1/C1", "A2", "A1/B2", "A1/B2/C2", "A1/B1/C3", "A3", "A3/B3", "A3/B3/C4", }, fs.GetModifyWindow(r.Fremote), ) require.NoError(t, operations.Purge(context.Background(), r.Fremote, "A1/B1")) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file2, }, []string{ "A1", "A2", "A1/B2", "A1/B2/C2", "A3", "A3/B3", "A3/B3/C4", }, fs.GetModifyWindow(r.Fremote), ) require.NoError(t, operations.Purge(context.Background(), r.Fremote, "")) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{}, []string{}, fs.GetModifyWindow(r.Fremote), ) } func TestRmdirsNoLeaveRoot(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) // Make some files and dirs we expect to keep r.ForceMkdir(context.Background(), r.Fremote) file1 := r.WriteObject(context.Background(), "A1/B1/C1/one", "aaa", t1) //..and dirs we expect to delete require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A2")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1/B2")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1/B2/C2")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1/B1/C3")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A3")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A3/B3")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A3/B3/C4")) //..and one more file at the end file2 := r.WriteObject(context.Background(), "A1/two", "bbb", t2) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file1, file2, }, []string{ "A1", "A1/B1", "A1/B1/C1", "A2", "A1/B2", "A1/B2/C2", "A1/B1/C3", "A3", "A3/B3", "A3/B3/C4", }, fs.GetModifyWindow(r.Fremote), ) require.NoError(t, operations.Rmdirs(context.Background(), r.Fremote, "A3/B3/C4", false)) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file1, file2, }, []string{ "A1", "A1/B1", "A1/B1/C1", "A2", "A1/B2", "A1/B2/C2", "A1/B1/C3", "A3", "A3/B3", }, fs.GetModifyWindow(r.Fremote), ) require.NoError(t, operations.Rmdirs(context.Background(), r.Fremote, "", false)) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file1, file2, }, []string{ "A1", "A1/B1", "A1/B1/C1", }, fs.GetModifyWindow(r.Fremote), ) } func TestRmdirsLeaveRoot(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) r.ForceMkdir(context.Background(), r.Fremote) 
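// Make an empty A1/B1/C1 chain; Rmdirs with leaveRoot=true should
// remove C1 and B1 but keep the root A1 itself.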
require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1/B1")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1/B1/C1")) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{}, []string{ "A1", "A1/B1", "A1/B1/C1", }, fs.GetModifyWindow(r.Fremote), ) require.NoError(t, operations.Rmdirs(context.Background(), r.Fremote, "A1", true)) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{}, []string{ "A1", }, fs.GetModifyWindow(r.Fremote), ) } func TestCopyURL(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() contents := "file contents\n" file1 := r.WriteFile("file1", contents, t1) file2 := r.WriteFile("file2", contents, t1) r.Mkdir(context.Background(), r.Fremote) fstest.CheckItems(t, r.Fremote) // check when reading from regular HTTP server status := 0 handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { if status != 0 { http.Error(w, "an error occurred", status) } _, err := w.Write([]byte(contents)) assert.NoError(t, err) }) ts := httptest.NewServer(handler) defer ts.Close() o, err := operations.CopyURL(context.Background(), r.Fremote, "file1", ts.URL, false, false) require.NoError(t, err) assert.Equal(t, int64(len(contents)), o.Size()) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{file1}, nil, fs.ModTimeNotSupported) // Check file clobbering o, err = operations.CopyURL(context.Background(), r.Fremote, "file1", ts.URL, false, true) require.Error(t, err) // Check auto file naming status = 0 urlFileName := "filename.txt" o, err = operations.CopyURL(context.Background(), r.Fremote, "", ts.URL+"/"+urlFileName, true, false) require.NoError(t, err) assert.Equal(t, int64(len(contents)), o.Size()) assert.Equal(t, urlFileName, o.Remote()) // Check auto file naming when url without file name o, err = operations.CopyURL(context.Background(), r.Fremote, "file1", ts.URL, true, false) require.Error(t, err) // Check an error is returned for a 404 status = http.StatusNotFound o, err = operations.CopyURL(context.Background(), r.Fremote, "file1", ts.URL, false, false) require.Error(t, err) assert.Contains(t, err.Error(), "Not Found") assert.Nil(t, o) status = 0 // check when reading from unverified HTTPS server fs.Config.InsecureSkipVerify = true fshttp.ResetTransport() defer func() { fs.Config.InsecureSkipVerify = false fshttp.ResetTransport() }() tss := httptest.NewTLSServer(handler) defer tss.Close() o, err = operations.CopyURL(context.Background(), r.Fremote, "file2", tss.URL, false, false) require.NoError(t, err) assert.Equal(t, int64(len(contents)), o.Size()) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{file1, file2, fstest.NewItem(urlFileName, contents, t1)}, nil, fs.ModTimeNotSupported) } func TestCopyURLToWriter(t *testing.T) { contents := "file contents\n" // check when reading from regular HTTP server status := 0 handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { if status != 0 { http.Error(w, "an error occurred", status) return } _, err := w.Write([]byte(contents)) assert.NoError(t, err) }) ts := httptest.NewServer(handler) defer ts.Close() // test normal fetch var buf bytes.Buffer err := operations.CopyURLToWriter(context.Background(), ts.URL, &buf) require.NoError(t, err) assert.Equal(t, contents, buf.String()) // test fetch with error status = http.StatusNotFound buf.Reset() err = operations.CopyURLToWriter(context.Background(), ts.URL, &buf) require.Error(t, err) assert.Contains(t, 
err.Error(), "Not Found") assert.Equal(t, 0, len(buf.String())) } func TestMoveFile(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("file1", "file1 contents", t1) fstest.CheckItems(t, r.Flocal, file1) file2 := file1 file2.Path = "sub/file2" err := operations.MoveFile(context.Background(), r.Fremote, r.Flocal, file2.Path, file1.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal) fstest.CheckItems(t, r.Fremote, file2) r.WriteFile("file1", "file1 contents", t1) fstest.CheckItems(t, r.Flocal, file1) err = operations.MoveFile(context.Background(), r.Fremote, r.Flocal, file2.Path, file1.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal) fstest.CheckItems(t, r.Fremote, file2) err = operations.MoveFile(context.Background(), r.Fremote, r.Fremote, file2.Path, file2.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal) fstest.CheckItems(t, r.Fremote, file2) } func TestCaseInsensitiveMoveFile(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() if !r.Fremote.Features().CaseInsensitive { return } file1 := r.WriteFile("file1", "file1 contents", t1) fstest.CheckItems(t, r.Flocal, file1) file2 := file1 file2.Path = "sub/file2" err := operations.MoveFile(context.Background(), r.Fremote, r.Flocal, file2.Path, file1.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal) fstest.CheckItems(t, r.Fremote, file2) r.WriteFile("file1", "file1 contents", t1) fstest.CheckItems(t, r.Flocal, file1) err = operations.MoveFile(context.Background(), r.Fremote, r.Flocal, file2.Path, file1.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal) fstest.CheckItems(t, r.Fremote, file2) file2Capitalized := file2 file2Capitalized.Path = "sub/File2" err = operations.MoveFile(context.Background(), r.Fremote, r.Fremote, file2Capitalized.Path, file2.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal) fstest.CheckItems(t, r.Fremote, file2Capitalized) } func TestMoveFileBackupDir(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() if !operations.CanServerSideMove(r.Fremote) { t.Skip("Skipping test as remote does not support server side move or copy") } oldBackupDir := fs.Config.BackupDir fs.Config.BackupDir = r.FremoteName + "/backup" defer func() { fs.Config.BackupDir = oldBackupDir }() file1 := r.WriteFile("dst/file1", "file1 contents", t1) fstest.CheckItems(t, r.Flocal, file1) file1old := r.WriteObject(context.Background(), "dst/file1", "file1 contents old", t1) fstest.CheckItems(t, r.Fremote, file1old) err := operations.MoveFile(context.Background(), r.Fremote, r.Flocal, file1.Path, file1.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal) file1old.Path = "backup/dst/file1" fstest.CheckItems(t, r.Fremote, file1old, file1) } func TestCopyFile(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("file1", "file1 contents", t1) fstest.CheckItems(t, r.Flocal, file1) file2 := file1 file2.Path = "sub/file2" err := operations.CopyFile(context.Background(), r.Fremote, r.Flocal, file2.Path, file1.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file2) err = operations.CopyFile(context.Background(), r.Fremote, r.Flocal, file2.Path, file1.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file2) err = operations.CopyFile(context.Background(), r.Fremote, r.Fremote, file2.Path, file2.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file2) } func TestCopyFileBackupDir(t *testing.T) 
{ r := fstest.NewRun(t) defer r.Finalise() if !operations.CanServerSideMove(r.Fremote) { t.Skip("Skipping test as remote does not support server side move or copy") } oldBackupDir := fs.Config.BackupDir fs.Config.BackupDir = r.FremoteName + "/backup" defer func() { fs.Config.BackupDir = oldBackupDir }() file1 := r.WriteFile("dst/file1", "file1 contents", t1) fstest.CheckItems(t, r.Flocal, file1) file1old := r.WriteObject(context.Background(), "dst/file1", "file1 contents old", t1) fstest.CheckItems(t, r.Fremote, file1old) err := operations.CopyFile(context.Background(), r.Fremote, r.Flocal, file1.Path, file1.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) file1old.Path = "backup/dst/file1" fstest.CheckItems(t, r.Fremote, file1old, file1) } // Test with CompareDest set func TestCopyFileCompareDest(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.CompareDest = r.FremoteName + "/CompareDest" defer func() { fs.Config.CompareDest = "" }() fdst, err := fs.NewFs(r.FremoteName + "/dst") require.NoError(t, err) // check empty dest, empty compare file1 := r.WriteFile("one", "one", t1) fstest.CheckItems(t, r.Flocal, file1) err = operations.CopyFile(context.Background(), fdst, r.Flocal, file1.Path, file1.Path) require.NoError(t, err) file1dst := file1 file1dst.Path = "dst/one" fstest.CheckItems(t, r.Fremote, file1dst) // check old dest, empty compare file1b := r.WriteFile("one", "onet2", t2) fstest.CheckItems(t, r.Fremote, file1dst) fstest.CheckItems(t, r.Flocal, file1b) err = operations.CopyFile(context.Background(), fdst, r.Flocal, file1b.Path, file1b.Path) require.NoError(t, err) file1bdst := file1b file1bdst.Path = "dst/one" fstest.CheckItems(t, r.Fremote, file1bdst) // check old dest, new compare file3 := r.WriteObject(context.Background(), "dst/one", "one", t1) file2 := r.WriteObject(context.Background(), "CompareDest/one", "onet2", t2) file1c := r.WriteFile("one", "onet2", t2) fstest.CheckItems(t, r.Fremote, file2, file3) fstest.CheckItems(t, r.Flocal, file1c) err = operations.CopyFile(context.Background(), fdst, r.Flocal, file1c.Path, file1c.Path) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file2, file3) // check empty dest, new compare file4 := r.WriteObject(context.Background(), "CompareDest/two", "two", t2) file5 := r.WriteFile("two", "two", t2) fstest.CheckItems(t, r.Fremote, file2, file3, file4) fstest.CheckItems(t, r.Flocal, file1c, file5) err = operations.CopyFile(context.Background(), fdst, r.Flocal, file5.Path, file5.Path) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file2, file3, file4) // check new dest, new compare err = operations.CopyFile(context.Background(), fdst, r.Flocal, file5.Path, file5.Path) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file2, file3, file4) // check empty dest, old compare file5b := r.WriteFile("two", "twot3", t3) fstest.CheckItems(t, r.Fremote, file2, file3, file4) fstest.CheckItems(t, r.Flocal, file1c, file5b) err = operations.CopyFile(context.Background(), fdst, r.Flocal, file5b.Path, file5b.Path) require.NoError(t, err) file5bdst := file5b file5bdst.Path = "dst/two" fstest.CheckItems(t, r.Fremote, file2, file3, file4, file5bdst) } // Test with CopyDest set func TestCopyFileCopyDest(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() if r.Fremote.Features().Copy == nil { t.Skip("Skipping test as remote does not support server side copy") } fs.Config.CopyDest = r.FremoteName + "/CopyDest" defer func() { fs.Config.CopyDest = "" }() fdst, err := fs.NewFs(r.FremoteName + "/dst") 
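// Note: unlike CompareDest (which only suppresses the upload when a match is found), a file found under the CopyDest path is server-side copied into the destination, so the destination still ends up with the file. The cases below exercise both the copy and the skip behaviour. 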
require.NoError(t, err) // check empty dest, empty copy file1 := r.WriteFile("one", "one", t1) fstest.CheckItems(t, r.Flocal, file1) err = operations.CopyFile(context.Background(), fdst, r.Flocal, file1.Path, file1.Path) require.NoError(t, err) file1dst := file1 file1dst.Path = "dst/one" fstest.CheckItems(t, r.Fremote, file1dst) // check old dest, empty copy file1b := r.WriteFile("one", "onet2", t2) fstest.CheckItems(t, r.Fremote, file1dst) fstest.CheckItems(t, r.Flocal, file1b) err = operations.CopyFile(context.Background(), fdst, r.Flocal, file1b.Path, file1b.Path) require.NoError(t, err) file1bdst := file1b file1bdst.Path = "dst/one" fstest.CheckItems(t, r.Fremote, file1bdst) // check old dest, new copy, backup-dir fs.Config.BackupDir = r.FremoteName + "/BackupDir" file3 := r.WriteObject(context.Background(), "dst/one", "one", t1) file2 := r.WriteObject(context.Background(), "CopyDest/one", "onet2", t2) file1c := r.WriteFile("one", "onet2", t2) fstest.CheckItems(t, r.Fremote, file2, file3) fstest.CheckItems(t, r.Flocal, file1c) err = operations.CopyFile(context.Background(), fdst, r.Flocal, file1c.Path, file1c.Path) require.NoError(t, err) file2dst := file2 file2dst.Path = "dst/one" file3.Path = "BackupDir/one" fstest.CheckItems(t, r.Fremote, file2, file2dst, file3) fs.Config.BackupDir = "" // check empty dest, new copy file4 := r.WriteObject(context.Background(), "CopyDest/two", "two", t2) file5 := r.WriteFile("two", "two", t2) fstest.CheckItems(t, r.Fremote, file2, file2dst, file3, file4) fstest.CheckItems(t, r.Flocal, file1c, file5) err = operations.CopyFile(context.Background(), fdst, r.Flocal, file5.Path, file5.Path) require.NoError(t, err) file4dst := file4 file4dst.Path = "dst/two" fstest.CheckItems(t, r.Fremote, file2, file2dst, file3, file4, file4dst) // check new dest, new copy err = operations.CopyFile(context.Background(), fdst, r.Flocal, file5.Path, file5.Path) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file2, file2dst, file3, file4, file4dst) // check empty dest, old copy file6 := r.WriteObject(context.Background(), "CopyDest/three", "three", t2) file7 := r.WriteFile("three", "threet3", t3) fstest.CheckItems(t, r.Fremote, file2, file2dst, file3, file4, file4dst, file6) fstest.CheckItems(t, r.Flocal, file1c, file5, file7) err = operations.CopyFile(context.Background(), fdst, r.Flocal, file7.Path, file7.Path) require.NoError(t, err) file7dst := file7 file7dst.Path = "dst/three" fstest.CheckItems(t, r.Fremote, file2, file2dst, file3, file4, file4dst, file6, file7dst) } // testFsInfo is for unit testing fs.Info type testFsInfo struct { name string root string stringVal string precision time.Duration hashes hash.Set features fs.Features } // Name of the remote (as passed into NewFs) func (i *testFsInfo) Name() string { return i.name } // Root of the remote (as passed into NewFs) func (i *testFsInfo) Root() string { return i.root } // String returns a description of the FS func (i *testFsInfo) String() string { return i.stringVal } // Precision of the ModTimes in this Fs func (i *testFsInfo) Precision() time.Duration { return i.precision } // Returns the supported hash types of the filesystem func (i *testFsInfo) Hashes() hash.Set { return i.hashes } // Returns the optional features of this Fs func (i *testFsInfo) Features() *fs.Features { return &i.features } func TestSameConfig(t *testing.T) { a := &testFsInfo{name: "name", root: "root"} for _, test := range []struct { name string root string expected bool }{ {"name", "root", true}, {"name", "rooty", 
true}, {"namey", "root", false}, {"namey", "roott", false}, } { b := &testFsInfo{name: test.name, root: test.root} actual := operations.SameConfig(a, b) assert.Equal(t, test.expected, actual) actual = operations.SameConfig(b, a) assert.Equal(t, test.expected, actual) } } func TestSame(t *testing.T) { a := &testFsInfo{name: "name", root: "root"} for _, test := range []struct { name string root string expected bool }{ {"name", "root", true}, {"name", "rooty", false}, {"namey", "root", false}, {"namey", "roott", false}, } { b := &testFsInfo{name: test.name, root: test.root} actual := operations.Same(a, b) assert.Equal(t, test.expected, actual) actual = operations.Same(b, a) assert.Equal(t, test.expected, actual) } } func TestOverlapping(t *testing.T) { a := &testFsInfo{name: "name", root: "root"} slash := string(os.PathSeparator) // native path separator for _, test := range []struct { name string root string expected bool }{ {"name", "root", true}, {"namey", "root", false}, {"name", "rooty", false}, {"namey", "rooty", false}, {"name", "roo", false}, {"name", "root/toot", true}, {"name", "root/toot/", true}, {"name", "root" + slash + "toot", true}, {"name", "root" + slash + "toot" + slash, true}, {"name", "", true}, {"name", "/", true}, } { b := &testFsInfo{name: test.name, root: test.root} what := fmt.Sprintf("(%q,%q) vs (%q,%q)", a.name, a.root, b.name, b.root) actual := operations.Overlapping(a, b) assert.Equal(t, test.expected, actual, what) actual = operations.Overlapping(b, a) assert.Equal(t, test.expected, actual, what) } } func TestListFormat(t *testing.T) { item0 := &operations.ListJSONItem{ Path: "a", Name: "a", Encrypted: "encryptedFileName", Size: 1, MimeType: "application/octet-stream", ModTime: operations.Timestamp{ When: t1, Format: "2006-01-02T15:04:05.000000000Z07:00"}, IsDir: false, Hashes: map[string]string{ "MD5": "0cc175b9c0f1b6a831c399e269772661", "SHA-1": "86f7e437faa5a7fce15d1ddcb9eaeaea377667b8", "DropboxHash": "bf5d3affb73efd2ec6c36ad3112dd933efed63c4e1cbffcfa88e2759c144f2d8", "QuickXorHash": "6100000000000000000000000100000000000000"}, ID: "fileID", OrigID: "fileOrigID", } item1 := &operations.ListJSONItem{ Path: "subdir", Name: "subdir", Encrypted: "encryptedDirName", Size: -1, MimeType: "inode/directory", ModTime: operations.Timestamp{ When: t2, Format: "2006-01-02T15:04:05.000000000Z07:00"}, IsDir: true, Hashes: map[string]string(nil), ID: "dirID", OrigID: "dirOrigID", } var list operations.ListFormat list.AddPath() list.SetDirSlash(false) assert.Equal(t, "subdir", list.Format(item1)) list.SetDirSlash(true) assert.Equal(t, "subdir/", list.Format(item1)) list.SetOutput(nil) assert.Equal(t, "", list.Format(item1)) list.AppendOutput(func(item *operations.ListJSONItem) string { return "a" }) list.AppendOutput(func(item *operations.ListJSONItem) string { return "b" }) assert.Equal(t, "ab", list.Format(item1)) list.SetSeparator(":::") assert.Equal(t, "a:::b", list.Format(item1)) list.SetOutput(nil) list.AddModTime() assert.Equal(t, t1.Local().Format("2006-01-02 15:04:05"), list.Format(item0)) list.SetOutput(nil) list.SetSeparator("|") list.AddID() list.AddOrigID() assert.Equal(t, "fileID|fileOrigID", list.Format(item0)) assert.Equal(t, "dirID|dirOrigID", list.Format(item1)) list.SetOutput(nil) list.AddMimeType() assert.Contains(t, list.Format(item0), "/") assert.Equal(t, "inode/directory", list.Format(item1)) list.SetOutput(nil) list.AddPath() list.SetAbsolute(true) assert.Equal(t, "/a", list.Format(item0)) list.SetAbsolute(false) assert.Equal(t, "a", 
list.Format(item0)) list.SetOutput(nil) list.AddSize() assert.Equal(t, "1", list.Format(item0)) list.AddPath() list.AddModTime() list.SetDirSlash(true) list.SetSeparator("__SEP__") assert.Equal(t, "1__SEP__a__SEP__"+t1.Local().Format("2006-01-02 15:04:05"), list.Format(item0)) assert.Equal(t, "-1__SEP__subdir/__SEP__"+t2.Local().Format("2006-01-02 15:04:05"), list.Format(item1)) for _, test := range []struct { ht hash.Type want string }{ {hash.MD5, "0cc175b9c0f1b6a831c399e269772661"}, {hash.SHA1, "86f7e437faa5a7fce15d1ddcb9eaeaea377667b8"}, } { list.SetOutput(nil) list.AddHash(test.ht) assert.Equal(t, test.want, list.Format(item0)) } list.SetOutput(nil) list.SetSeparator("|") list.SetCSV(true) list.AddSize() list.AddPath() list.AddModTime() list.SetDirSlash(true) assert.Equal(t, "1|a|"+t1.Local().Format("2006-01-02 15:04:05"), list.Format(item0)) assert.Equal(t, "-1|subdir/|"+t2.Local().Format("2006-01-02 15:04:05"), list.Format(item1)) list.SetOutput(nil) list.SetSeparator("|") list.AddPath() list.AddEncrypted() assert.Equal(t, "a|encryptedFileName", list.Format(item0)) assert.Equal(t, "subdir/|encryptedDirName/", list.Format(item1)) } func TestDirMove(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) // Make some files and dirs r.ForceMkdir(context.Background(), r.Fremote) files := []fstest.Item{ r.WriteObject(context.Background(), "A1/one", "one", t1), r.WriteObject(context.Background(), "A1/two", "two", t2), r.WriteObject(context.Background(), "A1/B1/three", "three", t3), r.WriteObject(context.Background(), "A1/B1/C1/four", "four", t1), r.WriteObject(context.Background(), "A1/B1/C2/five", "five", t2), } require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1/B2")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "A1/B1/C3")) fstest.CheckListingWithPrecision( t, r.Fremote, files, []string{ "A1", "A1/B1", "A1/B2", "A1/B1/C1", "A1/B1/C2", "A1/B1/C3", }, fs.GetModifyWindow(r.Fremote), ) require.NoError(t, operations.DirMove(context.Background(), r.Fremote, "A1", "A2")) for i := range files { files[i].Path = strings.Replace(files[i].Path, "A1/", "A2/", -1) } fstest.CheckListingWithPrecision( t, r.Fremote, files, []string{ "A2", "A2/B1", "A2/B2", "A2/B1/C1", "A2/B1/C2", "A2/B1/C3", }, fs.GetModifyWindow(r.Fremote), ) // Disable DirMove features := r.Fremote.Features() oldDirMove := features.DirMove features.DirMove = nil defer func() { features.DirMove = oldDirMove }() require.NoError(t, operations.DirMove(context.Background(), r.Fremote, "A2", "A3")) for i := range files { files[i].Path = strings.Replace(files[i].Path, "A2/", "A3/", -1) } fstest.CheckListingWithPrecision( t, r.Fremote, files, []string{ "A3", "A3/B1", "A3/B2", "A3/B1/C1", "A3/B1/C2", "A3/B1/C3", }, fs.GetModifyWindow(r.Fremote), ) } func TestGetFsInfo(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() f := r.Fremote info := operations.GetFsInfo(f) assert.Equal(t, f.Name(), info.Name) assert.Equal(t, f.Root(), info.Root) assert.Equal(t, f.String(), info.String) assert.Equal(t, f.Precision(), info.Precision) hashSet := hash.NewHashSet() for _, hashName := range info.Hashes { var ht hash.Type require.NoError(t, ht.Set(hashName)) hashSet.Add(ht) } assert.Equal(t, f.Hashes(), hashSet) assert.Equal(t, f.Features().Enabled(), info.Features) } func TestRcat(t *testing.T) { check := func(withChecksum, ignoreChecksum bool) { checksumBefore, ignoreChecksumBefore := fs.Config.CheckSum, fs.Config.IgnoreChecksum fs.Config.CheckSum, 
fs.Config.IgnoreChecksum = withChecksum, ignoreChecksum defer func() { fs.Config.CheckSum, fs.Config.IgnoreChecksum = checksumBefore, ignoreChecksumBefore }() var prefix string if withChecksum { prefix = "with_checksum_" } else { prefix = "no_checksum_" } if ignoreChecksum { prefix = "ignore_checksum_" } r := fstest.NewRun(t) defer r.Finalise() if *fstest.SizeLimit > 0 && int64(fs.Config.StreamingUploadCutoff) > *fstest.SizeLimit { savedCutoff := fs.Config.StreamingUploadCutoff defer func() { fs.Config.StreamingUploadCutoff = savedCutoff }() fs.Config.StreamingUploadCutoff = fs.SizeSuffix(*fstest.SizeLimit) t.Logf("Adjust StreamingUploadCutoff to size limit %s (was %s)", fs.Config.StreamingUploadCutoff, savedCutoff) } fstest.CheckListing(t, r.Fremote, []fstest.Item{}) data1 := "this is some really nice test data" path1 := prefix + "small_file_from_pipe" data2 := string(make([]byte, fs.Config.StreamingUploadCutoff+1)) path2 := prefix + "big_file_from_pipe" in := ioutil.NopCloser(strings.NewReader(data1)) _, err := operations.Rcat(context.Background(), r.Fremote, path1, in, t1) require.NoError(t, err) in = ioutil.NopCloser(strings.NewReader(data2)) _, err = operations.Rcat(context.Background(), r.Fremote, path2, in, t2) require.NoError(t, err) file1 := fstest.NewItem(path1, data1, t1) file2 := fstest.NewItem(path2, data2, t2) fstest.CheckItems(t, r.Fremote, file1, file2) } for i := 0; i < 4; i++ { withChecksum := (i & 1) != 0 ignoreChecksum := (i & 2) != 0 t.Run(fmt.Sprintf("withChecksum=%v,ignoreChecksum=%v", withChecksum, ignoreChecksum), func(t *testing.T) { check(withChecksum, ignoreChecksum) }) } } func TestRcatSize(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() const body = "------------------------------------------------------------" file1 := r.WriteFile("potato1", body, t1) file2 := r.WriteFile("potato2", body, t2) // Test with known length bodyReader := ioutil.NopCloser(strings.NewReader(body)) obj, err := operations.RcatSize(context.Background(), r.Fremote, file1.Path, bodyReader, int64(len(body)), file1.ModTime) require.NoError(t, err) assert.Equal(t, int64(len(body)), obj.Size()) assert.Equal(t, file1.Path, obj.Remote()) // Test with unknown length bodyReader = ioutil.NopCloser(strings.NewReader(body)) // reset Reader obj, err = operations.RcatSize(context.Background(), r.Fremote, file2.Path, bodyReader, -1, file2.ModTime) require.NoError(t, err) assert.Equal(t, int64(len(body)), obj.Size()) assert.Equal(t, file2.Path, obj.Remote()) // Check files exist fstest.CheckItems(t, r.Fremote, file1, file2) } func TestCopyFileMaxTransfer(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() old := fs.Config.MaxTransfer oldMode := fs.Config.CutoffMode defer func() { fs.Config.MaxTransfer = old fs.Config.CutoffMode = oldMode accounting.Stats(context.Background()).ResetCounters() }() ctx := context.Background() const sizeCutoff = 2048 file1 := r.WriteFile("TestCopyFileMaxTransfer/file1", "file1 contents", t1) file2 := r.WriteFile("TestCopyFileMaxTransfer/file2", "file2 contents"+random.String(sizeCutoff), t2) file3 := r.WriteFile("TestCopyFileMaxTransfer/file3", "file3 contents"+random.String(sizeCutoff), t2) file4 := r.WriteFile("TestCopyFileMaxTransfer/file4", "file4 contents"+random.String(sizeCutoff), t2) // Cutoff mode: Hard fs.Config.MaxTransfer = sizeCutoff fs.Config.CutoffMode = fs.CutoffModeHard // file1: Show a small file gets transferred OK accounting.Stats(ctx).ResetCounters() err := operations.CopyFile(ctx, r.Fremote, 
r.Flocal, file1.Path, file1.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1, file2, file3, file4) fstest.CheckItems(t, r.Fremote, file1) // file2: show a large file does not get transferred accounting.Stats(ctx).ResetCounters() err = operations.CopyFile(ctx, r.Fremote, r.Flocal, file2.Path, file2.Path) require.NotNil(t, err, "Did not get expected max transfer limit error") assert.Contains(t, err.Error(), "Max transfer limit reached") assert.True(t, fserrors.IsFatalError(err)) fstest.CheckItems(t, r.Flocal, file1, file2, file3, file4) fstest.CheckItems(t, r.Fremote, file1) // Cutoff mode: Cautious fs.Config.CutoffMode = fs.CutoffModeCautious // file3: show a large file does not get transferred accounting.Stats(ctx).ResetCounters() err = operations.CopyFile(ctx, r.Fremote, r.Flocal, file3.Path, file3.Path) require.NotNil(t, err) assert.Contains(t, err.Error(), "Max transfer limit reached") assert.True(t, fserrors.IsFatalError(err)) fstest.CheckItems(t, r.Flocal, file1, file2, file3, file4) fstest.CheckItems(t, r.Fremote, file1) if strings.HasPrefix(r.Fremote.Name(), "TestChunker") { t.Log("skipping remainder of test for chunker as it involves multiple transfers") return } // Cutoff mode: Soft fs.Config.CutoffMode = fs.CutoffModeSoft // file4: show a large file does get transferred this time accounting.Stats(ctx).ResetCounters() err = operations.CopyFile(ctx, r.Fremote, r.Flocal, file4.Path, file4.Path) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1, file2, file3, file4) fstest.CheckItems(t, r.Fremote, file1, file4) } rclone-1.53.3/fs/operations/rc.go000066400000000000000000000305731375552240400166310ustar00rootroot00000000000000package operations import ( "context" "io" "mime" "mime/multipart" "net/http" "path" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/rc" ) func init() { rc.Add(rc.Call{ Path: "operations/list", AuthRequired: true, Fn: rcList, Title: "List the given remote and path in JSON format", Help: `This takes the following parameters - fs - a remote name string eg "drive:" - remote - a path within that remote eg "dir" - opt - a dictionary of options to control the listing (optional) - recurse - If set recurse directories - noModTime - If set don't return modification times - showEncrypted - If set show encrypted names - showOrigIDs - If set show the IDs for each item if known - showHash - If set return a dictionary of hashes The result is - list - This is an array of objects as described in the lsjson command See the [lsjson command](/commands/rclone_lsjson/) for more information on the above and examples. `, }) } // List the directory func rcList(ctx context.Context, in rc.Params) (out rc.Params, err error) { f, remote, err := rc.GetFsAndRemote(in) if err != nil { return nil, err } var opt ListJSONOpt err = in.GetStruct("opt", &opt) if rc.NotErrParamNotFound(err) { return nil, err } var list = []*ListJSONItem{} err = ListJSON(ctx, f, remote, &opt, func(item *ListJSONItem) error { list = append(list, item) return nil }) if err != nil { return nil, err } out = make(rc.Params) out["list"] = list return out, nil } func init() { rc.Add(rc.Call{ Path: "operations/about", AuthRequired: true, Fn: rcAbout, Title: "Return the space used on the remote", Help: `This takes the following parameters - fs - a remote name string eg "drive:" The result is as returned from rclone about --json See the [about command](/commands/rclone_about/) for more information on the above. 
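For example, with a configured remote named "remote:" (a placeholder name): rclone rc operations/about fs=remote: 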
`, }) } // About the remote func rcAbout(ctx context.Context, in rc.Params) (out rc.Params, err error) { f, err := rc.GetFs(in) if err != nil { return nil, err } doAbout := f.Features().About if doAbout == nil { return nil, errors.Errorf("%v doesn't support about", f) } u, err := doAbout(ctx) if err != nil { return nil, errors.Wrap(err, "about call failed") } err = rc.Reshape(&out, u) if err != nil { return nil, errors.Wrap(err, "about Reshape failed") } return out, nil } func init() { for _, copy := range []bool{false, true} { copy := copy name := "Move" if copy { name = "Copy" } rc.Add(rc.Call{ Path: "operations/" + strings.ToLower(name) + "file", AuthRequired: true, Fn: func(ctx context.Context, in rc.Params) (rc.Params, error) { return rcMoveOrCopyFile(ctx, in, copy) }, Title: name + " a file from source remote to destination remote", Help: `This takes the following parameters - srcFs - a remote name string eg "drive:" for the source - srcRemote - a path within that remote eg "file.txt" for the source - dstFs - a remote name string eg "drive2:" for the destination - dstRemote - a path within that remote eg "file2.txt" for the destination `, }) } } // Move or copy a file func rcMoveOrCopyFile(ctx context.Context, in rc.Params, cp bool) (out rc.Params, err error) { srcFs, srcRemote, err := rc.GetFsAndRemoteNamed(in, "srcFs", "srcRemote") if err != nil { return nil, err } dstFs, dstRemote, err := rc.GetFsAndRemoteNamed(in, "dstFs", "dstRemote") if err != nil { return nil, err } return nil, moveOrCopyFile(ctx, dstFs, srcFs, dstRemote, srcRemote, cp) } func init() { for _, op := range []struct { name string title string help string noRemote bool needsRequest bool }{ {name: "mkdir", title: "Make a destination directory or container"}, {name: "rmdir", title: "Remove an empty directory or container"}, {name: "purge", title: "Remove a directory or container and all of its contents"}, {name: "rmdirs", title: "Remove all the empty directories in the path", help: "- leaveRoot - boolean, set to true to not delete the root\n"}, {name: "delete", title: "Remove files in the path", noRemote: true}, {name: "deletefile", title: "Remove the single file pointed to"}, {name: "copyurl", title: "Copy the URL to the object", help: "- url - string, URL to read from\n - autoFilename - boolean, set to true to retrieve destination file name from url\n - noClobber - boolean, set to true to not overwrite an existing file"}, {name: "uploadfile", title: "Upload file using multipart/form-data", help: "- each part in body represents a file to be uploaded", needsRequest: true}, {name: "cleanup", title: "Remove trashed files in the remote or path", noRemote: true}, } { op := op remote := "- remote - a path within that remote eg \"dir\"\n" if op.noRemote { remote = "" } rc.Add(rc.Call{ Path: "operations/" + op.name, AuthRequired: true, NeedsRequest: op.needsRequest, Fn: func(ctx context.Context, in rc.Params) (rc.Params, error) { return rcSingleCommand(ctx, in, op.name, op.noRemote) }, Title: op.title, Help: `This takes the following parameters - fs - a remote name string eg "drive:" ` + remote + op.help + ` See the [` + op.name + ` command](/commands/rclone_` + op.name + `/) for more information on the above. 
`, }) } } // Run a single command, eg Mkdir func rcSingleCommand(ctx context.Context, in rc.Params, name string, noRemote bool) (out rc.Params, err error) { var ( f fs.Fs remote string ) if noRemote { f, err = rc.GetFs(in) } else { f, remote, err = rc.GetFsAndRemote(in) } if err != nil { return nil, err } switch name { case "mkdir": return nil, Mkdir(ctx, f, remote) case "rmdir": return nil, Rmdir(ctx, f, remote) case "purge": return nil, Purge(ctx, f, remote) case "rmdirs": leaveRoot, err := in.GetBool("leaveRoot") if rc.NotErrParamNotFound(err) { return nil, err } return nil, Rmdirs(ctx, f, remote, leaveRoot) case "delete": return nil, Delete(ctx, f) case "deletefile": o, err := f.NewObject(ctx, remote) if err != nil { return nil, err } return nil, DeleteFile(ctx, o) case "copyurl": url, err := in.GetString("url") if err != nil { return nil, err } autoFilename, _ := in.GetBool("autoFilename") noClobber, _ := in.GetBool("noClobber") _, err = CopyURL(ctx, f, remote, url, autoFilename, noClobber) return nil, err case "uploadfile": request, err := in.GetHTTPRequest() if err != nil { return nil, err } contentType := request.Header.Get("Content-Type") mediaType, params, err := mime.ParseMediaType(contentType) if err != nil { return nil, err } if strings.HasPrefix(mediaType, "multipart/") { mr := multipart.NewReader(request.Body, params["boundary"]) for { p, err := mr.NextPart() if err == io.EOF { return nil, nil } if err != nil { return nil, err } if p.FileName() != "" { obj, err := Rcat(ctx, f, path.Join(remote, p.FileName()), p, time.Now()) if err != nil { return nil, err } fs.Debugf(obj, "Upload Succeeded") } } } return nil, nil case "cleanup": return nil, CleanUp(ctx, f) } panic("unknown rcSingleCommand type") } func init() { rc.Add(rc.Call{ Path: "operations/size", AuthRequired: true, Fn: rcSize, Title: "Count the number of bytes and files in remote", Help: `This takes the following parameters - fs - a remote name string eg "drive:path/to/dir" Returns - count - number of files - bytes - number of bytes in those files See the [size command](/commands/rclone_size/) for more information on the above. `, }) } // Size a directory func rcSize(ctx context.Context, in rc.Params) (out rc.Params, err error) { f, err := rc.GetFs(in) if err != nil { return nil, err } count, bytes, err := Count(ctx, f) if err != nil { return nil, err } out = make(rc.Params) out["count"] = count out["bytes"] = bytes return out, nil } func init() { rc.Add(rc.Call{ Path: "operations/publiclink", AuthRequired: true, Fn: rcPublicLink, Title: "Create or retrieve a public link to the given file or folder.", Help: `This takes the following parameters - fs - a remote name string eg "drive:" - remote - a path within that remote eg "dir" - unlink - boolean - if set removes the link rather than adding it (optional) - expire - string - the expiry time of the link eg "1d" (optional) Returns - url - URL of the resource See the [link command](/commands/rclone_link/) for more information on the above. 
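For example, with a configured remote named "remote:" and a placeholder path: rclone rc operations/publiclink fs=remote: remote=path/to/file 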
`, }) } // Make a public link func rcPublicLink(ctx context.Context, in rc.Params) (out rc.Params, err error) { f, remote, err := rc.GetFsAndRemote(in) if err != nil { return nil, err } unlink, _ := in.GetBool("unlink") expire, err := in.GetDuration("expire") if err != nil && !rc.IsErrParamNotFound(err) { return nil, err } url, err := PublicLink(ctx, f, remote, fs.Duration(expire), unlink) if err != nil { return nil, err } out = make(rc.Params) out["url"] = url return out, nil } func init() { rc.Add(rc.Call{ Path: "operations/fsinfo", Fn: rcFsInfo, Title: "Return information about the remote", Help: `This takes the following parameters - fs - a remote name string eg "drive:" This returns info about the remote passed in; ` + "```" + ` { // optional features and whether they are available or not "Features": { "About": true, "BucketBased": false, "CanHaveEmptyDirectories": true, "CaseInsensitive": false, "ChangeNotify": false, "CleanUp": false, "Copy": false, "DirCacheFlush": false, "DirMove": true, "DuplicateFiles": false, "GetTier": false, "ListR": false, "MergeDirs": false, "Move": true, "OpenWriterAt": true, "PublicLink": false, "Purge": true, "PutStream": true, "PutUnchecked": false, "ReadMimeType": false, "ServerSideAcrossConfigs": false, "SetTier": false, "SetWrapper": false, "UnWrap": false, "WrapFs": false, "WriteMimeType": false }, // Names of hashes available "Hashes": [ "MD5", "SHA-1", "DropboxHash", "QuickXorHash" ], "Name": "local", // Name as created "Precision": 1, // Precision of timestamps in ns "Root": "/", // Path as created "String": "Local file system at /" // how the remote will appear in logs } ` + "```" + ` This command does not have a command line equivalent so use this instead: rclone rc --loopback operations/fsinfo fs=remote: `, }) } // Fsinfo the remote func rcFsInfo(ctx context.Context, in rc.Params) (out rc.Params, err error) { f, err := rc.GetFs(in) if err != nil { return nil, err } info := GetFsInfo(f) err = rc.Reshape(&out, info) if err != nil { return nil, errors.Wrap(err, "fsinfo Reshape failed") } return out, nil } func init() { rc.Add(rc.Call{ Path: "backend/command", AuthRequired: true, Fn: rcBackend, Title: "Runs a backend command.", Help: `This takes the following parameters - command - a string with the command name - fs - a remote name string eg "drive:" - arg - a list of arguments for the backend command - opt - a map of string to string of options Returns - result - result from the backend command For example rclone rc backend/command command=noop fs=. -o echo=yes -o blue -a path1 -a path2 Returns ` + "```" + ` { "result": { "arg": [ "path1", "path2" ], "name": "noop", "opt": { "blue": "", "echo": "yes" } } } ` + "```" + ` Note that this is the direct equivalent of using this "backend" command: rclone backend noop . -o echo=yes -o blue path1 path2 Note that arguments must be preceded by the "-a" flag See the [backend](/commands/rclone_backend/) command for more information. 
`, }) } // Run a backend command func rcBackend(ctx context.Context, in rc.Params) (out rc.Params, err error) { f, err := rc.GetFs(in) if err != nil { return nil, err } doCommand := f.Features().Command if doCommand == nil { return nil, errors.Errorf("%v: doesn't support backend commands", f) } command, err := in.GetString("command") if err != nil { return nil, err } var opt = map[string]string{} err = in.GetStructMissingOK("opt", &opt) if err != nil { return nil, err } var arg = []string{} err = in.GetStructMissingOK("arg", &arg) if err != nil { return nil, err } result, err := doCommand(ctx, command, arg, opt) if err != nil { return nil, errors.Wrapf(err, "command %q failed", command) } out = make(rc.Params) out["result"] = result return out, nil } rclone-1.53.3/fs/operations/rc_test.go000066400000000000000000000375321375552240400176710ustar00rootroot00000000000000package operations_test import ( "context" "net/http" "net/http/httptest" "net/url" "os" "path" "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/lib/rest" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func rcNewRun(t *testing.T, method string) (*fstest.Run, *rc.Call) { if *fstest.RemoteName != "" { t.Skip("Skipping test on non local remote") } r := fstest.NewRun(t) call := rc.Calls.Get(method) assert.NotNil(t, call) cache.Put(r.LocalName, r.Flocal) cache.Put(r.FremoteName, r.Fremote) return r, call } // operations/about: Return the space used on the remote func TestRcAbout(t *testing.T) { r, call := rcNewRun(t, "operations/about") defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) // Will get an error if remote doesn't support About expectedErr := r.Fremote.Features().About == nil in := rc.Params{ "fs": r.FremoteName, } out, err := call.Fn(context.Background(), in) if expectedErr { assert.Error(t, err) return } require.NoError(t, err) // Can't really check the output much! 
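// About output differs between backends, so just check that Total is non-zero. 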
assert.NotEqual(t, int64(0), out["Total"]) } // operations/cleanup: Remove trashed files in the remote or path func TestRcCleanup(t *testing.T) { r, call := rcNewRun(t, "operations/cleanup") defer r.Finalise() in := rc.Params{ "fs": r.LocalName, } out, err := call.Fn(context.Background(), in) require.Error(t, err) assert.Equal(t, rc.Params(nil), out) assert.Contains(t, err.Error(), "doesn't support cleanup") } // operations/copyfile: Copy a file from source remote to destination remote func TestRcCopyfile(t *testing.T) { r, call := rcNewRun(t, "operations/copyfile") defer r.Finalise() file1 := r.WriteFile("file1", "file1 contents", t1) r.Mkdir(context.Background(), r.Fremote) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote) in := rc.Params{ "srcFs": r.LocalName, "srcRemote": "file1", "dstFs": r.FremoteName, "dstRemote": "file1-renamed", } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckItems(t, r.Flocal, file1) file1.Path = "file1-renamed" fstest.CheckItems(t, r.Fremote, file1) } // operations/copyurl: Copy the URL to the object func TestRcCopyurl(t *testing.T) { r, call := rcNewRun(t, "operations/copyurl") defer r.Finalise() contents := "file1 contents\n" file1 := r.WriteFile("file1", contents, t1) r.Mkdir(context.Background(), r.Fremote) fstest.CheckItems(t, r.Fremote) ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { _, err := w.Write([]byte(contents)) assert.NoError(t, err) })) defer ts.Close() in := rc.Params{ "fs": r.FremoteName, "remote": "file1", "url": ts.URL, "autoFilename": false, "noClobber": false, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) in = rc.Params{ "fs": r.FremoteName, "remote": "file1", "url": ts.URL, "autoFilename": false, "noClobber": true, } out, err = call.Fn(context.Background(), in) require.Error(t, err) assert.Equal(t, rc.Params(nil), out) urlFileName := "filename.txt" in = rc.Params{ "fs": r.FremoteName, "remote": "", "url": ts.URL + "/" + urlFileName, "autoFilename": true, "noClobber": false, } out, err = call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) in = rc.Params{ "fs": r.FremoteName, "remote": "", "url": ts.URL, "autoFilename": true, "noClobber": false, } out, err = call.Fn(context.Background(), in) require.Error(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{file1, fstest.NewItem(urlFileName, contents, t1)}, nil, fs.ModTimeNotSupported) } // operations/delete: Remove files in the path func TestRcDelete(t *testing.T) { r, call := rcNewRun(t, "operations/delete") defer r.Finalise() file1 := r.WriteObject(context.Background(), "small", "1234567890", t2) // 10 bytes file2 := r.WriteObject(context.Background(), "medium", "------------------------------------------------------------", t1) // 60 bytes file3 := r.WriteObject(context.Background(), "large", "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", t1) // 100 bytes fstest.CheckItems(t, r.Fremote, file1, file2, file3) in := rc.Params{ "fs": r.FremoteName, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckItems(t, r.Fremote) } // operations/deletefile: Remove the single file pointed to func TestRcDeletefile(t *testing.T) { r, call := rcNewRun(t, "operations/deletefile") defer 
r.Finalise() file1 := r.WriteObject(context.Background(), "small", "1234567890", t2) // 10 bytes file2 := r.WriteObject(context.Background(), "medium", "------------------------------------------------------------", t1) // 60 bytes fstest.CheckItems(t, r.Fremote, file1, file2) in := rc.Params{ "fs": r.FremoteName, "remote": "small", } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckItems(t, r.Fremote, file2) } // operations/list: List the given remote and path in JSON format. func TestRcList(t *testing.T) { r, call := rcNewRun(t, "operations/list") defer r.Finalise() file1 := r.WriteObject(context.Background(), "a", "a", t1) file2 := r.WriteObject(context.Background(), "subdir/b", "bb", t2) fstest.CheckItems(t, r.Fremote, file1, file2) in := rc.Params{ "fs": r.FremoteName, "remote": "", } out, err := call.Fn(context.Background(), in) require.NoError(t, err) list := out["list"].([]*operations.ListJSONItem) assert.Equal(t, 2, len(list)) checkFile1 := func(got *operations.ListJSONItem) { assert.WithinDuration(t, t1, got.ModTime.When, time.Second) assert.Equal(t, "a", got.Path) assert.Equal(t, "a", got.Name) assert.Equal(t, int64(1), got.Size) assert.Equal(t, "application/octet-stream", got.MimeType) assert.Equal(t, false, got.IsDir) } checkFile1(list[0]) checkSubdir := func(got *operations.ListJSONItem) { assert.Equal(t, "subdir", got.Path) assert.Equal(t, "subdir", got.Name) assert.Equal(t, int64(-1), got.Size) assert.Equal(t, "inode/directory", got.MimeType) assert.Equal(t, true, got.IsDir) } checkSubdir(list[1]) in = rc.Params{ "fs": r.FremoteName, "remote": "", "opt": rc.Params{ "recurse": true, }, } out, err = call.Fn(context.Background(), in) require.NoError(t, err) list = out["list"].([]*operations.ListJSONItem) assert.Equal(t, 3, len(list)) checkFile1(list[0]) checkSubdir(list[1]) checkFile2 := func(got *operations.ListJSONItem) { assert.WithinDuration(t, t2, got.ModTime.When, time.Second) assert.Equal(t, "subdir/b", got.Path) assert.Equal(t, "b", got.Name) assert.Equal(t, int64(2), got.Size) assert.Equal(t, "application/octet-stream", got.MimeType) assert.Equal(t, false, got.IsDir) } checkFile2(list[2]) } // operations/mkdir: Make a destination directory or container func TestRcMkdir(t *testing.T) { r, call := rcNewRun(t, "operations/mkdir") defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{}, []string{}, fs.GetModifyWindow(r.Fremote)) in := rc.Params{ "fs": r.FremoteName, "remote": "subdir", } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{}, []string{"subdir"}, fs.GetModifyWindow(r.Fremote)) } // operations/movefile: Move a file from source remote to destination remote func TestRcMovefile(t *testing.T) { r, call := rcNewRun(t, "operations/movefile") defer r.Finalise() file1 := r.WriteFile("file1", "file1 contents", t1) r.Mkdir(context.Background(), r.Fremote) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote) in := rc.Params{ "srcFs": r.LocalName, "srcRemote": "file1", "dstFs": r.FremoteName, "dstRemote": "file1-renamed", } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckItems(t, r.Flocal) file1.Path = "file1-renamed" fstest.CheckItems(t, r.Fremote, file1) } // operations/purge: Remove a directory or container and all of its contents 
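// Writes subdir/file1, purges "subdir" and checks both the file and the directory have gone. 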
func TestRcPurge(t *testing.T) { r, call := rcNewRun(t, "operations/purge") defer r.Finalise() file1 := r.WriteObject(context.Background(), "subdir/file1", "subdir/file1 contents", t1) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{file1}, []string{"subdir"}, fs.GetModifyWindow(r.Fremote)) in := rc.Params{ "fs": r.FremoteName, "remote": "subdir", } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{}, []string{}, fs.GetModifyWindow(r.Fremote)) } // operations/rmdir: Remove an empty directory or container func TestRcRmdir(t *testing.T) { r, call := rcNewRun(t, "operations/rmdir") defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) assert.NoError(t, r.Fremote.Mkdir(context.Background(), "subdir")) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{}, []string{"subdir"}, fs.GetModifyWindow(r.Fremote)) in := rc.Params{ "fs": r.FremoteName, "remote": "subdir", } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{}, []string{}, fs.GetModifyWindow(r.Fremote)) } // operations/rmdirs: Remove all the empty directories in the path func TestRcRmdirs(t *testing.T) { r, call := rcNewRun(t, "operations/rmdirs") defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) assert.NoError(t, r.Fremote.Mkdir(context.Background(), "subdir")) assert.NoError(t, r.Fremote.Mkdir(context.Background(), "subdir/subsubdir")) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{}, []string{"subdir", "subdir/subsubdir"}, fs.GetModifyWindow(r.Fremote)) in := rc.Params{ "fs": r.FremoteName, "remote": "subdir", } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{}, []string{}, fs.GetModifyWindow(r.Fremote)) assert.NoError(t, r.Fremote.Mkdir(context.Background(), "subdir")) assert.NoError(t, r.Fremote.Mkdir(context.Background(), "subdir/subsubdir")) in = rc.Params{ "fs": r.FremoteName, "remote": "subdir", "leaveRoot": true, } out, err = call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{}, []string{"subdir"}, fs.GetModifyWindow(r.Fremote)) } // operations/size: Count the number of bytes and files in remote func TestRcSize(t *testing.T) { r, call := rcNewRun(t, "operations/size") defer r.Finalise() file1 := r.WriteObject(context.Background(), "small", "1234567890", t2) // 10 bytes file2 := r.WriteObject(context.Background(), "subdir/medium", "------------------------------------------------------------", t1) // 60 bytes file3 := r.WriteObject(context.Background(), "subdir/subsubdir/large", "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", t1) // 50 bytes fstest.CheckItems(t, r.Fremote, file1, file2, file3) in := rc.Params{ "fs": r.FremoteName, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params{ "count": int64(3), "bytes": int64(120), }, out) } // operations/publiclink: Create or retrieve a public link to the given file or folder. 
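// The local backend does not implement PublicLink, so this only exercises the error path. 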
func TestRcPublicLink(t *testing.T) { r, call := rcNewRun(t, "operations/publiclink") defer r.Finalise() in := rc.Params{ "fs": r.FremoteName, "remote": "", "expire": "5m", "unlink": false, } _, err := call.Fn(context.Background(), in) require.Error(t, err) assert.Contains(t, err.Error(), "doesn't support public links") } // operations/fsinfo: Return information about the remote func TestRcFsInfo(t *testing.T) { r, call := rcNewRun(t, "operations/fsinfo") defer r.Finalise() in := rc.Params{ "fs": r.FremoteName, } got, err := call.Fn(context.Background(), in) require.NoError(t, err) want := operations.GetFsInfo(r.Fremote) assert.Equal(t, want.Name, got["Name"]) assert.Equal(t, want.Root, got["Root"]) assert.Equal(t, want.String, got["String"]) assert.Equal(t, float64(want.Precision), got["Precision"]) var hashes []interface{} for _, hash := range want.Hashes { hashes = append(hashes, hash) } assert.Equal(t, hashes, got["Hashes"]) var features = map[string]interface{}{} for k, v := range want.Features { features[k] = v } assert.Equal(t, features, got["Features"]) } // operations/uploadfile: Upload file using multipart/form-data func TestUploadFile(t *testing.T) { r, call := rcNewRun(t, "operations/uploadfile") defer r.Finalise() testFileName := "test.txt" testFileContent := "Hello World" r.WriteFile(testFileName, testFileContent, t1) testItem1 := fstest.NewItem(testFileName, testFileContent, t1) testItem2 := fstest.NewItem(path.Join("subdir", testFileName), testFileContent, t1) currentFile, err := os.Open(path.Join(r.LocalName, testFileName)) require.NoError(t, err) formReader, contentType, _, err := rest.MultipartUpload(currentFile, url.Values{}, "file", testFileName) require.NoError(t, err) httpReq := httptest.NewRequest("POST", "/", formReader) httpReq.Header.Add("Content-Type", contentType) in := rc.Params{ "_request": httpReq, "fs": r.FremoteName, "remote": "", } _, err = call.Fn(context.Background(), in) require.NoError(t, err) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{testItem1}, nil, fs.ModTimeNotSupported) assert.NoError(t, r.Fremote.Mkdir(context.Background(), "subdir")) currentFile, err = os.Open(path.Join(r.LocalName, testFileName)) require.NoError(t, err) formReader, contentType, _, err = rest.MultipartUpload(currentFile, url.Values{}, "file", testFileName) require.NoError(t, err) httpReq = httptest.NewRequest("POST", "/", formReader) httpReq.Header.Add("Content-Type", contentType) in = rc.Params{ "_request": httpReq, "fs": r.FremoteName, "remote": "subdir", } _, err = call.Fn(context.Background(), in) require.NoError(t, err) fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{testItem1, testItem2}, nil, fs.ModTimeNotSupported) } // backend/command: Runs a backend command func TestRcCommand(t *testing.T) { r, call := rcNewRun(t, "backend/command") defer r.Finalise() in := rc.Params{ "fs": r.FremoteName, "command": "noop", "opt": map[string]string{ "echo": "true", "blue": "", }, "arg": []string{ "path1", "path2", }, } got, err := call.Fn(context.Background(), in) if err != nil { assert.False(t, r.Fremote.Features().IsLocal, "mustn't fail on local remote") assert.Contains(t, err.Error(), "command not found") return } want := rc.Params{"result": map[string]interface{}{ "arg": []string{ "path1", "path2", }, "name": "noop", "opt": map[string]string{ "blue": "", "echo": "true", }, }} assert.Equal(t, want, got) errTxt := "explosion in the sausage factory" in["opt"].(map[string]string)["error"] = errTxt _, err = call.Fn(context.Background(), in) assert.Error(t, 
err) assert.Contains(t, err.Error(), errTxt) } rclone-1.53.3/fs/operations/reopen.go000066400000000000000000000073371375552240400175140ustar00rootroot00000000000000package operations import ( "context" "io" "sync" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/fserrors" ) // ReOpen is a wrapper for an object reader which reopens the stream on error type ReOpen struct { ctx context.Context mu sync.Mutex // mutex to protect the below src fs.Object // object to open options []fs.OpenOption // options to pass to the initial open rc io.ReadCloser // underlying stream read int64 // number of bytes read from this stream maxTries int // maximum number of retries tries int // number of retries we've had so far in this stream err error // if this is set then Read/Close calls will return it opened bool // if set then rc is valid and needs closing } var ( errorFileClosed = errors.New("file already closed") errorTooManyTries = errors.New("failed to reopen: too many retries") ) // NewReOpen makes a handle which will reopen itself and seek to where it was on errors // // If hashOption is set this will be applied when reading from the start // // If rangeOption is set then this will be applied when reading from the // start, and updated on retries. func NewReOpen(ctx context.Context, src fs.Object, maxTries int, options ...fs.OpenOption) (rc io.ReadCloser, err error) { h := &ReOpen{ ctx: ctx, src: src, maxTries: maxTries, options: options, } h.mu.Lock() defer h.mu.Unlock() err = h.open() if err != nil { return nil, err } return h, nil } // open the underlying handle - call with lock held // // we don't retry here as the Open() call will itself have low level retries func (h *ReOpen) open() error { opts := []fs.OpenOption{} var hashOption *fs.HashesOption var rangeOption *fs.RangeOption for _, option := range h.options { switch opt := option.(type) { case *fs.HashesOption: hashOption = opt case *fs.RangeOption: rangeOption = opt case *fs.HTTPOption: opts = append(opts, opt) default: if option.Mandatory() { fs.Logf(h.src, "Unsupported mandatory option: %v", option) } } } if h.read == 0 { if rangeOption != nil { opts = append(opts, rangeOption) } if hashOption != nil { // put hashOption on if reading from the start, ditch otherwise opts = append(opts, hashOption) } } else { if rangeOption != nil { // range to the read point opts = append(opts, &fs.RangeOption{Start: rangeOption.Start + h.read, End: rangeOption.End}) } else { // seek to the read point opts = append(opts, &fs.SeekOption{Offset: h.read}) } } h.tries++ if h.tries > h.maxTries { h.err = errorTooManyTries } else { h.rc, h.err = h.src.Open(h.ctx, opts...) 
} if h.err != nil { if h.tries > 1 { fs.Debugf(h.src, "Reopen failed after %d bytes read: %v", h.read, h.err) } return h.err } h.opened = true return nil } // Read bytes retrying as necessary func (h *ReOpen) Read(p []byte) (n int, err error) { h.mu.Lock() defer h.mu.Unlock() if h.err != nil { // return a previous error if there is one return n, h.err } n, err = h.rc.Read(p) if err != nil { h.err = err } h.read += int64(n) if err != nil && err != io.EOF && !fserrors.IsNoLowLevelRetryError(err) { // close underlying stream h.opened = false _ = h.rc.Close() // reopen stream, clearing error if successful fs.Debugf(h.src, "Reopening on read failure after %d bytes: retry %d/%d: %v", h.read, h.tries, h.maxTries, err) if h.open() == nil { err = nil } } return n, err } // Close the stream func (h *ReOpen) Close() error { h.mu.Lock() defer h.mu.Unlock() if !h.opened { return errorFileClosed } h.opened = false h.err = errorFileClosed return h.rc.Close() } rclone-1.53.3/fs/operations/reopen_test.go000066400000000000000000000076421375552240400205560ustar00rootroot00000000000000package operations import ( "context" "io" "io/ioutil" "testing" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fstest/mockobject" "github.com/rclone/rclone/lib/readers" "github.com/stretchr/testify/assert" ) // check interface var _ io.ReadCloser = (*ReOpen)(nil) var errorTestError = errors.New("test error") // this is a wrapper for a mockobject with a custom Open function // // breaks indicate the number of bytes to read before returning an // error type reOpenTestObject struct { fs.Object breaks []int64 } // Open opens the file for read. Call Close() on the returned io.ReadCloser // // This will break after reading the number of bytes in breaks func (o *reOpenTestObject) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { rc, err := o.Object.Open(ctx, options...) 
if err != nil { return nil, err } if len(o.breaks) > 0 { // Pop a breakpoint off N := o.breaks[0] o.breaks = o.breaks[1:] // If 0 then return an error immediately if N == 0 { return nil, errorTestError } // Read N bytes then an error r := io.MultiReader(&io.LimitedReader{R: rc, N: N}, readers.ErrorReader{Err: errorTestError}) // Wrap with Close in a new readCloser rc = readCloser{Reader: r, Closer: rc} } return rc, nil } func TestReOpen(t *testing.T) { for testIndex, testName := range []string{"Seek", "Range"} { t.Run(testName, func(t *testing.T) { // Contents for the mock object var ( reOpenTestcontents = []byte("0123456789") expectedRead = reOpenTestcontents rangeOption *fs.RangeOption ) if testIndex > 0 { rangeOption = &fs.RangeOption{Start: 1, End: 7} expectedRead = reOpenTestcontents[1:8] } // Start the test with the given breaks testReOpen := func(breaks []int64, maxRetries int) (io.ReadCloser, error) { srcOrig := mockobject.New("potato").WithContent(reOpenTestcontents, mockobject.SeekModeNone) src := &reOpenTestObject{ Object: srcOrig, breaks: breaks, } hashOption := &fs.HashesOption{Hashes: hash.NewHashSet(hash.MD5)} return NewReOpen(context.Background(), src, maxRetries, hashOption, rangeOption) } t.Run("Basics", func(t *testing.T) { // open h, err := testReOpen(nil, 10) assert.NoError(t, err) // Check contents read correctly got, err := ioutil.ReadAll(h) assert.NoError(t, err) assert.Equal(t, expectedRead, got) // Check read after end var buf = make([]byte, 1) n, err := h.Read(buf) assert.Equal(t, 0, n) assert.Equal(t, io.EOF, err) // Check close assert.NoError(t, h.Close()) // Check double close assert.Equal(t, errorFileClosed, h.Close()) // Check read after close n, err = h.Read(buf) assert.Equal(t, 0, n) assert.Equal(t, errorFileClosed, err) }) t.Run("ErrorAtStart", func(t *testing.T) { // open with immediate breaking h, err := testReOpen([]int64{0}, 10) assert.Equal(t, errorTestError, err) assert.Nil(t, h) }) t.Run("WithErrors", func(t *testing.T) { // open with a few break points but less than the max h, err := testReOpen([]int64{2, 1, 3}, 10) assert.NoError(t, err) // check contents got, err := ioutil.ReadAll(h) assert.NoError(t, err) assert.Equal(t, expectedRead, got) // check close assert.NoError(t, h.Close()) }) t.Run("TooManyErrors", func(t *testing.T) { // open with a few break points but >= the max h, err := testReOpen([]int64{2, 1, 3}, 3) assert.NoError(t, err) // check contents got, err := ioutil.ReadAll(h) assert.Equal(t, errorTestError, err) assert.Equal(t, expectedRead[:6], got) // check old error is returned var buf = make([]byte, 1) n, err := h.Read(buf) assert.Equal(t, 0, n) assert.Equal(t, errorTooManyTries, err) // Check close assert.Equal(t, errorFileClosed, h.Close()) }) }) } } rclone-1.53.3/fs/options.go000066400000000000000000000170311375552240400155300ustar00rootroot00000000000000// Define the options for Open package fs import ( "fmt" "net/http" "strconv" "strings" "github.com/pkg/errors" "github.com/rclone/rclone/fs/hash" ) // OpenOption is an interface describing options for Open type OpenOption interface { fmt.Stringer // Header returns the option as an HTTP header Header() (key string, value string) // Mandatory returns whether this option can be ignored or not Mandatory() bool } // RangeOption defines an HTTP Range option with start and end. If // either start or end are < 0 then they will be omitted. // // End may be bigger than the Size of the object in which case it will // be capped to the size of the object. 
// // Note that the End is inclusive, so to fetch 100 bytes you would use // RangeOption{Start: 0, End: 99} // // If Start is specified but End is not then it will fetch from Start // to the end of the file. // // If End is specified, but Start is not then it will fetch the last // End bytes. // // Examples: // // RangeOption{Start: 0, End: 99} - fetch the first 100 bytes // RangeOption{Start: 100, End: 199} - fetch the second 100 bytes // RangeOption{Start: 100, End: -1} - fetch bytes from offset 100 to the end // RangeOption{Start: -1, End: 100} - fetch the last 100 bytes // // A RangeOption implements a single byte-range-spec from // https://tools.ietf.org/html/rfc7233#section-2.1 type RangeOption struct { Start int64 End int64 } // Header formats the option as an http header func (o *RangeOption) Header() (key string, value string) { key = "Range" value = "bytes=" if o.Start >= 0 { value += strconv.FormatInt(o.Start, 10) } value += "-" if o.End >= 0 { value += strconv.FormatInt(o.End, 10) } return key, value } // ParseRangeOption parses a RangeOption from a Range: header. // It only accepts single ranges. func ParseRangeOption(s string) (po *RangeOption, err error) { const preamble = "bytes=" if !strings.HasPrefix(s, preamble) { return nil, errors.New("Range: header invalid: doesn't start with " + preamble) } s = s[len(preamble):] if strings.IndexRune(s, ',') >= 0 { return nil, errors.New("Range: header invalid: contains multiple ranges which isn't supported") } dash := strings.IndexRune(s, '-') if dash < 0 { return nil, errors.New("Range: header invalid: contains no '-'") } start, end := strings.TrimSpace(s[:dash]), strings.TrimSpace(s[dash+1:]) o := RangeOption{Start: -1, End: -1} if start != "" { o.Start, err = strconv.ParseInt(start, 10, 64) if err != nil || o.Start < 0 { return nil, errors.New("Range: header invalid: bad start") } } if end != "" { o.End, err = strconv.ParseInt(end, 10, 64) if err != nil || o.End < 0 { return nil, errors.New("Range: header invalid: bad end") } } return &o, nil } // String formats the option into human readable form func (o *RangeOption) String() string { return fmt.Sprintf("RangeOption(%d,%d)", o.Start, o.End) } // Mandatory returns whether the option must be parsed or can be ignored func (o *RangeOption) Mandatory() bool { return true } // Decode interprets the RangeOption into an offset and a limit // // The offset is the start of the stream and the limit is how many // bytes should be read from it. If the limit is -1 then the stream // should be read to the end. func (o *RangeOption) Decode(size int64) (offset, limit int64) { if o.Start >= 0 { offset = o.Start if o.End >= 0 { limit = o.End - o.Start + 1 } else { limit = -1 } } else { if o.End >= 0 { offset = size - o.End } else { offset = 0 } limit = -1 } return offset, limit } // FixRangeOption looks through the slice of options and adjusts any // RangeOption~s found that request a fetch from the end into an // absolute fetch using the size passed in and makes sure the range does // not exceed filesize. Some remotes (eg Onedrive, Box) don't support // range requests which index from the end. 
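// For example, with size == 100:
//
//	RangeOption{Start: -1, End: 10}  becomes RangeOption{Start: 90, End: -1}
//	RangeOption{Start: 10, End: 200} becomes RangeOption{Start: 10, End: 99}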
func FixRangeOption(options []OpenOption, size int64) { if size == 0 { // if size 0 then remove RangeOption~s // replacing with a NullOptions~s which won't be rendered for i := range options { if _, ok := options[i].(*RangeOption); ok { options[i] = NullOption{} } } return } for i := range options { option := options[i] if x, ok := option.(*RangeOption); ok { // If start is < 0 then fetch from the end if x.Start < 0 { x = &RangeOption{Start: size - x.End, End: -1} options[i] = x } if x.End > size { x = &RangeOption{Start: x.Start, End: size - 1} options[i] = x } } } } // SeekOption defines an HTTP Range option with start only. type SeekOption struct { Offset int64 } // Header formats the option as an http header func (o *SeekOption) Header() (key string, value string) { key = "Range" value = fmt.Sprintf("bytes=%d-", o.Offset) return key, value } // String formats the option into human readable form func (o *SeekOption) String() string { return fmt.Sprintf("SeekOption(%d)", o.Offset) } // Mandatory returns whether the option must be parsed or can be ignored func (o *SeekOption) Mandatory() bool { return true } // HTTPOption defines a general purpose HTTP option type HTTPOption struct { Key string Value string } // Header formats the option as an http header func (o *HTTPOption) Header() (key string, value string) { return o.Key, o.Value } // String formats the option into human readable form func (o *HTTPOption) String() string { return fmt.Sprintf("HTTPOption(%q,%q)", o.Key, o.Value) } // Mandatory returns whether the option must be parsed or can be ignored func (o *HTTPOption) Mandatory() bool { return false } // HashesOption defines an option used to tell the local fs to limit // the number of hashes it calculates. type HashesOption struct { Hashes hash.Set } // Header formats the option as an http header func (o *HashesOption) Header() (key string, value string) { return "", "" } // String formats the option into human readable form func (o *HashesOption) String() string { return fmt.Sprintf("HashesOption(%v)", o.Hashes) } // Mandatory returns whether the option must be parsed or can be ignored func (o *HashesOption) Mandatory() bool { return false } // NullOption defines an Option which does nothing type NullOption struct { } // Header formats the option as an http header func (o NullOption) Header() (key string, value string) { return "", "" } // String formats the option into human readable form func (o NullOption) String() string { return fmt.Sprintf("NullOption()") } // Mandatory returns whether the option must be parsed or can be ignored func (o NullOption) Mandatory() bool { return false } // OpenOptionAddHeaders adds each header found in options to the // headers map provided the key was non empty. func OpenOptionAddHeaders(options []OpenOption, headers map[string]string) { for _, option := range options { key, value := option.Header() if key != "" && value != "" { headers[key] = value } } } // OpenOptionHeaders adds each header found in options to the // headers map provided the key was non empty. // // It returns a nil map if options was empty func OpenOptionHeaders(options []OpenOption) (headers map[string]string) { if len(options) == 0 { return nil } headers = make(map[string]string, len(options)) OpenOptionAddHeaders(options, headers) return headers } // OpenOptionAddHTTPHeaders Sets each header found in options to the // http.Header map provided the key was non empty. 
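// For example, adding []OpenOption{&RangeOption{Start: 1, End: 10}} sets
// the header "Range: bytes=1-10" on the request.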
func OpenOptionAddHTTPHeaders(headers http.Header, options []OpenOption) { for _, option := range options { key, value := option.Header() if key != "" && value != "" { headers.Set(key, value) } } } rclone-1.53.3/fs/options_test.go000066400000000000000000000153251375552240400165730ustar00rootroot00000000000000package fs import ( "fmt" "net/http" "testing" "github.com/rclone/rclone/fs/hash" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestParseRangeOption(t *testing.T) { for _, test := range []struct { in string want RangeOption err string }{ {in: "", err: "doesn't start with bytes="}, {in: "bytes=1-2,3-4", err: "contains multiple ranges"}, {in: "bytes=100", err: "contains no '-'"}, {in: "bytes=x-8", err: "bad start"}, {in: "bytes=8-x", err: "bad end"}, {in: "bytes=1-2", want: RangeOption{Start: 1, End: 2}}, {in: "bytes=-123456789123456789", want: RangeOption{Start: -1, End: 123456789123456789}}, {in: "bytes=123456789123456789-", want: RangeOption{Start: 123456789123456789, End: -1}}, {in: "bytes= 1 - 2 ", want: RangeOption{Start: 1, End: 2}}, {in: "bytes=-", want: RangeOption{Start: -1, End: -1}}, {in: "bytes= - ", want: RangeOption{Start: -1, End: -1}}, } { got, err := ParseRangeOption(test.in) what := fmt.Sprintf("parsing %q", test.in) if test.err != "" { require.Contains(t, err.Error(), test.err) require.Nil(t, got, what) } else { require.NoError(t, err, what) assert.Equal(t, test.want, *got, what) } } } func TestRangeOptionDecode(t *testing.T) { for _, test := range []struct { in RangeOption size int64 wantOffset int64 wantLimit int64 }{ {in: RangeOption{Start: 1, End: 10}, size: 100, wantOffset: 1, wantLimit: 10}, {in: RangeOption{Start: 10, End: 10}, size: 100, wantOffset: 10, wantLimit: 1}, {in: RangeOption{Start: 10, End: 9}, size: 100, wantOffset: 10, wantLimit: 0}, {in: RangeOption{Start: 1, End: -1}, size: 100, wantOffset: 1, wantLimit: -1}, {in: RangeOption{Start: -1, End: 90}, size: 100, wantOffset: 10, wantLimit: -1}, {in: RangeOption{Start: -1, End: -1}, size: 100, wantOffset: 0, wantLimit: -1}, } { gotOffset, gotLimit := test.in.Decode(test.size) what := fmt.Sprintf("%+v size=%d", test.in, test.size) assert.Equal(t, test.wantOffset, gotOffset, "offset "+what) assert.Equal(t, test.wantLimit, gotLimit, "limit "+what) } } func TestRangeOption(t *testing.T) { opt := &RangeOption{Start: 1, End: 10} var _ OpenOption = opt // check interface assert.Equal(t, "RangeOption(1,10)", opt.String()) key, value := opt.Header() assert.Equal(t, "Range", key) assert.Equal(t, "bytes=1-10", value) assert.Equal(t, true, opt.Mandatory()) opt = &RangeOption{Start: -1, End: 10} assert.Equal(t, "RangeOption(-1,10)", opt.String()) key, value = opt.Header() assert.Equal(t, "Range", key) assert.Equal(t, "bytes=-10", value) assert.Equal(t, true, opt.Mandatory()) opt = &RangeOption{Start: 1, End: -1} assert.Equal(t, "RangeOption(1,-1)", opt.String()) key, value = opt.Header() assert.Equal(t, "Range", key) assert.Equal(t, "bytes=1-", value) assert.Equal(t, true, opt.Mandatory()) opt = &RangeOption{Start: -1, End: -1} assert.Equal(t, "RangeOption(-1,-1)", opt.String()) key, value = opt.Header() assert.Equal(t, "Range", key) assert.Equal(t, "bytes=-", value) assert.Equal(t, true, opt.Mandatory()) } func TestSeekOption(t *testing.T) { opt := &SeekOption{Offset: 1} var _ OpenOption = opt // check interface assert.Equal(t, "SeekOption(1)", opt.String()) key, value := opt.Header() assert.Equal(t, "Range", key) assert.Equal(t, "bytes=1-", value) assert.Equal(t, true, 
opt.Mandatory()) } func TestHTTPOption(t *testing.T) { opt := &HTTPOption{Key: "k", Value: "v"} var _ OpenOption = opt // check interface assert.Equal(t, `HTTPOption("k","v")`, opt.String()) key, value := opt.Header() assert.Equal(t, "k", key) assert.Equal(t, "v", value) assert.Equal(t, false, opt.Mandatory()) } func TestHashesOption(t *testing.T) { opt := &HashesOption{hash.Set(hash.MD5 | hash.SHA1)} var _ OpenOption = opt // check interface assert.Equal(t, `HashesOption([MD5, SHA-1])`, opt.String()) key, value := opt.Header() assert.Equal(t, "", key) assert.Equal(t, "", value) assert.Equal(t, false, opt.Mandatory()) } func TestNullOption(t *testing.T) { opt := NullOption{} var _ OpenOption = opt // check interface assert.Equal(t, "NullOption()", opt.String()) key, value := opt.Header() assert.Equal(t, "", key) assert.Equal(t, "", value) assert.Equal(t, false, opt.Mandatory()) } func TestFixRangeOptions(t *testing.T) { for _, test := range []struct { name string in []OpenOption size int64 want []OpenOption }{ { name: "Nil options", in: nil, want: nil, }, { name: "Empty options", in: []OpenOption{}, want: []OpenOption{}, }, { name: "Fetch a range with size=0", in: []OpenOption{ &HTTPOption{Key: "a", Value: "1"}, &RangeOption{Start: 1, End: 10}, &HTTPOption{Key: "b", Value: "2"}, }, want: []OpenOption{ &HTTPOption{Key: "a", Value: "1"}, NullOption{}, &HTTPOption{Key: "b", Value: "2"}, }, size: 0, }, { name: "Fetch a range", in: []OpenOption{ &HTTPOption{Key: "a", Value: "1"}, &RangeOption{Start: 1, End: 10}, &HTTPOption{Key: "b", Value: "2"}, }, want: []OpenOption{ &HTTPOption{Key: "a", Value: "1"}, &RangeOption{Start: 1, End: 10}, &HTTPOption{Key: "b", Value: "2"}, }, size: 100, }, { name: "Fetch to end", in: []OpenOption{ &RangeOption{Start: 1, End: -1}, }, want: []OpenOption{ &RangeOption{Start: 1, End: -1}, }, size: 100, }, { name: "Fetch the last 10 bytes", in: []OpenOption{ &RangeOption{Start: -1, End: 10}, }, want: []OpenOption{ &RangeOption{Start: 90, End: -1}, }, size: 100, }, { name: "Fetch with end bigger than size", in: []OpenOption{ &RangeOption{Start: 10, End: 200}, }, want: []OpenOption{ &RangeOption{Start: 10, End: 99}, }, size: 100, }, } { FixRangeOption(test.in, test.size) assert.Equal(t, test.want, test.in, test.name) } } var testOpenOptions = []OpenOption{ &HTTPOption{Key: "a", Value: "1"}, &RangeOption{Start: 1, End: 10}, &HTTPOption{Key: "b", Value: "2"}, NullOption{}, &HashesOption{hash.Set(hash.MD5 | hash.SHA1)}, } func TestOpenOptionAddHeaders(t *testing.T) { m := map[string]string{} want := map[string]string{ "a": "1", "Range": "bytes=1-10", "b": "2", } OpenOptionAddHeaders(testOpenOptions, m) assert.Equal(t, want, m) } func TestOpenOptionHeaders(t *testing.T) { want := map[string]string{ "a": "1", "Range": "bytes=1-10", "b": "2", } m := OpenOptionHeaders(testOpenOptions) assert.Equal(t, want, m) assert.Nil(t, OpenOptionHeaders([]OpenOption{})) } func TestOpenOptionAddHTTPHeaders(t *testing.T) { headers := http.Header{} want := http.Header{ "A": {"1"}, "Range": {"bytes=1-10"}, "B": {"2"}, } OpenOptionAddHTTPHeaders(headers, testOpenOptions) assert.Equal(t, want, headers) } rclone-1.53.3/fs/parseduration.go000066400000000000000000000110121375552240400167060ustar00rootroot00000000000000package fs import ( "fmt" "math" "strconv" "strings" "time" ) // Duration is a time.Duration with some more parsing options type Duration time.Duration // DurationOff is the default value for flags which can be turned off const DurationOff = Duration((1 << 63) - 1) // Turn Duration 
into a string func (d Duration) String() string { if d == DurationOff { return "off" } for i := len(ageSuffixes) - 2; i >= 0; i-- { ageSuffix := &ageSuffixes[i] if math.Abs(float64(d)) >= float64(ageSuffix.Multiplier) { timeUnits := float64(d) / float64(ageSuffix.Multiplier) return strconv.FormatFloat(timeUnits, 'f', -1, 64) + ageSuffix.Suffix } } return time.Duration(d).String() } // IsSet returns if the duration is != DurationOff func (d Duration) IsSet() bool { return d != DurationOff } // We use time conventions var ageSuffixes = []struct { Suffix string Multiplier time.Duration }{ {Suffix: "d", Multiplier: time.Hour * 24}, {Suffix: "w", Multiplier: time.Hour * 24 * 7}, {Suffix: "M", Multiplier: time.Hour * 24 * 30}, {Suffix: "y", Multiplier: time.Hour * 24 * 365}, // Default to second {Suffix: "", Multiplier: time.Second}, } // parse the age as suffixed ages func parseDurationSuffixes(age string) (time.Duration, error) { var period float64 for _, ageSuffix := range ageSuffixes { if strings.HasSuffix(age, ageSuffix.Suffix) { numberString := age[:len(age)-len(ageSuffix.Suffix)] var err error period, err = strconv.ParseFloat(numberString, 64) if err != nil { return time.Duration(0), err } period *= float64(ageSuffix.Multiplier) break } } return time.Duration(period), nil } // time formats to try parsing ages as - in order var timeFormats = []string{ time.RFC3339, "2006-01-02T15:04:05", "2006-01-02 15:04:05", "2006-01-02", } // parse the age as time before the epoch in various date formats func parseDurationDates(age string, epoch time.Time) (t time.Duration, err error) { var instant time.Time for _, timeFormat := range timeFormats { instant, err = time.Parse(timeFormat, age) if err == nil { return epoch.Sub(instant), nil } } return t, err } // ParseDuration parses a duration string. Accept ms|s|m|h|d|w|M|y suffixes. Defaults to second if not provided func ParseDuration(age string) (d time.Duration, err error) { if age == "off" { return time.Duration(DurationOff), nil } // Attempt to parse as a time.Duration first d, err = time.ParseDuration(age) if err == nil { return d, nil } d, err = parseDurationSuffixes(age) if err == nil { return d, nil } d, err = parseDurationDates(age, time.Now()) if err == nil { return d, nil } return d, err } // ReadableString parses d into a human readable duration. // Based on https://github.com/hako/durafmt func (d Duration) ReadableString() string { switch d { case DurationOff: return "off" case 0: return "0s" } readableString := "" // Check for minus durations. if d < 0 { readableString += "-" } duration := time.Duration(math.Abs(float64(d))) // Convert duration. seconds := int64(duration.Seconds()) % 60 minutes := int64(duration.Minutes()) % 60 hours := int64(duration.Hours()) % 24 days := int64(duration/(24*time.Hour)) % 365 % 7 // Edge case between 364 and 365 days. // We need to calculate weeks from what is left from years leftYearDays := int64(duration/(24*time.Hour)) % 365 weeks := leftYearDays / 7 if leftYearDays >= 364 && leftYearDays < 365 { weeks = 52 } years := int64(duration/(24*time.Hour)) / 365 milliseconds := int64(duration/time.Millisecond) - (seconds * 1000) - (minutes * 60000) - (hours * 3600000) - (days * 86400000) - (weeks * 604800000) - (years * 31536000000) // Create a map of the converted duration time. durationMap := map[string]int64{ "ms": milliseconds, "s": seconds, "m": minutes, "h": hours, "d": days, "w": weeks, "y": years, } // Construct duration string. 
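	// Units are emitted from most to least significant and zero-valued units
	// are skipped entirely, so 90 seconds renders as "1m30s" and 25 hours as
	// "1d1h" rather than "1d1h0m0s".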
for _, u := range [...]string{"y", "w", "d", "h", "m", "s", "ms"} { v := durationMap[u] strval := strconv.FormatInt(v, 10) if v == 0 { continue } readableString += strval + u } return readableString } // Set a Duration func (d *Duration) Set(s string) error { duration, err := ParseDuration(s) if err != nil { return err } *d = Duration(duration) return nil } // Type of the value func (d Duration) Type() string { return "Duration" } // Scan implements the fmt.Scanner interface func (d *Duration) Scan(s fmt.ScanState, ch rune) error { token, err := s.Token(true, nil) if err != nil { return err } return d.Set(string(token)) } rclone-1.53.3/fs/parseduration_test.go000066400000000000000000000101001375552240400177420ustar00rootroot00000000000000package fs import ( "fmt" "strings" "testing" "time" "github.com/spf13/pflag" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // Check it satisfies the interface var _ pflag.Value = (*Duration)(nil) func TestParseDuration(t *testing.T) { for _, test := range []struct { in string want time.Duration err bool }{ {"0", 0, false}, {"", 0, true}, {"1ms", time.Millisecond, false}, {"1s", time.Second, false}, {"1m", time.Minute, false}, {"1.5m", (3 * time.Minute) / 2, false}, {"1h", time.Hour, false}, {"1d", time.Hour * 24, false}, {"1w", time.Hour * 24 * 7, false}, {"1M", time.Hour * 24 * 30, false}, {"1y", time.Hour * 24 * 365, false}, {"1.5y", time.Hour * 24 * 365 * 3 / 2, false}, {"-1s", -time.Second, false}, {"1.s", time.Second, false}, {"1x", 0, true}, {"off", time.Duration(DurationOff), false}, {"1h2m3s", time.Hour + 2*time.Minute + 3*time.Second, false}, {"2001-02-03", time.Since(time.Date(2001, 2, 3, 0, 0, 0, 0, time.Local)), false}, {"2001-02-03 10:11:12", time.Since(time.Date(2001, 2, 3, 10, 11, 12, 0, time.Local)), false}, {"2001-02-03T10:11:12", time.Since(time.Date(2001, 2, 3, 10, 11, 12, 0, time.Local)), false}, {"2001-02-03T10:11:12.123Z", time.Since(time.Date(2001, 2, 3, 10, 11, 12, 123, time.UTC)), false}, } { duration, err := ParseDuration(test.in) if test.err { require.Error(t, err) } else { require.NoError(t, err) } if strings.HasPrefix(test.in, "2001-") { ok := duration > test.want-time.Second && duration < test.want+time.Second assert.True(t, ok, test.in) } else { assert.Equal(t, test.want, duration) } } } func TestDurationString(t *testing.T) { for _, test := range []struct { in time.Duration want string }{ {time.Duration(0), "0s"}, {time.Second, "1s"}, {time.Minute, "1m0s"}, {time.Millisecond, "1ms"}, {time.Second, "1s"}, {(3 * time.Minute) / 2, "1m30s"}, {time.Hour, "1h0m0s"}, {time.Hour * 24, "1d"}, {time.Hour * 24 * 7, "1w"}, {time.Hour * 24 * 30, "1M"}, {time.Hour * 24 * 365, "1y"}, {time.Hour * 24 * 365 * 3 / 2, "1.5y"}, {-time.Second, "-1s"}, {time.Second, "1s"}, {time.Duration(DurationOff), "off"}, {time.Hour + 2*time.Minute + 3*time.Second, "1h2m3s"}, {time.Hour * 24, "1d"}, {time.Hour * 24 * 7, "1w"}, {time.Hour * 24 * 30, "1M"}, {time.Hour * 24 * 365, "1y"}, {time.Hour * 24 * 365 * 3 / 2, "1.5y"}, {-time.Hour * 24 * 365 * 3 / 2, "-1.5y"}, } { got := Duration(test.in).String() assert.Equal(t, test.want, got) // Test the reverse reverse, err := ParseDuration(test.want) assert.NoError(t, err) assert.Equal(t, test.in, reverse) } } func TestDurationReadableString(t *testing.T) { for _, test := range []struct { negative bool in time.Duration want string }{ // Edge Cases {false, time.Duration(DurationOff), "off"}, // Base Cases {false, time.Duration(0), "0s"}, {true, time.Millisecond, "1ms"}, {true, 
time.Second, "1s"}, {true, time.Minute, "1m"}, {true, (3 * time.Minute) / 2, "1m30s"}, {true, time.Hour, "1h"}, {true, time.Hour * 24, "1d"}, {true, time.Hour * 24 * 7, "1w"}, {true, time.Hour * 24 * 365, "1y"}, // Composite Cases {true, time.Hour + 2*time.Minute + 3*time.Second, "1h2m3s"}, {true, time.Hour * 24 * (365 + 14), "1y2w"}, {true, time.Hour*24*4 + time.Hour*3 + time.Minute*2 + time.Second, "4d3h2m1s"}, {true, time.Hour * 24 * (365*3 + 7*2 + 1), "3y2w1d"}, {true, time.Hour*24*(365*3+7*2+1) + time.Hour*2 + time.Second, "3y2w1d2h1s"}, {true, time.Hour*24*(365*3+7*2+1) + time.Second, "3y2w1d1s"}, {true, time.Hour*24*(365+7*2+3) + time.Hour*4 + time.Minute*5 + time.Second*6 + time.Millisecond*7, "1y2w3d4h5m6s7ms"}, } { got := Duration(test.in).ReadableString() assert.Equal(t, test.want, got) // Test Negative Case if test.negative { got = Duration(-test.in).ReadableString() assert.Equal(t, "-"+test.want, got) } } } func TestDurationScan(t *testing.T) { var v Duration n, err := fmt.Sscan(" 17m ", &v) require.NoError(t, err) assert.Equal(t, 1, n) assert.Equal(t, Duration(17*60*time.Second), v) } rclone-1.53.3/fs/rc/000077500000000000000000000000001375552240400141105ustar00rootroot00000000000000rclone-1.53.3/fs/rc/cache.go000066400000000000000000000022361375552240400155050ustar00rootroot00000000000000// Utilities for accessing the Fs cache package rc import ( "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/cache" ) // GetFsNamed gets an fs.Fs named fsName either from the cache or creates it afresh func GetFsNamed(in Params, fsName string) (f fs.Fs, err error) { fsString, err := in.GetString(fsName) if err != nil { return nil, err } return cache.Get(fsString) } // GetFs gets an fs.Fs named "fs" either from the cache or creates it afresh func GetFs(in Params) (f fs.Fs, err error) { return GetFsNamed(in, "fs") } // GetFsAndRemoteNamed gets the fsName parameter from in, makes a // remote or fetches it from the cache then gets the remoteName // parameter from in too. func GetFsAndRemoteNamed(in Params, fsName, remoteName string) (f fs.Fs, remote string, err error) { remote, err = in.GetString(remoteName) if err != nil { return } f, err = GetFsNamed(in, fsName) return } // GetFsAndRemote gets the `fs` parameter from in, makes a remote or // fetches it from the cache then gets the `remote` parameter from in // too. 
func GetFsAndRemote(in Params) (f fs.Fs, remote string, err error) { return GetFsAndRemoteNamed(in, "fs", "remote") } rclone-1.53.3/fs/rc/cache_test.go000066400000000000000000000026731375552240400165510ustar00rootroot00000000000000package rc import ( "testing" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fstest/mockfs" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func mockNewFs(t *testing.T) func() { f := mockfs.NewFs("mock", "mock") cache.Put("/", f) return func() { cache.Clear() } } func TestGetFsNamed(t *testing.T) { defer mockNewFs(t)() in := Params{ "potato": "/", } f, err := GetFsNamed(in, "potato") require.NoError(t, err) assert.NotNil(t, f) in = Params{ "sausage": "/", } f, err = GetFsNamed(in, "potato") require.Error(t, err) assert.Nil(t, f) } func TestGetFs(t *testing.T) { defer mockNewFs(t)() in := Params{ "fs": "/", } f, err := GetFs(in) require.NoError(t, err) assert.NotNil(t, f) } func TestGetFsAndRemoteNamed(t *testing.T) { defer mockNewFs(t)() in := Params{ "fs": "/", "remote": "hello", } f, remote, err := GetFsAndRemoteNamed(in, "fs", "remote") require.NoError(t, err) assert.NotNil(t, f) assert.Equal(t, "hello", remote) f, _, err = GetFsAndRemoteNamed(in, "fsX", "remote") require.Error(t, err) assert.Nil(t, f) f, _, err = GetFsAndRemoteNamed(in, "fs", "remoteX") require.Error(t, err) assert.Nil(t, f) } func TestGetFsAndRemote(t *testing.T) { defer mockNewFs(t)() in := Params{ "fs": "/", "remote": "hello", } f, remote, err := GetFsAndRemote(in) require.NoError(t, err) assert.NotNil(t, f) assert.Equal(t, "hello", remote) } rclone-1.53.3/fs/rc/config.go000066400000000000000000000057051375552240400157130ustar00rootroot00000000000000// Implement config options reading and writing // // This is done here rather than in fs/fs.go so we don't cause a circular dependency package rc import ( "context" "github.com/pkg/errors" ) var ( optionBlock = map[string]interface{}{} optionReload = map[string]func() error{} ) // AddOption adds an option set func AddOption(name string, option interface{}) { optionBlock[name] = option } // AddOptionReload adds an option set with a reload function to be // called when options are changed func AddOptionReload(name string, option interface{}, reload func() error) { optionBlock[name] = option optionReload[name] = reload } func init() { Add(Call{ Path: "options/blocks", Fn: rcOptionsBlocks, Title: "List all the option blocks", Help: `Returns - options - a list of the options block names`, }) } // Show the list of all the option blocks func rcOptionsBlocks(ctx context.Context, in Params) (out Params, err error) { options := []string{} for name := range optionBlock { options = append(options, name) } out = make(Params) out["options"] = options return out, nil } func init() { Add(Call{ Path: "options/get", Fn: rcOptionsGet, Title: "Get all the options", Help: `Returns an object where keys are option block names and values are an object with the current option values in. This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions. 
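
For example the --log-level flag appears here as LogLevel in the "main"
options block.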
`, }) } // Show the list of all the option blocks func rcOptionsGet(ctx context.Context, in Params) (out Params, err error) { out = make(Params) for name, options := range optionBlock { out[name] = options } return out, nil } func init() { Add(Call{ Path: "options/set", Fn: rcOptionsSet, Title: "Set an option", Help: `Parameters - option block name containing an object with - key: value Repeated as often as required. Only supply the options you wish to change. If an option is unknown it will be silently ignored. Not all options will have an effect when changed like this. For example: This sets DEBUG level logs (-vv) rclone rc options/set --json '{"main": {"LogLevel": 8}}' And this sets INFO level logs (-v) rclone rc options/set --json '{"main": {"LogLevel": 7}}' And this sets NOTICE level logs (normal without -v) rclone rc options/set --json '{"main": {"LogLevel": 6}}' `, }) } // Set an option in an option block func rcOptionsSet(ctx context.Context, in Params) (out Params, err error) { for name, options := range in { current := optionBlock[name] if current == nil { return nil, errors.Errorf("unknown option block %q", name) } err := Reshape(current, options) if err != nil { return nil, errors.Wrapf(err, "failed to write options from block %q", name) } if reload := optionReload[name]; reload != nil { err = reload() if err != nil { return nil, errors.Wrapf(err, "failed to reload options from block %q", name) } } } return out, nil } rclone-1.53.3/fs/rc/config_test.go000066400000000000000000000065471375552240400167570ustar00rootroot00000000000000package rc import ( "context" "encoding/json" "fmt" "testing" "github.com/pkg/errors" "github.com/rclone/rclone/cmd/serve/httplib" "github.com/rclone/rclone/fs" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func clearOptionBlock() func() { oldOptionBlock := optionBlock optionBlock = map[string]interface{}{} return func() { optionBlock = oldOptionBlock } } var testOptions = struct { String string Int int }{ String: "hello", Int: 42, } func TestAddOption(t *testing.T) { defer clearOptionBlock()() assert.Equal(t, len(optionBlock), 0) AddOption("potato", &testOptions) assert.Equal(t, len(optionBlock), 1) assert.Equal(t, len(optionReload), 0) assert.Equal(t, &testOptions, optionBlock["potato"]) } func TestAddOptionReload(t *testing.T) { defer clearOptionBlock()() assert.Equal(t, len(optionBlock), 0) reload := func() error { return nil } AddOptionReload("potato", &testOptions, reload) assert.Equal(t, len(optionBlock), 1) assert.Equal(t, len(optionReload), 1) assert.Equal(t, &testOptions, optionBlock["potato"]) assert.Equal(t, fmt.Sprintf("%p", reload), fmt.Sprintf("%p", optionReload["potato"])) } func TestOptionsBlocks(t *testing.T) { defer clearOptionBlock()() AddOption("potato", &testOptions) call := Calls.Get("options/blocks") require.NotNil(t, call) in := Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) assert.Equal(t, Params{"options": []string{"potato"}}, out) } func TestOptionsGet(t *testing.T) { defer clearOptionBlock()() AddOption("potato", &testOptions) call := Calls.Get("options/get") require.NotNil(t, call) in := Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) assert.Equal(t, Params{"potato": &testOptions}, out) } func TestOptionsGetMarshal(t *testing.T) { defer clearOptionBlock()() // Add some real options AddOption("http", &httplib.DefaultOpt) AddOption("main", fs.Config) AddOption("rc", &DefaultOpt) // get 
them call := Calls.Get("options/get") require.NotNil(t, call) in := Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) // Check that they marshal _, err = json.Marshal(out) require.NoError(t, err) } func TestOptionsSet(t *testing.T) { defer clearOptionBlock()() var reloaded int AddOptionReload("potato", &testOptions, func() error { if reloaded > 0 { return errors.New("error while reloading") } reloaded++ return nil }) call := Calls.Get("options/set") require.NotNil(t, call) in := Params{ "potato": Params{ "Int": 50, }, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.Nil(t, out) assert.Equal(t, 50, testOptions.Int) assert.Equal(t, "hello", testOptions.String) assert.Equal(t, 1, reloaded) // error from reload _, err = call.Fn(context.Background(), in) require.Error(t, err) assert.Contains(t, err.Error(), "error while reloading") // unknown option block in = Params{ "sausage": Params{ "Int": 50, }, } _, err = call.Fn(context.Background(), in) require.Error(t, err) assert.Contains(t, err.Error(), "unknown option block") // bad shape in = Params{ "potato": []string{"a", "b"}, } _, err = call.Fn(context.Background(), in) require.Error(t, err) assert.Contains(t, err.Error(), "failed to write options") } rclone-1.53.3/fs/rc/internal.go000066400000000000000000000256031375552240400162610ustar00rootroot00000000000000// Define the internal rc functions package rc import ( "context" "net/http" "os" "os/exec" "runtime" "strings" "time" "github.com/coreos/go-semver/semver" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/lib/atexit" ) func init() { Add(Call{ Path: "rc/noopauth", AuthRequired: true, Fn: rcNoop, Title: "Echo the input to the output parameters requiring auth", Help: ` This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.`, }) Add(Call{ Path: "rc/noop", Fn: rcNoop, Title: "Echo the input to the output parameters", Help: ` This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.`, }) } // Echo the input to the output parameters func rcNoop(ctx context.Context, in Params) (out Params, err error) { return in, nil } func init() { Add(Call{ Path: "rc/error", Fn: rcError, Title: "This returns an error", Help: ` This returns an error with the input as part of its error string. Useful for testing error handling.`, }) } // Return an error regardless func rcError(ctx context.Context, in Params) (out Params, err error) { return nil, errors.Errorf("arbitrary error on input %+v", in) } func init() { Add(Call{ Path: "rc/list", Fn: rcList, Title: "List all the registered remote control commands", Help: ` This lists all the registered remote control commands as a JSON map in the commands response.`, }) } // List the registered commands func rcList(ctx context.Context, in Params) (out Params, err error) { out = make(Params) out["commands"] = Calls.List() return out, nil } func init() { Add(Call{ Path: "core/pid", Fn: rcPid, Title: "Return PID of current process", Help: ` This returns PID of current process. 
Useful for stopping rclone process.`, }) } // Return PID of current process func rcPid(ctx context.Context, in Params) (out Params, err error) { out = make(Params) out["pid"] = os.Getpid() return out, nil } func init() { Add(Call{ Path: "core/memstats", Fn: rcMemStats, Title: "Returns the memory statistics", Help: ` This returns the memory statistics of the running program. What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats The most interesting values for most people are: * HeapAlloc: This is the amount of memory rclone is actually using * HeapSys: This is the amount of memory rclone has obtained from the OS * Sys: this is the total amount of memory requested from the OS * It is virtual memory so may include unused memory `, }) } // Return the memory statistics func rcMemStats(ctx context.Context, in Params) (out Params, err error) { out = make(Params) var m runtime.MemStats runtime.ReadMemStats(&m) out["Alloc"] = m.Alloc out["TotalAlloc"] = m.TotalAlloc out["Sys"] = m.Sys out["Mallocs"] = m.Mallocs out["Frees"] = m.Frees out["HeapAlloc"] = m.HeapAlloc out["HeapSys"] = m.HeapSys out["HeapIdle"] = m.HeapIdle out["HeapInuse"] = m.HeapInuse out["HeapReleased"] = m.HeapReleased out["HeapObjects"] = m.HeapObjects out["StackInuse"] = m.StackInuse out["StackSys"] = m.StackSys out["MSpanInuse"] = m.MSpanInuse out["MSpanSys"] = m.MSpanSys out["MCacheInuse"] = m.MCacheInuse out["MCacheSys"] = m.MCacheSys out["BuckHashSys"] = m.BuckHashSys out["GCSys"] = m.GCSys out["OtherSys"] = m.OtherSys return out, nil } func init() { Add(Call{ Path: "core/gc", Fn: rcGc, Title: "Runs a garbage collection.", Help: ` This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems. 
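
For example: rclone rc core/gc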
`, }) } // Do a garbage collection run func rcGc(ctx context.Context, in Params) (out Params, err error) { runtime.GC() return nil, nil } func init() { Add(Call{ Path: "core/version", Fn: rcVersion, Title: "Shows the current version of rclone and the go runtime.", Help: ` This shows the current version of go and the go runtime - version - rclone version, eg "v1.53.0" - decomposed - version number as [major, minor, patch] - isGit - boolean - true if this was compiled from the git version - isBeta - boolean - true if this is a beta version - os - OS in use as according to Go - arch - cpu architecture in use according to Go - goVersion - version of Go runtime in use `, }) } // Return version info func rcVersion(ctx context.Context, in Params) (out Params, err error) { version, err := semver.NewVersion(fs.Version[1:]) if err != nil { return nil, err } out = Params{ "version": fs.Version, "decomposed": version.Slice(), "isGit": strings.HasSuffix(fs.Version, "-DEV"), "isBeta": version.PreRelease != "", "os": runtime.GOOS, "arch": runtime.GOARCH, "goVersion": runtime.Version(), } return out, nil } func init() { Add(Call{ Path: "core/obscure", Fn: rcObscure, Title: "Obscures a string passed in.", Help: ` Pass a clear string and rclone will obscure it for the config file: - clear - string Returns - obscured - string `, }) } // Return obscured string func rcObscure(ctx context.Context, in Params) (out Params, err error) { clear, err := in.GetString("clear") if err != nil { return nil, err } obscured, err := obscure.Obscure(clear) if err != nil { return nil, err } out = Params{ "obscured": obscured, } return out, nil } func init() { Add(Call{ Path: "core/quit", Fn: rcQuit, Title: "Terminates the app.", Help: ` (optional) Pass an exit code to be used for terminating the app: - exitCode - int `, }) } // Terminates app func rcQuit(ctx context.Context, in Params) (out Params, err error) { code, err := in.GetInt64("exitCode") if IsErrParamInvalid(err) { return nil, err } if IsErrParamNotFound(err) { code = 0 } exitCode := int(code) go func(exitCode int) { time.Sleep(time.Millisecond * 1500) atexit.Run() os.Exit(exitCode) }(exitCode) return nil, nil } func init() { Add(Call{ Path: "debug/set-mutex-profile-fraction", Fn: rcSetMutexProfileFraction, Title: "Set runtime.SetMutexProfileFraction for mutex profiling.", Help: ` SetMutexProfileFraction controls the fraction of mutex contention events that are reported in the mutex profile. On average 1/rate events are reported. The previous rate is returned. To turn off profiling entirely, pass rate 0. To just read the current rate, pass rate < 0. (For n>1 the details of sampling may change.) Once this is set you can look use this to profile the mutex contention: go tool pprof http://localhost:5572/debug/pprof/mutex Parameters - rate - int Results - previousRate - int `, }) } // Terminates app func rcSetMutexProfileFraction(ctx context.Context, in Params) (out Params, err error) { rate, err := in.GetInt64("rate") if err != nil { return nil, err } previousRate := runtime.SetMutexProfileFraction(int(rate)) out = make(Params) out["previousRate"] = previousRate return out, nil } func init() { Add(Call{ Path: "debug/set-block-profile-rate", Fn: rcSetBlockProfileRate, Title: "Set runtime.SetBlockProfileRate for blocking profiling.", Help: ` SetBlockProfileRate controls the fraction of goroutine blocking events that are reported in the blocking profile. The profiler aims to sample an average of one blocking event per rate nanoseconds spent blocked. 
To include every blocking event in the profile, pass rate = 1. To turn off profiling entirely, pass rate <= 0. After calling this you can use this to see the blocking profile: go tool pprof http://localhost:5572/debug/pprof/block Parameters - rate - int `, }) } // Terminates app func rcSetBlockProfileRate(ctx context.Context, in Params) (out Params, err error) { rate, err := in.GetInt64("rate") if err != nil { return nil, err } runtime.SetBlockProfileRate(int(rate)) return nil, nil } func init() { Add(Call{ Path: "core/command", AuthRequired: true, Fn: rcRunCommand, NeedsRequest: true, NeedsResponse: true, Title: "Run a rclone terminal command over rc.", Help: `This takes the following parameters - command - a string with the command name - arg - a list of arguments for the backend command - opt - a map of string to string of options Returns - result - result from the backend command - error - set if rclone exits with an error code - returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT". "STREAM_ONLY_STDERR") For example rclone rc core/command command=ls -a mydrive:/ -o max-depth=1 rclone rc core/command -a ls -a mydrive:/ -o max-depth=1 Returns ` + "```" + ` { "error": false, "result": "" } OR { "error": true, "result": "" } ` + "```" + ` `, }) } // rcRunCommand runs an rclone command with the given args and flags func rcRunCommand(ctx context.Context, in Params) (out Params, err error) { command, err := in.GetString("command") if err != nil { command = "" } var opt = map[string]string{} err = in.GetStructMissingOK("opt", &opt) if err != nil { return nil, err } var arg = []string{} err = in.GetStructMissingOK("arg", &arg) if err != nil { return nil, err } returnType, err := in.GetString("returnType") if err != nil { returnType = "COMBINED_OUTPUT" } var httpResponse *http.ResponseWriter httpResponse, err = in.GetHTTPResponseWriter() if err != nil { return nil, errors.Errorf("response object is required\n" + err.Error()) } var allArgs = []string{} if command != "" { // Add the command eg: ls to the args allArgs = append(allArgs, command) } // Add all from arg for _, cur := range arg { allArgs = append(allArgs, cur) } // Add flags to args for eg --max-depth 1 comes in as { max-depth 1 }. // Convert it to [ max-depth, 1 ] and append to args list for key, value := range opt { if len(key) == 1 { allArgs = append(allArgs, "-"+key) } else { allArgs = append(allArgs, "--"+key) } allArgs = append(allArgs, value) } // Get the path for the current executable which was used to run rclone. ex, err := os.Executable() if err != nil { return nil, err } cmd := exec.CommandContext(ctx, ex, allArgs...) if returnType == "COMBINED_OUTPUT" { // Run the command and get the output for error and stdout combined. 
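	// With COMBINED_OUTPUT both streams are buffered and returned in the
	// JSON result with "error" reflecting the exit status; the STREAM
	// variants below instead write directly to the HTTP response while the
	// command runs.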
out, err := cmd.CombinedOutput() if err != nil { return Params{ "result": string(out), "error": true, }, nil } return Params{ "result": string(out), "error": false, }, nil } else if returnType == "STREAM_ONLY_STDOUT" { cmd.Stdout = *httpResponse } else if returnType == "STREAM_ONLY_STDERR" { cmd.Stderr = *httpResponse } else if returnType == "STREAM" { cmd.Stdout = *httpResponse cmd.Stderr = *httpResponse } err = cmd.Run() return nil, err } rclone-1.53.3/fs/rc/internal_test.go000066400000000000000000000070661375552240400173230ustar00rootroot00000000000000package rc import ( "context" "fmt" "net/http" "net/http/httptest" "os" "runtime" "testing" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config/obscure" ) func TestMain(m *testing.M) { // Pretend to be rclone version if we have a version string parameter if os.Args[len(os.Args)-1] == "version" { fmt.Printf("rclone %s\n", fs.Version) os.Exit(0) } os.Exit(m.Run()) } func TestInternalNoop(t *testing.T) { call := Calls.Get("rc/noop") assert.NotNil(t, call) in := Params{ "String": "hello", "Int": 42, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) assert.Equal(t, in, out) } func TestInternalError(t *testing.T) { call := Calls.Get("rc/error") assert.NotNil(t, call) in := Params{} out, err := call.Fn(context.Background(), in) require.Error(t, err) require.Nil(t, out) } func TestInternalList(t *testing.T) { call := Calls.Get("rc/list") assert.NotNil(t, call) in := Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) assert.Equal(t, Params{"commands": Calls.List()}, out) } func TestCorePid(t *testing.T) { call := Calls.Get("core/pid") assert.NotNil(t, call) in := Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) pid := out["pid"] assert.NotEqual(t, nil, pid) _, ok := pid.(int) assert.Equal(t, true, ok) } func TestCoreMemstats(t *testing.T) { call := Calls.Get("core/memstats") assert.NotNil(t, call) in := Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) sys := out["Sys"] assert.NotEqual(t, nil, sys) _, ok := sys.(uint64) assert.Equal(t, true, ok) } func TestCoreGC(t *testing.T) { call := Calls.Get("core/gc") assert.NotNil(t, call) in := Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.Nil(t, out) assert.Equal(t, Params(nil), out) } func TestCoreVersion(t *testing.T) { call := Calls.Get("core/version") assert.NotNil(t, call) in := Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) assert.Equal(t, fs.Version, out["version"]) assert.Equal(t, runtime.GOOS, out["os"]) assert.Equal(t, runtime.GOARCH, out["arch"]) assert.Equal(t, runtime.Version(), out["goVersion"]) _ = out["isGit"].(bool) v := out["decomposed"].([]int64) assert.True(t, len(v) >= 2) } func TestCoreObscure(t *testing.T) { call := Calls.Get("core/obscure") assert.NotNil(t, call) in := Params{ "clear": "potato", } out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) assert.Equal(t, in["clear"], obscure.MustReveal(out["obscured"].(string))) } func TestCoreQuit(t *testing.T) { //The call should return an error if param exitCode is not parsed to int call := Calls.Get("core/quit") assert.NotNil(t, call) in := Params{ "exitCode": "potato", } _, err := call.Fn(context.Background(), in) 
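	// "potato" cannot be parsed as an int64, so rcQuit must return the
	// invalid parameter error rather than scheduling the delayed os.Exit.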
require.Error(t, err) } // core/command: Runs a raw rclone command func TestCoreCommand(t *testing.T) { call := Calls.Get("core/command") var httpResponse http.ResponseWriter = httptest.NewRecorder() in := Params{ "command": "version", "opt": map[string]string{}, "arg": []string{}, "_response": &httpResponse, } got, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, fmt.Sprintf("rclone %s\n", fs.Version), got["result"]) assert.Equal(t, false, got["error"]) } rclone-1.53.3/fs/rc/jobs/000077500000000000000000000000001375552240400150455ustar00rootroot00000000000000rclone-1.53.3/fs/rc/jobs/job.go000066400000000000000000000170141375552240400161510ustar00rootroot00000000000000// Manage background jobs that the rc is running package jobs import ( "context" "fmt" "runtime/debug" "sync" "sync/atomic" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/rc" ) // Job describes an asynchronous task started via the rc package type Job struct { mu sync.Mutex ID int64 `json:"id"` Group string `json:"group"` StartTime time.Time `json:"startTime"` EndTime time.Time `json:"endTime"` Error string `json:"error"` Finished bool `json:"finished"` Success bool `json:"success"` Duration float64 `json:"duration"` Output rc.Params `json:"output"` Stop func() `json:"-"` // realErr is the Error before printing it as a string, it's used to return // the real error to the upper application layers while still printing the // string error message. realErr error } // Jobs describes a collection of running tasks type Jobs struct { mu sync.RWMutex jobs map[int64]*Job opt *rc.Options expireRunning bool } var ( running = newJobs() jobID = int64(0) ) // newJobs makes a new Jobs structure func newJobs() *Jobs { return &Jobs{ jobs: map[int64]*Job{}, opt: &rc.DefaultOpt, } } // SetOpt sets the options when they are known func SetOpt(opt *rc.Options) { running.opt = opt } // SetInitialJobID allows for setting jobID before starting any jobs. 
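// For example, after SetInitialJobID(1000) the first job started is
// given ID 1001.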
func SetInitialJobID(id int64) { if !atomic.CompareAndSwapInt64(&jobID, 0, id) { panic("Setting jobID is only possible before starting any jobs") } } // kickExpire makes sure Expire is running func (jobs *Jobs) kickExpire() { jobs.mu.Lock() defer jobs.mu.Unlock() if !jobs.expireRunning { time.AfterFunc(jobs.opt.JobExpireInterval, jobs.Expire) jobs.expireRunning = true } } // Expire expires any jobs that haven't been collected func (jobs *Jobs) Expire() { jobs.mu.Lock() defer jobs.mu.Unlock() now := time.Now() for ID, job := range jobs.jobs { job.mu.Lock() if job.Finished && now.Sub(job.EndTime) > jobs.opt.JobExpireDuration { delete(jobs.jobs, ID) } job.mu.Unlock() } if len(jobs.jobs) != 0 { time.AfterFunc(jobs.opt.JobExpireInterval, jobs.Expire) jobs.expireRunning = true } else { jobs.expireRunning = false } } // IDs returns the IDs of the running jobs func (jobs *Jobs) IDs() (IDs []int64) { jobs.mu.RLock() defer jobs.mu.RUnlock() IDs = []int64{} for ID := range jobs.jobs { IDs = append(IDs, ID) } return IDs } // Get a job with a given ID or nil if it doesn't exist func (jobs *Jobs) Get(ID int64) *Job { jobs.mu.RLock() defer jobs.mu.RUnlock() return jobs.jobs[ID] } // mark the job as finished func (job *Job) finish(out rc.Params, err error) { job.mu.Lock() job.EndTime = time.Now() if out == nil { out = make(rc.Params) } job.Output = out job.Duration = job.EndTime.Sub(job.StartTime).Seconds() if err != nil { job.realErr = err job.Error = err.Error() job.Success = false } else { job.realErr = nil job.Error = "" job.Success = true } job.Finished = true job.mu.Unlock() running.kickExpire() // make sure this job gets expired } // run the job until completion writing the return status func (job *Job) run(ctx context.Context, fn rc.Func, in rc.Params) { defer func() { if r := recover(); r != nil { job.finish(nil, errors.Errorf("panic received: %v \n%s", r, string(debug.Stack()))) } }() job.finish(fn(ctx, in)) } func getGroup(in rc.Params) string { // Check to see if the group is set group, err := in.GetString("_group") if rc.NotErrParamNotFound(err) { fs.Errorf(nil, "Can't get _group param %+v", err) } delete(in, "_group") return group } // NewAsyncJob start a new asynchronous Job off func (jobs *Jobs) NewAsyncJob(fn rc.Func, in rc.Params) *Job { id := atomic.AddInt64(&jobID, 1) group := getGroup(in) if group == "" { group = fmt.Sprintf("job/%d", id) } ctx := accounting.WithStatsGroup(context.Background(), group) ctx, cancel := context.WithCancel(ctx) stop := func() { cancel() // Wait for cancel to propagate before returning. <-ctx.Done() } job := &Job{ ID: id, Group: group, StartTime: time.Now(), Stop: stop, } jobs.mu.Lock() jobs.jobs[job.ID] = job jobs.mu.Unlock() go job.run(ctx, fn, in) return job } // NewSyncJob start a new synchronous Job off func (jobs *Jobs) NewSyncJob(ctx context.Context, in rc.Params) (*Job, context.Context) { id := atomic.AddInt64(&jobID, 1) group := getGroup(in) if group == "" { group = fmt.Sprintf("job/%d", id) } ctxG := accounting.WithStatsGroup(ctx, fmt.Sprintf("job/%d", id)) ctx, cancel := context.WithCancel(ctxG) stop := func() { cancel() // Wait for cancel to propagate before returning. <-ctx.Done() } job := &Job{ ID: id, Group: group, StartTime: time.Now(), Stop: stop, } jobs.mu.Lock() jobs.jobs[job.ID] = job jobs.mu.Unlock() return job, ctx } // StartAsyncJob starts a new job asynchronously and returns a Param suitable // for output. 
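// A minimal sketch of a call site (listFn stands for any registered
// rc.Func):
//
//	out, _ := StartAsyncJob(listFn, rc.Params{"_group": "job/mine"})
//	jobid := out["jobid"].(int64) // poll this ID via job/status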
func StartAsyncJob(fn rc.Func, in rc.Params) (rc.Params, error) { job := running.NewAsyncJob(fn, in) out := make(rc.Params) out["jobid"] = job.ID return out, nil } // ExecuteJob executes new job synchronously and returns a Param suitable for // output. func ExecuteJob(ctx context.Context, fn rc.Func, in rc.Params) (rc.Params, int64, error) { job, ctx := running.NewSyncJob(ctx, in) job.run(ctx, fn, in) return job.Output, job.ID, job.realErr } func init() { rc.Add(rc.Call{ Path: "job/status", Fn: rcJobStatus, Title: "Reads the status of the job ID", Help: `Parameters - jobid - id of the job (integer) Results - finished - boolean - duration - time in seconds that the job ran for - endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00") - error - error from the job or empty string for no error - finished - boolean whether the job has finished or not - id - as passed in above - startTime - time the job started (eg "2018-10-26T18:50:20.528336039+01:00") - success - boolean - true for success false otherwise - output - output of the job as would have been returned if called synchronously - progress - output of the progress related to the underlying job `, }) } // Returns the status of a job func rcJobStatus(ctx context.Context, in rc.Params) (out rc.Params, err error) { jobID, err := in.GetInt64("jobid") if err != nil { return nil, err } job := running.Get(jobID) if job == nil { return nil, errors.New("job not found") } job.mu.Lock() defer job.mu.Unlock() out = make(rc.Params) err = rc.Reshape(&out, job) if err != nil { return nil, errors.Wrap(err, "reshape failed in job status") } return out, nil } func init() { rc.Add(rc.Call{ Path: "job/list", Fn: rcJobList, Title: "Lists the IDs of the running jobs", Help: `Parameters - None Results - jobids - array of integer job ids `, }) } // Returns list of job ids. func rcJobList(ctx context.Context, in rc.Params) (out rc.Params, err error) { out = make(rc.Params) out["jobids"] = running.IDs() return out, nil } func init() { rc.Add(rc.Call{ Path: "job/stop", Fn: rcJobStop, Title: "Stop the running job", Help: `Parameters - jobid - id of the job (integer) `, }) } // Stops the running job. 
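// Stopping works by cancelling the job's context, so the wrapped rc.Func
// must watch ctx.Done() for the stop to take effect promptly.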
func rcJobStop(ctx context.Context, in rc.Params) (out rc.Params, err error) { jobID, err := in.GetInt64("jobid") if err != nil { return nil, err } job := running.Get(jobID) if job == nil { return nil, errors.New("job not found") } job.mu.Lock() defer job.mu.Unlock() out = make(rc.Params) job.Stop() return out, nil } rclone-1.53.3/fs/rc/jobs/job_test.go000066400000000000000000000216751375552240400172200ustar00rootroot00000000000000package jobs import ( "context" "runtime" "testing" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/fs/rc/rcflags" "github.com/rclone/rclone/fstest/testy" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestNewJobs(t *testing.T) { jobs := newJobs() assert.Equal(t, 0, len(jobs.jobs)) } func TestJobsKickExpire(t *testing.T) { testy.SkipUnreliable(t) jobs := newJobs() jobs.opt.JobExpireInterval = time.Millisecond assert.Equal(t, false, jobs.expireRunning) jobs.kickExpire() jobs.mu.Lock() assert.Equal(t, true, jobs.expireRunning) jobs.mu.Unlock() time.Sleep(10 * time.Millisecond) jobs.mu.Lock() assert.Equal(t, false, jobs.expireRunning) jobs.mu.Unlock() } func TestJobsExpire(t *testing.T) { testy.SkipUnreliable(t) wait := make(chan struct{}) jobs := newJobs() jobs.opt.JobExpireInterval = time.Millisecond assert.Equal(t, false, jobs.expireRunning) job := jobs.NewAsyncJob(func(ctx context.Context, in rc.Params) (rc.Params, error) { defer close(wait) return in, nil }, rc.Params{}) <-wait assert.Equal(t, 1, len(jobs.jobs)) jobs.Expire() assert.Equal(t, 1, len(jobs.jobs)) jobs.mu.Lock() job.mu.Lock() job.EndTime = time.Now().Add(-rcflags.Opt.JobExpireDuration - 60*time.Second) assert.Equal(t, true, jobs.expireRunning) job.mu.Unlock() jobs.mu.Unlock() time.Sleep(250 * time.Millisecond) jobs.mu.Lock() assert.Equal(t, false, jobs.expireRunning) assert.Equal(t, 0, len(jobs.jobs)) jobs.mu.Unlock() } var noopFn = func(ctx context.Context, in rc.Params) (rc.Params, error) { return nil, nil } func TestJobsIDs(t *testing.T) { jobs := newJobs() job1 := jobs.NewAsyncJob(noopFn, rc.Params{}) job2 := jobs.NewAsyncJob(noopFn, rc.Params{}) wantIDs := []int64{job1.ID, job2.ID} gotIDs := jobs.IDs() require.Equal(t, 2, len(gotIDs)) if gotIDs[0] != wantIDs[0] { gotIDs[0], gotIDs[1] = gotIDs[1], gotIDs[0] } assert.Equal(t, wantIDs, gotIDs) } func TestJobsGet(t *testing.T) { jobs := newJobs() job := jobs.NewAsyncJob(noopFn, rc.Params{}) assert.Equal(t, job, jobs.Get(job.ID)) assert.Nil(t, jobs.Get(123123123123)) } var longFn = func(ctx context.Context, in rc.Params) (rc.Params, error) { time.Sleep(1 * time.Hour) return nil, nil } var shortFn = func(ctx context.Context, in rc.Params) (rc.Params, error) { time.Sleep(time.Millisecond) return nil, nil } var ctxFn = func(ctx context.Context, in rc.Params) (rc.Params, error) { select { case <-ctx.Done(): return nil, ctx.Err() } } const ( sleepTime = 100 * time.Millisecond floatSleepTime = float64(sleepTime) / 1e9 / 2 ) // sleep for some time so job.Duration is non-0 func sleepJob() { time.Sleep(sleepTime) } func TestJobFinish(t *testing.T) { jobs := newJobs() job := jobs.NewAsyncJob(longFn, rc.Params{}) sleepJob() assert.Equal(t, true, job.EndTime.IsZero()) assert.Equal(t, rc.Params(nil), job.Output) assert.Equal(t, 0.0, job.Duration) assert.Equal(t, "", job.Error) assert.Equal(t, false, job.Success) assert.Equal(t, false, job.Finished) wantOut := rc.Params{"a": 1} job.finish(wantOut, nil) assert.Equal(t, false, job.EndTime.IsZero()) assert.Equal(t, wantOut, job.Output) 
assert.True(t, job.Duration >= floatSleepTime) assert.Equal(t, "", job.Error) assert.Equal(t, true, job.Success) assert.Equal(t, true, job.Finished) job = jobs.NewAsyncJob(longFn, rc.Params{}) sleepJob() job.finish(nil, nil) assert.Equal(t, false, job.EndTime.IsZero()) assert.Equal(t, rc.Params{}, job.Output) assert.True(t, job.Duration >= floatSleepTime) assert.Equal(t, "", job.Error) assert.Equal(t, true, job.Success) assert.Equal(t, true, job.Finished) job = jobs.NewAsyncJob(longFn, rc.Params{}) sleepJob() job.finish(wantOut, errors.New("potato")) assert.Equal(t, false, job.EndTime.IsZero()) assert.Equal(t, wantOut, job.Output) assert.True(t, job.Duration >= floatSleepTime) assert.Equal(t, "potato", job.Error) assert.Equal(t, false, job.Success) assert.Equal(t, true, job.Finished) } // We've tested the functionality of run() already as it is // part of NewJob, now just test the panic catching func TestJobRunPanic(t *testing.T) { wait := make(chan struct{}) boom := func(ctx context.Context, in rc.Params) (rc.Params, error) { sleepJob() defer close(wait) panic("boom") } jobs := newJobs() job := jobs.NewAsyncJob(boom, rc.Params{}) <-wait runtime.Gosched() // yield to make sure job is updated // Wait a short time for the panic to propagate for i := uint(0); i < 10; i++ { job.mu.Lock() e := job.Error job.mu.Unlock() if e != "" { break } time.Sleep(time.Millisecond << i) } job.mu.Lock() assert.Equal(t, false, job.EndTime.IsZero()) assert.Equal(t, rc.Params{}, job.Output) assert.True(t, job.Duration >= floatSleepTime) assert.Contains(t, job.Error, "panic received: boom") assert.Equal(t, false, job.Success) assert.Equal(t, true, job.Finished) job.mu.Unlock() } func TestJobsNewJob(t *testing.T) { jobID = 0 jobs := newJobs() job := jobs.NewAsyncJob(noopFn, rc.Params{}) assert.Equal(t, int64(1), job.ID) assert.Equal(t, job, jobs.Get(1)) assert.NotEmpty(t, job.Stop) } func TestStartJob(t *testing.T) { jobID = 0 out, err := StartAsyncJob(longFn, rc.Params{}) assert.NoError(t, err) assert.Equal(t, rc.Params{"jobid": int64(1)}, out) } func TestExecuteJob(t *testing.T) { jobID = 0 _, id, err := ExecuteJob(context.Background(), shortFn, rc.Params{}) assert.NoError(t, err) assert.Equal(t, int64(1), id) } func TestExecuteJobErrorPropagation(t *testing.T) { jobID = 0 testErr := errors.New("test error") errorFn := func(ctx context.Context, in rc.Params) (out rc.Params, err error) { return nil, testErr } _, _, err := ExecuteJob(context.Background(), errorFn, rc.Params{}) assert.Equal(t, testErr, err) } func TestRcJobStatus(t *testing.T) { jobID = 0 _, err := StartAsyncJob(longFn, rc.Params{}) assert.NoError(t, err) call := rc.Calls.Get("job/status") assert.NotNil(t, call) in := rc.Params{"jobid": 1} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, out) assert.Equal(t, float64(1), out["id"]) assert.Equal(t, "", out["error"]) assert.Equal(t, false, out["finished"]) assert.Equal(t, false, out["success"]) in = rc.Params{"jobid": 123123123} _, err = call.Fn(context.Background(), in) require.Error(t, err) assert.Contains(t, err.Error(), "job not found") in = rc.Params{"jobidx": 123123123} _, err = call.Fn(context.Background(), in) require.Error(t, err) assert.Contains(t, err.Error(), "Didn't find key") } func TestRcJobList(t *testing.T) { jobID = 0 _, err := StartAsyncJob(longFn, rc.Params{}) assert.NoError(t, err) call := rc.Calls.Get("job/list") assert.NotNil(t, call) in := rc.Params{} out, err := call.Fn(context.Background(), in) require.NoError(t, err) require.NotNil(t, 
out)
	assert.Equal(t, rc.Params{"jobids": []int64{1}}, out)
}

func TestRcAsyncJobStop(t *testing.T) {
	jobID = 0
	_, err := StartAsyncJob(ctxFn, rc.Params{})
	assert.NoError(t, err)

	call := rc.Calls.Get("job/stop")
	assert.NotNil(t, call)
	in := rc.Params{"jobid": 1}
	out, err := call.Fn(context.Background(), in)
	require.NoError(t, err)
	require.Empty(t, out)

	in = rc.Params{"jobid": 123123123}
	_, err = call.Fn(context.Background(), in)
	require.Error(t, err)
	assert.Contains(t, err.Error(), "job not found")

	in = rc.Params{"jobidx": 123123123}
	_, err = call.Fn(context.Background(), in)
	require.Error(t, err)
	assert.Contains(t, err.Error(), "Didn't find key")

	time.Sleep(10 * time.Millisecond)

	call = rc.Calls.Get("job/status")
	assert.NotNil(t, call)
	in = rc.Params{"jobid": 1}
	out, err = call.Fn(context.Background(), in)
	require.NoError(t, err)
	require.NotNil(t, out)
	assert.Equal(t, float64(1), out["id"])
	assert.Equal(t, "context canceled", out["error"])
	assert.Equal(t, true, out["finished"])
	assert.Equal(t, false, out["success"])
}

func TestRcSyncJobStop(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		jobID = 0
		_, id, err := ExecuteJob(ctx, ctxFn, rc.Params{})
		assert.Error(t, err)
		assert.Equal(t, int64(1), id)
	}()
	time.Sleep(10 * time.Millisecond)

	call := rc.Calls.Get("job/stop")
	assert.NotNil(t, call)
	in := rc.Params{"jobid": 1}
	out, err := call.Fn(context.Background(), in)
	require.NoError(t, err)
	require.Empty(t, out)

	in = rc.Params{"jobid": 123123123}
	_, err = call.Fn(context.Background(), in)
	require.Error(t, err)
	assert.Contains(t, err.Error(), "job not found")

	in = rc.Params{"jobidx": 123123123}
	_, err = call.Fn(context.Background(), in)
	require.Error(t, err)
	assert.Contains(t, err.Error(), "Didn't find key")

	cancel()
	time.Sleep(10 * time.Millisecond)

	call = rc.Calls.Get("job/status")
	assert.NotNil(t, call)
	in = rc.Params{"jobid": 1}
	out, err = call.Fn(context.Background(), in)
	require.NoError(t, err)
	require.NotNil(t, out)
	assert.Equal(t, float64(1), out["id"])
	assert.Equal(t, "context canceled", out["error"])
	assert.Equal(t, true, out["finished"])
	assert.Equal(t, false, out["success"])
}
rclone-1.53.3/fs/rc/js/000077500000000000000000000000001375552240400145245ustar00rootroot00000000000000rclone-1.53.3/fs/rc/js/.gitignore000066400000000000000000000000071375552240400165110ustar00rootroot00000000000000*.wasm
rclone-1.53.3/fs/rc/js/Makefile000066400000000000000000000001231375552240400161600ustar00rootroot00000000000000build:
	GOARCH=wasm GOOS=js go build -o rclone.wasm

serve: build
	go run serve.go
rclone-1.53.3/fs/rc/js/README.md000066400000000000000000000015251375552240400160060ustar00rootroot00000000000000# Rclone as WASM

This directory contains files to use the rclone rc as a library in the browser. This works by compiling rclone to WASM and loading that in via JavaScript.

This contains the following files

- `index.html` - test web page to load the module
- `loader.js` - JavaScript to load the module - see the end of that file for usage
- `main.go` - main Go code exporting the rclone rc
- `Makefile` - test makefile
- `README.md` - this readme
- `serve.go` - test program to serve the web page
- `wasm_exec.js` - interface code from the Go source - don't edit

## Compiling

This can be compiled with `make` or alternatively `GOARCH=wasm GOOS=js go build -o rclone.wasm`

## Running

Run the test server with `make serve` and examine the page at http://localhost:3000/ - look at the JavaScript console, and at the end of `loader.js` for how the calls are made.
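
## Calling the rc

Once `loader.js` has resolved the `rcValid` promise, the page can call the exported `rc` function with a method name and an input object (or `null`). A minimal sketch, mirroring the examples at the end of `loader.js`:

```js
rcValid.then(() => {
    // rc(method, input) returns an output object; a result carrying both
    // "error" and "status" keys indicates a failed call
    console.log(rc("core/version", null));
    console.log(rc("operations/mkdir", {"fs": ":memory:", "remote": "bucket"}));
});
```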
rclone-1.53.3/fs/rc/js/index.html000066400000000000000000000004011375552240400165140ustar00rootroot00000000000000<html>
    <head>
        <title>Rclone</title>
        <script src="loader.js"></script>
    </head>
    <body>
        Welcome to rclone - check the console
    </body>
</html>
rclone-1.53.3/fs/rc/js/loader.js000066400000000000000000000023601375552240400163310ustar00rootroot00000000000000var rc;
var rcValidResolve;
var rcValid = new Promise(resolve => {
    rcValidResolve = resolve;
});

var script = document.createElement('script');
script.src = "wasm_exec.js";
script.onload = function () {
    if (!WebAssembly.instantiateStreaming) { // polyfill
        WebAssembly.instantiateStreaming = async (resp, importObject) => {
            const source = await (await resp).arrayBuffer();
            return await WebAssembly.instantiate(source, importObject);
        };
    }

    const go = new Go();
    WebAssembly.instantiateStreaming(fetch("rclone.wasm"), go.importObject).then((result) => {
        go.run(result.instance);
    });
};
document.head.appendChild(script);

rcValid.then(() => {
    // Some examples of using the rc call
    //
    // The rc call takes two parameters, method and input object, and
    // returns an output object.
    //
    // If the output object has an "error" and a "status" then it is an
    // error (it would be nice to signal this out of band).
    console.log("core/version", rc("core/version", null))
    console.log("rc/noop", rc("rc/noop", {"string":"one",number:2}))
    console.log("operations/mkdir", rc("operations/mkdir", {"fs":":memory:","remote":"bucket"}))
    console.log("operations/list", rc("operations/list", {"fs":":memory:","remote":"bucket"}))
})
rclone-1.53.3/fs/rc/js/main.go000066400000000000000000000067461375552240400160120ustar00rootroot00000000000000// Rclone as a wasm library
//
// This library exports the core rc functionality

// +build js

package main

import (
	"context"
	"encoding/json"
	"log"
	"net/http"
	"runtime"
	"syscall/js"

	"github.com/pkg/errors"
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/rc"

	// Core functionality we need
	_ "github.com/rclone/rclone/fs/operations"
	_ "github.com/rclone/rclone/fs/sync"

	// _ "github.com/rclone/rclone/backend/all" // import all backends

	// Backends
	_ "github.com/rclone/rclone/backend/memory"
)

var (
	document js.Value
	jsJSON   js.Value
)

func getElementById(name string) js.Value {
	node := document.Call("getElementById", name)
	if node.IsUndefined() {
		log.Fatalf("Couldn't find element %q", name)
	}
	return node
}

func time() int {
	return js.Global().Get("Date").New().Call("getTime").Int()
}

func paramToValue(in rc.Params) (out js.Value) {
	return js.Value{}
}

// errorValue turns an error into a js.Value
func errorValue(method string, in js.Value, err error) js.Value {
	fs.Errorf(nil, "rc: %q: error: %v", method, err)
	// Adjust the error return for some well known errors
	errOrig := errors.Cause(err)
	status := http.StatusInternalServerError
	switch {
	case errOrig == fs.ErrorDirNotFound || errOrig == fs.ErrorObjectNotFound:
		status = http.StatusNotFound
	case rc.IsErrParamInvalid(err) || rc.IsErrParamNotFound(err):
		status = http.StatusBadRequest
	}
	return js.ValueOf(map[string]interface{}{
		"status": status,
		"error":  err.Error(),
		"input":  in,
		"path":   method,
	})
}

// rcCallback is a callback for JavaScript to access the api
//
// FIXME should this return a promise so we can return errors properly?
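//
// Usage from the browser side (a sketch - "rc/noop" is a standard rc
// method which echoes its input back as the output):
//
//	rcValid.then(() => {
//	    console.log(rc("rc/noop", {"echo": "hello"}));
//	});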
func rcCallback(this js.Value, args []js.Value) interface{} { ctx := context.Background() // FIXME log.Printf("rcCallback: this=%v args=%v", this, args) if len(args) != 2 { return errorValue("", js.Undefined(), errors.New("need two parameters to rc call")) } method := args[0].String() inRaw := args[1] var in = rc.Params{} switch inRaw.Type() { case js.TypeNull: case js.TypeObject: inJSON := jsJSON.Call("stringify", inRaw).String() err := json.Unmarshal([]byte(inJSON), &in) if err != nil { return errorValue(method, inRaw, errors.Wrap(err, "couldn't unmarshal input")) } default: return errorValue(method, inRaw, errors.New("in parameter must be null or object")) } call := rc.Calls.Get(method) if call == nil { return errorValue(method, inRaw, errors.Errorf("method %q not found", method)) } out, err := call.Fn(ctx, in) if err != nil { return errorValue(method, inRaw, errors.Wrap(err, "method call failed")) } if out == nil { return nil } var out2 map[string]interface{} err = rc.Reshape(&out2, out) if err != nil { return errorValue(method, inRaw, errors.Wrap(err, "result reshape failed")) } return js.ValueOf(out2) } func main() { log.Printf("Running on goos/goarch = %s/%s", runtime.GOOS, runtime.GOARCH) if js.Global().IsUndefined() { log.Fatalf("Didn't find Global - not running in browser") } document = js.Global().Get("document") if document.IsUndefined() { log.Fatalf("Didn't find document - not running in browser") } jsJSON = js.Global().Get("JSON") if jsJSON.IsUndefined() { log.Fatalf("can't find JSON") } // Set rc js.Global().Set("rc", js.FuncOf(rcCallback)) // Signal that it is valid rcValidResolve := js.Global().Get("rcValidResolve") if rcValidResolve.IsUndefined() { log.Fatalf("Didn't find rcValidResolve") } rcValidResolve.Invoke() // Wait forever select {} } rclone-1.53.3/fs/rc/js/serve.go000066400000000000000000000005641375552240400162040ustar00rootroot00000000000000//+build none package main import ( "fmt" "log" "mime" "net/http" ) func main() { mime.AddExtensionType(".wasm", "application/wasm") mime.AddExtensionType(".js", "application/javascript") mux := http.NewServeMux() mux.Handle("/", http.FileServer(http.Dir("."))) fmt.Printf("Serving on http://localhost:3000/\n") log.Fatal(http.ListenAndServe(":3000", mux)) } rclone-1.53.3/fs/rc/js/wasm_exec.js000066400000000000000000000413221375552240400170370ustar00rootroot00000000000000// Copyright 2018 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. (() => { // Map multiple JavaScript environments to a single common API, // preferring web standards over Node.js API. 
// // Environments considered: // - Browsers // - Node.js // - Electron // - Parcel if (typeof global !== "undefined") { // global already exists } else if (typeof window !== "undefined") { window.global = window; } else if (typeof self !== "undefined") { self.global = self; } else { throw new Error("cannot export Go (neither global, window nor self is defined)"); } if (!global.require && typeof require !== "undefined") { global.require = require; } if (!global.fs && global.require) { const fs = require("fs"); if (Object.keys(fs) !== 0) { global.fs = fs; } } const enosys = () => { const err = new Error("not implemented"); err.code = "ENOSYS"; return err; }; if (!global.fs) { let outputBuf = ""; global.fs = { constants: { O_WRONLY: -1, O_RDWR: -1, O_CREAT: -1, O_TRUNC: -1, O_APPEND: -1, O_EXCL: -1 }, // unused writeSync(fd, buf) { outputBuf += decoder.decode(buf); const nl = outputBuf.lastIndexOf("\n"); if (nl != -1) { console.log(outputBuf.substr(0, nl)); outputBuf = outputBuf.substr(nl + 1); } return buf.length; }, write(fd, buf, offset, length, position, callback) { if (offset !== 0 || length !== buf.length || position !== null) { callback(enosys()); return; } const n = this.writeSync(fd, buf); callback(null, n); }, chmod(path, mode, callback) { callback(enosys()); }, chown(path, uid, gid, callback) { callback(enosys()); }, close(fd, callback) { callback(enosys()); }, fchmod(fd, mode, callback) { callback(enosys()); }, fchown(fd, uid, gid, callback) { callback(enosys()); }, fstat(fd, callback) { callback(enosys()); }, fsync(fd, callback) { callback(null); }, ftruncate(fd, length, callback) { callback(enosys()); }, lchown(path, uid, gid, callback) { callback(enosys()); }, link(path, link, callback) { callback(enosys()); }, lstat(path, callback) { callback(enosys()); }, mkdir(path, perm, callback) { callback(enosys()); }, open(path, flags, mode, callback) { callback(enosys()); }, read(fd, buffer, offset, length, position, callback) { callback(enosys()); }, readdir(path, callback) { callback(enosys()); }, readlink(path, callback) { callback(enosys()); }, rename(from, to, callback) { callback(enosys()); }, rmdir(path, callback) { callback(enosys()); }, stat(path, callback) { callback(enosys()); }, symlink(path, link, callback) { callback(enosys()); }, truncate(path, length, callback) { callback(enosys()); }, unlink(path, callback) { callback(enosys()); }, utimes(path, atime, mtime, callback) { callback(enosys()); }, }; } if (!global.process) { global.process = { getuid() { return -1; }, getgid() { return -1; }, geteuid() { return -1; }, getegid() { return -1; }, getgroups() { throw enosys(); }, pid: -1, ppid: -1, umask() { throw enosys(); }, cwd() { throw enosys(); }, chdir() { throw enosys(); }, } } if (!global.crypto) { const nodeCrypto = require("crypto"); global.crypto = { getRandomValues(b) { nodeCrypto.randomFillSync(b); }, }; } if (!global.performance) { global.performance = { now() { const [sec, nsec] = process.hrtime(); return sec * 1000 + nsec / 1000000; }, }; } if (!global.TextEncoder) { global.TextEncoder = require("util").TextEncoder; } if (!global.TextDecoder) { global.TextDecoder = require("util").TextDecoder; } // End of polyfills for common API. 
const encoder = new TextEncoder("utf-8"); const decoder = new TextDecoder("utf-8"); global.Go = class { constructor() { this.argv = ["js"]; this.env = {}; this.exit = (code) => { if (code !== 0) { console.warn("exit code:", code); } }; this._exitPromise = new Promise((resolve) => { this._resolveExitPromise = resolve; }); this._pendingEvent = null; this._scheduledTimeouts = new Map(); this._nextCallbackTimeoutID = 1; const setInt64 = (addr, v) => { this.mem.setUint32(addr + 0, v, true); this.mem.setUint32(addr + 4, Math.floor(v / 4294967296), true); } const getInt64 = (addr) => { const low = this.mem.getUint32(addr + 0, true); const high = this.mem.getInt32(addr + 4, true); return low + high * 4294967296; } const loadValue = (addr) => { const f = this.mem.getFloat64(addr, true); if (f === 0) { return undefined; } if (!isNaN(f)) { return f; } const id = this.mem.getUint32(addr, true); return this._values[id]; } const storeValue = (addr, v) => { const nanHead = 0x7FF80000; if (typeof v === "number" && v !== 0) { if (isNaN(v)) { this.mem.setUint32(addr + 4, nanHead, true); this.mem.setUint32(addr, 0, true); return; } this.mem.setFloat64(addr, v, true); return; } if (v === undefined) { this.mem.setFloat64(addr, 0, true); return; } let id = this._ids.get(v); if (id === undefined) { id = this._idPool.pop(); if (id === undefined) { id = this._values.length; } this._values[id] = v; this._goRefCounts[id] = 0; this._ids.set(v, id); } this._goRefCounts[id]++; let typeFlag = 0; switch (typeof v) { case "object": if (v !== null) { typeFlag = 1; } break; case "string": typeFlag = 2; break; case "symbol": typeFlag = 3; break; case "function": typeFlag = 4; break; } this.mem.setUint32(addr + 4, nanHead | typeFlag, true); this.mem.setUint32(addr, id, true); } const loadSlice = (addr) => { const array = getInt64(addr + 0); const len = getInt64(addr + 8); return new Uint8Array(this._inst.exports.mem.buffer, array, len); } const loadSliceOfValues = (addr) => { const array = getInt64(addr + 0); const len = getInt64(addr + 8); const a = new Array(len); for (let i = 0; i < len; i++) { a[i] = loadValue(array + i * 8); } return a; } const loadString = (addr) => { const saddr = getInt64(addr + 0); const len = getInt64(addr + 8); return decoder.decode(new DataView(this._inst.exports.mem.buffer, saddr, len)); } const timeOrigin = Date.now() - performance.now(); this.importObject = { go: { // Go's SP does not change as long as no Go code is running. Some operations (e.g. calls, getters and setters) // may synchronously trigger a Go event handler. This makes Go code get executed in the middle of the imported // function. A goroutine can switch to a new stack if the current stack is too small (see morestack function). // This changes the SP, thus we have to update the SP used by the imported function. 
// func wasmExit(code int32) "runtime.wasmExit": (sp) => { const code = this.mem.getInt32(sp + 8, true); this.exited = true; delete this._inst; delete this._values; delete this._goRefCounts; delete this._ids; delete this._idPool; this.exit(code); }, // func wasmWrite(fd uintptr, p unsafe.Pointer, n int32) "runtime.wasmWrite": (sp) => { const fd = getInt64(sp + 8); const p = getInt64(sp + 16); const n = this.mem.getInt32(sp + 24, true); fs.writeSync(fd, new Uint8Array(this._inst.exports.mem.buffer, p, n)); }, // func resetMemoryDataView() "runtime.resetMemoryDataView": (sp) => { this.mem = new DataView(this._inst.exports.mem.buffer); }, // func nanotime1() int64 "runtime.nanotime1": (sp) => { setInt64(sp + 8, (timeOrigin + performance.now()) * 1000000); }, // func walltime1() (sec int64, nsec int32) "runtime.walltime1": (sp) => { const msec = (new Date).getTime(); setInt64(sp + 8, msec / 1000); this.mem.setInt32(sp + 16, (msec % 1000) * 1000000, true); }, // func scheduleTimeoutEvent(delay int64) int32 "runtime.scheduleTimeoutEvent": (sp) => { const id = this._nextCallbackTimeoutID; this._nextCallbackTimeoutID++; this._scheduledTimeouts.set(id, setTimeout( () => { this._resume(); while (this._scheduledTimeouts.has(id)) { // for some reason Go failed to register the timeout event, log and try again // (temporary workaround for https://github.com/golang/go/issues/28975) console.warn("scheduleTimeoutEvent: missed timeout event"); this._resume(); } }, getInt64(sp + 8) + 1, // setTimeout has been seen to fire up to 1 millisecond early )); this.mem.setInt32(sp + 16, id, true); }, // func clearTimeoutEvent(id int32) "runtime.clearTimeoutEvent": (sp) => { const id = this.mem.getInt32(sp + 8, true); clearTimeout(this._scheduledTimeouts.get(id)); this._scheduledTimeouts.delete(id); }, // func getRandomData(r []byte) "runtime.getRandomData": (sp) => { crypto.getRandomValues(loadSlice(sp + 8)); }, // func finalizeRef(v ref) "syscall/js.finalizeRef": (sp) => { const id = this.mem.getUint32(sp + 8, true); this._goRefCounts[id]--; if (this._goRefCounts[id] === 0) { const v = this._values[id]; this._values[id] = null; this._ids.delete(v); this._idPool.push(id); } }, // func stringVal(value string) ref "syscall/js.stringVal": (sp) => { storeValue(sp + 24, loadString(sp + 8)); }, // func valueGet(v ref, p string) ref "syscall/js.valueGet": (sp) => { const result = Reflect.get(loadValue(sp + 8), loadString(sp + 16)); sp = this._inst.exports.getsp(); // see comment above storeValue(sp + 32, result); }, // func valueSet(v ref, p string, x ref) "syscall/js.valueSet": (sp) => { Reflect.set(loadValue(sp + 8), loadString(sp + 16), loadValue(sp + 32)); }, // func valueDelete(v ref, p string) "syscall/js.valueDelete": (sp) => { Reflect.deleteProperty(loadValue(sp + 8), loadString(sp + 16)); }, // func valueIndex(v ref, i int) ref "syscall/js.valueIndex": (sp) => { storeValue(sp + 24, Reflect.get(loadValue(sp + 8), getInt64(sp + 16))); }, // valueSetIndex(v ref, i int, x ref) "syscall/js.valueSetIndex": (sp) => { Reflect.set(loadValue(sp + 8), getInt64(sp + 16), loadValue(sp + 24)); }, // func valueCall(v ref, m string, args []ref) (ref, bool) "syscall/js.valueCall": (sp) => { try { const v = loadValue(sp + 8); const m = Reflect.get(v, loadString(sp + 16)); const args = loadSliceOfValues(sp + 32); const result = Reflect.apply(m, v, args); sp = this._inst.exports.getsp(); // see comment above storeValue(sp + 56, result); this.mem.setUint8(sp + 64, 1); } catch (err) { storeValue(sp + 56, err); this.mem.setUint8(sp + 64, 
0); } }, // func valueInvoke(v ref, args []ref) (ref, bool) "syscall/js.valueInvoke": (sp) => { try { const v = loadValue(sp + 8); const args = loadSliceOfValues(sp + 16); const result = Reflect.apply(v, undefined, args); sp = this._inst.exports.getsp(); // see comment above storeValue(sp + 40, result); this.mem.setUint8(sp + 48, 1); } catch (err) { storeValue(sp + 40, err); this.mem.setUint8(sp + 48, 0); } }, // func valueNew(v ref, args []ref) (ref, bool) "syscall/js.valueNew": (sp) => { try { const v = loadValue(sp + 8); const args = loadSliceOfValues(sp + 16); const result = Reflect.construct(v, args); sp = this._inst.exports.getsp(); // see comment above storeValue(sp + 40, result); this.mem.setUint8(sp + 48, 1); } catch (err) { storeValue(sp + 40, err); this.mem.setUint8(sp + 48, 0); } }, // func valueLength(v ref) int "syscall/js.valueLength": (sp) => { setInt64(sp + 16, parseInt(loadValue(sp + 8).length)); }, // valuePrepareString(v ref) (ref, int) "syscall/js.valuePrepareString": (sp) => { const str = encoder.encode(String(loadValue(sp + 8))); storeValue(sp + 16, str); setInt64(sp + 24, str.length); }, // valueLoadString(v ref, b []byte) "syscall/js.valueLoadString": (sp) => { const str = loadValue(sp + 8); loadSlice(sp + 16).set(str); }, // func valueInstanceOf(v ref, t ref) bool "syscall/js.valueInstanceOf": (sp) => { this.mem.setUint8(sp + 24, (loadValue(sp + 8) instanceof loadValue(sp + 16)) ? 1 : 0); }, // func copyBytesToGo(dst []byte, src ref) (int, bool) "syscall/js.copyBytesToGo": (sp) => { const dst = loadSlice(sp + 8); const src = loadValue(sp + 32); if (!(src instanceof Uint8Array || src instanceof Uint8ClampedArray)) { this.mem.setUint8(sp + 48, 0); return; } const toCopy = src.subarray(0, dst.length); dst.set(toCopy); setInt64(sp + 40, toCopy.length); this.mem.setUint8(sp + 48, 1); }, // func copyBytesToJS(dst ref, src []byte) (int, bool) "syscall/js.copyBytesToJS": (sp) => { const dst = loadValue(sp + 8); const src = loadSlice(sp + 16); if (!(dst instanceof Uint8Array || dst instanceof Uint8ClampedArray)) { this.mem.setUint8(sp + 48, 0); return; } const toCopy = src.subarray(0, dst.length); dst.set(toCopy); setInt64(sp + 40, toCopy.length); this.mem.setUint8(sp + 48, 1); }, "debug": (value) => { console.log(value); }, } }; } async run(instance) { this._inst = instance; this.mem = new DataView(this._inst.exports.mem.buffer); this._values = [ // JS values that Go currently has references to, indexed by reference id NaN, 0, null, true, false, global, this, ]; this._goRefCounts = new Array(this._values.length).fill(Infinity); // number of references that Go has to a JS value, indexed by reference id this._ids = new Map([ // mapping from JS values to reference ids [0, 1], [null, 2], [true, 3], [false, 4], [global, 5], [this, 6], ]); this._idPool = []; // unused ids that have been garbage collected this.exited = false; // whether the Go program has exited // Pass command line arguments and environment variables to WebAssembly by writing them to the linear memory. 
let offset = 4096; const strPtr = (str) => { const ptr = offset; const bytes = encoder.encode(str + "\0"); new Uint8Array(this.mem.buffer, offset, bytes.length).set(bytes); offset += bytes.length; if (offset % 8 !== 0) { offset += 8 - (offset % 8); } return ptr; }; const argc = this.argv.length; const argvPtrs = []; this.argv.forEach((arg) => { argvPtrs.push(strPtr(arg)); }); argvPtrs.push(0); const keys = Object.keys(this.env).sort(); keys.forEach((key) => { argvPtrs.push(strPtr(`${key}=${this.env[key]}`)); }); argvPtrs.push(0); const argv = offset; argvPtrs.forEach((ptr) => { this.mem.setUint32(offset, ptr, true); this.mem.setUint32(offset + 4, 0, true); offset += 8; }); this._inst.exports.run(argc, argv); if (this.exited) { this._resolveExitPromise(); } await this._exitPromise; } _resume() { if (this.exited) { throw new Error("Go program has already exited"); } this._inst.exports.resume(); if (this.exited) { this._resolveExitPromise(); } } _makeFuncWrapper(id) { const go = this; return function () { const event = { id: id, this: this, args: arguments }; go._pendingEvent = event; go._resume(); return event.result; }; } } if ( global.require && global.require.main === module && global.process && global.process.versions && !global.process.versions.electron ) { if (process.argv.length < 3) { console.error("usage: go_js_wasm_exec [wasm binary] [arguments]"); process.exit(1); } const go = new Go(); go.argv = process.argv.slice(2); go.env = Object.assign({ TMPDIR: require("os").tmpdir() }, process.env); go.exit = process.exit; WebAssembly.instantiate(fs.readFileSync(process.argv[2]), go.importObject).then((result) => { process.on("exit", (code) => { // Node.js exits if no event handler is pending if (code === 0 && !go.exited) { // deadlock, make Go print error and stack traces go._pendingEvent = { id: 0 }; go._resume(); } }); return go.run(result.instance); }).catch((err) => { console.error(err); process.exit(1); }); } })(); rclone-1.53.3/fs/rc/params.go000066400000000000000000000164101375552240400157240ustar00rootroot00000000000000// Parameter parsing package rc import ( "encoding/json" "fmt" "math" "net/http" "strconv" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" ) // Params is the input and output type for the Func type Params map[string]interface{} // ErrParamNotFound - this is returned from the Get* functions if the // parameter isn't found along with a zero value of the requested // item. // // Returning an error of this type from an rc.Func will cause the http // method to return http.StatusBadRequest type ErrParamNotFound string // Error turns this error into a string func (e ErrParamNotFound) Error() string { return fmt.Sprintf("Didn't find key %q in input", string(e)) } // IsErrParamNotFound returns whether err is ErrParamNotFound func IsErrParamNotFound(err error) bool { _, isNotFound := err.(ErrParamNotFound) return isNotFound } // NotErrParamNotFound returns true if err != nil and // !IsErrParamNotFound(err) // // This is for checking error returns of the Get* functions to ignore // error not found returns and take the default value. func NotErrParamNotFound(err error) bool { return err != nil && !IsErrParamNotFound(err) } // ErrParamInvalid - this is returned from the Get* functions if the // parameter is invalid. 
// // // Returning an error of this type from an rc.Func will cause the http // method to return http.StatusBadRequest type ErrParamInvalid struct { error } // IsErrParamInvalid returns whether err is ErrParamInvalid func IsErrParamInvalid(err error) bool { _, isInvalid := err.(ErrParamInvalid) return isInvalid } // Reshape reshapes one blob of data into another via json serialization // // out should be a pointer type // // This isn't a very efficient way of dealing with this! func Reshape(out interface{}, in interface{}) error { b, err := json.Marshal(in) if err != nil { return errors.Wrapf(err, "Reshape failed to Marshal") } err = json.Unmarshal(b, out) if err != nil { return errors.Wrapf(err, "Reshape failed to Unmarshal") } return nil } // Get gets a parameter from the input // // If the parameter isn't found then error will be of type // ErrParamNotFound and the returned value will be nil. func (p Params) Get(key string) (interface{}, error) { value, ok := p[key] if !ok { return nil, ErrParamNotFound(key) } return value, nil } // GetHTTPRequest gets a http.Request parameter associated with the request with the key "_request" // // If the parameter isn't found then error will be of type // ErrParamNotFound and the returned value will be nil. func (p Params) GetHTTPRequest() (*http.Request, error) { key := "_request" value, err := p.Get(key) if err != nil { return nil, err } request, ok := value.(*http.Request) if !ok { return nil, ErrParamInvalid{errors.Errorf("expecting http.request value for key %q (was %T)", key, value)} } return request, nil } // GetHTTPResponseWriter gets a http.ResponseWriter parameter associated with the request with the key "_response" // // If the parameter isn't found then error will be of type // ErrParamNotFound and the returned value will be nil. func (p Params) GetHTTPResponseWriter() (*http.ResponseWriter, error) { key := "_response" value, err := p.Get(key) if err != nil { return nil, err } request, ok := value.(*http.ResponseWriter) if !ok { return nil, ErrParamInvalid{errors.Errorf("expecting *http.ResponseWriter value for key %q (was %T)", key, value)} } return request, nil } // GetString gets a string parameter from the input // // If the parameter isn't found then error will be of type // ErrParamNotFound and the returned value will be "". func (p Params) GetString(key string) (string, error) { value, err := p.Get(key) if err != nil { return "", err } str, ok := value.(string) if !ok { return "", ErrParamInvalid{errors.Errorf("expecting string value for key %q (was %T)", key, value)} } return str, nil } // GetInt64 gets an int64 parameter from the input // // If the parameter isn't found then error will be of type // ErrParamNotFound and the returned value will be 0. 
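//
// A sketch of the accepted forms - ints, int64s, in-range float64s and
// decimal strings all convert, so each of these returns int64(42):
//
//	in := Params{"a": 42, "b": int64(42), "c": 42.0, "d": "42"}
//	n, err := in.GetInt64("a") // likewise for "b", "c" and "d"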
func (p Params) GetInt64(key string) (int64, error) { value, err := p.Get(key) if err != nil { return 0, err } switch x := value.(type) { case int: return int64(x), nil case int64: return x, nil case float64: if x > math.MaxInt64 || x < math.MinInt64 { return 0, ErrParamInvalid{errors.Errorf("key %q (%v) overflows int64 ", key, value)} } return int64(x), nil case string: i, err := strconv.ParseInt(x, 10, 0) if err != nil { return 0, ErrParamInvalid{errors.Wrapf(err, "couldn't parse key %q (%v) as int64", key, value)} } return i, nil } return 0, ErrParamInvalid{errors.Errorf("expecting int64 value for key %q (was %T)", key, value)} } // GetFloat64 gets a float64 parameter from the input // // If the parameter isn't found then error will be of type // ErrParamNotFound and the returned value will be 0. func (p Params) GetFloat64(key string) (float64, error) { value, err := p.Get(key) if err != nil { return 0, err } switch x := value.(type) { case float64: return x, nil case int: return float64(x), nil case int64: return float64(x), nil case string: f, err := strconv.ParseFloat(x, 64) if err != nil { return 0, ErrParamInvalid{errors.Wrapf(err, "couldn't parse key %q (%v) as float64", key, value)} } return f, nil } return 0, ErrParamInvalid{errors.Errorf("expecting float64 value for key %q (was %T)", key, value)} } // GetBool gets a boolean parameter from the input // // If the parameter isn't found then error will be of type // ErrParamNotFound and the returned value will be false. func (p Params) GetBool(key string) (bool, error) { value, err := p.Get(key) if err != nil { return false, err } switch x := value.(type) { case int: return x != 0, nil case int64: return x != 0, nil case float64: return x != 0, nil case bool: return x, nil case string: b, err := strconv.ParseBool(x) if err != nil { return false, ErrParamInvalid{errors.Wrapf(err, "couldn't parse key %q (%v) as bool", key, value)} } return b, nil } return false, ErrParamInvalid{errors.Errorf("expecting bool value for key %q (was %T)", key, value)} } // GetStruct gets a struct from key from the input into the struct // pointed to by out. out must be a pointer type. // // If the parameter isn't found then error will be of type // ErrParamNotFound and out will be unchanged. 
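//
// A sketch - the value may be structured data or, failing that, a JSON
// string which will be unmarshalled into out:
//
//	var opt struct {
//		Name string
//		Size int64
//	}
//	err := in.GetStruct("opt", &opt)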
func (p Params) GetStruct(key string, out interface{}) error { value, err := p.Get(key) if err != nil { return err } err = Reshape(out, value) if err != nil { if valueStr, ok := value.(string); ok { // try to unmarshal as JSON if string err = json.Unmarshal([]byte(valueStr), out) if err == nil { return nil } } return ErrParamInvalid{errors.Wrapf(err, "key %q", key)} } return nil } // GetStructMissingOK works like GetStruct but doesn't return an error // if the key is missing func (p Params) GetStructMissingOK(key string, out interface{}) error { _, ok := p[key] if !ok { return nil } return p.GetStruct(key, out) } // GetDuration get the duration parameters from in func (p Params) GetDuration(key string) (time.Duration, error) { s, err := p.GetString(key) if err != nil { return 0, err } duration, err := fs.ParseDuration(s) if err != nil { return 0, ErrParamInvalid{errors.Wrap(err, "parse duration")} } return duration, nil } rclone-1.53.3/fs/rc/params_test.go000066400000000000000000000207311375552240400167640ustar00rootroot00000000000000package rc import ( "fmt" "testing" "time" "github.com/pkg/errors" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "github.com/rclone/rclone/fs" ) func TestErrParamNotFoundError(t *testing.T) { e := ErrParamNotFound("key") assert.Equal(t, "Didn't find key \"key\" in input", e.Error()) } func TestIsErrParamNotFound(t *testing.T) { assert.Equal(t, true, IsErrParamNotFound(ErrParamNotFound("key"))) assert.Equal(t, false, IsErrParamNotFound(nil)) assert.Equal(t, false, IsErrParamNotFound(errors.New("potato"))) } func TestNotErrParamNotFound(t *testing.T) { assert.Equal(t, false, NotErrParamNotFound(ErrParamNotFound("key"))) assert.Equal(t, false, NotErrParamNotFound(nil)) assert.Equal(t, true, NotErrParamNotFound(errors.New("potato"))) } func TestIsErrParamInvalid(t *testing.T) { e := ErrParamInvalid{errors.New("potato")} assert.Equal(t, true, IsErrParamInvalid(e)) assert.Equal(t, false, IsErrParamInvalid(nil)) assert.Equal(t, false, IsErrParamInvalid(errors.New("potato"))) } func TestReshape(t *testing.T) { in := Params{ "String": "hello", "Float": 4.2, } var out struct { String string Float float64 } require.NoError(t, Reshape(&out, in)) assert.Equal(t, "hello", out.String) assert.Equal(t, 4.2, out.Float) var inCopy = Params{} require.NoError(t, Reshape(&inCopy, out)) assert.Equal(t, in, inCopy) // Now a failure to marshal var in2 func() require.Error(t, Reshape(&inCopy, in2)) // Now a failure to unmarshal require.Error(t, Reshape(&out, "string")) } func TestParamsGet(t *testing.T) { in := Params{ "ok": 1, } v1, e1 := in.Get("ok") assert.NoError(t, e1) assert.Equal(t, 1, v1) v2, e2 := in.Get("notOK") assert.Error(t, e2) assert.Equal(t, nil, v2) assert.Equal(t, ErrParamNotFound("notOK"), e2) } func TestParamsGetString(t *testing.T) { in := Params{ "string": "one", "notString": 17, } v1, e1 := in.GetString("string") assert.NoError(t, e1) assert.Equal(t, "one", v1) v2, e2 := in.GetString("notOK") assert.Error(t, e2) assert.Equal(t, "", v2) assert.Equal(t, ErrParamNotFound("notOK"), e2) v3, e3 := in.GetString("notString") assert.Error(t, e3) assert.Equal(t, "", v3) assert.Equal(t, true, IsErrParamInvalid(e3), e3.Error()) } func TestParamsGetInt64(t *testing.T) { for _, test := range []struct { value interface{} result int64 errString string }{ {"123", 123, ""}, {"123x", 0, "couldn't parse"}, {int(12), 12, ""}, {int64(13), 13, ""}, {float64(14), 14, ""}, {float64(9.3e18), 0, "overflows int64"}, {float64(-9.3e18), 0, "overflows int64"}, } { 
t.Run(fmt.Sprintf("%T=%v", test.value, test.value), func(t *testing.T) { in := Params{ "key": test.value, } v1, e1 := in.GetInt64("key") if test.errString == "" { require.NoError(t, e1) assert.Equal(t, test.result, v1) } else { require.NotNil(t, e1) require.Error(t, e1) assert.Contains(t, e1.Error(), test.errString) assert.Equal(t, int64(0), v1) } }) } in := Params{ "notInt64": []string{"a", "b"}, } v2, e2 := in.GetInt64("notOK") assert.Error(t, e2) assert.Equal(t, int64(0), v2) assert.Equal(t, ErrParamNotFound("notOK"), e2) v3, e3 := in.GetInt64("notInt64") assert.Error(t, e3) assert.Equal(t, int64(0), v3) assert.Equal(t, true, IsErrParamInvalid(e3), e3.Error()) } func TestParamsGetFloat64(t *testing.T) { for _, test := range []struct { value interface{} result float64 errString string }{ {"123.1", 123.1, ""}, {"123x1", 0, "couldn't parse"}, {int(12), 12, ""}, {int64(13), 13, ""}, {float64(14), 14, ""}, } { t.Run(fmt.Sprintf("%T=%v", test.value, test.value), func(t *testing.T) { in := Params{ "key": test.value, } v1, e1 := in.GetFloat64("key") if test.errString == "" { require.NoError(t, e1) assert.Equal(t, test.result, v1) } else { require.NotNil(t, e1) require.Error(t, e1) assert.Contains(t, e1.Error(), test.errString) assert.Equal(t, float64(0), v1) } }) } in := Params{ "notFloat64": []string{"a", "b"}, } v2, e2 := in.GetFloat64("notOK") assert.Error(t, e2) assert.Equal(t, float64(0), v2) assert.Equal(t, ErrParamNotFound("notOK"), e2) v3, e3 := in.GetFloat64("notFloat64") assert.Error(t, e3) assert.Equal(t, float64(0), v3) assert.Equal(t, true, IsErrParamInvalid(e3), e3.Error()) } func TestParamsGetDuration(t *testing.T) { for _, test := range []struct { value interface{} result time.Duration errString string }{ {"86400", time.Hour * 24, ""}, {"1y", time.Hour * 24 * 365, ""}, {"60", time.Minute * 1, ""}, {"0", 0, ""}, {"-45", -time.Second * 45, ""}, {"2", time.Second * 2, ""}, {"2h4m7s", time.Hour*2 + 4*time.Minute + 7*time.Second, ""}, {"3d", time.Hour * 24 * 3, ""}, {"off", time.Duration(fs.DurationOff), ""}, {"", 0, "parse duration"}, {12, 0, "expecting string"}, {"34y", time.Hour * 24 * 365 * 34, ""}, {"30d", time.Hour * 24 * 30, ""}, {"2M", time.Hour * 24 * 60, ""}, {"wrong", 0, "parse duration"}, } { t.Run(fmt.Sprintf("%T=%v", test.value, test.value), func(t *testing.T) { in := Params{ "key": test.value, } v1, e1 := in.GetDuration("key") if test.errString == "" { require.NoError(t, e1) assert.Equal(t, test.result, v1) } else { require.NotNil(t, e1) require.Error(t, e1) assert.Contains(t, e1.Error(), test.errString) assert.Equal(t, time.Duration(0), v1) } }) } in := Params{ "notDuration": []string{"a", "b"}, } v2, e2 := in.GetDuration("notOK") assert.Error(t, e2) assert.Equal(t, time.Duration(0), v2) assert.Equal(t, ErrParamNotFound("notOK"), e2) v3, e3 := in.GetDuration("notDuration") assert.Error(t, e3) assert.Equal(t, time.Duration(0), v3) assert.Equal(t, true, IsErrParamInvalid(e3), e3.Error()) } func TestParamsGetBool(t *testing.T) { for _, test := range []struct { value interface{} result bool errString string }{ {true, true, ""}, {false, false, ""}, {"true", true, ""}, {"false", false, ""}, {"fasle", false, "couldn't parse"}, {int(12), true, ""}, {int(0), false, ""}, {int64(13), true, ""}, {int64(0), false, ""}, {float64(14), true, ""}, {float64(0), false, ""}, } { t.Run(fmt.Sprintf("%T=%v", test.value, test.value), func(t *testing.T) { in := Params{ "key": test.value, } v1, e1 := in.GetBool("key") if test.errString == "" { require.NoError(t, e1) assert.Equal(t, 
test.result, v1)
			} else {
				require.NotNil(t, e1)
				require.Error(t, e1)
				assert.Contains(t, e1.Error(), test.errString)
				assert.Equal(t, false, v1)
			}
		})
	}

	in := Params{
		"notBool": []string{"a", "b"},
	}
	v2, e2 := Params{}.GetBool("notOK")
	assert.Error(t, e2)
	assert.Equal(t, false, v2)
	assert.Equal(t, ErrParamNotFound("notOK"), e2)

	v3, e3 := in.GetBool("notBool")
	assert.Error(t, e3)
	assert.Equal(t, false, v3)
	assert.Equal(t, true, IsErrParamInvalid(e3), e3.Error())
}

func TestParamsGetStruct(t *testing.T) {
	in := Params{
		"struct": Params{
			"String": "one",
			"Float":  4.2,
		},
	}
	var out struct {
		String string
		Float  float64
	}
	e1 := in.GetStruct("struct", &out)
	assert.NoError(t, e1)
	assert.Equal(t, "one", out.String)
	assert.Equal(t, 4.2, out.Float)

	e2 := in.GetStruct("notOK", &out)
	assert.Error(t, e2)
	assert.Equal(t, "one", out.String)
	assert.Equal(t, 4.2, out.Float)
	assert.Equal(t, ErrParamNotFound("notOK"), e2)

	in["struct"] = "string"
	e3 := in.GetStruct("struct", &out)
	assert.Error(t, e3)
	assert.Equal(t, "one", out.String)
	assert.Equal(t, 4.2, out.Float)
	assert.Equal(t, true, IsErrParamInvalid(e3), e3.Error())
}

func TestParamsGetStructString(t *testing.T) {
	in := Params{
		"struct": `{"String": "one", "Float": 4.2}`,
	}
	var out struct {
		String string
		Float  float64
	}
	e1 := in.GetStruct("struct", &out)
	assert.NoError(t, e1)
	assert.Equal(t, "one", out.String)
	assert.Equal(t, 4.2, out.Float)
}

func TestParamsGetStructMissingOK(t *testing.T) {
	in := Params{
		"struct": Params{
			"String": "one",
			"Float":  4.2,
		},
	}
	var out struct {
		String string
		Float  float64
	}
	e1 := in.GetStructMissingOK("struct", &out)
	assert.NoError(t, e1)
	assert.Equal(t, "one", out.String)
	assert.Equal(t, 4.2, out.Float)

	e2 := in.GetStructMissingOK("notOK", &out)
	assert.NoError(t, e2)
	assert.Equal(t, "one", out.String)
	assert.Equal(t, 4.2, out.Float)

	in["struct"] = "string"
	e3 := in.GetStructMissingOK("struct", &out)
	assert.Error(t, e3)
	assert.Equal(t, "one", out.String)
	assert.Equal(t, 4.2, out.Float)
	assert.Equal(t, true, IsErrParamInvalid(e3), e3.Error())
}
rclone-1.53.3/fs/rc/rc.go000066400000000000000000000037051375552240400150500ustar00rootroot00000000000000// Package rc implements a remote control server and registry for rclone
//
// To register your internal calls, call rc.Add with an rc.Call describing
// the path and function. Your function should take and return a Params. It
// can also return an error. Use rc.NewError to wrap an existing error
// along with an http response type if a response other than 500 internal
// server error is required.
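//
// A minimal sketch of registering a call (the "custom/echo" path and its
// behaviour are made up for illustration):
//
//	rc.Add(rc.Call{
//		Path:  "custom/echo",
//		Title: "Echo the input",
//		Help:  "Returns the input parameters unchanged.",
//		Fn: func(ctx context.Context, in rc.Params) (rc.Params, error) {
//			return in, nil
//		},
//	})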
package rc

import (
	"encoding/json"
	"io"
	_ "net/http/pprof" // install the pprof http handlers
	"time"

	"github.com/rclone/rclone/cmd/serve/httplib"
)

// Options contains options for the remote control server
type Options struct {
	HTTPOptions              httplib.Options
	Enabled                  bool   // set to enable the server
	Serve                    bool   // set to serve files from remotes
	Files                    string // set to enable serving files locally
	NoAuth                   bool   // set to disable auth checks on AuthRequired methods
	WebUI                    bool   // set to launch the web ui
	WebGUIUpdate             bool   // set to check for a new update
	WebGUIForceUpdate        bool   // set to force download of a new update
	WebGUINoOpenBrowser      bool   // set to disable auto opening browser
	WebGUIFetchURL           string // set the default url for fetching webgui
	AccessControlAllowOrigin string // set the access control for CORS configuration
	EnableMetrics            bool   // set to enable prometheus metrics on /metrics
	JobExpireDuration        time.Duration
	JobExpireInterval        time.Duration
}

// DefaultOpt is the default values used for Options
var DefaultOpt = Options{
	HTTPOptions:       httplib.DefaultOpt,
	Enabled:           false,
	JobExpireDuration: 60 * time.Second,
	JobExpireInterval: 10 * time.Second,
}

func init() {
	DefaultOpt.HTTPOptions.ListenAddr = "localhost:5572"
}

// WriteJSON writes JSON in out to w
func WriteJSON(w io.Writer, out Params) error {
	enc := json.NewEncoder(w)
	enc.SetIndent("", "\t")
	return enc.Encode(out)
}
rclone-1.53.3/fs/rc/rc_test.go000066400000000000000000000005351375552240400161050ustar00rootroot00000000000000package rc

import (
	"bytes"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestWriteJSON(t *testing.T) {
	var buf bytes.Buffer
	err := WriteJSON(&buf, Params{
		"String": "hello",
		"Int":    42,
	})
	require.NoError(t, err)
	assert.Equal(t, `{
	"Int": 42,
	"String": "hello"
}
`, buf.String())
}
rclone-1.53.3/fs/rc/rcflags/000077500000000000000000000000001375552240400155315ustar00rootroot00000000000000rclone-1.53.3/fs/rc/rcflags/rcflags.go000066400000000000000000000041411375552240400175010ustar00rootroot00000000000000// Package rcflags implements command line flags to set up the remote control
package rcflags

import (
	"github.com/rclone/rclone/cmd/serve/httplib/httpflags"
	"github.com/rclone/rclone/fs/config/flags"
	"github.com/rclone/rclone/fs/rc"
	"github.com/spf13/pflag"
)

// Options set by command line flags
var (
	Opt = rc.DefaultOpt
)

// AddFlags adds the remote control flags to the flagSet
func AddFlags(flagSet *pflag.FlagSet) {
	rc.AddOption("rc", &Opt)
	flags.BoolVarP(flagSet, &Opt.Enabled, "rc", "", false, "Enable the remote control server.")
	flags.StringVarP(flagSet, &Opt.Files, "rc-files", "", "", "Path to local files to serve on the HTTP server.")
	flags.BoolVarP(flagSet, &Opt.Serve, "rc-serve", "", false, "Enable the serving of remote objects.")
	flags.BoolVarP(flagSet, &Opt.NoAuth, "rc-no-auth", "", false, "Don't require auth for certain methods.")
	flags.BoolVarP(flagSet, &Opt.WebUI, "rc-web-gui", "", false, "Launch WebGUI on localhost")
	flags.BoolVarP(flagSet, &Opt.WebGUIUpdate, "rc-web-gui-update", "", false, "Check and update to latest version of web gui")
	flags.BoolVarP(flagSet, &Opt.WebGUIForceUpdate, "rc-web-gui-force-update", "", false, "Force update to latest version of web gui")
	flags.BoolVarP(flagSet, &Opt.WebGUINoOpenBrowser, "rc-web-gui-no-open-browser", "", false, "Don't open the browser automatically")
	flags.StringVarP(flagSet, &Opt.WebGUIFetchURL, "rc-web-fetch-url", "", "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest", "URL to fetch the releases for
webgui.") flags.StringVarP(flagSet, &Opt.AccessControlAllowOrigin, "rc-allow-origin", "", "", "Set the allowed origin for CORS.") flags.BoolVarP(flagSet, &Opt.EnableMetrics, "rc-enable-metrics", "", false, "Enable prometheus metrics on /metrics") flags.DurationVarP(flagSet, &Opt.JobExpireDuration, "rc-job-expire-duration", "", Opt.JobExpireDuration, "expire finished async jobs older than this value") flags.DurationVarP(flagSet, &Opt.JobExpireInterval, "rc-job-expire-interval", "", Opt.JobExpireInterval, "interval to check for expired async jobs") httpflags.AddFlagsPrefix(flagSet, "rc-", &Opt.HTTPOptions) } rclone-1.53.3/fs/rc/rcserver/000077500000000000000000000000001375552240400157435ustar00rootroot00000000000000rclone-1.53.3/fs/rc/rcserver/rcserver.go000066400000000000000000000307161375552240400201340ustar00rootroot00000000000000// Package rcserver implements the HTTP endpoint to serve the remote control package rcserver import ( "encoding/base64" "encoding/json" "flag" "fmt" "log" "mime" "net/http" "net/url" "path/filepath" "regexp" "sort" "strings" "sync" "time" "github.com/rclone/rclone/fs/rc/webgui" "github.com/pkg/errors" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" "github.com/skratchdot/open-golang/open" "github.com/rclone/rclone/cmd/serve/httplib" "github.com/rclone/rclone/cmd/serve/httplib/serve" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/list" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/fs/rc/jobs" "github.com/rclone/rclone/fs/rc/rcflags" "github.com/rclone/rclone/lib/random" ) var promHandler http.Handler var onlyOnceWarningAllowOrigin sync.Once func init() { rcloneCollector := accounting.NewRcloneCollector() prometheus.MustRegister(rcloneCollector) promHandler = promhttp.Handler() } // Start the remote control server if configured // // If the server wasn't configured the *Server returned may be nil func Start(opt *rc.Options) (*Server, error) { jobs.SetOpt(opt) // set the defaults for jobs if opt.Enabled { // Serve on the DefaultServeMux so can have global registrations appear s := newServer(opt, http.DefaultServeMux) return s, s.Serve() } return nil, nil } // Server contains everything to run the rc server type Server struct { *httplib.Server files http.Handler pluginsHandler http.Handler opt *rc.Options } func newServer(opt *rc.Options, mux *http.ServeMux) *Server { fileHandler := http.Handler(nil) pluginsHandler := http.Handler(nil) // Add some more mime types which are often missing _ = mime.AddExtensionType(".wasm", "application/wasm") _ = mime.AddExtensionType(".js", "application/javascript") cachePath := filepath.Join(config.CacheDir, "webgui") extractPath := filepath.Join(cachePath, "current/build") // File handling if opt.Files != "" { if opt.WebUI { fs.Logf(nil, "--rc-files overrides --rc-web-gui command\n") } fs.Logf(nil, "Serving files from %q", opt.Files) fileHandler = http.FileServer(http.Dir(opt.Files)) } else if opt.WebUI { if err := webgui.CheckAndDownloadWebGUIRelease(opt.WebGUIUpdate, opt.WebGUIForceUpdate, opt.WebGUIFetchURL, config.CacheDir); err != nil { log.Fatalf("Error while fetching the latest release of Web GUI: %v", err) } if opt.NoAuth { opt.NoAuth = false fs.Infof(nil, "Cannot run Web GUI without authentication, using default auth") } if opt.HTTPOptions.BasicUser == "" { opt.HTTPOptions.BasicUser = "gui" fs.Infof(nil, "No username specified. 
Using default username: %s \n", rcflags.Opt.HTTPOptions.BasicUser) } if opt.HTTPOptions.BasicPass == "" { randomPass, err := random.Password(128) if err != nil { log.Fatalf("Failed to make password: %v", err) } opt.HTTPOptions.BasicPass = randomPass fs.Infof(nil, "No password specified. Using random password: %s \n", randomPass) } opt.Serve = true fs.Logf(nil, "Serving Web GUI") fileHandler = http.FileServer(http.Dir(extractPath)) pluginsHandler = http.FileServer(http.Dir(webgui.PluginsPath)) } s := &Server{ Server: httplib.NewServer(mux, &opt.HTTPOptions), opt: opt, files: fileHandler, pluginsHandler: pluginsHandler, } mux.HandleFunc("/", s.handler) return s } // Serve runs the http server in the background. // // Use s.Close() and s.Wait() to shutdown server func (s *Server) Serve() error { err := s.Server.Serve() if err != nil { return err } fs.Logf(nil, "Serving remote control on %s", s.URL()) // Open the files in the browser if set if s.files != nil { openURL, err := url.Parse(s.URL()) if err != nil { return errors.Wrap(err, "invalid serving URL") } // Add username, password into the URL if they are set user, pass := s.opt.HTTPOptions.BasicUser, s.opt.HTTPOptions.BasicPass if user != "" && pass != "" { openURL.User = url.UserPassword(user, pass) // Base64 encode username and password to be sent through url loginToken := user + ":" + pass parameters := url.Values{} encodedToken := base64.URLEncoding.EncodeToString([]byte(loginToken)) fs.Debugf(nil, "login_token %q", encodedToken) parameters.Add("login_token", encodedToken) openURL.RawQuery = parameters.Encode() openURL.RawPath = "/#/login" } // Don't open browser if serving in testing environment or required not to do so. if flag.Lookup("test.v") == nil && !s.opt.WebGUINoOpenBrowser { if err := open.Start(openURL.String()); err != nil { fs.Errorf(nil, "Failed to open Web GUI in browser: %v. Manually access it at: %s", err, openURL.String()) } } else { fs.Logf(nil, "Web GUI is not automatically opening browser. Navigate to %s to use.", openURL.String()) } } return nil } // writeError writes a formatted error to the output func writeError(path string, in rc.Params, w http.ResponseWriter, err error, status int) { fs.Errorf(nil, "rc: %q: error: %v", path, err) // Adjust the error return for some well known errors errOrig := errors.Cause(err) switch { case errOrig == fs.ErrorDirNotFound || errOrig == fs.ErrorObjectNotFound: status = http.StatusNotFound case rc.IsErrParamInvalid(err) || rc.IsErrParamNotFound(err): status = http.StatusBadRequest } w.WriteHeader(status) err = rc.WriteJSON(w, rc.Params{ "status": status, "error": err.Error(), "input": in, "path": path, }) if err != nil { // can't return the error at this point fs.Errorf(nil, "rc: failed to write JSON output: %v", err) } } // handler reads incoming requests and dispatches them func (s *Server) handler(w http.ResponseWriter, r *http.Request) { urlPath, ok := s.Path(w, r) if !ok { return } path := strings.TrimLeft(urlPath, "/") allowOrigin := rcflags.Opt.AccessControlAllowOrigin if allowOrigin != "" { onlyOnceWarningAllowOrigin.Do(func() { if allowOrigin == "*" { fs.Logf(nil, "Warning: Allow origin set to *. 
This can cause serious security problems.") } }) w.Header().Add("Access-Control-Allow-Origin", allowOrigin) } else { w.Header().Add("Access-Control-Allow-Origin", s.URL()) } // echo back access control headers client needs //reqAccessHeaders := r.Header.Get("Access-Control-Request-Headers") w.Header().Add("Access-Control-Request-Method", "POST, OPTIONS, GET, HEAD") w.Header().Add("Access-Control-Allow-Headers", "authorization, Content-Type") switch r.Method { case "POST": s.handlePost(w, r, path) case "OPTIONS": s.handleOptions(w, r, path) case "GET", "HEAD": s.handleGet(w, r, path) default: writeError(path, nil, w, errors.Errorf("method %q not allowed", r.Method), http.StatusMethodNotAllowed) return } } func (s *Server) handlePost(w http.ResponseWriter, r *http.Request, path string) { contentType := r.Header.Get("Content-Type") values := r.URL.Query() if contentType == "application/x-www-form-urlencoded" { // Parse the POST and URL parameters into r.Form, for others r.Form will be empty value err := r.ParseForm() if err != nil { writeError(path, nil, w, errors.Wrap(err, "failed to parse form/URL parameters"), http.StatusBadRequest) return } values = r.Form } // Read the POST and URL parameters into in in := make(rc.Params) for k, vs := range values { if len(vs) > 0 { in[k] = vs[len(vs)-1] } } // Parse a JSON blob from the input if contentType == "application/json" { err := json.NewDecoder(r.Body).Decode(&in) if err != nil { writeError(path, in, w, errors.Wrap(err, "failed to read input JSON"), http.StatusBadRequest) return } } // Find the call call := rc.Calls.Get(path) if call == nil { writeError(path, in, w, errors.Errorf("couldn't find method %q", path), http.StatusNotFound) return } // Check to see if it requires authorisation if !s.opt.NoAuth && call.AuthRequired && !s.UsingAuth() { writeError(path, in, w, errors.Errorf("authentication must be set up on the rc server to use %q or the --rc-no-auth flag must be in use", path), http.StatusForbidden) return } if call.NeedsRequest { // Add the request to RC in["_request"] = r } if call.NeedsResponse { in["_response"] = w } // Check to see if it is async or not isAsync, err := in.GetBool("_async") if rc.NotErrParamNotFound(err) { writeError(path, in, w, err, http.StatusBadRequest) return } delete(in, "_async") // remove the async parameter after parsing so vfs operations don't get confused fs.Debugf(nil, "rc: %q: with parameters %+v", path, in) var out rc.Params if isAsync { out, err = jobs.StartAsyncJob(call.Fn, in) } else { var jobID int64 out, jobID, err = jobs.ExecuteJob(r.Context(), call.Fn, in) w.Header().Add("x-rclone-jobid", fmt.Sprintf("%d", jobID)) } if err != nil { writeError(path, in, w, err, http.StatusInternalServerError) return } if out == nil { out = make(rc.Params) } fs.Debugf(nil, "rc: %q: reply %+v: %v", path, out, err) err = rc.WriteJSON(w, out) if err != nil { // can't return the error at this point - but have a go anyway writeError(path, in, w, err, http.StatusInternalServerError) fs.Errorf(nil, "rc: failed to write JSON output: %v", err) } } func (s *Server) handleOptions(w http.ResponseWriter, r *http.Request, path string) { w.WriteHeader(http.StatusOK) } func (s *Server) serveRoot(w http.ResponseWriter, r *http.Request) { remotes := config.FileSections() sort.Strings(remotes) directory := serve.NewDirectory("", s.HTMLTemplate) directory.Name = "List of all rclone remotes." 
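	// Add an entry per remote - handleGet serves these back as
	// /[remote:]/ listings via the fsMatch regexp below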
q := url.Values{} for _, remote := range remotes { q.Set("fs", remote) directory.AddHTMLEntry("["+remote+":]", true, -1, time.Time{}) } sortParm := r.URL.Query().Get("sort") orderParm := r.URL.Query().Get("order") directory.ProcessQueryParams(sortParm, orderParm) directory.Serve(w, r) } func (s *Server) serveRemote(w http.ResponseWriter, r *http.Request, path string, fsName string) { f, err := cache.Get(fsName) if err != nil { writeError(path, nil, w, errors.Wrap(err, "failed to make Fs"), http.StatusInternalServerError) return } if path == "" || strings.HasSuffix(path, "/") { path = strings.Trim(path, "/") entries, err := list.DirSorted(r.Context(), f, false, path) if err != nil { writeError(path, nil, w, errors.Wrap(err, "failed to list directory"), http.StatusInternalServerError) return } // Make the entries for display directory := serve.NewDirectory(path, s.HTMLTemplate) for _, entry := range entries { _, isDir := entry.(fs.Directory) //directory.AddHTMLEntry(entry.Remote(), isDir, entry.Size(), entry.ModTime(r.Context())) directory.AddHTMLEntry(entry.Remote(), isDir, entry.Size(), time.Time{}) } sortParm := r.URL.Query().Get("sort") orderParm := r.URL.Query().Get("order") directory.ProcessQueryParams(sortParm, orderParm) directory.Serve(w, r) } else { path = strings.Trim(path, "/") o, err := f.NewObject(r.Context(), path) if err != nil { writeError(path, nil, w, errors.Wrap(err, "failed to find object"), http.StatusInternalServerError) return } serve.Object(w, r, o) } } // Match URLS of the form [fs]/remote var fsMatch = regexp.MustCompile(`^\[(.*?)\](.*)$`) func (s *Server) handleGet(w http.ResponseWriter, r *http.Request, path string) { // Look to see if this has an fs in the path fsMatchResult := fsMatch.FindStringSubmatch(path) switch { case fsMatchResult != nil && s.opt.Serve: // Serve /[fs]/remote files s.serveRemote(w, r, fsMatchResult[2], fsMatchResult[1]) return case path == "metrics" && s.opt.EnableMetrics: promHandler.ServeHTTP(w, r) return case path == "*" && s.opt.Serve: // Serve /* as the remote listing s.serveRoot(w, r) return case s.files != nil: pluginsMatchResult := webgui.PluginsMatch.FindStringSubmatch(path) if s.opt.WebUI && pluginsMatchResult != nil && len(pluginsMatchResult) > 2 { ok := webgui.ServePluginOK(w, r, pluginsMatchResult) if !ok { r.URL.Path = fmt.Sprintf("/%s/%s/app/build/%s", pluginsMatchResult[1], pluginsMatchResult[2], pluginsMatchResult[3]) s.pluginsHandler.ServeHTTP(w, r) return } return } else if s.opt.WebUI && webgui.ServePluginWithReferrerOK(w, r, path) { return } // Serve the files r.URL.Path = "/" + path s.files.ServeHTTP(w, r) return case path == "" && s.opt.Serve: // Serve the root as a remote listing s.serveRoot(w, r) return } http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound) } rclone-1.53.3/fs/rc/rcserver/rcserver_test.go000066400000000000000000000370051375552240400211710ustar00rootroot00000000000000package rcserver import ( "bytes" "fmt" "io" "io/ioutil" "net/http" "net/http/httptest" "regexp" "strings" "testing" "time" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" _ "github.com/rclone/rclone/backend/local" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/rc" ) const ( testBindAddress = "localhost:0" testTemplate = "testdata/golden/testindex.html" testFs = "testdata/files" remoteURL = "[" + testFs + "]/" // initial URL path to fetch from that remote ) // Test the RC server runs and we can do HTTP fetches from it. 
// We'll do the majority of the testing with the httptest framework func TestRcServer(t *testing.T) { opt := rc.DefaultOpt opt.HTTPOptions.ListenAddr = testBindAddress opt.HTTPOptions.Template = testTemplate opt.Enabled = true opt.Serve = true opt.Files = testFs mux := http.NewServeMux() rcServer := newServer(&opt, mux) assert.NoError(t, rcServer.Serve()) defer func() { rcServer.Close() rcServer.Wait() }() testURL := rcServer.Server.URL() // Do the simplest possible test to check the server is alive // Do it a few times to wait for the server to start var resp *http.Response var err error for i := 0; i < 10; i++ { resp, err = http.Get(testURL + "file.txt") if err == nil { break } time.Sleep(10 * time.Millisecond) } require.NoError(t, err) body, err := ioutil.ReadAll(resp.Body) _ = resp.Body.Close() require.NoError(t, err) require.NoError(t, resp.Body.Close()) assert.Equal(t, http.StatusOK, resp.StatusCode) assert.Equal(t, "this is file1.txt\n", string(body)) } type testRun struct { Name string URL string Status int Method string Range string Body string ContentType string Expected string Contains *regexp.Regexp Headers map[string]string } // Run a suite of tests func testServer(t *testing.T, tests []testRun, opt *rc.Options) { mux := http.NewServeMux() opt.HTTPOptions.Template = testTemplate rcServer := newServer(opt, mux) for _, test := range tests { t.Run(test.Name, func(t *testing.T) { method := test.Method if method == "" { method = "GET" } var inBody io.Reader if test.Body != "" { buf := bytes.NewBufferString(test.Body) inBody = buf } req, err := http.NewRequest(method, "http://1.2.3.4/"+test.URL, inBody) require.NoError(t, err) if test.Range != "" { req.Header.Add("Range", test.Range) } if test.ContentType != "" { req.Header.Add("Content-Type", test.ContentType) } w := httptest.NewRecorder() rcServer.handler(w, req) resp := w.Result() assert.Equal(t, test.Status, resp.StatusCode) body, err := ioutil.ReadAll(resp.Body) require.NoError(t, err) if test.Contains == nil { assert.Equal(t, test.Expected, string(body)) } else { assert.True(t, test.Contains.Match(body), fmt.Sprintf("body didn't match: %v: %v", test.Contains, string(body))) } for k, v := range test.Headers { assert.Equal(t, v, resp.Header.Get(k), k) } }) } } // return an enabled rc func newTestOpt() rc.Options { opt := rc.DefaultOpt opt.Enabled = true return opt } func TestFileServing(t *testing.T) { tests := []testRun{{ Name: "index", URL: "", Status: http.StatusOK, Expected: `
    `, }, { Name: "notfound", URL: "notfound", Status: http.StatusNotFound, Expected: "404 page not found\n", }, { Name: "dirnotfound", URL: "dirnotfound/", Status: http.StatusNotFound, Expected: "404 page not found\n", }, { Name: "dir", URL: "dir/", Status: http.StatusOK, Expected: `
    file2.txt
    
    `, }, { Name: "file", URL: "file.txt", Status: http.StatusOK, Expected: "this is file1.txt\n", Headers: map[string]string{ "Content-Length": "18", }, }, { Name: "file2", URL: "dir/file2.txt", Status: http.StatusOK, Expected: "this is dir/file2.txt\n", }, { Name: "file-head", URL: "file.txt", Method: "HEAD", Status: http.StatusOK, Expected: ``, Headers: map[string]string{ "Content-Length": "18", }, }, { Name: "file-range", URL: "file.txt", Status: http.StatusPartialContent, Range: "bytes=8-12", Expected: `file1`, }} opt := newTestOpt() opt.Serve = true opt.Files = testFs testServer(t, tests, &opt) } func TestRemoteServing(t *testing.T) { tests := []testRun{ // Test serving files from the test remote { Name: "index", URL: remoteURL + "", Status: http.StatusOK, Expected: ` Directory listing of /

    Directory listing of /

    dir/
    file.txt
    `, }, { Name: "notfound-index", URL: "[notfound]/", Status: http.StatusNotFound, Expected: `{ "error": "failed to list directory: directory not found", "input": null, "path": "", "status": 404 } `, }, { Name: "notfound", URL: remoteURL + "notfound", Status: http.StatusNotFound, Expected: `{ "error": "failed to find object: object not found", "input": null, "path": "notfound", "status": 404 } `, }, { Name: "dirnotfound", URL: remoteURL + "dirnotfound/", Status: http.StatusNotFound, Expected: `{ "error": "failed to list directory: directory not found", "input": null, "path": "dirnotfound", "status": 404 } `, }, { Name: "dir", URL: remoteURL + "dir/", Status: http.StatusOK, Expected: ` Directory listing of /dir

    Directory listing of /dir

    file2.txt
    `, }, { Name: "file", URL: remoteURL + "file.txt", Status: http.StatusOK, Expected: "this is file1.txt\n", Headers: map[string]string{ "Content-Length": "18", }, }, { Name: "file with no slash after ]", URL: strings.TrimRight(remoteURL, "/") + "file.txt", Status: http.StatusOK, Expected: "this is file1.txt\n", Headers: map[string]string{ "Content-Length": "18", }, }, { Name: "file2", URL: remoteURL + "dir/file2.txt", Status: http.StatusOK, Expected: "this is dir/file2.txt\n", }, { Name: "file-head", URL: remoteURL + "file.txt", Method: "HEAD", Status: http.StatusOK, Expected: ``, Headers: map[string]string{ "Content-Length": "18", }, }, { Name: "file-range", URL: remoteURL + "file.txt", Status: http.StatusPartialContent, Range: "bytes=8-12", Expected: `file1`, }, { Name: "bad-remote", URL: "[notfoundremote:]/", Status: http.StatusInternalServerError, Expected: `{ "error": "failed to make Fs: didn't find section in config file", "input": null, "path": "/", "status": 500 } `, }} opt := newTestOpt() opt.Serve = true opt.Files = testFs testServer(t, tests, &opt) } func TestRC(t *testing.T) { tests := []testRun{{ Name: "rc-root", URL: "", Method: "POST", Status: http.StatusNotFound, Expected: `{ "error": "couldn't find method \"\"", "input": {}, "path": "", "status": 404 } `, }, { Name: "rc-noop", URL: "rc/noop", Method: "POST", Status: http.StatusOK, Expected: "{}\n", }, { Name: "rc-error", URL: "rc/error", Method: "POST", Status: http.StatusInternalServerError, Expected: `{ "error": "arbitrary error on input map[]", "input": {}, "path": "rc/error", "status": 500 } `, }, { Name: "core-gc", URL: "core/gc", // returns nil, nil so check it is made into {} Method: "POST", Status: http.StatusOK, Expected: "{}\n", }, { Name: "url-params", URL: "rc/noop?param1=potato¶m2=sausage", Method: "POST", Status: http.StatusOK, Expected: `{ "param1": "potato", "param2": "sausage" } `, }, { Name: "json", URL: "rc/noop", Method: "POST", Body: `{ "param1":"string", "param2":true }`, ContentType: "application/json", Status: http.StatusOK, Expected: `{ "param1": "string", "param2": true } `, }, { Name: "json-and-url-params", URL: "rc/noop?param1=potato¶m2=sausage", Method: "POST", Body: `{ "param1":"string", "param3":true }`, ContentType: "application/json", Status: http.StatusOK, Expected: `{ "param1": "string", "param2": "sausage", "param3": true } `, }, { Name: "json-bad", URL: "rc/noop?param1=potato¶m2=sausage", Method: "POST", Body: `{ param1":"string", "param3":true }`, ContentType: "application/json", Status: http.StatusBadRequest, Expected: `{ "error": "failed to read input JSON: invalid character 'p' looking for beginning of object key string", "input": { "param1": "potato", "param2": "sausage" }, "path": "rc/noop", "status": 400 } `, }, { Name: "form", URL: "rc/noop", Method: "POST", Body: `param1=string¶m2=true`, ContentType: "application/x-www-form-urlencoded", Status: http.StatusOK, Expected: `{ "param1": "string", "param2": "true" } `, }, { Name: "form-and-url-params", URL: "rc/noop?param1=potato¶m2=sausage", Method: "POST", Body: `param1=string¶m3=true`, ContentType: "application/x-www-form-urlencoded", Status: http.StatusOK, Expected: `{ "param1": "potato", "param2": "sausage", "param3": "true" } `, }, { Name: "form-bad", URL: "rc/noop?param1=potato¶m2=sausage", Method: "POST", Body: `%zz`, ContentType: "application/x-www-form-urlencoded", Status: http.StatusBadRequest, Expected: `{ "error": "failed to parse form/URL parameters: invalid URL escape \"%zz\"", "input": null, "path": "rc/noop", 
"status": 400 } `, }} opt := newTestOpt() opt.Serve = true opt.Files = testFs testServer(t, tests, &opt) } func TestMethods(t *testing.T) { tests := []testRun{{ Name: "options", URL: "", Method: "OPTIONS", Status: http.StatusOK, Expected: "", Headers: map[string]string{ "Access-Control-Allow-Origin": "http://localhost:5572/", "Access-Control-Request-Method": "POST, OPTIONS, GET, HEAD", "Access-Control-Allow-Headers": "authorization, Content-Type", }, }, { Name: "bad", URL: "", Method: "POTATO", Status: http.StatusMethodNotAllowed, Expected: `{ "error": "method \"POTATO\" not allowed", "input": null, "path": "", "status": 405 } `, }} opt := newTestOpt() opt.Serve = true opt.Files = testFs testServer(t, tests, &opt) } func TestMetrics(t *testing.T) { stats := accounting.GlobalStats() tests := makeMetricsTestCases(stats) opt := newTestOpt() opt.EnableMetrics = true testServer(t, tests, &opt) // Test changing a couple options stats.Bytes(500) stats.Deletes(30) stats.Errors(2) stats.Bytes(324) tests = makeMetricsTestCases(stats) testServer(t, tests, &opt) } func makeMetricsTestCases(stats *accounting.StatsInfo) (tests []testRun) { tests = []testRun{{ Name: "Bytes Transferred Metric", URL: "/metrics", Method: "GET", Status: http.StatusOK, Contains: regexp.MustCompile(fmt.Sprintf("rclone_bytes_transferred_total %d", stats.GetBytes())), }, { Name: "Checked Files Metric", URL: "/metrics", Method: "GET", Status: http.StatusOK, Contains: regexp.MustCompile(fmt.Sprintf("rclone_checked_files_total %d", stats.GetChecks())), }, { Name: "Errors Metric", URL: "/metrics", Method: "GET", Status: http.StatusOK, Contains: regexp.MustCompile(fmt.Sprintf("rclone_errors_total %d", stats.GetErrors())), }, { Name: "Deleted Files Metric", URL: "/metrics", Method: "GET", Status: http.StatusOK, Contains: regexp.MustCompile(fmt.Sprintf("rclone_files_deleted_total %d", stats.Deletes(0))), }, { Name: "Files Transferred Metric", URL: "/metrics", Method: "GET", Status: http.StatusOK, Contains: regexp.MustCompile(fmt.Sprintf("rclone_files_transferred_total %d", stats.GetTransfers())), }, } return } var matchRemoteDirListing = regexp.MustCompile(`Directory listing of /`) func TestServingRoot(t *testing.T) { tests := []testRun{{ Name: "rootlist", URL: "*", Status: http.StatusOK, Contains: matchRemoteDirListing, }} opt := newTestOpt() opt.Serve = true opt.Files = testFs testServer(t, tests, &opt) } func TestServingRootNoFiles(t *testing.T) { tests := []testRun{{ Name: "rootlist", URL: "", Status: http.StatusOK, Contains: matchRemoteDirListing, }} opt := newTestOpt() opt.Serve = true opt.Files = "" testServer(t, tests, &opt) } func TestNoFiles(t *testing.T) { tests := []testRun{{ Name: "file", URL: "file.txt", Status: http.StatusNotFound, Expected: "Not Found\n", }, { Name: "dir", URL: "dir/", Status: http.StatusNotFound, Expected: "Not Found\n", }} opt := newTestOpt() opt.Serve = true opt.Files = "" testServer(t, tests, &opt) } func TestNoServe(t *testing.T) { tests := []testRun{{ Name: "file", URL: remoteURL + "file.txt", Status: http.StatusNotFound, Expected: "404 page not found\n", }, { Name: "dir", URL: remoteURL + "dir/", Status: http.StatusNotFound, Expected: "404 page not found\n", }} opt := newTestOpt() opt.Serve = false opt.Files = testFs testServer(t, tests, &opt) } func TestAuthRequired(t *testing.T) { tests := []testRun{{ Name: "auth", URL: "rc/noopauth", Method: "POST", Body: `{}`, ContentType: "application/javascript", Status: http.StatusForbidden, Expected: `{ "error": "authentication must be set up on the rc 
server to use \"rc/noopauth\" or the --rc-no-auth flag must be in use", "input": {}, "path": "rc/noopauth", "status": 403 } `, }} opt := newTestOpt() opt.Serve = false opt.Files = "" opt.NoAuth = false testServer(t, tests, &opt) } func TestNoAuth(t *testing.T) { tests := []testRun{{ Name: "auth", URL: "rc/noopauth", Method: "POST", Body: `{}`, ContentType: "application/javascript", Status: http.StatusOK, Expected: "{}\n", }} opt := newTestOpt() opt.Serve = false opt.Files = "" opt.NoAuth = true testServer(t, tests, &opt) } func TestWithUserPass(t *testing.T) { tests := []testRun{{ Name: "auth", URL: "rc/noopauth", Method: "POST", Body: `{}`, ContentType: "application/javascript", Status: http.StatusOK, Expected: "{}\n", }} opt := newTestOpt() opt.Serve = false opt.Files = "" opt.NoAuth = false opt.HTTPOptions.BasicUser = "user" opt.HTTPOptions.BasicPass = "pass" testServer(t, tests, &opt) } func TestRCAsync(t *testing.T) { tests := []testRun{{ Name: "ok", URL: "rc/noop", Method: "POST", ContentType: "application/json", Body: `{ "_async":true }`, Status: http.StatusOK, Contains: regexp.MustCompile(`(?s)\{.*\"jobid\":.*\}`), }, { Name: "bad", URL: "rc/noop", Method: "POST", ContentType: "application/json", Body: `{ "_async":"truthy" }`, Status: http.StatusBadRequest, Expected: `{ "error": "couldn't parse key \"_async\" (truthy) as bool: strconv.ParseBool: parsing \"truthy\": invalid syntax", "input": { "_async": "truthy" }, "path": "rc/noop", "status": 400 } `, }} opt := newTestOpt() opt.Serve = true opt.Files = "" testServer(t, tests, &opt) } rclone-1.53.3/fs/rc/rcserver/testdata/000077500000000000000000000000001375552240400175545ustar00rootroot00000000000000rclone-1.53.3/fs/rc/rcserver/testdata/files/000077500000000000000000000000001375552240400206565ustar00rootroot00000000000000rclone-1.53.3/fs/rc/rcserver/testdata/files/dir/000077500000000000000000000000001375552240400214345ustar00rootroot00000000000000rclone-1.53.3/fs/rc/rcserver/testdata/files/dir/file2.txt000066400000000000000000000000261375552240400231740ustar00rootroot00000000000000this is dir/file2.txt rclone-1.53.3/fs/rc/rcserver/testdata/files/file.txt000066400000000000000000000000221375552240400223300ustar00rootroot00000000000000this is file1.txt rclone-1.53.3/fs/rc/rcserver/testdata/golden/000077500000000000000000000000001375552240400210245ustar00rootroot00000000000000rclone-1.53.3/fs/rc/rcserver/testdata/golden/testindex.html000066400000000000000000000003421375552240400237200ustar00rootroot00000000000000 {{ .Title }}

    {{ .Title }}

    {{ range $i := .Entries }}{{ $i.Leaf }}
    {{ end }} rclone-1.53.3/fs/rc/registry.go000066400000000000000000000040001375552240400163010ustar00rootroot00000000000000// Define the registry package rc import ( "context" "sort" "strings" "sync" "github.com/rclone/rclone/fs" ) // Func defines a type for a remote control function type Func func(ctx context.Context, in Params) (out Params, err error) // Call defines info about a remote control function and is used in // the Add function to create new entry points. type Call struct { Path string // path to activate this RC Fn Func `json:"-"` // function to call Title string // help for the function AuthRequired bool // if set then this call requires authorisation to be set Help string // multi-line markdown formatted help NeedsRequest bool // if set then this call will be passed the original request object as _request NeedsResponse bool // if set then this call will be passed the original response object as _response } // Registry holds the list of all the registered remote control functions type Registry struct { mu sync.RWMutex call map[string]*Call } // NewRegistry makes a new registry for remote control functions func NewRegistry() *Registry { return &Registry{ call: make(map[string]*Call), } } // Add a call to the registry func (r *Registry) Add(call Call) { r.mu.Lock() defer r.mu.Unlock() call.Path = strings.Trim(call.Path, "/") call.Help = strings.TrimSpace(call.Help) fs.Debugf(nil, "Adding path %q to remote control registry", call.Path) r.call[call.Path] = &call } // Get a Call from a path or nil func (r *Registry) Get(path string) *Call { r.mu.RLock() defer r.mu.RUnlock() return r.call[path] } // List of all calls in alphabetical order func (r *Registry) List() (out []*Call) { r.mu.RLock() defer r.mu.RUnlock() var keys []string for key := range r.call { keys = append(keys, key) } sort.Strings(keys) for _, key := range keys { out = append(out, r.call[key]) } return out } // Calls is the global registry of Call objects var Calls = NewRegistry() // Add a function to the global registry func Add(call Call) { Calls.Add(call) } rclone-1.53.3/fs/rc/webgui/000077500000000000000000000000001375552240400153725ustar00rootroot00000000000000rclone-1.53.3/fs/rc/webgui/plugins.go000066400000000000000000000200651375552240400174050ustar00rootroot00000000000000package webgui import ( "encoding/json" "fmt" "io/ioutil" "net/http" "net/http/httputil" "net/url" "os" "path/filepath" "regexp" "strings" "sync" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" ) // PackageJSON is the structure of package.json of a plugin type PackageJSON struct { Name string `json:"name"` Version string `json:"version"` Description string `json:"description"` Author string `json:"author"` Copyright string `json:"copyright"` License string `json:"license"` Private bool `json:"private"` Homepage string `json:"homepage"` TestURL string `json:"testUrl"` Repository struct { Type string `json:"type"` URL string `json:"url"` } `json:"repository"` Bugs struct { URL string `json:"url"` } `json:"bugs"` Rclone RcloneConfig `json:"rclone"` } // RcloneConfig represents the rclone specific config type RcloneConfig struct { HandlesType []string `json:"handlesType"` PluginType string `json:"pluginType"` RedirectReferrer bool `json:"redirectReferrer"` Test bool `json:"-"` } func (r *PackageJSON) isTesting() bool { return r.Rclone.Test } var ( //loadedTestPlugins *Plugins cachePath string loadedPlugins *Plugins pluginsProxy = &httputil.ReverseProxy{} // PluginsMatch is used for matching author and plugin name in the url 
path PluginsMatch = regexp.MustCompile(`^plugins\/([^\/]*)\/([^\/\?]+)[\/]?(.*)$`) // PluginsPath is the base path where webgui plugins are stored PluginsPath string pluginsConfigPath string availablePluginsJSONPath = "availablePlugins.json" ) func init() { cachePath = filepath.Join(config.CacheDir, "webgui") PluginsPath = filepath.Join(cachePath, "plugins") pluginsConfigPath = filepath.Join(PluginsPath, "config") loadedPlugins = newPlugins(availablePluginsJSONPath) err := loadedPlugins.readFromFile() if err != nil { fs.Errorf(nil, "error reading available plugins: %v", err) } } // Plugins represents the structure how plugins are saved onto disk type Plugins struct { mutex sync.Mutex LoadedPlugins map[string]PackageJSON `json:"loadedPlugins"` fileName string } func newPlugins(fileName string) *Plugins { p := Plugins{LoadedPlugins: map[string]PackageJSON{}} p.fileName = fileName p.mutex = sync.Mutex{} return &p } func (p *Plugins) readFromFile() (err error) { //p.mutex.Lock() //defer p.mutex.Unlock() err = CreatePathIfNotExist(pluginsConfigPath) if err != nil { return err } availablePluginsJSON := filepath.Join(pluginsConfigPath, p.fileName) _, err = os.Stat(availablePluginsJSON) if err == nil { data, err := ioutil.ReadFile(availablePluginsJSON) if err != nil { return err } err = json.Unmarshal(data, &p) if err != nil { fs.Logf(nil, "%s", err) } return nil } else if os.IsNotExist(err) { // path does not exist err = p.writeToFile() if err != nil { return err } } return nil } func (p *Plugins) addPlugin(pluginName string, packageJSONPath string) (err error) { p.mutex.Lock() defer p.mutex.Unlock() data, err := ioutil.ReadFile(packageJSONPath) if err != nil { return err } var pkgJSON = PackageJSON{} err = json.Unmarshal(data, &pkgJSON) if err != nil { return err } p.LoadedPlugins[pluginName] = pkgJSON err = p.writeToFile() if err != nil { return err } return nil } func (p *Plugins) addTestPlugin(pluginName string, testURL string, handlesType []string) (err error) { p.mutex.Lock() defer p.mutex.Unlock() err = p.readFromFile() if err != nil { return err } var pkgJSON = PackageJSON{ Name: pluginName, TestURL: testURL, Rclone: RcloneConfig{ HandlesType: handlesType, Test: true, }, } p.LoadedPlugins[pluginName] = pkgJSON err = p.writeToFile() if err != nil { return err } return nil } func (p *Plugins) writeToFile() (err error) { //p.mutex.Lock() //defer p.mutex.Unlock() availablePluginsJSON := filepath.Join(pluginsConfigPath, p.fileName) file, err := json.MarshalIndent(p, "", " ") if err != nil { fs.Logf(nil, "%s", err) } err = ioutil.WriteFile(availablePluginsJSON, file, 0755) if err != nil { fs.Logf(nil, "%s", err) } return nil } func (p *Plugins) removePlugin(name string) (err error) { p.mutex.Lock() defer p.mutex.Unlock() err = p.readFromFile() if err != nil { return err } _, ok := p.LoadedPlugins[name] if !ok { return fmt.Errorf("plugin %s not loaded", name) } delete(p.LoadedPlugins, name) err = p.writeToFile() if err != nil { return err } return nil } // GetPluginByName returns the plugin object for the key (author/plugin-name) func (p *Plugins) GetPluginByName(name string) (out *PackageJSON, err error) { p.mutex.Lock() defer p.mutex.Unlock() po, ok := p.LoadedPlugins[name] if !ok { return nil, fmt.Errorf("plugin %s not loaded", name) } return &po, nil } // getAuthorRepoBranchGithub gives author, repoName and branch from a github.com url // url examples: // https://github.com/rclone/rclone-webui-react/ // http://github.com/rclone/rclone-webui-react // 
https://github.com/rclone/rclone-webui-react/tree/caman-js
// github.com/rclone/rclone-webui-react
//
func getAuthorRepoBranchGithub(url string) (author string, repoName string, branch string, err error) {
	repoURL := url
	repoURL = strings.Replace(repoURL, "https://", "", 1)
	repoURL = strings.Replace(repoURL, "http://", "", 1)
	urlSplits := strings.Split(repoURL, "/")
	if len(urlSplits) < 3 || len(urlSplits) > 5 || urlSplits[0] != "github.com" {
		return "", "", "", fmt.Errorf("invalid github url: %s", url)
	}
	// get branch name
	if len(urlSplits) == 5 && urlSplits[3] == "tree" {
		return urlSplits[1], urlSplits[2], urlSplits[4], nil
	}
	return urlSplits[1], urlSplits[2], "master", nil
}

func filterPlugins(plugins *Plugins, compare func(packageJSON *PackageJSON) bool) map[string]PackageJSON {
	output := map[string]PackageJSON{}
	for key, val := range plugins.LoadedPlugins {
		if compare(&val) {
			output[key] = val
		}
	}
	return output
}

// getDirectorForProxy is a helper function for reverse proxy of test plugins
func getDirectorForProxy(origin *url.URL) func(req *http.Request) {
	return func(req *http.Request) {
		req.Header.Add("X-Forwarded-Host", req.Host)
		req.Header.Add("X-Origin-Host", origin.Host)
		req.URL.Scheme = "http"
		req.URL.Host = origin.Host
		req.URL.Path = origin.Path
	}
}

// ServePluginOK checks the plugin url and uses reverse proxy to allow redirection for content not being served by rclone
func ServePluginOK(w http.ResponseWriter, r *http.Request, pluginsMatchResult []string) (ok bool) {
	testPlugin, err := loadedPlugins.GetPluginByName(fmt.Sprintf("%s/%s", pluginsMatchResult[1], pluginsMatchResult[2]))
	if err != nil {
		return false
	}
	if !testPlugin.Rclone.Test {
		return false
	}
	origin, _ := url.Parse(fmt.Sprintf("%s/%s", testPlugin.TestURL, pluginsMatchResult[3]))
	director := getDirectorForProxy(origin)
	pluginsProxy.Director = director
	pluginsProxy.ServeHTTP(w, r)
	return true
}

var referrerPathReg = regexp.MustCompile("^(https?):\\/\\/(.+):([0-9]+)?\\/(.*)\\/?\\?(.*)$")

// ServePluginWithReferrerOK checks if redirectReferrer is set for the referred plugin, if yes,
// sends a redirect to the actual url.
// This function is useful for plugins to refer to absolute paths when
// the referrer in http.Request is set
func ServePluginWithReferrerOK(w http.ResponseWriter, r *http.Request, path string) (ok bool) {
	referrer := r.Referer()
	referrerPathMatch := referrerPathReg.FindStringSubmatch(referrer)
	if referrerPathMatch != nil && len(referrerPathMatch) > 3 {
		referrerPluginMatch := PluginsMatch.FindStringSubmatch(referrerPathMatch[4])
		if referrerPluginMatch != nil && len(referrerPluginMatch) > 2 {
			pluginKey := fmt.Sprintf("%s/%s", referrerPluginMatch[1], referrerPluginMatch[2])
			currentPlugin, err := loadedPlugins.GetPluginByName(pluginKey)
			if err != nil {
				return false
			}
			if currentPlugin.Rclone.RedirectReferrer {
				path = fmt.Sprintf("/plugins/%s/%s/%s", referrerPluginMatch[1], referrerPluginMatch[2], path)
				http.Redirect(w, r, path, http.StatusMovedPermanently)
				return true
			}
		}
	}
	return false
}
rclone-1.53.3/fs/rc/webgui/rc.go000066400000000000000000000171461375552240400163360ustar00rootroot00000000000000package webgui

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/rc"
)

func init() {
	rc.Add(rc.Call{
		Path:         "pluginsctl/listTestPlugins",
		AuthRequired: true,
		Fn:           rcListTestPlugins,
		Title:        "Show currently loaded test plugins",
		Help: `Allows listing of test plugins with rclone.test set to true in package.json of the plugin

This takes no parameters and returns

- loadedTestPlugins: list of currently available test plugins

Eg

   rclone rc pluginsctl/listTestPlugins
`,
	})
}

func rcListTestPlugins(_ context.Context, _ rc.Params) (out rc.Params, err error) {
	return rc.Params{
		"loadedTestPlugins": filterPlugins(loadedPlugins, func(json *PackageJSON) bool { return json.isTesting() }),
	}, nil
}

func init() {
	rc.Add(rc.Call{
		Path:         "pluginsctl/removeTestPlugin",
		AuthRequired: true,
		Fn:           rcRemoveTestPlugin,
		Title:        "Remove a test plugin",
		Help: `This allows you to remove a plugin using its name

This takes the following parameters

- name: name of the plugin in the format ` + "`author`/`plugin_name`" + `

Eg

   rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react
`,
	})
}

func rcRemoveTestPlugin(_ context.Context, in rc.Params) (out rc.Params, err error) {
	name, err := in.GetString("name")
	if err != nil {
		return nil, err
	}
	err = loadedPlugins.removePlugin(name)
	if err != nil {
		return nil, err
	}
	return nil, nil
}

func init() {
	rc.Add(rc.Call{
		Path:         "pluginsctl/addPlugin",
		AuthRequired: true,
		Fn:           rcAddPlugin,
		Title:        "Add a plugin using url",
		Help: `Used for adding a plugin to the webgui

This takes the following parameters

- url: http url of the github repo where the plugin is hosted (http://github.com/rclone/rclone-webui-react)

Eg

   rclone rc pluginsctl/addPlugin
`,
	})
}

func rcAddPlugin(_ context.Context, in rc.Params) (out rc.Params, err error) {
	pluginURL, err := in.GetString("url")
	if err != nil {
		return nil, err
	}
	author, repoName, repoBranch, err := getAuthorRepoBranchGithub(pluginURL)
	if err != nil {
		return nil, err
	}
	branch, err := in.GetString("branch")
	if err != nil || branch == "" {
		branch = repoBranch
	}
	version, err := in.GetString("version")
	if err != nil || version == "" {
		version = "latest"
	}
	err = CreatePathIfNotExist(PluginsPath)
	if err != nil {
		return nil, err
	}
	// fetch and save package.json
	// https://raw.githubusercontent.com/rclone/rclone-webui-react/master/package.json
	pluginID := fmt.Sprintf("%s/%s", author, repoName)
	currentPluginPath := filepath.Join(PluginsPath, pluginID)
	err = CreatePathIfNotExist(currentPluginPath)
	if err != nil {
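		// A sketch of invoking this handler from the command line, mirroring
		// the Help text above (assuming a running rc server):
		//
		//	rclone rc pluginsctl/addPlugin url=https://github.com/rclone/rclone-webui-react
		//
		// package.json is fetched first, then the release zip is unpacked
		// into PluginsPath/<author>/<repo>/app below.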
		return nil, err
	}
	packageJSONUrl := fmt.Sprintf("https://raw.githubusercontent.com/%s/%s/%s/package.json", author, repoName, branch)
	packageJSONFilePath := filepath.Join(currentPluginPath, "package.json")
	err = DownloadFile(packageJSONFilePath, packageJSONUrl)
	if err != nil {
		return nil, err
	}
	// register in plugins
	// download release and save in plugins/<author>/<repo-name>/app
	// https://api.github.com/repos/rclone/rclone-webui-react/releases/latest
	releaseURL, tag, _, err := GetLatestReleaseURL(fmt.Sprintf("https://api.github.com/repos/%s/%s/releases/%s", author, repoName, version))
	if err != nil {
		return nil, err
	}
	zipName := tag + ".zip"
	zipPath := filepath.Join(currentPluginPath, zipName)
	err = DownloadFile(zipPath, releaseURL)
	if err != nil {
		return nil, err
	}
	extractPath := filepath.Join(currentPluginPath, "app")
	err = CreatePathIfNotExist(extractPath)
	if err != nil {
		return nil, err
	}
	err = os.RemoveAll(extractPath)
	if err != nil {
		fs.Logf(nil, "No previous downloads to remove")
	}
	fs.Logf(nil, "Unzipping plugin binary")
	err = Unzip(zipPath, extractPath)
	if err != nil {
		return nil, err
	}
	err = loadedPlugins.addPlugin(pluginID, packageJSONFilePath)
	if err != nil {
		return nil, err
	}
	return nil, nil
}

func init() {
	rc.Add(rc.Call{
		Path:         "pluginsctl/listPlugins",
		AuthRequired: true,
		Fn:           rcGetPlugins,
		Title:        "Get the list of currently loaded plugins",
		Help: `This allows you to get the currently enabled plugins and their details.

This takes no parameters and returns

- loadedPlugins: list of current production plugins
- testPlugins: list of temporarily loaded development plugins, usually running on a different server.

Eg

   rclone rc pluginsctl/listPlugins
`,
	})
}

func rcGetPlugins(_ context.Context, _ rc.Params) (out rc.Params, err error) {
	err = loadedPlugins.readFromFile()
	if err != nil {
		return nil, err
	}
	return rc.Params{
		"loadedPlugins":     filterPlugins(loadedPlugins, func(packageJSON *PackageJSON) bool { return !packageJSON.isTesting() }),
		"loadedTestPlugins": filterPlugins(loadedPlugins, func(packageJSON *PackageJSON) bool { return packageJSON.isTesting() }),
	}, nil
}

func init() {
	rc.Add(rc.Call{
		Path:         "pluginsctl/removePlugin",
		AuthRequired: true,
		Fn:           rcRemovePlugin,
		Title:        "Remove a loaded plugin",
		Help: `This allows you to remove a plugin using its name

This takes the following parameters

- name: name of the plugin in the format ` + "`author`/`plugin_name`" + `

Eg

   rclone rc pluginsctl/removePlugin name=rclone/video-plugin
`,
	})
}

func rcRemovePlugin(_ context.Context, in rc.Params) (out rc.Params, err error) {
	name, err := in.GetString("name")
	if err != nil {
		return nil, err
	}
	err = loadedPlugins.removePlugin(name)
	if err != nil {
		return nil, err
	}
	return nil, nil
}

func init() {
	rc.Add(rc.Call{
		Path:         "pluginsctl/getPluginsForType",
		AuthRequired: true,
		Fn:           rcGetPluginsForType,
		Title:        "Get plugins with type criteria",
		Help: `This shows all possible plugins by a mime type

This takes the following parameters

- type: supported mime type by a loaded plugin eg (video/mp4, audio/mp3)
- pluginType: filter plugins based on their type eg (DASHBOARD, FILE_HANDLER, TERMINAL)

and returns

- loadedPlugins: list of current production plugins
- testPlugins: list of temporarily loaded development plugins, usually running on a different server.
Eg rclone rc pluginsctl/getPluginsForType type=video/mp4 `, }) } func rcGetPluginsForType(_ context.Context, in rc.Params) (out rc.Params, err error) { handlesType, err := in.GetString("type") if err != nil { handlesType = "" } pluginType, err := in.GetString("pluginType") if err != nil { pluginType = "" } var loadedPluginsResult map[string]PackageJSON var loadedTestPluginsResult map[string]PackageJSON if pluginType == "" || pluginType == "FileHandler" { loadedPluginsResult = filterPlugins(loadedPlugins, func(packageJSON *PackageJSON) bool { for i := range packageJSON.Rclone.HandlesType { if packageJSON.Rclone.HandlesType[i] == handlesType && !packageJSON.Rclone.Test { return true } } return false }) loadedTestPluginsResult = filterPlugins(loadedPlugins, func(packageJSON *PackageJSON) bool { for i := range packageJSON.Rclone.HandlesType { if packageJSON.Rclone.HandlesType[i] == handlesType && packageJSON.Rclone.Test { return true } } return false }) } else { loadedPluginsResult = filterPlugins(loadedPlugins, func(packageJSON *PackageJSON) bool { return packageJSON.Rclone.PluginType == pluginType && !packageJSON.isTesting() }) loadedTestPluginsResult = filterPlugins(loadedPlugins, func(packageJSON *PackageJSON) bool { return packageJSON.Rclone.PluginType == pluginType && packageJSON.isTesting() }) } return rc.Params{ "loadedPlugins": loadedPluginsResult, "loadedTestPlugins": loadedTestPluginsResult, }, nil } rclone-1.53.3/fs/rc/webgui/rc_test.go000066400000000000000000000072661375552240400173770ustar00rootroot00000000000000package webgui import ( "context" "io/ioutil" "os" "path/filepath" "strings" "testing" "github.com/rclone/rclone/fs/rc" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) const testPluginName = "rclone-test-plugin" const testPluginAuthor = "rclone" const testPluginKey = testPluginAuthor + "/" + testPluginName const testPluginURL = "https://github.com/" + testPluginAuthor + "/" + testPluginName + "/" func setCacheDir(t *testing.T) string { cacheDir, err := ioutil.TempDir("", "rclone-cache-dir") assert.Nil(t, err) PluginsPath = filepath.Join(cacheDir, "plugins") pluginsConfigPath = filepath.Join(cacheDir, "config") loadedPlugins = newPlugins(availablePluginsJSONPath) err = loadedPlugins.readFromFile() assert.Nil(t, err) return cacheDir } func cleanCacheDir(t *testing.T, cacheDir string) { _ = os.RemoveAll(cacheDir) } func addPlugin(t *testing.T) { addPlugin := rc.Calls.Get("pluginsctl/addPlugin") assert.NotNil(t, addPlugin) in := rc.Params{ "url": testPluginURL, } out, err := addPlugin.Fn(context.Background(), in) if err != nil && strings.Contains(err.Error(), "bad HTTP status") { t.Skipf("skipping test as plugin download failed: %v", err) } require.Nil(t, err) assert.Nil(t, out) } func removePlugin(t *testing.T) { addPlugin := rc.Calls.Get("pluginsctl/removePlugin") assert.NotNil(t, addPlugin) in := rc.Params{ "name": testPluginKey, } out, err := addPlugin.Fn(context.Background(), in) assert.NotNil(t, err) assert.Nil(t, out) } //func TestListTestPlugins(t *testing.T) { // addPlugin := rc.Calls.Get("pluginsctl/listTestPlugins") // assert.NotNil(t, addPlugin) // in := rc.Params{} // out, err := addPlugin.Fn(context.Background(), in) // assert.Nil(t, err) // expected := rc.Params{ // "loadedTestPlugins": map[string]PackageJSON{}, // } // assert.Equal(t, expected, out) //} //func TestRemoveTestPlugin(t *testing.T) { // addPlugin := rc.Calls.Get("pluginsctl/removeTestPlugin") // assert.NotNil(t, addPlugin) // in := rc.Params{ // "name": "", // } // 
out, err := addPlugin.Fn(context.Background(), in) // assert.NotNil(t, err) // assert.Nil(t, out) //} func TestAddPlugin(t *testing.T) { cacheDir := setCacheDir(t) defer cleanCacheDir(t, cacheDir) addPlugin(t) _, ok := loadedPlugins.LoadedPlugins[testPluginKey] assert.True(t, ok) //removePlugin(t) //_, ok = loadedPlugins.LoadedPlugins[testPluginKey] //assert.False(t, ok) } func TestListPlugins(t *testing.T) { cacheDir := setCacheDir(t) defer cleanCacheDir(t, cacheDir) addPlugin := rc.Calls.Get("pluginsctl/listPlugins") assert.NotNil(t, addPlugin) in := rc.Params{} out, err := addPlugin.Fn(context.Background(), in) assert.Nil(t, err) expected := rc.Params{ "loadedPlugins": map[string]PackageJSON{}, "loadedTestPlugins": map[string]PackageJSON{}, } assert.Equal(t, expected, out) } func TestRemovePlugin(t *testing.T) { cacheDir := setCacheDir(t) defer cleanCacheDir(t, cacheDir) addPlugin(t) removePluginCall := rc.Calls.Get("pluginsctl/removePlugin") assert.NotNil(t, removePlugin) in := rc.Params{ "name": testPluginKey, } out, err := removePluginCall.Fn(context.Background(), in) assert.Nil(t, err) assert.Nil(t, out) removePlugin(t) assert.Equal(t, len(loadedPlugins.LoadedPlugins), 0) } func TestPluginsForType(t *testing.T) { addPlugin := rc.Calls.Get("pluginsctl/getPluginsForType") assert.NotNil(t, addPlugin) in := rc.Params{ "type": "", "pluginType": "FileHandler", } out, err := addPlugin.Fn(context.Background(), in) assert.Nil(t, err) assert.NotNil(t, out) in = rc.Params{ "type": "video/mp4", "pluginType": "", } _, err = addPlugin.Fn(context.Background(), in) assert.Nil(t, err) assert.NotNil(t, out) } rclone-1.53.3/fs/rc/webgui/webgui.go000066400000000000000000000162131375552240400172060ustar00rootroot00000000000000// Define the Web GUI helpers package webgui import ( "archive/zip" "encoding/json" "fmt" "io" "io/ioutil" "net/http" "os" "path/filepath" "strconv" "strings" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" ) // GetLatestReleaseURL returns the latest release details of the rclone-webui-react func GetLatestReleaseURL(fetchURL string) (string, string, int, error) { resp, err := http.Get(fetchURL) if err != nil { return "", "", 0, errors.Wrap(err, "failed getting latest release of rclone-webui") } defer fs.CheckClose(resp.Body, &err) if resp.StatusCode != http.StatusOK { return "", "", 0, errors.Errorf("bad HTTP status %d (%s) when fetching %s", resp.StatusCode, resp.Status, fetchURL) } results := gitHubRequest{} if err := json.NewDecoder(resp.Body).Decode(&results); err != nil { return "", "", 0, errors.Wrap(err, "could not decode results from http request") } if len(results.Assets) < 1 { return "", "", 0, errors.New("could not find an asset in the release. " + "check if asset was successfully added in github release assets") } res := results.Assets[0].BrowserDownloadURL tag := results.TagName size := results.Assets[0].Size return res, tag, size, nil } // CheckAndDownloadWebGUIRelease is a helper function to download and setup latest release of rclone-webui-react func CheckAndDownloadWebGUIRelease(checkUpdate bool, forceUpdate bool, fetchURL string, cacheDir string) (err error) { cachePath := filepath.Join(cacheDir, "webgui") tagPath := filepath.Join(cachePath, "tag") extractPath := filepath.Join(cachePath, "current") extractPathExist, extractPathStat, err := exists(extractPath) if err != nil { return err } if extractPathExist && !extractPathStat.IsDir() { return errors.New("Web GUI path exists, but is a file instead of folder. 
Please check the path " + extractPath)
	}
	// if the old file does not exist or forced update is enforced.
	// TODO: Add hashing to check integrity of the previous update.
	if !extractPathExist || checkUpdate || forceUpdate {
		// Get the latest release details
		WebUIURL, tag, size, err := GetLatestReleaseURL(fetchURL)
		if err != nil {
			return err
		}
		dat, err := ioutil.ReadFile(tagPath)
		if err == nil && string(dat) == tag {
			fs.Logf(nil, "No update to Web GUI available.")
			if !forceUpdate {
				return nil
			}
			fs.Logf(nil, "Force update the Web GUI binary.")
		}
		zipName := tag + ".zip"
		zipPath := filepath.Join(cachePath, zipName)
		cachePathExist, cachePathStat, _ := exists(cachePath)
		if !cachePathExist {
			if err := os.MkdirAll(cachePath, 0755); err != nil {
				return errors.New("Error creating cache directory: " + cachePath)
			}
		}
		if cachePathExist && !cachePathStat.IsDir() {
			return errors.New("Web GUI path is a file instead of folder. Please check it " + cachePath)
		}
		fs.Logf(nil, "A new release for gui is present at "+WebUIURL)
		fs.Logf(nil, "Downloading webgui binary. Please wait. [Size: %s, Path: %s]\n", strconv.Itoa(size), zipPath)
		// download the zip from latest url
		err = DownloadFile(zipPath, WebUIURL)
		if err != nil {
			return err
		}
		err = os.RemoveAll(extractPath)
		if err != nil {
			fs.Logf(nil, "No previous downloads to remove")
		}
		fs.Logf(nil, "Unzipping webgui binary")
		err = Unzip(zipPath, extractPath)
		if err != nil {
			return err
		}
		err = os.RemoveAll(zipPath)
		if err != nil {
			fs.Logf(nil, "Downloaded ZIP cannot be deleted")
		}
		err = ioutil.WriteFile(tagPath, []byte(tag), 0644)
		if err != nil {
			fs.Infof(nil, "Cannot write tag file. You may be required to redownload the binary next time.")
		}
	} else {
		fs.Logf(nil, "Web GUI exists. Update skipped.")
	}
	return nil
}

// DownloadFile is a helper function to download a file from url to the filepath
func DownloadFile(filepath string, url string) (err error) {
	// Get the data
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer fs.CheckClose(resp.Body, &err)
	if resp.StatusCode != http.StatusOK {
		return errors.Errorf("bad HTTP status %d (%s) when fetching %s", resp.StatusCode, resp.Status, url)
	}
	// Create the file
	out, err := os.Create(filepath)
	if err != nil {
		return err
	}
	defer fs.CheckClose(out, &err)
	// Write the body to file
	_, err = io.Copy(out, resp.Body)
	return err
}

// Unzip is a helper function to unzip a file specified in src to path dest
func Unzip(src, dest string) (err error) {
	dest = filepath.Clean(dest) + string(os.PathSeparator)
	r, err := zip.OpenReader(src)
	if err != nil {
		return err
	}
	defer fs.CheckClose(r, &err)
	if err := os.MkdirAll(dest, 0755); err != nil {
		return err
	}
	// Closure to address file descriptors issue with all the deferred .Close() methods
	extractAndWriteFile := func(f *zip.File) error {
		path := filepath.Join(dest, f.Name)
		// Check for Zip Slip: https://github.com/rclone/rclone/issues/3529
		if !strings.HasPrefix(path, dest) {
			return fmt.Errorf("%s: illegal file path", path)
		}
		rc, err := f.Open()
		if err != nil {
			return err
		}
		defer fs.CheckClose(rc, &err)
		if f.FileInfo().IsDir() {
			if err := os.MkdirAll(path, 0755); err != nil {
				return err
			}
		} else {
			if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil {
				return err
			}
			f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
			if err != nil {
				return err
			}
			defer fs.CheckClose(f, &err)
			_, err = io.Copy(f, rc)
			if err != nil {
				return err
			}
		}
		return nil
	}
	for _, f := range r.File {
		err := extractAndWriteFile(f)
		if err != nil {
			return err
		}
	}
	return nil
}

func
exists(path string) (existence bool, stat os.FileInfo, err error) { stat, err = os.Stat(path) if err == nil { return true, stat, nil } if os.IsNotExist(err) { return false, nil, nil } return false, stat, err } // CreatePathIfNotExist creates the path to a folder if it does not exist func CreatePathIfNotExist(path string) (err error) { exists, stat, _ := exists(path) if !exists { if err := os.MkdirAll(path, 0755); err != nil { return errors.New("Error creating : " + path) } } if exists && !stat.IsDir() { return errors.New("Path is a file instead of folder. Please check it " + path) } return nil } // gitHubRequest Maps the GitHub API request to structure type gitHubRequest struct { URL string `json:"url"` Prerelease bool `json:"prerelease"` CreatedAt time.Time `json:"created_at"` PublishedAt time.Time `json:"published_at"` TagName string `json:"tag_name"` Assets []struct { URL string `json:"url"` ID int `json:"id"` NodeID string `json:"node_id"` Name string `json:"name"` Label string `json:"label"` ContentType string `json:"content_type"` State string `json:"state"` Size int `json:"size"` DownloadCount int `json:"download_count"` CreatedAt time.Time `json:"created_at"` UpdatedAt time.Time `json:"updated_at"` BrowserDownloadURL string `json:"browser_download_url"` } `json:"assets"` TarballURL string `json:"tarball_url"` ZipballURL string `json:"zipball_url"` Body string `json:"body"` } rclone-1.53.3/fs/sizesuffix.go000066400000000000000000000055771375552240400162500ustar00rootroot00000000000000package fs // SizeSuffix is parsed by flag with k/M/G suffixes import ( "fmt" "math" "sort" "strconv" "strings" "github.com/pkg/errors" ) // SizeSuffix is an int64 with a friendly way of printing setting type SizeSuffix int64 // Common multipliers for SizeSuffix const ( Byte SizeSuffix = 1 << (iota * 10) KibiByte MebiByte GibiByte TebiByte PebiByte ExbiByte ) // Turn SizeSuffix into a string and a suffix func (x SizeSuffix) string() (string, string) { scaled := float64(0) suffix := "" switch { case x < 0: return "off", "" case x == 0: return "0", "" case x < 1<<10: scaled = float64(x) suffix = "" case x < 1<<20: scaled = float64(x) / (1 << 10) suffix = "k" case x < 1<<30: scaled = float64(x) / (1 << 20) suffix = "M" case x < 1<<40: scaled = float64(x) / (1 << 30) suffix = "G" case x < 1<<50: scaled = float64(x) / (1 << 40) suffix = "T" default: scaled = float64(x) / (1 << 50) suffix = "P" } if math.Floor(scaled) == scaled { return fmt.Sprintf("%.0f", scaled), suffix } return fmt.Sprintf("%.3f", scaled), suffix } // String turns SizeSuffix into a string func (x SizeSuffix) String() string { val, suffix := x.string() return val + suffix } // Unit turns SizeSuffix into a string with a unit func (x SizeSuffix) Unit(unit string) string { val, suffix := x.string() if val == "off" { return val } return val + " " + suffix + unit } // Set a SizeSuffix func (x *SizeSuffix) Set(s string) error { if len(s) == 0 { return errors.New("empty string") } if strings.ToLower(s) == "off" { *x = -1 return nil } suffix := s[len(s)-1] suffixLen := 1 var multiplier float64 switch suffix { case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.': suffixLen = 0 multiplier = 1 << 10 case 'b', 'B': multiplier = 1 case 'k', 'K': multiplier = 1 << 10 case 'm', 'M': multiplier = 1 << 20 case 'g', 'G': multiplier = 1 << 30 case 't', 'T': multiplier = 1 << 40 case 'p', 'P': multiplier = 1 << 50 default: return errors.Errorf("bad suffix %q", suffix) } s = s[:len(s)-suffixLen] value, err := strconv.ParseFloat(s, 64) if err != nil { 
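	// (Examples of what Set accepts, matching the suffix switch above and
	// the table in sizesuffix_test.go below — a bare number defaults to the
	// 1 << 10 multiplier:
	//
	//	var ss SizeSuffix
	//	_ = ss.Set("1M")  // 1 << 20 bytes
	//	_ = ss.Set("0.1") // 102 bytes
	//	_ = ss.Set("off") // -1, i.e. no limit
	//
	// negative values are rejected just below.)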
return err } if value < 0 { return errors.Errorf("size can't be negative %q", s) } value *= multiplier *x = SizeSuffix(value) return nil } // Type of the value func (x *SizeSuffix) Type() string { return "SizeSuffix" } // Scan implements the fmt.Scanner interface func (x *SizeSuffix) Scan(s fmt.ScanState, ch rune) error { token, err := s.Token(true, nil) if err != nil { return err } return x.Set(string(token)) } // SizeSuffixList is a slice SizeSuffix values type SizeSuffixList []SizeSuffix func (l SizeSuffixList) Len() int { return len(l) } func (l SizeSuffixList) Swap(i, j int) { l[i], l[j] = l[j], l[i] } func (l SizeSuffixList) Less(i, j int) bool { return l[i] < l[j] } // Sort sorts the list func (l SizeSuffixList) Sort() { sort.Sort(l) } rclone-1.53.3/fs/sizesuffix_test.go000066400000000000000000000043641375552240400173000ustar00rootroot00000000000000package fs import ( "fmt" "testing" "github.com/spf13/pflag" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // Check it satisfies the interface var _ pflag.Value = (*SizeSuffix)(nil) func TestSizeSuffixString(t *testing.T) { for _, test := range []struct { in float64 want string }{ {0, "0"}, {102, "102"}, {1024, "1k"}, {1024 * 1024, "1M"}, {1024 * 1024 * 1024, "1G"}, {10 * 1024 * 1024 * 1024, "10G"}, {10.1 * 1024 * 1024 * 1024, "10.100G"}, {-1, "off"}, {-100, "off"}, } { ss := SizeSuffix(test.in) got := ss.String() assert.Equal(t, test.want, got) } } func TestSizeSuffixUnit(t *testing.T) { for _, test := range []struct { in float64 want string }{ {0, "0 Bytes"}, {102, "102 Bytes"}, {1024, "1 kBytes"}, {1024 * 1024, "1 MBytes"}, {1024 * 1024 * 1024, "1 GBytes"}, {10 * 1024 * 1024 * 1024, "10 GBytes"}, {10.1 * 1024 * 1024 * 1024, "10.100 GBytes"}, {10 * 1024 * 1024 * 1024 * 1024, "10 TBytes"}, {10 * 1024 * 1024 * 1024 * 1024 * 1024, "10 PBytes"}, {1 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024, "1024 PBytes"}, {-1, "off"}, {-100, "off"}, } { ss := SizeSuffix(test.in) got := ss.Unit("Bytes") assert.Equal(t, test.want, got) } } func TestSizeSuffixSet(t *testing.T) { for _, test := range []struct { in string want int64 err bool }{ {"0", 0, false}, {"1b", 1, false}, {"102B", 102, false}, {"0.1k", 102, false}, {"0.1", 102, false}, {"1K", 1024, false}, {"1", 1024, false}, {"2.5", 1024 * 2.5, false}, {"1M", 1024 * 1024, false}, {"1.g", 1024 * 1024 * 1024, false}, {"10G", 10 * 1024 * 1024 * 1024, false}, {"10T", 10 * 1024 * 1024 * 1024 * 1024, false}, {"10P", 10 * 1024 * 1024 * 1024 * 1024 * 1024, false}, {"off", -1, false}, {"OFF", -1, false}, {"", 0, true}, {"1q", 0, true}, {"1.q", 0, true}, {"1q", 0, true}, {"-1K", 0, true}, } { ss := SizeSuffix(0) err := ss.Set(test.in) if test.err { require.Error(t, err, test.in) } else { require.NoError(t, err, test.in) } assert.Equal(t, test.want, int64(ss)) } } func TestSizeSuffixScan(t *testing.T) { var v SizeSuffix n, err := fmt.Sscan(" 17M ", &v) require.NoError(t, err) assert.Equal(t, 1, n) assert.Equal(t, SizeSuffix(17<<20), v) } rclone-1.53.3/fs/sync/000077500000000000000000000000001375552240400144605ustar00rootroot00000000000000rclone-1.53.3/fs/sync/pipe.go000066400000000000000000000125611375552240400157510ustar00rootroot00000000000000package sync import ( "context" "math/bits" "strconv" "strings" "sync" "github.com/aalpar/deheap" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/fserrors" ) // compare two items for order by type lessFn func(a, b fs.ObjectPair) bool // pipe provides an unbounded channel like experience // // Note unlike 
channels these aren't strictly ordered.
type pipe struct {
	mu        sync.Mutex
	c         chan struct{}
	queue     []fs.ObjectPair
	closed    bool
	totalSize int64
	stats     func(items int, totalSize int64)
	less      lessFn
	fraction  int
}

func newPipe(orderBy string, stats func(items int, totalSize int64), maxBacklog int) (*pipe, error) {
	if maxBacklog < 0 {
		maxBacklog = (1 << (bits.UintSize - 1)) - 1 // largest positive int
	}
	less, fraction, err := newLess(orderBy)
	if err != nil {
		return nil, fserrors.FatalError(err)
	}
	p := &pipe{
		c:        make(chan struct{}, maxBacklog),
		stats:    stats,
		less:     less,
		fraction: fraction,
	}
	if p.less != nil {
		deheap.Init(p)
	}
	return p, nil
}

// Len satisfy heap.Interface - must be called with lock held
func (p *pipe) Len() int {
	return len(p.queue)
}

// Less satisfy heap.Interface - must be called with lock held
func (p *pipe) Less(i, j int) bool {
	return p.less(p.queue[i], p.queue[j])
}

// Swap satisfy heap.Interface - must be called with lock held
func (p *pipe) Swap(i, j int) {
	p.queue[i], p.queue[j] = p.queue[j], p.queue[i]
}

// Push satisfy heap.Interface - must be called with lock held
func (p *pipe) Push(item interface{}) {
	p.queue = append(p.queue, item.(fs.ObjectPair))
}

// Pop satisfy heap.Interface - must be called with lock held
func (p *pipe) Pop() interface{} {
	old := p.queue
	n := len(old)
	item := old[n-1]
	old[n-1] = fs.ObjectPair{} // avoid memory leak
	p.queue = old[0 : n-1]
	return item
}

// Put a pair into the pipe
//
// It returns ok = false if the context was cancelled
//
// It will panic if you call it after Close()
func (p *pipe) Put(ctx context.Context, pair fs.ObjectPair) (ok bool) {
	if ctx.Err() != nil {
		return false
	}
	p.mu.Lock()
	if p.less == nil {
		// no order-by
		p.queue = append(p.queue, pair)
	} else {
		deheap.Push(p, pair)
	}
	size := pair.Src.Size()
	if size > 0 {
		p.totalSize += size
	}
	p.stats(len(p.queue), p.totalSize)
	p.mu.Unlock()
	select {
	case <-ctx.Done():
		return false
	case p.c <- struct{}{}:
	}
	return true
}

// GetMax gets a pair from the pipe
//
// If fraction is >= the mixed fraction set in the pipe then it gets it
// from the other end of the heap if order-by is in effect
//
// It returns ok = false if the context was cancelled or Close() has
// been called.
func (p *pipe) GetMax(ctx context.Context, fraction int) (pair fs.ObjectPair, ok bool) {
	if ctx.Err() != nil {
		return
	}
	select {
	case <-ctx.Done():
		return
	case _, ok = <-p.c:
		if !ok {
			return
		}
	}
	p.mu.Lock()
	if p.less == nil {
		// no order-by
		pair = p.queue[0]
		p.queue[0] = fs.ObjectPair{} // avoid memory leak
		p.queue = p.queue[1:]
	} else if p.fraction < 0 || fraction < p.fraction {
		pair = deheap.Pop(p).(fs.ObjectPair)
	} else {
		pair = deheap.PopMax(p).(fs.ObjectPair)
	}
	size := pair.Src.Size()
	if size > 0 {
		p.totalSize -= size
	}
	if p.totalSize < 0 {
		p.totalSize = 0
	}
	p.stats(len(p.queue), p.totalSize)
	p.mu.Unlock()
	return pair, true
}

// Get a pair from the pipe
//
// It returns ok = false if the context was cancelled or Close() has
// been called.
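//
// A rough sketch of the produce/consume cycle (names as in pipe_test.go;
// the orderBy string is illustrative):
//
//	p, _ := newPipe("size,mixed,50", stats, 10)
//	ok := p.Put(ctx, fs.ObjectPair{Src: src}) // ok == false if ctx is cancelled
//	pair, ok := p.GetMax(ctx, 75)             // 75 >= 50, so this pops the large end
//
// With an empty orderBy string the queue is plain FIFO and Get is used
// instead.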
func (p *pipe) Get(ctx context.Context) (pair fs.ObjectPair, ok bool) { return p.GetMax(ctx, -1) } // Stats reads the number of items in the queue and the totalSize func (p *pipe) Stats() (items int, totalSize int64) { p.mu.Lock() items, totalSize = len(p.queue), p.totalSize p.mu.Unlock() return items, totalSize } // Close the pipe // // Writes to a closed pipe will panic as will double closing a pipe func (p *pipe) Close() { p.mu.Lock() close(p.c) p.closed = true p.mu.Unlock() } // newLess returns a less function for the heap comparison or nil if // one is not required func newLess(orderBy string) (less lessFn, fraction int, err error) { fraction = -1 if orderBy == "" { return nil, fraction, nil } parts := strings.Split(strings.ToLower(orderBy), ",") switch parts[0] { case "name": less = func(a, b fs.ObjectPair) bool { return a.Src.Remote() < b.Src.Remote() } case "size": less = func(a, b fs.ObjectPair) bool { return a.Src.Size() < b.Src.Size() } case "modtime": less = func(a, b fs.ObjectPair) bool { ctx := context.Background() return a.Src.ModTime(ctx).Before(b.Src.ModTime(ctx)) } default: return nil, fraction, errors.Errorf("unknown --order-by comparison %q", parts[0]) } descending := false if len(parts) > 1 { switch parts[1] { case "ascending", "asc": case "descending", "desc": descending = true case "mixed": fraction = 50 if len(parts) > 2 { fraction, err = strconv.Atoi(parts[2]) if err != nil { return nil, fraction, errors.Errorf("bad mixed fraction --order-by %q", parts[2]) } } default: return nil, fraction, errors.Errorf("unknown --order-by sort direction %q", parts[1]) } } if (fraction >= 0 && len(parts) > 3) || (fraction < 0 && len(parts) > 2) { return nil, fraction, errors.Errorf("bad --order-by string %q", orderBy) } if descending { oldLess := less less = func(a, b fs.ObjectPair) bool { return !oldLess(a, b) } } return less, fraction, nil } rclone-1.53.3/fs/sync/pipe_test.go000066400000000000000000000153721375552240400170130ustar00rootroot00000000000000package sync import ( "container/heap" "context" "sync" "sync/atomic" "testing" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // Check interface satisfied var _ heap.Interface = (*pipe)(nil) func TestPipe(t *testing.T) { var queueLength int var queueSize int64 stats := func(n int, size int64) { queueLength, queueSize = n, size } // Make a new pipe p, err := newPipe("", stats, 10) require.NoError(t, err) checkStats := func(expectedN int, expectedSize int64) { n, size := p.Stats() assert.Equal(t, expectedN, n) assert.Equal(t, expectedSize, size) assert.Equal(t, expectedN, queueLength) assert.Equal(t, expectedSize, queueSize) } checkStats(0, 0) ctx := context.Background() obj1 := mockobject.New("potato").WithContent([]byte("hello"), mockobject.SeekModeNone) pair1 := fs.ObjectPair{Src: obj1, Dst: nil} // Put an object ok := p.Put(ctx, pair1) assert.Equal(t, true, ok) checkStats(1, 5) // Close the pipe showing reading on closed pipe is OK p.Close() // Read from pipe pair2, ok := p.Get(ctx) assert.Equal(t, pair1, pair2) assert.Equal(t, true, ok) checkStats(0, 0) // Check read on closed pipe pair2, ok = p.Get(ctx) assert.Equal(t, fs.ObjectPair{}, pair2) assert.Equal(t, false, ok) // Check panic on write to closed pipe assert.Panics(t, func() { p.Put(ctx, pair1) }) // Make a new pipe p, err = newPipe("", stats, 10) require.NoError(t, err) ctx2, cancel := context.WithCancel(ctx) // cancel it in the background - check read ceases go 
cancel() pair2, ok = p.Get(ctx2) assert.Equal(t, fs.ObjectPair{}, pair2) assert.Equal(t, false, ok) // check we can't write ok = p.Put(ctx2, pair1) assert.Equal(t, false, ok) } // TestPipeConcurrent runs concurrent Get and Put to flush out any // race conditions and concurrency problems. func TestPipeConcurrent(t *testing.T) { const ( N = 1000 readWriters = 10 ) stats := func(n int, size int64) {} // Make a new pipe p, err := newPipe("", stats, 10) require.NoError(t, err) var wg sync.WaitGroup obj1 := mockobject.New("potato").WithContent([]byte("hello"), mockobject.SeekModeNone) pair1 := fs.ObjectPair{Src: obj1, Dst: nil} ctx := context.Background() var count int64 for j := 0; j < readWriters; j++ { wg.Add(2) go func() { defer wg.Done() for i := 0; i < N; i++ { // Read from pipe pair2, ok := p.Get(ctx) assert.Equal(t, pair1, pair2) assert.Equal(t, true, ok) atomic.AddInt64(&count, -1) } }() go func() { defer wg.Done() for i := 0; i < N; i++ { // Put an object ok := p.Put(ctx, pair1) assert.Equal(t, true, ok) atomic.AddInt64(&count, 1) } }() } wg.Wait() assert.Equal(t, int64(0), count) } func TestPipeOrderBy(t *testing.T) { var ( stats = func(n int, size int64) {} ctx = context.Background() obj1 = mockobject.New("b").WithContent([]byte("1"), mockobject.SeekModeNone) obj2 = mockobject.New("a").WithContent([]byte("22"), mockobject.SeekModeNone) pair1 = fs.ObjectPair{Src: obj1} pair2 = fs.ObjectPair{Src: obj2} ) for _, test := range []struct { orderBy string swapped1 bool swapped2 bool fraction int }{ {"", false, true, -1}, {"size", false, false, -1}, {"name", true, true, -1}, {"modtime", false, true, -1}, {"size,ascending", false, false, -1}, {"name,asc", true, true, -1}, {"modtime,ascending", false, true, -1}, {"size,descending", true, true, -1}, {"name,desc", false, false, -1}, {"modtime,descending", true, false, -1}, {"size,mixed,50", false, false, 25}, {"size,mixed,51", true, true, 75}, } { t.Run(test.orderBy, func(t *testing.T) { p, err := newPipe(test.orderBy, stats, 10) require.NoError(t, err) readAndCheck := func(swapped bool) { var readFirst, readSecond fs.ObjectPair var ok1, ok2 bool if test.fraction < 0 { readFirst, ok1 = p.Get(ctx) readSecond, ok2 = p.Get(ctx) } else { readFirst, ok1 = p.GetMax(ctx, test.fraction) readSecond, ok2 = p.GetMax(ctx, test.fraction) } assert.True(t, ok1) assert.True(t, ok2) if swapped { assert.True(t, readFirst == pair2 && readSecond == pair1) } else { assert.True(t, readFirst == pair1 && readSecond == pair2) } } ok := p.Put(ctx, pair1) assert.True(t, ok) ok = p.Put(ctx, pair2) assert.True(t, ok) readAndCheck(test.swapped1) // insert other way round ok = p.Put(ctx, pair2) assert.True(t, ok) ok = p.Put(ctx, pair1) assert.True(t, ok) readAndCheck(test.swapped2) }) } } func TestNewLess(t *testing.T) { t.Run("blankOK", func(t *testing.T) { less, _, err := newLess("") require.NoError(t, err) assert.Nil(t, less) }) t.Run("tooManyParts", func(t *testing.T) { _, _, err := newLess("size,asc,toomanyparts") require.Error(t, err) assert.Contains(t, err.Error(), "bad --order-by string") }) t.Run("tooManyParts2", func(t *testing.T) { _, _, err := newLess("size,mixed,50,toomanyparts") require.Error(t, err) assert.Contains(t, err.Error(), "bad --order-by string") }) t.Run("badMixed", func(t *testing.T) { _, _, err := newLess("size,mixed,32.7") require.Error(t, err) assert.Contains(t, err.Error(), "bad mixed fraction") }) t.Run("unknownComparison", func(t *testing.T) { _, _, err := newLess("potato") require.Error(t, err) assert.Contains(t, err.Error(), "unknown 
--order-by comparison") }) t.Run("unknownSortDirection", func(t *testing.T) { _, _, err := newLess("name,sideways") require.Error(t, err) assert.Contains(t, err.Error(), "unknown --order-by sort direction") }) var ( obj1 = mockobject.New("b").WithContent([]byte("1"), mockobject.SeekModeNone) obj2 = mockobject.New("a").WithContent([]byte("22"), mockobject.SeekModeNone) pair1 = fs.ObjectPair{Src: obj1} pair2 = fs.ObjectPair{Src: obj2} ) for _, test := range []struct { orderBy string pair1LessPair2 bool pair2LessPair1 bool wantFraction int }{ {"size", true, false, -1}, {"name", false, true, -1}, {"modtime", false, false, -1}, {"size,ascending", true, false, -1}, {"name,asc", false, true, -1}, {"modtime,ascending", false, false, -1}, {"size,descending", false, true, -1}, {"name,desc", true, false, -1}, {"modtime,descending", true, true, -1}, {"modtime,mixed", false, false, 50}, {"modtime,mixed,30", false, false, 30}, } { t.Run(test.orderBy, func(t *testing.T) { less, gotFraction, err := newLess(test.orderBy) assert.Equal(t, test.wantFraction, gotFraction) require.NoError(t, err) require.NotNil(t, less) pair1LessPair2 := less(pair1, pair2) assert.Equal(t, test.pair1LessPair2, pair1LessPair2) pair2LessPair1 := less(pair2, pair1) assert.Equal(t, test.pair2LessPair1, pair2LessPair1) }) } } rclone-1.53.3/fs/sync/rc.go000066400000000000000000000032061375552240400154140ustar00rootroot00000000000000package sync import ( "context" "github.com/rclone/rclone/fs/rc" ) func init() { for _, name := range []string{"sync", "copy", "move"} { name := name moveHelp := "" if name == "move" { moveHelp = "- deleteEmptySrcDirs - delete empty src directories if set\n" } rc.Add(rc.Call{ Path: "sync/" + name, AuthRequired: true, Fn: func(ctx context.Context, in rc.Params) (rc.Params, error) { return rcSyncCopyMove(ctx, in, name) }, Title: name + " a directory from source remote to destination remote", Help: `This takes the following parameters - srcFs - a remote name string eg "drive:src" for the source - dstFs - a remote name string eg "drive:dst" for the destination ` + moveHelp + ` See the [` + name + ` command](/commands/rclone_` + name + `/) command for more information on the above.`, }) } } // Sync/Copy/Move a file func rcSyncCopyMove(ctx context.Context, in rc.Params, name string) (out rc.Params, err error) { srcFs, err := rc.GetFsNamed(in, "srcFs") if err != nil { return nil, err } dstFs, err := rc.GetFsNamed(in, "dstFs") if err != nil { return nil, err } createEmptySrcDirs, err := in.GetBool("createEmptySrcDirs") if rc.NotErrParamNotFound(err) { return nil, err } switch name { case "sync": return nil, Sync(ctx, dstFs, srcFs, createEmptySrcDirs) case "copy": return nil, CopyDir(ctx, dstFs, srcFs, createEmptySrcDirs) case "move": deleteEmptySrcDirs, err := in.GetBool("deleteEmptySrcDirs") if rc.NotErrParamNotFound(err) { return nil, err } return nil, MoveDir(ctx, dstFs, srcFs, deleteEmptySrcDirs, createEmptySrcDirs) } panic("unknown rcSyncCopyMove type") } rclone-1.53.3/fs/sync/rc_test.go000066400000000000000000000056251375552240400164620ustar00rootroot00000000000000package sync import ( "context" "testing" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/rc" "github.com/rclone/rclone/fstest" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func rcNewRun(t *testing.T, method string) (*fstest.Run, *rc.Call) { if *fstest.RemoteName != "" { t.Skip("Skipping test on non local remote") } r := fstest.NewRun(t) call := rc.Calls.Get(method) assert.NotNil(t, call) 
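// The rc calls resolve the "srcFs" and "dstFs" strings via the fs cache,
// so the test seeds the cache with the two test remotes below. A
// hypothetical client-side invocation (a sketch only - the parameter names
// come from the Help text above, the remote names are illustrative):
//
//	in := rc.Params{"srcFs": "drive:src", "dstFs": "drive:dst"}
//	_, err := call.Fn(context.Background(), in)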
cache.Put(r.LocalName, r.Flocal) cache.Put(r.FremoteName, r.Fremote) return r, call } // sync/copy: copy a directory from source remote to destination remote func TestRcCopy(t *testing.T) { r, call := rcNewRun(t, "sync/copy") defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) file1 := r.WriteBoth(context.Background(), "file1", "file1 contents", t1) file2 := r.WriteFile("subdir/file2", "file2 contents", t2) file3 := r.WriteObject(context.Background(), "subdir/subsubdir/file3", "file3 contents", t3) fstest.CheckItems(t, r.Flocal, file1, file2) fstest.CheckItems(t, r.Fremote, file1, file3) in := rc.Params{ "srcFs": r.LocalName, "dstFs": r.FremoteName, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckItems(t, r.Flocal, file1, file2) fstest.CheckItems(t, r.Fremote, file1, file2, file3) } // sync/move: move a directory from source remote to destination remote func TestRcMove(t *testing.T) { r, call := rcNewRun(t, "sync/move") defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) file1 := r.WriteBoth(context.Background(), "file1", "file1 contents", t1) file2 := r.WriteFile("subdir/file2", "file2 contents", t2) file3 := r.WriteObject(context.Background(), "subdir/subsubdir/file3", "file3 contents", t3) fstest.CheckItems(t, r.Flocal, file1, file2) fstest.CheckItems(t, r.Fremote, file1, file3) in := rc.Params{ "srcFs": r.LocalName, "dstFs": r.FremoteName, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckItems(t, r.Flocal) fstest.CheckItems(t, r.Fremote, file1, file2, file3) } // sync/sync: sync a directory from source remote to destination remote func TestRcSync(t *testing.T) { r, call := rcNewRun(t, "sync/sync") defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) file1 := r.WriteBoth(context.Background(), "file1", "file1 contents", t1) file2 := r.WriteFile("subdir/file2", "file2 contents", t2) file3 := r.WriteObject(context.Background(), "subdir/subsubdir/file3", "file3 contents", t3) fstest.CheckItems(t, r.Flocal, file1, file2) fstest.CheckItems(t, r.Fremote, file1, file3) in := rc.Params{ "srcFs": r.LocalName, "dstFs": r.FremoteName, } out, err := call.Fn(context.Background(), in) require.NoError(t, err) assert.Equal(t, rc.Params(nil), out) fstest.CheckItems(t, r.Flocal, file1, file2) fstest.CheckItems(t, r.Fremote, file1, file2) } rclone-1.53.3/fs/sync/sync.go000066400000000000000000001003711375552240400157650ustar00rootroot00000000000000// Package sync is the implementation of sync/copy/move package sync import ( "context" "fmt" "path" "sort" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/march" "github.com/rclone/rclone/fs/operations" ) type syncCopyMove struct { // parameters fdst fs.Fs fsrc fs.Fs deleteMode fs.DeleteMode // how we are doing deletions DoMove bool copyEmptySrcDirs bool deleteEmptySrcDirs bool dir string // internal state ctx context.Context // internal context for controlling go-routines cancel func() // cancel the context noTraverse bool // if set don't traverse the dst noCheckDest bool // if set transfer all objects regardless without checking dst noUnicodeNormalization bool // don't normalize unicode characters in filenames deletersWg sync.WaitGroup // for delete before go routine deleteFilesCh chan 
fs.Object // channel to receive deletes if delete before trackRenames bool // set if we should do server side renames trackRenamesStrategy trackRenamesStrategy // strategies used for tracking renames dstFilesMu sync.Mutex // protect dstFiles dstFiles map[string]fs.Object // dst files, always filled srcFiles map[string]fs.Object // src files, only used if deleteBefore srcFilesChan chan fs.Object // passes src objects srcFilesResult chan error // error result of src listing dstFilesResult chan error // error result of dst listing dstEmptyDirsMu sync.Mutex // protect dstEmptyDirs dstEmptyDirs map[string]fs.DirEntry // potentially empty directories srcEmptyDirsMu sync.Mutex // protect srcEmptyDirs srcEmptyDirs map[string]fs.DirEntry // potentially empty directories checkerWg sync.WaitGroup // wait for checkers toBeChecked *pipe // checkers channel transfersWg sync.WaitGroup // wait for transfers toBeUploaded *pipe // copiers channel errorMu sync.Mutex // Mutex covering the errors variables err error // normal error from copy process noRetryErr error // error with NoRetry set fatalErr error // fatal error commonHash hash.Type // common hash type between src and dst modifyWindow time.Duration // modify window between fsrc, fdst renameMapMu sync.Mutex // mutex to protect the below renameMap map[string][]fs.Object // dst files by hash - only used by trackRenames renamerWg sync.WaitGroup // wait for renamers toBeRenamed *pipe // renamers channel trackRenamesWg sync.WaitGroup // wg for background track renames trackRenamesCh chan fs.Object // objects are pumped in here renameCheck []fs.Object // accumulate files to check for rename here compareCopyDest fs.Fs // place to check for files to server side copy backupDir fs.Fs // place to store overwrites/deletes checkFirst bool // if set run all the checkers before starting transfers } type trackRenamesStrategy byte const ( trackRenamesStrategyHash trackRenamesStrategy = 1 << iota trackRenamesStrategyModtime trackRenamesStrategyLeaf ) func (strategy trackRenamesStrategy) hash() bool { return (strategy & trackRenamesStrategyHash) != 0 } func (strategy trackRenamesStrategy) modTime() bool { return (strategy & trackRenamesStrategyModtime) != 0 } func (strategy trackRenamesStrategy) leaf() bool { return (strategy & trackRenamesStrategyLeaf) != 0 } func newSyncCopyMove(ctx context.Context, fdst, fsrc fs.Fs, deleteMode fs.DeleteMode, DoMove bool, deleteEmptySrcDirs bool, copyEmptySrcDirs bool) (*syncCopyMove, error) { if (deleteMode != fs.DeleteModeOff || DoMove) && operations.Overlapping(fdst, fsrc) { return nil, fserrors.FatalError(fs.ErrorOverlapping) } s := &syncCopyMove{ fdst: fdst, fsrc: fsrc, deleteMode: deleteMode, DoMove: DoMove, copyEmptySrcDirs: copyEmptySrcDirs, deleteEmptySrcDirs: deleteEmptySrcDirs, dir: "", srcFilesChan: make(chan fs.Object, fs.Config.Checkers+fs.Config.Transfers), srcFilesResult: make(chan error, 1), dstFilesResult: make(chan error, 1), dstEmptyDirs: make(map[string]fs.DirEntry), srcEmptyDirs: make(map[string]fs.DirEntry), noTraverse: fs.Config.NoTraverse, noCheckDest: fs.Config.NoCheckDest, noUnicodeNormalization: fs.Config.NoUnicodeNormalization, deleteFilesCh: make(chan fs.Object, fs.Config.Checkers), trackRenames: fs.Config.TrackRenames, commonHash: fsrc.Hashes().Overlap(fdst.Hashes()).GetOne(), modifyWindow: fs.GetModifyWindow(fsrc, fdst), trackRenamesCh: make(chan fs.Object, fs.Config.Checkers), checkFirst: fs.Config.CheckFirst, } backlog := fs.Config.MaxBacklog if s.checkFirst { fs.Infof(s.fdst, "Running all checks before
starting transfers") backlog = -1 } var err error s.toBeChecked, err = newPipe(fs.Config.OrderBy, accounting.Stats(ctx).SetCheckQueue, backlog) if err != nil { return nil, err } s.toBeUploaded, err = newPipe(fs.Config.OrderBy, accounting.Stats(ctx).SetTransferQueue, backlog) if err != nil { return nil, err } s.toBeRenamed, err = newPipe(fs.Config.OrderBy, accounting.Stats(ctx).SetRenameQueue, backlog) if err != nil { return nil, err } // If a max session duration has been defined add a deadline to the context if fs.Config.MaxDuration > 0 { endTime := time.Now().Add(fs.Config.MaxDuration) fs.Infof(s.fdst, "Transfer session deadline: %s", endTime.Format("2006/01/02 15:04:05")) s.ctx, s.cancel = context.WithDeadline(ctx, endTime) } else { s.ctx, s.cancel = context.WithCancel(ctx) } if s.noTraverse && s.deleteMode != fs.DeleteModeOff { fs.Errorf(nil, "Ignoring --no-traverse with sync") s.noTraverse = false } s.trackRenamesStrategy, err = parseTrackRenamesStrategy(fs.Config.TrackRenamesStrategy) if err != nil { return nil, err } if s.noCheckDest { if s.deleteMode != fs.DeleteModeOff { return nil, errors.New("can't use --no-check-dest with sync: use copy instead") } if fs.Config.Immutable { return nil, errors.New("can't use --no-check-dest with --immutable") } if s.backupDir != nil { return nil, errors.New("can't use --no-check-dest with --backup-dir") } } if s.trackRenames { // Don't track renames for remotes without server-side move support. if !operations.CanServerSideMove(fdst) { fs.Errorf(fdst, "Ignoring --track-renames as the destination does not support server-side move or copy") s.trackRenames = false } if s.trackRenamesStrategy.hash() && s.commonHash == hash.None { fs.Errorf(fdst, "Ignoring --track-renames as the source and destination do not have a common hash") s.trackRenames = false } if s.trackRenamesStrategy.modTime() && s.modifyWindow == fs.ModTimeNotSupported { fs.Errorf(fdst, "Ignoring --track-renames as either the source or destination do not support modtime") s.trackRenames = false } if s.deleteMode == fs.DeleteModeOff { fs.Errorf(fdst, "Ignoring --track-renames as it doesn't work with copy or move, only sync") s.trackRenames = false } } if s.trackRenames { // track renames needs delete after if s.deleteMode != fs.DeleteModeOff { s.deleteMode = fs.DeleteModeAfter } if s.noTraverse { fs.Errorf(nil, "Ignoring --no-traverse with --track-renames") s.noTraverse = false } } // Make Fs for --backup-dir if required if fs.Config.BackupDir != "" || fs.Config.Suffix != "" { var err error s.backupDir, err = operations.BackupDir(fdst, fsrc, "") if err != nil { return nil, err } } if fs.Config.CompareDest != "" { var err error s.compareCopyDest, err = operations.GetCompareDest() if err != nil { return nil, err } } else if fs.Config.CopyDest != "" { var err error s.compareCopyDest, err = operations.GetCopyDest(fdst) if err != nil { return nil, err } } return s, nil } // Check to see if the context has been cancelled func (s *syncCopyMove) aborting() bool { return s.ctx.Err() != nil } // This reads the map and pumps it into the channel passed in, closing // the channel at the end func (s *syncCopyMove) pumpMapToChan(files map[string]fs.Object, out chan<- fs.Object) { outer: for _, o := range files { if s.aborting() { break outer } select { case out <- o: case <-s.ctx.Done(): break outer } } close(out) s.srcFilesResult <- nil } // This checks the types of errors returned while copying files func (s *syncCopyMove) processError(err error) { if err == nil { return } if err == 
context.DeadlineExceeded { err = fserrors.NoRetryError(err) } s.errorMu.Lock() defer s.errorMu.Unlock() switch { case fserrors.IsFatalError(err): if !s.aborting() { fs.Errorf(nil, "Cancelling sync due to fatal error: %v", err) s.cancel() } s.fatalErr = err case fserrors.IsNoRetryError(err): s.noRetryErr = err default: s.err = err } } // Returns the current error (if any) in the order of precedence // fatalErr // normal error // noRetryErr func (s *syncCopyMove) currentError() error { s.errorMu.Lock() defer s.errorMu.Unlock() if s.fatalErr != nil { return s.fatalErr } if s.err != nil { return s.err } return s.noRetryErr } // pairChecker reads Objects on in and sends them to out if they need transferring. // // FIXME potentially doing lots of hashes at once func (s *syncCopyMove) pairChecker(in *pipe, out *pipe, fraction int, wg *sync.WaitGroup) { defer wg.Done() for { pair, ok := in.GetMax(s.ctx, fraction) if !ok { return } src := pair.Src var err error tr := accounting.Stats(s.ctx).NewCheckingTransfer(src) // Check to see if we can store this if src.Storable() { NoNeedTransfer, err := operations.CompareOrCopyDest(s.ctx, s.fdst, pair.Dst, pair.Src, s.compareCopyDest, s.backupDir) if err != nil { s.processError(err) } if !NoNeedTransfer && operations.NeedTransfer(s.ctx, pair.Dst, pair.Src) { // If files are treated as immutable, fail if destination exists and does not match if fs.Config.Immutable && pair.Dst != nil { fs.Errorf(pair.Dst, "Source and destination exist but do not match: immutable file modified") s.processError(fs.ErrorImmutableModified) } else { // If destination already exists, then we must move it into --backup-dir if required if pair.Dst != nil && s.backupDir != nil { err := operations.MoveBackupDir(s.ctx, s.backupDir, pair.Dst) if err != nil { s.processError(err) } else { // If successful zero out the dst as it is no longer there and copy the file pair.Dst = nil ok = out.Put(s.ctx, pair) if !ok { return } } } else { ok = out.Put(s.ctx, pair) if !ok { return } } } } else { // If moving, we need to delete the files we don't need to copy if s.DoMove { // Delete src if no error on copy s.processError(operations.DeleteFile(s.ctx, src)) } } } tr.Done(err) } } // pairRenamer reads Objects on in and attempts to rename them, // otherwise it sends them out if they need transferring. func (s *syncCopyMove) pairRenamer(in *pipe, out *pipe, fraction int, wg *sync.WaitGroup) { defer wg.Done() for { pair, ok := in.GetMax(s.ctx, fraction) if !ok { return } src := pair.Src if !s.tryRename(src) { // pass on if not renamed ok = out.Put(s.ctx, pair) if !ok { return } } } } // pairCopyOrMove reads Objects on in and moves or copies them. func (s *syncCopyMove) pairCopyOrMove(ctx context.Context, in *pipe, fdst fs.Fs, fraction int, wg *sync.WaitGroup) { defer wg.Done() var err error for { pair, ok := in.GetMax(s.ctx, fraction) if !ok { return } src := pair.Src if s.DoMove { _, err = operations.Move(ctx, fdst, pair.Dst, src.Remote(), src) } else { _, err = operations.Copy(ctx, fdst, pair.Dst, src.Remote(), src) } s.processError(err) } } // This starts the background checkers.
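// Each worker is handed a distinct fraction in [0,100): with N workers,
// worker i gets (100*i)/N. When --order-by ends in ",mixed,F" the pipe
// hands workers whose fraction is below F the next item in sorted order,
// while the remaining workers take the most recently queued item, so large
// and small transfers are interleaved. A rough sketch of the assignment
// (a hypothetical standalone snippet, not part of rclone):
//
//	fractions := make([]int, workers)
//	for i := range fractions {
//		fractions[i] = (100 * i) / workers // e.g. 4 workers -> 0, 25, 50, 75
//	}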
func (s *syncCopyMove) startCheckers() { s.checkerWg.Add(fs.Config.Checkers) for i := 0; i < fs.Config.Checkers; i++ { fraction := (100 * i) / fs.Config.Checkers go s.pairChecker(s.toBeChecked, s.toBeUploaded, fraction, &s.checkerWg) } } // This stops the background checkers func (s *syncCopyMove) stopCheckers() { s.toBeChecked.Close() fs.Debugf(s.fdst, "Waiting for checks to finish") s.checkerWg.Wait() } // This starts the background transfers func (s *syncCopyMove) startTransfers() { s.transfersWg.Add(fs.Config.Transfers) for i := 0; i < fs.Config.Transfers; i++ { fraction := (100 * i) / fs.Config.Transfers go s.pairCopyOrMove(s.ctx, s.toBeUploaded, s.fdst, fraction, &s.transfersWg) } } // This stops the background transfers func (s *syncCopyMove) stopTransfers() { s.toBeUploaded.Close() fs.Debugf(s.fdst, "Waiting for transfers to finish") s.transfersWg.Wait() } // This starts the background renamers. func (s *syncCopyMove) startRenamers() { if !s.trackRenames { return } s.renamerWg.Add(fs.Config.Checkers) for i := 0; i < fs.Config.Checkers; i++ { fraction := (100 * i) / fs.Config.Checkers go s.pairRenamer(s.toBeRenamed, s.toBeUploaded, fraction, &s.renamerWg) } } // This stops the background renamers func (s *syncCopyMove) stopRenamers() { if !s.trackRenames { return } s.toBeRenamed.Close() fs.Debugf(s.fdst, "Waiting for renames to finish") s.renamerWg.Wait() } // This starts the collection of possible renames func (s *syncCopyMove) startTrackRenames() { if !s.trackRenames { return } s.trackRenamesWg.Add(1) go func() { defer s.trackRenamesWg.Done() for o := range s.trackRenamesCh { s.renameCheck = append(s.renameCheck, o) } }() } // This stops the background rename collection func (s *syncCopyMove) stopTrackRenames() { if !s.trackRenames { return } close(s.trackRenamesCh) s.trackRenamesWg.Wait() } // This starts the background deletion of files for --delete-during func (s *syncCopyMove) startDeleters() { if s.deleteMode != fs.DeleteModeDuring && s.deleteMode != fs.DeleteModeOnly { return } s.deletersWg.Add(1) go func() { defer s.deletersWg.Done() err := operations.DeleteFilesWithBackupDir(s.ctx, s.deleteFilesCh, s.backupDir) s.processError(err) }() } // This stops the background deleters func (s *syncCopyMove) stopDeleters() { if s.deleteMode != fs.DeleteModeDuring && s.deleteMode != fs.DeleteModeOnly { return } close(s.deleteFilesCh) s.deletersWg.Wait() } // This deletes the files in the dstFiles map. If checkSrcMap is set // then it first checks to see if they exist in srcFiles, the source // file map, otherwise it unconditionally deletes them. If // checkSrcMap is clear then it assumes that any source files that // have been found have been removed from dstFiles already. func (s *syncCopyMove) deleteFiles(checkSrcMap bool) error { if accounting.Stats(s.ctx).Errored() && !fs.Config.IgnoreErrors { fs.Errorf(s.fdst, "%v", fs.ErrorNotDeleting) return fs.ErrorNotDeleting } // Delete the spare files toDelete := make(fs.ObjectsChan, fs.Config.Transfers) go func() { outer: for remote, o := range s.dstFiles { if checkSrcMap { _, exists := s.srcFiles[remote] if exists { continue } } if s.aborting() { break } select { case <-s.ctx.Done(): break outer case toDelete <- o: } } close(toDelete) }() return operations.DeleteFilesWithBackupDir(s.ctx, toDelete, s.backupDir) } // This deletes the empty directories in the map passed in. It // ignores any errors deleting directories func deleteEmptyDirectories(ctx context.Context, f fs.Fs, entriesMap map[string]fs.DirEntry) error { if len(entriesMap) == 0 { return nil } if accounting.Stats(ctx).Errored() && !fs.Config.IgnoreErrors { fs.Errorf(f, "%v", fs.ErrorNotDeletingDirs) return fs.ErrorNotDeletingDirs } var entries fs.DirEntries for _, entry := range entriesMap { entries = append(entries, entry) } // Now delete the empty directories starting from the longest path sort.Sort(entries) var errorCount int var okCount int for i := len(entries) - 1; i >= 0; i-- { entry := entries[i] dir, ok := entry.(fs.Directory) if ok { // TryRmdir only deletes empty directories err := operations.TryRmdir(ctx, f, dir.Remote()) if err != nil { fs.Debugf(fs.LogDirName(f, dir.Remote()), "Failed to Rmdir: %v", err) errorCount++ } else { okCount++ } } else { fs.Errorf(f, "Not a directory: %v", entry) } } if errorCount > 0 { fs.Debugf(f, "failed to delete %d directories", errorCount) } if okCount > 0 { fs.Debugf(f, "deleted %d directories", okCount) } return nil } // This copies the empty directories in the map passed in and logs // any errors copying the directories func copyEmptyDirectories(ctx context.Context, f fs.Fs, entries map[string]fs.DirEntry) error { if len(entries) == 0 { return nil } var okCount int for _, entry := range entries { dir, ok := entry.(fs.Directory) if ok { err := operations.Mkdir(ctx, f, dir.Remote()) if err != nil { fs.Errorf(fs.LogDirName(f, dir.Remote()), "Failed to Mkdir: %v", err) } else { okCount++ } } else { fs.Errorf(f, "Not a directory: %v", entry) } } if accounting.Stats(ctx).Errored() { fs.Debugf(f, "failed to copy %d directories", accounting.Stats(ctx).GetErrors()) } if okCount > 0 { fs.Debugf(f, "copied %d directories", okCount) } return nil } func (s *syncCopyMove) srcParentDirCheck(entry fs.DirEntry) { // If we are moving files then we don't want to remove directories with files in them // from the srcEmptyDirs as we are about to move them, making the directory empty. if s.DoMove { return } parentDir := path.Dir(entry.Remote()) if parentDir == "."
{ parentDir = "" } if _, ok := s.srcEmptyDirs[parentDir]; ok { delete(s.srcEmptyDirs, parentDir) } } // parseTrackRenamesStrategy turns a config string into a trackRenamesStrategy func parseTrackRenamesStrategy(strategies string) (strategy trackRenamesStrategy, err error) { if len(strategies) == 0 { return strategy, nil } for _, s := range strings.Split(strategies, ",") { switch s { case "hash": strategy |= trackRenamesStrategyHash case "modtime": strategy |= trackRenamesStrategyModtime case "leaf": strategy |= trackRenamesStrategyLeaf case "size": // ignore default: return strategy, errors.Errorf("unknown track renames strategy %q", s) } } return strategy, nil } // renameID makes a string with the size and the other identifiers of the requested rename strategies // // it may return an empty string in which case no hash could be made func (s *syncCopyMove) renameID(obj fs.Object, renamesStrategy trackRenamesStrategy, precision time.Duration) string { var builder strings.Builder fmt.Fprintf(&builder, "%d", obj.Size()) if renamesStrategy.hash() { var err error hash, err := obj.Hash(s.ctx, s.commonHash) if err != nil { fs.Debugf(obj, "Hash failed: %v", err) return "" } if hash == "" { return "" } builder.WriteRune(',') builder.WriteString(hash) } // for renamesStrategy.modTime() we don't add to the hash but we check the times in // popRenameMap if renamesStrategy.leaf() { builder.WriteRune(',') builder.WriteString(path.Base(obj.Remote())) } return builder.String() } // pushRenameMap adds the object with hash to the rename map func (s *syncCopyMove) pushRenameMap(hash string, obj fs.Object) { s.renameMapMu.Lock() s.renameMap[hash] = append(s.renameMap[hash], obj) s.renameMapMu.Unlock() } // popRenameMap finds the object with hash and pop the first match from // renameMap or returns nil if not found. func (s *syncCopyMove) popRenameMap(hash string, src fs.Object) (dst fs.Object) { s.renameMapMu.Lock() defer s.renameMapMu.Unlock() dsts, ok := s.renameMap[hash] if ok && len(dsts) > 0 { // Element to remove i := 0 // If using track renames strategy modtime then we need to check the modtimes here if s.trackRenamesStrategy.modTime() { i = -1 srcModTime := src.ModTime(s.ctx) for j, dst := range dsts { dstModTime := dst.ModTime(s.ctx) dt := dstModTime.Sub(srcModTime) if dt < s.modifyWindow && dt > -s.modifyWindow { i = j break } } // If nothing matched then return nil if i < 0 { return nil } } // Remove the entry and return it dst = dsts[i] dsts = append(dsts[:i], dsts[i+1:]...) 
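// Keep the shrunken bucket if it still has candidates, otherwise
// delete the key so the rename map doesn't accumulate empty slices.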
if len(dsts) > 0 { s.renameMap[hash] = dsts } else { delete(s.renameMap, hash) } } return dst } // makeRenameMap builds a map of the destination files by hash that // match sizes in the slice of objects in s.renameCheck func (s *syncCopyMove) makeRenameMap() { fs.Infof(s.fdst, "Making map for --track-renames") // first make a map of possible sizes we need to check possibleSizes := map[int64]struct{}{} for _, obj := range s.renameCheck { possibleSizes[obj.Size()] = struct{}{} } // pump all the dstFiles into in in := make(chan fs.Object, fs.Config.Checkers) go s.pumpMapToChan(s.dstFiles, in) // now make a map of size,hash for all dstFiles s.renameMap = make(map[string][]fs.Object) var wg sync.WaitGroup wg.Add(fs.Config.Transfers) for i := 0; i < fs.Config.Transfers; i++ { go func() { defer wg.Done() for obj := range in { // only create hash for dst fs.Object if its size could match if _, found := possibleSizes[obj.Size()]; found { tr := accounting.Stats(s.ctx).NewCheckingTransfer(obj) hash := s.renameID(obj, s.trackRenamesStrategy, s.modifyWindow) if hash != "" { s.pushRenameMap(hash, obj) } tr.Done(nil) } } }() } wg.Wait() fs.Infof(s.fdst, "Finished making map for --track-renames") } // tryRename renames a src object when doing track renames if // possible. It returns true if the object was renamed. func (s *syncCopyMove) tryRename(src fs.Object) bool { // Calculate the hash of the src object hash := s.renameID(src, s.trackRenamesStrategy, fs.GetModifyWindow(s.fsrc, s.fdst)) if hash == "" { return false } // Get a match on fdst dst := s.popRenameMap(hash, src) if dst == nil { return false } // Find dst object we are about to overwrite if it exists dstOverwritten, _ := s.fdst.NewObject(s.ctx, src.Remote()) // Rename dst to have name src.Remote() _, err := operations.Move(s.ctx, s.fdst, dstOverwritten, src.Remote(), dst) if err != nil { fs.Debugf(src, "Failed to rename to %q: %v", dst.Remote(), err) return false } // remove file from dstFiles if present s.dstFilesMu.Lock() delete(s.dstFiles, dst.Remote()) s.dstFilesMu.Unlock() fs.Infof(src, "Renamed from %q", dst.Remote()) return true } // Syncs fsrc into fdst // // If Delete is true then it deletes any files in fdst that aren't in fsrc // // If DoMove is true then files will be moved instead of copied // // dir is the start directory, "" for root func (s *syncCopyMove) run() error { if operations.Same(s.fdst, s.fsrc) { fs.Errorf(s.fdst, "Nothing to do as source and destination are the same") return nil } // Start background checking and transferring pipeline s.startCheckers() s.startRenamers() if !s.checkFirst { s.startTransfers() } s.startDeleters() s.dstFiles = make(map[string]fs.Object) s.startTrackRenames() // set up a march over fdst and fsrc m := &march.March{ Ctx: s.ctx, Fdst: s.fdst, Fsrc: s.fsrc, Dir: s.dir, NoTraverse: s.noTraverse, Callback: s, DstIncludeAll: filter.Active.Opt.DeleteExcluded, NoCheckDest: s.noCheckDest, NoUnicodeNormalization: s.noUnicodeNormalization, } s.processError(m.Run()) s.stopTrackRenames() if s.trackRenames { // Build the map of the remaining dstFiles by hash s.makeRenameMap() // Attempt renames for all the files which don't have a matching dst for _, src := range s.renameCheck { ok := s.toBeRenamed.Put(s.ctx, fs.ObjectPair{Src: src, Dst: nil}) if !ok { break } } } // Stop background checking and transferring pipeline s.stopCheckers() if s.checkFirst { fs.Infof(s.fdst, "Checks finished, now starting transfers") s.startTransfers() } s.stopRenamers() s.stopTransfers() s.stopDeleters() if s.copyEmptySrcDirs
{ s.processError(copyEmptyDirectories(s.ctx, s.fdst, s.srcEmptyDirs)) } // Delete files after if s.deleteMode == fs.DeleteModeAfter { if s.currentError() != nil && !fs.Config.IgnoreErrors { fs.Errorf(s.fdst, "%v", fs.ErrorNotDeleting) } else { s.processError(s.deleteFiles(false)) } } // Prune empty directories if s.deleteMode != fs.DeleteModeOff { if s.currentError() != nil && !fs.Config.IgnoreErrors { fs.Errorf(s.fdst, "%v", fs.ErrorNotDeletingDirs) } else { s.processError(deleteEmptyDirectories(s.ctx, s.fdst, s.dstEmptyDirs)) } } // Delete empty fsrc subdirectories // if DoMove and --delete-empty-src-dirs flag is set if s.DoMove && s.deleteEmptySrcDirs { //delete empty subdirectories that were part of the move s.processError(deleteEmptyDirectories(s.ctx, s.fsrc, s.srcEmptyDirs)) } // Read the error out of the context if there is one s.processError(s.ctx.Err()) if s.deleteMode != fs.DeleteModeOnly && accounting.Stats(s.ctx).GetTransfers() == 0 { fs.Infof(nil, "There was nothing to transfer") } // cancel the context to free resources s.cancel() return s.currentError() } // DstOnly have an object which is in the destination only func (s *syncCopyMove) DstOnly(dst fs.DirEntry) (recurse bool) { if s.deleteMode == fs.DeleteModeOff { return false } switch x := dst.(type) { case fs.Object: switch s.deleteMode { case fs.DeleteModeAfter: // record object as needs deleting s.dstFilesMu.Lock() s.dstFiles[x.Remote()] = x s.dstFilesMu.Unlock() case fs.DeleteModeDuring, fs.DeleteModeOnly: select { case <-s.ctx.Done(): return case s.deleteFilesCh <- x: } default: panic(fmt.Sprintf("unexpected delete mode %d", s.deleteMode)) } case fs.Directory: // Do the same thing to the entire contents of the directory // Record directory as it is potentially empty and needs deleting if s.fdst.Features().CanHaveEmptyDirectories { s.dstEmptyDirsMu.Lock() s.dstEmptyDirs[dst.Remote()] = dst s.dstEmptyDirsMu.Unlock() } return true default: panic("Bad object in DirEntries") } return false } // SrcOnly have an object which is in the source only func (s *syncCopyMove) SrcOnly(src fs.DirEntry) (recurse bool) { if s.deleteMode == fs.DeleteModeOnly { return false } switch x := src.(type) { case fs.Object: // If it's a copy operation, // remove parent directory from srcEmptyDirs // since it's not really empty s.srcEmptyDirsMu.Lock() s.srcParentDirCheck(src) s.srcEmptyDirsMu.Unlock() if s.trackRenames { // Save object to check for a rename later select { case <-s.ctx.Done(): return case s.trackRenamesCh <- x: } } else { // Check CompareDest && CopyDest NoNeedTransfer, err := operations.CompareOrCopyDest(s.ctx, s.fdst, nil, x, s.compareCopyDest, s.backupDir) if err != nil { s.processError(err) } if !NoNeedTransfer { // No need to check since doesn't exist ok := s.toBeUploaded.Put(s.ctx, fs.ObjectPair{Src: x, Dst: nil}) if !ok { return } } } case fs.Directory: // Do the same thing to the entire contents of the directory // Record the directory for deletion s.srcEmptyDirsMu.Lock() s.srcParentDirCheck(src) s.srcEmptyDirs[src.Remote()] = src s.srcEmptyDirsMu.Unlock() return true default: panic("Bad object in DirEntries") } return false } // Match is called when src and dst are present, so sync src to dst func (s *syncCopyMove) Match(ctx context.Context, dst, src fs.DirEntry) (recurse bool) { switch srcX := src.(type) { case fs.Object: s.srcEmptyDirsMu.Lock() s.srcParentDirCheck(src) s.srcEmptyDirsMu.Unlock() if s.deleteMode == fs.DeleteModeOnly { return false } dstX, ok := dst.(fs.Object) if ok { ok = s.toBeChecked.Put(s.ctx, 
fs.ObjectPair{Src: srcX, Dst: dstX}) if !ok { return false } } else { // FIXME src is file, dst is directory err := errors.New("can't overwrite directory with file") fs.Errorf(dst, "%v", err) s.processError(err) } case fs.Directory: // Do the same thing to the entire contents of the directory _, ok := dst.(fs.Directory) if ok { // Only record matched (src & dst) empty dirs when performing move if s.DoMove { // Record the src directory for deletion s.srcEmptyDirsMu.Lock() s.srcParentDirCheck(src) s.srcEmptyDirs[src.Remote()] = src s.srcEmptyDirsMu.Unlock() } return true } // FIXME src is dir, dst is file err := errors.New("can't overwrite file with directory") fs.Errorf(dst, "%v", err) s.processError(err) default: panic("Bad object in DirEntries") } return false } // Syncs fsrc into fdst // // If Delete is true then it deletes any files in fdst that aren't in fsrc // // If DoMove is true then files will be moved instead of copied // // dir is the start directory, "" for root func runSyncCopyMove(ctx context.Context, fdst, fsrc fs.Fs, deleteMode fs.DeleteMode, DoMove bool, deleteEmptySrcDirs bool, copyEmptySrcDirs bool) error { if deleteMode != fs.DeleteModeOff && DoMove { return fserrors.FatalError(errors.New("can't delete and move at the same time")) } // Run an extra pass to delete only if deleteMode == fs.DeleteModeBefore { if fs.Config.TrackRenames { return fserrors.FatalError(errors.New("can't use --delete-before with --track-renames")) } // only delete stuff during this pass do, err := newSyncCopyMove(ctx, fdst, fsrc, fs.DeleteModeOnly, false, deleteEmptySrcDirs, copyEmptySrcDirs) if err != nil { return err } err = do.run() if err != nil { return err } // Next pass does a copy only deleteMode = fs.DeleteModeOff } do, err := newSyncCopyMove(ctx, fdst, fsrc, deleteMode, DoMove, deleteEmptySrcDirs, copyEmptySrcDirs) if err != nil { return err } return do.run() } // Sync fsrc into fdst func Sync(ctx context.Context, fdst, fsrc fs.Fs, copyEmptySrcDirs bool) error { return runSyncCopyMove(ctx, fdst, fsrc, fs.Config.DeleteMode, false, false, copyEmptySrcDirs) } // CopyDir copies fsrc into fdst func CopyDir(ctx context.Context, fdst, fsrc fs.Fs, copyEmptySrcDirs bool) error { return runSyncCopyMove(ctx, fdst, fsrc, fs.DeleteModeOff, false, false, copyEmptySrcDirs) } // moveDir moves fsrc into fdst func moveDir(ctx context.Context, fdst, fsrc fs.Fs, deleteEmptySrcDirs bool, copyEmptySrcDirs bool) error { return runSyncCopyMove(ctx, fdst, fsrc, fs.DeleteModeOff, true, deleteEmptySrcDirs, copyEmptySrcDirs) } // MoveDir moves fsrc into fdst func MoveDir(ctx context.Context, fdst, fsrc fs.Fs, deleteEmptySrcDirs bool, copyEmptySrcDirs bool) error { if operations.Same(fdst, fsrc) { fs.Errorf(fdst, "Nothing to do as source and destination are the same") return nil } // First attempt to use DirMover if it exists, same Fs and no filters are active if fdstDirMove := fdst.Features().DirMove; fdstDirMove != nil && operations.SameConfig(fsrc, fdst) && filter.Active.InActive() { if operations.SkipDestructive(ctx, fdst, "server side directory move") { return nil } fs.Debugf(fdst, "Using server side directory move") err := fdstDirMove(ctx, fsrc, "", "") switch err { case fs.ErrorCantDirMove, fs.ErrorDirExists: fs.Infof(fdst, "Server side directory move failed - fallback to file moves: %v", err) case nil: fs.Infof(fdst, "Server side directory move succeeded") return nil default: err = fs.CountError(err) fs.Errorf(fdst, "Server side directory move failed: %v", err) return err } } // Otherwise move the files one
by one return moveDir(ctx, fdst, fsrc, deleteEmptySrcDirs, copyEmptySrcDirs) } rclone-1.53.3/fs/sync/sync_test.go000066400000000000000000001610361375552240400170310ustar00rootroot00000000000000// Test sync/copy/move package sync import ( "context" "fmt" "runtime" "strings" "testing" "time" "github.com/pkg/errors" _ "github.com/rclone/rclone/backend/all" // import all backends "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fstest" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "golang.org/x/text/unicode/norm" ) // Some times used in the tests var ( t1 = fstest.Time("2001-02-03T04:05:06.499999999Z") t2 = fstest.Time("2011-12-25T12:59:59.123456789Z") t3 = fstest.Time("2011-12-30T12:59:59.000000000Z") ) // TestMain drives the tests func TestMain(m *testing.M) { fstest.TestMain(m) } // Check dry run is working func TestCopyWithDryRun(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) r.Mkdir(context.Background(), r.Fremote) fs.Config.DryRun = true err := CopyDir(context.Background(), r.Fremote, r.Flocal, false) fs.Config.DryRun = false require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote) } // Now without dry run func TestCopy(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) r.Mkdir(context.Background(), r.Fremote) err := CopyDir(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file1) } func TestCopyMissingDirectory(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() r.Mkdir(context.Background(), r.Fremote) nonExistingFs, err := fs.NewFs("/non-existing") if err != nil { t.Fatal(err) } err = CopyDir(context.Background(), r.Fremote, nonExistingFs, false) require.Error(t, err) } // Now with --no-traverse func TestCopyNoTraverse(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.NoTraverse = true defer func() { fs.Config.NoTraverse = false }() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) err := CopyDir(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file1) } // Now with --check-first func TestCopyCheckFirst(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.CheckFirst = true defer func() { fs.Config.CheckFirst = false }() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) err := CopyDir(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file1) } // Now with --no-traverse func TestSyncNoTraverse(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.NoTraverse = true defer func() { fs.Config.NoTraverse = false }() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file1) } // Test copy with depth func TestCopyWithDepth(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) 
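// file1 is one level down, so once fs.Config.MaxDepth is set to 1 below,
// only the root-level file2 should arrive at the destination (see the
// CheckItems at the end of this test).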
file2 := r.WriteFile("hello world2", "hello world2", t2) // Check the MaxDepth too fs.Config.MaxDepth = 1 defer func() { fs.Config.MaxDepth = -1 }() err := CopyDir(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1, file2) fstest.CheckItems(t, r.Fremote, file2) } // Test copy with files from func testCopyWithFilesFrom(t *testing.T, noTraverse bool) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("potato2", "hello world", t1) file2 := r.WriteFile("hello world2", "hello world2", t2) // Set the --files-from equivalent f, err := filter.NewFilter(nil) require.NoError(t, err) require.NoError(t, f.AddFile("potato2")) require.NoError(t, f.AddFile("notfound")) // Monkey patch the active filter oldFilter := filter.Active oldNoTraverse := fs.Config.NoTraverse filter.Active = f fs.Config.NoTraverse = noTraverse unpatch := func() { filter.Active = oldFilter fs.Config.NoTraverse = oldNoTraverse } defer unpatch() err = CopyDir(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) unpatch() fstest.CheckItems(t, r.Flocal, file1, file2) fstest.CheckItems(t, r.Fremote, file1) } func TestCopyWithFilesFrom(t *testing.T) { testCopyWithFilesFrom(t, false) } func TestCopyWithFilesFromAndNoTraverse(t *testing.T) { testCopyWithFilesFrom(t, true) } // Test copy empty directories func TestCopyEmptyDirectories(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) err := operations.Mkdir(context.Background(), r.Flocal, "sub dir2") require.NoError(t, err) r.Mkdir(context.Background(), r.Fremote) err = CopyDir(context.Background(), r.Fremote, r.Flocal, true) require.NoError(t, err) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file1, }, []string{ "sub dir", "sub dir2", }, fs.GetModifyWindow(r.Fremote), ) } // Test move empty directories func TestMoveEmptyDirectories(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) err := operations.Mkdir(context.Background(), r.Flocal, "sub dir2") require.NoError(t, err) r.Mkdir(context.Background(), r.Fremote) err = MoveDir(context.Background(), r.Fremote, r.Flocal, false, true) require.NoError(t, err) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file1, }, []string{ "sub dir", "sub dir2", }, fs.GetModifyWindow(r.Fremote), ) } // Test sync empty directories func TestSyncEmptyDirectories(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) err := operations.Mkdir(context.Background(), r.Flocal, "sub dir2") require.NoError(t, err) r.Mkdir(context.Background(), r.Fremote) err = Sync(context.Background(), r.Fremote, r.Flocal, true) require.NoError(t, err) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file1, }, []string{ "sub dir", "sub dir2", }, fs.GetModifyWindow(r.Fremote), ) } // Test a server side copy if possible, or the backup path if not func TestServerSideCopy(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteObject(context.Background(), "sub dir/hello world", "hello world", t1) fstest.CheckItems(t, r.Fremote, file1) FremoteCopy, _, finaliseCopy, err := fstest.RandomRemote() require.NoError(t, err) defer finaliseCopy() t.Logf("Server side copy (if possible) %v -> %v", r.Fremote, FremoteCopy) err = CopyDir(context.Background(), FremoteCopy, r.Fremote, false) require.NoError(t, err) fstest.CheckItems(t, FremoteCopy, 
file1) } // Check that if the local file doesn't exist when we copy it up, // nothing happens to the remote file func TestCopyAfterDelete(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteObject(context.Background(), "sub dir/hello world", "hello world", t1) fstest.CheckItems(t, r.Flocal) fstest.CheckItems(t, r.Fremote, file1) err := operations.Mkdir(context.Background(), r.Flocal, "") require.NoError(t, err) err = CopyDir(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal) fstest.CheckItems(t, r.Fremote, file1) } // Check the copy downloading a file func TestCopyRedownload(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteObject(context.Background(), "sub dir/hello world", "hello world", t1) fstest.CheckItems(t, r.Fremote, file1) err := CopyDir(context.Background(), r.Flocal, r.Fremote, false) require.NoError(t, err) // Test with combined precision of local and remote as we copied it there and back fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1}, nil, fs.GetModifyWindow(r.Flocal, r.Fremote)) } // Create a file and sync it. Change the last modified date and resync. // If we're only doing sync by size and checksum, we expect nothing // to be transferred on the second sync. func TestSyncBasedOnCheckSum(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.CheckSum = true defer func() { fs.Config.CheckSum = false }() file1 := r.WriteFile("check sum", "-", t1) fstest.CheckItems(t, r.Flocal, file1) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) // We should have transferred exactly one file. assert.Equal(t, toyFileTransfers(r), accounting.GlobalStats().GetTransfers()) fstest.CheckItems(t, r.Fremote, file1) // Change last modified date only file2 := r.WriteFile("check sum", "-", t2) fstest.CheckItems(t, r.Flocal, file2) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) // We should have transferred no files assert.Equal(t, int64(0), accounting.GlobalStats().GetTransfers()) fstest.CheckItems(t, r.Flocal, file2) fstest.CheckItems(t, r.Fremote, file1) } // Create a file and sync it. Change the last modified date and the // file contents but not the size. If we're only doing sync by size // only, we expect nothing to be transferred on the second sync. func TestSyncSizeOnly(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.SizeOnly = true defer func() { fs.Config.SizeOnly = false }() file1 := r.WriteFile("sizeonly", "potato", t1) fstest.CheckItems(t, r.Flocal, file1) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) // We should have transferred exactly one file. assert.Equal(t, toyFileTransfers(r), accounting.GlobalStats().GetTransfers()) fstest.CheckItems(t, r.Fremote, file1) // Update mtime, md5sum but not length of file file2 := r.WriteFile("sizeonly", "POTATO", t2) fstest.CheckItems(t, r.Flocal, file2) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) // We should have transferred no files assert.Equal(t, int64(0), accounting.GlobalStats().GetTransfers()) fstest.CheckItems(t, r.Flocal, file2) fstest.CheckItems(t, r.Fremote, file1) } // Create a file and sync it. Keep the last modified date but change // the size.
With --ignore-size we expect nothing to be // transferred on the second sync. func TestSyncIgnoreSize(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.IgnoreSize = true defer func() { fs.Config.IgnoreSize = false }() file1 := r.WriteFile("ignore-size", "contents", t1) fstest.CheckItems(t, r.Flocal, file1) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) // We should have transferred exactly one file. assert.Equal(t, toyFileTransfers(r), accounting.GlobalStats().GetTransfers()) fstest.CheckItems(t, r.Fremote, file1) // Update size but not date of file file2 := r.WriteFile("ignore-size", "longer contents but same date", t1) fstest.CheckItems(t, r.Flocal, file2) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) // We should have transferred no files assert.Equal(t, int64(0), accounting.GlobalStats().GetTransfers()) fstest.CheckItems(t, r.Flocal, file2) fstest.CheckItems(t, r.Fremote, file1) } func TestSyncIgnoreTimes(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteBoth(context.Background(), "existing", "potato", t1) fstest.CheckItems(t, r.Fremote, file1) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) // We should have transferred exactly 0 files because the // files were identical. assert.Equal(t, int64(0), accounting.GlobalStats().GetTransfers()) fs.Config.IgnoreTimes = true defer func() { fs.Config.IgnoreTimes = false }() accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) // We should have transferred exactly one file even though the // files were identical.
assert.Equal(t, toyFileTransfers(r), accounting.GlobalStats().GetTransfers()) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file1) } func TestSyncIgnoreExisting(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("existing", "potato", t1) fs.Config.IgnoreExisting = true defer func() { fs.Config.IgnoreExisting = false }() accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file1) // Change everything r.WriteFile("existing", "newpotatoes", t2) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) // Items should not change fstest.CheckItems(t, r.Fremote, file1) } func TestSyncIgnoreErrors(t *testing.T) { r := fstest.NewRun(t) fs.Config.IgnoreErrors = true defer func() { fs.Config.IgnoreErrors = false r.Finalise() }() file1 := r.WriteFile("a/potato2", "------------------------------------------------------------", t1) file2 := r.WriteObject(context.Background(), "b/potato", "SMALLER BUT SAME DATE", t2) file3 := r.WriteBoth(context.Background(), "c/non empty space", "AhHa!", t2) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "d")) fstest.CheckListingWithPrecision( t, r.Flocal, []fstest.Item{ file1, file3, }, []string{ "a", "c", }, fs.GetModifyWindow(r.Fremote), ) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file2, file3, }, []string{ "b", "c", "d", }, fs.GetModifyWindow(r.Fremote), ) accounting.GlobalStats().ResetCounters() _ = fs.CountError(errors.New("boom")) assert.NoError(t, Sync(context.Background(), r.Fremote, r.Flocal, false)) fstest.CheckListingWithPrecision( t, r.Flocal, []fstest.Item{ file1, file3, }, []string{ "a", "c", }, fs.GetModifyWindow(r.Fremote), ) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file1, file3, }, []string{ "a", "c", }, fs.GetModifyWindow(r.Fremote), ) } func TestSyncAfterChangingModtimeOnly(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("empty space", "-", t2) file2 := r.WriteObject(context.Background(), "empty space", "-", t1) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file2) fs.Config.DryRun = true defer func() { fs.Config.DryRun = false }() accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file2) fs.Config.DryRun = false accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file1) } func TestSyncAfterChangingModtimeOnlyWithNoUpdateModTime(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() if r.Fremote.Hashes().Count() == 0 { t.Logf("Can't check this if no hashes supported") return } fs.Config.NoUpdateModTime = true defer func() { fs.Config.NoUpdateModTime = false }() file1 := r.WriteFile("empty space", "-", t2) file2 := r.WriteObject(context.Background(), "empty space", "-", t1) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file2) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file2) } func TestSyncDoesntUpdateModtime(t 
*testing.T) { r := fstest.NewRun(t) defer r.Finalise() if fs.GetModifyWindow(r.Fremote) == fs.ModTimeNotSupported { t.Skip("Can't run this test on fs which doesn't support mod time") } file1 := r.WriteFile("foo", "foo", t2) file2 := r.WriteObject(context.Background(), "foo", "bar", t1) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file2) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file1) // We should have transferred exactly one file, not set the mod time assert.Equal(t, toyFileTransfers(r), accounting.GlobalStats().GetTransfers()) } func TestSyncAfterAddingAFile(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteBoth(context.Background(), "empty space", "-", t2) file2 := r.WriteFile("potato", "------------------------------------------------------------", t3) fstest.CheckItems(t, r.Flocal, file1, file2) fstest.CheckItems(t, r.Fremote, file1) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1, file2) fstest.CheckItems(t, r.Fremote, file1, file2) } func TestSyncAfterChangingFilesSizeOnly(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteObject(context.Background(), "potato", "------------------------------------------------------------", t3) file2 := r.WriteFile("potato", "smaller but same date", t3) fstest.CheckItems(t, r.Fremote, file1) fstest.CheckItems(t, r.Flocal, file2) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file2) fstest.CheckItems(t, r.Fremote, file2) } // Sync after changing a file's contents, changing modtime but length // remaining the same func TestSyncAfterChangingContentsOnly(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() var file1 fstest.Item if r.Fremote.Precision() == fs.ModTimeNotSupported { t.Logf("ModTimeNotSupported so forcing file to be a different size") file1 = r.WriteObject(context.Background(), "potato", "different size to make sure it syncs", t3) } else { file1 = r.WriteObject(context.Background(), "potato", "smaller but same date", t3) } file2 := r.WriteFile("potato", "SMALLER BUT SAME DATE", t2) fstest.CheckItems(t, r.Fremote, file1) fstest.CheckItems(t, r.Flocal, file2) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file2) fstest.CheckItems(t, r.Fremote, file2) } // Sync after removing a file and adding a file --dry-run func TestSyncAfterRemovingAFileAndAddingAFileDryRun(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("potato2", "------------------------------------------------------------", t1) file2 := r.WriteObject(context.Background(), "potato", "SMALLER BUT SAME DATE", t2) file3 := r.WriteBoth(context.Background(), "empty space", "-", t2) fs.Config.DryRun = true accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) fs.Config.DryRun = false require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file3, file1) fstest.CheckItems(t, r.Fremote, file3, file2) } // Sync after removing a file and adding a file func TestSyncAfterRemovingAFileAndAddingAFile(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := 
r.WriteFile("potato2", "------------------------------------------------------------", t1) file2 := r.WriteObject(context.Background(), "potato", "SMALLER BUT SAME DATE", t2) file3 := r.WriteBoth(context.Background(), "empty space", "-", t2) fstest.CheckItems(t, r.Fremote, file2, file3) fstest.CheckItems(t, r.Flocal, file1, file3) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1, file3) fstest.CheckItems(t, r.Fremote, file1, file3) } // Sync after removing a file and adding a file func TestSyncAfterRemovingAFileAndAddingAFileSubDir(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("a/potato2", "------------------------------------------------------------", t1) file2 := r.WriteObject(context.Background(), "b/potato", "SMALLER BUT SAME DATE", t2) file3 := r.WriteBoth(context.Background(), "c/non empty space", "AhHa!", t2) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "d")) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "d/e")) fstest.CheckListingWithPrecision( t, r.Flocal, []fstest.Item{ file1, file3, }, []string{ "a", "c", }, fs.GetModifyWindow(r.Fremote), ) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file2, file3, }, []string{ "b", "c", "d", "d/e", }, fs.GetModifyWindow(r.Fremote), ) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckListingWithPrecision( t, r.Flocal, []fstest.Item{ file1, file3, }, []string{ "a", "c", }, fs.GetModifyWindow(r.Fremote), ) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file1, file3, }, []string{ "a", "c", }, fs.GetModifyWindow(r.Fremote), ) } // Sync after removing a file and adding a file with IO Errors func TestSyncAfterRemovingAFileAndAddingAFileSubDirWithErrors(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("a/potato2", "------------------------------------------------------------", t1) file2 := r.WriteObject(context.Background(), "b/potato", "SMALLER BUT SAME DATE", t2) file3 := r.WriteBoth(context.Background(), "c/non empty space", "AhHa!", t2) require.NoError(t, operations.Mkdir(context.Background(), r.Fremote, "d")) fstest.CheckListingWithPrecision( t, r.Flocal, []fstest.Item{ file1, file3, }, []string{ "a", "c", }, fs.GetModifyWindow(r.Fremote), ) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file2, file3, }, []string{ "b", "c", "d", }, fs.GetModifyWindow(r.Fremote), ) accounting.GlobalStats().ResetCounters() _ = fs.CountError(errors.New("boom")) err := Sync(context.Background(), r.Fremote, r.Flocal, false) assert.Equal(t, fs.ErrorNotDeleting, err) fstest.CheckListingWithPrecision( t, r.Flocal, []fstest.Item{ file1, file3, }, []string{ "a", "c", }, fs.GetModifyWindow(r.Fremote), ) fstest.CheckListingWithPrecision( t, r.Fremote, []fstest.Item{ file1, file2, file3, }, []string{ "a", "b", "c", "d", }, fs.GetModifyWindow(r.Fremote), ) } // Sync test delete after func TestSyncDeleteAfter(t *testing.T) { // This is the default so we've checked this already // check it is the default require.Equal(t, fs.Config.DeleteMode, fs.DeleteModeAfter, "Didn't default to --delete-after") } // Sync test delete during func TestSyncDeleteDuring(t *testing.T) { fs.Config.DeleteMode = fs.DeleteModeDuring defer func() { fs.Config.DeleteMode = fs.DeleteModeDefault }() TestSyncAfterRemovingAFileAndAddingAFile(t) } // Sync 
test delete before func TestSyncDeleteBefore(t *testing.T) { fs.Config.DeleteMode = fs.DeleteModeBefore defer func() { fs.Config.DeleteMode = fs.DeleteModeDefault }() TestSyncAfterRemovingAFileAndAddingAFile(t) } // Copy test delete before - shouldn't delete anything func TestCopyDeleteBefore(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.DeleteMode = fs.DeleteModeBefore defer func() { fs.Config.DeleteMode = fs.DeleteModeDefault }() file1 := r.WriteObject(context.Background(), "potato", "hopefully not deleted", t1) file2 := r.WriteFile("potato2", "hopefully copied in", t1) fstest.CheckItems(t, r.Fremote, file1) fstest.CheckItems(t, r.Flocal, file2) accounting.GlobalStats().ResetCounters() err := CopyDir(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file1, file2) fstest.CheckItems(t, r.Flocal, file2) } // Test with exclude func TestSyncWithExclude(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteBoth(context.Background(), "potato2", "------------------------------------------------------------", t1) file2 := r.WriteBoth(context.Background(), "empty space", "-", t2) file3 := r.WriteFile("enormous", "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", t1) // 100 bytes fstest.CheckItems(t, r.Fremote, file1, file2) fstest.CheckItems(t, r.Flocal, file1, file2, file3) filter.Active.Opt.MaxSize = 40 defer func() { filter.Active.Opt.MaxSize = -1 }() accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file2, file1) // Now sync the other way round and check enormous doesn't get // deleted as it is excluded from the sync accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), r.Flocal, r.Fremote, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file2, file1, file3) } // Test with exclude and delete excluded func TestSyncWithExcludeAndDeleteExcluded(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteBoth(context.Background(), "potato2", "------------------------------------------------------------", t1) // 60 bytes file2 := r.WriteBoth(context.Background(), "empty space", "-", t2) file3 := r.WriteBoth(context.Background(), "enormous", "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", t1) // 100 bytes fstest.CheckItems(t, r.Fremote, file1, file2, file3) fstest.CheckItems(t, r.Flocal, file1, file2, file3) filter.Active.Opt.MaxSize = 40 filter.Active.Opt.DeleteExcluded = true defer func() { filter.Active.Opt.MaxSize = -1 filter.Active.Opt.DeleteExcluded = false }() accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file2) // Check sync the other way round to make sure enormous gets // deleted even though it is excluded accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), r.Flocal, r.Fremote, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file2) } // Test with UpdateOlder set func TestSyncWithUpdateOlder(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() if fs.GetModifyWindow(r.Fremote) == fs.ModTimeNotSupported { t.Skip("Can't run this test on fs which doesn't support mod time") } t2plus := t2.Add(time.Second / 2) t2minus := t2.Add(-time.Second / 2) oneF := r.WriteFile("one", "one", t1) twoF :=
r.WriteFile("two", "two", t3) threeF := r.WriteFile("three", "three", t2) fourF := r.WriteFile("four", "four", t2) fiveF := r.WriteFile("five", "five", t2) fstest.CheckItems(t, r.Flocal, oneF, twoF, threeF, fourF, fiveF) oneO := r.WriteObject(context.Background(), "one", "ONE", t2) twoO := r.WriteObject(context.Background(), "two", "TWO", t2) threeO := r.WriteObject(context.Background(), "three", "THREE", t2plus) fourO := r.WriteObject(context.Background(), "four", "FOURFOUR", t2minus) fstest.CheckItems(t, r.Fremote, oneO, twoO, threeO, fourO) fs.Config.UpdateOlder = true oldModifyWindow := fs.Config.ModifyWindow fs.Config.ModifyWindow = fs.ModTimeNotSupported defer func() { fs.Config.UpdateOlder = false fs.Config.ModifyWindow = oldModifyWindow }() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, oneO, twoF, threeO, fourF, fiveF) if r.Fremote.Hashes().Count() == 0 { t.Logf("Skip test with --checksum as no hashes supported") return } // now enable checksum fs.Config.CheckSum = true defer func() { fs.Config.CheckSum = false }() err = Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, oneO, twoF, threeF, fourF, fiveF) } // Test with a max transfer duration func TestSyncWithMaxDuration(t *testing.T) { if *fstest.RemoteName != "" { t.Skip("Skipping test on non local remote") } r := fstest.NewRun(t) defer r.Finalise() maxDuration := 250 * time.Millisecond fs.Config.MaxDuration = maxDuration bytesPerSecond := 300 accounting.SetBwLimit(fs.SizeSuffix(bytesPerSecond)) oldTransfers := fs.Config.Transfers fs.Config.Transfers = 1 defer func() { fs.Config.MaxDuration = 0 // reset back to default fs.Config.Transfers = oldTransfers accounting.SetBwLimit(fs.SizeSuffix(0)) }() // 5 files of 60 bytes at 60 bytes/s 5 seconds testFiles := make([]fstest.Item, 5) for i := 0; i < len(testFiles); i++ { testFiles[i] = r.WriteFile(fmt.Sprintf("file%d", i), "------------------------------------------------------------", t1) } fstest.CheckListing(t, r.Flocal, testFiles) accounting.GlobalStats().ResetCounters() startTime := time.Now() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.Equal(t, context.DeadlineExceeded, errors.Cause(err)) elapsed := time.Since(startTime) maxTransferTime := (time.Duration(len(testFiles)) * 60 * time.Second) / time.Duration(bytesPerSecond) what := fmt.Sprintf("expecting elapsed time %v between %v and %v", elapsed, maxDuration, maxTransferTime) require.True(t, elapsed >= maxDuration, what) require.True(t, elapsed < 5*time.Second, what) // we must not have transferred all files during the session require.True(t, accounting.GlobalStats().GetTransfers() < int64(len(testFiles))) } // Test with TrackRenames set func TestSyncWithTrackRenames(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.TrackRenames = true defer func() { fs.Config.TrackRenames = false }() haveHash := r.Fremote.Hashes().Overlap(r.Flocal.Hashes()).GetOne() != hash.None canTrackRenames := haveHash && operations.CanServerSideMove(r.Fremote) t.Logf("Can track renames: %v", canTrackRenames) f1 := r.WriteFile("potato", "Potato Content", t1) f2 := r.WriteFile("yam", "Yam Content", t2) accounting.GlobalStats().ResetCounters() require.NoError(t, Sync(context.Background(), r.Fremote, r.Flocal, false)) fstest.CheckItems(t, r.Fremote, f1, f2) fstest.CheckItems(t, r.Flocal, f1, f2) // Now rename locally. 
f2 = r.RenameFile(f2, "yaml") accounting.GlobalStats().ResetCounters() require.NoError(t, Sync(context.Background(), r.Fremote, r.Flocal, false)) fstest.CheckItems(t, r.Fremote, f1, f2) // Check we renamed something if we should have if canTrackRenames { renames := accounting.GlobalStats().Renames(0) assert.Equal(t, canTrackRenames, renames != 0, fmt.Sprintf("canTrackRenames=%v, renames=%d", canTrackRenames, renames)) } } func TestParseRenamesStrategyModtime(t *testing.T) { for _, test := range []struct { in string want trackRenamesStrategy wantErr bool }{ {"", 0, false}, {"modtime", trackRenamesStrategyModtime, false}, {"hash", trackRenamesStrategyHash, false}, {"size", 0, false}, {"modtime,hash", trackRenamesStrategyModtime | trackRenamesStrategyHash, false}, {"hash,modtime,size", trackRenamesStrategyModtime | trackRenamesStrategyHash, false}, {"size,boom", 0, true}, } { got, err := parseTrackRenamesStrategy(test.in) assert.Equal(t, test.want, got, test.in) assert.Equal(t, test.wantErr, err != nil, test.in) } } func TestRenamesStrategyModtime(t *testing.T) { both := trackRenamesStrategyHash | trackRenamesStrategyModtime hash := trackRenamesStrategyHash modTime := trackRenamesStrategyModtime assert.True(t, both.hash()) assert.True(t, both.modTime()) assert.True(t, hash.hash()) assert.False(t, hash.modTime()) assert.False(t, modTime.hash()) assert.True(t, modTime.modTime()) } func TestSyncWithTrackRenamesStrategyModtime(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.TrackRenames = true fs.Config.TrackRenamesStrategy = "modtime" defer func() { fs.Config.TrackRenames = false fs.Config.TrackRenamesStrategy = "hash" }() canTrackRenames := operations.CanServerSideMove(r.Fremote) && r.Fremote.Precision() != fs.ModTimeNotSupported t.Logf("Can track renames: %v", canTrackRenames) f1 := r.WriteFile("potato", "Potato Content", t1) f2 := r.WriteFile("yam", "Yam Content", t2) accounting.GlobalStats().ResetCounters() require.NoError(t, Sync(context.Background(), r.Fremote, r.Flocal, false)) fstest.CheckItems(t, r.Fremote, f1, f2) fstest.CheckItems(t, r.Flocal, f1, f2) // Now rename locally. f2 = r.RenameFile(f2, "yaml") accounting.GlobalStats().ResetCounters() require.NoError(t, Sync(context.Background(), r.Fremote, r.Flocal, false)) fstest.CheckItems(t, r.Fremote, f1, f2) // Check we renamed something if we should have if canTrackRenames { renames := accounting.GlobalStats().Renames(0) assert.Equal(t, canTrackRenames, renames != 0, fmt.Sprintf("canTrackRenames=%v, renames=%d", canTrackRenames, renames)) } } func TestSyncWithTrackRenamesStrategyLeaf(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.TrackRenames = true fs.Config.TrackRenamesStrategy = "leaf" defer func() { fs.Config.TrackRenames = false fs.Config.TrackRenamesStrategy = "hash" }() canTrackRenames := operations.CanServerSideMove(r.Fremote) && r.Fremote.Precision() != fs.ModTimeNotSupported t.Logf("Can track renames: %v", canTrackRenames) f1 := r.WriteFile("potato", "Potato Content", t1) f2 := r.WriteFile("sub/yam", "Yam Content", t2) accounting.GlobalStats().ResetCounters() require.NoError(t, Sync(context.Background(), r.Fremote, r.Flocal, false)) fstest.CheckItems(t, r.Fremote, f1, f2) fstest.CheckItems(t, r.Flocal, f1, f2) // Now rename locally. 
f2 = r.RenameFile(f2, "yam") accounting.GlobalStats().ResetCounters() require.NoError(t, Sync(context.Background(), r.Fremote, r.Flocal, false)) fstest.CheckItems(t, r.Fremote, f1, f2) // Check we renamed something if we should have if canTrackRenames { renames := accounting.GlobalStats().Renames(0) assert.Equal(t, canTrackRenames, renames != 0, fmt.Sprintf("canTrackRenames=%v, renames=%d", canTrackRenames, renames)) } } func toyFileTransfers(r *fstest.Run) int64 { remote := r.Fremote.Name() transfers := 1 if strings.HasPrefix(remote, "TestChunker") && strings.HasSuffix(remote, "S3") { transfers++ // Extra Copy because S3 emulates Move as Copy+Delete. } return int64(transfers) } // Test a server side move if possible, or the backup path if not func testServerSideMove(t *testing.T, r *fstest.Run, withFilter, testDeleteEmptyDirs bool) { FremoteMove, _, finaliseMove, err := fstest.RandomRemote() require.NoError(t, err) defer finaliseMove() file1 := r.WriteBoth(context.Background(), "potato2", "------------------------------------------------------------", t1) file2 := r.WriteBoth(context.Background(), "empty space", "-", t2) file3u := r.WriteBoth(context.Background(), "potato3", "------------------------------------------------------------ UPDATED", t2) if testDeleteEmptyDirs { err := operations.Mkdir(context.Background(), r.Fremote, "tomatoDir") require.NoError(t, err) } fstest.CheckItems(t, r.Fremote, file2, file1, file3u) t.Logf("Server side move (if possible) %v -> %v", r.Fremote, FremoteMove) // Write just one file in the new remote r.WriteObjectTo(context.Background(), FremoteMove, "empty space", "-", t2, false) file3 := r.WriteObjectTo(context.Background(), FremoteMove, "potato3", "------------------------------------------------------------", t1, false) fstest.CheckItems(t, FremoteMove, file2, file3) // Do server side move accounting.GlobalStats().ResetCounters() err = MoveDir(context.Background(), FremoteMove, r.Fremote, testDeleteEmptyDirs, false) require.NoError(t, err) if withFilter { fstest.CheckItems(t, r.Fremote, file2) } else { fstest.CheckItems(t, r.Fremote) } if testDeleteEmptyDirs { fstest.CheckListingWithPrecision(t, r.Fremote, nil, []string{}, fs.GetModifyWindow(r.Fremote)) } fstest.CheckItems(t, FremoteMove, file2, file1, file3u) // Create a new empty remote for stuff to be moved into FremoteMove2, _, finaliseMove2, err := fstest.RandomRemote() require.NoError(t, err) defer finaliseMove2() if testDeleteEmptyDirs { err := operations.Mkdir(context.Background(), FremoteMove, "tomatoDir") require.NoError(t, err) } // Move it back to a new empty remote, dst does not exist this time accounting.GlobalStats().ResetCounters() err = MoveDir(context.Background(), FremoteMove2, FremoteMove, testDeleteEmptyDirs, false) require.NoError(t, err) if withFilter { fstest.CheckItems(t, FremoteMove2, file1, file3u) fstest.CheckItems(t, FremoteMove, file2) } else { fstest.CheckItems(t, FremoteMove2, file2, file1, file3u) fstest.CheckItems(t, FremoteMove) } if testDeleteEmptyDirs { fstest.CheckListingWithPrecision(t, FremoteMove, nil, []string{}, fs.GetModifyWindow(r.Fremote)) } } // Test move func TestMoveWithDeleteEmptySrcDirs(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) file2 := r.WriteFile("nested/sub dir/file", "nested", t1) r.Mkdir(context.Background(), r.Fremote) // run move with --delete-empty-src-dirs err := MoveDir(context.Background(), r.Fremote, r.Flocal, true, false) require.NoError(t, err) 
fstest.CheckListingWithPrecision( t, r.Flocal, nil, []string{}, fs.GetModifyWindow(r.Flocal), ) fstest.CheckItems(t, r.Fremote, file1, file2) } func TestMoveWithoutDeleteEmptySrcDirs(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() file1 := r.WriteFile("sub dir/hello world", "hello world", t1) file2 := r.WriteFile("nested/sub dir/file", "nested", t1) r.Mkdir(context.Background(), r.Fremote) err := MoveDir(context.Background(), r.Fremote, r.Flocal, false, false) require.NoError(t, err) fstest.CheckListingWithPrecision( t, r.Flocal, nil, []string{ "sub dir", "nested", "nested/sub dir", }, fs.GetModifyWindow(r.Flocal), ) fstest.CheckItems(t, r.Fremote, file1, file2) } // Test a server side move if possible, or the backup path if not func TestServerSideMove(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() testServerSideMove(t, r, false, false) } // Test a server side move if possible, or the backup path if not func TestServerSideMoveWithFilter(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() filter.Active.Opt.MinSize = 40 defer func() { filter.Active.Opt.MinSize = -1 }() testServerSideMove(t, r, true, false) } // Test a server side move if possible func TestServerSideMoveDeleteEmptySourceDirs(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() testServerSideMove(t, r, false, true) } // Test a server side move with overlap func TestServerSideMoveOverlap(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() if r.Fremote.Features().DirMove != nil { t.Skip("Skipping test as remote supports DirMove") } subRemoteName := r.FremoteName + "/rclone-move-test" FremoteMove, err := fs.NewFs(subRemoteName) require.NoError(t, err) file1 := r.WriteObject(context.Background(), "potato2", "------------------------------------------------------------", t1) fstest.CheckItems(t, r.Fremote, file1) // Subdir move with no filters should return ErrorCantMoveOverlapping err = MoveDir(context.Background(), FremoteMove, r.Fremote, false, false) assert.EqualError(t, err, fs.ErrorOverlapping.Error()) // Now try with a filter which should also fail with ErrorCantMoveOverlapping filter.Active.Opt.MinSize = 40 defer func() { filter.Active.Opt.MinSize = -1 }() err = MoveDir(context.Background(), FremoteMove, r.Fremote, false, false) assert.EqualError(t, err, fs.ErrorOverlapping.Error()) } // Test a sync with overlap func TestSyncOverlap(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() subRemoteName := r.FremoteName + "/rclone-sync-test" FremoteSync, err := fs.NewFs(subRemoteName) require.NoError(t, err) checkErr := func(err error) { require.Error(t, err) assert.True(t, fserrors.IsFatalError(err)) assert.Equal(t, fs.ErrorOverlapping.Error(), err.Error()) } checkErr(Sync(context.Background(), FremoteSync, r.Fremote, false)) checkErr(Sync(context.Background(), r.Fremote, FremoteSync, false)) checkErr(Sync(context.Background(), r.Fremote, r.Fremote, false)) checkErr(Sync(context.Background(), FremoteSync, FremoteSync, false)) } // Test with CompareDest set func TestSyncCompareDest(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.CompareDest = r.FremoteName + "/CompareDest" defer func() { fs.Config.CompareDest = "" }() fdst, err := fs.NewFs(r.FremoteName + "/dst") require.NoError(t, err) // check empty dest, empty compare file1 := r.WriteFile("one", "one", t1) fstest.CheckItems(t, r.Flocal, file1) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) file1dst := file1 file1dst.Path = "dst/one" 
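// Neither dst nor the (empty) CompareDest directory had a match, so
// the file was copied to dst as usual. The rough CLI equivalent of
// this setup (a sketch only, with illustrative paths) would be:
//
//	rclone sync --compare-dest remote:CompareDest /local remote:dst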
fstest.CheckItems(t, r.Fremote, file1dst) // check old dest, empty compare file1b := r.WriteFile("one", "onet2", t2) fstest.CheckItems(t, r.Fremote, file1dst) fstest.CheckItems(t, r.Flocal, file1b) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) file1bdst := file1b file1bdst.Path = "dst/one" fstest.CheckItems(t, r.Fremote, file1bdst) // check old dest, new compare file3 := r.WriteObject(context.Background(), "dst/one", "one", t1) file2 := r.WriteObject(context.Background(), "CompareDest/one", "onet2", t2) file1c := r.WriteFile("one", "onet2", t2) fstest.CheckItems(t, r.Fremote, file2, file3) fstest.CheckItems(t, r.Flocal, file1c) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file2, file3) // check empty dest, new compare file4 := r.WriteObject(context.Background(), "CompareDest/two", "two", t2) file5 := r.WriteFile("two", "two", t2) fstest.CheckItems(t, r.Fremote, file2, file3, file4) fstest.CheckItems(t, r.Flocal, file1c, file5) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file2, file3, file4) // check new dest, new compare accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file2, file3, file4) // check empty dest, old compare file5b := r.WriteFile("two", "twot3", t3) fstest.CheckItems(t, r.Fremote, file2, file3, file4) fstest.CheckItems(t, r.Flocal, file1c, file5b) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) file5bdst := file5b file5bdst.Path = "dst/two" fstest.CheckItems(t, r.Fremote, file2, file3, file4, file5bdst) } // Test with CopyDest set func TestSyncCopyDest(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() if r.Fremote.Features().Copy == nil { t.Skip("Skipping test as remote does not support server side copy") } fs.Config.CopyDest = r.FremoteName + "/CopyDest" defer func() { fs.Config.CopyDest = "" }() fdst, err := fs.NewFs(r.FremoteName + "/dst") require.NoError(t, err) // check empty dest, empty copy file1 := r.WriteFile("one", "one", t1) fstest.CheckItems(t, r.Flocal, file1) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) file1dst := file1 file1dst.Path = "dst/one" fstest.CheckItems(t, r.Fremote, file1dst) // check old dest, empty copy file1b := r.WriteFile("one", "onet2", t2) fstest.CheckItems(t, r.Fremote, file1dst) fstest.CheckItems(t, r.Flocal, file1b) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) file1bdst := file1b file1bdst.Path = "dst/one" fstest.CheckItems(t, r.Fremote, file1bdst) // check old dest, new copy, backup-dir fs.Config.BackupDir = r.FremoteName + "/BackupDir" file3 := r.WriteObject(context.Background(), "dst/one", "one", t1) file2 := r.WriteObject(context.Background(), "CopyDest/one", "onet2", t2) file1c := r.WriteFile("one", "onet2", t2) fstest.CheckItems(t, r.Fremote, file2, file3) fstest.CheckItems(t, r.Flocal, file1c) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) file2dst := file2 file2dst.Path = "dst/one" file3.Path = "BackupDir/one" fstest.CheckItems(t, r.Fremote, 
file2, file2dst, file3) fs.Config.BackupDir = "" // check empty dest, new copy file4 := r.WriteObject(context.Background(), "CopyDest/two", "two", t2) file5 := r.WriteFile("two", "two", t2) fstest.CheckItems(t, r.Fremote, file2, file2dst, file3, file4) fstest.CheckItems(t, r.Flocal, file1c, file5) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) file4dst := file4 file4dst.Path = "dst/two" fstest.CheckItems(t, r.Fremote, file2, file2dst, file3, file4, file4dst) // check new dest, new copy accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Fremote, file2, file2dst, file3, file4, file4dst) // check empty dest, old copy file6 := r.WriteObject(context.Background(), "CopyDest/three", "three", t2) file7 := r.WriteFile("three", "threet3", t3) fstest.CheckItems(t, r.Fremote, file2, file2dst, file3, file4, file4dst, file6) fstest.CheckItems(t, r.Flocal, file1c, file5, file7) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) file7dst := file7 file7dst.Path = "dst/three" fstest.CheckItems(t, r.Fremote, file2, file2dst, file3, file4, file4dst, file6, file7dst) } // Test with BackupDir set func testSyncBackupDir(t *testing.T, backupDir string, suffix string, suffixKeepExtension bool) { r := fstest.NewRun(t) defer r.Finalise() if !operations.CanServerSideMove(r.Fremote) { t.Skip("Skipping test as remote does not support server side move") } r.Mkdir(context.Background(), r.Fremote) if backupDir != "" { fs.Config.BackupDir = r.FremoteName + "/" + backupDir backupDir += "/" } else { fs.Config.BackupDir = "" backupDir = "dst/" // Exclude the suffix from the sync otherwise the sync // deletes the old backup files flt, err := filter.NewFilter(nil) require.NoError(t, err) require.NoError(t, flt.AddRule("- *"+suffix)) oldFlt := filter.Active filter.Active = flt defer func() { filter.Active = oldFlt }() } fs.Config.Suffix = suffix fs.Config.SuffixKeepExtension = suffixKeepExtension defer func() { fs.Config.BackupDir = "" fs.Config.Suffix = "" fs.Config.SuffixKeepExtension = false }() // Make the setup so we have one, two, three in the dest // and one (different), two (same) in the source file1 := r.WriteObject(context.Background(), "dst/one", "one", t1) file2 := r.WriteObject(context.Background(), "dst/two", "two", t1) file3 := r.WriteObject(context.Background(), "dst/three.txt", "three", t1) file2a := r.WriteFile("two", "two", t1) file1a := r.WriteFile("one", "oneA", t2) fstest.CheckItems(t, r.Fremote, file1, file2, file3) fstest.CheckItems(t, r.Flocal, file1a, file2a) fdst, err := fs.NewFs(r.FremoteName + "/dst") require.NoError(t, err) accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) // one should be moved to the backup dir and the new one installed file1.Path = backupDir + "one" + suffix file1a.Path = "dst/one" // two should be unchanged // three should be moved to the backup dir if suffixKeepExtension { file3.Path = backupDir + "three" + suffix + ".txt" } else { file3.Path = backupDir + "three.txt" + suffix } fstest.CheckItems(t, r.Fremote, file1, file2, file3, file1a) // Now check what happens if we do it again // Restore a different three and update one in the source file3a := r.WriteObject(context.Background(), "dst/three.txt", "threeA", t2) file1b := r.WriteFile("one", "oneBB", t3) fstest.CheckItems(t, 
r.Fremote, file1, file2, file3, file1a, file3a) // This should delete three and overwrite one again, checking // the files got overwritten correctly in backup-dir accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), fdst, r.Flocal, false) require.NoError(t, err) // one should be moved to the backup dir and the new one installed file1a.Path = backupDir + "one" + suffix file1b.Path = "dst/one" // two should be unchanged // three should be moved to the backup dir if suffixKeepExtension { file3a.Path = backupDir + "three" + suffix + ".txt" } else { file3a.Path = backupDir + "three.txt" + suffix } fstest.CheckItems(t, r.Fremote, file1b, file2, file3a, file1a) } func TestSyncBackupDir(t *testing.T) { testSyncBackupDir(t, "backup", "", false) } func TestSyncBackupDirWithSuffix(t *testing.T) { testSyncBackupDir(t, "backup", ".bak", false) } func TestSyncBackupDirWithSuffixKeepExtension(t *testing.T) { testSyncBackupDir(t, "backup", "-2019-01-01", true) } func TestSyncBackupDirSuffixOnly(t *testing.T) { testSyncBackupDir(t, "", ".bak", false) } // Test with Suffix set func testSyncSuffix(t *testing.T, suffix string, suffixKeepExtension bool) { r := fstest.NewRun(t) defer r.Finalise() if !operations.CanServerSideMove(r.Fremote) { t.Skip("Skipping test as remote does not support server side move") } r.Mkdir(context.Background(), r.Fremote) fs.Config.Suffix = suffix fs.Config.SuffixKeepExtension = suffixKeepExtension defer func() { fs.Config.BackupDir = "" fs.Config.Suffix = "" fs.Config.SuffixKeepExtension = false }() // Make the setup so we have one, two, three in the dest // and one (different), two (same) in the source file1 := r.WriteObject(context.Background(), "dst/one", "one", t1) file2 := r.WriteObject(context.Background(), "dst/two", "two", t1) file3 := r.WriteObject(context.Background(), "dst/three.txt", "three", t1) file2a := r.WriteFile("two", "two", t1) file1a := r.WriteFile("one", "oneA", t2) file3a := r.WriteFile("three.txt", "threeA", t1) fstest.CheckItems(t, r.Fremote, file1, file2, file3) fstest.CheckItems(t, r.Flocal, file1a, file2a, file3a) fdst, err := fs.NewFs(r.FremoteName + "/dst") require.NoError(t, err) accounting.GlobalStats().ResetCounters() err = operations.CopyFile(context.Background(), fdst, r.Flocal, "one", "one") require.NoError(t, err) err = operations.CopyFile(context.Background(), fdst, r.Flocal, "two", "two") require.NoError(t, err) err = operations.CopyFile(context.Background(), fdst, r.Flocal, "three.txt", "three.txt") require.NoError(t, err) // one should be moved to the backup dir and the new one installed file1.Path = "dst/one" + suffix file1a.Path = "dst/one" // two should be unchanged // three should be moved to the backup dir if suffixKeepExtension { file3.Path = "dst/three" + suffix + ".txt" } else { file3.Path = "dst/three.txt" + suffix } file3a.Path = "dst/three.txt" fstest.CheckItems(t, r.Fremote, file1, file2, file3, file1a, file3a) // Now check what happens if we do it again // Restore a different three and update one in the source file3b := r.WriteFile("three.txt", "threeBDifferentSize", t3) file1b := r.WriteFile("one", "oneBB", t3) fstest.CheckItems(t, r.Fremote, file1, file2, file3, file1a, file3a) // This should delete three and overwrite one again, checking // the files got overwritten correctly in backup-dir accounting.GlobalStats().ResetCounters() err = operations.CopyFile(context.Background(), fdst, r.Flocal, "one", "one") require.NoError(t, err) err = operations.CopyFile(context.Background(), fdst, r.Flocal, "two", 
"two") require.NoError(t, err) err = operations.CopyFile(context.Background(), fdst, r.Flocal, "three.txt", "three.txt") require.NoError(t, err) // one should be moved to the backup dir and the new one installed file1a.Path = "dst/one" + suffix file1b.Path = "dst/one" // two should be unchanged // three should be moved to the backup dir if suffixKeepExtension { file3a.Path = "dst/three" + suffix + ".txt" } else { file3a.Path = "dst/three.txt" + suffix } file3b.Path = "dst/three.txt" fstest.CheckItems(t, r.Fremote, file1b, file3b, file2, file3a, file1a) } func TestSyncSuffix(t *testing.T) { testSyncSuffix(t, ".bak", false) } func TestSyncSuffixKeepExtension(t *testing.T) { testSyncSuffix(t, "-2019-01-01", true) } // Check we can sync two files with differing UTF-8 representations func TestSyncUTFNorm(t *testing.T) { if runtime.GOOS == "darwin" { t.Skip("Can't test UTF normalization on OS X") } r := fstest.NewRun(t) defer r.Finalise() // Two strings with different unicode normalization (from OS X) Encoding1 := "Testêé" Encoding2 := "Testêé" assert.NotEqual(t, Encoding1, Encoding2) assert.Equal(t, norm.NFC.String(Encoding1), norm.NFC.String(Encoding2)) file1 := r.WriteFile(Encoding1, "This is a test", t1) fstest.CheckItems(t, r.Flocal, file1) file2 := r.WriteObject(context.Background(), Encoding2, "This is a old test", t2) fstest.CheckItems(t, r.Fremote, file2) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) // We should have transferred exactly one file, but kept the // normalized state of the file. assert.Equal(t, toyFileTransfers(r), accounting.GlobalStats().GetTransfers()) fstest.CheckItems(t, r.Flocal, file1) file1.Path = file2.Path fstest.CheckItems(t, r.Fremote, file1) } // Test --immutable func TestSyncImmutable(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() fs.Config.Immutable = true defer func() { fs.Config.Immutable = false }() // Create file on source file1 := r.WriteFile("existing", "potato", t1) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote) // Should succeed accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file1) // Modify file data and timestamp on source file2 := r.WriteFile("existing", "tomatoes", t2) fstest.CheckItems(t, r.Flocal, file2) fstest.CheckItems(t, r.Fremote, file1) // Should fail with ErrorImmutableModified and not modify local or remote files accounting.GlobalStats().ResetCounters() err = Sync(context.Background(), r.Fremote, r.Flocal, false) assert.EqualError(t, err, fs.ErrorImmutableModified.Error()) fstest.CheckItems(t, r.Flocal, file2) fstest.CheckItems(t, r.Fremote, file1) } // Test --ignore-case-sync func TestSyncIgnoreCase(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() // Only test if filesystems are case sensitive if r.Fremote.Features().CaseInsensitive || r.Flocal.Features().CaseInsensitive { t.Skip("Skipping test as local or remote are case-insensitive") } fs.Config.IgnoreCaseSync = true defer func() { fs.Config.IgnoreCaseSync = false }() // Create files with different filename casing file1 := r.WriteFile("existing", "potato", t1) fstest.CheckItems(t, r.Flocal, file1) file2 := r.WriteObject(context.Background(), "EXISTING", "potato", t1) fstest.CheckItems(t, r.Fremote, file2) // Should not copy files that are differently-cased but otherwise identical 
accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) require.NoError(t, err) fstest.CheckItems(t, r.Flocal, file1) fstest.CheckItems(t, r.Fremote, file2) } // Test that aborting on max upload works func TestAbort(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() if r.Fremote.Name() != "local" { t.Skip("This test only runs on local") } oldMaxTransfer := fs.Config.MaxTransfer oldTransfers := fs.Config.Transfers oldCheckers := fs.Config.Checkers fs.Config.MaxTransfer = 3 * 1024 fs.Config.Transfers = 1 fs.Config.Checkers = 1 defer func() { fs.Config.MaxTransfer = oldMaxTransfer fs.Config.Transfers = oldTransfers fs.Config.Checkers = oldCheckers }() // Create file on source file1 := r.WriteFile("file1", string(make([]byte, 5*1024)), t1) file2 := r.WriteFile("file2", string(make([]byte, 2*1024)), t1) file3 := r.WriteFile("file3", string(make([]byte, 3*1024)), t1) fstest.CheckItems(t, r.Flocal, file1, file2, file3) fstest.CheckItems(t, r.Fremote) accounting.GlobalStats().ResetCounters() err := Sync(context.Background(), r.Fremote, r.Flocal, false) expectedErr := fserrors.FsError(accounting.ErrorMaxTransferLimitReachedFatal) fserrors.Count(expectedErr) assert.Equal(t, expectedErr, err) } rclone-1.53.3/fs/version.go000066400000000000000000000000751375552240400155220ustar00rootroot00000000000000package fs // Version of rclone var Version = "v1.53.3-DEV" rclone-1.53.3/fs/versioncheck.go000066400000000000000000000002651375552240400165210ustar00rootroot00000000000000//+build !go1.11 package fs // Upgrade to Go version 1.11 to compile rclone - latest stable go // compiler recommended. func init() { Go_version_1_11_required_for_compilation() } rclone-1.53.3/fs/walk/000077500000000000000000000000001375552240400144425ustar00rootroot00000000000000rclone-1.53.3/fs/walk/walk.go000066400000000000000000000444241375552240400157370ustar00rootroot00000000000000// Package walk walks directories package walk import ( "context" "path" "sort" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/dirtree" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/list" ) // ErrorSkipDir is used as a return value from Walk to indicate that the // directory named in the call is to be skipped. It is not returned as // an error by any function. var ErrorSkipDir = errors.New("skip this directory") // ErrorCantListR is returned by WalkR if the underlying Fs isn't // capable of doing a recursive listing. var ErrorCantListR = errors.New("recursive directory listing not available") // Func is the type of the function called for directory // visited by Walk. The path argument contains remote path to the directory. // // If there was a problem walking to directory named by path, the // incoming error will describe the problem and the function can // decide how to handle that error (and Walk will not descend into // that directory). If an error is returned, processing stops. The // sole exception is when the function returns the special value // ErrorSkipDir. If the function returns ErrorSkipDir, Walk skips the // directory's contents entirely. type Func func(path string, entries fs.DirEntries, err error) error // Walk lists the directory. // // If includeAll is not set it will use the filters defined. // // If maxLevel is < 0 then it will recurse indefinitely, else it will // only do maxLevel levels. // // It calls fn for each tranche of DirEntries read. 
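//
// A minimal sketch of a caller (assuming f is an already configured
// fs.Fs and error handling is elided for brevity):
//
//	err := Walk(ctx, f, "", false, -1, func(path string, entries fs.DirEntries, err error) error {
//		if err != nil {
//			return err
//		}
//		fs.Logf(nil, "%q has %d entries", path, len(entries))
//		return nil
//	})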
// // Note that fn will not be called concurrently whereas the directory // listing will proceed concurrently. // // Parent directories are always listed before their children // // This is implemented by WalkR if Config.UseListR is true // and f supports it and level > 1, or WalkN otherwise. // // If --files-from and --no-traverse is set then a DirTree will be // constructed with just those files in and then walked with WalkR // // NB (f, path) to be replaced by fs.Dir at some point func Walk(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLevel int, fn Func) error { if fs.Config.NoTraverse && filter.Active.HaveFilesFrom() { return walkR(ctx, f, path, includeAll, maxLevel, fn, filter.Active.MakeListR(ctx, f.NewObject)) } // FIXME should this just be maxLevel < 0 - why the maxLevel > 1 if (maxLevel < 0 || maxLevel > 1) && fs.Config.UseListR && f.Features().ListR != nil { return walkListR(ctx, f, path, includeAll, maxLevel, fn) } return walkListDirSorted(ctx, f, path, includeAll, maxLevel, fn) } // ListType is used to choose which combination of files or directories is required type ListType byte // Types of listing for ListR const ( ListObjects ListType = 1 << iota // list objects only ListDirs // list dirs only ListAll = ListObjects | ListDirs // list files and dirs ) // Objects returns true if the list type specifies objects func (l ListType) Objects() bool { return (l & ListObjects) != 0 } // Dirs returns true if the list type specifies dirs func (l ListType) Dirs() bool { return (l & ListDirs) != 0 } // Filter filters in (in place) so that it only contains the type of list entry required func (l ListType) Filter(in *fs.DirEntries) { if l == ListAll { return } out := (*in)[:0] for _, entry := range *in { switch entry.(type) { case fs.Object: if l.Objects() { out = append(out, entry) } case fs.Directory: if l.Dirs() { out = append(out, entry) } default: fs.Errorf(nil, "Unknown object type %T", entry) } } *in = out } // ListR lists the directory recursively. // // If includeAll is not set it will use the filters defined. // // If maxLevel is < 0 then it will recurse indefinitely, else it will // only do maxLevel levels. // // If synthesizeDirs is set then for bucket based remotes it will // synthesize directories from the file structure. This uses extra // memory so don't set this if you don't need directories; likewise do // set this if you are interested in directories. // // It calls fn for each tranche of DirEntries read. Note that these // don't necessarily represent a directory. // // Note that fn will not be called concurrently whereas the directory // listing will proceed concurrently. // // Directories are not listed in any particular order so you can't // rely on parents coming before children or alphabetical ordering. // // This is implemented by using ListR on the backend if possible and // efficient, otherwise by Walk. // // NB (f, path) to be replaced by fs.Dir at some point func ListR(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLevel int, listType ListType, fn fs.ListRCallback) error { // FIXME disable this with --no-fast-list ??? `--disable ListR` will do it... doListR := f.Features().ListR
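// A usage sketch (assuming f is a configured fs.Fs): counting just the
// objects below the root needs nothing more than a bare callback:
//
//	n := 0
//	err := ListR(ctx, f, "", false, -1, ListObjects, func(entries fs.DirEntries) error {
//		n += len(entries)
//		return nil
//	})
//
// Can't use ListR if...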
if doListR == nil || // ...no ListR filter.Active.HaveFilesFrom() || // ...using --files-from maxLevel >= 0 || // ...using bounded recursion len(filter.Active.Opt.ExcludeFile) > 0 || // ...using --exclude-file filter.Active.UsesDirectoryFilters() { // ...using any directory filters return listRwalk(ctx, f, path, includeAll, maxLevel, listType, fn) } return listR(ctx, f, path, includeAll, listType, fn, doListR, listType.Dirs() && f.Features().BucketBased) } // listRwalk walks the file tree for ListR using Walk func listRwalk(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLevel int, listType ListType, fn fs.ListRCallback) error { var listErr error walkErr := Walk(ctx, f, path, includeAll, maxLevel, func(path string, entries fs.DirEntries, err error) error { // Carry on listing but return the error at the end if err != nil { listErr = err err = fs.CountError(err) fs.Errorf(path, "error listing: %v", err) return nil } listType.Filter(&entries) return fn(entries) }) if listErr != nil { return listErr } return walkErr } // dirMap keeps track of directories made for bucket based remotes. // true => directory has been sent // false => directory has been seen but not sent type dirMap struct { mu sync.Mutex m map[string]bool root string } // newDirMap makes a new dirMap func newDirMap(root string) *dirMap { return &dirMap{ m: make(map[string]bool), root: root, } } // add adds a directory with the given sent status and all of its parents as unsent func (dm *dirMap) add(dir string, sent bool) { for { if dir == dm.root || dir == "" { return } currentSent, found := dm.m[dir] if found { // If it has been sent already then nothing more to do if currentSent { return } // If the new state is unsent too then don't override if !sent { return } // currentSent == false && sent == true so needs overriding } dm.m[dir] = sent // Add parents in as unsent dir = parentDir(dir) sent = false } } // parentDir finds the parent directory of path func parentDir(entryPath string) string { dirPath := path.Dir(entryPath) if dirPath == "."
{ dirPath = "" } return dirPath } // add all the directories in entries and their parents to the dirMap func (dm *dirMap) addEntries(entries fs.DirEntries) error { dm.mu.Lock() defer dm.mu.Unlock() for _, entry := range entries { switch x := entry.(type) { case fs.Object: dm.add(parentDir(x.Remote()), false) case fs.Directory: dm.add(x.Remote(), true) default: return errors.Errorf("unknown object type %T", entry) } } return nil } // send any missing parents to fn func (dm *dirMap) sendEntries(fn fs.ListRCallback) (err error) { // Count the strings first so we allocate the minimum memory n := 0 for _, sent := range dm.m { if !sent { n++ } } if n == 0 { return nil } dirs := make([]string, 0, n) // Fill the dirs up then sort it for dir, sent := range dm.m { if !sent { dirs = append(dirs, dir) } } sort.Strings(dirs) // Now convert to bulkier Dir in batches and send now := time.Now() list := NewListRHelper(fn) for _, dir := range dirs { err = list.Add(fs.NewDir(dir, now)) if err != nil { return err } } return list.Flush() } // listR walks the file tree using ListR func listR(ctx context.Context, f fs.Fs, path string, includeAll bool, listType ListType, fn fs.ListRCallback, doListR fs.ListRFn, synthesizeDirs bool) error { includeDirectory := filter.Active.IncludeDirectory(ctx, f) if !includeAll { includeAll = filter.Active.InActive() } var dm *dirMap if synthesizeDirs { dm = newDirMap(path) } var mu sync.Mutex err := doListR(ctx, path, func(entries fs.DirEntries) (err error) { if synthesizeDirs { err = dm.addEntries(entries) if err != nil { return err } } listType.Filter(&entries) if !includeAll { filteredEntries := entries[:0] for _, entry := range entries { var include bool switch x := entry.(type) { case fs.Object: include = filter.Active.IncludeObject(ctx, x) case fs.Directory: include, err = includeDirectory(x.Remote()) if err != nil { return err } default: return errors.Errorf("unknown object type %T", entry) } if include { filteredEntries = append(filteredEntries, entry) } else { fs.Debugf(entry, "Excluded from sync (and deletion)") } } entries = filteredEntries } mu.Lock() defer mu.Unlock() return fn(entries) }) if err != nil { return err } if synthesizeDirs { err = dm.sendEntries(fn) if err != nil { return err } } return nil } // walkListDirSorted lists the directory. // // It implements Walk using non recursive directory listing. func walkListDirSorted(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLevel int, fn Func) error { return walk(ctx, f, path, includeAll, maxLevel, fn, list.DirSorted) } // walkListR lists the directory. // // It implements Walk using recursive directory listing if // available, or returns ErrorCantListR if not. 
func walkListR(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLevel int, fn Func) error { listR := f.Features().ListR if listR == nil { return ErrorCantListR } return walkR(ctx, f, path, includeAll, maxLevel, fn, listR) } type listDirFunc func(ctx context.Context, fs fs.Fs, includeAll bool, dir string) (entries fs.DirEntries, err error) func walk(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLevel int, fn Func, listDir listDirFunc) error { var ( wg sync.WaitGroup // sync closing of go routines traversing sync.WaitGroup // running directory traversals doClose sync.Once // close the channel once mu sync.Mutex // stop fn being called concurrently ) // listJob describe a directory listing that needs to be done type listJob struct { remote string depth int } in := make(chan listJob, fs.Config.Checkers) errs := make(chan error, 1) quit := make(chan struct{}) closeQuit := func() { doClose.Do(func() { close(quit) go func() { for range in { traversing.Done() } }() }) } for i := 0; i < fs.Config.Checkers; i++ { wg.Add(1) go func() { defer wg.Done() for { select { case job, ok := <-in: if !ok { return } entries, err := listDir(ctx, f, includeAll, job.remote) var jobs []listJob if err == nil && job.depth != 0 { entries.ForDir(func(dir fs.Directory) { // Recurse for the directory jobs = append(jobs, listJob{ remote: dir.Remote(), depth: job.depth - 1, }) }) } mu.Lock() err = fn(job.remote, entries, err) mu.Unlock() // NB once we have passed entries to fn we mustn't touch it again if err != nil && err != ErrorSkipDir { traversing.Done() err = fs.CountError(err) fs.Errorf(job.remote, "error listing: %v", err) closeQuit() // Send error to error channel if space select { case errs <- err: default: } continue } if err == nil && len(jobs) > 0 { traversing.Add(len(jobs)) go func() { // Now we have traversed this directory, send these // jobs off for traversal in the background for _, newJob := range jobs { in <- newJob } }() } traversing.Done() case <-quit: return } } }() } // Start the process traversing.Add(1) in <- listJob{ remote: path, depth: maxLevel - 1, } traversing.Wait() close(in) wg.Wait() close(errs) // return the first error returned or nil return <-errs } func walkRDirTree(ctx context.Context, f fs.Fs, startPath string, includeAll bool, maxLevel int, listR fs.ListRFn) (dirtree.DirTree, error) { dirs := dirtree.New() // Entries can come in arbitrary order. We use toPrune to keep // all directories to exclude later. toPrune := make(map[string]bool) includeDirectory := filter.Active.IncludeDirectory(ctx, f) var mu sync.Mutex err := listR(ctx, startPath, func(entries fs.DirEntries) error { mu.Lock() defer mu.Unlock() for _, entry := range entries { slashes := strings.Count(entry.Remote(), "/") switch x := entry.(type) { case fs.Object: // Make sure we don't delete excluded files if not required if includeAll || filter.Active.IncludeObject(ctx, x) { if maxLevel < 0 || slashes <= maxLevel-1 { dirs.Add(x) } else { // Make sure we include any parent directories of excluded objects dirPath := x.Remote() for ; slashes > maxLevel-1; slashes-- { dirPath = parentDir(dirPath) } dirs.CheckParent(startPath, dirPath) } } else { fs.Debugf(x, "Excluded from sync (and deletion)") } // Check if we need to prune a directory later. 
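// (This implements --exclude-if-present: when a file whose name
// matches the configured ExcludeFile, e.g. ".ignore", appears in a
// directory, that directory is remembered in toPrune and the whole
// subtree is dropped from the DirTree by the Prune call below.)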
if !includeAll && len(filter.Active.Opt.ExcludeFile) > 0 { basename := path.Base(x.Remote()) if basename == filter.Active.Opt.ExcludeFile { excludeDir := parentDir(x.Remote()) toPrune[excludeDir] = true fs.Debugf(basename, "Excluded from sync (and deletion) based on exclude file") } } case fs.Directory: inc, err := includeDirectory(x.Remote()) if err != nil { return err } if includeAll || inc { if maxLevel < 0 || slashes <= maxLevel-1 { if slashes == maxLevel-1 { // Just add the object if at maxLevel dirs.Add(x) } else { dirs.AddDir(x) } } } else { fs.Debugf(x, "Excluded from sync (and deletion)") } default: return errors.Errorf("unknown object type %T", entry) } } return nil }) if err != nil { return nil, err } dirs.CheckParents(startPath) if len(dirs) == 0 { dirs[startPath] = nil } err = dirs.Prune(toPrune) if err != nil { return nil, err } dirs.Sort() return dirs, nil } // Create a DirTree using List func walkNDirTree(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLevel int, listDir listDirFunc) (dirtree.DirTree, error) { dirs := make(dirtree.DirTree) fn := func(dirPath string, entries fs.DirEntries, err error) error { if err == nil { dirs[dirPath] = entries } return err } err := walk(ctx, f, path, includeAll, maxLevel, fn, listDir) if err != nil { return nil, err } return dirs, nil } // NewDirTree returns a DirTree filled with the directory listing // using the parameters supplied. // // If includeAll is not set it will use the filters defined. // // If maxLevel is < 0 then it will recurse indefinitely, else it will // only do maxLevel levels. // // This is implemented by WalkR if f supports ListR and level > 1, or // WalkN otherwise. // // If --files-from and --no-traverse is set then a DirTree will be // constructed with just those files in. 
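//
// A minimal sketch of a caller (assuming f is a configured fs.Fs):
//
//	tree, err := NewDirTree(ctx, f, "", false, -1)
//	if err != nil {
//		return err
//	}
//	for _, dir := range tree.Dirs() {
//		fs.Logf(nil, "%q has %d entries", dir, len(tree[dir]))
//	}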
// // NB (f, path) to be replaced by fs.Dir at some point func NewDirTree(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLevel int) (dirtree.DirTree, error) { // if --no-traverse and --files-from build DirTree just from files if fs.Config.NoTraverse && filter.Active.HaveFilesFrom() { return walkRDirTree(ctx, f, path, includeAll, maxLevel, filter.Active.MakeListR(ctx, f.NewObject)) } // if have ListR; and recursing; and not using --files-from; then build a DirTree with ListR if ListR := f.Features().ListR; (maxLevel < 0 || maxLevel > 1) && ListR != nil && !filter.Active.HaveFilesFrom() { return walkRDirTree(ctx, f, path, includeAll, maxLevel, ListR) } // otherwise just use List return walkNDirTree(ctx, f, path, includeAll, maxLevel, list.DirSorted) } func walkR(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLevel int, fn Func, listR fs.ListRFn) error { dirs, err := walkRDirTree(ctx, f, path, includeAll, maxLevel, listR) if err != nil { return err } skipping := false skipPrefix := "" emptyDir := fs.DirEntries{} for _, dirPath := range dirs.Dirs() { if skipping { // Skip over directories as required if strings.HasPrefix(dirPath, skipPrefix) { continue } skipping = false } entries := dirs[dirPath] if entries == nil { entries = emptyDir } err = fn(dirPath, entries, nil) if err == ErrorSkipDir { skipping = true skipPrefix = dirPath if skipPrefix != "" { skipPrefix += "/" } } else if err != nil { return err } } return nil } // GetAll runs ListR getting all the results func GetAll(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLevel int) (objs []fs.Object, dirs []fs.Directory, err error) { err = ListR(ctx, f, path, includeAll, maxLevel, ListAll, func(entries fs.DirEntries) error { for _, entry := range entries { switch x := entry.(type) { case fs.Object: objs = append(objs, x) case fs.Directory: dirs = append(dirs, x) } } return nil }) return } // ListRHelper is used in the implementation of ListR to accumulate DirEntries type ListRHelper struct { callback fs.ListRCallback entries fs.DirEntries } // NewListRHelper should be called from ListR with the callback passed in func NewListRHelper(callback fs.ListRCallback) *ListRHelper { return &ListRHelper{ callback: callback, } } // send sends the stored entries to the callback if there are >= max // entries. 
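// A typical ListR implementation drives the helper like this (a
// sketch - the entries would come from some backend specific paging
// loop):
//
//	helper := NewListRHelper(callback)
//	for _, entry := range entries {
//		if err := helper.Add(entry); err != nil {
//			return err
//		}
//	}
//	return helper.Flush()
//
// Add flushes automatically once 100 entries have accumulated and
// Flush sends whatever remains, both via send below.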
func (lh *ListRHelper) send(max int) (err error) { if len(lh.entries) >= max { err = lh.callback(lh.entries) lh.entries = lh.entries[:0] } return err } // Add an entry to the stored entries and send them if there are more // than a certain amount func (lh *ListRHelper) Add(entry fs.DirEntry) error { if entry == nil { return nil } lh.entries = append(lh.entries, entry) return lh.send(100) } // Flush the stored entries (if any) sending them to the callback func (lh *ListRHelper) Flush() error { return lh.send(1) } rclone-1.53.3/fs/walk/walk_test.go000066400000000000000000000526211375552240400167740ustar00rootroot00000000000000package walk import ( "context" "fmt" "io" "strings" "sync" "testing" "github.com/pkg/errors" "github.com/rclone/rclone/fs" _ "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fstest/mockdir" "github.com/rclone/rclone/fstest/mockfs" "github.com/rclone/rclone/fstest/mockobject" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) var errDirNotFound, errorBoom error func init() { errDirNotFound = fserrors.FsError(fs.ErrorDirNotFound) fserrors.Count(errDirNotFound) errorBoom = fserrors.FsError(errors.New("boom")) fserrors.Count(errorBoom) } type ( listResult struct { entries fs.DirEntries err error } listResults map[string]listResult errorMap map[string]error listDirs struct { mu sync.Mutex t *testing.T fs fs.Fs includeAll bool results listResults walkResults listResults walkErrors errorMap finalError error checkMaps bool maxLevel int } ) func newListDirs(t *testing.T, f fs.Fs, includeAll bool, results listResults, walkErrors errorMap, finalError error) *listDirs { return &listDirs{ t: t, fs: f, includeAll: includeAll, results: results, walkErrors: walkErrors, walkResults: listResults{}, finalError: finalError, checkMaps: true, maxLevel: -1, } } // NoCheckMaps marks the maps as to be ignored at the end func (ls *listDirs) NoCheckMaps() *listDirs { ls.checkMaps = false return ls } // SetLevel(1) turns off recursion func (ls *listDirs) SetLevel(maxLevel int) *listDirs { ls.maxLevel = maxLevel return ls } // ListDir returns the expected listing for the directory func (ls *listDirs) ListDir(ctx context.Context, f fs.Fs, includeAll bool, dir string) (entries fs.DirEntries, err error) { ls.mu.Lock() defer ls.mu.Unlock() assert.Equal(ls.t, ls.fs, f) assert.Equal(ls.t, ls.includeAll, includeAll) // Fetch results for this path result, ok := ls.results[dir] if !ok { ls.t.Errorf("Unexpected list of %q", dir) return nil, errors.New("unexpected list") } delete(ls.results, dir) // Put expected results for call of WalkFn ls.walkResults[dir] = result return result.entries, result.err } // ListR returns the expected listing for the directory using ListR func (ls *listDirs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) { ls.mu.Lock() defer ls.mu.Unlock() var errorReturn error for dirPath, result := range ls.results { // Put expected results for call of WalkFn // Note that we don't call the function at all if we got an error if result.err != nil { errorReturn = result.err } if errorReturn == nil { err = callback(result.entries) require.NoError(ls.t, err) ls.walkResults[dirPath] = result } } ls.results = listResults{} return errorReturn } // IsFinished checks everything expected was used up func (ls *listDirs) IsFinished() { if ls.checkMaps { assert.Equal(ls.t, errorMap{}, ls.walkErrors) assert.Equal(ls.t, listResults{}, ls.results) 
assert.Equal(ls.t, listResults{}, ls.walkResults) } } // WalkFn is called by the walk to test the expectations func (ls *listDirs) WalkFn(dir string, entries fs.DirEntries, err error) error { ls.mu.Lock() defer ls.mu.Unlock() // ls.t.Logf("WalkFn(%q, %v, %q)", dir, entries, err) // Fetch expected entries and err result, ok := ls.walkResults[dir] if !ok { ls.t.Errorf("Unexpected walk of %q (result not found)", dir) return errors.New("result not found") } delete(ls.walkResults, dir) // Check arguments are as expected assert.Equal(ls.t, result.entries, entries) assert.Equal(ls.t, result.err, err) // Fetch return value returnErr, ok := ls.walkErrors[dir] if !ok { ls.t.Errorf("Unexpected walk of %q (error not found)", dir) return errors.New("error not found") } delete(ls.walkErrors, dir) return returnErr } // Walk does the walk and tests the expectations func (ls *listDirs) Walk() { err := walk(context.Background(), nil, "", ls.includeAll, ls.maxLevel, ls.WalkFn, ls.ListDir) assert.Equal(ls.t, ls.finalError, err) ls.IsFinished() } // WalkR does the walkR and tests the expectations func (ls *listDirs) WalkR() { err := walkR(context.Background(), nil, "", ls.includeAll, ls.maxLevel, ls.WalkFn, ls.ListR) assert.Equal(ls.t, ls.finalError, err) if ls.finalError == nil { ls.IsFinished() } } func testWalkEmpty(t *testing.T) *listDirs { return newListDirs(t, nil, false, listResults{ "": {entries: fs.DirEntries{}, err: nil}, }, errorMap{ "": nil, }, nil, ) } func TestWalkEmpty(t *testing.T) { testWalkEmpty(t).Walk() } func TestWalkREmpty(t *testing.T) { testWalkEmpty(t).WalkR() } func testWalkEmptySkip(t *testing.T) *listDirs { return newListDirs(t, nil, true, listResults{ "": {entries: fs.DirEntries{}, err: nil}, }, errorMap{ "": ErrorSkipDir, }, nil, ) } func TestWalkEmptySkip(t *testing.T) { testWalkEmptySkip(t).Walk() } func TestWalkREmptySkip(t *testing.T) { testWalkEmptySkip(t).WalkR() } func testWalkNotFound(t *testing.T) *listDirs { return newListDirs(t, nil, true, listResults{ "": {err: errDirNotFound}, }, errorMap{ "": errDirNotFound, }, errDirNotFound, ) } func TestWalkNotFound(t *testing.T) { testWalkNotFound(t).Walk() } func TestWalkRNotFound(t *testing.T) { testWalkNotFound(t).WalkR() } func TestWalkNotFoundMaskError(t *testing.T) { // this doesn't work for WalkR newListDirs(t, nil, true, listResults{ "": {err: errDirNotFound}, }, errorMap{ "": nil, }, nil, ).Walk() } func TestWalkNotFoundSkipError(t *testing.T) { // this doesn't work for WalkR newListDirs(t, nil, true, listResults{ "": {err: errDirNotFound}, }, errorMap{ "": ErrorSkipDir, }, nil, ).Walk() } func testWalkLevels(t *testing.T, maxLevel int) *listDirs { da := mockdir.New("a") oA := mockobject.Object("A") db := mockdir.New("a/b") oB := mockobject.Object("a/B") dc := mockdir.New("a/b/c") oC := mockobject.Object("a/b/C") dd := mockdir.New("a/b/c/d") oD := mockobject.Object("a/b/c/D") return newListDirs(t, nil, false, listResults{ "": {entries: fs.DirEntries{oA, da}, err: nil}, "a": {entries: fs.DirEntries{oB, db}, err: nil}, "a/b": {entries: fs.DirEntries{oC, dc}, err: nil}, "a/b/c": {entries: fs.DirEntries{oD, dd}, err: nil}, "a/b/c/d": {entries: fs.DirEntries{}, err: nil}, }, errorMap{ "": nil, "a": nil, "a/b": nil, "a/b/c": nil, "a/b/c/d": nil, }, nil, ).SetLevel(maxLevel) } func TestWalkLevels(t *testing.T) { testWalkLevels(t, -1).Walk() } func TestWalkRLevels(t *testing.T) { testWalkLevels(t, -1).WalkR() } func TestWalkLevelsNoRecursive10(t *testing.T) { testWalkLevels(t, 10).Walk() } func TestWalkRLevelsNoRecursive10(t 
*testing.T) { testWalkLevels(t, 10).WalkR() } func TestWalkNDirTree(t *testing.T) { ls := testWalkLevels(t, -1) entries, err := walkNDirTree(context.Background(), nil, "", ls.includeAll, ls.maxLevel, ls.ListDir) require.NoError(t, err) assert.Equal(t, `/ A a/ a/ B b/ a/b/ C c/ a/b/c/ D d/ a/b/c/d/ `, entries.String()) } func testWalkLevelsNoRecursive(t *testing.T) *listDirs { da := mockdir.New("a") oA := mockobject.Object("A") return newListDirs(t, nil, false, listResults{ "": {entries: fs.DirEntries{oA, da}, err: nil}, }, errorMap{ "": nil, }, nil, ).SetLevel(1) } func TestWalkLevelsNoRecursive(t *testing.T) { testWalkLevelsNoRecursive(t).Walk() } func TestWalkRLevelsNoRecursive(t *testing.T) { testWalkLevelsNoRecursive(t).WalkR() } func testWalkLevels2(t *testing.T) *listDirs { da := mockdir.New("a") oA := mockobject.Object("A") db := mockdir.New("a/b") oB := mockobject.Object("a/B") return newListDirs(t, nil, false, listResults{ "": {entries: fs.DirEntries{oA, da}, err: nil}, "a": {entries: fs.DirEntries{oB, db}, err: nil}, }, errorMap{ "": nil, "a": nil, }, nil, ).SetLevel(2) } func TestWalkLevels2(t *testing.T) { testWalkLevels2(t).Walk() } func TestWalkRLevels2(t *testing.T) { testWalkLevels2(t).WalkR() } func testWalkSkip(t *testing.T) *listDirs { da := mockdir.New("a") db := mockdir.New("a/b") dc := mockdir.New("a/b/c") return newListDirs(t, nil, false, listResults{ "": {entries: fs.DirEntries{da}, err: nil}, "a": {entries: fs.DirEntries{db}, err: nil}, "a/b": {entries: fs.DirEntries{dc}, err: nil}, }, errorMap{ "": nil, "a": nil, "a/b": ErrorSkipDir, }, nil, ) } func TestWalkSkip(t *testing.T) { testWalkSkip(t).Walk() } func TestWalkRSkip(t *testing.T) { testWalkSkip(t).WalkR() } func walkErrors(t *testing.T, expectedErr error) *listDirs { lr := listResults{} em := errorMap{} de := make(fs.DirEntries, 10) for i := range de { path := string('0' + rune(i)) de[i] = mockdir.New(path) lr[path] = listResult{entries: nil, err: fs.ErrorDirNotFound} em[path] = fs.ErrorDirNotFound } lr[""] = listResult{entries: de, err: nil} em[""] = nil return newListDirs(t, nil, true, lr, em, expectedErr, ).NoCheckMaps() } func testWalkErrors(t *testing.T) *listDirs { return walkErrors(t, errDirNotFound) } func testWalkRErrors(t *testing.T) *listDirs { return walkErrors(t, fs.ErrorDirNotFound) } func TestWalkErrors(t *testing.T) { testWalkErrors(t).Walk() } func TestWalkRErrors(t *testing.T) { testWalkRErrors(t).WalkR() } func makeTree(level int, terminalErrors bool) (listResults, errorMap) { lr := listResults{} em := errorMap{} var fill func(path string, level int) fill = func(path string, level int) { de := fs.DirEntries{} if level > 0 { for _, a := range "0123456789" { subPath := string(a) if path != "" { subPath = path + "/" + subPath } de = append(de, mockdir.New(subPath)) fill(subPath, level-1) } } lr[path] = listResult{entries: de, err: nil} em[path] = nil if level == 0 && terminalErrors { em[path] = errorBoom } } fill("", level) return lr, em } func testWalkMulti(t *testing.T) *listDirs { lr, em := makeTree(3, false) return newListDirs(t, nil, true, lr, em, nil, ) } func TestWalkMulti(t *testing.T) { testWalkMulti(t).Walk() } func TestWalkRMulti(t *testing.T) { testWalkMulti(t).WalkR() } func testWalkMultiErrors(t *testing.T) *listDirs { lr, em := makeTree(3, true) return newListDirs(t, nil, true, lr, em, errorBoom, ).NoCheckMaps() } func TestWalkMultiErrors(t *testing.T) { testWalkMultiErrors(t).Walk() } func TestWalkRMultiErrors(t *testing.T) { testWalkMultiErrors(t).WalkR() } // a very simple
listRcallback function func makeListRCallback(entries fs.DirEntries, err error) fs.ListRFn { return func(ctx context.Context, dir string, callback fs.ListRCallback) error { if err == nil { err = callback(entries) } return err } } func TestWalkRDirTree(t *testing.T) { for _, test := range []struct { entries fs.DirEntries want string err error root string level int }{ {fs.DirEntries{}, "/\n", nil, "", -1}, {fs.DirEntries{mockobject.Object("a")}, `/ a `, nil, "", -1}, {fs.DirEntries{mockobject.Object("a/b")}, `/ a/ a/ b `, nil, "", -1}, {fs.DirEntries{mockobject.Object("a/b/c/d")}, `/ a/ a/ b/ a/b/ c/ a/b/c/ d `, nil, "", -1}, {fs.DirEntries{mockobject.Object("a")}, "", errorBoom, "", -1}, {fs.DirEntries{ mockobject.Object("0/1/2/3"), mockobject.Object("4/5/6/7"), mockobject.Object("8/9/a/b"), mockobject.Object("c/d/e/f"), mockobject.Object("g/h/i/j"), mockobject.Object("k/l/m/n"), mockobject.Object("o/p/q/r"), mockobject.Object("s/t/u/v"), mockobject.Object("w/x/y/z"), }, `/ 0/ 4/ 8/ c/ g/ k/ o/ s/ w/ 0/ 1/ 0/1/ 2/ 0/1/2/ 3 4/ 5/ 4/5/ 6/ 4/5/6/ 7 8/ 9/ 8/9/ a/ 8/9/a/ b c/ d/ c/d/ e/ c/d/e/ f g/ h/ g/h/ i/ g/h/i/ j k/ l/ k/l/ m/ k/l/m/ n o/ p/ o/p/ q/ o/p/q/ r s/ t/ s/t/ u/ s/t/u/ v w/ x/ w/x/ y/ w/x/y/ z `, nil, "", -1}, {fs.DirEntries{ mockobject.Object("a/b/c/d/e/f1"), mockobject.Object("a/b/c/d/e/f2"), mockobject.Object("a/b/c/d/e/f3"), }, `a/b/c/ d/ a/b/c/d/ e/ a/b/c/d/e/ f1 f2 f3 `, nil, "a/b/c", -1}, {fs.DirEntries{ mockobject.Object("A"), mockobject.Object("a/B"), mockobject.Object("a/b/C"), mockobject.Object("a/b/c/D"), mockobject.Object("a/b/c/d/E"), }, `/ A a/ a/ B b/ `, nil, "", 2}, {fs.DirEntries{ mockobject.Object("a/b/c"), mockobject.Object("a/b/c/d/e"), }, `/ a/ a/ b/ `, nil, "", 2}, } { r, err := walkRDirTree(context.Background(), nil, test.root, true, test.level, makeListRCallback(test.entries, test.err)) assert.Equal(t, test.err, err, fmt.Sprintf("%+v", test)) assert.Equal(t, test.want, r.String(), fmt.Sprintf("%+v", test)) } } func TestWalkRDirTreeExclude(t *testing.T) { for _, test := range []struct { entries fs.DirEntries want string err error root string level int excludeFile string includeAll bool }{ {fs.DirEntries{mockobject.Object("a"), mockobject.Object("ignore")}, "", nil, "", -1, "ignore", false}, {fs.DirEntries{mockobject.Object("a")}, `/ a `, nil, "", -1, "ignore", false}, {fs.DirEntries{ mockobject.Object("a"), mockobject.Object("b/b"), mockobject.Object("b/.ignore"), }, `/ a `, nil, "", -1, ".ignore", false}, {fs.DirEntries{ mockobject.Object("a"), mockobject.Object("b/.ignore"), mockobject.Object("b/b"), }, `/ a b/ b/ .ignore b `, nil, "", -1, ".ignore", true}, {fs.DirEntries{ mockobject.Object("a"), mockobject.Object("b/b"), mockobject.Object("b/c/d/e"), mockobject.Object("b/c/ign"), mockobject.Object("b/c/x"), }, `/ a b/ b/ b `, nil, "", -1, "ign", false}, {fs.DirEntries{ mockobject.Object("a"), mockobject.Object("b/b"), mockobject.Object("b/c/d/e"), mockobject.Object("b/c/ign"), mockobject.Object("b/c/x"), }, `/ a b/ b/ b c/ b/c/ d/ ign x b/c/d/ e `, nil, "", -1, "ign", true}, } { filter.Active.Opt.ExcludeFile = test.excludeFile r, err := walkRDirTree(context.Background(), nil, test.root, test.includeAll, test.level, makeListRCallback(test.entries, test.err)) assert.Equal(t, test.err, err, fmt.Sprintf("%+v", test)) assert.Equal(t, test.want, r.String(), fmt.Sprintf("%+v", test)) } // Set to default value, to avoid side effects filter.Active.Opt.ExcludeFile = "" } func TestListType(t *testing.T) { assert.Equal(t, true, ListObjects.Objects()) assert.Equal(t, 
false, ListObjects.Dirs()) assert.Equal(t, false, ListDirs.Objects()) assert.Equal(t, true, ListDirs.Dirs()) assert.Equal(t, true, ListAll.Objects()) assert.Equal(t, true, ListAll.Dirs()) var ( a = mockobject.Object("a") b = mockobject.Object("b") dir = mockdir.New("dir") adir = mockobject.Object("dir/a") dir2 = mockdir.New("dir2") origEntries = fs.DirEntries{ a, b, dir, adir, dir2, } dirEntries = fs.DirEntries{ dir, dir2, } objEntries = fs.DirEntries{ a, b, adir, } ) copyOrigEntries := func() (out fs.DirEntries) { out = make(fs.DirEntries, len(origEntries)) copy(out, origEntries) return out } got := copyOrigEntries() ListAll.Filter(&got) assert.Equal(t, origEntries, got) got = copyOrigEntries() ListObjects.Filter(&got) assert.Equal(t, objEntries, got) got = copyOrigEntries() ListDirs.Filter(&got) assert.Equal(t, dirEntries, got) } func TestListR(t *testing.T) { objects := fs.DirEntries{ mockobject.Object("a"), mockobject.Object("b"), mockdir.New("dir"), mockobject.Object("dir/a"), mockobject.Object("dir/b"), mockobject.Object("dir/c"), } f := mockfs.NewFs("mock", "/") var got []string clearCallback := func() { got = nil } callback := func(entries fs.DirEntries) error { for _, entry := range entries { got = append(got, entry.Remote()) } return nil } doListR := func(ctx context.Context, dir string, callback fs.ListRCallback) error { var os fs.DirEntries for _, o := range objects { if dir == "" || strings.HasPrefix(o.Remote(), dir+"/") { os = append(os, o) } } return callback(os) } // Setup filter oldFilter := filter.Active defer func() { filter.Active = oldFilter }() var err error filter.Active, err = filter.NewFilter(nil) require.NoError(t, err) require.NoError(t, filter.Active.AddRule("+ b")) require.NoError(t, filter.Active.AddRule("- *")) // Base case clearCallback() err = listR(context.Background(), f, "", true, ListAll, callback, doListR, false) require.NoError(t, err) require.Equal(t, []string{"a", "b", "dir", "dir/a", "dir/b", "dir/c"}, got) // Base case - with Objects clearCallback() err = listR(context.Background(), f, "", true, ListObjects, callback, doListR, false) require.NoError(t, err) require.Equal(t, []string{"a", "b", "dir/a", "dir/b", "dir/c"}, got) // Base case - with Dirs clearCallback() err = listR(context.Background(), f, "", true, ListDirs, callback, doListR, false) require.NoError(t, err) require.Equal(t, []string{"dir"}, got) // With filter clearCallback() err = listR(context.Background(), f, "", false, ListAll, callback, doListR, false) require.NoError(t, err) require.Equal(t, []string{"b", "dir", "dir/b"}, got) // With filter - with Objects clearCallback() err = listR(context.Background(), f, "", false, ListObjects, callback, doListR, false) require.NoError(t, err) require.Equal(t, []string{"b", "dir/b"}, got) // With filter - with Dir clearCallback() err = listR(context.Background(), f, "", false, ListDirs, callback, doListR, false) require.NoError(t, err) require.Equal(t, []string{"dir"}, got) // With filter and subdir clearCallback() err = listR(context.Background(), f, "dir", false, ListAll, callback, doListR, false) require.NoError(t, err) require.Equal(t, []string{"dir/b"}, got) // Now bucket based objects = fs.DirEntries{ mockobject.Object("a"), mockobject.Object("b"), mockobject.Object("dir/a"), mockobject.Object("dir/b"), mockobject.Object("dir/subdir/c"), mockdir.New("dir/subdir"), } // Base case clearCallback() err = listR(context.Background(), f, "", true, ListAll, callback, doListR, true) require.NoError(t, err) require.Equal(t, []string{"a", "b", 
"dir/a", "dir/b", "dir/subdir/c", "dir/subdir", "dir"}, got) // With filter clearCallback() err = listR(context.Background(), f, "", false, ListAll, callback, doListR, true) require.NoError(t, err) require.Equal(t, []string{"b", "dir/b", "dir/subdir", "dir"}, got) // With filter and subdir clearCallback() err = listR(context.Background(), f, "dir", false, ListAll, callback, doListR, true) require.NoError(t, err) require.Equal(t, []string{"dir/b", "dir/subdir"}, got) // With filter and subdir - with Objects clearCallback() err = listR(context.Background(), f, "dir", false, ListObjects, callback, doListR, true) require.NoError(t, err) require.Equal(t, []string{"dir/b"}, got) // With filter and subdir - with Dirs clearCallback() err = listR(context.Background(), f, "dir", false, ListDirs, callback, doListR, true) require.NoError(t, err) require.Equal(t, []string{"dir/subdir"}, got) } func TestDirMapAdd(t *testing.T) { type add struct { dir string sent bool } for i, test := range []struct { root string in []add want map[string]bool }{ { root: "", in: []add{ {"", true}, }, want: map[string]bool{}, }, { root: "", in: []add{ {"a/b/c", true}, }, want: map[string]bool{ "a/b/c": true, "a/b": false, "a": false, }, }, { root: "", in: []add{ {"a/b/c", true}, {"a/b", true}, }, want: map[string]bool{ "a/b/c": true, "a/b": true, "a": false, }, }, { root: "", in: []add{ {"a/b", true}, {"a/b/c", false}, }, want: map[string]bool{ "a/b/c": false, "a/b": true, "a": false, }, }, { root: "root", in: []add{ {"root/a/b", true}, {"root/a/b/c", false}, }, want: map[string]bool{ "root/a/b/c": false, "root/a/b": true, "root/a": false, }, }, } { t.Run(fmt.Sprintf("%d", i), func(t *testing.T) { dm := newDirMap(test.root) for _, item := range test.in { dm.add(item.dir, item.sent) } assert.Equal(t, test.want, dm.m) }) } } func TestDirMapAddEntries(t *testing.T) { dm := newDirMap("") entries := fs.DirEntries{ mockobject.Object("dir/a"), mockobject.Object("dir/b"), mockdir.New("dir"), mockobject.Object("dir2/a"), mockobject.Object("dir2/b"), } require.NoError(t, dm.addEntries(entries)) assert.Equal(t, map[string]bool{"dir": true, "dir2": false}, dm.m) } func TestDirMapSendEntries(t *testing.T) { var got []string clearCallback := func() { got = nil } callback := func(entries fs.DirEntries) error { for _, entry := range entries { got = append(got, entry.Remote()) } return nil } // general test dm := newDirMap("") entries := fs.DirEntries{ mockobject.Object("dir/a"), mockobject.Object("dir/b"), mockdir.New("dir"), mockobject.Object("dir2/a"), mockobject.Object("dir2/b"), mockobject.Object("dir1/a"), mockobject.Object("dir3/b"), } require.NoError(t, dm.addEntries(entries)) clearCallback() err := dm.sendEntries(callback) require.NoError(t, err) assert.Equal(t, []string{ "dir1", "dir2", "dir3", }, got) // return error from callback callback2 := func(entries fs.DirEntries) error { return io.EOF } err = dm.sendEntries(callback2) require.Equal(t, io.EOF, err) // empty dm = newDirMap("") clearCallback() err = dm.sendEntries(callback) require.NoError(t, err) assert.Equal(t, []string(nil), got) } rclone-1.53.3/fstest/000077500000000000000000000000001375552240400144045ustar00rootroot00000000000000rclone-1.53.3/fstest/fstest.go000066400000000000000000000367751375552240400162650ustar00rootroot00000000000000// Package fstest provides utilities for testing the Fs package fstest // FIXME put name of test FS in Fs structure import ( "bytes" "context" "flag" "fmt" "io" "io/ioutil" "log" "math/rand" "os" "path" "path/filepath" "regexp" 
"runtime" "sort" "strings" "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/lib/random" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "golang.org/x/text/unicode/norm" ) // Globals var ( RemoteName = flag.String("remote", "", "Remote to test with, defaults to local filesystem") Verbose = flag.Bool("verbose", false, "Set to enable logging") DumpHeaders = flag.Bool("dump-headers", false, "Set to dump headers (needs -verbose)") DumpBodies = flag.Bool("dump-bodies", false, "Set to dump bodies (needs -verbose)") Individual = flag.Bool("individual", false, "Make individual bucket/container/directory for each test - much slower") LowLevelRetries = flag.Int("low-level-retries", 10, "Number of low level retries") UseListR = flag.Bool("fast-list", false, "Use recursive list if available. Uses more memory but fewer transactions.") // SizeLimit signals tests to skip maximum test file size and skip inappropriate runs SizeLimit = flag.Int64("size-limit", 0, "Limit maximum test file size") // ListRetries is the number of times to retry a listing to overcome eventual consistency ListRetries = flag.Int("list-retries", 3, "Number or times to retry listing") // MatchTestRemote matches the remote names used for testing MatchTestRemote = regexp.MustCompile(`^rclone-test-[abcdefghijklmnopqrstuvwxyz0123456789]{24}$`) ) // Seed the random number generator func init() { rand.Seed(time.Now().UnixNano()) } // Initialise rclone for testing func Initialise() { // Never ask for passwords, fail instead. // If your local config is encrypted set environment variable // "RCLONE_CONFIG_PASS=hunter2" (or your password) fs.Config.AskPassword = false // Override the config file from the environment - we don't // parse the flags any more so this doesn't happen // automatically if envConfig := os.Getenv("RCLONE_CONFIG"); envConfig != "" { config.ConfigPath = envConfig } config.LoadConfig() if *Verbose { fs.Config.LogLevel = fs.LogLevelDebug } if *DumpHeaders { fs.Config.Dump |= fs.DumpHeaders } if *DumpBodies { fs.Config.Dump |= fs.DumpBodies } fs.Config.LowLevelRetries = *LowLevelRetries fs.Config.UseListR = *UseListR } // Item represents an item for checking type Item struct { Path string Hashes map[hash.Type]string ModTime time.Time Size int64 } // NewItem creates an item from a string content func NewItem(Path, Content string, modTime time.Time) Item { i := Item{ Path: Path, ModTime: modTime, Size: int64(len(Content)), } hash := hash.NewMultiHasher() buf := bytes.NewBufferString(Content) _, err := io.Copy(hash, buf) if err != nil { log.Fatalf("Failed to create item: %v", err) } i.Hashes = hash.Sums() return i } // CheckTimeEqualWithPrecision checks the times are equal within the // precision, returns the delta and a flag func CheckTimeEqualWithPrecision(t0, t1 time.Time, precision time.Duration) (time.Duration, bool) { dt := t0.Sub(t1) if dt >= precision || dt <= -precision { return dt, false } return dt, true } // AssertTimeEqualWithPrecision checks that want is within precision // of got, asserting that with t and logging remote func AssertTimeEqualWithPrecision(t *testing.T, remote string, want, got time.Time, precision time.Duration) { dt, ok := CheckTimeEqualWithPrecision(want, got, precision) assert.True(t, ok, fmt.Sprintf("%s: Modification time difference too big |%s| > %s (want %s vs got %s) (precision %s)", remote, dt, 
precision, want, got, precision)) } // CheckModTime checks the mod time to the given precision func (i *Item) CheckModTime(t *testing.T, obj fs.Object, modTime time.Time, precision time.Duration) { AssertTimeEqualWithPrecision(t, obj.Remote(), i.ModTime, modTime, precision) } // CheckHashes checks all the hashes the object supports are correct func (i *Item) CheckHashes(t *testing.T, obj fs.Object) { require.NotNil(t, obj) types := obj.Fs().Hashes().Array() for _, Hash := range types { // Check attributes sum, err := obj.Hash(context.Background(), Hash) require.NoError(t, err) assert.True(t, hash.Equals(i.Hashes[Hash], sum), fmt.Sprintf("%s/%s: %v hash incorrect - expecting %q got %q", obj.Fs().String(), obj.Remote(), Hash, i.Hashes[Hash], sum)) } } // Check checks all the attributes of the object are correct func (i *Item) Check(t *testing.T, obj fs.Object, precision time.Duration) { i.CheckHashes(t, obj) assert.Equal(t, i.Size, obj.Size(), fmt.Sprintf("%s: size incorrect file=%d vs obj=%d", i.Path, i.Size, obj.Size())) i.CheckModTime(t, obj, obj.ModTime(context.Background()), precision) } // Normalize runs a utf8 normalization on the string if running on OS // X. This is because OS X denormalizes file names it writes to the // local file system. func Normalize(name string) string { if runtime.GOOS == "darwin" { name = norm.NFC.String(name) } return name } // Items represents all items for checking type Items struct { byName map[string]*Item byNameAlt map[string]*Item items []Item } // NewItems makes an Items func NewItems(items []Item) *Items { is := &Items{ byName: make(map[string]*Item), byNameAlt: make(map[string]*Item), items: items, } // Fill up byName for i := range items { is.byName[Normalize(items[i].Path)] = &items[i] } return is } // Find checks off an item func (is *Items) Find(t *testing.T, obj fs.Object, precision time.Duration) { remote := Normalize(obj.Remote()) i, ok := is.byName[remote] if !ok { i, ok = is.byNameAlt[remote] assert.True(t, ok, fmt.Sprintf("Unexpected file %q", remote)) } if i != nil { delete(is.byName, i.Path) i.Check(t, obj, precision) } } // Done checks all finished func (is *Items) Done(t *testing.T) { if len(is.byName) != 0 { for name := range is.byName { t.Logf("Not found %q", name) } } assert.Equal(t, 0, len(is.byName), fmt.Sprintf("%d objects not found", len(is.byName))) } // makeListingFromItems returns a string representation of the items // // it returns two possible strings, one normal and one for windows func makeListingFromItems(items []Item) string { nameLengths := make([]string, len(items)) for i, item := range items { remote := Normalize(item.Path) nameLengths[i] = fmt.Sprintf("%s (%d)", remote, item.Size) } sort.Strings(nameLengths) return strings.Join(nameLengths, ", ") } // makeListingFromObjects returns a string representation of the objects func makeListingFromObjects(objs []fs.Object) string { nameLengths := make([]string, len(objs)) for i, obj := range objs { nameLengths[i] = fmt.Sprintf("%s (%d)", Normalize(obj.Remote()), obj.Size()) } sort.Strings(nameLengths) return strings.Join(nameLengths, ", ") } // filterEmptyDirs removes any empty (or containing only directories) // directories from expectedDirs func filterEmptyDirs(t *testing.T, items []Item, expectedDirs []string) (newExpectedDirs []string) { dirs := map[string]struct{}{"": {}} for _, item := range items { base := item.Path for { base = path.Dir(base) if base == "." 
|| base == "/" { break } dirs[base] = struct{}{} } } for _, expectedDir := range expectedDirs { if _, found := dirs[expectedDir]; found { newExpectedDirs = append(newExpectedDirs, expectedDir) } else { t.Logf("Filtering empty directory %q", expectedDir) } } return newExpectedDirs } // CheckListingWithRoot checks the fs to see if it has the // expected contents with the given precision. // // If expectedDirs is non nil then we check those too. Note that no // directories returned is also OK as some remotes don't return // directories. // // dir is the directory used for the listing. func CheckListingWithRoot(t *testing.T, f fs.Fs, dir string, items []Item, expectedDirs []string, precision time.Duration) { if expectedDirs != nil && !f.Features().CanHaveEmptyDirectories { expectedDirs = filterEmptyDirs(t, items, expectedDirs) } is := NewItems(items) ctx := context.Background() oldErrors := accounting.Stats(ctx).GetErrors() var objs []fs.Object var dirs []fs.Directory var err error var retries = *ListRetries sleep := time.Second / 2 wantListing := makeListingFromItems(items) gotListing := "" listingOK := false for i := 1; i <= retries; i++ { objs, dirs, err = walk.GetAll(ctx, f, dir, true, -1) if err != nil && err != fs.ErrorDirNotFound { t.Fatalf("Error listing: %v", err) } gotListing = makeListingFromObjects(objs) listingOK = wantListing == gotListing if listingOK && (expectedDirs == nil || len(dirs) == len(expectedDirs)) { // Put an extra sleep in if we did any retries just to make sure it really // is consistent (here is looking at you Amazon Drive!) if i != 1 { extraSleep := 5*time.Second + sleep t.Logf("Sleeping for %v just to make sure", extraSleep) time.Sleep(extraSleep) } break } sleep *= 2 t.Logf("Sleeping for %v for list eventual consistency: %d/%d", sleep, i, retries) time.Sleep(sleep) if doDirCacheFlush := f.Features().DirCacheFlush; doDirCacheFlush != nil { t.Logf("Flushing the directory cache") doDirCacheFlush() } } assert.True(t, listingOK, fmt.Sprintf("listing wrong, want\n %s got\n %s", wantListing, gotListing)) for _, obj := range objs { require.NotNil(t, obj) is.Find(t, obj, precision) } is.Done(t) // Don't notice an error when listing an empty directory if len(items) == 0 && oldErrors == 0 && accounting.Stats(ctx).GetErrors() == 1 { accounting.Stats(ctx).ResetErrors() } // Check the directories if expectedDirs != nil { expectedDirsCopy := make([]string, len(expectedDirs)) for i, dir := range expectedDirs { expectedDirsCopy[i] = Normalize(dir) } actualDirs := []string{} for _, dir := range dirs { actualDirs = append(actualDirs, Normalize(dir.Remote())) } sort.Strings(actualDirs) sort.Strings(expectedDirsCopy) assert.Equal(t, expectedDirsCopy, actualDirs, "directories") } } // CheckListingWithPrecision checks the fs to see if it has the // expected contents with the given precision. // // If expectedDirs is non nil then we check those too. Note that no // directories returned is also OK as some remotes don't return // directories. 
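//
// A minimal usage sketch (hypothetical file name and contents, for
// illustration only - a real test would upload the file first):
//
//	file := NewItem("dir/hello.txt", "hello", Time("2011-12-25T12:59:59.123456789Z"))
//	CheckListingWithPrecision(t, f, []Item{file}, []string{"dir"}, fs.GetModifyWindow(f))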
func CheckListingWithPrecision(t *testing.T, f fs.Fs, items []Item, expectedDirs []string, precision time.Duration) { CheckListingWithRoot(t, f, "", items, expectedDirs, precision) } // CheckListing checks the fs to see if it has the expected contents func CheckListing(t *testing.T, f fs.Fs, items []Item) { precision := f.Precision() CheckListingWithPrecision(t, f, items, nil, precision) } // CheckItems checks the fs to see if it has only the items passed in // using a precision of fs.Config.ModifyWindow func CheckItems(t *testing.T, f fs.Fs, items ...Item) { CheckListingWithPrecision(t, f, items, nil, fs.GetModifyWindow(f)) } // CompareItems compares a set of DirEntries to a slice of items and a list of dirs // The modtimes are compared with the precision supplied func CompareItems(t *testing.T, entries fs.DirEntries, items []Item, expectedDirs []string, precision time.Duration, what string) { is := NewItems(items) var objs []fs.Object var dirs []fs.Directory wantListing := makeListingFromItems(items) for _, entry := range entries { switch x := entry.(type) { case fs.Directory: dirs = append(dirs, x) case fs.Object: objs = append(objs, x) // do nothing default: t.Fatalf("unknown object type %T", entry) } } gotListing := makeListingFromObjects(objs) listingOK := wantListing == gotListing assert.True(t, listingOK, fmt.Sprintf("%s not equal, want\n %s got\n %s", what, wantListing, gotListing)) for _, obj := range objs { require.NotNil(t, obj) is.Find(t, obj, precision) } is.Done(t) // Check the directories if expectedDirs != nil { expectedDirsCopy := make([]string, len(expectedDirs)) for i, dir := range expectedDirs { expectedDirsCopy[i] = Normalize(dir) } actualDirs := []string{} for _, dir := range dirs { actualDirs = append(actualDirs, Normalize(dir.Remote())) } sort.Strings(actualDirs) sort.Strings(expectedDirsCopy) assert.Equal(t, expectedDirsCopy, actualDirs, "directories not equal") } } // Time parses a time string or logs a fatal error func Time(timeString string) time.Time { t, err := time.Parse(time.RFC3339Nano, timeString) if err != nil { log.Fatalf("Failed to parse time %q: %v", timeString, err) } return t } // LocalRemote creates a temporary directory name for local remotes func LocalRemote() (path string, err error) { path, err = ioutil.TempDir("", "rclone") if err == nil { // Now remove the directory err = os.Remove(path) } path = filepath.ToSlash(path) return } // RandomRemoteName makes a random bucket or subdirectory name // // Returns a random remote name plus the leaf name func RandomRemoteName(remoteName string) (string, string, error) { var err error var leafName string // Make a directory if remote name is null if remoteName == "" { remoteName, err = LocalRemote() if err != nil { return "", "", err } } else { if !strings.HasSuffix(remoteName, ":") { remoteName += "/" } leafName = "rclone-test-" + random.String(24) if !MatchTestRemote.MatchString(leafName) { log.Fatalf("%q didn't match the test remote name regexp", leafName) } remoteName += leafName } return remoteName, leafName, nil } // RandomRemote makes a random bucket or subdirectory on the remote // from the -remote parameter // // Call the finalise function returned to Purge the fs at the end (and // the parent if necessary) // // Returns the remote, its url, a finaliser and an error func RandomRemote() (fs.Fs, string, func(), error) { var err error var parentRemote fs.Fs remoteName := *RemoteName remoteName, _, err = RandomRemoteName(remoteName) if err != nil { return nil, "", nil, err } remote, err := 
fs.NewFs(remoteName) if err != nil { return nil, "", nil, err } finalise := func() { Purge(remote) if parentRemote != nil { Purge(parentRemote) if err != nil { log.Printf("Failed to purge %v: %v", parentRemote, err) } } } return remote, remoteName, finalise, nil } // Purge is a simplified re-implementation of operations.Purge for the // test routine cleanup to avoid circular dependencies. // // It logs errors rather than returning them func Purge(f fs.Fs) { ctx := context.Background() var err error doFallbackPurge := true if doPurge := f.Features().Purge; doPurge != nil { doFallbackPurge = false fs.Debugf(f, "Purge remote") err = doPurge(ctx, "") if err == fs.ErrorCantPurge { doFallbackPurge = true } } if doFallbackPurge { dirs := []string{""} err = walk.ListR(ctx, f, "", true, -1, walk.ListAll, func(entries fs.DirEntries) error { var err error entries.ForObject(func(obj fs.Object) { fs.Debugf(f, "Purge object %q", obj.Remote()) err = obj.Remove(ctx) if err != nil { log.Printf("purge failed to remove %q: %v", obj.Remote(), err) } }) entries.ForDir(func(dir fs.Directory) { dirs = append(dirs, dir.Remote()) }) return nil }) sort.Strings(dirs) for i := len(dirs) - 1; i >= 0; i-- { dir := dirs[i] fs.Debugf(f, "Purge dir %q", dir) err := f.Rmdir(ctx, dir) if err != nil { log.Printf("purge failed to rmdir %q: %v", dir, err) } } } if err != nil { log.Printf("purge failed: %v", err) } } rclone-1.53.3/fstest/fstests/000077500000000000000000000000001375552240400160775ustar00rootroot00000000000000rclone-1.53.3/fstest/fstests/fstests.go000066400000000000000000001722671375552240400201400ustar00rootroot00000000000000// Package fstests provides generic integration tests for the Fs and // Object interfaces. // // These tests are concerned with the basic functionality of a // backend. The tests in fs/sync and fs/operations tests more // cornercases that these tests don't. package fstests import ( "bytes" "context" "fmt" "io" "io/ioutil" "math/bits" "os" "path" "path/filepath" "reflect" "sort" "strconv" "strings" "testing" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fspath" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/object" "github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest/testserver" "github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/random" "github.com/rclone/rclone/lib/readers" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // InternalTester is an optional interface for Fs which allows to execute internal tests // // This interface should be implemented in 'backend'_internal_test.go and not in 'backend'.go type InternalTester interface { InternalTest(*testing.T) } // ChunkedUploadConfig contains the values used by TestFsPutChunked // to determine the limits of chunked uploading type ChunkedUploadConfig struct { // Minimum allowed chunk size MinChunkSize fs.SizeSuffix // Maximum allowed chunk size, 0 is no limit MaxChunkSize fs.SizeSuffix // Rounds the given chunk size up to the next valid value // nil will disable rounding // e.g. 
the next power of 2 CeilChunkSize func(fs.SizeSuffix) fs.SizeSuffix // More than one chunk is required on upload NeedMultipleChunks bool } // SetUploadChunkSizer is a test only interface to change the upload chunk size at runtime type SetUploadChunkSizer interface { // Change the configured UploadChunkSize. // Will only be called while no transfer is in progress. SetUploadChunkSize(fs.SizeSuffix) (fs.SizeSuffix, error) } // SetUploadCutoffer is a test only interface to change the upload cutoff size at runtime type SetUploadCutoffer interface { // Change the configured UploadCutoff. // Will only be called while no transfer is in progress. SetUploadCutoff(fs.SizeSuffix) (fs.SizeSuffix, error) } // NextPowerOfTwo returns the current or next bigger power of two. // All values less or equal 0 will return 0 func NextPowerOfTwo(i fs.SizeSuffix) fs.SizeSuffix { return 1 << uint(64-bits.LeadingZeros64(uint64(i)-1)) } // NextMultipleOf returns a function that can be used as a CeilChunkSize function. // This function will return the next multiple of m that is equal or bigger than i. // All values less or equal 0 will return 0. func NextMultipleOf(m fs.SizeSuffix) func(fs.SizeSuffix) fs.SizeSuffix { if m <= 0 { panic(fmt.Sprintf("invalid multiplier %s", m)) } return func(i fs.SizeSuffix) fs.SizeSuffix { if i <= 0 { return 0 } return (((i - 1) / m) + 1) * m } } // dirsToNames returns a sorted list of names func dirsToNames(dirs []fs.Directory) []string { names := []string{} for _, dir := range dirs { names = append(names, fstest.Normalize(dir.Remote())) } sort.Strings(names) return names } // objsToNames returns a sorted list of object names func objsToNames(objs []fs.Object) []string { names := []string{} for _, obj := range objs { names = append(names, fstest.Normalize(obj.Remote())) } sort.Strings(names) return names } // findObject finds the object on the remote func findObject(ctx context.Context, t *testing.T, f fs.Fs, Name string) fs.Object { var obj fs.Object var err error sleepTime := 1 * time.Second for i := 1; i <= *fstest.ListRetries; i++ { obj, err = f.NewObject(ctx, Name) if err == nil { break } t.Logf("Sleeping for %v for findObject eventual consistency: %d/%d (%v)", sleepTime, i, *fstest.ListRetries, err) time.Sleep(sleepTime) sleepTime = (sleepTime * 3) / 2 } require.NoError(t, err) return obj } // retry f() until no retriable error func retry(t *testing.T, what string, f func() error) { const maxTries = 10 var err error for tries := 1; tries <= maxTries; tries++ { err = f() // exit if no error, or error is not retriable if err == nil || !fserrors.IsRetryError(err) { break } t.Logf("%s error: %v - low level retry %d/%d", what, err, tries, maxTries) time.Sleep(2 * time.Second) } require.NoError(t, err, what) } // testPut puts file with random contents to the remote func testPut(ctx context.Context, t *testing.T, f fs.Fs, file *fstest.Item) (string, fs.Object) { return PutTestContents(ctx, t, f, file, random.String(100), true) } // PutTestContents puts file with given contents to the remote and checks it but unlike TestPutLarge doesn't remove func PutTestContents(ctx context.Context, t *testing.T, f fs.Fs, file *fstest.Item, contents string, check bool) (string, fs.Object) { var ( err error obj fs.Object uploadHash *hash.MultiHasher ) retry(t, "Put", func() error { buf := bytes.NewBufferString(contents) uploadHash = hash.NewMultiHasher() in := io.TeeReader(buf, uploadHash) file.Size = int64(buf.Len()) obji := object.NewStaticObjectInfo(file.Path, file.ModTime, file.Size, true, nil, nil) 
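// Put drains "in", and the TeeReader above mirrors every byte read into
// uploadHash, so the upload and the hash calculation happen in a single
// pass over the data without re-reading the contents afterwards.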
obj, err = f.Put(ctx, in, obji) return err }) file.Hashes = uploadHash.Sums() if check { file.Check(t, obj, f.Precision()) // Re-read the object and check again obj = findObject(ctx, t, f, file.Path) file.Check(t, obj, f.Precision()) } return contents, obj } // TestPutLarge puts file to the remote, checks it and removes it on success. func TestPutLarge(ctx context.Context, t *testing.T, f fs.Fs, file *fstest.Item) { var ( err error obj fs.Object uploadHash *hash.MultiHasher ) retry(t, "PutLarge", func() error { r := readers.NewPatternReader(file.Size) uploadHash = hash.NewMultiHasher() in := io.TeeReader(r, uploadHash) obji := object.NewStaticObjectInfo(file.Path, file.ModTime, file.Size, true, nil, nil) obj, err = f.Put(ctx, in, obji) if file.Size == 0 && err == fs.ErrorCantUploadEmptyFiles { t.Skip("Can't upload zero length files") } return err }) file.Hashes = uploadHash.Sums() file.Check(t, obj, f.Precision()) // Re-read the object and check again obj = findObject(ctx, t, f, file.Path) file.Check(t, obj, f.Precision()) // Download the object and check it is OK downloadHash := hash.NewMultiHasher() download, err := obj.Open(ctx) require.NoError(t, err) n, err := io.Copy(downloadHash, download) require.NoError(t, err) assert.Equal(t, file.Size, n) require.NoError(t, download.Close()) assert.Equal(t, file.Hashes, downloadHash.Sums()) // Remove the object require.NoError(t, obj.Remove(ctx)) } // read the contents of an object as a string func readObject(ctx context.Context, t *testing.T, obj fs.Object, limit int64, options ...fs.OpenOption) string { what := fmt.Sprintf("readObject(%q) limit=%d, options=%+v", obj, limit, options) in, err := obj.Open(ctx, options...) require.NoError(t, err, what) var r io.Reader = in if limit >= 0 { r = &io.LimitedReader{R: r, N: limit} } contents, err := ioutil.ReadAll(r) require.NoError(t, err, what) err = in.Close() require.NoError(t, err, what) return string(contents) } // ExtraConfigItem describes a config item for the tests type ExtraConfigItem struct{ Name, Key, Value string } // Opt is options for Run type Opt struct { RemoteName string NilObject fs.Object ExtraConfig []ExtraConfigItem SkipBadWindowsCharacters bool // skips unusable characters for windows if set SkipFsMatch bool // if set skip exact matching of Fs value TiersToTest []string // List of tiers which can be tested in setTier test ChunkedUpload ChunkedUploadConfig UnimplementableFsMethods []string // List of methods which can't be implemented in this wrapping Fs UnimplementableObjectMethods []string // List of methods which can't be implemented in this wrapping Fs SkipFsCheckWrap bool // if set skip FsCheckWrap SkipObjectCheckWrap bool // if set skip ObjectCheckWrap SkipInvalidUTF8 bool // if set skip invalid UTF-8 checks } // returns true if x is found in ss func stringsContains(x string, ss []string) bool { for _, s := range ss { if x == s { return true } } return false } // Run runs the basic integration tests for a remote using the options passed in. // // They are structured in a hierarchical way so that dependencies for the tests can be created. // // For example some tests require the directory to be created - these // are inside the "FsMkdir" test. Some tests require some tests files // - these are inside the "FsPutFiles" test. 
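//
// A backend typically wires these tests up from its own test file with
// something like the following sketch (the remote and package names here
// are illustrative, not taken from a real backend):
//
//	func TestIntegration(t *testing.T) {
//		fstests.Run(t, &fstests.Opt{
//			RemoteName: "TestMyRemote:",
//			NilObject:  (*myremote.Object)(nil),
//		})
//	}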
func Run(t *testing.T, opt *Opt) { var ( remote fs.Fs remoteName = opt.RemoteName subRemoteName string subRemoteLeaf string file1 = fstest.Item{ ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"), Path: "file name.txt", } file1Contents string file2 = fstest.Item{ ModTime: fstest.Time("2001-02-03T04:05:10.123123123Z"), Path: `hello? sausage/êé/Hello, 世界/ " ' @ < > & ? + ≠/z.txt`, } isLocalRemote bool purged bool // whether the dir has been purged or not ctx = context.Background() unwrappableFsMethods = []string{"Command"} // these Fs methods don't need to be wrapped ever ) if strings.HasSuffix(os.Getenv("RCLONE_CONFIG"), "/notfound") && *fstest.RemoteName == "" { t.Skip("quicktest only") } // Skip the test if the remote isn't configured skipIfNotOk := func(t *testing.T) { if remote == nil { t.Skipf("WARN: %q not configured", remoteName) } } // Skip if remote is not ListR capable, otherwise set the useListR // flag, returning a function to restore its value skipIfNotListR := func(t *testing.T) func() { skipIfNotOk(t) if remote.Features().ListR == nil { t.Skip("FS has no ListR interface") } previous := fs.Config.UseListR fs.Config.UseListR = true return func() { fs.Config.UseListR = previous } } // Skip if remote is not SetTier and GetTier capable skipIfNotSetTier := func(t *testing.T) { skipIfNotOk(t) if remote.Features().SetTier == false || remote.Features().GetTier == false { t.Skip("FS has no SetTier & GetTier interfaces") } } // Return true if f (or any of the things it wraps) is bucket // based but not at the root. isBucketBasedButNotRoot := func(f fs.Fs) bool { f = fs.UnWrapFs(f) return f.Features().BucketBased && strings.Contains(strings.Trim(f.Root(), "/"), "/") } // Initialise the remote fstest.Initialise() // Set extra config if supplied for _, item := range opt.ExtraConfig { config.FileSet(item.Name, item.Key, item.Value) } if *fstest.RemoteName != "" { remoteName = *fstest.RemoteName } oldFstestRemoteName := fstest.RemoteName fstest.RemoteName = &remoteName defer func() { fstest.RemoteName = oldFstestRemoteName }() t.Logf("Using remote %q", remoteName) var err error if remoteName == "" { remoteName, err = fstest.LocalRemote() require.NoError(t, err) isLocalRemote = true } // Start any test servers if required finish, err := testserver.Start(remoteName) require.NoError(t, err) defer finish() // Make the Fs we are testing with, initialising the local variables // subRemoteName - name of the remote after the TestRemote: // subRemoteLeaf - a subdirectory to use under that // remote - the result of fs.NewFs(TestRemote:subRemoteName) subRemoteName, subRemoteLeaf, err = fstest.RandomRemoteName(remoteName) require.NoError(t, err) remote, err = fs.NewFs(subRemoteName) if err == fs.ErrorNotFoundInConfigFile { t.Logf("Didn't find %q in config file - skipping tests", remoteName) return } require.NoError(t, err, fmt.Sprintf("unexpected error: %v", err)) // Skip the rest if it failed skipIfNotOk(t) // Check to see if Fs that wrap other Fs implement all the optional methods t.Run("FsCheckWrap", func(t *testing.T) { skipIfNotOk(t) if opt.SkipFsCheckWrap { t.Skip("Skipping FsCheckWrap on this Fs") } ft := new(fs.Features).Fill(remote) if ft.UnWrap == nil { t.Skip("Not a wrapping Fs") } v := reflect.ValueOf(ft).Elem() vType := v.Type() for i := 0; i < v.NumField(); i++ { vName := vType.Field(i).Name if stringsContains(vName, opt.UnimplementableFsMethods) { continue } if stringsContains(vName, unwrappableFsMethods) { continue } field := v.Field(i) // skip the bools if field.Type().Kind() == 
reflect.Bool { continue } if field.IsNil() { t.Errorf("Missing Fs wrapper for %s", vName) } } }) // Check to see if Fs advertises commands and they work and have docs t.Run("FsCommand", func(t *testing.T) { skipIfNotOk(t) doCommand := remote.Features().Command if doCommand == nil { t.Skip("No commands in this remote") } // Check the correct error is generated _, err := doCommand(context.Background(), "NOTFOUND", nil, nil) assert.Equal(t, fs.ErrorCommandNotFound, err, "Incorrect error generated on command not found") // Check there are some commands in the fsInfo fsInfo, _, _, _, err := fs.ConfigFs(remoteName) require.NoError(t, err) assert.True(t, len(fsInfo.CommandHelp) > 0, "Command is declared, must return some help in CommandHelp") }) // TestFsRmdirNotFound tests deleting a non existent directory t.Run("FsRmdirNotFound", func(t *testing.T) { skipIfNotOk(t) if isBucketBasedButNotRoot(remote) { t.Skip("Skipping test as non root bucket based remote") } err := remote.Rmdir(ctx, "") assert.Error(t, err, "Expecting error on Rmdir non existent") }) // Make the directory err = remote.Mkdir(ctx, "") require.NoError(t, err) fstest.CheckListing(t, remote, []fstest.Item{}) // TestFsString tests the String method t.Run("FsString", func(t *testing.T) { skipIfNotOk(t) str := remote.String() require.NotEqual(t, "", str) }) // TestFsName tests the Name method t.Run("FsName", func(t *testing.T) { skipIfNotOk(t) got := remote.Name() want := remoteName[:strings.LastIndex(remoteName, ":")+1] if isLocalRemote { want = "local:" } require.Equal(t, want, got+":") }) // TestFsRoot tests the Root method t.Run("FsRoot", func(t *testing.T) { skipIfNotOk(t) name := remote.Name() + ":" root := remote.Root() if isLocalRemote { // only check last path element on local require.Equal(t, filepath.Base(subRemoteName), filepath.Base(root)) } else { require.Equal(t, subRemoteName, name+root) } }) // TestFsRmdirEmpty tests deleting an empty directory t.Run("FsRmdirEmpty", func(t *testing.T) { skipIfNotOk(t) err := remote.Rmdir(ctx, "") require.NoError(t, err) }) // TestFsMkdir tests making a directory // // Tests that require the directory to be made are within this t.Run("FsMkdir", func(t *testing.T) { skipIfNotOk(t) err := remote.Mkdir(ctx, "") require.NoError(t, err) fstest.CheckListing(t, remote, []fstest.Item{}) err = remote.Mkdir(ctx, "") require.NoError(t, err) // TestFsMkdirRmdirSubdir tests making and removing a sub directory t.Run("FsMkdirRmdirSubdir", func(t *testing.T) { skipIfNotOk(t) dir := "dir/subdir" err := operations.Mkdir(ctx, remote, dir) require.NoError(t, err) fstest.CheckListingWithPrecision(t, remote, []fstest.Item{}, []string{"dir", "dir/subdir"}, fs.GetModifyWindow(remote)) err = operations.Rmdir(ctx, remote, dir) require.NoError(t, err) fstest.CheckListingWithPrecision(t, remote, []fstest.Item{}, []string{"dir"}, fs.GetModifyWindow(remote)) err = operations.Rmdir(ctx, remote, "dir") require.NoError(t, err) fstest.CheckListingWithPrecision(t, remote, []fstest.Item{}, []string{}, fs.GetModifyWindow(remote)) }) // TestFsListEmpty tests listing an empty directory t.Run("FsListEmpty", func(t *testing.T) { skipIfNotOk(t) fstest.CheckListing(t, remote, []fstest.Item{}) }) // TestFsListDirEmpty tests listing the directories from an empty directory TestFsListDirEmpty := func(t *testing.T) { skipIfNotOk(t) objs, dirs, err := walk.GetAll(ctx, remote, "", true, 1) if !remote.Features().CanHaveEmptyDirectories { if err != fs.ErrorDirNotFound { require.NoError(t, err) } } else { require.NoError(t, err) } 
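// Whichever error-handling branch was taken above, an empty root must
// list as no objects and no directories.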
assert.Equal(t, []string{}, objsToNames(objs)) assert.Equal(t, []string{}, dirsToNames(dirs)) } t.Run("FsListDirEmpty", TestFsListDirEmpty) // TestFsListRDirEmpty tests listing the directories from an empty directory using ListR t.Run("FsListRDirEmpty", func(t *testing.T) { defer skipIfNotListR(t)() TestFsListDirEmpty(t) }) // TestFsListDirNotFound tests listing a directory that does not exist TestFsListDirNotFound := func(t *testing.T) { skipIfNotOk(t) objs, dirs, err := walk.GetAll(ctx, remote, "does not exist", true, 1) if !remote.Features().CanHaveEmptyDirectories { if err != fs.ErrorDirNotFound { assert.NoError(t, err) assert.Equal(t, 0, len(objs)+len(dirs)) } } else { assert.Equal(t, fs.ErrorDirNotFound, err) } } t.Run("FsListDirNotFound", TestFsListDirNotFound) // TestFsListRDirNotFound tests listing a directory that does not exist using ListR t.Run("FsListRDirNotFound", func(t *testing.T) { defer skipIfNotListR(t)() TestFsListDirNotFound(t) }) // FsEncoding tests that file name encodings are // working by uploading a series of unusual files // Must be run in an empty directory t.Run("FsEncoding", func(t *testing.T) { skipIfNotOk(t) // check no files or dirs as pre-requisite fstest.CheckListingWithPrecision(t, remote, []fstest.Item{}, []string{}, fs.GetModifyWindow(remote)) for _, test := range []struct { name string path string }{ // See lib/encoder/encoder.go for list of things that go here {"control chars", "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1A\x1B\x1C\x1D\x1E\x1F\x7F"}, {"dot", "."}, {"dot dot", ".."}, {"punctuation", "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"}, {"leading space", " leading space"}, {"leading tilde", "~leading tilde"}, {"leading CR", "\rleading CR"}, {"leading LF", "\nleading LF"}, {"leading HT", "\tleading HT"}, {"leading VT", "\vleading VT"}, {"leading dot", ".leading dot"}, {"trailing space", "trailing space "}, {"trailing CR", "trailing CR\r"}, {"trailing LF", "trailing LF\n"}, {"trailing HT", "trailing HT\t"}, {"trailing VT", "trailing VT\v"}, {"trailing dot", "trailing dot."}, {"invalid UTF-8", "invalid utf-8\xfe"}, } { t.Run(test.name, func(t *testing.T) { if opt.SkipInvalidUTF8 && test.name == "invalid UTF-8" { t.Skip("Skipping " + test.name) } // turn raw strings into Standard encoding fileName := encoder.Standard.Encode(test.path) dirName := fileName t.Logf("testing %q", fileName) assert.NoError(t, remote.Mkdir(ctx, dirName)) file := fstest.Item{ ModTime: time.Now(), Path: dirName + "/" + fileName, // test creating a file and dir with that name } _, o := testPut(context.Background(), t, remote, &file) fstest.CheckListingWithPrecision(t, remote, []fstest.Item{file}, []string{dirName}, fs.GetModifyWindow(remote)) assert.NoError(t, o.Remove(ctx)) assert.NoError(t, remote.Rmdir(ctx, dirName)) fstest.CheckListingWithPrecision(t, remote, []fstest.Item{}, []string{}, fs.GetModifyWindow(remote)) }) } }) // TestFsNewObjectNotFound tests not finding an object t.Run("FsNewObjectNotFound", func(t *testing.T) { skipIfNotOk(t) // Object in an existing directory o, err := remote.NewObject(ctx, "potato") assert.Nil(t, o) assert.Equal(t, fs.ErrorObjectNotFound, err) // Now try an object in a non existing directory o, err = remote.NewObject(ctx, "directory/not/found/potato") assert.Nil(t, o) assert.Equal(t, fs.ErrorObjectNotFound, err) }) // TestFsPutError tests uploading a file where there is an error // // It makes sure that aborting a file half way through does not create // a file on
the remote. // // go test -v -run 'TestIntegration/Test(Setup|Init|FsMkdir|FsPutError)$' t.Run("FsPutError", func(t *testing.T) { skipIfNotOk(t) var N int64 = 5 * 1024 if *fstest.SizeLimit > 0 && N > *fstest.SizeLimit { N = *fstest.SizeLimit t.Logf("Reduce file size due to limit %d", N) } // Read N bytes then produce an error contents := random.String(int(N)) buf := bytes.NewBufferString(contents) er := &readers.ErrorReader{Err: errors.New("potato")} in := io.MultiReader(buf, er) obji := object.NewStaticObjectInfo(file2.Path, file2.ModTime, 2*N, true, nil, nil) _, err := remote.Put(ctx, in, obji) // assert.Nil(t, obj) - FIXME some remotes return the object even on error assert.NotNil(t, err) obj, err := remote.NewObject(ctx, file2.Path) assert.Nil(t, obj) assert.Equal(t, fs.ErrorObjectNotFound, err) }) t.Run("FsPutZeroLength", func(t *testing.T) { skipIfNotOk(t) TestPutLarge(ctx, t, remote, &fstest.Item{ ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"), Path: "zero-length-file", Size: int64(0), }) }) t.Run("FsOpenWriterAt", func(t *testing.T) { skipIfNotOk(t) openWriterAt := remote.Features().OpenWriterAt if openWriterAt == nil { t.Skip("FS has no OpenWriterAt interface") } path := "writer-at-subdir/writer-at-file" out, err := openWriterAt(ctx, path, -1) require.NoError(t, err) var n int n, err = out.WriteAt([]byte("def"), 3) assert.NoError(t, err) assert.Equal(t, 3, n) n, err = out.WriteAt([]byte("ghi"), 6) assert.NoError(t, err) assert.Equal(t, 3, n) n, err = out.WriteAt([]byte("abc"), 0) assert.NoError(t, err) assert.Equal(t, 3, n) assert.NoError(t, out.Close()) obj := findObject(ctx, t, remote, path) assert.Equal(t, "abcdefghi", readObject(ctx, t, obj, -1), "contents of file differ") assert.NoError(t, obj.Remove(ctx)) assert.NoError(t, remote.Rmdir(ctx, "writer-at-subdir")) }) // TestFsChangeNotify tests that changes are properly // propagated // // go test -v -remote TestDrive: -run '^Test(Setup|Init|FsChangeNotify)$' -verbose t.Run("FsChangeNotify", func(t *testing.T) { skipIfNotOk(t) // Check have ChangeNotify doChangeNotify := remote.Features().ChangeNotify if doChangeNotify == nil { t.Skip("FS has no ChangeNotify interface") } err := operations.Mkdir(ctx, remote, "dir") require.NoError(t, err) pollInterval := make(chan time.Duration) dirChanges := map[string]struct{}{} objChanges := map[string]struct{}{} doChangeNotify(ctx, func(x string, e fs.EntryType) { fs.Debugf(nil, "doChangeNotify(%q, %+v)", x, e) if strings.HasPrefix(x, file1.Path[:5]) || strings.HasPrefix(x, file2.Path[:5]) { fs.Debugf(nil, "Ignoring notify for file1 or file2: %q, %v", x, e) return } if e == fs.EntryDirectory { dirChanges[x] = struct{}{} } else if e == fs.EntryObject { objChanges[x] = struct{}{} } }, pollInterval) defer func() { close(pollInterval) }() pollInterval <- time.Second var dirs []string for _, idx := range []int{1, 3, 2} { dir := fmt.Sprintf("dir/subdir%d", idx) err = operations.Mkdir(ctx, remote, dir) require.NoError(t, err) dirs = append(dirs, dir) } var objs []fs.Object for _, idx := range []int{2, 4, 3} { file := fstest.Item{ ModTime: time.Now(), Path: fmt.Sprintf("dir/file%d", idx), } _, o := testPut(ctx, t, remote, &file) objs = append(objs, o) } // Looks for each item in wants in changes - // if they are all found it returns true contains := func(changes map[string]struct{}, wants []string) bool { for _, want := range wants { _, ok := changes[want] if !ok { return false } } return true } // Wait a little while for the changes to come in wantDirChanges :=
[]string{"dir/subdir1", "dir/subdir3", "dir/subdir2"} wantObjChanges := []string{"dir/file2", "dir/file4", "dir/file3"} ok := false for tries := 1; tries < 10; tries++ { ok = contains(dirChanges, wantDirChanges) && contains(objChanges, wantObjChanges) if ok { break } t.Logf("Try %d/10 waiting for dirChanges and objChanges", tries) time.Sleep(3 * time.Second) } if !ok { t.Errorf("%+v does not contain %+v or \n%+v does not contain %+v", dirChanges, wantDirChanges, objChanges, wantObjChanges) } // tidy up afterwards for _, o := range objs { assert.NoError(t, o.Remove(ctx)) } dirs = append(dirs, "dir") for _, dir := range dirs { assert.NoError(t, remote.Rmdir(ctx, dir)) } }) // TestFsPut files writes file1, file2 and tests an update // // Tests that require file1, file2 are within this t.Run("FsPutFiles", func(t *testing.T) { skipIfNotOk(t) file1Contents, _ = testPut(ctx, t, remote, &file1) /* file2Contents = */ testPut(ctx, t, remote, &file2) file1Contents, _ = testPut(ctx, t, remote, &file1) // Note that the next test will check there are no duplicated file names // TestFsListDirFile2 tests the files are correctly uploaded by doing // Depth 1 directory listings TestFsListDirFile2 := func(t *testing.T) { skipIfNotOk(t) list := func(dir string, expectedDirNames, expectedObjNames []string) { var objNames, dirNames []string for i := 1; i <= *fstest.ListRetries; i++ { objs, dirs, err := walk.GetAll(ctx, remote, dir, true, 1) if errors.Cause(err) == fs.ErrorDirNotFound { objs, dirs, err = walk.GetAll(ctx, remote, dir, true, 1) } require.NoError(t, err) objNames = objsToNames(objs) dirNames = dirsToNames(dirs) if len(objNames) >= len(expectedObjNames) && len(dirNames) >= len(expectedDirNames) { break } t.Logf("Sleeping for 1 second for TestFsListDirFile2 eventual consistency: %d/%d", i, *fstest.ListRetries) time.Sleep(1 * time.Second) } assert.Equal(t, expectedDirNames, dirNames) assert.Equal(t, expectedObjNames, objNames) } dir := file2.Path deepest := true for dir != "" { expectedObjNames := []string{} expectedDirNames := []string{} child := dir dir = path.Dir(dir) if dir == "." { dir = "" expectedObjNames = append(expectedObjNames, file1.Path) } if deepest { expectedObjNames = append(expectedObjNames, file2.Path) deepest = false } else { expectedDirNames = append(expectedDirNames, child) } list(dir, expectedDirNames, expectedObjNames) } } t.Run("FsListDirFile2", TestFsListDirFile2) // TestFsListRDirFile2 tests the files are correctly uploaded by doing // Depth 1 directory listings using ListR t.Run("FsListRDirFile2", func(t *testing.T) { defer skipIfNotListR(t)() TestFsListDirFile2(t) }) // Test the files are all there with walk.ListR recursive listings t.Run("FsListR", func(t *testing.T) { skipIfNotOk(t) objs, dirs, err := walk.GetAll(ctx, remote, "", true, -1) require.NoError(t, err) assert.Equal(t, []string{ "hello? sausage", "hello? sausage/êé", "hello? sausage/êé/Hello, 世界", "hello? sausage/êé/Hello, 世界/ \" ' @ < > & ? + ≠", }, dirsToNames(dirs)) assert.Equal(t, []string{ "file name.txt", "hello? sausage/êé/Hello, 世界/ \" ' @ < > & ? + ≠/z.txt", }, objsToNames(objs)) }) // Test the files are all there with // walk.ListR recursive listings on a sub dir t.Run("FsListRSubdir", func(t *testing.T) { skipIfNotOk(t) objs, dirs, err := walk.GetAll(ctx, remote, path.Dir(path.Dir(path.Dir(path.Dir(file2.Path)))), true, -1) require.NoError(t, err) assert.Equal(t, []string{ "hello? sausage/êé", "hello? sausage/êé/Hello, 世界", "hello? sausage/êé/Hello, 世界/ \" ' @ < > & ? 
+ ≠", }, dirsToNames(dirs)) assert.Equal(t, []string{ "hello? sausage/êé/Hello, 世界/ \" ' @ < > & ? + ≠/z.txt", }, objsToNames(objs)) }) // TestFsListDirRoot tests that DirList works in the root TestFsListDirRoot := func(t *testing.T) { skipIfNotOk(t) rootRemote, err := fs.NewFs(remoteName) require.NoError(t, err) _, dirs, err := walk.GetAll(ctx, rootRemote, "", true, 1) require.NoError(t, err) assert.Contains(t, dirsToNames(dirs), subRemoteLeaf, "Remote leaf not found") } t.Run("FsListDirRoot", TestFsListDirRoot) // TestFsListRDirRoot tests that DirList works in the root using ListR t.Run("FsListRDirRoot", func(t *testing.T) { defer skipIfNotListR(t)() TestFsListDirRoot(t) }) // TestFsListSubdir tests List works for a subdirectory TestFsListSubdir := func(t *testing.T) { skipIfNotOk(t) fileName := file2.Path var err error var objs []fs.Object var dirs []fs.Directory for i := 0; i < 2; i++ { dir, _ := path.Split(fileName) dir = dir[:len(dir)-1] objs, dirs, err = walk.GetAll(ctx, remote, dir, true, -1) } require.NoError(t, err) require.Len(t, objs, 1) assert.Equal(t, fileName, objs[0].Remote()) require.Len(t, dirs, 0) } t.Run("FsListSubdir", TestFsListSubdir) // TestFsListRSubdir tests List works for a subdirectory using ListR t.Run("FsListRSubdir", func(t *testing.T) { defer skipIfNotListR(t)() TestFsListSubdir(t) }) // TestFsListLevel2 tests List works for 2 levels TestFsListLevel2 := func(t *testing.T) { skipIfNotOk(t) objs, dirs, err := walk.GetAll(ctx, remote, "", true, 2) if err == fs.ErrorLevelNotSupported { return } require.NoError(t, err) assert.Equal(t, []string{file1.Path}, objsToNames(objs)) assert.Equal(t, []string{"hello? sausage", "hello? sausage/êé"}, dirsToNames(dirs)) } t.Run("FsListLevel2", TestFsListLevel2) // TestFsListRLevel2 tests List works for 2 levels using ListR t.Run("FsListRLevel2", func(t *testing.T) { defer skipIfNotListR(t)() TestFsListLevel2(t) }) // TestFsListFile1 tests file present t.Run("FsListFile1", func(t *testing.T) { skipIfNotOk(t) fstest.CheckListing(t, remote, []fstest.Item{file1, file2}) }) // TestFsNewObject tests NewObject t.Run("FsNewObject", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) file1.Check(t, obj, remote.Precision()) }) // TestFsListFile1and2 tests two files present t.Run("FsListFile1and2", func(t *testing.T) { skipIfNotOk(t) fstest.CheckListing(t, remote, []fstest.Item{file1, file2}) }) // TestFsNewObjectDir tests NewObject on a directory which should produce an error t.Run("FsNewObjectDir", func(t *testing.T) { skipIfNotOk(t) dir := path.Dir(file2.Path) obj, err := remote.NewObject(ctx, dir) assert.Nil(t, obj) assert.NotNil(t, err) }) // TestFsPurge tests Purge t.Run("FsPurge", func(t *testing.T) { skipIfNotOk(t) // Check have Purge doPurge := remote.Features().Purge if doPurge == nil { t.Skip("FS has no Purge interface") } // put up a file to purge fileToPurge := fstest.Item{ ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"), Path: "dirToPurge/fileToPurge.txt", } _, _ = testPut(ctx, t, remote, &fileToPurge) fstest.CheckListingWithPrecision(t, remote, []fstest.Item{file1, file2, fileToPurge}, []string{ "dirToPurge", "hello? sausage", "hello? sausage/êé", "hello? sausage/êé/Hello, 世界", "hello? sausage/êé/Hello, 世界/ \" ' @ < > & ? + ≠", }, fs.GetModifyWindow(remote)) // Now purge it err = operations.Purge(ctx, remote, "dirToPurge") require.NoError(t, err) fstest.CheckListingWithPrecision(t, remote, []fstest.Item{file1, file2}, []string{ "hello? sausage", "hello? sausage/êé", "hello? 
sausage/êé/Hello, 世界", "hello? sausage/êé/Hello, 世界/ \" ' @ < > & ? + ≠", }, fs.GetModifyWindow(remote)) }) // TestFsCopy tests Copy t.Run("FsCopy", func(t *testing.T) { skipIfNotOk(t) // Check have Copy doCopy := remote.Features().Copy if doCopy == nil { t.Skip("FS has no Copier interface") } // Test with file2 so have + and ' ' in file name var file2Copy = file2 file2Copy.Path += "-copy" // do the copy src := findObject(ctx, t, remote, file2.Path) dst, err := doCopy(ctx, src, file2Copy.Path) if err == fs.ErrorCantCopy { t.Skip("FS can't copy") } require.NoError(t, err, fmt.Sprintf("Error: %#v", err)) // check file exists in new listing fstest.CheckListing(t, remote, []fstest.Item{file1, file2, file2Copy}) // Check dst lightly - list above has checked ModTime/Hashes assert.Equal(t, file2Copy.Path, dst.Remote()) // Delete copy err = dst.Remove(ctx) require.NoError(t, err) }) // TestFsMove tests Move t.Run("FsMove", func(t *testing.T) { skipIfNotOk(t) // Check have Move doMove := remote.Features().Move if doMove == nil { t.Skip("FS has no Mover interface") } // state of files now: // 1: file name.txt // 2: hello sausage?/../z.txt var file1Move = file1 var file2Move = file2 // check happy path, i.e. no naming conflicts when rename and move are two // separate operations file2Move.Path = "other.txt" src := findObject(ctx, t, remote, file2.Path) dst, err := doMove(ctx, src, file2Move.Path) if err == fs.ErrorCantMove { t.Skip("FS can't move") } require.NoError(t, err) // check file exists in new listing fstest.CheckListing(t, remote, []fstest.Item{file1, file2Move}) // Check dst lightly - list above has checked ModTime/Hashes assert.Equal(t, file2Move.Path, dst.Remote()) // 1: file name.txt // 2: other.txt // Check conflict on "rename, then move" file1Move.Path = "moveTest/other.txt" src = findObject(ctx, t, remote, file1.Path) _, err = doMove(ctx, src, file1Move.Path) require.NoError(t, err) fstest.CheckListing(t, remote, []fstest.Item{file1Move, file2Move}) // 1: moveTest/other.txt // 2: other.txt // Check conflict on "move, then rename" src = findObject(ctx, t, remote, file1Move.Path) _, err = doMove(ctx, src, file1.Path) require.NoError(t, err) fstest.CheckListing(t, remote, []fstest.Item{file1, file2Move}) // 1: file name.txt // 2: other.txt src = findObject(ctx, t, remote, file2Move.Path) _, err = doMove(ctx, src, file2.Path) require.NoError(t, err) fstest.CheckListing(t, remote, []fstest.Item{file1, file2}) // 1: file name.txt // 2: hello sausage?/../z.txt // Tidy up moveTest directory require.NoError(t, remote.Rmdir(ctx, "moveTest")) }) // Move src to this remote using server side move operations. 
// // Will only be called if src.Fs().Name() == f.Name() // // If it isn't possible then return fs.ErrorCantDirMove // // If destination exists then return fs.ErrorDirExists // TestFsDirMove tests DirMove // // go test -v -run 'TestIntegration/Test(Setup|Init|FsMkdir|FsPutFile1|FsPutFile2|FsUpdateFile1|FsDirMove)$' t.Run("FsDirMove", func(t *testing.T) { skipIfNotOk(t) // Check have DirMove doDirMove := remote.Features().DirMove if doDirMove == nil { t.Skip("FS has no DirMover interface") } // Check it can't move onto itself err := doDirMove(ctx, remote, "", "") require.Equal(t, fs.ErrorDirExists, err) // new remote newRemote, _, removeNewRemote, err := fstest.RandomRemote() require.NoError(t, err) defer removeNewRemote() const newName = "new_name/sub_new_name" // try the move err = newRemote.Features().DirMove(ctx, remote, "", newName) require.NoError(t, err) // check remotes // remote should not exist here _, err = remote.List(ctx, "") assert.Equal(t, fs.ErrorDirNotFound, errors.Cause(err)) //fstest.CheckListingWithPrecision(t, remote, []fstest.Item{}, []string{}, remote.Precision()) file1Copy := file1 file1Copy.Path = path.Join(newName, file1.Path) file2Copy := file2 file2Copy.Path = path.Join(newName, file2.Path) fstest.CheckListingWithPrecision(t, newRemote, []fstest.Item{file2Copy, file1Copy}, []string{ "new_name", "new_name/sub_new_name", "new_name/sub_new_name/hello? sausage", "new_name/sub_new_name/hello? sausage/êé", "new_name/sub_new_name/hello? sausage/êé/Hello, 世界", "new_name/sub_new_name/hello? sausage/êé/Hello, 世界/ \" ' @ < > & ? + ≠", }, newRemote.Precision()) // move it back err = doDirMove(ctx, newRemote, newName, "") require.NoError(t, err) // check remotes fstest.CheckListingWithPrecision(t, remote, []fstest.Item{file2, file1}, []string{ "hello? sausage", "hello? sausage/êé", "hello? sausage/êé/Hello, 世界", "hello? sausage/êé/Hello, 世界/ \" ' @ < > & ?
+ ≠", }, remote.Precision()) fstest.CheckListingWithPrecision(t, newRemote, []fstest.Item{}, []string{ "new_name", }, newRemote.Precision()) }) // TestFsRmdirFull tests removing a non empty directory t.Run("FsRmdirFull", func(t *testing.T) { skipIfNotOk(t) if isBucketBasedButNotRoot(remote) { t.Skip("Skipping test as non root bucket based remote") } err := remote.Rmdir(ctx, "") require.Error(t, err, "Expecting error on RMdir on non empty remote") }) // TestFsPrecision tests the Precision of the Fs t.Run("FsPrecision", func(t *testing.T) { skipIfNotOk(t) precision := remote.Precision() if precision == fs.ModTimeNotSupported { return } if precision > time.Second || precision < 0 { t.Fatalf("Precision out of range %v", precision) } // FIXME check expected precision }) // TestObjectString tests the Object String method t.Run("ObjectString", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) assert.Equal(t, file1.Path, obj.String()) if opt.NilObject != nil { assert.Equal(t, "", opt.NilObject.String()) } }) // TestObjectFs tests the object can be found t.Run("ObjectFs", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) // If this is set we don't do the direct comparison of // the Fs from the object as it may be different if opt.SkipFsMatch { return } testRemote := remote if obj.Fs() != testRemote { // Check to see if this wraps something else if doUnWrap := testRemote.Features().UnWrap; doUnWrap != nil { testRemote = doUnWrap() } } assert.Equal(t, obj.Fs(), testRemote) }) // TestObjectRemote tests the Remote is correct t.Run("ObjectRemote", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) assert.Equal(t, file1.Path, obj.Remote()) }) // TestObjectHashes checks all the hashes the object supports t.Run("ObjectHashes", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) file1.CheckHashes(t, obj) }) // TestObjectModTime tests the ModTime of the object is correct TestObjectModTime := func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) file1.CheckModTime(t, obj, obj.ModTime(ctx), remote.Precision()) } t.Run("ObjectModTime", TestObjectModTime) // TestObjectMimeType tests the MimeType of the object is correct t.Run("ObjectMimeType", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) do, ok := obj.(fs.MimeTyper) if !ok { t.Skip("MimeType method not supported") } mimeType := do.MimeType(ctx) if strings.ContainsRune(mimeType, ';') { assert.Equal(t, "text/plain; charset=utf-8", mimeType) } else { assert.Equal(t, "text/plain", mimeType) } }) // TestObjectSetModTime tests that SetModTime works t.Run("ObjectSetModTime", func(t *testing.T) { skipIfNotOk(t) newModTime := fstest.Time("2011-12-13T14:15:16.999999999Z") obj := findObject(ctx, t, remote, file1.Path) err := obj.SetModTime(ctx, newModTime) if err == fs.ErrorCantSetModTime || err == fs.ErrorCantSetModTimeWithoutDelete { t.Log(err) return } require.NoError(t, err) file1.ModTime = newModTime file1.CheckModTime(t, obj, obj.ModTime(ctx), remote.Precision()) // And make a new object and read it from there too TestObjectModTime(t) }) // TestObjectSize tests that Size works t.Run("ObjectSize", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) assert.Equal(t, file1.Size, obj.Size()) }) // TestObjectOpen tests that Open works t.Run("ObjectOpen", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) assert.Equal(t, 
file1Contents, readObject(ctx, t, obj, -1), "contents of file1 differ") }) // TestObjectOpenSeek tests that Open works with SeekOption t.Run("ObjectOpenSeek", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) assert.Equal(t, file1Contents[50:], readObject(ctx, t, obj, -1, &fs.SeekOption{Offset: 50}), "contents of file1 differ after seek") }) // TestObjectOpenRange tests that Open works with RangeOption // // go test -v -run 'TestIntegration/Test(Setup|Init|FsMkdir|FsPutFile1|FsPutFile2|FsUpdateFile1|ObjectOpenRange)$' t.Run("ObjectOpenRange", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) for _, test := range []struct { ro fs.RangeOption wantStart, wantEnd int }{ {fs.RangeOption{Start: 5, End: 15}, 5, 16}, {fs.RangeOption{Start: 80, End: -1}, 80, 100}, {fs.RangeOption{Start: 81, End: 100000}, 81, 100}, {fs.RangeOption{Start: -1, End: 20}, 80, 100}, // if start is omitted this means get the final bytes // {fs.RangeOption{Start: -1, End: -1}, 0, 100}, - this seems to work but the RFC doesn't define it } { got := readObject(ctx, t, obj, -1, &test.ro) foundAt := strings.Index(file1Contents, got) help := fmt.Sprintf("%#v failed want [%d:%d] got [%d:%d]", test.ro, test.wantStart, test.wantEnd, foundAt, foundAt+len(got)) assert.Equal(t, file1Contents[test.wantStart:test.wantEnd], got, help) } }) // TestObjectPartialRead tests that reading only part of the object does the correct thing t.Run("ObjectPartialRead", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) assert.Equal(t, file1Contents[:50], readObject(ctx, t, obj, 50), "contents of file1 differ after limited read") }) // TestObjectUpdate tests that Update works t.Run("ObjectUpdate", func(t *testing.T) { skipIfNotOk(t) contents := random.String(200) buf := bytes.NewBufferString(contents) hash := hash.NewMultiHasher() in := io.TeeReader(buf, hash) file1.Size = int64(buf.Len()) obj := findObject(ctx, t, remote, file1.Path) obji := object.NewStaticObjectInfo(file1.Path, file1.ModTime, int64(len(contents)), true, nil, obj.Fs()) err := obj.Update(ctx, in, obji) require.NoError(t, err) file1.Hashes = hash.Sums() // check the object has been updated file1.Check(t, obj, remote.Precision()) // Re-read the object and check again obj = findObject(ctx, t, remote, file1.Path) file1.Check(t, obj, remote.Precision()) // check contents correct assert.Equal(t, contents, readObject(ctx, t, obj, -1), "contents of updated file1 differ") file1Contents = contents }) // TestObjectStorable tests that Storable works t.Run("ObjectStorable", func(t *testing.T) { skipIfNotOk(t) obj := findObject(ctx, t, remote, file1.Path) require.True(t, obj.Storable(), "Expecting object to be storable") }) // TestFsIsFile tests that an error is returned along with a valid fs // which points to the parent directory.
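//
// A minimal sketch of the behaviour under test (the remote name and
// path here are illustrative, not taken from the suite):
//
//	fileRemote, err := fs.NewFs("remote:dir/file.txt")
//	// err == fs.ErrorIsFile and fileRemote is rooted at "remote:dir",
//	// so listing fileRemote shows "file.txt" alongside its siblings.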
t.Run("FsIsFile", func(t *testing.T) { skipIfNotOk(t) remoteName := subRemoteName + "/" + file2.Path file2Copy := file2 file2Copy.Path = "z.txt" fileRemote, err := fs.NewFs(remoteName) require.NotNil(t, fileRemote) assert.Equal(t, fs.ErrorIsFile, err) if strings.HasPrefix(remoteName, "TestChunker") && strings.Contains(remoteName, "Nometa") { // TODO fix chunker and remove this bypass t.Logf("Skip listing check -- chunker can't yet handle this tricky case") return } fstest.CheckListing(t, fileRemote, []fstest.Item{file2Copy}) }) // TestFsIsFileNotFound tests that an error is not returned if no object is found t.Run("FsIsFileNotFound", func(t *testing.T) { skipIfNotOk(t) remoteName := subRemoteName + "/not found.txt" fileRemote, err := fs.NewFs(remoteName) require.NoError(t, err) fstest.CheckListing(t, fileRemote, []fstest.Item{}) }) // Test that things work from the root t.Run("FromRoot", func(t *testing.T) { if features := remote.Features(); features.BucketBased && !features.BucketBasedRootOK { t.Skip("Can't list from root on this remote") } configName, configLeaf, err := fspath.Parse(subRemoteName) require.NoError(t, err) if configName == "" { configName, configLeaf = path.Split(subRemoteName) } else { configName += ":" } t.Logf("Opening root remote %q path %q from %q", configName, configLeaf, subRemoteName) rootRemote, err := fs.NewFs(configName) require.NoError(t, err) file1Root := file1 file1Root.Path = path.Join(configLeaf, file1Root.Path) file2Root := file2 file2Root.Path = path.Join(configLeaf, file2Root.Path) var dirs []string dir := file2.Path for { dir = path.Dir(dir) if dir == "" || dir == "." || dir == "/" { break } dirs = append(dirs, path.Join(configLeaf, dir)) } // Check that we can see file1 and file2 from the root t.Run("List", func(t *testing.T) { fstest.CheckListingWithRoot(t, rootRemote, configLeaf, []fstest.Item{file1Root, file2Root}, dirs, rootRemote.Precision()) }) // Check that that listing the entries is OK t.Run("ListEntries", func(t *testing.T) { entries, err := rootRemote.List(context.Background(), configLeaf) require.NoError(t, err) fstest.CompareItems(t, entries, []fstest.Item{file1Root}, dirs[len(dirs)-1:], rootRemote.Precision(), "ListEntries") }) // List the root with ListR t.Run("ListR", func(t *testing.T) { doListR := rootRemote.Features().ListR if doListR == nil { t.Skip("FS has no ListR interface") } file1Found, file2Found := false, false stopTime := time.Now().Add(10 * time.Second) errTooMany := errors.New("too many files") errFound := errors.New("found") err := doListR(context.Background(), "", func(entries fs.DirEntries) error { for _, entry := range entries { remote := entry.Remote() if remote == file1Root.Path { file1Found = true } if remote == file2Root.Path { file2Found = true } if file1Found && file2Found { return errFound } } if time.Now().After(stopTime) { return errTooMany } return nil }) if err != errFound && err != errTooMany { assert.NoError(t, err) } if err != errTooMany { assert.True(t, file1Found, "file1Root not found") assert.True(t, file2Found, "file2Root not found") } else { t.Logf("Too many files to list - giving up") } }) // Create a new file t.Run("Put", func(t *testing.T) { file3Root := fstest.Item{ ModTime: time.Now(), Path: path.Join(configLeaf, "created from root.txt"), } _, file3Obj := testPut(ctx, t, rootRemote, &file3Root) fstest.CheckListingWithRoot(t, rootRemote, configLeaf, []fstest.Item{file1Root, file2Root, file3Root}, nil, rootRemote.Precision()) // And then remove it t.Run("Remove", func(t *testing.T) { 
require.NoError(t, file3Obj.Remove(context.Background())) fstest.CheckListingWithRoot(t, rootRemote, configLeaf, []fstest.Item{file1Root, file2Root}, nil, rootRemote.Precision()) }) }) }) // TestPublicLink tests creation of sharable, public links // go test -v -run 'TestIntegration/Test(Setup|Init|FsMkdir|FsPutFile1|FsPutFile2|FsUpdateFile1|PublicLink)$' t.Run("PublicLink", func(t *testing.T) { skipIfNotOk(t) doPublicLink := remote.Features().PublicLink if doPublicLink == nil { t.Skip("FS has no PublicLinker interface") } expiry := fs.Duration(60 * time.Second) // if object not found link, err := doPublicLink(ctx, file1.Path+"_does_not_exist", expiry, false) require.Error(t, err, "Expected to get error when file doesn't exist") require.Equal(t, "", link, "Expected link to be empty on error") // sharing file for the first time link1, err := doPublicLink(ctx, file1.Path, expiry, false) require.NoError(t, err) require.NotEqual(t, "", link1, "Link should not be empty") link2, err := doPublicLink(ctx, file2.Path, expiry, false) require.NoError(t, err) require.NotEqual(t, "", link2, "Link should not be empty") require.NotEqual(t, link1, link2, "Links to different files should differ") // sharing file for the 2nd time link1, err = doPublicLink(ctx, file1.Path, expiry, false) require.NoError(t, err) require.NotEqual(t, "", link1, "Link should not be empty") // sharing directory for the first time path := path.Dir(file2.Path) link3, err := doPublicLink(ctx, path, expiry, false) if err != nil && (errors.Cause(err) == fs.ErrorCantShareDirectories || errors.Cause(err) == fs.ErrorObjectNotFound) { t.Log("skipping directory tests as not supported on this backend") } else { require.NoError(t, err) require.NotEqual(t, "", link3, "Link should not be empty") // sharing directory for the second time link3, err = doPublicLink(ctx, path, expiry, false) require.NoError(t, err) require.NotEqual(t, "", link3, "Link should not be empty") // sharing the "root" directory in a subremote subRemote, _, removeSubRemote, err := fstest.RandomRemote() require.NoError(t, err) defer removeSubRemote() // ensure sub remote isn't empty buf := bytes.NewBufferString("somecontent") obji := object.NewStaticObjectInfo("somefile", time.Now(), int64(buf.Len()), true, nil, nil) _, err = subRemote.Put(ctx, buf, obji) require.NoError(t, err) link4, err := subRemote.Features().PublicLink(ctx, "", expiry, false) require.NoError(t, err, "Sharing root in a sub-remote should work") require.NotEqual(t, "", link4, "Link should not be empty") } }) // TestSetTier tests SetTier and GetTier functionality t.Run("SetTier", func(t *testing.T) { skipIfNotSetTier(t) obj := findObject(ctx, t, remote, file1.Path) setter, ok := obj.(fs.SetTierer) assert.True(t, ok) getter, ok := obj.(fs.GetTierer) assert.True(t, ok) // If interfaces are supported TiersToTest should contain // at least one entry supportedTiers := opt.TiersToTest assert.NotEmpty(t, supportedTiers) // test set tier changes on supported storage classes or tiers for _, tier := range supportedTiers { err := setter.SetTier(tier) assert.Nil(t, err) got := getter.GetTier() assert.Equal(t, tier, got) } }) // Check to see if Fs that wrap other Objects implement all the optional methods t.Run("ObjectCheckWrap", func(t *testing.T) { skipIfNotOk(t) if opt.SkipObjectCheckWrap { t.Skip("Skipping FsCheckWrap on this Fs") } ft := new(fs.Features).Fill(remote) if ft.UnWrap == nil { t.Skip("Not a wrapping Fs") } obj := findObject(ctx, t, remote, file1.Path) _, unsupported :=
fs.ObjectOptionalInterfaces(obj) for _, name := range unsupported { if !stringsContains(name, opt.UnimplementableObjectMethods) { t.Errorf("Missing Object wrapper for %s", name) } } }) // TestObjectRemove tests Remove t.Run("ObjectRemove", func(t *testing.T) { skipIfNotOk(t) // remove file1 obj := findObject(ctx, t, remote, file1.Path) err := obj.Remove(ctx) require.NoError(t, err) // check listing without modtime as TestPublicLink may change the modtime fstest.CheckListingWithPrecision(t, remote, []fstest.Item{file2}, nil, fs.ModTimeNotSupported) }) // TestAbout tests the About optional interface t.Run("ObjectAbout", func(t *testing.T) { skipIfNotOk(t) // Check have About doAbout := remote.Features().About if doAbout == nil { t.Skip("FS does not support About") } // Can't really check the output much! usage, err := doAbout(context.Background()) require.NoError(t, err) require.NotNil(t, usage) assert.NotEqual(t, int64(0), usage.Total) }) // Just file2 remains for Purge to clean up // TestFsPutStream tests uploading files when size isn't known in advance. // This may trigger large buffer allocation in some backends, keep it // close to the end of suite. (See fs/operations/xtra_operations_test.go) t.Run("FsPutStream", func(t *testing.T) { skipIfNotOk(t) if remote.Features().PutStream == nil { t.Skip("FS has no PutStream interface") } for _, contentSize := range []int{0, 100} { t.Run(strconv.Itoa(contentSize), func(t *testing.T) { file := fstest.Item{ ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"), Path: "piped data.txt", Size: -1, // use unknown size during upload } var ( err error obj fs.Object uploadHash *hash.MultiHasher ) retry(t, "PutStream", func() error { contents := random.String(contentSize) buf := bytes.NewBufferString(contents) uploadHash = hash.NewMultiHasher() in := io.TeeReader(buf, uploadHash) file.Size = -1 obji := object.NewStaticObjectInfo(file.Path, file.ModTime, file.Size, true, nil, nil) obj, err = remote.Features().PutStream(ctx, in, obji) return err }) file.Hashes = uploadHash.Sums() file.Size = int64(contentSize) // use correct size when checking file.Check(t, obj, remote.Precision()) // Re-read the object and check again obj = findObject(ctx, t, remote, file.Path) file.Check(t, obj, remote.Precision()) require.NoError(t, obj.Remove(ctx)) }) } }) // TestInternal calls InternalTest() on the Fs t.Run("Internal", func(t *testing.T) { skipIfNotOk(t) if it, ok := remote.(InternalTester); ok { it.InternalTest(t) } else { t.Skipf("%T does not implement InternalTester", remote) } }) }) // TestFsPutChunked may trigger large buffer allocation with // some backends (see fs/operations/xtra_operations_test.go), // keep it closer to the end of suite. 
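//
// Backends opt in by implementing the SetUploadChunkSizer interface
// used below. A sketch of such an implementation (f.opt is
// illustrative, not any real backend's field):
//
//	func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
//		old := f.opt.ChunkSize
//		f.opt.ChunkSize = cs
//		return old, nil // return the old value so the test can restore it
//	}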
t.Run("FsPutChunked", func(t *testing.T) { skipIfNotOk(t) if testing.Short() { t.Skip("not running with -short") } setUploadChunkSizer, _ := remote.(SetUploadChunkSizer) if setUploadChunkSizer == nil { t.Skipf("%T does not implement SetUploadChunkSizer", remote) } setUploadCutoffer, _ := remote.(SetUploadCutoffer) minChunkSize := opt.ChunkedUpload.MinChunkSize if minChunkSize < 100 { minChunkSize = 100 } if opt.ChunkedUpload.CeilChunkSize != nil { minChunkSize = opt.ChunkedUpload.CeilChunkSize(minChunkSize) } maxChunkSize := 2 * fs.MebiByte if maxChunkSize < 2*minChunkSize { maxChunkSize = 2 * minChunkSize } if opt.ChunkedUpload.MaxChunkSize > 0 && maxChunkSize > opt.ChunkedUpload.MaxChunkSize { maxChunkSize = opt.ChunkedUpload.MaxChunkSize } if opt.ChunkedUpload.CeilChunkSize != nil { maxChunkSize = opt.ChunkedUpload.CeilChunkSize(maxChunkSize) } next := func(f func(fs.SizeSuffix) fs.SizeSuffix) fs.SizeSuffix { s := f(minChunkSize) if s > maxChunkSize { s = minChunkSize } return s } chunkSizes := fs.SizeSuffixList{ minChunkSize, minChunkSize + (maxChunkSize-minChunkSize)/3, next(NextPowerOfTwo), next(NextMultipleOf(100000)), next(NextMultipleOf(100001)), maxChunkSize, } chunkSizes.Sort() // Set the minimum chunk size, upload cutoff and reset it at the end oldChunkSize, err := setUploadChunkSizer.SetUploadChunkSize(minChunkSize) require.NoError(t, err) var oldUploadCutoff fs.SizeSuffix if setUploadCutoffer != nil { oldUploadCutoff, err = setUploadCutoffer.SetUploadCutoff(minChunkSize) require.NoError(t, err) } defer func() { _, err := setUploadChunkSizer.SetUploadChunkSize(oldChunkSize) assert.NoError(t, err) if setUploadCutoffer != nil { _, err := setUploadCutoffer.SetUploadCutoff(oldUploadCutoff) assert.NoError(t, err) } }() var lastCs fs.SizeSuffix for _, cs := range chunkSizes { if cs <= lastCs { continue } if opt.ChunkedUpload.CeilChunkSize != nil { cs = opt.ChunkedUpload.CeilChunkSize(cs) } lastCs = cs t.Run(cs.String(), func(t *testing.T) { _, err := setUploadChunkSizer.SetUploadChunkSize(cs) require.NoError(t, err) if setUploadCutoffer != nil { _, err = setUploadCutoffer.SetUploadCutoff(cs) require.NoError(t, err) } var testChunks []fs.SizeSuffix if opt.ChunkedUpload.NeedMultipleChunks { // If NeedMultipleChunks is set then test with > cs testChunks = []fs.SizeSuffix{cs + 1, 2 * cs, 2*cs + 1} } else { testChunks = []fs.SizeSuffix{cs - 1, cs, 2*cs + 1} } for _, fileSize := range testChunks { t.Run(fmt.Sprintf("%d", fileSize), func(t *testing.T) { TestPutLarge(ctx, t, remote, &fstest.Item{ ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"), Path: fmt.Sprintf("chunked-%s-%s.bin", cs.String(), fileSize.String()), Size: int64(fileSize), }) }) } }) } }) // TestFsUploadUnknownSize ensures Fs.Put() and Object.Update() don't panic when // src.Size() == -1 // // This may trigger large buffer allocation in some backends, keep it // closer to the suite end. 
(See fs/operations/xtra_operations_test.go) t.Run("FsUploadUnknownSize", func(t *testing.T) { skipIfNotOk(t) t.Run("FsPutUnknownSize", func(t *testing.T) { defer func() { assert.Nil(t, recover(), "Fs.Put() should not panic when src.Size() == -1") }() contents := random.String(100) in := bytes.NewBufferString(contents) obji := object.NewStaticObjectInfo("unknown-size-put.txt", fstest.Time("2002-02-03T04:05:06.499999999Z"), -1, true, nil, nil) obj, err := remote.Put(ctx, in, obji) if err == nil { require.NoError(t, obj.Remove(ctx), "successfully uploaded unknown-sized file but failed to remove") } // if err != nil: it's okay as long as no panic }) t.Run("FsUpdateUnknownSize", func(t *testing.T) { unknownSizeUpdateFile := fstest.Item{ ModTime: fstest.Time("2002-02-03T04:05:06.499999999Z"), Path: "unknown-size-update.txt", } testPut(ctx, t, remote, &unknownSizeUpdateFile) defer func() { assert.Nil(t, recover(), "Object.Update() should not panic when src.Size() == -1") }() newContents := random.String(200) in := bytes.NewBufferString(newContents) obj := findObject(ctx, t, remote, unknownSizeUpdateFile.Path) obji := object.NewStaticObjectInfo(unknownSizeUpdateFile.Path, unknownSizeUpdateFile.ModTime, -1, true, nil, obj.Fs()) err := obj.Update(ctx, in, obji) if err == nil { require.NoError(t, obj.Remove(ctx), "successfully updated object with unknown-sized source but failed to remove") } // if err != nil: it's okay as long as no panic }) }) // TestFsRootCollapse tests if the root of an fs "collapses" to the // absolute root. It creates a new fs of the same backend type with its // root set to a *non-existent* folder, and attempts to read the info of // an object in that folder, whose name is taken from a directory that // exists in the absolute root. // This test is added after // https://github.com/rclone/rclone/issues/3164. t.Run("FsRootCollapse", func(t *testing.T) { deepRemoteName := subRemoteName + "/deeper/nonexisting/directory" deepRemote, err := fs.NewFs(deepRemoteName) require.NoError(t, err) colonIndex := strings.IndexRune(deepRemoteName, ':') firstSlashIndex := strings.IndexRune(deepRemoteName, '/') firstDir := deepRemoteName[colonIndex+1 : firstSlashIndex] _, err = deepRemote.NewObject(ctx, firstDir) require.Equal(t, fs.ErrorObjectNotFound, err) // If err is not fs.ErrorObjectNotFound, it means the backend is // somehow confused about root and absolute root. 
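// Worked example (names are illustrative): with subRemoteName
// "remote:bucket/dir", deepRemoteName becomes
// "remote:bucket/dir/deeper/nonexisting/directory" and firstDir is
// "bucket", so a backend whose root wrongly collapsed to the absolute
// root could find the existing "bucket" instead of returning
// fs.ErrorObjectNotFound.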
}) // Purge the folder err = operations.Purge(ctx, remote, "") if errors.Cause(err) != fs.ErrorDirNotFound { require.NoError(t, err) } purged = true fstest.CheckListing(t, remote, []fstest.Item{}) // Check purging again if not bucket based if !isBucketBasedButNotRoot(remote) { err = operations.Purge(ctx, remote, "") assert.Error(t, err, "Expecting error on second purge") if errors.Cause(err) != fs.ErrorDirNotFound { t.Log("Warning: this should produce fs.ErrorDirNotFound") } } }) // Check directory is purged if !purged { _ = operations.Purge(ctx, remote, "") } // Remove the local directory so we don't clutter up /tmp if strings.HasPrefix(remoteName, "/") { t.Log("remoteName", remoteName) // Remove temp directory err := os.Remove(remoteName) require.NoError(t, err) } } rclone-1.53.3/fstest/mockdir/000077500000000000000000000000001375552240400160345ustar00rootroot00000000000000rclone-1.53.3/fstest/mockdir/dir.go000066400000000000000000000003761375552240400171470ustar00rootroot00000000000000// Package mockdir makes a mock fs.Directory object package mockdir import ( "time" "github.com/rclone/rclone/fs" ) // New makes a mock directory object with the name given func New(name string) fs.Directory { return fs.NewDir(name, time.Time{}) } rclone-1.53.3/fstest/mockfs/000077500000000000000000000000001375552240400156665ustar00rootroot00000000000000rclone-1.53.3/fstest/mockfs/mockfs.go000066400000000000000000000066121375552240400175040ustar00rootroot00000000000000package mockfs import ( "context" "errors" "fmt" "io" "path" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" ) // Fs is a minimal mock Fs type Fs struct { name string // the name of the remote root string // The root directory (OS path) features *fs.Features // optional features rootDir fs.DirEntries // directory listing of root hashes hash.Set // which hashes we support } // ErrNotImplemented is returned by unimplemented methods var ErrNotImplemented = errors.New("not implemented") // NewFs returns a new mock Fs func NewFs(name, root string) *Fs { f := &Fs{ name: name, root: root, } f.features = (&fs.Features{}).Fill(f) return f } // AddObject adds an Object for List to return // Only works for the root for the moment func (f *Fs) AddObject(o fs.Object) { f.rootDir = append(f.rootDir, o) // Make this object part of mockfs if possible do, ok := o.(interface{ SetFs(f fs.Fs) }) if ok { do.SetFs(f) } } // Name of the remote (as passed into NewFs) func (f *Fs) Name() string { return f.name } // Root of the remote (as passed into NewFs) func (f *Fs) Root() string { return f.root } // String returns a description of the FS func (f *Fs) String() string { return fmt.Sprintf("Mock file system at %s", f.root) } // Precision of the ModTimes in this Fs func (f *Fs) Precision() time.Duration { return time.Second } // Hashes returns the supported hash types of the filesystem func (f *Fs) Hashes() hash.Set { return f.hashes } // SetHashes sets the hashes that this supports func (f *Fs) SetHashes(hashes hash.Set) { f.hashes = hashes } // Features returns the optional features of this Fs func (f *Fs) Features() *fs.Features { return f.features } // List the objects and directories in dir into entries. The // entries can be returned in any order but should be for a // complete directory. // // dir should be "" to list the root, and should not have // trailing slashes. // // This should return ErrDirNotFound if the directory isn't // found.
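//
// A minimal usage sketch (the names are illustrative):
//
//	f := NewFs("mock", "root")
//	f.AddObject(mockobject.New("file.txt"))
//	entries, err := f.List(context.Background(), "")
//	// entries now holds the single mock object and err is nil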
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { if dir == "" { return f.rootDir, nil } return entries, fs.ErrorDirNotFound } // NewObject finds the Object at remote. If it can't be found // it returns the error ErrorObjectNotFound. func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { dirPath := path.Dir(remote) if dirPath == "" || dirPath == "." { for _, entry := range f.rootDir { if entry.Remote() == remote { return entry.(fs.Object), nil } } } return nil, fs.ErrorObjectNotFound } // Put in to the remote path with the modTime given of the given size // // May create the object even if it returns an error - if so // will return the object and the error, otherwise will return // nil and the error func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { return nil, ErrNotImplemented } // Mkdir makes the directory (container, bucket) // // Shouldn't return an error if it already exists func (f *Fs) Mkdir(ctx context.Context, dir string) error { return ErrNotImplemented } // Rmdir removes the directory (container, bucket) if empty // // Return an error if it doesn't exist or isn't empty func (f *Fs) Rmdir(ctx context.Context, dir string) error { return ErrNotImplemented } // Assert it is the correct type var _ fs.Fs = (*Fs)(nil) rclone-1.53.3/fstest/mockobject/000077500000000000000000000000001375552240400165245ustar00rootroot00000000000000rclone-1.53.3/fstest/mockobject/mockobject.go000066400000000000000000000127111375552240400211750ustar00rootroot00000000000000// Package mockobject provides a mock object which can be created from a string package mockobject import ( "bytes" "context" "errors" "fmt" "io" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/hash" ) var errNotImpl = errors.New("not implemented") // Object is a mock fs.Object useful for testing type Object string // New returns mock fs.Object useful for testing func New(name string) Object { return Object(name) } // String returns a description of the Object func (o Object) String() string { return string(o) } // Fs returns read only access to the Fs that this object is part of func (o Object) Fs() fs.Info { return nil } // Remote returns the remote path func (o Object) Remote() string { return string(o) } // Hash returns the selected checksum of the file // If no checksum is available it returns "" func (o Object) Hash(ctx context.Context, t hash.Type) (string, error) { return "", errNotImpl } // ModTime returns the modification date of the file // It should return a best guess if one isn't available func (o Object) ModTime(ctx context.Context) (t time.Time) { return t } // Size returns the size of the file func (o Object) Size() int64 { return 0 } // Storable says whether this object can be stored func (o Object) Storable() bool { return true } // SetModTime sets the metadata on the object to set the modification date func (o Object) SetModTime(ctx context.Context, t time.Time) error { return errNotImpl } // Open opens the file for read. 
Call Close() on the returned io.ReadCloser func (o Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { return nil, errNotImpl } // Update in to the object with the modTime given of the given size func (o Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { return errNotImpl } // Remove this object func (o Object) Remove(ctx context.Context) error { return errNotImpl } // SeekMode specifies the optional Seek interface for the ReadCloser returned by Open type SeekMode int const ( // SeekModeNone specifies no seek interface SeekModeNone SeekMode = iota // SeekModeRegular specifies the regular io.Seek interface SeekModeRegular // SeekModeRange specifies the fs.RangeSeek interface SeekModeRange ) // SeekModes contains all valid SeekMode's var SeekModes = []SeekMode{SeekModeNone, SeekModeRegular, SeekModeRange} // ContentMockObject mocks an fs.Object and has content type ContentMockObject struct { Object content []byte seekMode SeekMode f fs.Fs unknownSize bool } // WithContent returns an fs.Object with the given content. func (o Object) WithContent(content []byte, mode SeekMode) *ContentMockObject { return &ContentMockObject{ Object: o, content: content, seekMode: mode, } } // SetFs sets the return value of the Fs() call func (o *ContentMockObject) SetFs(f fs.Fs) { o.f = f } // SetUnknownSize makes the mock object return -1 for size if true func (o *ContentMockObject) SetUnknownSize(unknownSize bool) { o.unknownSize = unknownSize } // Fs returns read only access to the Fs that this object is part of // // This is nil unless SetFs has been called func (o *ContentMockObject) Fs() fs.Info { return o.f } // Open opens the file for read. Call Close() on the returned io.ReadCloser func (o *ContentMockObject) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { size := int64(len(o.content)) var offset, limit int64 = 0, -1 for _, option := range options { switch x := option.(type) { case *fs.SeekOption: offset = x.Offset case *fs.RangeOption: offset, limit = x.Decode(size) default: if option.Mandatory() { return nil, fmt.Errorf("Unsupported mandatory option: %v", option) } } } if limit == -1 || offset+limit > size { limit = size - offset } var r *bytes.Reader if o.seekMode == SeekModeNone { r = bytes.NewReader(o.content[offset : offset+limit]) } else { r = bytes.NewReader(o.content) _, err := r.Seek(offset, io.SeekStart) if err != nil { return nil, err } } switch o.seekMode { case SeekModeNone: return &readCloser{r}, nil case SeekModeRegular: return &readSeekCloser{r}, nil case SeekModeRange: return &readRangeSeekCloser{r}, nil default: return nil, errors.New(o.seekMode.String()) } } // Size returns the size of the file func (o *ContentMockObject) Size() int64 { if o.unknownSize { return -1 } return int64(len(o.content)) } // Hash returns the selected checksum of the file // If no checksum is available it returns "" func (o *ContentMockObject) Hash(ctx context.Context, t hash.Type) (string, error) { hasher, err := hash.NewMultiHasherTypes(hash.NewHashSet(t)) if err != nil { return "", err } _, err = hasher.Write(o.content) if err != nil { return "", err } return hasher.Sums()[t], nil } type readCloser struct{ io.Reader } func (r *readCloser) Close() error { return nil } type readSeekCloser struct{ io.ReadSeeker } func (r *readSeekCloser) Close() error { return nil } type readRangeSeekCloser struct{ io.ReadSeeker } func (r *readRangeSeekCloser) RangeSeek(offset int64, whence int, length int64) (int64,
error) { return r.ReadSeeker.Seek(offset, whence) } func (r *readRangeSeekCloser) Close() error { return nil } func (m SeekMode) String() string { switch m { case SeekModeNone: return "SeekModeNone" case SeekModeRegular: return "SeekModeRegular" case SeekModeRange: return "SeekModeRange" default: return fmt.Sprintf("SeekModeInvalid(%d)", m) } } rclone-1.53.3/fstest/run.go000066400000000000000000000217011375552240400155400ustar00rootroot00000000000000/* This provides Run for use in creating test suites To use this declare a TestMain // TestMain drives the tests func TestMain(m *testing.M) { fstest.TestMain(m) } And then make and destroy a Run in each test func TestMkdir(t *testing.T) { r := fstest.NewRun(t) defer r.Finalise() // test stuff } This will make r.Fremote and r.Flocal for a remote remote and a local remote. The remote is determined by the -remote flag passed in. */ package fstest import ( "bytes" "context" "flag" "fmt" "io/ioutil" "log" "os" "path" "path/filepath" "sort" "testing" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/object" "github.com/rclone/rclone/fs/walk" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // Run holds the remotes for a test run type Run struct { LocalName string Flocal fs.Fs Fremote fs.Fs FremoteName string cleanRemote func() mkdir map[string]bool // whether the remote has been made yet for the fs name Logf, Fatalf func(text string, args ...interface{}) } // TestMain drives the tests func TestMain(m *testing.M) { flag.Parse() if !*Individual { oneRun = newRun() } rc := m.Run() if !*Individual { oneRun.Finalise() } os.Exit(rc) } // oneRun holds the master run data if individual is not set var oneRun *Run // newRun initialises the remote and local for testing and returns a // run object. // // r.Flocal is an empty local Fs // r.Fremote is an empty remote Fs // // Finalise() will tidy them away when done. func newRun() *Run { r := &Run{ Logf: log.Printf, Fatalf: log.Fatalf, mkdir: make(map[string]bool), } Initialise() var err error r.Fremote, r.FremoteName, r.cleanRemote, err = RandomRemote() if err != nil { r.Fatalf("Failed to open remote %q: %v", *RemoteName, err) } r.LocalName, err = ioutil.TempDir("", "rclone") if err != nil { r.Fatalf("Failed to create temp dir: %v", err) } r.LocalName = filepath.ToSlash(r.LocalName) r.Flocal, err = fs.NewFs(r.LocalName) if err != nil { r.Fatalf("Failed to make %q: %v", r.LocalName, err) } return r } // run f(), retrying it until it returns with no error or the limit // expires, in which case it logs the failure func retry(t *testing.T, what string, f func() error) { var err error for try := 1; try <= *ListRetries; try++ { err = f() if err == nil { return } t.Logf("%s failed - try %d/%d: %v", what, try, *ListRetries, err) time.Sleep(time.Second) } t.Logf("%s failed: %v", what, err) } // newRunIndividual initialises the remote and local for testing and // returns a run object. Pass in true to make individual tests or // false to use the global one. // // r.Flocal is an empty local Fs // r.Fremote is an empty remote Fs // // Finalise() will tidy them away when done.
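//
// A sketch of the calling pattern, mirroring the package example above:
//
//	func TestSomething(t *testing.T) {
//		r := fstest.NewRunIndividual(t) // wraps newRunIndividual(t, true)
//		defer r.Finalise()
//		// test stuff
//	}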
func newRunIndividual(t *testing.T, individual bool) *Run { ctx := context.Background() var r *Run if individual { r = newRun() } else { // If not individual, use the global one with the clean method overridden r = new(Run) *r = *oneRun r.cleanRemote = func() { var toDelete []string err := walk.ListR(ctx, r.Fremote, "", true, -1, walk.ListAll, func(entries fs.DirEntries) error { for _, entry := range entries { switch x := entry.(type) { case fs.Object: retry(t, fmt.Sprintf("removing file %q", x.Remote()), func() error { return x.Remove(ctx) }) case fs.Directory: toDelete = append(toDelete, x.Remote()) } } return nil }) if err == fs.ErrorDirNotFound { return } require.NoError(t, err) sort.Strings(toDelete) for i := len(toDelete) - 1; i >= 0; i-- { dir := toDelete[i] retry(t, fmt.Sprintf("removing dir %q", dir), func() error { return r.Fremote.Rmdir(ctx, dir) }) } // Check remote is empty CheckListingWithPrecision(t, r.Fremote, []Item{}, []string{}, r.Fremote.Precision()) // Clear the remote cache cache.Clear() } } r.Logf = t.Logf r.Fatalf = t.Fatalf r.Logf("Remote %q, Local %q, Modify Window %q", r.Fremote, r.Flocal, fs.GetModifyWindow(r.Fremote)) return r } // NewRun initialises the remote and local for testing and returns a // run object. Call this from the tests. // // r.Flocal is an empty local Fs // r.Fremote is an empty remote Fs // // Finalise() will tidy them away when done. func NewRun(t *testing.T) *Run { return newRunIndividual(t, *Individual) } // NewRunIndividual as per NewRun but makes an individual remote for this test func NewRunIndividual(t *testing.T) *Run { return newRunIndividual(t, true) } // RenameFile renames a file in local func (r *Run) RenameFile(item Item, newpath string) Item { oldFilepath := path.Join(r.LocalName, item.Path) newFilepath := path.Join(r.LocalName, newpath) if err := os.Rename(oldFilepath, newFilepath); err != nil { r.Fatalf("Failed to rename file from %q to %q: %v", item.Path, newpath, err) } item.Path = newpath return item } // WriteFile writes a file to local func (r *Run) WriteFile(filePath, content string, t time.Time) Item { item := NewItem(filePath, content, t) // FIXME make directories?
filePath = path.Join(r.LocalName, filePath) dirPath := path.Dir(filePath) err := os.MkdirAll(dirPath, 0770) if err != nil { r.Fatalf("Failed to make directories %q: %v", dirPath, err) } err = ioutil.WriteFile(filePath, []byte(content), 0600) if err != nil { r.Fatalf("Failed to write file %q: %v", filePath, err) } err = os.Chtimes(filePath, t, t) if err != nil { r.Fatalf("Failed to chtimes file %q: %v", filePath, err) } return item } // ForceMkdir creates the remote func (r *Run) ForceMkdir(ctx context.Context, f fs.Fs) { err := f.Mkdir(ctx, "") if err != nil { r.Fatalf("Failed to mkdir %q: %v", f, err) } r.mkdir[f.String()] = true } // Mkdir creates the remote if it hasn't been created already func (r *Run) Mkdir(ctx context.Context, f fs.Fs) { if !r.mkdir[f.String()] { r.ForceMkdir(ctx, f) } } // WriteObjectTo writes an object to the fs, remote passed in func (r *Run) WriteObjectTo(ctx context.Context, f fs.Fs, remote, content string, modTime time.Time, useUnchecked bool) Item { put := f.Put if useUnchecked { put = f.Features().PutUnchecked if put == nil { r.Fatalf("Fs doesn't support PutUnchecked") } } r.Mkdir(ctx, f) // calculate all hashes f supports for content hash, err := hash.NewMultiHasherTypes(f.Hashes()) if err != nil { r.Fatalf("Failed to make new multi hasher: %v", err) } _, err = hash.Write([]byte(content)) if err != nil { r.Fatalf("Failed to write to hash: %v", err) } hashSums := hash.Sums() const maxTries = 10 for tries := 1; ; tries++ { in := bytes.NewBufferString(content) objinfo := object.NewStaticObjectInfo(remote, modTime, int64(len(content)), true, hashSums, nil) _, err := put(ctx, in, objinfo) if err == nil { break } // Retry if err returned a retry error if fserrors.IsRetryError(err) && tries < maxTries { r.Logf("Retry Put of %q to %v: %d/%d (%v)", remote, f, tries, maxTries, err) time.Sleep(2 * time.Second) continue } r.Fatalf("Failed to put %q to %q: %v", remote, f, err) } return NewItem(remote, content, modTime) } // WriteObject writes an object to the remote func (r *Run) WriteObject(ctx context.Context, remote, content string, modTime time.Time) Item { return r.WriteObjectTo(ctx, r.Fremote, remote, content, modTime, false) } // WriteUncheckedObject writes an object to the remote not checking for duplicates func (r *Run) WriteUncheckedObject(ctx context.Context, remote, content string, modTime time.Time) Item { return r.WriteObjectTo(ctx, r.Fremote, remote, content, modTime, true) } // WriteBoth calls WriteObject and WriteFile with the same arguments func (r *Run) WriteBoth(ctx context.Context, remote, content string, modTime time.Time) Item { r.WriteFile(remote, content, modTime) return r.WriteObject(ctx, remote, content, modTime) } // CheckWithDuplicates does a test but allows duplicates func (r *Run) CheckWithDuplicates(t *testing.T, items ...Item) { var want, got []string // construct a []string of desired items for _, item := range items { want = append(want, fmt.Sprintf("%q %d", item.Path, item.Size)) } sort.Strings(want) // do the listing objs, _, err := walk.GetAll(context.Background(), r.Fremote, "", true, -1) if err != nil && err != fs.ErrorDirNotFound { t.Fatalf("Error listing: %v", err) } // construct a []string of actual items for _, o := range objs { got = append(got, fmt.Sprintf("%q %d", o.Remote(), o.Size())) } sort.Strings(got) assert.Equal(t, want, got) } // Clean the temporary directory func (r *Run) cleanTempDir() { err := os.RemoveAll(r.LocalName) if err != nil { r.Logf("Failed to clean temporary directory %q: %v", r.LocalName, err) } }
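// Note on WriteObjectTo above: the upload is retried with a 2 second
// pause when the backend returns a retryable error. A condensed sketch
// of that loop (the real code also rebuilds the input reader and
// ObjectInfo on each attempt):
//
//	for tries := 1; ; tries++ {
//		_, err := put(ctx, in, objinfo)
//		if err == nil {
//			break
//		}
//		if fserrors.IsRetryError(err) && tries < maxTries {
//			time.Sleep(2 * time.Second)
//			continue
//		}
//		r.Fatalf("Failed to put %q to %q: %v", remote, f, err)
//	}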
// Finalise cleans the remote and local func (r *Run) Finalise() { // r.Logf("Cleaning remote %q", r.Fremote) r.cleanRemote() // r.Logf("Cleaning local %q", r.LocalName) r.cleanTempDir() // Clear the remote cache cache.Clear() } rclone-1.53.3/fstest/test_all/000077500000000000000000000000001375552240400162135ustar00rootroot00000000000000rclone-1.53.3/fstest/test_all/clean.go000066400000000000000000000040171375552240400176260ustar00rootroot00000000000000// Clean the left over test files package main import ( "context" "log" "regexp" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/list" "github.com/rclone/rclone/fs/operations" ) // MatchTestRemote matches the remote names used for testing (copied // from fstest/fstest.go so we don't have to import that and get all // its flags) var MatchTestRemote = regexp.MustCompile(`^rclone-test-[abcdefghijklmnopqrstuvwxyz0123456789]{24}(_segments)?$`) // cleanFs runs a single clean fs for left over directories func cleanFs(ctx context.Context, remote string, cleanup bool) error { f, err := fs.NewFs(remote) if err != nil { return err } var lastErr error if cleanup { log.Printf("%q - running cleanup", remote) err = operations.CleanUp(ctx, f) if err != nil { lastErr = err fs.Errorf(f, "Cleanup failed: %v", err) } } entries, err := list.DirSorted(ctx, f, true, "") if err != nil { return err } err = entries.ForDirError(func(dir fs.Directory) error { dirPath := dir.Remote() fullPath := remote + dirPath if MatchTestRemote.MatchString(dirPath) { if *dryRun { log.Printf("Not Purging %s - -dry-run", fullPath) return nil } log.Printf("Purging %s", fullPath) dir, err := fs.NewFs(fullPath) if err != nil { err = errors.Wrap(err, "NewFs failed") lastErr = err fs.Errorf(fullPath, "%v", err) return nil } err = operations.Purge(ctx, dir, "") if err != nil { err = errors.Wrap(err, "Purge failed") lastErr = err fs.Errorf(dir, "%v", err) return nil } } return nil }) if err != nil { return err } return lastErr } // cleanRemotes cleans the list of remotes passed in func cleanRemotes(conf *Config) error { var lastError error for _, backend := range conf.Backends { remote := backend.Remote log.Printf("%q - Cleaning", remote) err := cleanFs(context.Background(), remote, backend.CleanUp) if err != nil { lastError = err log.Printf("Failed to purge %q: %v", remote, err) } } return lastError } rclone-1.53.3/fstest/test_all/config.go000066400000000000000000000117411375552240400200130ustar00rootroot00000000000000// Config handling package main import ( "io/ioutil" "log" "path" "github.com/pkg/errors" "github.com/rclone/rclone/fs" yaml "gopkg.in/yaml.v2" ) // Test describes an integration test to run with `go test` type Test struct { Path string // path to the source directory FastList bool // if it is possible to add -fast-list to tests Short bool // if it is possible to run the test with -short AddBackend bool // set if Path needs the current backend appending NoRetries bool // set if no retries should be performed NoBinary bool // set to not build a binary in advance LocalOnly bool // if set only run with the local backend } // Backend describes a backend test // // FIXME make bucket based remotes set sub-dir automatically??? 
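//
// A Backend is normally declared as a stanza in config.yaml, for
// example (this one is taken from the shipped config below):
//
//	- backend: "b2"
//	  remote: "TestB2:"
//	  fastlist: true
//	  listretries: 5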
type Backend struct { Backend string // name of the backend directory Remote string // name of the test remote FastList bool // set to test with -fast-list Short bool // set to test with -short OneOnly bool // set to run only one backend test at once MaxFile string // file size limit CleanUp bool // when running clean, run cleanup first Ignore []string // test names to ignore the failure of Tests []string // paths of tests to run, blank for all ListRetries int // -list-retries if > 0 } // includeTest returns true if this backend should be included in this // test func (b *Backend) includeTest(t *Test) bool { if len(b.Tests) == 0 { return true } for _, testPath := range b.Tests { if testPath == t.Path { return true } } return false } // MakeRuns creates Run objects for the Backend and Test // // There can be several created, one for each combination of optional // flags (eg FastList) func (b *Backend) MakeRuns(t *Test) (runs []*Run) { if !b.includeTest(t) { return runs } maxSize := fs.SizeSuffix(0) if b.MaxFile != "" { if err := maxSize.Set(b.MaxFile); err != nil { log.Printf("Invalid maxfile value %q: %v", b.MaxFile, err) } } fastlists := []bool{false} if b.FastList && t.FastList { fastlists = append(fastlists, true) } ignore := make(map[string]struct{}, len(b.Ignore)) for _, item := range b.Ignore { ignore[item] = struct{}{} } for _, fastlist := range fastlists { if t.LocalOnly && b.Backend != "local" { continue } run := &Run{ Remote: b.Remote, Backend: b.Backend, Path: t.Path, FastList: fastlist, Short: (b.Short && t.Short), NoRetries: t.NoRetries, OneOnly: b.OneOnly, NoBinary: t.NoBinary, SizeLimit: int64(maxSize), Ignore: ignore, ListRetries: b.ListRetries, } if t.AddBackend { run.Path = path.Join(run.Path, b.Backend) } runs = append(runs, run) } return runs } // Config describes the config for this program type Config struct { Tests []Test Backends []Backend } // NewConfig reads the config file func NewConfig(configFile string) (*Config, error) { d, err := ioutil.ReadFile(configFile) if err != nil { return nil, errors.Wrap(err, "failed to read config file") } config := &Config{} err = yaml.Unmarshal(d, &config) if err != nil { return nil, errors.Wrap(err, "failed to parse config file") } // d, err = yaml.Marshal(&config) // if err != nil { // log.Fatalf("error: %v", err) // } // fmt.Printf("--- m dump:\n%s\n\n", string(d)) return config, nil } // MakeRuns makes Run objects for each combination of Backend and Test // in the config func (c *Config) MakeRuns() (runs Runs) { for _, backend := range c.Backends { for _, test := range c.Tests { runs = append(runs, backend.MakeRuns(&test)...) } } return runs } // Filter the Backends with the remotes passed in.
// // If no backend is found for a remote then synthesize one func (c *Config) filterBackendsByRemotes(remotes []string) { var newBackends []Backend for _, name := range remotes { found := false for i := range c.Backends { if c.Backends[i].Remote == name { newBackends = append(newBackends, c.Backends[i]) found = true } } if !found { log.Printf("Remote %q not found - inserting with default flags", name) // Lookup which backend fsInfo, _, _, _, err := fs.ConfigFs(name) if err != nil { log.Fatalf("couldn't find remote %q: %v", name, err) } newBackends = append(newBackends, Backend{Backend: fsInfo.FileName(), Remote: name}) } } c.Backends = newBackends } // Filter the Backends with the backendNames passed in func (c *Config) filterBackendsByBackends(backendNames []string) { var newBackends []Backend for _, name := range backendNames { for i := range c.Backends { if c.Backends[i].Backend == name { newBackends = append(newBackends, c.Backends[i]) } } } c.Backends = newBackends } // Filter the incoming tests into the backends selected func (c *Config) filterTests(paths []string) { var newTests []Test for _, path := range paths { for i := range c.Tests { if c.Tests[i].Path == path { newTests = append(newTests, c.Tests[i]) } } } c.Tests = newTests } rclone-1.53.3/fstest/test_all/config.yaml000066400000000000000000000170001375552240400203420ustar00rootroot00000000000000tests: - path: backend addbackend: true nobinary: true short: true - path: fs/operations fastlist: true - path: fs/sync fastlist: true - path: vfs - path: cmd/serve/restic localonly: true backends: # - backend: "amazonclouddrive" # remote: "TestAmazonCloudDrive:" # fastlist: false - backend: "local" remote: "" fastlist: false - backend: "b2" remote: "TestB2:" fastlist: true listretries: 5 - backend: "crypt" remote: "TestCryptDrive:" fastlist: true - backend: "crypt" remote: "TestCryptSwift:" fastlist: false ## chunker - backend: "chunker" remote: "TestChunkerLocal:" fastlist: true - backend: "chunker" remote: "TestChunkerNometaLocal:" fastlist: true - backend: "chunker" remote: "TestChunkerChunk3bLocal:" fastlist: true maxfile: 6k - backend: "chunker" remote: "TestChunkerChunk3bNometaLocal:" fastlist: true maxfile: 6k - backend: "chunker" remote: "TestChunkerMailru:" fastlist: true - backend: "chunker" remote: "TestChunkerChunk50bMailru:" fastlist: true maxfile: 10k - backend: "chunker" remote: "TestChunkerChunk50bYandex:" fastlist: true maxfile: 1k - backend: "chunker" remote: "TestChunkerChunk50bBox:" fastlist: true maxfile: 1k - backend: "chunker" remote: "TestChunkerS3:" fastlist: true - backend: "chunker" remote: "TestChunkerChunk50bS3:" fastlist: true maxfile: 1k - backend: "chunker" remote: "TestChunkerChunk50bMD5HashS3:" fastlist: true maxfile: 1k - backend: "chunker" remote: "TestChunkerChunk50bSHA1HashS3:" fastlist: true maxfile: 1k - backend: "chunker" remote: "TestChunkerOverCrypt:" fastlist: true maxfile: 6k - backend: "chunker" remote: "TestChunkerChunk50bMD5QuickS3:" fastlist: true maxfile: 1k - backend: "chunker" remote: "TestChunkerChunk50bSHA1QuickS3:" fastlist: true maxfile: 1k ## end chunker - backend: "drive" remote: "TestDrive:" fastlist: true - backend: "dropbox" remote: "TestDropbox:" fastlist: false - backend: "googlecloudstorage" remote: "TestGoogleCloudStorage:" fastlist: true - backend: "googlephotos" remote: "TestGooglePhotos:" tests: - backend - backend: "hubic" remote: "TestHubic:" fastlist: false - backend: "jottacloud" remote: "TestJottacloud:" fastlist: true - backend: "memory" remote:
":memory:" fastlist: true - backend: "onedrive" remote: "TestOneDrive:" fastlist: false - backend: "s3" remote: "TestS3:" fastlist: true - backend: "s3" remote: "TestS3Minio:" fastlist: true ignore: - TestIntegration/FsMkdir/FsPutFiles/SetTier - TestIntegration/FsMkdir/FsEncoding/control_chars - TestIntegration/FsMkdir/FsEncoding/leading_LF - TestIntegration/FsMkdir/FsEncoding/leading_VT - TestIntegration/FsMkdir/FsEncoding/punctuation - TestIntegration/FsMkdir/FsEncoding/trailing_LF - TestIntegration/FsMkdir/FsEncoding/trailing_VT - backend: "s3" remote: "TestS3MinioEdge:" fastlist: true ignore: - TestIntegration/FsMkdir/FsPutFiles/SetTier - backend: "s3" remote: "TestS3Wasabi:" fastlist: true ignore: - TestIntegration/FsMkdir/FsEncoding/leading_CR - TestIntegration/FsMkdir/FsEncoding/leading_LF - TestIntegration/FsMkdir/FsEncoding/leading_HT - TestIntegration/FsMkdir/FsEncoding/leading_VT - TestIntegration/FsMkdir/FsPutFiles/FsPutStream/0 # Disabled due to excessive rate limiting at DO which cause the tests never to pass # This hits the rate limit as documented here: https://www.digitalocean.com/docs/spaces/#limits # 2 COPYs per 5 minutes on any individual object in a Space # - backend: "s3" # remote: "TestS3DigitalOcean:" # fastlist: true # ignore: # - TestIntegration/FsMkdir/FsPutFiles/FsCopy # - TestIntegration/FsMkdir/FsPutFiles/SetTier # - backend: "s3" # remote: "TestS3Ceph:" # fastlist: true # ignore: # - TestIntegration/FsMkdir/FsPutFiles/FsCopy # - TestIntegration/FsMkdir/FsPutFiles/SetTier - backend: "s3" remote: "TestS3Alibaba:" fastlist: true - backend: "sftp" remote: "TestSFTPOpenssh:" fastlist: false - backend: "sftp" remote: "TestSFTPRclone:" fastlist: false - backend: "sugarsync" remote: "TestSugarSync:Test" fastlist: false ignore: - TestIntegration/FsMkdir/FsPutFiles/PublicLink - backend: "swift" remote: "TestSwiftAIO:" fastlist: true - backend: "swift" remote: "TestSwift:" fastlist: true # - backend: "swift" # remote: "TestSwiftCeph:" # fastlist: true # ignore: # - TestIntegration/FsMkdir/FsPutFiles/FsCopy - backend: "yandex" remote: "TestYandex:" fastlist: false - backend: "ftp" remote: "TestFTPProftpd:" ignore: - TestIntegration/FsMkdir/FsEncoding/punctuation fastlist: false # - backend: "ftp" # remote: "TestFTPVsftpd:" # ignore: # - TestIntegration/FsMkdir/FsEncoding/punctuation # fastlist: false - backend: "ftp" remote: "TestFTPPureftpd:" ignore: - TestIntegration/FsMkdir/FsEncoding/punctuation fastlist: false - backend: "ftp" remote: "TestFTPRclone:" ignore: - "TestMultithreadCopy/{size:131071_streams:2}" - "TestMultithreadCopy/{size:131072_streams:2}" - "TestMultithreadCopy/{size:131073_streams:2}" fastlist: false - backend: "box" remote: "TestBox:" fastlist: false - backend: "fichier" remote: "TestFichier:" fastlist: false tests: - backend - backend: "qingstor" remote: "TestQingStor:" fastlist: false oneonly: true cleanup: true - backend: "azureblob" remote: "TestAzureBlob:" fastlist: true - backend: "pcloud" remote: "TestPcloud:" fastlist: false - backend: "webdav" remote: "TestWebdavNextcloud:" ignore: - TestIntegration/FsMkdir/FsEncoding/punctuation - TestIntegration/FsMkdir/FsEncoding/invalid_UTF-8 fastlist: false - backend: "webdav" remote: "TestWebdavOwncloud:" ignore: - TestIntegration/FsMkdir/FsEncoding/punctuation - TestIntegration/FsMkdir/FsEncoding/invalid_UTF-8 - TestIntegration/FsMkdir/FsPutFiles/FsCopy - TestCopyFileCopyDest - TestServerSideCopy - TestSyncCopyDest fastlist: false - backend: "webdav" remote: "TestWebdavRclone:" ignore: - 
TestFileReadAtZeroLength fastlist: false - backend: "cache" remote: "TestCache:" fastlist: false - backend: "mega" remote: "TestMega:" fastlist: false ignore: - TestIntegration/FsMkdir/FsPutFiles/PublicLink - TestDirRename - backend: "opendrive" remote: "TestOpenDrive:" fastlist: false - backend: "union" remote: "TestUnion:" fastlist: false - backend: "koofr" remote: "TestKoofr:" fastlist: false - backend: "premiumizeme" remote: "TestPremiumizeMe:" fastlist: false - backend: "putio" remote: "TestPutio:" fastlist: false - backend: "sharefile" remote: "TestSharefile:" fastlist: false - backend: "mailru" remote: "TestMailru:" subdir: false fastlist: false - backend: "seafile" remote: "TestSeafileV6:" fastlist: false ignore: - TestIntegration/FsMkdir/FsPutFiles/FsDirMove - backend: "seafile" remote: "TestSeafile:" fastlist: true - backend: "seafile" remote: "TestSeafileEncrypted:" fastlist: true - backend: "tardigrade" remote: "TestTardigrade:" fastlist: true rclone-1.53.3/fstest/test_all/report.go000066400000000000000000000176701375552240400200700ustar00rootroot00000000000000package main import ( "encoding/json" "fmt" "html/template" "io/ioutil" "log" "os" "os/exec" "path" "regexp" "runtime" "sort" "time" "github.com/rclone/rclone/fs" "github.com/skratchdot/open-golang/open" ) const timeFormat = "2006-01-02-150405" // Report holds the info to make a report on a series of test runs type Report struct { LogDir string // output directory for logs and report StartTime time.Time // time started DateTime string // directory name for output Duration time.Duration // time the run took Failed Runs // failed runs Passed Runs // passed runs Runs []ReportRun // runs to report Version string // rclone version Previous string // previous test name if known IndexHTML string // path to the index.html file URL string // online version Branch string // rclone branch Commit string // rclone commit GOOS string // Go OS GOARCH string // Go Arch GoVersion string // Go Version } // ReportRun is used in the templates to report on a test run type ReportRun struct { Name string Runs Runs } // Parse version numbers // v1.49.0 // v1.49.0-031-g2298834e-beta // v1.49.0-032-g20793a5f-sharefile-beta // match 1 is commit number // match 2 is branch name var parseVersion = regexp.MustCompile(`^v(?:[0-9.]+)-(?:\d+)-g([0-9a-f]+)(?:-(.*))?-beta$`) // FIXME take -issue or -pr parameter... 
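// A worked example of the version parse (illustrative values only):
// "v1.49.0-032-g20793a5f-sharefile-beta" yields Commit "20793a5f" and
// Branch "sharefile", while a plain release such as "v1.49.0" does not
// match at all, in which case NewReport falls back to Branch "master".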
// NewReport initialises and returns a Report func NewReport() *Report { r := &Report{ StartTime: time.Now(), Version: fs.Version, GOOS: runtime.GOOS, GOARCH: runtime.GOARCH, GoVersion: runtime.Version(), } r.DateTime = r.StartTime.Format(timeFormat) // Find previous log directory if possible names, err := ioutil.ReadDir(*outputDir) if err == nil && len(names) > 0 { r.Previous = names[len(names)-1].Name() } // Create output directory for logs and report r.LogDir = path.Join(*outputDir, r.DateTime) err = os.MkdirAll(r.LogDir, 0777) if err != nil { log.Fatalf("Failed to make log directory: %v", err) } // Online version r.URL = *urlBase + r.DateTime + "/index.html" // Get branch/commit out of version parts := parseVersion.FindStringSubmatch(r.Version) if len(parts) >= 3 { r.Commit = parts[1] r.Branch = parts[2] } if r.Branch == "" { r.Branch = "master" } return r } // End should be called when the tests are complete func (r *Report) End() { r.Duration = time.Since(r.StartTime) sort.Sort(r.Failed) sort.Sort(r.Passed) r.Runs = []ReportRun{ {Name: "Failed", Runs: r.Failed}, {Name: "Passed", Runs: r.Passed}, } } // AllPassed returns true if there were no failed tests func (r *Report) AllPassed() bool { return len(r.Failed) == 0 } // RecordResult should be called with a Run when it has finished to be // recorded into the Report func (r *Report) RecordResult(t *Run) { if !t.passed() { r.Failed = append(r.Failed, t) } else { r.Passed = append(r.Passed, t) } } // Title returns a human readable summary title for the Report func (r *Report) Title() string { if r.AllPassed() { return fmt.Sprintf("PASS: All tests finished OK in %v", r.Duration) } return fmt.Sprintf("FAIL: %d tests failed in %v", len(r.Failed), r.Duration) } // LogSummary writes the summary to the log file func (r *Report) LogSummary() { log.Printf("Logs in %q", r.LogDir) // Summarise results log.Printf("SUMMARY") log.Println(r.Title()) if !r.AllPassed() { for _, t := range r.Failed { log.Printf(" * %s", toShell(t.nextCmdLine())) log.Printf(" * Failed tests: %v", t.FailedTests) } } } // LogJSON writes the summary to index.json in LogDir func (r *Report) LogJSON() { out, err := json.MarshalIndent(r, "", "\t") if err != nil { log.Fatalf("Failed to marshal data for index.json: %v", err) } err = ioutil.WriteFile(path.Join(r.LogDir, "index.json"), out, 0666) if err != nil { log.Fatalf("Failed to write index.json: %v", err) } } // LogHTML writes the summary to index.html in LogDir func (r *Report) LogHTML() { r.IndexHTML = path.Join(r.LogDir, "index.html") out, err := os.Create(r.IndexHTML) if err != nil { log.Fatalf("Failed to open index.html: %v", err) } defer func() { err := out.Close() if err != nil { log.Fatalf("Failed to close index.html: %v", err) } }() err = reportTemplate.Execute(out, r) if err != nil { log.Fatalf("Failed to execute template: %v", err) } _ = open.Start("file://" + r.IndexHTML) } var reportHTML = ` {{ .Title }}

    {{ .Title }}

    {{ if .Commit}}{{ end }} {{ if .Previous}}{{ end }}
    Version{{ .Version }}
    Test{{ .DateTime}}
    Branch{{ .Branch }}
    Commit{{ .Commit }}
    Go{{ .GoVersion }} {{ .GOOS }}/{{ .GOARCH }}
    Duration{{ .Duration }}
    Previous{{ .Previous }}
    UpOlder Tests
    {{ range .Runs }} {{ if .Runs }}

    {{ .Name }}: {{ len .Runs }}

    {{ $prevBackend := "" }} {{ $prevRemote := "" }} {{ range .Runs}} {{ end }}
    Backend Remote Test FastList Failed Logs
    {{ if ne $prevBackend .Backend }}{{ .Backend }}{{ end }}{{ $prevBackend = .Backend }} {{ if ne $prevRemote .Remote }}{{ .Remote }}{{ end }}{{ $prevRemote = .Remote }} {{ .Path }} {{ .FastList }} {{ .FailedTestsCSV }} {{ range $i, $v := .Logs }}#{{ $i }} {{ end }}
    {{ end }} {{ end }} ` var reportTemplate = template.Must(template.New("Report").Parse(reportHTML)) // EmailHTML sends the summary report to the email address supplied func (r *Report) EmailHTML() { if *emailReport == "" || r.IndexHTML == "" { return } log.Printf("Sending email summary to %q", *emailReport) cmdLine := []string{"mail", "-a", "Content-Type: text/html", *emailReport, "-s", "rclone integration tests: " + r.Title()} cmd := exec.Command(cmdLine[0], cmdLine[1:]...) in, err := os.Open(r.IndexHTML) if err != nil { log.Fatalf("Failed to open index.html: %v", err) } cmd.Stdin = in cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr err = cmd.Run() if err != nil { log.Fatalf("Failed to send email: %v", err) } _ = in.Close() } // uploadTo uploads a copy of the report online to the dir given func (r *Report) uploadTo(uploadDir string) { dst := path.Join(*uploadPath, uploadDir) log.Printf("Uploading results to %q", dst) cmdLine := []string{"rclone", "sync", "--stats-log-level", "NOTICE", r.LogDir, dst} cmd := exec.Command(cmdLine[0], cmdLine[1:]...) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr err := cmd.Run() if err != nil { log.Fatalf("Failed to upload results: %v", err) } } // Upload uploads a copy of the report online func (r *Report) Upload() { if *uploadPath == "" || r.IndexHTML == "" { return } // Upload into dated directory r.uploadTo(r.DateTime) // And again into current r.uploadTo("current") } rclone-1.53.3/fstest/test_all/run.go000066400000000000000000000241271375552240400173540ustar00rootroot00000000000000// Run a test package main import ( "bytes" "fmt" "go/build" "io" "log" "os" "os/exec" "path" "regexp" "runtime" "sort" "strconv" "strings" "sync" "time" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fstest/testserver" ) // Control concurrency per backend if required var ( oneOnlyMu sync.Mutex oneOnly = map[string]*sync.Mutex{} ) // Run holds info about a running test // // A run just runs one command line, but it can be run multiple times // if retries are needed. 
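//
// The lifecycle, as assumed from the methods below: Init builds CmdLine,
// trial executes one attempt, and findFailures condenses any failures into
// RunFlag so that subsequent tries pass -test.run covering only the tests
// which failed.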
type Run struct { // Config Remote string // name of the test remote Backend string // name of the backend Path string // path to the source directory FastList bool // add -fast-list to tests Short bool // add -short NoRetries bool // don't retry if set OneOnly bool // only run test for this backend at once NoBinary bool // set to not build a binary SizeLimit int64 // maximum test file size Ignore map[string]struct{} ListRetries int // -list-retries if > 0 // Internals CmdLine []string CmdString string Try int err error output []byte FailedTests []string RunFlag string LogDir string // directory to place the logs TrialName string // name/log file name of current trial TrialNames []string // list of all the trials } // Runs records multiple Run objects type Runs []*Run // Sort interface func (rs Runs) Len() int { return len(rs) } func (rs Runs) Swap(i, j int) { rs[i], rs[j] = rs[j], rs[i] } func (rs Runs) Less(i, j int) bool { a, b := rs[i], rs[j] if a.Backend < b.Backend { return true } else if a.Backend > b.Backend { return false } if a.Remote < b.Remote { return true } else if a.Remote > b.Remote { return false } if a.Path < b.Path { return true } else if a.Path > b.Path { return false } if !a.FastList && b.FastList { return true } else if a.FastList && !b.FastList { return false } return false } // dumpOutput prints the error output func (r *Run) dumpOutput() { log.Println("------------------------------------------------------------") log.Printf("---- %q ----", r.CmdString) log.Println(string(r.output)) log.Println("------------------------------------------------------------") } // This converts a slice of test names into a regexp which matches // them. func testsToRegexp(tests []string) string { var split []map[string]struct{} // Make a slice with maps of the used parts at each level for _, test := range tests { for i, name := range strings.Split(test, "/") { if i >= len(split) { split = append(split, make(map[string]struct{})) } split[i][name] = struct{}{} } } var out []string for _, level := range split { var testsInLevel = []string{} for name := range level { testsInLevel = append(testsInLevel, name) } sort.Strings(testsInLevel) if len(testsInLevel) > 1 { out = append(out, "^("+strings.Join(testsInLevel, "|")+")$") } else { out = append(out, "^"+testsInLevel[0]+"$") } } return strings.Join(out, "/") } var failRe = regexp.MustCompile(`(?m)^\s*--- FAIL: (Test.*?) 
\(`) // findFailures looks for all the tests which failed func (r *Run) findFailures() { oldFailedTests := r.FailedTests r.FailedTests = nil excludeParents := map[string]struct{}{} ignored := 0 for _, matches := range failRe.FindAllSubmatch(r.output, -1) { failedTest := string(matches[1]) // Skip any ignored failures if _, found := r.Ignore[failedTest]; found { ignored++ } else { r.FailedTests = append(r.FailedTests, failedTest) } // Find all the parents of this test parts := strings.Split(failedTest, "/") for i := len(parts) - 1; i >= 1; i-- { excludeParents[strings.Join(parts[:i], "/")] = struct{}{} } } // Exclude the parents var newTests = r.FailedTests[:0] for _, failedTest := range r.FailedTests { if _, excluded := excludeParents[failedTest]; !excluded { newTests = append(newTests, failedTest) } } r.FailedTests = newTests if len(r.FailedTests) == 0 && ignored > 0 { log.Printf("%q - Found %d ignored errors only - marking as good", r.CmdString, ignored) r.err = nil r.dumpOutput() return } if len(r.FailedTests) != 0 { r.RunFlag = testsToRegexp(r.FailedTests) } else { r.RunFlag = "" } if r.passed() && len(r.FailedTests) != 0 { log.Printf("%q - Expecting no errors but got: %v", r.CmdString, r.FailedTests) r.dumpOutput() } else if !r.passed() && len(r.FailedTests) == 0 { log.Printf("%q - Expecting errors but got none: %v", r.CmdString, r.FailedTests) r.dumpOutput() r.FailedTests = oldFailedTests } } // nextCmdLine returns the next command line func (r *Run) nextCmdLine() []string { CmdLine := r.CmdLine if r.RunFlag != "" { CmdLine = append(CmdLine, "-test.run", r.RunFlag) } return CmdLine } // trial runs a single test func (r *Run) trial() { CmdLine := r.nextCmdLine() CmdString := toShell(CmdLine) msg := fmt.Sprintf("%q - Starting (try %d/%d)", CmdString, r.Try, *maxTries) log.Println(msg) logName := path.Join(r.LogDir, r.TrialName) out, err := os.Create(logName) if err != nil { log.Fatalf("Couldn't create log file: %v", err) } defer func() { err := out.Close() if err != nil { log.Fatalf("Failed to close log file: %v", err) } }() _, _ = fmt.Fprintln(out, msg) // Early exit if --dry-run if *dryRun { log.Printf("Not executing as --dry-run: %v", CmdLine) _, _ = fmt.Fprintln(out, "--dry-run is set - not running") return } // Start the test server if required finish, err := testserver.Start(r.Remote) if err != nil { log.Printf("%s: Failed to start test server: %v", r.Remote, err) _, _ = fmt.Fprintf(out, "%s: Failed to start test server: %v\n", r.Remote, err) r.err = err return } defer finish() // Internal buffer var b bytes.Buffer multiOut := io.MultiWriter(out, &b) cmd := exec.Command(CmdLine[0], CmdLine[1:]...) 
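// Both streams go to the log file and to the in-memory buffer: findFailures
// later scans r.output for "--- FAIL:" lines to decide what to retry.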
cmd.Stderr = multiOut cmd.Stdout = multiOut cmd.Dir = r.Path start := time.Now() r.err = cmd.Run() r.output = b.Bytes() duration := time.Since(start) r.findFailures() if r.passed() { msg = fmt.Sprintf("%q - Finished OK in %v (try %d/%d)", CmdString, duration, r.Try, *maxTries) } else { msg = fmt.Sprintf("%q - Finished ERROR in %v (try %d/%d): %v: Failed %v", CmdString, duration, r.Try, *maxTries, r.err, r.FailedTests) } log.Println(msg) _, _ = fmt.Fprintln(out, msg) } // passed returns true if the test passed func (r *Run) passed() bool { return r.err == nil } // GOPATH returns the current GOPATH func GOPATH() string { gopath := os.Getenv("GOPATH") if gopath == "" { gopath = build.Default.GOPATH } return gopath } // BinaryName turns a package name into a binary name func (r *Run) BinaryName() string { binary := path.Base(r.Path) + ".test" if runtime.GOOS == "windows" { binary += ".exe" } return binary } // BinaryPath turns a package name into a binary path func (r *Run) BinaryPath() string { return path.Join(r.Path, r.BinaryName()) } // PackagePath returns the path to the package func (r *Run) PackagePath() string { return path.Join(GOPATH(), "src", r.Path) } // MakeTestBinary makes the binary we will run func (r *Run) MakeTestBinary() { binary := r.BinaryPath() binaryName := r.BinaryName() log.Printf("%s: Making test binary %q", r.Path, binaryName) CmdLine := []string{"go", "test", "-c"} if *dryRun { log.Printf("Not executing: %v", CmdLine) return } cmd := exec.Command(CmdLine[0], CmdLine[1:]...) cmd.Dir = r.Path err := cmd.Run() if err != nil { log.Fatalf("Failed to make test binary: %v", err) } if _, err := os.Stat(binary); err != nil { log.Fatalf("Couldn't find test binary %q", binary) } } // RemoveTestBinary removes the binary made in makeTestBinary func (r *Run) RemoveTestBinary() { if *dryRun { return } binary := r.BinaryPath() err := os.Remove(binary) // Delete the binary when finished if err != nil { log.Printf("Error removing test binary %q: %v", binary, err) } } // Name returns the run name as a file name friendly string func (r *Run) Name() string { ns := []string{ r.Backend, strings.Replace(r.Path, "/", ".", -1), r.Remote, } if r.FastList { ns = append(ns, "fastlist") } ns = append(ns, fmt.Sprintf("%d", r.Try)) s := strings.Join(ns, "-") s = strings.Replace(s, ":", "", -1) return s } // Init the Run func (r *Run) Init() { prefix := "-test." 
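// Compiled test binaries spell their flags "-test.v", "-test.timeout" etc,
// whereas plain "go test" (the NoBinary case below) takes them bare, so the
// prefix is switched accordingly.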
if r.NoBinary { prefix = "-" r.CmdLine = []string{"go", "test"} } else { r.CmdLine = []string{"./" + r.BinaryName()} } r.CmdLine = append(r.CmdLine, prefix+"v", prefix+"timeout", timeout.String(), "-remote", r.Remote) listRetries := *listRetries if r.ListRetries > 0 { listRetries = r.ListRetries } if listRetries > 0 { r.CmdLine = append(r.CmdLine, "-list-retries", fmt.Sprint(listRetries)) } r.Try = 1 if *verbose { r.CmdLine = append(r.CmdLine, "-verbose") fs.Config.LogLevel = fs.LogLevelDebug } if *runOnly != "" { r.CmdLine = append(r.CmdLine, prefix+"run", *runOnly) } if r.FastList { r.CmdLine = append(r.CmdLine, "-fast-list") } if r.Short { r.CmdLine = append(r.CmdLine, "-short") } if r.SizeLimit > 0 { r.CmdLine = append(r.CmdLine, "-size-limit", strconv.FormatInt(r.SizeLimit, 10)) } r.CmdString = toShell(r.CmdLine) } // Logs returns all the log names func (r *Run) Logs() []string { return r.TrialNames } // FailedTestsCSV returns the failed tests as a comma separated string, limiting the number func (r *Run) FailedTestsCSV() string { const maxTests = 5 ts := r.FailedTests if len(ts) > maxTests { ts = ts[:maxTests:maxTests] ts = append(ts, fmt.Sprintf("… (%d more)", len(r.FailedTests)-maxTests)) } return strings.Join(ts, ", ") } // Run runs all the trials for this test func (r *Run) Run(LogDir string, result chan<- *Run) { if r.OneOnly { oneOnlyMu.Lock() mu := oneOnly[r.Backend] if mu == nil { mu = new(sync.Mutex) oneOnly[r.Backend] = mu } oneOnlyMu.Unlock() mu.Lock() defer mu.Unlock() } r.Init() r.LogDir = LogDir for r.Try = 1; r.Try <= *maxTries; r.Try++ { r.TrialName = r.Name() + ".txt" r.TrialNames = append(r.TrialNames, r.TrialName) log.Printf("Starting run with log %q", r.TrialName) r.trial() if r.passed() || r.NoRetries { break } } if !r.passed() { r.dumpOutput() } result <- r } rclone-1.53.3/fstest/test_all/run_test.go000066400000000000000000000070771375552240400204200ustar00rootroot00000000000000package main import ( "fmt" "os/exec" "regexp" "testing" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestTestsToRegexp(t *testing.T) { for _, test := range []struct { in []string want string }{ { in: []string{}, want: "", }, { in: []string{"TestOne"}, want: "^TestOne$", }, { in: []string{"TestOne", "TestTwo"}, want: "^(TestOne|TestTwo)$", }, { in: []string{"TestOne", "TestTwo", "TestThree"}, want: "^(TestOne|TestThree|TestTwo)$", }, { in: []string{"TestOne/Sub1"}, want: "^TestOne$/^Sub1$", }, { in: []string{ "TestOne/Sub1", "TestTwo", }, want: "^(TestOne|TestTwo)$/^Sub1$", }, { in: []string{ "TestOne/Sub1", "TestOne/Sub2", "TestTwo", }, want: "^(TestOne|TestTwo)$/^(Sub1|Sub2)$", }, { in: []string{ "TestOne/Sub1", "TestOne/Sub2/SubSub1", "TestTwo", }, want: "^(TestOne|TestTwo)$/^(Sub1|Sub2)$/^SubSub1$", }, { in: []string{ "TestTests/A1", "TestTests/B/B1", "TestTests/C/C3/C31", }, want: "^TestTests$/^(A1|B|C)$/^(B1|C3)$/^C31$", }, } { got := testsToRegexp(test.in) assert.Equal(t, test.want, got, fmt.Sprintf("in=%v want=%q got=%q", test.in, test.want, got)) } } var runRe = regexp.MustCompile(`(?m)^\s*=== RUN\s*(Test.*?)\s*$`) // Test the regexp work with the -run flag in actually selecting the right tests func TestTestsToRegexpLive(t *testing.T) { for _, test := range []struct { in []string want []string }{ { in: []string{ "TestTests/A1", "TestTests/C/C3", }, want: []string{ "TestTests", "TestTests/A1", "TestTests/C", "TestTests/C/C3", "TestTests/C/C3/C31", "TestTests/C/C3/C32", }, }, { in: []string{ "TestTests", "TestTests/A1", "TestTests/B", 
"TestTests/B/B1", "TestTests/C", }, want: []string{ "TestTests", "TestTests/A1", "TestTests/B", "TestTests/B/B1", "TestTests/C", // FIXME there doesn't seem to be a way to select a non-terminal test // and all of its subtests if there is a longer path (in this case B/B1) // "TestTests/C/C1", // "TestTests/C/C2", // "TestTests/C/C3", // "TestTests/C/C3/C31", // "TestTests/C/C3/C32", }, }, { in: []string{ "TestTests/A1", "TestTests/B/B1", "TestTests/C/C3/C31", }, want: []string{ "TestTests", "TestTests/A1", "TestTests/B", "TestTests/B/B1", "TestTests/C", "TestTests/C/C3", "TestTests/C/C3/C31", }, }, { in: []string{ "TestTests/B/B1", "TestTests/C/C3/C31", }, want: []string{ "TestTests", "TestTests/B", "TestTests/B/B1", "TestTests/C", "TestTests/C/C3", "TestTests/C/C3/C31", }, }, } { runRegexp := testsToRegexp(test.in) cmd := exec.Command("go", "test", "-v", "-run", runRegexp) out, err := cmd.CombinedOutput() require.NoError(t, err) var got []string for _, match := range runRe.FindAllSubmatch(out, -1) { got = append(got, string(match[1])) } assert.Equal(t, test.want, got, fmt.Sprintf("in=%v want=%v got=%v, runRegexp=%q", test.in, test.want, got, runRegexp)) } } var nilTest = func(t *testing.T) {} // Nested tests for TestTestsToRegexpLive to run func TestTests(t *testing.T) { t.Run("A1", nilTest) t.Run("A2", nilTest) t.Run("B", func(t *testing.T) { t.Run("B1", nilTest) t.Run("B2", nilTest) }) t.Run("C", func(t *testing.T) { t.Run("C1", nilTest) t.Run("C2", nilTest) t.Run("C3", func(t *testing.T) { t.Run("C31", nilTest) t.Run("C32", nilTest) }) }) } rclone-1.53.3/fstest/test_all/test_all.go000066400000000000000000000104731375552240400203560ustar00rootroot00000000000000// Run tests for all the remotes. Run this with package names which // need integration testing. // // See the `test` target in the Makefile. 
// package main /* FIXME Make TestRun have a []string of flags to try - that then makes it generic */ import ( "flag" "log" "math/rand" "os" "path" "regexp" "strings" "time" _ "github.com/rclone/rclone/backend/all" // import all fs "github.com/rclone/rclone/lib/pacer" ) var ( // Flags maxTries = flag.Int("maxtries", 5, "Number of times to try each test") maxN = flag.Int("n", 20, "Maximum number of tests to run at once") testRemotes = flag.String("remotes", "", "Comma separated list of remotes to test, eg 'TestSwift:,TestS3'") testBackends = flag.String("backends", "", "Comma separated list of backends to test, eg 's3,googlecloudstorage'") testTests = flag.String("tests", "", "Comma separated list of tests to test, eg 'fs/sync,fs/operations'") clean = flag.Bool("clean", false, "Instead of testing, clean all left over test directories") runOnly = flag.String("run", "", "Run only those tests matching the regexp supplied") timeout = flag.Duration("timeout", 60*time.Minute, "Maximum time to run each test for before giving up") configFile = flag.String("config", "fstest/test_all/config.yaml", "Path to config file") outputDir = flag.String("output", path.Join(os.TempDir(), "rclone-integration-tests"), "Place to store results") emailReport = flag.String("email", "", "Set to email the report to the address supplied") dryRun = flag.Bool("dry-run", false, "Print commands which would be executed only") urlBase = flag.String("url-base", "https://pub.rclone.org/integration-tests/", "Base for the online version") uploadPath = flag.String("upload", "", "Set this to an rclone path to upload the results here") verbose = flag.Bool("verbose", false, "Set to enable verbose logging in the tests") listRetries = flag.Int("list-retries", -1, "Number of times to retry listing - set to override the default") ) // if it matches then it is definitely OK in the shell var shellOK = regexp.MustCompile("^[A-Za-z0-9./_:-]+$") // converts an argv style input into a shell command func toShell(args []string) (result string) { for _, arg := range args { if result != "" { result += " " } if shellOK.MatchString(arg) { result += arg } else { result += "'" + arg + "'" } } return result } func main() { flag.Parse() conf, err := NewConfig(*configFile) if err != nil { log.Println("test_all should be run from the root of the rclone source code") log.Fatal(err) } // Seed the random number generator rand.Seed(time.Now().UTC().UnixNano()) // Filter selection if *testRemotes != "" { conf.filterBackendsByRemotes(strings.Split(*testRemotes, ",")) } if *testBackends != "" { conf.filterBackendsByBackends(strings.Split(*testBackends, ",")) } if *testTests != "" { conf.filterTests(strings.Split(*testTests, ",")) } // Just clean the directories if required if *clean { err := cleanRemotes(conf) if err != nil { log.Fatalf("Failed to clean: %v", err) } return } var names []string for _, remote := range conf.Backends { names = append(names, remote.Remote) } log.Printf("Testing remotes: %s", strings.Join(names, ", ")) // Runs we will do for this test in random order runs := conf.MakeRuns() rand.Shuffle(len(runs), runs.Swap) // Create Report report := NewReport() // Make the test binaries, one per Path found in the tests done := map[string]struct{}{} for _, run := range runs { if _, found := done[run.Path]; !found { done[run.Path] = struct{}{} if !run.NoBinary { run.MakeTestBinary() defer run.RemoveTestBinary() } } } // workaround for cache backend as we run simultaneous tests _ = os.Setenv("RCLONE_CACHE_DB_WAIT_TIME", "30m") // start the tests results = 
make(chan *Run, len(runs)) awaiting := 0 tokens := pacer.NewTokenDispenser(*maxN) for _, run := range runs { tokens.Get() go func(run *Run) { defer tokens.Put() run.Run(report.LogDir, results) }(run) awaiting++ } // Wait for the tests to finish for ; awaiting > 0; awaiting-- { t := <-results report.RecordResult(t) } // Log and exit report.End() report.LogSummary() report.LogJSON() report.LogHTML() report.EmailHTML() report.Upload() if !report.AllPassed() { os.Exit(1) } } rclone-1.53.3/fstest/testserver/000077500000000000000000000000001375552240400166125ustar00rootroot00000000000000rclone-1.53.3/fstest/testserver/images/000077500000000000000000000000001375552240400200575ustar00rootroot00000000000000rclone-1.53.3/fstest/testserver/images/test-sftp-openssh/000077500000000000000000000000001375552240400234655ustar00rootroot00000000000000rclone-1.53.3/fstest/testserver/images/test-sftp-openssh/Dockerfile000066400000000000000000000004201375552240400254530ustar00rootroot00000000000000# A very minimal sftp server for integration testing rclone FROM alpine:latest # User rclone, password password RUN \ apk add openssh && \ ssh-keygen -A && \ adduser -D rclone && \ echo "rclone:password" | chpasswd ENTRYPOINT [ "/usr/sbin/sshd", "-D" ] rclone-1.53.3/fstest/testserver/images/test-sftp-openssh/README.md000066400000000000000000000005321375552240400247440ustar00rootroot00000000000000# Test SFTP Openssh This is a docker image for rclone's integration tests which runs an openssh server in a docker image. ## Build ``` docker build --rm -t rclone/test-sftp-openssh . docker push rclone/test-sftp-openssh ``` # Test ``` rclone lsf -R --sftp-host 172.17.0.2 --sftp-user rclone --sftp-pass $(rclone obscure password) :sftp: ``` rclone-1.53.3/fstest/testserver/init.d/000077500000000000000000000000001375552240400177775ustar00rootroot00000000000000rclone-1.53.3/fstest/testserver/init.d/README.md000066400000000000000000000020471375552240400212610ustar00rootroot00000000000000This directory contains scripts to start and stop servers for testing. The commands are named after the remotes in use. They should be executable files with the following parameters: start - starts the server stop - stops the server status - returns non-zero exit code if the server is not running These will be called automatically by test_all if that remote is required. When start is run it should output config parameters for that remote. If a `_connect` parameter is output then that will be used for a connection test. For example if `_connect=127.0.0.1:80` then a TCP connection will be made to `127.0.0.1:80` and only when that succeeds will the test continue. `run.bash` contains boilerplate to be included in a bash script for interpreting the command line parameters. `docker.bash` contains library functions to help with docker implementations. ## TODO - sftpd - https://github.com/panubo/docker-sshd ? - openstack swift - https://github.com/bouncestorage/docker-swift - ceph - https://github.com/ceph/cn - other ftp servers rclone-1.53.3/fstest/testserver/init.d/TestFTPProftpd000077500000000000000000000006531375552240400225610ustar00rootroot00000000000000#!/bin/bash set -e NAME=proftpd USER=rclone PASS=RaidedBannedPokes5 . $(dirname "$0")/docker.bash start() { docker run --rm -d --name $NAME \ -e "FTP_USERNAME=rclone" \ -e "FTP_PASSWORD=$PASS" \ hauptmedia/proftpd echo type=ftp echo host=$(docker_ip) echo user=$USER echo pass=$(rclone obscure $PASS) echo _connect=$(docker_ip):21 } . 
$(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestFTPPureftpd000077500000000000000000000011151375552240400227260ustar00rootroot00000000000000#!/bin/bash set -e NAME=pureftpd USER=rclone PASS=AcridSpiesBooks2 . $(dirname "$0")/docker.bash start() { docker run --rm -d --name $NAME \ -e "FTP_USER_NAME=rclone" \ -e "FTP_USER_PASS=$PASS" \ -e "FTP_USER_HOME=/data" \ -e "FTP_MAX_CLIENTS=50" \ -e "FTP_MAX_CONNECTIONS=50" \ -e "FTP_PASSIVE_PORTS=30000:40000" \ stilliard/pure-ftpd echo type=ftp echo host=$(docker_ip) echo user=$USER echo pass=$(rclone obscure $PASS) echo _connect=$(docker_ip):21 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestFTPRclone000077500000000000000000000006451375552240400223660ustar00rootroot00000000000000#!/bin/bash set -e NAME=rclone-serve-ftp USER=rclone PASS=FuddleIdlingJell5 . $(dirname "$0")/docker.bash start() { docker run --rm -d --name $NAME \ rclone/rclone \ serve ftp --user $USER --pass $PASS --addr :21 /data echo type=ftp echo host=$(docker_ip) echo user=$USER echo pass=$(rclone obscure $PASS) echo _connect=$(docker_ip):21 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestFTPVsftpd000077500000000000000000000006301375552240400224040ustar00rootroot00000000000000#!/bin/bash set -e NAME=vsftpd USER=rclone PASS=TiffedRestedSian4 . $(dirname "$0")/docker.bash start() { docker run --rm -d --name $NAME \ -e "FTP_USER=rclone" \ -e "FTP_PASS=$PASS" \ fauria/vsftpd echo type=ftp echo host=$(docker_ip) echo user=$USER echo pass=$(rclone obscure $PASS) echo _connect=$(docker_ip):21 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestS3Minio000077500000000000000000000007421375552240400220510ustar00rootroot00000000000000#!/bin/bash set -e NAME=minio USER=rclone PASS=AxedBodedGinger7 . $(dirname "$0")/docker.bash start() { docker run --rm -d --name $NAME \ -e "MINIO_ACCESS_KEY=$USER" \ -e "MINIO_SECRET_KEY=$PASS" \ minio/minio server /data echo type=s3 echo provider=Minio echo access_key_id=$USER echo secret_access_key=$PASS echo endpoint=http://$(docker_ip):9000/ echo _connect=$(docker_ip):9000 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestS3MinioEdge000077500000000000000000000007571375552240400226440ustar00rootroot00000000000000#!/bin/bash set -e NAME=minio-edge USER=rclone PASS=DeniseOxygenEiffel4 . $(dirname "$0")/docker.bash start() { docker run --rm -d --name $NAME \ -e "MINIO_ACCESS_KEY=$USER" \ -e "MINIO_SECRET_KEY=$PASS" \ minio/minio:edge server /data echo type=s3 echo provider=Minio echo access_key_id=$USER echo secret_access_key=$PASS echo endpoint=http://$(docker_ip):9000/ echo _connect=$(docker_ip):9000 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestSFTPOpenssh000077500000000000000000000005531375552240400227040ustar00rootroot00000000000000#!/bin/bash set -e NAME=rclone-sftp-openssh USER=rclone PASS=password . $(dirname "$0")/docker.bash start() { docker run --rm -d --name ${NAME} \ rclone/test-sftp-openssh echo type=sftp echo host=$(docker_ip) echo user=$USER echo pass=$(rclone obscure $PASS) echo _connect=$(docker_ip):22 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestSFTPRclone000077500000000000000000000006521375552240400225070ustar00rootroot00000000000000#!/bin/bash set -e NAME=rclone-serve-sftp USER=rclone PASS=CranesBallotDorsey5 . 
$(dirname "$0")/docker.bash start() { docker run --rm -d --name $NAME \ rclone/rclone \ serve sftp --user $USER --pass $PASS --addr :22 /data echo type=sftp echo host=$(docker_ip) echo user=$USER echo pass=$(rclone obscure $PASS) echo _connect=$(docker_ip):22 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestSeafile000077500000000000000000000035231375552240400221400ustar00rootroot00000000000000#!/bin/bash set -e # environment variables passed on docker-compose export NAME=seafile7 export MYSQL_ROOT_PASSWORD=pixenij4zacoguq0kopamid6 export SEAFILE_ADMIN_EMAIL=seafile@rclone.org export SEAFILE_ADMIN_PASSWORD=pixenij4zacoguq0kopamid6 export SEAFILE_IP=127.0.0.1 export SEAFILE_PORT=8087 export SEAFILE_TEST_DATA=${SEAFILE_TEST_DATA:-/tmp/seafile-test-data} export SEAFILE_VERSION=latest # make sure the data directory exists mkdir -p ${SEAFILE_TEST_DATA}/${NAME} # docker-compose project directory COMPOSE_DIR=$(dirname "$0")/seafile start() { docker-compose --project-directory ${COMPOSE_DIR} --project-name ${NAME} --file ${COMPOSE_DIR}/docker-compose.yml up -d # it takes some time for the database to be created sleep 60 # authentication token answer should be like: {"token":"dbf58423f1632b5b679a13b0929f1d0751d9250c"} TOKEN=`curl --silent \ --data-urlencode username=${SEAFILE_ADMIN_EMAIL} -d password=${SEAFILE_ADMIN_PASSWORD} \ http://${SEAFILE_IP}:${SEAFILE_PORT}/api2/auth-token/ \ | sed 's/^{"token":"\(.*\)"}$/\1/'` # create default library curl -X POST -H "Authorization: Token ${TOKEN}" "http://${SEAFILE_IP}:${SEAFILE_PORT}/api2/default-repo/" echo _connect=${SEAFILE_IP}:${SEAFILE_PORT} echo type=seafile echo url=http://${SEAFILE_IP}:${SEAFILE_PORT}/ echo user=${SEAFILE_ADMIN_EMAIL} echo pass=$(rclone obscure ${SEAFILE_ADMIN_PASSWORD}) echo library=My Library } stop() { if status ; then docker-compose --project-directory ${COMPOSE_DIR} --project-name ${NAME} --file ${COMPOSE_DIR}/docker-compose.yml down fi } status() { if docker ps --format "{{.Names}}" | grep ^${NAME}_seafile_1$ >/dev/null ; then echo "$NAME running" else echo "$NAME not running" return 1 fi return 0 } . 
$(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestSeafileEncrypted000077500000000000000000000040401375552240400240110ustar00rootroot00000000000000#!/bin/bash set -e # local variables TEST_LIBRARY=Encrypted TEST_LIBRARY_PASSWORD=SecretKey # environment variables passed on docker-compose export NAME=seafile7encrypted export MYSQL_ROOT_PASSWORD=pixenij4zacoguq0kopamid6 export SEAFILE_ADMIN_EMAIL=seafile@rclone.org export SEAFILE_ADMIN_PASSWORD=pixenij4zacoguq0kopamid6 export SEAFILE_IP=127.0.0.1 export SEAFILE_PORT=8088 export SEAFILE_TEST_DATA=${SEAFILE_TEST_DATA:-/tmp/seafile-test-data} export SEAFILE_VERSION=latest # make sure the data directory exists mkdir -p ${SEAFILE_TEST_DATA}/${NAME} # docker-compose project directory COMPOSE_DIR=$(dirname "$0")/seafile start() { docker-compose --project-directory ${COMPOSE_DIR} --project-name ${NAME} --file ${COMPOSE_DIR}/docker-compose.yml up -d # it takes some time for the database to be created sleep 60 # authentication token answer should be like: {"token":"dbf58423f1632b5b679a13b0929f1d0751d9250c"} TOKEN=`curl --silent \ --data-urlencode username=${SEAFILE_ADMIN_EMAIL} -d password=${SEAFILE_ADMIN_PASSWORD} \ http://${SEAFILE_IP}:${SEAFILE_PORT}/api2/auth-token/ \ | sed 's/^{"token":"\(.*\)"}$/\1/'` # create encrypted library curl -X POST -d "name=${TEST_LIBRARY}&passwd=${TEST_LIBRARY_PASSWORD}" -H "Authorization: Token ${TOKEN}" "http://${SEAFILE_IP}:${SEAFILE_PORT}/api2/repos/" echo _connect=${SEAFILE_IP}:${SEAFILE_PORT} echo type=seafile echo url=http://${SEAFILE_IP}:${SEAFILE_PORT}/ echo user=${SEAFILE_ADMIN_EMAIL} echo pass=$(rclone obscure ${SEAFILE_ADMIN_PASSWORD}) echo library=${TEST_LIBRARY} echo library_key=$(rclone obscure ${TEST_LIBRARY_PASSWORD}) } stop() { if status ; then docker-compose --project-directory ${COMPOSE_DIR} --project-name ${NAME} --file ${COMPOSE_DIR}/docker-compose.yml down fi } status() { if docker ps --format "{{.Names}}" | grep ^${NAME}_seafile_1$ >/dev/null ; then echo "$NAME running" else echo "$NAME not running" return 1 fi return 0 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestSeafileV6000077500000000000000000000030521375552240400223510ustar00rootroot00000000000000#!/bin/bash set -e # local variables NAME=seafile6 SEAFILE_IP=127.0.0.1 SEAFILE_PORT=8086 SEAFILE_ADMIN_EMAIL=seafile@rclone.org SEAFILE_ADMIN_PASSWORD=qebiwob7wafixif8sojiboj4 SEAFILE_TEST_DATA=${SEAFILE_TEST_DATA:-/tmp/seafile-test-data} SEAFILE_VERSION=latest . 
$(dirname "$0")/docker.bash start() { # make sure the data directory exists mkdir -p ${SEAFILE_TEST_DATA}/${NAME} docker run --rm -d --name $NAME \ -e SEAFILE_SERVER_HOSTNAME=${SEAFILE_IP}:${SEAFILE_PORT} \ -e SEAFILE_ADMIN_EMAIL=${SEAFILE_ADMIN_EMAIL} \ -e SEAFILE_ADMIN_PASSWORD=${SEAFILE_ADMIN_PASSWORD} \ -v ${SEAFILE_TEST_DATA}/${NAME}:/shared \ -p ${SEAFILE_IP}:${SEAFILE_PORT}:80 \ seafileltd/seafile:${SEAFILE_VERSION} # it takes some time for the database to be created sleep 60 # authentication token answer should be like: {"token":"dbf58423f1632b5b679a13b0929f1d0751d9250c"} TOKEN=`curl --silent \ --data-urlencode username=${SEAFILE_ADMIN_EMAIL} -d password=${SEAFILE_ADMIN_PASSWORD} \ http://${SEAFILE_IP}:${SEAFILE_PORT}/api2/auth-token/ \ | sed 's/^{"token":"\(.*\)"}$/\1/'` # create default library curl -X POST -H "Authorization: Token ${TOKEN}" "http://${SEAFILE_IP}:${SEAFILE_PORT}/api2/default-repo/" echo _connect=${SEAFILE_IP}:${SEAFILE_PORT} echo type=seafile echo url=http://${SEAFILE_IP}:${SEAFILE_PORT}/ echo user=${SEAFILE_ADMIN_EMAIL} echo pass=$(rclone obscure ${SEAFILE_ADMIN_PASSWORD}) echo library=My Library } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestSwiftAIO000077500000000000000000000005541375552240400222160ustar00rootroot00000000000000#!/bin/bash set -e NAME=swift-aio . $(dirname "$0")/docker.bash start() { docker run --rm -d --name ${NAME} \ bouncestorage/swift-aio echo type=swift echo env_auth=false echo user=test:tester echo key=testing echo auth=http://$(docker_ip):8080/auth/v1.0 echo _connect=$(docker_ip):8080 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestWebdavNextcloud000077500000000000000000000012221375552240400236600ustar00rootroot00000000000000#!/bin/bash set -e NAME=nextcloud USER=rclone PASS=ArmorAbleMale6 . $(dirname "$0")/docker.bash start() { docker run --rm -d --name $NAME \ -e "SQLITE_DATABASE=nextcloud.db" \ -e "NEXTCLOUD_ADMIN_USER=rclone" \ -e "NEXTCLOUD_ADMIN_PASSWORD=$PASS" \ -e "NEXTCLOUD_TRUSTED_DOMAINS=*.*.*.*" \ nextcloud:latest echo type=webdav echo url=http://$(docker_ip)/remote.php/webdav/ echo user=$USER echo pass=$(rclone obscure $PASS) # the tests don't pass if we use the nextcloud features # echo vendor=nextcloud echo _connect=$(docker_ip):80 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestWebdavOwncloud000077500000000000000000000014321375552240400235100ustar00rootroot00000000000000#!/bin/bash set -e NAME=owncloud USER=rclone PASS=HarperGrayerFewest5 . $(dirname "$0")/docker.bash start() { docker run --rm -d --name $NAME \ -e "OWNCLOUD_DOMAIN=${OWNCLOUD_DOMAIN}" \ -e "OWNCLOUD_DB_TYPE=sqlite" \ -e "OWNCLOUD_DB_NAME=oowncloud.db" \ -e "OWNCLOUD_ADMIN_USERNAME=$USER" \ -e "OWNCLOUD_ADMIN_PASSWORD=$PASS" \ -e "OWNCLOUD_MYSQL_UTF8MB4=true" \ -e "OWNCLOUD_REDIS_ENABLED=false" \ -e "OWNCLOUD_TRUSTED_DOMAINS=*.*.*.*" \ owncloud/server echo type=webdav echo url=http://$(docker_ip):8080/remote.php/webdav/ echo user=$USER echo pass=$(rclone obscure $PASS) echo vendor=owncloud echo _connect=$(docker_ip):8080 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/TestWebdavRclone000077500000000000000000000006651375552240400231470ustar00rootroot00000000000000#!/bin/bash set -e NAME=rclone-serve-webdav USER=rclone PASS=PagansSwimExpiry9 . 
$(dirname "$0")/docker.bash start() { docker run --rm -d --name $NAME \ rclone/rclone \ serve webdav --user $USER --pass $PASS --addr :80 /data echo type=webdav echo url=http://$(docker_ip)/ echo user=$USER echo pass=$(rclone obscure $PASS) echo _connect=$(docker_ip):80 } . $(dirname "$0")/run.bash rclone-1.53.3/fstest/testserver/init.d/docker.bash000066400000000000000000000006331375552240400221070ustar00rootroot00000000000000#!/bin/bash stop() { if status ; then docker stop $NAME echo "$NAME stopped" fi } status() { if docker ps --format "{{.Names}}" | grep ^${NAME}$ >/dev/null ; then echo "$NAME running" else echo "$NAME not running" return 1 fi return 0 } docker_ip() { docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $NAME } rclone-1.53.3/fstest/testserver/init.d/run.bash000066400000000000000000000002351375552240400214420ustar00rootroot00000000000000#!/bin/bash case "$1" in start) start ;; stop) stop ;; status) status ;; *) echo "usage: $0 start|stop|status" >&2 exit 1 ;; esac rclone-1.53.3/fstest/testserver/init.d/seafile/000077500000000000000000000000001375552240400214075ustar00rootroot00000000000000rclone-1.53.3/fstest/testserver/init.d/seafile/docker-compose.yml000066400000000000000000000015371375552240400250520ustar00rootroot00000000000000version: '2.0' services: db: image: mariadb:10.1 environment: - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} - MYSQL_LOG_CONSOLE=true volumes: - ${SEAFILE_TEST_DATA}/${NAME}/seafile-mysql/db:/var/lib/mysql memcached: image: memcached:1.5.6 entrypoint: memcached -m 256 seafile: image: seafileltd/seafile-mc:${SEAFILE_VERSION} ports: - "${SEAFILE_IP}:${SEAFILE_PORT}:80" volumes: - ${SEAFILE_TEST_DATA}/${NAME}/seafile-data:/shared environment: - DB_HOST=db - DB_ROOT_PASSWD=${MYSQL_ROOT_PASSWORD} - TIME_ZONE=Etc/UTC - SEAFILE_ADMIN_EMAIL=${SEAFILE_ADMIN_EMAIL} - SEAFILE_ADMIN_PASSWORD=${SEAFILE_ADMIN_PASSWORD} - SEAFILE_SERVER_LETSENCRYPT=false - SEAFILE_SERVER_HOSTNAME=${SEAFILE_IP}:${SEAFILE_PORT} depends_on: - db - memcached rclone-1.53.3/fstest/testserver/testserver.go000066400000000000000000000104721375552240400213530ustar00rootroot00000000000000// Package testserver starts and stops test servers if required package testserver import ( "bytes" "fmt" "net" "os" "os/exec" "path/filepath" "regexp" "strings" "sync" "time" "github.com/pkg/errors" "github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs/fspath" ) var ( once sync.Once configDir string // where the config is stored // Note of running servers runningMu sync.Mutex running = map[string]int{} errNotFound = errors.New("command not found") ) // Assume we are run somewhere within the rclone root func findConfig() (string, error) { dir := filepath.Join("fstest", "testserver", "init.d") for i := 0; i < 5; i++ { fi, err := os.Stat(dir) if err == nil && fi.IsDir() { return filepath.Abs(dir) } else if !os.IsNotExist(err) { return "", err } dir = filepath.Join("..", dir) } return "", errors.New("couldn't find testserver config files - run from within rclone source") } // run the command returning the output and an error func run(name, command string) (out []byte, err error) { cmdPath := filepath.Join(configDir, name) fi, err := os.Stat(cmdPath) if err != nil || fi.IsDir() { return nil, errNotFound } cmd := exec.Command(cmdPath, command) out, err = cmd.CombinedOutput() if err != nil { err = errors.Wrapf(err, "failed to run %s %s\n%s", cmdPath, command, string(out)) } return out, err } // Check to see if the server is running func isRunning(name string) bool { _, err := 
run(name, "status") return err == nil } // envKey returns the environment variable name to set name, key func envKey(name, key string) string { return fmt.Sprintf("RCLONE_CONFIG_%s_%s", strings.ToUpper(name), strings.ToUpper(key)) } // match a line of config var=value var matchLine = regexp.MustCompile(`^([a-zA-Z_]+)=(.*)$`) // Start the server and set its env vars // Call with the mutex held func start(name string) error { out, err := run(name, "start") if err != nil { return err } fs.Logf(name, "Starting server") // parse the output and set environment vars from it var connect string for _, line := range bytes.Split(out, []byte("\n")) { line = bytes.TrimSpace(line) part := matchLine.FindSubmatch(line) if part != nil { key, value := part[1], part[2] if string(key) == "_connect" { connect = string(value) continue } // fs.Debugf(name, "key = %q, envKey = %q, value = %q", key, envKey, value) err = os.Setenv(envKey(name, string(key)), string(value)) if err != nil { return err } } } if connect == "" { return nil } // If we got a _connect value then try to connect to it const maxTries = 30 for i := 1; i <= maxTries; i++ { fs.Debugf(name, "Attempting to connect to %q try %d/%d", connect, i, maxTries) conn, err := net.Dial("tcp", connect) if err == nil { _ = conn.Close() return nil } time.Sleep(time.Second) } return errors.Errorf("failed to connect to %q on %q", name, connect) } // Start starts the named test server which can be stopped by the // function returned. func Start(remoteName string) (fn func(), err error) { if remoteName == "" { // don't start the local backend return func() {}, nil } var name string name, _, err = fspath.Parse(remoteName) if err != nil { return nil, err } if name == "" { // don't start the local backend return func() {}, nil } // Make sure we know where the config is once.Do(func() { configDir, err = findConfig() }) if err != nil { return nil, err } runningMu.Lock() defer runningMu.Unlock() if running[name] <= 0 { // if server isn't running check to see if this server has // been started already but not by us and stop it if so if os.Getenv(envKey(name, "type")) == "" && isRunning(name) { stop(name) } if !isRunning(name) { err = start(name) if err == errNotFound { // if no file found then don't start or stop return func() {}, nil } else if err != nil { return nil, err } running[name] = 0 } else { running[name] = 1 } } running[name]++ return func() { runningMu.Lock() defer runningMu.Unlock() stop(name) }, nil } // Stops the named test server // Call with the mutex held func stop(name string) { running[name]-- if running[name] <= 0 { _, err := run(name, "stop") if err != nil { fs.Errorf(name, "Failed to stop server: %v", err) } running[name] = 0 fs.Logf(name, "Stopped server") } } rclone-1.53.3/fstest/testy/000077500000000000000000000000001375552240400155545ustar00rootroot00000000000000rclone-1.53.3/fstest/testy/testy.go000066400000000000000000000004131375552240400172510ustar00rootroot00000000000000// Package testy contains test utilities for rclone package testy import ( "os" "testing" ) // SkipUnreliable skips this test if running on CI func SkipUnreliable(t *testing.T) { if os.Getenv("CI") == "" { return } t.Skip("Skipping Unreliable Test on CI") } rclone-1.53.3/go.mod000066400000000000000000000062531375552240400142100ustar00rootroot00000000000000module github.com/rclone/rclone go 1.14 require ( bazil.org/fuse v0.0.0-20200407214033-5883e5a4b512 cloud.google.com/go v0.59.0 // indirect github.com/Azure/azure-pipeline-go v0.2.2 github.com/Azure/azure-storage-blob-go 
v0.10.0 github.com/Unknwon/goconfig v0.0.0-20191126170842-860a72fb44fd github.com/a8m/tree v0.0.0-20181222104329-6a0b80129de4 github.com/aalpar/deheap v0.0.0-20200318053559-9a0c2883bd56 github.com/abbot/go-http-auth v0.4.0 github.com/anacrolix/dms v1.1.0 github.com/atotto/clipboard v0.1.2 github.com/aws/aws-sdk-go v1.32.11 github.com/billziss-gh/cgofuse v1.4.0 github.com/btcsuite/btcutil v1.0.2 // indirect github.com/calebcase/tmpfile v1.0.2 // indirect github.com/coreos/go-semver v0.3.0 github.com/dropbox/dropbox-sdk-go-unofficial v5.6.0+incompatible github.com/gogo/protobuf v1.3.1 // indirect github.com/google/go-querystring v1.0.0 // indirect github.com/hanwen/go-fuse/v2 v2.0.3 github.com/jlaffaye/ftp v0.0.0-20200720194710-13949d38913e github.com/jzelinskie/whirlpool v0.0.0-20170603002051-c19460b8caa6 github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 // indirect github.com/klauspost/compress v1.10.11 github.com/koofr/go-httpclient v0.0.0-20200420163713-93aa7c75b348 github.com/koofr/go-koofrclient v0.0.0-20190724113126-8e5366da203a github.com/mattn/go-colorable v0.1.7 github.com/mattn/go-ieproxy v0.0.1 // indirect github.com/mattn/go-runewidth v0.0.9 github.com/mitchellh/go-homedir v1.1.0 github.com/ncw/go-acd v0.0.0-20171120105400-887eb06ab6a2 github.com/ncw/swift v1.0.52 github.com/nsf/termbox-go v0.0.0-20200418040025-38ba6e5628f1 github.com/okzk/sdnotify v0.0.0-20180710141335-d9becc38acbd github.com/patrickmn/go-cache v2.1.0+incompatible github.com/pkg/errors v0.9.1 github.com/pkg/sftp v1.11.0 github.com/prometheus/client_golang v1.7.1 github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8 github.com/rfjakob/eme v1.1.1 github.com/sevlyar/go-daemon v0.1.5 github.com/sirupsen/logrus v1.6.0 github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 github.com/spf13/cobra v1.0.0 github.com/spf13/pflag v1.0.5 github.com/stretchr/testify v1.6.1 github.com/t3rm1n4l/go-mega v0.0.0-20200416171014-ffad7fcb44b8 github.com/xanzy/ssh-agent v0.2.1 github.com/youmark/pkcs8 v0.0.0-20200520070018-fad002e585ce github.com/yunify/qingstor-sdk-go/v3 v3.2.0 go.etcd.io/bbolt v1.3.5 go.opencensus.io v0.22.4 // indirect go.uber.org/zap v1.15.0 // indirect goftp.io/server v0.4.0 golang.org/x/crypto v0.0.0-20200709230013-948cd5f35899 golang.org/x/net v0.0.0-20200707034311-ab3426394381 golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208 golang.org/x/sys v0.0.0-20200720211630-cb9d2d5c5666 golang.org/x/text v0.3.3 golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1 golang.org/x/tools v0.0.0-20200820180210-c8f393745106 // indirect google.golang.org/api v0.28.0 google.golang.org/genproto v0.0.0-20200626011028-ee7919e894b5 // indirect google.golang.org/grpc v1.30.0 // indirect google.golang.org/protobuf v1.25.0 // indirect gopkg.in/yaml.v2 v2.3.0 gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776 // indirect storj.io/uplink v1.2.0 ) rclone-1.53.3/go.sum000066400000000000000000002275451375552240400142460ustar00rootroot00000000000000bazil.org/fuse v0.0.0-20200407214033-5883e5a4b512 h1:SRsZGA7aFnCZETmov57jwPrWuTmaZK6+4R4v5FUe1/c= bazil.org/fuse v0.0.0-20200407214033-5883e5a4b512/go.mod h1:FbcW6z/2VytnFDhZfumh8Ss8zxHE6qpMP5sHTRe0EaM= cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU= cloud.google.com/go v0.44.1/go.mod 
h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU= cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY= cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc= cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0= cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To= cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4= cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M= cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc= cloud.google.com/go v0.56.0 h1:WRz29PgAsVEyPSDHyk+0fpEkwEFyfhHn+JbksT6gIL4= cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk= cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs= cloud.google.com/go v0.59.0 h1:BM3svUDU3itpc2m5cu5wCyThIYNDlFlts9GASw31GW8= cloud.google.com/go v0.59.0/go.mod h1:qJxNOVCRTxHfwLhvDxxSI9vQc1zI59b9pEglp1Iv60E= cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE= cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc= cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg= cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc= cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ= cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw= cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA= cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU= cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw= cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos= cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk= cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= github.com/Azure/azure-pipeline-go v0.2.2 h1:6oiIS9yaG6XCCzhgAgKFfIWyo4LLCiDhZot6ltoThhY= github.com/Azure/azure-pipeline-go v0.2.2/go.mod h1:4rQ/NZncSvGqNkkOsNpOU1tgoNuIlp9AfUH5G1tvCHc= github.com/Azure/azure-storage-blob-go v0.10.0 h1:evCwGreYo3XLeBV4vSxLbLiYb6e0SzsJiXQVRGsRXxs= github.com/Azure/azure-storage-blob-go v0.10.0/go.mod h1:ep1edmW+kNQx4UfWM9heESNmQdijykocJ0YOxmMX8SE= github.com/Azure/go-autorest/autorest v0.9.0 h1:MRvx8gncNaXJqOoLmhNjUAKh33JJF8LyxPhomEtOsjs= github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI= github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= github.com/Azure/go-autorest/autorest/adal v0.8.3 h1:O1AGG9Xig71FxdX9HO5pGNyZ7TbSyHaVg+5eJO/jSGw= github.com/Azure/go-autorest/autorest/adal v0.8.3/go.mod h1:ZjhuQClTqx435SRJ2iMlOxPYt3d2C/T/7TiQCVZSn3Q= github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod 
h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= github.com/Azure/go-autorest/autorest/date v0.2.0 h1:yW+Zlqf26583pE43KhfnhFcdmSWlm5Ew6bxipnr/tbM= github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g= github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= github.com/Azure/go-autorest/autorest/mocks v0.3.0 h1:qJumjCaCudz+OcqE9/XtEPfvtOjOmKaui4EOpFI6zZc= github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM= github.com/Azure/go-autorest/logger v0.1.0 h1:ruG4BSDXONFRrZZJ2GUXDiUyVpayPmb1GnWeHDdaNKY= github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc= github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k= github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= github.com/RoaringBitmap/roaring v0.4.7/go.mod h1:8khRDP4HmeXns4xIj9oGrKSz7XTQiJx2zgh7AcNke4w= github.com/Unknwon/goconfig v0.0.0-20191126170842-860a72fb44fd h1:+CYOsXi89xOqBkj7CuEJjA2It+j+R3ngUZEydr6mtkw= github.com/Unknwon/goconfig v0.0.0-20191126170842-860a72fb44fd/go.mod h1:wngxua9XCNjvHjDiTiV26DaKDT+0c63QR6H5hjVUUxw= github.com/a8m/tree v0.0.0-20181222104329-6a0b80129de4 h1:mK1/QgFPU4osbhjJ26B1w738kjQHaGJcon8uCLMS8fk= github.com/a8m/tree v0.0.0-20181222104329-6a0b80129de4/go.mod h1:FSdwKX97koS5efgm8WevNf7XS3PqtyFkKDDXrz778cg= github.com/aalpar/deheap v0.0.0-20200318053559-9a0c2883bd56 h1:hJO00l0f92EcQn8Ygc9Y0oP++eESKvcyp+KedtfT5SQ= github.com/aalpar/deheap v0.0.0-20200318053559-9a0c2883bd56/go.mod h1:EJFoWbcEEVK22GYKONJjtMNamGYe6p+3x1Pr6zV5gFs= github.com/abbot/go-http-auth v0.4.0 h1:QjmvZ5gSC7jm3Zg54DqWE/T5m1t2AfDu6QlXJT0EVT0= github.com/abbot/go-http-auth v0.4.0/go.mod h1:Cz6ARTIzApMJDzh5bRMSUou6UMSp0IEXg9km/ci7TJM= github.com/aead/siphash v1.0.1/go.mod h1:Nywa3cDsYNNK3gaciGTWPwHt0wlpNV15vwmswBAUSII= github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/anacrolix/dms v1.1.0 h1:vbBXZS7T5FaZm+9p1pdmVVo9tN3qdc27bKSETdeT3xo= github.com/anacrolix/dms v1.1.0/go.mod h1:msPKAoppoNRfrYplJqx63FZ+VipDZ4Xsj3KzIQxyU7k= github.com/anacrolix/envpprof v0.0.0-20180404065416-323002cec2fa/go.mod h1:KgHhUaQMc8cC0+cEflSgCFNFbKwi5h54gqtVn8yhP7c= github.com/anacrolix/envpprof v1.0.0/go.mod h1:KgHhUaQMc8cC0+cEflSgCFNFbKwi5h54gqtVn8yhP7c= github.com/anacrolix/ffprobe v1.0.0/go.mod h1:BIw+Bjol6CWjm/CRWrVLk2Vy+UYlkgmBZ05vpSYqZPw= github.com/anacrolix/missinggo v1.1.0/go.mod h1:MBJu3Sk/k3ZfGYcS7z18gwfu72Ey/xopPFJJbTi5yIo= github.com/anacrolix/tagflag 
v0.0.0-20180109131632-2146c8d41bf0/go.mod h1:1m2U/K6ZT+JZG0+bdMK6qauP49QT4wE5pmhJXOKKCHw= github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= github.com/atotto/clipboard v0.1.2 h1:YZCtFu5Ie8qX2VmVTBnrqLSiU9XOWwqNRmdT3gIQzbY= github.com/atotto/clipboard v0.1.2/go.mod h1:ZY9tmq7sm5xIbd9bOK4onWV4S6X0u6GY7Vn0Yu86PYI= github.com/aws/aws-sdk-go v1.32.11 h1:1nYF+Tfccn/hnAZsuwPPMSCVUVnx3j6LKOpx/WhgH0A= github.com/aws/aws-sdk-go v1.32.11/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0= github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/billziss-gh/cgofuse v1.4.0 h1:kju2jDmdNuDDCrxPob2ggmZr5Mj/odCjU1Y8kx0Th9E= github.com/billziss-gh/cgofuse v1.4.0/go.mod h1:LJjoaUojlVjgo5GQoEJTcJNqZJeRU0nCR84CyxKt2YM= github.com/bradfitz/iter v0.0.0-20140124041915-454541ec3da2/go.mod h1:PyRFw1Lt2wKX4ZVSQ2mk+PeDa1rxyObEDlApuIsUKuo= github.com/bradfitz/iter v0.0.0-20190303215204-33e6a9893b0c/go.mod h1:PyRFw1Lt2wKX4ZVSQ2mk+PeDa1rxyObEDlApuIsUKuo= github.com/btcsuite/btcd v0.20.1-beta/go.mod h1:wVuoA8VJLEcwgqHBwHmzLRazpKxTv13Px/pDuV7OomQ= github.com/btcsuite/btclog v0.0.0-20170628155309-84c8d2346e9f/go.mod h1:TdznJufoqS23FtqVCzL0ZqgP5MqXbb4fg/WgDys70nA= github.com/btcsuite/btcutil v0.0.0-20190425235716-9e5f4b9a998d/go.mod h1:+5NJ2+qvTyV9exUAL/rxXi3DcLg2Ts+ymUAY5y4NvMg= github.com/btcsuite/btcutil v1.0.1 h1:GKOz8BnRjYrb/JTKgaOk+zh26NWNdSNvdvv0xoAZMSA= github.com/btcsuite/btcutil v1.0.1/go.mod h1:j9HUFwoQRsZL3V4n+qG+CUnEGHOarIxfC3Le2Yhbcts= github.com/btcsuite/btcutil v1.0.2 h1:9iZ1Terx9fMIOtq1VrwdqfsATL9MC2l8ZrUY6YZ2uts= github.com/btcsuite/btcutil v1.0.2/go.mod h1:j9HUFwoQRsZL3V4n+qG+CUnEGHOarIxfC3Le2Yhbcts= github.com/btcsuite/go-socks v0.0.0-20170105172521-4720035b7bfd/go.mod h1:HHNXQzUsZCxOoE+CPiyCTO6x34Zs86zZUiwtpXoGdtg= github.com/btcsuite/goleveldb v0.0.0-20160330041536-7834afc9e8cd/go.mod h1:F+uVaaLLH7j4eDXPRvw78tMflu7Ie2bzYOH4Y8rRKBY= github.com/btcsuite/snappy-go v0.0.0-20151229074030-0bdef8d06723/go.mod h1:8woku9dyThutzjeg+3xrA5iCpBRH8XEEg3lh6TiUghc= github.com/btcsuite/websocket v0.0.0-20150119174127-31079b680792/go.mod h1:ghJtEyQwv5/p4Mg4C0fgbePVuGr935/5ddU9Z3TmDRY= github.com/btcsuite/winsvc v1.0.0/go.mod h1:jsenWakMcC0zFBFurPLEAyrnc/teJEM1O46fmI40EZs= github.com/calebcase/tmpfile v1.0.2-0.20200602150926-3af473ef8439/go.mod h1:iErLeG/iqJr8LaQ/gYRv4GXdqssi3jg4iSzvrA06/lw= github.com/calebcase/tmpfile v1.0.2 h1:1AGuhKiUu4J6wxz6lxuF6ck3f8G2kaV6KSEny0RGCig= github.com/calebcase/tmpfile v1.0.2/go.mod h1:iErLeG/iqJr8LaQ/gYRv4GXdqssi3jg4iSzvrA06/lw= github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko= github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= github.com/chzyer/test 
v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM= github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= github.com/cpuguy83/go-md2man/v2 v2.0.0 h1:EoUDS0afbrsXAZ9YQ9jdu/mZ2sXgT1/2yyNng4PGlyM= github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no= github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE= github.com/dropbox/dropbox-sdk-go-unofficial v5.6.0+incompatible h1:DtumzkLk2zZ2SeElEr+VNz+zV7l+BTe509cV4sKPXbM= github.com/dropbox/dropbox-sdk-go-unofficial v5.6.0+incompatible/go.mod h1:lr+LhMM3F6Y3lW1T9j2U5l7QeuWm87N9+PPXo3yH4qY= github.com/dustin/go-humanize v0.0.0-20180421182945-02af3965c54e/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/glycerine/go-unsnap-stream v0.0.0-20180323001048-9f0cb55181dd/go.mod h1:/20jfyN9Y5QPEAprSgKAUr+glWDY39ZiUEAYOEv5dsE= github.com/glycerine/goconvey v0.0.0-20180728074245-46e3a41ad493/go.mod h1:Ogl1Tioa0aV7gstGFO7KhffUsb9M4ydbEbbxpcEDc24= github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod 
h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE= github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4= github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls= github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY= github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y= github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw= github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw= github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk= github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= github.com/golang/protobuf v1.4.2 
h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0= github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/google/btree v0.0.0-20180124185431-e89373fe6b4a/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.0 h1:/QaMHBdZ26BB3SSst0Iwl10Epc+xhTquomWX0oZEB6w= github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk= github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/pprof v0.0.0-20200507031123-427632fa3b1c/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY= github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM= github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= github.com/gopherjs/gopherjs v0.0.0-20181103185306-d547d1d9531e h1:JKmoR8x90Iww1ks85zJ1lfDGgIiMDuIptTOhJq+zKyg= github.com/gopherjs/gopherjs v0.0.0-20181103185306-d547d1d9531e/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs= github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= 
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/hanwen/go-fuse v1.0.0 h1:GxS9Zrn6c35/BnfiVsZVWmsG803xwE7eVRDvcf/BEVc= github.com/hanwen/go-fuse v1.0.0/go.mod h1:unqXarDXqzAk0rt98O2tVndEPIpUgLD9+rwFisZH3Ok= github.com/hanwen/go-fuse/v2 v2.0.3 h1:kpV28BKeSyVgZREItBLnaVBvOEwv2PuhNdKetwnvNHo= github.com/hanwen/go-fuse/v2 v2.0.3/go.mod h1:0EQM6aH2ctVpvZ6a+onrQ/vaykxh2GH7hy3e13vzTUY= github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= github.com/huandu/xstrings v1.0.0/go.mod h1:4qWG/gcEcfX4z/mBDHJ++3ReCw9ibxbsNJbcucJdbSo= github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM= github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI= github.com/jlaffaye/ftp v0.0.0-20190624084859-c1312a7102bf/go.mod h1:lli8NYPQOFy3O++YmYbqVgOcQ1JPCwdOy+5zSjKJ9qY= github.com/jlaffaye/ftp v0.0.0-20200720194710-13949d38913e h1:itZyHiOkiB8mIGouegRNLM9LttGQ3yrgRmp/J/6H/0g= github.com/jlaffaye/ftp v0.0.0-20200720194710-13949d38913e/go.mod h1:2lmrmq866uF2tnje75wQHzmPXhmSWUt7Gyx2vgK1RCU= github.com/jmespath/go-jmespath v0.3.0 h1:OS12ieG61fsCg5+qLJ+SsW9NicxNkg3b25OyT2yCeUc= github.com/jmespath/go-jmespath v0.3.0/go.mod h1:9QtRXoHjLGCJ5IBSaohpXITPlowMeeYCZ7fLUTSywik= github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo= github.com/jrick/logrotate v1.0.0/go.mod h1:LNinyqDIJnpAur+b8yyulnQw/wDuN1+BYKlTRt3OuAQ= github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.10 h1:Kz6Cvnvv2wGdaG/V8yMvfkmNiXq9Ya2KUv4rouJJr68= github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU= github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk= github.com/jtolds/gls v4.2.1+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo= github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/jzelinskie/whirlpool v0.0.0-20170603002051-c19460b8caa6 h1:RyOL4+OIUc6u5ac2LclitlZvFES6k+sg18fBMfxFUUs= github.com/jzelinskie/whirlpool v0.0.0-20170603002051-c19460b8caa6/go.mod h1:KmHnJWQrgEvbuy0vcvj00gtMqbvNn1L+3YUZLK/B92c= github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 h1:iQTw/8FWTuc7uiaSepXwyf3o52HaUYcV+Tu66S3F5GA= github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8= github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q= 
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/kkdai/bstream v0.0.0-20161212061736-f391b8402d23/go.mod h1:J+Gs4SYgM6CZQHDETBtE9HaSEkGmuNXF86RwHhHUvq4= github.com/klauspost/compress v1.10.11 h1:K9z59aO18Aywg2b/WSgBaUX99mHy2BES18Cr5lBKZHk= github.com/klauspost/compress v1.10.11/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/konsorten/go-windows-terminal-sequences v1.0.3 h1:CE8S1cTafDpPvMhIxNJKvHsGVBgn1xWYf1NbHQhywc8= github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/koofr/go-httpclient v0.0.0-20200420163713-93aa7c75b348 h1:Lrn8srO9JDBCf2iPjqy62stl49UDwoOxZ9/NGVi+fnk= github.com/koofr/go-httpclient v0.0.0-20200420163713-93aa7c75b348/go.mod h1:JBLy//Q5jzU3XSMxdONTD5EIj1LhTPktosxG2Bw1iho= github.com/koofr/go-koofrclient v0.0.0-20190724113126-8e5366da203a h1:02cx9xF4W2FQ1oh8CK9dWV5BnZK2mUtcbr9xR+bZiKk= github.com/koofr/go-koofrclient v0.0.0-20190724113126-8e5366da203a/go.mod h1:MRAz4Gsxd+OzrZ0owwrUHc0zLESL+1Y5syqK/sJxK2A= github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8= github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg= github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348 h1:MtvEpTB6LX3vkb4ax0b5D2DHbNAUsen0Gx5wZoq3lV4= github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k= github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= github.com/mattn/go-colorable v0.1.7 h1:bQGKb3vps/j0E9GfJQ03JyhRuxsvdAanXlT9BTw3mdw= github.com/mattn/go-colorable v0.1.7/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc= github.com/mattn/go-ieproxy v0.0.0-20190702010315-6dee0af9227d h1:oNAwILwmgWKFpuU+dXvI6dl9jG2mAWAZLX3r9s0PPiw= github.com/mattn/go-ieproxy v0.0.0-20190702010315-6dee0af9227d/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc= github.com/mattn/go-ieproxy v0.0.1 h1:qiyop7gCflfhwCzGyeT0gro3sF9AIg9HU98JORTkqfI= github.com/mattn/go-ieproxy v0.0.1/go.mod h1:pYabZ6IHcRpFh7vIaLfK7rdcWgFEb3SFJ6/gNWuh88E= github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY= github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU= github.com/mattn/go-runewidth v0.0.9 h1:Lm995f3rfxdpd6TSmuVCHVb/QhupuXlYr8sCI/QdE+0= github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= github.com/minio/minio-go/v6 v6.0.46 h1:waExJtO53xrnsNX//7cSc1h3478wqTryDx4RVD7o26I= github.com/minio/minio-go/v6 v6.0.46/go.mod 
h1:qD0lajrGW49lKZLtXKtCB4X/qkMf0a5tBvN2PaZg7Gg= github.com/minio/sha256-simd v0.1.1 h1:5QHSlgo3nt5yKOJrC7W8w7X+NFl8cMPZm96iu8kKUJU= github.com/minio/sha256-simd v0.1.1/go.mod h1:B5e1o+1/KgNmWrSQK08Y6Z1Vb5pwIktudl0J58iy0KM= github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI= github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/mschoch/smat v0.0.0-20160514031455-90eadee771ae/go.mod h1:qAyveg+e4CE+eKJXWVjKXM4ck2QobLqTDytGJbLLhJg= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/ncw/go-acd v0.0.0-20171120105400-887eb06ab6a2 h1:VlXvEx6JbFp7F9iz92zXP2Ew+9VupSpfybr+TxmjdH0= github.com/ncw/go-acd v0.0.0-20171120105400-887eb06ab6a2/go.mod h1:MLIrzg7gp/kzVBxRE1olT7CWYMCklcUWU+ekoxOD9x0= github.com/ncw/swift v1.0.52 h1:ACF3JufDGgeKp/9mrDgQlEgS8kRYC4XKcuzj/8EJjQU= github.com/ncw/swift v1.0.52/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM= github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646/go.mod h1:jpp1/29i3P1S/RLdc7JQKbRpFeM1dOBd8T9ki5s+AY8= github.com/nsf/termbox-go v0.0.0-20200418040025-38ba6e5628f1 h1:lh3PyZvY+B9nFliSGTn5uFuqQQJGuNrD0MLCokv09ag= github.com/nsf/termbox-go v0.0.0-20200418040025-38ba6e5628f1/go.mod h1:IuKpRQcYE1Tfu+oAQqaLisqDeXgjyyltCfsaoYN18NQ= github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= github.com/okzk/sdnotify v0.0.0-20180710141335-d9becc38acbd h1:+iAPaTbi1gZpcpDwe/BW1fx7Xoesv69hLNGPheoyhBs= github.com/okzk/sdnotify v0.0.0-20180710141335-d9becc38acbd/go.mod h1:4soZNh0zW0LtYGdQ416i0jO0EIqMGcbtaspRS4BDvRQ= github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.12.0 h1:Iw5WCbBcaAAd0fpRb1c9r5YCylv4XDoCSigm1zLevwU= github.com/onsi/ginkgo v1.12.0/go.mod h1:oUhWkIvk5aDxtKvDDuw8gItl8pKl42LzjC9KZE0HfGg= github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY= github.com/onsi/gomega v1.9.0 h1:R1uwffexN6Pr340GtYRIdZmAiN4J+iw6WG4wog1DUXg= github.com/onsi/gomega v1.9.0/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA= github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc= github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ= github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= github.com/pengsrc/go-shared v0.2.1-0.20190131101655-1999055a4a14 h1:XeOYlK9W1uCmhjJSsY78Mcuh7MVkNjTzmHx1yBzizSU= github.com/pengsrc/go-shared 
v0.2.1-0.20190131101655-1999055a4a14/go.mod h1:jVblp62SafmidSkvWrXyxAme3gaTfEtWwRPGz5cpvHg= github.com/philhofer/fwd v1.0.0/go.mod h1:gk3iGcWd9+svBvR0sR+KPcfE+RNWozjowpeBVG3ZVNU= github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/sftp v1.11.0 h1:4Zv0OGbpkg4yNuUtH0s8rvoYxRCNyT29NVUo6pgPmxI= github.com/pkg/sftp v1.11.0/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= github.com/prometheus/client_golang v1.7.1 h1:NTGy1Ja9pByO+xAeH/qiWnLrKtr3hJPNjaVUwnjpdpA= github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M= github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.10.0 h1:RyRA7RzGXQZiW+tGMr7sxa85G1z0yOpM1qq5c8lNawc= github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.1.3 h1:F0+tqvhOksq22sc6iCHF5WGlWjdwj92p0udFh1VFBS8= github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8 h1:Y258uzXU/potCYnQd1r6wlAnoMB68BiCkCcCnKx1SH8= github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8/go.mod h1:bSJjRokAHHOhA+XFxplld8w2R/dXLH7Z3BZ532vhFwU= github.com/rfjakob/eme v1.1.1 h1:t+CgvcOn+eDvj2xdglxsSnkgg8LM8jwdxnV7OnsrTn0= github.com/rfjakob/eme v1.1.1/go.mod h1:U2bmx0hDj8EyDdcxmD5t3XHDnBFnyNNc22n1R4008eM= github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg= github.com/rogpeppe/go-internal v1.3.0/go.mod 
h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/russross/blackfriday/v2 v2.0.1 h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q= github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/ryszard/goskiplist v0.0.0-20150312221310-2dfbae5fcf46/go.mod h1:uAQ5PCi+MFsC7HjREoAz1BU+Mq60+05gifQSsHSDG/8= github.com/sevlyar/go-daemon v0.1.5 h1:Zy/6jLbM8CfqJ4x4RPr7MJlSKt90f00kNM1D401C+Qk= github.com/sevlyar/go-daemon v0.1.5/go.mod h1:6dJpPatBT9eUwM5VCw9Bt6CdX9Tk6UWvhW3MebLDRKE= github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo= github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= github.com/sirupsen/logrus v1.6.0 h1:UBcNElsrwanuuMsnGSlYmtmgbb23qDR5dG+6X6Oo89I= github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88= github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 h1:JIAuq3EEf9cgbU6AtGPK4CTG3Zf6CKMNqf0MHTggAUA= github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966/go.mod h1:sUM3LWHvSMaG192sy56D9F7CNvL7jUJVXoqM1QKLnog= github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM= github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc= github.com/smartystreets/goconvey v0.0.0-20181108003508-044398e4856c/go.mod h1:XDJAKZRPZ1CvBcN2aX5YOUTYGHki24fSF0Iv48Ibg0s= github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a h1:pa8hGb/2YqsZKovtsgrwcDH1RZhVbTKCjLp47XpqCDs= github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA= github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM= github.com/spacemonkeygo/monkit/v3 v3.0.4/go.mod h1:JcK1pCbReQsOsMKF/POFSZCq7drXFybgGmbc27tuwes= github.com/spacemonkeygo/monkit/v3 v3.0.5/go.mod h1:JcK1pCbReQsOsMKF/POFSZCq7drXFybgGmbc27tuwes= github.com/spacemonkeygo/monkit/v3 v3.0.7-0.20200515175308-072401d8c752 h1:WcQDknqg0qajLNYKv3mXgbkWlYs5rPgZehGJFWePHVI= github.com/spacemonkeygo/monkit/v3 v3.0.7-0.20200515175308-072401d8c752/go.mod h1:kj1ViJhlyADa7DiA4xVnTuPA46lFKbM7mxQTrXCuJP4= github.com/spacemonkeygo/monotime v0.0.0-20180824235756-e3f48a95f98a/go.mod h1:ul4bvvnCOPZgq8w0nTkSmWVg/hauVpFS97Am1YM1XXo= github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/spf13/cobra v1.0.0 h1:6m/oheQuQ13N9ks4hubMG6BnvwOeaJrqSPLahSnczz8= github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE= github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= 
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/testify v1.2.1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.3.1-0.20190311161405-34c6fa2dc709/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0= github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/t3rm1n4l/go-mega v0.0.0-20200416171014-ffad7fcb44b8 h1:IGJQmLBLYBdAknj21W3JsVof0yjEXfy1Q0K3YZebDOg= github.com/t3rm1n4l/go-mega v0.0.0-20200416171014-ffad7fcb44b8/go.mod h1:XWL4vDyd3JKmJx+hZWUVgCNmmhZ2dTBcaNDcxH465s0= github.com/tinylib/msgp v1.0.2/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE= github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c h1:u6SKchux2yDvFQnDHS3lPnIRmfVJ5Sxy3ao2SIdysLQ= github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c/go.mod h1:hzIxponao9Kjc7aWznkXaL4U4TWaDSs8zcsY4Ka08nM= github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc= github.com/vivint/infectious v0.0.0-20200605153912-25a574ae18a3 h1:zMsHhfK9+Wdl1F7sIKLyx3wrOFofpb3rWFbA4HgcK5k= github.com/vivint/infectious v0.0.0-20200605153912-25a574ae18a3/go.mod h1:R0Gbuw7ElaGSLOZUSwBm/GgVwMd30jWxBDdAyMOeTuc= github.com/willf/bitset v1.1.9/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4= github.com/xanzy/ssh-agent v0.2.1 h1:TCbipTQL2JiiCprBWx9frJ2eJlCYT00NmctrHxVAr70= github.com/xanzy/ssh-agent v0.2.1/go.mod h1:mLlQY/MoOhWBj+gOGMQkOeiEvkx+8pJSI+0Bx9h2kr4= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= github.com/youmark/pkcs8 v0.0.0-20200520070018-fad002e585ce h1:F5MEHq8k6JiE10MNYaQjbKRdF1xWkOavn9aoSrHqGno= github.com/youmark/pkcs8 v0.0.0-20200520070018-fad002e585ce/go.mod h1:ul22v+Nro/R083muKhosV54bj5niojjWZvU8xrevuH4= github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yunify/qingstor-sdk-go/v3 v3.2.0 h1:9sB2WZMgjwSUNZhrgvaNGazVltoFUUfuS9f0uCWtTr8= github.com/yunify/qingstor-sdk-go/v3 v3.2.0/go.mod h1:KciFNuMu6F4WLk9nGwwK69sCGKLCdd9f97ac/wfumS4= github.com/zeebo/admission/v3 v3.0.1/go.mod h1:BP3isIv9qa2A7ugEratNq1dnl2oZRXaQUGdU7WXKtbw= github.com/zeebo/assert v1.1.0 h1:hU1L1vLTHsnO8x8c9KAR5GmM5QscxHg5RNU5z5qbUWY= github.com/zeebo/assert v1.1.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0= github.com/zeebo/errs v1.2.2 h1:5NFypMTuSdoySVTqlNs1dEoU21QVamMQJxW/Fii5O7g= github.com/zeebo/errs v1.2.2/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4= github.com/zeebo/float16 v0.1.0/go.mod h1:fssGvvXu+XS8MH57cKmyrLB/cqioYeYX/2mXCN3a5wo= github.com/zeebo/incenc 
v0.0.0-20180505221441-0d92902eec54/go.mod h1:EI8LcOBDlSL3POyqwC1eJhOYlMBMidES+613EtmmT5w= go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= go.etcd.io/bbolt v1.3.5 h1:XAzx9gjCb0Rxj7EoqcClPD1d5ZBxZJk0jbuoPHenBt0= go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ= go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= go.opencensus.io v0.22.3 h1:8sGtKOrtQqkN1bp2AtX+misvLIlOmsEsNd+9NIcPEm8= go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= go.opencensus.io v0.22.4 h1:LYy1Hy3MJdrCdMwwzxA/dRok4ejH+RwNGbuoD9fCjto= go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= go.uber.org/atomic v1.4.0 h1:cxzIVoETapQEqDhQu3QfnvXAV4AlzcvUCxkVUFw3+EU= go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= go.uber.org/atomic v1.6.0 h1:Ezj3JGmsOnG1MoRWQkPBsKLe9DwWD9QeXzTRzzldNVk= go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ= go.uber.org/multierr v1.1.0 h1:HoEmRHQPVSqub6w2z2d2EOVs2fjyFRGyofhKuyDq0QI= go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= go.uber.org/multierr v1.5.0 h1:KCa4XfM8CWFCpxXRGok+Q0SS/0XBhMDbHHGABQLvD2A= go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU= go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee h1:0mgffUl7nfd+FpvXMVz4IDEaUSmT1ysygQC7qYo7sG4= go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA= go.uber.org/zap v1.10.0 h1:ORx85nbTijNz8ljznvCMR1ZBIPKFn3jQrag10X2AsuM= go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= go.uber.org/zap v1.15.0 h1:ZZCA22JRF2gQE5FoNmhmrf7jeJJ2uhqDUNRYKm8dvmM= go.uber.org/zap v1.15.0/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc= goftp.io/server v0.4.0 h1:hqsVdwd1/l6QtYxD9pxca9mEAJYZ7+FPCnmeXKXHQNw= goftp.io/server v0.4.0/go.mod h1:hFZeR656ErRt3ojMKt7H10vQ5nuWV1e0YeUTeorlR6k= golang.org/x/crypto v0.0.0-20170930174604-9419663f5a44/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190131182504-b8fe1690c613/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190513172903-22d7a77e9e5f/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200115085410-6d4e4cb37c7d/go.mod 
h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200604202706-70a84ac30bf9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200709230013-948cd5f35899 h1:DZhuSZLsGlFL4CmhA8BcRA0mnthyA/nZ00AqCUo7vHg= golang.org/x/crypto v0.0.0-20200709230013-948cd5f35899/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek= golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY= golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM= golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU= golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs= golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= golang.org/x/lint v0.0.0-20200302205851-738671d3881b h1:Wh+f8QHJXR411sJR8/vRBTZ7YapZaRvUcLFFJhusH0k= golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE= golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o= golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc= golang.org/x/mod v0.1.0/go.mod 
h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY= golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0 h1:RM4zey1++hCTbCVQfnWeKs9/IEsaBLA8vTkd0WVtmH4= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190415214537-1da14a5a36f2/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20191112182307-2180aed22343/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod 
h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20200707034311-ab3426394381 h1:VXak5I6aEWmAXeQjA+QSZzlgNrpq9mjcfDemuexIKsU= golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw= golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208 h1:qwRHBd0NqMbJxfbotnDhm2ByMI1Shq4Y6oRJo21SGJA= golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190221075227-b4e8571b14e0/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190415145633-3fd5a3612ccd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191112214154-59a1497f0cea/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200610111108-226ff32320da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200720211630-cb9d2d5c5666 h1:gVCS+QOncANNPlmlO1AhlU3oxs4V9z+gTtPwIk3p2N8= golang.org/x/sys v0.0.0-20200720211630-cb9d2d5c5666/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod 
h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1 h1:NusfzzA6yGQ+ua51ck7E3omNUX/JuqbFSaRGqU8CcLI=
golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200622203043-20e05c1c8ffa h1:mMXQKlWCw9mIWgVLLfiycDZjMHMMYqiuakI4E/l2xcA=
golang.org/x/tools v0.0.0-20200622203043-20e05c1c8ffa/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200820180210-c8f393745106 h1:42Zs/g7pjhSIE/wiAuKcp8zp20zv7W2diNU6arpshOA=
golang.org/x/tools v0.0.0-20200820180210-c8f393745106/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.28.0 h1:jMF5hhVfMkTZwHW1SDpKq5CkgWLXOb31Foaca9Zr3oM=
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6 h1:lMO5rYAqUxkmaj76jAkRUvt5JZgFymx/+Q5Mzfivuhc=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940 h1:MRHtG0U6SnaUb+s+LhNE1qt1FQ1wlhqr5E4usBKC0uA=
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20200623002339-fbb79eadd5eb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200626011028-ee7919e894b5 h1:a/Sqq5B3dGnmxhuJZIHFsIxhEkqElErr5TaU6IqBAj0=
google.golang.org/genproto v0.0.0-20200626011028-ee7919e894b5/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.28.0 h1:bO/TA4OxCOummhSf10siHuG7vJOiwh7SpRpFZDkOgl4=
google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
google.golang.org/grpc v1.30.0 h1:M5a8xTlYTxwMn5ZFkwhRabsygDY5G8TYLyQDBxJNAxE=
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0 h1:4MY060fB1DLGMB/7MBTLnwQUY6+F09GEiz6SsrNqyzM=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/ini.v1 v1.42.0 h1:7N3gPTt50s8GuLortA00n8AqRTk75qOP98+mTPpgzRk=
gopkg.in/ini.v1 v1.42.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776 h1:tQIYjPdBoyREyB9XMu+nnTclpTYkz2zFM+lzLJFO4gQ=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4 h1:UoveltGrhghAA7ePc+e+QYDHXrBps2PqFZiHkGR/xK8=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
storj.io/common v0.0.0-20200729140050-4c1ddac6fa63 h1:BkRvlginTJGi0yAkpN+4ZKm2YpG63bDSDFLQtXYxxdg=
storj.io/common v0.0.0-20200729140050-4c1ddac6fa63/go.mod h1:ILr54ISCqCQ6MmIwT7eaR/fEGrBfgfxiPt8nmpWqnUM=
storj.io/drpc v0.0.14 h1:GCBdymTt1BRw4oHmmUZZlxYXLVRxxYj6x3Ivide2J+I=
storj.io/drpc v0.0.14/go.mod h1:82nfl+6YwRwF6UG31cEWWUqv/FaKvP5SGqUvoqTxCMA=
storj.io/uplink v1.2.0 h1:7gGfkTv7zT9ivSCMqu7QirUMdHVaeBnlZgqgWhRKEd4=
storj.io/uplink v1.2.0/go.mod h1:U7VFTdoZiBgmijKdvB5dUfi6V5Ew9PAHhUVyy8rJWBs=
rclone-1.53.3/graphics/000077500000000000000000000000001375552240400146745ustar00rootroot00000000000000rclone-1.53.3/graphics/cover.jpg000066400000000000000000460604321375552240400165330ustar00rootroot00000000000000
[binary JPEG data omitted: graphics/cover.jpg, a Canon EOS 7D photograph by Nick Craig-Wood (EXIF dates 2013:07:26 and 2014:03:16, processed with AfterShot Pro 1.2.0.7), containing an XMP packet, an embedded sRGB IEC61966-2.1 ICC profile (Copyright (c) 1998 Hewlett-Packard Company), and Photoshop 8BIM metadata]
NJ%%GV2rzs*P |`򢏒B.jaӤGFLklmOx)8ɰ#NvvZz|Ơ1=v{jqtc:տ=ABV),˶F?j馪{= qTjVɭaccMO^~8өOߋΜEyvZhLJlsϿҽ(mM%/$عfde]͖X3ٜim=*l<_#p95E=u6!g}7,,wwl*PFNW,$*CtUSSck%ؚ dv+ иn6Eu7p˵#BM#xrǙIhބ IY7.UqӠ%)kOB@@pq>ǖO?t**U#̴lullBRѺ>h>u y&bthmUx9_=ͪ*$UW}H``8<~>M<;~2HRP$fC Ϸ[sR_Ȋq⹕V˜3Ṳ4Χ7H%vF_3l|m'4ػ-H7h8R\h5jst+ ?yc9Go(^U%Ѕ%`Wm;Vg{MH1dqϹTmAƯ584Q[1YygܥwR[`#bn\˩JZ)q""ƦF8cV6dV$WG"*m41=w7۷T`~_(˖:v{Onܥ`TlT)[Gd@8չ~Թ\f*P8+d3XĖkvߓKN0hY" H9U'8? 柴OD5|AmU]WZsՕ&ؖYU]庌O\rwz2ݼ}gX߯*oӗJ.4+nг䰒Mأ#8#ӞNUc5%QoR }ʱ`Bɞ ]Qm'$\_NqSXyV* w30;zJ^R9۠-|¶Z4ӖB"ʥgL575d\4mf>cmc'Ka.QڒkdRdق?\JƬyeԗILjqF0GŦ.4O^*ı3UN-+t9rNwQ<8sj{XVr1b4xm:h>V*udor]~I ӯOJNQʣ咺ȋ } xϟ:}tWr9H|Œn0唝jSKxyn6XK;U895r)c.H_ʾ1N*So7w۩mDcr1ϦO󢧻gRRkJMpHmld~\{ɩYdovWxTDylȍKSjiQ ?Yaui15''u' #*c#7Snێv0*>iPm4nwrIPx#'ҳ[iVTX5l=8:WD/4tlE\$Jmrs8+zfJ-{>Ӕ~o4bdd-cee".]ܒ_hcQ#$gWzL~/KI4xR8 {Tr-ˏqW{~#QϛnYY;7ǥk(}_R^$ĺDB̩1WUpgO| -Զnpb}sU.nc(5#h-{١UWӦOץ\_.N[Dͭa3>f&eQp3=kCJ,$} +m OsMխaޜ[KFYI]f֍@Hx֜csUړ4,Y {+WI]_]ju+I 96ekcF*Wߠef95|{:I-`G]#܏JwG5X^bHVVqWBt$AW~:?=-Eٕ۸2s>s%+sΓUGFѓRʩ$7'zu9]{y"ǽqR.7QOSc.[GR1,JRQ<6dN9G+n)mmnF)VRwczr*T~_es(?JQ喤2q |1Ŏ3'U/1{9|+W .˅۸' ΚwwFGhqĶYT`cI|yPnzq|S\:ƌLvPs9e;nW]p+kwmR|dS LF?x݃n<+E&i~_2;/ʒAG 䲀>85rX~ďV~sTԪue(܆wbY.d:֑QӔtІA5kqUXP%۶w1OR;JGQsӃn$nZ3nV;b8EC̔D/=$mOQiOzNB(/ސJѻqCޮIF2 2T&x=~=ҔofU:4UN.%ݻc9?NK)Fu$'+]r FTu9֚tJ1~e7$,dެ+>h-K__Vc=Nѻ#>}kH7Hꌜo=cTJ2Bs9W0\ЄǼynel9j_թFiWrK0ym:dUI Џ- e+{katS嗼iOTe@7,lN1h(([ST:oqY0ƤjJ.EqelF%v~l(OVS:t*KUn{i'HU݆XV993~P̰Fcuܱv<52RjSu}FG%0F]y8W3Ź_2E0H8U澬ƥN]l4JcfS?<\+6Nd1\Jkcrղ8?Ls}|Wt|Ƨn9NSWRU"U>ТC`}@-RxŨUskc oR:eOzk4u#)uиVl1oz_ݽni޷#@c7 :1Mr9FT(/Ь"0_g#לҟg*8g۰A"G#tVu=89\ %)] mgbv Uq~Uu+AD'}p3׏\beN+*6|,xךQr>JKHYB6K|uzV.Q-N)B0vԍPjlXUI6eS6xfw93KSLCXr)JMبەDhG`sX򮺜EHˈo}k)*v$ʒ-$(u%R'4O@rʬAx] \”_Ϸɭ'wJ(؀Ri#ϝ59])慒v;'<;KMœ+rZzqo-YqNQSOcۡ]"V5l+dV4NRC?2C覭'ѿ>[7l˷syOӚQ2Kbjrs(OC:yd[yLvͮ TB,vup^ 0l_q;\{tFM}M(ؐ+HJ4'b+dF38eN |:. NȊN1Tk笘#4woW=A4JRqVTђ,}>KQOFe\eUb65!fmsRvrhӌj>h G!+[Tr1~5T+"R+-j3e1(ђ{>Jrlsau}.4F,q-w)(1~@V|ksJk1fffbVRĊt+>`T*yN|ؚ|tX$yeAۨvi>ӡ&C$3mҲNkOOVE(Dɹ[rcq8ϭiJW8*rʴ`In7/] m޿N ʷ,uv:s6HֲZy6\g~ʜNXԭ*YVbdKOiqӫ't-ᵖLI4**;޴揳S۩N8|膤=]xkg 3qӎkɔ\&gliǗ^C2aSڪrr 8ڹjJlrYU*k}Jw̶C򍤪@-(-J*^t,ՇDɴ#gRy8$d~y汯:4'_5"[_ q`1UJh)84*KfG˻4+/2vb@;@,bS>ej0guIm I$FcݑӶ?5)Ӕ1VNO[h! s rI뜌qui?8sQR"nvo"#(d8sއN8_R%e%.e$MUݜtEEJժEXtҨq+٢Iı)^kU:i]+bj*i;jǬFÏ1ANx-YJm~'x,#2zu"jI+NӶ>28aDPUvm'؞cQƋQmSzjO,]Duf |)nyۑQ|G*T| 8-ђ}JݷHJGp?_Z*U"}Y4Ӆ9*ZXOWb xc8ݑPJosƬQZ'Q'1tvnUY}#NrnUL>K[nʆ\:&qdvkOAE<ג^q.rLr>z]?_&MNRM.ߓ]S"!5yNTIT[rVȎ4j9%Uߑc$ځw:~Td}U*iē<0O:5STgAr֫S݂Isiu֓}{kď|0⽳^7TΝ>h}Fm=ĉ;#rUUH8<9']Art3*KFxh@wO1MiTպ/ Z m* XXSn9F3'M${utFR -"QD.d'8OcIs(zZFM]|B8c6Z5w:ןR*Si%t0SK5vK&f+1zSc)SyJ&˯.G<~Ӎ5jkwqO Ky| NNyYJ5UIa9|^@Jü\;_^sS'!GksRivA[:TfP >>5NK+1cYכqb8ssQJruϮև*嶛㈛Ɩk|Cg'?ȡB;vnE6ukL)pch]v?Z*JTZ1ҍ7BPMw>7f| )?{9ϭm헱~ګdQupZK vm0Ts^HVʵՇpQbxLaTzwˇ\ܭ[eFnfeA[ӞsW GQI뿙'1nxvm28<ӺǟJ1*,a]y8j$zsI-EYxɍz1.^Z8.J{42y+ +?vkhJRS$*[SdEsh֖o%Ă8a!1V©[mpǽ:ZM;8Y$O''*)rJmN8NjYkW}|KܜQ898VN.lp֎"8?Bgg,6X#:JU*NU? 
¼Mi}HPhgf$glO*4k=uR\T 'YI#&2CeH>) XJR߯9pyT)^>Imǒ?6]*qm-+QZ1Th1ʒ̄-sIrz)*2G+HەXg9[q}ZrZ- Fd]/ G<ZܲmG9\"Z0ĉͫ dh֊]Y(SWPF _)w̥A=/z^4/m{1bmu2K&ՌRGɮs]7Q2s{"Hn e |[Wc)b)Bc[Ym5 71uYFTmƣo,6 řNP{UUM|RG/uoHDr3@$t#w*)Sj*̔K,۹ӿQ5[?þgednFssPxܣ'voGsйNKkr I8u%?QƟke,;q~yoЩǚ=y{2cqWwkGuߩUIS]{ y;Xv6=3 UR[5cCM=Jr2/~?tϚTԓt1ѧrMƤ$m=srɽOR<ΕJlFn&W2G#/A*H>p;r3W#MY_ dCq~tcRی-a:ӄ}=uөe~Ji~^FnwgyVח7-qЂkcjlRL=|.sVԌV3[XI&e秿LUFfgX}YM[ɎHY~?uF^7Rij/M$حػIm!Oq4ѭ(H1ZFf瞝}Nd؊̐VUS RNJ85)rFIxc=>SBN3voe0?JX֌Pq^O;ߛm8^qZQ?BD|ʪۏmyj*F29,Mi4eRέFtzAN8NT^.[Xun$TƤl8tMnu EJyl3y'SsdMNikF.YT^wnesGRi;ՉF԰dn&PwO?aRJ4Rn]}IVO/lW#9vOK'^5*Zש 7̬F35R?a=VQJW&βL+–9'ֹdogRDL#(NGP~R.Ue3Rpz[AopB\G/ce3Ө gNUmch-S[IZefspz9.ORxdq(I<'8l\ pUA>dIS Deb;5sQԄy^}Gۥ919SeWTi֧_GԵf3,2*@98'FQ2=5TZURִ!'}"rFu4 ԑͶeyx^o@ktc]9Jl]|-m~1U%܎N9FjM~jy~hD &vym ay?µiQ O+ cMI&q,X1׮HJF2MꋍiS-ޏr`A#>U*xOzs\vJJK:,̬R0 ǿOJ)J-IJ=-Ry.ff3*X~hR}NPkc$iؒ|ɚ $ss"+G8=U%N1ѽ޺1\+UN]u0te㽯Eno1WyONyshJ8t%eUVOnN-M9:0:1Tm<ۀY7c1TV~>iʥpV㑁ڳ;|2'U%v}7KdzF1\=Wi(MIԌ{M$NȝvƬ̣}ɬ;KMQ 4ٍX!-㞧9?¦R|J1I]]txֆq"dV Qp9ߠqU*MymۧO^]̄yeu~93ZG݋|||c$ܣ?*1ҜNȮj2fm67io232}YUq|g){6wJ믠K+"-Ϝފ~w'/b:=~.**| 0fߟ1{19Cv<;[O}~_-<=+$Ly#{)Hc~p{VգR[7ݞ~o\Q,-a!elmi[~08ÿij+v{pEkrCjedfY':ۙ_teM6֯"O3nޠZ?W\˷y%NTԮ~ʰnWj1)ʿ$RV[-tcO%{;NKiDm* tOzp"4# F%+BIeZ\ ge8U6ݺk'3aٻcu#kbjTR}NoTZ+t8N9+LTT>=e= Z5?,.'=:ҒeO&ڗ3R2*.ʞ"SRS9akb3nu7oHт?1~e6rwmQney#e 7yjNѻ֬ѧ-WNTt{Ju4oh0dRb>8xWeMݯJ\[/'q1ʆ'xUKq5hWm|aY0 jc%;hk\#Q}]BFdnfV;ps=Z؉Ƨ,b~˷raFRs}H-YU+m+ZrjzJv!̑Ho`<۸JaA6 Q0rLFtݮ-aQԌſ]t-]FO5+nFaX#y$mXtiWj-_t>ҡw 8?λ#J\}-F#hf%,T= YEr5jTu7uΜT~^mO2ڴ_*?3wK<kQnQWa?g߫ : b^1Z9̝1zkɮm_ciRFZ4QjeP *3⺥'TiYuP['xZ529sӿ^=t)i}_c<=aTvvms}Ӎz(ʚ*}R#V4s_wdbp8# sE:1Wz~w/aPoRAYK3IUnuR=Ꮉaih5HCYsӧO%c.hD 54Ў .O/q&9=:ץc9W`VT o˕};VtԵUtF*j\:mdaҏ,r4?y[Bcm$̲~+ xgzV0IqOEeITv^Kn' $oE_\zJe74{o2n r1*ꓫrR ){HFOͻ$@l;cqwLOaNt\&84&9F^ou!xC* ?^=ƻ)rџOa}4^In%b.WǪT/i&̮؆6U0LծDm2C[1GRU[GB{i iNqʕEKJ2JzC/9AAonʜc 4 W(վKe(f Gw#pN2)Fzu^[H䯃r}܇}ae 6ݜlAjxnJGp !y<ztꧢZ[=hӅtn0*:ڧ},0A@>jZv[ѣMrmO#jʧrr1O.U kw3EV '5G]Ti{XT[QH6VP9nǽ ۦ Se w"PwpG?δ[2 JnKtbU؋yާ)~dSIK-J10F7Fہ~#FؼE}U_֣a԰b C 1ן~+GI]JtckE}&IܚTˎѻFBϓpd$tUn64pRҌCm^Fip[>]<<+Krjt7} ɺ]ʅU`6Ns:Qio88ԒsN렰 o2m!9sV[.{]ugܷ+Kr$hq|s:~Е9%Mmjͥvfll-+gs'ԏ_uӌܕӸVTlSF $HKmqs>/8=[~A_440[9SQ&wi7eYXc<׻t+FӳKs)rIЦ˦te9E(E*f`{vk Q%*g۞0p11Z*Υ8]Zc(V2~oXJ=Ti!cq7 un}ý]MЊkTHb3OA3<<&R2f{ap6ګ繨%Q%Ս2\rqt\\.ԴW1UiY%<;s\a+v3C>Z!=:~8bG6"=>ihZAlg]:ĭIUZ}{_5C.6{㓁cI[CqNG|ܽ{_asIjyq1%;|o%%[Ԏ~^_M{gZ mwuEVBꐮeϧOV*Qv3){뷐MDo3G^y+GI+{ըKY]|<_`D  >MUhV΍?gFԵaV|Xdg78t\wUΊrAcY-e(1lyqڧE$>ezXuNvvW"IdIXV?e/#܎F*:Nw9ԵntGʧ呤he9Leӡ+ͧM7ȦݽzykxhEē368q[ƴt~BdMR]4!o7fCmc<کƓԝƦ+ S.\Vih!QG)ƚ;PWөN.5#Vn-%2G$Nɱ8 11ү+[$<_}Ҿٛt/̻g=+K'}bYvltӮ]׉2>4S_;RU&Vyc zaZNkؚ|T7gTǯ'u9F5i/c%;Ǚ#f=s׏_\թ>^롵oƒErkTXE`Uv_J<}{IRKE-A,n\ZՄZt}#IX(I O9)SQkk| Cqw xnWxYv#3Rq{b,EiEidR*0;N=1u)1ZisʚݫnmĶ=h;.I :q\49^tdY {`=p{fcnS[ԋH Uh]͒p2p8=+5cNv_)SD[ٯ>X 0|nI#WJ4d'"8DҟӨ b~Q\~(Ǟ]zhcRUZho\fY>JP=8j>׹Y" 9%;E5N[=%ʢƫ.bc$sn57O}17>[H@S y<8ڷjJ{>ǝS쉰{$?NiE٦tWpdvh $ CmN=VScd`k[fo[jB7~T**sE|Z+,e1${~Rҹ}n2RpoZE@=xy-:Z¥{C&*DQَ3k*RwEqQfVat{w##ұS̤\0ucKSCy&2`4Ucs޺VVaZ1!0Hᑙm9]/~Ss4)7Hgf8vž?QtNqYZR~fUui$,a@+zp} qiq撿[v20>[Ɵ<+&CNZ'o?{hάsFJ8kRS  +ӱR~^k6=jhRNZe}W~Nd|b-$<ʝX7VԾ]Ug^kkJ9(_W>{.d8b ;۟-B8vr9iUR;!JBW2%j9QxŨ܎:妞BIð_ ֱ2wvOZ* PjD;dqmFqoх?ݯ[~DPMYdX'8R*;=ZMywf2rۀPGL~=jFT)CgnjD 28N1X֭kR-fͣ\4dyن\RqKC1)+V|ɹv6Gsck)J4 *|y$!V8fmenkNS|XƏ.eд$*+E)L#$m>eo[] qlYA9/ʰN}uV}ITӇ2KD\@o(^ݺoTjQe4*0鯮6#ZTBefanazƒMlTiC=;YD}^rsJX.Z'R=4^͖(ٲxTp0q?*S_95nOA:"i#8ⴌyzuGݧ+f2ݜ1Y'O}ϗQڬ3o_0s=++|a{~dkFdir^uҲs܈ѝ8L$ K+L3YUzޜߚu#)>RT7_rEmJK"elzZp dRr2fFFfeU_y<|{ʰdiǣ\/m;MO?0qvRPߟf2,'DR=դcaF.sQ[JF$RQkmj%p0(~U9dE`Gװ>G8ߢ F!Ti/q6R1Nzw#];^ZUebrp=zJu*?coO/9'붫% @B6^cf*u#I.[}:J4~F-ַ1'[n?O횩JQ*3ET\WhFWY(,w`#' %'mNz!&e\y{}Hj~T(wZ.~)n 
wt\XZn3ק)VU~D9yٓt7C޵e*rĒn%uE;Tzֺ=?kժbZP9axȥzz/]tG]-niNISj/Tzرo'4jL 8RW.ܲum¡ml:ح(~顩ro(W!nYzi S4%^B[!65EKfz1ܻ~$ !+k%2X|n<)~egUvBegmM)8EH`XI) NҽJ2\莙THF:+o.J(w2FN2k3FsJi~Ӳ{F ?!G ; "x3[vF9e]1ȭ1 תIF0Zqv#/])6Wn1z卥XS&#x 7,-9;֍b{Q]u)akpG攥ki[Jj۱$Y1V\v֫12戍ͷs}i>emޯa);den6n;:HW3ۧUw%Zi4Dv6ѕ }_cTy;agSg& tI>mܲBU I͞@> Ԅj% Sq) 7``#8ٴ9.kh2ϷT`XzwƧ+F-Ft W r0}j{ٛSo8AX#i_-@9Smm4N˩Y+|RNt)6hr\$~#ۖ>gN11qks8JjY=Ivм;%RxۿZLhQrOKQC؊ybe%4aN1a\+܋}涗2NLR{&XsF$dlFrITIoja6 nj2ZAz?gV>^z`ҳuK H7q~2qu^D" ˍf7.)J>Z;yo>Yp[?)Fs)<۫M%0y5Υ&c*6RF7 ׭KcS^Y簆F!b2`<ݿ wN6;iѓ2cLZW- c=\S-Ik2:O2k1;?EH-]:pVwCX{jI¯g}M [e],~{jZ1~qs(,UsQJSKtɴ=@:sh惚Qj1"f";2~pPI*f d(M8q=k;ƚ b,74I2XgryxZWj:i"bC727Z- pl뉰h$TjF4_2a4,V3'=ެۜ._电 ,gYJ֩A(ّ+Ҧ[ I[KW,vhTRoeO"c'pjj9r_Cx֗*ܝf}-!v;䎽:麋R8.`8UuV W̏n?iGSIiSDC$q\ġ#ǚ*sJ5dEo.4D{^=*^-K ^7u8F7wz=0ڬ}3=(z⩮g$>L_Eu)E\βYǬBрYs+g>QlU:1WԖIp6Yˢ%thtQۂm##mXS\?}ĀAPJ65u-R9ї_/#n~X(=xԔ/(qo<#wЏcRRENRj]"!QHۗ-* JnE%MP)$\25Ԝ֩&n723O dqMEZ,U!(k6Bnz*rx&f/0DvÑN*Y{G)r+_ ~fMd֓QP[4m~J1Չ$T4*n7~b\\ ̕#Q*~ yGBdn0W,9ȱ4v v?z(Xiu*;grؑUɅ:rQ%ٳ +s=?zu3u-E#0=*RSWwCE#) >Uєb䯨F1!DZ$| y*k޼|r)7U_%pzӌӹ8s6 ĉ":ǾEW<>Sтoa\YI3+K$rpyFU%-9"EY%YbUS$GLT]cAܠaJ5~z "x̤t=kNHTUDUF{Tkqʬ5C Uls]\U=vi?26*yˢ8'S< 0H tfyGi+̭c`䌜|׊єWouM(iOoDEPpP;V~ܯk*\i2ӡV!V뚩$ϒ3eTH_*ۆJǷ(^8z)$݊9gnNx'ֳu7NSQ ʭ7c>Ğ+"RVܵ?-(XaTyқ9Y\RU*xC xǥc78)ԥkْǽ#m{SngvNw׸,0*;T'%37ܴm#g9oGgǦzhJQbv $>c&ݘl'85qFGڔg]$7LsEIsj# iazUSQ jd},:wT6Goϸ?Ɨ]>rA-3gjv+8+&4y+ #yjQXweQSǞ*2{?5$ gܲ2ћO(kyA<ýMV _4mM3gSi ;6ޞf5{ ƱefPq?PN8.XYF﹭Hƥ5r4iъ[ P5;G|{Q.k_aC1wOXAoi]/~J2#Sp׹*e2XnWr)KVAeAR,A]_uy762f#jeUXm+ڳ|]%N<[!(r*8g'=F9{}ɖeo+j坱}OZre)5m IJInIA=֒r4iNCXF2JљR~s׭sfԩQ܎FRQɹVEp+d2[cg%OO>̆Kp"Yc\`cQߟ!*ƥ3H杚H6J7(k1ߐum*\ϙJܑi]% gQ&G,TCE)ITZBS" K#*g֪:6VlrrǑZF1ql:)UoM2/=H=Sf4Q \>敔H{ҿesiJI*Ztke̙o9rqHTdهXr'ۜU*T9cɭ.FC?1>F'ة?iQYv o*r.jPӼyufh,vg}a{R=r9-RdIdhǯ׭g%([JZu輆]4ot+>]x2ϧNjJQ;od購FrKgO5V'-__SIc)j%=|,m\+}7uǦqN4<=Ts.A#@%iɀFq\/7ڰJQ-~Cdk-GȀ_03 GpԨSZ9 Uԭ(0l v\ҳSvs˒%z~t1sldsR8ۣס~¥y;=:Oo5cr̫76@?:r~JۿVs:u0M>hw$|PX<α;h(ekz {s06'sSRbWg'UYHP#~oڼF9ֲˮ:)Ol` hL([6pn^3R,5~DʝGRQNɻ 4 F_γQZXѴ IjPl\#>@kjJR)+>8ƥKijpaVI:u+Q۾u:45ebG]z{)b%ɲ߷9[C~"gIDvss~PsFRek2YZݕeiaݱ! 񏘧QOjM>4[[t>zUy-&oO"I&U6Bui/W9J:$N 9錎5N9kk14jkA\mż{YتcG4J1ܥy_R:td忘J~Wf5nVu%PΥuuDog72_@8>c,?,UCHά+:O^vƷ7"Ȯvx9NrN2TvIu4ZIic mȪrv-Z+G ZWKG2FU9VaΨF:sqT::ԩIN|kmw|- Q3rFcNGtZVgG~֢*2V{b9ӒwsZIi#$zy$JڌV_rFk{_1hI gn dJ*]!uH%ˈٲ0O=OʳJw~GDi(St{U~t%1rPG8jr/o-#:idk##3t?5ȩԧ>'BG _厵4+C5!R6wB֖~ecTDC'khؘ*Փf-ޣm2ŴmS n8Tg?f*4񒨧Q%e$ၷs`r0qkJR其47R_57yoFw:~ۦxzz1XK!_2EVsUc=^G]*:ɐ]DGv:1<3Umv˚u1fv<"Ei<&pXumNM=^c)TFisE]{~a]ZG9qNXs]Xԍ4W}?6Ekm,6e8zǏJ44:ppz~?x[);r~\Oe(sMc{x{:KM}@YHSsDܻ~fqנᆓ~},˴:q+IE93yQHsSwbZM s74UIlu޵SwsahԔZmr( VaX9HP6qߊ8ŠQΝZqN/'q cbHf<__SJE__D6v7X`$|lyY+E;^˨5! Wiܣ+8J\b1(VZ[Uܴjʡ3w'z}jhЩޞ9*T2iDpp'QZq\VQ<ֺ ;UcvF2B#'qd,70e{T*F2jFq,2{_#i6B+>N22'XjtqZkԐmɽEp rqǭgZ2S-Κ)Fz%&*G'NSΝ㧛xz*K˥FR7G>ت|:n4~cs~(˚ѵ>Wc'L˖UzTJ)ltsJVv}Q[]22G:"Zj^Kgm*ѩ8Suy,ѮKK 8ߍz0è6ۜ؈Tq",g>d۔Ŕ6 ?V2儜)?rN冂d37,9#89?JBTe9PrGRI l䄖X (CUURUa0T1-*;,))Gy]5=0UJZGtt |ǿ_OgxK~gE5 F>ɒh˹w{VK^~EC /d-m$ca'д7MluU QN}1NJ|~'DFҬ_ګ^9sxx]MB"uI X< ~C$-J/Spz꼋vZI ($# 0}=Їy:R]I8#x)Qߜ#~YGNݒ[.([3+U9Ӎ9'%1Gw>|n͹6Bst^x4yTI+*U%^s]4!,iYԧK-}ío- #c9oL^J*w3êҳV:X*amۆvo8#ԟ=&d]Fm7r8cRd3@~vǚ6NthIOO#&yYrV=WN? ֥LPTn[5NOOJHx!gᏧLBJ9}QoG(|hel =3W)8= -ucYH;s<)B7͒ۍR84ɥp .3d ;gۥiz[z7|dPG&䞺Q*ۨ- &+iFNWF*+HlWw˴ p89#O޵Ε(՟##wg/έ)+\ImHu~'VܜZTGM bʱ2yaH=NZ\`'IǧJ)JZSZj5Ĩo01i%`1WS+rgM)Oؿog B"7m \ 2,B rj9-[*Wͯc/P"*#e۵uAҺ!v{[E)ёZ-VWGm?Ojg6Hԩ7Wi#fT]-׷OzRQƵԮ"Gd yVS\Jͽ 9qެɆ_oqޟi5cZʝ)%MiRw7 yqgO\SeBo ~mޣQgRZmN+qЊV6Q.>nǡTӔSR2NMtU0(<HH'׽m>XV3S̨wey ?#ǡ>76ofi>݉v6TWı=3ۦ+YJc22s5/ocTrޣ߇՛Ltu3vCF8(NJ<;Py8(ǰIw. 
qxVҊR9~=>dqFW_efcR7iyT.Q]Uc#=do%jըy>NqrT\K匥rZςMm"E7p3+vJ*QMK_;OMQy؏3?tskN2wcPz<ͺVf 8#؏JF1Tsvo_BK5[䐺6}K-jtG#G1A6SQFTU)ӌ]KدrmrzҼ6{J7IlrΖ#8%+q#d#:;|Åt,W]NxVatdQme۞O};N+RIEm9 &3m̹_C1ߕś;h3kÍ8\uRҭNTTa5q^*$cYW%T=OK,jބl%#], ϱ*r,=h焵rȍ52WH9F2ӡ?_}Xi:P6C+qUofHӒjLQvg鐼ezts]NX̽2۔w3$L (TիzΧm=K cķ^{W,+ɨƍD.cY6Ck`:c#ZQ9$(Xn"&;ۑq;#zǨVU}:2%Z"HTi 'eG_ȱKIcTHP e9s)Ի{iד}z?B&Heߕwu6TeNtTU&؞=F~lך6OS*-OpCpy>8R$b}m؛t >.|b^Y;Nr}cRgRTyZۧ[G=+Pʘ>kOh{p10EFrNv'S^r匽l%ڣB!Oa!UqЪZZZ2IeUij*ɒ xW\WR*S֓A=Ѵ{Hc(ӽcXi^>yRdR9a15ú`I*Fpi $u Lo̹U=[S>[ :Cm } vW<9:ڹrGބnXCKѭ TF6VdIX z^|gY5[;)ӏ5Ug>u3QmXsBsɭFZ~yuVk]{z܀dWy ~Rz~EJ|oE*4]X˚+H^f6yǠcMǛsA׋K!ݷ5Sʞ?jS$V+rgln=>*+73T{5tUlY jo2'PvPxsX^Ҵum{TNm=Y#Itt۫6ݪz`#8Ҫ+&FhG-#HadFU|du>T=xZ:z5m"[6 b%/Z8?k:\WW漊pjH \zfN>]--z*[=B NNyQLt⧇RkuS["K\,(Y,̘A>Mĭc{OlK&UcÒ1\W4yjVpVwxVdbdnq<d Lc R{h@X\m9EeWҔ!t촸m$ڶnr`.=Qs jUvۢ3R\[q"LAiecp0NiZ49qQV.V GB:ZjF FcNxt?VR4yDQRbҺk/h~3g k;YOm*M&orSS*Uߓ*t%h$~V)yqL$#,eUTn-O`cN3{[R5+n:)ˍۆ7dg±59]ܬ%;$&&{[UHq FHN;WU5%Q;>'J9R;|wU*tM;[9Ral qJQwGgW,~+Uuj1NPkG̈́XSy|peiBOO O[ ; rGcs~nnhC[^ᄊ90Q7.ĖvFQ$}*ኅv+)]j[NnG27vDyrGa] 1M)i~^k< nX„e_3#گJq\{3)E;DAYʮxirRc9V~ͻc1\G=ʭ#q޴vRǕVo;;:s9CV5JXLeF[.af_mUtbG*"鈀Z䬾Y-_.WpmPn?wĆaϩTm#E5e i8yf\cwTʨS^I--Fm1$H9YVUB݈FX-"2JH>^UUv9z\}vwwMzmj<=Ag Z$2_0x+(Bۮ-]ߓ t}|?>!H+s~zUNMm_MlG5D}ش[ 誠H:V*M5hRgJWVw%\*bH.BNWq[S姇{~+fЫK]vI?"Vւ]gC~P=;9JͶvR#i_EԎX?hY|YxQ ҧ4_o6i[ AէSVKy.% l5+y5Q'>XὤSe3|%sNrRkBQ76tQsM{ڛԕ qQKnwCv`i!tʔzcڊ>δyvq<*eDnh>ބB r*?֪Ju# ~EG~ 4d2HĸmFrK}wn[kҫEǿO4@q)8uF]tSQQ~+ qn xZꔧVo+OƌuAr- vơpAXSk+vU9Si߱d!|3A#s]ԍMhՌjr[.E ԒO8rSZ6Rӭ_*4o/RI70H!X8[Ա=!/mYZ%w^zub+::nR+U2oċ/=8IXSx:QNO48rJ^M$clRZ6deRN1zwg%&*8Ш~yHXn8Gb{4aFTg*r~P(1!g*znյYTSi{u1%fGzKye*m )ץN29{mLP)T%-V\jϺm$\.v/<؃YM6m'(ŴǩqTF;QY״q4]h薍zp<恹W Үu8FݿtQʑ-٘B <瓽S,-1ʯ#4k) cA{^xuºr3ۧ51U#ҽ]NfRhOF ,FMʤ0>Rq}Y S%,6ƅ#ȱ෡ ]*SQ[Z PUR*Rb_^x8[O)Ք\6ORZi4p$P۰3} )r~NER~oB|UZ/G3 _2eF:~E[ZUX1L߀*m{a*/i㭓[&7'T5Z2mGVS-#yP׃tcZdãV5HiŷjCp;ǿaVzjm ˗N#DLȪ\&:pH}S??6j)^BLUn|:H|=wN*TT׿Y2Cmĥ$Ulk 00:pq~(~'txeMy$pʪQ(dlrRmؚXeY-y;M:kxRII5 wj?)b#kԧ&t3^Њ¬ |p^iSJ}l~Z]~e淆CRMc':.EuVIJPkm.1+kO+wͻ܏⦬ iTGm[z5,~W *ӵiN}Te*5ę1 H\Eֹ)U%aΧQiX絉8ʶdg_|v#kIgF5'RvMopy-E,!F1*c+|ۡ9rB% B|duPhՓb)gM;+p9yk\1cӿMݾݑ,DG*c=+4I#v7_@Wjv?ZFT];c[y1FQ&Nw9G?+uq54v;3u]0z~];|_ծo)ԧhQz8=xukrNVk"MW݂t8Z^^Uj~Iv$V:xvOFu9"m>^Ȯ9^}=9*Xz҇DOV S|;A6L!?18=(iM9+ܙ{NVװĒin'Vt9˺{K%>X=)dczzaߝ>ޟ4Rlko!P-NYX#=?i$8Ɲa,m<.\c={)R(V[G;B*2y .j#FWB)#(mWrE,*ΜJ9om}xˇ:H#FU2FnqӽtsTN:` w {q~^:`uQ$\E:e̟mln*y=pky'ZU_QKu^dYr=#Vi*(MHg Ӭ}%On]7l s(; ƤэUI>y}8Ǵ9@vsgoPy4#Kӗʎc^ r=xR߷aZ9Z=WaM̑Vo =#fsj#Zr7[r & eb>yYJOhyW]Z$AQ3pٴ:ׯ\Sz;&Ռ}ץұxZ9o ,pIoQ"+|ҵޝO^%JuOVuZcD4o6g}x64H09R}6o#$399_ICS\tJ<}KɧFfvVR>c+ݣ)9Zu#(п>B YX;NG?z(s\ђq FVE/u]\j]mb[- ͖%Vgֺyy*pH4r pgiǷ]eG|c8m,UK*W]?ֺ(DrHl൵F82qS)JU72u%&dki v;[d sTT溛SV1h(bF~O9҅(JheffU] N9?cK_>jO]B}Ǘ+ʿ)!^1+7GY-KcX,h`{TͲ}qVᑣYvUG^zOͩ)FZt%CD7F>bNݏ"іUq2a4eGr9㓁QvL2+ VA-ZN\ڦ)˕+$jBnd+͏&DR_rs*e\w篡.T//wRlW UO\Q|[dA}%r'\ |Ҧ+e^y 8;c>a-9k>g xxضzy]8ӻZZGξO2)#Ω~כW/Q3)J5 NuΧ1n-#v0a&r#9Y<ѲKQ>eА+7dO^jiliєe܍/lB%s^cj(_u#3ޤC,rl|[Ԕ\?4$37;NxSZƇ,o7~Z^hdO3]-˾p,TZ-+VTMJNhWk\\ž0s +n<ïLQNP{[AFpv^Omm@cֱ8F8JZ\l*ciUFqt2Cͣ%9'Uh>JT/aJ.8TZrnNuV,EKJPncۓ UzNT\arb9pf]7m~U`q?U)QyVQݳ!&Rmm!g-tc=}oiRˠNdmDG oĐ dJu/]_ڏGqwaHF4Hq),Diޭmٔ}LC]}ug 5:fnΟo 琻X6ۿ+ vֽ,}4\+F75T >^ ~nO?mn*jIBwM]_~ǂ+bC3,jqS7̺yS|WcmfTwXV7ˢJQzc2,_c|Iݿ#' 2.ߛҬ9 ͖|ȃPaF.qӯWR7;m3ԧ8u{E), q&2y#YUN]W u#)ΜRwGw(6pVc'π>/]vWά=JÄ֪IxRc&d(<*ymX߷˧N<^Zؒس ec8*%[^rVH9IWXUhOmei[pVm'iǽLa xwiМܭmZnmʘ/> {rk\oGZAE#G!>Za37EFDVo.YXoqYq=z¤jtOJl}WAG\Aq`r\9GUa(E.S[\$y&a*C]x-ǥs2x䴶UW-0n۴5٬>8H1Qm̗m#)P8<}:xF {RjI+]ƈ%Y<%%^3?lM?}`E۷854IߠJ'UʹVۙTpNwc=8+9r}LpTQ|/"/rq8Ե:%Dᔲmfelp3՜Uq?A-C2l#AtVIkǙGŕX[4G,nCV.gQh,-5ҍFi$%,'$r9=V^QPoa^kEeE!g FOs3Wc]hR[ M,)GXGiʮRYnb1pI ,gtbc uTxʜj5}K0fee#<q=+SIZ}{nϒ9>거?8^\ 
8#픬kۜG#yѱ]O<:ުQj[Y"C*ypL~Ve\\pNLjR:뱼\}Df3degZӖ[Ӽny kX%X#:sGKr(KEK1~UBsUF5q$*m~\ԔznjF$H|dKh==jc}o7aj2m#,sm㟙OϡHէ;[[s{*zI>s:ƭEtk9K*W;p=M"ݾcaX!HG }92HMmSާ|U/d@QU2ۇ^g)ч$?/@_me5#<72Nڊї%чpxα\JnG2;ΊДksMk\ΤiRZGG[=BskwRZu(1\`Ȍ#95>j鿡=J%k&kRcbW=NꃔoV[#7'F殃a(mǃ]j{Eiu 8xƋw~Er3dm 涄I'k:tDdd2I"j']ZM[Jcw~@ .2:pz?S7ڃvF6!~mdKy pid}{m7hMH՝EV ̬u\ڜ+bkEԙ;[VqZR;j:ԓ=I(BbFrsⶋ^aeG޽vI.Jz~]9N8-ރI1mO3j<:s14]Pn#[S, wu/= h_w'ie ?tOx<2ΤgNiװvvN8R[H9ք 0~Vm3Q ft&,eV,[sV{I75^#$1bYPOe'=Dgt{jqa ]G#2qse߃N>J\КEwЅFT }rJ7kr×Я-̤J@sTcɱT|Ni)#yJr^x5cfwajk-EU`zr'` -r5 hY>P˓$g<P+ҥ*hK-uDSg9<֦7{N2Th+MbL|9ѝKts?o^Bd 9:TRW6:|Z/:Юn}V*ms7nbd p?ƫތNvB{Qom4Su)܎i$JU+8yuS^U[/Fedʒ6bo&;+9;JfXn6yM$H "$_^8e=EK3RvV&1J+՚Fg7mA͕.w,x.2:22溏D, "8eO@N*%+$茩"MЕ[` cr/ީǗWbG,\14{JF=SW:qL2+¿zY1ֹ棼Wc.iKo' Wlje\sgk3JیVl[Sو;¸ry8%imuG{;>{46PҀ$baTjSbky_&40@ tE*r^ IE,Fe#FT%ѴT^뎧 *|].U$04`OQhս2N1܎IUgXGYjc9T6_!HHysȩ$iNN[ߵgd)# :pAr\ܪI>+ wW,j)#ڳ?w\RHdB[qEg)4 <_1eO+$y5\⑌TR6[p%Bņ\3ut=c.d.PTo~ #c5^<z)J\3g#MŌdןOҗf'mHbXѐ*S߃UEFU>(Hs) `؆T"ha/~DqVxxI6iIf!eyڶcI97sU 5Q[wiUFpJT;vD3'bUFsDyK|YAWmI 8\dˣg%ن.v=!]p*tޣDF%Uw2mf[_x3ʬ&R(afUI̸ʥGkX"ko-1ibbcu'L0W{Jn&|HF9y~"lmpZ7G~jrN6ql|PH;8*K3HT *]׎,Z%I&$cʱ7sXKtC/0'LbDԅ9RP&X 9څOm:qܞ&dxʍesL᥉WMDci#J~eiYVeO?ŞV#jэ:>diH%̊6Ǔ0 ~i{8)u<^a>&Wnጊ9dir]ME7F*mS=,@L0ͻ8~~-9&@KV/'y\HdlaW,7~~f+ݗgiWI X? Gzc~y[CH䕖܍ggU\,|92r1dI5;n;CpHGJJVڔ]?{dʷS.++JCSH"-gf\!B9q1E8˖5r?R^vg߷k$9';UÊ.5Qy$Grju 9κ9ci }b拹ls#隮hu*|SkE[L<͸+oQӿ֦Q2*rZw^]ʷ7 -C:qr+hFQN>͡sDeYٌ&9b}qҴc)GDDw g%vk*\{pp8fw6N3n5`%뵡3(O8jjBV_S9OQM_b$B`m ޕSd ԆF;f9c8;I=pyQWzJ#](U863;'ΪJ{fUn`eyJJRn#c'] AsrΡF#2qJOqZf,r@ $w eMm)h½'FJt꿻s9JWZRu|sm'$e;Fy vhʍj<5N-a =3lmMFi/ pHTc^T+%+mah5Y }x2:%Q?Ϙ̎/FW?֦NM iRz鍣w$WOx"2)I V`D.(|H,ImK(%fll WdS+~`Ҁ9f<(p"h8֤,ҬSdMl/\G Ї\s1?1@y횗y8ʥ, OpU*4,'qDI~뎝+>kTQB+ LG :~\ye h{if'm6@ sN+H2۰Λѭ I3JUC/5.{,<[zYJK*Gg GO|rVlyUW 2LL~^٨.dtH3&e:%))KOC91T7*ƹUUfz4̮iMFTȞbSr^g/wVgb9ݔdž^\s*qRobT!uqK#.wH1}1 xyӻ5$E@l+XpGZ0Je~VP)< *@1Upq-6P#Qzs2| hR`۷:{uj\53e$3Hߙډ{5ʊ]F&a& sQm=(2e.Ę%px*} ~tI"U$UHԀVAԏζ?weaytlnҨ+ө/2œ*\­&څscw9f]_zȯp| [ qQr-_e%d{SGrqV5#ͺrwoAZQ < a[Aֳ唴LזQ#&.tUl2C)8>4Gsr=vĎwG =FIGQ.g359{> }&Yi莈ƍH+H\FgxhZ[10+ȮۤBc+2so[GT=He?Ch˟z>Q;͜w8ǜzuGa)ḬynElkUkjDJޝP6| Ÿ:hN3W)VSdDm -V5/nXuV^b՚y$+C9J]Hz"(a3y=y'9rz墣-FX#+-s0 ^?ZΤƤFBtf#c##ϷN#k]%*wyd.1m[N$_蠏4W޵ZzEteN1NLX3!_cPK9-DN 341 K|汌r֧R5tQG;Un1ӿ#z&+XR$GyO<46UYmdǵk([ $N}qH^xVN<9֕5u,"8f\mm13ʗ749i(M$\LcpٸvJT̩/ԑ^ = .:qMg%.U%)Fa׿F-ԩKv?Ď6c=d^tړ[ܒkD#VyK.WqY*Ҳijݬc)Ԕ˧F%ac?)9UNy==1Xʔ׆(&"QWi7|î1X=9+bJݺI"77#iZ֯u26ᕾiֵZ(g:RZE l]@'#oNkZNI44[CI$A2" @##"6k~oG$W»Hy܏ul5SAS]Tr;YD/\I#\.cd~GN9㹬iQv{ש#ꕱz7,LXfa |$}]1|yV}5TZ]wB6 ZBHFc]|:iPmv1lg:QZJN/~Ue);-l=8_f2*z< s1ˬzrt.+"HqU6I?Cڊ LeQsҝ[y-AyjA9z<9#u'= .J1FUx+O= ]E]V4*t⤭n#Y%gWT;%] Ɲ7v69c!Nr8ZtRN*GMFe[0_pq<\я+SGtH;f { sZ΍hѺz<=LOz$v^>J Dz؎Wε^ARNDHw1(sҰT%N*%1ƨR?-pd`c'5Nyoݍ1q1=Z!6C `jJ>ͷdwNfbrWxݹpvbӔuUף؂%s{yG_rG8$gVhԧM˕&gN-'y_1wB1=z֔]=y*]XIPauhB~e51>d]J_kzl'F`00}=*T3lƤcǥf,yflnA{XK߿# 4NRwK57FB HU6ʼAIa3hS "MIv h! 
q\NߙZ/TK VnTS=5O5֥.㍷t͍_ִ9+cN%\7!șcmX~̑ y"`c2@?^7լgVNe6e乑vȡ-gzdZڴhZKTLcUZ7UY.AҲeNvWKG]ji}#3|$OcR$$&eZ1G׏udwx@1gyJ}UTEfߚ-1̻z WE8juSqq傿2ޛOמ+!+¤bhn6nYА=@85\u4[:>I+ K\\EH<Ņ,wB/GfSiY<UU8\׊S R>_2?'PV^9*RrilU^+Fˈyaf(|?\֟kD:滐ǺH<嚽 ofRXȋ܌n)5-N <ͽ˥VB,qA'mFt5F]$O~F ed+i)FQ*j]<<'plc.}AJ۽W-cyxlg~EVlZhͼc$r2| H3/}_:^n=;̖؆ڹ\~] Gx*M4y2$}Hӏ.OlҠ2_08YǥtF2qFq+,r7*ErrdqriӦܥ܂hZ VE|p[Oi')S-E#:Z?3r |ÿշ$9nuBL=9&r7$5bb@%\` qkB΢OqtJͼ0\d#djS%GR*1Y&QY=d٥9T#evwUW.ddz`kʣV.3v{G$,RH6޽9~匚F~Ҕow43.ج|:uS%,2.[5C$lnf=VdF8;؞݈`_R~MEO^4ԪNYn#Ҫr1^E{owȞSG&T:}=J61oqyRY2J!e|ڧ~W<(|1Rgn8p$ qXwntӵԗo7|u*?Q&3"Mk&NW<:´;iӝ8j@$H fn9孍BJRDAcMʜvsN7O*AlH26d?{%+$4^9ol&)PJORG=i_B܌e,GK ĒGq&ThBoN`u SiʌN/bZoNuGk?gkv:Q)].eGoݐtojە|&){78r%UUw`Be' hhF}b] A^% |U s֮ZqmL9Lφ8,&rr; %i/1]¶ѻ׵rdֽ8Ԗici[\07A~18˖Ӗ̖2ɻrx{ߌcE^XQB3ֽ= \,mS.A1kJ1y،DKA$S-J`3rcq}E&i+b~q4F2 *TZjZ>V-W gXJOn)a};c232y=zqYJRvlN.ZM}co)rn,ߙȱؿ=Em1^ JsHW-trD"rwu=jjgFm[x@7H`_nݍsʴi{cR.X쵺{T.;7~;V5*J41YҖw\@F| n=2W5;# 2#O#&F%1|^z~?\B9ӲzLy?GF#=fM T/"xUT᰹L+ TO[oh֢]WmۡU&q:>ZJǛO^=g?{{}k&ӒVΉӯJQ׾J1+# K3`\΢zʛk$DawsYԧuii._k s,*VeWv)c/9W=80qƱѭʖyIE3H\\Q].gR7cmR^ XdE\yx?OT7uשj0 x "!۴X0'n}hԣ.I[Ro1A3 hCJ7Is'SWGEu.$5=\b9}՞:Vĥx[NOr|dUʛ#4]8|#9yCG۩F1SsM5!trNVm$@Ƿj騣**I|cJz2(8-:}=t*W޷gVR=b$HnnyWHUA3CeR)ѵ-nvF^ڣ[dOp±7/ ?n+JQ_B]:N~>d6CknɌA_O^c&>e4{-FFY77DžQ#ZR]:1z?kA˵d-+emcx"KYZ~2uoȆ8/ٌ?̩UV'91ǭuSRxWͪegm*U.ZYϸٞ}=Ums\:qאq3-k魺us*`B.$#im.vI~SK}F5Rf5ߙ64[<2ͽ{^ڴ/Y5۸n(.$=0>sVj-tee5܊}rzNF- #|V:7ԚHc1vz?/\W?:Tw[4ю,UV,ø5>΍;ꙷSmu!E2"t9p-w'-Fݛ#k$1Kcˀ`}sڶNR&YjU#%'L5wvHEVRbN21թZP0B5tZSQ]s0 GA#TuOXgxzG__K-@@H=I5\Щ.F;\zkjnfcj^(#8TԵ8rIkn#,&QķwӷX%Pcw㨢c]Q(T.I 04Nk~އ%L":v] K *fa,K;3ǩ F1oӦͱ4KmVvGy#y,N__XejhIW pUfc"?z\J2{YIIFZm"H,&@s힘MF7ծbFIlL O9cyjG%TtIFO~)#mz#+6Z?|w<=3z IM=mUJum9lGzHU3}iSTp^vvt&poD5.nrx1Y5#Qzt;R-^d2ܬyqɻv .w= tJUԌT}|y%JoԹuHXK$c?6H s{gvwO[UhUܑ%_!7]cqVk4Z>ͨ;p-ϺCw30$<-SzۿVgG7sܽkrlX$ bm@C5~rx'ko:xZZOUt-|0i[vЋ}:y(Ϩ,jNi1"H@BtJP,#8&FֽK[^Jew>EYa6WxNs߶q꫒+Ͼcj\վg[۶M]şx(?һ=9I;zuj:M^ݺ)ichۢ9 y㡮ZsI?¶WViv[^K1ͣZhVf 7=Fsr=>/Ti⥇:Tt,27Sn7r[=gSrrַRpQmÐč H|51ܞ|.N]}-f/hS}u-*7x)XղϹ[wLzWF:5mm|,z4j#vƞ^Uy=Hӌ0]IǙ;8燩j(MՖ&TU$W^X`};v~&;8YRkMÖb+'FFzr_՝E8|:s:`yH{sU9S)nc[ r-:yXd=|?꣇Y.h+S2Jw-kJD}8v'*Z6ioV{b,[Ff+wNZ] (EodyRޤ$סϧCT],gqS\]2+0W сt~'ֱ>Y{I;561{>WCcU#zv2ӐW?׵jBTRWȵ4,fF~GLn L(ƲOmxiAkPжHoJڼB[t bߟbdQm:WNc.Y}ƯКhc hA-11]uR>ƔcmC<^\y+ uaN_3)I)9%weRiae \3r8?tF|ʹ4m_ӣ*4WdWYE26bpze9SѣUedOYqcTc126z\5K#Ԏƛ'CDIzBLLa=}$jGm=f(pr[q}J*MEܚr8ծOی5r7ma`N94XI{O-ol?#¢Oٕ5~_od_9%NiJ7cR@|Ilmx?>uKQzю J)Zг, EU;ǵ|i#Ⱅ:tk)tvO=Jq(߷Wkj<YєB*h-?*sB|v[i՜k*Ntmet<1ҵVN_ Z%^doRh`I.&%;ccO]UXSŝyF*~'ϑ<7@8GN>S^Os1/=oAl;TgkN{~Z5J:.en̙Z`O ~wsEIF1N/c8eM|K"N pm27r%Z$uЩQ*{Fox7YsaN]"Jj^qHZ5kr_N13S)ncV6Hbr}{s*%#Օ{ᷕr>NGS׌gڧF/~ cR_,EVРHV)ri)T]V'}5Dp,Zh;yYbqץ:*Km7X4pV˛^<&*ק?O(2bьѭ hm>iH#{TsPZ'F,V ѳy>[C&|I{{}+xӍjZt*Qݑ!VXʅċ!^9y烌ct986{tɈRN~Hct#3oH'ܒzwMZo=:2*Mkt_NH]˒x㌏Yԧy.ftsF++o,IrѴ`Qp0x~u:;X3'=6RA \,Vi2Ў\TTs ظS5wf9[qd+H\@ z3nnMtv#nRO0En;tɭ*KNѕXGkGT$3Nw3w')Bhz<%X |yS}npWRtluI(Ҵ^9a4{T;U`L#$sNwJ%z*)k~4 ] })Vӡ)J4cL-cMBF6 pGZhpB}RϺm̥mqϽsǖ65vC{*=HqEy]AG|סN9ZIVYIPIo{v<»"Ii?ݶ{i%Hbz:OozrSNz. 
* cx+jgS{$aΞ Dg'm̛G<7CiG?e:iSSt;8[b+,8GkhsFP`x+<spZލOyz\=~NzJ٣Ѿ&Vo5l_{fNTOM۶Wp6nfu} EhMivȬOg˲*6O]t3$-/n~2eG?6Я0Yn7#DdbV=؇wb*'-S3be?)m:w>IFSqj >e 3*£2)ʶ>~U׌zZqeu.M-!HdAzPMi&[*̛~9v_?\Ƨ/*f,8#cG򘯅<ڒIF:$r:Սsyr0#=Lf1Esrj{Nw}7`t~qҖ"8*1.-^My)E-fӡ5)Ipq?”IӅM)U؎Wi ڪܱlҦ)4O4Ͽckk)fFA:oi󜎹kHƤlf%Rҵݟ a5l,G 3[ƴ*^R\k[TIdX 2O,_,:g!J!8vs~䆭o?BxI7"69ts\`?[z5ͬr,Kq4T9ʶSm~ʲZۤV6-c5* Ƿ᧧cx˯bKHYDM˅xZʩ&kC֯&5̌`+8irQ_9hR㧗߳3įnw'ƥq4{QOQnYd$$4#.Ӗ->veQ,Hq1֬4r8]4/~֍L<9`ł&Bv?wۊɨ-Xz֬v$x,16v1^eRy{vȅMʩmWo8zp:WEZTyd9c(ڌ獬YJv2Ã~CˇEשhm4OQq.Z\-̪T#{Xn$}Xzgy)JUoi((Ke׻Gf.g9\JUKl}|<2LE8L12yWe=֕T'܆+iT2||=;)ӏ,Ѧx>;m#]; s(Y?_t$IE[8tLK?DGj-3_ff~Oqs`qƝ=onl7A3*2LAUC?¥ZnVV-sj9Jܿ2IK[v-eiؒ08TfAvK[vyU-e+>4euҷU%SlI-qwF6oJjQc*4 |TEu) o~SB:SƌjTՏEHDrظh%>hwjrq#'vp2xN1J5gEH r<RY$ݗKw*ӂ]GXUy?BwߢefTPpz}:RֆUPl16l @{rKMX1掽}jܰ[m~eluS+ӥIJEU\i^W](Τ9t{\$byjϯJ8Sti`9ݕ?#"尻fSlgoʊe ~MӭE>"Uڭ{w=G_ֹe-OSj)IK<3s|\?^]Ѥ;ϡZ-vNMͼ;YFA{*iV:[:ѕtːGn*cZ%oy`hX&|+ݹ⊒M/}'y$+edY#q/O\{JR7OD9\Mɺ>_L"UcRJSvЪܙ1{yzVϖVVWZ?}k~\ͻs$eN#hg)p{Z^Ν9֜ȱ-^xcH_ǍK2|hZ'~Ud>% vm`kJR1e׭J*Ҽ[VcMox9JNTLNQșa@# :g_~{VidV_0// q޺F3:{sS>Z&#V/NSԪu+I{isڴ$H ڤu =}JIΈ—-mH4c5Q磁Rj\RIJ"=,v}>1"&Czժagi!<2WhY(}kOv1~*TUTw"F v̽:Q׊R-U#)?u> KSϡi{ӖM?=z {cTcvƹHū3Ǯgz6zc(Qө j7_m -ۯ*3e$ deo(LvgS+rӧPƻɎ}./ TF<ڶrNn58 tuݜFsmOݎʷ*3m."JTᇼcF2TQ۞wy-u߷#\m l$aJ%TROm j|;m1j<q{,,z%QHߛMGK8*coc rGdh:%=R 9Q4y<[ƤcnM|OxŘ=$qo}^u j*,8_.6|WDdc\=ۓw4*_׌S]~ɹsTcp~cNI^+ToM"a;:lCOv:ǖKO9iSՐ\亴vr3ӱcʥUHSqu12UY{/{:q|rJEi/v/UIN?¯R^ԖMkU$ھNIh6| YԎGYiw$=54۩eQ[+o]GJ@[#h6Fpes۷_/*vT~Ց;#G4.[-7h] Q=Q*RijM 2Tf9!Wt8j,"iλ~[G̒93:n6_e9{1sI,ffb̛n@SVV&G*z'"}Uq&.WVn+6+a9*9-e3xnemv1^k 0\sLWG+Щ{Klغr _z41j%mcMiO֦2MNK%mR(W[v6V?+)%#Q:M4uA'܏{UK+8KZ3$ov؎ji{>}NfEpi$cgxOgXz}J]2[\htʹXNWC7J<ڭŻmHٖF;cxɽJ-B$׎4ъ%8UrE1n`T] "Cڟo;w:tyQex$TE bTa騯dѯͷhʌg$es)I2+#D/3Xzgߠo~jw[V]JHL?v>U UGSy^sH{39gc+ܜ9"+{7zr S ~b3zCq S929I/=J."o2uЅ'yW&#fS 8ԹI5-ao"*e32;c}>frG;h^NcFy3jM({$%)O+ğcZudwOq);E.ۺ spWc_Sy=rj%fr98U o_wSN bg+ƪRr õDU[ldy(99ӷj|sS)l|֪W+(9.fY UOZ5*vG5>J9"_TnAHV(\R7 BKy;ty"Ԓmt"{+-,[*84}+ѾҩhteݦX,>fYAڳc}s֏-HVBc70GeeN֞GnDjgt"4?)Q} Z{NhI9I=H'l&\GW_j_s3۵܁a20>08"/P4oqgVfb8S}E/e]F黑$Jdutx.8`y?R-JvoUQKx淕[לc2vz򜽣ւI8I|R&lfm ~Y}΄eqӨ8>N&+G[;N*F/ʧu}I")>I1vrjc!Fw(n3ST䒊]{uWD\+M?~590z'v!2 A}eQEZڙT攴o-wVV[yQZ2#)NvD-W$L(3c?.9Z[ԩE;4]ȉmm*3 s*b"k狅fczsJSV7%TK?[ԖiicSY21<p cewʢqXIC2oןOZb$XU半QY:M8{W5B-Ď[hܬ}ujoKHțڸ񩒖$RckͿ FODccwQʞLgP~E)$F>T$hco4J/>ԟ2B\g8ݭ: VMcg#pG].nHMFTFѶLiuƴZf؋yFHTksilKy"j.dq{Q.M:]%2fuu3ZmlN;+Og_vWs(ɩhL8Y8N+ޓ} TjZ;D58ʴr%c6GۺkC'`GXrH\) UY4rmG9<sZC/:R5 LV;#QX]Rz!-jy sǠ_z|ҧ('em@ѩۓ#).iGMnW1nuc&~I;5)\,H>`n>c*}ݤiiT)c[PFң-]3Jbtll~sϾ3mN_tS_bG+"BH:T+|Ҽ﷗vI9RەsΪ.{FDc8|W܌>KhÕF\c7Μ|śzڢQ5=Ѳ43.c泲9)s8%ˋ[5[[D{e&H^=>d&A ˒1z4ͳ_ +,7@*VL !?2(>i]FKxdUIi~\7qߌw*w,=VQW=K*W_gQsaaB7I\```JQW#%ETz!C;+9FT?^iZJ~7a[ڸߒ:Z/:RrVG%I#}=ݲAҰ~s*'R" ̢>d6x_SӚ6Tqvz< [k{{єuoIOټ6wug#X3)FZWnn\9 N4't,CI$lPyvC~5%+6Ѭa*t#{X# 1 vd8Ӕ>o9VC 1=Zg#JWs4 bhT#aRF9#=iʔc%fsr/g=,kn@DZ8I*]>Y{KO[K2YNF9Ҷrte)6n|%aӦN/ҳyeMӺ ʮ˻ 9n[n eYI#ǐvp+ކR$C5H|[?Jr=jTc[.V(A H0 >9-n%:Q[R6XhH0`ayhEtyee;?+*Kv3ʧ>/Rfk2GkZm(٦*zu.x6)tU[@]qح-n[jpU^ʜ&,6U>Q NG=q\=TnyI%d8*;z8EjT߼4F]Ğ5]f*pg>) sE$H[q+YQn־G=*jPP[ǹB U9LsIF<:r"Cl?\.<g/iy; !U߷PX֣[TrTEktWIemGr?wILc޸ӧ^V8vPKX=*2wr/O.Rډw e?p<r8'#U(SKjÙk:*2]qЎxZ9h.?kn0+/,(JQN!=#_]I<*iVThܤ'{qZdBIm@K}㻁VPG$ԍ~X:*\&ɬR2A:=*^;JQtGR76X+C1eU8Ӧ;V#k?Q ܧ!˅|L{ 4HRIlceT[U2obriѫzױsa~ m]Gvg.ߧLj7qoQ /eN s֣*->ZҌ"dI-Ё$Ƚ:rpxnUߦTĪ/~>hΡQLE $r1dAR>Sc+7trމ5]VA/ݐ~S>v)|[#Y:k"}G,h]sv鞵TAF0t擺,2My(`Ȓ*arIӜ8yVc[O#k=uǷv<Ɠn4nOE绷Noбsom"Ilؿq:sӼ9ۼevVpژc-ܓx S|,#'d׹Io,3-[kP"8=vkAK DBѻnH4P?\'w;,6󲼟pc$ZjV^Τf.)J$09Cf['~aȧ(;^_c9=K(mlZ&6e(1͕lݏc΅)^+3\ܝ!" 
qqi $ E\>_=Bxz*} BYwSzgc5otdfIdyȱv?_*U(k\JKfGk`mR׭)FU).g÷Zj;$OQyqگ(SyNU)zD,jȲWTnz}*>*^e0աb> 24rN/9ScujrY$~ftԝG;szlbFۥܪd szu>劧_;~OrO1 As]qiuScMŻyddP?LδXi(6^]A]yer[cb܀}}+LZ͒zd̤+u#0J.OWveG܆lc"$ #]acx J[zXI74qgx9fnp*(UQ*GVWK:H,X*F3 ]vSF$rH.LT#@U.l9#[{̟{ֶK7}5;.]Ă=@r#sp,[J2[sGMM %hY^~ee?]_rrq֮.<*r{Ic1*U ~G{c5-o rK]D!_,mVÍ xS}*PTA,Lw">9ۭZ2i/(ޝ:,Kp`Xviw;@$ji}ϫ:>EتmAӵ ^V`3D4R23I^c68+NYKEӯcYTNVd+7dJ5.EJ~+֘[*ʭ8vGҴNNZOb2fڱ,9;G=9Z|"iTn[vm6ygk+8_\s*V.]41r#%ui#ϐ&nwqYsI{ۭqi[RJ H'@cR"J֯,c՚+a}v)N&2emp2:gqZF\IQFRT$`l =M:ѕ=oReRQܒo de+$bM=9u&ˇwE*۩hDw:e8p~֕ZJұ:T+ۋ?IW$[ߏb9SU*ҊV͉q@Os^j/0BoTZ\^y%|qa#zz~ E#(_m5܃ʷkkQ=+SV8N)[t,2`_Χchr7f+j1 ?̣9ӽP|Jڶ. Ӝz59دʗ鸉=pqqJg4tqJ޿5&#8㧸M8ƥG%C ;oCYEH#f,FXy=\(ΟӫFugͤ]̨n;}꣇樒*2]W$H5'ʻc4˔芔ukTOkrEJ3\!.^G8KKo'3/W9)G$QEhdE("țי׊"Ti%cJjI- iqx]ٱK ~*{9>Ug[NK맧Q?7ۿQK.xu+\Z{)dA7_ܣhGT]VrջJ0y!hRC`z e*oǒ:"s*Z%ey H*Hs*E<;>KvMvpźrI?ӥWN.^.&jrQ,!rkgӯ\2$~>4nɖ&kB7v*ϖZ~lcrz>)ՋG{2f݀8ҴW6N.jt(IYnInf$Wk~nI#?\^lrfV M  V x:OmmA%V8`L Nsc̢uZqR] JT:w|d8*9aƥvOO.4ԑeF=6/+G g%NSNЅеDcb5ڤvڱnY[*7({,$U6;WWQ]sʤj&ZyY7:X<溏Β?jYSqL江/&1\=6 biQ{y?ΊHZKS)UtWm'&V1rZ;0XZ2u_2mvvvxҹ/iuѢ,Jc0F!do==랝%x*=8!D $ g_޹*WEg=BhGIc&nagQ(8__Ste^[̱<a$;uNjb$ߡ+ӝ^arzl8ŕlx=1؎:qS))i잟yYӌ}~lͽf7fې9ϭ:&Z[St2DYF@:ÙэNR ,!k4eNd8uȬ}-ߡ4Hlaog럥GMsCm Y rq:Nxԟ+.k'6ۨ%3UpXOk2cZ[{D8]*ߝ(Ӆ8К:bhW,R!#|HlZVo:׾;'Gs6kFNNZƤjF-jT^]_WdF]5I$/|9=cK7~tNdM}~D*м~dn7p tVkdu-tNWIk I𣇫~ZMۢ˷hgoR%fo3zέʶwyV/xaUd]̪;*F\nݴ:zם{a[M/ҚJWE˖CLeaJ^1霚x|=߽3:%[Dאxdo 7U~'?J &w9pZIS[Dl7 YB"䡳]>w~GD:To8A FWF"1O8{j.ڥhwׯ)Bsd1W=8J7؉Sc܉{vBC|ăFF+L<>X%l [*O؊a̟zdQv|mW>q.^XK]wpYv*?,2\c>r_U'(+oQ\(%L8u |NsJ-Aocgfܲ6cBq8J|o{zmfe:s==]FUB nx\ GN0յv'0V5uuoN'Kcr6|`+7^qیֵ%9Um08=m 4۝ˉS"y@CzBSQב֗Q^74ppK6dw([BK|U_$MիE[7 foMh)TM,0LjmX`^:NkyYϡ6s[<-W%~e1:խ8ӭ.VZGeoI?iU@9c>,,/sRr]i>mo$,sy=~<.5*wWnc%4[bD&%ϕ|G\}k(ɨt^lUJIK`sTbc(jѯvih6]-Z +>ߌsVu)T4(s6#-GlV I##+.FͱJ565%)n.%2>%Wٹ_E625ƝMj5{-Kv^Xc kwm,G821JV1z_p~ HmK6%\H܌/#Z3ZRj 7MUU-JFv8]A@x\:V]а%Hn-mƹaNr:1jp*k^Ԏ{]MK|愨;uF2Ƕ+GRJ2KK#aЩ[OvF(݌=+6ѝOwz_ԒhW+`0Ͽk)JVKOqIZǘdVٕշ,|`6Fszjӊ;SJt9+Zn$.|mB0>^:qrV_QNQzhMp|5l +q遏YTPvsKߓHxF\ǿӿYQ.FJH1.~W$IUfI!J}GiZX˚}ŅPToʪU9fc/ y7>Ec޼N<ҕ֋'޺ߢ+\r͓ϰ^+uƷ*{_2Rn~w:{k>XԻjǩNS2J\eDjcoa\|>VnRM M_݉D$NqN:&Swn HF+UQldv%)Qx>~nd{J#Vqj6}w-ԏQ 6U }Zʥ:5K1j{dVy2ي6fYbIF`?Ӧ-U3yFvR{.y.$1#8ZE@tӭ:3t*0tc/Йm q*ȡrFp3tJQ/6ק؇F%ezYm?eh%ylx#w60y'g^^=j@9׭Ii2i _U,nX x Z*KF)=~kѣF1uEsu,vݴqt)HK**e>ckkvo9b#_2R`*%l&ܷ$c9غrV2$龌qe9.$M>2=zڹ)T9ҥ'}| %ţn@oluڴ(5iV_#J=/4jbck*S)FTiѴthpQӯAF>02}:vqR2ybaekbȬY>PXsZ2?W{ST+8Q@tڿwrc/fJ5eiE뢵ѓFQtqvOJ֟ZZ4JnvՐNhVi txlUSr^cNQڼmL]3yaozUԒRZӣj<]cVfc3g'?ZꩪVv8F;ư"rw7Q\e2{7qZ|V>c(\9=*N*.cЎQxi-S/ڔrM.XLCk{sGK+^.[u!InF;P>S]]|xSE- 9 ( YkqM~0I>IZШa܋N7[m2y,6?7z~5NX=J5RM=-}5,[=M˙?vNxGE˦\-;/Z]u 4Q7D &}#J+ܱ[zzn-\' :˴>߭hTTPäՍTh<*'*Zwއ I{J? 
]A턮Q̥AIiw*~,mh%=71FH#+{:eaPSxƼԱq cj6͐zһ1鄽P^ I3 v-"NQW3Iw*ˎP#}0fmf$U㿽Tdg)%B869r~^={tm-u H 4$|= BSy{QMUrhRT~k=99\*{o̒Wۉ6nn}ԾO>FE-㕑Ll~ϴ'֟7-0NP׽R;DT}0Kt-?!X˚z**)\8ee['vs_MM\,L$2_F~֜1rZ- AiPՓIOLqY18U4ovld/'Px*Jt[v<ꊤ*rV]NoQ̿f[i$ܥDϽ|.20M6Uw6f +zcpFz׉RRTbCd錍[s'{t*kT\ɮZ֭$ZfHҮd(wRz>t:=iSQw*bYv!c#IzJF4g,T=2VOeծtP3nmwqZQĪNv쭯pv}D6|ᘖO^VDMt @e?wF'8#޹RWU*1[#h7bY$ʨOBGsL}2}j4Nwu FnaS%]8*ps0q]}^gV3**5K"lK2u(8?7k4~x_1ӻz0l֑UV9rNNs㚹TښnGt迋Vj 22Y`vy5JkR<>!N2d'˕~ZyٌqZlwONq̒]bg{m =OYrqR[I6ɸ>X7GJ*Je*q\*;T IU0yk ֳGٽ*5)׵-Xr |y;{UӧRQJ߲ `#]}]<<&vG T3+Nδe^G~ާⶎ*Rt8zz˕e[y#tq= Qm_g"Jn)'}%[Ku![;03#̴eQZ7aDWسC?0J0u5wR:YƟpb]̸Ezw}&l|=zvƊPBԤa~:91{9EMuG|]{}^E`Tf:tUa.gR/$Zx;}ŋ@f̕[ 01?UG:5U&ͪ`ؤm&ٛj=u(-CmV@89>N+֓K}cTRf5@.ߛ N#WONyadހ3Urp;JKWts-E tLwci'{gWotB~"Ï@<d}Mrphc"w=ӗ5K5aR>ڌ:ymd50&$)ԣ+VwwG=:2VUk ʓ,^Vygs֣߮yLW"]e?bvPdo#:vTOMi_#l7Ik']])k;.sU:ҩn3Sޤ7o++z]jח)پ]ϵkl(\*6ܝiFRN^Vs%)u#"5V,^ߗzv\4Kӌ'XųyUU?31SGwN2%O[HUFXr1޺)F9+j7N;[̒&Mvcҫ*+Cq'mmU UT&qǵO2mPvVp>`8 xנ9Jzl<] J M_$Ѫ+7TKIGWي4Ks T$!@瞂c̝yV)&$Y.$n6FUQH8?.y+6&Wd+$\mro\nT5%>i|= iԌTUy-31>[TZ2zFp=%oQhVY-ThI^I=`cʝF+[TP$n2Tvl޳i?gQuH#c-:d|́鑁#}=,*-Z|ɐ>?un67SQi빅Hsd6Hh]W =1Ota*[Fd“ 91Z{'#UV3ZBK[n SYOwg/tN]}DOFϖŒ qU,?g{[(Xե:Ӓv&TOyw^T6T|98`+;(Lqӝ;rak#t[Fc\iV|a3n6'd}OJJ2e(ŵ y̫n˦F@|E:IֵˡNQgW,I)Em e5R0뛔}Kr+?&O1XmW&Q81WԔd=[ 0R qFu'r5_*tr%ЯRfjWu{tkHTl1N-kR¼)?XA8JUc *k]Y}5w0w1>]i8=GkTR1V|ʫdRn[Yy55X6{*1bORVYJ ,dj!ugCV$M7Cdfq6T|u%Y`gwFI"FHUG\c*s8zpidg5$ABz|{RgvV|8\n? lݏg č7"VlO}Jҥ:sg:xz4OĺQw!f I/ƫ+qq~kch"Ww3N˚/ٶ۹cQpWTݖaVVtr0:g=5w{/",FrSj]vЕ&f*`peZi_SjVRZh vcXNݭqoqNג.8tcX6E {lI0I$N=]Kʧ-Ou]Y]y.TJ+FmNjc'kZnY[hٗ-?_{WL*JlFsj;w$"%C![i  5#~TkgR7Bk993oTBoKЩq3mvUSt). c?,0s#R:Mnm 7#%dc=Jro-M)ө 2Z dM"۝̓O)KXgOWBcI3XrNwgFx޶4r8ƞv3Y?p+6H ӚZ9Yrݟ[ʢ{m!;STՔ9r64ʱxqdz SaQ^-Y#=}kROW6狦4^b;/}NpsIRMEY}3̍!Yf)7erNDnUl6xN;薆j4Xq0g32y5qn.ɚJGҜⵔRQ;[67r`[c|%Λ؉%`gf}cTsҺ%)F}UNQinV}v^[q73[Crtsq a9fہ`zRrISP#fN ۳I{+;utGYAU1SRbKRl0yel;slLcDi7!v78Hce~a:V𗻫4"WhwRtdelGNo֢/Mufsh`ɬRQT^{7'Lm @L4*otWY$]y AV\w |l̻|aDeK#Y]s;[ٚA>}O~I)U(nKB[[%E(*dPRh+Z 3K:2ኮs#s5tEIЪ*BuuX5u@ Ђ1YQ̣$a囔v'l`>\~Qu&/Jڽyf_*6[ ILԮYK]NpK7:qc I\J wf]iƢ^hҙY,-y.I$ɫBNͿnKq *F4r4+Lsx<]neR^֏/6W3"Ơ<*Oz:Yiit\FuYފԶ'R妢5Ib#~ew=a;DY?jNȶ_"3< w#Jƥk"IL= ~^p1f\\elD>7І0&6vr=h$2nXݦO(q'z Χ4e1ѱ\LJQK11TK;N*OtKҫSY˙ٔ}uI+rNJ6ͻ:ߺ[H E7JWihI̪C۸`6A&㩔u P:?+n^=b/Z*|ѳ#噈UR<#8ΝdW/k4Nw*iey<Ԓ9jTfǒz20h,kd篨yTOs|=/`Fo-wnS)YF1WoRb@ČO°qPZQ۸H bmXn>ޕ̹nx1n׹0eݑ .nq'?Ҋr2芧zrUŏϳqR/JˡT_~Qc.Jnx95f&TەM'PB:H Ԭ62<?XۗZ +.F=rW8}=G)^נ+6`V3tq'(WnlK߯ [^&2䄭'AgK ۹{ǐGߢJfXg<Ӕ .[Xʋ̼33ϕEyէFoB^3 ƥY'ys/i8D.X\OV ~\*}D0)FR~,7nf_qpAcFǷ~YJIw}g?ʮ>ncN?zЎ)(d8Ұ4&<ҵHcrLl98%)]*qQ*̉ۢ;w1;u<{ըn̪T峎iIB6w$}}{^Χ74iJ)9$ckxf|t44MIrcϒoHUFX1UQ{M.Ћt6,φ B`gk9V1cQ["HFfVbXL:rMGK$mSSk Irٔ#rqcc>iMYٓ@,Q̑n,{YI˙ʥU;25Dq08~)7ԑCnYN6$"~^ﲻ}iJ ֶOȴUCxlm\T6gV׿F0Z1n1?JrSmGNw23hҢ%ө:u[avuDˌG1X@<` QuHBĮ\Y\ հK/QXK6$L/U\n+ JҌֽ my|VX?'ͧSYa*i͓Iݲ``;~1hӔjrr;{kkG87`7JʢIELYO%mW(vs=ȣޭ̩~M;$=иY͕#q5Ks"m3U<qEh,ta)>5 e$+'';u#$b%))84pђWbg+1Dyn·m6S iw q~1K˗RѼI Xp&GK̨qP-źBQICF겮 JG,gHXU/W6SJ>OO-Qw.v$RqWf,6Q*뎙Ԙv6{YZnef|ۿzsUǚ%Ɯ"[v&2=[4ԯ-(G|WKxٚk22q*9,OO+pV8덗'H]|N;]ƍ*v] $?ﯥL}R84q.P"߽;:$'a()Y{ īfڥ+q1ޔy)F*H-sWo˄lޣXQv3?i%}]X6^ (ד)GKNfQhճ/wHk껌IY>\ I3מ(PP\T[,SKhxUM]*֓QꆸȒk]*̽)EK&rR'S$+`17ִ{'/f'נ%)!(?gFv} oM̟hcR =~jRQ] 刮8TWusJ>6sJz4*1)>BFXdv:~6F+Q~}f?4ݛbrGIX*˖?u«~8)6sԡV2_Kۤ F^{#?J1U칡rw慑X}.$GWVdrp}dДK`e T@cuK*%ۧ2nfn ԖsR*aN\/i1_2Fp7Osju^MZrdgVbd>]p>]o1zWޱtڕRkuvF< i9qҴJk޶Vʢ>HSMw0j/L5#)I#PXy$fےzgerk,D캓DOmwRP&ѶI=9Wd6oeݗښLn|SU~_q:R㣺:1)/O>}ͻ9 ˓GcmRXxPnkޏm9m0$]+|$x~>r)*U!^HXm'RcT:qXQI55Ϛ1J1 Rd=jS2}.ƴfDFKڭt[(ӔTCJҗ3IY2úH~gS"^'<JXu6mt΄y}2l5m: #?*]7(ʟ=;Pfm =3ȧVG/1 O4Mlv37ʽs|wehO_#ДViöV׽oRNZ_EE5}Mbϔʰª6s](Ǘ#zor-YpĐh* :zu!lȰ.UnnǵsQO_U8)21iky5f냒W?EH=]d(z{ P,ANzVFoQZ)UڷkuB܏1m`w+}EƜ$uʛpHgL+8u5e.h?w2\`hUh]VJjsZ- 
EIHR;(Q6qJHKE?+#]8#{^+(IyuF3ԫvK`.}=$?:^Ҝ_ZvW[(,o݁X)TqѴRxF޻Wn0?J+.eU^儆XX̡H8$9bTv1N{׷{%H=:ڳTyj\iRroV0Ķg"c.ܱⴭ8ӔfߖXʝ9Э)'O+o3$r2YT gZ3t#kuYBE:ͭf;[Wc?xJRR+NT/X <~{Zj~揵M7qR䷉UIf+`#s 'RJN#HW^j˸Kj$© I ӭi*iSoU{ouYuؒf_&Hw1/ y>ՄTՖU5:*gm-"(r>\ *adSKUخSI 28δT5*J2\1wc˪$FGRioXYFw˪Z7$X% ln8XUњJ4kbuj:=z$eh.-XY0xn?aRR-=uPR5W}%O2YXr/lgv9]뱬[z>5 *Lp3ڦVVR4 IgdYxv8㓞WM}z3Ʋ.Wx؉d;m8f,ʲc88d =+Bt/2@"#HĮݖRn{QBc5VON}$W`>88J1 O^Gghc _#8\7R^2z^_WkV^s<ߵiWyZҫF5]+y~E#:+Mzs\R,\ErDi0c=3ݨo.Ugu:SiQ#4j;6oǎ=ZrO}\Ia𶭭nɱ@Zb ƶS:1IZtiʬwbh yJ8s<#:R:Q[_3NNN:ϧzN}z+J\Ew/C0Q$}cS]՟qq npOOJh+4+ΥkH|\0VQ#'=yǽvJ1IhSl0UG}pG*.hrhӍ?;yH-I 8F9st}޿qqr& +*mѲI Nu[b֌cM/5j77 0aim㿡犕U1QZ& mqH dlss^p^}L1֧/gbEqn#jMtҖ"*/cue50ctqc=?3EIA%Ȝ:= Zed0\[ݲL9$Ʊ֜js5eNJ针IػWjz8IoF=>,[λVhN8]]5#*:]jtO/dUhwxCy>8u{bEF*+u=]{+H~ljc9^kqRV8Rz6^=^LTKwu$ibff.rq׽uCQqdhN%К8wH;7' kӒO^C4 ]PTqc;ӊr2hΟ4SWkAf8yhf_sԟNt#(GZ ܲm,,2=r#?tJ{F>ڏz>]~&wd}ŽEJ*V٣Ų$lE𧌎ru}MiѻuW;Z-0*sۃҴeo{.tM*,p1 tl9=vGW-gx6"s .$R_^+OE_JY5s]ǵ1U.tFv99$G$2NUvn)9?^UN^WG\(qt(~И(ԕ"Dc'Hۘ2׭E<̒X۫{jcNQ}cJ7{_e̜͵UDh1 P̬4U9RUN6Gk}kaeY>/ບCۼ18<Ўt{g5N|%a{JxZ{K]\I4hwqުJ4yno/eQE`\:ܣ_kOMT,A*lYڪdI㯿N{tGXmFP/Kq 80۵Hx&rOOsiʜ#]kxcb lQK`t'zގCé8ԒQWoo$[HnJok.~nUՍKdJOmiY`PI>ꢥ8껓R.]YrJ *.ϔ1\(^mQOy!+vl޵9smM 49e NxVPpr=')sX}|Z 2Bss9#׽\cehY#Wx?^4Ș`Koňøpz1ݒHR5E2P˕gs8arG٪_f2 n&y2*yztްfC,\#WReC:LaTf۵?TTR1i꺧HЪ2H q\HA|ʩNbe{7\e瞧9XJ3.w:ZouqP_46ѝ/} #i:OK_14t KDܻ>%.{h3f g9>HJ_H%xj`պ-٤A= ?zVUjE}GXSvжwJ֐²Yq t/5BVMuI:ɸ6gޱT~.fߡjoC!HUܿӏ ԥԪTZkH *'ʡ60k֫i^Sʤ}+ysEQ)ւC.E6k@E2$ ?cDZP}N?JTOUƇa@8_ʸ4Vz~~_ 4lf+`NhԌW9jӣ*ikmJcUAj+asϧUIK?s5MѢ7̞;|g8c0, 8ֱ:+ium-;?5Ui9٣,v\89銥.M^NCHd̓VPwt~U9J+nU<<*ϞRt"DP ,)3t$r:v ;ɥaZJGNջ}2^\,K',*W--wao܍z>4*v°A~QOO:=Miۯ،5Ơ`U1`TjMioNXi)m\J_p $}isJW2ͫ=_+.W wF|ukjA-G(˹Z6 lEOc8]rʵJV_4[f6"Np@~i{5b7 [zGߎgNk8qj$eRQ.V˥۵~'ny]pRM=.4iZZkkh~d,n7Cx95,=>ggoNw~nYd' bV89ʝITL0N8K;p1d(w1PN<QUZKW]WE]LV>Km}D-YJkp@Tr˦t{l?+ԵoqM6eR}2rO8ӭhݸERv\-,WCk`noX`۞y\ܲO/VRlrGr)KOv0AGvBTy( ȶ3mURʙ)~bz֞1qӯo-!(J7ۧܠyjbTf&26u q}JU=%~7[=U9JV^w0 -»qe6F sY>ETUet]ۮERR<[|EKFщ ('ֻj:Unuv_~ ӊvnjefU]#OZ妽[^uڭ>ZnN^ۭЅXc1;dc(2ƣKQm w ksDmqیʄ*(Z%f/GF3)OgvI-I=ϲ1%;zryqk%ekzZQԖgxR{¾3VT6QtjpJd7c}˕FTFpqr8є5Rz.[ukTsAM{M7]7p `liQF*mk_rj`c%:ޗޢ=mV*~x;cmo\8zӖ[-HaE>UxЂ?L5\MSNqX?{X$&oߩH݇G==;:?+Y9!d;P9A?A-jsS8ⰸ-V䗖(Ym UFwj,vצyHF9B:6[!yfVe0y52'*{~oy0 ?lO/$5*[7v8K WJjV/260?@kاu(Uk4Re3ilS+0ZO-wd9<JrMJ\\N̗3iNh<+:8?^f$/o'aaSҴJkKo4kʴ+bY#[*]īso\cq1cm7m?y| SU(f_}WRwhIwr00Cq]Q.]u:y3اwKYۇp9Ҩ]-/<=i>y_LIB[*K*JwߗΤ9}t_!I ` xCg2+S^fmY~Iz\Pm_Џ)>YtcKT8"?6^khSW2A~ca.߶G95>oj][{hInlFTyHY@\#~حѧ(Kkctin߉n,pmhz{'.}J9O'HR6癶l`uڦ/UҫM8-ӿB/}|Hw6NI珠ֲT>Yj]N:)Ji)j_Bc-#@ c|"_fl lrF8yI6㞟n'`Rnl0RۯGIP|ƊT5%gW ԍ!aXl#t~fv1ۨF-Tk},=<=-_Zkk!hae˟qkj4zizƥ_fJ<;4ճ=lb;gVNIy}?5}7J ەc>bzڽ:qjV:UK~$ jHpNaFGr>)IsiN1['+jZv4Bř#=x㜜UѩdN#NU.ͫi9bUXTl;=rQ}hTKH\[3ɈюI??oj菱MoNVI[ؠ#Fr #8|>t}|g̻MaƲRٓ?c[I%0H1>sc@kQV%{/UJܼqAIlnflYw1OcZklc~o\̪Vo1Ca/叧k[1u9lE)݌>Ǿs^יG%*vCg[a4[pɒّ4֯Y^N9kV TU66ߟE- ec57,ѷ,.euv$-=a2" *V\ ]5 M4RHE 6PfO,Lo<EɾZ*ĻͺMt*9AP4؊sVQgIS۵ meaqՔh%#y/ >9s`cXrݓ#ǧj߯%ĸܮ!I^`ڪGٴ۳;*^R4ګ_++-MyZgq bEH#f#'x;TKjq;ߪ{=;ʬ֐زaP)}_uzmnlNK+nt[XHFrGJ>._ÿa֊mP4SlU5WֽaZwM%حYė;c9cwӯlSvf1%9'kzK/tJRҰ{9Q<]ɞI~ie%/ך=d\+ӭ'Q/뿠X|︻l^Jn*qjm7} l[odGPO~ڶUuc<:KFa[(d<2q]0өʩ)TZIm EQȱgqn$~h븢a&XzOLWgjiJJ-b[x 1VMUn|c(wHFޜiM1RolDw=W)Tn3iV x I9sTRMO"Xէw_+G(vOjtbV#寻c -C`x}9XW+aeʟ _kk|=3cq^J9WxFkۙb)JZImm s5?}$䜌qTNۓ2ROu_!Fy2I=ORM_Йb+[-B1te(Yu&"E(cc;`zt'~JmRW[ďe Ү~n?NE>hʮvUѫ˥,Gifj{rz犪]Goh] f B9sV4ʜWrF5+GZhK&rz{uUNpS3tNUgK[{7˴rp~9Zu*JЊrݮQ6ܴ"];2j9sڻFnZ+aV|־ߘK$0׆\\n:.!N&bR-ݽo$#ӥtKRRQ4v#+[W#UQǙGDܢ#|U&eV*A'$LtN4=]<TjSnO|^ 8I:yejN$}zÎXUdV#aFLʤݽ*)l5bIc1U2|ӯT9J8K ;Jn|#Y!c5ܤm^z=l=YUEm٢Ki|3ee$:h*Tc0jq]AD}+nN.AN`s׭5KM0T۰y`U!ez:2) lb(9DeE4YCBHwvak",Oy<ƹMS֢U5hG~QjʞTsr$uM-9#=W*G#z9$pkF|g#9өJ-#8R[S&m^:sʹjܟF*ֻ[z,=mW 
Ί,}@T:RGO_*|qir~@F7rH?xF31q?1#=j:I2Rqt!2as?Ozq~:e|7aAm$q2,+7G ׵cRS攝̪VN.).C[I#Eo6qjThѫ~bKAE^~u?dL74񜏢?vr{}nK*+7̹۞ xS8ۚT,ֶ$%{2׭tS8aܶӍNg]yzH#Xdoe<ܒp)H+{b+y-H%C81rw:SqS׾rlN>sq..@MNRꍒV: ˏ;O{NZTvMz)eRl$Kf+Oڵa*;-٬ʵx'/A z$W)8>ү/^ ){d-My=݃nª3SkJp/=8}b npQѝVR4jz)Ryo"VGYaa4|(H4K~!rH3(W`zq[RTu#(8wXXdWY-&Vu)Z,2RdcpFWFztʟ,9TnPMXǥW%:tZRjߏQÖ ILk(h_4;F?xڞblĥd۷9}0{fc-M%HEɹ܏*=^V}wSN Vwz]aT@s>%Ͳ茱KNރ.\$+*+ʯe{ªUm8yjUjr~ȣ+z~j#)gd󥶋 1I7n'b'*7u#Žw=~Qҷ?CSbDO[ګֵq ߍEOݾցlaƛ9 >) iki@8?1DeukE4s!לʸ,v3kx ǚ?5K_nVmͷcu}e\]Pߠ)G0W*7n#=z|=,-}zۂDa1pxO8#]M>߁iHk~ߧd\mLV#3?iȨ֕Jw_}IX 'j'h=8gRZOӒzM2ccZZܪZm<-̌9U\ +ղ[R&LF)^>鼨F _=׉vӑҴ3)VV;<&yv p9Zz1Ԫe*HĜs󊨹r6"y~k\4N}zcZڛQGz~ϗx[Bȥ>Fp?Nu9aQrr|>`2$v(HZzqq iimĐN3k[ֵͩXFwZnHVџʶz{U:Qx:܍.Kw UbTD^4lZV7:Fx'? JVN)4cxyĭn{/ZGܯvtOq,reߙG{uy$sSZJ$ږW.Qӵgrv+ݗ2}4Ͻwf%=W埭jWrRtdDďdz*aG cWqo34#@8hRISNmɻ%{o1{99 [;&lgPhՌyMycx۟εTz/'[xȞG|8(6F%g"[л9219QvTye&!.&gb3yVДrS=ánm(iTF^it3K3nX߯^q%CsTkQcCYkn{gҧB#SYԵg"C,sGmszS(˖ϡ\Λ+q&YphL=*i˒|ҰyB*ݼt{qJ&)]_>/ʴg$S8$wRGmi gӚǖ>^ETs_!:227gqIz]ɦW-]KHPCersxZTck0\Ct$Q's^?gCRe$fYI`.9NQ4._vzRfq?CS(= ̭ߙ*,77`ON*%h#KGvېzwOY]ΛG3NWr6ZFPaFk0 jf˯Cjc*vM^ #n$r:zP3?g$ڹQo 3# i~UEP?S;(C.dGY̻Wlg1U9l9?iGHXW%PJ9e=\XR.m܁7J~ǥhH.nvǞ#=ǥ)^:`R\ܭw3 {rx=кQ5[v󰌱2=۠J<4ʷO0v2\cU ZkrtǖbO#TR>ZtnYP)F[))^1r g $FUC62HqU(D8'6Tku1\yr/NO֦ۛV2)㸚L<"Pqp>QQq_ol?*\}3*$SN[LJ*^>BB7ƑBGQǵi:jGIXmߔ6r8=S&4T2n.W9̫̅O9<?Zio_v]K|[p~Vbr}#+w(WЪT\XVSӔ9dkpJ[Zem텔`|ހzЙS湥bsy8ʀxӊKVsʜ 6bxXdf#:r1rqq#iJ:;[=B6Ao+9Tunr-[Rժ1C{pr)Scל˷(2F0V'ɻ+gk%*QR_Uzv}at8ZH՛G?0ۑ{V|JXteuٖc-β䄜N߇ұQes΍X-$pmu«2㒻m|N\M !TA#i1:&G$f{΋'x''߮<5yn*4kmbA&[q yj)\ar yJJܷ\\u5ỌܺR*6uDJ,{Z35x#?g(m c)YF .~l4nTg*m]4i"i6ݍxҒ$Jyە"M۔h_^k;;C;%yP[xYLʜe(G]Mj7Vo3{>L|Q~rRR)QR`,`e`2ǞN{TXݧթKW8c3>m}? ٌ${x?u2wt`(8~X'w>OY;1֕"t*4+)?;OˌqXJ: G^wӢ5-A\'-׍RQ\nKr$Vlu#qҩ:i$6Rٚ9ǨDct*2td41-0s* u4ϥiZrDۥ4-#@%NQ_aqX'%AM9J> 7ךV]U+sq-fThuڽT1 1TeRNJU 6͎p:-9 8(o=GJ$0g$T۝ni?r26f1+޲vm0EWjq։sF8-#;A__]L_7eN' +r6g9L+9GVB~'5Pr.FhT*#ޗfI1LO~s~%͢ OIFIߥLex1Vut'|{*S4}^Chj*+Kuy{vSeO o\ ;|ΡNzd皊cXSR @ȒC.TUqڮ^N2+]|Q:XjҨ*Q휜e_ȪqKy9tJ*Fuyy)Ê+Ncu#Y"eS^i=z+x[uϙ,[?_T{F*+]tA ͞T n'~Ty(~cNR奿W2c}IZ*~|0FMbLgph5Moh֧' =Pm2"U1~xYӔn6Irꈣ|&w1榧V4[w{K/"Szq(tDlWi&_&uf-}%V,G g7{)HI6Hs00{|ډT;- !KjM5*#0^;Q޳䅘Fj["gfqVM\zc>SXԧ.䎺YydccUqcfrna"T5Wi$ TJE3\Da8 lBnh誋( y}xD_%8pRK ǥ FDFx~Y+;FK!ŃyF1GLh|W hc3H2%C fF~zFᢽܝ+9TV]7A=>SˈWMl?h䝧`3׵yҥvP9q7n Q?ҦQ$WUQ.KMʬQ@pz_JW'{9]q%{$B|6QtRZ1b%Yx^sҺjJ>;|uz%w YCl2yU?3Ԕ-&bNU)XVWFr@8lu+JRRk&dF |1`HsUjIY/;]. 
y24l،HsBi˧59qƱ"Kn Jg]jܯZIr\6YVkcRKMt HǘwnfWЌUέJ/{XSTry#Á^?`qLW2+¬͏8ms8αqU}X9SI#SPvX;9Pcy✕o#d>%ͶX(:Ɯ}HҤhY[%*fMb2A}j9SZOCJtc>i%uBZC~&GWuOS0ھv:ð[FydvfA{gװ9+=^#:*|kMlmV5Lw9Q>X]ugJ"^FYYK^A:]]-ߚ]K<Y|?`$<E]Ci?aO$(Cy1չ }x9Z:*1OvgJ2Pn'o29c`'hv)) R*pL-Y{9-Iʭgz[[aImw[-^s["ٞXx[d&58N.)- --| mгX3UGFZW:|GːTl30PF@ԧ>jq[˃]Ǟ8}QbwMNzxJqu~pr-[{/˹_ZFJ*__C"(K`Nr| ?>՚וkg 2czҮ|U؎XUUle8=kivtb]Ff<`|mAF33Z{чM6}~G:Rpr-ר[]IsV*&ZN[mz[tGuUluo*hJ+)J <*dsBpH1e'gZ\~m69~Ƃ@>u۸G’t1:~ǖA h `EVebgz t&Փw1Td\[y"Wdɷ}8 xj)$%NU)6v&AcTeۜ({{ҩi5wcj|OtYm!a䓎aWFm#j8j&F*>Ϫ,Hf 7mYGʤ,zӧCh46t[H]۶h{\vb3a{O^q5{/ R,4_Ÿ{udi "y9ɮqz%q];k(YH uaHJ IlwS>U4mԆ8^]'&m2w>{T'Hm7;G ̪w+U2ny"븓rGUdUܥr9#mGݗjzӧ#!rHrx>y**z%tQLb͜g>~^rrz4R勼VMӔcZ}2;1f )/{Isr1\H3៘`ڴ]uVj侉_BʞRE}8\`_Z2c iTOul.Ivݳ#cD*J1V*1)9rY0Ob(V2[Yl&^A^_ʉ;ڜ%Rueou4AXlɽ pޝ8EO:Nbi%k3.qm*4>iYJ1RKO# 4q I=n%c̛XH֦2)(:؜TqZ"3߻_s*vܺuf}Q Gu.Y g3𨚌Շcn#䵜l&rR}@J9ym*rjRn|`.qFsȩ(?sSؗ3$pFeRm '߅<`Gֱ"=v[ou}0oos\;y]*+sa5 RI8쇕VmLtO GݛJ9n^I\ I'k_Z]BM@V8#YSs9u)}Ry/~UϷҕ/g 8V¤4Ԏ"w˞xOJoMB*T!mv*{M-?*P-dSM DW]T>Ы$Ӌt jk_o†dU;E^GX:qn+)she2;gh['kZTjʝ7%+Ǡ[]CRU:'ۉnO#V`dœiӊkm!Yӓ@\I/ix:U "ϑw&6X Nwc)uHxѼc=x4S^^>ft|IYyFgVa.~\d$8F8ttwowDǸz<>6ZU ՝]ɱmO0*ȑvHG8?jSa_5z+lgB&R\vGH`dB8UOnu<T>#bEm%f qVcɍS5F8y-O؜ElGmjB BvڪyNpG|iE-/Ri?E8lUCI"rvgw{RsI;>㦍 啖 \ӷ$d*u'N=:v-F+kE^W"rmCiJt!Ewyr)a 8BeiC[~$2O$7I9py֮GH*sWՏ3 rv9I9;;Yƍl .E̎3\gmĻY{9늬:'QݦV(Fu#RroնbA*ìl/͎0O𺱕>u>XK4稜g-Y6LjA=@޹3WRZZx4%הQ_2V2*)18߯^O K0ibd!km\F͜{}kh¤i-iVM_$gHlOjTZkZ4Kȅ DUR=:%V>._ RH'mZpƦ#BW Err->]ȨKJKv! j;|iMnHێ?*r*]٤([k ;< 㚪4/g{%E)*ЏqTVlS;ҭq3U z}r̸¯E`z*cvsJl4}Ώ2F389)ڣ뜣R*[i{SlmӹGs59EMv1T(Xq6E ,ӏZVP]HCYN˿-@w66syeejŴv%?ykW?{qt+-C:;m%lϭmNT)}ۿCŭKzŏF^@|@z1K]Ƅ.Q;XuWWQAFeQ6hٙd#nO-ybX\rF[ԝ3zتTଣUvZ;y$k d^|c=jr>mI7rk'm6T[w ZJyo#\ \G4iiY+"yDIOPӧC u*ahƜwх ʼoad\gp<֣KC[(i 5ԳE&6qU*_U^_3RZell4JNїSPZqQµK",.ܬn}W#ߎ%$S美-Ln DZt"<6ChE9'fEtXv.YG9>e*i?5gkL 6mˆ0ve$#>Q;Wk}GJN<ܰv>8niR;":殪Jqjyy&fH #4S }}O>WgJۤ̿yFe)]ky%2ı #O1-Bab Iw?aҴYcX/PpY\eGLq895q~t[GJ[__$15B~eFck-޾aNR~dtmp,d(swƶѭ_?*sߎ:~Ҍ]]D:T쒵7 qbn(#pN:W2skQk1)gwz}jJKWM(xFV8pHfF玼UGNe=Sĭ{݀NdI<k5~"lg!g$tcJ*m&`88׈#Rs\^鏡cEroeJ9'I4,Eo$9|7^ZJ5U-HX\<) ,Jp@㎇NR1z}>FѣQy"i$Ϳ@g #}ףR<֍5eg *]KTEO1n4w.-X ;yZ\yf~fJm{_X(81Wm̋U#OZӔ4tFsI,,ҩ'$T\ל4V6Ec_)oTϜÐ~jEmmkKRR܋԰Z7ުy¬|1=zT_6;eR2]Zv%oq[x5Te2Z\?bZOo7\i{'%72~+pY1.:d?NNyeη-NhM-p6HNN} DkE>ѧ88&ۨe)Tݓ.x+*dzKFK+[bDy6n[<UFJ0rtߩi'+;wwv}h$8c؏qu_.L)5"yEKO7hW,e5Ys0jAiič~_3A8l^G BvKAu,-3H$[ q'*Z%%mMIb)W2V[x"I }۾|]W0SD5?5L!EF3ێ'Z4G 8=YT>3q|F\/]mA䞵Tǝh~k׌Ԗ2&5ƻs{@ % ro[u_-]] ~N7Fە0\ONM5]~nܖ@1F 7qԣ.~w+~EXӊWYHܨ9 `*ޝm'~yzڳN:huVNy9F:z!G2H%Hۙ0qn{u8~lE S=:KRp$-ӂ9{bDesJzVֿiV$a9~)T6<=j>ҫ1>M{ *[l߿Nլjʴ.­^;8?N\/&N$MhƥEu^Ҕ)ˑ;﮶,] U3sAҗ=^jS.6|3,0%jrЌ}Z1j輎J5RšnWyzI[3֝Mj64_"&":Ƿ__=.zY6 h.вHAVF+0l˚)|3N|N3}kԍt*5%xn6V%O1B5BX$ L/˟Q3ZsltԌyn+p#S篹?Uʒ(0 NNOJKR2U-bY~Wf[et@# SS]Ď8Z9ɧ/soh Z֥[M8E?zV '5T1BDZ|.-̭nA{c뿖1O VV,An [jBNxN4{:9*OXt}~7\o,7>` OqªQQwu*kOQRhwy"'c1tcDNTwd_:56$HYN{O=.Z>Stq҄%ݼl \r4SN^W3JQ%㥽1rYӓ~kJ!dtLBm1==)J1m3xƎ$ョ5̸ s1>Fe)FNQz>={ ێ2y&y<})6zIk<ϛ1㿭sʧ5{/ޤcB]oQ`KVR7>AV4]=| T"yd*i%&GpАN[zۑDSQasQ*k_^O1[Tsw=R,uZv&o{+YtD,g7S̫n!߀:=3SOKS O9gk[H֒2x?SY{0򩅾UddYّrI>U# zNrV pB*2^O#b(ƿQirУSh֧R)=?O2Icfdj_=u' TھD4ZVv\1b0=0=?j1" Sv$;p$ nf2x=QJ')ElOWNjC$Ŕcە㯿JQ{kqiJ oBx\E$ t| 3PMcʶTKt5M+wD\2@=:߿ҜDcJSWwře+<Ƌ:zv2/2r ̸,F]>IlaT*z7t;2фl/>nz֕9c%o)VI߻Cn`X#yèk:'-K[?$P#5_1qRqۑ;US\ԪG#Cd4+C|탂kӿe:y`m?OK6.XȞL[Qm\nj=U-eӷ(h{,+i՘т9OgNjT]9YNvVq,Ѷ<ݳpnyZQQF4UD]uoګIk/ }qD{:mZЏ,R[nx[@Y/ɵ% F01>/iF-ϪѢܢ :$i%RѢ*lq;>4զOGk ?**x̣̽O[FkSj}N\B*.H3Y>Y7n۸5ģRpoNǖӓ溒c.I?v=NiS 9^ 7[E C<GǙr@=WG\}qT@GhTڼ8B檴I/Q{h9pVڌ~snyF:=2w_4:cO62 c6sק^+FkJ0OT 1OH6'LW:QRݷFm7/ӏƸ߲NmūۻMȿlq;.kxHM붤aeR9mÍ>TrIMيI>[mr~HTU/aX[-yӿif1$|+8my:z7$Vt}n ƺ9ߙ7Jl`Uy=֔o/w)V\joȶi y㸪h'*գvH% ք 
00G]oAC2+%P0w˞ݽZ]1䬬K.ݫ4|t۸'.ZPWr{LEa%m@Lȉi:QߧtJ*w:c'**Uj&i7.sخVk ɓ>C(~+Mj§yKvݷsװs喯}U%fwX0ɵygU_S8wIUmnb{>^kܮQQrW}$>Ъŷ|ʏO,U*TNNf%^n/o[i{v_:c+[{-nȭ#(]\=IlG+Z*eWI\wr1MTt"jF[]UQr8ZEvLeHD,1vO~MWveFj]K;Z~_2ݐ9;zTfz8z6KCu(v=7':t$5T5edڃ{fLw] ;sZI3!4~ΣlTbݡ:r1ߪ i,ETFbTdvީ~nVxגM0`TgxMks!J+损,sկ;R+tr86y=ɧu1ߛM$;Lqu;br2Ok6:m?@o&hvasMk)KG2dگ$ WJNQJ1KFUPُn*cӚ;cI<;s%)JliRJ3)g\sm SF{~4{NX)JaHb5 op?wN-ez2ݞv9G{Fn 4Hr^HeH*(εxy-OvS$gG~Ats*8rhPYv;vtGZ.h]8UrHx0jz~ QҜboA'1۷2IאsS?G4cD[Lʬ۶H׬Y&o%I$Up̣q=j{%r(S_M gF@MpFo\Q)K+CuۯF5WVi$U@{zT N5$ȷf]ZmJ2_*ܧMs9Z&y*XԺ9爔#ds$g10_9=c<)jVHHР%Xdp_5<ڴ. |pxbM8  Y;6nNkR/ud$3O^N(Ο;'kr&i! V^1E.ta(IaBQgFYLWH[_-tݩ1R5?y䕮di-Y&@nH-YF+oV+JOoQ cm$@L8?J<:w+j]=⹆c3Y;W5r|亦Us{.F-ݏw"odsNG4dO 7v^A!R<":~?xyo23g>2Z#Tlߠ˜JaW۷$hs_̙JZ"(hE$fszjf۵H.X68QQOHkYlU[Fs+zQkNҔdЭZ07|ǟZ*7ʯ23b6玜3dN#+iZw ~U aqM-XZiJd+UXYEsӵ#y$\+#`{3Z/vrs[lR0G>̢yU"]$F5sX@6=s ӌ乖Dmߘ4m=})/Y~eD$[ޣH̽U4$[Gq?#nۃөTw$Q\7Ő9RAjeRQvYYz!!ltݒrGO֚*"3ЙIY1.*N:&)JTwc Cy^xNIgTmIQ dP(eʵ4N_ h7i)!e#(4GvHsn,$mX>'cis5Mus#=,Jcl.g\!F;o.,eYVU֦1*o-^ 0>_A8>W55$?v:,dV A~b>\1''MԓDVA `3v(/Q$Wa˵MYm>rKVm:\ϡ]/͹ozyycߑ^b* AoõL*!s)ǦwrK,yۣf$*uu 9ζn4f5% i@#T*jYKszfCʫidbzv3tЭۢb1s4ֺK(Y'sHa;@OlǶsަ>riMJ.]th Sr[,lnqFF7v㯥(:%uȱi"E+$jW$GCnæ+)+]I{EmBl7YF@K+XJqibwDWsk9S3|my3yS$?#SgBTiM" v[kc'ڎeT]E(r6=s#Hr SՉ|ՖFb VFHgO'\W\[\QYn6,je|8zRn2R"YLۚ3<TRqHڝѕ9ZT%32;Q>_S: ,q~QG,chG3 2.$y iۓ7t+]KݸYԋnsΚ5b,J|ƕ~1b';U:sОiJӖt #ٲTc9ގ^Iht¤㭅xV4eUSn7Ir?Nc9QV8V1WV4Z8=;UFٷd7\]t{:~es}oxU<ZCTܲBW9ˡ2.fÖvFhXà,@#֗/$ymy#_pycT˚lkN}?$ƞ`6(֓4v'#+cITe;_ݐ_|(:?Z^2"1:r1bݤsx$RNhܷ}Meo0>e'S繷f`w:)&6Jj|O Ay8ms8u Qj6]{2;?Z/#HSiY0ty,Yz,ujyQQoͽm8  p‰@QWqn~tif/v;KF.gW.P.l7BV><~&^QF9dd>\! ^Q[Q#h$[v۵K؜q8|ڤ(_g_;ml{ -t/BH rm q=9c{ìFIgP)J;Jɍ-,߼qܠ<S+gGvWgC&2b1돭iƤ,?j=WevQmWk7(mj˱6m7(qϭՍ;(o)Ic]ܑΧxJ%R>n1ʡ1998?L`Զ'RfWyIknN_$cNR+C~âZ,Ic`Em9F^K{*զB.O61o wvJUuV'Pry *:T r3W hN<L QKM<4 |m,DzsGz|R5*W_k4@?0$x;y>B0FK~"ھئUqvRN3u}ۊ;I|(Kmc=9cmw_=k3ң#ݸ_5QPyZFVѼҿ4w2ĶbuV5(871/6،\[d%Vg?JCVʢq]_D:Xu rNT{}GZNWOz1qs"OGc&ys' Ky8W}}qX>S{Hqףn>7Xq(faoHOJI%ڋNPzݍY \q?7v=L=9J̈exFJÙ#w򑑷tȫn0>[kYoЙ (HWCcF:`.K$ Th<]}$Ab[ T-܀r?*!6eVZw,eGlz낷6"2RQnM$U%A~_nyr89"W%i2mݴ}3XU<$_:aܩ螺K*B&c"c%'RͽocXu*|S7<-mrVHn_z%/&x1WWH%Rb#kHԧ*vF+<-~D6J[Gg"yj6N0G\<[mί`?;+| >+I+WocJʍNJ{lCV"5UmHr0ҹi{X˝1N"$6plQV=Oq~5"T}7:ߣ)U:VnV3 O\~KY})<:չ+}ebi6~jIM?cgUSR:ۤUSq&QJ7By ¦P,?.r-^EaPe {{LP[ih(S6Iia;*˹W~G)JƵ:":'ke`#q\'4PK,? 
1460^+,ջ$\4q6ٹV@vC4\RVg&Kul%jljXW4ԥN.Ыt4wĆ/SstPO?jSoTB{E+s3ɕW{J|iN:q[FOٱpv1Ӧ}@䊪qZiv\iyӡ]ȳvžO0I g +:}[yJYb$JW}c̒]q3^ hWc'CRg9sA3\C60*W#eO:n:~껝TeNA yXʐx3c=={5rT֛3(%RO%^x6<_7_(`}2x'ZҎ+ޕiGTkXy坫,A۹a{Y;G8wٽἅ%#!S펝]KqG#m7 |l)Q8šrb.a%Ӣ{q6׉Fͻw{g1C 4Dʤ}ѡ%s*w0gZTd鴗{؇R4hRW}Kgqn'"pۿ[RZ:eJ4˰x@M񷁎>4NRhԔ̮+Ck)=:JВF.O6TpT5iog7$lgJL*퍤e鷃rzVtڲ[hhlUk.<&\b<'(Ri_Pmm]2]TF=.Əo95h.*r*Kr,`ZyJ&DRl{>T4wtp#h'|b;yRO.b2nzOZզ97*Q vvU9s2f{U,EJ]{~?vZ2LsƖn?/|O\<.e3ѣ[m-իkv6aU#F0}9K9n[ۨ@RI,&"PUJ ;tO2 Hqf m%Hp=+IJ+'(9UnZŸK0醓r~PqxjIW:Gtv#mSyjq ?:rK҆_mjv#it+$>[MI+Xl}+:vW&H3ưU{U74u1jJi4OsVou#QV%_/Rzyws!h٤q{Urؘ\Xvo*RWjڹqN)Qo bhۀ >9slӹxzUtI v%gYܯ9׽MhJK'cR);]>bHc)D1ðyJr+[?+#*2+A#IZys.'LV3QmКF'6*Hn.V|]iN2e^JIXH吱lB13מsQ.EjuFWAn1J3Ѓ550M[>J;w&b΂8G/46yRJ^cWd-ˈ琈9< 8 tb*G܄ʌ6n"k-o fyO#R^X+o5VM4#0ۯNRIFVdr>˲5člq}`xSJ*52Hh<]yh,wR1n22 Zo8գUɵ܌M [av̟*<®4iՒވҍJkya[hRT9E;y7o󦵫ZQ{o{Α <̮m Kva+2E>"{*xv t)˖nbTי?O.D.6+w[ˈ*Oc]эخJK"zqrts v-Y3l vY$1ۿiǚ7M\Tlv15ÅgVenvnߧsۥvЩ*.8Esrc.X͑>]^>ӝ;3Z?"` W~札=FTiV2Bbe+7+iq2tuo,`:ȬKzJhE+u)ݺsmxؓS:ab6}JC[ _1Uc(˚~T8kI!d?ޮQXW2D&jL6rNNOaW$䋓WߤVg;.Yy|=?Z.L7Mt.sYwnAv9d>z\2JT+N:,eWa%z r6qDmn~X<©*7f3QglhWgiTu>%dRj3|Ҷv0(UӕipQUtӥf@"p¹ZQխ;eRa"TQ+/fN&YRWS7)ҡ}-4RJ:ʹuNm#1Y3_n~5o̮.bR- MswsKYryߦ+8ьvl^mQbf5#6cMISS>Z.Uf #p0㟥O'v3NpȜ@fq9\֤6ӧO 5}(<n^zV!^r9PIigu^ȁ|FW'30Qzr:FNV4nZC\ոKwsb-#b*aF1ԞOn*J5h95wm %?z)r٭^{k]<ژfɰd}:ucRͷBpr:):얚ߩN!]0 +oP@?8T4oѨյ]&bqj2PlQ? =%XsǪIn=+u?N6:.Cqgl?sR՜1T/}4R- Zz.z;pkFI;Y;Q'-ikܒ6|ˌY38!$22mQs5ʪN[+tF҄TZM|bRcSEUv1ҳG޾)Nrv|,c)zgs9'trT$Ր4Vc*˸/c+j*NiۿFKfmYW?ҝMi%VVb,V*k3O1>s'8wVWPOJ?G˖^U6qYGkm|D[g屒[QƻCB ЗaF*uҫ{%k6SJIqnH9PzvU%Nf#ҕ']n,y1;¾bop?:V5) 7VwBDFue*yWzT ? SRTRɽj?!%ɄU^VNP3ȱ=#qXſyf}76Kio1¤C|ۓ54׼>"5#YUw'E$^W@Ó}OjR1o\@-Y8+ Kq|q\=[m Os$Uq\/dE?N Y"1#*cS9G-u%:R$[30(i|F9ۊJIV&Qv^cM+Bn{sq?E?u6BR'C>jM5$,g=F81_"aFI9P[s+mr g9e}(Y-15b :c㿵O]јSIKF"ATw 7ǷZgAi~_>o$6MJ…㝧OOܥ&\Ǖ_V@!o3r67``OJN4#Dq#[uX-& Ӑ;W%]tLe*[kE}z Ed,Yh2P(Rh{KDqyImd٫}$*ϑ"ɞU/yVZHbܜuҲmGU(z|:5rӏO+l5HJ߹UWwаZ@l ץrb**U"zi/f^8G@ڪsc#V6J ۚK,Jճ!%[{zu+.Qa9cʷcchhmHSdrzȩNEԆ .n|ng^mSvRkO$yTV *$* 8AWSsALrƜOZyY;oZs3vײ{K.?>/?8たU/ih]B?ZJSkv ^9>7.80S8oˡ%;FM;O!Uc~aM(Ӡ.hoԭpסV=JdXr|#$;LduzqO>2%iTJ4X('LӚժFJmBRmu}dA RZWkaThu*hD1H(trƋÙ?PS)%-6m %XQ.;ժF8YvK`B)?{m 7^Qvb$ߩQbܓl縌oWV^*%M5u#4M,m ~LkьSik{~FJp=` -c68z{W^RSA߭cͺ<B8r+?r&X|嗴[2C$wIA|<܎a~Z--~91>SKB1ff<5UIZcB4ڻ[q oqpFNzzQO_g61~:̰j H3q޺p|IL%e* .BJ[en֔gnSgvJ jh廆LkTkh4Sǵ@O5u){ˮt:%ӳ̅d[Y66.gw'it-^/59rI"kh$i<9aztmRW.mhdԆ/c]*Y2n ylz'+osW[5RA$p[~6@#sӧ)+Z]/mך5Db1@tjZ1sU9Th6O*L,a[n7O鎙,DZJSKڕ=:I7Lշ$ͯTaKIhg+ ,WMJ1ӧS0VncJH 8RYs- bʺcrwrF<ªUZCY;2u*^StHUWoU@,H^Öxi&zשQ##nq8]u]l؅Rl excׁqW6#Nr:wZ_?qF,W5躍V[Oی8##<\OfvGt3tkܚQp;h 6Oq^9mmV.-(-]MI$, +8tֵtΣZ )+ !DldcƇ-5Ϫާ%wl ;+Gq(+G$02LeSϯJ.Fյm<>24\=>ϖJ.ҶMHJ^ia,0^S1>qT5%I!r)91Ie>O+JtVntW[~(%%Y1VH.'cYR7vC8T5#{{x^FfYdV.1y"Un~_SB)~k<-Ŕ/ىѫmOץzFRPmSH8r|1%SGӟEZpc~FyrIG~Q0DbYOZN3[_͉Q:֊n]oDw%UQzϧJ)өNohw4KWnRi%Y%eAs+ZwoMU(wO&Hm۬ G3.m={V..YN\"ݺ!'A݇L::ҧ?jck;iR:Y>ТPKRmӏ)өIGjZ55u VͤP3lUInînirO5Ťa5 >(Fm>m jReM[}<58 VqNy59iOsvZيJ2Hx i S +qV)mC3 w׽lzm#<;QN k[hB\oFiFcv߽eN:뫥V1߮LừLnS>lXIT+6+Ikw˩b쥬h $_~J87Bբ;{"'lxc%VW|ɞFݕƾjFӷ$m*46ASQ8g4Lfk.ɜ|>gk9R:V:cp~i"k1.&6Yv͆$p:|zVj)J]-wn< +q*(?t]^\a'N6OH&[r&7S*j<Oit+"n/iFz(gZMۤKk?乒!7Oۥrԧ7Ei}~ܴq]Do|IQsۥVJ1IsZjZTNFuDxcc'-NI'3G3i2},\A!C' ӚW‚Zxb##t.VjKz=XB"I19ڛyfnֱWo#7=p I%̣w HzVĥw..R==u-͝wW,qr+, F=O=|ђ)^&5hޢԚ L'dU]Gk?F17ʽ<۳+B3ӿZǗiZZ"Ţ #X`!18i['^=zm30K 9JiI7gY`Z6WvaqQӥj^*Qۧ|D#R^>Mc֛ KuW168<9PU"oUөiOoݮ&G'C+ &VSطp3,pv30=Q妹[cB;Gnۉmׯa_IcҊz/[Grn^_2\]8ٮ g|ȘI]|jR/!d19]1GbU#ȅW>c%ԙvʻi`ԣ.Tv 7 ya$jycJYLqڧl-!*=+9i̺EW/ϕUb7s׿89~J>L֊$Y`SFkO{&iXϔX"D8x-̛1dğX#׸y8m]VV\,gr<x~evgRӚf-n,x8n3TD^0E*cŸmwyRT1u-p` gd{KOK9[uCo=݉$aa}͌ѣw9P'hٕr*9h{/~6v˹^ ޹>җ%ՕJ}P٭$E-vVHEJu(~dVN;AWfs=ZJ6iINckn2 +›qo^n:qW:%Z. 
Hc7H8<8?i1羻NU*lM+6n%._n[>mHt>^XTQ? >¥]X;q95Rqj٬= mqou4?jr^#Olz ܮQN|UJԢUJWRaO-\e8&N2\7jƜCk];I6`g9hHRjʜbٷ{} ]6)bqJNM)t[U`|mUH3+:t{}qN#GnK9-HOYO ør֟Gt鷾:UVLrwQJ<+u{>^lΝ95{02yI6$8<㈔/NuNr{}#Kq>WuJr$&iSFV) ɖ9HX( S??=zTE%-h_5,j 3{@ߥsTֵvͰNK4u뉤e8U^V5*J*+Q.&]o7Réʯiw$TJm)/E] Xt8ⷊB3h4.,+a%=)JPnS ᥈bXPEЏiKd;=ޟ\RR#ՒQ_=7SPmXmMRx+^ϸ1 `bao!$kܦHV!0f7nU-pp8{GS撜Y$sHٕXF9NA=*,4O޵CmfuijAMIk{)cZZzzIo+(vŷ2!nG9#ZU0Z^_sIb!JIaEZ5ww;:ʥ PzMy[%$L9?A+pmw:*Ӄr!+;]WsdߞquF[8Ƨ-fQgXɒ//p$`g#H2*RS:~-WNx~#ET;qsD(g-*jwv|Ә8M7|}+: QSU,֓6Ve$*:5*J5$:iӥBεrNKiٗ|y88z4ܥiuG/5hE"k- ڣ=~RZ%編%3ݪȼ4'pE^Bh.ir^I4]dzVlo;y5F~-ԮUGuncJ^أݜ9#F*;YDH{G_S:rkIPΟϪ:)J[[Ot񯛼#rvz{WRkli):rWOaal^ɋ9]9!FFxr˹Z%)-a˦L-dQgbE%QoE%̠кTIn珺H>`A1q:Q*qmcۤ*Ff9xFn= ^SWOO2D;:ʪ]2}q#wQZIVꪎiDb;kQ%IJrH۲ޙ){8P`SZKudL#n@l^W4% ^bFH`yns3j9)r|1tf~VFYes*$(:oƴM%k&)ZI XXUןPEbBj2^ωC6ykNi85(vM^㦁lNvQpqG汍\Wk:rvo?1ѴLJ{BmׯJW ֥_yi$!升QO'+Kִ9ty4+v A @df?S5OqM[VݤG&I7{.Uz}aO洳2\/wuؖ);^G$\sU/kхbC$|ϟ+c=ƲQƓѝQQrխHQLjd:=1{5k#9J?1,lye`GNXd ڶ:͹+Y]/((Тm8',qh3RrqE>yԶW<p0O__JhGX'$躈pf;sUTm.Uz oh~Upqv~B[\HL*ǝ][jr[s۶f-8-YFUnXsЌ>mlɩV/b3\g'N7ݞ\[_FgJX&#طUǚqxY'K2rW!at#F15Ny\fXWG_F]X$DcP@<=+9KDi>XA;0R\o' }kJ2拃VFc^3p.>Ճ49Xv籩kS䓂Օe_2z8G=R\CxI\+3|."_SIS\7n>Q]IUf9{U^úגGt'rf 2)?1yϮ:tzz*nLٵBO8 7rz{WDe̟c?(:+@ =pn}+N}.iƊl\\'nU-W8y{ql'%XrʻH'>H5knﴬgsJ=GNRqJbЯ^)t<̌nK(0<~9TO+٪ Bљ7I9cϸ-j+AF%a,8LͅH==OZI:ܰ[-BU6)}t:E(ݢTco7tR̀SYIԲ[4yo{q. ˅f~*S?h֟sz(13K6c{ɅI匮 1<5]~Y`*n8NsƏ*jIοw(aRG}-Qe#^vѩ S8B?Zޥy>3N[׷Qy 6p>(nTeNROВ HYfTO9>ʦqq}CѢ(f嗑㹢t.eFp5XB!yge叏c BRm2 噆F랜aZ.HjQyXėQDfnw7aRrV)T$]6q߷_ \3ztjΚmkqcml9^ z"jTMr1kx}w.y#\PcoRIZ8/…0p%&h+l_hC yW8gQ)EQjo[,T)i8称>b{CK8HPΎۼTq֜ҹ٬]={ D61$װF23XXՅ9|򩄢M:qXw-BҴ'WDʒ_?BTR XG#˙u#Њ&vXEir˙FC0ARGnLu5GLR;IZ,V-#sr>M{9; j@dUځUN8GԗT&L$SlJBu'?g̺xxղl%ki !+yjsbk֤cBuw@eە];sl9R{ÜEW-~ K < 9GnsGGgDp;{ >.^VT"NpK}:Cwl&m J<׍9iQc/uh5"x6Fb̸ezc5Z/KHR-?tO=rp>ZH>ΥF[q ~XXϥ9[KJOJ׾ޅhXͷ,]#95Qwc:QE1ڻ *}44gN+gMn\Ȗ9bLGLcSyoc5%vOb4Ŝ,1ׯV+mR$$Gs{1铎3J1=&Q˄ 9'?URHӌjDs\D .NWDnp;ǶOg%&2dFVK]NI֑mphYcx#YqJ1wo@{~#o.c, Y{Yg-o&# U Eg*2J] דL}ٰK ^j┵6I'nxdhzgzdT]Gk~I2ؘ6ATRWf5*҅KGwdVK4Knʜd`s kFB:ҨN6U?1ppq}xybPj9mQY} IqzZsT8o_e?d;.8ǩE˹8Bu$wԆX%0PUr58ʝ u)-.nOs$8w۷C,jJ--nuSN1ثu$;`7wު6QL=ɹ&2}AS!E<׵Le&#Jtnm򤺑/vV W|%u ]&W)TA?G/2wrk*y2(_3 o9؋#4*eۀj=<3/IIҮQ5I{d͊mݴ.Oҍ3} :\U庅nh2d ؎g۱cRhQt+˚DwO89#;PTҧ|W7^ѿ+wQ^F`jgJ1nb9ڢ,Ӟ=jUZC HY̒EnIIIFєod_"5i`$9^98gm% j!v=1\=?UOYh yrqڿ\*}i>XUa:u)ecǷ\ciNT:n]Phϖ[G<R%*Ԇd0P/>\۹4*3װԍ=9rryH;K=$X g9҅(sҋj'uBevcTߍLyeК\3VOad"3c vr 8jJTqû_ԭ`$yv*y r;)lgZ\7HW8b??T8jITD{F_1C6pFO8Rv}Nw; ܢLn9vGZKֵ.Q~ٖn&bdZxyeQJm8]kЄmkQGUSkEIIq՝c}G rF^r?.n`&2?v 3TOC{zfeW?u[z={DVY}CVl6/q{T rAZQ7pᜭGrŽVUMT\.N+_tRh6,Qȫ|#2;G"V7Ԅ9ȑͷBwmlsɬTdRTmQlא2¶#o?g)rJܩթF1NZoB<[޽=GNxkB2]Ԭ}ECy6xd8lכIŗJHEN/Ѓl1:Hv׎:W%YS[!{8QIВ>a̐fG OːOV|VwyUlH +3ykTz?3{׻DwK42wm~=:rQNC*%O5VyUb9W-I{ӷtt򨨦W}2%o,VrN:w7GTsT6Ϯ3NoVClC +mYxgyF]4~a(tu$x DPKG/c*S}ˣZh,`C;xi.X_LjFgr"ُuT1NN+r[vV`XF9\p}1槕J?KT]%v*&դ*7 Ү}1NG hڷcSqnqM?7E«Gwq^HQ䬎>K^ VTK5fU ~bq'Ʀ27}S el\bM__79eb;I'90+xHj4)(nWPYΤ#2g ZqR^}TE2($J/dipp _*reӧ7J7whU[Y{9R(];Ej~kIa-u@啋2pAx?})ʍlBڿ"CE.apvO0ҧ7D$f?ӌV%%^}JJdtʝEiirE;cm5p2Hvm<}!(rE."1 Jvna2Je sӊt܌\M!vkK fy&cV=>V2TuAЧC Ꟶ\ĭ| csfFi9+{eQku$P(Vel 휌֔aScjїPqӣ$J ~`Nr1ө#^f$rEUDbw+c=pq֜ybfiʭoy1d/W|.UҹQןjS[ݵ#$7hpL!< u(-8 2êvri.;k(fR^8°e!KD/uN4c4ZE2;9*00ۊԣiөF"knwQs+V88yFmT?{گ b81scZKOvOZ8]G CD[*HɉA$skThӣ aZ#^ܿ# 9\*(CgD(ƞ"՞d$}###sprq㇩+Y´(d,>AlWe2N5(Kzqb^"<F sNuSVkWJW kTvzg9+l;RNNW}uVR'0hX6lqzV6ii UJ*Wo_AV*|r_%rs\rhyRw j2ǹY|-qu/zt}wGEנ/>c)UغF)/O$Ȍy2y2n^래?JHŤ~GE~Mˠvb+m@ qGLQ(ѝ˳&*BӊБaѧZ9$_G>/a%Bkعls@! 
dƜK|*g6컚D6a&QowR9<\mcp'?x5e/ʎь+[nPITH1a~rʥ%*y$1Xr/ZREu[ ɹ#/l}Ny<?uI]ԍOƑK"'_7!z΢&R9a6r@cnI3EӔZΰԅ]" }2_72#mHqrkEX+mjr"1vo'v,<LG1acysҹBTWVCЍ /4z۰GLN #l(dg>є#[S&RLfhx2;[fTI۾%ˤWy-' 3mp!$qz<Ѝ4 6Zm+WA%ݰW8R˙y4j0[,#HFo%wy)I'mm}ךιaFZ>G;@6~SڽV?jSZTު;",X$FbxJ:lv{;TEѾ2KDѐŃ)nVp`WJu-}w-$)TȪY@ rsڶORoKcnݿt,Ґbv-#7N ]kl"VHV6>܌A[vIrAr+8r =V|6I{8ﮮ[Hi&TC"و9 1>pknH祿hrҴ k3/aӿ*m&=~\:}hH%!9'$d*q:~7捭O*֋Ѯ2el,ʱ,Ol1pw[SmWO]65qw}BKF]=FʫH=~5iӣR1SL?rvYU63*G}6q2W%v?3u=O^L\_R>XVdke q 5ǘ YA<{UԫG}>yBi>̹w%Q6VHIfF] 9MpʌˑovKT)”m및d)H8=3kh|-;5]G-h{;B]?y b6eEP:OӎJ[vJqqr߸[nIvxsǚjV)Q$_3_dP,_q$+u=բסҗ=۸K9LwȳW2=*rݴbcZ~# -fqn=9YE:U%OGKȻ"c5HNKFaw ʟ϶_3uk-ԉF!r2}~QWyԪFUӢ" \A2"n؏JR͢:*Z"!b|Q8UG޲E:ZIIzuB,Y$%OÊגP:biFR~re+-ݺ6J%Ik4k(>_B*խ8b$*Tr}V'̴i "+y{s gJ-ӒK_KBfl64sYTS5otjU;k"[X&\p;;КM.FS sBɽ :3eϼ(ԟ4V}6/QI]mn$`ҽk 6ݻ~0+g*Mmzb(1IM+i,#/?Y~{֒(٥Ԫr:%[ *,I"{Oct KXvH|ϛ IcyzҲe'm?9EF/#-* KnZ^(Ǹʦ*VI5,Pĥ`Fhc ml;Iw5YcndbYUP: 9# w SW/NïbHea#{g85 q՝ T4n{"w|Ƹ|3S)SzZT'5~čg$ ǖQn0v5TOM_ҥ ]-{;HZ'iI\|jUScHbciy?賴Q2rq׎:2>߳| Jړp(T_/-I4jue 0vwvd^>_|Ec?iJmT%̽M74gdPO_XxΌ[C"ME⩽[xQUr9>4R[ң[ٷm:C34asIRO@+FR_tQJji#X~™ fXz_gO3ܣO[!Gyh#R[OY .{ UOrZ\]%nrw7˻gsJ;OQv'>sV=]; `g fN|EL-Hѻ̓I=>C9sqޟ,wT#׸ѺG,rU8\Ł<9FV"1umZ:K c'ʹ@զ!B,9p^֚Y#J0yڶ*QM_U*b+"Z;KYPOLc##K#`n-4U-#vq/=sǹ%FJӎUyU߼!<ϧZTy'mo(I-:Z2fvc+Az[RIi"ܮ.- wj`:by?es*%WEнeՍ)#Bz48+Sjrc@i$svfw}?ZΦUV9m=$:;8ڦNIkYz={EKo6MS dj26<.O}J᭓֩$ЪR]~շj':k9dJU$]nD2gcJI-( FlؿS$ULסzM;X֍F/YJ;dX1\.еF]X`ȭiKkNMEHfH|r~johؘ[?gd vֽm'H]E$rn٘ÿOoMަ 4d(69>zrwvhօ8Ѥ)]ؼ77<Քti]g3]gpZWD~,t%9b[\Jteuh Mm#^ե<:^u$YatX ےG^Qh?>7̕j٢a)^p:i3 hnJ?v}zWeRg45m 0K8]ksE9.nknjL{OuF^CR1c,㝬̛x!)/#HϛaC-c.NOֺ({NmK4r\Dy^OkJi6Kȵ$1=ܙ727Uۡ1s]y7YЧ9YsJߴ[)ޓvDqz쌫kY@nR (>I-LQXu5>_C:`*߾W1ҹ%YYl_ϸI2}W jBeR;u)tݽ'~]e č Hw==#ݿc2̣wr\}:d}y/JzGsRO[n@Dav#}=1\2.kH<Ē*ӏۥg[>emICxw[\͏b1q1 q'=^>G2d$HYh9+4%1*ɷwžl=Ηc?ɗ4i`GM^}*3q}JKbI`̱m5ը'Pnd?j8Ff]R՜:Ì7fG 5,W,=?#VԮHL*T.***]zSէІKiB˿`;[EGR>}15:Jf,S;=,#BY(tVQ.e{TMVnF4)=I/yS&x-cD*:&o%jѥu3cknfO'}^/#ckXN2Q[l?VNn7fL%Z>ZvcATG7ۂ@'oc9S{nitB?+|s&ߓJ*CErDbQɏir睻|c~kmf]B=-ـI7e(F>bqjRy>UGMx26C80:*AWtq;nH<.Lj ͵>=O4hEv2kNmmCm`7+:%NQkhSzhĦ<"ȫRM2CTv q u\+*;T:uB' OQ۶={VtgQOlNQM/zíLr2ݴU9=OIvK47=#F: Ҕ7c8S\`H?sq3I!} r>*tR7_?OBZ䠮ߑ#2$iԣN.NvJ7BR.T)s Ol ;x2O5CIN+^[ש֭R58EBq I&V7 6c?~MmV2oNeԣuRӗ]ۣ19OC&RewZϕ230ps**Knwޕzq6Gw ^ȮMz4cC>hXiD[WhzNI|>^N P?Xq5Uۻ"$W URUmTE+ 5B/cH.rNJSQNUJnP)BuPq gO3+O[_V:hDmay'V*nkERyc5O$s<M8Pz~T^F\#_֤bK F#ʜ=j9rGE[dcQf=O#VΕj^8>")FOEӨ,E$hĕ'sK^ QNSOJwq/-X9O3qݸ` t9V)t&0ϞQxۖ/&N/ԍ:~[^ےIr%\C|?O^){޽:RR۸ȭX\K̻0~5JoXyR{ۨKS#=L~[G彿2)b hVf2HNsZU}uD̿Qmi+*q[ӭ"IS٩sn.d31| ֊û>~JO#"fcHZ5OJ>F-D UzF~4RGpbIՇF3Bit[zMyܻ1I}}j?w]XIG~>H(jE=GY-yܓϸ% K$2G=nuj{թ'N>ӳILG!`ۭƶv;0ߺ#ɀT=OҺ;WfwZ D֬v? |iIGn3I8GQS(˖ieQwzEvT++vcNcQ,D%9z@B]̶݋A;*%jlsm-qg*\ AFPz@$& -6Pǯ>V9`}NR]k{*!*@9890xTOOoe1|vNsb8Swo*^O#aYdzcVJTR).6rڴ:N1"_~9撕H&ZC5لy*a"֑wM*u%݈č$2yA=F1JR]>j0Rd#f^wrMEN4.BqFlGq-c"Á׎Jʣ^GO[i%ʊ,;KrZf_Гk/uͭ]mxMՑPkv@rn#gWEhbjK_-.E&"̬0榤ggt֑MI~(1 -烞3SE=biՍJ*3_zO6!g?( w 淔-i7宩rI!DǵVhdx_Qv8Y$6;z^+ ?x{[[#Һ˹O3pcp7sV*s%}^hHyl(l6arR,[jJq{O{KCYFr6hRfΰ_0o¶O<޺^+i NbЄublۈ19<YIΔ+kS~1}!]֍yRri#rk8Ӛޜbέ:*)RZ- q]YUʎ۞i*ҔotG*ZO~5l16w/N_Ȏ)Ie/e{!"5]#0f1sNLjҜy}JivW̬.OQ[{YTJnX-rVF\A}si)TO+ _.6!d.晚ݤ=:x83U(N-7c*pRțraUnR{ܓiBծdkE/Vi짧[jeRySѢ`ŦF. 
#@aמrkIKzo OTBYOLܢ8ԫJW%((Ŷlnp==\{Gw*RNCM~VI'*%ߡ б 9J,)vYPSƣh5Y&iR ~)FӖ+іVijɻn<׏ozRCKW%eUjTn\8]Ps;WObtH6>`@*Tkb^4n[D~;Nr{}+8/TjŘmNcUnd|qp8885>-:9=-nE,&ܱs;XI߷.d5%>`i$jnc?K^KMpsHwH|Ў95#'wc:wF"9E6V2&ڈm.:Wj[ea@}y:=,ywd\H6#c0sS% Ɯɫ^Ai^yzi=m ǯLjJ2iiӔcș游n$eV{U#=f[QSdiٍZMT*oLJc1HOUJ\NX{6)Ѿۑ<6+6 zk(smC8xC IOWs[$Ld`s{^0.5̹|[ki.c7V`srqE{)]-HR +*މ!U5 6,0?xev8ƚ瓵B\I?6076OfѦg[ё\ b7R$e`)TU:ߡKT)^)ϒR8Fms|ʦui:jV95- <{{r۷VR˅bIayzu;jއ=Jo$1Mb #ҜmNVӣ)JMigb͜ w.Ƕ)9F[n)KJȞ!_d~?Rʝh/NqȰNK寝&M~\:iNzaN,+}"VS*]'ҴP3* -IzaK#p2<zTэHNZ#N-o#HfX#~VʺeNqV}v"y^$ٰq5n0m?S5!Z"^E[;a=F8_QϯLҌT=E(M`%Ԭj7#(4 оhƚZd<-^U_ֳ|^ZK_[I%ff Usi$b=g{ŌBehvFwl|v%/i.U2jPђLVM}s]Ks(ҩ%?ӵۺ#ǾE U#(;-}M'JKc$jH߷T-Ӧ>ZeʬgSӌUX&P/NQrgc,Ehk.5bTYK<{TהcxiZ<Ҳ[hMNK$vt1F_ݒen@㞜s{VvjO^;SPO.{y-#D\o}N~Z2&DMw|nq0wQژҏ|F`Xv6|dmjT8L.1ؖp'ǟ?A JR3Sz2=/,bO2EY$ɷp}F3ެkF=3Rܚ 6%(H0h5R{0~zbGDXch*c%cITōB]¢T[4q2!6f0ߎ~;ۑ#;[pI0f dh=K9V"iѨU{9IfivW$Mm&0#jye{UW;$XSTy+'un(ػ2r;:gʛl1tÙDy{dfbV#ͲO:ȨW e;JymG+P#U!@[M_eNi+k*ƻ<nxD\%$c).&ςEqB?NuibfQ?>SbZ_R˲}C(ڭ*1=ys˞u9׸Ț6 S~zS.meSZYGUҷι$NөXPv!hͱ۷E<)QX9Z5KmMI#Xq9=gO2؇*o2zlcg?Λyu1EAY;yvKpSVN>'[NmYasL͸PtMiaN<3 cJRH(T$)(b_O-1PTGmq<ݠqZsFWFҕ65LKJѯ2Cqq4T#)_Q%Gr;U1捼Rԙ{YRP66х_\*1叻lR.Ae]spˢ&V+mC%'mv ŲG@}FyNU=_# S=m.l ڡA#a:\jO}>a\-}zt {܅^{DchRr֫At֓Q"l\UG|>N"e-̲"Ϲ2dm+Q}CۨPoF|[zJ žcvsr:.lWh|9}+r78?Ժ*hο+bF-޳՘œ2i%I|v [ϖTl*1-$Pa |zdʑLKՄ9^1.퐼X*6QOoz.O,7r1Z4sJiM+knO}wmjOz#<,ʉSFLck#)%*F0LlWTwe*2濪#xG&y#9iqzclޱ"iM\=Hք]ƿ(X9gu$;T%ZYӡWEJn3o P=qʱM룹#SGEZD1M bTt`}+^ϕXΜµ[BxC yjc*WsS)FT^ੴ K3*m{!sQp8Z'!yce3'򎞟Xb4YsT=mhxFA Y}Uћ{Ԣ{b+PEo dGګ>-эN#N:Y=Z}8 ;vݯd{yFӢ.j9sGjIG y|` djΤwG[[%c'_[XUU<n}Nc|XJ{ke[eknˑ>s޶R<4SȐKu41bM1#Pr:y?7~:B3ikH[ݢɘy$2Uw*AVޫƜ*kgHE$@v\p)ݧngՎ.jI $' j oW%:5kCGdf;<=qWm^Zv4#*XhgszqGJ嗺^Ӕ$}w^)cBCI7x>jҧ+ŕ#ةfQڴeFA? $'5qRiƝ>gv_o}O݆l<+t䔛vKJiu.1Cxg>26:`shǟo.iU"^] iKmla$|;^)s]ViξZ˃"v&ҏB½ 2N27f !cEhW **Kmȵ 3,8p1ߊ%w)V&HL nI}G\Qr/Oz!~̘6cΝkt3JRV[p"D$eRʫ>ka?gmG\R]CO#"`kZOŸl·SJ};$[cݵ8Nj#NUekTRuq[vpO<=zWdiBSwvq{Qa6*yUijcE;L[iZ%/9}kJr'J1Q` Q# UN=j3rOgر. 
y2~(`Թ[~$2'qUChu{H:dIAur}iSV"vB8_$8iUj]){5sK{(\wO߱RJ7RBY] B-tF_XSH"TK"5+TE=lJd!|۳}=?:*[{'(TRȥ7cmlݺ-҈v 9|~)+&E|V"y7-n(ZˡERw!yE# n*)_k}滽Co%3-?%Sj=QN E}1!Whnh㑏jY4g*ѩG:j*[F_1zFsqҧ'~֤T)w[i`y7\Vjݒ"47o[.b2pSƳ]2u(vI:9JY8|wV2r:SKtd2EP*V;dn[{SqsN2+u۪tw2z2kIN*/WutۤagR2vfߠZ+GtwK 88k2:9wlA,',5F~?,{7-]?R)=vȦ6i6|9=z~U8RnԕhSQ{ &ls-OjR59S2ifD tdu2IZI-Hta*>eVk:xrV:u斢FMs1 zrzvWHu*oUh|Z 9?^G?f SS2M%o%6 I0(={&jB|z2U+Q˳]tَ}61QN0: qׯkiF4⭽;'k5w K6R=;bUiݿqV# ]mynfM{{zVPҍ+Tqab[;}/˷k_-Z4y;.p݁ž6c둞ƲQӏ,7&M{^[h;?tO?k\n}_Q#H#bU]qR6]U8tY'>d&`s0~uW+"h*-ObLB-Cwێ^tRQjVM-MSB\}/|X$9t1EXK=GN RP?N捝gCsUK~5ZH*.ÙEϹ[MGb8 2MoP+yWq^h%yèߧAKo$8M# G=:\SI;t9:P5y4U#ׯtaFgŠ&efOGo{W(ڟV&}k]]7dXRW8 v)ю"RQKveE8nZ+S'45)KK;]Wjv=Zac9IW/5Lq~nwqWɻMT+ԘFqr8V|C2m݆_/cʯINr:WFQw9㈌o ;v}Gr.ķJǙ#Sc)ozj/cl:m;xUVncDS9x z NK[SR}Z."8et16]cִ̤ƫVgčT)S?O~+Us?{F^>r3\ ]TU1-5h 1̎Fk&y9eZ2`ʗw#aɿ!9qZ G}l=M7ʌGȞYO(Q=j:8ԗ7 bEϯZJ*W1ŵaHOU5ZSMȪae{)zg:˚I[c*S#%o'EsteJ+}8N+om}e%+׾=*(Ԙ:>γW-DӋRmc IʝJvZ*4#ʷ͒H^9|J5kUy<i Hd-;m#ry89Qe^:^]YKgPC*9YXc;<~ukVu*WifZ66BÉpP´J՞5 kZY ڡX`L<2kz2_vN,-.WSo@k%NrF3<|{vQ.h.X(r֖!f|/<siSVJ!pWOױnZUu;䥿ADj?ifo:Geq܍IY#UݒVWj#AUY{ܲB2%?3bytsէ"٭H[vAs׏Vqz5W#[kb-bFU+7/>Οͭpc)V|} Xm۷sMc*|R‹X㉤Id?uJURWiJKYw%ݥU*6'SWܨӊ,̲*<ONO5#F"QW>+I-,Q+l6fs~SO[)(iR ;e ncr@檴Z"kK-QN(e~wlrFRPaVDFqǹs#«tc̞BITmFn3$Bv<ѳJ5Zu 0 xr%•J|(z$F$Yw ,b2 t>{uy1V6$FfU2|g#Yi(0ّm4fۏ*6NqIjb)ZFw&&-!'Wnyu=k9)s8TakqC!Đ1gp9@9Zƥ>JrDT® 2h$zH$m)SNՇ^_yISR+]уNұFԝ+)|аm!9vjNK.[ORddv͕l3pA XVQǕSN1tefyŴn3XsE)-3U졭+]/cHvRUMW$ckaFVV\46l" O-'cJ2Wt^+T*0wO[yy$[DeWRW`+דF5=+ZtUI{K*\vU2Y60vH[۝~$S+h$.0qJ1iS˦ՔY֛ifxy{v}?ujiNӫLHV.$gs9b5+v TFbȌm MlwqN_gNoa)o7 -#Tԋ$nͨptӔr Q*X>ҲNt߿MjSԏWYAe#q+yQۓ*umA*>G^rGJ R^a*SnvoJi |.H8EU(ɭrF=c%{+[(ϒ>c$~?cS(pi٭ :r*i[MwoV"[FEHFF-x1]zptB(RjZo>QČxTy޺%/bZ%ק͓OQN[yu˙`1y<+nV rďZNxM[goͱʔkArD+5v6 IQӧb:t5Z-o84kJT'dرmaƜ&3.L|ܟUsQ(Y7sJ6j~2;$KǷ |F]Y9=FskX2M-7~z.Lʧӹ!yò.\6N0 =L>hޝYIQ'kN( 8v$̖I9CNx ՄK*x8=8E[_ͰI:rwk&#.̷nyEJu'f}l_#` OG̒'}]ym dzwD{F*[N6'䬵+qG+YdC>ҫr=7u25vS.l3\\6y+>IAkiBQөTV}\OtmⅢ,(Jt_ڔyhnKk'Km[Xɭ|B*ҥF{E_9n-d[#iYTҎiOlц"kDላ̎6 ;HQ8q[J5/KzZOud3Bvm6]K(L 'q{V*vO빟$.ߞ&m"*\@ݸ= ԩߚב2K[ϣ: ͡_&[XƜӺ}} 'V}|afeF LR1(h[Uƚ$bԗ[ 8Xehd*Q3ӢH>)it쭩RT'i&K'o?0ĀpzԎ+W{y >JiC#72vĎ>1u]yVN2} f FwQ<+2bjF֋kZ8ǚq|Oy$s*nz8xƣTy_?R#~JUлذ cgRvnȆ{`Ѷ້Wy^u#EÜ-8=WmwE d[o>\Rn[7c֥*qQdGw9nN>*T&wZ{?md۬wON6#tv)q"gSM Em*C-–i6=iQg[Еna463G!cFqy\g~4nS` )`==뗖\F%Nt͍u_7#w};~TR*rm_I.-E+~%A?횿rSQ}?n,{D9/;iU6>w&_|g_` K#8,;bGnsTcxyEꖫT5IaQ[/^koG{mzoS OE[ߧ#rt̶KRXmgl~hO, {Jc;0imZz2bXQL9Fj; gOkinCh#nW \ʞ\q^V,*̒@['Wu)lU)ԥw6G2m} O#b)T/^.IӌU-K72liW+{RNϣ[vٙQSoQ e- =O?uׄ|J h)^ݓHX.n?J/iINcz-Kz0-,N#$.v={֑匓ӭ5ÏIh_jDHAlDr)8ENVyTrh.Ԏ+KhIwʾGOɭV ラ9s]-,Ey[I u>z~TckUz%)㑮k|G%6{F:E6?;f b:gqg2R|w9ԧ ]GZ0G*FsOTSC J.Crn-knϠӵ58Μ6Չc_I1oOsQGDF * ;s+e++x)YpݥKdi 6Y~zxI^[#QmjԴx(3g+d^8ml2HnYYYKz{5F5 2*aԔa jq Y|b\ƸUv^ұTA]썩Tj$++EϙDf~Y;XΜg1Uɵ_!`JB[,{g}+q䦭{HY#;(v ojkKei5@#YH~Uj47t,7Z֟ՉnlxUIi7nld 9NqlmSJ1rkEIo/ڱ!8VrPtb>n#c$VjݿV~Tf-[O{}V;`G\vYQM6fE;B6V|F[fMkwNpXP<[q לڔTZNuMd-{XmV12>lqDo(i7R2"Kk,#[{`ǯZUKњb++F׶ԱnKy';Zi>^5IXW .|ە;WarN?3 RrZeVB3FG)S84P޿/CjX^H6S}Aj`esNNZzcJĤY*!yO³(ɑ <"q}vҮ1ھ?U+Z,/5kGq|(߷t a[ CSb>$8Pd)Z_?QN#2.㞤i3R׶-;lB׏jXyKsKN4uV]S\yYn1\08mhJgtm<=H.AݵC^2wcRMIa;O˿FssJQLkB^잻z631Wn>W%|b?k3rsHNҋE~uN\EgX ;+|+gmV- "6b26.]B9dS mnvw rTrėhv+萹}zݕ$bwzsqDSfMttY\2JeTf~<ᔮ24eР-8oו{sT>[Ϯ\nq*X~Z69J SJQ1VV+̝is4?}ލlgK)^qםS*rg >c!TpלW.#M6sT(N;Yo 2W ҏ/uQ>1j v>Buߡ_֩TVK#fwv..zX8 %og2,}-U#9ϸYb)*wSd{Unړ/[랸zc ԟ1o~11v⪲|>zھ}~Fe+;p]ۣOorɹX3dmGnyƪN-kML/XaEiCP}k)/ftP?a-~\K>"3}OL+9*if$erdf]x?O^՗4OtoCH75bf'l[wMY?dw&0f=u$RUFGU<Mֿr./S}iRc˷ݱ)FRu/mɕhe('Ͽ5j((e+FkY96Yk6qLɫ(hCx1*߫=R|+n#STJKmH!]Oa浌jUﯯ>QõU] )+Ʀ_/yY}4-Fx|9~O/ݗ iF44$cf?^9Z~VPzrwbrrc^sZ<'/JZI%n3Olּ]4C18\>L#8CX9d+'aQ)E_7 (718X8JSd^Jtѧ#XVq;l 
\Gs'8\5C4*Vݓaz1.ӽ~PT<}j~NM)MVAZ]|ؕ|T3CR_orrLl hFnkKQ[< O\z ooJ)'}]8\ɏN96S=ミjXBvz*\)QB^5$3򅏠Ns]2¹FJ-M᳖1nЖoWڲF^A|j˚j*He`Ɍ#NUJ||˯9SZ"1B4sh^v=Hr}tQNy_"xNL&>aNyYǖSvK~t̖i!XW]ۓ\GK捥) FЮ_2dywu;yhSMo[F*Tө{T0L$}`n{׏\(ENV [ycncYf_3\|=~jR3+M_/ԚYFZjװdn7Ȋ~FOwgtNZIE_PEʣ?xqʌ#Q4sѡIǕޠb "yaӏV(˕SPHmKVeU ҎlCV5&hr=K [c)=)”}VrR֛."l 8J1q4}Wo-NҾ]<_4V*~rsxtX\˧8uLP\ wJPUJ:2PM^Y,P*ܵʬXb{g*R"F:FRI݄dP}98֢T纛mF0䋺xv48EK^ԩSUd#kH!E<׀O]{tkŌI:b[nssZJZZԏQiqq5+J5{FhTtZu$wULk;ɨmSں御ur yIx`!O#>ہKF5n_xRϰءcS[#'ON>)OXqjYkf>e۾}kyؙaW<a֡%gT?[ӌMC*W^Cmc7)o|`ϿvPTt)-I^yO_zZTu*)L)(,.@jUZTEF$ɏXܶp(=zRM鿯ߖXb3+6c{q]Isゼc(EikE+ά[ޙ8zsR+MYyE45W)uh٘2~_'ߕmUE7tG~vQ5d.[\z Sz_aTX;9uOqRJWRq֟x[o!' <Âk*wF>څI7O@1"$ {w=qz֕pӥW~gTj)ESN:yO ޜg=}yb9eG5,kxTnO)sF~};T"~fCosx'9S RNgZ2/,䴢0^1t֯ew Q6jǎFH'mp3Rr9ԕ7#"(U̓&F*Y,233ʜ}]=K']rH=O_¹N2kn;Gͷ1UV}lU=ZTN=P%kp\ cUN2ZS4nm[KIǖ4VQNKKj^fv2 QF8m15ZtW@~."ٙYydy{}*8FVΤ(B.nRI^wTe[kHzs^N r˯([[I复$Ӟ>3]59:~dnۭ̋y+߰j;ZT*Sj=aRfW=j>0pΡ# #Aj/q~j|QR,RWn??gwsn79dk墙e_+̠=j2$at}?ȖDtjn.r8=*F]>i?`N!Wic:$ۻ$^qW~YCoa$f 7l_@<*ME#M¥RuYfd1ⴍKh*REۥMy @3e+ɧJT'r孻hےW9TRqvUJ_"9su4!0UGǯ]Fw4YJ6]S) s*kF镜Zh\}c?²g L,:;hK_#bFNp};s4F dw6k62G9'\~ZUisH.L-8EiUUsqUN2RΛ*2 k%xٖ1H#S}:gު)ӓќvX x>Vn8hr~CV,H# ːm]O{ud֗1Y~5|64J#M|i66v.T9kg]9xhfxcd"B3b;wdΪJEKR-\«qៜ|JZ*[6A$* mex1Tr{qՍiT,y)5e;s?1TZ6r`d!O=pҢzNiܮ<Ɯ. \#`N5"ѨXua4l$OtC<4m F<{✥?VEfIˎ<4jIgqzj*5G& r[n<֢^~Ҝ\e$;y8gq廲997{cgvfD$眜q ԞD*pRZ:*ӐH WARS>_}n,2 mns#`Mgͥa^ӽ5o2H'1(_9<{qY¤j]'-Qj_,r:=ˣ ˚SbL?1RO:M Sd" Y6Fڬg\:qIJ.-E3nx}cZRZq@em##yyЪԓQzH䴊hX#GvUL-?ۚ~s7n: *7>ڒQܚTie2r7UqWY;NU3,QͺxRߦt $ryj 3zuSH拋VkAIJ`:c5PĺjiVub$5^N%TH|׆F'MLT:#NG R q=x)T`TWi- n-YL<=?J^Ҝi*tE/!^T6׏^F|=IY)v}̤6+d *̚s$Ӱ.B yhֲeRI[O#Nrmh%-W;W(ӔgsySV üXEIyw|fe)]~UeE7L][qO4nc)F4R؍P:˸&7'#o~Tr*sT=!o['Z^XMTGЯ5ŧ2<2ʱcO^إӫCRvzOK!X: 8JORke%ȡAI-3,<ʊ['P*5VfE\9sG3ǗKd2Ɖ" `p2~aj*_SѥMMy!&"ȉW\U> v+)KJ1Nj%ϻl"m-#95,h٧{.k$%c6c1̯4`9+4mFMr(@|Ȳٔ6pGq89]oNXuNM'u*S*,OU=IJ-k4Vv)"f̪#j='̙ի(j7"euuzd=q=?N4#Fc-"s{_:L^vmz/ϥrS}C N*NkH ?+* R;8z^Z=S鶟vb:Y9ʒ2qǭg7.u{ݻ=m G(>e#;Yi$؜2JJ/g]^EXݾ?,אBZB ێOm8sEI0^)̭o' ow+{gITe4K"I4r2EcLgzUETQ8{ϛk[%۹|Y:8iUf0d8Ua&f#oL|~tUN:1sTft~V=3,R0:9o[-Q?Om?Z.9  u9* %!ME*9cjbFѧ(&C#7L FxZFQ3NSR$X+*mq|$}8>F<ۚIJ. 
,)%~lƉGDT?w?eh*ܱ\6Hgg$)Kz>mԎU{K)M(<)ˮՍ8֔Xm\y "͙ ^ZHgfr-.ߍ,ߊtaRtmVaY3׏j]=EK[ȑdg {(WlR3ߌuu is{'"!TgH3.v=*'yl1iӍ~㧚KṶ,\|w8'+^p^{1d5?0zNcw %ᕂכ|V p3\эiSn<찏Wm8U`0?i}tSI-BYlj>b:q펕(өyIy\^׺ 2J+ OQ JQ^[8Ҕީy[k̒Ubm(v:;iBP'wZ#c'k{* kKU[Imqiۙ9m6@"dڪLp23OsU*trvIJBp1Olq[NiGPU(pz5b&v:/漎GTg[싁9<{B" z˖ozs:hi ,i QoH${x\|[z˺kODY <牼tN9Q\xGMSM}ȎM@۲v7}/b7qCO k)Lf!u:Z[0c7C)2VY#bz猐*BQiᒌM]]bdӟZ*Q]XCp'*S]I{Fk6x-+33dc׀3e.y{șaӖ_U2+ֶ2vF88 5}ȩZqkE*$jxU?^GqS̱ <э9z/@wXkv9A ,;{܁DƤ\SץqT/j܊%6VU^vq6o> Vʾ ]nɭ֍:5T;epV-=IWmh݀zڟ4Z>mȭJЌo~ wgtW(+mS+IV*ZzW/k{qJZUcۀgN9nON7pI[b?K4m"l<݇]jJ*b%=Gkv)A6$ 'ā`$zGt>Ya2|ΪrYܱ+hi>o7g Ql茡MrW_QV2d]U~.U1Sk Q!s6c+d`;~9S&ȉNKK;]>tqq#HUOx]IÛ:-z)7 9VlVMaӎ5:is;hdM8 2qwݝ4+=Ry2GG]瞣۶NjڰTR\9.pwdq_cUƳ=t.eu~؊6؏9]=+i_Foj5IX̶Igϧ9(8>K=|m@fY:hLH?9V1&JO _e9% >nۦ:VGiQF$fWvV^?6JtҌdMovoF64m`vg+xJ-U9#I#EEVfء=+^h.uj2}NKUCq )+qR䗺U:N4 HYO-G& p?\rGNJ[u`d3qBIp0{n^FzSڔfy7/iZYzF[, ޛ拻 >ʏj:sҾ|e{;Vr8zuh{ȆyA[v sZҔU5n.Va].7r~cjݟK8r}@[}O1d09_p~_I=|#*4W(T(ɉ;ppp{WDc.g CI$%ee- ]:$gMƩj2שXZYo.4PFN8Nr7s敹ߡTOO1cr|5%qU_k_>e ڢ4N\LeU\n2LRҔhR2U9в\*HpF#洄%NZOR%OFݛ@L$`ҢP s꫽Al7;?33V?2,/O=GlWWu4rM;}vd=UڹJD+CeЬ;{y @.3HT/ERuƵEBOquL nQx={Oh*.3WheBO5?{";}0ڸTLM!iasHe1[JjyY"Try\Z۶,w.p1g*Ny'ud$ذ F\''RfzU*QI_dGk-09Y0~`v9Oh\ʤ#BmrQYk`={ )smJM.qA]#cTV%FjޣbGYSbHVb݁t{cS\YS\7m "-*jY_=3ε7-b(QU0ܵ]-ZJCiʅm5*B1㇦H+X4[|7 dvHqW,i~EEGY'}G,p+w l~RqsTױ^NkA,Ij,e͜s;e.y[o^-dTVUCz$%wƦcy' g>&1ӑU袼4iTwV- x8͸~ T}ua).F6D #eE$npl$xrbo[KiJ1RMͶIcv0X){Ҹ#*O.ҧRK-e8cğ@uUTkBG+YuلGjA*U}*nN |'Z }Endᴙbҗ/"wHkAucԝE cv$=iN4k8Ri3#J]cķlUX7NHmaNg5*3aҵk(qY\r橥+#ˉv.3_=?AS,=j=Z6+[K1i|ءvWVqR9bt3^5 ۽ʕbf}s񿑬hyik;Sߙ8Zfس𪍱'+ێWDE C*t6a9F`pxb^x9FZk2wmPY8iT$>{~BV+7QZ[hI(Qdܥ6zcVuF=$MZmb8NN}cޣߔ&!E{;X.}<WqzW\JHz)ВM7Q,>rl>n99ǭe:t5[~*nO?2U,rVVmUE]c3y>®yyt9#Rm+TI"@yH̪4MX=qOn+ Qz=m>t+JQ^?'DY!݅Pd+ c GnJ [ϱ FPً*ÂN9鎢{W3RKF-K<$gnXlqZFQ/t:jVi(8lyoHUv_\>{~U7Nd9B?P0-1_~5usW=z۷d9 pF8?6^ԩM8=n}WT˙ iR{% }vtXT&X|62.6Z[$f1mf-@y%)Vxj4at]::K$*&( }}18ImCi,5)4nAp%B[Ftm:Кҵ71ӯ֥)Y[VhG.#x>Ⴣc?#ϧC*GD^1ت-LOϰڶi+c.kj,2,KvXy,y?ʹS^z4m;Kq VFC=>šMKΤU+uM]۫Oȍ qy$3:ܖhH2{qܒNߩ!>ho f6vZ$88{* F:r7r_wQ[,K zQֹE7&cUwk`h6a_0/͜=z~Q*4!X8GGիxLl0m3UZq:;_̯<}+>f5*'=qq4hrbV_ OwN~g,_8Tu֯,=(S!Wd|>q2 U {?e^t^iF/E|ZKy wI|1qP)TT)I"U*cۮGkJ46lu9+bӦ|zzHo|]Bˏ/spÜvkjڷnUqtڟCV)5%X|*>oߴvv*׼1X\)'玜W~݊׹WNIRIo_B~ ` VVϖqZʷߡG(S[w"x-*Q>ߵ]J䵎zojO-͖cߡԚҧiYi׵ivopZlbe,R4^_c,UXɏXą?3 < ޸ݎi=]].ͨƌVk^:80:rcU$+(xsKc{Ɯrե-2ʱ2ƾe9 pH9@%*x1[FeZ|[V|UelaO,'ՕI7q _S0כ˯orT*4FI,<J4o~f\_5ʬߣmsǨަT̅B2*wŞ/Yrb xG38SpO[XjodoT>DYyQcӐySWъӴuvI+yB#p=9R{GyZ+?ϸCE.fj dSeS~ΚsK46-QW .5OӁU^&*K4FNͥf㥄Op1&U rOƫY(=[LDZN-}~kȓ"gRFc_j$~L#]}#~8+=%RnBTR@ TDԻ *+JN<;fMoebkkETifxҔQ};WY] ɮtK[qiY+6ZO5ĉQđ퍓0Xqߎ?\SJ'kК${#ݥc|9Y(sJ,uF-hl])8=U$t155tבr#-~:Ȩӊ*sZ{_B7gxTU|v2=t# {NJRM#ru,.|KIy$]MGKIJҶ.ȑ9[feFANϓ-+WnhGw9vhlFsӯӯtB,T>::n-kKUDVa9)ӌg)F~4FU|؟*1ystF5$)Hԕ8hЂXU>!j ƠsΦXsy6%Um_86 s9犥Nz>JܲrVv]fWz|\:dt4)R|r:[qCz֤yP=jSeziT/Ж餹q\yVsjJU#Ew ݙ|0GGq8Xɭ-c;{n֗BvѾYGh=;^"?gSH??!oIq b1ߎ8NKDw6I i>kiU.ipFGyUЛjqeCʳF67b8#vy2kJEF^F) n81)t˹^Zy+-=~l캘I&3+s(Vp#d37pm%~\N:Ej\͢LmK[dF/9LO\v%:j `b$O^dQئo-)eSߡsTԣR+[.Sr}{۠ITڧ>8ҲVyCa1O(M/ńvNP+ P-!-zS׳nWȌGO.]B ~;%F,Ӳ3ԭU%drͿ/-շ,~*8yBM7"D^61';^jR]R~T+ugyEbŎЬ2r23S[ b-5Mr:kvi6Hbg lgZ8-(UqQ*u>V㫚2WHVb_y\ҵ&11SPj:_Iwa( Ў#{]06/Kt,hSq3mVLQcڍE9^vU]6b2IgqҢ_$8JҲO6L[ؘ՘}“ߧJpfSa*s=97tO]|d] 6{XƝ]r_KX%/y KMءhStQs9+5<luY+'\׻획Td -UiTylnߍ}W6c[ 9$r[KqU\}Go}kpztG-8F~7$)GJb<ޤiV_#7FW't!r+S)sYefB.ǷޟTJ][.>ndq̥?C+Xh(ނU+;=1/\ژ8Ո^̧۷֦8~[9r\#I=~Orn]Jt=mЉfAs:b13TKlA|ګ?´sji% 5k>^`y~nogh,*k.Nr}}+b[Bi[vεF<ܫ[E ܨUXxnoWO\f], O5FH'+aD<۷v4J-ŴL8/Ab8Sdc=}~*.R.QZ>yB2cYQg%RN_̮m2`6USjVrV*`b[vJM\ױzٙ;88MD`ޭX "kw8f_kO4k}Km}뜝i9y'e Ji5rOOHLx_^3W};)?On^wNsֺ.mM#( tmy= 
.?1aI7ynoz)\EAgcye1"\4tcȢ,9;elsNZܚm|,̝ܳGFs%dnZc%nϿ5:*|>Cl\3gk^]Kn8f<U֗5q>^U$hpY1'9J7pTc.b\ce@?^\4xQrB.~SkNTwSkSS_"w*};>C~F?<͐7SXVOV{[KliA,B/9Ӧs:ftU&Iƕe0wN}g*#&:37ڃِ3Iis++Zf7=. _-:Νrzz~g?- ZYWJ'̫d{$)mCNSIJm.qr{F`W5Ec&Hkmc+$2ZG)"u ʗ.*V?.EeUªs 9+9Erek~ ɥpg]L9M>yO)_$'#>݋rwk^IR܅PG<lF>bzz(Wg;8ӕ|Ȟ5UڍF~RU\Zhu#*tҍ ȓ{1KJ}43W>I]2*.ᓜ߯ⴔTeʒI4kQole\:sֹ})^*[/RPyeϿ Ԁ0=]:{diN*JҋMXg_"ɱNߛۥ:K[[=RLɹFU>X+;>_J6n#`kyc`2^Z=g;zR;{hX_ypz qzoغu^X0\H TCߧjIԣ 9AT}tЊ ąvֹe(ԨG{iݰYJz&FEǺYYB>? ZSh6m\d $g2F(aO5rIyh?ncNWlH\#,{vk{^/OE&8JUj( H!U[yy'#k^i;;$G$D3Eo+}oN|-GnjAIe_)y0@A֬эhhw^'p3(i-(>R\<`Wuu//Fsۢ4|F̻mx\V|ɦSm+vˆۜzQzqy[U-*,֌{+4~rQ$o~'=)mhvTC:./b <sj#'IbNFŴ,mmkVM6@o3/i^Ma(Q14%䑳Hm׶>2dVNj_-,1\%wۙ82x­q%c")ƍ1c0@&R{JnVݍ.h_.N8R'NZzs v./+^y Y%EI+SJiT,EV06}htJ%v?w F=zQj")I˦dv^]̥D˔Of۶q0Ԓ@CUV9SSI~k)S%=:dݵ_#fGmG',$a,#z%#8r -~a"=J筅H& Ka#euxw\O9igG{SE7k$,Jz韭T"V#.jݼvM oIbY<*cӎkkGD%fD0w砪Jr]ls҃ eG l|q4㇣(IYk,#=p8ZsG#|>""<W Vs=9u*Tuob0GZ@\:Qw[_R%Z֯V>$~CLjm{z}9џ'*T}X;fdfC#AI_Șū;Q, $;j}:tʔc{MSGmBTw_L+Pu"W T8Q%663;ksQHea*RT U ?kF8_NވѦq?2tb5o!Q^;k)ӊ˗F=\t˱>o"ZL JO[v:(ևR6+6qڶ-,*Gn?֎ZzeY.sFFXTh*29ZۓLN3bafaowZ^δo9T6_Ac`k3͑G9l`WE^3M9y_0 cY4&$Q׏^xNҼ8@$uېNzrzzS 9_(?ʚPnP(?.Uj^ q(Ԓq#$Oܖ>^2qۮNzVьw'{QU}Egl@m͍N*FufVmm\cO58%rSl❺4Ei6 `Jҧy=NFT|\-cܮ3þ3Ҷζ"2OO=ƤE$VRpۻ'J%̹Qʤy}DP|2UVeqҶe/W?w+!l B0y(Ƭ$]YQN*+Η,dc¬  >bmW.WZ.zrVz$WPupgGcѭ>L5GRYYc{sӿPϠ\>{mp۷Olxisrtg^"1N*3D7䷳D~bp:[ELE E+6U[ՑymP&N+{?2O?#4+yu$̌ʕF[ h)sZlXi!8l`ve=SJw2] .Y8`s#_ZBamikn$Bd|C6T6sc~ך[K=ٗr.VF a.C唥tH/$6K}͌F^RG4a 2rnw+.2"Ir?*=x|fܫ+*KVln޻^STR=]B9SXgdq?zN));ϾUtg6q>InӔI d͙[ Ijp3ɬRUց(޳]+O,IV?w$)iY6e*Wѽ{RivQ8:.gy$_ Zbf"i#e9i=As4lv>Y<OiW:hFS$ۿS"Vfw{c֛Q$#OM< 2\>k`nnϥ96iS;5+L'I1B4ܠ>U*/ӯ,-Mؚh|ҺL+ g uz}G4UC8N1m`[gHSPGA{OJA*)GD57+]Ԏ'ziJ|=I-1_~N~aOgFS!aR7fVxEV_na+o(ѩIN2ۛf'Ahn;ZЎF.[ w7p_ʱV3g'TZi\tc,@ۀ h(;;x}4_;c89M-} C xqdVg3MTe=*> qv*ۃڊnpnKK'!wrqcn[2Asso mk}qIڵTغӫ{ho%v#)yO^GbJ[~B-ByF-K(lq yg=JѢ5f:9[zY!y'6/&v,&XC1#o{ t]G[&KQ |r #_WknT+Ndm9'k6RO5U,iZۓ[^VkQx]G>m?52>VU u, b7yyǮFyUJn8ԍL!hЙql \kBg)FW߸Omw+F̬x+:2tp$֌e:aُ#nO_NVS7E%fNV.{/C j1O{hmGi %0 'סEUJ:#)ƒKD]#GN=S*1#i=ѝ=̫q{oݳ.AE\Ӟ+_vPz:4Zxi̖v$ڲ#Gqmge&K|Ȯg8ZKHcGNZ{e;L;74k<c+ݝ }]d;s]4~Z' 31,ۊƹ8m߽)F\ƝZjf:E#;ͻt)JM2FkMc%] 24?CUs[ sdpbmjs6rJOʿةU7|1mN*PqmX哊Z>\]D*`Vs:}*)Ɯ/bpeNݹv w};On8޳2Z*Ȼa"y :5RK#ʲ_J.t6?iMǙ'co-)[.s};ҕ)(B3pr˲iՙN kðdf5%{=:~yVgbY;(澭R8ۿyAk&vSQyУx[Q:WsYܱuOa8卥VPc9;VM.UV.Oݩa+,-rM\e*rP}LF*Wz H<$B ~`x洩mtgRL?$__̻qvDjrj+=BG |O< :ACOvs*ҫQ&m5K6XХ'2^sZ+4= N\&O. 
YI;6\4)62xu=I!3N|/U)8_D]**Vtn%KR=y9gZmS]մOcNRz--Ե g|wC'(=q)ifʧR`^D$m 0c\gG(4eBThڞAeHmdgvzL1{ϡetH,1ݰ<V6qQanl$iQN3 >O`٭"ME$q(O1/;UFɥ){5oyu rcoOgf*Z哒N/q<1ڨn)ԧg*54ٓ\Y-̐  9?²fWuJY5kWhU>}M\}M3Z,zu$UA8?wq)Дe ,;9xvoN:]^MA XʯXAS:4kǚ{eyPgj~S뚙(2rYbI"9Kdtԝ/i(Omv9'Vu}IJ2%E//oלQ(]B4r(-ze #|zy*J_Or, ̜0v󬹽tR{V`[~G>/=Y\Oa<"Ͽ֎Y5a8j]ȋܣ#UQ߱ѨGs3Or8$eƈ\˕wDfVh>ι;P}=A Ql Ǒ89N60*W;\<+,zsN}xcQ(I4Kr{ LIaq9)sZ*/GbV/, ML{(FsrAQۥ.װߘ^O|?k:$Ւ”{%t2ɑ]'[1̱n+$kF|ⷩv)J 9G98np׷5ҧ8.RDkBϔI*},:0ߢ &C ¶9T:oKVH-wS۸*$?7On+NQ* W*[V@؍Q[?Ó*9kZ0-/y̩,r1_OOkEԺ##wimǎW/+{j]_ejqlҿW#O>Z]ZJJKK-ȼA,H&01:ZUj-~Oܞ_a]ߩߕU^W$FV$Y5-yh/xǷǵLr犱h$K~cg[o2$ynN1Os\ilg.h5eݮr#Wi zڳJ..Of8RMI_KRv8 z##9U:pg'CUU쬯kXqU9rF#NU,}WQc*yqNWy^7'.daOF3 qՓwrvFj++YD:n>eyңR/VrkrKNUm:i]7d咧׹_Sw] XXS걫W#{:s\zb\m.WӟʹU;)eXث+⳩NѺ~5iƽhnn,6"3Hc){IrJxQM,̏+:bH/SE:M|0\0qwWSF9h"ŕ퍆d,6<~ʢ8S$zyˆ3`nw9*QѯQ|i v19zv..z=[TI~憐^+{S4lC ɺf؂xZޞ)GG,pj>aa{hK"|%x8\F'؊uu#3ed³2ssH4niV۷"&Yq`W ?iiZކاBVSz#۩I"yGx(4$oeӾKP$5;Y$\Al NӎHQ*W &tj֌K, ssx<GJ\{֞}:3ƚ բc34jZ0Ԟ^+U6rBS[T#y$-8`3Dn?toN>Th -1SsܫE[?6: ʊ4.UZq鯓$ī9z|u5(FuPNJ}"e`YXnFn:9is\r}=ZoΏr\4>^ҮR]i:r~wXTm'2մ\I"TJ2V]?w$!V0*LgtKF*nYԕh/mrI,NіMJR~-yfWr*r9G99E7k*YSeY&gvⴌQ9mq#W2f=9_AkEhUpͼG='tp8\w90:"vGGFXOqߥLcm(Ufz3ԽoyO.ΡTm/ g== \ܮSz%vGvr.]k/g*Tx|__ZPs?tU^Wj7v"Be #č9=qU(rU]mA*H6RN<[J]Umۏ.5î~c ՛wf^ڜW]#6(r/+9eV|^rF 8UVᙶ^Qƌӟa;9*$HTy^Qע (i  2îsV~Z֣ٙ){v*io,ȶ:Gycc׏ӟSg[U*khJ5^#Y\srݘ¥:&N1HPv`;|ҟS8x|NބnىfE#q`?*m&rQH~\q4埖qQV_iE"mVuHT UTJƷ,s˕HӒ7K8ZP\yn>F:+ӿ4y]7R3,2J~&w @PIrԧSk?W9{۾q^壒&H.<9~JrumeRN]//j܌Q\VZCk’ָ͕۫v~^L9W-iT=k[$[n#1H̀ Ű!a\r|_"ܧ[wopequQ$ѫK6A9v,W_E'Cn2CeU2$tdϚj<~# Z*{7eh$jʊqN+h'%X>M7V2HhY/*9=2hr[V=ȫ`-u/(V;O$5BvT~a}iԭ&QTN;n QYBLXUTLqЃO,.Ǟi*nK^9"VC,cnrs?F<ֆQ*V~W^J}!Ue#n1WM/`׏TU(U}ף1." }ү˻C29r޷1H9%*Pۣhixvvƾcqz{ V/uaj/.GU*MG(s<13*ň-ǩ=p?*3;9-:s9SuKϻ#Hiᕊ]̿pzp:VѭG=5]:R-;Ǻ&-tF6xӜvzwⴌ96fS揳[$G Ω̾b89#WZ<X9&Vie{vӜjS}ӷe@!EF8'{gԊo9yiٻI+z"# ]X|w؃WW}:t8T4'ĐF5" eA;p#m:zPcy=h$ +w1SgjFicj8;_[L660G*F1XڌcFb|LĪ/߃[ԋntBt%_7YdfYFw1<zVԹauEt\)Fܐ9<JrћJRt_9mcLs"Io|_j1)R73?9P7p{g$֑^VF\/fr9]#lvL.3|yohЍ׽oS|#+{{׊RIZDFG,S˝\'QJ=4!3ƻ~`2M9KT;{/[VӄW]O;ֶRzu/u;2)h<ˇ=zqmoN-U$\>#vZѺ;UFu_݂VYfe،ʹg'5mM'SM hcu^+isL%Rq,پI']:|,59IԳvzjEs_5hnx>Jg<תb ۲=1' NNp)ƴ]2UǙt2!oz,jWkLU#5TfƵIa!JG^ܾҝ/vЖ_qeVm7 76qԜsqYF۟wQU?J[8]S,F.Rg%;B$tFV*FKn|?:_>P @yu7#e5hZ}s#POPzu\_^t}S-V1V[$bާuvQi{K4}6瓖W;7ol*cFeN^P|˩iNu?a]^os)Tɺ/捚+@y=q/i/vێ_VlI7 2Q08JT($FhݕnEo<:H͵X}1~TI6 =QoC B/cU۷zg='w' ic`GPpy;$]Hֽ1E6 sUK2IJ8i'Xޫ|{+H*%xZ[M[|ypF2 Zivr{-NN80l;3c9vsS6. 2G2D "'<}1 2s'M8_nd,{Z59\S9].Nz=]ՇD$U nb;Tӏ-ޚjTڧOfV >F*|:{NI;Xh\~c{V59`ՖX4 -^q=qXwr2sKEH';ClsCԛdUc2Pz{XʚJr+JӋ^l=RCwyʿ+çghTpLo>ttz`rLy9WBjqU31xsI.J1NFswcxj4z̙lq¬*%y ~ʹubUj%ro&9[2p*f7đ27O뻿Q$Ml#2wncTͰES#4f+6{TʚlS\ahluHP[N;W-h1%)Gi#73 btqӭr?ŷ렩iI;|I)c.Aen3ޡbJ4uo 9*R:o Ħ UޅIIye'I/ǹ+:!Z}B8m[{c}xSe&㌪TdO{U^۹Iq8\2)])Xb)եF-{y@mb'%hzV5)֗*ө50HVkPfec$: έ] )T).(6堅qQaZBhթ_ Ћ|.qVTتʯ/V5{UIJ9\ikSGuz۫&D#>;c:*{ۡӇM^JֶqHV1R7rO`ycZ*}|^Rqj9`Uv|u=2GZ(_O[Kp'A2yvyFJ4N1Ң6vf  ,wUԡo븣.=Z2S̬p>q󓴌zyM:pr|ҺrĐWT>nO\U.Syֽ-ñ%c"2$Xg_rFﶽ#ZVku^};a: E<zsW*44(r˖W]z`uQo\(^Aa7I<Y^tg^iSwZ;HMtvCcOQ.VRm7"fnbdSRFk3OIi´%RN-~Oͅr V18SVF*(226F8zǵyխQYNRKUdPᲾr|vpsяJ=>G%*qi]?Xݔ4͆&^qLZQj{UFnGtdo!~fv1c&կ1n2[ime~c:ƤOPF ɨVRݽ"Q秫$s_4^k(FH$ksa҂oY$j{:V.SOHϗf|wjT҉NVA夶fulcLU$mOK^e̕<`۶W e^6oNzp{$z W^t~L8~u7=ڏe[>4ϴ*GYRJuNAzJv>ሙxbYXm\sj2ZVKRTl{Ȍʮ"̤*H?Gn)EYYԗ,ҷwL2 q '-~;V敜Tt]t+g B2y#CI-mm53Y* ^rwˈ2~훫To9Tr׷h!kW@URNNNNr88e U:&;xXg[5v"Ό(,2vnzsta*5$_yPDZKb$>k1ڊ yZG˲r$絺hXP&. 
ltz֔˝ʤ{؊ѡ *1*2,nWD}5tpK``AXӈURTh:{Ӫ(rn_.8\5ϿaӇ%s'~'NQiE*W6Ox9Mr_2HYHS~5+S(>:hօF֏mē/;;n9V/Ȋ؏꛳KNcpM* U~03}bin5<9ֶ-D,C#)c9v9#MGYi~WUU׻U¨%;g@zX{ח#xPi I&Ьi.8=p3iFy9V6x~ȬzLyt'Rn6+nufi|~klɻ1ZqJ keJ1䏼Kt\?5XhEYFmmʢ 4+.1N+*tw{yΜb++,] K0ڣ.z=KRXun6,QyY99+R;pq*\qFX,y3[ɹT~E1Ƿ\\5/w<,ުhCi5oP6q~ַ kygvL>X$7qx>uDc: ͲT2KNpxIxJ3طl|f8+S9#XקFRI5[A&qAu#Fc&7ԙDkveB]3|AYƜ/ьTsR4"yCnSѸ8եPgN4)| 9sʬtA-wT/͐q'?B2oF穥Q7FvU2sߨ(ۻ41Q&jńCli 2F^{Кu{ SM 1-+ܳ~{}+)St}[(FܲQⷆLqeCOt#͇W5ݶSNXHU[z1fTHJY{ Ҷ"8;L1\J:v$2;nFe7*|u&vuABpmGC(SM9on|5gR4V?+s1:]oVPWPJX|-J3Q}=>e4Yͻlrp@?cz69Sk{GUgr+UˢiIAw]H!fKHvl1ῄ۾3uSQSXr)([~aDGeb>m/hS~XuzڵAdDj~BF>֟S_i(F0ZO%H!sp^{g]84 ,e.] +qjB>[BNYO 1 ĆQ=Z/+ջ8d70_qsӮ3].%[FI46;kh'YK0} }{ЌEqRB['ע=*ܨTryrji.ԧ/"ܺ~iiIꍪQXz~Ǧ^Yv(m63=3MJU"x^7~鱜t^øSn7;Δi軠asol6LclJ̫b9FxG̐7 yJ+zٍ&KF~9ӕ:2K}/cN_^ -F[M p2׿zۖB5ܺ-L2r).~VݞQSG]LjRl>u`V8PO@O>jhݱƛ\Y" Ec 6F;cִ׿qJO)r "FneֱOQµXwm"? |;ӯDW:b!"=Hg< pY]U[捇r*%E^[֩ I=uw8%\1߹N@,O'vwI'SN?RWKK#f5!>^23:i϶/,jGϩ+FT"mc2ÎqӞVu(EC׎5{VKGp(6OM*5|YH' USSrUUۡ ڤwn?p9'u#r rF?>UJi"3cߎ}yj%ߡ*]2_!~;|1Q,<\lNcҊF 73+T)bԝg0=hWT!*=_z:|ֳ%ei·019h̿bzR~mR[N7IZt^ ²ǘ9*>NRrM 7( U.1bX~URy]YWBKf !qߜ^}L:L\f<֕y]ҸG+mNysr.ZdKG {4.h-;yE Ǐ8{TH\аaX(?2䞃ߟWMk'ZnjhbR?2;qyF"ݴ:+%6޿q]4tƥ>g^Jn< #?_ҷos*qDV_ݕ_㿶+ݳ:`o}Ek̪6Qק4w{"&U2G`ls\W<ݝiFeٛwp"S s!gQ)>kg'.]㠐/<4Wv!Ẓ,w1ӟ^^JZD㵣yi~Nt9̛_pZwC^miJ]8B)kQ#owoCR9YP:6Fw\2 yxQ6@b𻊮#q@;WZrqGEЫ廍#yqWR^Z=ERrU#*EV}>9>F1-hq[ƲmnGͅtZhgkq6^3y9xܭL:UyeȗeI*DCam+>#VG>JIѲmz2rZN*6T7@NnN}Fl?2 Nǻ'<~UVŽwJqީ6vO}ql4E J=z[n1ÚƍhTﯖ5+٭}wSf}9?*'y%kR}ڑؕ l#=Ϯ?:wiA5mSuR^KQ0/j''20񌩸٧^CT`y6?f*6r+9{;rF>m֋㶬 Qn|8\:7ܿzFNӕ}O+HԋgH|22{c~jI{];~iҩFiIt'َBcd$c8ϭuWO:m_-/ {ޢ!\W Ӑ=VN:D%]w,mhOq%kܝ2>nd6Vc>ݸ |g'8cS62s{j*؊m ",hW166F o//>^ib~#?!i"6X-8tЩN24[;F=}qYT(F :t!E];t _RIlU%8i'kQW;!hE00I$X\=.W wF.iu#yQx#:uhW%kvz=IZO"G22[ c3뫧N/:o-FP}Fe]NcyH84z42ݜw}}>V,׿AEY=;zu_J<˖J[h_eMګlwsQN<9b8ogX@[Xz6G۷*g|? _K07LTn'$Y7x>9(E|cR#I İ%dg9\㟯jS).vuGN4-[hY2={F7aŒcFZn@ח|˷fBq\S|RԣE-ÍD- ZPqzyiɿ//BԏXoa읞S7@=㞕Q+Ԗ/9QtgPQU a=ƅݵ^g|-5oe5`O&[2?i't3lLc,;VMň\#D7oZSӲvfM?IXĄp@@sDcMT6!vG ^R;)e YQ(qo2enIu"Hq`!g{$RF66L^e8y(cNxNNQqrz*ꪒV?'$V%d-ZNRwW95#NpW6v~'$Cҳ*N.QF:|њr}I&\;GsRE8{ڣRFrw6Hۻ ӯSJzU8Vv-iEǖfI#n?UӍ'>e&%7 VfbNlcN5*TM.ebkԉas?[mpQ+o^䑤M-nُx =8RQTEAkGR\!"fXt }灏^|u-(kgL",ϲ8ܸۜ#UʤU=7:iJy]#$՘azc_j1涗2/oU]ҿm m^F#Xv`sKF.]nqE7ԑdy!e 1"=Is(ŷ֧\cO|אW/1݃ޣߩM]TO׿9a'O`ѩ)J$Lqe.YIH sĨFf 'N0)J.Try}]5yegǞJKCǧҍ>ﰁ"ixf*wcMi/m iF}mh/.UC|8?8}t]?⹽53 1kj.5%nҫOK2o-68ǹsߥiuu7e V|U˓p7`zrxǠJrueFWROЗ˷ReHljQU\JF10%#;ߵLek]LxُQ<)8giNrob%[3. $3t9S0O+_Uo&\mpq jꋛן7//Rh!]"1#ҭ:86+ש"> m+Eԕ.{v7n`[EnW.eܪt}!0~f7=sꪵNk5"6+,+pbx9sN{0e;??A$I\dm>jԩO-JQ,?YndJ$GjҢӡYFUc~= "B(ݕ9uֹ8U5YEA-wݻo#Sƍ zMr>EhWgQ%HUsl#Y |"ss?OHٴԖX@1Pw(\&juITȪ|K)۵QL%޲Գ|VO{+i$[ jErXk[0RijH|Å~FsJSl^4//v^cbSpcxp;l9FJ1P"_, R6Hм|ӓקZRtʤiQ{[dlKllF_vs򭹪r;9s;IE Zy,<>NhOEZ33sNӏƊ1ayn8 ԷF*>{Hm\wTn:2}u!ճB+ʥpNH)QG&ڶה8%nZ"EV n@N*ܽʏ;켈:ٕfڲV'Knҩ{[em>6:w"8zt]yG>aIۚڝ8TRNfc" 0ޠqWE8ʍ."QA=UɹXϭ;OvVx?:5m@vc?1q#QTuEcPE"qx{+H˚nVZ驦*^U.?fZiF$ߑ*mm9fwzۖ{u!"+Vwuqdk*q":҇5!mQYB0nC\KFgS\I/"H0,#մ*GdyYJ/MG0ל=GtJPגkGVwQIZWlݣ>WSg=pA6q5<KU^y-EI"Tq5t')#JjkVk1@}wWӱ<թP\*,rEϗ!w9%tp*iܙ# Rv攪]:X܎{[uS5lˆ8=#4T7)tܬ7NI. b1Um~i?y?2(iu>%}?Yn )S]1Ե9rdsA8U x=yTy:RQ4^E1ۖ8=:UsSwڬc~@nȒDۘ5i2R4{Twȅ^a䀲8>rtQIhs¤d-5Ȓ7+cvOjOqTO\rj2Z9v7sWqY WV]0#`7qN<ֿYYF(Q_a֜5t-Z-eˍnem(~>/z[Ka:nDD|è?C[{HaMJI$k!^Fe*c)kdIpzd<5nQ5O暳Ӹǖۗ+ WyKͅTݴYfۢǟ9#ǡ՜)or9SC-6&Emcr\t#̴F\w6[б_*I[9Ӗ&yT|OWĉڨ_P197=GầWi3W+)uibnAB悔9ދBL#H31'}T)J? 
vKfzGҴn|{I4$G& RT0ߚ)RoFUQ睬 :gNU%;v2to&+Vso2n9RV<ܮ*Jˡ hbirA<{t%8Ƨ3sN0] #*=Gu'RƔ SqOu?S&gd '[F +t#ʣNu)]?sh&dnAg ӚUylrOeOTọ/.-.<ܲ)Ql;vH<7?ΔwUE#G ˏV0Ck!`{kxۓGRtc{d"sfTGG,+de_Q׽G=o6Ooc {TrpMMpmfH<ʡH翦*-տ)'Y7XUacs[{`ٜ>Z--G7Feu+[عBZEnd0SIoOɭ=<ˢzԹfiR#?vMmSZ^'\8Fř`KqfSs/Z>5ތԏO[\[Z4UJvzuNմ5Krէ{Բaٯ&1īm=+O݈G̱jiؤLJCy}O^f*RS=̫|., =8:*0tHZʖrda|b=/v;*QțиyG+/ZVW9&1 tcclHoM]ؙ LZԔeknc~lUdf b29zzݡ|25Ю[``H3:zUԔd%%UI71}kIF1z愵_2\,2D̶Hq҅Ӽ*'?\5i3}o'j_n=y1&,fU9d*MQnYm\GZ\p'h9y#97tfGlsZTenLXB,Xp@cs`eRߨ:HX=nj} (RQw!]c3|s[u&Xe#x=y>4O.- _|՛g"hwԪTJD<0ǻw̻JRqնi+#HʓJrdʎ8*5$̵$M×xXϯ~RbrI1V0vzp=8E;]tdƏ5OsV- DmTr>SI泩YnmS8مڔʱlQsDә55 .ғ78 }w#ሷF-ت'|&x9#k QFQSt׫*+L$>tnzdVR*TBTX-0OnO)jfJzإqwnT;ӗ-5 jIq4C'2Hߜ>ӕRRS&Q݅u9ssI^f#yƫ zojoMSdȚLIfo1`c1Zlժ{:N~qVo/ki aisT}ƥ9O}4[bHkg瞕.散?uƱl1YiGoKO {J{X;G~PG8Ng+hԊX6&hJqшR];+\~ (Z#;ӹ?{Rƴ]ƞZr6p3ӓJ~ňd!xAݽʍij~TjV#牱 !~y7|s߸aҲMͱ8Fb2ErH8%(GlNi$$+x6{Ҍ/}#LW,){맡5CXO8+~R0sKbѝap.|J Rg#s:aZJ7ka7Mo_/*p?1 ܍7P+sԜ#q>]Xp+ؾNNһG+#$70I'ޕ9Khyy-ŷe13+.x2wwBtf5DdHblUm_j{9bmwCYRd*Hw㠪jh۹uc쓲Uu>R&uo9u485RX_=~d?ј6FRԜ1iF>Up=C9 H>~;О*597_X+Rm2hvX"k|.Ls[K%jw_FkSú޾QyŚQj}Bq7r==R*KŅx'WmgK9ޱgF dgz۵u9ӷ:Ik?EX"bVUdjzp~]jqƌh;ͻ[hQYٟjz| 9_^QM~~oLgZ1VJ򣬆XT%|/lœcE>TrNndB7ەNсk U(UbzitCnUHQOAJG6]6 *+oMm!ڝd삍*w5LQIluRrH=:څoEo oVϰ[/хIs'#:iQ*sJjh`)En%w5{8ԣ%L8U<JĀQl/z>ƈӍ;r*qu\cUc.6/qSznnn!ejYrg>ZFTyyvɚ8V9%_&>iKDE烻}+W2*ؙS9R+;#i4T72 I韼9n}GOoi&}3*:^Aj)[ՋF")ҕӷ,y;\$duuB2J[+hIf_25 s`UeGsǿM Fmڻ *yGJ?z5>VQIh7.vc>\娹u.6"rD~c f?1Ҕl0$l=Cʜeڝ(5n$XA>zڊS Deʞ3+ڠg5̗s|=:}-a|21d*fUJVgWfb2G PGXh=?\z8{FYr§5JV(n%P<z.X\4h{MGi! % vy#4rlg'u7 *&S'Ҵڳ1*iC^ރP6~r+Fp:~yQr2>4Y$G[s:SǷ]9+ݕ"1nW B<`!~Q)KEsH֕ۨɅ"&ff KLySowk,2(f'8)KC8a*Fm7RHsn`z-rZ:V`,V̙Lqs֮O)I-|#g~o܎cOvq]Xlfb}SX<7VҔu~b[\sܪtyP9SwȱF" ;qsEsG;/+ĩ.÷ko?tYJ/EaYԆFUᕕ퍠15]ވD:t{hF~l3w*|T=;x"&Tk P~\EaRTz>ÖoO6i$.0$!'Q % wlfb8Xb!+لcZ!syo1UcaWp=S(Zu9P:|^a׌nӞg:ܔ_iN_]LZG6&]GbȮH;NeUμ;v vHN.1YN#ur.O!"̬҅2*d^w=sk){jvcH$SC6x?u.T(7{~$.k"8$czgҳUu~GxUitJ~oFFgk~2ЊykV,0LqmRTp'<`vX׼1Y1eJᨦ)++imJT' &qsvyt!N;[NTf֫V[1f"1߯`:TkSەպ#Rvz#ڑ(vڣ d`Z*J\/1a GۊQR2J¯,dxFF7b&ʯwcZTz#Jt+׹ {ėk6Dyˇgk91q*[Zr-R9⪝h*}=4&TjNtKbE7RO0~Áy=O:Vy>BRPvZ_ l q,FWTЛG$`s:rïo\=^:mrK2bɸx<tSON)Bz_p0iǘ|lBQ~?iɅǮ쵸%r'? 
}p{kj;KTc:{z$IRDhbC%\qSFi&EaR:IY$I?s=Fz@yκi0no7ՃIc-R}8eʹĜcVsǚRDRNm_9|HU+7#wO\WJ4ԗ1^in놖Xf$nsltNQwZW.T}k,^RC a$Rr$ʱ^-GF5IYIy\r#v~$9n{)2}^ciKR9v_țwj*8{VөuF,.ZLqx̋ՕV|AHs8h QRm[Zĩp窨zҟ7]کmkX{k84 ?w'q?^2;WE:>HMIkqٶZ4#!c# z∦{(Ǟz$M"ہ3aTsy*KszTYYS/gw#zʣG$NXZi][I*$BT|ˀrO<޴^U9_}[ˈV}Aݎs9k:r#R_1^@Km9nǯd9jծHH砌,FC` y5:m%hŻÕƑnF3ˑLs]IEӺCJt[]i F>TjS6RKnl *ڽFƴyG] T,N#`<ꢵ=+e$nV>ռ\d/ڨU#H᝛d&5w$*NPtB@̬Uw.@4Ju-Ш$FZbxka݂a6sJ6.3*Y~~ds-mi$ORw7a]RM_IJM/r l8>Tr2@>S6R>&(R[k(-Cm߮}kiU#cSy+aW揚0c =jd;XW4M )}_j :WTtL*S&MJyLgA=9{ (ƛO]H wnXc.LQ7A$vEw\WR9E)lRTwm LR#?mNR-:9jJ)~UYL[aU*[S'nr@yDīnX7¶#v} k@ m6\g?Ϊ牝?w[61Kr!w ݜgG7NƒhKᶷY;[AMm)JZ_RFW6vI7s nqHn?K[_NMUUڱ$I"FRh)I(oղ.g{ʫұTu^ҕ8y>ʒxam6pa}V^ҴkZ~R|At+\@`z/$p}G] ڧTUW#lI6ف98<{O5ڰN"x-[Fuo3*B`ݷ?:׷ mZh,㹶념my| *Ttޏp쑢66ʄpJKZF4]կgiY ?5ۮ:}8,~RU._[(Eh~^u)GMwF^ɂ6֋~摷o{='i-cNMOu=ڲiS\}A s\^)}H]C;HB28uɢTU;r9r9-%"XnlO_jw-^+5rIrrO qTr-Ljzv/Emwk#!IY;*鑟^sg(Ө .ޤZAH۷*e[$q.i'Bǘ@ଁJY F8r\ʔqs-bEXV>p{QR7=n[c;fs 7ʭNO?JjfL<5'wem.E}$+0Eo}cy͎*Rטu.$l`$`w/v~W*ҥБJ;E`h[>#ctr.m/t:i~T'haxoktaur5 IQ;}xJ3TFW*, Ck:y!P$zVJ6)Ц׽"ϧ=L>̿ 70G>;zs::j0G#<ݽsڇFUu",dO4̫!;6 }?vXTjkȳ S(Kxjd.Fn”x51olѴ y%积n=*SӞZ_iuw/\vQR :AգKǾRGqp AB=}kʪhT#MAEEH+,B6y nT+'*wz$>ёg?Q[i>[,Ƕݣ._O\\s;)FDKVrWg(S?cN>vؖ%б-P2I(s9Eܓ@ѾfSO>_y>j>귓$ď.hh90ƧSƜTM1H!B˜dYB<{⥼%2oʍocӷ\RMikcDIs6vp=r+9(R)(AIo6fflqzԓӿsJ5(O:崗N|UZьw.Ӆ<?:TͦtVJ[Ψ\ۢm/B?yZr*s}2> 2st대ְNT;jtTUz΄Ȝ`:VucS}2%M*8,{@lqA>-J1cFQuKarH6eJJ12W3FU*(eq5+HC>gȿ2qsי)UZ]E$$AO+Gr߻p|{Z=FKKA&yd=O=Aj9|ߡNjEԎQbyIa>V*ʅCs#Q7jiFTtdT?2z}gt]f1g1*@y7)`A鎣n+i_};`N:[u$x[g߷"U%9#beF*~}eWxyN'FQzzm4VA&IUܿPxV~S/̒6b9p1|pzұxGڅIƊ[!:[` W4&zU)u{%tOg 図zy*{W-Ŵ3Mfф`9R@UTK'XU)Fɥ+=<(&M?6G lLc۾V8IN]4JBd =9ǿX?r/y5&"-X1,$`͎N;}kJQj_{XQ*̟ vwqq+TjJW=%*QmL$Af+b0FdVr)b)T_,o_ѣ))BױJ57b_T0#hRCBĮ$/͌*jJ= a;Jim !Q,-,NxU'+o9)S}wbѫcy_J藲KAVҔ׼_:e22n9fnAxӤU TR-Jak[5VP1"[9䎧鎝 KR9b8P^~eͧ*E;mKoxoC}zW%:qk_Z"y_k>w$h#Xٙ~]frIW?5.W?eX-xpyy6223(o]-p*F0WiFe[[DXe*Je؎p>=Ml:~^i2QG5tt>섡ݻ?OQ.xӼֆx\%*kIlndlݐ* }9~?yNt5߽jG0J~vquNWJ۳aJk] Ζi9D$ `1CW"#ٹSz~^C$dPYJW{kOi)۱5ix~ϵ`[p?^ұujԗ/@3QWY5;D*#ϙzX=Y[k,:̑v}5i_+;9Kyw6]{ tTxy僴uky^lbUxѢ8G |X쯥Vjs%ro^\#[[Ǚ>fF@ x/zrNUWN5W}!AVH ' ժInԩS 26QqC285rI+-Mk9PܞkNJsG~~Jy[Ks^S}~+Y<q ڲzouPG{l 9C{+H&c<9JRn3QVoyR俼$lO.<*ZԜ}~pWZHfAa"08\=8]ݨiO?[j>o-HPK$zm4-$+S2Odϧa:*2QWni9k',vHJemUݵ^tϭo9J64U|/b6<]yqG)rŨ%S:2Fl5>B%3AR#ߟ\tEgkɤW$޿6 `gz"5<-^h^`wm\g~}ʔ-/P;6fW}εͯ".X+"^4ڲ$‘Ш|NG v?Gva>Q.ψGssM+6$ 3$en/Z64INi^Cks~cc;oלJqY'w:괷VFYyFvG^y<; MI^ɔ*kl2OhfY9N1WΤ_zikb8o~U2k9aIuөQ8StKMzzll<&IF|9,v],;m^^)1$%~lPUr[_ƾB[g#}}Fݤz_h+/nҌFrvJ J6=utD%'q8_SֺuQRiF*H\Wt<ٿs6ǭy.J4}o,hkMcI$ޥU>fKyGS?G !Y.f\qtӊg*.-5gYfʌ=w8B)&oG[tk}ch*jQeg6mU-p{U(nW[CaH'|826]8>Zɱ';OzU)+*_@%<^)j}W INE .Z$YHX,v+A9K+cF*\ v8uR\۰ shVQ-,|ќ2SN2ʳanYvYs4i`%df }3YVTz^F߻שYvĒnvpx(83)Ir[3Xf099'iMzŹӰ?l aNS]m~=\yq38j14VH_"1qӅV\w %eq3&:j\Qy#:URNK8<1rN=q?kibZt֣"Bc_|H3~`FAx+nog+Mlq $BIv!׿V]:SKc:u%6d_j<o0zL׊VW[s-3=d[e˔=` C\@J=~5%gӿQ7HW[bocOWNjETʥhikkRH jm${uK+m}uEU+"írљm#:^ެybBR/Xz,0?Nt2[i̲NdcmcT#Q);k!֧)=53E!Xc ˰sӶ}TP-+gОe^Gɷ~?ZEr^Ҥk)wIn6[3,a>Ռ=fABK-M)[ 7?ΥK_c9|;=ƣ/v|pSEҍ^YIn9I:wW!"AvDvI=y jB%5*5}^Nneگ|ݰ1۽ahѲW֩\wG6uAd67*ݏҰʷnyJc.qsmI(nVQGdWqb%XՎI<ˉ!Te>ڬ=j'fةUKv(UdRNu?ʹSzs-'7k-^J)l ǰ攠鵫fqE8`-$roY>V;ӏcޭFSoӹ:rQ=ɡ]roP@&3~x5z=5iQ 1;yv3܃?ݨ=?1tWWo{mA7c -d97~qF5T/%۰ˍΛlvaאHN\T*b%eB2'e?-|ԟOlRMޗѓMsq|F928 pNGNQ)Jt;#*5)'fƵDb VRN;V2_+17LgyV7xv'zzqJ\rSھYT!aCGSSJ%R<_C07C͹e}(|m*jXxK`6䑰z{} p.\*cQ[MzwjqMhr<6\;~E){4#ԝ5Z\!5U>^2A+Q 4}V}GZ.eel yr>dKWrݫߧdlIj+g {ʢ1ViJN=eNንG{q^kUsu;fcNF=뢍8JjkJoMjנ!kFUaN.liR:==zqMnAv x.x;+Xu.ܩTޞc V~m3U%R<͚VU*$C>qHHR#]l{c^J%?"'qIwB t_Ƣ\؝$FcbLR>_ս=~}mƖHJGSN"6ߧrFyO fNJ?kJAk苕jj]ʾaVӌ}ҳu5oP ݳU`̫W ts5e.{-*xz>ՒEn6*~fniS=dW.J/GكJk2VMzsӏZ>zW&Z[4k"W##NQtM*S$άp *]]G~cߵWb 
+1&;hVF{u>{PR*2dv^Dfy7\U_|/,sJVI^o?U*?鶨Le98}JnF罟^Lf/$^^1[i~C,%/g)I,bCe,Rr<βF vوI|qq=k8ښ1FU.ۼfɏ39J΍KUԥ*o+)mK6sɤS>"][KV^}QqԌdQկ]5+;|/#+?[,۳ZEF([ui2dFC3AOcTNTNx[bJImS$dȺqa[ӧ'KhO [٤Uq9Ѥt1ùv"k>S׀@ߦ)W͇r~Λ?(yEӻ1<qq$Rd;n Kvm)KNg9H]g<:5`J|>_#r:tMYW5)IU4VUs i0VkyS1.8Rnd~osʌ?.3*uG4M**4nz#R#*jiKԭզ,2?y3,Wq  >'mN4ߙ 23Z6Ey4|,*ܐ[N֬cۨ,&oUsg#+*6tU#帐Uc>,yUWRWM\n~mÀ 8Hk[J""i^$؅e 2+I{k哕tksv+m2+|uF1:(hڦNWyƴUd[#AuStӨNWW yt1qZQ猿yC\2¼-㬽JwEF|fp`}:QԧY)}=Mij|)$]{T3돻%+\KrIy" .Ivۓ#ڴJQydi$*&08'=^s^#j8z148^z$WGE?JKm[#`Z2{:s߷j(Ӧڝ'$֝ S4m)oI}qԁֺ#S'c$7~[JٸUY>`9\9}7RNGTcZ[./˿r+S4$UZ.VmE'J3|;RST_Vy^?&h'0ߧ5MW:S씭vw"m?G&~ni؞X.EK%i> ޼%z'$sZd{IQ1uܤZ6zte]HB['Ғc*3nVW_c ;X8kN\ҷ\֧fu+dUmt 3D>7D!sֳu%)6 G.%:e? *ߩM*>4[snxٿv+IMo1vAN8={޷}Ÿ<'?^iZvXahg qUNUũΗ4\hɸyW'l6́#$}T*q&=i+/3v]ܻ~5 {yFNQRd`1c.Y|'TSR^U\dFəTNz/󭹣yn8iѭWw[h[H5l0ᱏ񣖤{V'iwVq5:Im7n X*cEE^S%ZQOBHVQNwJhS$MdAX.(;P-w9\yֺ=C>?.rzqQ4{kg=|FX"km6ߺK0+H{Vҩ8T~ZyMElT?2={U71NI_fH7P@a)3C'8L ~Λr⹰Wl?KW3#?"T\tS.[CJ(J1y;lX3I,L߼'o͝_ZڏE7Nrq:ɵV:Vr%4iۨt$[iŅ-(OdW*E [FѳBWkcoʪR慑:~pkhU~*u8Olz}+7OVJrw IJ\a"u`pyq5N1IzRr}IagVuc [k1oǃHe9M|vr|viHU)VX<#<eŻT躑楫hѼ*Ppzyӛ`smu6$L+&ogt t'Wi>gӎsYɾ^+ZsM=ɴ2X_.*zggEII˖[plw=U_eϛ߁$r(wOn: .[hٯ)r㌴f?ۯ֮1B*qDXs.Y2sRhiR9I4q1VUs:tB[h$ʲm?3t3J|spv!ڹ;s߶H*-+ݚB<u޳1^.[ ?b![Q[rsi' 4Ή_%-M jNׯUNvei}^vc/d}Em[mrMWHmLNjzZxjWD3?ە hnUpiV5v:jm۳TR\(ҎZ"mEݻ88Q VO.87wԊ -o(ګkl{5k ӉFx#M+<NqTk){fN{vB˹FV'o?ZjQn[vY۾Pcv<#ҧ_1cF"'HGTcB3huϾF_g}ns[S]#ISb̹Cb0Lӡ⊕9$v$[XɺV>rI?vcۊԎ= j8as5]ⴻTn8sZRĎ|V#G)3[OQ,;$q? ޏ?c9PN6]FN6ܪ+z%fCXG vHىIsDc8M[AD2*pWiRIJ*^^b[!,1 ,$eOl⳩JaF.Qz=lNbn#ÚsEmMՌyӿSԎ]rZ#»,+I|O+NiI"iwK[b[V]Nq$1nn9?jƣ=U_i%["H|$j3M6d>ݿ+NjqyZ8ZҼ~`i?v݊T}'̳:f $c )[Z&kE&+ڬy1:QwQZĻc B(*Nzs]̣Shѣg&ˉգn?[|ƥHGZZRzj'bR9dXn)1I]Ì$I-F$VF*=Aקj5Z[8QiBD jƣQ c1[GEҳ{¯4U6~Aq3sKl֍s6Cj?XrX!"4j;tgʳT,Uvc֜!c4["ΰ(#YȘ*Ri幊85UbWK{TBr~ >Qbx}ܞ}zs54- Qe[y%*JF/ŽxZK;&gZpI'ܲ,bZEc>(SHԫN[Y,h^#E铏f2К,w n>d*Yӿүg+Y[t)y6>K8>HɻzJ׺AX ck9O[I?#N~h!7de*Zo,-I%/EgcOYGtoؓ|r,l>%]ʼV^[Vm{dH{5a{^K([ rN6/uitV>ej.9Ԁ?Jq3Kݷm4?ڮ"lyV(r\t߹f1yR_1OzGZ⧩Uv%Z 2p u{zU)F3Ckͻ  ~b{>eI/c-\2DHmH_/nq=S\b\hC$jȫb@u=qNQwzIΣF*eQ9J2u6cWR[ui7;<+;GLyiWYɒyOTlz;fARqd|YcfDVd`}3X?|H\Pvd(ʓ-QdUQ/͖ jg*qѷ~YQ7*g=k8 Ö+Qw~DM8F3u{;I"#MSߧQ(r2F9tfʔijC,nQT\ƮBPW #?+!JSvZ5rK(A_)7 pl(T׮8ⲨvU*B1w%1O#۪I0`1RL7^r_"ǖ>{[rU}XFҕ E'uDV5+ߨ OקT9]HӬ{v3AWSN{Pο~:'̖W0+9+¤vgU%֊9bi#*nr ؃M?v^dK+O]zNܨXY|xiiרRi.Y $`0JZ*2+ԋOݾi!THʱ7n{>XsiۛaS~N˫5$Xjm9lrSZ^lHRݸSs<\W cg)J:3bn[Xl |eܭv$ʱs4y#}HDI&e'&"Ԣfuc%r+=kaqs⏬F>:i̭hXKʫevɌvҸ=x8ƥ9Uu c^7`~K?J*~JtgN$x"Xt[hvSYmsSSu;y|(;SSwM%NMF ԆO*yۏ¹2{ZUa7Ko! 
J$+!HX,{G9_JU%FEՋs.'c6O14`y|fghףV%2ăȚy#̐98=BPz_S)*#{-e6[W8y~u_6sTԌ.#K2c5~~x8\ )2k7ԛ_q%Ȑ6\$bF}U`tyʤkRJ4nU)ԝݍkFvjKu0jUwC-#YQV%;zڹj1i$>?@Yʟe1cGo;kzsJuۙ[ex.|lUXp $N:1 iӯ^9rIX#Yٖ3mq۶k8U"i}=ifcD5 ?Z*S㜄e9?瓊n5U1PiE=>Vb& (x#)ӟS&Ӊ-.׈n=]%%ͭ啓Z?hb8Uv%xzV^ך=KadH -7|wKXMM%d)(O~)7B,9OCXF2vFƞA ,|643nR3Vo<%Uf^3'9#SSi(Qp[{q޺'NS5ɧR['{ʹ{"D 0q}R9I$ݸϚұf9=G'VԣSEFR[VM,T.U3D̹ݳ:0o7B#*Ű'ߵoW)'U٭[tXV3o &{#>Խs9ajJ<=I$ӢK6@l3n |u8OJ7̩ ߗLC3 =[uem\]B|\Ko Xc?Tyyh;u2)-ma,2[ܐ}1ֵ&^>MRI42yjղ:rsby+IjsJ#K]fqlpqFܖ5`D:22FуyXk/%QŒ_QE#c^Hf`ӧ_񪜹ccң'aȐ 0gOCY?ce"],M2 9#59*}bS5p${;YVYWR mIrŷk9e#; ,GhֳT*F@+[L*8xjQ*SQi kQϿa(4ޅ7vGv;$u߳DԍN[?qB6W*O⪧,trҍHۄNEf9o*Lgc*qk]NA}fWFOiF;9hQ1$mkRwѩ pk$Pwp۵)Vr4{:ҿb <9P3+7愡70y&K2GRUetfN1Y8=Ч;Mv$dm6=}}?{ԓMi}SJ˻99vߊt*i,[$}H+~MkfFiʹj 981v8ѕ;sIY%u)w²IJ6$@fwaQPiG>k-2W9#8G)Y=M1Xыn&a6ʃgp㌎wz yn/%)6HM"HUr?͜25R+V|cN[ZXmfDfe-'s~NRoϊRreCfi&Lw.WiÞG'F߮4VGD,$Y awFX'6֔-S{r^ѓb- p3r֩';kL-t$I[nrH]s{ ԯRPN*>5$!WhraۑZ1W_M}E-jYeK#fW'V:p2H>S5ʴ+FЊ٧Fhd ڼ3\#^SJ׺bKUmzzZEZ1(1 2{Io?/@q۟~[b)*7{-A3D*y] <㑚&ՙ%NTIXϕyp u{:|I*i);N0{(Ϛ:ӻW4K>wM墎9pr>Eg{(J4;eDLɷ˓^ݽhzy#p{$x6ymJ>Tovm H VL[3gˁڻ4I N-4JQ]PT=x##gѰJUUh&#2 Y]d'w*s#Yv2[kx*<ʞ9(|m[ dU;zsjB5=KJ]<1F" cvwqC_ˆM5#?g2 v|]ČwջrZnk|\kQ#^2Kz[Qz~B٭+<%7L־b8Г]:&m2XDjOR}kVfv\Mc]9wӥJ厌*k[Xfr2Y _ҊiRv(tĦh۔*=xQN2ͨbiS96JȒEQF#z]a^:XVhT2G'oz/^\ҭȞȊg0I99 κ1'{IĒRdl涔Jnɦ4qٵl,r Vї=kRR}?#=l5h{VtI}87>krVjN׿aC @{N ˖ M3>6UdBR";[$s@Q$:t]1;kCզ6ӽo ™m ό}@k$gRK P}#XݞWTnlqd+YT1s鞾³6r}4!lR8#:2,xڇTf^8eX.>rX)c9U-$6MFkai,"A/GJ6Hbîe/"mo[lw.K`6{u)ќRuQ[[[C5I 1)yss~MOftTO幟R+##8ϖFsӜsmIK[v9ܩԋ7/+=żzR'[0vZ[ʭ#/2{޶sIVhܥ{V{}ɻ3)cVQR*{gAFW[9.m#IQ[2`Fm Oj#SM"eRmnI5˺Z?.0qG֝iW+ Hv^bʗNsLUI;{GLq&5qUAYƜSletr֭.d/B¸W h^/r8FI^}Z7VWw6<۾1KRU)8e`R)-YxFׯO_¹s_S[M+"Mқy,,1ׯ9=+YF<؊Ti=y#PG?Ӷ>&,=SѮ[Y#w+f;ęUUQ|JҲ(ԩh-T#i;;g{ *]0A*}/E`]( y21u,njN1Вw Ty*#nA>t4wR*t\Dcr,\{TN2qkQw5Yʾn8&卻W&3N\"[\2xfJq;ʼdӎtJx┯b9$ry;{S8^?/+NNȱ>sex5OveE{-Mǜa($yzwLjJ[n}'|Lo +#e(qG!2&U1F`x{^k娭Τe9<߲O03 c1~k9~&u֦u]~cX$ d泗=m-!d^bs3˵a8*4v#:98v#qj+S*Μ[!ܧeRѮN3ӓaRy\0=;1;wV .08=:}j}\SӚ U%3B!E*n`#gsrڿ{ˡ8t{"H2?6K%N宧7֢SMz1+*9<g*r5N)JR=ɑn#H#4;|ǟ\b빭WʏDj- p҇ 09)ӵNyV:1w$37R}?Z娣풅\JHreGd3Pq=5-s[Sϩ%8FFěFyqQr=sXgR%-֭TCIh֞^O+5RGelI~(QHMj;&ȥ ;~L9ԓI"<-7k4 w @q~]sʏ&N~k^rַ.#9ae`c$L}FkFde3j)٢mҳI<{vztc&ؑ.cGc%a޺#ԲkK֚_FWbwJFBdvtJZ(*U#`l&pQ#w>oNQQiJsFM.WZ1z^~mQ#>?eZ*mreF2zqndXh9'v 1㹭o9qU4קr5HdP2;rrz5:qtdu*5J\&9P@nJs=殎/;y'RZ__Ȇ{YU]N8Rvz[U8{,̻|偐@ߞyiK*J]=Ӣ>tq[Vˡ79#Fo&r6Hb8'nstQm|.yv^u{"K}";c(VXV5`AtSZF1ݺxZu)Rmt-̷&B2|PT*V$v;BTOTEK|+Hɰ'GT{s t'ӽUEyKOLcT1Ɗʨq0qЊT#)*G/D7Ww=OLRL;rQ¤\p?1ܫAm".&sڊu%7X2v͵fc#Ÿ~s-M-z>ZI'~Ԣ#{NJdS$nlN"У_8%Gӕ[_ O>6Yw,29sVЖ/R2脜;xeQ9?*gN5}mYi)-zcaxz#Ƕ8Q h|pPN6q^9ETztJkC)9Ƨ,i_jH[̊ u ";!ݪ7|wԚ*)0/+FmѢEWvw_\a*2痿VcS|ͧue?BtxZID2>ĝ?m%GAhͿ#˖ LG-fR#Mc齔rxnx\ӧoҖ"$%4{E&࿹#ЎzJOUөF4qs=jyf=7 gtQFTf<؈>]ߚ#4by<À8^hRx'`K[v q?x+^*cZ}/gD֟k gq'ZT]jx~ogdݑif6 c=:{YO̪vi+dI$,͸Q2լ%.dau[z e[[`HuYN}*B/]u5eEND&9$S,(o:v?g{- \K'.itv_p h<.p yN3ָ'mo];V<ڥW#mI<[~@ӽi%}3/e!$ס7 C"1S\:ˈtJ8gxd1t/sFnޣ<®R*rܩ={&l6E9#1ުK},$Re}F&^x<*һȪ݈s/řVM}We>g)գI#2EnlR{=)Qiu0KJV17syUI#ֺja]=gX;7~d+Ei}O`ҳۿ\ɨD:HZO+zU .-(껡n26Bp9㯩5dy~)Jt2[+XJޓh<]X{61I-4< q(<9 t*ݶgp1Mۮ e¬Y*@ w8<ץMiSol=L<[{gqU3n:{åtaSҳZMkI DpBq׭Mnik*jICqEq5BddU{S*t踥X㕖=>Jlv)98'Z^U$Z443:U[r&&c< r@5N$Z59wȘx2"P:8.iRKkb[f H|Y$T=999Vhөw{%,>"u"ܕBn !0 䋽O[͊dܬ+H1^3ZJ2rZFq.neNb6ՉUYK7895<*Mku)YVwZCorL3cc\2OB8:lE9ʣ/Y09'Νt{4&7bq#<鹣-#u=?u5*q%]WAEs{{ b*kJ䞣%-"hXpHV)HU#[0ŀ^:g5ߙ86Sn>x#yVo?jXo=k*V"9Zv]VB-ʲ|{g$YէR59Yv*ʕ5רe 1R]㦽J @Zvª޿tc 9ϥe5E5+Ӎ'~Gqrab0FKY )⹥=-g^DI>#C">g: En#Mmd2AwzIp1w888L\=tb Eg*iۯWY syG :sֲg~ENRoV,V>"J\9ְ0/RJ)rCl"bw|<А:FqM\pԣa s_Fc VdnkNn^ivGeDhq?ҿYb:qղO5̰yQXYmɭC2ywqrskae^V{b*WSJI_rSJܭ/}eZR.Xnz&AIKyIʫpI_aV)Z);#TU;t~竅G˒8.P{/x5r+}6E І߯\؏z:&}Fк3|mq^=G9Ӕ$F)܊HOG:+0Eq<[>ar_c.Ҳg l:6G 
nWo(ίg3k},G$P$8<ϧyjKsthՌy.H߻vtƼgcFSR5T4nqC7˜ ~*z[{ʠU)F(3:#vdqYNg$߽?a/i]߹()Z!Qf9G8,{~嶊|RdQ/0ן5]nL/,6{xco*N*=+ohEh`~<f1hUiP{eNIEic&'kvR3ϧ#ZrNvIU4G n[t8=SN-8ӣ7amUb꯸v^G\q)^S'Ϊtmky6)`Wj'=1Z֝F0TN7ko/ $,c e_ W(fp>gΚ-\[2G̬d}\vRg˹NRܒ$F6(ZqV iM|t_۽y U*$oO4+nU)Z6^6|߱+5Ī[!v9OZ4+s4Nzy%ۖ~Uvn;W߯2^{f|arD|ʤ: eDA˚{&T?R{rfz;c )2\zMPVwk@{)@f ] ב)>UݯMqԥ(I53ym8UmO.Q^^3ʰvqz6VF*tUr)l~b6,F?N{V~_YJ;G֧>iykn\ˮon`.#\,62 g+c+m{J?#g2 yJ{-zu}~)aZn~i`8uVk^+{/5{I4]z0EA>JTQYw7K5~nz64Zi(~`\.q96GaSj}SUjЕ>koקth$e̠6 t1,X]IomW YvF ߯IpoInlmPx-1ӞGnqXJI_]T'9؍RSmX;H5,=H[[ɣ"OۛU#=J#&hi4IIo џ-F#9y/r)^ކ8V)STEa>h=x8<z*RwXQUpr;sD(Km~DRX|*Wo`c}+&֝sA,AVB3F݆ I$`RnK1I5oH0y0#$JӊrlSI)ΨU ,y9}>q){?M:Ovnn#ዅV#?@=84v"IsA4ek-; m6OqFLy#8OZ Fdfer}i/cSU=XsI0mV!N\5~UƷ%?u]Ȃ8ܴƒRw6HzuTMVZR$bump,DowrmZ"I[|hĊ,;ⵝOݨ]1>_zr.n/!H LlĒ:#j\t*Tٸ&U1;twݫ}Mu+k+G3#X+5R01OoqEf'ZWuXe1=+ջ&⽥YſUh`fWY3;HCSF2ӺMjR&}zzPcY<'[{}((B aGnk5Q]E߮{*ʒ]zut$Y$?ρ۞'◽{:%fJʻqq$14^C(SI]|X c8<;tN+XROd:Tj-HT3P^~kU9d.zr7+`v*#/*75{.lG}2 ;|hͪ!Qo %a|^'*Ws؃]%DUT8&݉^-a66w*Z1V{_Osl-c009:5Ɯm﫤1k< 7a֊qqiOr_$& P|#HaIҩB;pqJl"H<>n:tcR*n*Ԋsd: Τu۱=vI!=2YCu8ӳ#G&Xc8 Gq򭦪9(fR<$hb.?8|j,26.ѓ{{VeMXSQzX 4o}8\֒u9#:qJu,m$.e`qY_cV)7/Q36ŷ Us9r ;I"BpYT}x:s^EkZ*k{ioRMl1;+{Զ4Գu*Is0U(mg~n+5(F^qj8%Xe?0G\p}ץR,R4' v?sɪRVW= |ҌyZ2?0e9zG,Q= n{o2%c1rVѦl엖SvӸmv|۟5QKVZvB,(UN9a8={{Dբҋ <|p ާj7w[4e(F\40Ĺxca2:t=crweܩuH:z;5{JTcZCBZEe Ss1ɕiTõ-5]l ,SKwM)q=8kHъ̡:y ec{ⶔ4Vg}9BI]- neڌ:gu}Q&Bniʝc>[ټ0}]mʳA-SFڎ"֏(\JTn2WT}uOlrjɺj7K#sєaTI6 uPenxwȮFthJbvQP@_9e(m߈ۊf夎UFDRQ8hgXtFU2w}j6_VCuwՇZQ6I{Гv!V[5a?_=HRW1T{^>?N1NHSn1+nsLZ.2֤sG a|W弩],cZ5/R:Jv9+#gjES^8EH ^]d@@$¬1g)oׇ>H;5by.UBZGV_s. r@>>~3Mk] Ɓce*Itzz{?gh%~*3TRkrľm^CyzG܍hԧ)>Uʶw_Ț9Kw$rdl\gӯnԏ5N]E^߁"\$+"?z\V9i *2vh} YSS}ֆ2Z1SwK2%ɗǂ3Tp}W➆2Z_B9tX,8F<#ʯ}t;eZ50OYt,[I+Z+M݌|7 \g%<}d[xg ٿEhs՛t[X񭔑\mڹǓc \҄\gRRO=ʥ|ŕhzt:G)[݄gTJ$n9^ݤG*vZ/$Ȭn1y|Ug(Su=iۧRu4kbR5& `7*zJ459'kA,,85»/Lr21Wշ3u-e VhHw~*g)s9!Ƭc-lsEV9_?MJjؽ>V4.,Uxʮ̝*yr:wNy/[F?ֵtz mJ0m1y y8<q튽*hΈ֧g~Đ^]in#k0s*(fIg>2RэTOyPgJuo+NWA1^y#$p/,AڬqxTOF#J-أdO'Z_BQN*1MwʸFr}UU}_ثypmfP>T~)r}*J9J*Gir,*XWi^9͐uѧ"T[S>R8$=NOڲLK]ldIi5AV!Uz9R5' -7a1dlǠ zXTc;EhΚu=KD*WUgRT}8ȍ!I$VE;vsމT;Ӵ9ީm~ʲn#<=kUK KXj\Ems){YJ8p669mlv[9Q)I\մWv0e݊зkzFCmJ#XۖS: =Uv⍸Ҫ ĺw75Sy:L'GQZJei7veK-Gm=%rT.m6/ZMi*4*C6Ӹ?#Ҧ>޲7ֶ'h6+'ϔHms姺؛ihN**rQk_I Ťmk|6>\`ï*mJWooPˇEo""$j̬d|gnz }=iGݒmߙ:7<*H$l rT7kxuJ\szlYiDG vƛAqzLo6gUM4حr1^$/F,+4 kc.@=*`΍>[92Qdxc$ fʿ$pA'#3*?i+V=#:GVr%/y-j>KvlHC վdzs|ܑ9Jie&&R\|:ӞGL/k&s:4w'{q`ά];y ;|^zt_QiY/y]( bXJ2&RtLvVV\q;ÙƝ9J^E}*[?(IYԏ4lכ<#FBw;\uo.S|kkܒad]gS𢍔.E{h74>rHp}#zRԯi(Gm4cG$-^?Zidԗ-5-F0>m~\#Y{83rq+"H+wwqi7+'hŶs=+t:қm'5zeN6)ԩF0h}{w_R?6I0dBILǝ(<㟯8ԏ(U4^X(b^lylۤN`2-}[VHĭ;#x=3jԜZTht>ݭK*'yFS和]7%Goa ׿Jc-g5rMyIPc dw9$cҔ#QDsBV;xk(ʤ-X^<ѕh ( T'rҖ[^)gWM]\|9֊~˚^+SFw_' ^p;ڃnؤ!@޻v+9T槪Ei{? Ϟ8ݲ}r:p+T&R)htܰ䳰*GˁoުHj4Sv^)v/^q>rr,ӕ' %KkYJ쑉#99=GtiJi1QATWZ[돴[ay͒P2y8'dUfy;(>FJR[C-ȂHޯܧs{q|lUŨ}t&^ҝ&E4Vit2 `G#(ԴwEcO٪j+mK6(çtҧe-,TjΕfD ЭǕ,dJg_ǚ҄jN27glp+sV+]^5ܖ+c3 bHT3|ptqֺySjsJ>VwQ3ncN*ūl5dܒZ C]N2oS \W5^ݤ*!s8"0FT.[ ġd_/jyU?ORԗ,C,[UVezI#?ksJZBF*1Irf,ܲ3,=XsZQ:"u-љ{t]09\+FkRьU<&Ek-=8#ʮQ(٦Lh~$T#nI'ҩ%V Oݒi=զD[<0r7Y^$g^ iWu+Rb E6їDi,0n/gN]62eBκ9%RQҕ(عEcȥ}6jy^5}$ۺ8"k*-s֣\RES4odapdw @K{:t+*qS\̈́i{Vջ@nד\ӝs{t=^wVǖ{8ȪrSyia8(vUlwΟ,CeCL7rUܭڧwZz涣mÔm5*'ǎgSޒl,"#Dw{FSlOYQvCRgjD᯻:qoOC2[oY~NT|mviBEtUX=HȬxHʥt ewaG1lks[ȘĎ$˓^Qe(T^2kTȊ8U07Ah(1q멜+/';xh'|: =huF=. 
n0da?)8V/`f2֛R;88yƤ<_]=cjǷZΤNESTH~s’@Hvx=+vhgrODF`W˚6 ͝8zYJ2M>g&_Y'?q}GY*/imobxn|S xJ1q5i'2ufUUZz܈=IV3Qkb9):~VH<$#%[l?rjsI{=?%Ou`d;>Y`H`}3'8}'J8<9%,Y769U*bmƵUiIB\\C䈣`W:49O9V<ɭ%ڞƠR\G\c>y:)R%*Y*%B0Y#ی '(ƣZZﱎHҿ7Hf)"f{`wQ^*}֐[țFNdb;d~tn߽R㎧NJCWxGo=BԓAԔGoM6"{ekrq}~(BpWkK/TjJ=~|nJ%8UZQ/׻1Egm~9iԭY+NZ0rgAЕl{qw5lncJ>Tmy &v9nqߧz4w^<l˶q;@]oOu)Ze>3IKRpYHqq F⛷ԥY]-ˡP*gT;O#Fpy?Z\(S䒗UAwlHoS }{UƟ6,kS^j0Vh+22@B+4uJTROu/*IPG2Ο-G'iʘEXqߎhݗC 8˱٭˃_3,=wn>"{ess)8դE-X8=Gb~Iz{9Flciiֵ6# `9zk(4vf|*;K"L sտZth)Or9&Rȋq[F1|SӔ=\lR\2ylG#T\]rq|vD)#ɂ8N'+OTwj=O߳!gUQ*23>ݽЄbS |Mz>Fy7S#H.Icr{F7`T ~(VJiK**ڸ\vyURuل5v#IGzڧZwۡX⹔F;vVey3}+n_ݷR!MoM J^yroYwe*g){|L 7pѩ4Vk=M#˺VOJ$(aȥeӹ`-բhzt#׏JnOSyYIO-c m؟4~MksI{U'}A\"*qX)ϥT}զеǖ=c'^ycm5(4֫&EUڤV^8Zy2NIZ.'6.4ybc\쩤z]I%dPY01ڿ\w#Z)sm դR7)'guⱊQWqE*{t/d:!GrHN)J9J $|A#A1!{sDhӻSחK+o$rHs؞p:ZrSrZcIS|ץê?72\Ke[U%xfq]ߗ[K6]NBQ$F; m8>j[]?4T$\u;SΩQPf4]#c!/?US~Ԍ-l,F`ycrMMje򌌼eְ\3W.֒ nYќ}3ǛZ,LE㥶;JQ>a>gv)5SǛWɐT[=USg)GR6H30wc*\cg*Xo2qH~OA#8d-60NQ2wZBѥIGnv'=O8f+ZeKQ#[I2~<@Ƿ\qֱy<"%{fpU;ۜ D*kϠRY,~XUJ?a'JEtq2;9zڕv*T.&okFhu"c sFM9 3nh<ߩ"mOɧNv۹؏#=>'kO4yRB0 ې:qӎMcYYg9JzנV90|qӃT˚2G1ldjڮ"brԓ`u&O޼Q3ym $c76b7mŒ~qR5yZCɥݑ@8X;G$czzV>4VZLF,{u \f(j -"S UWXGA>j r]~"iJ3kԊ Xpe?˶kӭZ>Ku20Ѷq=jQ˧NTJn w4wFB's\Sҍ~' hmT*#XP󭝝U{Q974]~*)He D]05km{XM sut-1EVsqtSZWEۦ_!IeH랽9=?*Щ{ZiE]VU $6G(IOAVR {?3Bf߱$pYX#/N:8O~5ԈT[Gyd,Vv 93Y֍iSҝI*ܵmBkKqr+7PH'nG-KnAtn^݌M2˻˓#hw{s)Ss79O*jnr,Sژ_޻;nCiӝku(kQ$[+4őhnwmq^F՝9TKfE2#s|h xɮz*T// 4q\F ^?}X}^T^j 23Y vw=:k[b***#u6c1mh8}kyG)~?NZ1Zmsَxy6Wq#'9}(Z+=w Ȓɒ2f(޲3N1TVSsgHJ; cq9r[nF]NR]i5~G䘢B쨸*?sZ{HӺkrgqƔ^,\Ɇe41kl4\cdu1{9bl##:*JoGݻQQ[hEOi 0mlnQv]T#ScZ_'06VV1W5i_RЧNQtimQH@>h\iU}dPI98xEJQrV8Ӕ=01<5̌͹9|{T59aרB螏{TK&#i?tTƝNG)ʌo_6"w3~-jN#7/m^kIβa-bU?O¹/};4]"r#-+z^k_$ђTe E!;CqmܜtUkyQi\)ɮ[7~Бim`{5$W i!1ܹ:N]\yy}c)i=1)Rj=%&kS*>'nʺTQ;Nz2f0f[ 7<+|B^gT"(6;۞zWn/Cҥ 5FrXvC,a¾IcT$Pd"EHSrrI<SzGiQ8HL-^@$r1X*n:%~W߬^0{Yܹ7t2_?9V1ԪWJʩ+rԃ 6~\G>X !7)dvA兽Eͧq UI'~ƙ8$'9;q\J1w_SNl5`e hiP86\QIO>91#cIZF+IO'Њz"Մ@˘[ 8afe:u9dbO3)_)x#=?:ʝ&t{5Jnן^*1rʜ $9E*c`7l9nsU!yM ZbJN8$5y|U3]^=+Ȭ+ѫA._R+Ni#t/vxVfe8kJtksEhH"ģ?n1\+n؎p2?.kj4r˖k^ dy*R0y9QI3~yʣQ8af ASgQ[]ޤb'ihE,|Ƿ_J\U}![ϔAmܻ9̛ڜAKxǫ6 9OJ2M:"~<2J4{]]U=;dYԋm?U#pZB:ٹ[8汭ZVs>Z=|Xl+7l09uo֢K{V5x#, sDMO-{w.R笗ģ,rr8?θ){ע9J2POvo1w V2q^OOgY1%%_{=G9kЧ(Oƥ([?r1mʤۮՍ¤1$a玾rqi[uӧm6wv̋Izv\iʯۯCIʇ:W-[<~"Ϳ^)#p8^GjQ^+;=L#wR1x!!Iߨ rrV?2CdH# 8=;V1q__}9:R BG3rH̹$UISrVZ_:#N1IgpEmwnYq^v={>Z՜"kgF["Rf6>~5GZ/Dim[%ȌVfV^8W%jmTSwu"]?ۯ6CHDYKI8^b_qr]oI#nb*#gp:~=qD}x/y~9$ײ"nA9az4eۜRW{_ΧU!77_p9>(֜\m^J^wi|͹Y[nOqj1{H4yIh_1f x=zfjUFbO&O0>Vp^'WLkJn3hm2G _B6Q[|}}^kC!m$'89 rЪ{Ӣ+i-: "Ur F@>O]ur3]vԢ]FJd_$mc>u"T7=˚U{:  0%AO#'b%N]6EdI=7Lel$t;k=ݕZJnc{[ &6Gn0;zsӿ]J3R|7k꺰_!0_,ʎďCYͺRLHr/n 1.-s/$vTghj1J.y-5QOGYNco,E:Qiu RE,AeTWFNqtZT,U|֧KRױ^V..x*pqgʵiƖד[kHԗȓ͑Krr1ܐ?i/+Fc'+`hR k`Ͼ?N+g=tF*prw$mo &9沥ͫJ '!AќpLIҺI֍+uA׻,hWjn1=?ɨyej]:ZWS%@ 5,۶&7onKGKv{R< G@cY-(h|f5_uҔYQvA)}8'=AJq GoigOmPy6gֽhQv,nq$fO^F{Pө[`nj͈dwmmlSZCGOA={U^2in8SΕKI^V8]8DUTzd:=:@ۼΎfgU˙' (w_I,;qXӔ{W2PX٘ȧ00^UJ2rqNrVO[u'w)9?1lE<5r5T4bimS,6(l|$c;֏kR5.$"αC#}n8Z*JK+ra氱I.m}:j:OJȬNϴs59SI-NE:|qڬVs>Zq Һ(F󿕴X ܬs v檣)Kv**)FOUK"eSGٮX_ꄫb $]V|ٷ#>djNq}""sl-hFczKJ3uF<*ū!Iªv#VPw`?U:sQr%3]p̓^U`w5XէMsZוJ)I~u1ossiJ޶twFEFn۸^{ڵ\#hQ\ܴrWrY7Ψpbt?jHFet`}94hn.7@qڔ {} tQobhǞ'_h4d\vv?pyEKWk*q韛ވRKQ'l.^M *~~U#|ím͹6Q3P>FNv^rZ-OɊ $=<T#Ro]E*˕mYOH4d"6O8>f~Jk)nx= ljn?ҫ$b>NH.َ2,e+@w]Hfz}b2VBNqm]ŷR8 \ZW[_ue#)8''y+j=5*ԣ9%if |ڷ*/-6]:6Ќpe}tgUU4kN[c6VoHniYkU'RH9'X]cr1t#B^~yQ gT9_eQ9)itLʜ"nX䑔1qJJjI]*E;aO4JU%;݅gRWV *cAK%3m^@'1T`{[ikԹFR|:{>+iipR Yk{irj(#LeR0qDF㷘\Fu_-T,ÏyRi2*I_;!. 
YBʬ1/54|2:pjݭ1xw%fcR@W#o$]T"OZRuDk'۠>X+۷oN5#~(Ԕ$ቒy č}o$<-H;qjaweu1Z^voKK6+`"N3q(,omEQ-@2`'݈dy0䮑zr|B3hB`:cuGܨDZ2zkbhdH?#f; gy9s\lاWpÁ8Q،GwmВ )&,F gj# vPN-uZ[ 08ߚ7_3jҬt};Alp *D7|r@z[sZнNSF&nVh~VeynJl8JjMkrķ ?:mb:VS8ƤJ;:-JY;s}VW}:ь\K[`Ykonv=F92CXPj{jTu)$$ʾ\anWLcزƞcmyN}9TriAD"fxC6/zޔOVEJa4!*~`~dU֌+1R~ܚ;UNSrU?je4(ӧEܪ |O720-ǯ\ZC\Cupm\1fl~Qtqṹ.ݴ6o=vqN)JWgW(?#:MZh!U?%# .gۏJ6ΤdXPa|qSoGԆio14}sNr==8uWZ秈jdNvӃ'\u98*uSA2>erꤜ\uQme|j4S"x?jU*)+7sI{E+%ag _*r͵Xٌ{NN֊ܖd8N0ʹ Ꮸ'֮PӺ&^ڬI~-m#MWg3\VX"N7&A4*rU<2*6#<#3iѵ9[Srb8Rf #[|R94oܽa/3C4;Z扝JM]t$dwm7~L~54c.{O֒^igk{,1 jZĞr}y\֝:mt*κy%s4kF{"MJEb$0mGRi&qj6߰Uj Fo1QsϯkR*S+kӱj BWDh;uܐp̤ 7_ZUsdg*5')4]~p}[B82:; .oun{c>.:{j߳ZԧyD٢G,Uǁ hu3E$rȿ&\AS%#HB<޿1cF\eW?(cɭeΔ;R7>63:N;+ NQݱ;W qs*iEOiQ%]e&0f'˓] PB~ʹ2;b"3z~o>MpE%u_KI'iYj9T{B0BNi0HǦ8<\} F_n/)ȉ̻@ݳJnYYx县2L!U.l ON 0IdXbd7IqXUU^cT"cKŖ9wI;sUm1Om#)mrqϿL_gƘU%u؆$2͵QzDe=J}7\~1$rGOZKmFə7ANKMr=ҶZƥJ4ԥ}4g^\#Kb)fi0N}kHSv7N4Z]||-XBAN8U}rZQi~ʽ:TM%|p N=UQIE\e83u)||Cyx_kkqte;B[l=CvY#N$e$K\y/#U Xc!Ҽt\u'iTJJb\(I**WD#h?(V!a q+Nj)-PJ[7mL<^`޻~e?_N:|拄*i $XIP'cEev g1ަ-F*ٜ5%(sdj+-|?wϸ35'hYhl%|e63ҫ=Q")Fdf0UHC'͏y(g=\ݖd u{Xn=2r:zT/z\J"H( -~pZ۞IY˖wqn# L08Y_n%э:M @G˴mtu>{9SI")'Ig2s8Lan7~]!lv_1Y ^x>>fORE٥j6d#et=_XN;jֻ+w0#>XL g$δ䏳w (]j$ۻ2ϵ܎YSI_̨Ӎ(ަۼaUU[&#>?)Bk~ujEOOAȐܽ|W3`8ajI eN7(o,Hcڬ>e-/mBNexY>Kq"AB{QYfrR"Qnݼ-DXF?!OڳyN^j~vCwAr|oAbN4g-}M[xL'qϨ¤^-H[ߧQ4p0_,-qlTҏ:O#w}\ۋ at47+NTyp۲@l} j)C]A3or֋3<`\:zՍY{HlUEwk"|,M’.>Sgѵ{PH+dz.9&7u=뚤i%cz|w$9Lq=GzΉSN]u'Ν>b~`Kcjֳ݉Gs+RM1B3^;W<*mݫN6C dNCur7g?Ӕm{!l33nt{zv8ҝNjWa\[۪ڬnKgUWcRP؇54/?/#:wډOucIr/51c=2dCBy\rY8|gi zZ[vcwռܜvc)%tz)}ŵ`f=~QjI"#j]^E2ys/i9SSbINJtZIdGʽ6sڳxqޤ)lK((sI-s}+mtЗ(Zd);Ct_̘p8YM:PЪ^hu*W$2xҹSm9}iނ+|b} d5QZ~b9*8ӥӯG`k–A v cz]uӧ$8-ERE6"XnܾJ1ثn'R5${/]QJsVA#Lvz`{WBlݎx j>VHfgH$d jh[#Y~~IVx5%nЍ:tRͥ$z_':Or?,16 M)(ּp#f3O˻=:8Ej$hua}z#IT]K'"i*WvǦqG^J$n"%fP^sȩ^*2}^{8ϖif\ggSԻ4TcN)CE&%>6і<KnLi{GMz%͍ə ?ӽOZdӯS彯ܐyi1\׷+u'ME/h?2F=>3P~z~?ycTMRPt&֝\/-~Gܫǹqy稢I);|GQb So IMѨ wǨIJ -(NN}Zyv@$ Ʋ-AJUe]z ]1~Oң:̽&966yALq׏NKBhJ*-^D.\2VNqbpjf2Rv#WYUAݾB\/ʥI*TB.rBr8'Yԩ˾T+L=a2yh\ӭTkO+MAxY7; r:'v8qQ{wͷCv+1\#F7M=XJSAU, /6#JW&2W-innmy^)]>F=7;C$qA}`GS?ƞΔaδnV#;s>X6pѧnˠ˯:51Y*{tW hѝg*uJKMz6,m̭g?ҸZTjI5{ѭB>⊷g?y7)WgoSGZj5+^j-AIoVqF8 MWZ1Sѥ[19QN3׵gNJ*|DU4rJL1Ɗ#v@*8j8Dʧא\LfcNH ǹ>sN{u+F({z%ȟ7Rث-U.㷦@¶mg?iOމd(řU1uִ񜒾cjעL2r~bsMT:%NTڳ߶"*7mR[!;Ns]ewcR>nՒC s,e## 1˵i.I](֧+rYk q&9cH*Sccde.WV41Q EVq}k+aᡝL?7QٽUӪ5ě`TsN4v'W'ȝbRUovp`{=#Ýt(γ\DͻU{Wz8Ӵ:zqoTXi𠜶N(}5cq=lj;6rX$g$A 4m̷"*VrDE*[O0tSQv1%DH O-xߵ[2:򵡤*SwБ ,I 9Sr=CWDiQ`Y$i7f@ϻ$p8mJv+8FVkpqyDpF8Ny✔+zppkZj<,OI&|Ȧ4 啦V8挮;ƴiەiŖS8 6dgƴ#-[65ѲGo-@]VBpGNF7~"qe q177-n/W6?uF4wON!)VߐSg،x+hF]iR5-DPlyW"7*|юsӑZT(֩%in^+9P#a#nMU,32\ݺ~UKѶ䐥=G<[K|?328tFnrN_Wө{5 q$,) 3ۯtE=R*N7iXq[3 wd1?S4gR4y^ܣMjUR#&wtQݭE.;/\~ A:Dz nӴgdʎGfmŷ0 WM +6ڝJ%-An~Yo1~5]Z5#RR؟H (pe?)=~HM^~TEF]ͻ',rxNyTRWRdS"\\{p ǹiZOrMS}&73(?ҝԔ-;)Kظ[ 3 %1 팁=:.X8Fbb,&Q>u)THβ7/6;,A/'H'<":ҩ*mj!ʆh~̰pGc~hi$њSFRt \΍0 B$ g99֊Z~VeEqڶc5k :Eg"a'ӞkNZz4pG]I k4a;Jsx=R=[{} M4T?)qo#d_s+huB)4%I'ʧ8|ҨCJ)ާBP"˖u{{:~Bĸlv#ae1ÖUP9QSnF^ƣob1E?*OK83ڦͥ\FsxJ-N%Ӿba<_?C9x-<1f2c9rr{s3ںIV6%yK"u;"9r?6dRn|RFk,HGy-J3Tkw;0q掻[.m[")ȋ'{h𛈥3s#ƍTvdwQN>^Wwױful1;T#G:z4O'׊R^סJqVm>K`f*qF+V|CUW4wƪdL۳6YZfhInձn68 ӌXԥ'}QOu$Ci2kc;Npq?^ԧ)FWG-[JjZ%)wGV=>^IGt(ʭ8nEiyYv؟}1=8U9]jNJV[)ۙg t=(GV:r YD8J]̯dӚ>i]NhZkېT+d;{TrI}b⚱bG]ǴoI+/EzgsWR5#z5z:hϷ,=3qEHW%NkmV=Zt6ZKvhU\r?>>2}.8z*idkc39lk9R`Eriw4ʟ)zO/2^WGE"6gdq巖9\Ǩ^d/iοiJɝiݸPmF+Ԩ;FTb%1Yc\PF> jrPrk]=CxkkЧi'[U9Ёڴ~CZJ4mgo4$2yrL8ڧ>Z)T4zjeY{OCd-ebjT{jܲ_ ʛܾJo1R7aQϗq9]~ʧuQY5Y|3r3G=זK sbؐ@bt {\aΪ4k8_tm9i8?9+*t#~L5au>/6ҳ12oU~FV#fu_ySESR+w:lփn;U>=>sA$Nbr\s/x1 8S\I_gmzz1k 
xh2d/nϋIO#QJmlB{E(Ath{8駙3gI4$66Knmsӭe~{Lghj5B*R'`Gs].JاztoY;qRwN02qڦ"uo[4g%溑]vkkocN[bxxhG_ʜ)--ׯj_.",d#r8N)T>^YwاEM,ŵz?wt*Ɵ,Tˣ4qQcM$?3BBkuc*]2=9֖CZ@%VcY_Z'Ztd"ٕ<4FUR[${z&IVUKI;B3}0OCV5%#wdzu*SGӦEܭ7NFzcF3䝟Z5kVVi܌iv壴c;yg)^o_ Gا{=`LȢQ^m۩S MUWI4+Zn:F~S9##N89/wˣvNߪ؆ݶhoFI{VirU)өi&_BWV-+n:/9R2km/׭CO+ow0aݻ0T7Aʥ<Vba<#8.hɩJIܰ_N0ot ?-ؿ/˂7?KmӬPmM&cHeïYt8py"k q1sɭJ:եN%)3 D1 {qZ*\IԊ I!IXܮr~).d݊u62;eMe9]Ql)ԩ*6,A"E^w89U;=wKp.B-]x,·c#NSٌgIjmU+kl^{F{ F.Ty4N K7E ?6I3GW;9c]NZ{y&3ڲ@ۃi(v{HS$Gnmg)q槑>:*UPԂU Š{ZSSK^I!s#,*"`-(yR\'+Nr>\たz.X]NW葮L{d>8xy]uS(Y)aevcܜ⦜y9OEEB*$9=h4bԶGq*Ѻl|ۗv9}Qc/z1*ݵGvzxұR] b!I#*>i\7QPhJZ5xBCF«Y֠.k-w_CĨmc[[$ānWane2ݓS]6xÖN_ncx!gU(HW3)AŽמ}+ɩ,6~O}tpz?':=+ǐ۬"2eڙ`8@*YIVq2ڕ/M]to[md%! 2'#S RZ_yO.C[K~ѽۑ\wb𵣭Gv<\ח;"RUU Nq0ܱʦq$񌓤ed_,)ˎ?,+7349j`sEZ|#2U$۷ =~Ur_q߮&!A!G&^]VJM;?"献͝ &Dd#HF- QWl|ˋ٥FPD =I,,^^Z mㅒ&yʖ`Ȅ1Sg];m^ipn|@g(=N:U J[+EH ˲o-[1=xQՏ*Mo~}J ?}cO=k˩ JGOiN.N\^޸qҲ㜕Ieԩo#>qk3NX]X].t/Xw8SSJ߾ zxQ ɏR6+2[Iϭy:4캜ңRWf93ݏ3j+n\C~W{k.)?^Nݷ̸PK$)0KF˸\eE%2pFF!X/R6䚉SNk +c\3+##DӌVչ+/ĂV %;Pnܟ:TV7s<=Jܡ5*g[=>1:z%Tj+8;F}si+5oc>7£]3ܯ 6CA#XΌaOiV>I|>XV5'qXQ.u#GEIXn8Z/_4A\I1VzqeRKV jz _oܪ11qQ*u)GT(ʛ"g[Yn޼c=sX)FN718ԭk4I3(V1&oO\V_q0F29a,׿Q]m/8Ƣ QH1 68X9S֟DN^[.3#V=9='$doaRU'+d4hZ?5VAjW`µ"B8?66 tHvgvjJݮZk)7MJLmPѤxÅ߰ZQE+#WȫhYe.pyCqYJk^ޝ41:_O,2I5:Գ)&<Ʊiz:|Ny֜"ӿT:%@DwQ׎49hJ.-]1٭Y+矺09=:ۗwSRkS%52,{KnflS[TW.ɧN=Z9X#n3!nMG#inj޶q"U6LOJ3(} mF9NU&|m%o/ #9u<^){l;lD#КN=?_ `k,`c'$0}qu\d֚~OR!.Mu+X<[9략mϹٯdHڻ>~:kJ6wTF v/?׊t\y [՘l۹@+Ӑ8aSJц"8/Gő3$}zVS>uqQS)zhG#2j ۽;ڱJit{KNILhIxm~bg̞ʌhŻߵ{3$,i gק>V慚l<0u Tۀ4UVVy.J1gw], EFge]εDhiRgo6\KN'O=էNZjKH֒=$jW|V+fT*Ȋ#_3lc\)I\r7#pUI"x~RN1v 4ew6b4 r7`GBiԗ+~/R9Dk)#X.0}y?52TcSj=BkY&NF+ wx?s*ZwS~h f&GܹycluVr1AstcXEGl8?:T(4gNF{.F/'yN[nO^8R^KYKs_4Q3}*e/mzrSVtHvGo&#_+eX UhSnvG5:t+{h̰ffܷ&MB)R6dۤUes2q*ε4mj%l֚ -%;2XmQZʟ+XÙԜ^[aDKOwT}??S qu5~V-G!c$ҭ|DШFK;缰þӚSN)=JX]n f7XUF2G˞WZi urNUuL"і]9k:uVJ(ޒ],^Hd˹8<Tm}Wsє8m>DiHvPE9'h8:97}:2PM˨"2_!T9~Əhw#Kz2I~8 ^qJe*6Vk:y--lG#KrND$F8ng+}+抲7U(RW㥛--±o8<Zp-7.sp/?2-MN6wvXU(ԫ̥ۋhch ny8$)N+G)JI_jticn(0x98=Z4e(= 7/r.]'n+8u"F3*C%A?~Yck}^1M[n<*eu>[`me\<egCsK[M4`)Gq=~æ8IYҳRmƦ^EIVmUpKsǿʪc|wsq]II"XUllw?Jҕz7ӻc;a1m쑕~nr `vkԩChJ\h|εk`2Q)N)QpWr) gYL5N1w%G*Kً-{Uo/UOGGs sI{Z/nFq854yjF#Rm|Q݈RV+%T6qeF--|V%&6HyrQOelrae(C^BeLg~& gW374ۤ4s۷pU>?u]NXҔ˿6 tjѾTnmoC?J~δyt8”U=ݷ$B1{D%f1}-Gm0m'YFo4B)F45)w1zcIK*]b5 /itY߃[ 4>b0翷QRU4Iʕ?g0ǀz`zUJT.%I{m$#o1 ֘xk'k(Œzn }~TY1x'qIjTqQ~Z-o22.db7?eucsϣ#I"T9cF1B7 s}~4G*jRJqvrHE%L$޸qּF+7$d Ev$ Wszp{u==ӲEF?䒰C F2LV҄ter9s-WP2LD ^5;M[FI/aO3ff0G$\C[F,da++>xX'Yxw]Vqp6NkTp/AO0A$ylr?{ r1]ThTJxʑ7F'5+py{L(:[Ƞ |9Y^QZKԒ9XCݻcy UPZ^4TБ[LBnbȍA׵vP旺Ck]5n%?>cڬҺ#*rGU8Jt5bHqryZrt:)ѩ}~iO ~K&mX>ny>RsJ%[X1]R@U:4ؤ>XSY~zUN2G\=-ڐ }0F:$3 4`Ji+ˡp P* ?jN\eZQ]F:7Տh٦W61yq2'h'=jڝQ%v %\Yڪ4g*{J2VF7s Ecgy(+4(^k,k8]{sR*)imB"Xd;?>(5;$SһE{eU*n1Fr}ɫM'rʙ~btd5e8)֍7.X0KhGsA=^T7•ޏ&&-v>Zg¦1Mzﮃ?_0l۷ cBZxze-;=ž9sMW%IK{y% UԊ,sWfB9kx,G9u{.(-G8iLqWoQEJhޞ_2eVU%yGndn>W8A}k[Dy9)Iv%q's7 Tcv5}bȭ^TF7K%% t$?U^ ׁˆOEG*m=H[IM=*d3Y~']> .|`H́[^F1oFUM9ykOs0H|玿J>rꉖMU]wJC CG'iG Zm=.1i9RظR4lqaJQo-ClF+خh}p7|@Ǹբ|a.z֕Um +8泞XU1n9</Cc){im;klQxaܫ$)BT-KO^f=ablֵ#Q tQrJp[I. 
.n3z*{I-ܸZKw|IVMz8l:Qӫz%fګ,|H08#M9)F%fNjiG$fe $8cM[%XibwZ#P6, f-UR秅ZJor廣Ȗ+IYl[Τei;-śԹFDۺqE8'ZFoK"kKcY-|vYU<9_J=Tj&cG4<1& n+98JeqRIEԙ'ߛC*wB͆w9YMnjkr;ȝebP{>}j BN2RU/-չo9r(F}brJ&hiۮ-J6WqYRNtqjo" $Qm秨ugg+5?j &$$``^sډTZjmrͩ@OʱڽךZg*~ҫ%h%悊[,xϷv5(ys,?Amӌnk5R(]j51J$mc&7 ӹnonU(}d7^tm6xuQU=y]b*$,"fqZ2qҥ[8;Hmgg?n;I8!$xb|B\Qed95 OW 9XGQ9Q:EFI5ބ^$rq_ވڳE(֋bі96̧ L+#Q5hzHal槛%N|ϩPG,N~֨P[|6v?gyhgB/6??c){E)Tgk[s6u|>GxۓgVSʧ(ӒKnU|yI?1pi{{z\T WʎWU[BN{c{;C8ԄS%[#/̻{wUiF:NR- CuvۈdfXlGO~=FNЊVYJy/zrN[XU#5.X1<b #+ۆ`1uA]=ɬ(˟Ț<+B1Qu#5*Q U+SzJmLԸ{;̞^̤Ͼ9UXFndM]\7YvN:$;{2R[Fq"&wqsNc:1읝֖~e)E1'T x#qcNki-`򤴍PW1W#9IztiR_^KY$ުiEkr֬[.eEG),Ó<#RKgFz]IYQwWVkX^JjߧY*yұQpʸ=Q̰}*xi{w憴l{Xsn/udl, HVE<@?+f;;{[{^UO@+oS7SmC}쬬7fEq=Z\vfl,4K ' yl@=j$DMe 'Hee4{VVS_ o&`u׵L*sigŹ=^7øs~@^>Os|+X­^zܱܶQ6gGagmɍ 6׏һw%)Iڤ^;汭z+D9QN7,'XIxB>ZwD}$4UZ۰8CxhǙ3ߠǦ+lEOyBkEU)\M[ٮrJ_0+pjieN*.qzwb(vUB!}+GFtbR#n#c?T(IJ~ε]^l2J72?tԕ:kUhvw˒nB0'8_SX{JqmE^Wdi$m*Sֵa{I> 6g4'NYpJq9{hjpt%S/v pNzJU%MSjta)Jz]Z$YD7W.caʃ=Ң<|XKޝZrM>?"Cn/ sbp@9 N(Ѝ_rN,4Q\%£HKHX} H$zc[ԷBVj]?h\1^z_Jܲ(ӟ7-[I,fIhcv+VNzIF1w?,,X?*8z~cМY>u{Z.DUI~X5Hysmc9-)'k޼(n {|ت:yGQ]oׂx؜eKUH_q ?NiEUjk Ox_z]pSZВk&L8^yxuq^ouYʛ ~i,j5sykd6@} TW_3Z +[?n9Ӟdge)t]-E0FУ*n,e9Rv_2h3&O,rIԋT!CMOކU#<ˎ:bhkeUX&U1g9%-S5*\%UƲG"w {c+{Jtطb/4 y6$Cۑ?gV/E};U]WRV`!Ӥd$f_8$g#Ժ~&IF"? ,JC<Ǟp+)B0MDiq(y?c{ u$a^GxfHwMU;x{:֏hZ4A I<1x28A\^fp斍h!S,6~O3XI=$Ί8iUu$i".I q0C #w𬛟3KZ~Mݻ=GٞId7=8?5i)je,E8E-tWchEl+z\R1H.\Z'd,%k{8+&:'7&t`fFqAª1)'{\)bsl!vVݙ~]\a-3(~ZQ"eބwnSa0^N=+ӔتFM3ѳ'aOT1oFXt&a$`zuEZu]˙dUTz~=6Vs2唷9NrU(|.Ҫ;rH΢w'X;%bcQ__ýsG)%`df)2ٕf3=GRTzFž) ve=lٕ>jpmi}>=/=[#paǷ7.)FHKaoX*}t˹py`޹jTFռ'8 -.qr-ЅB\ާ$r_[y:o*M輆=ϐ(HY2p=:X|IVpKt~e1519_b3FpN+擺W0Gߛ{c4,IGˀێ~*nSn˧(Rn. ]:]I%ªy#j1p_JWI=#,El PgNw6̛m2N{ӝjғTy|#Nybᳵ.9W` ?Tkkz޽>Ap(#]%Tt$gך(#RPuA< ϹbRpIq|tcFDVKŶGk8GVfr kU[P8uU馽.oWu{a֦..۶ΩʛR[hnύbhpg?6{~jOK^Erac8nb2n*Iiэlz茹yc}ݎ,Te,T^khTf!J3u>]*2 ,ЊrĜMKʦ?,5۞mF^5dv)Qtkn#劕,[#xԬʋz<%8w1M?\`yqP[3roN٬p#>\x8P?^=lև7(v7#{ܝ:Q;uѝʢtpyro }XyOC܃k9TOv䶋N;{9{ne)45(;=14sp|qzVԛӪ;:۶Ī3m78?-tB2.d/gNѵ$BcY.,L܎9JsZ]iC騫k};ySVb' d֡d#f+NՕó|SWq侢U)heww͕e`zJU[^Vmw\G"ç=oT->7kr\KŤhc6_XoncʕzJOqm%ԒIHv/:c<_Y哦o;\DbWl'gki/(,Sڭ n23g?w'%G FTj_]_0XZH8\:Rӣ$_b1YEsj_2Ceqi}ў}s9Wk3hVbQ݆Ḡo`F:ǵ\h֞9}I$v+#~j]+}嶆6TedZFpOJQR}J-X`I%r=Qm9&t݈[ &yHvvcwhMzѣE r+vOzJWNYQGDem 48iӫ)/$HnUUm X:3*r]Nű,LKu8VR+Ƌ6Qgי2bYv.9Ub QNaS!{^j^62:|QnT_)J艮Kh "`?Ʊe[QW̋[M l> 4(&Q'ѱO(o' _,O-!'`rϵc]DqǙ4I,JA3\ qzoԢG$qۯ[f^Fs뛛i$!dqѫgqfyrʪKtY!kkSs0IW)JN U۳9(U#g[>Bve`c˜_pѣwgOJi)iB7a#w~{*䋿SZukiM;-m]9H݌ g=kJz]]DVi{wy;4H#s}^Ҝy徶kb9+}բ+qs3:$er~VjNصFJRu!Q82$>c Wpqߌjƺ޽o;po ^u.w$WD_mܹ*sAUOtO7߷QGn?08;~~.W(JWm&:8cg~Sl?*Z4Sa{$fol8{W9Snۛ~JNbis2=T~ҵSr_{8mѸXK!!f_~ ަbQm2[m&$c=z{VҔfMqN&2I"ƻzdU>]ۢ#)Mz<ۜ7{WBn5rkhM$08 GQUyUC\W?|$gn~\8qY“m'RHcB+&vlǮ?*yc(Z= 8T5qjdX"wS.J:ҦocT^.>AHD[+fZN~rӠGrEo#4FVݖ98ʫNi}9k9FnM,DWWeTTkeU*Hg n13ZĎ3h#4iz#^Z#&`x6cE0 yZ8gNIy2$W1%^dg$b on[2ıiY#n#w s2u3Ԅ/>n.]H#عqϿSP(ؙTu#e+LÏ1H^Tbyo(=v$xHxEmw0K};U#S&6 ,nM"1 =3Rm_dy\ѭe^cʋkm|ݲ}IaRRF4li-Kn[4fbP P0=ci7**F׫i`(=C"m1A?7yޭ(΂H$"U\c=;(*u"s$6a挠$P6ngZ}S/oWm% p2_9NDjJ>cRFMa6essGͣi91[m|+ y#f`xϠU9Z1J:: &Sv>2qyc?vocYF]-~AћV23`]? 
fwK{ʻӐǃTNj|fmNIR|Շ̈ಳ,$8j0_zE #ꈑ#fb1X+ؚ8ovmWuĒ,FUcQAQR-߯rEhѷ,CF>C?GQ҈lM8nL$0dere#STژocEz+\-eUzqNU$}̥(KTO)ŠpA>՜[wLjI_ >i9//vN:u&uTVٷvߙ:ʤjGݐQǶN_5ϓ,D,KmUQ=XG-:TMTY!EʉaY}>ލ>u}\\yK=!wsNpaJn N5P{(Fqq5dጝOv*nIi$P&TmGO=J74 ^ϫG -s!8'?=Xd▽JU_-RI;ոvC@+:*GK۹hԕ-tJ.o6F\pW~qRD)K_2}:vD$o@8z)H¢Ra2n#*{c4#MK)Fy;ZE fiT% =m)C+Isn]9JTir32t(B$#7%ɻlrrr#~pj2|{ʕ שfmg6nzPS^di&ס7Go5דҌU\VWS}Ĵ'I]M0^c qQXԭd ' 5~nYUHܵUWGoq]Xz҃V˪`f'uڬ7A!ԍ8 '[vrxɯFF2RUonJ+4+*|rAF89$qխasYsV]M2Ö)%9ckW_V9T*rԵCr<ݽeQ]TkdtPcWkeőV0rr$޺&蹼B8/vϿK ˺6y<`+ju-lr.ㅼ\Bcc9]e#+ɧkSG٭w},g%n0ER7RU-kN<|[$|y!`]?i%޾fTRrdX&, cV`vZnzr9;BKXUݴ,l7t;zXJOj)Z2Km=z r| AnqZJ|rq,?H>qghZ=mJ1!bffo ons]J*TTŅ15qxVqZm3t&wei~i$Tc(b! 4t[ PL4q)6:ubRNܟE΅dnݹNO\s1,Μ=<sV=[%TgZrU, R<ZISͫl;*F6{x^craۅnЛTn/5(ӟ IJId;\ֳu9F<6;ffk.cO8'ڳIŢF_z)$H&u ͷbAstfuUJM{ߠݝӸVcv@>IGٽMnY%c9Y{H;S-#unAΠPmPJlFpp1޹e+=֤pq2>b}s2\B#$Og)S??XRS8MŸA)YiZ yUH͝>\dw5k[)Fub8kHSr1st1JZ5Sђ[>6i-',N2yIR-SE: ЊH眳K4s?>WR1¸>aVkF2,0 ;Gx+?5}YSp{'c3;6_?5Nuq48D B"uejT5(}GjҼb4#fNСT I)T߱(ʬ+4[Y+L {A\0onR_tj~ʔu9bR$ıʲޞZw8 뎾FNI\dŷi-P 5 y1y<b/m$S8ʭxIiׯb̈ EXOzӕSkGݿ2Ԓ%\meo8һ쬾֒檛Eg<^MՅ qS>gZJ, 8Ռem: ybs/51TS{IB Vɍȸ$s?ֳIGݒZ~1|ș!"`ƫX;ֻJ<Nb(Vs~V9>t||,E$}$bZ)*0c+Z~ҢW/Q7"`'[MJ #izsJIG92~YkD‟ʏ+M|pr8CD 5.13ci[N23k4)E/zZ/n|o-͞S彖ϖJ_p䷞bҫ8 "*Orcw87uM I UCG1jyd%s ػLv#||zҝ8jv{Em6#U۴`2Hvޣ4gF |˸0Ha`*yfuQ\1񵻿*嗡gMTJNc ˳0c_a񫍹#',=v i1:Z:Y3e'/Y8;cչJĜ# 0_CJ1򗲊@MܮQr+F=Fz?Lק+j%FʲoUb1ִ}{5lF(P;`''n{ºe-lփR'̴s[yU=9* gι:h[sry暌*R[xZXƪzǧ>k$RF-!+T2̕H4N~\Y8?W9?AZE~S¤ZIȅ (Y [=tRQEKQaǵ[^}9J49sHj&le TYHxvonGksZQEǠ&k.A~cb誒|gǷnѳ+&+_ "(VaNY烐?oޤѬ},LXmlQ&ۦiԥ(|%DRfvssJvS5%yZԍmk G1ϘzT":Rs_.p^4*d.9]ͳT(v4J%FHUU7GN*iku~ȸu\f[T:Vhw9?O%y Pc&rHH+=tm(kf_݌aP9 _ҳSg _Q$ I4N¾F˴!.99$B%*~N gbe޲,|OƹEmqrDyMܣhN.I_} h9K[ o4$r/!l~Zj2I1i6Uv~lECT2A{;$' sr9+>GcJi +49#fv@Fu>2o}? M[~l fv 3?Ҧ\|b')k%%۵Ԇ&n2ޑM $'>ⰩMluaiҼ,"۱zL~lbFrѤVdW]`F`عSiL×ٹFWzXGh ~5n⌽H]GV++_/??RvO[YaVVz܎nm!JӞ~ZI3 ?p]~|]Ö=yhewW$Bptҟ|}RQ^4ߟKK:o㷨汬i3Rk^o?5id\/lszV4w-FȢب ێoyR] UI+30 `Xn}?3XTތASpboT7=kLC)+lA#2وnqMgR,[]rGNZN,${AfUH-O<w8+(?Q|w̎hi2$@*TxEg~ݬ[R@rt6cܼ"3:w'g K+9cOmN%$hc۵c2.޺%ъU&K ˿X1[:3Se@_#Lm /G=8qޱJj">t[8VxG-䎝9Vn$:L7 b0]Owr}=T$^Z &?k wɸc885e>Mu|ʥ*2S}^ŇV+WxUwll2n\@#jSm;|7o7IgkoBGM?gRF߯ȭgk Lo>ñ==3Q: g=>ί:w}Gmkd[yp 3$t r4UT?ZҤb[gA$)gT#0NT#ӹT:Nj.[K˴esSRVZknySTy2[7*lُcwF"1=-w^f9Fwo]4Kiff-C.}TjE[l+SZ~c)q ŷSxޏg(ҺWƧ,jkf|%4lnWߏíj6p1["Y{!]䝄玸5#N4ܦJιyuЎK NO^^Ź_wcej1`)al\I+`XnkISh+B.}]Iujt#a|ʜ5ʕu+ ul \ržZ,V_sx{;lu$&Bya&{w99E:vkSNOm7o; |FYEaǨa[ѻ\\ө+{-̹F$ɅE˕(%cNQjVmXhH1{c>hzQ䗯+5"†nKu>*rWFNR斟ɮ3O 4a$(*Vwi:+`i§mS{(䌪ndzrG4ջvZkTU?)_!#0sq uNJQjKR.[9U'rq=+W͉VMOȚF" ڥ<j:ivkN<>FORkxYھ[69'\ W]46J2X, W /_qWe>jzC;pA-U:mDbGt1&=1[FT)w\-;XcjH?qr(TBkaiw Lq*8'8rgFtk,-㙣UěH$˟ZFjQԨ+t2RхTPr'<,ks.ЛWО;r4>jC `kEQ(F4yXeBcZ\Ѳk8RR;1[QHI©v֢uT>k_Wd>Y*>FէIku}yS%UFTc] ,=);s/rmRA #k?ghqBMt9L;UaZPaNPr[+n _Lt_Źkݎ!UX Li #HRWۨ (۾eV(W%O׽iٙЖM[0 _1Tt嗴q[133*p3sZsGښ4Kv 9ʥV'Rޚ0O9=Pn|'~k2p0ovG\+~b<طgG, DXF3㭷:)rӎy&:fymg7O?Nj$"rԩg!mYZ=L(' J?xKލ#h9rxG(1!Dlq.й ?ȩK1+_k?#2Lq/FT+- = Vrrv0Xzrwc͔|ck7_\\ҝ^oKduc0Urfh-mЯ%K'f\7^9ji%!PiV(:g;VOލQ7myP+39ovOdT^|IsDIp8%5VX#UukOR8UeܭjqtqO޽qՑ2"̶gK٣0G9&59ڊmX_Xi߹Nj#V)vۏQVUN*1gƏ3m G˃c/vwv^c}/k 6P\4*\GⴥkRԊim58Y)d1e}@2 mRvo3be8kq)Mt>GZh>n=ֳh ao|j5^Kc\E,7̪Iqǹ|"SMfS e2sy&+KR%(kni~cFf0{vW*2FJX44+˻ Ο4tFQq74$q`KrqUN$udֿydء-8[A>k=nV->cBź4p?S~:?05fuv(J1wES\jZUiv bJ{'%vǗ7 cwnһ}rR}Ik6$c5Gsjt{8B/n:쥛?ʻaGf~V܈Yr>l=몜o#ySl)Iai︱ .lmJ15'Ừ*G k4 eO:Ur+9GPr$I%XMl53*Rtd{vR=zu:u9yWsg]00kQΥYsKO"ɋFwG\ֹ9}/Y+=XʊW5eq#V^h J*< m\=p1qԧ]SP˻qםV0/ݦLW̉6x%^\ĮZQ*i:t*Qs浊!"wuz}>✥v3Jɮn@#.ߐ,N ké*q5oM٤iՄ]3.wwĐ{c8#g۱4hV'}=2M h_]v'ӵx/:IoGD]$#d,nIGSПʳQGw{k>μ%:j $VPy?jB.Y{/)SKf)F_q";q뮧VqfޞCÈ7!p~!yI~~!'FԄR05cNQ=-5ylf3 VUkg>*{;J(֗*^#k ]ך&%NjEvEज़ic=9 
wvPjrVmgle`pG##kzt~wm3s{ޝxtŗpX±*=։TjqʚsBU6mßzuCoΚsqCYD;K;zVq)K|!ӔnB-ﳼNFR62,߽b;s+pEVOݻⱕJZ)sRg9.#v>Q=ί+N_aJnTh9VT *#9QVb)F2R}xI99+SrkKQGTo$Do40#RwzW$cR3J>+C 튿z_TT.ﭺԌ mN{MK$/2gV>XOHLJJy{z`uu'5^zk܍9c6 qRUN[]zҜekAn$lpv%GJ3JTA i%3@2=8Yӕ?m)"fU^>sm[w#ghgn?M<}g36+d0fefpTs\e%>0"y{M'LxbUx=?3U%K)h!{_iMF+T)iR!wo$G%5m0e?15ċӡWS/cjcQ=#"RfXqt1Dj%N1}?!{DR?R]?ח++rr]zD/fU=kF2Ah_H_4Oc8'N8.ϖӌ]B-V<:jF"QVehLyk;.2=}Y4mB CrҺLی{Urnng%OaH~OZƬcR\.*WMqu:dz#GyWf<IE=I gvjO^3ӧlTR} Jhmz߰BA#ЯʪOa?N)KUjY>%P?(Su<=aXRFmJԱ3c*1|qϾkJQtcee5('{Z\q Ce<5%m|ۆe.AV3YuISgȸJ[v~eF[r?U݉i)Yc-);q | tQuei꿭H̉ Z5f+\`;T.k#GN"ec/˻i2w RgRђ<Fi|(E'q~5QTvz#YS]oR9-\szTa)I9-;G 4,ܑ\-SUR*'gQv$av`g;=-;苧ݛnKO"4qu8{t,D o7]Tvzͽ.ԋ*KyICƼOG=5RJ}zdatEmfL6͸lֆ(^w_"OcYl5%h9q>Y^">k% +1n/kvM1Y2S֮RR?S5>|4rDv Kq²Nv{tP#a*UOwj% 1W"e*ѳ]d$lá>)ۗ]݋m HŒ}r>}:>dI&}Zѯt Ýimw0ڬ7`r?VjV{+hN>.0 G:GH@{gӺOS:nz$1nfe3޼_i%N)tF.\>I&ʣn:ױԃrѳ:uiTQ~xTh:}_VU)Tݭb&f ̪vi8i*sqhE_r81rNw0:җ*?z"t 8<:d4J"ZƷ,rI$TH79}z~TJ3Sj~( [E.Uo#}ENr~}(gnS;##:{UTsTs'f a&tk  TIaLe(~ڷ'EEv(I.3NGdT׭S%k[v2Uwf### ڪQ樦56NhYY `kI.Hfq|ʺ*{ k漒[1*U:ljo7n0[=SN<SNۮF4y޶dn6@N^<\+M*q1' s߭TՕN%%([~!Yظudm'd.:mIm׫.+. 2GgN, ~Um*{Zu9t{5)m32>l>=QR%MUL{{WTkF4:՗ӿ>u,ܖR18;RJ3w#cȝun߹k8ϿuRCU%mn6'Hc#gӶx>|M&nbVn=k-rC8хǫBNC|W1w(q+!vb˿.9ьq+h˖Q=Qso"}>fl =<p*;H][.[-vYW#<:Vv9TzGd?KHX2?>I^-˦\i~rF+k j9?D%'(J.|w_/&RIF4p=1WW75>^fԫJRjMZ];*l3mbs I'*2X *#oI:tk5Nd/,_r…^HJ* Q7{ 4tn'-O4Uf㓏ZU*<OԦ➽ yAڀ!ܘW ۻjR=c^:a'N?udXvcNsӫ-o}t;jʏ=b%kghY1!{?jUFgN4ߕO:: r7#:UEHЩSwvVa߭rQQqU9_b̩!˓4b6U9k_+3qԽ!TMnGDV8ͻ$PAҭ.dNyFB)Y֛{c\UjMZDøbFŮ KmJQVVMls sQjLeZÆ8JgGj:Fkr˖IV8͐1HweE_R8voN w3,+azҩOY/mRS4o\Dy/fk)|:&'Իxygi"gKJP1)ƹbXp3ӥ͕Rܨgw;ev!]Ny+7)Si_ʤ)ʷ+/Ԏ2[9bGNUש*tKg ѼV==z=7i';;+c 7Q6nemn5ŰcdĞa9={gӰ'RZ+&JjUWs4flz]UeIX¥B,*nb\wC'NQ$ٵXv|#JUfi rkтx+m2S֚ݗ,'yP}{jTyJWعeq,t]@?ֺ)2v[IF+n[.MYNe~S9ǾET7~h֩VV3JImC|d+7@qhrG*߹n7 Y{)raTrDwN} Au7̕%%,%Nco;t5gU:sbs03=Z*f㟥aRQ9J4ԧ75[@aуmjxg=kʋZA&6Uln¹5x{G/M(kfVO+ᕻq5%#]Y}^V!k5Ԗ0?#Ej0z j#yv$Ԓxcr;y~f^ҹch.uvO}my/+?>a)zTR2IlU{x%$$nQv z֦5%Oeg$hՔ$DR|2)5Zy1Q%'mp ʊǧBs:U2.kd46^|ybڻ[Ji*5$ž'$.e"L(ٮZZ6uSN5f>ޛ'f]V3zOZђQ׹VS"efG_c+iƏ4Tl,N.I.2IAe& cp=ɭrg{=퐑x:rn璓w)Q*-ո8Vygh°I&$ITewTtRjVӯXUYVDq.#c?6Kq9Oh7IZǢ[ԭQ5v"Ld~5ԽǕCKqoY,1z<ЋQ۷CJu9yk~$sK4匑?~=AֵS8-{ qzyGڭRw&G~/ܘѩFMjG3mڲ۷=w.ե8Ճ_c:)Vebh {'?fc^e ' `zc5^1c=CQQiv,G-ŬgI>l$==oN73 RdimcPI#VьSmovֲ\Dî ߑZԫ.K.MӣvKDq{c:ϭefJ5ԙ[siq$+|w9ާFjU06ervZJZ}jIl,7rN:`1Z9Nf#5]<̾eqlIls'Hu8j{Oh[@آNp ssR*qeV+][UB{I"ǧ^#8ѩZ<;!RI|Տ,7S7%MD[0V8]M#'~mGKJgi}Jhe~Ȏn](D ǓJq~ҍf؊,ʲ yĖ<ӊΧ>N\En-o-q<.m瑃׎V(NNXcv]-sOod]YװZsS9o*&o ՋJ~ήWoڭV->Mf@2/֖ǣЙbjj7M?[Uc#8TEUے:n ڛ:߭ɅI;EFy̎4n]`u3ֳPj #+);ycgf +Ny-Y^݂U9&D5$MW\CDbݭN+ѥ;}Of{ia=U^ZŽW]< WrO tVºNǞR&1Z< 4UJɳNqޜjG~F…9s݉C Q(Ix_֦Z]jXۗiUVҰ.i]u*}NONĂ edoG9+Ӕ_3 xІϷ[m6lN'Һ*GlٜcFu b:}?ƳR[S^ULɶ\ݠ2?±$2,o~9峎CXN(Gw8qU9dT5ӻX mV?ZEJRZ{|,kņ?9k FlN]z 捺7\TNrvOc¦ <[mq>W\$rƜjTY^I?=&6ǃGF1㯑4TqBgr΍7fH8"4txYJQRw}lm'l+$:TSt9Gq^^?e0C@/# qYr=-b#Z>MSwK~Dk?!I1Hc|Wxg9޲VFZ;ӷQ O1w/L8tb',=5 |}c-+mP[wsLGjIjva%MVqY1W`i>MU6]͠aN2{hSw鯗2cɓzy)J ޽S'$l@rM',yA:~ǕJ#_WwؙXndz~^ڤwF?4mon#ހr"e4hyjf(\0'=?,Z\Ңޅ{j?1'xl 4'G'n9;tGJNs]WK, KW{H~f9&}oBm@ȱȪm˪@#/Q|W]/秩횔d5]N=X5]ܟa޳(մ|mS (TS_w=ۤ"F!~%>&hW_,j0ɍWXnw6d${~Wpدwn}[ߘw+$+(#'#\8FWVg=:xͫzo")ȗ6׌0~ҩʝ6ֿ?gY.ڵܱi "q߿/ NNsFQ1ITՒNd)HIg$In.{;~CWi>4 ˟5NyF.\TBFdꍞ`{l~˹L9^Ы^XîK>{I]XT1үZR&k&L$޻iy'RRF-v2gzJN,-^Nޤ,*jX7uoҝZfK1nUV<(zzqPܹV&dʈHNQסV~w֌ [6/9}+ZTjsrvv#;to :)J>ѓ,ͺ;h:@=ryQe(RH daS>U%AJnOֽ=  V2#t'8)>cZ%kG{s;J]X2G+t.:̎gpou$(,X3#5XgQK)|?7|2eye^7H_zy+Qcu+\ǙS)u# &3|ѫ@gC빝i*AM;p?YRS3Ng-\aazw?*{\'+7.GV$)mp xǿ 1 03sX- R>VzM$PPyjѩc>18=R+6;~J)Ǚo弦"nqc6ev?iy C#p3?JQ]Z\[+'Э w++:˖zEiLD/4P2 <^` ~vJ)(dg8Q_8<:3pDQ{tNQWG?\viثpî8JUyi?U>grl^Y2vmO}?Zބo&8IWNnsk/dqiq=TT^JUwy JXv0Qw/:{/Y*>'sڪOy.wZ-w~hĈ9ןa*2cIP?v6+nz[ .(e ߬o1J<`}Nk->+R7kgJ(Z{_f 
rFMvԜy6pQkMikD-H+(˖,tٲ8ѓ#+Cc9c|dw`p Uڄ4Vi7{ O![HVF.͸@>ךڍХ_ˈKwONI綹6d[nݺdg5'Nͽe/Sϸ}m>>m ڪȋ'$nxSJG]>eJP ̛ȱ7Gpje*nSӌRmiC"|,l0̹|g8vƴW.rƥwMKCksY!"cc8v8RRktd Q[ 36;x9$u+_N8KQhU~U@rOQTz2ULDˑW.89uGJ@y8_!cW\,iS)9K-_3JRߢ1$vcۤ#/x*cM(N,*LGֶy$D%ujI}Yڪ/1H+irJ)hvQ4ck$"g2y[ː'(Vƾ҇o˧42Ĺm,J1fƼ*z+uԖ#;)@7(m0:>خSꟐFJ5ܞݭ~k.ƍF9?JYgq,! ZYrD\=%F 1Ү.+Fu˹-hcQ<0'ϷjnSG6\\+F[!lO9둎?`5^O2A~K3r:>]ni(k`H 9]*՘TNVa)ʓ(=zOoZ)ӊzS[! ;gnVR,9Qo"2WVS:c?lEΝ*qm>@B`:+t퓎(卑>Q؀&nyPǯ$~T4ע:*J2F0HNNsU(˙>qǝ PynkHrKI(Y qד5|hRn{A,jreP2WnO~iF4j$syI{{ ծX֜}6KjG~ơU[z`wN*Q-ʆykGOWBtWE$$B2鷏5 npLeJO^3Ip'=y__Z-N2?? fPP{ViGS7G#21{{S72D-DqKj*XT@=]R7$H/#9snxiF(895M;z-1n1Zrƴ:8Ǘm$Y??0͌cZwh4oBI22IUNةSp!O)4HW*t5aמ拚V]v,:nb>$}kZ\!$(s-g՚&`ۈgrZ6iI!Eެ?ܫש5=){II-F6J::sޕ=M܊ hQv7n*zgڷ(J!ǖP_1ב( 2=9'OJ<ܯtZ>.╁56(c.W9z }+oqsJm3y1lޔc%4rRh2}ݲ9~Z4֜&yj2ߟV%Sݿb11sц;F* 8'R6M5;n)Ha# ͌`*yefevX$UK=D|?sURtʜc7َxcEYeڿ+*~ݪa9*U̵hmݴOjKm򟽗ӞWͥI7@w>[%V F8<Cދh2J ̅.`g^jg} j{g( Xd_2xͧif#>ʉE*R[u%7?xڨpxAt9c'UK%bA I1~0,6o'6$=Jv݁212}*/y#|zkbYKrw#[@/vA}FYSuRɽ9JZ\O7TgsnHV)JYF۸B 摾ķ17/v{[+?ut,:8ϩYQZ*iF+o剚;u?*gͳ?Q\J2N2敾>G%a=7;sGJHK0^O6?pp G^e[:!%nV YrʪUdoY%m2JQJQbyd#P UGc?犏9jQr9=k<^JQ֌$kSGK2N=Hʭ,Va \ɹxuS9'-63iˉHs\vctyI*ޱK4g)ZɈ0G"$eYCP%{J\I蟠) #e~e;+*|ϡNTzm^w*8>JW^Ҝcv}u&Ty0ovc\G<[Yn s#+ryz0:)jXjjl/8K9eOIv+[Gf^YsJ+BR,u$۬sck*ONֹnr,Y;_fEf?19ӿҫVo6+J}$f $?xgs򕠌Ƥb" Ys>R\nRg+GΥesҌ&C !aʕ;N6`ǥr掫VʞQЍlpy|8jǙ( B:OO,-ǚdo^#Vw~юVq:&DLᔬxv\3sJ[[l[4% 3NlW0Ǹtlb{ߡ 1P߉FjR쬥k`z|wj'+_sq#Ά Yan/ܵK9{]uQoq_= `:uC_Aӵv.2w*@޼iyZzeR4GU͜X:qR}qcL)Ty$sxW3ֲ﷑ŌH]ͫ{YJͤb*,˴eV'83\ROhu:Dhʜ\j+]u+p <`v㯽wJ.%49k7mnM$,#=xⱖ>Zio7!cL8_0bN?0Yk5)Q+iG~ bXo݂it}IN6g)l \qyz֋VSJx)=.<4,NU_/o' X*GR9X:*T撾RԧF'w-E, ,%d '=qֶ(NE\LgIۣMڙl|D;F$&}NOTB2NvVX?y8M4E=n1'woZT|f^4=3 FyL9UxO]tFV>*m=B{RiE6(#1Ԋ)Ulֶọ94/Ic ʢgPLc6OHG/ciT"W}}V+H-~\n=ޞM*Wo)Am k$nb,an9E zۡ u#Ý;maΖ˹38OS:Q_6gq[lW舡_]#fod sglVw:ӂЛOĬ4+*:cU%Z6שҋwzhY[hfBqZNg?JiJjU:+;'R)n˲#6n ԁ ʹcjxu}>~G>/{-FO.ym P7䟥tʕ:\*c)My0LP8һ# HCXB$$cj2mJQq5cz.[ΊcmbhOd}z6V(Ԕd֝ǞD#Ohw#rL,GebV=1ߌVeNFﰨA-$c'jV\u]_%bAp[pO"%&E+C2f?7~$jy=iC5T|\8$ڷR$BOzx?S)FXxSMnVFS3X~_o+h*l`g͒FeU wq8sU6K RӰG,cxVR9.T~1}}]'yC3._z_wIFC/Y2١Tk݅9?'sIH cɨ^4'cwmVl r/YO# ʵؤ #SuNWFiֽDY&GѩU$;n=zJu1eNw`c$2e}b` a-哖z5$&M@GL$oJTGd!D6U\z{d岉*Esw!TjPvl`u9U(ԽXiJ9b9+mn\p1NޒT߳i߷@uiR|nXJ cͤ&9Pwqlo̹$m>8rΜeS'T4%%&]de|nג>}cͩR>#ˆq\Or&e9Fk؍Z)#xn31N{~U/hF-="nZV(1ѕ{\Z۹PıasssӮ+ͯͭing_J ki-QV73+̘ %xڜ24آV#Y7(LKޒo.EZH$t]D~[Y2:H HeFc,x>R\*:zeW3*@+sձθ*J1z};z~%.Ӽl`;~ֹe.XeFyT]HA5'ܽ~1d*q\lScbɍ^qꇈ*ҦWRńo`-WJNF\gk>p:QO:ɹQJlU7sOzƥY{K׺Xu.9#~?Xo/FqeCv;u8֔)K{ySjr'cS㉷+>y#@U59HSil1 5۲݂c=c/{3M_!"MMm: %dTes\Mr["-= =HԌiR\g{fhM+Se qӥWŨNSc^mo$&V VD?9$ V}fڈsg^Mh.攧1/(v4r323 z]x{N32*KPõw/Һ|3z/WO{_[*$G?5FSW JI4[T9z:;w:NNw2[+Iow==W-a2T%f&*U"KN $F8sּʘ9mݜur'rylS u^WD(ݽ & g:q' ђZ0Myu7,mH($|4(Ӕ9_/Z[6+i.gS֎l|awλcIr3r_E/Zj{v}zvJ4"J"ew .B4)N,O}kzqʜf sUٗX h(AQQc;W9EmvFO"M=fiYBq8涇/.Xz{ [b6y1]BnRkqSK^+?mkttƌc%b)Lks񏻩-Uݲ ttS{ 䏅òOO#٤&hm eݷi>E>+:E=H+r?{h>vB]WDhsn\WJ\a Ym;nk+l8}9̞kOcsmGSImA6\⢥C(n~ϩ#5)ԎF&=ɯ6T3*TבVydVi>Y Fl0$ X8əm >wĹ8>=.e  ɐw&) g{skt}Ksh֨dS(Rê^ExXp/FV"EE`.d~#\W^6i֋V[ zיSٹ9U}5qacZ%m:ȳ 2_>m ԌE=T1̳yאtm8.]Լ4L#*n^B]d]y9 ަJЧyEIP:mˌFsQQFc{;#z[W NW`t81 p|z2iSݭ1]U݌ᾇ'>+m.Hx)z,U}R_iRm^m MkUR"Fۀeg>ߡEI+,DmV⏭BUNJiQǙ'2)$@˹WtC8#k*vw0;EB:](weSY:V"#FUDs# )<7g(ݿMQ%1?20mܝ޾NTB8%AScY@ܣ$zcTzK{ulrSi54ұi6dIH[]*ҼXG+T/<}*hX{WwӲ E8ч,dԅd<*évsRmO]EIh|fV'y-:5x'Di9uQ+2Tehptn9֑Rr^]YQ܎Ehn92q։T玊׾}N*aִcRrҥ^y.[hCpZHbz9Su35>jumm-]H_kgrOJ>ͻu7cx}\G4BU 5c3_ǑW2HTD4kK0ՙX(ꄑ~H˲ /Cy:=E%i)*pI*iT WUY{JLcIӪḣuߗ$P\sUaSIzU‘)0ij77Ȳn!TОxa߮ކZS42ѩ7 >Pʌڢ2u \:+3LNd.)]E;ͽ߅#dr)^2{rvfO#aJymw"jsԒleY*l,cF@}:b)Ɨ4_,-LT$b-{hs0IwUskn#RXv9فi2`gDt*Gj#(Qgvv^WU.^YӚqktˍŀ?(9*3$=)r1ȕF&؊o,skNXTvԚ_&)N8+'?vR掞]F^.2 
ܳ<#G(OKX;I7lEӦHW5ku1RJn?}N]ϧ?δԧÝRԗ=#jl,X=qZƏKmJWeI}Q+'e.Xr]{wQ(R>oX99~t'**zH!F+u:hQDAiTXʯUQPUӦ{ S0W;wOs-Grd_|qk' ]y==zVRc8hCXcq=GWRq/岙7cONu*F0K;%⤒}s<%ݰ_ZڜR-RUҧtR,C%gzp9ǭ]:2w+Ɵ.gdO'*L#<һ1̬Lp+MJ Gcdz3{BWiIvB"ʨ!v?>{kZpr^ڑ*I^H^F@:zg4ԍ*m˗UW%6sQʸu"(ӗ2mnwДVŷߩ #, _Ozq)T|Τ}er;[bFhDw۷c>Is$Eѣ}{4`]ȥE̍ c:r:iAsi֗9cemmIr.dQռ\yb鋧NWo~K \*2y%[pH֌yNdgu [G,rC**"ķ^qu|s"JIկ!g6 hl| =:ОiQ-NjDZR1e2>lyl%&w}zyg w-V22p?,VDJRD#N'eH&Km VIr/_QR w&, u=+1b1N׿QI n*Ԫ4o+YhF_~V@\ޞVѧSVtJTSWO"R[JJF=jNz6Dz%ZtkF<*RIYU׳7"7 0ztkOJPfNMwLϻ\#<[T~c뎵)IJ߹5׳JbbHp*+]q3޺z[%fά&ky2G~&)/Z1J(]Y#7 ,OC[ÒbKZm>E+&!4bB7jҜ{\((ڮ[ԪoPR(,C"Ge˔[AiGkEJJjWC<ĒL܅^Wq#JԔ駡-+_2)d`wgҕG:m$Ŵž;bRNZ֔b$Ԥ|]UUVe81JrJ>yЦv++hQ2hDg\7!w;֎_rϚjhбͻw 7_Ҹb9s#̕#{xD6fI ͕*F `u%SoM9sKٱۛiuLaː}:Уk-ɭߴj>Km\gEgS jUu]<ΰ2,,ǀIקz^KX?e&pye]cn_ªUܩnr|,pbPaTtju1 4ԙN]%=.9A͜W rWh%xAgRP3 gzzR1 OA32s19$c:FQ+B63q tv2*JxZ?ѳRi7c«aw@TdˈՍ#VE }x^3SSWRZGK~D2NbU5ݥ&RESwVf0S~}*^h1Ymp*sӱ \de 涩F;[ڂ^{9b[3wșOdƬ̛O*z1ISO/X҈Ɵ2Hޜpҧy-{}O"nwwaJ5%dr ynAϵn3zzr| +<ʬZI\C8 ΦYT/ݥwDEl(ިG7tJ&\rW1"dV"zmb$qZ"JFe=wyhg,DX)Z*pC^qT#*4՟nei%X^k-+I!~j1rEE!P2HTd-q:zUJ6^;[bϚ0m`9,ޜz \R/ 4VOrl|5EHʝ*k]a\turv+̯.!Ay۱q]16r0qs=9[UCvc^4m-RN* 6Js,E]ɻ9ZPv{w4gokw wHȋxzp׵mNNUGJﯙf-۳4.ݠ1{g=>{'̝2VzQNRo+@6\f(/;jwhR1DHdLlb9ⴭ)/}R4JV{eT :=U|3T%BZ_!$IbZװJh8mJTNmУ{z mo6ƣ>8t"'&ӖE J)K6RZlv1RLNji8ԺGDrmo2 Yo>Zn^u%B^[hǯJ4U9V s]'!m"ܼsO^:VܴOډaiE-{Vsv ͸` ƹ=e=G޵~|XI*S G< )>nN*g=7sx?o'·M4,$›yxzֱn݉"eq>֩{vB|I۪ƪY0VR<4Nm5&l{Vvg ~46̰pBen9YJc+{ Yz/ 8xldt]ԡ*Ҽ 6/68eƻrxjRu37ߐwWnS'ʭ i*q4.5+BVm%KoP G,NHpuܪF߉cK^t`ReE0Z³(mN*/Ez^?ӭd]q&G y=]vsJt,/4l+N۴r9sLW/,5IZ+NlK&'J%FuR5; qܞpQOa9-4S# [|,mrc<޲Rjs9WUU5Wѭ."{qlJ888T,֥D6[Ƀ[h{cRNߧǙ;Iڷ(1];4jsKБ*Ƣs1:֍0|va XG45V+ɽµeOIS$Y#B H8Vq*Wb]{f4Q.*%y\W o,e dU+g溥RάMz%AZR{XO,LqjRr/mNMo]݌iUG 3Z4y㼶%PmE.ߐ# `Cz=kqn2Ǝ"4ZIݦNF8hƊEE8 nTeA 9:N6zhG(.x=Nx=yط[^viSBNk$yhfǑGN@jMÜ4GZ/{K$X*Lj}^OhneثʪUj~?qi{/غt,ؖB;3\%ƵKՋײ# =2Ly^GR;S'mwr֜kq&"[ueyTȹ *cԕZhkĢisi$q;=n#Ӟ*gebb3^Dgd7[9f~xF(n1/4)EkzfVE_-x ~JIhkGջ=2Gtۛ{vDycZ˛pN8?wWbhb+u/Eڪr15tR=Xzu"JiE-f6gc!a(C=MtT_C886r涶^[1&I02W3/mWӯ]3e5{顗vt0xҸeB5'NR{=Ա<*ҠF62@;tzxPLSFzGk5`_$rH'>N)ƛgm5Z*n1[  w ;?P+j5d{*ѭ^|UԗO[H4$VS*rp3]%)KJ|hӣ(k+v-M2Q ˸O@}yW}_ri֜ivYYw3H\ 0|:z1mDInC*1G'<è TK*k]DbCsQ$:e+В34kGH_]fOݧ|2" $s:K4 5Emٕvsa~CQ:04.g4f0$Pn<o\ٮ11(^z_׎K{2xIm Xz~F"|ٜI]{zRkKyd!eXgm.O^OOJ1]u(7׮$/2vwTTc+5ZNqk6KIءWh8i-4[ʔ&)4ݖ.2dsyawHmRNNytZs\ڌRDc[!dKciڷ D!:ۣo,!Te 6@>^+JхC5JvX'Sۭv߼Tqے=/Pj~bӎq[7ucҧ qϡ2Ȓ 21S˖D-5]s#9R3sa,"i`{2#A:®TN (ƴI`Nq䱍P2I;z$NtRrm9{:[~EqOv`d+.qܜ*}S=Y.${p`v}~])hb̚רA&w|f55kR)>pFuk]M*Ɣԥ;Z1< V6QU$n#'SZ1kҔT6k"<ׯ8X^Ro\u+Sq¤=SY,dRm84y$MogVTeYKo5e$᪑j/4PedMnx>ֺ%)yzN5OE'-ldw3G1<>TʕTrtI[%{ ˷mviihUr(Nq'j9aX|1\ K\qʥt?tN\BKew sV#R澽[#nKPch z*$l:(pn4nDJ2 [ }y*q4Gǟw P=Zty37^ВX昵vq>qSSuRJ*1uXInC5ejZ; *l=vL - DlcױPQt#0K4iѮQ:DOo_-5EҎY+_-WNy8{bʛ0{Q%g8b8cֺyRTێc ;J<:K%*koěIգrNszdrh̙.#K}U|ϕS1ZB\$Jd{ʪ z۷cA+t% $dU;c֊{DF7wafI1`ms trI2YMm \<*~f?6Fz>jO$;eZqxh捔> tr5utc#5v^i5P-4&&+4ʮMІqn*侅rKX=cey,32qS/rz[Wlռ#ں"4pC$2H.eHd,>nAwfvqN1[ZC_1`Xfv$+uOZ2hF\$˵Yr;xl ۵MN:]G^v͎ⶏé)Tw82sl9J $UIFIaM<ʖE_/FOS}qWY- b(.]P<]'F3uB*4sogxKv#_c1ZII㇗7,ݿT2ibFTZE9ؚ覥-eKHl4m-"3 >]u9S GWf>.15GchVhGNQݐo% 8d5N|R)Us+ۛcJj1֌b8,kn 4ofR+˨Etw,v\xrWzv etcs;36TF*GG5))+uKJd9-Gc[oekNv*2mk׻rFwG  [^29wm\ULr+gl~jx< tKƟ04S6bbUerOTd풖 KɍgܨR/Y'$w+sm#͑ʈ{~*r>2hL[\']'yzy?B;<2l]ѕP,wgڮG-%HW@zִjQJKqjFMlfH\L$1޹rɩYʢ_i H2 nUV?Nzv<:FU:F#?qLr"O>§%&ݿsn_.>wJ̪qҎk+lWI{ _̖0#Of8#I.m"2OWm֩4KHϞyoSj8TGÚz1eT4$Xhl ofmZ7OrTVm8LWܥb哔b$,>eLw#_,F9yRu"x"GA9`;UVL'Xpd Rk~;\*RwydT ;2Ǯ8+FyԕZ/!6Q;&鶹8>ުN.N7%IV9 IJ==ږ{Y:{8Ȋd2#mXHb2JQKU#NQ[ڲ66/L%vsה+b&<>_&Qͽ  [?Cpq:wc(mKV\H$f>tO1@-?7ө=z a2e6BbWHmLE9I|!X;Nzz*uj:@>;}jgNd]%{URl6?7XSVL[+9fi '8/h(܆{!vAZTdSwD$J,{ָyds׭JQ1|UvEn:gc޳VXj0NJm{3lo38V!J#/GBe%E0E"nGR\u'%QGsYG~K;/#>^|zFJF18 [mM bFJk>jzy 
g2)iU^q=jF+SU%̙nTp\H=*)srh9J3JR Xٙd](1'9>㑊nS[1T$^B\Vy!rŹ5WIR*]QhP 31Q3)݀}3fFuj(1B=x}Ee(Ԗ_JT#iv7M)#`9+QѝrZّٰ՝k{m+hfSrD"{+.b,dA=~Z4;t3劵g Fn6,ffVdU]qQ*,rQ橌qo_Eo.IݴSĪk|L(eRMtVmy1Y㍳ׁ=q\#N$wp~Q8-nċ<#߷"9I=sZ}]aGk=[9s rI$㐫NrPMs澻U'p'zu_A5'մtMaEE[Ⱥd(2ˁc'Ԍ"c,h$`p1*T-|*'3Z-,5Kx#onǵLkBq.UF8QjCe47$60| sޞ#5R)~dԭS:]#6 4n14='UK%k)JV="M2dm/׎P9*yGRJp-LOuң&˙O3qp9֭ʣ~M:T IX G%n#B(q vVv&ḙvG6L}9&?('yǵcNRjKWeҏJϖQ~mo=^9KƚT|"޼$JRW~U6hYC۴##cNoW^Ԅ*J}-:3ɵi=ӜDQw_6cR_Wr|z+6&u ,,XolzV>i6ׯkR1'[$T/\uHO쬴yX\Dc% yaFv`OpIi ɻΗwi4[|[=z"^CRN[+^Z[,wHUo9=ngJ\PgNy'-z| eoRhmCF}3ۜUs>[:4hᮗjb\\8G#kj|*IQ'TWytH7lNzJ*b#R)F6S_IGW+V9۞rOӿ\iG:J?g.#&Iǹ+qE%(J*u*Ne̝G{ۭ n!I!?αd~K'46j"@>çUMG:gNY@H|S¥5;rƅh?o&/j XG=j8^_[FOF.R%n_/_*21J2#{7L6!WA_*T3}; wrTK qo{ 7~CִvP却N$ q$6n`X60ANi[QRvmrJXo"i3 X-PnVؐ qҵR5QV_euM6J%G2$뀤=gޔik7ΉUQv"s<8cYn2'Tw=W ^r*WʶXy彸Y؏j'tJ]mV#;;$*5S=r z 4T*&j&VmfLorO'9aYɸ:[FQ{%t\\2Mo':ޜ)>l0xODxmo ޴ ;R/'n̛a)FC8*R&'y` 1.bϗx/c83]1Z-ͥME8Y nUmq5 eRR+ ma7/#5R_*O$'ogN:H֍5#ƹVS)HsZ>vI`M!3Q[,ު9:qWMhEUݙ$S]ʼn\dn+'2FdXJŷ7#OW,K'Fw#psֲtͩ)%!Yo"ܷ?S3^ iNMe4aT,l~a<-H5s Izh|W5˱iM%򔓸3շ#m&oU-s͡R(1&eWszI:N(G?1f9՚zq3'Q%` ň+EE7QI'11=?Jkڞ3>oFeY$.#1nstiG/C]cfhǔp8'`j^tt loU]ڹ>`ףFܚO.f,Sid,wN~^=jזV٬2ndA#R=+أS;nu>-JIw2I $moF~8:Oz#%F-Ԅf},R·rw;fSt\-]4 s䢝|V$k*cw] [CXzsp6༒RKnø<9ZpbFhg^{YK.7,dI~fbտTbӳ-=Ky(\Ӄq^;?F?fQ=vwc^G5*QhF6}L,*˵8ϖZt)R;K4,k|Icuwϧ58ѨoeҔ%+5Vq ͽڸxn;vF9O./#f-X<2\yz5Gv:p.j]d  ?κES^5v>6.{K23 ISJ5#{j`ETyH-k6IT O#Jܲ"vJ1O[oy^wwme>߮AںFQ{ةV.m]n61ف=;}s\Ч(/#.jƣK˫->m#H G[o9Tb'dΑfDgz AQ?0k}|:0_]IƝo"찈N|˄ CNrX+쓛f'*ZXa{UZ@ەcm3?yUgj{Kޏi0tFkHW'ӠBs^MJ}QRwfsoOx_I G{pգ YMs?R!uZԣ:W(MXy`7f Myu(ǙtTXզC ;'q_p+x&l ?_5g֒U2TD;̛N˃SR~4$In$[(RкTd6h]‹̪p+hFۢ=*\ƱGfocvGVKbV)z*^kOgNR~Υ?[rxҹp?yJUZ5 ;|GNyTw4'~Ɍ!D^r}*t4롓'F;ۊ䒶OBcesTe(#K"m#ּF-FЪDDgs\)H_df\lIAK`{n*rQ5!R5!#20+}= f[Q$h.>nwڋ")m-J&SFw6ڸSVSGEE_QM}O̹{?ʼ|WKDtS{\夗#G;?yʤ\MOiVIm؎!q_=[ZuUITi5)V'=WRrc*!LE7y:zkoDi::KvDS9*akYN.i{Wa_}){URo\g5rdUSt4瞞9pjh\J/?[{VhYcs#G;I>>%*zD'5!4 $Urͺ=1#>+:k$䜟K S22NX|}t)4M_:%.mea%sUJTQ)2jqF>Y gO5%aJ]zi#2Ϯފe{U'eduK٤"h^[pR:ӡ{Vh9~C"4'&+il#ub?~~Rs'[CR %D~YH89G(J-F jtrj\AlFr߮~񮿯F2J)ʝ?g _Q&wi'ޒUFISOZ5{+q[4V *ڽ)T}>Zt9iUe dB#޲w5+IEG9]Y[ҦJ3mK5RzY0.*[rы\4ZI_bG>S ଽUz 'uw:h4VC} aCq-#zҦQI;B ?ZҤbӡψ姳Z4gLv+Oiʤi_0Vx,^\g#?מνZqߞ{eu1"g(o%|śҲw伎s)/y'B*&Vrzs:4*yU+JND 6nbFׅSzuHT\bթ*r\n224rpV`DZ㯭c V֣汏/,m 0ă29l*ܺx>Yz>dq;^4fh18S'ֺ=N8ەLTdI5xҖ)&Hp۽=+5V栓v5 _u# ibvI,=NxڹeBRZuZv6=lc;I3y*U :M[l3XC-Ld lFT޽Fy[wҲlP52.~{,JPr.8\`^k16i_dΪ4ʜY5͜c#<# ԨIF}EXv%nTp?Zҝ::ktFK NU$R߻#u^q*ӫJ}Qpًm-#nomO;;.|ng-~_^s܁sTQ_֡]OzOU F36¼n#zlⵋ7sX\vlq~lxP**qIgJ4cQU}ux8# ̧ gf_\;].ďs;Km)4 XsEE-nkSNW_KcBLg Ϡ9V(RUWR۶TƌrFxgt [{gß( ZDe{*I['t`[fw|Q`s8WȜMT2rMTz|Zf8ªA?vSɫp֯7R.̱kQÉ$yN=?@3潼!bmm:<)^,ad6eEb9;WоtmJjCkhs/+zk 4SF8r@\m+!Xdm6$`=+q |\]-|DO>V]VA+ e=N8>ʌ;Z|WW?#^a|OB(c)O5W;_ ֜rtg=i^mp 1QQR}j#I6`|SX`uV2r>\V-SZC/&̻hFAA5N^doC7%TȌ_*6WϵT"%mZQvv>}"[F|ʿ(ノs[TRRKQ9Sqy [.i2.OJ$%{snnɚy'Gg,=Hr:)i[uA ͼn玕_RnZt!I-kfeEy1!R>n #?ts~haZ(SQOP+t̉ (Tp{gҹF5&>oaa7&zxdky>hOKuk*ܸnt{~#mHv \zTVh:TwzNńv'vNQJk,O*V _ݖS]R̤U|6k]۶/^zP援:b9i{@.Xw]9R^Umd !|7czӪܓh)GT۵H! ꡕUX:cьbբFQ=ьyy sybrnlCQq3:q-ܜǷǟގ~-X Sm4mu/ܪNNy8;=T|OE3GSj$aI':٭#RU'R)TԕiVܻCnl+U-m'=Ѷ#IJN\gRq1%F3;Z?ACt)ric^Tz۶bMSϴsO%IJVMh2KiRĢ7'=5aZƔAƛC;jƐA_ws 쌹aNXSM;"g󣹳db.! %0u`xzJI.u+VUF788UiJ7c̥+kn:mZqyGl҅I'K7&Jӟr:dR_|v(]dy(7E& C8P8[GVi֧ϖ-#F kЭ,A1۴v`c>hQ,6e+do .,C]4JQ*Oe%% wo<ܳzs]cM]MQhG 3졓Yyz¢XT֠VLvy+*[a6?9ޅV 7*l[ޟJSzv6-MO7nUvFeE8/X|=FOuvI+4[o8;ҹ*{F8&NE&G͒p8;B:ٷz. 
D>mOIgRTkw'{68ck)-yaFz$B62/Z;uխF.2ҸmYRaV]۸ҶH:tTV}zD2##źc U#.ȒKQdff9#֪4AN]SNIm0,UK RI(pZrQ0GrZml6LY[䝹'Џ8#^rjZj29Ru |W-XԕNd' 7/y~рX,PZΥ]CٺN~~cG2m-t){GNV6R0$љ|<~ܽt8V$'̐msKNu>ca啛Ņn(JFR5hyO{DEZ߼WW_Zپ_`=)ьn>o9$WiG1y(p608$tֶ}uNsKT1eE3FzWȸӻЯs,#%'kjr%dsוUi=FI ژܯ4kheVR/XZusA;ӗW+F(Qd9.ov&Tyh%5ć!ZFG G%]p`=j/ XVzhH *Nymg6f۷ZF3䖭=>5ZF](徇O%s|4RѶY7QֶVGQ¢;YiJ2=xsUu)YF5Iz/Ĺc$,wΛc2BCNzF2;ZJe7T}^iQ4vp9\ zVsrb&$rhNT):tQqsGdndoxϗ3m 'jFTЩЌ+3=^S>~vӡ.$}fu>kJd6U{=k< n[ |8 ʉF9NљD-vU >J2eO?_A;lX׭9Sg9{E؂cl>ӂD2pxZFz=hvvw iqbgj=-As}̰"#5e*^ TB7i~lbQ"n@v/MTٮ>[9ghbf $mtIu*)T6{K.beTm˻ ϯqNJ5):_ iynVɑ=O'z}+Nj|ycy?"f;=TaYw?Q愾]YJp{Ԩ;nmba/&Y<ⴏfxPM)yal{'>> QF[d 3(۞:Os֫GWm[V*UpzWPqYMӓV*K.U#"N*SzyhM-AGvT('$:Z(]rGyn,ʮ[9ygcp9FޝJֿ50G쒗MwQGً[}/{l~XzU9lIP5xBO34C1ۗrxߎy*sԭamWNnkx 2sO咒q_XSSv!Yobw>WVeӨ^N4QI&sy-҆a1 rp1(-^VҦM2^KH7>늧N5i*|HzΊ@as*}GzYFI(ҕ5|hۏ4c8˚Rv0T.[wP@"7nD={usYE,ΗBF.!Z H6iT8䁒qQ swvLRᅮ7Tni$VpTyEߴ,;yc [^}9:tw~!ݸna1#=}:ޏ1֟q##H˷QjdrR2JOdE<yw"nyqִkڠ MFou8YxlI^ƜZO$C/UX>/*ScK[EX˻7âtfvupOO´eT9S4ɯmRF%-$eNWw۩R _cZɒ-P)@GORaʤyef;Dʲw<ްg'hƝf@\kVELh?\|Vq{|Z*k_5ָ#qe%IzRUt} K[GN0]nYI ާƞL8E˙wZk1,tHzkr+6U[WZGLYw&;FaѲǑME(1r^7MA vcF~rK)%k'RWvӱu+,C=qSNim !Iӓ*1Te+¥iy f!C7o}N:Vq-49Y+;XѲk8H=2?JI{Gw:jT_^"qlr(uaaseQs179.=_ҳ썱\.*)fܱXc?Hl'خZ' 384iSz$VrTUW>ՌRRn=HP+$L蛚ןjhc^QI}[EUUnU;FdQfKH"Ua 3ҹeRU'y?lg~~^b۽yەت9C'u'VrmF!oqeL2gE%OL~֦R^f޿5WwwpG\ KbO4M?oiFT$yz#un@mVYK}nqSze^=M?%dfxfdwGݳȬsK޺C_Q^AvתSRT,U躳Q4VV>!V `szŒqnUٿǝU:r2ˆ34XW#Z_UKwS;б]]h#U%C?tʏO([fwe㶷X6Td{בӎι%uiQQdqbcp8Gn0j}KOry:_F62qŜn :CƴRxZ5^푯wiARx@ aI9ZUcY5&.)wK[oy23nh;ߞZ1M5V1b4ww#AQvVDTU1sn,qKX%yqGNqǿ5 W2ONҜ{H{vNJy^6s:QԦ/NŌ2k"jljpJ\U'k-`Yn0<1s4mBOgE9'.mӲedxR6搑3S{zwt-S;pr߯,&&-`#J{*c suFʍ&ܘTnUL.=3ںT`֦'&ٖ3,s]Tc-0ޖ{uf;u?Un@tiSRoOԼEjz׹9#6eq㞃;~Qx+ e8ܸ#|uk49]UbnE+;J(?O«P|jwSqDs,onc83qTb+B7B±HZW2/+zQjFuU`iȍ!Fl\cuq]̌,i«$ WʭV\Nh =z2~KnaIQtߕ6Civ!$2|ۆzqҺ%NRe%hfW&ڥO$ӑWN^esjt&I ۈL/-=}NU'vU+ dpMřFG?w8NCEѕo07ԑߌVtvSSQwm m|(cG_s]ʹ>W\JN#ʪ]I <qsXÖ5Ai=ۙz*=t9O)(;p0۶ڹNZliZ1 5;w:m2+Fv_~}\*{KUFnVwjY @ϡ+>k8nDEJ**X%ylpĪ9(u\\UigUFhB[ RFN<{ڮV-LS?w|lW*óӵpƧSJ(U易"C!N08~$ewb+W|Lgʶ0˽^vrs;+/}ϛ*ޯepgCo-c=b1 ˮҋMި?U4l8'GcҜjӧߵ1[yvXN+~Gm^MgF^\>i;]$K8bhOӗ%k?ĩQܾ}фd,ee-jRIϽw*jEƜmˮ۽BZUY$̖Jrw(u:%J|Mv,="yw,۷=z4#`Wi 09Euz9h(ɿ;q\y?ytq@>jy5hj5qA,QE+J+n=WEFTݙX U% /tQWd:k>ZzLJnı%  ;%Fg+Y4+4z6ןJ֝egøɼȥe_-?xeO)Kc dYU*q?Zg/ujrӧb@aY0pÍz1ST~X뵘H7mCH^ZIXԕO[nAovqyAz}s҇ţ^bJmI'#lt*mQ"cf?)q<_y+oA{n;V͢ˤ+Mn%n0߼ǹ2eFbܑ|M$YXԓ=:j7{TkI+؎ph2B8)="nђ'ٳ"˖-:(ʞֿЎYD~s&<ׯZI ̬byH;cVUʎ8(c+I Fs}FM,б 峟/J2V,)3K)F2;u*IriIk/>Kn\g!~N2Ru=;d3,fyTt^6\c~UNBE.=Z7439Ft!Raez洇5MZ5IF:+VNm]d@|q0GA־[ܧ/i܅KKYi zV.:#H}DSF+͹G&RY&a"oȐ㯯9z֒PnѼc(SGaH1=9QTTԎ6 $Tar<#>v{R59\+Hʎ*v&cFSTmК7hO|\}r;|V݂6Ҷ9KfssJߨ 2@,Ww}N.zt#HnZNc`BֺnK1 \rm|۸ {QǗTg@g* tVܪp'"r+tlN2I*OQެ-"ed+49Y_w r=AҶ#ӎ[sQ#)?;Оq;V2wݕkOĆV;O$9QQJ)YT5DӠ\yQ7>ceFk5V]+HU@p 6k;tƺH 'Vq;'穧(F._4K$s4N#=2wt*iJ3vB\(h[rWvZ9/pTÿveuiQ|,ѐdyqY;niNRc>ї[m&!!hPcӯjKTegH݂έ&ES8G_!Fn:I.1rx>>]?t!x̒#uf8lsu*t~:;m*[dwy30x5;^DWKk$,\ ޕ5#& ޟ4UdnU<:氌~ƝIrbK$Kϛ$!끞zӲٕ0o%bffiLs'h x ~dXlG-Z0rZ,7 ֬%]J:J:${٥*y ڳ3Mu*E@Z.c3g~}T˗]RhJ>I/# $qV2;_8 Bn%fY}}={]XƴcV#0ڋYdoF~FU,qnJBN}XOҤ=e_ Bx#\)ƴiѦY]} *gv"9\m۱T(OK}HBGd96.G^7)NaMni߱+K/3UZmf#?&LQ,(T2.3p-uש?Qmi2(/$Ӌ^WXJ}4gfZfU;J =9qb}ĜuWvV1ܯ~@9u!Vxy t&ܭ֌iW"2 ڔ3%}Kc'N~ڤ=z/yJzhϻ,\24F79ZQ5_sn8Z*WAIwHE5lji.s(\ױYH&v=WU:ʜU~QuKt\p[l c*T۞O汭 2ׯi+r⼡T'r: ZW'7G1SOk=:&EqĊ%;Tc?(鷮zUU){%o"=6Ntܽ 3_1N-0:cY7Nsmu=J6z;~kEv-?6E`O*8^ԍMoc:qB4#=}_ֱ6t|2浤Ay ˵pHT%o#Ω غmEgT I'O G%X#>nYay%5v־'0[tvl+4eU$q$fbǦc`{~ٻ$f4ڞj$sVצk7R5+]#kSRw_!FTJs>+maʤ5E-%~#/_I7۵e=5'Sr #-ZUf98GCL[rƝ5(y5;IvCSF *=rJT۶ߙc[ގuNR41Wvfq1gS)'35*M7Q&4+l_FB{9ʃۭbQqJj3lT+b4M(a8dW!raf[ ҕTk-};Ufv = i=odoq$\nM %YnF7vm!3Fe3I #]n4jZ/ZRy+zu$ccuWxr~s:v[u_:+:ԣ馟GGK;9$vkEL?+{͌N*׋kCvO&踑wG'Ⰼ*v5Qc 5hʰVP{HƟl!ծ$ܱ%\[0wm#V*OUK;bRgc4 
鳷qxFaNQVMԚ8*m')n콦lo>G, n~y j>l܎uiʣ-jX苯 k)C-m.ON:%Gk%_u+RVR8t%8A8֫{Jnc2FWCP\c^&^ѯ)5$^jI`Zu7?zXmצA^˛hUN0mWn¯LmT6)ʬ7RzO<+B#'^]'ʥnQ|ʤm{ңojխWyVmVF9w'ۊ(CEt9 ˩jkkN#aWOV\ E8S B>xu *Vaӷy+jvzٜTU#Աk2=a.Xt'{luSzd#A f"[<vS[q:(r5,?lս iV۷MӲ\nR޹]ZjNTc%;eEp?Zg̔)F-1Z->*8H9s5ba+M2ftYYewiC۵Nn51wTR+PB͵Tơe덤 sYcZLB#?1'vTmҏ6%*(`7nQ})(;))J*}QUB v?J/*rrRq侁[2iy?~CԊpW}Clbc$l^760:֏XRIpgv=0IR}}h$H6i#sߓzsJkCG>OԆYFwHAls淾Њ]RE!e4(^OR>[9oB`LU5#f9E̙IcLi7.\ڜ}X<,R('zQC&ݪF3ObcFՑJfʗfנ~Ss-4I3pX-h0G[ߗ]Ί\ZVIUDjy;>ނJ/;vz -Llls'J+_QaY[V~ECPۗͻ >[UB *gNCQмXQǧ_Ʊe+5E(ޤHCFT6YJǮQzg pYhѠ{ᙘ;tS+*Шԩ9ۭ[̼v0,GNj%(ÒOyQyEK$~{sSʣ&Q|E FYw08aXR5 0RH GQ~F42Di$5rNyNeMIP:gߎ5>iN)kX;r}zKץqʖw9Heմb~{Qw\ԍIs&EYG˶Ui8CXJQiu( >˴a#hҳ8>gӡRq%ԙe̒2^%>Ɣx;8&ǽe/Ié]kn鐣i|ʬ,1rz[C*qFw^ǧj:͹/;nGsh/{?lYu=zLo݊?*B>?ۏ*iI"y-ޥmCv]3rz:sJ$EKiTuNww7r~\u2Qb똼?ʫ-N3RcF*gUǷs#)Fz-v؍TuV23'RWE#mhԌF7s\z>KlL7|d\f8\1)h#/n̻qϠqGT׼Z),J HMrT#7HzۉT;dW%J#SL$˰~;~ˡ^-Sc3}3nv~U'ί2fYUXa{ޟњQZU,f^[B\+7H;OOڽ 2CFI3X#Yr|=r +6tcFZ;'%i,ՒEoI1>? vd'j̋n:|3Un:= *tӮخ"F[/*בEqx*ȱdxb XT&ƎT~7w±'ҏ-Oy+F4RjcCql,j[e{ r:u95Mhgoqrc6CWt4/+:wisfl{uҳӡ^ڝI[EUfۂ\[VuzYtaf%egz3g^`%suo[w`y*qvݻmhƗ{}C۷ g)I~G%[j߻ӱgKnVmTJNRp8Z+\bQ6ݕJ4Ӷiybv;U7 y_Q֢W_2k*/[vad~898+4Nש3E7~A# {v]Z6jiumrgwd7x\U+εp^V^O=exn6:ġwl(]9J**P~A/^تFU A+JTFjG]uT}VqQ8d.%Eo,sG-IM^ftQQoWF啊L-9zwqo_+A6Q[fXxl0U+-Gc;Upv_֣!*Azq2nm%e7PDG98:JQQ{>Зz5_{$ڬm<ҸnQ+Մi/_'؇0vяc8V>ҤT:u-N_`Fs}Ԍ{GsT8'jjmC`-#:MgvF1M)Τg @)v&ZR_%J3ZB$yJ#!RWrr?Z)iBќe9]e% 0.Ӄ0]ڋ4QۯM iZ/̭g>*=\?Wht9~>U"G51ֺVvw BMG$0lG)jSJN/3,B31fY P9=SSZ'bS:1M:ni0I -n۷;wϩ{:qO{Fȅ-I^^ܩv^JӌUNNV-#92@Q:ׯaNVh'}5e̗ g xԧrSWxuO/Ȭpw0pxrԭ'QN*~J}B7R'ٵRhQci֑9b7w:kizo(+u(N/pJQ $ĩePGANoGm@ #*R3( g %)9uŭS^Cu(0Uk=8 ~khӌgfsc/yM',cVO_CJ΅dIr[uCM70SXʟQ9B6Q~tJhuuyj*͸.q=ֲ J>sqi{*T^YߜUQJl= vSu/x*K)+}YT0L2U#;Tr֍_wS-OU[$vld]5<On{{VxԧE9;j4 "DNHXUw ?L~5Cާ:ߧab*Tr2E*7'e*~4]JS7pa G7;FNsОԣN14lvF)OH$USsy_VSF5QR5r6ܣ<k4Srƪ?m,:;_52?vV&9DHZ}XǙ_MR8Jw09^::Mr֚_!i&/&pɴo^jRRJq .W$$&FBz׿'>p (+?27\%i#?/lu¹*uumѩRd <뫵vb`ݝ_]p(֭8k>gݨI$Gtȼ>-x\>AƳr$k%F/'o95H|{XUyc9^yG%Cv~n#P$G^Zч4vF4on&13L4AH߿AzVէ|osZu)Ԕ]#h=nq>ecuiԅn쟞pFƠ`⳧*5U?>ڟ ?6[۵eQT~oKcE۵vNZ4a q2B[viA9qڪ?迯Wj 0.>p VeKԩg]dʲ,9~5:qq:9c(gПυZkI$( }k?m-69+E 񵭴Kwqu$d۠e51I>SKwi?f/x)Y'rztg0<#\̰c~Ϟ.fEOrG`呥5 QGz{KNzHGҭ˶&~ׯz,4B[_8mkOO[c|,{UHZ9ޱ瑞=>1.G*1';kϏZޯkg0bZM㓞[O(VjOhݗ?:%WK8٘M/u3 V5oZJK-N{7SO*u=Qc_sz0|?J-ןC>X蕗QO>I>;B/#d;};+3悿.z|pt.۳ N.'E&Gkkr=|;(O] ]jG͎F1G^5Ne*.yxzRsi2|_šYkmW(AW A_;pnSF4l:ǁ*ty!W%̄>Nٷ "U.]2Fdd_km8X_]Mb`\/\U(')ɫirI!*H`rOn3ǚ4js-^\y/n]=I,K[lG)UyRîjrr@(ˍ&/Lck(JvЈp׿bUkx`_26GMJúpO6ہO)ʤi^˥SޛwDhVM{-eye*s&".I=: #֑-]Sm*+>O[oݐWnއV2ֆru)ՌiJRXU`{r;ZƤҼe>M[Ҥ +Bсiq,tdLc/m|^JUD]ʧŽޝ=kӌq0}\k#mX؂}_gO37m}.H %Vq5%uQяcy!dvQq*֗2ԚTvbx8jUi8R:'֔gᨖUm̾jUqֺ)%8)'*(M|~)ӌ}&eR^ҟE溒OkH({Y{g*G\\eI8mg/v{"ޅFm[m\²5hՆ}ʪ)5t֒Km.#VxUK`ӎ+gT6VRZė7HƢ=rrtKݗ%4M9J5$ET z8Q8ǚjޜFRW%3FD1r{}@#SdxÖ^-ʚ--,|mR\.N$WEH9>UൖH ɵ#n kOCZuN).؍,]r'yXӔzRӫE[Ț|h摕#=}74)SúAKϹevVM86ߝtF1;l]iSHɸ:.N\0ur9jԱ)'*s6}p@FR[q&7l$X.D= 9S}tӱH?eFUcJ\1쵑B1"D]t9n9JsUcZT@^(]L˜^:JrMh*( +$:4r8>/$N^\G$cF0qWvumGag1..I㒣s]+]cRJrnTP啎7.sunӿ_/rcer Hʘ Kt~ʢk^^gtc/!Ks Rsh/n3}]IƝw2Mӎgm4H&śpҩRtGGu e"VU#$o4jFWH{i"xnT6!W֪IJ6Bhuݬ`n>#v< ˖+gDgh>.X<<@y5(^fi$L9+/F{b[.OI|oLeL|Pk5՘{J|Tٖ v/0Ui._g*[p Y}5I)>T9Ml>)_3tP>ΪeSu4veUllt'\K5*/)#ec8Y{6?g,,b-0+*ǼzcjF0;4;&ptf]y[sƍp޴Rw{>ogOU:~%y-26ݬ 7t珺}+F5>UEw46yQےI8Ҟs*נ#U%iWi|iQi=YVd662>nҥԺm.Rhi0oz=G~j]VEkrݓ̌w7`=:i{s .Z4hB4P㕏p9۹)^f)ԫKMי5][O8S&T19? 
ӧ̜ޡi˫.YdַRF|#ᜑz}zҮV>xʄe}Ey&dP@Xu#zEG]Hi6vh)#ic_*<61a(N#Z5КFXaѴőr(ay?h/γR5aI$HTmӃwwSgtZZW*]+H++0V.I'?JTf#ߜlKo28n2MJ9 ki8gDhԌݢO%ry`5v{u?zȧ)vAo54_‰Ʀ9*n~흈 42J[dGt$jQ٧n;̍aO?SMoKڥ>X-~Ԓ1B70zQRz"L_kƬŏQ-@iFwՑEЯʫ#?.sd^^gR?i庭J U$#.h{Vs63xպrwB-Aqw$EPNЧz[QݜJ5"o7ܭ5+ܥJ(U+ld}y*٭>9OǶiq.C2X.9Vԡ+];)֌ε+0 fBTzTZg_,nwʤpy{{DkEfJ-hpxdSnF2;:t+?4 UKAmWy)JRuN_B.\w"q׸iAF)5죃rmӵHmd]O oڴISv)J"IJjU;[88/UM:Rv+E&8mLFT^C꺠1b .иl`g#9ȭeƝ9}gT=:t\ 뾜iJP ԫFoms/ 폯Td˛1]JI>s+3_ֱW ݵlt3=dNb|xîFNQڝdQyw%'9`8in{zV9)SRw:^n!x .BUWh38?/}oM6ջO֛?7=JQ(1TaNet]; #2Bj>WGN3˜%W@ʸ_c?JGUfzʥ?ŵsGlEyǷ9/J0|S@9ڧv#9[VU97ƒZ-Q/;Oݓlu;εWЏmZ[Ț/?(l\}5TMiܺ;Nb|1f RZFWX"cyg4έ3#3EGlZls´T% J2miϹKm9#iFRI Gu"UWVC?Tʟ4Sz~w{\C ݴ<,m!}pzc$ҝJj6F|؉VQ;pZYu(Q$yl|8Dh֌u5&vD}9=S*lݔ}X$m y?+'fC{HǑ?EozwgRr9n83D(˕ɡ<*Qi&[mؖ1IMJqn.<,:ЬwDZpݵV@9eRSǞ96U[X[y||EfB~v#xj4'v:bN˯jS.-жOee88<ZQ%+'OQN*^d ,V^gystKNz4'w۩Yȍ00{)IM#Wk^ffhjrw/9=p=n^9U4SVXk$I#d199;QS=]Ͱ:ҷ/E#\vy0vRJ{t쓽.>f)0 nY9SWg""rvaTR֩4A:u?!Xg^UcW[=;I3T8LБ+:П*Jvm]YUR$g?OSڌ:cTZ"Hvm7TlֱLkӍ7˿VGܷqId뎊X/[FדztJ4ZNYd-~dsm*Ƞ68P a'.mUђ[:bEPY9[ Eǫ9gWJ׹bm` pC$m1`0F#.jN2Ԩ)I:I>kxUe~=0kӷ>ͰQxi,j?wzzTԋѝogSX{u Z>qIF<(΂2rylֹeR2BjhI%妟VV9NO''wm2_gUw`zbQ^)*צla$o-I 黃9O}zѧu'+_&1%P\3YßmtsFȜ֧֦U=&H!P wNQ1kyXδM8mJBī3 r>oWJOɢ"U^9'gӭaQ_wesHrE Rd]hrs۷ZR$L9U:}S$$ ~'ԬEysЗrF0Ϙk*|,T;zu%gK@ GZ?r_wZ;\ ̫%e>mQ6O_% Negv[e1y{sc<`s[0jC\i 6ynk|LW5gǒ*4'JR߻}< *ee\V!@gXJZElc*dD `|OL|''7ɩE{!$x٧NL99큁Ͻc*7୅zj]{Iw+= KGCr^RIriܕd5wmʅYzdt:ź5)>}kW6jm.|בbga6dT??,@*0{Vq%RwhWN<=Hے9QD*nq3֢qC *74/%6Q+l$b?uԗ# qR=VWqAJ*롤sDuKrDګ"s{{ڜ4q2Oo/kXg70$U700r՜#ԣQON o=VV^{z qWe]ޯQc/$Q,rm'nܰ<I{RS([5NIGGwB(Wʅcp9\g kb*GjPvFHY73r?(5o_]XE?eewD"y[v(=ϧӭ(ԋͷi2Iro>bkM9̊t.2B?A[i_^eF1Rku-͢=jI ʀFNӅ *qSV3_cD_b`9Ԍ( 黎Jh-ѝUOM5NOjSJ#SLɝ끌\niNҶfԪԕuZ6z>ݽH}7gJm5kƺS5ShkF2;Je5.[GE8ҭMI1oV5cmcq%7r)ӕ]/ahdg7+]yJà@Ae,鏯jIY {_w9U#3 qZhPc4ܺgl HnN#FEm*2N%By[!O:"lʃ?{'g"U8Λ${IrI[~>(~I`#zc;޵RZ;hmFww>դ3lS;1.N1TJUuii~8$*GϧZ~dem7WdHJk'q#8wvZ/ݱYlOՓ{zVЏڑo'g:D3o,r {*N=H󩴺tF2N3ڔˎEܮI+ ٿSmɾoM7G QE#g^;q3BQ|kQر[:t֮|җ*Rakqebbi}+j0dԞgB6~cxX*[W{y1U)Tsٌ>c' 1ΜmBZ[o8A"6c[sVӚDµJ1R?(C劯ʻ>sen)J&fPn yҌM٣IWApDJ8'$ 8Һ9~fckzi'Ao/ɵp1:=k7ʥd'k yZEċ&oq{BIt\=̊ȻHc~?Z*BGKc`>HNQ}sTNwqmF3lK~Y3dz)ʛU=tJ,^Qu6-qHc$v=J1ݣzu*rU0̬Wsϥb+]V#y\qZN^3)#лem+43ly^{g= qԩ?'DͤS+F`H,y9cds<=ߑ!]rx9Oަ:Q5)\}@DonT|§hvU% {# SKoI yyz&yo}/3*CndIt̻@=$)YXǕ-4Wnv$j{ڱ|ڥƬkrU'Y~f*szW,JLe*M\K?̾sdGQc.Y {:1rІ1p~E}̑x|~U2prQS$5HZlUllt]ʲ8!\I8hԒج:i>gxc[Pk6U]nq:rG\Ryѕ:lkJ<$K2mSk23]siS}~6epϕX?/_<*rsv]LkʟͰ4X ʸ*S)n4W,nϽD -5D[v݁c$U^sj+ε AhW}H<ӌ;G婆lvԎXiko!*Iy2;W-KVJM'2kVqV]b;n4_HÎc}4[ʎ4짤z>nmV%fonNTYөQDm-/е*Ui98gyM))vQ9^$R2&[$}~Ǚvgjݭ- ei:9*SQoS9TR@(]r998tcJQ,TS#u aVY&MnJs>*eIR_FUz|!_VUiFD_3/q{w7ȻIC<8};ҍ 6 ~{CZj̅̈]r@RHu5$ VKҭ르~?*ǹc[v: 9ES*nfeTz)837MYgh #o-#ݥN:=1zRuQ+;ICco"O0nl1B猜Vѕ乿5uݻG3}(2y2ma5e9qq.Mq,HTm-烌֥Rz7pu^M~! I%fy_NOuJ*obeэ8NX[gB|7.b8`GJ^{dtRFIVtVgoF1QgێqU ~C:Ul ԠC:pVn r,SCWh?0 pdvUDcSR%ZK60X3 qں5Z)k~.Ghv3q\O:VѾNq[Aq*c?+ PO_r=+ZM^Ft)>blKx`6~njqI]-Yޟ;{2{c53Ev{486ӆg#[I?afMp{gdC㩵ISNZ`ڹ*I?[\LjBЖSP'kͬGmI,5nz'>Խ'ĕi8e~^7̀1tF1ԮZ1Q49K?isJt1٘o ǒӵoN8ُZNj$lY['=kGEqTJN^5LcxVnpOkh6*#nimCoA^<$ԣ&-}"w0ۿ)qDJE˨(؏(sgN}QRNd=̑A TёETye+3OgQJװiu&݂}j[SJvm-W\I3z]qD%/wyU(~_/8;R{ 48F5an8RF}E9W9 &Ge~m-ѧ W7S\ܫ_ot0#۩fV8+1W1Q[@ۆ oOkNUsZ|+$[9;>i͢5r$2\[0+Wq-Nzxc1mMٚrܬh*1ZQ_4vҺZ\uY1r:B5H4)1g1jwRRz",MsG0y9=*1NJ0qrmgې9Z(F{)tvƷm5e.m9^TBbܹE8䞃-KN_M5CЌc6:]y. 
[Ӄ>FrOex"K6iGv$ϧg ˧F5fIm$4uBS*9J=캡V`yW YxۺB_dJ;GdP7}6`zJ'rJQ[˯^J\13e뚣d'~`[kx?OJvr(m"i:ώ1qҢ掇\ay3溽H~n##?ʰ2W٥=$t+*j5eNRz'v~euǯz72f]ߪ W=sҟR1ԞvGw`9FG_nqq\r5RFmFi<ӜsVu)"~%/z0;[i8t0pL#Esl#AJʧ$c(#MߞKnHọnv~ʢe!¥(+N]Gb32˴aq%8JJR'+YX؅+ DeR}Q~BIu$M}9r飖rtoQ 7y˸ ǿV5; m KkpхϚZE Ԟ~乵zn.T˜V3ɐ6~\gSX\c78~*)`۷RS;3ITn)$^G*G1|c<c]#e4VVk,ipONk2k5h뭇^[vh]9We-YǚƸ<'-͒JU#‚:u?AYm#e{iUY6ٷAhԭRTo&ȯsc*˸#^9=t\)ԿEs嗱z ]yF<'jF":S_gШq'q콘c'$`ocX7&'V1-K!!W#nf9>8;>݈m³!Or+T9)ѥN|Ҷ' fڱ˜y\UI_cj6wgKI&5\gql=Vӹс8ٵȊ9R&nwsg*4mk9JF$f5ٱ{8^}lD=HҴӓkwQ.mڸ6އЎzu*_O[KOoAFo}f;91[{W|oWCU=:{\GVi!2)\}~S2Wv_VW"0g Ǎǻ qֱZ1kIK-6X! HDܐ=hɶΧV\7|L8߮*cYs&؉NK^ɝd1jF2GCzTa;]/]|j)*Rvo~BFf!ӛ̕vA 7(?71r^3Kt&\pN6{$XdcHYN1{)ӕoy_M?hJӭNV[5\rU$R+54˱ՈiQl7}ڪ+ gVTjZS*کk ظ#y d'#+c$e{z[3M&p+oi>k'ekc:ʖ4Y.%]cH ((^K- 8yP5=uЎX$K_Qȵgb|]Z%vxC;C-k6s\Z\9y_.0Y!Fqq֪Q6yԝ)r4t~=PRy.ʻQANrxMr6zM[}{"Z[EQM$ϵdOg#'85B—}?lzSviUOrsv6Q2gO5*AAko0F(u_ܱɁ֍rێIhj#ZKwgTi*vȃN\;F"7".5W>2nc7e<>Xz=\wOsԍZȺΏޱKSQ}M15R%2ڋwhX7g;Ns֜is} >jog7濭E o#`*p!w|n>U".ZJU=hXiUr’?N>j)%k*AOV9#v*|Qm{iWШ<șa2dU$ 5%KEoO?BșF 3KG9>Z[\TѬMoq6i$ueaԚ2SdBbEX?9]+3XH-%Kn1Np Wqu9Zۯur'R%n˲,$Ky4cT;8:ri_ՇGؒG86%D𴱝=8UڝTkN{AZvc9O-q`]x-̔c?E&mfđLA nh9V^;h\1{(驦"8zpM꯻*4}Iv4|0Į8'ն!S9p*"\ _9Faק*|[_O-u}:"CHYV07`nb t{ϊrkW+GZ86_ZQʬq>fݺ&>5`Â+ڰWҷroGb1Y*nPcpAMƭ+mo%e-3"բlTc|եR6uzUU_92$'3[Iԓ}XSW0[F\{κcRTd%t|ђMhKPr]qַ#hQbMi5qIY~2dFR:)RVe4-HW'3]thΜs- zy X&SrzFvDxxӌZerY#!1ʜ8EFҔhiE᧸DEs3БTgt*>γ[O6G岆mێxڻ)ǚ6Liž층󢉷 *ˑ@xNѵ.iRҶĈ i#MA.|{zViʜ#y$5H}㻮:f5#PJw,9rT5>'U8;QFnZ&ͱ+tSB3wI ᵞ=u߽iSjeIZ?D1ʷ|I"UvI0N7fTW'h^IuI]MX>ޙ"16;#RJܳXtL0Vv(˻im洌*%FMfɱ=ۘ~?C~5,ЗR1VԒh#*33 SuhF\M .oBԉdsZIw:ƜcM۝±+h1X$aKjT^fn<6LWX ʠgj*{jau JیRyƸcrF39׭m7 kuuBoPO+7ͥ͡O6,Ę5fT/ɪe>kF+K'Ai7z7~qSe*H':ֲriE-EI n02:Tˡ̖ɸo:Ȭ`%מ4t?hߘYJCʜ>N*y.Q#23ˈoeri*I݇*H&ySLwܹg.Z:iSp5$nvZ"7x>pҧR1q-dtaP@՝Duɢp!^]2srFqJ.KUF],xfU.A}*eFַ`Eǵ}LVruyjEO2f۵ `k9(2&7KeZbA\B SJAg5_i{z~ƌmm:t QUrn3Vq1{fw!x8څ(.(ԌRnَeZ[-?z{u1"+],EsnUQڹyy^ѥiȯv󵕁u^k4E^TSS :r#p?~ K[N2+>I*۾^W=1޻#(kS_oۭbdo5$;>`VO`rVIi `:w*yvsJWw6-qo)HoveoSJQ[VW?3o?)ק^߆kR|[6559BЫ`c'm /Y_]*k*L9zdw3dِ#joc-2mvd>;bvHce;wu<sJ1oScdEfc瞵 S?T1\d}EprECIs;m[}hY? 
A?:_yΪFQz[ֱIu*IkNX&34ﱣmeTy8Ɣe)i"yI'mHg[ul,J>_UpqӦ}SMӲ.bEfnMa+|-TyMjd.I^vnu$7ᜪFbfm+]Z ֍Y[fCrV ~uSܿqBۏhrJ0f"ˆel09^YKS\C<"oR1:>} m;[ >dc .9s4OeIJ*ۿ';sG:rs+;=e YT}pԣsB{䉤bnDnx?/u2vk#+AxQ嵵:zrE3"O/=qzqXk.Mc*vwЌC p~(̨S`\G1l={5=]BfoRlfYvyoʼe;kƏA|*[1T=iZ|~Er*rZi2ȫ[V|Qmޭkkkrfy+^hh^0Gj}UcY&< r.{Ueu//;gI~UTFx'Z`Ft[U%}ckh"USw kOmRvq5GN27$ \DAQ#WK+IQz"{tl?yE?_,T>Fo#@Խk+@#IT;{;y<pkxNI9RwɪߨdkUf6U G`}+QcdMPҴQ #h_A~tzGOJO[mB^Ӓ+dy,kcRx$ODөҍwfE-l$ rG' wt4B1dz+ mm+-kOQӜt[{[':Sk^ƀU sՁݩ{8TRurVJ-g~;tO]S)T-uݽ#5Zm|l?±Ƽ|()F7%NZ3?sU0dkHJ5W29BQ34ʻq/ݜzc'{*dZߠVaﻜNvʍj1n2I玠~^750J2#hm)a#ߟj2:m-MZ9:OٕU79߁ZtN {1 qnZGjuQvwk܎iբ۸Uz2<̰1^kaQIrm'nG4j{֩$̆Y6nٙy:ԣ|:#.N|m<z:ux׸aZp,Hپ^LCzVrZ1渙ԓW:5 #DŽ^|^t<cRTsEi+ !>~I236p{?Qǧfy{З3woVE#nF0qS*KOQwBɌLp<լye- BPej,ѡ>Ѵe=FqӭMU#^_hjL,;0y BOqұESw*U?QGI;^q׭g[vסՇSGmtN glI89bJ#z9`m.x>kydҩ(:-|5[Ovଧ99n9't$C#5+X+2;~kXWmmPحA8K/ Nyzu9;I}”c8e+ ERHVf}{dZhmiw}Cs&ZYSbI1ʽJ78R"[.ZD3K O"jVlq$&8zV7lU" w%`!:9czVFL?\nXHgSj>U9jzϏ|Syz|72C|ai={vN5G -&ջ#upŃ)l9:=Rk**4Rmw#eWF}8_A_1ԛ~:VHHaX ̃=spjힶ ˬ]5z% ˹Ywc@ߓǥ ^e(CU4p$syC88vO-FVzqйܻ^ [>v@žnTm=zTZW]t:?߇}*{7Ɉ+gw^9j;Ks8:"+Y\o"mTCT6@e~Hr+_v_tH6Qm&Hc^?6"^}.ZR%w 3͏rE_NZy=nT+?_C:/uɧC >YGHYh>9W?(]j~g<,yf0cMJEWeqFHL})$vXBzVFҵ-n"V[uO =:5 {ZZ'Э+|M^hıIo cLpqae2DZ-\N ҧvÇ \fۜ㿩^4}nɧҵ?[ tJY2=ze)Y.t9Jtzݫne>AܷP0^޿Δ%*~۲jn{Je8{Q|IۜA5iޛQ|ɩt\ty#/~`Q~G_NKV G HcnH$)}jrMw=H0ff2GO_WC9];5ђb go,;0qu(]v:Jѫ.h%94}=@ee*v=wޕ:եUlӣN.fIiTSSx w*SQ-LGNV}:[d0,O'8*eNo^סUAI-D߱Lſ}E>fݯ[ 9'#[n6o?ۮ+xJ-= )JM] 0&8/Fy#~frq+E ,[4R8^TO24(Tx;;vEkjq/+yr1*oyJ- 9zʻe.hZMބ֍u[,gD.p='׵g9'};| ђ8ě:ӟJKЫWkrѶסf}nf N+٨*jIFQ2Cv) NݫJtܪY358`%"忉#s+ïCU)I$66 ~;nsYԍJWW<ʢxD,ef?9Wsi5)N6$1ܱ, lVq;8ӌfW5!4cyˁq޶I&#.Y)S1 EiOW#;[IIv'R0FB^~:ќSBI+}sy莪t~nԎK[\1޻V5)`0}"JZc~X/Un,3\j& Z3+0̚Jyӡn%6_+A]rf2QI'oį9I/U`fp jrFIzQvsޏm-md7R \>TNU)T<4Rm ^69~*.[R1厗FE.-Gܰ)ڱ2ï>µ)JnIؼ< &QFy!q\@ F#zMLF\wz܎|+3+ 7Θ%z⳧)sZ)ˑ'f%)%Řl峜s=c9ӽB+D(Aqپg19Jk)ŮY|vk/&SZ)ՓpѼbuЪTO=Zʣr}t QtߑX<*\}zޔQ{ߡ^u+ݸI)ʤj%=*J;B۹W?.AJ5)YC+ (۸*s0YI9^pr{汬S]*Vj<]䁄2M˝ӿ52yq4튧ZLÚƧ6/Fi/eZ|DsKQљy%{nW?cSV>?SQj;D07n+:ܵl![,;zKO"3bBRwq޳rlg-oa{.I˒4aÝӎ4S$U!Uv vRRk."0坼]P2ʌ_Hm9ϸz.U{ǛNgiѢ89W;W>"Q茪/-#,v.ZU79̝SOʜUӯcUNrM- x[ƐA8n]5%rKZ}ze15k _R:=xț5(_-O$`~m==iƬ~fQFE#0mOҔ#NI߮FU)4wNVH!WGxis8{Hܕ=IdbrX3199B:$qҦ'=$o'̇~);ؚ,bSL T۴rGzc=^(6Qup݆ȯON0nhDj4`퉖=ŒUќI&N\j a;ne\*pNߍid?L<;4,^\nrs\c=}OUSN*+l#73ꛍ(oGUеdߗojOn7lѨXs. 88Z҇4JWnG;*a91O0CrqWԧtq\$8V]sӚU7bH߱m伸8ai8d֑J)7غ|IY"fK!SRm4Y}Fdff\|:5tIt8a)+脖7peV_^^Eco{Aܳ\BI&ʩe~(ǖÝYK'-4y~ۏo{O*Rg-9Tidw}6=hi_cjR;tSow0MsU/}V&Q5+_fS?2ͷq{ TI'+ ljp{w?~Ưp.ʒ@|ho_HG}Bk+q$$D+eJ#u_#7+yU{>Q/y9@:云JVr]Ye\ĪnwϥrʪNDK-=K{ʗ`(q'Vs+NiNz. 큭nk(To$sQMЊ[sk7*5mo"4g)Ufzq< ozmÔRTI/>EϷ20FyGqJ-[yi#, ɗ""vXYp s5kr\eNږ#4lwgvuւpQXZQm'&VjUUyQJޅw3 T]n-x8#󭹽NimxOK$shYOzu!N:h'-unRQF83rKAw2\$I`i >{qEhƍש͊hK@gGݷXFOUJHj,\qb'?T-W Hx {zz֐F+MY{Y)µ s{zڍ*~h^>jqC^gq9TSkniUTR:Mnd]]nfd}>y%S-MrLKmN̯*WcSOF8N}F h$7>nc+կSW#ȷ9\3u=x=+Ǚ#L:NIX3B(E?yfR}T[UMqu>(ΥO.WMA+F/5sZD˸'jRjZ;lc.Et \DZ~kvEJ2I0eZJ5Ol*FRʝ=gv$Έ]Ir*_<ӎgl◯S8(]rˑ*(TqKt1>ՂjL({w$ i0܂yϥl++Tq֩EOݾYp|ZkwNKR)%0./s*յ[v^ܐ v`}~=+yF<\rcԷmj v8Hk g3sPgͱVkϨVogb͘d1R旸SQc+yy@G#didb!z+W=E')[{J|ڽlYDbCbF:9yr|9q\h1~[E(Ԕiz߹ ¬QpUj}On֚qlok0i#T9Y1d2]WXF==j#⹣^µoဏ;l6jRc Kuhrp6?:Dy~W->C줌 rFL\yw0rzTku0$YJl%Cig~٧Maf϶UX ~>S?JVn[+ rђۘ#HRr0zVUVUL=DҼ^$ͤSS^vh'쟓frji{v[gVݻ#a.ju.ۣΌf,x.s۟EYRQߩΫ8d[aS`k7.zQ멉U,mo$LҸg$wiRKcΕ8wQ>Ȗbm{WSG$swsmv3"՝k1 d qA$?zʝ:qJ2TdM<Ī4jY7'S~HҒnVT֔Snv3. 
Xexu>wЉ8rk"$a9gb\j=h= -㷉E1@dR}թ,kKsQ[kߢj6+0XHj䨔|=}3ⱥ:jkcX:xX?酱[eɕUwSRtbqݽF$\qHd*@H9cXI^IԾbO [$HA޽EcN%,FjAYrVŢ%GVkMBi''?yTJkq8k^'Ӥ[Z"LCJwgYJ-h[O-rFa c ΝY7 *~$nUSO\5N^ni]Dޮ$sSǸ #k)I' Y3:ݎv^soR Ew A1Y}WksVkMRKcVmQӃSc;I4S},r 1=r3SS{bZg&շճbGfE;$ =8sϭq{gJ/vgN-m~4Qb6/[l\Τ)o#/SU( 5X>Ǧ^8֘z崥^k)l.8ьa| c5:u9QCTC96cbWy)TIW]Nq§vA,1"0u,M~S^ +e(%WO_-ty:85d_Qs-b,̑;3$}WU-i[De^Z#G/3olF?t^zVQkG*qqTmVm69*t뷩-[~Nefk50o.qzuXKORU/m؂ kr:YIG˖#9i ME]=5JrM;7t{g;g)9T2*Dy?vg 89szVl HSjR] |XJRA8jNQhaQU:Z,KffOvF#JRzR*QM~!)TyXF=HԍѺ%6E ebg .1_U#&YHHu_+ 4R+FXKos5 +2#8'^cMIjއJugQ];[Kv`ĸ??Zڅ/sYc~LUZGKuiUr=zӽt:v];jTRMm{krO4 Io`p\\tFSqhzIjQ,n: }_})Ԍb՟ɇfeUgצ+҇\M#q+~W $VY9 qg8:%ǒon4hrxXr{c]QRSOaB}´ۗ5zrkhA4U%Mu؎+x5Es*I&$ʜ.;Vҽ⽟5?ޘ=s]&mMz_ HmY|AcHں%NI.u*$.-HyZ=g) wҕW9ZJr+Z_]IJ$͖2#rv҇T6YK0@P6dna 㪚֝FWETN5 Z]tH#G2=?>_.n|úGyxz?ct7)*/Vp4 ߼n۞eF:u  o3ڿw zt>{n{1VؼfMJ[*82eMc>DvWW>kܺ4;49hԱPN><5JoSZqӚ$PDCťeG[{Z#֖馕u57t' uC2|[jyg_ThʥDhJXN+0m}GMk)IK.[p‘ 'ߥk˿3>Z؆Ul]1? v)u!i+ؚp$i-щm9l qS+\Ju)5z㈙'FhR1_I,eanJ@ƬuZ~e8\޻Uqa 㷷5wʙR[E4dx6 =E*5]rQ\lSmn3F#' zqIttZ-"Knaく+9Jj1W)8E};D*Od٫:Z SjMB˒RtnWZ5oI>hkr~ni\Q|e.,➾wJnf'\XԔzrnǬo#Dߒ?0b+ |U\u*USnWҢMԋRGzte32*Aq7w}*I^5NjRJ:xry?9>vZr̹8Z!M2[Fw,8s{ֵKt}z[RrrT?J娒קy oorJ\˕-K;}NĎwsTfpJ񋮗-m lxq:wYI] Zh#gtXKq1>^kH7ԙJQ~v/"ym뎼WF ;yCԤg*>x%$ NJ9өF:vLU[k)6l*܌dF+jM >ti;6F3dVI1sO_o¼rm4K*B^ĐHm^w}NӓrrOnj+ =E)G< s|ʻͶYO#Ν:~k_SωKv2)Up>WGlmEEE]/'mQ[ԪgoQUQUHʆRpTһJIyZIdLۅڙ`299\QKݯOc\EbkD1OZ2´#6)@vT׸ajp;k5z10v |=S,嚺_`.g~"#fٌ/s9x36z5;ۤ0P>U{B<~VjS[ԝ:mw.DZTF38;S?tB3I9B BZ2>|5f#[QM˱RGK8Īwq׏qVRD˩b(Q!>٘={ԹTdn_g2_r̍>G{-LR ;[[380HK&'׃ں#JpT%/wѓq3M(+3,ʑ[<.9=_n:t}ET嵗[n:]vHr8+x8mnmOKa-z|(HdiKb(ƕ6(BsiZ?c iKj˯(_5X,!݆Rǯ?j?ݹ6h^OoDe"}yXFeQ\=5gWU8'\Up{^"qonrF7>\mr|ܰv%F7{]fce;U|yiJ:5f+0TTN=jUU%E=Hٿ&L6@==8zWRqJ߉HTNȱn$>^9D_̭bkyT ޣ8Uo)I${Yʹ/W(CNޝQ"D 52:t6^sGoh[u^_`3n$sܜq]څtsEE8:X;'եB}J4#*nmk(n!Z3-Б;~` EoNJ:=GaS,!9^At9TJRr(z ͂27m( 8}Mnӝ; ̢GLF+/h{uzF֠b<.=3ϮkHB76u#6%XK0+r>>,Tz8[s̭*݉=:19S] r4||͹ ?xy8?vFVKtb9U%oOxJ>6" mr[)huvx?K NZfڮ^e$Q?47߰cHcÌGʫ7Rr]!м{@p35FC}0G'9gּ#eRY&dg2E8i%XKVʻ%x=X[y~&u?sDמjVVyq#,&9Pal]JTgJVhk(!8<#֎Y%nk8-_aCJZ#2`U?'8B+H)ZZ_5'=EoԆ`/(er{ֺMFڠyM͵'9:딐^(˱YW8Q qګI Y˷nJǓ}+yiR?eI*LۏwdgqDz2Z *I7έs4ijױVy%b*c!ߦq[Z1vi]UHsqGSD4muO}+2HWRwn϶xyM9xoP۲V#P~~ǜ Vi38>lĖVEǴ~҈MY 6ZTle=?9J KqkpHą#:c<R%ok6h%Npq ǏץGz/T(h龥i2m?1'8H|ʕJv^ @eP(]WɫŤ|>c. 
U GG#ӟ8[n<˹^H#iAU[fE/j${'lq֧mrǖZindS a.xZ<֋0F6uĒ_I+G6`9}+N:~~}Ŋy^Fd[l3wNIGSRW*$RrCFI^~SU9NI⸷K$&ѷ$qN:g# {Y281$ɸyt'j_שJ2E8i,cqWGj-S Sv"YC;>T/yooj*JuJRw"img,cywjv2߼b ,p9*סF?h_tk#(qֱqבxЊoBY Ϛ"'~v\cDj03z{U^jgZ]nP|`=zڳ.AF4Ut>|J0̱ʿ2|WU8SoQL+DN=AN*2V#4j>jR7*2S)/rCڏ,3y_qYdjQ^밒L&&hb yw?'(lQ3]K0*V#$?Hܡ³Q|Wd@4uɟXcrn$Vsϩ)l޿*u-(+3lyk|0яS\rݛ3rr۪ LŶlIi99nڿC(T߼pɝ5+it1Kī,ݵK:qT"򥧘ƅ F*"ڣULv뎙it:>z xV;ezqeR#{ƣFQk5ǛmVo,vNx{Z?i;{h:nh)βm "@Ln)T%˷A]ͶIה ۑQXҒMH T-{\41N1Ujx9ѩF]]}4%۲4aɝYd{Jrh+^pz|$ҳX挩Z#2Ccg5u朶;bir>2ܯ4[nY|yS h~g ([.c#n,ʼpp1&"fvzߡ֧r}U< Iov׿q8AɹE~SJt*qMu>i.D c#A3iIEtN:QNU}E }Q۽i9= TaNhgvH/N1ҩS_:VT_KkeD ӀֲRBFպM.g©LlgZ1S]uԩ|9di7} aBzԣFE+#]jJ?zY;`=tJX,Qzrtpڟ(yglj6džq1Wg4ylQʖr[4׻dh*NÒҦ*ZYqK>QntinfO&FƉVMwX{~W;;:jo.D]$g@4"O8Ju*Kӡ)ZJ֟*[w&t0~]Gr3#.1gq#Fժ-HÞN S>րM|FqzS8)ѽ');y#e.j6$nvIp.٣3YdzgwJ'Sȟk)QRn7OtGoe7$lO @#>vdG Ri֌VbO!]ye]QvǸOƪo2T+QB5-+?2@<OjPQ奖۳8iQU,`e4y\RQ=ҲڪPwo苑aZ8V8܉\3]QJu#'n]J7J)wDko$pVo0?cۮiW>ʻ{$G<()1]ql#wIV)Sk]AӔkyaY iiN-i7 F2wcl [q$;UUJܣ<\Ϣf 1VUj=LkK&bN4аvd0}E{ܲ\覰'Uxn=;G<^1J.Zܱl`k+"?ȪB]IVf]#u:Fe72Q?#ԨFNȹNGo vݻJA<޶(-3KX0|RW H<ֶOVKFmJ}Fgffd[w]4c65GXff_siŻo)Y=zyL#*ޟ(F)eE9obKkMN7qW>T*NS;nD?¼_{EQ| <~lgDVRHMKT[͎{U:rFI/哷9;O4RRw*P5$'1ɝjתR}6a#k^<Ɨd}:s[JTNSѿ_.9Ro 6O{qEGJ)CVɺ5F8!ʮ8ZR{{yGV\TRױZ1h]fiژoq~+Yssh#J13D's3~jۍҽ̘n9T}'NNQC}ErF88q4Ӕ[QdƭHU ˷뻜`IJqZ0NNib'b |WqZJT8y1Po,reHK6Tk,"6I7UUY 29Yڢm~3wͻjQ9i!K);ԍ{tLe?R'y&۳Ax[,Ux:8YKcO٦#l)+d; jKB_ã/ї'9?P%M'=bVta+mzQJ6*t U$'s~JzDR\/uneJq>u]3so%ס$_Ljbڱ*̻rϮ?:(IiMHJ2@;fU]_dᇦ :ϕƢ9kFJJUԮeLn 6x_}hTĀ,g&wq\5ʤܢF^) fe o|jyF1(MKE=G8ԩ2ң*uOK<`h^H|a:ӎs׽uN1Km3nYR4Ou=Ǚ6auSq`ʊt勵0zZanb͹s9v0dk]eRdIw华0;c\=E-ٯ!*]"U9^jǒbֿiCq$ißؒ3?:ϖ1ؿc*iZ&8F3gwFm?&+hG]ۂ?qSyϷױ|S>U%>7wq%=ےHL돩Bbynkjf"[tљO]K+:qj"mAy,*7<Ksʲ!m=0xOrs{IrQu#cIH7 nj 8%=n,$gp3F 00~I^{mzzhtzdV@*X=wkt ֦C7?(I[hAdREW*[k̨FR+ p.>+ec(uiڙ3ٷIÀvGR2VLxfϯkyUg UspDRB<)\h;dw19]ÿ~ԣ_R4B#l/>WȜ WΊy^"\G+hÛTR.|9s[F^9JObR69ث's3K&Tࢗb9$hjHAS}Tԭfʷyң đxQH9}3DZ}#М 2? .C"~l֢g2['!dɏ-p'QEʶC^hKѢ!*͑ӧoqYBrFc.ݻ׷ڸkamֿ*Q̊wf fP85ňJRwDeSG=ؼɮ *W):t=OoZƧ2nzMj{'YN/5čBש}Hֽ4~g6ncd+F*~>/g(˥̥Y%ǻ?.ǯ'GZˡ&>Sw۵c!ޕh˗Eudč0h|3p}9{Ewi17UY,NT)a%cJtN2ok_9eݗۨٛSKc3Һo[v&T$?kK4kv=@Q-)G!CpW2|;N=9^16K] 8Z&Y~]>y8*/+zQhK_݆Y#o=8==+_Nӹs,)5]!I'JQ>h%65FoTb׷JP$ׯig(ݽڄrutPN1prT X|ǂ$Qڌckn(֣{רl63l7ߵz!~R.ҳ_$ɢ @{8*n0W>tǭ,E)TxVNH;wЎ8Νʪ$ݴ 'TS4'OCkt릣CI'KIK|}Emz2sJ_9}'YbI}>dnv [֕74{_uK(I;-[b9_.ov9e\;F2\ d *ȓhC|Vcq?65 +QEw%e!dYa[|p*{smZݿ34;F7I 'Z2v8ZQry eۘ^5QF?Nތb,j $bmϧZn3GDN5UfaC$'~飙Խ%%rF"OcŻGlO¶BQIhRV.-kDnުT3S)?[/C:ȞI񿆎02Q!SZ%k|"6?/QL/;X?'=Tqq /W"Uw!m2?>ki=Ί0哄;l@Uj^~Ƥ"`A 1s?|60zc5PI/9N2[qn\˅"`Eqn:p;,jp??\Oo_->aYF뵎}5~Όe#R̕#$\|ecԌ3d =uӣO+^/Y5I([5 ds=sof*%}Bx^Xc#y(\38yKu8"$HW%UӧjoR|P&vP7| x?Fh^eʦf`ʃ[2n1Kϑu0Ek8 Y%FASJc]p<;GONC M"C 繧*qz=ьk(dRKp{+2Ti-Sk29扖1SԞ;WT}ic{6ޝ1D\$sJ:ݶ"8WR.W" hmYN? 
ѶDvхS)lfHI,Et\*?zҝz; ӭN^¤.ddY K+Gy8g[E)9Tyy^x~wJ8yIKcyVKA: 6scsc^aU:яd}n>lӊq䮓ɸ%Y$y1]goZ*J2>J2RRkq#{#*'~S^FҤj{>o9Vo8cm6m:ʪ嵒l.Rjm}ŦvJ2l͐k{Ziܣo=S[uȶ+IW\/}W oHԪյ_Si[-#T-3?v,p=LIzL__}7mgqoh+,VnFOjrfZ-YF+GZoɨȉ14Ė$0=+h%QOCռ!oGUp)8x=UʍJls'fΪ?}MJ5|_^LVrtmWM{ ^r3,0g#s Rwra*qq0B[p|'N8N*od}HEFѓg9IQt]޿52rNq^s>چ!B_ WŒ̟i2#+cԞҹyYSJܷSpТV; <>SF-6UTI!K|b>~kXƴou^EG$F>!/V'8z(FϷcԋËhrY݌S<ܪi59Ehe|m'ac+ud¼ )K^}|ym 1ïW_Sceѣ-z1$9UO[S.TᬺAD.7a"m#8EkMZWR^I4aUџF+ٮ4㈕dv$k/ىef/g⧑]JZu65˱e8}+HM-$BK~x|nnۧe9}cNz>kgqV v78=AJF:rq9~*tRŤP{ǒ5U~=ki{J84y.h__"վPP*z}wnagolH%/4]}Hr.GRsSe8# Z+q!g}YQu2J *̛uq}Z2yueWBZ1J88焗Wd7Z鶲0>vzXU h1O=^yr8Y_БO;\"lR\UZa tW8~ӳEF3U޽ I{(l1gB1cȮz٤)Rq] Xrunj^2,u9rAȐy2 T>tʿg~<70c9\+R䊍5VBF7oW' rIKR6wl;͒3ZCQ8" o/'u>p8LpݲOJ横&[)rRl]W᪋)򜥨4pCws+i9JWQVD4{#5ZtFƝHBZ}#JZ~`ȹ:ӕ4TycUgQ񫪳{־Ϛ:1ޝE"{xZCǖҔi¯ZrөVqaUWi#59bЧ0cU#.#G>W TUf~$3@*Q]4*J3FF$|xU8X\`1%z$SK`O7l nv\ZPEԧ{-uü%m}!wYCHtEГ慒8 di9q9sV<Ҍ_.GN' RHI;-I̒n+3tчfU{?1RGo♚pc$$6zzd֯F~'iZ^"孪Ƣѥi=*oj➿=17Ԋr=9Z5}HmUG{JΏ%4GUv-'mɹA:T*rs&Tu}e6 = mJ2FrE kn0NxjCP΢ڗ!O?{у6j#~\ZQ'nR:qIuO4BѴ4o'"7s 3z⪟3z5)FkWVK,9ܽkrǕ+jхr6Pfɫ,W/u b5?ʻ[q#z;[*q劓\YFm*XvIrTN1OM4'fmn@l;Z-5T#sCQ{X%RWTRH&]e,em%="H|sCf٭yVCߦrqgk*TjY'krfHżG2N93qUs1iIi#7,{w+YFSp* ,Gu*qjNϹV+꺔偮nEب#Ewռq0KYJ}GEv *ɑ[Kؚj^w,5W,gUe*.q}3\!'Ti˰וhʸ =1]ReM$F*nTGϜA=H k'o#<:",~ +.}**?v)[nJ7u_e-hO,`gZSl#^Rov qn}+Q-RI=Ǚ.E1m=j3E⚔YlC8rtTBu?wk: iܭ!O)ԥ(3.T6ڼ`wMg9{QũlMZ1k@:xjy"5ROA FcVPL -8ȫ%7e,_BX~nX>JR +'j*+ʧ5݉9T/΋4mrffq}$CR ӣb#*tFWzkuӔk՘{K2/NZ\X2EN24Zerwo0:*"RBiԂVy<ϵ#4sZZ-]m"mRI#6eSB{Qs+Zԩ=V\Dw+pA w?4K*J2Jܿw_QO;`yw6jcǩFUe Xس3}AS(F: F-݆$6,ї\ƣU3K RMȭȫc{^߁G mȷ%Hꨅ0t8/jR- UdoH?+\c4QssN+qAoJửtgii*zSF} @"dFk ˖3X28G̟kHo;z }G~kv:ҩ|I(mct8۟vr=8e)JGD'0}#ZrMvJ9'rM\'1Ufu* +5Of猣R:'Wk*C*q_'fIq<i*rOCjrw yϚqԁָIs;hN2iqISleWF*;VetH!Kx1{8\澨0H 4yfS-y}*%WLʧ&$#I+Y]NৎG抓Z[v9ʂK^l|Z4r\N)*}; {V_^dNy$e^IVCFT}+JӠ*RW\teᕔI`0U@M:S|:~(;% KCw|ecr9RYs)6 ,46^V=mO<{TJ$ g`{Jh d}9sJ!zҷ$}" m~ffP7kU#˪[R<̂%gfY:W^[mןݱѫGReȒIJgMf9Té?:K_nw9ȗϸ ODLWNQwڥXʫI]X]G d4?%$} iKV;_4$MHYX/ݞvB4i$徶~Ie,@'%`t5\uetdeJ]ۘ͌0}{5=4GG?\K$tvrBȼ۹9koc*zs|CK7DT`H:^cXFH6X+46sδrZnh¦"6IɭbߺMxNz]5tFO-ˈ >w[^=iBU9 ZI/mt=yoNfOlMG;Fۣ2a=ET6[)At4V(U@X$7Aס Uao9XUqߥRDr)41䙤h(X9w㶦|=JtZLt[qzϥii'ΈC-ˬ~,?1e~rG ^c)N^OTZ/OVGl,tCr{\43TJմKȼOU>UiϿ]̬I-D9,]>^S TFjJ~%zqzt7쥶*r9]C٢RvnJь.vǟ_d6Ya1Yݴ c۵M8Õ佛R^` N7/vsj'J}i pUF>:dJ^^qJ4UA%ȅ 6H<El9cԢ6֋!V$8۷sҳRkȣRQ҉>SM2\Y_rKCIaib%![̦)8#OGkb@۴VMϧҲH)ɵ!ghP(j=R#QN,,l!èzs{nc(׋v?!s 2ImUhtji[vq'R"XZF;hk3; <^N%ULEqu=:S\shʧ?~tM=\I^{>j.7%$c~hvxR;sXK? 
"?I儘lc&#9NvȭNs̾ʮβ vgY_C t$Ok4k#.לw&c6qQ.B(ηҢCl8|Oc)/O(=HZ Wo {pR3oGHП7\grkCZq=7R)fI0 'p{W/4SN5v^o0vf\n^-Gv=_R1cjcҹTrj*;cF9[at%db~lv5Z2V x fioʙSz~ ]η/Kko5rѓ8nUR~^KBeaw1ܾb +h c%Zn6Dwuڡ2' `dc8ySt(ӭ'(n&Kn,y'8N43Q(u GX۵XN;(+\ }wI4X$G5 ˝_Mx1iIg+ù@nܽ<+GSٸ HiөNК]Wx~U|A9jTNIB?kB>^Y^ЂC,\%N#޺CfoC 18n4_Qpqێ+/eNzKʲã -Sܲز00qہSNZzt+YԢЎh Ul*HLAߦMtQf5QJVI`bgHwa;OqY֧>V65ӷbIMgiXANy=:7+ק(Z|Ce&3+2 'UF-Wg *\˧@I9X 32-@#q?AUnnOwg59Vu2·t]GRrߗR[ifpk Q\~YQz54kMUTFJqFթ>HpT[?4vǑlŎ4|L ޜw<%-{:sC3c:{)BTے%F|Y.0QcϿ5y-R' _y@pӋmXxqBAxne Nx+oeN2Nۓ E$0%?5tySӊqhEOf[#[SK&D-1}-2OjRhkR劳M ZTp nzRҌT+܉d餞 6~O֮J0I0:4lk4d,Ό0 }꓏UKU;Veqq;cg7RU5)Em?j-$P.#`~ySОj'i%[|eFfܠ3xRMQ*V5@pw!I׃rlR}gl1G&AK$C: ۵*EegN2Q}+%0fah@߿[RWDWheug6\_TEs%`T.H k7m6n<)F[sM0G^cPch^1$yv>&ԗ'x"_um1䓂p=]+3-rmRG!Um#8CQ/")FޢK IüU+ tg)qqG,iw"y/&+'͵خg JIr)N+wE [E?39/aQ])~O1ӫ ,?tc'EpI q3o:_If2?ҹj(^f3O^^62>;GJ2LPTu!ZnsNݫeש,4)ӳB\+1y5a*{=ȌTSHo{R2Ld xsz~$Ib!KvО `4Nr6ɬkJބjqz} XgRdvWqss޹Si]pF )s7=̑M h;Ga^v9]I9vT U /nOs8eJW]u}c݆+ұ5F\->b>0F8#JQedm'.Nih8"Ʋ}ָF œס)J2sSjt]<7JUa+`ZjaFn!mڪw/9<㊆Rף*'R$v}-vy3>q8>VU#vcQrӿV]3[Mo]ӂB;52RG=8U9IIٳ̩NK%ov啾\{X|IlA緭qUinKm}acSKr#Ȼ@AAc*X|#Tr\zWiFpWtfܪq<4v[F*^ɕʩ,b3,e@,p=}k!FG~j>Qzv*L/5hd8eV);p΂eR QAQK>:_J)NVKM]: bb-.;GQZ+{թQ-.6D->T`~e9xTb~fucu?Ăib{xeqctO=RHɴ?he.Z7L2N7ݦM$VZH Ǟu8 eHӏkS7~!LH7.>q?ΜiFTm{۩ bt"+VKz֍Oޥ)muqg-zRsF' Wxzs޺%ZUee* F<",S &NW-Jtu "o)ZOB?28Zif9VB9F;uu/j-=s| iVR\]\5fE.ӌ`8ZҍXNIYuJ~ҋ\wbV4ث*S ˥:5*ն~~yj=̑yq0re\s8g?L Rpn[k&40XWhݢɂ9ln$y93SW]Jt0_uJ-ofV2ZgEZRvz=ߑ:o˲9%LǹXrI-s=23z Ō IKt7W3Rz繖#wDogpIa(uױS改kjȪ1wm3Wɶ?+r[dۧj#R"*VSP~uuu|B@̭~(iqݚމ;+JYBpb>cgǥcP黶g^Cx(;9m8*}? E8Ԍb=O3JGX2[aVUQ5ݧjZ/扭7~q47xqM5ܯniim|r9 Oj5⻿#)a|ܖJݼ:-!O9wb*njܗGTV W Ivr=d–Q-Nen<94.@q=qmE~g6"i-41ֿbB<3Q׽ghfXz"Q+<$񓓎9UE6ݯ\褙uf+Ck4x <TjSk??cfH\.Z[u\!46[h%XVm>TF{do䌶~f4Mu qOݖݔUd S\y3'N)\e*qdΎneEl3q_^ueRm?Ng[=E2!N޽w7^ƥֽ݅Nb:6xDӞ"n2F|V9mfKFIl=:gӦ}Gtz*S粋9F OWf5*@$1ƥ buѧNӏkUa<ÁBϷvBֶJz9t[Z_^EYdAweOqބe^u{FjїKAZO4K7?78'LiO^~#|DcJV7.Xkkq5$2< 1=)F0RJ~?Sj5o_~ ^kyv=Qs#qҹ5ٰv<4ITiW:TJg\]ܬqw$aG`ƛh▽LKPokYގ#BG8*I/FUjs(;[ɭJVpL+o_jed9a{Nho ]ȋncc#mlHqg6z-?TcImm-BZ+:e^M㷾+9TqdRZr]Utzk{Hs0>_0 vx=1qpʤڔewUb8i0yD7b8YsSvMXJ҃o_Cj)p\;e@\Qc=McffZ]:7ֲaf#:7w8Sj>q .? 
[{lx\QBG'9$n+j2ԩb i ^]Wx|Z穦GVzcBy63ohk7V*\M qV5$3k߇kcZ ;+9Ϯ+ϫ=PKYdv6dbjv8zCֱ$G-8ݖXيU3qar.GrLw)ghFz25 G \`A׎Х:e&Vdo(kz~;Ο.NwIr37{h6;$7sj%'d>Jwn]nn@R& 6}ϩ{)ܥ~^<:SLeEC-S) <x=jbrOK_B9WrmݩE^Wbc_2}p;v=Er*#Ѥv=1s)?H*ZfǍu9>?$萋u?P9;jKtNSo?5䢬rV/B'x{WH˘Nh87`ܲ1?5R.2fsݒ]c*a$$@Mp֋ȚUv G9̾J7v2ܓF!x^^"kNɅ9F,SR@o||$^D}cT^RTK:7.)pH9^uj2xn{B=.PɢؑnּT&*RUKnrt>߇ZqhƦLW|Bx 20;zqv5ӧRpcMp1fWV-jkgjuF1nR!|&Տ!޼qe/}}mKKݵ Ha9?P1*Rڸ]F#21'8>Q%' /y(܍M(Ѫ#^1~ ߨg]Ai,C&ciUy?5u#GGD#z?|(Nn=Q7Ţݭ5$7tY?q֕T[&M%(Z~$2kAXt!{#$w,Pq$C5ҳe'Mm.6_:%,b>S8ϽEJoN;_.{ob!~޴No.gk}69(US6MO_0y6Z 랧峿{-abym G~}+yB?ݯT3yʬn]&kl8xhE›U]]>GJtcV=uH<H~FS*dSQO;=謊lʤ/U1>HN ^R$gV6Qpg>?|b(,ѶFF2=ZGѾo_ԮZ owLZeܱcr{w:wK[W(E*ͽAr:\j<^=2qRTTb.)S{ZfE|wspqp+PR:[&_ X$3|߻2?=zUN1_ן-t#kS+])㯿U=sKDO@t;]ЈݘycgecQ\h+FJ]Y$!6IĜpH=3ɨ8\mSb-_EcI$sw(Ÿ꫈JcmX٥%%A8iZ5T{f(= à_|cMGn\kFq$ే 6;z֔yVw_*.\]DxcK#/uU*;9zRs?Z46zT#."]w1em^gⵌ*-Yk{$jo0Xksm?1cO?).fzr'u~ \p̝Nx^ϙSfTR?}bAs sYCߊ^awonߩ9HEMʰX#wcNY9GW_ifS;K2c|n k~l<ת<:tkЯصvpF(RG^;Nk.Z>[:i({u*6dFU#k(Qv>(ŌM;~DZXTR1/6Vl-.iuVBR~?_\B}mF:ӳȎcu+@dw1֔FuWy_$̸<aYQxI'?朮^*Ҝy]̐&E$KAq4T̮ݗKNѿ{dWXmqj5G{cG&J_|gM6%y78u .Saw~n=>c:uFsceSߗz&}KG?k6(S|~*M7e|OѴZJbVH}90%Nªc8ש5F ɩ+ FW6h7j˟H;WdhdSJOhXSnXWKy.$8UaaONC3[B4(sUsϯN3-΋.O큏nyZF<,TLg(ubCi>Tr׷z98F>өԌqFr^@+ps'9_Һ)ԍ^~^ӴIH M%G=>VZcZ*U(KKs=6ݣ?Y%/lUN/Mf^Aex"#ˌ1]^QRܑS^q^QM4< g ׿jRg=J\]X<)w/Fo2BʮI#$pyIJ<=,=o}JKV3.hu+ddxQSn75~F~zwI+ʴ?Ҷrwz^Ѥw]~} U;O8]rc ,J~Vgi8:e>U|c.U8Gʲi`0C$O<;{G74S;SeR#$ǖ{Gq%Y*Ddwg9\QGN;6K$nyzu\>q*wR%qmy#}+_G⯿B_f#32e8?Z¦"ul/"M+1[屷s8Dk.{hF-u"Z6bWV,2';ZdD|;ϥrJաSrgwЫ׏QOUs^b(թ>yu&_}9sʤ݇xzLdV,G=sYהeSS|Ɍ3T-9ZJ# H?(5}VX >6}ZMFV&BKKl2gpBٳ VN34nS/ ;< 9 zJj1MjZR|bF9%] sy:Y{Rdj%>gq޴撒t'eBDRL]qO,Cp*QOn%ʖ,)sֹJI㧉 Z267J'Ulu溥RJ[ZqEA޷<w`tc92c)nm$yp7FpMuN6 ϛ$BI<͡'zS{8Ns 1& 7+q?;GFǝpp!cGtSJ:Y-N1*4{ltӖ۱)Ԩ,d+v٥FkI[ӧr1M/fH)/Qק~yO2^ECXx[:jrʧʢߙiɚ%tA TnsoVjm @#= Dm$Τudn8H2I05S(Fʤhԇx橸Ƨ2W:e-"H.Z[37g:\*w*oZoe{;R5*Cr{}=eZ:9na(5|G2^~ϛFΚN'o&Hfy$mY7BۧB}%#,=ogV܎ fxr=O={!R}(_]nIysY!`FSwzL9xE_o/'h=&ؘDeBj:u]ŏim@8ڏ9TjsϸiGovWI~g)ϝxJ&UWhꪔGK#GqѶew8A;V8B涯#o.6V]#نj1}U*֔TV_7 !{c7%463$}G>;VIT)(ϧB縊&H%6֎1;˩Uz[NR4;[pi<'#jJ;ruKr:Tn[ZȶWqm,%OZRӈ,CwQ쌸fnlGft&\TK]m̅$V #n$~.jqoQS*כn|imEپnsrUjLD}uJvG|lvbʊ1sHoB$?<:u8ӌev<.X ևS11b;kՁ<5NTƍGjC$qE9x8{NjYVC/rzTTqTs<R4h.60CןaҩUʙeOv{{VCrČFsz;ԍ:e{i/ZF๥o!%EeVlR:Ք}nawbXUsG.kƲBInI PQ)G'5ThJWv34zl'SpV>{geJܱ]kTvZ3NݣU- XsԌ/s^GG66M*2?.FP,\umZJa-mn=?iRrBg29jێAK[^c UhËYag؟_˚Jbִ~O')LVL/\ڪ (;n@ޱS74Foe ͛`Tܩ圷c9*E<4j-At +V6v ބNE94n* YFᶳdl'+۶~u*Vg'ȖgdnXrGR~ԩԌpvdM`{~}t멵IFQi5$6P٫3I"veR m2Sp7̴?4[{8YV +!tRjJmG4].Ul-V9KL.za߮1ޜyz6c}^+O&E 'd~"Q(%{ƹF[uZ4f9Uy'JPMFJҥxXǛo++2oXw?Q3jR uk[ :$V*BO޲qo)k_wy0&ccӸR5%ԪǙiEo2Ep<>ԔoҝERMKM,T[JeO=;δ?9gJ,*y,#ЎXK㱅H1b ",_*~+tӭVW^Z?y#<ݣ=@9?:1Ims)\GpRO.b߶Ad|u95y#St}7y9$W]cK/Oγ?i˺fK #L Zq[psP^+h~dV3yZƚJ/NMԊZ'۩M :GW8FrS{O\Zsı<݈n3Nq1Wѧ!ȒH#sEs֟5RҪ=FEWK< e|8`$ i٤*VnHdA_ܷuv%g+ܧE4qȎ|aO'NM:qo=9J[CmUn^R\Ϊ^e+i/18f ?<)FVn:nŁqUz2obrrInldiZm1$m9=zuJg[T|yM4Nn f> ?}ZϚRZN߭!i !Dfݶ<`?ڇ+Ę֦--)!^T&\47mX1 {{Vu*S7j a^[/"TDMi̍2p>>Z{=hөed IrE"g(X ;q9sh{NXz7[\Xe9ّ|O_]:ЧE ~ބޫE[jݢ3x\:F]A"&e>Tn㍸Q*טٸKErZK]=ϧ[¤_0*5:Ԏ`u{yL"' : TZe(?>Čf*G|2L[|xkJF\өZKw$WL=1\XMr>owEנ$yv$zϠF""pIYɯ*J<ݭ'keI _1Ig58ӼwhSN KXVb 9cq^t-Ic}Sj7Uwe Ѱ>s/8;U(_%Ewy:7.쓅>'jR5̴&#_o^ Bflg8UӃJ7CJvU_O?zzKcʤ-'%nlzM.f`^>:6J)"Ww;4줅m+/pG HQ$5Si꿯̊5ʜ7v"DgRMYYT?1quJ1)%Nj<֩&Ӽm\IXV8pey?Uٖ`@W.R^E?3֧ЪZ sYT䞝}Q(QKv+^'^y Ek5멵:e)Or3s5ijf~fY#bXwbx0;`Y >URt{2Vښtmfy#]*8iv߹~zW bI >fsƜL$f;ɾeeaQ)JK+b^4W%YЬqݴ<`=N(唣οӭS Svȴ 06j_r?Z)ՙQi4RY/f+9=xR9ԣqy. 
#èRg&ch|ߨϳǏ)%ʫdc%(4rΜjTRѕV3%1{gS(k5c)[uܚ&7d5f$ \q=vEN[g.! +W9ڽqXԧ([ݐMg֚ 傱^s:sTOa2Hjc39@z95%nf5pBd J}5SwwF6rFۆp+JcTe iZyb3UPT9+ ;[mAOA ,3yedVQS5빝HrU%/=21c`OǺ{c54#xcJ8qkegMb+sӯ?q(ٹgs\nwSH ?us{8$ӭ(ڊ_[upۯ3Ipq׹<֥F=T,T\Kny-d2R<\ϸ4.]Z42RI!odJUPpJN;W]2;{[d;rF9,ĎxJ:8]|#(촙vaj֦cHWwC1s?Q唕:e(iՔnh%A{fTS mN-nQm2 [g϶:zbj:i$&9䍿 mx1ִj>D9X-UX)?olsœJrM=K`r-ĘizإNq{љ#|v#g#>%r~]=F${"dFU ̸AҶ{3^dБQAn{gUfEKk@O*j|i'ƺ!Vq)jODdFybVUi3 VN\%Ԓ;X6mW jm'+*N4Ӱ8kPÔnWEhyJ #=븣ʩqַBk$O${1SkOS(R ԲK.` ܷ<1׷򨌹i3J_m^yBдvrzҗt0 Ӫ8b1,'ο61:`ҩsk-kB2r]em$/'hǧ8[{U o k=sZqyE~Q(gRvUExnrVQ'IӴvdm: msHc˫rO/4Q"E]/–tڳj0ԙar:*}Ι y&l# nܱr#K/maI #;: g5-B>e;4S?r uyv1zrBfg"$S2x'^U>XgUN'C>1+~wR%AFt ѺkV3sQVԁ(ZK)Mh'$rQK}ء0 w08XONo!IT]`r?JѸZ!TqMUodZ(UwH2@*PO gRu%Vm`!3G'?/aָKI3%ZykH6Ճ8cENZww1]8jMl\3m P0{c^uԤub+N =ZȻ9%vi7)q9=Ic)F|rԣ[ymmE24?•G}:r4@2e ޹KMv*Ty$L~ B&QMm=hu"bF9ӻՕXʟ/sFUk[~6XIll_-3*YxZ𔚶 *yO"ȑ}$۾Os9Fhm˰+xe29=Jr\Z|h 䰊l(FFz^i΢>֩*J_5IPžZ3n29q*#.TmNӍ@HPyȼg?*W'ڔRMzy/2{{w4~g++m0>NmZbkesv-Vmr[ڱeȷg4Sviľl7YC*|[@Fڌ~"I$P-WIoU`73U#]gun0\8 csI&zx\UeonM^H渊""ϗ C'qW%JosYIN;5c,e㿥a'- :q %;Wz@_}F/Nb0csq),Y|,'䞝QkDVgNYT)-oM-&78zuc}taFu9:k{ʗncݓroXʭG%άB+=>Z%,s+n.rq0GQ^ RbJǛfxٮuNAU#p댊:s[v*u&WΉsWRKm4ŷ8/3nKLmr_ҽl-:|$t,-jVհd)\,2^x+цszTMz6pd/U"LsӾs8h;/kib@m|F=뢜N1w/eyfi2#%{/PR%wNPG32܊w?#x8GWv8D\ȉF=4nx& eNQjԒM4Jlq{rN*}]BXxJ 6C-eͰ-͏QN71*nUk;xR[f9<+Tg]RFl(>\pܨ!O/iwcsn_@9Fk͹>ΧkMm$U[t(}QHRizyMVFKA\8|׍8~Z6>OI;Kn]d%Wg=ƯFV8̕zkZE(_2\&?Z S~w]k庞;hK̋#ۤ#s 1#xU0Qͽ*\-nD۩0677czSùnaTk<Vl9ӎZrMUUFQt׺o4ҴY!]K7*'~X[ۿ2(Wf-5[S}k*4U%RM4rZ,Up]vrx=i:QWƘzΝ] /'han'*E:p~fܟI+~dPFferW*ufzqi(;ux>зI|sdsGw{mQ֧g-}U巅VO+.ZydauFqOڽlY>d[bm@!Eo̩m.]bTU3QqOe}::X_amsc$U,w'VZS۔P(5-+ۼmwsdu灃jQsI:f:vM[D bUf_,mSan>i_v(8B&$Vm6s1Js+~ҟօY-XϘڿ`bҧJ5bpQvWߗ!lߺQ WmDi֓Roo ЎLi#T]9GF*yҵƴ?yDUW8,y9\|Ğ7UX#ho5Rw'(E/m27 ZTJQ)ӔZFܣ  )7c r,w".T6$V NWnַͧ꺕R{#b%$?9+>(;' pnGUc5M/ 9k{hݷ}ė-l#//$cn=rUMVkVtTv ΋Q YN>a}f)ǣ[8Դ YLda[Y5=ڋЈˑj3}2f8L]cR:O٤=cS.7s`$-k~i;sBZ+laJ9_EƌdjylE\9Reh $eA9[B#~D\Ɠ_>FW5bV&esڴkoe1EB(˻qLGFbn2I L|EW~Qqۊ)J#O 4;1Ӷ{[rE{2sFUJ/rԔ:nyWxs>Zz3vWiE'hr}އDӕ;QZF{h&-B>xTJϯsҩ.W bbNL|έ#ׂҴjYJ)\@(Uu>9!O^G9ڱsw).`'G lBJ<7Tݲr9ffuVWldF9< JGc:,mᗨ?P{榤~\l0+"gv{ѩ-Y=%KȜ=,ʌ6|$72R}U4ahȞ42AۗC:4Q^NXI 'nՒrIqM[2LQi=+iB:2FQFIPF#Hj^OpN*2Z`]61# 7u-N\w_V+nEVrčߥ {r)}7~b%]5S+pR0n)mAZ6r݃`zOZQ2&ӨӔc-&Fx#9Twz~)J_ͼGY0(`±e.TgSR$9ՒVVXSV3ә\tDUArrH+ϭ*okSJN2 T/3v[8g(nohisIa#s_J um|ަӖYա{`vDžƼESF-ӷEٵ]ۙ\1߰ms>I;DZy3yT+*<5Ch.XJuzF[ڋTdouj$?Fu uO7}G==k 8}447T2Irؤk*O,qp*'(ճx4"gYna%o vz7dٔe*jSoж[vX^@zd?*/-NH+FOWοRrJrr9=~S^=x*q٫ KŏǺoH|9W"u:Y~gj+'Z;hߺ >ltO²')*Q)I^hZa?- CIRtJɝ^H-Xaj8OyԲwƤe_gk[@L~V8ss>fFROG ^L6Ibdi9#[Ӗ7**tufˠF}6Rv##RT2nt{ƹᶁE_G+j*)Ie&4|7UulzU%R.c UIV̴hak&V2[ҫ t* i߱xsGE+del|GTB5e{٫ui8n6,4Ob0tmeok? 
) pF/ 2p{9sZvS憏z&pEdW-$skϕIFbԟ4+dL̻dzMrVc-:jN1lFV0,_.:KQd0͊Q̹v$6SՏ U S|KM-g_"4>\6J!zջQnO_jҧ74\{>#Rӷc~V+D+6c]2סP' #VFʱw6g3wʢn)R*ZlzQ)+岰#zqW_'+zg)O#wN<0AA"z⧒R?iZ5#U ?Jڜi#+eQ9ǿVќhׯŢlF Y@=8)J7w4q$`9$m)J>VH'݀19O\ͫf4cv5/}Ϙ߇5ٚV}4>U12n 8>4BMyc.lj_ḥe gWdڳRm]ѭtfT0PIxi[k;qc6v0'\|1dj*ֺÒ݌e!iMom㻵pj5^tm=]0|0oNr?Ou{JjJQZ'M:%S!G$yEN@U'ϸQ(V=u0$|UJ26ҦjנˆY*4fm#7Q-XԄRIM:4TeS*FɵYߟ1Gek.",E`͝1ew[c%YiNRsq[.ggY8D50m>=(%E+:dŒ8V5;.6qgԺiŠ^ 0f(HyDTz~ϙFuourt729I?)I>O5~x#|1œdg7q9SNթ1Tn@pd\zϽW=4Q;[qv$nϘzg߭ar>AЗaySR"̭~9JQ^}·N5Zw{ditܾv7ʤka}N\,8{ '%F"fy9WVfll0ѲL8e/I4dEyxI#߶=MQM8EsYO%"YU;/}qғRnUbWR$ ܎'EcRe$|>WZT%x*MIa?y'[Mԭ[ޥm2{ЎǯjujGܨsTsUb7}+fINs?RNMV[u8H3F7r+Ogɱ!h]M-E\DXGkz֡;yk2)!cK6$Jp=Ir埧Uq)Sw>z0586ulj մz8K\Rz wG\Ww-:WÈB}ާ)a[xC͖KY -Sw@:8G1WU|D2V/_ jiow~ ?IXK(v*Ȅ)nÊsTwөAV&-wͺE_Ė?@lw~<O6 UU?Eqpc1 c=b T_JkZu#E|:`-!os A^|3S4jG4 T.XB3Ƴ%J[VIb GʭvJ4\̚TN*H\.+ {-9ksҧ*ܞͭۧoPTC"E$g+(*ot2u0J̆G6Wnfl5q}햾s{JT֯@',Җ 3*8T%uӡY]#1lVʟǡ 4'uqhB"KݲYȥy'8,E_M1Q׵˵梁pKisw&v q8&Uqz-H六e c=iӕ4(Fmɀ3rT+yog=zMXTjQ^͹9|WK9A~%Z(J1V6߳xӂOsu>IJ?C/Fm}}iN/OvVѐ1XUcfu̒y\SO^+Hӌ`o[Rmu!^nգi76݋=ɪKWݫ-o/JLdH<0SrG\^\=v6$38|F K}Xb4fyǰOjyY5(*JoT ut t+cڊJ.VwG-n^GvV-xfh[s`p9>S*+SIѭ0ZMm'ڣ2YĊ\g#~>C ˎEn]x+#J$e[6ќŰ3kIԅ*ǥ -ܼOɜqm5TU"nK:[l#9<5a:Fzz3")O,c$@ q+TV5t5UnmES읒ƑJUq2E41*Z*I߽ANRCugo/Qf&L?tII3FZ$25Hc/O9뢌ydTeR\61*4 HrAߌ⪗2K-V[c>^xǵusٻխT}f7[/ȁUoh%UFIz >YI{[9۶ X'H6mgq9~~ƧSjLX*]t>m9V6G,#iz̭(.`_.NC(Ts탑]J+=1*׃d7g831?tӏ,l%l O501N/tʧNj"#xMU0n7[ۛxYeYߗ*vB%7OB[JFL-1#ҒgtTm m;KlHfbj)oqӍNe%Eqjdw}gCGҶP}uZnd{j$Y gTɌblȦC(2DN6=~(Vou=i?.[s Lm: ^/MngV>O2NC1[ܪA9}zRZ|G.&z7˸K7 MGD\4^̡QG ҈ݛ*K&o] < s5Q;(+yΔ)]{_.5T@Ž1SJ1ql猧G݉_˹yT[3Ts]mM1O2SFȘe?3g͎894F2]N|jym "ܩJ-dlX)/1mMO eOotQu-%ٙ>~S)S{3qQ`6hѕn;q L.nDlOn'VBE`T q߿J*9J*9.c:|*3NF'v:*Q{IE*jrSt 1ֱ4+)U=T{TrT>nTOox>UNlڵ^Vd{ؐXEǙ۝h7Zj)_́o~p|VeU=sw5_t`(Ѣ̌b&6u+ն)Seݖ{CxkwefOT _JזU=:wHVrEW?1?^ߍm%ٿ3L6#z1+;16vLF 83F2,* 3C#xԣS:&I,-њu| H둚u;35a*\$xH\^7t翯*#SJ{Hӟ4!κ%WΛ:nq4W2;ȫueqF\|۹3۞HUVz~5IE[ً5Ymؗ6;XԩWE~fxhudiX+z{ N2eΝEkQ7scv>ՕGRO&*aƻأ8q^OwQSMRMV׮=#[d`˷GԏazZqNQPBmZA;YqұBDjv+Z݈fvy!Y]ѾFRx}x>m)S|XsT#qAG3G#eC(JW'1jJNK ݀!V;5ڼ~chS&fi{y_G,?_On5v骕yIr*)FJZWT6 3g9\eiw)r֕gd3\n!m2`>zNKu:ӧۧR;'UdUad9;q{oJ|RUnZnԽi!Q 1TJ*iF58d鵔|cN5їV&^`Hbˍ E}сǟ_Zi֕5@(" @.>T|V:M.6io uhMrܩ֗7ؕ# Oy Ogp#Ƨ-G2T{rcGsɏc'#c)rvg(I?vbOg$XcODz-Ί5bĩro$"WvG8ݣ)('gڃ]#Vo'Vk]M >I>Oq t9FNSܶ"VK}+a7G~nVG,W-ܕd%Vx+h%j[Km*;y6Y2KcKHGTiV~$m ΛxڱӐ֪-hJ0i^lUS6"ߏ˚a--tFKLj ,Yw*;{8QzeyY]%ݐ˹N WEJqj#.U8k"z0Sd?ʴS5rW/x ΰ+)(H`}8=3E_^U? b8 BY9 <'Oku u"./b2ifp˗kReS?wʷu $qHh8#<8"˩N -uvfϧ?()8%Ǝ*QZo*+'+^?!ZSeF-LahvͅᤑrNz~Ue(Jʓj4?Jig2ȿ$cUۓ {єyޜj{^X? 
֒/Gj"A`rF2qdsuR,y֪I Gs(-۾zOQ*of CחoMKdo6Bͷ8\sۧODCk] )mt́|+8=xj'R^Y=Rz{;Dn,_FG#jr)oz8֦֋oP:e?UEO+zJmKNkG]uBq G"J3!_#qTѫi%NU\O>&qeBVyX2wϳQjoCJuۈP¯oO1 Ƿfs^=uӱk{xZ=?BcrNZ<\S-ұy-|{0?cRU;EmNa|v;hE-^d䂿vV6vao#=rOz:_nq:>~koa$`Dqzj=:}RjrJ&Wc*bcPJCF#=ƪ#}uFMKڟ$inE*9~ՍG w@{" prn;J-[^]'ȈXuܿ1FNqֹ=;ogƤ)bEmYK;![G8*GCeE>Ljv?}R5J1''-4kk(\R(ᛓJ+ȑuX„۴.dm\nU*E;^,Mfѷ5,ӯ^ r9V~EY-Tqk9YJIYv,pG#sjwy*br_[iƜk=c\bE'y|y'#1ֳ8E/OB[DezB/C.6teV7ʍ%[dnrڰdhmӫ)8zU.߁f+onp]ƯϦ1SJ;NvOB;kt4LaU_$( =N:zUJ-EI%OD]~No ׺,x s}+.U]ȍ1B"bVѧ:E&e1k8F65 Ck4,١])P9_v6i3Ryb|E|ɻ׾yN.>ƌgM9[g4Fq!9~\Vښ85a۪~QE1c+g]3J5*]=ɡ-ܤ2Hʮ|c'׽iԵ2tJP'}}TyqJ'h3~4 q+y[ Xȭ&-YN2Mto=(yk:*x۳oήq/5iӍR}|hnN n_tr%ku&BHƧ >kO  }SJtܵzHQsBʿ|9>+6.8өk_$И|XvNHdh9q~Q`ԷF ܱ*/#E/g{n3LQv9WۜOƳfA ;XIQ;vs?x[&dW<ƑZ7Xo3p!}sb(]9KKKӊsnJ[Dj.]͹y#kIrG4-K {XU9Mc j͹eTKef+IsϮ>{m PeU#787uc:t+ɿy,T7 vfޱNsdPn+[qBҢ7t"<"X-"BҲeNdȧӶK4Ѻ;a^ONk>kiKWa !ʮXγyOY&bGO0ѻ`1<y+*Wj:Rc2w5/i9)RYrYRCzVU#$2VwktUÑOqJ<[NH%F@oc=GpkWV#lKg+:Q'wFZu$[cHn$]as'rhWxi.Rs\I=~J̴"۸*u \mCʒm/'M?.ُqלV3NQmnTt$wS<8U9 F㎠d:JQWϿE16" $jMh\CSSz3Zk[D ̤q9\.fٝTҜOfy?2Ǚhl^v7y?z5I6S:ri-He7ʬXusQ{}~FREߧ̮(l1# qqYǝ^CXzil[UU{۽f+4u.\=uRiHeE?7lW=XhM1Q[./[p͜6p3*QhZISPzu$h'S<$]]{s׽cy]5W$m%֨@8N0z}?ZN/sgd妍_i;yw@h\ЍN3NAZ8DN!S<DݔgnJS^{5:zw^EsH%VbRI{ⴔi'u}owԊiGY%W8ִ(VvF*.͛iQ'zgyjmUpVupwcɸD9ވE82iD6؈f`0a͜q]nGUYBeeHt$QX m sg?N@=hTrνYB3MkNȑq+v{~?eN2M,cNI!)ޑjYzWb[tRҜl'ؔ3"u|ЌZcԌdqB3/Ϳvl. ?>`ׁ*$5!۶#F5V"'mFQ:!tZt]Ǫ=`B\Xz,sJpVA?]9+e F.ڝ5RTJ6u޲$A?ZykN1TWB6hw|wt5tS,NjBs$Qʫ @X+m@]ۓ3NGD?fa}wБM4ռt=O?.>YFȯiNqGCff]mcLdhdwtc1VF19XX4c2u#s[:Ւ%pX$c*cP~5inf.܋ʎJQnŁrąa#sdߗJ:?s:(d$چeXϏևUJ.]4!Ei-H9O9hƣZBL٪Q\NJ^Q3"Q:h!|ݜ0#G]\єRZ>EVIa][&66|ہ=޵I˗kjxz 5Q.dce-I<+,~eYw.rFx8{VE~n]fČ;^7}T2#F-Xز-/݂xs«nUiG_a=!Md9SGAЁJ5O*Z,!,RU Bzgֵ|ҙ5%cG"ysN߼G^1Z>XS+3i/1}sX}8H\wZ oRr7gsr LէY 7VdihE1,rGұ; ~ui=NZʶWLBnW">}Gq~473F.r*s <ͽmw[B"jVۭɮ)!xe(=*b24RvoVI*dO: vڛJVt"I3vjGAL(3Mc7;=JkEXZ#٢bڤmeg=Q%$k.d3w+o؈7 zz~iB7S>D?E-UV^0q׬ߙv1Fu%B^+7y{py^8UR2NwvF $0tDq݂ggk%]yηCCy@m}ҕ쯅EÄQ5#Az#֩ӄeY幵Ē 7i: g>8j]tBU)_vEqqm;5-Bw*w/~,jo8g>7r[$6y6syciqmاVOnW7dtٸm GNߕ*tK('E6י v2+(66P]]iR|Gmi$hPW==9FMBz7)ůてgʩ`F2xҶIo=o4sq2?/V^QIJMŻ%qfn]FޝNQSK[@{;׮$6Mɳ*s3?VJZT_r1Ejc1D*s?sO-{|>\0v)Jn[9sBvɘx1YNP\Tj|A @2m̓ȥ/yĺGon?A' usH\hjV;no$Y6ۅQ=1DFQsju5'uB%J/Ѻ ?XҳR)Hy6 r1çBz6R捜]-I9t,.7m>^3sٜ9jk=-ߨm0#0~4dxTU&o&s1pG=+)9BLRumXKUEfB~NscJ^u:)b!<"!#X۪K+(mA\Ϡ9oU*FO=D2P}~͗){E$dJZ?3ӟzRe_)IFkGdCnTs5MөMфkՅu YYXyx$cUiF)yi#V5c*}Mc$qz"i Vb`a۽sʤ8B$rE{V=ѷnr}W/'2hP,tџ]Yp${9W4le #q^w* Ojy#Ъ,oeC(Sn8K挹;~|dv,I{$V IeUǥ,EE}Su%խ7Lq׿ 7)s7PEE5vjD1Bw,mMN;?j R]z-UEwٚV:R];G< 'lrxzױ¦YtՃU5fm/30v# 8}& e6KK5t[tީݴ]>gGk#c EE;Yu׵?ysQUr-viZXZʣ&lr?JFQFIY^mFU*|}F!ah"Ic6٤bœa+DR|MVA`G^:E(F2iEʣՖL/t瞿֑i >3ѢfK.v A׽i qc{ܦF[MeeNIliʴF1Бl$iHTrZIk406VW-;[Q-H \ >b9%˪שZ峺5ǿ^}*};#n-g]GnYBrGLvϭsè߱" l]$IY0t'p֧%yhEJ|Wӱ%\- ]|9ǖHSp^^W q*Unxw[[`a#qּYJ9ѕJ~u=hZGbʤ68xF+cZ꛻~ZgحP+eO @wN?Z(&CU5 b**8cW9>E~SwЅkfy\161Nwp=Vr{>1 ٞI%ϋwdοJ0vR)_VX!F7VI`H`} OzX8nuaGK8R8AWt6@9鎿WEJ1/vJq꣧[(gVx۝dI@?{JR g7֗,tk{ /ʫ=lJZN-eU0޷;dp|dtg_ _DeR#8BKTK1ӯ-r(_#wr 'tkNME;pҧgJqgɖ%-q/#z}xi?iS;ja}VMJ>"zqqһ'F_-/|.`X9\2yeRn |88r(R{4qs{Gb`UL1}~JGS=:j2KVL\eeov y(Sn%h֣;E Vۍv ψ͞Tuceٌʲޜv秥cFTZeӪ^PXjo'W%qZ¥okZ8o8k[2C29_vSg+:ӕH5u=nxyUn[gҴ$o4f(+Olq$e<z~%_~,i¶٭m@-MM=9bQ3q6vytkmFC(۽i9L3$:6bJ9mVT!ɇe"-ew^k.2ך܎$+ymvGˁƷj/R:#m5e9kk힌xXrְĪ|rwxy.^Z'{ xVTy?vKu=k#MR`0 |x]#9jg)J1Xe I$|֜#e]U7?UҋJzw!a Y6pO5wttգRֺg-eXn7N)ӏ#wFYI2F# 0.h*UTM W@\ՙ=JQnUQ"7 em6d0iZO")F;&%D˟WwYmlcZEن~2qNRRغ4t8csXr2j97o.mڲyM|Wzu猧N[p>~_uGSd̟m6,c&6U!@=a%MG_em~_t4|i:*-+ۿ`Z ˓P I_ʱߚ=oe*kBWo塋w\-̑ʫuS*m$\Ejtk:NZfс60H`83~+JxyN<,g%)?ڍ6.ylEZ'W^"-Mzr]dohcME^fbkLro.xjs5euo΍9S&{ƃ<l㎹#9eSIT+Qᬜ/Wz?Kk)G_G|O"26 S}X{:5"`V/f׭u;?P2janw+D0➜:ЌUZtm K5kq]F2s9 ߥsqJWy`9YEu^iwmȬfh$n +Ϩ ^u<Hr 3Ez<yce 98zR1~q<- GmwzZmbMr!gnu 
#ұWjjG9J2/yMɒ5bvqi5=z3 F5}KYYٙX3kRV.R+5']8Eȇl r?{k(JZ=Ɣٶ{Uch||noi̥cj*4YܝFN︠1=ogi'mV4b.yD@`>TE$P+#w;W)zJUja9 klwwnsISgY}wG-ivowm%cNkՕJk L%ťʹx_:22įwְ挕ΊLT[^DdOҺ'N4潦IF?Nw}7oNO98lb9z*SkuishިhŮ]e`~]3kҩ)Jԕgf3 N/siH/?3q퓌SSZjyL;rQw=gúp,a# )88d񏯽qB)'SFQ#gg "p̄*y;JUMϔ4%-67GN:gyjir\38.ܞ:ou5#%b~j ۲=q\4ٌ5+EvCY1{娪KEdRawIp2~W#$w2JWmhJ ;>ìFYdQwֱ= g?3\kh@ z铏^xh.ҽ:cڔSqtOG`t3h3x٦''/iFDegۜ?hֱƟ,֯Jj0NZ&b<搉:)'Lr=UWy~ڀ7G!g;Kzuɺ]H̬[X=r;܎+&VO+jtW4oS1t%CqBv!Rύʟw?ӥeԜ\'nČ@NQz[b jMf^ZE=5ud*m6qבvƭE)FхL+_}:TfU/7mvo˅q(֦mJ)޿^IݞsE8_y%Qܫ+.7 9#=yrΜ\o$!?ՙWcI1Z=e{Xז>ܫv$b3i~: O0*$EVW~lGzwg܃&>\3nrW(J}F~f>>֜f=Eu(g_atNwwK2vBm&6q$`B0Yʣ{jiLKDxIl$Y ~Sk7+zAlG#\mN ƬcS8 '~&iSO.G%JbYd_1{Tߛ9jsƫ[ٚi6TdzqrNz;"/#O>w9<9G"VTR y{F6N0=*Pȕ9JwďmR<"\y~R*2~g=8Գn䬱=pT9evlEtHەntק=TT8J)^BB9L^u\-y+#ܕ-_^3klL̀:zיZ2Sjc{)xҫb7$v=z}+*Ѵ]IL뤴| rW\798>\LQvo3MJKUKxv-dUc8txID chzy<µx]N> & <̨ *9JO}9өO {v'8b7nӞҳo8֍:i4ԍ-,jBrFxֳhWR>HCGue~͹CKe^2e6VG`Ut'}g3N{{і!-c#S*Ӈ23\h{u%VZ52ekOa3c&v`l98?g8i롢5oH{w'@:u#*6W[4S<^CO,n{dԽٕYSk`?1f YԒz}>TףRo{gITW؎I W~IIѩz1%aλtcf Y֔im+qN~nȲd5dʖ_X}biF 5~V:N'׹$7ӘH;Xg#ִtc)Y4+Eu8}7f++ʎ"-vA*|=tEI\gxrgfCSqdu"潠ޭ'ym}lxF:RZ|z\E/sZ>,hd9\r~OjZ?Ʌӝ+.kXY]X[ iap?@~*1K(+nQG32%9cw^5=Dڕf?$nUBOⲌ*^=Q~=%ґ?!_V  duE{*𱴟>(x4[h'R:$ `Pr<]iԕ/h7*ܮH 7PygE C=ɩN<_CǕ7=dx|JX7صrpg#T؟˭ryVJ3ԳLxYKoun.<0@'/zQU#xt¥>h?{Ul;'q^-zqoҩZfM2'y g 8$g9FY{?iEi:䪬'qe y?ϧc9Sۘ:auYC^"κqݻݓb嫣eaʜc[ Bdq$s)_-[{s~u9I-o*$)YHU䬌A#8 u=QcHH!F+܊\\-<+_ɻ4ksH .K  |o ⾭Fz[ ".;km=S⇊#VeV6.vn9<prkU~c)4#o#~$i։gkM'ڭ|7*I+oR3315 wpgc,6"Qy&gx2jsY6/F22F9<_Zi5S.!>d_W*-=5q*ԣRsI$ū,Y9#cM=?/C:Rqzyz֪Ym۟^}(U'N-N;olMԨ_4|ǰֻ߲+NV$",/3ªmIߩa2ʓoʮ/+صݗaF_r,l >W)G/<7Tzzo;KU`$ڸ ┨I?'۾pQkWoOxYGk"0K}918b8 V}?/h,ig%׉ i9`עW 0Zc/0t\Zu<}̬ܩNT99;QG,,71ſ3/dzFvI|UkE V,2s}iOzr(4y"Ҷ|Ƣ S*.:qζ84^4⵷Blɕ叹:XQOOshljYO!;K|:hR\Qӫx$|S~5p4֍{2KJUδ۹V ]Lv|4I s{}1\(Iof޻>V}YIo-cc |#ȣ+]nۋObDi%Qd;nߡU +lNro}mdpU)94LX|񎾈nHtfYy<ߟʥRӲ+P(bz*I:Պqqi +ww`{6V#++w@;oHfRq} F)_ȧи&UY7J[}Tv$sϭk/e8;|Uʖsn|UF" 0E;S[F 65HwU?5&,׺Dqq\f޳K&m + zt\:TQ=H/y'b 4͒Xd\FRUQQ۱T8 p3ǞI9;cޜpQcSy˺fZ7Ffm8ǧ׭8S%vWo7bBPvq]#*۾FuywLYqy>mNծ(a/zVDGx8ݒotQЯiR]Q0*+*߻w1c>I"Y!۴ۘQ銸ϛ޵trٻȰV <^7Q!?&ׂ0Zʤmp?i&Eħw˒QJ,m)}ƊÓX<)"KH<[G'ΤiSjY F 0Oe`U4TQ5LcNSMFA؄GbYFN>cǿQ[TҌ*7%7 7.-UsTQB.Uw$^zR'B:I^6{t.Z 6q9z7wu塝<[hFLFfk+e5CsǕ+S*Wb휒β层n}j?i#^ͩj]YP>O_ҕ:m= ,DS%u%C pϯd4RTj 0In#H<?εܙK#ic<&SfV_U:h&DO>$)>:<֏r5:Pi_6FO^z𪒔}͈QZSKnQJ(F¦|?8˖j[W*܏?{ַ՝Qm[bhnz݉#7 F+<yeɌyz_yʬ2" tUm}M0S;ji2"E~|Ef#]H=khMKONNoqY=3B9P$iɖ1'+Z^clE.m-ح>pGvQ<;N-*Dc?ݎ8UwЅ_pM iTQ榶9QݗO2P[܉ya/e#tT5)J.%q+Ǚ(la'_c]3U GЎșM@ʼ n㞤֊\G9刃>m.2 iFA 9֕*{+Xז8[7t2M-zcǷ=]LM/b8TibIO3wSXSGHƜ2(%mekn?3 Vg\kF(m&k{T8LߌUqqJr+F-4$*}ۛ't OGbDoeFweَhW1SF{S C Ug{ֳ|LhԌ-00KYjGы`1 #=kS/S^2]/:H# 2;3㣑>Ϭa*ovI, [%K5v6qL3]zխN.nIޏ77[&yk?t3gw^}Q^[agYh(tw˻1pzJk;jTcN7i﷠?&Hmi]l~銻Ii&MJq4zcE0@QXʭ VyNN>wlSάY~Rxn>tc%tqFJ}xmfbΪjb#5>"@<39WE8 8'SZ8S89$6rY2vqE9TLRQ֛N۾"FfM;_UG-;@N{V֔۲[k_lFIudڞ4x #+|۳ьtӇ5V+y1efzp=iKHcЫZ:?|`+)>kjM>d&P6A# !&l7~I#ڴ?f/pN?geUq.[Vxl`i ǹo{orM(DIf;Mn01Qv,NdeHfEzq>8U+K۫CEmi$eqXHd]iˑ\)ʢk">ƫy.mΣ\iƜi]9OK,Gp,̓G$~`VY N^yjӎFQt=;^]~E>ȥь_3kK!Q]gYUNpg׷[F7iX+z$KBg~V;1qVeFƸ2 ml5eة{2fAvH IQSn>A٪>){$϶9&2`ezc[(ZSi~92LF :u)kKEjg u9V&Ǘ`oʴFLo(WZwM"XZLѬdv <z0tڜӛ'iuڛ=s'^sN^mFei/WF& .ҵ:nЦEHͼ9^ܺ P$fՂ@b?ʪ4ySKRWф, y9dLh>pZΤeLVd$, cN:8ߩ3W3,FY:co{DrV:z=@t[Iai0>a\;{<ӵrSJk!o e#olQ+IF#Vϗ<>דGhDsrΟ2ѻ7 іO79=k5#oxqV]#1lylrU?Zƚ"4$wDHJnP d^XJTu$yA%4>Ll#gF>1=xԍf1R%ibH]zz`ZSF*hionmrIpķq1X*texŘU2q9<^ԤQR[}㢱 J#.QH{VZ]RFGasl7>f/&~n?ՕE-Le*SN^A H]mŸN+!{:3ﯧ'WIbm098~[t'm l"vyyRt:5)^ھ#U*5{5z, /l6r瓌s\wƴKY5RIqO'$ bt(X,=]eoqU睫̸#yڇmO'خgtH(÷'yMH7ٲiYBirKw!Q +3w= gR&!F)]H#2hֲ4{Nu}OֲBܭf?ZWptɯcX}3cZI>,pʻBÜ[F2nZ{hZ[dDk֖f ĂLqל~uN>ַ}w Obd7-*+jte"۸EcΕ?v98A#3]*ZJ˛m-u&Cla`c޶.:(R,Fݳ>nI;:n`I鞻<LjY"Ey|l}qYµj=)ѧ%EnDk[Hd y#=8-4ȕ^2@V,Ǟ^,mS t1L 
\@ϴ+*L;^8֐im?᭷s2WmۜkF0J+Ri"OYbUmۺtiGCE+K[]r;6?k JqgMΤ[dG3l8\LybkS5]8U12Y1 u:U/ٹuD{Oiv";#d7Uv{VզSX8hkAF0gƺW9(r\K$Q5Ē4[O8NF8p[ZryyuCTj8J,!PgӮ(9YEHguCc]Tί:ъIoԒ<~m+ln :3y!;Jбeܤ ss؏mrG;XY&YA5'[S,̨bM>!RBpf_@ǦtRI}GLy~ۨ Fc#*qj6njJorL97m%qB+RфYPv]0\6OTbb$RAiM# 3afUeV8퓏sؚӗEva+{IYcn!c9N2Zyѧ{Y[!1M.w0"#;t%%c xeYpzu?J+T9\Δ:G-ҢH6dv?/wEEV{[ 9kn9)ٮ]GHEjbAdɓ_nM{[䊩aUtqcoƏFk9:qZwȍTHeu@ U1qΈ%:3:OԀ >լis8jp#( FKt]`G=8G^Šu%)5;{h{6.-piyh)N:s֔~tZF\C/k(sQߛ)2Y-̊V .Tv#Jӗ*8ƥ} M(v6Pֲ-XT~mO8+Ǵl]sn?ݧʫ{0Ӭm?y78{ RY. =H2®e]ΨӋܙof퓝0<c<ܼ+O-yǝ )EJV"2֎*,q]alraQIsz9檝{T劊O YH 8拝m0Z#2#'TQ5GzFRhdXǩٴקs ۿ:;N 3,m&rV>Hѩik}=t"k8fpoS~>.|z5 H<qN;IEoe)Ha.l!*>쾧k<ϝ=z2[ym$Bmr gj\Wܷ{w";uTiFfcҭ}{hqHZvH'+:y}w&OS],|bíNAm+Z4[H&R);cXCc=N1Kft{,/R.Yk:29-Ȫ6HlcԕO{¤`֏[9#Vk#+F¬8V끞Q(:0FDD"9]Oq{9s.WOR{ʶo*_2F2)y?Ŝm9jRJ+2yUo.i+gרRkV*J-rIiqpVRX+短L%ؼ;59?+g zwa8%s*'n̑1"n:?=NIwW /N߉i,$KnRIf970HRz_Mm Ict5jsW[3:zs%=krZ v<|09`qS$u{j2׏̲BˁquT躞O9q͎c5Ğ_I3UuNjZr%dYHt4aym<7'p֭RRqu {vA>c:7'U"ka7٤SܗTa=N9hym?Ԩe،$6`n\ śrzGAPtP޽KȾPyҠʰs~QO~YF$ {xJ.=But/ǓF%8*۩ǵs[Wߩb͟!.ev e5iMa-Ֆ gX$ImVs\гvrnMI$f #&e$pO=9xM]#<<\_liq"&K6@ _|`W,By~r;-R%2  }}G'5*G5ciԒ )$)MI$< Nwv.h W77 ȢTdlYqߧsV7>f~ҍ99KݗWV l͸ : sTf{.iJd3G=E~NI[F5 |wOAoO_+ ޹#Ŷ1*P [ͬF\d+؟JTmmu"*FE5 QbF; :OZ#&SӍx/.QUd)QpO1SQTGStXlj7ʕb TdSZKNo7lsjJ>Jϩ{FX[cfRz6}Xz+][)smdnh|ح4(,zW`.Vu:NT: 'M%U䳳\dg?ZQv0+pNGs[Ÿ.A:.~}:4OU'wJͷ:e}?o\BT(@;nnVY'0NzZ|vے-ڣ1Ccv>fStJXx#io-;?_5\7NG,r ^n=?55NuZ򄕣}m ܭpxX:S{o,"~JEߕabܩuo!wVsֹNWHe J᷍嶹f#?{sYZ[3(aі(oqq5[G?,S ܯdʱ>vߗ#\LjF\~} VAcijncppV9m2;UyfI Z1"g8a}:8jH"ˍYU~~b,} k&9k(N-Mr:"v[%PzN/J~֊U0 fT yxs:u}m֭F*Q~|'YefHx'?z?]!zխ] (_ .`_.G#&j۴o{sa凒-0yr#."Y ; U*K¦&4\\/2QV9-FBߪh}&7Њpwꬒ)h[>_.ap6-O бbݜ8Ws=Ji/> ps޽vXՅۂG=-;_CIkwjХ휚9iΤ`[Z]Am挪O\Vqe{f%JRR_A'1e-8+9{9( ) Jڔ6pBr 9c)ϖQi9b)ey&[gGl3#yVNtK`cKӨ"[c ]ceQ $`ޤUgWNb0ti(>gVI1XcYƪl-\g2-t6;vÜ6G==JO֦nƼDM$Q&d x4Qjv]v୊6oO H`ܾZƈ[¯S~NKtw_o쭟:-4.'rSҔU}5^K}Q qH#S&[;G~0(.|îX= `831c?Nq]iY% Ҡ~-/쟮 T\w}Ό8Q]in=6ܫ ebxWuƞzk {b+ěPʝSI7践Elr5;@hװ.9q^3jn2gMVSz-eha"$rS溗|濯3NSte_O4%QeI;\bTŸunC8-b(cOnZQ/mmdkNK.KKčof1]Q;Mx5)iuߒ6m wVx` 19>P)=RN*4 98fn`䦔W@Erwkn֒G+Υr2_kMֽ5FrZ%/[GQ23aѡ},vFKX0Ό4 zsד N1sH^enqO) [s܊7r6H}1ZH` m'ƞvw!6k. #ںcRWrwfeL \)sK1Ʌ6۹Qn]w/3l?.ߕ)F*Q橱FWů'aԨ JFS(TU9^} 7> fL^YSVթ;nlȈGUU̅Q׭y>**Q|Ζ,k(p:۷kRq3ܒWb,"XmX|QdQPG$i;R6Y6Dyysyc<'QQuypq2#w>eR\ZQh>nֲ@к,{0sƵuDcFXT8,Q=ʅw 浧%N-'{5VTВ)1<9q抴\IX(ڣPާМӋ$c1+2r>t(Ъ3)K_0[FЋX>^u> k%JFvh~^8ӟʪV:xރZXVGFv{?Pj\藻QJ_p4Ndi&'Uq8女zbH̵ilՄ|q<*f%ZV{hH}Q&L}r6,1<2n+|9#ҜQsi (Y;ifOLҜ%-v0clyi3m@9?/ UI”{F>u\PWל<Rql|/Es-&59 A<>^qYW2G2EToяk2Sn #w#JRFҭMёȳ&d\<%Y>ԗ/la*ij1.^<~){5e:t?#.1wt\diFo2-y{v26:xVNtۄ=Y[*Hq֊j\L n'Z6CcyGX֍7+aR\W/u]p @<ϥaT[hbɔ42^y9YU)ΜcЊr\1Ȫ,.-HOu[22*";c\$mo w6Zo%v?t?ȧG7C('Z1L,Ļ6Ӹ#?di&٤S8FϿO|yqmߓXܯOFAsD1 ݻrUiS7.>J㐑m,_'w_Gt?uJT3ca%21\ݴJs걓{wоnCW[aܪq$syXISQ{_yiqn+ Lؤ$v8#W5J>Jv<Mݙ:i$%\ɀ1#Upz%9Y^.}}fHS+]xo0=J2X|-Y!UϜb\(l(zt9kPwf>֥:n.^ȄxLKHąs>V,DcO^kC&_7h5: ^Qp.VqHuzpO=+(Q(i8OwZ!bcr~GNp*V_[SmEyf6#x^:+aS[%#W+d U+>FNH=} X|NOoj(F+WmGN!+_{Ʊ%U>}O~v)FP=5%SR}JzS׊7?c*cbͬx:EI5QSSThf-@ Vf^ך:u7kƭwm/Yijm|Lg/c*Ύ^/?8c.$I1VrY`'<嚫KDLF53|f1IriԪ877:4OA\&YP/h?S[68Tgu)o4wU_~?fڒ杚Xzk7A&Dm~PGּMmu썡3Beu *+sr?yƤNWWioXWs*'=G#Re(_!u]_z#I7}9WS gtqə#={WVͧsԩ:B}~Mrvv+0x~>ՒsEնHƼԧ{U .2BFTү*U(Zm^MYfQ7ݑ[1Ozٺrwg)J-rsDd{KmmpI@$t>}:ʶMNWBF+gBңf 9A?6a{G9|HTV>FkWNln#q?BI<R޽v/;ƌ1<)rܨǖo;BmC*(GreQ:kuu.+'=;VeSyUvfV۵.ۑ_ũROr-c 8srD)˗pW`*0UY9\w%[B0q{T8k(F Rߴ1\  q(,ZV6y[{Y%Iu=քE)KrGb|qQ.R~~}J*_kc9zӍʥhOr .&˳fSϭkmv+QQlݢd!d|,~ΈΥXr%qܫw65S̨+VKOB;\ZCQrU+F[a֒50$ѐXdl XZ{Җ*JVdf)c`G;y3NtƧ+W9e:,Жj޳J z{w:Zd| ZǵO" M9YӨ4fFkry EA?bU5B ESኒjz}kJ7HJ1GG#kkEԨ"Gǡ?_ΦRo[+ |űFGQ(-FES>Mhb. 
Y;v*Q3*|݌Y3]vZQV&R|1ujpnџL.b66sr{%VE=H#D1K'9Vp.iTڒՀe2/$POO\Z1ߙLYrH=}kG]rҢ!-s <=ּ >X}wڻ1*nh76B#+pRVQxY/6WFHAFQTi4:QRWk+ѣM~r>P9$cצxFM&ވ-}o^_ 6 Ul}="e'6j5oТo9qs`t3ZӅjQ8SMӨ[<3}] QLEkF<-I(D5͚D#B?´ԨXQ6:c%6\QJW] )թN1wFEi. jijcXҏ2f2~Bu &]O9A:Z߿ ZME跷{~V88J4͇8UM$i6Xy8A)G޴RvVtەVdWLNxqThF~>yc1y)w7}+7N2{G^7:\d>9oFHŹeVFw%guR;_=¹JmP.8QRyCi~}ǝZuVђ)/,y%\ u=QZU{~fэ=urė# aׯִ7N*E;[G~hlui 8E.^o;%71ҭIx+0 ӿsJ3]JimafSZrՍaMj #.XtsSN-%va׻yzd[S4v ̳$x}3YJX&U',NV_!"["} J/gm9cT\aD/% F]uǧiTyo~A*9չKhў5y2I{tuNWΖ"5)ꉝld8T8\iMI.ǥQdwI7lƻǏs]Q!w.1wdko &ٿw p>~iTN/Ӡaetd摗̲ 8\MVY&kq+O?N>Y;Im4$KćT&^~Hz=H0=UݓhrRW{7j7#ceuU!A '?#jTU#MSKaf }j5T(Ɵ/2߮xFM<C7劎YT RZHn%˾gVq"ܝ>?Ǧ)0\Br4;9fUl m'r814Ԛ9`b-Ɨ.涓wcWE^[N51J+F,R8OX?ky&xginBFqҽ*ru95o45J,c+M܌ی}jmrEy[~_e%;o:A#zAP<3d9?r*jG2},^4kQ xQp_ 4lzG8Czu#z[`d7ymn<5{.&W)׉J AOZmK{q`9U?V'Y$R76981}ɩ?g#|9uCyo؏%-c>U}~ҟtoft׶-`ּg+E[r.''m5j?slddutѧMF QR2YѲ$œבҺ*S֦&eN-GEVa$qS?>N4fߖƱQ9]'xTtٯ0n(߿8쓜a ]==J7]ZYZ$aOgvW\yԜ][/wi8U;KO'6Rv$jT ypsEj N.Rvw9NW[M 6û,z㩯)U;C=<'}ÿ:ESƎ) \0VlqSOPN_WŎ>L ~rvʥxԠڳ3 qTdz~PYVER+d'#9WY"7;Ob)޾yp^.[k`<=+|7>;U[SQsz6{c^ԭRT}DqiӣQŻ1PNdI%uV}{{VtR^ѻxxԖ!B}[{c Rk "D6 KBDc\́]i?mѴ!ZL>x8ܷ7!al:]h0wuk[;sԣdgs1|)WE)9vQZc(ƾGG dGN3᧌vhPʖ0O>3'm-CͲxj5N8 {iI$-Ie {׃\gԕǗN=4|d[xQ_1Ϟr;F7W [ydF< &2H|E?gVNWNy5,yl>N+2zuO^âO!dYEX0z|ڹe.jZiӍ:G.`G'5{BQ/{H[58H¶|< sֱ>sRIN+KnG,JQ-4i}_c'/wl>Oڼy_~Q?SH~:}JU=ɏ0C[(Vfar\UKkcOٿi+8wԊ[mљF[g+@߰S֊TϫsrQC&䖑fH=F}zqS; ]>)rͮ[YЪ3yޜjs%~ړR.頍qo }=UψT%N[ꑒEkEroD][%U6d`bءg<³ JOnv|5n*m*Np~Z9dl¤g(G=0MqG()4UFܕ- AŸx8]{9\)OA"EY.3ʁ=;sѶeӃM,7P.+q۸snۀ<2Mh][+5Ӹ$^L"A#20̽*=m.DRky-5N328iRJq* T~ֲ5|ffry5~TjFT܁1ﺣҢUd<6_*z㎞ROs(RU*6ݒKx8+C?/nN<ϩϥuS8ssO3-6ssD}4Rů5Ͱ07JRmeUdŽBF$~nZJNƵ0eGwz=3NNR&kZP|V0 x[΢#Μj-nᶱ} 'TOyzQV]wZJd\ӏұo'vgeΛm\ekK&,.xݎ=}m\-jco-poayeN ~3XQhJѦBs(ib;S kQ7ZSZ}mn;u:p9׵f-|<.7ҙ0|)EF~2?希b) D8ٛ^MT$~dfVOmq5:=~E(aD0lTI׎hq6{5;m:nx3gw?J$v+Rj(4vFEYI#zƳnimH-HbI&$!=}VO2r 6Q!+g g;Vn2Ke%(iv0v?٘f#q]=o$IRm#弳\x/iU7M۸n]ɹpjgJ2I8ũ/B;HTc".ff+KW\S*7iNTtOס$m.ϘI{u9jJ gFҽ|gUPvsMI.d\fӞs #F#gߥC7QWRr}IaG Q:GI9:sRiS긽R,DwfHy:{s~O}8sr'HdXo-ٙ׮Nu);?aN=~EĵmaXqsnG#$\)Ir-OI6rfӌr|pW $cM*'dKcyi7HWrۜ+zgOJ5բk_g, ȶc8~:K9I*S/+n,^dhq;IAxn\jꞂOK1R>{qXJ:7bWRZ+Oa)#w?VQ掷.|퐽*Sjw7~GoNQٙ"\Pkicym3Ҵ;g忠]pq皊qQKy%bXۈ4U,2aJ|N:=/ }G17+sVr;yʇY4R|>ba^g,78j-5y,GK؊\ͥ& Dok۴\{֊R)Ƶ:&C$Ĭ .k44Vi&gjդ\Kr#eޅ[:UU2'jZ;+m{EGK|흧=yܔ(99ѩʶzYZ,#Bd N=+JRDN"5gEr^~CGB'Һ(QJ+GTHnmO!i mzCNЪ-AJ -´Ѭk[lHF1[>*~emnRolm{xϯzripmP4wW2I]"4}kzkV*;Om얯FeBmN#!@ 9][H󨵶9pKTv Tʜs Ue(5Af9`Hnq8[.<V"$s9ۻ\.T[ߚЖkt_qo-Al򓜞OvF5%SNz*%Vu/v| ozJW4gʭO] G4|-`jyV3h׻/d*FG\6{cu!N]Z.>IDoshonٮfޜy\sU9!sJ <#(yD- sUROoɕG-Mtt@1:nOU%W C<8%~($8{Zrq],bƪW+fO>=1\mknʭQ{ZWm;n eعje( Xԩ(T;xTKce'l$'J^5䮺z|+\[+"4x7Ȼw=M9c9{Tڢ;Kcl_X"%bkfUy: +II vٕ9~FynjQQEk=NMha2ɞusדֺN:_x,b-ߡAg|X9=*qjlh$m>y$[3UaO8Nsfn=73<۱!uڠcarOl+Vf҃Q4mBϒdj^k{-Զ 1ymsG#nԎ_56G,khFG.[N 2~nzj%ޞM 0[ivLRzַ4]MN^_qca,BK~}:oB6f NTcQQsjֆѩ+D\I ,d{Wǎz~{84meܩhYPGpKq֏- p_ށl1 ^ZBYۻ9'{Y9J RJϱB8!t[}ѕxQӱ/jʒG/wv`$NQ.*F1\18dU?{V2Lk.wD۵d|.oA:͙Cx 0:Vohٕnd9;N1gFXG9ae֩./̪Ly[ybB`|sJm1^u021F?w#~O\!+ڣ˯QҤ[9 .c2J㩇%vo*bkM~l7^mHsR.O+tf5@FΥٸc\E9s]APO~[ŵ H՛;O'9+͕%)aWMYZJ0)[ oZ密4tEҒeN4G$-UyU=bcԵdVk1m S9FMJOrV$i'Xah,,|/88ڰxYs?̩b1{$KMu")51fQcߗiKmok4Kq9У~Ôye-/C Eews]fdDDPHc"<LJQ.aN D1[miu&9|e-6 Arг3mnpA}sOݔnƝzxM;Ex(g &sNO>kmGr7k$I!ad(*=ֻ)Qzp֩d8^6䛁;}2OJKsTe[TXigYw8N*)NK?Pϫ:#%VDYHee9=x Tn9{>gOnI2VYѱ۽qp#?~#5.I(PC H'~^[{[>XT"QUXZ[IrBĮ3p}Z=U#~YU*lK6,~2 Xeep?^[FQKfNI{fidI0G +ciEpiZe7_Z1GC42UKߺ=GҮKKGK:HnnO'_;y9jmceN+Q+"oi (pq)2rK"fJo%vGuq潉ybX`9x"&6*u%XbD.Cvc:N1v߰$,ci0|0}Qkj$)mdE |99j:zJ wM"4ZVT"ws($Gݺ +iejm?t~cb)Ee\ƪ7ЯAmTŤr1q=i5s?qnbgՃ&qnILv 륻 e|6m=3Sne4}-Y uUe>zX=;ي7Dc, ]N8\ܱNKy&Ȇ%1nOΥAzfdƉ.O] 4o 6w7<}0jev]3Uy#vsI;pJNϯk4Rò[ j#=}Oc%K攤Ӌ 
0$W՚L)$\Tfѕ8K~!2DSm+r$79ⱬ.”c+]>༸6xzJc(WExZH2%翽eΥ:xj%IMԪ͟9jғkCʏnopIb5+$uۭLZ, GnI7}wWA\|J2My'aG a=J/h>y{4tB&௚cPͷv’zP=R 0ė rgvN}3ғ}HхhPtB$;b\*' OƹlےG<)FZ[fA` d)ZWrb IBbS}>ZӋ^q^1'<m o3s ¢?ݧ'W=7?4aY˕<=+U˪՘ʍgyK~#lm푎Gc\֌VPYUp,29ް,8k@1bZFVVc#E%PC;.*X=cc?2z/OK_Z?}1l~^ z)8SZ[f_gM;G̺ EsٶĂɖW@/mr}+9{i[O硤m{$qʯ˴ax}ЌR{iF {r3L6Cs5R%ӅH4&\ZERW^ӧyv*\Fz5'5F2N6*;_n.D39 gۥceN~^xg,Bb\@Z`sZӌ}_34:d7!ڬ[w\}?JPTsխMS_i5XiNmFfrbf v2C9&K:oFӟ=^E%!()+f.j4O_M ?d θ!1銨Z-ڴ~aXՖcu$g#l)2ŷJ\$aK%׭TyStzX]?0[QJ+QW}RFŗpxӠɚB&mFUx=+^+o]|I# eeI $Ou4c)Z]UiYf)?/!z*N1֭Fۣ#'p=x<;Tťq_j/0'kzt`m)JGvCʹ/tmt'9s[v6V飆]9]i¤W2 {ju9\t5jC#nN{v\nchƝJzZ~_3zn ^:UOHv*)ٲ7i-xUB`Ͽֺ#$kN4ǟv91[[IUv-u ^HqUʥF:5%VLb :3p2&pqwϭl{U_;P]c=rG=H )jLo.CG69$z=*w⽯*+@wN2:u&ۍiңN/޻o,P\.3z{(כ.˱을,FYx}̣g8G~Zeƛ>iU)2 3׮>[#5iԒnsC# 6pp:7o3GN9Ct@ϖ6Gz_,Ee}ŵ$R3l㓁ҢJ練Uֻ?C2m4B4TEM{T'>S;5~>e Kb^R\rI/I{:y \̅%&dv:cҪ1ylF1~[M ,[fJxcP;T|Խ+[}xnIذbz8mbڶ[m3&ktYX(%X+ov3mm*rQߑ$xch~+.  n>w7/5j[ ͗{$]ʽ2'?\v5 Ys5'ukAw3FR'$WD\jKCK(t2&V^@^^A>4C4⭶Kpȿ&YSӃ>X:#E;6v{)2>TŒ evs׃tS9|+b` r],1olê >qRt1)UQeI${*^sӡ|ҲWseUnrT;s5 ЕOfCspC,#e2)δ3 ikȆɞcvGB>m3UR59]u4w"K{ I"a}@~ \|jr=WF; r,,+v'x,{ǣ9eVRqz~BL#89$*| 7SdO*s}Pb%Ip)>x2i%EVZ;Iii7ieZ:j./1ݾDݎ8Y{\$k#ciZfsrI!F9Kk~?%*59w3I5Lm_^FQy$RNhCi?xʮu2:Xԩ'gRnc-ڬvs9vu-$8gScz AJVAVcbzg?ֹ'k榹/ndR ץsSs q ]LmPjSv>lYԟsJuIoX6d湪VrlJ.,wtNDmߞsFOD,M*}_5IE^c^oNqnF/s隚׳3jŌK?n<7ItgX2ҥE:Ox77'txQZc}UE7ٚ&UUMc8܏ΦX]Rj+k݉Q:wNkJ[tc)V}4ktoY;u9~}*R|W$Lx#myk*b4cecē^mFP#NzݨRKNRM:[nUHP:;-3.Rn=+X=ާ˹;N3ֹ}(Na-{|1D"ݎ~\t*?W!p#̜fIrMZ5^TrFjW?^!XsOZ*2?Yaj5UlѴ3&^@O=8Oi5ZN.²oFmOT,t/'InMU`L<}O~.|D+|6{4VbGEC!,>\4ӧsnXR*K#H<ɶ7r35R~;u.w-Y酯v;0`r:׽vP/lWjQo[\Ҵ[Xw*Ub+Џ^V ֖m趡7B13A p{5Z*1q|,i9+7J'V0 ܞƽ-Wu[4K[t6l" !l}s}+Чgң.edq@I6\ 3]q;Eks$͹¿,xF {Vܩa\۹~UeI?+e~cuԩK hfV jwc'󭩛SǕɒw@3ȧ n3ܩI\o`dn+Q[F-X=]p6kHqb>$+yO7Ҫ^֌c%WlтI?ZeӠyevZ`9|Tq1O^WhcF̎Q#t*Fp l+&:jΥ?vrgIg*+*@8];E$Պ3E v*߽s^1'ZIiiUdO2œOsJ4XyS~pA3Ƴ'3\Δ[GN]HdA'̓= S3VoncLĒ.}0yҼNٕ6)+t^E-k$xڹG\1^*{T~I˷ap#ݜ3k -Z[}oiRWOcVI.>̻̑nʹ#x%XJ)-޾g?-?b5Ar!JaƱ8 d뎥-ʟ<`sjFR+kuP۴Aۑ? T䦥v.v4yieiEeQD 1GgҜIZ0 . 맥{]*Ueی9)))FUy/~񵕻yX9XC-:{F;3L6)4׸[]rɻg\Tc*¥EmLT9G_~hrO3t#c7ᱜtk9ERw R޷ Ӟ-UdD!n" {c9(riӷ4h>Gq#hFOӊ*Qg)TS姧:;Z8r G"9RXJqiUn6ft5_*X0zҽ-+F4]'̞xsڬJ9lgjFWk52EtKpKzhJ)Ymb'\n'ta1K2Nɮ8W6+`+K}-ix8)[j5 _[FYlkcN9kyI']ڥx݌q߭g*N~ׅ^V[%HX,dy/c]JsM7-FkggmKI iVq<ִz4oR/7/1ѬPO0ỳ3?4I~ɭSHB:2f0?Xٝ}g$.X[B4]`: FWFXj2e ޻q"ϻ+"0(N=xyӏCjS6o6It5Nc-LNg}z_-eG]Q:iƥڇ=}H2pMkNu;cNNdN]i/O2}gԊ ϱ4Էt]xsFz.#o5&o;nѹ~?^*eN]\SÕbTK+Y]:ƥo#U?" 6WkF?wN{wN[m̹fЪFA놤c-WFU$Ov_FDi& l6$H~d4L IV$1%:qԿ6rΗ6אy&FӦkkO%[M,$Nq$ǂ~T& wl@ez9JHt9b[cxe6[a-\gԩ.wpÆcs u[JӠޤky|&XEm9%s\f~>kwVN d+J1aeЙSq|ooA1aYmglm<ߚϩ2i3nndx,fI>ZI$L%IEacUOYY Ċϟ5/qk;ϹYVfOmbs֢03R/mDa Y4J*AOjdv]]ʧp\ax=zr1k[`z%[O4,JweccDikW'v Bx 㞇>՜^ƌ]lM-4'9s˗XܩlDgv$sW~F4%OAn#Iby>R <ϙ6*:$&ˋ~ã =r x2COWƩ+)(d8ldfQ 􍬪O@uJ1j72ۑ2 qm=RjG3lˌϭJ~ZIQmJOM)Q lF) qO(%FyErY[4mZU%hy0Шl1̲uS?s˕*a!%DE4Qw9arz~S*iTp=uRxO$Y:4̷?*QVJ[tNwSP{%exn<,[qRi[5+dcMSWW%XԔmb~=F4ynT&\.硩8ϖ=prIC>}" ;,ĕ[!y>l}+snNŷ/2ny7nS0EX7lBm{w9){zךdk"?LYˮ8a=xן*2\|&9{5(~LwEm8+rѦZuŵfg?5Z5hrSLD }Sk1$jđԞxV.RW0E7"mN) Zm~^W=Fgȣ^;i. 
gKCW;u)ӊV5e)MYGWJve[_w`KTs/gm4Vѕ՗ ^0@s^ѝ9KU+ ;܈h;~aסΛ7N멏w*R=Cö_gGq߿p{ITu)V;yey'z.*Z+~a1,xeS9i|%'ɴM]"~` :u9NQH4eh`W=Z%#7Jt33*2G)* uӏjjNRn8̈́ha2}}Jz[6 `@Xȯ2z{u6Tr,d0ƽRԨ7" v/+r3դo+YjdDvf{] ӏlY8rdUQr1G+^sS9Z-ph;pݳtqzO⣒N6%Ѷ6݀AjܟXGJc> aY?*?Ny:2̽OCDHhl`7#$rT%b% j.^4K[휃\28+GN23 PY`w"j+uO{Um36w FzWjށN{\ÚX8wmڧr 7m|#cէn6VÖZvdu3\±¯o/sb< #=*voMQ0Մ[nٰT}8={W}|-IWRVI̥ZP$vHgs.6~Yl>&4JZRbp,H;~U%*4pNvOlswX4xǏCZҔiIj3A"x@~.(_¸_VЬ=Z|ŵy'jCO=og'cu'I}<ՉwHkg22G"JqrI8X͏V ZbO8RIR_o.)^1qԆyw,i2CzOYԧTS9CW5KY&f~<}Ocoiy/'e8J8U)4ܼD"2lVicVr=̳+WA$#ګ\8xhQD+8w$L.&vibm=I<QѦF2}ض9voflAr{SUFŨ n`@#ҷig[NK;j6IfeH6+⦍ksjSn/\$Z[k$.zf*d*`Qlqwˑ==N)sKi3SJnoWvg W:Cx8ON:.Γɀʳ>޼uNJm⢹[;u65i///snFT 7:V 1їSNrnlI*4{p q qߠ!*IZo4xzSH+}7}܂9b>W}N|=*u=O\y^ZnOf()Gn^Vt+$hc@D0 cyzVܲGd*1U}ėq/$`d=cueRR<47vơ-W=x>ޔ!:1Z!F2IYI]<~XOӥaF~K~Z5Kk7ma$AViMzҔ}Q4YB!e*G9u[vt1^JZm4l/vvp=5)/q^]u(6R67Gq"c {>J>E)-YɁZ"9^F,3M6[pOkNQCkJM;&3F'e"h8u>bETMI*v2Ǟ{֘_zK"kEʍvᱶN!$j6 ßZka3ƤVVϱ=OD$Ӥu#yb@O {Zh(-m.fO?tfn/(#G&*N0,w<+]=EK SF;o /ƫkH14*w`r=,~_̓3F5"rj%4I'Ӥ#  ӌzԌm-ϗu*C0%ki4a5ĎI% =LwǿF4RwOS\tUMFt}GLv!xFW ,Zjӭ +Qk^tCi2ѻ cJNZVO赶_PfۆPsqZr.e ET]#׬d6g%YEG,}FjR\Q롭 9y1YU\+ ֗-̪Q4zjNc{|7*0r0?i֩+џ4\eu4tG@w>\znsʱ9Ƨrԥ8r$Ӿ.«>xy73w{ltjJEq:Fy' >lףV7[h~|5״ˍ.Yq}1z#֥ uʕd_w4FM.^ӮHڤHqca}j|тRZ^]>+U)F]۽eicyp'vQ^NghS-heOoKX4zPQ@HY$D_)Va[s8pc!iw$f{ ,N a;sɯ>j_cޒ{tc^[VmL1 6v pO==qjj(^K O|)#f?hܪx L J]S%:-{yؽ/اլڄ:jj.)@ ~|>+ E[oX8|D$˯mx7 \Cy/IRHE[?G{s]|j}⟇գGţ\e z gp<U[ ލ?k.6uj}4,MxVޝq ͽI00qӠb+F]\=t'Q}[]msҾxE(b' 28q»ҶReNڟFYyI?Z𣊯Sgj~,$:u'yYJ>99LuRJ*3MTUӷ]=;:^?]|Ȥ_<#NksL.-d]3,Q>W_3V]JZ9)&GsZtOܵ}{t>a8{u$ )8e?{eZ4%yt&Ohva$h#+qo"Gڥ9M4Jdlz[T\JR2vΰYPmu?ڹwfޝg)ุ!\1L{f[m|Y֎^QrQHnm,*7fY~fS{IKIng-)}$#Y7*ۜ`g.ZzS[}V\xΣՓ]L#U@N ޟju5)|@-O*YѰZ7>Tg898Iwc!4[qQNOZ>Te^=Ř>bz8V= )Kѽ?O!K$/mV]%jQreբz5AIlq Q8 iJe:| +=%d61ha{(ҊOn+b> ̍udu. zstP#/{CxUc{nJylsSJ5Guc9FWCB{J>Ni=zͳ+tD,aBVr߷ izY=S*U!(읟Kuo_:H~՚έjNVI|RWq=d&~Vޣ|Gw.NPQv_y51TRs$y6y?E?jҭh+]zi^drcm8秾:{ކef8sF>7)k~{nba@j0;0s}jۚai_}VVbUrzhiFmc졤릅X(ewmv#r=r%9XmF/]]6kEOTn=c(F*2Kv^gtkrsCB)/# qs//Q;g{b٨0ޫ2ẘS+[UQMtkA{lͱw.6];j9aWz<lr|ʂ#8<_ZrvgM |ysdMW{WFSj.UkM*.p6;dQ{1Hk~ki]۰;v3{UJOmMc%b1JmC_b ڌcRϕsseudѢE rxaiGIzQs]*60&mƝ=U*v3>rHБo$qZ\4Ey;|GR9,:w5(<֖ *}GXeV ϽuBv⎚5]E)2b2pyrtwqU$ShX,]1stw(fȋ0}k2JvT(SS$wgg%aNf\Eu.3kHmʤJeSUԜ] J̚;x%m^ cRTit:,/i'nl%:cd{=;T՞~'Ubnbd.6;kS-s^&ӲK9n#;&i4(ZOU>߉*\yK9jycFQZԚhdhlnī.~7LLV p3zV);E*tqtNGUa:*ƥ:zhKlwFyw*E?:!+On}=::3S.i*l"jJ~]=<)²&2Z3r8VqR莌GEqH%m {u+ɴRس 唡LnVb`1 ~(ɧ*r5[gHmi7#Y]yuBQQ~~y"B?˽+ьlεXԭ?7ic@fel>_j˗өtZ>~s_hvFuͺmrw{;'UyP:Qj54Qi&yەjA럦=*2 1TRt $"En<ƑriEZ_9c}K_.i$X^r6*~^)+,@ +KSrQ7vm\Ŋ񜓓}*c5$ ߍ.1l1gssX2ZWA)[[AUFX`:z ڌ-:iikm0.Vۇ*9 rFy8Q?K:oN~nx c94J2{)Ƽ-ɽ&K((X8VqBfq*ܲJϮ/j.mM  F~^>u@9unF#ϘrN8~RU%i78K_Igv,Hp䬊9T\v,Xi[Ʊܣ 6R06ۓGJ2Cn[l]'ssNQI/z>qg6˜G,q֬:h82h?@YNKQ?=kIT/4-ZJܯsreUQ&\|wd} sƢQ] 7vv ȌĒ:Ъ-Vf\FA-&8䌟O·ssj*_" I >\wsr.=]Mסӽ"YJ$IYz?i_ڕ:U!-Ke{6,Al ^ktO0S[~%mܳQrqrA'zUiԝvѕmfӔ]ʥNx$޻۟~ʌ]h'i mϓ:Q9j{ҟ+u:e;קQ0OkxKqaRT6ӐF9'H9{TP8X=~k)TO *P6ZSVpRWDGw5٣HpqIS.d22O}5$E`:d~c+|JJ`yi68yy*A몄eS}]ASkFQ ~c@8';rVg!aitYu N:EZ7ΪqRc[qazⶇ,}Q!Z}10\Bk9>aǩ=+i˕SU%Dj6ib_sz +HŽ_kGٟViCIs>^Cr*\G{~EM S"k]1ct=?Ny"RThÒn̷o4qw/⢥?9D!uiT64{gi˓eq֫UG`aYV#]cCֹU7hѡ;>[ܾ>oiSuM9,kđI8Eeަ5=,iQMeRU%;rIg+-d,FtV;or:gc(TgNVkiveV)->UG?=W_$Ψєemk{weQ^KqzʯSN""IݹN[t|VvSIrؿkgr42_) ==hFOliEHpa_=ϭiwԯ~Exgm]H8=’FȮv0)EEݻ+"9nG|ѻ.HN6,-+2Wk|I7IjQ,7LxisWmN^i= ި4n@Uޢ/&Q~ŵvhVWp>%]%NqZ^CAen H1ZKS[Eh6 g?$kc8H=#MM\\jCAb.w71?Z#uNfPzgeR#*h34Bٙ;zsb- }hW1&N͐9>ulԏrY#}><>eOǚH#Rׯy"ʷI!k,|\5hqF2adg]D~x O;{s;RRZńT;/f1z珯ZfS枉(5I5U6Ǻ8F{FR[sRҲbko2H_)p{=k7MSXV➻o5]QOL CQ2Œvm*Cq2ߒI9x -c m;˿ТS0y԰<)TT@^enV̬ w9ݻ2/qHNVM`zSZG\KW+poNwH芔%iX rdVF23Mw!X:W;J9GӯJ{9c*N. 
XP<~'޴WƊcYS*G4aq]/gviZZGF yYSc>I9tF/G>|I4H;|hrQFU5RiIVO|u8Mm)/e-!<qb1N"Z-; \1߹9֩~VCct#vzXoAlf\~_0s[Ǜ68?dܿŹwzI#8USjͩǚ\ӹ1nq v996e9rlOUAUVmtE{rimf g!X܂1U:գs\|c'R>k2wh䌹bҕ dYUi^deSymvmݽ['Ҷ,etc+jKuϛ 񞞴Sdxq۴sG61\/: W(6+ך8%?v{R\bsFj+JKRfIcakʤIhN`XźܪQ͹$JŌIUU]2Qr)s4L.>QsUrdͣ"2nKdZXwӠiiFR6P,ElpA,{p'/?s8|2w,U^\]۲A囶s o:˗oYі ۾PxOEm}MZz.㥆X&-8~i-9 +r"TKm 'ΦC5ߺRZB~ҩÙ+UmH&2=fmp՜(>mԓI,9:ʬsU'٣M@E W9'+^䣮֏l?iܡF*ynS4$Mt+ݬd],^yoïմzrg]N]Ww @{TTJ2vgROHOR)Y7F}pskG1|ЉDa_,<1pa%]oN4vmKyfȕ<;G,Y &Y>UJ猃 s9kSL0n 7 zglAJ2K8ns-Kn 2y>ܑVkeZ8kmI)$3Ve}ƸԥN+EVEX4!UmUm9#׌zw5euw{ t[`U + eIT6}tW'#3wc'<+PƟAo.0rK,l:>J-#jz"r&FH$v1DJjIJI [$q 72ߗyIIWwKbhI>Cۭr*}642#3պtIhc,ZT{^guܞxH q#Tl=+ݕY} %Zlg$|g+hJoΘFUm>I$,k@$?z/VTU*+G\s`OZ}=Yz{D PeQE Ӧj/5עikvվBKl 7a`b\yXe3{+x:{Қ%Dʒ!}QZ"Ӕc~eq~ ] 2[yqʹ+mܹzft96> 2r9=Ӕ}Kަ;?"P ۣ[y*~@?!5܉$_T[܎8?m*"l&7H0`n$1(I6ި9}ȒIY@|/\ ?֮c+7SR+n>⼨E8sMY "V#xWrLpwg Z/riGi2+i-effݝ{s޴:Tj艭^M3+n\ְ_y,֟s#@D[ =匌?+[ZZ4ΙJ~fu-µ1R0 8q{ҧjʄ!vȿi F1JӄЪue npdn5yE}w،+H؁VL)W㾶 6-cf c;}xⴔ9m>dέZO&'iBc$a/6U%*qz,HdUauan2N*rRIĂI"`yE,3N}TU>" {|;{~W#N)6{M*LIWGry'ZF.VFIr;7Y-ī#y {Gaզk\`=RievvNh xƖG}"X1 1LWSFX)'ɤVݻs^8:<=eRQŻ$,z3j{inpUuok%kxV6RD)fSsp:`uJ4tfZqrYṌZo_Kc=;޼]N8VR6mQZyc]e$`m8޲N*|/v>dv1mg^r?ȫKٻ.u$i3F -qN쯩h=ZZ-6%gjeT<JU*ϛU=jEKaᠸEVav*Sq 2-Ri'X<U؁?C^:w'RQ~+BQhw};(n8ǵsj.ReKorH.Z9}hĂsOihOQMud&ۛ vӎ⳩W+^bYkO+J!FPZr "bF.1܅xC sMs8˖'M8Csiew9aiʇM<x2-:Qn̚;+1Dg:dUNj޶4'L2~_3+us9~S)=IV^L%(eg;F{匬o2Uđ;Z56.:Gwq3zu{v$7m^Ke<$W^ugKu4~[gtg_j哩޴pIty[.[nAX%5b翛9*{Og8}~lHB>QgU]zƴVHo,23+me9*4Oy7o#nxGKd{;Xˆ9.g(잺C Ȭw(1x]Z NK5 ֡eUw7}p=+R8ur'ҷZLMY0t'ϟzjE$F@\Sq6܌&*Yn'u#32.Lw{-h6oU%Hڽ q\ !+EU*F89=s4ᆭ{RӿVA [j Fr{?^-4(JyG*(u<ߎzTSi7#T ~̱j%#%NrV [K^[mȈ}h%@`qug)ǖX՚oO$7<zV<ҩ^.TN^n|l~Uyj~K_+ʟ6}>癒;v+ QiQj.חmuצ'['HsBE/ %7&K2y+T~'8cNo#JjrWM{?vV޹(I39ht#:~)|/nrbKc-?0x>W"u7JYT]^UXu׊TbWXj3RzY+7$>d6܃WNeM?S( o$b[EDr>q\\N~jWw[~D3]%mȬw'1wC+A4WahYEhׯX4#i; 0tlocu$v [̮Mm"B$cPK0^7tr8nj8l]z>F&EV#^9],N-&X 'Gkm?L>{:+hVM^{DX6G>?C+ף}= c{۩U܂<2}R)okȢ/BEVs&/ r+ҍ/{MM|ѽ-wݳ]rJ8ƝJR\M>vzg*zt:Sb~e~ff/'rOzQ"[RvfE"_kXӓ&K4Q-cgYـYF@~CGZwncwNvlsN W<Z*P[" Kq~q:ѕi]g K;ffYWշ0FR_gI$l4Yp ymP.3"gV9N1VaZRI O3Kb'j厺IhV⹉ T]inhp4|E׷#Gi[pnK] }9J)uW^?ͱ`m)Gr&ԘU:G[I.ep:RFx+ gg4 :#z՗] {E'ȨqQN_R:>_m3]{A]Nӎۚ2mpRvcp'L6m-3|Ƿzu)rrV„sr=t6f[Fwd-STxKͽᱞyP,sңhI&]cs3Kr;Î~yz#>YSzrpeU^kXʟ6Ԋ.Fef3WnrpֱbaFi+[{#ǕTX/2m$sV3B>{d"ً'>߽h┟c:،D">XHljdP>U 8\ةTqp- as14K3qpJt9RG /^9=kݒzJT&  8WR6D0SBS39K=4q b3c F (ge3hϛDoLe(nf-V%D[؎'>LmvDo%tI`PʷGъ2Z1b攪.dCi#[DTiݙn`YLcpϲ_䷺CTG'R;T`nXszw޸%'es % fyK?-2vp gq+SɽzX(jIZ[f;Y}s=:t43[OֽQOIߢKK}ֽ:\kll"T`u=OLVo2~*㌩ʴ쵿kC ϲTrjz_~D^FE "8y|euմ7l`f`'x⦌w{LJ#>um;Ile}.w}58H%j]9h:._RkdLն۵p;sۖR|^mVʺ>ϕV ϥyi로I+;iwa:K?oLFZm9ab9=-3^0i]c5Lʤ!d44Bwrpw},*"RT쟞{U7-7U)-O"R69ݍۻח(SMt9V!yQt=OGVmvW}؎MAE:qJLKrVwFpU؊~ٹorBWz~iKsjpy 2'5N%y=?6MnVDEʶd;Hԍ][-}>cO<٦ %J iE~iuKR;8dmYL*$sޝHӽ] *GfJ O+PG5 Ӵ`EU_&06ȼeG2{ 9I=_2bdozg1G\9ʽJL5%.uvV(,rtYΤax{#HQ*Q[x݃E݅#ЃWVR"޿Վ)ʓu; i6r2y.21 .u0SgϧyXmVRHV1c3=n+hFdc̒ܩ+h vvI3[Q!X䈶@*= q+&Le[[ȡ%LhYKb99=Og/ΈQ^!QU];Ys׮*M&2wZl\vQ6&ڹ׋Ln#pJ~`r~^eh+zYr_s7*=9Fe~,v8ܨL XZ1vzb^Q&yJK6#fn=ZF[iF6Oa \F%E~mÜvd:t$V!զݴ4|rᱎ֢PocU e,q$3MVq}^\-άNRK)oֹv)Nk@xKIVTo׿^ nfe*R4{FՈT\ }}JRo4muFeUR2Vtc= ul,LWק|vԩ6D2$v(oݍg<鍣QWsؕvGY9]jOv-.G2{ *n2j[2!'?3nn8۱숥-,CW?/^s7Hʧ$!K3-˅OuϿ\˕5+u#Y?wo!dsӨz;ӲўV1S5I'q$% }~ޜכZ2ܝzin%-M!~CNOS^MmۊHQR<~ 2%+,Ŝ`@LNϧS͆&3&vB'5ʋcR+bRmS!ȯSh/k%gtDh㯿JTF7GgjvSX2$K1U?ISrB-rBg7 !NVʎy'G~#IԔi=|SG&1_ 2ⶄg*m5o>rԔjZu&ekhZGWU, dy:SMktSNϩ-Wnyn9#]\V:tB ~CfOܥW߽gZ4TN:)ʧ$Zx-vmOͼdcJz.MBqo[$6RmЏE]H$`3Syeueo%Rq!+N~N[Ǚh"ӌZ{\ΗKeu'?:w޲:l\N#J*/iYZx\8'ǥEjQG{?:8ɻ$bY";'e~eP>|GUn6һќfq"+"1RkuR5-w$&gX]*JEF3y'mQVhsGю؁kybp]gY? 1mڂ=|. 
9*B*ӏJk[.!ȼ1ǰךRS:T8(|1yooRNsKB{"Q:dP4rVvguՌݽXMŦBB.#2'[0*75%B;#v99ʫM{-;DnR|:懳'IJWsr2 TzYFQSU{Nz4}V#EuؔI),ˏʭ6SR7[DzsϿN)R4#(ғm|^m:"L7RNԞѥL?55R]ӷTO,dYGъJ5oYMV,x?֛ pNvo[xFјRGƒdu=i%YߧՒ7+,[#wf ͎1C h͔ye<,jÖ+|zy2e?&>Ԭgfn3tKS\=Zty$$FYNР` sߞ{W-JqܣwAOJ[]W5IA xx5s4(rSsn=~v=lͻ9؎֪1T}"K름9'ؿ3 iU”m};q};܋!)4s{G/[eRe'ޤqڡYr8H֬>QyÍө<RK >x@OeN)&tTz۩ K^Ca thӊVzEtմ,x6q}'xN抲P0d3?}2} |F*V{h?|Ap.Ƿ(޸N+:\i#RNif{ MRSXcם345ڞ_khO ռ*LP>c@*TrF:OWܿ;{a4Ks !2w9SE[C Y/׋lj^uٖGXwk繥vQ}nTyH񦫯ǣ F4qfc9\ʥ^uBνmhkӯ9V.grwKeo8YQOITZCFWק|U Y$s-8ѩk[K籶aZ9˖A/av"<{tRxq=w|&ʳ~#G+FR2罗!0r*^dt<}q]IZ(TRAJawzrTEDL*Tׯ_Aham,KqJJGMQى|ڽL!71=WǙ'/Ռ~ah&A& uܲHYveO9fYK;4Iy,IusϯVXʑM?XZ<ƊI~t徵o1Ө$%ah<"ϩUN1M/BzpQZ}HdHe *cURTG,]͵Kc,i<{ޱGޏCw%K)'cUqG_p>*JJvM^]Xd U亸yfh# N2y9(EWcR}ud"p>^$uH\cǷGrH9c( %d[҉Q{Jt(_W@.0n k+o<}µ ђe~ሔ%6}4DcP r3*I9*Ӌשiԕ7t[ZY2<8〉 z~^N'?1wCmlf;duʖ*kiBI{4m(J/yjͰī'=iӭͿ.d},DySnFm'9ҹTu^Ѕ5t}Vmewx.|$N8~qf|*S\7Xi\21 8^vvT&3[nGLǰf.1]Rӿ]Ol ,>jNFPl^DP}+.J۰n5R2'T_tzLC4ʞsJڌcN\'QUgC9I=Hʹ+nS镣$|U sZGӼ_xz2xO4W+"<\;ŻuZ5v[y$&3ls9<ֱI'S8ɴ1O/eyX-J8s澯%NqF}Hͺ(|UtWrm&|ܜȯ<#7HS擒V5c$EQW~!8t:Rjwʤ#uk[OR>,J#o;苑n1ڷKKjJqe Q5δ-K #K',oj<#^nY6As$oH'_j稣'~uw$itH!snjX㠢4aҌjrWbϚk)I{Z*B13Z3m so%ϔpOڼtӒfѬA},E8!ۺVɶ"H㶹i;ن z{"1O-$Y"积>NѣNRm(۫J *H0\#khR]6) /D*MKE**j4qܼ#蔜b7zz7Ȏ;9X†fb1;z ʟ3A*݋ItlnV |ïart4QNgl]7-#$tdpF׻%1x´(GJ\ʳnlހiʧVOAwk+&f˷;GU*2?Wu%y<^Yex,uҪب֒ZE)2f4bQO~ )FdDS4eI-dӧb/mj˚6xijMj˷|<|B®t{Ib" ȽmQ|!?W#/xR/Mg5͍ߕ.&}}⶞5)-CG^on#h8.rg$<޸/eSk˗dQy|z?JRj|XTR4i|[y/~gfIiO,*T4pJZ97+dlt`JR{ P2vO]u,,eo.%Y=>@:ZPz\V H.$Sڽ3dW94ee%r2[@d1F@R"LIけINJ6Z3]qDq_`Ia6+H'?^ug-NwZuy_(j{6];nBCb=DM,PDԅ"YcYZm2XFr;9#N?ͯ~7˸*dʰOҲ*ueK9Vʊf*f5Jǧɪu{:y2ۃ9o9RD27d}q׾*]Nh$^54,Ei`$-66ұ+g/h5M/UX[d8ߛ|O`?/:ΰӏ2b))n-|$ՉQI]/lVro$GdgQ ߳ʣK=/$*̸F~Ɵ`&1Da&ehf`rp2OϮ:V/r[S-7E8$]w.vZWĞˡFQ]3YnN̬YUpF}QxTتJTb6mZ S#Ao9QuF%`[o#mN?r֍:!+X豼άdmUX{qRs;{9ai/_Big}xtޟټD_P14+FF]?׵c(46SqKr18rmʤ*;ϯ]11ƌ1RRtڒHc2G sgmݝ2tegKO*Ʋ$~\m2x$s5,Qɵ-LP;2ƀm8{ IY )僚g{&ڏTpH' [FJZvQSƾE`y̙,6>c@3J2G]id3 *$'o~+3KԧZjvLv»8 qToQ{Ϛm'yHvɗc|no^==NU/gʜf[^d31FAFNբ?x{:vbpd;cےysҔc="/sGi^6FiK.;~"8<׹FQ4y$f2("Uf-,{5*oA{j*6I+6=,UU/Υsx(ʤ]O[?2eE<{g[F^QQRSZrn!OިkJd(9SRgLD]ܹfP~R;߽mj2aMU@^%ai3vYN+bsjDHٙvܟn{Tt<(MlR )kua''=;fT9*JbE%Woʒ,G]''>íO,7fܧMB?!i0 [=2qYQe*s$sHVI2֦ӭ%9~F,%q,h3)s{2V 0eY0㌰=gw-$khdycv0A<cfӭ*NRNւ"- _%K63|sIt9Boil73;3my8hyyKbjGUS6L fIvq8=wQ2wD0rYFYNKmɈn~HNa7{g=p+9CDcZtHn!%XI`pG88bԟC׭Vj}loxG®29`AЎ@5 WJu=}?252I߼i t?:bKSSp]H*SJr_[U(Жn=yCHqV]70D NG29+svYV5{Ȋ!lK2BNڢ(F7u+{8߾-eGkrmNIY؂{k;ܝ*"q?#U SZjdhz} !;IoG=/W"$ԓ֢{^%/ӻ+RSu=9⳩i]2R.KFVU'q㎃֊tci\Bn+MqԂ;w5SԷB#%&۩g6i +Hqzm-nΥY[6CMViu&T:znToJ*6VԞ^p$i9U#s2 #啑+"d_%(>߭O*QT&!Y>͑2B_.T95g.9QQo[3RJޒ#kfʧ?:~iΜ؎[<67~?󧥮kV1wt$d8,ʬp?[掚jHY[,yi?CGfqu=W[#$w'jU&zČg;ry?iʡ%uM)őrc瞸g*=bg(ӄ*5Wlac}3Vr41 J6N7wi7)Mrt禜-ȌI 3עxNLhΚ&7y+iӯyF[ pFSh1HNV$3{W%E{?u6D#xY,zjԹsPV`]ucwP RyӏsK-2eT(ij͐6O ^4O=˝NvU+5*䜣kpH$oٞtDhc(ÒROaUǁשT6ZTtPΉ92#>1w/#iavQzW:{}OiˮWUXZ AR3>YJ6lIYR9# 9zVNI/to嘆m7t?iV0>Iב)STcV4Iê[۳ͮ?L*)]I{LN{.qǷ~5MS}JFmĪ$Ve//91j[GkM©n"XFr۔?{q]\lW48܁T{5(۪4ZkX2lr!ƹvF{uZӧ9GcgN uVdX W d8\G!F8nwf-۟oQ[ǞkjK5$,d8Ē|G'ֺi=y9ucX|2遌wZi(;#juP+@|뎄MT8ݱe}Jh{MK{;?y7w4!5P~lo眞sڳgG+nE.s0Zij~@nebYcx, d^z⚼wO:d !)!TL$`^khS5t%+,^%(藛4e%d,7B)Bn-?zU˚TMN2@|R[h <x8Ǧ}xINs>ݤ,2*iYYYVlҰ$dO'*Mu0L$g!>m{mQYԽHZ8HDjBm>kQ)G}Q5C"S҄d8\W$X,NXUmo6Ѝ$G'ۙzƛ~k_m*S{;rz1ƣ(SjJ _͎'T⒓ȧy޹O qsjnio˰x $`~%sӎ'R}2Wʓk4wz OҦ>鏻m6d7v929%>]JȦF,pݿwnzcݻFSqI}Q a ?1ZaOMVH/7ulAdgy'N<-nTXSye@<݁4/t^L}ɷa㓌r;~Yr3?te{).%c)+r榭Xѕs6AlcLy}6޹2]Hb/!nYc#'>2cG%rG_ 0`;k ~{?g{4KƍeoxǷ?aVtj8ō6j~`EJ`g:k`Aƣx#«͸ p80ڹ|KCsSkkm %(*p}{b%)^-ŶC \\l|g's%(Ƥ-Y# f#mr rZQߧgNyӺ*!5$x8泬-&5h7 6el&q xqR+#տu,-#IP6;Hp=G8Y$Q)K%6:o98=9a˼mN/RmIE9em0ezq߃\+kdw NߊQim#i撅5[ bلBGv6*Xv1&:5hYq,6*޿{89h\EbW^dMmwpyڜOZOܲ]ҧ)v-Z6_n9t:M__ioݳ8߹=1iN/ww*>3SKM(yPy*t{JkKzl}5?vEx~Uqty\[F׏ 
TVtm-@JٶvыMJlEdXʶ2p[)iXsj&ĒVZ˖/cQʆ@Pe|*qۡ^ͨ>T 2_%rg@xhE]ܗ שZ9#`C, އNά.JRwK^4X]k!1a+B|Θ֬m{9' I39G*W4*u%yI$HC6efncXvGN3sV9cƫY{9Fޛt|!kx"i&c`}j|DGqUF˹:|h]n3'"M l*˒;}:v59yUmd߫x2G$1k4m˸V$K@Y59I#1[3܎:wJn"- M1]45Ƴ[_O!2B0Z7dlLϸV5m=um1wp}?q[FIki*1Йu(I"+(˞{tN]Alc(V}:1ZTԮgGu4Z, zExV)e䂻gIfan" ԀZKu$5%[2dk_#x:s]\ѶJdR[q,B3'. R;H?u՗ {9+X]=,UJ?]I#Tv{N qF?HZufo1V`|RUNU$֖0QIZݭ$ r!L['aK]2}j-U|9}T{%=*+"u uLXPX00q3+YFZ*5NT6K,@,zӿ^ҷc(S[E%Ondt1/dlgwn N}k9svܵN1MtpײmD.{GM9Rȏ"quR[8=ǭtӔhQӌ񓒽^]l`Ut߻#10?SSI]u2YJ[l:;֒d+mutthw4D|92mYOMK&4jMnE5쳺G8Xe$gaoqi4ܭ:)<-yEbIZUPjn1S9eV>U}\|ce'%6A?k*]Nxk"+r2Fw =$赹~-I]nB30t?SQym~ҥ:|ol$ ]6쪮HY'(9F̏v2WRǯzRR~M{׼3.WO8z⧖N*IyHG٩'_,fx%W)P?/YE{t#{H<Ő|1 8JJքiU|-m2 E Iv3Lx}7[8g+V{F .1UQމ.eil ؑѳ Þ1}ƶVC?Q۠ӶШӧ(YFaE, uę^?Ռ79jGNLu*h1or+ =u9>Fzus4G. ۉopo{E~j,ifUrv9=N^4bH.ܲ*26 E_Y6?/Ʊ9J h?iNP̎#Qul\F<ɥOj-bNn 8q\U%˯CN[ӱ! -&L_jkG۹S-%~f6(Pܲgwn qq&fU۳ gtvZ^UV" I5fǙC'$2?:δkWCݎ|K8Gf!9#^+?g)Nm5&F\kz,7FK4nV̎bv9$ҧKFZ*W4_n@YR'A#0'kd.92E5/v0禮֏HL̡ٖl0A>6Jqx<*~6 9bdzt3XWal.aa7*e)*.N?mkS"|-r8'#;=Gzxzя4E.YTؒ@ĥbnU ckmcN^M~'Ei.XyAW=ZeZjwa%(˹hyͅ`'"Jt_#ח+~{r=LGLz|(2FԮ}gRvԙT拌o TU,s;zxW,,/om#*7d#kTuZ*R nhՊ1ܮz+(F< :HdD Wʼzw+I5+(s_'2>>0naMsڶv QuƾcK\kS~ZY䵭"Pf!'~5?,䢛];ɭB:)󤴻S#܈a[ءҏ;frC0z-NOt:$ D)]pGs3C U{mFYad0q?ycNgv9ʵIF8[}+).FIq]4eK[A߹fRCZ:h㔨ʒW{u"h,d}=;Stź}KKEӿ+XzrI$c4͸)9hN˨떷>dkۂ`tc]I;yk%Nڲ腒xfpYU$䞟T%ɦL+T\=#7 nu\1[[ S,/!'w'R6wR{w[f-? F2:s׃[ѧz^};tT[׹j [ w ]tZBpfmXZ[T[l<玙_M mY|Ca)ZyZ&ʾO͸|mQ]J\qDą7;A~:ֱv~E+dS?w jK5Bq3I8~R-ÖJ\ܥq"w͕PW 2w1JW1H c#fFi1߯:ts%UW3o! kt~l`sNW K:Reuj3HxVϔq r'rVv9kRU+% "KnUwt̀mvNןʯn9T[oυǟ%hd,O'ok*Zy%YahCgҵQRӨъ?"VQdX{v\Sӻ\/>ȹ$11kHɭ?T)CⒻJ.&dmm3>+5g>RzƷ1 $<5䌭btZ;a,$R$ ޸<ckumH 2VHgrc#CDe)t*2|ͩY KJ6V% ~}+[F=Mi3﫺I;v[[ix(X!zt oO[JzwB88p˃mOҦQ*?#,nEm)^vDXdUyxG4'fg*qmGb8KHWi'~IʜuZw 08Mk#b*Xpj%*Sw!߄A17p2jy^i<]6L{LW*S֭w 2bFF{}*h>X2[/ |*&'OoޑcNVzMq4I,K|[n:g5/Vki֔Z:ԪD˯g*nns议Cot$ff?tw2._v;?iSc4eMfrDvsj޽NiɇQ ;HV&Tf]mqO 5mnq֍JvNdnBcYtoڣ3W eN>݋ 4t` nS=1"T8kb472*_+ÖֿtrΤYij%I ˶xYs FNNx?u_s uds33+lv8=\=VUY݀$!@bw``>Tc)Ene8ǟ2"PBSTԆubY{؆ endD攔"dsƵMLwdhYVI7ƹ%R20*F>[I/ i N?s_4]bQ_nein1&+WD}_vaO#wX>%Q-VjΟwWORGE*˕q>)'yE? +rAZ49Nq[}gT*9:.ھ"(¬3QPF00qXVӓsƵZܬ4虄"&uV#ch?Φ+9s-;zUJRMlV,E]}RzCr{*5#%}q*T.tt腤WT;z=FOCߥqWEuGjM~RJmN7cҨM) }}p(J-]OOϣ3 XK2UݷvN y_WsQ_>jkO33,*y(HhoR 4m"1k:޺**ZJĒ7CXANQrM:YBY$?,Z:eE 4 yեK_O%SbG!*x:EQEӒ~đ~\=¤.J>Ϛ'W-iemx׊ *&J\zt]Mko*H8`$+휎tZXZT}JňR7$YnO]^g|q"_pF~p߻N;:_5He S{6o~ǧ$wHBkFqhUN7Sm=m.goV\?uIʥ/yiuS'؄Z[Iq!' v:vOUZbծKISIGWpe93v>.]Lk9F1W9#,2& /4TGeBotjzݲv9,=8ʟ,^\׿gmc[k [;"S!P$λSgkӧܭe6z۹O%R˒O{ԃEhƝ9JVv}sI2n1đdžULv3(x0EjǠzxa@)AF2~hFm=6N1ru:][FX㸝D<,;v·7jnBXROsFϿv 8#^9J'4UKȱ6ՌU߃\cs:M.ExޭEG.de&mZL`˒q8I"TEinݩYːy׭)JWݳF輫wm#װbcu邔ב& cY/Ѱ=j\<c'h\lK/)c8ʙfV5nw=q{VzLL N ]I:qfєcMIg\[E8zz]T%EUhFBdGkHJI 3".iS:ֺ%Jlz4*\܈Xf#ܫNsn:r伟^D[RUY;zsҵ8߉isǩ4:%1D0ʦ@Y>XF.LIC}V%:+ mȕ9Iyu.&jCu7QϿr;^-#<=ӟɌ%NM-o5N<|G\tcFVNJ? ʹs&cE噹^{y)v9M[nDMۋ2H\FcN3,Gx9EdE;~j+.M7o%U2Uyܹ8C2.XLk{١rVS9*5z? ZwKgr(c0][K4W $23ۥo#IŜxT~>E4md*#zvfJ+[~F16RM$TYr@>^dnU9%x5klEc*h''=GvaJnse)-4 Z,(Li2:?JʤdJ-;1vʰ1SȼWRISAXtߣ%$*6qzt98F~η*D4Őm*]cyԒCobEeq[!deU G~'iGSZ&^IդಮN26G|vYV_);렗r%HKVڳsr@;jTFLjJaN0i4eǓ,eg7U*NIV1޲:\팪T$8gWw [\/a]QJ{XWA'Ǘ-¶́MR^H%ADiSjG37Or:5qr-B/c-k2<#QIco88ϽEi}^Ӕoxʌ&VLdmǠ?/W$m<#IFK5q32x;d#[ԚO/r_jqxQ?o等,*#/ ܎ǧN›ZѥNVmXvcc,()89 FI-ˏk9tWV;x~r{ʴTչ~^O#Q|Ǹ>`i{5)YKm<30צN;ָVkt "Ŭsۈ*͞@?2[^șѝ:)һ}RHţ3HzqZ?s[uЧeܫx-rGo,V% qלrkʖNZ4J.Wb}m_6p3U)Qfrir;vRɑݣfQ=Hpx'K*qsV~|)aW,n.bX%GvFUTNp1GIr}3/iwDVWNۮ. 
n2n?6dB4Uӟ/3Z?*WML-+#k6G;;%F9[y.4-5 3+ysT!FK9[ >_.< EQm!cƯ1oo){bip.v|d{n!*urSi|۳0vl}S=9~jRi}󹗴y"VcH'cK~֫NkK]}}1bw8{:bVzg\oq$>T!Wۏ#澳'vIImdzk)ִxVGIpp8<}E)M[=G#'7m>i&A0e0(2x8_ic9k_=9O/꯭Gz %1BFx8'>5o`s(VMS G0ա}{ƻҬfY$"?\ujozƕ7RgR'n,Jwm.˂=A?\.UiJZmu:@<3)B2W$ :gU捻eͣ34ڦ[fg,#9cǫRTkaJwІ_:BrA|<5>Z4;#NKju#k7x_2`:Ӑ;j~ަ3דۊ_Tdf]:\ih<<*F\zKXk9m`pŵcqߩv'N3My>'@4=k[](NT=V&/]?y*i(ҼAkߥvpM7[nŽG^r6&.\6t]/ >aNPJ[d߻ղ7yow%qfWwg8'Hb%vk&aG S;?wZi7S%Ǫ\/:d2y׋MI+vyr￟C].ñUfm'pHpӕ9iymsjt黽wO5Ɖ3Mݑ\Lc$#ͭ(n|+Rz[jvmfamsW+1::Rhg3@=8=+ǣM;}:R?)i~?&IJVO^ jRΎ&AOV$ۧ޲ T`z+>YJw~^/eE^]O#3&Vfs_8eScrRfN$w$gtq 6zZ/Bu{LDm%kAy` d9]n4S^ZvCaiEHj^x d~>Z~ǝfKv5nl܌^S̘U(E"X|UcdIK4;}iKoB*DXdW)$ܻGֳ7 ]+"攤ՖL3Hn#$ڭ"pH[V#"rvRUDΣ[i'ޫW5r-ՎO䵳d{i!2,H=){Ju*$顝:tgس?J4-;g{'b5W:¥:^䕿>a 2E!IW6#ueO\GM'$֫XbfW=@~*qPacxVUM/UR0QΏuu^Z;G7*[DW< e}݋WcV !O޼&@^sڮ<#ޯ*Ϩ3׆9I&zxN=eG|jQ+wV\0oFGRE<=ԛk=i3Fof\@<0r?~K\q2>HV㝧2 sQ;pIGTiԎz k$@4`+FNikK-hA<>jy7%<3LM[Ƒ)Eݞ:|l]9;=ʹ|ӵ\T%cwji).#6a0#5)εegO$p{K_K [XҺɱXɂ)3jTloQҩNQq߮3-1v~R{|κ%N}v dX*rN+hS*q(Ȯv#F98$'8Nba];B8RJyƏK`˽A{v9::-uת4(ъ{iFϵ0oUSˣu;iZܩ%ԢD$g'dUJ1'4F^ҥ9'/vVIc>]?S|J\0S{/N{_*~ua!M{9 dgvA=]y/KȬr۸c& FR=7EoḢZSMS+bʚEX.qoNZ|/RngBj/^?]98?³FU/m m23S*a֧FqmnMZ&$[U2)yVT45-cZo%e«aB-?vvHΤ)ᕙ,w7;CT )S>fHлȹǗ&$1S*. $%w,Ăylg?ZN9UH))+um% >ƥ2.qŒ:V3:е`]ʻ׽M8Ե3*1>a-Ϙ譽}:SQMlƫI;bǨZ!̢ʣ)s]lOA,Isg{z~pSRN;.o3ʬGi>ңDRvGwiȸ{VrI< c$ ҕGilsR_qC+.$, sڲ{(F]~Dioı{m)sloz!4L`?obwC29 ޜB)u9ԩ%W i>fvmIN4]'4 MIz:?;̞^G~3~"JܑrQ{ky!DyQv".~UZlR4wFkmMk}nЃӏCru_xJ0pjlj$BHxSұJsRItV~m/cd3ҥԓgU$OFѭq~SYZFvzPS+;p=\{WUmg6]3ƒ!e/pq5PBWՆ"fȦDܝt ){皣~mvU %[tJB$*nu9G9ll+qmxԒÈ*vWkOglmN1Umo-Dν}Y Yulv[[DO.<[)^2өM>[>ʷ+#Ol'Q'*vzۣ_ASȽc{#TJ*pю%Y#%,JTk#h8Fr{nYUvZ}emFNYNǥme̒wcXjܱW$vww6}o6>z>Nng(rٽm\LV߷nYӧN5UsG]-吟/Ʌ*W|=kQӡ.jsZ$hD^XUPp>},!pgګgp}5o{UPXUH681^Z=}Nyԥ~J1c;Xv9']4v6NRZĴm a<Мu>N[[Rr>A[ j¶1;$vZ8Yw'St&eB(b21NݧӨGdVBe`@x".]^Z(鮢EynSۃ:VrcTCR]G-ܓ4Y}'.X3I}Sh#f$$u毛QMK[߷gV=~TUJtĞ[v-UrU8gotuȬWQȮPq~yWV)э;-/6:rYmUdr (4{:9rEԎ[}- ǵ~*/wNڜ?)4}{ĩ,Wr8mXxUN_7$J*tXÙ~#Tptym2yrYWꡇC2q+So+sKHAQ%V%’czd_kJkNcNK$Mw=VNH)\tcY)jcj>{I1XkykpGL]GYo}N=;ȝ^dEDq'c|$ddQc F;{vh|{X9Čӊ/bߣ鐩Q51s {s20:2Z'̉㲆=݀?Xdg<mJ=*ukQrZ+ ͹.ۀ^HD%$a 1.m]'mIw|I?^)zV}Tݖ֐ YUYK>9NjKvRE6sy+.*XzS/SyVNVW~[ĭH|ZUhޯAu,A7Y# r*v{lBw[K{2vUO\=kU(5W+I1e1,pUf\?t=z'N*UFlVIBT*Rܐy?*eR|$Љ6UmT*3Vծ ,%UxY֓pYRw8(:RjXBtMo*fNycRJ4+:RJND hDJ z8zwz܎G rvYJ갵ϗ5RKB=њxxeg*24t1YMdn>F<ֲҷ/emu7t6(ݤI."79ܻ+ӊe*TfkC,f/5i#;KONTT74oXFR1c#''>c}M(RkPKqL6p?vҦX*|fI#Ym#v6#ׯ8J\0jrivE|iJ6oǡN:kz YDfܡ,Gۜ~]X'rD֫0s?u[!:?κcH:n EvAy)r ,luN{dN"R$=g rJ9*{FMi<"3Ƭ?͖c\Tdi8J5'F'Ө5[Պn$1_sn5z诠ʧ#n}9nc({jGJp<\zbtydqy yRrdqO;{doxXܞNJq;Is]f?/`9QSpʪoz`cZ+_/Ě Ŵ̑`T`TJFqҩ߽rRf%Wk2()"Ȍʹ򗜖<&rǚVbi.^XrU^W,)B*/gV*:Eg#h\X;E9p6D ` itjC3JFRc^TtUNd+ya#d}nQk\-˺l4'~zFЧiAFϱ\IHs3cGy>jqf-4ܮl{bǒZVBC"I=ѫ@HzʊI[P)9:3Ue8 zW7q\s[K2.Y6mJHҕE) tydnw8`5\9Sa/g(jBi$K[Ў (ThƥDH-p\.>lT5v5$ֳM UCsڜo_mYfwU7=Eo%ԼI,ѿyrpG9UMJWWסJ5w&uk)kjpXcsjI{*Kŷu笱̱¡WtmNTs+`LƟ4h;#ͻiST^by6 mH(< ?AgԼa~+,^[)cPKvS5KNXV GN/N8uUdK3c69aϵk"yT=O[a3ߊӗm¥|33DswN;ߥm2eʑ$gR| TiZTZJXf ;.[I]Qz M\]y˽rp1g:n1hG$2<%oAiQSm%{C93ö=ӹR&yv,'nY--h{Ǚ)Veuf9 gvwzUҺ曾U[?mq=>j˾?z$f8?cX[`88>ݪcIF,ZՒ# U*v2: mKsIRlh(PAګ<(ٟJ<kEj4j2e#-ng'r+}38-m8Y3FVHQ*툄drq MTc+6mRrczOi$Q|4a<~Rbxǡı,Cr6^{֔ӽ3q*J7*2KgsE&IJ\=YC&Td fGeNcƠFw/8>j7rD* Whۜs Z1MHKU&%jGơK.ރUj•5*}ߩJJ>̌Ϙ~S1ڹIzhI#.KyVOlc=Vr)hjz;IUV,9K)GfU)R)n%Tk_F1j]s.nmU56!%;T6 [ gni]*\ACLťUik0l'z3TdU?zlC<Y2 2c+*+m%qޯwZN7=ۙac>c8+ipP;,[asP(j\71D# 񞕔LkSv{GJ&ܻTI gĂ}s~0Fk^c!;̒vn}qJQ4L}Xܮp]Y#|T9>k?gRfkV*ܴג 4*иjM՗gYB.JZ#V3e!a" + ;xϷwLb+ֆ9 6:mN+Q(Vc.^ir5B48 nlIMWiFG9\*Jss(z5CmoVf0;A?{;YrWQJJ3zȷ6lF. 
z #7=c4 (aY3 ޼ʱrR9N㢷Ӟ!"HmV]8:rrYŷ$4-I'}.7[Z:!") 1=;W/iUemXw?0ڿ0oRcKhJ"ޏ/ߜmQև 2OK[5#ŔΧhM]eNkHi?xv9'qY*Rg7׌Tm뱴*J'{ekE_-ɸO*Ӛ;7ts{Ow] HF?6#s(R1pЌ6h"v2̉,n(gS%N]t`-ZⓁN^)o!RXmaUG=9Tj_ԈVZEF- ד=ZΤ/eZu*bg ̱S%vq=Q)Gz3G_㕶t770 װq hF%$~dZ,bn2HYUU=ԵDT)I/{ 2:+Tva]kB;vJ!6i$ gۛQ*]-OQFq99~U$u{HV&z$b/1ՕO9s.k4i*TF%ar3k<`c߭gzrc̩O_༒;{dx>FO|߾>mGSs-ߠ\y-bfl0ڳK^eJFztG 6i6ehq׵(K^F2Nۍ)dc|z4y PhͼC+!~^X{I:NruǑkc}9#ߧ+J*e%q%Ó0kSRo[DEiؒI:A#`(p?˭q֞CLPr]|3nKrA'ұS_qVa*2Ow>ݦ@bg=5)FwؘU{/Hέ1};>ZRn:BZI8GyCl۷ps\uO r(gR4o#O )m$FF|p3߯ajʹ}jVNU ;>HBVs*>i_]^~d˨_ZĦv2,s=޸kJyICLUK;ӯMA7$q`(ݐFO:ӧJipiN;yuE,$3^Zue5Mv7Yӷ\DeKWv37W o{/d3h|b@s.zMrӓiNX<;NMn75g_{zs~U8œȭ4Wm,ݛ ϻS.WN$_Mry kkJUڼax#uTUQ&2/uk-q qWjr쩩IzU^X}ԒIck},l K7̑@,r~ZS{$3J7#gZKupEvDTSHGVcE`tZ60hތ9GKŐZȭ +]FjS71Yױ}ʗ/bS (MZ:}zU*yTq8=Iӧg%.5'JG6h9ʌ3z w*iʊt#7w&Kt$ݫmY#qKoэ+$z (7FI\fgf102;U4j4ּ#$@>N;3Iӧ Zhcw &$?8n}*5+mƥg-~.4f;OLgp#Vn;4Ռ*P2o8be{v8Q!~xұGV 50ۼf˱/n[:Fd`!89\+؈?v-u^߾A'<z|꿯cWPÝVʾe5nXs=5)P73HA%|`$Iʤyhw躖hIuʊv@c>ֳNU(KEM+Y5fEid`۷czҏ)7&rfy^UTI}w9Bo-Wvv^7k o:Xq;Xq@8$jΏsuzqdl[X$(_]P9 z|֕%Dj$L7'OW>(/")\F. ҮXXKɚQvEl[푓|^~kHӕuOCrIi}1=xkލ9ԨY-a_-מxϨ$E_Tiz#u!V}ݻ:uXkkN1snEHT)W#!^5Gf򵥴ֶ7Q<,=3 tGOYuxYnQ5==է77#?bm Wϱ"iKV3~їYLyz-qʜylbH˙[PtOqV~\c?XsB1.?WY' ~s6S;϶GҴo̬-ju9)+1عA䪳f{t<R,އR"9SVȍ HoߥtF̗?lnĀg˿=OmYMOJIioK cI/͆hWUOvF4t˖%O4F'qrIzgֺ!)KF#J1mݑ3h%M_HYX{ut?JF<&V{o)n lcb0qvXKr2Hl[G"̀lH'qۚg)hԨ.k$9󃟧lu%CU(Z]}x#{tӺT=ƵH-"L+A]яߓ*Rela%$[y+ooͱaqt9>N{tS;[2+vu E:3^ΤRk3%Y@m^Z:ItĈ,qX ~+{Nn['4V\|?/㖽, F ܒ}{vgZlҭDӵfC4,cTsI_dMJNV$ %!s#5eeSk)I8ɏs `ݻVuYTs3`˜،{}+ ԟ62F1w{3ƪe 7#yGeRQc+ 72*!}OjF:ZyKK1M${hݷ=+9(gTehdxNV=>9gx^擓UU$@ 8q.N>[*Qs?UYIrZ/B?2kkvί䝭Ԋ򵱍S}ŕڬd]By'{Y߻W*NBIQ'*}`7] Ҕj]-rJ#)!<{Q:LE:,.}u /?&fS'RvK X䷕2dH/ 8>~2N}|N"MJ+Gx'Y7}=~).^i_?П;l"ma+6޼}H?UaN7Zqݻ?B;mdq󿖭w9Бퟥ쨹Eyknb7dw|XF<8O)=K:yN2N+u-+O F#)5w_ֆ&g(cAҴN)|误MbKHHx>*&># +5)jQ"UdL0rmʣI; ?JsJk֩V,/c.K)֒//o;NAn:z4\$ϭOuYVkT2|pcS\qkTa9nbˠDnq\38*~\/q*=ק[jHV%%_ZU*r֫Sҕ%,E=c9>W)_I>,kuQhsT#T!HJB}ǎ9Ҽʕ'֧>~Ҥk!=*ctBxQ촏C .|@((ߑۡRt;/ Z[Y9ݹ#eM܆8v\MSYqBI$C.##>)^7KnNU9TXŶ 2|z:*JU56wC+"7.69gu a9F:=/嗳Ϻ-04c#3dy$+:=oesI". dTۺ5p8znzj+mSoS8o.&e *<@{ץN3 BWOʮO=j:>';r>jjOnrIKDs:߸] z ptױPNgV3# *͌pk٣Oޥ'Rt>Xoc_b@3ۡcOSTޏ=?Z}"(1fc\msߑ bz՜TiI8 'Vi>lo!*Ns˒RD7d&T9$jʜcj[?VNK^}hmͺB͓WRQ&qtԯvuei'1J<񞧯;zvzFnž;bXE}71圣%*>aձ3߭86re%PQm*:'$8֭'̮ͣ 1w'V3;GQ1 n^M15nG.7I!_r͌7{++3+Qa^9Cfd?Z*4/fp[3G"8Ps̸Ӧ{Ƥ[?Q5ĝ<ܹ}EUTd"\.ěx1};TԾɨ8P'Je??>}i(ʚ9{{PS4e1J8!GjOV*U9o+ 3Z,FHϿY˼ь[=2%Uj^G-iK[~4lnZ`)Vg9$k{sJM^nm#\#=8sv5gvMޠ]UHrkNMvNYy3`t-hj#޵_fvۥR&ӵR%sַ$t*^jnYmav_7N00~ѬxhSv6H-UdVm UX!(4aFBG3ƒK4UI.O^=ִGE:q6Ty!ΙڻivsJ;˹]jvЗηH86DF}sƥH6:+GUIk ^#Icw\pNA~HʥDz빔eN&u&6innVi:O~?AX敵zsZ&46$i]Mmo_Ҷ44[v&+N mV37=¹jQ(ۿiFwϠD^V9 gG8etlXԩe%g"f?wמ1P^^9;8777?2;&jQQ$zͳM̍q9=0?(rz4qmkB.n哣3n<}:'5c*pejX̖H ɸ2`](42)Z|1~>U'r?54cNWi?.3R$^Z0&o-{U=۶'n;g=*VmkoC*]vw&@ҴknmC2d1V18VOkzHVzu~hf$+s3zkKKW(5_6eƝ2s3+ƢR1md+ÍsG'>5R4&^Rmdb@*sZRۜ""MuVV^k%_qPmӨ#Ǿ_^sV#)TD 7v7ɹ#+r㎘?mHCύ\ԕZ {9o0(O8cN>י4Sӏ'2~qZFLVG6P{J7M(nSF&lrO* dڍc 9wkS|#Y#*4ŚFq?QOASjKXɾkV6)Q6,6ҿ/˴;VJM)"\?f| *[ igӖTT.DbF۵pGS[B4Kaok![R9FFS}sQYV1̧ZXwn3\ Inwuo攣yoӹomuu1+I0n㿵mOgS rVө%ܾl^c9>J'8;ÝIBf $a}XM&"tjZZIe&|f#Nݫ"I&Ɛ K?Qpf!d$.֒էb֩GKZEcVHoq8^O NVӭ;tl[\;fl4xl|2}~ü\TsrҤt6[ ߴm2˜oBJը֝U-]ʃn1Vc*N>ͥ6[$j F8#z T4 ZWz},W]\4r1i_A)N{Y^GbF'Nٸ Kyʍ=s+J46K|]6tu} Iѡd`} チ81rU-Qӳ_/TjIU۷>,koeWf2K$hଞ_ϴCw˒ ǥLLZ+Mm{Uldˮ_NmmkޅgU+/Nq^>aNRծ_CةXK؋>Xlcf,. 
61$+ҩ릧6IFQ}>k{o0!J)= {ZN<^NIG}t/C;UTuuf?(\q9ߵyxە0T\}lJҧLkIH~Ƨ+4[hvN\u08NT잫\M,B{'~pIqtW{|A J)^g?XªzD3o1|c\ ӿn9TmhTo.D-f7V|_̓c] U((,JQ]yžf~aA F;.~Rjrz-b w81xME:u.cjE7wUIAy|~*K3,DeO^oU3C*Ơȫ'̽p ?.jy)JS*~j pdf;pzt(Z;>TӪ@dDQ*ǻ,͎~8bNPwz|uolecc|LWFT:iIhF۫xi7gӦ9z[=[:!ʹtR^hes"[nZEp2zr1LaUa*jӱ4A-c,k;r:S>.gT!R1jVTx26}ҩR5ko~S2UiZK[o_1(nV-1X)U+k8nUZy{dVPMx8S ͽ:uQŨ}a.Z= ĪIN0N?i)G~xt vz Ϛ#ީOO$kJ Uy#xϿ^p?ZġDMO ̹oηWVp~ܿ.1ӧ^*h慜.Td]> t1_$ Yr\Mjr)E[5<4heA#95F^͹[ԚqAdY2Uyi=ҫԩS|ש~J?P~"nZ=׳uC"36oz]?v:+Q峛Ь]I#Px>ZCtX5֠cHF`9JN3~TT3я5JKWId,1!OSx[JiS众12TƔ{eFg.m3,H5gݻ08=JڝTkWsRVK rL"ĬW,lc;j:?#w]OR{˚օJ3F%$V)GUW1#{gު9I:k&ޝJ+̟#JIq۵trGE}#JSZ oeY1 a򮄡]=ɔ+F.å̮ Xv/ zgzm;5ϱ^+nEJk %_7BI6܂y8Ta\'?,SnQJnоn(\JZF3Lgz7 +/˙r}Fp-;noR>z!֩qs4s n za@=xVyy\Np&C"lWG]D3[k~cI8S7%Ss؆EƲ1 YE۲n쪬 t<3]M14UVqdWizڍENiGS$ks>oY\,dF3ވXZԦEVkY;/JRu~kXuRw} YX0e+g<:zwO튝XvWђK~sֳr1Ru*Z/~rBnW8s[*n}*6P-F@ffsӜLQSީ0R۩v2dq7N{W=WdO,g?{fXY+@v+JGqԦԢŘui úČx QZGNv{nYQR>g躗lc,L啘++̭Jks%і-Y5JXEztqTIz1,`{֝?wLjrSOBk]j 6)8-JTjO޾Ƶbe\:?Ƽq#/R"Cwlnrjc,5XsZ'ܹeh*,3{SI%Mxͼ~lsZ2 +ECk_.0TW%t^JغΏmJ1oBWȱI'O=h)!Q' +3O t9B:kJT 5ܞdEU}9YǕ2+ѽ5ȝ Ebɿi|F>s?_jTe{Npɴzߩ$Do˞RQU'hmБ4 w,}wI!zgUZ\r<2P%?1ڝNX"NG}MŶߴ Y8ROO¯OsmG%IV'O2o73dRXqdo}b"#_:]M,:ef_aPho1>ԩJɘz~endVƓnSnr{ˑI-uFKe2i6y9rTԶhڲJkOȵec-l:1JUQ*1.-݊-Y%Qz>9EWJuDrèYG$Ѫd̀O83^2/,_Aݷ[r89ǥD^KR?[u!$vcdݷz($aX8o+]hnRQ۷ -hܤ3}猷-98$[kDt?QNIsrҥO}dd G%dJwey0={+/>Ƴ?u~ey oEVa@{WdcVyqce-17$x[${ cql,ld UY$eX+ V*r]E-Ml$DKc$.l+:ҍ:[o<ckU$̱v~QϷ[4N҅3刽OEw,{tUce8CEj#Y}ufL.VeXcpqҰ=OC<E8a5ǓEUN-yFЖߑ= "[+]g1ߧl q\z۹U$ЕT4!1k;gOOܺrߞ:ۡ$PhpUqǯ+9JZEG5v[G'B2"m3ڲԍ}^qlY+Z'͐On+TKޛVE΃dUQ@ >Q^ Y:;eWzmݴ᱊TԵ4\R)B(\2x(e9YoUc=k]5Zq+NyraW~_^{5֫6Ga퓒{SjL1trwsjsENN}qW^H~t3lP$U#9# *q,uJlrnBF3ۊDbʢz~h 5#`N=8_ٵՆy$+$;a%M;rִʞ^cZ5{2-ominȎ*n3@ȩ*IϙF?6M;v :Iݛ{0ӧuF.)7q/!Vk YXf1WW+ J`DU_kt@8ؚJ2+#7>^3ozy#U%fj`T"m2+-; :dw&.dORj%kddgw3^?OǵtJI^~&M'߹g?ԦcRImohX40qOZYF|Ϧ9%$kacH+T 9Eugm8aqe}?]3啬Ԕ$UV |`Sl4FI6{z͸^Jmڱ%*N*U;l*"9-|֚5`}<朢kC>ju^Raė[T 7_OUi4nxIGK^Y=ۘ#oNN~3ZJ1N,j?VVXr =Og̚*{5ɳg c9,Hb8ʜv aZʠKR8|mF`L(<U{ҋk֌˪$Wt1@~ ӵ=(ь#ԑ!Y'Hn[Qx=ֲoiʪ_ [J%6{~4hC,.21) X]1#zRs$J!`cu x H yZ3J/j{?X+~6$R)XժrHsrh4z/,=I]IZ&lօ;oBtVÌקAR,~iZK{lCm34r(UU9X۫V*{:>ujB@ O2?JpR6UtbZZ.2Ï6hv]E:-;Zҭ.X;a'%apd2zNJe]d TyJ,x*ҍ+4:=W{7"1Lj9~5_sisO WS>ʪѶvny<9c'9gnD>MۛnsqZTm~DhTVۉs%:HjH@#=zO7Jn-4Ƽ3AΌׅLc9ȮIFz/ܥpvf-iisGކX$g.)(=d:rTbw|]2+K]:p Re%˽ږ-wL…L3\Be=>Qw[-]qNJ2Q$\#VX+dq1\UrF1%QU6 dzܪВ}&ئ߼!s֪b(ؒwY(߾v|3odܑJ\Hx?>҅-^j46dYĊ1k&4mI\Ϙn3L{QV(ǟ_d_rIf?J:Ʈ夕Awvnя,_o=GL{K/`kŪ.s8s莈5 \F?h2҂|S*HY*ٛ=Z%+4lJҫFTslszD}5mIm\ ^?LP9]tԞ`O!zvG^.08XƸ $5tA$p1 # s\hךD~Taā$R-pEe^8SѐMۯ]wN9u>[EJ2W{iY$cGy'ow1SwשFD4Qƥo1KgߕsTȏ/6!_$ބV̵AZ>{&)L'<<ϥa*S9F[|!0O1\#mgR/6) -wJr=a>^{^/-܋||3sƦ>oR([}ԁ^.0q܉SWg`[ g?7 ߍs*~I1Oث0s\#9=b>TEh8uUH-Enk#}o!Z)eIorƔH(TI=4n HPHS`[ֶ{ SRIn}$uO0iK,}p? bȊqBNhJ]ڤܪd* mI*w,RllU3'?¶OTIomYj0[|GH.wd?ؘ=G4F4̙XxThO؎V\:֊<0wʩ7e8xEJ퓇ܴ!A>ׅ\M÷nj_dt.hX+(4q]Xm`3䍧kouK]M=GExYUȍ08jlcӒoWYJvoU灌S1揼a7{Dh2܀N2[ZM*.yZhuS+trb9K;88JP<Еn!ڑqOn:F[0m_,# yJRlMm훊imMvNeFE`ԕ{rԖ%ӳq_fWp?ts;γ北ކ?-u'2Tb̩{8o] 8Z.Uy,Ԯө&+Uk!|tJJ\yeN?#&iwH j\dǒI[Dh.*7H. +JV%=AOAS*|B*Rw֥k *d3+`:fSHD֤Pp[e~kcֹQ#p ƏEcފ>kXjrPkqxa|ic3}xǧJǛ;'cj4i¥o]HdK$we1CxJW~G* Sc +yB6e1 Pp3Ү4o#^|)olQOe$JΡߝ%q˖R:*{HS6Dw\HvJ"dz׽mMm¥JbDD¶6* H? 
jE}O\RMٛadEǼֆc~yc۷͖Ow׿WqT]3Fv-UVO'lqNI9x3^1th}NlT'vBme/"94mA48Iܸʼi ]}_.0K^Xdb%rs.nY=Mjk)@ i|?*HcWMGtFiF6^UٮYF6Uʡ9=qֱj qWJ~Nd4wLGzuiIJ[ǚLBe; cJHUǻWIԏ*7i2Yu}Q( jda7Fvt X蹪m/_3F[YTho+Co3ΓMuagS]<>KK=KS$iʰ`^aaoеx-_M͏p^:֔o˱o޵w7kVY逑^/,/ u9[J &b7k>lV1{_2'vhlׯWyOBHjX4x=_m4;RO,FBmW˭.eYئ4Jl4JMj͒-ü;7|ooγnk#) E`+*)ԗL3ďRͺF dƲ\+5eMC2,Tys#\l_Pq;-I{2]Hٰ'xVgnf_y>'sckWn_rV<*rCnjw}sݦ$6q$c1ޢV/ YIribY");U涩R<+=*b-W̓dϹG?NOSg}B[y/TM^Ư"-Ynq;F"Q(rJRۻvo4T)'mmq*Gcd/I'8N@t58ʕ۽}?O# r*G1!O9,E8 pHz1"?c9C]j_@ׂE6ϼ:l_ΪxϖV4gC.E{9ku^]#b9dY̎R*r8+J3մRkRkFW]$ڵµ)^2~yJ _"UDr,9 tN/vNXbet=*24SѶ*B;UPїSߩoqJ#gקFKthVFbeV J0F8k,jjvќd_שjHd,1o'>]5-ؒ#pX<uFW$暚jܤ6ܫnc G9*jHdCq: ;q?`+hsjrnp[Lw9RFyo 1VDvł֦!hޖ*Gh՗8uVzgh.2B$Me0h{9~Eѭ{3,LHߝG]F+;l~f\Ha~{4i.OG̬I#<CׯUҦ^BnôLN^ wLM̿yVkbH8g* G>*iqdi; OW̌~ᅫqOZ8.I2*a%cڳ9Z2]n0D<BqrW۸S9I+E(SYa{uy{kre*Q{ڳd}zYe bہt)BĮe;˝Dcyk(X8%NQjikFa]+6qT(P'bEE)ik7ZD7HyR,73`u|bjsKݑ*ZˡѳU5eU`~ZD\գ$VO][wO3l`+޽Xǖ}NKoOw̱̿.z'NƉOk-Lޥ+>r[yoW?&FӏtIsJgO0nA&MtEhn*qOML$+t#Thg8;_);(O76TN4i[;ktd"a@~qY>i^rLEJ8>kmObBYfݟ-W\)y"}|bwra(VlC g$>}5bV&`6DPc /ʳ.ל֕mX^ v_]x?]7b찍p[y>i^uЕ)U-^12Kq589$z3>ʍ%Dgo3q=sڅG.xny~eƸXFTuKUc/bKHMMj9iʏm&3m?+FO~G^}X{;>j w GnJ~v;ym*R7-ޯ^D2:^k%9]htS#;y~#|-ͻIrMgYƜnc,R[z~Eђ4jC.S5捠ԭԒi Ȱyd (9Wǚ>΋,_ŸڐZ,+! *ׇx;w> XYj wˑwGå>hcS ,iV=mߏ8˓'}ӽy[t_GpA08yyEvS7}}kn1ky7BO|޺WիyKWj۫ޭ+#EXyx8A ^FZ[_iҳ1WkeH'zhZ.ԵczHT1+M3jDsZQIROžgFJaڲ_}fUYT)A_M ֲlv~\^Uq)Frի hH#=3JJ{-NzU(BWsI}Kp`ȯ&ݭjmLѕ5}/$1ۻ_,=1W$jϩF9*w˯-yM{]4].^U8qjsF%đߝ,[m?a(ԲDtCg3yp y:29? pEwSNnӏ\^@+~bU:~:p`S ֵ43عqgֺWZӕ>_?MR%)fUU:ݡڔdhiܶ'4G 49'9[S<~ZZ\ +;Hoc4 UkZm9(ԧ'/d1bɃqʝ>yjxBSl۵aUU[<θ,lmJWniWw옒F8rz|BrB S*Rv<97B|r!דq^lÙ#S$͟zx0A~U*nJ|jqZ8߄}9\pykǝnk/d>djG9N{d湚$J.FbThnaqkFF4eF}Y,2|=ܮc51JwmhQԮP\n,jiqrThѫJ5!.PՍѫpGҚOOĚi>)$@ӿ#߶OiǞVՃJ;Rl_lb7c#oZS*rJ]ίi/cͺt̏7 .@뎝z8zϹ1nRh iL6~fϠ}qUT~jʊwjMM-찪)Yw鷜 d5GFԵkaϡKpVcdPHں%SzngMXcxv+1[KUsvk7 t]JQOKQQ~䚌2U#qߞ#JVT䴹?Zq[z 5\Fn@ v?9SIY[R)Ӝ(Z'5-b;3,{YW;wTO iӗiYهKrT.^5&QR/Av.Z~*CGH($rOũ\jzvPЗ9ݸocӎ+J8_hܧhN"m1۶_~:p(mR +K]csG*\+4laF>>jUhǝrTj9{0KA#763"Ƴ/'ӷ sGg/[F*vkS$xSyj[, 1qϽcN6MӣZI7ԬޡcV?:#/)cz8Rܕz44U\K^;_ ,jܭ&Un}9~YV|К#~Gx[HϊDHn1܏1bԣy5W]D kDMWV'V;@Sޒes TK] 3oG?S_Zr:rG^Q~dq9v%NMcz[Ŝ/:؞*6ԙrӊIjB6z sS= }>nml=>]N~J6Yt]M S̗tjXyr5#S8f 4Cdc{ūI+ 8;[3{⊲KyJ'͏sW?N8ZKQ+2m# 8\rQBK;zW=Z|I- ,Mt` c=뎥.k'nʻW6mf<`g?:7=ɹP~fm<1c?#mLU/;My$ 9cg-q\C(&ۆ۞5+]ny؎iE1N7Zc4{aUuSkqd2*tn)|{Ny^ g#8ԍ;V­ڲ0edӜW,}])ؚXH$aWc%siFtnu!k[#a*$~cp8fc8-\(G*;hD{G*bT'׽z(jIJ:&g(8驵Zs#m}NG=k鰸XeYPo]:Ӛ%[}ِl|t>Q+Og%|=:VdשG TKMK vf՝,L<+? Oz԰l)VW;Mfh^~J*R)ӳVF0wycs`۞+J͢yܶVA_S~gҪ;UkKCyyA?̞gLݫArߩQ+icH;{ofnʔjKD&+XSN vWhOry'UR7çOL(T+ٗ(ʊW)\Nh2Y{49KR֝?v=yO&g2Vlg5_-jb}Mpb#6Z4sdAcv$Q;9Z_`΋_<}9Z-J޾ݱKieWi'#ʊ%dg9SnڙHsG\ '⼜Esr;fTflKl=gj7;ݯ6 B}; \)u^r,vb˃St6ҊvcFi?FMKWFyW$R1Jz3!&H}\lKo Ӿ{hz%RcɊF&f /C=~nWflJIؐ'_3he ֝+I_v 05<3u,w NX<ǁҽli*iƳA%]OCb́yf+"z)j{TW+J[Xڬ0aj {>wcju ;}k<5_id0wpyֹeFU%sUg(cB=f䵉S {›Rrq}? mε+CYч"'=d_(ө]a gic_a$瞤[Z]Kn#=p pq^pն9s=#LLp_N YԦ^1Ԅz}ıx/GKX~]_0^n*ik/zE튺m~q^T|\ԣNε<WV܋P?'1,$u^mLBTͫ< x>)l7ϩWJ*9MOo/x6ZHגH$eRW zNzt~ Q:JrC?W[;݌X'.5++6dE`p3gGΜbYRM.tvC[Co<cA.)qEtq8 #اS.}nCܧR R6m/mhzWo/ďh\"k:~dWJbNTw )Ht1. 
FP6-tOm5O]mgMLEʓ\ݺt}O>*xz=z!奈{dd`v^<;߽_T56bɨi2ڇfU_2 (g'0^)'YN*6ЭM/"ռLnsyou~z\Jx W38{ꚿ|P]Hak 0JɁ㞙'J9'QqUQM.<{I3c-YԩN¾o=k^G',GUx>=$|l:y/iu?,TTlV wJg>VI G>`qzؙTm4' 73tR;cA2 (w\0+e a;3kNcw+Xߖy $q5[Jl<\@5=#ĚY}drOȯws&<G+xO!ohڴ!c_*6n9?rbzfJjRVr/oeܲ<aiCokS dpy|"O0fҦ"1J5 d+93z~KkS2WW)5s$ &ݹr~=بф{k_3ڈs:ߡ8wIǗR*6:սrD_p0qԫۙǙs"JJ %azw=G?JFcmu}ZX,$VޠWJϛjL=+Uס;țMr}=_A9e>n{Yn&*ϡ G𷌥O-lѦ9-YUQ3o# c*F<8 RWd׵<,FƊrӳ训sym8Ea4QH޸5FNNz,_ajTZJmGKv}>H[Xd3n hw=zѵ~$[]*Gqyrn8c2Qns)Ԓc;t7J?x?uN1Ͻ~jQz$2EmѪ9u~Ma ou<}#& 7e\scULkJ2ש*S|ы7/%H$ JtQO%ZZtܨUƯM;o"Y)Va<`8wM*qo5,O܃#?'ʽRZ\X\+ihxWYk&DO0w}}LT5w=)eќ+K:bᢐ/0|q8kQNgMmpm >cᲣʤHK$g ZuGG+#iHxq/Ma&Mmq5{CFB vӥN'eLSٮ__BKsuyNFaG?C{;>h1k{dRܽ:Śid-$*{mOzq=تjT\n9e\Q-qMEIC{'1bŴƟg%m˕\t#U s7WsB hI%dh I{mJi*{v&iSLU?#ӽvK%̓EGOtG 1D#v'87:I%R)|n2 'B1t]T(։嵚[e]'>sN?QM ԦHfVڥzc8#+S2l2[-fϡOj`Y:zw!"qS'cR2ΥܳZ8X>xVO,2nqq׎.HVUK{멝}i/XٹP˃WU7[[܊*wOdUX,2F4bˬ$eNQ{潊ɣ]>vV 򲌶3J:4'fvhdey-#U9j%Dwff~m (]:Nq>23]weq<-82ntUESUS1[r3s{.g(Y:,4^mVS8 ׵\y%_tRk^E+_8*]oNBѻv.~ZRi F*?{ sD)ʌosG{}fpP6}ҫ܏iR;åXaU3 K.9?SsTJ6ڨ34QtRw"xA0̲c=8nWϣF*qwwԍc.!G?5VޥJ9^ϡ!GYeus>^>Z{9++eg}zڢV6V27~^y` +кتpVa#|FA2qWmNKZBHVȍ|o==z?L{:1MOVIhsBE%N1^q߃kΝ;#E<5!0ɵ;Dqq?4'q;ܰApE:Q]HU'=ZIYv:jq3)TQi#W'*mewqFN1?iؑf+{tr˚M(=:[!{`ec'==+ s_˩K=:P02L,+ mP;`gӽfZn~ƛS#5dˏ:b•Nnf"FFWnNQzu)6k8@wy<=}:QS] صJjgSY/gguRF7iطmn[_a}?z%(λD[Bv䑏ӟҹ ZӋweho!8UE:rջ,[y glUZҔcQǡq#2oGo+ qu5b`hDeN?v7t\#AI=4KӍ#I )V_i +Hs5d]"dB}ܝHZRsNe9%k>"Vew6Y[=Z%>h"^H@d4ܑƪ1e䑛Ur+])r!-ԭ9ݙ8V2Wx=O$jIUOFax^Y?pcv8TlͩT8!?ٙB?Pkhr73NVd^"c߆XOp:5wݑSrZ"H;,lcFڎ^]KIS~d *DA\meڧG\zpE?gOYf07cGZEsNI]j 34jV-pG8K[ӣ);ٷWRs<,ѳFW~\~9 PcYQy;[JKq.]ˀz? VijŏdAKEWs09裞?*0g&w*G :oNH1v:R@`vmݸs\sKڵz5\YZ2?2NAtGqks_7b̞Mfϖ]mفd79#5c+ZZSֿbۭvFUX9JsnovB.1FʼnHO*ƵG=Z%B=4rܮͿ3`*yb]̖~5Qm`Jn7o^T4ܪ]_z}j]Fci߹J`dΣtGꮪ5'Z䤎Pw*g~(KUQfW:߬guV.wݏVqSqr̄[̰mܲVO+;cVҧQkׇú6z=ўpEvTՓf5ڝnX!r42*3;\ךj./tD${ Ho1x=޺JNQ\HGkR7eK32𾄌֑SS5c%[ vRs]:iEԕ:!y+x)N|]hZv敺 k, ?(Xwgsךڷ?NQʤrbؤqHd\m^8;INnũ ;h2Iڤ{?b ?.{zUIZN*wnnycg?p q[ӦO=Le8ˑ&w.odwp=qYSX꒕8sy{B[ّP&F1sR1mG_S%֬ס<7jR^O| Qٝ1C Iꋟct)i<9<` NzqYƢFҽI[eR[KqЎ@ ˒;'*_)I3.3tuoAӮ.$:sȭyv l9ɬ:qh}K=ln˟yzA:zuj8sj~)V&hIp̉;cCqQ*twc夥f6,/꧄yrQI^V_(FHdH'ׯCQ?ML%6Ati0v~2(k,=*nNW2+! ţc(cqT啣#Ď u *mgEq}Ҧo)a$dEm;c4~Pwf<Ҍyc";JmDd}WߟI9iFZygn.Й2trqsV JZR'y;3Pv8/9VR]_um[;Z}i 5jjqc5/K?[u* eʵ4N1inNMm3?A:V2ӹ*yB;yv{lHg,OCnJU=D0كGϛʳS :7tL(Ā+o-&өvUW]b;YTa:?O[R]; 6my$ߔӶjڒ{6H~}}ReCJ-wfWcopNNO וTݗRWvpd;7svF1K3i{qb+4AFbfm'j}#jUS"-y{6R%VOu}t'rnĐZ8×$*+leIaӕje'fآ7\Mp қK&$WH}je-M:Vw˙$̜/G4tcQ]sBZ`bҶT\)sn49YY6Ǚ\BZ~.X7. kV}}q{wrcʧ{,&>i'FhN1Iܑ{.@lp{: cy+M5P9rbO8Lj(ŧB}zo8U޳z6ʏ7+QV/,lhPی֡ۛiiFEݐ\Z4icUf ׭Dr5)YnB*[26$uONz8H{vJS.iٿĄs'/\NSD*)SF3J^[^2BYmmJ|#snQ9ȩ^9%MB8(Wfmu})9s ]J3K+BvnʼnAJQQ1I6քr1).V=AM^1۸ `4*ѳq򏽒=F}SQj"Qn ̭." 
9ǵ4eY^L)*~e{@eٵb;v]é#oʦ<ҽRt- :|mcstZZ&q0yccemW8Q:7|D"I^=ƭ`mNHsXgQK,҉AmFV3~kI逸7R|ꌧ 4rg~]&!pYZ1RiQMps[$FWi7<{_0}'e YXcnT(k*qWױ4r̖*(]z5:lPiIFb.`G ~Ӗ/2XomX 6ln8#Ss{^S3^XlL(;p̪=z҈ʥ4K/Y~W=IZr1婽ѡj EVI9ep'U'k)KܽҗDfcXnbz}F+hSNfT5˦dѶgR39JҝSԇ,E4[y/*ǭKw&tImn7qcdoa_{&%ߘ_>R @=;QPr2ۋm=uV.HkuVeWܸ#03SoИR:G;㸷ۈS+9%Sw/.y-ă^jcqUۋDUPPzϊ<]MN>b5k!vg:Rv%O2f"$8$;NH+*oFQ8U8V0G ߭c(ir ĆrwaTvQո.Z~wfeEf\s99ӌ]^ra:m"2UC稬Z{Sݴ97:uq>J#8 J qv#; </3K+rWE $yV~pUrio݋خƱnP'8W禕Vk3o*M< t1ea˚ڽdHR(DL7y/#$\(Ƨ3z~sK2F\XׯCIF<ؿ_Mcأ$G^Kͫ.Hubq HȤ*.HgbvWoESc4~a^ʃsEEEDҤbt Qo skMKqYJ]Fdo*]m8avJrpVDJ6-:圮2OCK'Kr$Ļ3cQǚ&KЃ8cT&RĴ8W,dyrMMYvJm8㧽c9F6o{ZXR[i#yx'?cZUO3RZ-eP~ }&w=#b r,xzֲ{5rc,yUGsqγEq ̑ƬN SRH^ZA(XVrA;g.ZZ_SR)-q@Weyr=bjD)KE1{T&8Ӕ^kq|gV-6X lN>7\ѩN7ݓ܂H8̑ve ӟt/ӤRKDf3*i $I}hCzjԮ)wws~ƴV7u~@l Vay3?YKRM;hir (O%p܌k*|LS5Ë"\ ~9yt@ʊ ߷#{9Fgf~{,8LөϚovZne;CDѦ g茹[/ҵLW%E~iO)xHcʨQ0xv/.g}}FHlR"D(T 'szUUnXiQkfz-S6 <^٬x`Kdw~լ#MNg\kJJܭ;|ʒ\),Q˜Vӌ]54wkvnd%&{?^A1\]ON6C˲]xUӕIE5q{6W`uFzg%MYyLzwfҜ:זmi5K?=G+;JMj]/)F[ijULeu;jC#٣~{f5@lZqu9uGky[uܚlL,q%?t~}>y'ӭӗ/ &5|ۀ9◶(sySaryYNy4J51TEfLwSInɐq޳kؘ£kif2Y_hױViͤX 2cX~UvrCq3ʟױupTM5n (d9#TI(F}jqj+fj3*Bʣ x#8'4ӕ8_vtӾQwIu~zb#.ߐۏSQj%jtSIw@͂GWW32y[BK;3y*DžT-ZʟmG*)ۧC뮪qX4}2+4z:]|۶BsQۥs0QM&f7g1}N;~tEu/CTq+kHf_;zKZ~\эnHyr2vWhy'ZU%{[FEHק{?hEjd+#}2ӡJ?MI61+23z֧-n_f?lϒ+\۞='*j;5Gy?X3Frʤ#|BP$r%|ۆ1sS+=єSFZ-\Cqeaڬp:7ir#*ͥ`dYOmzVvTwF!j 1Ϣr1?8%;ҹVb8m\IX#]͌`gڪ1Qg>"Qv%pɹ3 `cyaZ0Ubİ\\\.G(ny<z¤-Ց{iBKV1 1RZeF+Ș6ԑmR/nB{{gdǚX,V`I7cx0Vu#E;#RTiF=I2RXTGarTb߶,5̌۶q鎵Rc+s4jgw GI=rzW$w.VgB1w=";I˜O<.>(Ez/JZؒ+GY3B{ۏzyRRy%$]>5N^7*ʝkEDXeG^9g퓲1qQq}/uuR(ۻl,C61g¹5#Vz/yZE}o#lḆN}\~gڸc)FtkkF&݋5AW\3޳Xg?<#(;6Ȱbd_3*v{/#jэ8&}حktLsD̬si=ZJ߷UPo԰І!<$+ \.Yɡ4.>{ڇw,x$`v'{\ħPn5)")BF=k[V]Ό`zԝ^qѣ$wr4$s ur8IkZ>:4zQQ=e)n%X.L"ۃ9r+qWZ$"J8%SI5MMKIC"U_+??c^+Z+[mNPߛ^k㹹G+G& 3~?>{׹F7T\Rз}ņ\7tڽHu,KVV6aeܮG+op̹lɣwv?w>҄fmFd.Ӳcڭo0 猚ԧc+hY7!|}&ԦiIlU7HUc^wG+$m+JA|cev2+3ΈVɢJe*[qz۝}E*/6'瞾Nmjr"γ|[Hwu?Ny)+h:vF>ݻ6()f>ߑ?JMIWi'r;iVV;g~}QY~ܒ(Žܪa2O5%-ecfm0[}*$5I# ;9_+֥/!Hxn;l穬Nn(b"XaswׄJګWV9 r8q59U=C庎l- }1Jrz=qZwf}LcCn<,Ikdz%3mxI~^r5hR~tc@Ak#`Fn1:v8*(Sv}L}k;O }GnkrT%Nnf%y10}^Hz|*)"iU4#&Kefa+10 yּTF}_[["e=gۉ#clnF8eQ:"kM?C)ATʚoRhmŲ[#3B͕ P9G W&ZJy6{֊QZDzq!2|_W}3S#UW1H(O;~l8ƫm$mʳXN,;=zuhO-Y"2tyw52 +jXYhleDPvB|ץ_*RQ~#o-G*I68ѧ:nmXdb$]=?JNOd9h2KEebrK{1ʟ/9}m[5_>Z!_ׂ20yϰzrwq#\3ʞN+>NXRQЩv8V[翧+)rV͸_"=c~mtKax<pYFs2=աm,s+2>}:Q\5HԼcjK{,ǵqv-!=u}z-Hm^rGE9#4sHmpd;UNztkXY]؊~N_Wd6A+pq}rRR~9}N+K;"Fvۙ}G=?s{e+NrC9wQ %et]qֵran25gRWf1uo۴J)9-=S?b=ʞuT$$7tqv26rI^H32*{ f[h` n*|߯&Wh⚨[,,\ -o]gBR@ JqYKqz_ce;UR^ClDp?I螆ʤIrk1sUV^@scN׏r[\8+'UNd6*1éCrk) ZiA+{͢os iO#$.RL]TZPQm;l(h8Aǥe: * Dn?0䞧W3ez)5v[[i$+?*=yB)^m4LObnXsLtN225q툸Hcsu߯zQ'sT)i%[h|YЧ%q%>cH-K4yvbe$䎽q*w1JRJkn qkvyl[55ߘBW-Kclg|kQJ9QWK=֞ ~mEs>זRWD5I]V̍{+SP;\1fumsʯS\Rzթu# K]?Abdnl\F\}A:pdeR̝՝qkxJq"fe-鎽){78)ӝLG#FFJyڙzrx%hЊjwN_rzl M ̪Ϲ9?Ş}e>wQ+u_+ǸXk)òiStw'uF;fw> +c&wNIힼb1nNmJ|t) /8x;E=[sاZ5m{fˢf'R!]z1ƺ"i4ћ]{WmS !PBk5':)ԕffi Ȱزy33~ ɷ{W^+~C4B(| ']r%U_"So{{Ekd# re(sluVXyR"Hx2IA}U}cBGV@h(^Տ/MT~Ʃ슷B͸AʪۯCSfEh r!. Ҵ%ӧt'l]_-|gҹ!IG8j{JI&K Q5}.*8Ys*oSRڜ# J~Ekgs(ۺ3mmdc8:7^M:=I^݌p,cV5a)F4MGw'{ܳk prïֹ}'ݛk*M;K߂\qఏ?)-y򻝲[<ֲqpZ._gv ͭG* %[$z<c/g&%pPj?)aݳ[aq招GZBi'>kG#+7qSӌWνJ1q]{6D3yTsho96mt'jvCһ˚VJϨ協hi=1(+.nkig+SN[t)ˣK inydl8c2zUTRٙGNRM?] 
}sn?CYhyJJckE n6<`>5#)e9q16iFi.Lp#723O^F/h('(=|VS#=L}+:ṉ [2%{xR 3Ӡz g_:qo]~FU+T%kvr!3^ lhݙw6IӜKڢWb׳ݣȦ=qGrj)Nc'CR-WEve]l͕=WZRoz1SsݼGKD  $V]jvF \Nd[2T]o i w[XmQFIJ猒z8USqmc*2tܞik˼kqp2q#d)[Ks;B$osgQR(ҢDtV0 $W;7+E% ȿhbOU`fR_o##z^ܔX3}'ʭl(<}}9S an\jt=:<-Zwo` A՛-瞹GDJ6Cnn%R8?܁W{eJQհTrZX=[Cl`X{lcj5$?4U3$3#JRD Uy#,ȫGއ^s\rjEJtgs/UT;啇<:NcJͫ<+3qT9OS\~'ov61٬"adH9+ͫG-4pӔyZ[4qLpI:jq:<8*ї3K3iɩEI0;tƾG1(-g)Sc=_$Fpocq_;W 5yf唧ۢC?&ݱUԙuA =NKig4h'-w)8cv =3qN8)/ݥ DmG6sUrq?wѧ%r؞)Oϕ3ުRr_‹terq9_EžZ+ӭv2;m'k0*iڕb\ nrC|q׭F՝XZujVR?A b}xr2}^3ڽl=Ӌ3og)UٛVܻUYٕ|:B%ѝ-;Uw cuF.xfΓeo")WT瓒{:ߖ#>--_CFc &ʶ>st.}IoqaC]K[=qkDc+_qY%XGo۲ZFL Sԇ/5VB'=J>,ʲ۔ln\QqQ} Y( 6Jƥ= jSlg"mAqOghVv c`Jm9WZ0N2[LjٓG=y)ZM.;twF k>y=7G%l5i{љܤyfu>kawS)%Ozeܡy 3;j,yf>zXuMlxx8k+nfF[@< !W/,cn:F髿=Pʔc1|x9sZJ6*3n$jZKݝCpS?4c{J-Ҝcn&ҳ6U.3wKVk-/m5sZ< N+ݧwj:Irm?S4lЖ+yd?ZiaNGFcsҼ4϶BrXm3wf;3Ji|0oFp+S޵cӡZWia5sv7aҁb}UB^ȩagR7'^)Y$*uӚ~]:hXp2 t_E{lfy'>8^tSG-L,k贾Yi_YToPH<t uarKGOZ&˘/c[7Qe}8b,x3*isPR6Q[yDy)acOgs^/J$e2ܟ~:5Yl?4m#ZWlos׳B#ʭFqKF=͆Y:r9ʐ0~+rL*1jG/jMwi5)&?iMQ7mGBc@׌yRi%[i6JQU^mgߣczo_[׌a JP(ydpQt{;xctWwjާosxί46\,o E Bw:܊U[u4J>⫻xT_<`:N2FU.U]^ʢ2/ Fr:qgvk_WS5"Q)ئRѱی^MnswOF}ΉޒUeȧvs^TZ_Os̹N5Mo_1YmYOmį F8[݊vt}QNgIuE#̼SXO+f`߹YU0؜O>0աnZ-;7m]t̿^szٽzpRt8ػF=G:wr|m:{]XԮmEs53G;a籘w[V}V_|,,JQխӻih}6x\ҭ+"koAP|=k!'(ի7NE}VeNMqF#09=ʇʝOju=<>P_ zTK g8Vԩǚƴi}+-Ok.PlF*VF64+G/`n>N2h刧O]!v*lO,y S^CZMB}[ı<ƪ`w0};g*rVL>n[>\,l8o\WW,d1GʟWbC6 CpN3j4iӡ:?vǹSl㷥i88߿Aԩu?"܅9U /8Fl3[cNQF]n *$Y O$qWu5>*Pt"RR.Pˑ$u[ZBB8tO$[5#F+TKVd-%t?q9+"Vktޅ",}wqaR+يu/K _$jOnR7ic*TI-LpX]W'<2ON!Y:-LԑfB-Ia=g)K7u& ֊UU'ֱI'ɷV82^nk8Ӽ${mjȊ# ^)GgHUV1j>6qzv;J1Iſ@7\­8@w<>MBDjD@HSzgNRe$"nc8qǠJ\ф[T{<$O27`x<%hVMgl$^TQ<۹TrCð44hPheM;3{)IM+!bο7cө);oV"9krHT8/ʼեV摔yV[xh6f\άg)4C+fU.v9Tp99=P,9Es[.Ǚ()Is~BRt]F\YWpljeh(Z5*3Ao'32%Ҵt(沕9J$}_*NJIKqR dȝUB'cʵmJьlk*v~+Os0;Vmmy^䕈UYXLc˗i.ӕ^HdvYn0PBOj1| 54[LI S'Y#<+HlU8ʝfLﲅLX\9mQiKL4˱I޺,dc*MVil"0˕SsqkIP2H-)冏8T 8ζwR^MG4W;]D49^ CjsY+Hӎs 1GU{J׃Nwp iOU-`omri*[zF1j[#МܨN>K_% $A2˵r<84|QB;C 1p'J?`֋_Қ .ISV&T]v_?ɭ8£v3IEr<,r s׷?\ 9Wjř{3cwU<qڹNj{8rTǎ^c|N ~[پ,˙N)1 ' Ok*ֲ{FfG(C* c{NeU9 56*FnM>6},dm{|{TӒwrZZRZeG3v{8=5H%ʣ:a+Ioc66߻ S躕'-VB=YXo @<ڻ#QBGDyc(>kOD 1ÁVUэj1&F-ܱs16F~_AZn:NpZ- my*d!(01F:jN[ٓ(Vi-FՉA.@:?U+JKM+u[L$FۅN0ZU*S3ǚͽȖ[>)فy/9E9Y5Ӕ]'-d{Y|ުRcʵNӡ2$,ITo!@팞=֑SB3k:b[e3S+`7z+4R ^QI$4զnH ˹q201%9nTdK;e,*aۨ洏,dw8үMlUq=jk/iwm J2ʴHz7+ah_g5=Y|5Ա"lXsH#ad}y_z)FZF4Mǃ cXY+$8_q[z'X1I{XpV} |JNv/G{<@UlޫW:}K1N&ĩUu3޴bh{gmyTA^率UvK4vKs#Zț\b;;zMwgb' Y1\#Fzhӊ㘠!e^vzN~cR?6wc1%jVfrm"ʭ0UL\׷eJͷ-ӁP? 92o5JMn~^4E˖ɕ*FdPOWFɬIFz3M$0o׃T|͇2÷PmϟǯF3R*U9$gB$-׷&]F1&4 %Bv己I>jדymѐn1ᶐ.03Ϡ} !R*b0 `:񰧎8OM%}C,]cSs+EGD皱sV5%$JcgԆ1i ~VoJ*HꯩVTQJ4a0HPQCS)kdDW-;Z?09w^=ehknfnOȘc*6*>3*#/IkҧՔg;@#M#crk9sK^NDU_yWR0xvX 4c~TW,I"o<\vZNӕ͎Xw#/#9)4i}SscvQר'fjJ1EYp+ qf2V2qMa ,r.ޣnNW8?ʴT yϧR$I>\p6i5:}F?n$h"-$iA֪u*K薞fYSU@YS]%*/ViD<6~U}.kE2ՒDǺuxdYzJh_4mcRC+:x(̆BKW1מտ(ʧ'>!<'׃RScA+IqqNGlg#\l'9GWl۝Ò~|Л>{ĞlL#dpv:ԮtZ&|NЬY*ˍ^st){_B4)W1r3hcTЂWO0)couc驤.ki&BvɄǾGN]rvՊs]eDU[l#ZvJQj˿^7mݎAJ-DK[K?xÊ܊~2Z6wmAb89YVOH}H]DH2.08y6jE8¤Z{ \[I~/<=sJVLu}充FUpsϡFNHRI̟7x3G7qFe7dXKwnoP;*-yS2ZaϚuV%M[g%-//~IfgXY ǓЎ#5{G}Y52MŲL°MCTEݵ`ޔ2jKhR4t8vYrU0V#~96 '#=*΢I.Wt 0{)btlv~jTuFу[!hCyE3`mV = 'I7r2[T)Mh'\գ9UI{+py^\l71`H*c۸'+϶V1ȡpT:FΝՔKy|y_~*eeNW3ou chk˛4G *ۻd;ݘF#h$]؎f @sngZƳ#bc1}*n#z^T:egOdcv񏘜 +{iI8(E( Eox1e{[k9jNR]1+ȫh~1{׾摣N'ӻ*],P+ d tҺԭMH4̧|nb蝑O$>8lsTJuEbI#UDI<~{9iN-)'X( =:g&Cы6(֨G(ߗn~bڠcRtஶ}𶶎 xkV] Fs|FY,&. 
tTȞEmq]}/?y lʻF㌍ s]PўREyyvbid[h$ݹxcjoBhflOsuiZMKQsQ||t[\ěH׵m(12Uczw!yu~8=99#4^/_2i1QݳӽDiꮎZbE(i:l?U^*RF t}fV3FcFHZ6;Ǯ^J7+zImfd,'?gOٷ5dm}oPm=k͙VOI+QOOK_˿8RkXH8]ԧ(O#SB|tzmJ k(ۤi#Xa39VJefג6#"͍qURaHg*u䬿2"-sxʧ'̿<%cOwnc%q#Q$+[Ӆ*, kB_ FzgkcTӣ}f14 ?+.`ghʜR2I= ˆpG9N{-@HdYH;T:2qK§7/3V $(eߘWڵRZ[CIJ2}5İf%t)AZ4cN<*by[{ZwI+&3))OQ+̸ߎQU*6}QQu=c[kcwcjpK{jJJ߸(̙}w{׭yЪStC̘nLYNT A#'E)FMZ2}ir>þjLkC$UURTq35jڭbFM&޷ka"'UOl59{M^鳊Xi_Kūy" |Jg9#EJ*li}/Bc4jY űZqI^j猉$Wⵍ:J{q5*J\ewWkscn3:Μ%N.1m\[h ̣$przqDRK[>e ȱ8˙TT0[vIʰ1IҸGPA+Jܸ֍Jbeoqڪug(^+DqNiդQʿg3_능OٜkT)dEoOvSk3GbWq߽aRU%-ԣ ݈9?KN_yV#I(FvϮ?*TFFRRvB)(CSLYTzsڹew#%irX2EpQ$UGܪAllqֳQ*FTy*]gCⰒoˢqJ+[C wYr1 0O {¸$SyFnER"y(( cr{dc8R47O[U僋{?1e[)#H8rⱌw~Db>K﷗t0ApHo/f 'ßj2[m3r5u74E,Wr10OsvcЕ >*~WrkzzuSPm=-2ʻ3%sӟʻ);VOWg5bopyʌwCTZ1f6pe!a5(p\PEem Y6*r?ƵN2Mit%7 -PH m3H0Op?#ZJJSdmFMk :4执'ԛFVoK'>M̄_ם**2:u1t/~݈嶷x$: $dOk)q3 =I.dEw!$ P}捀=x\iSn&Ֆ̼ GSw F98kɭj1Nֶ]ODSq('w~z52-rߴV]i1Z+H!|*;u`jRF[R奆N\[Oe 0*=?xV:o96wVsߨc Z4⹑8JUhE>s X׻F^NbennK&|̪zkѣZ.DcqRsr.alF\:3סBGTqW{U4IF| 8b5bF1V{\y|)OzOOÚOg-YZ_bhnV_}1]\$֞֗R7&FǛ}9=:WmfvKdYX[ڶѤy;Oo!hq,`$}:kHՊ4q'v2< r翽\=NYF+,eViN ۼӕJ{y 1qvO+.w S,m*Z݈8m8;l?yJ83fc\ z=QN/7; )q1![?|}y<2Մc潐řG\mg خ|BR -O mfUbzᓗ-KR w 63ֹdiZ@cگٗߧd6廽2x7s#ԥ(1VT%wG-}zS,Svdl,#^O,{(Yi%fצJJ15*>Ycg?֥ɸJRm ]U\n^Ee/urQco#jy}qXs(Ŧcҫ$*ŰAe?ykR*qvhuTYy=k єeN>ҚT>J$  ~#)L9ˣ3.#IkH2;ǚ[3NR2ItbQhQҹkF6Zƚj[Z:,,s\ugdLF:VF(8p[k7n{t~ե[{?%)I9+.ȶ&s8{׍%=4MIvf%|p3(lgu۪t_%cSSO]GdY oaf,<:pqʤ$W͝RuF*N4FE߾>1=s޴8z%TEgweU]gȹOA׏N[km y`+X*O+,=M׹SN".ODVvzv)TV͜w^_ÏZҔ%R\ҽ#Щ)ayi]Z~bB_Vm6=y?jy's''7̭#\2mRzg9WmG 7,צiRBclR:$E8;w:N$nG$6&t3?TXmqkq#&K9}#RJ=_9Bw"[ݜghϹ@1Gs?wœST%JYFhU | |F8JԜ6Nv;%Xg>As R6MtV5l.+{=:rkX͡Y/ݧ#sO<ֽVvhiVrՑdD7&1OzZSu+1nktf ~PހJhμ= ihk2E(jː~=ף՝lQJD2mN"($mq%} xvo+}?Jظ}XX 0goJR9Sm`)?*O?vJ4i@c6`8rWβ6_9c?.?FhQuyFA8QqUN]"B) irHq'Q4I!S63.HoާދceRiZ܉!Ho,ĩ`I=rx)sJF%mF}B&fGb@*1{u,%Jg_qt?=?T5ٔ}zG79#fR1}{TE[ܴr;7|ǰ?.cьXwGfWZsvt#ڦRZ]C- \[v'#_jeh%.2;{eI_R3X23^.Wl|T<5)ɲ^mԑn>n~c8c{9F{>!Krel6$ݼW< |槫_2]] \N6kS:t 8ge/sJps:ܖI+Ii$XA8VT_0VŏY6sp$'pS-aVt/Xs$IXrn]JsO@l5IJ^KJ6}gz $QYѤ3(e^FGLtXԩy9K_#?V"˦XcݹrAkrqƿ䝺KěPn^qǭ~ RSيȑ p#Qcʜ>隙FstQOM,U1Fң]N:iMp'< q2EFR{/ĽSs[rC<1[*NߡUrV輎sWH.?UT kXKrTQ6}:sڷຝk 0Hw~X5jNTt]eO u/*^O <$M'+zrj3JҺ6jZ5դn&]cXےw==>ΤRUJRtz~f suq!Pg>_lZ{+k}{"#F!/.)>jBѱ3#EcY7kYjپzE+\N[TKBb|Aa{GҮhfv:5"#ym[ K ^,y}+ъi|_ڷ'/!xt %K0w@/g(˖~=2QhҬExil͝dJQ੊8l̹DE`d^6m Z,gG_58irOG9k(z_+-Sߡa3/5ԓ0\g*%NLjymGjFEUEG~:W/5-E-՛Zn$r2&;?Gk~mӲf/bI[pݸsӰ\TOKt4J"ZEʶUSO^EpJ2}[<v":fY3 #YMq.yN-t*4VKG`Z9سH%QҸWe Y}aF}HnuN͑?QywN֧Uju*z]J7VkkpC &LaT`Oa>`^X+_fL](8^JpRhdXlm-ףbqtmek /q獝}A浍>Y]F1\42ad1k4Ӄ}#F1Z6oiZ\4>L2(eiY[8$R(jssI4? w0=3ںEJ2{﷙1:I'L`2]ЩDQN?{W}<\hc622(@ϡ^Gor)':<6"UH7cWVRmuVraL=_KF_Y2In쪹c63s9&޷=Eқz'\Fe(34mǿAsZF^"ۣ, ڰ >x=+l/VeVMkTXgoZ1暿WdEخ&4l?*>k4^\VOwO%Ŵ0E1LL1#֢XxШV8Ȩ2ytg?%rO3:Җkȹ]$1wsOgQPE4 -CIq2XѧS2J0Q]qݾUH9* OVƸ(t k1IegG'p){IRУ:n~r+nwGTSoOrVЌc*9Zw](՚]цnqv๴4qeXH铩+U)X.$r.r7U,\}F1\DJ4u4Qd>Pq*%GVC|2Aرǩ~e{]aTmXǨ'Q䶕>S:M}1YԆ#9PHo,UER3/#>faZW돔f ~c*G7g''-2jenp$WRΤ9%,)'y`;kTZkrӮ&+d߄۴OGJ*PN [}I&HGn?*+ї} 8j>iv;yj}XF=>TFD׭Y,s;wXnY{zRr}= ~$Xy^\m'*~F}zqಘҚGK>a{\w;~C (޲wHO ~\dM/a(+.owKOzN0o)=qn>6R]nt{4 Tg/fs TrnVbtXE2=ԨECԶ8?kʱQ;1֧֑,Crch[th<q]{UKC5g,}*i;+O[Kkmc߸**Z3ɕ fnD2G"s\(Z3ZT}CO,մeEbTsǽyx$z{Ez5+GwөzqMMlmrzsӭ|J2'bj"ʭtV<ޱd9p#\\ Lp:zϵ\ QMltBRNT]EŲes['?AE9K!T(Z[ Xާoz\t+Ui^8mp;[G |&6z.\f4uM9wBn 9I;రߡFU'ݓ߄_ H_ٲI2Y!_Q1{ӧ E VSn*=fC/ e{X Qs~hzv#-GP=:]ɝtԼ4G|$QjTK~ǶcUFjCԩNkv~篑3JRx98R'%weNMբ#eU8VQq<{N%ԏ/C/۹xN=r?x80Vǡ)_UرXbCcw*9qR9(ϤtW۵szt:Q枇N|Ml_B(Y)KK$-3H K0 ^ru E3x JU5Z$WK\Jw.c8qxt4y|F1V|_hx5)4屾7g;Y23$s!qaV>I>k]ߧ϶.6Prm[uwߑ?i |h-W\\4YEC g F989~\rXvd+wkv}+2iJ\vnt7Q]@cxm sGR6E6[n*I{N|EnWq_3Ԩ? 
V:0ΩޘQw_J+cTݾ(Ѩ= _ڵŎ%16 rr>ԍ-}0j&]B4|Hu_q4cSGQno"?a;h6=#ClrxJ U8f/}57c"A+LG?Z)Jҳ&yO R.KO|n3$pu$I}x,#344_džn.#3IJ_EF2.gctb͞#ڋZ4Y:GSG(nu#9+z/[$?ʽL>ҵ馞eN:e]J945k~=k͝:vGP+.t/nW2O̷ֽ,ٴt:qqgHmM!a<: ~FT敏ba( ߩ}֊.n8k+%j8IUNEZlaF^q[4yƾQ摲۰?kj7=xգR/rס5Kpi|F;y=kɩJR3OG[wEjFڕ¤S8 4jXevs}+l,"U[7{y`rnDkwƬKEve+٩K_Skk]y?fQ IO˺eZ[4J{m}N^'mej͐.E Nsۦ1\i4߫sRЊKE;osGI&w.o/(˸|s qv8.w7nz~'e'6{:'y}ܾ ͤ%ADUVث~' 8W+Z[c tJ 2MfkX+doӀzkɩkY| 0t |Гˤ Fp眃H1|uo1thǯ}ORo)xx<skQ-:CNXEԅ.줁cǮ:k1jJM[GcυYF>ͽR۵]̬7rr22O~q\Ƥ_,DJѯ$,̸0ʪd)<5e(Դو]%vOqfX}˽N`i>i w9jF8iu]K hQd;H]p^#%x?iN<˰Dm@feccF=:}: QƜcսR+i;QLĻV6l?%:Ν;ww}`sLV׭;-Z5*餙1)VBFbAy׵oNnp,sۑZJIZ+c,HTs/fKvdi ʱUfT1k7_2rD.D_n N~(Qr]*Jm++292ȱ-&OSܪ^ۘʌ)(+.i/Cy,mA힦圹R:)rzhSkVl l߼L{X]1cgͣpOfJ<ɹA989Ξ`#G_BM%ˇmRdl@$=sn+Nj^"6Th &2"QTSV6+Mlb#jj˳{v9c]:6",.̬i/=1*cҏkhXKv*Dpi;zzj7wS9A;[2Rj p30zZTftc>Y-ח_4L3Bd!W*I9tkq4qwĘn[Gݕ4o.VI$;v'{: l̚wh:OQM7`RZGuT;njc:|i_%:;}Un?7v9SlvMqq؏sO){8;jrԍ>[Im㸎1+N#rXns?yӔ#;w2BRi=օ6 K78b?<ELjm~M^q<"H22X8Z/NŻ(Rxs,w<ed'.oR1J)w.ޥz~HVcԱmhAIne;:KMBoHmIt\G'ʳlJ9ek"g[9efLdZ(Kޘe0=AJ\3hGQQ6׆ ^H<6k9srmFfoqJ F0jdF<|z*Qη-G v+IyW*^ͥz=J!|c8>[\*w$ғ$ma3HWgA.g+Tu!dsUO${\yly)'>W*ɣshT&le & UgI,f*# $.]5>e-v̠ayZ٥kAr ep5yc03z"9avofr{ZsZ/GR&H}NFNUǚZṚ(ja zc-֮Jp۩qrb`cE "sWNKrs5Cw kw 5kݕɩ'v;2eq>^v(F<^ .|;=k?!SM4l-˨±,w kltԥQr=mlDY9נ1)r!EE_(r:7?'QEU*|^3GnVh!vVr_CWiFvtr5*˴suj|gRT8^uf=FV m㑞ZКrvba}qҺ;hͥ5nW(<ٕtP^oܯ|Ƿc'cUnnkHWjV\'{Vu*Sh_s ܤW)G:WEk*u#ha &R 6HQQWka>#+F `0 ;)rqUm]ؑt>EԾ[/KswOOn:UQ}64G'({F![:׎٪i$/fU{îZCxv֪QSܸR{v][r :~I2gJE{z}FON8qұWN-mYsiD l(ߧ]<-f cJ&,9kӜzJQQ'R*59bv};FIfP8QPm'm SٸEgqӊe^}sUYrSI5g*m[e+[w8jRm=Q:#щɳFeݴrZF9[Ԏ.GypG@=yTEH!ִk_y"g=*N]v&#WĿ";].8o? z tJҝ7vcOTd $۟3V0Rhu9+Ni[޻o!K_3 '֮1/y#Gl!_߆f'vZ猥\Ҍ#\w;v/SߞʵN^QTrYsCߘʥqn|՜/F-ȝ"U2BaP;s\eΕa Fq_DdOe%R/~9t/Ubh٠k+Hb 2VsN>AQq^lviDeۻ;19:Y:Min­G ${mpyEd^4Y 8ZEr摩V^YD`{.7#/yث>XB[y#%@<涌Έԗ*kb+9a>fݒB1[[q; u^.1#R;R^ ЬO0~}3Y'ÝɫzEtaAwzgQo5851c4KIVEqxUU'^T $XC~ iwtiƢzʢYWaM͹Sr KP/uܴeYZC򯙒7'q>X, K-!{žf.kaܭ"xz~fOV9iPz*12Gۻtz?u5*cV_2MXUJ< jMJ!o7zZ1Fn2)PG՗sr~zv5JhQ3mU}+9u&-*NgVgݖltޕ1.Š溗C69>Pn\nǥ\RcԊIdOUY T`ƣG[Q {ቤ2 `µc(يMF[3M$ycOɾ@3,|<}}zQ|EvqzD@~TS7fbvIwʭ`c֩r٭te&뛦J?=z{֪_gR\Z Dc}(IJwj9*Z9=ʒ:.ѹ9;tG܈5$[ԏ%PyywN57Ъqci1P`qО 1cWF\FM#:۬jZ)c#.9=(ktFjEh9.7+ mPq9;ӌ~gG*.x}٭kČW1Ӛu2[8jX}Yb;Q;4e:4,dgvpsJMf5mfYG΀Gt"/4JWq#uqӓeb Yg<~8ZԂMu?x>c~QIT`e7סԥBIs59TR,1yUF:\U{>ib[.K}*{@8BrGn?cBxd5c$\ɏ19ד}j%ʭ:jjhX43.\Q F>Yl%16g!JFq0*SR~=?e^.Pw2xREhd=:LTƴW=8~(^6)˳)`y+)K5*gM-<2\"8IuDQ->JJ-ޤ%_,qbxePZbv݉;Xdr#(TyS6yIlgb9trnNQoU"$KR&iec\y]fEGU Y,Vڱ Uu/bTvߡR#/VnA(ԦTeBoS*LY dy]UH0W zUJ1s:qզdTFՉ|:z=iݙsFOvܞb018#~мFb1+ *ML^\*s3J2+fy .飗s8GCZ|G3Ę+zWd(u9O9N^uu5O=Uz5<x]jCvdʝ/y^Z!Hr6'cn,q)ϚsQ*NܻloN$+HʬFgU9_jG/aTdƥ(~m9;TW̥+jqj]tcV?:-p~PkNnh'=8۪Dps6ey9}W4%{TVbrJ"E_|HœHf"m>|33ck: =3ǥ*.2U[BNZ4jKs $#v$t;_.GRlf^Ǧ+.WgKV2Tmi7Ak*U99pZ7"Bv/EsT}*Z/ĜD#ND@mvgizK_;9)z g7#FwHBJRUڮW}z+"ExʬVO_*Q{G ۹ee8YJ9.]:1.fΚu>gI^e?yMNY(rn.J[&7m[oRpJ}0N,<$# ~+Ru*7ӏ"y/$,RKezZB0M)|]~d,ɴ;?z*qlu#)r_iyߖڊ~@w9zTf(㫹aX,o9hU9O(r--ilmaU ak9F\ԨS~^Yy|yutBi~gОXgﲘ›Gl(!Rp@q)s'_\r2of`E-Ёgְ7bp9IXԊ+q6%cn8]2kňPNPO[3Kn/͒h0K`n9@sYGڹ?wDpt#:z'3ĭ cw$2}~Fت:Fƫ*,Rm!z9~Ċ~k7EiS^Wۣ3tn~O͌wR^3?X1{ߚeE[1$z_[)]t9*Pjz_th3W̨@=|񗺍e{G8$hb-¼@}GZҴu6H#31y6vW=”SWz~w=Y]9ҰOSJ{˦6p[ZU)J[W$Uz#!hW <ȮyJ)R^pZt̒En|CKl"PI#'1MHӖN\G*IsۗRAg $`1{՜(rJ i7?603r,z2RK{Xo/1MGj>mofN_G:NbmpnkڊyHϥ\)oo>TQJ2,Gg32GS$dA*]̩'yTB]٠1"i&9sVu#:ȮG1j ӎXEƜ[sTf d ;ti0UiViOm,oTTN:\N}BYW[µ%W)ahT 1=kB1J;4])JRu'ޟdZy2w1UUT 8?2uzy˩  s'vx##cڲGc9ԩԲr4g'\ƸVelҲ8i:t[[Ũ *cʩs={ ҭNkaNqU9d?O"Kc1]w7̓c(r} BVlIq,mKLXmº#4%{oԌ8d^쩝ʃ$t϶~WF6uJT|[64.*k1;ckݣFSj.x]46*Ugӥ E%4c+~Ƶv_ı>]?7<:vQ쎵)Fҧ˶j<W'Ч^U;zԞ5bUluuSUSqVӆܫ+M~y|$gj4ۖ迹hȻ Esz~<7ڈP:i1-ɡeN뚮1ק=֍*IZ_ɧ@XQtc] rЍnf8Y8n}3eI8ړOBahddm¹^FNaT@ScmeJc\v9447b.F G9}nobwԥy[nam^+QN4Te+Ow=xJOKu_N 
hwdݮ8fʩth5Su_̞+nm-^ݚO3g#vO=OMD3uc, }敕3CbGO[ԣ߱%5֝Vw3!E{0cHS4 ̩ĠniƦl_,`ͣ˜nYh&VɺeW9jeYs]#NXIGCtXbpq9)Ilk.Xu_4LJIAӚiB(J)nYQ >iJG_vU H8r9p?5G8xWoԂA =X{sָ˖: ~UOS}V :9=ndb"2ff "z';9˕4uƤm/rRo/?6)rэOۍ$ XF1dg Q$*qR'X6~R]퉦YJ2GҼfz3֭1*.G\xYKߒK>cN^[; @VdYXnܠμU2"8h ]eb&GT1.1?ɬaNJ+D.h[Nm}mV9|Hۙ=}I yxQ7ш7&9 nϙFy{vi{znaR8DWj-ou eTfTopHvϿ*c5A ޛi5y%e_ 5TAN}t˛9Z2.&vEXծ6W`ws V5#MTQwMeuG(TI'wojJ8լʩUU/0%o'r/n[k YyHr8\gp=Oj]N]P`A^{н`nc.xϥm)k=Jr6q-йhX*@WeJ?eRyF}4zz[B5Y;B1[SO ƛ=oy6232|l[@=z׵wQZaR9KOm/i[2Yخ۱qO3ӵ}&QD%Rդ4spy}?JRC1+g7o2xuQ_szӧvڨSr.3?zir^;a* R|A{ץOqCR1gsV>ۏZ.U(U. i61W'.Nx9]q喌1iX.0̀sOS]Tة>ѝO-0\W5$io6UB, O]T[֔g**I" >V?αuJ+T*MsE4{8$1=1F|jr/{[hQK7AUnUs3`֑nZ^oCn-RIRLe#2S,q9nsLZ$*B-mRIazWtL)T|0Cܹm+b6qߧ&EJQQqq"2d[߹iǞJ?cU_!y&#U'MKw%&{T++6coCTʤ.-rs:ҏ'aVWٙZ=_vm{zu4nc~|Zʒ@ܕʀOvoS.iZiJ[~m3^>]zحTèGMoסM6߹U |GN+EJWagcyLX]a۝?YTvuqZ_1|NcOys(eV۴r1x ]IG#dKwM&r<yFxγ QA?͖9F`XO?J=,J$g/YV"2mngҢ/1Jáj.`K*֛?/LUNҝSWٽS\`>zT:WTLDlKZcV' Ο,dse5oWyFUwU1ޖ">;]Q.gͥ2*)9moϸח+J:wVI-zvXn~KGܞYREX %dc4Aع+~}QZ^B۫`62zgҢΓqfU*BVq2J JHaflk){?5M7yqyPH6~֢9FN`VWsm6L=~R+S*Voh"3*psߩ\(^s6$z+{:݃O9U/ݝTB$4H@rcG59gD +ZE -6EVn]z715DѸu~^||ȸRW]dX@Ť#wկ-nL'.g7W!4,n^ aFxk^^[BOiMOc0dxy W+{1j{:wi/#itz)xRpHY-!;2ʐ*'w zWB4o'{jg_:p-,۳:nj|ۿj.~(sOGYLJټݲ!#okH 0~uQQ9nKDIȶEr۷S$ˡZa+Vem[$ehC1I*rrX J)Qz"kQikth%ͼ3w2njgڵ8Gj+I4+" 㚟ӧCحFak3Oڦmox<`qH5U#icݙʟ7P6WD(%ωL wzmR9Y%s*tghlIjP}ې0==떦kTU#e}M;X*%뻨Rz">^ҝ$cY1GUOP8#S?3 MQ=mM`fL$"*HܼӶ{W',-܎﹭oF Y;dxk5[J o&i[#/N]+ϩ?cw} SqҊm3H ;=G8\d婍OgijY\C0iHrW?/ڹ+/h,I44?rJW ,='AD~+vz䎀sї,Ӕ}1, a_ytө%G $J`ۆf?9rAIUR6J+2r)HaiR"@f_DOˢIujX`k9Ӑ+6RRwh175䎻B\ O17r׷׌s-b>/m4{t|ۙv{  $sSNJu:.HXpE{Egj|kQgvε4Z%ҭ> [1ny\jV,t)h0Ĭbp1t5yl)GS 7$W],B;(W3Mb?nNxPqr}Ψשtg滚?,;jKͰteyz~LdSɎLd^լiCM_o"}%oSR,LeȪc'=9'ǥzZ>V[#Z1RNg{yq?ֺez2)ͤWVe-۶+8y_"+tZ n#yqq}3YՇ.\UJe{q2:N\nMJ~^"`czUSOX1~fK},goY7cKzs^;ѧ\>.fc ){.ޞqiy=|F6|͈tX9$q"1UyJtoQFN9Q{dK+|<`UTIB0TgJNR9eu u5FN}>Hƛ[=6z-'ނQ)_2  g#ޯܫ+mgOR򮞛'e8_)7߷plϦFqLeLj:Rn*=RƐ|\F W Y#ұRvMCnH&_C4vvFaxdgj{r?Z%LR嗓4s4`<_s œiuqg*wVd4_2˳a n?b>[q59VQ%[^JTeT,&k d}+h+.c[iY$|Xgonr,#H^T(e'vݏRܫ3Go۝ܷFNj\yՉreYbco噛s/c#ⱔyet\Q(G"()\¹EN-s X5mߝs4LJ<(vv|Oʪ^ hKL{Eۼ'oιe ?{SxoZU]dݐ8:WRKKn˫[2.teC2 kϫNMG[˛śOQ3338lrI=y^}jrQ,t-X|m|8'#'G^ĺU#EpTZ$e@.##GY,FOxVݎ>"-P+O>Uf[vF~#N/-EG?}@ %+/c-S5˸%++/CM8IXփ}ȞK.O7͍ RL{UHv^-ܳ/-,QGw:51Nz8EZY-AmU,sq瑁m:OsQU5 ǴkչxXo+a\fsd^Rt<>3.h)Q}&ZU ]%v>/o_99 \F8>Ք)ֱ*2ӓyw4 )o u?5qhu5{UUrw2#M-ɩ.rgٕ6ǘ~`s2qǭ{e86!ў5>mYY@ UnH_ =/V5emM?%}"n3'^h.muON>{oGp^]:>^M>!^&H! 9W3rOaOhŸ {UeAV՞'e[Ik d )F}xn[9sl- v>mZĉ#7NqF1_*C-˰ֽs%4/lFXaU݆5>;"+mKª`"H(? 
/߈tݷpG.Ӆ݆G=RTsGKv)Ԕi{62AjIȅN9x']+dS.a7R4΢l9 8yX<}WVg&xRxVoݔUڽTʽYFnh&j7HNN1#=tˣtϬ)Rk[A"٘yn_G5N+՜ο|m =yGtZzMNz,y|e[Xs?B~*6sԣ;Ody;d )0:0M{MF9o-nDIZ 19,zخ=ީ~G 1߾XYʋNi9Pzv'=F+TWrB7Iy<⮅wج0rTu|(A~egeKݚ?-u籲tK]\H!`2'ݴjGajad5k锫aRiuvo>iL" ]|t۲B{ṾV[uǝSR)F-kZ~w] G []R~^-Ӟ>s[]M1SO܊Ş]cQNUytaYc z5گon6S єԜ=G]aRzў^eG %84k濭lpq׿~mS Tri)ngsu=w^vc`02Xdv$~oZωnm&sG5,~^91Y2?CgT[-g"YZMgZ5 ) Jݹ8eg[Kw>_Q$2τךyr#RI#?xw*XԌR}5 ]R;ru13_ҿ9,Nz3WBZuݥvZov2[; a_fZ2NM/^Uj|)[чSѳE!╺TOAY$WvʿwgAN8GZP+Z݅儕?F-FHX yS(>kQ4}U)ܲ,R"Ϧ;ʻEbqa$i#V]sUtB5ct(W$60"Y2uۨN1QkyP[YF{{]HHn&6bHשuS:5*rIi I6`68=XbFQ;RؖX "2ON[#IOʳ%m&3жwK)r2L%2#,p: z皩齙TK!TgkzP=jϣ} \Fbc#IpP#CB*-_W}M!p۔`n?ӯJnӹGލ{dXtgty.fBJn0[<˹pvkXjK*={m<2VeF-@O>~UR5#QEjgS2[G3}c+*sӃ+&9ܪ8$sLK*wq1GkRZ)"{U6+ˍϠ2mJ6dC8dIVXʾr9[KN5.%%Pc7$+  z 1JF;?~ƒ:ioYl;r>rro0X |d/R}ԣKeJG{4)yt.l ʮʻCs^W 4{aycoMy8\M;ݢcE1 |ۈzLym+=nXX!Tbv8>hH*o#ykš}:%ɢ'ݺDQFv4+^:A2yX]A֐۵c(~Q"=hϡ^Z>U\]u5Taʴ!w;)QnsVE4ajU5[VG˸ҴwF$0w*ާN/,l|hEhh!r n=j঩$"/E~O8S^'~A #H wm9U˚3#63+4vGU9ESRoFP4Bl 7n|~9ɭ%'ncJ>Ou|&dq+!m9Q._trvZ#2O/F3'ڢʌ#(;"X X'E9'QoԴB"Y;v;I?:i;mjuWyoS U0$R(~TQSE,Q]V՗ ׃^*chn%VIH)> du u'ڴcw+_CFn׹%Q!݁T-0ʼԛ[2ӀMĘzrWQYuQڴ~jk6f@sA?lI\ti֗d\7 >cgAkG2cO}2E(lJ~_cvZ2\JC#Kml<iM[1UFЊ6&ymvT8}ZkhmMy(i$Jb)q`s**r#zS+};-O jwz}}*jk̽[[-YS4)Rudf )؀GonN1,̋7;skW{|rRϠH,yy'=;c⧚E9+ALw+3 grp\ױo:PveP+FH ڟ5I{۠ќn/xGUV5rUV[Lq''gRE̴{*{41 #uq{Xi)MkotwF~fs(wwYSrKdVlEn}Gc4}y/:Jyn}#,˷jW v2єys=?!Ή",.*B^qb7/Gr6)dGʪ2e$74db-t #peOL}+J]^u\Ӱŷс4Y\9|SQk~ڴTa-++,n1e_OVPլ)`Y[67=y7rRU'5KaIV [2Gw=뎵,cy9 ./^dF>rltԔ-寗wWw/01pFI8qWH,$p?.Ǧz ٷmsTVjEK: ϺIr:ZJ馇Sݴ?RP\c;׎iJmnE%Ҙ>H䐃ԟ'*̴{h٫ i;FۆYZv1v -c!2Rz:iE'(8T_NI r=17 ,A kєN8^25ڗ8mk7^Z#h8򶻋-hfyeX<B0y|Z]ӔSQnGX[vi#_"zc۽W5h84!$r o"7̱6B 3>'5oP_(`Ŗ\v}z}Dk[]F%ɇr|sq|39SWh)؏y1_-U81r3ˆEPA9#\aSSBxv>ffډB=.em5%m%,}$F q4,ꔹߩmuJF0N ^3 tDgtlt}ێGʽx5R2ҩА瑙vr0w`8\UȽҟ69'g*-~^(m^=K+)KFoOVE|Or^Wbwy;ҹqN{2TC 3.~"–чuVh e<ϵ>gm)FK6Ko)PF<r82i9YHҪ7})Ֆo 6mcw}9[).efR$BB-||Uڬ8jkRa7ܬ9:zk(%%g\ZXّ|y>w$Tj1F<5iS×G6m,aT5rytG'(2rl;sS%F[?3}\ҋo:/xH:4 CZ27$dWqkq aN[^~{=s>JԦZy.d,ye<8}$ܺ)G͌/G=N8Urr(+fI-\hY'z'O'}챆5Q+Ö2Pq"C_SZ)FS+ʼnz6{7&3|}vHaI\"|nzWqImt⹧[ Yݸ^0=ir/u#ЊImh$ 'sq(ˑw9h; 0y 9QGn.kX pȲo9 Ҧ1Rq맗W7meJN3t=,s1zQm]Y=Du.S}AW4]1N\}n-g; ~ _}.F M3 'kޏM6+hDbHўGǹ=*%)3'FN|ay}lycߌT5#%F4!VRخd?sNJcӣ6kwZ62)#"Gp+JrwW1J[BC8o6Uex8kwFgErn%d6ǵoSӷ{V.1nfs+QDg26xcS^F1z ۚh]V=PlG+#Q'(悦c$;lgG@hW2F4N~˕H0ϥcB$Q,%Y72ہ>'V8]Omё\k$j$:q֦0oX#/si}XƳyh$"$)LVӏ5{o>9U%[zgͩ{JQ[d{3HR{tQ%V+O=$ym,jf=x)oiFYy8a|P AGNMs+u:*S5r?êo~d9^K']Y 3JJWU{$:ʞ̩*KeFz9q֊qƓQQ屟 d1B7|qgFP.9'com˫Jv8@*JL5*ԖBH r}B3z rn݇(VՐźbUcdsϡ8+JQ~F] qmagb$%=Gɫ'mi!̑E.6|G^溣(MjMgZV9|}dML$ZJ\RKVzª5UZ|{yfiFݸ{|sS%'%R''fTQ\߿,~z Ju*Uџ"?qhdna~|d0:WD$b -w#[iHcl> }1i5n6:x5-?4ҮfY1y;~S^58k#:wt`sg< w]+-zXzӂU[Ǚq'SDcoc 5RQrR ;/فmlȢReOO" ,8# qTa){ѹRHe4isct5KN Z]5ؑde#h&r21WgqP]*Mikw+uO9ckn*|Ԛ͑B)ܿLR/qQFI=ZnwM>Hw+1i$n;y*^nu#SyVd'dJ#U? 
a?ֺ]w:)B.wn]x.mLp{4iգMrTn7_* fXMY^U'dSFI!XolR1(hyPtz"Yۣf ;kFdS|6+ےO~$% Ac_.3*̣@82+VN䕝ݲc.70,= W6RdIrdc,=+ṷN `V5®<dާ^y%ѕhZ|]+϶,O鏱NKy|[M#st?TU6om c)J:/zŦ`jrvNOϭaZ79gFZ 7r3+%i[2ow<fq\2vGutJ8QP)f;Sh\QѶj'l`Ww3M'`q◴m~Xэ8j]efI3\ ={w=>i֌m{-l^ XZ$ϴeދg9FJT?Q$s {b|)N^t^6ڝl)lF+I(˕WYGP@MkNJ01Y-Ba*v$ێpX qZիMJ˪iܾv_!NʖJN@%w7=eN>q ."Ȫ )G)];3ѦKbW]z{{Rg**z45<٘99xƧ¶}9IIZ*ے1[b#M{zҷ[ȭys>hc]0RyZQV̞IV&.FRYw',ךʍG([&!^KrHch.v,~wcvֿ,~Hp͸JyG4< Y+؞lxE8OYcxW;RmOfQZUUiuh<7M \G\jNWa4_-np&0+/iNWeScy%:x&ܯ`&M:;kXM4/ub[/ٚj͏)rF0urDإu溓97eVjvTǛ-W*ӕyA#ͪ (~fʦ^6M 刧/uI"6HYa3*eN^C*x|=o~t`H d*G##T=:m4.ZmiA,˜sgWOg*[ZUjWJb[x #?ZRP٪},>;FvAq9<\V6zm7-m{X_.>Twp:R˫w?J3թhd, Be{jzqQ%̤ET%{lH]w<+{U*-mkamR*ȑᙛ t{<{:1'-fUGϘHgY0$^ko)S߈FR;i$_.Gݹy*SRz3=}{+>g ƛ%c?x>A\mF+~ާbj6۩5}٤1ȫ&0y1zWezrHkNZscMkiYg=sju&mJKwmvkoc,p1ܡ1>Ѧ :hΖ_MAE;멭cԷ$nݹ<ޙJcQO*qx6V_,A<~J/u+q5&BFW,Yv̳v6W#9\4 ۻI*=8(:Odm^km_2eٰrcҽ:J[#4ʛN2^JƸ_j0\mA-O##uٻx힜jrj}H䶏̒uc" pzw5//ui(U܆F lm7=uwx9h>$d:u$J~֖C2Oeezy$v^8+Q5- CN,D&H«(p8MF0OZL';^N!$\NUa5%Օvҍ\k:(](r_Eۤמ@ MKiS"dz4Ctk` Pq=xey6+*,E*Zޱ+ȱŶ=ˏܕg==aV6ۯnŪKjؠ[In@lg+ҧ$9cONm=l~Erסc&" t9!zOi.~zy}^X^H7twڧaǘۛjq9I:[_ޛ$Pl7#@B̉T|V}ѓ>R܎*zS*5ۺ\l `8 z3]tjKhF2*}h\/fg_:0dی98ʧq|~;ht@NЯ'}:=*dլ͍"Q$A+HZDOSޕ7gNkkZ훖Mbg@ kݣZ1z+]OB*25TevPc+Ԡ̥Udƥ# ?fx+xӔmYY frF?,E4N*C4E [۞sJ2M! 3J=q/TReQ6a?-NwQ߯R^%>}zc*|]9T+L ̌ϱI%̚ER2I [ׇ=\ɛӄ!WpjDžݷ#OΈ-*|KC9?Z9%3Y;[vC[JYaen51_,k'nU[-: g޲zcj8^ĈѩeU8#󥬵{ۖ;4FwN8y՛#cYUE6ѩVڿF1қ˗k LXP?GOgՙQ] ڼP:g?AYV޵EFG, -ȋVU2ه8*,5$R]u4=0~m9x5Z|z=}r7/pr_|ѣcj{ы1f g{gӌ2=%CLP?#I;C{ɬghoRQ5#f(F8yQ瓚zcJ\I-ֿ ao9`EMJ+ Fl%Mzko6֟u~`W;r[6N+G5n{ć;I&Qyky"HofY>\тTmܸ9\=y\9iEl[I ^Ik'<J$qSZ4c$sW4?Y|bR裇Ni-`k;>Ke¨10;F q'Wp2Uqs4kE1P=AG_%j_ݜd09ƪU9㈥:\#CIiWxJ8{43\/XHFKaRHK1}<=s޹::nɕiF^uh}dDy?kN^iZg<'5{Y͝-|ٙVQ?[t&^;}VvZ U$Ve⳨ZF\-dN]6YW-:rJ[S)n䓉0FpsoJƥYJZtѡ."7*ʯ$A=9oZrr*r{h4HďpP =Gc,uՕxCH9BfV|f8iiqḘrթ(aNQŷV&&ݰ{W5IrO"}ztRQ_.-di5Yʹ3UE87iJ*-r~R:cIv;s#h K`p=A VUs@[G"//|s\u!/-NZ5#tg|^z2y|T+1#2),ꛔ`p+A(7[z."MdTr>x_wsϕJ|˟Ef~amEؘLBH͒GϭDIݯQƞ҂zڮ Trv\oN5$XJ⹚j j|6UxU+rFQYq5U1P/HuD-wa(1өn],r X01GcWuJr:+Se.fm%xV9|m>B98 4o,-ZӍHIۧ*?CB̤.]ۀ֕)N%/v*횉q,&Ǹnkle2_ n.Տ3XbKwv{9BN4mv̧nOָ-^Z.Cym4|`\yp/2U.T}BDqݫ4+4-p6'Ly'us/hQreXos<84B5,o8a:J7xꑃ޳Nq2I2uAu*=i֕>NNZrKkPWh2Gt?ڱ)81.ge6f\.#ҹNQݜ~1;;=>)gX>\%3Fy}E8ەыj:KkKr *HHZ߹Vݚ6l- ܘ?.хPT{8̖jYߙ}}AzKi1QpSmnj/zgDh*Včn09yueuh軧j6ZtT{Ϸ9"'oNlg.9 _Gn{U&6F@5ecJUKSKH Vl C#񎕵7%)KTXRh>bye٥z1SzW%4kX8B1?^9T$cUJ.0cq6+TyeOf`7 >Kjr],,'Lώ#Um9eJrz/=jqע!]B7Y|-6FG>jS|]N?Ԫ9tZm^9|Z8 ʚߩ,\c֮J*t8ݮ4{jO#=}^XƟ={hX.Jp\6ܡ&]hߟ¦ԕȨЏ|-x9zzV^KY>M֖ZÛZ{L4LVO#z-te:.hr}O̍?-Dcވik )i5E[a?7$h''g:w+}#拵L: JEO^O3k1(;L-(3FYcp3р翵b3[y3<:3tndYFm͞4JIӰ:UԌy[}ɖ)<}F{֪Tw-#:X,D[Z֩40oAOyĤ7tSyP}|ө=u~eTURwvk.7fPa7=r O5OI_0V1`aj4ٳI M7e_'h ~Tb?w%ORaOI'YY4{G"6ش| |gsʜhסQʴc-R}PsGˬnF=x<S9]Əa2KKeBs+$mRڵt#m۞⶧#]4Rw< -7xsִ]i-^G^x=zqSs Fђ$yc;[$ݞ8Ƴ\ՆvH$?˚9]lWb2.rsV3Q3_q^YQ&W߻$l\,4r$.oٕAV]-}pJW2Ƥys1LיmbpK- I ,oּ&ʜ'G9Jlm%'Z{4q{wSC[s@c\ׅ2m,6Ω?GqR!Qp1WVTU3QEJi4N݋3p1֗Bco{M4-D0j*RG{.R;!izr_yr푎x;G~xӏOSnjrm6hZ8ow~ѧ(ɶ㲜ckzw/.9y˹6м27eӚ=5dmGS ^M+Cu}#PѮ4IendM:q cOº֌wʷ(߿~$Kx:ǘei3/< =}8ժSZ颷͓m7LykAN3?fZMrX<ɹKrxy+QW3t鸫cאH8 qV^E,'<}Ki{n&;R<Ѓ\ըƤn׷cC=IWbHc_ĚGF*^>gujμ`cx􂶧-\/kt9 4a'U,?3<`=7ե:5b 潼QnAqyhrNPn+d;r2oG4J>cOig-lgtq70?8t>`ܹWOϥ遮e19Szg5 ފIh?'/Ckľ1m/Mh1=}U=|c??.mtخO)ZFd;zg#u?v.{|$ʣdS?5p o/ n3~T*J=@dKn)Uizθ!))z~{=V˿ݛlymr1Y'z|_+j $"V4->ⵜJRN5rr6T&˖f|Y} aQ٤wTi_sz(ܸW$c#xXAE)&>q[<1['/e]+^o&NgѤvYw_]~0<5 V8ym큷h_kͭjVHQl>02BµU=^U6moNw\ԖMzG-?7;v{5ݵKS.†`c,Ǚ+O-ҥviz]۷{=,=Hs潟+Oӽ]xK\WKS%nV6e{qҎ#ڶ~s9b0%7C|[ZlQVMO/f.X{CoKcsW\+<^Go{GI|FHcevo*T͓Qԓ嵻2ũae_ݦ/6S~Zc_vw~>X6[o'f Z5]]9WEF1--s"`#5ГVgg9s|DFF$#9ԩˡՏ$[W/jEU.H@h)FW1]I{ǖ>};Mil!Ug^HѦOE)X=yRWHJ[ĺqл. 2lv6"RSSp_>ϱo[gd%Pc _i'Wxr=;CcM{w<6 t ㎙5TaSW߃̖ԩs _]k&f|fIIY7`R\^iu#b%e}ӧZ]Ww.'{4fOe)E54>}. 
$^wlޝ+LdW'kv>S:*TfM-z4-j_OeA2I$OP;ϧ,ݸ>Uk;'dkZw2NhbYWyTi+:饎Y[MM)SަoyKce,͹SqZIJ*QU+#e\~ʯY_!b16Iz2#r|[~ֳ:J>n9Im42OtQ롌}9'>.do4.f=ZKCi{HK+n0ɼ>_ -Z?yROo̥QʅT̀3Ͼ+Z*;ԣi+F7W̾uUb{;8$)JM"ovBqA"azӭeRFQ ' 6-׎3ҔvSɿh' =eXҕOqs&sS}"U=آDe*<~ZZnL(wWHNS~08=%s{ 5(g%ϔ.1@<#/yen(h7ݑ׏ιGvbr` dU~nu~*MPDJ-YsgN誑|\QͰڬ=t]aXe)D|hJ6UpzíFV U%w2tg=깒jNNT Iݸc%IJXs0qOUgH{=.$IM'=ˇ hn@c8V-v9)"r䲐ӌe#] 3W-IЉTfs *o_njU&^4i s6WpUG|nTiJY.RGk -~y}#5'dʷB "z瞹Q.Yh_$XY[Au){nO*\C"1ՋBcoN*1mͥy?C>]4HNQ=_Һ**>B)x=i˛xhaMTze[yܭל1[Q*W*qXUDnd}hI^hnR札b4QxAiT$pǠp;LT`ф* tmcIIN>mnF:N7&1<:AnFFgǰJ^[}$mȴah^tEi-OJ\э؊f2먲mG yN´TdV L;NL\9wN<5fHH}}@+_g*)h'VH/q1RDc*FvϷLTMG]O>U"۲T\8A#fɻRbҼH%7>M[敤,<y~UP,oSHXd0nESvq~5.|P8%DXFܱPRHVw $ln ˎއֲvax%%r H#]v-"o!(Z ˆIlA@3\`XKMiKT[Yl$e7kGw3k}*RM]=zKtV0L>W NA#ҰJrTpkyrnQSlQ*.d%eU\9#=JK]6/j#u2.07Aۃ?+y- TԒ&hJ̍C_?J¬Zu:]~d2#yr}=)QԚ2t#pe7/{Ӳ6m?dDe0 [a*cϴ3*]S Q4,yu?Ufͣ絊r\Hecy,V9SV-uƗNG9ra_d<t8~~R7sS^ݶ=w3mɝ.PJEi*DF8ԿmEq0U]>ee#RHF:Ww,hVݏqǧ֋6ã[,Bd嬊vZ)SREG,<}޽:[<׽]o|r>}hhISU(KF-KgvFl壚u}v9Ӄ\WW\_%*J#)4̟y|0ǎO\¤{o"DV;4UU !{Oʽ*R"שΫJ1jhVE$u+OrqP49;M>bϙE;0}N N\#iJ`G"3,j͌9x>C̥Nv, Ahoʾ>?:#EJI/\ެZ-Ÿ0#)5)STRٖ`iTO, =?qieIh@Ss}K|ʩO6*_D fnNJA;NѼ$*&dy$} 1勺*#WUBhpbC|GMNnowb28p_ ܐ{ַ~HIB\&03ljv$1X9r֧JI~d $VDZ\ִ~nNI];~_%#ڪ2|ŽJr˩kT{DPdlyyEP y{[Ӣd#BAxY+}ߛo8+4Y"{XKВc]e$7G\?ֈJts>^[ש4V-ycg<3ǿҢIFIDӍ>ZObVxq?3rkwʜbW7HOͻ{n'ܓ֗haVR躊,% J$}u"0V\~"M̹էf+jY֒@J)Br"TʤDhZZ&ٶ`N3U)B] =jq[ikbf4QW8Q<6[Fǧ_ڳ{*tC+qǯZ]Tv@SۍbGz{\rW{lȂ4_68BZE9FN}xgJ\+ ty-[؞qO"0n!w<<}+*wˋWZ=AC+e{nZrrzdX|6Ud9cӎcIkQSܭs{;!zXxה*qWLZki)a6; ~ՅI'x-]6 l/~[A,&Yx϶A=H)t4CVwdc;u%9{MVKعKF $-<$$oϡ_rQy]HUn yLL7|Vs⮝gJfX+6,Z3МWoqV̢]7m,lnn>bi?z茧Gbg8N~{)X;ҺowFLӶoj UN}K ^QBƨ3c~!rno"{9L $6ΛVL*p6' AkYV'G$c-ˑI5P&} qmTs]D# ewݽ{¢[&^].E,p rwns`Q):"c(MX. I׾6֑_i%5סjKvɷnB&Vvd3$qO99ߌUӞҍ,ڢLc ӯҦUysyTseCVHw pޥ_B}6>7cь9}S؇: ʷ| sO;(ӌ~={pMU;[!z|I(ƥW1׈[g+u(5cIէ*ܷђfGN!;Ӛnn-yQ$YBa$*k^dPF8m>ߊ{SZ9m3~nqEO4yEIE P4\\aڱn1љͽUR(eӞ}ynU@cUn}g)qfV+9ڎ"ZjES̜8Mܣ=.7&(j9Uў G+GqQLR|$eȎ#7 [cg8:ۖ2ʢSߡKP)#swoރhVK[T?09';v䑤inn[*N<"lTl#'aZGQRݵkhԶN|i} gͫfytWr˨I#<WE8{ױTԣQ%Qff렃<ΚNԨ՛Хy&5{cHorG~fݾR؊VexI7eTrGOlҏ7+HF1ר?hvINpk!59eV3F!L7oN?wZW#^Yy ?x9[oBZ4Hwr7q9~ir譯C:u\yy}zN[tԙIc}=1C^sZ)44.Qܲ3z9+I]Giͬm[,м(|FxGӱcЍEWZ'h%WsI|?5RsV ʪ噷7y'zge} kN1z| p6f-vjV-Cbh vf9$ޭrWHьw<>[ׯ֦7 T"Q ~i.QǑ+|+:>m]K{|*?iib9e$W2|,ux∷P/ i؆Yũhvx9j?5:NkK8$gzes4*cIym!+B{-P2is[JNWOg5k.[( mvVd$)*h{}zזD$h& o۷?3pEa8ӕ܌eVrW@. 
FaUNp1WsZ|L2|ܜ/힙\c:4g^ #CvC '=k5vԕO{m3iǽT2}k)F1W,Ir=ic'G)+29r.f&uaIhdܫDRcg>Zz[s:m9;_R f0/+8LÙ8iii xw|AQ<+JVg=E*sM`\Xlcr'zroT%OJ;3 "6_-׌X^ѷ}h7tnmLrR7p8S.k_FrTR4\]QmźXIyD^#P.-JQ]:=hɵ:`E Mbʇ!'==)rț_jO^VsPymHqj qM婍8ʝ8ʧR1c5ğap$ʬ xqz\q&ӻeIkmvSOn}]AiI9h]m!0y_ u*QrT5yTHs9?T4pҕIǟb \:[BUk?̫y#)_.[W"iGHH ̪@rW|@>te ( h %n2aWq;s^Ryqԣ*RoTTe>f6μ)hrS5&:[ʬ A'WV{Nkz>1//6F7ª1XiEXެ5},٧r"~d\z8b𐋓ӷȡ [؄ET6 #=Wzѕ>k1Tf6Zyߪ)hC n9n{םW#KKR4D;uѩ-bΥj>;3 i9|x~5Wz/XޝeV{ɧnL4FO)UXo,cfF2q]TG3oԺs(1"1{]ԣJ힄e cdսI.!Rw<$jRHxtCڊXwEV*>e&Ev Z0E?_C{;'{fYH<0 z1q+Y}yFݸ>npezL5aԥ~nw;4ݣBP9F.^$yeL{](ɽv:"-iCs=һV?~5P'C2m~^}6F?00,>ڽjXHTG_ӹlyW(f(eFzEŋiKHUfv䱐5/c?g5K+Ghɻjp+OihAs%xYQ6G"f ͅ#^*w.ڴ¤pϿh)h~ۙ=-Ge]m'Si)zHv?hoc#S޻FPC;fTYpN֌9=2 *lq}!M-a܅s$3\/-$ƶIDiWR:ǎՍEDu҇)ny(=/B)![_2I7+N9sY+^NWM~߼9;Q{HgB"A?}*Ts9B^I;/gSݑ<|\{\fU!YWN:V2%kb2F1®x}Cd9{MVm8֦3mY!in$jC)?5e׼G2mc+8{"UUxOcQC?WR0,L)OVZ2AwF!d̛GW9\"12Śo5fw0=?eʤ9}T5E$vj7cj1cOҸN3)81u{Vq4ʼ񌏩=E]9FS=ׅSFtk^=Kn%B3.@!<ׁ^|傽rǑzefs$oר[GZN/^*.]޾ZIu+)ܻY'>ƨIjm_ RSZh㭼dmʀ0@z{T# SY w"0,P@W.^ezM!Q:<խO0񎜟^{ҽ;j_Oԕ[ЌYl6Z_kX'GNH 8ӳi!Fom4$Y1iƷߺ| iݾW{im-Ð\H֭r^ભl+_~_9sr_EfX/VF1GozڟmsF6壽 x~Ж~\ۃ 篰Z{Fb)UqqIcK{Jx{~8uӌ}u:S.mI.BHH'F}{txƥt:|דK4,Y#+nXb@J}1c*SGMNmETRs{zM3^tihCN*zs_A<]fz)no?~lnݕW{-N6wn[WqR71ϯ~:uF*rrZ7ӒE] u]ғIT'(׳`dK;g}'}xmQ 2;ܸ-=vrEuќctޥ4rǞNL;E1ϱ%?ڹ$ʺ"ћ7v vl/r ^/Ϩa);\)XLVG$W`m| si KR"hq$7N;\ݱu1mK=z71lO@S,TGM II( hnO_XrSٶ[Wεdx#mF?-i}/z`.e $~#sX9F=Vߙ{#vdm'>ӛW{Yso IdwUP?:j'9:]JU#jhý{WX%&Y9Q){ڎ{%$*/9ϚcN<cN2ON>C`[$)8{sW?oTmR[^V2oc8wd{J,?n[;jf+˭x+_pQk˖lc ۛn^a2_Q./)lјo>P2OM^hF)[m0JnѨEmάA-qc}~92Po^ÞA'6ӹG͜0:t9qX})"w1Uʩ?=cӒYY Ge@'QFhBK29#aWO=Sڟ$bBym mYp23wzV)ƣF>ΤzJV?=;%-OrӗENQ+#1!Bŏ~<Ҹm7QRiNzp޸FR)7"3ľdaYXz=.is5/- aQ#FuYvJvR٫/ pvl*8県`zc޲翙]έGmlRX,ۂzzzvjroCF,leiP$®ރqcy;3yK!{v$ޫ@Jܯ8΢`݇RGM'Ϸ=v9*T9.5y]˩ :܌MѢjt9Pm{^Y>ja~kIj.<)s-52ۃGR:}r+[>B~F-šSHa98ʝQ(L1qƻ{cڵܛv{OgMZ?S>Oin_nxjo uZKн 9,OGؚ4V4a+w7rm@Ѳ)SܘFnV͹3Hga0~<4eTZ}LY@ 0'=sXe9K_./i8Y_$q]Q- =)EAm;K6dY I+SIӦ+:-4{DycRjm{W)(6p!cDS\z鷑fˤpUn@9z0q]g:R\,"D#] ;Wq>rt0FXk&hI9V\g9e*rԕ_k7zX#G; cۿ=yb݂TqzXShmhO+T 8k4e+6CIS5y,Pۗj{zc9y%{5#b63VCr9 jK*rn/O1qs)ghI"[}m#fucXb#&R RKKm쑧x!`ʧ'S-j;W5QGn8k:yt{2]O*# dӯ")g:Ѧ9r۹cugqpsnܟ|moc)>*VSU٣]ۤ,{{HmM#*i޵]EV,]It|N1=Ju!!x~+"vB<ɶѧ<%[yexχdyV }y(m)ouhjM4(o"*c?z4%n}U͢xXZPoF lkX un]z9BnѦs-{go9[sg9ӵvӿ5{:jY;ף9[2i.g4FLsVX:zj4uo<ѲXhۼ1@ju؇Sԋm顗.1s6k{F-(N*qG[CቒGrI9x)2"WQ5%O%DWy#~Hn^jTnvQ/4'5Xϼ-n eo*{m,<#isE.pɒd ##޺U.k]w .Q${zva|o3`)<Ӡh=Nrr~P6%%ijG ǐ>~RsǸϠR')-4f$?׿v%ʀŒ#=V8|DeVv4%;%oqhav󬐒 ?@}jnz-k=BJ} K=o^p$R=>Y%.[_¥H̬9pXZ'ς_%}zQD\(ԼMsv/i/cQĒ眞*嵬ٍEQ<$ x99o9N>OI5.Q{jK 1[9>e^=ӀkzW泓mG.Qm'K_35$Xak\IؼqϮM\St[뷙{wnW̊$ y'A=+ԝFWqZӼtߙW@4ry9<_h[{:k7fiڛ]^:2jw*g=+-(ŭuz=De%F=f7+njI`=yS/)_K'5u}7 .xc"F?*/]#<5^1',J[C<DDcZO/,y3\wSZӯw0Y R~:k/m u\n6ij!}:f1R:W6@Ӿ`0?{FE&Mil^bLzg9bk߹v/ZHҬQF5Fcc'q:VReUY^ZMZy]Dk.Mz85JPQ{ɉ-<UЌYCJIA 'RP*Ӌs FVkE׸۹F0:V#~RLFNz4+68eG RZe`r,Z1ip){dSTJ#ܬϜcVn<*K[H/;wy8+أh|sZF 6I DHʟ0Vl g`^1{[Il͋(& (;Uڻ| ^8tw4\37Q* @BF?kЧvE{VW+rwQzFv5,z>N+wyji3澆9;UeVdV]|ҧ&q2C.1UX%yGB笯b7Ԯ3{q(@K8,8p{ӗ5*T%v5P/7犔S M͵q2ܞ~VY7U#MGUvVFYhbY]9XJj|8;D~nTgƹgUWag#yLןZ]sRKsf99բ}Ibۘ"pֺ1䍶> XVr4IiUn㹥S0c=;h:wfU6 u?Nhc KcN8FN'!#½*2+YLi ̆ sRX*U!PԺ܍ʪ*:lu5k}E:q-^7QJ[c0'b{H|gߤ9>D d}~J̦1vRŠQ+4G7-xQ561[L{s6$v+<)@#¥5vk->'xxu8Z[[]yy~mZ֚7Z\e/pB\Ca A'-{XD5֚MKn]ګY:ol*iYGmn:@?aI-oL,c#}9<-Lg"%9L5fKgoռ-iDdwU`n3&_RÑqvRRJk-SMr6dׯnNba+ke8l` D0;uʆ# Fum/צNWRJfi7Ny"Jks HA 7/W_?qZ/YehM7ĐەZe" `@yR'٬Dno3C"NyTi{gӞqʶ凲m;ԍ]7aمh$J Ď?xY^*3ԗ*j]cşmIaTfeGR$e3xX׌jZGLhTqSk[U|n!VVMVx_ VXzkpsNiyW~=*n/A^8IV=sߴVەan`I︜giO{zio_SJ;Favcb?[aNַ ˳zvMAdڠ3dOWaSJc77mŻ JV1?_]X)^_q0AM$D,cpx5\$$SxQWS/Fi'-oTu ?a#?kF)~V<77 Yo*@ C X$dc׭e*n{oTt<KTO2XW(&[11(וuۣ]_Gu`N7g$[YKcŽbD1ЅvqSWVx)J/Hd-dY$M67† WinwI#5aSSbؒKc0y% P@ 
1חm4c&eƚtM|oy!xEp?0?t#.yZ_3X}]ۥ7m٘Zý_Do2*o}&'oU﷑ׅg/u߉滤iiy`n+4/.1@L`KX`8}4Qk6ҽw]vnxDc9)Gwmu-1|9j-R|m[{K'H>lV[h*m{kocམFV{b8YBw8H9r9c9TSU>|I9Mn_GT{hW9  צx¤V< dVOC/B.ݡTG?*kWF>;RM%o ";|<}$aԞ 5(]sZ3mV}sSR+`y8_ I@ z9{yRl*stn=9|Bԭ<1py#.AO˨VGS07 n{/Hƿ+MpwA\G<xE" Mщ#TTyvGO)~;kUK26݌'Lk:nLsШKl{^F!dq:};E$JliN7#571o.O zu{W%/Vz+\Ydv"ېHʐOo+?eR2nD+..|e:%̄fcX~<^Ueӵf}*%uBaƜwz6Vn#=GYԍFs$đF?s<ӳf;XJ퓭L/A{8ӕZ+!2[sQaU>}ku˧E:Q{odt7h<Ѻu$"lI nU5 {G#. }DeN/{܊KR&;(^ڎ^i^_#_~SH#R6hRGMLm4YI駛(^=Z&Q-(^ӧgP$y[Ƀ*=#RwpJܩ|Z8,ʻ:zz3R9%s>IVm}?J3\םNڕIޤdf_0fNzj5#ͪB{MyP \:ȨF+Nޅhd2[p1\J;Hcq-Lly&3ں!N*\u#/g[G J3GkҶRcE)E++"))AP޵9Kqn[mpcfUv?Z1 Ҧ vkk,Je%=J]k{ITR]P"+Qfr9e)A9jMOi֋aR Y5Vl*>x4Owɣ칮 q.(4) [sOƗ6~ TeHU_~eԒXc4FlrթR)=62JQ 0'{ҌeS|ԘR;z=>1:h,B[P#čߍAXF2O5%* hV.nFFURud:#8ל1ˣAnw|۷t#c5ZVZ.u.+cEKxw˻h՗7GiF]_\)5%yRs#M;;1E?-`fw KZ_6R\=›m+26I H{o,0@ne$$ETf >fUZ:sW~Z|-!9$ 'տu\іc9rӨ{&Wo0ƴ9SoCJϒIH;v_#qv)mLjs?}]>#F2Es ׍mqI^n~}>Sp&UJ/q{înB {,cNtFJLҝJҼom.M<эcR>3h5c&WC:z{V1,9W,eκܛ([!$w޴J2]wɦl88L0%@^LO|:zwPխe ~V~O\ge}QLFȪϿd~u5R+WKPVCnFI~T{9GvqjӌbeYHWyF9eS)r#?㪳ԏH#OM=q)0EVm&Gjd m| x8S}tfs`6!1dEWj63z:rΟked[g;O@xSIJ?f^VQgwwc$NWww5=QGpYWb̥ݺu>:b%M.ftq1Ȯ2[!@7iI!RD,.ʬag'XhZ颫=ŨQQ/N9w9gEƏ72j١]0Z5U9Z8gZTCzG4s7;gb ǜ\ƎYF)F~I%vElD8Vl+I듎{dTFR*P[KI\,8_ u=~(E+FWjA')żp2y=^8rծs c+*r]b]o1?@i)\^RHC7smNR\HR\4CdJ_b]]m@ϥmEֲ\m2B}z~~s'Nc$X3E#,6͛H8v=?W\kJ1q:#(4bZ^%ln`]0D?fdy,/- ןN8єU)Ԥ=eS.񵙳g}jn(Z ~xeܫ623ڍnVˌmq]R52mK)9"Zqަrj\m2Ƌqn܎ o&:) &IdНAo3֫Z Z<jydžT?7?GX_m$0c>X7}U%T}ȷyS,E Y2H'`N4Ĭdm6SN*Xi(uhqvrP71߷zt-l9K<( nF3g׊.Xr3GjsvRԥvJ6c=kx˖:UO- W2G2IsѱVvmEԩ_OC6{Bw2iLWd#$5H%Rtx' x$$Ԥєe(u5ubyj=#91ү^gT}Ccx%ĊF<rpJ~Q 8Uf]<)siTF7o?0Z0фU8M5|ὖx"XrzRiM-wisyGVgSTR[F1c5M%Y C#Nr{sg*6k91yj_jK&%~8r3|))E]_OB1ݿ2~2JȈьo6I"bUW+mJkfbC d_=#?*|sc/f-w3DjDžaڶZkSZ|䯷Fk Ϝ=o`CҎYJ6JeҒ^yőirB]Vju$춹50/yY^\GkwH2[vFwFN:;"9#Xք>%1naVk>X^݋KF%ŷ<\p1s?_j\ѱ5/fQ2uXlR*nӁ}{VoN5ISԒD7 I˕_;gڱT~㊏,$RmiҌ"Gm/9rh/M)%#6Ein>l{sߊJr{t:}cGE JqI=T?\ߖO`~yYcRFۯoǿXF\]>!gkp)nO9y#\cZZuЎ{ѭ6eL,T*6z+mX6UʕZE-&$'3ךΜi;R*־vS&&/2KtOۿOϭa%ͺ"TarnRd#8!\zv9VJi'^-]3ea\y-rWUZ4%ҍ:iuZ"9vA&Y8K@mJ9+rU)*qר%a̷a=9_k?ݤjF#5*rW_:-M:Qw4t\hm 姶;q35RMT81ky24dr '`1yU((ĵ7t,J 7 m0q#NJ|TnGN/B3%Ų83l@'JR}EJ&^()/Z|QeQ5m>RLd2e.O{eͫ%^Mg`cLq!@> ө8$)aǛZ]5!zM+#}֑=#:={2}]bՏ!yڪT%}:XTE$rƵoC)IШ/a#06 nm0:?OWoĺ|1ёFټZ5fߌ{w5I:ɾ=TN1j%;X.HcUR}߱NeQ,Y쌼;1$=8\RvjY!eJ6k][iz[B9FZ8#A9 c1SJۙ^JSqoqp4B}1?:E݋KFz.s(Mξ[]y+5F.64~]/}3] \[7fMw!I=xmYZ54r}fj. $Alkqʤax3M/yfDDXY\dv$=+QaR-v%X 'Gdz'QdΘxmA?*=TF~*-A4dq?S5w&2j{ɗsgy?iFIMJEKN#mdd9\U9>WbŭڰU x`VR/6mͅSn] v+)V ؑmUqS/iNzEWs IǙ<lqV5%OKQI: $oݬ ³Ts[JP9GKIE~,eݷ=gR5m7 5}O !<Ɛ~J5 ɻaJV Z"Ю#P]O%Q:soАo{,3E'ۛyGی1r;Sx(1pvƒNsJ=ZԎVbѨ#u޸>ս:rzwX|2 <fǨ#=3yo#)E,:M<dVKCT|)ېwg=8zq}w:Z.m]~$=`VP/'oϬѐ-?,p/avOS)7 8YFחUlZ86]A%H`]ssۂ+(ƛM ϙ;R 4ȫ$XgSIYr 6-:)D36;akb%uO NUZ߾wO<ѥY(iA ͷ$8n޵7JˡɊjѧիM$O&F,Z'G~V4UjT{-I,ڞծv˜:QͶR/z\бkj k˷!p2Fz{U52QhiPA v~3Lץz4ڷz1~4/63ZI#26u߇v8ay.HE]>\xڽ(Z۰U/uhZ?vH~BEkxr[Xx Uč? 
jҹ>hlZfScor]T9zrg ytVcnqkHӋ^Bz+H59-FLmFlWXQII l-{rt*,S>;q\g\8deZ37: %[,6fu)ӌwܣ~U{ZPHfeL?-|̲3~`1:ڼ|E=-c.i.^n/ܠt b8cʾg8i9jSӊ-GmgnTv+qjFmr`hc"{Z(F*L͜u#=?ٯ:j)I'wg-Jخg ^]E`[yܛ7oiSY&iNRZhW,4*Hǘ0\7Lcۓֱ)hO#jiEuB$ql"Y~fǥgN4I_ZuKX7 \džM^`81ګN;I?MZrNvN@MIc lqh+oʷ,ݥ5َ?0nV@CDӞ1Wd]k侞V]iY|Ig_Lw]}5q^d~nн'c񮕉qgEƣI_EԎT,ʾ`%VOA9WG(ӌcO&ԃQyHd;g= {1qߛCxTʵ#ZΑrɳ*3n9۟^-^NݴZH^HGy=+ҧ]l,G3EgF h~\sz|+(c%hjZ[Imdzǧ5J.R)Wf2nP r3?UM*U42Y;AڮIsYxtvؑJʱsrMiouQIyY6z{UF^KϦw+f}00OgmΞiIT`}?:I]- J*ȍy[OW?/c±塪8Źt"Ϝ*p8#q\ue85t^r+Yl2=<11R.;(ӧn<Jw&rq7>i< xsGsQIcźef }=e:^җ-eZ@p36^9:fR-G tۅZ, F!WS9KVuGG-BdXe'eR7%E6Ь~fks[-Ұ*h}Ų}?^c"s[qe2YCĤ^h|֥ק֍nɈ'l}k HJ2\{n}ᶮܰm9ԡu$X7>Q(: D8dELJ8VVUS1n,Ԃ#Z {g 2|I+Ps_eܴ~D1eZ"L67CB5xbegJJNPeQZ(kӹH`݉m1a~l|:v(ԔQ4d-ݥU5S/Һ#R26QT#!Z9$W[%JсϽt}cJ1ƜG5ik9_1wF+Mޙ2*'tDkf<*m>a#Zӭ(z۩xX8էtghΰڅS0L.B0&R߱*ŤәoSҦp İ8@x<`}okF2#sH3^Όʸc`6 zWܡ֞FUYTQfQ՛ZL)Hϭ{e#Oz/voi[77giRGL+Ɨ42#+*I!N3}lO'cr!VvpN1oFrvG}QmXV#g;Ho0OoN99km(2喃ݑ"V}[ҟ-ʕZq~7m"#\>TN j1Rv]J&ܫ+>Zs4],lr?޵mnTF=LFO>"ENӻif`sƪ*PG&|wr)06ߗ=9tE^#V\xaVF6:.V5*i̭{6dyn8'N8B(]%R;An-Ĉv'zbe̜*qihPd.dXd?Gֱy8m@3?6ysK͋FNߔ3IyϽK4͹W;^xY6|jrwWɅ Ecڭ TݷH.k2y6z_gRGrIS$"$~\lw30FyqDy|V4nC{=y|$!sHz68iȹluVX7VF~}(]<ܭjc܀1 yێ}kP}i(TޞcV ,Ў8jVTgtDe*2WwV1!.#26g9+y\uoB?yohkkM5Ef-6b'K5 u&Vq$EQwGr}Vp%*ص $*U8?VtدbR=cw}y6´cN*,i}Y0={W=_v/CJX(췿~Io+ yW:6J}y$.%eWc&wc`=GM)X֫'${ *#%v6S8\zhvQ\^.\$k ̄o^x<7iT=e䕒-rXr~8Owu9R"90 -ӧҹFVw4kN]|[sj*6_,Z,--+$crwe[AG+h康tTEWoLkmZT}3%u]ۤUg]zg~>z1W?1JG`_G241 ~]GJS挚GlcP]lLg$<)H: TJ}1:.Ҷh,|G$k+|A?ty҄o'aM۱4wG.Nz=Z2Vwfn: N2_ V뭁3Eq;\Ti5kTG$e$<}zsX~5fU'Nz,M5F=؇9:`uƤcN*K+\JEw v8<sƢձJuʞQ5lsw2ZA[8 =ʤ9x28 ;.akO*/s ˝Fx5PQh8["kv>z8uՍ{䵌̧̊qnk'4gR4uoB+YHE" 0VU/RhQ^'[X,?N0\2-;S{>wZLc7eUyc,p1Ͼ+ntW<-VvVXsA$l8wWَOvsXp&e,#bȡsBIK]ɔ)w]ɕ9nqNhݽpKHױ~f_1f]ı9J6з g|mvNr9HǛcARN\wVsZLQ1u|?(62nX/|hs;UӒܺtkj $,G`HzkJ#-^}?i%7ew-iL/ ֔g't*1VԅOn񬰴аn^<ƽ+.}>]ų$pl3z.Peq{5FVdO/Ϻ5\aqsJOt#)NJ7YZi>ΐh:ak){MnNV{kw&$f+,XxaqF^8PUefhuo{=>xcFwF3㊕51++ 1qל[CUTԚhMU6yf†o~+jjjs[kSэB72 -rՇو#| UHՏx_mZm7Xb[8>ҳS z)(򾞝SRm ,텫8ƫ{*oؒ[AdG3ȧs|@88 O,EGd8Q^oԹ{=܅i!R< j)˗f- %/K|^'V[u#3 @HQܑ^O:rOcH֣{] {M~6R7aߊǖU*^+KcL=׌e%FF sH7i''n:jb!Q%쓶6h%E=P𕐩<ܧkei+>%(ƣW?uVc%ǗnS'ڶPqn+$:_N]u3ìCscyhQ7֌0IO??}EZO*4~\m$!rc`cMcQ^<_zQ=;ۧ*^kW1B$rclF}Ƥ#9﭂- 20=yI4i-h?i:Ȇxhuwp2~zQw4kf*ZZ{glbcI M|6ʤ`+y!pG~Tre۷.+wGO+Q| GaifEf zt擒[Tdp|s Szc)^U_J.Ɠmo ,S۶ُ,`}G旽(<,i':t?}]>Y{$IR[/ <`zqVNҕfi%r葼Y̧OҪ-Ѥ%RQd7Ȕm+(kiuA{HTMJIrkg1o*821^խ?:cw0}Hԫn8Z1R4ӂx'H?1zKBq4ե"ܫ$Is8JVև,e;-Ew㙜 1@F=k;)QotK ڹln|rsXԇ.JHn wnZW8ѣ풌EH74[$`s05Ο3qSm oly7gWJ|Lta/4W B]Yl~y cRW zRn#X yv)z|qNOEʯq2pWW:өLjME/٠l98ɯ:tU69cZ5{w3cTƋw<}*T_mkc+%hQ}}2xǵwby#ÚMqj6YrOB3ϯZi88-Q׆.?{H BgEo;>҄e-4;m:#0x$~}]TJ#fD62\u75sT-exo<~F3{sf2ԤjWPV?1Ymr8ߵu>kIa-SdtCݜobyTbjXmo-N;;U8ɫ"Tʢ֯2D '9Ͻc}L+4B,: 2dj%rm`;~={ס,^]ǹEI-4g#[dNbʩSY%Z̆I!8Mq9"xs# 򝻾^]l՚b!J彴C_)+g1 g8(?}/OSQ-uO[ፖMVmFY3\F%)wh꩎:ح}%L#y8*̣(lyzMҳM~\x~JY:uajEye ~RܓמJեu#Tc*.M\4ӯNۀ?*&~&Y㏆IYD 2Ǝ񖧍)F-,x xwFQi,Iq=9jʤZvFx2CC""+PdzW$>Y#SSt@$fyV5 NY AYJ晔c,OӷX7#)j^ƒr.4yRzvŶVmszz뗕>E9˙sc*ws:{u)J6p(t#]<|95hԩO]ߙ8I9׌}-tn<ٛss_+/sݤJQ[h koFvG>OiXqE=mӋl >^ FўmZoKo*O=Ysǥz1:2Y%mdЃ|rolu DKn;atO<,e=Z]}۩^ƥ^ۆ^:2n't%}hhU2p}ӌ}W~*>~GG&)nVA_G¼\ŇTM.}t?4wW7[q49o[c?R;X?eOvga)8tmm;[T_ZW,Q:)Z?"o"[y <sQ뭵3(Փ_E~JRYCm%CvPE+Iko.5)^Znz_m4i.cfmm@9œJ2j-(*TTw?~9;3e]m]t/qcOMΫ}5}37V6<;:VnI^:J6ߺGEM՝>3Xe1m+6cӦz~nS0B oTӷ!KqnVmA ס[փ~G0}zOOWW֔ V$57/2u| 1Џ<7קKƅIBItn^~8qR/2jb#̙$ @ nByX-=nG*\_7ώn5nnWrơ #۲};qҧYS彏1XDZ4Xz{.UUc1 ;9|E.jvi_}zTBۺ~YtmEO~~l˙&>T${7d oY,,HFpstOZZtШՋm33Gr뽤E\ DZzW>k­:Y;tkd %,Lu.`g :=ɵQ+7t/kO 8ȓ79iK^]su]W^r3>\8oʰj=U莓W5!*yVm#lPH9$pu%%֞z~l~6ӊGG¯G_Kl`Faw+6/ut6ϑf1\=1il٣]\2m?ONku/5ϛW喫t/id͹ӌM2ңH>U/"8ДBѶ cĖFIY:<垟C6(s9-l q~5)oEƚy3V[ Mo5++MכZ=sRQO`<35%iW & Q}}*0z^:yJ5(y$c4'Um18]445/M=:C˓+X_k[ٹ)DJѤc`w 
WjiRj}ǘcrOY6ӯ~J0#sysԧ;VK2뽃tV~]HѬ7w9JU1\>(qM;RVtoHզ˹w0W/.Gyh?']|r߯O"40E_ sCI8ɴ3t#B|R<`sWZmluʤl~>O,ay6x8s)7g$)lmh󷑜O)F/[ЯfR1 k6s\[rjƋ<'yiXcF9N~^GC*/D*q {zuSG4ޟ)QK; UN^XXǖ/!7+{ꎉF4V{rrxWGS5T56ﶘmEڭҵ4f^e.Iŷ$CCl` ~hunTC, YZs; @q=т^Gg}%̊r`#4时ش\smV5/c7ȶ~^#GT*1`|{:u)ɮ#}3IN\m^S\i-;@YYf=#{ꓩRjUYFil ql^nO<~zR={6|v)ِ|6ǟ_Ҝiݿ·'rbm`#;YZ@ZSJ;qw9O{ Z9GZfOݯ+99⹝9[D-}2;n*]6:0gj2)?FF8MŷGT7e\J7څl͵ 3֢M?R4e_У5ˈ۲=1VYU#MR>`[)r:jeԵsN2}РEUoI7#*%+ʍK}h,)O8OZr+WUV} Nݣo595(Nq}ho*.UI]`{7d^D8e[rnn:[J\i89]tߙd%04dWfqV]u"IX)kfٹHpqSF}b؅~;Yw6X`{8iN:)sE(;84geddr整.PuП˵lVn 9EǓUSGӬ&k nW#AU*?2.g̷GpBHB ͑?Deyrc?gJlm2]xS'-v&K{%7Yc[Sؑkj'_,,VYBHC` \)(ݵpx̋ j[B ZϛmιAymőҜ0|6^EiF0<;RTs +{i<2yWH8F*ևUZxz2W#BcqTt'#09ҕҕ7NzI$6u[2 Nq9xې0&G,ȭyFcIu)\r&7J|޴nDwGՋ2d&ne xuOKNtg$UfE=P;=hxt[{"+1B<ubWoaqu=+X›SN:vMsh @[*:㇧i=_o.F+4'O##,r+kF{~ )IRƵ HJ>U/4cFUigi281h)JQR|7[.Ė/$iG}liu_*|˻*jnTxyoN?#4T`EcN?^:qȖ&,PH7| {w5%QSV]:o}`FI<9jyN8o~8W{Ji[mSlecVw3gzSekOŲA`6MF˂=ԕih&ys/$\lQ'?:#9xC_2KfYt(dUxuG~hFji EYVS+|~u8zM7Cۄ!FI95kԔ}l5;R@iQT(#zkYkh+Eu@ #y-q1yϧVrR|ַKKoH'?iϪ;gNU_/O"Pt[iSfsJt%)hKWYYfʅ#?JQbטN4'}&G2G.I9B{Τ7԰H;R<Ÿx =q*vB(4L8qc,D$֗-n Edbf 5+kFIhi< 2?g]_Zq GW#HZN ~JN5%?yi\ 7B6z.G\fiEҵ*nz\gRR[]o;}xNxkk%{>_BO&r{;g_71T kbDGMD0PsR%[*},C*G Igv\{Ÿ4e$XI%a8#?α_-J1N͢q!g!X<j+y;QR[ݹ3ӎҹ+ K;n. RN62X,9 NnoL*sS) :+w! .eb8n'}Mt#eM)I)$gqQ,BRwZt"tmG8ӧJimL`޺n.L1Nͧ cuwQ̗ őYd.jn\ؚթ8) #8Z'Z3 O; ݸj$+.d} XeЁמsuG5˷R}aA"wPI:y, k%:ԣ% !}.[oU?NRԛدwa$mOOֶ.Vx{+4043WOzm3gZle^q!7qp^r[WBneϟQbܟ3ddО>52.WSYy.XUݽX}حGJ23q16 A9| W.XWb- D9|8դh8NÅ*nhеm9nV4H.JcN9Me5E7h[C{js6?0Ӓ;pq5q&ti5UӮ7[ .]adʬ[^8<ҌG/7jt:eĢ0 /jr%s?gnm5W`f/gsv=GZzޖ#^!c%>Q5b[[[ɢY] }xk(J2M({G,G ʣo}~\CJ7~|%f# S۽}AʚMu.+I4lMܐH4ݬg/v:2ŤO}3Q7kFS}e^NҮ1ѳN_lSӲ y%|% bYY98sX*(ɶ*\#WsJG9ӵ ROSB>=N8CrE8ޑ\7nƿx Nj/sOݹ]ov) 4,^G<ݘ^&쭴n Gcuq\+ܨQE&YߧiSް9|b_ԍө4`gXrz~ja%ϦƊ,앮Ud0(=#N>åy#"Ns?Lwo,GA.w,`u=qյ/Ěny XVbp#c( (T-1 ݘw.kNB؀1WVwaQ+h>BXnWw'j *Ν{{ș\^pr[xe:?gʅX9a0;8OU%Ee~AlbcurygYYuVY1ے~$zZQFQ*]]>[iIH$sZAJ[hiRM('h1pX؟N?}UjTB[ wH< c\򌹝Pr$6I$JM)n>3r*IF*Rg%[yr3(o.p}zzW<ե{!L!NZMњTy+57RK4m.ű L*q96JDqh$aI=stl&՟~v@?u4{a#38}8w*z :3m 72ȌaYfng¤&R}Dja\q#+'2p 't39lRJ!Y,!UB~AYO,hzW?Ȁʡp#wYJ:-xZ$l0mm늊I_C}^SQ}KZM;u*cT5K9*|j'byΉ.#N#ҹ>FӔ$xfgHF2 ޳^Q!#+9%br8UwFrm+sٷ7bq?:F<.e.%JPr{W#*ϕj[h{ql3"n'0@j?nT9c5gkk4ibʲ 0JSc?RjڮWrی"2nqf ǩ9ҫZZ[(J<$K o3ӯ҉.etggI$o.`d3FpGc9 HMmNJS2\[ӻ+5,]MSײ~ rGMΊ2M%rdQ#m$*'-* u Y7@HRr}VqoKyfzZ(_d"V\GCrFGQZIZMH4bqf 3u9lFQ )jFWs*bsqDW5SQVJ=t#kG6?$dp2G^2e8>֧-z**ec f8W)JTQFZ>eO&Ɏo1V [M(8o_E֒}bLċk#srkXY+ vۍ1r!^s߷kZTfZԢB̬ec NGO^ gRtL%:uD_U}Amo4Ud|V~[׭FB]}}y&I%!Uv`۳s:b^po֔TqJZjFmɒ_9#yUlcdT.z&M ۲"Pβ"V.eN #W.jjѽ6ĹoSU*KVyc5+wmv1B9=a_ҴEhޮA-[[E) mj|r{jQ77dwN&UV0\;GӽDͥF;U<##;tF_}lcOnRFZk =acd $/99劧WnM Խ]} Ry`a$cLOޱNa׼woʺ%.YoӾPZgMd>Y$@G*J*U-)#I%|ni{{Fߠ*q.F.0ۀpK@~ZʥYAk\1e8Ix÷{tQ>s49ɵ8o]g(J N1ؔ]rX("lڝ7(-WrO=?\< sn'}=k:uLX=q#WU15μZ3bn'ۘP\$rr28tUMFe*bm4~_6{>iYKJ*k5Qм"XT処̶.Y[Moh8QAuZctuъnk}мǹfDa ;Npq9x˚KKt[քY̏!TtRپåJJn-Yd[c:6sָeK{[pU7|6Nxϰ# i("GVnR޹kJsXFJe'4 |\)= *V^PN-MݲDFnfmZ0z9kI_c˒N.r&&cR>R.mɝxiR.hO>y3aFx7g?ᭇtuڵNYkjɟxbr;ǽiRQL⧈KY."&QێORzJo^}y(ѫjAk$2Q&Pǵ*Sz}QR~^IJa"]IA0{dzm^iBJyF5sMuu`¶K1·NM"4֌c;] I6{Xz8~i{O֖CSNIdpW">5¤xӝHTHԴ[ndj& ~y{X~UùOiK] I#ixߊesՌaRm;/ź^y]SmUFf * -]ai+Z,-F֬ȧ;>($;F%{eOf7< cTnܥWyQrW?.M,#p^>"v9\5{lVX"\*VVA7mʌJZ~&E2M}+3H WO,{'\ב[I~UmeK}G]ƫ,@=s>=Ntc%dT#-ϔGy\Ϲ”WNÍ/;ɑ7=qӽ|wMzҏ,$i!EVN1ny#irhu4lު1W [z5=xRkfmH9sUFkNcݖЗg]ZJJ\.i;(۸͌~s'+>),#fnzzgR1M4tF,G]ǐNq'gGM;L򏱷tYԩz~EJV %fbH?1 {WNF˔ vXޣ=:q?$@f%HzJZ;~lQ\NZ4O2VBtnMg+T]ܷn"C Ylf2j7dMn/+ҹv2}"UVg*0ۆWq=gR.5|󱤌UF7Q.[ٓ*rRs{9fczesޏM{#tImH|m~^:JZNwNRg9kUܣ*Ƨkg&g'\s|7n 3X2ts^iy3oB3Ƭz\yӴRMKc=d*#ⳫG;w .WY$ sHUO^sjKFe+nכs ~e0`Y0przWNIG[jqbҝ%d՟Ln-q9a7^kj7(8KNudE4k'<N "y}2,̪{pQ}-SVi#o2>9l5*)S] bPUCFx;+ʔ})Tr+4#S19f CW,F-z%F4+u6*zRi> tj*KRI 
fL|;xmqҦ6ߧZBZYh4DWvڰm>F-Ijt&_Ulg1y8eU;cUu#){%ZۛE$ˑQ"lp*֜W؆ 2LWw,׌?J>M&z*T>F-iR$iM,i9a]4FNU'FM-{udg2-1 dkOVqrף귲/[#^};:QӔ[[|ͥF*^ϚO#ЧxU8`#!v3ӞFze[W X1z[KlꬭJeVb3zs/RWt@V3k:b9^~-v:p˖ 5ڳf7Skաha({9cnZޣ:=Aj^2qOu~V4NSJZ8rk˴9$I=h7RZ4^+4{$MJx&#/uFl|ɲn1!~:)7Z+\"nnrG^9u)rXwRVhZIxد*W<>9sFź>Vn e$i{A{mgu)$ mi qN= 嵺I4-I' ?G^W4N˩;4<8khݳe~ɻ'SԜ9g֦iFIۡB]]̊}ȹ͓G=u .7u^=.m*rjH-nʐLU_08g4J8jfR1JYňnRO5ѕ#?X-$ؒ( C9+I4:}[b 卛zV\/?3s=Fxe)DϖNio{?BpQgbi|?Rup6pO)rFs4ەRݶ@)n{~qV]SrHu幜Ȋ*,rr1ʥ7sJ.h]N7ApEA_Zr喨fm n9=EmjSS& Uq\q QNLiҌn]l,OT9{TN3w3f*#ɴ κ=ڐ{J}= R fyW#'ֺ!wAVȡw2spKs39֦<̡!_tP(=z5O8[٫k t.n5:ܯ=ޟ:(W)_^e} q|sqKuZ$ۻ1?].3c#2ENqQw9/_50Y6x#?tЦQ~ֵ:N1W~zlLigsa^˱Ia;~&Eཎ࠴iq$~'\eNz_SVj+[v Yn"֓ q1WNS2k-֟-K 6Fݵr̵}z ׬|vsyo1氭M/is^ZRWIuw))(-zrJUbcnHh6y$G1d q)nz,[^0=X.RhJI]'Q"Pa}y~lƛh2G3f`pƳ(m"\ICy\0+J\XGFIIzz㾉I.?rÅ9kȘ5խ-/ZTnd[xN09to/iMadK8?.$Gc[N=УGCt)ԨTwӿ3|Mt2̖Arֽ4<4b:rӹMK,ĮwQ'~{pí.uHYLebG<CrtIFIXJ`*Ƈlas+r[6}1]nhϙĜW_O#YS9{ZRW]?,nuQHⱓj{ԪZ.m $_bpoSZfESsRZϩz՟'o:C<얐HʲQNJ:[|Bj8Kmאj,]we:7O:0NQ{TjQRJ&-I5'7QO@ x8Ӕ9#*ju xj;kh$_&Ls˷FhQ6?N75mݖW>}ç qȱp.{1\՞MIJqz>=;F BA=q(;]͈[ _tQeng~5Q9MrcN)&8b͹3kOwӋm˱SgjbziwF,ܷk>ӧNR5+0yqJ/lN|V)vF}8WnVtS4OpKp9єnlUc#$3Y7,`75cczj.$i_NKHTvbAt0&x8 ӧRQz^RIc:7X*my9,Zve0Pʫ0m>^mb.$\c~=GqEb.Y4e!bb 7#ԦȖ<cL:6 ix^Zjm_c< s&xs#V^``rΜe.vr*׵0Q\q<@<󣆬dXIdSFog#\ϕ>m RjI||elx_O;q\uV?4uoF?/;doN_qΜc/x/#.$]׬@ydדZydw)]6TY2ʬ``"4p/b9J$mŎH9^5jsǗrIjWk!̫y O&CTE$<U 0WKB-FԚHwSee={g$WU8qlne3l.U#|a 1E`R׻6,L6 a#z~F7Aifimۆ?kь\bR7 ߕVD@` q]㮠RM=7f}Kr_c]Qڟ'̙ w9 ~@UFCYF̏žffUY+?ghr.xZ< Y(ϯn:I-B)Qf Gwn%tQvfu!Q)4d 9eS G<9Z{`,O͞JA[W/ð|ͅmVq0Ex쑛`19.+ݒR!kyO2YN@-'R\QZkEdfvsJ2j)$EME@wXW(ivr߲HŻ^GlC"k ݳ%_S+Qo%fX|ݏZq3iS_!O1?J1\{(Vᵱp$VzW4i4-y75D29!HU*>\C N_q[--לYX2dCJXwNq☒Uۻs0ge8'A7$ܝJ|]Z. `c`$rԔZl,hxviڛ_29,{窧^Vz9Ju=ޟy٦i2JǝA*?CߧJ(ݷmz;%:mo_3)z_cOhk>_3#Iyk3*f'sjDHq{cnϬ{*iCS(-x]=KZHpЮ8t8\Up5L;u5[[\5܋ Vf^w=3 Dct<-[=/c-vc`ƲZOVki SN7]OZu𸾿nVGuf"_:e~7Fܶz\"f:|?iX:ddbg)+FՠdP%V't{X{9ŹE,6i 6 9jj|n2$|-gwh hDCq}.Tw8a)GjjVٍ**'c߷)9=NsJzia(o0qp9J?ch_cβ]#]烎z)}YZxψu&՚yec o6׎rOuVQt}ϣb)ɸ7$F[Kx#Ȝ [q_rK>Vu%]zW'l$)cx<9+uF"ElӝfSvXpI?jpZvӯsG̍&U۱Gkأ)SM6֥ kH˅M݀ʹ<\?(Z﮶;*n_q}Byz|O[_fFE"Eg`r1kh)JX/m-u0#o3r3v#[޼zc'{pT5Ͳ22\eq?v`񌗺Fazz&{_6(v1ю^7ݞ>#'}:*>Cpܓdp<ҝ+2xJHư1/nRFN3+g" NI#R{_2j$jEmk=Ӿäݾ*ˣhLH8-eS+ ΪYY-7,Udʯ͏^ 4]I!S[õ&PAC*c9fg5y4E4q t;jŞ[Rn]۬`-Lsi^v U)SSvcWn=Ǯ v^Q~Z߽ɛNO?a8wE<څ)g\$efaհ֧N%F Fg7Ik窷wuc#,jfEU s9Iߙ3:s_gX|K]osbvݷ;Mx[ENR |U@9W6W{˛VǗ_[Ǩt٘D``;@#>݃qi ukG'X|_X4 Ǹ$;}Fmu>.1zL9]MjS/p]Y+e|9U8FmOmof#<id>y$~U2RL8S>:;WJZG/ )yƳ %K<}t>O95bxȹMrM)'i+00t?JMhݹ.uLg{ MvK5G̳ Oi#r=3gOOϱѝҽ#*}SC+dR!:O98"$i^5ʦ~_.-g#WO8tU$:s9溠F)Y9SRߟG)XKZuj[$1^xmN;l{֐NΟ#**b`q3 280*:g$wqiVЅK MW̞^8*5M*bܩY.ܶT&|Ӛ}579TwD kGb˹AFGb`ktIjMCޓDqjC~`lQ}p1jvԏ4,Xh0f Yz~+H}lѓ]"O!8fZ18i(WSwKR<__o3[hI^dRGy;lF8yN:洍 8TIG>Uv1SzޮQ\ܱkղFFJ=\ iJUyz-t\D"W3Hf8~Py?ZV.I}n(Q%2]7ɜqXJfeF<Ӽ}ʆ])d#NCdUޱg:3d-):Řɒ]ś!x6X{ :rtZ0ɓi}!sN~h=.#'`ݝ0?֪re~&ju6O@J_*}$&wm{r+X8(IA^[t;Ȼs(4l&0\A5 $@yvѨe_΋}C17 =}J47d17b-}v9+륺FJ+}t>JRM30]Q&ϩteҤg)kr(|}̟lϿ?LթI4e+{jK7 ǩ1+OzZs^J7AP:%^i2N>S唗>D~%ʭQ[-Vv2m(׵;sElC|Ӕq +M,:c߽kB`Jb]IsIIFn8۞>J>#r絓F2rCPW=戸Mt #(u 1fuc(x pqW(46ZIGFi7-̾b9Sմpy=I}RP r6GnDs*(ןα)(=r[Ȱ NOXc'~\Q4zov}P̸V]vG'YVQVosLƌSw#kVțX/(§O>ZrZEgwRȖnfUQٵsjj2a6ĊFr̫|zҴ+N6[nmө ԨG>UO֎[˕X*So3K$p>cSb#KWvt#$Tnwr˚]Be!{5#6Gс쇽Q{ER債iT -?̶p0C'h'uӅaGϹ ٸff/'9˾u9jV~#om7<*ҟ7-Nڍԣk_ˈ+bL#8Lyn0xr^i'emgn6VPJs֐7SG)]Qf/+.|) =*q*5#fEG<y}ޤOJKybv5ǵ~Te F6gljUbr}1׊2R4%p'6HC|azJX-3ԯ9{][mUm=kJrtL[| /&,ZǞ@8ޜ(H:1-o3B&*YceUzN"5ױ;kP]Hث)cixԵdbH-.[mcfvў@'?J/RJc52KXiAI7 ev$^9:n[Kp(e@O?Eh(F&hVuUaa;ʮs;܎Oh֧Qv T$9 r`99*uVb^-pM"$/ JΤ"SMk&y<bI;<?jyn>j72qc"q\ +JF4*Sww]>ô&w7<R.ksXӣ*k٫! Hg*08L~4Qz-JehHm6sp1~jr5gU)\ &m"|=*z-IrvCc''Nqf5RMJ+}l-/4Fgn]:{JpK|ѡ6?*x' Xҵ:r4[}I"Gnr\sGzSʩZ_QlNXX=u'9|I=[,b_|2xO=? 
β/BѶF{ޜ[[R2}E w}J{<Ȥ2Bǯ֪̅n>{K@VW ‚mDo.MIIWݖ$b=#>z^ӚI B{{U/"^c,ms8RvBw9S'Υ~p|ǁY|>rJ9G]ƛm_]y5QgT='"-?#S6|UDUqtSLyg Hg޻wzsJG.#l2Dڽ%__NW]Hfrn#2I8cǚ+oBQweY0X];Q8G985}J7F$N I+?wfȟ[S9oGyH5*quei3>Dm>7S/ϸF!-FvrsT(HFvWsJ)ݫz Q,PECnP#O{Ԝb&g[{HeLڱ^{]YO5yz/]珘#HJ>ҋ|5m~V?R5iF5(%dm4 kk/Vfam$27*# }@>[kf{Zj)ՍFM'X\y#Il0c!Ss>tjT%m7~2QZ}TO V٩Uj_iH2j5-C[OR)fӃ}쌂3:q{:u%Ύ^sĶRc7nnp1Ѐ3^Tty"~ֿcJrSOmq5- ,yQl?1SC&zuf4TUK[[_KPWڋG$q88ֽ*8{9-ݎX:0断o#-L1y{HUt[e$JҾw6+k}NaN~L/Ilq3rT"?̰[ܴ#;Xgǟ6l^󵰑in2"0B`>kqV[]~ߌ~zk[h/xg^tG=M& n4,En@v@;}+*jIoV|'UnM=޷3>Th!f!b FHܽ3n>F:6TkOR ]|޷4/&D$Y5H$8e` E] O(T)=zMhңc%MKMԺy5zw6k>3Ӵ /:M[d?037 >~JtoRB7t`m=|>s&zW?a`";R˵dHӸLgXswRՋ刋Mo/NUZNXx'';! T<׋.ffe *zZƥ4c$\k\߼9E619#I~,WR,I[dn71*ƤtݿN#첟c*ۋi&Gxc-*-14~e,ʄ!R9 #sRQ ]$>=bqTF >&ٮ><}uIUkVqiu`0z=BA f=K/^=Jx>&8]:?}?[62VŃaWrOELSbw5֬IE&^w/fW e4arQWnild OML IZUcg;yw馾a}!NlvmK<=oY Ǹ~9~Z!{cpNFG_/QWN7O88M^_K+^w'D֤I>y;78a,L(ѫ.fOW_1QԪ5">"| k|/o%EծVY goԑqN`*wI{߷~QNRW[%'mou>Oi]wYME2d[<@9]%S<ŒV2I>͢3mMXpzd}+RcOVQS=WU^٦d⚮}#2C3dȸU$d2{RNݬzQIEZni>mI3_0nϩjb~&kn-jqxjCd=J{ YFJ{ A*Q^uoLҵm+a%Nвw;9V'7+~u& ª@Ы m{2Fn;z_DaZAJ >V8|`rژ=;*Fk}4; [\XXIH) v pԌc鏼XG~Z|q +RM$nin,՟E[D2oGu V% ->>/+΁8keb߃ݸ5SEO&ա[Eɭ?ۘd}0s rh1WI$32p?Dz><c ݙ'9ҼLUJU/Q$f9OS\գvtS[W2#?xݜr}݇R=,=9Fw3sNQƱ;{>$rN4MKQԭX$dcaTNG9iVsvVNWT%'??aN/h cwHX+3W>`^E&M3ԁfʹqQ儕e<L3/8c*أIZ+B*JK%),m:ȟ,~X-zc2,{RNRv< s*̥F~SߌsފnQxzp{koioWj5r)Fi˻abڿ3m9W6G#gV$ͮDycLYe6_jʥ5RIc1M [HU&6kU(2i:^p|D..blr@cR>_^ ũUd_D=\pԧ,Dcg؇- mT |3y]&ǥR5'`faT&2p2y⟳ԞTP^<"̛?Fjn"KV_(.i>uu⊾;^h9;7b\άB𾃽L#*)/{G,$~DS1@%6ⴅ9kSYT$} s\3<('(QMPVl&6m0FXt`zpkJ`zۯRU:K[[RތZ[(R/2WV[ܕe%vO/U!GABm+]#[N6,0a+R6nw8֪8JWӹh^\0E2a d姺9e哔^gKm)Ƶ6Ft쮚/scEeVg!??rϕte Z/i-VfGL:~ZϗKFWCJ<VU᳎q\iwU) Kk*Kv@;89ztY{v&QThZKM\U,>\s3XM+qm/ۢ9*;qӜtʵG ](Fp8z9e>Yr,B ñ?y#mUUBsSݖ[Licgh =~QQ5m- uU~zEZRN_q5 o+*;O~+=QaiGv,oI?Ҫ}ɿ#co3Iɻ-Ldg'̤CTWmH.U qO_Ϛ_rU)Z/CT?ٙxQUylXsysS~T]lrBP,A9}R^6T+QS۠I8%UxU֦ЏڒHZ?݆[pgZwoGaylWTTuz*(/NY<8}|DywZ@q`OJTvwЯ$HxsU;rN2)I"jy'*zG^1t9$9\iA^Eu"<2Ȥ|11U:-7cjr&ۨ_'V,NsҦ5{3g|ܱ]fUl?/njRRF~eCŽkneoEuӂDqvA,"0Ku_ݐë~>n&\k[K˸m8횪s8=IjTnڐjN웙7AW]-m{ܱQAq1olthWouj TTb7gu=5ymhr܂[$3EzN2vf+RKQ"wIcgGtFR\dFN,Z,UFio?㞕NV&eͪuVIǭRNI/ښIW1mQ峟]T4R[jqqF`ݳآs=W*|psJ1ht"VSuTmR芔|n$bV7}HR.Y]5fK;}U*x9?ʚ+2eMI#gc}I]wW REYch[؝kjq9HnUQ }'Jۗ[zabH8\3.~<*~aNU%++/^d.&$1CH}9֫)^&{6PFee9nCJF7A.mu*_:,ry8Ұ\:U˪2g !q#/˖o~)^1[OEOymP9jQv&oC>%E&ERI#gZE1T} ˪ &Mr: ^&/nTFlm $'yc'e.fSI.8̌(A1ҹ*i= aʦ8%+Ghq{qZۭYY/Dȿy:Ms FQ{J=aLU1NZ*+a3&ik#g~e*aAG}ʧU[m Id|}}0}=/FJg)#V= տe+3KF2lMn7}qs8ԉ:iN契 qQNR'>g/s*F^p}?kIGDΧ%}nO޶GGg iNMGB#Z.K,Xqvͻ99 9\y%voFE&)YjX?֪Tt4Ԧ66m{ⳗ'MN+}HIUܭgsޏz:-Le$,W OM%1exn2\/wQڷܩ&\Γ:p swGN=Jrq(r;on7Go~?>6Ts{}&QvVV[\Fcc/p?Uܬv2e&BO_N7ҤSS#DnʱLkN=⮙6y%lUw-K)VԔm ĺmס*~^nfѧ+=%IJ426l'3㱭MF/% Fwkp wq=Ͻq7s5*Eݽ?12oû;G\ҹjSS旻scH(Df>[no0qXʟ{T*mg-u%hWq܄GPAZ~.?f|VJ)O vzWT9R9eieF 젩,sR:دiP [rt=]FU*J59BG>^(zfhOB ј^ֺ)A^hq^D9kiYz:`W;VFu!MEF(eO)UZ1G=:m5R~%ra[70nhU[+2+5KMYWyeR9)!`t~ܟK{O>\k/g)EjVa.Q%=8eөȽ + 70>f(iiRR"Aby8=9*ѩ>hܵ<{̊de@\%FRn(s4-1mrzg#t؇)+$_ȩ,󛦷umZg9rԧwJ3e E!e,X6܇;Zs*wM^өVˈPt%6ӜB׹̥s5 cj nۅ>f 0IUISB\?3'fr.Tc[T>2)Bs^stF;OPG\iG!BoAJwǴsGVW|Z2U*'Λз{A<629G̓9*BżYXUn1i˗{S1NIo#3G{%&ܫ#'׭kD^^I&i[f!uU۴}w c:gzWg\,w7dppݹnYH=A=zQd1'ϣE[xXFo07 T^0AQKW%[v#Kh`D99;N:{z<5CjTjOBR)oL2ªIEvHҧw&hn$9>B? 
^5'VsA ٩*Yr+pO=je'N??fzKogTn: XTg>%ƛRxYc&w#?iF\z9+8-KpPH!3lbCɴ uQk ]HKITfgO=ǖd\om}p+)3KJ^(|`~7ڪxާތ"MZW.YNq_۸>YUV,n.0Hl'XsI֧%RE#1"vF۷bcݻ "?R10u9Sos,U>kwCaITME;i8=?ҲN/iZ;zyy78YEGFJ*moff\+9F2G2X7b퉤c3sɧ'._1q|˞zߊQ割U4oRWjcF2~͂>kgN\gHūBHjCе?xS1sJR"M r/IQ 5s%N;5,Fݞ6x,W+ֹRTl\K!-Z8xF 7~ %߸/;[Ӯ:RW[嵢녊i7$hN8OsYѩ/CO%yE-rFgdIʴkK;\)8e}He_1CotF%9<+oiv9gFIZ8GU { B5#ZꞶ:VIv۰͵GT̡3V1OVPy;I"(KsJٝXZ~n}IdyPwn }yRm2'wMvFG>_w[Ќc4E,g;zڶ7kcJ1NN+q -$r9:/L?:֔='6/jWϵ-m,PX㒠Ķp:ҵꉫʴG\c==`<#+~: ҧu%%1 ʣOYԗ4rs`#i v]kFCv;u|:ʀZi̚\W8˕kdwFބvdFZmAt+3G9_ʪ.ؙ-]}j o*bc8nwsڪ*.bSbAg ,*윪I+T ns1 k < iNU&9J"kv[H{jÿk\e=,jxֆlAw ym;wxG܇F>S&DwpL(w,9*s+xMXlDkF2wm;L]:Ef$;v*^Qٷ7$l>f#=#޺Z4{G'{HPo<اw|*܏oq[%o1Y`sE>аʪ8V8) e3U556jRI2։ 7ͺL) )ʤjKZ4%SȊ,dTV8nq)+.~i-鱼Iƭ)sYjT{?w$&8B2lJm^DZExڪ)P8ץ'I:aFO6G{s.Rfۂ[ߏҳ7$/صO[ 0;~FbJ5IXwfhZ?;km˄9}KF,GӿDK{(|cKCY5zj /,ߠUIɻڌgw&Vc5NX[c'ZG.m,Xcrq7;O߱ަQj qy|UJR^٭4-XO$oG7zRV\ѥI+|VʰVSsZr0jd4gGb?>45aZзb 恑h:g1R,)rۆ@ވJeYRkwŨ Z5rSNbŵ0V^UpNje)|VTƲRYTa\tu4Nu#*rWMKp̰:$fco^=REX|UUrM=cz.=1+SIօHVř[St}y泫tߘ`)~r)AzM8SGlpP-27O3==8 TQdjy>ePY[NsI׮)rGF#G2:^ROeD2I>IIS*ڕ8${IkcjgUF~{nqM;n"jH*`Pk@+UizZ?D [}r6U^ { Rj[_q΋kUt7͖9$8U||2=GKIթ*2VWz>O7ș&U|7?xuiR[}BOiSՋВ O1 U۞ >NH-Yug.yio1Kbo/au:cA=j9t]C0y~`S{gO 3zWT0R(V(l~<[fUiI{l<#{-nΝ˶edX~F ~S*qtSJ7mL8ua?~?Z(QK#קJ8[6|˶h’<99?z((mtkh3Bdžfl*1sz#X:8^3GU?0 0OqnnVm۩HckA<ϾU2/rt>Qٴ6Jv}uH*{V6:ӌebyR]nߩ$~5yli*NED ee4}a+`nc)>HWrA?1fV3Xcd+ 0ˍy4gtDˑ-DK/.wdW P0Ga۟Γ[ZJP6I2w9yG{u%-HhqF d Rvn.T,Gs1۴d&2ya~zQkݛ\#1X#-ʹ-ʭoR]m o9 ?W<{R܅jC:|랢sΔyn^JlHlR7GަRъ)B61ȸԌ+UrK\ {A8UwxMyx8pԍc(m,\s 1r>ϖjSyƯ4FS9GWCc[kRdg[~G7Ru2@IVTEn  `RV]=9Z-f5(/1 C@=GZOV]-L(2g$Z?Ao#,LVf06Ϩk'叵ZiBqUpjpG܍f{y|ۏӽc8 %vo!-L1~{HFO` s[47k[TJ9b}_DߕM6[̏v޽xsQrZ߽hxNrkNíaiK/*U<#EE*|sXd"5{}StyV0 @x;S59^[kiu(QgsͷDmVE=~(TE:wZФ[YyRv)G'kߎ4^:Jk_Ek JҲ, ^>lqǠ?ֽ2L*QߕODKjnU*SrC}:0#˦jVY,.O)|+Si*5>]/Gp$j6*'kьʯnve\2}ҁc?v>JoK6ٯ7+Hcv#E:rFkݼYz_R r^ƎFL 63q]X}w:_Sj/.+%M?/ǃN:tdw}N.1R#QhV+r}+Jtڝ֦(ICد é!$B(0WEH1W:%l=[I})$cn:G\4{MIsPJf7e$Wcq)K٫w8y?z:[z,B9?yS=xJ*\҂3*D8m̧=y)p z?ixk/tK!c!^XsϽO3UER"X~Jzu洢Q`VOyrݔV_=*y}ӫ$|s Y{Ҝ%퍹@YFQQ5KݽJ76M Vݷp;[@qkjNVz|\[ZTbGd`Ojrf~Wc=K qk5$O?ԋbDrnW2?ixnZRoą/nVk[L{Q dNu4ܻȬֳ*ʼW,(E =?Ђc4Y!/UrOk弖t8ϹirU>$9+ .TGȵ\ljqY^:|{:31K˝r:<ֱbGLQY#dB$f)h&iќKѹ3ԃVc-r8.R=ף2I @D_sJ:%cUx_25%6<$?iMɮ͹D|81{t8ǚT2$GiY7 ׽9k$*1q{#<6GYr㏦GJoC[FmΕ:G*aT`W/gʯyZkcY74|F#}N)J5#h=eMiMģӣDa^Nk*H9I\*yc&UUp1OjGݲ:ZPjdSswF f;W܏ASRKjHMnEme]el9*N\4p"W ֫ rG((ɉ#TAG_Z:#=JYmaf@$| q4JzjmF:t$m63kmn[z Ǚ՗Ӡ8ӄaMt'[VE+?*rѧ/r-܉YeXHDd(cg⹕.2I\*/,r#RqzF=#%};YesTʤjGk'֋FkY'taX_s~knQԨ/зcCw:zTP=a(I=?@嚦׻̒`d6o0 A'q=? 
] dH,81ʻUJymqqڢOmg$HD[Rq"+^yN_SA"k vl#2g#h @sEcQ=N7')ԩS+]Z.snۻz t2mTsrr߾zVrەO[M.6y{ros(JJ}맑qr j~MY7ݜ*Ҍt'+EcwpKw',ԧW=M+vO4ζʱۮwݽ;Vї]sӧ*6Mu1 &ࠜOJ!ӽ)ycӿ@+4u[2U6m[kdg3#6 逸;N^'~,m&/wSGی>BF4pܱW=7I$` S>{z֒QU{~"_mf@U$n۵BdW>'NM;X%/gG^M`i%c ާ20Z6lXDʦIUҴa{ܹoe"DZ\mqv=k2|֕Z.USGicNXɫhe6)xIԻ21[Us =k[*mziI= '9ZF="i5N4?hk7#oO׾GF\DGв匩 вhqUshuAjgIS͆5|8G~+#ڕiZ'rǀū\YG7/3W,T$')(yG7c~S~RIOpM< 1A!`wWrIJ#ߙ~fF4c_K)-lGr0}=89_ջ_qKs4sBӷ\iP:# OI)[iO4p -nf'4nMJrqwiFB\2$jHL`vԛ_|Vk6aq{ s»cv O%Z]K]Grf2q+J_jr`--_" ּޮ*W%[[V53YfQm\FqIqֻJZ?S/6ZӨysecYJQ==T۶qQZc]M:UXHO98XKEa{cGٿ rz \5i=zҮBC'Gxe]j۲sKs:|ҋZ 7A$ -l(9c^eHKUjSZm3<1gَ6D6sƼ\qbvomЩ2#tfqVIG*]ΓNEnW,L3&eqfz tTgRM^BR&W?t՝q|$n:d6N+x{>WwbBYUr~mڎ25FZ'V05t3۱F-6J[9[G ͻ8ޥf]$YTRq~`pz}jP9[rz~f`wˎq=k?anTuyIߟ1N8wkR==?1ӍIT槪gcO6v r=GPG( B*Fd5)iռ_8E u`:ӎ"n k)ʥev*ӵ·HӣmFy>WS*)G:gi+UQm>JIJIxoYZcOh r)U8S+tH[zmXUea@SN(I-vbSNVc{ʻFq{~_q6͡?ؓdZޟ}9Q7k[ѳ0YDzc^$`FF;q]lTi[و6I>\yؠl0%{Vѭyv믧oЩTJY7UNMՐ2b_ 1渪={~G/gxkt:m:IciYRܕx==kϔc-tԌe@fMbb=\o WiJ1qŊYEYXܶuXPsםxDfAb3𪧣z^+'b?--n}jo'\/C ^[m8籬\iHdB ClP^JR-BvǵdTڣE31ЫI$v>bGqƮ2M#8n vq\}3UN7W\l2D#yqq]j]{T[g[ƯSXF.7[)8481^={Wuw)1VN0$rt~bB 9N1ەt:}.~F 叠F:i'wkvjw+)fʍqwtF.KF^⾞dA-o#0C([qS[/BjVF0.pLVӂH,tݻsnخ'9go37,2z?ʱe9tDQ,7#uU'93s"ٶX:A=*2#OЫqy,ZHi V^5ӻ.cɹrf GVu% Ǚt&iO^pɮ]cF򉹊%qYZh#Rԣ;:0]m`s089֮jKzzPXiZ~ ܲyaO\o)EM?2]Ju%۹eyr 7>z~h&NYGfmh6In͘nM8ڢRԌgAΑĪv s3,j(w:v1FRr's0Ou"J(*wњz4iʪmׯ1F܎#)5m ʬ-'ԢԶ&2/%܄OW{O!(~IM% Mpmt[969ZӊK=lC#Fęw'8VNegd۰7NX?J^땎xƜ]<3U9sj-r]*G*I P]8Vp)/Fj]zҔUg)?y߭+V]#Gfr=C޲嵓3{I#b3Li?Z;m nP{wCP߭^ >2;{IG_.Zi~e++ĘXA(jiKѝJ&YO!~Մ5#ИcٛJM SS51*,U@>NǛ[lMc 5VU` 8X9&Ec$v #wFQcd1!h-&caRT{7g7 :,4IoUz GឲueE_!V"U` k:wjabTz{^J:|>+}FTib9]2 8Wh[wuxٷׯ۷S^ZNjciZY1'VfUϿS^\iUz%ϕs85˦KIikȰn@tJU]9Tc++_w?&7cػDZK;WSی;̫qGŠtxY#Ȼv }x;z׃cU>kC)Fi7hyG_vzV]\Hq/F R99$Wv3Fz]T!pvK<:6 A`zq{1XWn(ԕijJ],#[{ǸO,wǠ9q}L4S ST~gu,/T7_;[?{q}9s/R5۪/kDwftFzZu-kJ  LEOԍTb^}!N,c3H[vxSy%J\sFZC?io$_5I%]đ eNvd 3}nsGs5 Do%v̠$<le-{קCꊵHק|@~6y<3ژѡ;xH+G|9j3k=QS< EZ3>TN,MH({N:R]6^QS*[kצsʭ^V/~Y]$i,m|/'qZ˕'5UJ uw۷ORbzڴբk;fx!u*F#˞ÆKFJEw4i$Y}חc泊ym:㍻YO>+Nj|K{{XFwv ?q.(HnHU<=W^kz=ٕ֣G;Ke)G{ٮ+O Nw>*acAGvG}ZPQݛyG 8AFn5fߦzu/>|;ýQעo,cj;Ə1`m fR Rz5g䗟n}̸6hE,[s 7Fk{vk8dZ̶KF\qyJ1׽Ykm ]*5®K|[um_ |7M48fQ x'qv8ckbܭV_r?.γ F:5V}FFˍ?ns##ty*JQnMy8=v|Y—K{#4 3ÏzXhJ1\¼E˪^ZyzUc̵{GM56Mj6o*F[;{xqTlsF0U"Z[0oMiߩxlTi#f/\p:܎^ (;y=_xŏwVvCnpbFBp pHQ\>\K/M?wO/j9SۯS~&;J5Hy _gzln it.w~ҩDҋW-7b bY\aܪ9#=}<&*1z+kw{t=SDe]MGEś1'go;˱qWk8_+洽KĻѲ3lN1;6瓞[ 煖u]?0rshYz Btt6F)g ΎTe(u ;b0xfӾzt qJ岳viGX;Q]7ʼnuf}O1\eD+T/6 \F8e +t1mtI%Yq`WO_J7.dMד=<9?Z`oO1v+p|UlF'/oG'^Ju1٨>f/KcHߊٌ0 Ã)ͳbՅxZ|>vgc>Vg?dUOz.3Pؿ3d/ȊQ񰀪X$x^질#O&Xjq|Kk2=sLu\mNjW:׆e+aqpGbWnox)7$2 U ?q=u}͵+΋&Le']%R;R.fyOO Ic\3nH 9K Ni^̲jG9||hS{~='?\wԝ>gȼR:uF+XG@I^I YX["%L%2,,*!FpOCzASJRz/?3R*М]xo¶+~eUuB*['v41j_mugS,Tw} |m{J[NEDnIn 8wjK^Tj [v^u)/[d-5_x9$Cw ƽ?v>Os:k(Stψ>lm&<0yuv90^@y a,_zTrxW8YHU寙0ӡ_^n]?Rrzn{$ӸFFg=1gB~ζN1D2G;WX?1:ٕ*|F4yon˺>8x/a& so`+ⳌE}:+[.X|( ~UnVe`Zrv+n]=S [3F@$?_c1eϠ*_xD 0Fex?@Qp309G$]~<&Ouq?)eI`~`}߰qZќiZ2z;_j%T<$=fPN!eVI!T q'b]iK[=ӌn/ vwjC]V0Z7cqG{=JkE"C>:RVRR.bF nzJR4[Na.C:[c1˷3Hv8G_&өo6j4TrXd]~AڹRV{d==J(meYLoʣߒ*61\yypS9?efIrRv+ `wm?TUqiWrzv+\Ip\6uڅ8F<_LpEJlfYdR[h rphrxhΪffiLlBp8$ {wyՏFP\"}:H>,Ưn8P `Ǔ\GV)ѧF_Ў+׊6-TjSW]4㏧'^rV/Hٷb qZ2VPfN׿qaHܷ9W_R3a5yY]mF~Y,{saS'A(ӭu'Nkxdsϭt{?rGph+Ik Jr VUL{w}eFsՄiͨt5Ep ?iF1?z7:08G,ij2YnexֶRnĒ3h*Ǩ{tTJNˡTjK;VT[[HYq=A]\Z.Nw kwrCt^=~+y>$ ̫"nڊ|աJ$m(㉌Kqu#)|x%RVOnݫYc#(XL=j~ɻ.8W {w7KQBe{:T^\NWB*~Tu$H_jZƞ5MH͸tL$rȫ(ye~пG^W/Se*ډn +yUJ8[hҮ/lܳ wry8 ۠QQкβ\~%qRlIEaEmRgyٗV{PGNOOcW F_w9VXK#so`)~ssj2,gҰXc|Hor+Z+Vy7di)D|Yd\N8O8JR~F25«m,y<+'(?gnhi=9mTB>W##n3SR5..|E$[,t.b[R43r;EhZ0n1<k *{N6"7s 
)YHnߗWNdʢ܇KkrфNB&Ldbw1!G4Zo;{[.n2Fyqzw5%4G-?6y$v޶^i[Sʟ-۵Fl8+^U3,O3/"Ďc=(L#ZGv`7sq[GITi-+W6J]y=ڵ)G[B*_pd\.kѦtޝDa,_M* q>e]{iYULqF#E߃޵߹N[ue0ɱ֊qs ?VYP'2YVHv'A5eF3jlEv y9W<85J<ɤu8Y>ť L xUQO7.v]*?4?vjc*֟)2+r}QcG:+z?iv_.8S#0IDyyu^O]->g͟~+r7flJ3`wu֐n1ҳ&Tr-2@qZrn$D]O_Nhn_>Y>gwpL:;m̫>ƮO3Z]B~|mϽO,RNdO=A=y%S<4Er5NDta1*irS11>۴7"Ԋݷ#kǽd1ǖscڹt&P̔SiD<÷ ߞJ˖Rexm̯=ªrs\e7QQ"vT,A8#Y֧ )TZHa ֮Irq^ңzgB-~f?3FH :rGXx$|fHniwNNџCӯ*K{t#g^n>Lp߭eMFg{ɫ]Ipֿ-LU¬1ڽI?{zc9XZ1iNt/{@(梷`gcG+[;9LacݵT`u\$%biьgq E%@mArB=:TwQc:}PDL8n}j|ڽ ԩԽ INO=Ԛ65 1f[-֟74:=K'P_'+/,}*\v\EY,7e#c`88i#FrJͱc_2:9cF\3˷UV ,I9v\ʫMJ/ivܿ`:X-SZ1yQ8q}ja|DTw{^K4%UB.PycT9ʉF+|eIsVڛ7Į.036.Xʞ~dZѕMN|]QjW\_O=YܠsZ9[IZ %EY6U~TӞ՜+騥sj"Z5V am<YԿ./)I$jqw s.R\wHGPx\map!)l5:c՚ј ԍ4ceV$1 zbBѹe)ֺ)6e\>*) QJVldᢐllְOݺg>-͍D_io9=O王y}NG4DimbN`>'Ӄ\Q7/7[ucB6'@$2Ei78U'{}+<.x˵HmJiF<IҵZHR%.ݍу5}FGr,ydׯO[*:RTF%F;iDlW+Qw^Ӗ\m $9 oC~5N)v7\T*fEAsӎ쌣Bhn)_ 3:*6uӜR涥FQQ;t(,E'۫nwYʠu#dԎ=k~)%}ldGyoz)JVtN2Gp,Lr ٷ׷n)KT-Jf۴(ۯ"O%Nz3# {;=ͫ֩[Ў'VHׂ5lcUrERQn/$sebm_$1XkTL$̽n0I6Ʌ\2h$TkrƄ\ж8,,cGPx*9YԺv.%l$aI*MJDsZ\v mŴӶ HRG֜ycZR~'re+:nY0oJϚIYE4 66}ss'q2f.Wdě 6U)Wr7h팒/P3t'e(tLuwv$,etn{Vqbaxމ"7;N檤hIo7G-X\ڻ[ qҹgsNIr Jy%‡RFr NzLt[_2vѕr[?{%IC[K{RN\Q~^EX.<9USsC:uJSRhǓO]?hcwvc_Q*Bj%Sեƾ\4[tH AS.I+꟡RJTsRRgQYe%z`:۹F.0ۿA,ḕO8q򁌏֢.<֊.O}ٗ 9yit9i֌9%FXH$'8+hC^ӧR1m Vd>[ =*jwԊҦt"4ao~ф}koKHB#ow`1<ֺ)\jU^!K;IY BTpF:VXWC-َ0gyǭkh.tv#,Fw^9ⴧ)[_~Y.G{t#Ńde*3s׶+H7/iNnvy8R1=R4Ȏ+D͎DU\85~ΞTyw0:9t|r~T)TMxz1lW7k vk֓#>ǵZpӧFPoF]ku#nwcwȭ#.Uɫ Q7)Dc ddu֮9Q^*ѣͺfArF]=*Nj\JI|Ж;9>&N!-1#ڪj)I"RZ(XI @8{Jv9zj||EOYt qq9GJgϯڵjaYZiAgmeg#&ڊ2jؠ_=⺱e_ߑU-(#TrżpIl+ϴL.OF# TyZVkZ]βYϙcO=A9rԄM#);u-J۸2zbeTZ" W*xx&tQæ`Ŝ÷g(ڣI{ۋG,aY<zֳyΔY^-EBb(:gގm/ʝڏ 1$+,[zR|z6n;hY[FC**0<oƹbF!#UUY*您T^ѫ-Q,5EܬPFrOoW,[53W^j*CiG NqשVש֍GʗOĘ jmXFaSJ>]o픽QI'qsMrΤ,X~k&E{NogF,Lu䖸ʼneP̻ G~K9ՖC6+y6[.x<ϣebNB Ykqݮ#ܠ6@ֹHZ3:FZz'uZ)QI=*e'"\̻thzV?JrWs:OvVD)o<3lR=V5".dbI7#ؿxrf9翵M6ܹab#i rE_/er_zswfpά^{tO5U :4FΩQq,n]`3P~> -FP澾{/+)-cQaAۀx DURwS%%Qp[y>}s3l$NJ{N^\dFQ >yǁqXՄ}̪V(@/}yYήt&уsQnԇ*HVU$nyۭi.v4S,*|C=Fr*M)5e)$ ?-a]`s6t(֩R-,k3 +H 8'JR<{O JҿUbZR\ѳ˂c۵lԌZ3mzK剥(&3fS(\3]TkEli.Yf|I3ִ4SjI۱ŧ<$=ԔV9֩o_#> ^\**ӎ'WdcyK[PxI%ϗmfcJl?ZFU6[5ǚW[$џF9(F6e:vssymnddf]ǖUSJ(ROMkT/gc6&CǮH~&|]ԆI(YqwnN? 8ٚ{Xҗ̧ytcv' ) >:iHeomJ*. h"n G1W)Ts:TR k[;kG(6:cߥRiI7ԨPG'k9M237N#51]f0zu9pav<0Gk1$cES)߹rlQc=3WTTn4[Dmt`>Z囒PeʊOoUY}_Oc(˚kl-;ys %!wl^u׹r^qv*2Ji-!lG򞤁 }+Is%dJ4ǞW~[)#wd+J JoT{,@KTd*IY3=}o.>O&Zh՗|y=.icԭ*i]=|Eţ?"c*qNBgصk էȱfCɞ0\92PT}zN ɕu iR^/c>oŤԕ'xEV"*Ls+)^2,;P T/,ȿr=6#ڧ^}_Osz'X#m ̌;NJFu&U4ث|e!q #2q?N7ӧ_e|ɜ*\sV^1,v^fF ?gzըJ嫖eXb4݀ |mrR>O{~6c+Rz?eZ7'ӯ4QuP7Vu$NTv\ԍLB,J }hYf 㞙hsr:*{sSii&FPvS2Nghi't-mdUG^On溩*{0Svf6IUY*{8n̟9-RI9>אsF1w괬i#'ڜcvmhE4YmS'fڥ7.~`x{ j0{ty #/][}io9Q*6yZm5ٴ "nbcv0O/<**9cr+Y2©ihtyعǙ9~*ꥀ\*qsyc_䟵t3V#i`r:ZQrEoȏlgʮ ;"ۙjQM޲m#?ɮ8n)&ˆ`$;t^Jp;I۽\i^~V5e^j#fʤ a,2`J\ݽMg(6h\37ml$0ӌZN3رo4ˉ?݋V*UFVZdev۹zteB#(h"rq}xU%--vn'Uڿ9!@0w5NMQZF?r|,F1ÃB}N*SuQtE;:fx^17] 9q&0nUa#_3TisGjE[pOSת抍^Hؤok4f wzڪQeqzt1GkFHyX:zQ 優^;QffKIv?Tb-MKؙ?3vTl< wy\nsƝ)^tK*rb߿h9$2_"'֤cKeX_=Ƿ_jT7iDi5{n1.cJIG@NNO<s\i3Tw3v䑷*Lg<\ڵ^YAs6Mq;r88?yZ%MFNE!g =N?ZN?ۼ'{r9;<znfѵfl34=9=rBRvyoѷHYɵUx,:gsKٵ aӽַ.V=z3IӚƦUЖQ$! 
%I^9=}Sr\6M@ʠҰzQ^EK݊n+\m]ΔT+6(;-!n%?q?ve^-Åb6mc>ZB,J14u1VX>~~e, $q,uBIvsHW׶Z脣Q^5*, syRv֪N.Vٕ8K~e6e9&XTG w#)JkEv0;YymI# ' ;˟iʜ[1oTܻ%+8i\rNzf.M_vFTwO"`6[*ɸ5g)m~qpz%M.OܜVrwfZӋk:%,yNөdSz%F&]ܤo Vffc^?R>^NJ}M4m+InlӷROzYzJpk46eI4Imn#pd$F[GQF7Rzz^+`e=ؚ⽌c(A${E|͂c#$Ǐγsz .5B\̌pe<)JyvmaD-dEW![n4y7hZ2m9,9)7THF1ݭzl*N4yd9+>o{ʴ_E4*;.ssZSjUetTX $~u3埛fC1-Wv W49UT"ݤsʄvF⾔Kw9n WYBUvǟjRj4Aۤ&% WJR傷ptc[+)W誰]䎼)~#5Y *ѽG֛N1\rᾮ 0Y$v*SNjc7hTӻe{vX*<Jӕ*7ͣ0QU9SɯHGuW/0I;j8trҜwv޶O"kYbEKpIZƤeҧPZ5W^xl.̙3\19j dxݑ<ךR䍮ZSĬ`k0ee/su*JNO1m_# l#KظŹ+t'[^{[ {C-3 s<\Rq_7B W:QZN-e}͛c.<}RRt4m <ҟ.=HtB 1Qi>i%8n퀍/R6 rmr2֧g=w Ż*whqRrr]~&N1hT&b@\Zv%)F:}m$+ʫL9cV)1u "@/H7)sYFQHyw]Vݙ{GPV2i>_#UT;9_n,{QM޾F߄延Ybb ^9׵g*;;?y+#"־t%I7aI;#<%ۣ2oK|[, I1b׽cRsm4Ό%h=__3c$_OH[N]TkFҩ)e {XI H_mE'$Uc)M;6u߇cp=Ý68'Z_mƏ-M>2Zϲ2ab0:{tSue[bN6mhxUlҶZkfUA챦ɼF>Ћ6y±qⷕIFQ.Kt[} GUW/ p:{}ؕ*NJ4M*ՎN1 9-`6.ӝjѓ]_Ab*Nj.=y/cgΏleW*=3u'N4hƏ~m;$w3 FQuҹTJiiJvV{WdEU![N:3ךW2GZGz.osss UIdے6~'VQFrבiN%Cc.Ǧx>Z\$LYKoeWa:RzXѵ*k5-.w{X~Y&.ZEs 0d㿥g<q,W'Nqӷ89s&i5Sr2|{D9Gs#`rztA qL)y0*㇧)\ך9[dYm8UL冇3B9J<9u&C+nc Aw&_#dx&df$t99]tʶuggm2mȊ񡸽Ϙ0?x8?1jaJ5}1e4RE#|Ю9#۷Z(ѝm3r漏#Umq/1ii]"14๻K|hČY7|dt^Zt5߷j<1 VNIR29}# PX0#\>I¤ev3*X{)ks:B=O\cjucݯ4Q%ĬsZXC(>Ƥн#*Ns7iU9)_.Gϙ,jw C5Rmtb(商ᔭ+ͭRT"IiSaK. Ny"kWh]Xח:Cny'ϰx֌c6xУ%%el36 }޹jb%Sݶ1Tپ\c Zx|7v6QF^Lܲ24R,d?xdp? &JգkiJWx#WPFWlԭȍrވܧ9$[ 2W}tvW[кW[$رT/^ϮC|ztb$„4}UTǷ*ׯ[*.Glb+{ 1NO~+Glz&ݬr@Z8Ŝ +'gYΝ7/IW|qZQ܍:f`/-kbTrfp? ׁ\򛏺iJ7Zais<[n$i]7>]zrMc(تݒbAcAx'NݭIy1y+^W{=#2n(p ĬiTbanڥ8{V'/uS<i| rq]|%d~ۛOϑ*u8tS6*ɽY8 ;??7ӷsW}(w;)M'ujyc?0'u$:}[_ν>@? 2'<vӍ7iŜeS}?OZKzllgR6)h~JB2\dg~"}Ny8ӓV^\Kǚ[lTp==?Zqqqiv1*1bL'R}8sktΈ2B.cc v-q|qG _LkJrƕhv)ݤӢ(Ufz޿JrjS^8wV󋿢 "Ȭ7mX1Y7RO5ԙIʢ|ڟ< *Y^"PZ+ѝJJwa$Փ]tW¯^FIe;n?qN1knK5ƅI7anU2I;|ӵ{ZnbjZrcjt> H69*gV*e*:/wd\mQG8+9bJ*U#ɭMK䢯ʿ;~'9_[5wmbk_C;mǖ|yچ!ex?R6WkV>]Tݺ$cˑ|հT\Ny(ͿrҲw0oã%լ]ȲHhyaGiNRivS9E$zo-&}8ȪJD;$+YaEtZo-ຍTc1 㩨9msqj&T-"[8<GZ蚞Ҝ*[ ibإ7N<ҕCodٙcrG?NbW3"DL@̯`9u#W>GM?ፌ@r@GSZO*BjY$bwmL'4#WHW)sӥLj{ yvcgT/I}.O!|_"޾jUeryy}7@[.W9Qr >i>f2{ 20t}cQU)>d|"/oZ|2:IdY{Ozq yHwn>anˎ*RהA-X9ues]3H1Vrh+qb&9JiB3Ï߽O,eG٧$RUry[?e/zrTNqo#m@I&ae(ٻ6zlȓֈͼXɉs?g̣ 8S *V*V۞1R9S5Rhk)I_(!WYB%Iڂ^7M$G"iwo; !rKع5k~g5 v>'-tGL;xc2K`rp9+[-z[?FNF4Vo<dqH^Vi*w-ʔ2G7ˀNFG#[~Iw< eOkCJfr>MqL^ԨHʢK]4ۿ3?iȶr{]n{'_,V65^gq-3_/eۣ.Yݽ/=9| ~^c=.^RQ;jbk$(;i=2m;1k>#*R xwG+d/N:=3\W/}LJi-SF:57(F_UYn::ҕ6ۯF-.k}d/m[֫u4p_D+._sWzuZjɶ'^{}c2ӫB|['~~6kAaj&$1Cbeo_SoXHyd]Q  L8]R[ "<3k[JUeZZOL:WZZ+k$.WI< 9ݞszNhgE´eZv}!>0ݦQE.$U`IN2k m⩷yZo~SZ4yu{huG,5[m5)$H2B߼OZj rtn&5 3F V82mq`񁎹^֯=6_OF<^$f[gG9?}=-:+]Z԰4(SJ5QQ0oi c9PAU0姫vHaOr j_"!?ihۯWӞ)Ge]Zot,I9=ǰnUv]OWRUK}.+*BՈ}gqф4UNczKor_Q羟iN5ⴹ>-4SQ'*qdEKUm;2TK]Ekw 7^tIXvT`瓴z[+SVEJ!NN^o>5ym`E>s\Cv*C27HvJc8ϛZfʩ Y;ծ=gz垫 '#=s|5WnO,=d|o^-ѤH$ ܯmo@5rtek^ -wS-{=~> 5;)+9*'8tڽL>{hZ&]u[]QFJ*7|{yF,|m}u4qOkrQTN9:NWzum< h⹣V 4M;oӝf=Amo3$o02AA#~<4^*QRlm*u:N\~5_sCx[ޞrx~;`+pRBʠmA+鑎%өQI6﷮f_WT{w{?ޖsh? U6>bbM r璽#<")JZ{{[^ϡy4qTk.WZ]ַCԿfh/"i |ecn#,^aE(J%f[|yQ 5m>fnmrCk`݌pA*N=:$v2\&M,x'mṸ^LJb0`q$`v%O~J\4%d0>/ZFakuVͣ۵E ,*Es$һ?*Pp_x_MUmU|. 
=pxש]GﭼG&DAi0R%vF90rG{X\Ri˕>9bIFm>_쫭KTQ{[M^]ci7Im VFl1廇^im;)ӊ廍ӵu_I$*Wʰ3|V@Bq0pToSFKko^#%ַ< |1NY<7 \[|Y !rjI[Z/񘬞:jozEi$ӭ`]3._dTWׯWix,$hQ>t u;h]BX9pnT#{ј`T]&^Vmє;]Ku??O\x6qX*!͸1+ˎ9ყ mSLKq,W('nhhr=`m'38э9j}!ǯ0m,6&͎H_5cEtiGG6{K{@/- lH;?EnarxҕGϺ/x-odi&f Hܜ?˳B5o*p\񞯡߳ρβV8ٿ^c؜F"{8F;]A-\^ӞN-<4lW,'$bȫUs?Y_U\o\_O޽ⱔ;_ uI6$1@Tnœp0:z }:S^:3GKtO q_Y;晦5vNOʧyߧo2V0բ_W_Sϯ7BWڱY6.Gz"o#ԧquEP#Z8sl?\}z {nh.6 UV1\jYs+tmniO^'#J2lg13p:ps?]:qT̨ӓym hb-7͐=Z59cT٥w?fh:Uů<,D}Q^fU,%mΈ׵S Vd³kv$n8_@u>iJ._>VG=vM鴴u|2'~ӌmZzDHVo]{8[5@<0ƜCYFOpg.hor !YQ+iӧ%ttR6WUd\F.)ݕxER3ߡNQ~SFxAE䖷eDxհNN+PuI^\wImF8ֺa1BU-BayrmU³R^V/mivm m@=)J)хhFJ.P^\i'@(ƜK~3pOտАحVF;]U,0H^i7&*QIl[[:Mni9lsbI5(+MAʨF/iGiU-&0xw*wvg2[2;GnzvxJ+WRU#hgjhYJcĜ|nt=8qS?DMWI߯rd]Հmn:ӭi%5M 6xշra\sۂ59{Ֆ-cY'v܌~*Eny3nkY62r?1,N:' [m%k LuYJњKd ," *¬rx##+-SҡRTܡiM.ԉϘF3\s)T#J6kp7B[q3k~TIv2<+ao_PK窰SnsF9Y)h~fDH%U4a{sRo>\in44ٙR+r>SW8/ʢfXe'$eqf\K~jeOq4F22$jؙ#>})4d2nZHa&eY\m_c(TҷRZ:Jb+? Se9zSmI$+_&W z.bvy$(g07S]ɖNޠվ_ZQƜZNml.ɳϦхk#9rII ٷpKrj'9-RvM\dI72cnGM59`ͷk[ΟзN4]J$ab$8=_+5N>pxU<jz%d:\ʢq.kr3iVi]Q?*rX)5t乚KTsדՐJn3 Uqg׵gjnLc-H%VX:K ʕIa7 d՚=̠n0O?kNu#{i:9afQm19KRc_Iū?=DuU9 08eې8-ԙTN-;!Ingo':WU}sSj zGiiR2D]_dnFžf[088#b:R[>`&8aV"7q߭m/7Jqdcnv$pO|Ҝ_nZK:w6.]3|ԟ]8Y bH)Hvҟ2]/m'NvjŤxFY]N)$z rIh]8)o*khhQGR|)Es c<ţ\d =O=޺hS5R3ZZ- y3(ڭԌiMi%5nd9dxl;UYN2=3׷J)WH4cneyi>|?.{rGjikЊ5#Mr[:"6ẉ=p֭=->{Nk6pm\31$9F+O:R42ȷ _XaLt9Zn9av]-$3cht ySlCS擩|btنU7KÚ.̖-T[4#C PB$`sgKRԕ7my-O˵_q:A/i5e8jI;$[v ܌zoucAEu74*T*:=@8$3G--8Ԕ\V= }U[5ɑߩ5ֶ:rI0\$p.Kn_˘r>1YGOujZ_#Qx^cZ7v-{p$,'չG.ލ֝2App篿4+]>KO;FV$/ҵohrFro56ePV*\w]*jn]߶t/5KdUWD+"I!}H??{^G;/2rbzJ||%F5!dQl[d8#'$~'}0jq@ -\K,V= 8sc[nAƔ4oJ6W3O؛'ϑooVB1 '?0\L,HFIs=J}; ,N>^Foֹ w9[L#o۶Ƹ%{lgM{9;| z* s;Mq1Xˮ3냟|,G J2Q+巬?^+8rݣNYKHʷ&C nwzRn Nu=^^YKct\gұm3[*J$vD$l$mj9G#fΥ;ۨVRcn7H"V;P4{=uڏ77D,4χa4_2Μc)r䓘k$*Yqw:q_2cw}m}}Ib4RA*猥c5hy wl#dU3֦nڔxHJӈbܿ)"Z0-S溈:)$WFGܸ8O4bQ/DA$V':+NUN)^穀jb*OV] Էۙ_¥ΤmfK_Rw[w%{REJ0vse63*q׮H-kng){k-]I ʮJぃ\U=kUQ۞uM?iMQ!Xql1/6H9l\vⷧ_U wG_9 Ӟ}w#DwqIV8I8$B甩ʣDF/D"}d1튻KH\қ_9Fyգk+'w+{kg)|5iQ^볱^]J<2Fʰv#s[Gkwثw+ek3;r ҟ,Spj,$pܓ_ڤoU;ys1 x+֪2vVYFTx-6]ʭ+u^,4iWϩiV L8*0}/g K^gniS|Rͱ0(a+OgST&WD}2@jʭ7R:~'bV,$THѶ8ŵkZl-Iw6ٰۀ=zg5WKD9e quZ$6yZְ>l9J<[zp~,{c>dj'ʓ0N4ɑHnW_ڱe9-qnvH3 P?G+{Ynwb 0>ɭ') *{:wVֈG#H&f`98ƻ9nK*=d As.6:{Ti{KTJ*lVdP1MĎy^:WL} i.gd̥{ FY5>acCgַnk^O2ZHmdnuub{KҚKO@,x#b/Nu#H=WB){WRS-}8,cnIgҦM/b#R+k\i*;G&Y \yMٸѨz~/5,|+͢yK7䁒NFOT¤sSFqKƫm8<|jjVQ.[dx;`{T1U^.|Z P0] Hhִ4WQsY%g[Pxr =gu?~W܎M0+[{`vuI$z(Y)h[dNvXI18Jڕnw 8e߂1֞ZlDDiDɕ5 zT%'f)J+/Y]\1f\)aЌT>ҥe)7vQH۲q.HO\UB{-9GԎ(YfUrW=}O 覽g*!2o|,A:(j]nZyK/ (8-o}lDw38`_2 eIl$ⶇ/McǙ+#[P27(vv8~r^R舕<&ےV)Rtoy[3Hkry⎖NEpܘ޽}K=}ƋB^L {$ʄn1Ϸ_=1{HӡkA>$xIM@9-E9i/ΜڤI M74xU01j(I$9YrI;xNYiSTuw3fݼCC "ȸvz/sҹTu#;T}F.3Hc f^~c'ʭUh J\E~#e@-sYJ7snU59[Iryɕ+֢R3(ߏYfIѩӸ8] `m{ ]ѱVCO< T{hokJ5֬f&hV ݰ?zQ2P*\.qa ꬉ=~rmBGְQrW!#[ ?1UU9){NK"k{>Z:Vfʌ'q愹N%wm#3(q*^0kSv1SF>%F}wק1 |#HmuX4DUF\nK A4M`ҡǧ)NƑq([30P ~^vu*5܇ʽBGx&g{E([Wn;w'SWVDytOP ˹.7tbis!kǽA_\~UvӉ8ƴ~UD?QN|QwӪ]Fg;22cWE:Wβ敯9T@)_U`z]2XyF֟%UYx >oF ɥSXmL|8P(ѫ/}Hأy/A1SЈ][7%#U|*PzJoq喧_-K%]3Ib#手dN~e}u5hC =Wj"vhFW)y%|͎s=OܴnɧN2`iVgoGfG[e+MvECZ8W{vi-b.W*;G,cr1zVpj<%ݻ y9ge.Y_ƹ*ط3h~^[,I\5*~MJѤ}xJ%R0jUUXdE-nBQ⨥̥?gFThb- \/;U%vUf/Нz^.O'JqXHms~pc%Fi->-Y?2B]nOa(4Ks,;y;_X$ǘcd*}pY*t_&B=a2ILUenҺYK9jJz0<+v76o-g鎕)IIb˖T/\I'f, NOQzZ2y*6 _=7AY_yGojO2y1ψnXGԏ ?wU:aJөj|cTOǥgfc/gY/2Ui`8tJpHTR4jkvKh򛦂xG!T/;ǮAS۲ISkťe˽S;hz˪VLkIεhC<*o"$mkiJ{Ovw#HD7cU%( R_Mʥ(Rr=CqN1TOIKѯ23#/ʹ ڹN3LD\az.Iቝ)e~˚iI?BZ̺ep*+Md ݽ"1ZVKFe§ F8ާRcBFҭnō~_^J娤"4eK߼g4O YI@\c*2qvZgɇMUOTJJܪ{c8ttV".WemKVPG16=Hd.nYYS7աzcRRVʝk2vYdG/=9CQRSKFehggJ1S{* m3;ax9]jwhR˦Q@[ DJ>^EkN GClԶ#ǹوXnsZYYro#BXIn77jN):+xC-,z15HTe$l .^?rW'9󫏽32)KI$*T7,I$r'QScO?j5Ъv s'*^өSKݨ*iSG{SmQI:K[vmByk.[g 3\9=h33)b:n뎠 #BSC 
Ҩ~ݸ,ˌdqMM}IKV;>uV\-fO޴Q0JQvHbCov~c$bMX·U9W߻vk Y .+C]UV*exo\\6$Igی3̅|8"kTcvsu%vT-nF^cOLS,Qe]\ ݵ$^?:ſR1gO&*C+t ⼌Tz*Թ#ZI#H30SןZT}~"΢<|YVQ_E+$8s5?(,L`v3^w]ȭӍk@'-3C2ۖB nz`֏ޥ3Zu*ys A6Y Ƽe?Sԧz1[MefR2SvrRRVG%I: wŇy5_i'ky9ƚN6wW]=;HP T$I'ڶN6iN:z=ۛ|3qߧjQ|җ7pJt{+jM;{ yrH#Ys+ ,:uѝWi5Hh«$h_ǯ^0\Ko*1T+&ɕ\HzJWܼ=8='[>\g pp};skHSөO@-?-vvaʽHW/pb'^^G]ъa5N\/qs)Fl+ҡN3[ƌc-ԩ=o[s3nGtF<Ҳ0ܪsEȣ1mݷuFCHӏ-]QHHL>0/ z9MhiN1*OLyn6U}x<ڝY6%u 6w(eV\p{g͢-RݎM6>d^f|é[FRo{t8*Ԩ~Zv唗X4g8#׊ڔydy#E({5o,vҽ[3bhhJy,&Uc-} i=pXK6:)seaszF+teN1WZgFU-}zkfq:"<,kЧk+n<j̻irPOQRQ:=7SICz%p]ğ_`+>v2gʿ7?=GT5:%/]RTg8#;kH1Vhk#mەuNOֶPeNS]Exo&wcUJkj-95>qѩ53v&j[܍/.zw9RzhH4Znc9Aoq?.qZY?uTR;~>vב }w-=I߅jZH֜J|뱟Z^/-đz 郁I]=RRq,voo`;7 x'=?jTem5F}E+XGۈnzvJ)|)Qp4~nTn7yryc9nsӑRM<%$[\.ͼ(tm]gbs=yNU=j!K bL+#( gr=}k /+]#.oeG1@?g}OBE^mWE@RFу>KN.?uڋ++1 Bz5h(hlRyZ:6SI(mvܪJ0\F=JڦQbrA < 줔%K$+%%7`m^~ZR.:_}#6cA#1P~@m.gʋRN$7snߠ8K)Jڦ|[$Cmzz"5%nzNy`Q YZqe&5e#۠>1SfԲ E-G8A{kRq㌽x;[kX?k}33ˍY xZ] Uh}5&{^LsYGc*9eRJ5w0ui$V[rqzg9t7rOMRFflogGOoƱTKƊI'ȶq=+$1ȳ4~E;vaI=~cLթIhvؙcXA H׽cˢ0Mrt,lt?_~ay{FNWeųtAIP߻$s ?/iSOj~YoJSɣF\pc10Dd53mw{>NRReaefxҦQ%\.k50-[1frҭYuHt``Z݊bfF>9RzSܶ^]*rW"Vyd7.1=IQ ;:Xx*qzv%<˹Ts>ԥ$eImiX`H(@FsUg*)Y էy sULH܁߯gReS5..by5HX/#\$91ԧRVcE)ecjqTu>el,#eF\sϯ=]:rgޗv,Rd]BяVN/2߁gmq]ѧޱ4qQv}@wXLCFm z~t-eF_ qs$b82wu<zNohR-DO;ЂA߭rԌc'tsγYy!d̻汿4oHCjeVX[qY)Y7[+Q61ْLÕfQ;;e-B"# 瑌WD/=P}̨yW;~cFҶPբ}LhiT,,C!DyzJ6V*a)%<ݙ8kh|A$Ec`UGFZt}3z˸{zktU:tћ{Ǝ&>c?ZǚSVhًhV]EgnqT.dsJ\fp2)j:;?}$ɥA }lyvzg'U$MN# c}CH薠䍣gǕQr4W*,lP1-g<}oӱSJ6]ݬrPr{mʟ'Wk<a=;}kzj5&gN:4ePuiK|$9 ; tۧsniu< ӛo)H*nàTY ƥ5h.j]8==}sZG7sYQ©neIٙ-$OrIXZOoס=$4+1?g*~-G~EVHۓ}U[|~w5ͺʕuMr5<ĻH"sd8pZ3G=zCu9iaIK<sZ5?wAɮZ]IED2ۤfׁ>W<҂.; !tCێB <g I/{VXm(e ƞzҲG+f'?βR cͭF4?.UxqYJciJN> ȬVݎ9g*.O@({؜^6I݆,Br/oO%ȕwoe9ld9a}Ԓ(.fʷvfcZrW:#in3ėo*ҝ߻G8'>xXmN?ĺfv,rwr9TKCZIu<ėvbYNlasy}CKEENTZxXVe%{cscNT[N 1<s"CK0o$M5Vo'8oҽ(^d5)\łWdMDa(|zZK2Vniɱʤ]V(c _Zqre#i'S.7a}SA*rSz~^LtqܦEf_ƼQ=9))A~Gfq4=k;i 7nk\y69MJ|vZI#POk=OcFtZRİ@"IqԍEW}yԗ΂PE[wS\{FB\=]MTK"9_-={z1)RΕ5ogн ,4zfEFIl67 0P>.N[-cKs`y1]u!SJ_=Zn8tAky c/, j=ӾĬts$Ue۶6cQZb{XNlޫLVqa47|'q^H}Y:G[tͽ">eߌ |ά[8WU9}͝ߞf-&9GNصsNYT^:{Wlg:prf!SU,2q9Y;H=t <֎|ژ- 'q\3JX&}} s{SGs-G$LORVw3t;.k,1\)rGGBhA/& =ZS-^tt+̹O"V c޺#zf莮}'Ѐ^ZꦦٵisHͽ3d3.zh۹iʤwXf//+_5/v mIU7.{ ~N?JƥDk2+7C/pI9# EY:Qk}\ym2}qtʛM+RUNX<Ɖ_F_g>ZYIwFz_ny.|ƐydlqcڪksGJ:i])`-Z=q̷pyYGeJӞԮoKO-e\Ϸ<RRUߛegmUIwOQ5#mΊ4\IsWf&{-q0$gQU8ʋSYiJ8m;ѿu,^VmܜIY\Vbwo |$Iu $3I Bg`7<pvyڤccYjxDռ1i3F8FIw2w[?-8+iaiѕsOѧ7s<I}sA(pʤjio3zI yYQ8FjyNzs⳧MFƝh6ھ/ǚKCܹ35i%^ځlxv1_9e[m=|_>{>Yl֚_s>|F'SV de3x>/eiJ߯Sl<#[M?:tG}lIk9R^ѶnXO GRcsJ.KO}U6>XbIZRwZDm ѹPW=IJ̏BrfH q0=7+f[傋{¶ϵ;PX$d? YJVL7WPyq9r1&QB GY۹nF|Ƌ{CqkUs<.=4R-y}j\˯rǔlH뒥Nl1ִYYffG6튪Ǡ9?֎ekJ/LX(U3G2YdڼmjxԊahcÞ `..'Oq:`%oaS-oU͚'mvWjQ)hh|E$b92zjG˕mͻFv6Nҳ)F\,,dUuߊqRrَIGfM$ʰ9m>TĩLʪbp?Z^oX|c]n@F~\s8ӔL5[VՙKN|rD}ش46,C|XrN<9SG,N˹#gmk-䑉#ONZ1VKf}_~jPĒ~Ca++o/3v3ӯpb)Nխ~AJ (659dFݻoVݎ DcR-OɍI\8{EM}{XZ2˵X s׎:Vp(W$V(ҍ$-tk١⦱vWmFfUVe.U9q\r:<|mjU^n̫Q]owV' G?޼/iQRyO9FSgO ^]xq$rne8\\dlF[N:;#W]֗+zZ3[2H?13#=Ӟp1޸%NwZ?iZcy)%ITVrqҦ>X: -=o9{O,]4jgtnTRXa/>"F.roVܻQ?ߝ$rtʜ=ny#4eʩ˧Ny>cqo)uSB\)c;ּxӍ6Θգ/P\qgC(Vp~2Ԯ VoQ52-f#99b:|S9^鵽y\[. 
$G\>c;S8>oѴkTa#$F|>#,ۣIơrQ@$S?}GMlz뮯luR-_OW4F2mFSuzmʼo$v{J^֢Vor>%մ:Tn$2rO`z_z'zq+L4j޷Lrg^B6.mW `c4}UrJ˖{b X9bog[Yݽ~GӴGHF..'zskϗS˭[[]k s+{rot4&MӋXufw%A8![gJJ.uyTͽ6./7kk~=#ٴ>,C{$klƠ+$xuYԂgweJi=v{_$?n-Pxq*rp a LVkEv}NF3ձ)Ʀ[G{u4O&E|=DLeV1S 81һ&tyzgF<m{Ӯ?Lwš嶽Djg]k[-69r6+<GR)sʳxii y~G6͈,eTyyc|qYbdowh.,LX 9!~k<Z2I;o W1$zԮvͽmf+g2VIn2ªnhkGUmVյk |AtQç8"^fk鯙FFG22Wa !/kQNǭ߶D:ϋHck+3)EG q\Q.hn}V_өEJ:7ӷ2ƺQ*ߺ9r3ykiQ9+}V8({8ʼG|;i%ޓjnlc@]'Sv}>nc*3.yvTѾ+mྐྵLdW(ˍ)Uemtv}n;w23ޕa4]fַVQiktI[}^!Cm`a`W;q*:wiwUѯ>qpZItWu<:\x?Q"F1j@J B'5^#V{x]\zF;]KY]:H$MᏓ-oy,MzFHg٣VQyrTqOSڦVʟP-)gA5$S`vD'qf٥elnG/Mo0ikI&XX$ 7!Gn&y~[SK6+ihnosz6k]u[i؉*1'cJm(jTfռc^'kNOAjTQpXW{ߋ~)VKvǕp]1׌\rӍϭps.٭bάN,cuP9B'9]\*5Ia5f NSvSֶܬӻr3ZK4-b3y;v Knm'r˚_#qy1;YX(IW;O:w1j)O/JZGM{;-$ &$mXCcvmF޷#'P%ߛЫe9Ҍe;P҅Kյg|3dKmB=qخ98r'cN.0^xNtu2e.U}˅` 0!:ㅌ]mwWM{ MVO>_ Α«d`t?/qGʶ{z媎#_KشmB,pN㹿>c+ cs%9y={AxG[ gXó*p=?*N.7S%JjWzskڇ%d,v2yq铣O=ܴuמ'մ,Is#*F~[ ztZs%oԽirQXk1њrѨWn-㵚}ϸkd kGWIإW׳1ν3Ԓ}=kSpGe*qIkm( qȪ$Zq=IcoJN1*G\bT+_Cbm1lڸ+ׂ9(֍I)QL<ѬhsjsRJ31ˑ0Z>}6:V۩j9& ^3ǯ8?cH+6*IٽHlDȚ9/_o>ԪU-b'cm%Ɗ-NBq*oÙ19hdO#ɅS ʒGuD#'}sE|$j֚g/|ɦQm)3Fr啭^+2&[~~~~U  c~c#k)TGIJ?g{:= b< ;cM VmJ>KVSjFۖI3$( +ԣ R+E=F4&FVtPSڦ1/ǚ QڅuhhcUbdӍHKEuᇩS~K^xoGjFH[GG׮BַcnZ9{?;~-?qS^u wHG!t_0ʛ>CFUeeoWGɀ$y $`pGҕ:2;۱كU#s4wN.rŇwS@r)@dLRٛ֯E[- ;~V_ڔFRF:|7V Q!m6BT68__a[FNQ c۔}9kДg%Zubp"C#$J߳mlzQSo_%ջb}8^g:=2ew&A6uL4݌NdnIDk33 UQFRN6va RM[,A~1$X*cFP٫VKh2%_Nb{AJ>[t*Ҋ۽e$Cŗ c7ZG,}hsշxvM7U%U\*qX!Z顗h]@烏UF3L{R+^ai6Ƨr`pFi~= 3%nbXO ;r֭kN:5U;3X (ש=SyjݑK|nQ6 _P0l涍8ԕgD{_v[[OrIZ ӳu|< rZ`_琴~K.A=rI2kF4ۨv10߻Vo}'AεeӭYԓ DN OE FzUOuJNu\*Ff^ V\NfL$Uz2HJ脪>Zu/wsO{{xW,<_¹:JkK*nt5,b\\bX6q_+I^TNysP_ܬm*^< sO<}X=e:5#sOFvPn2ˑYTTwV糍j[}ˎuUXWޝ:WHթR5ib2Fܮ`spp?![C[[ywv,ZF]J2*(;sSRVlrSM6;ddYv+F϶5pzֹ^2.а̓>_}y#K޴͹,|BUb[듑y4gR7VemVUOvҧS=K\r,&(9'qζN3XM^M#xGYQv}NiV]iVE$Z^y(F\˘TEgUX]H[t_B+~eyBۖcP5 LídΧNrČ awckr2\R)dT[XۈYԬx8?ZujJrI)GݽVd2FOj)QJ:Ìcf$cNG vBe7MiyYWle`n'9tk?gro~ޤWv!2;a#OkjjPXwRJԆ)M' Hۀ) ץmRS|DJ՚/-Wr`tǧ8]1J]LDPAU]ű \N2lwqK24 Nryiz}Ua50,ƨh;͸(ǒI񎵯,c.U֣ qzL#XV}ج rZ֟73\!<Wa$%xJklHnf4/<ɈqڬFսd}lX*tcč=e,U8=ǿ=릔aљ^I{H;?ӭieK9nlTA㰭(Fw9};>vrP߁\eL45YIs#ݺE?W,+Ny{S^E͞}Os#H,uoC$2(=Je;'**mlP%Kʓulr=A9GVj_KEGV|,O??gi5/m4IJm[l1~VRgQ;1M-m|oL-jrOMG1mvyj6U$9@ qU%+ISN*VPmx>=}@%ZJU+[-4Cq}fH'22JqȫpqQTԼchW̳,sgN}M״浯qD˕Ą6~mVozqwKuԮn$*6zw8淔c=ΈJZh ƽY㯷^45j9Tk$I?2]ѷʫUR~kNNTc-_vClGۍ?δQlQi|6Dy2RԪIi$sԕJu-m4h1x\YUWǚ[!ңp}HHiRc'^Q&t9ԩE$ {%ۈ㏣o$>:rQ}84*%"3F+Vъm$M(oNPԉqjI.vQr9a%*c9<ڰ.5;iԫwiH< #/CsǡSUhb[(|%wmn,cjQS?WAH29J5%ͩ~.c>iRy9]y+FΈ^z34+ mrKG7݁?ÏV2JQ+-~V\}9$y͂I>c)rG|O͙ó,@?7jcNqҲDF?>PDc߭/c)OBiч$7Bsm߻P;N7r~EqqwM0e-sN^*rRve{[ym #Fm'ةw42E} g3C]H` zrM8ԩJPQ;|ֻL4;d22px91m*a]S_OTU<+ n[:jQnɤh ;F8qDe$,U:j<՗$3H[I#RLp99jRqѮӧ-䯦V Jfsp}3OeTcKݗh@˄ F0yW?`F4~c`]com=XyVzsF%W Ƿmvhey[2 *7lߨ>+5,cVzGLf֕,JwpȢiNvq8rxyFW+$c奞7ɷQSߕi.XK\J:NKo;[et>e 2O=FyD9c[lɌiմb|ɢH yp'޲9KRJmfiw=F};:ҲM:r%ZXY6\$ j޽~W+~֫M=5A۹{wRKn F#{R9fw.6ZϙHs{9t+Cq (r!ݴ߱nH_Iz_x 09?.;fJQRQ-["ͤmYGf(>NF}år8C:NӲM['t&I#&8z5IE[DgQJC`dslR%qS8oQ'McSSZw'HbQ%ϭs{GϪ**"T+g#2QhC߿AWRRRB*TQQ^@ogp##2=zR<=,EJ#kjsy,> zg\jE+#j5~bcl) %v'֥DRMVt>7% (R0I)I#'m-SvvџooVgZZرc]FVi$n'N뎂ӛIƴNZ$q$S)+6-N>D\.:$-Z L Jo$tFS_}`9YTg'FPTia&c;^11]TyyRr9F2mBլ-y3.Z?$le*gb61?g1q4gq՞"Eܚ8TW5]zӏ,yS,QLaa+ w_bq=H8'4,/XvZVRǯ\8aRu!xcPEImѼ {ϽL-zir<-A> J*~0:wzeU{C-=Rl [lVe>G;oNr{ki>hyy:)#ۤڪ\=r궹ʌܞ`)0mv}ATt; 7kYuN"YPpsڡNNjҒRtVdk|O2s q<kJ%nGrPn.i׊nV}<͍̥rq}^.Z0ndv=/ ègRgyk׌.YRB&i89?MH:HR]1 hʫ8 q\j[N&b ݇veG$z֒ZF4gA)& #iv!r  =+Ԅ`Zt4WJԓVlLH9fXJ<1ƥIKGjmߘ$iǔdoݮ摡ګ@9+F{GR*wKWd_hbn}Tc~t?[,7OR[f"I 3mM3cjZ8xWZ;$kkMFsu>'/c.g:r vt)'k7 9'%G} 54H* 'j%B)&JY]c`3s}BJ0QYGFmh`y,p0z`$z4c(M}(S5(iRżP"?mZ\mt).*4j*r'J}x)/#^1=-lzu^0UdXJ#zCاg沥/ cbwn`G,>=sKFQGk\ض,|Z~jnθ#ԿTF{Es$І@J;3=x?w(ǚ:$r4W(YWv#ƪqܺԮMNGۯ# 
\V$h`x#`1V.ZJtܘ!*ш[aI}oAJQMÛϱ{Il1#V5<@vǮ)Ea7tי"vYj־f.2 5XԲIff2)?/g(gKսt6P?xt0ks&ΪPu#m7RMqͼG?ɌdgOϧt:*m6e[T~q uhf"96#Oٸr*vM H 1w?-:Vr(J-m*Ao%Ucg u1q 6*8u9ITq[lo74lȰcG=z`ѥRc)ŸCj!T)պw+Ќ\znRvM4-2l+X9l&w޶*rKb|y8g/xF.mTcKmO-ȱ pnWrVs/f-u3*}z^jc%b=hzIψ`_5ݙE.k\nQ<6DyT4]=h\+@?(=?Q* ˱f/3 ydkM_bK;nX3cGJQz~_2HaIcgYYs'sUro$AoIV_=D_WWӜ]cit9=s=+HKCNjnױ]H`EWry8S8Z1RmY6 * 9*vCZ<<3rmӪ|֞1MUYY2I,Dk g*Z 7h%E\g(t4)λpZ"߼j`Nz{M#k5};Q2! Nx#=*9=RϴP:ߛOOJU"^қuG.$Mלd׮9ˑ^&'&;yΓKq *~˱>zsow w(9=QҰ}H;Vi,r?U>iME*oN K^G͹7{ϯ_\)F2q[ub݇&*ȭ=?JeGmŞ`͖Ǚݳ}eqnD0^Enҕ(3;5)EvBWVӹa#+nY' ~'mZ樔6LGw]^ueSI#s󊦯RztҶ>.w|Iqn 4% =z,%҈xTކ 7'{[v2oaqIXx][$ dgaW+F*SnY8kIFYQGʺ9ki\pogeWP2p}H=+ms)NTؑ5]c;':Z_B9rԌ]0ME_4ylؤtuua{r{Q4Rf}8UU5rQ8Xn-u ~:X:Z>XfҪ=7r^՟fSs2ygjݵ-jV# krF:{Q,b'm-髾2d*䪷QJP[-i֬׌Q$ 0ہZQ4e*rr9ϗ~^@znTl,,ՙ]'YO>V-` '߃tc+CFxf{(3Bcrq+:֔oo빜QEyϋf-#̫6g'9:ROuuSA2+I̋"!iw >yPFr+Wx-%V0gbm)n5l.^+[ԦӔ4֪5HKNJRQ r]P:[FJ4c5J0^Օ+UG`͒GL(ߛ`'՝ihѱki5=M:X0/s\u*ס#&ޙ pFͰEIIi:biGhיcnTy>dhyk/qT.K|j5h9woG[.SkۧJ2fO*b+d$c)E9ib'Sk.4UT0!K`n#=;w:CۧT4SMk2:ᕕ}Eh)Y]%ܳ+%oLk,{^Uv0q_Iө'ugmzX:4Eem5ٷ8s˷2H#Qr _K.m_vQ}_4NO0V̍8<䃃5z%wL6PVJom-w:m&KmBH* xte]n.wq^9}5E s^ni3drH뚺~kcZxԤ}g]̒G==?oKOU q_ܫ{AxF#TnnFt?ʒr+"N8 q*eR|ʕo6E}yq4$7IV8$)=3޹o>jђfdȓ3H`+ cys)ZI鮝t5Y60dEc cr9Ԋ^5HSm$: kF).Yb/id;sOz8_CZc/'(:.dVU#36WrIGׯXvlދC(S4~g "XYYfzm6߅:2OO#ψ\i,4i7ʘr@x=l>i,>Yu*m:#kx?4USqIOUں(ңSV^]zڞ֊eͯ?e1DՔQ*}9Y1wbm}΢mYO'֯6TsRQuZ>u1\no³R低9R3Zyn̻P# >]IYVH_Cy1H-F_ɸ8zO֦NUdZR V0jVk{+_; )FA>S޹2c.Fg#%UROǟ,,5k9$f o.Dlay?2A)]F{7R2v^Kɿϛx洊[tnƄ8qABztKU>GF; QO^Uӯoɮ#ݏ8 Jr7L-[ |V[L*0'q{[ޟkw0y](S9=?fjԔ#Kf}Hz~-䱻)##=y,cEbI]]Y+_^ju쉧Yc?OƾwQQqѣ{I[vv%q{O}$rҼx*jjM}RSϾybPtXž^^aFJ6Q}|i+Vu*iwZQ#D>#*߇j\h{]Ϣxya{t6/xKkx|LNlxוXS/t߾oQ{ŸIyLȩ0f93Tqiu>_0VpliT?0Pׯ2H\JGz01湾:^m/D鼕 0$Һԧy=.4g;V%ǀtK,{#ݜ9tN*69%7.ȳx3N0}&͹8Sϧju'.k?c_m}u/ZGʡ}c I\d1S*3Wn֪ԧtion"0wM/[/=NY;'''w>;9-9oqnm 0*cbǢۿw5ԤiEn쯧vkoi7~Z{o޶k}894;A qȵljԅ;KNIxNyk]y|D6>pIO*.38CJ:qZּw|դs lFB7u|/5W>=ƻrNڜ9o'- dyzr]5()Y$e,-iJ5xW/vp,!o2GU#O+f (5%gJ'y|Βn!b7ڥze+[˄؄6j$wu_%"T|mx\7^+;zڝ,;[te*nG +ڍ:2%fPܓMuaxBkyd,u´cn3uS pe}[<`dzfó-g,TGA98>7e^wnl,0NNK{sQAī&]Aqg|:2QE}2R}?_->tk5˝>(hs|9݁ۡ{4E;n+_S(^cq?ZeofnC+!|F9 ͘ӕLTmUڧ3s+'z[{(=ΖKqo G+Wc( NY{R+{oku/+xyF"|;}ti;~q%rB@{9EZ50ծݓ[ S N^nߦ׷bɢy>X!UR2+_?S*Ҝ+-c*wmkok2Z&1ݯ1WSVSJ\'m>#[gcS*z5Eɴ9WAZF-:M1Q><ܼ˻:\g2Q/+%Bç^Ғa|~,2VEr)\g8>aq.+=-M+[yQ5?#Ml 1͸ ۇΤUY%G4=LO:Y=zm.}]}ɴW/Ww`Au#(Fjno+SI;$nox\ZZ\̲,r$61A'xf<|%<7*Yc5/4O$ H̬N#p努cF=S[ݢed5kاuNt\z/uGdZLhҮVY6Cq/8P$7GY˙5mwK )Ts{]w>wuN 5;-HCTqSw}\(wiq&3*1JWKgêh:w0PXQ$n;q&wcq\Ec*UӱKbnUJ imz+ʧRR&T̞W? D"d월 UZ-nNjR"ZESo<1{7z0 F y~YRV\QKSʭGhߓ1;fNy9 ޼lfo_z<3>.:̱MxO22ۀ8l8yk{F}KGmm3>+kP$v`fpqG>溨˞2wVaǙͻIp;,=zgQrM{yf3VnuSE߉#On-|Oo;`eS7dzqSUDvinﮛ[q}YM4mc_~[MOdyߊSM +5GHe`I*C2H93UPi'VWkYelF3mk]~wg7X]aG޹eVym$3y#>1u=O3js·M?û \I0Okx <\bꘋAEnqӯF{=utxf#]\ׯ>N:5'R uG>:u|is~Z\*匆݆ 09+RN%~&#术g׉gp&yR60Uzt=sߎiSW116_n%a,}(ZN3rl#N1:wWTYa+8e{r8ވS⵹GЗ5EVƝ3܂ I$O>8|yfmάtљVjSkh[}۫yzkI{$޾K5+W-@~f~[Sr{ZRpW˹*:sI.v䍽 "V\ͻGDQ<[m H1ߞ=T9Uo9凅I6!LqdnYqnrs JvcWqk帀?nn9swrTS$RJױʋy0=8 *q\_2\#oՙV]6wڢ{u8#8'cLi&y6YcPIG|;uEk):Z=//g䴄뎠ןZ7toRXw$m + M$ɲifw3^}J_yKK#vyVe#ץrۖ-AR[]KVLM7~H`Fp?\)YIqʶ*UԲ.#W)P<Ϙdd RgB2_~ ;1fV=$q׎VcIKns֔FʲۀN\zzQ'NL`$*63@}fՖg% 1Uv.Z{%QT鰋824c*{9Le;v8ܒ+YF'HoDIOe*nkqɖbA돮O*'Ncf#V[h\ FێҵQQuf셹ko, m>湜7(4hSr|ɛ摸^}TZ1 8Xi&u9=Fӄs+y0V`^I>ݺЩOU$=ĒH]D|qqwŨZFt)Iޖ؆dY#3,> Nֺ#/y%/S斄1I9%Pmlpq[7k+kt:G4L>jmqKa\ Ao0Z6±!rz~tFV}<:BnnzѕE?tzdki)U x2\6.{Jڜ}v7 *. 
{[w]s؊TORк{Ċᷛu`<uS\иq`Ls]MOFSa4dżM,vFTyZ؛;pN~JRlgTؼ6w\҇;*QZB rs[zUk//^oby ),pA֟SswNrѱƲIcR؟L¶Ru&1kN/"$@V`-HvxJkB/mQKk,sG%cWR1lgV[y,Ea;99Tn(U ?y!1N?:uSVؙb嶩mǦOW.SZ 7,d*7hqV7 =7Zv;}AE;pxz\e(ӏ+jƥ²kuyfW?7.V)ӣ%s[O{I*CnT8A^?Q8y6f+xDLݵF{+?y߱URF^fh.w0duU $?mOUmL^D[ۙF:玾B?hn2Z&1j,ϰb2ON R.RZը%+|yw?#kmI8)|,,(ǹ 5'ZL(mN5+Hi= ۾<񎞵8ݙJ^1ihʌ"Xe㞼jsT+6[Gw+RQM8r۫eNwJ̻p;y#&h-Qd7{[R슗.ihGv pZ6$9U֦ϖRN1qeY_Rzrguw$[g/ƣf/lĎSȏ{C&B]qɥ?y(^jN*]Hry**2yB4"L(єd~+xVGd21ӊʬ~~Y?y0Vmq;`JJs^*8*gaXN:"1z>j};. Ȱ\O<[$ ?y. 1ttZEk+myH߻ vzs1lՌKzXc,0Fo0^{T$քмo+Lf `_rk~T[$#.?u:~=ZӧJ_X#V5-FH"PZC#qNx5-̽˖{"Y!6眂sf)u: UvMgϖÙ(2?-2[$9קzrE­Rb6ҒiX/@y9V,u~BSgd-<*IQuZd/j'  5K:O81$"cQ^ ^I'9;aˣLo]KQ*4[~\3g=Ui-srВO@[W>_89ٶ*pߡ1!9mnYwaz֟+΅E;y/t0RRsqpHu0yUm̪dnm\32Ulp{cp)r˘֝))igݛEVI{Ȥ>F:{=z_+b*#f$⦤ZMI-;&H&@H_r\ǹF7:ݣiV}nYQר4ͻtQ:w& n[f |:nc9rSI%cNt B4q*w84RRLuY"OhjureIG}d=ϔg9U_nU:2mZ7-USq +a3|AGORh1N[t/#e)2cfF$w}wkTpZ~eU,$X >{vҦV7(8otmwu"<1ɐߏ jwFЕ(tLLxcOgOA40|MyLfWiN~^hRno5ח}i"6Eۻ=O5R-6(ӧ+>>sS 1uOw,|`J9ʒs:WO+COgZW]@ {U[Q H^2I^+cNT|w/o?* 68,T'{T+hꔨӋK_N]ܶgiCʋ1'jSvũԕwԏQXA6?O]t=9b(uUE~q>ZFx?xxFp9iJq-{hWYf?28#CXkuV$ߗ5 yrn3&9y瞞4U%&BkkV5?!e'#PA+)QOH-؈m̭3h=\CgO}4k?43/" 0c)Ԋ|eY%muF3s;[ryW?r|ۂ8D`orʹ'i)s[8UF3=-߯)Keo1TI#y~Xc#y1XrɦrKG$2Hn2d$<կ,eNR2n*NÖB$F$#㏩X<,e)]_Fۗ+n?>Glw5^0иR S/V{g|mw'g+7.U}bJ\&yv--<sq\*mr˗o5`[y+n7;}* otp׌j{oQ$I!Lzzk {YiQK*%҃33>P[$0}ku,s<9=$#3qר?ܫoȊYSW~_ e9xΣME"l,0wݎI\p1O9s-G)ӨdS2Tmg 9㡮Q/*FSi_r-TxՏ݅[ ں5%TQhIaDqm˂Psi5)F1+6}4we6ezלVteb%3Gd8# F;tF<;nRg֏vߗ$ ԎS]N)抷(Y&mW's]SZ9*މB$e 0\7rxE>ni[B#Ri- 2ʱ&'$chU-kM,A`K3AZf:u4AXR^B1Srzꉦ֦ӳe{xFP~8* U <&W;֕9A_ߩFicAn[Z2{{V77k%(oѕfd +\BqwoMc485IIf]JUFM!)n8קSTCMn M#m¯3g'AFNGUl(.ݣf!7|+u:՟Nr>oq L|.quZ(ө#-#+̫>?ʣR褴OOSIm;v-a_ָe/Nz_Z},G*v3mP0>:|]'Y(jvb 7gЎzUs(+S BSz5dPv`OҊϖHIԍ`8ٕ.rOK&VU6Z?/ˇd#@Ad=R*K݄[vqU:6/NQZo~x14إ5vkB?&Aw)\g9#"Zë$K,2)mú1\OsGr8G0G(UMm߶8<<%8ߢ4tR5+5}>RѼqXU0 2NzgWJ>F9S]Wj+pd?»Ww8s*|eI(p?Z:k_N{h[G{rvۀ~c9hƚn&fY,c ܠ㿡iʹTrZİD%eUN=[S*:BF}F,#OW(ӕ^Ti*4VԶdw{$aICi iQ8CfXtqC|[89GXC_c:ҫJ\̞%JH3FG1Y:̜:1\ M9Nz=z~=I!H^H s lGZ&[_ 0zH>mʼ;]sϵ/SU+T,@۟aS)JQ厉t"%Y=onű~-ULTF;W+ R8jQ^^ Q]ˌV\*IlxynpHx?s֯o$T~ъ,/o,2nQv@;NqΟU=R W*#gEg}ʑ t~J 4\k_2Ku>K[x1\9bbI.N1b5ʻlrP0`1YӜgWa?sZK,$ӫJ&1ٌ ;sZ;~dݿDh@u9jFN;2{Kb֠:FB6Q$w힝 ԰FOhQD+t/A%܍rHJ:I7v:,z+Ƚfov#E9VbNr:{(^ϖަtc+/Ge>UHLAA8WO+)SYUleXҧFS:1;{4m S]Ќ՞%u[OԽceEl=Ĩ`T ɯJ>uF2k}慜>lw-3u+6g}?g?SNy$ qq=}umRcN$F ܿ:{IhεJQ̝v [2|W8Sc\sW^]7-HI8i$,rTuNU Yw4mU\o8`(=ZFΉAY]۱%ܻAv'Vӽ1^Z'uנarM l#?5Oĩ4jGqHE2rv$cҦ1I.E{$@w7~+Jqq%U}.A*N#b+nnVT^5My\vcrXڿxqxԩv.*(-:>/"-ewYAr?*ڑRVsb[I#([믖a܎6SR|ϥTхvos<j49"[}n,rGZHir@3ҹ9WOm'U t|RI`ϨK>S_϶Ip@h<ׯaާ?h rh4B 1]ݔ;Np:]~YN4kW߼Zy5(W}-RqS濦;h dbFa#ⶳ<,(JSm8%i#}˸ rAUB2V,eq& c^ilU싺 r&-HJ5 >}8ѕ+-_RΣ%4p..gv1{NXrӍǜӪsbnŽy'#^iՌ-wa[ڮEwh${ q]v=H啣wq W.#ty\}Fq0P啝a+s%)+/ާGKaȥnVYYI҉Qζ9۩BNiv:3i[^^t$wT< Js{7cE{pzWM:q+ԟgUͣ +~_G<q|(c'5iLՍbK;&Wq_2NsS[ӌi%;˖еO$mۗQ~ߝu*|5h[Lʛ$8RN{qzu{Q4am/ww<JJڔ2CᶹCήw7a8xn0$jyL)ӝ;-jho s?ʧF.`gO2&U1qY;~/uB叝L4Ji$_gx2> ~4ܹv_0w/An,ʪ\I6wS01甴j§SIiKmnc+6vN'yگݮzt( $ g9=}NV4ZWN#u2 ,'G?Df.k/ᗓSj7uͽjF~I1Te99Q8*f\֬ezY$7tuSgZ4D(J=1-U#{#2%xI{4bnqUF^ӯu2}i7U-zVy.ld *Jyj9)k)ƥ]L ]@n4G5řZJc:#W#/[w -y\v׃׮9TG=J\f!qgfnXֹw5d O*8U[ZtV̪Rb"H9y<`v\#ӏq{HE8Ԏ$,CvƸ㿵sNZ-Td՗bv7/ yAVr)n_%9;:c \u#io4m(bM$S?Ł>d֚<&]Y*Bv0܎y~I[Tw^c slsҶHt=jzkе$L߿·n9޴r9cBj=Nlθ4q5^C=[&y FFw:)TRJ+"iWGc<.>?kh(NDnIK Yyq6qrQvTr(ԕtg_j"6A =}F3J_msKMO|gdgjX6yݒ@5FUc~3*qGfوvYX(aϮy87yfG!y[YXЯA ^Nxԕ6SbB)hL+Gz~\ʊrES\%!udOֳZ%k?7-یJ7#JF Kycelzk{sƜI,%1)olszZF>Y@MsJlZRQ-R2}UU^sn?Zʧ?- 1j-gv2N4gڈ˞7}rE-n؜Z֬rۜ޸V橬u7NQq>"fP,D.3;VUU=ozܒcgs*`Oc~Ϋ^)oChӦڒHOEc*/Jj5%Rwהm3+T`}G~JSh*OK'E$FUbxpGnTΔo!TMFUM$fXᙲFzV2ZP_kINI$+O0zg<~V_7}mg,mls۰y\UcpShmzwbU}m 1 )µ87c*ѫɰ%1b<~׵s)V.f5"i9 $1ܭy6ڹ{Qڴ05H$S {c֥<5+sQݶ4vpQ^| ٗ)jifrW~敶,m*:sʜV*[w f y_^zzsF:&Ö$VmXkV:l8\s]R4%nKj pm 
sZozw4&ڽƹeSGo`/*f81sלrLvBx!wN? ǖ9LJfF"oR;6?t }cX݃a;KX:u)SzRu= CʒmڸF U_q2)~{_ck;G{d{/M?ztFm-vFʔbdjV2A m\nn';WTyIBOSϵJuÿsw2ap>l9+e*-jB鿑q-As}OkhP)FPS{ `eYI)\FiB;<{vsx>jCm/z{I vEnbd;vunz5)J&kPJ1> 9 9hN25gƒq|ڶcR,nɇbwt eRSg,uZ,шZICy[FǎF>RZX˕$FՄӉ ?x{ny8hU?nvZ0yh9g;=-$ʹ*6# XQ~F>$2UIKڤߌr9?Kѕ*~I$$U1&*Îqts!0I=p~{?{SnkȊeg4O[ؘ)9]6Flq9=fʢ/%md,'g>)Ӧ{:7,gܠLn^CSsJjۆ@ ==k:zڝ7>}WliͶTኯιРJ} 8pkuI;5cFxڤ{}ϚaJUH4X/-7&u+J\ i0~pO(u6,aI2ƣnU[GӷҾF<Z-ʴw.闳^Lʬ%|0{AaiŒ"o3y`Sn'Ch.Wvܟyӑl"coAT^DЙ$hp|)x&kT,ZjVӾ֧YēeU [ͷʏ\h$3UNGt}_7 [# ~Sk=֮9Jg;FWavĒyuXVaԒ_ӣLlz 1Ϩ5.T'zz<~sҷTԚԘhor6Yd>[+|Em(ni ~"ח,n ʧbS{E>hkK<X~eSw>sdvnS"$8}ri3zQ*|2*/cҾ%ZC,cȿ6 O~WlQ2ɂ7m!H'T]v56׾w`C0$Xh.Nä_5&K{wrrNGbqQϯuhFH-^S?> rƢᤴ2^F*ڹӞ{:c0n6-ke(N6V[=QWz0X&yR"mrB]v5O;ݽ^׶ok,U5Kދq}v~jγo3"ɻi\^f"3G."UQiZ뽌_?gbHڙxeQSI>+S x?uF4^cLDi2J_kkzޝuOU7*țB8,.WNCZHדu(4E'l0\XƦ"啺VtkFt{ft5U$)|f=-Qti:\`VzIn-{~mI-pln IWʬmz$y*|ocu{_'&7*awn[C[f*]6+n CWU/mJ /hڗV}?coVf1lvkvIB37 ݆P{0,D9_v1%U8=Ro{o}g{hrT >x9+P|oi1ꏣm].}ρۓ}:߹PIsNzU^>Gn:-[NqYr=0qG݌ۺWx*nN7|$o+8tvA'(T?wOUa_Z.;WKr$wVdWdr"NA:s\ wlދ6{;ӫGe!>ՒDxVdm!'0zWvQ_M}31O{EIK6I|n^QB4m?STp.w-k+i~~7+ךVd"Im6bӻKW_O}a[v>um ͽuC56a`v2֩O ^|ޖͽۧ)'oM|G]Vn-EmpM5ǖ %r zQxy-gӍ\DՒvW=tw+l-4YEX68`z֩R3r[}J5聯G߱}Bn!UWR01 :3ӭzFqo]ˣ)S:j놅{,I 2F!Hx;q8Ks_1*ѷ~kʵNOuoo+Jj%#r~lz{_%O妾pks^SF^y{Wi}~>2*U#"GTQ8eNTii][,V ClUA[cF$UTYElwzz%]Sa^g}aFmir z?VvqO5Tޗ57. ,lqy ~E$j俧׹%=ݴ[ǡ9] aKIh=wwyd?]m*77`2qXxjڕnÈqU#^VkKuӿ]cu%M{>[#+-=kkSUoՀgj7Wi/v ||'oZlA$ckWF\@0P=F&;&k=޿6t縉G<<]oInW|bZԖSg~?t]Djuo3sg+t6sM>NTzz8q[+tn۳m^,M̪IF{ =XG'ב#aQUX'O<QFaJU>OoxKut1ۘ/ rs . u9yWC<+j/;mݕg9ES{"JXjN'[[jlVln?.r*N>ojV&f}o}cGMZDxW'$Q>e^G={xRZ~O 'eOzŻhq퓃w` TRvۮl[ģH͂z+uZ#*5web:fH.RfS2z0OOn}lR[TefXyB{iD;Mn+F쑷Oʩ$-ӷ3O(JZۿ].£N-FX vrI2rzڽ \(u0ٍu]2|em IgYM嵌rC}A#G=~kV焹l%ޟݽ~^~g?9[y#'']\q¯F׎:}`d[|31:`gekC _jv]Z2Ȍb[qAݩjVg8ʘ~ZV]z'n©is|]L4v2:G*lFPuTUUcbuvhtTsQ2fN3^iƗ-jXE:q5{m[}$zdiCpۖÑz<=.ׯr*7|]}3>.SuYalAR\d`r?ǣ_-hkߒ16r%$շ^b^zuI47eSN\8*˻x5~]=Ӷ{_;km|ϪXXU8+.VoZ_Ӷ_6ݭ1ԁy*++~;C)O| ~XDcHwyyt>9q4p?wiEkMzs|Wb~FW4}K{{`wl=Xȷ=,.R>zG H1/ʲ0^I_ӵyQNSɫ(M]u:/>Eke$*u`qҭ[`==B|7] 4jg= #??Tp~k>#ŜZn߸e2:sUǙl=hӗ>e>Qݎ^GK;LJ1Uk[Yen[ڰ'7xs[So{W|^Zڋ8?k7k||דS{via)cr ǿuM2˻x`sts}-5)3ܖ:gI75>Ad-`}G3 ELm[?#oGz)|;[,wIOBƹjfVΦ[!>#A騛Ab"#G­J^ǗGӼI&;,q̪Fv|}8Zt)]%4B<:K5e$$cGf8QW:~E6R=Ld˝AZ}NsM]_iCȄx'⬲r1>veZn)N.LYTmK>V+fܲI8%~_^i}~⎥ Gv2suE%5u#3*uDѕK8YF[ZJ6=&wi!tw369zTQSI&޽kC ^]#[w<XNiңNWwFOHC n^K.N_i^]Z3kvZo_+k,sIsXyb+Ey\cGVOU-ĚkL ]9՝y)>`k@%+O=8^WO}<ϡ6"Jڧ]e=yS]Sݞl@"y6FN M ؈Xt,*a[R}݅e $}ⷜypSVKÍI S'jJ3z-~g*NrZ.ɞD,oQ<9Tv+ui\A\ fb:~5hulηqI-Y"*FÝ׎)Sq-M1$ywF>̑yߺ#6w+iь73wKw EJѧVϯDIknnnOqPH/Tek[Y zRPQ|WkY$xgi1zQƤl֩4TmJm-b+9Lc @ýr&pJeRT>D! 
g۳iy=hsЍo[d0 je:FG\dTIlU9Q;e+bFhi38xѽO{KRmgo#̖Xup;$d TW,e(ϧ(SUȒ8~!Q3>P?Z4{w6Q#"%jʁTS=GjTjTI3tei-5bpc][ ͕AҊҤM:='^D֚?* l%NxV*8;s+kbS \ª2sTPqԣT[NaI#t'KI;TiO9befYۘq^1rS:s[vfۭU_W~=Mr|1qéGzxKTr<1l~jtgM QISBZ%S>KA޸Vو^cBi5V\5Su#7N MnH3USMy|,4~pc\(ޱ98WU%vq_ygu ": TaH@<W<(jrʥ8RI.D?24"ia|~KX_YV.,[wRzWf*qZXUď:b?'>]:4WOm2MOq$Q:唈=GzִW55&(eɋ:FoN=ԏeK{u\0ݣnI2j]M)ԣӠ(m\Fd=8:] ֻ"X!\6\wǾ32ԇ2o}k0I(d(+\*aN)^f1_Ŀ]*yebR1 ̷1q 8<2:)CIVv߻Tn1^Vgj}sOTu!EUVƃnO87FВ\]"HqaI3)9ʷ??3Bo.6s}o˒NFz=+h1F8c.kM*50sQvI$Ffܧc~RqV6"8dlcqM9PbrRqze %&TiJM=:[#摣X*fI9ϱSV hKw:ghp Om+mo ׯJj{:RGWZ@Fp~s5"UI* NlfLweUN$slehNk{?z IXF_zIFQt [&|ѷ* O4Ձ.-w%wq&2r2Aq[S@I;WS["XcvQGTT$ԣWI %ȭs9xsB7SOihDYZM3o=6uE%>x-N5:oy% H'nxʟ7mLZ1=Q4i?^95J2?Mj!6f:p3g8JP+FU9z-$>pCb =?tsFۛQQzmW2Uh9G+u*R͛9|/(ݏSUKɫ~m7g+)){C]M+{cTeR<sҭF Tɶ2UUgx9JkF__|lY^9?ڈNwEJO趗1[n|$gMs]Sb22JW6LGLd8Tov>)Ain6ᙗsׯrGf)F}a|{'3HyOx\ܾfQ'+KԚiFأ?/#kk9JS[Ur2dIrv{h*x0LO;7lΜe>nR_%˓{ 혧'R6x$ڻJ`s\򼥩7q[nζumsߨg(&iU (K*1v Jj䔣[T8>r_V%KnGl }>9#<\nbqSR4a,K S#G?S\u#䊑ZD(fwIzX2"1榹~& ed{y%[o0̀~f`8Sgsg)^J:=99peϯִZ=R3m6mB`=zwY{fѼrER~uTev(I?̂h$.Dʤ01JR1ؽyfYm$#u;x}=isǗN菻EiWSy1*H~}0sʕ9jcNS+~maOn3 x#Yfgw"6I*ž}UUsөe+{!,!3b=lzsR僻՛}bQeK9q4xR9Vꤽt|եhՋ3G-|qAF3yⴌF%=Oiu,ϲQUq{E956F J;U'tBV'ۯQT("ZmVv ߛ#x?u9\ΜЉ_n̪H0r>U8:wLnjHch$}<.JRb+RbTB^ֺ\frIԋzĦ<#(>9AEHzƜb$m,.xֈ=ά-I7t}_4Zt Mh ?={p?*9cM]ɁL;glr9#(NeCH֍HZlf*cg zǏJ%NR{[Tbk+H vkRI,;xjӔHƥY-?SnNG F?Ġr=ڪ*'e ssVZ6edUr,9=ϭtƟ,{{NUhK= ;ukXS*2H̅W̅"}xKG.eR_S9sS3aBde?xlu<%&<smll$diR6qlŲy'qG$4u*ʚDe"WVF_!sǡ>y1, $0MSqu%6O%YT_?SDͣV-< ,nSg}+jFm($EUu¯P cn4̯($ۻrOBԣ]=?#6)*/9 t5M;mI8} ?6C%v>^ ci˿df>bx9j6#|?xwPZ3zfa'={sFSNs^6pml"ᣏ$u1g(^_qJTksZ킄 n0ӏ4*n*uNk9fA?=9hGlg9TN˖чN4m Y8g圗<ߑڶTyw˹G_ZpkRw-dT<$3`N?\W>Ux("Yd1 q.?{(^jpԩն- |3HA<1}5(5=Ss川QUBo`czR^6"8^}=jikEڋxQ޳=VH(Yێ# 1b:RO_3J+[Z盗~V}=:qU-)FƤWGvG/TeQВ+"Z;9  $cZݬck-ȵ[-UGyUמ{KdfVBByM,j3*nfsY~ƴb쵿QdYdH-e^:`` \?bԞH(CviM k3,kn#^;<N5"叵 s-/|O3\}XHxGЎm=AfU ua}nѯ@sY{nЩT6w8px򕌱n-j*oU߰rpy<OƲdy򒌹f+ʇ`۷/psxҍEc(#R[U"a哌cӗ3Z5+׌䖰,m"K^6֡jJԧZ2RMzOĞ^pp{ǥsOiSIʧ̝Xnc  V~qhtRtFȶ~o\ #׷҉v,]WHFǭaއ?Y.Nw}(.Z2jSofВjGpdh@~u[Kog*1C V3my$~Y#'3Ǯ1޲m{eZWR8 8Ț9HR{j]7s'*..\Y+7UweQJΚ>TiAmt{`Gϭe)N1u ţϙp*o'e)izGH.ee8]XCFch;ͯ"?XQJ̯uCs+ CUQ^Ṝj5Kbwb)wyܿ20Q[:=1Gvބ:g V ppqA}Z<ɻvFЧZ^?S>Fo-dptmg*u+.DbRi[HD/?:#=ӟۯg(4͏hXy3~EmGݍ)SK~ߠ_^ӏ&5/>,{O 9\+^FmNWnԥ*rk7̱'#TEj⪹7%mI",=0xߩs0IEpfA񭢥eiR{n9Qe pFAx Ti=TfyP;)]WR{;u[,y#~xFp? 
~UJОZne't,6b3$1km;0+4Q5}{thƲ+$$r*ݐJxeV0\sU*J#UZ'`d( :EGRұc=lGvh``q8#i9\< b+KQI=bKeFAYAkTqܖKS){Py;FmqY]S*x* Hәգ ѥZve^ry{Ʒg$ﶴ'U!U ff$e/39TU/yw[mXrQNTeQǡN}̿AV HRyJCn^gtVHsrk8ܖ$dNuNG}q^@xUU8R:cׯֳJNuQ.Ykשk(Ⱥ2\7̹Fz)FQmt\EJZwٚL5t.N3?5TooC4cRq5!dKhE.g81OE۹5ybU ʸ-?P%%:cF;ߙyL\.UVW窆?/ wֱu9wۿc,Lj{/wȝIEfO0,e?0/O#gֻtΉ/xE ф/( W޽(viԩnʨ>V$ʺ5-cb4zn]qmmvs]SJ6f|շgn{ةTyszyDFsxQ+{6 _\*kIi[\7th~;۶9*9FZ}Ο4HI6L3^~tÖ~)HYy=}9˪*?‰$Cf GM9>|Ho16l}_>I: tGEǷ$iAw4lmbqDck4M:l)0Teׯ>2d\%kX%Fx`G$giEFQS}H&pXm)02;c҂N-͸ӗ+>m =8Dc)4|Gr12I4RH$FvO(6}MDG_ȑgɒ& ۾mN\ѣ.iF\YY!o5pY?r=k #*=הFv#v=Z\ֹetW) HZ\=yk769kCKt9w|98'!sIT%sEܥsre!U,X 5TTwϼu]wE]~loQ񪸩Y~3msl?;p>[9Bvd"LF6Uc78Q^=l4%QH\YK ^E\Uw54Glwmc H;<߁*v"VLq)|F~r4VϷoQsI:|d 6@p>nnG5sя3^m-wEܭ޽:qq},W+2o(&h[w\zhHGn ymuu0o`i s̕,2IҽxԌU%ne]Ǒ4|NzU*Z֟t*-Z]$elRn 1[—N:1d~e~#;jW S#ǨަtyrƔaR0^^H"捶Xax 鹾qVit8[kWۮ 21H99.hjQFűyIۏ#;N/SZG=k>is&TnwEy~q5%{.hǩzԓ۰zk+`N4\I.;Uv>2X*V.FI+n@f=GI\lȗ75Չ$܌SyE | HżpciWP<1ZJM6Cs 1269 rNQ2faN=:u5թ9N{nr i#\|۹<`w}IR2 ܱF3Q4N11V0"$l)fGk_nܣtTƉ$]sef74 %2nCycṪnO|Oڪ*Ҳة{YTWУy1!;xy*hhrwNG$@f[ 181Ǿ*bFW]L7.$[cGcNT{rȄ]~z'k;>cΩVU'΢BXUdL\(+H:9e6RHЭ-mfw'y`#,0yc1W?=¸ɭ֦-S4ݻmad_zW"JZ.Ju7+Hlxⱟ5N3i[Rboǹ湽FݙO yy+79N=?:SV*Roܫ}VckI}+tAs3J gxWR+45WNn8&Cus~("cB^D6,$"2'xVj5s[%an$[̭"}ZJ1i4ЗO[LGw*hΦOu#;֧jKB;Beұc)gZ>Y̍[Q#Ip±t VKFI9un50ܤu9?(Μ{; TԯnJ>nF}߱)iRI?r2wgN0r9[7J̌EPoG]'':uhz}# v&x ±4cGQKߡ *O"dp.+\iWi[ 'My[6wqvs:氩uC SK⁚dao]e ߅]Jyo{N"2{) 6 pNqVl#[|;G b}ϸUSw],sFm[iU{ayM/ʾf`m9QҮm/SٮYN 9 nh_ |ޣc<ҕ:ucRt_[ \ܳ[ s=JK(fJE)UײD/ Ѿ2zW#8q){E$qK2-1ۊRbcBE$n%R$ pGmÂNt6SS s7jɄ Ypg;NxҟX*tfR%lMpr9 zu*ZޥX{/aRVkrp7y{6zdB:*56,r12wW0ָn׊NA"bFS`c55*lˡNJI8k_7࿓siu*GgT^yj,[L"^7n@=s\r[eR+I_fXn3y'-5)\cS#BܮWcba8)ѩ%%U,A Ik"% }=+9NQW][V%2̫T(~ki4U!4Xa6rۥc{qթVfhZpLn`Oy wڳv:z>SrA lbr88oǷ{Trnᎏ+S $f|fEU =EmRNQIl)T&4&'md.]%*U)B/t<!VwK,\y?";lHd ?9}9UMyZE]D몾DZaێ/ N:ꊒ3oP**0>SwVס88ٻhBڏcq3JIy9?c/cnh-Nʚ`Vxro#kUHd] s zץaѺ{jc-:>3oKto{VM n~ES eGi߭C$гCaɸ7NJtwvO5w;= }a$,Jې= /cjo]Z2v0r;pOk!9{f%RCon>R~RhtR ܝ[d&Ai29W2~uP²1/g݊eB &nW{Oyvh߼w HcugiΊjǷcF6S2쳉=-ɥeܢ;;٢ޙ0-tW=ƝegˈiTU zVu"&Թ.G)5(ŵfR$| \cN>άj{isxVGR$jF5k~gAc#܉GJ+FS:M?mU=\іU#mKIvXaڹN2*0qF]K U\H }IIҍ8TJoii7m޽|=(I%ROӶ,{Yjunkp19;(R,vKP/<Zl Vm}kF?KFtwQGbцđ ; 9^ {zr%M;9%\8f($'j߯ѩk&U7hq xTZ2G'nXs]qtci+Y7O~.dZǞ;Zִ˦unԌ$O<ڔ hJI![vɒI=H'9laqUr^Y]izVzNןO? |7keG\mW~LJыW፟eXHm+62Fq vrscտR{_Ahǿyx%+u<Dyj.Wɇymo۰ޠ׏Oh˱:n,N}p ҼzKgEœE2==yQ9D0P.ߧxS .M7,YVf;\xe>"!@G'ߥiGGewЫs(oW洎)=;MH6.FV\F>ݎ4zn U[ Ƿ3gJdMu[+AuhT㣿r߅fn[q9=[*ϛ8VWO3Zݙ|S.Wߩ=H;i22T/;_œw7? 
A$(FO:FJ1V?/u/,/~ ojf#K~xʼn:`烕b;)=Ey_+JoD׿Vݾg4ŵյ]$]GQpy#:m-TnR2n;|fqUT#2i\1C^@~9g&r'4Ewyqr"iw⼫F׽Nmm hjlifl,VW8<'a^lFޝ|4r.O%=/H 96y3yձ\|tTg5,]o5 OnFQInI(vx֧{]]mO u^-IԭUO9@3#k]K9tt~0/H$txe)$bFѴv$5a`ۭ:51~IKT_׶Mj49Y,M$y=}iaJ7VgfoIU}̯\_xYmfB̀ɑn1XZ4i_sRS.,Ѽ2~a(?1sq~}>;|je{Iv?go0a_MB0[93]YBu }s-uZo'gyOTHdDMrC`Nܧ)4ӳy =>2;+eef2.Fl|+S~k>Mt5[:<7l뼮zg`F,o2j0V++KCAhm'ܬT:^ܫӣR}cӿfy υ|Kٞ{ۛ{Ūivbr2bp3s`ؘN*6MM<-l:qwkNo>6YF'w " Hϡ_;_:u"^hkʕ#ZVt3e}/X;|FӰw+pSЌB+:4\~>yZ_+vL$׃+ Pޛ[Loi'<{ص rHH$`81_AFntӧOShUV[t=O-Y;L1VXF$n^;RNSJ+Ԕi|E֟{#$WhiVʱAzq(^/w;l|bӻNo5Ե,VeyYV*:dn{ֱUH[MsqƊM'ݬfxSikGkH7!F;~Zto-{XYF+7w[j[^C$RL6ӓ 9b*.I[7$N?UiQE BrO;:_;ZjO[˹{IVn˹xiaf\I<O<~||rZ%mZFVӞ"[++nkͩZQ%iVR=«$n3 hڔkNع$1NUzʥ;5hSRS杉iq8_a^<Ѝ5vtpIm{!y@xcԢZ D\et=uK[褒,&Q #c#nYN͘#'MjTד魺<:m4kwg}5Zyiky<w7O/'I U2P~I=9;y#1\Qr^˻eߥω̪R.1Rke{YwRmpfYBccaʃ9pJ1vwv|ʐZUӭ~y4I$kzskTU*9~Ni(ڕK)5t}3WӡKT RR.Hly(a}wgw/zٛ7K+h0o1[XwcҦ8u&s, n[: U4/חRg`0؊Rw=*xyJTgn;|jI쿯롏O\ڔm2@':}+TkZ-?N&ut}طIӭ~u~h;؆3QWy2/-ʵKigx\FVn=:cyy4c*#ߨ{*{|џ~GO<˱*Hg'ڡ{_D9w=|6,EVV\mQIW%L1Pݼ"y{3[ZM)UR3檝Zm%T⹖+]jL}ze)ٜ5)Ӎ⑥&k%{տR1c~YF5}VZpv}IC| vzL07:OD\}.V'UzK M*$CI٩u:onfVUby;3^6eSVKpr}+ͷ}"ATMsّxn&HUX&HWx)YkFk;qm+cxyBk/_ʕHv{jz>.m\TnxX,'W*U1'X)TGJ|=OO\}MY&6p{sb(k~a){Vc⽽Oò]$ɍy8ǣ~RtԜuk3q)Ƌ]#[b5ɐ0ZTRHҴx[x˶Bn9k?_/C2 ]5n]B7gZsv?""EuMpx07`y??j;e{yMIHYY$cz5^rk*>_zbsO62HלzO'WTjG%T]NU:x{}=Une{}?egZRqEF]MehJڌҪ]ӕ9E,&b13ǗFIwܰ&aiڡTKjt䬆ėchyrvl,F+iY^Hǟ\ۧsl4].U:׈ =Yy=csP[7Ԧbؓ0_[F/,3?9nCchϕӕ連*/Ĵl|ü^Uˇ)NC4%ꋖ蓼Z&eۼq=UZ)SpGFw#x՟cK J Mb\O6Vc6!Nxm2x~QUnhػ\Ă/ s]*Wt]5E H"6#٢:vry7 BLˏ*[GX \ek~7)\0#ǧ5*Sbnrnr:{pTggԚ#Vn>f$yc*lxkGd;nf],|G6NOy\bsos&rNU(m"yk-_$&X Vϕw"1k@mmbq=ҫގsJg߸nmU7˵C:sD+t..rc-G/YYfh?=N=hdBBwh;eV2cVҌy^әrk.#O_}qߑHƤg-nf5ߴ펇NkF2Q:N-|8)&|GHJϓ9B/zu,iry)+rUw]݆ v1jqOЯngt Y2Uc?$:f=QGRׯf̻ۨѬTeKHazO*<㿦Emu(#HhVC |eٖvGBOS ݔZeaҌy٥-"Co6gl~4Ӕc7oiQeyct2-^@'Mt.iJ:T 21 1Z/¥8i~q2Cd)d,gNJڏKiRPP=4"`q 9kerM{HW!ɛv,lh~i-m~B3VYv}*\bhhlK,1J:y7+qקJ9Jv܍Gi`kHsy>x* W?IbV ҴH4S>H$?7瞵йatpN-"k}ǂs *%&\_}_z%4?.`z}~ EBOmGY\ FxTˣ+-J'wWLpx`_yR]LEGb^we\cwNTU[d!BXݏfgRbסL:[bL)\JQj(5&uPI$qn?|@-~k{%&wi"Y #Y$\,k=+XˣЊ4̶~"HEYU>FW2RpM7mu6uhơ[-'s+S4NjKS0yU wO )+Mt dn$ʍd*j#N6[w# j6@Y>U_"C.JWc5%;mHbHXޮq~)ljGSP|In-+`?`h)r|ƜI} z`*coҧ¤c̋z k֌$+ 78[ԧ(G{xxJn)j#_C[iY335o%F6z?\{qql׍jvFmXI$2:M'*t'{Ur'$Cs1q:E\6O,#U֭SQߩ,M?3~"nYZfuQsݲz¢T#oڹrm,b6k09eNU=|K4-}:VQqfZBvVJT &US%:`6_\ ˬ':㈶, E\Qi(mS=pȭ#AK>fIeUP~5M7.d/z1rLB#(s;vX>Z/i)^1W}yRJH=s}q[Z1۹JG4aъ_`O?QXFQ"2jj,3$w,PN9cIfXƜ|S~A;76*h8?ZݾZ!nQVߍ*{qM=ɹ>}+>_x+ݵk(JnpqVQVV"Qn[xU,ʹI!֜K{񍦭Yі飈Χ<.dp<H<-3w *T+T/tξ}]3b\u'Q^itfi0~]ǯpvR3_e /I,fu_;pq(5u۱Fps:j| >\)#zghw}W +8iy l_-qj~NQtSwɽ[qV 1&>Og5r]^BZuQ1zMrG-J2QfW*mQj[ :A\)jjmRZs441<0Y9 ʴe+J2wm}n][,F9eo}1QRJ;ʌB}p7gnG=?،<\mֺTK1;ұ7t;[h%k[R8K;cHBKqR4(JiY-ԹkdQ8R yem匬vຕ,ӨsjU9cwqB:Vچˆg_PTp8Ͽֲ)FI>9WRߪlw\~?4:NV!.dŒ9ݭAK;1mivŏַ'(ݍ#+I/*UHo!!18b.Ir_hr'JqVV.̂;Z2;``z~OeO NEH|ey̬тHml+׵Ljū:-NquTbǝY=\Nr7OS˚Y bN}6R*)ԭq,y=ѰRs 9N>5p{~(R:н;}o$ici–PjAQVW_;8PO.f*.llp^=e.[9?sJkWV;aFx=JTdsT[a1.Y m'$g9Q<Ԃ5B`B%f{ߟ̞\ԍN{7\}̦E`{c/~⬎iFb-ax62*}0xԌ}E -R+Ke[6HB%wd:{j£7NA[ʗo%ݼ ;pHO'ⵔcϔTj^* dYǚF}|q&%.UcXj kv@͍wSZ#kbqi-YfUs[͸ӍެiFܞ !|Kr89)xj^\v9*V~٩- xde h@gNI]NXkJuෛlxY& ayzJ+FYYy1OCy{r{xEӻ+rM"~id׿r _u?v[Zʁ uۻ_zQW9/fc sk.)Mƽ*g5f9TN0둎x(%i%$~VcyC<0kj{Ϩd o,k.1"#A91Mӕ5 VvnHY<1+nݸg?N+8RԽݔY-]ʓ0:z֞4✒_T)\%<.I]͖sƱmXR~d+;Y١"1d tQ~2Ν:~~?ብ*r/n+\:&tV^Mkas#\s&H*;3Wm:Jkc(ZSwZrBfV,QL㷶knoiR;UObbCM=lm[>^WGYE+/6ϚyW^k4=k]}VN͖uVQBۭt+F =U0+i]Y,=V2h%uZvw]EЭyr*rď\wXFr~}h&_V##kjpihuHh⺔ n4ɏnUA`NzV8Ժ*l]#ab:̪<%:uՐދ($hm39p*Rw85%ܯw$ww'E"=4O#]:QY=H5'D)6Va;`zc5#.dsdکx̊[<LB`T]^ϛXy0T ; kʄe2aSZ{*r|YΞ\r3}k&7+mhw}@=GQT))Zg_y~7eЫqoqpTr!m%}25NbWJJ ܮ-Xn"V&C}?+mκ.pi6w184J3XƤ2"a-́cS%9 ]w^gu8Ӎ%܎]ɳ"E) #ZtoE{3dNY6Vsv$_Ae8ٶ+WO_Ȧ$i!^vFx$6{sEiNmZmO 
.lBG~hԼtթ~vHFIȍ=\#ftS'R1ʷlSsϵo9"hS'[vkImL$o,ýnsӧO-E-(ƦI˖ow#E2HN[tr\#ΧNV_Bry\{1TNdjK«rqGR*GKޅ'"UvȠtc>Vu\Кx' egI3s\yrNj\$MJj q۾+j$Ќ}vDr-RKz~8בRQMEVù(;sY4q_ba~s2Ͳ$8F3ޝm5%N4$gͶRG:<>fx:SnEJ8I\͉#`ePyqcC],*5)aKSLۘbI8mdQNjɊZ:;TH//'Uf̩rR'>UBSnjz5U*Ӕy"+Ssڱ*qkʴlN˖9~I&3]Ќc OB<] {Wh7m\ kۧN ''sJuT;yv.[hqspv \@Oztˬ4mZۑ]ܬ-,Fֽ*vʍ8>ghiH[U`WNRo(O[PBŵYsI. דWuj6V#*DUP}ЮnwS6SrcBuHd m:=ǧ5d.cfKm,I{]qB2+8cIfp"D玄?^hѲֽֈ؂8>Xm8ҫn?wT4j-Iie%ft_kBų LԎ.yUW( =1SxI +@ (&¯}=ӔҜw#f"-x8$8);3'5VdIS LI铎?[MdpS*64R2*bJ πOs~ӱӱRX!}$sTkFGT*FT+K/SiU%ϙsYɩKƤZyĮ|瑷,S9Zu9Vp5~rn7~v֢OVLkd֒K˖c9-*XҹXJZ&"?xFd9O5,Wh{v=Z(ɵ۬ Fvj9^r\O0+'Ű bqԎRQ9i;$fܥkSCzSэ:vLcv@7gד]S]oGJϿldž^o+W[Oy+y2Cͻh*nj;^v"i>o}]wK&:?5=;LJ ^*K1oQ>i2־^T]:sRQ^*\/)31rJ+ͩmUv}D̆/ͻr ǃz֗SmHQyUi۱ԯ۩,}Bd+.?νre8|EKUMаE;g7nܲpG(::U.۟5݉$WGFz0dܬjێv2Oҽ81z4 M$a '1i$8}+zN\Ѵ]v3mwrsx&*r\>M]N@f[Kw<5EDc‘9%NjzV#Khݶ~ld?n먑,oKif a.]1ThҧviY^\hyQ9b45! - 2FqNQ6)^[̟}W9^=kOiʬ˧(>2#8A8>1rGjћur`|yz)qcuFcݱ\=QV1Zw23Ƕ0¼qFGT+^+ 1!mŘ.GoUoRտ'E/ ëHvT׶s#~U6*aw ђ :u9se)I^+SK"˷vqNx9Ur;(Xlڬdw\^UEv$pVYT 7瞇j*vks9ʜ}Ʒ+M+![IWNHϥLc*z>Yx- =}OJ34RXJF<?Xf:v0)5qQ&*iA4`cch̉2)~]};pj^ӛҰե$s";TaPev)\ޜcvZ:K~&m͜2yW6B؅9$/Ct63"c_f'xϒ_gFsђ&c_K3bC$%%l홷Hw ӠgRvn4t(ʬ [}Zk8^Iw~B=q1YFUks=8b%b徵xQoL2 89YN+4a)FMuUo-@e-ьҲR7*tc(٢?lbvߚ7CTR\[, nFc3GMH(QWJdlۇrOc)KH jJg>PykTPIɢNL7m'89IMp8Lg)NSic)/П̗Pax7[i6ίݣ R:s[c4xڰ*~Տ/6ᭊ1i gi`F;FI?ºc(Qrʤ$0+ "*C{CXUmSjҳRz. Fdm_HO^s5GOB{Ҍ!EGv_oDm:g<]kKOqɾTY|26=*UQECe'zfǯ|ܪ &OFLp=sGҹ+neZ撽~@j[hKޟ7aQƢ-GWJOip:r2.Y%I,͜d˻q'se-_N BЉ`fYF%{uU.=jլMV)yϰv>e9i\ᔥ sA~عAb "HF_F1ϸǿ~UޛQڷ7Z1m\'P+)ナB]˦ZZ Vmg =*9I$FyԼI Uvɳj%N1\W]1LV"->[/bhᅎ7 1 sgSQ压Jk}u M߽Dۻ,QiG<.x)X77J#8?^sG TF-wDΫɅd7g.97MKESOOQU*K;`ӾՊTd-SJ1s}KPE9AGHƥ8֩'rO&fea|̧-Nݽ*s(.]^`$#[Uf`tgStC 1N}nv8lm6+2Wqֿ|9T6?Tr2ih^viYUʹ>dUOt}uTIպ74Exʎ$2ь.(dpiko>xrglt?g˥xեnl[ٕm 2l;$Z6k(ؤx1\HY+3zp#i/BSR7CM3g?*Jv\}7bgE+BHw۶yHHsu ` fEtS:tKUsIӜi^[B*{]65*yeTU+,/--\HrYU<Hq{#9PN}B\6vͻ=?E89A)FP~C]hL#NN_sRZA9kVFiw9Oҽ,<}0ש]jӗ_>fT29E*T tZ>$fx]t&|iI*<LY6<'B>Ɯ[y˯_PԵ%-^s}yREv9]7VNڶq%t4%N~]yhƚ:cFI^=?Wq6Qt6FȲ~m`:ⱔyu[=:v .,\G#>g^$ݮ\WUR^lSVn.U)z-ֺM+<. F aa## s'<%wQ+^w!kgV}1o89R5{u3ѿ>_FNƪ6Fqt1u{1FȭC(Ϛ0r&UU/iIvWЫ+|I#Z]V3dH9< p52tQRWxYtH@ui ϩ3YÚ2+B )jިx)ǹn,aw|=O\/Z?c /g׷y4j0Z;* cw ,rKp]ɻU=8{WIEe~jO/OaXdXd"O2@PG9s=:~ؽӿkm}eM]{#mZZ8*CV/U)N 6 Yn-˸m- 7,sߥr£yetG*y[~ dhiTlut\$%pU<vu)Zmt+F6cZO"6X= [珥sK.¢{7$,wyo(>< մ9佤lzWrNwz޹9dz5d-|4{T3d~\C1]0e2nîߛ*eG~.^E5e]nQQ7d$nAZ#Eԡ{?hZW$gʏ6R8Ɲޡic#\0y9Ҵ;&3J5t]4\ĥ#7l%y9ұ| (=ŨF n%߻jH~5{Ǖ$㬣JaQ<|ZMEiסƝ4[#<3Չ=zsWD~ nٴ=̞y ΍ @''QJp!h8UmX1OOs&iOO[Fd> C .vlu#V7F{.qRI>tE~>ȿ-g$OE,#x3_EtkWZ?ƒFk$o;`V [CQZs1TK.7om#w6bq2BOsɨKE,Crcwv4*q~5y0_DԛZM;#7R8q(nԟN:(]YtﯝU|_}<~ \Yͨ -/FxR#jnM_N}0̨SIZ3o q ݷג Jf:uZ[|#Wԛ= SҬ*.8f@8禥ι6ҭZCooϠx!VK[|0Lh֙/42yq{\q ߮^Vw02ӥVQ[w{y߲=~8`]B>xߟ qʬۺpq|y띡arͷ'p;yҾBjψrfFvm:'֘wŮOsScq"ܾ˲u9k٧Rw:5-oشXXncV;c dc#(GA]j=} ֺ{i.XU~'l^ZԶ#:MpʑBYS*`ߺn3 kʩSRV/3FW|[識? &RIH<3̹ N>2J4ڍGS9SROd}!+9VXbKyL~`ݷ*X3r_+RZSKݾLRwDf'-3}8:k}*Wi~\tԧ-ɇ5>! 
$*CTK1k̲$cs֬4g*ju^zN$,2c.J%Ed7)y*ƷTc+<m>}=**Nw[~i^}{y2e3EÝӎ)ӧOEK+೘ȊnA+iڻtsTRWKDsy v#YESoT(ӭfIng`hYGJs}@xN-YVKq$r " qk>ik6J5'$b8]̎m)888gy)i۰TBv$ u.vN7c#wRnzb+TTih6V763*oǽf(it剦(>٥1 43HN௯Gػ:aZhkuOos $vQ'G?J\#tԎ.kD^MG l sԜUTc::I%׽FRmԲ㝡_@9rK'`c''W`eVeŒa޸#X缐 o9=QNHԴș{gS[F!SuT&Nx]]󹼿?ZλUƥ9Qw{tas +3#+c\185*R_snU!ͿEgq;dYP`yYK[tMk(ٵ؂8h!VnŒ<ϱ8=giYt}N:5["Kk|)Jv>e'׊PW]t;i*Ii UVEi?0t8ʝocQTK^*F@Ӟ*R^4W-7޵;zu#%[OA .ff_pZ{Juj9ioPLy3yn^?wC6y/08ONp^9$&)Ԍf9y]s[.d;f\1s~5&(6%fy wʬN3܁\Ǔ~g[R} -s *V- n\x1NŒ)զv5m'Mr U $ZQQZ\( F_pnW//+m-O;:j -2h[<9ps4{2:>ŧ$~ vrM$.Qr1ӷ\Іir,+jYwuxt/gVZZqoq<9bX8 sƔst\Ep]|߷~ tJ-V",ZZ[۲c,ImEFN[9^nLGpFm׏ںǖDǚ>Y'D7g2CuzU*1R2G*Q?0'#Q[t:hCԣmrx9U+5 J} =B-4+m -'$sKSVgU@f 4s+pb̤Qq]7l#*T`gﭾq>vVF^qs[FQ_{6Z :t]9OiN8O2 ȿwvG@ppzǚSw;9j;3Gqm%`jN?ZO/ڤi{5 ۮ}:vjzzop^I-é qsޱ*5cSlra[r"ٸ>lcҹ*)6R&25}TbX)J.Nk Tswd\G#eHŒ :pK>a5~~}? Y $xԗ;j':܋k=ْ8B7 }5R9ɨ_ӖE|3m\:tT5َ8t9i3Wo2] :c>}cE\)4}?MՕb6;S9㿵rթRu9OJwhr4yZryF^MGq#G{4[x.F}+y{9&Q(J WЮLk$vi FcmefQ)+#L"}nٜi{[-]48#'SU)vVEY Mk<{1Uڣ8.Oݮϱ$-0 +iTYG*3QޜNrMIӹfؙ8v79QßiBz fg2<f`v73ߎդcN=:Ք1q{yw4ෞ+ٖ'!@q{#h]YBS۫б+^ʍAiۦtG$JKU 7̡x p܎qӵiFK3u\ב-..{qֺj@RP5h..Xbeb3|INJ¯QTYj4M,r<* ~s󪩤.Ӕ9坏([ռhV6zv?ָ?wNz69לdgf6bE nc#u"l&U(?f-6V5w8YdXQڦN:L#8w{ ۺ@|E9h<`)9QڋxJ)kZ-USj-IkQwwKaHY&f䍪Is?gRRGRTeQ^\E+.sɭ*SB:ܚTİڤRC${RlgXԍI{vL ѩ/ &d,~_|9JOQEz ]5_<*U+ӂqXOJqViݫ3XSNQߡ emʹŏ'9VգtTTc+JYT4qěVFmA'ZRٳEJ6lɻY%mef2I&-y*N#Q}JIsv6y$~?j&R7Okt2u'zQxH8:#F2ϧu}i,ZIܶaϡ=NbF*zQAx3SKЯcE84f Kgn78=t6~^]LJhVa3Tm*Tzg(TM#-lQv`I_`ްJ|Di(Ҍw'd;ٛ(;0:x8өMhpb#V2N=SKHUF1 T\N O>(Y^d bX*P? g9j/ynVlFGf끎8ѕF/Kfnfǹ:N*0QZ|'lIEH. pqB=1)TS٭tw[Z8ԓVZ%YDÆٹpL,WJ3'yv*uߴ]]5!F#pC8 u<~?K슅jro}ucѝkHy68W$Ko RF(F˹wu9^\UsEV"Qv2+nhW@}ҹ]5Uƌ"y-IRW~R09z>Vuqӿsz2>ku1uPc5r=YQi RdϿUⓌy{8էk%3gI_.0^:2?NjyX'(RѝHݜ+#ek륽>hӭEӯc ,Ioˏ_/RqZ]-M"XY|XAwmjT)Ԭ.utޅxS$n)#8NkҔyoaROtk;w2yy5~ҤiOR4Ӕ]gv$ H#)]lNWqxh9/[~+ #LU?rfʺU5#-vUފPMpclȹ¡'8uu9ԾN/ݞ8TNQiWOAw"& Z[A,7 Z*u*r(}5]QTpHZoݻUL)6f-lujc*-?^-t(;yS,gnf]ͷWz4a ɜJm椳{U rEmHӨ;%Iף^EH?g)&&bV4CXZxZ2i˨a~Vkڴ2ҡOIZV:Xd܁իdaHOyNge >6KsEoΡ- 3o5@x+o(7O.2\`H$*sʕhשJRSw"-61#F7Xvpz?WR+9>W5ȃjTdmm|};֜)ƕU0jU-. _jCj>i攒rNO_s֮_c)jމӧRJKUBobHoz1ڢiS%z4!(kR f-W/y?jNT5ӡBnV#nB.>Q'ey~ՅiXiN^g =警%aXn '\3Ꭽ56/?BGU8p؎ڝHJ5#Vom,mi3=MrʮGv/UVg.##Ӯ H$t\Tv{ohImp1ul';Σ(K_/jԇ*. 
g*Fsd~aVeEuv,7hͶk;+`tDri.Gmv^^*#f4?y@8$oqm[K%㰻[;fU!>PNp d{T:u'EG}S54y2!(Q9+-9K]5'SukGf֝/[%i#*0`<2܀;ҹjMs;=fy4moXYT#M^\rik~򝤚} _X%4;T#?*|ǃ `RiJsοž ҭ-5vh 1xڜN*βz/C7 01¾PUe><έo%%$=KEޥ8o4_4&q=~ԋqGjǚ?υgV9F`rr:3j74sGU}~']d0ͨCC<(%9#r䎷o}ommLSNu徻ZU]~G omɏd}g#8`J>Nt֗6|wgYpŏ9O)IAi娽↳.%WJ.le'LҾ O7i/{rpm.QvpuMNJV1szQ}/0qZ{nyY"ۨz}8S䨹9{ֹj.Yw19<)|ҍc4^ipUs9jۚ~46LS,yZF*R7b Q,sqV8=qtStȂ`*/%F}gmNK2ug$e.@<+.:ߑ[B]NQ{˱$QϹ<<rϭzzocէN4jg zv=VVڜm7GUɓy:J*1n-jsU+Q2cNGC }V¨sڳRI{gKmgak\]~br{8\8-Zr~Vwܣy/aIٿz_ PcFk{:;[[FkTJU-[3.6F10~1Ҷ(7~ǣ9QZ?wM5 [KbƋ.:ƺқY:z$ՎsrCw>IգĎG?zdBökHԧBwrBOWG NL%vS O z 5TutK[t3XʑgAy{c๤$Nif?|9{xRXXI-tբԴn-T=ޕyݼ6OQg#rqt҅|=zxꒆzM3gu6ʱ8}˩VWn{;jEFi-_}αAa2\D錓$ 5כ9h֩8۝5u0gFRW5=OcTkLʒ8l#I.`0JNh6?SQt"~(V+u:TnJZQz烾C[;U^>L=nq9n1>omR"srj›= 5 dc$f~_88 mICn cVT՝ߑ~wzVicpw>~YJ.[ZҺg?[TԚV@rmd?B֍:mBwtcQ[7ٽ^$m3p 2)TIb%N+~FMޙuc^}Ia?y\h^G^oIr^m P@[޸/gYKF>ںe(t*o2d3̓ZGFm~ɭ%5n;ͧB0;húܼcJl>.j%=;uܛVb$m{Fx.yw U\gT#+#2evevҌTltTj[j;K{؛nni(ԌkFTGɯ#DH{lnw9a-#jQ_y0xmw}t{g`Ҿ FJۮyMemj#[XVVE8i8_>,<>қںvcMMrYf+jzkv]lit&"F@ y\?Rl^+xwy]'ş=q}4m4a Vo$/?7\yl<M5iw[|".Q^_f|A*af0I/T..ay,dS, qJ\7%Tc,z)}v8l-I!deI%R8i᣻},wFqz>MvֱA۹@ּ,UyQj8x|mhc)`{SCZjJ-Uֽ5o-6wq%I"~v*|01^Tt6cb4~B; [}i"1G89Ѓs%9ӣn5|//|*%Q@_ :O9^iT%ucUTqӧih8>ɶUOʽ[YFȧR5=hTf:ۋ0[W|2mdB~X~~mߗCWs?74{@xʻrppOzzҊwڌ}%kE-^9K0VF 1kJއe(\D_n6( V7e]U])+h+B~6<ƏsϥD(˾m!.QwDz>ٖ[@#g@kN9YCN8Z<;CSҭkk(yy8/aRMeժJh>W>Jc<ҩ:Z_SN]OOTӿcS G3I^ x.2[ehZf {9mJ\4kӔR5<-50 YZO$wn+,W4FY4ק Tu]G鎆V-6W%JQQ[ j^gfhI2rˌq?7_1>VO#<[I-#צ'dK-Ϝj i9iǯփTߧtS*=C\tTV:*GVJw>N = 6rnU%.khUHԫ <קJsY4O OayNz+XiX3QɖKyKUf1S֌+_!du,p6'V8ӟJӜRIݓhN:Cy+Q,sX=ir(nQӷcڈ㍦2s2y㿷]3Ftƭ)7/{o+moZ2qvzuS;F8jT\_2}ebG}}9<ΥTլg#72`u=kZ5*u_$4[ tk [QmʒdIKNm:|Ŏ{ܟ7ҳF.rgWJ=Sb3:w/=:ܱzҌV|OBH.'<@s49"t}rtns]6ճ'ͻpdՕwo0F2s{烌l-E*ƚ|jQ0,l{r;:5R\Ѽ9HѭK3q+-̈s2y'?AY+FQ[ tchFƑyrbdk@IV:?g9ӓV쟻AoVtĠbY$n=Q$c[4OEu5nFCU+>3}jBۆG Qw31ֱ9`JW[2>m HS@(Իi]hTX=է,c1Mr#lfeڻXzZTz`#yO]JC#gGSMs|'=ϹWv *ɥ{.e%N:+HTe+@nʒcZƜyG4ySE,eSUSK6wQ:a K6($il7{V\΢R<,le9s=*&E%wFd̒m#rQՎJ|ɩzFƤm3.pOGcUˮHk2¹6ݬ|]:)G"h@"TfJ5{QF1m.b70ee/x=+ѧQGסQVp<]Լ:w,*͆h[iHP/5>Սfڨ=zGzsEӯO x6qU`ܦ^=j̜i)Qwzp5`Uۏӿ>~ܸƝ*JVwwtc= 0c;rI㠢>QzаM,nD aq;8b3bR'-ccVujJQn%QP<-J_z(':Rdg ql_jm,8Q^2HXd(jg~i{-EumBn:sQqBc]KtkGRx'Q)FT⮷e+& #=@11U(Ǧes= Q\ҟeN4-: k£0_Ejкu#ad>{:+FDx^~Nh.캲)I"F@g˵v7Hq\Z#ju)9YaU~o&R\H.m]\JE[ c,wmi6U[.-g"2[*UːGLXeʡQɽ6(J|Y| 3pxs5*.)6ޗ(p0`\6 u8>d_7~%t3܄|#(e?jR6c'Ok+$~tlG̀1ֲ#~>Ib;|fVܹ¯|tɣ 9=4)rQ|߻S\B7 cQct&JQԵymׂsxִÖs/w-]"Rq[Mʬ;'N2tFTcdN5A]݁}lߧc#qHY?wךqfvSRJIjMk*+&WzF? 
w/닻ɴA\rs?QǗG Mh~l!i&lfg:2pY3Vv ^G:\ {vS8G~f*ty#bʭ 8=yji}z3{)"ϗ AiGs_Ҝ*z$LBwK7Ue `c~kXP/~GEz1N}+K[q=cp@=hѕF2OTbk/{vፕ卣 Cxu5kyz+9-_F15 y*W]1rROF]L-:RNRhDfoF ##zeȶfSt$M.iy"2Gpʍ!"P70U-2>R RIݽ?Sl<`jvnh_eu#o&O 8'ӌxGr>B*?zSQylqDF*/]WQSZUf2̾Tpo7Uq<KJ[\\ie7O./.5g+g#qqZI4:50*c|uqJTٚ*#WI^~=9>w9iVP`IMcԟg.nV,\yv"9Ic0.fΏ'4W5oT73 +p=O֢T؉Gw$Hnɂi$]FPCѩ̺'$o4"F®7sמ+6umS|}ŭ5 ,-#~yNcnp goN|7~2GlhԱ 9kXfvS̫T:Ndo:"ehJgPlA;]88>TyhEk+"XBq#LpImq ~Nň6r즓mc+rF́Ϩ'jԹiMF奵>ux!RvАy^Ü0#ӭs˖R/J[رK-QFs1띹)+ Nm8ߨK8ax+~[zy&yiN1M|ʳGجy˷ z_j璇3]J4ܓס >G"#ec u9\J)Vi]Q&X?(IpXAbAy2u3s~OR[R+}]G'#=Ҽz59eEXs^lȬXaϿ5b!'(nxэ4E e/8cJ(%NM#IFzdΒdfRhl+jlq5(VI%UhgcLg;Ikܧ~[ԧ'4#'1y8}9(Mlw¥-K)][J䓐@vTΚNMކlIw&o &^әvR |ܸz떥G o6e]H۵IV :]k 85tMcV fO-Wr3tW,W-gS0RNM.|/-kn7+FpY;߻ESV:)]74Y W'QӞ*tIZAi#6i M%[wyݻd)EVt9SdQNw'Ws{{5?=KPbM 7aLP߳rPEljtueq؎*ҭScQ*zw5VHPi .0O=:-IK^[J:-N/ DEn,}vF^fB'ȩT+ d@@*ɜV eɍ\FNN3Y|z{Ea i0o̞}{:/$cQ M͜zq?e*oӚ`Jq׷zԔcDdj~: $ُjpN.O_S:]p'+m@cZQj-4z>mymՍq>c(}t%⶚M1[ii$vځ0<Jɫŷ)EʫcxEyl$PO{jʥ(]}{,DG?27 `jeoשQJrO# C4",]6sӿ8ZUJ;t>$u/m'HsQRS+4axu,\e.qS{OqҸyjSԵc1U|ͽHSݜeɥYO/%˓XIh Џ2z>~&ewy6y:U{:qʏ2 ;Nt*M |ܮ0N0qϥr|[5ߦp5#v6ێx+eUs&tf[ylٷ 8,2s׮23ְ1^NZNvgCD]FdO<_zƬe/zR19xB"aXLDZⰎۺ Q糦֏F#tč~gGYnsʷmF#c|bo/i2zzQP>Z&)!F+.L2O=y)InхZݴi=s^\]_<;,W9_côE:0?WSk.=k:t8L*[,G=+\?,]}Q5~y^>֕9s&9%УZ2d=>!TԎhւn!2r:WmHT쎹BHkr-.>h$ y̼5F:K)W9mY86e!0 HZny~Ӓn<5j=_v*b'+q2彿VTcS~:tO Wv (6I?>sJU1 ;iK[z->qPx{ZКd# #sa iO$ݒm~}Kx]H)ߖVm//v  Ōڒ̎l>܇qxH|ןe_O6f|=ɫMo{y|N>ww~u; L2ik$2wg CM{Yύv}B2 h٤ďa.m#k^8uOEh2)T[o+\{)YLE9QkQNt]B8.|' +^_:1 Z^%ʿ_3o7oXdfp v(B=z»c:^hzB݋?.gpA4bٙKOWڣyyF קV%>Wc$/ʦEO|DR>fm :#.$3|YWѝJ5)Ӌ串#̲n_G#W-hVwJ<ы;4~S^yiR,.~V/Gr^ᶽ?1Gɳ 9*5e*O" r8ڒHIe< VTIt7q䊺d[O01@eڹ9݊׾t!J- }Swhqi7m??]~ʛЕIsߕyO ѶE1r{wԴ3қW~-Y"U[{8h˕m֞iEX6ǝ8aNn+RZR{/ky?>1Mk@ yj%^\a59|i|$][C*{6Ƀ?'jzvfd -VDVHH1I8$58%dVɣQ*.6%~v$<ݩJqv/2VXZnfAlXs<]&$_U6m";s$?dtUy{-;nOR"Q2HcGRNܙ6Xx$'#8\'XSe)st:[0Ȑd UJ±N>=j5z/Stԍk:pidm[ }3nYjZ܊0Qkce͎e%vfC>ƪTӚv0tZ7ulXIU!*ecg_W4 QSOm:-k8Ys:lpx\=0ʂI]OQ2MvVhwN<ɻZҵys?-Oi-´䏮z<+8JIrޟ1hd]c'N4SpFI121yaznl~:snZ7K4@^4;boN'ڵ7)$E+wm;Wg!&7mVrZ8Ԏv(-+H=;3g> ԩ.iyhv{z)[%xcgn3G5\#&#kQӶ +ϵK1 ꨫG <~5 vzjۿ,a@YcG|R9I{ou]iCDF-r;8:/RN1oM e<&/#׃R4rZ%b}'I%Wj Œr:Ó[S ?2dYKMKhyr/̸#$drHda^m,:6hUv5]:z=+8GkX^!LvHf<+Y hit$AYfcGFnQx1쓩R^nyBW :ةkE4bH1%ܚ1"aWgMJz#U\$|d7ЏNյ5 筙N=9S {iv2vaF+oizZy]̻Xe`2VctZYqd.BS?tr)rZ$Ԍ:wm͸|VEJqOwV(Qv*1rzzw5*=LgVe}Ο#7csc»()ntSXyGݒ!9Ive`9=ϩ͵5Vt$FdPWl:u^xE1{}y6-I~ds 2ROBxB󽂥Ni{M7)dqdrׂȬٽReZ{i1y敶WP9u~fjVˡ(JJ.0n悙csw_\V1SQLԲ##3'"RG^:OsXF7燈aWfKXU>b00pq@? jӛfYi~2z(9J˕8c:dZXX7ystӊW=^:*U>rSWTjs>^YNK'ڍ#~'<Q J|ɩ#>IL{Yz.xx5ܪ-S'I&ܷN9<gҺmwk"FTJO]O1# (c3]4*rޒno'8SObH4c$|;+5[Wk~&i{h;X/ E ,P>ϵz(|:;=p`=YuʬvqZC^6[uF=[(A*-2u)ia:TIN!kve{cБ^J5R/rgZ( qtaA rA+آVyF"RJBx Զ2wrR%(;IXط.%9Twm>_i=;h9skȯ泦剩 ū맦LkSc'`5^n;Zů-sr/+ڔufZ".u2h=`?nΫ9v9Q/Eh|Hn/TzR0P4q> #$\U+sN(J4{weȒ$c/̾W z(] =c{FSRBIVj[z`mBJYaU{%IO$^1Ej#kUiߩjB>Ydo2QI>f؊n-N2rnieXeSʪqUY_NϵsԵɋS˩ۈ;., Ǧx֊%IF3I=+}&A/ξ[m'=y<YҩYޅJ5̿g_'p 03q]p\t}NZhXXHhxS?)c%t:0<ͩ[c봷OtB<\ȋ%8mi! xwϦ;w\ocht씭~JK1g|n]::j:OI&C Fxxڤ^5:5%kCIc*bXܣuSkFi){=-q+JHH+G'Y˭F~ӣEȪȋdVlp\hwV*|v cբw0bwgw~9jXx4+oeO{k=#\*qקo,au\?81~_z9rs(1od1H͉TNyG~yZR1ةui4axyqQUi֎Z۲ c"\*uA cY٨ߢGRPtWx¬ 9Qo8>Үy>YGݟ9_%gR7N3SSrƌ otJLjЪ. 
W:O7~YB>X% XcUfV_&zڦM_-?|WkM;wf\1Ǵ"Vn0zcLW*u9FtoNO$k0P{siw6U߻e+Pq##=Lo|%>ռz|c*+IH9ְNI}BUI(RmtsVUl~TTSڧ$53kQBn&?g=x~U//:|*Os(ek 8b@+R=EƥIYt'm#Č^yQYG*{u~uT>DADglt&1U*ȇd(sw) cqK.-U:jr _1Mm5[ݴd ~WԎ8隉¢ZE*rAY 4lؒEq(#F#nsGJZ\ykV];^KOU:n:?S~XF>  n2OO+d֦4Ў;A@N:JN2t*dIّkH ~9ϯJ*QTE*RS-MNڹ#'=iΥ*MKVNGʱtj]E%}5Hw2y4ٳP{uYbR'j3v9@tSF~{7څ|,U@|ҒI٣ƕ6G)X,Y>aqkp-ͧχwњʮZEcvsy>7QS+9{X&hY޼x6RMZ?0`z }_jSue|{D׻&O^\uz~ѺC4-TQt(ת$sjJ3k/#eL|1SWޛ"ߑqQBw3e_':zTjNR2u+L$MjdQ$=?(6FU2I]rÞ郑]Q.އumT/oub#o5,deQ99GJ援RbJKA.ڒX\/"lo&VpBxUNSZ>IjZTl H 6OqZѢ>[]u:)Ҝ.Zj+omp~G۵\$)ʦ!SO]Դdt3ǂghOy== ax6ֲm0WrB^}q\Q˗X_̵.5/"8c^c'X>XQz8jRJ<,^xR؝smٛ->OJI)^£udJڙ$-U5ӃcόK3XjB]G[pfB6`>yē VIEC#J)tJN{::6Z8Q,.X\4E|lB'cv:$i۱in~!f1`q2l#^^[F|֍%ۂ~PzGڴf' ;s[B&̗ $.sYs]lvVo.̻gW21'pm+Q顪N7js2芰έ'+:d;:WLeR29QqiٜiH#? 6 qzgM$8v21FrIy LUStcRru|7]n- 'ߎzS*jrzmSN8YPk(6㞝MmS JUgtt7?]㷒ID ~U8ֱ)T5f8,MH#,VGJ_hy{zs[SZRvKFT0R[kRګmUojF囻&$QTG1{a,EiF>Ȫ~8{Rmwk ~y K4rrՑb 9.0WnkF:$ȩ/ieISbLv+]Ճ$2b?y$}6̶-̃z5\ZsaiR^`{TT+BQOhޯn[G/bIKm`ۙbd~|CUF1m,.r)֥OkOBݠf+=4ZЭ>^In6 -C7#Fįg 8 қgB+R{lz%qCj.V7l$κ˖F>қQ[9}|LPI_)8^t=*8xӃ{a|VQjoi|5MlXݖWJvX8xFcȑgem2qU9b#Shw]HYi! b6Ȭk֒h#ٙƫ],^VZ>wukF6sxr;1q }1\擻"8v_N25-:6ҵۙ7Ϝ|LT%M\޲M[l3}G?F=6i{Dވ4TU`Pwp˚:t_RRS=Q.bW~Yw OʸsGM/GMqq+ *U/%SE;[oRS7 wW g'Trri>~Ԟ1@UdZ1k3~dWW VgڱB2R붋32F"duZvqotH>8Vats\4)_~90IFv9ZUY넑 ǜԞR.YwSۯ%6ޟ?SwLV_a=ʭqN}+:e;52Ga}ֿyΥmmJ(1#curƜVg*3N)#҅:lɂh+ QZ^oGm`O{qy-*dA= \Э/h[V8ZZhzt<^K$[I Քdh<O>ֽu/%{J:N[󹇪> 4ƵA+`8>'픖vzekKxd`px ht9#Nl4#!v|΀c'ּItۚ-Iw̅d lg>ޜYk[זZI'Ti*e\ʏ樌n!W8b*/gS 쥩/qaHڃ2HtԔ}L՟7+{nT+22h#GDi?߽uirʫ|ܞGKMsY, [xj,JwfL511hhk n RDCj PoM½5N~hҵ֝6q!3+f*UiX#"$pS g:w2;q1lW![8zSFR4q}"¢9ق>usWU7y-:{xq׫~K䌭J&$%"sq[BtSF^nOZI}vd7d iJ54Ӝu2Hbhb.,!v z\׷z:io#(~-=YchR }74x#Im-+#;@=g)G\DKдMƮL7Is;מVrJRr֛"ܡ||Q\u82(ˑ+C$U$ FZTUzgRQnq'wZI&YB)(@/{ЎR nO^ϡӝBPKCȷj[xnck:=<|GhKm-үR;7xՖ/23g?8Z2ֱ8Y(GφlWZ}`8R\c;$ckєu=j85kw[:kRjA,'lgy)z=u-R7Wwߡ\YQ"9GJtWQ\ֿ\`6lۼ|}q)ʌ)-aR_hӠbd0;UGgc+G $]>3J66LWF3rϫ8O_.^; ژ8Ҧ3ا׹F}~;w6xy=NN0s6K*]+SuFR@r65(%i:ZŴ u3t%]nW5%aХG 6k6w!E7mPs)ƬgRJkG#ʭ66{5Ƥy}&595߷gUu/26̄'Ҷ:xxOtga9M5ܬepkƎ_ǵsjZEǓ)ެsN c6؎8sF=p$k7g~H#$Q+Z YݯW~aa heufcRqCR#nrjޫ/CYl6C; I WU{2-W: ks2ZU rv<+Eݻ]B4)^z*&tVfّ(p=y{H_[ O:'y!ehK423)28n&R5'd_Ό(u=unM[[k$c21{,[:R涺\< i$6m)&X2)zv3VƣO?^t+_^ɬObfkvSaې}s^q8ZRZ1R qm~_ 6Pm6e0Kp<8xdU)+Mt2m$;Y[i[wp۱ L1T\R[+{F=M苁|~%MSЕ抢.k#hU̒7ʼsP=?C 񗤕3ϝѨW2{$>Y["'(-F9@`A'V28ZjRV}*)ZsoV:a ƠB;G\kV2Q+ETx.&pʭ$Rs0vWfk))>t2NNZZ&yyݝa6BXl6q+%GmغxxpZ y~ m66mr;MxxBZϮ˕o2I]faq 嚌>ψN^\s=Fm?-/m.TpSO!dU1u>6gBN[3B^*16GB.[hf-AہUmЊFtk캅}aVu9ymM甥5C"(;̾#ҧ.^ƾ K:YN>^he8m;+0~o399S\k }.pqfw$}}+5vO[#֍h%u6t [oI#|=+7IJQ_ lTcd+W{U$^};$:n 8UY&}wa2yZSWzj߯}lgTg^OVm]G{u2)<3odaR!Y؀Pӓ0k:sz4[kޛfsߓ'gxVMBE2aG@㓎s+b)G3эeh#<1mi[_[a[ٰɑ_>Q*N~iUVvׯ8 ֥m:B˺_XX*6 =F3[ [o=m|cZܮ)>sӾ? 
$AgČc9we z 1^~a%jj~lU9.<׺K;HiXu!^$&QLmrs|cQosͭr{[oG,RIKyk#gOZ\DѴp-?Ոqd61`c(9iL=J nYxR]G\XRlo(9kZ*aa){c͈#|OA5KV/vcy9$CkԔyz*^ΒGcڿcd?у,o''zqש1kW: ^mZ맙7?/>ʬ 7؟Q؟qe: g-DV!oxGmmCI8l{WS٭_{tjG/'%_;+Ze-$^1U'&Os!IBI?NHLH ~yq=9m':d~Vxq柳ٳG\m--)o:kX]F!~;2{k $m8]0婭8ODaȂ7o26TLc䊿fyc3ZLwf@8^^&nySGmjO6;Wa^yUFvyFW_'Ac<ڞ ^Et".ܕr:8=qݭkyjKع%٧sҴ[J̱ 8V gG<}kʫ쌖"U-_K8*˚sx8#kZ4~+2v5t-FL j<C:{2{nN- ?֪pbZt'^rkSJ̌9s#>B0TԮ~M.eư>g$c=?x99/sz.u; i[P\Fb9'<k \؛{1)ʗ^Ga_!֓~*XOf8 :vpۄr2#-qSzOXq*\yms“51a^沌xꊒhQ P,1psT΢SԨ)h^0w}ͽn\Z XT.!_s\J,|[Ķ#2c$~ 3=mnXs=D4DI6"f*T㞾һ+BTwJ>S% [I#yh+a0 _-'~-Q%6)T涫3+1b%,p2s~5yۣSۉ9^};哧(NJI8ix`3eF@#$y沓bx8/ʯXmbzZ·4ewN*)j_32T?J)Sn654492I5œ91$_:yR%oRڱEWvXہ6w9ƹ>XݘQ^Λ]akXjӑku#(%u޺u'Ur-sJΣv3xʝ qʉ &H'ֹG si7UG  ٘:.6x~+yI(6y3)VM2GK$Qz X]S;cjNw,E+FBp089ޔʓ[Stܭumqhgfɏɸ=O+vZ#7{)TZ/ ,vȝwH۶2zd;TU%y%ʜ'mza F'͐@As})sJڟ.{R) qOxtG2Liӟq_R)]R-Xڣd+6HƗ ėݹ~ϯy;#~iT#MeHۇ䓁{Q&#8:qoU VtUdƭvk-ddzt&W3,qZ{[ ߠ@84iS:2;KG4gheeep{N߿“$[L*Fu=86:')]8OUȾ:bՑx;{Mȓov/"2I>gvvTcM?"KXnVݦ+j9GGO}Rϯ9.<ooJݫN_c&I˚+KU\*$ <^U*]3 qRdʝG?UVo/rv =8)Ծ4J54 S I79򨃩VZFU+O+gܰij`ϧ9PkR\XIfUϕߑa)r5qٖJ֢s2Ơx''קNRjsw&E(!<{_3P<^OgR|~UR_x-R; qYJnV]\cӹ gR:w^?a*V%I.(=čn$Nݧڧ#:˒dw ,w+p}X7,5{ͬqsAsk^ɍϖO̿Ľ Sjc-Ѭ>r0ódsXJr֜I4"f;3)[2:v㚚GTcO\ce_ṳ ta+"ᏽaI{m:фy~ey^IYcxتUl21yUsGQSiw}F\+ ms6J厏1~&ۖR[q1ވ\MI{X/I.rf݉}GNn0e*虍qp# ,qYctt{$5[w:q; DXKK4LmGS gb>n{~κ)Ӌl{-L뫟-JseL=#)}cJ6Qqrh=ٙ>u)Fӆ"imĎMBGDk̍.]YI9Ysc%gcoFfY2pGc 7 :_^EKdƹfe꽏_֧Q~Ԕ;z\kpɵUhnaEh9KawjWH<1ǚdzӧS#r-h'9ӽe|Vc(-ZSe0ec؎ }e*vm =©el0R8U쬯cljwVc'VZK;U’NZ^Ϟ\$*ۉ Dj2 u3\SJ*.OɐKYfBUm'/ZhiK߇/shHPHs|Ř7"׿ԧj?2(cK'U98S:Ҕ%föހ5NHGOSt0C;߸y&bS9$>Q1W Tkb)B.idiqܽqM6XȬllA,0|Wl}e>FѢ(487F-ǨEo'̮u3R~i"{k,3Ip LU_39Mw:me{,|6A wte j#>ZkoZ ޵O\aViTqs;=6ͫIy>[" cߙ${ȶE Ut.TA9^;OxryE7 60pHSuZ:/ٞ],}MTmd ZYdEȣ 7ҹg(.xӊT|+\Nt-LІ_1@E^N:w(Y(/3SFd{82+,6H#mopO\u)b- ¿,3I'?a6Qb!<a=S\QR]~gndVfe;0ql-mOطkXP7Ϲdr9#)JVc)SM`S2&+s8L#Uߨۈc1,Uo.m=vjc+vRYW.93"W$iOg)Bps$U6%ǻO97?wsל쌬ܣ4©ʓ@hݍlej1}ѣd9+2U*]+ *u-ey z2}~UFR~7^^GKX 0xvҜmvzP7g2&Gmꑴ9}ulYsQdiG2fob'TRZEBeN~}BX@߼a: gCHNzQ ^ HG{aEQ)7TtʤPRmel6OsI7$ʆ&2x9S4T^ZX+maip,6ݟ2GG挕cZi;uZ.o0 rדWh<^Q52UHªßCҼj;Ijpʏ+SѓI ̸ݳ0ּVkZQvJҸBmx ^kɭx|:)_s8vw ە=yXʑэIZrхq/DOσd`#̏y4tG˳Zחϵfٳlyz֐I~u*THQVWs4pHcWol$)(Te+ZC;:NC8׭Nj.tG֓%vT4~c0p0s]Xzu_=V?VQf,B۷$l׭-/6g}DyBmڼ(R*طmfId< hsֽuZ1oK5"Jz#SًB/i"$#烎QNUnF8u@&4MEqnn溰[*qM6 -qn:y&.ʑ扭h[o3Ycm_`r 'yJ0k]/ʷ$+S;tZi77gw!8b9ʸIFV%Z2Uucj.Ovl@s򏙹ۃÛ1Wnq.XٮyrG|Zo.M(<~x|]w3w~ʾ[FB-d*+G a 9ʸK*jKwݱ}dOmt6/S:1-}6RΒ*m2TjSV4j8&X%|"%4e#܌ccR՜u?o2GnF$G#~b;9=+q8ݟ**ERfF'vO6YFd1޸<{n>Ro]Jb<&ZOsꪣN.ӧ*nVi.x䙖Hգb{=s[ZMV]S7d;s;n_kRYRK[+koR }@ V]Y| {=:V9?CJqɠՔm'qW\ri%O،6$ Uqβm[hg7rK==+sXS[t*N\sanNKp9tJ#JTR6Z?X$r1ͭ- f$* :s݉a3bBͻ>`l*OTVHߟ[載|-i1\^r/vWh|v;A2=iS5:Зe:nu:3ӡzQ~וUaFk8P.ϵcV2ۜҌ;<9ue%R*-)GGK͚qHXV^?/?ʹܢ艫J]#x[*~އpIZQiEtʆ[wkZm]h&צ|]!'fi 7ϩf܏ThϨat~>J!k{s}=GO-J?h<{i}k2HەA\`1z#߼5蜶״"#A4y?6w`=zK]=J6w#5o/-|ŷG"̡x'=BT쿯9rq Fqnc[c~w}1+i]jc<իsTw0kpK1r9Тhulߏ&/>i)f]sqRx89fLN~g+mWKecύ߳{ψ]w#y.$oES 8%βxht.:petp2V)8˯7g_ |~HWAn i\V\cE|~+T[}63\f4rwoKR{om_˒*󞾸2Ozظ<;5lj5˦̒ Ǩ@ouq֣i:WʲKk]Z)V\ۑy*Z1+:#پZn6)k2$>*ⴚsGCLax(cb>(rmw㠡kGm=|=]^nTn$?jQ&i>V;0#e{GFoyLkbArIr}1isI.z) 罺^rs+*V}okГ}I.cg<O 汊~O79RJ$X3@܊}4\|bG{8Ct24.gAQ7x2Xu=daZbj6`|0?)(ԩN+NDpLG\17ֿS ga[5*u*R >ΝH'6͜ns?U8~6nY+ cY+XxI&J%yItNqgӎ7O4y%VQԕZ_2m͓yH$޵%%+oק#@Ѩ G<\r9+/'dv#v3S',E>XIRNhUO_ƴvta];j?5e@L'RH]|T(D BwV<\JIJ;e8Ny7AMM?b3h)Wq_Us9JS%P(jK;%*T?4=HGZRb{SopGNz"Yx6=O*Q=ͥ 6l$g={sJn3ik̮d6¨>fNpɭW1RԤRؑ#X&ݾoqqQagRڅzگn;eO4c*n]݈c\y>ʮG8ETJv\'[يɓxsY*DsʥXS\0f8S&@˷gǧpk暋 R^͵$E c1|61)ĩC 'NQnKqBbqUėc~8sڲ+N7ײؚ+[LLfƍtktwta:~ҟ=a]y e*nP3a+uM'2NEq* R0qzW OiJΜV#SԑD>fI* 8ѕ=]OBƝ%ŗwpȡ n/$1לQRTM T718HT9J+FHǖqnk-HQ|ʹnR$O#gr&/e Y1:g*ӝu(FfDt]כy#p1L0#^+2k#:Ut]2ז4gcLY;yiI6ʜb~] DV\y${{iFWNoccn:V &&W6G[XGpJ1ɏe>J<9(<]:-P-+ 
T\|G'*C0a[DC"Wpke%bn=kY_njҨIZyY~̰(mw:F ' ccW*˿oCrǕmntlAlkȭqۮsNPdZJrT%bKx]xxʫ,-8wgW4jBko.ߠilđM2ݶ8x[֔a%Rއ<)ԣx7}v$d bA>oE9QT+z3)8FWIRZEܻ$z~rooN,y;T5VK˶fQ1^Ͽni"((}5FTӨH#쫁!'>t4ԟF^%͜؈1Ԏ]Xz:ݯ뿫')a"FUVזgN?[JŴiKӧgy"d(ц'=tS4J4)NM[g(kctA|y^gU:}kZ>ZvZ'w`ˑapj*}T ӾBF#Ό2fbKcw'ӵo =usU$#4X6ٗ_ pGҶjZҳL2)r{5"|{RF\y{;|z~uG(˞[QtU~l}ז oC3ʑלRNgmqVwYO̱ެ8_Gkg˧ʌ]I'YX0դe(˜R,-q,2*k}5RRㄷ-C8hVh:LP\6J2:vDZz?ga$I72av {ݛ߉twȧ-E|z#t(:xrjv3OK{Aq3G?aX˚4ݛO~&Y% nagFkJOXiFQi/BY }m ss+g*mI>*? wm?2̶3O̢0~T|ޒrZ[3RݼȣҤ`[hSHά(Z65տZ<Fh)OAHY?D{W RQԵܥ'7䌝SAhcPd=ԏZ5LY@Wn6;Ǹ氭WZӌM;ө͙FUv+d00b'{yFBnRV-|*ph?SYݖǙ_/FJJ=-8[^cnF[+)Tܝc*3Tm{%V 0\}MT)[̅dD3Gntc'c;ʗ-*SV'i$]4asGcΧRwks:^)7!{hfDk*WvX=~iiimM'm[3 2ev`37r2HzX-4G]?.9N[_.ee4rn*:x+9Zre*ui#LdXH%8k+͹EEqRۥ5ͮ K6 =9Uکw)S^Vqޙ;\[䜝}=j蔣J"{[ǡM-\{qڵM/r/1] =;WEDӌgvjk*gU4ln~P;x"刨lrTy['!kh%VrdYllnr*GbNJ^v&u†2|#6}y? ? kCoN4i23eap H+8;J}?AM}{w\d_-zNVe8NDua㈌wKt&^_c<1'i5~δwsZrw[[5CX#mªdp 臻uөIE]}b6Ѵi"F9'Ǟ0mzsԚM+Օq9z?v;]J5-}lsƲI  flq1*c4NtUPI +YGIWMQ]_茩զ;[fR8YI`r 9RGѣKw{zG*4ַ{knR,Q-~\lH̄"|+KE~iQnsm[Os/!!uYdx?Ҽѭ[86㮷UkHS]4^sEoюiO[ImtVV7ӫXշq+w_iө~y;IBR-LOJWSF%"XA܋Fx*zOSjueqvGEڢ|L>Ag}5֩*reTct!%+#{W,ˀXOӞ+QւK \еәn'YUcI*+qg88ʩ7s*_D^`Y7rc#T˨X;Je8A7$, 탞'Seޛ驝:sgA@MwI֜bFR^2)i!cgo]fP0N2:[:dPRW5Q(eUV <~P6b5nk-YU5U!x½ jTinJ*R_--̚pxʊ '~pmF7nOo/gQkt;JOiEJ*rw7<c}>aVRҳN*K@[Gُ5YJ .ReMIt3;yZ8*۵sb%'E(k"7ߙv3:Ana򑞧漛TZG'^HB3(;Xg#DT̗IӴ-~rL#M=?fY+ c1u2c;HA2qқK^u11f\0ӏnMc3ݖ}:l&FҲ6I #t6Ӎv6(|W2*0Cc[#Ue{wԔʫ"]v){:T9E`N4[n &1zf)j+ RV] b47;}qӧ^GqѣNͶ\Nw$RPc:krpn<ӔvNs@֑vbgjҌ-)#&tn8`܊LE~֋m]=ʗڲ;Irx99.Xsn5(}RYcQavzA꭪RkuN* v]̛$Cj;N9?UEM%fRpxD筎fP"Ց_>]sc<{7 J/I.Ǧ#FmѬ{ rFvJNM?-OJ#FNwg-wLҤq&خѩ(Ǜ_y/ETU#4vݖ'pMyugM{ҝ:-ou}N$7aUa~_jz;3 :ڷv=xqms Ѷtt:<6d%ӵsʤysTjM72,<.~Zu3d׃/ܖ1,lJMsFwTӻԴC!kV2Ē:֕:xYEԣuVmm];y?jڕL)ϞEy5lʴl7c-v+)CLOy7o˾Zbл|dg`wRSk掺*#9Wex΍qj0"usEe x3=>ca', ?gOHީib3R+G<ZR/f֝K6R3fWYC|`{jQBi_b<>EDoYX1ݻ< 8%Caר՝дⱑ2mŔCiU'ت6{h'5Y.A9ʻ#Vq} !ΒvЫz6+XcC wcZoc2k7lj5垍XϹKef+ yBYN>`1:sSuqG NQwK0PgeB፰@>םR-7m.e_R Ck(o%]$s V?v:jP#umΠqHY|+*XwW{2Ơt%BRz\x_R>iݴ>rTI{A w18ӏ4i ̠Uc )RКR}-f\iU)8bu%J2JL&Rm9UN%a.]wu]*9u+ŷ[TH8gưhǣ#' ʛTNGte3EeX@ee3su;6vz>ӅX8ꘕ8﷑jNq¶:Vl"Mb:J[ Q)E[[Z=Ԫ@?*GNk|- UJ |6yң69Ta߰i%V7n_|޹3Vw3F\ίK0X56d<*Fzq+fHVQ1j^#m#yXcw;~Pq:댣R| #+]{Zۅ]uFy's]5+1ܵ4K׈,?r)]*?QNo]x\S` U%*[[ObpÑO˴v>TI_ψ!Y;'c1mU9%_#գYV;Ϟa*~Ӓ(qB~5s4XX{u t_Rǫ(֥jp^Д%vP7ccgSW9qRΒ٢C1WK78I*x^iK W1lckc#N]b](YVyFUc9''<N]4ԭ+6sm|]TNY;zrSvf+q,q)d]'4T-9(ʌ];o&E$>Ҵ--.5edh7|O) ªppAQ׹שMU\9̵̨?}p8NN,SVɍg.2{ibZGdoFҖ9קU*GM.k?Ꭿ[I}i偍AV8', Ÿƚh}?[ s]Y=5֡-Wi,JNPٌ1$"'ѡZon}wG[+c_dxVڬ[5ռr]I9Gp{M,I%8l5FZ8aZ+K}ɭj;"賨L6'=q9RouKc*8j^۲[xMREq+gᇮAkaѧG:3;''ܹn=MȑK4<~b?i[ٽZz~hZ8{Y$۷kEkc :H)F }$*2y-QxBittg~j=ljYCɐ;A!IRRmurvKB"C66s'8/|`uh38xvVwŭJkĆx9$kV?30*G8ݽN.k^m~w.kmӭ>Χ@ osۧŽp@䓜:4vPRSoԚBFDv?rE|JS)KT04=ҿ$X+nz}jju֋%~];3+*83M;Q%M[cPiiwi<0BT@$j,%S.=חO[_%=Tt8_ 꺿|Q5dM2EE f$#ߤN/ Z[hޭ۲K#*j?%]m%7]p7qIjeS,3]<'i.]>L{Ɠ|,z}Oa/g'ʻ+7'}O`k#L|®1#'xccɍXѴe7 x`GA*zdsb1z\孉SRսM 1ht6B9G g i{3E >noQ m*՝ܴE5a*VpQֱNmLt#5RXn&h8#oq^'s%5Rg썳`G _\-Yuv"X۴`1ރhRqIa2.i +6_59YOs6NIX Z*jƐ,ܖ?j4ED?&hSc9%RRM+pFVrRhҝ8\Ϩֺ s6~ gsϵJN.ngMm}:Ȏ_ u֒76+zI[ܸX`z1S8]5"50cB>Gh?Nm{ܶ+bS[Z,G *Ӛ\[ | .kurI0Ubi$FbPx`wsqUNumzYe$WVu1#iX :)8Q{oɵJrwdΗ[xE%7`Nh7tծrԊMTz_enkp?aV5;Tv>ϯ54T6jws$E%bY]>G$*sqqNw2k"h;I~=y]jRUN1|utاp.4 .ᷫ~=kSR\ֿpR3WKKpJ^\1d0W(aenI>m͆F(::OʦMM?aiT  n;#FBOǧj*2IWV˒{Zt AッRrV#|ݿVqwVPʶcpOVsuG}٫YԤYOq+ȫqp}}}Wq+;c$k~\~ʟn!9?s ďчq|Ƿg7ϧDw[o 4+=tN[:vi%ַ/8n9893b{>vrZ;5cZmkX%lm;LƦ_]aaaJ0Vս&è^\]\+ýw9OK^gS֠N̥=:N'w9-.L'Fv^eh^g(^,pqYҹbk:iĒC/WT=ڋS0+8O,KnJZɽSfz9V޼59dVnWN2h\bP}=LQ}W AnzoR6gL*gM$8=yd63+eJR' o}=uE+m2qce_Һ(=trS{<ՠ9a$[={,EN2i"SE+jb\xkҥ5 ?潼Nq =}~gWmɚ0ca^2Em>ޜYj.'[0kt$dki89潈Sx.JsޭꝴKW%I5XpX$D}Oo=3[t%KȀw)n=|QʕĔC]&[|E^XUYwc9fXTMwL*8~yI7ﻭ~^x V?3Lp<kӔjsI:=|/)yy~xa`Q;ɥ`S.7apѬ,?Q7{_^} Z6K~M;Oռeܨ 
6=N8?)ƛc8M>?*:e YwWS/ N;Hk+ve߽CS8s$pƃոsTTSMy+4v/}vQa4*-#:ר5ύQԔd2a襧ǡoVD-0B]nkn?EǿZᧅEK_o%Hdmo^e[mk=湦yLM0,clzp=<`k|֢kyy̟WyNE+f$ 0[L)Cϕ8γ\+Oٛuw)Wg#JQ:SݓӦή-.䓵{gȭO:!Nv&_1LN̽ ?ʺi6Hc`P-~+N܌sV-!ʑMo"m",#22G;漼W5<ʼѬ~(ǩ[G$uHZ5`1\yp*#Рөw_K^mmgGϗ gs`qIRT^|vF4k}Ŗ_gW5J/hpb*Pq} =o6,# 0SߚFmK/ N-/2}VSq#ey%X=z{}LE$㢶*{;^ )xdOy+u!(Uo<*QsCA"J|ǩ r{(I>\c(7ױ5nOH0ȡUpsO]ƝO}m}'uЊ[5ܟt7ǡf$YI+'hr}_Ўu[9]Ux9})1k SXi.o'y: s3֎[E+z|ٕŌ3]E9J?r7i.šZάʭ|ۏSIr=E*Qȯ|=iTÙ9عi(1uM-y!:Xsκiӧ7r^+tH /TdsJޟRZ?ko,/L1B[i3NJnseǣN0O\}8Gl#O5L_3Gٶk84oڿ>ǞXsf >yU;FRg (UǭN.[nlzϷZITs8 TLU_'p'+(ɭ'rtݓ,4(n|mˎAޱi;h8ԣNIr*up:qZG2Tu]ɭTiLCm~zW./c)s_MtJd;|Bk8% 5B!c#iǗFҡ%O.IdWݴnOKc(?"YDT>X*Gz5*;Jk_6T%Wo¯,+7s׏η KjTPRIo258/$Q 8SƺcRo&>*N~E+e@ۃ2󸎧?Tc'ˣWadO<7 ?j˖ZoRʷ s )fB8 f+?3SVúh"WkWUUO?ʻYEEjc*þo?2^&oQ9h(Fƒ(n6:ے8e؈U˸Hݶ0nuS8˖F"y[ d)6; NԹm];zӍ:2振%q'2o&\/pGZzfV":k-U!c T,6a瑒:8dU#V ZM4+ @;b1snH{q"o#GOg*tmXr$jAuYŻa(ӧL*F2Qظ5&^*2&i0sFy:*ovDJ哵gsvWRHYv`:~%T9(VmNIE+#ּJj>gR4!Mʋ>^Cdq횾i8/$R<ג _]ȭbe*rT.{)iFR[Yl_yҦ4mVAui@&Xu(+>*i'旽-ߪ/i#y٤UjN:OJVl76̻^/=+4aRTMo 놎?ݪܨpNjjF5)9Zwql\Cndr : s9QBE99]H!*cVU_?wcAIޤwwI0-fM[Pw~vrr޷Z߷Dy[jaw pCU9+u-M"',Rg=NXG)3粪\623''* N1Jwm%vchؖN>lWO_Ԧ܆mDyÁОx)f S䏓]I5țu'?0ƗKJu 6Ȥ)y!ڠSu|fHm:U`Pp{;Q*~ƞ_"0.nz't*=K_m XF㑡&)#1J%ˣ:,$7{h rc"P mF9oZVFvPkc8<yliQV܉ +0v<xsHt;0M%j-P=Ŧ<=zI-K>ޮ 2IuRq'8(/_&:Yy$J_=9荣gsuRXW5X޲{HnxOr$ r冑`w8_ڥJI5ޅSaVI'8ޕ+o]Bg z ۍOFƺ"yFHjn~6ۃc}Ӝ>"nh5kB{mOGLj=U*SIlƘ hzyEcԵ,xyxѓ6=OSG9GDf3}VhV2=1z[r9Hf2oϧ^3Vwy';GdCsϷ"~7?_CXs3\q}zVj9~Ɛ7Ԇ{𤇕EO|eJn\yVcL;2`iJ7i\Vhl:ľfЬ~PJW~]ʲfyT Ǩ7~WQkgW,1 67*sǾ3e{htF^4URr$pbfN:Q4|닥2JH*m??˚y)(iE͹+yE ݤm](_y$'_2%ݛ0uy{p8W>w>cndj/ne>[z-XgbҩvӜ:#MTc1Qc^\ʱ )#n OObOJ,mLiEʯ,_d6I^1ϵi*|gRWPԧ'#`&}G9aKd t2}B[\&&]%pTke^T$wVV_$D˖"cOٻu_ƶݕnq_xÒJw #6'vc*O^>Ɲ6=pwpW<bR2OkEǙӢ˯nLۣ|sn{zʟ5T˯ ݤ)$+!m$m9ڵN&jFMZAVy$ۿt8p;rHsW n"LEI.Exmn$1,+i3?0ڳ=T;@M9#(wzwօ^<ܚT9exS>hªT 9c/`~Smc&+%5j NܓE\e e)H4[>ooY|m* Fɧ>f֔iUwIgx`TF+h9$)W-뱡hn𽻉cuH|RNx=krЩ˒rmZ12-XiUi:8JR\+Fo_c}ɚVN7c=zQ+&g{ _Wc^kN63tl9=;Sٷ)c*;5V+r+D˵[ QVZZ0猩N˩By)eF c=^Q;cҴ+#\yen[8pOGj;V`:[tBD>S_|5mQxꜷk| ey$tϾ3׊S\­Zܫ]X71.w3dg8>XGT彋hzc[셐* v~98KF@vqNz5KNXJYPMU]yNYJJř$0.&\0#zY]Mc"Y%IDPX>FfqԈ X\IM ⥵Wei9eZ5YJ)RۡQKs!}NEYNxxY{H6Ɉ%R"PIr÷yyь)WKAu w+grqтl.*/MO&"ymvI$r:8<ݻ_?Z2{=L=gOR*&U'ӭyա˫OF:?bvl[2d?wfmע;&%Nj% _~q:Z=ZEYF$]mͽ3t l $, x#ߎ J4#zsY-53.H3A8=O^ʚ}k '**MS"Ȫ2d <:[~P)RO^ Ћwal0Cҽ,:++7b"]Ln_/qy73%JV2:ubbnT@3 ӕ8QyʝHJ Kz. *HWk *^MtW=:YMXoo¯ V2GI.m:7e.敺a+@H.Z&8(̘GMr3\}fOr7<֔,[`okNU9[l>7f\H[k/>\sǃNWUOr\[呲BƠ-vqSۥs9nc[×qwiՑUrvg*5ɰ0ߺr=ڲ6zK{35vj~O5:~i2n{6\6Oˏ^NkVq3[ 1Mᶅ|t/dȯۛfVۺ{w)Qc9Q'5^7,8lGӸXaӣVm!+{۝#=rz~O+ )a$r֞e:-[a#9K][5,& eldx~hsA_C5e+cCOnEۀy]Z㍪jVA$c-SZJݙRVۑֹ/{S_gf_Kkj1# <;wliS1Na-DE2yۤ*\#צjgN^kXoQM\"HyX*6)*ߩp"z~goY 0}e^N7eA*i]LєhE5+-)4aԇgmDO%9>Ќt5\וtQӵS.oRJLrNtR_ e -["h'x\@糰 $a6ĕNsT ?-IѩSDw8xbPd>Z`dRCaW5=5UM=x1\3Vuu]kzˣjQ6kO-% $`dOrэ}j׮=*p6j/*Go:T b{d}+ϧRjۭ/OūeAwm.T}$72oYUL 9k5NG(өEJ%_5 <1<`﷓qN9,z`09zԩNM4ϝsD@b!y6+̻W QzyVXn/~2_b7:z[ԅ.G*{}梭o1bnCAϵq2R8.h&_RQu%y䙲HF5~32#Bt-Bpi*r}z}>tk\ڊqږ[iPhp? ci6^7}+S)As'%ffJ#*9Ԟע%)huab1rjly6tv빧ht瑏ciqa{IQju_mϦJ4\U&|.a6Ѡ/$w|99'bQ z5w~ƥ9:[}lt4>ao 29t=1ۚ,RQIw|2{f=5Zu`4MPffoN ;굝jZy^&:uM5'ƾ[ڿ)}j"&'/e?o&5`I(l/Q.hhAѕO}X 9m̓Hy{py|'5͈4تpO%O=Ւ]5f>L}q=IҳΛ9RZHZ0asYKvl|ժr?";4m#wzVjQ\kTNu!9Ъ՘q>=x$oyk^Uq!Izr['Z\Б#Q9kȳx",G |DvC9gՖv(6 HVq)+9}jN!WܪSyc=jTeQb#[ZMbH sOęڨYp}mw$,1^00 r}Oz%R8Z<,%-?2]| y`T6Q݅qVK٩ZϡWUFitqzV{ZKyu6|N}kTv8 ݃iJd*R~ߙR(_NS?rFG$A= QZ1ki*+s'OlVu*ʳN,g.Z&f-i#Iܑڰ7+,A+XYcj04[ O1߶+(UHƝe*.?BkXX9ѷ${_. 
29v8i?k?i{+U!{( [pk:.YK̹^JQJâY aC|CcV-su]:9c~El66 yi݆?Qks"RGX~Kn 7qpͻ,˒9?8Ѿn-߱aDtղƮn;|VWiiխ"cMwמ9q\pRjWcOܸAݯv8y5i8|ϯsϣU$AM%턋Dq db t9J3׷O*T-Yy-#{~RSiQQGC~Bd_NIs[# qrI;D|>incn2}Dy\<%nY\ҋhuFUȓr:a=:gZ+~bl㢾[-).m߇\bU:*zgc6I.--?œO54 7M?O!Fck܃8*4R7kv&6>KQ*lasoO(֏-TL{XE-ֺqZq ƐemY7RGp/<4lA+G]ΩKrğ1{A3/ʣwIlkNiN{61Qo(pm$ncGQ}ܞ cJ0[Fz2*weNw0d$d4*I9Et(ǖvfnY4Dbm4~c SJI:/c ќngvɺhݤPow?v6%9sMq"<][Bw6v 1ׁ1r75J&pQY%̻Ingo(ʲVߑrg]oҮ2{*|_i-Z.aWmVۑWN)ZW"hLDZJ,i+oxu?2=;W~RZXzj?hu.`. H7HǖU@>gVҩM+KGgI8*qw35[y&8?ec'O%{{ Is[FSR]^:m:e‹.qr=kXU"5VNWTNVXˏ+<9ɬ8os+j6,G̒jN\6_|yx$pO^IQ(J,aeSJRvV-$TY ^tOW.dwCF};Vk#H岣$9{§35ys53Q9 3tϦGT)-o\KFc|XsҶ(E^h1v["_Y)ܱ۷q}}4e>ehڎ[经svr0A'? XTiOr(AsqϹ济4c۠ڴH$!~dsV[Q+l$ڔ?QU "22>f8|ְ^y:uݵVT1$bG8 j7@NGVq}[izך35+ֳ;{xh1CLeݍ}T}CV>[43$wRPͭJ/v`U_1JXRLw*Z99A{2@\+nRYq۞baRv8ԧ7ivq+2kmq?p+yc%NU׊GƟ[_m^deI(><"F'IQ8%}MsNzZZ7-9kjr kS=Q@b[K xM}/o3Br\,moN`~UME7cGVm:^VsG 7qyJJFr#u7[7.~UԄaQJ3wKUct+2)-XUĵRZSaI?//3ӴUe^\U$M uDGGi ml[n {qq} ]XƷv:M?pA M Qd9*{Eg4 ].&YULuY@8' j)jT/w~Ru)G?zҝix9ToNV׭"I4}0PHu] ﹠CNhJG8>f{{NNW1:uyؕw·N8#YTV =[3V7d,KOZR5f)OSJ>Y-dپ&FQ{Y]l~0o#rbStI{%vֽ2M{rzjkTRtna5U8'k:{ɘƢu9ְ+*|?* Ӟ3ܯ' Smn Vq$pwCLNխ8Ӌ8p؈V:VHFS$=2}kzrΨt[Z1pټl,޷_ {ʬvs3ƾlo%guCHtQy*Ož1"+#TBu=MxShԊ{[BiLf3nUf Ӣ%SU{-ź{mX̘䓌!LSZԇu'ʣv̱<[T^9^Vf&{cf ;qqYԴS?iwM]tQ6s]|ۖe.s9Lj*~'6BiVрv.7p\r|T _%c ϯKS/ԩFy$o8jr7&^Sh``r?*5M/?h)Ij-9.ܞIP )Wwp\4+nS0'`ֹ47cMB/KI1H2ls<f4_7KJ`0+W6^[ZmrFw6T3m#sr>cQn_UeGHha*rDmfr0Auy,fѩ~HK+X``}Gl*Jv+EZZҵrsZJRRRC"8-Z: ߷c&"R-l_1YSpR|5杖rbjZKyen{8mq\c.W5P(&=ׯ8ʒQw]neZ56RHݜ6֍I8[E[v8{OcR-Y6$"XZYB#O1[pGNGOJUeߦDԕJ] Gʗ*y~+R)]Gsh֩FsW3yɈ 0a';=*FJZ#ϧ[QYd1JJ{OlVTԕe}݊qߗBh.!!i$F6R:ps[Vdgn"<Ŗ#/ _ghَ^ҼJ*.kAnh.ۥg>b$iˑZpZMt8IQn8<e.m]Tkrl~턁*1;֕?es} ')-fD!<<Fqz7ɯrkaZlbeg27!RB+HᓩM)IرZ\\go3oybݶuSxOEj>jFW[X]sV,77\fuH}㊻qN:~teN Al0i '8淩R2Zqqs=UY)'BqZBTx|NTi;3ЩN7Z;˫{.~5+w' ezW3bhY[*x`vW^;c)(Ӽ>֫wg 29_OB8.ֿcԹ7R}BUHD(IٝxXپSWOX$<$W7V=/N2;OAorqBe=z}8jөNw~l}UY \W:? 飈(hޜuѓe8nY};s^R4v1蕽 a(ZO`Y;TT|1pN2^r7ۣ`Fc9ZmNQ|RF{gWfB]e^3^p1GRznmN04gkԯnV$_-ңZ1]NyVʓZη>СS ~֐We8EEnFUUǻ{)ϷXAjRH|Cgy\0*i#Ueg 2=nRRrRINx銸5"uIbyFԓ'V9iʤeϻͭWYyi18>+%VGbM^-Y&rv#]6_3sFךϵv6,黿>Q⺩Kv3 >i9'ejEMLlPmMߡaә'_޾*Sq(Gv9 vo'e_PztӗCΟ+[C6L7r8ͶB֞ݵ9JՒ;_-9Y:u*I{ݺBT^3iveUϧ=*ctTPFL<˦n?0mN|I{NU9HM+'L`nVoUۭľtֳHҝ̪#KՒIyt;>ErZɷ {_SPmF/T[}o˖%Fc?<3JtqR"EFVj~i2"xkVjZxRxYWqyycXjPHFm͍"Uۃ?[ѪQ&SyVo2'g$VU |s]q%Q;ˡ*)ϗ^ʣOgKĶʿm 1Rcx@xxꐄcƜKnzݕ@HT74-1s|Ī%.ge*7)mh사Ҷ_݆^㞕)JWpZtjtcVwLX3x1*vSTmZR nr9<ѩ;u=Zx^{#iZefbap<y<}=Z}hBcfվGh~"ј.F˷zp=;ktTc}:te{Ea ' xYRqj-/s;K,l* E%Ooҹ}c*u/A,H|Cf[_W4gh7]*z'i$H?|`}Yѿ= yeFͺ@NO'Nբ;%}R1o2%`q#9z"h)k/6i"6;ʷ8n8Dc -IeEOn3};<䜼sNz0G#JU[Ww*xϊTQdU:tY..dnTޜg\u*uFyF2LevUr 5)^ǡ*6W8u<`u=3yuzDtJ+nGt9#'WU*v8}%Ð?Y[ #<{ZJQ͹&NXB}4Yvx98#{Xx== ~Ǚ(|̙C*Uc"-L{Մ{nz .)A+9iJʭ&vڷn%R6D~""0;|ryO'<7;#.ZiyZ[Hv6' zVrٿSH{r~.f\xY;Qx ]t}{գNQ咷o3B=JT_~E91z`?Jᭇ:LV2n/ 䓏,Lw$W[Q8NGy~6FxͺۙI67̿PsFҋmwSi~>U?.8.4pU;KqL1On0|ɍsT륻hV3('ЃTGl2@>Jݹ*om:1N=/_ǸɼFFQQfRV{W]Z.'f zfh]vgF_Z~SY,n-UW5J*ܦrӨ.&Di݀O^=Ji+bMZY:\USW3QSk_N7ZޘLІR*"2 /WʍVѳ7liog{o,!K5'\g?Μ(Ovc:'6p;R*py·?5Iiߡ& cj_[v+fm K]QIͮ4̛/ S^$^סh)Zf{/PèM{* _L?.ORPfX-fRZ4Ϫ>ZiVLWn{sב߃xعq>*{E=zu4>3x];r`Wiy8éw.އyޮLWZq~uUw8e :`O}K٨)j޽JjǞ\߳t>գh_J1ތ(;w|llqҒvݵ}5u/KZn?7ZX|p :2{We:q#)J߻}A;TvߧTI`};Zݿ_ޣwV d?hUX;08<|uA.׽z֧OZfK\cd@eZjR[?[U0>_xp]xk63oۊKgyPzp/^*|+uڴ+FM'xkjoda1[$?0y'93^\r''MTNmmݞ},qXެ6v[f$Ϗ<æ+ҧ?~=,>s~i?ro%ЌZ\Y[S!B= o[ٶOn<"y߈mRg޲o/ $>ryl*Y_㜣RqKŏwVKmM۵e2Aq(8*^뾖a tz('&4jH.i(C";:09Ǘ:qSoCng>>(7K%/ xv@_=:VT:Ip6/8${`x(9tSUy>Z4U+ʝ^{q+8^ߖ1V]Jrc=6ߘ8$g}a~m(Ɵy$V" `>PrƞnI!{p@\x~DFOqXA?x73ѰpyqW.4LøN)`Ů_7cFMz_Xi۱}1.iTwyqƧ.WmWo#2OC[\=~lJڔS|M5sO-u #qnGNUI'wg3C$uFF|8޸%%̬m,^B;ۘ0v-X#H'W&'*t6o&~˘A?+;w$zVm(R{[hZ\%ƌ0q}xG_O:gR .$ҾI<沿.߁ԩ.e$>Muen2W{)K5妧! 
k+~ #]?һJ>(o5w74*TY,ẋqm/qޥowyhEPx6Y+In|gݱ73Ze)A=ykrWJqZS}]n$Tw(m}y=Y݆}-g #TC)FC L)n38JgR2JZW+*(rsϭ8J>^pΝJuR4> DžUY ]ե#hJ)|mO & d8=xbԢoSvsμE4mm\L0 HZ3 4w/ .Ef>det\u6GZ_sGw4h $鞆~kV 2ӥC,w10үw8ޟ\jRM%al^deVZ&R4,i΅YL°wgfRx#{ߍN)5=S яjJݍq{$k;sgFk({.o!TM=ygTfuG 外oHU@qYTʒQW:zD].6PN?URwf5*4g݌G,NӹISrzsZ"׺EF$ﶚْG(Ec& kޕܴ9gRWtjX*ĭGs>%2%(ӥMܵmkYݦLt#N<N55W>T! I8JqAɻZ bcFۈfU9n5BhUgU36wmҸ8oNȩ,EJm%j#2J˕&=qR7iSJdm-#5Q,~aOp=J~T/gjMYyQi}qշȾaS's늏Q-wҽ̏.؏($+_V|=e0^4Orǖ2j.+ MːH%-e<ѩ+wmNK+IS֦\N5(Ӿt\oRcO捤rGZoMt<*z3:JkbMgcuwB#c}+j 8cRIk%$@*ƍc\U4V)Uk^X^h%Y7\s^մ!SXRJe! \܌dZޜ=7.Jjk] 771Fs=zViSWym4k9g'=ҮfTj)7~)SN^J %k&aQU~iZ7\<@.KW89;շkQm%5L,l3sA]|Κ24j&ߧz_dJQ^.$R"]y7mE#sg p r~/H\V6FʬR4rpq~*RKcHRYXcmV>Qs~1hU#N,3TYpzqew$eQNVha?gU\Ysʵ))F>,Uux)2ؐ0$>]:qKo 6J2ۡvY ;~8Q-ͩMMzX6.0߈80\M:L>r{dfV]68N$zMrV&XculOccJRt^In&OնG J^3)SZ.{v7ʫ|H?EE=Jqdؤ}S&#v>εKKgQϕr=m*ww*NBڰv1 iRܡw6\*9b{^-HW [ۍ7#F=;G6Tװsp#\a_s/0zⳒ.(Y[w`z<\=d.?r# v[Q'a5M-)jRk&[1:<2H\ӻשN|v=?tU#y\zJN 7(@/.Vf \iG7kB$ܓ&kK=:RާfT#kW#B#y>TV/Gg'F5-^;G$q~Q``g\IeEN2:rɱps?AZQ/sӷ7$|Z0ٕb6N\gRz4imX˼մ*sˑjkK']t0&VRG=ܫEc$d8]luG)[Ŕ 3"d`d+S.RIJMw!SFbv)<{OcX?$kFrCI^IUTc$su&Fjzn0rZ PeyqZF䮑 I+(pZw?0q78SSTթӧhi^Ax"dO`s9XVzocJkMK_g&?wy@?1[b\㊌k%64;nX@Lw=5i.Zfmj HXak;|>z4*Q|Φ*:OߡpFbh4XP)_]t^taTɧu<+.ߔdK{=+MvUJҕU6fiKpx<w1WmظU5՗dc+䖓 Tà=}zj_%Bɶ Ujybi'YSMI-ĥ$tbǒ2}[JI$أ'ʟx$(VowRtlӜjs%pPJQ%),̹;d\qUɤk2U>[.G$nDjh##}KFrԗڧԱ%C<0 h\rk>dS٩ZM=zڵhᤑO `<KsY=h,c66b #i89Tm[CF!Y@n*NW J;I=FV*3=뛛ЗˬZy!F vvG<_)xyYt30enYK~-1HCn\[ RJ:#/ickxFۙN+~9R#D1I~'>Q)E} *H{㞼u}nEr2oݣj7zT#̮TT4ܧ,07 ?w'jՄFe1"^ ں讷:"NeBtJU<%_Z'{5u4ye,DbFVNͭWF}bXC2mmS5:MM6ٿ%Rb4_. <~;wUකx,8|G8>֕=S1Ijaa&2ӯr{U9?f)rs'2ݗ&==}MTyjGM 8jV'2 4#f_\ y5cl<;Fv_-לgOLu>tJ1OUZf@;U.wcN=늤ycx..>R3B9+\xF5e⮬G{{ C+>$W:yWsTm`d%Na}t\Ur/u]"n-߼O׉nU/59[sRFTWS]YOyU[uPe)3"ydMgV&~~ls݅xU-fסZOz3"y? E"I2v'srq'LJ#tủ58<"Y1rw9NsqSN;U:kߖP^S "9R9U*F{>iJ~O"I,v28)݀9b 0ԸTiEy$LYFm%9ԀoETWt}Sw*a9"NjWqkی=gN;>̣,G3d[r:J{JvmܧPщ6Wze)E;_SHԍ:|gʿԞo<ȼߵzX|,c?TcESU!a`Y [Dd8ߚIYu%gzkvY##4oN=2ycE꺘bu`q= /̏n0+'3S/u;3ѩNL;p.SLg?j8 {Օo.#EF^ 4~g)TEĮS.hR*n3zg.huC{ɢA$Enr9>j\ܿrqdBnsNxv["9%t:xp#IU26Qx&iӗ+5oG?&Xa,i*@G8z(Uul-. 
MWGqEJ1tZխilU?|H*6#5nW ^Xߚ樋j v%yYF1iƔIijԳ牑k+{ ¹jSZ۩dRY"_xB;<% tj.7ngS9*GlsMJmn+=Ew۴?:Ql\$V#[Y0&&|*F5/#jL7LUԐۀotg.kBK#U2:0*}~ة*VZ$yʭgZ4.51̪s*!h2^{W6#=.sJ3zZXv" B6ʲFJS+췈ċa@nW-I{h5iݻ6w-d 妔8׎pq\μ?x]^Ws|]ZMe]jBs׊sQTV>RE0ٵm%)g1؟Uz)CIhz6oQ[HG2lm8;}ApIrw;Ö0ODӥ`Ve71SڥH7?B橥YDEV2~PF3<:duyKKN7[lSJ}*k ?G' Z2 gkRTB%JIZ_9GAX/M;kG8=VUeH\kZrSR^kNiSN[-;/IycnuQ?ȌsCZ{.X'̭~JJqGzjEMemņtej2AO|(0u#TJϦeRue/ec ƿgB3L\9UYvGUyhM,,Ӑ;;A>cǏNK[vg>`c2syx}+X{8vɧƜ+| Hs\+5,yU\Ԋi_g{Y&?>1ӊ漚Lcv߈?/*+r`ObQ:iT*|mgU-_$Xأ.:{wo¶ Foo3BtE]=M3k)%Fh"E+yy9˖"VZ^ΊQKwZDGcKk#aIH#1#%(X4fZ]ױ 201LXTmu*T!Iz×_M%)x"ۮ7.@O\5ԍwshZnYj1xGnuq4dYosیwTiUZv:g.VMk}|_,xc0A=(|$ ӄwf{iҶ%*GFyoy)%%gRi+9.5]^N)`8ťm{[* m$ n=k22~CT@)q8=קOѥxI .1Yƕ:u.3ͧY=Ķ+Ih!Un2C{U9yZr5:1Kicr <o]9[n&HcLq`a;ƔX-32.⻛pǸx+/iQԴW&J]1\rQwmR?U:UԒ!-ӓ[K88`ⱩSVyZgoA<̓ݸXԝS1.1mZKl[DcQe<=+ҩvZ'naFy%]I-HEۀ2OϵcӦA+%(K܏ԩӛwoiY@gm'۱5(ʔSr{z [UfIg^q\~<*)Qiz"j >M*A=x4ڣ((5+V(ku/+%F rD*B0WP>/+3\6$6a]XJX<{ndT:tVҸBf p}raRJS6ޏ˷q?Ҟh#mnGN'4vrIr̩S$>9M\:8z6RU?0|[zdvb4#WMJuܢoe0A$YW OwwBv$lo`pc߸K?nIg* 7pj)=^wbݗ`UUo@8<{E4-$<ڕ1Pq;YieU9e'uӇ&6c'ROт]AaT4oqxhGzػ\a.͓fڝJuԘB&W]?`c9Q'ǫFQToQ!|,I{c;zVnq*M'bxR̪HS$]8ujR+GKfGhnFIG9jc>(t7Q9%tq+juκjtdiqcF_ xϸ#S9GW-}_9e\II:~oQ^2i2DkYv:zTQh>"Wwprsr(7r/&#Uv ͆]^q:aݧr7bdA#nb~mʹ}dy-A:҄[KN`~mP~*9W} R؀ٟ.Kgu 7 7bTk:qZRf2츅Py*5~Ijmz)u5 F&A7=[ǛY4+NV6V.Bn1$W9'mТ4FYվS##ؚ8J+6Q *᳞N;ƥOFu)SuVЎ8.?3G#3KO9 ӷRgߩ,Y9®R1׭kN[T憜t"(UUd뮜trtw۩<$,l71laۜHdye4JsЅ#e]dfLك.F={w8u{;y=u8Ǖϭ翖}Ffr*OcnUH/{Dg&$I^WFIW]*Qo}&þ8iFZF'i9i9+z%8O)-xۏ?\gZQ4#N|IJ:jr&Fg@@zsN=:!xıW&hp%Y[jnY1p <s I);ftJ1isRG"ذoz{Kݒ<꣔^v&YN~MB6gm5)z\Ųnf Cnݹ;;z{[F1]}J"lSu2eW ̊`sH s[>NRmCFoiΨ;r̫! Gkc 8[ORqԤuddسǮ \jJRy{XZJ믯MM-˖Vٿ$ԒB/[jvÖ7= P j$+ׯׯa֗49?rƍiZ\Vhq A?Oo4^N"|߂4qkM1+.TIϩ?E(nsSǛGil$̑,˜3ǥU>iEIvo3hӝ8뭽 ]6 VQff7toM$ +\c}?JӖQGm^MkD(:玟ק4Tpm8mQq]Ac4LʾeĘ9bNs򁊊{]t2?u+յ x(DI2eO U.^WEC._~Eˌ365aDrԞe[:Qj9[e[p7`n2yE)C'/5iOUk-4ž\D?.QJ{+F5KC^`>I՗s3 z/IVNZJvjʯ|l*sH'~1RM.}'Vemk{pW?2^iJRg=*nl>pKm~5WwF3 oX=iv"(nHSvl=ӓUp;}kh"H ѩw;'?֢DxGpHpNG_k JvX4(vw4좑̿*(=Z_s41tLo]ǯ$g#%7wcச|X?gYvFR~rjoڍKz K]p2=}Ƿ5)NJ֌czkO=n]+HB21O~ϊ?3m(E`$-qRAV##mni VӴR]UO)K j+s^zS@KO?sӠ⷏,WuUtf "KHvm`t(-7rEq,qŽ<ʢRv{pyұ_,^弼(9YsYyQE)lȎmB"(fVdb{sޢFVb0In]UlpDB sQZR)[z469^qn3qszw>ՌgO`^)n.^H%kr*Z :kbPc[=AߝrK%تIZa|Ddf_=9YJTF5Jkבfl-x(\F8Vu)Kk*qx촷_TBʬ03asr2NZ֨z #I",#2:vGVn1"8njmFJU%]d4>S+aNZJ3N.pIAJkl|ͨ J,#Ԏ+'z:ss\"bhd(a/9o= J+[g;^K}?Ƨ7S:Rޝn0\m7|>jnhVyo1$QP2!oV2.Vp}voOuL}Q^]4e%i]]t-G"I!8n0czQ)K{b)ʔRkkhX4<,H)s}Gt_6)^9)J[_rЈR9A&@Hmz~~JI˄OmrXxD| W'=:7~ndA[. #UƳ:yZWJq~y/02YeW y]g&kI/8$8"HsV㌞Ma'dќs[:|m4sr08:V1(Շ:Z]h<#i8_֏g1.zw8< =g$;}^5+̛w,_ϰ5!'1εJ碑]M1K+2: 9j17s12nߔrnqY){Z?#V+nY'nw3*eFOoMԕN{CtZu'C@-y[Yƴ'uxNi% oƖ{dqۻ ӒDoEAIs/ a[3*S>MTQu;fV<8׽5ӣ9hJl!v/cZaS\r؂a\4㎜TUt̫w {JIU-'0ge99\TwCG(:vEׅ] e9|=iYjwZ!f~TON)5t{Jrg}y&Bym a>\v Jt=2 V;䪮~~yW-hʤTgU(ѩ:6(n>h!T^k&2bF:\B%G=1sMMgI۩4*/2@rѳ6X?.U ϽyҏDjtKq FKyF-!(ϰ ]Vob]|+^"r[ռb_Ms)9J]qh`2܉UW̎?oj܏n\3dMQz9Te党PG\. 9}k5NTV8eM֫˫ɛnq1ҽ,y+7*C>[;jkXk7lH#WQ!_B垭nޙ *U+J*?x2ԒZ40,dbvLr#DkKEOk*ݙfFW@>]͸/n+(=NS)ˍ8D3w+mb:'#NghttԢCfɲe`+QZD֥5}w#֣%v6i|l{s*s:|jtTv;s{oqh\y4kU*3^7OʽoiI]t98򸫧gx,)&uE`v| %Ԯz-۷NS4}l~V*5txo K2equa2\YmOMת'N*OfZK}WK_e-a/'f_h>zdjίyGRXNK7nɢ|QRݙb2:8 +IZԍiTR|AuYQfuڠD68MZ5ٷn:u=N_?y̫p/)Jݾf5is+E]>sna  9+Zji>ߩtSV~f֑;[ò8VRnPy8s]K{Gԉ{5n7%ݴ땏l#d%9EjwS{J̯ޝϘ3 )Q|Nz Zw5_gؕep۞^~FW~ޗ ,c'n\Z@)ݑ nC\?k]*vq22uK^1oiU3-aNgkԹƛNiG7d"68G]|G+wʒiw1?,ÞOƕ8T%kG,~o/rEt:r,D}ҍo #ma]c ThY8nrG٥{uLt+3Kk+rA>5ˇ+3sߧs7Ų݊wd{u)jzRin-O'4jB󶊔Bnjk=}iՌ[qօZzGrFᘋu1LuWz.O_"H~fVFRNq*W:-& 6sFVD_䞘<1kΖRIoh>CgskcƔ(ͫw:-+u +:U8'=G~5yWoS*QRw]ɚ$-[hy`G։SFlakuqI 탟׎RE_]I{[Ti$P$#kĐܞQNVԮzɽഒHᑳ5{vTpyOi,Wf8%F^ԫk?б 'ʽիiN.6fpF%"Ie$nWCϜiU}fߩcηZX_88t\k(Ss.ֶ޾f&HnK>ty(ɧdϦ.Bm?y! 
fܻПW91ögmY5&oZJ/ ZLx7(K@8-_ztӢw׻+p˨+$.b #9N#CKDޖ<^cujK[;Oy<]t1URv[Jh=u9i˫NI'd̖^k#ߌ5)eHdzKčNrO895՜cS5[u:8\ Uk^s,u^l$mՑA:㡯> xz<5Jpi(mŎxo-."mƢ6F rNsrz&J4jF~ζwՅ̰T~w]MCxZ(,no2G庘1*IMTu$l_T N!~행yQS_aӶ,3/ۑӚznގ,W^>W봗$^cg5[*>eor[==UֽKG +KYHo A ̦+k3OBr8==Ts i]nz"x.Z$W}fԜ+X~WvVnot/|-VW3+ާc5Յ¸H8k?wgSaA郒F8E+I5 v H6X7 sp\87l6t"oV0`;f%)Y]O?nR|FmyUFezTpDh-3'==:cڦR_3W暾yP~APj ׃:VR-Z<&iyj +.WoNaNy eȕG%m~zyC@|w~a=zOw#(MP #-p0>ϩ55%/Rd!٦hʋXQݑӑk5MVkqJt϶CdڞbD&F^[#w}Q&݈g~O'_«Xeխ z=q^O=#W~˩*Iw14=X' c3})Tiխף<n-eiVFHwFq]xVڵ:1Q{ݯ:/eqix++HNXT)V H0>U)ԭqI;zu\,=*\r]L$y&c~ιkb#RiA.QRөR:/~윥-_kHm @mJ f]xFRNjrM{I,\4ӒSz&teѩ/unslwny>e_jix 3i'Zi9$UČ:IrӲR*qRN;%ϙo2l@?ʹ3SECu#yyZ<,پR7㡭n~ZY~tV:@6ɻe ӏL[JK_fz]?u3yfl ^h̑}ЎNPpMaeSJ]%fZ7gŋۏϽ-5=xԎJѿӏHIX1Qߡ\+;{Fz'h-FX1qGEiК;i0ՃHSN*iPbӘs&ݪ19kIԄxfv}IC[ZFcj&0qR[S޺daQ-Hxٕ>|ߺ0M-꫌F-\v>Ul~[Β&2(lcf;ݞpr8]涶]ϝRvjOzS^Igu4s0ŻJH_ҼMJ6N4vT|wxZ:7%oC @d׿L}쎍9_Si^Xe}swLWc5#Xs_Ğ˪:/Uih$]ϭmZeAҵ׵մl֛5ߊ/5thtEGS!mX~f#=I)QM=?NE+?%/.Sdx6f)>oG@x9{TRϗKj¥^F㢳]ukxj}[t̀\`FFzkiN_T%FJ5/հN]FA[#O+k8J.FxgܮI%\=M0;A'־ O +Ej2:iP[+{`ls[<ZG$n ԑJ|2F.w={jQx+v-0Uh{c zK^~ѫ|;M&+kݣ4C4;}?J21GZ@3圮898zΜyM};0"bRcwv_>iOSJ.Zt6ŖF<>rٝs{4[Dxp][ˑQQ݌kFuOeQP/'ҳ(駙5+ICC*GPKzq*j=L(ԜddCl4hUSdqFu%5'`Xr' 'ڻ%'ߡ4_zƍˋGle. Hzg'Oߺxriʣs4x)o s `ucNVE(ǖ_O⇻.PѴj}9ތV%>NuOk_Ʋ![w^3g{ W7"zxWmN1N*Zq8۝Y̬o|ZϙF6R;ry!$ѧNTQϭtB5 k4=񇊯fkX%i#s׮?mfc=ܷAhŽD7㓎 W6$wtRqwln0#.3~ 1yP%-8Zy >bn`7c'kJQ2TSWKy"Yn,c˕5Q2Etг E4~E^:Q˔k,߷xzgZ|N(/f#> itNU7mA#ǗoE{zmS^K$Er$8oR[)>ϱش}^uk/:NTIpGWG޽̩*\ҵޯB[ʕBw:Vqoj׾mXA.wkZt%-[8jS9MxquqHy;Fs[.Ӳܴ[i!Y?KZ2OokR@?c,c|sZGzJz}WdfjtR(P7z֘gRnu4͹<93E9̥ynQNUORhanۦVڑsrz)檕NM<'h8?ȬJUڄlsʴ5y'bic8zTGDAyu0  IT\s]NRk3忌 /Q]m9+ʾdcW74c;FQ$-:o*DFV^D^/CPRRkcQZTV4$ *Ta9jgftbƏ˯Ȗ󕭉deȥFe7/, ŎHRwv=zVJ6E筶1?4j^]:x*kn%ݪ GF@8{I'sj]׳4ӍjTHױf,I#=޴'ʓ?[3T -r}qGvЮuzE,ѕ0"h8^Btem% :/?h:)Kj5f+Y>b=c]4}*g{(IGB9Fg,ۿ}:cw9[;h>|`pl/g.Jz'QJI)_#j}MRN1LT F~洄f.Egu%jʴ* c8ٛN~ބ%L7`ygck^N}|?eIVNLh'=1ךGGkZF#m,2pxGZoҹjGn1γKwW8B\ѻʎ&>};<2I+ -s )BMnCfI&y\.|jRQt:2\Mf-ܯ=ӊ invԆa;xN8WFtM\c|[Z?VݍӜLy߽wrFZi-r[T*T"WHvf"~DGv(FV F=I|b *My_ZQ-T_e j<`溣?u&g/iRֺ۩jdT/˴ Q:65{jZɹfvfܞ=?:ǙhbhT.ڛ^HS9 a{jݴ45zrRWUyeż2"]s:qs.U_ϙ;[xó ۶>NNCd8ކ2 5Jmc\qߦ<׺+NХx$3p'k(Դ]DƟ,o)Y=[o2d x7/٬x[[QƜemo,71+|~~5+bQ7] BMtciqNA?aR? XaͲA${4\3F?_LsQ.#^1%d¨:v<~ir+045n[i|RsAQS]h>|Ʌݴҽ v=Zɾ]΍j{-PLе7#秱RST)SzZ0EfT :7*5UF:ءdsFʹ@GD5畡uϾ?c2,7 #F[ntkZvuө^fi۹dSnZ7%OO|q+Z\TR54[w5,6S"Dw:1S] J#`yi~\7%)*ԿreDFxmFiB;:ts'%nV֔yiR^;y/ahҫD>>~?mGѭI&rjSMz4s(ܪ+`^IK a} W^%|baVd p9>ʵƜ޶b05X=W}oe׈oI!hY;nbԒqӣilZrjʾto&wHvrz~]k){:p^q}ôkC5W . ͓s8GΜ*Fnߧ'&+kBE`D nA=-s&14ew"r3<:L$j[>aa895#*ֲc9([ދzld+<#^;ӗ,"ܮz佄dmQxzxLf[?2Ps&I?#4Tosz͓yQ G˷swZ㓔48Tӻz07I+FP;1,'=:\:ZvsJvеmgs  ݎ>޹r΢TW۰Fbc]dpx>5_i}Zjq~]KK!C3|ʽTzT&Kz!>XR5b;t׃R"1*c(U7;5m02|qt"b%)5؈U^̽e1 r$0Cy#1wFSuɎk 8͎FU+СcF~]<* (}f%HA觊 W97̟mm]ƥ)O/3 8'+IFf'RM =6bڪ71\)fU=_ K뒎bhFrx'=ՌN6Mp)JݿsN75i_'5e*%ܫ)m1@f+sp%c݅{}3Te*bĪ"l`ǜg F{srKFn9]^7aU]R]`N?s[_ތV.-;.S"HO gp A'#(uӍ;n tW,EwzOecjcR7q8Xdmxu90抒ek}>HQ$w-sӊ:ۮԧR2>"d.։]yʜq)y6تtԤ}-vGH%V㞸ֶ$͵eԒWJRy-1zcz*T*fRNViԉn].nl8rϩdWJ2挚~dq/!vǷJ)TLҦ"+v}{gC'ty7 sӓؚyM-JGb\:nFGUk)O,yw#9:WF#ߊ} 2EkV#fm!#Oq^]Ysp5u–M/|nB0 ^tvrr(;hU[F@Y6nc==3+Ωh5-ߩ ي)9դfd1ۃק5Vi%o37T 1- ?J*8NM[SǞWR'YP+yxmpƜcwoN5)Zel?ZѕH[R{yUYWl5ڒ]+7nO{9kK!Ҍ"EV/樑D%H9 ] TMQԊU)y B'FV{ngjХ>˂uX'29=֧%-l[GJ ;.CZeUpGLw$(.4y^WkNZrJ=Y?cee9!6ĜZ[q\%%o#I;~`rOzU)HՒqnm[o)\B>zsKl9,Gէ^顕f,h2Yye$սHD\C'L3#m˅$ԚfcO_n!"9vMۂ^999gК" Fr ˘nܥ ǯְn"8z}l,+?v|*>^:c9TrRե)NDcgej4oBk)r+^E1,3)ĜnuKYo4ՊT}bc Uz?Ri7R\^u,.g_:Y$hڻtA_xNT*m-V gfUV9s؞xm%)6LңIbh ~bӍ&ccNAA5ţFBEspF:3N1Վ ڕ7RvȗEY.pnx">]Z^X>\VRkKkWVg ĜVm8׼۷"$hYṂn3y9=;x>^K;ΙNT>\RoqRN=x5~Њq~dr)VO._ssg*q+1I& mVmOF4[o̘K=*!]{t ӯ5>ngz?cX\G\n?JV/s?ztuF"+ɳsQ8:NgQ/"FX'͜d+T+! 
ϱ^"ꛡ(fhcyqIVg%Olsv顅2gSaf2dFIpq].OS˖VehIvo&۵OR:gsQ0G$LDsJA'5q[9YK94 fIQq]d zsWN9;%V]OPtO7axܰq# '?ϵZqN_NܲϧC-w)fVcn 9<~kRUʥ\5G8oXj1to'΍!W 7<+o³Ν/;K]&H[}92q/̘۵rSۓ3u^ุ9ӖE7Pq{Ǐz|kcͱ|uF嵼SfKy-«`nMrl}S2~t}&_ZIjttxRRRѝă<װkIQDP{LDeMM6EO.o)6#Zuqr*ZvM*1O׵)OUHӕԑn4U;؟H+'94}s~U%ɲP*-ˀ=GHNJ.IIZ%W fm;u%řhӌI 2m3/ z?s^Nꭇ6s3F˅k\U:ܷW"hV]*5A|w8?ҹ]g)7-2QU9tvK#oo?3 H%6ugx]tܖ[q34kgdGYFtEӫu-"_!ԖqHK0]ڴ״܉`QNTKlݬoʹڊ6#RPxFF2%R{_'رJS¾*V0wm0JݯX/&17}=¹SJi⩩'JdS'69H##} {0뛙0xUw1U'`=j4UIIenE~܄`+zz:1UGc8ԕ%]5VCv*vg#vEF=w*MM4KGmKo.ƥpv:N)r68ʭ ne~ryqO֣2*4kJZ۩,f! |8>zGޕֱ%M|߅O4Zm/ })$$H0g{fI8ed:].EjEef2nRtuSriQ |W*S5i%+e4Ĵj7F.e+fȑ<p嗱qEIFDy{>U(o}=3WVṿc l$Y$$x}}J[lWV!?6BdA?1sҺ*[3qqzyZvY-C/̼n^?CVIKr,E蹼c<,eqcx>ƺR3M꣧ C3eO3ؑ6p$zCԙZg\5li;g~ucO9ߠMVS,.ۗn9^4ӋqvnƑ؞a-N Ri3RIqe=MӌnbJRk2$R[5]^7&-=e}@RFWr*3r>?:)kX{BN)ꌻ{c0%ĒGT^8 k*w{h%vf߽$ y끞YթNSWV2J9-Dm\$6߹VO@ =:/wUsu+srXϖBo+F8>zQ^:.iƥJ*7]՜y#v3#?Dc%XxՒ[~gk2erO+ TsC]ƷF29<Nu)?m̝5*;U` k*w[(SV?K"!Ub?Μ =ΊE[tdkYg\oZ4(E%nM)Qåmdq[i-۞kRT[~eӓ_2f0]#kԅJ[*eRIgۡX^¥m:?+Ww{)K姥[ܛ[~lw?ZT).eGd6t\tdUSlXH5MBqVSrozUp[sۯ#8#JqÅNH/m7ijOn!adK9ډJ>֌enU{EvJC$&(3yP}r}jQ4#Npz[-z _έ"0 }Nu/O.9߀#_\쁇q1jZtQJe8$}bnB`EF%#q|jqoPXs# "e;q~U9Bܫt;vo5݋Z^a^N)lt*X)qIn_\iFFܬAQUS/i5b뚰Hgl+N}iߛFZ4eao j|cF?Z¤{o*v]q]^ʲ6$ss\51TqHVOKhrڞޥrzH խFg|{#"rZb@ A9=q^V&ay*Ŧuz2r#TrF~V메RQp]>e#YCny!+JMjcFԗv~d,[1yF8{cV,OCNv:Mub|c?ʸNk-xJZE#IDj~fnS޻ʽ*ihmϒ%CcjӃ/i[Fҧ'hr/;?=9}Ve*m'ɭ4D1e,|u9{8l-[i4"Ico?QqD+ǝ(~Sv$v]Ps=w6qCb-"?lnҳ(Dʷyf@'` $E9rӌ)b궗dʭ)8`l{)/8e7Wp6٦⧗,?l)}]AhsdžkA*ۚ5ʩR0=;k**Moi^N~Tx1-<Ȱ|01Zӫ'wrr;5t[ywAmo(ǥc'RT'ƌO˩k2EopGap~Z(9,Zt5vfF̌28U5hFs҇/Bޔ-JX@$d 98V3Z*ɫYMSv 1 NpSrrkK( o.q\0uJCԌmr$9<3u+~ڜKL/nơv '<ֱ/{vqQvo2,j2&pIzd f#M]+F^(WxTl0/9#w^2MS53֫NDBƧSln[irH}OV:JVK%NOnGaAVxNySDJ8xʄ*zugIHbv$q.,_3*1T_ȇR#xYdf,Yq9?^y)E=4*ҍ}vAT@Uo #X֔yjIZa-K,zc~k9s:;:snI6@)sqz9RZCw"_4oJ<6;")<;Iܻ}%ymLOSy#[/meNU&Cqq4rQׅG$Qri='cH;wEb7yX{E^IߧU `PUwsp9ߎդ_4uSRPvwȄXŇ+2~^y:w:rkEsӧhKO{5U[U6眒1۶kFۇC[T~,O8Dy;@pimWnqV.iinI{-STrcW\ (䗳N/c:e;;~p, ǘ$G 4d19iECWin?xIy럯5F\#ˇIociZGQMJfkR^K[z XZrS-V_ZV0uF3mFUcEtQizfv*k0봩sR9ïO3d7qa#|utTplsGORwEK`twtEԥwx2 jWe˧N a.ߔcn5]<(8+4tt)YDWu2i/QZF3oUR5e.!i2H;>\{?eYbkM9o/nW`^L+#.6ß[;6I-O2"wdg#{pE{Tq{z([u0o5noivtZBw͂3^wLJ~Ź c57sӅƥ>mgjr$7"T6\q׊ڜ)-oJ ]'ܴ%3=XcP @ 1qꓣ~DTm8g]ʭqjJ#NS]-'+YB]XNZƙ`H۞׏¶å뮋c/Eڱ0Uh t'=.5e ;CNI;umSoy$_jןR/g{'ס_\}LnmcnYr>n:}k,=NKV;m-YTX' %%\W*}?|/hY9kn?7JqrZj`j{kC?1 +;ՖFqSiΪٜ6C~NJnRˮm˶+"? 
­ Sw918ZrꮯX[B9YzwJR^롣YUh$j[vF8#xvӹFڵ5lEV7*xRiR}_٨a-P_iUdQ\i-5KIpA|}*UIJDcN4O߳g;hLD>floa]QME9#֣N0fXEeI2oƳȺ5*FWOc}:\\dryB2ʕ}ށ ʎsg_kGԊǦkni)m A8Uc(G}2u%);{_];Fir#[$csYJQR9{h5kb`BX:uvO[]=(D]a]Eav.&^]'ny{`w⠝>gH)8ۙ=ӪiW0ȤHѰaӿ=z5I%ݩ1\ k3ZUSYМq8?Z+anzIzn}s,Rkimw]l/n48n_A9 =s]aԤ{=}cs 4Vo5yi˿msuOK rFkH 1{_Xzi&1XJJkuNu߅V(I2j>ʘ}gUx"+9Z][^I;4[-܂~Ry1dzce6k+__L6eSVPI/[{S5 WxI]d+n\ 0goS9sw붾MqU?|ѧӯϷ m{kgnA+3,pWg,t>S"O#\qk"y1#zWTSI4yCCw $v0yYZ2pk >C "q>[}u|=M>L-sC$rH Ccq`(_{+t=ќTjWwOUgBb *?&c$npsִZ[l]OgVX$iku142n);;f^ۓyIkۡ4.mR@ʥI ކYUM+kz;k9J)%Ϸ?VJ9e*)8=[{awI|@qּz)ǚP燈.i-y?]G{ve5yؓ}s֯}X+_Kt7m㳶XI~AH}&ڕ%ͫ߷C=7kª`6\HoԩO4,"Es19?sKCZjZ!%|"\F[FGzKockвaaB3.w)g}kRR}:Z5KVzE)UcIlEqzgflp9+ }OC6{)<1nkG(uя'UxUXnR߼K|ø/șJ{4ekfhn㓅1ԏze\:>]J֣Jk]˶) aoF^#f9FKcJэFkΥWql+n21==+ҥF69Fn2 HuK;܇="#x"48nsMr6cZM;^)5-#fUkary}E>hήogOKܞUˏLSYQa.i>m.hDn!rX0\)'sRrkœdvllUw4TpOYFo]bON(X(.@_+OsWGvc-fF&ӬГrdi0 `8==kQJK >ZZ$pv?sbϗ- g)_x]~Ҷ܀edj 2#}+_.^n>kBq^&`ſ'JEo$)N:UCN9?~ȒYt>JXjZIgS},D?l'HΤ^BLD#k+;zn#M߼U|n8UQG+.Nn3y|-;-W4uF&U>7E}_+IJY+My6ҧ8Nx;)rM^*i얺v05]]K6y\.+F99y:JN5άUj2i;]ѣL9H}۽}gZYw>G[>gM)-u:{cW&22qL2ĔːGsU\TֻZitU']eذwɀ9OSҖ5d]$"˷]̽{J,u[,e>dzx@ZYcnrQKNQa[0R-4%D*9b'~:q韭L>%}99ʰAsY-#WO=~5Bukcʔc<-;I&vHBuޜ"<4rfUզwm_~ ]hxZ=c?(_zYTFq,Te?Ww{ζu #`YU~}IN+GUbף>PY7䲿V;-Xm C_ZNKGyyZbi'g%ެ#N&팮Q>b+R NuZNWNu凖'rtgj/4F?,Ou-_47̴Rʫqc8F20Rxu1\N{_/v*_OO:XsR55jvNq1VE?wsҹփ3UJ4poo/3ݍ5فiڸ^+שR#t::;"g߹d{ϓ$ATzsҺIS=J8i;wSRk@EţUXvqZ>N\tM<p4fgz~8Fj7 }6`9ǛђKLqM64w%wUcaokC 0\Nে[]nZ6U{ʥK$vaj[ޒyKNR+1r3t`X-G^O9ݪ37RU%y3QQ__߻O<@?~4tSQ=4^ 9 n_'O\nIϫ 2H_pYxP%791Laqr>oNզ6U)^imͱ|ӶoivƼdxU&ˎFzqח[KyVvO}=GU6b~Vyzp#{u5a<9o&[G3ҝlDiʫV4i5g$91n#J^S jLFyjk`]EܪV3}*8ud飞8E3:Z0¬xQ"(F,SmZ,SM4rk]|5V>uZ\7:a-c~tni她1/my/֧*1Vyc}Kk[Q$&Ycu'%@|E/:q|xd!nLVzI5$t«mi6a>$txϥdԕG'jGmtk|3?͑ =GZNRzNX3H+KԎ8'ZƧɅ9Kgme:]={ouJ Z%TV,.b7^=zQz-2QMka_]L|B8*(QIbY6N<2ݲBF88+*ΏaԪ.[f|ۭ#뛓/*5ؘR{== "U?Y'#_,Rj_m΢ +}]nF9;])pJTM5kr[2oM.ٔlnGR8##XO>tMӧ&m yQFݫa'ri3b!-YNkF4fqWGKc+ؚkWm0T ۷ZҲv[+ݝВrjKx+y5op \We)KkNգW{h^FH>Φna`jlE7o{I!d~f9~}1Zm}}IcVرgˈ܍cq?3Kw44a5iZY8XL{J>b*SiO⨊ʷ[˕C3nڑEu }wgiqDMYsQMt}iw1䯷OZөN@$֤vjD"Xc*1(Q^zweYbtBnlṃuҪ׭9$rC^8cfZI"Gf=1]87eoBQ>UchU8fPx;Iz^.EǕDP2]ܮTFVydN12!zvXF6&b:!F08asְ*7U^՜Joˌ+yF-r1浳,;$Ÿ_1sk"*-4sʍkɷ`YU*IC{F1!i`0Y^84[F{]E6eP7͸`>S^8WJ5WTJۑ_=eEPqq*S.XMdfs{sFin`X$p5FeMɶtBM{1c-,cU%S\ܳVNAɭ#YIsU=uі 3^A$v1ӞΪ5#NJd%QtZ>Ѕ0Y=?wPSl̖22l͏Nҷ)TihyKM_*fcǾxZьƴFo+K{fh@nP|KJQk^oΤhԊO'} َd>sٙ-=?.JTTbޟǠMAElsѩe-9N9FO^,V\ۙC:̪}}+ hFi(ԜmbleeKEH_d8~mF^֣6}/5l54lW]R}hG^,cRB?+:njYmjpuim54[^Mc"Ff1c皚؈5q+uf봍NnbOFμG;'s _fdw:i{+(drO,Ͱq{Wߴ^kg圹uzEŴM29S[omLy/OѴK9Fց_V (MIF; ((W_NE7ᘳqd.+"0x֓-zt$ͼ<;a20v'=kR5SӒׯoBƟ3k fv~*vLeʜg$Db 0U=9z´e'ӹU+֕NXxi"11˻V63TeRMi3,Ϋ!vkM~N57 6U|d tzTeCJ,D/yr"t@ʬ,_OOZj2Hʝi{I}݊3}Ui"ܰu; A zקRF+>h薅F|MQpHXpg^ur-)[&}]oH]HrG{3]TJṈ4#[e0=5NN-錨k(6ﭻ HfFPy )*9zסN)J/} jJQSF #c^Kg'?}jȓ-FVz,HdhfݺU.2vs|v"t0(Χ_a< ,I"VXUgOcYM1RK_uT y3C#vӞe~dyXQN涛%2,_{qZ¢R%JbIjF6z c< yS▒m-?~KedvdO^${$˅\f}喭RcLdhG9$s%+hf vmjI,[b.͌H}_2:8"6, ڧt2دm۬1o͵=֒|N|MffI$V˸*֑K&˓#27H"?3 ׃z\adW:.]&*yJKgsIRrWDEkn @/IsjQ. nZC,ۛ7quk>ƤV\H|B̋4f=zji4ṵ˓rȣpn[ӌc񯑽Q?xW;7I'c qҞ㡼N1Bvdt:U>^>SHFjPZ}H)#Kvdzk$o.2rC 7<26W pKc#qTԭ֕":,Qyu-Uun''TbO"Gji$iVFpE 励=ӿ: E9*Jjr^릅k,n崊 lzW&n[_MK;KZ9@0 N{wzrrS[sƒ36-D!WNyo/gͷOpW7=ap+nL{,qԩ[Qm$"UI[;kΖ-YYٗkmfC``WxRH$y# \~9h)\Nz)O[Z1^,]d_1èsJQYFo~lg9m#Vj K["Aqk'jkT]Ev. -1]Hlc\'IIصk8_2XLC`22qjXGUԙ L?$> J+M}QGfXMvñp60G8#YÒ;FtS+Cv]v#Y>YjJ7*.u*I3W˔Ilʖ^ӊ|pT -ėF?v,l{+wu9qKsi%[K8II ev8 k) uwEdEJSU(ki K1ܑWYFٽNlֽ?nHFYڣ'=Z=]sVWj&Vd%A~P~(B2mk8Smk&Es67g8ɩq[ס1ѕ9[mLDik-W'y.X<۳T=yniXݽ7VߩZ^֖EhUU#Bq,q1\Ҍ#RRR=­u#-3 mIsӰJ2R?э;7n˖ ,ֶ2XHx猎sTrϱ<\Lʧ]JqEKG([#IUEmOz¤y5N)S;* doi^~sʝJSWLlvLD@yi.N:*Ƣ]nWxw ¹gߑ5qKG* "ӈ=,{@ =u|\{]r~˺nzK׆=,x> e_#'r%^:V n7)擲i{]֮&KuR&!b~R v>B/~?BTcg=zx?Mk+Y. 
kIep@3*QrI#(T#顩vd4qno[`q}Gc޲TuQ]{\ך5me"Fu"M_*}dȏBԬ5{eȲ:y+:.0KJ.]}k!,qp7߃/sYXeOG>ht\-#99$ Guܩoo qi D<6NBG^9RVXiF7_#;Ph,hUdl=p c=1S̜Q7So5K%Ί8Ϋ&qxq^1+'? xyI_ XGx[*#AǵkM:GQw9s(J5mu Y~e82sx'sCύZwz۩"H&?떵9JWdiSRSUc.UgR[Uɑj.#(l:tk1hz)Vtd-2+QE/Lrp3]|PR t?i>pK,vСFĎ(rSlGvZcֶi#K,GMS4ӂsTULIw=ٕR>U=rx氎TSg->꾄S"=rڦj_OQۛWSk!r:tc.hu( q/]}>ԜyL>N3r[!ٝr:x| CgZINL=*k8s}f>^0fEVUAz*E؉Tɤ6"v?N4oRЌd?uu^hnBPoS;~Ȕk ƌh%y%c<U γqz;/V庹4 y.ވ=ZSױR=4z1mtۗ2M׍+EJIx'nwvEHG,2a %' lYT\r),Ǹ>ZR.^T>x+mc]a4Srqkn1N, pĞ~5Q{lҧk m?$/r.0Kc,t9$ݯJi=~e>:~p3F:OJJPcNPpx\+d3?sѫ)OKO9jUNГӿ$Xe;U5+_oV\CÂyWIkN}]ƂV7UDiVuV55q*-%VvRy#zV9:kJyԷ5[5Vt5kqnS2\mlwf'{W4*T s/N%pV9ey,  xtJ1xYS%}eԹ[DM!oOy}XRfl%;K׊#%-yʲ_O< +ڤmb G`xe(ŧ#jU**j;`$l`+={nVrT{*СHʺf<@HT/G,elϥ&YB*y'e(Y%L03ImѿF '$#uvn[&pF|?=UhH7+G|vZStji)Jk˹0,Q) $չcew&ٝ:սu0֓I'̌ϧ~ִqBTJY'tkx'i<3qFq:qXۯXZk'LP%YSqjcR;9O.iEJbf-⍕m+ï~BU9g5Jnm =p[L^8\Ի{Μ,l(rv:{ڹ QGK-,a4x (sֶ9dÄ#L\m d:p 4*v{۩$VqoV'=A\i-4BZ.D g7>V/r3k9sW-6Tiը-5Cm~J*sI&rOvC$ِFykE̕gm|p .R@| A`1ҶhӻKt;l4\n 3rpoZ1/@w;TC$bMd[A}q拷5K >u\eE9:޻(Q(F!6}Ê>.3J@~?Z)ӌϚ$#g>df)Lqdܹp:NaZO]z%ʖjJcG4e8^L/ޛ ()ʊs]6*"eNNINOSRi^2)(,?>O9i*uWOR{oЅb*7H~\sڶNT !_שr|[ā^?-WrQ/\UGN4_*6 pqnuG^s\=-{BK$Mb=7+1#9jN)^M9Kk4,rgq1vk=K|]dW<ݟuE˓hL"x`*a{ca nWwތS̸5sfuN|iF_ ,W2ڧ{~cWc/v2iHSw9QZGTek"ݪB)Oa 1F$xQ+J.̦꒴>~~NRӳܥwCݹ *2"K:}ɶ%F{brvMJnWhjER {c=xsWh5e%0G$3,.˰ܢ {鮤3,\m[[X 0][*p]#NߕWnrײDJ^FHUfXd/HcWs+G*euv{٢,rsgvgShrKjѪ[6\zuE9/iupkz O2_ ێƺ)J[Sz銺f(929,[2 &Um< :qֽJU5\f^T3Vymg \R },cH[( /M͌c5NIJJ泽j|Kȡym-s_:[|9NMB+ Z2qwY/)Y93CV)6ԭN~5xus%>n_ZƝeK9ǖ2Oӭ]N^u"ie wsܼ%MSo}U%߹ٮ|f/IIܟvauoiKNdWZj9(7jko5Bm*JһMћJ:+KY?xMOFIZU]sԼx[{mۤ]Jބ#qVf֟ùe8lc˚өzӒK/-Bnf=?^Q]FjPO.nv4}I0LR_(*-[Ւ)dgEUTAP(NqުWk>ƝfI~eBIh<Ľq9}OzF%9Ƥc/دn3XO˷O-9EcEJq|BWmost+ss+Z[-YI-;)o%]ҶsN5RyRKkZl_o"=X"b*2u\dgd3|z]NZ|^dRWZ2fBKJ1ԊzSC-XGftb1qLeҲԵU-߯Uu;}B{t7*lבzҬN|jIM":BӴkgfY DksrzJˎuLpV/6rS6  w6|԰˚/ogܞS&]If̓\5eBR9ZXH-lަYC%ytpnl  A51R+%s~o/$ Wp)qS;W=iԍ7's1)I[ȒLLlƾ-ܵq0±]yBQRmϾMom\&vDy䌀{RR;oYY#8>Yt5zZ#IVIy6#sO+D!MFm0yZBՌ{UJQ%^VK"OٌvS:Fyda#<淤X>_йJqPW,HFnȭ4sS}ߩa5߫W:|yҹ暧Y)FVo$ 訊M,ݏP<⢤7g>"(gUfM11zߺhx2<~&{ )\me<㌆kFkNr7(:.6/#dnՔw{WIɮuBIˉ$`c7|8>9r}#N55A7n;y85(dWz$FH$f̒1 iָeRQxaT))A5(?2Ct!mI!{ ތ}ׯwLEVՖ8ۅ4(SFchXm /L`'9]4bjGW8*wIW )M~%me(VFpTIO?J=geƅpNrȔy038.3c]9SsM!T{'}]`7>^2xoιqWWaN.nIKt 3m꧷֍o18NY]EL|"Md1{ܺ~ i3sq§sscSұq^0JJN١Yjf=AB+9G{%)rԕ Q,awHwm9JZ-Dҧ,}فinldG%q:/c*w#8S"Nn}ưrNq!(^{>k(UW`˯3j\Nٝ>{w"3-ڬRmv |2=0n]ؕWʮ#P[k2T>htZ1TSC7{Wq?Jb*S(o CFn2nUFQocJ4ikd^DsC4i#HT(3sԧ)8躣|UHѢ%mF%Q1Pރ>QS:ie4Adc99E(SZ~G."(Tٿ+e"Id\ (o=N^WZLwUFֱykq3ns?i2+_y7TwmIKo$S98jezMƕ}tcT3d;W U0's)#|n(Aۛ_#V>V0 69cR\EmqNW˪4eьuJ֜jJ-s U&YUQaF,E+-ɾE?vM0/2K_.qR^%Y+2. bd,d1lKZcSFk淂iJ.cQ$JQROXsENexэЯ[śz^R[H'FPPЪ#IQcK9VDHbgXbݚ*~s_SVSި:m)F/\q[!N)E}C}t<V]_B%v\]30m ׍J&~>iGp.2KSmM Gfp;O8qpªNZKCҵ'S2ݣ=פƲ͂Hn9ʌ]q)joeW|h$o)9FOAɧV-AK>G&@>bqׁڱj?-מע}}K"o@*3(׭ i֭KRW{ͺDm!Re )ӂnJ][;01u#WYcU&IS#Gל.[ɻGVZú5[--j}K)s ̳+3giUKM+>ZFsq֌}qШJvf~o]m/#¶1|]#S嵽z!(|_0ZVFмM[lzٕnweZ 8ThWuhYU_P4σG7Mr;>q8_r9'm.NVFv+0&a޺]lEIw=*1&nkXjR"Y!_1XʤNW~?yn*VzhIZEO$1݇ U*q[/ƌhrVRˑF1bO+Q7]K ̭ ^_$S8EUvˈoZ%w3SE-&]ZoM݀B GvzWJ s[[y^hi̛T|F rkͧJ5I}n?f'~e3Q=LAdl`d8_ODpnZLgF4͟ *k> ßδis$wص4q3;s9j#rя3ѣꋩ}JKo.5yp> @wz8<"ּR%Oe[D{Y[G+7!689'Nj^_TV5kNZ'|LU?*qu%*u;ÒP68پQ??oֹㅗv *{gtxAK=Yn7!O9$ێRڴ98gxՄzʭktt{eWr1ס9Z穗bB_qC5fP$s?99RMA^>gc0X*(VVѨN+X]=}7cwk'Ιɯb^ק5{Hunׇ-C&6Đ-020+ɯQS[xJ^nZIq461aZ䒏3rg$#(m^]+];WYE/tӧԩ3 02+6t:tF\>("6Vl7s|;FRmDe$hKI6+^:9Ӗ<=#8I6~~onMPj%ʭev}"NZ6-Ɨ2ۤ`Mm8 ֍'[c0J6O~߿ivu|6{׸澇 J=tN<уIi5UCpz"?fVwfM.߲mmC,ATc$cN]'"QYHz*ynOV6ж7{V*;Jz{'rceN6lhnQ*eݰL{Q*f%=af?k*08<JkP8I^M`qh/. 
}IqXJVMҏ4/Mrn[1{TϖP}nCfKGe}tb2O>ݫoz[ek G mw N8y)RM|pqqWI[սt6HZEo#GOOhnD#W;Tr3Ӱ\ܹ-kJ2 -n$&dTs=WTr~/^=kFvlp8{sXVKy-Tzݞo_hs`h?9]T՝nq=M[[ίY$ p8,=3WU8W/M|Fš_#< sbdkdec8ʹ?63^f6|n'ψG\o#ԙ&]}rHaOU;[ti?燭ԙ*}+0'[t=>*}44+᷍ds$QZQ\];49Se6lW+WFuzD)El>]VuiE:vM{CrMpdll=zLrIoR[Wh~BGHR6,s;WC-j}bꓱk}qFiai&{iԜ({ٙsh5Dw& ~s*q~x5ȯs7Ζ^\dmVm$1۾>y|ԩyA\TZ'e%-|F3׷ Ti4o'j~K$|p8~[Ƥ'eN]Gg$%q*όƝ;W/Uo@LhԬcn-۶:cSSֿ_cwUѶ,cRACڪUk-c{zj^6nr}O+MhqSEÙ7]s4oiSpntK RQTjY !iGc y e'w_*o_Te.ef7׭:->`r3NKZO$P4w]C| ^ ;];@$|:~1[Vngi);+_m/t V%Z,$:ֱ8Ko3RZ-5R[h]ĵR>{w|ٽocꥇԍ$[OSxLݒY\ *\*s[-_tiD;ra_F$f~Sod6m?֨ۥm6ch$,۾e#-GmI_VTZVF^M $[1"45u|,ckbKƾoꌨQ"fU8)"Ĕ'Cׄ!kqfg;ZkЧQ93 mN2Aqbs֪1N6՝*(^b+~/V2v{)(Yn)XكsƖ"}7:J_fO%KwYr]4i[FOg,\7Nbšmqr0z7*"W#5gN;u*X~NV; x!>,- `*y9j(+^+xՌm6o2֑-&,K$a/c1ϧO"NX.]4^|iJ%G=JUleۑ~U0j]R}Z~EYvPuScnmrkIߑǹ-jZ,ʎrpTWW'%Fu!htzS&yfOrEǖU67JK~F՟5)Y x6xf)\JR) V9yv;|&LKqmmt!f9'-EikmV饺ƶ}f" ڹ"W>Z"4ZZ%}oNͻB|EKZ/]"9rWZݮCľiv2OCǩ+הjCU}u4P)G7^_Mxnm|Y[?g0R j愮y2,FiVi{[~'xO>g$l2&|:|Ǩ>,uj?wE}狊&-m?~4uƙ |- TJ0u\ҹ$wtv6ɤ^]DmY NGNzҴ}z4wnS{g 4'Z#+t4?4$=q?*؈ﱜ4iU*PcF;8=9)fue'ueX!dfSѫ:B1S3^Kyn.X.xԕ-y+.Y-NգQ/9'Py\בۧ:uemCg)Cs,6̓L̀9ץ=+8tmI L,N:%F1WՓ'F6ԭaYl$L*;N;qs80jj3Oo|SǵzjZj+S&;/Oo% w++CjWbK,:Ev!Vi`P}9ۚPz9J7/kc1a'8Jt7*qGӖM$-T3` w2K=zʜUf}g8D|£vg=1ZὤUwaph_y>*&)dQ'3y#=:r+~?gFҚ~KCi x-tۏpsץ[_N7muZK_;H.pnjds4˙Ng̽}9,G;IY_85\ѭa^_?#z宭e,dӈˏj<'##ңNQVH飃h6^{~_V[4/xբ1ƭ0ǓS}`px{ʣN^; JMi`ӣcG=sljhin#mM"K1W`G''=kSUaQ$,mk2ʂ s" 88]4ۤѝфc YK6Nk+KeT*Uc`998cbz'+afX|05ns7{kcԩZVa&Q{@ҼrgymNgRkGNVݢyk" j9b[5bh}y5mvUG`<sUǗ#5*sr~O֣4p,70]DqczU/4:RJDf6qӅe]?x`:zUٽoՇR+ Nռs%ʹFf"3hdH /jڍH7Jj6}ng {F #F|S淩;A;LEm1hr5hMQY*R*2FX;y+by=[e?yx?JmC\\jl䎣kOxWjw{v3o¥#*b웓zFܴK]v!ܳ9ܹe8S)Q29nȢa c$UKѳgR1ҧh9KV%?3b<ۣ\u~5'ƥ)N*sWnkBn }1GySߙGi#O8'>VRg J2w۵ 4esi0i ?1t+Joދ3*r\5V/6~žzXONz֚{;.zE4wpQURG֪St_*Ue70 L$콌1PZ6GneͳSBgWcWQOw}oR彼}ZG}1:wԧSY/'[dfۏ+iFQlr?,s\u[ >ҍ-{NÏ}˴сz8F2۱gN^{D:J|ev=iYZZ)Fneo_ȷo-Ѿu1S>{4%RϿؙekw`یc?U%5ZT[tX yk;ycS:kC4*_7B;C)8ggSipP'ͷ b~|Cc_mL%?^>̞h9yԜZTb;㹎<y m 㜶0kЧ oWyeIrg4h}$d$#c򩏻sRz ~Tcv?k5-%h>ouy.!-avPǧ$r1ҵ 'ɝ2kQqBc$(V9j?>O|{)8䣅e6i}̍I$jv7pJ)ֈև&?4tRw&z}<}cX.H޺H"}_e3DYaݷ(=U&(ԣVR]%TNN՞ﰍ$jĨ#.7c<§Wfҕ7&Fɱw7O*j2n.-LFg@D+\",ZSrR7,ɻo#c#Z/,īN<\8֔鱤߳X IK6U8L洍iiH3&#)-ͻRTlPM4(,vǠሟ/5#S/5b|F|U;=47ȥW!1+n$#ۯC]qԌj%(A(=+n*@2t}5{č04~bBcFq{UirTgFw o-9*OȘʥ7d6+8.Ռ/͖$팟NKWmjR@~ DU?9{VU*JMסÊnhlEq0܁ o+*:7KG*og{*SU%hbHd`fo1o1pzu<'h(Ӎ`6F_,;0H9^ON՟,~qe2K<зb'`g(qz+JիYmАȑ4 wG6.9KVJ[2[oUkD8Sx7F\(XwQ̬be_.а8zT;yQ9R"xmXy0K#+C߽ROycNO/wE̬ExZ%UفN:{cZS%N%Er2%Nn,:K_3Js=OJqZbʹ9F9SeȾX+ >rp?L rT+խ'ֱM[X ́ Vryu+B[6*Hs:TztIo4[ڧqSϧJGvi#?̐JH*cx#ϚvsIqВq[k# ksgOҳ.]bF}bӹf;hʓŽJyҴj5{nNRi k8rNH ?e7(4}Ɯc-Q³./ ZS%5#ZlA,6H8\Þ+bs[18IvʌgwG+.71x~5FmB8ԍ)Vo.u&j?z9^_mO#:tGq "@zztksS{u:(ʜhn$3[H!cIyHUSZS1)˚&wlYܘm8koQVӱb鼉| xV<ѿ9QJz\Im䀝H7,,F H<\TjNWhʣR&FZ Uf3cSYޏ-֟kK0o)dIXMNs\mG6+#%d3b'r?eRok_qU#-iu 2o,Y#V#zO&2e^ߩe݅4ORs~/#SO#V@fckʩ)-Yv+Z14| FݎU}*FQn[&xRdr?ydw=3Ƨ4NRoBhgyn^yMgʹl kzYnqZ8C(Ơۉēu鍪1ڵꍣN2R6Af.Yךvv0|RIXƏu]4馵{*qQ0 $6ݽH38;Gvyށ ٙ۸3?Ҫ~f5F;8g-'Hہ}k]R+ ivXhDߛU d' {)GN)7t֋ٵ9FEIUc'qԒn^-hʛ^Eجu]w67zׂ -H+FNқKb9 kkVUs"1#%9{t>--IFϮu&ѕy[/%[:%]u7c3 |NLi 363)9+򾽙6geĂUX4h@\ʝE6iZ??C:\["Ǹy8 ׽xU u.¤bg{"]E|ꀶG`2pbq2Sъ7oPn ?ʥI!Ol>k:{sj&K-&O 8z.T\:4lꞺd6-Y&^hgn=]qSս?JT}-I7flC6ߧaױ%an8mˍ |:֣)'rrsZfD݃7^kڥFN<;eG#7QYeWaPwacNQov*5v{?#>h.&0,1e \g0TYj躗-"iׇ/#:uⲕE_R)J:.Ku10/\I8SR6Q_sK3798Az)+X%MkI( DkS= jVªQYXa$ɴYxt_2^Db<'jr7~JNsSZܝ/I?WyZ8=Gp?*VKTH-~7 mTm_sW$F4nWsIꥢ]6hՋvۛ jSvֲ㩅8TK ԴgfS=39+R+NXFwHF~yLj~rF c8-EHZڥ?c[Я9y]<rA3k;J1VMٟ &rd 4c%F}3Oq_KpXŪmmF̐Z5o~\_6gem?>#zE7 IU$AǏZaw{R/U XFvnW \8뎸Ώ3њgQo{o޾f&ׂeFnUldeҨv0Nrr~Z+v۰oZ}kSrM/aGe;CJ`[Լe;U:^tRCŌS0Oaj4Ԝu?}W;+r Ȫ[`v8IU=[ߡTR o3rZzgеD6sKk4~d}o~5+QjOU#t֧'#Wr=7-&hbp /^qQjVÝW1|Cqjm[Xx7#+uB #jjus5)1OՋrSq`ŏ'8i-81|y%mhmp?y ^vrp1]>IDjU!}f&oi>4Z d7B 
R6v*mmzVsjo-Z5RհGYN#W9Oޓr=B6[HYXUqz8.I9)]>z`i:ީJڦM:+}J)ke&HuH+)JRvh圫JNv1$Ddm#.rҰCRr哴Bxe-$hXgYJet4]9&B;YS*Ǩ=>(ӕGk~eJI4L$[vnʎ(Gs[wLF=$~u°qWd`$4aK+`~=k?c(ZI'IE5Xش+prcۥmb[+N"KdZSuԗ5*Ѩ"[Xbˍ5Ysq-ĬWm1* 0'h9z./}ʫg=GD+Gnte]h2@?vAtF5)E4Ȯr֡~]cch5&Sy>ckXBv=(^Vb^hIjLRkus&QJ&W4}mCwjYvnxYJc[YSjUϳ#6ffeVQt9tX}Ui~M1NL |*1Ӷ+cO/NyZQRȎI,pP0y^p*Tc+XsS^k'~ĆfϖI&g;9O\sۿJ1drkIaoY]Yr5!{2P>AJO)EMBu ,],Knȿt{‰8XIBqm4\`[3L%RnKaHA,AV0OP1hU%vG;2b=cUKV֛V-\^cW?¹葧j%rI%O>v'sfZ%+I3>$6J2u$ӗȜWM+}Nq +`a݃XՌ#R۹Ʉ\]z^M$kwu=Q-ITN?wj˫{~unUI>},e(G:;n3"dNkFQVi5'SrM<~\k82܌}UJ׍ٿݹRJ[]S.G(49呗rBk+&uQN2ZVO%ӀdFA烏jq޿KFw% K^YҦjqZ¨y- d|ɆeIONW9^/Z R0e߰`M 9xVt§5ڿON|mg%;xlk-FW$s{c7*rQƝ\DI{l%vk#r?N=ʏ&[ʄX` w==R?tF07'؎FkxYI'#׾0z{UJ\fsTR]4VY~yf(Lc5sYQoz]!և [>[ߗzUEisSj-.V+%Ife%+=Gr;T%\4}jhӓMnIw I4rϚf09$]V4JwD nɏȹۗfa'g䜟ɓ'Z9Yq#)qkʌÁ?N)Rnݬ'Jd9Ǘs8UT3ZuU="v]IkYwe+֗rR?D430>Q ?ɮ5B+S*4Is]c2Hχf w#${~aS}(n6e(@i$VrO;eMӣIKF]Gqd }ޥ{c/RӋI\tVlH~Ub3>淌&Q:tw忭{ .\?xJiتqv{Hou]~_bO<뢍RקSso nc2A jҌ3~k6ˆGU8T`឵8ԖN2\yKlB÷c:D~gn Vz+j5Lv+IўutZ1T[y! c>SOEԊmJ{uڍr{nssUj;=[(=<~U?!{)Z]~ӝfEYBTl(#NOsi+bkO2H$n.r5gttR-*8%ya۸ Q-vEh<554iӌvQF>3M'^A+K܌19SNyƢv7A"?wE(*AU)k'N>G}u%]]MݪpOҬi7;s4r2T ;B=w]I4,ǭtVF4S 'mR9e*|tjRᩈZ-13.7V#[{WI졶{ȋVݜs\i&ieQԫ+vQ or>驿-7+͇wvBՑcR8Y@jԞQi^%\ȒI$aSm$۾}jrRש^cU;]n ZKY3hԣZ_e"'ogIEK@*n~T罂U#{Cj]L,n{U9GȌʴЬʍߌ^Ǖ6#96㷗aT + JIuPJq+j( ]fdݻ{pkB]JJQEs1xU$hJR6.96gӿjy!&eoA)ϗ#fP>IRHS^=K&HZXR%#eq;rRI߿BɷAݍ''J+mzU9E=,Y9Y]hޣT۝Xju!-Ynƒo57HvŻ}zuiGw/(mմMXc]2ݍZayhLn4ܞߍ*re%|Io|աP~lf8 c[ѭOgR̖i1o d~=*ǖT1WzQa]M$2VTykFkn,jsp( ra)FTew5 bTu#3ci:Rhz'8=Uc(VOs8Y(˶/t|I]/ym[y2*n1֧9i{JнH[|nnrqsY_:TKh+a;Da|VaĂ{te/{c)(F]TfFe\Ak~YtHJ6,Gh¬HU)FOc8XIY pY8NRcun@jRv3mjrac-Ѱ#<]=GOݍE 2YwÕ'EsϗT#SE[TʨK;Ȭ5U~)+Si|/¤uW3f:K ^Z;AB;٬*F-+/S&Om[S K~g}+nRS&c:CjЯ(W$ܗC5)jZ+ms*ki<沓jWi_jN *ѓN)ɶm<ȒLn\_{>Y(&q*+cCW/US}Oh7R Xo#VL$>VzSF2R"Dk+)cĆ9<;-NnT4g9ƕGI?vE ڏ.\H0gڎZҍ~0w@Frp0'IJ1|wؗZRjב`-ܲ`N|OwiכeZiær퉎_sYUkݎ W(.ey̿.~b23x#.4i)%'A F { unsy=ڸQ(Zv o14e)庅eXdi$ʻǸoJLJ*6n"oTf,lO<)B*N̔~aGƯ~KZ5eأn՛NKAեNMgj]cXAf*y8{Ӆ\^+hݓLcWij51Gp*qG}oNF/ŧm6{{cO.5UXASI6g_49yhY@.Y6{.1x&>՟u4W}7s[Qݱ8=sDZ/g5/r/Rz~_&ә!_=9qӁZT#KO3e:{'+y^K(o[| uU~ &28<:Km{]Le9Ӎ+fkBb=g%c 7\uQteE}{e 7Q>d5zϢI>[KHWm_$v8#ڢE/?=MeN[y foWcrx׿DiE)P)8ʢwadKbBwzgNf3XjrUSvH] 1=Zmo2sk4CkÎ?u:1/SROB-E7z 8^ul cͻƶoWY]M̥vX6NX z'բ*4kKdz2q= :4F9YF18{Osbq^ujOsy+r]LN|yLx[F/'>! 
]vVY y]vMGê>{YhӊEWcm1otP#tǵwQ^N*SS,1i!݁1(򴓰Rz+ts4[|/Wu*5%iJV&}:{fs-s\NJ9M>[ƳV|;UsouF^g^ Ԕl@8muf/OzR孅[u%= '`VR74~^^DOP^Z.[G8fpM_{$*êt>6EKa R)?i'rԕIC\I[kh~e U:5)y=;zYuFh.w[RRGcWMr[K8yI.;5HVӄ*K,!;-bNrlLg\qѨy85 F{x?7qLDk{ʃBicN+Wq$a q4-;wW-JԭN& \yWvn*=j)kղy5Rj4ˑH]nce 7g)T5Y;R>UC9vsYJtjy>uM q̲Ƭ0gsV(*Wu#cJSש y~m.P[)oֲV] jN˗KRibYk%R{YKmvb q;(9ܥNdeGص'{- м'lU 巟SJ;sB vZœf^N,) J|%_gߺ֗Fg^J*ԍNUmOb}vv"g^5r4gG7ԧ VHȎFb;dtϮvV+B(s^HYOe@۴ddK}uGy{Q,SMm+}VMekqSQI"??iJq35=cԷm\eq?pNrqR}в9[T/[{IKUNmQ;?8O1w.ܕ98?Yʟ> R^$a}svӚkV VV;r{s QuFu,kit#VpXp@8LTJ1洂zq-F͜eM :ƚ{>iBM'ݶFsZ˚+MkxyGjH3p?:ed3匣fjHZ\'}*>w9*S-2cqVո_6 SM>Kc?DII(e(mvZKD>"Kvx\=zҸ(ӭ֦j*tKa̒h8Wd'ܬ֝+ {2T~lt]ч{EK F1^6ǧS_b50n(nmV2 usd'M#:r/oo5־'oM%⽽9YLW_,TkrW=qO-U,~{]߰# K t!|ͼpH:+ϱbeR~{K޺\]kfq8LJ *4'UoNA&x_Ev,8Y-|nq3k͔x攨Z~Gwt .kV 7~[(Ɯ+w௅Ok}cPmJM{YÞOe`yQSjchSw|CV̰JIY] >~_ >7eo3'"2HA >Wv+-JJ1eW9/u-<ŴL^YGq4jmpޥy8<JXЍȈ}]t*I5 ǩj4C $d: NXǟ*>y~L luLãijbѰoHSo#8M l4o{\O&K]_rD4ݵŴ"X2(0 'S C܊MF*^gifv(!OApFߙb1KFs)]WMhC $7|S4|\A%\ sc8>>eN$gn8#> 2|qh4[蠞=:ק(]]4+M??qҷ,8Qe۱ˏܭڼv>-FNkڲ{>3i IOӶHbH@ p=8mo5v<R}>mh.#3D F SЊX%it}5qqIm7O}|smCJr:u#+9AM(s>}F[^,&MV؀x7(4$b+dd1%~^ƏѣkQ{KZ/M9?ZӔcVKG%g^׷]G4ѭ̯`ܸ9{b>TQ(w] gdka;In[iYFq,x5+F۽U+G&Mozy0]Ʈ#%%C;fkVWj?MZhcoOf^Mhr>U l~Z.hSo/+|% 'ͯO/+O?n5;b 1ٵFc,sqʟ4ZuFNj;=R]{v12M_[OL'7$7H庨 xM^G_]jTpS SI/"Q׵)JV}fY-Z9MN9HDzW[ R#)_RLڍ͜4kbicCk+o,RlnⰩN1F#ߍrUQ4dMnSUSya}Dz*nEӰs3KnoN|"p9#sYJ4ɕ~]wd^|[7nw&*9!. C/+qu`* oK 7 t}Wt4V+5{]C&:jTUE6-WiEk[[r8D܃>YI4+ *knM8E͒'.:eh匵Ne{;ܰn- ۶ztNZ5Le s:u']>lL@ݴ8ޝkCScMsJwHću! ^\VB*d#`*cw_4v9'OnWxgLq]rṣCVKaȄ11 9RA#>^YjJɸǻ><}?N|U8Ӌ";tTZZ7}j6 y <6a' g_e?cN.^hRVOS)xCX㜍D܀x#=9B2/|Ҷ_>K<ѧJi|79=B)/?{mmBSuڭo.$+|.q AZ,긇-U9VG|Zwaku%f(~!`Y2$v J<ɵ}9h^z??,䳙֜qK8׏qm'Tx$Zt%T/ ʾZkeF\nm]rpdfߛ$`gWq~.J׽ ?,Xt\kK-LeN"Ȃv2zҖf-vnkXVZI"Bc{(Q,%R5%[^C,$F,Fac0@+tF#{j=GdM&gw5ǘflpZzȜ3ev^iΛA24m̘/Ps]Uc 47^ S̋M]GgUi<Ϻ1}jbj^ކXqzQ_~||Q΃F[+ Ƨg˸qPE}h΅h:VAXF+{k;K*G$cQPxffޖۻe:5t/>3ܾB#K6Ql ʇsˌd:qYtgkZ|;;_~ }˜kH޹8ٽ9*<%S -ەj:owpWfbO8'N~e<>QJn?O:Ɲ%֓q#[~6ϡbAZM{ROɘWɸE =f +y 3ǶˈMNQnk,豺WSԜsӢVRv2R۽Oދ2cu~Is~zשRNu;)ЭZVws𗋴۽Fx?r.X3r8{lJtu#NӎߡWßiͧ4.aF(\NsUSyZb1/Ҳ:rzg+*$mX(HxJ×$9$ERYU=z_Z2+T^fhp6wLw*euRJtH-Ta@I?z|6է54g%^5H$7͞yn*9cx Ot =ehbiMU F0+&=9%9;tdB=ʒHYvzc:IRWz62Apv5O2TS߸XE$GP9Q/{)EjBdV(=3Edw>ݱXrr!1UT$gLht)Xd/"K,a{=+{C\=M·o3"\R/31ZGAI ,NZFTiY.:jΪuktG&J 6tqɴZRo%m1!]VSM[:5xw 5khǛxVnO^Hϵ8Ҵu4ʵ }.ߛ5sISKկlVWx,~f#ъc)Q}>z":ejAVsӧ+.+呼gmnjFci^Rc jwܻ5ǘUi÷W<˕T򄖷b3ysds utgo9^Ul T@ǯ qg\iV$䮭 OǐE:|+QRMs,[܂xE^a֣yOPjs|ݿqQ$۵y%RZWm,Hݞ;֒#dKD"HsznFsҷW}?:vw]w/,6c+r$iƵK^E;ȳ]|Ǹ 4FV+^2gX8ZJMb=乖p儇 o޹c*hDm^^SQSzap+/a[Γ0w{w9Ihsb'|!ڄy+#oK8u~kݼM̋7 ^MJmoi >{[ް?I|w>o'xOQŬDz?3BlOMG9_]ZM87y -:vM7pX̸i60?< gV4!欖hF RQ$xABrXsWELU9K쳢:kؑ%2Ǻ6y6;V2FRZjI59HQ հӞ D +A5K^ w["Ƌ=E>sJIZ4_y.ͫ48zu,;oӝZѼՓY^J&2F}8]QgRrt= beFw-noFD\w\"8BPI?֨E.iIq`.nvՓ,7Nx*#IݚTQi[lYjrMʸh5Z#QK~*Zac{}y,Ucms.$cbzz4^eZ0x4t0ˆr91\4q+t8b)EE1o2{砮z%JEu$F݌J|xV5?_9zF>y;ۣUSCZK*ȠW`CgxbعUJކS)_=Uo|vSv;ҴO⹥ٮ]]I&B@TcYGM!'~)ݻ1IIZoGnfګ(or[z=rXtVuBGa=GQYJ2JOO5SIlFnrqĥyr?M0Dݯw\Em]ʛ_QE_ٮfwzϥu9ءea&F`{g*rMV5*F_#9$UX^Sr2a*#Z~f~~nz2[KcN7)\XQlqKnOG]?mj-:̯-~Z@+.1N[= rI;77QZQEhU}4E"|8nuR_BbeW= uu KgŽDq-8mo:qk&26|UB^*cR?k iTtv:[?ʫ{ttӍ<ݳsӬkmDD˪Ns?sGjg '%QϹcR4͌5%}f>\q!efK0v$28+jy9h'5MR4ª'>+x&- T}䶹mXn۱\:qZi齷J8QȒXccBdt4F;wZp::zFsOX׈5ӊ8~4FX{{f>cI!#l:a#St캾`wvѫ*A0r{G5j~?/z*j5L9$i=q9VK=NnYV߉o*or9UQ^^5" 5Գ(*tlO<˽rT,}TiQoHNZ32y|>ryV[W8*UjQYHcUrn1nk5)IFH@*X:xW+bDcZ\6hգwt6_ysZ2XT}:ӠŞI-/ 5 UTLt[Ɩw'KB흅[77!qԁRU+nqK{Vhڽ[7 0N@#n9UQiǑݢ%R_Ū<xWiO<Z:nJFUId 0<ȻTS>-.QDۑE'޻zՌi$[nKxmVE1c?J]:w7^DCjL%qW+.D`Ӧjm3/mW~^eoolq" Vݸ~5.gι*qݪ͌?R}ex&rΏ\xdCQX gONN2R5$y3l(J[V_vմfYmV Bc;W~#4tK}Ϥ}^]-džfK dI3$g<WOTr>̋HTzZyk+6py8=JUviRch~FƸVO^ Ee(sӿ[ة頒!feʿЊUb/gVROD[hey %ipzw$fI?{>]eo in4#0,^JDž\os1ww2%M^7!0ab 
kGs>i-SOne+aBd1Bҩ-L׳/gV2Hgц睿w$cq?CQ>;{|9Z6o-n.NGc뎂U[~v9הddBAoi&_1Yv?N<ڶ۶ɚg_<^r}}vsj)>YnW PV|IC7=HyeN\ Uhd )c{9=GѱFQY$fQ ߻76=N;B'R-˯͑}([if}nzRiC߯ȉ <8̨<`>J* F}I't/Rl;U$?3ceNJ/ԊTN,$'w%YBssT(ӭߟAHUcxB2_J<Ӎpo_5q{hR X=OF~AI%1$1txDb<\HSf/ua[X4)~PUv:=Lyy[2KHĩ7}9xik'uDS({jcȔcdд3Gqw`7mT}CTjO҅x?Ⱦ^Z Ԓ*V%?&hƬq 8p~hhS:xE8xZNi^N`ٮZ?wWs\=ZUbͫo9[#ZU+Yy6n68Kz]hϛ#tu1Ɣ{>Yxm4Ta66Ǘ<Z:^Ҝ*׵vMz=œ2f1s*Ҍ*(YQIKM3-ǖ̹e uvƧ"B}|Y7,գ*˜sQrөݾiU4!s2.[gOiFRViPB;V Gv챃$avkKW&iVӒqqVf$V ѝ̓2H𮚘vo3R1j*.ϫٕRDCJ.99 w {v[ɹI8Ԕ_.ԬߢɜeWGOz*r:|wsfv4yn0I>*qkKV1:k^hHՔE}sیc ;jS٪9$`Ok#Pv'ʺhԅDSIGZ ¥YUJRce\}Gj.TӨYK̝R}ʑ3/olץRRSю2.\Mq+V`~_iFCSu~[G-q@vp={Mv:uŕcz)ѶoQߘ/IxAY՗wV?w[[xԂȯp3ۖPGМuZ_v}^;^(4챰SaA9'98$F3%c){Zu7Oȭj Jɘi |`,Tb#=اu!24#+PL?2~FƝH.df݅{ܬg/iM%rhj6dFmU#\Y_OAtuv)۰Jcӎi(ܩVPI故|ȗ 9#S{xKuВIټ.NV=㞃5jyZiw5s4DwڧrN?qʵOg=Jɨ1E*Ku5F];u>el]i r6)}a{<&.zʋԳwbo۷>pzXƵJ2WS {CXqW:S]4wc(UYG55%JSJy6-wmlҹ*[IT&*t'"IڲYu:[Ni$ZEXk/lsԕ<3rE][mj'GCo}9>Ϋ[8\cT}opVa!?/ˌs5xʟ#<4PAFYӫSug$iԣQ]e^| `+zy^TORN{[r6'K6f]cH m'`zT59҇+z7BH%\y7LRqj蓆_qX+H7mF P g[gMĬ+>o>tJƨ ۱an'ŵ~=r5TNRw^EGo:TUm#gZ8G˖-J9*Fv'vkHӒ}sj5Noл ǟ̻ Fj/3Zʤe=cP.-qKe=W׶p}g(˙rԞߑvH.?y8j>xҭV-2k'DhǾ~#-[^1%e3J2qO&#F:Y+KR*w؁S)TX*Hidf=4S1"\]),@CV8ϽuB\qz.EV^]\i3R=Aw>"wԔUEm+vcY+Oɽ;t&8:j^rd3s lp?2~.YS;krvbtC,7u0zߏΜcdަJͻÜ۞kEOԙQs'nńhdfIveU}rqZ}^jRm1RZ;\зgX3ZLߗRMZteJM'{%V<0ǽk+]Xh^䩵aoyzW.ef{سET ͺO1k*|֓IM4Vn;Bu &Wnۿr>/[ (u#+\iLd/|a~*ԯg5fGw,B@8_| ǚ*"^S.Cqg\r R% (u]ث9Zʧ4t%Owǻ|ZѧxpAQ e]c#u#rKKW*X55i[ķY"n]kkYTuM~,GH6T/e=xч/Ͼ6& m|nem(E)9:i]%~LdO*Y@z+>_)$WX#-&zW-ZnqSkנCgrc_*WvZ)_^]hlHYʼMZ('_:-"67'9sYԵHl2kSVխ68\k2W^FTuiյtߗGs#lP[yA4Nl*V[u=ߕ%ܑ+Iy#$w?aS斥ΊuU7xu}-: NYRSn)X1=?Z2M9I6"cF8>TXH)g~sDcJNvge:qMw(nK kJۡ,kG+^($mp9һ!?wsX֧*Gй ò 퐆snյ7} z!ZK|G!s"dWuhyTẹ<_eoʲ32Jꏴ5.Xh,%O8Zꍶ4JzN'r2{mv*\/+fF[~>֑vF KGB~m̡3q]]]!Z%7Jg'Ǚ6ݕ#GW kBJN c?8\H6VrE|%c#j婟3.XRۙ-%;Öm9.taM.WhI`bۚVS*?nce*ʟB97Ad/$+.t<ӊ9[sS:2;^,eV ϖ˖sv55%eQo{O%đ7+7lO}Jt*NZC&aj;c U8ܽu mG-p'"FcDZIBݴ&O5;[vnAmx^zq1]*߷UV^XmY~m|PqX:T*|ܥ]TU_"K.$s2q?*ך/)[uSRRCsj\bUWG#S SQt y,[Eu9>Y9T G݄jfd%ch1+ gkU)I>]MQ֨R*>QY/.roX㡪++~&(ZiX 3qgɯ䌪VC'VY, UV}Ǵ)}r3WNSGҝ:)o:wd>e88"Jtj?O=<"hKIP 놢/q+Ϟ c3R6Yuazq;9SRoY<,}uO<pڲ3ѻǯ5[ v.VXekzzIMy23F|\]XgG8t̙cM|*JjyX+x-.Q?yi%;+r-7ZZl۵QϒcZ2d\N |̫oOI[u S)vI{: =U,KϹT{}r?}kZr9(ȩƭ:[xJ ^WoSU:刓nW^¬rVejsʛouԊq훖b. 2{}kIab{SF.]ų҆~m?&h:R iГtoOJ猥8$U5E7ʹ`Һfz7ZO2)?ЧyildU61olB]N2WlzUGrRVT\Va#;|ӶMQ֕wnTIR[^. 7wGOJaj~˖+ Nڻ2p/-a^Μ͜^{!-םU(g^+.sC}'mjSle*&YEDicUjƥfe(v\+Ï{hqSe- Mznw;֜9hS2Rk-ŶRApv2XʥHg=:|К䀬XV;Nk0L)#Kg6O7{nk_{Y&Yc*׵mѕ68ߺaTD8on\kpkS.гc#>az8RvMyVyYU7J^Nj4)2{{(չ刏5u֔b[KVZ? 6(T0ЏYʳq~X7Ͳ6SimlҕHQ%,ì2f70˟JTC*xhӇ=-kÏ6[{q\5=kaZGWWy'w/==h`Xx8-2fdqJ5Tt[;ђd,\nnykWj5%eXeF}޵C݋rŵ]mT#_#~9 ȻIW#yKލ+Ъ6F:MBNVIE{ίHU0ʃW85Q"zY}_&wVlu;9U#1jTz^kqntTE;s&ՆoqbXup6_|t⺣))Y2zY;vvO||meS9=ӟNnh]u:j{/cS5 {aȘMzt?OƣF\F^՚1-e!YX:ۮ+e'Vn1w4)IsKӶOq 噡m:_En$_qMEA=:G-F3a~^ӍJ#0'(y%Ny ?eZo7GOܤ5mFr2z.jjF.FU)/fji27cp=JKSޣ'O#GL).ݟ3 nr:ֲʥ+J*ninb껺+cס}b>B d,lcuܴ_ *WK_2^Q~X$6>Q6zFqO+{k(f l޺hjJ76Рh,{K;yG>ׅ(Ź>1*p+hX}^Tڱ PG${9 rsƍsT`%~DrcFzq޾j4k}SKzhq!̪dU@YYgdÎ^xՅ<wcNIQW6X5ipff;uA]#QT䧪KzS[yzMfi#HpX($czSIrkOvc4mh4Elqֹ%_U+J!>m%mod3R89ЕXk+t߸z0Z맡@z1ksc*8 2-%{{5S |kmg>11ە^X*E3;O }\5mf[fѣԡ8e;~sֲge7Fm`ruu4+|DM]jX᰻>Aס9NQx_CNvku#!g,2N';㖛nI[^B<-so h|vU>ct?R1^Z|lT0XJR6|?.Si6Z̿lUZ6#!؎pz1ppx0-E<) SV6ӵFga~$Ewb1w׶W;}joCmFKx&UeS᎑9db*}bS~Zle{7QF+fixv?`y$ 6ڵѢ|UhoN uqO޶+Մ:K[][e]ik_e_1EQ8瑐zd3J*K{Zߙ >k-{_x-Gđ2t߼-N@cF5(Kˢu0}[׺{.-bOx\ҡO7X?}B1BN=N|oyq*Qk]O#忋q<_|mrz<,K,#H:aU$1ncL>+n&UW[iMnUcA߳>k-լap7ʠ}؈;NG9={0Gu~^'N7mt>\Dl``3׭B9TJ׷lN;~ߦ-n1b7O)yGby0ys5PUjN*Z=.K^q)SJm^wWeouƻZ\Č.2F5>i0 pt#pTmz# Y5պC/|UmmB~i2kKjxPu{&ccm߷< ELOۺճؠ}Os'g> -m%웃߷$sbquQSm?}NKYe3qƾSJ*o.[&OkR:u*ҧ󹥦YL. 
sslz1M7\F: B lЁ=xԩw2%^-YuꭩwGn:WVWZ_uEOfozspakw'IWۯai{ݩn՞j&G;P'%+aRI'yZzߧsLRM%;vø??]xğ{Xk}^+_h%_eU~FҕW_/g?ɞ9uҹv:cY#>l=@Yl+9IIgͻ?7\W]-FX|_3.ںR1r c5j 1Yrك^3nˠmF|}9өwv^-r'6Ѭq)F5*Jؙ^[خe^(Ӕߡ0<5cq;7ޝ؊u"xcZ0mG]{vWQOh̭#:qَISۀO'8e^*KMLҔ(i̖eA-Ze l1.B21ӵyҧ''xӗ+mu"=|׎ٯ/4r֝(\eX'k$b@/N9,KRKc=RW]C92.[sÂ+Xnj[S#s5mOkVuFZI%34ێMe|;$*ͻ<>>^InuQҺhŸ\WʵYU.UfkK3}J4cR-ˡՆN%)M'm<|,MG%bǸ+=FqyV ~__3 D) J.G7 E qۂO֝J+ck׌~^i/:6q6x'ϦzMsu~Fxjը8RNTǑ/uiY9k{Z<+}>1 0l8V8$cE:vkfSҞ-ԢW}AOX#"*>vW$=:cRTb(ӍpT̹G'MѮvn rjJAC8oݮDž|@o94^|Wۦv~M<=:zNT|]UU_+JV4#a8|5+nVZ#zGM,gUl}4QMb3#b*auztmNnzwfj#2gHT0N{W}j8f~|3&(sjdݍȫ?>a589_^᝿خ6YqIKomsg[ocվYCdsss]4%grOmU9kV==+8Y[v\\KC׿g]]jJ xcu]Ĵju }SZMcuW҇+w蟡_)ʕ駦O/֬=[k+Y>L2gЂ ӥpbkSqSjt㲻9If 6;TS2 T\j~=j;<8Wv OG}=[FO]4`۪{|T_W%LDt6u(-mlt{D^e0=z=L'WM%{tM<,mV~ ǢǶ6ŪԶ̠n鑷XL7n.EӔ7f=oNF׻dA$n#==lG9#u9ϙrش1|I,$GpYUd}{w p*4"3%X~ۏyU:kcx.W}Խn]u6o>drQ}}GagN4I\ r?319POːs\8Sr?eJ-֟ j1vDA [ 9Ƿ>̬_O 容weusmhe#);pCp 3xM;鲿}UJv/]6hu &Fn瞝=+U)To{l\.*VS r\~;}nJMy220R:xZԱJRlO1R&z57O{x": (RT*;`dz+Tr&x|eJ垷p#r3zT%q~eKF3.𠁻q9dU*hͽKl[˗wC<irb//d^$*" T2~XY^竈rksɏ?%U!>fZ]*qNe'+m,NaYO̼' <#nѹ;uTHΤޥ{)nF9=9`23%QܳP),~7)g}&:g&uƵ 扤;Zғ;)QݾKGԖ+e.Vǖ GW%$#Jx)rשȬ-e FZF1qՊ45r_J\3MikGUmlMt4[^xًaFS=tr4LF8Nt&xR͌nŽzsSTm5)$Ӹ$\*" u)^Œ}ؚ7eDXsyZOcE+?%4{Wns:v?rRFZo鴏l둜Ϩe$^<ԂKg<b3ճqթB^ Ek[Vĕ#\q =jU%ʦ-/=JF_܁ɏ)$?Z8 NecNmp5w6avf' h-_GM9Qr_'k]B8愴&_7hF'n}2q\ѧxX(>V욷u$_@'uz٢*%zyjd|`ym=~W*U9b9^ƤvmgLۅ=,TO'Kao-Ws\-ݒ}FRwF}4NAW \W=J<9VD{sb)BKƪ>Q jRܪܭūFh8$g)r35/u)B1TUֶHt,hY[,99jcԔ7]}< vʲ,zy;zlTJjF'7~.ʭ O/h #P9,s׀:by.X.Uԩ5(SkC&y V;r1ӓfkr\bģ_eڤ'e%o)WmO<|eCLT#w-mCf[RNFFqV#]:˖KѴ72H=ǧ)rOPqָ̒"nj{w^A=wZ8kY=z$uyj1)̓ ǿL}i:s^_$[yC2GAe,DikNU_[B0혙Z=FNtⱔ{}.%qI,O.OVҕhF9(Ѵcwdil}=n>=QOK{ veRmd8~i_S*jI2 n'~:̅QJ.Zw+ܦ8vorzEK޹%'kW^[79y}3v^n,Gˎ^hUe4Zޝlܯm@cF2M=lgܳZ]n%('o:ݶFyycyDjU=ůZ0>6PwhU*:Цv1ǮqS,ݭa&ܷO 28-8Q?ϢBtVIcbF8'z*OvktKnۙX!wvV3[?˯H֜y۰3;1VyD=ߟz(ތW+fu!~[R7|cH}TM-ǝFi'؞Y4hsŀPr:dY*֔9m:zu=`mroK*6oNJ#'o#?.? ʤE9CoTKmחRc +6w+Mі[?AZQNPq5ٯ"DK6WW38]=Μ\y$y04h2C}3ϥgz:jKm4pdqÕTfR~U,D+Fjzm \E m|O!yYF4XM#F'H2̤mc yޞwtuVa aMAg󢣩N|5)$ܔ) -ؐ1ݹYJt:8Kt0nfS2'TQQ?y1fTX־ކ*n:4Kn[wPnWX7Q~nN/&кq{nV\D*| In}Nԗ#],$R}Vfz_j9wiAjHhʾ[+6;[=:]}s&5$Ս.%]`\׭M6wʝEEj^d_){O?OY2F M6rǨ+hrrjX9+٥` 8]W埶bdc=49t-{H6ry^cT7w3޶ĒGErwkNDc#*ϳjc+Ug1'&[Uxh`>緾*m%`RZCbn 3cTye&svōH-ChǖfF_i# N\ h<};ZrQ*td^O4yssT7Lu%ceuc,O˸ 9ջFm[Nʹ-$we*iѥ@ NyGOµ.d:FTs9=^hldIw~%'ǭwJF3a(3VԼ a#5#lsV>]?hۉ,ڥXR#23g^=awS5KkvQ\v15=Z;p"o0G}TJtѧ+:SIl9oPe8H0v#⢜h:nɍ9V*>m鹹st,c1rO֡^@sc{'\S:#؊mZpE:汅J巡qh%eUk>5)sI~cXRW(՘FIlIQ˟Z%o,͵FTڵ3u*.Wk qgw z#N/khm`Y}E:sNOCjrӋ[2Ry|*P嵤8{W7iHQujSMɛTm76t8hW1<ֱnWRX5廗lm#׭h~VtCFTosTvSZ#rmKCUjTo+_o'\m.: [f}S-iULzj_Zw*G ~_!-Ko){W/؛`̓mWt5*Ыנ*񭥽xp=O޽g%6QvxBRUBgˈ֚euX 1w*,װ4ͫ;(Ӵ}<^+_ qvlZO*53ק3Βikеu-lk靫c՝L_3*nͿ칥k1%RK24v#z[[keܖIwX;w Nwq姉tMZ[|ƴ9s!n~Sc+υH;vI*f]މ6SrKԕR-/z/Op8WRY)d^ye_ cN*-jۤܨ۟T*msHԪZlڴJ D ,.]TԹT;IYrI/Ʒ@oJ%!kRuTj'iZc"d` rB5(F׵ҷνj2ٝ&j2#u~#88Qi=z؏F7+hߟe2FBZFf|2csGrQTiAGtWpYe*ss\)Z$B+I贘eyl?8>cAƕ umm+-F }^? 
em}go_&hY\2GC ۉwс峌{ҍhsY7}(Tck[KkԿ ,9;~J\8JHrRZDۤQ#30Dz+:(W}J:yڭmzTAR*r]^ɫ6_+lG2'#SQEW*2;m̝A2ygTS@k1u%7zm!O!bN61z*Kݷ^\W'/>4vKXڷ~yϩa*&ף:*TW9]KA|{RM2mYsw#)֫N6j3hԌ;;d ]dkXԔgT*Qh4E*m d>z $εi~5~7tƱۯyAlv<{ss'kmc/ֹU/n432>h|ǜpgkF {EHODS^fA;~uF5)IT<_i;];tæq$12MX?om~~JUks_9Y?ݠ:~_3S4u*- x3_ y##8pk#αn2_K|{msUca5 {x*UH8=>^ڎ}sciʝf_j끌pkIS>e윯۵ܐ}:w[(K4mՖ\ܑɻ-=LTHvCT\ lvwdfb3jUݲα2I@V<9^JUԴb8S]ߧaln)s.N1+_6r>)QPh`kȩZT = urm滵<kF/Z#ʳY۬oR2T<yXt<8zmӕト"`$Eʖn9+*Z6R1WD  8U)+H,=M6_TI( s8WҖcST$YݕU3ُ8tJ.K>ݟtmZaev^@՜9/jzݶw)9Ҫ4*r%Դ;Se>`L)tOFMzS+.LmV;FA8 &8J|pȻyWJRSU_pknPW;GMOvgNN5Yyl񴅶lzq=^e:#_E%՗sG¤)j1|"˶MݎzA*J<1JRdD-78ֶ'{^٫!џB1S ?Z r7AFr|Ȗ7e O,OֺMhԧ+XAƒrͻǶk9r#JrZ^ۺnt(ʥJZqW.(-̎@xCھ/g.I3VRG?|QoWuUSz!j;zN1VW=jwk+_eOGRe8m>s5cj}{FFvބKBR+$䖫Vwe~|C* +$֯g}Zn-{M'{u q$Odl=:,cɦd"qi+5NyMZ-ҲygPxnD_FЬ,ރ܋t{+O֒Zt%hZiդRV9# #7Yba\t=|q4:ܻͅ8(|V6*ܐR,A܏LncƜn؁?#?uGR)NMBmy. 1x)-nP)?#~T|:{^F0^r&/r4Jd=b #-S*n2.68*YZ۾Se[EȺdYݴ >VQi{JX4}\^$KF|[>g*1nmNTWHX2SNO>㠩Tm;3 NDWKoG.# ӭ)'8k-Tzso,c`jc<=3J-BssOG}juęJ<9GNk46yU#Ǖ={ܴϵXW\ֺ&\Mc[q4Kص=:J9^=QKG|{gjI-d0ovW-MĐ#dr0=>UMIR$~bI$EWGޟ=jyeFhE5"̅cX5\EӹJ\ζ5"|mmc[3ѱjg9 {ЩϓIX+ӧk!]DIU^נĊ=y`cirUN3jZ (^zpn.׷ Epr?3ӽBOi1fZ5DQn[n;p9ֹ`Q;?GKⷓ%*[mOߚU(862toQ`q2=FYcwAW+Gt1)}e`-,#UTk~Q}1\'SSrAө#,goFW+-~F1Rםs+*Ihn7w? ZrzkFQk_[kj>&5s9?sVQƘ ձ ־=|$5TFm!FrGJn[34Ӫ╽uw~R |j)S q~tPnrY\UjFѭ>WcFxc_ǽZٌeJH£> Ǝ\oľYx>\3O!JuⳌևC_{|K__"M;-#?21IV9UetlHK{77 ^ r֌=c7Jկ 3)bMc .I1\qO\ T&kN?@'q'_=r3wשּieHЉbb:O<{2d:8Tɭoq 5&_1-:}ܔI$ҿMms IW+0>]89 åLQʫt3c]S<ňKpOVtiߕɓu_$,7Q[Iw 2O8W}>Fk*qwHEYX`v=a(9s.J۪Hc65ٳX8#?j\қi+*t/q^hٷG۷^zt8ֽ5um\[ޭL͜F?mR<7.)&X'̕Uq_/4UW_%(MC52,eE9<#Y~SNJV?ź4LL@n#WTeەY8G{vZ$|sNJ3ka0tyc/u_p([vܝ͐Ol*sjiR.) 𠴔.C$ݾtT;jwG1xl+˷<6zG޽*tUnOi/4&ڨʭ# q*RQB7̤Ӳ(UCavFNIE=sGy'HZ/Uz)+ǵ,7"X(nlo\quP|ݽNzڝev4 ,eV21/H9m)u,7֣ =^e;3uD*TyGU/g_mx#5rҹJfXcc-p3dQuRU9$}D37;zSZ~^V+ϭ',9`:w?J:<$Nj|C;[Z̀GEU~u1tZd<1ķM"[-Ϸ&FRЩe"k~O+>X =thL84.TPIdE͐]< p{wja%8Ge*VЧ'ȣVuG榯ՇS& 2rIs\T7}ϧSqHu{h:$%[w=t~u'ʴfX"rڧkjT9EM_̚PPsgv#W,WEU:|DҚIkwWΒ֓+H}kn^mN8Pݤ7U"k0} " 4#.g۱XZF_˷͖UH-o˹qֺ9e&wo=VOޓByRnUi…OfWqcj$q'C]zrQқ5% 2.Ȓݦg`~UJd.%?b4~eNrmR״JƕK+$V2ImcғQYazv^ \r gZQ[E2m禥@v-#f0+hY\Z}|*sߎ+y)\q+'[I Ƴ\_$5mS:FB|I^Jƥj+t=#xOx>ʭRC:e7`gzFTߦMiZ2OȾzз)ذ? 
yկ_CHO-kE 4t|ܣ-f#|nS?JF祺w9eR-FkeF8 ==*flxjE緩3MjsHUGVuG,,Cuu$yȭĕTc.TLkTTֶf@UNY=rx?gxܻFUyY]n-̛TS`fOWEP{>E¬gwcǿ9>9z)cRrm:Q/!uaZ^IrDfv@XҺ49{=9v-UTdURwsARQz_R~ 8n完º艬uVG ѯU;Yu5293\=zn7lBR5t2j T٪эFejZȱU!l9}irBQq*єkwP\${XmG69TǚW*{Em2^cQV⻢:Rrv6ں'Ö8a*I]=_QyY#,T0͐?L禚 J]YKmKw'JqZUn/MH#TWy\<gӚu)s]rѯϽ>}[+$ۀ{A]WZ4-.LOQVkMH $F{vҕrMSԵ *ײnfbs݀H'9>)[}ZZ5șX`2)1eBqJ3֖S׻ؽk\+;[lӽc*+߱NW/Ayt7Jn|#1%%]jU)oi34ٵ 8UyZ^[/jmk@mVR1>BjSJjJ&6mQ7}3ҦTSH Q]L)EgcRd*jrZ ;Z tL;'*Z;Vԩ>nO:k f1[G_YtޤT+74nPٷ,:ỎkFR~R6"6eghYkH7t9SHrݢ<хU\Oi9Td%yC7v@<xGݻFym߯o"[YZXĉd-{r:f4j?d!Y[ʲ473*P?32kcOgŭ;v6mQN>lg#1)+=ŲUfo8#9[n6*\cd2# ,]U*J}ÍLqcb5Z/jvFwc>;#,vI ϬuL㓇=z666!$ AǩڲKI3(~Ql)+y@VAA#y5(ڦ6O 虉tr| zֹ*s]KukEJ]C2Mb^=zp>ͨZJߨVng)ۼPTtRX\L-JM+tanTEYcZ1u;NKq#I,#2ƸVݟ$dqXM\`Ue|#s :ۧk(4:VJj|w8hI3Ѳ7폞xRK+](Zmq\6zzQGqU>V!o3|Q vf F#&ۑEu],`Pʢ[H]3cq*%I)qakњs]R8[pjm߹11=03rF͹NZTR ֧5H+ԮjT y<|68XU#=M m@C03CP[mNvYXayP`ºY#,wjgBy=FR@=FG󭠴tq,{ZEF۝㷮;zUF2zjuRR4o+M@ֆHXf?pϷ\%3lы{]y]a%E(=+8YZ{59_oԆ;kb`kPm9㧥9Vm}mZfjK]Hm̊Zぜ^=z %!qYH"s;ﲷk|eO|&qn;W<(ŴZZ15ivǒ )ƜM{甩Ԩ(ZK15($a$^^F+zqϩ(b3'YvxeUև4m%sҍfŽK0pqVԭSRN-5O @3"w&=$gO \t]fl,c⼺-zJN1c݌Nb?,[EntG 0{gunaӿ[SZR#.=Gj^@D{fӕkb&uCn"͹ 28+:|ҺX4 ׹{a W_ɀ+ԣRR}DU5+nMj̭'v.2mmtcb#娭^l] d~XG){*3:1܎;.c>XUcԘb,nESZwqҧ'wbl1$˟iӇxtSj)5~5s;x:a}=Uȱʪ xx+G R}5(^ݷ"1y[U8QFkK쿯jM.J0XYj1(Ir֣vjiiR(QtB/inWȟˑw4{:nqZs|QY kKъDEH?xo W5E''aR.2-csW,\b]]9Y0Zhhԫ-Xd.y%N+|gga58VcOJldYau`L~ܬGN.M&<|c*$#@t7{ܮH5:q弎ya*M'k,(*ԞC4{cr{q5scا^5EF=RgS"*͟Ҽݬּe{t64ϲqidfbFdz~URRIw6~y˙oc{InF $ˑe~I{NWg){G{UGA]{0@ۤ<}Ѹ`=SkFFjJMj6{uVYFҏit.ݻEyf$ܟk+=+:t\7*48n1KA*;^ Q'a^^:b6q-8c8ҝ*8IJwKfPrVTmNQ(|[J2E(ӅOYe1ms\gOjNXC1TܛoZIr=~n*~ ߗqO7vLo|XͲDRKاMq-I\}i˙l_mlw?(ph*I;sՌts6䠏ˑUܣq3ڶ?g.kRi[U/V&Qwazc3TƴÖ\hRIUeu1O<~UFUjIǞ"ض   hOy[ z+^Jq-Ylef$omr1~cIJ;#J۩DIm`eܩ#z׹7kT(]hsƙg%ydU`/r3Ӛj8u#+7m&/feeᙺzÚ:1Z{iVâżIIhEm‘޽+׊OM3)+Fn^3B.scONSJ/OFR6-t-Wlp<8q\T&ﮖ]Ŝh$,یzk:? Yz-֭XUnhUў{sy迗Rdڬ?qSV>l&|?%Uf"ܯG<N^H:؉8}uվ.XjI3u_SW{{Vϝhڏĺ=6ͨF+|݋=zaRIYGsp&[O'ώ_/XZ]I ֺ5+Cnpͷ,3=iGRmCTaq9mu._>,'/QOɺIL7Aǒ0BW߬\jII]w5aq9]6Ks4 ᝥՌtk4qrfD|u5T9Tr~p拥WގڻO~.#~+C0T}BUjmݞG-W=CyQ\$:}|RMnBYA;bzDh1T4ԔSѴIENJπݟM vvs@v>^ iROewkNe^ ko_|5ٍnN51b ^ cF9V[%onf#0pkͧM}}?>"U==GK- Źee)y#敫OUuet>g1/u~:g|Q׼+ÚndyidƤ #)Az]yz#R5Y^UI~η>X7dᦎ;c}ÌopvOjWkuoKt[/aŎC#3DPbve<JNg U%>֞Eev\%Rl2юC$v@jӥx;/#=3isSGeDsrp ?O1Z\};lmCY.WV&lJov&^^eL<}Iߵo x̪v73`#>=J=vi$ާԍ_}b-ɛq$WAk4"<&G9q;>LW/u⧋7 ujL:cѓ+j=J޿.{3G>riAJTR:N7[[+>v1Z!ѫrq-.Fꬬ mE:&kƲikW|]zt7)qo϶+Ԏ>iF{Et:HVGQeŋ-#q"F9WE T/iM>C*]ō0mpyWONNJ s"dY̙Qv'aK= Ju42oR2k Ǒ{tܣ5j7nޕ9Kc{@J-59IMr?TKmx֯Vn+TJ)IQQi.OTErw2fsҪbAִ8T)R_ xGa):$xccQю*iVa3=kxӇc$̶{AUՎR%n<|2ȣ9NqjhsP.ʬ?"iZ~QCI3H6'=^]iNvZ+mt=*6^oHgyq${cVicN{X12 ҖSM}LqS:]S[oc~'8l,9÷L9,EZnGGF']lha4m&dEazõi, :m٥~REz]Qޏhj*# rt*v U10yz[~ejRo JBv^F_EK )M+_⩩'S^e#J.n~!L;#~vc*8gq^<="T-ocűUGaӴeunΌ)J3ZMƪ ry]:\N%W]vo_5fɮ4m]=ே]'[˒m`r2I]:ѣA:4B wAc1&eFO\³4s'dyT=ͽnw4+mS@ ǿN:׫GS$J*4eOlonbC\c&xgM\n%/5K$a.UN ##czҔUmeXO}]ODgzE}n,] AS^|˞M:)SmswikNxk#DɼY%Oab(2k[/"XT"~h//U?C|j4ͻwK}/]?7t$ f3#!?P7Qj|FzhGoTV?P02>j![QuהkEV>S @YrFL3LҗIu-/%_N<@oܲ+H1i#Zrң%e[^{~yXehMz_|%ZhZ$aXqG?B8{1xO3_uʩ_0#)&ߡ|IdդkXSn?vF2jh'W<X)6ںV@;IWGӹ?gu_>߭tW?Rν\#]rɹvǟ`nII1SZ=K$.4ҪF-pQ7dO*IV~pOV73~|dxvE!hUA2f8%x9z 1Y|~R~WwdfdPWC@8r}sn??0ѷC_j_I>jH̪e<NknNjZ G7-OE}wi?#Ӭ-IT y`~qv t1Ҭ>BR־;>cnQ]Wٸ|@=}2Щ?zOmZ6]LROe7!'5'(#?{cV[Uy[tv{g=9W&r-EI,kt$ӽaQǗ1zkۚW/mrĞǧjN2JTb]ȋwVrR0QOmh,bEmwǞg)jG-S޲uɏEnzdu)6Ҟ%{!YL?0tTc̖߉1NNODXEgkWOl?ZҜ%Q$qF0R1䪼 {{)z"WJ+FG\&KƬ_snBC^(E{YFy{c%nGܴєgHҼDW,y11ZKvBt9J”,E"?OU(^TtmwH/.@m}y?On.VI~S4PDmfDb:e;2QG4p$n`:B.FeU֡\J~vN;NRٛF%Vԍ,઴xXN?}6Ob}<ܫsĺ%b3'ׯ*e.Xߩ <6nR_زO9Gd \*itQvgj6wv9) ܲg'\ϕHZkK+ܤK/-8ܪSp8\qPrzc 'NnkyWRQ[ x_h[rYUyLÆ럕j}+_`jP,0Y 7#)r롌kr䎈#Xn̍*>nƺ=Ekc]hKC.=#7JOF|4eN̋2z 5m:-ٴ٩KFsR7hנ`n*GLyWMFc3N\]Xw:y2F98<[hvƥ̵y<y;wr 
SlӹZyEBSڶyR\Zv:~^s|O}sS(Q[ztqY,,/GS9FfU)Ɵcv;Q{n#*;p3ךr#~[턮+G'ͷ ֲOIjqS.~UӱoWvɖw='QrʕNv۲}Kv%ϖv㎽+Qu/%pHWrOHX̏62_G.!U>[hȲD7[wP,9GXI/d(Q&̑*#sc:78ݎֆKT5VvY6Y`κ'OCjNrq-}爤Tg̐u5*FN6z_+ղ%em޸>z^wZn y4O3ݣc]#3ݖ\2A=MWHK2S m̱6dڮҏ>ZrΧEB'B2aY#Uvo넨uyi*1qN- _Ѯsc\(ª;iG .Տy{H;??RQΌ7NSNqnmlB2?kUdKT^UD k!/8۽tQIBInG3,avۜw1eK"RA I&>c|2<Өݺyy)r8--og`LwMkO F1VbJoRʨswĸd/ c'?U*r5vssԧ7/u~/l%HHԨϷלTRRm ѭ&a +}Ze*́;9猪%3i-7uqWY#Wr5Kd[.}e=0Iz+qr'RU9Z !I Qi+'tNq 3"~fƫv<71t]:X»G~_q[iKdqJ%i;}2a\6`qEҎ9]J]k+u^#zǮ? ԞOɚ֍ohQY}O2">O{yzU -s,dڧl~: EҜeZlGwtŁ+\gNXջ+13g\˕i_&$G1@RQI MeUWvWGٽL%NX%d}aު]W%8sO TRzHXj(f|QX5Taf`h4aY]2LlGU#%fR4in74q4"fVׯ ]4&Ո]CtɿlAS֏,u9a0З-}ƕ%"-˵wӭ RKl8Yae:P7lYA5? -j#ԋM!SH"ve̢TyH?3ZGIqIrr1k\vVmR Wa$->IRJ*p3T'g9-Q_Q^ pzNO&*+h:t҂dW9I#i*E)-z*8MΜJI&bn!`DmGaozUqhdFg2K<>Z[q͸a gvepu%x_6imW-q3Ƿj%:*FTcw۩FG:}l9styJq!ɭ%3J&Kpwt}%z8vQ%);#"=&;JFUOOuR/g(Gi[\IbE]ʥ~>ZԩF4VQ=ڗm9N<9t#${Ruç'RSQiu6 S/$'v| ޱxI%ta[G9;[k5?U9 7lvwB7]҃n%;㯭tQQ2%E=lՋc,G${ mF9~EtV5vU9Ju-#^[Lii9\ {?i%Ю &IG6`Ÿ<}0XR7v[TV14}V߲ğjqi7V_L r'ӋaijyzVy朽~/{ lnK3Go}>߇̳hbd1D[3kßgq_ZV,DN՞ 'j,2 ў>j\# # ?9N:u|O`([y1ݸ5{ E5>UkJK- Y[yw4]ln q^ g[&ٷMVI"7 .$bzcG#=jVМFZi{ܜnr2OӊÖSgNT5t${+n:N=Z=N=A7О8?+KIFScqyD{,:ϥpԭopNtzߠ9aUf~dgL+ȩLJ;4E7dc.0o=z6"2 9[U8Z1Zŭg6Dž[#$r9֦g,γ[Z [wVSi^XKؚѧ <\jk*ykfzqsW9ԩfp갓roeE|[}\f#z{泔8F*FR`S͈y`ml?Ҧҷsoi1bnKW|9Yq ik("dtw[H:W>~Vܼt5(io"yk9ܿ;r"O֝zV;BѾۭ|aUF5Ԛ%m&?@kr:iWXIDFhU)KI0j{|@ijT:5$#=:XT/u,$\Z~Y F 'wM9dK,ZGWb6˧zij~U3j>Bތ5Z3Cڿ6jBeR<ǹ־%cG_SҥJi<[hUvK2FNÜӨ,?b18yK_}N3F+ݍ}<"ߋ<Nޛk=,oPKl<7\t+ϿF5QJ;mo;xBih#$cb`}=}r4Z1#ݮr5^ \ƳHXWZs{TcfuL}GVwr8=4]nKxWb̫*e(o>\&r|?wsH> ƪN}8zD^`q1Ue|=V\xKSU?;Vvy:.8?Q¬M/Tt}Zf9\0Vktĺro>]W^{3;e88w \7`zqj8.8|֩A[Fz=KZwċ4-]#Fpwz ǖjs7ɴ}3G$;ml1^rxI9NR,*SK>4 P.2>̪I >i֦gB.w׿bfne)QSR۝طOj#hs4]GgEKhEʞ:{{#ι"4=-}㹱,NzV*J ԇ+[n6K3 {L3lchvgWG1ݦ",I#o^׊F^>#Ӓv#h'{KȞeg cn3<˦F^䵿nqk+-gϱgO.i> kQceavv(;y~J>I/_ԣJm{oQ4{-R8T5;Rҳ,$6Ѩhz5\Rh-c2oh#U+rg-y`G[^&e{\LlUJ2~&ƵSӸ;vVn6p2qn8I6J)޾OkKveFYPsЌZG*d﮶;xd6;sǭK'4m6ӣJu$iO^#GV#O<5#R;$Z4s̡6\uXT'c(֔%=m8fVmyjɂIOUR1ITO*\ĉ%cqTs߁ZBehiVk+GtrB~+)>٢1ng'*s;ҵ"䧟!0N\qOafΣRַbżn&w:E 3ߡSQjESͭmG, s"q< ==hc̭Tc4+4ݻ 0e UOk(SF$v9G9$ xZT}^ֵ7*_BK S5ܿ9#=jFϩtiʌ}[inqJG\!=9u*%쿭&[HSmc?QZ~'k]_a[HJnrw ALkRY\JhSJONL[gG;``n{]j4_tJ }/Q!$-'=::uWrLt[O|Ǔ2n^V2g*>]4#yU=mXJݜyiJ喩w=jn3WoR*oV`Iz)R\4ZjkU⑸vhd x<`sajJI4O(I>?.'໒ \ `^:WdU9z>%Jq%W2 7ie# @9 ZjVos^ؽ -%e-r8ڌ)ǥR,-5$(DAX٢C`}kOZ:&PmY\c3c5["yhbiLD^Uu ;gVZȩ):kz͔4%yjp0z[sFvua:u]|+Y 2Y#%'dCWU?MJ mYJrX~7v`jp,Tm g98JGW؋MI6Ր`[%+3#zz>֡S9W^DS5_8qo7ں(ZvQJTkb$lS]i7$w/ߐqҦ+CTe~#Y-ޱ}Qb+UZXjH7jDmUWKD$,6m쪾Jtd4(JWB V+5-Lh9ʱrwUrF{~m $d3+gkt5e-5Jr W|GWZ7ԪuX{D;zUS,Dմ]ERj5M+bdWs9e^1ik8)6n^RƷLC ڱNj|8^%t^2sP)-5L+W+bLFyuPD`ʻTc>/iQMmJ[!9OSfGeGkVN4:vhX}\eSCv.G{2ųrBe8Ԩ,24?iUH;hڻ۠6&qU%+lZZ%Dez1RCVo<{oyձ.+I^WBh݄}:tOMqӥ8NhF0`O ͉7b%7QgQRv~n^Ds=y褳Luu*Fv;\Vb >TRB9qQyf+Ri&PѪ;n8^7*t5b EܪW'r3so_Z=R;^Zm*׏jrn4JORtv4;)Fɍ(KMax%wYr;^ySOJC|3]mW)s]2ŵ8Uu$o*ԫM+$Y7Hr6{R깤g.IEBު5yZ ʵ寒4hh,{Ws4qscZFc5+;Y3V2dTuUTG=w&is*sW͘N0ۈ=V\z(Z{3\B* BVI"S`5C)BG[{uAViP 2472.Y8퐹mwRw2~jB#|gV%+#zՔ3"~8=kjG(O h \*@*/vgUnU 'VH ReW%] 0?,է(Ÿ>#IZϲRkxf1gf73c=Tc'-{^<؆\\//+ rJm4q%K.bˊP=٤RKBYQd۔x@NKiyVqkt(^s yknc=^Goƺ(wih{ruW$l<4uVȳm"csk R[Xڝ/i+-zRO=|tQ}? 
4޷17Nߧ__Cb [ƥlĆ=W06]鶛u}[b`lハL5%ڝ:tcSR6n,sVV7zrB$F=9=N)¥fqJiS>[o&XdZәWC(me\(8U6˄NJ2l}<$qF,| ?mO-ޟq|Խس hn@q(sTqv]Njyjo=,r3ُ ۃڕdT+THLiڹ?7g򷩋õ=HuIcb+n\mU8{8U'*ke,ĒeJݨE֨m}U$w-_wypR p;//Vnw·Bᕂ\c=;֕%|] ts.Qȳ4 #Y<@N0s;JɽW.Dzm HQIpON*2|>JG'e[;ٯ$`wq:Xmw~ڝ:5+*|Ow׷k9$h"8UힹɬK˖R|EJf,YNW\v*NZnN:"t-uhUy+nq_;R<ҵrJ3MĮܻYjӇ,;0I̾uT+.Us{GmMdC[cp,zU(өy\9ֲz-c-Ϸ\4ML<%$易ӥ4Wkf> (×QGEM.䶜/ !Rt)-YT&&6 hɓiOkKKs1UZ& yFȩNh{m2'·(zs|Gwȷh7Q`8늪Ȓ8R2%fa|46()<1rƯ,ZyN.TU ޞg"6wwֹ$dT9e+6+M8WOEHQ5) 8Wv}KqПZe)]Pe8k(yM0kMI]ˑj_ߟjk*G nR/ˁҜc/i̳k/EaRjHҼjrKMOmֹ+GTSr{hѺͬ3,;|>ƫMJNZ>֌;SR)F6ڲck:BxWZU;^u7>Zw(,Șԩ,D} vn>VkcSn[9g 'Q2zkT%RI=BU/gӡZ9o%޵GZ4Z\K}NUp |  xzr1W<饈Q͇jXwewmJzT*ҌޖFcH)苎j\FQkt6~#X]2ΔiOKzo#ս,o'q OnWaYnT-1Z'k,c鞼w&ϯތjTCZUUYBTݕOkTt5֧Rm=$&HʸU 1SFwA*3$7&lBY} 9RRF^G& rB杵F>u0ҽBj>ϕ7,R;6hJ+U W?f-*e9C[鯑jq#*crXF8*ZrVXUs*cֽTdq¥^[==R% CN99z8_KVIYjSNBAYC1׵J:տ"jSP/ݢ֕oq2E;;ǥzTe'OSN"㧟Gw+Mxyp$d'8 FI%h!VOk̿F*Jdמs5Q?yオ&U+I(I.ۡVWKI+]GBT2}?s15#+b%R-E_R/fۛrY#ʑd~UZ,yhgNTh]m#~YCO[9#|tA851I 0̛\#}|5axmp· cV#󯛎i[R7Rpr9,\ߎуGmv(\ 9s_EOW:iֺtiZWcP(tmj[FAvJ@AF><\.Tkr۾I7Z>ݴ.; ]H,!Bщ#p0VE# dҫQUkݻ|zڑnRjٴ~g3 Wx.kyd_+{t'R9%:q#7 q~U1ߝZDOLctbQ 2@ 5ZJ#ʩApx_Ľ[xO@4 * J NJci˪‘3RHߦ'e> JL_޹oAi y&h/ypoAרRx1YVn2ʮwGebV9t 29&?+Td;}ާKgnf| j쵍?M}OeH03d%=xF8qu%vՁbRzi>;Vj횵i(dlTYB6pI1-9rT\vK<,s+QvZKNvu^]/Q􃱒,qsz`8Gb̾w|jxZj\=@S_-trb;W<9~adp:\Dn鱍lNSX uunytw:._~͞2=~U9]Ǹ;2$'ЧiY_^>_Yq#Z?b3UxJK϶Z-1HX7#>jq^N2 +Oj;k A^]ڤhp!eSNr95eS/y> '-5w3~mEo{FMBe֓*?+ ⾿/V2=S]v7֭NTrϯgͿ?b Z(,æO0G۷`nyB?#3n˷}>R1n&ߞSn{b~+-H.OҶ&uET&zlV!M{W{'nl+WW*OEΦ/UalvC ɒ=)2g筊NS%g*qƳ$ ~l9-Tsrq5)4feҲQӊti(IY7X̴I+2\o_jڤjs(cO[mH3veʀxJҏ(b[6ZhU7/mZ8OqtQ5(GK~Ɩ }PJNnzXW"鱦-4nr1SU:r:rLL=; bt}dvƌ_ -*`39 ˒}NSKe2@,Scnu=)8`]:]gqjPbS#am%CEy~-(ѫz)QMOCfrəݝA +(Ӧ<\tgbԗ]/eb2_G|~J*qK S ^[y#.YvDms^hG^0ѥ%$H=rx;FQ̞ك{29&7L$Bq(WcJRd\Y<gF+nE|-IO-#ҫ`'yc\f\E ԢԴq:ϋ|7u5Km3 #>V)JTx\iҊG~6xK5+NWT$w==0 ݊Ih'4aeFJ.Wnާhߴ^,V]˖&g#~l[a0M8wTR~0\/lB.81#6`9:9(Yy^k>|XK {`|W֭ʧO(%ܫNM!`9$t'*x80DycJ\g=L*4dڶ~k, $ /9AǧgS.g}OVX~pmJtqvo2ǧo kLGME,hэ-{m麤2[&FH=*e'k+iOR5ve4PsKc+8'אs] <=LsTdhd1#t1גN6bj:r[Ntg6oas,x`s0ǭsӏ5ktקcϏUow@OHb NIlcӷֺ:0bY|yjYOk?#B)i/o'%o&FӸyWffaOԍJ7i4^PТVkp4#:.R2O#8=휫aOtڷk.Q;?<%MUD!Ac ~򏕆N@C]zX<ӳc.IJNmޛtKuCz/&iƬq$퓀N=p%+/KUZWoE~掠o<=kpvb$Xʙ,sЊOJQk$^sš->EahK, *nm8cFLDk'uTKN[_S>ɥZ-]K|w-rΙ}GTed۾)J2)WOdOMzϏu-U{idH'A6g99 jtxpT񒔓n7<)Bk7pj_0<60$ seXSieMkA|%_$XLnGŊՏXKwk~0[!?)hYH˅n{fr>xK={usu9Hy!Xn2AOFNޕ{EZ?p]ջY%woہwW-#+),|a=>Xi{(ZjmΣ8Ilާ4}*?\]F&V$a8<sd˯x1άt[A4 m0 n~f9>TNHʥH;vgxYki[H$f펫Гr}=-9{;I6ߒOBiٷEOQYУE)3#nQ& rH+2n]=,s+{>X]4ֶ9_КI2mxY0z'5NsWaeQ=U9xԭ̠{4|UȷSy ˓?A"*:|U*{xCwLT6W 9Cs0؊zcLʯ.N7M۫]d~~߳Cǎ5Hbt,z1WbjԡESo?,QNmi~_hS EV2n,  \=9Vpѩds-~.kz :s aA8{v ,IoڟWBhr'𴐴fPȋ.JV3FmZ:r&W=+E݄$(p ׿Lב^L)FKe¯A8>RN{n^Wcr"d $bw=y=96<;؊e"0f>l 3ތ$EyKgͫNXٹ6i8P=1֜ aj5NZ}n3GB?2$=W] 7OXs:e\ |X_|?ҽ tE:.3krflI$`?gk2+J1+ڞ8~A.@ u5هd{NYolF1sۡ^9U]Ii{npDWw[w$anU4NU#/i:vu"s2H>|zzw4O撵W;oe ,(uےO@ֶr%*iߧCK\21g Awq(y76VZg_/*PGJǗ疏nM.RE#˚>iE}yH"YyZ(Ek%Gz!b2N}MkNyS}MhmWzCm_3+sJrWIy2ԥPƞ[1k'5ZQaӋOWd~#|2GF''RiŻ^ߙ:ry|S_qfi<$ 9$=О=1^ujҞO븝7ΛZzrZ&21cwϯNٯ5Nu mwN8RqIliQ|j7s{heW՘Wy2ZIIz5}TZ~V&1jo'ub3#4 3)_3'8:K_Be*O {}eCKe O* d_o1rnkhg\P2|〼"kCJ5ZUⵚP0. 
3#59J5- jԕJ-CNbWDCh5Rr0zt[ճ/_S0o1AWcNSɿ^SGq.WROҍÖ4%Y0E5^t9wx"9vXawvTBxtgEĒ$sʪjxJ-lv^ڟ+VCmt %RT⭩ɈRM'+[^GyxJaM(?3itRݥo2VvT&0Gln\Dժ)өniirFUoЎHdRGV_A^Τg哼ښvF *\+mJƧQ.^X;֯o׊\* yrkg*яE:.Rޝ{4$UY+¦?5B1ߩj#9|?<0'3RQynΜE5*vnnIL Tv_+8Y+w;I]-4._̭l;tiOkbF$~ȿg< }8/%QUS21XhU|4L63lp,NWqXʜhݭgROI{+loi|6$uqsSF0UCgLqlV9*+)[G>ƥ*)~fRmdqg>dL"MB |(݂ͅ{lQ2q]סk°{U~_2M9PR{zVru.>Ҥyދ̶}E[#>NJ9(۾o nY~h ^Q6,W-u{b©!CFk2,2$Vq=mEV읒DyΧΡ?V*GE#cKNWܘ8EǸ^Or3b rS5%NZ?~ Ջ;nfm &rt⻛rќy^DYI%FQ< }'0HICH3n]I:c󩖱g<$+yq7)\I+rbFuU|`HqUNw'ȵ0rYjV+=e?d@5fh'`zۇnI٤)Q卣%$R1 Q8rxQ)S7'RӶ<$Lfq۟ASk5p#*\4Ȟ%GYtM KssҴ}?v{u9 W֩9_v gBS׺UcyOG4VU}ꛂs{~ZFUiFea#\y:}GJ֜f1h \`N9*9=?ZrNޗ&4sx%2h3[Qsbzh ]Öim|e78>U\b&U+SwV$Kɘ]׷a|Աq5bT+e ZSQMt:ѭ{ۥ$X2U 9mʹtƾ1O!3ʻ挕~c>dҩhK"E2[Pr:UF2jL:kEiQBduG׎ vv8.D\"+lQxt䤹tL*qN:핷aGE%oORkBoԘ,VUDquyy+1-A +W;h,kX_L6Qq-Z" r;ts{S_pZ*SN:[4U[vgdtLj|{4hA;jGC֔өNRܺR$kJ/[B |,G0AʓZ_rh2]d,Vpx82Fč#2m=y:2IÑ(_I7;UIO5;~ Ҝm{X | ߻mÞҩG_2*:n7/0icUUo|/AtҨR7ȱms߶zWD#ke9KFŎFq*x_9GٕF\ܚ^WiMVRmyOPx*!rf;.KeWtZFyK~fbpY:{>BvW`\[Hdgz9YFLk)ϖ2ჩg⟳p u2(GɺnUqƝ=N~y979JhrzŹ`кB݀Ǔ9NwΥ8%d8ƺ*SUf{ӥxEyp(O3ĨͅHUOKN8zr(ݥiսAgwybs|ts6{Hʏ,y^H$,nf,[9sN;54sRYB:[o5־ r,% Nkzt6Kw e.}vסZ^XfdwiFקt̯}o"Y?mBjy1]I#>UBwY{W-DFtp[͛MX]d0jM7B!p_JV6=:t$+,{TiuqrZ,^k%hҦT%s*R4C.0r?ZNJ4U)¥+XN=p|cPKɻ{c<涭[OsԻi\eo/ymtWsK] Oޒ+,,k~.hij/3.IRApcGPR{.Zi@&:׾#{.Br)UDjuFٕg0y}kzvJku)}I$U8 gʜ]#Reӽ[Zq22nk/yJ&tjFM [u6 5%;hCukBBgoE׹8n,V^Ckt8E `2M]c{sQs/Ӱanv Ww79Vm{ 2|Daw}>9ˮƍƍ5w_&Tla_LԦ&W/qbkf /)$*d9']TcIˑ;thi[R奁RE#}0~ltHʫZu9jVKCV+X۔ed㷵T}U)QvKl%Gfnsr?m$K_Sj)j^F^,R2so=^[c)sGEfsQ9BI^dUĀ瞝rUf&u#B:.?#-ԷjNVXƴ]t11m*OswjTO[omlݤՒ8vP@cUFi}%8tРZY9T8oS?CV_2Ibo݇n W5IFOViBjʦlQ;L]gEJ*ēl 篠JrPzRtLX5Qwr󷞕I.ku Ju$J/ɤ–r<4,6zѩ8I9Tn'jh^iYoѧ+[}zY~e?f7-WE/S-JS3 $NsZ*RFRridQs3IpTIm\d'[J2mNk41͎{^k ҔZnqywvn͕P쫝T ^uIW7gוJ4X'&Út"1TznF*TyDS/+IiyVEWF#c ~>VrYR,mF\ؾ?6=xҶi4HZĒHWhYʅ$W -p/txEfSշs9W]=FJJI|Xk /wqJ肕VLyyVe+3qw9>]밣G- f-c//ᎧקYԔ.%'ki#*L |ڡ_ߧSۣR)ECC>x$lk;;4z}@}}އѭRoQ+uXMn$m,bOmY''^8E8׺a< =İHa8"J2MV姫s޼ JHX#+f`6 :t<׃^ӱxT=ןCsQԒS BGKMrCpFrF1¢4og}3k8t1,nn_C#MG}*{:E5o9oxMYUXI6\ǯ_$~U7R kTR6GܣPF;g3M&3bXܜʵ)ʓ|"cO'mk1I"Y]GscϨj1:;zqVd姡?vž5fĒmceBn=u,׉&6Lrrbr{e viy"#*oNMzD| YܲࢷE=~v+b3`={h<}LMmq_N3^S].Ti^Ѧòhzٴ*&Oy_JS68cF2)Ram>Ipn`<4/EVSuPodp@v#JOT{\jZU#FVi!KIn:1(ӍՊ:։<5Dnұ:AgNm\Μ:?~$Gʜ,q"*9n8g־xs\T_GoúΟN3nG\ׇZ)J4#Yw$m_;i;(7mV5QnTS+ӓ\Q5pu+6(}ul67CS;9$(֓~0|Z=펽9arU Lne~Pt8S&pZ\KgIo3VczUK9IJN/6 ך}zV2V^薄) rs<{4q鑞洌Wv"6\F=ps #,7=~e.d}]GQЮu>QJi43 85|W4ѩh~TeU񁍽Gd3c(}zq jە9OQx*5hRŮÜndԞL1< Dm9WfR5׮n@!VU~\>\ ~ VtQ95bHUcT'wj=>]BŶ̞E.KP z92*eNWIF(IglZ֚O1V08>Sg/WMI vr!G8=Ee8XRNصus"m]G8Q60cɆctHe8"O0B'I?\V%%硤bki~) V$YF9r>i7-tG<%\E,/`]WhF}Q)E>U*{HǕ-bM*-dk1zRISzE8=_N6,L䬋V#YFɻO YT.߹4Ή}ѝ3Q[:*Q9'ͨdI7pwd+w.Wx念~-,O{'6L>VV~bۯ!>Dy\~}M^ʟvc|CR]ł[a;27\VquFtF-Mv_py'q9*%w`I&K¸ɀc;`@#z *|ѳ^%N*%WkʽGI\=u|lC!sț@]'QJzԜZzn%񝤒vʿ5Pu*]/I:a7\MtFOBUSSwg]:ҩE+_c*fU>dF==q+Ք9Sۡ(a3u+2{mGҚNĭql\IxᕘgN#.I̷$>Z Q|r]hөȾRU݇=F\qrƴmo07SrCoftГ̊I`i71:]m};dWQU1jLye uw?vbӖN^GRy9zur3Qg,=GQ6V1+Rۺg~w PQS7\e'c:+-_b(U8\v#cƴud],܀s 4)#jI5t7W{[=UJ:qz2}y!E rTޏi'E3Z5+S_gWaT 5Ys3zjG ͺNJUjS*V,G15[;Ih' 0ij+}K |&˖*9zo%Q&Meio!*NN婇u9= X IR7mU|sܘӴykz06rs6Z);%|jI[baض㍥Muӝ;Cz2y *yzכRO{VyMh ژEs^AX2Lܳ1O¹jJ^'NTz- 7#ƵjݛGQbQw/k9>n<wgJ^2ЉÌK)LWhJ7z儆cnB>bsWtXDyElL- $%-$v .nT1Φ"O!-U?ӯne%dkNUg(m۷̓j85.ZF*y]䯪~d7*\ws5[--Ύ! ^TFMcZTSmV6,CU5Z\XiC4{]ڌT.*1.67.kg~]5' [X Tƍyf']1Sq§N\bvn1޳?4W&囙77: n=KmYE۝?j9=VqNqsiX^UJ7|+3VHt9^g}67'ySNvadẛmeei(Ӵ_Էn̲UN,sp+t`ZҖUj1>tn|WjwE6 G RMMsҭN5Q5nI;As 1Σvӓӯ]iq=8f 2%|x┴Lghir 8 Ǔz=y{:5>cp1}х9%x>!EOWjtd~gvg/ҟ4uW+N4帵so. 
TGl #3ު4Kmhz4 Q|Tli:ͼrȩ24WnzT.K}ڜiQٲ\[|Yc=EǯO#:-KO!C0 qCB4gyYϠb?J\g.VtlREmͼhş۟Ae.hv.IQn\Mj11Bmcq'$F"ZvƯyY1Ǽ >)KHiRZ46tnُx!` ZU*r]4^Fxʬ>`Wz,9^dV:/7ʞFC$lZ33k.4ЇU/ qd7iZQZAԄT]CTXn0>iVBۛ#?ZhKTWP Gj# ~)4s{52ͥ*̮>>Xj?(NݵjF^ĬGqlgԓSYW(n;Om)GRR0J~^4I>F 2yz[N*^HQ{ߌQ8Umj$>Mns R_iѩMdu_1 Xd=k>nXR.qIng_J6ϴ랿ʸkrJ#7KwɔI,xj9;9~F1WxrHͽTQ333+uyOUש/c*i{VES%~dy;gOfekG,1EHYvr*ѺI1C>汔Je/T{H0r}άmk=zSVz1E7b8|y${;8u% z>Ǻ\v9='Fe'ER\Ց{oW&e<3H 9JT_G.x$jmͫ]KR<^dO!b" b><~UTJ}?:te 9ZI($n;p֔u6n!)ko`Z"Vv#U"'(VKGE;wc*ؚn ~W3ߟʺ˞7RQ$S^˞59Zɲz0 mDmvXzRjwZ.NluV]A- >ߕi+ oaYňoNsWW|GR7bB4p~d߼FڍX &>Zh7Ka)FKqv,!Y2c>2jR0ؘ+_}Jn bs#7FoA卓/SӷsTMcdp:秮i˪dQ+kaxTF<ЖhٯS3=6gg%wnv_b\]JOf#K[TryK"h\1»iFHrrF5pWT5M#jxW(RVx^r$bnf  퓟Jǚ+؉B>Y+5hfTS$G'5^ڧ7#zt:ce5)Mfa?x;ZٺQU?g(gѢ4S}d [zՓ̬b)S~H.V#va9ST~.IoiR]oz2X.hEٳϫjZ:'y p˻YJ1:=,IB7:r[I;zU*u}?TSg(߷]l흕KcՎy\ZV2k8NԢ@9oH~}y)T=֧u$C+QZ#7c\8 =?o Ҧ)|弿nvKiub~VYUw u8 uF1G(=͊Y6Z\BZ8ȏ2gF!rUy(7d>7(멗33i챪7m<sQu渭LB1JVwF<+f pk*n-^C2#6.-Xtf¸9{>Y/ W/3u]2HO(Aqן)T'8[FL-HkSN\qQOJO,DCsϵp9E)[yrC,Jk'O:VڦtZ"{H1r# 3kfue*pJ_Y̬nkJuHҞ"Q:g0M>U c̶;=W3f8*qY7NR\SjR-iizUq#~ xj׹4J5{z$J\?+xj#Oԗ5o_4ki~b{vo"V+:؈TKڊ}_O#Y4KxnET[hN.aC#F06vܜs(RRߋ_. oQ*aZ].LkEO=3&ZU+:o$(<x91QORZ0樝]L=G«ohl}:U}bU$)sv"X:[O鋨CQ`0G TNM;iͩ{$&NjG$teSԊkD^B\(<$&q3i[yQ&Gc*Tq{⩽mm;I8D{]Z*&:麉#$m ٯ:u<굪>wMm>\E>X)[;E}TN<ߞ~(8fl)T?0s^挮Փ?7]NijɧO펂c(yF®sҌbw?c>俋qڟ7F1EFҦL/vf4[XTԩ)7f:sXVЗ4TV9#WNk||^|,;/ö\=M{[٢FpWωe+kqK㡎 2m e*rVJwE/(:u9edL&µ5 %R*6z2w3tr43vcn<ٷTN??zS:!i3f`rO\U:nPxySg&nb)Q;5(Kۿ4x]O-p[MtnpH\ŹQ?]弭641Y%G4LjW)T};sdYF1pթӍm։y?3NRRi 6Fqv*?E&FesL6X²pNONz=+tε_SڍYb9`h+GfPpV+)b*4gFݕhY/'s΍h=a8yM]4dnh$8qVFKԉRӗ3ɾ,0ȇr5Y99XƍEp2bd{s4~3)=:pߚҧ:09HP;s?^>(BKV}@0mszױ>^u{ K@gw0Vw,|xraNrQI-mqV8NSn$0+h¢b_dƿb̢GWen.OO멏$!xTֽw;-͕\DiV0..w(QQu)uyZoRhmŹZP;Jt뿩SRncSO7+l.1U}AH~"ݑ*t~f`6-TM Yj݃iׂ+`±,Pzk+8i;{֮|GateBMЩLv8v:nmu` `dW |CxQ5 [Ofxn6Mg82W˖|E'Q6Io!B_NG塅d\%ȯrqp磉ӗg?RѴQlYGrCrsa`3#^5(.[}\mJɶlkV^sM~ ibN[[vS*\]ϓAI3wԔJn)KGMN?Z. K)jޞ^G_4?:lK,6 V0ģs/q2A~Zd}St}axKƾ8cF%8ګNyr*~)J!-VwJGR2wnLJK!k9 ٘c1L9`q.13|kz3ZtiƲIZZwƕG5 &5o-:]%Oc+{ҋ];kbhN+oeoTo}o KS{jh.>dLvv$pIjfŖxҍzx2Uީ]_?S,2uhݥO+tv,XOgMש<q؃UZ1PgKU֭-ݒm_^b nw9+˝J^E0(ɹ!"#,ǑYg*voNƒ]:mz}mܪ>]+Oc(A'ZRR빧iֺK;|͸knM7;]OQNZ;-i5ib9!ec'3ӏ~OZ.iJT3\j*EnU*@#'[Sv2hZѱxKU.c`k\2_';өS{}aSrw^ eKwKȥ*ьdrr;}*:m1)[5c.CineYnd8ogsRiwaf}-52ެfmwl]$jHR/G 4o6~ *M$vF_Zs {:.˻C6%?ZuviFoi6ZFEUOgmtl[]AM.ܲ}ɬʪNmKWVE߫^ZLmSһO/g/|맊O ;C{vl`Akoo\zWM:n鿫>v3I{և]xba3yʼMj,TjQq|;qxLVT3a??O;[ Fs:nm<);]d+f˸8}2y־30̥N2t9m,L\lRoHbcdOD9{Qrӧ_qJVz^}HxKm=Zg`Yr B,ɬb5Tm{iOh0|Uߵzp qR5VOD;4XFFޏ=u2@9oa2udpQyэJ[RRC1XPՔt%i3'_xմ۶m̋ )?gR] pبkZ}O[_ؾh{R$?{07י3NJP-oL/u3eU#2}WOƉYU#hmݳ)KE%dw4qZD^&X3@M|~)m1(ϾϿnmXĞcI2,q\汣G6VZۡvWgc|wD$,WR 8r?.k<^{M<(:8yN[?_IgpVFE~T˻O'`eJӕ~~Y:t뮺uGÏtGⱙbɡ 3d?:m]yJ"\w]WT{mF}JUGK{JҌsp?\q zj_~]4;_Lx]uܞb@BrN03޾z/:{=y*מc[:g5 wdkș"cG#o 04Q+}O xSj=o=g|Su&xsɑ cg B>V22A}^AR5q]=?#ao|\/ Vښ fz.sK>7TNx{Ӆ|8Xlu2m#95aZv8\-Jvm΋z͵N2RLs6~ޞ+I{ʅDޯMבiGW2o%nV$(Pr2s k(ʥjo^V=qo5{]OU~^^6RmD=֞KDSv9#'=?kQtJ}xn0Zi_}oc~qnKI&C32W?.{7Ojy09vUc:x̬魄oľ$&Bq08QZF +s>^mUTpֻEJǏƷHh!GpG#fJ.N\PJuJ)o^"D{E8Bp3rT)!HӗX9Y_M:y-u~R/IN-5NfAY }'>\qчb(TkK,Q3P!;T?O/,y$/8VtԺ;yѬlx̌O.;\~A¨;pyZJM[єwM Z\5嶰4fWf]uOQ_SW`dR:%q]QK[Ys1:8_Uo7ϋ:o]ZiP*޸n? 
ae%9|QkS{:Xz] eF>@<+rslKmu߱ШBDm4<\څW,yvmJ(O͕ߌWT)2|^i[ׯ#FiB7V8][lZLrp"[ͅa>q~R[8 c#ףu=ʒӥWmm[u}W mw{-L)/i[|0Z.@a >@O{<A*xZn[yu?+X-z{wsh/ɯbfLeo-g;p;W乤RSSlo>X:$K6v@ryc9Ror}ϦN<]W}),~ɶ%x^Ud K%Shc9#d.ATz~Xڎqnzw/&ዏiڄpGYdP%bS9a=sYJ.PZ3?f/S;bם.U,#\gv#m·@1WNz3穬n⌚qYwd(QwFn4+jϻYWѺd{B?.VݛF3yfkk>HT|=kէ16+ﯗs7zhAfnNFy}ϸ#N2ֺlhTVt01Kn$2,gV pW.g)ʜi;+U0QWc6e- k[S,wׄ{?3)0$6w?#^KVVOgVvqDrmt;)|+t}GE#mkr3i%ЖI=zwZu*7ޝVyƴei^W۪4Fx@??Ҝ_ڷ7r_yœyG㌠,,ɏ/i]Z%8_n/5vyr zW%3D{*)t96@iO8 5pl֝FW ;>mUl?r>i[bSM̶*XF+fFSd,5h{s<Rx֒IՅܫUH$:Zm֩*Ք4O.9f 8\zq~ڒ~ۣ*,.]/ԎU&]F99㟥(ӧK['VU Np@xlfʛ\F4⽫ի jLM2Z@ь񁜶kӛ4tF,mRd:ǎ {ZIN}+;m%匀b?]\Ͷt^q^ibm 34-${(qk4I]uԻ#۝|9׽k1t+xrAO)r_kITp$y;QJV**ִ*+9Te}@tp0"J2lrRh8pm_ֶ+t&ydX<*en ?3S8BkCԥ84+1EO )JrlD3.p>Q}zUGWwj6 |*{U+a*^: Vx6|ȶӟjR^2iūF_Z&܋rOzjҲ:)Ѵش%+I ,rˎ玵ymRIi݇_ 厝{ڷN/Vchʚqz.;\n81]\ֆꧫuٚ8*I2k>!7M[U aW;ڵjRymPȡAtVzN3q]2b래d-AM/b8R*|n=*c=Q%vlq,^lsUI;#gJ430"tWkH͙`3JODwmL]Q8і4vE\Vwi3.&[gb330 '9<~bCJT(ٻEQ82,H>tS.u4[c#RIK;r$ru1w7hS_i$:CQ`dǗsPy@9=٨ֲuJJw_[zgI-P|15N<6*ΚsU#Ej7>Nzc sF53CU]9_M&n.jPq?s9x|>V ֲH&;tTgrIbӿexw$inf s\`C2ztPCg5km(qcU%N7^ȩJ.vOmZ;5ku8'%kF\3 [s4Lmo_sT*ձT:FSKUw" mrOj0ʌ[۵u.ȅэߐKmh'ym;kH6+ F :W7,%+XVZ g7&vf5*v8*{O+O-ghFqU}QSY= jvmoݍǧn+H%O^Rb^_I[h[rb#(EJ.y7OG}l>Y!?ރڶM]JNG.>{ɸLmg#O9yoRhj-4 \J~%RߡD\۴ߟ wJJqZv֜.R=FAnw')ӕK7oCu앾$Vfg3/P\(TrEƽJא% mkQZ\ʵ>m($M.?/h(Ԭz.|W,c{أibYY-nZJ#z_I[܆šnإ){I躘BmV;H]NwUk{5F4i={la%K.B2q>Nlkim:NgSo*N8/68jJU*6޻غ|~WٚFen~a\~9VTRe½8]8E:{FVBp˞+JrF|Ho _j`˅**Tl\c˵zbIVhfAevSsGn, %"?8]sScI97}K"0UqSKV~_+eY@|$_^jykKUsʵiK"Hle!0>ܽU%k9#K3 llwG*^΢Gr,Ww9S\>7m<\( bG&u+Q[Rؽ-eXWqYrJv10u7r%rRCpŸe1,xⱏ4rR09+"Ϸ˴⦗ɔb8nd}}r=:Uc_Npy*vݴͫ`yӶkQM)ryQ.xkˡ{9 -pCV<)'_\V)iN-[aieE#z1= nvqΜOln sm`G'u\XTdRJ%²/oÏZU9~To;ȊHbW,۵sV36iVDc Co!T=FGlI=g6tGB:/2ޑ*^f]7Xj؅R/~SxLs$%ś+n&M͞s:6x5/ \O5hW_`}}{q[SNT\_ӨJmkx{omJPD ǁZӣN6R:ʖ:> &im‘3H9^1QTMc+Ւm_C_EsxW_ zPI6nol\oNRy"՚W|Ϥ|΃cK?m=̤ ~>a,un*sBV|V?fO x5uM $ XɆ$~0h7ͯEc՛G"7V,J,jrQ9+Aԉ}bZo~D2_Zd3޲8:=hэo|W~?ֵ{,dݓu"7o?aom{vkN*7ws{Jxo[uw|jT? o9h/}uлcYmɒq?0^J>n;'*y*+U%XR) t\:ן'ld(Ǡ/͵K %dc;ןNJ3bWRFͼʪ]˕f9ujMZ&)'d/2Z,s^v"P9#C[qpc |72epJ e?W=(nmjG.Mji2jlrF+s+Q)>*cO+4-^6 i]p9zQ%.H.fu*Q:z4f5_&O=ee)$gΤ)J)kT9[NejhEر[{dZ1ȹoOgNR_HԽ5qzBU*9f7?/~դy\{}Ƶi^ECNrjeFI pH>j~QjV__iqEB#0g7QZ1Së?P'Q:tsIyUA{cDe".Οn;o*r2Nsv85݇i-dҌ4 FQ}Muߛb**n)iܞPp9<[?gXcȵoyzrOJ}LcMEE2*|}1=y'y}qFɖ$%nVUt{ zڇsIT̒e2mWR3{^ /4kY^MzSg\jBrjx9 )`Mz_Q|| ~qt[2̛`l XaIݑ.IY*QY8'MGeIɶ2V2\@O\}-No}v-aF?=K?σ `}1,a,ٓ9~\qҼ\f;n0w|m,m=`ѠEH}$r1ӓ}+U'7y/3Zn;#|<. yM!͞=GwSs.E/dKK]U qhw:g#{ ~=69_|[n5b]?:59 ?p`v;}L^ :w_՗Gu{exzTb/H魺mWR(n.Mʱ‹8^8>5|:n[oӡن)sE;=^xtdae+:TϾ0y/똊*ԏ*^dѵF@Cqש`烃\88uN^GEtN_DZ.R/ B!%9b#dҜkF%~I/-]|-Sh/[8T7JQV{;ocլ>5Z7Z}_c=8_;S#)_Cŭr%kFHbiաu~^z}s}23/֕|n]=3oƽ(zIh>Tn~4]$՚f^ǯ \2MoyhuOTM.9^-o4ņHъͻ 'duiK8׌8o7x-hն#+JX'BT^^cAs^= z|r<&|2ucT;En~D+'7 ]ׁ_Sʒ)\tgGJ^i=o7(fC$mb[8>^q?ZXLҤeRZԀEI4迾^>q$Ւq>BqiȬ,c#_EjtU!$ӧhnn$$xo39QSٺ ҅GR}+5h&ٛ#%~`>o^h>V7uLA-ẺiBKoQ'֮mǡQN]X TF8SSTri,m$~6ONySj ҳO,WIWFV&?&u x| ySnxJKV4kSNJ:]l٩˃&cOqDiǝ=h伟+CY')$=y'{19sʝk_Vxn<.Jn^ǽRz 2 hƮХFۻO#~jehZ.4.neh؍eQpgrہ$wY*sRI;|[Cp DvJ'*O$Pѷ+4ҋD#j@ڿ63/G7dyfu\.+?\ycPIky'89POQ^~" > 3L%6RI9g:mu<'VNe}5!5U N\kJ4:U, e[ jcG<=Z{*5Pyw  #wʫ?J<>.jo_}=YȃAιe(U(ӝVVpye+Fqe(&aNe%5'hQIVSx鮧V*F|%؍U mp I+R䨼IӠ5wW7Z%rl}:et8ҧb~ :#SwsjJG%Djyѥ"fM68UTR:0{s\vےXҴi-RUd/[or3ɷ@ϯc:8'"Jת-yN .1#s r524R*V^#"xݺHrpx^ROc)Tk, "B9 Nc5'yukCwpLK.@sמ?4i2OMLeTU$gm?1V 鄇nf]9jM_iYJin,m>[DsQ9W =:OݻVA)TWk].qx<Ԏ_Si*woo.7;a3{{R-4"jV{ri$^TB6eb\qJ<҅s^4ӗXT1S{g_zR'4Ri8Ԕ ca!i^Ҋcichb=SK! 
@’y?Li]Jlw#7K5ԛd|YCG͌Њ#S_>Av9!rz /nNq\=}fzJ#T|]B̐[,wh}ӂ~ET䋵~GQU-믗bn#]nاjyrkwZJkmhӖ\0?2){4r{X˴FW~]Q)SgDb*Erm6Ը8iYAϾ8UZ^΢7ەKʯ{;1>J5ҼWr dxi^\خB8vWn!ԭ)e5:}r~ 3ִbV*2ⷷ~-Rguen׷*rS$ُ{nf@?cPW#bC18ן*]5]oQֺܲ6uj ԏ&"K_smds砧YM%j =^$fh.j] 9&IeP.|9ӥD*To{ 9J &VU_9=[QEϯo#)Iη,.rQdVU/h {e|w'Et(өomB4ui{Ϡ뻕m#Mu/&ޞA*${Jmrkm|sqGu%*n/"28-ҦQ`j2-5s|~PSuRF2VzW?*D=~)JX[7b=ԭ-J)BOOZ͢)i? XYd=R֭JkFdvv6ȿxdSR3z&g[Vd$.ۍ~< q)~ޤVJvwq{b9Nsj眬]ذfFሮ|E7*x|Cnho)Pu-ʅ;zv5moh6h8ӓNFQ~sJ>ӪrԒRCCN،o:T{詊TWV-Eq43|nej:e̫ӧ$EҪJ.+ϩYnU=4p:{29KR.J Xf9V"6N.6񷨬eZ4ލLEXsREM|^3uRJUѥU5Ķ'gv:WçQEhש̊BݫH6G.8=ʲ]qk us.>w $J᳟zާr=mLλ7fvqW jٹIc,E:4=N=emPg-EomxB]v+N_fiZRF9M$eZ.FN*ѕ.mo*Rva)D0ʨc$cjT2FN>J?Ȭ&Wl%];u7 ˒2Hӌb)6#U(Ђ]7Iun:V9)(߱_mYޥʥC:CVqMגzt:6w,ʽN819NMMl7-u n# j\d?wtIc-͐uTNQR\lжK9DZTSOgq;]BUiH˜)#sך'N99-lrSWФ4mאzS:q>^4J]V@l ܌ҴO]ܹ)gX]Y2&IYsk/iSc9ʭ{K/r \JT`^[AuĶ4 229}OqZEJ1Qs.Qi4wcR0G:HʧQֶ*)ro3oﱆګO_OZޜ}DƏFF{WrV]3{'eMB域m8Oz륇;5f3S&>KfFIvR^=lE8F\=IgzʍooNjUU:εg}ͫ=P b95n{y{X΋o6n_OCm.ǵLcIG}N[SKjw:41f- s#qɻns2V߇iIVfgYʣ$Nl?U&ӿCG!SX5&.N {i;jCw 1NrMl_IKH2HkSz= e`xٹA\Hs).k&o&֦kj2LUn8:U(ԩ~f5+F5N9n?v+ⱔ]8i;^RHn/7s۟Ne{*Q>_tcK5m؃p|̎R;iʷ՟Kjk4jγf?3n%sPǛ1㬶/^;W,(.Hܪ\lg9d{c^mf,;4n;We8I)u?WK67q*O3rhѱm]1_A҃~jGq 1]gY1+zu]9"lr2+6vXd&B44vn=Viѝ:1|_1̍)>`9  yuJV.e3U3vQddRjRw$w'EX==}9K#ROu|˕m1Z/#ԝK^-gqrmATF:i1iBm^q[}T}KE%ų\Knˌ{TdˍUVkwfI ?*݌lFewn(N+3'h,>n{cZDeB Iu?)Q(TesT\ɎEP$X؍ۈ=;a5HvΤce+uc R1hCﳵ\1'?NzUӱ5JoGd>]xI&m*90cG[s:Co륍Vt4iٷ|b5r3 NfԺrm[wVE5d\bBrcuRjJ7iT6sO{˷tKeAcB4BBz\hö{;Q"#9rhXhƢ[!-{lV?1Dk1kgK`rNҲq[u4|HI18HP8tӔh֓޽HO2mU?3 yɤ;RCA7KjcyY?NZδ`V7 ۶V\0o>_zXʛӱhT% 72j։ڸH==yҌEYq7GO Ѥ|Ͳdgߞ^jFO3OcMlѱ/.?/eXFru@iN#]uOoY{No֞JW~$<;q_zTc4r8-D?hVKc\q9sv5?kGOC4ͬt8gA8Up`vӚeoK kk֚h,ǀUZq̎zu'*.5d塇5ed {EZ%ʶ6O%h$ 7N69O)I,\~>.>kƜjI畬%(&U#Z2҈zڸJnO$Q"' OAXJ\*3;%ek єS*S&nkx^X9rQs{#m8ӏܹe}fZ!SHYw:9tyQ9<, DfRTV5zTy*I;2xCnX((* c7 \Ғw5s8.P#oCFWnRw`gX6kU)Zt*+26"|Ʊ*'hOzέ>wWܕy+FJN%F)2弲1މS\t=:-lc91fTuƌh5Y2f;>\?U$#f +\i+V枚.ZQLZP;i9#rx|TM^]GFR72ӥW+nMۛ*QD{ZtjiiYR!r ~Zzqa9$*1^ y#\oe`8?ξc$FTnU30X(SQMXx>XYwdꒋup)ӔsE)tQrpl~f)Ss:NwbWh#_1ʌxirv] +Q2sv.i[ydSGOZ5ihȁcevžn=F?iI=8=?>!%ivG>8?A霈Jש<5)G'k|tڻY\l9.l7nzz 0xSj+߷uxBO^}ë\ ,.@r)*Jyu(z???#q4'ޒV^>5׊xr|tg# Y:єdݏ~TF[[q򮫠٥Źo381nh*QgkFO/+M]7OMZ~جe==G|NeZkk{C T\/O4Zg1­EuW6rtܩa\zvZ\±P¿tNkb~}b=%}K&}f=]}1wm9Ui8ԍЪP]yt*0q`ub(IwvBywtljZI|Syq0S2^[FS6Azrk`-ՊGA'STխm_[➁7_5FEVX 7(q2rG7 ={^Pӄ&x]6RҬV|˅HRCYNN3ֽ:U*K4diE1U'{7˧~&}En-_v遃t5gxNϢFUj>i7emo_H+g{OgW_yr?io/?G)b9\͉Eq[i׮a,/lq|N;FOo9䣃{#E2uot%=N'pb#Y4e+H×F F p^*Qmr#٪Qփ_mr!Ea‘)էNI]j܏IAw>^md"G 'r+F]壇dKWw0^&6Lc߽tbkJT֚TN6Wh]krx_Rk-L6 vx9*"/݂۩pԃN;OZ2O0Y[z^F,/_S5*nt(-1ڿ,U?矡g ʝ][Dt^mv_Kͷs>*y*vwԎ9rO!T?QkQCHTFfi#;m95\mc.hʢK['R<4fi!T}{WTiэ xTvqG hd_ƥZΤөsJoq5nZO2FePq'qުTiRod*xG`\.mH#,{w*71ڜIrjtBU*IǙyx潻 V;X*jcӾ\EI(;-{͝Vag\,c6/GOTӃ孈FխlQk,2meT 9RgKʝމifl|y-">A|0rBwrkSڍ֚o=UѦSQFBvA郜aJFI٭uK9y_V,dI=O)sF;u:kNIvoZhy-9gk$EeϙFI9c_iIEROp#t{(/'rwkJ2Egv~gY Ԍ6 81]:F=Db (.[cn8N1t{Qi֬˔femz߭u^c ֵ=;v^*MV*F%OZסΩQ9{_k}i d޲^[4*AdC 4k̜כnL)^^  {;<{p Nc*=+JE QK=F8F$W=EtFR[yyBͿ=g_grн˴R)% d\TqvmrҦ?=) g!*9%]ObO_Mݜoޗ"E+F02580w>+Z0Ml|[#XHo Wm\68^WթN2{ EEy-yw/F>dvz+-TlN*)]O:ּKoX1,ƾպ߿Qºuhi [;LY~?vF+(˫]jOK[^# u)dc2A8NNGM qTXn?;Mb*U^7О:b >U݇ʷԣJt>4bHْV$8\iu:X>GkwE}pxtk_g+>V>P2zsbșOIo9mZ\nC$W\D9V/~Db}hhž43fHdR9^ Hy|-W4a$g B2:{(yZUeo?SJ;#|Io-\D$fm'9'A1ӧ"<"שNqVvwJl_C6f=ч*btucRnxo4}wFY wWc-|$*~е{ +t\._.@8u >P掍oMZu+SOUӯ$6 o420iq>] sS4Tc).e74jmXi3#NT) { 88EI|˝ /7m?3aլԽIěyRr>0Es+i+XKo"+ e}ȵّĐ[*w\޽kq-m祏s%KW-FoaWNIWa"BM+ia 띌N_7^3;RЩ.jI-qHNt~tcY!{U@0Trֹ;OT|}.wmv~u;oY2Zⷘ4`d21wmkgT֨+l#׼cxlUm >y#9$WZCFާF8|/4[}?<+} e>`9 d3FqۊҤT(iekh^yXʍiJW_-j:q#7}ݸ=@<@#Ԋ`(>㇍LEp_y/Uƒ9 F\!;T223>OFNe|T=}i}y@5Mb= zd#5e5%E<=2rG!@bHSq8Wg)%8,<:M:{'SҥUfJqsѥ5+uZ_yyb:]pu[jqn wn Lr1VvZkoK}V_J*wmZUpmO^mTC^GYg (%`FI 
6pU&֮7׷[U\&y[k]Z.|quseoљV7Oƹ$-N5Y5F3rzu?ιiwR}E 9FtiSJOCKNG)H^C>q] %N356X.b}u+gns)T5a`_]eku㧯ԋUϱڡM%gqKtC0W۩'_cF F֩uӹ{iqȒfQs3ۭqN=F]hmU{Q,3/cp-?q{yFmhDVezπU*/IXɆFO$ǵw1JPsr9cZu'uz_|k}7V\q`&ø'P@=08JfvFTTt~yi?5merqeNmӧNj1hݵ]<5Wsum%[y2A  &"w_vb4՞~?R_IUP_k Xiyk _B'2yYMRX$y+rh2nQݓ՜˘9?l Ht<V&4uu-[Zu_S.HZݗxkLrsW9hMjE-I'gS>_y˗!em$C| ֏|儖ZަcBfت6T>$W {I9ve(Z`էk1]Nܐ9汄e{ <jMjXc&Ue-^ERw S()onA%^^ 6cQ8沫[[\*+Iچ&mIT\&eOFm~svi8^ֳӲ)H4,[p-~yylW5T3.^yU|c03Z*rԗW Xniݰm$wjBѳ+TjR,s26c26 \V{߹+J3X"~lN*5SW_e;-3K8S`Rvݼsc]5?}3ώuFmͬ6gK9A{Aaj;Sq!)mkwU_,[Wҷ[NXkч,tkR3eo[?Y;vS9fuiÖ)edBƛ6Fqlc\0<~+o[ts?39w7u{Ϙ?աك^}^iF%ԍ9{NK.LԎ} F;@uo%ZѬҔmi3!1,7w? ʽ8\¤u Xaet.s}9rbRoFsՅK4өzT2wc-R^33.ީC8I!>dm?XJ4k5XK2x.xQ)V}n'>3YTяdj{cGEUlHJ-v4sbMAJ0e ӔR$d KЌ:aZͦXfWUv|'oSҏeV6rG-'PKIDj$UR^F0{s[R$v&JN*!%V%VsX8nV˕/gȕ5DB3 :mFES]#ku:rVFV~P#W8%rFG5hb7:J=\̲i0jGTsKŵk?Z x"0 z{(^z4ciiW$m Oi/e̟֕:|o}g/f`$,rO>~_}'Ԯ4ykHR!G''rԧnr}N:nqkźh 6nms\$f.VՓW$*fO2E6yï|q\(ʞNe'27p+t泍wsB-Yg4be<8ru*W|)hP[$l >3ӟަR:0踤2{[)#FyXۯAj:Xҥ̒8G nZ׌Vt6U*BV}KV*qmˑT0#dz=+Hm:1 8UJ*vFn=J^^Mu3VRծ)"Id 7NQ"hۚH ldt,xsۊj R%MAzG",GOŰc}2U;~OԦ'p7O2e69nxR^*}bka,#,a^x5'k NITٿ-*#`iӕ9hui-)$`vPJ{m9(&HI]/70ۓϾF5/*+ۡZKVRcd۷-x?ҺNUwb)Kv2“U*z^ȸԏ?<V[y vv(e* "4lF$WD%Ij^փZdVWrx\2KԽu,i吶pz'8ǖ[XҒʾPmbxqϿ<EoTkSO~Zkc"ѣkVې 4ejUn^UNboOiF1zj>Qݙ7Ao'eP̼/\N){zt6Ɯb~W1`ry}:t4wV_TkI9SWыyP܏?B^w$r[bIa\~VL*vA#R[k~b֓&|;Z`f1G*CX}=U0̸T,W8\"@>iw8#xI[o.{2**(=+ 2Aүqۥc)6kw?)F'i6qu$+JU=yw:xSٜH&٧¸^~b;~vZPsU2Nilou8y+%Q٫Xxb9ZoZYrYv+UoKR=wUH$̃jTXb1i &.# 8Z)Ǒ٘Jenwl%[F ^+i5+~R,T=KPkֵ[*HEvaCة}ͩ(/躚vnc[lⰩN:>d8*ujIJF&+Z"?̣- !F|1/a늩F*<~gDpS.6iݎ"kzUGv3(Ӿy7}J50c.g}QB=oՈ"o7j[ \{R2mOџm՘}.*j|GC]JW̪J$znAwmt捱 %FCgڐ RFeNɧlt~hC:+دqe-ѲFve4mCVU4U[Y<ՙ*r}hUEz8d+7۶XRMq:<ʓ;].YؼN*#+3ogAki/(jpKMz?SY| [vJISeN4^Ӭ'-,ܬr,kߠ~抷]t2n,:5*F$((ROZ2o{KR+K5e$ՐŐg^kXX,:Q:}7=K#Imγ_NqN4%&eiQv<ߎr1Ւ*[SNAO0.qg*.2pujYM$k o< 5N25]KV7}ϔ}h2Vz3RG,Le;Ӥݴ$xZh[y[`N1 8*N::emYFXY~P׽_59q0E/CSM Uwur(ԡMYHE]﷒NNsT59]=#-]QۙmdWy7|xFZZ]ʚ[^7K̘ i4ҍNp5/s6Ssph.'u\veU[m \+rŴt*R]/.OdO?<ڸV{2 Xic߹ϰ>'Qţenj/> tc{ktLdc,1Ƿ5zu_#qW9a-Ya$@G`g i'RJ[SG % n o Gok:-D4~H>J1匭kfXD+z/aMCiUp<=?*c*9;c֜T{y"&wdrOVR9ag COo IƚvY’W'oѸVTyS>NP>3MiV[ !Qy88ϰ+ Sە{M=#N?8HymjҖG+Y[<BKGO;cǯm ROخn5h;(]=+UFJ-#-?3P5wd9'O1JOuƹc/(__=+ I;#RKTycvaJ眏tQ[SRshiN׾\ZArn"XdyghV}x_r, 刋88ݖɗsGwzm]*}%3e$~-̶4B~U,m8).-IG'jtO;DrĞ\ ɑ kIGfCc0i%n7r1IZfKN.X-FBs7~2~R/674?koL+4cx$}yQuTdNjAסz q1'zcrԔTu5W*fƝ **[yͭyKmԚ0Rzv,keٱ~Oc.i+c 523[̄#.Eu''#qZlWSO64ؖ壎[W{FҧS]I9&榋R̬}m'^1eW) ]\ҷEEڍ`?geteLiRkem76!eQYWz=sQ]9*WzIf(!w 2`QGBRZzXQ.L4?w=u)'}z/hܝt4KKVH#ǁ!\緥g/{=,=:rݒĨ m(}P!eτ͞$ d?8> RSQ+nқVUUqo=F1~Ek:u)ƤՖpq RʹuOOhUԪsl:Ky$UC7^HGBcn3@w JrU Ql8?)3 Zӗ4U/R[O/@*8M$a^ ́$WD\t3q54;dRI@^z7cNJ*qT ]YK.o33ȼaԌU҅ on\G0ie_")c'f=*TjE6䮺[s|(-UR*z8<~5\IݛT-^Ol<k..%cr2'1 oR ǡSZ;_ iQ$v'(WJ=\y%l֞j+hSs0]&޿ƴCRU4щ?25UGa?N{I멼RChs$,78R:fi% %(xƷHok[[{KVdb48]πX]cZRW]4eT.dޮwz>t~o.fa_1cݻjjצ]sR1NoÞe Bd2 qN;+k-O^iuyQ^>|}K[e.l~ڇn ФF.Z61t8:?[].ING̛ߚ!x1{O?:h6,0 y N9Nc[+5o7ߡI+=;ߣ3gjެ:Ė4Wo?.6i c+ANfY}ji٫vKn}kTJ)]-{2.|t{`HP%nG8x'N-Q/K+{ifx(t-ZV^KOo20 0+NP]zbn$}39fcxz|MY˛0*%,g[ß.x:Cx$zrJŸ42f-)#t1Aa_O茚Դ~oV b2rѲ>5-#TG##I G$gX.MZ1Q'̽VzyҨuVc b|ed ?*hµ87)nګ/21V=1ӏi鬉h֦ܩ ,n]z}*u?u~Z6dyqَϮ? 
Sqwƥ_zM-ĒԬ-maۭ )WZ|V*Cj}$cq!8#e Ӵa-/ZsN݁aKʔӍ5Pi$ O'`Ͻe⦬qEik<%bKa•7JӒ1w۷S1|d[6q1F?{g'br9\>ҭ9)QVב"b6dEs9{Uke*qnz}ğdxT1"{v*\e(|R ڳN#1al`dV^Q*%-Fkhг3o?Z.kiԌcI#nxT y>i(NDr˖y,P-Gv#}}NGAJN5̪*ik}u&0ۀ!V.~_vQ5 JA UX33CG|xa\U0몹8jtn ǀI'=a|-J5OשՋll]3U?#\zm۹ߴh+ c67yrWs:&-KE]uU+-Tw(w8ןzzqb^2;+WpB_8'Zdo%vv!O%@!Cn5*Q~f4|]*+o镚2mVY~R8<78SVsvViRybw[[˺3kFʯ?g:cF]XNllUN ];NZ?+w*,_(܃\ӭƪP(kkc!d2UfQ۽{C 卣SUhݯW|k쯦s9ϭvSz| O98Ҏk I<"ͶWb chP#I d!Y9ϱVI]۾m'# $m_6۫Dz4jl(`@Fz}9[ԧZU%t_aIBDH$bb(܃ӽuTM*ҩNVDV3fiTaI$sǵoFE/FsaLGmu_D#;GPvq׵iB9u'}c2"a5.~E$޴qQr^ޫ˹^8*3չw}8yܖIf%g!m&WvjQ抓ыD[۱  |Tm{NsV2[_q3.rn(+=LeLELE%~l(]Hi0I>ץ.ڻw>ٽ tdvqq1_9?Bkʟ3v5NTI>mm-հ$fVN j}M^ziVlٴkUtմ}QRL/͖Wci}ާeHJa}m,Ld/\U:U!cQƕmn[aG֊ДN8u*z\^ʒQG"Ġ=J1\<U'/y/B$I6>߼rQT\ǩN\nm)HՂܾށXwҵ_m-b9eF֗Ԟ"r]ϵgR>V׌)Z/кaF;7dV .g-WCb[VjWwϻ>i{KWЩJzږh+ tR|Uԗ=WJ!urzNR'Aydgܸϗ`ힸt;,A#"a1RsC%{6?0#lZB?=R[y7W:qEG*.эu{ЦiЭ{V߳.S5'eQSNZ-m^KYpGJUjFFPNzkO gצ*j5jss*W_^&O*Hwnb{8OALvmSɕJt~5YHgocy7ﭺ {2*(sdg=*Bn)]jSR5_*KnöciMJG%,tyZoIExb]q^ڤdѭUbi;6bk/ge=j-䬍+ +Pd``*樣H5jXJRƦyMʣ气yS5iF'W:52KZ^E-O\8Z& B2\~'5e*:Kr^.:KAa:Փyu"ɻe`F? r+J<]S^mnBK7+Ԏ5eR^teJ]>G.-xk7UӨ~f6ڲ;$8皊iF6J.KL~Y5գ26YZEv/zNIgN:R@Q )cQEL1!xf\׮TWFz~-Cysq>NwcU1RhJn@*A'z)F6SMlSHh-pVU2lu6} n#9!5f,Zތ#RI>_XԌd_6 ۴qϽtúS|U=7$<@auf=kiK߱9s.fn==_B8^Fձ gR՝:14m % w_ge8'q}T#VZ6/Iayn_݂pr@5M:y6֫iZ9ȁWvz))ʛ潎x: b 6ݭ]}T'S xFZuc^e*'׊ѳ{}IJeڪs izz1kGqkNTe̞{믗oB*EfQc:9tԈݙ.$@ 1l<-J19Imث2<[cf #۾z5yagQxkԮ.:e#'ќyFUe-b2[v{V:;㇧R|AYnVʫu89W_o^:ކ-n̒Ĭ{q]nn{?ԲNlzQ2>⊲KR: ]αH 9UU?cO6M_]eClGjtJObNfVӲQ/]T7%quQáxvǙvmzcze,:vL,vtk<-+,jXv c8+(򛌼\F){ԹSuzs<(MƣWMMʞ"ius4z[&VFU^ݽ?ʔcYԖ N6荟2?uk EUo iGh4ԗuoO2е$uY?ճgԂYNT7"8U5--`2+ L kT"Rwˣ%;z_ȯx"lڥyzs^MJ\݋ۍsI c1c=H-5e1Z)?T.leVqޜ.f35ͭVKn+tb28Jե/r~{PFl#2YH d~u<)^W"ZZJ7P^b7[-NNIrӳ_ϱ$Tĸv0GJW,_cRd-uё#[3bU߅^}֒}W]**niKЍ k,=n}\z?A oi]gv 9OiAثQ($ޱú?e$o|ÏJtVJo *Vנ'ZhfȹC;G-\$LrT6W )WR>iaӧQOܝe%iƴJ2kn'٣e2F7n+:w~>Tyw/Fr\]ēL+nC#-KpN2]y1Ez}֗fY *f2Cr9y8t~nm8i=Hdh/y&ornE=?ֹ1?2t+[z"mphXz^A+:Oohk+uW [M:&r9ہ<9'Rw/ifJ?㮧U7OZ\ִN.`܎HZҟ4] e%t-'ڔn8$b*pC-r0l쌸ʇ{zuGkmd *Wo}Ȭ 8彎S"`dl++c$9#'-A4Cf#dw9+=m:i(BՃEfW&s֒'RrlT67R~aqZG]4yc$B1ʩ,a43cUj0R(UKr]4%e:Ei,,5?/mkRUP?.:6$cT-N*kl 0W {]Ӧ1gds569<;޴Qz2߹{a52%24墅 w w?tFgeB26[ǿnmD 9NM=cg| ef*zMSV3mTx]XC(=͂E!|dr15Ԕj)N<jYJ҉# g3kh=}ui9b=^[QOE0d xNqXN50|6o(;7R#IU¶0@<挥 35LG+f@l*!8qzqQRVRU{n.+5fFܩsx=0+sS,(bu$[aFgOvwsJ*[:Wij(^Bf ӊJ6DR` 7/x6-X*нr6^gͯ39W[6fX$q" [F1~z穯JR3XӣFww߷woo_Nq/һb1p{r+Ih}׿^FE)Щiat&B23Lku*+c le^>^=dnwT9ZWv ((;k{v33M!Whbi62ÕgSH㸮RRwкUN宍f̗:m<[Xvy# 8P٦sK {.icio_Sy橥6yԣ=bXFa+^Ui~)n%Յe#:͈1֢Pu{R}~"h+!Vr#R:=nY?JIdݲ/Lt)Nʌ̑qҹeN6.WTGbe9|U+4`>$mׁS8zFZzZD/6Yь-jԍތ҆~"x-p)7}N%%KqNjniiͦrXwBYA:)FhL2|sk.1 1u'$'2ewt SUJ|%u>R>@N*2Gs= ;iÓVtDiѲeOO9{THFOyܿaG"8YV0B\o׹z.?*Jt)([nM6sP;1Y:*;$[(Z4KűkFW[Fw[k^E~I[swfoq)TeR璺2eyWpW>#2-B2qEDwLv5$և=J,1L~hun9j5VK*"<2ZԛU ٽ3+S&[ҴCS -gg5F~}L4ߑѲcjga)TaR5%.E0ˇ ;F  *2֨T#N+"7Es!CsiӲfR:)hE{{w_ߕ'X.yOz8j|t߹BvwFvl7WI.LjF)sio5DvrT:aXuByK^]zIxq5ՁmbguFh8|ٵ8e%ղ偎~l'=;sԕ{[O9s)([O]NFm8fxwʼQ)J]OS;<\3e}ER>ﺵӲvK+@z^}kRԇ,T$Ee/4l=ʸ _ Q9b3τ/6.u\zqVSmy|GR V|KK}FXF(Ar 9r"1若=5RPjP&,cPۤu<`֍()dt:i,gIr+-5浒oV6wPppԉi{>n}zlzX-Is_V1+r8>^p1?ɵ}ߩƪFt}qVмmeAo{j2dV`No# @==9Svk1׫){:Oo#k"<2 ?ReԖ  * sGq ;9}lJΝVC<wDOcx5Y4;60ftэXFv8{*bۢW5 OQZg{|39's88\.r@YRhϑvmvoRhBd=C_Ubա!1.J8$H#x0kmIF"6̝Zվ:Ml4rdlj3As8ka㌣' _Bs]wsڷù}װF9:f_^M%umE,eTJ͝OY. 
z/pzOWFcӕ3Z8jo\DO:՗lxW<{7]G-voo_m_x{ž(4$tJ0~a| gud~31xaRi;_uGxauq?3H݆zQǿa+UhԌ\/奎8x{:^ﯞZ]\_GQ~I y"XxF-I^㛔R^꺆ahi4RnI9_QjV]ZrS_ӱ]kZgCB՛ɸ{H Aʒ8~SGJ׷Xʔ}mz.j_m|U>v_-~aޙ}S%R27kwM 2&;$W׉1*~w-6#"4jrt]>C RO;u]g㯆q lZxOB2G͒w䞥3\XӍԩ}^?)jr^}4Hڦ$z}ZKO^N|ݹ:.{y|~om+7O>+k7$ssTKY[|Ί\z~lmtOLWyO՟_Qu_}ej΋uү^ďQadp`^..gR7)\؏vne>s$L*#woMBTjS k]-~V<~ut0G%OjޏyˉHFj`NNtG='ӿf&$v7:zoz{WͻΥWN#wcq;Jrݎ`{\ V8{iFMqRmsC 4kxln YEEJs7){e)NE,CZO&)i.FɑG?* ƚsZc*Pmw]͏wPlGO|&D62m]xG^pz7%OuZFZQoS|=jѵOMUK( az~s>*Ve_4O2ƻ{iMg{HN#:^Au<F4IVxNXZ˹.!.|\)^>r=zVP1Vz%d} 6!Yr͌A ${8?~ΉRS'f/xZ%em`5 0^̂23D05RjomኡB׭bx k~4gXOb>R:FQvۧSN2T֛?|}C0GV\.]n$pcJJwRQS^IhOmMX֭] KYn~ud<YҥFw~լSVhyiZ'`K&q$*ǴaOygRu9+=*Hil۾l[c6^]nuQ0SWg?%4AE^i[ eMJN4G++}ٝj3C"*n~k:XO#]VU&ʶګs&k:rA986eRfދg*N;h,$ ^ q>f]5Uc)IoSϾ ٤Ym#88]8斩lzZm>]oqM}Av ڊx`tQs&m>sL=lisL hToI!NA r 5\O,^7ISrT<%-X 2c㞣RJ3NO5#I;N/?jnx|%r'ہ}ϕj%ȚsQj$Ҳ9?|?V]NUkxxč<I8ajޟMLe7MOmc:? ,OG#FŤnv*3]%nu_&\A[fhm] {5RtuO*nRM_[s-*, c{_ƹӍ>1UvsJԗRᐘܧ AU-N6Ѿt*Vzy [,, NTes_L֑F+JN5ӡemcHP//|C0Wr:08Eq(Wv]=K8~3R ovls}F E$pT%ͳW4>#xF4G$̱YتݢbHaw cazz8ݯcl%EV$y۪Xۂ?}[^Zj3}>(X-c;Ipa_UF y%59̍UVwQv]g; 巍!lS9TX#Q#]`V<}kgYѦTz[Б3vT1%N)DTۛIt)fk&6eyO}+K{CR.%rGadsVh*>m#ҟ9b%+LO.}h _HdR{:(ԝڛS;I獵џ _dU-uvf o, .98?ʼF:"-ӿϞ)RM>}=omnx+M6mErHg\b+^4ӵ՟Rr]쐫sȾnn!p |ɸ#Jxb%)_5(K9>m0 1?_9J4'~{+3#|CM֯;W]k @ltS{攣iYcqXm{XV9ga#22z%WSSV߭u_wa,1Mi3#}ztJtiBF"Xx ]:_qMek"26#8Haׯ'ΚUR :5gݻe zd.hϳbsz9#;R2tMV )k:k^ZLlm3D###Ro}u_̻H-ϟ_fWö+rd69 {x]yQOU'W~t%t=e6Xz`g'QK(֧^.9{ A^=ŭyoF_*e1#'$vGGV3Od{we.fZJ_gu E>omf{v֌>#93|dCr5c` @-#9"m:؈&zXhf7*}X'Yٮyu7099 tR;=J)_!{r~|1nנwѕXVZ]OՎ^.><0|;m),`k'lMYcgҿ01QSrm_gbTݖo[VaTOӓ^}z~v[?Rע:វ\Ub^>l[ ׸OFYN:77ͨ$Q=ycc8աkz _,4S=ido9ʫ=yTl?Ŏj8W*vG V}YZzdԡ|H,1[vp@{Aӳj(OaUNZ]wkrx#9ګON2ݟG=r:kηH8 <Ҽ{iF}R_bթ[XHTs\u6/5.E?"wlOXFtZ0dW׮5 X|dC9|q]:pzߡ(QKWзk!zx"O6ۉNuZc.n[[ӱԦ*];Ჱ.n10˜}: )c)YU|o(pSyZ8<9ՎQKm[SZ۞ouGtɖ/@yU:v^27O<`<tqƧ˛ $(TnS:iGVv-42;q[9jJce*qܥeJ躒H>ICrѣ' ZQ*.˯<׈}ImumCs}pGrԍZ~Vgu=Wiiڕd% a1eJC ML5h7 WzjYsXҧ%h;[sQ ^t2yd7 /BrI+ܥWJ:z>{Iare7{GehBj-+3yv'+Nsr~ d>IF}OaFk(iڵMȎ?7'90=Xȩ4̪؏aE)']Z_6mѪN?&ikN.kܢ|O[ DVsŔ!H zrňOY?O1^<]&޷Ȏ1J.ׯ+=T׍.nZo$UO˝ԯ7Ɗ[Y nȂ6V'R]M~I;:ǖMKn6M;"ZL;ZX 6Ht=+o9JYw8[9|Yyc8"PT.h;wj Cs2,EJkסBtYi 2[ :xKN6mV*1mk PM[=ӂxxt}BQek+Ef$yS& tYCJ];o +ma'OUo_Pθ$dF'!1HLFVr7ߺfU>g p:QJ4Fuw5Ϲ 4fݿWgÅO~{gVaIFY yۜ)4ʎꟻKcy\eSJR\ܹI7( ǦXT"9VZ#b]Axr3ұI6]9Vҷ\M ~DWG%F# 0r3ԁYԍ;[US&1Ѫ*8Ν̪p qYeЧml{XmV$dus^zXJU$ڏmMr쑩)a9St?sFQ\*qnJUUT13ێu?E`7,'[ Ehٕ81? QVJU^m% ll Nu}XI6zN <'G%\2Gl~5?iQ_*t\嬼VZf F @"e)=6"**wnIP jњ7 EȈ9 Ҋ啓OuE&H"iѷH$@oEgRPVzm_W,dzuU(֯gńLD9K\rS.EpYY&hVc?mN7ETk2[{#m]y=c,?*hb_Q6%y˵#,ӚrJSoNcܘ刍ycrx}Nx8Z/魬Y= iy\qg󮈸TKKʏ$}GDzu{TÙj{:mmYǹd L5E{R %[BC2c`Sv}xririNaUFdyq ncve(|>iJ y^6\mc ct9mgY+R܇1<~*>Ѷ_V#X?51+MƗ-K^+Zpxߞ$:R;9+TRnne3n;ʪx.MoNjFcR˙.[ɜ990YO 8Kᶎå@cxs?Wr:{X|.# #8(aN]j zYbkwd(RN¦6 8YȎXI1ۧ&ݻP0"},Leɰrx-Ҽ!Y#]b9[sm.Τve^qjؒcʀ7T~TWA#w$mQf~c<2dbvvQ2ĩ+Eig;h@k7(ś-8h5lU*J-{u-[)v Ol`[nɄQhhՙOɁ~k#ޅ7Jzщ")tUd {H=tQR+YrT:*;2z]я.26`H6a`[<)KɩD[)&+h냂皸咿SGyDmB\Xq}zУn.lSyV/y|gx5lg%j}w!{^%Tݷ$.?ʴeШʝ9&o.*cgD;_^pzT]XdekI~FUp9?ZJ7ˎ*1reKNGV S;MӍj{mԫ6vi(tg5g(2O5n1Tssj{I4NRD:v>fs҈;ٽ{zE]Ioa۞N1#1%nʷwQ%Iͅcp?/͏^w=婭:m ۻ-[&^ n͂Ú _SV?"cax>p+j֖FAoCSKk6A#Zt4S--WW7v6U۷ϒ̫(;z\ܵ9#]|\kUU+NtTjESS$gy.f=>|b:{WT*(FnՙkGy#7n=Tj]ݬezWn)ۣHHsr=$iXօiQ_6H 7+`c(K[o,T([_Qu[ª4fČ<P[FJ= לj9Gv24nX_-wEJO4ޥDrYEHJUelgHԗw5{EYmf#5/#ƞ!F%("4mn}k9}^[kTiB2Nw;de z\RNJTkTs1yuPzwZYl#*rno)8>m (u.y Sֳ|r59+͒ķo;Eq\=k9{>^h9hmJoiST[R0凪~# ^/79')T9]F/gn=[O_1OiS:lF^VF٣37xEXFXlG-Iݎh:3Tгqop$\= %R[̥5w$umU5aV;}m%2$j~mk%uFR4-$]ncg^5/dr¤{X[ Ta4`ƥ[ʅqxkADyx Z^.[3 SfdKiỘlU85bEoF)dm7DVfWu?δtR*V儰KՏMy#bl>[K^◡1'mtɵ~i} TN? 
{dֳrڍh5׹u6<'Ԟn6;:y*mZ)q$t&,=< 5(Y8Zx|54)ԫQN-]j(>窹ce<YnS 7|v$$r=J?mh?SnyrǦ7I|Yg*do݆9 N+TOsHѭ9 wo^;BR Z_}Ik<[q*ii}Nu"- ]F$2.K1$`1N5[TjJ7q$\G|؍Q,Ƕ=k*8jlUY.t~慮ZM1fnⳍg ͻ2P˩8,l}Frӵ?l[l)TǶFѯʧI$1\^T4$cJn;qڥKVbR7iH i$^=͵Q+ZmTj[~e{+8cH<r:ZJckHYEgvnŏLW3F!%sv۶-Np {U:1徆ƍCy.lm6 ߮i~ԮHG21c`Q~ݪ#rw7o]Hum>9G?ufRh$]B{K{{PY˓9S'tVK^RV MӡZ_KKgr}h+tA?t;Y~-]9zߵʙ)(8N5n'okxɡK}#~Uh`cʥY{IrK Kioq%~$},u=KcOfeY${zjyeZTfS7S[&ۡo8XIc?`{4Jj92-7%} o,w%3,79;IapGQ).],f5Qȩm;FV)Px V ^] P;MFݣy#+!Vi{REM8\ r1pjpu5$jrBXTRZ=s盏|*~9VK]}OܱctRU-E}ԭ,$TW5}:}-v}-ljFY|VvwtwZYݭvy_h7A_[xI ' dnav}V%[*eZk},z[quy#gx,.ֺY bOUVyQ]d\1i#)q9*r*´cN-SѦwwN*_W=ѭow}ăkqouaCp6as<8򳳲2|Ik~Ƿx@:\:%݌{K[YqО9Dxh]|#/V~-=?Tr/4dQʑۓxxFO3WV5-&wtuݏ؞~]/ YJꯢ|)ҼS{Fo$hJH&yנ_A؈Ӎ3xG[d4vz xr2z8[{J#uߩUrnW9mGHCUm4WUpFz}r8EJu+^_XY>(w7WiEhVMy21nC߂j;c({Z k9TU?t~ٽ6~^GjU,WzYt?5Ö0x?JhkOg[ffrH9 rTWJ/oG{b%S|к{[ob + ' HCE:5NRkEЫrmk{n8@3cx?g >TL}9|3K˩`Fxq\-9h4LRͲxmg~LjZkU'-9~Ef%pjR2Sձ"i^ב^l:ON%3y?QҸjqʍ:^V4VXmogsW8j*RKSIb&cG^\$Q<Ҏ۷=ϻqcֹǟXWY=vNwۢf䜖UaviJSԾ j \پU5)yhy5i~gڤ+ȟHس31y.5EXcYC=|ʇ<8yu:k`RvX,/X>W&qO]e&Ԭ4Ցw3cܻz t3^9kD>`ၲfS…>#mFU;zu?JR]K*~sϏ;cbƔeOm4=sž7OM Um\qos_IGTvZGKPiiF~+ڣO"t#};mW6GzsӭlrsJ+(>iF2 R4hۄI3޽Jvf5+sT\w,1!ZO$]tx͔]O=yЀwH99*y9edږ>ZM. lv9}?J%N͈Ի= XFcUX6z7=+ľM5>طX=k} {x`1LcVX*G'vyNæEqԔI*sY=\H䑮t^a8|e.qF-?Q*D?/Ώ tdy׊ʴ Y"aTSsF9,e|m+;r;t߼W-=};LIn0].o:XZmĽP<u~/9[J?(6HwQK ؅iL-Nz'2ªHM=l}Eⶁ+Y-ҚOcݣ13|卲Gscw0S$&uwypfYWNzmSKK7U$LUmS{ @$?<N 3ߊ^J)?fZO-.H~)~=V,.a1O?//%2SF|o<_a&/"oԖ5ܧ Xq8\Ҥj)8+;^mK%}*V)K([_)~.ftd!`~Psh)Ԋmh^W;7rNj5eK^޽ c\M Ȥӌ J*cOK ݞ^/|'.hړ[FɊo8<#kidάeFk}[WnMx;n,I4g\񃃂r185O{(I=u[Z帊x.qU迦{ip$7e..p=ƣț֝ۢN߃8T刌' a~^_v[{=[M,4oB5w1A`YuGZR~~?089?tj?M+]\-M"ȱm O!$ɯg riNN7{YYzi roܱ]5^mxiIo=/aw>YevSWTr6馾W_T(]~{I U<גI>UUi#0sTR[]?ҭ_q=o_bdeUrTc0>WyM%ȭ>j~Sxw]j}Ke(xUUv?s˱J>oiNG؝/RZQ9<k^{! bbw?9c'O,9*KY`gitS4U֣PCWwٲOlSRRm*C,Fvtj|7ZZl}K< V.FI+KW8%&MdecbI0z}=(h.Se$VjYi#\k #pT20* _>YKW posOC8ɴb%d,.KH(&%'By+\} 2ߘu9tW)Q kV!щ4f:zWdvkS*+E4VVnU`W v3Z+eC*y`8z:z׍Rn/}Җ+/UhAgq'.zgsӞk>X=|zRkv9R4&=U nn:R4_qf$#vL~=q֏59/3ӥLt Z$szV˷CyScѶ6;w ]6p*OolW=~uKGJu%+-}K&cWXޤ7>Zꦪ+wF8hZ[ȳojͮܳnGt khJGsja+$ԹLow:V|ۓY[E8yjɹ՝{q֞Υ9Y}$=vucAv4irXFāP,Gw~"TPz+mԞfUN5b |"5v xڝ>Xfu4RReJ\Udx<EuKIX֜a{t'uVЀqQ9B̷9Fq^PQNb؊FIQܟCqߨڥ;&5v[oM/Ҋ\f+qonyb'vcB\6yk)GK0C&g-ݬ;mӂVJF]UNUQu cR2NKGDS|i~SG=D$_WW4<&++lS\[ΜeR7Km 2m==PDLTL[cv0=zG,em63|ҚLǽś-вRH#u#O6EԢ+v9[P$oNq*]eHL3) |wR 6jmHTNZIq1R'RVIivajZI+C3QaՊ=zt!r\·ZEڋoh*mCg޺9C]5hޭ7NN23 Pxy|6zcƺ==4][ՊnV }EL4̻=@`8=E\_&c׌jrm];٤ Cr@I]h(E==Q&W]g#<S]kvEJ2R#(=b kIswU m5bJ1µ?ϯE%X",S”z"*VGݜZOZGey"gB#]j[鲤!Vf?(=Q~*ו ]++X3A+9rG,¥toauIUwg+ 7X:tZɾ嶳4-W?Q>܊)(ĜCoyoBrE3Hb+?JƵIO lѵjEstX 了_ _/9Ar8jʋ{- 0@O=i{X4 {Jr8:U%.kMIg FeEUh,@c@D*J'SOc,E쁥GyuUU!^yfJ, ?J Z۸<)eHDjrݤC->r[+Ts+sO~݇٦%%xsKjpj-ѝ d7t~dqӡW[w/ZKqqc]QKŚ:rtyc8mm#r,` vq9S[ ˣMH5+0w*~~iױJU}{ElO6yʬ?R[{gV˯M.G[:ܲG`{AF@sz pRnϿVrb'%Q"ܖq#~_|;]#fɭVEimt]b2mm^x&FWbwCb&bYW#TzmR˩ΝT99fa!?j^6pfPjZ6چ!-ueXm_Þ_TM0Xz\_`0nR(c]Fo:Hح n8">mT\'tbv1:n exx RmΩSƟ,ouq{vO ܟnos_5B3Ȼpl^"Ì*ҩ'rHч]Cu9>0zh#ESxԔj֯k TΙ|%rN2:5c8Ŧdʧ4!0.Rv]ǻ=eMmd+Tƺ,Xa!8Wpҩnh\IU9{SV)d=#IrWPV۴rx>kTmAwߣ9)ъwi%Ր&s+H#<9::"b=YyYyyfʶEq!8$d׎X{ߑRI'輺6{F|RIɟ1ps۟q^~:֍yMq$ev$$}PۗfgtY%oJ" |2w[-Hn6ۨnKg-Es=sӓpMŚqo%CƮLjl՜e)m&\_"81tx;vgZ%U([;|e;F/ZPm]?a2Nr@εj]?ZW̸ẕHF[og>Vv4w8hSfڦ^9a"Cw9cDyeӨbkz$fU1R BEUSGr'Tr7w,o5ƪ9=ciQ{7 ٬%ZfIegY FoJ'=ڱ~c2 h^Vd7srAAY5+QҍWRAz$8>H`Wc֪u{)V ZcKαec8 GPZΦ"}79NU")~:l|7$h%" rlg箨ԒQvhkê]L4|qy"N+hJV~c y;\|rz5j5$3cZ[pYHB[ ~^s9اuz<36Ѷ5xԧ^Q2zԕ:T]lIQ)-\ӜmgࣅNq=nI-.Ji ,sFc2e %Omiba/>Gcqzg:'nՍKT,Yd߯^YGTǗJpm6b 
yq?z3V<2}^ZNW-L@13c'wkJs۩{͞M]j;$$=skJ9JK;!5Uf2Мgqǚ+єj{O,Oq5IJHasǽpʛ/}{9YEިi%k+_wt*8:gi-"[5Zw53nƴc(8^bרB<۔uʱ\pAy+jxx%eϓͶhId댂1׃Rʖs>"9$8kK{xbw+amb3ۿ\Vui>r)U5&$$:ylg#le^S%cgybXH$o,w+XN*H|zy[smI6e[zRqwT*4cJgGIo7$͸|<~MMXq;85}ԍeh;P$mU`q3:VhF- k*JI,?h"r0q\PNR)sZ#E+Λd10Gs]~nNZ^ w3I#[ᣌ|ܶItF:]i9𼰲=G Y$U0]HPzb7mY8Ɯ-4,AqggԕZ<ֳJU=e}727TcC N޹}TA(ҩ%8;{wHҶ>V#MuN4cR3LD!9]۾@JyQT%ud.0L4}80:Tz]ђpﯘ\,+w~]ʹT[v?u'C_F\D. 9'Vtj*ri^gY8U=m+cb˻F~n=k9VIkG l_G\bT$j7pj*MT=r#ukrC{aV7N 䏠]=k{є'$Դ]XKy0m~xgRUN4)Fwomv,sr1"f<_s_~ʧ72GwuvHcק޶Wv*\iޢL<ݲ]\gL(J%]wϝsC-쎉sNKĒI2,eRR ]zWe/ZxhfWj{{⳶mU%9J:5%̈%sS]T[m4iSO} $l<8#cOj-p 31"ƭWt҇2%+JԱr4jz)yǙ%^U(՚ok\I$1ۻ`XI5NQQkTrD*HwT 9cڻhFQWh>)^Iq$0T)X{GE/31I-ڮd3aӞ~5֑}J1r88>$K6cY#mܶx;z#ͫ r)kg#Vi+68y#kr^`r50޺/3u+fIcPHаPdDM;_bU_het㫹^A,[yeTXnoFUFjgNYNg|Im`SxPݮO׏s$u-%2䖤&sCFk˩<[i;/2 Bo(x#漙9FTw_#._ySd,|ŐR}[;/&Ue};mm9.#vk"uzs ̓?j T2:ՍGg9]`]JNGkNSsud-qqXӞ:\Rպ"BKۛw4UVQt" O S+#NZgm$ f(} i%:w6yySM:~HՍg)KFC*\yr%WtV֝X1͹xv 1kuS1NZkOm=( 6Oq?Nڱu+SdR\Ҍ]K7HFOWz5%k+ԥίmAʸaYԗ+UIU.YmLcr 02}n?i{D*ǟTۯiqi]USvg$AҾOM+2Jr}8EoHmi26;~^}$3ӣ:5Ѵ^ K+w^҃H88p@*Sw^oSN\_y~rݼbI )XՎC$iJ.ףOETon|h[lpq8?yW*SK\(شWceR !rB.I߾síR~K[OB;I7m ׯ?ZQ98n;YCC"+6ŭ+,?gF)n@Ʋ1*7`p}C^^"+c N2Vf۸|¹jKmM_˔;{5Q]BY[qFHCZNM%w,z湜xk/gBM.S`3]rP܊/QЯ|ǎ2[a}^<|5VS=SN[VFejc=ߡ^rw|{zץN.\gOV[YZMVhD|8P`THhw)KJYUj?GjRΧ|DݭoyqMj%l$$OAWW{np0ҷ"zlZwWʯQi|-eMn \4UyjZ595wȞk%AM7BzUNQ;X`X+)lOhmM/{V hPfTus>}rR^v-UZlab26:~5a]>tΚ+F_\˹&%v緥{Ӎ8/wkv] T4ﵕVXfu:WEME~~]JtY%=.]GQH^X:vރ9ҪU)`\Ѿ竈Q?Q6{@ghbø<taRMI)ԭ%4ϯaV$(KB3c >(Qңx O"K+vvmѳFxjJEuQCT].%%}6Of!+I-8c⤠30K%gonBJ,#98 d~^$Vm_[_CJx꓏%K-ִh';Km9E@Ab99rԔpJys#Npo^eCb <X1mzr0xOs:\^*엕XҳIܭgy2]بxy 2:m.ZW%.\'IvuiE LTG g)F+J SJe3VhcImۈINIЍ[~6ٺx59=t˖t7W\ #=N0 MGMFP' w +i4q#HDۉ *s;R8H+zt_9bWӢ(M0[hfAハNk|/rnVFc.nY+[3O)2iFqn~v5JgjI*#3ԑtF,uTջ!}za5ۻ e,<7zRUVul,U_#7.|=Ǧ,[+ws#{FSMnm|%*r_ݻKo~;/ZC"9VVbO\p1sڿ'֝:^cY}rtjw>3gFLdR$qzT8v\%Z|-Պ|/ nE}7x8> 2bQwy+2ӌ!r_=3x_-TZ"[[u#Qq= =G5]Yݻ}j{^Sn8}nﯝOW$6\yIqjuʉ6=Ez5 q9h]̣ÿdMh]v~3gխi[q1e0dW̖$yhג_?R{ykŤf19һ5i|oCCX4(-}}F!f;[s΄vnֽL},/8=]:bf6>j+`yJ穌zթNJn};m;0Vr"Go3dt 5IUu4<wkZWJۮyZxG=UF2GAtJ&'VPiɩi}-🍾_]^DIզWPfg'v9$pG\uk9'tz,}Ry E2\ͺbNO{vl*t_B9|e9Y[~jsChdZEċtdӟ*Y)Fv*8~w*rMyu}O%սz"iM !Px*>Q׫uio/Ҏ[~S|z/6ڭY&V^NJ7I8_Xz~MmRRVVO޿X+}c= q!rǝ7ͦ{狛Gwz붇ؾ ͪZCnsHvpp|J{j}Q4a4mIWso!@2\Sy-iӹ%hfީr9?ua)9l{Tei)ZK|㿉O Ց&n7lmzcU:QOv{_U^1nߡ_+aycve>Zc@8rGn;*ԋvqVJ7W; 3Ő$-umM2\@nBҒW*nߋW[whAN =+ +nZ57+}໿i'Wqoҹ1i1kK43S xim*w`)Ls_3jhmNGR.w7c c}O@08cFTEn_gx qvVҭeZKW~1IJ WPn}s ktKG' =FܨekƤ=,O9۲FyY`m}?JhB\NoZ8IJ ~xr1GbV"SRk=IQIs:qWE:u4+rfnUf"Sw^MJ1հc:v[)x63$ߖD< d=3__{ҹU%'gH-ZZdU'ؕAJקJ#S:m<ьa޽HP%$Ϲr[nF6 Ԟ8BqQOG8߮3W'kKJE/`p9b''t'F8W׌=iKQu#V_"ΌBK=^N3/媬nUVBTu~G)(k ֒*;.X3cpϭy~Ӗv^]N6){e#;y9ʦWwݯF5TNU ݮfB\˜nQ]rޝUQe<S(#k֪DߓvŽBxg۰HԷ54JK~6e WAW;m5Eic՛ZT$.5#zWh_Sq{qs3aYg%1NtΕ+3dMVZX^]sS+Z6pl,f5gg]ju^iZڿv!&-#N"Fsn y8k'oMzjԒZ6v}.v&X?< ln"TfqJWk3YhqfIO1 q_Z(%f -!g,q=N{HT5iKV<׹H۷cwsDycx~'1_#]Bcۮl1n3ӧֻhԎ.=zy%g(`vɺFfUX-CJ\^h?:ɖ8[[{U.͗q"ͼf@ FsɃlI-%k'1B7/vQֻz6teui#_5|94jK^օ֡RNS~&fn7g6ԯmK7dOԏKYֵҴp$ǜAl[SQ:eUb#eM;=ݼnsLO5 9 :{t1_IhegŽ3J7}sx-nc! 
$Hz|ČN!]}:u7x[k})u]Vk}p쀪!Q#L^z~gONT&{CX\5֦4+ la:sF1k[N)3y}2m}Mk7 x5a&)[GyB,4wu9mVKzQ~q}`Yx+OlUGʑ.ʑMoy^ߛ3R/.𜁤ᯒf@GxR,oG S9|"^"xyGi7N%YBb(^kJʵI&@-k&b̘c뻂O׵wrөM5S/M?3/\$Emja8ŏzٿgyʅ_Eo1.|Ct%lkA?s5(qrՄsSƩjVIs_0r]JR=3T]#b;~[Fj{VO 9z Y p̢EF3>EӠVN8}bio6z18?lw\M{ ݭP +6?2~RR9E)%~XIǕ/Mvv i7:t]ޒI$˻`oONk%R#HJw缹ECZls5u`,XdcqB4jJ>[ukZ_..5vٰvey Vo]hȨb+*p4V!^%~!|2кhuŦfHJ$1  [,DY'}_ź5JM}KZԛs~ -ŧ3 $Pvrpæv'ӯn '1oeeY/bUV5gae'3_oFjS4[_3c~M} WWv4u2X%F ьTA5V^|08x)쭺}}>~z~%rmYe5r$2:כZ.1NZfQk~o$Ԡ&nUc7S}Λtx4{hY6S95Q/gk;%*7\G:ִ+)mˍcfUڣە<汥NJ҃vOʣ}ׂC̼r.Q\]wh馺q*F3є1ku-"ko_t֕9]Z+S RQ.[?x{Vm6aAS'sWT8oۦG og{[Ni(gIIkxd?x6sW~[M$zTOn|) YuKbK([;q*ǜTx`{g?g5OpFZ=v]㯑Um_zź/ȿgF.;l ,aJUs^ΖC ]>S*=9^3GSk[dž]g}jp$ϯ5uiRIN4MX3t#."UsxTңm~KV!iœ{\2Wz[_6gp%_kZ]os Bm>?֛ӽS>jM/.rN\eYG#ï#1i_+<ǧ6exNO.22{t>.kb1չG*'+TwZ٧Yj;z`!7&hŨrR@Lgϛkeq}MOj6TDe\}ޱѷc3YlJ:w~`:sU-RSʾ$ݭ#gĐN~mZ%NщѤlI\qN6)Sؿ"Xc'? /T= AGngXCH39e NN ?Li龌O*[ovbFc#T۩'~[jG//٤;VHTFEn-NQ-A7_ G.e#MFnx=u芷wC$nXms2O;걲mKl^hN Z纓ldj~3'[7dtۿ]9jdo}w 8ZmRMw׷S6Z0FXwⴍIFOfz5N>7KK1)<>bWE:>pGoA8b 6|܌yxSROcK/mHɵdA1RI:O~0kŤr26cr wg04gF|M9j u˅?PCF=+ީKpW}%_e {9RNTӐ߿ Ʋ;`WdYד?cǕh\lwRIǸLd϶N8e̎(ԊPHZ{巅eFېJ;qۨov}&-^nnۛv$20[>y))rSIY!p$$\/#=?ki?ttQ{+vmYB.^FI>UF<\<4k_⻙Wk Ѥwoj˲:pjYŭf@tl"@O/>gdiGUc۸?{u/0yO$o8#xx4Ŧ#M>Tdė+qpd23_ +^֧5rsr4r#$~mZ1lsʜ=PO:dUʕH`uR=q^.YJؘY3vRަ;9+L!f 9E BK:4ph]{=1ֽZ4qt&3$9 >akjs;űI7ͷjy+ncVPbaQq1.$IJn=;d)zkpTJiiMēH6ꠒq[S~PNth$njs}xNqV܈ѪBKuq+upO8r_BJRV$Hp̼ ce;^>{iir;Tfm.\f4ͩI%^Jf`}F?SVԧYH1\oSKٺqԩmzyՑml,qaƢ1r'II;irwrwNJQ\4IsnWFCʊU`n>?IF]Ըڕ[[E"C JWbEkzi$g]j&EV,m}s޹ϩ>{&eG*X=N=z*~ Pl1p?{L(N.wCUF].ck\ڧk 3Ī0Uo*%%R&E$F$x<ϕqjEfPza9ǥS.i*q кU9!fNq5r.[Diԭu1Rʧj=>*etҌs=ȻW;}\ܖkIid x369#y9=~wB21[.=6gֳ>A}5F|Zm*Ξ}zy?iJ5%j=FplBv3<֫r:kB\a?>:NzF/Wz[dmr. Tr9to rT^6'O2Ȭ/$f qtFW:kBKWRbKzdLl9NTbk]\ n_i|hB.b=A!Fv>bsmGGg֌y[+OeUP-$#s+1i-&t}ܭ"\yjʛ `Id 9EHS[VVgm} %g|Mʄ*F;cgibRSZ+YZ7FB{YʛyZQC>aڣ4j˶F'^Nj#ZT%>Z)dJ@pTmv%'S['̟d]eҮ03׎O^r+L%zD>H#ҩ3;2-Z x|nElϰ.W.i_KU۷C>c}0l,QSugM>Wgf7Ef@px8#W.پ2%n2,BB99Ӯ+F1t}%挝B-vFb(VG jVv[7EёFx})5V8^۹*'&++YG]':mZtבUvYA9㞵.3qa&ja'aV(v4Nks)k1(Nq5;{2//>>h\H3Z#=*."O["3VemP˜Zڧ4BZvzMC{F$]߶3ұHӷxg*~/}|ΦEc ɵwLx~n7V WRҟ:WOTndvŝ-^6.\sRR4{kiåGMpв>`;s͢ЙT{t^g7-bcW}g)FQz P5*}ݫdE+eWhNFN{zܵхX^=UHf*ETOϽLޥHӦ2G'a].ݫQ7L__M4FYVYZO3;|8=O^ѤbEeЯJUeEeri1Sⵥ٤cO['KL_N{]49TV"#(~Ȧc Ŀ/,lWnjzWl^7TWGsn7ٌ2:ȸ20Wi Z>"Rgǚz>ȥ6ɜyj dwƥ2Qsj^&Ikeq⺣RP3ZZWۮ13?7ڭ;~P*r*_K]G,˹a{֍F^5l}F{X6qަ4w&EV2&0 翸+Pu֨J{7z1Ş?1[UOix*kNMFVFNu: $q0GL5y{ۣ8˖W}:im!Eda?\c){Wenr__U*_ʣdQ}r[ыGZ<֒Z-0$~$`6T²{jgSw= 㯏/6&k2GOo:ͽfY\xVӃ9snk5ﯓ]sZ+TTjq~N4f~Za:u!Bg^#z .*mE'{m_L/.x[_??=cJ>ݎ<ˮ?nJF$g˨ZY#ڦQe^AQ^i%ʍ<-Z7]#?.ջ4[f$:<ֿRzݧcXjҦKn#g2mm%x+}=;\N-KvƔp1]#ľ-6qoʅAFXߊFiW,+^;+KMW*Fv킌G}鞾XlDi孺6⿓>͌Ohn|3-ۇWg  3_ NVvkž&N'gxP׮>{|Moxjp.O@8mL:+|֛?juSYzigKĞMr'#r&RNJep;OFɩ{8K>SѦtO u3PhO5БeB!H+*39UJ׸kE}jvGYP?wvQju{WZ;5Ln ( ydqN*JkfݯkQeʵ*KUMkuhED_UlT^Qc>$ExXƥq㲟a%SF{oTydQ"`@*= yxt>c孵C<ZڿotHA>n 89xx f+4CAvpJ?OjO߱l/N4j9Y];yiݻy9~S䣥>[ume-]]va|!4~F闚P;>Tl =^|va$rz[[uSb1kߞ|}h:Mh6%s+ev?8B6op9){JJK7֭קpL;0t湱^MmcAbcLH]?wcOƏ2z"565}FJ+/E:ӞA?mLcVw6ѫJW{Vo(]gUYBzv>%RJw~&r[[p # 8 x>}45nZ3Нe^n"*K75N4DG:I4c "*ZZh'D6{dX'MÞzs^sJ-DC(eeW[2IJ(2y {g_[Ӕ0aiʛSno Լ*_G d6I[oWfm=3SSsxia׻n/agXw̘>bNs^mLGӕgVF| omc\g= *2y5𶍻>ٽPo HXX;)\V+O[*wxݵ͔O7ٝh8E,18]RSӱj9$K4'řۏSNs^8(pÛ6ZnNsѧm\z+{憝ۢ(yy$e?9Y8Fkoi+^:/uN[hBAu?1KƼ89*͈7[vkHn ~J1mt<ژD,bHbMc&a$s=;W$8r+NTշеl0՛*yoOƲ~'w 'ҸH1Negc. SՇ\ZE95 iZH;Km$7ƲZ3vKSU#ζ8yr8v[rÞ5RD3Z.Q BE^5ιcRZj7Ig\/Vl7шrc2q޴K8b6ܱm)9OBN:Њ׹%ZR|<4Ϋu,ۉ- zrqT_29;Gi?v*2Ct9MȄFIN;$@xFl; ?c-DydcfXl$ӁZQɚJ0К&`"5p=O|`#։ss;쩸6ф4a6Ÿ*_-مjrp4\1c~ѣdfrκb[k.r0N{N\.)rnWi)8d[r#tV+=:z!8ZlݞYF2>?fakE%c=4gM'ŒAs~S-/}?%ku*odj[Mkӹ̾&L֦ߕRGˆ󏔎xI{(&I]KjWu  8F-Je[[o MCK; )$xղNDcTw{Ksbe(|OkԬM6g8fۅFo3$6W*Nzn8|ݪmui>W[J -g׷?7|/pͩXژkY6| >Njsj}:9<X{J=?b? 
k3[o_IAot:tWg[oc9KU&ÞK"a 1l62Q5X?z>z3k9II=_COh$fv~~Lya! rkrd{]h=z3h׶OYI8# W=Oy=:'8-.y 8\, 8N;q^lU9QF*vXy.LMVxX\7QV3̣'._Kٳ|C[-h̨|ԙ#rNWgAC/'.WӾ>"JNOw}gR[M?MY5d$vV;S!IcuΥOiI_{7rto{!=wgq2˂[ƬAzv'ZʽP}f{Y,UMlJM/gMuVTRr0>ƎRnvt1OOwM}[- u 3V;K&f^0 \z<J V^zytuӧ*n -4n]5]tnf_ i] L#`;k FYYÎU~dݮ{-;sSTNZn3V.K><#i]XvXym+-^c#Zl_?;]:uOuwG}%+M+0 +_sӋ!Ly>1bK[1EɽHI$j[{/OǚKIY~-/̘?aZrE=GZP#G[FVbyr}> 1ҳkV&%]Ò?\dKDk'/l浶K7[UX'mKRfR>z\&Oѷ YNn.7M:qPZ]"I恥,g%^⪤LqޞLgЪӎeIsvb"Ю֐{hT{u 1nN8$,,-DNH=q޵$k>eˮi2 L끎[iQEc$sl96=Z'(#MVmO*UAמ ir煜u.d l qkt-ٟ4j$I̓ˍ_֟f3~hwa[Eu#:#[AI +3U)J6k:*J*e#w-kk*)n0qH<~u\ ?mQ HvQ}?ȩcF9R-Ei>.60z% ݮLcROȎ6jV8R:u%&. sHͺn[2i< cxD#V+UF,FʬAff2y1? 9W\9(s>IEFRdJ*Qk̊h>Go͐9Mr˚Q3R}JڬsV]4SywG5mrZZE1#YUL+O8\݇5(dY.xdtW-F嬷G [9[ Ik ګsp?eQ Tq7r}\-1U'өͽHC푆^q휟L,sΎ*J (tb0$ʹǧ|9?k{J*Z6gC -/`U=#%mX穈/N{9 fesWf3$</ynqn{;}.| s>2c;lpsQּ|E/hߴ^H~m_Er%>HcaaB ҸkyVRu9tD]Nhw2ڡ@?jJ?mhI%,Ai/(9eP0-Fef?CXsجlr*@'ÊQk9T*ʠ%;~gϽSNqNJ0qv_44Nv2|mUc.kEߩyR;z2H?6RiMzgfe]̪NXtR{ִF2uǾDQLYr=ۚFXii/u]RhG~`~eȼsZ574iQR35MIPjr9c:wF*n9E 4DvOҶ囓ZԗtsZm˟k5Nc^IOFhZ3Rы>q4Iݘ#c+YSԖ=9$_ 058慤|hƴ'{-MS ZkȫuvQd;͵=yQo-Td.^ǽ|8?1ٳ$}tFMb)b+sF-\DwI,3+,**r#ӯU8ӏgtfݬHݿ/&YJ÷'g}a$XMֵŷs)IhXM5}yg?i{n"?8tD$2yk^2x8sQj8U9TRJ=[P\!hpc<Λ_3͕=2nEe;]]OyZQuF->䑔ozժB>}ڪ}u\ۏ*Id۹f{cRh֎)jܔXl< 4^`fPÂodܝM:*ƾ0O+Pm]#ۊh1~\>|p']9kUUhfuZC q&aeA=1Զ1SR\6)EūG'$=px#=y=:WZ朹[6*<\Ѱϳ:QI90sbRpԧ;V=FeL31([RRXdNtF1r;h2#˒Ÿߑ(ƴo{n"ZFRw?] eCz W˿c6ۭǜ>YN[* 1z4puET|sF!zVb8Huc-S i6e k)lm'0d#+g#Z){.iTFZm *NٶHǯ[T;'Vg;dVZ6̟8eAZsY;"O[Ml6KHx:FTInsHΡə,Qz\ҦXzu:}Njk6׵~\vx ӊYn1Kk fo͌XIn{)PVE9v(Wv)irTeZ* du5!֏R8檜i¶R$д MnFc%Zq)k]-aJXR=Db;Tf Q+HC]S^T[-;=XyYe[yW_-Uʮ2zJgV{#>zJ^[iYI#F#]ć;n:-s\F gyVǹH1k*Mvf/X1,-p̐ApG>dpľ]v浦}+ϻoos5~G;F5,$'2+84ErLaV5nQFXϚR{8jM"β[}ZЎG^+~Zz3gSRw̻n6`1r۰[ss:T')&ORjsTڏU[{=vW;Q7'v`~5g#bdFm'26y8<eb+t Ycs;a$`cX/=r}-EY|FLtԊhmﭣ6p s5/r*T5hN GHѩ;ch4\QӳSFC{\y?c2I'n^ݒt3ORNhZia|Ϳ**8_N- )_Ncjd pFNV%'.vs*~Tl.tiI6:'S{qy[UUݩFMmwcpܸaILdfs0$#ssWF2?}hhKg! r6nzP=GUiJU9&lzV>\8ZT" 8˩֧wgvZilq#uEǚ򕬏EMɌh6+4X++ jNҺf~_H_6.g sU/)mo5ΜW4ŶE2lĮ'?Ru=W +KVmsh2#p< sZFQtS rtޒ]n6%G=)K]Mi{YJb8Rv1Yf|z}{ou(R2ѥwĒ}-q"6o 2}[RZ[MzGUzk\[4+vl@Pq9Td+N*q]vVFhea`OLr3SRr-8Όon!^IubFr~oֹ]6qzGq~ufS\72x繭5tڽa],g?j{k"Z-g F#gk.pӭRUdOfz(py't%6 IԼzuoƱ[+gUX +c*qdH¸SrVߩB.[M k߲?ؚ6v*IryRis^цcx }I6Vi+g8~PH{Wv{GFQrKYhv#WGg&20q{*%E3 `wMnc,c nV0UWju*E'ktIUEN/#rs>9Lیex[V+oIR0$c?kaS&查կΈ*joڧKϱhxX>\%Nx8\ԋtG4*߿Ũ -Qz nJw[JUSZL;, ݓ2,Mnec+%A{?gSkdޯ4&i'z.b,[ll? 
蓧*~ߧSbw{;gec2N>54ΌE}w}ӆwκ9:k|VI."*y݌gvR`eZ97eȵRn[GEsC:c7~QvkuyRMmu*GѾ6et.ڸJЊNvR۰ȵ.CȶZ2a@#'Fzktʊqizw:gndwO<$E`|,q9ʽJϯcS.ӽAe'{=?s/ov9#9QQ?}-*쵵 !i^fU9's/#b3v֤i]KK6l6(uUsgDӌwc y~"rfi|F=uZJo.x_(]-)':s)Kޘ檭hƗ-B~ȟTibGF\Gȹ:Թy멛$I2|tⵣRrz1N0I^K} H[Cf=^:ie'cA~G}-ºn{haI~E\=8*6WSl&3F\ە.iژ{ۦ%Vۖ(Xfr+*Q7u\MI{G]u{kk˱sRmF&XaWE8B~N2@BwTźo[%/g/]ݝw\xWq;9-GcX|A9N5CY9ܖ2U{ߙ[$ړOo%*L7Z|o!U $I2<+Z;;>hvOҲ\ڍJ1dV߽;m<?tono>6=0zrFOZ:rM$~aW-[(gצOO9RN7fG#%V~Z-(y 8H>2J2垮˭Z}bfRS0>*\!/.=Ȍ̩#T_ 18uہQn/&~[xX+MZ۫3&U5KrYْT SsrS"2?uk:3n[z!p.mYT#=T򥌭%+[u^V4rUߥ\#𧁼=yo$YI c3O^8,Z.:km}|LS0h^!Pm=q}==k>teQ=mGm7559-FT3[-wHd-?`>Bݕ{yԔcF{ӯm6"rHЌMmmm#fWC[Zn!$s$20VN8qiZ*8Š -o?2ƥdsd.7#uyhu-kgRy'&DdrF$rA#(/AkUzsMeӹ%xO[l9N#PA3Z؈%hXB^O]l|0B2Vm$lY`01:}qJ,jʣrM'oc Oچ5TffVس“0I9^]/SΣʧR_h^H{!Q"qDHdW%ՠ3?^" {/#cH/3qT𜿺WN\>2V{h֧ᨮdeP-ʡANP0ҠHO:>&zy~Mf}cC%rRI4L&?$qG8xxjjSK)ZORI5E[G3 xlZQp;t Rͭ_3֍,?5>5m6LIo.18*I$3/g(M>GS-^fG5x䌍Ѷ8zǍU1Pw*;{'m~E :SS01Sĸ߫f>ŽGNeuk]^+P'dnV ~^s0XF&|wfDibV\Lʜ/^zcJ"*Vv[%.iw:}זr># =ңesS)E|1iv݇4G-H .]YeOvy/ʪrggkN4ThsMw͸v9]x_jζ"5 ̷MNZxѵg4ؒ7NvLcGN)WiԟCW}hqڵ̈`||A+Q;Y_MZتg%{~uDA,3bs~V*4\z,S^n_zmޗuqƃ-i6Vm[0w^XKߤ䩺u&S[:fj~ٶ5&!Y *J{=Lb]C>me,${Tv04ٍve+TO3 9.O48 .k&SZ% s3֫$d̷~BFOjg*mjN1i΅YjSMw^c26Pc@]3)+z[B7 V96w4-ˁ!;9lӥ{X{NmӾ2F6yt@+ȱIXf2~P</fHRO[ kzl#C[x'PT0'zqXGQTr JPZWZەURݻ<5IsJJma x>mS!ӯ-_qױb6YQב>/[[W;mp񘊑~F0xxbkeSl4=<}pq96?ɯu'ú[vRƍ{i~{q[(KvIGakOe3[H[wb8<G F*3Ӌ毌I_GjYDfUXil[ps"wO$vd Uj.gkJ)tZ[er> VhmIm2 98$zb *8Fk͛ӌj^/#۾w$.q"[#4g 0>sA<RMGoNy^io+|:׋<_5d֍ZeJ\ԏ".o~ߥ:|?먩GϚ;~=Vb!O/F :t~S*I>Օu-DV G$``F6>Qj)Seikj2Gpb]2 6=zQѲNerzEf\Y1^3mQSvGUJN1ϹHmw;ŏ2j>ҭ:V*ww_갽՟-fdyxp;z8HE=7>s2J.tH=+_ [M@ `3^#F64sN"@mWIu62^ǃ|]%h% # ;9 px:US0ڧ5֋Uyĺ, !,I'f4}V8yQ ScY^6̽y Ҷ978h7nm%D5ۣqԜݜ=Jߴ/<:qɢaYwn ֒~e8|`!>e9>?Z7;iԌ}"sL*c+FsWOa{o;L&ߙp8N%׉(I+~'LrXg'' )я*J:լ$ 4$9l{UsH饍>ɴECucKdS1r1힞e`ӲcѸUˬ Æ[0TV/MN:([0I!$?pْsEQWfn|#-NFI݅}WGk?iN2w}ox^W8۸~22:1m=7Y5{]qB~]K+tk,N{jSZ*A*~m  |T*kbeګEf׭oԉS=߁$,](-6\DZ~kkcLkq$\uB[6cR<犿p"HYN۟tFtR¤t՘Dsen9VF4X ½svF~cjF#;~uo3snPs׃ȃJ(ʚt3旺%VO(C~uOMM##Bbe=*|תYA 7JN"{~5g(^9ǟU:'-dg߭t:0ӊwHKi-~۱č%%s_F67IRV0X6>]ގ)Z=F-hl>U~M;[J8(~"y6knns9nEu1`ffQ&Ƿ'Oj욑R\%K 92h [n74X+QQQKFH4vҤ,A^j/)Is!J!ʺh6Bs~D㦆H%.Bde9xRjIB<>nbY.̀rqs)̶3TkrE/oj*m(&Dg/s$n<7^9֦QwNhXC=FF+MF-}F#_2T?/͟ J[Ɣ]5˱r[t+9ߚ 2NSsMxp0tee$-NHWt)$wGRsx֮#9䥳[M?z6XaC|0}\oFU%..2Lҷ~5>ˡ*t~{nwj_ 4JrdղKSݖY|üjǭtrrN:qt2wnrqV>ǚ*Jliha7'n*#̾# s8rȷ}mbG"1&OHRmz")|uLvn? - ISQ,~LڢQƬ|z湥J/E.-h;c*ԥU}ɣ1XԱ`[8ڹ-kܮʌ~%$xOMNm/T-iu+8r# TnI UkNFLEۅ+c,NA{*{l:QuRŔG[s{؈5)_ֹdh搟v9 S45qڒ%nVfan[*UgĒ:qnԪ׍InLjsTzF@[g˕ҵ痳} hF=L]Uro'#$#?LjVl :;}y=0!C9e:uiF8+:*Q`jig)>FqЎvg<1S29biX*doϪd8n_q~ 6}`5%K)c TUs|fU)J:|ۥQ6 ujv[]I3tf7.7Ī>qON޹%}ϸUF\~gcbolѶXC+Ӌfьw淑h(oܰUl|?\םRɜQ-OiSKțU |NGћ^Z4Ucȏ=;ZD[Q^ѱb*㍡ xF=G+{]>E)uBv_Jwꉝu"0pFY \Jվ2jwTR@ny?֒V^GE<5*m+Qctd6mVVN@=GqNUWMt:*I6vͱeaKn 'cׅQW}ISy,O@Nn뮍UNjyԗF VRy7>TiTi`O^VʦRwzRqIKdsEwq=xq6RrX[y y~`l^ZU?vG*Ҩ-i kp>43_JtE$ו1L,?̫o$^K"XNyV;Lg[IKЩ2D1C.O{gյ}"teo>z}Uc|> :n]]~zt|IFqzMW Y]z28=k'{u ՗Qե<4xLZWfV`v\H8鎣鲸ajS/V>GZħ+'o+E"G>!jvL6>em0'#eQeʾ%y]误]z?W_a wh|:Oxڌ_8eI;9rr Zj۹X=:t#̖롓o66=ݼ-dPA9=xR}+mDiԎJ1Z~gu{A?[2c3;?.sAOQU'%ֵwyVx5ZovYYi_ۍ;-Ze1}†b%O }80+']ٯ>/4ӆ"O|YG. 
xfP%I;2s>IEЫ'm7O3ZURxƳ:'KH-[O8 r79vIӪy5kr=,oυ+%<\&3# FrItahQFuahf!ƪu]m.X.-݆\6:$ץLVPq"i~絅UVT/r|]~iOWmVWUBۑH`6 v/Siqyo}GBȮN9ݟҧ\^$#/u4r[za i c)7z![;T H)݇#0yNE>nh5+պtoz8n259+lϖ"~~)?j_o.bfNfHʎyU[i' `烩4ܲZmu߱GpTҍZIj5osJ`-敧i|3.u .[oޑJlm$ʝNYjpn1M.ޏij`0V//o>zjuT|k6y jRʸȒ7_ c S]KW,YKn,cd]};jGEF3k{hww q>\pL*-(t5ئ-p :yr**!OoScM۸o۸RQ&e}.jZB'֩)'vu~бMW9#ߊq۽5{nLKKp]G|\5{]%Ү%\B@ q{JqiS[C.%VOeg`JGrNz =ek8Tq%ݤgU_ױ\H̑"`b"|*4vm qfB`nFKr7pqkƭOV_*muIU$Xx^Z9&tg՟;TMi6wWde%RU݉JhB !6kv35p')]ǟܚm:X& duvgWȠT|*G^ӏZ^GkNdjbQn~duPҺ4워7n)^rzuZ^Smn.5^Ka{W yuY=w˳*c?{s+"Iq,#%?x׳*΢I>#ڭnXۣ}%փZ}ioHI0YoHU qE?mIK[+R{w,]JPM.ͽyzmߙ CC4ѷo 9群+G xjimmt_>iRFky4^n}qIjYF74kZDzBW|Y(lT~[;~:}2-ڟ:3g_#)^;=?R {uKYee+vf8PHj$+imxmZ;z,a$tpA ͍ Nhݜ]R ^h;zD9>_k}qe91m3|ɻn)R19ߨ|AI›/+t:PRMKҖMco,#qOg[V#.[_=0E; uݹ[i#RY95q8v$|y kFJBNb} 27y#<~5/^Ŧnyk=}}?7'!&9V@YW}+X$̩ߙ"jwnӵT[_5b?kxGZ;YO\PB2DTb;[~{ V4a6qZQybh:?a&Ex>SC rٻ>ddЩܤ1U sGS\Se;a*JW6f 5ypҼ.EB9!Ee#v*y\a)E8OU-n9BhwH89ԄUkӕKwZ+q,F/cFF~a8^-9g_llGn3ߐAk|,ci;"e7nZBe<5)F79Vz~$ۤ.'ݸ*Ik-ıUO?iμ4::~$*8? N3z#oe'gXGU) H1p듚ոNjfRвڴ~S|̾^s19K)ǖ.h"f,͹~i;Gu1T]*rM<5 wc2mt)E+$UJ1DQq0cWU.Í9rĹf,x_x :V9]Ͼϡb$&9o3t+Ni{a-Eo#W,>I]=Y71c͜e}OQO{dkp-(˳'է+i)#M:ԮJ8'.k3T+w w9A]Yi\zWi!S"H˷G:vac&No$(B˅`59Ii#(gyyt35(؄ݨU{hvm^{g$ Vvo0XsޡKK(S>}?Rw n7=6ߙI,x9?Ojɽ %N4jG-QNAV ̣F{CYOTy\ue{ᑇ#&A6ZrӥwFHG&%;sTƜDQqU/s/SUK$ VUA#ⴍH7N:cT>XEmqUSMmE{9-VnvC}ת\ѕ[{hs`/I P͏xQҺ#F\˕ڌi⟳~G%qCodm[QF'g.[5Z>sW3̯sΥ2w~ rr֜Zc՞he\2_9ս9K);y,,m-}ZmW*#pp>SWG-9JkBz~G7 ʸf&$p:gi\>$֩t3V F3ţ  Wg,{1+]}&{s?k:u4ޟkN.|}uơo=!v_*9pc$3~QqχM/t]R[%b{HoZXVL/ I5KuܩFktwK6"W$cp<; "x)e W;*o/۬,I6n5Iko>MJ~~֏aBVnwq\X:G~>=B8ߡiwKϒvnUn}uc:*^ѩΖKk>aVGp?*gZQgD\Y[Fi[Xȑ2Lv.듞z{hIs4dFMK. )aXc~YH~{+U$\Jy udaEdʅ(yxw_RjԦK~dr-vF8xN4T?2op !K{gXQi+T{&ljx w+J1s)FlWwF?Ī{JNu&4KIUdI$;]ѲgʕYT~zHydȳmlLrz_QX֗"KV}KvCFFD1zǙFH:p1szq$ѴU9Lw *q;t#ӧWn]V*&$_Fo.4^(;S[D?U\ߩ1J&$ڴקȪ3!uiEtc)F5˝H[QQdT.U&F3BɌ~lwR*IiTz^Z4w"@9e` {59d|i[^.`X%ovv?vU%/w5T䞯}:ym$nUh+Z*vo"q&Pq[(g=yϵuae*rW^TVU-)$m 5%JO-cUWt#o?z蔜vZw2ok9F_gv^C岱N3ǷzSvkKZ8N׋Mī!{nցDqpϦO9eZMݜRxG}UY`|)|gp1knYScN[{ܯlWӣ_+א>nJ7rp+Z|эjSݗ~nkû{1rkzTc-FQ9=]Q&wS,6m2fSz2eNϡ߭ްWp9\?:r$FFks'8ۖ3އUhM;;O5y$6fW`1Pji5zxD@9㯿JӚU/wRj*gZjs{V7èJ:=OB rsu۸TJaVs/X4͸moSQFn`i`͹:?f՗o>敨yi#7Uڏv5\K;#Rsе$xWiI7dN z:{_sNW[߽DE#$ 3}rG^åeN5(.Ά뎢ZO4vpji d1=kMJJCMh"3,G @޹kK?+9K/N!Co$6L$s[ 1aҥ5{>-VFUX3'qj\e{߻eVE(DcԼ:qkj*5' c({Ҭ5$>Xvjpqk/%T-D.,Vy'*gWy<#19LU9 眽 j&tg6KHWh;Lsc *r{9Su*Fϖ)[^ 緍/H1GqXѕ?bɈ QЫqv;yө+Fwlz\$,ac8FP2zi= P,e!Z+x3Je 9+y̭n"MBw}l2[o2bUܻ's6R)ƌnߘxgS˸l8 }qңYaQd~W6+݅IW=L*uLhƝE&zjA,qH)X|p#NjU9F.^(?Zq$Ėm6ὼ''zt/aM>u^g5g2}G2,x\rx$DsM4$d_O'Ey).lY#*#7=m*1R1RZQVWo$' ۏʊ#ʹ^Ro]W[yq/@s;r =:Wj|Cwʹ،ی|͵K0^v(*TsSEI*@&?UCO{zt%dO1tns__$*SzjFq˖Q1"䶕Yd`ckUb9FVvk[4WWpXAcXYeRxOO>*TQ$C &C,}$`sӷMϗv1JTGg,M>eEf,v\V+V1ٶDϿQlFؖ]Nsc :Fi?{qUV_.cV@1Srpm/.QvpuTzqʝi_M5iBY~lxᇯUHѧR-R3S_Fэ&Wڟ2p'+J%ZQZjwEfZeվۜcT,*FւF8pCIԎzV}..}v:`Ƥo~[ny&M߼^Em)ŭVɍ8ʜ8#Q(fT.su[wSВ Ǝ(ܼLU~wî{~Rf#C:z֊#da<ztym+y^.nE Oler{x+jvK'+JuW%ʪ6|oGlcVN]zQ.NYG^^F;;i6?L}{Ta>{h$ T-ܫyd^q/SjymaB"%̭Nӿt:$vʜiSp]+M\C6d 0=c^67U(ʛdHlꩿoN9]s[42Zb(ŤBѪߗ߷'*vΩSס Wh8*mSfy) knۀ<̪#91Q)ʣRH)H|˕hGhW'8'LF-d9F5*5mz@aLlc;ʷnsxQvrbIU/3ʖ9@;?ҹy)l%9T)]٬mq "ʆcAεu ]EHnڑjsC4F[gnAdOnZy7w)6ܶv3#@UW@J*T`a]_ Ǜ,LB9wǷҦU"qljF+2W`^\YȆ@RJ>u@ݳ^rI߷r*l}[B4~01JZqNI1"dꚋGHFBp=F98ZbwZMTOKS\Ckmb{HZM= uk9?n=XjRQ61Q@k"'Q).g&#(Դ5F}Ѳ@UϘtFNtym?#OwyJ[f߻ҳR (ҋ}Lufwl]0AS|QQT̍> =͕ISMZ̪Racn7GVI9md7Hoћ*xyZL^m\NTjF0zN[^iEKl|nNN4}m7y#Leڱ)Xyѕ}vdJ 2qZX{->7KT֣T 0>ƸNk{о~Wȡyo!$:UjJ)Jn05hԣa[DZN[U/y٧RkP "5UX¦2%*b;j-eR[7_W4iѴ[8z(M tIB "}5ZŸzXklEnGCJQ6VUޛyqٖʺqmn"Fzn;iZlG߻q)J7NսNG\i[̱s"IꣿL {~|DWGe9U)rM $H\?b7\F˵z@uf4'mJQ|ҫt6,nnV($ՕHcr{,ew{^:DiPXw`c%'{79QZW_Pj.e^#6,sۿaE ե&xzq.T/w^Or-U.Ͱ"c {WGNU5[l*}jW"m=<Α@;rz|㎊^RvkM|GR\]'gfߩai*\Cy 0@9b*&vK,T򟲦֚N]hTs_hZHNH~1 
1+TJE*/oh߉-ӣ{, y;|НO$c8OKW&9-OǐgX#BO^c*jS:Y浼ne5,J>aӌ~53:RJ* Tq{2$,jeo5çZqMu}OswAsq\چ54e#$0۞Mp3UR]1FR|eY. .y{RU'}Xk/!b7ES7TYLDa|9}pyqNQkT{SN'.>\IFW8qsikk XۋZYUv㌟jqs++/ၩ:r啥w \V1 =Wøe}L2=W^[[gKھf q-yӥwOM3adwF7|.9Ez؏<5ʂVfuoEyp1J5'I[lsbN4Ֆ_B_1fzΜ%xh3gZ4¹rzgֽ<-)ΛчS^kG\揸߉Uђ徤 M C/6Th:uqP=M06WG]gg uѣ%W"fF3NՇh>]F2ik)6͡+#}sJ6ƬU:lSSx9bvN1悎z0%xEtVvqD*IElMX9_3NSqXu)r]J, #Ar4F4%c|9.6j[kzO&kF&%rZOKr(kIJt291OH;޲뵸Tx+֎*1u[}= xU@y ]{H܎eZ454Mk|/WNRa攡-:fӧF/۸'#W*s9=tjs ťBimnR'x+ه_CWNk:Ѳi%дfeGHċ |/SzүGMOو*/)Mt{A@4S|+?ݏAW+NZZ=Wm?VXiuHUO>b;N7eNv:BTwy"4,Apܼ$kg9]^da}.뿚ƵbsW܌3۽!F_vr¶)K4&a˖yT.ՍG͎v8|DNxO jb3kD{ F8mVr(ѭhk;oǦ|dŝwO٦kUXI^9:2I^}OSe(=^ٿ? ߤ\68sGneYK8$ r 鐾1ޖݬͫ[/asL4g;[$%o\/wXhrZbh ʳ4i1r9{jRRI׳߷_' ^ʤgMu$K-[څfc tg '8jb0x6I;>fSˍh.MtNbxA Pmے~]L㧇)=_3Jnߩj?.qmd(}&;X mS?S=x0Z-toV<=uhkr 2],jo$z`sӃ^&&Jxc~F<ƷpwVi/߷<NgSb**'<5숣hbfXn:N1?M<32`v{wk?_ՌQQqT??=^ ֦?ʬng>v jNTK4 9FKlKm#WFK 4z茟6{o#85F R䗠VҭY՟BK|xCaxb<[ Wק:qWW\k4 JPu(v#/I(kZ2)8[{k~VU?ȹm6վuqt _1p7lubvIzWf*؊:.On{'ÏxT/%# "*x$I,+)N1kcɔ=4]Mև}CZ[xLḎw>p2)1єJQ}ӾܤJR>'xB߾<-:Mi7)][u+3~x [MzM#X 8^ys8I{87=O kK>4[!?C,kNrpG# :Ҍ鯗zu{>YS_~þ l|5Ư~\#XՌԴ9rV-{_|~I%} w{(ש.Ikrє]=;hr,2d%8.>g=wF޻U[v,9ψR4y`lg 'SC/T"gmknvb^F`anp T> %Vo-?zyGߥ:㨯J_ShyAnGr=9Iϖ:j=ʔi?Mo[;v29N鑜vk5}]j*3~j~0[vcH%!eWrzUoOegaG~ZxUqs+Lq"G=kԩYsM$n}^yoQc x^gf/#o^8/R$Z-GVIF/[M/j^mj*V`JfY|N+{jG h&Uf^OF2t$.Zs>&ṿ]QN5:wcJ%I4weWdp=iTucK-yOҴ<p=h+rꘊyt]uJ+}?ƼYTFk`cC[9j7^Q1[hJ6:,LO$:b7R]ݻWQ=EZt55^[,> ݹ]wnǾ}?ƸB8xI>o^8ݭ\Z=[Uk.SYxá~b d猒;+kॄמNK.%-OʹϚRSSR!;yûtK*"|q^n1^MrsVfh7|sUOxxIA-#n\_bWU[w;kNuj6*M]_9 s~$hetYA}x.k5Tu4h }tw/q8szz1afi>cѤm%ǜ6Oq9Mg&rJ'[@b["6X'oλa"%7m |u޶l$x䎼|zi8;_bj:~ϚiZzW< 4y^աhۗBۀ[m+G?^+qJ1[yCOwG:L,XKb8rA)ݫtESNl:ō[MX %\'\R22{s]k7o!aJ[Okiim$ yNN7 T\m^1;]g$kXYێ89 ] i2c^(-)o2!36ާ`Lj)+_#z=6w;m.Uh̭owH99{יSKKWJrsm5kσdJ^6lnB'pf/|z>Yda"d]^ߝys m=F~~kuQ)8jrGV8,VwJ?R~kDiOxM$dHw,r0rF?\XlLS.t:a*iǢIx`mmck#m3js:jPiplמ"+uM"y[0Rs~++JQYSg5vIwzf54&ײZXGji(؍rK` vjIԲ?s]Ɲkz*w_33C^@/PA?J*1]7ϱ.{к݁+O '@\yLeSqRwuΛ]Xl 3^%5\rI@r<XR8xf ь6.4{t&h̤ 0d܊ΞaI9KO~Hҥ:NvT{%i#Z4 %G^,zvb*TjsY5矊T斑j +a,~~.s6GhJW \Ă6# # 2v`R4Qi[]V_;p)YyMMy8R=l$HCrP66>O\%U)oѝ0Ek[;nnWËaG"6hC${O |`#˯Cid4 fcUH'Ls_p=^]|ѥGtK˿̟ VX3׎(J3rMw򭊕YmfPB7`1Bb*Ǜ_sOU{$u=BŬpk'[^κit1u}>J!n=A_?WԨݷ1Js-ű0\xi'iSjJnÅù6]fuF5>xjiM)L·oʹt5%sl}ҍ? xʹaN>әjgB.35{bfg!aT$N9''uB8u -a@\H3 <`8hQݴK`pXz|;EOAi3wDx,K?\]*-oeRj+}m.G_{GWUC,)*O$3.XOgFM?>ysOO;Qڋ>$ܛgNhaFZvN\-6JS}>goN2BZ[۠*j%oˇm+_a渷Hci̤aNJ~KA"G0dhmdQ~`s?2z?i>봮z /o;&&73؞8YS8M Gc+Q~FxKxfm=[1J:T[5hb*xn]uK:|r+gpϵpծU?)|jygQ>,`B f~r$)Eoxf+Q%°T<ʰiSQZ$o3 wpܒwp#ަ8i_MN5~ˠ^HVpzkEy({a$că?ʿ-0u({Fe+Yoi-ѵ`Fl7#2 =X,"fT%&sq#?{"0y9SG}}5)3r,wzMyAk21y<|<'3Z~FZj&7եżhory=;~jjȩZqiعkEv}ckv:9J3ϡAipG[IFG^ԫTo{3<^𮯬G[!oA{{\ʌx9))Շ3ggx;S$49aP?!l~_֢b3n'ɄAx>RWFz=? 
?7$tZqj]UE|.){e勿r0h?@7~?]Yr]0kFbgd,%Ku=y?mN_h~TFRnF?2a8TO1p3>nT'T\2A`N:=~ʺ?wR.RWeN޿qJFR)y#y^;TSrOOcj]F⅊#wq3~{k~hʟr%V=c3Q5(ڰ3 H'Uk(ۊ%y[3|[\+ZJ8㊝>Tdח+"@fq'HySgt[CsFV>|'o21̐R5S,)hϚ})Ge 88?0/E]t<ǟDHrUapnd{8<<+J;A,0Y$^lݕ\7pp=?Jl}ʣir}m)%{os|g'+~vΧ,emoh0ܦm?'&`Oc}:{V7S-} ʝG=mcTn~fub_p}8ISs]Jr7GsڥnXqױ~FT9{ɦsTD՞l2[^U '=2>:jRiFS{ߩ`!6瓐9<!<ʱەe qݎ|Dv,iSi$UrblɹIQ~?Z&ryy[Ta̲ \k۟S<}+iKFFxew]2ʽVֹ=rǩ4E04s" dGݻ9^2ݫXFP3IB2vSVʳ;$q˴#$*KmL g{[-$)'&̏8OmYFrZ\oxb/qnFk9R(é.xW<.D,H56W[ڙ$ynf@<\U5*Gd¥_uYi@UKg~8͟s9_s'Z?kAW |nN0XK2 7(,y\"+#cgR]vk8Z3ʼm Tsr~U2RDՏ3v!.v'f-l\h6^5hVYk('*N34U&4|$Gtg /9zq sIshw(حqt;E?0 2i9$tVRiRTdlyrVGKml*d#q;Ȇa9Lc'듕NWcH䊲F 8A;`˜@TW5o/ݒQL|{l>^kus:I#\6kA d>z:݊r-ZCU秧Gk JUS;NWqo'cJFYpF;0yZeJԍbE¶^{Oˊڲv6槫Z:F\^wHlz}=릟4d TԋszY M$-=xk6yF:2lH'y+=6:#JR۹`0\¬9G$n*qqCd=,ԫHs=)mHUpyn~cG4^ڜ14!$4~jJ9=~4S!s6,}XdRC PM'fI[rv~f啙W4yw*2g+#gdf⍹awhdFڽppY]c_WFx-2;lm8 ;zʰu%-$y}Egpʚz2a?=ұ95AFs&;U 2e]pT3˭U9Y9ni m{ǽ%6Ѫ33s[Om'F,5b~o]PN~ecZPҔYBIe;v0H]OrnsKJɹdTr(*y:uKurH>dd;xʚ8N4I]e0хHi3W\^5Qowe{ZÔѴu1\,{X7\wj5s.2ӡњKD=St]Ef8֧.ynϼ #[1˱X+ AycOzꩈ8٫#BN/Qn6r_RQh GsW{v28UCzZ\7j֧)5o0.&>d*s31skG]rru \q+QR/acܪtX ^(ThR7 gqD 8z[֨ST:*j${.5)AJǠWe?iW-T证?3<Ҝ+Pn㿝sϋ:]PK57U(?_Jz\^"2kup74Ҽowk"21p\k׫JQN=F_U.O!LZ@7s`qJ-r}f+Hw (S%l}~OٿyX鷖:YXQ𐭌 T6@ahṜ]]lz 4{_cO֑;MX\kn加lN0F1F-Ӵ~veOgF\[gח^,-|L^k 7V8 uS+G,|/hϫCj67+a6x$_YZ[ IFot 0i:$-uCbKc'98\.I-vy롥avݾo$+H79PN2pO8kέRkR654I#Wa~ed2NT+:w*%g'C:Ѝ~EpZXy-OCkN, O0IןW?zxӣ[/ִt(M2¡13\愜SfoVTBGLEWbJP^NId+F$zԥ+(k.Z5%&k/" 8Z$"TRrѼSʬfev=87Gai.6}3ʭd`t^e[5DRx7 H~~u(Ы)Gw7<-G)%$aہ$q+hۿnjʤoOKV=S7hU:uxiў iG'lr:dcj].r¤yt>Zk|Jj5ΗgG 7ϡ? \im= 9 û;cce맡eʷ5YifI/$v$eu3Ant]OFpcס> HQHʥ<ۯjHkT8w !mGTFVٕ_GXUܽG*\N֨/c<ܖ~.\ʖW*ZVCcEq~~a>M^\Om_ )S3\谒grY=2kF,DMoXJ(/mٙH~UkrIJqz80UҪ䏻J/-vy*Q;W joЍ 74*@b1xgTz[mER n o`(bFϖԊ^pwn'8'OZRTʺ:(4CkȲĠ31ZR-u)Q~!cm$Ӧ>j#O/u`eh hzȲGq(k4atfXwA>I&e5=.--b=cA,g͓te{1Ջ"W;Drۨ>׺SG%H!arU P\@彖ߩj*[Kg& ne8ϧJ^KAӍJE߹pK4f~l}t}@#St4hygfm˻${ԥNO{Ly(Kθѵ;}[\ү!(cpdwv~3fj5k].XsGTx>\^nlYci;sZ}}lT08iGh+m[ٮ[5g} ikڜ<fFSy [qN<*V[].[[X$SS~W mrNF:߇);[ )7 9|eEoz>+}jҩ,Taӽ[Ϣɾ5/x{ GŲ]ǣu Lk$pg= N+ʚtS3=}i;4]_euswmvlefpCy N*},k'仦}Niz~y_NĞ MC33Ak!Q!i$#=87>1?-L>`wɟHQ̹VE"S9\91gk mC?#\[sNe8l5jߏEpT4fQ䲹 ᳌sߚ 9]hZQi]FQ#VzJ.as=OS]tz[uS7̚ڴm8&}~yde$};s>5=Z+ae*jWGnGƱn1W<zWuB.d߮dhԌ"in}/hzXv*" N;ʀzgouK)'&_ƵR]Xm%ܻVh dgڽl./,}}/_K{u`L;0#p ǡ}hI>ӥi}l=YS55o/pr#g7ڠ,ʃ S)c{+ք7\t>\?Oͣz%yt?,QcXSiQ{Wɶ˹?NLsI۩z^J ׯ'}},w9k)OtGCelOCپkDƍۖJ!uIq52Y^s9Ҏ? Muu).I>Y Gʅy'Nssh9%.dd?:Msч4ؚW`|Cz#5tfqM5oFŐsOLi*qVt i efzS*{%ס94 4~r g<TbR0SM YWmU>j(ZnN_ޫj4A$|9S8x{cEH?qSvƕM؞r ߵ y=zO? 
2in2l[bzWҜdV!d,͂?ִb}Jy,-`wiԧ+Iq_yəѣѳ+޷C:zJ.ޥfOdvն\ |8ϿZʒIJqztЋB+G$,}9'ƹ9E[)vCp|,rߵsJJNNLT#(&EH[hq֔-64i>B n_4/|V=kYmLc(ѽ`;d-~3\598eiSK W&]O-UsrXΜSwCԉD[k`Gޱ(.ƥkAn{`Mnl眜ָrn7I~'=H**4{ 3Uae`\| ~qT>nQa!rv>`˵yr9\u4bbVZIƲVpOr>بƝE((6 $w|K)ҽ9[N%y*j3եM6Ba8zNt,iEHrM][.pU/m}sR4j\kc9JUkJqEyY6O+*F9n^:[FtSrVԔY!72r9E7(r>UtyHMqF1TTRm>DI:B*HWZGh9˕-l{yj-YfU8l@ s6ٴ6W}YuNUtCX󍧃5e+]ISQ8݄xWha(a=1+xs_SFQ]5o:8nIfmI{ pFRfKRbѪUsOԚ0c9S{; k9.fx9_zJ24R3edYP^v{Qe8ǗQ+FG$ޮ-utQrn2D0^ +dȹ'i9+IKGr(t%mAL6B8 9PjeͲvgDcys!x&زbQSeRh]'Fkг,RN˶B3'#w:rw* ]V@YJǿCjUuP"F Nh11*:Ұ$+b`=h3_E9]_+k3cdb~{Qwn#}yn嚝ہrN?~4JVFP݌˽]E oƜSS`-R|A$Oc:⹝/-;ʒa{rǃ}jʥs$d7",c29N3jq:^-)K{I'o⏮[rsxtoN4 -V48I,[Ws*ymlq󎙮5!FhNr8SP\nyFK7C~U(nRJ~%pInBp8XcNq]z0SMΩ]beUu(ƛZL?%h5͓W(y9Ϧ8/zV#8ѭi{.-;Jµ4cUFR;I^M9y"Ql\ִi%,EjqgJi e.I?4K/hcxŽ6DlSpfxrUK;{T1x2z,T$SIϕ!o9Ʀ115/Q4޿M0e:+VԲDJgr[uv<}b++c)FY"WF 3gTU*uwe;Ky(NN:_ʗYӗŸ(͵Qoj5QƜ^qy)EJ澽|#K7j\ܯsyȳ$ 3͌:=-ݯm-͵w1Wh˷ Qu%uVRl1Thc*rT:~4^s1Z!I( 1=+*F\Kݷܐ< .F=?+sta̔tV>cEY ÿONxYJ4jѥuXmtѣ* ^ [!H_.=$+p yY!TJZjYggir> ֺp_1TMU6T0\ ;s[rˣ:W5 /XMNSt8F +E.)$:d6 ׃*4}6EQQ{}?3]&G<}+IJHպj!yo'\O?Һ} <؉#[7~bAp\{?d;d]ISuT#i7i+YG#m$ni $W-_cZ_~-Rw+L_vv֫ 8֛[icte68XJiIk錩XݼovҙGQ[T֖.Nj2axg e֑f[֥NT+M4{v{OUTi>kѕJVYmjLgL돻~U-QT՝^~"%{OYKޤҍթS# %G֒Iѳv܋8\K3̍δ{yQ%唂Xdcs~_Z䍩uL?3\%y!k؞9T$QƧ*F0I#D\U8'ZfHW;(yϚ\~PehdG;,O }1SӋN6hK2NEȧk?hI뢷DƤ=MtpiܶI 9XXXG 4wpyP82UApLz:tiq ۑa~R6֮c"dK mm cW=Y(SU%na$oByU!S䂶N4->1 ] YKQVB~a\s۞4{xE?h}U ap1?fqêXjot-$k乐Ei3.+m?:9cCxZ#i~-RI5?6?u(Լe;VWD֬e@a@VO/{9h띠sy{Tt%ѮK5m<̾K330@c5+-ԭލU䜨n<_N03BI/2VVM_Œ#Nw;85{u yq5ǚFeR1+fqqU'JZC%|wZ6WwB^T2#| r{Ӛ# +o^de٬[S6ұ `*S=*$r'#{kv}xۛw~gj)TsTToIw_w6%m6I' OAc8"߹uhMɻ>CZ[8[&l)x8#ۥRZ8MbUkdYG(f]#RnƕNir?bq'܉~e߅aڣl쎊 JѕԠұeRCyڜn޵Jj3mm/X~3SaNړ+doK7~=šOo܃h]435S,;0@1W'2W\pRM}Kx͋fC|ꭸd8kT̪^V\]G.m^n1^IEs~U|Rz,..|ܩor=?wڛnq{Fr|'w2GBG}룛K7sF9M+qGEԕ}&ջm?HFl`AJT=buQ73Xz:VYVq1;KrO'$c>uF3[pE;nۖ'^S#R1[ƞ-oX^Fh.$Rr09h{Hu*T%+[[ 13J7`8~Z ԥ.hQt,rƣ1'Ԟ*x*sy=[LZ*s$22q˙Q8}W]:e@K;1ۿ>9ӽd}":pjQzƋ5^F7nWHu꽪>ZR|ԣ-E22h^(4W4h 6޶/s\rR5^E5/ұ($V){W:nX|= M%d ^q˕˗όDv3BHѢ߹zfbcJ: ݚ]Mcn?\b~:K#*TbWzh+ϭvTFpe^Wq!۴Μˢ&UV}Ka*/̫`ds5/*Ӌoߩn c-#ø䲎^.<=i؞(BN*;uGVR|meY{]:Z*rˆJ\!<1qemv~b1Q^ F)"^\W2VF <]xi5=Z+EٽK JO*xpEPry83zpK}?0`4OKh?*I-JwF9=pq^Q%Zɧk6AYXٙn(꿧Oz$hWS̷kf>KW-kSd[FCm+mcC`[ߚIc|+xe7r6,r̪FrbF;⛕:2J_Niw{RRMKW|`l j.\BV΍jɥ}/n8|0 ̒rcIDvqlg'QʣΩNR+[f8RʪPrQ-xZ{򽖽߮u:%ruZwWwE<-lZʦMXO*KOQrt_/䱴S>s Nm7Q<)Ud 6C6b*p]]n0^qKo?Kȶ̻$fygѝTZ;#Gdls'YӧOٽW3v 32Mn M$n_+VOJNY;'2IHٶ쌅l\\u8~NU-|$kާݮ"1rH eUf#:QO)%9gPk!E B1?e[*1MJtpcq[sRy2+qY\x?|' *\ZEœ+jveF#3GԟA^"q34btb_EiB jq}(VUT㊄c̛4^{yqUFمxӏ2_vrڻ@tn׏R17i*u"a#76ܔ5إ{qk K깮/zܠBK$f󏻸Wo2Qe8mZVAtG_=tN oޟrf~_JZ;H Af05y*+jhS&&>oCkJNU;)фo)dIJ;/O$-zTy5 fϕxܭһ)w(ӧ&-X*+#;1Vu5-Me%<1Nc۔_Miػ֩,Z~TuNHWUy&|=koc*pQ[WъoWur\](Gan 1)F*su[(\՟9XB~aJPދZSf՛ȕ擄^KXٶIpyH㷭t-m{u'I8^[}%EWz ` ziR:u 6!V&[Z^ZCW?uFEmUE5&T3IXc^ 0;Evz>ZeR\zf{c5 XcڌT(۟c*jowWu*RPj54m?J'RrVy&jvq'7+F93Nl峿.2 Em6aBRZkF,j{Ἆ- [e_ފb{ttt#5Quj ̿gao6#Ku:'z2ںoCIc#Zo !:[e]|H㜎՝JO zym2DvC#4qI&s`: U~kȼEUR$k[̧۝B%٣ 60ǯjΝog=UQ%R ~**T8+YT:z]1Qs4w&K5NInUONsJTSI.U'701^{~Wy8#RS~kI>deTjq}*}?G=k9}{KpӽW`+XP଑*yu_*ќSmf;ͼdŬcΉc8eRlL.g'y<+j54V]ocԌc׎KsifܴGg{78_hqZgleemVuEH85=mmYZF3Ч,-:\m[>zu1vk.#*[^OwQFOgmV{AͰc=#k_CѾWN-Id#;88q:(6ۿR[B=o%ߵ[[g\,;>g%q۱':5/Բ]5wۿN2N3^>6fԣ$1;TN9=+:UKCV?J8i%~-bŴUF׊PsybI.ǟO.rӵ Dlk;1v2p9q2TYB1^ϩ*5#$ֵ]oy\C-ny,VY̸ @ܮ-0z^Z}7K]Lӊo澮]=wܧ&խaY3lv? 
8XѨ=Ҿ_γaZ|Џ_KӮ$[A8X0aaZjWzosicթJVO]7FρU-cPr,#Mp8ãoɌ㈦M;?+ tOpq`R?b:WKҋ?c#Ruz+k~JGsp<6NWrd,7Hڻ!B?1j7mIv1zqq59֫[<RV[駯ȥuZ^)&3y-4ySjGG* *0~exv]},28K.mvV݌ ጱŶm뭇85?7xckBI4yP| tEJXۿg:k_xR@35H)+gt!Z)[uceZ_TWK{oom)tKXOq»ݣc,ֹ+`q/ IikjΣ@88l)|܎sW_euQZ2jJҽȯhf U?.24e\9x8_WKNQ^qIA6ݞ@<8>kIT~^.yɻ(x{W,wnb˴ i\5toSJKWZ@ւ28d`K|xӑ[˙KsK|y_%Y3*I zpR:i`TM/NA,IޤeR)-0@e8=+ԎNJ-gӣTJϪ_~(8UhYlCa#ҽ*9=]_=<_SU魵(e۟޾v*\xb#;Y|EG92#J`nd6Z&#.NR|].]>17Z\ڪ,ke„nHysק)Qui;{l :o_.yk3嬒6}}:,}ǁudmI/[\q3Tr>Ɯz%mgEqg)̤ާx)5d41<񱪽FOԍL'ӓV][ok=*U )]J ۮkm?|Cgms /4hg-2EFŸ1O<sThbíK[Fi";ulpx^ۛSj1t'voYxMb[xQ`%pCg$ S5T*TkԏMs>IC'o3D48uf*NZqI݊j6<1vZKF̃snR$Vl0H8ӆϽgT/g^2OjU}ƤM_#0oc9Ǡ==}8^W~t>DW4lC,*NF VUJlWu?qrP:~>g9#֭3/U6~v%oۓ^235󸊓{syu9k\lK m.=篿^כJRsԍ9azk.l#62q;gW澏be Eo{u n\oQ?=Zk-tU4Ӌ\Ec{x+l<)DoW5qlF>]uǽz|ʥh},B.OsCZU,+ӑn"EgmyzMʪk[5Vfwg7 3*G_N#[aw6xsJYJџgC.A|)W ͫ^yL"L,|(zg228c%bp8SM4>8ogt[_ft>i v;FN3q1 v^G/Q^[V׼u=9ElCz28/ F&{q u۴^}iS].jzh 2T)1xS'K8=2]ۆqۊnY%%N"?ֆ6XQvs׽rJ] KpwvYw`AQ5o-HY"L`~C$]Z:/C:;w3Ҿx)Y8MK.$r@, tU7E=Wt%*4~_q쮵R;"M,D;z'z )Fa\05*SKүUA?ξn{_g[k_O#խ&Zĥp7~'9NJT6l;%#;POGH1(NrT6zץZ2ji߿(Ӯm][k72^^q}nؾm9폛3:B3^EtF$Xaf@@ڼߐGִ5-iTkO_m.!E2nWN(34LOmEB]5(-a7v]'^Ym3-'0A6~Zrr1>ղzmjҎ"1KWE=qRx ֲ:jZ]8e+w(+>splvAXV&S T;~tfo _<19ǮF;{JQgVOٹ=nh$qJ%T fFӓ)NR[:mJ 0[Xdln;g R\^+b#R74`d4cko^B^k#Y-;~Ggc;I ?_6x<1YՔTMZ"Cb,fyJ2zhLc%i!ZeYq '^ߩN4uYj.O.FNH S'ұe8.V7ѩu78z{9m 㩎SV;M~1ƻ0p:ȧu'm>m.֟5CuD7\+}}rK޼ⴳR-9G_>ŷEԺwf<#H&Hd XwdjWM1M5Ԣ~(udp *w^kӧ*n7# ԓڷkt"?O5qdr0d߁_Tq.ǗRxn>kie0yyX~I85XԪ|zG󬽜c5{Xg%8oh{Ty #o3o0́<¹&coԚ֜#uMhHEd`JAnG~>iFGzy}7͓`=zB1סqRWէt2Ifn@#8AT6Ob܋s0j n#ӟΩABqNS<4:. 7 @?U"1k7}mxVT*사TZ)^ڝ\ѓUbbq]ϖWgC1~7s͜wZ(::=NS9([6'vN>u5GT6/!XaPeֺxΚO$rldEP[pAֺ)Sc$#!]z쇽-P(Jb)etV[vmR%d7A* `m\dN-q||[<ϽcT-\H[rmwϧ|u?5pt*V ,n^;ƳEF1y'n9:rvvйFJ~?^I03.bK` YI8˕+)e3)?g(J?Cif^Dv??vZ%o!sġ+ۯqV~ƒtiz2vdw>m#8nߩIbXGZ8v 7.=KYY+~ʭLR}ɺ26O)IIY%Sz_)M BVܞlv1/^)ا>VeWmo0ɞO=siu:c(։XqF{0Uʳ@ oʎΚUE7f#V?R6zWR^Ь/&++GGmCTiǝkNe;B0ܪCNJޝ9O[#.W^Wˬoo&9 8'Ӕ`*4$;8M՚n%X@y$bwڅ.Jr2yZӧ)YoBUUdZA%gj*mJuQ=5R徤0:q\dVRH/b'I$!`:dof%gژGRn4:Ė>·fN[=tQ̔_O:=>E-^&}{JS\U)V^vvڍ奻KlQD+Ïh䄞zz/[udmo 8>Gᰵq-ş-XǷoʄg.z}9皉sFvs9rJoo!V6Wh VO [ENyy[NBoc`|-ESCS؂uy cc.A84Tyl߉<w({3C/v&WϽrﳫhXUvi]B9su:Sko-(ԍgE+.]:!%I{9N;;))h9{II.S,eu9jRAJ4TG: .(Ɲj˹go0} :LmʚgDa)NW{76/# sU*RsrV鴙.^Ee|D's\nRH֍H4-vu 8z$Ey_Ȩ\y_B $gtf%YRK;.~WȲڹ:I5+O/qx;c~N8 wb;'N4ȲiYFqKUFJ0Z*4UF ͙_>xŹ\x82]t9B +)p);nӊo/g== #[eƧmч2*x?{WV5+?WcN^4nR# e]g\g NkXut ?xێ`Լ|&7m{b95 9 \dbʹ{\ʧfkXUḪPE@}M9b=c{T]}<ѕs]鑱86]SF`u.K>,nxXRBe r_[^=nyy 4Iϧsv𕼚t66ڧl(䁴zTb'Wuuaq/{]m}mڿ?zUϏ= -{t yPXR߮qbT bpX;;ik~,jVO}DkM!_E|lcVnr9P+ч, 7ۦ=~}Ie_~Ty=>2Z~iڌY?[|]0H8'08n1^׷EG1ʒaxK?eIqZ]i!iUZE9 ۋ*,!VLUVv?UkF_pU7OQvo~o+f֍[ͿSZm!5kwxiX 6ayRVJ/[_揞ԍG{_O#KFi'QoV8UْB.Anx;qyuH*->Ec2zi84Z;vG3l^Լ?4#}`S7c|+Z1U!$7p֞Gv|[/A_Pٮ$ pE qr@FAex}/:ɞYQītoy-vq~4𥾕L״3.1$0 +ݕ fN~ZlBNZ6,m[HC4%ɜ 0cue%8saOM.px&xI&h>fp`Vu5 Vi,xS287f޳Mv3;#V~f.:rC<'k:{ݬᬧXo$' (3ycʧJ2+~zvEf;Vhf8{pG<`W-j?X[ou:؏^[i^CZ\Z̻@NN|rZ1/UoO=cjkZ-LLw6z9ENjmltb`J2Z|$m0|0'ox~3`Dm# `T\M*+ӵI[W;J-a߽uK?$&]-ѝ9֣hҳ٧t~[#)+Mm}v:+;8COkɕ*=շS-GN֦u}8ݞ5F 1)^yg~ W,^kͬimN}-~F#[J `z v(|5v\ԓ4-ofB~<3ק5NϧTӛz9ch?glJrǯLtO_#q׾ £HFNI9J$ַd~zԷa4JY۱;{ژ:ܰ[(Z!iu 0fq5b#'E4N7*]zGi[/?}+ s*VԗTpң( ep+XQSu^6Z7F͵r#δ֕:nԴ஻m=%CdosWs{j|5,uevp$|W6#i91u?G`jB#gcFoNm鑕\9 ׽yk/=޻:6ZFVMۤ}⤟=J\F7<ڤ=ϑf\YI~0cjJ-Z|w6I#">FrzJMjGV[FXGo^]FQ Iy{@;A>}+`gN*XYJ>KRdt>['래0ZjCxg (Эx(-/4Q[(ϩ)I.FqԇAlVݑLۤFBl?QU7MEm-мUiMSk_+t=S>=" ߗJajF\ǃêvv?mmLn}JO=W-J<(ѷ[F3C"[UC^eouH%;Wm xYy$P~: s==O>$xfIX+(^mI1N];]TS}Z; rHv&rG'8nìi$βO!8s8'5n[FIYߡ8pV%$tP-BM;J3Ik߹4cWZnR]"yRy_#sweߧFObX%+$_/82 ^W=Qʧd~~5LULc1¹*rF؊23Ct6Z]ݪ8=zpה=j*2^am+8k5{lSk6dcsgR1Ng-E<|{dhOG'ZS55猢~2X97`ck<'i<=q_pӴn>5g~ai?%^)>ϥW+#4)E~,SIϬ[kVO/{jFb># u }G~+m|C70![gkr>3ZM:=5_fL&lN+O{i>xC 
5MOiz^Io}&߾O]gks_!NU2*qqwꏄ+/u%5gmoFgIOɛRxq'`i"\= BԔN"IEyh쵲Wv1fѫCnE}gKMz٭yZWt2dQ&TF;8sVZtvZV=UueiGF= kc}Z]AhٲUcR'7/#9cJ"I=4_3#Reg6mK}$3A4|wc`OU_;.2ũK{-<ߛ_#>:NmimI>-ߙG_'ra\^eox)nl 0Hc/NOEml},jZ*I._X1$2 f2)`v* zWNPnO,C?/#M?zWg(e8Ss] tƁE< ym#8h@F1g6%ʬǝ,EIVtSm.qzйAcTi˕tK]S6Z>UuEFں]_9qЍ:mŧ߹vI͵˕+s^fZ4e̖[gjI16R$ ."W_݁އ54ުݷچ>.{뺾=s }VLżڮ|푞A+pG:T5gU5&4v:#e oe[ϸ*;Un.:u!VZw{iSխ%moW~?fo76mV8 }7Nnm&??*Tfu"vZEVGˀp1$׋FRQn{ҢMϺNcJ4oROmi+y^[: \*2 oiTfԴ|@Hxb39+jOğ#,QUf NkS>iߠnsF# 5~M$9̉Mm"iT.w1y{TX0Km=os#trPVb֖ԛԱ0p-9ǯ `Srzknb4QmU ݏ#Z!ʥ%(X_qd_~RJEu,5[˕ x8E0vwj*\~?a~kKc\;U}קRm>D+*;!W8@?/eήxӌ̻+Hۉ_2Mz?ZJKChj$i#F(7qoF/\|һ'VNQ5 ̼9W4I2ι+[l˃d5M:\i)\̑v ?gSeM/@ xPyJy5RU!gbی:xpY{9a$~īJnVb8n/Ю(0\4jVۻwR9'RRq4( Iu!V?.4fIVU-5$^7In#3)cӁ?Z)r8螽-A$ѳbp^Qy^4$;UvY.9ƹjJ~6Ԛ p)-Gw{vu9ZJR6A)+梤eиε9Y%cVޤs/5[!sF,F;/ okH QN5l~b#%:I"x *K@{[3sGEf,Zq[ B"qZBc\-Bmtԝ͸JX~}To3Nk+%eO;NsZrRјxίn> :1egI^4Qرo<~r,H̠󙘨'ݍL\>FfEHK]6^8mMXԣ8=7K#ڬ>x}J%MZ[]_n\zЩJ1ywЊc46ەzq߿jr}2N%(y.VL)=ƺ"KB3ku  `ͻ*>uUjnȚh1m3(ÎxKK|+߫M"W!cmïpH&ѩ~kGb΢Ldlmi:op6=^)$eV6>88cgdUF10<7EaW'ǭTeίkyFH-_O.rꈓf{U^f''=1-6]OV>lNYab̪WhF2´.S˻K1eG\8{k;'F^sVʫ b[p; ߖ({J+I#7VdWwGm8߭^4j%q,3qq#4sVVsZܒDosN 0ps=3uS+5O؉;[mw8GQkɡ3Y $T$督񮏫zIZUYќs$~JI媌00Oy>(T:+֓xu#{3bZ}Gק9]j\qZu= .".Չ|YԴ~TL5[XF;u*~i Adҗ՞jwԖ>ԤePnYT9F&7zW>cp/ڻ=訨ң]23G޳r~}LN.>rӷckEImܪ n8:҇-c)a~u,62}>/&S2 -# ZEIXG4>6wo?*QN֗C<Y5cI'3&܈:t8=C$_"SmŴsjWlsSHÚ[! qnf7?Wc#ݺEFsXKKgXYU3|ہ$Zʧ2ܵ TNȇI-N3JQ| 6(A8#"crkn H"1϶k:n?E6ݍ -_-'?5V$sZacKnQ"ݓ88 uw+cz"E1Z- dx#z78&k-4>@}ˀ}0z_|8V7.-+4jL(F q铊t¦+gS9GS!m 뜍ZƜQE#JO5K/]Z<Qb7#g?ZJi-/ #V_Nyyi`oӒՉ'1S=}uק8ȥFv脇Ƣ6eSdqUm('.kchʅjo'tCx;)f,B‡`d>F:#ԯm^gq$LX$\)>`~hA-u!N}k5Y*bֳelֹo3h8r8EsѯJnd|j[᥈))svcʸ$r1OC%y>ƕ"g5x2>emdu9pƬ6w#$?SZCQ8=jԔj.g"^ڲẒ]V0̾X,Wf<ԭQ^qf("jEo1?MJwn:%*hYsok{99 V2kFh_iBv,;#\*o|1묩Ԃ<(%KxW;v\8 cڴI;?ǩEKg/Ϳ psZԌyQؖO6?(Ŀ(€=qSrW)N#y9-Uheqex:uZGٻ[KɭmTFĞd; t=OU9SdL?{ClL24jW9qS% Dʚщ52H (ۂyڶD>!Zݡ2[$ &ݨ73tִ1v(S-=G>P[Gd }(/_CPiɣD FTmiTk{oKCdwp7n#$yu*:qam̫wgyx cvկS>XԨWq!U9QUx#נʜuR^=h{8R5AH#VJGAQSM;Jn76|ϛ+ܜlODhI˿pJscy˴+`@>zR6={iZu`R26CgxH+;Nqedg DU GqJ R17uRkg28k]=˸6`P1gc]OƕՕ։[mjt$EdL>QJ#xE+[U:j5h6oՋ>RWQWhG޴edB8EHX0crk4>Qa-K!f 'NZW橊~]~ka&&]q'u⴩b :SRVw\\eS31-"թ:qٓ )Ԓx岸Oy ʺW%ʒN5G?-ȾX3US*-'fxJ2~|~ذZW'IJ'yn2;nWY7|&Gu^Lu+K}wbU p?6OcP{9(5ªWwqt~XR7">R:;\j-%}:%^|ȯ=MHP|c9sJ6GqiU4Ԓ+kfZc}c` ʤkm["nk]6}Scp9sDU:pFQRwt~lڄN}tj2$8?^k~i*{yF|"+i]mjO<cihs{g[E}/جmmlI+Ss#M=mK:G2GvuˮT.*<O,&(lMKQǯkFq}q4mka)*`d g~'Ժ%NIsBVa#Gq2>C-r==+Do4w{jpRپpGxȫ ֽ |riN[5M?,41-zq]Tz*Վ0TՂɰqמ滹Jԗ+Z=Ʌ,{vgoϸצJ,sHԨpʶ<۞3{Ok{hIm.YXfTm9 23luT}qWO,K̖Ku2O#+W'˷jSG-6J#د̿31vZ*U5Z~GDjEQJQZOss`V6}Z(s7Ӕym ,2_03|ѱBy88(Myepe))E̹Ry.'dnc8Cd遑3$d*ەN0玍;uGws]y0rx~Mc')Sfя6B"+.pq5:] r*=&<X3sT}aӋH짊䦢P:,yY@FmfgRJnfUq7OV{+쐜y\aF)SWsP\JRHA;N{+ϭLQ'-V9-np֗|(.=kn~f/fSMcseNx##N1+J5sgj\sz%O rI{c5\1IWW[q3;ugcerBe[E⵴hyggv\էw:+B..vFzݙ\O[{>Y5JrI9S(]Jo6nCw)ƛTj5=zYc*d;z  F(v4?tFJ=;MN;n&{;9R8V7t* A2}knU++zեq" h GLT2Sت>nh^.8֍tƝ:tlkNh41q^<ߩ[SюF5Y-.'SpO5&..RWdClˊǚ^*RQ4}[Ibn:Tף.[cNgcɵ*e餢7I-tnۈ0N Ti_SXS6:]+FLMsqt .Oq5VRcNQ=];+VAY9==Z|Ҝ#ZWZz^Q*}{TƼnJIS7n4|] wy<)zg =g Ut5rEev,X^1~"5MRi.?/?77bmU¿^zrI-% ?J~JNhΦ_>_UסT9uGֽ .;˰˚oڽ^>[|zWB:rSsb!6o[o]83 2JȯzIInLӒ4.Om-,1WQr5#r_=^zm&nVf]B^kQiZC)wʗV^XX5hE!ٴv~A0 vErQzuj5bkki;.vo|t19>$ydC{JNuh?x![[gż8؅ަYIׯsJu}|I-?RX[Iy* >iʣC<650*ikSB.]bM4rnZ6\(A2幥;FKKo]Ym!X @^ռe=ԯ TpdYdv [1W뚹+[Ko 4U%VuH5)YWt\y[qz){s^:ta$_OSҧF$ǺSqp/n Wtds[J<:F> _:\ghZf#G sdぜ 8̪TU_#9%_@,7Jȑ3)F2)JTڿnJ8lvٙ+{#[X)[ 1ҮvrO :u* /{.=:Щ(ѧ̽oeo,e_.$įTmg]H:t]מzUG F~vwׂ|lUᶨŽǎN M3AԔeB<Ǩ]N֐S st i -I˔&7i1ןݬ{H]+ d9 _0F4vQrv79|sҹ1XyJR[+QRR[ZjFncFUg8{;SJO/F`+Fe`6^UxQN1[)Udݘ (qQ^/Ǘ6*~0Lִ|3'܎]KVzЕMS[si'̘2,f$9E*5K] 
.]y=,Z"7˝}iN5RvI5ʙ3WVW܏#is>B&+V,vjn1Ѐ}GJs9MMNT&ͿZzX\d<vcNґU#d/=\/[PSnx1Y֩LMԨݥLu{[iGfªϵrΛFVt(j6*n`{¹*X<Хy}K~𴖫UM8?2lzc=]:~MꌠdEie8.C7EfTTB-z5j(icBM6Ic7q\*0j_yLDijڳKBw(rSr(>]dj ump [=םS% -NEI%]-M.ٞ&.yUoN#En1+>rE!$¶( q5R28b]ͽ";2ylW:3Y$V,VWloM*-X ڹg6nsQi=?ZM?2ޥ9I?J>k5MA+gG#:|ӗOS=?UV&55t{CduvO&+OҎ:8hkuk?3^A)4ۭ.(E@g`7;'=y=qR3᳚:2ӭ>6QMyWS<c#F'/4e:ܮҖuAC7"MtŹʞv'r}{-T2XꘊǕg>xA&mbi[*U~l яA]q.ֵUٳfԔRosǨ9F߼@ $ ,41QuhUF>Nm\\ռy|pSźlmC4!_%[ >av㦦W-og5wd֛]P_x[E\k7zEm2P D xGJ}(it2b墾~ڻHeO^M/-FkPFFG9(֓{|<ʟ5e~ϢO? +U.fUbfgT\Qܟ͝)').שM'\6״Gkn!S3Yrjs_x]:t#*UUJM|-}|T**nRv];okxúmn!h_sPuY̶}:_Z-K[K^E}vòim-;&˾ݷĄTHk^"4}|'🀬u(4xVOa2#@*6qQs}`JZ]:+cxu[~U'1gХ9FQ[|Vj}v3߼)v8lCw,>[· P\։.I[UӧZJQ}>QZMd6lel%NW߁^?c5|jp)(s+=`}7PjBѻuPJD=^]|o:hQi埘yL1X\?sOcG-H4鹉݀#a^F#N2pc#JX~/[E3x[9ݕ<'ӏ5p2uS}C)?fӯkxo_fG+('~ ueg8өQJIx U#c627sۥcZ߯FhJھݎ'Ğeo8~Uxekю*KdZa_ Mwq4M:uF=Pc 3秽vaRXM`kKQsSuYߥe}M+9e$,r.J%\u?^mVvQ^Vx."o}+XtCZ(}}7=j8gFODןb?hvcFR8)8eKCJX[zlݎ>ZiN1Qh:+L%_-[fN)*XnW~]Z, t庙~os]2*x1~ڢTWxfΥ|U:N}+<47Izu;-*{lt+2{~+˭Ox(UUv芶I4/Hŋ`Al08):sk-:~Lg=/O/_-DMQso.>&nXI9$1_'bm.ZYZ)57l d&-q׾q_R2uhb%/S=V4Ԛvkۻo"dl@8ퟧ\RJʛjtK4}{yv:Iѣ 8Zkj6q{srVihqT;8{7K.9C|#?y?#y|刔뾽} *ݷӡZK}>tVK`(Ul.3vWS߱ZXR撵,gov"n$c^<w8_RLTckR+3q_:D&QkkbM`6ɴR.c%F?dV&VRfUأpdsY}%>$_uLR f|F}q|ߧOR&z/ٯV]EwKF+ ~5GۿSSJ.*M/&D\]Y muԂ֮ _#9F[t(/6cvdQ_xʤ9/ft}\)w3.5QoiU̞J@8ahcRQz7L+b(w8oLzCζ+ORvo(QRU$bS4׾ʥaAEdȮʘ<gOU*8jײŸk7??</s~c l{8$t1UJ𾶺m|/X< *&{DizZ㯆Eլ&qn. K"U%Q%1e[E($z;uV<F$w\Ip  zc R)Jϣytdo]^5}SR6hI#Όo;N22;`|f%gڞn"%kS{w^W:>.Yh.{3Ie/VVg;[qǯzʶeN_ua^゙g?ߋ *=nhUoC e֕;φ*cϲC0nqJ,+qt&[ѭaKUhk+o "*way[xS#ܿFG*Խ_I_/_ H`[!`cwFG]tCt+F~(|i]KLII27D3%o,5\=*_O3SEݿ#˩ P1+¨qc+ʩ.ﯞ*c'*m&c]r.GڗQ/?n*F+H.=m9:wtCexsoknIjF1'iV$iV\=OkJM-J K^?G[,hߴƂqm6啺&D,~b`x'= &_)s;OM?2SMmzۿ~> /s\kvBC4sH"(Lx95t:Os2֣QBn+][MZߩ,†/ i9lu0rq}:ym<]d4qMқno]ާ]|Ipu+:*ten0BqϦ>h({l%*2^+[BiZk zgrABQJhkqDlkhv>)[dZ׷^S!ې2w1d8J讯xURUE{?XWB&b5aF?|k{i۷WE8+Kh t/,^n*hTUi%Օn,irجeMNrF`|Uylpy+oyQK>}Y6R'ǣVԪk~G^uii>\nl_{zcixztc3ieipZ c++Ծژjm`xTa|cw+'##zh([k׹P`[ٷٞS}\ +f^(ٺV*Fru7t7e[v m:tIaS)+= %7GZ={RL-foAmZ&I-a6ݖW*ɦסJWM*C7d`ǁ u]1;fEQ;/w:Ƌ: fݨ"G${vZJ'v:4VRm/{^Yui`Uh7[Ͻ$Qi#Pew`8'kB ||PN맯MKOwƻA''w![:ͣhE/"Yn ʯX[ן^ވ#+.劥KFrK/ ]Fl~ܳl @ gFhҒrw+[<;u7ʇsrN@ҩZ2{ _o#rK6B1l{6p%oxKңbFK}-k'͋]yh:_Gs.NCQx 4˺ӿ1h.C3 NS9 1ӝ^*/ߩM*I  7g;=}YQ%YJr6[j[Yefj sO|\;A|\#&7p!INJz~x8o٣^²;bӬfft 1Vj9J/t#,B 2\(#'1ΩF5%-z6ʋ~nTT֧N][Ԋ%s~6WxETgNRWKRx[",od.8i8aZQ1ƊUr_CoK/ojZ#rآ4%,sn "׌^kJ'NPq*^RЪƌq8~ṇRqVxnmv+, ڱff0S~&wN"1ߓW͆ eHmǶH+hF~y[t+^H6%ofN^+u)mնfa I2rJ^UVتR'QOCE뭷0o{yçi3LGc03t(ӔۡҊn2/&VHm( R8R821_Xޚ_?r֠$GnTc梾Ùs^Ϻey!X8 [9<k?\ c*Νݕز$۷mÞNOS|JN W܂amQ=UMǙ*~W}۾ nctRX.,OؚQ#jtWIv[4gL7\TSta|n\"h29>M*q9Jڷ^CHbX#'#JKXsQ{$Gm^_j*N Y# c'ׂ}}+ t_7ı4JNs cAM~ҧo?|^*-I$gÏimn$`RI7=騺>`6V蛕z+.nYrҽkKhBjd]كS ʷyS0X,Bd}&,J8??*)TG-{yp%ݖ3?\QR)kt}g(]f3q"+-r{J3Sӹ!mc34ozR9z%đ̋npr3^>Q"f +Nepq}+.tdMJ_{otJ.!m7q]OSNZRݮO+ŒY+aUrrkϦQTHv˷GU>jzUI9+X  $~fy$j񓞿S](˙Jz܇pce-ֻ!)Xu\bic͎OZqu"J}>GrFHw){fZiKZW7,uݖ _q͆{43&2FZ34߾+70ONOK迫du[7:ݓ3*}9o{>狨25{vDq1( "6;w=FLߵiV5݌{}?rFQmES4쭶b n4dgQ1}~]Qi{K#f?AjԩJ#KWZ؎YJ0oc>ץwƤHӼVEY ?{c8\N;]41*Z5̥srUmfl|1nr=xbOmN6#> xɮ2z}~lB﫹MuG%NWCs[F[KR,ywFJZL9j~*ҵ?ԨK6Ϸ^JPf4hGb jpzJ7N:1QJi婏yJ1]wǧJN̩M~n3 og58b'%tcMvViR6bzOz>-j^*!%|ۺ}񌕙RTVwUeXp8/+jgi86P5WV?Cv?0N7-FW=XBcO[[{#2H|NIRg/,&*84>Vi]t^2?^F[_ZԬ퉋dH*o+ UFF@<{#e),ߵ}n=;:9ˬynik~|cjЉ5R+*28ҝJuoug7Rf^ݏ<xZ$6> Kqj!R!Ջc9ܹ~W *jC[CU(SKFKnzm1^k4D1dX;Js ]ԱUkrI-?[XeRʹ%m:-:[^H~ia <ݮ pT9 ;&oMY2*m}Uマv|_O{{V3K23Z䒁'*w!e|EIO^̺.kaMڣ{vkK.d+t])iXI,챑o(x#,g8zXЋK^v# 3jI{I_mk▌5?Yk[52 9>6xG ~,xY>ӥW87to$W*_]毿o?bYOU׫z6+j:Lc䕙e`;H8=0If|J(^)6j(S=p9zį7F9t gךt)}X5q+nUw$~{sBn*I4{O[iGi%휡hn&c#*Rz]LD\v%5֖uHV%bvS##zW+K_0NRR鹱c".S[zqeiGVϷ~+TJ1q۽[:V*>i. 
S)eCHiO͜g m>J9]Z՗4g~X/6q{˫oM?pX?$\ދbw ´XX#ۼ^mS)+֧U-]=[j~՗u>>+oBXmǎ1q9^/TJn{T-:*Sh&[xY;9^ KXE?K8z4[ῊuMq&Bjp6,bymhoyuq4R5֗φw8ZܗMNcP丒 5uVfT}NGiEE.WcGMئ3pPK 8ΛyXsGCkƾĩjTÒp2y=~#̭%tw+XwJR[Դxc( J\p|vEsC} =`+~ mq_Ur# ]Fc{5xY,lF&w߃>L,gRmY?i p%v+Odc>߁}[\t}Yn}V>zgkq.gzcيݻmn$':rړ]{-5#JXe6{6tvHѳ"mpH=}ڳ˖Snii'n]ͫO# sIcVIa\T:7_Zg\dO6H'-8^n"#e5a˖.߾x[lUG#sr{0Mx0iSjR||L-/Z"<wnep#_^Vǖju#MWJK%, ^E+~c$qPcQqzFٲ%ܐʤ@`søSZ60R[fr䐪  q\v5VeS=RZ]Ia!kv&O9=޹FR},vG٪>,\2< W,*szTb&Ei]ctjkmPs1*ʮ+mln늉Ѭ7]Q# l|bL,2Ig,+֭OIont9W1o9ŕI> |ҲZB*A4VFP\E< +gq޺ ':gF\S4[jّ`Z$ uy8I+7]F5&{+3I=mFq)J4- kqG$2F0N?qXiǧN֡724rz5SWeRTAZG{}1'jyfqo){oĊm mY<1"4G f-#8buVjE(RdNU`q֜OHs 䚂k͵o7\lF=3rc2z.1EVX2rF<~Tٴ(صfj*id3 F'Jgʝ?]VÓc]FFTRp͸)Fϡ;9Qi۱M>Cko-u8ô`#׌c&HjCQʵݶ?a}[ZUqN\h^{co ڦAvz4n)D'w#*خet{Kzt= 1r9'ʞoMw n&T6fu&P1R+|61a4nu~CTJ0Oe+7goߩYWOmW7*Ҏr=A$W= SMlWݿɲ_,wצ /1h*!ʫn$mF[b)FOOgӍ9&y顿khlyK4r['=Ga^MHG]#\ncr}2Oy9JGy:,wmgFR^Mܛ#9=c)I^EE[+45 FN]YG=O_J^{[]_"-ZUS6*:UITg)ad{_'2+~ c6G>1qM;[>~\gu>O]x[.#DiQy 7c F'_^c=@4;Fe-kՎzT!^+x713AŤ-z1=ȥ*>Q:!FTr{Tӯ|sDK&FzG9UR.ɔyӸe<@>Z:qӞ{1q9I=z| ,fjH'%3WIǗ0RwЦʓP]Xӵv%,2츕r۾ev'ִuNne9ZXܬĨee1n|޺>[ԉY"Ud*xH9Ztl!hђ>lYhۓ-JvX{ݼ69#^dQdY.p{VZZ6▽H%1q"&:9ynu E+n`Ll_Ϊ)I4iF-NdN X㗝nޟAS%ԘF"WeG2'Y$ ɿv߈ESRGvd{W4vu5?J2潎0WW]M3'̬nH j+V)KVISn68UZbF3pMn DV^~zȭ)=*m'jb5e>9ZoQQm6Oc .S=eP5O'\M"Ut=zO*n]"DR)?j ǻ`GWE9sI1ͫVbpSr)`V5Fcwsr K9iGV?9kiX蒏-޶m}u (+9WLf+E)TmkͩojYWhPp}u)oj/v ԐM,w+A|pp9<{qZ{eO(odrd=xǿSD{\O̫s<HUW"Fo? QN:)#ymp՟y 9h;)J2 {g7yܼ? KCx}VP3>avgݸ11M-2*u#f˯I%wvX֭S̹]q뷡Myy_]AF1g5Y:(ӊRO&[٥q2N+ݨ_ R_FMnF^G9gEIFh)b#Y^?q]u&MWi򲑌tg"▟שR[jrjdm{zVd9]ݺGȊ2Cקi)oT9Q{ky/^ .7}1r\ֹR3dY<'׏^=+133 Ur{ךfEF9;v?Ne=_ʝzriwwQJ:I,Im3$x3q{lmOF⹛Nݭ.D{`Y8$ sè^v8Ti5ӱu,<2HDyA>ߟE8ZoC.EKE:|v*rb*FRRWFť̕GGO7Щ)ԊWКZOLOI1RV~$VU󴲍cX*rNZ]ˢN l'fdc8s{Oz /"iѕ:-tI'`{vK4rpJqJJ+N.ϯ@צrױLZf_Ilzum>8QRNUGIO=1,{Ǚv1צ2+׹/udvϑ )%O}z`5*sS__yYS 3*ě~UNR).sZN\Fu C"ǘϒ(\δKCcxV,%+OWL~{:"ՕL.mٽG0y説CON:t*Ѷpl"GoWs'yF,CKOޖhlf7W#z떥J3_KknY:6^GU ߧzI9Ya/zʚ]K0$c"͸/OLqڲ4(\Kۚ)*۹=k[ԭ*UneSVl.UP; |TrӔyeӼ۽ű˱AxTuhso>;Ka%m9R=WdޫfAs,DJ$HbywMKR hΫ|dk4T|EJ֋DgMm$py-Fel|98+T\tԢS^JDjM\<ƼѼNoCUa `W#8O\FVw(h+}}#!m+<Zȇ_3 yڲT3WSհ췵Qu@L(fb03c5u>[3g<\ch>%̉&IWn 8j ]ʕ{yoY_<}*aZU8u4\to]4H彑m˂BI-v9i0m%טO=rmI ?1jB*<҇]ǽݼ`F\g?RM#J:v1k,?]QC)TNc4wؖK*-Q a|y9aSY$AaM?vs^xaTtn,}fi' 9k,y(b?p ##ƺѣN.?_־6M#mjwKo媡l^cDeiXeWoW3Haᑀ$qz攪YصR~Lo ,IJlXN6z1_S5+N/+$h#Ja`3( N 94dȞ(y9;{:t;#2oUG1Һ!n7SŻY`gx\K7 |@yⶌ\%==Sr}khխͣR9ZQXcʼ?JSZV]PKy[dQnާ?O?Wyᵊ PX zVuW=(Ui{hr;aVgf (xc.G+F5)ԧU3S([9'3|{hӻܚ5! 
CTOs: {̎q =ϯ8RQwjK[Utk^0Xp9XTWsͩ#f"GlQ/=W,s6֗:se܊;V|@F͟H0{ڹkrٜ֕eZ },śsnl=Ec,EHJRtTST˽(HUlOa{W-j߽9ycNTבu1ʸ}AKE79b#IIRz:Zr)8ƪ/~G/3dxY`|>NjM$PJTF.HƜ̔ûfaH UE;UiQZgHq G;6#U8;͈tBi>JR<U[/"+Et᳃NZIJ&G>= n~!nf=Zs{H91Җ#ZF"yG2R[Ч0nU+xyo3ԗ%8+Aa =H>ei/Y%ZiI:+GΜ|\[l?WG?*:ڽ؋nڃљѭZ7N9QI$T`oMX=פcEOwQ鱥8J?qG摷vJ4emx붟hUoN>դ+B'>*:-;=|NM"OZXY#0 MMhJْg7j?%ҧ1(c#3֗4+'*8ifh[io9BvsI>=+֕S)TmYv^DK˟%(^־M,,_<:$fI#R۷| u+ݝoNI_oBƛk2GouqFU<W*nxehw6mˆa2sF2dA?:[ѩ=] OFONw[ Ӭ!>uOa]q~'~~ACk4m䷉1) 3V~wwOk'V.m_^4l/^l+1hCjNk'Ӥ$.$I3i^sǿ_7ln*IJ+綾&RqõٚLGG''NܚZTm~~Er*Z=f-=cG,T$dq0;ƺ#YJ]%yjFT"3t˃XVa6oGmoRկЪu)[7efkirMkn033MFǜFNǵuի%h]Z*vZZWym^vn܏ZSYEI?"Vow[ȢY:#:$zOA;Vz04E*4i%mk_a6sŤyn{;{ץVS skZ6r :D"Uk,©HL3cOsӍMsiu, %e`#߿ԩO߲kRV# u: `6v6G5ܐx ~έNy4ݵ}H{njߦeY*ͷc܌s\2ZۥbW_ZM3`~ܛ~Qդ^8q%r;E+=J «3Be'nN:*'&z\ƽz/kJkwwL6$W8Ӕ©5W^9{4 Mu5OVF(Ժ.ԭ'd8E|*([ʑyFu[YXa&~\0(kԏ/MWWK2Cl3O3ttVJic8zU(jY_q,!FǷ GPqun?YXiR1['=vWO\lKdfh]pˎjgӱ {eXY$Ͳ͞`' Ο.9sVhƾ`G,vӵ:Vrsxv8ٚ604q{JYv%pI#mKTt6ltfNrn9Ow7Jo9eRJf׊iZ*p䋵.E{dKI ^ݻJ#ZQͩ]" ~V,Kz>&n::kDEkE nSQ{kəGJ.[8 YkdZ"TɣRr7jZ>oEy?(МU:'Mn_y# kVX3d{p{v>ʴ[:p]9{Jnkv[hܱK.0G=0sz٩*bϮs'ewჭjxr4rC:pyT#[ݦuO0Em8Wz>,]6kWXR?Ӗ8hkK4# qS!zwc/߲MZFdF̒msI{mK#yTmFiǿLO€ r2\PM]]W:I/3~$0ME7M-!ܭr gZ.9ץJJ]uW_y x9<7:S~g*'`A#G'R+[٦]L.JSUdZ.ѵ[h7d w8(*T'GI[꭯]nq'RmKKu-u؊q=g+3Xr8lnp@ʳRu>Tr|0\VҾf*cpG\xqۙ>Uic=Rسf)㷇MI?ϵm#w+")Fu%fs0jZ{zMcRG'^aqKSڔhS$_C>C.-|,2>sN*Q1Qԓiv:kcif{4О̪)r㌍I-.ݶ*dbf7;sQb˲=jx4^8h,RɿzFu%FkΪ8Ҧ'=~}Z1$,[ l{zvSnmVѥ,SOu_h}nu{D߼U] V~6Ѧ21FSn1zQ*[O|2TۦCnS LU!u3]ocW;a -˵sڪae7=|՝u~rkP}Y;SːČv~UOc˫F8ZsXVOQTVS4vA+7:K.3GI]_KV5.q]8g5z ˇiH6.ٍTusStmΊ.YRV״hȆȄf<ʦ*V.e9[ +k FPwtc^LUSJ+fCc#ai-B U+K6j|]hzs_\c9T=d}l9UJ+f.2 J[NSY~Njo^e$L Jxu8;q֊$+r^#$hCBۣocy8s_{ޖ-4bk&I'xՕ8Bk.1sOhϟLu.Ԭ/mcĦVTYAr@<Ӓ?7xݏ|W rmog2.|Q 66#6mNCj*^~--ެ&[u2N!hܫg~o.h\P>lq^ ^+Qi9凭FӬU{ϋk^z"*OcPxfeRR}Ժ~|#jKnx {ؾqhT*< )TeNgoi-𡡒;6h7`sҥVEuUޅV*-eu"k2I[Cy#q}~*u= s>bzߙ*h!?̾#h Vr6I=jIӛ{uV;O~?VywdaTs9CVxjr*u1n.]l|j x%HN-KR=nh _M. IcŻr${0${W27^lpן-OFssh~*m~- sE g r#O95.Tc9S&xN:Rqr]yنBޞzjW>m~hl\-3F2{52[wGjN\==LK;hi13~aHz0s N41 QL=Ju+{ir>NO$ÉRqն;v>/jp8WJ)hcܩ5%ծo}:7-nbb4)8'5c(/*٭nZ578IZ/S [Zk,[_L5{!Uqa1O|I+o{}V*TTy';H|UZH#f%DΥԆFں=o~-]V&8yESwG@cm!ym;v<90qNrQv>WRJ4ӌwuv@ԯխttfT2:F{K/Z4ܥ}zAaM¬7m_uVfE+23I<ק+}== 3gMF0Ynkkv-̷7նq?|2g-m֧8IӧfQ9>==krW9avf,6JiTս/S陲ȳnOU>['O(u4l/2~_yG$U ڌv6IO,;zqF\S:Ms8iӒs׭FjsxA,,1,r2<{Vwʑ+u=^ȏmѶ['?*,njpєn5]F=nm*V_̨*;Bמ*q*1=ӷZZn3_UVxѤ[my32ciVTU<5H'F#i#8^ٯSRTdKUR~wM]i#5kcm5v! zuCZѧ'oum7S 7=7Rm(zݯ=$W-m_.8G9yB_|4ah]N,`5,hR^HeZTK"IramwnR+Cep!㱯8Y#Z=\vXKkCMl3h49\󯼣&<)8V&Ӵk{i Ed-l9WTT+TޫMoZ<8rA ;xϹD'35%II em8\zj?L 1l Jw~XC/QSZ_ց^nZ*y-ʯ&2O9G:(ƃFo4TU~~³NyMTQf?ِbBjjKs= 2a jKyq?YImN1^O|^ETw>V "_R׉>$ǡ1۷šʣr8kTc؈8-eQ"Z4s[O yefYQNВ?6z' ƒ]eTz[nȑ>(r7Da]gGiDJhZxI57P=q^jyuE^Q=>Z:paA_4;4M/NGm>v41 {wKߑO = AOHXMnǧ乪-5gՔGﯿ,IF diF@&VN sޥQ-Ϣ8fw6u_=Ƿ\w;hJ0J)ro_2I> -[sPZؒ-Amc՜icJ5js^/k#|I >ӓR,[KՑ4ƭ(*쿼$~vrEqRB.t zZF'!@#$պrTFlҵg\A;8݁`t)KbV+T: .[51t]t^f4NoR5ʒy ^t^#~Ugk,D(},wtˋ[.b>[\0ev0;*jAoKQ.-6߂xz }zXZ6/|fv0O|cWQFs~hsV3|_a5]*^5ob0szTpUܒk}-8(uվuۇCj/2FR%ɬN5Ԗ@ ?ZS=|^NmݢA:ZMۂqTVQV7F7f_3ۜk |K_˒CoȌrG2lyj?<3:s-!B|ܐA{tESԅ:qCḻ!Tc^k1Mg\fdZÚ6W1j̻pdy $˧j5/³g()Ccjg$~%G%4v>9Vq%koٔ[f<ͅU@ siZoqqoVVץ]!>Zl3~ JN|.*}cYF >jHvŻ8\zׁ'?i83OnsQ2eS4\-Sر Zmҿ"G< ӓMz#\;1TX]'#~JIY~et.x_p $1$f%\un.[Ub+YD~QAs){JQu,yo:܈c`}.In@[2+AjJ0Cb)JE_jzs^8;USE,pv>N2z}+6ʴԧou.@r3.ߑϚ⩽/).fКc1nhps۟tSwN|RpIQH嗝}F189jo,O=3ۚF6Ffbe-HV>.mQ\9{!o$o[8ws^CYTR܋h-T %sk啺3XϗIkrw29 u9U|#N{WDvcNU#&;HՄ!6^N?){HNSn }C-OԒV3Rv"F3y鶃cVRkԟFIEYs܁K)F܋I4~tR6 Ӛ|Ƶ)J܍f7J. 
c5uPK~;Q*Ԛʧ$Aw摨[+H8RQ=-԰LĞf猞خYJ?9諦9pX>nk hZ6,jr6r~oAڶ_jO>Q&@EedXXXߗZe¼lg#UOUVwQ"+m6q Hv>5qM|̽3gAyI&77:uS-^>T(jh(e˹3}2;֔)!5Qw2nohHYEZ& ʴHm8.+\A2ʭ<瞹 |j.Uk;iS;[fIrd #xFV]Z7JK%e̗% 1ڪ<~y\q߈=']YelsY$rx Oecȩj4Fe+Iţ8לVgARu+n䍗,8lsҕXjϠsB)=/Twi+,9X1U\ҴW]}J;w|юz9# SmV)ŶջC;y/ٔ`rs@\ڱseu(e`KvfOܬ.T$cRwrKq!U*tjw;{nT~Dp#)OU};d[;O)EȺNrĜ9#=+aZc>X ~K-d$J[]ͻ#[iF1g%|4ߡuX6һ`"ck^^ᬱg?f6XH6 Į,`b/VgW ue-z4O5ݭ& Nt#FŽecq6:x J4\VU>ݔgR/?7o}&mFڗCqv2l߼K O%(|ĽvOo(Uۑ梜RC[km}3MM@]Zt\I;er-*9n4PQ7%{vq~SsLU*n{K_^8c>]woIyNW 2NGGnۖ+kkoM6e-eJ|ޚ2`t?omnHc0#lnv $9 - uZ.zY]uO6*ֲ\kv]{F+˅ڦ4+eLqz漌v)VO^myY)Σ{k|XWx~-C]eRoiʍ$mڄ6 c$Zei¥ףw^OmNlaR*\Ue{]i򥾧187pYrA/6%xiZoܕ7(Xj;wf=Ox+xPᮕL){!Sb%6|3<[y-3krL| 7+2pTiPb+[ '&]I8ѫ=R{vw٣KRL/?ZG"=`C&GE$6ޟۈ[P-fVm5e>6-ZUT1 užeev^U}e/Fuo,;wMĠ+nQ8_)V愒wM-xw}ֵ>U =pQbFUZPӹ,E__Ծ8WŚ/ ߫beU9N\2YmLD=y|z}ټdp0y ^85`Cy'(l֚5:VM+Ŷʝ 'G>溣qӷ&NV; jze}{V ~JNQw}6Q咔M6<;Ӽ;yE"GqqyGS\iʷ_4NH*'gj44tΥj6qmt1u>,>\R"Z*)ԧRt|#owꖑQF<~I=AӡRf_OL:k[yi_\Zۻy嚟arx~K/o1(_vwjN4c^'s|<|dhn眳=1{MFϯC$W GdIpxwkXv_s3–۰0#b69XͣW C)ò'$ds?Zn&ڤz&Ekkk{.m Ay׭c*|ƣ'.ejwYnpwm?r4gga1xی7N8ӎ˛tdPRlgnܸDe+IHZM}NtWR|0Isl<כRQtG+aސ{Kwçy7#0ge跗rN=:G*v#Z;5sCJ0C4k vT`Jr0;k9JN[NZj}_E*kaFъ[~x#",s9.lϧ#JEZqx/cRlmbd@ŒFG(rd<;o$3^=n>SOpʟmHS\٠_3̐b뱰\~eI_2>gvi除syvx㯥ab7ߡ uGxF>}OҸZ1b:䧷\a0 %]kR1úZ붥"Icؿ3Vz:娥#VkR8*OBCG"7QO=W.Ƶjʚb̳lXby0oUr>i]!?i(RVI$To,a[{)IۡkSqMq8fV]iO 4%({49^)7qsӟrkOJʹou4N^p{ǘ6zn? 4Fd(>Ϫ}Ă9^0W+1kҎ*{+*In[;/ZF4`:zXbNU䕷zQ(ɤSsC`0my5U\8~N9I؝&pJnk)9攋VF ',ڬ6I5,T`TF[-H!\` xe:r*>V ;`_cɷOsu/|.GNY^"~Ϧ:#]OImj_祴7º4kOfHcill9ckJJRoݺM+צtpu+RwognNO{Ǿ𥜾9H2|W(=dYurmwCN+pR}}޻^|Sӥ-m`>߉3Z<>; )FQ^*z kJ59*/q\RR\jy΍48_`8: iZөRRz;DN\2!z3 O'99 u  Zm}QT-T,E$"L27~n'Va)_׃FP.W2οmkFL2C3,QJZVziR,(kp_'٦)q kadNsJ*L!uolRLs 3`9+L1qcu=?Oifbf >>u;^z?lվgHDzX0yb)ʴov~Fr,v䢟+cֿsŶؼ&D'6˸qϡ<<٢58gVTo ^O,%n,m݁#Bѫ)cW8 N*cV~ͶW=~_{T$ ,)+Gc?CUvO=GS݉RHbWX4ܠ~ncc[¤^Μ ?̎ݒ2cvwm%T'VRc]Hm`h5n]~*is{Oԏ*ON؉cɕxʦH|'tJtt/gCYTXYe,x5ihS| uf*Iae)cL+IVTR9J)CDŽ+FE3Oo|UZ<am$zq*wyΫkZwog̳1}y8xhz1k.ޤ$]]_vzzs]F݋^>ϗu/%Үg Nyz؉S3NZ/gوXFvX|dN{J&Veyh=y{єes2oacΑq s#$0sɯK"1~V?sZBg﹨߿ȶLv3/+9s vbJϘ!A~<хBI3aO^qߟV%4b!Xd3$Waub0+F*n*;-Ye$Ooz|vUK^I%MGUvp$ %S1Sz2UрBuO)4ޛ\fYv6Փ'k\Fsb=wӣ+,;E*~Nl YJ3#Ư:O bu>vxk\Z m[E̪'ǩLVI&G`嘵J+Gsc._u \i({Ao;yPGCR\#PmS:n^+W3Թ/ݑH%*NC`OZjrdB5)9^I&3\)V.R[lC"ܰ\,(Y| 8# >T][S1VD TӜ :B U\h:kȰFJOCyJQZmݷA냻*5]Lc'T9&xLWF8Lɴ0ֵ抎Q{;MJНeCmU,ͻb;~4>:gV+vK+ʀ,q.8߂x8'EusSs8R$!YKw(nw#5çZ"L/!̛W-Aa FZr#oLu[_O~uQ-ȣ"ڽuJ̳ IPy\_=nWvk*[]͞&lmB;ĪF)˩R%}3|qmh_N?8Vvv&U)$~nآ;|:bV-sF=R;VT !TƎ:kݳo)K= |̹fD(^ǯ8K7N59d&S 4ֳ*"wn=YZ''vJQuܾE}82DѺPB<ӿEFUk:*y:YȈ7+ pFFmV8)'2R^ N܃rY^3my]( u=pycR[Mkeї=7;#Wxzo/#9{yU9K۬Dbj L Fr:W\_1Wzic+Uh18~}hΤknLkJ.ю R-pFUR]Е:vcY:37Iξ[e0y_o\+|[RqPѭ ff1UcA|uN*Jڝrbٓ[ٵ2B;UR\5kF.)s>9ˈd}?ͅ=rzv{kH~:޷d#mec;p+x{YIk[o"OmQvG%hZPܝs3MAWZazKKM;%!I('SZ#N]_CF.o*BmFt^ԮGmJu:{Zcs|`mלWwNYE/Nu.U[s=9tv{jiߡ-]f-{:]9%V/;=$dB_?{fjJmv;MY;qMy51笒gU9 ߟb+۹(5_Usv'޽R-lX%^nDjiW{%>ZB!R#@Ln9De*'kN4m LĿ9|7L{W޷.–iT~,}&EJ[GriE䢬jrhV}922$b|38UJ%*ҳZ.udž[ӲVe㑓剌 u){oǺѮ/ͼ~LreK\/NzN=jpSq\ԴHfo2[~/jS}~q:9%vs1h ~lG9#NU&Z-e*v]Jey~wňu6pI^]K+9ul~K+K/g J(ђ>sG':u**j;uS(۷ZrJRю4c/y2H&i)%rGcZᦕvT&TgsGVm&ݯ$iw{=~mGqWO Z4NXT՟sr}^USI4ܖ;Xp3h8voF-*ynI򣌎3:r:k0)>D:Qӡa$&&!a1BN1 'ZhUԧ(4z˨B^fO=r;UХ/l^*C]i], N#ʷ%6ܻT.=s]5僅Yryv^CkxyzHco,rzø]bQNUUv셤3n7(98)uB:z=}{QΥMt To\mBYY*I99xc(ݿԭQ)}ꗱD ѫ \Nդ%SLD=.F絻K _>_rzW~~FU2\i\r}]4=[nH<.6C1X8IS6(I8&|mi,siUjI]ԩjAu/"@>RዩG7Rj+Jf#Fq::j^gRNnx=;2ͤjt(~ezFZYFu*[mokxE$ ')UR0sj5'{Ѭ"X~i8xخ\*ǖڦRQm;H`vJ5#cmnHUGm$X_U*= %RVo<؀"#n ~],Ƒ2 I2y*1mHY9b;QVb088I:lsԟ-FnĈ:`|U̴܉^Z{y|;^津/Rc>).VlaJn}Nz*r}L)ʛ\}xˏj8X֓G4-4!j(1b9g 
VuF~2[cMM $Osv:}N;Lk^6-UVKTf*E\IhU_a m>8cK32ANzoj/)GƜ/Ko["a6VXsHdzgHIY- RgSʒ&TYbS|=~UVvZYKF^hƪU`mG)%Eyl -:G.v|VRnk?{IN?B06T{L֨*`EE*q8y8~WTdv==guOS* m2p=Z9sEP4[+GlX.++`)928j(4gO2+\<٘3& x9ִT"}}4^m;-( n$sϷD+QDݿj"ҹR]!]"mhZΤ9tW:*BjU4KU4pvj.0=%ߩ4b޷3>4sE}##ֵiŨbj(Eimʺ6szt{ǧS9^M.~2X޻7(mꩇF)9+[RkGev 2d#J|5c.]fU[XVE]sSR-S[ U5}wpY<ԀO_ӕ:vL<ۛ9bt*}=xJ2TiJ.&O%m-?zЎS*4=%tgˡpʅ-V\+G~Mub| JL ҌJ攪{uJ\"bѥ[+2Ǘgm#vץJh:mMHnndHDˆ8c<֑V[ )7}j3țDg~Uؼp$ֺ#˷Nt}wڬ7* Hnڠ\|W2vgwN^בGPei˕no7,,X9$(sҲ\7rR}<QWG-Ȅ$8|\ UNx~ir}ˮ*MoBȲGNzrx=:yKtkvyimo#.NW\u\cE>:1$oubE FeMN8f7RJ1v-mmi!T ~2\<}_O3F܂{Ǎn .P,6G9LEJtimJW==֏I-&է0p䐖;{vj:a*Q /gv%DO?t9^d9k ~qv !HI p0:#[$t5Ǚ"ʋ/kӧ--kJK݅{cb'zT#-975$a6f*H#є%Nu%".VaYɮ2{9KGoǹ۵j-^uZI:: ]pW|n%FN#u5OHm\ΥJ'Ꞟ^9F+ 4ɵd,@ke)os:ܪZ72.Bq+6:ÏqXβ{j;kU3YƿhbݶF52p.vg)T|Z]'ء|m$3ix>HF>ѫ[myjF]kw1(guK'}~`rryQЃ3޼ETF8*0{H&s$7xdXHibNx{ۜחN\zKp:%)zWvS¨¸JZ5*=~JwKV坃O\F)r]iTQ(ԙ Z/<猔VU;sZ|bEelƿ,Zվ6:(qs4o(ݺL+sNg9G (nߩh$X0*\IVfnf))^ץҝ5*|zfbB {v;ʲ8bvW1ߧ_ZTU9өLjN{sr-5Vd `J*}50x|~+_25SdOJucRT2jXæaMFNUҽKKП|W揩S9\ڤpԍ9Eqh9ee 28U&;9.JS#\Hv噗,ø9;gr-)T.Y"0#^~m|!V<GqS-T|8e{4R1"6`v*iO{g1j4;Ś*dAIZSƟ*^Ǝ(-pF7.~\U)= *1*m5}w-v(3&f_FZѭ>eoU~QZ˔E҆+Xi%f:OhӲ&xr"F lqڝ*U%. D}9mH/.e_ӏtV*u/s:j* 5%;I#6r߿]T/gg**1m/RWVV;#) H7c;L#>^jU?}Z{gSRFwqQB}߯@8Qwps8=|prgO4K{U)!=Z @a~)$nTy1X )ؠ)fcw5QՎ9\wCg*U1סG2Rq֜~fj]xQcu㨯(]%w> } %њ<5* p2 pky5scykO]xO[B%AHl13c԰{EyT]+[,D\]]M"gǧi_38}l>ߺ#Uڨ&뜏lc)tWG'-̈fl:c'wZq;8Z+ٽTED-O%N^GS uODQ61EמޜVTk (jKc|jCo-3*zzWy,Ռ},omNL[u{5*?$f!q<*Ъ{:ňqQE\ΈJmt*#/I9Wm)3:#O#>;[Ztηyn)2ֽ=eh~xZnI,#CqYU`2_֧%kK}U:x IZiY[MkFh$2r}'thԜ'4q|uMwK`p=QQkQT['Կh<\덻^Jk=iFw^W,]kh]8CN׶AqA&X򥽫=G<\Gfsڢ1mozÚ}敊"25]M ]G-sZ# #9bhԩRmrxr;u)s~XD8'v+_iV25BOV^~$A{rqlE"MWx:c.t9i4% y\$I&hNؓIQKK)NRY+ߊlXa{9|.= }NJ?fR;[]oJ3c);s+NqFH$(~E{e%Zg.J򏰩-NyKofHi(m[ZdY F#gj1IZKzzzf~KW971I%żl9=r=JfқTw_"(cr?6Ӳ} kds΢Bљ|ĝ$\W%lN"Y(-=~F50)SvN<|(jr5*3~qhڌ9ErOKv"<ixJUUηv-~V] ),`k -J4əkF"Rpi-Smt(yXE@ʬKsm1klfu*֢mvwR8Un3G<3{ۇ=BBbEq@|iezy9Z뾝 | ܘxn.]sOl2t= nJ_;^0OsTP kY-x\rAH=zJ8{)aSi~/S_eZ/,p=Ad pQ"y&+W_޾g߳][[}< oqK :W0=Iy?*8NNmg/oA ޱkQ@9+FT~t2/g$v? 
~gzܒ閸fŖ*2zaYVRQFXIެ CU,!"!qMy5cΤom3JӤi^L>qtyƋ.tgk6|Lv~YV5lwr1'^cmc1&J2Ng8ӕR*{M1',?Qz{ku:x'ShT_Ɗ$OȥqHx[WFޝ?# R-E8uMr+q6%\n iNhWRסprktXO!4YZʖHTo@QSgNS䲾從,dSUJO]ϤM"VfU~ψ{g:5*9E]%xZ쎳tw/4qƙHG^g~чI/qeZ SM~Ehfw +o*aM#tT)_]~=lDkaյv6 ^mn~rn\\u0]zjV{hkzZi;C98zW5z>Zƭ {/~kk/ѵ &u+ ` ۘN{vt`}"/GP;_)Y_%3F#D)F yE}8QRW{yx|^ .ywc˭^[gӦў#Z|Yjy2ȽⳌQY:RKsJ.B~:m̹+FM hYzԍJmݻiӴlgګh^3ݷ2FѧF{.yVhUIӃ~z`4F1g4-QNRun@Uil7y#˽ﵴ1j9Mmv/>XU%X?_ZYIMmTjIr?v6϶kh/Ve*oeԹ gOzcgSFjJqӿO-M iVe9a H;D1Tc2kXhHٔG2=kb](8ꙭn<*1߾GJuHϖUp&O]:;ևȖXLV^AOvQۡBBlOR^z #5X>Vnݕ]͍F;;M1OY  y9gxKN\OOpk۳FcIFWFQ~}.օߞv2n-滏ι#~VJNu}.PR[i B\p}CDcRSzw=iTq[/wX1#,iJP>ixIfF{.Sr>:kՓN|Ļ=JU"U.#(o>tyz626+i7tX~}L\j|žG4tڔzd\H$`"S^U:h1"Rh HP~R69鎾ņʔۯ~x|]:T~Ͽ4#P}hXG"nNCqPJң,?s_G]4e֭QrNɽ鮗NfH'*2r>>Tkv~ EU `v,/ >r'/u rneB홑UINy=9VNJvZuJX(Nw4wV\9<3ienm\sh[^#^_WsoϪˊRLHG88#׭LpK^4EK=]f8 ͝O 3ֹE95sҌFɈ% BmXo#)FTuJHE>z۱~^-3?"B.P;9eGݩf6Y#S%toiKc5ZR5{"hEEgI rpqS.nmvce= IJ*c.]zvwȌWrS37PouCQO3<|`9#>Ÿ*8Vqe*SYo$YEϗ4 >Qzct*~-Ts{ndOu,4-$4(~=5tN}\-e}}L+4nEM?}9q[)i竈tM/vzZ$`#'#Η1z7^'Z(ևTqTF^D2Upfn2ݝ7̧7o+g֎Ch׭=*hWefwqx6*+-UXF2&<{Ӗ*)-z6e9Gqn,UUxNgnW$[U6@烏ЯfmN57݈c*ﻹs\cJ}4Y=z#2^m?):eѩV5]ӧLU3+_4ԫKcΐQ&H#Tq9XӁZ .gz6NYw'X*qU^%u?&pT9<NNvaQԞX$\4#3H#'[7G4#+JQ_OB$l=jSc}z49K`<(~q@}Gz((ԧMT~~n#q ۞*TM;zЗ0)'HHǖ`gqOӥZ:#(S2k]݌{*"_3/8Lkj|YyԌOEwKI-̙KJՉ'cA]g̍ГE y>3B6PJ)(72u( đ;+]i.iN9pϷ) k>cMoF%tsS*掗cPUYkjOu%dҍߦ7u\{Z^ͫJL姊Fwˆ)m#wj(_6q~wΛU[#<ԌwOV?e*S[Ѵ F-mt!Yejd9zdB[+uV̴ΪO^WgzW7-ogy +5gJs4eޏrVr Ak5)-:sIAm,1+Fݎ=){1oiUY$l8W(˕s;VhT #6jz(NGkO )TkNLXMdq[{v{j$K)wPQˑ;?4,ʓQ٣TBEz9?gFޙiG\})YqE]vr[SiQ*ٵN\1ێj=(U" Ya{ FɿnQei.w(7FwG[ f q\qv&)TdݹXّB |3?΅Q򶙤ьS)RKrC<^֜o'/6O3'+3t^akq֖!WK[ͫ.ZEiqc,x:WE?cg'{v8+ŹeԮm|>U-Jfx;%ߨ[nO<}~gGF\;*]G̫V(eggS5, ŬSzK8HR <ʎ"}N*I9=O=m6`ث4JcY[8#Z3NV$5m},P5 _Zs~e9:~`+鰸4h_kz4}=:N>Yz[x^Y"wdHs<.H dg⽈(բkmszN:|;\"1f䀌w'S哕{·ݴkflmC3 I׷J\Vqʤؚ io7(#^89jjGI$**ZKr)Rۛ{7L`nzpnj%-__V<*N#V~9%Qƛv۰R' 7sڦi5_$7oá-p3ͬTho-M"7+Gm'(''5zWQ[Xc&Um}}=Ju=05䳊4t@0p>ztsN{Q4ֆs833Ky۶Ʀ-Rc-&Ic 2\q><.1ђsldul TT掍nM/$ @$`@OZFOЁf@^8%s^͙T qIQ&ڭt>##+m hYܻ3׶8TԕN#^;$E!W$+eϚ<Ĥiyh~mۿw8K&Ԥ98T1g'}M9ʮ>D3)U-J\іƎ\^gtg$=+Ff/cx3ˎX]ϡ]ܸSZ3;Qfl/@d3U6αM I$̡F;s=?5̕;=ru ]\\06$9'1|]9SvQ Bc?_UM*Y ƫxAr{=˫Z88si~#܍ 1?<|# |MӍ)5sis=q-,sIwXSqRlt/-tp^Kz}g|_yl6#Mg;ădkzI_wΎ3/t#UV6r1 4 ZWm|ۺiiй㏄9IF]wj3 N  c uYV/Oc2MS_WO23x1)G ޹^ۚ/sJjocFCV{¶D08{>+T/zcmZM涷s[7$ÒYٯ8@$qڽѮؚtㅝO~nME׃Xd b$dg89dUQFR "'M-5htI#-L1)`pG`Xh4W #5{YsYxJh_!f6{=LEZxz7vPk11X<:^Kyk]1Gx⅝փcb$Db\ON:QV}tzm>-`jQJIm럳?GE yaf[jaP< q{|qYkO]t>+a^iLj^+MEzI R`yb[$i 2nvm>zF{zx_UԦ̗u7iyq F0cб Q_=SRnWnM|/k cd)Nǎ oQIau5宏|_RXasikk-wݧ;[io.=eJxj8sKEu=\#' $利,x"354I M-mcqCu?޷O*-Z4Vz\8Awo][;nn˲ 1^֪T~"Ue̩6YĒ~N#+:K^fJeʴ_n:@O3ly>}r2Q_ϊh߉"ʪG9y>57m[Xb#()5v+k].n>H䜀I?/sW4[jϗjT:멋s}kH|pƱ3=J*8M7ȯTC m*Цoc߱6󅪛_2ܐ/%%nvOl.ԞVjB|:YtcR!bm R}υsQլ6xaZy8ʧ%FVg|tojqVt~k,݁8##~߈L Z/KrT_V=W'>`0cruNnW+ai`o_t p,k$2qx>wz5[7e| j Ԧ嵑OI3GNݙ`qU+aOS;~`euX Wh)i">FDIpIwbxunr5 IbAw$ѳ=v-={u|Rl=Bˣ˟thePceY bO"14>oK7#eS:ʍ3Ipdڐ;i9 F*lE_$ב6Ny\|<^o jOfl@ R ~|l;Yf]4TQFS|m>ƻihoN4:}v#JѲxvUĵ~ گ^61AeX*/QקZ%'oSI+\OuÍ'sӞJr&P*NKu4M䉢?;:q\؏{k+4.yҝ {4W) +dJ2>$z~=:j5e:\$"Iڻx#zcS*-24Vֿ2Wn8){KIW[KnVnzIEÚJe]~`ƞcn*s.YOb>mYؚ͈YnلR]y#tӱ5Mj}K*\|;ҳ'im,Ѣz"X9G8qNQ%~ôǵV36ӺIWv}bHwF)Z1[io."K#w\y;s>Yro#͍8^yj#!|3ϱ>ҹ.m2Z[ꉖW(jF>RQt0'f)CDۤQ[ g>Î qԌjnU.K]Ysk3I 7̥ZF8재OaYSRQoTgGbՇc]̪YgnG#(-_֧ʡO,%p̘茡-w5卭f83d x'qJJ2Gq72K{!]iUlG #kOH^Μ6O*g.7ἶt]dsʜ̓0VDv=rws(^R5cͳZF`FfTqy܃k1vh;KtL1*JuDT8?/Q Fy# cv%fۏ1u?h˚`ںDqۥ 9#7/€2q(Ӕuo&RnFr$G)"4ng׮L8>M$6{wޙ'ыy@;c8<%zBRP5UYGEҺ88rGԫq\l{[e6s`x6q*B1IQBPٚ==ǧNjFZ[]:S[w-%aVG9t΅iTr% OVbZ?+oފtVHJ|ojA=H'#/799?aK*6' Ԋd0H|:~*q\ֶԦNQ*+>b@U~?{{# Si}嵤f`fpǞ9QRUbʢi>kʷV h$ zCϧQ\N,k;rXgiOҷcIa=*Og}<7Z-$L$ˍ~v:emZǖwz.޿mnAi_GؼJVU?g r~Q˓MYG^kkZI:-^c<2Ɏ=gzUkm6E̜cuȿa{`w4~g3622NyEtƝY/^FVJޥ5=YnY6]u>7ts]ɰ3+0 U{<8*jG 
H9y@`/NA=kXZ⤛קBY{q#6#R=֢-&ULu*TXuMF5S#v̅\9Iu*Wk|mt -V]Jgehw2H<Vڛv30定y=8{:*q512G%I4lC }qzq5֪ajTQ&sCʹo"|8dr*Vj{G엳FW$#xW 1+Od./S~^D]U2p=o}*\ҵ!Ck ěwSʸ]3$:;VSKL#?JΤ*[V~m2|d qץpFRؚ%(w:M;Q";fVkxuT[U[KXڎqp$p+"dN=gTlӱ'> ޻`@ƪנQR^⾥=Y/'#wdp&_#r[jOW dr zg'p3ZCB6jÞ5{S.}zhnD)܀_?"7B0w ctT-Q)Q_]̥uKoB,rI'ҵeMBOCتk$*=ͽVO5dy|c ; ^J2Mv1o1In.ͺEp:'ӕD۩uZ7._"7T̹TⲨԩRRJѷ7W,s,d1iþzϡ7g>gEJU(%2[2v$EɍyVuVeFJ-=S ]iyI907rF=}*SkBT_4VvyeB8۱F9^yԒMbi'v<">R-[K|cPG`|npJJ^Mj#**2Z{j.$`$,O c?`*j7'd,Ip#{60w>/J2ڍzEңi;yyzִkv92%w^4Q1[ʑW21qkeNU$k:lwՅ(IߙW,[z 2NU9Vȩ9aIuH m9@85jS(+;ϪUۄʏߊTƤC#bVНѳ.XW~Z/(FEF0gߡpOhYsHkzL{9̻Kp#+)Vdz˛/x=|;_O4+Go8;XsR=U2Ob'v{Ak Ui-||6WB#"qʏmFQF7[[vLgޢU%R)%2j9W#̋/+H~fm3ҳF/z-z_k(Eߥ BƾD֒Guۜ ߽qOSyxt,.H!yQQH8=n+Zu!Q,UjSV~E'dU+'O^@QOYjVV oy9IzsfV-;FsJu[ 3cV991չ9UJ)){JҜڷK-Bžm$)-dU)XҸRƲWm}ӔykYIh! cIm\eʎKcn5IYNeM{"vϩ3cFsk}NR\u5܊q%͹o2AF='ӌ'tεwzߖ"@z:d58b=TEfv6{-XE?*RN[=>cR8<67vVmsVswK%3Ys>'O*F.23(ԂU:m%˱cVn•:tҾ52J qe34.DS<hv]eNױ*VM; ,Zlb]øz=sY{vɮ⯆fȧWJbapaPq89N2N[GF+k%RaF7JWk3ln{=9]vIFia# xR]H59 r W茪ʶ$K"w$JO,=bkHQzkEG܎w*ȻX+'1[ӌcRM }"85] >0G'O_Oz*0HX#p:G$*{Zw"*qjvol`.88URZsBZ*b{JGBF)cX6fm5.\WԭTGq J#yu]NJ71v]n 0NthƟ.Q'F!y#6Ld;FV'铞Q_::UяdSI"} r1=Os$L.Wpflm"#^uw*tTT]U.1,'z\Ғ<U[iōp<[@E(E%}{(躮hVDdžNpϮ9+NX:rko %5ki>-q.C<.޵Q)\nmhbyeޡOJkJ: *Ҵ#hQuS^?Cjr9:ad,(@cj{_tF8D۽Yu{~UeˡZ^ʵZDnFXnrUNy{Щj_CRXSs*0 ӇݕS{Rv i~pUd\n#So7s(S\Ғw"k$@n埘'9}meNKnNed˶i?x\/?}+E&yru?rT}YEcߘ|?5^-J* JZ=][gp~/-+5O*^WV*5ǻڧjʜ*{TپH|Q߹qFF_.GV= VujV/mq,EOoһd7ZP.؛gow~?eJ6nzp*o뱜1L$j*U9(um9\ OZ֧)IIKFtTe.si&YbXr$=:CMn/l`O'.TsXI ŧ J:naRTZݯ$ZKm(gS)S6W*|Z_-q2m˱ʝ77rJi3mxns!c<$Gfm[hPeMm1r+XiN,hЂ>]SS*iϙtgSJ !S]5ܜ9?Ҽ굤SDcZi}.[zTTcZ kVjg 7,r3Jg+= 5әynETdo ԹkXU+smꍍ bPv3,?ȮZoN.Ue}6W1?-˳n8YƤOZp*wReTg7Gw'je$-LSj7cEZ@${q\*5472u#DC";umX?:sXNWКrtX%Vv2y#Sj؊1] VhyW̜l~ub-OU}4k[;P|z׻-~8ɩvEJT8>ߍ{4z4tӔc^c,hPAWNU>i| L#szthKe믡"gdQPBp8ʓpסF7UtƌyW*׳}[$+ f)?zKsyF~e޻[>^觯#pK=#,$P)qm)*q{\>X^]檨$7 #f"4;ݸ@NsB?8-LTDuu2.KV_6U3,7gOz 8uBһ2֯m/s.K NcxJrb?:G1Ycfglj[ikT/R(VдBO&N~ q>1U$EVyN6BDg۾U-VtSjziSgj}cҦ *TNSkՂ…R#]Ͷ4C,qG\--ZlMndSk!Ydٛw͌c==q7TP$~`rȓin?]iwO.Z)YI ƣK[vSd8W÷w5#AN {o/;vwH^zUNcѦG+onU>2G q ֱ.eRGnݼaqqK$r%ۜ[hʨ8#$x5TT +F4-,t1mUq$ɝ?wsZԓqIhyR)'sxge!Xg ~U57*t&t I,nm=:0ks>gʽVu&O%Wi exNzpwl۳MUd35FOrm̷X]s,rvaڲ*<ӧ)IOR3c`y\ڀ`1Fq+ZUZr\pC""RJ;0瞟5eNuZB$eOO v$DYyTssV-Z-% c!%'=;vYΤ*Z ElED򥡍ِ.݊*cFQb8v|3&ߔg#q9Gc[P,R8hﵺ #dSm= 8ϧV#ߡjA*iԛmnv2]G5m"w:aSm[ʳ4JUq$0rsR;GRV2;gO|s#w[ԩ*ގ(IZ,[*)8鹛`us]c匩V+j!`7OުZܚR*_K ۷[}cH{dVQ}J^m$.cB@YxkN4geq;6,4&3q"ݰOt[FBtiԔe']MluGKnUnsמZBNZ3L9hC zFGSXrg-mAJGF%[w/`5J<*QAZ'ˀJwdz45_~V.#yBCg2;_,͞cww}#O3;5͹RW>3fo]u9-KuF$p!$R9GnN`ESקșF|mdm=ѱ=Є)aq'#>k KC2{;[ױjnc-IGC1Y+Ǚ>*ԣ9KOMemjAq]Mp9)uͅԛ B>NM}_yKV}O{z-̭;R[cv&iUXە;3sԏ@kLN1'.F7n y C'wH[yO=2x~WǫQ59+vK~ɿ|Aa ֢ӤIENQuVOL>=Uu<tݾ~G Bo!',uVx<)]EhJV~R-4{fTڟ{XmAws֘yKũ>k3c;LhfLF7w#5X[6xثr=ն)HxfI$ WC'v>nkhכ#䯊_ai̖!O%J`x2?/ITQt{m;_5K_X¯ʪ ?JO|wQ1UuGs}ea>y:F΍L.|܍'1gBX=*u8RXiڪ|bM]ҭN2f5,{V65 8lZS66JR^>Gg5}' hLV)jfOxlMeR,o mnB6 j~S]I.DD~CZzV"m&4ޱޫڢ<.j\do5BѤ[\4F\ӑ\x7IhYmVW4VrqmQүS򾧯hFk(UhUvj34u쒵_DT7v7 v~NG==}3,:~vsi*2֕6]+Pi!Y#vq)6cxֽjxYeyk㱒6Fxqݬҳ9/8Ӽ?jc.~kqٕi)c|jrt})o|s0G"-<`g򘊊흸w:rpkGo#6٪`Va^|(=t Jմgkm´i/z[بSTO־"K൷$vm o\=6 ?mM+l~}#c4mk߶#6~OMlᅚ4nD[!OSFqXX|rj+eo,9)/;9o)CǷ |3go*e1~eDZ#`p-:|9޷ˈ˪S9&kӭ-b#xYڭd]ruF#;N}skt/}ȹFۚ/S5ӭfdVm<7 rO9#rQrLT[_rTrFZYv:^9ZF4~xH<>\qSPMJf7 dIQ_éi"!̮# ac'ȯ`pt*w8(ի:R=0τ$ ?/#| і88ccF_:^mtrW mk#/E x*d]s7n~Q<Rx6TՓWvv}g-T cq۲A+sP9힏٣=\3^ *{]w{X7).X?)<םQ*-/uKLM$7`wG$ߩԔӒjΞ',l/[YfT-9UV _q8ӏ2! 
1\KDm_|wle >O$uסGZu,Cy?޹i}wep7%Tn޻y2JKYYtxVVV`CZLms{gXB<345=N3FVPJq ڹ*S4ԋRl࿴'ƫq2BŗnѴ$тsߧYFUOQSd7×emvUV5-9|ߑ=|xeRnJ%ƥ5)ŧn7> [BOypᔓz#mJi/QrVC-漎i$sʒ` #ۏp1 mn{xƎ" FՏ>$x;yBB< 9~1z86_/ 趤쵿3|d ӭu/xX(oHdaN1Ǖ)ʳTbծO;psv%u~F^4ȷ"衉:j*Υ5nc-G[շ-ZyPگ-Ʊv 9`~H%JT8;K m\E5.{1^mJhyyS̪mOw7,EaQ|Wjf5g(y6$7s#8#9n|V_RVlzNH?hѸ] i{:nKKF*i]]>˵oYOյM.S\tfLhB?lF"붚Vc=>wk7dbzmכcf Fݼò6d~s;믚;_չTkw;/ BdSҤ4ei&S9UԨ_+zeJqw4_DqGHv3qcnI]omK MLiԔ<Ů]ڱ5clEոYv'@+i.k7W&U%)$U鹡q[M>i kMs\.JS{XY8yό.>ΐ )a8mnマMOTYΊt\ed~%6\-&Y y#vI$GU}22eʕu![G*ql^xATYRu1ЄZEeo3/s^>Jh=:/"[m&327a؏ǟ¾~C֕OoM8\[ܬl,[$*ӋɅJj{9{kWC@mmfGO½J8NU4i=|E6[mW*JQ[)S>g=9={ߕ;OI/c0+&3ddc8=*iԫWܛݗӛص̻Gb]o]gY68y#E}vv1Z*.D_s{]%đ];+y 7A'+Zs>n}>c?c {1fZuVt*zӹDcQ3u6ZJnɚ4n޽4:3zAlSs '~p1ߟ#R-56t!iI7лhm;lq~ZR0%)JS_"s$mfr,IIӕiQv^(BD ^g=JҤ:tJ.#n&;Nz0앬cR[wt#6;~n88{W,**0BVeb}qZ{7N/]^ݍcEauKK#B>Ts;U掔֯soNKz~dTD nfj^jie4{d2*Gh}Y2y ;Wvn^W5-V1xJeevHϙ)0O'0}=kQ:}|FP{}ͽ{\e6d=zk갴R=Jt)ܬ{;K#Vݻzq匢oo#ڧN8ɷ Y-ٖF|ʏNEYcXܨߵ sЕ䢵C4 hme1/#s"r8%휞 M?UVEIn5i9X$pǯڇt寕RznirIoƫ4nZgQH;]ENe8³[/ ch[Q//Iڤht/g#ҥ-5j={ඟy9RB$q̡1~Ҿw6YB;>tFܧSWR ,-Z;cܝwLc3\0RpP8.xo=6$r mBn?ӟe[4huFtF%r0n[C5jSW]CnyߛDžZ7wGrd$+d09'zYnN[uN弘jtӽck{JfMhm%ؑ=Fm +YuV]խӿ+Ja6յKvT~ēHX0*s~?yX\MXjTkVR1O1lt,y1!@:8fS}V}-;[74WO-*빀'i^|Xm]<k$mzttC@osn:]#u٘mͿq;ӡZ;ikY[sH[hTw=l.GY.D6AF.OTp]θpZ-gwosbU;O eFt-Z̹jJ$Y[rK,2+"ŀr{^\)˚GF9F/ײ*}V$IGnqU.xERY $ToE*;m'; zzqv=h˶$HFvG-z3}9z9Uv¯iuKs|r ,Ar8GqjF*W[X"Lb.?td[(Ɵ,g)t݉!Xg;J=S([jh4Ku]1͒Y nS)J0Guk "|QYJqZni2yO7ĀH}:zqyTثYn UT&Pn#|S8E$qv p2`=*"R+_T^IjĒ ܽcc{58E]iLɍꌲ`7@Zgv˩TwQlI&I'~4M":%>J žel'NPi[i+Wr$"C0VQZFQ$e.!WU̔>yQo#I7ІX.!s.߼9>IJ>srdSRIѪ-7E^ws+u=YFt$Q,,o-Yc2nĝ߈%ycItWxDTqhbN3h6k{8u{?І [jH!ee&9VّLKB$d*(rsNzWW-㥷:}꼑 ,A 8ƺ)ʜikG$+;̅پis",~_g.e#PEq b>98#8FK{J2|Rkjd%WvӨ>YloNbQL.x`ae\ݦ׷яZ ^m_𮈾h۩ztmO~B24H̿3an}Gqk$֧VI:O#>9nn1żʉ`ˌ!Otsrپu='䕚f|⮛KɳYc>kJ6"傂x'𭽴%KnT^:ugد|x_jq0vy$J{u} ɢȊN3Wi\ȕ|FBcszU([g;FۙfTyjpBqIEHR- sJt_}ۦV08Y\A<\$~eRVߴt=k(t8YpylJR6!6:r&Z dWVG^J:[+fbpUǽpN{Ѫ-wI½6\Lx=RUmtO[QQ{[ ]+"PێD;1tZҧ$: 7J0,QÍ3HϡR2n/rNRw֊"6%sx\:Ԫ)b坱eifU*jV|G^2S|>vo=Uh+T)Fz\lXϓ U z|o䨢Mz[ѡ-??ҥoћUT;X,|{ڮgmm Q撻MwF ve>x8ūfs.s5yJR65cNϚ4j'9|vmn#J7Ēyګ;$by'.zu`䥡t\t{8jBip{eC&MˈVaoώO+Wi|?+R.;6vIsM's׷^BIܫݶ݌k&,ˀϹױ#s)^sS)xPt4bM'~3^GwF7ݨXZ=v'۸(=cr{u=N퇕m >EV`??/~+QQH}j.Q 编j%NubڢN8nsDRwDre- KңHܪJ㤾d#;f|mjIkS֏h[ 4EUxUc*2wO~2ju is79^eWQVW#Y,d( PWFzr=Ic^ ol`LBc*>8H^id ۉ 6 q2y]T/v'FF-t6PFb:?Z*S敇E'DEpwIRshR#ط`4<}3BMǥ4rJ§/-usQ.hصepO+qªTsNe.H^3ҳTtGToFjy48|}eѫjiEz p+(ҍii&d_Ynnڬx±pOK&{8l:E7zLisEXЃGD˕ngxu4)+V%zssu8'پn[iR+؎Q [U׍J1B$^zizwK/͏T/2CgHR苌l~ʲ~Z*n2SqT{b9hq:J7W-{Rq}{u(HPj1j/oDZ4ZܢHQ;~:ox|7<3-CA-Y2<=sjއ IJSR4 O\,/̤*kFf\y\.u4SG#DO$;\u#xT=u+qH!r#-}ykƁxn#s֩}#ZR?YNxx%{-SR.w?d6іl2pzq^MHFU9:|Qj}NF;%N>:o-̪=i~s29NJסCD)?a=kxaT6Fd0>nA<>+*T!~gs Omuo>5=/R~19$MHTgfVs?)%[ZWNfኑp `[SԩQSz3QN1on6~*xh/ wyvH>)y8_jUII_e_G(ӋFwF_g&e=n4RKwI7Jo^20Cu~G´/f/=}>|rRhqKf:x>-pLZmRY%]aeU62q$̰Y- ugבћ9 ,eWt{^GhZՖX[sTW g #ڑ捠={FU}6&$'oی6 ~\=<5 ,55Yګ> I ?p8ѣJ0ZgωOLXE|a0 |g$ Rq9.*|}+χGFeIf6A;T$I`JF/F׮bWwOEo<O[CuȅԨ })}N[SRRjc*jfі?1{!r^^̕Ii~.Wt3eR[nGʾumy2$r08:WZrd_v}*Z](7gm6}0۫MOSZgǓdr<z|bR8|N[+m;ю[U=_ƽJҴ{[;vgE8[hq3ߧ_ּRuo&Oĺs뭞"T 5<-poiƛ=QGZ$7Rgsr3Ѿn1ThvKG,4ykӕM"eq']: is/$m2}N>iF4cݗ=<\&|.}hZC#pԌAlqHtZoG}|ʦ.flT}KxqF!P4a`QRsq_fm|6}u2~Mb(r6}unw_sIss\GyZݕdёpA8=yve࢔G_C(1ծG_ӴU9#w(=8𯝆6j|v*i}_PyHdCp2;r1BkIw)-:  cXXUo8 `\f '}PƾVv}>J߇av6^WgRIB_g^ˍB]DV\'>yjcT3F~nHce8^L{U^IX/Z]bpK 5u:q5(M[JU#jF,FyH>ZvAG=k^߯?`M[C<[F<1,Wtc xRRYJ Et=SJkin´lqsXbjӝt{V5gέ!V45X0<3ߝx*RnjIdM l8f9Q`~ϗbTfҿD::uqOp)*a^##n\=Q"&Պ}r7Knizze6n WV |z_U\{_S>AWvڏveWva=~f&"95tJt兵V_#쿆{? 
v2Ft n^sq_M{IYeZJˡxQ.vý}vH;=Hj[~^\S 4R^$gkвNF8|:w4#j8#E~f4HiQ>HY{X$ln1YQ;khJwC=JPbGr;"U: 'Vsprߟ…)r$Xӧw+L!3VUyo sXI/$<,U&py(5MlaW͒!n >ICjT9^EǚH`YX_Z*rsT~g-Ă3r9 p_irȤ&-eheԒy&pB*mo\}k.,y^ԒYHMA^9ֲ\\Wm{46\7˻vq-j2pJjzG RKr1pVu% yV?f5wtM-H+G C'`Ԕ=}w9yWx=˫B[Lv#I;p[J|DSuu'I.Pel08Շ4chr¤kW-nei14k6AZN2v b :,[\D\g n+1},0r7Z[abO(w[ zhtS|'i$\>FPBgNjPJn5.ڿRHsnr`}2HZSoOrQ73/Yz?8+_bH4Q`ŕޠc+zF(pYd]}ߔ N(rRB9i wߓkz:kFL#[yeWTVs-}y=ƕD[ZI29Mu/g}Io23 s 9+XޤuFw ӰC%# j<{[H~֟D{$<{8UbX5bU'{2xV[gmSrC >bZltqa|,xUݹ[^F]ʦWl,iF2En '_ε4te^fz~D2G_cc8vf7{ܦԎx][ym![u'nA={֒)QЉcI!V`Y'>ʅkG7y6Xf=y[*N4ݑ@ y"+n`}1iE] Ûlh.Xy?vFe/Vq$D0Ҍg=aN*0!q$^#UzmU~ÔMh!@~UF<ι8ɹ_Ah v`n;zW<Ŧe*UBefV$rN0ϯ9(*㵆kvamG/90Gj~ەnmU7t}JNk}bPhFp٭n%Ȃ8?u#5eF~\5k6FdY7) ʞhqUKrC6OipOҟ4㩝I:]J7:Ԉ©ݱz=UFK7ȡK+}q^彚Ը{]7ۭ Lp|(wךKTQ>;ċ#;B{(+oΞ]]S~|08X.6:+XF0E[q\.##ʝﮅa;c'?"&[=jQXrlŕeq"[֧KKK{qǽ  nӨ+.)E-p.m\)j4iݨۣeܙ[#FOgRޝ:Nq;oȅ9ڧgt9#Sn}zI"2{wN)h:'rWp }:j6PS_%]b!lێqƶ.1Zv\xCnfˣ#4r {Emʥ-.x|rOm 4s r7HF:QZtc N3C\7ˏ=+S4DԩEoe$2{[kpFXT(֧x'cO7tcY_--? WjmJJ̻ШzXA|kЯ&QeY2w+[P9*S.aZKx+|{q޽/vV5F;26-mA ,)c_eNWdhzduYQ$pK qW7Ӎ8aalmG8=::R"4%u#.zpnևK&y`dqޙڴ\Kz8PӿoS\hqZ^Ik23jzO]zt6-tּ 4Po_u>T{Y8Ɵf߰Gqk{{u1',z$է FQArN]KXi$qe)ݱzrsJ(g(VInXA"nܦH61*}G>[Yz~&+ cx3^ˑi&RksSN4ON{4:6_1Y=^cSZuٚFQYK:w%33Y~]vd1ۮq\r9jQSwV#-l۱V)ǝiV9JA֓l-YU}+ϫQISZu SJ~g{6:ީDPFoS1g9?N(S5ƙV Tw9=kQcg^ȞZ螖ѝ< :g==xS/g)CQRo~WaDH[EnOw^jV6~6iX5l9uPc9*Ws6eE đyj @22Pq;⹯)Rwtm(vb*ݕcq5e*n{ѪOڵڐTy"Y~x8cێ*N,yR5S\s+3>^)/SQ;R;oDR0Һ'(ʒZs:4~V} SU[QfbF(ƣq'FV}ůcQ*J멙?bvWHUWl{I>nψ*4tս=<{:RKs}o큘I3F\rʤMFj(ՔT|HįT4{ݹ^\ig2}I-5y ʰ`zw/ݫ-|:Qj8pmБӿZ2OXHMo8Rh1ɻ zR"K(4*7;I~:訨<(Ij%{_52 }T%)Y7e,1p;'~e_O_Y֋(y}Vu^`){Jny}F"\7EtD %ϧW<-sz*iXmk6[ⱄ|Rzukp²ƶ#>Zaw_.q*5v{lݚXeuGdE}P jƼ/WS^BM+[3lD*#`ݸh:u;VHڭ:tWi%UQ2']8simtocb}cI֛;u-kj-ǛjcsRz.=Jhޞmn݄<`F#h q=8ViԜycg5LdQKKwؒr~a~ʳF7}:C[M{T 26Oʼ}H5U#SuQ;S~4݁Ik#b촶_TsMn$-cX 6xבWRU9R_bPq\ZeԌ빝Y` C2+I[ᗲvr}پWm|vE:u,somFUIA말'whF9Xf$s; +To3ӌF2Ohԫ|FAG==q[F5eM+uɜTI8>.BL$uۻջMYά]h>exZEN qϾzh g|VհmkQ-6F554t0Lp'+c֭w*r/w G!LJɵv&4iR$Zt~^Ecќ2㞄;W_,}Lt:c[FbW ǹ9ai2sa Gf`xڶjc*FIɫsȭ!,ʌi“`kӌ:ηZnH!>;ҋjm=θFqonYaB#d>_iJRQӽJش NTͱW]F}wt7ld[}ރþormYMf%2`-I^ե*;VN ɕ%Ր̿Gp}=DHm:KCqs#Q99cԪkzFXXͻIh7˚+w~vr}{-H#SXoܢZ9YcI@8N9ִ4uяve+'n鎕t麒ԕfoib/3^FWxF9U5x(b(+!Hc%X[҅ȝާ,7>o0\y<*^R2Ox(kM(t:& p(,kZҌTiXa}lT:a Ѫ ER-M):V{& Ϝ:g֟4cRUtc|pwtsWU+-UU`=ewI)'1hedөQ=c--9`ۿx؆"^Ώ>ZدBIܾgT{9Qv9êRʖIV6]3(דڸ(ӪeR*+ ;9%jʱM c ?8CZoi踲#b&$$usXQND^7i-Bnav郞QRkZؑ!PO#ڨ=ϽGI4c}o'Wܪr=}\?Z31ק,-QO9vb9${qWGQROLy*vK-1.D1Vd-L6IVt{#ynDo2F$< W]Ӌ\zYbfmW >s{u= rkDw|jLT$t{WBRSQUGu+ pw4}k5cH4qe$XV|ǜ:FU"Μ=>?bUcUzEMksXR׳Z_n#IY66˓x;zS*=&SJH-eI2se22zgԕ=Gk~9-Ee;l׭rrʭ7Ժ{˻WAX B](JF4xSt1T5n. 
Ի 2۸u*܅*FDX>\rWV^~N"Og HlEY21ּUnz/>p5{q찔2k 8p;cҸjF27ZRZv>8>+YM<{HS,n 8Ԫ%nS|ӷRו1G ˢļIfnz ~[{m-̞LH$%sS^ޓu`֭Lf1̊X>ҡwwk-9kbfz2?+W\czj+vkOHynF۱ҽ qbmz~eV&S_+]Q/Z(HI9̈́ʥ@;(&?[Zy ?C>[Ϳlhm6TV,&?{ `׶Z7߭FioԼбa+}r39'w5U-vxyg Gi[csnjo/CƊc8١G|Ʌ8d)rЭO(mjkq/$roآ-̠8ӓN9\ҕjz'HF+5qvV|Rc3OǚSIS;n&@s 2qU^Un~az=ʌyn 4hqJ ]pT'ِ}w<#2FQ2"TNN}KjKy>Ptiթ{jsSG7dfq\8hH%Ү[Y^ciެߵEt/mw&TV1e6DihΓJc=yҌ˫8"Rj] [(-Շ&ҌSrz T&ջfb7qo I FEy<3nR({^Ke++CUO񬽭W.[ ?ѫ&0'W+4[Z5eo}PجT)2Dk㎞sKFa,;\J(cڼ[T;}ٟX .b*0I9'qVNjڣ*H]1$G^xK^zy*o?ݭ T=Y*\+ܩZ/Ԣ[Hj曚t}ɺwEU9Ԏzj(U7)ކmd\*7 ?s^+eAɛQ[ +^UqIsF%eȹ~"$,g;B7u#<֢a*tEQ-٨lDs}+(En$;ʰnS 8|ňmmڝ)TvSW?{{rf>ΜBmt6n}ѩ„98FpTZ}z=B$f g<~5ˈHC[Ⱥ|3N~EODo->z:}n9#ZfU|{ӧRREqiR^󶾢Fn*Q't Ndޤ*u) KjB<VեnT:Z~$H9pyRJ4y%z0tbBぎ9Lc+9贸O5btlwOs)_Ry'(}ŠRM&7.s8]zh#eCِ;7BHxqj-nmߦDspdU=ElEAت59"!YIEX&:kJ%M󲽴|YU>x^Z*c+qr֩wmߤ #mVuq}?:2JU"K$?8rJ3Qj)CK 0i $m_ErOZP{NN/UCqqy6siy駈I{cdhfc#^n%c5Wvw-L n)]ʝyFTn*}yb= I$sT-Nިgov|G=>1i_"T'RmkYf<}9'qkIT2Fӛ_SSkV[wXm,;H5)7#bJ}Wi^U?_gtvy#8ǧZNj~ԕh-WFzp{mFبwD^E1q;ׁO݆-=/vrRg>P~ +A`:Wٖ!ΣkF|LT;5ƚ[Ƅ&GӌWyK7/fu!QAr$OׁYۛC9U4ax+{Qke*22<^iGF8?RGHdקظB72Vi\B|ӕH8U8xe24}b@q^Eg˨is..Ye2m]88>ָ%4匥{vD';[if?zjoCԧGSZ7k!v(6뜃5˖S5Zrйw}14qn lҚ5얧kul||9u(ܯNXIdBS̎,qO l =}:P}4q,Ҭo,YFӰBCsUyF6VR[4xgOˤ[x7 q}1UR֧e|j{쮏~?jKgp&E<+" @qO=84b鵻9\iې}Oe.U&Iя-5Ȗ+Yq֩NQ|R5)ٽz _JeYm~Q ɒդaO&bo:7Y@zۻ\TW]hV;XHr^ǵ{[-3ҍ?Um/Yw\1&W{LcOIӭhRxru`:ʕvʣS1MnMgfa-J^Ϛ^} #HPHY^ֱQы/m =;G-0CW꘦I8q>2( #҅@k(PNjFPz-3f-_BzifiP߯S$ڿkFI3m\m9}+[9Z#}KV*kH1ʻ㹩wvOi˚uwWl9"0pG8\69rMV85Yr9-IiH^"4JvxVb۳g%IV9}X7y*ۤi==S]0Qyy舧SUg{o2K+?xa|qקC N;ҩ}4(u{m陕U⊎jrB6d*U4>5$ ?4e9jO 4{t?~iO>t.ӔPx֊ePU8W_C<=M6y-=~RcWBZ[DxXQZ-neKė6y%UU{aGZEE$a?i]'Gόaqvv`O˞8#icrHe:}iٜLچU0^-e)=Z[eԫKhR4N8 q?Z9btg?ʲQMqh33FF@Asta躴gvtգ=D]|΃gTxT"r͕m"H`yNlqT~=zn4MJc6ƒIj8zx"yk¼{W&Lrso%z#'8F+Zu$t?9Bۑ:ɸM?ZaATۯmx?x{(tmcIn3FG<IEJJ 6y50[]+i_ng[G׍Dž4[VH?7qG~b24Yޞ}q9~N2ntlkZsx"; ɘ䁋1 6;tr$խx<=9ԓr'prNSǫA}>drs3c5?vg{RRd):šv^a#Gd@sW?*VCa֬.٪Zek^Ԍ;y:{=/XLX$6D2FXapa׃-^i6] Ws.k+]}x&ͭΛrBq歳t(p~Rs{INnN_CԫS)6]Z}uOlt۟ /o&OnSnA81QRTJ6Z=޾}zYWꗰ{;[y|&rA;،a::i֣>+v֦/(_j;y_vi|8<;NGnJ^gn1rF>7{_ɜg>+fh|3YʷїENHfnS8R*&*m~VU0ѝZޚ8o~1ɧvp&*UxV9'<ӁYp;-G9(-^9-T줸nhZ5w 2FR= *1tpꥭ++FzF?o{KT=v~xİKkl%fvq/S EKqKUwV}>/zIkVw_~( :7I}ڈŬV傿6FUP1#e5HvdX~j5*b,vmɮ_Z~9]hMHEyw;l9^cSm[}%kp#髻m~[z$kF6NQ>!s3ܯNunO/TG)˛k$s֯MIֵk3#+N:K*Ǖ+v8OAmrֶʪUqѺr=?iEGsJMI=R [m@7t8p;S[w۠`kIiMsgpn\f'yʎ0Iּ\Z&k:ҖoMHgkH8dprʷn^ #RR|W]YӅM']> {N$xֶߕW$uF\EJjå8Oަ4㕤߷w:=JTEe :KkkkǷ#Evo݁<ӦjgRSi[ivpukWTcǮ9rƌoM\,u_Jds *cMrInɧF+l$O>cF>];SQn+ȥd#)Ww9b#:R*'7M{D۬Xδ;w<㊭^ Ԩ'=ssF*#.UCFD"H<䪪kI"f;7}?>iє3v6J0md~\ ѿ,ةʴ-g_2&eR3$ye)'ʥgNVeim>c$yHeQp'wsU╣mʟ,'dVmA^OC5"ԋ1(swci+RJW>f!TeglɸOK񿡄יYqaCٳO5O㔣$D\ozaU |xz֪F{*(t.͡.ܮFOCz0i}oȕD̒F[hr}:Ҿ-[]y$5he8nQap8Vo}ᡘ/?w.[+oZkBGȵis<[hZL3Gg~+HN1o ''w۱&a;h+FW(7.j@ãuuJTq] ˇQ#q_ctQ̞9J'UvCBdktߦ^xėrIZ\٭P[#*H#{<-4Ԝewݖ )=ս^}-_q>[!c˗ O^aN+S륗e~a-]YE.گf5o46}uk8g@Khg5 CF]R6g gMko22(]ފ_;[HV9 ],e@$s+xϡ$W(~ҩNZ%^pTNpqW(-|ȣF2u~^DF$*{ڰr]?[hI0yfd_Á>1cX֩eIKpV67!v$#/yod/ϡb%YǛ휶x>߅rToEdk_Rh'kg,՝fnUqn~Q6}cy?o/PL^]VwKn27rI9:EԐ ט檴i++ۗH9cs3SmV,GαV5Q(Wz T2`TF^9NYFwyk&тLxR )>Z2ˣ*ܼz>+ۃ'jFB74@6lO^wsƥ;BӌwշXII&]ŲsҺ#>[F;K H#6̥,z~8N5%)+ 4pjsc2\YGlax^ޕdoT|$ʣl^Ù;o2W$LyS8I)=e%g{׻㹍M/] 71PkJRwn4̠q(*&3`c'E+RK7lZ"f(feҥ9T[:\b]\[ؼ+1XӺFCʹoJ=5_ 5%U& ɧ8e%Fg]۰1 ~Y]ǚ=w4EK%ZHW-HTv_M+ {ioCO2^[yq!EywF:Y:Yj.gkt^hEkb9)E3봘Z ?h '0?>y YǙsG~.m;4Xv.Ji:ROӣ۴R<aH#'ָS;&:b+?ȱ,yyVpzzuJU%)$nLiZA;GKives9wy<'[\Vd?lnM]g'2wv9+f-no~i~~D2@!0HVTy\yroln =_nOJNZmuZ*GhIʸSi_ꔢyV3iHƞ u6fMf5iXk7w]ьdPNCe GfXٿz'j8FQo3.ߔ9=s]i2=K?=>HԦkƖ|I9>pVֆrG5}|wv=;rSڧ+kՌ0󋨧rkrYm^y ѩFI^eʼjԔk瘣m ˏ42Krk+;`K, 2ʪ>U9S0 l x̒|W"{8V]5M$"Uiƍ}4+02mvJ̭Zrm'eJv1cJJߑGZ2X1U7*_кtjStPU8Nc.Z; Um'9-(l֮r۠b#Vq[=oTgj%{:X<\ّiөy5fYMٯ'2U#r e+@\~ʤ6P@fXsǚRl+S["Kr.'vngtZF| 푺4=QtgԤF([Ļ 9Q>/Ƨ-'-ȡ|]5)sr#Z.,fCq 
^Eٞles:M&K;6U0e2Qӊ´7߲:F8zRswzkٷ[g) "``GF/ǪpW44]MS#:v1}ɵe1),ҵ Ytq~gCm|Yk&0ecʞ>_ƾ)=SDo噥?f|ZI򢸌@e,3נ;WakS)_K=ZNw}\."ЎgY n`Lsxʔ[.T}ZI$r;/\kI.U̥YUAR[]7˞:t{M4(Mm3n߉Gҹ/W)F ڤ}co-$ycsu9Q))Y6qש%N2g_26;6e\PzsҹjG"hn̸n\2Ue({yE;4le=|#Se R7+n0SZm?g̨0eU0]NS2NVevL݈,mCLq5VU?wR#2*w`T/ߒ}xrvR9FBZ2lc~\<}zߛ^Fqk k&>P}&.]N~is^(Ql]$C'Ԏ>J^ 2~MyNҌP?Yl=0ܠ{5+[ByRJ9?}yiWVBޤ^浧VS ȥȖZ3Cnia|4@х$9^s]z5izqGVw2Ƶ|,_dH$Ns׶0?=jN~pT쨥ͮ޿3ό4yyqKcӔy~< VRo|j~%[$l !@'SQ^D.MWyX|ٚcݎ77p#?{ ԿR筗{t߿A0OK+&n7}|o9>4땖7 t Ƕ8֭ZUm |^3*WG˦H|;'ƉhcIq:{LMIm~}W[C텅FeF;Q\Wߴk~EB>UZ&ݷ0zГ*eS想Z.ciy/>|/ CĊ!ӡa *r0{*㒵L^"u..%RS?;_Zmz[Aoyg"`0ypxR?6OʦY^)_]>#fO|'jG :PnYfRP8F' qN3iOIi_vSG4q&gm۾ӧi{Rh^Iyh 'b8۴cyl")rNӻooSdU%RSN_mADu%vS4KN; >'#V8RR=ޝ=m|jp\BadYR- Gu JkW{~%e֫]Uo\ u(/+QϠNjjzk< k;_{ˋҮ%98 |>aSnWU鮞Ws. F7_|'xv9љnYYIO1+*/i~'yq}vC'DtfP8^-jx|+]t0{TW#_>owY.Ff8-;s~t+`q v38D엒K.2*iϪI{&j]4*ͿTmb I ŚZI!qz`vq(c+(ٽa*o˧EG_^g-myivr"X?fUT7,{P3֮u~˚ھC)V6coj3G Sּ0 MY TI[)s_A]Sƒ!~8F U4FnGum=?:&^Rq$<+_%O6N~][akf׫|2)mvjf2Nr<ld׿WKf}-突VEJKk|ߙx{V{hgX4Aؐ0D=EqU|v^Zw8Qڗ3{i~*}5|)v61学!a3ξ a0ſ>|-L!KUg|[sBk57ͿΑԔ~@;p+UmMY/Qb;z[ EttIȢH=}987+69PRq_z(ӏ2wwG4IIJERCN2[ ҾZocRfzzm#hgpct1*+G#ºGTz8=lyk>ƕ̰ynbX뚪nU2LՆoSc~"u $<Ge4Щce[ǭ_}sſ k,@'89KK68ʘ(uիmyO(u;UOaiVfg1~R2\\ *Zxc’ͧ4ӎ0ùQՎINnrg(/ I_unn Xe9?}&tiԗ+Kki_l07]:~S/7i+XDXjro`Gߦ>Sk-k b#ݵZt>_Q~֩41 Cܴdac ebkFo-:p|!Spn!sak{H~gLEC+cI8_QlWe"eVȥ{-G K漁s_nGgG͒[D~* \RVI C6Ҍ"srxޣ.Fuثvne{?ݓg0#;mFZv"SOf,S@#FA^:NWN^$݄HqecʫNh9~RS>m̀>bzc?JPϚRZvM ̒]}U,=Ojƥ5lLWQ$Vm1O#Fx=1޹LF1W"G sz~G%OrUJ-Vm=:ҹi驞!Tm+ yi!]o+\zK޲JPp'7YM{vm F0pǩ=u9d.Xy+{1"\[XF6ƪebxҲT!ʒ&GhH2JFѫvZoc.jы|Am4Ii6!y,psǠMs#RQ%oCʦ*HUP)I!57LWGu[rGpp#pד*F 9֞inOw}"n_ju}yð-!;z~kǑX駆(]!;.np*֩'(+*iMFW.]^?!ykj͒zt2I+Y%\*Z%Gsid%PyQ"dyny3eEKݒBlrQHa#ÿZ2QtgmBIGΪ~`X6:5w*\f;@߽h?6l[g+^Eliλk[WӡثWR}gZ">~ν*RZLmZ9 \~qqJTC:4kӨ,aiQ q}kzR挵:}JjRoWnir@|ۻxWNz*0Kk4\.֦PT*\龾Fޙs%1$V(;J4jEԨTN._bĻYbXQKQKt>R,N kr]VX{+U y& k2\t%΄iIALi HyyPq Tu|ܼ ۆ::$ތo~M,-$ߍI{FJ|N}cF;fk*8kTFvJtSR#]1`ҹJ$kICM;#UCBp+4}Kˊ+cjI,ꟿDQ2OZʧ7#1,k][\jPgxקq]ֽZt[5.zIN]v8\=Y^-~'+8A*6~p}:Wdi{7)6]LoşW$/=OZY˙;2k.]uEQ~ʥY^`5[ϵztM(/тya>]㞜e*Ҕ|UR8Gz4.6<N~ߎSQPqo[v:ySլvNJyH'E {TBduҳ4ʥCQ܌{zތ)sA[O$s^[KN#I+Us{$\eNtIӧg4oC?'$)}qUʤ?_]7,LGW<\|اF+2WРTo4wg:w>"JkKwF^I#7تłʘQrzt2*ѧSe uf8_r} :d׹刅;J/kd1HRI;pw cdIJRi#*SxhO}Rc֦Ycګ1mWh$8:U9whUY)>9{-O>dc >Yk2']\MMm2gUNMYT0Ǘ)`u9 FtEEnZs̍*< =ϭrr{H +ERO]-s{E[Ίk|yz㞟c/ewbs> WF93g(9P>OSX)SQN=ѾQk_1dn=z tZ);FVhrӇ4SV\6|\ʸcRJ/֧"_-Zc%čsyMӜ"Vko&~kUd 4haJdW.i%&\{.H:6HqҳEA֣MM[2MbHULN/QMtEҩIKR- ׸FI9^/u(ϖck:[FRfcݱPAG@r޹ROcѝ,}+jt2J.^ΒtQiN1tU!oDmYܩ;eݐ~EKX-uadWY]~αostwMCjҬ%-OѪʨ z"lJW_q%f?襻NmIjczݺQ^H$ۼg>W綎]q$6zLfo8u<\̱lC P>R=J䔪FNUEzY5"J4{cq\1ێ;u)j"%R4i+Y~%Ka~OC>5)ǖ{% }Zg(l[WvssSYt|/^ǟ:+Sf迿;2C޵5*C}u+Zr]w'6 nЫ8N'(=;4ӎW^f8Qi[WR{FV2+o*.he{+~ᇊKZݼHmė"WvqTy¹TRaoJj$hܬYHW{gy`a*Si܍f[T|[GtqZ9U̿K x+aiFqG+Hï{o}|$m*!??sJ7?6v=|Ѝ=oDZ)؍Y#W=r:viOֿrtg>t2g(䓐ш}UЬ kiS;Žt];'7 {!i7++0b89FGAZ44\[v8GJzi-nQ7l33  @t֚w"GS[v4As$ w7|n89DZkOX=W}gR8f}oG$Wv̱| Y{*Azr=z5)ί2ҫ)^;RgI1ڈ յ'i-v""h+|&rs#ZX80tF^ Z0b x#+u*JDjQzHL/2zwUJ1IZS}sKX=$KXm~Y۸;znc:S))5D}*0ϥ}4Qኳ&g38HN1i=1S?ۻ$=pHӂt=_p(R4.2i69#=qn8gk5w*Um IY%gIWg5IEq`V8u{/ԑB7̻T+8~&5)a+hCg s;sB59_̊8Z02Vۿt0ڳ'dEg$vV`TGA~+yk%V:FcOa?z߾-6bʫo~-n/s&߯B"C(ǜsJ?- d"ۖ܄)+7ɌeRI{WC6!mcw[ӌiZTcM'|q"ǹ2zm'm"כT66Ŧ3Ն ۨxϯ8TUrmMI *lgێNj3N=*u%(ʷ"f؏sm'jQK eP*FUdyxNR1Z%''$(ѯI!lc?>S:l_\Zy$%F Ip'le(۱1mtec纳:8m}=ozƤNcN|^߻ުH]:LD.hRb7l`c~Rq[/ Ra&ĹKR|0I(.y?F^E}e8x/)NOc]Rd\WU!k5?HDW66?/_}42t}=k8Ei:qN)mk4&ڻdyA^ֺ's= *ñJf-㌎QK5%ݴm26hd=:eΣJ̽a۴6Aoh|5U=~ӡTi17 ygi"WOjIm?3KR^ȭwp03KlUcx9Ղ3rRןA6$;ctIyو2s$Sv}bo#6tKw bXWrt>^N7߹R1PD1N( B9+_*&Լ̱C*;tTӨƕ=:bi$YEql;zT~ڧ`()kTMomXmmgʊpw|bJIA ]2ˤXO>*KbMӍn] 9di.Mr,[nDJv3J]6vʍY,svq±cy6iç&4ߗBhlwel>x榭l҃׷QQSUOgib4[c`Ydc'<׷a\riֵ#Ir7)Mב ps?¦5!=hIM[G%CaS'޵\ҭoSg}?1ȱ_ 
z29;qҩ'ir)"h6Gt}Qz]7Eʪ@gti%9sNȭd[_ݻ@yXN5-$yԔy鮨\=3B$ ݝ͜dYJRMyhZ-_rG<4T2ϖ:3*W)R68lm,41o^>n3?CU*_:|?*P't26T 7PGYrӍ6סӧдi8|Ojg%mM$V4-$ sQ%e8t9Tu<[h,6=GCsr3ZjSv};[|VȉK`12ǰ>WKޏ4^pJN1&h|#>?y~yc(%u9MKoHWhSz]>[]M*I:;XsqCc<8V+2Jq4mE F$7w#@=*ܷ<ݻJ)VF-99#?p7 gO>k=.ZTc'X|Cv\$+ +Ki8Qc'n#|m﷟SᜪF>u7S 9>q4֕3n)5~Cny5ܒb'wGE6by$p:UN_-3cOt:$i!34l۲yn]:;.j\֨"iU"A^{ڰSʥܕi<H&3AX>17I-Co~RITdrK&8_,X#0œ \vjtm m!Hgˆdw87}j=5%QR6fjv7WIVR]'Ү5FV[g2nw*j:zƮK0]oZN2rnFMݰnߺ)חFxTn}idycǴ.cJ{9^oISߺNnBai=yZӔeO=3Xʝy_֍NE8h< Mttu{ZUf>R6S ь_T׹h62ۇ3DL͕w#SXJ\E>/ukExg%@a!޹x_DeRTZ{Z:nNWP7z?-:pv8F29mc'2hH˶SJќҚ J#/\>hi?h_-dĄ S+v8j)6Mlmv .'88=]j&<7{mb82u۶>;渫Fdˈ('[!lnܳ?('Pɷ89Tyz?Ȯcǐȱпo\֒jJq.~Alc2.<>85*FlƵ95uJ7}Ր^tTy^^VZ .#qHan'V*QMv0GOwC.Cs=9+S3i1T] W7?x79EO\G-Z7 !SM"Tͪ{NMR2-b(/6Q]|nZRvszDjJ7mݙFKIfUՃߥkB~jQSJ~,O&EoS@2t5oPVЊ\<=;pJRƣ2F;AliNZ2 8'3קJ`Jۮx᎙kP?Y|Hcd}0(7xϰ_c-|ku@~[xϗn:'4o%' 8PmkgiەIz#~6"i ee83s,1~'R*Ռ{VۻV~ > LYa><=)aOk~;[TvmNǛXG?k/>bU9m85[\n|JזW'?zI+Wש?sV9x!t$HK SZ$֥ܬK 6FǁeR.yvfYyǵyt}JʹaF1WMLn_OZ9Zԣ,l} [JtT5ȣZ{JlUf1ȾxpmZЍH6zR ;VeM7ͳk<ٝhʝ]Dx?6P057̎&i[ Us}y#|,UiSB7kFƸҒ엑(O#[X'*ӵ}<]:ьmoS۔WZ8.$b7Oktd#)G3H0I8̒Lԝ Qס7RXxY6[o Ξ*qz__B1tZ]lnx?QX5\2<>^.2#Mw=:Qfޡ[5ް՟塪ND]} ;v#ے~fȮiaUڧ}O]HVD+ @G!|+DqҕtO5:u%г|:E1iT,k~fqf#0޺Z1ef5OA>| |n5K0JnV^SXb˞J2i~bVi#㎣ڲg5C"8Ӄ~"[iӝ>kn#ߎ#=zd~8y+qˇL=Oh{ϱk;BLY"r0 z#5d7Jل~Kp=I .x<;WVW)mӦ(g Ŭ,4l*IS;I c}}8H}\; tEݫYmxSt֣MY1A))$F1N&Sݧr}{Myd )֛U]zFe t4q&ou-Ugn3f'>UTK?K|6QiR+J JY'#5~&T9[oHyfe;~i~myo~UWR2ZW=zJu嶌0_;ϡ>ТݝUI 5R1/MOs E?ii$ƹ䴽ik293^!)Q[֌vVZ [hh VrsFq:PmG G Fz{_X·o' xnbIW8:OWϞJ8vVէ\iNVPe`181 r{yɇ%(Խ-F1 ;Qʶ.hvܮ\<;xƗW:T\=XBJnyH t'j%u4)եgc/-4imhS=#׏5gYiԒ)˥RLhkV6  uhUdЂ?Oʳ:oFyʱuT"?A)mI k1ث 8pc<,ZwX|FFiv5 6y5˝FO&OBF s#LqTTRT%ݽO36RK/ͫxl>y7E+Y'[ET53=w5LJtۉmfYU/ATrJR~=ݭ,-N_M-;M05oc9 7@zXƝԒΎV.tj_np6ӛy.-WnòMH##EJqh3jw_AwK0jMӦjxi(]tj_X#\U=;I;t~F}u[/86ROz&29J-tʩ/mT X|O}9Wd+.ǹr?Oj5JP|p{Uv6Yc 6Et'Ӏw;iOg^EUʍi֥Ż$=hwZi]3Udđe>SaQ{Wa2Uw=i]TvM=Ŷ&yl2N dn47[sqrՇ.e'k꒿ϾTOEkmoeͯߊzXFYhso!n]w,FH,sa1XJ5m+ZmCR-+A;Ӯ?=A-ޞL:ʝ<-8)6Ҳkkoc M9s9][|ExK?$i[{|]`sW嵱RQ.LEHEE۫XmDQUrУoAoʹ5&<Нj|w6Fi}=bFItBhT݌cx:j4m^WۑN".Mc#c1fZ~fr)VvzSP%lhdo;M\Mh e4c;]:Ic\9KCZR-/kxd`p'ؒ$QG*vQeMs Er64qiP.y*}ȌEzQؽbE~_ϷeRQ1(;Y'=w?ϠF5bi/EёJQ{mr[[g{2Bљ8#FU[:I} ⳶38M`ng=}i1"QV-6kԑLW2c4jʣ>fr<zg]1ZR:#CV{jUC*#yq6 q{֌,jJVs?k4Gc]BIIDI$طNk8hK-8ΥinӞi,k#8gM[KM;NI/i{omru6i?zvIG^⺾S]?v_Džnnd{7崱Iykѣvߡ`P.Yߡ{NԬ.ru*GkxpHXrlVR->"mj {di3&!8^>;+V+n^܇e(?(=yȯ`a+31{6PNv!ta]N9ObVsUws¸SLƧ<з#G}rs4JKloh8g)xNp9p2 ʏQdIohU!W%OiF%WIA؂8V= A];̱(;$) O/nGtbOE{Etu%fGMrxuԞy].JmTzU1 ]=kfYJ3޸bi#TxVe]߼X^\#+ų4\#wO=aj%^;UW>IGV.$ɶnR8=k7OoΜjJM./)o3Yszsi\JV-Ff˟(0:uު*1Z-p%FIc.Y.UF3o [(ӛc,S= -܂?˯rTT([l'9~#R+J1lз@,%y8tGWIXR6uJs8=6G}?:TN1kdQ=Ǚy0mioz_{GsZBi"W-y|H=ֺ%8ԺZVU\=$v#vl)?.GuѩRFT'JO^%gb0f^BI9zWTe.fGGQA[y/1+w3|7) eP˱&a`yWM:Cz.ץ--|۸-Pf=x?/$s]Re&hԤSծ^4DH# 8+xJ(Mμ5L3;vviآ.%䇠[ކ)Cw]$%"X?x=Tjʝ:)TO=kK[dHb4zzTeNWۯbGבNcG p>fߙF3rNj]~jhwxqnk͍䓜s;T{:teEuN~f{%n ̡T`䜜ʜ};J8KtfRKokw*0{ОxyIpڜv3nm?vmYg{v!fیvl}:uNpչd~LW XBA&'=]ɥ~87+[MKEF!Mcf=x<Һ%Q)ͻlUj.ܡ=>ϑwHdă1Ͼjyʞ=l*c<, m@UA[F4n*ݾFIEFDFe#I >Zzt5ztd- g{ ۔GXF3E;;滙체ݜr=8JNU7ZxlLu~&H㈬葃_jʧ,zfa>hGsی&?o腋f+Јrps(Ə+s~E>]BO8ʬ'8 ~&NeΛN6 -3Hc X ۣ}65Zsem**νщHˢ}ӷCYܯ[J4Ĵnqmc0EڠMFFA^W䍰Tr4z]KeO:+wh݀ cֱRzQed2j=ҽ&"N/)?Ï\vWv59jt6M@H ~lS֤9j疷_gi6f%{WM?m˾:w[zpLs gq\v1)mf̊˵WP9tSF1jHҥRk];Y٢ٹ>\{`}+RRR}«/gQi=oF[G"6dܫ?^QU$F.d?`e6놓prXjJ5=Dƚx['lN;#5aWb(K(M-KkЪNw3wS X*JYdR]=Z/#5X݀}M?om̿{ͲF5N,݅QHMTq[~#%hQ_3%7}>XE\Ti֧&U-3iݶ)0x?Ư4GcR_x;#1FӒ+sHxG SzԏVqTeN-q'0|۶ ^y9k%C4ZRWh}(cE]\qJ_X7* &ԑ50E3b˸#㷭q#[N*UzhQdV{u,wG )/]?~ 񯉾]6mf%rqqW֦ȴ֜)ƣKzNDSC#I6_sq׎k0vTn#M_j-/u8EȐd$e~PۘLv~pR^WtfU*Jp$Dv:XQ6Ց$l<)s߃]Xӷ.E˳סi-/I#}$'m35-J9Oi5XֳMYW'9h旺'i5%8Hz}CDB+K6p28qU(˗rߙu,5šimO,t?Nk?{;Vʺݳ;Z$HJ26! 
q#)ep }3Ǹֳ?ƵI(]t5tMeG_0:%TO7qF $\1RqIR xI'`1ћ9ך\d8{ S+*?Ұ՞ZIE;uqQ)XKoݔ3M8ͼ@RFJ2S83rnmtiby19L ,۽̀҂1</瞃ZnsJjsGud17˯FTeHg`d/RAۊZ#o&%r2n5Ks.z xț$'x5 Ti ";mePL2z`WNrR5o36G_+uT!-`{;)I& ƥgo 6pfPUv st]EIhǗ9kjjEKK^G b}rxF:J($V31cs ߞk4t𱛇.D3& 16eV gn8琿Ji{Wn]ңmK7Rlivy3IW{30m1ƝIrk]ңzdkZ͚Go @qB98=ǒ-_U:8tV^=ǎ4ֵe/{J?6G+ s^*jJ[.BN|utz埇|YF./3۶W4w&/}8ZQZKd߾ 5*hEu~W'-!5 HlbHfG˼3621$S(ӛJoNG⧊hJ-ݩ;;YY-~#?eEehu(v4[gA\p~5MC<g^k;G\RԺGS Q]~GǞ Y$!ܞbTx+$kƼ}սcc&w'9B~+nHdk?큀 #0gk<^^ Ws}^ O,}4>G>sҼ]gM MUˆeMx#I.oۢVOUwׇWVջY'~Ir>6tJĚtD^7 HOc/qI$=2Mۢ]<պy9v2J vi-.O=_h|]FRerj_oer q8.(T|K+i{[_6ut0 $mZ+m߯͟Nx3;i?1Q0H^qJGV*ߩ+x'܇#y'Vݽ2R斦^}dmVϘ^_JRMc:io=Wxo=:CS +"m*vq϶3_eF?hމyAM;ZH?؏u=ZHmå$mG'9_3:TUxܵm.β>\?r뾗ZY^3ů;=Zl9 dg<a/+]_sœZ]&~kñSZLc찄W0#iyǫGpʫZOSQIY˯{ӷKL ,Hf|&9$^޼N+Z-.lyR2+[iÞ>]:erA'b1ԋc3Qc(]=OKË[an$VaVָs Ts'J6{Y^7@pp |EiOSsIu:}Մ[c=GY:<qf73 ?L*Nrq~~ɞ#/8odL}H\8op*֔$އV3'Ry9'3ľۤUYm4X`9cl)lDj)]_gߒg&+{y#㏃4M"ݓcߥ~GZ-tcV?#Ac!b;} |+WVityRMIihУ]"z/cjTp[-ҥ{0<-;Ttd˚rWnQqldQmh*:|[LI۳XiElʣ63STy`c/-_OHsKo&6=S]hѕvW.XoK}|GoXyb{k)I]h{SsJi jK]6 qk76rUG<<1b_g{]/b}-__τt;^E`T#ѹ^өlu9ɶ1Ғt>>m6i,rI0*#iy{%M0e C^|OKi'Gi7 nq+5q -_KuT|gП :x_KxTq&cDZ u/i{;>[Y=SW h yZ[ԡ9 G"a}'R5%xS/>KSOk/Ve :xѫzn̼>2t(sFd@ֱ]*n:)VRZ|XmψnW-ܭ#$|Þ݀kR-䮬 W5bݚ %3BNAc'zWO+;hѦ{/,gev|`AԥFqSE+ߩ6#Ѷe{A褁ۿ=3ۡµYElgk6qejV\ _xݱ?xkʞz҅H/~|nprz=c\XltvLa~n9y?= EH9dϣԣR3cKΆ>s4RMČ9Đ9tJs$jp[.?OlOG&INUs<秷<\]u) 6}IhceY n}MyTio50KXב唴Z~M 7.2Gw\ۜCZ_#WWho'mK5T~%wlLJZV|Nu|l=QZp9Vdleqst8/z՚#g rhѹ?ʽRG1Wj_՝Y''dU\;o{c+`¸\S׎֒9ӹ$*Qk+dy߻lg #8C9ybddSYyb0?:ڏ*~{KD.-cX0=93T&_wWװg6]ĖSF#wgus[i!\$3Rh\n{OGڷ*:gx<-yש!vnUpXyp;pOҹsZ)ʤ)*JKWw̫;]XN.pmukTrքF' 4fcw5j{1V.U_˙K e+rn9dar6@9GιeBZ?9fojhd_jrp2s$vYԗ4䌪Tuh mdU* m EN㎸U—2LCAySSkVpKN9]n՟KyV#Q#|=1~UJƾF)IFcxv}p+Zw3NQԽdq >f:1X2_YS.Z_x!+y>QUn}keJj{I;ld@7eYF0yYX)`wuغ ӷ)ˍݓ{WDa 3b-"l6냌R-q]7fid=1֒)44*n}VLAt >U?7p{<Ҝ֩hr'aٛ<iK- Zqd<$XgI##58=]/ٹK>KyIހkxɥ*sJ#o6'>=?¶zI;(ڜic3/',1SsќWƲm_!kXwi &}F;+^em)SXer@cbngp=ZrKC?o/gIs'".63VK#RǨ%i!Qv:3WjN+_O!)MȎÐ{{gy;H%~^[jc@s|k9JAcrvW6BR;\Ҥre[;FA9yƴs#u ꨡfСr2r%.jz nBŽQ3e'\yuaקNqQ-iG(jOdvo@i.ZЗ~vo"jfpq8e=2˾bͯfhL+1hrA#:BRQi}#LQmi53Met6x 9rs ޜ%R<]imS5%eLO"FAwqF1'XxIƥj8V'vfp2 !Wnj.Tq8Nⵈ[LQue<}nzSN2qccWp֒@cyT Ps1V5FT?:k)֌gIۙ]ΡbѺi'8ke Y7).NJd1hBWnMGV^ {"j0W-9Tq ho4K]NFV<}m)k'b2^!7fY{sv m$5 '}T\A3sk7bCG^4oM+ *eL2A\TqR]LaNm_7:|8Z6V2帷+qerWL`~y{7Ϊ2^#`+o˓cҲпQiJͫi{F ֪*'ԚXJ\LKWb $lmO8~C *qdVÒK]E-oA(zqMjsSj2Waٔt/|\Vq-HQښ%M^(#/,`<{O[SoC*u%-`CN&U6ϿOαGQr9*b+sGg?3UaGX~Uܣ7TM*I9u0GiV+x՛jOS +:TSK@wϧӵjJ1՝jތ͎WXVGdY,m1ןaҵXomj"3-޺#hCӧKFihZ{ǒfX Saө\is0=:-ecyʐɉ=<3ںcWyY.}b1!o>kiγX摌,aLqd^淧FN*2_ZqkmsFxYX6r|:ߍz] i֡&=ÚCtd"m}Kmc-\R*QOGۈMP夯cZ=-;[d]yo8>m)ѕ* K}  [,:oeb2l>##ΉME[1u_3N־u)ܽKb76;*+H9wN|e6olS NFˌOn2[4qF.z9}i.Jz5,c|D\o0%Wp9UE9o8+w{~#%摚E_s򹔩\mkV2>NмU Lq& k^,Vƫ5M3 f~07 n2{h>Rc-]|FYebWfVx"4Idk,ok58cbf܋qYTWb%h$cѵ-nf *ϺƱձ RI=,:5J26aom[0 \L*gFQOgʊu- ZhPg)XF3f9[74RzJʝKjbG[t6?*~F/fջWڥż#neotzb&Lqg-qנWVRDjҗ&ɖOm?STAcGXỂ# sӛjVIae(w{^D,F-mfi ۱Crcތ?5XI#VGW-tc "/RŃ =@o⨭B3-Z88"Ue̒/ُs`#ⲭJZr u)wa!KTh#*%Iۦqr'{1 ss|E)3z0+s4N\ו7}?,Iq,q@sTSz_]whBz|Qn#<3o #{%m]+ N8jZt +dpO~Fk)JS4:rzYc( {v)X YFXjƊ玞%Ʀ$X+9gokU⟻ٔ&ߨ-+n-dxc,烎v[J)JIYkO;,67ݫI rvGb[#-W]:q}E9UZ==-BLѤw.*}pFNiJGIƬm kHi - b@9}{M9Qo{$i.6Ny#1Ƭ[qr}NO{Fņ+Su`[jofIl8\<ϡXTir-{*؉a*Qd7*u0_0FN0A'Msjb%Yƪ׶_v"ՊM} 6jf*r/'?[F5hƟDo:i}4]gyϭd݁FȢ?oZpѩZzld-8ON׆rT+o-B@g+,_Gcy,.o}iŦ8iIhF@#e۷ŇR܁8}+Xgcו:nHAcUXm azi{ ?2jރ^Wi *ג>m(˝+^߯ȨtK]#ۂzt4֟fq^%RעFnU|2H?ZcZ;iMЅё6y]yy4ӺId3͎ѭroLu}E \J^wE5$R7ewb@8'5]˝IFi; 1 +ۋ3|Ii^EX[*rU5O-Xő0 !bBYʜyZ[S\?&f EIt {?ȧ<#R{fsq$@n4{,#R(Pm܎]9' VM?p#GzR(FeȤy7,Fr)S\&݁K\RxJ6h=3 ǂ3sU+˩(+օI}ЮCp1шnInN*6+*˦^˹%U^G~rƤjI)JΣN/w!)ȓ)m*:iNveҥm( |{{X͈T r +s[}cV9/m'g/CegOe*Zs?JBiK/fs=%R3dD[H]Kv1KɁ c>Z5g(ƋWms.+:էŒGIHu_n~sV2q䩌yZ 
9>"ßwq?_ifbF<:9y2288e9F<kJ>Ikb 5NMϷkCq~UΥ(h]_ZI&F+HcAZέiFRۜ5!O=Z;PT 6<2R9[գ.NmB1w&[h}zq_eNJntC )S-hY[-l;2n5G5jNiQ%̒L E.ӂFO^zWaUNk:&O8ڛӵ})RWꏳZOԔ"rdkm?)#=cFևBQ_wM"&E* &rx<>ٯN3ݴ{8;Q-\8&W[S#ȧr[6oaz+ܴL֦+[C+T)vwVT @ X`cJ\ OӿSӍ:1q[ԯu+o&>f[nO[z:5i:(ѫ)8Z;|wֆg<Ҕ Ͳݚ& +d9=9S8N-WT(ӕg4?̣O7?R|2GZu(ԧ y5N8{/z_/Q̋>x3.fױ={TFM\֩[%KxP\jN_k;;`2xsӞz0Fv/v6r72l󷒤c9:q}k/G75>X5S0|x8$coNR@J.kơحMfLcG*jMyZZVFݔgn1Pr`gq+Em/⛥doK_S$Nʪnm2s}k8joVPQr^\l4"u;AWv5ׄK?GrTڻOkw}o^TR[=tQ)ֵ?P;+rg"N*S9+i2lbvԟ!FVkw)r[\] 게5וֹ/p,.>}O]CF:R촦If}#%ilaGٮ2kcen2ǑR8+1(8CFۚ+vhdLkq돘cEs %%k}HٹX]I>a.ݧ:dt?\!YTinP1lj[ {W/ ^۝gc-ELXt>Uy3iƊIλӄid$¯gFqnTN7} ZD͒n}kISe'kjJ@:NEJ# W-UA׊㔪UUjҌc -#vor6vr O`9IVukTKWyOJDђ[sc8<{֒MF[0Z mlÆ]Wz)׭Oz :pҿ]:=2T,YY=K]rU+Vwֲgo)! 8CE{e[_8\.rzr g/u8{)Mߗ&m bTt'撋gI-Ϙ#A1q)up"(cl>[OK۷jKoC}C*cӵ6/S EE^m֞d@*1w{b]9gN}x)$ip}1sB%+Kf>Zn} F$%Y~֪^ΜZv5jQz\ dOg*DC,f5l*}kW{I(ȎU´-eu%~F8?h< )I-i M6c'\9{~G^-lŷ0Jk{u0xZޚp."ᅤ?9ӒjUW2T3Ie/e~Xzqj|8Kq[z#)YvyG)$ΈД*9j^ĖkI8y毙B.ck+n|^pN{|+3=i6mkc@s}*')GnJZ\ϰ\)2|ڮ>\&'NdJ4=Jvۑ{yb0%@iWH^#lq|m)]9=%$2Cf#VUcg99jt{fʵc.QOEms>:HJREO޴r3ct%G2H4ĸO&ۅ܂816n{˗N-o69b~bgOU[VcQUG$dkUFoXsZ_c3CkmXb&z{}HyGqGour(f~QXv>A1SN>bj{hTkg[=Ik?Cމ|</fu$̆?=UWlydpR@{i6^2TNݢ0ctl?wA^q'%YI}[OnXȳnF06a}9ϵz\ث56#>x0q"!=|F>'] ͏0s^]IrTiˑtLdRa&R'k=.hJqͻNy9KVZ)>"*v<:C稉$e]:GIJ-cIxM1dkH,ެWGFYO^|ԔuFtjSsFĞiSs{~MqE[MC CvOZ6I䌪dtJ^=IٲNIoƝ:؈Ǖh*-9_wmcywa~:u 枟jHCC5HA# ޣzK-_sʝEmKmb79>:*Tc~u=D{|6TiSЧ9-37ڤ]@-Ox.gR-= pn `LE6l1;'V>*j8Zѣ+I5s&dFeV?/m)N~XB*ZƜ7AtwҦ%z#테;W(ԅ9j2|1ZhkqvcY#deāqҷ7 o~+QR7>F/4}>7vr2)h^<\]㦋oSyumNwX1mq=Ͼ8Ksݔ'uӹzC6ZTi<Ԕ7҇^7|Uڽ^W|K_38FI__SƑdNﻃ^z14bϚOG9fi`͜jԪ8iGb 2}k)N\GR4b&t>gfhY8ۏJrJ^)i&/~Ok_m7"F4H|} 5#&m|' +[qgBv8e*IJSNӄoC#Ԫ\U#TΖвcN=o¹1%KU8^e xYWԤkV`fI: l0l=&69:d>QSr";6YY-x#A^1ZgRU</m<0|pn5 %I]?0?7ׯ^+ƩZk]\D:]WmfmvgCp7s^\{IȱQ\e~voUz:t.&uZX}1#w16_J|jud>$z8f$g>Xj>GxΛRc̾ѷ9+ W&*[ qzEq-˫0N"rnN%ѴƺYc<_j8T|CϨO7GHL:߷<׭OQZz9Gk{|S_nEK:[J [ OU:dJadT?0;8žk[zs9]nKWQvm0 G=kϚr-w1G1^!Aik6]`ABJסV1_?@e x:E#:4<ۉIRFx݁)7cǭS~u{lmf6{=^ٮuN*srnO:׈&Y<| KqGֺ=N#b2M M80r QyҦR4hM_K]^mX~;uӨsK*4:_S 8J;;MYB:|Lb8ʢw8XzǪ]ŧ> R)a!=9󞇞`*.UQ]܌#>nJ[mr[D+۝ O2*̄y+!~5Q˫ixu2zu*JQZ46TeݣG0dc#e4iFR]\-QKE۪q|#\XatF<`W!zgiMwEY|IE>w0(v|۶\/E,Qmy$~Lnp.F H;nݷxXTkW,E7/vI-~#݄Zn4KtHѴa($!ǡ85U*SSNnά%nukoKs՞&o+q`vw1}pj"RB?v-~f\";&r2c8=:>7~M+ԧN-}ddeGoSi\X8\N^_㤻]0\~;Oe:ԭcەq?z*۹NMpIRMϻ){:,,t Mݬ*y-YYY!ض8y}l6=kyyFRg ȯ@䓜ퟣ3ZKO},b֍ 8hSkS\29F{U^^ujumN*˭o_jJ3+o_25cRVZ5weF*sz]Yo#mFcxcccHGQ3skcFлrm^}ԧ,[{io5߈5-v+MV(5X@}dNzSrJ_= 1tէ:?x.zڬ1NђIPsV~+FֻK8z1UK|KNVH[,Y&4["d~cR*?6bR-]49oߥnxbEfVuUJ8M[^GJ8YakǛ{6D&(!eibB#q]v89==vʤ88ߝobݰ}$H>y{e*v\*v^"0wɹVIj{\^}l \SH|OK[O1&FY-p L8 dO=+_>k_sέg/ݫ>mjPF@ 8#X"gIs2tZГȝc=dgpx^ML/ju~9mŸF$@cYDy\ xq\T掾G/5l=v)4kw&so{5+K͈³0^2O𑞙 :jluGSOqQav7O-@8PS&T\imjc@޸e) GB)'v;;YMq7gf`;9T¤}I'zTv4ieXn:i}U8Stl/|C$qXvw/*Gn8׭F\4INONR=/Q{hY#76%llq־;?ٟU(&~o[?cI׭B9nB'* *rGm]u}?~76TStA6!9rǙk>ugm2 Mrյ+%VĀ*:$~*rJ6_ssr'o믩>$+*6gc9a~.kapܡs5M~rd`$FW?2?ꪥ)qw[_k)7N+8O}%p$[|2&2 #8#`4}&>^mMW|EuoycjdcLXN2qqMkSCE+ȌڌъI^A 0ɥ4|p9=0y? 
ejҴ skEm5mZ2pS\e gFźE !|o3ݱfRV.v\Sr矈WV%ii2.-Z+6vx}nID~O>-|\뺿^V<œ\O pUQ$9s}F׌S>NM7duLHAGc'tTAs;]?OFU׶vշ~_/"Kn#q5 A)־;6ԓWz߿٥Zr so}'ךr0HGu+~᨞ZV]lcm$m,cQ'8b:s{\6U(9sK,1_w9a|'=o9R-KI'nvV2I}3j|It0$eHme\7_9?zWgVvߨQ*ƪspsJR]}S_wMߡ4nO0{-$6?^q\Eu:Μy,+2O<%I<+Ro7_¢I{wyKb~S$E*qFUՒ+AՊy$ rwQYrK9FJ*Fef\t`t_\T8=^=<ڳCڤ=;<ҹÙ7XFٮV 7mFާB3@ We9Fy2U**A%*[mnyrGҵVhkF*&רa|Ự~5Z\բ̧,ְJ!# n ԁg]1bOa3M"Mr#/,ɕnC`}q:R}fw)o˱[A[sZgENn^x/C6#YFӚꌟ-Vn& o-\qNZKOI= P*wbW9"7{nљȋʴs7ʁ_ßNqz_nX1vwʓY-[EUx?'֜]O{jXtNwV’Nx#]5yԫ*u9ZOMWG1XzM<3攧-S}o,?*~]pj(RnݍNF׹Z[I<1rNivzКtZv[UIfU$ =xu wn۽*%˽Y~l1+cZ2.W3[K݊yjҍ8ݝY;\$*Q1\5cU>XI l# >\e(KIN ],"a;uqI7BUW.u#7K>N-]3TӶ͋Q$vo&eF?3I}Jx]<::-7V(oOS>_bDM'}^6J]UۆG^MqJК4W::`EcB8\àA~J;=9^*nXZ2HwX#g>hd}zҧRQ7o𹡥 $O0avt}kUf^ѧͣOȹƄ.aUԜpxӑ۵p:~1╟}~-6Ę8(|w9ӊT_I03 *q"%SCMZ?|A^WTo^;{="$zUP34T`eS{%n 'w{q޶|h :E- 0OEn nrzq^pҔW,4b^p 䑏L=[Ӈ*K87箮lHn}*LQ v5rQpN5S$ߡڶIq$yGO&֝NwZY!UvR2[q~>kd6:+OjWm$)H-O]ыqI?(Wڋ՘\g* 2:|.)rj]RvDql`O98o9=ONNzl٘Z5ҙn&#Q͎~8w6EZŶ ^FYpN9#ڰ֣^O&tN7O[G±^֏,ebmzuc$vk#|=? =NfK\Ck2I毦JY9B[z3^ҌӷPk?tV>wzJ~Syjއ+҆N|t(r7m#iFԔyVzQ)8H޽T=~=H87w.ć˸[Dݻ|k0r*695OշTE2̿cRܒ=HJRv{T) A zcڷN[ݣХ,,F2~Z\ٚܩ;[EhJQ[~&8QcaA0'\u%Qڞmijn{͏YF/S\\.3֥:>FFzlh1||x气e rk&hvk8bcUQ]TRMxi=_9Aͭڪ,iZG,eR$}Iφͼzp݌zW-lDjZ?"XzUk{h;uU-B^$kgk.lp0y{T\vV}AwȦ8'aUBy$jqj|)z>Zԯ=4M JkӚ^MR\]nբ n.%P7*882JV.ѻϢ1of(@#*Ԋo92t=^1f^SEޟ,D:n8rOSn' w lZ8ԌeIұI;}Ag/zQFrTĺJJv7&+R^|3sSGE^rܬUjuZj5K[n3^=?uJ%gAsj\wf$G EfRtVptiaNNJ ^l~Z&VFߛ>ZmQusGiVlo2>RkKZ*\t}#Z>nYgArԮV8#pd g9 z }VOjEeeC-)gm"m-_ ?h k>5o\o:i;DC+rAR<סZ8TQO,M{P@4l?|sxm}G,({E.dQ1qI~ uW # Qe$8CrF7c^:Vê4MʦZN^^q ?ecVyEns$ c.[]]W EwP'5pIN%KǮjrwksx2nf޿t.~sy[}aA;6Lө=e(Qǖ7܂gYhVQs/c*d򝱆RaqϥL"LZ;#-wʠɪ\%̔e/hڲ"y3\$ox:W=HJV.q?s[ަsm/V| qigcnbzskiJR[mON=% S,Z+ݭewG} NTbRkM>sž"ie#$8nmm2H<5Qqog1FPRIzW<>,|{%+<1ۡVs*N1:g7nÝӭNnJͽnÞO}4qtBI>W(._n{/m xi:Ub՚6K_*Þ=o9$Eڧn{'E|ec%<7}s3LfRTI={w=^*a [x[duGNFq^"u^7kf|*HӬ^Zωj1feS>޹2g<󎕞*GǑVZ^'m ,}^֬yz_ /z޷5 ^Ǩ1cpS0:%Z<Ƶlte_%uew=GHѤ<3ɕ,*\qCFd={yp̫͐OONJqܯ:sfbq#czּju4RIܫ$odc_z\wEL|]emwk$%AQUV\ҾF/]$>sX*IxGc:[.8yiN u:{xH۱&UN];/2O6t/2Ν;FVA [ƥC( ہqQxw8RIu_[$ͩƄw-o}/>Y[f>xڽFT5=tbh$kr/ӑJxmaUɜg o`ɰ`tmMkT9OUΥ]],/P@UFZWUەYxGTZ$H'y NpAǠ5NM%2 %Qm5n|>ñ RL|D>av۽+x|؆}Vv-ײX~:%«21\qr9Zo0!M~gӧNyouLh"nd[-E2(۾RWdqWT,D[Ԃ%saYXV#k P۫"ܦKAS~]gFg&KG Ӕ_~=/Ğ'ž iϗpg=uab欙Ke{z>:%3[Ky%pN: 4hM#)^jڜo|@h#o$H+؍IAFyu%sr< 6:_,KP:׹uQ~<>~pvY$YѕW ņv5s+}C8ޫ[vܯO%2HYXwn:Fj(ݫ}1s@ieG9;P>6;(֔Csz8z2cUc%^`Tmr(R{{q\M]N>n]O--a^dILp* WgPR{8}<=aUj/Uپ$[wh)p6:qܚ0z3ΡFq園V󤨳c!ڬrq'vbPn4Uպ bRq'XǺ5@#ʤ`޷50rksC]JO0IVW{WCE,D\*|)kP̒1|əK z{W(Eath^^g1 6>}lڽ얍ne9#sSJNO[Mq{f^Q*d{S *8u%$vflwVzIVSԞ&u7t^/7>VwiZVgD8%ОkYeF#R劊9QV^BK"ƊU )ﻁȢogJ]6$A -~b݀yӎkIJ1reeб*VO$t𨌵3O1wdč{7#=kB\*+SO*L[P7ryELlgZEKm uvV1;vAIяWR䖖A[,wdmZ^WJliTkrd*B2FzS*~Ν1`QVnVbcNchӍqi^kQ CyNAn~a;uѧR65̭ AW5&2rzM=pF2Xc8[J.1R]g.I$T6͍qr.J~TA3[oTGznƊ1+E\(%@#.~7g8=IJCa0mRQ=;b#64{5A0g~b79+f# TJ 41c_eIm: oRL re,eY#sQ˲trIJa,UYuPY RFY1<\֑RIZ_*ĭedR$e  Y]4E*iZ*E2lۿymsҶa*qDR|`Q֜jk}@,\1N[q F*Ȩӫ'kxQ_ZfLTKظ(nhjv[o-c}ATZCeȾh mÜքV#=\X[)lnq$Vqb%k=7eYTK W+ '?TkK!s.ھO1WƧrub*{[ȧw'R&C3MTe*u-$c1lVj8]ίo(K~T^p{œpp=ۛ0?32& 4_zf.GN*Z`yC³aݙZӥ'GMsa\;S=ZHQn' rO~ޕƛw*U9ᦧ/ )|B0pF[kIB:-ͨNZ<>|'ζs:T;@}qk+7犩N8RnnqӖ17-9'V8ԧOkmo*ԩ7wkVBnXb:}CRXx*Wr[ܖXe$nG_ҹgFQz&tу.5ޙ$rlgJ6Xxѧ-_װTU`Wh?J4+.w,w {֐P]{{qkGQ  )MHΦLCi=OCbJݛ9QU]xacxNV^t4"9ӌS3F09Szytsl?̾*eOKSUw,Z\]M{i,=6T27 p}?sTTZ:*|QCͅ6VlV#h`3\VѦr^c'0kXeUI/ #\RQSi>U-_Q'~Y4fBI$9=3QRBኩEZQǛRX[[?*3[,sI'8x'=QMLF"zt]vVg`x,u}i Uy,ZhV7uz%F4:9-nN]:J|iw{76e/8x9E?;t`#lkUv8V֓H/18}cO9F۽5Q/S`fR}몎P/gM9O驋yt|[\9TR.-NXT&!~}+\.s*|TNae]+S^X+[rmmLaGEՙje"_ 챠$zuQWCӧ/Ӵ Oa˪V8*9RWIm.|X6ł~nNxak9ӏI?/SQk;Mû }B߃7w@ܒ:qNm&9Q=ꉿM7vZVX{feR;~r_X|qo!'y`:&eIe"+n2KVNQni}E4tP!933F8n;t}55JS-V{l1U,#Ԩ%!V+c+H:t5uhtVt'g?\#ca4ll ]3Wky_mdtfi S 
0Iƺš覥Ni]_qݵh(IsuL>]53=aG YE~f+BF[$ chI]geTQNַKwy-"#s׸#󨟵IXS .WbFY.W=9IVQGEߞ9)ol޲vp~zo.ueQ EţO_jPM*fڿ;W$U&zQ=+Ehu?%X+\FM_g)cGMډ~o=QRK]۪kJtk;i ;ՑX=Oj1|u#V%BDu$XQaN8wRNϫ;-viǫyGwCWzƦ:p׹eu;6G9$c)iS*qӹ:/U_[By㢡7-ZQIX1Vzu)'.8۸K $U4Wn{r0G⹥%7ʏCǒoukI.ה W' :sUS۞l/f#mL?—5)k览\OeC4op,ڱRPpUI5/'Pd 7;t4hR_n荰O|\sVi?Ҋ"іNw2U?|1ӧx䋬%&TYCbrO9\tKWs֔iڎt,-4NQwp2Ԋ.ӭZ'ԡ ܅cskf'؊2Y't݋Iq{.%XZ@ rH,Ic>QimSn.V+"ڌYISxy'Y5^=ˋ.ne*)rx/1_;)֥NRSCm5)7o")&bcyJY0x )F=[SU{hMDimeXd9K*k2z FzbjNeGi~iB[kQ }*,osYFj_ ..[rpJ ֕M-UIt$S#~e[_iF/WˉlEi'vOiO9~mV01DcmZiR`Kٌż~mnHڪy oZ?k.NemE` 6;W%JܶmUTbϦ S=+=fb>V\J-7 \gӲWU=+r.jwތODl+29yljFX)Sk*N[<.g5J%tV/u6{jFj[! ͸}>{zr}xԼJVNxa|rhDև_2J$82zMMॹ1+[(@7FqO0Q{M I%ҔKuYK{Qo3+nA#;]؉Bln٩񝤑#@)C߯~sXz-X*wm.JO.Ve :A50v_芍2,["V]r:xu9yמx]Kp 6rqWV)G,gJ5y#M.^9涅4?up'MX畚ӱXեEN?ONmSܤ}p =+4SӪfSN/Գ RDɴ;O?yy#WUr#FxwY0=6|zF#Sjqˮ hNn%'O~+zxTq8cN|ץK+"tv돗~p+ju ͻ&i+TF~vTJ +S/᫰6cnq>Osּ5&i=WN-HUu A{-^b,>!`Uce Fsz@yvѡݞtk̒],iAffGm|`FנtcO{#}_,'O!pO`NNrrKNϹQNJjW-Xɮ.>Ss~]W抲\ߘsJ}HdQL8lZ8ti[T%kL%[Xy[ޤ`2@ӜzWUBk7($]2xTςR8櫙JԽ[In^CXO'g>4+Ӳf=7xl Nl7w:N׾A&LҨ,2;Jn}8797-_yv" ER71$1nhٴܬ֭wF3|ʡ.r:W4{p̿3Ģn[t˅H$1,EJpqgGYkC/$ʑ۝zuU#wRZKVSk#j=Oܕ~3H',f'ssY{F9븭C"o45d~+ƛq߱4I-Z׿ȧw\m,cKz7[X, 'I68+5=֊RTۍ&\\ RөRՕ7(Ɲ̻ݟ2ӕhitf?XVWڷ'`i~c}k%GNKaTMǙ--A.XmuOs)TmfltUWxb_'qX?i9&S \=ΗBks($z:{\#F3r;Z8Nu]Ķ#I23WzTaZ´/M5':0R;<' G$\`gEYB:ESO,lU%d-GۭM.x'/冀[۫[Cӧ&&yFTs^Y EapۭeG9ՓR4VDqbOEK^',6c!8m-z+\ʔ҃fK,F~n8-?օNNwRW_qM6^USG|qYS1AsVkw69'?zWJQv3w}lA:Bۜ'3/a0zj1vf9eMٔ糐ftBUH +*?5v(Y[Җ>NC +F39\ܼװlu\zJRsKXiLx}ϚNWpkNK4qIud8bҶrţ;FO1|Ք4ź3 T+67@=zJRN]>͊VR8ܴsH C4OdФ˿O#Ҧ)U۷PsZI[­:wz OB擥:62tա֔eJK|xKٽ?^35=J/ۮP?ӏ1Ǽ+ژB.:~GoBu '^=sd45zXys[N |^φtv})i3H,Hk:9ym}|hp&^x>>Ҥ<+B߽#` c<zWaSR2c3|*/i|ȾhuMۍ_o։גʵF9X1Bݵvb wqԩ%txTV=᷼gN$_۞޴Z)R)/@I7mRO͌p?_yTrIyi[s)ثڹwg/'}!$R|~NJ:\C$̍G)Ay*y?{Hݛѣ$9k\-68fV+ +tۚI,Sև(}?c_j0̅Gc~#KC_ 5ofr,ʰ9\}z~+J35{ksN%f3".o40XmnVX{ZR#GCh6W$H[ɓs4e 䎻*c)ŦgμSYԣLP_k7_GϽpVu><[_=qӝXzX8ʬҔt䦯$Vϭ$wopy+0T*-?}=:mɿ8eR<[y-K'̤ ^la sKiso{y%V;n;+)N4ݑ1R_g=ޅŃJ+_iM%iԋq[Y48q5~SҩZzEۃ&Lj+FpSR KቖYkl*WVVN1f9.?.uoqYumR>Z3n$o v𸩕/iMG,f֋sP3`~5ɻԧ(Եڎ<-TFsӟigRQQv3Dq]&l-uѥRVF6lX[` n\*R*m:JR,nGZ%QI="rk{"#2svSqWF8Ah6y{ͷ5Jr)T>x#fI'ݎz J]FQ}I{?4KwZ6SRK/|;N#*H*b}nݮ4{~%J1F.A)?Ք:R(ل7@Zd囍RjQeԼj>_2֑;I82̤Oiԥ9TiIƅ)i8oh7ʭz ZF۸;חOZ35#)P׹Inlrb*V8ХxCݩ.o5;gg X3^-jcjT九skcܹXܚKs'.XOʹ~vXOJm˛;O,毬jVa{ &GWT^76*H'ӭ}VϔM+jdAr6mo$EF*>M]Io~E\eObS**T}lip7ۖZ)Ki^*Q5+7mBXnkE<?c(>Y\.y6Śu'sq>Áqŵb(օR>t]V7ΑQ6;`Z!hu{xz1Nש[Gf/3b\y@=+U9omr9-5Oӧ[/NI<˥ZW6p;~:nI-4*,=HRvt(%V[he8%wqykӟ2NE,FBx1mh8RTtn{cSz4p2*2W]># ?g-P; +pgRTA8䎇.59)Zw}z[euO5 <[ jGḪv*X 9)~_2f oM>;Ea4iYD_+x_3u}{yjV{u˦x1OOhGˁ;R}yVm+3oEJ}jirܚ9*rj޺irbnJ*VڎS^XYkF%ؒbeX9rQ+n'wQMv2 HuhYv=[>iu8iÒn.| =יRCIͭYzF獡zIZ+N7ko'V9J.rwCyX+s+dz~e.[aTftoc]is汣w_r%V}g*ҍ\jRu%$iwoP@l;ץ^DFSn\Kn>eX ܝTnV\1F>6z{3δBcn4ׯ<^h#W}R[7b m##8tSJ$lwVݴ8Mwַ~\HdSʲouƵEZ)zldMX7K[4ڨ5HO305KujTT_MbY~QW,n=9>TeRvW;*#( Wy\Hu~mOQ\ԩ% i_N)-SӼ8h|ɚT,ύy 1^n*16}N:U:rvnn\WFa5?qFm0S֍dyʧTuV|[:)~$`Tc=0:'u˽&g918pN:Wz+aʿ~[%#ħQǿaۧuj1lnJ#H_s4e8^i%Xps[X:F=J\btx(۱̩syj0Sƅ7rЩ{;;=iF1GkJK[2:~5F:= =])CJVfe39zьMJùEԹkȟwiKVhss(ѺeeWBT_z1qJ}tw/yCBf*Y-GEuQ4[]jY[mq6 @#F{Q(q5)Gda)& -5?k-&ei5%B1rG_\~UV!kJk߹5/t|y&r2y)JVZ_o&J8d4 ' (sȯ`h$9eRMdʪ-"׮?}vV=Rq(^ mmpr2znkӄ%(o(Լ4R[4?G4Oi~C7?wV;汌5S*tgyҭZE̎WT?xYFua<~>j_"UDKQYQ ɯF4+BVۺZcG ^2j񾫿o_v0evcy..0Cs9 xG[ZYWnۅXw >|s_o1eM|.qӨ8>um'Lɻ|m/UnO?O2 >xX\wYY U+={**ޫ0rZ x=q[lrrpT+s׏^8$(}41XTm%kvt%i|_흭+ͭu9HluHkF3мEjIC$?*~ e䌣F:oVwB:Iߴ+&b; ?( 9W\T_F}imIAXm-2#6uT`9Kͫ+zCiS$ԥ]oMF[[Y_ȍ[dCF9zھ/]jVn>+o;5fҶO_U*ׇ]q\Gt%VI6qcS06i^׻=eZ~d.{gg,t[bѽk"ٓG+lk"7[?-Z&WrFy}ǴePtԭgaEk[T"X gюWΉԧ$L-Ֆb+6[iG~xyS],e<<'ou?M͕X?̡_TF,$S;rܱq0|qUf iբo:.#n^޹FQctQ OPo^ fdN֊y55Kie6kv{TAWW&?i% $j8csN}Mb/xJQW[>]"[yqq't9߿ZkS.k'0@I SalaGZW7;Su[uyyCl ~fc{ W%z\ffʜm?"d.?V;2;j*Ch%5f^ ݂eRGʸ 
4osmQ:Ub/˩rDfx7 cKKD3u\ sL31eaz?LWԚxv_?Hec@ŀSh*<>s`ʅ3"$ci9nm᱊YJە{-#9ܱIJsV)FҬWJv! 㑶ld9G5Rj61<.iM335nÜxk?*R[]2@0EIz619\һ9?3r32ZQg̼̱JN]^ds# $e0Gr=1ӊ5*'ebQȯ:s]y>}J39ٕnmf64*/2{t⺡Q"v'LwZ+*IR;uDY{ EobN:Rۛ8$= y\Λ}CI-25 :kk0wQ1{ONi4,/\{cjΤ$uR+h6n5 /͑sUEc1 D[[yJrE5\>ʗ@U)˸ )'XY3j#w9UNi}iҼ_MUnʸvfcۏtF43>[JA*g-26l͏|*3擳4Y)ȭ=it. ~⳥ {[Rmy=E-Fc <# c[OukTOQԴ'#YQ:(RE6fmo31!F1B=kJOI~NXF FZ5Rw(UUhW#z8+rnD+N0ӧ#;@NܠE8$}{7S|.Yˣ2K|$EǖrƸ+>ƝiF˯>gUTҴq*:|I,Iqg9JK8cni{cӭ%BNp՗$vsKNزE܎Z_1aSzimtk Y|H9^}HӝkK9bhI{Fղ{mMb9 d#\Nf.c/Y]fq$0eWb1ǧS#݌Vũ<UW=:nSn[W$I]Kxw'Ɵ;rƧoROl v̤2z`wW,i){HknKo sg'N:Nj˰FWs:ҴSݷo#:؊R;Y'Tfo3rɟT}n)U//3߸kIGviXޚ05)&ip8]FHZj)+-8JQs.^}|e5t  å3x9ϿJhԋi9j~Wк1v|(= N^fUԎ/>#9^S,ͣN2$.Xik^O3߯]#Os'e̪P2Zo#6̱I#wozYssKTcF5ͫ ba},g=W9F2)ӌ̕.w IBWYKƛUq*+Vm[4W@6D-5…HX3{j]Nt0"1i4[f-a֑oFjr/!RG7y̓"«/z/n敢vdkvH$7a+:w:iITnzti~dkXUEݷ':=SJ|5=ԒGY$ n['T-5!F3۬y"-w6735֥Z{MFnjZG UcsNZJ!N1v^}L}OFh,ʬ!`*?ºonNe/g%h FТȫ޳8Y抋2nK3/ilg:r}-LKKi*yTݖMu:V-SOB4Kcb7KC4ُy%LX7m\M[?ڴRm4,&BJs^׫Ќ=iBm5lDƨWuZVV)BV}JV}vޯ.rղ%Pv6oֲܚis(lo˳k@Y[#nEsZ>e1j*{ZtAlu㏱Qz#j>vGr_T Zig q$39<ڮv'ٹ>$/" mw`O#ڳ/7pi_ZIr 1&ըԗ4/?g xǿ4ѦIy,Y,K\\)=@$iݮW'o艭x)E멣ux@{i~H#㺅C2c;7ʨCzG;)oYIiV{F5',ƶVIodQ7AFp }]b1OmZ,R2Be2>VHN(]߲novۯSleMcR\dIZtzFcUdp)=* B 4>vOu7}ofiOGZI,s,ʼn%2;l ?Sh_ⰵk+}>-OHվx$gz609#:pଥ%}:#,:3mrq}zu[|7^EyN&̵Y 䑀(Yw8ӳV{O>'Z2çf=QY6ayDy9oAOOQ^]xžz~v^U 'PV]1['8ͭN0*}g{-m;WѲmA=G+I:*7[#uo$`=1XJ^+Nj7HB.#i$-xϠMroc0mD2H1? 1צ*s;t7R\+jcme|wKݓMut Ui$-j*1b*ioS̺vyvbG\u9EԕO,e˘{&e"<;@<CZB)lp1o3)JNOAϨ@H A=:>z-F5߻q˹k|sӏ֔#̹Gv3 I;Hc |q^iYZv(b'eT6hGnğWCЫ{coK/]m'%aw˫17lc]w+oٲNq_Cˢ3*2OԋT0PGO𪄞CJ\˱VCf㏙y]4;*Fte$)E{ {sQ*V3!PEiZe_3#l0ss{C;9hsY;%e9e8`=,/,)7it7m)OkY#*MHgo-8ヴyبNT㬣BʬKO|Jhze֢w %5r9$q؊«j[ VQ_kSp/ͺUXqǎO^Sq{k>]TKMz}?>+jHOhm03zי}j#Ѝ֞2>x[ƺzg9C\9[t,ujlOmRNp=k5[R8KsʿjχjUtX~ ޾kw_5Ұkxtaۉo9?729FkV=L2Ri5|Տl񞥤tK;4oT%PGps滱0Rfkpi7 ۍݏ̣9ΧWJiW4pl-Yd/ǙU g\ b MISyWˊ*r7>chM,uz^Z7ؚ9濒='JlmVXq3y5{QRܵy~Umxr Hi.6g1+"*rwV")$ڒ^>Oٳ0巘I9}N0s~_q6n#/iY>#8wI| dByzkԩ#+wca׭fVk~ .]~k]GYOH@?1BXT-UgF`hۻ!66\ʸF~l`׹O ^g-Jь:O4ug9d8PIQb.j6o3VZɵФX..U6r}:q:rQ|K3D3vͦbA&x8rx:FQxәz ={߉?M$mrn9\@#èP>e5'˃V>9@iKpf,rC ls_W)LphrB:9o]k6b/CG  sӺullg[2;=^[!!crG%yʦm4ub2Ozw$MOI$awso:qӥyΥjכR5k+>cu7QPch7nFֻb}GvW|ehB 1bK9zcOϳ0U̶'Gi(>%.Wx=ұ94䚊-jvv-eY6t`іR1s{WPMg$5 |qI ⸥*ӊvKVV:4߷O2J,lULNR:sH꫖scZ-Զtvv K53j1Atb}zu<*)ru4l=:ϒwqԿ 4<#V4e1d-$~P@%HcW\Q^s(5O^,pr)[?rFRt]N3מzzK)l%ḙȻ7+Uy$ :skGi+NDs6 WN7sFU#w}.1lv<~57Oސgv G<',;Ke܍/|,[vz7Nh}0F.rJ%y[`nAžsi{N|W4j{=#UukN]$rc$"gvPy>5(TVjYEhmw~}Mfni%Bb5oA֪TOrzߡ~L$r}:Wi;/-D#8˜GOZ7h+Ҿ[̥ U%pq>ޝ:jWqTSObDf|07G8t(es)J%Ũ9[9bpsӕG-.}IeX f  }JTԭ%7ԐĐʒ;"LnZNqi{9BIf2m8cϿii+%h: <ךY~\F7JB1kN%QElya ĐX4bpF;Oa(g Q9'8sed6d 37 ] \Ҝk95e8ŒۗiʼcW>[NRVBXmR?ڶx4hLˇ'߯\?SYҏf]H=[dyVu*\ڦ\R1YUJ\vsۧj]OB. 
xݔdn|w'-V+'hNr!9>F~T, s2\!0hSv6|ђĖjK$x vNFD՚TZ!"FOj1;sZznb g@Oi.Ww4\{K"yYHP#Xq]'28o dW-I85hȌdp9=3P=턥oanRpgc֦\%bFJ$KwnYxd7szyjrpJP6I?uF8(kBeB/UXϕH~\U9J/5?B cTxʖPqO/D)oF @߻-Ӄ5qqݎڛEv6N74cI)-Q/a'4(hqmN~l@wH3מ#+~ocjuJž+B#cGn1|)K݋Z:uG\BT~-K]6NC]ȭ /x֑唚#{NjPzĖ%Ԗ?إ@V5f\r1מzSIHW{9=^$+#+W)K(G\cTv P1ߩ=^lֵ\=NjMtܬt_0#zYsTMÇ9I)XKm !Ujuw=믖# Q/^Vøm\1ubϜޟ {*qNtczoڱF̞U٭?Nvw.2捓z"]~2GumNNEiN4=8.x^Fe*#V5)ѪԶUԼ,a:5I~k5%mt)Y=NL-IJ iȮcW=WwOƹRTZ.AkS]\ޑ<~ҹ働FҾәnʹfI ǘy8N1DZR_n%w]73s'[+aYK HΈ}^ZçSd[ڿ+}oƢeRj)ՇĪj;/jyvW\N9muWZ +"ʒyiӔ[c7D5H`q|!],:pi^+p+a.Vџ,}u%rea23Ec*.ւSJ:+.ڄl$CayH8*izG:YmsD{gRiG+j UT4VqO+5c"R[c$۷.ӱF[<쐩ʖ&[5TB r8刧[$F8>2qUn 1\&XwuJN_Rk^^B垖c,wmn]jGW<ζ#΃NĐvȥ]y/R:iݷRF7N:i<ʉ)4 c'WjTlߑSJ̯u:}$,FČ/r9zp碣u8]ntFq>3 X# R.GJrX4>K !r8᏾>Y[RʵnVKH٤.<4;R'i;0e$o0(` 9zJXR#eivo8qsUZ:&)i6J7;}Ùh#״+t vfxmF'1C F+AF.ʣQζ֖*5cS=ZxdtuW33HY9lT)5і"po]w:,Kȏ˙_087gXԏ6c~fs|]g,raFCuב\^^zE}WU'U'm[[IPr`q?eR5R:ttV+{\хߏHӲϹg.'bۙd/zU_6(G-2w]l3c$5Zŗ׋\y[U#*4#N2VRWqnGX;q9z|Ғq1*TJŌlȥFB=j?c$ק ro/||9'}bNwOGQ:ێ㞕eJu;g zT;ͣB*OKd]7pOol{Q.h+XS-H2tyIi#Oi)]w%{-Y7qӗ.ŋk!qTՇ5nQEfw{?8["PA17ɯRh5϶}Ibb[+ 6\]Fխb%iE(6(kEt8҈SѻJt)S rs[Su9l|BJۮ%y3= Ҥ=E~>4 ?Kbt̷8azR#|U㚙S+(V9a]^jDݺA}=(zSP+\+m[Z=왹Ruiiڬ֍X̄}3czz8jFR{GU+{EZicDU׃Y: IU؇8ԺMSJT]a& -Jsu=݊ӻMX.C dr{ sA ᫤m-CVpjkѵ9x>oܹ?:NLDTt!Xʥvr^eJ#wbe4Jz~F o8Q`x''۵y؜Djh9S.t[|p1ǸusS 9읿YafUw鏯NOzT=iV2XҢ2nJ( R?0#܊%O]<09~פzuRQ\{idoCszv%FU#h] FLBM_TpZy~d&WuHWVAkϩZp7k)+릖 Eʵ_$圕^:εIմR#=+˕y2V-"nÿֺM9d,G/ȱH,~{4q{ >s y麰NZ Vyayp*It+-oxR#4* ~?y$jڥ, 8lSnyWl1w[9*I=SuZ4 '_[ %bnqϥu{ѩG5][ ,dXBnlNA=8\)ΥNmӨэ]j??1 0Y8c[ځvsSzS[.*T㇩TK!Phd1\$u' }|$]joF3ֽ4on$IE.Xs_D.hn}iG"WZ錣Z8u2̪Uu.qrQ'yV}}*aʛ*r5v2TiemX*Urq{(ԩ iס{;l6( YwU8>/Oy_]\c .6EQܫ#ܳ0mVs㎕NQ&){:jNJ_a݊i7 ֜ɫ.,*JU/!-|ճ++z:r~ǔ1U-ߞռ*]Z]{{IE|ƒ"GWpRsqCm=ׇhO#Oo~m .B?g]6dQDZ8-o}I:)t bG9#74kIf8bԥta]-vI9fW}=3Z^1/;c9JGwnPnUy8=HV{sνGlwdluNr[T$9VXD*MSRQ1b8lv] D|Ҩ {{oag)8Ǖ>oVKEΪˌ`~^kݑ鵨@lnPccbFI u_Z9a(pOko}H.X\\#ImlO}G5-:1dk,ҴWݎuKxBH|7UOL=MbaR%+"Vmf_,gA#W=99ɨ)3Ha1F^ \r擲~Q՛v#eN3׽*U#edvJΚ];!T+.;pktFZZoBe,8ǯֹXԓVc(I5tUVʶW8ϧN<;:7Awg5j% |=r>kԩjrv]|Y,,}9hV;?ON87hroy3.$m=k^j죩GNQmrYESB2 NI4vnĨ$oo5>#ȍiQdhK>jݸ{H>JT쳺}YPӧЏZ)Gk JRrnѽ 2zt~G)]nYUN/BգrKc;ޕӫ)A"q%x!'={4-[Ҕ(S"7kSm$H{8x1U#oRu9ܶ[v[a?#efbnn#8JQ{wa{(KM3.amʻV22q xd{JGвH˹Il7oNPy݇UOeԓU2쳂Im?9׸Jr~T~){̗˨Cay%`W`ȧ G_һhkK~'O{#<"UH$c~|QQ{4QI]IH zPZ9 Dl*,F?=\8do(Zj5eVM5ĞXG)fٞ:n'nyyc]=NyB+TKnVQZ@t:库lMIGh.ac)o̓9JOCJXS(A? -Eܱ\<_$a;2x=+8ӝG(tArڻ!lc֕0*W,7ө&~]Vhd ";OÚQj[eVZoumddlsmQr8FvN)^LBЕg c0wWf V\do s͋5|{]#m(9lJHr{Ւ;;u܊TAZnBSpm0c*K{{5NQQQn멳{&_BkI5CML4_x>Ǟ;VrhiF_N.'+$njx8ZN+cyJ78?0V9 I%-M)nSM\E2i̤&3:tvԥboc .,a-\U?nAG K]ߓ >>ϙ><{XieehsC+l*܌yZ.O5S:t:v^E-aicPy= 8q\FIh:uk4v[4pdzn u=ueU]t[*y8?f0 2qLץE:7s 2Ho촔-q##GK(''}5gzJ妽fdvfa잼s랤yv3/ٻ6t6y$T R?K2zЩFw5`G|v9-8Vz__fHȧ:uVѽLev `Z`ѪsglRRRߡ%ٹ·+^{HI# м|Y|۾^6r9qR^Sy~yImq>T4)澩t+3N<~S\u7b'70Ht=F;=O~ZPu, E3qNM,,`ܿ\zW=jQ恟(- #|?^1挒zֽ-!HH1e>`އ>/io4N˨bkb WkNӍ9r#Ui2#AÂ=:z֜iZ}URH(ǮqC,$U8]~iTo]ZJ^z,sTmp?iӍރ(֋%۶CK_Ϯb;:cQkv"ʌ1 ں)ԒSEJT-41߷r8t=~(|X~ xQuh mA>͠uFFGGlf+)58+f_(ϑ?~VfV[ɍbd${JX$}t=:XmzzHoZCʒVNq$hKK[C7LUl };s\XQZxX:-#SIXj c`(~G鯩ѣ2:M/X9Y݊nYހ1pays[NKxVdQr xn>κ\yѧRQkFtoFڦ8G<~sW{^+~ u=^ykcR~G/xɝRy9^=i٦GWjJy2I r8 9xɞ8;yqI+*༜TiSQg=JqwiߡSRXKd2!N~:tN.jG/i\=ߘ~_N=xW7H<>.iI stOT>R2o#I~FVFAr+*7\SS$Zjݵκƙy5"Ǖv<=XӧִjJY-WZ^ݎgƿ&«\*C.? 
3yyf+QV؜>շf$T6BG[f5p9Np9=\^KUԇR M| &YG 2F}f/#q{XZR+˷wLFcd팩'9CܶZZRzWVԞkO<cF׹?iʔUaNiOTzo6+G ;m`9Fzr'fwN꺝/s㟅Amump sᘪ18'`kp~jWij}<ϙь*M(qh| 7/!5Դ>", HVVdGQƤO-4kG N3qOO8oqOM|:ސXѴi'#ve#OFhZw;}ކܡ(uyg(uJždEco IҟQƣvkMow4RrW|{R)o}ghV'w9Ajqs-ߦ3 w r՝۹WFj(4l⹸n :;wU' \}<4%R 6ںߓѧjNTӻqto??Wϱ*Ig5KkO8ap>v @l3ʏI'4*|\-|l]xƚdždi$"dSeea ӕچ+^oUrzwmY>D.k/T˚nSF$p1:VXcU{Yj۹ňpدsK5m%o3o0:3jVZkqTUay@Ab)F9ʌF{_O|1&:v1VfIǦ0?[9efOR^O\ZBdYWo[^٣\C]ߙiќӌVu5ϖ]H?#~cg8S߷ҳjr-iʏկض&SH5Q?i)(_BqZ#cHn>̾ZB۷j/v[]Fi (X׶r#9);I:\t}*56>Td~i^ONJr֝,HڝRJ|+[U.+KwƑe)Zɝx- Hp,?62}Azovyr~c{5krCWKJwNZ7jH.nBvT =]1NpqrNr:L-#J?q]ҌJcM.mm"xGXKy .آR0SG=^ *et]zq$g3;-#5Wklf@ Ub.NW]>ڗgmNuĵuV ۻ9=0{gⴒTQ*HynecN73sg^OW,䎯9jb+%+6&MAO|oԏ鞣jX~- *ay9nM [4qgW)sTaRU{nmxg]{};f8 qy(Rm-:8jP%|3j̧k18T`mU|},w b)[ǖ2GO7NԮ[ΕסRaQ˓7`Sۥpԥ)KqVNXC񦏟 nH3OnԗӛO4>[fPvƠ(>g:w,֟Cרe+|ZH!3*~l`>a9<|ZpM;~G/*Vv} %R]?sK;Jzc'}x/vw{m"y(*Ϋwnl-;-Wuc;cgBiu~F/->zŶ9 s׃S:rW{/Z5-TFa oq"4-Hz׋”4ʥH}cjܬymZcE'^\B2#}{ %n# ``O=y֥R+hө$xgPiY>mXZ2SzNf ({ݷŷiPzdu;b W::˚ 'Kq]YT/Nz\h{J~|;1a%)&c^=qd%W:s|ziڢ ;5J/N>ʷO9aUmPI:!sv:xLTeo#Ӽ+$apnV*xb0$g3E=:zS[I4jih~|Z珞\[S'Sԭ%ߘ9lLy8GJP5F5*)ӗlHcW\d#:u#m*˹:Uh.|va.GHa*Ro8mžנJ Z3KzZF%S埉mmɓl~DD#q }zk%NQRZuGԭJ$o]O'cM+Ԃ1Z֔ƛVZ0<Vua~jM莪8:cޑI5͠]jVr_Ͷ~;m78#9I3,,aQÙE_OF>H։jn_!6'u;~MJȮ)` IfXzjq5}|l/kbewSdﮩ|Gw4* u4 #̣/A ?AWu1߻?N<>u$KFZ;.<:HԤfeӽ8'Mz񧈥(FOSxђFW]Dyk= !^e xpqJ4^+}~u/5=rl)VvHq8rךWzv k撣NVӏ,vj:ev0N|&|H˩O. étHէYpqn<s|W)J~tk*tIoN_Wo$l$Ny&=OiFjV϶˫[mE$J0L|}y<Dow߱QtՔ-hVcԚ6eRE)^[ sbw=ǒ^: {ei6bֲıHz~EIK}ZzRv &y#{ed:&؉W]:[wӝG6iCǗˤ)/Fv1'f 4+F[FIŤڶ:O"GWVM#<)v*9^"JڽwZ eЧWIv7͞ i)WHSN멥!Yf&e\OZҲ}L(ʛvg ??s$$g4>M{W,Uj5cc1U(N_ ^zX"@<;_ WW0JGʵH><NG41XSt:(y*52~7kr[CY#;X{O^F+'UbmuԤhO)r1S/R5-'ҹVgo&41Ħ%ba1pON~ڷ}R3JU՚(ks3Gv֫e {z3B]BqdВIw XFIN{s uEYʩI4ò?)eUI<Oe(tlZ:վ4M A<8Xہ3ױJ.4}s[./+3a ܢ*޺z!Gc ʌ|5pۂ8#~|٨$k۹c.fj=k Wb(>}vFu%'EA;Mi aExV޿)׵hd$W,a֗X0T|.3}jSױZLk~a_*5 a gۂ9qޡƤ]hN֗s%hؠQ"3uFU9-q*J BHQWz[-fєc/g}SaēomER4e&-fU,YO S˥Jf˩p_q8݃RR)-Ņlg)H[ 385grѩH * ُhݷr+g}sJ]nMowt13aq=VtQmʬ? ҝ/]Iûېes*WC*K.Zpr}\{ONtW7QںH;^FƟ:gBmPмhyjbsY"\웑O\T8c?4ȥmc>({ZӼ1yϋ~p8s(9i)4ES;+S-r{4 pF7pp1ʥvf! 
QZU;rqrR4`Ռc!br~NO'AUFNڛ[0EX.w*z~g.f1Qi[oL+mʤcW3n7Xo0ٖ`I>qR~өQTmۡZKFy[;vdIJvЦ}fm6֢qwa("Fmgue2adU凧j}{˖5%fNpTW?\JR?/Jy-̑ߜn91W'{- #OM n$ "cUZ<彫t1]'zOP ?xţ>^$w1q]@;+K}k5gkߗMOY_cF̒ ܬ?ʷIYIigS YE>og}KU효ՊuJ:GξYb ׌5iO1NNwvW"-`ҹ귛:iԦ<ֺeR$ጫTTk( nWڧ?ww+eZ^DE99;nF6cfZy݇NyF̏vcc:Щ}zcRUGˌ&{ҧ31'cUn?Z797cZp+4$iҊu%{wam#!Fs՜Eͳ9rԐJCV~hԕ;C_"卡Pl^9JrU%u4<9czTӊPY~\G;}Nِ|]^ ԶeBvrںcRcxRTgkڲ*Gok@npiwJoQ}{WTj^Vѧ57l0X|ڧ$uʴԩyEt04Fo BR<}qc?һ)˙zKo#RQpfo1n$}~c_ZQve[ ~՚  'zq2Q6Aq:|іoR'ϼ^]lyI"L'9##tKrm_jIFen.`mpd;Srq<J3n#-|\t͸S;|77'w gcS*cƾ9rS3P mXU9S?߿ȿaGqq"2pە=ye8}ogj4Kop+ƣ^|MRYP9̈2z`JˢOReR=-C;h|rG|ʇd{qЏ´re=[q*1z3rD-#J#WU(F\{)b14cƄ:"Hwݷ y8D}VxeMC^F扢 慮P.b7܎"eN[Q^]Ϳђ;Hc臜st\'Ј7|}KBd+1lᗑ4QӬ,Wn[ N2uKY6O]|qQZ;rڦ$̪O;}ke*48'[})m*+"}m uzVi9IB$GbX$TRˏOwI.g~VFW{l`jY.[]p~<㊜hsmQk[3"̑.9#jSg3OUqԎ{g>lTvhteTiϙ;ߩօwqNQ:)} .mlj FvD%{ ##֝HU$e]#kCӂ2ܻ/^iOQ2u*{4=:F.sҼrƣ%9Eu:& Lec*Ɲcs\r*skpJ-5&uf<b{YG$D,*pwgkU\E9ҧ }sz..6n9*Nxzޅzo~;O Zn-&qda0p}k͗4OJZ Cy"VɌ8^i9]/BIyu0[wE ʦ<(ncVs34J(qjSn +KrgGqMHrKdraNXI)e1[ ȤU?]WN~}NT+98/"S"[ʱ#ysGs{gg {8{;]6刭ySijקCj-amb#6X'#8\Uo˥׃?e/]mbh&XPƮn8Tp8(ys-|KӹS*cNR\H-Pwhm۳ZFQQTnNR*N/_K1Ygfx2!dTJ>E}}J*Dhl%`mͺv/:>YJT~ER!a*;+Y+4 򭥞I-~ 2yEmJW[ӍYJQ\p/y=}5({_}N_XwR[㉞F'eՓ>f/Lr'GLV%h;y~fQIbNw_AJ>l;URT}ڿ+SRM&dBe8 9檭ERKDJR8J_l!XW|PθZeEfb>|㠮qw*<ЗtGs:CYӞ=kJu)(>w{u+@e1 8֑>RNh]eyɔs0g ZƯӅS)h 6RfҥJ[]/et 1*IxiErIO j.l @9fZrWя3no.dؖi'>VrL}Rw$Io.0͵I 9+ҵ8ӌcIgf\ܳ?`7t'Vr'.TJ:m5g K*fcJvR.Ng:?يyJ̬_sj9T#we$X.&64k o;A#spYj/C(%NkNDhiq+ǵUH rF(|Gk(yٕٶ8su9FN2%nӥ oddk)3E\dhRTMYNsGb>\̸vXJiNDE:n9$ ݴu#V9T63uYVܚ1Ө-^ۍ#K(]ͷ=*EɥZrw{-&g#V@ln2}p; 1 t]7ӧ{j#{DMH>fIyxbu:C5m-+o 3~)a`]kTgF1' eLe8zgzWT1st~WPM iݸ7UGR5/'aU|LʋN34UJ 8F@dTId͹p00=G*a?u}P{ffCe*=`fRoŵi=>1M '߯i*ZOԩUi B $|Y~QSM[YJtVTWC䍎$uVj\l84鸷DM\ь.W5}?xԾOT u-$HU[kI=ki9/gz| q*!eFYOxWm%M(SZX.fk)<7  9#OM 3#u,J$=;S Ҕv&wۼąnyKT\sF^)_1i!Fܱo~jR)'ZrkElŵ0H;UKЪ؉Eشy37Lnͷ';~u<9[zjJTqޫKB6A~Rzgn}3ZFZNT[~JoORnFֱaE)TCvh1 (r3ONJ.zE!uo"m;\$owЧ)]-;(Q4-x L݀z7QrZJя/*&)ϖDeӥrzDJ%) s<mY;I5u,oVFnU|}?CEAԏ/4 o%CUv3nALzUQz'akYhP]+\<9*~B\

    >f_=f-Eިs]}P2 U#)Qhbm[rt8JTdƘYğ#9GR1WJtTco=7#XR'3.reOMmFqI:4."xp3Mq{ͥs:4#9'r)0G&]3VK_SJJT>~q׏rִVaRה+E\3fI7Hr?S\ыӗ|P:s3X~cuI)[\5 o_LW3"rbӫ-7Էl!p?V3d~hYm$u늯i)ǗHiu&W_0/,ZNqsZSھ8 2OsC& m\Hk)%KWgV̪S߿uMk[E{yI`2͌e>8TQSwSbE{My:Ti~$[VP@XH|áR8eӌWGQg%dnv}硇bʝ&Ko>͗>4] +*6rNL۞+han=ߢ})Tu*$$~?YNJ#9@>0x$Gr'EƥijW;>;m=<;FĚ=n"Q"o+| VfOÐ%kɦGdɏe`p3Ԏ!GiG݊QG1*Eohg67:ry¬˛a9'-}91o&}t:5^QqnN+ZHĿo,.{oPmqU -=k]j0wA{ѽf++&maܸ"8NApysߛVNiFoCc)pm['R '`Ŷ >Q\K͵0+6t:ֶiIݝNcxSˡv=%FF2A2r3'[xM>qb ֜*B>45-9I IO2jv02ulqGRvЌqTݻ߫v?HV->k > 'k纆ܮ#F@yRۃT@S Λ{h>1JYW^{y3/Ά^%bы2 N1|LކO4qT6aiO{)eRrB3}yK$= MJ2n][^i'MG3Uk,ppOEsTJ-[U}1bZeۦ_,wƅ扲ۮHyW5xښnQwzu_ F-mZ =tgcy,Vt*XNO.}Zÿ2.I>N+|t{.Gv3 =Z3MŨfl@#aIU^Iԏ:Vjzx75nkVMnfk G'`992:]T)3*rz_;Gf>k &cRwG OfO(J5(C'gYLqC`3׿FYFVU*)Fpo kľ<9:jوd(GG T`ȘpN3{ҩ9o=OO'ȌvNM/~GD°hzjS?1Nz܋F*4kb8]j.:$@n+:ԡ( [%'CS+r*n';?Z\<=fYDP_]|tdEeN^;Џ/m4"]FD cVkri,y 4ѭb83WN¿&ZL~nd{G`ֈ{ua=zE2[]HjǠ9aSP 抯Gٮ"f>qYZ>=O.*XM_ΒV9u!w/J*ތgME^edyc3gEiДydeRrd2,[=+ҍ{sd6'sZ Ƭ"ӧ jI$4+sp3A5,ESkS"r}ˇO##̂ eYKҖ\e 47s3BXr&ObQiܒ+l' o[Bj[l־Eǂ?>2w7n+9W*xxZ*]9T20oMOLtK6#wJ8-Q(7˃]-]|4jJTZɄ;KviZMp+"V?wMI٢븏wF{֦F:=b)KgU$.ۻq^g+Ζ*7R;w2e=BŖONؠGViU*|P+QTU#9ی=8a}qRFtQtk=_W ww~'`0 TqWvW8:iqo66lH[#F? 6zcס>ycȟ>-.m?N]/CR8h}ѫq^ ?BZ;ii(,ʡD!=QRsR[dGaowXbD 23Z9{ EGԙFD<]a,UpNx Y]?vWC1[gx.K ܉ .2(_~=On[;o..l-TKwp-a8FH8>(=F&9)T3XkqQҏVO s9q (=~=<62E$t8hӍCWsӼ;WұzwO{:&o5ZeK[#cS[Q1UfN׾}xZJ1eLNI]p%5T !i6N;;%E(ҔvWMQ>;ІWdɻ`Đx9_9/gYTU֭3<5_NXwUIJGM>mcOѤcDA ~Fis9y#/un1%IcQ`v>}W5ݾ .X׷SwDӇrIaEʾּ%M&ziP9]>Oq"7n{j\Xէ馇#qRٮn#({^ηŤԽr/h/dգm|s8eJ1Mɩ+^4=_⟋6w3nILVnW"/>9u磷rXXZ\u~٥?''WӮjo\~8K,ZiߵV}E5O cþ+Vi7pQH<~W-LDn-WxY}6_mh^iP(3EՑ[hn+(EFOc^iT;|g3f4 M2ݕmC0ːF~ jo{`\v^_O쎺<)-u4IIIw`j7 6X0yXSz%ĺ{jirZD&pU8c_5[?^d~gUƣW[WFl%Fpp~\uFѬɊTU-ӼQ.YrzMtF3+SHͤC.7K [=ö?2į"C#u߇**駯|iڲ}N׃HE"/~Z?P:w済źt۞*֌k-jF mdXnH~m1zٝ}8-w}>򾭮`gmw̫JJmϩ 36`W)uHJǧz R)ږJ(}w:K-T-h]۾VJW)̬cb74_ ߔ67qHF RSIƬ0P7t6r>J M*2N9jŴ;=rO>cv`jB5uV2̪ITM`oj Xc2  ҮX(֋K[kUN:v)j|E5YX ,ۂ,#N@#99,?1rjM{}\#l4{KONtѴPFr1F}OLY|TQWW属#F3j5kg\/Ȓo06"$~ֽdxa_ ;p9WLMI{4ڏn}_i^:?~:-ßv| ׸".8: 8U;T}ak ]#xfu=.pe/H `p6Y\+]켗OvG%97mnڲm59MN8res"RfeJ) ff34̸=7=ft9tˉ,wb%lufNz;ʚiU"j*NrJgRgQJ"HfYtTj~C V"{%GF<[80pO4;a>NJOw>wx#+c)}qZBX_$Ѷ u1gNG ^8֖U9өF"1=ѫ-m"UR 倻NM~Cջw4mkCy+y+33bO͟{~*I%ꩌʣM@-ӂXk/ WFNeyw{dS?~}/;l'2[N4ky-x"UQFcQW|5n5Hx2'>C^k{zv!oZxDFSN9,Zjh̤{Mjj]?o繺%GU|ÎadYPr-|񆪱j~+fknT߂:zgvF\]X|6>x-o6وelF}z~kSwJom JQj|kq*0|}+ R߿C;k4|9$2E-V{:m9TOfj8sr5n駫=ƭ9SҒin4kbԷ]yd>`M !$ gZ:tKGu_[v#4c}z]ko]g⎰3\]ʲNAznㆍl=K'ugm_*h>㦻hRy=309>*2ZNuJ2uۧ ;6i8ˎ]\nxSR [~G|xT۵}?-4}V_k5xX|y;r15yaZz&MJ+Qv4_v_7QFVf*QW'9@rGy| q:~϶u;QO*(|8b2AcܟZvm߉ˎЋwX6 6v){j-oi*J|O-5=tF ʨB29){E+Yf)Js/'ӹ Жv[eV`\a #=IJ^ϖ19m2/5Fic20*|U=`u8>zT9iKj*t*IHD #md|>VqRTy%GqtFvu5lƾʞV16wy|]6Ν]ӂӱw ֗ΒV"/d#2\{f-7{֊iv{5%?YcZQٕ\qMLkWJN6,iJT+$\~JNP :\9>ҜbbؽiZ]7U#7.à#+)T:[!RV[IL\4\,oona~oJ#օ[ٽxZru\xR w%f냸eA$c u!N^Gߛѭ?C.xahWP88:שop*k?1 MSMt@#(',y`BhZ$oNSM%mmkmޏ1dpMI xz}놥ihV-Z/4前#N8x}ԧҴ(ΚqԻXyx!7- 2;?)=:JzΥKR~x&K6Z{˪4c%שn?gvZ.<],dVG)Hd+}Zm4s4#NF X$NH(ΣFJn+F6^SڥIۿGb># gӱ{ocj[c7V1]o̙)^ ksœhO̱.g4r:7d/'/ӑ~U.yӖrܛMľZٗNHs_:T.w~ :9#$Ȕ#eE>Y}|t<֘'\:Ri5gmi*gnIӿZ拗*tQNi|גp\+υubG͝èn;kJʹwTQJk]F=e<趖;_n=]R9nq4pU%z_m8=\39}kA磈RNMYL_"d;TeM6Q]jjʘFQ9+;i!;cܑGl޻ZKK/' ;VK)hcV]mn [To~b*fX'YYRfD,j̍}qvSmmJ&f~4Hf, pHmzѧ-Xʱl+>.ek+UYk+%2qn0)5޾O"Ycr$ S[cݽDQO-B`mG*;znǦIK{mxGV7al# .>vt&(۝׽%;k ۅ1ӜourGٝNjQR%nԳѱf;L`t=#ԃ2D)vml)a|vTc6HMUX <|1x~KsJrngyoط,oım{WEjjT#RtFmY{?vzNymo2դr[^[aY8O\V5)R]9{]/!xlEuby`rOTVc(([pL9 $q/ϿL(Nc:qwćdGc* =}{=gdV\=ڡs(Y{wR#]FKyS~BXrH>eJ.+}i:K–aZ :dAhyƕ+T֩Ћ&Y?2f.FIK}Nb#a{mINÀsq(77a'-6[ۉT4kt\wS*QWziO#_m2{Bs-iSRz#>9:١+&e`ǪsW/eR:-CFQʦHD,W9TPҍjJV^F}6`LoeRi4t}˻+fecYh`۸?/Q]KVi_KrNFY)1y$\uZ1J1ҡ\˨^ fwy|I̍=#ߩBj<㹕*kF;_DDbVE GPΈ'ް#JFܾ\i'[Ԣn gWV0+FZ\ܻj4in!>[_,t\5i. 
H qVN~ΝMc^1R]446 pF1E+^+-J+/Ro6#.79)?m'cRZr_*!a3dz[sjrRړo£@>KW\+2,;|=zu?w+cӌeibhQm~>fR"prTuk5-EiכR+Vjvf<&\Qy]Օ9Y/OCb x[oV>^ݎG8?t7R =[A覷ZHA򎤌j11bҾ΂ue ɐy\+aK+rM YxV7Fq޴pJhS|-J|1ǵhB1.Xr; 4_eGDTS̅7M8K]IVzӮ"n̫?Sе84gk3Kqow"aܽU*k *ӋZ>c9xi];7٫3OjWVcyqpDF?)=ʥ:|}k9}I0 ]moigk$o 0˕L>Ֆu1Q]مOci]-WgZk}̎e,W`K8W!TwsIFvo.KzKRT9^'M}idMY効_) ,~T6x#g}/Fi rUtVUh魢-g˚Y33dnQЌ`޼tJ=D屎"}Nx~euH*% xӾ+ʕj 7v^mhyώ g~ki695UΟ})0H` z`eקӈm8+~~GϿP%дg_IsCm$sƱ 00X4ezW:/]̋jŕ||0wp8Fc|,U?iWάK.r,rǮ{ieO OԟFq422}${=K|()vї".̞5̪$$d8u9kE|)iפ!4!߯VqV!ђXtK\`z:tiO7 ng -g<+|+O# SrM/u/4%,RGpWjP<ϕt89I5-8UM~.{m54dEVji;xwmoo/doM$zm25"ᘕʤۋsdrK4MJΫ|XwpL'̒Rf]׾J~2W]JVo"SQI#ظilfG?\XiT޽Mp}l4ڝȵco#Hѵ£N|T0#4Ӗ2mqQJKwj>u;+Gj^8F g<:㯭g:LW.wy0++ʯAzzsiSG zmDj[6hHUb0J@'+ zQ©FϪ7kkDr-.?v'vD۳1%p8#ު>ZrKM-aک-SiVW],{r}FuHWZEl|IN9ӎ+i7=X˶Xma #+קF2!)K3vqm6oy12 xZ1"՗Ό?,|zn<̓:x9'N*i”y'W~WN ݆q} ʤ%o^<\E.jֆ]y?e #O=<ʣiգ^u-nbRDsm:vK TEդcKEmᅕY^m`8 {Ӳgm<^yזr{z?{T'MFe~g{8|{UWV[C oi4CnWX^&TdKt ͐zB窲Ϲ e%}6NzF$z50ZL @wB}Oԗ-I4cOK}ˍ%r&vq\N2C/g&ṉL`HE{8 cSk͡`qRοR[In'xӺ*5/2+n=ħYVn:mgwXDflo5*T=M(jF ՐIW_}gmB);mqN׭uoiq PY1s\yFR~!U&-sg:w;Hƻ;UV7:ͥGsN#W/Re FF3RN2CnR9*nE r5ьeN _|w!Ti&2ڡ r6Oӡk5~R&3 ׶=gDYվej$mXT=ؔFM__jQ֑u̽N\t{[4S)Bxq>){f Sҵ#~Xuu#D̈́99Qm:sD$׻S'FǗkџH.5xJ} hw3lI} 91XZqneuV/6FQD[^/ηI&ukX܉Xˑs9>!~i,^F׶LI\ЈaUm\ըշ>"nRPnncyǧI&_8h#;wweZTM͓s>b? ||M[-rѳ:$-Pma< f>Oum>Mhr1;Tgݸfhʯ<[RqOծ__3K=S ձ,XYF!=\LF>N CҧIZw}z8~Si?k4R+!x5_IBMTg_=ׄ ڭwZ9BTԱ_YεL"6jT^ޯ~V#lKǷf5s,)?w=۴6pڲ\1u#..>걅Njr-ebJ>ɥx[:qTv4KZx yDI vETgWe}I 6ܨ{&ʍt%̺ eS ql3&x9: KZ7]lj}B;0𧖬bXK`njT(Fϩ.qg65l;;o`lRrzh:snOi3]y0L4jCL͈߹qjNIK6OW`l$ 4ImPu#"1jq%\%Mqo$-+1՛k` #"ye.xfDFc98}ިޟ7=%yıQ`gN=(OZT{R׹Ws+vر-2%opwkIIw8taί+{yƲL,0]X:{.eέ}Em-ęONt{MmV.4+ndwϡ#v5ܣoMcd*#+2 #BE9-X7bnVRk&T,] 9<تqWJcDVszq9^iNm9BTy&4P 6~f^VE#g(k(Ûgc˙;Xl$Ȓ*dd u+V4ī1bYp[z:ę*|Kb-IDVEPWyҜe*]jM:5t72)WXHPN3ܜZ}tJ6:W3n )-׌qO^Žedv%2aY鏮jgchǶ$MWiH`/8$j攤gRymf]VIdiiXX %ΛW2u Vm%gxy$AHN:uk:W=ՓzF7uѿ,6Tx$VY$G<\6] Е˱$C,O^O5ש#R3uݕ$?y#mU-*ЧJOqVJE7yU[53FFѳ1;Cdמ=,tF0TNoMn bHSiGNb*sʛF|Ed"6hsDFRZjա((7u6vW;ۺ\J2R필ǒ;KI"CUlWMJ^Nj;J-})2\rTG,e*wM$mQxYrSr1NS{Eݛ-o >9JQeX:yy#JUcNRZ9#UIm*1gђmnWRh۩ha}X7+ZEqSI.c΍IF/ݖcbD;K.x\9nhY-Y UPyj)Kr}9Fձ;ޗ#ANTeʖSM7Fw{]&EpA8Uk~R*N9IYyQZ~Q_lʴyo%u*^{Yyz!õLs¥Olu/8Zw?.N+J.ކnj_X8G}6+g!w7ޯ7ZUs%uhʥ;._vdmB!*ܑRiɷ| rs2~)EI{=5ȹoA8\/ix*N1k:6bS 푛3ng7Dь-z{zDi5Ul '5~H FQM]$Lt6|#ܢ/-\v9ɈӋjކ\չ+>#aKH*򩑔 0>b1۟_ֺiԧNMj^ﴂw_u-)>d+c |IՊQZb$ڒ^6([Y(3NQj3gS|m-`Yƞwcc'i({9+|'nƀHU>cG!d$tbPy[{BŶ "\KÄP̤cZQ浭}ĥGHۭj!Rd5h³)§ӱڏiOn:/{G4Qn3:q3~sGٻj9ɺYvF)^XL@cץrT)IuQugcj .х\bXg^ 쒿T{={s !,{m{Vҩ4)KNrZv N7zWI{Vxh/fM|̍NŝY;){~r8 NTcXF^B;Yd"\M Y|JV\$w(7('{SӒ,J $=;RJvr~3xNSWӢ*Mu9V8߿cR$zZKnNݤ?/ԃWMDinw}!3q Eʣ-۹-ãF6n<ף U*vKerhԨ֤L4x&ٖ<jКڛJQ?$V>$VׯSKy.]wZ#`2¬YS.3fj]؜DڗMhP4dca.\.2i#50\ui6:D30Xh<`c}MaV4]9-:QYwm-.h\9AKTsf)^fYeڪw8u\է(J CŧV2Mw(Dl?j*Q_FkZiٶby1!A=s ,DެѼlZ}HdhّH-&ÌQ\R1mqJ._\ܐe do;u}wcCڴu}w#Rm+FV`BG8+tvdя u'{gDj{8I]J%)-d gUȯl%|nbIUXsz9*{M*Ijzp)Ty_ޓ DYT\,S9Q=zg%$M D*a6Y;H]iaq0qǸ=LD}&bWѯ"h0{uf^1d*~Cnm=_ķMlLv(88p8t]ɭzyu`Z켽 iRM2oABml8µ9Ri<-Zm~2;pC7I'^+hSTֲJuCm卙Qb:GZU{C`٦ 26a<Dž8<Z)byafNRnx^{ #I4#.eװ۩S pn;r)g/6,x89GPMhtTQ.ȡȬw|93-1RfBTyj:N?ҭSk̮jQ脙.cdQ'$ҫ1tLiѧIYLmol$Vxz]E_VX 'mNZ٫#(VtW G1e}ҍ(M*%W1 RkLѝdv.r}¢iUSkm?rmdo1޸Pik)Rh/-qpudWV==@'ggQu+O+#}'TY |Ƿ嵭ʚnL%'eFf 9G~ yԭ ݒ,7j${ͽۗecnӃ7GedeB;,eA֦5M9NVӠL|@<\N ;TƟn汍IS#RC[M#+ nnFczzgcRU17n-[@OsFW=Fsۊӑg'k.A"!܏OiF4В9|ד>Tb9$z s%}Z2 5ƙo3ZDN\~PHz)8Um5#Qb"]ac 4%YbvdUR~G'etJƖry3W;5ӹ(+,fA#>czt97tMؑo,pSiݧkl7.pG}u+0>r#s>jsQ'i0'G'oZF4riwr=[iP^sۓZs*r:{(mE Vqeq8TVR"S܂%(ТMq@7mTӌ{s8Qr,K)abX0\3zVcfKYYLO ֈRzc.i|]S#7)$vQR%''BYKąQ0K7˴t$tjf*0}]FH eM90>2V3*qzR3'&d1s'rƔjJ۵ QW p_LvT/GVT'fbxLFF$ 3rWz^."5SU;ylq~4)&J|ÆTn_oι15ԓmae<c2:)A V2ɗ6/ҴuOS4#T\(|Uqoq.mnNUm1i#4k@<}rr#ϖIr 
{['TܹoN?il"6ξZU#F̥y=sT)GSc,Idc=O>Q%+y}5ԒdL3̹9jJqkrUX)ۻ&m<hF"c2u⹪rƒ/'9W?y|d)ϱ~R*QQz+ANh<+yגs<+l5oM1Z7EunV8crUfDHdr_̍oǏָjTq]6bOh=_oιg#ҩHdmmOdlu\CSQ`b ̻Mz)۹)TаfȌ {ƣ2rIKGyP3pJ2TtԹyм(Fi#V=\Ӟߕ'' '/,Fv:,zuW2唝Һ-+HԔvSvګGcO9y_qahXy!N;wOіm٭%Yf_%2rAOu<{yrK -.&|cz[IX}fdX*#$m㱯F5%(|K]mtwsOG}|Dž[hI叴E  o?>㜾orR>W^Vݺ[S3-Vׯ_߇. }WinfV2:~sF;i6ӧ*xTWe߈~1.<-<jb$ҠwOpO K]6y3'%N{rkϰhJz->,x:sK3Y~&&RxQER'0@Qھws [-~kZe^/))l.d##iОve8u-JWWd엧ODԾ_ÿc8n,n4n%۞RuC?uvF5s/vJNmgY_V1ZrlVQ@֡?5fqN3Qʕ]?gpíxŚ4~cY@?tg''VIʚi^gcӌ"[Gzwu_x{O:e l \sNG矮=}Omon9$em:tɜ9ֽF2'硞T#Mo]i?4;Lw#rG 2I5rgƘU:IЎ⟊~Me4Kyiծæ8B1]XlQ毎VoC?06~شYaʲC1#8<^YznL%RU#ZZFKǞߴ& ?H6|0@nYKUnպGeEF뾛۹ռKq[^^e;]HBT1@<'sz׷R]_ub`k=͏ڏ{H?OM^"Fd+`hr罯v]ZV+a%1n/bscі>yV_wu~?OFrfe|mp%im~k\F  _NAs_⡌qwo'.Z|wri߽uMFŧ#&6~SuѬr}^UIŤoSA}z&yֵjBy$o6a%v0`J1|]2{( yV#)ki쎊sr1suGA$}k|.?u}G_C+ #\>#^37-j7k<~Cn {8M 5M wKK$~nsF2U!+r~efw[YFTՈXwu~J>\butH3\xERKu Aʜ*+{j䩧r!TnoVO!/cNꓻ,iz!ݹJeFVFU%/h5ړQ&*Z"u#Mtg0QaG ?.ޥiM$yVT*Ju3)4-*l± :W7OG[ZJml_3BEm${&Aynԯk)TikԼ)YW |XyjQzTFq!ke hxDdԬc)y&ֽ &ꑵM7ʮ͍|ZГG^0$5}"Mleܢz +(MmZ'V˷t>s.L@[מFۧ&"Px_GԭAy44W[+P7ڷ|[N8lsTj5&oCTz4W_fa|XZS̢տvyU[.? .;&I3q#n`TH9^n'O&z<6i*4]I2H 9zׇF=ݺuoiZ{tM>x[6j}9aWoiQᬧZ;m&ogkpJ|Y'$F=8c1m良bt9bi!Uw]k̝fm[S{M<,UHBF.{3ڴ΅)~-_5]zhשVZhdVqǭ{iMF/OCzcf7aV8ZDTT( M6=_H5 Y/ ɍ63r9z:UnxrͱR(Bҷ_Hq4EVS6A89+MI)^?#2jZgJϨh>V6̽ .1s:%gt6Vy7I<I`r=p5R>:<-wĦx0IZ0Ѷ2w}WU=J8~V0Tm:zj5?\Zi77W2I5 ŋ6YN1EK;YާONK|N4oWmGQtOc2V'hp잏[zGVb|y3teR|ƕyY(R+=#h;lfiJQ,DQP_yJͺlcwM7/9TtU%)yG][^:*4Ҫ*:0Ze}k;Ŗ$3Yps/]a(ǩ4œfmt:K7$1.gl-o"ȫ嬋Dz t_RI]=XuZ:T*&7.3NS|Lj5dx=tidFۺ4vErAڽμ5aV G𲃟qJ%nmӖΧu-Fm7ðƫc*&Ӄ~**pqzJ4ViOkYë1$k k~=2Succ>N'ջP{W:"PܶcsuchE:Ц*+ߪ8x߻mʑO^qޚiǚ0rD~}]}]}{{ǔ9wLFLKgEQj_[,mDQLq4fBK7}+XޤIT\ԸzuęyuϨKJERE AZpyF֗cWr)E^gDǪjPq^!Rc<;4[|۽5q7xErF76 z`r'G^VxZ1jeM),Yڪ!ڼ~Ub.nU{=lf#ZR$o괿[5Ri]=])֍_hy_V9r]S\ԵMNXVOG;O'ÁǮxBJRmGd*Ftg ^q Hhݺ`t?*-LC]SOoSUq mc%2Hx֔֯M]{k*ά-(r)[~gx_d.]cA'^VVT {ݾ^f,a 6rp$篵x(Կ5:jJJin_!>$kY??1j+*1Jp2sG8Ԕ^Y$i9#c<.]?ZʗwȊą,cښVf38&ӏM,;#vV&coN[8_^Ҍc9r%~SP-'o-~ ^quuv麒_*i>Zz]G3<{f1/ڤtzwZc͢<ӌ}B2yq^:tУNiK66BFUs W8yUx^?j33`'9pkxY9[z~VVu3|C㧱uf`$*wco¶stat߷TzTpvשYr7 \GtXTL\cUэu~hVݫ]H[Gk2Nΰ2<R>K]MNaokX,]x*u%w_x4RIJ7u!_m dlyTu+/2T*,Z9DWO0.cߵT4[ϡ#-u^~Gx[:g}"DC`$󲭈Z-E;Mu&4Ca4pW<hTry85i UѨo=HㅩQj6A-#A>% eQnovq-{G*:=<ͣBtb}t0ln|_$Ҽ-m2HHKxV'ӣ{?_˙-#4F1g'SrG9F픥RkKXwn|=e7ٵb3M9RPVz>դ;0p%s}m X0(`N |w.nYY%۝j+{Y]sC揌d=\82m5י8&՟x6-hCxk$ǒXWKQNJHq>{˧-ؾ{/c랠w+gR.Vˊ*`*5mg\Lv|JtmR\qZ3mNZ5V"1rŭݙ^lwnm?/Z\Dj{XJ/6KB0434r(Fwq_jʫ^g6J~ 6՟[[`1Gp]F犪u|}̼BNY֞Ivi!2xCzorJWZrԧ^\\^&Ē(K{?qۅF1yREk̻2Aۊz+l77mESw*.cg߿=zTchJtINj]F sQk5#R4zvm6> HƔb']FľjckOA3]e/lF@fBBϨ~%+4uT6x'\meo>k2cN$& i۬홺k4=NRVz#`k 3dT(?SϭiJL3}[.Z gS]5r0(ek:).@]7  뫛QNzioN6AmmŮZYL;qʵb!R?sWh-!f9On{EZvL^/M|H7qomm&SRF犟gqJ蹶V?RXeLwclH˃D)]9N^wkp\?21KI\'n~VUS Fw pԳ) ^6zA{Uԩ(ŵsjao݄_nRrv扡Ta[s瞽oj='Sx5~lZxKPNSVҪ}ycRX 7|5c.Ӆe+'"mIf_?~QrKT/wksҲnoAǭƜݓ_2)KmO˟Iqj;yӣzɷ{- nX[o\3mд%$װ*ݷA2s_[B^ʚ8PR{4#LE%ʤʥ-8X3mG wAySnjZiUVob~Z1d\}iFsUԦWy$o >[2Ez,y۩yybhu;"8F8ڣ{`1׊4Z2N3Yd6ܨT^=ҏz;1:RR5vKB˫2([%v?zco7sUg;r&[hx{77S#<ZThd,UùSZ2Hq(+mc4SkztkV Fzgq޹8KuhԞ[멹g6 9Wef|$kZHЪ?|S'+MErymCpzY-PTRq6(Un=Y5M*iYvAk~i?@6\,XNu);w~|ǥ8p!Uou3sKJ-IWsƤjFw:ΌiIT%[L7|1VtjQeK-R-IiN/&pQr8³QOֳԦ̋`drٹ'aN1}t5-dfA^G_˨&}NQm8\Ss$w0[In}jsRYi%.29Jddqp3㊯yQqOGJ9)hG>W/1ܮgZNN_ Vݻ'(0nf)_¹-%.ڑ^59e꼭#Cov**n9~srǙ=ɥ*ձ.5f-~Q/+q"/8kxҋ̎Joz%$#Q)R4z|oDOtEkLɹv`&F>8q`qKߺ1?i7Xş.G-d`HъN6 }u߹Z֒Ms> _. 
}k¿n-w[ƭ#?/Ž3RqáB%՜ DONݧbBЄT̢>}|e[$ս5G>$D/4y#O4FbyHUXm+GK9JT%Cm7wmјT'${ko[k&}}č 4 r 1XW&ߥN^*uotC֟oʤmĸ)qĂny׌yfLKht{ {ۭSQүՃ7poo0ܚF}6] RV_ߊ wjk5WKvYn'w*A<іc+Fr[盙eƕzM{m|·ſҵZ_&o92nXӀ~_Pkb"_Տ/Wu~h{.gϏ ºQEo!Wn'< {W*a}+ QJ30z6޻g~iW F& CqۀKʞS[hZ84*jI5z;^/5m=n$FF9UyRG;}>XzU۵`処'r}p5Kui!cEkȫxލH:i.k_'{6é0vi/ܹoH2\4sȾEvypA'ڕ<,yomVvԥeFe:|}n㷆d;Y"-#oZS{SX|ċc 2e,+~|)TBQ],Dt D{5/.uIo zĿoᗐ@opr)P>lIbehw[>?5SWմb6x#/Lp0PezX)K8_mcEkk?UN}N;Ǡ=1XI$c潿!ߧ#߹Zܷk2ekydg$S\OU<=YaNܴhcaoysUbQrJԊtE,~.|,xCێ3=C^V~IYkSG!Yxz- ӎ{U_e啟6J'dE+Is3*&.eeHxz4;^Kͭ[Q)M8ow}wۦ4Wj__-d6r[-$UrchlǨ=qCQӔRT)owo>GC:G,5\iL3{H $Gqv_$^= X=EZd*/9mmgҹl.(T^d6[i p-wT+^އtaM*r,\ypolAV5RK8畔.B_ƳR>Λίi)-dW췓~^6Q^i$I{?F<y?g NmV%r{i{ۦ}9z {sVRZiFrRzLJ[DOF/mVb^:z|Zp]NJ;$LhL<7 ')JNISF// CzWۢ*FզiBX+Lx\U7J[DbO/3%%Y[}Q/:2Z8:Y^ؙ6U|KWԴȥ6Gg͂Aqc=&ГQZ_xZ9FOkwg.}o}yT<#Wԭ-dѧ)UNI#?4?B/sIX7?wACAv@r]4=C5WEѵ]NDKksxuk:*Na)7SH]B_5<$ <:lʤnyJm(9>оwk6i,$pq8=xSEkvFtF5v3< .g֭-_g,v+ mI߰s4,.xrkmW+$-Xnc'}+1:I]//3)ͫjZ[di2s JqZ+[x? -qs㫭/Pxmm/. =~G<Ö:qNLG ֥쯄/处*y >a)t~e[/Ŀ.ʟ'5o@W2b6N鴁ǵcNK{Ibi{MD/斒ls뺥G"50)N嘆v+HVvVl2FϥsZNkw}>iXv?w}ITq~f8DRW-u5+.]~,}=ӕΨSR{[ $ Pp9~>ҝI$sVWO )$cs3%IL[zu#y=֭ >-u`扸8*rtyTSByG1#?S]\tb!9SFSp-ħn2Vr9#AOF4-։y^Glm8%NTwP\.9U;Us8uRIaFD7&Trr={+oih[4pm"|qr~TSFqP~cBFfę>EjM4#ΆGfj fftYFVR+L$ѡV^8ݟ_/iV1yEň'3Y@=֒RQk_а' uzVj4k$Y'Oj܍]Vv,1Ywu9O<9b+4ZhɅZ/|Es]QN[U㘤%vJVhވ/8SZSe#3jDc-:c0ݸ1z}rrVS󬇀2 '9>%5%g p-?NoxU9ۂo#&]=*#)smNR n-!-4QY#(r^ rgl6+Гϥ(.Y;-ػu0{)GhN2RO777v)ϡlمT- 'W-"eorwQITШԌ%+ĔK];lr,ċS=},a5-THtq [> aR\^ƵJh\f?ސp#9?N٣r2Z3o-cK?)hc,Ke<~U%$Ԫn42Į0,}{{psB>Zjk0X+V\n'gAg1iZ IH&w7˷G拶rRWR~} }n\M2Q7cH9N2Gw=c;\VmC1oNGj4ʴ}L_=)wFeb gn3=:Jqp 4䡹p$m;uxQ[TJS;m+_8~hT勺NTӂ׳0eٹ8QyI[ZEw5(g̭7Iˎ_zӥ ܪ<sZvZ  ݫsyǷAҴk+[2kT)NVOEȶmݐwqⳝ9nʴG9۞ppy(ߏ*8z~[i?v}sZQg^AdVeP)SkXNv1rzQR-DrJӖFT >+|Ǖ9+ZT,6:ՍHGmĖ?%cy+i*љ% ֒\f8NRz>77DF͹uGf1qRDx.aUm֖"3?uQ)Ik2ı$?5>]IOIOVùFU(ʤm"q)^qo%ݖ"34[:j6b\ܼƔdDF|7,(c:йj29fڷݹ%ҭ)rKTaZ7%ױ! R1UϿ֫V9刕:+ll[nycz3V6fdCyr^UI גKRNjoCbY-"^aSמ?ZN*nuǖT+niE+UrhN&X|UwF:&Zr;DMFC}:5+:/k>lyo0e+2J3vTo%jy?)֣T)23H(Y+)8מ=>*~*Sќu0r}q8w MiԩS)XOcV:ŌF7c_A>6z;TTfGS:s6ʵC/P&ָf wX^Ӛ10Q~h%bVeI`c\^Ί%MKX~qJNg* ՕHmZ[J~VvĐsk 4} % -p8W确x:rm-Eo#kK.{i'vLBvǧ=Vӭz(]uZ^@*4jF̣99MIC(M*T^Efu:$ܻ`z#cQ/7um7?ҵT?S0xVT@b$䓞A<#m42Gj,̲nU ߿zeQ]NhE5fF{EnJ:NrS3=-o2DncwqYhHڌFMF*6Fz&TSNZ;C2IdHa|w[kd؏+iM̝W+o_#"՚+lng.OCSH:u-*DM&gO*F˚*֥۴nCHC?U%*qeS>U}9}zm[FY*3s1NkTϣK;#-㵾㙓KRylǩdqڻ*I{>h%~*x|пkZEHl39lYjxk%J6NM1XF&Mf2=<9B7gNW㵈R.Cm5S$_]B?UO}?'3+nݻ:sֻ*1*j؁-[ 1^ZMiU\[JƕyjPUgOCOݷ|SBU$얥aF&л" Gq\gS(-''f扦Fd ,nhk r5WkŽa/٣LoY ϵy +UWSҵde2A&LrN0x#k{;&pc=ߗ^卹VKAy*߹vyrq('F3G}2\Nr^mqeJi˱²yVio#/'S$pyQE{ΜKMq&c[Z5c}vƝz|"iӓOqIp iFUXv cmGO|EAefY){Qǹ7kʽ]Rѕ쮡xhD{r>[݆DJGx3H[1; +l}Et*MYyc\Id1{Ն?9K-{by$,}'M[~̏J)9ѺKrq LAao‘hT7~%.lZ+($TW[ie.al- 8}iQi|5ETspo^zkܑrڽnjjgASnV5Zr\lI; Dyqק(V^"1J5ɥշCb(VMǕ?T]{2*h̞6idg-rǖEbpG #)]IkkjؙFj~hU[&X,mIU3z:tTT ~7?RNյS"M1bzt]qJ;挩vv^DrV3H)A 1?Iʞz./֔ru#dY{w)ڲ =3ڲ׊TjMFPSV4j'++rZe˫-; 8a]:=i?3ԫ1]_ol,dy s.`urkZM\N`Ւ$Af1tj~&IsǧZѕ'Wp*UkzNUcKlޯ`?\~U9FwN˗躯&XhJ6SјMI'AGmuy<2im%VqG$vZ1rߦ敽N:}β7ӫ^vGNzw_\۶^ңeqn@UsqC/$8_-nCdpN1t`V[]w*i$j3⴦yRҒBnm.6 F#N#YlsԋHQbv~ZYN;SD>,6g|`b=[r4h+jOup% 䎙tcU=-۶1g%W&*U婤r]G2HYFI9Ⲕ/BˈDw96ۮ ݻ=ִӻzjjmlKlQ7wi8SJԩ/h9F˥He93yo#)Ҍ%;[*͸J-P'%v3Vl%Ud^FNW kƘt2+w, =&] sg`=^%R#hi`Ufa梾ݣ9 XʢG;Ib o^q;cQ"2'[Gn*d,ő_qU:d$Mp!ƪ815<쯙Q];. /P~aҲO*gw%]cg1̪w1%ΥshtJSʯt싳q &ߘ^:([I!Qq lP#=} Dv*# .d #2^큎1U |ɹH={^zf%`NgtVXX^/NbDY _38ggqUSdҩ(Կ.&b&El lU mcV{+~c.!t0JN6N׹U=\p<eaDIc*q>ʴ/k49̱[6[9c[Z7|Ĝyy D8oXpr匞^ꏩ4P[nL+2Nzv굟,Wp1,LH$y038V50Z":Cn & Ƥ+ep̣;Am")BovQj=}Gi䄖h+chTJb*u8B`Oaq4rn>goF[o\qxkӂg$Uj5%f^7+ʝ !^}{?,ef{^7Ģ:U¿+ت<:ýc%SUX^6Oa.j%nMxFŊO zsۓYqEƕH먑y:/n94a* 61ݟ :keNlaiMl)9~Uo:~E:uܭi!+uRrO.21E+ju(*<$$j=g8U/2GŐۙ#[|Ȁ94劗5k.V-*mb27'mAWtj4maNF1ϽU:r"}Utlln\/iCw]]s:U.j$ȗ ! 
o[85*E?iRVi/d7:XOـ8=?*k[DS4K60?k~YJ:j)NO.-(.DKgӞ#?z1R;VH)F 9WvwN{eQ\5u ߒ1XJMQWm\.I;y_vjң覂Mpl-ޥіteOWeGfFGt6K:GMg(wY~?҉Pg,5G1\^v8?_z(rNoYɑ$[ >w}H:wWo#hFI8'xd[w`Ԫvќ*qdFf+ bm^.\nBmCkwܛyF9$uk FL.z.H &,NH5ym'%$H*i3c5Vqb?QQ- #ǏoQюsyOCYӔkBVSw,.(ex/Ⱦ#xcʵ?ipj4~V"]>]ʹn3'gJMV\e9`/TStV{ߵ?h*P8R ױkumdSej;Xaq4,fUXw8()*[-z5q<^5əc[[rL#>>WO8Ǚ;}?cc7WCGN.ѹ[}20I$7r9=: wa'wc*ʔRjwyHľ42Nkc32Ad[p8up"[ucWVZ˥p|kiez-y^-gD9pr /#9StRN1w>mCZ1͸[y$hRmAI=1e-C;im*C2 @{W#߽̺jSr9QJ]7STiĪYFw#/ o3!*ԁֽ\=c&]CI')m~5P7lb7t9$#SRlՇR/{{v< s#<N;zbM{Oqz1uvO<z$.|Aߎ2S9]ߠ*sv=ᾷcwD,6auz vp;c>c rRKEӿc1FQ]7Iv`}iRDHp 8u׆"GgE:Zu}>Ю-/|W8m.av'T)эkݮ>MTO{ײ%y\DأmFuJTk2C#;Kч喫<_:[!LY?yW)ũ=ƥ~m,:rN)w%+|m>[46y)SΜ w4w$h3B۶t)vNj'QƜ6ռO{z "z >|JRC8GMwu&S㈔GZũoЎ P弝(CZhiI4|'~{x{QR"(D!XɮNr9BlЃÒȦP{wԕ: 5q7ribr:n.Y?9*(ʣ4H497+^Jvz?# o\$q̨W >kͫJdKc~x39L2yG'I`@XWWlfKͻ&č=:ףKHᥖ¤eyuз7=#PQu֘٘%21ROqZu5mΌvEhk0L]E9ֱ8F:*VGh2X?CϽr83uR7Z.cf/jةݸzNA}V8t}~m5Co" Hxzrt&4}}nhL\6T>A\3@㌃]T~I'ש}4N5Z( 6P6u*NRvrW(aYobmE,TNߌEc|pxLLiϡG?}v!\2Uۜ~/obiƢm+?S'g^9F1Zce͔m|Eqo8W"8,co^[q"ȯiv~|%cOe 'k nl*ϾUZ6oo_<%U\ۯw3#-©%dwcϩK $:TQiOqA0%W~R:W4RmnR#.e >epJv(ZSnn; {vfsI 7 tSz'JQuZZMѲ.#NDJOLqE/鞏r]ʩd_"Dd`GNsLEv>,]9O?~bOBH~FL@sFU)TQѤmgr&Zoߙ!DѤ13Ք(w<=j~&SooXggF`eFH^Üy_XV[N:uRj~kJ3Me y zWW83i_cek~3FKMwt:#Uy $o\r @>ZvcGs[)ԧ,T~{ ^@u@|7p8 |.S|z?Cף(e kf.;[¹#>.qfƤmoOǧs"\Q/{3meFVlvA2?{T0ӭ'uxHJKjv>cjK[uC$$?9x<={~KB%)-.30tiyZ;)MNO(Y( _WG-c–eR8O<h7LOw\#9b6@_Q4ou]NxUI/OsuMGe[y\[\o3}7=<=GwF){7<Ɵ/"*ͪ#M6K\#XXdr1؜zjSI{I5ͷ!鳐[v#u pOnqt{-o.aG^HK;:φP}זԶ$'NT̨ѧ媯k|ʣ}ѩvOO;vп_O.VԣXg!e`>`7sHY?R!ZЌTcN;^g9OOCXmY-8x[nX'&d){R;E+Y/>tإX i cۧM?GJ7ϡQ<9= تJ y8< ネjs-oSZ^Ϛ.ҺV|n ڴ/y&8#qEe4qXw[rzQ[ԕ;]&eOҢ_]Bڵ$'c#uueעO/FcUr֌n#fM )c#Yh^ySksvC#ָR2u--$w WE:ti>+r|:*Z#6b~j+Iޥ:ȅoFQvEUM;G?6dY 껈aw2[G/L,6K3w-[SLFZ.Xȁ6c  ӈ=ӡYtn[J˕[CMWXh%w7bLE4_#:љwLjY^IrrzvJyrƷ›mGU~Wo< 1tj5+&~gBTx%ךEg$}Roi=.i;^Ԗaksm/<QX 9~n+s9l~k*Fծtg{yKGK[Y5:|P>;]3G.|XU pNMzõ Y]u)Umzy7VԦgeg r9)JrV[_ Z\Egf3rd>H:+RyF{yz?ؒ#?3}{~ul,}R}RMSlj"^8|۹3ӌHJ2w=]Bnjĩ $xޱGZvM;0*qJ|hޗ}w=s?-t^8@k|v;vgn `5ZVdm|6?ҽOUK:qKi#|Zɻ\.6l91ۋnnuW"5es(靧8#ȣظЏrTt~K}R飷ȒE6ۢzN3WjSr=xSPђcf g}5ˈEwZS=3ޫHTFsӞ qSϓw?6>n0ϧӥzXzGЫNr3ŭ풗PW/4ہs\u%{, ͧIpXRĨyrqϯWC+_t%(+WϞgrI|0ΗJ S#r<& V^0y LڷWkťxsZK-WN7;g0*qOqhZ/Fߢ^|a"^=׻Ow|~j~Co3]BnN9,nC`dd+0\Jϲ BocBIm_Ϡ#-ṷtF x#s_?ϕ*?{E1w\jMč-~N}FVO> )9;ɗ2 .s3dn*;=*NzOz2OdFx9ӨߦOi%m׸r6STT:~Iu{ηcLnEt Ҧ]}AJu"ulaC"O==j$qY[$*Wlt;W<jѳ(fHʁd> 'q嶫cJr妓N!O,pHjG*ެ#eۋ1 mHF TN)ŊTTCc7yf1,*0}Jʤe{ޢU|y2h9r=v&hے{w mN6y&\V=\Nr]r&5+򌟛 =Ȩ"cc)GR EU]4ylֳ=_QAKةJ un-r#_5Ya@V|KETJq׻Ў'U1? Ȭ`G 펕P}2 s\*Mv#Ctg3'͏b 9=i-=5Tc.zݮP7R}3 ]w*c=}IRQZ'NYbO˺33Ah1E qwoݷ.t;Vd7b+@b2|}:1StRf]ʪdbF\x3i{$WɟUx~V ig:\#GN^m]iV]B5+ |Rr>Tk'wPJcBd唬aʹ}=+ Y*pkk_nVhMŹO-T[Q cuiƦ[#/P{S\w0^5J4I6/,}٘[1]8FRodޟ2*i{851u B)dliQâ{[Ւw!XZF' e#j\{R`8?%:!:jsWCWRw*i)3h:G7 O'H(*lJ2e(:ɴv1ҹ+|hFR> kwɴO֝Jܴk(խ @P(kG ifPZr̡gszu*KJ+NnflFHϗK{RY=n~m³˲Υ(ߛFiĄFYy'pgzWsK# o-:/޹jӍZ|j(ʎ2jC>wA'-W+Rʟhϯ=j*qw4J50i- JqoO\R)ҧ>Voԙ _ۯKUTy2yXFni)Ǖt8uZ*n< ˑs;<-zN7&ebJO;SVX9Jqv))#++t/1 $zT*ҧO+EyW'`2žF>,ElK,c泧VRoBch%,r뷀Yzu\7Pө̜&xi3*}kCgyX!Pu5Zvfpzҹye%k\ΥH{6jʷaXJVNIYjjBdyOjG[6gRxot\.F9xU=[*MGHs1?zGHqUI++օkb!椤6UU gSQ^{ߙjQ?}+i֖֔z{ ^NG7=Q5eU>b#3J1^ɻg*GY5}.u8UkPFzg»rӸREEɭ5~d| jn,Zsټhcnz1քqv1NKaxXtkK+{Kcr;`cN=HiQW."TO խ'i:?[Rڷ<އOzr SsXadYf+?4_!^ӤJW :rza銥oѧqLf#|w[eNNN>$gF]tH_1T]߮+Ϋ5urxy>AM6ľ^ǧ,qh-&|±, s7q 9nۯЩcbGv/<]~+R޳j B~q 3`lw͕Jy5z__i=^_]/Vվђj,FxǃNk'm_4iRjk^~''n&4ǑR?8C,;!dSYJIt~B+ƾ]A8nUFa`rWe8]kӫi,L*itK~??~{vm-va ג9Z/<ڽNJj͵k3z [jv~K@b]m2Nv =q1*p۾fr~04z&ҦeW/mGxk)AcBסmu q妾cWѕ= xti4pKg #4fsP洖툠Zw1oěg4,ixfKe" WkPqYcr|4*^7]32aN_U̬W]}vJLȒWo')S. 
m>'kwa#S5m$b6?3&#srAOlx4m*GmUk5(G՝OuzѥsCp0_#Un0PrAzH諟RiOdKӧOC ZxRQUnoORҼIj6P/*N m'd5{WvKӻ{hJGG֣y5-Ǝ%0,49U`0z#c y0~[yF晀-3I^XTqVsb*J|GcZ{sVm<~+ u63:q%OYd2*1A?0g WU):I=ߩƥ{$Ʒ׭⸚iL}nq8^Ig']iZ(J)JouIM$cq]JI#= |u5^_V:Ե796Mteq8zd=8>eO<ű> מ_xf%;d-(97ةSnɢ7 c nv7}s<9S]K1g}Z3N\yL3y>n?Zu2iGQ<6/c<;yrBlے{pA|i3(4q浑Df̅ksךFrG1NFΓfٮI僷9W^-7M[שćĞ:NH^f7~Z$%d#8@9K/VLn{#-B]v4HȸvTt^X%m%NZZK[ՒUvclx99{>c7uk?>w4n<ϯjé[?NǏ^\v[3mF=Zuzg﹜eR%*[yy>aiVJj$[npGFS<z\#^_qcG4B6A8*#2zd w_Uko$SbEwjsC/m/w!b9f'lcpV1XwJjK[4]$T_y+9AуwRʥ>W^鳱kHu] Ǐ4\ǒOLOWK9^ZTxdtz9϶Egqji6B|&&v}Z2eqrceG,tTEmNOZ ᙙ (]ʮ9ʎɯsl]xVR[Kmk#Rm߮|˾$XJEqsq $#d^O?ӯ̱QNO[+_[%tb1U\tiۿx.H7f!3vCQ*k{9;=K4JOE˦)泡ͪi&KQdYv1quxx,ǚ5o{ގexllx;ikϷ|wm->,5KC_ȱ:60G`k7GWWJTT,zvs⫯ yqpʿl-~FI+'L~M(ݺ_uk֋F="IkyռU%֞*ߴӒH :cIbiӚ-z_xYE,Cn˲/jWv?]5.#rNHb@#?,:4VJrƌ]~h%OWgkbOO NuU QJ7OϯGTɸ]181δOM;0+%UMfV]ˏc7J^m?mZfmeE$Ŏ޸umԩhV5)#cWכZJG6yj2U%yΦ m*ǎ?:(_59MZ~gfl1aO'OUNzܭmH}J&swc98+9?ֳ䪬])Y UnSׁ%BkVuJ<%_[OƑqN̶=4ɓú ڎNd<Ҷ4RF,6'9je[ݘF:r\_oty>U,1th>k+e%c.Xn4O\}u@D*8[)sU*x6v>j8E+'b>b+8o}?Ku^ RѢ. vpI2{}r+iʍ;0xkox˅d{g{Ӂ8kӭ,RRW7c`$_c*fy_QNo^krŸQV}3JocuM %  .d?tzե-o>FjXyJ.g7k38fߚ9/w/PG=ިtTɽN ũ"GZkMeky8vEFHH:򩧇y{)J>+DhiSn۷ 9:n,8h×S(uXBʰ#ZΤ}SRZ5=nCo_„~}F ?>i)Sn`%,yi`jQUl%ҏr{y[2m[-']JQwoS&={ ,K;g*HF$ܪAcB9Za۷;LdQ3sZR>h(}FC,Gc`ov*Tmt=YgY.M7n79rW! jUBX v*dg 䓻frvW;)> s7{2姪_,q[>q>~yy8FpeHktȱist' xQ|]v:E;G4rG6Xq=+EE?w\F\<*wTq;ɫzC2Ȳ8(z <} .vYy$*'F~3R匮:%%Tk$.A3t(ZKNLGV0cZ^w$20ݘGʤ8<{5pqomy"vDdc2/ Å眃;!F7'-A +|ރ?WG,{Ӽ"K6#f ϧA޳{e)u[=+:mJbW,~!YwI*xՈVu$JVVj{YdY'ι?֦2eh]ʗHw&魁9mpwJXx7^qG%%uљڍm$`ߵٛq'?Sdq֌))˦u`H/Ջ`B=8H\{py*VߏSz޻yaoܧa Q/v)?/뭎w[w.۷;})B2-R׍=;:],\sWKReT;}J3 pw+K#7:ָ*F5)z}Jv;9k~Q:(FkB2qlBhǖ-!9|0e mʑ2~zݻRT^LXc qZSk3"cI=k9J"ԕY]-;V?7gmOVaof"xeb0[KE85/>I=w {hY*ZQ١kfG vjb$*2InՎ"]яj#.UyltԜW_767{ET8'ȽFzeXnfX4cH٥1Nс}ҸkQq#*!iicb$`9Tq.G"NN捔,OuFQ.8˾ޛ_n8zҋ)_bʏ$&#q=H>U˧($kiuI#/o=I{ڻ"PNVŸtȕēILbBqqUYJ5ȭJZ5Ъg7`_ú[e7vlU=9v{j2ivr,O#pDY?[\5Th/q^r\Jo+|H}iW-=-nVkY8B6>Jli,55N^0Գ79TM#7RpN;)\žEjuSq)ɤ$wP7u휣{ؕF䎞}V622b?Jr^՝9JSZoʰG#(K~f ErٵisoFdn@ZZRQvNާEki-Cu4ӼlM>Xk/3t@^~rԃOi^ m sONquةS[?:=̝?-d¶6N5IS|>!:t:+XtFf9瓓ֹM;%۹y>vWk\4 yſxP:d<1ZNRYVzD[hUfu6>XXgG]IYz\n[Vl_p~TjIII+.̃옏t2~^??œRNM'DP꒏CoztmN\m>ZIUS%EP_1"8֌]mh?#,9$=t*9c;uGV,,#~;(8,a$Sf4MZYDpLv篱%R׿q$W-[:KM;X~=r8+V)PQZ[ۢ*T[8+8֋WpV%MSk7bcH3;V+4qr{(j#egdH{z-J}g4*J2RNj {]\Ypy~5Rqi"P}jm>LQmb2Txϥa[6m3UX֦]:=ɌҎ!ZKӮ)I< {oa[8)u_c'OЫgq$F4̧ =+~I{.iE-IQ2Ek$.Ϝ נdd%?4uHц\5y&s`89(^kMjO}퍌{'PaGn?mO|:}|-jXWoV֋NˠvV6eVrdJN)ӕiJ7zߣGݤkJF71%H b yuN"RIBE];z[]O'㌿ɕ'qZiB4ѡ[XiY7Ÿqw{s^ܵ:)ȆD6EdO,)XnD{lTaiRm=,ZIk]صK^_FQ7E%w<*̨2#kPaVI_Op4y nfmҔ)٭NiJCffK 9V>Vƴp{vlJ'~Rn‚s g'JREVGjUIi+V 3 99jҔ}T %Q]-GmYk'c^k(k˻$bx"ofI>l#6 RO: ۬)3m'tdUӽosI;0 4fg;X]=U/i,*ߐ[sM5c VJDD-?+q}=tB|Znկʯ`\-)mW cԈъf#CoE,K7a;qGV ɺq-$6fb[f_:$OBj.][H$Iշ'U{w3oezS5I]n89~Xm)o5S){9h2Iv?Smfu1v5 ⒋h-W̵-5W!vd>arӭ?hߡy&Lx Uu\׺3FjZXuڬOm׏T)T9F^7wnqۜ[(ǧLiʞEЄF˵],<; pC߭Rˠ)Rq3(eۂ~;1BeN12KBc3rzs9T|3qErXOsr1K8ZkFNXU(%,A=px#һQ*iƌ_*%YKyY6{V2,yz>єEQm=D!ܑmdwFًJ1v[h:J_HvJ˼8?Cұc*$}{ G3%{dʞr>7 ToT}MK*V5 ϧnҹ[ՒX(WY6-YV˾'~Mb/97 YWn?kJ҂,$Zz37l cVJmT9-KH-w8zДgjh-#F 7rp vr9RT%ԧZ3k;݌>Oz#XTT˹ _8\oJÕV*JJڭ<#V ?ZY:w苔ʧ+[- mjH?kOm%:qn[`oFӁ99F̗ŹRKfFUqR{nT5슓ًVo%B㢓Lr0psҵ)K'tU0f)IF2v>D"=ܬLD  ʵRR|P#ݽN̢5S2;pq]%rJژԟop0XUvy#?%YAʢw\[b|V*9U:R~Ό-6I+}r$A1Y˯tSqe/A)IRܝڻp}i(J)EV)SMz䗘2&^8S~Ks; %o];mxީ1[um H$s{9g#0W,2;ȬՕ_ N9wZr6 Ѵ6K$76|#?:^u/$h w$cw#wIө r`쭹Tc99 r1>_6R'U{C4Eݍ~qWCPػӠ`ϭsIK9I\WjXәS%FB.'iʝNkmRĩp,7u}sު17sө$AViJeI c9څ/zoʞ--„q%Jz0}Zxc3{cr3oRtޝ9Kn;$L7/o58ʯӠn7na<׏ƈZLz-Qv=pXR.Sѻv, T#|yj{}q\Ue-Y(Wѥбk$'*ի9snix°J02a6fb.-U6y<qk8Ԩ_gXK?y掼ֺ͕pI?\WH9'~p*_.F+y{_~"dǒ329>dqpO>7-[rt:-,/۱H8?W=oi%0=U!fi?7.2βQHQRG*`WlLʪ]N LVu+Z]DQ,g̍ۇaZ:Q4TvX[Y+(<ӂ:i:N3&U%%xօRL3,dF@׽\%atvי$nLC6w3g;?S-ٻkng%V]j镼M,$'a%Нo 
c.GFVR[Y;o{xΓUW[]-0ΜTWFݴ6/~!hB-|K?c"B$Wyl4#Η{a]6.MŸxyQOoI=o+ -3tNݗeO9jݑWTMc ܐ#zJ.1xϖVii|!mVX^;m$3ddcھ- V#l,*W@o5}2NUX1*Aסr,k~}?@sCt$]m׿/Sڠg7, R6֭:sy[~Ʀ+.y}]z[Ne_) j76Iv6O5{kPkIB1jU<_wi1 ~XЩ[/hY]\Ig-l_+9ʍIOpn09}̪{TmRVnZ5ghfEP-ڼӖ2ѵnQnZnt $󣼑2I?ju%R.eIR/o";,^d&~`x5#Tz.ߩpMƛ۩DC,夒\[ڼvI^8(ʥ("33p8LE:Q.qk |UQGf*NEKik9N*Tv_Q*Iin}X MjRGw*tz,BqF*6\y.hrԣ5 {;.xʰ8#~-c2U=AGvʟqWGDUf: nƺ㌥amJEz(ԕK#:rM'E3Tc-.[|Zϖfy TXtTmpmPꔡN-.Z#+K sc:qKzٙBGib}{S^O^ƹd⑶c=NSJr!.g܌il%Hҳ]}~G#n[9$Vш #AF5q OCRZd8" PCyYp|^溾ZߡC܃\}wMǡ$qҫY<,KF9g#{./$aC _s^/IgZW+u`mۙc+j{NYO;kW00Nо UgDګ1稴lh(5;q$1ʾa˽}YR%{:0UʫV> \\ǯZZ]|qǶhK);mIWգ)[vz^z}~1FP:q}k0w#59;;;K{xf sяL?bM2ߵn"r RQW8+T ]vNjqMjh'Dvd32$ۈHϮ=ʟɵ<_Pm-ɺ%O`W|kUk?`)SNڮFzƸ^T&)Ypy#yS6QӦOOégC=1;1ȷA]ĩ+ g%HU~5v?prZqk_ JK&V >emװDZ#5!y<ڕiւz:E$vvbfϜ )pnq^lh+++.SJ% 7WrCijsM ~-ZUMOʧ9{TM־U(/#¿_ūȑ[H=qЌ犵 Uͦb1yqnUiV$#uVuPmWr\80둓ړ}:_}\>e{?js:ʞdbuh>noƻV¡(J1JM4cVsֿf ]OUl,&O(8d"AC' ioc}φۻizuo;>cFM%L`qss_gʚjoWrңZOWzc^}$q9Pou"\U^-z^ih*G|s}+c)²ޝ:6t{8u<$gQ:įXgq5֭ZVmyv=21QRFq>t>8%xvS xԞ8 bܢyڕ{᭺keqWgt-gHeHdZeMpuWg[뱍LRjՈ?ZGxF ^!p1݆9xyƭwFu0j]CXlfw8ơs2/W#8{iGKNR6iK;trع1 xdq3 ({=6 Z9tiOs.~^[Ue;w<{S¾+xk|%diI$,lsUJ3ue_O0yhԩ}K׻'գku:\1;'-]lEw]]tZ5,^m6]Lm#HǥoVUjY'ʒG RVv3n46cqTNJӪHS^vex5kﺝ9+IZ2i[jk= siw.tiF{vlY6u; Re$/R5Kv-\9}5*xW$[3sF +1.|'֏ҬpV9J<-͏ {vcLq9{p{ج_WRk3Z4K^ΒqbE$yƋN敚ζu7H-cRyJ5v^ &2! 4t.Eԩ7 \1 -\u&7ss#caOqNǡ꒤K}?%[٘Nev6 #%#cBWk'=LioqEdžy~kssk7Е0`GVuǑ~=lƴTzrhiIsP*̫qy*}?1SSn: uf+H͐28n󎙧K3\-J]U1u-b͕cE.59=2ztҍM]?R?/?Skº/6s<6ύ?LW.*OzyyLD}m9E}FgC޼t#$XnIEEk^Դm3&晷$jӒ dsէ,Lc_z5ķ+6| M(;~>VQs.ԭk /7F$3dn+Z0e?Fq_;.xO˷Ϲ[ؒ}O^է] I*'vy>_wWleGHeaPG̻G ?17+tO{vz=y\s񚓺o}~65ƥ4qL0Cp2AnwoW<VYRP*nK}M/:n/HAq|WӃ1lpȢѫ-ךL5:QSMY=oOC|SxS4HJE4 zA1X|6niyz^.tzI]5mw tFƶvŽq rҖC9$ezuq6Imӭs1m7dӳQKϡ<l|KLGqHaX+2[|"ʝXX]|uWGt'khS}s]ߋ:}Pi2C:8NՙA9G8?c_FSskk?:XdW+A{\I4ir Q^kfcf"i]{t"溵h٠W ΟB@=CVQYF5edsm-r45N4uid_" /;tLUJZ^RU"SvC%\<"-my9cu+YG!vW;$U<9<sV4i{Fi r:_ɖGFRr#{WDj(ݯi[O2YeCj@烞gSo{֍zVTvFfi6pJ8#4NW,OMm ޳VwʼȡY&>y*ViVJܳLm)*iEsJQ\Qe(?D[|09p~m%p2uB%R3Nu"V69F*Igx%(STӋw}K<>לso1dl7qU1Sp䖯ˡ !jqn '<`cqUr7j.PmfWi#,r$X-FxݮF,HGv<֞δeҿR+rZu#YCseգXKѫG/+ϱ4Wn /q$e'[ӣ84jNwz}zwTv+z1啣:!ӤZdWuۏ+ҽ zKԣ%MTKFC,p]s3}81߿*֏s Gtݖ<%{x4vqnyQu pAo#TTQrWiUu^^>lv5#)k[,~J)A#fGQr38J%JEw+Ze>S?:~NiRȡ&2@n6㪥OkRy3F7O&^xW[oֶ]}iq aNqrQ F>Yͻmz񡊩N0M{ڧǟ%oHr@fH9t@1A?m99I_}OϳveX7̛]GG4_QmWH1ws_3.X.X<W$(̛$@OGB<_)GBыgk+_N|G|4WFJW΂8wm(~Ar<¬~aT8}N{=:Yw[IVR0x xrGy_oSfj;mn~F׈|[(`Օmg,qVxS IQTjW}mxzl,fGOo |㟔Fx|c%EA|Yiսri#9‘7Э$㤓7z,nV[=g-xᗋvW&oK '邵]yTNϋ%Hּ,gi$Ym-R$}yRp &>=?+Lv Tٮ۞9JE GD^!|^xI3 &eUTpx$+ NZ'mwOgsQ/M?.in'5tڗcvHOv c;>‡BK;1!LZYfΥ;k.X&W*§a VOZ"ڇOyqYfu;!$IPw#zIQuvI?.QXGrug:?]_-:lE]I@ !p]H޹ Vi(]_EvyjEs|FTSiBjDax/5}eZ9r'9fP}0@lލZSjoUm~IZ>y+lKW_>f?%\ X$z I{lQծW_ӹxyBҙp<涏*49hQQ~~lw<>l0;:95чy4oaij)%J~1B 21Uv*08`A_pXyAJeIūu~ } 7&d9`mbF zҵkO+.[M7wۣɵ5pNrpkJIkժ/KФPC${w7z14jr?M,xu{9JqMNױ]CNu̫mݞIOc؅/qvm2_ss 8/|RyneH59oH.s]Qv1'Cpɓn2EtFc-{:n%AyzkiHªIUΏoCoo.sa/JӚ.'LbI;4ͻWM%s_-P_;HGM%#e+6vQ_}oC˖P}gI$׋ 1m_1]pN0GAgǙ[~/Εh^3Z{ko#.I<"|cdؼca6qҼʤ0r[ZuWif>henf?{h}_gdc/tz]:TӴ~=[}~G?Hyqm19l$}@q^(Kx{+=, yQ%oŘ2xޮm|d qzg }+ubprIҕIb#)9r+ugAˢȶNcX,̻~+C߷]Wtq+TGhv׭ Kba,ϵ|S~oCbX՚Iimb?X #SԃF{v?^Nq7{ZX$xm.-tK]'tA{sO ^+sӌtZkw=D -jbL$>Z(3;:q힃#RݕtG֕K={u}zψ^~:j$<.jE˝=SiS`]^g^KF2,[U5<Òxo7ۺZ$"8 GC_K곅Mt>ZX}nry1Muk5C% kWMBZ^3Վ *FkjyoV=VET9$/?wH' <|VAIs>FK}t鯞ܭ6>퉬2x[_{MCƐ\ YkC.ю GU[%<֎zݤ+zR8j־mg▗i˧Gm?Ry]b+A%t0Pʂ_ ۢ9N2f~{?;2kta+S֥:~vθ[ɚx;HtԧcNiS| *ec.[}h+n,xN-_Խk^4ͼ7e >tT9hz=2*2^Gs麷%40U.J},/Oy8F3<0RR%%5i?ٷ#* nf88OP}8~&mwQmڭoln>x{n5ž0۪ʱ"B2~}K7oGIf\kl/-<0>̂k8I;ScO_|qϳ},uZhe;oUn+˧SV7G{T{#V9@V=Xt <M^I4FKsdkd5l^R).h48',J3>2kŔy{w=Jx9J\lq?Nk3У(BI0la?~%φ%5"#0fb%8>m;r3 ݝOrmkQ>Җ/ d{P?Z)*ݟ?|M-dGm:.:=k{M6ꏤF6cߎ'Ϊgu;{y￲I]4]JK-᷒Hm a8W"8D$VVޚt*U%^;۽=B-[f|(xzeRPHr8 
d=<lBm~1ecRSy=xG~3Zn<Шc)C_]QN{RL(w'k(yq-l*{YIlE4q,VnEvn\z52v57NPm~hw5>o}(Z[-/6gU4hsס#k9_tkJ,o.e(j| xT2WYeԚ &i7c8=]/݊l=F/^1U0۶Sju^񅖞OX%%p'd;??´t#(d^G)zn,~dƭEx3ҧ)GE$Yseqqx=?!WFRFusMmnq "Jr0xu9:4To~$YZcNi$ւɨ\DfK,<kTIliFcd|ɤcH|呰 ~^6ÜRkW.7&sc[BrTjI/NT F>]{|;VMb'=(,0O]<ܱwGTTSEԷm,4% j=~jbReӪe2I<.qsnTb$֢vhwї]tfެ8O]Id1?a=wҦF>m]-Ks44jqڴM)Vj:@? }.>,k.ZIhtWD Yu$Z=v!GUj[$ŰonjNQqJoCљUG|6~bF2II1dT`n8=‰^J{Ek2!`i38$YGa.>[}{E9^$+%~ѤV>7nVe9}A汩Ej{3&IH~ϙ.m(K o+~@jcy nn\n"Ҥm).hm)pNϧ֪U*qU5LqH̻~H=R喟ycYr}w;?'b'n¯;.ÌjP3KR+}sDE%k+̧klFA=G#񢢵}%u="Qx'-7?>w%chя:顗qiIگ͹ ȧ ӎUj34n1nѿ# ϧ?ֲ-^&Z9oT_-4s'ϭaR)Je[Ў1 .ծU-!m&qמzt]%O#aruia _dˈE`n+*'.x۾?Xj:u6fo1V SpWD rKj+t_WYZQ7|t#>:jٝiR浚nI>nUJQ/cu7[7V>lAi*qovFu#*vI٘!e&dgv< >SʝM5姑ˌ8tӿ9mcN[tc[t pFxùO]_eKk0-7x=qQ<} 9YÚR^fTQOb0*3c)^:~ٷk-s4I$A"!ִ^˕; ަi7E~QN՞jRRP+,SoSw`+Ӗ4X a~cRfRI=CT/)Zӏ5;&VR5L%.>hAŴѼ.WZg"SqI''W-Jy[oAVMR4n4Dת=n{q֥JPdE4y͛5 +**,iNN>Z] S7s$}z4tbĽޕH|,jnk"2UUp:cڲyINWo[ QrI29T94ΚJiw,@lc2[!TzfTu-'53n㎼UysYM$O`o*u#բUwm Ʀ2(] ʴayGCRoYtaaym%U8Gcqyms:%&cͅݓԎ*TOF;h^W۞z^nT%E{FA۷,cr9$WozVEӡ.hUqfQ^n;C$Q] ±F=֥YnvSeMVy0U\7&4h)O_56D*Xb'T鹍GeI>杙(EUUwa[cGJie*=֦>R}Eicm]gk8#Jb'菪Ɲ=zߩ LieskdaNV#GW>&4nO]=]rFjW#@ :|d`W>=o쒽m]ެvX԰䐁`<ӌi?;_Mz%|N!b6'1!m׎GOS U㊗2WĬa+Fdz5!YƷ$jr- HBr9#ך)FҬ!W٭?Eԭ ܙQZ0 s'gޱڎg4b}\6mnXD5PTu=1q]\jTAo7*o{ PUQ1~V gH:yV׭T>*ѕe{Z/v>ib%LNys覹%Z ~ZJ/MfF] hH+k~L ;x'+3*4mk},~rނ\=HݘBfU@=pt JNo_"y1 s{z1ͯtRtc+yc^ڧlsʽI6$K_9w40Cg Iힸ>"*pwzm|ɔq+ƟG o8inI=uЏm4IȕF8l=9'UF#ӔZ;ܚ)bKHj$#H~:j)3Z**5^\k)VD,?c%prG~Y_Cϩqk! Yu78ⷡ\Jte.j{ZHP\uv8 cjUJQKߕӿMF=œMr%I[,(ݴEJ%h[#(ҫ*mkױh9 P#9JN; HT,yfkk]Ǻ*Ls=eO䫇{Wj{SL#yooe_@<5T#O .FkIը]ིyZaGr:g:οV4QCe6u Bc'3ڿ|iwiO NdBF5ܞ=֒>f5Nj>]Fcϖ `Һiʛ/i:ߗ&oea;5~3MhUHӔJ$ by듏j)ӗz1儚өN -q+6996| W$eG$R2rEh9`B1މZ/ZNsuV#6\30I t%I#S'R KQyvxv9b71?CTN^Xp }3ޢPEQQ%;?1ua#lP7/Xu8=Oގz:%RX| u54;0!^q+߼kD/#[;$i$tK~z{WZҾ(J`ˬ.**6Hۺ<=h=m=u( \݂GP.ϕL1:z[{:JV"5an,K3p+{{cRM]46veYA=sEk)F2guhR!+HE;kTI1 $tc_D*tRi=uH9kUm08浧&K˚QdZ91UVݖ- 2#BۗA=jdf)a⛻o"628I G~RH<9w9Hvb\KZ!Xr@n9,H8᎕=NnWyb*6Kf86^Y~VNԹg{6Mj#jkmjB5?ʡsU-83V]E n qs~anMF/[YwP/\qqVu9j61.rWE,A`h6txK}u2cN=["KȄǞܾJ^'vkkJQLqlbU!{dYǞzR%3ڒʉ1g,MnR{ֱWml)J7'6淞60Vr2pzcq(T*k0466c K`9~USINI;JVXl7-#~u䌯/%nKo aUaU'{ ӽO3Ӕyb>(#\īi1!oN> Ҩu.o1>|a"iFd2H+ β(6TFI.zGcϭG/46IHV//+¯gV{uvL1P0< oN2rQjCWxXX+mב}N*1j:ϸn"#e _`w$eO֒ԒU[;X ]|8}*R撕[/ b˷;Y)94hXW̷nnKOJX5c_o=qK2s3(EHN k))5m2y&܊Hʶ/o {hH\t Jw+[r|u[W\"8iʭӶeW-})]qלczkȧnʒ3M')m2)SQtn#Hܮ#'+EӬQM}HeC Ors#_OJӌ=0r >Z,As.p(7iJ痳kbw3~@ÏKg$w)ǒSST+YRU1Z0;d*8Mluҟ4}4)w UAv@κY{c5#trH+5/_1&1_ &҄[E iq9SסN(y\ΝyT|KRidiٝąnsLW)+G;~)F^MIv0W{7M2DI}MjƋAWOc|ђt-RUe<5ꙥL=hD$1r4>bytucv{ݕIr&e0NH2'!RRzK Oos"?kƤm#\=Jw4Mejh2FQ'QNv9ڗtM<{XUݕdd-#+} j4ȍ쐺ث㪮0y&t-:|u"S\LҲI&Ъdz}}zFQUG}Ǖ׼r:L~[{M/R$$Wh [`>7#r) wZy[]UH#cӽTՕ6Rlu+jS@yUaF{{VNexSM?WyLO2HC9GB#Fukw,yF .l608㌚EF:ow|7s${|rco9`_N/On@~i[r;⳽E& QtZО !Nc,UOsvd忙\Nڟܿ:9 y}x*!Ep>gR E="9:HUp[ {ZQQrʳQ1ۻ{B ,Ѽb5$Qg*>\̏ϕf?{=sYԵ; SSWL20 x>K,)VvtxL>܅۶>?u rRܖYc[;F> g<{TrzBSo-F"[H{?C1ZQ[V4' !bSol~jroAsE>WI">s>VW,l_,7qg\~iŻ6HY#$z}?Vr5HV!V@B^eF-T nam%Y5)+UvU=`$G~@Ox9y8Ҕy^΁Bl}؎(RcrDI n8ֶiƧ~ZDK"2r^;澻saF2v=LS R9/K2m>PF-ku9/uutNY:a#~KM{ѾJNJ/Z&}pιdz`#J9'Jv~[jDK>+5uӼova,'ؒWw'OGuiyז(,we85yjqNe9+ߧ_#?5I WUY*NA]1JT1IgI𦁧-퍲*kI!3-LN#݂#*ZkknՍAۨRlqq߀{T:voN4yUG?9=fWEǮA[ʜbN޽F_XW*j7"_[ic床Y#H3#Ld'J5t 4nI~1WCPRPG[Uk'}[~FE4|jc d$/TBoU`Glq>ŨJ[_fх vjr3uknlVKPJ#n}8G#_e;Gzv$﷗߿ Vso0 Ǚ¸k%VvNǟ(JOpixSHܒHѾ&l;~^Z-}&"MQ:%¢E9[3=8㢌4iʚ_'`jKFTg{M}-FcO>azҸe*j53f#'nῶRQYՔ >T 5 :aR,&WץmO:mF1Ux@1^֥Zj%Ia]6~{n-%fʙ#1L;]Z8-WZבhH݅i8U}z~(''zPMJz_[ku 0,se)AN#קa\v1KG jc^]kwYs&:uy+ڽ 9 si/ꩊ4g5"drt(2fqe0/Y۟]?XTcZbͤhY]w_gdKVPCg^%.5BdZ0$7YSx;5aS{OmM^)ZݯD%mvw#F]@F2'sCE8奌ץBuW4qV9&;׫kwFRJϱz;IZxO/s Ҵ{^Ɲ78͉l._1`xqEթOcZ7 4ԋKmׯCd(U6dlylrs^.ab篇USWunhwB8l䑷) 
_XboyKV$C|%6[ۙ?v=͂[nb1{Waph~{^16U4ۛ~,ʛ:x:өUӜ}hqɭZkݾX_JP*aEƢd&Js]h]KTyQM*ȼCϯOJ1Ɲo%wzwG9Fk}*秗QZO&F݃jPVƬq˪25 $TUv݃G^m\<}[^G?g.K_C3Mnb,\ʢɮr"SǟS;M4iwyYF@n }C(Ez;}RI;y~#?A I`g u9}:Wh}nj-SG<[Co6ИWq8^_eٖ_*n_yQjQ$}/ScG{csa]yb0m+c;}998GNn۽vb/_KD|) Iaj-qE<~_#u&z_C?ut.kh8כt6H$dt$2ڑ9{/;ib0hӜvS!,vȤTV}-=Ee䪜㞸vSNe&vsWy+?Uf+^W{K]ᶒ;s4X=/\4Nki{<QJү%1py> S]k#Ԅ57-L;KF~&eo3R>Y^g'WQ趐Ʊ<傒3}?ZiH[)EJ߿C]G/ċ5eH21rAz Q,6//ǡN{WOt_+OG;V=8#IT*-VϿFvaq~)>g?>+x㧆| 'c߈#5[n۷ ՀnP.x<5QIh֪5}}ody|Ix< tVKonﵮÏxuu~$S;8$q_f8LU۷o>85lɴ3;օO^Z+zm3ү2Aap#O>eGANlTJ:,CR֩k1HbɅ=xf_).]@\׋1I ['&IpӓǸ#姀*;4C0R}|/Gnfr㷵(;;Kw&3N̟;|_x'M<.ocTL``yZ|ڽ=N\=JK+'}lʽ9Q\yG=e{B 4)I!'w551oN3̎r%W<*Ξ"5}y˄[x{VHeF9S)(~1G ?i7uo֚x5;Mʑq7ER8[SOvRYar\v[jsڔW*^F_tcX1M :z,ú{Ȍۜ >jJS`;OĸI?̥Jv5\DeGw7'S P2Gwv'yG>Og֟#vy1ڧtԦ>y;u:iQO2F\3Rz$zhWxkŞ|~Nc^L8'>OgSm] vktAeHUZV!b*As܌VM_N_,CR|3Ǫj?bemr:sߎk(X^tʴQ լn!^2-ڿyrv㓏%HNT];8Ե{x"1yNu F[,N{ ^Ӽ劣564$-im[9wEVTm L]OCo2}3=MdCEϖ|?QsI65jp@Dn OF<緯z/k+ M0"6\q4InaVdt~ҥcUh|7'?/8#n³XJX-R{OWHzw澒4FoOzbѿ+i:Ŝ-R]CG# 7s;mu?O=5],o=:yi\u?ux-mYl>`/csqAyI:QmoeD]}w2#8ɫ-_U]}w;U^FFVhʩ,][$gUqg6tJ;j}~U0T֖a迲X9qIyxNvc{qRL= SU_;]5[/=I%EYk{MOuny~>x0ǧx_E:Fb{n zʻwZkfSn~К{$KdX򞅳H9Ÿ8c5yo[SJ 饻#,]H FvP?ֿ)|}ozȉԑ-da29/wqQ[->jη;O#v\nnlzبʥ8!#OwDY$Ƒc!O0s0=yzUaO_6-Ӹ:}Ymd@'x>ƪ>7Ojjkm}݂9&5e*ꟼW< s:%*jO*֩(YWg~\V8դvײ'xQ-wہ saǭvR)ūXΥb!-;0~?cFI=jU/v+cF1b]'?(ҽ*r۵>˛$ 52[lsG<^YF.q2ZvC )$@+gyc4-f8ϙ3߿9ANz){j*}z z)g׊#8kT}y2!+,#ğ´k:1\˹ 2"yGY7_J}Ũ~hmc"O%=Ham\.H=N=Wc̯^!A|brFMvC8;/+T\ZhU%I$BeC|ss?>ZxݦmI(xao,m@ >|}.Z&\[vmEm*f!v ؓ]T5GONa*VO4t|֋fԧG h$Y<.2{.9U"kթNN-$Obk{,j]͎ՅJrO:4RK[771Ua 1c;zqYgwaVRq^Ë,6éK1e ?ϗs Bs_7+BMZ뺹Mo4xI{G-JuVϩiխ%ӱFUuK7UO;dB1N^j.<o[vF*G[^Hs; x -0x@F^zj3iZW9gd QU ?OZ4J/U̪ڝNʬussj1<Ȗ4t]lU[YVI([}NWBolɕdןj8*}hb=RĒbm ZSO7qݔS+rI8dl-1Ҫ8-۩єtzB%IYl8j)*Sǧ<dݕ=chN4a;{WfKrm%}[UXsTl~drI"MnW+ʤd.6?3ҳq}GZElMV'^1ϯ?KuvԗQ\EZuq+9ZөW^wtyuRwz>ۣ9o]J1߉tO#LP$I7/79o/ ׾yQ~DQ_ȭl湥)AR.iWG 3h33a@8d޺<єzgO(G}FR598'*Oao1~Q~uy%̮e*2t͸vy켈%(.،ⳍNWckF٫B;1I8"Z14WOr%*\Ίr.91 g*r+Xӏ'ȝTB2O`B 9s*FTAZdeyEW;sQRTQF$j"4y+opڵ.PĮGo2^49ASoyTB̹{檝JsG186?v{NgvOihlw$܅fwQ%qy^I .[~]J늄=;r&Gyuo,k.Wr˓ڵtε*<3Au2_Bż!DV;d1]ќ*pvvꕻ*^umW_>%#9l#h#&7̒F[kezrFx8+`嚺M7k>ZZ4oZOυ-w-NL6!y*I-ttiKtt/=3⏁Լ= {V ~n$#ˎNTf8qXY8cRMԚ¾5 \.y`rW9;f7Gq[C%ͱJB=޷<ᗊ>5纇r6;? H@x0a׏/ą$I/;iO[ZLjl?PVV6@ ֋<̩Un  R1m)drLi[ mxt#{YnyOMNIݽO'qMwN2[i9V;wPќoG*gOib9{${;v+VFJTnKѕ jZǎme--I2\H5ROϞF9rfEllzgGkuQVB0rZ~=]xIJ]J0Hup?10sXsrEs[{^_O׫^v*h[ηnXa"ʃpJ0?yXgR_:rm(lw>gqౘE5MFOIӽ^g<[Zۈm,1\&z|:Pzٿvzt908A.i&: ɯjr\Q4-daRE^R*Ae}z&5[ьxQ9&9}Ok[u$!끸H=sQ:4RN>j2C2y]# 8'$裌CK} -FR+&W۹R=uV=mF OSvIՙUs^G^SOTkReIzx @899sڽoz)_יRR-ej/kVvUl[U]ag]=~çxD]މGX~'T𼺕̝2]*(; <䃌1JYt4SUYEf֚;_<7R =6Z$fczu10sd`1'$~SE%ؑG鶲>c^{ztr%M/O5kܓ\=[We]˻yiޛo>uyE^Pd;O->WQ=od[鿧M^}kɞ}4a r~''iI҅}ӥVu*kmoʢs3/bH3'j f5(Pk_c⾣q5-'r>~VœȘTkֿk+]9kP79`=q^L)m&;=x:;˦;{Ŝfۻ6ORz\괭]WE$`ѩ6.}+%g9Q/}]蒿~#5&qsGw,aS9P4/Pj,<=E%s;uSSLʧ||*wVZu{iΗ˓k'ꌱ UTOLm$Fl>Zy#z8Y9"\L7=[*b4Mr w/6SgUYo4k9;n6r~lB{ѥRSo|kwy{˫i7%rO>~R_]:]_J3O T/î\An/cWap3&=ۜt5)Ӯj_ʜpt)wooORYZK꫷wu<= s^ul4[v~g-]Xi==c @cp3z|n_Uɩ-*i!,&5c1=}{}+ka#)z!U ?&񖝋H^3n+ղFv; Jqs'&fwp1댷~݅{OZ4*5\{˯0i/$v鏯׵}=%7쾵l=IY2u}=-w2յjU{U5%NDF?2Q8OIv#S(;v_ihc۴O,?i.r)Fw^G1D>Fz;Ey$?ֽѭR ٫`q۶VU.62(# sko{%~*eQ++aokwf697$W]k59=O#׾xv=+Nkoy>u8'cp+J[7ft*Wާ3+^|VëV*{O'4Z𾞾Tq3_-Z&?(bٯDiz*;ult{r4+1?gg*XF!H'w<\;ЌxtZNĂH#n1{}+8:5*^}iuRU23y)9X9vXQÜcDh542W1]Ǟ׽N*vPճ*9Ent;gYmU31\ ji*/ep?+v_}7[mB9mڨyR_'c$ީk><1/6?OxXqs.4xy1 s;TMjuju{ѳECo?jw&%HշyCNq폯>uKaRQV?-X_巆$sӃaSO3IGKm5pChaF?xOJ=%{t;pԕ?o>3ztvwzrC #*B=[o˟#^:Vs~BQiJ1w^{^d? 
|gg[3pe1 {sEIs>pFfFgͰ`7'ހWoo?Z>Yk㧅Z*iϜlo^g= Ƒ7[*z C-K/JV[Fjmm|5TP GOJS޾fxժ_}OqׅStQ{Pѯ~o~+YTzG5)Ei^u0 ?:@92RJcgr{|y <-m^1OT:)⚛ԙ/-rܣ{NzfxWDkm$`Q܊Nՙ%Fu_",qRl91y,WDe&⎘֣kek+;/,kjP3s!#3~g<7#ڷ7- ӎk5 덫ɍ39$EY(/f$Riqݏy[Ob4%!\+3|x^+#iѫ*oIhv)l'r5*vذ*Dn$/s'U8i6ZQnnXYm#Hflx5ңf*cZܫ4r s;NSMZV_Q[*Ċo?iwQTzDYN}{cӇ2驤#ˢT[LŹJ(ǒ 2Ghl󑞹 )rc^P["Ic'k.?t~Y{ƴ-e=w J~PNqZ6hF_Ai* +59--qǖQIeN 89׫5 J-sY~b,ۖeqS/Hӽ7n#d۴*wTVK^,:ᙹtwo\K;~n'y=:tcʔ}ExY$p2xD}×,"{W\3==`i~/(4UP8/*TܣQ܎Rt K*cqnNUӽJd7ѷ*)ZUԏ.3eeNqs`1c"6ڜ{~ }+fm_1T|)FRjsʤ!Iɔge%'A\J7.WFl[I8j k_C[lfV h:tޭӔpTZ3n.#7M:k&jss-32tyRS7e.hӺhe|zu.Y;A'g޴[văwӧk)s^FtiԚdKr3F=2 u{7hwU}2Xo43;eɪER6M5 "9 $[}g0:uMUMmb֥CrۃZʶ"6 R5=GuHpU'qӥpYTd֦1detۅ]=ڤV*2VcN䎿yNx#FXUۯyuG(֧-ƛ$D7S^%$UŗO3~Cmeܒz3΅(ߙcI6eWeYXˏ?jάh#l h6\>*Ͱc8v5Y`uWkDt9jQ'umYB(cج+x9oR:bQ:SOV.$Dj<}iQuOGsi~llXy|ͽdӿ<⺪~V׿cvkKhLI~OgTzT]ͫ=EަHoY=u'G$*F%Z4mQI+)fp==:J)58G64+e%j+7A?O[Fqy4ˑK)]zU{i?yNjH}̻dtBC~<֐+ʭF}[x]HGM$tZF1=׹N^Hˉ$\p3y*!Ѩ%ͥb^8qB?w1rEkHjcNN.SX7\؂ T)^.z:[YC-JB|a2xpoZq&TcME=/嵈fC2Lͻ,pS׮(KDyFs^ioS 'D0'-}*cϑ~=@TT\8չek贵Ey\U'>Wg*^d7v~qӊEOgFt:iy#3)Fq^~9+Ǖnj:84⹒Y-c/!b0׿",r~Kϣ7tkQ㍝2 ֲZ5y*V5vk7m4pm՝Z9~eW RO*qԟmK#]]K"ĥQ $wkʝJm͇7OCVVmdΧs1ꓭ,M\r8=F5RKM59*`JRFwkmZy#7*F.y R# ZۣlIe+,,^0Xy'8 m.wT~yijpk$y-4IuڶTjQ9!F]vfl%1mw9Rwr;c=U%>o'Na :]5dp܆U׿+9TMA6T=bYZ\u%EimjT(fG IT*RïwW}>`b2su{iӍWy*)K~p-|?w+a*Qyn#OVIhT򍄻60snƪ9SWM_Tq\5r]yVfSgҴR5*r=:eJ"-9%ftt| |ͥ1w**ksS⯲}N5=fbEՑ!Bǂr.}z}k5NT2iùSխZ)wӧxTt {}=u(m~7~MI.Z 26U;Nˑc;4'xبN_%ku!!UQdA:D.֍zR^bL?C*9߯\~3g.u6姄U&2I2b`3HdsEZw^SlfȒ1p% ChrQn1{/Mʣ?i+rA{ \CwpFTuO8$sXj/GRmI#[\HZxm"AOQ3g4EeZ-u/VEiomqRUfPAI;I=c(SMZ^f8aIR]{>,1!y8OFyl >U~휵'Kϙ{dvdmxדҝ8rnyF,Z;)QDQ[kdgO($KGӯBerNce9ʃ՝Sz4ʛ]n'q˒#uF;-o1ʤk']_I{0b*K3 屚ʟ ?4tW9SdeoZBZ ̰P85U`^_ѧB ӒCpdd~V8%=ԥyZ[2 ?2۵yjބie3YYāgeTo.YM}@ͺ&QIro"6q4/|#`8H(XxX?uRԦ.Eڼms95t.NNTK04/KDe)UWҌ!N Wt+G$x]AzWEXRbUHS|oHo)-|іG`<DViY:dQY 4.G2+];XQU[> ڟ-M$%RQI`x~Ͱ6.6Tǖ1#EIʦƔhӳwu\g;"f;nߕcG^Bry$n0`tǧQRRz5Yɸ5g7ٷI纨qRǮ}Y9=X*fr%zzV&ѵ5R<-8dqϦ:zT9{KfTfܙYWϘ*N3kg EW1c ;Z~lKM6؜SOZ6M;Kߕz[1'̎5I qڔOeT7)+OQl]2.܅?(aNM3g ^ROFRj=)>SzrqdfQ%}N✹v@NR#rey|?Zuݎxn|DZJ~+HZ:ksmpbF:<*Ǒ]sT8a܉d=B J2ۋqe壳EIg q$rA7* Tj=sMYch[T/e?}Gi{H#Py*X}aST6q{!$M$# t__ƅNwI~<ՓLm\D=ۛTOv]"E Ƴ`OU B/Fͩ8.͚>'UHFGρWɬ1aN]V3mHw0?tkũ{pB%åQ }aSIݝ(ҹФS}Sm# eA pN;ܠFRfvF(r$ڻ@Ϟ~3NF}շBD70l=XnKS5}FZ,s1q.޴/.qFC+̪|vʩ^N*.1N]Hgw\2N=F6$n8gFYw@YT{OaL 2K6oӷV'taZfƳK+l¹'e,$Zk]{,#쑷?xc94gR2ɢ"62y'RN;sʜ}}LcNdXbc-| a&qzE2}FSvBeJ1k=呢}Ln) 5->ڑ7-^U?-ەXEX*SyYMHla_1?N)1P.__ihvɳn@:e&Je |ꈚGK`Harv#R/'NKYkZIQ{`m΋J:wȮ冟1JQ7̴d\zM$wild1 spJJn&(ۉ.1ss涔jFsJUS'HB݉B yLOO&k畆zgѼlx)o!bgi1/)st(Ԥy[۶ͧ+%ݪT?8\VR5}~EfxEHˮSd`zZ.:oUy +ozr9*9j;%i˛o#33ܬ?ֳy()EbƟO ;&I0.w7׽*bsJ˧a;bSm#~:ZNu%X=7Lfe1~KKKD[զ۩j&f^FU Ƿ-׆*Tv J嫽S̹2#l\3q? 
0u&ɣ2Edʏ/qx89*Ue*7ؔ^\]}9T.UBpGzZ{5cRU,.4?nLfgW.ER%eneݲ0})^)L+n&Qؘ:o֧N:빟v]KF^:Gw1l|1I[#F:K-$S"1#o8XS]6Jq4K6VH:4.xz+J7`Px\gSʹ~ #Acn{.!rϒLKǁVdclG7]I0<}j%%iFt!@_#y<9 +XTSaʵDmМr*TM%%{*'?/;s+>gtjG]m؍a.n9XMJ9Nތ/k}U01kS9yĬ6v2Jyc`QwuB#78=y9)sr[Їȕ.Vy`'8Y[sԂ[}mgn,?{=W Vp GNq`& qׂJ16?i(ƣZ]+yegy^co,[*/pK0;v4b[՛gF+tm^ ?W9}oouK?4wS!P޻W'';g>ggk-gwC椕Z=-SǾ'Եm4k[WKvtw`a6󎼑L-7Qk;wv<%:uOӷkZwvzM%VR!qr3}Z\U.%BRﱫxEhm `Y|\եVG-I8uG/M:6̫ϸ#M&\R$c(o0uޛb²ڸasE[MO*-6K(sDa[)>ۗm\+I b5VF1KiCG- U[2tm ӫ- #+,+$-čw릎gJޫiRZJ>v.nfgq ȯW˚ /wbU2_ç[jUIJ-;M"?CaNkyh}{4i9{9yKQK8?y˸}~=k^0P^2VgzRj/ǗV i.z(X᝾T˱%YK^XVHlrw`TwXUR+=|z\X_<{gA"8U5*rM=LԊkiZ7Z4%tbQ66{k J֌iK`TN6ZRV3VTnNPWlSkveY`7g%'XPEiw̧v”6}v:#K~Gv'ց7]D#1pua2i9+bGֲ;݊*-jX[ :ûcҰc̥)TG=Ks886(ckRp]L'5*i%k7!W:rɧtZΞA3y03ҜW5]r֥GmmMIT|NN4Ek71,ezz\%R5v*xZ2MTZ54flXط ÂN3O{/fOv6,'m^-Tu{dqkʥ(SYI5fuZm[h]V$ULje-hٮI]#5ؽ4$)W yO_Z"bZR{it{MݤPh*"VMHn݈RtQQe޳u 3Kv~b#?W?yN*.:h|cÖ~3|O(ut6*1߮28&~f8aiN|cX [̸Wta9dbHu.ѭXimtd$qۧ3zک~22N=S,>M @q=CY,5:((/F2)BK]0OA/-Y@rAt־hqXJ]O0>(x,K$~qqe4O:XsWJɭ_i !~C;`nI:]ya7_]L**q~oGo#Ȍf{F=:cJކxMj9# N\F+ +{QƤS_2D89 JUj8=-! D_~7sq>ZJ~m^UbiﶕEvUl6 I(U9hѦufM#yo~\;WL*+MF2'u;8ViO'z.U$=7 ވ$k|`=v?e0)'vE뿗72JٝRh~h#*.Ƭa~F*8%hKUZkvӏ-\4ǼA ߡSTF;kvݙ0CnnNzUFr;R\jQֺhٙdlzGn9N4hzѧN.Y-. iB{n# MUH]SCıh ctEUĊd g= sVy^9JR|kux KT{_j)7n7+yrxO*sGE|;5Z3]ڳ;axrIc60^e%S{Z슑&Q[[RmBFcasVz y f挀F㧮W\mqV׷wo@%;,{=2 *'wYaӭٰw?YQ1N͝Хj^+qýgWqҴ;&$]{c<ڽl?>V9Vy>owWwލ>ǥ{rg.U{Aޝi.hѿέr=H׻F,nf1OI{+L%]6!$c89w{Mfٴ.`~x\UڛrlDw|>dQ7*VltS˹Εܴc ^ϒoru)xD!NNps\ݱDin<.VoB4bEmƭ"Pv^m˗^~)/ k+Gpv3?ug{IA&2YZƗ[$ܹ nGjabGRɭ8{&}KףZ>pji[?>x'^]/:.ł=xz`#:UtnЪJ],g|qf\Ô4uy1oy(VB5 abߤ亽Fk~0i³š\}Po306=0TөM|J˿7(o+u?U;چ,y#Mk*rwvʚJcńl^[/1{vqIŷ5vukt:iZ0ky͆9U'+BO@J4GZkP2B @{WK eJ Nyk=-Ju7M]Z֭!2/ݚ᰹c8q8xǒvѷ޿cǦhvG>:&X7_٫uVmB 0S֎ *S\wnZ}Վ wV\rh?)?L4&;[TU`:ޝ:c?v!F\՗cjӯYNdzN\[RvȭyۂA<λZaƜ+^_n:]Im0Xԟ#L."T J+VtO8M>#j-a'd0u<~kю]=X'c0$Ox>д;Z[IbU[;X)P< t>) ֭HnoO^8HU/%s7cQpT#<֡RXyIiYu}O5;hOyQǰZV_CZJ\vfJIq!]8[t*Mԍg8~%R7O*ҔW,M1TUK&$pV }kQ:6Oi*S.GwCYc\yMNkJ+A}ѓteRx[sRcGaqu%qoOùFrTIqʥ:N={Cqo<㏸o2R>eWVXMJKi>_^>aGJ0؈T'7u!h,>oUT+5اz>V#ucft.u^;bIp pf uGNkbx]ד6ogu FP:澗M[]y'?LpQ2_OdӵL>e%fl?NTP^9}qIn#kin $l,8%IPIeS.g(=:: ߶pml[qrF zaV>*iN*?ѳỊ]MukZNK.=N~zgF]o%SDHR_-q LBĽǧ'8F4Slt-0i-ȄL K<;tiQY{NNeϩCicYQx_{4:uoR/>T]ĖgUXZ9I8!J6j:{:tz׊YmbiV/V=9^eNMCk\K[75kM/<ͱgl:qa~ק;CEk:pG3n,tneF88=+s=|GH>x{MK->"Mx(#?Jrӿҏm]캫ٜxN@a?2Hr#"pvzw=,>Rmtv*.qj\*۫!]ɆʒA<5^4Vz}eXTj)+]Y|nċsj/+\c|5f㌜0=\)Yݷtmߥ_K+FIOvvoEѷ~I~%*_`ݶmİHrqs 6pxs=9f~l|KjoKw~AtX+ojO-MJd rᔱzi]/u{x882smj}ݵ c{ D$6trN/02;r*eY;TOKd릷8rZc/Pt#\KX~e8]'=G[,MO'g-/[Rqi^گ!!"\ݦ``J݌ 沧[JRuNz8F8Źmny_bU%WLjl0x (9#8kݻ^uo8d_-6ww_s^⦉Mwl,r+}ɫ:ݣ͝|p_vWMt믷X-Wi*[-Ӟ0H~WYL>Ox2h˧jr"D\C2x+SZGs%N7Kdnfi6D+ WQRݴ1xW~Gi͡M# pN1TƤjuU)\y, 1d& Upy8p bԗL&< d@9tjtۼrhg|~>*˖⽵V dfT,xmy=Q9KJV:Z2؎8nێ8#M)$ޝlyXƭF巚gɎK ~t,IfA|RNd{w;՛DF<(n14{f wxتV^vԒKcf" =]Z#egQ>e4#e Lۿ(ŒvEuh,wQE#B*mίn;#ZjYDv!^T,b߱qMۿS:PIinݬ,/8?*8iRY>nk~"A-m k. 
GCckShb0z[G+p(#_?=J2^*/4~Eu?.9!}7hyӽvӔZDKMj,[n,HBұ<==+sCIH>„y$SY.8JN>z^kriOӊMuWBlG`˅U=]*KTgd}ʪC 4k)|nCZާvYF=W-F,»w~4_OAJn*+*k]7,(wUv>EOo=iƟTys=$hncU .6ס*(.s9tV}%"]wۯ\)V[IE=6ɼ̭۟I?x=qJ*T5%h]BfnD261<{\q5kXZ4yIGA9 vP?t8Ж궕OH^Mn˙OSsι=tS\I.#%uVۗ\rTN:F2\31d%%Rq򫒔s FHhPm"ZUUclq1le*JB ]0zq[T.]p)Sj\~Lg1W Ü*JrV\d qn:1oLs>Ҫj 3n_J1RhUBn3ZXl\Yr0lM*3tQkk؈آF܂ Î:v4 r]#:qwޅ J#}BH}iSҤ\~c%Q:&Q|v]rf\m姫o%{Ea^c}h'=k8X\?/(j mUV3殜&NۧKqF # FMsF\S&קbYcYxy´^ƴ՟SxҺ!ICXf'(%+sGTvOzvflИ||n[kHTsS9_5̒\+R89ZrqQDhЦ̩qr/9ݳN9(4f)NM2o{TTS߭v:t4ў`ib­- ޼rƔQlڗ4g2ob0S]Q[ D`[hؒ$wSutFuFy݂{V/8Ftֻ6ZQ#*qj"+[[#T@$;p8ZSܟm,'Ib$v۸YpIϮOZ5)\N:r[~Ǟ133֮4) (ҧ%3ݙDxo\Е>V38EMn_yfMZ\u#Intԍ<=?uv$͗>^MrF5%K- ],+dޱ\ߧCaR^Ʋ*reg-„FQ}zuS汏4UnhZUfY!]GұRӹ|VU}M=&A:CnVc%ZRgEeMȟ4z~_rJ]RyU~/VqO(߭Stra䔴fƗ ,wrʳ+Cټ<%Ƌd)i9G|?iR%勔IұPmvy+NEQu^קc/<7-qelfUc83bq88z8:vkoVއèRk[o=柡V?_ڻ1 "Pܩ ZjE.i?_2'R88ޥ^]N'gX]&dA##>cR7j--y Súh[yyOߖ<K&֚ C{??'Uv+bǨ*DN 6O3,N\]7rǬHwGoo6P:'i$)dq<Ԫe_*x學f#áoa7U 619pWQp\EV=w953 \eI{omು%U#rdd+c/ԏx(_ ]]>%Jr]g MwPo#mBK)CYFpA`gԴ cP]oi1ac4; e%0 TRqz?/>_O qMkmqk@_[Svkp>q&w*>lrIrk O6*u(sYC/%_]\Ca$弙[sd6z,JWoU'-~ N—fI lH@R7v+ɩw&ө'Rzٽ?#]a\h# UiБӂƫqvJVY>f׾;Q<3i+ n?X؄f' ~RaU&yh: (9=U1֥ql $=\'TZ\Zm~G)8>wk/Եk\iLEcd VI}9a kZ6-]7 ~-]F?Sh$`pA u4:'h#R>i#ׂ.Zdb#hTӟq_'k¬D5kkoѓNS|P˙4}IaVďw! .2sq@@:U[}G 'e}Uwqf#Y fY-;zƥœdmV6 xxƤm~>ڍl@/h=@Nsָ8Q4vOk|zXYSu7oȯ+U"\K,YT2}ƫZ8]T2R#4Hs^|('RB{Qyw79CʥJF2Q5wW\Ͼh`qʯ9e$ b]4~Og,I^ΜFSѹ݌}չ~+F1хO{fGpR3<4W.E>_ 7mD[?1si'NKжܕR J_~2mMXy FecTӔ=ϑ2;+LUʣU#r9[-I=㭠c7 u'#XG5שZiV&a#(19N<=IbsEY- vu >UaC^\.ў 1v¬lpo*P@nvW*1Uc?uk~{2i Y[beiqw 9έ9Jwvn01P眭'{5.{Žlcx8qCҼVP婯_&osFEoeBfXcpsn@?,$kaܥk=7*ZJ=jÉ=ڟ+e|$n}k˧aW;kR`#Ǿ#Oyiion<'pF3K2,-۷.s޾I|Ws0WRjJ}۵X\3?>cWU?˞:[2*2Svӡ>6v<_P+omk扞*}d;3Sm|r Kd3=q=UQ_qsr?OZ$wN<`F28BXhS[ux٦_kyS$cI\drS#[꺿>eS/zmӬ$ (-ўv3*eQь$iڬlc>2Вk1< ('=N<n}+Q2ݍFamx?n ?gCh-F>f AMg^ֆT-oBƷ^MxjOEPBϮ~^Hsھ/0e_}<;.ou{8=cXeXH4e]==T wB8JG=J8 ⺏^_y^Z?Byܼ`~] i5T{ z[55 Җ) N28N? r>/uaWRR>|;gH97a7+r_x ~iGK0+SGy-;*{nk>|mFR#]I>q :jn_?rW߾M[X_Y_Goo;Dv|М`8"˚F]|<.-{Z˒]M}=ִKÔ(c,泝#2z^Mrr}W QJ/{#_[Xх&X`q'c[0*iuf9Rkm,8<4\ƥUIwq^%j{*VW_-LޙMyjGgpU[*.y-=>l%[LgO[z%[ R7W YoO»)Ú.+I=JYvF爾eoo+ZݛA;q׌zzzf4=6wkJtR7_ sYյk}>WGA;LN.rvQWv=%HᨺY.>#x?0obed1/1$ddco~xIF][CŘ\;^eѫm6|2|?Ag5Xbku"B!F0qj2JZǹ^e}w*mmω1LQ#\mڷ%zp.94e)ak74>$)1dne?'kSFwb2yӤk?jpi#o2<pH;RZ$q<$#vŬ,7 ӟlMBU0s_ Pw2̘p0x+ZYIsD2=.j:twF~ay_7XqGRԧu;8w&6r*aqy[vIVm|ϴy*88Ҋ{'Ѵ.~C[c$F3NSxZ|N._V?qCU*:pW:~g#*ad]NZ|FytpG e \U$mSYJgkcm=Ll&b77>ʗ3=*_,T1o[Xv:ߋbKeEwI X=TTXJO{f/vn Y>~Sch{7,56+M +L \u(ԧ;n^3p5momc?gp}pz\ܩVKcTUad<޾~(%tU(rqv_qNpir9?OVxgji-#Iz5fd*~=_z C"89\Χ%kmmRJi.,X1fF&2sYԬR?SeRN۽%>!T?cN4Tݻ*t՟bwM2Vj %Y^{ήQ/]XC1gWf2xZS,OoɄ7dۗڬ6AWdcJ_z~FDHqۀCz皥JWiG O&g!?.w<3ҴK4T)ҕ~Ifһ4ؒ]-EdX8%~ah[a^7 .{[${Vx8=MuEK&KJJVDo݆ rv{cT);WM?Eծ!^ - 6l{K'#i{*4.3r6sz5Th:p5*Ѻ\n ==y%S<:%>Nny0m|k"iOބI`+0gݷ*d3U?zjƑ)1,1οi*5i?cHI9-6Ì8<cҗ;GG?2HlUVL|=qק?*ny\*Kdsv1,]<ں(LXY5ń;F;J \ rԗrNUiN%Xyj7q֢5뱴Ǒy?~{~uN=N%>Hn^{;x5RY .R7 mZ5u٫OnJGRvnM{vL}fL5Fʿq{ӏӵqKF.[/Q8SXdM:gW*VEnѐq:VPNITZ8ZNuv ڝ[ZG^}Nxt}iReF*tk'K)]W$R5JHFM_8`a3tXWZiF"4jP̻ [rkFRZ47+)w>^+ 63*x:rvuFƶ.fNC0nsʕf۱!CSR}bi*kwضoƿwT8NjEIF,Ɛ}j刔 jAJԽ/<@ W*yk'oݰoމ \_ ZZEJ{˳رUm~_=OQ)sv4a`۳tcN=wl4mTk/W G.FƐimn[OT9x8s]KFꍭ:X_-㕪Zj<0Z_,^OscOhn6E' &cQڹkT.څr)w}<[ƉF>a߅?ҲS-[SyQҸ䶍&Fwgjf3FKK2 *K p9UΧxuO *^p֏{Gmc9V1Ffn2MBz52mG}jq5"4{(ҋl^l zⲧ/zFKRDN:TaMGߡZK8icoϧaSݨ*ؚ5*:t}[ b;?7?6A}rU)GZ[X+p~U!m~uẹ(߱z]\W5('L+ԺοiE|=GZtg{y#9SHMi'Á< G\J+mNi/̓s<@z9*:'%MvjC.۔pn3\ D2jrӮ$8_\):RPїAUcwkD34J|'g8?t7 eiz#c_n-Uœ>Iԃ]}8iRVy 9#[ڝ/~)˽:t*8{-sWM+xi]{AӡkkKs_.cm? 
L(9=1ڵڍRi*I^D $R''uȻ_dU,Iz *g(/hZ/۽Ymy{I#~cԜzw㊅Rʵ:c:#?QnV6W=sҊqio~#^v=Y.@̵ֹ4};_B+I^_+a|IS葅-1g[e;'瞿#Ssަj ;4rHUF.1s7*u8˹cڧ b20V5bsdaJ>빜iJs~%K[MB"P;=z n3Z\{zjg}٥-"1ClSOlsW=K-qJOhx4mBċ7ZX2@A d9EH~Ntz#Z^Q3V+hT`I$U0^ =\UW]2=J+]u,YAyb~G 1v>Un<}9Grmu4 y'6(̹$'>NJ#NZc+ZFK fEkq#:z, Ca}.$WxXgmxjE]߱b+Tq,r"Q:HllH#LYJu$ uKؖEui$rC:v'Ӟ+y%˫oq{ئ梖b3[YRrswQX6RoڬHĹs1=x)PgȖ3>6ٶ )1 t #r+Ak硝2Ro[{St(,_݀Hl嶮ȟSjbլ` ~eHRuxZap$ST}#4*nj.92܏mcZt煋Eo]>q"oETbz TOkZE{w!K4} u(s;ȭJW:gkļ*ϵ:߳fɹEB5ܪ4I8ڿsJU)ڷ~3 RmvpEf8$u9gTp5=[z}A6zr<^~d+pHzc:[vapRԣ}cV6aIf2ʣ22O]Bq5_(SVw~\kSkh1naݯOk;LU,r唰,?iVƔjO^2B.OT嘁?JSJۯuԔ(h٤F[MtʲH='HQ+_Rhx7we qߚCWNyF45ۣ]ϲ]gn>lT$Ri-mՇ*~ѵO3IdM_J8ҕ7N6ǙR72Kϝ$jc9(#*{1h]V[ci gJQUWG4o[>eUd*WU8cTZ2TmZ㧫VpdtޥpNz֎_iOd䟗 !_:}*0=Zj~GU.Z!b2|ʣY98ִik`u5ZyR"ʠqR*OZor b.Fz׽_J3KzD73G(RxJ1bUh-wэ!rn7K E5Nw:J;?29 jZ(ѕ7hTKHSR]“5N.e-FI H$fOTdv'Ƶ̥{mRk]!IahWv(eL8F#J~A~TUnp1T'+]uSe {eqziIvij59$:)BIorsu鞞5̓*YTw a(ҫ{g̮O{\MA7y5V"u_\pH>sF0ZPEl\o9g&k֔k5wWpxHEgR4S*_%6fiT<@9y=N^E'/[( !=yU(cob7'+[,d?Qr(Toyyh hb_k('V~9M[qaIʻepR3zVÎ~eS2OOAcZp^]?wSB<˥0(j|wo$I(o;+x" rA*#'{VrtS^dfpzz݅DIYDog)+瞵72ԺPaV;wHTzrmCJXXp~Q˪DP[a2՗~aXMSk6|ń+nm#`7$)ˊ^ӗ}}[,bqD+ǟFJu#tܵ.ƍn}F a-dzTr]BuV܋ێs5B->U"FY>Y9Z72tc)_}=7W=;9FpRޞQ̴K{fG̜<5j~ͫ6c{fX>g"F\猯s'R2mS)ŵL{!ó1ŽGO5ڱҭ)*o#\4c w yWi\uYTVBg9a(e+ӊNբ,\cr.d}+T4QJj0C_̕ _SzUDʋݯnK![vp%sgrctyTYg<l*~c/>w#%_,vdF8$y(؊;S%.k5aF?94oU$URFr0r9O|Ot8KyI;lEhH·ŊOPj%S޳V0A[Wrʭ [ZV{n_[aWwZf;2cd]W"l;U10w>_ʥT~5(ɹ~Ln]vcJ{7_5P3$U䉲>d=F{veaMe{*t==+>W՘ԃwOEИ $Bdgd}i9F׎1U Fѫ[lq;*T{*R4!ViTLƳutM:$eRHÜ|x#=}qZ>e#iifrdfU'3rLJ-h 2wI9ʢ*|R^д5!׿N:q~G."ܮU[=1;Թ^W0H^G*̉{J5, ecuܞXoϿ2Q*v\_O""ܳl˝V*t"6۰ge)K$2gĕfF@T{rH-H,1Jɽ̄dNP~JT)nS_%p!>Ƴevh8Ϛ2d\ `yK{ݙ,0?\g?c$%C^p(FQ,ekEnr Df}LR͢!OF{Dve~VuwuRRЎhr}{we;ghWrx\uy =/aUoxG\o:Ej:N۹fe*-ܫ5R橻1ǗKt4ln-R15롅*|bC/}s{:q[{OiRa0,P,\y :rt}C^J. ;HY"P{jt(zU8:нk3 ;qܕM; xU8%7 1GR7zsڮJҩЗдb2ߜ,XCsS»aˬe.OyY6Rye+4y{JjG0JIR}F\(|oVqP 4ř1=A]*FTIz~"kmV=I:=;z׭K:5gGY;YV+4?unC|ci6zɇ!7| gЌ< $gQ̰o*5"=uÛVS6ۄ0rHɯ8xzi[|S S N*U%w{ً VYeޫ1f;RGulV5j9$3&*RIoFiX -Y& 0 r{P)Ŵ{zՌ}$h&L1G T6)˗V5['Nuo22PAyHc1#E6P ߚ2j_Q?q^-P1IE$`z'5|jֿ5N.:y_}kBk_9fلW$u4[[hUÃIv#hx 0j(N2Nc'F]¯j!4nn#S*]{߯\җ+甬Hc>hT]oߩƋdK0uyΥ7&IөV!)y-yl8=9:qq^_͂R'Oo8[ 5Vn6qvIFO?QZk-ԛOӼ>]b}>9Qn~\¶EuivLCBNKVݙO8<9م5]{(m~|30(-U6Fs.!'vs㆗(%Kn>VMB*Gk=|þ%5߱61F(}@yӦQ{ݟrb,^w;-~JZ6qU>Z~~[ptԮ^/xmOTMhg?OR;+a}m%da[-i}k'Hma_$kլǛ/9内(ky^k=k/^cϔiZvTa/݃ndY2j/2)]FL8\]~DS*{_;P>; N=;NIEۛ_cP>EHTy`dfTS {}:X gFe?0`cW,uIäV3#RbUܧ6~%S-dM!bz |;=_k [$ݿϩ`kWwWv{(G#*6@R1G$ m)k1%,V!QG gwNZV)Kn~aVwrAu`Z)ɸ6jΜo'sTWM6,2]jn*\zMjeRսي/kI޼ڜI-^u,5-c 9 3ߏǑ^WCI믣5(Lg3y) g놦 U+dN0Tszyoԣ:o=?:rQuF><y =ʱ 5T5oiR歺*lDxTZUUxVR\cmn8wS[}u]*S贈ih[i8s2g s*Vh>=IBR\H܍W_Xg8D8uƮMIuNާcߺ \gzNzvv]KZ6kaz}{@xkRTRMKh-֡`=2+7-@}i(>Va)ɺf]k*B#ܲ< :gvɕJN4y7ho,Q9f{JR+fνݚn|?1.s(ө*sœjTʇM}b Gfᛷ;ӧM9!E?*B:u)芭׳Z\c*q8qYԫ.xsr:'ij3LFd-ҹJ1,U͈B<7M4QzHʿ4DH9ly8?pb).^2G.h/U2}d]Fב=[aimխaC /htZֳ[?3G\Csҽ,O%JW}OJ8ZR|lx{ R/8ZE'ןN'ӕYK-I>=OshzodfS988;UXш-LF*|'gZER杚#ӾR|#6$1rȸ=3jz9{ڻ~7:b͌R2G$ W q]5Ǘ=Ɵ7|Mm>;*So_U~'g7:>[7B3C#e8cqC<qDqm)ּiniUZu#Zi3b0ZtUnmKo+.IlYi#Nw#$'iJ>g_ئwt]Z@r.83o*^R:eV-};AZ}؅*}=/SݹF6lO10V˪SrʕDhh_:O. 
4k>#M4$$:䑏@¶ӼlWr;\W3LXE{ /0$mcqG=eFoNka0VmߦC׏/C?S {#F z8=x[Q*j:i娹c?8H;me2d#GBO1S_it{4wyω5Z2F l<G|]q>eM|g|v~rݺa乗nz'ȹb#2^[oз/ .ڜ֒4I&*C HiM2'*xqtVv&6d#U*А޼|eJqUX8/'+]/3xfMèvqq#=Ҽ>;Dyɩo%hǖڪ1sv_QNa)-?RqAsBd8<H.O~{괛WؙT)sZ+ƖFj}{9Gs5"ropY. ԌN>c[5)oa]ct.<k^:H쵠⁡g F,& V3rKEs丳ܬ6,HT{~uN,qmSH}OZ>eߘRʗp3Q8ti~N̬K99@p4@Jx݄`$ y?Qv⎬<%:Rߑ.F[h{q^=kW^fXW$[@O&~FQiӋp(Zі!Gك3|G-.[H'u#9vYLϧⴌR:(ܜXk&Bw3Nx9zTctT✝`$9y|HFOߌQH؆{6P8}W-Kyd!@wgj͜uRiE}cFdG#@c!|8 _}wiWPH1YF Nϖ7͸Ef◡TC Àrl`:֞RNt,͹smdQR(Fَ;7ľvsq۟j? ROؾ|Fd34LcimW'CS(5){8w ϴMde7zEjNP ඕHՋ)eP01R嬉eene.dhmsq o0t FIUW[+\[ ;af[n@IQ8ԙKwy#_1X/ϕV瞿ZƑ%XEB]F%fY/=VyT+SZrY&Q ىd<ǧjZ6#j϶Գ[zD2+zzzSZ_wiMZ_.`'1S.Wg*'vKRBy$wg8T{IrG'=[޴kk6ެPG9#źroRc.,mk4v8ly=QX_ck= y#F|җk{d[*Mö錜n*kB%82^USo<7__nyV܄ F Ų_ާ7mJ5pT޽Wb,cSǂq'8֒85s ay%'Uͣg0qGnϚV{<;z3iW0q7UWN: 'YKM%E*njW}GP ũy0V| txR]>Qurkoz$9k%t])/+"PEX#b#XV@9Y{8ԻmkjJqےg2GTS8JR3 ;loS3k9S.g[ѴrѧYw܈ոVŗe8)xSc>A@#PdN82J%b\I8.yyUdjҩF^%VujڒOB#Imu.ŘeHpheʥߗS'^_ MʻP?ƹy#V2{~FvObOfdTyn5ѳUCQ(q-a.g4M̊#'<>QFtmcɅXN'5LB<F{J{z4p5/tmDQN~^$=1ҸhҦmouFj4EVI;Fj|$yWnzWMÞG_:S5i{;/tBvnTTnHR֣ fdZۻylIF{ qU㹍*55mZ߈ԈB&*2j{ÝRmfːNќ[Q{,-Hs4Il|\VPK7n?ǦkJ5#ݢcgRTu4.\ y|`>jNV^y?Tu2Gjgkirc*wI!j޾T[ӕHҕo_?N3yi|]}N{9E" `R?)*h%FkFy (m=9UI*iuEG_25!2g{*QJߧmiXmf\1#%7'96IBT0Uq^_lNǟ s#cNb`p HǭpԍIFNxJ㱫eً7pG5TAjW4ܿi7G gw8x㸮S.gR3VWfI;K+KvrAE(J;*:td荈`m^I?\sVk+ϩOh6(6xPJsG~gFTaNP{^^W42dvXBd*q߁ShʛZΌZ߷{jJ ϻtOgN+DS%߹npayT۲{VQM+.Ee`ݏh>$:Ϛcjr._RHh/˵M7gF\$iѷ+MOq8WTey.mY ڢC,Ӳ33+FY]*e:rRߡΚhFܳygۜrGT)?Bq35۲Jl ?NR+ncFסZy$-'ܲeٳ'ӞTT]jiB/ԯ4ۖޕ:{q)NC9Fk{uUBOjcWQF_O(iw}A * feG?N+EͨΔ5>*[,\G>ǿNOʌ;tsө Lū_RfH⸸hm/ʮNT}?P~zj^әY 6s#+|ݎk~f6#}/AsW?ZNfku54зfmccqqָjӨOC8Sԟbo/fi?oZjEs%l#\y$RF*99v5*'+7L%vѷchz(Cddy9;vڱg.."m|8-?#7 {{*֫*zrVmF dv+x/W8Ni m#~"@c8ۡ1;="Liy͹l}=9R&4GG}#'kd9Rf7 qBg<9߿cRڲIk;;6pU$ 9r?S;)U#֤-s4a~AoݶX# U%%J-do͹xEϸTm'9aRs*It>fI –nxɢ2rP}Lx+nnJGq7B@B)^F{M-}u+Bѡg㕤عӌiCWY†eI{~VFov"yg>}辥Jn)ǣu~{wlƲrך1|ƛe幎T$Y#G|q] 25W4!xd<瓎O~4VF}mgm<ʹ5ěV5oݪʂ{q~TGNU.%\yGQG>u{1TGtwv"DŜ._#^=eۡU#uQ+^y?m.0\rFhm;NmZ[vI LIE9.MꑣRdHh+Qc _9ᚨԗ*gӣIMv$k1HXI~mpqdya+6:<}QHKlHUR99]2q7͌,/*O}E0b<{m޹e)_rFT7r8I:t&_z}kZQAݭ# 2W@Ⲏ#-GU3{-UF~m+TJ:VQN:[ULѲ6Qm#I8 N.c5^mcw$Go<3:scksש'UF _i JiqqEg)8s-;VQq"f?OJRW1FsW{vVW.$O%hzU%qMn*5ʥ.="dwA!?.2UNp{zu㿙rGQ8zۚ:}3,ז6ॺDoS2Jqk4bkNCm~TXI9{r~VRLרnMs?=]M+[ig8YWX^c)Fl馓v_B#d嬞^gSQ1U'/Q[M- fU ±.^Tr-QzXy cl?sI W݋vS9oLzӌuJgr>YӼzVHi<dgZϕ9'l1G']f} %ʚE1eV,r*g"7N=^1 UG4fsSlރ\ѿ WUW"ʕ:}VX6NGJ¶d?+*+0>X+_2ktX3{⹪EB;gNT+R4*̣*98=AY˖ЗJ+W)Mɵ~`ʪss>ƸFv*ryDhvXtVe黟NciNVhJQQvP\ ީgkq4J3J Qm-]<;QKܖibeDUn/ө]ѩ*Tiݻ/02yN0{ӳdZ7Hp q oelfϥuIvD{Oi{M*8PHyOsw\/StWtc|AO/NdFSq} vgs[F2qn1F@ y|cץ{,,Z/cj8Ys)4mm|6Wa/[cxC7)펽3_MdϠa)=[ݤQzרN1QGJS{{g?+I\q+nZv&t ȏ(>cz{q6ȩrWOs5 xgp2Hz_Z(-;a4kn[6Dn Qs zt5S;z|U 1Qz%ջ;ϵw2dF1z|xd_mN=ok/wU]娙».0s78cFUg4/ə$h3v )]u DiӪ5Kk&_Xʼt#)I&Ei-kAu*yl/q##GpYkvy5*ʥ6/Zh*yQFGUh8 w/z'R5=M;m 0"0C>9q&ӯ]9C!kti [z(DJHM+,o!-4ݖ3$7{_pm82s׎JrOUY-ff&rs[Fu+dn.TVbvgiJɛ{Mz$K_acnJ}:tYkЙsT.]C,qʗ[I UX:cXk[ pĥ~Iʙ{t#t"T\܎-y$u@ÜJ*HƤiЖ'Ivó*U{kzA AA#f,89 'Ҿ N`J^ҝspyO:+I"zd<"ƫo<᳀ߏ0瑌q(qC< h.KN#v-0T3$fc:w>vm#eб2;;l8\wR{#ݕ?p.dbA݌aKT[RQk3.ooi*w3qֱQՅ:_7V_[GV?EPHO^L=8zTpeeU~'$G>1v#*aTփ7wCu;Ylge]fi sg=j`*iuTu N~YLjY gRpGO\=/-;;Z$e"ବmt5q:~f# }c?S|rIpԡ xY[[(8e"5g'qNX:ɹڭ5nZZ3[d2q^ml?,&({m"DB1flT}3b'B KicVy[JXWg?/{JQu3IԛjeZ7cbY法ӻ^*t:ūGDޓbkJJ&#C&$91>G~*Q_R?fQ[_%:,>nwnNQWS)ʣICzVSMuy̓̎FqPO89<{uC5e{&tPEI*ȬGP{ n8\e9xǡ=U 'qFi}tnq6(!2[vp{kb:-(no#ի8j6>_1|RmsT1sٸ|(2HNy9߯Zc}~~[q:տ+;}}ECo8#p4J)~HR;Ti\F+ۏ-OviT84NxcJ]:51,`-$<:X͹lwS̡b ԮՑQe>.joyi|m'cJ4֧ї<7vtG[pӬ'Wz;֕ -kBi)YGvO#ܴ_CYt9{<Ɍ~SMKN;j}]KګuJ~:'x{WYm`|+0rG{e=̥֩iie/}}<*:3}Ww@e'4MN8GWlj9N 3UJqlf#Qʢw]z=Zg TВ3=Ɯ\㥉QZ>aeȐκ /by:R;s,ߦxzѩ狲J_[_4VR$d*W0篭mZjn.oFNq$u4{s'Xz`}qymY~+jհEyT|#b-CwЀ?{f8ʌ^jaڵ=տT~cޘ<1%חrdIb 
9:q:cS]I$q㰵pV}޿q~\WVcȮ~ԋQټ+[[e2$C Ϩxףs8:'n]svZ+㌥~!V<\m$%!XHq@<1{8Dbg}[?'ӿ5P,}ªw){VjϨ3eAqrmd7? |A,>"o&#s#ܯ@*9E Om}z6| t+M["[ɤegCgcG9|sElrMf[eЬ1x)>[Nn˽յj][1\`33YuiVkW'7q¦(wQa@?}5˙Sqz22ŻݵY!lp7땏-8.jn\o+QݿiIgFۋ6dWTǚ:+BeMߵSbږ(p[-jKgJZ>~[?miǗ-F]$gsEOQIG^ 0!$2./#L|ÜN.>AK٤jRSZ4ԓRbo}NWoN&([{NU"V{^ ?St˒5mȍF7 Tj~v{<-9JOkk[T]$Qy̲ |v}zjZgα 展۵UI<,hOI궳ͻZe|F'kݵf{y}5V}~/ῖ9#i3sO8ђWuxxT؇Yiyڄ:$K" <~ZJk-J|>ӼIkxi`eI>v*-z|'yRF쏊ࢿ|KXcs[$v t#*x|E'_i%j~e$wFH|S'o;m 9G\7MTrYy_a)ΥRv{o\6G;Nq^.'4NN6t蹪n'ijɺVU޼FU8r!Pg+)=~G㈨>'Mw d=F7pӍKIk}M8xm+LÊEզnN1)F[:^0ɷok̍omO6}[~g t:&Fe=#Fw;A,ʽoN:ZNIYvVǖ+E_^";-RTV[:~19>+]Xƴk.YY|OIe'F~FT7q> mU1U_]]\'mZ陦6ryWO(efm[}wQjzrN̑ȑ8}-~DqzNa4HFYeSROcY꛶ݗqOhLDI;t]|Ҍ#&oCխRjq+%껙$cgF! b:=>}PJy'Uӧ5˩htk6.϶<Ԁ뻠'#wjFJ*_cIrޕ:C~`tx[y|F>au`O?# Ѭz?3zkGۢ^\(-uOE]֩MNRJkak7q4r/S_^v+M{mzRY˩kS*y۸cp1FRR嶮MRj2Q) v6ټs+ٻN:>v_n=|T=+ͩvrk^&4p+뫳h[$1\Wָ!?v#RQzŇ#IؑwUT)ZV%MjNmUe~!m}p;=ۇ/BrҜӣEBDS&b/NJ8skxuGϱiV$fT ǵm89:-OEF=3چ7Ay2αN9XI{P\<ЅÂy|4c)=eǗ\}6؉'ka6]/l>w>eR r޻6*Gḍ,N@gxiʼE%# {Ȗw[|u9tԭUG V[<8mo/,޸taOקCJ嚫K Z\ UwjqH=UƤtMWQ'>MY$vo1T7쪧vG~r1QҺh{JQV9e|HIZM;E"u V{8OFXa0V*rē]ӗ4c*rSI-V׶xI乚_(1ѶUq1STSIƊ}Sghcߔ]oQ8T\چ<$njN)8tIWw|[yP]ۺ<~UӅNkniNQ:[I g͸uw3S:%O ˋuU7%s.<͠1O?*.]ԯzr~EKø o,㞃8[ܰe1!;uT u檧,yjӯ[tD۰F35>>6(>ؘʝ9ݠNJwM9Xorrq׶3Dh^qonbKO3OEE]D9x8ʥ5Qq[He܄ߘ]KKROrc{fd_.m8\:dyݟ'WJ2_I`ӥR4N?vӽ_$Mq-P,-zvzt{uZpw+4]ͧw{Um[1I T[Y]2s $c5N1Nۘǖu$bisԏ,nFF[Y q0sڦZ%Wqb$>kYC\K=Ca杛_ޞg"ofG[v~Z $fs+ %R-'k~$UED3Yd 0vvÚNU&#UGU7v$#mhY;atJySqn4k˹$xGXԎ4 QM:#5۴6Xu +(˓Ezh=tnv}ي5&^EB Z*^ MNG7w1y/+޴iS3ݎG廁5ݸ2ı냌ӞR4u)ZٛVF e9t?\i9XO'fۙCt$?UGx_zkY}Ytmes߯`jhyKZ{3̾_O1{ߏ˵e(GzKR7I;2FWz&9b3$h5eHd*-׏Ztt/~{2H/eT>^9UmF^z0d;I<}(|z{rn[<ۛߎ {9J[SR}o#hTܯ**Uu6FƗoeO^irSHÕC%gV >kJt-{,+SdrǙA}ݷ=x|wjM.d1fzە{=u4F}vhٕ+g?iJRJq>ο}@ms޻!NR(Kln|Xr9Nk8B^N?W~7[UVE 2㞄J9.O[KTUA4+xײG1czբ=uUX[#3&Ӑk4]8i.XSUg)LK"b5\ ;t~ֻcZ6*3zx 6R*`311КFN:-L?i%.;*yc wc cpG\$N8:=٘ޒntn6oS5V'4bg0O) 1"yx 9~w?>&>*M+n^)|Ymwx(zʹЩʝʏiVَ@@8^rkC:#[UB%/2%XG A&FxU/+I?SQ[ر*CuOl\ІfEieA֪u#dt=4zQ&TlwS*zIq762@X֌daG,ٯk$2"2~? 
/ƮTBzyV$0fیǷs9Jԡ^7ts"mT3} &20=F~Vc+NTRћ] 洭*C6s:7_#gkUu7f嵋 yҢ ou!גGPzwzQrU.M޽<)LO̱(nWqھb5:/4cZFy6ݸ\goθAZ yW{kxwOM*HLf^Xn.8D`rxGMV>--;œY][K%%EgeR88$pkU'S-Mw迯#)c>f_[yܱwQA~yWVJ2oMzQja*Ed֍'ܟ b?_xy7[e\ e?c_!G=`{8;']O̱jJݽ,} jOI-઼j Jz1]̱?Ϲ;1&o[v۹Һ#ԭODrָjwe%}v{fM;躿3㮭jR·JܳgU$eN0*sCm=>)U.msޫ-J^+!*aFx'9<=˚j%cx "iۿjSYB m }y,Ta~.9bkO}Z/UYHV+7DzJ唡NmHkNNT֯,h4 bHV)]lqʞ;9J;{85}OzO]iڌ1ary{1BA9eM{q(jBFWq%ftduV*i3Z n#t֩k>GdZ}> HfH{ n.I07$6*rQO϶XTjFNim5ktڞy HŮ#,.iO_xgЊoߩ.UZo6Ϧ뮿?;tR;,cmvmʪPsz#^?IE&x;:QOϯy揩hzq H͝ҙ9pGR[J8vWY֧t]vZ?+u#\O{imr&[r' }jpw윧>[mk_݅b5MV| j][ڋY./<-pKWY]j8;[#KO ikH`2ӊc}9T$-vV"ki^ҡhe 88Ӟ3יVWsol"2RzgsO 2_6Z(0g#c9Q$RWGM/#hziiVgVaHS==*ғMgf"Q]P/B=B2Nxj828&|Pzl==DkUs$c9װ5ӇW?iR<]d}{SɓFG6Q$ Ѩ5}{NTN&|eNiT`>9] g3Ͻ{\ %x8/̥O0m-0-ᛵ4陾F|;PcXbÖ-馦daԵKKNo."֍:hcy+ y 3ǧ9*O*uF7\ Rcf#g>: {9(9au)(u;_hiVe #Fʬz1QڲJjZy"5qy咶iً^޴vye#oi$pr}rZ֕,2~iN5"잎2M:X_[*H 4WVH#{r+h-]Zkob)(_ku #[mF9-fy@q]6[.糈O^2[v^G{=O(U?Ã#+{9Sߙl>jqw}a5ɵ\ߺ1?"QB})T=[XҼWY^G=``A\kރ}9FVM/x~9&&$7:{O'cIa.ͭNm$ǹ^fxgRN íQgOϊ-\Q]BǍ]f"e,_2*c >U1ʥ9S5mO0D7\4j|%1'S5fzYv=ײ(Iqw${Km'zt?ֽ=(<=<7-(]o "Pa+3 [gccWSothR6.Vg5=jmS&=ol,?)ے9f QONik|lMGk;-{۲}z^Լ/iemL<~wW[{[FUbH=R(:?)Qy1V_^Si}cT!aCS;$rV>#7/OZ]cx"o?1fڬĆ c#,sw+ڔV>[?G2K)yahḆ̲ʫa+oS{Rt\].pN4% cuк~XwI-ǝ9lsN9lMWG-Zre}3ϊ:p/z~Mͫy;,\W}<',TU}vU'[{]ng;xGHu(tʷ19Ry> |[~ta*U-d6aM'JŽ2C=#17!lgq)^\vWk}f*cZOY|Sk-UwiFURG>Qd`er䬠kKia84涝:,5߇jcY#J80rqiU8ҳpIs4oMnxwmhG~c9Y AFH $7`8'Μj{h8SQmk%^ էInokv8+4'U8W3S]qd S9ImsR%дӤb%]zH&t&|O ۪o&9_4s.7cF9Wdh#RIуvn+UA|ې1ޝҶ攖>B5ki5JdAIӑYÛIX6vs[Br]*q5a"I-u]w@~<*ۗdo*QXwUy\18QE+B)D1}~+zzZ>6Z!3/y[}u5å:4i9oz ImaArp1*ΞʵnWoMn$v3B7Sy]vTe*q+<}l ;p:g=zUǓ >εK wf:ܪ~V/A*}KKQ%!s7?ʬ[ ƶlƷk%1<[0[s09 O㿱DӳEs{̆,F#6 ʼn?ܥʝ{n7 q3$^GÜ<|nQ%Mw!uww=P3+?o.E̮LycOȊqopL̑y~_oA$洧i~J4m_]Gq~۾PAN*QLcVvkF*߮p-rN?_ZTh8+}RZlZ&>f<c99r!ԩ-_n˩ t,R10RX-n" |ˈeUY&%r!sgxeR"bW(V-U߻|Gg 5!~i=L*ƍZ=t*C:\*n{{jݭ(s+ܰ,,Ͳ,XiY)qy=T}2j_OXւĢ?1$ϯ֦Ċ(<-K,EoҥU*]oCH:2Qn 'fR޸}uFcE_bM67/<O> U"jTBy#ő6 䉋͞=s\UFZl2iJ+Y]̹H-|~uRn]^V?3s:ܼ-o299PWn1NsmRgta&ݴ鴢!dѦHٟz,k_akϔUgթum:ŤCmj*Y8$Ib=HEJKGբ׆FW2+FC-j_v?$2[Iߎ\FwΪԛ]dxum`7I kxn~{%iCgtF\5!d q3}y/lў|cJI܍GC#nffry9YԠkgF4S꯮@(>v "F3"evJg{v)7ȡvUdf'\yӸTJQjGw&PRʼRTnΈΝL:v].K Ocj01\riN.JpEEBiZ:E<+3-r{+4ϵk9F͛J.2z„pݮ W85)IB1r:]܅xpv\c#ӿk)NRj0J{u0vⷂ}hA=FN+5%NsYZDwLJVF;r8M"s ,]d_Imd*Jsmd3N@w9l29M,K[8E!-V~BTlj%&u_ #ٓ%*O~ }=NvnʃԪCƜibYZޡX_{l;Z:~&ǡ:wBJI&tEw"]˻s<`t>TT5Њѣ=_t3Y7\zg:KCTegggFYw\EiTl,*(VqwqK۰ >ֲF.eJ敩k~fe; xQcRX4~дO;˞1 6Iy'=^kb=.\yff˷p%cxQ]QUymK|6`~#ڮQuRWIo%>_ѧDi9RRxK\"o_+7#4(求kʏ:$7_nk>mjH]w}A9#dR/[M XĒ,IuZuqO^e49Nζ?B$dr\$8l)ڎxygwR<ԍ38_3r(ۈ1WR}e=^ñPl6OßcY{i 1f vm#rޘJJ7KoZT$In>Vךc˙hW B.v{ug%jf59Kuww[%RWΓu7V~5*5M*II1cSI8W7ZLG%G kIX@vr:;jsFa^KGcf?a'}\R8CKt*_nX*V-A~VSRIFSViuٔu 6ȗ,FCGvQ8jb(~wwVHBAp~qWW}9}*9I=ӲSp6 **NJ*93_c%%&Y5kQV^^}B\Y;<=2r.qJv qZԵϥ/"I$Ek,|7z\V1ioܺ~N؎KXdi}+y=s]ҧG;9jQU)A$]M? 4n&0ɷy?5*Jsu<-L=iaӺmKR0d`]ꫵpw Nrs;DjS|zu N1G[p3FFc]`8>ֶK{FV3e(7gu+[ǸHqdj;cֹ'Mio5<;U[v.E7d$v \UOzISҎ*]9Y6bÉ#hח w#MJwYǒkފ6!Qe;c^2s*G(ʥ9rקB!E7ݑ8,32G\ ,~HacxJ}xZydf=:W[TZ?N1z+yjVTuRR{ ;^$p ;f-(ǩ{nVשF3+3?9yqc?q:ӌ]^`C ?sVrg~c=p|}d 3UnV9'b%bPEF{wuW0IP3pq2kJtye} F"خ'Ku+60O#q]S/*KKӦқkEv2>4cۿ\R0Xm8TT%˩y-ж1jRXl)AzOI-^n"̽{!G6s=? 
Z1vG:ӵ; $bbEFIgz䜝8l8ӌ{[mmt#௝j:u/'ϊףԴգ*6S*2`T~73'RqzV]$KI7 OEX2K#rӒooxYy]GlkS湜bQ">/a}\Vqw'bt˹sF.W^ ,yP.AP4OQ*7`z@q!Rv"t`\ZA3zkWmk1*[wךyF2S}&5r̭rx'ִo[:IWI62%vgmd|edAS$K(#;$7,꣮7`9;ȣ$őp9=z1Oi(Zu `( ۗ$d9jTԴQY&@4x{5OmQiԅgnu0UPn;~P9Nj'S28F}v{t%|bГ{SBQ:4֏C+R;MSiҶ3EG (~6,iVCLʛtrxxǙ_Іss&gҶ/g&iэq,%3<)ƚgMOeZ.0zh͎RP3Nz$qRwIje(;-QxD~YS*]%%U|z-,ШV$qr1ϊ맅*msևږj- nprCQ,75?3i{(drN/,QUX_-5y _+o0 ;c ܑ+.]FI#gY)$ 7NI|:[~s̛V6n=iZ4Ъ**F){C>zn30=YJ:tbb]WUSW1]`QwiA/hcR?1U5;,pUylsܟ^;b$;-^?WԄ]C [h*˹A1ZSyuь[е%fub8PNgS D*ANJAq$ϼN⻲1NCeN\[P&*oyot6FOevHJDK/3ь+֔y-f6KCiS򱏩ӊ5)'s(s%{hƵV&m6) B=B1[{zm3֔9(B`i yÎA?y*j{Yta߳%ԲvɅw=;V)tRiEߡ5ie[v,sװG.XF[ѧ(ҾұyKry˃=j¤y\l-[n2Y0 FUTLt"=oNIip.۵7|$q^n Y2ɞ@sЎsް>YZoԄ&o(cyTnrZB2DV*9+JF'v1x=V=e(M̂[;;]v/$/ c*J&}'uVW;uq2Q*9)]]oFU1K%gJR朝>=ڸmpoF4տS21Qڗ-.P4^eI]+XQ$K D| =:܁\/ixΕHV} YefmcȣFQH%v\[inҩR7G%nz5:t5ՙv+4)8]9eSWZ# 3[MmD>fs֢ \3qZ]&ɷhkEG 'f7,4`2*y-iVhJW/=/{+}5L;q'#j|*L/r-r=F1ӯ`(.5s,g|Nzc4ʫW̵amG6628r=;r2~,]Gx`U+>ҳF7/4~`ŲOI9jj{frZKϔҲ 0'>s1d7ZWX,8{zj=MړO3%MϷj3z;XYJo{2dnd =O댏Ƴ%G,IҾ4fs4L7Z $>N޷#YK <y#SkFzXzq#ʺ(A->WPvyz=l'I&@9Doj^:S%)|?C:k⅕flî['t5IhtsK2'r[ϞNZB"Vq<3q]jm$}Sͻ/|j}Ƥ_y\ZI; d䎜~-k*%Q5Kmy#LL^{9c+J*`1vw4_۩'UXl9oLǒ 5fδFu֤xeѭ[:Y[yV?*rqTU{G]mfmHj P8ONRH#5V#תf4mUkGN=떤}-Le~vMB/`.g9Y2n'ܩSK߉_F**䴋 $䞁*9#ы&ddnUY0H$sӽ)|M1M!FLͿ{}=3J6RIMDG?oQtY6RH }3ʜyOӏ-~\\jۤʻӯ=kjr|.1墓[x3w;o ;I=k2֥+~1ۙ4e>q{h%x.J:YuniDnRYP;yR274맸S3R[q#d1Se523Ahs`rsєymFu(:RG QfL?p1J:p)׭*}k2k/-+s^̪Jm];gT6~zh;i!l{@I䁟ǯn<{sY_t}-q.4T%FUwFaq;V}JSrJ5g3/ keVcȱW%3 WZ1G~lV֪]]-w}ok۸yIl$z`t)r&ֶQpܼGiqJpX#N zAҼYF:z|支Wjrϋgk#)'{ҹ-q֣uZ/6 \fcy̪.euc[S&?br]68یqUڿNsQ|DK{vu`7\ks˩d7nݑsJ%hmcp@0=Ӷxe漶POGm,o*<@sWDiٯrPV |d+$,:wN{We<-:JΌ%8{Vv b]Z59{9v`rG}k4iNϭB8FI)^޿1KndI>"b874&~^Jdk6U) YoO>x뫺V}.-5ޢK$k{wfF! 8^{rz ᩇJ+O&՝VDռs[ok(zQҼhJnXQto9Xܳ%ݬdϭtc0iZzYGtT4WM\ٹN.6LƸ*2+Ogw!G[`Jױj4}ևO GwmeKhs.>n97ӗ<~G.imsz6[cRQ^f4cW9|˔[FXu('aƭIQB~k&9U98sXxO?Lq{.QńI]xS [A,H^SJsrViK%c~ϗTiR4MzYPT`ZOGi/*^H8:;Tk-;GqӱF&!-r# 6ٿb%hwSf۷ D,M|g,g,U|֬Ƴ44֓[o6XTH8<灞&4M=*҄^kk.𶫥]6ͼi.G2HRI9\c76/9pN{;#$3<34o=?ʺ#)ƛqƷ2n}3RgnhN<)\s%ב9FY.^ I,xUOg5D}Pǖ-2Rs|UWMv k+o'vS)[e~h)Pnl {Y[oȵ^] w=^_NirS[}_n+Ul}:vW*Y{Ja@ }K8 cG}443*$ݵ]|i0t:(gɼ‚3d_Y%mat)r!q'׷LNч=0RkK4|4HiT"カ8"l^ko3p|S2$Rʃ+? SeG߶_eyO(XTیt#R.M^۳xc[Wk_=[2go9e^==kIQ Rj6뿩h`kyw7䉂;wIp8@G4e%'UVG %kZ~5\wX:gx0vn޾Gі7j%u~M|s~#ڡ$č#aNaNkUJaIM-z^ϳfgHm#FU<(>ǩSOM:ufTqQwAcB5 {§NzFA+襚pҜ_|?:Pr["Zo]dy~n9#wRc᷉,ZwKg4k1 V=q-i'*kFc:n#r7ϧZGs;>]-@@}WV2[K`04TU&5sRMvzu[ƻcO418UGJoaTV#G&/}B''ۯNuxYJY6}ϹAʍOSYmtյݖ+wU۟k?*UmJ4X-ccz)ly~52",i4k}6,T"7oAN[5n9"̍f V2W g隨 (ԬWP > O~k tSCxfYbvx&T1hE_]|˗5;ﯙjW0|{ٮzԪJ\rՅYǙhG4fͼj~QB,2y{~UN-{yQN74B|49!Y;| svʔkFQkkùog!<e=cspc,OK9F.CU-HAj4<d9&jrw [g 4*% @vFI.9\;zqԕ_geD/]ɬW;Ig[W)7u.Fy[U^>#%%Uge׭>t0eZn*v)zΞ׈s=H8+_唽^[RII'_ny% ;b)i/x+8͹@̲r[^ {Zt`SBRW߅|)E-dL?X>܃A5cN})]SkKލzP0-7h3YF99 %,B5Vt7YJ87,s +("4_x< >z)ocJ8ɮ;w{%nܹ0?P;e/e+s[|?|E3Bw@8 ּo[M.RVkn<y#nn,Kʫ00G ׸E:w+:0Y=-gzi `4^99N2⛵cBE]^۱}Y٣QNNG?^T:/Y7clF"UF{o|d5 jRhsVQHI">LqNsҺ(֌ y<Q*}=1X%VN3Q~B$ kةyW-JRU+GڳGJ|[J:_'M bQ95._ierJ cH2\.l=kJ5+Im&4qvZu0c6/'5ԍ5Z)QqB`ڗž]?گS Fq_#Ob&7o YXj;'4 =R+]GY}O[kit~b-1p$u'<+?9F>V81Mm5Nժ$r,?Ǚn<tkJ63"KG^k<+o k*H 6cڳbU|͵޿ N}ng#wzgF#/b3ejzz~gY+[[6#8WN6MJ/aEΤhaH d41R>4tz0ny LgW]:6-5sr߫{[Ya9$ԧ%;U#BP[[t- Fg'89iR-DɊ7Kt:ڴQmY ٪? =#yJ[.Nj*t;/3ZBOƸ1\F}cj6՗nn]JB7ˌr3yy8?Uz3Q+qmY ңfs򌪞;t^6e7}ߟUҿ"s* |+ٹQFN8BYG2\2=ϻq:݃T攚V:2qvVfʨRnЂ;tje9GWሌ(W'G|x=&e*nɽZNrېX[oּF:s[F鮭ʷR\#SpK3Lg8k­5{~:5TT:klfQq6ehoRv/\=kNme"hГ}j3{˂N9ݱD;~FlU>vu[Ԯ 58q7yjQSc%y]-:lP+! 
G=z'N[UjS)/Vk*V~w=w9鏥VIY6֏E(Ta'~~@1v\|=>u'FҶNFзk5f0I?1$rx?J֌oW+ {O$XZEER-`~(ߵ[vu!N)]B~H_358#J2Jn*JoY Ѯ>uT[8\Ei)l]QϘR BLBqxђ_yg݇`ZG4RtT'+R%*]+0+I:]DA^mȥIn9k2HK g3'nm%J)`/s?/?SuєS.5;[{+ ٌRT9YPU(6k$;źKc0>Tjs?aZa6nS#4L[ǏO֪6^G]NOcd%>d`i 8 slkocG9d&BЃZ1c.iY2~\l''oZTl]Y:5TڼH%<2 pN2x#t 2:zk^Z\1XVFyyt̢5XcӁtݺi|A3S(HΚZl|K qܭg`:sל`FT zui/v:9#f`Ud]MiN:oɑ uӷ٠YaNKxMg?hd}jif.'%P}r27KZcdq :ZΥZ?(Wס1Vy?vڕjqw#X߽'|Jw܎[+hv۸V)ST抣S y3"]jr?U-`ʶ 1b3ӎ2ӏ5i;Dc[pif>;T^o֤)(ɥB9 e1$87_U#+ ^l؁V[yKr< 5-ft$Gs9۹v?JJۊ֧X=% o8B7*c=hQJ1|6V:K˷"Y7+]v]CJMv:R3Gn24co29=9Ў\^ѭ |Su|q?ι%N5'fFJn1߾杩LG,G>EoЪxjQݎ;\~c}p{<qJΚ[[TlYO,rH+Ys8ަnL[y+tmh(ȻY`0z2%_#Ҕ[rW4o8b+q׷(cHʥJk[ӋTB}2Qsӧ{MybI8X_Fq_\eSoTMښz-5[#CڊU}?ƻNR9aܷ /͝gImЪy#RN]:rv:,o#V\QOtB uH#s*Ba;pAu#ZԦnw*|ּVGv^E.sߎ;s[R=YFTb;xB߭h3Uݸ=힕okmOd㭷-[K ;?CUw*CI{DX*[rx?i('([O%HٽQTeR9XZNJ쉼?+m-n;ɹx"f6bi$eI'3Z(V9jb.Uy37 oImc]m]6C!z+]9-C99<ݷ}_>M q4k4ʪ1VZ5cԣJܥo[Vq >OGXƴdG %Y]ošvvF\ y ==>͈p:}xįڏFѴ{o+:3w^fRU,rU(Y<\t}y$W6Lqw6q9#?:֎`W-^?1Iy=t4z/H%--n4<덭⾆'ظ˚8cFUFo>U|y3xKD ≮gQw V`*TmbcH28d}6WedgCsSCլI$#){4tkoGَϨp 5]5{_ vM[cQǔpz9kbKS|Om}碿C [<~I[d`z|̪~˴Ҧ&\ŷGR0{7kI+ס`noīKus5>c,ʬGLyĪ8xd[OV='=wrB#i 7/H8SXbVѫjk~~a)I'^Krx8k<+y?(ǖ1ǩSM.[Mv^EV]X/998$w`XH;J3勃ֺbv_m>b,C8^9RNJOTz4kVf[\yXԑ x?uSV\I-;)Ԓ3U7yWb =MLbӥ?cEI 镛9jUK{miGP_??:9bgTcq]t;o+/yY*"۷wU{WlB3}E+-ytsIRM}5i\21;fE*:cg9"3۱t<ȏZ&M59F'WNG-LuFn;ouGk6l\y# /Wևio~~ɭ]rHr0F) GgƤdީMVۯCIkYmKi3MPNA TF`jJ7uNJ+}wU>%7Dv$I\=Nt8 Us4I4q +ۡ,XW0x.c}`|+g#oǥBW{rSG_Tkw9oxg[ɬƟz-ۤn@qtq\Ƭy~s 'v/0[O+7Kvh^#o俽c]ta'׎;}}:%GJr{ZJRVNEs$lٍ=8=9Tm*rP$t3J/ pp;ی#=ϯVmb#R9K D.n<@Xfen21$~Tª-m.tby.L xQZ4Bl/S?!j8+w#5dtIkMĞ6psy~JȬcL ڻ$xε?gkXrFɽvOX}~W|+&NTϯ;N zl&U|ŊqJRwzO|YxHHгL-9:c8 iQ2V0_Ѝ&ҹFFZղb<r99ƭJc[͌FiyJVG 9uvˉ~2{׵]<<'QFڷ}?mZ%il4j]2K*UZt(#졧XKxYaeT3/ߘ=F0pF5Χ]-mO>Q ZZ~ku_iQՖ5۴&J`X {{+mu3O++}sz^mk,*Dܨ*0ryC. &TϰUY^m?|>ט4F:# Q8k9UtIa_$ݭ}ta~Ҧw+T^$MiR1O.[N1i];{</tqϧōt'J石&取Oy{m(/(lܼw.U$J8k?B55wߣ.]bm+L?| >gwwxxfX)m$7NJT}jK6e^T_)KGO_?M?y\MbHo.&vY6MANH8l\kZE86f0tOԺIy4}=.u^ Mr4Hpm8ʑR+O=0URׂw_ף5+SYEkקt2wZڵϗk0 b@$(צ{8]+S:)oew>پCHJXR18jbӢ֩ZP4?<ORg_]/X.䒸+qxcR05UN˧cS7RoMt>Fa +Y s^viJX(O>2xK}şUXYT3c2{Vm =mY[A$n\V+zv#W::s.bOrp|% zv+?u]v:!RXU4٥V`m<=tub@tu 2ve:~e촷u}GTTvvcY`(>֍<=NGgmr1xY:r߉vm57 ,puxy8i>˩*av85; )th_ ԎzױcF0'dۻ?#߲ե94tF~Ձ?4uϦ=ϩ*5K;NgZG&xY ȋ*m;N87s^= \]z-՝vw`AUR8lkRy%Zc()MV]~x{MSmB9 Dʟ}=VqBp9,qV}_,1KIvf5EE1jX,KտVi쥅9UZjgxUl !Wv@+4fSFQR6q?udN&E& $_zN5TKܹׄKKv/xoO}Ff7RpŃ}/v1w#W,o'OUV ЛexՕ>Wl2lnֽ,F9ԧ8; 6T*Zion߳wA֮Lطgp̥K( 9A_%fSj)%ύԨMky6fm"/x;;2XjgH>]ˀ>e5UFQVWmZ\\Zns!>*ET{yp1 / s^3q։o}}VѴ(f#~nx}A2m֎# &vGx{{}NV_ءpz`^= ~{a#"]d-sK}|svHOns|^oeͯއV3OקIS&{qꉤ0 F]dFF@eԎhJ:yGc3SjM:RM6D0Hymc_GN_ՙS,u(%g¹>1|W:DYm,D 8_"1n]_zV׾>ET{`l7j Rh1o]S֕lbZ|ϋ~5~IoVe2\+G#Cp:ps_4[oᶖzo`r֢]ݿ*-{Zj|m/#ױVjbvz {9nYv彾/ZuKV/qrN'rp#d;SI~=^߁{<% T-[G}=_+ mMo&ɸ 'Ќ '%HFRWoR6_%=/wIͮa}5**F3Њ<`K^76oHºr;r@^CmB2%$F֚7k_~QxXʵw{[k]5Mr~%a_:[uNv Mwh]^qۚR-N9$qpѩݴC!rQJ^:+wS*t"Q7s!ۙ3|ÜW^JO#UH֧F јyv_._8[^[Kd A=::yyrq&w YSllۭ*|W횛iEsq$+tUHV%F޽U(ӌE{jiQ'XBy<\zV_l:E|ɿ녑pyuTid*5=]v|ȳlWi:rxihi(h|cn> j"U0J|6݅4Y6,cm '޵,JVm|^E3MW?7qׯj\z8)N[" fXϽS}K{?#k)]ɎH\ c}0Fשv9k{Ef'bf7ap~^I'qSk{Ɯ-]q=čүԻvQ+jOi%Ӡۉni# P9qJ-&*U(w0@7ǖأ$ss8D)=gN1J+Rlsyl:~Q]ΝgT*qzCjim-ce998SV1SOpR!&jU 21~uJ%BkbILqtpzӔiW[̙FT+:Gu-Je: ݲO&<ܾkG*Z,j VsI4,ܮܜH>{W\h: E:v/mVդ/\0sS(r>(Ӧgfij-Bs+;gCԀGWR}5}-mUKue2=9T.G2 {29={TO8'/cFՐ3G2]q#|v?hF1 O[M>T$BF޼g>߇3QEsKdcʲ "Õ8qJrwf5+J>iTLhlCn'$(捣eB1D2E"ȗP"@GFUW6G 7RQvMo hvò?yT%FR.O=L>TDP* oAk9.+sj8jz}3%ɶOpƟe>uG'{"̛mx8q4[F櫶i3;R&mR+=e1V NO\:th(BWkG!m&l$X[8cӌ%:9h;^IH5Ŕk:wu;XՅXW:*N/^q:*ipIh+>\zF3kIZy(ohڗgͷӎFx8cQ%(YK2}21X^'Z-kNQO"t#cOM&ާy5)7mt[ %~76&am]N.Uwq*R{S#/̰8=*yyGD(>V(͙V g<ָyOEmWpN_Gpj2یރֲF^"RQE=J|si 
:U*{X(ԊE3$tqv1vq~55f+㝵VǛRYI8m@[h2kH~Wj5)Qߘ+#f}[ҷF0q6J|maN]<Í=xwp*FjJ䰕W%U5S~ХqqLۚEN6y'ێWx㤢Q@Z;݌0[ʤiTM +{)b9cwYo~t3n,dxǖ~ULEH;_iR2nm:qQ{۝Ҝإq/sҔLS:\Ԧ:'1USVNޕJTΏֽ iCn_Jڜy9?)v;Qׯ9u{ocxTܚ[3-*rVtGjFU2ZnYq3N*=c- aQͧemiVKi`F~T'ˢ9Ni4Y v֕:ҔԕJSrxvVkmܖn?k^J*ɮWZ%$Ʋci d{*8IГҼs+CrO< ~k{5nUqQWi~^AibUYEehFb ;(_ JVmE΢]/iHh AޘJ{hE8VO_CFlEf261޻ihy0띖c6[$ 9Oz|?c6dSy1vۍm'gܢܲw@8V$7Fq0Z_:cT.Ƞܜ s/y+9hԜeˢVw )Tv z_nKݏ*3**K#i ONiF2n +JoeBլjӧƛס(ʜU__"^,-[`C?JTy FQ[s;|I(HyF~_jFRө$sJp.@>1prd)$V7|#['!Jgn&r՗??ά(9.~N;}kh7u#v9gM2YXgg'JHQATj͑ydhG͐ 6{gһhJ ӌ!α Ͱ8g/Ӛ1KsVbTFTG?֪e*7 K1#jηߩ͡Bk[Ti1rMxBONK+Ox-i$͐A4eNϕ#>$QY`'q'AMTN(4Я;!b`>+ TZ**6{2PB#&fLgx9\NR^IGey-rc3?=v_#jqmrnWō^A#g>د".pCtbdcOVصOVheǏPr0NpFyRP]}lOg}IFO9ݽZ1YT5PN}$^Zɮ oԩ$XH-t7 Q#'>ʔK}YRSnZ/?#. L2~,y OZ%Sپ4=%)E-ʖ2ѴbhOSO=}WTjEAïm:Z^d$Y&Vi#nu)k:BRU)YZVMZO4?k;HW̝78~^3:zV&T6޺tGөh-։#Gu<6w\U%(ZoЈʧ3j֞D:@ 9>H˿ eiFS:csL \BquY}G-зi(f26j.j|:ҒEq,ή03 Np $O7;rm^'q}Ssc+Gqޝai]VNrHYQR6®WVFVJ49f^Ghʚp\{ʓk6;Vg*J>ϮmI+ʌhaw[@tcye+3Jht$ݹHoQkbS{k[j@mzG&q\#Nwzzq;PF$ }3`\aR5ܷerRg# ,xf Yu"-ꙭmyIVn~\ь1R~zKwȍrd8~{񂎇t2q&,1(DzcOL[iIw ɪew>YESklFOy-׎8C""`xZwU-V)́o&!N*\θ!WT%zwG+=YjnA{EHIIm~L$RnK; Jrƭ'ױņsߡwr)S2|0X3}%;Ӕe*kWՙW:ؕLv랧ۥi*0lrJh5tQ(\ĂqxnzqSJ-Jwu,"gY[#nOO?,*.hsӨvWt֮ɰ7"y8=2 Brt\XRzRPKۜ^\p1ZQ(EC CU/E[x]ޔ(=g{hծQ}NxcӴ]]I _#?4dk&/VB3ə.wn~*F7yIs=qd]w/0gld,m#jF'i&T껶vkxQ*n;{gN+{Mw},oJcz+,Z>26PVsy0[8'<z4*6z:B7Xp'ZQ5q.Z.#5}ՙ ]4c)FsJ<+f#&8Umޫd3z;tF,^}[BdV=69Ng8K'w ȵev3(ڼߍo){;:{4^(m./!Y�ՔNyqǡX{i+}Ɣh{X?6,27)1׎بQ{lκ)9E*EYl~:{+TH-:㟴E'J41z|>cQ~C5-Ӕps姷pI.r-/ܿwv8bN/ebbzx$r7masLth9^yb)F ۱<.OjΥ87/NZMhDi 1^g[+RdH_ *>ʹ'*SDrGJUACd-+42A9%hTqNNsUoeQǚֱ ϛޞFCKf ιeN2z t%zsMzKs*?#q OP1)A; ѕ.$$:OO@Rė3W%~v?'\{W,ig(|k[<,OỤO^X'jXJP#5$1n頃Tc$ywi-۹fcykneOlb\{֧JQnnOvF"?y tc Z|9TԻS,ϕR>Lq.eqV8`0{s[SQn兒M\v~L QQF/Elw[Amܳ0~ƛRqw}F9GUG˜3U5Sv ϑܹ$JQ.)CTTx&pӼRzW<ʑ5)tSApV09f .0~cJ0nPVv~e)[!A Y@/Z~%}3yQ*.OH8s3Vu$zwi]?_0JmאqֹjӜH9BF5Y@P鏧ָkӓw~{цu, :F~W' X-N[o>gyc9>o# 7> koBsNZ1nDMfx9uSR.H4#aHq}>;KQ¤]>k̿$mv9_ob緇bDrVF8cҾFqOSa 7%gV< >zWQSIɣ[EyoG|3Q)TW-J\ש[V4Ya`ߨz619Dˡ>fՒ6c<0lȊӖniEɞʖ5bd#s砢eSNTһ(è+cۆ\ʳn${b7/H_Xӽ6 ϗT_2?CYH?/lEEJk.e-wĆSw/?zӎO3SZ2(% ~R*CTybH֏v|;Izz//+jJk3e'ֳkD(JKtVi$bƢ?=9zO_2լw2l=AY˛s/m%߹jFԝ+38wu,kMUK TKss\Xi0scvj=tBm+;2w_1+ʌ#@ϷNK\5Oe.VE!F9x޸iǗۡNеgEhFot>aaw*xjGƝj33>w0ۘy|L{ZMN=9T*wW wl+bv&H;cEIu]HH|ߙC=8r7E ea۾x=s֢>zz~&cR1/;Vw4m >ĤcCXXŮ&m>[s.¾c.O>dT*9\9K=v0, R5q##dg5e(˨uʧ"{%m]]kif+)hԪVNFg>}X|By;(ƶmkcϥKsw}.Vz={|eUcӢ>~GaA8lo?_|#?شmc@)={ןdtkW_D3JTQʶz_ UCrK .מx~=NI%+/yS(¢+lcд>Y>0fT˯rrIqǵ~jpNn·CѴV*.v}sF;f':3 MKI?޻ߧěvU7^XTQ+H⧅^{mڰO®NI?OuVGszpd+bycN2ӫbRrR*GmKFc>z4%6/Tgퟴ^iwW˺9#I] ')$x_ƹ&.g&#R1X9 lᜐ2O_Y*u%GZՒm{j_nV6a8ZEӷ֢xZ/,-Z7tZ lpA$zMi)7}97'S#iVI>\ӧQ19s4TykKhnQ_屎`ViZrR.XNW鱤6 9im;)ԭ*jZ^'񥟃Z 2#2WNO'ڶNSmu:0jWڿ_~]VyJkxdUbIcSNZ|ǚ`Qn'x2xgloIuoqcq gsn?#*J|%}]xy~&Ert]5wOu&XGHó׵pTqwwjV(4FU^>%\uG\ :d vʗW mN%(\*6$CjiJUo6~"]R)]t:%NvニvQ߻q*݇^JyB~J6]siȫX[V>ӚD匹)5e,&B?̣,=s"G)txʥkksݷ33rXWTOA}V%roJۧb3E<<[kmN*yV)h7>!UbO3H ,UK=-gu}hz?,<2+ ]ΫcۊphÃsIQKݎCv}=GgLXYXqOssSTosQS~g2kvr—!"Y9#o$מbU%-OSx~y=WN;x955qfiEnIly+Sثޯ{9DnsR 6$1-mD˱대8yxžy7=UA5֝$빥۵a &O^ZկsrJ5OMoC"6Ķ{ _QY=?.|q]NVe!`^755*T+j؉GwKm/j>[^rkJysIz;_NfI|+tܢnRIU̸ʟ09]WERԑd2eAkƥJk[Zd]ƈ-QJj5} Ik/#W҈EIY!իP_>ׅc5mX* -٬(.iNQDSZG[`9'n=yWV_JgW^N:k{ZMsd{ײSsR."¹jb"_Mw%5ފ>G_u 9dYfxH c ?,Q[v;*TJ5!g\ݙ"H|H3Fr88W -ňF ]]4)Dho܏ZIUi#Qc+zE(pF;s}pk"G*Q~wķ cg 1=OaNPos|ikFB7-*ś$g#q{ѷmi,5h}uͿ!Tuj)E^s{:jSvgnu+/r9[SENv{JSOE]LU۾_-,2q܎?^>Ys&姌u?|5ޖIxmYGRKf[ ̣q+m ovSN[Ke&uk^ܾ$a_&U#xbK<|*ҫ+TO@DzgtI,5MAD̑QA qz)9(/s|W6 #^fα~\03_c%˭v MF|'^o$8wv)҄}-f\=Gg$I "f;r0\~QY\4zH[s!8=x8k:oK4WMr Cyzn.V6+iH#ޕN,]풧Ȭ]l&OL cuv =3RIҩ-dm}|Zxu[ނQ_~3|O;iZ#6ހNy=y[U=NZz,X),9n5q|cR2B[uݍjHXJotk{gM^eyX 'zo(ǾﱜK_f5/͸P<Τ)h l;O{}H+{%3+me(pJRF8|4av縷g9-dN<2."yNJuUXR1E'xAneR 
+-e#-+J:vߟ唦gEʇo%BBPjJPև3Ԗ {y_93rF;'t^6CKZֺ2V˕nom~fbܕy^aR2Hʬnh5*#,d2QWe0絙VkysBZG$15*w4*xXI6a"o?֫[iy\nԑg2(,(XA돭ke6R<ܲӽ۫ra}ZS#XԢw@[&9<QN2Em咧jzA&*\*\KU?3e =;z(ksDv@5Vh5Fc};Kx΅vp`I#GױVg6*IE%ͪB,t Qgg)4K=L,c*sVU eݰƬv󃟧nGKmJd]uKbAL`c0=+<ӑbX&y!p09}Nkx1F1K$bћr9%pG#OݔwZRDr\Ps#*~Gn~XHfk |ꎳeX.o?Y*5#R_ѨZF,FahPmNSeJtE jӷJh]!2I  cMV_2)9N-ڳ*x^GRkYszВLn:Ԑ>!;S*g?AW JUWcMFZ3V oi$9 ɫO,?.}JqH֒ f«cs޺)1QUHhu*7QҷUg$u>&5ϻ+YVB3E k*Q}-Ͳ v;.Wh=>^krF5{kmn=([qq,f^ f'mC+ îӎ:W}ʤ]qVM4# GSN=pGֵ(i5kgܣpIUn9uF;Gwz\DUpj["Xed[#<@ulסo~VZ\BwVqJt',8p>AU*IE+%:{EIIJO6[GK/ws DeJ5wW%nafy2c>Ժz2 jZ=u i-l%I]~n+2OcNckiKHB{a3x+(ͷm_fTK~#\M攪F6ʼn$sִ}K}OFiS$в6a3rIgYR߉(oj6J{-?uzc$c*q.hNЖP{]ΗD-t"hǻQv}+9NWE奍+MuVi3+/^sqU(J^[ncSPIK]mƵjFa9=3^1OZ1;ܫM+G6*FQ}1]qH'auzđxhIk N[RTjImA$JS̓̌Hp{Vw")6ZͫHoR\/>HcR8Jя%ұN4jG߸} $eW{<;VOTeRo"idSѱq |8Rƍ6*6wt*tR5{0/;\㎴v.^{wO6f\LtcfcNԻ~=Q`ò99mEr>=S)@+ 5N/y=Z,Oʯ4Ig<]j1)QEPT@-s}MvFͨԽ+-/)EHffP~f2z}+y>z+K2"}C]\ɽb[Ɇ9!*+F{pGr vB0G2x۬‚=߭km͝rQyhIc}YC=緵k rR*4k*^l\1a¶INnG1rOW;%;G~A(TII>NJill73ns^)r[C\?/j]oYӭdIG%3S,A9OB>SDS&__8]{3+-ծ!fI0eǶMLhsԴ]jiR';7i#}Fۏ81Y#ds2]2XJ54rӢG;2{m#VR <7qqgF2nyV[}z_y(R~+r\1,sp(ZFjS}yƌmRرehYX>ӓiY'*ߙx&TrYg<=:˧Sl,bcN7GݫW$*J3P;/~r+zub*v9kw.pƜzvlj܈>|Sj=ccQ_܏huz+1ɹr@\߮ktE{J7gahX,LHY㞘lיi9ZӔ+W};gu|ofmȥ|S O N0r׽^5aR!?f0ہ d5_VXi/#277x᮶:' X v~#%ϢWQ++]/Tlqa{c):|ר?l{[]WKI渚2W&7^bs[z%؟e$rnz׆L~-+O+9~jۻZg=m{ J])~άvQm ?^׶+Yy?kP:Ӕcg}_04yXG$HNX'*2Jm~d_LMw(I ,qYI Ӝ+ɝUjwkqop aZ4'uB%ZhnxgW|5ϗ4l~B$~{T凧:jѽ8iwP|c0#[v]֔hӍIVboI$$$ILt}9Fn[\Τ\tc#9Qz]n;:CnQQkq^/4I%Ֆ)VG¼7q|dRU9)Y%*5$8xO-u qL%*{W+4?vZGOO#B;{;m5SNOdcQ(iN'CTBſ{U2#*y#z48n8 6ڿ[FsFrKEn/e͚kuc4b䚕ݭqE/T  FϡjGN:ҎHIu>Ҝlﻹp}^W656xR^>#'N[zԠzZHY][I&C}N:|#+ )=ֿ֧|PO[.]M3|3KV{-lHdu9c>ǃׯ)%y 1FJ]ϱsþ#GY%w˰;dsӞ:}z0\G73v])֜A|xFk]шo0s?IF?#x[r;'wѬu pe^8=px!-=LiRRbߑ>Eۈu.|;QFrHQ*4uǵEVj_y-S& Ɩ9MLbp㜆=q+JY=|j2)JWmzKh$2q'#$*I+hk[NeZ.wGiY-t|˹J$cXrZ<4ۦu=l0u$F6lpzS[_|Y;I(B;7؞YYTAq4=j5=:Kii33intH=zqӕ9IsO#Ԍ]SW֍%Yq!R>VR8'' :WE8Gr%%spE&IlolkN\dtF^:]L +n6=澁t&U]|-LJ=n(ˎWn}Z5=Zu*8T,TDw}.AУϱ ˹U]ҠXs'\IہWL+JOih1u_Aՙn#uw FxO [X|L$]s"O>hdVG2q@6yz1nlTcV\Oy ak_|._ xN&hv$w[N+4?Z8aPfނ<#r;cz=JrIwO)צO-wK,W^Y 3vӮӏʚ{v-sU(=Vs4,}=kɤK#8aǂJgc,.lDn~GxRTW%n.V2˓@\{<*rm>GQNܥ75{Sdh[\ @=y9>VJuteZ[{r~#۶H䙛I 08 `-UZ=Q38*峵ʾ㾓At@ڋG&<ـ9œq 4bJ7}_$6kn?NwGmy,V 3rUz^JtuqiO+^'^el|qgo¢S~ 94(xzvb)ݞg\ukjtisajՏO֧M6u1FH$؇iS3^v&JwgWuqu7OeФ1c;AUP>@>7^kXmWME۽?UƯ-3.Eٰ1*GQ6uj8U0S iB{ѣP>i!8<}q]إMy#.6q4:MXNƦK_5:![D%$WZrBGnNrzaܲR|ȥwӿ#kX{}z1a(U1\nai9w< +N ۍhuak^~?!iZ"L+<RxwqXg'.fY>2RW}^=RW}:u$+boKNG|"l[JMr':ձҒzZ5+YPV73"H%ORr@'x|==Ogq6mĞ(~Ʃơo,\ xn|.GӭMs|l"`NvZΕ5+5kTJUI5 PHdU\ͼqr9׷#>ߏ׌V[9S:M +lbF70̷Bڟ1c)AE{ݗ*]cf^D  R 1 `Ml6A c-8؅9]4wシfZF)t8_>\Z jôL NqnFym8;=ZNIFRmW@co/~j<`=БϯMa{H54w]s}bz'}kC.\BorCϖ%'9֒RFVN;vݭoc RU!eeml}SŠmk{Awl[=?w''zW m9# 1ђ}vhn߯S3,DeRi%[O[y=]7?Moq4cUr=`;StU]Z-ѯ9I8;uVVz|t~.\xƒÞ7-V wȘLt8lpx(I.z/DqfLj-uJֵ&#?SS/vN6MNIAhO~Z2`D"T=Zj5w~g!Qq7Asgmi-W4 Cs-®HRy~hr;e*j{eʳmy?OzpS;f6݄0픍l$dӜf+Tz;E4kiҮv00;tRZG1%C)+ ;cc ?yS7}^yi><|.#P\?Z2)xFPzZ4Q//.;|TxcyQE8IZTiSёNM6,Kʄ|UQs5gO5JK#\}("oOoOg7Np%e 6K<m&fTr60N~cOԌƟ.1F/xb:'WeHc$`;q~j.ZݢH/Cq۷=kF.ڲ#κh9 ,1>b#z)ɪo_ Ȉ+H{[*[o*C,4q*8\vs׿^4"};owqU6Zw zKa*wb*sǞ>3]Ix'#Ӫ.uxkr\D6#,˷pP?3 sSGyQjVt܅ I#z뎇YniJ6}vĖS̎NZYH>?ҸjsFoNn7I4beN  Iy9ΣRsgkϜ6XwI3(T*>YBO"ɥ\yVYd˄v8 ddwri8Ч'*z: xO9pwÂΤ9i1QQXK}]ס2L醛k&-ԌzW iFQ|IG)j}^ydG+IēNI+hưq殮_hI:ܛ_ݺ =GAk㈏1qku̱C RE οxNND%BTDfem0 :,9%FJu6A.E?Gޫ;)sV^EHT}1gn$p7Z䨣)8?9:ҳz˯,t#'k> N5(ƞ0J~~N1D: 9ʳ <*{Jjˑ^ݼƝkj,*͹c+0qr ҹFQW.iMY7ol6[ĿL|7`d5Z0%;j~L> ?#FafV$<ێ+D%КkNqnw=ՌJzQTyZz/(.Ŋ1.&ta\r*_gRiފr$[$О:{-^hLIMܭr VR62? 
O{*}EsF)=;T1ģjI ~7Nc)ԏ43ϕe7`0Boy/)T+r$K8Y$AFxiN<)$ʖs;\Z9ng*~IX\,걄_.GS6NG1ֳc$FY^ >I"+a0=G޴U%-֨Z eIfJ+\d}k1"O'[ߕ# v]U޴Ƭ}=#|Ȭd+#s§Z^w"trbXFH8>b"ner02 I)zd2tFMt^}EG}⼚X{eJ0fچI_mtFz" #sҩ+>`_RՐqߏ%$ЩT'mZ5WdJRcΪrJ6} QuOsYkS[zyv,SZ*Ƣ=+9dW=Z q1UFޟh&psgYS)VvhւU<3nZ¤'<7ջuίFݡVLC"'8$ d 6yv9sJ߅dsS暶iF+XxVI(դV3ך*B}z,,cN7ߙl0KylF8#玟_ez6}Xrb#R $6\p9N+);߸>j1Vzu=Lk 7*g2/Q+ogR-[e̔=gxGjF=Yψ{JZu"2FҮ$ p:s]{Rѩf/"󼶃oQ`0=kS叺a\-[]O/odeH ؼAYҧNvZNHRߟڐRyv壌trץwճ7$8]$c]z:*\GT8*0ƻNހ=Lc(ʮ%%mm. 3y :Ȼp6A=S'RnƱio4\G ʻd9:猥3_mZ\oM}㯩ZUNS92F!uV?1sϙVgD!Rwb˾UfTHǮq1 T6լI< )ն:6q5P}H&r2յ˖`=,b:tObe +錒b<ڝS}zci[>:*D"^r_ppn2;u's2^sۥUIb/4^R nrIS,cyk=xLחR'J˶r!Ʊ!Olcfw'e'NQz߶ y֯t#:V1ֲCSF?\mIg=LռLUO8=,K,[$yWΐ*B9< +V¹8&Zxl\]4*78ǧ5<ՊKMQ#AAO+3W(FOZ/s5}tv ,~hLn9\doo#͍(7eao4ʭ#$wtܹף׷UPI㷢Z~a6p?Ê44Rl!QtҧރX _;w=OJs֭}?R.-tMT+#6̴qӌu5J̒ tהb+^̎tu܊)?wQs:5cqRm[ubNUv!Y"# 2Uq\m ңHe.燊^Ds—m[gXb*C 7ybũ[ddL-mcfQ2]ʹR+oRkQF59ԷE¬5ڼROrSixW/w%#/ݷfLe48A)}kZ7rsbLIjurU9j159㪒őܭ?7^GJoq/2iT- 5*ͻp:TWu:(ՍLCQWVee"P,9=G? Jr5o-t]XY$,INdRocԩneV6[vB=eo1N%"}blCB\pp> ҕ>h{;,{v Fef@iJiFLQVب'ؤEY8y9׶VZK(Ub>l~tFc(N7IeSrGS? ךj):˷ȤȲ8 UFU)֟.j牾c;731MTjCjpԣqepV7ƬwFLzgkGJjF4ܼXy$@ 895=CЭ(rI2)T,wm;k8~㒷56^Ӯ@+-CYO;EQwO[[X!f6:gnp>~k)^dөR7wл lś,oV8#NiܘYZб0˷ £8P2NN:gF+Zҋ_2&+ry)&}cX= ;+jkC6nKX}:C04vɕijmFPmm**68?α}W+(1iwDdTi ;xFOPO)G]/gv4 "qs׌fE"is-KV-v*,i\tET&TK]ݝ^@xtyTZaN.|U[2FL3` QQSrjKkWؑ&!;&I'}ogcK[ 'VEܫʪ݃vdF4!k4D̻Q1q[Zt#.VF".iyVܼDӣFRyJIpY'?aEu%ڒ]<C l~}'* SaUTrpG'ZӂTsӞ"<7VɖUYWlA~ gjQ%+s:xVi,؎9ϭi&ԧSz[_SyGm,Ha#0EAtVw3J%5t-WiZk*ʪG8/'cmZeW,0Dt73JV0ip8=Q{G,jIԣ}^J~ΦURQF#q E9m1%w{ٯEDjPHsqq2kzt%af#VRu%uŐ=͓_jBfm )j:if$,|һ)Ɲ9'=ѕJvmGwt{q Jї_iv~eŠw]xEJVX=L=^VuG݇yss^, RN^D9rF9{]wv㑌s]9Vz.Fu)VUzm${ ڙdpOj֥>Yrz8YCIZĞ[nw3|۹mORиӄy>}:{%OCꮪJi]~Dv.Ǚ8.8{EGj7Fee3=L4ps# +Z=bNUiSZ%v7hHae U@LjRz2XU_5Đ\={eUds׆jϖ0xRRO,ۈ&[U.EVT/O7ņx?tTv>&\MF %wG2`#'iPM6Vʸ*Ќ~UX[4>}sB:ǻq\^\1R s6Oo\]:UbCwL̝7ʤᯧ]Tc@=qZ<%9Sk?ԷTsWQ2(Ǧ=&/-qh]:f&foݶߔCӡ:۩+Sci_+uToEnF}N+Wc7F1x*O,G,u7rxXƲIHngJ^wmUU8ht4)splwQߎd=Ju)=^߃Mo5ݴeT+Bڗeg7G$ #ۊ7&4 %Hc=>g.iomI[WtwRN*R.-")ԄɥZkl$wbt>;)++i}yrߕ5#*o67,)e*+(=]=9ُidX؜uj %/y{([! |?p`g-i8JgѓZ\1eުe)s]99SܻjXsDTʙe-b!NY6˖Wvfаh@ˁz^}8s`kiuarGn%`Y!`:-i#MZ w6q󯮣M(J8z\[BEso2v2?)ې7+n^W#k+{Fe[\mbrC SdR켼$ KnbFI?ZOw J_eXƼab$ʳ1m[q8MJ\޼LBm s+bh^Ɣ*rtٜ]o+3G#|[i=*)A]}Sjdi"Im9ۜ{}*yJ1|"[fR* &K-A~Sdt'# ui 3M!MCTe7H[41{,Bml13y*f6esvS?\Җ")KaZVvU\4FbNO<qI^է 8UczvUSmKN[8ə3_yw3F ,1F=Fcuݜt􅹛%L,x1G[^&o7h[eiٕ_c,|qߑӵy2K2Z\Xi ťaz=zsSis_b~ #7QǧNg$;T/St i<$RDd zZ[mvQTԜ?iKg)gQnpc]<;tSNQyPcQ ԒWSZ~5r>2V]F@$>ۛ:BМ.Ta0}~0 ¤!4lj}Q:E""I>Os;NXc+^")F>뾶n:yw,Y,Κt)kNުϾM91cJ9'9|cZ>m]/թӟϵS/^O+H2mWϧE`Kzt`d__#Kn+"$SrKg kը'O[_ylU6nUe \`sǮ{>7$cXiR}w&+yLk-ă=p=\գ$5#8.37ZNdY&i C\瞔i][_xIߧ][i1RO1amFcNxzz9eY[*\vvl5 Şaa"i#QF0:~5R>U՜)ҕ(v]fX/39lcqۡǷjeYǖK0qi6ڶ,33Mn apJ*Rhk}]'ħ>wq}OUY^瞝+Ѩz<\EF,%mv-퟉nvh*ZOI ssѩ;|OǷ1GE)r>ɷEi<aڿ~Oq}Z;s+?-zq5R~u7E[GtЪ=x؂V0aTEJN=Cu0 `=ADZHՕJo?CKkeŢ۵l&6i/p:vEI?c+5˽wvVPA K]/̝9ּL\CQ妞WIj\y`* ~ň:xu$Cz&۹k͕d:*-3&&Ig?ҶG܍HiVLYt>M&?3^v2 Q\J^1FIQ өԩ*m*F6f`Xy?wF9~zyxzZdrZD.iEFzF_P hey:|VLq:qrjfɤmP~-;]c18{+!V'nxk^ndU]<jyjҕo i~b1|ͳ41H=K i$W^FG+{߇I 'ϻ,H'z*觌正sb1P.vҾKiM,rF$knF;㎽ңZfy1K+z>$ÚnǬ^X;Y0)e8 c.xsWtQ+5J$*k~ g4 ;U!8X!QuVR|#m߼t@E8Q5ߕ֧[Ck!9_I#Hۿo:ԕ;{>m֏Z+.,#: nb߳OՌmv? 
RPnt\eFm,_*xYQk/.jV祶sOzLXGCg8t#q2R6qԜ*?: ]n"Va6,BYqӭp~vƍGDaQkgh_OeYmy{דR3z\'SHH-{&_-@eϦ:w+h=}L[+|q}e7[rNdҾg4֍֨ [=?O~K-0xHAb3d*' 7ta)ӷrȯ&R!6%cъ6xדK {gU)u:Vӌy]޻fż,rvޖIRiI-RV-cN7ڭHy_'l9z9./ N7Oޟ2_ 5THIve&Xd}~841U.bc^u??!|CG6rC$+  cFaZ>-e}پN.4w~ş hwl2*DQC.y㞕?#~=Q|kv:JPK u<9_p4hN1gdihL<#URS6\i;r1WG}[ziv\GAao0}r+jp\Ҍi˗4[$b#fI^*|Ɵr}hg -2v#7͌8ygJ\ݿ2ZTlfV}Ƒ.]"6D+ױQHVΪx\^9=G\G\FٓvHpI}э9s]+-nU8Գj\\Ir;dv5 0A<{wR}90ݼ=n`mCs4xQn&Nc(6Oᗹ?SoͨXށru;it#stgZtKJ`GRןQ\pT~+|Vo*Vlz?x8=.]*agRWO7LNۿtc$}6@梄UZܶerЧ*չc<{ſV=-pFOLEL~<潘eQ˿S0.k+{i9o>*c k13}R~BF:1(cdRVzK NO. w:|n!ګ RV 8#|I{>'T~ݚ1P)Q:忕uk2g2C$Y]Hee uu=)qp|ݷ(;ςYFZ/&nFz@9=kQ{9IZ++YY$CӡO*Pj4{4/;`?]UcdYS+ćOlh̓V 2ZzD7Sr׊aMUKqNߖZot8A2weW<-9USY`劧/3sB>@e]񌽛nLc+~h7_nR}8O)R}XGCԼ=mLҡgS!OR2sҼlv241N~7mYg6+E-s{~_ʺ)b\W{nwo[S[k.#vG]J}~f%+(t}AҕLMk>?;TPotxRW.~ƯSs-qaG |Mt=JxZx:Q%?]m{Cww:#1+"r@A''#X0Y/tt󵯿>k^ h59专Xa |st>?[I%['Iߡ[u!r>U^7Z,Nc9IdzM8ۓUZhޛ7sQKUUQYg'ߥn{TMb/ey'"LzuQKJu(I01 /Jҝ9ƛYԕӹ' ojFT6мn.F)T3J6ב}d%+HO+׊J.-*oNr-5e𕐎E : R9GViN{$Vm1E5qVG3m6Sk}۹ѥ¦᷸C.V2q߁IϚ*:};iJŭ"u[?E vM7I45GCVki,G\E$ܴѳn,FUb)TUb:+G:ZtzAkk4V,pnSut_{O2k[~aKkW=~:𦰢kVḦ09ߚ"+vaVH|OеnlZIoa.)=#х?cӧ/u-$vp,E  dHyѦFPi*m\yYLbm>0(}ORTq1)*Jpc W\^k,I?w۰H9ֹ%(ߧwVJIǕ[ulP[Ҹc)FusJ0{+2#ps'e._S:xҏ+iqAh\yFeHu=q]zCKw{< <d*==xCBs9{xl"]FV=78+ӔOFg[yJ%cnj?+TʕeVTikHl϶kjQ5F2u+YrҪƪ?zs\:)#(TJXTGcV٘aB<`{ڰ^mt;a/|->l:D͟ʻykSJu*Qa^ϺW OrOrqszsv9#Z$}nHpʆY6HwuoFճXySIGH'*ܳ9'WciJrˆn$'?CӦK8KixI3};ִe)SqH*؆k?stǩ)E{V]=Q?\qj.Oo.+,nBbqʵQѳUb$B*r;}޶ykc}X"@ܧ*{~VGcz^닃ѭ_B5K77R;iNoKDʣVY=*Eɍ9** ߉5}Je;C_166oz/֔򹤩֩hFR<\Hw #r+URQn^ڤqWw}.U LjN초%)VvܫvM%,Xn}Yԫe3\*Wkm.qО ڍj(IT¸GY6dg2"NA+寘cp?٭ӗ4vv(K6ۧ˞q].K_I>j2NBc>mďZҦ*\׹ӌT-VMy"Hs>Ɛs=![NO FonVa}hܶzlZSjNXWaniarX݁R}{F;comE7"ӥkmɵc61V15cRkKBb,1+t쿕kZ[JuE{ecjwqDjkQݛ*.qߥ}.0WtEZd6S/Oƹt;0jD2 e#RݞU+^SllrzJ6LTDѪ|dcڹiel=%as XQGL Ok8Ы$r$E{|Uf;y 'ְR57"F,d2rWVq_/1ܛfݱ[ְKZQ~30єjMrAy̍!UdNzrGZ\s0V.imQ{%vFiSMq5*NkUTX%qa>W4JNWƔFe:Im -nxHBW{γe;70,s?m{/\bSW9%i e<t{T*r.ZqooQfBfX9ZF+ԚjV v6Us].kt&ڍeu/aݼr03gSzr:iӵ_u;Xwɀ#u%)K]+_FSgw-ӵvSu)G2׿N`m] srWLiZS䗵܆M^8 2(YIzrO7R7u_!˶Y)ePvorRZFcQ+id,HG7t5MsMl~DNum%K[/Qӧ GӱE+k -dYWqs]T\e+SjQON^էxn9?m~K;~=|HC]9x}U5ud2\/|ŐsIE9BGay?-iC1 d cL,֦j.9Tc8qw#҈( jjRC^\[<Ed;jI^ˬ7'df b0JҪZhi'uk6Ў[HЄ3|a#~ǎ]H@xsEۿkW1YEXrq$ w3W%lvʮkM=m;G0eϟ9MIK)cj6FTdB ¦@$O:3m -3CqHgc1Q|bcF5{]Yvwf|n$_'I9n9jxx|==I:Z_繥iA&ۙ0>sҚRwgu5Rtl1}]9MVJA9Lc{9J׺;(ʝs/Co|AC%lfɑaAݰ[|ѫC#(+k>f&@UO'zzuaԝU8o;ƶH^alUTz֒N.e~gZ.9xle)@F6޴SuݞQ|ZukG7_.bPr[$@¢B.U4.8ëA쯫=;þ$K/$'ޚE>X?SҺpuF=XDc.iu~d~6y.a%I$sH׷)>mڵ: zsm~*k-.˷|rGG5ZtJjңk1Q^O4`iI|~`~?N.5#={Ƶ7[ N8 Ǖb6gQU]z4fU+;2O;##iG\_]`rZ~Go᥽GF+y+#?)^5ڪ^#x,GE{ǣ%HQ#,ʕr} sڽ<:npx~m*asc~X4= Uvcq*zNn#S:U*5QSߑ[ {s x!-;4̏o>gʛk4|#,|ΧBa[[~xU}ӱkɯ´I^O ZVzG¯y$]݇L O?SBR:jtu_z"a-ux􈥋u_:?u>H~)aܤ]`~U#[V/[ZY#fwuyi~b(ʟR_%{feU`U  $=VjZ$2\ֽ,QFܫ [>5OT[j!ϚUt'?Znt=9T\7k? tw6-Wl0 O5BIlwNT`owDu(Vhē2N|܆b[!\sN_[?Of{4P{}ڭO9I.nVo&i#)Eeד =1f :vpXwiu9 |W])'X<4ܡG^qӮ9XSOMtrb9wO޾z߯uukmLrBgRDqC1j u#ͣu#'$z}E-ֳFn$+lG? 
qo:hs׭;Krd]YDA=TN1WV{Q{7E-yg]BIloezsOOo]SqtbN[Lƺlj|)<3\Ka/K =dBE̞|{KOZz4B2ݫ3 d]<>iEx䛷7NqI&\̒,&Bdsⵣ}Z]ўW>~[>{HQ%IR4n8Nå|3[QWgFRzc|NݳnBY*H9 zqJJ8qK.kS<7pǫK0y\0ʷ<8N 0oq+_M:q').]FiVwcyŹY|X]p2pAv{',R.$Hnk_ Z1uZmIMxG]wK]D89z㒽Fy-JPvǧMRvN)u|[M4Ͷ=ĂYpd!WڻOfA:quL=VӦ+62s-7#3zxٗ C]~P8 #3/SV۱%Q9I)-/ݏHǍo- t8i'i;G#G$t3)ѧ,CMǜ޳-q3d|iXIߧ` om_-SpΊi3zgһTTGw1MN>G;5 2WUcb 63z%:)%My|G-fu?_C|Tt}6I&0\y'g̻X\qk[tOMӤIcwӷ5Su*zh|콍lBKSnZMUqږ"w);vvF4-#\=56?ʹ tR;J)ԗ2Ts~Gtҧd%k:mgF|3Q40[޾Cմ*ޔHѕzOw߅R5cUgu WTX&EAӊ-J1j[LU%Ku~g_nuK;MKgkUCX FrH=4RZ4!cpzsk=_-׬{G::"g#.<0ּi>jnI]?{J5ɮ꬏_ꚕf&Kq8Si# 8m15uM$ܟLD*JӲ=᧋5;4/~u=syM#'\ĶBug*ݴb(ErkW=|.9ƞᄒypF,K RN_(ʷ,;Tv~<'Ɵ`׵׊I9 OPk(F4K[z_Zp]:5&%k#9Pvdu3ؼGb''ߩχ;h+y2g(J6N/R+N[85u5ZonWEIjhdatnyЦk=N\Vmi~Mߋè[יc f3p $| 8 [Gcx>0SR{>Z|,S5Xo+p2 `u;~˗K^{o蒕5V}R^Z?AҴ'Te[ A֣)SrvZ-u31\c=Nz 7hTެ$ck):|ߵgE5*Sj#%~EoYamr`Ȉ$q@$~ƍI.#|v80R߳:Yo C $uUK$$x9kաMY~^K2> V2\}__Z~nHa{kSIX+T֗TnUd>zӶ[/;jz)vZޙv}Mhc!ʳ+*A9}A&6j8q5()ԥJo+nGa/. *_n`voAy,.!Jn#SIn{?h$of[w;r ڠ|O%3նmFuzγlDp՞g}6>ӵ[@' /U>5ѕK-7x_XPy[l ۘ܎z~u,;+EMH]mhf-8#\c=r Fb}K[^\ۣRy# ÜcNMW~Zz/"nyW_:"iNW}V2('>=니ٝ+cnz*ۙ|=s[GЌLˆ̦Q0bxlg֢*=Lde˵ @/ڨorJ>MEi~o [[,6wF=1` k2byn]H`'}Yp8}yc^}H((X _6Ff$zEfӕtRz-#3[S|ZH| %\F$oF'88ڊsrKbYr-aOZ)T!++tgtk(^;ƷBǴU,u}x9#ZIjٵ>gH.mԖ "g`QFBfq58Sn;R~Λiiԯs 2CMՕlJznmz͸F9p~*"m4\lIcU|^k+EUd۲ȳG$w.ܝMgXΣ#J~ҒQ#WwWk繡l$BMp'#?qcD3]~UUF̛adUUQ]t'rU:o yeh$_Sg$ս8ӍK-4E] Mf=xH#Dtӌ"*J9,kGy'?Lw]\eҒՕ.-)'hLYh =)ZeW3} 7,6EmOg泌eRzѩI\8^GLBxӒ =J{2QNews+Vu*ZO1T,3¯ӷmж1ٱsSJ*-[ɚv0ۅݵTzVܜ\ڜrͨΧIHluϵsʜ'~4qr'KZey`s$2'#8#ҔFFsUW*KwfĐ9ZusvzU%:IQCKv1êP>r3׵v**4t{mgեhXh.e p,}[s`< J+SRUmm t,*jUFr3܎qөu{]=װV5NO}/KzW۱*u&Mwd:݇XBdf9QJKM4:*Z-5I i?u_Wz\AZ[Tg$:F6r$$U-HzWB)GSYFhߕ-5'JV ۀ:gTٳ `g~QQ6'taU(rFaRobلo!RrÓoccFMsn)S h/Y]Ž(Tow2Ê\/VUE:XIqL=F{rƛO 4u#Y"x.$j\*LEc' 2e/{VKeqlᛧ2 uk[<{6HU'qR.^[jgaI(uW.ƫ#rrw`dJU A)SZ[K#0OY=52$Zd{Jek]v.XN6꼶V\sn{[^+Fw깕*Sm-0id L2OScvYXGD儜Q‘.Keo=MiFw*R,^=֫iIuvY0OIkWZP+خnX_3_Y0@,gsSs1SZuMing ԍU=m+j%´r|ͼH!de6zw#ܴ~˗_S.IHc,:ҳ.o᫧(Z=Hn(RUs#bDǯk?J4>Fu4zdGQNd;M=<8_ШcSV{Q9`a('W'$X^2wt^w(jr[HC]p3ϵy=*Sůh'hdRقo nkR'^DQO;ϰK R7ij+#tPC׊RKo3" ̝/c6 㞬1X*Qd]lTh9-OԹ q+D~f\kGsM֞/^rLp9Ev[/\=I9]%~JvG|Io}+jx1h?66,REa2`#w$ OJ~Qo :rn>hK erd\)<sD%[o:0窬߿M;]62V 6LޝuUʕKEv%*w[*ȁG@d}/mH֕JΕi{ͯ$Av>s%!#88=}=`y__>s)a)+_%̓FV5F;,'9WZcK{JI~7Msnϖx'x9*urrqo̡evbGp*nzslEx?cI$ZNKIΠG֪8Կ> rTe?z[^ 8oG3ʟ̍z1ZR-KB$a$Xnh12!^rz28Y֩Iivf8֩Dֻ?}A24ߺa#kdvsT.> H˚~zylOvYc@9Q=^ktkQ\}o9qOҊqlʝ;_"!5.fw?ii֣xc#F)Wo{,Fnn; Tק`9UgxgTHocΑj>X$ e9*JGZY4nenTu^ZTYjvJaMNUԐ|7FF@ל2VϿ'XtcV~cLQeȩ~*Q>7Q4nf[F|4kF,R*mHжӡzk>D+֎ E_#]Xm݂d^{WgDkF)i_Rqn0ӧ)UURe;.VXo,GFu^KW஗Bdʖ;?vr0kYTcIϕZR?-;hՌ*B3UJQzMzqIe1$uusA7)C5+\%a#QNzӌaQm񬅌2#k|>@8(Z9(ew7rIㅅWt=;hgm_K[,,ۖm*jә9iÎK$u`śJ$rҔrx+i`i-<gN5uPKw+Q}+U9]o3RU-ec[DdHOp1~)| ֦Խg0GkFLk_eDcw/j͏ɖbʇ.3qQ}䕺,<o4i{ 2I_1X/-HZэkҶK~ηO~2J1BI]{ًqw*i{USQ]G0Dfe-QN>Ogfi$ Q<mSasy3O~ \C_oCN^K2򼗮LX6Zq*ѩN*zy2r$60xV~I#_FZ%x&R_̏1OrK]]>[%4\b~8{>Uix}b9uXĠUa׷oƳ>m J.2FY$\r?y/i-Ԯ (>$ sӿnW6:Q[Mٯa5\9iZ6Sۯ).Rj-$},]+ȦMzQtan[NZ-mGgl"Ffg'rBMֽԆglˌ F?(#m: }z(3Y _f ?hG{CjQ(ٴb, $k|p +)>KS,5Jm_mU43~g9ssKT'GbΤ8UbR:ڍ>_ur֨iFԆQbHŔkhF.M#S0F;+6]>5(ԣ%oSISZȾ3EIv#N1RoZB+ʫX]Os]/ }b/)E?ę],N2wԈ,6s\+/L0<Х_V7R*;]23tTbpT*ݒسmp'v=뎵9:nw4VL3m=V +;)=Q *+:r0 ye& LrBƠѪʻJ9#Vis͕J;CU$#Ws~yY8:v[!\&_+D(!H]Q8ԩ5fڏRQ[s_Oҽ,,[*A֍KIhn/̫UUu:KMk##w@OUpɥv4FPU.-|Ld2—2~ ۡm^⺰ʝH?=l*N:rl28|vȃv_nդF2rKzqRN~dv7E?٢tlY[Ώ;=G8?iV֗Bqjm#y|L>tc$|ySFH{/9]9qVq3L";62I`o7[#g^[[Z%[EY?GҌ>Uy0uRu?b)Ku7\`8cRRM J6N0o-nTI Bds&ioO^\ܯl<@bBï*VUi^{Ŗ5q7S޼V^i-]ӌGw{d [Ʃ1-u玘e-]ލt3ԖdMvco$3K7l{\I(V!pb۲*ܛÃګtJiNISfoƹe'cpK۬1*VV}*Td1-?h_V˦de$,q+U%kŘTN5fMq+lFd ~ޝrY3MCӭh,w3q3X:KcjTk-&tBhTο*UUxҮTvNKeCa+U9Ep09SvƟ7#uZ|q"4 Cw8N:gWuku.nXźGuKNlEIڟӿt6<[ˌ+){WZr>ʤiӧ=핒vƧ9<vsU kXeړI${IA9Z--ɭK$Z22G֥҅8RѨ7F9Y 
N~O%x֤O:k9=Ittu&RN$8ozn_CUN,{8Ǔ٭Z}IpБ>NIّg'gZ[ T  ySEs4.-#rˑѾRFyk쮒0|.sev@ZZ3^_aiDX*19ڭ0>\^SFJm߱$ybm7V8 Lg*zZ*]YhwIq*[z[Ⲕ\QrTrDm,m V=xsJ1&)Ծ2v2sd`{q\ܵ'f1QjsOЁn eds'OMR55+)Tk/Q\()&Mēlp>bs)GrJ*9'4G6g\2.lʯ]ەv}s瑌r0+K(k~'?6Oh^/#4rxt 85u'%zShHZ7cG=0qߧTk8íν6ަ,PT wF*ڝ\\նc /b۰sUYOnesh3{>h䘴X|oz ,vZU%9i- ێzNM}U9Jv>ŒcC̓q핓#9yviNN6#ܓ:;ZM3װ#?J%%OUWkƒ"_:u Q4mfFFA{K+)};$/ߗ#qisTF:wZ%1UJ>Nգ\w qYr˯7bO,l_ #O)]+J#nki|Y] ͞=Hyc{)- 'dL3/ݐ<~e;8{HqlvO+I*U*rEcb@}qm. 2-cydgqk&yQ743@ .-DX9؏\]R^wSμk;]4vP(U994ܻTҧ9izyo ^?,kV2Hc7Cd =F=yw*qWk[OS谸 rYz[K_sWu0yeߛ?̅N|({j<^!{;ܻ[# F `Hz>F..n|jvm^2Լ-GqbYVTdj[)A<Wf4yqN\׶_Lڃ㤢k]ku}zZCI%˹(m3'.8ʚO>r#t2ek A$i4lw+ \"֞CQԯ(]= nupVHes3irϝŰN@qUZ|ؚjh^ʟG~, L0W99cqCV\ߙ{H^̣bلW>aPhcqS%{%t1[dVyw$,q0v*@fyⱕW-5[EI]={ s\Fl{q{{cjeN==ϩJ,t;s'd!\' H<O91om/zh<s+-4sxYuiD7H`gSYK G٨;*|&}oda0}Qm+I&xDQ[iifd9:+M`ظNG؏bI-nwZqQw䏛8Q|ϕ wNĪ`1SrSW 5aR[ )>#@pOkYQY5Ժ2ROK꿮1i7H2 ~5*1rRN[QL~~u$^TT%t:'$U FHMy%oϩ5qQ©9I=lstJ%NPFp:wOgʝ~ZxKmVu9#)aцAsFr4[s[}0%ɻ: D4䑼Ywmݏg05F4Qk[~NɨCep*GnӑcF{bN')jKrE[mnFch&m!ܣ ;sܓu|Ob})a=zVvΜZz9~G jmiȻn89R<ǽRU9~aR9?A$K %± &3Rij}nEq[̯VIp4qeO0OtԔ;Zw[RKim{|)4Cso̅<`Y19^3gM6ץv<,ejK5-oSmUp '#4gDZ{[rRGrvx`XruI9KW4"m_zRnTޢ>~nҿ̦EI9j}#QUq~2xgd=#15SSBN :Q{Ksuu_n{pu;I W|'ӕMnv*T}$ܯ ݤo+6T0'''N|\ޛxn$#G+n+ ^O+ҩ6o$s)VZ%4z}<*BvƧ#=3_!O Q̬{x_b{b>쪓HW`= 9Mq8ҖO3UtC%[̐n2A`'kLtգ᫈鹫j,T-Ҭ{Yy?37~{ש*k}?S͍:Rjݭŕܓ(QyR85ʔnTm$&"efemȬF: +Z_=*4d׵[e]D^2LSIn6_t(z<[xM#DLQ:\}tFnխؿ=^>¾.%bh9߸%|,V#:v7sjRAm\:w7 dvVncnN[ :v]1ƹSJΉci1Uzw5GDhd~cY,9^`i*7Bzr o뱿vk®' gy9+bul>|;.g /3\(g~L"'9c*J*WudB"5З?>;@"YSN >>J7]KZmڇV7jXS ̙Togy#=Iۮ~_-n6#v]9?tYn-YF =zdjS[=]K3.]בz4xn5-Vݘ$߀zdִ'N?L.aݢgzW՝F<WqSpRp 97NRO>߆^ڵ|Ԟ^-[6mOz/oy3GT{ rױ<.WG MBJz쿯kTVdWSIi7-ġHհNq=8V])_nWʯ?jwu6o·esw|w%B[M$?rWNbO_WxӱDuJ_i mN%9emk J*>hr-Y^ͺF?3Wab"B^Zv~7>_7I#>bR*ZWjAm*bّ#2cT{2nTB2\A;;z+w5PqK[ꎉI={ pN!Sz믟qʵXyy-ucpFwe쎔N΍5ZسB8*tMg5C:1ֈb5؉`_=pbpTe ϦӖt:O x{[`<&a-hMI)(ӊܹědgbs9\ Ǧ8RGGJg{Gvc.VOh<ڰƇa^[';߭Nb$ӿt/4_Fסȡۙi2dH9'qt>c(7{Y_KUjVJW~ǍfUw `H sq^4w_y#*9E<{֖ͤ7y%*6v$gCRzoNq|}>u ?ZqXx*qt~֮D[L޺TFS}vk^|O'ܷ/ JMeݍ3{STʝ7 4iҌDI+~gJq)\'ey Hb95bo+EiUɸS^yȖEbq]ZjKAQ}^559GWEݻTt7'? S_JVQO[Xz/Լ.+э:qxhf˓Q+ WyRG}JTևztHjRXZќbN_ޥh*cAMpʝEȓ4X<[uSIr(rSHMCpc8a1^G'GUL,j{еm<|Ssm~z [D:#:r? {=$G YUJZXhLѲm#_y8Kč{gc=G?>h=.Gk9?2 8]x}Ҷe-hv)ŘE+X\x#<s9($tC9r$zaxmYX?*mZF9Vr.Y]P[` (`?ˊ5cJoc/u ׶n/ʹr~JU Ona$vdMm*y#XJj5iON鱨nno|khDb7\㢏/ZQሩZm:?M=?9deDVRNFךrʎݺu}Y:XR_Twzu|WV0ZFy2O;-89ҰE'i}Vޚ[|5M᠎kvXRFG8XߩFWom]xڌ dht3 F=yVu_\%R2kwsg6qI?)nr8'ֽ|jowv>)^-|[o;~&g':ݍYcP6uOֻ:ouXJ_2k>٠24vI#I%gר8XԊE-e,1U/n_{y}I0]BB8u܂TR!&eK-_gtݼGR9謯dK~vk}DRL(F۷,Xncq9pk~8{m? 
e-~Ed*˸ї<ߌ=+=yNb7WI/)ӏ_\qݕ)e#<.[n\1А=::JV9%J MtSA33iPM:ѧ&_35M!01ƹ]i7;t*oI?V_Joo s"r)>t*dDODאD]P܏S5ʝHVDNw(G7%FeBT/|gy5RZ/ԙڝ\]DƋE31Xz{QYc 9?]_?Tw^D|Z/:6}ʤ|u{H7zGRNg3ڻxN;=+/s0KA#3#M~5oW8QZ!cHFpXrURU8T5td32Fum?]V1_qztnHQˤ JڭNF~S*qJ=ĸy&@qs֔d;%R3 442bۿ^4e'&g:T%vu"V,BsvS9qiZi1cbæ!Fwu=e(ֺi]Q^xY)k1W72HeW*gtΎj7/nN%«Ƥa}1=?!]|ާU87kO5ӰG h۲9aUɻ m폦k~x?wUmKnfmpYn:1ӌu+^ǛY*܂mV;xdqq?o{J2^R׉&O1 ^wF;ַ(7;*(Ƥ[y-vyϼ3?ʧ2S!yc}J1I#fݷПl{fN^m4fޏTD)k<܂99h&q*\n#Z>Zn_+8S\εJЗrM!|3mk?exjic7evag:z֫hW*sYĶ^= ٳ\c+iQ8;f1[W#S99;S4ORHi?a}茚[:kQw`:cgݣ:BVX⹍?)bJQjV+厖B[VMe\}c?iFrWQVs϶x>gW#{] zU{c gqk^rpўdGs[$v=K䜁$7VNDw·F{5hcʙΕHV唾'wh$,P۳vmF_R{vpN5cynzi%r^Is!ʒ$䌓rj=tZNjvMhI!܌bL21SМg?ve/EUM>SOZ8ȬHKym@c|t=D:%^-M]?[ n|!/p>y=JNXƭ:RUmm-h,M#mp;^8:IƟ+;f6wv$4ю&Mz}^uenǯ]<ܞN3cYŒ%+^VjjgӒ(8Mn`pOƻmieʜF2k8Ye1nOlchcv*9*u.&Mqߴ'sW$Ϲ5%[nNO?ִYrrM[̧mCFx~N#$7>FW KcabەQ8ݻ=mNݚzu"^ϛ_Q=QG+jPK 2V9֟V/Vu}eefck y4kTzhE>#Ydl;?B3ֶi5QyjoŏSlI".G,ɧGբ,64vFPOs8]W,*.2/<6Ⱦ\`_V:v9YS䴢YDX(Ei8ȱxYY#e^*W^GعI%d&ȑl-Ď76 ǥTTo+nMm%1;Dܨ oJ2dKophՈP:ڢ*R([,om+EnKzLzbȔJWukOaM>I4[vJ$qb8қU#$*w饇]^B׾\m7ˠۉ 5WuSk{[i bk1`l {mqZITnթVC:}N[y *hut$Jrne !RQUF}:JBxCa x]/xU5Mg{jǽ2ʜ?0[F6^=vc;Ⱥ2 5 ᣌv}y+&VP'31-7!<9$kNZdB#_#k{6+`v0*^o}89+j-<[V8 30dOKn6xT7m]V8:^G3wᕅ$PO-pqu%Sk[¯-u zzsD42/8u\(^R|4MSF&Qr1$Ƥ9+SV<#'ːmq*Ec[ t}5.X5Nr;%1Mx%k9RmdQ泳nr}s\.7k_§_s{IҷK9o5J{2jZUnlvӧx#1+4lp=޹B\~C[kשYGNR{Ұ_wݞH{_s~x~+G>%[α8zK Ok<<v%R9JkGQcG|6mILq6NXnF}8FXxTLjx4I=IeMiqq͠.*>?u+Lc#H8=*' $r+Y5it)bƭ?ƴ+FM8'<k?jXtIXEefAQtSӔ<=+ΙO\ZfX̹ yy=*8ӓw^DkSNwyuC>#_ɮ|= +E򷘣2>Ns{+5)l ާ0Z:|Iywd%sJxkz4ptg{V]~L3"c݇^yiRѱVe$GkkLue3(e˓zdfɪǍ'NE\iKUC#X3Nu,㥷80z'[cOV5y;S Cpd'v@COJORG[E駗xo5ay̟ AϺ㿯z2y_^FS[_3k3gjC+hUs3\KK= +ajI^}#n2J֗}<zw2qI>VT%E(u8Iu;o7m\Balqߍ{Qjށ>3Wt5 0#E#?ZXRW~N3(rI{d[}*8ċcr8;ןFѭQ˷/'>F ~{q^~)R-OxBwmWvzW:>4K#)bJ̛M W6ʬ \O5cSRen=(֍VZsHŽj5%OB<2#h32|Uh$wHū[*ކ5%hs|%͂F̬w*c`q^5iRZ8Vc4C }>A ;Bp >R2toZ0yJ{XsԫGsy+סͮX(:{)$;pr7إn##qavAʝԥURUԬdn԰ 眓zE:m6zT%n]O]YgXdW}E8z5->k[]ʲ+fۇz1qDsTRj8)z//Smdu[,$fgҼ$8zR;mV+uKEœ6'$baei>,տ,[D|O#)i~dD$8yٶCQwş^_km=\m+!oPyAr$sgʚ{^`h[iϭgZ%u9?u~>GeI{9{VĚ||s*>oZ8+ǕBQvg?.< p+t|֟+dwP;.=BVZDүeu9:ˈʝnzqm=ScHՇ65%+0қu%c Kx*%$?w~9823[F1MD(Q'%O|<uo xuQq&f{~Mg#ĕ8B-y+]ב%强PT>bG܁dp}sx)ԍG8t;ߧ{-i67>-I73\q׎Ne*qBV"XY.[YڹEZ$VݛqS=su|W6~iJ%N7Vchz[˜UP|s0=(4W梁]Ua労pwm_kjzev4mo@MPc$arsyЅE<\ǩO OK_^ă~5Sk8bѨUr#2FqONɝ:?^25}5[CC+jy #`s0=3䤜W迫ZΥ;Qas*FǒЌu}SOڶ{]:8s\ڷzu4oq,.C*K 8JNyuM:m]Y]x4kY-ۮz:gzuqQH)µ}K1ڎ?W$mÖݮ<]*@ w'HrvZ?_Pr*˚,_uoNF}wgvfܡ2(d$ =,.4>v/k%Yh0uu hؗM%Ռw1aGh-p?ppGa #ȧUt_:eZ*t_֟SѼAK HPYڃ5PX*:`u9iR:rU;sϕMI}vW l hUUUө׻t&Jt$שxC:L;N>}ik5~Jt*ߵiMs4mǽgf!si A6y]MԤwnmmԿjOPkREe"hєt>>5N KiXrRm'nt߳Ǐ!|C3阛̌I# F{n:u@8xFu5w{;44Ԟֽ/s#6vxTxB#;@%W=rP)ƝENKaJIԝzy.ǀDecgտ#YKmx+$A~xJX֚yݭ[{U9.Xejw{}?g?o4tq QCs'F>nؒ=Oqoj^Wk?Fb8?dk}/wN9?R{ׇ'5 U$%eU*_= [5_깅*߂y88ӰnmMlEK1ϠA?=;Z{pm$?*JAc"WE5YWV?aeh֝4ߍT|G+Ɛp;@nb1!{toiGB gꬶ?#OKݝ͌7mh81bS des^V*.!Z$ﭭk߶gxZN+Xյmv]-}IZw?~?o΢>&3)@iel2DCsg\ǽi)TU)4t- lW# qZ|yꞭ4tX4mM$O'{gRdWR$u-+*=y^w倡Z[9SC15+0nӆ$gҷII),z44hss~3s|l{I^=*VcfZ0T2vDkIqxW*pG(2i;Ԋ[T2/ޒLyAӽpTvy+s/Bx<ʲ#` ~3sXIJN1֩MI=;\k{ӛrJ{Ƴt]$IU*k[-̄FBю1xWDpRn1eK|C:/g =Uh+F:+Tu_as}+IMy}fʝ}zxgU$ bUq?&zszstoi*N#-t}N-BKWa6UU2 ?ɭNiFђ]5 $1?'{ѭŴHv̩;=zFp_”cԝmv*"7ǿz1 TNmWu <>u]Ud!@0r9#:өf3o 趚~)x:rr sф+xJ> k}lW!yeONOs_Eў\sʾyUEi 70݌dq=*Tv}4Z}E FyiKZ2Ce5<(]n#ȕh+/;cm[τuH,Z8 d-9xlMu'bUcEݕHK~v\ymZ*lQT<,o.~zv]53Bͱ[lONb)r[e{ C6րGoqYƴta(n]H5D1[m?j;;wڕ4I]Ίvu!+z\`vCN2Ğ`D 60}f6I8ҳH%)r> 2왃,ۀm\z*ȣBYB_ϱaIQJmόqWҨ5e9s+d|2+}ʹz{u+~X迫H&i bβFѷ2zpzQ+ߦeV$HL?evYaӜu=)7^[z>c DՖ5;rOͶC}xl)>#O[9 2(#3(uL_ldp:gҼ^mxNTӋ3.viQAc|Yӕ9Cc)*Mj#XNqۧ^CsUU.BAoݮ=YC 'cjE#`;gRU)ښK3Vkj!&wtzz ^TK'z+DbSzRT6H1y;ްRjvF_Ԍ@go«+빅:ҥ&/XJwbI$Mzo䔽o~Dͷ8=?Zʵz㨣NGgnHHIYsb13׏γQ4ݽ59k(5uGS()1@c*O9=;N خ]nBɅ7dcctF*kќi'7g-P2obg3[*54VXN4˰hs3 Y =3:0j*ϭE/V 
3*ܴvHŶ~5tTU-̵F.a\sںeQ("Uy9n7>xEx 8u9BU%X|Q^|՚FpZ쿭 o576eViOα*Q*9hsxK<3ȿfn;5_Y%=Sz\m.i72NFp(h$tcϦYEO_wasG4ؒi~*@ֱD1<~=1줥b1K1yuc>"FVvr8cz?Xe{z%G) s>gn̎$A?1|^Oԕ(-l9)FMȂ4kKG$i8x+?Vrvֽ(MI$b+&hdF2o rFW%*7GnF CYWo"$% j NU)[xSЖY<.fJ/#+ 󌑂*c|XP殖hV[e$i,呟q"p9srp4^z8㊣*ϙ{խwjoU֧~^G@6p@'WYBpORQiΟaMk"6m2n=Gn~ҍ;OSgqNZnG2gVy>oF.6c:y;ٴfY$s6YQS]Nù6FOpbHYfVV)ds)ʷ;9^Ɲؒ6yoю}ERJŹ^PWGrx;ζF<Εc\T;i,ȖvavƤyq }j.h-kneomDwA76Brk*Tr~kyVU_%1uՇ5zխ-kh-"<^c&5y{HIUBr[tefa.;9pjI1VB2Y1pNzv(ٿu(t5^^C֊,^V2UKߎsҺ!kM(WneBBlhBLG$(S*IYte{-lAsm5y{,8>ST=ƕ)ק}s&bnޓʴN*i ٬0yE$?ȫy*]RnvӷZHfp둻4iK_װaC$Ӵ4!c Gqj%VVEL<=9}ַ4mobo̿ydttF(rX+H\+S{yFܯ^{9ϺV6=qֻ=64Ik]_k6;nc#XJI-/̛˶DʿYwZCMN4m~%. 00cpP7ξF />} mMs,k6jA3?*U#*` zt7eTͻyOB|Fu#{-. s,-+HzbN@H`խߘ$Pd&v8cK^F1iZ3\HqXՂ9?_^sZrH4[+mέׯ+8;{eʏtHߡhuotg)ݍܬ:s[kvSSz6k&ec HS˭ޥENRP;Rp<59F̫+FgrKD\rc=}z~ǔʥ>hۍR$X<76$c5a1ThA)leO'eB~\է*5&w)l0)c ޕqrS*qK7:Z<|6ߗֲSjSR9N3:s,vYA׃{{SFim?f`O'8$r\DVqVtTRzU5Ԭd,f=vƕpܲqYgٻ{Rc7W156nV8|v]7@89`G4QC5*2.L6!hp*cqgeR5FqûCwS-uh"/nCہIՓ0cS (az.8nA i=hiz8 M0DIKF_J2VT.W5vȭLXpRsssVI(֍j:~ݼxFy{ӿuYrJJ¥j(R:#Ux]n>l Zӓݳ1FIs7oԫ2YefܱN$_Oa[s_2;"T5EQi]Z=:cߧY{*~a9URqKa:6۹wV^3lCqx cUxWpkC*58Z]Q]%C6剒>>UvS"t{أkV)z{[b'$2-kď1]MgZRu9Z{3.K9#VXێ|U({J_vWV'HXmn'*j.4iVI"3Mpks^ҼZv%yt^xr1UAͷ0"3e_qJj²X]^{20wzBϑ-^:fuot9U^@=ryWU4= rJbCv&(m^fwn['$scۊG/]9QqRX/²VnO{w\oME)s9.Iq.eU78 TIBu+O}{Z==};c*u#ͪ*F2$sc5R.Ij*δ&Wy$zoZԅmc(ou۲ -q"y7˞ߧzMJd۲}AH FV A篡#=G,qMк}?m'ii%vQW$Is\cE=EZ" %!36@t9<>Ys1Sf݌ɸ]'%@+)•83N2{,ڸ6y8}SQӔ9VnXHTUO_G\dcj(nؙ l63?Z)K] NNؑVFM6ha*d>]빕KdQ sR^5)зy$#bWsg?:q-.:n-#^b;|N&1qjr"4G.fi,##;nrWl V7UrAw5FQjSer4N-ٕYTNO\8[i9"AjmheCqH's)J+K6(89JK凗s$plw@5N4]e6*bH 6݃sg_ӥU%wRhsv[Θdaǎ'[{+m6~V ԏ8&\ 9qM%64lXXmLa?茢DW/F[indscAbJrvG fk_К}L؀U_W?*DK]0Ӆzsz*hJ5eR4K2oUXUdۘj s҉FR3uR|^KhE$BC[zQU#-3E: %tfgfS"cN+*pNwSͭF[; ^Lo+2tݷ oόj0Cqq훔vzL[m@xf9g ?|~*M3ũdccn-ohۛBL~~ՔWWnZ؅ zǯ[g<9-BѼ˰XߎzgѰ)Q׽ϲN#N[4QNO #~PK u=zcGʑRn22_"k]:lQn~G6I#xt#W:2cRZ?_h2gp{׭K Zɧ{houhHHf H-ӷjS#.o)CzTn ZowޥχzNX푉 |מ[ rQ7wsd1|ҧ{uۻGSfߥCex----$GoeO UpT03W.Z5=Iop*TR=^]E-u^]cE[3zXS ?pGNՌ)N;~~+s+Z^.s_ jͨjƖFHȿx' N:}[ F7V}-oƶ>Ti~ۭRk }~x<\Ǻ*px++G@x8#^rNz4ЃW9DU[^#kϧFQ}K0tZ^~&hKqmx}z{)*ӓeX䩇b1MAU1o|w_6Mzqrsduԣ{-ҩ6b& W_ZJ2jNmJ9)5*9c +6>ꪎk<>&3(|%eFl{*c/#1bѨ8`~ cbҿK+cabRFz$RZF֌RS]4sH<1$ sR2nwU"x؞Lo'N0 k MKz=/8hs?cRVC c9)XoIJ:ozM-ob:5F|>9#12juvqT]ni_ v1wC?Ͻ|&ޞG+3Re&o3I9ݑ.򖽏S 3U!';*;ܜیvUZQuա:J]uDڼ K W3jƛRޗڅ7DlKШXVO$x鞝T|Ju5oBt[t'Į;sQ()jR,bޥ_ hRP@=S:rG58ܽwn/.w ?OƊxw8yNwmu3u(wj?yTIu9 Te*zE;)՜iǑW[ ?%t&>P5}d'}*IE'?GKC)$QSG~/ekį=qjZM`g~Q}R<{=L;7*wIuNŞ4o#rZ.4m!ؑ՗ 1V)(߾b)E(I[7׾' /'Lc^ڦu>])O# p/R%.׼[}-o;N|/RWٻNu>Kƭ*RG6SyjDf۹C3X|fTʵ־<\V*{ie.VO퉞F\,{pc׿\ҍn6_j&#fs6ml`i>),aRv$ G8pt[-_UiIZͯQowKoe{VEsʘ@1E}65#}]{wo,<].^hk[Vջv<Ŀ'Ica]!Hox"EN37ӮU0I+{i{տv4;H$V[wğ[澣s(ݻ5cʤy%v{v{kS&]Vo3T/%e@o}tӔ"ӪJvvRMRZUzt_xWαXLO&eG,qPrwe|yUxyY]wݥ]O'N0k/wuen4]4dS $ $pQ撶g~Ԡiy}߃UBRͨƖC7 rzAPN\F;~Y/M>xҩSvϓ?~x"Gcύbۘ!aEd@e5S(룿'5uӷ ťͻfb>\yIhy2 &nqg&5lrު2t¤i56< ?+p]XѨu6NZUԚqe}qUoկԟULڃ,~gr(7[tiׯGkn ZTEc%mxlUJ0YM ܡGBkjqI;sԣ(ǝko. 
{7ctSU09ΘЩJ0uFEǦG$*I ^j|0жVrefE`]O5M}*6w5n,b7ٕX۷NOzέ4'~4TT/V2ZO*L-oSFKXPDv4mƗRlm8eOqJĵ8-U˺実{Ocd}knhF\5k}cIrm3aʆ=Cğm3tzd\oƣN2g^1y+^$moWfEU8`c˼=!U&y0%SOwX4]Pzgi,4ptdG!1\nE]pkͯ[-z鸦}zIbGd<(sҫ RR4ʛеA69ׅk*MyKrq ;zw^MyNᇗN{o.0*tWF&غrqn^}y'*nEɭ*ϸ~c{XjjQ}CpE%}ob= ud^X !mSԊtq'#%^o^> E{Œ:{WIONmA͑c6_\Qy" ӥ%>Nj-*6s#.Z6$*AcJWvI~G-6TUOG^iC=Q,+@v:zE]ivð'ZDvI&j{GQ?ifLH67Z˞Yr4Ibd4ꕮQj$*Ե8\wEK{dTQ^j}.#W`lέwyRw ;cVn';Xҏ5>g_AIVLclыܽNgJhsr:+#g]4ZOKҴ"Ċ2r6,y_sF+CboEy{yj^[v]ԧ;m2Xa%XU^8#W5H{I^6(ZUir,RI 8=2+\ҕyx/D:D xmn,&F2l5' c9ʞ!٥#0Hn\A6CHI @1#^,Z}_o"PɷKFxWdݐy{z~V}.kOk8ff[u 9QSվ Q}35_z<}"e!EcGyr4f#<^N ,v [BB݌pA#m=+rZ2\oVN~گ''?1}ڂ!9 7g}x?7.ۿje?"j#,;Ņ (w9>%Znڥ~Q[w7 1I|猟c}4ajڸFcEI{Fݟ#ֹTeu"56+|D$٧|IpJ6{T"I;7ӭٵQDwAUwezs~>jh_qx$y[2v0n<0FmouxK<{\H߻r7Qq(-nkN[K~%@, Mg Xdms㎵;F+gce~y}̱dDܐkJ[_*xi|K2K+ C?tzU9}^꺢kx0 1BŎy[*ѩ3\?h* MUNTƝYJJ=WoV2GX-N2}ҷrͭ[sNi{k740rJ6|pO汏5)h]ΪvMA߹~\E;rqקkB;A[ҮREoFʫc&E#pkԻOQ-lmq 0x~&c){++չt^Ks M9 j"5)k8o[ >ZI"ȿw ܁ۡ⺣.f*jp<~ pcYXp=b߳~CXƝiZGț:ꭷqZ#Lkr][l(—~3ENZy޺{ʦsmݷ$tb3֤q.~Cq,/,b8Lq[ѻ/Jҷ'r&dg=~S)ŶЮ{p@?MI-/g+]k 7̻+\ֵ~dABI6dAHsOB3V{oSJw kZDƤ}1yJyIv@8}عc-*1~M%ޛ$qMQ1q1=1[*+5yb얜c&VuX®rx9 S4CueHY>d1]zuׁ΍9}a.E*cvWqME}l(`>֫:蕑Zh|%Y̲ w=kTt!V^JԆayǻ-qzsҴRU~R2^ibV:7^]~ҟ#{j:IE*a>d,"^968!AU)Z=cN s{ cfFuA"*;ƳR<ծ_ XC1EuS-/ NcΕX !Ur uk:r_ЩVWhAݧh0M.WR5iRmD4BzӔbЊ1RZE$}ͷ:8s472-FbѼ`UԔTyUKVz=+YȏRO<»TǚV Qd߽mkOyU:N4؏{pp<^u9i^frOcu$}go%cU8\Sֳ̮7DHDycwgB00>i.\"Cg r*\\6b r~k>g疑oN.yʼyGv t{*2{տՖBv\c;d:z_q4)Kޱ1OƪwnR0G_Ot^]ot:qlޙC3v]9b~QsSRRq]MQWjtөv+ w$,[ff8=Fr3YI+53]c8NH'p;ޕ-iԞ9pYOƳ(Ԗ8S?5kq!jQ@g4mcw]u"xl̸ef#lzJ9`Rq%bۚݾb0<(=TJ6E8y[< %s0no9WL33[HXH-ܹc̖yTg'ִk, Ǒ7Hݔg@qFQkc.{lG4k܋ G"AI#1HF]z8{KWDhH|/i2OOpOrMܣïG-<86!veՓK` wjX+k㷷-8'tUN[9ia+JڭÿShߙtő\I y$wm8zu-moO is4-V e\qGiFЪrU:x1u(\Mw s8նjI8[yqXyҩG]/ t+Öv_LgܢHRO=8jr8 <*FY$q_yviUV]xQr@b餛Oҍ?g8: 3*IO0 ॆ{zJu5c\ԭWB%U )S^T=HuvWӣ7ۊ;ۭwS-"1S2|Me)O{u#F5aJQz}#|YS^})~vGU9gCǭJݶ1$y=!> kڅջ}oyed`jHb)ӌ%}dz|ƍKsioA||5Z m|ƒE{p8{ZGu xyFRWÚ\ځLr~` 0;syFNJԍ55:$p++|G)r3çR8|Rğ# $nʲܸ*)CVTܤZn{,\vms>1|o]LY 2[@R>9Ǘ,w4\8\0EvO'y-L3_9PZ8ӓ[-#ww]C Ėpۂ|;rIxڲU)J-<E'-}~o?QXY&=FJU"]ȯc9Fi"(yP+N}LXհ IwМc<҂Z=/ˊ]*s%\ , pr8(犨{v#*;HMZE$i #r0R(3{RK^SJe;4!U|# 9qOE㏽p1)T~t㊭USgk-WHܠaYuQEoG *1N.{nK.,w>wͪHCgVRb{> uQ%ۿˡڽO$Y k'J7JT ZI-@Wg7sJ[o|ukve_3n3KUsשOӃ54Z)bÍݫeN)+֍%߆,g[H"g¦.*I槈uJ<9wZЖ\y2t:Wݼv/wăM Xm*Pv_a8qZ^v7$ė +4$oBrx*_]+hyTh.h%ևo'q&'^KomI;Auqc2jrn]~QSK/^'o |XEN!~srqϠ}kͯsekgB-KyZ61c,1?w8^ScY譯19N3I]o}'y4,Dy;=~ayY%U=n ˧B_j6,FH9lG^HFRs8e+$F{kw~Ms si`J }:4zz4T7ֺm1ǖ8CSW\GU%⼙?&E'աXӹ+eOV ''y_.`oʮ5|z5K-׿ޙ Z/2qcxMđ̆Fj2[#QT"9$׻tol$ٍ+I"#d '$ zWRmaҩn@ś,VZbܑ|CZ>Zi~ѭ+48Y4Ƴ)RߝЂOJʵISzߗ{eZܮܖvG|wW+$eP2:sOֻ8cNJi;]c)GUP2>|Ҵ<;C,T $?,v}X$9ou#[I^iҞ3/nkKo<_T+Hma!yffR>Nkƣ'|Z%#Z܏Z[6y FkC㸚gXB)P~\sGnVuwikΟ&ݯKV{zh\%kG꡺_/V>_B##c}i(A<[̎88W*xzsQ=ysJ׋~|94C+s$ynd{X0玿u9V򌹛]ӷ[m7G>fzU $e'=sVjǖkK *_rC{F5ۥù7[UF7:+L^!FB h̄>bJWIEYYcmt}uW~]y;G|Ziƣ|qWU)PyvD,7\FB^hugHWJzdz*8]Se<+#Go,;߀m?!Mc.T5F5=k4߃mq[0:V|Ǜ%d+cr5%6)dI$: i<; DqPOoǛOXlm?'2988|.)-c{v3Xx.z7}fģĖzڼNт_ׯN wrX8B[-,/=]Ÿ s4a>Z%_o3bqu_ZGR͏hyf1sȭ:Ѵb]` OAr6rzͮOB[ H+~?%_= _c }_Oß,Mo @NG峜\V1.h^vcR>]_m}ؗXӮ%k/N ?)^=ysFT[OO>^LOw]~]sf7|/y,hL|IX-Zb$ײ]֞c*tp| {_Oᩅ-OXti,kWS\Fׂprw0 |%Nd􏔭JJ\hyUɻe Lx)}KGtrў24U+/4/:g=u66[3F g>XpwvM1^Q7u<"MCPY%i}k5+.v&ouM63mdt;y8#_ԕ5gZǩRS僲15 1>.eҲ8+(ܡsQuW-J>*료b|׃]+.3 qԨڞ/ͫ'#.l?.m;oG~ܿemm<%e,SnMq(fN1W駔eWUm%mokM~~ Qa/v)nk O<7#/j-LڴS%8AfWF-KUDtR7Ch ~-JׅuK{nDR[g ;U*({vxq (ݫ?cʶ:>qQGW6]Wf(١?ty:F0%rp9DNp\,W |M^jK(C48VkzkXL,8>=AMwSN{F+Z5߿/~.wegIlP40ܳG+X)ϩ`8iKkv]G nzh~;UtIa@ӪG~`y98rkإF3rkNMin=yb^wZZoK_k^ "AyhwĬ;p8#|Y~ScqMڼlDW,N7`瞧ֺ$qө(֛&-at2zkWFr7]KnƵɸ8D`lpRw5N^n[3kE3E4g4܃5RR8b~9Q jZb$njwVҝ>I6zx7+10AN2~L)laXBEik C=uFR LuF[ws'z=onHҠ E dpJ-CͭʢHihc}$)c;]{u"w\Uֶdi,şv~KS N1:ӛQ͒/̌a<ִu#Ok%9%oy^|&5Xj@< 
µG)oTͫ6z4vî7nߕi9S_P/_y?I. lьm[tJ҄c(;'DEmUaxY|8:ߠU9J-[|OO" RHn#FC.XcҴjM=G[G@ۃe+O@2I\6Njm]O"71yXgOz YADoLnO׮aSʷl\Ram%bpc7, 𧁏=ҦK'A{)kmo.2Dp;zGAEIB-SN[DLmGmf*cEc֍:qIq$3d . zu'mJKGܪ1H9^w+ދhnd o2 )<)NRWLiƮ!gsťmsM)-[FnYERfkY[ K#0T8ʧ2Ti]s˷I.]GƧЭv6$B>5>Ɯ;yIF 茉mIݬ"ɂACRQ;-7NRo_f mTS+šת;%U.L*,pNX(ɧ2^ E6v3ܒ9?Ҹ9ԗ*!R[ɣ6|ż^~?Ҧ4mM(JKǼ\/|HW=z?U)JvgZGM%w^Y2ۓq8 ּD}֭Tԭ4rQ![W-8TC#"yl2򁊩N(#v$Mim_P{oo fZR,:R/28v ՈOR-ar9OK Q>W!{|!^.N}WU=wLv#5cvc(M|AfL#uN\ʝIuH% rVr]i'8#N@~t/F[;[_d̉ݷy 4VlK5cbv FT*Q:rjp6s]⬙K)S%[B[CWݴVjQ.1u.O>f U>e'}iJI܆<޲;'SZWDeSOz,-vҒETKe`I,q*{guGe*#ii$Ed#Uڭ c(єNzCm^{vkh6e$79汫bB൶ĭݩHX=F} T)Ԝ8J3KtZM${Wj'һ)\^MW,S[O"qvxjBTF"2Ċ r,q^)ƣi3*Rj1[d&)`V?.`VR4+:8g/ȆYՍ2Zd{6ǡ984a=g~G YTMly1WWwgK1n $]XzjrPiFJKtȔKk8N0 +8BғkD"q2m͍p,<UNb^G]zGG32.f/M?+Y(gN.OVI;k?/=OfٳIFYj/,rhE^2s fsr9ʪmw$ *pI+_J%R4 kGN~fvA-U#3rN9?^k^Wo*g'wسmœ,&KNz}yQZZ y6.0~81|z&O qެy3H#ܨīlc5WhڳKM ,VfsYkS1"zEAT(m9={ʸ/71{dDQcfE=_cS9A+2(3d<"i#w:c.Zo3*u&m0'K`>֒,";dRү{x륷3Z>,O#lRniԕQەBۼ @P~^J:穫U(~i?}K6%ɑ?Z2V=Xf"YefI*++cctcyGO#)E&{|\ڴ{C3dL)*;s+|ZevnaJb>`l4a9\srA].>]w}e˅< aN{{cW\g+/!({*Q#2y&Eʳr@SJ^is+~9#N2g릂CmD̊F )yKJz+ti{zw8(Ν7եtQjpszdcVqrvWq}!6"޳36R=0TwzsTNMg-aDvCOQ+(2y8_OՎ+!vdM.VN0 퓓soR^WotVO꒼`l2h$,;1g4#O18%R 7АJ[;I3EHo铞v&-,\8wkn-F!U/J.8Tエ`:V_!cw {rt%gǯJNPֶzۿ>hRM++[nBsG.cJrH$*/4xco3iBv'=}*Z"} q]].ٖmbg[hV8YIqyWLj˚Lxyפ~Б&ak sMKQF#%wlRi@e 6p}buJP[+X:y3u3]~ނXI%I$-tWl'烟CgnxSJ{jC-Ą4fm#wG4GԺ 1S{&|H_zt9^X-iP"LBAzNiSRCAШ"YJ}Uϵk%ZThƴUZhndek_1U;kZqZ'|+D?^̡<1*~FѤQz~2Z=v ^aGiw,ydiݟ3)S/أ*AG,Y'QV&׫u?xۿk ;E6ߛ~Fؤ^JqZR:˗%yo;e޻p9$q<Ӗz 51+Kur*afn?&gfMlT*{oee&%WV=z|==1M+ب/-YkH*N!<{tXe j>  ݋K-14b>wG3m9:n#ӚU#N}RK[/Z7loF8Ϲ:ӕ3mfY1qSN++ W޼cr\ֿhePWtco\`/' f򶙵HƌyԱm,ErB XW~iFU/gfqJ4hI)];Hov.lZJ0㠌$4ck+gv0FAx*rьf.%}qXTQD~2ԢߠrIwĢ69 /=Zqy3Ԝn3A.!yQC^1ʦ7RN1й2M3+7`p?cMom>o{Gܬs,!aU7F ⷍ:wA82zjY.edeǐ >[JWԝN_iu/opBA,~\fhg\zoE sW <+ ֶ{DFV|!'/>L襆m۽mR uO%;MZܵ9ZR|ӵĺyݽqN5$hFSݗQQjwyj@?{VҦJi>)EȽ\\W۶=XSq vJvwXM;/c \Lq5c-SI%ޡkĤj]:S@,~ҴYY|מڞ9](Xc1ް6J5)^MJOu*OFs(Ʀ}Nkm=K#7Bɴ|=4'u+:}UGsu}߯aDŽ:p[t(]5lOg*ִ62hԬrkC6 gKb..cF@'z5J:$r{:eHQ٥vܠaH*֩;rz՝EԾ -3g)V۩Ufofdڲ6:Rq8ԧEV- 7 >vPwVXM9oq>0ͥY;ȕaQ<}z\u}{O *}~LܴhO)jU|ǹDR*eoC{LuvtX+XcvxsS^ i:+[;6ۦTmǹ?Ty ka:JQSdϊHԇ?18SrW,(^Ÿ[,k es=W?+n<ɹ9TzG+@s6 '}p? =ʩV/[oo6cd>G VlqT.}Fc LF8drQuNt䴞B7͵nzzNї:שie6=?T*=:^Κ$ +ܹz|n&Xq$یm>ޢ\J\̑ge" 'ZX٠::\t̘P0dqXoucNR6hlo0 c>3Q%t]QpG%WLvF2I.s'WEyR6f3psWZTK:MgWXME'rq8=D߹vv*tm̭r 4r^xkR+sӊ֣k[w|]H%1f0w}kTb#Fz;RPX~g](5*U^_VЊz]J7XmE񓷏z8ӊRT(rپ\kw&sn'$96^Ti8RݽZCOH<ϜF?FJ%*kYt QXɳRYVcuӫW9兝qqiz;BZ%4}?ZUM+3ҷ+F볫IF~]ҭv~~zQ VfQ]zM>h]o.E ZF;PT3ҽ UsTܫ5(ֵʬ}rN:\f)aJEU,<ޯvl1QjĊC'5Jq">Ҥ8rQwHj7)&Ĭx`S}K]Qjdo.7O_s*rBJQ@$2{W$V_Tp5rn b4(R_JiV9]݆?]y[^ۑ[ogЈJE$iHָS 9;FjYMɎ4_7A.ea%}f㿽iQQ/ZBҲ0o? 9N56ZgS[Խat0|8`7}fUQcwO2֥ w~b#d rF3sE~ⵍgq\Ms6y]pnG_n¾Jv}^Q8)^_pN.]-{tMI;&cW[m ?5Mݗ(Ӌ^zK+yw(~u,b}{cČ^?_S(ˮ9TnN˅YxUGcִϳ-BHrڄ0G+'u=}qZƟ,˅tqq}[cy+ZWk0{5}n1}M8>3{n3/̻NsqKNVsR2I/gxe&5W<x]xx2OKv= iӭ:}~xTFoS-7~_g>R []SnCG1)TWJ=mJxc;tN;ӌ0;V65c)EK&I3޹/vOs?Fy"ݺle:Zؽ却n+n6I=O>{ER׳]QSG=+SM*Һs`g~|Wm**~V:%V+w}RitbP[{9c]*2۩V53D$]FM4>R۞wc)reX]/{Irh^g2$cQݕr=qE{v~_ #v\sg}qa~m jb#Z!+hu1R}ȷKG8xug5x{ðj.RTc*ۻ/yv74ݾm[앵]NbpM˖ik_ǯ_nΰma3 N&e^=3:|.' 
n̸\l;Ezקng漂mXd|1XS3tFn1;nɗČV2 pO^x~\zRK'ݷmU\LUhJUnC{zoO1QfmR`V\r}t_42 ?tP\]^{Z.kZRj^Y/!V;/Tnsǿ?m)J+٧f~u Đ{68tQkXM1JNm=ы3yEcm\yz3+5Sw*҇}\~k>7"3zcg=R+b):o>,)yn Y?1}8*%{JpKfw#2hH^9ڹGGdyGD+K[ܭX g8?󥉩:6iGJ*M mVLXeO9;*+)G}qSQn]R;X6ݹr{ck'VDUYqgoZ(()Fm z^K7f#1ZƤ+3JtjVV4 en`ʿ2{+(;/t8cxyIY}Ч_,be 0Bsڹj9/zLu9pF -ϼ(P4(qn3pRu*tdE%UOm5ݜ3Y*K`NU?qa#2E\D}uiZ]^F!jd`e*8猁IѦ}=nyx9Ud˹VG{/v>as4#mr1|gvy)Z'+ь%&kdldg5Z)^C/[tυ̼z=&:#'*wCu-eh#dr6ѩ}+J'v'>ߑE_7˪0Mgqsں}&KL"*/ YDhLYUPr8'8_ZU4:5o,ona¢1i}%RPw]74ͅL{NhztYTz޿X[ʵd?3snPsӏZQMniZ~:_C$iY"wd`^V;.bZތh]KojpH6Мt0\5;k}sP廽sBZ­)B5i49d хcI(#.ܑNqӁ҄({ )8rOmZK(V(oHe>?(Tp?\j~ϗx:M?RVMH$#j.1ᴂN8F2vw맗U28ѕ_z=DxS W9).Ҭh?1`Bx*ӕ_+ FҢKx+:H]O"[ @0 éÕߛ~z|N6imt맕mzE֏˔)%Xʶ~N >rVTVm'Y}:/oaU@з pA뎽pkzبWvj`Nֽ=KCM{5kVŹU%r8,.{t\TFYVՒ)YE4ҮK,36{Z{7)&:~+fՓ_q,'KżIvpTg#TYG޻K3Z7rZj]!XcUv^1pWjAqŎRIWYʟ-ז |h-b:Hҳ0Y@%As9|[v1je߿|CL346v ' TƌJ GKl߹o丵6uцLnQ@ zQ×6ϙ8=:=x%yZ"a7<Ypn6I\b L+Kˡym,M4WqeSM+Ya,3T^/zuqŤl%4g*z>eSۣq JRV~ ]Úbs]Y^g(Kr~~FЧxgko?0%FI=kWtw2*,r>!Y >n&d$R~/Թ9kT (CZ+?o %,C2*9Q<^`e=]φ#FE %`|`~aF3mN/ڻJm)KC/_!F7$U [rAg RV0N<[W熮D1$m_AN6Kuͩ* 5([V[gyXWl11{Tx7Ή8pz_挞GV'eaeϭsԚ3ўLU-d6X;[ý`ǰ\Nݼ;kb=YiK[8 4K2woUZk-/J{O*dRv0]Maկ-=<3 V8QOB>"NɉB#ߞuS|4hC䬾#%6iHۥ,i8O qn+*1gK Bwc4m:]3Iu!g,NwCr1q q1on{FzGÝmD4h ez6`eHHӁ1*tpv:帋%M[M#5៌t?<;o"Z[xGǿ;GfI-oh,yL %LPɅvGcN{{j2t=Sx*JwVd| J=͉7EnɶVIcԀ#{tqЬ,To_/H?ew]o|tǯPG\X##O MFn䬭*gx]jS/"|&TiYXs'=pNp1R1a=]_WqXU▍//Y\XDZ2Y'_H6܃\qG˥EwgFE)ja^Xo[,xHf|r' =xܪQ{ CEӦ\wMp8X|*VlFV._O#ZfM?|dw* RQ޾G*]ny5[GN/Dge SZQU#+f{dz]X`Uմ%U,F#5^[ejڌm@]kJ(EZqTiZ'wQ־RjMGN&#R}˛aXO͚*Ti݊ՍܫP6'J]-KA}~.kݦyג\ٲyRə$8U%$t}_7)+W8ZFb֪Teyhg/fj+&;H~}=g(sX!LpMjiRW^{tTu5ڄsAj3崛K}=x^,E5'uJ"qؼ#[M rYUFQ/v9SZjwZg6 V 佸@f3WhG cEW֣M8v.l3q*{ש=zcZtibg14lw =kZJZkt:ڔQS'CknBːzz/g/y]-MRq mz:ß7u%o$ 69=;`cuZU `QdQ]v{mChr ,)`0̤qϱ(ʵimw4aR\ɥ5+jk-NX,@fӀs<ɜ'{M_Vu_u/9OjXjƚ3Q.zG${]mSiпmաYwd[>6k4FG-d-_9)v}:kӱ Gt[SՖ ُb38 jձ0}~||l -3:=c ;[]ܤ=}9kxt۟SEL=9M߷/|;};LӯV҄` 6aOP 0r2k'VJ%d᱅LM Suj4KտOKk6/,dqsq1r$ :b񘯫0MYzuc3Zxz|f䏰xVi (~Pr};t_~߱3DgȅKf؄qڦ2kKH .U/pÓ|RCrUvnH=kNeۤm{2J-]6YQsjb\e)j '|zp{g{C,]:<~ʥ#v#)̞gi7q=*HGd)Tg3R{k9E> A'Ue*i3P"Hsla9ڳ{R%0/'q|s=}vS{kN%E7opf1fa2d⺥[hKNQOM_eT"s=9>J驭>GMOWӿ!HKy&ei$\lt}V*z)/Rdn-?`9ⵌyjZ[dty!?CVU:ҕD&, S)V{61"{zqZ_:~as 0͍-'Xve{(ߛ̧o%Vy+UФ˖ۙ]] Ot6EcӌWvؖ(i2ƅ2pRp:tRmD5(nːCGS-$8b!C2TjJmb-nxm>4q6`&0=ʸyl>ѫQTuc^G;oit[O0r99kXhgߩN8IJ'rK2W xN3LIV*_s1$գ0YƲ|B>eqӥvcjqkKhIko bw3I®Or ^Qw"Lzgm>(UaIpAzAŠ)M_82dۺWw|L,t뀾[HdvrpԔ[~gJv^tz]1i6̪4hO98w֢e5ŇPX* p@piۿ;'Ɔ'B!""$- =yQ'uGH$U\,+nOSߜ0*R< /[Ie@F3#O}xRY((_Ɔf!ymG˓R1SmE7.;;З~"v02(q+y;tǕ_MMiЌeg&d~P>>sޕj3)Ri>iB#I y#ӌ&QBMr",ڝGv?YI{Gg9TRp)\N|߈Oq,<00ss[HݣjJklbjӓk0c?mqVK&sr[JFImNQC=y%r &?/,y23WӞ*JH9*jtyUyJPKcɒݙIlqqTgNtWv9;vhfyc^SZt:9֯R˓w骊)Lj* rmol^K }?zt%RpQɽzv׋lg?޽ |WsҌi8opi&[CaE]>KshZ)qmnn-(w<~޵J:؊g8/ZDX~RB-J+:rnFT,p HI>mۏ}s끏sЗ<ZFs'{t)J48TzZ""{gԍNwkR=5QEf+FfN**Z59DžNw$' $7oZ5- _241IJWS班y^K%wgRTgY!veǵ6#8#zQsV9 O O=t{Y2YsI>[XsԡjfD%)]Ĝc5U9mm;8qT_wg.$Ehqg۩[ЊԭJ48oԩ4R&^L{oz}u)J535gMTfܟ8G2滓؊e.|kHSS̔Tt{..l[PRv`8qֈӗ73;)Iz$̳H2 ?ʔUSvztDP+'99 WjJJaJ/GSaa}F8ᾕHAFӜu5ZPn Z]:|m-:j" pʹFg<}\H&δjU[b84mIG?.y祺"h'nciLhv5c~X2Zb{n̷tr_y}pUfнm+αv4©<J52#Fj]MhlPy>bYYCAZuR:^ʤ>2f0'885gگ 11``S0kR捫;(]2f`0No/6ɯ<}(Uvejt(GNIY7Bc [Xnu|izhsiJ7w3-4yH;8=zr7&p>eӱ3Kٜ X^sk˩cq~I= .5܍dO ? 
yucvq~%GV'Xd,Yz=p>oj(FN2_qʣK G{oFE;۸m݁dxƝZ<Ϳ3.1߷n=k*ȹΝR+C5,̮U'ִSiԩV8eS^vPk-ʃe8U롵jPN/[Xmc8r:s뎵iJMJ6{\ hfW8ۥi^Q1F05on|]vA?UN9G]\L4F՛kiӧEӭ+>[3l6JON1tZN' [Ay"ѯHGNy9;LU.&aXһd;1!1aԌŰ5 Hۆ'?=3֔eNt4SknȇtQ]Yy,Üc8 /kZq~ԥG%OWu5ّdBd8q(N+a9NV{,GZ]<~Y^iU)Fbw+|g~^]/=kUkAEY[{wBHUo1D,:nW!올r:q"ce}7DaW1yt{V} ҿ![͵ ^ҦC)ҧQkAsm%I?#9OT2RwO[2F6".ܞs{{җ2FҧN'(fqof]諷qOzrn^[h]>4ѓRR}߁ Mhb FsqNXyMVxhAsg,3C-Tڮѧ9cw 0ѬY<e:w#.IhFΛq{h4V_:4y8WR yƾ`|,k`qVsE[(/-$G0G1߿"jrTHZVH >Z$^}+uNEGJTnyKe-CYʟ[ARJ[s$F$ܾ`UNk*WRQ'uԑu(m$*v~thb(Ǖ+ym&<O]<=;OGeEU9&$6(8-i+Z;u<ڒ*jr{mk_Mz קV6UvջzcG[7F#XERI{ R)'mќQU&DdRwn2@iQ˙9?Jy%;x㺰*\JJݭ#R[dl0s)ǒ}>۠\i#gʜԍK͕9VTyuQ4}I$ 䏛d*[VJڞ TӚ}+N[?ϥIKFfJKqmn0> @(N/ TNϪePeUwqu~ujJqz"!bZ2V:Tuq nmo'sZ#]W4\ I$۳/Wn3};r"}5}c`جWcj u{!Rصln!Z*;g%4c%j&C$QrcI?j^yRWO$~wm}q;t✜fԅ9TtvB=[d!"`C2)y:v8>7n6g)rIYeDelbA8Lc'S笣5d6V8!FKx ,grq㑞jӷx~ZsN6oo?Ee*zJ{.W)%!#>rGR_̅06˅ F89$*mY,y0Lϖ_/*si%I(ᴖ-of8V@9׻MMΤh#bb$Pۊvǐk9JѾ[{7?MSYYp;>W}eti1rK_/R ;%ӏCeeSp2gd'kP3\5,?Vc7z&e+kx[e!^I$?iNW~ԭN2Ӂv6I>i3ڱޕBaލbեy*GH~~H~NnߗՈ+?wܭyevDkQg̏`6真?kXVeʝLJܵ6e]Fb(wuǯzo9m鬽8fͧ;2:zg4|V-x%YTTed)zpj #o|q 8TI'4)Sg|zkrVN0sz܌2Ye<*ԓ҉WaG(Cb3-ݴ`TӚ3N["@ƭ&RzUSmǠaQRН7;+/cCsh9J[ tJ$eYדJi1)F[Ў ;Ehr^pYEsԌmKFfecv޾ԴgF Ӗ@:URNzԭ-t r6nV'ҳc.I^q!ee qUJjE\kd]m#U?6ݠdd?J){FBiXħoFdӜݩF^EIN\VG9U^;{8ԷSF2?yח+QZ4;6lM'ٝqa;NSu*]s%m Hv::WU;X)B.:Yt:,1-!woOjJ̼U81tK/5B mǘ o<ItF1ġ{w$$f-ee;p|iG5ApKzF sN9,e59:q(0 H&^ clr;f)Ƥ˩)UJJ-;%+mD89jcKkY>e3!O<"<`cU'-7BL׹39v[kN/ʝkqAULQ@ss]Wњ֯GMs>fIo^RB-݂ͻr>Q]jJ`Q'5}>~ayaV`nI?/^:۰涅5ˮUJRw["a#ː1*[lt׊贜 1q 罴d$eu90tFI_KkZ#*my]4il*}pwz; qu7RqbG=u7ݳ; &:smIK]u$ߵ*k#(Tf}z[BS}-e?e㻦8XʌjyJ:t+ۨ7kyJ{9ݻ'<9lEF1i|[Kiޟ3RTƾDrr}ozjMTcFM=Ynx͎%>Q~xҼQ~GQh6tR '!m*;s'7}tг8+ A5b/RQMKhi߷a$EyfU# :_rWQ>eӕ:peZi#,]?gnRjߙ!h+m%;b7Bcyc7=(Fj݋f[vw*ƣ?1]HQM;[d%\l7n AyZ(N^o)ӌF}xFUX`}vKJ!rW3輏hV|CuiQ O[t?8w0ST&e|E**5e~>u{׊4Mib4'mْ馇t~^vønF} KWZ[M UOi>1v1JmY y]1U+Vo[볾=(ʶ2qog+="XWRwgneTABȍRBb_6Sti=^zo݃cc 7ummmVս.rY3W췍 CG9Z*,G?,Z]]y[t=x:j'Nzni}n//d@>޾N5JjmV;?Mxb?,jb$uʞMr/dߝU=Jx7ukiR5߈>3գHUkY iH2e}W+M?_jSR{7}Gٟ{i2}VO]3,cPT@dد̱nu$ TT9sZ]t,wsyGP8׏B2ͽNx&{۷c7+Y$gdUUUd  u{fQG{z7mS|%.t=2Q#WAg<dg<{qISƢIwߣ|?UԮn3o jZ|rjI12\mד|}jy==.| IR/g^n5Sf#3Ȼ|ș csyA>ykA[yu+J3It}7㶙orLv 2~1?\}ݗmVyV9x_J,ϗ. "cFiƍKWk&˘PuJ*MEI+y+I.W5 -HTR$~w}/ Q8oy/$dem#1܌}GaEvѣSh$ 2<Տ2fh9>N;xy$WoJ+;ϭ;cmzxV$lq_|P|6T)9={׹Su}k3P;+Gz}߀V rSF,8V$X'K?aD{u>~N GkxV\<*:ؤC#n [Yu+/zm!䀽Aܯv G)yJ7oax| 7ni&՛Vt׼ ChWojc?$)( z`g <‹)զ]>8ʟ*zOsT!5MR1݂9OA\U0U+ZߑOF5=əJf)Woy!TcҰdo/]GȋZq͵Pg 0yKStkyq٘r:ӗe_-[O^9EzP;hs*xXa̞ۭ˖v+YIlU]qߞxڿmVFd6襒+0G~{Һ#M''qVY(--lk0Zfm0)'7VԩԺqCRc=ֶ~Z/mSӑ/?$u{duvtWn絖7zzۧۍ^7c#71=99Ϸmahͻ8v:+;Ɛƭ#,?(9 zҼXb,LU[M48iQQ&!Uh=:c^\y..Ii}ok~/cH]kHAFZWI5oRƕ;bI**'ŽN =;V4ާ R\#N鶇Ax*m'Qyʶ\9Hֺ5oBΥreϷz-=wRZ *i%,|FJg{sTҍ;ncxF3FFYe;YG^zt^N:4RJݣ."~fĆNXRy*X+EJf}7zyIӀw98ʵ\ j'bSK϶էd<$c׮:V~/_촽Nzɶre0[s 1Hҫ=W>6^ӭosZc6s^4Lƒ0 e~`}zJ1i^NݟmU`?ſ'.G3`%5$2*MY?ǯRj兦8Y|dA&NIeg־K2G\,{[RS>i&C~&|f|Ka~m`gC<|U+JUnՖtai;4֏}OpvVFާh8Qq={z ]I%=MsFuHÖqN1=K]O˻JjVѝt~n2 Oݻ3:딩_7`sC/~8>PSόgN=Wh4(U$]ڴ^Z_CI֟:Z.-eSn>V})G(oE.J5ս NREk$o3z*OsF*VWc[#&h]¢2k)ۙ]JTgh f]ێxDyl~']j;ol/lW*1RMiuN>1W[v.ZibV]dM:ZU9kد6n%eeUnBޝӒvQXtgO݊~}+]=WpRj+%f9xYQ%}zwL|1ᨄK^y36繧W7-XRV]E|>z/2qNH5TBNg6*HM_yiӴdpZO;r㧿kqI# tN3tX3jѫXxRT|>?Vo/˷Àt ^h4n|bJޭ6qmro*b\ra"1eRnP̊M"^*脏o PRFPqxw fۏS=}O|'4r5/& $;i,g'=(Kȷ1,c|wtYUuckB)bICnd9Q}838v9G.&|嘆=8UnU/[CNdf|Xn}G漚/_*N4׳V^zm{ "r>wi*:}ןC_7Nx[c#NW^6r7FO5R4im7ŋդw '?=8Rc -dVIHf 9=1*xٝ~oien[rbcҦ2Z_):?b?-ftrK6Q몌ZRt}OlVUmL1'{Q(ԣ-ufoa)M2hI$h<סֺiIJU#l\:3:K4r6w֮68bmκ'Ȕċ d##خ|-[V*VRZaFGnݫK<*Ikm>譣yzڻ#>oteMqW_qn7!r^T^h&"Z)Mž# +psnҪUSXҜ(F*W~e#3ұ(ӗ_5~P9fݟBSXø{aQJk[M#rN+-EftZO]Z"{][sz t0K<hhH|d=ݎhܳWVbTҼ$3ٗYG++kMj. 
R &~|99E~Kͥn=ŸI/˅O<9#ޡ<ܶF&3O-Kg%[#7>nwmܿʆV{ye<|<^ϖ2_3 8T"KM/m$y TI9iNև%hCsvgV^~I+?S+.qj[:$k"ބLp?*Ryc!TS5eܴJ򴱏*8ЅcNz%KDZX[m:vnsv鳴}}8T-p}/І[[bI-4\ccSITti֕.Y[Un̒3%tڲ0<ߎ񢤩ӞNNU14cRVKr>b}{Z#yA:МD:jjޟcfFa[=^/ieF>3rS۫#Q4VÎ)VNRQmBn]KB?$l,F^䎽zow6V^CIlArTzV~qPٲ8SՆΦ6c˻ ߺ׽t7Z0}Qr#+ 0JNIE7x~]]nFaVun2x R:UNVc- )NҿFu )R߾-zm>OtTN23:Fo1#ڰ?2 9޹yfzѩ*vM駞V[Kcl3Hm%rz9=+JiƦj>#1\h$C ;޳VI=wSV!SI+9XaXmlFۈa(ܚ~\䎙b)P)k}̒nun:oN{s/hR)UuM4cgi0ژ`Nqg߽g+⢬sak(Aul,!2a=0z/CщdnFkqw>ՎnG\qZ>ҚjFcIKZܴm.`NtF~'NOP[+R2j㎇ߠ5~qn-:xIT]$층0~QۚҌgZm1R$O̻UɁI#](t/iN6$U,P$yuQMʜBי]i`}M)NW4*DvppOtap;Ȫ%Rz1[ǙvAgʀNGT?E]J,pc-֎2&֍~'E:E*jlUJЍԓ~yJ[ȱ #;Jʫ:m}>jZ'(]Z[kB;e/zkUzҭv~>g<͘Ӝ9ԋ-15"r·̤gq~Rk%f/JJ0daAjQ=*c̚ZdSJy뷓1n^9r^uS=,ŻSO2 oR ssD o;v7uZ2EcSfSI/z͝.J : j!Wc==+:ʕ8|b/ه݂:B@8 >C!9ҴRNFG'+"вݭk9Iݘݹ54iJAs"RkeQntb)>B6Kۘ*\W)9S:~F->Wew|޹ӻv.e:jR^[KFlVҨxJN[&eS4}5i+v:h֨2+K͟ sR]J= Pt­oҦ.16g^/CWO.Wk/)^/H-Eyyu?QJyTMoAp Ijw8tmzRr҈hKۆG!KHrVYYn_n>{ ʬqJ*cʭoA@i&JORT̴{XiSd'no_!޻#R2ݕk'einrgf?λ]屄jTtTm{}kR:ƾZ0(<p3'3>hJ3{5[[XyCgyq+Nt׹1'u%vL x>5ԪԝḘqMƥU)EirZȤF*sӡ~7{{Xԍ=OB3nIN+:ʒmM{\KfG:yz2QɭUs _yP+{Z]46q?wc)ӔN^ݙ7%֑!MΪO;Iqyy)Z2)Ohb3y2"l[-=k.^h:(O]y7v]\ҷ&2^z28lTUU FXqr(U//?_Sتqw?5m(i X} Ts1޺#?i$2iV寙^ZUlf35\SM%N2~ޙzDmyINgX=SQ7nc8Clt9}rnC,ʪr8'k2zJ %Yc̋/"{ZSOu2n6H˷n#8XݹUJ?i4U&d&2+:u=tTUK=zب[ŭ|Ο:M֯.&e3o dqc^}OY|qp_lT[Eii<FUx͐s< Ƶ۵MYz|ZՖ=^KyǓw"Cq"kdNӣSFIzkG FUQE=ݴߙ~ڬh݈$^A9S_f/Mb5)ШM-ycӬGk-9)ԩzzdJyu2].-˞V*4ee{0J{ڽ:Ԇ:;]d~|v]Y9?PxjΪ_VQ$6xjIaiJkXڦ8NIl/GK8],v1V{Y%dٜcSFݍ͊N4VA[əWa%F1nўdq6cQƍ%RIuܱe ܷ0IdmyF^h ۨW5VڈʗO x5r(W؏mӍi+). K_5Lv[&qA$=z]㾝pSj1{_S=$h#Y_p/`+NWR6_k|ܳ UJ͹'},¶nkv, m͎䞣j`'>^~FU+u;XR,L͸c=vԫ}^"\֦xԞY.S9Pʄ =3?o#ңg7oGii"+ -yN=뒦ac6޽nII'.ftdTϜ6;S'#[32XjzCK6Y䎤cA%K/<\:j|PjP4# cq8{B>=]4;FÜ# WU*Mܩ5|+m 3ܸzWMe.5{u0o1цbg`]I~^+ǩ*ϛ=x:ԣx+ԭ3 |goo [YD iHf>Tjx.lg5{3JYJOW~̓R1ͰA ㌃=֌t-S֥>jJQz/cإ[eEo.Xʳ}x*tv.ӡᆧE]߭V}tR3篦)aʲ*14OibkaZhv 2)Bg>T^^"Xҋ K^fB;f+*?FLEkT緗tSrRKK_xau kK}R27?98 Ns+[*}\~nkc%S*4RV;][wqUM#cقcnG%}:z<-kvZsiJ^Vktk ln |![ tF6xpIOkأ|棛Iqi_C]FdRfi#x\okC4R4vZnMRT7;$lK˞s'=v̬V*uF]G+|{yPW`V 䎟nUjj%~[ SM'꟡wMGik S΃  l+ Zu(Ɣ얯02~Y쟧?g-^Kc@&Bx 8Շ:W~oKks匏-rI _-OȠrqy5Z?WE֏:)^iurm ҆1&y"kRJsթԹ}qҗC {׏RW=Jr/zibt'٫XFpz|q58ǚJM qwu(ܜSGq+2|5*m:>͇jE9t .G۱TǙ{CR:ʎyJ2UBڦ6P}Y&Ĉ|ۗa*Tм=8ԲӧSw@צԭxcH?ϵMjU(E#j=K>#ח_XɽR6$\Jp]\8Wn>#[y+elo 1\c'$dRor _k_MIޚ^36ė 5ƞfUmaЁq׸ze s/}^)(Qjx##f#H\J Jed}l%)&gciBܬ4ôJ!}N۟2f8l{7sͫ*QZ[U?I [GӼ)oo.Dk`%A8=sֿ81vn4{J_4t1Iy+,B86ЬFsIיTQş;N'}Dݠe*z?{xnfRvt^e'seܠqQ'TΨkE[0Q3fYO<4TF/}ݭJoef2=x88~ksj?Xi3u/JZZ6e Sz>UM؛ o͑vZyc)RS|;̍W\>|>G0VL񾵇X毋>QZ/ M#feLVqЩ$e(T\~lMn+1Vl1xy+WxFhK=ZDmc+T;2;OtS}=Xuv9ffe]C墀c_ֶoݦÚݴF)Vy1`FjQ8c{:6q: :͈yJ-]>BXÎ:U ]u8z|{꺯#z&Evpx`aqQκ8]+ 1Mw1Գ,d0ڽr[35iղw,nrG5zf*;w'k5v%k[VqlUIǚRE$xm"[X;K|BSMܴQ9&.b|g' zrOZs扌(˛l3T۠˶j.<$g۩\I*2QFgG'ܑ{'$pWJ8r$+X -ovǜ=%9y5TZ3Ot5+yڍ,AyaN5+?O.QmMkCĒX޳Z§?)] Nէ5)0{ۡys m.$*?P2@}OtRJoz+[-Nia$%"Ĭ+zrN{We*tR} +S\/Nοᗇ.SPf%B@sOrLS2j <}y'=/MeX?\g%<=6o#ưY F;Yd2>νt59x|'cHbUN2tqvoȺrJr JVV`#lgZjGeeM\/Focڧ?l89TVڏ4Tw:%\"g&0@ d{sKet=#tNՕr'JnJJe̥}O&Yx U<ɣۻN;Wt}7K{}iqvo~Yhvdi˒+1lq'LC>ߛӡԕ%]:t'վ D4;MaR53<|jyZ[ۧW,N W]=oMj{YUm\&oݍg#`=1ΒyTekЎW\Мy a`,k˴ԎsWe3|Vi^l{nǒ}svߚ?m$sϧZ+/EcŤw fWP͒RHspzFRۮ1ԕH-G^kmF3Wf2NܞNBסGN>_zpڪzno*0 fv>V+u`ϳ[u7rʭ EHdܪ+#8#2+:wKwmLZKwMMKO8)yy#.CHyX䜐 CY^)hw0Ό"_k4Ur c8$rB<׻SV}?WZOgP:lLeխ+^=E(~KYZF8NXW[{J84vMgIUʾ);ؚTmryW!B#< d̕TsGxHJ.Ծ`&霞G^^"^#íR2սz2EsFOOQƸuqJ]k=6Hč^XcQR:xLj4u|H Oʹaw8ϱ:ߧB~9RۡS@p|TqQѬ{FiMD^Kuy[J> c# 6:dת3S6,GvBƇuiZkjU)>nfʸf>W1\YIA ,hc͠Zy0Xmm=Jl㧩{4gڴkc|ŰXR1SmijtENTjJ~&M眎L[w\cN424}vvH!`Lc-%jG#5V/ -c?BFRӯq(VWz?vS?wsRXuNv,6`Hu]yJt%dJ(dg7eQ*rNIyTUIZ.5 UpPI^yB\[3\ceMNʐ>gˮԜhvܫ$XQ |7MN%mxTUۡW]WFJ3^\V?Lodʸ.S 8R˯y򨥈RS߇hfӏ/֯'~YiN&+$HwW-IjQ,snu~{KreL.p$cӚDdȻ._k~_O2:dgdf n=J8oesF)FQ 
fZy4kiI6ezH8(-7AƲȵrw@=}Vt%;2*%dXfJvJ>ڤ^.rZcnk%{BI"f}ʮzgSRteR|kۋ>(O˸pVR-%)]9[W焠O~vriԟL'+(5Уw:Vfv=޺ckS+٥m|38\ŁWDZ䭇r~niq0uTZwk24b[{vw4C鎜guƧҋ:T?5^cRQL%ҩbu1݁y57i -:Obi {p8b JVHNfo,S3hU71 ?/:܈S99kDzW6"ܻO9NAT>T;BjS_O[R$1@UG3N2M[ zN<\L8:W,d^3Ky-F[x {Zc 5SY4ԑ  Q~xT䍣H^ U$NOIFa8*fc+{=cSA;%a &v#m~`sO~Uefjb(Xh|$\mب g;$*Q[by,qשT[*i%ܑ5ml g5ʩgOkQu&t 㓠q¸jK\ҭy;ī3y1¼_]5i"swN<0U$zc+V<\TqsI&8g,Si6MǨԥSnO_*]+n]0] }>\/׮?fQ'hj2ܳ1|-spz1䖎qF2=R/-m+l\ҩS Ա\W6{it2ߟ^|T%(|=J1쯥۸\)wr~@:B -74*T5YxJ?9ZyQ@^_ڑ)["QiuӅFעZiq h[24$!Q sk5*E$uTe'V jp) yd,G u!3Ǧ{QR^{HWoocoֈkYU#Oq޶iZOWRZw_IH6"79_Nk 2:l닭]6Y`,#pۂG8RNY?g,{ ie(m܂0w~`Zҫ*ѷvT<gFy\@V(*z"+ߡZFtb,]G7d m[reeS׏_үJegT*G khKt.;,ݴ=?tNЏ<cJRK{:Cq7#¦֒ש~ΊU{uh7*'/C&\j7+5F,HfhOAh9Jju7щ$mKRe ڛK_6c`v#zu%_C8*Mkk\e ҫç9<jҌ)J멮&ZӏĘ4K2L`TҁOR֯.jIvtaXs'cޜ%W9ٮ6y]aV8I#8Ǧ+R=a-SV$l L[qdƹ.Rw%RYY$sɮQ+hva}5ٔAv\_w+j~ϕ驍8rku%ٵyc9?xţ9' ֝gJN3NȬgib,h +c֧N.qmɻk$˵:_wzUQ{-Ι{HRI5vTs3}YJKK ev53Vm|߈sr^O6QKoQ.3qmFZ?J>ҟF1?͹|b959J2ՋiJJtSs>c8)ycy¢U,U%]2̷LPц3׊9#eѝ5/ʯ$dF J,>\m1Oc"3ԩrj&t6X3ЁuƤc(9kU+i6(cT-e9OS9F--o,$.v ~uP'ei-%4B7O-Vn9u5eu\vƪ^xiNrO}(:^V2VVLl ΠzVqqEug"[iZF'$VU`guJrEn6/~5:2ZSVW:\Vv^77Su)BOZ3N[Oe{sELcݵjjB[Ek)Z=5 hw9nOT(Ǟ캲pVsF #>Y`=TI]\|jl2#TL~\)TlمZT˶A[|w =VRIE3v_weԵ QEg1nCA >ͣXFʚ~Z.|-!1-<\ҕݕRRO[t!Xw!`|$b$/R>ѳr+QvCwEJw6X7!\F1#)s/mfKly;l6b;q0Z5$z-!UUVI0cy Np}a>YFMLi+hgě$m?H˕-yܭ.ζ0{~tRW+F:j#B"\ѩRW458RZ\$hX+1= ))_EIAva g^?jJZӾˤmuV+eFtz2kXsn*O3:Όi6ڴ(CnlB]oTQƴRr9Iۣ{((>mtQtY3c{VtS~TAdDpƘ߸)+d>niJG-Nh:HbAZէ4V ayH^1|J^Ԟ 2J̻SO9yܟ\}!Pv nOs;cNji'v_2V,~\w*0d}=*a.f[ʍB =B A.ѝ8Gu(qFTjRQ/bXN2@\򩼬oRhSߐLVbPy})T54[;neKY7HT;9)ԔyR5#NGmd:-VbG?쁎U+-Lc:RIvyzHF7eqަ\kԩ$tIS/M72#:|㖅ЯZoԐUH=jwKYʐRZk-bUH) eIS6#V">._XZWDppA <Nָݷ~,e 9Z(E${ti-J(4ӗa|% ]V/0d}tTtCsԧ]Gr/πO͞WOV}:2û]]QxшDTy #Os:qEe9JUgPԛuyߞ;SqUnXJ?'$l,Lλ86O#>VSvӭoc{k{ܲ0X*F{5RVeR>n5umyarrp?ZQ4^筥ٮOqǯ`kg/icvoK(ffYFHNrl{8w8(| qJ)^ڼb+6AƱlOqhl_ բy~ ڢKjR18qXʝIꉩ[Rl7*PM1;r;}汕9\[ԭ*{P[۝JBd}}+:+E-F֊ rvqڄ\*EqŸuRBNtΌeufx^ҽr펲\R΋8q\K+5ԩUR$FՏDOp7-b?WyS{Y^ A+qo'¡[!FNG^9)ӽGsRPjN H;\i*.ݜ38kOݩeF"*Qd734Lp<a JiGS Խouj~^EC8}Nqib89EYaMүXTxQem2jG_8Vp4kI%]8# ~eZt{\nm?ESQ}<}'uޭ«SdrՄIX}RQWL\VJ7:J4sB7o! !, zvVѕZ:zMܖUcYd;v{dc)TJZ#p\M5~h: Qv受c]*\eNoBk6A&\F`{zkjoN;]ۻ AǖU?Z#&VfxF2MiK{UW}й/2(vvT}.zRzF.iuf匝2 .cH#BX/< Mчqѧ$[!G(BofSIn3r[z5oڬC7˞3~=)U[B>*ںi}ZG̍k C޽=aU]9!j)sZ.p'e=H/[z1yUnèt{'}^5ҚFYrp#qOde{VBwvz-s\-ޤqЏ\ݏ<]v6FU#˧{PϚVܹ^9WUV'~8z5%SG2h8Ԣ_2nYk2ڭ9r5=jxVVlBF+rw6Vr2PGCF+2,N7޿L3 U?qOf37±j4,o2fզ~z21Zo85;|:z|Y=otCu XJNi$ ~2~U?.Va5^]nKFAq2789SNqV[4ԣRMIG[[MFX2ZE (嶰ӽoCɸc˕IѧxjskkkZi!>X\B:G-IӂG4_vl }?1Igo^H=dZqy%W9$`TzXuNW<֩[w_K_K[hw^8ay|'Lx뜌s^mϕ?f+)G;+vm=:H5R̫!2X34Wnץ;k}ꔽehmGPH2KT+kpHpz6ѧ/$ewok? 
Jh)6;}GxkKB 2ZF;Q(`u=+YU8=hmսvԯ'-Ya_-<7$9\Qte[i.o]hq$1( 7ec\vAϿx8ZZW>8S+KdV?cH2%Ts0{WZ}~]O;յ8cG͇XZ.s}1S|;ڹ}?wM/͊omN7˱ 'wp]SӽI\8`I':k-wRatc5LnXg>.3F!F+isdbwkK]j9 ;jUբSߣ$}.8$6sZQ8E-z(ѕC9bf5SONeMKF ѳ|1ҹw<{h]v$,0}F1[bkSO{-JU$7S2!4Y9⮎9sgۣ;>Ů]նՌz=1*QolZ^l"#3ޮ/*SWITTn͞ys;\3H6WRp&kمE/(^Ǝѯ,6evI$cS\K)5mE6zon ~U`;sC6^Oa΢*<mpӧ.Nnu^N,QC$VaeiksII5q4h*I'w.N9lŒrn~>Bist<=ʄ%=M\vy]nci#Ff9~BW `qWIYkIƥi5v9FTMw'>8_ Ԧf :;\ B+?47&8)ӨvE0չU#oN$/;+m%xH'$$4x_[lN[R{ݕ[tHo|\L[]Mb(EMJ˭1u'^v.n<~nH>0.,'i,:(ʜ5$_O D16\$X9 xMƘLFNܷJ#lV6iAA>W^g 156r ߧ_`ooeYӖEohPR1"XiƦ*/ukecwGfDʎ3= RN Su9wZI$[tdbh']i:KYƏ+[KVbA:Ng]͍׊m~g]>~U(V?9m?.Me*7wzo'7,Jiv"X[MM*6R5"EZi/~nXF[g}盪%5yn[\jvjKf& c1Vjj2)Zu}2-rnSi[ԣ)Fz=')):mmNodwLΜ4VN?f<LjT)Rn[߄ݝ>VRq:7jSkUu"m 4qμ 3r{VN=OźAi<0dkq3]E*ѲYNp"ݽ$ܮqzTiQz#֧F*.Po9IEiErvAWWc޴[n㿑g/ؤlyǂ+TU%s:ZJoUҼ)u Xџ{Ev֦jg̈́6wOWs׃oYeVd~eSGCdV_X |O_ùatܺ-XhP{R)Ӕ[oS0_i9=[oXUw€@ӽxNF'ݽoeu,?+~py8oƾ_0'Rv4-}nUյy&)b_9<^#(|ڷ_b%{ ]5@ܲȭPsåyoG_#ϣ9F1= .nK־V$Iĭ7s2UGgV˞WVQ__oYU7쬮(۞H8Z8:\Z_>ƒ)Qo<1/`%[s>X?OA^$tc\U }.mRGq) nb#$F2Œ- meK7݈;N\ܩr[Y$NsK`kFP4m79fxY$w-/pbx'ʹ؊+]OXĠ#kyOc5%xU9e{j8Dғrfo>,qוYCٽ-׋nǓx!t<-mf~b[8sWc-);u3ѝFߩVfx]}nf^ЉBH'F.19VtԊ(v: 1[U#eZ r+-g;]ʜo8[@WoI x}OSNnȕ:e` QW*X՝pqҴkTގ1J>)&_Bi(^vǿ:,UZeg 3nr7I$S>LW͗ /sluۧoR+H1kj_zޟ/`FJw6)KҶ(_ ޝ[y#Xf'JQ{RU#+FZ0kE"! W5QG<9Tؒ%odm~y9jkB3ksZ7 yjuT{{{T^/K[ [Bܫy9ӰzTklULEJrM=6\,aRϿzŰr/ϽrT[[#B4rhvKyddqӨ)ՋߡU1F3bwnpxc^\LrT*k VkYUfFT9+cҼ9_-d;_[6BHg]ݔv>YN1UJJ跧xe`fYweV"NT[ obG˞98SYzik4H|hbMNNq=d|uN+/W<w`|zZ4~/ݴ~m%Kfs62fLP={=+"RȎ,.ň" hrçOJҤcR#ͣT`.mD{O$x$ jiӴyu9UNd,I7~5f:0*]:qm=NZhZyĬ$nw1zןVY(D;NM}໮\+#n* _p@#ևN~וJm1[iwyyaH摭#dUe':մch{F9IGkl5FI8'G^aa8J(ռFM,$!ScB%eԪud)꯷4r]jxN3\N[kF{~^dkyj˴y9nzwC枻éw U e~_2&e%NH={g*T;_9+{Iyh pY|l7]1KRGo=3=l^3튟i ze(ڧR䔽snñ+z1RxuwnX;'c*V@ r{y8Z5".Jjɥ KuYmlVH=8FվzxF*RqZt}tGy{ sFHVa 3g 4FOOSAIFO;|V[aU~e-JkAGޞgF#uՂ]\,{YW]@nE(Ɲ}S*Y}^FЏ͔*]GfRk_SR߭Ē[( Y/#v#+J*(MQi.5&StFCHy IIiU(Q}!hϘ>iu5'RɲaƑlFG cemB6W^jR1K{-OSOafP,qܪ*1sRDNƵ9TI$BG)U\eUQP|\RZyeGb6fWwӧlsGҗT_O,Aߧ$my*! ft'tp]uJ6+\۲`\.> ::]fK;q_W\ېD~S<;]x +*>O[Y5ygl]U! GE\Ν `}ַIr:t,gN-T]FvU w+3 2Mmu%N-{aU^//-rvI Vҋ֧Z 5f"TOmGV9 x烎Խ^QOmV)no-ci|NR,pA8qtm3=obf{Ge<YU}zeK~5E-6n/S R58.Oufiy:m;[FLY{E=hŁ>cרji&:V.yaws/mB>?2yTF۹q0F ʿ.1+hԗ_[ 1%3{WE:ND @xOS*6k;vdQ¶܏ʻ!V18r&&[ڪɵzqڴZ" e<4C%r$Vѩ:Uyd.mb.4/ROyJ6n1_Ŝ֔*7Bḥ%ws[~]cں)^6CDOnLLЮr_\V¾}N;K=԰FX߽0R7*Rc>REi͂޳i&lgsyStxc*0{W$TI\{#2\1N݊g[ӭeUEsF0^NK^Ē,ylɵcCZn^&"X6zIF_JeJ.a >lrFX:jΜnjΓc{^91,}%J7Ӎz$.D9BcOzu99%$5nōð2U!]ymagЊWE.@ѡlÜxpQFmimܬTջ}1lZ׽l}_J2ԍgŰ]I6?+|ӹ>kWWkKyPtbwfve3bݾҽJu&U>]=J3^Y@ZîPAd'p8LwrJ2r/G{>ƠmA3+L#vr_ԕiV׺CQ!#Ld1Fk"RoFJͽlv>e=KՋ%ȁTf۸;q9刧ROrMY~e{ǻy%kՕn$evLx$ӭF|~2yZ}ufkXTߘsW4Tu*4Ԝ-ԉa$f[rOߗJ>i^KVtU*)kMĖUV+^QG N:Jmz;H"mF~U @tW+:feS]]J%3Jm2/Y p?>ʲ&1QsIU-lA,lUzzx+[7z}]S͸kNiZJXg7~ױY(l+zT9Z,\*rN־+r{t^դ9h]4ժzG(«p;@Lc޳XKnSI¯^VL~ùlB}T)Ɲ>tfΩK#3/5+CDg.hT04"6 [2} }4^'hLp_lc qDkSUU05}uϢfiu+=^?ĿjhbǠl`p>#G+i*MJvG^ ʷ)RrwHa8zRH)T$G~Z&/Jv9cq]- iҔU~^6$O' XOM{+msyP7; |4m[ܤP`wOW jw8~k|+E yj甹{53V6+826>s)YVssTU9CXF5 vLs]8hwStԓխ|jdY3`8x zqiNw؃Gif1 Ѭs7NT߳_>3AZ캭>XXF17!_X=qXJJqN75>p,OdF $cN+GU9:=s2I{xUmNH g<]Qq_aaq\Gc#Tzzqf/+cR` n6Iwͳ˓;_NT3\r)'b(ye[mR7O_z>\8*T~JZ3VYpNvG)s{Kyo~Z=2219mwmN5MqK}͸SM Rӱw;;OMZ uH$ "UQÅ=O=Uo߯NMHcbyoO iG\_M +$ &Baz?FRAĝ:ONyu+͎r"ιݍ2Qo[m3¥*}c]mOj-kpm^Bn?B NiFsOɿS1u*V,ɭmW1LsoB#LjFw־s\}߱SîWMGE"4Z?;Y>8>r>փNsrjֺwW<]eFy<Ȭ}v(MsK >[F}?PTeU`C鏭pӔjb$zm%.JIzL_GۨG&y{*8^^|*ծO뷡S.\Uyc},aʐWo=kL&P$ն[Þ\-`5A l}UJ4RUއKPjn䌋Yws>՜qR'nI;y?qvUv'ێT8Ӽnzy\[{M +/-׭.U)}uF 7˵:ت+jOO#HF>B]Kֱ=,~~<œkt=F8>Z׳9%{mm;߳?:|=ocX|89n=A"R5fԟ*٭{tAlƫ;5@Ӂ}yPV~gFߦ|j_AgnUM\Tg(5zyIX {[ܲFF=}^"U-cdUiKkw>  CTm H9?ͻwkUTxe$A*Č;q۽sjQ;ʤ01_7о367s4{I}acRW4Zk{of hB߾ۆ9*QN:vR-v_3qe{kIrΧ>EBN;1=ɯB2N2f5ʜ[j;zuҋXWqd힜WLqRk9EEP;ug7iͺtE;dے_דWR2q[u]aTPpgiR. 
@bytxY9EnFVft(2^s9?^#*vIh&NǗUޭH-cr<+ar1۟c˖9)2Ny4%-۶qgtWU,D4xNXNIexhrma,~fc˥{X|;[_3é_{^?Q浩Y4fsGJHXF˞06?NM4s>$L"_ 6OF#89T} yW+n[/X3HJ| ucQѕIJ[XT+M}7pW{m˳zJU#(?:=v))5ЗwUfw&xԓpvG`Wڒnkț4^ BT9} c_V]_Уʧ4_3O_#Z\|+$mY' = L>+4G=Kmou8:5$]~GX⼚O16Bp# zWAJ]^C "XuNX@]kuqB:Vۏ1s wmUކ"xҌ/+iZOeN%RY#鴃Ϡ<4F.***1Sc6($-tZ6tVVL1~`F9WǘRkz8kuQʝT}MV\o>R{l̛7bN\$/iZUk[~ѦzY>37NaiYڗ 7#w 8 rK3ܩVRm=6;M |/p/.MG ߳{/ .;:z(k^گ-pYJNo}W] Zo 𮅢hyjV;J:9jh%m-30ìW#+/#.?oǺ8KϾz]rS+_Um̰,Z[kw[/>TIJFb^2}{NuSp~J%Cܞ(оԋCfb@?>9Jy]R8V4NSKw Jr 8Pg%.UW涫oNiotD%|qL'N4}f}Ju*F3MYyXr#rq?J{j~ 08{N|wvf֤ ky: t'ھR40߻zVQRi->~gbOHc[xn ߂9x~")7wԭ'^y5,;FksJ|U*s=K ƀ!nLܣf6e&6g3eK'-ִ>FѝNX.hl[r!X5 Ӂ5QHĈ:|zzסx{N\jp%for(>_iQ|eYWnz\:RD5c5ЯgF5}r>]n!$0}}:u`4}^4۳gh}JIw3V\zY,[o#yXN]J*Ne:6Nx6V%Ѭ}sνUHN2Q׃=nS(+7/1 ys*9+֍SўO\Hl[s12{Ə Z\wG*+} ƫ{Xoyû yq{;I%}|))4Kj=nyDhkmn9ܬU$)pKk1TTT(4߼՛vo>s1RhTOt?C񅾟xPHiWsW߀F6nM]/6^eSN}5OODm{>&ƼCdmWGaO~s.1*qoc?wM}v{0ʵcPa5vCn|˂ͤ5-Eo pƷ/Q7A#{oW$axൖ&-an Mpⱕ%Xޝ_gxMhI[![ibF>mڼ| 2~ ѻq[_Ɵn&6T</¸p9;6rmYl G$qF2nMw:jy;-Jq+_|;Dȭv+0XEZ&7f||^d~N p9'5'Nc)<.RA=-s}OzεgE5|TcU|Ч%Y$3q~/zMzK7k7SYXܭ*!$եby9Rr[z֚ڣO0/y2s@#I^0-tM')'ob0: sy*RQK?m[xVW92)?UߣvG j-BO uɆ*9U݃ܞx#QߍifTp߯Oun.YI$U *F>leeSyK|Χ6i{(ɫ8#&cO}Nw՛_ jfKHտtYwy'۽Dь}Y|)UPݽߡ5Py*lb?œ!g,*SWH3|ŏS](EiiЪVvn?h8?{aRN5ѭZMY2>GǞ⦞:#SJVOO7>.Ӷ@+-)$h;Q4Oϻې0r}:N~ ֥k^S;O 4dn,rCarrxciz#Xꑦկ}cGVvfn l19 +1⽇+bFcJFF[I`3T&2~,Y=dԏʼn3Gi,DfsYVJ_Γ~akif],:'4TF~f5ǗsIPEYIhc$k=\[~[QUk]C$Y#FrĜz+UnstkagPA8=kEITvsJ1<I5;ʬ m|6iqipYJY__;R妤o@L^A87%iq6G PdmHpF3{ֳ|gR1fq$դXWl69vSq8WeYQ=y'C.?xbikB @ ަVЯiR۩ p1csۜ~N2V{{(jrEm˽0rpzzg)Y#Q`Y%}Wa206zu=k1dR}7NwU%X+JQ~v޿y Ԋv3.yr9c]>Vc~Oq^g^o_ƝJu/ylΌF冮rk,K[Hx,YIҢ4eqҠh4{NqҪ&޺VQIfQfܳ7en*Y8iF59lS[_qen9ݠdc9JvKdrS.quna8=1U)N,[3zq}__?Nņ4l<$,-qls(٭wE_f+ZDa4 eB8~35k;[3ddf vd'tF;5_-Yr9Yݯmq41yJP\5Qꙝzn߼H1\]*65ǮSNhelB}mē2U wYWEBYI{׍R^ߑ*yw]o鷩X GGmUՎbyޝ1U |iG:RJkm${vVG@OLg9{c5%f*}ֺyK[Kմ16Əo u#zrNX%BM6䯾{;"+&AgϗEc1J3o*ҳ⢡Chk^ e\HYwn=h6o]J1{ I|DhBT̛"8VN쟛#eYc6 Ubvn䑎OCY2TH柧zE2m9^喫qZk؊7vK͜r1dc < <%̪Q\ZGY; Ѫ=+rRU=+o2+͢9$y_+F {Zքg9]J8uD[4w)1 f[IssƌjEnM7@GΕ?m7sMM=kK;\=z֫R3MX8vG u-Wj@#hSrm?CV!F!VyUnbI)C$Q<|0!Iǭtz4umF^ڟ~'%s QZ:ql-a(i:?ZjvGJ.qv]RX$xѾm}Oj[3У1S0y%8ye.cӚ^QIVs&c\$P4k<>{}e)+D+r[)a7ϭmF+|2 uFkb m)vq~=EtS.2ʿ4Ob9w3d6o-p9hƝNԍIEZh° nk~TkQm!$idggir4t6Jثq8&_j2v-G٥#ex+©#'>w؅jNvTW2*[ܖH j\{G'NR׻<钬:ܱo{4ւF~b͚\;cZ1tw+hGS:b3I=NZTSUqqKvOu =Wo:5%Rֱju#jKЎ [o-w+`@n`zs2[j1Q\,k{K4GCf*Qm;})SOJbpJr2,3 θ>tƤc=~#^y]ѷ%fm^.u#hRVYJ+hET(,O ydKrp.y䑥b5+epT\ҕOhTeWx7]<ʫWRYsMk.o/˞NȆF˲ vv=~zN;U^%1 2}ƺRRE{iJ:-ŵi-5*ǽw?(duw1:a? 
PG͞}ֱvcV*Au0Kw ]̶)mi|Z7U]N,FZ^۱vI NiN\ҧSrӮa,aHJc?ڭJL1o#s0q^>N[ij5bF#cx{?:5e(tjRu:MzRF[+YϧjߚcoOފbe ofcN~T䜙֞[YA=qgN^G^1ŻE>J4۩+ N~c.U8zC/*|ڃ{TrqԫNT,$xY֕IJNX+J,*[*#<~SWrh'-oѼqw`cάɦΗ_[;[g0_(RҔQqC!WӒ=z(W]gR3'r1g =@S}Jغ6VtlhƜ'wAQ-ostr4|21UB vXPT䪭oHfn>CAܸ~jN4ʗoYYm_&l+)^R3JZtsYJ2 S5~3"-cs!A#q3\5ϻ{Nu#NIbF32 s -7;sێǽMZ rR{tQ6KfѝYͻc+5N.+A-i_D(X8ϳ#m?:(gܪGpsKu;eЁ4Ӆ>n~J3iJ2J[6YNoT}3ګ #L*0B]%G;7̪qK{Y7&pN17H4vlnA= JU?y#l- S\˲vbEDExlH5:ӻ2Kjg5Uߕc `=On*~w2~$YUcy.ߺxVIh7%ݤ$KhEejk򢃟dsVNm<0[Il:C-^Wti_o&KVFG@slJZӖNť@S59i9[[n1;#=r}{uRqˊzܐwKm cg $̙nBp3;T٫RݟrxQo5O~xNcYF{~dQ<яNw9==+H2ʥJŅ F,͍>{R9QϚL8@eYx }94'F1gaHrfS@9 Ti{OVS&vA$w*{sc[{V?u-2F3F7)d{TʔV6^R(tܧ,$z>g"atd/!{Xp/]=42V] I "2ɃswNҦi.[KI>Ub_prp+J*UcC|rfR:Ƨ52/ihhcue̢5,sWitW.?simIȬ{=ֲf*ʍm [-'\(s~p)i)6k $H.T]$#8>kkFh|{]E?}N9#<}(8H&ثmqqs{4F,>f ~@t'{֓:-MÚ[_IX37EQF2Щ/SW,aqWnzz"Ɯu߶ymn'w'^tR3]sUP$ou vl#'3Z#7.hI\_o`ֱV39T5%o%giۘ$\e2MeƷpjp?/8])To3VRM;G7/rc3DwW:U}ɭ h LcY3:N{rwX:T>@#~릞K~ '6֏GHNcjDP# s돭tRNdc͗+b4W1bvV1'>T5+ ):7ԯu\a9 /.em6~ΌSYэ9(bfw;m9{?ֳjr:#ZZ~RjA#mϖ9h~Ӛ[nRi;7e v".+)A﷡I3۴#%j1SK--ߚ,+ar۹X>㷐{7O@URQ,ҸToow[Pc"kXm#n_:6c ߞ}9ӫ V*&ӔZ {Jǻo˜]*jwL֥<,k7Q+CѶWolb}UM5=өyB,-XC$\a=_vZq,T$wS+hfȖX5|ݹkOMG+թ'M>\Cm`F98Z_ qH'J0ʕk<$~qVU~Pz ٹEu}s֔/ m5FS*Lo}~T;)",I% lNWe5ۢL$c悷5^ 2ݻ$`+NTxlFkV nz.w7u#eQ9JSWmo/ڄEfv"8y;bm צ5cRo^s{`SR\&=[HZ*pn⡸ϱ;M+rN*tN%WˢA3dc*s|"9ֵzҕ7۱X5q,{pF yxkC$IQݝ^dبQ<׷_Ik-<ןR c Ӵ3G9iYT1{.3rkaD\2c'j 8Qjg.{+ic:t*|^ BFo]v#o8?kjqN3)RϢ,3E"v9rG<x̿ZKAk著&PUbGdwv98s/BJ:SFLD\^ɿ>}<29y!Y3P3aNk,WG??d1ң/eR+>wOg-!xH O-%e 1oK\Z&{*A][wWZ XWs_[\6eRMvs#3k^(@mElIe>\ya'w'@ǧa׺kuV󷟙P^.Oޕ[mŖ0(v6ZNT)pG_0Eg~9/=O-gWb iF~^.[N֤fI> \G-)`z w3%i#bͳ w3>Jw:p5Wfiz]0mg͇0)Nj{hiIk/之R`޲r9=}ǖwxbߵ75H.nWh]*%ÕN2A`rH꧅%k_[ju<2r+ReԬwc. w/$`x"wuUTKO?3;w.%WWKEݴǧP99B1Un/ަ*OV}w<$ g{WI+EJ# R0޳dDUu[],xMǫ7gqo4pcs4=t?(y=ڛPWMko4S⠮=iӮ>y v9]9TteMQ%i3GdMCDW0HR:#-8o. ypo?*gǽ0?jiz׶̷O2%^o֋^}:s}žrYȧ рPæ+gM5 qSFֺ>?~%ׇnYzr# 8x1{:9NZ0pyZ[4Ȳpxje)?ċrougVHԩ9㷯CH{Mg<5*J->Xk>hfkvV5(*E&Qz)#FZ[UҖ"fՙ4Wsyde3=وN%ǵMf~k)F{-EaL WqͽW~{8kQÌi;IEeT! 
s\rm}Qåi%ٱa)Ynr8J8z"ȣMv▛ع&Z>G+XSq9e%QG;L6գ҅Sެ%{0-^6Y#+>:u%U{H_:$ttK|5{E #cPy8<U[pkeXzݜKnIZG{lF8Ony<}fNU"R8nh8d۵luc<ݫS?歂bW>>Z:5$c|du冧 />i]O1ӧ[_uI?<5ʻ9'g)<||=L=G9|ǣR+B>:eR#U[f8oc>E]4\3i;oC}Z}o1TsԎxeMIǕ-5g01fmF65Rin竇NLBZ96fJ51y{shKߧ[X;}Es8\|5|Q{Kwգɓ;908M/N$M|7ƞ hD`4pg<瑜Z֌h+[{ߩb~7*kT> E00(2G]TJ'"'eFwNG+v.iTsW_hdzn;kJs4Q ֞܋;}$=mNu$Q۱8Z4q瞉lZܼmɩr,R8qc.W{Ʀ᥾7G JJT\d9dkEx7Zn6|\tItzbe*З-z6Og^0eGcݐ N#g<{{n"'w$ԥOgRm@x ['ɯVNTx}e\"mKo_Na=H/*e Zv &٥ʹ%A8=CEYo+ezE?w\c~u ޟl_j62sW^pGN4Un6s럆 T+o/!ʎGN;k] [19CUw=H 6wqma;.ﴀ:^4!~ӝ6J^V:]+GL*ԕ۞y?{#Gi$nRR ^75ԗ;@iw^)KV3K??|sW ̬_};{֜WDNwn:MbD+1]8'xTgi.^P~Tm*1buOyکr3?8heoq{mbIN V0L®2?T2OXktсG>TT^R me~m sS~fIF6{ZVzy4lFKHpFy?WYGF U&3[l3IS9l/m:1S_72@E4KɆ,Ğ>'v9%R٣|UvdO glڤq'8Z2c+8mfYeQ1 /A&3N>JJlA=~»W<?*ڜc3>Z؈9Z^E-nTˏ2dy c'Z8tua,,U%m]]zd׹=?Tݷs15WBGb~ºuc]U14' ,*7{=P)^r'b!1[T卬g8;r-^~mxQ I NTr^յkKeE6K{BRTcr߉TӔnꆼbO*nprNzb#MM$[༐/awO=l<+Ş0AzZ?˷?Ou!-69E2vLsY!]@X Dk9ToFLEINkf0?cSF-Mi5#g-zat n9 k?yFFSF"R,.ʍ/=Ol?*ucgianYoз/&=:wpI- Pm9_L=se% ұIW58_ĖHq6̡ߟn:n\?S͵ɩp#UiA99קVU%K˼-E$J+1^Lx^9V"[n/k-[Bx~Fd@9%CxթR>kqӕ6.'S'zxz[6:1$տ?KZ&Wyꠎ/P ==:VU[WymIь%xFQIHH`֤UݯtQO'NkVn򵵲~L֮XZ)-[ղ)l 9nzÑ\xV!)=﹅*~2w#ߗ"LU)KNgmJ.Z 1% Is/Fx5 Qů]{h6Ri6,qnF]sG<ҩѤB*1n֗_/!LRYb6lNN2=HJM757eh͕^9QW݋ȥVhhQ\$OHCHV=ؚU9ݕ%XkIJV5ЎAZܤNqE,%m#T9>z]mm G?$>,\HcVUFsuXҝ:| 1R-;Z@ *9f)}gh֌bMur9cM#/_KdqtTx%4gRWw=6dvpǧ$~_Jq*4)QݕLi,ː̳6sBPӯp'EVyVHt0 _ZWR2k#A36`763=ȩJ6'VUdW}-$ Y_$IZQԫ+7-#6mcxs~+J<;N*4]#̐*|k:jt_/|ۗp#sNo*[*I|NqR4)(،Ӧv7com0nf1:e3]*}煘bQRn:~3<=5F*+jg4,FJ*߯qJSMY\y1ZHI+9ƚz}̆"Z==J|y|(4)mܣnҬ~c_RA8{$INkI++nniЯzE@" UjT-$úr▽eWGOU7):IgH񍪹jV4j<[qxyTsIJȅ\t1LC 2N544v{Pغ3O52KҲΏ >^fCoFT}sʛQ<HoFe^9֪HKl(4Hnm [³:95.gºwodHz»~g2~HW3tةn^EKFf?쌞?UԟfBIEFfj6qkma$0F2  <ڽYF\Ee=X$vp @xkl=U.:З?2^B}g-ڬ128HӍj6z͗qF$<68t{h:cw.Xܯ]6B>9UToSRN-2t ޔu6[tF]R8V a>IVix

    dir/
    file.txt